test_tools.py — langchain Source File

Architecture documentation for test_tools.py, a Python file in the langchain codebase. 3 imports, 0 dependents.

File · Python · LangChainCore · MessageInterface · 3 imports · 2 functions

Entity Profile

Dependency Diagram

graph LR
  test_tools_py["test_tools.py"]
  langchain_core_messages["langchain_core.messages"]
  test_tools_py --> langchain_core_messages
  langchain_core_tools["langchain_core.tools"]
  test_tools_py --> langchain_core_tools
  langchain_openai["langchain_openai"]
  test_tools_py --> langchain_openai
  style test_tools_py fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
from langchain_core.tools import Tool

from langchain_openai import ChatOpenAI, custom_tool


def test_custom_tool() -> None:
    @custom_tool
    def my_tool(x: str) -> str:
        """Do thing."""
        return "a" + x

    # Test decorator
    assert isinstance(my_tool, Tool)
    assert my_tool.metadata == {"type": "custom_tool"}
    assert my_tool.description == "Do thing."

    result = my_tool.invoke(
        {
            "type": "tool_call",
            "name": "my_tool",
            "args": {"whatever": "b"},
            "id": "abc",
            "extras": {"type": "custom_tool_call"},
        }
    )
    assert result == ToolMessage(
        [{"type": "custom_tool_call_output", "output": "ab"}],
        name="my_tool",
        tool_call_id="abc",
    )

    # Test tool schema
    ## Test with format
    @custom_tool(format={"type": "grammar", "syntax": "lark", "definition": "..."})
    def another_tool(x: str) -> None:
        """Do thing."""

    llm = ChatOpenAI(
        model="gpt-4.1", use_responses_api=True, output_version="responses/v1"
    ).bind_tools([another_tool])
    assert llm.kwargs == {  # type: ignore[attr-defined]
        "tools": [
            {
                "type": "custom",
                "name": "another_tool",
                "description": "Do thing.",
                "format": {"type": "grammar", "syntax": "lark", "definition": "..."},
            }
        ]
    }

    llm = ChatOpenAI(
        model="gpt-4.1", use_responses_api=True, output_version="responses/v1"
    ).bind_tools([my_tool])
    assert llm.kwargs == {  # type: ignore[attr-defined]
        "tools": [{"type": "custom", "name": "my_tool", "description": "Do thing."}]
    }

    # Test passing messages back
# ... (64 more lines)
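The excerpt ends before test_async_custom_tool, the file's second test. Purely as an illustration (not the file's actual contents), the async counterpart of the pattern above would look roughly like this, reusing the imports shown earlier and assuming custom_tool also accepts async callables and the resulting tool is awaited through the standard ainvoke method; the my_async_tool name is hypothetical:

async def test_async_custom_tool() -> None:
    @custom_tool
    async def my_async_tool(x: str) -> str:
        """Do thing."""
        return "a" + x

    # Decorating an async callable is assumed to yield a Tool as well
    assert isinstance(my_async_tool, Tool)

    # ainvoke is the async counterpart of invoke on LangChain tools
    result = await my_async_tool.ainvoke(
        {
            "type": "tool_call",
            "name": "my_async_tool",
            "args": {"whatever": "b"},
            "id": "abc",
            "extras": {"type": "custom_tool_call"},
        }
    )
    assert result == ToolMessage(
        [{"type": "custom_tool_call_output", "output": "ab"}],
        name="my_async_tool",
        tool_call_id="abc",
    )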

Domain

LangChainCore

Subdomains

MessageInterface

Dependencies

  • langchain_core.messages (provides AIMessage, HumanMessage, ToolMessage)
  • langchain_core.tools (provides Tool)
  • langchain_openai (provides ChatOpenAI, custom_tool)

Frequently Asked Questions

What does test_tools.py do?
test_tools.py is a unit-test file in the langchain codebase, written in Python. It exercises the custom_tool decorator and custom tool support in langchain_openai, and belongs to the LangChainCore domain (MessageInterface subdomain).
What functions are defined in test_tools.py?
test_tools.py defines 2 functions: test_async_custom_tool and test_custom_tool.
What does test_tools.py depend on?
test_tools.py imports 3 modules: langchain_core.messages, langchain_core.tools, and langchain_openai.
Where is test_tools.py in the architecture?
test_tools.py is located at libs/partners/openai/tests/unit_tests/test_tools.py, in the LangChainCore domain and MessageInterface subdomain.