test_custom_tool() — langchain Function Reference

Architecture documentation for the test_custom_tool() function in libs/partners/openai/tests/unit_tests/test_tools.py from the langchain codebase. The function is a unit test covering the @custom_tool decorator and its integration with ChatOpenAI.bind_tools().

Entity Profile

Dependency Diagram

graph TD
  97e5a5b6_9194_515f_387a_beb5009fdde1["test_custom_tool()"]
  ced23dac_81f2_ce6f_ba7e_95368f42c953["test_tools.py"]
  97e5a5b6_9194_515f_387a_beb5009fdde1 -->|defined in| ced23dac_81f2_ce6f_ba7e_95368f42c953
  style 97e5a5b6_9194_515f_387a_beb5009fdde1 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/openai/tests/unit_tests/test_tools.py lines 7–96

def test_custom_tool() -> None:
    @custom_tool
    def my_tool(x: str) -> str:
        """Do thing."""
        return "a" + x

    # Test decorator
    assert isinstance(my_tool, Tool)
    assert my_tool.metadata == {"type": "custom_tool"}
    assert my_tool.description == "Do thing."

    result = my_tool.invoke(
        {
            "type": "tool_call",
            "name": "my_tool",
            "args": {"whatever": "b"},
            "id": "abc",
            "extras": {"type": "custom_tool_call"},
        }
    )
    assert result == ToolMessage(
        [{"type": "custom_tool_call_output", "output": "ab"}],
        name="my_tool",
        tool_call_id="abc",
    )

    # Test tool schema
    ## Test with format
    @custom_tool(format={"type": "grammar", "syntax": "lark", "definition": "..."})
    def another_tool(x: str) -> None:
        """Do thing."""

    llm = ChatOpenAI(
        model="gpt-4.1", use_responses_api=True, output_version="responses/v1"
    ).bind_tools([another_tool])
    assert llm.kwargs == {  # type: ignore[attr-defined]
        "tools": [
            {
                "type": "custom",
                "name": "another_tool",
                "description": "Do thing.",
                "format": {"type": "grammar", "syntax": "lark", "definition": "..."},
            }
        ]
    }

    llm = ChatOpenAI(
        model="gpt-4.1", use_responses_api=True, output_version="responses/v1"
    ).bind_tools([my_tool])
    assert llm.kwargs == {  # type: ignore[attr-defined]
        "tools": [{"type": "custom", "name": "my_tool", "description": "Do thing."}]
    }

    # Test passing messages back
    message_history = [
        HumanMessage("Use the tool"),
        AIMessage(
            [
                {
                    "type": "custom_tool_call",
                    "id": "ctc_abc123",
                    "call_id": "abc",
                    "name": "my_tool",
                    "input": "a",
                }
            ],
            tool_calls=[
                {
                    "type": "tool_call",
                    "name": "my_tool",
                    "args": {"__arg1": "a"},
                    "id": "abc",
                }
            ],
        ),
        result,
    ]
    payload = llm._get_request_payload(message_history)  # type: ignore[attr-defined]
    expected_input = [
        {"content": "Use the tool", "role": "user"},
        {
            # ... (snippet truncated; the remainder of the expected payload is in test_tools.py)
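The listing above exercises a specific call/response shape: a decorated function receives an OpenAI-style custom tool call (a dict with `args` and `id`) and produces a structured output message. The following is a minimal, self-contained sketch of that pattern only; it is not langchain's implementation (which lives in langchain_openai), and the `SimpleTool` class and its `invoke()` return shape are illustrative assumptions.

```python
# Sketch of the custom-tool call/response pattern from the test above.
# SimpleTool is a hypothetical stand-in for langchain's Tool wrapper.

class SimpleTool:
    def __init__(self, func):
        self.func = func
        self.name = func.__name__
        self.description = (func.__doc__ or "").strip()
        self.metadata = {"type": "custom_tool"}

    def invoke(self, tool_call: dict) -> dict:
        # Custom tools take a single string input; the argument name is
        # ignored, mirroring the {"whatever": "b"} call in the test above.
        (value,) = tool_call["args"].values()
        return {
            "name": self.name,
            "tool_call_id": tool_call["id"],
            "content": [
                {"type": "custom_tool_call_output", "output": self.func(value)}
            ],
        }


def custom_tool(func):
    return SimpleTool(func)


@custom_tool
def my_tool(x: str) -> str:
    """Do thing."""
    return "a" + x


result = my_tool.invoke(
    {"type": "tool_call", "name": "my_tool", "args": {"whatever": "b"}, "id": "abc"}
)
print(result["content"][0]["output"])  # -> ab
```

As in the real test, the decorated object keeps the wrapped function's name and docstring, and the output echoes the `tool_call_id` so the message can be threaded back into the conversation history.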

Frequently Asked Questions

What does test_custom_tool() do?
test_custom_tool() is a unit test in the langchain codebase, defined in libs/partners/openai/tests/unit_tests/test_tools.py. It verifies the @custom_tool decorator: that a decorated function becomes a Tool with the expected metadata and description, that invoking it with an OpenAI-style custom tool call returns the expected ToolMessage, and that ChatOpenAI.bind_tools() emits the expected "custom" tool payload (with and without a grammar format).
Where is test_custom_tool() defined?
test_custom_tool() is defined in libs/partners/openai/tests/unit_tests/test_tools.py at line 7.
