test_wrap_tool_call_caching_pattern() — langchain Function Reference

Architecture documentation for the test_wrap_tool_call_caching_pattern() function in test_wrap_tool_call.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  585999f7_2b45_44f7_bbf6_3c1bc1f5b588["test_wrap_tool_call_caching_pattern()"]
  e783c6bd_e3d7_7d3b_e64d_d062c5c12013["test_wrap_tool_call.py"]
  585999f7_2b45_44f7_bbf6_3c1bc1f5b588 -->|defined in| e783c6bd_e3d7_7d3b_e64d_d062c5c12013
  style 585999f7_2b45_44f7_bbf6_3c1bc1f5b588 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain_v1/tests/unit_tests/agents/middleware/core/test_wrap_tool_call.py lines 760–815

def test_wrap_tool_call_caching_pattern() -> None:
    """Test caching pattern with wrap_tool_call decorator."""
    cache: dict[tuple[str, str], Any] = {}
    handler_calls = []

    @wrap_tool_call
    def with_cache(
        request: ToolCallRequest, handler: Callable[[ToolCallRequest], ToolMessage | Command[Any]]
    ) -> ToolMessage | Command[Any]:
        assert isinstance(request.tool, BaseTool)
        # Create cache key from tool name and args
        cache_key = (request.tool.name, str(request.tool_call["args"]))

        # Check cache
        if cache_key in cache:
            return ToolMessage(
                content=cache[cache_key],
                tool_call_id=request.tool_call["id"],
                name=request.tool_call["name"],
            )

        # Execute tool and cache result
        handler_calls.append("executed")
        response = handler(request)

        if isinstance(response, ToolMessage):
            cache[cache_key] = response.content

        return response

    model = FakeToolCallingModel(
        tool_calls=[
            [ToolCall(name="search", args={"query": "test"}, id="1")],
            [ToolCall(name="search", args={"query": "test"}, id="2")],  # Same query
            [],
        ]
    )

    agent = create_agent(
        model=model,
        tools=[search],
        middleware=[with_cache],
        checkpointer=InMemorySaver(),
    )

    result = agent.invoke(
        {"messages": [HumanMessage("Search twice")]},
        {"configurable": {"thread_id": "test"}},
    )

    # Handler should only be called once (second call uses cache)
    assert len(handler_calls) == 1

    tool_messages = [m for m in result["messages"] if isinstance(m, ToolMessage)]
    # Both tool calls should have messages
    assert len(tool_messages) >= 1
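
The same idea can be reused outside the test harness. Below is a minimal sketch of a caching middleware modeled on the listing above. The import paths are assumptions (the test file's own imports are not shown on this page), the handler and return types are loosened to Any rather than importing Command, and the cache key is normalized with json.dumps(..., sort_keys=True) instead of str() so that argument ordering does not defeat the cache.

import json
from typing import Any, Callable

# Assumed import paths; adjust to match your installed langchain version.
from langchain.agents.middleware import wrap_tool_call
from langchain.agents.middleware.types import ToolCallRequest
from langchain_core.messages import ToolMessage

_cache: dict[tuple[str, str], Any] = {}


@wrap_tool_call
def cached_tool_calls(
    request: ToolCallRequest,
    handler: Callable[[ToolCallRequest], Any],
) -> Any:
    """Return cached tool output for repeated calls with identical arguments."""
    # Sort keys so logically identical argument dicts map to the same cache key.
    key = (request.tool.name, json.dumps(request.tool_call["args"], sort_keys=True))

    if key in _cache:
        # Cache hit: reuse the stored content, but keep the new call's id and name.
        return ToolMessage(
            content=_cache[key],
            tool_call_id=request.tool_call["id"],
            name=request.tool_call["name"],
        )

    # Cache miss: execute the tool; only plain ToolMessage results are cached,
    # other responses (e.g. Command) are passed through untouched.
    response = handler(request)
    if isinstance(response, ToolMessage):
        _cache[key] = response.content
    return response

Compared with the test's str(request.tool_call["args"]) key, the json.dumps(..., sort_keys=True) variant treats {"a": 1, "b": 2} and {"b": 2, "a": 1} as the same call; everything else mirrors the pattern exercised by the test.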

Frequently Asked Questions

What does test_wrap_tool_call_caching_pattern() do?
test_wrap_tool_call_caching_pattern() is a unit test in the langchain codebase, defined in libs/langchain_v1/tests/unit_tests/agents/middleware/core/test_wrap_tool_call.py. It verifies a caching pattern built with the @wrap_tool_call decorator: the middleware keys results by tool name and stringified arguments, returns a cached ToolMessage when the same tool is called again with identical arguments, and only invokes the underlying handler on a cache miss. The test drives an agent through two identical "search" tool calls and asserts that the handler executes exactly once.
Where is test_wrap_tool_call_caching_pattern() defined?
test_wrap_tool_call_caching_pattern() is defined in libs/langchain_v1/tests/unit_tests/agents/middleware/core/test_wrap_tool_call.py at line 760.
