awrap_tool_call() — langchain Function Reference

Architecture documentation for the awrap_tool_call() method of the LLMToolEmulator middleware class, defined in tool_emulator.py in the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  ab9fc9f6_33ca_e973_875f_7e92eb02be7a["awrap_tool_call()"]
  a4266996_914c_18fd_7063_10ef49e72ec1["LLMToolEmulator"]
  ab9fc9f6_33ca_e973_875f_7e92eb02be7a -->|defined in| a4266996_914c_18fd_7063_10ef49e72ec1
  style ab9fc9f6_33ca_e973_875f_7e92eb02be7a fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain_v1/langchain/agents/middleware/tool_emulator.py lines 159–209

    async def awrap_tool_call(
        self,
        request: ToolCallRequest,
        handler: Callable[[ToolCallRequest], Awaitable[ToolMessage | Command[Any]]],
    ) -> ToolMessage | Command[Any]:
        """Async version of `wrap_tool_call`.

        Emulate tool execution using LLM if tool should be emulated.

        Args:
            request: Tool call request to potentially emulate.
            handler: Async callback to execute the tool (can be called multiple times).

        Returns:
            ToolMessage with emulated response if tool should be emulated,
                otherwise calls handler for normal execution.
        """
        tool_name = request.tool_call["name"]

        # Check if this tool should be emulated
        should_emulate = self.emulate_all or tool_name in self.tools_to_emulate

        if not should_emulate:
            # Let it execute normally by calling the handler
            return await handler(request)

        # Extract tool information for emulation
        tool_args = request.tool_call["args"]
        tool_description = request.tool.description if request.tool else "No description available"

        # Build prompt for emulator LLM
        prompt = (
            f"You are emulating a tool call for testing purposes.\n\n"
            f"Tool: {tool_name}\n"
            f"Description: {tool_description}\n"
            f"Arguments: {tool_args}\n\n"
            f"Generate a realistic response that this tool would return "
            f"given these arguments.\n"
            f"Return ONLY the tool's output, no explanation or preamble. "
            f"Introduce variation into your responses."
        )

        # Get emulated response from LLM (using async invoke)
        response = await self.model.ainvoke([HumanMessage(prompt)])

        # Short-circuit: return emulated result without executing real tool
        return ToolMessage(
            content=response.content,
            tool_call_id=request.tool_call["id"],
            name=tool_name,
        )
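
For reference, the snippet below reconstructs the prompt that awrap_tool_call() sends to the emulator model, using the template from the source above. The tool name, description, and arguments are hypothetical stand-ins.

    # Reconstruction of the emulation prompt built by awrap_tool_call(),
    # filled in with a hypothetical tool call for illustration.
    tool_name = "get_weather"
    tool_description = "Return the current weather for a city."
    tool_args = {"city": "Paris"}

    prompt = (
        f"You are emulating a tool call for testing purposes.\n\n"
        f"Tool: {tool_name}\n"
        f"Description: {tool_description}\n"
        f"Arguments: {tool_args}\n\n"
        f"Generate a realistic response that this tool would return "
        f"given these arguments.\n"
        f"Return ONLY the tool's output, no explanation or preamble. "
        f"Introduce variation into your responses."
    )

    print(prompt)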

Frequently Asked Questions

What does awrap_tool_call() do?
awrap_tool_call() is the async tool-call hook of the LLMToolEmulator middleware. When the requested tool is configured for emulation (emulate_all is set, or the tool name appears in tools_to_emulate), it builds a prompt describing the tool and its arguments, asks the emulator LLM for a realistic response, and returns that response as a ToolMessage without executing the real tool; otherwise it awaits the handler so the tool executes normally (a usage sketch follows this FAQ).
Where is awrap_tool_call() defined?
awrap_tool_call() is defined in the LLMToolEmulator class in libs/langchain_v1/langchain/agents/middleware/tool_emulator.py, starting at line 159.
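
To exercise awrap_tool_call() in practice, the LLMToolEmulator middleware is attached to an agent so that the agent loop invokes this hook for every tool call. The sketch below is illustrative rather than taken from this page: the LLMToolEmulator constructor arguments (model, tools), the create_agent middleware parameter, and the get_weather tool are assumptions to verify against the current langchain API.

    # A minimal sketch, assuming LLMToolEmulator accepts `model` and `tools`
    # keyword arguments (tool names to emulate) and that create_agent accepts
    # a `middleware` list; verify both against the current langchain API.
    from langchain.agents import create_agent
    from langchain.agents.middleware.tool_emulator import LLMToolEmulator
    from langchain_core.tools import tool


    @tool
    def get_weather(city: str) -> str:
        """Return the current weather for a city."""
        # Never executed when emulated: awrap_tool_call() short-circuits
        # before the handler runs the real tool.
        raise NotImplementedError


    emulator = LLMToolEmulator(
        model="anthropic:claude-3-5-sonnet-latest",  # emulator LLM (assumed parameter name)
        tools=["get_weather"],  # tool names to emulate (assumed parameter name)
    )

    agent = create_agent(
        model="anthropic:claude-3-5-sonnet-latest",
        tools=[get_weather],
        middleware=[emulator],
    )

    # The get_weather call in this run is answered by the emulator model,
    # not by the (unimplemented) tool body.
    result = agent.invoke(
        {"messages": [{"role": "user", "content": "What's the weather in Paris?"}]}
    )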
