test_inference_to_tool_output() — langchain Function Reference
Architecture documentation for the test_inference_to_tool_output() function in test_response_format_integration.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    eb829299_08fb_2705_1f69_d6d2503162c0["test_inference_to_tool_output()"]
    52a46c82_b592_7c71_f552_b6b987060948["test_response_format_integration.py"]
    eb829299_08fb_2705_1f69_d6d2503162c0 -->|defined in| 52a46c82_b592_7c71_f552_b6b987060948
    c6508ca5_245f_42bb_93a1_984fc946d4d0["ChatOpenAI()"]
    eb829299_08fb_2705_1f69_d6d2503162c0 -->|calls| c6508ca5_245f_42bb_93a1_984fc946d4d0
    style eb829299_08fb_2705_1f69_d6d2503162c0 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain_v1/tests/unit_tests/agents/test_response_format_integration.py lines 112–143
def test_inference_to_tool_output(*, use_responses_api: bool) -> None:
    """Test that tool output is inferred when a model supports it."""
    model_kwargs: dict[str, Any] = {"model": "gpt-5", "use_responses_api": use_responses_api}
    if "OPENAI_API_KEY" not in os.environ:
        model_kwargs["api_key"] = "foo"
    model = ChatOpenAI(**model_kwargs)
    agent = create_agent(
        model,
        system_prompt=(
            "You are a helpful weather assistant. Please call the get_weather tool "
            "once, then use the WeatherReport tool to generate the final response."
        ),
        tools=[get_weather],
        response_format=ToolStrategy(WeatherBaseModel),
    )
    response = agent.invoke({"messages": [HumanMessage("What's the weather?")]})
    assert isinstance(response["structured_response"], WeatherBaseModel)
    assert response["structured_response"].temperature == 75.0
    assert response["structured_response"].condition.lower() == "sunny"
    assert len(response["messages"]) == 5
    assert [m.type for m in response["messages"]] == [
        "human",  # "What's the weather?"
        "ai",  # tool call to get_weather
        "tool",  # "The weather is sunny and 75°F."
        "ai",  # structured response
        "tool",  # artificial tool message
    ]
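The keyword-only use_responses_api parameter implies the test is parametrized, but the decorator sits outside the excerpted lines 112–143. The following is a plausible sketch of that parametrization, not the file's actual marker; pytest passes parametrized values by keyword, so the keyword-only signature works as written.

import pytest


# Hypothetical parametrization; the real decorator is not shown in the excerpt.
@pytest.mark.parametrize("use_responses_api", [False, True])
def test_inference_to_tool_output(*, use_responses_api: bool) -> None:
    ...  # body as shown above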
Frequently Asked Questions
What does test_inference_to_tool_output() do?
test_inference_to_tool_output() is an integration test in the langchain codebase, defined in libs/langchain_v1/tests/unit_tests/agents/test_response_format_integration.py. It builds an agent with create_agent() from a ChatOpenAI model, a get_weather tool, and a ToolStrategy(WeatherBaseModel) response format, then asserts that the agent's structured output is inferred via a tool call: invoking the agent yields a WeatherBaseModel instance in response["structured_response"] and a five-message transcript ending in an artificial tool message.
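WeatherBaseModel and get_weather are defined elsewhere in the test module and are not part of this excerpt. A minimal sketch consistent with the test's assertions (temperature 75.0, condition "sunny"), assuming a Pydantic schema and a langchain_core tool; the real definitions may differ:

from langchain_core.tools import tool
from pydantic import BaseModel


class WeatherBaseModel(BaseModel):
    """Hypothetical schema for the structured final response."""

    temperature: float
    condition: str


@tool
def get_weather() -> str:
    """Get the current weather."""
    # Matches the tool message asserted in the transcript above.
    return "The weather is sunny and 75°F."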
Where is test_inference_to_tool_output() defined?
test_inference_to_tool_output() is defined in libs/langchain_v1/tests/unit_tests/agents/test_response_format_integration.py at line 112.
What does test_inference_to_tool_output() call?
test_inference_to_tool_output() calls one tracked function: ChatOpenAI().