
test_inference_to_native_output() — langchain Function Reference

Architecture documentation for the test_inference_to_native_output() function in test_response_format_integration.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  testFn["test_inference_to_native_output()"]
  testFile["test_response_format_integration.py"]
  testFn -->|defined in| testFile
  chatOpenAI["ChatOpenAI()"]
  testFn -->|calls| chatOpenAI
  style testFn fill:#6366f1,stroke:#818cf8,color:#fff


Source Code

libs/langchain_v1/tests/unit_tests/agents/test_response_format_integration.py lines 77–107

def test_inference_to_native_output(*, use_responses_api: bool) -> None:
    """Test that native output is inferred when a model supports it."""
    model_kwargs: dict[str, Any] = {"model": "gpt-5", "use_responses_api": use_responses_api}

    if "OPENAI_API_KEY" not in os.environ:
        model_kwargs["api_key"] = "foo"

    model = ChatOpenAI(**model_kwargs)

    agent = create_agent(
        model,
        system_prompt=(
            "You are a helpful weather assistant. Please call the get_weather tool "
            "once, then use the WeatherReport tool to generate the final response."
        ),
        tools=[get_weather],
        response_format=WeatherBaseModel,
    )
    response = agent.invoke({"messages": [HumanMessage("What's the weather in Boston?")]})

    assert isinstance(response["structured_response"], WeatherBaseModel)
    assert response["structured_response"].temperature == 75.0
    assert response["structured_response"].condition.lower() == "sunny"
    assert len(response["messages"]) == 4

    assert [m.type for m in response["messages"]] == [
        "human",  # "What's the weather?"
        "ai",  # "What's the weather?"
        "tool",  # "The weather is sunny and 75°F."
        "ai",  # structured response
    ]
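
The snippet depends on names imported or defined earlier in the file (os, Any, ChatOpenAI, HumanMessage, create_agent, get_weather, and WeatherBaseModel all appear above line 77). As a minimal sketch, here is what the two test-local helpers might look like, with names taken from the snippet and fields inferred from its assertions; the actual definitions in the module may differ:

from langchain_core.tools import tool
from pydantic import BaseModel


class WeatherBaseModel(BaseModel):
    """Schema the agent's structured response must satisfy."""

    temperature: float
    condition: str


@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return "The weather is sunny and 75°F."

With helpers like these, the final assertions line up: the tool message carries the canned weather string, and the structured response parses into temperature 75.0 and condition "sunny".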


Frequently Asked Questions

What does test_inference_to_native_output() do?
test_inference_to_native_output() verifies that create_agent() infers native structured output when the model supports it. It builds a ChatOpenAI-backed agent with response_format=WeatherBaseModel and a get_weather tool, invokes it with "What's the weather in Boston?", and asserts that structured_response is a WeatherBaseModel with the expected values and that the conversation contains exactly four messages (human, ai, tool, ai).
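The keyword-only use_responses_api parameter implies the test is parametrized to run against both OpenAI API modes. A plausible decorator, assumed rather than shown in the snippet:

import pytest


# Assumed parametrization: exercises both the Chat Completions and
# Responses API code paths; pytest passes parameters by keyword, so a
# keyword-only argument works here.
@pytest.mark.parametrize("use_responses_api", [False, True])
def test_inference_to_native_output(*, use_responses_api: bool) -> None:
    ...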
Where is test_inference_to_native_output() defined?
test_inference_to_native_output() is defined in libs/langchain_v1/tests/unit_tests/agents/test_response_format_integration.py at line 77.
What does test_inference_to_native_output() call?
According to the dependency graph, test_inference_to_native_output() calls 1 function: ChatOpenAI(). The source snippet also shows calls to create_agent(), HumanMessage(), and agent.invoke(), which the graph does not capture.
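
The ChatOpenAI() call follows a common pattern for tests that must construct the client without a real credential. A self-contained sketch of that pattern, lifted from the snippet (the "foo" placeholder comes from the test itself):

import os
from typing import Any

from langchain_openai import ChatOpenAI

# Fall back to a placeholder key so ChatOpenAI can be constructed in
# environments where OPENAI_API_KEY is not set; the key is only
# validated when a request is actually made.
model_kwargs: dict[str, Any] = {"model": "gpt-5"}
if "OPENAI_API_KEY" not in os.environ:
    model_kwargs["api_key"] = "foo"

model = ChatOpenAI(**model_kwargs)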
