test_structured_output() — langchain Function Reference
Architecture documentation for the test_structured_output() function in test_chat_models.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    08b08bc9_1af6_66a8_bd1c_4a28541c8705["test_structured_output()"]
    71dcb56e_a445_727d_c4bb_5dc733f24038["test_chat_models.py"]
    08b08bc9_1af6_66a8_bd1c_4a28541c8705 -->|defined in| 71dcb56e_a445_727d_c4bb_5dc733f24038
    style 08b08bc9_1af6_66a8_bd1c_4a28541c8705 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/ollama/tests/integration_tests/chat_models/test_chat_models.py lines 62–112
def test_structured_output(method: str) -> None:
    """Test to verify structured output via tool calling and `format` parameter."""

    class Joke(BaseModel):
        """Joke to tell user."""

        setup: str = Field(description="question to set up a joke")
        punchline: str = Field(description="answer to resolve the joke")

    llm = ChatOllama(model=DEFAULT_MODEL_NAME, temperature=0)
    query = "Tell me a joke about cats."

    # Pydantic
    if method == "function_calling":
        structured_llm = llm.with_structured_output(Joke, method="function_calling")
        result = structured_llm.invoke(query)
        assert isinstance(result, Joke)

        for chunk in structured_llm.stream(query):
            assert isinstance(chunk, Joke)

    # JSON Schema
    if method == "json_schema":
        structured_llm = llm.with_structured_output(
            Joke.model_json_schema(), method="json_schema"
        )
        result = structured_llm.invoke(query)
        assert isinstance(result, dict)
        assert set(result.keys()) == {"setup", "punchline"}

        for chunk in structured_llm.stream(query):
            assert isinstance(chunk, dict)
        # Re-check the final chunk after the loop and verify its shape
        assert isinstance(chunk, dict)
        assert set(chunk.keys()) == {"setup", "punchline"}

    # Typed Dict
    class JokeSchema(TypedDict):
        """Joke to tell user."""

        setup: Annotated[str, "question to set up a joke"]
        punchline: Annotated[str, "answer to resolve the joke"]

    structured_llm = llm.with_structured_output(JokeSchema, method="json_schema")
    result = structured_llm.invoke(query)
    assert isinstance(result, dict)
    assert set(result.keys()) == {"setup", "punchline"}

    for chunk in structured_llm.stream(query):
        assert isinstance(chunk, dict)
    # Re-check the final chunk after the loop and verify its shape
    assert isinstance(chunk, dict)
    assert set(chunk.keys()) == {"setup", "punchline"}
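The excerpt covers only lines 62–112 of the file, so the module-level imports and the pytest parametrization that supply the method argument are not shown. Below is a minimal sketch of that surrounding context, assuming conventional names; the real file may import DEFAULT_MODEL_NAME from elsewhere and pin a different model.

# Hypothetical module-level context for the test above; the parametrize
# values and the concrete model name are assumptions, not copied from the file.
import pytest
from pydantic import BaseModel, Field
from typing_extensions import Annotated, TypedDict

from langchain_ollama import ChatOllama

DEFAULT_MODEL_NAME = "llama3.1"  # assumed: any chat model pulled into local Ollama


@pytest.mark.parametrize("method", ["function_calling", "json_schema"])
def test_structured_output(method: str) -> None:
    ...  # body as shown above

Because this is an integration test, running it requires a local Ollama server with the target model already pulled; it is typically invoked with pytest against the integration_tests directory shown in the path above.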
Frequently Asked Questions
What does test_structured_output() do?
test_structured_output() is an integration test in the langchain codebase, defined in libs/partners/ollama/tests/integration_tests/chat_models/test_chat_models.py. It verifies that ChatOllama.with_structured_output() returns correctly shaped results from both invoke() and stream() for three schema styles: a Pydantic model (method="function_calling"), a JSON Schema dict (method="json_schema"), and a TypedDict (method="json_schema"). A usage sketch of the API under test follows below.
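For context, here is a minimal usage sketch mirroring the Pydantic branch of the test; the model name is an assumption rather than a value taken from the file.

# Minimal sketch of the API exercised by the test; "llama3.1" is an assumed
# model name and must already be pulled into the local Ollama server.
from pydantic import BaseModel, Field

from langchain_ollama import ChatOllama


class Joke(BaseModel):
    """Joke to tell user."""

    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")


llm = ChatOllama(model="llama3.1", temperature=0)
structured_llm = llm.with_structured_output(Joke, method="function_calling")

joke = structured_llm.invoke("Tell me a joke about cats.")  # returns a Joke instance
print(joke.setup)
print(joke.punchline)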
Where is test_structured_output() defined?
test_structured_output() is defined in libs/partners/ollama/tests/integration_tests/chat_models/test_chat_models.py at line 62.