with_structured_output() — langchain Function Reference
Architecture documentation for the with_structured_output() method, defined on the ChatXAI class in chat_models.py in the langchain codebase.
Dependency Diagram
graph TD
    f9d42027_6ba1_4ed4_ab4f_381817fca789["with_structured_output()"]
    44814818_ed14_7dba_0cd5_a8f2cd67fb61["ChatXAI"]
    f9d42027_6ba1_4ed4_ab4f_381817fca789 -->|defined in| 44814818_ed14_7dba_0cd5_a8f2cd67fb61
    style f9d42027_6ba1_4ed4_ab4f_381817fca789 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/xai/langchain_xai/chat_models.py lines 650–734
def with_structured_output(
    self,
    schema: _DictOrPydanticClass | None = None,
    *,
    method: Literal[
        "function_calling", "json_mode", "json_schema"
    ] = "function_calling",
    include_raw: bool = False,
    strict: bool | None = None,
    **kwargs: Any,
) -> Runnable[LanguageModelInput, _DictOrPydantic]:
    """Model wrapper that returns outputs formatted to match the given schema.

    Args:
        schema: The output schema. Can be passed in as:

            - An OpenAI function/tool schema,
            - A JSON Schema,
            - A `TypedDict` class,
            - Or a Pydantic class.

            If `schema` is a Pydantic class then the model output will be a
            Pydantic instance of that class, and the model-generated fields
            will be validated by the Pydantic class. Otherwise the model
            output will be a dict and will not be validated.

            See `langchain_core.utils.function_calling.convert_to_openai_tool`
            for more on how to properly specify types and descriptions of
            schema fields when specifying a Pydantic or `TypedDict` class.
        method: The method for steering model generation, one of:

            - `'function_calling'`: uses xAI's tool-calling features
              (https://docs.x.ai/docs/guides/function-calling).
            - `'json_schema'`: uses xAI's structured output feature
              (https://docs.x.ai/docs/guides/structured-outputs).
            - `'json_mode'`: uses xAI's JSON mode feature.
        include_raw: If `False` then only the parsed structured output is
            returned; if an error occurs during model output parsing it will
            be raised. If `True` then both the raw model response (a
            `BaseMessage`) and the parsed model response will be returned;
            if an error occurs during output parsing it will be caught and
            returned as well. The final output is always a `dict` with keys
            `'raw'`, `'parsed'`, and `'parsing_error'`.
        strict:
            - `True`: model output is guaranteed to exactly match the schema.
              The input schema will also be validated according to the
              supported schemas
              (https://platform.openai.com/docs/guides/structured-outputs/supported-schemas?api-mode=responses#supported-schemas).
            - `False`: input schema will not be validated and model output
              will not be validated.
            - `None`: the `strict` argument will not be passed to the model.
        kwargs: Additional keyword args aren't supported.

    Returns:
        A `Runnable` that takes the same inputs as a
        `langchain_core.language_models.chat.BaseChatModel`. If `include_raw`
        is `False` and `schema` is a Pydantic class, the `Runnable` outputs an
        instance of `schema` (i.e., a Pydantic object). Otherwise, if
        `include_raw` is `False` then the `Runnable` outputs a `dict`.
        If `include_raw` is `True`, then the `Runnable` outputs a `dict` with
        keys:

        - `'raw'`: `BaseMessage`
        - `'parsed'`: `None` if there was a parsing error, otherwise the type
          depends on the `schema` as described above.
        - `'parsing_error'`: `BaseException | None`
    """
    # Some applications require that incompatible parameters (e.g., unsupported
    # methods) be handled.
    if method == "function_calling" and strict:
        ...  # excerpt ends here; the full implementation runs through line 734
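The docstring above can be illustrated with a minimal sketch. The actual xAI call requires an API key, so it is shown commented out; the runnable portion below only mimics the `include_raw=True` output shape (`'raw'`, `'parsed'`, `'parsing_error'`) against a canned response, using the standard library. The `Joke` schema, the model name, and `parse_structured` are hypothetical, not part of langchain_xai.

```python
import json

# Hypothetical target schema, expressed as a JSON Schema dict (one of the
# accepted `schema` forms). A Pydantic class or TypedDict would also work.
joke_schema = {
    "title": "Joke",
    "type": "object",
    "properties": {
        "setup": {"type": "string", "description": "The joke's setup"},
        "punchline": {"type": "string", "description": "The punchline"},
    },
    "required": ["setup", "punchline"],
}

# Real usage would look roughly like this (needs an xAI API key; the model
# name is a placeholder):
#
#   from langchain_xai import ChatXAI
#   llm = ChatXAI(model="grok-...")
#   structured_llm = llm.with_structured_output(joke_schema, include_raw=True)
#   result = structured_llm.invoke("Tell me a joke about cats")
#
# The parsing step is sketched below so this snippet runs offline.

def parse_structured(raw_content: str, schema: dict) -> dict:
    """Mimic the include_raw=True output shape for a JSON-mode response."""
    try:
        parsed = json.loads(raw_content)
        missing = [k for k in schema.get("required", []) if k not in parsed]
        if missing:
            raise ValueError(f"missing required fields: {missing}")
        return {"raw": raw_content, "parsed": parsed, "parsing_error": None}
    except Exception as exc:  # parsing errors are returned, not raised
        return {"raw": raw_content, "parsed": None, "parsing_error": exc}

ok = parse_structured(
    '{"setup": "Why did the cat sit on the laptop?", '
    '"punchline": "To keep an eye on the mouse."}',
    joke_schema,
)
bad = parse_structured("not json at all", joke_schema)
print(ok["parsed"]["punchline"])
print(bad["parsing_error"] is not None)
```

Note that with `include_raw=False` a parsing failure would instead raise, which is the trade-off the docstring describes: `include_raw=True` trades a simpler return type for error values you must inspect yourself.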
Frequently Asked Questions
What does with_structured_output() do?
with_structured_output() is a method of the ChatXAI class in the langchain codebase, defined in libs/partners/xai/langchain_xai/chat_models.py. It returns a Runnable that formats model output to match a given schema.
Where is with_structured_output() defined?
with_structured_output() is defined in libs/partners/xai/langchain_xai/chat_models.py at line 650.