with_structured_output() — langchain Function Reference

Architecture documentation for the with_structured_output() function in chat_models.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  with_structured_output["with_structured_output()"]
  ChatMistralAI["ChatMistralAI"]
  with_structured_output -->|defined in| ChatMistralAI
  bind_tools["bind_tools()"]
  with_structured_output -->|calls| bind_tools
  convert_to_openai_response_format["_convert_to_openai_response_format()"]
  with_structured_output -->|calls| convert_to_openai_response_format
  style with_structured_output fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/mistralai/langchain_mistralai/chat_models.py lines 825–1157

    def with_structured_output(
        self,
        schema: dict | type | None = None,
        *,
        method: Literal[
            "function_calling", "json_mode", "json_schema"
        ] = "function_calling",
        include_raw: bool = False,
        **kwargs: Any,
    ) -> Runnable[LanguageModelInput, dict | BaseModel]:
        r"""Model wrapper that returns outputs formatted to match the given schema.

        Args:
            schema: The output schema. Can be passed in as:

                - An OpenAI function/tool schema,
                - A JSON Schema,
                - A `TypedDict` class,
                - Or a Pydantic class.

                If `schema` is a Pydantic class then the model output will be a
                Pydantic instance of that class, and the model-generated fields will be
                validated by the Pydantic class. Otherwise the model output will be a
                dict and will not be validated.

                See `langchain_core.utils.function_calling.convert_to_openai_tool` for
                more on how to properly specify types and descriptions of schema fields
                when specifying a Pydantic or `TypedDict` class.

            method: The method for steering model generation, one of:

                - `'function_calling'`:
                    Uses Mistral's
                    [function-calling feature](https://docs.mistral.ai/capabilities/function_calling/).
                - `'json_schema'`:
                    Uses Mistral's
                    [structured output feature](https://docs.mistral.ai/capabilities/structured-output/custom_structured_output/).
                - `'json_mode'`:
                    Uses Mistral's
                    [JSON mode](https://docs.mistral.ai/capabilities/structured-output/json_mode/).
                    Note that if using JSON mode you must include instructions
                    for formatting the output into the desired schema in the
                    model call.

                !!! warning "Behavior changed in `langchain-mistralai` 0.2.5"

                    Added method="json_schema"

            include_raw:
                If `False` then only the parsed structured output is returned.

                If an error occurs during model output parsing it will be raised.

                If `True` then both the raw model response (a `BaseMessage`) and the
                parsed model response will be returned.

                If an error occurs during output parsing it will be caught and returned
                as well.

                The final output is always a `dict` with keys `'raw'`, `'parsed'`, and
                `'parsing_error'`.

            kwargs: Any additional parameters are passed directly to
                `self.bind(**kwargs)`. This is useful for passing in
                parameters such as `tool_choice` or `tools` to control
                which tool the model should call, or to pass in parameters such as
                `stop` to control when the model should stop generating output.

        Returns:
            A `Runnable` that takes the same inputs as a
                `langchain_core.language_models.chat.BaseChatModel`. If `include_raw` is
                `False` and `schema` is a Pydantic class, `Runnable` outputs an instance
                of `schema` (i.e., a Pydantic object). Otherwise, if `include_raw` is
                `False` then `Runnable` outputs a `dict`.

                If `include_raw` is `True`, then `Runnable` outputs a `dict` with keys:

                - `'raw'`: `BaseMessage`
                - `'parsed'`: `None` if there was a parsing error, otherwise the type
                    depends on the `schema` as described above.
                - `'parsing_error'`: `BaseException | None`
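
A minimal usage sketch of the signature above, assuming a configured Mistral API key; the model name `mistral-large-latest` and the `Joke` schema are illustrative choices, not taken from this page:

    from pydantic import BaseModel, Field

    from langchain_mistralai import ChatMistralAI


    class Joke(BaseModel):
        """Joke to tell the user."""

        setup: str = Field(description="The setup of the joke")
        punchline: str = Field(description="The punchline of the joke")


    llm = ChatMistralAI(model="mistral-large-latest")

    # Default method="function_calling"; because the schema is a Pydantic
    # class, the output is a validated Joke instance.
    structured_llm = llm.with_structured_output(Joke)
    joke = structured_llm.invoke("Tell me a joke about cats")

    # With include_raw=True the result is instead a dict with 'raw',
    # 'parsed', and 'parsing_error' keys, per the docstring above.
    raw_llm = llm.with_structured_output(Joke, include_raw=True)
    result = raw_llm.invoke("Tell me a joke about cats")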

Frequently Asked Questions

What does with_structured_output() do?
with_structured_output() is a method of ChatMistralAI that wraps the model in a Runnable whose outputs are formatted to match a given schema (an OpenAI tool schema, JSON Schema, TypedDict, or Pydantic class). It is defined in libs/partners/mistralai/langchain_mistralai/chat_models.py.
Where is with_structured_output() defined?
with_structured_output() is defined in libs/partners/mistralai/langchain_mistralai/chat_models.py at line 825.
What does with_structured_output() call?
with_structured_output() calls two functions: bind_tools() and _convert_to_openai_response_format().
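
For the non-default modes described in the docstring, a hedged sketch reusing the illustrative `llm` and `Joke` from the earlier example (the version and behavior notes come from the docstring, not verified here):

    # method="json_schema" uses Mistral's structured output feature
    # (available since langchain-mistralai 0.2.5 per the docstring's note).
    schema_llm = llm.with_structured_output(Joke, method="json_schema")
    joke = schema_llm.invoke("Tell me a joke about cats")

    # method="json_mode" uses Mistral's JSON mode; the docstring warns that
    # the prompt itself must spell out the desired output format.
    json_llm = llm.with_structured_output(Joke, method="json_mode")
    result = json_llm.invoke(
        "Tell me a joke about cats. "
        "Return JSON with string keys 'setup' and 'punchline'."
    )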
