
with_structured_output() — langchain Function Reference

Architecture documentation for the with_structured_output() method of AzureChatOpenAI, defined in azure.py in the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  63667f5e_3ebf_f451_c218_a9e51a54e223["with_structured_output()"]
  4b940a4c_8618_3ca0_9fa5_c378f5b8f6ac["AzureChatOpenAI"]
  63667f5e_3ebf_f451_c218_a9e51a54e223 -->|defined in| 4b940a4c_8618_3ca0_9fa5_c378f5b8f6ac
  style 63667f5e_3ebf_f451_c218_a9e51a54e223 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/openai/langchain_openai/chat_models/azure.py lines 833–1164

    def with_structured_output(
        self,
        schema: _DictOrPydanticClass | None = None,
        *,
        method: Literal["function_calling", "json_mode", "json_schema"] = "json_schema",
        include_raw: bool = False,
        strict: bool | None = None,
        **kwargs: Any,
    ) -> Runnable[LanguageModelInput, _DictOrPydantic]:
        r"""Model wrapper that returns outputs formatted to match the given schema.

        Args:
            schema: The output schema. Can be passed in as:

                - A JSON Schema,
                - A `TypedDict` class,
                - A Pydantic class,
                - Or an OpenAI function/tool schema.

                If `schema` is a Pydantic class then the model output will be a
                Pydantic instance of that class, and the model-generated fields will be
                validated by the Pydantic class. Otherwise the model output will be a
                dict and will not be validated.

                See `langchain_core.utils.function_calling.convert_to_openai_tool` for
                more on how to properly specify types and descriptions of schema fields
                when specifying a Pydantic or `TypedDict` class.

            method: The method for steering model generation, one of:

                - `'json_schema'`:
                    Uses OpenAI's [Structured Output API](https://platform.openai.com/docs/guides/structured-outputs).
                    Supported for `'gpt-4o-mini'`, `'gpt-4o-2024-08-06'`, `'o1'`, and later
                    models.
                - `'function_calling'`:
                    Uses OpenAI's tool-calling (formerly called function calling)
                    [API](https://platform.openai.com/docs/guides/function-calling).
                - `'json_mode'`:
                    Uses OpenAI's [JSON mode](https://platform.openai.com/docs/guides/structured-outputs/json-mode).
                    Note that if using JSON mode, you must include instructions for
                    formatting the output to the desired schema in the model call.

                Learn more about the differences between the methods and which models
                support which methods [here](https://platform.openai.com/docs/guides/structured-outputs/function-calling-vs-response-format).

            include_raw:
                If `False` then only the parsed structured output is returned.

                If an error occurs during model output parsing it will be raised.

                If `True` then both the raw model response (a `BaseMessage`) and the
                parsed model response will be returned.

                If an error occurs during output parsing it will be caught and returned
                as well.

                The final output is always a `dict` with keys `'raw'`, `'parsed'`, and
                `'parsing_error'`.
            strict:

                - True:
                    Model output is guaranteed to exactly match the schema.
                    The input schema will also be validated according to the [supported schemas](https://platform.openai.com/docs/guides/structured-outputs/supported-schemas?api-mode=responses#supported-schemas).
                - False:
                    Input schema will not be validated and model output will not be
                    validated.
                - None:
                    `strict` argument will not be passed to the model.

                If schema is specified via TypedDict or JSON schema, `strict` is not
                enabled by default. Pass `strict=True` to enable it.

                !!! note
                    `strict` can only be non-null if `method` is `'json_schema'`
                    or `'function_calling'`.
            kwargs: Additional keyword args are passed through to the model.

        Returns:
            A `Runnable` that takes the same inputs as a
                `langchain_core.language_models.chat.BaseChatModel`. If `include_raw` is
                `False` and `schema` is a Pydantic class, `Runnable` outputs an instance

Frequently Asked Questions

What does with_structured_output() do?
with_structured_output() is a method of AzureChatOpenAI that wraps the model so its outputs are formatted to match a given schema. It is defined in libs/partners/openai/langchain_openai/chat_models/azure.py.
Where is with_structured_output() defined?
with_structured_output() is defined in libs/partners/openai/langchain_openai/chat_models/azure.py at line 833.
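The docstring above states that with `include_raw=True` the final output is always a dict with keys `'raw'`, `'parsed'`, and `'parsing_error'`. The sketch below mimics that shape using plain `json` parsing as a stand-in for the real output parser; the message contents are invented examples, not output from the actual library.

```python
import json

def parse_with_raw(raw_message_content: str) -> dict:
    """Mimic the include_raw=True result shape: 'parsed' holds the structured
    result (or None), and 'parsing_error' holds a caught exception (or None)."""
    try:
        parsed = json.loads(raw_message_content)
        error = None
    except json.JSONDecodeError as exc:
        parsed, error = None, exc
    # In the real wrapper, 'raw' is the BaseMessage; a string stands in here.
    return {"raw": raw_message_content, "parsed": parsed, "parsing_error": error}

# A well-formed model response parses cleanly...
ok = parse_with_raw('{"setup": "Why do cats code?", "punchline": "Purr-formance."}')
# ...while a malformed one yields parsed=None and a captured error instead of raising.
bad = parse_with_raw("not json")

print(ok["parsed"]["punchline"])
print(bad["parsed"], type(bad["parsing_error"]).__name__)
```

This is the practical difference from `include_raw=False`, where a parsing failure raises instead of being returned under `'parsing_error'`.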
