create_structured_output_chain() — langchain Function Reference

Architecture documentation for the create_structured_output_chain() function in base.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  0f64b495_4bce_a08a_0377_fbd6b444de25["create_structured_output_chain()"]
  29b16d39_46bd_bda6_3f2c_ed7220442ac1["base.py"]
  0f64b495_4bce_a08a_0377_fbd6b444de25 -->|defined in| 29b16d39_46bd_bda6_3f2c_ed7220442ac1
  b5ea8739_90b8_abe3_047c_3d9f5bd78cb6["create_openai_fn_chain()"]
  0f64b495_4bce_a08a_0377_fbd6b444de25 -->|calls| b5ea8739_90b8_abe3_047c_3d9f5bd78cb6
  style 0f64b495_4bce_a08a_0377_fbd6b444de25 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain/langchain_classic/chains/openai_functions/base.py lines 149–239

def create_structured_output_chain(
    output_schema: dict[str, Any] | type[BaseModel],
    llm: BaseLanguageModel,
    prompt: BasePromptTemplate,
    *,
    output_key: str = "function",
    output_parser: BaseLLMOutputParser | None = None,
    **kwargs: Any,
) -> LLMChain:
    """[Legacy] Create an LLMChain that uses an OpenAI function to get a structured output.

    Args:
        output_schema: Either a dictionary or pydantic.BaseModel class. If a dictionary
            is passed in, it's assumed to already be a valid JsonSchema.
            For best results, pydantic.BaseModels should have docstrings describing what
            the schema represents and descriptions for the parameters.
        llm: Language model to use, assumed to support the OpenAI function-calling API.
        prompt: BasePromptTemplate to pass to the model.
        output_key: The key to use when returning the output in LLMChain.__call__.
        output_parser: BaseLLMOutputParser to use for parsing model outputs. By default
            will be inferred from the function types. If pydantic.BaseModels are passed
            in, then the OutputParser will try to parse outputs using those. Otherwise
            model outputs will simply be parsed as JSON.
        **kwargs: Additional keyword arguments to pass to LLMChain.

    Returns:
        An LLMChain that will pass the given function to the model.

    Example:
        ```python
        from typing import Optional

        from langchain_classic.chains.openai_functions import create_structured_output_chain
        from langchain_openai import ChatOpenAI
        from langchain_core.prompts import ChatPromptTemplate

        from pydantic import BaseModel, Field

        class Dog(BaseModel):
            \"\"\"Identifying information about a dog.\"\"\"

            name: str = Field(..., description="The dog's name")
            color: str = Field(..., description="The dog's color")
            fav_food: str | None = Field(None, description="The dog's favorite food")

        model = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
        prompt = ChatPromptTemplate.from_messages(
            [
                ("system", "You are a world class algorithm for extracting information in structured formats."),
                ("human", "Use the given format to extract information from the following input: {input}"),
                ("human", "Tip: Make sure to answer in the correct format"),
            ]
        )
        chain = create_structured_output_chain(Dog, model, prompt)
        chain.run("Harry was a chubby brown beagle who loved chicken")
        # -> Dog(name="Harry", color="brown", fav_food="chicken")

        ```
    """  # noqa: E501
    if isinstance(output_schema, dict):
        function: Any = {
            "name": "output_formatter",
            "description": (
                "Output formatter. Should always be used to format your response to the"
                " user."
            ),
            "parameters": output_schema,
        }
    else:

        class _OutputFormatter(BaseModel):
            """Output formatter.

            Should always be used to format your response to the user.
            """

            output: output_schema  # type: ignore[valid-type]

        function = _OutputFormatter
        output_parser = output_parser or PydanticAttrOutputFunctionsParser(
            pydantic_schema=_OutputFormatter,
            attr_name="output",
        )
    return create_openai_fn_chain(
        [function],
        llm,
        prompt,
        output_key=output_key,
        output_parser=output_parser,
        **kwargs,
    )

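The docstring example above exercises only the pydantic.BaseModel branch. When a plain dict is passed instead, the function wraps it unchanged as the parameters of an "output_formatter" function and, per the docstring, the model output is simply parsed as JSON rather than into a pydantic object. A minimal sketch of that path, with an illustrative schema and prompt that are not taken from the source:

```python
from langchain_classic.chains.openai_functions import create_structured_output_chain
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Illustrative JsonSchema dict; it is passed through as the function's "parameters".
dog_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "The dog's name"},
        "color": {"type": "string", "description": "The dog's color"},
    },
    "required": ["name", "color"],
}

model = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Extract information in the given structured format."),
        ("human", "{input}"),
    ]
)

chain = create_structured_output_chain(dog_schema, model, prompt)
chain.run("Harry was a chubby brown beagle")
# With a dict schema no pydantic parser is attached, so the result is a plain
# dict parsed from the function-call arguments, e.g. {"name": "Harry", "color": "brown"}.
```
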
Frequently Asked Questions

What does create_structured_output_chain() do?
create_structured_output_chain() is a legacy helper that builds an LLMChain which uses the OpenAI function-calling API to return output matching a given schema, supplied as either a JsonSchema dict or a pydantic.BaseModel class. It is defined in libs/langchain/langchain_classic/chains/openai_functions/base.py.
Where is create_structured_output_chain() defined?
create_structured_output_chain() is defined in libs/langchain/langchain_classic/chains/openai_functions/base.py at line 149.
What does create_structured_output_chain() call?
create_structured_output_chain() calls one function: create_openai_fn_chain().
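
As the dependency diagram and the answer above note, create_structured_output_chain() delegates to create_openai_fn_chain() after wrapping the schema. For the pydantic case this means building the _OutputFormatter wrapper and a PydanticAttrOutputFunctionsParser, as in the source listing. The sketch below spells out the roughly equivalent direct call; the import paths and the model/prompt setup are assumptions mirroring the docstring example, not part of the source shown here:

```python
from langchain_classic.chains.openai_functions import create_openai_fn_chain
from langchain_core.output_parsers.openai_functions import PydanticAttrOutputFunctionsParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class Dog(BaseModel):
    """Identifying information about a dog."""

    name: str = Field(..., description="The dog's name")
    color: str = Field(..., description="The dog's color")


class _OutputFormatter(BaseModel):
    """Output formatter. Should always be used to format your response to the user."""

    output: Dog


model = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
prompt = ChatPromptTemplate.from_messages(
    [("human", "Use the given format to extract information from: {input}")]
)

# Roughly what create_structured_output_chain(Dog, model, prompt) expands to:
chain = create_openai_fn_chain(
    [_OutputFormatter],
    model,
    prompt,
    output_key="function",
    output_parser=PydanticAttrOutputFunctionsParser(
        pydantic_schema=_OutputFormatter, attr_name="output"
    ),
)
chain.run("Harry was a chubby brown beagle")
# The parser extracts the wrapper's "output" attribute, returning a Dog instance.
```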
