create_structured_output_runnable() — langchain Function Reference
Architecture documentation for the create_structured_output_runnable() function in base.py from the langchain codebase.
Entity Profile
Dependency Diagram
```mermaid
graph TD
    f3f63b38_7673_1a74_f67c_06b8e8cfcdf3["create_structured_output_runnable()"]
    22e1446e_2db6_2965_eef8_bf9239c6dbfc["base.py"]
    f3f63b38_7673_1a74_f67c_06b8e8cfcdf3 -->|defined in| 22e1446e_2db6_2965_eef8_bf9239c6dbfc
    74a2fd31_540f_8c92_debc_5e39815247bf["_create_openai_tools_runnable()"]
    f3f63b38_7673_1a74_f67c_06b8e8cfcdf3 -->|calls| 74a2fd31_540f_8c92_debc_5e39815247bf
    d352db91_94a6_ec14_88eb_6644997dcde0["_create_openai_functions_structured_output_runnable()"]
    f3f63b38_7673_1a74_f67c_06b8e8cfcdf3 -->|calls| d352db91_94a6_ec14_88eb_6644997dcde0
    c9fa65ca_917c_8e4a_2662_8155e1c9ba25["_create_openai_json_runnable()"]
    f3f63b38_7673_1a74_f67c_06b8e8cfcdf3 -->|calls| c9fa65ca_917c_8e4a_2662_8155e1c9ba25
    style f3f63b38_7673_1a74_f67c_06b8e8cfcdf3 fill:#6366f1,stroke:#818cf8,color:#fff
```
Source Code
libs/langchain/langchain_classic/chains/structured_output/base.py lines 185–447
def create_structured_output_runnable(
output_schema: dict[str, Any] | type[BaseModel],
llm: Runnable,
prompt: BasePromptTemplate | None = None,
*,
output_parser: BaseOutputParser | BaseGenerationOutputParser | None = None,
enforce_function_usage: bool = True,
return_single: bool = True,
mode: Literal[
"openai-functions",
"openai-tools",
"openai-json",
] = "openai-functions",
**kwargs: Any,
) -> Runnable:
"""Create a runnable for extracting structured outputs.
Args:
output_schema: Either a dictionary or pydantic.BaseModel class. If a dictionary
is passed in, it's assumed to already be a valid JsonSchema.
For best results, pydantic.BaseModels should have docstrings describing what
the schema represents and descriptions for the parameters.
llm: Language model to use. Assumed to support the OpenAI function-calling API
if mode is 'openai-functions'. Assumed to support the OpenAI response_format
parameter if mode is 'openai-json'.
prompt: BasePromptTemplate to pass to the model. If mode is 'openai-json' and
prompt has input variable 'output_schema' then the given output_schema
will be converted to a JsonSchema and inserted in the prompt.
output_parser: Output parser to use for parsing model outputs. By default
will be inferred from the function types. If pydantic.BaseModel is passed
in, then the OutputParser will try to parse outputs using the pydantic
class. Otherwise model outputs will be parsed as JSON.
mode: How structured outputs are extracted from the model. If 'openai-functions'
then OpenAI function calling is used with the deprecated 'functions',
'function_call' schema. If 'openai-tools' then OpenAI function
calling with the latest 'tools', 'tool_choice' schema is used. This is
recommended over 'openai-functions'. If 'openai-json' then OpenAI model
with response_format set to JSON is used.
enforce_function_usage: Only applies when mode is 'openai-tools' or
'openai-functions'. If `True`, then the model will be forced to use the given
output schema. If `False`, then the model can elect whether to use the output
schema.
return_single: Only applies when mode is 'openai-tools'. Whether to return a list
of structured outputs or a single one. If `True` and the model does not return any
structured outputs then chain output is None. If `False` and the model does not
return any structured outputs then chain output is an empty list.
kwargs: Additional named arguments.
Returns:
A runnable sequence that will return a structured output(s) matching the given
output_schema.
OpenAI tools example with Pydantic schema (mode='openai-tools'):
```python
from langchain_classic.chains import create_structured_output_runnable
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
class RecordDog(BaseModel):
'''Record some identifying information about a dog.'''
name: str = Field(..., description="The dog's name")
color: str = Field(..., description="The dog's color")
fav_food: str | None = Field(None, description="The dog's favorite food")
model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are an extraction algorithm. Please extract every possible instance"),
("human", "{input}"),
]
)
structured_model = create_structured_output_runnable(
    RecordDog,
    model,
    prompt=prompt,
    mode="openai-tools",
    enforce_function_usage=True,
    return_single=True,
)
```
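Where the example above passes a Pydantic class, output_schema can also be a plain dict, which the function assumes is already valid JsonSchema; in that case model outputs are parsed as JSON rather than into a Pydantic object. A sketch of the same RecordDog schema in dict form (the exact key layout here is illustrative, not taken from the langchain source):

```python
# Hypothetical JsonSchema equivalent of the RecordDog Pydantic class above.
# Passing a dict like this skips Pydantic validation; the chain returns
# parsed JSON (a plain dict) instead of a RecordDog instance.
dog_schema = {
    "title": "RecordDog",
    "description": "Record some identifying information about a dog.",
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "The dog's name"},
        "color": {"type": "string", "description": "The dog's color"},
        "fav_food": {"type": "string", "description": "The dog's favorite food"},
    },
    "required": ["name", "color"],
}
```

The dict would then be passed in place of `RecordDog` as the first argument to `create_structured_output_runnable()`.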
Frequently Asked Questions
What does create_structured_output_runnable() do?
create_structured_output_runnable() builds a runnable sequence that extracts structured outputs matching a given schema from a language model, using OpenAI function calling, tool calling, or JSON mode. It is defined in libs/langchain/langchain_classic/chains/structured_output/base.py.
Where is create_structured_output_runnable() defined?
create_structured_output_runnable() is defined in libs/langchain/langchain_classic/chains/structured_output/base.py at line 185.
What does create_structured_output_runnable() call?
create_structured_output_runnable() calls 3 function(s): _create_openai_functions_structured_output_runnable, _create_openai_json_runnable, _create_openai_tools_runnable.
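The three-way dispatch on `mode` can be sketched as follows. The helper names come from the dependency list above, but their bodies here are stand-in stubs for illustration, not the real implementations from langchain's base.py:

```python
from typing import Any, Literal


# Stand-in stubs for the three private helpers named above; the real
# implementations construct LangChain runnables in structured_output/base.py.
def _create_openai_tools_runnable(schema: Any, **kwargs: Any) -> str:
    return "openai-tools runnable"


def _create_openai_functions_structured_output_runnable(schema: Any, **kwargs: Any) -> str:
    return "openai-functions runnable"


def _create_openai_json_runnable(schema: Any, **kwargs: Any) -> str:
    return "openai-json runnable"


def create_structured_output_runnable(
    output_schema: Any,
    mode: Literal["openai-functions", "openai-tools", "openai-json"] = "openai-functions",
    **kwargs: Any,
) -> str:
    # Dispatch mirrors the documented behavior: one helper per mode,
    # with an error for anything unrecognized.
    if mode == "openai-tools":
        return _create_openai_tools_runnable(output_schema, **kwargs)
    if mode == "openai-functions":
        return _create_openai_functions_structured_output_runnable(output_schema, **kwargs)
    if mode == "openai-json":
        return _create_openai_json_runnable(output_schema, **kwargs)
    raise ValueError(f"Invalid mode {mode!r}")
```

The default of "openai-functions" matches the signature shown in the source above, though "openai-tools" is the recommended mode per the docstring.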