create_qa_with_structure_chain() — langchain Function Reference
Architecture documentation for the create_qa_with_structure_chain() function in qa_with_structure.py from the langchain codebase.
Dependency Diagram
graph TD
    56edef7e_60db_7dfa_4330_a366b60a8313["create_qa_with_structure_chain()"]
    c4d776aa_af27_d81b_af91_432812868d2f["qa_with_structure.py"]
    56edef7e_60db_7dfa_4330_a366b60a8313 -->|defined in| c4d776aa_af27_d81b_af91_432812868d2f
    e9c7b58c_129e_1fab_e2c5_41fde21064bb["create_qa_with_sources_chain()"]
    e9c7b58c_129e_1fab_e2c5_41fde21064bb -->|calls| 56edef7e_60db_7dfa_4330_a366b60a8313
    style 56edef7e_60db_7dfa_4330_a366b60a8313 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain/langchain_classic/chains/openai_functions/qa_with_structure.py lines 39–110
def create_qa_with_structure_chain(
    llm: BaseLanguageModel,
    schema: dict | type[BaseModel],
    output_parser: str = "base",
    prompt: PromptTemplate | ChatPromptTemplate | None = None,
    verbose: bool = False,  # noqa: FBT001,FBT002
) -> LLMChain:
    """Create a question answering chain with structure.

    Create a question answering chain that returns an answer with sources
    based on schema.

    Args:
        llm: Language model to use for the chain.
        schema: Pydantic schema to use for the output.
        output_parser: Output parser to use. Should be one of `'pydantic'` or `'base'`.
        prompt: Optional prompt to use for the chain.
        verbose: Whether to run the chain in verbose mode.

    Returns:
        The question answering chain.
    """
    if output_parser == "pydantic":
        if not (isinstance(schema, type) and is_basemodel_subclass(schema)):
            msg = (
                "Must provide a pydantic class for schema when output_parser is "
                "'pydantic'."
            )
            raise ValueError(msg)
        _output_parser: BaseLLMOutputParser = PydanticOutputFunctionsParser(
            pydantic_schema=schema,
        )
    elif output_parser == "base":
        _output_parser = OutputFunctionsParser()
    else:
        msg = (
            f"Got unexpected output_parser: {output_parser}. "
            f"Should be one of `pydantic` or `base`."
        )
        raise ValueError(msg)
    if isinstance(schema, type) and is_basemodel_subclass(schema):
        schema_dict = cast("dict", schema.model_json_schema())
    else:
        schema_dict = cast("dict", schema)
    function = {
        "name": schema_dict["title"],
        "description": schema_dict["description"],
        "parameters": schema_dict,
    }
    llm_kwargs = get_llm_kwargs(function)
    messages = [
        SystemMessage(
            content=(
                "You are a world class algorithm to answer "
                "questions in a specific format."
            ),
        ),
        HumanMessage(content="Answer question using the following context"),
        HumanMessagePromptTemplate.from_template("{context}"),
        HumanMessagePromptTemplate.from_template("Question: {question}"),
        HumanMessage(content="Tips: Make sure to answer in the correct format"),
    ]
    prompt = prompt or ChatPromptTemplate(messages=messages)  # type: ignore[arg-type]
    return LLMChain(
        llm=llm,
        prompt=prompt,
        llm_kwargs=llm_kwargs,
        output_parser=_output_parser,
        verbose=verbose,
    )
Frequently Asked Questions
What does create_qa_with_structure_chain() do?
create_qa_with_structure_chain() builds an LLMChain that answers a question over supplied context and steers the language model to return output conforming to a given schema via OpenAI function calling. The schema may be a Pydantic model class or a JSON-schema dict; with output_parser='pydantic' the chain parses the model's function call back into an instance of that Pydantic class.
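A minimal usage sketch, assuming langchain_openai.ChatOpenAI as the model, the module path implied by the file location above, and a hypothetical Answer schema (the model name and inputs are illustrative):

from langchain_classic.chains.openai_functions.qa_with_structure import (
    create_qa_with_structure_chain,
)
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class Answer(BaseModel):
    """An answer to the question, with a confidence score."""

    answer: str = Field(..., description="Answer to the question that was asked")
    confidence: float = Field(..., description="Confidence between 0 and 1")

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
chain = create_qa_with_structure_chain(llm, Answer, output_parser="pydantic")

result = chain.invoke({
    "context": "LangChain was first released in October 2022.",
    "question": "When was LangChain first released?",
})
# With output_parser="pydantic", the chain's "text" output is an Answer instance.
answer: Answer = result["text"]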
Where is create_qa_with_structure_chain() defined?
create_qa_with_structure_chain() is defined in libs/langchain/langchain_classic/chains/openai_functions/qa_with_structure.py at line 39.
What calls create_qa_with_structure_chain()?
create_qa_with_structure_chain() is called by one function, create_qa_with_sources_chain(), which supplies a fixed answer-with-sources schema.
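The caller is a thin wrapper that fixes the schema and forwards everything else. A sketch of that relationship is shown below; the field names and the exact wrapper signature are assumptions based on the call pattern, not code quoted from this page.

from pydantic import BaseModel, Field

class AnswerWithSources(BaseModel):
    """An answer to the question, with sources."""

    answer: str = Field(..., description="Answer to the question that was asked")
    sources: list[str] = Field(
        ..., description="List of sources used to answer the question"
    )

def create_qa_with_sources_chain(llm, verbose: bool = False, **kwargs):
    """Delegate to create_qa_with_structure_chain() with a fixed schema."""
    return create_qa_with_structure_chain(
        llm, AnswerWithSources, verbose=verbose, **kwargs
    )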