_load_map_reduce_chain() — langchain Function Reference
Architecture documentation for the _load_map_reduce_chain() function in chain.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    93985420_65fc_2070_7048_c77879117460["_load_map_reduce_chain()"]
    7bff8b59_614d_4352_11dd_db15fa7a7056["chain.py"]
    93985420_65fc_2070_7048_c77879117460 -->|defined in| 7bff8b59_614d_4352_11dd_db15fa7a7056
    style 93985420_65fc_2070_7048_c77879117460 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain/langchain_classic/chains/question_answering/chain.py lines 102–183
def _load_map_reduce_chain(
    llm: BaseLanguageModel,
    *,
    question_prompt: BasePromptTemplate | None = None,
    combine_prompt: BasePromptTemplate | None = None,
    combine_document_variable_name: str = "summaries",
    map_reduce_document_variable_name: str = "context",
    collapse_prompt: BasePromptTemplate | None = None,
    reduce_llm: BaseLanguageModel | None = None,
    collapse_llm: BaseLanguageModel | None = None,
    verbose: bool | None = None,
    callback_manager: BaseCallbackManager | None = None,
    callbacks: Callbacks = None,
    token_max: int = 3000,
    **kwargs: Any,
) -> MapReduceDocumentsChain:
    _question_prompt = (
        question_prompt or map_reduce_prompt.QUESTION_PROMPT_SELECTOR.get_prompt(llm)
    )
    _combine_prompt = (
        combine_prompt or map_reduce_prompt.COMBINE_PROMPT_SELECTOR.get_prompt(llm)
    )
    map_chain = LLMChain(
        llm=llm,
        prompt=_question_prompt,
        verbose=verbose,
        callback_manager=callback_manager,
        callbacks=callbacks,
    )
    _reduce_llm = reduce_llm or llm
    reduce_chain = LLMChain(
        llm=_reduce_llm,
        prompt=_combine_prompt,
        verbose=verbose,
        callback_manager=callback_manager,
        callbacks=callbacks,
    )
    # TODO: document prompt
    combine_documents_chain = StuffDocumentsChain(
        llm_chain=reduce_chain,
        document_variable_name=combine_document_variable_name,
        verbose=verbose,
        callback_manager=callback_manager,
        callbacks=callbacks,
    )
    if collapse_prompt is None:
        collapse_chain = None
        if collapse_llm is not None:
            msg = (
                "collapse_llm provided, but collapse_prompt was not: please "
                "provide one or stop providing collapse_llm."
            )
            raise ValueError(msg)
    else:
        _collapse_llm = collapse_llm or llm
        collapse_chain = StuffDocumentsChain(
            llm_chain=LLMChain(
                llm=_collapse_llm,
                prompt=collapse_prompt,
                verbose=verbose,
                callback_manager=callback_manager,
                callbacks=callbacks,
            ),
            document_variable_name=combine_document_variable_name,
            verbose=verbose,
            callback_manager=callback_manager,
        )
    reduce_documents_chain = ReduceDocumentsChain(
        combine_documents_chain=combine_documents_chain,
        collapse_documents_chain=collapse_chain,
        token_max=token_max,
        verbose=verbose,
    )
    return MapReduceDocumentsChain(
        llm_chain=map_chain,
        document_variable_name=map_reduce_document_variable_name,
        reduce_documents_chain=reduce_documents_chain,
        verbose=verbose,
        callback_manager=callback_manager,
        callbacks=callbacks,
        **kwargs,
    )
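Usage Example
A minimal usage sketch, not part of the library source. It calls the private loader directly with an in-memory fake LLM so it runs without API keys; the FakeListLLM import path, the module path for the private function, and the "input_documents"/"question" input keys are assumptions based on the classic langchain question-answering API and may differ across versions.
# Hedged sketch: FakeListLLM is a stand-in for any BaseLanguageModel, and the
# import of the private loader mirrors the file path shown above.
from langchain_core.documents import Document
from langchain_core.language_models import FakeListLLM

from langchain_classic.chains.question_answering.chain import _load_map_reduce_chain

docs = [
    Document(page_content="LangChain ships a map-reduce question-answering chain."),
    Document(page_content="ReduceDocumentsChain collapses long intermediate results."),
]

# One response per mapped document plus one for the final combine step.
llm = FakeListLLM(responses=["partial answer 1", "partial answer 2", "final answer"])

chain = _load_map_reduce_chain(llm, token_max=3000, verbose=True)
result = chain.invoke({"input_documents": docs, "question": "What does the chain do?"})
print(result["output_text"])  # assumed output key of the combine step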
Frequently Asked Questions
What does _load_map_reduce_chain() do?
_load_map_reduce_chain() constructs a MapReduceDocumentsChain for question answering. It maps a question prompt over each input document with an LLMChain, then combines the per-document answers through a StuffDocumentsChain wrapped in a ReduceDocumentsChain; when a collapse_prompt is supplied, intermediate results that exceed token_max tokens are first condensed by an additional collapse chain. It is defined in libs/langchain/langchain_classic/chains/question_answering/chain.py.
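The collapse step is also the one place the loader validates its arguments: passing collapse_llm without collapse_prompt raises a ValueError. The sketch below illustrates that contract; the FakeListLLM stand-in, the prompt text, and the import path of the private loader are illustrative assumptions, not library defaults.
from langchain_core.language_models import FakeListLLM
from langchain_core.prompts import PromptTemplate

from langchain_classic.chains.question_answering.chain import _load_map_reduce_chain

llm = FakeListLLM(responses=["ok"])  # stand-in for any BaseLanguageModel

# Supplying collapse_prompt wires an extra StuffDocumentsChain that condenses
# mapped results whenever they exceed token_max tokens.
collapse_prompt = PromptTemplate.from_template(
    "Condense these partial answers:\n\n{summaries}"
)
chain = _load_map_reduce_chain(llm, collapse_prompt=collapse_prompt, token_max=1000)

# Supplying collapse_llm without collapse_prompt is rejected:
# _load_map_reduce_chain(llm, collapse_llm=FakeListLLM(responses=["ok"]))  # ValueError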
Where is _load_map_reduce_chain() defined?
_load_map_reduce_chain() is defined in libs/langchain/langchain_classic/chains/question_answering/chain.py at line 102.