_load_map_reduce_chain() — langchain Function Reference
Architecture documentation for the _load_map_reduce_chain() function in chain.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    c9353ac1_1c05_3f86_9d56_ed433db1be6c["_load_map_reduce_chain()"]
    efa3839a_04cc_4e5d_7ba0_06993a200d6c["chain.py"]
    c9353ac1_1c05_3f86_9d56_ed433db1be6c -->|defined in| efa3839a_04cc_4e5d_7ba0_06993a200d6c
    style c9353ac1_1c05_3f86_9d56_ed433db1be6c fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain/langchain_classic/chains/summarize/chain.py lines 68–169
def _load_map_reduce_chain(
    llm: BaseLanguageModel,
    *,
    map_prompt: BasePromptTemplate = map_reduce_prompt.PROMPT,
    combine_prompt: BasePromptTemplate = map_reduce_prompt.PROMPT,
    combine_document_variable_name: str = "text",
    map_reduce_document_variable_name: str = "text",
    collapse_prompt: BasePromptTemplate | None = None,
    reduce_llm: BaseLanguageModel | None = None,
    collapse_llm: BaseLanguageModel | None = None,
    verbose: bool | None = None,
    token_max: int = 3000,
    callbacks: Callbacks = None,
    collapse_max_retries: int | None = None,
    **kwargs: Any,
) -> MapReduceDocumentsChain:
    """Load a MapReduceDocumentsChain for summarization.

    This chain first applies a "map" step to summarize each document,
    then applies a "reduce" step to combine the summaries into a
    final result. Optionally, a "collapse" step can be used to handle
    long intermediate results.

    Args:
        llm: Language model to use for the map and reduce steps.
        map_prompt: Prompt used to summarize each document in the map step.
        combine_prompt: Prompt used to combine summaries in the reduce step.
        combine_document_variable_name: Variable name in the `combine_prompt` where
            the mapped summaries are inserted.
        map_reduce_document_variable_name: Variable name in the `map_prompt`
            where document text is inserted.
        collapse_prompt: Optional prompt used to collapse intermediate summaries
            if they exceed the token limit (`token_max`).
        reduce_llm: Optional separate LLM for the reduce step. Defaults to the
            same model used for the map step.
        collapse_llm: Optional separate LLM for the collapse step. Defaults to
            the same model used for the map step.
        verbose: Whether to log progress and intermediate steps.
        token_max: Token threshold that triggers the collapse step during reduction.
        callbacks: Optional callbacks for logging and tracing.
        collapse_max_retries: Maximum retries for the collapse step if it fails.
        **kwargs: Additional keyword arguments passed to the MapReduceDocumentsChain.

    Returns:
        A MapReduceDocumentsChain that maps each document to a summary,
        then reduces all summaries into a single cohesive result.
    """
    map_chain = LLMChain(
        llm=llm,
        prompt=map_prompt,
        verbose=verbose,
        callbacks=callbacks,
    )
    _reduce_llm = reduce_llm or llm
    reduce_chain = LLMChain(
        llm=_reduce_llm,
        prompt=combine_prompt,
        verbose=verbose,
        callbacks=callbacks,
    )
    combine_documents_chain = StuffDocumentsChain(
        llm_chain=reduce_chain,
        document_variable_name=combine_document_variable_name,
        verbose=verbose,
        callbacks=callbacks,
    )
    if collapse_prompt is None:
        collapse_chain = None
        if collapse_llm is not None:
            msg = (
                "collapse_llm provided, but collapse_prompt was not: please "
                "provide one or stop providing collapse_llm."
            )
            raise ValueError(msg)
    else:
        _collapse_llm = collapse_llm or llm
        collapse_chain = StuffDocumentsChain(
            llm_chain=LLMChain(
                llm=_collapse_llm,
                prompt=collapse_prompt,
                verbose=verbose,
                callbacks=callbacks,
            ),
            document_variable_name=combine_document_variable_name,
            verbose=verbose,
            callbacks=callbacks,
        )
    reduce_documents_chain = ReduceDocumentsChain(
        combine_documents_chain=combine_documents_chain,
        collapse_documents_chain=collapse_chain,
        token_max=token_max,
        verbose=verbose,
        callbacks=callbacks,
        collapse_max_retries=collapse_max_retries,
    )
    return MapReduceDocumentsChain(
        llm_chain=map_chain,
        reduce_documents_chain=reduce_documents_chain,
        document_variable_name=map_reduce_document_variable_name,
        verbose=verbose,
        callbacks=callbacks,
        **kwargs,
    )
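The map, collapse, and reduce steps that this function wires together can be sketched in plain Python. This is a toy illustration, not langchain's implementation: `summarize` stands in for the LLM calls driven by `map_prompt`, `collapse_prompt`, and `combine_prompt`, and `token_count` stands in for the model's tokenizer.

```python
def summarize(texts: list[str]) -> str:
    # Toy "LLM": truncates each input and joins them, so output
    # is shorter than the combined inputs (a real LLM summarizes).
    return " | ".join(t[:20] for t in texts)


def token_count(text: str) -> int:
    # Toy token counter: whitespace-delimited words.
    return len(text.split())


def map_reduce_summarize(docs: list[str], token_max: int = 3000) -> str:
    # Map step: summarize each document independently (map_prompt).
    summaries = [summarize([doc]) for doc in docs]

    # Collapse step: while the combined summaries exceed token_max,
    # collapse adjacent pairs into shorter summaries (collapse_prompt).
    while len(summaries) > 1 and sum(token_count(s) for s in summaries) > token_max:
        summaries = [summarize(summaries[i : i + 2]) for i in range(0, len(summaries), 2)]

    # Reduce step: combine all remaining summaries into one result
    # (combine_prompt, inserted at combine_document_variable_name).
    return summarize(summaries)
```

A low `token_max` forces repeated collapsing before the final reduce, which is exactly the behavior `ReduceDocumentsChain` provides when `collapse_prompt` is supplied.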
Frequently Asked Questions
What does _load_map_reduce_chain() do?
_load_map_reduce_chain() builds a MapReduceDocumentsChain for summarization: it summarizes each document with map_prompt (the map step), then combines the per-document summaries with combine_prompt (the reduce step), optionally collapsing intermediate summaries that exceed token_max. It is defined in libs/langchain/langchain_classic/chains/summarize/chain.py.
Where is _load_map_reduce_chain() defined?
_load_map_reduce_chain() is defined in libs/langchain/langchain_classic/chains/summarize/chain.py at line 68.