MultiPromptChain Class — langchain Architecture
Architecture documentation for the MultiPromptChain class in multi_prompt.py from the langchain codebase.
Entity Profile
MultiPromptChain: a class defined in libs/langchain/langchain_classic/chains/router/multi_prompt.py (lines 33–190) that extends MultiRouteChain and is deprecated in favor of a LangGraph-based implementation.
Dependency Diagram
```mermaid
graph TD
    MultiPromptChain["MultiPromptChain"]
    MultiRouteChain["MultiRouteChain"]
    multi_prompt_py["multi_prompt.py"]
    output_keys["output_keys()"]
    from_prompts["from_prompts()"]
    MultiPromptChain -->|extends| MultiRouteChain
    MultiPromptChain -->|defined in| multi_prompt_py
    MultiPromptChain -->|method| output_keys
    MultiPromptChain -->|method| from_prompts
```
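As the diagram shows, `from_prompts()` is the usual entry point: it builds an LLM router chain plus one destination chain per prompt. Below is a minimal sketch of how this legacy constructor was commonly called, assuming the classic `from_prompts(llm, prompt_infos)` signature; the import path, model, and prompt texts are illustrative and may differ across langchain releases.

```python
from langchain.chains.router import MultiPromptChain  # may be langchain_classic in newer layouts
from langchain_openai import OpenAI

# Each entry names a destination, describes when the router should pick it,
# and provides the prompt template for that destination.
prompt_infos = [
    {
        "name": "animals",
        "description": "Good for answering questions about animals",
        "prompt_template": "You are an expert on animals.\n\n{input}",
    },
    {
        "name": "vegetables",
        "description": "Good for answering questions about vegetables",
        "prompt_template": "You are an expert on vegetables.\n\n{input}",
    },
]

# from_prompts wires up the router chain and one destination chain per entry.
chain = MultiPromptChain.from_prompts(OpenAI(), prompt_infos)
result = chain.invoke({"input": "What do rabbits eat?"})
print(result["text"])  # "text" is the output key reported by MultiPromptChain
```

The deprecation notice in the source excerpt below recommends replacing this pattern with the LangGraph implementation shown in the class docstring.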
Source Code
libs/langchain/langchain_classic/chains/router/multi_prompt.py, lines 33–190 (excerpt)
class MultiPromptChain(MultiRouteChain):
    """A multi-route chain that uses an LLM router chain to choose amongst prompts.

    This class is deprecated. See below for a replacement, which offers several
    benefits, including streaming and batch support.

    Below is an example implementation:

    ```python
    from operator import itemgetter
    from typing import Literal

    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.runnables import RunnableConfig
    from langchain_openai import ChatOpenAI
    from langgraph.graph import END, START, StateGraph
    from typing_extensions import TypedDict

    model = ChatOpenAI(model="gpt-4o-mini")

    # Define the prompts we will route to
    prompt_1 = ChatPromptTemplate.from_messages(
        [
            ("system", "You are an expert on animals."),
            ("human", "{input}"),
        ]
    )
    prompt_2 = ChatPromptTemplate.from_messages(
        [
            ("system", "You are an expert on vegetables."),
            ("human", "{input}"),
        ]
    )

    # Construct the chains we will route to. These format the input query
    # into the respective prompt, run it through a chat model, and cast
    # the result to a string.
    chain_1 = prompt_1 | model | StrOutputParser()
    chain_2 = prompt_2 | model | StrOutputParser()

    # Next: define the chain that selects which branch to route to.
    # Here we will take advantage of tool-calling features to force
    # the output to select one of two desired branches.
    route_system = "Route the user's query to either the animal or vegetable expert."
    route_prompt = ChatPromptTemplate.from_messages(
        [
            ("system", route_system),
            ("human", "{input}"),
        ]
    )

    # Define schema for output:
    class RouteQuery(TypedDict):
        \"\"\"Route query to destination expert.\"\"\"

        destination: Literal["animal", "vegetable"]

    route_chain = route_prompt | model.with_structured_output(RouteQuery)

    # For LangGraph, we will define the state of the graph to hold the query,
    # destination, and final answer.
    class State(TypedDict):
        query: str
        destination: RouteQuery
        answer: str

    # We define functions for each node, including routing the query:
    async def route_query(state: State, config: RunnableConfig):
        destination = await route_chain.ainvoke(state["query"], config)
        return {"destination": destination}

    # And one node for each prompt
    async def prompt_1(state: State, config: RunnableConfig):
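        # (The source excerpt is cut off here; the remaining node bodies and
        # the graph assembly in the real file are not reproduced.)
        ...

    # Illustrative sketch only, not the verbatim source: nodes of this shape
    # are typically wired into a LangGraph StateGraph roughly as follows.
    # The selector name `select_node` is a hypothetical name, and the second
    # prompt node is assumed to be defined in the truncated part of the example.
    def select_node(state: State) -> Literal["prompt_1", "prompt_2"]:
        # Follow the branch named by the router's structured output.
        return "prompt_1" if state["destination"]["destination"] == "animal" else "prompt_2"

    graph = StateGraph(State)
    graph.add_node("route_query", route_query)
    graph.add_node("prompt_1", prompt_1)  # animal-expert node
    graph.add_node("prompt_2", prompt_2)  # vegetable-expert node (defined in the truncated part)
    graph.add_edge(START, "route_query")
    graph.add_conditional_edges("route_query", select_node)
    graph.add_edge("prompt_1", END)
    graph.add_edge("prompt_2", END)
    app = graph.compile()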
Extends
MultiRouteChain
Source
libs/langchain/langchain_classic/chains/router/multi_prompt.py, lines 33–190
Frequently Asked Questions
What is the MultiPromptChain class?
MultiPromptChain is a deprecated multi-route chain in the langchain codebase that uses an LLM router chain to choose among several prompt-specific destination chains. It is defined in libs/langchain/langchain_classic/chains/router/multi_prompt.py.
Where is MultiPromptChain defined?
MultiPromptChain is defined in libs/langchain/langchain_classic/chains/router/multi_prompt.py at line 33.
What does MultiPromptChain extend?
MultiPromptChain extends MultiRouteChain.
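Because the routing machinery lives on the base class, MultiPromptChain mainly adds the `from_prompts()` constructor and fixes the output key. The sketch below shows that inherited surface, assuming the classic field names (`router_chain`, `destination_chains`, `default_chain`) and that `output_keys` returns `["text"]` in this version; both are worth verifying against the installed release.

```python
from langchain.chains.router import MultiPromptChain  # may be langchain_classic in newer layouts
from langchain_openai import OpenAI

chain = MultiPromptChain.from_prompts(
    OpenAI(),
    [
        {
            "name": "animals",
            "description": "Good for answering questions about animals",
            "prompt_template": "You are an expert on animals.\n\n{input}",
        }
    ],
)

# Attributes defined on the MultiRouteChain base class:
print(type(chain.router_chain).__name__)   # LLM router that names a destination
print(list(chain.destination_chains))      # ["animals"]
print(type(chain.default_chain).__name__)  # fallback when no destination matches

# Overridden on MultiPromptChain:
print(chain.output_keys)  # expected: ["text"]
```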