from_retrievers() — langchain Function Reference
Architecture documentation for the from_retrievers() function in multi_retrieval_qa.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    6adee4b9_b989_5078_d057_fce159e870b7["from_retrievers()"]
    990cc7ef_647a_c860_f824_1bcc6abd9bc2["MultiRetrievalQAChain"]
    6adee4b9_b989_5078_d057_fce159e870b7 -->|defined in| 990cc7ef_647a_c860_f824_1bcc6abd9bc2
    style 6adee4b9_b989_5078_d057_fce159e870b7 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain/langchain_classic/chains/router/multi_retrieval_qa.py lines 47–134
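The listing below refers to several module-level names (MULTI_RETRIEVAL_ROUTER_TEMPLATE, DEFAULT_TEMPLATE, RouterOutputParser, LLMRouterChain, RetrievalQA, ConversationChain). A sketch of the likely import block follows; the langchain_classic paths are inferred from the file location above and from the older langchain.chains layout, and are not verified against the exact file:

# Inferred imports (assumption: paths mirror the chains/ package layout of the file above).
from typing import Any

from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompts import PromptTemplate
from langchain_core.retrievers import BaseRetriever

from langchain_classic.chains import ConversationChain
from langchain_classic.chains.base import Chain
from langchain_classic.chains.conversation.prompt import DEFAULT_TEMPLATE
from langchain_classic.chains.retrieval_qa.base import RetrievalQA
from langchain_classic.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain_classic.chains.router.multi_retrieval_prompt import MULTI_RETRIEVAL_ROUTER_TEMPLATE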
def from_retrievers(
    cls,
    llm: BaseLanguageModel,
    retriever_infos: list[dict[str, Any]],
    default_retriever: BaseRetriever | None = None,
    default_prompt: PromptTemplate | None = None,
    default_chain: Chain | None = None,
    *,
    default_chain_llm: BaseLanguageModel | None = None,
    **kwargs: Any,
) -> MultiRetrievalQAChain:
    """Create a multi retrieval qa chain from an LLM and a default chain.

    Args:
        llm: The language model to use.
        retriever_infos: Dictionaries containing retriever information.
        default_retriever: Optional default retriever to use if no default chain
            is provided.
        default_prompt: Optional prompt template to use for the default retriever.
        default_chain: Optional default chain to use when router doesn't map input
            to one of the destinations.
        default_chain_llm: Optional language model to use if no default chain and
            no default retriever are provided.
        **kwargs: Additional keyword arguments to pass to the chain.

    Returns:
        An instance of the multi retrieval qa chain.
    """
    if default_prompt and not default_retriever:
        msg = (
            "`default_retriever` must be specified if `default_prompt` is "
            "provided. Received only `default_prompt`."
        )
        raise ValueError(msg)
    destinations = [f"{r['name']}: {r['description']}" for r in retriever_infos]
    destinations_str = "\n".join(destinations)
    router_template = MULTI_RETRIEVAL_ROUTER_TEMPLATE.format(
        destinations=destinations_str,
    )
    router_prompt = PromptTemplate(
        template=router_template,
        input_variables=["input"],
        output_parser=RouterOutputParser(next_inputs_inner_key="query"),
    )
    router_chain = LLMRouterChain.from_llm(llm, router_prompt)
    destination_chains = {}
    for r_info in retriever_infos:
        prompt = r_info.get("prompt")
        retriever = r_info["retriever"]
        chain = RetrievalQA.from_llm(llm, prompt=prompt, retriever=retriever)
        name = r_info["name"]
        destination_chains[name] = chain
    if default_chain:
        _default_chain = default_chain
    elif default_retriever:
        _default_chain = RetrievalQA.from_llm(
            llm,
            prompt=default_prompt,
            retriever=default_retriever,
        )
    else:
        prompt_template = DEFAULT_TEMPLATE.replace("input", "query")
        prompt = PromptTemplate(
            template=prompt_template,
            input_variables=["history", "query"],
        )
        if default_chain_llm is None:
            msg = (
                "conversation_llm must be provided if default_chain is not "
                "specified. This API has been changed to avoid instantiating "
                "default LLMs on behalf of users."
                "You can provide a conversation LLM like so:\n"
                "from langchain_openai import ChatOpenAI\n"
                "model = ChatOpenAI()"
            )
            raise NotImplementedError(msg)
        _default_chain = ConversationChain(
            llm=default_chain_llm,
            prompt=prompt,
            input_key="query",
            output_key="result",
        )
    return cls(
        router_chain=router_chain,
        destination_chains=destination_chains,
        default_chain=_default_chain,
        **kwargs,
    )
Frequently Asked Questions
What does from_retrievers() do?
from_retrievers() is a classmethod on MultiRetrievalQAChain, defined in libs/langchain/langchain_classic/chains/router/multi_retrieval_qa.py. Given an LLM and a list of retriever_infos (each with a name, description, retriever, and optional prompt), it builds an LLMRouterChain that routes an incoming question to one RetrievalQA destination chain per retriever, plus a default chain (an explicit default_chain, a RetrievalQA over default_retriever, or a ConversationChain over default_chain_llm) used when the router cannot map the input to any destination.
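Each retriever_infos entry is a plain dict; the loop in the source reads the keys sketched below (the concrete values here are hypothetical):

# Required keys: "name", "description", "retriever"; "prompt" is optional.
retriever_info = {
    "name": "personal notes",                                   # becomes the destination-chain key
    "description": "Good for questions about personal notes",   # shown to the router LLM
    "retriever": ...,                                            # any BaseRetriever instance
    # "prompt": optional PromptTemplate for this destination's RetrievalQA chain
}

# The router prompt lists destinations as "name: description" lines:
destinations_str = "\n".join(f"{r['name']}: {r['description']}" for r in [retriever_info])
# -> "personal notes: Good for questions about personal notes"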
Where is from_retrievers() defined?
from_retrievers() is defined in libs/langchain/langchain_classic/chains/router/multi_retrieval_qa.py at line 47.