from_llm() — langchain Function Reference
Architecture documentation for the from_llm() function in chain_filter.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    fabf73af_d79e_d974_a75d_4ae7d496f6fe["from_llm()"]
    c18417f6_2ef7_e19d_e7f5_79027ef3dd22["LLMChainFilter"]
    fabf73af_d79e_d974_a75d_4ae7d496f6fe -->|defined in| c18417f6_2ef7_e19d_e7f5_79027ef3dd22
    3bb64d93_abb2_004b_0389_814a82fbb490["_get_default_chain_prompt()"]
    fabf73af_d79e_d974_a75d_4ae7d496f6fe -->|calls| 3bb64d93_abb2_004b_0389_814a82fbb490
    style fabf73af_d79e_d974_a75d_4ae7d496f6fe fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain/langchain_classic/retrievers/document_compressors/chain_filter.py lines 113–135
def from_llm(
    cls,
    llm: BaseLanguageModel,
    prompt: BasePromptTemplate | None = None,
    **kwargs: Any,
) -> "LLMChainFilter":
    """Create a LLMChainFilter from a language model.

    Args:
        llm: The language model to use for filtering.
        prompt: The prompt to use for the filter.
        kwargs: Additional arguments to pass to the constructor.

    Returns:
        A LLMChainFilter that uses the given language model.
    """
    _prompt = prompt if prompt is not None else _get_default_chain_prompt()
    if _prompt.output_parser is not None:
        parser = _prompt.output_parser
    else:
        parser = StrOutputParser()
    llm_chain = _prompt | llm | parser
    return cls(llm_chain=llm_chain, **kwargs)
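The parser-selection fallback above — prefer the prompt's own output parser when it defines one, otherwise default to a plain string parser — can be sketched without LangChain installed. All classes below are stand-ins for the real LangChain types, not the actual API:

```python
# Stand-in parsers: StrParser mimics StrOutputParser's pass-through behavior.
class StrParser:
    def parse(self, text: str) -> str:
        return text


class UpperParser:
    """A custom parser a prompt might carry, used here only for illustration."""

    def parse(self, text: str) -> str:
        return text.upper()


class Prompt:
    """Stand-in for a prompt template with an optional output_parser attribute."""

    def __init__(self, output_parser=None):
        self.output_parser = output_parser


def pick_parser(prompt):
    """Mirror from_llm(): use the prompt's parser if set, else a string parser."""
    if prompt.output_parser is not None:
        return prompt.output_parser
    return StrParser()


# A prompt without a parser falls back to the string parser.
print(type(pick_parser(Prompt())).__name__)              # StrParser
# A prompt carrying its own parser keeps it.
print(type(pick_parser(Prompt(UpperParser()))).__name__)  # UpperParser
```

This matters because the default prompt returned by `_get_default_chain_prompt()` ships with its own parser, so the `StrOutputParser` branch is only taken for custom prompts that omit one.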
Frequently Asked Questions
What does from_llm() do?
from_llm() is a classmethod constructor that builds an LLMChainFilter, a document compressor that uses a language model to decide which retrieved documents to keep, from the given LLM and an optional prompt. It is defined in libs/langchain/langchain_classic/retrievers/document_compressors/chain_filter.py.
Where is from_llm() defined?
from_llm() is defined in libs/langchain/langchain_classic/retrievers/document_compressors/chain_filter.py at line 113.
What does from_llm() call?
from_llm() calls one function: _get_default_chain_prompt(), which supplies the default prompt when none is passed in.
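The chain that from_llm() composes is invoked per document to decide whether to keep it. A dependency-free sketch of that filtering loop is below; the YES/NO convention mirrors LangChain's default chain-filter prompt, and every class and function here is a hypothetical stand-in, not the real API:

```python
class StubLLM:
    """Stand-in LLM: judges relevance by whether the query appears in the doc."""

    def invoke(self, prompt_text: str) -> str:
        question, _, context = prompt_text.partition("\n")
        return "YES" if question.lower() in context.lower() else "NO"


class BooleanParser:
    """Maps a YES/NO completion to a bool, like a boolean output parser."""

    def parse(self, text: str) -> bool:
        return text.strip().upper() == "YES"


def filter_docs(llm, parser, query, docs):
    """Keep only the documents the LLM judges relevant to the query."""
    kept = []
    for doc in docs:
        completion = llm.invoke(f"{query}\n---\n{doc}")
        if parser.parse(completion):
            kept.append(doc)
    return kept


docs = ["python is a language", "bread recipe"]
print(filter_docs(StubLLM(), BooleanParser(), "python", docs))
# ['python is a language']
```

The real LLMChainFilter does the same thing with `prompt | llm | parser` as the per-document chain: irrelevant documents are dropped rather than rewritten, which is what distinguishes it from compressors that summarize.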