from_llm() — langchain Function Reference
Architecture documentation for the from_llm() function in chain_extract.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    3ded29e8_4b66_b09b_5821_98f6a1d72ec1["from_llm()"]
    d6c70f63_58ba_abd9_fdbd_d94844756192["LLMChainExtractor"]
    3ded29e8_4b66_b09b_5821_98f6a1d72ec1 -->|defined in| d6c70f63_58ba_abd9_fdbd_d94844756192
    d0fa9aff_4084_869a_579a_5f2349477ea4["_get_default_chain_prompt()"]
    3ded29e8_4b66_b09b_5821_98f6a1d72ec1 -->|calls| d0fa9aff_4084_869a_579a_5f2349477ea4
    style 3ded29e8_4b66_b09b_5821_98f6a1d72ec1 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain/langchain_classic/retrievers/document_compressors/chain_extract.py lines 111–126
def from_llm(
    cls,
    llm: BaseLanguageModel,
    prompt: PromptTemplate | None = None,
    get_input: Callable[[str, Document], str] | None = None,
    llm_chain_kwargs: dict | None = None,  # noqa: ARG003
) -> LLMChainExtractor:
    """Initialize from LLM."""
    _prompt = prompt if prompt is not None else _get_default_chain_prompt()
    _get_input = get_input if get_input is not None else default_get_input
    if _prompt.output_parser is not None:
        parser = _prompt.output_parser
    else:
        parser = StrOutputParser()
    llm_chain = _prompt | llm | parser
    return cls(llm_chain=llm_chain, get_input=_get_input)
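The final lines compose the chain with LCEL's pipe operator: `_prompt | llm | parser` produces a single runnable whose steps execute left to right. A minimal, self-contained sketch of that composition pattern (illustrative stand-in classes, not langchain's actual Runnable implementation) might look like:

```python
# Sketch of pipe-style composition, mimicking how `_prompt | llm | parser`
# builds one invocable chain. These classes are illustrative stand-ins.

class Runnable:
    def __or__(self, other):
        # `a | b` yields a new step that runs a, then b.
        return Pipe(self, other)

    def invoke(self, value):
        raise NotImplementedError


class Pipe(Runnable):
    def __init__(self, first, second):
        self.first, self.second = first, second

    def invoke(self, value):
        # Feed the first step's output into the second step.
        return self.second.invoke(self.first.invoke(value))


class Prompt(Runnable):
    def __init__(self, template):
        self.template = template

    def invoke(self, variables):
        # Fill the template from a dict of input variables.
        return self.template.format(**variables)


class EchoLLM(Runnable):
    def invoke(self, text):
        # Stand-in for a real model call.
        return f"LLM saw: {text}"


class StrParser(Runnable):
    def invoke(self, text):
        # Stand-in for StrOutputParser: normalize the raw output.
        return text.strip()


chain = Prompt("Q: {question}") | EchoLLM() | StrParser()
print(chain.invoke({"question": "what is LCEL?"}))
# -> LLM saw: Q: what is LCEL?
```

Because each `|` returns another Runnable, the composed chain can itself be passed to `cls(llm_chain=...)` and invoked later as one unit, which is exactly the shape `from_llm()` relies on.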
Frequently Asked Questions
What does from_llm() do?
from_llm() is a classmethod constructor on LLMChainExtractor, defined in libs/langchain/langchain_classic/retrievers/document_compressors/chain_extract.py. It builds the extractor's compression chain: it resolves the prompt (falling back to _get_default_chain_prompt() when none is supplied), chooses an output parser (the prompt's own output_parser if set, otherwise StrOutputParser), composes prompt | llm | parser into a single runnable, and returns an LLMChainExtractor wired with that chain and a get_input function.
Where is from_llm() defined?
from_llm() is defined in libs/langchain/langchain_classic/retrievers/document_compressors/chain_extract.py at line 111.
What does from_llm() call?
from_llm() calls one function: _get_default_chain_prompt(), which supplies the default extraction prompt when no prompt argument is provided.
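The get_input hook that from_llm() falls back to (default_get_input) maps a query and a Document to the prompt's input variables. A hedged sketch of what such a default could look like (the variable names "question" and "context", and the minimal Document stand-in, are assumptions for illustration; the real default_get_input lives in the same module and may differ):

```python
from dataclasses import dataclass


@dataclass
class Document:
    # Minimal stand-in for langchain's Document type (illustration only).
    page_content: str


def default_get_input(query: str, doc: Document) -> dict:
    # Hypothetical sketch: expose the user query and the document text
    # under the variable names a compression prompt would expect.
    return {"question": query, "context": doc.page_content}


inputs = default_get_input(
    "what is LCEL?",
    Document(page_content="LCEL composes runnables."),
)
print(inputs)
# -> {'question': 'what is LCEL?', 'context': 'LCEL composes runnables.'}
```

Passing a custom get_input to from_llm() lets callers rename these variables to match a custom prompt's placeholders.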