from_llm() — langchain Function Reference
Architecture documentation for the from_llm() method of FlareChain, defined in base.py in the langchain codebase.
Dependency Diagram
graph TD
    37f412df_56f8_5f88_6665_6688ab4d38b7["from_llm()"]
    c827daf0_9d8e_4865_a678_af8144586d0e["FlareChain"]
    37f412df_56f8_5f88_6665_6688ab4d38b7 -->|defined in| c827daf0_9d8e_4865_a678_af8144586d0e
    style 37f412df_56f8_5f88_6665_6688ab4d38b7 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain/langchain_classic/chains/flare/base.py lines 250–311
@classmethod
def from_llm(
    cls,
    llm: BaseLanguageModel | None,
    max_generation_len: int = 32,
    **kwargs: Any,
) -> FlareChain:
    """Create a FlareChain from a language model.

    Args:
        llm: Language model to use.
        max_generation_len: Maximum length of the generated response.
        kwargs: Additional arguments to pass to the constructor.

    Returns:
        FlareChain class with the given language model.
    """
    try:
        from langchain_openai import ChatOpenAI
    except ImportError as e:
        msg = (
            "OpenAI is required for FlareChain. "
            "Please install it with `pip install langchain-openai`."
        )
        raise ImportError(msg) from e
    # Preserve the supplied llm instead of always creating a new ChatOpenAI.
    # Enforce the ChatOpenAI requirement (token logprobs are needed for FLARE).
    if llm is None:
        llm = ChatOpenAI(
            max_completion_tokens=max_generation_len,
            logprobs=True,
            temperature=0,
        )
    else:
        if not isinstance(llm, ChatOpenAI):
            msg = (
                f"FlareChain.from_llm requires ChatOpenAI; got "
                f"{type(llm).__name__}."
            )
            raise TypeError(msg)
        if not getattr(llm, "logprobs", False):  # attribute presence may vary
            msg = (
                "Provided ChatOpenAI instance must be constructed with "
                "logprobs=True for FlareChain."
            )
            raise ValueError(msg)
        current_max = getattr(llm, "max_completion_tokens", None)
        if current_max is not None and current_max != max_generation_len:
            logger.debug(
                "FlareChain.from_llm: supplied llm max_completion_tokens=%s "
                "differs from requested max_generation_len=%s; "
                "leaving model unchanged.",
                current_max,
                max_generation_len,
            )
    response_chain = PROMPT | llm
    question_gen_chain = QUESTION_GENERATOR_PROMPT | llm | StrOutputParser()
    return cls(
        question_generator_chain=question_gen_chain,
        response_chain=response_chain,
        **kwargs,
    )
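A minimal usage sketch based on the listing above. The import path is inferred from the file location, the model name is illustrative, and any extra constructor fields FlareChain may require (for example a retriever) are represented only by a placeholder dict, since the class definition is not shown here.

from langchain_openai import ChatOpenAI
from langchain_classic.chains.flare.base import FlareChain  # path inferred from the listing

# Any FlareChain constructor fields beyond the two chains built by from_llm()
# (e.g. a retriever, if this version requires one) are passed through **kwargs.
extra_fields: dict = {}

# Path 1: no llm supplied -> from_llm() builds a ChatOpenAI with logprobs=True,
# temperature=0, and max_completion_tokens set to max_generation_len.
chain = FlareChain.from_llm(llm=None, max_generation_len=64, **extra_fields)

# Path 2: a pre-configured ChatOpenAI is preserved as-is; it must have been
# constructed with logprobs=True, otherwise from_llm() raises ValueError.
llm = ChatOpenAI(model="gpt-4o-mini", logprobs=True, temperature=0)
chain = FlareChain.from_llm(llm=llm, max_generation_len=64, **extra_fields)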
Frequently Asked Questions
What does from_llm() do?
from_llm() is a classmethod factory on FlareChain, defined in libs/langchain/langchain_classic/chains/flare/base.py. It ensures a ChatOpenAI model is available: when llm is None it builds one with logprobs=True, temperature=0, and max_completion_tokens equal to max_generation_len; otherwise it validates that the supplied model is a ChatOpenAI constructed with logprobs=True. It then wires the model into the response and question-generation chains and returns a FlareChain built from those chains plus any extra keyword arguments.
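The guards in the listing reject models that cannot supply token logprobs. A minimal sketch of the two rejection paths, under the same inferred import path as above (FakeListChatModel is used only as a stand-in for an arbitrary non-OpenAI model; its availability in langchain_core is an assumption):

from langchain_core.language_models import FakeListChatModel  # assumed stand-in model
from langchain_openai import ChatOpenAI
from langchain_classic.chains.flare.base import FlareChain  # path inferred, as above

try:
    # Not a ChatOpenAI instance -> TypeError from the isinstance() guard.
    FlareChain.from_llm(llm=FakeListChatModel(responses=["hi"]))
except TypeError as err:
    print(err)

try:
    # ChatOpenAI built without logprobs=True -> ValueError from the logprobs guard.
    # Constructing ChatOpenAI assumes OPENAI_API_KEY is set in the environment.
    FlareChain.from_llm(llm=ChatOpenAI(temperature=0))
except ValueError as err:
    print(err)

Both checks fire before the FlareChain constructor is called, so no additional keyword arguments are needed to exercise them.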
Where is from_llm() defined?
from_llm() is defined in libs/langchain/langchain_classic/chains/flare/base.py at line 250.