from_llm() — langchain Function Reference
Architecture documentation for the from_llm() function in eval_chain.py from the langchain codebase.
Dependency Diagram
graph TD
    A["from_llm()"]
    B["LabeledCriteriaEvalChain"]
    C["from_llm()"]
    D["_resolve_prompt()"]
    E["resolve_criteria()"]
    A -->|defined in| B
    C -->|calls| A
    A -->|calls| C
    A -->|calls| D
    A -->|calls| E
    style A fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain/langchain_classic/evaluation/criteria/eval_chain.py lines 537–593
def from_llm(
    cls,
    llm: BaseLanguageModel,
    criteria: CRITERIA_TYPE | None = None,
    *,
    prompt: BasePromptTemplate | None = None,
    **kwargs: Any,
) -> CriteriaEvalChain:
    """Create a `LabeledCriteriaEvalChain` instance from an llm and criteria.

    Parameters
    ----------
    llm : BaseLanguageModel
        The language model to use for evaluation.
    criteria : CRITERIA_TYPE, default=None (uses "helpfulness")
        The criteria to evaluate the runs against. It can be:
        - a mapping of a criterion name to its description
        - a single criterion name present in one of the default criteria
        - a single `ConstitutionalPrinciple` instance
    prompt : Optional[BasePromptTemplate], default=None
        The prompt template to use for generating prompts. If not provided,
        a default prompt will be used.
    **kwargs : Any
        Additional keyword arguments to pass to the `LLMChain`
        constructor.

    Returns
    -------
    LabeledCriteriaEvalChain
        An instance of the `LabeledCriteriaEvalChain` class.

    Examples
    --------
    >>> from langchain_openai import OpenAI
    >>> from langchain_classic.evaluation.criteria import LabeledCriteriaEvalChain
    >>> model = OpenAI()
    >>> criteria = {
    ...     "hallucination": (
    ...         "Does this submission contain information"
    ...         " not present in the input or reference?"
    ...     ),
    ... }
    >>> chain = LabeledCriteriaEvalChain.from_llm(
    ...     llm=model,
    ...     criteria=criteria,
    ... )
    """
    prompt = cls._resolve_prompt(prompt)
    criteria_ = cls.resolve_criteria(criteria)
    criteria_str = "\n".join(f"{k}: {v}" for k, v in criteria_.items())
    prompt_ = prompt.partial(criteria=criteria_str)
    return cls(
        llm=llm,
        prompt=prompt_,
        criterion_name="-".join(criteria_),
        **kwargs,
    )
Frequently Asked Questions
What does from_llm() do?
from_llm() is a classmethod factory in the langchain codebase, defined in libs/langchain/langchain_classic/evaluation/criteria/eval_chain.py. It builds a LabeledCriteriaEvalChain from a language model and a set of evaluation criteria: it resolves the prompt and criteria, formats the criteria into the prompt, and constructs the chain.
Where is from_llm() defined?
from_llm() is defined in libs/langchain/langchain_classic/evaluation/criteria/eval_chain.py at line 537.
What does from_llm() call?
from_llm() calls three functions: _resolve_prompt(), from_llm(), and resolve_criteria().
What calls from_llm()?
from_llm() is called by one function: from_llm().