
from_llm() — langchain Function Reference

Architecture documentation for the from_llm() function in eval_chain.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  a41998df_2f6e_21cb_6927_203388623323["from_llm()"]
  e476a8f9_5ced_15c1_3631_8a65948b94ed["CriteriaEvalChain"]
  a41998df_2f6e_21cb_6927_203388623323 -->|defined in| e476a8f9_5ced_15c1_3631_8a65948b94ed
  52373e3f_ed16_e930_78b1_4cfd309eb9b6["from_llm()"]
  52373e3f_ed16_e930_78b1_4cfd309eb9b6 -->|calls| a41998df_2f6e_21cb_6927_203388623323
  dfe00c3b_61a4_4940_d85c_397366806f14["_evaluate_strings()"]
  dfe00c3b_61a4_4940_d85c_397366806f14 -->|calls| a41998df_2f6e_21cb_6927_203388623323
  375de358_d0b8_d70d_d3fd_da353f95ec25["_aevaluate_strings()"]
  375de358_d0b8_d70d_d3fd_da353f95ec25 -->|calls| a41998df_2f6e_21cb_6927_203388623323
  a41998df_2f6e_21cb_6927_203388623323 -->|calls| 52373e3f_ed16_e930_78b1_4cfd309eb9b6
  b7bf8076_06ca_7d01_49db_01bbd3e75d24["_resolve_prompt()"]
  a41998df_2f6e_21cb_6927_203388623323 -->|calls| b7bf8076_06ca_7d01_49db_01bbd3e75d24
  a9d899da_d5d0_329a_f04c_073673c2d5f9["resolve_criteria()"]
  a41998df_2f6e_21cb_6927_203388623323 -->|calls| a9d899da_d5d0_329a_f04c_073673c2d5f9
  style a41998df_2f6e_21cb_6927_203388623323 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain/langchain_classic/evaluation/criteria/eval_chain.py lines 315–379

    def from_llm(
        cls,
        llm: BaseLanguageModel,
        criteria: CRITERIA_TYPE | None = None,
        *,
        prompt: BasePromptTemplate | None = None,
        **kwargs: Any,
    ) -> CriteriaEvalChain:
        """Create a `CriteriaEvalChain` instance from an llm and criteria.

        Parameters
        ----------
        llm : BaseLanguageModel
            The language model to use for evaluation.
        criteria : CRITERIA_TYPE | None, default=None
            The criteria to evaluate the runs against; defaults to the
            "helpfulness" criterion when not provided. It can be:
                -  a mapping of a criterion name to its description
                -  a single criterion name present in one of the default criteria
                -  a single `ConstitutionalPrinciple` instance
        prompt : Optional[BasePromptTemplate], default=None
            The prompt template to use for generating prompts. If not provided,
            a default prompt template will be used.
        **kwargs : Any
            Additional keyword arguments to pass to the `LLMChain`
            constructor.

        Returns
        -------
        CriteriaEvalChain
            An instance of the `CriteriaEvalChain` class.

        Examples
        --------
        >>> from langchain_openai import OpenAI
        >>> from langchain_classic.evaluation.criteria import LabeledCriteriaEvalChain
        >>> model = OpenAI()
        >>> criteria = {
        ...     "hallucination": (
        ...         "Does this submission contain information"
        ...         " not present in the input or reference?"
        ...     ),
        ... }
        >>> chain = LabeledCriteriaEvalChain.from_llm(
        ...     llm=model,
        ...     criteria=criteria,
        ... )
        """
        prompt_ = cls._resolve_prompt(prompt)
        if criteria == Criteria.CORRECTNESS:
            msg = (
                "Correctness should not be used in the reference-free"
                " 'criteria' evaluator (CriteriaEvalChain)."
                " Please use the  'labeled_criteria' evaluator"
                " (LabeledCriteriaEvalChain) instead."
            )
            raise ValueError(msg)
        criteria_ = cls.resolve_criteria(criteria)
        criteria_str = "\n".join(f"{k}: {v}" for k, v in criteria_.items())
        prompt_ = prompt_.partial(criteria=criteria_str)
        return cls(
            llm=llm,
            prompt=prompt_,
            criterion_name="-".join(criteria_),
            **kwargs,
        )

Frequently Asked Questions

What does from_llm() do?
from_llm() is a classmethod-style constructor on CriteriaEvalChain that builds an evaluation chain from a language model and evaluation criteria: it resolves the prompt, formats the criteria into the prompt's `criteria` variable, and returns a configured chain. It is defined in libs/langchain/langchain_classic/evaluation/criteria/eval_chain.py.
Where is from_llm() defined?
from_llm() is defined in libs/langchain/langchain_classic/evaluation/criteria/eval_chain.py at line 315.
What does from_llm() call?
from_llm() calls three functions: _resolve_prompt, from_llm, and resolve_criteria.
What calls from_llm()?
from_llm() is called by three functions: _aevaluate_strings, _evaluate_strings, and from_llm.
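The dependency diagram and FAQ both show from_llm() calling, and being called by, another from_llm(). In the langchain source this shape typically arises when a subclass's from_llm() override delegates to the base-class implementation via super(). The class names below are illustrative only, not the real hierarchy:

```python
# Illustrative sketch of a subclass constructor delegating to the base
# class's from_llm(), which is how a call graph can legitimately show a
# from_llm() -> from_llm() edge. Class names are hypothetical.
class BaseEvalChain:
    def __init__(self, name: str) -> None:
        self.name = name

    @classmethod
    def from_llm(cls, name: str) -> "BaseEvalChain":
        # cls is the *calling* class, so the subclass instance is returned.
        return cls(name=name)


class LabeledEvalChain(BaseEvalChain):
    @classmethod
    def from_llm(cls, name: str) -> "BaseEvalChain":
        # Override does subclass-specific setup, then defers upward.
        return super().from_llm(name=name.upper())


chain = LabeledEvalChain.from_llm("criteria")
print(type(chain).__name__, chain.name)  # LabeledEvalChain CRITERIA
```

Because `cls` is bound to the subclass at call time, the base-class from_llm() still constructs a LabeledEvalChain instance.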
