
CriteriaEvalChain Class — langchain Architecture

Architecture documentation for the CriteriaEvalChain class in eval_chain.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  3fc0512a_701d_ac5f_f0e5_cf130ce672d7["CriteriaEvalChain"]
  42f35457_68a1_961e_1ac4_cbaa4a2b48b3["StringEvaluator"]
  3fc0512a_701d_ac5f_f0e5_cf130ce672d7 -->|extends| 42f35457_68a1_961e_1ac4_cbaa4a2b48b3
  649622c5_b1b0_2ee7_22ee_c9c12162f9c3["LLMEvalChain"]
  3fc0512a_701d_ac5f_f0e5_cf130ce672d7 -->|extends| 649622c5_b1b0_2ee7_22ee_c9c12162f9c3
  8d3a235d_a08f_2979_f52a_1772067dd1d3["LLMChain"]
  3fc0512a_701d_ac5f_f0e5_cf130ce672d7 -->|extends| 8d3a235d_a08f_2979_f52a_1772067dd1d3
  9aaa1120_6e40_8d2f_2735_1d75cd6580a9["eval_chain.py"]
  3fc0512a_701d_ac5f_f0e5_cf130ce672d7 -->|defined in| 9aaa1120_6e40_8d2f_2735_1d75cd6580a9
  6ac3892b_98b0_6970_7d44_233c9ed4b2fa["is_lc_serializable()"]
  3fc0512a_701d_ac5f_f0e5_cf130ce672d7 -->|method| 6ac3892b_98b0_6970_7d44_233c9ed4b2fa
  26f8a5c5_659c_2bc5_c5e7_c7b2dc79fe3c["requires_reference()"]
  3fc0512a_701d_ac5f_f0e5_cf130ce672d7 -->|method| 26f8a5c5_659c_2bc5_c5e7_c7b2dc79fe3c
  fe4ecde4_9bd4_7eec_b196_8bc0d798862f["requires_input()"]
  3fc0512a_701d_ac5f_f0e5_cf130ce672d7 -->|method| fe4ecde4_9bd4_7eec_b196_8bc0d798862f
  b0761458_956d_501e_9449_bd7f2d618a3d["evaluation_name()"]
  3fc0512a_701d_ac5f_f0e5_cf130ce672d7 -->|method| b0761458_956d_501e_9449_bd7f2d618a3d
  37107d72_9c94_1209_00da_4f2965ed6248["_skip_reference_warning()"]
  3fc0512a_701d_ac5f_f0e5_cf130ce672d7 -->|method| 37107d72_9c94_1209_00da_4f2965ed6248
  0a27bb4c_51f5_6c4d_e65f_7bf7e7eefd04["_resolve_prompt()"]
  3fc0512a_701d_ac5f_f0e5_cf130ce672d7 -->|method| 0a27bb4c_51f5_6c4d_e65f_7bf7e7eefd04
  fef41b3c_6e40_3f44_ea81_461703947f9b["resolve_criteria()"]
  3fc0512a_701d_ac5f_f0e5_cf130ce672d7 -->|method| fef41b3c_6e40_3f44_ea81_461703947f9b
  f73a4f2e_d6cb_1d1d_5fc7_b61e3d6644c6["from_llm()"]
  3fc0512a_701d_ac5f_f0e5_cf130ce672d7 -->|method| f73a4f2e_d6cb_1d1d_5fc7_b61e3d6644c6
  34adaba8_c87b_7ed8_e7a3_aacef28f0f86["_get_eval_input()"]
  3fc0512a_701d_ac5f_f0e5_cf130ce672d7 -->|method| 34adaba8_c87b_7ed8_e7a3_aacef28f0f86
  cc769c35_0aa8_eca9_a1bf_b58802d923eb["_prepare_output()"]
  3fc0512a_701d_ac5f_f0e5_cf130ce672d7 -->|method| cc769c35_0aa8_eca9_a1bf_b58802d923eb


Source Code

libs/langchain/langchain_classic/evaluation/criteria/eval_chain.py lines 162–505

class CriteriaEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
    r"""LLM Chain for evaluating runs against criteria.

    Parameters
    ----------
    llm : BaseLanguageModel
        The language model to use for evaluation.
    criteria : Union[Mapping[str, str]]
        The criteria or rubric to evaluate the runs against. It can be a mapping of
        criterion name to its description, or a single criterion name.
    prompt : Optional[BasePromptTemplate], default=None
        The prompt template to use for generating prompts. If not provided, a
        default prompt template will be used based on the value of
        `requires_reference`.
    requires_reference : bool, default=False
        Whether the evaluation requires a reference text. If `True`, the
        `PROMPT_WITH_REFERENCES` template will be used, which includes the
        reference labels in the prompt. Otherwise, the `PROMPT` template will be
        used, which is a reference-free prompt.
    **kwargs : Any
        Additional keyword arguments to pass to the `LLMChain` constructor.

    Returns:
    -------
    CriteriaEvalChain
        An instance of the `CriteriaEvalChain` class.

    Examples:
    --------
    >>> from langchain_anthropic import ChatAnthropic
    >>> from langchain_classic.evaluation.criteria import CriteriaEvalChain
    >>> model = ChatAnthropic(temperature=0)
    >>> criteria = {"my-custom-criterion": "Is the submission the most amazing ever?"}
    >>> evaluator = CriteriaEvalChain.from_llm(llm=model, criteria=criteria)
    >>> evaluator.evaluate_strings(
    ...     prediction="Imagine an ice cream flavor for the color aquamarine",
    ...     input="Tell me an idea",
    ... )
    {
        'reasoning': 'Here is my step-by-step reasoning for the given criteria:\n\nThe criterion is: "Is the submission the most amazing ever?" This is a subjective criterion and open to interpretation. The submission suggests an aquamarine-colored ice cream flavor which is creative but may or may not be considered the most amazing idea ever conceived. There are many possible amazing ideas and this one ice cream flavor suggestion may or may not rise to that level for every person. \n\nN',
        'value': 'N',
        'score': 0,
    }

    >>> from langchain_openai import ChatOpenAI
    >>> from langchain_classic.evaluation.criteria import LabeledCriteriaEvalChain
    >>> model = ChatOpenAI(model="gpt-4", temperature=0)
    >>> criteria = "correctness"
    >>> evaluator = LabeledCriteriaEvalChain.from_llm(
    ...     llm=model,
    ...     criteria=criteria,
    ... )
    >>> evaluator.evaluate_strings(
    ...     prediction="The answer is 4",
    ...     input="How many apples are there?",
    ...     reference="There are 3 apples",
    ... )
    {
        'score': 0,
        'reasoning': 'The criterion for this task is the correctness of the submission. The submission states that there are 4 apples, but the reference indicates that there are actually 3 apples. Therefore, the submission is not correct, accurate, or factual according to the given criterion.\n\nN',
        'value': 'N',
    }

    """  # noqa: E501

    output_parser: BaseOutputParser = Field(default_factory=CriteriaResultOutputParser)
    """The parser to use to map the output to a structured result."""
    criterion_name: str
    """The name of the criterion being evaluated."""
    output_key: str = "results"

    @classmethod
    @override
    def is_lc_serializable(cls) -> bool:
        return False

    model_config = ConfigDict(
        extra="ignore",
    )

    # ... excerpt truncated; the class definition continues through line 505 of eval_chain.py
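
Two of the methods listed in the diagram, resolve_criteria() and from_llm(), turn a criterion name, a {name: description} mapping, or a built-in Criteria value into the rubric that is injected into the prompt. The sketch below exercises resolve_criteria() on its own, which requires no LLM; it assumes the langchain_classic import path used elsewhere on this page, and the example output is illustrative rather than quoted from the source.

# Sketch: resolving criteria without running any model (assumes langchain_classic).
from langchain_classic.evaluation.criteria import CriteriaEvalChain

# A bare string is treated as a built-in criterion name and expanded to
# a {name: description} mapping.
print(CriteriaEvalChain.resolve_criteria("conciseness"))
# e.g. {'conciseness': 'Is the submission concise and to the point?'}

# A mapping is passed through as-is, so custom rubrics keep their wording.
custom = {"my-custom-criterion": "Is the submission the most amazing ever?"}
print(CriteriaEvalChain.resolve_criteria(custom))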

Frequently Asked Questions

What is the CriteriaEvalChain class?
CriteriaEvalChain is an LLM chain for evaluating text predictions against a set of criteria (custom or built-in). It combines the StringEvaluator interface with LLMChain execution and is defined in libs/langchain/langchain_classic/evaluation/criteria/eval_chain.py.
Where is CriteriaEvalChain defined?
CriteriaEvalChain is defined in libs/langchain/langchain_classic/evaluation/criteria/eval_chain.py at line 162.
What does CriteriaEvalChain extend?
CriteriaEvalChain extends StringEvaluator, LLMEvalChain, LLMChain.
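
Because it implements the StringEvaluator interface, CriteriaEvalChain can also be obtained through the generic evaluator loader rather than constructed directly. The sketch below is a hedged illustration: it assumes that langchain_classic.evaluation exposes the load_evaluator/EvaluatorType helpers and otherwise reuses the imports shown earlier on this page.

# Sketch: loading the same evaluator through the generic interface
# (assumes langchain_classic.evaluation exposes load_evaluator and EvaluatorType,
# and that a chat model such as ChatOpenAI is available).
from langchain_classic.evaluation import EvaluatorType, load_evaluator
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)

# EvaluatorType.CRITERIA maps to CriteriaEvalChain; EvaluatorType.LABELED_CRITERIA
# maps to LabeledCriteriaEvalChain, which additionally requires a reference.
evaluator = load_evaluator(EvaluatorType.CRITERIA, llm=llm, criteria="relevance")
print(type(evaluator).__name__)  # expected: CriteriaEvalChain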
