
eval_chain.py — langchain Source File

Architecture documentation for eval_chain.py, a Python file in the langchain codebase. 13 imports, 0 dependents.

Entity Profile

  • Type: File (Python)
  • Domain: CoreAbstractions
  • Subdomain: RunnableInterface
  • Imports: 13
  • Functions: 2
  • Classes: 3

Dependency Diagram

graph LR
  2b85c9f2_3b3e_8497_58e2_8cb5e7dceb45["eval_chain.py"]
  67ec3255_645e_8b6e_1eff_1eb3c648ed95["re"]
  2b85c9f2_3b3e_8497_58e2_8cb5e7dceb45 --> 67ec3255_645e_8b6e_1eff_1eb3c648ed95
  06ab3965_70ce_6e2c_feb9_564d849aa5f4["string"]
  2b85c9f2_3b3e_8497_58e2_8cb5e7dceb45 --> 06ab3965_70ce_6e2c_feb9_564d849aa5f4
  cfe2bde5_180e_e3b0_df2b_55b3ebaca8e7["collections.abc"]
  2b85c9f2_3b3e_8497_58e2_8cb5e7dceb45 --> cfe2bde5_180e_e3b0_df2b_55b3ebaca8e7
  8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3["typing"]
  2b85c9f2_3b3e_8497_58e2_8cb5e7dceb45 --> 8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3
  f3bc7443_c889_119d_0744_aacc3620d8d2["langchain_core.callbacks"]
  2b85c9f2_3b3e_8497_58e2_8cb5e7dceb45 --> f3bc7443_c889_119d_0744_aacc3620d8d2
  ba43b74d_3099_7e1c_aac3_cf594720469e["langchain_core.language_models"]
  2b85c9f2_3b3e_8497_58e2_8cb5e7dceb45 --> ba43b74d_3099_7e1c_aac3_cf594720469e
  e6b4f61e_7b98_6666_3641_26b069517d4a["langchain_core.prompts"]
  2b85c9f2_3b3e_8497_58e2_8cb5e7dceb45 --> e6b4f61e_7b98_6666_3641_26b069517d4a
  6e58aaea_f08e_c099_3cc7_f9567bfb1ae7["pydantic"]
  2b85c9f2_3b3e_8497_58e2_8cb5e7dceb45 --> 6e58aaea_f08e_c099_3cc7_f9567bfb1ae7
  91721f45_4909_e489_8c1f_084f8bd87145["typing_extensions"]
  2b85c9f2_3b3e_8497_58e2_8cb5e7dceb45 --> 91721f45_4909_e489_8c1f_084f8bd87145
  31974615_0d58_bd26_13f1_776e0a9d1413["langchain_classic.chains.llm"]
  2b85c9f2_3b3e_8497_58e2_8cb5e7dceb45 --> 31974615_0d58_bd26_13f1_776e0a9d1413
  96545da3_f918_c086_5795_5effaea6ff97["langchain_classic.evaluation.qa.eval_prompt"]
  2b85c9f2_3b3e_8497_58e2_8cb5e7dceb45 --> 96545da3_f918_c086_5795_5effaea6ff97
  538b302b_528d_b6e6_cf56_04147780d18b["langchain_classic.evaluation.schema"]
  2b85c9f2_3b3e_8497_58e2_8cb5e7dceb45 --> 538b302b_528d_b6e6_cf56_04147780d18b
  52a02ed3_b44b_45aa_e71c_064994c739be["langchain_classic.schema"]
  2b85c9f2_3b3e_8497_58e2_8cb5e7dceb45 --> 52a02ed3_b44b_45aa_e71c_064994c739be
  style 2b85c9f2_3b3e_8497_58e2_8cb5e7dceb45 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

"""LLM Chains for evaluating question answering."""

from __future__ import annotations

import re
import string
from collections.abc import Sequence
from typing import Any

from langchain_core.callbacks import Callbacks
from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompts import PromptTemplate
from pydantic import ConfigDict
from typing_extensions import override

from langchain_classic.chains.llm import LLMChain
from langchain_classic.evaluation.qa.eval_prompt import (
    CONTEXT_PROMPT,
    COT_PROMPT,
    PROMPT,
)
from langchain_classic.evaluation.schema import LLMEvalChain, StringEvaluator
from langchain_classic.schema import RUN_KEY


def _get_score(text: str) -> tuple[str, int] | None:
    match = re.search(r"grade:\s*(correct|incorrect)", text.strip(), re.IGNORECASE)
    if match:
        if match.group(1).upper() == "CORRECT":
            return "CORRECT", 1
        if match.group(1).upper() == "INCORRECT":
            return "INCORRECT", 0
    try:
        first_word = (
            text.strip().split()[0].translate(str.maketrans("", "", string.punctuation))
        )
        if first_word.upper() == "CORRECT":
            return "CORRECT", 1
        if first_word.upper() == "INCORRECT":
            return "INCORRECT", 0
        last_word = (
            text.strip()
            .split()[-1]
            .translate(str.maketrans("", "", string.punctuation))
        )
        if last_word.upper() == "CORRECT":
            return "CORRECT", 1
        if last_word.upper() == "INCORRECT":
            return "INCORRECT", 0
    except IndexError:
        pass
    return None


def _parse_string_eval_output(text: str) -> dict:
    """Parse the output text.

    Args:
        text: The output text to parse.

# ... (314 more lines)
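
The grade-parsing helper shown above can be exercised directly. The following is a minimal sketch, assuming the module path given in this page's metadata; _get_score is a private helper, so the import is for illustration only.

from langchain_classic.evaluation.qa.eval_chain import _get_score

# The regex path matches an explicit "GRADE: CORRECT" / "GRADE: INCORRECT" verdict.
assert _get_score("The answer matches the reference.\nGRADE: CORRECT") == ("CORRECT", 1)

# The fallback path strips punctuation and checks the first word, then the last word.
assert _get_score("Incorrect. The answer omits the release date.") == ("INCORRECT", 0)
assert _get_score("The student answer matches the reference. CORRECT!") == ("CORRECT", 1)

# Anything else yields None, signalling that no grade could be extracted.
assert _get_score("The grader did not reach a verdict.") is None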

Dependencies

  • collections.abc
  • langchain_classic.chains.llm
  • langchain_classic.evaluation.qa.eval_prompt
  • langchain_classic.evaluation.schema
  • langchain_classic.schema
  • langchain_core.callbacks
  • langchain_core.language_models
  • langchain_core.prompts
  • pydantic
  • re
  • string
  • typing
  • typing_extensions

Frequently Asked Questions

What does eval_chain.py do?
eval_chain.py is a Python source file in the langchain codebase. Per its module docstring, it provides LLM chains for evaluating question answering. It belongs to the CoreAbstractions domain, RunnableInterface subdomain.
What functions are defined in eval_chain.py?
eval_chain.py defines 2 functions: _get_score and _parse_string_eval_output.
What does eval_chain.py depend on?
eval_chain.py imports 13 modules: collections.abc, langchain_classic.chains.llm, langchain_classic.evaluation.qa.eval_prompt, langchain_classic.evaluation.schema, langchain_classic.schema, langchain_core.callbacks, langchain_core.language_models, langchain_core.prompts, pydantic, re, string, typing, and typing_extensions.
Where is eval_chain.py in the architecture?
eval_chain.py is located at libs/langchain/langchain_classic/evaluation/qa/eval_chain.py (domain: CoreAbstractions, subdomain: RunnableInterface, directory: libs/langchain/langchain_classic/evaluation/qa).
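
For orientation, here is a minimal, hedged usage sketch of the evaluation chains this file defines. It assumes the class name QAEvalChain (one of the three classes, not visible in the truncated listing above) and the StringEvaluator.evaluate_strings interface imported at the top of the file; adjust import paths and the model choice to your installed LangChain version.

from langchain_openai import ChatOpenAI
from langchain_classic.evaluation.qa.eval_chain import QAEvalChain

llm = ChatOpenAI(temperature=0)  # any BaseLanguageModel works here
eval_chain = QAEvalChain.from_llm(llm)

# StringEvaluator-style call: grade a prediction against a reference answer.
graded = eval_chain.evaluate_strings(
    input="When was the Eiffel Tower completed?",
    prediction="The Eiffel Tower was completed in 1889.",
    reference="1889",
)

# The result is a dict produced by _parse_string_eval_output; in upstream
# LangChain it typically carries "reasoning", "value" ("CORRECT"/"INCORRECT"),
# and "score" (1 or 0) keys.
print(graded)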
