test_eval_chain.py — langchain Source File

Architecture documentation for test_eval_chain.py, a Python file in the langchain codebase. 4 imports, 0 dependents.

File · Python · LangChainCore · ApiManagement · 4 imports · 3 functions

Entity Profile

Dependency Diagram

graph LR
  4517f9a2_997d_f5a5_81ee_8a1c0c7e427d["test_eval_chain.py"]
  b7996424_637b_0b54_6edf_2e18e9c1a8bf["re"]
  4517f9a2_997d_f5a5_81ee_8a1c0c7e427d --> b7996424_637b_0b54_6edf_2e18e9c1a8bf
  f69d6389_263d_68a4_7fbf_f14c0602a9ba["pytest"]
  4517f9a2_997d_f5a5_81ee_8a1c0c7e427d --> f69d6389_263d_68a4_7fbf_f14c0602a9ba
  b24c58a9_b7e7_0d40_177d_3e6558f24262["langchain_classic.evaluation.scoring.eval_chain"]
  4517f9a2_997d_f5a5_81ee_8a1c0c7e427d --> b24c58a9_b7e7_0d40_177d_3e6558f24262
  55289260_666e_fae6_ddf7_d3d78be29813["tests.unit_tests.llms.fake_llm"]
  4517f9a2_997d_f5a5_81ee_8a1c0c7e427d --> 55289260_666e_fae6_ddf7_d3d78be29813
  style 4517f9a2_997d_f5a5_81ee_8a1c0c7e427d fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

"""Test the scoring chains."""

import re

import pytest

from langchain_classic.evaluation.scoring.eval_chain import (
    LabeledScoreStringEvalChain,
    ScoreStringEvalChain,
    ScoreStringResultOutputParser,
)
from tests.unit_tests.llms.fake_llm import FakeLLM


def test_pairwise_string_result_output_parser_parse() -> None:
    output_parser = ScoreStringResultOutputParser()
    text = """This answer is really good.
Rating: [[10]]"""
    got = output_parser.parse(text)
    want = {
        "reasoning": text,
        "score": 10,
    }
    assert got.get("reasoning") == want["reasoning"]
    assert got.get("score") == want["score"]

    text = """This answer is really good.
Rating: 10"""
    with pytest.raises(
        ValueError, match="Output must contain a double bracketed string"
    ):
        output_parser.parse(text)

    text = """This answer is really good.
Rating: [[0]]"""
    # Rating is not in range [1, 10]
    with pytest.raises(ValueError, match="with the verdict between 1 and 10"):
        output_parser.parse(text)


def test_pairwise_string_comparison_chain() -> None:
    llm = FakeLLM(
        queries={
            "a": "This is a rather good answer. Rating: [[9]]",
            "b": "This is a rather bad answer. Rating: [[1]]",
        },
        sequential_responses=True,
    )
    chain = ScoreStringEvalChain.from_llm(llm=llm)
    res = chain.evaluate_strings(
        prediction="I like pie.",
        input="What is your favorite food?",
    )
    assert res["score"] == 9
    assert res["reasoning"] == "This is a rather good answer. Rating: [[9]]"
    with pytest.warns(UserWarning, match=re.escape(chain._skip_reference_warning)):
        res = chain.evaluate_strings(
            prediction="I like pie.",
            input="What is your favorite food?",
            reference="I enjoy pie.",
        )
    assert res["score"] == 1
    assert res["reasoning"] == "This is a rather bad answer. Rating: [[1]]"


def test_labeled_pairwise_string_comparison_chain_missing_ref() -> None:
    llm = FakeLLM(
        queries={
            "a": "This is a rather good answer. Rating: [[9]]",
        },
        sequential_responses=True,
    )
    chain = LabeledScoreStringEvalChain.from_llm(llm=llm)
    with pytest.raises(
        ValueError, match="LabeledScoreStringEvalChain requires a reference string"
    ):
        chain.evaluate_strings(
            prediction="I like pie.",
            input="What is your favorite food?",
        )
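
The parsing behavior exercised by test_pairwise_string_result_output_parser_parse can be sketched without LangChain installed. The following is a hedged re-implementation of the `Rating: [[N]]` extraction the tests rely on; the regex and the error message mirror what the assertions above check for, but the real ScoreStringResultOutputParser may differ in detail (the function name parse_score is an illustration, not the library API).

```python
import re


def parse_score(text: str) -> dict:
    """Extract a 1-10 verdict of the form 'Rating: [[N]]' from model output.

    Mirrors the behavior asserted in the tests above: the full text is
    kept as 'reasoning' and the double-bracketed integer becomes 'score'.
    """
    match = re.search(r"\[\[(\d+)\]\]", text)
    if match is None:
        # No '[[N]]' verdict found anywhere in the text.
        msg = "Output must contain a double bracketed string with the verdict between 1 and 10."
        raise ValueError(msg)
    score = int(match.group(1))
    if not 1 <= score <= 10:
        # A verdict was found but falls outside the allowed range.
        msg = "Output must contain a double bracketed string with the verdict between 1 and 10."
        raise ValueError(msg)
    return {"reasoning": text, "score": score}


result = parse_score("This is a rather good answer. Rating: [[9]]")
```

Both failure cases in the test, a missing `[[N]]` marker and an out-of-range rating like `[[0]]`, raise a ValueError whose message matches the `pytest.raises(..., match=...)` patterns used above.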

Dependencies

  • langchain_classic.evaluation.scoring.eval_chain
  • pytest
  • re
  • tests.unit_tests.llms.fake_llm
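
tests.unit_tests.llms.fake_llm.FakeLLM is a test double that replays canned responses instead of calling a model. A minimal sketch of the sequential-response pattern these tests depend on might look like the following; the real FakeLLM is a LangChain LLM subclass with more machinery, and SequentialFakeLLM here is a hypothetical stand-in.

```python
class SequentialFakeLLM:
    """Return canned responses in order, ignoring the prompt.

    Illustrates the sequential_responses=True behavior used by the
    tests above: the first call yields the first queued answer, the
    second call the second, and so on.
    """

    def __init__(self, responses: list[str]) -> None:
        self._responses = list(responses)
        self._index = 0

    def invoke(self, prompt: str) -> str:
        # Pop the next queued response regardless of the prompt content.
        response = self._responses[self._index]
        self._index += 1
        return response


llm = SequentialFakeLLM([
    "This is a rather good answer. Rating: [[9]]",
    "This is a rather bad answer. Rating: [[1]]",
])
```

This is why test_pairwise_string_comparison_chain can assert a score of 9 on the first evaluate_strings call and 1 on the second: the fake simply replays its queue in order.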

Frequently Asked Questions

What does test_eval_chain.py do?
test_eval_chain.py is a source file in the langchain codebase, written in Python. It belongs to the LangChainCore domain, ApiManagement subdomain.
What functions are defined in test_eval_chain.py?
test_eval_chain.py defines 3 functions: test_labeled_pairwise_string_comparison_chain_missing_ref, test_pairwise_string_comparison_chain, test_pairwise_string_result_output_parser_parse.
What does test_eval_chain.py depend on?
test_eval_chain.py imports 4 modules: langchain_classic.evaluation.scoring.eval_chain, pytest, re, tests.unit_tests.llms.fake_llm.
Where is test_eval_chain.py in the architecture?
test_eval_chain.py is located at libs/langchain/tests/unit_tests/evaluation/scoring/test_eval_chain.py (domain: LangChainCore, subdomain: ApiManagement, directory: libs/langchain/tests/unit_tests/evaluation/scoring).
