test_eval_chain.py — langchain Source File

Architecture documentation for test_eval_chain.py, a Python file in the langchain codebase. 10 imports, 0 dependents.

File · Python · Domain: LangChainCore · Subdomain: ApiManagement · 10 imports · 6 functions

Entity Profile

Dependency Diagram

graph LR
  919cda35_c0f5_cbb5_866a_0591c5815209["test_eval_chain.py"]
  0029f612_c503_ebcf_a452_a0fae8c9f2c3["os"]
  919cda35_c0f5_cbb5_866a_0591c5815209 --> 0029f612_c503_ebcf_a452_a0fae8c9f2c3
  02625e10_fb78_7ecd_1ee2_105ee470faf5["sys"]
  919cda35_c0f5_cbb5_866a_0591c5815209 --> 02625e10_fb78_7ecd_1ee2_105ee470faf5
  23cb242e_1754_041d_200a_553fcb8abe1b["unittest.mock"]
  919cda35_c0f5_cbb5_866a_0591c5815209 --> 23cb242e_1754_041d_200a_553fcb8abe1b
  f69d6389_263d_68a4_7fbf_f14c0602a9ba["pytest"]
  919cda35_c0f5_cbb5_866a_0591c5815209 --> f69d6389_263d_68a4_7fbf_f14c0602a9ba
  4044d59c_c0a5_a371_f49b_bea3da4e20ac["langchain_classic.chains.llm"]
  919cda35_c0f5_cbb5_866a_0591c5815209 --> 4044d59c_c0a5_a371_f49b_bea3da4e20ac
  aa747152_2c0e_5408_796a_68aba8e883a9["langchain_classic.evaluation.loading"]
  919cda35_c0f5_cbb5_866a_0591c5815209 --> aa747152_2c0e_5408_796a_68aba8e883a9
  0d2dca2c_c5a0_d72f_29fd_cff02a85d051["langchain_classic.evaluation.qa.eval_chain"]
  919cda35_c0f5_cbb5_866a_0591c5815209 --> 0d2dca2c_c5a0_d72f_29fd_cff02a85d051
  37291248_07a6_a05c_821a_71cc0592429f["langchain_classic.evaluation.schema"]
  919cda35_c0f5_cbb5_866a_0591c5815209 --> 37291248_07a6_a05c_821a_71cc0592429f
  55289260_666e_fae6_ddf7_d3d78be29813["tests.unit_tests.llms.fake_llm"]
  919cda35_c0f5_cbb5_866a_0591c5815209 --> 55289260_666e_fae6_ddf7_d3d78be29813
  2cad93e6_586a_5d28_a74d_4ec6fd4d2227["langchain_openai"]
  919cda35_c0f5_cbb5_866a_0591c5815209 --> 2cad93e6_586a_5d28_a74d_4ec6fd4d2227
  style 919cda35_c0f5_cbb5_866a_0591c5815209 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

"""Test LLM Bash functionality."""

import os
import sys
from unittest.mock import patch

import pytest

from langchain_classic.chains.llm import LLMChain
from langchain_classic.evaluation.loading import load_evaluator
from langchain_classic.evaluation.qa.eval_chain import (
    ContextQAEvalChain,
    CotQAEvalChain,
    QAEvalChain,
    _parse_string_eval_output,
)
from langchain_classic.evaluation.schema import StringEvaluator
from tests.unit_tests.llms.fake_llm import FakeLLM


@pytest.mark.skipif(
    sys.platform.startswith("win"),
    reason="Test not supported on Windows",
)
def test_eval_chain() -> None:
    """Test a simple eval chain."""
    example = {"query": "What's my name", "answer": "John Doe"}
    prediction = {"result": "John Doe"}
    fake_qa_eval_chain = QAEvalChain.from_llm(FakeLLM())

    outputs = fake_qa_eval_chain.evaluate([example, example], [prediction, prediction])
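    # evaluate() pairs each example with the prediction at the same index and
    # returns one grading dict per pair; since FakeLLM answers every prompt
    # with a canned "foo", both pairs grade identically.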
    assert outputs[0] == outputs[1]
    assert fake_qa_eval_chain.output_key in outputs[0]
    assert outputs[0][fake_qa_eval_chain.output_key] == "foo"


@pytest.mark.skipif(
    sys.platform.startswith("win"),
    reason="Test not supported on Windows",
)
@pytest.mark.parametrize("chain_cls", [ContextQAEvalChain, CotQAEvalChain])
def test_context_eval_chain(chain_cls: type[ContextQAEvalChain]) -> None:
    """Test a simple eval chain."""
    example = {
        "query": "What's my name",
        "context": "The name of this person is John Doe",
    }
    prediction = {"result": "John Doe"}
    fake_qa_eval_chain = chain_cls.from_llm(FakeLLM())
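    # chain_cls is parametrized: the test runs once for ContextQAEvalChain and
    # once for CotQAEvalChain, its chain-of-thought subclass, so both variants
    # are held to the same contract.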

    outputs = fake_qa_eval_chain.evaluate([example, example], [prediction, prediction])
    assert outputs[0] == outputs[1]
    assert "text" in outputs[0]
    assert outputs[0]["text"] == "foo"


def test_load_criteria_evaluator() -> None:
    """Test loading a criteria evaluator."""
    try:
        from langchain_openai import ChatOpenAI  # noqa: F401
# ... (93 more lines)
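
The assertions above depend on the FakeLLM test double from tests.unit_tests.llms.fake_llm, whose source is not reproduced here. The following is only a minimal sketch of the contract the tests rely on (a stub model that answers "foo" to every prompt), not the real class; the StubLLM name and the langchain_core base class are assumptions:

from typing import Any, Optional

from langchain_core.language_models.llms import LLM


class StubLLM(LLM):
    """Hypothetical stand-in for FakeLLM: returns a fixed completion."""

    @property
    def _llm_type(self) -> str:
        return "stub"

    def _call(self, prompt: str, stop: Optional[list[str]] = None, **kwargs: Any) -> str:
        # Every call yields "foo", which is why the graded output in the
        # tests above equals "foo".
        return "foo"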

Dependencies

  • langchain_classic.chains.llm
  • langchain_classic.evaluation.loading (see the usage sketch after this list)
  • langchain_classic.evaluation.qa.eval_chain
  • langchain_classic.evaluation.schema
  • langchain_openai
  • os
  • pytest
  • sys
  • tests.unit_tests.llms.fake_llm
  • unittest.mock
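
The langchain_classic.evaluation.loading dependency provides load_evaluator, the public entry point for constructing the evaluators these tests exercise. A minimal usage sketch, assuming langchain_classic mirrors the classic langchain.evaluation API (EvaluatorType, the llm placeholder, and the exact result keys are assumptions):

from langchain_classic.evaluation.loading import load_evaluator
from langchain_classic.evaluation.schema import EvaluatorType

# llm is any chat model, e.g. ChatOpenAI() when langchain_openai is installed.
evaluator = load_evaluator(EvaluatorType.QA, llm=llm)
result = evaluator.evaluate_strings(
    input="What's my name",
    prediction="John Doe",
    reference="John Doe",
)
# Expected shape: {"reasoning": ..., "value": "CORRECT", "score": 1}

evaluate_strings here is the StringEvaluator protocol method that the file itself imports and checks in test_implements_string_evaluator_protocol.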

Frequently Asked Questions

What does test_eval_chain.py do?
test_eval_chain.py is a Python source file in the langchain codebase containing unit tests for the QA evaluation chains. It belongs to the LangChainCore domain, ApiManagement subdomain.
What functions are defined in test_eval_chain.py?
test_eval_chain.py defines 6 functions: test_context_eval_chain, test_eval_chain, test_implements_string_evaluator_protocol, test_load_criteria_evaluator, test_qa_output_parser, test_returns_expected_results.
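
Among these, test_qa_output_parser exercises _parse_string_eval_output, which turns a raw graded completion into a structured verdict. Purely as a hypothetical illustration (the library's actual parser may differ), such a parser can be sketched as:

def parse_grade(text: str) -> dict:
    # Hypothetical sketch, not langchain's implementation: pull the verdict
    # after "GRADE:" and map it to a numeric score.
    verdict = text.strip().rsplit("GRADE:", maxsplit=1)[-1].strip()
    score = {"CORRECT": 1, "INCORRECT": 0}.get(verdict.upper())
    return {"reasoning": text.strip(), "value": verdict, "score": score}

parse_grade("GRADE: CORRECT")  # {"reasoning": "GRADE: CORRECT", "value": "CORRECT", "score": 1}
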
What does test_eval_chain.py depend on?
test_eval_chain.py imports 10 modules: langchain_classic.chains.llm, langchain_classic.evaluation.loading, langchain_classic.evaluation.qa.eval_chain, langchain_classic.evaluation.schema, langchain_openai, os, pytest, sys, tests.unit_tests.llms.fake_llm, and unittest.mock.
Where is test_eval_chain.py in the architecture?
test_eval_chain.py is located at libs/langchain/tests/unit_tests/evaluation/qa/test_eval_chain.py (domain: LangChainCore, subdomain: ApiManagement, directory: libs/langchain/tests/unit_tests/evaluation/qa).
