
QAEvalChain Class — langchain Architecture

Architecture documentation for the QAEvalChain class in eval_chain.py from the langchain codebase.

Entity Profile

QAEvalChain is a class defined in libs/langchain/langchain_classic/evaluation/qa/eval_chain.py (lines 77–216). It extends LLMChain, StringEvaluator, and LLMEvalChain, and implements the "correctness" evaluator for grading question-answering results.

Dependency Diagram

graph TD
  QAEvalChain["QAEvalChain"]
  LLMChain["LLMChain"]
  QAEvalChain -->|extends| LLMChain
  StringEvaluator["StringEvaluator"]
  QAEvalChain -->|extends| StringEvaluator
  LLMEvalChain["LLMEvalChain"]
  QAEvalChain -->|extends| LLMEvalChain
  eval_chain_py["eval_chain.py"]
  QAEvalChain -->|defined in| eval_chain_py
  is_lc_serializable["is_lc_serializable()"]
  QAEvalChain -->|method| is_lc_serializable
  evaluation_name["evaluation_name()"]
  QAEvalChain -->|method| evaluation_name
  requires_reference["requires_reference()"]
  QAEvalChain -->|method| requires_reference
  requires_input["requires_input()"]
  QAEvalChain -->|method| requires_input
  from_llm["from_llm()"]
  QAEvalChain -->|method| from_llm
  evaluate["evaluate()"]
  QAEvalChain -->|method| evaluate
  prepare_output["_prepare_output()"]
  QAEvalChain -->|method| prepare_output
  evaluate_strings["_evaluate_strings()"]
  QAEvalChain -->|method| evaluate_strings
  aevaluate_strings["_aevaluate_strings()"]
  QAEvalChain -->|method| aevaluate_strings

Source Code

libs/langchain/langchain_classic/evaluation/qa/eval_chain.py, lines 77–216 (excerpt; _prepare_output(), _evaluate_strings(), and _aevaluate_strings() are omitted below)

class QAEvalChain(LLMChain, StringEvaluator, LLMEvalChain):
    """LLM Chain for evaluating question answering."""

    output_key: str = "results"

    model_config = ConfigDict(
        extra="ignore",
    )

    @classmethod
    @override
    def is_lc_serializable(cls) -> bool:
        return False

    @property
    @override
    def evaluation_name(self) -> str:
        return "correctness"

    @property
    @override
    def requires_reference(self) -> bool:
        return True

    @property
    @override
    def requires_input(self) -> bool:
        return True

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
        prompt: PromptTemplate | None = None,
        **kwargs: Any,
    ) -> QAEvalChain:
        """Load QA Eval Chain from LLM.

        Args:
            llm: The base language model to use.
            prompt: A prompt template containing the input_variables:
                `'input'`, `'answer'` and `'result'` that will be used as the prompt
                for evaluation.

                Defaults to `PROMPT`.
            **kwargs: Additional keyword arguments.

        Returns:
            The loaded QA eval chain.
        """
        prompt = prompt or PROMPT
        expected_input_vars = {"query", "answer", "result"}
        if expected_input_vars != set(prompt.input_variables):
            msg = (
                f"Input variables should be {expected_input_vars}, "
                f"but got {prompt.input_variables}"
            )
            raise ValueError(msg)
        return cls(llm=llm, prompt=prompt, **kwargs)

    def evaluate(
        self,
        examples: Sequence[dict],
        predictions: Sequence[dict],
        question_key: str = "query",
        answer_key: str = "answer",
        prediction_key: str = "result",
        *,
        callbacks: Callbacks = None,
    ) -> list[dict]:
        """Evaluate question answering examples and predictions."""
        inputs = [
            {
                "query": example[question_key],
                "answer": example[answer_key],
                "result": predictions[i][prediction_key],
            }
            for i, example in enumerate(examples)
        ]

        return self.apply(inputs, callbacks=callbacks)
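
A hedged usage sketch of the excerpt above: construct the chain with from_llm (which validates that the prompt exposes exactly the query, answer, and result variables) and grade a batch of predictions with evaluate. The ChatOpenAI model and the sample data are illustrative assumptions, not part of the source:

from langchain_openai import ChatOpenAI  # any BaseLanguageModel works; ChatOpenAI is an illustrative choice
from langchain.evaluation.qa.eval_chain import QAEvalChain  # import path may differ across versions

llm = ChatOpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)  # uses the default PROMPT with {"query", "answer", "result"}

examples = [{"query": "What is the capital of France?", "answer": "Paris"}]
predictions = [{"result": "The capital of France is Paris."}]

# Each graded item is a dict keyed by output_key ("results"), e.g. {"results": "CORRECT"}.
graded = eval_chain.evaluate(examples, predictions)
print(graded)

Note that the from_llm docstring mentions an 'input' variable, but the validation in the code checks for 'query': a custom prompt must use exactly the input_variables {"query", "answer", "result"}, otherwise from_llm raises a ValueError.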

Frequently Asked Questions

What is the QAEvalChain class?
QAEvalChain is an LLM chain for grading question-answering results against reference answers. It is defined in libs/langchain/langchain_classic/evaluation/qa/eval_chain.py in the langchain codebase.
Where is QAEvalChain defined?
QAEvalChain is defined in libs/langchain/langchain_classic/evaluation/qa/eval_chain.py at line 77.
What does QAEvalChain extend?
QAEvalChain extends LLMChain, StringEvaluator, and LLMEvalChain.
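
Because QAEvalChain implements StringEvaluator with requires_input and requires_reference both True, it can also be driven through the generic string-evaluator interface. A sketch assuming the standard load_evaluator entry point; import paths and the exact shape of the result dict may vary across langchain versions:

from langchain.evaluation import load_evaluator
from langchain_openai import ChatOpenAI  # illustrative model choice

evaluator = load_evaluator("qa", llm=ChatOpenAI(temperature=0))  # resolves to a QAEvalChain

# Both input and reference are required because requires_input and requires_reference are True.
result = evaluator.evaluate_strings(
    prediction="The capital of France is Paris.",
    reference="Paris",
    input="What is the capital of France?",
)
print(result)  # e.g. {"reasoning": ..., "value": "CORRECT", "score": 1}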
