from_llm() — langchain Function Reference

Architecture documentation for the from_llm() function in eval_chain.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  a3155b02_dbf2_ea43_042a_2d3c1404bdeb["from_llm()"]
  cb20f5c6_80c7_fbed_b351_e0b3c9587d96["QAEvalChain"]
  a3155b02_dbf2_ea43_042a_2d3c1404bdeb -->|defined in| cb20f5c6_80c7_fbed_b351_e0b3c9587d96
  909f249d_6333_28c0_ed05_2bfa8676c170["from_llm()"]
  909f249d_6333_28c0_ed05_2bfa8676c170 -->|calls| a3155b02_dbf2_ea43_042a_2d3c1404bdeb
  80297720_85ce_00c4_c038_befbdc6c8ea0["from_llm()"]
  80297720_85ce_00c4_c038_befbdc6c8ea0 -->|calls| a3155b02_dbf2_ea43_042a_2d3c1404bdeb
  a3155b02_dbf2_ea43_042a_2d3c1404bdeb -->|calls| 909f249d_6333_28c0_ed05_2bfa8676c170
  style a3155b02_dbf2_ea43_042a_2d3c1404bdeb fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain/langchain_classic/evaluation/qa/eval_chain.py lines 107–135

    def from_llm(
        cls,
        llm: BaseLanguageModel,
        prompt: PromptTemplate | None = None,
        **kwargs: Any,
    ) -> QAEvalChain:
        """Load QA Eval Chain from LLM.

        Args:
            llm: The base language model to use.
            prompt: A prompt template containing the input_variables:
                `'query'`, `'answer'` and `'result'` that will be used as the prompt
                for evaluation.

                Defaults to `PROMPT`.
            **kwargs: Additional keyword arguments.

        Returns:
            The loaded QA eval chain.
        """
        prompt = prompt or PROMPT
        expected_input_vars = {"query", "answer", "result"}
        if expected_input_vars != set(prompt.input_variables):
            msg = (
                f"Input variables should be {expected_input_vars}, "
                f"but got {prompt.input_variables}"
            )
            raise ValueError(msg)
        return cls(llm=llm, prompt=prompt, **kwargs)
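The only validation `from_llm()` performs is a set comparison on the prompt's `input_variables` before delegating to the class constructor. Below is a minimal standalone sketch of that check; the helper name `validate_prompt_variables` is ours for illustration, not a langchain API — in langchain the logic lives inline inside `QAEvalChain.from_llm()`:

```python
def validate_prompt_variables(input_variables: list[str]) -> None:
    """Raise ValueError unless the prompt exposes exactly
    the variables 'query', 'answer', and 'result'.

    Mirrors the inline check in QAEvalChain.from_llm(); this
    helper is illustrative, not part of langchain.
    """
    expected_input_vars = {"query", "answer", "result"}
    # Order does not matter, but the sets must match exactly:
    # extra or missing variables are both rejected.
    if expected_input_vars != set(input_variables):
        msg = (
            f"Input variables should be {expected_input_vars}, "
            f"but got {input_variables}"
        )
        raise ValueError(msg)


# A prompt with exactly the expected variables passes silently.
validate_prompt_variables(["query", "answer", "result"])

# A prompt using 'input' instead of 'query' is rejected.
try:
    validate_prompt_variables(["input", "answer", "result"])
except ValueError as exc:
    print(f"rejected: {exc}")
```

Note that because the comparison is against an exact set, a prompt with additional variables (say, `context`) also fails; callers must wrap or partial such prompts down to exactly the three expected variables before passing them in.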

Frequently Asked Questions

What does from_llm() do?
from_llm() is a classmethod on QAEvalChain that builds a QA evaluation chain from a language model and an optional prompt template, validating that the prompt's input variables are exactly `query`, `answer`, and `result` before constructing the chain.
Where is from_llm() defined?
from_llm() is defined in libs/langchain/langchain_classic/evaluation/qa/eval_chain.py at line 107.
What does from_llm() call?
from_llm() calls one function: another from_llm() implementation.
What calls from_llm()?
from_llm() is called by two other from_llm() implementations.
