
from_llm() — langchain Function Reference

Architecture documentation for the from_llm() function in eval_chain.py from the langchain codebase.

Type: Function · Language: Python · Domain: LangChainCore / Runnables · Calls: 2 · Called by: 1

Entity Profile

Dependency Diagram

graph TD
  3c6dc9b9_bf18_3520_a593_9147d40d2f2a["from_llm()"]
  a58ec485_afe5_c733_6e70_92d4365f961c["ScoreStringEvalChain"]
  3c6dc9b9_bf18_3520_a593_9147d40d2f2a -->|defined in| a58ec485_afe5_c733_6e70_92d4365f961c
  3987e2f0_a8de_74dd_9ceb_d96cade3aefa["from_llm()"]
  3987e2f0_a8de_74dd_9ceb_d96cade3aefa -->|calls| 3c6dc9b9_bf18_3520_a593_9147d40d2f2a
  3c6dc9b9_bf18_3520_a593_9147d40d2f2a -->|calls| 3987e2f0_a8de_74dd_9ceb_d96cade3aefa
  351b62b7_d2ea_9d3f_cc78_2712a988a68f["resolve_criteria()"]
  3c6dc9b9_bf18_3520_a593_9147d40d2f2a -->|calls| 351b62b7_d2ea_9d3f_cc78_2712a988a68f
  style 3c6dc9b9_bf18_3520_a593_9147d40d2f2a fill:#6366f1,stroke:#818cf8,color:#fff


Source Code

libs/langchain/langchain_classic/evaluation/scoring/eval_chain.py, lines 240–294

    def from_llm(
        cls,
        llm: BaseLanguageModel,
        *,
        prompt: PromptTemplate | None = None,
        criteria: CRITERIA_TYPE | str | None = None,
        normalize_by: float | None = None,
        **kwargs: Any,
    ) -> ScoreStringEvalChain:
        """Initialize the ScoreStringEvalChain from an LLM.

        Args:
            llm: The LLM to use (GPT-4 recommended).
            prompt: The prompt to use.
            criteria: The criteria to use.
            normalize_by: The value to normalize the score by.
            **kwargs: Additional keyword arguments.

        Returns:
            The initialized ScoreStringEvalChain.

        Raises:
            ValueError: If the input variables are not as expected.

        """
        if not (hasattr(llm, "model_name") and not llm.model_name.startswith("gpt-4")):
            logger.warning(
                "This chain was only tested with GPT-4. \
Performance may be significantly worse with other models.",
            )

        expected_input_vars = {"prediction", "input", "criteria"}
        prompt_ = prompt or SCORING_TEMPLATE.partial(reference="")
        if expected_input_vars != set(prompt_.input_variables):
            msg = (
                f"Input variables should be {expected_input_vars}, "
                f"but got {prompt_.input_variables}"
            )
            raise ValueError(msg)
        criteria_ = resolve_criteria(criteria)
        criteria_str = "\n".join(
            f"{k}: {v}" if v else k for k, v in criteria_.items()
        ).strip()
        criteria_str = (
            CRITERIA_INSTRUCTIONS + f"{criteria_str}\n"
            if criteria_str
            else DEFAULT_CRITERIA
        )
        return cls(
            llm=llm,
            prompt=prompt_.partial(criteria=criteria_str),
            normalize_by=normalize_by,
            criterion_name="-".join(criteria_),
            **kwargs,
        )
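
A minimal usage sketch, assuming ScoreStringEvalChain is importable from the path in the listing above, that the langchain_openai provider package is installed, and that the chain exposes the standard evaluate_strings() string-evaluator interface:

    from langchain_openai import ChatOpenAI  # assumed provider package
    from langchain_classic.evaluation.scoring.eval_chain import (  # path per the listing above
        ScoreStringEvalChain,
    )

    llm = ChatOpenAI(model="gpt-4")  # the warning above: only tested with GPT-4

    # criteria accepts a single criterion name, a mapping, or None (defaults apply).
    chain = ScoreStringEvalChain.from_llm(
        llm=llm,
        criteria="helpfulness",
        normalize_by=10.0,  # optional: normalize the 1-10 score (see docstring above)
    )

    result = chain.evaluate_strings(
        prediction="Paris is the capital of France.",
        input="What is the capital of France?",
    )
    print(result["score"])

Per the docstring above, normalize_by rescales the returned score by that value; omit it to keep the raw score.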


Frequently Asked Questions

What does from_llm() do?
from_llm() builds a ScoreStringEvalChain from a language model. It warns that the chain was only tested with GPT-4, checks that the prompt exposes exactly the prediction, input, and criteria variables, resolves the scoring criteria into instruction text for the prompt, and returns the configured chain.
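
For instance, a custom prompt passed to from_llm() must expose exactly the prediction, input, and criteria variables, or the ValueError documented above is raised. A sketch using langchain_core's PromptTemplate (the template wording is illustrative):

    from langchain_core.prompts import PromptTemplate

    # input_variables resolves to {"criteria", "input", "prediction"}, so the check passes.
    custom_prompt = PromptTemplate.from_template(
        "Criteria:\n{criteria}\n\n"
        "Question: {input}\n"
        "Answer: {prediction}\n\n"
        "Score the answer from 1 to 10 and explain your reasoning."
    )
    chain = ScoreStringEvalChain.from_llm(llm=llm, prompt=custom_prompt)  # llm as in the sketch above
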
Where is from_llm() defined?
from_llm() is defined in libs/langchain/langchain_classic/evaluation/scoring/eval_chain.py at line 240.
What does from_llm() call?
from_llm() calls 2 functions: from_llm() and resolve_criteria().
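
resolve_criteria() turns the criteria argument into a mapping of criterion names to descriptions, which from_llm() then flattens into the instruction text injected into the prompt. A rough illustration of that flattening step (the descriptions below are made up for the example):

    criteria_ = {
        "helpfulness": "Is the response helpful to the user?",
        "depth": "Does the response show depth of thought?",
    }
    # Mirrors the join in from_llm(): one "name: description" per line,
    # or just the name when the description is empty.
    criteria_str = "\n".join(
        f"{k}: {v}" if v else k for k, v in criteria_.items()
    ).strip()
    print(criteria_str)
    # helpfulness: Is the response helpful to the user?
    # depth: Does the response show depth of thought?
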
What calls from_llm()?
from_llm() is called by 1 function: from_llm().
