from_llm() — langchain Function Reference

Architecture documentation for the from_llm() function in eval_chain.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  7f35a36c_35ea_7f92_9706_72edf0eab7e3["from_llm()"]
  6997d03c_6524_f97b_7017_b2f56540bc07["PairwiseStringEvalChain"]
  7f35a36c_35ea_7f92_9706_72edf0eab7e3 -->|defined in| 6997d03c_6524_f97b_7017_b2f56540bc07
  69672bbd_1b9c_053c_d740_7c250335ad18["from_llm()"]
  69672bbd_1b9c_053c_d740_7c250335ad18 -->|calls| 7f35a36c_35ea_7f92_9706_72edf0eab7e3
  7f35a36c_35ea_7f92_9706_72edf0eab7e3 -->|calls| 69672bbd_1b9c_053c_d740_7c250335ad18
  4995eef9_93cd_17ab_915e_ed4ae1cf159f["resolve_pairwise_criteria()"]
  7f35a36c_35ea_7f92_9706_72edf0eab7e3 -->|calls| 4995eef9_93cd_17ab_915e_ed4ae1cf159f
  style 7f35a36c_35ea_7f92_9706_72edf0eab7e3 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain/langchain_classic/evaluation/comparison/eval_chain.py lines 240–281

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
        *,
        prompt: PromptTemplate | None = None,
        criteria: CRITERIA_TYPE | str | None = None,
        **kwargs: Any,
    ) -> PairwiseStringEvalChain:
        """Initialize the PairwiseStringEvalChain from an LLM.

        Args:
            llm: The LLM to use (GPT-4 recommended).
            prompt: The prompt to use.
            criteria: The criteria to use.
            **kwargs: Additional keyword arguments.

        Returns:
            The initialized PairwiseStringEvalChain.

        Raises:
            ValueError: If the input variables are not as expected.

        """
        # Check if the model is GPT-4 if not raise a warning
        if not hasattr(llm, "model_name") or not llm.model_name.startswith("gpt-4"):
            logger.warning(
                "This chain was only tested with GPT-4. \
Performance may be significantly worse with other models.",
            )

        expected_input_vars = {"prediction", "prediction_b", "input", "criteria"}
        prompt_ = prompt or COMPARISON_TEMPLATE.partial(reference="")
        if expected_input_vars != set(prompt_.input_variables):
            msg = (
                f"Input variables should be {expected_input_vars}, "
                f"but got {prompt_.input_variables}"
            )
            raise ValueError(msg)
        criteria_ = resolve_pairwise_criteria(criteria)
        criteria_str = "\n".join(f"{k}: {v}" if v else k for k, v in criteria_.items())
        criteria_str = CRITERIA_INSTRUCTIONS + criteria_str if criteria_str else ""
        return cls(llm=llm, prompt=prompt_.partial(criteria=criteria_str), **kwargs)
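The criteria handling at the end of `from_llm()` can be sketched standalone. This is a minimal reconstruction, not langchain's API: `CRITERIA_INSTRUCTIONS` below is an assumed placeholder for langchain's actual constant, and `resolve_pairwise_criteria()` is replaced by a plain dict of criterion names to descriptions.

```python
# Stand-in for langchain's CRITERIA_INSTRUCTIONS constant (assumed wording).
CRITERIA_INSTRUCTIONS = "Evaluate the responses against the following criteria:\n"


def build_criteria_str(criteria: dict[str, str]) -> str:
    """Mirror the criteria-string assembly in from_llm().

    Criteria with a description render as "name: description"; criteria
    with an empty description render as the bare name. If no criteria are
    given, the result is an empty string (no instructions prepended).
    """
    criteria_str = "\n".join(f"{k}: {v}" if v else k for k, v in criteria.items())
    return CRITERIA_INSTRUCTIONS + criteria_str if criteria_str else ""


print(build_criteria_str({"conciseness": "Is the answer concise?", "accuracy": ""}))
```

The resulting string is then injected into the prompt via `prompt_.partial(criteria=criteria_str)`, so the chain's final prompt only exposes `prediction`, `prediction_b`, and `input` at call time.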

Domain

Subdomains

Called By

Frequently Asked Questions

What does from_llm() do?
from_llm() is a classmethod on PairwiseStringEvalChain that constructs the chain from a language model: it warns if the model is not GPT-4, validates the prompt's input variables, resolves the evaluation criteria into a criteria string, and returns the initialized chain. It is defined in libs/langchain/langchain_classic/evaluation/comparison/eval_chain.py.
Where is from_llm() defined?
from_llm() is defined in libs/langchain/langchain_classic/evaluation/comparison/eval_chain.py at line 240.
What does from_llm() call?
from_llm() calls two functions: another from_llm() implementation (per the dependency diagram) and resolve_pairwise_criteria().
What calls from_llm()?
from_llm() is called by one function: another from_llm() implementation.
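The input-variable validation mentioned in the source can also be illustrated in isolation. This is a hedged sketch, not langchain code: MiniPrompt is a hypothetical stand-in for PromptTemplate, kept only to show the exact-set check that raises ValueError in from_llm().

```python
# The exact set of variables from_llm() expects on a custom prompt.
EXPECTED_INPUT_VARS = {"prediction", "prediction_b", "input", "criteria"}


class MiniPrompt:
    """Hypothetical stand-in for langchain's PromptTemplate."""

    def __init__(self, input_variables: list[str]) -> None:
        self.input_variables = input_variables


def check_prompt(prompt: MiniPrompt) -> None:
    # Mirrors the ValueError path in from_llm(): the variable set must
    # match exactly -- no missing names, no extras.
    if EXPECTED_INPUT_VARS != set(prompt.input_variables):
        msg = (
            f"Input variables should be {EXPECTED_INPUT_VARS}, "
            f"but got {prompt.input_variables}"
        )
        raise ValueError(msg)


# A prompt with exactly the expected variables passes silently.
check_prompt(MiniPrompt(["prediction", "prediction_b", "input", "criteria"]))
```

Note that the default prompt (`COMPARISON_TEMPLATE.partial(reference="")`) already satisfies this check; the validation only matters when a caller supplies a custom prompt.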
