
from_llm() — langchain Function Reference

Architecture documentation for the from_llm() function in trajectory_eval_chain.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  2b7272ab_8d89_430f_feef_e480d4b55c71["from_llm()"]
  9066f65c_c5a3_1534_5336_72609f4ff02b["TrajectoryEvalChain"]
  2b7272ab_8d89_430f_feef_e480d4b55c71 -->|defined in| 9066f65c_c5a3_1534_5336_72609f4ff02b
  style 2b7272ab_8d89_430f_feef_e480d4b55c71 fill:#6366f1,stroke:#818cf8,color:#fff


Source Code

libs/langchain/langchain_classic/evaluation/agents/trajectory_eval_chain.py lines 226–255

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
        agent_tools: Sequence[BaseTool] | None = None,
        output_parser: TrajectoryOutputParser | None = None,
        **kwargs: Any,
    ) -> "TrajectoryEvalChain":
        """Create a TrajectoryEvalChain object from a language model chain.

        Args:
            llm: The language model chain.
            agent_tools: A list of tools available to the agent.
            output_parser: The output parser used to parse the chain output into a
                score.
            **kwargs: Additional keyword arguments.

        Returns:
            The `TrajectoryEvalChain` object.
        """
        if not isinstance(llm, BaseChatModel):
            msg = "Only chat models supported by the current trajectory eval"
            raise NotImplementedError(msg)
        prompt = EVAL_CHAT_PROMPT if agent_tools else TOOL_FREE_EVAL_CHAT_PROMPT
        eval_chain = LLMChain(llm=llm, prompt=prompt)
        return cls(
            agent_tools=agent_tools,
            eval_chain=eval_chain,
            output_parser=output_parser or TrajectoryOutputParser(),
            **kwargs,
        )
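The method follows a common alternative-constructor pattern: validate the model type, pick a prompt based on whether tools are present, and fall back to a default output parser. A minimal, self-contained sketch of that pattern (all class names below are illustrative stand-ins, not langchain APIs):

```python
from typing import Any, Optional, Sequence


# Hypothetical stand-ins for the types the real method works with.
class ChatModel:
    """Plays the role of BaseChatModel."""


class CompletionModel:
    """Plays the role of a non-chat BaseLanguageModel."""


class OutputParser:
    """Plays the role of TrajectoryOutputParser."""


class EvalChain:
    """Plays the role of TrajectoryEvalChain."""

    def __init__(self, prompt: str, output_parser: OutputParser, **kwargs: Any) -> None:
        self.prompt = prompt
        self.output_parser = output_parser
        self.extra = kwargs

    @classmethod
    def from_llm(
        cls,
        llm: Any,
        agent_tools: Optional[Sequence[str]] = None,
        output_parser: Optional[OutputParser] = None,
        **kwargs: Any,
    ) -> "EvalChain":
        # Guard: only chat models are accepted, mirroring the isinstance check.
        if not isinstance(llm, ChatModel):
            raise NotImplementedError("Only chat models are supported.")
        # Prompt selection depends on whether agent tools were provided.
        prompt = "with-tools prompt" if agent_tools else "tool-free prompt"
        # Fill in a default parser when none is supplied, then delegate to cls().
        return cls(prompt, output_parser or OutputParser(), **kwargs)
```

Because `from_llm()` returns `cls(...)` rather than a hard-coded class, subclasses of the chain inherit the constructor and get instances of themselves back.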

Frequently Asked Questions

What does from_llm() do?
from_llm() is a classmethod on TrajectoryEvalChain that constructs a trajectory evaluation chain from a chat model. It raises NotImplementedError if the model is not a BaseChatModel, selects an evaluation prompt based on whether agent_tools are supplied, wraps the model in an LLMChain, and returns the assembled chain with the given (or default) TrajectoryOutputParser.
Where is from_llm() defined?
from_llm() is defined in libs/langchain/langchain_classic/evaluation/agents/trajectory_eval_chain.py at line 226.
