
from_llm_and_tools() — langchain Function Reference

Architecture documentation for the from_llm_and_tools() classmethod, defined in libs/langchain/langchain_classic/agents/conversational/base.py in the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  f5b65376_6bec_6d97_9db6_78e842df66ea["from_llm_and_tools()"]
  56062173_377c_1c6c_4d10_62181f6c83f8["ConversationalAgent"]
  f5b65376_6bec_6d97_9db6_78e842df66ea -->|defined in| 56062173_377c_1c6c_4d10_62181f6c83f8
  11415413_3e29_8e1d_229b_7f514c227457["_validate_tools()"]
  f5b65376_6bec_6d97_9db6_78e842df66ea -->|calls| 11415413_3e29_8e1d_229b_7f514c227457
  3d034d31_079b_a79a_e7e6_64b9cb2b9334["create_prompt()"]
  f5b65376_6bec_6d97_9db6_78e842df66ea -->|calls| 3d034d31_079b_a79a_e7e6_64b9cb2b9334
  512de41d_1de4_0c53_016f_bc19a5ac9c57["_get_default_output_parser()"]
  f5b65376_6bec_6d97_9db6_78e842df66ea -->|calls| 512de41d_1de4_0c53_016f_bc19a5ac9c57
  style f5b65376_6bec_6d97_9db6_78e842df66ea fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain/langchain_classic/agents/conversational/base.py lines 121–178

    @classmethod
    def from_llm_and_tools(
        cls,
        llm: BaseLanguageModel,
        tools: Sequence[BaseTool],
        callback_manager: BaseCallbackManager | None = None,
        output_parser: AgentOutputParser | None = None,
        prefix: str = PREFIX,
        suffix: str = SUFFIX,
        format_instructions: str = FORMAT_INSTRUCTIONS,
        ai_prefix: str = "AI",
        human_prefix: str = "Human",
        input_variables: list[str] | None = None,
        **kwargs: Any,
    ) -> Agent:
        """Construct an agent from an LLM and tools.

        Args:
            llm: The language model to use.
            tools: A list of tools to use.
            callback_manager: The callback manager to use.
            output_parser: The output parser to use.
            prefix: The prefix to use in the prompt.
            suffix: The suffix to use in the prompt.
            format_instructions: The format instructions to use.
            ai_prefix: The prefix to use before AI output.
            human_prefix: The prefix to use before human output.
            input_variables: The input variables to use.
            **kwargs: Any additional keyword arguments to pass to the agent.

        Returns:
            An agent.
        """
        cls._validate_tools(tools)
        prompt = cls.create_prompt(
            tools,
            ai_prefix=ai_prefix,
            human_prefix=human_prefix,
            prefix=prefix,
            suffix=suffix,
            format_instructions=format_instructions,
            input_variables=input_variables,
        )
        llm_chain = LLMChain(
            llm=llm,
            prompt=prompt,
            callback_manager=callback_manager,
        )
        tool_names = [tool.name for tool in tools]
        _output_parser = output_parser or cls._get_default_output_parser(
            ai_prefix=ai_prefix,
        )
        return cls(
            llm_chain=llm_chain,
            allowed_tools=tool_names,
            ai_prefix=ai_prefix,
            output_parser=_output_parser,
            **kwargs,
        )

Frequently Asked Questions

What does from_llm_and_tools() do?
from_llm_and_tools() is a classmethod on ConversationalAgent that constructs an agent from a language model and a sequence of tools: it validates the tools, builds a conversational prompt, wraps the LLM and prompt in an LLMChain, and returns the agent instance. It is defined in libs/langchain/langchain_classic/agents/conversational/base.py.
Where is from_llm_and_tools() defined?
from_llm_and_tools() is defined in libs/langchain/langchain_classic/agents/conversational/base.py at line 121.
What does from_llm_and_tools() call?
from_llm_and_tools() calls three functions, in order: _validate_tools(), create_prompt(), and _get_default_output_parser() (the last only when no output_parser is supplied by the caller).
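The line `output_parser or cls._get_default_output_parser(ai_prefix=ai_prefix)` in the source relies on Python's short-circuiting `or` to fall back to a default only when the caller did not supply a parser. A standalone illustration of the idiom (the function and dict shapes here are made up for the example):

```python
def resolve_parser(output_parser=None, ai_prefix="AI"):
    # Falls back to a default parser when none is supplied. Note the
    # caveat of the `or` idiom: any falsy value (not just None) also
    # triggers the fallback.
    return output_parser or {"type": "default", "ai_prefix": ai_prefix}


print(resolve_parser())                    # built-in default wins
print(resolve_parser({"type": "custom"}))  # caller-supplied parser wins
```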
