
base.py — langchain Source File

Architecture documentation for base.py, a Python file in the langchain codebase. 9 imports, 0 dependents.

File · Python · AgentOrchestration · ActionLogic · 9 imports · 1 function

Dependency Diagram

graph LR
  5e705163_f2ec_0cda_3a5c_58793e650cbe["base.py"]
  cfe2bde5_180e_e3b0_df2b_55b3ebaca8e7["collections.abc"]
  5e705163_f2ec_0cda_3a5c_58793e650cbe --> cfe2bde5_180e_e3b0_df2b_55b3ebaca8e7
  80d582c5_7cc3_ac96_2742_3dbe1cbd4e2b["langchain_core.agents"]
  5e705163_f2ec_0cda_3a5c_58793e650cbe --> 80d582c5_7cc3_ac96_2742_3dbe1cbd4e2b
  ba43b74d_3099_7e1c_aac3_cf594720469e["langchain_core.language_models"]
  5e705163_f2ec_0cda_3a5c_58793e650cbe --> ba43b74d_3099_7e1c_aac3_cf594720469e
  d758344f_537f_649e_f467_b9d7442e86df["langchain_core.messages"]
  5e705163_f2ec_0cda_3a5c_58793e650cbe --> d758344f_537f_649e_f467_b9d7442e86df
  e45722a2_0136_a972_1f58_7b5987500404["langchain_core.prompts.chat"]
  5e705163_f2ec_0cda_3a5c_58793e650cbe --> e45722a2_0136_a972_1f58_7b5987500404
  2ceb1686_0f8c_8ae0_36d1_7c0b702fda1c["langchain_core.runnables"]
  5e705163_f2ec_0cda_3a5c_58793e650cbe --> 2ceb1686_0f8c_8ae0_36d1_7c0b702fda1c
  43d88577_548b_2248_b01b_7987bae85dcc["langchain_core.tools"]
  5e705163_f2ec_0cda_3a5c_58793e650cbe --> 43d88577_548b_2248_b01b_7987bae85dcc
  491501a6_aee0_8a4c_c0d7_7abfdc2845b1["langchain_classic.agents.format_scratchpad.tools"]
  5e705163_f2ec_0cda_3a5c_58793e650cbe --> 491501a6_aee0_8a4c_c0d7_7abfdc2845b1
  07acf81c_3473_eef9_7bd1_815539c71249["langchain_classic.agents.output_parsers.tools"]
  5e705163_f2ec_0cda_3a5c_58793e650cbe --> 07acf81c_3473_eef9_7bd1_815539c71249
  style 5e705163_f2ec_0cda_3a5c_58793e650cbe fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

from collections.abc import Callable, Sequence

from langchain_core.agents import AgentAction
from langchain_core.language_models import BaseLanguageModel
from langchain_core.messages import BaseMessage
from langchain_core.prompts.chat import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnablePassthrough
from langchain_core.tools import BaseTool

from langchain_classic.agents.format_scratchpad.tools import (
    format_to_tool_messages,
)
from langchain_classic.agents.output_parsers.tools import ToolsAgentOutputParser

MessageFormatter = Callable[[Sequence[tuple[AgentAction, str]]], list[BaseMessage]]


def create_tool_calling_agent(
    llm: BaseLanguageModel,
    tools: Sequence[BaseTool],
    prompt: ChatPromptTemplate,
    *,
    message_formatter: MessageFormatter = format_to_tool_messages,
) -> Runnable:
    """Create an agent that uses tools.

    Args:
        llm: LLM to use as the agent.
        tools: Tools this agent has access to.
        prompt: The prompt to use. See Prompt section below for more on the expected
            input variables.
        message_formatter: Formatter function to convert (AgentAction, tool output)
            tuples into FunctionMessages.

    Returns:
        A Runnable sequence representing an agent. It takes as input all the same input
        variables as the prompt passed in does. It returns as output either an
        AgentAction or AgentFinish.

    Example:
        ```python
        from langchain_classic.agents import (
            AgentExecutor,
            create_tool_calling_agent,
            tool,
        )
        from langchain_anthropic import ChatAnthropic
        from langchain_core.prompts import ChatPromptTemplate

        prompt = ChatPromptTemplate.from_messages(
            [
                ("system", "You are a helpful assistant"),
                ("placeholder", "{chat_history}"),
                ("human", "{input}"),
                ("placeholder", "{agent_scratchpad}"),
            ]
        )
        model = ChatAnthropic(model="claude-opus-4-1-20250805")

        @tool
        def magic_function(input: int) -> int:
            \"\"\"Applies a magic function to an input.\"\"\"
            return input + 2

        tools = [magic_function]

        agent = create_tool_calling_agent(model, tools, prompt)
        agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

        agent_executor.invoke({"input": "what is the value of magic_function(3)?"})

        # Using with chat history
        from langchain_core.messages import AIMessage, HumanMessage
        agent_executor.invoke(
            {
                "input": "what's my name?",
                "chat_history": [
                    HumanMessage(content="hi! my name is bob"),
                    AIMessage(content="Hello Bob! How can I assist you today?"),
                ],
            }
        )
        ```

    Prompt:
        The agent prompt must have an `agent_scratchpad` key that is a
            `MessagesPlaceholder`. Intermediate agent actions and tool output
            messages will be passed in here.

    Troubleshooting:
        - If you encounter `invalid_tool_calls` errors, ensure that your tool
          functions return properly formatted responses. Tool outputs should be
          serializable to JSON. For custom objects, implement a proper `__str__`
          or `to_dict` method.
    """
    missing_vars = {"agent_scratchpad"}.difference(
        prompt.input_variables + list(prompt.partial_variables),
    )
    if missing_vars:
        msg = f"Prompt missing required variables: {missing_vars}"
        raise ValueError(msg)

    if not hasattr(llm, "bind_tools"):
        msg = "This function requires a bind_tools() method be implemented on the LLM."
        raise ValueError(
            msg,
        )
    llm_with_tools = llm.bind_tools(tools)

    return (
        RunnablePassthrough.assign(
            agent_scratchpad=lambda x: message_formatter(x["intermediate_steps"]),
        )
        | prompt
        | llm_with_tools
        | ToolsAgentOutputParser()
    )
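The `message_formatter` parameter follows the `MessageFormatter` alias defined at the top of the file: it maps `(AgentAction, observation)` pairs from `intermediate_steps` to the list of chat messages that fill the `agent_scratchpad` placeholder. A minimal, stdlib-only sketch of that contract, using simplified stand-ins (a plain dataclass for `AgentAction`, dicts for messages — not the real `langchain_core` types):

```python
from collections.abc import Sequence
from dataclasses import dataclass


# Simplified stand-in for langchain_core.agents.AgentAction (assumption, not the real class).
@dataclass
class AgentAction:
    tool: str
    tool_input: dict
    log: str


def format_scratchpad(steps: Sequence[tuple[AgentAction, str]]) -> list[dict]:
    """Mirror the MessageFormatter contract:
    Sequence[tuple[AgentAction, str]] -> list of chat messages."""
    messages: list[dict] = []
    for action, observation in steps:
        # The model turn that requested the tool call...
        messages.append({"role": "ai", "content": action.log})
        # ...followed by the tool's observation as a tool message.
        messages.append({"role": "tool", "content": observation})
    return messages


steps = [
    (
        AgentAction(
            tool="magic_function",
            tool_input={"input": 3},
            log="calling magic_function(3)",
        ),
        "5",
    )
]
print(format_scratchpad(steps))
```

A custom formatter with this shape can be passed as `message_formatter=` to `create_tool_calling_agent` to change how intermediate steps are rendered into the scratchpad.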

Dependencies

  • collections.abc
  • langchain_classic.agents.format_scratchpad.tools
  • langchain_classic.agents.output_parsers.tools
  • langchain_core.agents
  • langchain_core.language_models
  • langchain_core.messages
  • langchain_core.prompts.chat
  • langchain_core.runnables
  • langchain_core.tools

Frequently Asked Questions

What does base.py do?
base.py is a source file in the langchain codebase, written in Python. It belongs to the AgentOrchestration domain, ActionLogic subdomain.
What functions are defined in base.py?
base.py defines one function: create_tool_calling_agent.
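Before building the runnable, create_tool_calling_agent validates that the prompt declares the required `agent_scratchpad` variable. A minimal, stdlib-only sketch of that check (the function name and plain-list arguments below are simplified stand-ins, not the real `ChatPromptTemplate` API):

```python
def check_prompt_variables(input_variables: list[str], partial_variables: dict) -> None:
    """Mirror the agent_scratchpad check at the top of create_tool_calling_agent."""
    missing_vars = {"agent_scratchpad"}.difference(
        list(input_variables) + list(partial_variables)
    )
    if missing_vars:
        raise ValueError(f"Prompt missing required variables: {missing_vars}")


# Passes: the variable is declared in the template.
check_prompt_variables(["input", "agent_scratchpad"], {})
# Also passes: the variable is pre-bound as a partial.
check_prompt_variables(["input"], {"agent_scratchpad": []})
```

A prompt that declares neither would raise `ValueError`, which is why the docstring's example includes the `("placeholder", "{agent_scratchpad}")` entry.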
What does base.py depend on?
base.py imports 9 modules: collections.abc, langchain_classic.agents.format_scratchpad.tools, langchain_classic.agents.output_parsers.tools, langchain_core.agents, langchain_core.language_models, langchain_core.messages, langchain_core.prompts.chat, langchain_core.runnables, and langchain_core.tools.
Where is base.py in the architecture?
base.py is located at libs/langchain/langchain_classic/agents/tool_calling_agent/base.py (domain: AgentOrchestration, subdomain: ActionLogic, directory: libs/langchain/langchain_classic/agents/tool_calling_agent).
