base.py — langchain Source File

Architecture documentation for base.py, a Python file in the langchain codebase. 9 imports, 0 dependents.

Entity Profile

Dependency Diagram

graph LR
  9a58518d_6eb6_2ed8_7a0d_9b7203681833["base.py"]
  2bf6d401_816d_d011_3b05_a6114f55ff58["collections.abc"]
  9a58518d_6eb6_2ed8_7a0d_9b7203681833 --> 2bf6d401_816d_d011_3b05_a6114f55ff58
  e929cf21_6ab8_6ff3_3765_0d35a099a053["langchain_core.language_models"]
  9a58518d_6eb6_2ed8_7a0d_9b7203681833 --> e929cf21_6ab8_6ff3_3765_0d35a099a053
  16c7d167_e2e4_cd42_2bc2_d182459cd93c["langchain_core.prompts.chat"]
  9a58518d_6eb6_2ed8_7a0d_9b7203681833 --> 16c7d167_e2e4_cd42_2bc2_d182459cd93c
  31eab4ab_7281_1e6c_b17d_12e6ad9de07a["langchain_core.runnables"]
  9a58518d_6eb6_2ed8_7a0d_9b7203681833 --> 31eab4ab_7281_1e6c_b17d_12e6ad9de07a
  121262a1_0bd6_d637_bce3_307ab6b3ecd4["langchain_core.tools"]
  9a58518d_6eb6_2ed8_7a0d_9b7203681833 --> 121262a1_0bd6_d637_bce3_307ab6b3ecd4
  255236bb_6d88_54a6_3d01_c4f086541fd2["langchain_core.tools.render"]
  9a58518d_6eb6_2ed8_7a0d_9b7203681833 --> 255236bb_6d88_54a6_3d01_c4f086541fd2
  baf79d94_a75f_7385_18f7_213b5e5f9034["langchain_classic.agents.format_scratchpad"]
  9a58518d_6eb6_2ed8_7a0d_9b7203681833 --> baf79d94_a75f_7385_18f7_213b5e5f9034
  d79ebf2c_719f_f27c_0650_45945ca0cb1d["langchain_classic.agents.json_chat.prompt"]
  9a58518d_6eb6_2ed8_7a0d_9b7203681833 --> d79ebf2c_719f_f27c_0650_45945ca0cb1d
  711702da_4598_dbb6_2ffc_00d0d650a1b8["langchain_classic.agents.output_parsers"]
  9a58518d_6eb6_2ed8_7a0d_9b7203681833 --> 711702da_4598_dbb6_2ffc_00d0d650a1b8
  style 9a58518d_6eb6_2ed8_7a0d_9b7203681833 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

from collections.abc import Sequence

from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompts.chat import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnablePassthrough
from langchain_core.tools import BaseTool
from langchain_core.tools.render import ToolsRenderer, render_text_description

from langchain_classic.agents.format_scratchpad import format_log_to_messages
from langchain_classic.agents.json_chat.prompt import TEMPLATE_TOOL_RESPONSE
from langchain_classic.agents.output_parsers import JSONAgentOutputParser


def create_json_chat_agent(
    llm: BaseLanguageModel,
    tools: Sequence[BaseTool],
    prompt: ChatPromptTemplate,
    stop_sequence: bool | list[str] = True,  # noqa: FBT001,FBT002
    tools_renderer: ToolsRenderer = render_text_description,
    template_tool_response: str = TEMPLATE_TOOL_RESPONSE,
) -> Runnable:
    r"""Create an agent that uses JSON to format its logic, built for Chat Models.

    Args:
        llm: LLM to use as the agent.
        tools: Tools this agent has access to.
        prompt: The prompt to use. See Prompt section below for more.
        stop_sequence: bool or list of str.
            If `True`, adds a stop token of "Observation:" to avoid hallucination.
            If `False`, does not add a stop token.
            If a list of str, uses the provided list as the stop tokens.

            You may want to set this to `False` if the LLM you are using does not
            support stop sequences.
        tools_renderer: This controls how the tools are converted into a string and
            then passed into the LLM.
        template_tool_response: Template prompt that uses the tool response
            (observation) to make the LLM generate the next action to take.

    Returns:
        A Runnable sequence representing an agent. It takes as input all the same input
        variables as the prompt passed in does. It returns as output either an
        AgentAction or AgentFinish.

    Raises:
        ValueError: If the prompt is missing required variables.
        ValueError: If the template_tool_response is missing
            the required variable 'observation'.

    Example:
        ```python
        from langchain_classic import hub
        from langchain_openai import ChatOpenAI
        from langchain_classic.agents import AgentExecutor, create_json_chat_agent

        prompt = hub.pull("hwchase17/react-chat-json")
        model = ChatOpenAI()
        tools = ...

        agent = create_json_chat_agent(model, tools, prompt)
# ... (136 more lines)
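The three-way `stop_sequence` contract described in the docstring can be illustrated with a small pure-Python sketch. The `normalize_stop_sequence` helper and the exact `"Observation:"` token are illustrative assumptions drawn from the docstring, not the library's internals:

```python
def normalize_stop_sequence(stop_sequence):
    """Hypothetical helper mirroring the documented stop_sequence contract."""
    if stop_sequence is True:
        # True: add the default stop token named in the docstring
        return ["Observation:"]
    if stop_sequence is False:
        # False: bind no stop tokens (for models without stop-sequence support)
        return None
    # A list of strings is used verbatim as the stop tokens
    return list(stop_sequence)


default_stop = normalize_stop_sequence(True)
no_stop = normalize_stop_sequence(False)
custom_stop = normalize_stop_sequence(["\nDone"])
```

Passing `False` is the escape hatch for models that reject a `stop` parameter; a custom list lets callers align the stop tokens with their own prompt format.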

Dependencies

  • collections.abc
  • langchain_classic.agents.format_scratchpad
  • langchain_classic.agents.json_chat.prompt
  • langchain_classic.agents.output_parsers
  • langchain_core.language_models
  • langchain_core.prompts.chat
  • langchain_core.runnables
  • langchain_core.tools
  • langchain_core.tools.render
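Among these dependencies, `langchain_core.tools.render` supplies the default `tools_renderer` (`render_text_description`), which serializes the tool list into the prompt. A minimal pure-Python sketch of what such a text-description renderer produces; the `SimpleTool` class and the renderer function here are stand-ins, not langchain classes:

```python
from dataclasses import dataclass


@dataclass
class SimpleTool:
    # Stand-in for the name/description surface a tool exposes
    name: str
    description: str


def render_text_description_sketch(tools):
    """Render each tool as one 'name: description' line."""
    return "\n".join(f"{t.name}: {t.description}" for t in tools)


tools = [
    SimpleTool("search", "Look up current information."),
    SimpleTool("calculator", "Evaluate arithmetic expressions."),
]
rendered = render_text_description_sketch(tools)
```

Swapping in a different `tools_renderer` lets callers include argument schemas or other metadata in the prompt without changing the agent construction itself.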

Frequently Asked Questions

What does base.py do?
base.py is a source file in the langchain codebase, written in Python. It belongs to the AgentOrchestration domain, ClassicChains subdomain.
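As the name suggests, a JSON chat agent asks the model to express each step as a JSON blob that the output parser turns into an action. A hedged illustration of the kind of payload such a parser typically handles; the exact schema is set by the prompt, so the field names below are an assumption, not the parser's specification:

```python
import json

# Hypothetical model output: a JSON "action blob" selecting a tool
raw = '{"action": "search", "action_input": "weather in SF"}'
blob = json.loads(raw)

# An output parser would map this to an AgentAction-like step,
# or to a finish step when the model signals a final answer
action_name = blob["action"]
action_input = blob["action_input"]
```

Because the model's reply is plain JSON, malformed output surfaces as a parse error rather than a silently wrong action, which is one reason the stop token above matters.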
What functions are defined in base.py?
base.py defines 1 function(s): create_json_chat_agent.
What does base.py depend on?
base.py imports 9 module(s): collections.abc, langchain_classic.agents.format_scratchpad, langchain_classic.agents.json_chat.prompt, langchain_classic.agents.output_parsers, langchain_core.language_models, langchain_core.prompts.chat, langchain_core.runnables, langchain_core.tools, and langchain_core.tools.render.
Where is base.py in the architecture?
base.py is located at libs/langchain/langchain_classic/agents/json_chat/base.py (domain: AgentOrchestration, subdomain: ClassicChains, directory: libs/langchain/langchain_classic/agents/json_chat).
