base.py — langchain Source File
Architecture documentation for base.py, a Python file in the langchain codebase. 8 imports, 0 dependents.
Entity Profile
Dependency Diagram
graph LR
    117d269f_041a_81c1_4599_9b3ce5564460["base.py"]
    2bf6d401_816d_d011_3b05_a6114f55ff58["collections.abc"]
    117d269f_041a_81c1_4599_9b3ce5564460 --> 2bf6d401_816d_d011_3b05_a6114f55ff58
    e929cf21_6ab8_6ff3_3765_0d35a099a053["langchain_core.language_models"]
    117d269f_041a_81c1_4599_9b3ce5564460 --> e929cf21_6ab8_6ff3_3765_0d35a099a053
    16c7d167_e2e4_cd42_2bc2_d182459cd93c["langchain_core.prompts.chat"]
    117d269f_041a_81c1_4599_9b3ce5564460 --> 16c7d167_e2e4_cd42_2bc2_d182459cd93c
    31eab4ab_7281_1e6c_b17d_12e6ad9de07a["langchain_core.runnables"]
    117d269f_041a_81c1_4599_9b3ce5564460 --> 31eab4ab_7281_1e6c_b17d_12e6ad9de07a
    121262a1_0bd6_d637_bce3_307ab6b3ecd4["langchain_core.tools"]
    117d269f_041a_81c1_4599_9b3ce5564460 --> 121262a1_0bd6_d637_bce3_307ab6b3ecd4
    5c738c12_cc4f_cee1_0e1d_562012a5f844["langchain_core.utils.function_calling"]
    117d269f_041a_81c1_4599_9b3ce5564460 --> 5c738c12_cc4f_cee1_0e1d_562012a5f844
    1b584883_70bf_28d9_739f_aaff0ff692d7["langchain_classic.agents.format_scratchpad.openai_tools"]
    117d269f_041a_81c1_4599_9b3ce5564460 --> 1b584883_70bf_28d9_739f_aaff0ff692d7
    cbe72457_20d2_4fbf_73af_b810c87ce51d["langchain_classic.agents.output_parsers.openai_tools"]
    117d269f_041a_81c1_4599_9b3ce5564460 --> cbe72457_20d2_4fbf_73af_b810c87ce51d
    style 117d269f_041a_81c1_4599_9b3ce5564460 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
from collections.abc import Sequence

from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompts.chat import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnablePassthrough
from langchain_core.tools import BaseTool
from langchain_core.utils.function_calling import convert_to_openai_tool

from langchain_classic.agents.format_scratchpad.openai_tools import (
    format_to_openai_tool_messages,
)
from langchain_classic.agents.output_parsers.openai_tools import (
    OpenAIToolsAgentOutputParser,
)


def create_openai_tools_agent(
    llm: BaseLanguageModel,
    tools: Sequence[BaseTool],
    prompt: ChatPromptTemplate,
    strict: bool | None = None,  # noqa: FBT001
) -> Runnable:
    """Create an agent that uses OpenAI tools.

    Args:
        llm: LLM to use as the agent.
        tools: Tools this agent has access to.
        prompt: The prompt to use. See Prompt section below for more on the expected
            input variables.
        strict: Whether strict mode should be used for OpenAI tools.

    Returns:
        A Runnable sequence representing an agent. It takes as input all the same input
        variables as the prompt passed in does. It returns as output either an
        AgentAction or AgentFinish.

    Raises:
        ValueError: If the prompt is missing required variables.

    Example:
        ```python
        from langchain_classic import hub
        from langchain_openai import ChatOpenAI
        from langchain_classic.agents import (
            AgentExecutor,
            create_openai_tools_agent,
        )

        prompt = hub.pull("hwchase17/openai-tools-agent")
        model = ChatOpenAI()
        tools = ...

        agent = create_openai_tools_agent(model, tools, prompt)
        agent_executor = AgentExecutor(agent=agent, tools=tools)

        agent_executor.invoke({"input": "hi"})

        # Using with chat history
        from langchain_core.messages import AIMessage, HumanMessage

        agent_executor.invoke(
            {
                "input": "what's my name?",
                "chat_history": [
                    HumanMessage(content="hi! my name is bob"),
                    AIMessage(content="Hello Bob! How can I assist you today?"),
                ],
            }
        )
        ```

    Prompt:
        The agent prompt must have an `agent_scratchpad` key that is a
        `MessagesPlaceholder`. Intermediate agent actions and tool output
        messages will be passed in here.

        Here's an example:

        ```python
        from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

        prompt = ChatPromptTemplate.from_messages(
            [
                ("system", "You are a helpful assistant"),
                MessagesPlaceholder("chat_history", optional=True),
                ("human", "{input}"),
                MessagesPlaceholder("agent_scratchpad"),
            ]
        )
        ```
    """
    missing_vars = {"agent_scratchpad"}.difference(
        prompt.input_variables + list(prompt.partial_variables),
    )
    if missing_vars:
        msg = f"Prompt missing required variables: {missing_vars}"
        raise ValueError(msg)

    llm_with_tools = llm.bind(
        tools=[convert_to_openai_tool(tool, strict=strict) for tool in tools],
    )

    return (
        RunnablePassthrough.assign(
            agent_scratchpad=lambda x: format_to_openai_tool_messages(
                x["intermediate_steps"],
            ),
        )
        | prompt
        | llm_with_tools
        | OpenAIToolsAgentOutputParser()
    )
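The validation at the top of the function can be seen in isolation. Here is a minimal plain-Python sketch of that check, with no langchain install required; `check_required_vars` is a hypothetical helper mirroring the set-difference logic above:

```python
# Mirrors the validation in create_openai_tools_agent: the prompt must
# declare "agent_scratchpad" either as an input variable or a partial.
def check_required_vars(input_variables, partial_variables):
    missing = {"agent_scratchpad"}.difference(
        list(input_variables) + list(partial_variables)
    )
    if missing:
        raise ValueError(f"Prompt missing required variables: {missing}")


check_required_vars(["input", "agent_scratchpad"], {})  # ok: no exception

try:
    check_required_vars(["input"], {})
except ValueError as exc:
    print(exc)  # Prompt missing required variables: {'agent_scratchpad'}
```

Note that a variable supplied via `partial_variables` satisfies the check just as well as a regular input variable, which is why the real code concatenates both lists before taking the difference.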
Domain: AgentOrchestration
Subdomain: ClassicChains
Functions: create_openai_tools_agent
Dependencies
- collections.abc
- langchain_classic.agents.format_scratchpad.openai_tools
- langchain_classic.agents.output_parsers.openai_tools
- langchain_core.language_models
- langchain_core.prompts.chat
- langchain_core.runnables
- langchain_core.tools
- langchain_core.utils.function_calling
Frequently Asked Questions
What does base.py do?
base.py is a source file in the langchain codebase, written in Python. It belongs to the AgentOrchestration domain, ClassicChains subdomain.
What functions are defined in base.py?
base.py defines one function: create_openai_tools_agent.
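That function returns a four-stage pipeline: assign `agent_scratchpad`, render the prompt, call the tool-bound model, and parse the result. The `|` composition it relies on can be sketched in plain Python; `Step` below is a hypothetical stand-in for langchain's Runnable, and the model and parser stages are placeholders, not the real implementations:

```python
# Hypothetical stand-in for Runnable "|" composition, illustrating the
# four-stage agent pipeline that create_openai_tools_agent builds.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: run self first, then feed the result to `other`.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)


assign_scratchpad = Step(
    lambda d: {**d, "agent_scratchpad": d.get("intermediate_steps", [])}
)
render_prompt = Step(lambda d: f"user: {d['input']} | scratchpad: {d['agent_scratchpad']}")
call_model = Step(lambda s: s.upper())        # placeholder for the tool-bound LLM
parse_output = Step(lambda s: {"output": s})  # placeholder for the output parser

agent = assign_scratchpad | render_prompt | call_model | parse_output
print(agent.invoke({"input": "hi", "intermediate_steps": []}))
# → {'output': 'USER: HI | SCRATCHPAD: []'}
```

The real pipeline works the same way structurally: `RunnablePassthrough.assign` adds the scratchpad key without discarding the other prompt inputs, so the prompt stage sees both the user's variables and the formatted intermediate steps.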
What does base.py depend on?
base.py imports 8 module(s): collections.abc, langchain_classic.agents.format_scratchpad.openai_tools, langchain_classic.agents.output_parsers.openai_tools, langchain_core.language_models, langchain_core.prompts.chat, langchain_core.runnables, langchain_core.tools, langchain_core.utils.function_calling.
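One of those imports, `convert_to_openai_tool` from langchain_core.utils.function_calling, wraps each `BaseTool` in the OpenAI function-calling schema before it is bound to the model. A hand-built sketch of that schema's shape (`sketch_openai_tool` is a hypothetical helper; the real conversion derives the name, description, and JSON-schema parameters from the tool object itself):

```python
# Illustrative sketch of the OpenAI tool schema shape produced by
# convert_to_openai_tool(tool, strict=...). Field values here are made up.
def sketch_openai_tool(name, description, parameters, strict=None):
    fn = {"name": name, "description": description, "parameters": parameters}
    if strict is not None:
        fn["strict"] = strict  # strict mode constrains arguments to the schema
    return {"type": "function", "function": fn}


tool = sketch_openai_tool(
    "get_weather",
    "Look up current weather for a city.",
    {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    strict=True,
)
print(tool["function"]["name"])  # get_weather
```

This is why the `strict` parameter of `create_openai_tools_agent` is threaded through to the conversion call rather than to the model binding itself.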
Where is base.py in the architecture?
base.py is located at libs/langchain/langchain_classic/agents/openai_tools/base.py (domain: AgentOrchestration, subdomain: ClassicChains, directory: libs/langchain/langchain_classic/agents/openai_tools).