create_xml_agent() — langchain Function Reference
Architecture documentation for the create_xml_agent() function in base.py from the langchain codebase.
Entity Profile
Dependency Diagram
```mermaid
graph TD
    abd2827a_df43_98ec_7fa3_e2a1c9a42996["create_xml_agent()"]
    c2f0c081_8237_df3b_5e48_c1a86e2e0cd0["base.py"]
    abd2827a_df43_98ec_7fa3_e2a1c9a42996 -->|defined in| c2f0c081_8237_df3b_5e48_c1a86e2e0cd0
    style abd2827a_df43_98ec_7fa3_e2a1c9a42996 fill:#6366f1,stroke:#818cf8,color:#fff
```
Source Code
libs/langchain/langchain_classic/agents/xml/base.py lines 115–236
def create_xml_agent(
llm: BaseLanguageModel,
tools: Sequence[BaseTool],
prompt: BasePromptTemplate,
tools_renderer: ToolsRenderer = render_text_description,
*,
stop_sequence: bool | list[str] = True,
) -> Runnable:
r"""Create an agent that uses XML to format its logic.
Args:
llm: LLM to use as the agent.
tools: Tools this agent has access to.
prompt: The prompt to use, must have input keys
`tools`: contains descriptions for each tool.
`agent_scratchpad`: contains previous agent actions and tool outputs.
tools_renderer: This controls how the tools are converted into a string and
then passed into the LLM.
stop_sequence: bool or list of str.
If `True`, adds a stop token of "</tool_input>" to prevent the model from hallucinating a tool observation.
If `False`, does not add a stop token.
If a list of str, uses the provided list as the stop tokens.
You may want to set this to False if the LLM you are using
does not support stop sequences.
Returns:
A Runnable sequence representing an agent. It takes as input all the same input
variables as the prompt passed in does. It returns as output either an
AgentAction or AgentFinish.
Example:
```python
from langchain_classic import hub
from langchain_anthropic import ChatAnthropic
from langchain_classic.agents import AgentExecutor, create_xml_agent
prompt = hub.pull("hwchase17/xml-agent-convo")
model = ChatAnthropic(model="claude-3-haiku-20240307")
tools = ...
agent = create_xml_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_executor.invoke({"input": "hi"})
# Use with chat history
from langchain_core.messages import AIMessage, HumanMessage
agent_executor.invoke(
{
"input": "what's my name?",
# Notice that chat_history is a string
# since this prompt is aimed at LLMs, not chat models
"chat_history": "Human: My name is Bob\nAI: Hello Bob!",
}
)
```
Prompt:
The prompt must have input keys:
* `tools`: contains descriptions for each tool.
* `agent_scratchpad`: contains previous agent actions and tool outputs as
an XML string.
Here's an example:
```python
from langchain_core.prompts import PromptTemplate
template = '''You are a helpful assistant. Help the user answer any questions.
You have access to the following tools:
{tools}
In order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>
For example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:
<tool>search</tool><tool_input>weather in SF</tool_input>
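The source excerpt above ends partway through the example prompt template. As a complementary, hedged sketch of the `stop_sequence` options described in the Args section, the snippet below builds the agent with an explicit stop list; the `search` tool is a hypothetical stand-in, while the model name and hub prompt are reused from the docstring example:

```python
# Hedged sketch: the `search` tool is illustrative only; the hub prompt and
# model name come from the docstring example above.
from langchain_classic import hub
from langchain_classic.agents import AgentExecutor, create_xml_agent
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool


@tool
def search(query: str) -> str:
    """Hypothetical search tool; replace with a real implementation."""
    return f"results for {query}"


prompt = hub.pull("hwchase17/xml-agent-convo")
model = ChatAnthropic(model="claude-3-haiku-20240307")

# stop_sequence=True (the default) adds a "</tool_input>" stop token;
# an explicit list overrides it, and False skips stop tokens entirely
# (useful for models that do not support stop sequences).
agent = create_xml_agent(
    model,
    [search],
    prompt,
    stop_sequence=["</tool_input>"],
)

agent_executor = AgentExecutor(agent=agent, tools=[search])
agent_executor.invoke({"input": "weather in SF"})
```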
Frequently Asked Questions
What does create_xml_agent() do?
create_xml_agent() builds an agent that formats its reasoning and tool calls as XML, using <tool>, <tool_input>, and <observation> tags. Given an LLM, a sequence of tools, and a prompt, it returns a Runnable that emits either an AgentAction or an AgentFinish. It is defined in libs/langchain/langchain_classic/agents/xml/base.py in the langchain codebase.
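To make the XML protocol concrete, the sketch below runs hand-written model output through the XML agent output parser. The class name and import path (`XMLAgentOutputParser` from `langchain_classic.agents.output_parsers`) are assumptions based on the classic agents package layout, and the `<final_answer>` convention comes from the full prompt template, which is truncated in the excerpt above:

```python
# Assumption: XMLAgentOutputParser is exported from the classic agents
# output_parsers package; verify the import against your installed version.
from langchain_classic.agents.output_parsers import XMLAgentOutputParser

parser = XMLAgentOutputParser()

# A tool call written in the XML format from the prompt template above
# parses into an AgentAction ...
action = parser.parse("<tool>search</tool><tool_input>weather in SF</tool_input>")
print(action.tool, action.tool_input)  # search weather in SF

# ... while a final answer parses into an AgentFinish.
finish = parser.parse("<final_answer>It is sunny in SF.</final_answer>")
print(finish.return_values)  # {'output': 'It is sunny in SF.'}
```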
Where is create_xml_agent() defined?
create_xml_agent() is defined in libs/langchain/langchain_classic/agents/xml/base.py at line 115.
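Assuming the package layout implied by that path, it can be imported from the agents package root (as in the docstring example) or directly from its defining module; the dotted module path below is an inference from the file path, not something stated on this page:

```python
# Re-export used in the docstring example above.
from langchain_classic.agents import create_xml_agent

# Direct module path inferred from libs/langchain/langchain_classic/agents/xml/base.py
# (assumption: the package root is `langchain_classic`).
from langchain_classic.agents.xml.base import create_xml_agent  # noqa: F811
```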