tool_example_to_messages() — langchain Function Reference
Architecture documentation for the tool_example_to_messages() function in function_calling.py from the langchain codebase.
Entity Profile
Dependency Diagram
```mermaid
graph TD
    5698c317_1512_4bd9_d73a_5eabccbd039a["tool_example_to_messages()"]
    344b2838_87a8_d5dc_b550_fdb443ff6c4e["function_calling.py"]
    5698c317_1512_4bd9_d73a_5eabccbd039a -->|defined in| 344b2838_87a8_d5dc_b550_fdb443ff6c4e
    style 5698c317_1512_4bd9_d73a_5eabccbd039a fill:#6366f1,stroke:#818cf8,color:#fff
```
Source Code
libs/core/langchain_core/utils/function_calling.py, lines 604–705
def tool_example_to_messages(
    input: str,
    tool_calls: list[BaseModel],
    tool_outputs: list[str] | None = None,
    *,
    ai_response: str | None = None,
) -> list[BaseMessage]:
    """Convert an example into a list of messages that can be fed into an LLM.

    This code is an adapter that converts a single example to a list of messages
    that can be fed into a chat model.

    The list of messages per example by default corresponds to:

    1. `HumanMessage`: contains the content from which content should be extracted.
    2. `AIMessage`: contains the extracted information from the model
    3. `ToolMessage`: contains confirmation to the model that the model requested a
       tool correctly.

    If `ai_response` is specified, there will be a final `AIMessage` with that
    response.

    The `ToolMessage` is required because some chat models are hyper-optimized for
    agents rather than for an extraction use case.

    Args:
        input: The user input
        tool_calls: Tool calls represented as Pydantic BaseModels
        tool_outputs: Tool call outputs.
            Does not need to be provided.
            If not provided, a placeholder value will be inserted.
        ai_response: If provided, content for a final `AIMessage`.

    Returns:
        A list of messages

    Examples:

        ```python
        from typing import Optional

        from pydantic import BaseModel, Field
        from langchain_openai import ChatOpenAI


        class Person(BaseModel):
            '''Information about a person.'''

            name: str | None = Field(..., description="The name of the person")
            hair_color: str | None = Field(
                ..., description="The color of the person's hair if known"
            )
            height_in_meters: str | None = Field(..., description="Height in METERS")


        examples = [
            (
                "The ocean is vast and blue. It's more than 20,000 feet deep.",
                Person(name=None, height_in_meters=None, hair_color=None),
            ),
            (
                "Fiona traveled far from France to Spain.",
                Person(name="Fiona", height_in_meters=None, hair_color=None),
            ),
        ]

        messages = []

        for txt, tool_call in examples:
            messages.extend(tool_example_to_messages(txt, [tool_call]))
        ```
    """
    messages: list[BaseMessage] = [HumanMessage(content=input)]
    openai_tool_calls = [
        {
            "id": str(uuid.uuid4()),
            "type": "function",
            "function": {
                # The name of the function right now corresponds to the name
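The excerpt above ends partway through the construction of `openai_tool_calls`, but the docstring already states what the returned list looks like: a `HumanMessage` with the input, an `AIMessage` carrying the tool calls, a `ToolMessage` per call (with a placeholder output when `tool_outputs` is omitted), and a trailing `AIMessage` when `ai_response` is given. Below is a minimal sketch of exercising that behavior, assuming `langchain-core` is installed; the `Person` model is hypothetical and used only for illustration.

```python
from pydantic import BaseModel, Field

from langchain_core.utils.function_calling import tool_example_to_messages


# Hypothetical schema used only to illustrate the call; any Pydantic model works.
class Person(BaseModel):
    """Information about a person."""

    name: str | None = Field(None, description="The name of the person")


messages = tool_example_to_messages(
    "Fiona traveled far from France to Spain.",
    [Person(name="Fiona")],
    ai_response="Detected one person: Fiona.",
)

# Per the docstring, this should print HumanMessage, AIMessage, ToolMessage,
# and a final AIMessage (because ai_response was provided).
for message in messages:
    print(type(message).__name__)
```

Because `tool_outputs` is omitted here, the `ToolMessage` content is the placeholder confirmation described in the docstring rather than a real tool result.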
Frequently Asked Questions
What does tool_example_to_messages() do?
tool_example_to_messages() converts a single example, consisting of a user input and one or more tool calls expressed as Pydantic models, into a list of chat messages that can be fed to a chat model: a HumanMessage with the input, an AIMessage carrying the tool calls, a ToolMessage confirming each call (using a placeholder if no tool outputs are supplied), and optionally a final AIMessage when ai_response is given. It is defined in libs/core/langchain_core/utils/function_calling.py in the langchain codebase.
Where is tool_example_to_messages() defined?
tool_example_to_messages() is defined in libs/core/langchain_core/utils/function_calling.py at line 604.
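Assuming the repository path maps onto the published langchain-core package in the usual way, the function should be importable as sketched below (not verified against any specific release).

```python
# libs/core/langchain_core/utils/function_calling.py corresponds to this module:
from langchain_core.utils.function_calling import tool_example_to_messages
```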