bind_tools() — langchain Function Reference
Architecture documentation for the bind_tools() function in chat_models.py from the langchain codebase.
Dependency Diagram
```mermaid
graph TD
    b4ebc2e5_c582_39ab_6403_117a559ab366["bind_tools()"]
    977b57b2_5d0e_bcf4_a43e_b52857105005["ChatAnthropic"]
    b4ebc2e5_c582_39ab_6403_117a559ab366 -->|defined in| 977b57b2_5d0e_bcf4_a43e_b52857105005
    57f22736_087b_6915_6548_5529978001fa["_get_llm_for_structured_output_when_thinking_is_enabled()"]
    57f22736_087b_6915_6548_5529978001fa -->|calls| b4ebc2e5_c582_39ab_6403_117a559ab366
    a484b53c_8c1c_5314_de44_ea0330c8aed2["with_structured_output()"]
    a484b53c_8c1c_5314_de44_ea0330c8aed2 -->|calls| b4ebc2e5_c582_39ab_6403_117a559ab366
    f0c80129_8b9d_d60d_2093_7e4d90e48a1e["_is_builtin_tool()"]
    b4ebc2e5_c582_39ab_6403_117a559ab366 -->|calls| f0c80129_8b9d_d60d_2093_7e4d90e48a1e
    b9533de7_77f0_ac15_f27f_92372b897314["convert_to_anthropic_tool()"]
    b4ebc2e5_c582_39ab_6403_117a559ab366 -->|calls| b9533de7_77f0_ac15_f27f_92372b897314
    style b4ebc2e5_c582_39ab_6403_117a559ab366 fill:#6366f1,stroke:#818cf8,color:#fff
```
Source Code
libs/partners/anthropic/langchain_anthropic/chat_models.py lines 1444–1546
def bind_tools(
self,
tools: Sequence[Mapping[str, Any] | type | Callable | BaseTool],
*,
tool_choice: dict[str, str] | str | None = None,
parallel_tool_calls: bool | None = None,
strict: bool | None = None,
**kwargs: Any,
) -> Runnable[LanguageModelInput, AIMessage]:
r"""Bind tool-like objects to `ChatAnthropic`.
Args:
tools: A list of tool definitions to bind to this chat model.
Supports Anthropic format tool schemas and any tool definition handled
by [`convert_to_openai_tool`][langchain_core.utils.function_calling.convert_to_openai_tool].
tool_choice: Which tool to require the model to call. Options are:
- Name of the tool as a string or as dict `{"type": "tool", "name": "<<tool_name>>"}`: calls corresponding tool
- `'auto'`, `{"type": "auto"}`, or `None`: automatically selects a tool (including no tool)
- `'any'` or `{"type": "any"}`: forces at least one tool to be called
parallel_tool_calls: Set to `False` to disable parallel tool use.
Defaults to `None` (no specification, which allows parallel tool use).
!!! version-added "Added in `langchain-anthropic` 0.3.2"
strict: If `True`, Claude's schema adherence is applied to tool calls.
See the [docs](https://docs.langchain.com/oss/python/integrations/chat/anthropic#strict-tool-use) for more info.
kwargs: Any additional parameters are passed directly to `bind`.
Example:
```python
from langchain_anthropic import ChatAnthropic
from pydantic import BaseModel, Field
class GetWeather(BaseModel):
'''Get the current weather in a given location'''
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
class GetPrice(BaseModel):
'''Get the price of a specific product.'''
product: str = Field(..., description="The product to look up.")
model = ChatAnthropic(model="claude-sonnet-4-5-20250929", temperature=0)
model_with_tools = model.bind_tools([GetWeather, GetPrice])
model_with_tools.invoke(
"What is the weather like in San Francisco",
)
# -> AIMessage(
# content=[
# {'text': '<thinking>\nBased on the user\'s question, the relevant function to call is GetWeather, which requires the "location" parameter.\n\nThe user has directly specified the location as "San Francisco". Since San Francisco is a well known city, I can reasonably infer they mean San Francisco, CA without needing the state specified.\n\nAll the required parameters are provided, so I can proceed with the API call.\n</thinking>', 'type': 'text'},
# {'text': None, 'type': 'tool_use', 'id': 'toolu_01SCgExKzQ7eqSkMHfygvYuu', 'name': 'GetWeather', 'input': {'location': 'San Francisco, CA'}}
# ],
# response_metadata={'id': 'msg_01GM3zQtoFv8jGQMW7abLnhi', 'model': 'claude-sonnet-4-5-20250929', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 487, 'output_tokens': 145}},
# id='run-87b1331e-9251-4a68-acef-f0a018b639cc-0'
# )
```
""" # noqa: E501
# Allows built-in tools either by their:
# - Raw `dict` format
# - Extracting extras["provider_tool_definition"] if provided on a BaseTool
formatted_tools = [
tool
if _is_builtin_tool(tool)
else convert_to_anthropic_tool(tool, strict=strict)
for tool in tools
]
if not tool_choice:
pass
elif isinstance(tool_choice, dict):
kwargs["tool_choice"] = tool_choice
elif isinstance(tool_choice, str) and tool_choice in ("any", "auto"):
kwargs["tool_choice"] = {"type": tool_choice}
elif isinstance(tool_choice, str):
kwargs["tool_choice"] = {"type": "tool", "name": tool_choice}
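The `tool_choice` branching above can be mirrored as a standalone helper to show what each accepted value is normalized to. The function below is a hypothetical sketch written for this page, not part of `langchain_anthropic`; it only reproduces the branch logic shown in the excerpt (the raise on unrecognized input is an addition for illustration).

```python
from __future__ import annotations

from typing import Any


def normalize_tool_choice(
    tool_choice: dict[str, str] | str | None,
) -> dict[str, Any] | None:
    """Sketch mirroring the tool_choice branches in bind_tools().

    Returns the dict that would be sent to the Anthropic API, or None
    when the model is left free to choose (or skip) a tool.
    """
    if not tool_choice:
        # Falsy values leave tool choice unspecified.
        return None
    if isinstance(tool_choice, dict):
        # Already in Anthropic's native format; passed through as-is.
        return dict(tool_choice)
    if isinstance(tool_choice, str) and tool_choice in ("any", "auto"):
        # Generic modes: "any" forces some tool call, "auto" lets the
        # model decide.
        return {"type": tool_choice}
    if isinstance(tool_choice, str):
        # Any other string is treated as the name of a specific tool.
        return {"type": "tool", "name": tool_choice}
    raise ValueError(f"Unrecognized tool_choice: {tool_choice!r}")
```

For example, `normalize_tool_choice("GetWeather")` yields `{"type": "tool", "name": "GetWeather"}`, which is the shape the Anthropic API expects for forcing a specific tool.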
Frequently Asked Questions
What does bind_tools() do?
bind_tools() binds tool-like objects (Anthropic-format tool schemas, Pydantic models, callables, or BaseTool instances) to a ChatAnthropic model, returning a Runnable that includes the formatted tool definitions with every request. It is defined in libs/partners/anthropic/langchain_anthropic/chat_models.py.
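For reference, an Anthropic-format tool schema is a plain dict with a name, a description, and a JSON Schema under `input_schema`. The example below is hand-written for illustration (mirroring the `GetWeather` tool from the docstring above), not generated by the library:

```python
# A hand-written tool definition in Anthropic's native format.
# bind_tools() accepts dicts shaped like this directly; Pydantic models,
# callables, and BaseTool instances are converted into this shape for you.
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather in a given location.",
    "input_schema": {
        # Standard JSON Schema describing the tool's arguments.
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA",
            }
        },
        "required": ["location"],
    },
}
```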
Where is bind_tools() defined?
bind_tools() is defined in libs/partners/anthropic/langchain_anthropic/chat_models.py at line 1444.
What does bind_tools() call?
bind_tools() calls 2 function(s): _is_builtin_tool, convert_to_anthropic_tool.
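The two callees implement a simple dispatch, visible in the list comprehension in the source above: built-in (server-side) tools pass through untouched, and everything else is converted. The sketch below uses simplified stand-ins for both helpers; the real predicates in `langchain_anthropic` are more thorough, so treat `is_builtin_tool` and `convert_tool` here as illustrative stubs only.

```python
from typing import Any


def is_builtin_tool(tool: Any) -> bool:
    # Stand-in for _is_builtin_tool: treat dicts carrying a "type" key
    # (e.g. Anthropic server-side tools) as built-in and pass them through.
    return isinstance(tool, dict) and "type" in tool


def convert_tool(tool: Any) -> dict[str, Any]:
    # Stand-in for convert_to_anthropic_tool: wrap anything else in the
    # name/description/input_schema shape Anthropic expects. The real
    # converter derives a full JSON Schema from the tool's signature.
    return {
        "name": getattr(tool, "__name__", str(tool)),
        "description": (tool.__doc__ or "").strip(),
        "input_schema": {"type": "object", "properties": {}},
    }


def format_tools(tools: list[Any]) -> list[Any]:
    # Same comprehension shape as in bind_tools(): built-ins pass
    # through untouched, everything else is converted.
    return [t if is_builtin_tool(t) else convert_tool(t) for t in tools]


def get_time() -> str:
    """Return the current time."""
    return ""
```

Calling `format_tools([{"type": "web_search_20250305", "name": "web_search"}, get_time])` returns the built-in dict unchanged and a converted schema for `get_time`.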
What calls bind_tools()?
bind_tools() is called by 2 function(s): _get_llm_for_structured_output_when_thinking_is_enabled, with_structured_output.
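Both callers use bind_tools() to implement structured output: the schema is bound as the only tool and the model is forced to call it. A rough sketch of that pattern, using a recording stub in place of ChatAnthropic (both `FakeChatModel` and `with_structured_output_sketch` are hypothetical names written for this page):

```python
from typing import Any


class FakeChatModel:
    """Recording stub standing in for ChatAnthropic (illustrative only)."""

    def __init__(self) -> None:
        self.bound_kwargs: dict[str, Any] = {}

    def bind_tools(self, tools: list[Any], **kwargs: Any) -> "FakeChatModel":
        # Record what would be bound onto the runnable.
        self.bound_kwargs = {"tools": tools, **kwargs}
        return self


def with_structured_output_sketch(
    model: FakeChatModel, schema: dict[str, Any]
) -> FakeChatModel:
    # Bind the schema as a tool and force the model to call exactly that
    # tool by name, which is how structured output is built on top of
    # tool calling.
    return model.bind_tools([schema], tool_choice=schema["name"])
```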