test_agent_loop() — langchain Function Reference
Architecture documentation for the test_agent_loop() method of ChatModelIntegrationTests, defined in libs/standard-tests/langchain_tests/integration_tests/chat_models.py in the langchain codebase.
Dependency Diagram
graph TD
    849e274b_b44e_7cf4_2548_402b45a5fc07["test_agent_loop()"]
    971e928f_9c9b_ce7a_b93d_e762f2f5aa54["ChatModelIntegrationTests"]
    849e274b_b44e_7cf4_2548_402b45a5fc07 -->|defined in| 971e928f_9c9b_ce7a_b93d_e762f2f5aa54
    style 849e274b_b44e_7cf4_2548_402b45a5fc07 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/standard-tests/langchain_tests/integration_tests/chat_models.py lines 3233–3300
def test_agent_loop(self, model: BaseChatModel) -> None:
    """Test that the model supports a simple ReAct agent loop.

    This test is skipped if the `has_tool_calling` property on the test class is
    set to `False`.

    This test is optional and should be skipped if the model does not support
    tool calling (see configuration below).

    ??? note "Configuration"

        To disable tool calling tests, set `has_tool_calling` to `False` in your
        test class:

        ```python
        class TestMyChatModelIntegration(ChatModelIntegrationTests):
            @property
            def has_tool_calling(self) -> bool:
                return False
        ```

    ??? question "Troubleshooting"

        If this test fails, check that `bind_tools` is implemented to correctly
        translate LangChain tool objects into the appropriate schema for your
        chat model.

        Check also that all required information (e.g., tool calling identifiers)
        from `AIMessage` objects is propagated correctly to model payloads.

        This test may fail if the chat model does not consistently generate tool
        calls in response to an appropriate query. In these cases you can `xfail`
        the test:

        ```python
        @pytest.mark.xfail(reason=("Does not support tool_choice."))
        def test_agent_loop(self, model: BaseChatModel) -> None:
            super().test_agent_loop(model)
        ```

    """
    if not self.has_tool_calling:
        pytest.skip("Test requires tool calling.")

    @tool
    def get_weather(location: str) -> str:  # noqa: ARG001
        """Get the weather at a location."""
        return "It's sunny."

    llm_with_tools = model.bind_tools([get_weather])
    input_message = HumanMessage("What is the weather in San Francisco, CA?")
    tool_call_message = llm_with_tools.invoke([input_message])
    assert isinstance(tool_call_message, AIMessage)

    content_blocks = tool_call_message.content_blocks
    assert any(block["type"] == "tool_call" for block in content_blocks)

    tool_calls = tool_call_message.tool_calls
    assert len(tool_calls) == 1
    tool_call = tool_calls[0]

    tool_message = get_weather.invoke(tool_call)
    assert isinstance(tool_message, ToolMessage)

    response = llm_with_tools.invoke(
        [
            input_message,
            tool_call_message,
            tool_message,
        ]
    )
    assert isinstance(response, AIMessage)
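This test runs automatically when a provider's integration suite subclasses ChatModelIntegrationTests. The sketch below shows a minimal such subclass; ChatMyProvider, its package name, and the model parameters are hypothetical placeholders, and has_tool_calling is shown commented out because it only needs to be overridden for models without tool calling.

```python
# Minimal sketch of a provider integration test class. `ChatMyProvider`,
# `langchain_my_provider`, and the model parameters are hypothetical.
from langchain_core.language_models import BaseChatModel
from langchain_tests.integration_tests import ChatModelIntegrationTests

from langchain_my_provider import ChatMyProvider  # hypothetical package


class TestMyChatModelIntegration(ChatModelIntegrationTests):
    @property
    def chat_model_class(self) -> type[BaseChatModel]:
        # Class under test; test_agent_loop() is inherited and runs against it.
        return ChatMyProvider

    @property
    def chat_model_params(self) -> dict:
        # Constructor kwargs used to instantiate the model for each test.
        return {"model": "my-model-001", "temperature": 0}

    # If the model cannot call tools, disable tool-calling tests
    # (including test_agent_loop) as described in the docstring above:
    # @property
    # def has_tool_calling(self) -> bool:
    #     return False
```

Running pytest against a class like this collects test_agent_loop() along with the rest of the standard integration tests.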
Frequently Asked Questions
What does test_agent_loop() do?
test_agent_loop() is an integration test method on ChatModelIntegrationTests, defined in libs/standard-tests/langchain_tests/integration_tests/chat_models.py. It verifies that a chat model can complete a simple ReAct agent loop: the model is bound to a get_weather tool, asked about the weather in San Francisco, expected to emit exactly one tool call, and then expected to return a final AIMessage after the resulting ToolMessage is passed back. The test is skipped when the test class sets has_tool_calling to False. A sketch of the same loop, written as application code, follows.
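The sketch below mirrors the test body shown above but is written from an application's point of view; ChatMyProvider and its constructor arguments are assumptions, not part of the langchain source.

```python
# Sketch of the ReAct-style loop the test verifies (model class is hypothetical).
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool

from langchain_my_provider import ChatMyProvider  # hypothetical package


@tool
def get_weather(location: str) -> str:
    """Get the weather at a location."""
    return "It's sunny."


llm_with_tools = ChatMyProvider(model="my-model-001").bind_tools([get_weather])

messages = [HumanMessage("What is the weather in San Francisco, CA?")]
ai_msg = llm_with_tools.invoke(messages)  # model should emit a tool call
messages.append(ai_msg)

for tool_call in ai_msg.tool_calls:
    # Invoking a tool with a ToolCall returns a ToolMessage with the result.
    messages.append(get_weather.invoke(tool_call))

final = llm_with_tools.invoke(messages)  # model answers using the tool result
print(final.content)
```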
Where is test_agent_loop() defined?
test_agent_loop() is defined in libs/standard-tests/langchain_tests/integration_tests/chat_models.py at line 3233.