test_run_limit_with_multiple_human_messages() — langchain Function Reference
Architecture documentation for the test_run_limit_with_multiple_human_messages() function in test_tool_call_limit.py from the langchain codebase.
Dependency Diagram
graph TD
    bd3b4c38_16fa_d821_803e_f242457f5e53["test_run_limit_with_multiple_human_messages()"]
    a75b8390_08d3_7137_c8a7_9d78fc0c4517["test_tool_call_limit.py"]
    bd3b4c38_16fa_d821_803e_f242457f5e53 -->|defined in| a75b8390_08d3_7137_c8a7_9d78fc0c4517
    style bd3b4c38_16fa_d821_803e_f242457f5e53 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/test_tool_call_limit.py lines 317–370
def test_run_limit_with_multiple_human_messages() -> None:
    """Test that run limits reset between invocations.

    Verifies that when using run_limit, the count resets for each new user message,
    allowing execution to continue across multiple invocations in the same thread.
    """

    @tool
    def search(query: str) -> str:
        """Search for information."""
        return f"Results for {query}"

    model = FakeToolCallingModel(
        tool_calls=[
            [ToolCall(name="search", args={"query": "test1"}, id="1")],
            [ToolCall(name="search", args={"query": "test2"}, id="2")],
            [],
        ]
    )

    middleware = ToolCallLimitMiddleware(run_limit=1, exit_behavior="end")

    agent = create_agent(
        model=model, tools=[search], middleware=[middleware], checkpointer=InMemorySaver()
    )

    # First invocation: test1 executes successfully, test2 exceeds limit
    result1 = agent.invoke(
        {"messages": [HumanMessage("Question 1")]},
        {"configurable": {"thread_id": "test_thread"}},
    )

    tool_messages = [msg for msg in result1["messages"] if isinstance(msg, ToolMessage)]
    successful_tool_msgs = [msg for msg in tool_messages if msg.status != "error"]
    error_tool_msgs = [msg for msg in tool_messages if msg.status == "error"]

    ai_limit_msgs = []
    for msg in result1["messages"]:
        if not isinstance(msg, AIMessage):
            continue
        assert isinstance(msg.content, str)
        if "limit" in msg.content.lower() and not msg.tool_calls:
            ai_limit_msgs.append(msg)

    assert len(successful_tool_msgs) == 1, "Should have 1 successful tool execution (test1)"
    assert len(error_tool_msgs) == 1, "Should have 1 artificial error ToolMessage (test2)"
    assert len(ai_limit_msgs) == 1, "Should have AI limit message after test2 proposed"

    # Second invocation: run limit should reset, allowing continued execution
    result2 = agent.invoke(
        {"messages": [HumanMessage("Question 2")]},
        {"configurable": {"thread_id": "test_thread"}},
    )

    assert len(result2["messages"]) > len(result1["messages"]), (
        "Second invocation should add new messages, proving run limit reset"
    )
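The assertions hinge on how exit_behavior="end" reports an exceeded limit: the blocked call (test2) is answered with an artificial error ToolMessage plus a final AIMessage mentioning the limit, while the per-run count starts over when the next HumanMessage arrives. The snippet below is only a toy model of that run-scoped counting semantics, not the middleware's actual implementation; the RunScopedCounter class and its method names are hypothetical.

# Toy model of run-scoped vs. thread-scoped tool-call counting.
# Hypothetical illustration only -- not ToolCallLimitMiddleware's real code.
class RunScopedCounter:
    def __init__(self, run_limit: int) -> None:
        self.run_limit = run_limit
        self.calls_this_run = 0
        self.calls_this_thread = 0  # persists for the life of the thread

    def start_new_run(self) -> None:
        """Called on each new user message: only the run counter resets."""
        self.calls_this_run = 0

    def allow_tool_call(self) -> bool:
        if self.calls_this_run >= self.run_limit:
            return False  # middleware would emit an error ToolMessage and end the run
        self.calls_this_run += 1
        self.calls_this_thread += 1
        return True


counter = RunScopedCounter(run_limit=1)
counter.start_new_run()               # "Question 1"
assert counter.allow_tool_call()      # test1 executes
assert not counter.allow_tool_call()  # test2 is blocked
counter.start_new_run()               # "Question 2" on the same thread
assert counter.allow_tool_call()      # limit has reset, execution continues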
Frequently Asked Questions
What does test_run_limit_with_multiple_human_messages() do?
test_run_limit_with_multiple_human_messages() verifies that ToolCallLimitMiddleware's run_limit resets between invocations on the same thread: the first agent.invoke() is capped after one tool call (the second proposed call receives an artificial error ToolMessage and a limit-notice AIMessage), while a second invoke() with a new HumanMessage executes tools again, proving the per-run count starts fresh. It is defined in libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/test_tool_call_limit.py; a minimal usage sketch follows below.
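For orientation, here is a minimal sketch of the pattern the test exercises: a run-limited middleware on an agent with a checkpointer, invoked twice on the same thread. It is adapted from the test above; the import paths and the model identifier string are assumptions not shown in this excerpt, so verify them against your langchain version.

# Minimal sketch of the pattern under test; import paths and the model id
# are assumed, not taken from this excerpt.
from langchain.agents import create_agent
from langchain.agents.middleware import ToolCallLimitMiddleware
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langgraph.checkpoint.memory import InMemorySaver


@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Results for {query}"


agent = create_agent(
    model="openai:gpt-4o-mini",  # placeholder for any tool-calling model
    tools=[search],
    middleware=[ToolCallLimitMiddleware(run_limit=1, exit_behavior="end")],
    checkpointer=InMemorySaver(),
)

config = {"configurable": {"thread_id": "demo_thread"}}
agent.invoke({"messages": [HumanMessage("Question 1")]}, config)  # capped after one tool call
agent.invoke({"messages": [HumanMessage("Question 2")]}, config)  # run counter starts fresh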
Where is test_run_limit_with_multiple_human_messages() defined?
test_run_limit_with_multiple_human_messages() is defined in libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/test_tool_call_limit.py at line 317.