test_human_in_the_loop_middleware_multiple_tools_mixed_responses() — langchain Function Reference

Architecture documentation for the test_human_in_the_loop_middleware_multiple_tools_mixed_responses() function in test_human_in_the_loop.py from the langchain codebase. This unit test exercises HumanInTheLoopMiddleware when a single AI message carries multiple tool calls and the human reviewer returns a mix of approve and reject decisions.

Dependency Diagram

graph TD
  04ea582e_6ad5_09c6_cd55_987419688af4["test_human_in_the_loop_middleware_multiple_tools_mixed_responses()"]
  b9ab5ab1_a37b_d0e1_974a_34ca8a76a788["test_human_in_the_loop.py"]
  04ea582e_6ad5_09c6_cd55_987419688af4 -->|defined in| b9ab5ab1_a37b_d0e1_974a_34ca8a76a788
  style 04ea582e_6ad5_09c6_cd55_987419688af4 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/test_human_in_the_loop.py lines 153–199

def test_human_in_the_loop_middleware_multiple_tools_mixed_responses() -> None:
    """Test HumanInTheLoopMiddleware with multiple tools and mixed response types."""
    middleware = HumanInTheLoopMiddleware(
        interrupt_on={
            "get_forecast": {"allowed_decisions": ["approve", "edit", "reject"]},
            "get_temperature": {"allowed_decisions": ["approve", "edit", "reject"]},
        }
    )

    ai_message = AIMessage(
        content="I'll help you with weather",
        tool_calls=[
            {"name": "get_forecast", "args": {"location": "San Francisco"}, "id": "1"},
            {"name": "get_temperature", "args": {"location": "San Francisco"}, "id": "2"},
        ],
    )
    state = AgentState[Any](messages=[HumanMessage(content="What's the weather?"), ai_message])

    def mock_mixed_responses(_: Any) -> dict[str, Any]:
        return {
            "decisions": [
                {"type": "approve"},
                {"type": "reject", "message": "User rejected this tool call"},
            ]
        }

    with patch(
        "langchain.agents.middleware.human_in_the_loop.interrupt", side_effect=mock_mixed_responses
    ):
        result = middleware.after_model(state, Runtime())
        assert result is not None
        assert "messages" in result
        assert (
            len(result["messages"]) == 2
        )  # AI message with accepted tool call + tool message for rejected

        # First message should be the AI message with both tool calls
        updated_ai_message = result["messages"][0]
        assert len(updated_ai_message.tool_calls) == 2  # Both tool calls remain
        assert updated_ai_message.tool_calls[0]["name"] == "get_forecast"  # Accepted
        assert updated_ai_message.tool_calls[1]["name"] == "get_temperature"  # Got response

        # Second message should be the tool message for the rejected tool call
        tool_message = result["messages"][1]
        assert isinstance(tool_message, ToolMessage)
        assert tool_message.content == "User rejected this tool call"
        assert tool_message.name == "get_temperature"
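
The test invokes after_model() directly with a mocked interrupt. For context, here is a minimal sketch, not taken from the source, of how the same middleware might be attached to an agent in application code; it assumes langchain v1's create_agent accepts a middleware list and a checkpointer (interrupts need one to pause and resume), and the model id and tool objects are placeholders.

from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware
from langgraph.checkpoint.memory import InMemorySaver

# Pause before executing either weather tool and wait for a human decision.
hitl = HumanInTheLoopMiddleware(
    interrupt_on={
        "get_forecast": {"allowed_decisions": ["approve", "edit", "reject"]},
        "get_temperature": {"allowed_decisions": ["approve", "edit", "reject"]},
    }
)

agent = create_agent(
    model="openai:gpt-4o-mini",  # placeholder model identifier
    tools=[get_forecast, get_temperature],  # placeholder tool objects
    middleware=[hitl],
    checkpointer=InMemorySaver(),  # interrupts require a checkpointer
)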

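Once such an agent pauses on the interrupt, the human's decisions come back as a resume payload with the same shape as the test's mock_mixed_responses return value: one decision per tool call, in tool-call order. Below is a hedged sketch of that resume call, assuming langgraph's Command(resume=...) API; the agent handle and thread id are placeholders.

from langgraph.types import Command

# Mirrors mock_mixed_responses in the test above.
resume_payload = {
    "decisions": [
        {"type": "approve"},  # get_forecast runs with its original args
        {"type": "reject", "message": "User rejected this tool call"},  # get_temperature is blocked
    ]
}

result = agent.invoke(
    Command(resume=resume_payload),
    config={"configurable": {"thread_id": "demo-thread"}},  # placeholder thread id
)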

Frequently Asked Questions

What does test_human_in_the_loop_middleware_multiple_tools_mixed_responses() do?
test_human_in_the_loop_middleware_multiple_tools_mixed_responses() is a unit test in the langchain codebase, defined in libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/test_human_in_the_loop.py. It verifies that HumanInTheLoopMiddleware.after_model() correctly handles an AI message with two tool calls when the human returns mixed decisions: both tool calls remain on the updated AI message, and a ToolMessage carrying the rejection text is appended for the rejected get_temperature call.
Where is test_human_in_the_loop_middleware_multiple_tools_mixed_responses() defined?
test_human_in_the_loop_middleware_multiple_tools_mixed_responses() is defined in libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/test_human_in_the_loop.py at line 153.
