
test_human_in_the_loop_middleware_multiple_tools_edit_responses() — langchain Function Reference

Architecture documentation for the test_human_in_the_loop_middleware_multiple_tools_edit_responses() function in test_human_in_the_loop.py from the langchain codebase.

Entity Profile

Name: test_human_in_the_loop_middleware_multiple_tools_edit_responses()
Kind: unit test function
File: libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/test_human_in_the_loop.py
Lines: 202–252

Dependency Diagram

graph TD
  15e7607d_dd77_d078_b165_a89691bcd063["test_human_in_the_loop_middleware_multiple_tools_edit_responses()"]
  b9ab5ab1_a37b_d0e1_974a_34ca8a76a788["test_human_in_the_loop.py"]
  15e7607d_dd77_d078_b165_a89691bcd063 -->|defined in| b9ab5ab1_a37b_d0e1_974a_34ca8a76a788
  style 15e7607d_dd77_d078_b165_a89691bcd063 fill:#6366f1,stroke:#818cf8,color:#fff


Source Code

libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/test_human_in_the_loop.py lines 202–252

def test_human_in_the_loop_middleware_multiple_tools_edit_responses() -> None:
    """Test HumanInTheLoopMiddleware with multiple tools and edit responses."""
    middleware = HumanInTheLoopMiddleware(
        interrupt_on={
            "get_forecast": {"allowed_decisions": ["approve", "edit", "reject"]},
            "get_temperature": {"allowed_decisions": ["approve", "edit", "reject"]},
        }
    )

    ai_message = AIMessage(
        content="I'll help you with weather",
        tool_calls=[
            {"name": "get_forecast", "args": {"location": "San Francisco"}, "id": "1"},
            {"name": "get_temperature", "args": {"location": "San Francisco"}, "id": "2"},
        ],
    )
    state = AgentState[Any](messages=[HumanMessage(content="What's the weather?"), ai_message])

    def mock_edit_responses(_: Any) -> dict[str, Any]:
        return {
            "decisions": [
                {
                    "type": "edit",
                    "edited_action": Action(
                        name="get_forecast",
                        args={"location": "New York"},
                    ),
                },
                {
                    "type": "edit",
                    "edited_action": Action(
                        name="get_temperature",
                        args={"location": "New York"},
                    ),
                },
            ]
        }

    with patch(
        "langchain.agents.middleware.human_in_the_loop.interrupt", side_effect=mock_edit_responses
    ):
        result = middleware.after_model(state, Runtime())
        assert result is not None
        assert "messages" in result
        assert len(result["messages"]) == 1

        updated_ai_message = result["messages"][0]
        assert updated_ai_message.tool_calls[0]["args"] == {"location": "New York"}
        assert updated_ai_message.tool_calls[0]["id"] == "1"  # ID preserved
        assert updated_ai_message.tool_calls[1]["args"] == {"location": "New York"}
        assert updated_ai_message.tool_calls[1]["id"] == "2"  # ID preserved
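
The test exercises a standard stubbing pattern: unittest.mock.patch swaps the module-level interrupt function for a side_effect callable, so after_model receives two canned "edit" decisions instead of pausing for a real person; the assertions then confirm that both tool calls' args were rewritten while their IDs survived. Below is a self-contained sketch of the same pattern with no langchain dependency. The Gate class and everything in it are hypothetical stand-ins for illustration, not langchain APIs.

from typing import Any
from unittest.mock import patch


class Gate:
    """Hypothetical stand-in for middleware that asks a human before acting."""

    def interrupt(self, request: dict[str, Any]) -> dict[str, Any]:
        raise RuntimeError("would block waiting on human input")

    def run(self, tool_calls: list[dict[str, Any]]) -> list[dict[str, Any]]:
        decisions = self.interrupt({"tool_calls": tool_calls})["decisions"]
        edited = []
        for call, decision in zip(tool_calls, decisions):
            if decision["type"] == "edit":
                # Apply the human's edit but keep the original call id,
                # mirroring the "ID preserved" assertions in the test above.
                edited.append({**call, "args": decision["args"]})
            else:
                edited.append(call)
        return edited


gate = Gate()
canned = {"decisions": [{"type": "edit", "args": {"location": "New York"}}] * 2}
with patch.object(gate, "interrupt", side_effect=lambda _: canned):
    calls = [
        {"name": "get_forecast", "args": {"location": "San Francisco"}, "id": "1"},
        {"name": "get_temperature", "args": {"location": "San Francisco"}, "id": "2"},
    ]
    result = gate.run(calls)
    assert all(c["args"] == {"location": "New York"} for c in result)
    assert [c["id"] for c in result] == ["1", "2"]  # IDs preserved

Because side_effect accepts any callable, the canned payload could just as easily be computed from the incoming interrupt request; here, as in the test above, the request is simply ignored.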


Frequently Asked Questions

What does test_human_in_the_loop_middleware_multiple_tools_edit_responses() do?
test_human_in_the_loop_middleware_multiple_tools_edit_responses() is a unit test in the langchain codebase, defined in libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/test_human_in_the_loop.py. It configures HumanInTheLoopMiddleware to interrupt on two tools (get_forecast and get_temperature), stubs the interrupt call to return two "edit" decisions, and asserts that both tool calls come back with the edited args while their original tool-call IDs are preserved.
Where is test_human_in_the_loop_middleware_multiple_tools_edit_responses() defined?
test_human_in_the_loop_middleware_multiple_tools_edit_responses() is defined in libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/test_human_in_the_loop.py at line 202.
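
How can I run this test?
Assuming a checkout of the langchain repository with pytest installed, the standard node-id selection runs just this test. The snippet below, run from the repository root, is equivalent to passing the same node id to pytest on the command line.

import pytest

pytest.main([
    "libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/"
    "test_human_in_the_loop.py"
    "::test_human_in_the_loop_middleware_multiple_tools_edit_responses",
])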
