test_human_in_the_loop_middleware_sequence_mismatch() — langchain Function Reference

Architecture documentation for the test_human_in_the_loop_middleware_sequence_mismatch() function in test_human_in_the_loop.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  fb0cd7f8_6d35_3442_0882_c84a0ec1a595["test_human_in_the_loop_middleware_sequence_mismatch()"]
  b9ab5ab1_a37b_d0e1_974a_34ca8a76a788["test_human_in_the_loop.py"]
  fb0cd7f8_6d35_3442_0882_c84a0ec1a595 -->|defined in| b9ab5ab1_a37b_d0e1_974a_34ca8a76a788
  style fb0cd7f8_6d35_3442_0882_c84a0ec1a595 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/test_human_in_the_loop.py lines 493–536

def test_human_in_the_loop_middleware_sequence_mismatch() -> None:
    """Test that sequence mismatch in resume raises an error."""
    middleware = HumanInTheLoopMiddleware(interrupt_on={"test_tool": True})

    ai_message = AIMessage(
        content="I'll help you",
        tool_calls=[{"name": "test_tool", "args": {"input": "test"}, "id": "1"}],
    )
    state = AgentState[Any](messages=[HumanMessage(content="Hello"), ai_message])

    # Test with too few responses
    with (
        patch(
            "langchain.agents.middleware.human_in_the_loop.interrupt",
            return_value={"decisions": []},  # No responses for 1 tool call
        ),
        pytest.raises(
            ValueError,
            match=re.escape(
                "Number of human decisions (0) does not match number of hanging tool calls (1)."
            ),
        ),
    ):
        middleware.after_model(state, Runtime())

    # Test with too many responses
    with (
        patch(
            "langchain.agents.middleware.human_in_the_loop.interrupt",
            return_value={
                "decisions": [
                    {"type": "approve"},
                    {"type": "approve"},
                ]
            },  # 2 responses for 1 tool call
        ),
        pytest.raises(
            ValueError,
            match=re.escape(
                "Number of human decisions (2) does not match number of hanging tool calls (1)."
            ),
        ),
    ):
        middleware.after_model(state, Runtime())
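The check exercised by this test can be approximated with a minimal standalone sketch. Note that `validate_decisions` is a hypothetical helper written for illustration; the actual validation lives inside `HumanInTheLoopMiddleware.after_model` in `langchain.agents.middleware.human_in_the_loop` and may be structured differently:

```python
# Hypothetical sketch of the decision-count validation the test exercises.
# The real implementation is inside HumanInTheLoopMiddleware.after_model.
def validate_decisions(decisions: list[dict], tool_calls: list[dict]) -> None:
    """Raise ValueError when the decision count differs from the tool-call count."""
    if len(decisions) != len(tool_calls):
        raise ValueError(
            f"Number of human decisions ({len(decisions)}) does not match "
            f"number of hanging tool calls ({len(tool_calls)})."
        )

tool_calls = [{"name": "test_tool", "args": {"input": "test"}, "id": "1"}]

# Too few decisions: 0 responses for 1 hanging tool call.
try:
    validate_decisions([], tool_calls)
except ValueError as e:
    print(e)

# Too many decisions: 2 responses for 1 hanging tool call.
try:
    validate_decisions([{"type": "approve"}, {"type": "approve"}], tool_calls)
except ValueError as e:
    print(e)
```

Both error messages mirror the strings the test matches against with `pytest.raises(..., match=...)`.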

Frequently Asked Questions

What does test_human_in_the_loop_middleware_sequence_mismatch() do?
test_human_in_the_loop_middleware_sequence_mismatch() is a unit test in the langchain codebase, defined in libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/test_human_in_the_loop.py. It verifies that HumanInTheLoopMiddleware.after_model raises a ValueError when the number of human decisions returned from an interrupt does not match the number of hanging tool calls, covering both too few decisions (0 for 1 tool call) and too many (2 for 1 tool call).
Where is test_human_in_the_loop_middleware_sequence_mismatch() defined?
test_human_in_the_loop_middleware_sequence_mismatch() is defined in libs/langchain_v1/tests/unit_tests/agents/middleware/implementations/test_human_in_the_loop.py at line 493.
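One detail worth noting from the source: the test wraps the expected error message in re.escape because pytest.raises(..., match=...) treats the pattern as a regular expression (applied with re.search), so literal parentheses like "(0)" would otherwise be parsed as capture groups. A small standalone illustration:

```python
import re

# pytest.raises(..., match=...) applies the pattern with re.search, so
# literal parentheses in the expected message must be escaped; re.escape
# does exactly that.
message = "Number of human decisions (0) does not match number of hanging tool calls (1)."

escaped = re.escape(message)
assert re.search(escaped, message)      # escaped pattern matches the message literally

assert re.fullmatch(r"\(0\)", "(0)")    # escaped parens match the literal text "(0)"
assert re.fullmatch(r"(0)", "0")        # unescaped parens form a group matching just "0"
```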
