
after_model() — langchain Function Reference

Architecture documentation for the after_model() function in human_in_the_loop.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  after_model["after_model()"]
  HumanInTheLoopMiddleware["HumanInTheLoopMiddleware"]
  after_model -->|defined in| HumanInTheLoopMiddleware
  aafter_model["aafter_model()"]
  aafter_model -->|calls| after_model
  create_action_and_config["_create_action_and_config()"]
  after_model -->|calls| create_action_and_config
  process_decision["_process_decision()"]
  after_model -->|calls| process_decision
  style after_model fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain_v1/langchain/agents/middleware/human_in_the_loop.py lines 288–373

    def after_model(
        self, state: AgentState[Any], runtime: Runtime[ContextT]
    ) -> dict[str, Any] | None:
        """Trigger interrupt flows for relevant tool calls after an `AIMessage`.

        Args:
            state: The current agent state.
            runtime: The runtime context.

        Returns:
            Updated message with the revised tool calls.

        Raises:
            ValueError: If the number of human decisions does not match the number of
                interrupted tool calls.
        """
        messages = state["messages"]
        if not messages:
            return None

        last_ai_msg = next((msg for msg in reversed(messages) if isinstance(msg, AIMessage)), None)
        if not last_ai_msg or not last_ai_msg.tool_calls:
            return None

        # Create action requests and review configs for tools that need approval
        action_requests: list[ActionRequest] = []
        review_configs: list[ReviewConfig] = []
        interrupt_indices: list[int] = []

        for idx, tool_call in enumerate(last_ai_msg.tool_calls):
            if (config := self.interrupt_on.get(tool_call["name"])) is not None:
                action_request, review_config = self._create_action_and_config(
                    tool_call, config, state, runtime
                )
                action_requests.append(action_request)
                review_configs.append(review_config)
                interrupt_indices.append(idx)

        # If no interrupts needed, return early
        if not action_requests:
            return None

        # Create single HITLRequest with all actions and configs
        hitl_request = HITLRequest(
            action_requests=action_requests,
            review_configs=review_configs,
        )

        # Send interrupt and get response
        decisions = interrupt(hitl_request)["decisions"]

        # Validate that the number of decisions matches the number of interrupt tool calls
        if (decisions_len := len(decisions)) != (interrupt_count := len(interrupt_indices)):
            msg = (
                f"Number of human decisions ({decisions_len}) does not match "
                f"number of hanging tool calls ({interrupt_count})."
            )
            raise ValueError(msg)

        # Process decisions and rebuild tool calls in original order
        revised_tool_calls: list[ToolCall] = []
        artificial_tool_messages: list[ToolMessage] = []
        decision_idx = 0

        for idx, tool_call in enumerate(last_ai_msg.tool_calls):
            if idx in interrupt_indices:
                # This was an interrupt tool call - process the decision
                config = self.interrupt_on[tool_call["name"]]
                decision = decisions[decision_idx]
                decision_idx += 1

                revised_tool_call, tool_message = self._process_decision(
                    decision, tool_call, config
                )
                if revised_tool_call is not None:
                    revised_tool_calls.append(revised_tool_call)
                if tool_message:
                    artificial_tool_messages.append(tool_message)
            else:
                # This was auto-approved - keep original
                revised_tool_calls.append(tool_call)

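Resuming the Interrupt

The excerpt above stops inside the decision loop, but the interrupt round trip it sets up is straightforward: interrupt(hitl_request)["decisions"] pauses the run, surfaces the HITLRequest to the caller, and resumes only when the caller supplies a payload whose "decisions" list has one entry per interrupted tool call, in the original order (otherwise the ValueError above is raised). The sketch below shows that caller side. It assumes the standard LangGraph Command(resume=...) pattern, that an interrupted invoke result exposes the pending payload under the __interrupt__ key (recent LangGraph versions), and a hypothetical compiled agent graph plus a hypothetical decision shape; the actual decision schema lives elsewhere in the middleware and is not shown in this excerpt.

    from langgraph.types import Command

    # Hypothetical: `agent` is a compiled LangGraph graph built with
    # HumanInTheLoopMiddleware and a checkpointer, so the run can be paused
    # and resumed on the same thread_id.
    config = {"configurable": {"thread_id": "example-thread"}}

    # First invocation: when the model emits a tool call listed in
    # interrupt_on, after_model() calls interrupt() and the run pauses here.
    result = agent.invoke(
        {"messages": [{"role": "user", "content": "Delete the staging database"}]},
        config,
    )

    # The pending interrupt carries the HITLRequest that after_model() built
    # (action_requests plus review_configs) for the reviewer to inspect.
    for pending in result.get("__interrupt__", []):
        print(pending.value)

    # Resume with one decision per interrupted tool call, in the same order.
    # after_model() reads the resume payload as interrupt(...)["decisions"],
    # so it must be a dict with a "decisions" list of matching length.
    decisions = [{"type": "approve"}]  # hypothetical decision shape
    result = agent.invoke(Command(resume={"decisions": decisions}), config)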

Frequently Asked Questions

What does after_model() do?
after_model() is the post-model hook of HumanInTheLoopMiddleware, defined in libs/langchain_v1/langchain/agents/middleware/human_in_the_loop.py. After each AIMessage it collects the tool calls whose names appear in interrupt_on, bundles them into a single HITLRequest, pauses execution with interrupt() until human decisions arrive, and then rebuilds the tool calls (plus any artificial ToolMessages) from those decisions, leaving unconfigured tool calls untouched; see the configuration sketch below.
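
The sketch below is based only on the lookup self.interrupt_on.get(tool_call["name"]) in the source above, which implies that interrupt_on maps tool names to per-tool review configs. The import paths are inferred from the file location, and the create_agent call, the example tools, and the use of True as a per-tool config value are illustrative assumptions rather than the middleware's documented API.

    from langchain.agents import create_agent
    from langchain.agents.middleware import HumanInTheLoopMiddleware
    from langchain_core.tools import tool

    @tool
    def delete_database(name: str) -> str:
        """Delete the named database (hypothetical example tool)."""
        return f"Deleted {name}"

    @tool
    def list_tables(name: str) -> str:
        """List tables in the named database (hypothetical example tool)."""
        return f"Tables in {name}: users, orders"

    # Only tool calls whose names appear in interrupt_on trigger an interrupt
    # in after_model(); every other tool call is auto-approved and kept as-is.
    middleware = HumanInTheLoopMiddleware(
        interrupt_on={
            "delete_database": True,  # assumed per-tool config value
        }
    )

    agent = create_agent(
        model="openai:gpt-4o",  # hypothetical model identifier
        tools=[delete_database, list_tables],
        middleware=[middleware],
    )
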
Where is after_model() defined?
after_model() is defined in libs/langchain_v1/langchain/agents/middleware/human_in_the_loop.py at line 288.
What does after_model() call?
after_model() calls two helper methods: _create_action_and_config(), which builds the ActionRequest and ReviewConfig for each tool call that requires review, and _process_decision(), which applies each human decision to its tool call; a sketch of their inferred shapes follows.
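
The shapes below are inferred solely from how after_model() calls these helpers in the listing above; parameter names, annotations, and docstrings are assumptions, not the actual definitions.

    from __future__ import annotations

    # Inferred helper shapes (sketch only). ActionRequest, ReviewConfig,
    # ToolCall, and ToolMessage are the types referenced in the source above.
    class HumanInTheLoopMiddleware:
        def _create_action_and_config(
            self, tool_call, config, state, runtime
        ) -> tuple[ActionRequest, ReviewConfig]:
            """Build the ActionRequest and ReviewConfig for one tool call."""
            ...

        def _process_decision(
            self, decision, tool_call, config
        ) -> tuple[ToolCall | None, ToolMessage | None]:
            """Apply one human decision; return the possibly revised tool call
            (or None to drop it) and an optional artificial ToolMessage."""
            ...
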
What calls after_model()?
after_model() is called by one function: aafter_model().
