
after_model() — langchain Function Reference

Architecture documentation for the after_model() method of TodoListMiddleware in todo.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  73f986fb_c908_2f34_a4a9_57bb8e22cdaa["after_model()"]
  60552546_6c10_6b0f_fd5d_93ed6530c806["TodoListMiddleware"]
  73f986fb_c908_2f34_a4a9_57bb8e22cdaa -->|defined in| 60552546_6c10_6b0f_fd5d_93ed6530c806
  76ba8848_1b95_5006_0db6_dc2692e6deda["aafter_model()"]
  76ba8848_1b95_5006_0db6_dc2692e6deda -->|calls| 73f986fb_c908_2f34_a4a9_57bb8e22cdaa
  style 73f986fb_c908_2f34_a4a9_57bb8e22cdaa fill:#6366f1,stroke:#818cf8,color:#fff
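
The only inbound edge is from aafter_model(), the async counterpart that delegates to this synchronous hook. The snippet below is a minimal sketch of one plausible delegation pattern, shown for orientation only; the class name ExampleTodoMiddleware and the asyncio.to_thread hand-off are assumptions, not langchain's actual aafter_model() implementation.

    # Illustrative sketch only -- not langchain's aafter_model() code.
    import asyncio
    from typing import Any


    class ExampleTodoMiddleware:  # hypothetical stand-in for TodoListMiddleware
        def after_model(self, state: dict[str, Any], runtime: Any) -> dict[str, Any] | None:
            # The synchronous check shown under Source Code below.
            return None

        async def aafter_model(self, state: dict[str, Any], runtime: Any) -> dict[str, Any] | None:
            # Run the sync hook off the event loop; one plausible way an
            # async hook can defer to its synchronous twin.
            return await asyncio.to_thread(self.after_model, state, runtime)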

Source Code

libs/langchain_v1/langchain/agents/middleware/todo.py lines 256–305

    def after_model(
        self, state: PlanningState[ResponseT], runtime: Runtime[ContextT]
    ) -> dict[str, Any] | None:
        """Check for parallel write_todos tool calls and return errors if detected.

        The todo list is designed to be updated at most once per model turn. Since
        the `write_todos` tool replaces the entire todo list with each call, making
        multiple parallel calls would create ambiguity about which update should take
        precedence. This method prevents such conflicts by rejecting any response that
        contains multiple write_todos tool calls.

        Args:
            state: The current agent state containing messages.
            runtime: The LangGraph runtime instance.

        Returns:
            A dict containing error ToolMessages for each write_todos call if multiple
            parallel calls are detected, otherwise None to allow normal execution.
        """
        messages = state["messages"]
        if not messages:
            return None

        last_ai_msg = next((msg for msg in reversed(messages) if isinstance(msg, AIMessage)), None)
        if not last_ai_msg or not last_ai_msg.tool_calls:
            return None

        # Count write_todos tool calls
        write_todos_calls = [tc for tc in last_ai_msg.tool_calls if tc["name"] == "write_todos"]

        if len(write_todos_calls) > 1:
            # Create error tool messages for all write_todos calls
            error_messages = [
                ToolMessage(
                    content=(
                        "Error: The `write_todos` tool should never be called multiple times "
                        "in parallel. Please call it only once per model invocation to update "
                        "the todo list."
                    ),
                    tool_call_id=tc["id"],
                    status="error",
                )
                for tc in write_todos_calls
            ]

            # Keep the tool calls in the AI message but return error messages
            # This follows the same pattern as HumanInTheLoopMiddleware
            return {"messages": error_messages}

        return None
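
The guard only inspects the tool calls attached to the most recent AIMessage, so its behavior is easy to exercise in isolation. The sketch below mirrors the detection logic using langchain_core message types rather than calling TodoListMiddleware directly; the message contents and tool call IDs are made up for illustration.

    from langchain_core.messages import AIMessage, ToolMessage

    # Simulate a model turn that issued two parallel write_todos calls.
    ai_msg = AIMessage(
        content="",
        tool_calls=[
            {"name": "write_todos", "args": {"todos": []}, "id": "call_1"},
            {"name": "write_todos", "args": {"todos": []}, "id": "call_2"},
        ],
    )

    # Mirror of the guard in after_model(): more than one write_todos call
    # yields an error ToolMessage per offending call instead of executing them.
    write_todos_calls = [tc for tc in ai_msg.tool_calls if tc["name"] == "write_todos"]
    if len(write_todos_calls) > 1:
        errors = [
            ToolMessage(
                content="Error: call write_todos only once per model invocation.",
                tool_call_id=tc["id"],
                status="error",
            )
            for tc in write_todos_calls
        ]
        print({"messages": errors})  # same shape as the dict after_model() returns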

Frequently Asked Questions

What does after_model() do?
after_model() is a TodoListMiddleware hook that inspects the most recent AIMessage for parallel write_todos tool calls and, when more than one is found, returns error ToolMessages for each call instead of letting them execute. It is defined in libs/langchain_v1/langchain/agents/middleware/todo.py.
Where is after_model() defined?
after_model() is defined in libs/langchain_v1/langchain/agents/middleware/todo.py at line 256.
What calls after_model()?
after_model() is called by one function: aafter_model().
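
For context on where the hook fires, the sketch below wires TodoListMiddleware into an agent so that after_model() runs after each model response. It assumes langchain v1 exposes create_agent with a middleware parameter, exports TodoListMiddleware from langchain.agents.middleware, and that the middleware supplies the write_todos tool; verify each of these against your installed version.

    from langchain.agents import create_agent
    from langchain.agents.middleware import TodoListMiddleware

    # Assumed wiring: the middleware's after_model() hook is invoked after
    # every model turn, rejecting responses with parallel write_todos calls.
    agent = create_agent(
        model="openai:gpt-4o-mini",        # assumed model identifier format
        tools=[],                          # write_todos assumed to come from the middleware
        middleware=[TodoListMiddleware()],
    )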
