before_model() — langchain Function Reference
Architecture documentation for the before_model() function in model_call_limit.py from the langchain codebase.
Dependency Diagram
graph TD
    42246a70_bac1_7ccd_4e78_d9a0900cd789["before_model()"]
    190da0f4_c252_0e20_de0d_30a94a781fc9["ModelCallLimitMiddleware"]
    42246a70_bac1_7ccd_4e78_d9a0900cd789 -->|defined in| 190da0f4_c252_0e20_de0d_30a94a781fc9
    4985bec8_d229_380b_2c59_773aa3599fcd["abefore_model()"]
    4985bec8_d229_380b_2c59_773aa3599fcd -->|calls| 42246a70_bac1_7ccd_4e78_d9a0900cd789
    e2f72737_9375_a63d_9045_66bd6c4593f9["_build_limit_exceeded_message()"]
    42246a70_bac1_7ccd_4e78_d9a0900cd789 -->|calls| e2f72737_9375_a63d_9045_66bd6c4593f9
    style 42246a70_bac1_7ccd_4e78_d9a0900cd789 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain_v1/langchain/agents/middleware/model_call_limit.py lines 168–210
def before_model(
    self, state: ModelCallLimitState[ResponseT], runtime: Runtime[ContextT]
) -> dict[str, Any] | None:
    """Check model call limits before making a model call.

    Args:
        state: The current agent state containing call counts.
        runtime: The langgraph runtime.

    Returns:
        If limits are exceeded and exit_behavior is `'end'`, returns
        a `Command` to jump to the end with a limit exceeded message. Otherwise
        returns `None`.

    Raises:
        ModelCallLimitExceededError: If limits are exceeded and `exit_behavior`
            is `'error'`.
    """
    thread_count = state.get("thread_model_call_count", 0)
    run_count = state.get("run_model_call_count", 0)

    # Check if any limits will be exceeded after the next call
    thread_limit_exceeded = self.thread_limit is not None and thread_count >= self.thread_limit
    run_limit_exceeded = self.run_limit is not None and run_count >= self.run_limit

    if thread_limit_exceeded or run_limit_exceeded:
        if self.exit_behavior == "error":
            raise ModelCallLimitExceededError(
                thread_count=thread_count,
                run_count=run_count,
                thread_limit=self.thread_limit,
                run_limit=self.run_limit,
            )
        if self.exit_behavior == "end":
            # Create a message indicating the limit was exceeded
            limit_message = _build_limit_exceeded_message(
                thread_count, run_count, self.thread_limit, self.run_limit
            )
            limit_ai_message = AIMessage(content=limit_message)
            return {"jump_to": "end", "messages": [limit_ai_message]}

    return None
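The sketch below shows how this hook is typically exercised: ModelCallLimitMiddleware is constructed with the limits that before_model() reads (thread_limit, run_limit, exit_behavior) and attached to an agent. The constructor keyword names mirror the attributes used in the source above; the create_agent import path and the model identifier are assumptions, so treat this as an illustrative sketch rather than the canonical setup.

from langchain.agents import create_agent
from langchain.agents.middleware import ModelCallLimitMiddleware

# Allow at most 10 model calls per thread and 3 per run. With
# exit_behavior="end", before_model() ends the run with a limit-exceeded
# AIMessage instead of raising ModelCallLimitExceededError.
limit_middleware = ModelCallLimitMiddleware(
    thread_limit=10,
    run_limit=3,
    exit_behavior="end",
)

agent = create_agent(
    model="openai:gpt-4o-mini",  # placeholder model identifier
    tools=[],
    middleware=[limit_middleware],
)

On each model step the agent runtime calls before_model() with the current state; once run_model_call_count reaches 3 (or thread_model_call_count reaches 10), the hook short-circuits the run with the jump_to "end" update shown in the source.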
Frequently Asked Questions
What does before_model() do?
before_model() is the pre-model hook of ModelCallLimitMiddleware, defined in libs/langchain_v1/langchain/agents/middleware/model_call_limit.py. Before each model call it reads thread_model_call_count and run_model_call_count from the agent state and compares them against the configured thread_limit and run_limit. If either limit has been reached, it raises ModelCallLimitExceededError when exit_behavior is 'error', or returns a {"jump_to": "end", ...} update carrying a limit-exceeded AIMessage when exit_behavior is 'end'. Otherwise it returns None and the model call proceeds.
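When the middleware is configured with exit_behavior='error', the exception raised by before_model() can be caught around the agent invocation. A minimal sketch, assuming an agent wired up as in the example above and an exception import path inferred from the file path shown in this document:

from langchain.agents.middleware.model_call_limit import ModelCallLimitExceededError

try:
    result = agent.invoke({"messages": [{"role": "user", "content": "Summarize the report."}]})
except ModelCallLimitExceededError:
    # before_model() raised because thread_limit or run_limit was reached
    # before the next model call could be made.
    result = None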
Where is before_model() defined?
before_model() is defined in libs/langchain_v1/langchain/agents/middleware/model_call_limit.py at line 168.
What does before_model() call?
before_model() calls one function: _build_limit_exceeded_message(), which constructs the limit-exceeded message returned when exit_behavior is 'end'.
What calls before_model()?
before_model() is called by one function: abefore_model(), the asynchronous variant of this hook.
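The dependency graph above shows abefore_model() calling this function. A hedged sketch of what such a delegation can look like (the real async implementation in langchain may differ, for example by offloading to a thread):

async def abefore_model(
    self, state: ModelCallLimitState[ResponseT], runtime: Runtime[ContextT]
) -> dict[str, Any] | None:
    # The limit check is pure bookkeeping on in-memory state, so the async
    # hook can simply reuse the synchronous implementation.
    return self.before_model(state, runtime)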