test_cache_short_circuit() — langchain Function Reference
Architecture documentation for the test_cache_short_circuit() function in test_wrap_model_call.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    7844b1b1_45ba_f97d_a3c3_a65b2fc62b6d["test_cache_short_circuit()"]
    e65ae53d_1f12_d0c5_97fd_2d688ea38311["TestShortCircuit"]
    7844b1b1_45ba_f97d_a3c3_a65b2fc62b6d -->|defined in| e65ae53d_1f12_d0c5_97fd_2d688ea38311
    style 7844b1b1_45ba_f97d_a3c3_a65b2fc62b6d fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain_v1/tests/unit_tests/agents/middleware/core/test_wrap_model_call.py lines 499–556
def test_cache_short_circuit(self) -> None:
    """Test middleware that short-circuits with cached response."""
    cache: dict[str, ModelResponse] = {}
    model_calls = []

    class CachingMiddleware(AgentMiddleware):
        def wrap_model_call(
            self,
            request: ModelRequest,
            handler: Callable[[ModelRequest], ModelResponse],
        ) -> ModelCallResult:
            # Simple cache key based on last message
            cache_key = str(request.messages[-1].content) if request.messages else ""
            if cache_key in cache:
                # Short-circuit with cached result
                return cache[cache_key]
            # Execute and cache
            result = handler(request)
            cache[cache_key] = result
            return result

    class TrackingModel(GenericFakeChatModel):
        @override
        def _generate(
            self,
            messages: list[BaseMessage],
            stop: list[str] | None = None,
            run_manager: CallbackManagerForLLMRun | None = None,
            **kwargs: Any,
        ) -> ChatResult:
            model_calls.append(len(messages))
            return super()._generate(messages, **kwargs)

    model = TrackingModel(
        messages=iter(
            [
                AIMessage(content="Response 1"),
                AIMessage(content="Response 2"),
            ]
        )
    )
    agent = create_agent(model=model, middleware=[CachingMiddleware()])

    # First call - cache miss, calls model
    result1 = agent.invoke({"messages": [HumanMessage("Hello")]})
    assert result1["messages"][1].content == "Response 1"
    assert len(model_calls) == 1

    # Second call with same message - cache hit, doesn't call model
    result2 = agent.invoke({"messages": [HumanMessage("Hello")]})
    assert result2["messages"][1].content == "Response 1"
    assert len(model_calls) == 1  # Still 1, no new call

    # Third call with different message - cache miss, calls model
    result3 = agent.invoke({"messages": [HumanMessage("Goodbye")]})
    assert result3["messages"][1].content == "Response 2"
    assert len(model_calls) == 2  # New call
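The core pattern the test exercises — a wrapper that returns a cached result and skips the downstream handler entirely on a cache hit — can be sketched outside of langchain as plain Python. The names below (Request, Response, cached_call, fake_handler) are illustrative stand-ins, not part of the langchain API:

```python
from typing import Callable

# Hypothetical stand-ins for langchain's ModelRequest/ModelResponse,
# used only to illustrate the short-circuit pattern.
Request = str
Response = str

cache: dict[Request, Response] = {}
handler_calls: list[Request] = []  # tracks how often the handler actually runs


def cached_call(request: Request, handler: Callable[[Request], Response]) -> Response:
    """Return a cached response if present; otherwise call the handler and cache it."""
    if request in cache:
        # Short-circuit: the handler (and thus the model) is never invoked.
        return cache[request]
    result = handler(request)
    cache[request] = result
    return result


def fake_handler(request: Request) -> Response:
    handler_calls.append(request)
    return f"response to {request}"


cached_call("Hello", fake_handler)  # cache miss -> handler runs
cached_call("Hello", fake_handler)  # cache hit -> handler skipped
print(len(handler_calls))  # 1
```

As in the test, the wrapper sits between the caller and the handler, so the cache-hit path returns before the model is ever reached; correctness of the short circuit is observed by counting handler invocations rather than inspecting the cache directly.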
Frequently Asked Questions
What does test_cache_short_circuit() do?
test_cache_short_circuit() is a unit test in the langchain codebase, defined on the TestShortCircuit class in libs/langchain_v1/tests/unit_tests/agents/middleware/core/test_wrap_model_call.py. It verifies that an AgentMiddleware whose wrap_model_call hook returns a cached ModelResponse short-circuits the model call: the underlying model is invoked only on cache misses, which the test confirms by counting calls to the model's _generate method.
Where is test_cache_short_circuit() defined?
test_cache_short_circuit() is defined in libs/langchain_v1/tests/unit_tests/agents/middleware/core/test_wrap_model_call.py at line 499.