test_anthropic_prompt_caching_middleware_async_with_system_prompt() — langchain Function Reference

Architecture documentation for the test_anthropic_prompt_caching_middleware_async_with_system_prompt() function in test_prompt_caching.py from the langchain codebase. This async unit test verifies that AnthropicPromptCachingMiddleware counts the system prompt toward its min_messages_to_cache threshold.

Entity Profile

Dependency Diagram

graph TD
  aa8bb754_d575_ab4e_4fc4_b222d2d17029["test_anthropic_prompt_caching_middleware_async_with_system_prompt()"]
  80b9bbd1_b825_9778_a36c_351fbf1d2478["test_prompt_caching.py"]
  aa8bb754_d575_ab4e_4fc4_b222d2d17029 -->|defined in| 80b9bbd1_b825_9778_a36c_351fbf1d2478
  style aa8bb754_d575_ab4e_4fc4_b222d2d17029 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/anthropic/tests/unit_tests/middleware/test_prompt_caching.py lines 268–303

async def test_anthropic_prompt_caching_middleware_async_with_system_prompt() -> None:
    """Test async path counts system prompt in message count."""
    middleware = AnthropicPromptCachingMiddleware(
        type="ephemeral", ttl="1h", min_messages_to_cache=3
    )

    # Create a mock ChatAnthropic instance
    mock_chat_anthropic = MagicMock(spec=ChatAnthropic)

    # Test with system prompt: 2 messages + 1 system = 3 total (meets minimum)
    fake_request = ModelRequest(
        model=mock_chat_anthropic,
        messages=[HumanMessage("Hello"), HumanMessage("World")],
        system_prompt="You are a helpful assistant",
        tool_choice=None,
        tools=[],
        response_format=None,
        state={"messages": [HumanMessage("Hello"), HumanMessage("World")]},
        runtime=cast(Runtime, object()),
        model_settings={},
    )

    modified_request: ModelRequest | None = None

    async def mock_handler(req: ModelRequest) -> ModelResponse:
        nonlocal modified_request
        modified_request = req
        return ModelResponse(result=[AIMessage(content="mock response")])

    result = await middleware.awrap_model_call(fake_request, mock_handler)
    assert isinstance(result, ModelResponse)
    # Cache control should be added when system prompt pushes count to minimum
    assert modified_request is not None
    assert modified_request.model_settings == {
        "cache_control": {"type": "ephemeral", "ttl": "1h"}
    }


Frequently Asked Questions

What does test_anthropic_prompt_caching_middleware_async_with_system_prompt() do?
test_anthropic_prompt_caching_middleware_async_with_system_prompt() is an async unit test in the langchain codebase, defined in libs/partners/anthropic/tests/unit_tests/middleware/test_prompt_caching.py. It verifies that AnthropicPromptCachingMiddleware's async path counts the system prompt toward the min_messages_to_cache threshold: two messages plus one system prompt meet a minimum of three, so cache_control is added to the request's model_settings.
Where is test_anthropic_prompt_caching_middleware_async_with_system_prompt() defined?
test_anthropic_prompt_caching_middleware_async_with_system_prompt() is defined in libs/partners/anthropic/tests/unit_tests/middleware/test_prompt_caching.py at line 268.
