test_streaming_cache_token_reporting() — langchain Function Reference
Architecture documentation for the test_streaming_cache_token_reporting() function in test_chat_models.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    e6f4e3f4_f134_304f_c266_ee752a48d011["test_streaming_cache_token_reporting()"]
    18428dc5_a41b_90c6_88ad_615296ee3311["test_chat_models.py"]
    e6f4e3f4_f134_304f_c266_ee752a48d011 -->|defined in| 18428dc5_a41b_90c6_88ad_615296ee3311
    style e6f4e3f4_f134_304f_c266_ee752a48d011 fill:#6366f1,stroke:#818cf8,color:#fff
Relationship Graph
Source Code
libs/partners/anthropic/tests/unit_tests/test_chat_models.py lines 1591–1661
def test_streaming_cache_token_reporting() -> None:
    """Test that cache tokens are properly reported in streaming events."""
    from unittest.mock import MagicMock

    from anthropic.types import MessageDeltaUsage

    from langchain_anthropic.chat_models import _make_message_chunk_from_anthropic_event

    # Create a mock message_start event
    mock_message = MagicMock()
    mock_message.model = MODEL_NAME
    mock_message.usage.input_tokens = 100
    mock_message.usage.output_tokens = 0
    mock_message.usage.cache_read_input_tokens = 25
    mock_message.usage.cache_creation_input_tokens = 10

    message_start_event = MagicMock()
    message_start_event.type = "message_start"
    message_start_event.message = mock_message

    # Create a mock message_delta event with complete usage info
    mock_delta_usage = MessageDeltaUsage(
        output_tokens=50,
        input_tokens=100,
        cache_read_input_tokens=25,
        cache_creation_input_tokens=10,
    )
    mock_delta = MagicMock()
    mock_delta.stop_reason = "end_turn"
    mock_delta.stop_sequence = None

    message_delta_event = MagicMock()
    message_delta_event.type = "message_delta"
    message_delta_event.usage = mock_delta_usage
    message_delta_event.delta = mock_delta

    # Test message_start event
    start_chunk, _ = _make_message_chunk_from_anthropic_event(
        message_start_event,
        stream_usage=True,
        coerce_content_to_string=True,
        block_start_event=None,
    )

    # Test message_delta event - should contain complete usage metadata (w/ cache)
    delta_chunk, _ = _make_message_chunk_from_anthropic_event(
        message_delta_event,
        stream_usage=True,
        coerce_content_to_string=True,
        block_start_event=None,
    )

    # Verify message_delta has complete usage_metadata including cache tokens
    assert start_chunk is not None, "message_start should produce a chunk"
    assert getattr(start_chunk, "usage_metadata", None) is None, (
        "message_start should not have usage_metadata"
    )

    assert delta_chunk is not None, "message_delta should produce a chunk"
    assert delta_chunk.usage_metadata is not None, (
        "message_delta should have usage_metadata"
    )
    assert "input_token_details" in delta_chunk.usage_metadata
    input_details = delta_chunk.usage_metadata["input_token_details"]
    assert input_details.get("cache_read") == 25
    assert input_details.get("cache_creation") == 10

    # Verify totals are correct: 100 base + 25 cache_read + 10 cache_creation = 135
    assert delta_chunk.usage_metadata["input_tokens"] == 135
    assert delta_chunk.usage_metadata["output_tokens"] == 50
    assert delta_chunk.usage_metadata["total_tokens"] == 185
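The totals asserted at the end of the test follow the convention that cache-read and cache-creation tokens are folded into input_tokens. A minimal sketch of that arithmetic, using a hypothetical helper (combine_usage is illustrative, not part of langchain_anthropic):

```python
def combine_usage(
    base_input: int, output: int, cache_read: int, cache_creation: int
) -> dict:
    """Illustrative sketch of the usage_metadata totals the test asserts.

    Cache tokens are counted as part of input_tokens, and total_tokens is
    simply input_tokens + output_tokens.
    """
    input_tokens = base_input + cache_read + cache_creation
    return {
        "input_tokens": input_tokens,
        "output_tokens": output,
        "total_tokens": input_tokens + output,
        "input_token_details": {
            "cache_read": cache_read,
            "cache_creation": cache_creation,
        },
    }


# The same numbers as the mocked events above:
usage = combine_usage(base_input=100, output=50, cache_read=25, cache_creation=10)
# input_tokens: 100 + 25 + 10 = 135; total_tokens: 135 + 50 = 185
```

This makes the expected values in the assertions easy to check by hand: 135 input tokens, 50 output tokens, 185 total.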
Frequently Asked Questions
What does test_streaming_cache_token_reporting() do?
test_streaming_cache_token_reporting() is a unit test in the langchain codebase, defined in libs/partners/anthropic/tests/unit_tests/test_chat_models.py. It verifies that cache token counts (cache_read and cache_creation) from Anthropic streaming events are reported in the usage_metadata of the resulting message chunks: message_start events should produce a chunk without usage_metadata, while message_delta events should carry complete usage_metadata including cache token details and correct totals.
Where is test_streaming_cache_token_reporting() defined?
test_streaming_cache_token_reporting() is defined in libs/partners/anthropic/tests/unit_tests/test_chat_models.py at line 1591.