test__format_output_cached() — langchain Function Reference
Architecture documentation for the test__format_output_cached() function in test_chat_models.py from the langchain codebase.
Dependency Diagram
graph TD
    e8f16cbb_279d_6f62_63eb_b2487d32cb0d["test__format_output_cached()"]
    18428dc5_a41b_90c6_88ad_615296ee3311["test_chat_models.py"]
    e8f16cbb_279d_6f62_63eb_b2487d32cb0d -->|defined in| 18428dc5_a41b_90c6_88ad_615296ee3311
    style e8f16cbb_279d_6f62_63eb_b2487d32cb0d fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/anthropic/tests/unit_tests/test_chat_models.py lines 223–252
def test__format_output_cached() -> None:
    anthropic_msg = Message(
        id="foo",
        content=[TextBlock(type="text", text="bar")],
        model="baz",
        role="assistant",
        stop_reason=None,
        stop_sequence=None,
        usage=Usage(
            input_tokens=2,
            output_tokens=1,
            cache_creation_input_tokens=3,
            cache_read_input_tokens=4,
        ),
        type="message",
    )
    expected = AIMessage(  # type: ignore[misc]
        "bar",
        usage_metadata={
            "input_tokens": 9,
            "output_tokens": 1,
            "total_tokens": 10,
            "input_token_details": {"cache_creation": 3, "cache_read": 4},
        },
        response_metadata={"model_provider": "anthropic"},
    )
    llm = ChatAnthropic(model=MODEL_NAME, anthropic_api_key="test")  # type: ignore[call-arg]
    actual = llm._format_output(anthropic_msg)
    assert actual.generations[0].message == expected
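The expected values encode cache-aware token accounting: the Usage object's plain input_tokens (2) is combined with cache_creation_input_tokens (3) and cache_read_input_tokens (4) to give 9 input tokens, and 9 + 1 output token gives total_tokens of 10. A minimal sketch of that aggregation follows; the helper name aggregate_usage and the plain-dict shapes are illustrative assumptions, not the library's actual API:

```python
from typing import TypedDict


class InputTokenDetails(TypedDict):
    cache_creation: int
    cache_read: int


class UsageMetadata(TypedDict):
    input_tokens: int
    output_tokens: int
    total_tokens: int
    input_token_details: InputTokenDetails


def aggregate_usage(
    input_tokens: int,
    output_tokens: int,
    cache_creation_input_tokens: int,
    cache_read_input_tokens: int,
) -> UsageMetadata:
    # Cache writes and cache reads are reported separately from plain
    # input tokens, so the combined input count sums all three.
    total_input = (
        input_tokens + cache_creation_input_tokens + cache_read_input_tokens
    )
    return {
        "input_tokens": total_input,
        "output_tokens": output_tokens,
        "total_tokens": total_input + output_tokens,
        "input_token_details": {
            "cache_creation": cache_creation_input_tokens,
            "cache_read": cache_read_input_tokens,
        },
    }


# Mirrors the Usage(...) values in the test above.
usage = aggregate_usage(
    input_tokens=2,
    output_tokens=1,
    cache_creation_input_tokens=3,
    cache_read_input_tokens=4,
)
print(usage["input_tokens"], usage["total_tokens"])  # → 9 10
```

This reproduces the usage_metadata the test expects on the AIMessage, which is why the assertion on `actual.generations[0].message` checks the full metadata dict, not just the text content.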
Frequently Asked Questions
What does test__format_output_cached() do?
test__format_output_cached() is a unit test in the langchain codebase, defined in libs/partners/anthropic/tests/unit_tests/test_chat_models.py. It verifies that ChatAnthropic._format_output() converts an Anthropic Message whose Usage includes cache fields (cache_creation_input_tokens and cache_read_input_tokens) into an AIMessage whose usage_metadata reports the combined input token count (2 + 3 + 4 = 9), the correct total (10), and an input_token_details breakdown of the cache tokens.
Where is test__format_output_cached() defined?
test__format_output_cached() is defined in libs/partners/anthropic/tests/unit_tests/test_chat_models.py at line 223.