test_prompt_cache_key.py — langchain Source File
Architecture documentation for test_prompt_cache_key.py, a Python file in the langchain codebase. 2 imports, 0 dependents.
Entity Profile
Dependency Diagram
graph LR
    88675377_c5a7_deb4_6393_67dec22d158b["test_prompt_cache_key.py"]
    d758344f_537f_649e_f467_b9d7442e86df["langchain_core.messages"]
    0b28cff6_d823_1571_d2bb_ec61508cc89c["langchain_openai"]
    88675377_c5a7_deb4_6393_67dec22d158b --> d758344f_537f_649e_f467_b9d7442e86df
    88675377_c5a7_deb4_6393_67dec22d158b --> 0b28cff6_d823_1571_d2bb_ec61508cc89c
    style 88675377_c5a7_deb4_6393_67dec22d158b fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
"""Unit tests for prompt_cache_key parameter."""
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
def test_prompt_cache_key_parameter_inclusion() -> None:
"""Test that prompt_cache_key parameter is properly included in request payload."""
chat = ChatOpenAI(model="gpt-4o-mini", max_completion_tokens=10)
messages = [HumanMessage("Hello")]
payload = chat._get_request_payload(messages, prompt_cache_key="test-cache-key")
assert "prompt_cache_key" in payload
assert payload["prompt_cache_key"] == "test-cache-key"
def test_prompt_cache_key_parameter_exclusion() -> None:
"""Test that prompt_cache_key parameter behavior matches OpenAI API."""
chat = ChatOpenAI(model="gpt-4o-mini", max_completion_tokens=10)
messages = [HumanMessage("Hello")]
# Test with explicit None (OpenAI should accept None values (marked Optional))
payload = chat._get_request_payload(messages, prompt_cache_key=None)
assert "prompt_cache_key" in payload
assert payload["prompt_cache_key"] is None
def test_prompt_cache_key_per_call() -> None:
"""Test that prompt_cache_key can be passed per-call with different values."""
chat = ChatOpenAI(model="gpt-4o-mini", max_completion_tokens=10)
messages = [HumanMessage("Hello")]
# Test different cache keys per call
payload1 = chat._get_request_payload(messages, prompt_cache_key="cache-v1")
payload2 = chat._get_request_payload(messages, prompt_cache_key="cache-v2")
assert payload1["prompt_cache_key"] == "cache-v1"
assert payload2["prompt_cache_key"] == "cache-v2"
# Test dynamic cache key assignment
cache_keys = ["customer-v1", "support-v1", "feedback-v1"]
for cache_key in cache_keys:
payload = chat._get_request_payload(messages, prompt_cache_key=cache_key)
assert "prompt_cache_key" in payload
assert payload["prompt_cache_key"] == cache_key
def test_prompt_cache_key_model_kwargs() -> None:
"""Test prompt_cache_key via model_kwargs and method precedence."""
messages = [HumanMessage("Hello world")]
# Test model-level via model_kwargs
chat = ChatOpenAI(
model="gpt-4o-mini",
max_completion_tokens=10,
model_kwargs={"prompt_cache_key": "model-level-cache"},
)
payload = chat._get_request_payload(messages)
assert "prompt_cache_key" in payload
assert payload["prompt_cache_key"] == "model-level-cache"
# Test that per-call cache key overrides model-level
payload_override = chat._get_request_payload(
messages, prompt_cache_key="per-call-cache"
)
assert payload_override["prompt_cache_key"] == "per-call-cache"
def test_prompt_cache_key_responses_api() -> None:
"""Test that prompt_cache_key works with Responses API."""
chat = ChatOpenAI(
model="gpt-4o-mini",
use_responses_api=True,
output_version="responses/v1",
max_completion_tokens=10,
)
messages = [HumanMessage("Hello")]
payload = chat._get_request_payload(
messages, prompt_cache_key="responses-api-cache-v1"
)
# prompt_cache_key should be present regardless of API type
assert "prompt_cache_key" in payload
assert payload["prompt_cache_key"] == "responses-api-cache-v1"
Domain
- CoreAbstractions
Subdomains
- MessageSchema
Functions
- test_prompt_cache_key_parameter_inclusion
- test_prompt_cache_key_parameter_exclusion
- test_prompt_cache_key_per_call
- test_prompt_cache_key_model_kwargs
- test_prompt_cache_key_responses_api
Dependencies
- langchain_core.messages
- langchain_openai
Frequently Asked Questions
What does test_prompt_cache_key.py do?
test_prompt_cache_key.py is a Python source file in the langchain codebase containing unit tests for the prompt_cache_key parameter of ChatOpenAI. It belongs to the CoreAbstractions domain, MessageSchema subdomain.
What functions are defined in test_prompt_cache_key.py?
test_prompt_cache_key.py defines 5 functions: test_prompt_cache_key_model_kwargs, test_prompt_cache_key_parameter_exclusion, test_prompt_cache_key_parameter_inclusion, test_prompt_cache_key_per_call, test_prompt_cache_key_responses_api.
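All five are plain pytest functions, so the module can be run in isolation. A minimal sketch, assuming pytest is installed, the working directory is the repository root, and a dummy OPENAI_API_KEY is set (the tests only build payloads and never call the network):

import pytest

# Run only this test module; -q keeps the output terse.
pytest.main(
    ["libs/partners/openai/tests/unit_tests/chat_models/test_prompt_cache_key.py", "-q"]
)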
What does test_prompt_cache_key.py depend on?
test_prompt_cache_key.py imports 2 modules: langchain_core.messages and langchain_openai.
Where is test_prompt_cache_key.py in the architecture?
test_prompt_cache_key.py is located at libs/partners/openai/tests/unit_tests/chat_models/test_prompt_cache_key.py (domain: CoreAbstractions, subdomain: MessageSchema, directory: libs/partners/openai/tests/unit_tests/chat_models).