test_prompt_cache_key_model_kwargs() — langchain Function Reference
Architecture documentation for the test_prompt_cache_key_model_kwargs() function in test_prompt_cache_key.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    1c658a68_c3aa_0fd8_ed95_ff160999e4db["test_prompt_cache_key_model_kwargs()"]
    adfe4542_d9fc_113e_aad7_23a86789157a["test_prompt_cache_key.py"]
    1c658a68_c3aa_0fd8_ed95_ff160999e4db -->|defined in| adfe4542_d9fc_113e_aad7_23a86789157a
    style 1c658a68_c3aa_0fd8_ed95_ff160999e4db fill:#6366f1,stroke:#818cf8,color:#fff
Relationship Graph
Source Code
libs/partners/openai/tests/unit_tests/chat_models/test_prompt_cache_key.py lines 50–68
def test_prompt_cache_key_model_kwargs() -> None:
    """Test prompt_cache_key via model_kwargs and method precedence."""
    messages = [HumanMessage("Hello world")]

    # Test model-level via model_kwargs
    chat = ChatOpenAI(
        model="gpt-4o-mini",
        max_completion_tokens=10,
        model_kwargs={"prompt_cache_key": "model-level-cache"},
    )
    payload = chat._get_request_payload(messages)
    assert "prompt_cache_key" in payload
    assert payload["prompt_cache_key"] == "model-level-cache"

    # Test that per-call cache key overrides model-level
    payload_override = chat._get_request_payload(
        messages, prompt_cache_key="per-call-cache"
    )
    assert payload_override["prompt_cache_key"] == "per-call-cache"
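The precedence rule this test exercises can be sketched without any network access or API key. The helper below is a hypothetical stand-in, not the actual `ChatOpenAI._get_request_payload` internals: it only illustrates the pattern of starting from model-level `model_kwargs` and letting per-call keyword arguments win on conflict.

```python
# Hypothetical sketch of the precedence rule the test exercises:
# per-call kwargs override model-level model_kwargs when building
# the request payload. This is NOT the real ChatOpenAI implementation.

def build_payload(model_kwargs: dict, **call_kwargs) -> dict:
    """Merge model-level kwargs with per-call overrides (call wins)."""
    payload = dict(model_kwargs)   # start from model-level settings
    payload.update(call_kwargs)    # per-call values take precedence
    return payload


model_level = {"prompt_cache_key": "model-level-cache"}

# Model-level value is used when no per-call override is given.
assert build_payload(model_level)["prompt_cache_key"] == "model-level-cache"

# A per-call prompt_cache_key overrides the model-level one.
override = build_payload(model_level, prompt_cache_key="per-call-cache")
assert override["prompt_cache_key"] == "per-call-cache"
```

Because `dict.update` runs after the model-level copy, any key supplied at call time shadows the same key from `model_kwargs`, mirroring the "per-call overrides model-level" behavior the test asserts.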
Frequently Asked Questions
What does test_prompt_cache_key_model_kwargs() do?
test_prompt_cache_key_model_kwargs() is a unit test in the langchain codebase, defined in libs/partners/openai/tests/unit_tests/chat_models/test_prompt_cache_key.py. It verifies that a prompt_cache_key supplied through ChatOpenAI's model_kwargs is included in the request payload, and that a prompt_cache_key passed on an individual call overrides the model-level value.
Where is test_prompt_cache_key_model_kwargs() defined?
test_prompt_cache_key_model_kwargs() is defined in libs/partners/openai/tests/unit_tests/chat_models/test_prompt_cache_key.py at line 50.