
test_cache_key_ignores_message_id_async() — langchain Function Reference

Architecture documentation for the test_cache_key_ignores_message_id_async() function in test_cache.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  f0f1bd41_23b2_b461_e046_042fedefa926["test_cache_key_ignores_message_id_async()"]
  51f634bf_713d_3f19_d694_5c6ef3e59c57["test_cache.py"]
  f0f1bd41_23b2_b461_e046_042fedefa926 -->|defined in| 51f634bf_713d_3f19_d694_5c6ef3e59c57
  style f0f1bd41_23b2_b461_e046_042fedefa926 fill:#6366f1,stroke:#818cf8,color:#fff


Source Code

libs/core/tests/unit_tests/language_models/chat_models/test_cache.py lines 509–535

async def test_cache_key_ignores_message_id_async() -> None:
    """Test that message IDs are stripped from cache keys (async).

    Functionally identical messages with different IDs should produce
    the same cache key and result in cache hits.
    """
    local_cache = InMemoryCache()
    model = FakeListChatModel(cache=local_cache, responses=["hello", "goodbye"])

    # First call with a message that has an ID
    msg_with_id_1 = HumanMessage(content="How are you?", id="unique-id-1")
    result_1 = await model.ainvoke([msg_with_id_1])
    assert result_1.content == "hello"

    # Second call with the same content but different ID should hit cache
    msg_with_id_2 = HumanMessage(content="How are you?", id="unique-id-2")
    result_2 = await model.ainvoke([msg_with_id_2])
    # Should get cached response, not "goodbye"
    assert result_2.content == "hello"

    # Third call with no ID should also hit cache
    msg_no_id = HumanMessage(content="How are you?")
    result_3 = await model.ainvoke([msg_no_id])
    assert result_3.content == "hello"

    # Verify only one cache entry exists
    assert len(local_cache._cache) == 1
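
For context, below is a minimal standalone sketch of the behavior this test exercises. The import paths (langchain_core.caches, langchain_core.language_models, langchain_core.messages) are assumptions based on the public langchain_core API and may not match the imports used in test_cache.py itself.

# Sketch: cache keys ignore message IDs, so differing IDs still hit the cache.
# Import paths are assumptions; the actual test module may import differently.
import asyncio

from langchain_core.caches import InMemoryCache
from langchain_core.language_models import FakeListChatModel
from langchain_core.messages import HumanMessage


async def main() -> None:
    cache = InMemoryCache()
    model = FakeListChatModel(cache=cache, responses=["hello", "goodbye"])

    # Same content, different IDs: the second call should be served from cache.
    first = await model.ainvoke([HumanMessage(content="How are you?", id="id-1")])
    second = await model.ainvoke([HumanMessage(content="How are you?", id="id-2")])

    print(first.content)   # "hello"
    print(second.content)  # "hello" (cache hit), not "goodbye"


if __name__ == "__main__":
    asyncio.run(main())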


Frequently Asked Questions

What does test_cache_key_ignores_message_id_async() do?
test_cache_key_ignores_message_id_async() is an async unit test in the langchain codebase, defined in libs/core/tests/unit_tests/language_models/chat_models/test_cache.py. It verifies that message IDs are stripped from chat-model cache keys, so messages that differ only in their id field map to a single cache entry and produce cache hits.
Where is test_cache_key_ignores_message_id_async() defined?
test_cache_key_ignores_message_id_async() is defined in libs/core/tests/unit_tests/language_models/chat_models/test_cache.py at line 509.
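
To run only this test, pytest's -k selector can be used. A minimal sketch via pytest's Python API, assuming the working directory is the langchain repository root with its test dependencies installed:

# Sketch: select and run this single test through pytest.main.
import pytest

pytest.main([
    "libs/core/tests/unit_tests/language_models/chat_models/test_cache.py",
    "-k", "test_cache_key_ignores_message_id_async",
])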
