
test_cache.py — langchain Source File

Architecture documentation for test_cache.py, a Python file in the langchain codebase. 11 imports, 0 dependents.

File · Python · CoreAbstractions / MessageSchema · 11 imports · 16 functions · 3 classes

Entity Profile

Dependency Diagram

graph LR
  51f634bf_713d_3f19_d694_5c6ef3e59c57["test_cache.py"]
  8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3["typing"]
  51f634bf_713d_3f19_d694_5c6ef3e59c57 --> 8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3
  120e2591_3e15_b895_72b6_cb26195e40a6["pytest"]
  51f634bf_713d_3f19_d694_5c6ef3e59c57 --> 120e2591_3e15_b895_72b6_cb26195e40a6
  91721f45_4909_e489_8c1f_084f8bd87145["typing_extensions"]
  51f634bf_713d_3f19_d694_5c6ef3e59c57 --> 91721f45_4909_e489_8c1f_084f8bd87145
  e51e78c8_f355_3edd_309e_1aec4323616a["langchain_core.caches"]
  51f634bf_713d_3f19_d694_5c6ef3e59c57 --> e51e78c8_f355_3edd_309e_1aec4323616a
  85390fd0_d51c_6478_9be2_2d6a9c15d720["langchain_core.globals"]
  51f634bf_713d_3f19_d694_5c6ef3e59c57 --> 85390fd0_d51c_6478_9be2_2d6a9c15d720
  2312f229_c199_ac88_c29f_62e2a2958404["langchain_core.language_models.chat_models"]
  51f634bf_713d_3f19_d694_5c6ef3e59c57 --> 2312f229_c199_ac88_c29f_62e2a2958404
  833aeadc_c3e9_bfcf_db07_ecb37ad3ba24["langchain_core.language_models.fake_chat_models"]
  51f634bf_713d_3f19_d694_5c6ef3e59c57 --> 833aeadc_c3e9_bfcf_db07_ecb37ad3ba24
  36cce5da_d805_04c3_7e86_e1b4dd49b497["langchain_core.load"]
  51f634bf_713d_3f19_d694_5c6ef3e59c57 --> 36cce5da_d805_04c3_7e86_e1b4dd49b497
  d758344f_537f_649e_f467_b9d7442e86df["langchain_core.messages"]
  51f634bf_713d_3f19_d694_5c6ef3e59c57 --> d758344f_537f_649e_f467_b9d7442e86df
  ac2a9b92_4484_491e_1b48_ec85e71e1d58["langchain_core.outputs"]
  51f634bf_713d_3f19_d694_5c6ef3e59c57 --> ac2a9b92_4484_491e_1b48_ec85e71e1d58
  10de79d5_a0f1_d6a2_4c20_abd3ee601196["langchain_core.outputs.chat_result"]
  51f634bf_713d_3f19_d694_5c6ef3e59c57 --> 10de79d5_a0f1_d6a2_4c20_abd3ee601196
  style 51f634bf_713d_3f19_d694_5c6ef3e59c57 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

"""Module tests interaction of chat model with caching abstraction.."""

from typing import Any

import pytest
from typing_extensions import override

from langchain_core.caches import RETURN_VAL_TYPE, BaseCache
from langchain_core.globals import set_llm_cache
from langchain_core.language_models.chat_models import _cleanup_llm_representation
from langchain_core.language_models.fake_chat_models import (
    FakeListChatModel,
    GenericFakeChatModel,
)
from langchain_core.load import dumps
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.outputs import ChatGeneration, Generation
from langchain_core.outputs.chat_result import ChatResult


class InMemoryCache(BaseCache):
    """In-memory cache used for testing purposes."""

    def __init__(self) -> None:
        """Initialize with empty cache."""
        self._cache: dict[tuple[str, str], RETURN_VAL_TYPE] = {}

    def lookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
        """Look up based on `prompt` and `llm_string`."""
        return self._cache.get((prompt, llm_string), None)

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        """Update cache based on `prompt` and `llm_string`."""
        self._cache[prompt, llm_string] = return_val

    @override
    def clear(self, **kwargs: Any) -> None:
        """Clear cache."""
        self._cache = {}


def test_local_cache_sync() -> None:
    """Test that the local cache is being populated but not the global one."""
    global_cache = InMemoryCache()
    local_cache = InMemoryCache()
    try:
        set_llm_cache(global_cache)
        chat_model = FakeListChatModel(
            cache=local_cache, responses=["hello", "goodbye"]
        )
        assert chat_model.invoke("How are you?").content == "hello"
        # If the cache works we should get the same response since
        # the prompt is the same
        assert chat_model.invoke("How are you?").content == "hello"
        # The global cache should be empty
        assert global_cache._cache == {}
        # The local cache should be populated
        assert len(local_cache._cache) == 1
        llm_result = list(local_cache._cache.values())
        chat_generation = llm_result[0][0]
# ... (476 more lines)
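
To see outside pytest what these tests exercise, here is a minimal self-contained sketch (assuming langchain_core is installed; illustrative, not part of the file above). The global cache installed via set_llm_cache() serves any chat model that does not carry its own cache= override, which is exactly the distinction test_local_cache_sync asserts:

from langchain_core.caches import InMemoryCache  # built-in implementation
from langchain_core.globals import set_llm_cache
from langchain_core.language_models.fake_chat_models import FakeListChatModel

set_llm_cache(InMemoryCache())
model = FakeListChatModel(responses=["first", "second"])

assert model.invoke("same prompt").content == "first"
# The second identical call is answered from the global cache, so the
# model's second canned response is never consumed.
assert model.invoke("same prompt").content == "first"

set_llm_cache(None)  # unset the global cache so later code is unaffected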

Dependencies

  • langchain_core.caches
  • langchain_core.globals
  • langchain_core.language_models.chat_models
  • langchain_core.language_models.fake_chat_models
  • langchain_core.load
  • langchain_core.messages
  • langchain_core.outputs
  • langchain_core.outputs.chat_result
  • pytest
  • typing
  • typing_extensions

Frequently Asked Questions

What does test_cache.py do?
test_cache.py is a unit-test file in the langchain codebase, written in Python. It exercises the chat model caching abstraction and belongs to the CoreAbstractions domain, MessageSchema subdomain.
What functions are defined in test_cache.py?
test_cache.py defines 16 functions: test_cache_key_ignores_message_id_async, test_cache_key_ignores_message_id_sync, test_cache_with_generation_objects, test_can_swap_caches, test_cleanup_serialized, test_global_cache_abatch, test_global_cache_async, test_global_cache_batch, test_global_cache_stream, test_global_cache_sync, and 6 more.
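The async variants (test_global_cache_async, test_global_cache_abatch, test_cache_key_ignores_message_id_async) follow the same shape through the model's ainvoke/abatch methods. A minimal sketch of that pattern, assuming langchain_core is installed; illustrative only, not the file's actual code:

import asyncio

from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_core.language_models.fake_chat_models import FakeListChatModel

async def main() -> None:
    set_llm_cache(InMemoryCache())
    model = FakeListChatModel(responses=["first", "second"])
    first = await model.ainvoke("same prompt")
    cached = await model.ainvoke("same prompt")  # served from the cache
    assert first.content == cached.content == "first"
    set_llm_cache(None)

asyncio.run(main())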
What does test_cache.py depend on?
test_cache.py imports 11 modules: langchain_core.caches, langchain_core.globals, langchain_core.language_models.chat_models, langchain_core.language_models.fake_chat_models, langchain_core.load, langchain_core.messages, langchain_core.outputs, langchain_core.outputs.chat_result, pytest, typing, and typing_extensions.
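Most of these imports are visible in the excerpt above. One that is not exercised there is dumps from langchain_core.load, which serializes LangChain objects (messages, generations) to a JSON string. A quick illustration; the envelope fields noted in the comment reflect langchain_core's serialization format:

from langchain_core.load import dumps
from langchain_core.messages import AIMessage

# Serializes to LangChain's JSON envelope: an object with "lc", "type",
# "id" (the import path) and "kwargs" (the constructor arguments).
print(dumps(AIMessage(content="hello"), pretty=True))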
Where is test_cache.py in the architecture?
test_cache.py is located at libs/core/tests/unit_tests/language_models/chat_models/test_cache.py (domain: CoreAbstractions, subdomain: MessageSchema, directory: libs/core/tests/unit_tests/language_models/chat_models).
