
InMemoryCache Class — langchain Architecture

Architecture documentation for the InMemoryCache class, defined in caches.py in the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  InMemoryCache["InMemoryCache"]
  BaseCache["BaseCache"]
  InMemoryCache -->|extends| BaseCache
  caches_py["caches.py"]
  InMemoryCache -->|defined in| caches_py
  init["__init__()"]
  InMemoryCache -->|method| init
  lookup["lookup()"]
  InMemoryCache -->|method| lookup
  update["update()"]
  InMemoryCache -->|method| update
  clear["clear()"]
  InMemoryCache -->|method| clear
  alookup["alookup()"]
  InMemoryCache -->|method| alookup
  aupdate["aupdate()"]
  InMemoryCache -->|method| aupdate
  aclear["aclear()"]
  InMemoryCache -->|method| aclear
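
The extends edge points at BaseCache, the abstract cache interface defined earlier in the same caches.py module. A rough sketch of the contract it imposes is below; the method names come from the diagram and the excerpt further down, while the exact signatures are paraphrased rather than copied from the library.

from abc import ABC, abstractmethod
from collections.abc import Sequence
from typing import Any

from langchain_core.outputs import Generation

# The cached value type: a list of Generation objects (or subclasses).
RETURN_VAL_TYPE = Sequence[Generation]


class BaseCache(ABC):
    """Sketch of the interface InMemoryCache implements (paraphrased, not verbatim)."""

    @abstractmethod
    def lookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
        """Return cached generations for (prompt, llm_string), or None on a miss."""

    @abstractmethod
    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        """Store generations under the (prompt, llm_string) key."""

    @abstractmethod
    def clear(self, **kwargs: Any) -> None:
        """Drop every cached entry."""

    # The diagram also lists async counterparts (alookup, aupdate, aclear);
    # assumption: by default they defer to the synchronous methods above.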


Source Code

libs/core/langchain_core/caches.py, lines 155–272 (excerpt; the async methods are omitted here)

class InMemoryCache(BaseCache):
    """Cache that stores things in memory.

    Example:
        ```python
        from langchain_core.caches import InMemoryCache
        from langchain_core.outputs import Generation

        # Initialize cache
        cache = InMemoryCache()

        # Update cache
        cache.update(
            prompt="What is the capital of France?",
            llm_string="model='gpt-3.5-turbo', temperature=0.1",
            return_val=[Generation(text="Paris")],
        )

        # Lookup cache
        result = cache.lookup(
            prompt="What is the capital of France?",
            llm_string="model='gpt-3.5-turbo', temperature=0.1",
        )
        # result is [Generation(text="Paris")]
        ```
    """

    def __init__(self, *, maxsize: int | None = None) -> None:
        """Initialize with empty cache.

        Args:
            maxsize: The maximum number of items to store in the cache.

                If `None`, the cache has no maximum size.

                If the cache exceeds the maximum size, the oldest items are removed.

        Raises:
            ValueError: If `maxsize` is less than or equal to `0`.
        """
        self._cache: dict[tuple[str, str], RETURN_VAL_TYPE] = {}
        if maxsize is not None and maxsize <= 0:
            msg = "maxsize must be greater than 0"
            raise ValueError(msg)
        self._maxsize = maxsize

    def lookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
        """Look up based on `prompt` and `llm_string`.

        Args:
            prompt: A string representation of the prompt.

                In the case of a chat model, the prompt is a non-trivial
                serialization of the prompt into the language model.
            llm_string: A string representation of the LLM configuration.

        Returns:
            On a cache miss, return `None`. On a cache hit, return the cached value.
        """
        return self._cache.get((prompt, llm_string), None)

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        """Update cache based on `prompt` and `llm_string`.

        Args:
            prompt: A string representation of the prompt.

                In the case of a chat model, the prompt is a non-trivial
                serialization of the prompt into the language model.
            llm_string: A string representation of the LLM configuration.
            return_val: The value to be cached.

                The value is a list of `Generation` (or subclasses).
        """
        if self._maxsize is not None and len(self._cache) == self._maxsize:
            del self._cache[next(iter(self._cache))]
        self._cache[prompt, llm_string] = return_val

    @override
    def clear(self, **kwargs: Any) -> None:
        """Clear cache."""
        self._cache = {}

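The excerpt stops at clear(), but the dependency diagram also lists alookup(), aupdate(), and aclear(). A plausible sketch of those omitted methods is below, assuming they are thin async wrappers that delegate to the synchronous implementations; treat the bodies as an assumption rather than a copy of the remaining source lines.

    async def alookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
        """Async look up based on `prompt` and `llm_string` (assumed to delegate)."""
        return self.lookup(prompt, llm_string)

    async def aupdate(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        """Async update cache based on `prompt` and `llm_string` (assumed to delegate)."""
        self.update(prompt, llm_string, return_val)

    async def aclear(self, **kwargs: Any) -> None:
        """Async clear cache (assumed to delegate)."""
        self.clear()
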
Extends

BaseCache — the abstract cache interface defined in the same caches.py module.

Frequently Asked Questions

What is the InMemoryCache class?
InMemoryCache is an implementation of BaseCache that keeps cached LLM generations in a plain Python dictionary in process memory, keyed by the (prompt, llm_string) pair. It is defined in libs/core/langchain_core/caches.py; a usage sketch follows this FAQ.
Where is InMemoryCache defined?
InMemoryCache is defined in libs/core/langchain_core/caches.py at line 155.
What does InMemoryCache extend?
InMemoryCache extends BaseCache.
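
A short usage sketch follows, wiring InMemoryCache in as the process-wide LLM cache and demonstrating the oldest-first eviction that update() performs when maxsize is set. It assumes the set_llm_cache helper from langchain_core.globals; the eviction behavior follows directly from the excerpt above.

from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache  # assumed helper for global registration
from langchain_core.outputs import Generation

# Bounded cache: at most two entries; update() evicts the oldest insertion first.
cache = InMemoryCache(maxsize=2)
cache.update("prompt-1", "llm-config", [Generation(text="one")])
cache.update("prompt-2", "llm-config", [Generation(text="two")])
cache.update("prompt-3", "llm-config", [Generation(text="three")])

assert cache.lookup("prompt-1", "llm-config") is None      # evicted (oldest entry)
assert cache.lookup("prompt-3", "llm-config") is not None  # still cached

# Register the cache globally so model calls reuse cached generations.
set_llm_cache(cache)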
