BaseCache Class — langchain Architecture
Architecture documentation for the BaseCache class in caches.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    BaseCache["BaseCache"]
    caches_py["caches.py"]
    lookup["lookup()"]
    update["update()"]
    clear["clear()"]
    alookup["alookup()"]
    aupdate["aupdate()"]
    aclear["aclear()"]
    BaseCache -->|defined in| caches_py
    BaseCache -->|method| lookup
    BaseCache -->|method| update
    BaseCache -->|method| clear
    BaseCache -->|method| alookup
    BaseCache -->|method| aupdate
    BaseCache -->|method| aclear
Source Code
libs/core/langchain_core/caches.py, lines 32–152 (excerpt; repetitive async docstrings abridged)
# Supporting definitions from earlier in caches.py (above the excerpt, abridged):
#
#   from abc import ABC, abstractmethod
#   from collections.abc import Sequence
#   from typing import Any
#
#   from langchain_core.outputs import Generation
#   from langchain_core.runnables import run_in_executor
#
#   RETURN_VAL_TYPE = Sequence[Generation]


class BaseCache(ABC):
    """Interface for a caching layer for LLMs and Chat models.

    The cache interface consists of the following methods:

    - lookup: Look up a value based on a prompt and `llm_string`.
    - update: Update the cache based on a prompt and `llm_string`.
    - clear: Clear the cache.

    In addition, the cache interface provides an async version of each method.

    The default implementation of the async methods is to run the synchronous
    method in an executor. It's recommended to override the async methods
    and provide async implementations to avoid unnecessary overhead.
    """

    @abstractmethod
    def lookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
        """Look up based on `prompt` and `llm_string`.

        A cache implementation is expected to generate a key from the 2-tuple
        of `prompt` and `llm_string` (e.g., by concatenating them with a
        delimiter).

        Args:
            prompt: A string representation of the prompt.
                In the case of a chat model, the prompt is a non-trivial
                serialization of the prompt into the language model.
            llm_string: A string representation of the LLM configuration.
                This is used to capture the invocation parameters of the LLM
                (e.g., model name, temperature, stop tokens, max tokens, etc.).
                These invocation parameters are serialized into a string
                representation.

        Returns:
            On a cache miss, return `None`. On a cache hit, return the cached
            value. The cached value is a list of `Generation` (or subclasses).
        """

    @abstractmethod
    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        """Update cache based on `prompt` and `llm_string`.

        The `prompt` and `llm_string` are used to generate a key for the cache.
        The key should match that of the lookup method.

        Args:
            prompt: A string representation of the prompt.
                In the case of a chat model, the prompt is a non-trivial
                serialization of the prompt into the language model.
            llm_string: A string representation of the LLM configuration.
                This is used to capture the invocation parameters of the LLM
                (e.g., model name, temperature, stop tokens, max tokens, etc.).
                These invocation parameters are serialized into a string
                representation.
            return_val: The value to be cached.
                The value is a list of `Generation` (or subclasses).
        """

    @abstractmethod
    def clear(self, **kwargs: Any) -> None:
        """Clear cache that can take additional keyword arguments."""

    async def alookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
        """Async look up based on `prompt` and `llm_string`.

        A cache implementation is expected to generate a key from the 2-tuple
        of `prompt` and `llm_string` (e.g., by concatenating them with a
        delimiter). Arguments and return value are as for `lookup`.
        """
        # Default: run the synchronous lookup in an executor.
        return await run_in_executor(None, self.lookup, prompt, llm_string)

    async def aupdate(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        """Async update cache based on `prompt` and `llm_string`.

        Arguments are as for `update`.
        """
        # Default: run the synchronous update in an executor.
        return await run_in_executor(None, self.update, prompt, llm_string, return_val)

    async def aclear(self, **kwargs: Any) -> None:
        """Async clear cache that can take additional keyword arguments."""
        # Default: run the synchronous clear in an executor.
        return await run_in_executor(None, self.clear, **kwargs)
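To make the contract concrete, here is a minimal dict-backed sketch of the interface. It is illustrative only and not part of caches.py (langchain_core ships its own InMemoryCache); the tuple key is one straightforward way to realize the (prompt, llm_string) 2-tuple the docstrings describe, and the name SimpleDictCache is hypothetical.

from typing import Any

from langchain_core.caches import RETURN_VAL_TYPE, BaseCache


class SimpleDictCache(BaseCache):
    """Illustrative cache keyed on the (prompt, llm_string) 2-tuple."""

    def __init__(self) -> None:
        self._store: dict[tuple[str, str], RETURN_VAL_TYPE] = {}

    def lookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
        # Cache hit returns the stored list of Generation objects; miss returns None.
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        # Must derive the key exactly as lookup() does, or hits will never occur.
        self._store[(prompt, llm_string)] = return_val

    def clear(self, **kwargs: Any) -> None:
        self._store.clear()

Because the async methods have executor-backed defaults, this subclass is already usable from async code: alookup() transparently runs lookup() in an executor.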
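The class docstring recommends overriding the async methods to avoid that executor round-trip. The following is a sketch of what a natively-async override can look like; the asyncio.Lock stands in for the awaitable I/O a real backend (e.g., Redis) would perform, and AsyncDictCache is a hypothetical name.

import asyncio
from typing import Any

from langchain_core.caches import RETURN_VAL_TYPE, BaseCache


class AsyncDictCache(BaseCache):
    """Illustrative cache with native async overrides."""

    def __init__(self) -> None:
        self._store: dict[tuple[str, str], RETURN_VAL_TYPE] = {}
        self._lock = asyncio.Lock()

    # The sync methods must still be implemented: the ABC marks them abstract.
    def lookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        self._store[(prompt, llm_string)] = return_val

    def clear(self, **kwargs: Any) -> None:
        self._store.clear()

    # Overriding the async methods skips the default run-in-executor hop.
    async def alookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
        async with self._lock:
            return self._store.get((prompt, llm_string))

    async def aupdate(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        async with self._lock:
            self._store[(prompt, llm_string)] = return_val

    async def aclear(self, **kwargs: Any) -> None:
        async with self._lock:
            self._store.clear()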
Frequently Asked Questions
What is the BaseCache class?
BaseCache is the abstract base class that defines langchain's caching interface for LLMs and chat models: lookup, update, and clear, plus async counterparts, all keyed on a (prompt, llm_string) pair. It is defined in libs/core/langchain_core/caches.py.
Where is BaseCache defined?
BaseCache is defined in libs/core/langchain_core/caches.py at line 32.
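How is a BaseCache implementation used?
In broad terms, model wrappers consult the cache around generation: lookup(prompt, llm_string) before calling the model, and update(...) after a miss. A common wiring is to install a cache process-wide; a minimal sketch using real langchain_core exports (InMemoryCache, set_llm_cache):

from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

# Install a global cache; subsequent LLM and chat model calls
# consult it, keyed on (prompt, llm_string).
set_llm_cache(InMemoryCache())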