test_local_cache_async() — langchain Function Reference
Architecture documentation for the test_local_cache_async() function in test_cache.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    9aedbd65_8263_ddf1_572a_6687baf81269["test_local_cache_async()"]
    51f634bf_713d_3f19_d694_5c6ef3e59c57["test_cache.py"]
    9aedbd65_8263_ddf1_572a_6687baf81269 -->|defined in| 51f634bf_713d_3f19_d694_5c6ef3e59c57
    style 9aedbd65_8263_ddf1_572a_6687baf81269 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/core/tests/unit_tests/language_models/chat_models/test_cache.py lines 73–101
async def test_local_cache_async() -> None:
    # Use two separate InMemoryCache instances: one installed globally,
    # one passed directly to the model as its local cache
    global_cache = InMemoryCache()
    local_cache = InMemoryCache()
    try:
        set_llm_cache(global_cache)
        chat_model = FakeListChatModel(
            cache=local_cache, responses=["hello", "goodbye"]
        )
        assert (await chat_model.ainvoke("How are you?")).content == "hello"
        # If the cache works we should get the same response since
        # the prompt is the same
        assert (await chat_model.ainvoke("How are you?")).content == "hello"
        # The global cache should be empty
        assert global_cache._cache == {}
        # The local cache should be populated
        assert len(local_cache._cache) == 1
        llm_result = list(local_cache._cache.values())
        chat_generation = llm_result[0][0]
        assert isinstance(chat_generation, ChatGeneration)
        assert chat_generation.message.content == "hello"
        # Verify that another prompt will trigger the call to the model
        assert chat_model.invoke("meow?").content == "goodbye"
        # The global cache should be empty
        assert global_cache._cache == {}
        # The local cache should be populated
        assert len(local_cache._cache) == 2
    finally:
        set_llm_cache(None)
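The assertions above inspect InMemoryCache's private _cache dict, whose values are the per-prompt lists of generations. The same data is reachable through the public BaseCache interface; the sketch below is illustrative only, using made-up prompt and llm_string values rather than the serialized keys langchain builds internally.

from langchain_core.caches import InMemoryCache
from langchain_core.messages import AIMessage
from langchain_core.outputs import ChatGeneration

cache = InMemoryCache()
generation = ChatGeneration(message=AIMessage(content="hello"))

# update() stores a list of generations under a (prompt, llm_string) key
cache.update(
    prompt="How are you?", llm_string="fake-model-config", return_val=[generation]
)

# lookup() returns the stored list on a hit, or None on a miss
assert cache.lookup("How are you?", "fake-model-config") == [generation]
assert cache.lookup("meow?", "fake-model-config") is None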
Frequently Asked Questions
What does test_local_cache_async() do?
test_local_cache_async() is an async unit test in the langchain codebase, defined in libs/core/tests/unit_tests/language_models/chat_models/test_cache.py. It verifies that a chat model configured with its own local cache reads from and writes to that cache rather than the global LLM cache: repeating the same prompt returns the cached response, the global cache stays empty, and each new prompt adds one ChatGeneration entry to the local cache. The test installs a global cache with set_llm_cache() and resets it to None in a finally block so other tests are unaffected.
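The behavior under test reduces to a short sketch. This is a minimal illustration, not part of the test file; the import paths follow langchain_core's public API, and the prompt strings are arbitrary.

import asyncio

from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_core.language_models import FakeListChatModel


async def main() -> None:
    set_llm_cache(InMemoryCache())  # global cache; should stay unused
    local_cache = InMemoryCache()  # model-level cache takes precedence
    model = FakeListChatModel(cache=local_cache, responses=["hi"])

    first = await model.ainvoke("ping")  # cache miss: response stored locally
    second = await model.ainvoke("ping")  # cache hit: served from local_cache
    assert first.content == second.content == "hi"
    assert len(local_cache._cache) == 1  # exactly one cached entry


asyncio.run(main())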
Where is test_local_cache_async() defined?
test_local_cache_async() is defined in libs/core/tests/unit_tests/language_models/chat_models/test_cache.py at line 73.