test_local_cache_generate_async() — langchain Function Reference

Architecture documentation for the test_local_cache_generate_async() function in test_cache.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  9892d683_5dd9_4a1e_9ea0_86f2200af98a["test_local_cache_generate_async()"]
  4448d00a_7fa0_afd0_1877_b0eb9e910890["test_cache.py"]
  9892d683_5dd9_4a1e_9ea0_86f2200af98a -->|defined in| 4448d00a_7fa0_afd0_1877_b0eb9e910890
  style 9892d683_5dd9_4a1e_9ea0_86f2200af98a fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/core/tests/unit_tests/language_models/llms/test_cache.py lines 31–44

async def test_local_cache_generate_async() -> None:
    global_cache = InMemoryCache()
    local_cache = InMemoryCache()
    try:
        set_llm_cache(global_cache)
        llm = FakeListLLM(cache=local_cache, responses=["foo", "bar"])
        output = await llm.agenerate(["foo"])
        assert output.generations[0][0].text == "foo"
        output = await llm.agenerate(["foo"])
        assert output.generations[0][0].text == "foo"
        assert global_cache._cache == {}
        assert len(local_cache._cache) == 1
    finally:
        set_llm_cache(None)
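
Because the excerpt starts at line 31, the test module's import block is not shown. The sketch below reproduces the same check as a standalone script; the import paths and the asyncio runner are assumptions based on the public langchain_core package layout, not a copy of the test file's header.

# Hedged sketch: import paths are assumed from the public langchain_core
# layout and are not copied from test_cache.py.
import asyncio

from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_core.language_models import FakeListLLM


async def main() -> None:
    global_cache = InMemoryCache()
    local_cache = InMemoryCache()
    try:
        set_llm_cache(global_cache)
        llm = FakeListLLM(cache=local_cache, responses=["foo", "bar"])
        await llm.agenerate(["foo"])
        await llm.agenerate(["foo"])
        # Mirrors the test's assertions: the repeated prompt is stored in the
        # per-model cache (_cache is the private dict the test itself inspects),
        # while the globally registered cache stays empty.
        print(len(local_cache._cache), len(global_cache._cache))  # expected: 1 0
    finally:
        set_llm_cache(None)


asyncio.run(main())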

Frequently Asked Questions

What does test_local_cache_generate_async() do?
test_local_cache_generate_async() is an async unit test defined in libs/core/tests/unit_tests/language_models/llms/test_cache.py. It verifies that when a FakeListLLM is constructed with its own InMemoryCache via the cache parameter, repeated agenerate() calls for the same prompt are served from that local cache: the second call returns the first response ("foo") rather than the next one ("bar"), the local cache ends up with exactly one entry, and the globally registered cache set via set_llm_cache() remains empty. A contrasting global-cache sketch follows this FAQ.
Where is test_local_cache_generate_async() defined?
test_local_cache_generate_async() is defined in libs/core/tests/unit_tests/language_models/llms/test_cache.py at line 31.
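
For contrast with the local-cache behavior above, the sketch below omits the per-model cache so that generations fall back to the cache registered with set_llm_cache(). This is an illustrative, assumption-based sketch of langchain's documented cache-resolution behavior (a model whose cache field is unset falls back to the global cache), not code from the langchain test suite.

# Hedged contrast sketch (not from test_cache.py): with no cache= argument,
# generations resolve against the globally registered cache instead.
import asyncio

from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_core.language_models import FakeListLLM


async def main() -> None:
    global_cache = InMemoryCache()
    try:
        set_llm_cache(global_cache)
        llm = FakeListLLM(responses=["foo", "bar"])  # no per-model cache
        await llm.agenerate(["foo"])
        await llm.agenerate(["foo"])
        # Both calls use the global cache, so it now holds the cached entry.
        print(len(global_cache._cache))  # expected: 1
    finally:
        set_llm_cache(None)


asyncio.run(main())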
