test_global_cache_batch() — langchain Function Reference

Architecture documentation for the test_global_cache_batch() function in test_cache.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  1873383d_a315_b483_9f5a_826cd76ddf42["test_global_cache_batch()"]
  51f634bf_713d_3f19_d694_5c6ef3e59c57["test_cache.py"]
  1873383d_a315_b483_9f5a_826cd76ddf42 -->|defined in| 51f634bf_713d_3f19_d694_5c6ef3e59c57
  style 1873383d_a315_b483_9f5a_826cd76ddf42 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/core/tests/unit_tests/language_models/chat_models/test_cache.py lines 217–249

def test_global_cache_batch() -> None:
    global_cache = InMemoryCache()
    try:
        set_llm_cache(global_cache)
        chat_model = FakeListChatModel(
            cache=True, responses=["hello", "goodbye", "meow", "woof"]
        )
        results = chat_model.batch(["first prompt", "second prompt"])
        # These may be in any order
        assert {results[0].content, results[1].content} == {"hello", "goodbye"}

        # Now try with the same prompt
        results = chat_model.batch(["first prompt", "first prompt"])
        # These could be either "hello" or "goodbye" and should be identical
        assert results[0].content == results[1].content
        assert {results[0].content, results[1].content}.issubset({"hello", "goodbye"})

        # RACE CONDITION -- note behavior is different from async
        # Now, reset cache and test the race condition
        # For now we just hard-code the result, if this changes
        # we can investigate further
        global_cache = InMemoryCache()
        set_llm_cache(global_cache)
        assert global_cache._cache == {}
        results = chat_model.batch(
            [
                "prompt",
                "prompt",
            ]
        )
        assert {results[0].content, results[1].content} == {"meow"}
    finally:
        set_llm_cache(None)
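
The caching pattern this test exercises can be reproduced in a few lines. The following is a minimal sketch, not part of the test file, assuming langchain-core's public API (InMemoryCache from langchain_core.caches, set_llm_cache from langchain_core.globals, FakeListChatModel from langchain_core.language_models):

from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_core.language_models import FakeListChatModel

set_llm_cache(InMemoryCache())
try:
    model = FakeListChatModel(cache=True, responses=["hello", "goodbye"])
    first = model.invoke("same prompt")   # cache miss: consumes "hello" and stores it
    second = model.invoke("same prompt")  # cache hit: returns the stored "hello"
    assert first.content == second.content == "hello"
finally:
    set_llm_cache(None)  # reset global state, as the test's finally block does

The final assertion in the test ({"meow"}) documents observed behavior rather than a contract: batch() fans identical prompts out to worker threads, so with a freshly emptied cache both lookups can miss before either result is written, and both calls generate. The source comments flag this as a race condition whose behavior differs from the async path.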

Frequently Asked Questions

What does test_global_cache_batch() do?
test_global_cache_batch() is a unit test in the langchain codebase, defined in libs/core/tests/unit_tests/language_models/chat_models/test_cache.py. It verifies that a chat model's batch() method honors the globally configured LLM cache: two distinct prompts return the two distinct canned responses, a prompt repeated within one batch returns identical cached content, and against a freshly reset cache two identical prompts race, with the test hard-coding the currently observed result.
Where is test_global_cache_batch() defined?
test_global_cache_batch() is defined in libs/core/tests/unit_tests/language_models/chat_models/test_cache.py at line 217.
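
The test also asserts global_cache._cache == {} to confirm the reset cache starts empty; _cache is InMemoryCache's internal dict, keyed by (prompt, llm_string) pairs. The public interface is the lookup()/update() pair inherited from BaseCache, sketched below with made-up key values (illustrative only, not taken from the test file):

from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation

cache = InMemoryCache()
prompt, llm_string = "some prompt", "some serialized llm config"
assert cache.lookup(prompt, llm_string) is None         # empty cache: a miss
cache.update(prompt, llm_string, [Generation(text="hello")])
assert cache.lookup(prompt, llm_string)[0].text == "hello"  # subsequent lookups hit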
