test_caching() — langchain Function Reference
Architecture documentation for the test_caching() function in test_base.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
  af91bab6_0b7a_58a5_e81a_0398ef603357["test_caching()"]
  aecd824f_70b2_ac20_4966_65af1dc88831["test_base.py"]
  af91bab6_0b7a_58a5_e81a_0398ef603357 -->|defined in| aecd824f_70b2_ac20_4966_65af1dc88831
  style af91bab6_0b7a_58a5_e81a_0398ef603357 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain/tests/unit_tests/llms/test_base.py lines 21–45
def test_caching() -> None:
    """Test caching behavior."""
    set_llm_cache(InMemoryCache())
    llm = FakeLLM()
    params = llm.dict()
    params["stop"] = None
    llm_string = str(sorted([(k, v) for k, v in params.items()]))
    cache = get_llm_cache()
    assert cache is not None
    cache.update("foo", llm_string, [Generation(text="fizz")])
    output = llm.generate(["foo", "bar", "foo"])
    expected_cache_output = [Generation(text="foo")]
    cache_output = cache.lookup("bar", llm_string)
    assert cache_output == expected_cache_output
    set_llm_cache(None)
    expected_generations = [
        [Generation(text="fizz")],
        [Generation(text="foo")],
        [Generation(text="fizz")],
    ]
    expected_output = LLMResult(
        generations=expected_generations,
        llm_output=None,
    )
    assert output == expected_output
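The pre-seeded cache entry explains the expected generations: both "foo" prompts are served from the cache ("fizz"), while the uncached "bar" prompt falls through to FakeLLM, which answers "foo" and is then written back to the cache, so the final lookup for "bar" returns that generation. The sketch below exercises the same cache interface directly; it is a minimal illustration, and the import paths are assumptions based on the current langchain-core layout (older releases expose the same names under langchain.cache, langchain.globals, and langchain.schema).

# Minimal sketch of the BaseCache interface the test exercises.
# Assumption: langchain-core style import paths; the llm_string key below
# is illustrative, not the serialized parameters the test builds.
from langchain_core.caches import InMemoryCache
from langchain_core.globals import get_llm_cache, set_llm_cache
from langchain_core.outputs import Generation

set_llm_cache(InMemoryCache())   # install a process-wide cache
cache = get_llm_cache()

# Entries are keyed by (prompt, llm_string).
llm_string = "fake-llm-params"
cache.update("foo", llm_string, [Generation(text="fizz")])

# A lookup with the same key returns the stored generations instead of
# re-invoking the model; an unseen prompt returns None.
assert cache.lookup("foo", llm_string) == [Generation(text="fizz")]
assert cache.lookup("bar", llm_string) is None

set_llm_cache(None)              # disable caching again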
Frequently Asked Questions
What does test_caching() do?
test_caching() is a unit test in the langchain codebase that verifies LLM response caching. It installs an InMemoryCache via set_llm_cache(), seeds the cache with the generation "fizz" for the prompt "foo", then calls FakeLLM.generate(["foo", "bar", "foo"]) and asserts that cached prompts return the stored generation while the uncached "bar" prompt falls through to the fake model and has its result cached in turn. It finishes by clearing the global cache with set_llm_cache(None).
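In application code, the same global cache is typically enabled once and then used implicitly by model calls. The following is a hypothetical usage sketch, assuming langchain-openai is installed and an API key is configured; the model name is only illustrative.

# Hypothetical application-level use of the cache behavior the test verifies.
# Assumptions: langchain-openai is installed, OPENAI_API_KEY is set,
# and the model name is illustrative.
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_openai import ChatOpenAI

set_llm_cache(InMemoryCache())
llm = ChatOpenAI(model="gpt-4o-mini")

first = llm.invoke("Tell me a joke")    # hits the provider API
second = llm.invoke("Tell me a joke")   # identical prompt, served from the cache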
Where is test_caching() defined?
test_caching() is defined in libs/langchain/tests/unit_tests/llms/test_base.py at line 21.