generate_response() — langchain Function Reference
Architecture documentation for the generate_response() function in test_cache.py from the langchain codebase.
Dependency Diagram
graph TD
    2ef2fddf_1cd3_0353_dd94_9d062db35943["generate_response()"]
    869ed915_e1a9_5e24_b91e_9a20f6bfff96["SimpleFakeChat"]
    2ef2fddf_1cd3_0353_dd94_9d062db35943 -->|defined in| 869ed915_e1a9_5e24_b91e_9a20f6bfff96
    c250d7cc_279f_bb86_05bc_5d88ec01da3b["test_cache_with_generation_objects()"]
    c250d7cc_279f_bb86_05bc_5d88ec01da3b -->|calls| 2ef2fddf_1cd3_0353_dd94_9d062db35943
    873a0144_ec7d_5873_8bb8_65ef4d23a42b["_get_llm_string()"]
    2ef2fddf_1cd3_0353_dd94_9d062db35943 -->|calls| 873a0144_ec7d_5873_8bb8_65ef4d23a42b
    43637354_61fd_e66e_2d55_c3babf4600ab["lookup()"]
    2ef2fddf_1cd3_0353_dd94_9d062db35943 -->|calls| 43637354_61fd_e66e_2d55_c3babf4600ab
    e7ec136e_3f0a_a8b8_4834_5854e272f6e4["update()"]
    2ef2fddf_1cd3_0353_dd94_9d062db35943 -->|calls| e7ec136e_3f0a_a8b8_4834_5854e272f6e4
    style 2ef2fddf_1cd3_0353_dd94_9d062db35943 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/core/tests/unit_tests/language_models/chat_models/test_cache.py lines 332–364
def generate_response(self, prompt: str) -> ChatResult:
    """Simulate the cache lookup and generation logic."""
    llm_string = self._get_llm_string()
    prompt_str = dumps([prompt])
    # Check cache first
    cache_val = self.cache.lookup(prompt_str, llm_string)
    if cache_val:
        # This is where our fix should work
        converted_generations = []
        for gen in cache_val:
            if isinstance(gen, Generation) and not isinstance(
                gen, ChatGeneration
            ):
                # Convert Generation to ChatGeneration by creating an AIMessage
                chat_gen = ChatGeneration(
                    message=AIMessage(content=gen.text),
                    generation_info=gen.generation_info,
                )
                converted_generations.append(chat_gen)
            else:
                converted_generations.append(gen)
        return ChatResult(generations=converted_generations)
    # Generate new response
    chat_gen = ChatGeneration(
        message=AIMessage(content=self.response), generation_info={}
    )
    result = ChatResult(generations=[chat_gen])
    # Store in cache
    self.cache.update(prompt_str, llm_string, result.generations)
    return result
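The cache-hit branch above is the interesting part: cached values may contain plain Generation objects, which must be rebuilt as ChatGeneration objects before a ChatResult can be returned. The sketch below is a standalone illustration of that conversion against an InMemoryCache; the prompt_str and llm_string literals are placeholders invented for the example (the real method derives them via dumps() and _get_llm_string()).

from langchain_core.caches import InMemoryCache
from langchain_core.messages import AIMessage
from langchain_core.outputs import ChatGeneration, Generation

cache = InMemoryCache()
prompt_str = '["hello"]'        # placeholder; the real code uses dumps([prompt])
llm_string = "fake-chat-model"  # placeholder; the real code uses self._get_llm_string()

# Seed the cache with a plain Generation, as an older cache entry might hold.
cache.update(prompt_str, llm_string, [Generation(text="cached answer")])

cached = cache.lookup(prompt_str, llm_string)
converted = [
    ChatGeneration(message=AIMessage(content=gen.text),
                   generation_info=gen.generation_info)
    if isinstance(gen, Generation) and not isinstance(gen, ChatGeneration)
    else gen
    for gen in (cached or [])
]
assert isinstance(converted[0], ChatGeneration)
assert converted[0].message.content == "cached answer"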
Frequently Asked Questions
What does generate_response() do?
generate_response() is a method on the SimpleFakeChat test helper defined in libs/core/tests/unit_tests/language_models/chat_models/test_cache.py. It simulates cache-aware chat generation: it looks the prompt up in the cache, converts any cached plain Generation objects into ChatGeneration objects by wrapping their text in an AIMessage, and on a cache miss builds a new ChatResult from the helper's canned response and stores its generations back into the cache.
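On a cache miss, the method builds a ChatGeneration around the canned response and writes the resulting generations back through the cache. A minimal sketch of that miss path, using an InMemoryCache and placeholder prompt/llm strings purely for illustration:

from langchain_core.caches import InMemoryCache
from langchain_core.messages import AIMessage
from langchain_core.outputs import ChatGeneration, ChatResult

cache = InMemoryCache()
prompt_str, llm_string = '["hi"]', "fake-chat-model"  # placeholders for the example

# Miss: nothing cached yet, so build the result and store its generations.
assert cache.lookup(prompt_str, llm_string) is None
chat_gen = ChatGeneration(message=AIMessage(content="canned response"), generation_info={})
result = ChatResult(generations=[chat_gen])
cache.update(prompt_str, llm_string, result.generations)

# A later lookup with the same keys now returns the stored generations.
assert cache.lookup(prompt_str, llm_string) is not None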
Where is generate_response() defined?
generate_response() is defined in libs/core/tests/unit_tests/language_models/chat_models/test_cache.py at line 332.
What does generate_response() call?
generate_response() calls three functions: _get_llm_string(), lookup(), and update().
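The lookup() and update() calls target langchain_core's cache interface, which keys entries by a prompt string plus an llm_string and stores a sequence of Generation objects. The sketch below is a paraphrase of that interface for orientation, not a verbatim copy of the library source:

from abc import ABC, abstractmethod
from collections.abc import Sequence
from typing import Optional

from langchain_core.outputs import Generation

class CacheLike(ABC):
    """Paraphrased shape of langchain_core.caches.BaseCache."""

    @abstractmethod
    def lookup(self, prompt: str, llm_string: str) -> Optional[Sequence[Generation]]:
        """Return cached generations for (prompt, llm_string), or None on a miss."""

    @abstractmethod
    def update(self, prompt: str, llm_string: str,
               return_val: Sequence[Generation]) -> None:
        """Store generations under (prompt, llm_string)."""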
What calls generate_response()?
generate_response() is called by one function: test_cache_with_generation_objects().