
aupdate_cache() — langchain Function Reference

Architecture documentation for the aupdate_cache() function, defined in libs/core/langchain_core/language_models/llms.py in the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  aupdate_cache["aupdate_cache()"]
  llms_py["llms.py"]
  aupdate_cache -->|defined in| llms_py
  agenerate["agenerate()"]
  agenerate -->|calls| aupdate_cache
  resolve_cache["_resolve_cache()"]
  aupdate_cache -->|calls| resolve_cache
  style aupdate_cache fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/core/langchain_core/language_models/llms.py lines 259–289

async def aupdate_cache(
    cache: BaseCache | bool | None,  # noqa: FBT001
    existing_prompts: dict[int, list],
    llm_string: str,
    missing_prompt_idxs: list[int],
    new_results: LLMResult,
    prompts: list[str],
) -> dict | None:
    """Update the cache and get the LLM output. Async version.

    Args:
        cache: Cache object.
        existing_prompts: Dictionary of existing prompts.
        llm_string: LLM string.
        missing_prompt_idxs: List of missing prompt indexes.
        new_results: LLMResult object.
        prompts: List of prompts.

    Returns:
        LLM output.

    Raises:
        ValueError: If the cache is not set and cache is True.
    """
    llm_cache = _resolve_cache(cache=cache)
    for i, result in enumerate(new_results.generations):
        existing_prompts[missing_prompt_idxs[i]] = result
        prompt = prompts[missing_prompt_idxs[i]]
        if llm_cache:
            await llm_cache.aupdate(prompt, llm_string, result)
    return new_results.llm_output
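
The function assumes new_results.generations is ordered to match missing_prompt_idxs: generation i is slotted back into existing_prompts at the i-th missing index and, if a cache was resolved, written through to it. Below is a minimal sketch of calling aupdate_cache() directly with langchain_core's InMemoryCache; the prompt strings, the llm_string value, and the llm_output payload are illustrative stand-ins, not values produced by a real model.

import asyncio

from langchain_core.caches import InMemoryCache
from langchain_core.language_models.llms import aupdate_cache
from langchain_core.outputs import Generation, LLMResult


async def main() -> None:
    cache = InMemoryCache()
    prompts = ["What is LangChain?", "What is a cache?"]
    # Pretend prompt 0 was a cache hit and prompt 1 missed the cache.
    existing_prompts = {0: [Generation(text="cached answer")]}
    missing_prompt_idxs = [1]
    # One list of generations per missing prompt, in the same order.
    new_results = LLMResult(
        generations=[[Generation(text="fresh answer")]],
        llm_output={"token_usage": {}},
    )

    llm_output = await aupdate_cache(
        cache,
        existing_prompts,
        "illustrative-llm-string",  # normally a serialized LLM config
        missing_prompt_idxs,
        new_results,
        prompts,
    )

    # existing_prompts now maps index 1 to the fresh generation, and the
    # cache holds an entry keyed by (prompts[1], llm_string).
    print(llm_output)  # -> {'token_usage': {}}


asyncio.run(main())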

Called By

agenerate()

Frequently Asked Questions

What does aupdate_cache() do?
aupdate_cache() updates the LLM cache with freshly generated results: for each prompt that previously missed the cache, it records the new generation in existing_prompts and writes it to the resolved cache, then returns the llm_output of the new LLMResult. It is the async version of update_cache().
Where is aupdate_cache() defined?
aupdate_cache() is defined in libs/core/langchain_core/language_models/llms.py at line 259.
What does aupdate_cache() call?
aupdate_cache() calls one function: _resolve_cache(), which turns the cache argument into a usable cache object (see the sketch after this FAQ).
What calls aupdate_cache()?
aupdate_cache() is called by one function: agenerate().
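
On the _resolve_cache() dependency: the docstring's Raises clause implies that it normalizes the cache argument into a concrete BaseCache or None. The sketch below is an inference from that contract, not the actual implementation; consult llms.py for the real code.

from langchain_core.caches import BaseCache
from langchain_core.globals import get_llm_cache


def _resolve_cache_sketch(cache: BaseCache | bool | None) -> BaseCache | None:
    """Inferred behavior: map the cache flag/object to a usable cache."""
    if isinstance(cache, BaseCache):
        return cache  # an explicit cache object wins
    if cache is False:
        return None  # caching explicitly disabled
    llm_cache = get_llm_cache()  # fall back to the globally configured cache
    if cache is True and llm_cache is None:
        # Matches the documented ValueError: caching was requested
        # but no global cache has been set.
        raise ValueError("Caching requested but no global cache is configured.")
    return llm_cache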
