
agenerate() — langchain Function Reference

Architecture documentation for the agenerate() function in llms.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  99297396_752a_7590_6d2b_c4757fc5a9d8["agenerate()"]
  ce4aa464_3868_179e_5d99_df48bc307c5f["BaseLLM"]
  99297396_752a_7590_6d2b_c4757fc5a9d8 -->|defined in| ce4aa464_3868_179e_5d99_df48bc307c5f
  55376563_12dc_0d47_e55b_0b52a5d2db0e["agenerate_prompt()"]
  55376563_12dc_0d47_e55b_0b52a5d2db0e -->|calls| 99297396_752a_7590_6d2b_c4757fc5a9d8
  667f2693_fe60_ba54_4f98_db95101471d4["_call_async()"]
  667f2693_fe60_ba54_4f98_db95101471d4 -->|calls| 99297396_752a_7590_6d2b_c4757fc5a9d8
  4573eeb7_3c0a_7ca6_23d7_ed5efc46fdb1["_get_ls_params()"]
  99297396_752a_7590_6d2b_c4757fc5a9d8 -->|calls| 4573eeb7_3c0a_7ca6_23d7_ed5efc46fdb1
  028461e5_de04_9807_5074_7a6e87bec6f3["_get_run_ids_list()"]
  99297396_752a_7590_6d2b_c4757fc5a9d8 -->|calls| 028461e5_de04_9807_5074_7a6e87bec6f3
  b4a028e5_e42e_3478_739f_03ee8ab9100d["dict()"]
  99297396_752a_7590_6d2b_c4757fc5a9d8 -->|calls| b4a028e5_e42e_3478_739f_03ee8ab9100d
  b5b0ca06_5826_6cbb_e9a3_9ad03072565c["_agenerate_helper()"]
  99297396_752a_7590_6d2b_c4757fc5a9d8 -->|calls| b5b0ca06_5826_6cbb_e9a3_9ad03072565c
  3b4b1ea5_be93_ee8f_c595_0a2429d9380b["aget_prompts()"]
  99297396_752a_7590_6d2b_c4757fc5a9d8 -->|calls| 3b4b1ea5_be93_ee8f_c595_0a2429d9380b
  ad254dee_89a5_8838_896b_a1459cb6753a["aupdate_cache()"]
  99297396_752a_7590_6d2b_c4757fc5a9d8 -->|calls| ad254dee_89a5_8838_896b_a1459cb6753a
  style 99297396_752a_7590_6d2b_c4757fc5a9d8 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/core/langchain_core/language_models/llms.py lines 1115–1327

    async def agenerate(
        self,
        prompts: list[str],
        stop: list[str] | None = None,
        callbacks: Callbacks | list[Callbacks] | None = None,
        *,
        tags: list[str] | list[list[str]] | None = None,
        metadata: dict[str, Any] | list[dict[str, Any]] | None = None,
        run_name: str | list[str] | None = None,
        run_id: uuid.UUID | list[uuid.UUID | None] | None = None,
        **kwargs: Any,
    ) -> LLMResult:
        """Asynchronously pass a sequence of prompts to a model and return generations.

        This method should make use of batched calls for models that expose a batched
        API.

        Use this method when you:

        1. Want to take advantage of batched calls,
        2. Need more output from the model than just the top generated value,
        3. Are building chains that are agnostic to the underlying language model
            type (e.g., pure text completion models vs chat models).

        Args:
            prompts: List of string prompts.
            stop: Stop words to use when generating.

                Model output is cut off at the first occurrence of any of these
                substrings.
            callbacks: `Callbacks` to pass through.

                Used for executing additional functionality, such as logging or
                streaming, throughout generation.
            tags: List of tags to associate with each prompt. If provided, the length
                of the list must match the length of the prompts list.
            metadata: List of metadata dictionaries to associate with each prompt. If
                provided, the length of the list must match the length of the prompts
                list.
            run_name: List of run names to associate with each prompt. If provided, the
                length of the list must match the length of the prompts list.
            run_id: List of run IDs to associate with each prompt. If provided, the
                length of the list must match the length of the prompts list.
            **kwargs: Arbitrary additional keyword arguments.

                These are usually passed to the model provider API call.

        Raises:
            ValueError: If the length of `callbacks`, `tags`, `metadata`, or
                `run_name` (if provided) does not match the length of prompts.

        Returns:
            An `LLMResult`, which contains a list of candidate `Generations` for each
                input prompt and additional model provider-specific output.
        """
        if isinstance(metadata, list):
            metadata = [
                {
                    **(meta or {}),
                    **self._get_ls_params(stop=stop, **kwargs),
                }
                for meta in metadata
            ]
        elif isinstance(metadata, dict):
            metadata = {
                **(metadata or {}),
                **self._get_ls_params(stop=stop, **kwargs),
            }
        # Create callback managers
        if isinstance(callbacks, list) and (
            isinstance(callbacks[0], (list, BaseCallbackManager))
            or callbacks[0] is None
        ):
            # We've received a list of callbacks args to apply to each input
            if len(callbacks) != len(prompts):
                msg = "callbacks must be the same length as prompts"
                raise ValueError(msg)
            if tags is not None and not (
                isinstance(tags, list) and len(tags) == len(prompts)
            ):
                msg = "tags must be a list of the same length as prompts"
                raise ValueError(msg)
        # … (excerpt truncated; the full definition runs through line 1327)
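
The excerpt above merges per-prompt metadata and then validates that every list-valued argument lines up with the prompts. A minimal, standalone sketch of that validation pattern follows; the function name and signature here are illustrative, not langchain's API:

```python
from typing import Any, Optional


def validate_per_prompt_args(
    prompts: list[str],
    callbacks: Optional[list[Any]] = None,
    tags: Optional[list[list[str]]] = None,
) -> None:
    # Mirror agenerate()'s checks: each per-prompt list argument must
    # have exactly one entry per prompt, or a ValueError is raised.
    if callbacks is not None and len(callbacks) != len(prompts):
        raise ValueError("callbacks must be the same length as prompts")
    if tags is not None and not (
        isinstance(tags, list) and len(tags) == len(prompts)
    ):
        raise ValueError("tags must be a list of the same length as prompts")
```

Raising early with a precise message, as the real method does, surfaces mismatched batch arguments before any provider calls are made.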

Frequently Asked Questions

What does agenerate() do?
agenerate() is an async method on BaseLLM that passes a batch of string prompts to a model and returns an LLMResult. It is defined in libs/core/langchain_core/language_models/llms.py.
Where is agenerate() defined?
agenerate() is defined in libs/core/langchain_core/language_models/llms.py at line 1115.
What does agenerate() call?
agenerate() calls six functions: _agenerate_helper, _get_ls_params, _get_run_ids_list, aget_prompts, aupdate_cache, and dict.
What calls agenerate()?
agenerate() is called by two functions: _call_async and agenerate_prompt.
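
The answers above describe agenerate() as a batched async entry point. The fan-out idea can be sketched with asyncio.gather, using a stand-in coroutine in place of a real provider call; nothing below is langchain code:

```python
import asyncio


async def _generate_one(prompt: str) -> str:
    # Stand-in for a single model call; a real implementation would
    # await the provider's API here.
    return prompt.upper()


async def agenerate_batch(prompts: list[str]) -> list[str]:
    # Issue one coroutine per prompt and await them concurrently --
    # the same shape of batching that agenerate() aims for.
    return await asyncio.gather(*(_generate_one(p) for p in prompts))
```

Concurrent fan-out is what makes the async variant worthwhile: N prompts cost roughly one round-trip of latency instead of N sequential ones.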
