agenerate() — langchain Function Reference

Architecture documentation for the agenerate() function in chat_models.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  e539ab1d_5151_8ba1_cfe0_47ef5adc1f67["agenerate()"]
  d009a608_c505_bd50_7200_0de8a69ba4b7["BaseChatModel"]
  e539ab1d_5151_8ba1_cfe0_47ef5adc1f67 -->|defined in| d009a608_c505_bd50_7200_0de8a69ba4b7
  ca927775_2127_c46e_2da5_a78e49c8789f["agenerate_prompt()"]
  ca927775_2127_c46e_2da5_a78e49c8789f -->|calls| e539ab1d_5151_8ba1_cfe0_47ef5adc1f67
  15be9ef2_4ef5_bd6d_35dc_480f1d07bb0c["_call_async()"]
  15be9ef2_4ef5_bd6d_35dc_480f1d07bb0c -->|calls| e539ab1d_5151_8ba1_cfe0_47ef5adc1f67
  5c320356_b8cd_92a3_38a3_2878a3c460d0["_get_invocation_params()"]
  e539ab1d_5151_8ba1_cfe0_47ef5adc1f67 -->|calls| 5c320356_b8cd_92a3_38a3_2878a3c460d0
  47283f0d_d8e7_addf_ea32_c0ecefe3d97c["_get_ls_params()"]
  e539ab1d_5151_8ba1_cfe0_47ef5adc1f67 -->|calls| 47283f0d_d8e7_addf_ea32_c0ecefe3d97c
  1444b9d3_5ad9_5b23_967b_eb8224746e4f["_agenerate_with_cache()"]
  e539ab1d_5151_8ba1_cfe0_47ef5adc1f67 -->|calls| 1444b9d3_5ad9_5b23_967b_eb8224746e4f
  89c18fab_57d8_be52_2a32_610717915217["_combine_llm_outputs()"]
  e539ab1d_5151_8ba1_cfe0_47ef5adc1f67 -->|calls| 89c18fab_57d8_be52_2a32_610717915217
  5f652461_f9fa_fdc2_d659_cde32ef53f66["_format_ls_structured_output()"]
  e539ab1d_5151_8ba1_cfe0_47ef5adc1f67 -->|calls| 5f652461_f9fa_fdc2_d659_cde32ef53f66
  f1b77769_1c98_a324_e709_fd921b433e56["_format_for_tracing()"]
  e539ab1d_5151_8ba1_cfe0_47ef5adc1f67 -->|calls| f1b77769_1c98_a324_e709_fd921b433e56
  0a0401bd_a59c_7ac5_1c91_a5f406b3cdc6["_generate_response_from_error()"]
  e539ab1d_5151_8ba1_cfe0_47ef5adc1f67 -->|calls| 0a0401bd_a59c_7ac5_1c91_a5f406b3cdc6
  style e539ab1d_5151_8ba1_cfe0_47ef5adc1f67 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/core/langchain_core/language_models/chat_models.py lines 965–1110

    async def agenerate(
        self,
        messages: list[list[BaseMessage]],
        stop: list[str] | None = None,
        callbacks: Callbacks = None,
        *,
        tags: list[str] | None = None,
        metadata: dict[str, Any] | None = None,
        run_name: str | None = None,
        run_id: uuid.UUID | None = None,
        **kwargs: Any,
    ) -> LLMResult:
        """Asynchronously pass a sequence of prompts to a model and return generations.

        This method should make use of batched calls for models that expose a batched
        API.

        Use this method when you:

        1. Want to take advantage of batched calls,
        2. Need more output from the model than just the top generated value,
        3. Are building chains that are agnostic to the underlying language model
            type (e.g., pure text completion models vs chat models).

        Args:
            messages: List of list of messages.
            stop: Stop words to use when generating.

                Model output is cut off at the first occurrence of any of these
                substrings.
            callbacks: `Callbacks` to pass through.

                Used for executing additional functionality, such as logging or
                streaming, throughout generation.
            tags: The tags to apply.
            metadata: The metadata to apply.
            run_name: The name of the run.
            run_id: The ID of the run.
            **kwargs: Arbitrary additional keyword arguments.

                These are usually passed to the model provider API call.

        Returns:
            An `LLMResult`, which contains a list of candidate `Generations` for each
                input prompt and additional model provider-specific output.

        """
        ls_structured_output_format = kwargs.pop(
            "ls_structured_output_format", None
        ) or kwargs.pop("structured_output_format", None)
        ls_structured_output_format_dict = _format_ls_structured_output(
            ls_structured_output_format
        )

        params = self._get_invocation_params(stop=stop, **kwargs)
        options = {"stop": stop, **ls_structured_output_format_dict}
        inheritable_metadata = {
            **(metadata or {}),
            **self._get_ls_params(stop=stop, **kwargs),
        }

        callback_manager = AsyncCallbackManager.configure(
            callbacks,
            self.callbacks,
            self.verbose,
            tags,
            self.tags,
            inheritable_metadata,
            self.metadata,
        )

        messages_to_trace = [
            _format_for_tracing(message_list) for message_list in messages
        ]
        run_managers = await callback_manager.on_chat_model_start(
            self._serialized,
            messages_to_trace,
            invocation_params=params,
            options=options,
            name=run_name,
            batch_size=len(messages),

Frequently Asked Questions

What does agenerate() do?
agenerate() is an asynchronous method of BaseChatModel (defined in libs/core/langchain_core/language_models/chat_models.py) that passes batches of message lists to a chat model and returns an LLMResult containing the generations for each prompt.
Where is agenerate() defined?
agenerate() is defined in libs/core/langchain_core/language_models/chat_models.py at line 965.
What does agenerate() call?
agenerate() calls seven functions: _agenerate_with_cache, _combine_llm_outputs, _format_for_tracing, _format_ls_structured_output, _generate_response_from_error, _get_invocation_params, and _get_ls_params.
What calls agenerate()?
agenerate() is called by two functions: _call_async and agenerate_prompt.
