generate() — langchain Function Reference

Architecture documentation for the generate() function in chat_models.py from the langchain codebase.

Dependency Diagram
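
The diagram below shows how generate() relates to its neighbors: it is defined on BaseChatModel, has a single caller (generate_prompt()), and delegates to seven private helpers for invocation parameters, tracing, caching, output combination, and error handling.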

graph TD
  generate["generate()"]
  BaseChatModel["BaseChatModel"]
  generate -->|defined in| BaseChatModel
  generate_prompt["generate_prompt()"]
  generate_prompt -->|calls| generate
  get_invocation_params["_get_invocation_params()"]
  generate -->|calls| get_invocation_params
  get_ls_params["_get_ls_params()"]
  generate -->|calls| get_ls_params
  generate_with_cache["_generate_with_cache()"]
  generate -->|calls| generate_with_cache
  combine_llm_outputs["_combine_llm_outputs()"]
  generate -->|calls| combine_llm_outputs
  format_ls_structured_output["_format_ls_structured_output()"]
  generate -->|calls| format_ls_structured_output
  format_for_tracing["_format_for_tracing()"]
  generate -->|calls| format_for_tracing
  generate_response_from_error["_generate_response_from_error()"]
  generate -->|calls| generate_response_from_error
  style generate fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/core/langchain_core/language_models/chat_models.py lines 842–963

    def generate(
        self,
        messages: list[list[BaseMessage]],
        stop: list[str] | None = None,
        callbacks: Callbacks = None,
        *,
        tags: list[str] | None = None,
        metadata: dict[str, Any] | None = None,
        run_name: str | None = None,
        run_id: uuid.UUID | None = None,
        **kwargs: Any,
    ) -> LLMResult:
        """Pass a sequence of prompts to the model and return model generations.

        This method should make use of batched calls for models that expose a batched
        API.

        Use this method when you want to:

        1. Take advantage of batched calls,
        2. Need more output from the model than just the top generated value,
        3. Are building chains that are agnostic to the underlying language model
            type (e.g., pure text completion models vs chat models).

        Args:
            messages: List of list of messages.
            stop: Stop words to use when generating.

                Model output is cut off at the first occurrence of any of these
                substrings.
            callbacks: `Callbacks` to pass through.

                Used for executing additional functionality, such as logging or
                streaming, throughout generation.
            tags: The tags to apply.
            metadata: The metadata to apply.
            run_name: The name of the run.
            run_id: The ID of the run.
            **kwargs: Arbitrary additional keyword arguments.

                These are usually passed to the model provider API call.

        Returns:
            An `LLMResult`, which contains a list of candidate `Generations` for each
                input prompt and additional model provider-specific output.

        """
        ls_structured_output_format = kwargs.pop(
            "ls_structured_output_format", None
        ) or kwargs.pop("structured_output_format", None)
        ls_structured_output_format_dict = _format_ls_structured_output(
            ls_structured_output_format
        )

        params = self._get_invocation_params(stop=stop, **kwargs)
        options = {"stop": stop, **ls_structured_output_format_dict}
        inheritable_metadata = {
            **(metadata or {}),
            **self._get_ls_params(stop=stop, **kwargs),
        }

        callback_manager = CallbackManager.configure(
            callbacks,
            self.callbacks,
            self.verbose,
            tags,
            self.tags,
            inheritable_metadata,
            self.metadata,
        )
        messages_to_trace = [
            _format_for_tracing(message_list) for message_list in messages
        ]
        run_managers = callback_manager.on_chat_model_start(
            self._serialized,
            messages_to_trace,
            invocation_params=params,
            options=options,
            name=run_name,
            run_id=run_id,
            batch_size=len(messages),
        )
        # … snippet truncated; the full implementation continues through line 963
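
For context, here is a minimal usage sketch (not part of the source above). It batches two single-message prompts through generate(), using langchain_core's FakeListChatModel test double so the example runs without provider credentials; the prompts and canned responses are illustrative.

    from langchain_core.language_models import FakeListChatModel
    from langchain_core.messages import HumanMessage

    # Deterministic stand-in for a real chat model: each call returns the
    # next canned response, so no API key is required.
    model = FakeListChatModel(responses=["Hello!", "Hi there!"])

    # generate() takes a batch: a list of message lists, one per prompt.
    result = model.generate(
        [
            [HumanMessage(content="Say hello.")],
            [HumanMessage(content="Say hi.")],
        ]
    )

    # LLMResult.generations holds one list of candidate generations per
    # input prompt; FakeListChatModel yields a single candidate each.
    for candidates in result.generations:
        print(candidates[0].text)

Note that the whole batch shares one configured CallbackManager, while on_chat_model_start (shown in the source above) issues a separate run manager per prompt, which is why batch_size=len(messages) is passed.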

Frequently Asked Questions

What does generate() do?
generate() passes a batch of prompts (each prompt a list of messages) to a chat model and returns an LLMResult containing candidate generations for every prompt. It is defined on BaseChatModel in libs/core/langchain_core/language_models/chat_models.py.
Where is generate() defined?
generate() is defined in libs/core/langchain_core/language_models/chat_models.py at line 842.
What does generate() call?
generate() calls seven functions: _combine_llm_outputs, _format_for_tracing, _format_ls_structured_output, _generate_response_from_error, _generate_with_cache, _get_invocation_params, and _get_ls_params.
What calls generate()?
generate() is called by one function: generate_prompt().
