generate() — langchain Function Reference
Architecture documentation for the generate() function in llms.py from the langchain codebase.
Dependency Diagram
graph TD
    generate["generate()"]
    BaseLLM["BaseLLM"]
    generate -->|defined in| BaseLLM
    generate_prompt["generate_prompt()"]
    generate_prompt -->|calls| generate
    get_ls_params["_get_ls_params()"]
    generate -->|calls| get_ls_params
    get_run_ids_list["_get_run_ids_list()"]
    generate -->|calls| get_run_ids_list
    dict_fn["dict()"]
    generate -->|calls| dict_fn
    generate_helper["_generate_helper()"]
    generate -->|calls| generate_helper
    get_prompts["get_prompts()"]
    generate -->|calls| get_prompts
    update_cache["update_cache()"]
    generate -->|calls| update_cache
    style generate fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/core/langchain_core/language_models/llms.py lines 840–1054
def generate(
    self,
    prompts: list[str],
    stop: list[str] | None = None,
    callbacks: Callbacks | list[Callbacks] | None = None,
    *,
    tags: list[str] | list[list[str]] | None = None,
    metadata: dict[str, Any] | list[dict[str, Any]] | None = None,
    run_name: str | list[str] | None = None,
    run_id: uuid.UUID | list[uuid.UUID | None] | None = None,
    **kwargs: Any,
) -> LLMResult:
    """Pass a sequence of prompts to a model and return generations.

    This method should make use of batched calls for models that expose a
    batched API.

    Use this method when you:

        1. Want to take advantage of batched calls,
        2. Need more output from the model than just the top generated value,
        3. Are building chains that are agnostic to the underlying language
           model type (e.g., pure text completion models vs chat models).

    Args:
        prompts: List of string prompts.
        stop: Stop words to use when generating.
            Model output is cut off at the first occurrence of any of these
            substrings.
        callbacks: `Callbacks` to pass through.
            Used for executing additional functionality, such as logging or
            streaming, throughout generation.
        tags: List of tags to associate with each prompt. If provided, the
            length of the list must match the length of the prompts list.
        metadata: List of metadata dictionaries to associate with each prompt.
            If provided, the length of the list must match the length of the
            prompts list.
        run_name: List of run names to associate with each prompt. If provided,
            the length of the list must match the length of the prompts list.
        run_id: List of run IDs to associate with each prompt. If provided, the
            length of the list must match the length of the prompts list.
        **kwargs: Arbitrary additional keyword arguments.
            These are usually passed to the model provider API call.

    Raises:
        ValueError: If prompts is not a list.
        ValueError: If the length of `callbacks`, `tags`, `metadata`, or
            `run_name` (if provided) does not match the length of prompts.

    Returns:
        An `LLMResult`, which contains a list of candidate `Generations` for
        each input prompt and additional model provider-specific output.
    """
    if not isinstance(prompts, list):
        msg = (
            "Argument 'prompts' is expected to be of type list[str], received"
            f" argument of type {type(prompts)}."
        )
        raise ValueError(msg)  # noqa: TRY004
    # Create callback managers
    if isinstance(metadata, list):
        metadata = [
            {
                **(meta or {}),
                **self._get_ls_params(stop=stop, **kwargs),
            }
            for meta in metadata
        ]
    elif isinstance(metadata, dict):
        metadata = {
            **(metadata or {}),
            **self._get_ls_params(stop=stop, **kwargs),
        }
    if (
        isinstance(callbacks, list)
        and callbacks
        and (
            isinstance(callbacks[0], (list, BaseCallbackManager))
            or callbacks[0] is None
        )
    ):
        # Excerpt ends here. The remainder of generate() (per-prompt
        # callback/tag/metadata length validation, callback manager setup,
        # cache lookup via get_prompts(), _generate_helper() for cache
        # misses, and update_cache()) continues through line 1054.
        ...
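For orientation, here is a minimal usage sketch of the API described in the docstring above. FakeListLLM, a test double shipped with langchain_core, stands in for any concrete BaseLLM subclass; the call pattern is identical for real models.

# Minimal usage sketch of BaseLLM.generate(). FakeListLLM is a test
# double from langchain_core; substitute any concrete LLM implementation.
from langchain_core.language_models import FakeListLLM

llm = FakeListLLM(responses=["four", "six"])

# Batched call: one list entry per prompt, with optional per-prompt tags.
result = llm.generate(
    ["What is 2 + 2?", "What is 3 + 3?"],
    tags=[["math"], ["math"]],  # length must match len(prompts)
)

# result.generations holds one list of candidate Generation objects per
# prompt; .text is the completion string.
for candidates in result.generations:
    print(candidates[0].text)

# Passing a bare string instead of a list raises ValueError:
# llm.generate("What is 2 + 2?")  # ValueError

Note that generate() validates its inputs eagerly: as documented above, per-prompt callbacks, tags, metadata, and run_name lists must each match the length of prompts.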
Frequently Asked Questions
What does generate() do?
generate() is a method of the BaseLLM class, defined in libs/core/langchain_core/language_models/llms.py. It passes a batch of string prompts to a language model, using the provider's batched API where one is available, and returns an LLMResult containing a list of candidate Generations for each prompt plus provider-specific output.
Where is generate() defined?
generate() is defined in libs/core/langchain_core/language_models/llms.py at line 840.
What does generate() call?
generate() calls six functions: _generate_helper(), _get_ls_params(), _get_run_ids_list(), dict(), get_prompts(), and update_cache().
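Two of those calls, get_prompts() and update_cache(), implement prompt caching: previously generated completions are reused, and only cache misses reach the model. The sketch below illustrates that flow in isolation; the dict-backed cache and the function signatures are illustrative, not langchain's actual helpers.

# Simplified, self-contained sketch of the cache-aware flow inside
# generate(). The dict cache and names here are illustrative only;
# langchain uses get_prompts() / update_cache() with a pluggable backend.
from typing import Callable

def generate_with_cache(
    prompts: list[str],
    llm_string: str,  # serialized model parameters, part of the cache key
    cache: dict[tuple[str, str], str],
    call_model: Callable[[list[str]], list[str]],
) -> list[str]:
    hits: dict[int, str] = {}
    missed: list[int] = []
    for i, prompt in enumerate(prompts):  # roughly the get_prompts() step
        cached = cache.get((prompt, llm_string))
        if cached is not None:
            hits[i] = cached
        else:
            missed.append(i)
    if missed:  # only cache misses reach the model
        outputs = call_model([prompts[i] for i in missed])
        for i, out in zip(missed, outputs):
            cache[(prompts[i], llm_string)] = out  # the update_cache() step
            hits[i] = out
    return [hits[i] for i in range(len(prompts))]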
What calls generate()?
generate() is called by one function: generate_prompt().
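generate_prompt() is a thin adapter: it renders each PromptValue to a plain string and delegates to generate(). A paraphrased sketch of that delegation (not a verbatim copy of llms.py):

# Paraphrased sketch of the generate_prompt() -> generate() delegation.
def generate_prompt(self, prompts, stop=None, callbacks=None, **kwargs):
    prompt_strings = [p.to_string() for p in prompts]  # PromptValue -> str
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)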