_agenerate() — langchain Function Reference
Architecture documentation for the _agenerate() method, defined in libs/partners/openai/langchain_openai/llms/base.py in the langchain codebase.
Dependency Diagram
graph TD
    2d321de2_cda4_2af7_7550_f3f179b1dddb["_agenerate()"]
    6bee45b2_b649_e251_1fdc_dcf49f8bb331["BaseOpenAI"]
    2d321de2_cda4_2af7_7550_f3f179b1dddb -->|defined in| 6bee45b2_b649_e251_1fdc_dcf49f8bb331
    11f618dd_a369_1622_0130_0cc8a952e5cd["get_sub_prompts()"]
    2d321de2_cda4_2af7_7550_f3f179b1dddb -->|calls| 11f618dd_a369_1622_0130_0cc8a952e5cd
    58b8669b_d421_b6e5_b179_03394129fd02["_astream()"]
    2d321de2_cda4_2af7_7550_f3f179b1dddb -->|calls| 58b8669b_d421_b6e5_b179_03394129fd02
    01611288_3478_a4f4_894c_f5db6dd12168["create_llm_result()"]
    2d321de2_cda4_2af7_7550_f3f179b1dddb -->|calls| 01611288_3478_a4f4_894c_f5db6dd12168
    7817fc40_57c0_6468_1c16_3ded8817a9b8["_update_token_usage()"]
    2d321de2_cda4_2af7_7550_f3f179b1dddb -->|calls| 7817fc40_57c0_6468_1c16_3ded8817a9b8
    style 2d321de2_cda4_2af7_7550_f3f179b1dddb fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/openai/langchain_openai/llms/base.py lines 513–570
async def _agenerate(
    self,
    prompts: list[str],
    stop: list[str] | None = None,
    run_manager: AsyncCallbackManagerForLLMRun | None = None,
    **kwargs: Any,
) -> LLMResult:
    """Call out to OpenAI's endpoint async with k unique prompts."""
    params = self._invocation_params
    params = {**params, **kwargs}
    sub_prompts = self.get_sub_prompts(params, prompts, stop)
    choices = []
    token_usage: dict[str, int] = {}
    # Get the token usage from the response.
    # Includes prompt, completion, and total tokens used.
    _keys = {"completion_tokens", "prompt_tokens", "total_tokens"}
    system_fingerprint: str | None = None
    for _prompts in sub_prompts:
        if self.streaming:
            if len(_prompts) > 1:
                msg = "Cannot stream results with multiple prompts."
                raise ValueError(msg)
            generation: GenerationChunk | None = None
            async for chunk in self._astream(
                _prompts[0], stop, run_manager, **kwargs
            ):
                if generation is None:
                    generation = chunk
                else:
                    generation += chunk
            if generation is None:
                msg = "Generation is empty after streaming."
                raise ValueError(msg)
            choices.append(
                {
                    "text": generation.text,
                    "finish_reason": (
                        generation.generation_info.get("finish_reason")
                        if generation.generation_info
                        else None
                    ),
                    "logprobs": (
                        generation.generation_info.get("logprobs")
                        if generation.generation_info
                        else None
                    ),
                }
            )
        else:
            response = await self.async_client.create(prompt=_prompts, **params)
            if not isinstance(response, dict):
                response = response.model_dump()
            choices.extend(response["choices"])
            _update_token_usage(_keys, response, token_usage)
    return self.create_llm_result(
        choices, prompts, params, token_usage, system_fingerprint=system_fingerprint
    )
Frequently Asked Questions
What does _agenerate() do?
_agenerate() is an async method of BaseOpenAI, defined in libs/partners/openai/langchain_openai/llms/base.py. It calls OpenAI's completions endpoint for a batch of prompts: when streaming is enabled it accumulates GenerationChunk objects from _astream(), otherwise it sends each sub-prompt batch through the async client and collects the returned choices and token usage. It then aggregates everything into an LLMResult via create_llm_result().
Where is _agenerate() defined?
_agenerate() is defined in libs/partners/openai/langchain_openai/llms/base.py at line 513.
What does _agenerate() call?
_agenerate() calls four functions: _astream, _update_token_usage, create_llm_result, and get_sub_prompts.
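Of these, _update_token_usage is a module-level helper that folds each batch response's usage counters into the running token_usage dict, restricted to the keys _agenerate cares about. The function below is an illustrative reconstruction of that accumulation logic, not the library's exact implementation.

```python
def update_token_usage(
    keys: set[str], response: dict, token_usage: dict[str, int]
) -> None:
    # Hypothetical sketch: sum the usage counters present in this batch
    # response into the running totals, ignoring keys outside `keys`.
    usage = response.get("usage") or {}
    for key in keys & set(usage):
        token_usage[key] = token_usage.get(key, 0) + usage[key]


keys = {"completion_tokens", "prompt_tokens", "total_tokens"}
totals: dict[str, int] = {}

# Two sub-prompt batches, as _agenerate would see in its non-streaming loop.
update_token_usage(
    keys,
    {"usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12}},
    totals,
)
update_token_usage(
    keys,
    {"usage": {"prompt_tokens": 3, "completion_tokens": 4, "total_tokens": 7}},
    totals,
)

assert totals == {"prompt_tokens": 8, "completion_tokens": 11, "total_tokens": 19}
```

Note that in the streaming branch token usage is not accumulated this way, since usage data comes from the non-streaming batch responses.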