_generate() — langchain Function Reference
Architecture documentation for the _generate() function in libs/partners/openai/langchain_openai/llms/base.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    f5892a19_915a_df0a_fbd3_f7a5a0993b64["_generate()"]
    6bee45b2_b649_e251_1fdc_dcf49f8bb331["BaseOpenAI"]
    f5892a19_915a_df0a_fbd3_f7a5a0993b64 -->|defined in| 6bee45b2_b649_e251_1fdc_dcf49f8bb331
    11f618dd_a369_1622_0130_0cc8a952e5cd["get_sub_prompts()"]
    f5892a19_915a_df0a_fbd3_f7a5a0993b64 -->|calls| 11f618dd_a369_1622_0130_0cc8a952e5cd
    9a3f8476_ed23_4017_8e21_6ee0b6176365["_stream()"]
    f5892a19_915a_df0a_fbd3_f7a5a0993b64 -->|calls| 9a3f8476_ed23_4017_8e21_6ee0b6176365
    01611288_3478_a4f4_894c_f5db6dd12168["create_llm_result()"]
    f5892a19_915a_df0a_fbd3_f7a5a0993b64 -->|calls| 01611288_3478_a4f4_894c_f5db6dd12168
    style f5892a19_915a_df0a_fbd3_f7a5a0993b64 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/openai/langchain_openai/llms/base.py lines 429–511
def _generate(
    self,
    prompts: list[str],
    stop: list[str] | None = None,
    run_manager: CallbackManagerForLLMRun | None = None,
    **kwargs: Any,
) -> LLMResult:
    """Call out to OpenAI's endpoint with k unique prompts.

    Args:
        prompts: The prompts to pass into the model.
        stop: Optional list of stop words to use when generating.
        run_manager: Optional callback manager to use for the call.

    Returns:
        The full LLM output.

    Example:
        ```python
        response = openai.generate(["Tell me a joke."])
        ```
    """
    # TODO: write a unit test for this
    params = self._invocation_params
    params = {**params, **kwargs}
    sub_prompts = self.get_sub_prompts(params, prompts, stop)
    choices = []
    token_usage: dict[str, int] = {}
    # Get the token usage from the response.
    # Includes prompt, completion, and total tokens used.
    _keys = {"completion_tokens", "prompt_tokens", "total_tokens"}
    system_fingerprint: str | None = None
    for _prompts in sub_prompts:
        if self.streaming:
            if len(_prompts) > 1:
                msg = "Cannot stream results with multiple prompts."
                raise ValueError(msg)
            generation: GenerationChunk | None = None
            for chunk in self._stream(_prompts[0], stop, run_manager, **kwargs):
                if generation is None:
                    generation = chunk
                else:
                    generation += chunk
            if generation is None:
                msg = "Generation is empty after streaming."
                raise ValueError(msg)
            choices.append(
                {
                    "text": generation.text,
                    "finish_reason": (
                        generation.generation_info.get("finish_reason")
                        if generation.generation_info
                        else None
                    ),
                    "logprobs": (
                        generation.generation_info.get("logprobs")
                        if generation.generation_info
                        else None
                    ),
                }
            )
        else:
            response = self.client.create(prompt=_prompts, **params)
            if not isinstance(response, dict):
                # The v1 client returns the response as a Pydantic object
                # instead of a dict. For the transition period, deep-convert
                # it to a dict.
                response = response.model_dump()
            # The API call can fail and return an error payload; raise it
            # here. Otherwise, choices.extend(response["choices"]) below
            # would raise "TypeError: 'NoneType' object is not iterable"
            # (because response["choices"] is None) and mask the true error.
            if response.get("error"):
                raise ValueError(response.get("error"))
            choices.extend(response["choices"])
            _update_token_usage(_keys, response, token_usage)
            if not system_fingerprint:
                system_fingerprint = response.get("system_fingerprint")
    return self.create_llm_result(
        choices,
        prompts,
        params,
        token_usage,
        system_fingerprint=system_fingerprint,
    )
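_generate() is a private hook; callers reach it through the public generate()/invoke() API that BaseOpenAI inherits from BaseLLM. A minimal usage sketch, assuming langchain-openai is installed and OPENAI_API_KEY is set (the model name and prompts are illustrative):

```python
from langchain_openai import OpenAI

# generate() batches the prompts and dispatches to _generate() under the
# hood (which in turn delegates to _stream() when streaming=True).
llm = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0)
result = llm.generate(["Tell me a joke.", "Tell me a riddle."])

for gens in result.generations:
    print(gens[0].text)

# Aggregated counters collected by _update_token_usage() across batches.
print(result.llm_output["token_usage"])
```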
Frequently Asked Questions
What does _generate() do?
_generate() implements the core completion call for BaseOpenAI in the langchain codebase. It merges the invocation parameters with per-call kwargs, splits the prompts into batches via get_sub_prompts(), and then either accumulates streamed chunks from _stream() (streaming mode, single prompt only) or calls the OpenAI completions client directly. Token usage is aggregated across batches with _update_token_usage(), and the collected choices are assembled into an LLMResult by create_llm_result().
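The streaming branch folds chunks from _stream() into one GenerationChunk using +=. A self-contained sketch of that accumulation pattern (the chunk contents here are made up for illustration):

```python
from langchain_core.outputs import GenerationChunk

# GenerationChunk.__add__ concatenates text and merges generation_info,
# which is what `generation += chunk` relies on in _generate().
chunks = [
    GenerationChunk(text="Why did the chicken "),
    GenerationChunk(text="cross the road?", generation_info={"finish_reason": "stop"}),
]

generation = None
for chunk in chunks:
    generation = chunk if generation is None else generation + chunk

print(generation.text)             # Why did the chicken cross the road?
print(generation.generation_info)  # {'finish_reason': 'stop'}
```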
Where is _generate() defined?
_generate() is defined in libs/partners/openai/langchain_openai/llms/base.py at line 429.
What does _generate() call?
_generate() calls four functions: get_sub_prompts(), _stream(), _update_token_usage(), and create_llm_result().
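Of these, _update_token_usage() is a module-level helper in the same file. A simplified sketch of its aggregation logic, reconstructed from how _generate() calls it rather than copied verbatim from the source:

```python
from typing import Any

def _update_token_usage(
    keys: set[str], response: dict[str, Any], token_usage: dict[str, Any]
) -> None:
    """Sum this batch's usage counters into the running totals."""
    for key in keys.intersection(response["usage"]):
        token_usage[key] = token_usage.get(key, 0) + response["usage"][key]
```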