max_tokens_for_prompt() — langchain Function Reference
Architecture documentation for the max_tokens_for_prompt() function in base.py from the langchain codebase.
Entity Profile
Dependency Diagram
```mermaid
graph TD
    maxTokensForPrompt["max_tokens_for_prompt()"]
    baseOpenAI["BaseOpenAI"]
    maxTokensForPrompt -->|defined in| baseOpenAI
    getSubPrompts["get_sub_prompts()"]
    getSubPrompts -->|calls| maxTokensForPrompt
    style maxTokensForPrompt fill:#6366f1,stroke:#818cf8,color:#fff
```
Relationship Graph
Source Code
libs/partners/openai/langchain_openai/llms/base.py lines 722–737
def max_tokens_for_prompt(self, prompt: str) -> int:
    """Calculate the maximum number of tokens possible to generate for a prompt.

    Args:
        prompt: The prompt to pass into the model.

    Returns:
        The maximum number of tokens to generate for a prompt.

    Example:
        ```python
        max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
        ```
    """
    num_tokens = self.get_num_tokens(prompt)
    return self.max_context_size - num_tokens
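The method is a single subtraction: the model's context window size minus the prompt's token count. A minimal self-contained sketch of that arithmetic, assuming a hypothetical 4096-token context window and a whitespace-split tokenizer standing in for the real tiktoken-based `get_num_tokens()`:

```python
class FakeOpenAI:
    """Hypothetical stand-in for BaseOpenAI, for illustration only."""

    max_context_size = 4096  # assumed context window; real value is model-specific

    def get_num_tokens(self, text: str) -> int:
        # Stand-in tokenizer: one token per whitespace-separated word.
        # The real implementation uses tiktoken.
        return len(text.split())

    def max_tokens_for_prompt(self, prompt: str) -> int:
        # Same arithmetic as the method above: context size minus prompt tokens.
        num_tokens = self.get_num_tokens(prompt)
        return self.max_context_size - num_tokens


llm = FakeOpenAI()
print(llm.max_tokens_for_prompt("Tell me a joke."))  # 4096 - 4 = 4092
```

With the real class, the prompt's token count comes from the model's actual tokenizer, so the returned budget matches what the API will accept as `max_tokens`.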
Frequently Asked Questions
What does max_tokens_for_prompt() do?
max_tokens_for_prompt() computes how many tokens remain available for generation after the prompt is accounted for: it subtracts the prompt's token count (via get_num_tokens()) from the model's max_context_size. It is defined in libs/partners/openai/langchain_openai/llms/base.py.
Where is max_tokens_for_prompt() defined?
max_tokens_for_prompt() is defined in libs/partners/openai/langchain_openai/llms/base.py at line 722.
What calls max_tokens_for_prompt()?
max_tokens_for_prompt() is called by one function: get_sub_prompts().
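A hedged sketch of the caller pattern, loosely modeled on get_sub_prompts: when the configured max_tokens is -1, the remaining context budget is substituted in. The StubLLM class and resolve_max_tokens helper below are hypothetical illustrations, not the real langchain code:

```python
class StubLLM:
    """Hypothetical stand-in for BaseOpenAI, for illustration only."""

    max_context_size = 4096  # assumed context window

    def get_num_tokens(self, text: str) -> int:
        return len(text.split())  # stand-in tokenizer

    def max_tokens_for_prompt(self, prompt: str) -> int:
        return self.max_context_size - self.get_num_tokens(prompt)


def resolve_max_tokens(llm: StubLLM, prompts: list, max_tokens: int) -> int:
    # -1 means "use everything left in the context window". The budget
    # depends on the prompt's length, so this only makes sense when there
    # is exactly one prompt.
    if max_tokens == -1:
        if len(prompts) != 1:
            raise ValueError("max_tokens=-1 requires exactly one prompt")
        max_tokens = llm.max_tokens_for_prompt(prompts[0])
    return max_tokens


print(resolve_max_tokens(StubLLM(), ["Tell me a joke."], -1))  # 4092
print(resolve_max_tokens(StubLLM(), ["Tell me a joke."], 256))  # 256 (unchanged)
```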