modelname_to_contextsize() — langchain Function Reference

Architecture documentation for the modelname_to_contextsize() function in base.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  modelname_to_contextsize["modelname_to_contextsize()"]
  BaseOpenAI["BaseOpenAI"]
  modelname_to_contextsize -->|defined in| BaseOpenAI
  max_context_size["max_context_size()"]
  max_context_size -->|calls| modelname_to_contextsize
  style modelname_to_contextsize fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/openai/langchain_openai/llms/base.py lines 658–715

    @staticmethod
    def modelname_to_contextsize(modelname: str) -> int:
        """Calculate the maximum number of tokens possible to generate for a model.

        Args:
            modelname: The modelname we want to know the context size for.

        Returns:
            The maximum context size

        Example:
            ```python
            max_tokens = openai.modelname_to_contextsize("gpt-3.5-turbo-instruct")
            ```
        """
        model_token_mapping = {
            "gpt-4o-mini": 128_000,
            "gpt-4o": 128_000,
            "gpt-4o-2024-05-13": 128_000,
            "gpt-4": 8192,
            "gpt-4-0314": 8192,
            "gpt-4-0613": 8192,
            "gpt-4-32k": 32768,
            "gpt-4-32k-0314": 32768,
            "gpt-4-32k-0613": 32768,
            "gpt-3.5-turbo": 4096,
            "gpt-3.5-turbo-0301": 4096,
            "gpt-3.5-turbo-0613": 4096,
            "gpt-3.5-turbo-16k": 16385,
            "gpt-3.5-turbo-16k-0613": 16385,
            "gpt-3.5-turbo-instruct": 4096,
            "text-ada-001": 2049,
            "ada": 2049,
            "text-babbage-001": 2040,
            "babbage": 2049,
            "text-curie-001": 2049,
            "curie": 2049,
            "davinci": 2049,
            "text-davinci-003": 4097,
            "text-davinci-002": 4097,
            "code-davinci-002": 8001,
            "code-davinci-001": 8001,
            "code-cushman-002": 2048,
            "code-cushman-001": 2048,
        }

        # handling finetuned models
        if "ft-" in modelname:
            modelname = modelname.split(":")[0]

        context_size = model_token_mapping.get(modelname)

        if context_size is None:
            raise ValueError(
                f"Unknown model: {modelname}. Please provide a valid OpenAI model name. "
                "Known models are: " + ", ".join(model_token_mapping.keys())
            )

        return context_size

Frequently Asked Questions

What does modelname_to_contextsize() do?
modelname_to_contextsize() maps an OpenAI model name to its maximum context size in tokens. It is a static method of BaseOpenAI in the langchain codebase, defined in libs/partners/openai/langchain_openai/llms/base.py, and it raises a ValueError for unrecognized model names.
Where is modelname_to_contextsize() defined?
modelname_to_contextsize() is defined in libs/partners/openai/langchain_openai/llms/base.py at line 658.
What calls modelname_to_contextsize()?
modelname_to_contextsize() is called by one function: max_context_size().
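As the dependency diagram shows, max_context_size() delegates to this static lookup using the instance's configured model name. The sketch below illustrates that caller relationship with a simplified stand-in class; the class name and the abbreviated mapping are assumptions for illustration, not the BaseOpenAI implementation.

```python
class FakeOpenAI:
    """Illustrative stand-in for BaseOpenAI showing the caller relationship."""

    def __init__(self, model_name: str):
        self.model_name = model_name

    @staticmethod
    def modelname_to_contextsize(modelname: str) -> int:
        # Trimmed-down mapping for illustration only.
        mapping = {"gpt-4": 8192, "gpt-3.5-turbo-instruct": 4096}
        if "ft-" in modelname:
            modelname = modelname.split(":")[0]
        try:
            return mapping[modelname]
        except KeyError:
            raise ValueError(f"Unknown model: {modelname}") from None

    @property
    def max_context_size(self) -> int:
        # The single caller: delegates to the static lookup with the
        # instance's configured model name.
        return self.modelname_to_contextsize(self.model_name)

print(FakeOpenAI("gpt-4").max_context_size)  # 8192
```

Because the underlying lookup is a static method, it can also be called without constructing an instance, as in the docstring example.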
