BaseLanguageModel Class — langchain Architecture

Architecture documentation for the BaseLanguageModel class in base.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  f5a29046_740d_1358_3615_8cc689b26ca9["BaseLanguageModel"]
  8bd596cf_1a7f_2f26_b1cd_1b6b61ca0549["base.py"]
  f5a29046_740d_1358_3615_8cc689b26ca9 -->|defined in| 8bd596cf_1a7f_2f26_b1cd_1b6b61ca0549
  acd4b119_3c4d_98a0_10ba_f2eed71e717b["set_verbose()"]
  f5a29046_740d_1358_3615_8cc689b26ca9 -->|method| acd4b119_3c4d_98a0_10ba_f2eed71e717b
  5970390e_eb12_d216_7e15_3c89530cee10["InputType()"]
  f5a29046_740d_1358_3615_8cc689b26ca9 -->|method| 5970390e_eb12_d216_7e15_3c89530cee10
  c73294e8_26c4_ecd0_fa53_ec6d0a9c6c56["generate_prompt()"]
  f5a29046_740d_1358_3615_8cc689b26ca9 -->|method| c73294e8_26c4_ecd0_fa53_ec6d0a9c6c56
  61a4be48_28a1_3476_27b7_7f2b5fdae997["agenerate_prompt()"]
  f5a29046_740d_1358_3615_8cc689b26ca9 -->|method| 61a4be48_28a1_3476_27b7_7f2b5fdae997
  d86a7481_8922_e18b_7819_39cb605334fa["with_structured_output()"]
  f5a29046_740d_1358_3615_8cc689b26ca9 -->|method| d86a7481_8922_e18b_7819_39cb605334fa
  fc65a65f_0ada_bad2_3ada_d5605349f766["_identifying_params()"]
  f5a29046_740d_1358_3615_8cc689b26ca9 -->|method| fc65a65f_0ada_bad2_3ada_d5605349f766
  4bf94ed3_6d3e_2b7c_07b9_290dd3f7edf8["get_token_ids()"]
  f5a29046_740d_1358_3615_8cc689b26ca9 -->|method| 4bf94ed3_6d3e_2b7c_07b9_290dd3f7edf8
  4b493e08_cb6e_df06_b3b7_c129ab4710fd["get_num_tokens()"]
  f5a29046_740d_1358_3615_8cc689b26ca9 -->|method| 4b493e08_cb6e_df06_b3b7_c129ab4710fd
  20140000_1d28_439f_e76d_4451abf1068a["get_num_tokens_from_messages()"]
  f5a29046_740d_1358_3615_8cc689b26ca9 -->|method| 20140000_1d28_439f_e76d_4451abf1068a

Source Code

libs/core/langchain_core/language_models/base.py lines 139–373

class BaseLanguageModel(
    RunnableSerializable[LanguageModelInput, LanguageModelOutputVar], ABC
):
    """Abstract base class for interfacing with language models.

    All language model wrappers inherit from `BaseLanguageModel`.

    """

    cache: BaseCache | bool | None = Field(default=None, exclude=True)
    """Whether to cache the response.

    * If `True`, will use the global cache.
    * If `False`, will not use a cache.
    * If `None`, will use the global cache if it's set, otherwise no cache.
    * If instance of `BaseCache`, will use the provided cache.

    Caching is not currently supported for streaming methods of models.
    """

    verbose: bool = Field(default_factory=_get_verbosity, exclude=True, repr=False)
    """Whether to print out response text."""

    callbacks: Callbacks = Field(default=None, exclude=True)
    """Callbacks to add to the run trace."""

    tags: list[str] | None = Field(default=None, exclude=True)
    """Tags to add to the run trace."""

    metadata: dict[str, Any] | None = Field(default=None, exclude=True)
    """Metadata to add to the run trace."""

    custom_get_token_ids: Callable[[str], list[int]] | None = Field(
        default=None, exclude=True
    )
    """Optional encoder to use for counting tokens."""

    model_config = ConfigDict(
        arbitrary_types_allowed=True,
    )

    @field_validator("verbose", mode="before")
    def set_verbose(cls, verbose: bool | None) -> bool:  # noqa: FBT001
        """If verbose is `None`, set it.

        This allows users to pass in `None` as verbose to access the global setting.

        Args:
            verbose: The verbosity setting to use.

        Returns:
            The verbosity setting to use.

        """
        if verbose is None:
            return _get_verbosity()
        return verbose

    @property
    @override
    def InputType(self) -> TypeAlias:
        """Get the input type for this `Runnable`."""
        # This is a version of LanguageModelInput which replaces the abstract
        # base class BaseMessage with a union of its subclasses, which makes
        # for a much better schema.
        return str | StringPromptValue | ChatPromptValueConcrete | list[AnyMessage]

    @abstractmethod
    def generate_prompt(
        self,
        prompts: list[PromptValue],
        stop: list[str] | None = None,
        callbacks: Callbacks = None,
        **kwargs: Any,
    ) -> LLMResult:
        """Pass a sequence of prompts to the model and return model generations.

        This method should make use of batched calls for models that expose a batched
        API.

        Use this method when you want to:
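The `cache` field above accepts four kinds of values. A minimal, self-contained sketch of the resolution rule its docstring describes (the helper name `resolve_cache` and the `global_cache` parameter are illustrative, standing in for the process-wide cache langchain configures via `set_llm_cache()`):

```python
def resolve_cache(cache, global_cache):
    """Hypothetical sketch of the cache-resolution rules on the `cache` field."""
    if cache is True:
        # The global cache was explicitly requested; fail loudly if unset.
        if global_cache is None:
            raise ValueError("cache=True but no global cache is configured")
        return global_cache
    if cache is False:
        return None  # caching disabled
    if cache is None:
        return global_cache  # global cache if set, otherwise no cache
    return cache  # assume a BaseCache-like instance; use it directly
```

Note that `None` (the default) degrades gracefully to "no cache" when no global cache exists, while `True` raises instead, which is why the two cases are documented separately.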

Frequently Asked Questions

What is the BaseLanguageModel class?
BaseLanguageModel is the abstract base class that all langchain language model wrappers inherit from, defined in libs/core/langchain_core/language_models/base.py.
Where is BaseLanguageModel defined?
BaseLanguageModel is defined in libs/core/langchain_core/language_models/base.py at line 139.
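The excerpt above also declares token-counting methods (`get_token_ids()`, `get_num_tokens()`) alongside the `custom_get_token_ids` field. A hedged sketch of that override mechanism, with a whitespace tokenizer standing in for the real default (langchain's actual fallback tokenizer differs; the function and encoder names here are illustrative):

```python
def get_num_tokens(text, custom_get_token_ids=None):
    """Count tokens in `text`, preferring a caller-supplied encoder."""
    if custom_get_token_ids is not None:
        # Mirrors the `custom_get_token_ids` field: an optional
        # Callable[[str], list[int]] used instead of the default tokenizer.
        return len(custom_get_token_ids(text))
    # Stand-in default: whitespace split (illustrative only).
    return len(text.split())

# A hypothetical character-level encoder for illustration.
def char_ids(text):
    return [ord(c) for c in text]
```

Passing `char_ids` as the custom encoder makes the count character-based rather than word-based, the same kind of swap the field enables on the class.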
