BaseChatModel Class — langchain Architecture
Architecture documentation for the BaseChatModel class in chat_models.py from the langchain codebase.
Entity Profile
Dependency Diagram
```mermaid
graph TD
    d009a608_c505_bd50_7200_0de8a69ba4b7["BaseChatModel"]
    7deca977_9668_e807_4e77_0bc4519c0bfb["PromptValue"]
    d009a608_c505_bd50_7200_0de8a69ba4b7 -->|depends on| 7deca977_9668_e807_4e77_0bc4519c0bfb
    9c9ba0ab_539f_aaa1_399f_ec00f912f41f["_StreamingCallbackHandler"]
    d009a608_c505_bd50_7200_0de8a69ba4b7 -->|depends on| 9c9ba0ab_539f_aaa1_399f_ec00f912f41f
    17a9b92d_bb83_78d8_7df7_7200745cc17b["AIMessageChunk"]
    d009a608_c505_bd50_7200_0de8a69ba4b7 -->|depends on| 17a9b92d_bb83_78d8_7df7_7200745cc17b
    e24b5be2_a888_4700_a118_b05d34a03395["Generation"]
    d009a608_c505_bd50_7200_0de8a69ba4b7 -->|depends on| e24b5be2_a888_4700_a118_b05d34a03395
    fb3554e0_291b_93d2_d325_51461432ed8a["ChatGeneration"]
    d009a608_c505_bd50_7200_0de8a69ba4b7 -->|depends on| fb3554e0_291b_93d2_d325_51461432ed8a
    fcfa55b0_4a86_fa31_a156_3c38c76a0a9b["AIMessage"]
    d009a608_c505_bd50_7200_0de8a69ba4b7 -->|depends on| fcfa55b0_4a86_fa31_a156_3c38c76a0a9b
    523f3c01_ffbb_1a97_9161_fec704fe8c2e["BaseCache"]
    d009a608_c505_bd50_7200_0de8a69ba4b7 -->|depends on| 523f3c01_ffbb_1a97_9161_fec704fe8c2e
    20f4116a_d26d_2a5f_4a10_67af6940e081["chat_models.py"]
    d009a608_c505_bd50_7200_0de8a69ba4b7 -->|defined in| 20f4116a_d26d_2a5f_4a10_67af6940e081
    a73491b0_56c2_5e9d_709c_b917522e4d84["_serialized()"]
    d009a608_c505_bd50_7200_0de8a69ba4b7 -->|method| a73491b0_56c2_5e9d_709c_b917522e4d84
    1d15ce52_e504_d8df_651f_578dad2ed105["OutputType()"]
    d009a608_c505_bd50_7200_0de8a69ba4b7 -->|method| 1d15ce52_e504_d8df_651f_578dad2ed105
    b0e1e167_44cd_1c63_6e71_0caf683fc904["_convert_input()"]
    d009a608_c505_bd50_7200_0de8a69ba4b7 -->|method| b0e1e167_44cd_1c63_6e71_0caf683fc904
    f5ae3987_a3e6_8941_d427_2ceec1002585["invoke()"]
    d009a608_c505_bd50_7200_0de8a69ba4b7 -->|method| f5ae3987_a3e6_8941_d427_2ceec1002585
    39de84e5_4b2e_9b0f_208f_399a7b79ebc1["ainvoke()"]
    d009a608_c505_bd50_7200_0de8a69ba4b7 -->|method| 39de84e5_4b2e_9b0f_208f_399a7b79ebc1
    85ef976a_84b2_6421_f697_db0f396b2444["_should_stream()"]
    d009a608_c505_bd50_7200_0de8a69ba4b7 -->|method| 85ef976a_84b2_6421_f697_db0f396b2444
```
Source Code
libs/core/langchain_core/language_models/chat_models.py lines 246–1721
class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
r"""Base class for chat models.
Key imperative methods:
Methods that actually call the underlying model.
This table provides a brief overview of the main imperative methods. Please see the base `Runnable` reference for full documentation.
| Method | Input | Output | Description |
| --- | --- | --- | --- |
| `invoke` | `str \| list[dict \| tuple \| BaseMessage] \| PromptValue` | `BaseMessage` | A single chat model call. |
| `ainvoke` | same as `invoke` | `BaseMessage` | Defaults to running `invoke` in an async executor. |
| `stream` | same as `invoke` | `Iterator[BaseMessageChunk]` | Defaults to yielding output of `invoke`. |
| `astream` | same as `invoke` | `AsyncIterator[BaseMessageChunk]` | Defaults to yielding output of `ainvoke`. |
| `astream_events` | same as `invoke` | `AsyncIterator[StreamEvent]` | Event types: `on_chat_model_start`, `on_chat_model_stream`, `on_chat_model_end`. |
| `batch` | `list` of `invoke` inputs | `list[BaseMessage]` | Defaults to running `invoke` in concurrent threads. |
| `abatch` | `list` of `invoke` inputs | `list[BaseMessage]` | Defaults to running `ainvoke` in concurrent threads. |
| `batch_as_completed` | `list` of `invoke` inputs | `Iterator[tuple[int, Union[BaseMessage, Exception]]]` | Defaults to running `invoke` in concurrent threads. |
| `abatch_as_completed` | `list` of `invoke` inputs | `AsyncIterator[tuple[int, Union[BaseMessage, Exception]]]` | Defaults to running `ainvoke` in concurrent threads. |
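The `batch_as_completed` default described in the table above (run `invoke` in concurrent threads, yield indexed results as they finish) can be sketched with the standard library alone. `fake_invoke` and `batch_as_completed_sketch` are hypothetical stand-ins for illustration, not the LangChain implementation:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fake_invoke(prompt: str) -> str:
    """Stand-in for a single chat model call."""
    return prompt.upper()

def batch_as_completed_sketch(inputs: list[str]):
    """Run fake_invoke in concurrent threads, yielding (index, result) pairs
    as each call finishes. Mirrors the documented return shape: a failed call
    yields its Exception in place of a result."""
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(fake_invoke, x): i for i, x in enumerate(inputs)}
        for fut in as_completed(futures):
            idx = futures[fut]
            try:
                yield idx, fut.result()
            except Exception as exc:  # surface the error, keep the batch going
                yield idx, exc

# Arrival order may vary, but the indices let callers reassemble results:
collected = dict(batch_as_completed_sketch(["hi", "there"]))
assert collected == {0: "HI", 1: "THERE"}
```

The indexed tuples are what distinguish `batch_as_completed` from `batch`: results arrive in completion order, so the index is needed to match each output back to its input.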
Key declarative methods:
Methods for creating another `Runnable` using the chat model.
This table provides a brief overview of the main declarative methods. Please see the reference for each method for full documentation.
| Method | Description |
| ---------------------------- | ------------------------------------------------------------------------------------------ |
| `bind_tools` | Create chat model that can call tools. |
| `with_structured_output` | Create wrapper that structures model output using schema. |
| `with_retry` | Create wrapper that retries model calls on failure. |
| `with_fallbacks` | Create wrapper that falls back to other models on failure. |
| `configurable_fields` | Specify init args of the model that can be configured at runtime via the `RunnableConfig`. |
| `configurable_alternatives` | Specify alternative models which can be swapped in at runtime via the `RunnableConfig`. |
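The declarative methods above all follow one pattern: wrap the model in another `Runnable` rather than mutating it. A minimal stdlib sketch of the idea behind `with_retry` and `with_fallbacks` (the names and the toy models here are hypothetical, not the real `Runnable` machinery):

```python
from typing import Callable

def with_retry_sketch(fn: Callable[[str], str], attempts: int = 3) -> Callable[[str], str]:
    """Return a wrapper that re-calls fn up to `attempts` times on failure."""
    def wrapped(x: str) -> str:
        last_error: Exception | None = None
        for _ in range(attempts):
            try:
                return fn(x)
            except Exception as exc:
                last_error = exc
        raise last_error
    return wrapped

def with_fallbacks_sketch(primary, fallbacks):
    """Return a wrapper that tries `primary`, then each fallback in order."""
    def wrapped(x: str) -> str:
        last_error: Exception | None = None
        for fn in (primary, *fallbacks):
            try:
                return fn(x)
            except Exception as exc:
                last_error = exc
        raise last_error
    return wrapped

calls = {"n": 0}

def flaky_model(prompt: str) -> str:
    calls["n"] += 1
    if calls["n"] < 3:  # fail the first two calls, succeed on the third
        raise RuntimeError("transient error")
    return f"ok: {prompt}"

retried = with_retry_sketch(flaky_model, attempts=3)
assert retried("hello") == "ok: hello"

def broken_model(prompt: str) -> str:
    raise RuntimeError("always down")

resilient = with_fallbacks_sketch(broken_model, [lambda p: f"backup: {p}"])
assert resilient("hello") == "backup: hello"
```

Because each wrapper returns a new callable, the composed object can itself be wrapped again, which is what makes the declarative methods chainable.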
Creating custom chat model:
Custom chat model implementations should inherit from this class.
Please reference the table below for information about which
methods and properties are required or optional for implementations.
| Method/Property | Description | Required |
| -------------------------------- | ------------------------------------------------------------------ | ----------------- |
| `_generate` | Use to generate a chat result from a prompt | Required |
| `_llm_type` (property) | Used to uniquely identify the type of the model. Used for logging. | Required |
| `_identifying_params` (property) | Represent model parameterization for tracing purposes. | Optional |
| `_stream` | Use to implement streaming | Optional |
| `_agenerate` | Use to implement a native async method | Optional |
| `_astream` | Use to implement async version of `_stream` | Optional |
""" # noqa: E501
rate_limiter: BaseRateLimiter | None = Field(default=None, exclude=True)
"An optional rate limiter to use for limiting the number of requests."
disable_streaming: bool | Literal["tool_calling"] = False
"""Whether to disable streaming for this model.
If streaming is bypassed, then `stream`/`astream`/`astream_events` will
defer to `invoke`/`ainvoke`.
- If `True`, will always bypass streaming case.
- If `'tool_calling'`, will bypass streaming case only when the model is called
with a `tools` keyword argument. In other words, LangChain will automatically
switch to non-streaming behavior (`invoke`) only when the tools argument is
provided. This offers the best of both worlds.
- If `False` (Default), will always use streaming case if available.
The main reason for this flag is that code might be written using `stream`, and
a user may want to swap out a given model for another whose implementation
does not properly support streaming.
"""
output_version: str | None = Field(
default_factory=from_env("LC_OUTPUT_VERSION", default=None)
)
"""Version of `AIMessage` output format to store in message content.
`AIMessage.content_blocks` will lazily parse the contents of `content` into a
standard format. This flag can be used to additionally store the standard format
in message content, e.g., for serialization purposes.
Supported values:
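The three `disable_streaming` cases documented above reduce to a small decision. The function below is a simplified sketch of that decision only, not the actual `_should_stream` method (which also checks, among other things, whether a `_stream`/`_astream` implementation exists):

```python
from typing import Literal, Union

def should_stream_sketch(
    disable_streaming: Union[bool, Literal["tool_calling"]],
    tools_passed: bool,
) -> bool:
    """Decide whether stream()/astream() should actually stream,
    following the documented `disable_streaming` semantics."""
    if disable_streaming is True:
        return False  # always bypass streaming; defer to invoke/ainvoke
    if disable_streaming == "tool_calling" and tools_passed:
        return False  # bypass streaming only when a `tools` kwarg is present
    return True       # default (False): stream whenever it is available

assert should_stream_sketch(True, tools_passed=False) is False
assert should_stream_sketch("tool_calling", tools_passed=True) is False
assert should_stream_sketch("tool_calling", tools_passed=False) is True
assert should_stream_sketch(False, tools_passed=True) is True
```

The `"tool_calling"` middle ground exists because some providers stream plain text well but handle tool calls more reliably through the non-streaming endpoint.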
Frequently Asked Questions
What is the BaseChatModel class?
BaseChatModel is a class in the langchain codebase, defined in libs/core/langchain_core/language_models/chat_models.py.
Where is BaseChatModel defined?
BaseChatModel is defined in libs/core/langchain_core/language_models/chat_models.py at line 246.
What does BaseChatModel extend?
BaseChatModel's direct base classes, as shown in the class signature, are `BaseLanguageModel[AIMessage]` and `ABC`. PromptValue, _StreamingCallbackHandler, AIMessageChunk, Generation, ChatGeneration, AIMessage, and BaseCache are dependencies it uses, not classes it extends.