LLM Class — langchain Architecture
Architecture documentation for the LLM class in llms.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
  b2c7d2a5_0852_93df_c3e1_828c36a88999["LLM"]
  ce4aa464_3868_179e_5d99_df48bc307c5f["BaseLLM"]
  b2c7d2a5_0852_93df_c3e1_828c36a88999 -->|extends| ce4aa464_3868_179e_5d99_df48bc307c5f
  a4692bf1_369d_4673_b1eb_6b9a8cbb9994["llms.py"]
  b2c7d2a5_0852_93df_c3e1_828c36a88999 -->|defined in| a4692bf1_369d_4673_b1eb_6b9a8cbb9994
  862f8f36_13e4_466c_ffc2_266e2e8b78e4["_call()"]
  b2c7d2a5_0852_93df_c3e1_828c36a88999 -->|method| 862f8f36_13e4_466c_ffc2_266e2e8b78e4
  a27ba324_f950_5f71_c654_3c089ef7f49a["_acall()"]
  b2c7d2a5_0852_93df_c3e1_828c36a88999 -->|method| a27ba324_f950_5f71_c654_3c089ef7f49a
  1958c025_da10_4021_3597_57eb74b3eae7["_generate()"]
  b2c7d2a5_0852_93df_c3e1_828c36a88999 -->|method| 1958c025_da10_4021_3597_57eb74b3eae7
  ffe354d9_4df1_2cd5_e12d_ff4ee146557e["_agenerate()"]
  b2c7d2a5_0852_93df_c3e1_828c36a88999 -->|method| ffe354d9_4df1_2cd5_e12d_ff4ee146557e
Source Code
libs/core/langchain_core/language_models/llms.py lines 1401–1528
class LLM(BaseLLM):
"""Simple interface for implementing a custom LLM.
You should subclass this class and implement the following:
- `_call` method: Run the LLM on the given prompt and input (used by `invoke`).
- `_identifying_params` property: Return a dictionary of the identifying parameters.
This is critical for caching and tracing purposes. The identifying
parameters form a dict that identifies the LLM and should usually
include a `model_name`.
Optional: Override the following methods to provide more optimizations:
- `_acall`: Provide a native async version of the `_call` method.
If not provided, will delegate to the synchronous version using
`run_in_executor`. (Used by `ainvoke`).
- `_stream`: Stream the LLM on the given prompt and input.
`stream` will use `_stream` if provided; otherwise it will
use `_call`, and output will arrive in one chunk.
- `_astream`: Override to provide a native async version of the `_stream` method.
`astream` will use `_astream` if provided; otherwise it falls
back to `_stream` if that is implemented, and to `_acall` if it
is not.
"""
@abstractmethod
def _call(
self,
prompt: str,
stop: list[str] | None = None,
run_manager: CallbackManagerForLLMRun | None = None,
**kwargs: Any,
) -> str:
"""Run the LLM on the given input.
Override this method to implement the LLM logic.
Args:
prompt: The prompt to generate from.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
If stop tokens are not supported, consider raising `NotImplementedError`.
run_manager: Callback manager for the run.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Returns:
The model output as a string. SHOULD NOT include the prompt.
"""
async def _acall(
self,
prompt: str,
stop: list[str] | None = None,
run_manager: AsyncCallbackManagerForLLMRun | None = None,
**kwargs: Any,
) -> str:
"""Async version of the _call method.
The default implementation delegates to the synchronous _call method using
`run_in_executor`. Subclasses that need to provide a true async implementation
should override this method to reduce the overhead of using `run_in_executor`.
Args:
prompt: The prompt to generate from.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
If stop tokens are not supported, consider raising `NotImplementedError`.
run_manager: Callback manager for the run.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Returns:
The model output as a string. SHOULD NOT include the prompt.
"""
Frequently Asked Questions
What is the LLM class?
LLM is a class in the langchain codebase, defined in libs/core/langchain_core/language_models/llms.py.
Where is LLM defined?
LLM is defined in libs/core/langchain_core/language_models/llms.py at line 1401.
What does LLM extend?
LLM extends BaseLLM.