AsyncCallbackManagerForLLMRun Class — langchain Architecture
Architecture documentation for the AsyncCallbackManagerForLLMRun class in manager.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    edb9d036_ba6e_cc5d_0607_e2486edb64dc["AsyncCallbackManagerForLLMRun"]
    de4f02df_e081_b738_0c6d_90fdd2ae605d["AsyncRunManager"]
    edb9d036_ba6e_cc5d_0607_e2486edb64dc -->|extends| de4f02df_e081_b738_0c6d_90fdd2ae605d
    8d0067ae_0de0_c143_63e7_e97f7fd9f614["LLMManagerMixin"]
    edb9d036_ba6e_cc5d_0607_e2486edb64dc -->|extends| 8d0067ae_0de0_c143_63e7_e97f7fd9f614
    ef55be46_0333_682d_8311_b4dd35c3e34c["manager.py"]
    edb9d036_ba6e_cc5d_0607_e2486edb64dc -->|defined in| ef55be46_0333_682d_8311_b4dd35c3e34c
    2c5343be_082e_e6aa_57e8_6f93fac4d029["get_sync()"]
    edb9d036_ba6e_cc5d_0607_e2486edb64dc -->|method| 2c5343be_082e_e6aa_57e8_6f93fac4d029
    00662245_87d9_b092_6f7f_ae1aca037dce["on_llm_new_token()"]
    edb9d036_ba6e_cc5d_0607_e2486edb64dc -->|method| 00662245_87d9_b092_6f7f_ae1aca037dce
    3dd7711b_7288_48d1_9e92_1b31506ed350["on_llm_end()"]
    edb9d036_ba6e_cc5d_0607_e2486edb64dc -->|method| 3dd7711b_7288_48d1_9e92_1b31506ed350
    d048a783_3474_9659_8d90_9c136dd64460["on_llm_error()"]
    edb9d036_ba6e_cc5d_0607_e2486edb64dc -->|method| d048a783_3474_9659_8d90_9c136dd64460
Source Code
libs/core/langchain_core/callbacks/manager.py lines 753–852
class AsyncCallbackManagerForLLMRun(AsyncRunManager, LLMManagerMixin):
    """Async callback manager for LLM run."""

    def get_sync(self) -> CallbackManagerForLLMRun:
        """Get the equivalent sync `RunManager`.

        Returns:
            The sync `RunManager`.
        """
        return CallbackManagerForLLMRun(
            run_id=self.run_id,
            handlers=self.handlers,
            inheritable_handlers=self.inheritable_handlers,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            inheritable_tags=self.inheritable_tags,
            metadata=self.metadata,
            inheritable_metadata=self.inheritable_metadata,
        )

    async def on_llm_new_token(
        self,
        token: str,
        *,
        chunk: GenerationChunk | ChatGenerationChunk | None = None,
        **kwargs: Any,
    ) -> None:
        """Run when LLM generates a new token.

        Args:
            token: The new token.
            chunk: The chunk.
            **kwargs: Additional keyword arguments.
        """
        if not self.handlers:
            return
        await ahandle_event(
            self.handlers,
            "on_llm_new_token",
            "ignore_llm",
            token,
            chunk=chunk,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )

    @shielded
    async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when LLM ends running.

        Args:
            response: The LLM result.
            **kwargs: Additional keyword arguments.
        """
        if not self.handlers:
            return
        await ahandle_event(
            self.handlers,
            "on_llm_end",
            "ignore_llm",
            response,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )
    @shielded
    async def on_llm_error(
        self,
        error: BaseException,
        **kwargs: Any,
    ) -> None:
        """Run when LLM errors.

        Args:
            error: The error that occurred.
            **kwargs: Additional keyword arguments.
        """
        if not self.handlers:
            return
        await ahandle_event(
            self.handlers,
            "on_llm_error",
            "ignore_llm",
            error,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )
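The methods above all share one dispatch shape: return early when no handlers are registered, then fan the event out through `ahandle_event`, letting each handler opt out of a whole event family via an ignore flag (here, `ignore_llm`). A minimal, self-contained sketch of that pattern follows; the toy handler and the simplified `ahandle_event` are illustrative stand-ins, not langchain's actual implementation:

```python
import asyncio


class TokenCollector:
    """Toy handler that records streamed tokens (stand-in for an
    AsyncCallbackHandler subclass)."""

    ignore_llm = False  # handlers opting out would set this to True

    def __init__(self):
        self.tokens = []

    async def on_llm_new_token(self, token, **kwargs):
        self.tokens.append(token)


async def ahandle_event(handlers, event_name, ignore_flag, *args, **kwargs):
    # Simplified fan-out: dispatch the named event to every handler
    # unless the handler sets the ignore flag for this event family.
    for handler in handlers:
        if getattr(handler, ignore_flag, False):
            continue
        method = getattr(handler, event_name, None)
        if method is not None:
            await method(*args, **kwargs)


async def main():
    collector = TokenCollector()
    # Simulate a streaming LLM emitting two tokens.
    for token in ["Hel", "lo"]:
        await ahandle_event([collector], "on_llm_new_token", "ignore_llm", token)
    return collector.tokens


print(asyncio.run(main()))  # ['Hel', 'lo']
```

The real `ahandle_event` additionally catches and logs per-handler exceptions so one misbehaving handler cannot abort the run, which is why `on_llm_end` and `on_llm_error` are also wrapped in `@shielded` to survive task cancellation.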
Frequently Asked Questions
What is the AsyncCallbackManagerForLLMRun class?
AsyncCallbackManagerForLLMRun is a class in the langchain codebase, defined in libs/core/langchain_core/callbacks/manager.py.
Where is AsyncCallbackManagerForLLMRun defined?
AsyncCallbackManagerForLLMRun is defined in libs/core/langchain_core/callbacks/manager.py at line 753.
What does AsyncCallbackManagerForLLMRun extend?
AsyncCallbackManagerForLLMRun extends AsyncRunManager and LLMManagerMixin.
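The sync/async pairing that `get_sync()` provides can be made concrete with a toy sketch: an async manager hands back a sync twin built from the same run identifiers, so events recorded against either refer to the same logical run. All class and field names below are illustrative stand-ins, not langchain's real API:

```python
from dataclasses import dataclass, field
from uuid import UUID, uuid4


@dataclass
class SyncRunManager:
    """Toy sync counterpart (stand-in for CallbackManagerForLLMRun)."""

    run_id: UUID
    tags: list = field(default_factory=list)


@dataclass
class AsyncRunManagerSketch:
    """Toy async manager (stand-in for AsyncCallbackManagerForLLMRun)."""

    run_id: UUID
    tags: list = field(default_factory=list)

    def get_sync(self) -> SyncRunManager:
        # The sync twin shares the same run_id and tags, so callbacks
        # fired through either manager are attributed to the same run.
        return SyncRunManager(run_id=self.run_id, tags=self.tags)


mgr = AsyncRunManagerSketch(run_id=uuid4(), tags=["demo"])
sync_mgr = mgr.get_sync()
assert sync_mgr.run_id == mgr.run_id  # same logical run
```

This mirrors the real `get_sync()` above, which copies `run_id`, handlers, tags, and metadata (plus their inheritable variants) into a `CallbackManagerForLLMRun`.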