CallbackManagerMixin Class — langchain Architecture
Architecture documentation for the CallbackManagerMixin class in base.py from the langchain codebase.
Dependency Diagram
graph TD
    CallbackManagerMixin["CallbackManagerMixin"]
    base_py["base.py"]
    CallbackManagerMixin -->|defined in| base_py
    on_llm_start["on_llm_start()"]
    CallbackManagerMixin -->|method| on_llm_start
    on_chat_model_start["on_chat_model_start()"]
    CallbackManagerMixin -->|method| on_chat_model_start
    on_retriever_start["on_retriever_start()"]
    CallbackManagerMixin -->|method| on_retriever_start
    on_chain_start["on_chain_start()"]
    CallbackManagerMixin -->|method| on_chain_start
    on_tool_start["on_tool_start()"]
    CallbackManagerMixin -->|method| on_tool_start
Source Code
libs/core/langchain_core/callbacks/base.py lines 238–371
# Imports used below (declared near the top of base.py, outside the quoted range):
#     from typing import Any
#     from uuid import UUID
#     from langchain_core.messages import BaseMessage

class CallbackManagerMixin:
    """Mixin for callback manager."""

    def on_llm_start(
        self,
        serialized: dict[str, Any],
        prompts: list[str],
        *,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        tags: list[str] | None = None,
        metadata: dict[str, Any] | None = None,
        **kwargs: Any,
    ) -> Any:
        """Run when LLM starts running.

        !!! warning
            This method is called for non-chat models (regular text completion
            LLMs). If you're implementing a handler for a chat model, you should
            use `on_chat_model_start` instead.

        Args:
            serialized: The serialized LLM.
            prompts: The prompts.
            run_id: The ID of the current run.
            parent_run_id: The ID of the parent run.
            tags: The tags.
            metadata: The metadata.
            **kwargs: Additional keyword arguments.
        """

    def on_chat_model_start(
        self,
        serialized: dict[str, Any],
        messages: list[list[BaseMessage]],
        *,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        tags: list[str] | None = None,
        metadata: dict[str, Any] | None = None,
        **kwargs: Any,
    ) -> Any:
        """Run when a chat model starts running.

        !!! warning
            This method is called for chat models. If you're implementing a handler
            for a non-chat model, you should use `on_llm_start` instead.

        Args:
            serialized: The serialized chat model.
            messages: The messages.
            run_id: The ID of the current run.
            parent_run_id: The ID of the parent run.
            tags: The tags.
            metadata: The metadata.
            **kwargs: Additional keyword arguments.
        """
        # NotImplementedError is raised intentionally: the callback manager
        # falls back to on_llm_start if this exception is thrown.
        msg = f"{self.__class__.__name__} does not implement `on_chat_model_start`"
        raise NotImplementedError(msg)

    def on_retriever_start(
        self,
        serialized: dict[str, Any],
        query: str,
        *,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        tags: list[str] | None = None,
        metadata: dict[str, Any] | None = None,
        **kwargs: Any,
    ) -> Any:
        """Run when the `Retriever` starts running.

        Args:
            serialized: The serialized `Retriever`.
            query: The query.
            run_id: The ID of the current run.
            parent_run_id: The ID of the parent run.
            tags: The tags.
            metadata: The metadata.
            **kwargs: Additional keyword arguments.
        """

    # The quoted range (lines 238-371) also contains on_chain_start() and
    # on_tool_start(); see the dependency diagram above.
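The comment at the end of on_chat_model_start captures the fallback contract: when a handler does not override that hook, the callback manager catches the NotImplementedError and re-dispatches the event to on_llm_start. The sketch below illustrates that contract under stated assumptions; dispatch_chat_model_start is a hypothetical helper written for this page, not a langchain API, and it assumes get_buffer_string is an acceptable way to render each batch of messages into a prompt string.

from typing import Any
from uuid import UUID

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.messages import BaseMessage, get_buffer_string


def dispatch_chat_model_start(
    handler: BaseCallbackHandler,
    serialized: dict[str, Any],
    messages: list[list[BaseMessage]],
    run_id: UUID,
) -> None:
    """Illustrative only: mimic the fallback described in the source comment."""
    try:
        # Preferred path: the handler knows about chat models.
        handler.on_chat_model_start(serialized, messages, run_id=run_id)
    except NotImplementedError:
        # Fallback path: render each message batch to a single prompt string
        # and hand it to the plain-LLM hook instead.
        prompts = [get_buffer_string(batch) for batch in messages]
        handler.on_llm_start(serialized, prompts, run_id=run_id)

This is why a handler that only cares about prompt text can implement on_llm_start alone and still observe chat model runs.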
Frequently Asked Questions
What is the CallbackManagerMixin class?
CallbackManagerMixin is a mixin class in the langchain codebase, defined in libs/core/langchain_core/callbacks/base.py. It declares the start-of-run callback hooks that the callback manager dispatches to handlers: on_llm_start, on_chat_model_start, on_retriever_start, on_chain_start, and on_tool_start.
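A minimal sketch of a handler built on these hooks, assuming BaseCallbackHandler (which mixes in CallbackManagerMixin) is the base class handlers normally extend; PromptLogger is a hypothetical name used here for illustration.

from typing import Any
from uuid import UUID

from langchain_core.callbacks import BaseCallbackHandler


class PromptLogger(BaseCallbackHandler):
    """Logs prompts as soon as an LLM run starts."""

    def on_llm_start(
        self,
        serialized: dict[str, Any],
        prompts: list[str],
        *,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        **kwargs: Any,
    ) -> None:
        # Called for text-completion LLMs, and for chat models too via the
        # on_chat_model_start -> on_llm_start fallback shown in the source above.
        for prompt in prompts:
            print(f"[{run_id}] LLM starting with prompt: {prompt!r}")

Such a handler would typically be passed to a model or chain through its callbacks parameter at invocation time.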
Where is CallbackManagerMixin defined?
CallbackManagerMixin is defined in libs/core/langchain_core/callbacks/base.py at line 238.
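A short import sketch, assuming a current langchain_core release in which BaseCallbackHandler mixes this class in:

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.callbacks.base import CallbackManagerMixin

# Handlers built on BaseCallbackHandler inherit the no-op "start" hooks
# declared by CallbackManagerMixin (assumption: current langchain_core layout).
assert issubclass(BaseCallbackHandler, CallbackManagerMixin)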