
AsyncCallbackHandler Class — langchain Architecture

Architecture documentation for the AsyncCallbackHandler class in base.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  AsyncCallbackHandler["AsyncCallbackHandler"]
  BaseCallbackHandler["BaseCallbackHandler"]
  AsyncCallbackHandler -->|extends| BaseCallbackHandler
  base_py["base.py"]
  AsyncCallbackHandler -->|defined in| base_py
  on_llm_start["on_llm_start()"]
  AsyncCallbackHandler -->|method| on_llm_start
  on_chat_model_start["on_chat_model_start()"]
  AsyncCallbackHandler -->|method| on_chat_model_start
  on_llm_new_token["on_llm_new_token()"]
  AsyncCallbackHandler -->|method| on_llm_new_token
  on_llm_end["on_llm_end()"]
  AsyncCallbackHandler -->|method| on_llm_end
  on_llm_error["on_llm_error()"]
  AsyncCallbackHandler -->|method| on_llm_error
  on_chain_start["on_chain_start()"]
  AsyncCallbackHandler -->|method| on_chain_start
  on_chain_end["on_chain_end()"]
  AsyncCallbackHandler -->|method| on_chain_end
  on_chain_error["on_chain_error()"]
  AsyncCallbackHandler -->|method| on_chain_error
  on_tool_start["on_tool_start()"]
  AsyncCallbackHandler -->|method| on_tool_start
  on_tool_end["on_tool_end()"]
  AsyncCallbackHandler -->|method| on_tool_end
  on_tool_error["on_tool_error()"]
  AsyncCallbackHandler -->|method| on_tool_error
  on_text["on_text()"]
  AsyncCallbackHandler -->|method| on_text
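
As a minimal sketch of the relationship shown above (the handler class and its timing logic are illustrative, not part of the langchain codebase), a custom handler subclasses AsyncCallbackHandler and overrides only the hooks it needs:

import time
from typing import Any
from uuid import UUID

from langchain_core.callbacks.base import AsyncCallbackHandler


class TimingHandler(AsyncCallbackHandler):
    """Illustrative handler that records how long each chain run takes."""

    def __init__(self) -> None:
        self.started: dict[UUID, float] = {}

    async def on_chain_start(
        self,
        serialized: dict[str, Any],
        inputs: dict[str, Any],
        *,
        run_id: UUID,
        **kwargs: Any,
    ) -> None:
        # Remember when this run began, keyed by its run ID.
        self.started[run_id] = time.monotonic()

    async def on_chain_end(
        self,
        outputs: dict[str, Any],
        *,
        run_id: UUID,
        **kwargs: Any,
    ) -> None:
        # Report the elapsed time for the matching run, if we saw it start.
        started = self.started.pop(run_id, None)
        if started is not None:
            print(f"chain run {run_id} took {time.monotonic() - started:.2f}s")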


Source Code

libs/core/langchain_core/callbacks/base.py lines 487–895

class AsyncCallbackHandler(BaseCallbackHandler):
    """Base async callback handler."""

    async def on_llm_start(
        self,
        serialized: dict[str, Any],
        prompts: list[str],
        *,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        tags: list[str] | None = None,
        metadata: dict[str, Any] | None = None,
        **kwargs: Any,
    ) -> None:
        """Run when the model starts running.

        !!! warning

            This method is called for non-chat models (regular text completion LLMs). If
            you're implementing a handler for a chat model, you should use
            `on_chat_model_start` instead.

        Args:
            serialized: The serialized LLM.
            prompts: The prompts.
            run_id: The ID of the current run.
            parent_run_id: The ID of the parent run.
            tags: The tags.
            metadata: The metadata.
            **kwargs: Additional keyword arguments.
        """

    async def on_chat_model_start(
        self,
        serialized: dict[str, Any],
        messages: list[list[BaseMessage]],
        *,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        tags: list[str] | None = None,
        metadata: dict[str, Any] | None = None,
        **kwargs: Any,
    ) -> Any:
        """Run when a chat model starts running.

        !!! warning

            This method is called for chat models. If you're implementing a handler for
            a non-chat model, you should use `on_llm_start` instead.

        Args:
            serialized: The serialized chat model.
            messages: The messages.
            run_id: The ID of the current run.
            parent_run_id: The ID of the parent run.
            tags: The tags.
            metadata: The metadata.
            **kwargs: Additional keyword arguments.
        """
        # NotImplementedError is thrown intentionally
        # Callback handler will fall back to on_llm_start if this exception is thrown
        msg = f"{self.__class__.__name__} does not implement `on_chat_model_start`"
        raise NotImplementedError(msg)

    async def on_llm_new_token(
        self,
        token: str,
        *,
        chunk: GenerationChunk | ChatGenerationChunk | None = None,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        tags: list[str] | None = None,
        **kwargs: Any,
    ) -> None:
        """Run on new output token. Only available when streaming is enabled.

        For both chat models and non-chat models (legacy text completion LLMs).

        Args:
            token: The new token.
            chunk: The new generated chunk, containing content and other information.
            run_id: The ID of the current run.
            parent_run_id: The ID of the parent run.
            tags: The tags.
            **kwargs: Additional keyword arguments.
        """

    # ... remaining hooks (on_llm_end, on_llm_error, on_chain_start, on_chain_end,
    # on_chain_error, on_tool_start, on_tool_end, on_tool_error, on_text) are
    # omitted from this excerpt.
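
A hedged usage sketch (the handler, the prompt, and the chat model variable below are illustrative assumptions, not part of the excerpt): a streaming handler overrides on_llm_new_token to receive tokens, and overrides on_chat_model_start so the NotImplementedError fallback to on_llm_start described above is never triggered for chat models. The handler is attached at invocation time through the callbacks entry of the run config.

from typing import Any
from uuid import UUID

from langchain_core.callbacks.base import AsyncCallbackHandler
from langchain_core.messages import BaseMessage


class StreamingPrinter(AsyncCallbackHandler):
    """Illustrative handler that prints tokens as they stream in."""

    async def on_chat_model_start(
        self,
        serialized: dict[str, Any],
        messages: list[list[BaseMessage]],
        *,
        run_id: UUID,
        **kwargs: Any,
    ) -> None:
        # Overriding this hook means the NotImplementedError fallback is not used.
        print(f"chat model run {run_id} started")

    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        # Called once per streamed token when streaming is enabled.
        print(token, end="", flush=True)


# Assumed usage with any chat model that supports streaming:
#     await chat_model.ainvoke(
#         "Tell me a short joke",
#         config={"callbacks": [StreamingPrinter()]},
#     )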

Frequently Asked Questions

What is the AsyncCallbackHandler class?
AsyncCallbackHandler is the base class for asynchronous callback handlers in the langchain codebase, defined in libs/core/langchain_core/callbacks/base.py.
Where is AsyncCallbackHandler defined?
AsyncCallbackHandler is defined in libs/core/langchain_core/callbacks/base.py at line 487.
What does AsyncCallbackHandler extend?
AsyncCallbackHandler extends BaseCallbackHandler.
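
Because the async variant's hooks are coroutines, a subclass can await asynchronous work inside a callback, which the synchronous hooks inherited from BaseCallbackHandler cannot do. A minimal sketch (the queue-forwarding handler is an illustration, not langchain API):

import asyncio
from typing import Any

from langchain_core.callbacks.base import AsyncCallbackHandler
from langchain_core.outputs import LLMResult


class ResultQueueHandler(AsyncCallbackHandler):
    """Illustrative handler that forwards finished LLM results to an asyncio.Queue."""

    def __init__(self) -> None:
        self.results: asyncio.Queue[LLMResult] = asyncio.Queue()

    async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        # The hook is a coroutine, so it can await asyncio primitives directly.
        await self.results.put(response)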
