on_llm_new_token() — langchain Function Reference

Architecture documentation for the on_llm_new_token() function in event_stream.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  36755b3c_0e3e_b46e_b4e4_33b89305b625["on_llm_new_token()"]
  33d093c4_1ed0_fc6a_17c6_762d4c5cfa04["_AstreamEventsCallbackHandler"]
  36755b3c_0e3e_b46e_b4e4_33b89305b625 -->|defined in| 33d093c4_1ed0_fc6a_17c6_762d4c5cfa04
  87f79bee_f9c5_8262_1829_62f633c4f870["_send()"]
  36755b3c_0e3e_b46e_b4e4_33b89305b625 -->|calls| 87f79bee_f9c5_8262_1829_62f633c4f870
  9aed8e4f_9d4c_016f_aa43_c5908015cf8d["_get_parent_ids()"]
  36755b3c_0e3e_b46e_b4e4_33b89305b625 -->|calls| 9aed8e4f_9d4c_016f_aa43_c5908015cf8d
  style 36755b3c_0e3e_b46e_b4e4_33b89305b625 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/core/langchain_core/tracers/event_stream.py lines 428–486

    async def on_llm_new_token(
        self,
        token: str,
        *,
        chunk: GenerationChunk | ChatGenerationChunk | None = None,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        **kwargs: Any,
    ) -> None:
        """Run on new output token.

        Only available when streaming is enabled.

        For both chat models and non-chat models (legacy text-completion LLMs).

        Raises:
            ValueError: If the run type is not `llm` or `chat_model`.
            AssertionError: If the run ID is not found in the run map.
        """
        run_info = self.run_map.get(run_id)
        chunk_: GenerationChunk | BaseMessageChunk

        if run_info is None:
            msg = f"Run ID {run_id} not found in run map."
            raise AssertionError(msg)
        if self.is_tapped.get(run_id):
            return
        if run_info["run_type"] == "chat_model":
            event = "on_chat_model_stream"

            if chunk is None:
                chunk_ = AIMessageChunk(content=token)
            else:
                chunk_ = cast("ChatGenerationChunk", chunk).message

        elif run_info["run_type"] == "llm":
            event = "on_llm_stream"
            if chunk is None:
                chunk_ = GenerationChunk(text=token)
            else:
                chunk_ = cast("GenerationChunk", chunk)
        else:
            msg = f"Unexpected run type: {run_info['run_type']}"
            raise ValueError(msg)

        self._send(
            {
                "event": event,
                "data": {
                    "chunk": chunk_,
                },
                "run_id": str(run_id),
                "name": run_info["name"],
                "tags": run_info["tags"],
                "metadata": run_info["metadata"],
                "parent_ids": self._get_parent_ids(run_id),
            },
            run_info["run_type"],
        )
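The dispatch above can be sketched in a self-contained, stdlib-only form. This is an illustrative stand-in, not the real langchain classes: `StreamHandler`, its `run_map` of plain dicts, and the `sent` list (replacing the real `_send()` queue) are all hypothetical names, but the branching mirrors the source: look up the run by `run_id`, raise `AssertionError` if it is unknown, then emit `on_chat_model_stream` or `on_llm_stream` depending on the run type.

```python
# Hypothetical sketch of on_llm_new_token()'s dispatch logic.
# StreamHandler, run_map's dict entries, and the `sent` list are
# illustrative stand-ins, not langchain's actual classes.
from dataclasses import dataclass, field
from uuid import UUID, uuid4


@dataclass
class StreamHandler:
    run_map: dict = field(default_factory=dict)
    sent: list = field(default_factory=list)  # stand-in for _send()

    def on_llm_new_token(self, token: str, *, run_id: UUID) -> None:
        run_info = self.run_map.get(run_id)
        if run_info is None:
            # Mirrors the AssertionError branch in the real handler.
            raise AssertionError(f"Run ID {run_id} not found in run map.")
        if run_info["run_type"] == "chat_model":
            event = "on_chat_model_stream"
        elif run_info["run_type"] == "llm":
            event = "on_llm_stream"
        else:
            raise ValueError(f"Unexpected run type: {run_info['run_type']}")
        self.sent.append(
            {"event": event, "data": {"chunk": token}, "run_id": str(run_id)}
        )


handler = StreamHandler()
rid = uuid4()
handler.run_map[rid] = {"run_type": "chat_model"}
handler.on_llm_new_token("Hello", run_id=rid)
print(handler.sent[0]["event"])  # on_chat_model_stream
```

The key design point the sketch preserves is that the event name is derived from the run's registered type, not from the token itself, so chat-model and legacy LLM streams stay distinguishable downstream.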


Frequently Asked Questions

What does on_llm_new_token() do?
on_llm_new_token() handles a newly streamed output token: it looks up the run in the handler's run map and, based on the run type, dispatches an on_chat_model_stream or on_llm_stream event via _send(). It is defined in libs/core/langchain_core/tracers/event_stream.py.
Where is on_llm_new_token() defined?
on_llm_new_token() is defined in libs/core/langchain_core/tracers/event_stream.py at line 428.
What does on_llm_new_token() call?
on_llm_new_token() calls two functions: _get_parent_ids() and _send().
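Downstream, these events reach consumers as an async stream. The following stdlib-only sketch simulates such a stream with a hypothetical `fake_event_stream()` generator (a stand-in, not the real streaming API) and shows how a consumer might filter on_chat_model_stream events and reassemble the token chunks:

```python
# Stdlib-only sketch: simulate the event stream that on_llm_new_token()
# feeds, and filter for "on_chat_model_stream" events to rebuild the text.
# fake_event_stream() is a hypothetical stand-in, not a langchain API.
import asyncio


async def fake_event_stream():
    for token in ["Hel", "lo", "!"]:
        yield {"event": "on_chat_model_stream", "data": {"chunk": token}}
    yield {"event": "on_chat_model_end", "data": {}}


async def collect_text() -> str:
    parts = []
    async for ev in fake_event_stream():
        if ev["event"] == "on_chat_model_stream":
            parts.append(ev["data"]["chunk"])
    return "".join(parts)


text = asyncio.run(collect_text())
print(text)  # Hello!
```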
