_on_llm_new_token() — langchain Function Reference

Architecture documentation for the _on_llm_new_token() function, defined on _TracerCore in libs/core/langchain_core/tracers/core.py in the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  996d639d_021b_9484_94fa_5b6a785d8aeb["_on_llm_new_token()"]
  70348e44_de0f_ccb4_c06a_8453289ed93e["_TracerCore"]
  996d639d_021b_9484_94fa_5b6a785d8aeb -->|defined in| 70348e44_de0f_ccb4_c06a_8453289ed93e
  style 996d639d_021b_9484_94fa_5b6a785d8aeb fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/core/langchain_core/tracers/core.py lines 583–597

    def _on_llm_new_token(
        self,
        run: Run,
        token: str,
        chunk: GenerationChunk | ChatGenerationChunk | None,
    ) -> Coroutine[Any, Any, None] | None:
        """Process new LLM token.

        Args:
            run: The LLM run.
            token: The new token.
            chunk: Optional chunk.
        """
        _ = (run, token, chunk)
        return None
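
The base implementation deliberately discards its arguments and returns None; concrete tracers override this hook to react to streamed tokens as they arrive. Below is a minimal sketch of such an override, assuming BaseTracer (which builds on _TracerCore) and Run are importable from langchain_core.tracers.base and langchain_core.tracers.schemas; the TokenCollectingTracer class and its tokens_by_run attribute are illustrative names, not part of the langchain source.

    from __future__ import annotations

    from langchain_core.outputs import ChatGenerationChunk, GenerationChunk
    from langchain_core.tracers.base import BaseTracer
    from langchain_core.tracers.schemas import Run


    class TokenCollectingTracer(BaseTracer):
        """Illustrative tracer that records every streamed token, keyed by run id."""

        def __init__(self) -> None:
            super().__init__()
            self.tokens_by_run: dict[str, list[str]] = {}

        def _persist_run(self, run: Run) -> None:
            # BaseTracer declares this hook as abstract; nothing to persist here.
            pass

        def _on_llm_new_token(
            self,
            run: Run,
            token: str,
            chunk: GenerationChunk | ChatGenerationChunk | None,
        ) -> None:
            # Replace the base class no-op: store each token under its run id.
            self.tokens_by_run.setdefault(str(run.id), []).append(token)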

Frequently Asked Questions

What does _on_llm_new_token() do?
_on_llm_new_token() is a hook method on _TracerCore in the langchain codebase for processing a new LLM token during streaming. The base implementation shown above ignores its run, token, and chunk arguments and returns None; tracer subclasses override it to handle streamed tokens.
Where is _on_llm_new_token() defined?
_on_llm_new_token() is defined in libs/core/langchain_core/tracers/core.py at line 583.
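
During streaming, this hook is reached through the tracer's public callback entry points. A hedged usage sketch of the illustrative TokenCollectingTracer from above, assuming the on_llm_start and on_llm_new_token callback signatures exposed by BaseTracer; the serialized dict and prompt strings are placeholders.

    from uuid import uuid4

    tracer = TokenCollectingTracer()
    run_id = uuid4()

    # on_llm_start registers the run; on_llm_new_token then dispatches to
    # _on_llm_new_token for that run.
    tracer.on_llm_start({"name": "fake-llm"}, ["Hello"], run_id=run_id)
    for token in ["Hel", "lo", "!"]:
        tracer.on_llm_new_token(token, run_id=run_id)

    print(tracer.tokens_by_run[str(run_id)])  # ['Hel', 'lo', '!']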
