FinalStreamingStdOutCallbackHandler Class — langchain Architecture

Architecture documentation for the FinalStreamingStdOutCallbackHandler class in streaming_stdout_final_only.py from the langchain codebase.

Dependency Diagram

graph TD
  FinalStreamingStdOutCallbackHandler["FinalStreamingStdOutCallbackHandler"]
  StreamingStdOutCallbackHandler["StreamingStdOutCallbackHandler"]
  FinalStreamingStdOutCallbackHandler -->|extends| StreamingStdOutCallbackHandler
  streaming_stdout_final_only_py["streaming_stdout_final_only.py"]
  FinalStreamingStdOutCallbackHandler -->|defined in| streaming_stdout_final_only_py
  append_to_last_tokens["append_to_last_tokens()"]
  FinalStreamingStdOutCallbackHandler -->|method| append_to_last_tokens
  check_if_answer_reached["check_if_answer_reached()"]
  FinalStreamingStdOutCallbackHandler -->|method| check_if_answer_reached
  init["__init__()"]
  FinalStreamingStdOutCallbackHandler -->|method| init
  on_llm_start["on_llm_start()"]
  FinalStreamingStdOutCallbackHandler -->|method| on_llm_start
  on_llm_new_token["on_llm_new_token()"]
  FinalStreamingStdOutCallbackHandler -->|method| on_llm_new_token
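The extends edge is the key relationship: the parent class defines the default streaming behavior, which the subclass restricts. For context, here is a condensed, paraphrased sketch of what StreamingStdOutCallbackHandler.on_llm_new_token does in langchain_core (not the verbatim source; see langchain_core for the full class):

import sys
from typing import Any

from langchain_core.callbacks import BaseCallbackHandler


class StreamingStdOutCallbackHandler(BaseCallbackHandler):
    """Paraphrased sketch of the parent class (not the verbatim source)."""

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        # The base handler writes every streamed token to stdout; the
        # subclass documented here overrides this method to suppress
        # output until the answer prefix has been seen.
        sys.stdout.write(token)
        sys.stdout.flush()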

Source Code

libs/langchain/langchain_classic/callbacks/streaming_stdout_final_only.py lines 12–96

# Imports and the default prefix constant live at the top of the file,
# above the excerpt's starting line; they are reproduced here
# (reconstructed) so the snippet is self-contained.
import sys
from typing import Any

from langchain_core.callbacks import StreamingStdOutCallbackHandler
from typing_extensions import override

DEFAULT_ANSWER_PREFIX_TOKENS = ["Final", "Answer", ":"]


class FinalStreamingStdOutCallbackHandler(StreamingStdOutCallbackHandler):
    """Callback handler for streaming in agents.

    Only works with agents using LLMs that support streaming.

    Only the final output of the agent will be streamed.
    """

    def append_to_last_tokens(self, token: str) -> None:
        """Append token to the last tokens."""
        self.last_tokens.append(token)
        self.last_tokens_stripped.append(token.strip())
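        # Keep only the most recent n tokens, where n = len(answer_prefix_tokens),
        # so the buffer acts as a sliding window over the stream.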
        if len(self.last_tokens) > len(self.answer_prefix_tokens):
            self.last_tokens.pop(0)
            self.last_tokens_stripped.pop(0)

    def check_if_answer_reached(self) -> bool:
        """Check if the answer has been reached."""
        if self.strip_tokens:
            return self.last_tokens_stripped == self.answer_prefix_tokens_stripped
        return self.last_tokens == self.answer_prefix_tokens

    def __init__(
        self,
        *,
        answer_prefix_tokens: list[str] | None = None,
        strip_tokens: bool = True,
        stream_prefix: bool = False,
    ) -> None:
        """Instantiate FinalStreamingStdOutCallbackHandler.

        Args:
            answer_prefix_tokens: Token sequence that prefixes the answer.
                Default is ["Final", "Answer", ":"]
            strip_tokens: Ignore white spaces and new lines when comparing
                answer_prefix_tokens to last tokens? (to determine if answer has been
                reached)
            stream_prefix: Should answer prefix itself also be streamed?
        """
        super().__init__()
        if answer_prefix_tokens is None:
            self.answer_prefix_tokens = DEFAULT_ANSWER_PREFIX_TOKENS
        else:
            self.answer_prefix_tokens = answer_prefix_tokens
        if strip_tokens:
            self.answer_prefix_tokens_stripped = [
                token.strip() for token in self.answer_prefix_tokens
            ]
        else:
            self.answer_prefix_tokens_stripped = self.answer_prefix_tokens
        self.last_tokens = [""] * len(self.answer_prefix_tokens)
        self.last_tokens_stripped = [""] * len(self.answer_prefix_tokens)
        self.strip_tokens = strip_tokens
        self.stream_prefix = stream_prefix
        self.answer_reached = False

    @override
    def on_llm_start(
        self,
        serialized: dict[str, Any],
        prompts: list[str],
        **kwargs: Any,
    ) -> None:
        """Run when LLM starts running."""
        self.answer_reached = False

    @override
    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        """Run on new LLM token. Only available when streaming is enabled."""
        # Remember the last n tokens, where n = len(answer_prefix_tokens)
        self.append_to_last_tokens(token)

        # Check if the last n tokens match the answer_prefix_tokens list ...
        if self.check_if_answer_reached():
            self.answer_reached = True
            if self.stream_prefix:
                for t in self.last_tokens:
                    sys.stdout.write(t)
                sys.stdout.flush()
            return

        # ... if yes, then print tokens from now on
        if self.answer_reached:
            sys.stdout.write(token)
            sys.stdout.flush()
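
To make the detection logic concrete, here is a small self-contained simulation of the sliding-window comparison: tokens before the "Final Answer :" prefix are swallowed, tokens after it are printed. The token strings and the stdout-capture scaffolding are illustrative assumptions, not part of the langchain source; the import path follows the file location shown above and may differ across langchain versions.

import io
from contextlib import redirect_stdout

from langchain_classic.callbacks.streaming_stdout_final_only import (
    FinalStreamingStdOutCallbackHandler,
)

handler = FinalStreamingStdOutCallbackHandler()
handler.on_llm_start(serialized={}, prompts=["What is 2 + 2?"])

# A made-up token stream: reasoning tokens, then the prefix, then the answer.
tokens = ["I", " should", " answer", " now", ".", "\nFinal", " Answer", ":", " 4", "\n"]

buffer = io.StringIO()
with redirect_stdout(buffer):
    for token in tokens:
        handler.on_llm_new_token(token)

print(repr(buffer.getvalue()))  # ' 4\n' — only the text after the prefix

Because strip_tokens defaults to True, the comparison ignores surrounding whitespace, which is why "\nFinal" still matches the prefix token "Final".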

Frequently Asked Questions

What is the FinalStreamingStdOutCallbackHandler class?
FinalStreamingStdOutCallbackHandler is a callback handler in the langchain codebase that writes only an agent's final answer to stdout as it streams, suppressing intermediate tokens. It is defined in libs/langchain/langchain_classic/callbacks/streaming_stdout_final_only.py.
Where is FinalStreamingStdOutCallbackHandler defined?
FinalStreamingStdOutCallbackHandler is defined in libs/langchain/langchain_classic/callbacks/streaming_stdout_final_only.py at line 12.
What does FinalStreamingStdOutCallbackHandler extend?
FinalStreamingStdOutCallbackHandler extends StreamingStdOutCallbackHandler.
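How is FinalStreamingStdOutCallbackHandler used?
Pass an instance in the callbacks list of a streaming-capable LLM driven by an agent; only tokens after the answer prefix reach stdout. A minimal sketch (the langchain_openai provider is an assumption, and exact import paths vary by langchain version):

from langchain_classic.callbacks.streaming_stdout_final_only import (
    FinalStreamingStdOutCallbackHandler,
)
from langchain_openai import OpenAI  # assumed provider; any streaming LLM works

# streaming=True is essential: without it the LLM never emits per-token
# callbacks, so the handler has nothing to filter.
llm = OpenAI(
    streaming=True,
    callbacks=[FinalStreamingStdOutCallbackHandler()],
    temperature=0,
)

# The prefix is configurable if an agent uses different wording
# (the tokens below are an illustrative assumption):
handler = FinalStreamingStdOutCallbackHandler(
    answer_prefix_tokens=["The", "answer", ":"]
)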
