
token_buffer.py — langchain Source File

Architecture documentation for token_buffer.py, a Python source file in the langchain codebase. It has 6 imports and 0 dependents.

File · Python · LangChainCore · ApiManagement · 6 imports · 1 class

Entity Profile

Dependency Diagram

graph LR
  bdf2b296_59db_964d_8ac2_169dca431ea8["token_buffer.py"]
  feec1ec4_6917_867b_d228_b134d0ff8099["typing"]
  bdf2b296_59db_964d_8ac2_169dca431ea8 --> feec1ec4_6917_867b_d228_b134d0ff8099
  2485b66a_3839_d0b6_ad9c_a4ff40457dc6["langchain_core._api"]
  bdf2b296_59db_964d_8ac2_169dca431ea8 --> 2485b66a_3839_d0b6_ad9c_a4ff40457dc6
  e929cf21_6ab8_6ff3_3765_0d35a099a053["langchain_core.language_models"]
  bdf2b296_59db_964d_8ac2_169dca431ea8 --> e929cf21_6ab8_6ff3_3765_0d35a099a053
  9444498b_8066_55c7_b3a2_1d90c4162a32["langchain_core.messages"]
  bdf2b296_59db_964d_8ac2_169dca431ea8 --> 9444498b_8066_55c7_b3a2_1d90c4162a32
  f85fae70_1011_eaec_151c_4083140ae9e5["typing_extensions"]
  bdf2b296_59db_964d_8ac2_169dca431ea8 --> f85fae70_1011_eaec_151c_4083140ae9e5
  0631e07f_ced9_09bd_71a1_4630e7d30d26["langchain_classic.memory.chat_memory"]
  bdf2b296_59db_964d_8ac2_169dca431ea8 --> 0631e07f_ced9_09bd_71a1_4630e7d30d26
  style bdf2b296_59db_964d_8ac2_169dca431ea8 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

from typing import Any

from langchain_core._api import deprecated
from langchain_core.language_models import BaseLanguageModel
from langchain_core.messages import BaseMessage, get_buffer_string
from typing_extensions import override

from langchain_classic.memory.chat_memory import BaseChatMemory


@deprecated(
    since="0.3.1",
    removal="1.0.0",
    message=(
        "Please see the migration guide at: "
        "https://python.langchain.com/docs/versions/migrating_memory/"
    ),
)
class ConversationTokenBufferMemory(BaseChatMemory):
    """Conversation chat memory with token limit.

    Keeps only the most recent messages in the conversation under the constraint
    that the total number of tokens in the conversation does not exceed a certain limit.
    """

    human_prefix: str = "Human"
    ai_prefix: str = "AI"
    llm: BaseLanguageModel
    memory_key: str = "history"
    max_token_limit: int = 2000

    @property
    def buffer(self) -> Any:
        """String buffer of memory."""
        return self.buffer_as_messages if self.return_messages else self.buffer_as_str

    @property
    def buffer_as_str(self) -> str:
        """Exposes the buffer as a string in case return_messages is False."""
        return get_buffer_string(
            self.chat_memory.messages,
            human_prefix=self.human_prefix,
            ai_prefix=self.ai_prefix,
        )

    @property
    def buffer_as_messages(self) -> list[BaseMessage]:
        """Exposes the buffer as a list of messages in case return_messages is True."""
        return self.chat_memory.messages

    @property
    def memory_variables(self) -> list[str]:
        """Will always return list of memory variables."""
        return [self.memory_key]

    @override
    def load_memory_variables(self, inputs: dict[str, Any]) -> dict[str, Any]:
        """Return history buffer."""
        return {self.memory_key: self.buffer}

    def save_context(self, inputs: dict[str, Any], outputs: dict[str, str]) -> None:
        """Save context from this conversation to buffer. Pruned."""
        super().save_context(inputs, outputs)
        # Prune buffer if it exceeds max token limit
        buffer = self.chat_memory.messages
        curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
        if curr_buffer_length > self.max_token_limit:
            pruned_memory = []
            while curr_buffer_length > self.max_token_limit:
                # Drop the oldest message first; the pruned messages are
                # collected but otherwise discarded.
                pruned_memory.append(buffer.pop(0))
                curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
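The pruning loop in save_context can be sketched standalone, without the langchain dependencies. The token counter below is a hypothetical whitespace splitter standing in for the real llm.get_num_tokens_from_messages, and messages are plain strings rather than BaseMessage objects:

```python
# Minimal sketch of the token-budget pruning performed by save_context.
# count_tokens is a stand-in for llm.get_num_tokens_from_messages (assumption:
# one token per whitespace-separated word), not the real tokenizer.

def count_tokens(messages: list[str]) -> int:
    """Rough token count: whitespace-separated words across all messages."""
    return sum(len(m.split()) for m in messages)

def prune_to_limit(buffer: list[str], max_token_limit: int) -> list[str]:
    """Drop the oldest messages in place until the buffer fits the budget."""
    pruned = []
    while count_tokens(buffer) > max_token_limit:
        # Mirrors buffer.pop(0) in save_context: oldest message goes first.
        pruned.append(buffer.pop(0))
    return pruned

buffer = ["hello there friend", "how are you today", "fine thanks"]
dropped = prune_to_limit(buffer, max_token_limit=6)
print(dropped)  # oldest message was removed
print(buffer)   # remaining messages fit within the limit
```

As in the real class, pruning only ever trims from the front of the buffer, so the most recent turns of the conversation survive.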

Dependencies

  • langchain_classic.memory.chat_memory
  • langchain_core._api
  • langchain_core.language_models
  • langchain_core.messages
  • typing
  • typing_extensions

Frequently Asked Questions

What does token_buffer.py do?
token_buffer.py is a source file in the langchain codebase, written in Python. It belongs to the LangChainCore domain, ApiManagement subdomain.
What does token_buffer.py depend on?
token_buffer.py imports 6 module(s): langchain_classic.memory.chat_memory, langchain_core._api, langchain_core.language_models, langchain_core.messages, typing, typing_extensions.
Where is token_buffer.py in the architecture?
token_buffer.py is located at libs/langchain/langchain_classic/memory/token_buffer.py (domain: LangChainCore, subdomain: ApiManagement, directory: libs/langchain/langchain_classic/memory).