ConversationVectorStoreTokenBufferMemory Class — langchain Architecture
Architecture documentation for the ConversationVectorStoreTokenBufferMemory class in vectorstore_token_buffer_memory.py from the langchain codebase.
Entity Profile
Dependency Diagram
```mermaid
graph TD
    e472f8c6_6d6c_f2aa_40a7_070d289aca52["ConversationVectorStoreTokenBufferMemory"]
    b1ebcb83_5246_9215_21fb_a4ccb1d14d96["ConversationTokenBufferMemory"]
    e472f8c6_6d6c_f2aa_40a7_070d289aca52 -->|extends| b1ebcb83_5246_9215_21fb_a4ccb1d14d96
    6c9ef707_39a9_42d2_3c5e_58c6b436e9c1["vectorstore_token_buffer_memory.py"]
    e472f8c6_6d6c_f2aa_40a7_070d289aca52 -->|defined in| 6c9ef707_39a9_42d2_3c5e_58c6b436e9c1
    d3c1c381_5375_9a4f_d7ab_20a81f0a528b["memory_retriever()"]
    e472f8c6_6d6c_f2aa_40a7_070d289aca52 -->|method| d3c1c381_5375_9a4f_d7ab_20a81f0a528b
    8554df57_2ba7_8c97_ba7a_904017b0d279["load_memory_variables()"]
    e472f8c6_6d6c_f2aa_40a7_070d289aca52 -->|method| 8554df57_2ba7_8c97_ba7a_904017b0d279
    24d56770_9e7f_07b7_257b_0f6a6cf03e45["save_context()"]
    e472f8c6_6d6c_f2aa_40a7_070d289aca52 -->|method| 24d56770_9e7f_07b7_257b_0f6a6cf03e45
    ebb0e0b8_7ea8_99f2_f088_5681d00fb278["save_remainder()"]
    e472f8c6_6d6c_f2aa_40a7_070d289aca52 -->|method| ebb0e0b8_7ea8_99f2_f088_5681d00fb278
    24d4fdf9_df1a_abe6_fe3d_74a73febd9ca["_pop_and_store_interaction()"]
    e472f8c6_6d6c_f2aa_40a7_070d289aca52 -->|method| 24d4fdf9_df1a_abe6_fe3d_74a73febd9ca
    0abde399_34b3_8788_e081_d83cf5e5ea35["_split_long_ai_text()"]
    e472f8c6_6d6c_f2aa_40a7_070d289aca52 -->|method| 0abde399_34b3_8788_e081_d83cf5e5ea35
```
Source Code
libs/langchain/langchain_classic/memory/vectorstore_token_buffer_memory.py lines 37–183
class ConversationVectorStoreTokenBufferMemory(ConversationTokenBufferMemory):
    """Conversation chat memory with token limit and vectordb backing.

    load_memory_variables() will return a dict with the key "history".
    It contains background information retrieved from the vector store
    plus recent lines of the current conversation.

    To help the LLM understand the part of the conversation stored in the
    vectorstore, each interaction is timestamped and the current date and
    time is also provided in the history. A side effect of this is that the
    LLM will have access to the current date and time.

    Initialization arguments:

    This class accepts all the initialization arguments of
    ConversationTokenBufferMemory, such as `llm`. In addition, it
    accepts the following additional arguments:

        retriever: (required) A VectorStoreRetriever object to use
            as the vector backing store

        split_chunk_size: (optional, 1000) Token chunk split size
            for long messages generated by the AI

        previous_history_template: (optional) Template used to format
            the contents of the prompt history

    Example using ChromaDB:

    ```python
    from langchain_classic.memory.token_buffer_vectorstore_memory import (
        ConversationVectorStoreTokenBufferMemory,
    )
    from langchain_chroma import Chroma
    from langchain_community.embeddings import HuggingFaceInstructEmbeddings
    from langchain_openai import OpenAI

    embedder = HuggingFaceInstructEmbeddings(
        query_instruction="Represent the query for retrieval: "
    )
    chroma = Chroma(
        collection_name="demo",
        embedding_function=embedder,
        collection_metadata={"hnsw:space": "cosine"},
    )

    retriever = chroma.as_retriever(
        search_type="similarity_score_threshold",
        search_kwargs={
            "k": 5,
            "score_threshold": 0.75,
        },
    )

    conversation_memory = ConversationVectorStoreTokenBufferMemory(
        return_messages=True,
        llm=OpenAI(),
        retriever=retriever,
        max_token_limit=1000,
    )

    conversation_memory.save_context({"Human": "Hi there"}, {"AI": "Nice to meet you!"})
    conversation_memory.save_context(
        {"Human": "Nice day isn't it?"}, {"AI": "I love Wednesdays."}
    )
    conversation_memory.load_memory_variables({"input": "What time is it?"})
    ```
    """

    retriever: VectorStoreRetriever = Field(exclude=True)
    memory_key: str = "history"
    previous_history_template: str = DEFAULT_HISTORY_TEMPLATE
    split_chunk_size: int = 1000

    _memory_retriever: VectorStoreRetrieverMemory | None = PrivateAttr(default=None)
    _timestamps: list[datetime] = PrivateAttr(default_factory=list)

    @property
    def memory_retriever(self) -> VectorStoreRetrieverMemory:
        """Return a memory retriever from the passed retriever object."""
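The core behavior the class layers on top of ConversationTokenBufferMemory can be pictured as a bounded buffer of recent turns that spills the oldest interaction into a searchable long-term store once a token budget is exceeded. The following is a minimal illustrative sketch of that pattern, not the langchain implementation: the names `SpillBuffer` and `fake_token_count` are hypothetical, and real token counting would come from the configured LLM.

```python
from datetime import datetime, timezone


def fake_token_count(text: str) -> int:
    # Stand-in for an LLM tokenizer: approximate tokens by word count.
    return len(text.split())


class SpillBuffer:
    """Hypothetical sketch of a token-bounded buffer with long-term spill."""

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.buffer: list[tuple[str, str]] = []  # recent (human, ai) turns
        self.long_term: list[str] = []           # timestamped evicted turns

    def save_context(self, human: str, ai: str) -> None:
        self.buffer.append((human, ai))
        # While over budget, evict the oldest interaction into long-term
        # storage, timestamped so a later prompt can situate it in time.
        while self._tokens() > self.max_tokens and self.buffer:
            h, a = self.buffer.pop(0)
            stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
            self.long_term.append(f"[{stamp}] Human: {h} AI: {a}")

    def _tokens(self) -> int:
        return sum(
            fake_token_count(h) + fake_token_count(a) for h, a in self.buffer
        )


buf = SpillBuffer(max_tokens=8)
buf.save_context("Hi there", "Nice to meet you!")
buf.save_context("Nice day isn't it?", "I love Wednesdays.")
print(len(buf.buffer), len(buf.long_term))  # prints "1 1"
```

In the real class, the spilled interactions go into the `retriever`'s vector store (via `_pop_and_store_interaction()`), and `load_memory_variables()` later merges semantically relevant spilled turns back into the "history" value alongside the still-buffered recent turns.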
Frequently Asked Questions
What is the ConversationVectorStoreTokenBufferMemory class?
ConversationVectorStoreTokenBufferMemory is a conversation memory class in the langchain codebase that combines a token-limited buffer of recent turns with a vector store backing for older history. It is defined in libs/langchain/langchain_classic/memory/vectorstore_token_buffer_memory.py.
Where is ConversationVectorStoreTokenBufferMemory defined?
ConversationVectorStoreTokenBufferMemory is defined in libs/langchain/langchain_classic/memory/vectorstore_token_buffer_memory.py at line 37.
What does ConversationVectorStoreTokenBufferMemory extend?
ConversationVectorStoreTokenBufferMemory extends ConversationTokenBufferMemory.