VectorStoreIndexWrapper Class — langchain Architecture
Architecture documentation for the VectorStoreIndexWrapper class in vectorstore.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    e6736e64_8fb0_4b64_bc78_d2a81e4f1b86["VectorStoreIndexWrapper"]
    73d9f5a5_8ee1_7e4e_6487_8a802a7a9676["vectorstore.py"]
    e6736e64_8fb0_4b64_bc78_d2a81e4f1b86 -->|defined in| 73d9f5a5_8ee1_7e4e_6487_8a802a7a9676
    30ae31cb_5d10_21b1_a2f8_87c2a4ec2d31["query()"]
    e6736e64_8fb0_4b64_bc78_d2a81e4f1b86 -->|method| 30ae31cb_5d10_21b1_a2f8_87c2a4ec2d31
    b61f9b71_2850_585d_7a86_c361c38407df["aquery()"]
    e6736e64_8fb0_4b64_bc78_d2a81e4f1b86 -->|method| b61f9b71_2850_585d_7a86_c361c38407df
    d1bd3113_701b_eb74_76a7_ec566a6e9037["query_with_sources()"]
    e6736e64_8fb0_4b64_bc78_d2a81e4f1b86 -->|method| d1bd3113_701b_eb74_76a7_ec566a6e9037
    cfbe0d53_d2b5_52c2_0192_1e8290693921["aquery_with_sources()"]
    e6736e64_8fb0_4b64_bc78_d2a81e4f1b86 -->|method| cfbe0d53_d2b5_52c2_0192_1e8290693921
Relationship Graph
Source Code
libs/langchain/langchain_classic/indexes/vectorstore.py, lines 24–172
class VectorStoreIndexWrapper(BaseModel):
    """Wrapper around a `VectorStore` for easy access."""

    vectorstore: VectorStore

    model_config = ConfigDict(
        arbitrary_types_allowed=True,
        extra="forbid",
    )

    def query(
        self,
        question: str,
        llm: BaseLanguageModel | None = None,
        retriever_kwargs: dict[str, Any] | None = None,
        **kwargs: Any,
    ) -> str:
        """Query the `VectorStore` using the provided LLM.

        Args:
            question: The question or prompt to query.
            llm: The language model to use. Must not be `None`.
            retriever_kwargs: Optional keyword arguments for the retriever.
            **kwargs: Additional keyword arguments forwarded to the chain.

        Returns:
            The result string from the RetrievalQA chain.
        """
        if llm is None:
            msg = (
                "This API has been changed to require an LLM. "
                "Please provide an llm to use for querying the vectorstore.\n"
                "For example,\n"
                "from langchain_openai import OpenAI\n"
                "model = OpenAI(temperature=0)"
            )
            raise NotImplementedError(msg)
        retriever_kwargs = retriever_kwargs or {}
        chain = RetrievalQA.from_chain_type(
            llm,
            retriever=self.vectorstore.as_retriever(**retriever_kwargs),
            **kwargs,
        )
        return chain.invoke({chain.input_key: question})[chain.output_key]

    async def aquery(
        self,
        question: str,
        llm: BaseLanguageModel | None = None,
        retriever_kwargs: dict[str, Any] | None = None,
        **kwargs: Any,
    ) -> str:
        """Asynchronously query the `VectorStore` using the provided LLM.

        Args:
            question: The question or prompt to query.
            llm: The language model to use. Must not be `None`.
            retriever_kwargs: Optional keyword arguments for the retriever.
            **kwargs: Additional keyword arguments forwarded to the chain.

        Returns:
            The asynchronous result string from the RetrievalQA chain.
        """
        if llm is None:
            msg = (
                "This API has been changed to require an LLM. "
                "Please provide an llm to use for querying the vectorstore.\n"
                "For example,\n"
                "from langchain_openai import OpenAI\n"
                "model = OpenAI(temperature=0)"
            )
            raise NotImplementedError(msg)
        retriever_kwargs = retriever_kwargs or {}
        chain = RetrievalQA.from_chain_type(
            llm,
            retriever=self.vectorstore.as_retriever(**retriever_kwargs),
            **kwargs,
        )
        return (await chain.ainvoke({chain.input_key: question}))[chain.output_key]

    def query_with_sources(
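A minimal usage sketch of the query() path above. This is illustrative only: it assumes the langchain_openai package for the LLM and embeddings, uses langchain_core's InMemoryVectorStore as a stand-in for any VectorStore, and infers the import path langchain_classic.indexes.vectorstore from the file location shown above.

from langchain_core.vectorstores import InMemoryVectorStore  # any VectorStore works here
from langchain_openai import OpenAI, OpenAIEmbeddings  # assumed available
from langchain_classic.indexes.vectorstore import VectorStoreIndexWrapper  # import path inferred from the file location

# Build a small vector store and wrap it.
store = InMemoryVectorStore(embedding=OpenAIEmbeddings())
store.add_texts(["LangChain wraps vector stores behind a common VectorStore interface."])
index = VectorStoreIndexWrapper(vectorstore=store)

# query() requires an llm; retriever_kwargs is forwarded to vectorstore.as_retriever().
answer = index.query(
    "What does LangChain wrap?",
    llm=OpenAI(temperature=0),
    retriever_kwargs={"search_kwargs": {"k": 2}},
)
print(answer)

Omitting retriever_kwargs is equivalent to calling as_retriever() with no arguments, and any extra keyword arguments are forwarded to RetrievalQA.from_chain_type.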
Frequently Asked Questions
What is the VectorStoreIndexWrapper class?
VectorStoreIndexWrapper is a Pydantic model (it subclasses BaseModel) in the langchain codebase, defined in libs/langchain/langchain_classic/indexes/vectorstore.py. It wraps a VectorStore and exposes the convenience methods query(), aquery(), query_with_sources(), and aquery_with_sources(); query() and aquery() build a RetrievalQA chain over the wrapped store's retriever and require a caller-supplied LLM.
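As an illustration of the async variant, a hedged sketch (same assumed packages and inferred import path as the synchronous example above):

import asyncio

from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAI, OpenAIEmbeddings
from langchain_classic.indexes.vectorstore import VectorStoreIndexWrapper  # import path inferred

async def main() -> None:
    store = InMemoryVectorStore(embedding=OpenAIEmbeddings())
    store.add_texts(["Vector stores hold embedded documents for similarity search."])
    index = VectorStoreIndexWrapper(vectorstore=store)
    # aquery() builds the same RetrievalQA chain as query() but awaits chain.ainvoke().
    print(await index.aquery("What do vector stores hold?", llm=OpenAI(temperature=0)))

asyncio.run(main())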
Where is VectorStoreIndexWrapper defined?
VectorStoreIndexWrapper is defined in libs/langchain/langchain_classic/indexes/vectorstore.py at line 24.