test_conversation_retrieval.py — langchain Source File
Architecture documentation for test_conversation_retrieval.py, a Python file in the langchain codebase. 5 imports, 0 dependents.
Entity Profile
Dependency Diagram
graph LR
  a5de4f24_ee73_7137_908b_587e50279cd9["test_conversation_retrieval.py"]
  c554676d_b731_47b2_a98f_c1c2d537c0aa["langchain_core.documents"]
  a5de4f24_ee73_7137_908b_587e50279cd9 --> c554676d_b731_47b2_a98f_c1c2d537c0aa
  ba43b74d_3099_7e1c_aac3_cf594720469e["langchain_core.language_models"]
  a5de4f24_ee73_7137_908b_587e50279cd9 --> ba43b74d_3099_7e1c_aac3_cf594720469e
  b23d4989_586a_edf5_6bf1_3b17d1ad42d8["langchain_classic.chains.conversational_retrieval.base"]
  a5de4f24_ee73_7137_908b_587e50279cd9 --> b23d4989_586a_edf5_6bf1_3b17d1ad42d8
  fa7f5088_cc60_fc55_a885_6f490834689c["langchain_classic.memory.buffer"]
  a5de4f24_ee73_7137_908b_587e50279cd9 --> fa7f5088_cc60_fc55_a885_6f490834689c
  58a5d868_7142_bf61_e141_5980e5c9b7d3["tests.unit_tests.retrievers.sequential_retriever"]
  a5de4f24_ee73_7137_908b_587e50279cd9 --> 58a5d868_7142_bf61_e141_5980e5c9b7d3
  style a5de4f24_ee73_7137_908b_587e50279cd9 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
"""Test conversation chain and memory."""
from langchain_core.documents import Document
from langchain_core.language_models import FakeListLLM
from langchain_classic.chains.conversational_retrieval.base import (
ConversationalRetrievalChain,
)
from langchain_classic.memory.buffer import ConversationBufferMemory
from tests.unit_tests.retrievers.sequential_retriever import SequentialRetriever
async def test_simplea() -> None:
fixed_resp = "I don't know"
answer = "I know the answer!"
llm = FakeListLLM(responses=[answer])
retriever = SequentialRetriever(sequential_responses=[[]])
memory = ConversationBufferMemory(
k=1,
output_key="answer",
memory_key="chat_history",
return_messages=True,
)
qa_chain = ConversationalRetrievalChain.from_llm(
llm=llm,
memory=memory,
retriever=retriever,
return_source_documents=True,
rephrase_question=False,
response_if_no_docs_found=fixed_resp,
verbose=True,
)
got = await qa_chain.acall("What is the answer?")
assert got["chat_history"][1].content == fixed_resp
assert got["answer"] == fixed_resp
async def test_fixed_message_response_when_docs_founda() -> None:
fixed_resp = "I don't know"
answer = "I know the answer!"
llm = FakeListLLM(responses=[answer])
retriever = SequentialRetriever(
sequential_responses=[[Document(page_content=answer)]],
)
memory = ConversationBufferMemory(
k=1,
output_key="answer",
memory_key="chat_history",
return_messages=True,
)
qa_chain = ConversationalRetrievalChain.from_llm(
llm=llm,
memory=memory,
retriever=retriever,
return_source_documents=True,
rephrase_question=False,
response_if_no_docs_found=fixed_resp,
verbose=True,
)
got = await qa_chain.acall("What is the answer?")
assert got["chat_history"][1].content == answer
assert got["answer"] == answer
def test_fixed_message_response_when_no_docs_found() -> None:
fixed_resp = "I don't know"
answer = "I know the answer!"
llm = FakeListLLM(responses=[answer])
retriever = SequentialRetriever(sequential_responses=[[]])
memory = ConversationBufferMemory(
k=1,
output_key="answer",
memory_key="chat_history",
return_messages=True,
)
qa_chain = ConversationalRetrievalChain.from_llm(
llm=llm,
memory=memory,
retriever=retriever,
return_source_documents=True,
rephrase_question=False,
response_if_no_docs_found=fixed_resp,
verbose=True,
)
got = qa_chain("What is the answer?")
assert got["chat_history"][1].content == fixed_resp
assert got["answer"] == fixed_resp
def test_fixed_message_response_when_docs_found() -> None:
fixed_resp = "I don't know"
answer = "I know the answer!"
llm = FakeListLLM(responses=[answer])
retriever = SequentialRetriever(
sequential_responses=[[Document(page_content=answer)]],
)
memory = ConversationBufferMemory(
k=1,
output_key="answer",
memory_key="chat_history",
return_messages=True,
)
qa_chain = ConversationalRetrievalChain.from_llm(
llm=llm,
memory=memory,
retriever=retriever,
return_source_documents=True,
rephrase_question=False,
response_if_no_docs_found=fixed_resp,
verbose=True,
)
got = qa_chain("What is the answer?")
assert got["chat_history"][1].content == answer
assert got["answer"] == answer
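The SequentialRetriever helper is imported from tests.unit_tests.retrievers.sequential_retriever and its implementation is not shown above. As a rough illustration only, a minimal pure-Python stand-in consistent with how the tests use it could look like the sketch below. The Doc dataclass, the class body, and the get_relevant_documents method here are assumptions for illustration, not the real helper (the real one subclasses langchain_core's BaseRetriever and returns Document objects):

```python
from dataclasses import dataclass


@dataclass
class Doc:
    """Stand-in for langchain_core.documents.Document."""

    page_content: str


class SequentialRetrieverSketch:
    """Returns each canned response list once, in order; empty after exhaustion."""

    def __init__(self, sequential_responses: list[list[Doc]]) -> None:
        self.sequential_responses = sequential_responses
        self.response_index = 0

    def get_relevant_documents(self, query: str) -> list[Doc]:
        if self.response_index >= len(self.sequential_responses):
            return []
        docs = self.sequential_responses[self.response_index]
        self.response_index += 1
        return docs


# Mirrors the tests: one call finds a document, the next finds nothing.
retriever = SequentialRetrieverSketch(
    sequential_responses=[[Doc(page_content="I know the answer!")], []],
)
first = retriever.get_relevant_documents("What is the answer?")
second = retriever.get_relevant_documents("What is the answer?")
third = retriever.get_relevant_documents("What is the answer?")
```

Passing `sequential_responses=[[]]`, as the no-docs tests do, makes the first retrieval return nothing, which triggers the chain's `response_if_no_docs_found` path.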
Dependencies
- langchain_classic.chains.conversational_retrieval.base
- langchain_classic.memory.buffer
- langchain_core.documents
- langchain_core.language_models
- tests.unit_tests.retrievers.sequential_retriever
Frequently Asked Questions
What does test_conversation_retrieval.py do?
test_conversation_retrieval.py is a source file in the langchain codebase, written in Python. It belongs to the CoreAbstractions domain, RunnableInterface subdomain.
What functions are defined in test_conversation_retrieval.py?
test_conversation_retrieval.py defines 4 function(s): test_fixed_message_response_when_docs_found, test_fixed_message_response_when_docs_founda, test_fixed_message_response_when_no_docs_found, test_simplea.
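All four tests drive the chain with FakeListLLM, which replays a fixed list of canned responses instead of calling a real model. A rough pure-Python stand-in for that behavior is sketched below; the class name and the wrap-around detail are assumptions for illustration, not the real langchain_core implementation:

```python
class FakeListLLMStandIn:
    """Replays canned responses in order, wrapping around when exhausted."""

    def __init__(self, responses: list[str]) -> None:
        self.responses = responses
        self.i = 0

    def invoke(self, prompt: str) -> str:
        # Ignore the prompt entirely and return the next canned response.
        resp = self.responses[self.i % len(self.responses)]
        self.i += 1
        return resp


llm = FakeListLLMStandIn(responses=["I know the answer!"])
out = llm.invoke("What is the answer?")
```

Because the response is fixed, the tests can assert exactly which string ends up in `got["answer"]` depending on whether the retriever returned documents.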
What does test_conversation_retrieval.py depend on?
test_conversation_retrieval.py imports 5 module(s): langchain_classic.chains.conversational_retrieval.base, langchain_classic.memory.buffer, langchain_core.documents, langchain_core.language_models, tests.unit_tests.retrievers.sequential_retriever.
Where is test_conversation_retrieval.py in the architecture?
test_conversation_retrieval.py is located at libs/langchain/tests/unit_tests/chains/test_conversation_retrieval.py (domain: CoreAbstractions, subdomain: RunnableInterface, directory: libs/langchain/tests/unit_tests/chains).