FakeListChatModel Class — langchain Architecture
Architecture documentation for the FakeListChatModel class in fake_chat_models.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
  f42d1c33_e2a0_9925_258c_0236630deeb3["FakeListChatModel"]
  a3ea7a6e_c0f6_2e2b_7c6a_1f7b9fdaa248["SimpleChatModel"]
  f42d1c33_e2a0_9925_258c_0236630deeb3 -->|extends| a3ea7a6e_c0f6_2e2b_7c6a_1f7b9fdaa248
  625e90ff_0acf_2872_ee23_0e50b0ab92ed["fake_chat_models.py"]
  f42d1c33_e2a0_9925_258c_0236630deeb3 -->|defined in| 625e90ff_0acf_2872_ee23_0e50b0ab92ed
  11abb20f_7441_bb37_b96f_e47b8466c4de["_llm_type()"]
  f42d1c33_e2a0_9925_258c_0236630deeb3 -->|method| 11abb20f_7441_bb37_b96f_e47b8466c4de
  acd7b95a_49e2_fb50_75fc_8606e2647450["_call()"]
  f42d1c33_e2a0_9925_258c_0236630deeb3 -->|method| acd7b95a_49e2_fb50_75fc_8606e2647450
  a4b33091_6860_de3b_7b8f_ff4137047186["_stream()"]
  f42d1c33_e2a0_9925_258c_0236630deeb3 -->|method| a4b33091_6860_de3b_7b8f_ff4137047186
  8a21d4c4_85a6_a7e3_967a_3d843aff94e9["_astream()"]
  f42d1c33_e2a0_9925_258c_0236630deeb3 -->|method| 8a21d4c4_85a6_a7e3_967a_3d843aff94e9
  3a07d319_1ffc_396c_a343_0be53fcb44eb["_identifying_params()"]
  f42d1c33_e2a0_9925_258c_0236630deeb3 -->|method| 3a07d319_1ffc_396c_a343_0be53fcb44eb
  48879242_7ff2_3b85_b326_bd62a8023d03["batch()"]
  f42d1c33_e2a0_9925_258c_0236630deeb3 -->|method| 48879242_7ff2_3b85_b326_bd62a8023d03
  bddf6088_1039_417c_99e4_beb67104deb1["abatch()"]
  f42d1c33_e2a0_9925_258c_0236630deeb3 -->|method| bddf6088_1039_417c_99e4_beb67104deb1
Source Code
libs/core/langchain_core/language_models/fake_chat_models.py lines 59–189
class FakeListChatModel(SimpleChatModel):
    """Fake chat model for testing purposes."""

    responses: list[str]
    """List of responses to **cycle** through in order."""
    sleep: float | None = None
    i: int = 0
    """Internally incremented after every model invocation."""
    error_on_chunk_number: int | None = None
    """If set, raise an error on the specified chunk number during streaming."""

    @property
    @override
    def _llm_type(self) -> str:
        return "fake-list-chat-model"

    @override
    def _call(
        self,
        *args: Any,
        **kwargs: Any,
    ) -> str:
        """Return the next response in the list.

        Cycle back to the start if at the end.
        """
        if self.sleep is not None:
            time.sleep(self.sleep)
        response = self.responses[self.i]
        if self.i < len(self.responses) - 1:
            self.i += 1
        else:
            self.i = 0
        return response

    @override
    def _stream(
        self,
        messages: list[BaseMessage],
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        response = self.responses[self.i]
        if self.i < len(self.responses) - 1:
            self.i += 1
        else:
            self.i = 0
        for i_c, c in enumerate(response):
            if self.sleep is not None:
                time.sleep(self.sleep)
            if (
                self.error_on_chunk_number is not None
                and i_c == self.error_on_chunk_number
            ):
                raise FakeListChatModelError
            chunk_position: Literal["last"] | None = (
                "last" if i_c == len(response) - 1 else None
            )
            yield ChatGenerationChunk(
                message=AIMessageChunk(content=c, chunk_position=chunk_position)
            )
    @override
    async def _astream(
        self,
        messages: list[BaseMessage],
        stop: list[str] | None = None,
        run_manager: AsyncCallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> AsyncIterator[ChatGenerationChunk]:
        response = self.responses[self.i]
        if self.i < len(self.responses) - 1:
            self.i += 1
        else:
            self.i = 0
        for i_c, c in enumerate(response):
            if self.sleep is not None:
                await asyncio.sleep(self.sleep)
            # The excerpt was cut off here; the rest of the loop body is
            # restored to mirror the synchronous _stream implementation above.
            if (
                self.error_on_chunk_number is not None
                and i_c == self.error_on_chunk_number
            ):
                raise FakeListChatModelError
            chunk_position: Literal["last"] | None = (
                "last" if i_c == len(response) - 1 else None
            )
            yield ChatGenerationChunk(
                message=AIMessageChunk(content=c, chunk_position=chunk_position)
            )

    # … remaining methods of the excerpt (_identifying_params, batch, abatch) omitted.
Frequently Asked Questions
What is the FakeListChatModel class?
FakeListChatModel is a fake chat model used for testing. It cycles through a fixed list of string responses and supports both synchronous and asynchronous streaming. It is defined in libs/core/langchain_core/language_models/fake_chat_models.py in the langchain codebase.
Where is FakeListChatModel defined?
FakeListChatModel is defined in libs/core/langchain_core/language_models/fake_chat_models.py at line 59.
What does FakeListChatModel extend?
FakeListChatModel extends SimpleChatModel, overriding `_call`, `_stream`, and `_astream` to return canned responses instead of calling a real model.
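The shape of that inheritance can be sketched in miniature: the base class owns the public entry point and delegates to a low-level hook that the subclass fills in. The classes below are illustrative stand-ins, not the real langchain base classes:

```python
# Illustrative sketch of the extends relationship: the base class supplies
# the public invoke() entry point, and the subclass implements only _call().
class SimpleChatModelSketch:
    def invoke(self, text: str) -> str:
        return self._call(text)

    def _call(self, text: str) -> str:
        raise NotImplementedError


class FakeListSketch(SimpleChatModelSketch):
    def __init__(self, responses: list[str]) -> None:
        self.responses = responses
        self.i = 0

    def _call(self, text: str) -> str:
        response = self.responses[self.i]
        # Equivalent to the source's if/else wrap-around, written as a modulo.
        self.i = (self.i + 1) % len(self.responses)
        return response
```

This division of labor is why the real class only needs to override `_call` and the streaming hooks: everything else (callbacks, batching, message handling) comes from the `SimpleChatModel` machinery.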