FakeListLLM Class — langchain Architecture
Architecture documentation for the FakeListLLM class in fake.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    FakeListLLM["FakeListLLM"]
    LLM["LLM"]
    fake_py["fake.py"]
    llm_type["_llm_type()"]
    call["_call()"]
    acall["_acall()"]
    identifying_params["_identifying_params()"]
    FakeListLLM -->|extends| LLM
    FakeListLLM -->|defined in| fake_py
    FakeListLLM -->|method| llm_type
    FakeListLLM -->|method| call
    FakeListLLM -->|method| acall
    FakeListLLM -->|method| identifying_params
Source Code
libs/core/langchain_core/language_models/fake.py lines 19–78
class FakeListLLM(LLM):
    """Fake LLM for testing purposes."""

    responses: list[str]
    """List of responses to return in order."""

    # This parameter should be removed from FakeListLLM since
    # it's only used by sub-classes.
    sleep: float | None = None
    """Sleep time in seconds between responses.

    Ignored by FakeListLLM, but used by sub-classes.
    """

    i: int = 0
    """Internally incremented after every model invocation.

    Useful primarily for testing purposes.
    """

    @property
    @override
    def _llm_type(self) -> str:
        """Return type of llm."""
        return "fake-list"

    @override
    def _call(
        self,
        prompt: str,
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> str:
        """Return next response."""
        response = self.responses[self.i]
        if self.i < len(self.responses) - 1:
            self.i += 1
        else:
            self.i = 0
        return response

    @override
    async def _acall(
        self,
        prompt: str,
        stop: list[str] | None = None,
        run_manager: AsyncCallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> str:
        """Return next response."""
        response = self.responses[self.i]
        if self.i < len(self.responses) - 1:
            self.i += 1
        else:
            self.i = 0
        return response

    @property
    @override
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"responses": self.responses}
Extends: LLM
Source: libs/core/langchain_core/language_models/fake.py
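The usage sketch below illustrates the cycling behaviour implemented in _call and _acall: responses are returned in order, and the internal index i wraps back to the start once the list is exhausted. It assumes FakeListLLM is importable from langchain_core.language_models and is called through the standard invoke() method inherited from LLM; adjust the import to match your langchain-core version.

# Usage sketch (import path assumed; see note above).
from langchain_core.language_models import FakeListLLM

llm = FakeListLLM(responses=["first", "second"])

print(llm.invoke("any prompt"))  # -> "first"  (i advances to 1)
print(llm.invoke("any prompt"))  # -> "second" (i wraps back to 0)
print(llm.invoke("any prompt"))  # -> "first" again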
Frequently Asked Questions
What is the FakeListLLM class?
FakeListLLM is a fake LLM for testing purposes in the langchain codebase: it returns canned responses from a fixed list, in order, and is defined in libs/core/langchain_core/language_models/fake.py.
Where is FakeListLLM defined?
FakeListLLM is defined in libs/core/langchain_core/language_models/fake.py at line 19.
What does FakeListLLM extend?
FakeListLLM extends LLM.
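As the in-source comment notes, the sleep field is ignored by FakeListLLM itself and is intended for sub-classes. The sketch below shows a hypothetical sub-class (SlowFakeListLLM is not part of the repository) that honours sleep by pausing before delegating to the parent's _call; it assumes CallbackManagerForLLMRun is importable from langchain_core.callbacks and FakeListLLM from langchain_core.language_models.

import time
from typing import Any

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models import FakeListLLM


# Hypothetical sub-class, for illustration only: it pauses for `sleep`
# seconds before returning the next canned response.
class SlowFakeListLLM(FakeListLLM):
    def _call(
        self,
        prompt: str,
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> str:
        if self.sleep is not None:
            time.sleep(self.sleep)  # simulate model latency
        return super()._call(prompt, stop=stop, run_manager=run_manager, **kwargs)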