
FakeLLM Class — langchain Architecture

Architecture documentation for the FakeLLM class in fake_llm.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  FakeLLM["FakeLLM"]
  LLM["LLM"]
  FakeLLM -->|extends| LLM
  fake_llm_py["fake_llm.py"]
  FakeLLM -->|defined in| fake_llm_py
  check_queries_required["check_queries_required()"]
  FakeLLM -->|method| check_queries_required
  get_num_tokens["get_num_tokens()"]
  FakeLLM -->|method| get_num_tokens
  llm_type["_llm_type()"]
  FakeLLM -->|method| llm_type
  call["_call()"]
  FakeLLM -->|method| call
  identifying_params["_identifying_params()"]
  FakeLLM -->|method| identifying_params
  get_next_response_in_sequence["_get_next_response_in_sequence()"]
  FakeLLM -->|method| get_next_response_in_sequence


Source Code

libs/langchain/tests/unit_tests/llms/fake_llm.py lines 12–61

class FakeLLM(LLM):
    """Fake LLM wrapper for testing purposes."""

    queries: Mapping | None = None
    sequential_responses: bool | None = False
    response_index: int = 0

    @model_validator(mode="before")
    @classmethod
    def check_queries_required(cls, values: dict) -> dict:
        if values.get("sequential_responses") and not values.get("queries"):
            msg = "queries is required when sequential_responses is set to True"
            raise ValueError(msg)
        return values

    def get_num_tokens(self, text: str) -> int:
        """Return number of tokens."""
        return len(text.split())

    @property
    def _llm_type(self) -> str:
        """Return type of llm."""
        return "fake"

    @override
    def _call(
        self,
        prompt: str,
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> str:
        if self.sequential_responses:
            return self._get_next_response_in_sequence
        if self.queries is not None:
            return self.queries[prompt]
        if stop is None:
            return "foo"
        return "bar"

    @property
    def _identifying_params(self) -> dict[str, Any]:
        return {}

    @property
    def _get_next_response_in_sequence(self) -> str:
        queries = cast("Mapping", self.queries)
        response = queries[list(queries.keys())[self.response_index]]
        self.response_index = self.response_index + 1
        return response
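The `_call` method above selects a response in one of three ways: sequentially from `queries` when `sequential_responses` is set, by prompt lookup when `queries` is provided, or via the `"foo"`/`"bar"` fallback depending on `stop`. A minimal standalone sketch of that selection logic, with no langchain dependency (`FakeSelector` is a hypothetical stand-in, not the real class):

```python
class FakeSelector:
    """Standalone sketch of FakeLLM's response-selection logic."""

    def __init__(self, queries=None, sequential_responses=False):
        self.queries = queries
        self.sequential_responses = sequential_responses
        self.response_index = 0

    def call(self, prompt, stop=None):
        # Mirrors FakeLLM._call: sequence mode first, then prompt
        # lookup, then the stop-dependent fallback.
        if self.sequential_responses:
            return self._next_in_sequence()
        if self.queries is not None:
            return self.queries[prompt]
        return "foo" if stop is None else "bar"

    def _next_in_sequence(self):
        # Mirrors _get_next_response_in_sequence: walk the mapping's
        # keys in insertion order, advancing the index on each call.
        queries = self.queries or {}
        key = list(queries)[self.response_index]
        self.response_index += 1
        return queries[key]


seq = FakeSelector(queries={"a": "first", "b": "second"},
                   sequential_responses=True)
print(seq.call("anything"))   # → first
print(seq.call("anything"))   # → second

plain = FakeSelector()
print(plain.call("x"))              # → foo
print(plain.call("x", stop=["\n"])) # → bar
```

Note that in sequence mode the prompt is ignored entirely; only the call count determines the response, which is what makes the class useful for scripted multi-turn tests.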

Extends

LLM

Frequently Asked Questions

What is the FakeLLM class?
FakeLLM is a fake LLM wrapper used for testing purposes in the langchain codebase, defined in libs/langchain/tests/unit_tests/llms/fake_llm.py.
Where is FakeLLM defined?
FakeLLM is defined in libs/langchain/tests/unit_tests/llms/fake_llm.py at line 12.
What does FakeLLM extend?
FakeLLM extends LLM.
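Because FakeLLM extends LLM, it only has to supply the subclass hooks (`_call`, `_llm_type`) while the base class owns the public entry point. A plain-Python sketch of that template-method pattern, with no langchain imports (`BaseLLM` and `EchoFake` here are hypothetical stand-ins for illustration):

```python
from abc import ABC, abstractmethod


class BaseLLM(ABC):
    """Hypothetical stand-in for langchain's LLM base class."""

    def invoke(self, prompt: str) -> str:
        # The public entry point delegates to the subclass hook.
        return self._call(prompt)

    @abstractmethod
    def _call(self, prompt: str) -> str: ...

    @property
    @abstractmethod
    def _llm_type(self) -> str: ...


class EchoFake(BaseLLM):
    """Toy subclass: implements only the two required hooks."""

    def _call(self, prompt: str) -> str:
        return f"echo: {prompt}"

    @property
    def _llm_type(self) -> str:
        return "fake"


print(EchoFake().invoke("hi"))  # → echo: hi
```

This is why FakeLLM stays so small: callbacks, caching, and the rest of the invocation machinery live in the LLM base class, and the fake only overrides the response-producing hook.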
