GenericFakeChatModel Class — langchain Architecture

Architecture documentation for the GenericFakeChatModel class in fake_chat_model.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  a45c6745_da49_cdd1_be34_45a357868be5["GenericFakeChatModel"]
  48aa29b8_65e7_522f_a445_a441eeb6baff["BaseChatModel"]
  a45c6745_da49_cdd1_be34_45a357868be5 -->|extends| 48aa29b8_65e7_522f_a445_a441eeb6baff
  653774ed_85b9_3ccf_5709_df3be5253604["ChatResult"]
  a45c6745_da49_cdd1_be34_45a357868be5 -->|returns| 653774ed_85b9_3ccf_5709_df3be5253604
  de5a7878_b3fe_95d7_2575_7f534546dc1e["AIMessage"]
  a45c6745_da49_cdd1_be34_45a357868be5 -->|uses| de5a7878_b3fe_95d7_2575_7f534546dc1e
  0998183a_ee20_cc02_d37b_948998ae74b7["AIMessageChunk"]
  a45c6745_da49_cdd1_be34_45a357868be5 -->|uses| 0998183a_ee20_cc02_d37b_948998ae74b7
  77857b76_cfce_ac63_aef3_e6eb8047bdc2["fake_chat_model.py"]
  a45c6745_da49_cdd1_be34_45a357868be5 -->|defined in| 77857b76_cfce_ac63_aef3_e6eb8047bdc2
  470a7c50_258d_8644_ae1c_c86fd6540442["_generate()"]
  a45c6745_da49_cdd1_be34_45a357868be5 -->|method| 470a7c50_258d_8644_ae1c_c86fd6540442
  bcc2cfdb_efbc_bfa5_edf8_2cba8c4c2bcf["_stream()"]
  a45c6745_da49_cdd1_be34_45a357868be5 -->|method| bcc2cfdb_efbc_bfa5_edf8_2cba8c4c2bcf
  26e3a369_7a57_42b1_3346_65f11015ffc2["_astream()"]
  a45c6745_da49_cdd1_be34_45a357868be5 -->|method| 26e3a369_7a57_42b1_3346_65f11015ffc2
  a5b72bfa_eca6_3cf4_59a7_f4192b381a86["_llm_type()"]
  a45c6745_da49_cdd1_be34_45a357868be5 -->|method| a5b72bfa_eca6_3cf4_59a7_f4192b381a86

Source Code

libs/langchain/tests/unit_tests/llms/fake_chat_model.py, lines 57–222 (excerpt)

class GenericFakeChatModel(BaseChatModel):
    """A generic fake chat model that can be used to test the chat model interface.

    * Chat model should be usable in both sync and async tests
    * Invokes `on_llm_new_token` to allow for testing of callback related code for new
        tokens.
    * Includes logic to break messages into message chunks to facilitate testing of
        streaming.
    """

    messages: Iterator[AIMessage]
    """Get an iterator over messages.

    This can be expanded to accept other types like `Callables` / dicts / strings
    to make the interface more generic if needed.

    !!! note
        If you want to pass a list, you can use `iter` to convert it to an iterator.

    !!! warning
        Streaming is not implemented yet. We should try to implement it in the future by
        delegating to invoke and then breaking the resulting output into message chunks.

    """

    @override
    def _generate(
        self,
        messages: list[BaseMessage],
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> ChatResult:
        """Top Level call."""
        message = next(self.messages)
        generation = ChatGeneration(message=message)
        return ChatResult(generations=[generation])

    def _stream(
        self,
        messages: list[BaseMessage],
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        """Stream the output of the model."""
        chat_result = self._generate(
            messages,
            stop=stop,
            run_manager=run_manager,
            **kwargs,
        )
        if not isinstance(chat_result, ChatResult):
            msg = (  # type: ignore[unreachable]
                f"Expected generate to return a ChatResult, "
                f"but got {type(chat_result)} instead."
            )
            raise TypeError(msg)

        message = chat_result.generations[0].message

        if not isinstance(message, AIMessage):
            msg = (
                f"Expected invoke to return an AIMessage, "
                f"but got {type(message)} instead."
            )
            raise TypeError(msg)

        content = message.content

        if content:
            # Use a regular expression to split on whitespace with a capture group
            # so that we can preserve the whitespace in the output.
            assert isinstance(content, str)
            content_chunks = cast("list[str]", re.split(r"(\s)", content))

            for idx, token in enumerate(content_chunks):
                chunk = ChatGenerationChunk(
                    message=AIMessageChunk(id=message.id, content=token),
                )
                if run_manager:
                    run_manager.on_llm_new_token(token, chunk=chunk)
                yield chunk
Frequently Asked Questions

What is the GenericFakeChatModel class?
GenericFakeChatModel is a test helper in the langchain codebase, defined in libs/langchain/tests/unit_tests/llms/fake_chat_model.py. It replays a scripted iterator of AIMessage objects so that the chat model interface (sync, async, and streaming paths) can be exercised without calling a real LLM.
Where is GenericFakeChatModel defined?
GenericFakeChatModel is defined in libs/langchain/tests/unit_tests/llms/fake_chat_model.py at line 57.
What does GenericFakeChatModel extend?
GenericFakeChatModel extends BaseChatModel. It uses ChatResult, AIMessage, and AIMessageChunk as return and message types but does not inherit from them.
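The core consumption pattern is that the `messages` field is an `Iterator`, so a plain list must be wrapped with `iter()`, and each `_generate` call consumes exactly one scripted reply via `next()`. The sketch below mimics that pattern without depending on langchain; `FakeModel` and its `generate` method are hypothetical stand-ins, not the real API:

```python
from collections.abc import Iterator
from dataclasses import dataclass


@dataclass
class FakeModel:
    """Hypothetical stand-in illustrating GenericFakeChatModel's pattern."""

    # Declared as an Iterator, so callers wrap lists with iter().
    messages: Iterator[str]

    def generate(self) -> str:
        # Mirrors _generate: each call consumes the next scripted reply.
        return next(self.messages)


model = FakeModel(messages=iter(["hello", "goodbye"]))
print(model.generate())  # "hello"
print(model.generate())  # "goodbye"
```

Because the iterator is stateful, each test should construct a fresh instance; once the scripted replies are exhausted, a further call raises `StopIteration`.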
