
GenericFakeChatModel Class — langchain Architecture

Architecture documentation for the GenericFakeChatModel class in fake_chat_models.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  edf81759_05d2_fa5d_d2a5_784f18a911cc["GenericFakeChatModel"]
  d009a608_c505_bd50_7200_0de8a69ba4b7["BaseChatModel"]
  edf81759_05d2_fa5d_d2a5_784f18a911cc -->|extends| d009a608_c505_bd50_7200_0de8a69ba4b7
  b8f1dd91_99fe_8e7b_c82c_986ec0b8f4d4["ChatResult"]
  edf81759_05d2_fa5d_d2a5_784f18a911cc -->|uses| b8f1dd91_99fe_8e7b_c82c_986ec0b8f4d4
  fcfa55b0_4a86_fa31_a156_3c38c76a0a9b["AIMessage"]
  edf81759_05d2_fa5d_d2a5_784f18a911cc -->|uses| fcfa55b0_4a86_fa31_a156_3c38c76a0a9b
  17a9b92d_bb83_78d8_7df7_7200745cc17b["AIMessageChunk"]
  edf81759_05d2_fa5d_d2a5_784f18a911cc -->|uses| 17a9b92d_bb83_78d8_7df7_7200745cc17b
  625e90ff_0acf_2872_ee23_0e50b0ab92ed["fake_chat_models.py"]
  edf81759_05d2_fa5d_d2a5_784f18a911cc -->|defined in| 625e90ff_0acf_2872_ee23_0e50b0ab92ed
  dd98ca33_c54a_493c_c5fa_2526e927541c["_generate()"]
  edf81759_05d2_fa5d_d2a5_784f18a911cc -->|method| dd98ca33_c54a_493c_c5fa_2526e927541c
  408e4440_8b07_4bc1_46e8_3e71de700fb8["_stream()"]
  edf81759_05d2_fa5d_d2a5_784f18a911cc -->|method| 408e4440_8b07_4bc1_46e8_3e71de700fb8
  b42071c0_7df6_4b9a_a964_c9384a29b7b6["_llm_type()"]
  edf81759_05d2_fa5d_d2a5_784f18a911cc -->|method| b42071c0_7df6_4b9a_a964_c9384a29b7b6
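
The methods listed in the diagram follow a simple delegation pattern: `_stream()` calls `_generate()` to produce one full scripted reply, then breaks it into whitespace-preserving chunks. A stdlib-only sketch of that pattern, using illustrative stand-in functions rather than the real langchain API:

```python
import re
from collections.abc import Iterator


def generate(replies: Iterator[str]) -> str:
    """Stand-in for _generate(): return the next scripted reply whole."""
    return next(replies)


def stream(replies: Iterator[str]) -> Iterator[str]:
    """Stand-in for _stream(): delegate to generate(), then yield the
    reply as whitespace-preserving tokens, mirroring the real method."""
    yield from re.split(r"(\s)", generate(replies))


# Streaming a scripted reply yields chunks that re-join losslessly.
chunks = list(stream(iter(["fake streamed reply"])))
assert "".join(chunks) == "fake streamed reply"
```

Generating first and chunking afterward keeps the fake model deterministic, which is exactly what callback and streaming tests need.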


Source Code

libs/core/langchain_core/language_models/fake_chat_models.py lines 227–371

class GenericFakeChatModel(BaseChatModel):
    """Generic fake chat model that can be used to test the chat model interface.

    * Chat model should be usable in both sync and async tests
    * Invokes `on_llm_new_token` to allow for testing of callback related code for new
        tokens.
    * Includes logic to break messages into message chunk to facilitate testing of
        streaming.

    """

    messages: Iterator[AIMessage | str]
    """Get an iterator over messages.

    This can be expanded to accept other types like Callables / dicts / strings
    to make the interface more generic if needed.

    !!! note
        if you want to pass a list, you can use `iter` to convert it to an iterator.

    !!! warning
        Streaming is not implemented yet. We should try to implement it in the future by
        delegating to invoke and then breaking the resulting output into message chunks.

    """

    @override
    def _generate(
        self,
        messages: list[BaseMessage],
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> ChatResult:
        message = next(self.messages)
        message_ = AIMessage(content=message) if isinstance(message, str) else message
        generation = ChatGeneration(message=message_)
        return ChatResult(generations=[generation])

    def _stream(
        self,
        messages: list[BaseMessage],
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        chat_result = self._generate(
            messages, stop=stop, run_manager=run_manager, **kwargs
        )
        if not isinstance(chat_result, ChatResult):
            msg = (
                f"Expected generate to return a ChatResult, "
                f"but got {type(chat_result)} instead."
            )
            raise ValueError(msg)  # noqa: TRY004

        message = chat_result.generations[0].message

        if not isinstance(message, AIMessage):
            msg = (
                f"Expected invoke to return an AIMessage, "
                f"but got {type(message)} instead."
            )
            raise ValueError(msg)  # noqa: TRY004

        content = message.content

        if content:
            # Use a regular expression to split on whitespace with a capture group
            # so that we can preserve the whitespace in the output.
            if not isinstance(content, str):
                msg = "Expected content to be a string."
                raise ValueError(msg)

            content_chunks = cast("list[str]", re.split(r"(\s)", content))

            for idx, token in enumerate(content_chunks):
                chunk = ChatGenerationChunk(
                    message=AIMessageChunk(content=token, id=message.id)
                )
                ...  # excerpt truncated here; the full method continues through line 371
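
The regular-expression trick in `_stream()` is worth noting: splitting on `(\s)` with a capture group keeps each whitespace separator as its own list element, so the streamed chunks concatenate back to the original content exactly. A small self-contained illustration:

```python
import re

content = "hello  world\n"  # two spaces and a newline, all preserved

# The capture group makes re.split() keep the separators in the output.
# Consecutive separators produce empty strings between them, which is
# harmless: joining still reproduces the input byte-for-byte.
content_chunks = re.split(r"(\s)", content)

assert "".join(content_chunks) == content
```

A plain `content.split()` would discard the whitespace, and the re-assembled stream would no longer match the original message.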

Frequently Asked Questions

What is the GenericFakeChatModel class?
GenericFakeChatModel is a class in the langchain codebase, defined in libs/core/langchain_core/language_models/fake_chat_models.py.
Where is GenericFakeChatModel defined?
GenericFakeChatModel is defined in libs/core/langchain_core/language_models/fake_chat_models.py at line 227.
What does GenericFakeChatModel extend?
GenericFakeChatModel extends BaseChatModel. It does not extend ChatResult, AIMessage, or AIMessageChunk; those classes are constructed and returned by its _generate() and _stream() methods.
