FakeStreamingListLLM Class — langchain Architecture

Architecture documentation for the FakeStreamingListLLM class in fake.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  651773be_345f_2613_8c67_6ae578c21e2d["FakeStreamingListLLM"]
  2056876e_ec90_c175_96a1_8935e6314af6["FakeListLLM"]
  651773be_345f_2613_8c67_6ae578c21e2d -->|extends| 2056876e_ec90_c175_96a1_8935e6314af6
  1ff71dfa_a51d_6036_ad93_e1783176f476["fake.py"]
  651773be_345f_2613_8c67_6ae578c21e2d -->|defined in| 1ff71dfa_a51d_6036_ad93_e1783176f476
  a3142d95_7473_8f20_eeaf_80f80a969491["stream()"]
  651773be_345f_2613_8c67_6ae578c21e2d -->|method| a3142d95_7473_8f20_eeaf_80f80a969491
  cf2e8e88_7368_a8ef_a6f4_6f12a8a48b85["astream()"]
  651773be_345f_2613_8c67_6ae578c21e2d -->|method| cf2e8e88_7368_a8ef_a6f4_6f12a8a48b85

Source Code

libs/core/langchain_core/language_models/fake.py lines 85–137

class FakeStreamingListLLM(FakeListLLM):
    """Fake streaming list LLM for testing purposes.

    An LLM that will return responses from a list in order.

    This model also supports optionally sleeping between successive
    chunks in a streaming implementation.
    """

    error_on_chunk_number: int | None = None
    """If set, will raise an exception on the specified chunk number."""

    @override
    def stream(
        self,
        input: LanguageModelInput,
        config: RunnableConfig | None = None,
        *,
        stop: list[str] | None = None,
        **kwargs: Any,
    ) -> Iterator[str]:
        result = self.invoke(input, config)
        for i_c, c in enumerate(result):
            if self.sleep is not None:
                time.sleep(self.sleep)

            if (
                self.error_on_chunk_number is not None
                and i_c == self.error_on_chunk_number
            ):
                raise FakeListLLMError
            yield c

    @override
    async def astream(
        self,
        input: LanguageModelInput,
        config: RunnableConfig | None = None,
        *,
        stop: list[str] | None = None,
        **kwargs: Any,
    ) -> AsyncIterator[str]:
        result = await self.ainvoke(input, config)
        for i_c, c in enumerate(result):
            if self.sleep is not None:
                await asyncio.sleep(self.sleep)

            if (
                self.error_on_chunk_number is not None
                and i_c == self.error_on_chunk_number
            ):
                raise FakeListLLMError
            yield c
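To illustrate the pattern above, here is a minimal, self-contained sketch of the same behavior: canned responses are returned in order and yielded character by character, with an optional sleep between chunks and an optional error on a given chunk index. The class name `MiniFakeStreamingLLM` and the cycling-through-responses behavior are assumptions for this sketch, not part of the langchain API; the real class inherits its response handling from `FakeListLLM`.

```python
import time
from collections.abc import Iterator


class MiniFakeStreamingLLM:
    """Sketch (assumed names) mirroring FakeStreamingListLLM's pattern:
    serve canned responses in order, stream each one character by
    character, optionally sleeping between chunks and optionally
    raising on a configured chunk number."""

    def __init__(self, responses, sleep=None, error_on_chunk_number=None):
        self.responses = responses
        self.sleep = sleep
        self.error_on_chunk_number = error_on_chunk_number
        self._i = 0  # index of the next canned response

    def invoke(self, _input):
        # Return the next canned response; cycling is an assumption here.
        response = self.responses[self._i % len(self.responses)]
        self._i += 1
        return response

    def stream(self, input) -> Iterator[str]:
        # Same shape as FakeStreamingListLLM.stream(): materialize the
        # full response, then yield it one character at a time.
        result = self.invoke(input)
        for i_c, c in enumerate(result):
            if self.sleep is not None:
                time.sleep(self.sleep)
            if (
                self.error_on_chunk_number is not None
                and i_c == self.error_on_chunk_number
            ):
                raise RuntimeError("fake error on chunk")
            yield c


llm = MiniFakeStreamingLLM(responses=["hello", "world"])
print("".join(llm.stream("any prompt")))  # hello
print("".join(llm.stream("any prompt")))  # world
```

Note that, as in the real class, each "chunk" is a single character because iterating over the invoked string yields characters; `error_on_chunk_number` therefore counts characters, which makes it easy to test a consumer's error handling mid-stream.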

Extends

FakeListLLM

Frequently Asked Questions

What is the FakeStreamingListLLM class?
FakeStreamingListLLM is a fake LLM used for testing in the langchain codebase: it returns canned responses from a list in order and streams each one character by character, optionally sleeping between chunks. It is defined in libs/core/langchain_core/language_models/fake.py.
Where is FakeStreamingListLLM defined?
FakeStreamingListLLM is defined in libs/core/langchain_core/language_models/fake.py at line 85.
What does FakeStreamingListLLM extend?
FakeStreamingListLLM extends FakeListLLM.
