
RunnableWithFallbacks Class — langchain Architecture

Architecture documentation for the RunnableWithFallbacks class in fallbacks.py from the langchain codebase.

Entity Profile

Dependency Diagram

```mermaid
graph TD
  RunnableWithFallbacks["RunnableWithFallbacks"]
  fallbacks_py["fallbacks.py"]
  RunnableWithFallbacks -->|defined in| fallbacks_py
  InputType["InputType()"]
  RunnableWithFallbacks -->|method| InputType
  OutputType["OutputType()"]
  RunnableWithFallbacks -->|method| OutputType
  get_input_schema["get_input_schema()"]
  RunnableWithFallbacks -->|method| get_input_schema
  get_output_schema["get_output_schema()"]
  RunnableWithFallbacks -->|method| get_output_schema
  config_specs["config_specs()"]
  RunnableWithFallbacks -->|method| config_specs
  is_lc_serializable["is_lc_serializable()"]
  RunnableWithFallbacks -->|method| is_lc_serializable
  get_lc_namespace["get_lc_namespace()"]
  RunnableWithFallbacks -->|method| get_lc_namespace
  runnables["runnables()"]
  RunnableWithFallbacks -->|method| runnables
  invoke["invoke()"]
  RunnableWithFallbacks -->|method| invoke
  ainvoke["ainvoke()"]
  RunnableWithFallbacks -->|method| ainvoke
  batch["batch()"]
  RunnableWithFallbacks -->|method| batch
  abatch["abatch()"]
  RunnableWithFallbacks -->|method| abatch
  stream["stream()"]
  RunnableWithFallbacks -->|method| stream
```
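
The methods listed above share a single dispatch pattern: run the primary `runnable`, and on a handled exception move on to each fallback in turn, raising the first error only if every candidate fails. The snippet below is a minimal, dependency-free sketch of that loop (the name `try_with_fallbacks` is illustrative, not part of the langchain API); the real `invoke()` additionally wires in callback managers, config propagation, and run tracing.

```python
from collections.abc import Sequence
from typing import Any, Callable


def try_with_fallbacks(
    runnables: Sequence[Callable[[Any], Any]],
    input_: Any,
    exceptions_to_handle: tuple[type[BaseException], ...] = (Exception,),
    exception_key: str | None = None,
) -> Any:
    """Illustrative only: call each runnable in order until one succeeds."""
    first_error: BaseException | None = None
    last_error: BaseException | None = None
    for runnable in runnables:
        try:
            if exception_key is not None and last_error is not None:
                # With exception_key set, the handled exception is folded into
                # the (dict) input so the fallback can see what went wrong.
                input_ = {**input_, exception_key: last_error}
            return runnable(input_)
        except exceptions_to_handle as e:
            # Handled exception: remember it and try the next candidate.
            # Exceptions outside exceptions_to_handle propagate immediately.
            first_error = first_error or e
            last_error = e
    if first_error is None:
        raise ValueError("No runnables were provided.")
    raise first_error
```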


Source Code

libs/core/langchain_core/runnables/fallbacks.py, lines 36–646 (excerpt shown below)

class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
    """`Runnable` that can fallback to other `Runnable` objects if it fails.

    External APIs (e.g., APIs for a language model) may at times experience
    degraded performance or even downtime.

    In these cases, it can be useful to have a fallback `Runnable` that can be
    used in place of the original `Runnable` (e.g., fallback to another LLM provider).

    Fallbacks can be defined at the level of a single `Runnable`, or at the level
    of a chain of `Runnable`s. Fallbacks are tried in order until one succeeds or
    all fail.

    While you can instantiate a `RunnableWithFallbacks` directly, it is usually
    more convenient to use the `with_fallbacks` method on a `Runnable`.

    Example:
        ```python
        from langchain_openai import ChatOpenAI
        from langchain_anthropic import ChatAnthropic

        model = ChatAnthropic(model="claude-3-haiku-20240307").with_fallbacks(
            [ChatOpenAI(model="gpt-3.5-turbo-0125")]
        )
        # Will usually use ChatAnthropic, but fallback to ChatOpenAI
        # if ChatAnthropic fails.
        model.invoke("hello")

        # And you can also use fallbacks at the level of a chain.
        # Here if both LLM providers fail, we'll fallback to a good hardcoded
        # response.

        from langchain_core.prompts import PromptTemplate
        from langchain_core.output_parsers import StrOutputParser
        from langchain_core.runnables import RunnableLambda


        def when_all_is_lost(inputs):
            return (
                "Looks like our LLM providers are down. "
                "Here's a nice 🦜️ emoji for you instead."
            )


        chain_with_fallback = (
            PromptTemplate.from_template("Tell me a joke about {topic}")
            | model
            | StrOutputParser()
        ).with_fallbacks([RunnableLambda(when_all_is_lost)])
        ```
    """

    runnable: Runnable[Input, Output]
    """The `Runnable` to run first."""
    fallbacks: Sequence[Runnable[Input, Output]]
    """A sequence of fallbacks to try."""
    exceptions_to_handle: tuple[type[BaseException], ...] = (Exception,)
    """The exceptions on which fallbacks should be tried.

    Any exception that is not a subclass of these exceptions will be raised immediately.
    """
    exception_key: str | None = None
    """If `string` is specified then handled exceptions will be passed to fallbacks as
    part of the input under the specified key.

    If `None`, exceptions will not be passed to fallbacks.

    If used, the base `Runnable` and its fallbacks must accept a dictionary as input.
    """

    model_config = ConfigDict(
        arbitrary_types_allowed=True,
    )

    @property
    @override
    def InputType(self) -> type[Input]:
        return self.runnable.InputType

    @property
    @override
    def OutputType(self) -> type[Output]:
        return self.runnable.OutputType
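
The `exceptions_to_handle` and `exception_key` fields shown above control which errors trigger a fallback and whether the fallback gets to inspect the error. Below is a small sketch of how they combine with `with_fallbacks`; the `FlakyServiceError` type and the `primary`/`backup` functions are made up for illustration, and only the `with_fallbacks` keyword arguments come from the actual API.

```python
from langchain_core.runnables import RunnableLambda


class FlakyServiceError(Exception):
    """Stands in for a provider or transport failure (illustrative)."""


def primary(inputs: dict) -> str:
    raise FlakyServiceError("primary backend unavailable")


def backup(inputs: dict) -> str:
    # Because exception_key="error" is set, the handled exception raised by
    # the primary runnable arrives in the input dict under that key.
    return f"backup handled: {inputs['error']}"


chain = RunnableLambda(primary).with_fallbacks(
    [RunnableLambda(backup)],
    exceptions_to_handle=(FlakyServiceError,),  # only these trigger a fallback
    exception_key="error",  # hand the exception to fallbacks under this key
)

print(chain.invoke({"topic": "parrots"}))
# -> backup handled: primary backend unavailable
```

Note that once `exception_key` is used, the primary runnable and its fallbacks must accept a dictionary as input, which is why `primary` and `backup` take a `dict` here.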

Frequently Asked Questions

What is the RunnableWithFallbacks class?
RunnableWithFallbacks is a RunnableSerializable subclass in the langchain codebase, defined in libs/core/langchain_core/runnables/fallbacks.py. It wraps a primary Runnable together with a sequence of fallback Runnables and tries each in order when the primary raises a handled exception (a minimal construction sketch follows this FAQ).
Where is RunnableWithFallbacks defined?
RunnableWithFallbacks is defined in libs/core/langchain_core/runnables/fallbacks.py at line 36.
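
As noted in the class docstring, `with_fallbacks` is the usual entry point, but the class can also be constructed directly. A minimal sketch using `RunnableLambda` (the `unreliable`/`reliable` functions are illustrative):

```python
from langchain_core.runnables import RunnableLambda
from langchain_core.runnables.fallbacks import RunnableWithFallbacks


def unreliable(text: str) -> str:
    raise RuntimeError("boom")


def reliable(text: str) -> str:
    return f"fallback says: {text}"


chain = RunnableWithFallbacks(
    runnable=RunnableLambda(unreliable),
    fallbacks=[RunnableLambda(reliable)],
)

print(chain.invoke("hello"))  # -> fallback says: hello
```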
