RunnableMultiActionAgent Class — langchain Architecture

Architecture documentation for the RunnableMultiActionAgent class in agent.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  RunnableMultiActionAgent["RunnableMultiActionAgent"]
  BaseMultiActionAgent["BaseMultiActionAgent"]
  RunnableMultiActionAgent -->|extends| BaseMultiActionAgent
  agent_py["agent.py"]
  RunnableMultiActionAgent -->|defined in| agent_py
  return_values["return_values()"]
  RunnableMultiActionAgent -->|method| return_values
  input_keys["input_keys()"]
  RunnableMultiActionAgent -->|method| input_keys
  plan["plan()"]
  RunnableMultiActionAgent -->|method| plan
  aplan["aplan()"]
  RunnableMultiActionAgent -->|method| aplan
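
As the diagram shows, RunnableMultiActionAgent is a thin wrapper that delegates all decision-making to the Runnable it holds. The sketch below shows how such an agent could be wired up by hand; it is illustrative only: the import path is inferred from the file location cited below, and the decide function, tool name, and input key are invented for the example.

from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.runnables import RunnableLambda

# Import path assumed from the file location cited in this document.
from langchain_classic.agents.agent import RunnableMultiActionAgent


def decide(inputs: dict) -> list[AgentAction] | AgentFinish:
    # Toy planning logic: act once, then finish as soon as an observation exists.
    if inputs["intermediate_steps"]:
        return AgentFinish(return_values={"output": "done"}, log="finishing")
    return [AgentAction(tool="search", tool_input=inputs["question"], log="searching")]


agent = RunnableMultiActionAgent(
    runnable=RunnableLambda(decide),
    input_keys_arg=["question"],
    return_keys_arg=["output"],
    stream_runnable=False,  # a RunnableLambda yields no token-level stream worth exposing
)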

Source Code

libs/langchain/langchain_classic/agents/agent.py lines 497–607

class RunnableMultiActionAgent(BaseMultiActionAgent):
    """Agent powered by Runnables."""

    runnable: Runnable[dict, list[AgentAction] | AgentFinish]
    """Runnable to call to get agent actions."""
    input_keys_arg: list[str] = []
    return_keys_arg: list[str] = []
    stream_runnable: bool = True
    """Whether to stream from the runnable or not.

    If `True` then underlying LLM is invoked in a streaming fashion to make it possible
        to get access to the individual LLM tokens when using stream_log with the
        `AgentExecutor`. If `False` then LLM is invoked in a non-streaming fashion and
        individual LLM tokens will not be available in stream_log.
    """

    model_config = ConfigDict(
        arbitrary_types_allowed=True,
    )

    @property
    def return_values(self) -> list[str]:
        """Return values of the agent."""
        return self.return_keys_arg

    @property
    def input_keys(self) -> list[str]:
        """Return the input keys.

        Returns:
            List of input keys.
        """
        return self.input_keys_arg

    def plan(
        self,
        intermediate_steps: list[tuple[AgentAction, str]],
        callbacks: Callbacks = None,
        **kwargs: Any,
    ) -> list[AgentAction] | AgentFinish:
        """Based on past history and current inputs, decide what to do.

        Args:
            intermediate_steps: Steps the LLM has taken to date,
                along with the observations.
            callbacks: Callbacks to run.
            **kwargs: User inputs.

        Returns:
            Action specifying what tool to use.
        """
        inputs = {**kwargs, "intermediate_steps": intermediate_steps}
        final_output: Any = None
        if self.stream_runnable:
            # Use streaming to make sure that the underlying LLM is invoked in a
            # streaming
            # fashion to make it possible to get access to the individual LLM tokens
            # when using stream_log with the AgentExecutor.
            # Because the response from the plan is not a generator, we need to
            # accumulate the output into final output and return that.
            for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
                if final_output is None:
                    final_output = chunk
                else:
                    final_output += chunk
        else:
            final_output = self.runnable.invoke(inputs, config={"callbacks": callbacks})

        return final_output

    async def aplan(
        self,
        intermediate_steps: list[tuple[AgentAction, str]],
        callbacks: Callbacks = None,
        **kwargs: Any,
    ) -> list[AgentAction] | AgentFinish:
        """Async based on past history and current inputs, decide what to do.

        Args:
            intermediate_steps: Steps the LLM has taken to date,
                along with observations.
            callbacks: Callbacks to run.
            **kwargs: User inputs.

        Returns:
            Action specifying what tool to use.
        """
        inputs = {**kwargs, "intermediate_steps": intermediate_steps}
        final_output: Any = None
        if self.stream_runnable:
            # Use streaming to make sure that the underlying LLM is invoked in a
            # streaming fashion to make it possible to get access to the individual
            # LLM tokens when using stream_log with the AgentExecutor.
            # Because the response from the plan is not a generator, we need to
            # accumulate the output into final output and return that.
            async for chunk in self.runnable.astream(
                inputs, config={"callbacks": callbacks}
            ):
                if final_output is None:
                    final_output = chunk
                else:
                    final_output += chunk
        else:
            final_output = await self.runnable.ainvoke(
                inputs, config={"callbacks": callbacks}
            )

        return final_output
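
Both plan() and aplan() simply return whatever the runnable produced, so callers branch on the result type. Below is a hedged sketch of driving the planning loop by hand, reusing the toy agent from the earlier sketch; in practice AgentExecutor performs this loop and feeds tool observations back in.

result = agent.plan(intermediate_steps=[], question="capital of France?")
assert isinstance(result, list)  # first pass: one AgentAction requesting the "search" tool

observation = "Paris"  # pretend the tool returned this
result = agent.plan(
    intermediate_steps=[(result[0], observation)],
    question="capital of France?",
)
assert isinstance(result, AgentFinish)  # toy logic finishes once an observation exists
print(result.return_values["output"])  # -> "done"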

Frequently Asked Questions

What is the RunnableMultiActionAgent class?
RunnableMultiActionAgent is an agent implementation in the langchain codebase that is powered by a Runnable: its runnable field is invoked (or streamed) to produce either a list of AgentAction objects or an AgentFinish. It is defined in libs/langchain/langchain_classic/agents/agent.py.
Where is RunnableMultiActionAgent defined?
RunnableMultiActionAgent is defined in libs/langchain/langchain_classic/agents/agent.py at line 497.
What does RunnableMultiActionAgent extend?
RunnableMultiActionAgent extends BaseMultiActionAgent.
