invoke() — langchain Function Reference

Architecture documentation for the invoke() method of OpenAIAssistantRunnable, defined in libs/langchain/langchain_classic/agents/openai_assistant/base.py in the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  addd6899_a5cd_0e3e_74c7_bb99653507b2["invoke()"]
  40171661_732e_8178_c8ae_92254ace13fe["OpenAIAssistantRunnable"]
  addd6899_a5cd_0e3e_74c7_bb99653507b2 -->|defined in| 40171661_732e_8178_c8ae_92254ace13fe
  782b6df1_2ffe_3fa8_3268_e14b8296dbcb["_parse_intermediate_steps()"]
  addd6899_a5cd_0e3e_74c7_bb99653507b2 -->|calls| 782b6df1_2ffe_3fa8_3268_e14b8296dbcb
  7e21132d_9efa_50c9_67b3_696161b058a8["_create_thread_and_run()"]
  addd6899_a5cd_0e3e_74c7_bb99653507b2 -->|calls| 7e21132d_9efa_50c9_67b3_696161b058a8
  24d4664f_f871_cae2_c0d0_ab58f161c9c9["_create_run()"]
  addd6899_a5cd_0e3e_74c7_bb99653507b2 -->|calls| 24d4664f_f871_cae2_c0d0_ab58f161c9c9
  2c47f734_2b6a_7a50_60db_4c515cc3d976["_wait_for_run()"]
  addd6899_a5cd_0e3e_74c7_bb99653507b2 -->|calls| 2c47f734_2b6a_7a50_60db_4c515cc3d976
  1ad76bb2_dc8b_7c02_4dc1_b65f4601f9bf["_get_response()"]
  addd6899_a5cd_0e3e_74c7_bb99653507b2 -->|calls| 1ad76bb2_dc8b_7c02_4dc1_b65f4601f9bf
  style addd6899_a5cd_0e3e_74c7_bb99653507b2 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain/langchain_classic/agents/openai_assistant/base.py lines 288–383

    def invoke(
        self,
        input: dict,
        config: RunnableConfig | None = None,
        **kwargs: Any,
    ) -> OutputType:
        """Invoke assistant.

        Args:
            input: Runnable input dict that can have:
                content: User message when starting a new run.
                thread_id: Existing thread to use.
                run_id: Existing run to use. Should only be supplied when providing
                    the tool output for a required action after an initial invocation.
                message_metadata: Metadata to associate with new message.
                thread_metadata: Metadata to associate with new thread. Only relevant
                    when new thread being created.
                instructions: Additional run instructions.
                model: Override Assistant model for this run.
                tools: Override Assistant tools for this run.
                parallel_tool_calls: Allow Assistant to set parallel_tool_calls
                    for this run.
                top_p: Override Assistant top_p for this run.
                temperature: Override Assistant temperature for this run.
                max_completion_tokens: Allow setting max_completion_tokens for this run.
                max_prompt_tokens: Allow setting max_prompt_tokens for this run.
                run_metadata: Metadata to associate with new run.
                attachments: A list of files attached to the message, and the
                    tools they should be added to.
            config: Runnable config.
            **kwargs: Additional arguments.

        Returns:
            If self.as_agent, will return
                Union[List[OpenAIAssistantAction], OpenAIAssistantFinish].
                Otherwise, will return OpenAI types
                Union[List[ThreadMessage], List[RequiredActionFunctionToolCall]].
        """
        config = ensure_config(config)
        callback_manager = CallbackManager.configure(
            inheritable_callbacks=config.get("callbacks"),
            inheritable_tags=config.get("tags"),
            inheritable_metadata=config.get("metadata"),
        )
        run_manager = callback_manager.on_chain_start(
            dumpd(self),
            input,
            name=config.get("run_name") or self.get_name(),
        )
        try:
            # Being run within AgentExecutor and there are tool outputs to submit.
            if self.as_agent and input.get("intermediate_steps"):
                tool_outputs = self._parse_intermediate_steps(
                    input["intermediate_steps"],
                )
                run = self.client.beta.threads.runs.submit_tool_outputs(**tool_outputs)
            # Starting a new thread and a new run.
            elif "thread_id" not in input:
                thread = {
                    "messages": [
                        {
                            "role": "user",
                            "content": input["content"],
                            "metadata": input.get("message_metadata"),
                            "attachments": input.get("attachments"),
                        },
                    ],
                    "metadata": input.get("thread_metadata"),
                }
                run = self._create_thread_and_run(input, thread)
            # Starting a new run in an existing thread.
            elif "run_id" not in input:
                _ = self.client.beta.threads.messages.create(
                    input["thread_id"],
                    content=input["content"],
                    role="user",
                    metadata=input.get("message_metadata"),
                )
                run = self._create_run(input)
            # Submitting tool outputs to an existing run, outside the AgentExecutor
            # framework.
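
For orientation, here is a hedged usage sketch built from the docstring above. The import path is inferred from the file path shown in this section, and the assistant name, instructions, model, and returned-object handling are placeholders rather than values taken from the source.

    # Usage sketch (assumptions: import path, assistant parameters, and the
    # shape of the returned OpenAI message objects).
    from langchain_classic.agents.openai_assistant import OpenAIAssistantRunnable

    # With the default as_agent=False, invoke() returns raw OpenAI objects
    # (thread messages or required tool calls), per the docstring above.
    assistant = OpenAIAssistantRunnable.create_assistant(
        name="math tutor",                      # placeholder
        instructions="Answer questions step by step.",
        tools=[{"type": "code_interpreter"}],
        model="gpt-4o-mini",                    # placeholder model name
    )

    # No "thread_id" in the input dict: a new thread and run are created
    # (the _create_thread_and_run branch in the excerpt above).
    messages = assistant.invoke({"content": "What is 13 * 17?"})

    # Reuse the thread to continue the conversation
    # (the _create_run branch in the excerpt above).
    follow_up = assistant.invoke(
        {"content": "Now divide that by 3.", "thread_id": messages[0].thread_id},
    )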

Frequently Asked Questions

What does invoke() do?
invoke() is a method of the OpenAIAssistantRunnable class in the langchain codebase, defined in libs/langchain/langchain_classic/agents/openai_assistant/base.py. Depending on the keys in the input dict, it submits tool outputs for a run awaiting a required action, starts a new thread and run, or creates a new run on an existing thread, then waits for the run to complete and returns its response.
Where is invoke() defined?
invoke() is defined in libs/langchain/langchain_classic/agents/openai_assistant/base.py at line 288.
What does invoke() call?
invoke() calls five helper methods: _create_run(), _create_thread_and_run(), _get_response(), _parse_intermediate_steps(), and _wait_for_run().
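
The source excerpt above stops before the final branch and the wrap-up steps, so the sketch below condenses the full dispatch order. The helper names come from the dependency graph; the standalone function and its string results are illustrative placeholders, not the real implementation.

    # Condensed sketch of invoke()'s branch order. Helper names match the
    # dependency graph; this function is for illustration only.
    def dispatch(input: dict, as_agent: bool) -> str:
        if as_agent and input.get("intermediate_steps"):
            return "submit tool outputs built by _parse_intermediate_steps()"
        if "thread_id" not in input:
            return "_create_thread_and_run(): new thread and new run"
        if "run_id" not in input:
            return "_create_run(): new run on the existing thread"
        return "submit tool outputs to the existing run (branch truncated above)"

    # Whichever branch runs, invoke() then polls with _wait_for_run() and
    # builds the return value with _get_response().
    print(dispatch({"content": "hi"}, as_agent=False))                      # new thread
    print(dispatch({"content": "hi", "thread_id": "t_1"}, as_agent=False))  # existing thread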
