openai_functions.py — langchain Source File

Architecture documentation for openai_functions.py, a Python file in the langchain codebase. 10 imports, 0 dependents.

Type: file · Language: Python · Domain: AgentOrchestration · Subdomain: ToolInterface · 10 imports · 2 functions

Entity Profile

Dependency Diagram

graph LR
  b71d7579_0b5f_44f5_df6a_78400012a4e4["openai_functions.py"]
  8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3["typing"]
  b71d7579_0b5f_44f5_df6a_78400012a4e4 --> 8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3
  ba43b74d_3099_7e1c_aac3_cf594720469e["langchain_core.language_models"]
  b71d7579_0b5f_44f5_df6a_78400012a4e4 --> ba43b74d_3099_7e1c_aac3_cf594720469e
  d758344f_537f_649e_f467_b9d7442e86df["langchain_core.messages"]
  b71d7579_0b5f_44f5_df6a_78400012a4e4 --> d758344f_537f_649e_f467_b9d7442e86df
  e45722a2_0136_a972_1f58_7b5987500404["langchain_core.prompts.chat"]
  b71d7579_0b5f_44f5_df6a_78400012a4e4 --> e45722a2_0136_a972_1f58_7b5987500404
  43d88577_548b_2248_b01b_7987bae85dcc["langchain_core.tools"]
  b71d7579_0b5f_44f5_df6a_78400012a4e4 --> 43d88577_548b_2248_b01b_7987bae85dcc
  e160f068_75de_4342_6673_9969b919de85["langchain_classic.agents.agent"]
  b71d7579_0b5f_44f5_df6a_78400012a4e4 --> e160f068_75de_4342_6673_9969b919de85
  2b03adbb_c849_4520_2c3a_086bdb244d3d["langchain_classic.agents.openai_functions_agent.agent_token_buffer_memory"]
  b71d7579_0b5f_44f5_df6a_78400012a4e4 --> 2b03adbb_c849_4520_2c3a_086bdb244d3d
  e7648bbd_3a19_8852_3676_884e5d5c894e["langchain_classic.agents.openai_functions_agent.base"]
  b71d7579_0b5f_44f5_df6a_78400012a4e4 --> e7648bbd_3a19_8852_3676_884e5d5c894e
  6bcbe1b6_8195_eae9_29e5_8e80e91eca64["langchain_classic.base_memory"]
  b71d7579_0b5f_44f5_df6a_78400012a4e4 --> 6bcbe1b6_8195_eae9_29e5_8e80e91eca64
  2b278813_0a37_c5e7_9f85_b24ab7ad09f2["langchain_classic.memory.token_buffer"]
  b71d7579_0b5f_44f5_df6a_78400012a4e4 --> 2b278813_0a37_c5e7_9f85_b24ab7ad09f2
  style b71d7579_0b5f_44f5_df6a_78400012a4e4 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

from typing import Any

from langchain_core.language_models import BaseLanguageModel
from langchain_core.messages import SystemMessage
from langchain_core.prompts.chat import MessagesPlaceholder
from langchain_core.tools import BaseTool

from langchain_classic.agents.agent import AgentExecutor
from langchain_classic.agents.openai_functions_agent.agent_token_buffer_memory import (
    AgentTokenBufferMemory,
)
from langchain_classic.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain_classic.base_memory import BaseMemory
from langchain_classic.memory.token_buffer import ConversationTokenBufferMemory


def _get_default_system_message() -> SystemMessage:
    return SystemMessage(
        content=(
            "Do your best to answer the questions. "
            "Feel free to use any tools available to look up "
            "relevant information, only if necessary"
        ),
    )


def create_conversational_retrieval_agent(
    llm: BaseLanguageModel,
    tools: list[BaseTool],
    remember_intermediate_steps: bool = True,  # noqa: FBT001,FBT002
    memory_key: str = "chat_history",
    system_message: SystemMessage | None = None,
    verbose: bool = False,  # noqa: FBT001,FBT002
    max_token_limit: int = 2000,
    **kwargs: Any,
) -> AgentExecutor:
    """A convenience method for creating a conversational retrieval agent.

    Args:
        llm: The language model to use, should be `ChatOpenAI`
        tools: A list of tools the agent has access to
        remember_intermediate_steps: Whether the agent should remember intermediate
            steps or not. Intermediate steps refer to prior action/observation
            pairs from previous questions. The benefit of remembering these is if
            there is relevant information in there, the agent can use it to answer
            follow up questions. The downside is it will take up more tokens.
        memory_key: The name of the memory key in the prompt.
        system_message: The system message to use. By default, a basic one will
            be used.
        verbose: Whether the final AgentExecutor should be verbose.
        max_token_limit: The max number of tokens to keep around in memory.
        **kwargs: Additional keyword arguments to pass to the `AgentExecutor`.

    Returns:
        An agent executor initialized appropriately
    """
    if remember_intermediate_steps:
        memory: BaseMemory = AgentTokenBufferMemory(
            memory_key=memory_key,
            llm=llm,
            max_token_limit=max_token_limit,
        )
    else:
        memory = ConversationTokenBufferMemory(
            memory_key=memory_key,
            return_messages=True,
            output_key="output",
            llm=llm,
            max_token_limit=max_token_limit,
        )

    _system_message = system_message or _get_default_system_message()
    prompt = OpenAIFunctionsAgent.create_prompt(
        system_message=_system_message,
        extra_prompt_messages=[MessagesPlaceholder(variable_name=memory_key)],
    )
    agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
    return AgentExecutor(
        agent=agent,
        tools=tools,
        memory=memory,
        verbose=verbose,
        return_intermediate_steps=remember_intermediate_steps,
        **kwargs,
    )
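The `max_token_limit` argument caps how much conversation history either memory class retains; once the buffer grows past the limit, the oldest messages are pruned first. A toy, stdlib-only sketch of that pruning idea (it counts words instead of real tokens; the actual memory classes measure token counts through the model):

```python
from collections import deque


def prune_buffer(messages: list[str], max_token_limit: int) -> list[str]:
    """Drop the oldest messages until the total fits under the limit.

    A simplified stand-in for the token-buffer pruning done by
    AgentTokenBufferMemory / ConversationTokenBufferMemory, using
    word counts as a crude proxy for tokens.
    """
    buffer = deque(messages)
    total = sum(len(m.split()) for m in buffer)
    while buffer and total > max_token_limit:
        # Evict from the front: the oldest message goes first.
        total -= len(buffer.popleft().split())
    return list(buffer)


history = [
    "hello there agent",
    "look up order 42 please",
    "it shipped yesterday",
]
print(prune_buffer(history, max_token_limit=8))
# → ['look up order 42 please', 'it shipped yesterday']
```

With a limit of 8 "tokens", the three messages total 11, so the oldest is evicted and the two most recent survive; the real classes apply the same oldest-first policy.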

Dependencies

  • langchain_classic.agents.agent
  • langchain_classic.agents.openai_functions_agent.agent_token_buffer_memory
  • langchain_classic.agents.openai_functions_agent.base
  • langchain_classic.base_memory
  • langchain_classic.memory.token_buffer
  • langchain_core.language_models
  • langchain_core.messages
  • langchain_core.prompts.chat
  • langchain_core.tools
  • typing

Frequently Asked Questions

What does openai_functions.py do?
openai_functions.py is a Python source file in the langchain codebase (AgentOrchestration domain, ToolInterface subdomain). It provides create_conversational_retrieval_agent, a convenience helper that wires an OpenAIFunctionsAgent and a token-buffer memory into an AgentExecutor.
What functions are defined in openai_functions.py?
openai_functions.py defines 2 function(s): _get_default_system_message, create_conversational_retrieval_agent.
What does openai_functions.py depend on?
openai_functions.py imports 10 module(s): langchain_classic.agents.agent, langchain_classic.agents.openai_functions_agent.agent_token_buffer_memory, langchain_classic.agents.openai_functions_agent.base, langchain_classic.base_memory, langchain_classic.memory.token_buffer, langchain_core.language_models, langchain_core.messages, langchain_core.prompts.chat, and 2 more.
Where is openai_functions.py in the architecture?
openai_functions.py is located at libs/langchain/langchain_classic/agents/agent_toolkits/conversational_retrieval/openai_functions.py (domain: AgentOrchestration, subdomain: ToolInterface, directory: libs/langchain/langchain_classic/agents/agent_toolkits/conversational_retrieval).
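Given that location, the helper can be imported and wired up as in the sketch below. This is an illustration, not part of the file: the `langchain-openai` package, the model name, and the `lookup_order` tool are assumptions, and actually invoking the agent requires an `OPENAI_API_KEY`.

```python
def build_agent():
    # Imports deferred into the function so the sketch can be read (and the
    # function defined) without the langchain packages installed.
    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI  # assumed extra package

    from langchain_classic.agents.agent_toolkits.conversational_retrieval.openai_functions import (
        create_conversational_retrieval_agent,
    )

    @tool
    def lookup_order(order_id: str) -> str:
        """Hypothetical tool: report an order's shipping status."""
        return f"Order {order_id}: shipped"

    # remember_intermediate_steps defaults to True, which selects
    # AgentTokenBufferMemory and returns intermediate steps from the executor.
    return create_conversational_retrieval_agent(
        llm=ChatOpenAI(model="gpt-4o-mini"),
        tools=[lookup_order],
        max_token_limit=1000,
        verbose=True,
    )


# executor = build_agent()
# executor.invoke({"input": "Where is order 42?"})
```

The returned AgentExecutor is invoked with an `"input"` key, and the conversation accumulates under the configured `memory_key` (`"chat_history"` by default).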
