
create_conversational_retrieval_agent() — langchain Function Reference

Architecture documentation for the create_conversational_retrieval_agent() function in openai_functions.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  fa0488bd_6015_5226_ed9c_bf0a84adf2eb["create_conversational_retrieval_agent()"]
  f998064d_3ef9_aecd_21a8_d1bcc1d5137a["openai_functions.py"]
  fa0488bd_6015_5226_ed9c_bf0a84adf2eb -->|defined in| f998064d_3ef9_aecd_21a8_d1bcc1d5137a
  1add157e_9f24_0612_1084_0bc32b06cf56["_get_default_system_message()"]
  fa0488bd_6015_5226_ed9c_bf0a84adf2eb -->|calls| 1add157e_9f24_0612_1084_0bc32b06cf56
  style fa0488bd_6015_5226_ed9c_bf0a84adf2eb fill:#6366f1,stroke:#818cf8,color:#fff


Source Code

libs/langchain/langchain_classic/agents/agent_toolkits/conversational_retrieval/openai_functions.py lines 27–85

def create_conversational_retrieval_agent(
    llm: BaseLanguageModel,
    tools: list[BaseTool],
    remember_intermediate_steps: bool = True,  # noqa: FBT001,FBT002
    memory_key: str = "chat_history",
    system_message: SystemMessage | None = None,
    verbose: bool = False,  # noqa: FBT001,FBT002
    max_token_limit: int = 2000,
    **kwargs: Any,
) -> AgentExecutor:
    """A convenience method for creating a conversational retrieval agent.

    Args:
        llm: The language model to use, should be `ChatOpenAI`
        tools: A list of tools the agent has access to
        remember_intermediate_steps: Whether the agent should remember intermediate
            steps or not. Intermediate steps refer to prior action/observation
            pairs from previous questions. The benefit of remembering these is if
            there is relevant information in there, the agent can use it to answer
            follow up questions. The downside is it will take up more tokens.
        memory_key: The name of the memory key in the prompt.
        system_message: The system message to use. By default, a basic one will
            be used.
        verbose: Whether or not the final AgentExecutor should be verbose or not.
        max_token_limit: The max number of tokens to keep around in memory.
        **kwargs: Additional keyword arguments to pass to the `AgentExecutor`.

    Returns:
        An agent executor initialized appropriately
    """
    if remember_intermediate_steps:
        memory: BaseMemory = AgentTokenBufferMemory(
            memory_key=memory_key,
            llm=llm,
            max_token_limit=max_token_limit,
        )
    else:
        memory = ConversationTokenBufferMemory(
            memory_key=memory_key,
            return_messages=True,
            output_key="output",
            llm=llm,
            max_token_limit=max_token_limit,
        )

    _system_message = system_message or _get_default_system_message()
    prompt = OpenAIFunctionsAgent.create_prompt(
        system_message=_system_message,
        extra_prompt_messages=[MessagesPlaceholder(variable_name=memory_key)],
    )
    agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
    return AgentExecutor(
        agent=agent,
        tools=tools,
        memory=memory,
        verbose=verbose,
        return_intermediate_steps=remember_intermediate_steps,
        **kwargs,
    )


Frequently Asked Questions

What does create_conversational_retrieval_agent() do?
create_conversational_retrieval_agent() is a convenience function that wires a language model, a list of tools, and token-buffered conversation memory into an OpenAIFunctionsAgent wrapped in an AgentExecutor. It is defined in libs/langchain/langchain_classic/agents/agent_toolkits/conversational_retrieval/openai_functions.py.
Where is create_conversational_retrieval_agent() defined?
create_conversational_retrieval_agent() is defined in libs/langchain/langchain_classic/agents/agent_toolkits/conversational_retrieval/openai_functions.py at line 27.
What does create_conversational_retrieval_agent() call?
create_conversational_retrieval_agent() calls one function: _get_default_system_message(), which supplies the system prompt when no system_message argument is provided.
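In the source above, `OpenAIFunctionsAgent.create_prompt` splices the memory's stored messages into the prompt through a `MessagesPlaceholder` keyed by `memory_key`. The mechanism can be sketched in plain Python; this is an illustrative stand-in rather than langchain's implementation, and the `(role, content)` tuple format is an assumption made for brevity.

```python
# Illustrative stand-in for MessagesPlaceholder semantics — not langchain code.
# Messages are modeled as (role, content) tuples for simplicity.

def build_prompt(system_message, variables, memory_key="chat_history"):
    """Assemble the final message list: the system message first, then
    whatever history is stored under `memory_key`, then the new input."""
    messages = [("system", system_message)]
    messages.extend(variables.get(memory_key, []))  # the placeholder slot
    messages.append(("human", variables["input"]))
    return messages


msgs = build_prompt(
    "You are a helpful assistant.",
    {"chat_history": [("human", "hi"), ("ai", "hello!")], "input": "what next?"},
)
```

Because the placeholder is keyed by `memory_key`, the same name must be used when constructing the memory object, which is why the function threads the single `memory_key` parameter into both.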
