create_prompt() — langchain Function Reference

Architecture documentation for the create_prompt() function in base.py from the langchain codebase.

Dependency Diagram

graph TD
  3d034d31_079b_a79a_e7e6_64b9cb2b9334["create_prompt()"]
  56062173_377c_1c6c_4d10_62181f6c83f8["ConversationalAgent"]
  3d034d31_079b_a79a_e7e6_64b9cb2b9334 -->|defined in| 56062173_377c_1c6c_4d10_62181f6c83f8
  f5b65376_6bec_6d97_9db6_78e842df66ea["from_llm_and_tools()"]
  f5b65376_6bec_6d97_9db6_78e842df66ea -->|calls| 3d034d31_079b_a79a_e7e6_64b9cb2b9334
  style 3d034d31_079b_a79a_e7e6_64b9cb2b9334 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain/langchain_classic/agents/conversational/base.py lines 75–113

    @classmethod
    def create_prompt(
        cls,
        tools: Sequence[BaseTool],
        prefix: str = PREFIX,
        suffix: str = SUFFIX,
        format_instructions: str = FORMAT_INSTRUCTIONS,
        ai_prefix: str = "AI",
        human_prefix: str = "Human",
        input_variables: list[str] | None = None,
    ) -> PromptTemplate:
        """Create prompt in the style of the zero-shot agent.

        Args:
            tools: List of tools the agent will have access to, used to format the
                prompt.
            prefix: String to put before the list of tools.
            suffix: String to put after the list of tools.
            format_instructions: Instructions on how to use the tools.
            ai_prefix: String to use before AI output.
            human_prefix: String to use before human output.
            input_variables: List of input variables the final prompt will expect.
                Defaults to `["input", "chat_history", "agent_scratchpad"]`.

        Returns:
            A PromptTemplate with the template assembled from the pieces here.
        """
        tool_strings = "\n".join(
            [f"> {tool.name}: {tool.description}" for tool in tools],
        )
        tool_names = ", ".join([tool.name for tool in tools])
        format_instructions = format_instructions.format(
            tool_names=tool_names,
            ai_prefix=ai_prefix,
            human_prefix=human_prefix,
        )
        template = f"{prefix}\n\n{tool_strings}\n\n{format_instructions}\n\n{suffix}"
        if input_variables is None:
            input_variables = ["input", "chat_history", "agent_scratchpad"]
        return PromptTemplate(template=template, input_variables=input_variables)
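
The assembly above can be exercised without the rest of langchain. The sketch below reproduces the same joining and interpolation steps; `Tool`, `PREFIX`, `FORMAT_INSTRUCTIONS`, and `SUFFIX` are simplified stand-ins for langchain's `BaseTool` and prompt constants, not the real values.

    # Standalone sketch of the template assembly performed by create_prompt().
    # Tool, PREFIX, FORMAT_INSTRUCTIONS, and SUFFIX are simplified stand-ins
    # for langchain's BaseTool and the real prompt constants.
    from dataclasses import dataclass

    @dataclass
    class Tool:
        name: str
        description: str

    PREFIX = "Assistant has access to the following tools:"
    FORMAT_INSTRUCTIONS = "Tool names: {tool_names}\nTurns: {human_prefix} / {ai_prefix}"
    SUFFIX = "Begin!\n\n{chat_history}\nQuestion: {input}\n{agent_scratchpad}"

    def assemble_template(tools, prefix=PREFIX, suffix=SUFFIX,
                          format_instructions=FORMAT_INSTRUCTIONS,
                          ai_prefix="AI", human_prefix="Human"):
        # One "> name: description" line per tool, as in the source above.
        tool_strings = "\n".join(f"> {t.name}: {t.description}" for t in tools)
        tool_names = ", ".join(t.name for t in tools)
        # The format instructions are interpolated eagerly; the suffix's
        # placeholders ({input}, {chat_history}, ...) are left for the
        # PromptTemplate to fill at format time.
        format_instructions = format_instructions.format(
            tool_names=tool_names, ai_prefix=ai_prefix, human_prefix=human_prefix
        )
        return f"{prefix}\n\n{tool_strings}\n\n{format_instructions}\n\n{suffix}"

    tools = [Tool("search", "Look things up"), Tool("calculator", "Do math")]
    template = assemble_template(tools)

Note that only `format_instructions` is formatted here; the placeholders in the suffix survive into the final template, which is why they must appear in `input_variables`.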

Subdomains

Frequently Asked Questions

What does create_prompt() do?
create_prompt() is a classmethod on ConversationalAgent that assembles a PromptTemplate for the conversational agent: it joins each tool's name and description into a tool list, interpolates the tool names and speaker prefixes into the format instructions, and wraps the result with the given prefix and suffix.
Where is create_prompt() defined?
create_prompt() is defined in libs/langchain/langchain_classic/agents/conversational/base.py at line 75.
What calls create_prompt()?
create_prompt() is called by one function: from_llm_and_tools().
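
When `input_variables` is omitted, the template is expected to be formatted with `input`, `chat_history`, and `agent_scratchpad`. The sketch below illustrates that contract; `MiniPromptTemplate` is a hypothetical stand-in for langchain's `PromptTemplate`, which performs equivalent `str.format`-style substitution.

    # Sketch of how the default input_variables line up with the template's
    # placeholders. MiniPromptTemplate is a stand-in for langchain's
    # PromptTemplate, not the real class.
    class MiniPromptTemplate:
        def __init__(self, template, input_variables):
            self.template = template
            self.input_variables = input_variables

        def format(self, **kwargs):
            # Refuse to format unless every declared variable is supplied.
            missing = set(self.input_variables) - kwargs.keys()
            if missing:
                raise KeyError(f"missing variables: {missing}")
            return self.template.format(**kwargs)

    prompt = MiniPromptTemplate(
        template="History:\n{chat_history}\nQuestion: {input}\n{agent_scratchpad}",
        input_variables=["input", "chat_history", "agent_scratchpad"],
    )
    text = prompt.format(input="Hi", chat_history="", agent_scratchpad="")

Passing a custom `input_variables` list that omits a placeholder still present in the template would fail at format time, which is why the default matches the placeholders baked into the suffix.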
