
create_prompt() — langchain Function Reference

Architecture documentation for the create_prompt() classmethod defined in base.py of the langchain codebase.

Dependency Diagram

graph TD
  a6206bc9_89e6_ac37_3164_590e3974b4f6["create_prompt()"]
  37dfdb3a_fc75_8e43_a082_e000915175c2["ConversationalChatAgent"]
  a6206bc9_89e6_ac37_3164_590e3974b4f6 -->|defined in| 37dfdb3a_fc75_8e43_a082_e000915175c2
  792d39b0_6cc6_56c1_0b6c_cc11d6f11d45["from_llm_and_tools()"]
  792d39b0_6cc6_56c1_0b6c_cc11d6f11d45 -->|calls| a6206bc9_89e6_ac37_3164_590e3974b4f6
  5e0d227f_9765_818c_3c02_743fc1bc0c86["_get_default_output_parser()"]
  a6206bc9_89e6_ac37_3164_590e3974b4f6 -->|calls| 5e0d227f_9765_818c_3c02_743fc1bc0c86
  style a6206bc9_89e6_ac37_3164_590e3974b4f6 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain/langchain_classic/agents/conversational_chat/base.py lines 78–118

    def create_prompt(
        cls,
        tools: Sequence[BaseTool],
        system_message: str = PREFIX,
        human_message: str = SUFFIX,
        input_variables: list[str] | None = None,
        output_parser: BaseOutputParser | None = None,
    ) -> BasePromptTemplate:
        """Create a prompt for the agent.

        Args:
            tools: The tools to use.
            system_message: The `SystemMessage` to use.
            human_message: The `HumanMessage` to use.
            input_variables: The input variables to use.
            output_parser: The output parser to use.

        Returns:
            A `PromptTemplate`.
        """
        tool_strings = "\n".join(
            [f"> {tool.name}: {tool.description}" for tool in tools],
        )
        tool_names = ", ".join([tool.name for tool in tools])
        _output_parser = output_parser or cls._get_default_output_parser()
        format_instructions = human_message.format(
            format_instructions=_output_parser.get_format_instructions(),
        )
        final_prompt = format_instructions.format(
            tool_names=tool_names,
            tools=tool_strings,
        )
        if input_variables is None:
            input_variables = ["input", "chat_history", "agent_scratchpad"]
        messages = [
            SystemMessagePromptTemplate.from_template(system_message),
            MessagesPlaceholder(variable_name="chat_history"),
            HumanMessagePromptTemplate.from_template(final_prompt),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]
        return ChatPromptTemplate(input_variables=input_variables, messages=messages)
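The heart of this method is a two-stage string-formatting pass: the output parser's format instructions are first spliced into the human message, and the placeholders that this exposes (`{tool_names}`, `{tools}`) are filled in a second pass. The sketch below illustrates that mechanism without LangChain; the template strings are hypothetical stand-ins for the real SUFFIX constant and parser instructions, not the library's actual values.

```python
# Minimal sketch of create_prompt()'s two-stage formatting.
# The templates here are hypothetical stand-ins, not langchain's real
# SUFFIX or parser output.
from dataclasses import dataclass


@dataclass
class Tool:
    name: str
    description: str


# Render the tool list exactly as create_prompt() does.
tools = [Tool("search", "Look things up"), Tool("calc", "Do math")]
tool_strings = "\n".join(f"> {t.name}: {t.description}" for t in tools)
tool_names = ", ".join(t.name for t in tools)

# Stand-ins for the human message (SUFFIX) and the parser's instructions.
human_message = "TOOLS\n-----\n{format_instructions}"
parser_instructions = "Respond with one of: {tool_names}\n\n{tools}"

# Stage 1: splice the parser's instructions into the human message.
format_instructions = human_message.format(format_instructions=parser_instructions)

# Stage 2: fill the tool placeholders that stage 1 exposed.
final_prompt = format_instructions.format(tool_names=tool_names, tools=tool_strings)

print(final_prompt)
```

Note that stage 1 must run first: the `{tool_names}` and `{tools}` slots live inside the parser's instructions, so they only become part of the template after the splice.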

Frequently Asked Questions

What does create_prompt() do?
create_prompt() is a classmethod of ConversationalChatAgent, defined in libs/langchain/langchain_classic/agents/conversational_chat/base.py. It builds a ChatPromptTemplate for the agent: it renders each tool's name and description into the output parser's format instructions, then assembles the system message, a chat-history placeholder, the formatted human message, and an agent-scratchpad placeholder into the final prompt.
Where is create_prompt() defined?
create_prompt() is defined in libs/langchain/langchain_classic/agents/conversational_chat/base.py at line 78.
What does create_prompt() call?
create_prompt() calls one function: _get_default_output_parser(), used as a fallback when no output_parser argument is supplied.
What calls create_prompt()?
create_prompt() is called by one function: from_llm_and_tools(), which delegates prompt construction to it when building the agent.
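The call chain described in the FAQ (from_llm_and_tools() → create_prompt() → _get_default_output_parser()) can be mirrored in a stripped-down sketch. Every class and string below is an illustrative stand-in, not the real LangChain API:

```python
# Stripped-down mirror of the call chain: from_llm_and_tools() calls
# create_prompt(), which falls back to _get_default_output_parser() when
# no parser is supplied. All names here are illustrative stand-ins.
class StubOutputParser:
    def get_format_instructions(self) -> str:
        return "Reply in JSON."


class StubAgent:
    @classmethod
    def _get_default_output_parser(cls) -> StubOutputParser:
        return StubOutputParser()

    @classmethod
    def create_prompt(cls, tools: list[str], output_parser=None) -> str:
        # Same fallback pattern as the real method:
        # output_parser or cls._get_default_output_parser()
        parser = output_parser or cls._get_default_output_parser()
        return f"{parser.get_format_instructions()} Tools: {', '.join(tools)}"

    @classmethod
    def from_llm_and_tools(cls, tools: list[str]) -> str:
        # Delegates prompt construction, as the FAQ describes.
        return cls.create_prompt(tools)


print(StubAgent.from_llm_and_tools(["search", "calc"]))
```

The `output_parser or cls._get_default_output_parser()` fallback is the only place create_prompt() depends on another function, which is why the dependency diagram shows exactly one outgoing "calls" edge.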
