aplan() — langchain Function Reference
Architecture documentation for the aplan() function in base.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
  b6e57bb0_10c3_3ee9_0c58_8dc9d4b47668["aplan()"]
  b444f628_93a3_fafc_0827_7b9ca1eef67c["OpenAIFunctionsAgent"]
  b6e57bb0_10c3_3ee9_0c58_8dc9d4b47668 -->|defined in| b444f628_93a3_fafc_0827_7b9ca1eef67c
  style b6e57bb0_10c3_3ee9_0c58_8dc9d4b47668 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain/langchain_classic/agents/openai_functions_agent/base.py lines 137–168
async def aplan(
    self,
    intermediate_steps: list[tuple[AgentAction, str]],
    callbacks: Callbacks = None,
    **kwargs: Any,
) -> AgentAction | AgentFinish:
    """Async given input, decide what to do.

    Args:
        intermediate_steps: Steps the LLM has taken to date,
            along with observations.
        callbacks: Callbacks to use.
        **kwargs: User inputs.

    Returns:
        Action specifying what tool to use.
        If the agent is finished, returns an AgentFinish.
        If the agent is not finished, returns an AgentAction.
    """
    agent_scratchpad = format_to_openai_function_messages(intermediate_steps)
    selected_inputs = {
        k: kwargs[k] for k in self.prompt.input_variables if k != "agent_scratchpad"
    }
    full_inputs = dict(**selected_inputs, agent_scratchpad=agent_scratchpad)
    prompt = self.prompt.format_prompt(**full_inputs)
    messages = prompt.to_messages()
    predicted_message = await self.llm.ainvoke(
        messages,
        functions=self.functions,
        callbacks=callbacks,
    )
    return self.output_parser.parse_ai_message(predicted_message)
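The input-assembly step above (select only the prompt's declared input variables from `**kwargs`, then inject the formatted scratchpad) can be sketched without any langchain dependencies. The helper name `build_full_inputs` below is hypothetical, used only to illustrate the pattern:

```python
def build_full_inputs(prompt_input_variables, kwargs, agent_scratchpad):
    """Mimic aplan()'s input selection: keep only the keys the prompt
    declares, then inject the scratchpad under its reserved name."""
    selected = {
        k: kwargs[k]
        for k in prompt_input_variables
        if k != "agent_scratchpad"
    }
    return {**selected, "agent_scratchpad": agent_scratchpad}


full = build_full_inputs(
    ["input", "agent_scratchpad"],
    {"input": "What is 2+2?", "chat_history": []},
    agent_scratchpad=[("tool_call", "observation")],
)
print(full["input"])            # "What is 2+2?"
print("chat_history" in full)   # False: not a declared prompt variable
```

Note that keys present in `kwargs` but absent from the prompt's `input_variables` (here, `chat_history`) are silently dropped, which is why a missing prompt variable surfaces as a `KeyError` rather than being ignored.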
Frequently Asked Questions
What does aplan() do?
aplan() is the async planning method of OpenAIFunctionsAgent in the langchain codebase, defined in libs/langchain/langchain_classic/agents/openai_functions_agent/base.py. Given the intermediate steps taken so far and the user inputs, it formats them into a prompt, asynchronously invokes the LLM with the agent's OpenAI function definitions, and parses the reply into either an AgentAction (call a tool) or an AgentFinish (return a final answer).
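The return contract can be illustrated with simplified stand-ins (these dataclasses and the parser below are not the real langchain types, just an assumed sketch of the control flow): the parsed LLM message yields an action when the model requested a function call, and a finish otherwise.

```python
import asyncio
from dataclasses import dataclass


@dataclass
class AgentAction:
    tool: str
    tool_input: dict


@dataclass
class AgentFinish:
    output: str


def parse_ai_message(message: dict):
    # OpenAI function-calling responses carry a "function_call" field
    # when the model wants a tool; otherwise the content is the answer.
    if "function_call" in message:
        call = message["function_call"]
        return AgentAction(tool=call["name"], tool_input=call["arguments"])
    return AgentFinish(output=message["content"])


async def aplan_sketch(message: dict):
    # Stand-in for awaiting the LLM, then parsing its reply.
    await asyncio.sleep(0)
    return parse_ai_message(message)


result = asyncio.run(
    aplan_sketch({"function_call": {"name": "search", "arguments": {"q": "2+2"}}})
)
print(type(result).__name__)  # AgentAction
```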
Where is aplan() defined?
aplan() is defined in libs/langchain/langchain_classic/agents/openai_functions_agent/base.py at line 137.