test_openai_functions.py — langchain Source File
Architecture documentation for test_openai_functions.py, a Python source file in the langchain codebase: 3 imports, 0 dependents.
Entity Profile
Dependency Diagram
graph LR
    42bc0ff5_047f_656a_a092_b27afc995173["test_openai_functions.py"]
    59e0d3b0_0f8e_4b79_d442_e9b4821561c7["langchain_core.agents"]
    42bc0ff5_047f_656a_a092_b27afc995173 --> 59e0d3b0_0f8e_4b79_d442_e9b4821561c7
    9444498b_8066_55c7_b3a2_1d90c4162a32["langchain_core.messages"]
    42bc0ff5_047f_656a_a092_b27afc995173 --> 9444498b_8066_55c7_b3a2_1d90c4162a32
    d099fc72_77b0_a5d6_6190_287d54d192ef["langchain_classic.agents.format_scratchpad.openai_functions"]
    42bc0ff5_047f_656a_a092_b27afc995173 --> d099fc72_77b0_a5d6_6190_287d54d192ef
    style 42bc0ff5_047f_656a_a092_b27afc995173 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
from langchain_core.agents import AgentActionMessageLog
from langchain_core.messages import AIMessage, FunctionMessage

from langchain_classic.agents.format_scratchpad.openai_functions import (
    format_to_openai_function_messages,
)


def test_calls_convert_agent_action_to_messages() -> None:
    additional_kwargs1 = {
        "function_call": {
            "name": "tool1",
            "arguments": "input1",
        },
    }
    message1 = AIMessage(content="", additional_kwargs=additional_kwargs1)
    action1 = AgentActionMessageLog(
        tool="tool1",
        tool_input="input1",
        log="log1",
        message_log=[message1],
    )

    additional_kwargs2 = {
        "function_call": {
            "name": "tool2",
            "arguments": "input2",
        },
    }
    message2 = AIMessage(content="", additional_kwargs=additional_kwargs2)
    action2 = AgentActionMessageLog(
        tool="tool2",
        tool_input="input2",
        log="log2",
        message_log=[message2],
    )

    additional_kwargs3 = {
        "function_call": {
            "name": "tool3",
            "arguments": "input3",
        },
    }
    message3 = AIMessage(content="", additional_kwargs=additional_kwargs3)
    action3 = AgentActionMessageLog(
        tool="tool3",
        tool_input="input3",
        log="log3",
        message_log=[message3],
    )

    intermediate_steps = [
        (action1, "observation1"),
        (action2, "observation2"),
        (action3, "observation3"),
    ]
    expected_messages = [
        message1,
        FunctionMessage(name="tool1", content="observation1"),
        message2,
        FunctionMessage(name="tool2", content="observation2"),
        message3,
        FunctionMessage(name="tool3", content="observation3"),
    ]
    output = format_to_openai_function_messages(intermediate_steps)
    assert output == expected_messages


def test_handles_empty_input_list() -> None:
    output = format_to_openai_function_messages([])
    assert output == []
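As a usage note, here is a minimal sketch of the behavior the first test asserts: format_to_openai_function_messages expands each (action, observation) pair into the action's logged AIMessage followed by a FunctionMessage carrying the observation string. The snippet reuses only names that appear in the file above; the print loop is purely illustrative.

from langchain_core.agents import AgentActionMessageLog
from langchain_core.messages import AIMessage, FunctionMessage

from langchain_classic.agents.format_scratchpad.openai_functions import (
    format_to_openai_function_messages,
)

# Build one intermediate step the same way the test does.
message = AIMessage(
    content="",
    additional_kwargs={"function_call": {"name": "tool1", "arguments": "input1"}},
)
action = AgentActionMessageLog(
    tool="tool1", tool_input="input1", log="log1", message_log=[message]
)

# Expected result: [the action's logged AIMessage, a FunctionMessage with the observation].
result = format_to_openai_function_messages([(action, "observation1")])
for msg in result:
    print(type(msg).__name__, repr(msg.content))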
Domain
LangChainCore
Subdomains
MessageInterface
Dependencies
- langchain_classic.agents.format_scratchpad.openai_functions
- langchain_core.agents
- langchain_core.messages
Source
libs/langchain/tests/unit_tests/agents/format_scratchpad/test_openai_functions.py
Frequently Asked Questions
What does test_openai_functions.py do?
test_openai_functions.py is a Python unit-test file in the langchain codebase. It verifies that format_to_openai_function_messages converts agent intermediate steps into the expected alternating sequence of AIMessage and FunctionMessage objects, and that it returns an empty list for empty input. It belongs to the LangChainCore domain, MessageInterface subdomain.
What functions are defined in test_openai_functions.py?
test_openai_functions.py defines two functions: test_calls_convert_agent_action_to_messages and test_handles_empty_input_list. The first checks that intermediate agent steps are converted to the expected message sequence; the second checks that an empty input list yields an empty output.
What does test_openai_functions.py depend on?
test_openai_functions.py imports three modules: langchain_classic.agents.format_scratchpad.openai_functions, langchain_core.agents, and langchain_core.messages.
Where is test_openai_functions.py in the architecture?
test_openai_functions.py is located at libs/langchain/tests/unit_tests/agents/format_scratchpad/test_openai_functions.py (domain: LangChainCore, subdomain: MessageInterface, directory: libs/langchain/tests/unit_tests/agents/format_scratchpad).
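A minimal sketch for running this module directly, assuming pytest and the langchain test dependencies are installed; the path is the one listed above, relative to the repository root.

import pytest

# Invoke pytest programmatically on just this test module.
pytest.main([
    "libs/langchain/tests/unit_tests/agents/format_scratchpad/test_openai_functions.py",
    "-v",
])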