test_openai_functions.py — langchain Source File

Architecture documentation for test_openai_functions.py, a Python file in the langchain codebase. 5 imports, 0 dependents.

Entity Profile

File · Python · Domain: CoreAbstractions · Subdomain: MessageSchema · 5 imports · 6 functions

Dependency Diagram

graph LR
  3b74d1e5_bda4_cdc4_35f2_c0f1a1ab5ee9["test_openai_functions.py"]
  120e2591_3e15_b895_72b6_cb26195e40a6["pytest"]
  3b74d1e5_bda4_cdc4_35f2_c0f1a1ab5ee9 --> 120e2591_3e15_b895_72b6_cb26195e40a6
  80d582c5_7cc3_ac96_2742_3dbe1cbd4e2b["langchain_core.agents"]
  3b74d1e5_bda4_cdc4_35f2_c0f1a1ab5ee9 --> 80d582c5_7cc3_ac96_2742_3dbe1cbd4e2b
  75137834_4ba7_dc43_7ec5_182c05eceedf["langchain_core.exceptions"]
  3b74d1e5_bda4_cdc4_35f2_c0f1a1ab5ee9 --> 75137834_4ba7_dc43_7ec5_182c05eceedf
  d758344f_537f_649e_f467_b9d7442e86df["langchain_core.messages"]
  3b74d1e5_bda4_cdc4_35f2_c0f1a1ab5ee9 --> d758344f_537f_649e_f467_b9d7442e86df
  d5aabb51_1acb_207c_d830_615a074c1c9d["langchain_classic.agents.output_parsers.openai_functions"]
  3b74d1e5_bda4_cdc4_35f2_c0f1a1ab5ee9 --> d5aabb51_1acb_207c_d830_615a074c1c9d
  style 3b74d1e5_bda4_cdc4_35f2_c0f1a1ab5ee9 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

import pytest
from langchain_core.agents import (
    AgentActionMessageLog,
    AgentFinish,
)
from langchain_core.exceptions import OutputParserException
from langchain_core.messages import AIMessage, SystemMessage

from langchain_classic.agents.output_parsers.openai_functions import (
    OpenAIFunctionsAgentOutputParser,
)


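# Test: Not an AI message (should raise a TypeError).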
def test_not_an_ai() -> None:
    parser = OpenAIFunctionsAgentOutputParser()
    err = f"Expected an AI message got {SystemMessage!s}"
    with pytest.raises(TypeError, match=err):
        parser.invoke(SystemMessage(content="x"))


# Test: Model response (not a function call).
def test_model_response() -> None:
    parser = OpenAIFunctionsAgentOutputParser()
    msg = AIMessage(content="Model response.")
    result = parser.invoke(msg)

    assert isinstance(result, AgentFinish)
    assert result.return_values == {"output": "Model response."}
    assert result.log == "Model response."


# Test: Model response with a function call.
def test_func_call() -> None:
    parser = OpenAIFunctionsAgentOutputParser()
    msg = AIMessage(
        content="LLM thoughts.",
        additional_kwargs={
            "function_call": {"name": "foo", "arguments": '{"param": 42}'},
        },
    )
    result = parser.invoke(msg)

    assert isinstance(result, AgentActionMessageLog)
    assert result.tool == "foo"
    assert result.tool_input == {"param": 42}
    assert result.log == (
        "\nInvoking: `foo` with `{'param': 42}`\nresponded: LLM thoughts.\n\n"
    )
    assert result.message_log == [msg]


# Test: Model response with a function call for a function taking no arguments
def test_func_call_no_args() -> None:
    parser = OpenAIFunctionsAgentOutputParser()
    msg = AIMessage(
        content="LLM thoughts.",
        additional_kwargs={"function_call": {"name": "foo", "arguments": ""}},
    )
    result = parser.invoke(msg)

    assert isinstance(result, AgentActionMessageLog)
    assert result.tool == "foo"
    assert result.tool_input == {}
    assert result.log == ("\nInvoking: `foo` with `{}`\nresponded: LLM thoughts.\n\n")
    assert result.message_log == [msg]


# Test: Model response with a function call (old style tools).
def test_func_call_oldstyle() -> None:
    parser = OpenAIFunctionsAgentOutputParser()
    msg = AIMessage(
        content="LLM thoughts.",
        additional_kwargs={
            "function_call": {"name": "foo", "arguments": '{"__arg1": "42"}'},
        },
    )
    result = parser.invoke(msg)

    assert isinstance(result, AgentActionMessageLog)
    assert result.tool == "foo"
    assert result.tool_input == "42"
    assert result.log == "\nInvoking: `foo` with `42`\nresponded: LLM thoughts.\n\n"
    assert result.message_log == [msg]


# Test: Invalid function call args.
def test_func_call_invalid() -> None:
    parser = OpenAIFunctionsAgentOutputParser()
    msg = AIMessage(
        content="LLM thoughts.",
        additional_kwargs={"function_call": {"name": "foo", "arguments": "{42]"}},
    )

    err = (
        "Could not parse tool input: {'name': 'foo', 'arguments': '{42]'} "
        "because the `arguments` is not valid JSON."
    )
    with pytest.raises(OutputParserException, match=err):
        parser.invoke(msg)

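The listing above is the complete test module. As a minimal sketch (assuming pytest and the langchain packages are installed), the module can be run on its own through pytest's Python API; the path is the one listed in the FAQ below:

import pytest

if __name__ == "__main__":
    # -q keeps output terse; the path matches the file's location in the repository.
    raise SystemExit(
        pytest.main(
            ["-q", "libs/langchain/tests/unit_tests/agents/output_parsers/test_openai_functions.py"]
        )
    )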

Dependencies

  • langchain_classic.agents.output_parsers.openai_functions
  • langchain_core.agents
  • langchain_core.exceptions
  • langchain_core.messages
  • pytest

Frequently Asked Questions

What does test_openai_functions.py do?
test_openai_functions.py is a Python source file in the langchain codebase (CoreAbstractions domain, MessageSchema subdomain). It contains unit tests for OpenAIFunctionsAgentOutputParser, checking that non-AI messages raise a TypeError, that plain AI messages parse to AgentFinish, that function calls (with arguments, without arguments, and old-style single-argument tools) parse to AgentActionMessageLog, and that invalid JSON arguments raise OutputParserException. A condensed sketch of this behavior follows the FAQ.
What functions are defined in test_openai_functions.py?
test_openai_functions.py defines six test functions: test_func_call, test_func_call_invalid, test_func_call_no_args, test_func_call_oldstyle, test_model_response, test_not_an_ai.
What does test_openai_functions.py depend on?
test_openai_functions.py imports five modules: langchain_classic.agents.output_parsers.openai_functions, langchain_core.agents, langchain_core.exceptions, langchain_core.messages, and pytest.
Where is test_openai_functions.py in the architecture?
test_openai_functions.py is located at libs/langchain/tests/unit_tests/agents/output_parsers/test_openai_functions.py (domain: CoreAbstractions, subdomain: MessageSchema, directory: libs/langchain/tests/unit_tests/agents/output_parsers).
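
As referenced above, a condensed sketch of the parsing behavior these tests pin down (all names come from the imports in the source listing; this is an illustration, not part of the test file):

from langchain_core.agents import AgentActionMessageLog, AgentFinish
from langchain_core.messages import AIMessage

from langchain_classic.agents.output_parsers.openai_functions import (
    OpenAIFunctionsAgentOutputParser,
)

parser = OpenAIFunctionsAgentOutputParser()

# Plain assistant text parses to AgentFinish (the agent is done).
finish = parser.invoke(AIMessage(content="Model response."))
assert isinstance(finish, AgentFinish)
assert finish.return_values == {"output": "Model response."}

# A function_call in additional_kwargs parses to a tool invocation.
action = parser.invoke(
    AIMessage(
        content="LLM thoughts.",
        additional_kwargs={
            "function_call": {"name": "foo", "arguments": '{"param": 42}'},
        },
    )
)
assert isinstance(action, AgentActionMessageLog)
assert action.tool == "foo"
assert action.tool_input == {"param": 42}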
