test_return_direct_spec.py — langchain Source File
Architecture documentation for test_return_direct_spec.py, a Python file in the langchain codebase. 10 imports, 0 dependents.
Entity Profile
Dependency Diagram
graph LR
    test_return_direct_spec_py["test_return_direct_spec.py"]
    os_mod["os"]
    typing_mod["typing"]
    unittest_mock_mod["unittest.mock"]
    pytest_mod["pytest"]
    langchain_core_messages["langchain_core.messages"]
    langchain_core_tools["langchain_core.tools"]
    langchain_agents["langchain.agents"]
    langchain_agents_structured_output["langchain.agents.structured_output"]
    tests_unit_tests_agents_utils["tests.unit_tests.agents.utils"]
    langchain_openai_mod["langchain_openai"]
    test_return_direct_spec_py --> os_mod
    test_return_direct_spec_py --> typing_mod
    test_return_direct_spec_py --> unittest_mock_mod
    test_return_direct_spec_py --> pytest_mod
    test_return_direct_spec_py --> langchain_core_messages
    test_return_direct_spec_py --> langchain_core_tools
    test_return_direct_spec_py --> langchain_agents
    test_return_direct_spec_py --> langchain_agents_structured_output
    test_return_direct_spec_py --> tests_unit_tests_agents_utils
    test_return_direct_spec_py --> langchain_openai_mod
    style test_return_direct_spec_py fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
from __future__ import annotations

import os
from typing import Any
from unittest.mock import MagicMock

import pytest
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool

from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy

from tests.unit_tests.agents.utils import BaseSchema, load_spec

# The OpenAI integration is optional: skip these tests when the
# langchain_openai package is missing or no API key is configured.
try:
    from langchain_openai import ChatOpenAI
except ImportError:
    skip_openai_integration_tests = True
else:
    skip_openai_integration_tests = "OPENAI_API_KEY" not in os.environ
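
# System prompt shared by every case: keep calling the polling tool until the
# job succeeds, then answer in the fixed "Attempts: <number>" format.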
AGENT_PROMPT = """
You are a strict polling bot.
- Only use the "poll_job" tool until it returns { status: "succeeded" }.
- If status is "pending", call the tool again. Do not produce a final answer.
- When it is "succeeded", return exactly: "Attempts: <number>" with no extra text.
"""
class TestCase(BaseSchema):
    """One row of the return_direct spec matrix."""

    name: str
    return_direct: bool
    response_format: dict[str, Any] | None
    expected_tool_calls: int
    expected_last_message: str
    expected_structured_response: dict[str, Any] | None


TEST_CASES = load_spec("return_direct", as_model=TestCase)
def _make_tool(*, return_direct: bool) -> dict[str, Any]:
    """Build a polling tool backed by a MagicMock that counts invocations."""
    attempts = 0

    def _side_effect() -> dict[str, Any]:
        nonlocal attempts
        attempts += 1
        # Report "pending" for the first nine calls, "succeeded" from the tenth.
        return {
            "status": "succeeded" if attempts >= 10 else "pending",
            "attempts": attempts,
        }

    mock = MagicMock(side_effect=_side_effect)

    @tool(
        "poll_job",
        description=(
            "Check the status of a long-running job. "
            "Returns { status: 'pending' | 'succeeded', attempts: number }."
        ),
        return_direct=return_direct,
    )
    def _wrapped() -> Any:
        return mock()

    return {"tool": _wrapped, "mock": mock}
@pytest.mark.skipif(
    skip_openai_integration_tests,
    reason="OpenAI integration tests are disabled.",
)
@pytest.mark.parametrize("case", TEST_CASES, ids=[c.name for c in TEST_CASES])
def test_return_direct_integration_matrix(case: TestCase) -> None:
    poll_tool = _make_tool(return_direct=case.return_direct)
    model = ChatOpenAI(model="gpt-4o", temperature=0)

    if case.response_format:
        agent = create_agent(
            model,
            tools=[poll_tool["tool"]],
            system_prompt=AGENT_PROMPT,
            response_format=ToolStrategy(case.response_format),
        )
    else:
        agent = create_agent(
            model,
            tools=[poll_tool["tool"]],
            system_prompt=AGENT_PROMPT,
        )

    result = agent.invoke(
        {
            "messages": [
                HumanMessage(
                    "Poll the job until it's done and tell me how many attempts it took."
                )
            ]
        }
    )

    # Count tool calls.
    assert poll_tool["mock"].call_count == case.expected_tool_calls

    # Check the last message content.
    last_message = result["messages"][-1]
    assert last_message.content == case.expected_last_message

    # Check the structured response.
    if case.expected_structured_response is not None:
        structured_response_json = result["structured_response"]
        assert structured_response_json == case.expected_structured_response
    else:
        assert "structured_response" not in result
Dependencies
- langchain.agents
- langchain.agents.structured_output
- langchain_core.messages
- langchain_core.tools
- langchain_openai
- os
- pytest
- tests.unit_tests.agents.utils
- typing
- unittest.mock
Frequently Asked Questions
What does test_return_direct_spec.py do?
test_return_direct_spec.py is a Python test module in the langchain codebase (domain: LangChainCore, subdomain: MessageInterface). It runs a spec-driven integration matrix against ChatOpenAI to check how a tool's return_direct flag changes an agent built with create_agent: how many times the polling tool is invoked, what the final message contains, and whether a structured response is produced.
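return_direct is a standard flag on LangChain tools: when it is true, the agent loop returns that tool's output directly as the final result instead of handing it back to the model for another turn. A minimal sketch of the flag in isolation (the echo tool is invented for illustration):

from langchain_core.tools import tool

@tool("echo", return_direct=True)
def echo(text: str) -> str:
    """Return the input text unchanged."""
    return text

# Inside an agent loop, the first call to this tool ends the run and its
# output becomes the final message; no further model turns happen.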
What functions are defined in test_return_direct_spec.py?
test_return_direct_spec.py defines two module-level functions, _make_tool and test_return_direct_integration_matrix, plus the helpers _side_effect and _wrapped nested inside _make_tool. It also defines the TestCase class and the module-level flag skip_openai_integration_tests.
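_make_tool pairs the decorated tool with its backing MagicMock so tests can assert on call counts. A minimal usage sketch, assuming the tool is invoked directly rather than through an agent:

poll = _make_tool(return_direct=False)
poll["tool"].invoke({})  # -> {"status": "pending", "attempts": 1}
poll["tool"].invoke({})  # -> {"status": "pending", "attempts": 2}
assert poll["mock"].call_count == 2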
What does test_return_direct_spec.py depend on?
test_return_direct_spec.py imports 10 module(s): langchain.agents, langchain.agents.structured_output, langchain_core.messages, langchain_core.tools, langchain_openai, os, pytest, tests.unit_tests.agents.utils, typing, and unittest.mock. Only langchain_openai is treated as optional; the rest are hard requirements of the module.
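The optional import is handled with a guard that folds package availability and API-key presence into one skip flag, a common pytest pattern for integration tests. Restated in isolation (the requires_openai marker name is invented here):

import os

import pytest

try:
    from langchain_openai import ChatOpenAI  # optional integration dependency
except ImportError:
    openai_ready = False
else:
    openai_ready = "OPENAI_API_KEY" in os.environ

requires_openai = pytest.mark.skipif(
    not openai_ready, reason="OpenAI integration tests are disabled."
)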
Where is test_return_direct_spec.py in the architecture?
test_return_direct_spec.py is located at libs/langchain_v1/tests/unit_tests/agents/test_return_direct_spec.py (domain: LangChainCore, subdomain: MessageInterface, directory: libs/langchain_v1/tests/unit_tests/agents).