test_code_interpreter() — langchain Function Reference
Architecture documentation for the test_code_interpreter() function in test_responses_api.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    f3e5d093_21b9_fa67_83b6_8a6f12a90bca["test_code_interpreter()"]
    992496d5_b7d4_139f_00cf_3e585d851f81["test_responses_api.py"]
    f3e5d093_21b9_fa67_83b6_8a6f12a90bca -->|defined in| 992496d5_b7d4_139f_00cf_3e585d851f81
    b0966d53_e5bb_3879_d8d6_00823de68309["_check_response()"]
    f3e5d093_21b9_fa67_83b6_8a6f12a90bca -->|calls| b0966d53_e5bb_3879_d8d6_00823de68309
    style f3e5d093_21b9_fa67_83b6_8a6f12a90bca fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py lines 542–629
def test_code_interpreter(output_version: Literal["v0", "responses/v1", "v1"]) -> None:
    llm = ChatOpenAI(
        model="o4-mini", use_responses_api=True, output_version=output_version
    )
    llm_with_tools = llm.bind_tools(
        [{"type": "code_interpreter", "container": {"type": "auto"}}]
    )
    input_message = {
        "role": "user",
        "content": "Write and run code to answer the question: what is 3^3?",
    }
    response = llm_with_tools.invoke([input_message])
    assert isinstance(response, AIMessage)
    _check_response(response)
    if output_version == "v0":
        tool_outputs = [
            item
            for item in response.additional_kwargs["tool_outputs"]
            if item["type"] == "code_interpreter_call"
        ]
        assert len(tool_outputs) == 1
    elif output_version == "responses/v1":
        tool_outputs = [
            item
            for item in response.content
            if isinstance(item, dict) and item["type"] == "code_interpreter_call"
        ]
        assert len(tool_outputs) == 1
    else:
        # v1
        tool_outputs = [
            item
            for item in response.content_blocks
            if item["type"] == "server_tool_call" and item["name"] == "code_interpreter"
        ]
        code_interpreter_result = next(
            item
            for item in response.content_blocks
            if item["type"] == "server_tool_result"
        )
        assert tool_outputs
        assert code_interpreter_result
        assert len(tool_outputs) == 1

    # Test streaming
    # Use same container
    container_id = tool_outputs[0].get("container_id") or tool_outputs[0].get(
        "extras", {}
    ).get("container_id")
    llm_with_tools = llm.bind_tools(
        [{"type": "code_interpreter", "container": container_id}]
    )
    full: BaseMessageChunk | None = None
    for chunk in llm_with_tools.stream([input_message]):
        assert isinstance(chunk, AIMessageChunk)
        full = chunk if full is None else full + chunk
    assert isinstance(full, AIMessageChunk)
    if output_version == "v0":
        tool_outputs = [
            item
            for item in full.additional_kwargs["tool_outputs"]
            if item["type"] == "code_interpreter_call"
        ]
        assert tool_outputs
    elif output_version == "responses/v1":
        tool_outputs = [
            item
            for item in full.content
            if isinstance(item, dict) and item["type"] == "code_interpreter_call"
        ]
        assert tool_outputs
    else:
        # v1
        code_interpreter_call = next(
            item
            for item in full.content_blocks
            if item["type"] == "server_tool_call" and item["name"] == "code_interpreter"
        )
        code_interpreter_result = next(
            item for item in full.content_blocks if item["type"] == "server_tool_result"
        )
        assert code_interpreter_call
        assert code_interpreter_result
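Before streaming, the test reuses the container created in the first call, reading its id from either a top-level "container_id" key or a nested "extras" dict, since the different output versions place it differently. That fallback pattern in isolation (a sketch with made-up ids):

```python
from typing import Optional


def get_container_id(tool_call: dict) -> Optional[str]:
    # Prefer the top-level key; fall back to the nested "extras" dict,
    # mirroring the two shapes the different output versions produce.
    return tool_call.get("container_id") or tool_call.get("extras", {}).get(
        "container_id"
    )


assert get_container_id({"container_id": "cntr_123"}) == "cntr_123"
assert get_container_id({"extras": {"container_id": "cntr_456"}}) == "cntr_456"
assert get_container_id({}) is None
```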
Frequently Asked Questions
What does test_code_interpreter() do?
test_code_interpreter() is an integration test in the langchain codebase, defined in libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py. It verifies that ChatOpenAI, using the Responses API with the built-in code_interpreter tool, surfaces code-interpreter tool calls correctly for each supported output_version ("v0", "responses/v1", "v1"), in both invoke and streaming modes, and that a container can be reused across calls.
Where is test_code_interpreter() defined?
test_code_interpreter() is defined in libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py at line 542.
What does test_code_interpreter() call?
test_code_interpreter() directly calls one helper function: _check_response.
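_check_response is a shared helper in the same test module; its exact assertions are not reproduced here. As a rough, hypothetical sketch of the kind of shape checks such a helper performs (the field names below are assumptions for illustration, not the real implementation):

```python
from types import SimpleNamespace


def check_response_sketch(response) -> None:
    # Hypothetical shape checks; the real _check_response in
    # test_responses_api.py may assert different fields.
    assert response.content, "model returned no content"
    assert response.response_metadata.get("model_name")
    assert response.usage_metadata["total_tokens"] > 0


# Exercise the sketch with a fake response object.
fake = SimpleNamespace(
    content=[{"type": "text", "text": "27"}],
    response_metadata={"model_name": "o4-mini"},
    usage_metadata={"total_tokens": 42},
)
check_response_sketch(fake)
```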