test_image_generation_multi_turn() — langchain Function Reference
Architecture documentation for the test_image_generation_multi_turn() function in test_responses_api.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    8cef8119_769e_ecbc_363f_5cf82014bd24["test_image_generation_multi_turn()"]
    992496d5_b7d4_139f_00cf_3e585d851f81["test_responses_api.py"]
    8cef8119_769e_ecbc_363f_5cf82014bd24 -->|defined in| 992496d5_b7d4_139f_00cf_3e585d851f81
    b0966d53_e5bb_3879_d8d6_00823de68309["_check_response()"]
    8cef8119_769e_ecbc_363f_5cf82014bd24 -->|calls| b0966d53_e5bb_3879_d8d6_00823de68309
    style 8cef8119_769e_ecbc_363f_5cf82014bd24 fill:#6366f1,stroke:#818cf8,color:#fff
Relationship Graph
Source Code
libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py lines 894–996
def test_image_generation_multi_turn(
    output_version: Literal["v0", "responses/v1"],
) -> None:
    """Test multi-turn editing of image generation by passing in history."""
    # Test multi-turn
    llm = ChatOpenAI(
        model="gpt-4.1", use_responses_api=True, output_version=output_version
    )
    # Test invocation
    tool = {
        "type": "image_generation",
        # For testing purposes let's keep the quality low, so the test runs faster.
        "quality": "low",
        "output_format": "jpeg",
        "output_compression": 100,
        "size": "1024x1024",
    }
    llm_with_tools = llm.bind_tools([tool])
    chat_history: list[MessageLikeRepresentation] = [
        {"role": "user", "content": "Draw a random short word in green font."}
    ]
    ai_message = llm_with_tools.invoke(chat_history)
    assert isinstance(ai_message, AIMessage)
    _check_response(ai_message)
    expected_keys = {
        "id",
        "background",
        "output_format",
        "quality",
        "result",
        "revised_prompt",
        "size",
        "status",
        "type",
    }
    if output_version == "v0":
        tool_output = ai_message.additional_kwargs["tool_outputs"][0]
        assert set(tool_output.keys()).issubset(expected_keys)
    elif output_version == "responses/v1":
        tool_output = next(
            block
            for block in ai_message.content
            if isinstance(block, dict) and block["type"] == "image_generation_call"
        )
        assert set(tool_output.keys()).issubset(expected_keys)
    else:
        standard_keys = {"type", "base64", "id", "status"}
        tool_output = next(
            block
            for block in ai_message.content
            if isinstance(block, dict) and block["type"] == "image"
        )
        assert set(standard_keys).issubset(tool_output.keys())
    # Example tool output for an image (v0)
    # {
    #     "background": "opaque",
    #     "id": "ig_683716a8ddf0819888572b20621c7ae4029ec8c11f8dacf8",
    #     "output_format": "png",
    #     "quality": "high",
    #     "revised_prompt": "A fluffy, fuzzy cat sitting calmly, with soft fur, bright "
    #         "eyes, and a cute, friendly expression. The background is "
    #         "simple and light to emphasize the cat's texture and "
    #         "fluffiness.",
    #     "size": "1024x1024",
    #     "status": "completed",
    #     "type": "image_generation_call",
    #     "result": # base64 encode image data
    # }
    chat_history.extend(
        [
            # AI message with tool output
            ai_message,
            # New request
            {
                "role": "user",
                "content": (
                    # Follow-up request text and the remainder of the test
                    # (through line 996) are not shown in this listing.
Frequently Asked Questions
What does test_image_generation_multi_turn() do?
test_image_generation_multi_turn() is an integration test in the langchain codebase, defined in libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py. It binds OpenAI's built-in image_generation tool to a ChatOpenAI instance configured for the Responses API, asks the model to draw a short word, checks the resulting image tool output against the keys expected for the given output_version, and then appends the AI message to the chat history for a follow-up editing turn.
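As a rough sketch of the pattern this test exercises (not the test's exact code), the example below binds OpenAI's built-in image_generation tool to ChatOpenAI and passes the first AI message back as history so the model can edit its own image. The model name, tool options, and prompts are illustrative placeholders, and running it requires a valid OpenAI API key.

from langchain_openai import ChatOpenAI

# Illustrative sketch only; model, prompts, and tool options are placeholders.
llm = ChatOpenAI(
    model="gpt-4.1", use_responses_api=True, output_version="responses/v1"
)
llm_with_tools = llm.bind_tools(
    [{"type": "image_generation", "quality": "low", "size": "1024x1024"}]
)

# First turn: request an image.
history = [{"role": "user", "content": "Draw a random short word in green font."}]
first = llm_with_tools.invoke(history)

# Second turn: include the AI message in history so the model edits its prior output.
history.extend(
    [first, {"role": "user", "content": "Keep the word, but change the font to blue."}]
)
second = llm_with_tools.invoke(history)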
Where is test_image_generation_multi_turn() defined?
test_image_generation_multi_turn() is defined in libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py at line 894.
What does test_image_generation_multi_turn() call?
test_image_generation_multi_turn() calls one function: _check_response(), which validates the AI response returned by the model.
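The body of _check_response is not reproduced on this page. Purely as an illustration of what a response-validation helper of this kind might assert, and not the actual implementation from test_responses_api.py, a minimal sketch could look like the following; the helper name and the specific checks are assumptions.

from langchain_core.messages import AIMessage

def _check_response_sketch(message: object) -> None:
    # Hypothetical checks; the real _check_response may differ.
    assert isinstance(message, AIMessage)
    # Responses API results normally carry usage and model metadata.
    assert message.usage_metadata is not None
    assert message.usage_metadata["total_tokens"] > 0
    assert message.response_metadata.get("model_name")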