test_image_generation_multi_turn_v1() — langchain Function Reference
Architecture documentation for the test_image_generation_multi_turn_v1() function in test_responses_api.py from the langchain codebase.
Dependency Diagram
graph TD
    1ac07fd5_fc83_7a44_8a20_ffb3d8fc1f3e["test_image_generation_multi_turn_v1()"]
    992496d5_b7d4_139f_00cf_3e585d851f81["test_responses_api.py"]
    1ac07fd5_fc83_7a44_8a20_ffb3d8fc1f3e -->|defined in| 992496d5_b7d4_139f_00cf_3e585d851f81
    b0966d53_e5bb_3879_d8d6_00823de68309["_check_response()"]
    1ac07fd5_fc83_7a44_8a20_ffb3d8fc1f3e -->|calls| b0966d53_e5bb_3879_d8d6_00823de68309
    style 1ac07fd5_fc83_7a44_8a20_ffb3d8fc1f3e fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py lines 1001–1065
def test_image_generation_multi_turn_v1() -> None:
    """Test multi-turn editing of image generation by passing in history."""
    # Test multi-turn
    llm = ChatOpenAI(model="gpt-4.1", use_responses_api=True, output_version="v1")
    # Test invocation
    tool = {
        "type": "image_generation",
        "quality": "low",
        "output_format": "jpeg",
        "output_compression": 100,
        "size": "1024x1024",
    }
    llm_with_tools = llm.bind_tools([tool])

    chat_history: list[MessageLikeRepresentation] = [
        {"role": "user", "content": "Draw a random short word in green font."}
    ]
    ai_message = llm_with_tools.invoke(chat_history)
    assert isinstance(ai_message, AIMessage)
    _check_response(ai_message)

    standard_keys = {"type", "base64", "mime_type", "id"}
    extra_keys = {
        "background",
        "output_format",
        "quality",
        "revised_prompt",
        "size",
        "status",
    }

    tool_output = next(
        block
        for block in ai_message.content
        if isinstance(block, dict) and block["type"] == "image"
    )
    assert set(standard_keys).issubset(tool_output.keys())
    assert set(extra_keys).issubset(tool_output["extras"].keys())

    chat_history.extend(
        [
            # AI message with tool output
            ai_message,
            # New request
            {
                "role": "user",
                "content": (
                    "Now, change the font to blue. Keep the word and everything else "
                    "the same."
                ),
            },
        ]
    )

    ai_message2 = llm_with_tools.invoke(chat_history)
    assert isinstance(ai_message2, AIMessage)
    _check_response(ai_message2)

    tool_output = next(
        block
        for block in ai_message2.content
        if isinstance(block, dict) and block["type"] == "image"
    )
    assert set(standard_keys).issubset(tool_output.keys())
    assert set(extra_keys).issubset(tool_output["extras"].keys())
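The keys asserted above suggest how a caller might consume the generated image outside of the test. The following is a minimal, hypothetical sketch (not part of the test file) that assumes the same v1 content-block shape, with "base64" holding the encoded image bytes and "mime_type" identifying the format, as asserted in the test:

    import base64

    # Hypothetical helper: extract the first "image" content block from an
    # AIMessage produced with output_version="v1" and write it to disk.
    # Assumes the block carries "base64" and "mime_type" keys, matching the
    # standard_keys checked in the test above.
    def save_generated_image(ai_message, path_stem="generated_word"):
        image_block = next(
            block
            for block in ai_message.content
            if isinstance(block, dict) and block["type"] == "image"
        )
        extension = image_block["mime_type"].split("/")[-1]  # e.g. "jpeg"
        out_path = f"{path_stem}.{extension}"
        with open(out_path, "wb") as f:
            f.write(base64.b64decode(image_block["base64"]))
        return out_path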
Frequently Asked Questions
What does test_image_generation_multi_turn_v1() do?
test_image_generation_multi_turn_v1() is an integration test in the langchain codebase, defined in libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py. It verifies multi-turn image editing with ChatOpenAI: it binds the built-in image_generation tool to a gpt-4.1 model using the Responses API with output_version="v1", asks the model to draw a word, checks that the returned "image" content block contains the expected standard keys and "extras" keys, then appends the AI message and a follow-up edit request to the chat history, invokes the model again, and repeats the checks on the second response.
Where is test_image_generation_multi_turn_v1() defined?
test_image_generation_multi_turn_v1() is defined in libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py at line 1001.
What does test_image_generation_multi_turn_v1() call?
test_image_generation_multi_turn_v1() calls one helper function, _check_response(), which it invokes on each AIMessage returned by the model.
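The implementation of _check_response() is not shown on this page. As a rough illustration only, a helper of this kind typically asserts basic invariants on the returned message; a minimal sketch, assuming nothing beyond standard AIMessage fields and not representing the actual helper, might look like this:

    from langchain_core.messages import AIMessage

    # Hypothetical sketch of a response-checking helper; the real
    # _check_response() in test_responses_api.py may assert different or
    # additional invariants.
    def _check_response_sketch(message: AIMessage) -> None:
        assert isinstance(message, AIMessage)
        # The message should carry some content (text or tool output blocks).
        assert message.content
        # When usage metadata is reported, token counts should be positive.
        if message.usage_metadata:
            assert message.usage_metadata["total_tokens"] > 0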