test_image_generation_streaming() — langchain Function Reference
Architecture documentation for the test_image_generation_streaming() function in test_responses_api.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    5826d574_86c2_f643_4ca9_96c1ed47fb4e["test_image_generation_streaming()"]
    992496d5_b7d4_139f_00cf_3e585d851f81["test_responses_api.py"]
    5826d574_86c2_f643_4ca9_96c1ed47fb4e -->|defined in| 992496d5_b7d4_139f_00cf_3e585d851f81
    b0966d53_e5bb_3879_d8d6_00823de68309["_check_response()"]
    5826d574_86c2_f643_4ca9_96c1ed47fb4e -->|calls| b0966d53_e5bb_3879_d8d6_00823de68309
    style 5826d574_86c2_f643_4ca9_96c1ed47fb4e fill:#6366f1,stroke:#818cf8,color:#fff
Relationship Graph
Source Code
libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py lines 786–850
def test_image_generation_streaming(
    output_version: Literal["v0", "responses/v1"],
) -> None:
    """Test image generation streaming."""
    llm = ChatOpenAI(
        model="gpt-4.1", use_responses_api=True, output_version=output_version
    )
    tool = {
        "type": "image_generation",
        # For testing purposes let's keep the quality low, so the test runs faster.
        "quality": "low",
        "output_format": "jpeg",
        "output_compression": 100,
        "size": "1024x1024",
    }
    # Example tool output for an image
    # {
    #     "background": "opaque",
    #     "id": "ig_683716a8ddf0819888572b20621c7ae4029ec8c11f8dacf8",
    #     "output_format": "png",
    #     "quality": "high",
    #     "revised_prompt": "A fluffy, fuzzy cat sitting calmly, with soft fur, bright "
    #                       "eyes, and a cute, friendly expression. The background is "
    #                       "simple and light to emphasize the cat's texture and "
    #                       "fluffiness.",
    #     "size": "1024x1024",
    #     "status": "completed",
    #     "type": "image_generation_call",
    #     "result": # base64 encoded image data
    # }
    expected_keys = {
        "id",
        "index",
        "background",
        "output_format",
        "quality",
        "result",
        "revised_prompt",
        "size",
        "status",
        "type",
    }
    full: BaseMessageChunk | None = None
    for chunk in llm.stream("Draw a random short word in green font.", tools=[tool]):
        assert isinstance(chunk, AIMessageChunk)
        full = chunk if full is None else full + chunk
    complete_ai_message = cast(AIMessageChunk, full)
    # At the moment, the streaming API does not pick up annotations fully.
    # So the following check is commented out.
    # _check_response(complete_ai_message)
    if output_version == "v0":
        assert complete_ai_message.additional_kwargs["tool_outputs"]
        tool_output = complete_ai_message.additional_kwargs["tool_outputs"][0]
        assert set(tool_output.keys()).issubset(expected_keys)
    else:
        # "responses/v1"
        tool_output = next(
            block
            for block in complete_ai_message.content
            if isinstance(block, dict) and block["type"] == "image_generation_call"
        )
        assert set(tool_output.keys()).issubset(expected_keys)
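The core pattern in this test — accumulating streamed chunks with `+`, locating the image-generation content block, and validating its keys against an allowed set — can be sketched in plain Python. Here `Chunk` is a hypothetical stand-in for langchain's `AIMessageChunk` (the real class merges content with far more logic); the names and the simulated stream are illustrative, not part of the langchain API:

```python
# Minimal sketch of the stream-accumulate-then-validate pattern used above.
# `Chunk` is a hypothetical stand-in for AIMessageChunk.

EXPECTED_KEYS = {
    "id", "index", "background", "output_format", "quality",
    "result", "revised_prompt", "size", "status", "type",
}

class Chunk:
    def __init__(self, blocks):
        self.blocks = list(blocks)  # list of dict content blocks

    def __add__(self, other):
        # Streaming merge: combine content blocks, as `full + chunk` does
        # in the test loop (the real merge is more involved).
        return Chunk(self.blocks + other.blocks)

def aggregate(stream):
    """Fold a stream of chunks into one, mirroring the test's for-loop."""
    full = None
    for chunk in stream:
        full = chunk if full is None else full + chunk
    return full

# Simulated stream: image_generation_call data arrives across chunks.
stream = [
    Chunk([{"type": "image_generation_call", "id": "ig_123", "index": 0}]),
    Chunk([{"type": "image_generation_call", "status": "completed"}]),
]
full = aggregate(stream)

# Validate as the test does: find the image block, check its keys are a
# subset of the allowed keys (a subset check tolerates missing fields).
tool_output = next(
    b for b in full.blocks
    if isinstance(b, dict) and b["type"] == "image_generation_call"
)
assert set(tool_output.keys()).issubset(EXPECTED_KEYS)
```

The subset check (rather than equality) is the notable design choice: it lets the provider omit fields without breaking the test, while still failing loudly if an unexpected key appears.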
Frequently Asked Questions
What does test_image_generation_streaming() do?
test_image_generation_streaming() is an integration test in the langchain codebase, defined in libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py. It streams a response from ChatOpenAI with the built-in image_generation tool enabled, accumulates the streamed AIMessageChunks into one message, and asserts that the aggregated image-generation tool output contains only the expected keys, for both the "v0" and "responses/v1" output versions.
Where is test_image_generation_streaming() defined?
test_image_generation_streaming() is defined in libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py at line 786.
What does test_image_generation_streaming() call?
test_image_generation_streaming() calls one function, _check_response(). Note that in the source shown above this call is currently commented out, because the streaming API does not yet pick up annotations fully.