test_stream_reasoning_summary() — langchain Function Reference
Architecture documentation for the test_stream_reasoning_summary() function in test_responses_api.py from the langchain codebase.
Dependency Diagram
graph TD
  a5e829fb_c683_3b27_b363_57ab4ced4959["test_stream_reasoning_summary()"]
  992496d5_b7d4_139f_00cf_3e585d851f81["test_responses_api.py"]
  a5e829fb_c683_3b27_b363_57ab4ced4959 -->|defined in| 992496d5_b7d4_139f_00cf_3e585d851f81
  style a5e829fb_c683_3b27_b363_57ab4ced4959 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py lines 475–536
def test_stream_reasoning_summary(
    output_version: Literal["v0", "responses/v1", "v1"],
) -> None:
    llm = ChatOpenAI(
        model="o4-mini",
        # Routes to Responses API if `reasoning` is set.
        reasoning={"effort": "medium", "summary": "auto"},
        output_version=output_version,
    )
    message_1 = {
        "role": "user",
        "content": "What was the third tallest building in the year 2000?",
    }
    response_1: BaseMessageChunk | None = None
    for chunk in llm.stream([message_1]):
        assert isinstance(chunk, AIMessageChunk)
        response_1 = chunk if response_1 is None else response_1 + chunk
    assert isinstance(response_1, AIMessageChunk)
    if output_version == "v0":
        reasoning = response_1.additional_kwargs["reasoning"]
        assert set(reasoning.keys()) == {"id", "type", "summary"}
        summary = reasoning["summary"]
        assert isinstance(summary, list)
        for block in summary:
            assert isinstance(block, dict)
            assert isinstance(block["type"], str)
            assert isinstance(block["text"], str)
            assert block["text"]
    elif output_version == "responses/v1":
        reasoning = next(
            block
            for block in response_1.content
            if block["type"] == "reasoning"  # type: ignore[index]
        )
        if isinstance(reasoning, str):
            reasoning = json.loads(reasoning)
        assert set(reasoning.keys()) == {"id", "type", "summary", "index"}
        summary = reasoning["summary"]
        assert isinstance(summary, list)
        for block in summary:
            assert isinstance(block, dict)
            assert isinstance(block["type"], str)
            assert isinstance(block["text"], str)
            assert block["text"]
    else:
        # v1
        total_reasoning_blocks = 0
        for block in response_1.content_blocks:
            if block["type"] == "reasoning":
                total_reasoning_blocks += 1
                assert isinstance(block.get("id"), str)
                assert block.get("id", "").startswith("rs_")
                assert isinstance(block.get("reasoning"), str)
                assert isinstance(block.get("index"), str)
        assert (
            total_reasoning_blocks > 1
        )  # This query typically generates multiple reasoning blocks
    # Check we can pass back summaries
    message_2 = {"role": "user", "content": "Thank you."}
    response_2 = llm.invoke([message_1, response_1, message_2])
    assert isinstance(response_2, AIMessage)
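The v1 branch of the test tallies reasoning blocks in `response_1.content_blocks` and validates their shape. That filtering-and-counting logic can be sketched in isolation with plain dicts; the sample blocks below are hypothetical stand-ins for real API output, not actual responses:

```python
# Hypothetical content blocks, shaped like the v1 output the test inspects.
content_blocks = [
    {"type": "reasoning", "id": "rs_abc123", "reasoning": "Recall skylines circa 2000.", "index": "0"},
    {"type": "reasoning", "id": "rs_def456", "reasoning": "Compare the top candidates.", "index": "1"},
    {"type": "text", "text": "The third tallest building in 2000 was ..."},
]


def count_reasoning_blocks(blocks: list[dict]) -> int:
    """Count blocks of type 'reasoning', checking their shape as the test does."""
    total = 0
    for block in blocks:
        if block["type"] == "reasoning":
            total += 1
            assert isinstance(block.get("id"), str)
            assert block.get("id", "").startswith("rs_")
            assert isinstance(block.get("reasoning"), str)
    return total


print(count_reasoning_blocks(content_blocks))  # 2
```

The test asserts the count is greater than 1 because a factual-recall query like this one typically produces multiple reasoning summary blocks.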
Frequently Asked Questions
What does test_stream_reasoning_summary() do?
test_stream_reasoning_summary() is an integration test in the langchain codebase, defined in libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py. It streams a response from ChatOpenAI (model o4-mini) with reasoning summaries enabled, aggregates the streamed chunks, verifies the structure of the reasoning summary for each supported output_version ("v0", "responses/v1", "v1"), and confirms the aggregated message can be passed back in a follow-up invocation.
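The aggregation step in the test's streaming loop (`response_1 = chunk if response_1 is None else response_1 + chunk`) relies on `AIMessageChunk` supporting `+` to merge streamed deltas. A toy sketch of that accumulation pattern, using a hypothetical `Chunk` class in place of the real message type:

```python
from dataclasses import dataclass, field


@dataclass
class Chunk:
    """Toy stand-in for AIMessageChunk: `+` concatenates streamed deltas."""

    text: str = ""
    reasoning: list[str] = field(default_factory=list)

    def __add__(self, other: "Chunk") -> "Chunk":
        return Chunk(self.text + other.text, self.reasoning + other.reasoning)


# Accumulate chunks exactly as the test's streaming loop does.
stream = [
    Chunk("The third", ["recall skylines circa 2000"]),
    Chunk(" tallest building"),
    Chunk(" was ..."),
]
merged = None
for chunk in stream:
    merged = chunk if merged is None else merged + chunk

print(merged.text)  # The third tallest building was ...
```

The same `first-or-sum` idiom appears throughout langchain's streaming tests: the first chunk seeds the accumulator, and every later chunk is folded in with `+`.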
Where is test_stream_reasoning_summary() defined?
test_stream_reasoning_summary() is defined in libs/partners/openai/tests/integration_tests/chat_models/test_responses_api.py at line 475.