_astream_responses() — langchain Function Reference
Architecture documentation for the _astream_responses() function in base.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    f16a8c4e_203d_d4ca_56b2_d86075a05781["_astream_responses()"]
    2a683305_667b_3567_cab9_9f77e29d4afa["BaseChatOpenAI"]
    f16a8c4e_203d_d4ca_56b2_d86075a05781 -->|defined in| 2a683305_667b_3567_cab9_9f77e29d4afa
    fe7d2227_7e9d_1aa6_e506_35d77859da4c["_astream()"]
    fe7d2227_7e9d_1aa6_e506_35d77859da4c -->|calls| f16a8c4e_203d_d4ca_56b2_d86075a05781
    36b15b48_0822_029c_4a53_8243405e5a5e["_get_request_payload()"]
    f16a8c4e_203d_d4ca_56b2_d86075a05781 -->|calls| 36b15b48_0822_029c_4a53_8243405e5a5e
    4ffa404b_88f9_d1df_3a9e_bd6d93548453["_convert_responses_chunk_to_generation_chunk()"]
    f16a8c4e_203d_d4ca_56b2_d86075a05781 -->|calls| 4ffa404b_88f9_d1df_3a9e_bd6d93548453
    6a6e1bc7_82ad_0ec6_6f76_46c87a121099["_handle_openai_bad_request()"]
    f16a8c4e_203d_d4ca_56b2_d86075a05781 -->|calls| 6a6e1bc7_82ad_0ec6_6f76_46c87a121099
    9b7290da_4511_6588_b149_1f3f5856fece["_handle_openai_api_error()"]
    f16a8c4e_203d_d4ca_56b2_d86075a05781 -->|calls| 9b7290da_4511_6588_b149_1f3f5856fece
    style f16a8c4e_203d_d4ca_56b2_d86075a05781 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/openai/langchain_openai/chat_models/base.py lines 1223–1283
async def _astream_responses(
    self,
    messages: list[BaseMessage],
    stop: list[str] | None = None,
    run_manager: AsyncCallbackManagerForLLMRun | None = None,
    **kwargs: Any,
) -> AsyncIterator[ChatGenerationChunk]:
    kwargs["stream"] = True
    payload = self._get_request_payload(messages, stop=stop, **kwargs)
    try:
        if self.include_response_headers:
            raw_context_manager = (
                await self.root_async_client.with_raw_response.responses.create(
                    **payload
                )
            )
            context_manager = raw_context_manager.parse()
            headers = {"headers": dict(raw_context_manager.headers)}
        else:
            context_manager = await self.root_async_client.responses.create(
                **payload
            )
            headers = {}
        original_schema_obj = kwargs.get("response_format")
        async with context_manager as response:
            is_first_chunk = True
            current_index = -1
            current_output_index = -1
            current_sub_index = -1
            has_reasoning = False
            async for chunk in response:
                metadata = headers if is_first_chunk else {}
                (
                    current_index,
                    current_output_index,
                    current_sub_index,
                    generation_chunk,
                ) = _convert_responses_chunk_to_generation_chunk(
                    chunk,
                    current_index,
                    current_output_index,
                    current_sub_index,
                    schema=original_schema_obj,
                    metadata=metadata,
                    has_reasoning=has_reasoning,
                    output_version=self.output_version,
                )
                if generation_chunk:
                    if run_manager:
                        await run_manager.on_llm_new_token(
                            generation_chunk.text, chunk=generation_chunk
                        )
                    is_first_chunk = False
                    if "reasoning" in generation_chunk.message.additional_kwargs:
                        has_reasoning = True
                    yield generation_chunk
    except openai.BadRequestError as e:
        _handle_openai_bad_request(e)
    except openai.APIError as e:
        _handle_openai_api_error(e)
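The streaming loop above threads three index counters through a converter on every iteration and attaches response headers only to the first emitted chunk. A minimal, self-contained sketch of this state-threading pattern follows; `convert_chunk` and the dict-shaped chunks are illustrative stand-ins, not langchain or OpenAI SDK APIs:

```python
import asyncio
from typing import Any, AsyncIterator, Optional, Tuple


def convert_chunk(
    chunk: dict, current_index: int, metadata: dict
) -> Tuple[int, Optional[dict]]:
    # Illustrative stand-in for _convert_responses_chunk_to_generation_chunk:
    # advance the running index and build an output chunk, or return None for
    # events that produce no generation (e.g. bookkeeping events).
    if chunk.get("type") != "delta":
        return current_index, None
    current_index += 1
    return current_index, {
        "text": chunk["text"],
        "index": current_index,
        "metadata": metadata,
    }


async def stream(raw_chunks: list, headers: dict) -> AsyncIterator[dict]:
    is_first_chunk = True
    current_index = -1
    for chunk in raw_chunks:
        # Headers ride along only on the first generation chunk, mirroring
        # the `metadata = headers if is_first_chunk else {}` line above.
        metadata = {"headers": headers} if is_first_chunk else {}
        current_index, generation_chunk = convert_chunk(
            chunk, current_index, metadata
        )
        if generation_chunk:
            is_first_chunk = False
            yield generation_chunk


async def main() -> list:
    raw = [
        {"type": "created"},          # produces no generation chunk
        {"type": "delta", "text": "Hel"},
        {"type": "delta", "text": "lo"},
    ]
    return [c async for c in stream(raw, {"x-request-id": "abc"})]


chunks = asyncio.run(main())
```

Note that the non-generating first event does not consume the headers: `is_first_chunk` flips only once a chunk is actually yielded, so the headers land on the first real generation.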
Frequently Asked Questions
What does _astream_responses() do?
_astream_responses() is an async generator method on BaseChatOpenAI, defined in libs/partners/openai/langchain_openai/chat_models/base.py, that streams chat output from the OpenAI Responses API. It builds the request payload, opens a streaming response (optionally capturing HTTP headers via the SDK's raw-response client), converts each raw event into a ChatGenerationChunk, notifies the run manager of new tokens, and routes OpenAI errors through dedicated handlers.
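The header-capture branch in the source can be sketched with stand-in client objects; `FakeClient`, `FakeRawResponse`, and `open_stream` below are illustrative and not part of the OpenAI SDK:

```python
import asyncio


class FakeRawResponse:
    # Stand-in for the wrapper returned by with_raw_response.*.create():
    # it exposes the HTTP headers and a parse() method that recovers the
    # parsed streaming object.
    def __init__(self, parsed, headers):
        self.headers = headers
        self._parsed = parsed

    def parse(self):
        return self._parsed


class FakeClient:
    # Stand-in for root_async_client with just the two call paths
    # exercised by _astream_responses.
    async def create(self, **payload):
        return ["parsed-stream"]

    async def create_raw(self, **payload):
        return FakeRawResponse(["parsed-stream"], {"x-request-id": "abc"})


async def open_stream(client, include_response_headers: bool, **payload):
    # Mirrors the branching in the source: the raw-response path yields
    # both the parsed stream and a headers dict; the plain path yields
    # only the stream and an empty dict.
    if include_response_headers:
        raw = await client.create_raw(**payload)
        return raw.parse(), {"headers": dict(raw.headers)}
    return await client.create(**payload), {}


stream, headers = asyncio.run(open_stream(FakeClient(), True))
```

The point of the pattern is that both branches converge on the same pair of variables (`context_manager`, `headers`), so the downstream loop never needs to know which path was taken.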
Where is _astream_responses() defined?
_astream_responses() is defined in libs/partners/openai/langchain_openai/chat_models/base.py at line 1223.
What does _astream_responses() call?
_astream_responses() calls 4 function(s): _convert_responses_chunk_to_generation_chunk, _get_request_payload, _handle_openai_api_error, _handle_openai_bad_request.
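Two of those callees are error translators invoked from the `except` clauses wrapping the generator body. That placement means errors raised mid-stream, while the caller is iterating, still pass through the handlers. A sketch of the pattern, with plain Python exception classes standing in for `openai.BadRequestError` and the handler:

```python
class BadRequestError(Exception):
    # Stand-in for openai.BadRequestError.
    pass


class ChatOpenAIError(Exception):
    # Stand-in for the translated, user-facing error type.
    pass


def handle_bad_request(e: BadRequestError) -> None:
    # Stand-in for _handle_openai_bad_request: re-raise with a clearer
    # message, chaining the original exception.
    raise ChatOpenAIError(f"bad request: {e}") from e


def generate(events):
    # The try/except wraps the whole generator body, as in
    # _astream_responses, so errors raised while iterating the stream
    # are routed through the handler rather than escaping raw.
    try:
        for event in events:
            if isinstance(event, Exception):
                raise event
            yield event
    except BadRequestError as e:
        handle_bad_request(e)


out = []
err = ""
try:
    for chunk in generate(["a", BadRequestError("boom"), "b"]):
        out.append(chunk)
except ChatOpenAIError as e:
    err = str(e)
```

The consumer receives every chunk produced before the failure (`out` holds `"a"`) and then sees the translated error, not the raw SDK exception.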
What calls _astream_responses()?
_astream_responses() is called by 1 function(s): _astream.