_stream_responses() — langchain Function Reference

Architecture documentation for the _stream_responses() function in base.py from the langchain codebase.

Dependency Diagram

    graph TD
      stream_responses["_stream_responses()"]
      base_chat_openai["BaseChatOpenAI"]
      stream_responses -->|defined in| base_chat_openai
      stream["_stream()"]
      stream -->|calls| stream_responses
      ensure_sync_client_available["_ensure_sync_client_available()"]
      stream_responses -->|calls| ensure_sync_client_available
      get_request_payload["_get_request_payload()"]
      stream_responses -->|calls| get_request_payload
      convert_responses_chunk_to_generation_chunk["_convert_responses_chunk_to_generation_chunk()"]
      stream_responses -->|calls| convert_responses_chunk_to_generation_chunk
      handle_openai_bad_request["_handle_openai_bad_request()"]
      stream_responses -->|calls| handle_openai_bad_request
      handle_openai_api_error["_handle_openai_api_error()"]
      stream_responses -->|calls| handle_openai_api_error
      style stream_responses fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/openai/langchain_openai/chat_models/base.py lines 1164–1221

    def _stream_responses(
        self,
        messages: list[BaseMessage],
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        self._ensure_sync_client_available()
        kwargs["stream"] = True
        payload = self._get_request_payload(messages, stop=stop, **kwargs)
        try:
            if self.include_response_headers:
                raw_context_manager = (
                    self.root_client.with_raw_response.responses.create(**payload)
                )
                context_manager = raw_context_manager.parse()
                headers = {"headers": dict(raw_context_manager.headers)}
            else:
                context_manager = self.root_client.responses.create(**payload)
                headers = {}
            original_schema_obj = kwargs.get("response_format")

            with context_manager as response:
                is_first_chunk = True
                current_index = -1
                current_output_index = -1
                current_sub_index = -1
                has_reasoning = False
                for chunk in response:
                    metadata = headers if is_first_chunk else {}
                    (
                        current_index,
                        current_output_index,
                        current_sub_index,
                        generation_chunk,
                    ) = _convert_responses_chunk_to_generation_chunk(
                        chunk,
                        current_index,
                        current_output_index,
                        current_sub_index,
                        schema=original_schema_obj,
                        metadata=metadata,
                        has_reasoning=has_reasoning,
                        output_version=self.output_version,
                    )
                    if generation_chunk:
                        if run_manager:
                            run_manager.on_llm_new_token(
                                generation_chunk.text, chunk=generation_chunk
                            )
                        is_first_chunk = False
                        if "reasoning" in generation_chunk.message.additional_kwargs:
                            has_reasoning = True
                        yield generation_chunk
        except openai.BadRequestError as e:
            _handle_openai_bad_request(e)
        except openai.APIError as e:
            _handle_openai_api_error(e)

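_stream_responses() is a private method and is not called directly; it runs underneath the public streaming interface. Below is a minimal usage sketch, assuming a ChatOpenAI model configured for the Responses API (the model name and prompt are placeholders, and the use_responses_api routing is inferred from the call graph above):

    from langchain_openai import ChatOpenAI

    # Assumed setup: use_responses_api=True makes _stream() route streaming
    # requests through _stream_responses().
    llm = ChatOpenAI(model="gpt-4o-mini", use_responses_api=True)

    # Each yielded chunk is built from the ChatGenerationChunk objects that
    # _stream_responses() yields; chunk.content may be a string or a list of
    # content blocks depending on output_version.
    for chunk in llm.stream("Summarize the Responses API in one sentence."):
        print(chunk.content, end="", flush=True)

Note that _stream_responses() forces kwargs["stream"] = True before building the payload, so every request on this path is a streaming request.
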
Called By

_stream() is the only caller: the synchronous streaming method on BaseChatOpenAI delegates to _stream_responses() for Responses API requests.

Frequently Asked Questions

What does _stream_responses() do?
_stream_responses() streams a chat completion from the OpenAI Responses API. It builds the request payload, opens a streaming response, converts each streamed event into a ChatGenerationChunk, reports new tokens to the run manager, and yields the chunks to the caller. When include_response_headers is set, the HTTP response headers are attached to the first chunk's metadata, and OpenAI BadRequestError and APIError exceptions are routed through dedicated handlers.
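As a hedged sketch of how the first-chunk header metadata could look from the caller's side (where the headers land in response_metadata is an assumption based on the metadata= argument in the listing above):

    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(
        model="gpt-4o-mini",  # placeholder model
        use_responses_api=True,
        include_response_headers=True,
    )

    # Assumption: the {"headers": {...}} dict built by _stream_responses()
    # surfaces on the first streamed chunk only; later chunks carry empty
    # metadata.
    first_chunk = next(llm.stream("ping"))
    print(first_chunk.response_metadata.get("headers"))
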
Where is _stream_responses() defined?
_stream_responses() is defined in libs/partners/openai/langchain_openai/chat_models/base.py at line 1164.
What does _stream_responses() call?
_stream_responses() calls five functions: _convert_responses_chunk_to_generation_chunk(), _ensure_sync_client_available(), _get_request_payload(), _handle_openai_api_error(), and _handle_openai_bad_request().
What calls _stream_responses()?
_stream_responses() is called by one function: _stream().
