_convert_responses_chunk_to_generation_chunk() — langchain Function Reference

Architecture documentation for the _convert_responses_chunk_to_generation_chunk() function in base.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  4ffa404b_88f9_d1df_3a9e_bd6d93548453["_convert_responses_chunk_to_generation_chunk()"]
  2b046911_ea21_8e2e_ba0d_9d03da8d7bda["base.py"]
  4ffa404b_88f9_d1df_3a9e_bd6d93548453 -->|defined in| 2b046911_ea21_8e2e_ba0d_9d03da8d7bda
  ad60e96a_ba3b_fa9d_754c_16ef8861396c["_stream_responses()"]
  ad60e96a_ba3b_fa9d_754c_16ef8861396c -->|calls| 4ffa404b_88f9_d1df_3a9e_bd6d93548453
  f16a8c4e_203d_d4ca_56b2_d86075a05781["_astream_responses()"]
  f16a8c4e_203d_d4ca_56b2_d86075a05781 -->|calls| 4ffa404b_88f9_d1df_3a9e_bd6d93548453
  f4107f2b_20c9_97b1_124f_31046942bf15["_format_annotation_to_lc()"]
  4ffa404b_88f9_d1df_3a9e_bd6d93548453 -->|calls| f4107f2b_20c9_97b1_124f_31046942bf15
  06595fa5_189f_7f73_3a37_309f84e5179d["_construct_lc_result_from_responses_api()"]
  4ffa404b_88f9_d1df_3a9e_bd6d93548453 -->|calls| 06595fa5_189f_7f73_3a37_309f84e5179d
  style 4ffa404b_88f9_d1df_3a9e_bd6d93548453 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/openai/langchain_openai/chat_models/base.py lines 4462–4709

def _convert_responses_chunk_to_generation_chunk(
    chunk: Any,
    current_index: int,  # index in content
    current_output_index: int,  # index in Response output
    current_sub_index: int,  # index of content block in output item
    schema: type[_BM] | None = None,
    metadata: dict | None = None,
    has_reasoning: bool = False,
    output_version: str | None = None,
) -> tuple[int, int, int, ChatGenerationChunk | None]:
    def _advance(output_idx: int, sub_idx: int | None = None) -> None:
        """Advance indexes tracked during streaming.

        Example: we stream a response item of the form:

        ```python
        {
            "type": "message",  # output_index 0
            "role": "assistant",
            "id": "msg_123",
            "content": [
                {"type": "output_text", "text": "foo"},  # sub_index 0
                {"type": "output_text", "text": "bar"},  # sub_index 1
            ],
        }
        ```

        This is a single item with a shared `output_index` and two sub-indexes, one
        for each content block.

        This will be processed into an `AIMessage` with two text blocks:

        ```python
        AIMessage(
            [
                {"type": "text", "text": "foo", "id": "msg_123"},  # index 0
                {"type": "text", "text": "bar", "id": "msg_123"},  # index 1
            ]
        )
        ```

        This function just identifies updates in output or sub-indexes and increments
        the current index accordingly.
        """
        nonlocal current_index, current_output_index, current_sub_index
        if sub_idx is None:
            if current_output_index != output_idx:
                current_index += 1
        else:
            if (current_output_index != output_idx) or (current_sub_index != sub_idx):
                current_index += 1
            current_sub_index = sub_idx
        current_output_index = output_idx

    if output_version is None:
        # Sentinel value of None lets us know if output_version is set explicitly.
        # Explicitly setting `output_version="responses/v1"` separately enables the
        # Responses API.
        output_version = "responses/v1"

    content = []
    tool_call_chunks: list = []
    additional_kwargs: dict = {}
    response_metadata = metadata or {}
    response_metadata["model_provider"] = "openai"
    usage_metadata = None
    chunk_position: Literal["last"] | None = None
    id = None
    if chunk.type == "response.output_text.delta":
        _advance(chunk.output_index, chunk.content_index)
        content.append({"type": "text", "text": chunk.delta, "index": current_index})
    elif chunk.type == "response.output_text.annotation.added":
        _advance(chunk.output_index, chunk.content_index)
        if isinstance(chunk.annotation, dict):
            # Appears to be a breaking change in openai==1.82.0
            annotation = chunk.annotation
        else:
            annotation = chunk.annotation.model_dump(exclude_none=True, mode="json")

        content.append(
            {
                # … (excerpt truncated; the full function spans lines 4462–4709 of base.py)

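The inner `_advance()` helper is the core of the index bookkeeping: a new output item, or a new content block within an item, starts a new top-level content index, while repeated deltas for the same block keep the current index. The sketch below extracts that logic into a standalone function for illustration; the real helper mutates closure variables via `nonlocal`, which is modeled here with a small state dict.

```python
def make_advancer():
    """Return an advance() function plus its mutable index state.

    Stand-in for the closure state (current_index, current_output_index,
    current_sub_index) that the real _advance() helper mutates via nonlocal.
    """
    state = {"index": -1, "output_index": -1, "sub_index": -1}

    def advance(output_idx, sub_idx=None):
        if sub_idx is None:
            # Item-level event: only a change of output item bumps the index.
            if state["output_index"] != output_idx:
                state["index"] += 1
        else:
            # Block-level event: a new item OR a new content block bumps it.
            if state["output_index"] != output_idx or state["sub_index"] != sub_idx:
                state["index"] += 1
            state["sub_index"] = sub_idx
        state["output_index"] = output_idx

    return advance, state


advance, state = make_advancer()
advance(0, 0)  # first text block of the first output item -> index 0
advance(0, 0)  # another delta for the same block -> index stays 0
advance(0, 1)  # second content block, same item -> index 1
advance(1)     # next output item -> index 2
```

Matching the docstring example above: both `output_text` blocks share `output_index` 0 but get distinct sub-indexes, so they land in the assembled `AIMessage` content list as indexes 0 and 1.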
Frequently Asked Questions

What does _convert_responses_chunk_to_generation_chunk() do?
_convert_responses_chunk_to_generation_chunk() converts a single streaming event from the OpenAI Responses API into a LangChain ChatGenerationChunk, tracking the current content, output, and sub-block indexes across calls. It is defined in libs/partners/openai/langchain_openai/chat_models/base.py in the langchain codebase.
Where is _convert_responses_chunk_to_generation_chunk() defined?
_convert_responses_chunk_to_generation_chunk() is defined in libs/partners/openai/langchain_openai/chat_models/base.py at line 4462.
What does _convert_responses_chunk_to_generation_chunk() call?
_convert_responses_chunk_to_generation_chunk() calls two functions: _construct_lc_result_from_responses_api() and _format_annotation_to_lc().
What calls _convert_responses_chunk_to_generation_chunk()?
_convert_responses_chunk_to_generation_chunk() is called by two functions: _astream_responses() and _stream_responses().
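The docstring's foo/bar example can be replayed end to end. The sketch below feeds simplified stand-in events (not the real openai streaming event classes) through the same index-advancing rule and accumulates deltas per index, reproducing the two text blocks the docstring describes.

```python
from types import SimpleNamespace

# Simplified stand-ins for response.output_text.delta streaming events.
events = [
    SimpleNamespace(type="response.output_text.delta", output_index=0, content_index=0, delta="fo"),
    SimpleNamespace(type="response.output_text.delta", output_index=0, content_index=0, delta="o"),
    SimpleNamespace(type="response.output_text.delta", output_index=0, content_index=1, delta="bar"),
]

current_index = -1
current_output_index = -1
current_sub_index = -1
blocks: dict[int, str] = {}

for ev in events:
    if ev.type == "response.output_text.delta":
        # Same rule as _advance(): a new item or new content block bumps the index.
        if current_output_index != ev.output_index or current_sub_index != ev.content_index:
            current_index += 1
        current_sub_index = ev.content_index
        current_output_index = ev.output_index
        blocks[current_index] = blocks.get(current_index, "") + ev.delta

content = [{"type": "text", "text": t, "index": i} for i, t in sorted(blocks.items())]
# content -> [{"type": "text", "text": "foo", "index": 0},
#             {"type": "text", "text": "bar", "index": 1}]
```

In the real streaming path, _stream_responses() and _astream_responses() thread the three counters through repeated calls via the `tuple[int, int, int, ChatGenerationChunk | None]` return value shown in the signature above.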
