_convert_chunk_to_generation_chunk() — langchain Function Reference

Architecture documentation for the _convert_chunk_to_generation_chunk() method of ChatXAI, defined in libs/partners/xai/langchain_xai/chat_models.py in the langchain codebase.

Dependency Diagram

graph TD
  convert_chunk["_convert_chunk_to_generation_chunk()"]
  chat_xai["ChatXAI"]
  convert_chunk -->|defined in| chat_xai
  style convert_chunk fill:#6366f1,stroke:#818cf8,color:#fff
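
ChatXAI subclasses BaseChatOpenAI from langchain-openai, so the super() call in this method delegates to the OpenAI-format chunk parser before applying xAI-specific post-processing.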

Source Code

libs/partners/xai/langchain_xai/chat_models.py lines 603–648

    def _convert_chunk_to_generation_chunk(
        self,
        chunk: dict,
        default_chunk_class: type,
        base_generation_info: dict | None,
    ) -> ChatGenerationChunk | None:
        generation_chunk = super()._convert_chunk_to_generation_chunk(
            chunk,
            default_chunk_class,
            base_generation_info,
        )

        if generation_chunk:
            generation_chunk.message.response_metadata["model_provider"] = "xai"

        if (choices := chunk.get("choices")) and generation_chunk:
            top = choices[0]
            if isinstance(generation_chunk.message, AIMessageChunk) and (
                reasoning_content := top.get("delta", {}).get("reasoning_content")
            ):
                generation_chunk.message.additional_kwargs["reasoning_content"] = (
                    reasoning_content
                )

        if (
            (citations := chunk.get("citations"))
            and generation_chunk
            and isinstance(generation_chunk.message, AIMessageChunk)
            and not chunk.get("usage")  # citations are repeated in final usage chunk
        ):
            generation_chunk.message.additional_kwargs["citations"] = citations

        # Unlike OpenAI, xAI reports reasoning tokens < completion tokens. So we assume
        # they are not counted in output tokens, and we add them here.
        if (
            generation_chunk
            and (not self._use_responses_api({}))
            and (usage_metadata := generation_chunk.message.usage_metadata)  # type: ignore[attr-defined]
            and (
                reasoning_tokens := usage_metadata.get("output_token_details", {}).get(
                    "reasoning"
                )
            )
        ):
            generation_chunk.message.usage_metadata["output_tokens"] += reasoning_tokens  # type: ignore[attr-defined]
        return generation_chunk
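
The effect is easiest to see from the consumer side. Below is a minimal sketch, assuming langchain-xai is installed, XAI_API_KEY is set, and a reasoning-capable model is available (the model name is illustrative): the provider tag appears in each chunk's response_metadata, while reasoning content and citations surface in additional_kwargs.

    from langchain_xai import ChatXAI

    llm = ChatXAI(model="grok-3-mini")  # illustrative model name

    for chunk in llm.stream("What is 2 + 2?"):
        # Set by the override on every chunk it produces
        provider = chunk.response_metadata.get("model_provider")  # "xai"
        # Populated only when the xAI payload carries these fields
        reasoning = chunk.additional_kwargs.get("reasoning_content")
        citations = chunk.additional_kwargs.get("citations")
        print(provider, repr(chunk.content), reasoning, citations)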

Frequently Asked Questions

What does _convert_chunk_to_generation_chunk() do?
It converts a raw xAI streaming chunk into a ChatGenerationChunk. Parsing is delegated to the parent OpenAI-format implementation; the override then tags the message's response_metadata with model_provider "xai", copies reasoning_content and citations from the chunk into additional_kwargs (skipping citations on the final usage chunk, where xAI repeats them), and adds reasoning tokens to output_tokens in the usage metadata, since xAI does not count them toward completion tokens.
Where is _convert_chunk_to_generation_chunk() defined?
_convert_chunk_to_generation_chunk() is defined as a method of the ChatXAI class in libs/partners/xai/langchain_xai/chat_models.py, starting at line 603.
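
To make the token adjustment concrete, here is an illustrative sketch (plain data, not the library's API) of usage metadata on the final streamed chunk; the token counts are hypothetical:

    # xAI reports reasoning tokens separately from completion tokens, so the
    # override folds them into output_tokens.
    usage_metadata = {
        "input_tokens": 12,
        "output_tokens": 10,
        "output_token_details": {"reasoning": 32},
    }
    reasoning_tokens = usage_metadata["output_token_details"]["reasoning"]
    usage_metadata["output_tokens"] += reasoning_tokens  # 10 + 32 == 42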
