_create_chat_result() — langchain Function Reference

Architecture documentation for the _create_chat_result() function in chat_models.py from the langchain codebase.

Dependency Diagram

graph TD
  25e7df35_8784_243e_a1d0_b4b85e223bad["_create_chat_result()"]
  44814818_ed14_7dba_0cd5_a8f2cd67fb61["ChatXAI"]
  25e7df35_8784_243e_a1d0_b4b85e223bad -->|defined in| 44814818_ed14_7dba_0cd5_a8f2cd67fb61
  style 25e7df35_8784_243e_a1d0_b4b85e223bad fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/xai/langchain_xai/chat_models.py lines 563–601

    def _create_chat_result(
        self,
        response: dict | openai.BaseModel,
        generation_info: dict | None = None,
    ) -> ChatResult:
        rtn = super()._create_chat_result(response, generation_info)

        for generation in rtn.generations:
            generation.message.response_metadata["model_provider"] = "xai"

        if not isinstance(response, openai.BaseModel):
            return rtn

        if hasattr(response.choices[0].message, "reasoning_content"):  # type: ignore[attr-defined]
            rtn.generations[0].message.additional_kwargs["reasoning_content"] = (
                response.choices[0].message.reasoning_content  # type: ignore[attr-defined]
            )

        if hasattr(response, "citations"):
            rtn.generations[0].message.additional_kwargs["citations"] = (
                response.citations
            )

        # Unlike OpenAI, xAI reports reasoning tokens < completion tokens. So we assume
        # they are not counted in output tokens, and we add them here.
        if (
            (not self._use_responses_api({}))
            and (usage_metadata := rtn.generations[0].message.usage_metadata)  # type: ignore[attr-defined]
            and (
                reasoning_tokens := usage_metadata.get("output_token_details", {}).get(
                    "reasoning"
                )
            )
        ):
            rtn.generations[0].message.usage_metadata["output_tokens"] += (  # type: ignore[attr-defined]
                reasoning_tokens
            )

        return rtn
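The token-accounting step at the end of the method can be sketched in isolation. The following is a minimal, hypothetical reproduction of that adjustment using a plain dict in place of langchain's `usage_metadata` structure; `adjust_output_tokens` is an illustrative helper, not part of the library.

```python
def adjust_output_tokens(usage_metadata: dict) -> dict:
    """Sketch of the adjustment in _create_chat_result().

    xAI reports reasoning tokens separately and does not include them in
    the completion total, so they are added to output_tokens here.
    """
    reasoning = usage_metadata.get("output_token_details", {}).get("reasoning")
    if reasoning:
        usage_metadata = dict(usage_metadata)  # shallow copy; leave input intact
        usage_metadata["output_tokens"] += reasoning
    return usage_metadata


# Example: 10 visible completion tokens plus 32 separately reported
# reasoning tokens yield 42 output tokens after the adjustment.
usage = {
    "input_tokens": 12,
    "output_tokens": 10,
    "total_tokens": 54,
    "output_token_details": {"reasoning": 32},
}
print(adjust_output_tokens(usage)["output_tokens"])  # 42
```

When `output_token_details.reasoning` is absent or zero, the metadata passes through unchanged, mirroring the walrus-guarded condition in the real method.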

Frequently Asked Questions

What does _create_chat_result() do?
_create_chat_result() post-processes a raw chat completion into a ChatResult. It delegates to the OpenAI base implementation, tags each generation's response_metadata with model_provider set to "xai", copies xAI-specific fields (reasoning_content, citations) into the message's additional_kwargs when present, and, because xAI reports reasoning tokens separately from completion tokens, adds them to the output token count.
Where is _create_chat_result() defined?
_create_chat_result() is defined in libs/partners/xai/langchain_xai/chat_models.py at line 563.
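The two `hasattr` checks in the method body can also be sketched on their own. This hypothetical helper mimics how xAI-specific fields are surfaced into `additional_kwargs`, using `SimpleNamespace` objects to stand in for the parsed `openai.BaseModel` response; `copy_xai_extras` is illustrative and not part of the library.

```python
from types import SimpleNamespace


def copy_xai_extras(response, additional_kwargs: dict) -> dict:
    """Sketch of how _create_chat_result() surfaces xAI-specific fields.

    reasoning_content lives on the first choice's message; citations live
    on the top-level response. Both are optional, hence the hasattr guards.
    """
    msg = response.choices[0].message
    if hasattr(msg, "reasoning_content"):
        additional_kwargs["reasoning_content"] = msg.reasoning_content
    if hasattr(response, "citations"):
        additional_kwargs["citations"] = response.citations
    return additional_kwargs


# Mock response carrying both optional fields
msg = SimpleNamespace(reasoning_content="step-by-step reasoning")
resp = SimpleNamespace(
    choices=[SimpleNamespace(message=msg)],
    citations=["https://example.com"],
)
print(copy_xai_extras(resp, {}))
```

A response lacking both fields leaves `additional_kwargs` untouched, matching the guarded behavior in the source above.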
