_construct_lc_result_from_responses_api() — langchain Function Reference

Architecture documentation for the _construct_lc_result_from_responses_api() function in base.py from the langchain codebase.

Dependency Diagram

graph TD
  06595fa5_189f_7f73_3a37_309f84e5179d["_construct_lc_result_from_responses_api()"]
  2b046911_ea21_8e2e_ba0d_9d03da8d7bda["base.py"]
  06595fa5_189f_7f73_3a37_309f84e5179d -->|defined in| 2b046911_ea21_8e2e_ba0d_9d03da8d7bda
  ab5f2298_5c1f_3b25_f23a_6b16e835aba5["_generate()"]
  ab5f2298_5c1f_3b25_f23a_6b16e835aba5 -->|calls| 06595fa5_189f_7f73_3a37_309f84e5179d
  9ad9d2b6_95a9_593f_2283_7b4495562124["_agenerate()"]
  9ad9d2b6_95a9_593f_2283_7b4495562124 -->|calls| 06595fa5_189f_7f73_3a37_309f84e5179d
  4ffa404b_88f9_d1df_3a9e_bd6d93548453["_convert_responses_chunk_to_generation_chunk()"]
  4ffa404b_88f9_d1df_3a9e_bd6d93548453 -->|calls| 06595fa5_189f_7f73_3a37_309f84e5179d
  2a873213_75a9_7f1f_1e88_61e17ed10c52["_create_usage_metadata_responses()"]
  06595fa5_189f_7f73_3a37_309f84e5179d -->|calls| 2a873213_75a9_7f1f_1e88_61e17ed10c52
  f4107f2b_20c9_97b1_124f_31046942bf15["_format_annotation_to_lc()"]
  06595fa5_189f_7f73_3a37_309f84e5179d -->|calls| f4107f2b_20c9_97b1_124f_31046942bf15
  0d743376_acb9_d505_7453_01840d66d480["_get_output_text()"]
  06595fa5_189f_7f73_3a37_309f84e5179d -->|calls| 0d743376_acb9_d505_7453_01840d66d480
  cdaeec71_b581_c482_d50c_4334d429196c["_is_pydantic_class()"]
  06595fa5_189f_7f73_3a37_309f84e5179d -->|calls| cdaeec71_b581_c482_d50c_4334d429196c
  style 06595fa5_189f_7f73_3a37_309f84e5179d fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/openai/langchain_openai/chat_models/base.py lines 4293–4459

def _construct_lc_result_from_responses_api(
    response: Response,
    schema: type[_BM] | None = None,
    metadata: dict | None = None,
    output_version: str | None = None,
) -> ChatResult:
    """Construct `ChatResult` from OpenAI Responses API response."""
    if response.error:
        raise ValueError(response.error)

    if output_version is None:
        # Sentinel value of None lets us know if output_version is set explicitly.
        # Explicitly setting `output_version="responses/v1"` separately enables the
        # Responses API.
        output_version = "responses/v1"

    response_metadata = {
        k: v
        for k, v in response.model_dump(exclude_none=True, mode="json").items()
        if k
        in (
            "created_at",
            # backwards compatibility: keep response ID in response_metadata as well as
            # top-level-id
            "id",
            "incomplete_details",
            "metadata",
            "object",
            "status",
            "user",
            "model",
            "service_tier",
        )
    }
    if metadata:
        response_metadata.update(metadata)
    # for compatibility with chat completion calls.
    response_metadata["model_provider"] = "openai"
    response_metadata["model_name"] = response_metadata.get("model")
    if response.usage:
        usage_metadata = _create_usage_metadata_responses(
            response.usage.model_dump(), response.service_tier
        )
    else:
        usage_metadata = None

    content_blocks: list = []
    tool_calls = []
    invalid_tool_calls = []
    additional_kwargs: dict = {}
    for output in response.output:
        if output.type == "message":
            for content in output.content:
                if content.type == "output_text":
                    block = {
                        "type": "text",
                        "text": content.text,
                        "annotations": [
                            _format_annotation_to_lc(annotation.model_dump())
                            for annotation in content.annotations
                        ]
                        if isinstance(content.annotations, list)
                        else [],
                        "id": output.id,
                    }
                    content_blocks.append(block)
                    if hasattr(content, "parsed"):
                        additional_kwargs["parsed"] = content.parsed
                if content.type == "refusal":
                    content_blocks.append(
                        {"type": "refusal", "refusal": content.refusal, "id": output.id}
                    )
        elif output.type == "function_call":
            content_blocks.append(output.model_dump(exclude_none=True, mode="json"))
            try:
                args = json.loads(output.arguments, strict=False)
                error = None
            except JSONDecodeError as e:
                args = output.arguments
                error = str(e)
            if error is None:
                ...  # excerpt truncated here; the full function continues to line 4459
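The `function_call` branch above parses tool-call arguments leniently and, when the model emits malformed JSON, falls back to the raw string plus an error message (so the call can later be surfaced as an invalid tool call rather than crashing). A standalone sketch of that pattern (the helper name is illustrative, not part of langchain):

```python
import json
from json import JSONDecodeError


def parse_tool_arguments(raw: str):
    """Parse tool-call arguments leniently, mirroring the branch above.

    Returns (args, error): the parsed object and None on success, or the
    raw string and the decoder's error message on failure.
    """
    try:
        # strict=False tolerates control characters inside strings,
        # which models occasionally emit in generated arguments.
        return json.loads(raw, strict=False), None
    except JSONDecodeError as e:
        return raw, str(e)


args, err = parse_tool_arguments('{"city": "Paris"}')       # well-formed
bad_args, bad_err = parse_tool_arguments('{"city": Paris}') # malformed JSON
```

Keeping the raw string on failure means no information is lost: downstream code can still show the model's output to the user or to a repair step.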
Frequently Asked Questions

What does _construct_lc_result_from_responses_api() do?
_construct_lc_result_from_responses_api() converts a raw OpenAI Responses API `Response` object into a LangChain `ChatResult`: it raises on response errors, filters response metadata down to a whitelist of fields, builds usage metadata, and translates output items (text with annotations, refusals, function calls) into message content blocks and tool calls. It is defined in libs/partners/openai/langchain_openai/chat_models/base.py.
Where is _construct_lc_result_from_responses_api() defined?
_construct_lc_result_from_responses_api() is defined in libs/partners/openai/langchain_openai/chat_models/base.py at line 4293.
What does _construct_lc_result_from_responses_api() call?
_construct_lc_result_from_responses_api() calls 4 function(s): _create_usage_metadata_responses, _format_annotation_to_lc, _get_output_text, _is_pydantic_class.
What calls _construct_lc_result_from_responses_api()?
_construct_lc_result_from_responses_api() is called by 3 function(s): _agenerate, _convert_responses_chunk_to_generation_chunk, _generate.
