_construct_responses_api_payload() — langchain Function Reference

Architecture documentation for _construct_responses_api_payload(), a helper defined in libs/partners/openai/langchain_openai/chat_models/base.py in the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  a129387a_a0b1_9985_0b43_bfc1f161529a["_construct_responses_api_payload()"]
  2b046911_ea21_8e2e_ba0d_9d03da8d7bda["base.py"]
  a129387a_a0b1_9985_0b43_bfc1f161529a -->|defined in| 2b046911_ea21_8e2e_ba0d_9d03da8d7bda
  36b15b48_0822_029c_4a53_8243405e5a5e["_get_request_payload()"]
  36b15b48_0822_029c_4a53_8243405e5a5e -->|calls| a129387a_a0b1_9985_0b43_bfc1f161529a
  b988bc7d_ceff_06f1_193c_a22abc7a149f["_construct_responses_api_input()"]
  a129387a_a0b1_9985_0b43_bfc1f161529a -->|calls| b988bc7d_ceff_06f1_193c_a22abc7a149f
  cdaeec71_b581_c482_d50c_4334d429196c["_is_pydantic_class()"]
  a129387a_a0b1_9985_0b43_bfc1f161529a -->|calls| cdaeec71_b581_c482_d50c_4334d429196c
  1b8cb178_42d5_1a67_7602_8228794247a8["_convert_to_openai_response_format()"]
  a129387a_a0b1_9985_0b43_bfc1f161529a -->|calls| 1b8cb178_42d5_1a67_7602_8228794247a8
  style a129387a_a0b1_9985_0b43_bfc1f161529a fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/openai/langchain_openai/chat_models/base.py lines 3854–3958

def _construct_responses_api_payload(
    messages: Sequence[BaseMessage], payload: dict
) -> dict:
    # Rename legacy parameters
    for legacy_token_param in ["max_tokens", "max_completion_tokens"]:
        if legacy_token_param in payload:
            payload["max_output_tokens"] = payload.pop(legacy_token_param)
    if "reasoning_effort" in payload and "reasoning" not in payload:
        payload["reasoning"] = {"effort": payload.pop("reasoning_effort")}

    # Remove temperature parameter for models that don't support it in responses API
    # gpt-5-chat supports temperature, and gpt-5 models with reasoning.effort='none'
    # also support temperature
    model = payload.get("model") or ""
    if (
        model.startswith("gpt-5")
        and ("chat" not in model)  # gpt-5-chat supports temperature
        and (payload.get("reasoning") or {}).get("effort") != "none"
    ):
        payload.pop("temperature", None)

    payload["input"] = _construct_responses_api_input(messages)
    if tools := payload.pop("tools", None):
        new_tools: list = []
        for tool in tools:
            # chat api: {"type": "function", "function": {"name": "...", "description": "...", "parameters": {...}, "strict": ...}}  # noqa: E501
            # responses api: {"type": "function", "name": "...", "description": "...", "parameters": {...}, "strict": ...}  # noqa: E501
            if tool["type"] == "function" and "function" in tool:
                new_tools.append({"type": "function", **tool["function"]})
            else:
                if tool["type"] == "image_generation":
                    # Handle partial images (not yet supported)
                    if "partial_images" in tool:
                        msg = (
                            "Partial image generation is not yet supported "
                            "via the LangChain ChatOpenAI client. Please "
                            "drop the 'partial_images' key from the image_generation "
                            "tool."
                        )
                        raise NotImplementedError(msg)
                    if payload.get("stream") and "partial_images" not in tool:
                        # OpenAI requires this parameter be set; we ignore it during
                        # streaming.
                        tool = {**tool, "partial_images": 1}
                    else:
                        pass

                new_tools.append(tool)

        payload["tools"] = new_tools
    if tool_choice := payload.pop("tool_choice", None):
        # chat api: {"type": "function", "function": {"name": "..."}}
        # responses api: {"type": "function", "name": "..."}
        if (
            isinstance(tool_choice, dict)
            and tool_choice["type"] == "function"
            and "function" in tool_choice
        ):
            payload["tool_choice"] = {"type": "function", **tool_choice["function"]}
        else:
            payload["tool_choice"] = tool_choice

    # Structured output
    if schema := payload.pop("response_format", None):
        # For pydantic + non-streaming case, we use responses.parse.
        # Otherwise, we use responses.create.
        strict = payload.pop("strict", None)
        if not payload.get("stream") and _is_pydantic_class(schema):
            payload["text_format"] = schema
        else:
            if _is_pydantic_class(schema):
                schema_dict = schema.model_json_schema()
                strict = True
            else:
                schema_dict = schema
            if schema_dict == {"type": "json_object"}:  # JSON mode
                if "text" in payload and isinstance(payload["text"], dict):
                    payload["text"]["format"] = {"type": "json_object"}
                else:
                    payload["text"] = {"format": {"type": "json_object"}}
            elif ...:  # snippet truncated here (listing covers lines 3854-3958)

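As an illustration of the tool-handling branch in the source above, here is a minimal standalone sketch of the flattening it performs. The convert_tool helper name is ours, not LangChain's; the library does this inline over payload["tools"].

```python
# Sketch of the tool-format flattening done inside
# _construct_responses_api_payload(). `convert_tool` is an
# illustrative helper name, not part of LangChain's API.

def convert_tool(tool: dict) -> dict:
    """Flatten a Chat Completions-style function tool for the Responses API."""
    # chat api nests the definition: {"type": "function", "function": {...}}
    # responses api hoists it:       {"type": "function", "name": ..., ...}
    if tool.get("type") == "function" and "function" in tool:
        return {"type": "function", **tool["function"]}
    return tool  # built-in tools (e.g. image_generation) pass through


chat_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather.",
        "parameters": {"type": "object", "properties": {}},
        "strict": True,
    },
}

print(convert_tool(chat_tool))
# {'type': 'function', 'name': 'get_weather', 'description': 'Look up current weather.',
#  'parameters': {'type': 'object', 'properties': {}}, 'strict': True}
```

The same hoisting pattern is applied to tool_choice: a Chat Completions choice of {"type": "function", "function": {"name": "..."}} becomes {"type": "function", "name": "..."}.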
Frequently Asked Questions

What does _construct_responses_api_payload() do?
_construct_responses_api_payload() converts a Chat Completions-style request payload into the shape expected by the OpenAI Responses API: it renames legacy token parameters to max_output_tokens, nests reasoning_effort under a reasoning object, builds the input field from the messages, flattens tool and tool_choice definitions, and translates response_format into the Responses API's structured-output options. It is defined in libs/partners/openai/langchain_openai/chat_models/base.py.
Where is _construct_responses_api_payload() defined?
_construct_responses_api_payload() is defined in libs/partners/openai/langchain_openai/chat_models/base.py at line 3854.
What does _construct_responses_api_payload() call?
_construct_responses_api_payload() calls three functions: _construct_responses_api_input, _convert_to_openai_response_format, and _is_pydantic_class.
What calls _construct_responses_api_payload()?
_construct_responses_api_payload() is called by one function: _get_request_payload.
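To make the parameter renaming described above concrete, here is a hedged sketch of the first step the function performs. The rename_legacy_params helper name is ours; LangChain applies the same logic inline, mutating the payload dict.

```python
# Sketch of the legacy-parameter renaming at the top of
# _construct_responses_api_payload(). `rename_legacy_params` is an
# illustrative helper, not part of LangChain's API.

def rename_legacy_params(payload: dict) -> dict:
    payload = dict(payload)  # work on a copy; the real code mutates in place
    # Chat Completions token caps become max_output_tokens
    for legacy in ("max_tokens", "max_completion_tokens"):
        if legacy in payload:
            payload["max_output_tokens"] = payload.pop(legacy)
    # A flat reasoning_effort becomes a nested reasoning object
    if "reasoning_effort" in payload and "reasoning" not in payload:
        payload["reasoning"] = {"effort": payload.pop("reasoning_effort")}
    return payload


payload = {"model": "o4-mini", "max_tokens": 256, "reasoning_effort": "low"}
print(rename_legacy_params(payload))
# {'model': 'o4-mini', 'max_output_tokens': 256, 'reasoning': {'effort': 'low'}}
```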
