
_handle_openai_bad_request() — langchain Function Reference

Architecture documentation for the _handle_openai_bad_request() function in libs/partners/openai/langchain_openai/chat_models/base.py in the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  dc6fe6fa_2137_b20d_926e_df1b3f5d8643["_handle_openai_bad_request()"]
  cbac6225_d16a_7d3b_a2eb_91848460cf5a["base.py"]
  dc6fe6fa_2137_b20d_926e_df1b3f5d8643 -->|defined in| cbac6225_d16a_7d3b_a2eb_91848460cf5a
  4e60455d_9607_026a_e62c_daaae8d8fd32["_stream_responses()"]
  4e60455d_9607_026a_e62c_daaae8d8fd32 -->|calls| dc6fe6fa_2137_b20d_926e_df1b3f5d8643
  ebb797f8_e145_5d6a_5451_7f2333a7f64f["_astream_responses()"]
  ebb797f8_e145_5d6a_5451_7f2333a7f64f -->|calls| dc6fe6fa_2137_b20d_926e_df1b3f5d8643
  be11bdd9_20e6_0d66_ee45_45175189364d["_stream()"]
  be11bdd9_20e6_0d66_ee45_45175189364d -->|calls| dc6fe6fa_2137_b20d_926e_df1b3f5d8643
  ebd1bfb1_67ad_e1e6_3202_04ca697dfd47["_generate()"]
  ebd1bfb1_67ad_e1e6_3202_04ca697dfd47 -->|calls| dc6fe6fa_2137_b20d_926e_df1b3f5d8643
  6786ef19_3f66_ba7c_9786_8e30addcc463["_astream()"]
  6786ef19_3f66_ba7c_9786_8e30addcc463 -->|calls| dc6fe6fa_2137_b20d_926e_df1b3f5d8643
  dcbd739a_66ae_915f_7087_234dec7749be["_agenerate()"]
  dcbd739a_66ae_915f_7087_234dec7749be -->|calls| dc6fe6fa_2137_b20d_926e_df1b3f5d8643
  style dc6fe6fa_2137_b20d_926e_df1b3f5d8643 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/openai/langchain_openai/chat_models/base.py lines 461–490

def _handle_openai_bad_request(e: openai.BadRequestError) -> None:
    if (
        "context_length_exceeded" in str(e)
        or "Input tokens exceed the configured limit" in e.message
    ):
        raise OpenAIContextOverflowError(
            message=e.message, response=e.response, body=e.body
        ) from e
    if (
        "'response_format' of type 'json_schema' is not supported with this model"
    ) in e.message:
        message = (
            "This model does not support OpenAI's structured output feature, which "
            "is the default method for `with_structured_output` as of "
            "langchain-openai==0.3. To use `with_structured_output` with this model, "
            'specify `method="function_calling"`.'
        )
        warnings.warn(message)
        raise e
    if "Invalid schema for response_format" in e.message:
        message = (
            "Invalid schema for OpenAI's structured output feature, which is the "
            "default method for `with_structured_output` as of langchain-openai==0.3. "
            'Specify `method="function_calling"` instead or update your schema. '
            "See supported schemas: "
            "https://platform.openai.com/docs/guides/structured-outputs#supported-schemas"
        )
        warnings.warn(message)
        raise e
    raise
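
The bare `raise` on the last line re-raises the currently active exception, so this helper only works when called from inside an `except openai.BadRequestError` block. The callers listed in the dependency diagram (_generate(), _stream(), and so on) presumably follow that pattern. Below is a minimal sketch of such a caller; the `client.chat.completions.create(...)` call and the surrounding function name are illustrative assumptions, not the actual langchain-openai caller code.

import openai

def _generate_sketch(client: openai.OpenAI, **kwargs):
    # Illustrative caller pattern only; the real _generate() in
    # langchain_openai/chat_models/base.py does considerably more work.
    try:
        response = client.chat.completions.create(**kwargs)
    except openai.BadRequestError as e:
        # _handle_openai_bad_request() (defined above) either raises a more
        # specific error, warns and re-raises `e`, or re-raises `e` unchanged
        # via its bare `raise`, which requires this active except block.
        _handle_openai_bad_request(e)
    return response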

Frequently Asked Questions

What does _handle_openai_bad_request() do?
_handle_openai_bad_request() inspects an openai.BadRequestError raised by the OpenAI API and translates known failure modes: context-length overflows are re-raised as OpenAIContextOverflowError; unsupported or invalid json_schema response formats emit a warning pointing to method="function_calling" before re-raising; any other error is re-raised unchanged. It is defined in libs/partners/openai/langchain_openai/chat_models/base.py.
Where is _handle_openai_bad_request() defined?
_handle_openai_bad_request() is defined in libs/partners/openai/langchain_openai/chat_models/base.py at line 461.
What calls _handle_openai_bad_request()?
_handle_openai_bad_request() is called by six functions: _agenerate(), _astream(), _astream_responses(), _generate(), _stream(), and _stream_responses().
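
Because context-length failures are converted into OpenAIContextOverflowError, code that calls ChatOpenAI can catch that specific error instead of string-matching BadRequestError messages. A minimal sketch, assuming OpenAIContextOverflowError is importable from langchain_openai.chat_models.base (the import path is an assumption) and that the prompt below is long enough to overflow the model's context window:

import openai
from langchain_openai import ChatOpenAI
# Import path is an assumption; adjust to wherever the class is exported.
from langchain_openai.chat_models.base import OpenAIContextOverflowError

llm = ChatOpenAI(model="gpt-4o-mini")
very_long_prompt = "summarize this: " + "tokens " * 200_000

try:
    llm.invoke(very_long_prompt)
except OpenAIContextOverflowError:
    # Raised by _handle_openai_bad_request() when the provider reports
    # "context_length_exceeded" or "Input tokens exceed the configured limit".
    print("Prompt too long; truncate and retry.")
except openai.BadRequestError:
    # Any other bad request is re-raised unchanged by the handler.
    raise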
