test_gpt_5_1_temperature_with_reasoning_effort_none() — langchain Function Reference
Architecture documentation for the test_gpt_5_1_temperature_with_reasoning_effort_none() function in test_base.py from the langchain codebase.
Dependency Diagram
graph TD
    a576e753_00c2_8191_aacf_c8539111b275["test_gpt_5_1_temperature_with_reasoning_effort_none()"]
    48232d20_f8c1_b597_14fa_7dc407e9bfe5["test_base.py"]
    a576e753_00c2_8191_aacf_c8539111b275 -->|defined in| 48232d20_f8c1_b597_14fa_7dc407e9bfe5
    style a576e753_00c2_8191_aacf_c8539111b275 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/openai/tests/unit_tests/chat_models/test_base.py lines 3135–3191
def test_gpt_5_1_temperature_with_reasoning_effort_none(
    use_responses_api: bool,
) -> None:
    """Test that temperature is preserved when reasoning_effort is explicitly 'none'."""
    # Test with reasoning_effort='none' explicitly set
    llm = ChatOpenAI(
        model="gpt-5.1",
        temperature=0.5,
        reasoning_effort="none",
        use_responses_api=use_responses_api,
    )
    messages = [HumanMessage(content="Hello")]
    payload = llm._get_request_payload(messages)
    assert payload["temperature"] == 0.5

    # Test with reasoning={'effort': 'none'}
    llm = ChatOpenAI(
        model="gpt-5.1",
        temperature=0.5,
        reasoning={"effort": "none"},
        use_responses_api=use_responses_api,
    )
    messages = [HumanMessage(content="Hello")]
    payload = llm._get_request_payload(messages)
    assert payload["temperature"] == 0.5

    # Test that temperature is restricted by default (no reasoning_effort)
    llm = ChatOpenAI(
        model="gpt-5.1",
        temperature=0.5,
        use_responses_api=use_responses_api,
    )
    messages = [HumanMessage(content="Hello")]
    payload = llm._get_request_payload(messages)
    assert "temperature" not in payload

    # Test that temperature is still restricted when reasoning_effort is something else
    llm = ChatOpenAI(
        model="gpt-5.1",
        temperature=0.5,
        reasoning_effort="low",
        use_responses_api=use_responses_api,
    )
    messages = [HumanMessage(content="Hello")]
    payload = llm._get_request_payload(messages)
    assert "temperature" not in payload

    # Test with reasoning={'effort': 'low'}
    llm = ChatOpenAI(
        model="gpt-5.1",
        temperature=0.5,
        reasoning={"effort": "low"},
        use_responses_api=use_responses_api,
    )
    messages = [HumanMessage(content="Hello")]
    payload = llm._get_request_payload(messages)
    assert "temperature" not in payload
Frequently Asked Questions
What does test_gpt_5_1_temperature_with_reasoning_effort_none() do?
test_gpt_5_1_temperature_with_reasoning_effort_none() is a unit test in the langchain codebase, defined in libs/partners/openai/tests/unit_tests/chat_models/test_base.py. It verifies that ChatOpenAI preserves the temperature parameter for gpt-5.1 when reasoning_effort is explicitly set to "none" (via either `reasoning_effort` or `reasoning={"effort": "none"}`), and omits temperature from the request payload by default or when any other effort level is set.
Where is test_gpt_5_1_temperature_with_reasoning_effort_none() defined?
test_gpt_5_1_temperature_with_reasoning_effort_none() is defined in libs/partners/openai/tests/unit_tests/chat_models/test_base.py at line 3135.