test_flex_usage_responses() — langchain Function Reference
Architecture documentation for the test_flex_usage_responses() function in test_base.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    35f70d16_d39c_9cf2_41ba_e88950e9345a["test_flex_usage_responses()"]
    bd382a4e_442c_13ae_530c_6e34bc43623d["test_base.py"]
    35f70d16_d39c_9cf2_41ba_e88950e9345a -->|defined in| bd382a4e_442c_13ae_530c_6e34bc43623d
    style 35f70d16_d39c_9cf2_41ba_e88950e9345a fill:#6366f1,stroke:#818cf8,color:#fff
Relationship Graph
Source Code
libs/partners/openai/tests/integration_tests/chat_models/test_base.py lines 395–413
def test_flex_usage_responses(streaming: bool) -> None:
    llm = ChatOpenAI(
        model="gpt-5-nano",
        service_tier="flex",
        max_retries=3,
        use_responses_api=True,
        streaming=streaming,
    )
    result = llm.invoke("Hello")
    assert result.usage_metadata
    flex_input = result.usage_metadata.get("input_token_details", {}).get("flex")
    flex_output = result.usage_metadata.get("output_token_details", {}).get("flex")
    flex_reasoning = result.usage_metadata.get("output_token_details", {}).get(
        "flex_reasoning"
    )
    assert isinstance(flex_input, int)
    assert isinstance(flex_output, int)
    assert isinstance(flex_reasoning, int)
    assert flex_output + flex_reasoning == result.usage_metadata.get("output_tokens")
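The accounting the test enforces can be checked offline against a hand-built usage_metadata dict. This is a minimal sketch: the dict shape and its token counts are assumptions that mirror only the fields the test reads, not a real response from the OpenAI Responses API.

```python
# Hypothetical usage_metadata, shaped like the fields the test inspects.
# In the real test this comes from llm.invoke(...).usage_metadata.
usage_metadata = {
    "input_tokens": 8,
    "output_tokens": 42,
    "input_token_details": {"flex": 8},
    "output_token_details": {"flex": 30, "flex_reasoning": 12},
}

# Same lookups as the test: missing keys yield None rather than raising.
flex_input = usage_metadata.get("input_token_details", {}).get("flex")
flex_output = usage_metadata.get("output_token_details", {}).get("flex")
flex_reasoning = usage_metadata.get("output_token_details", {}).get("flex_reasoning")

assert isinstance(flex_input, int)
assert isinstance(flex_output, int)
assert isinstance(flex_reasoning, int)
# Core invariant: flex output plus flex reasoning tokens account
# for every output token.
assert flex_output + flex_reasoning == usage_metadata["output_tokens"]
```

The `.get(..., {})` chaining is what lets the test degrade to a plain `assert isinstance(..., int)` failure, rather than a `KeyError`, when the provider omits a detail bucket.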
Frequently Asked Questions
What does test_flex_usage_responses() do?
test_flex_usage_responses() is an integration test in the langchain codebase, defined in libs/partners/openai/tests/integration_tests/chat_models/test_base.py. It invokes ChatOpenAI with service_tier="flex" and use_responses_api=True, then asserts that flex token counts appear in usage_metadata and that the flex output tokens plus flex reasoning tokens equal the total output tokens.
Where is test_flex_usage_responses() defined?
test_flex_usage_responses() is defined in libs/partners/openai/tests/integration_tests/chat_models/test_base.py at line 395.