_create_usage_metadata_responses() — langchain Function Reference
Architecture documentation for the _create_usage_metadata_responses() function in base.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    2a873213_75a9_7f1f_1e88_61e17ed10c52["_create_usage_metadata_responses()"]
    2b046911_ea21_8e2e_ba0d_9d03da8d7bda["base.py"]
    2a873213_75a9_7f1f_1e88_61e17ed10c52 -->|defined in| 2b046911_ea21_8e2e_ba0d_9d03da8d7bda
    06595fa5_189f_7f73_3a37_309f84e5179d["_construct_lc_result_from_responses_api()"]
    06595fa5_189f_7f73_3a37_309f84e5179d -->|calls| 2a873213_75a9_7f1f_1e88_61e17ed10c52
    style 2a873213_75a9_7f1f_1e88_61e17ed10c52 fill:#6366f1,stroke:#818cf8,color:#fff
Relationship Graph
Source Code
libs/partners/openai/langchain_openai/chat_models/base.py lines 3770–3808
def _create_usage_metadata_responses(
    oai_token_usage: dict, service_tier: str | None = None
) -> UsageMetadata:
    input_tokens = oai_token_usage.get("input_tokens", 0)
    output_tokens = oai_token_usage.get("output_tokens", 0)
    total_tokens = oai_token_usage.get("total_tokens", input_tokens + output_tokens)
    if service_tier not in {"priority", "flex"}:
        service_tier = None
    service_tier_prefix = f"{service_tier}_" if service_tier else ""
    output_token_details: dict = {
        f"{service_tier_prefix}reasoning": (
            oai_token_usage.get("output_tokens_details") or {}
        ).get("reasoning_tokens")
    }
    input_token_details: dict = {
        f"{service_tier_prefix}cache_read": (
            oai_token_usage.get("input_tokens_details") or {}
        ).get("cached_tokens")
    }
    if service_tier is not None:
        # Avoid counting cache and reasoning tokens towards the service tier token
        # counts, since service tier tokens are already priced differently
        output_token_details[service_tier] = output_tokens - output_token_details.get(
            f"{service_tier_prefix}reasoning", 0
        )
        input_token_details[service_tier] = input_tokens - input_token_details.get(
            f"{service_tier_prefix}cache_read", 0
        )
    return UsageMetadata(
        input_tokens=input_tokens,
        output_tokens=output_tokens,
        total_tokens=total_tokens,
        input_token_details=InputTokenDetails(
            **{k: v for k, v in input_token_details.items() if v is not None}
        ),
        output_token_details=OutputTokenDetails(
            **{k: v for k, v in output_token_details.items() if v is not None}
        ),
    )
Frequently Asked Questions
What does _create_usage_metadata_responses() do?
_create_usage_metadata_responses() converts the token-usage dict returned by the OpenAI Responses API into a LangChain UsageMetadata object. It extracts the input, output, and total token counts, records reasoning-token and cached-token details, and, for the "priority" and "flex" service tiers, reports tier-specific token counts that exclude reasoning and cached tokens (which are priced separately). It is defined in libs/partners/openai/langchain_openai/chat_models/base.py.
Where is _create_usage_metadata_responses() defined?
_create_usage_metadata_responses() is defined in libs/partners/openai/langchain_openai/chat_models/base.py at line 3770.
What calls _create_usage_metadata_responses()?
_create_usage_metadata_responses() is called by one function: _construct_lc_result_from_responses_api().