_create_usage_metadata() — langchain Function Reference
Architecture documentation for the _create_usage_metadata() function in base.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    create_usage_metadata["_create_usage_metadata()"]
    base_py["base.py"]
    create_usage_metadata -->|defined in| base_py
    convert_chunk_to_generation_chunk["_convert_chunk_to_generation_chunk()"]
    convert_chunk_to_generation_chunk -->|calls| create_usage_metadata
    create_chat_result["_create_chat_result()"]
    create_chat_result -->|calls| create_usage_metadata
    style create_usage_metadata fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/openai/langchain_openai/chat_models/base.py lines 3723–3767
def _create_usage_metadata(
    oai_token_usage: dict, service_tier: str | None = None
) -> UsageMetadata:
    input_tokens = oai_token_usage.get("prompt_tokens") or 0
    output_tokens = oai_token_usage.get("completion_tokens") or 0
    total_tokens = oai_token_usage.get("total_tokens") or input_tokens + output_tokens
    if service_tier not in {"priority", "flex"}:
        service_tier = None
    service_tier_prefix = f"{service_tier}_" if service_tier else ""
    input_token_details: dict = {
        "audio": (oai_token_usage.get("prompt_tokens_details") or {}).get(
            "audio_tokens"
        ),
        f"{service_tier_prefix}cache_read": (
            oai_token_usage.get("prompt_tokens_details") or {}
        ).get("cached_tokens"),
    }
    output_token_details: dict = {
        "audio": (oai_token_usage.get("completion_tokens_details") or {}).get(
            "audio_tokens"
        ),
        f"{service_tier_prefix}reasoning": (
            oai_token_usage.get("completion_tokens_details") or {}
        ).get("reasoning_tokens"),
    }
    if service_tier is not None:
        # Avoid counting cache and reasoning tokens towards the service tier token
        # counts, since service tier tokens are already priced differently
        input_token_details[service_tier] = input_tokens - input_token_details.get(
            f"{service_tier_prefix}cache_read", 0
        )
        output_token_details[service_tier] = output_tokens - output_token_details.get(
            f"{service_tier_prefix}reasoning", 0
        )
    return UsageMetadata(
        input_tokens=input_tokens,
        output_tokens=output_tokens,
        total_tokens=total_tokens,
        input_token_details=InputTokenDetails(
            **{k: v for k, v in input_token_details.items() if v is not None}
        ),
        output_token_details=OutputTokenDetails(
            **{k: v for k, v in output_token_details.items() if v is not None}
        ),
    )
Frequently Asked Questions
What does _create_usage_metadata() do?
_create_usage_metadata() converts the token-usage dict returned by the OpenAI API into a LangChain UsageMetadata object. It reads prompt_tokens, completion_tokens, and total_tokens (computing the total from the parts if absent), breaks out audio, cached-prompt, and reasoning token counts into input_token_details and output_token_details, and, for the "priority" and "flex" service tiers, prefixes the detail keys with the tier name and records tier-specific counts that exclude cached and reasoning tokens, since those are priced differently.
Where is _create_usage_metadata() defined?
_create_usage_metadata() is defined in libs/partners/openai/langchain_openai/chat_models/base.py at line 3723.
What calls _create_usage_metadata()?
_create_usage_metadata() is called by two functions: _convert_chunk_to_generation_chunk() and _create_chat_result().
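For context, a hedged sketch of how a caller such as _create_chat_result() might feed the function. The variable names response and usage_metadata below are illustrative, not the exact variables used in base.py; assume response is a chat completion already parsed to a dict.
# Hypothetical wiring: extract the raw usage dict and the service tier
# from the parsed OpenAI response, then normalize them.
token_usage = response.get("usage") or {}
usage_metadata = (
    _create_usage_metadata(token_usage, service_tier=response.get("service_tier"))
    if token_usage
    else None
)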