test_usage_metadata_streaming() — langchain Function Reference

Architecture documentation for the test_usage_metadata_streaming() function in chat_models.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  test_usage_metadata_streaming["test_usage_metadata_streaming()"]
  ChatModelIntegrationTests["ChatModelIntegrationTests"]
  test_usage_metadata_streaming -->|defined in| ChatModelIntegrationTests
  invoke_with_audio_input["invoke_with_audio_input()"]
  test_usage_metadata_streaming -->|calls| invoke_with_audio_input
  invoke_with_audio_output["invoke_with_audio_output()"]
  test_usage_metadata_streaming -->|calls| invoke_with_audio_output
  invoke_with_reasoning_output["invoke_with_reasoning_output()"]
  test_usage_metadata_streaming -->|calls| invoke_with_reasoning_output
  invoke_with_cache_read_input["invoke_with_cache_read_input()"]
  test_usage_metadata_streaming -->|calls| invoke_with_cache_read_input
  invoke_with_cache_creation_input["invoke_with_cache_creation_input()"]
  test_usage_metadata_streaming -->|calls| invoke_with_cache_creation_input
  style test_usage_metadata_streaming fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/standard-tests/langchain_tests/integration_tests/chat_models.py lines 1345–1512

    def test_usage_metadata_streaming(self, model: BaseChatModel) -> None:
        """Test usage metadata in streaming mode.

        Test to verify that the model returns correct usage metadata in streaming mode.

        !!! warning "Behavior changed in `langchain-tests` 0.3.17"

            Additionally check for the presence of `model_name` in the response
            metadata, which is needed for usage tracking in callback handlers.

        ??? note "Configuration"

            By default, this test is run.
            To disable this feature, set `returns_usage_metadata` to `False` in your
            test class:

            ```python
            class TestMyChatModelIntegration(ChatModelIntegrationTests):
                @property
                def returns_usage_metadata(self) -> bool:
                    return False
            ```

            This test can also check the format of specific kinds of usage metadata
            based on the `supported_usage_metadata_details` property.

            This property should be configured as follows with the types of tokens that
            the model supports tracking:

            ```python
            class TestMyChatModelIntegration(ChatModelIntegrationTests):
                @property
                def supported_usage_metadata_details(self) -> dict:
                    return {
                        "invoke": [
                            "audio_input",
                            "audio_output",
                            "reasoning_output",
                            "cache_read_input",
                            "cache_creation_input",
                        ],
                        "stream": [
                            "audio_input",
                            "audio_output",
                            "reasoning_output",
                            "cache_read_input",
                            "cache_creation_input",
                        ],
                    }
            ```

        ??? question "Troubleshooting"

            If this test fails, first verify that your model yields
            `langchain_core.messages.ai.UsageMetadata` dicts attached to the
            `AIMessageChunk` objects yielded from `_stream`, and that these
            chunk values sum to the total usage metadata.

            Note that `input_tokens` should only be included on one of the chunks
            (typically the first or the last chunk), and the rest should have `0` or
            `None` to avoid counting input tokens multiple times.

            `output_tokens` typically counts the tokens in each individual
            chunk, not a running sum. This test passes as long as the sum of
            `output_tokens` across all chunks is not `0`.

            ```python
            yield ChatResult(
                generations=[
                    ChatGeneration(
                        message=AIMessage(
                            content="Output text",
                            usage_metadata={
                                "input_tokens": (
                                    num_input_tokens if is_first_chunk else 0
                                ),
                                "output_tokens": 11,
                                "total_tokens": (
                                    11 + num_input_tokens if is_first_chunk else 11
                                ),
                                "input_token_details": {

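The chunk-summing invariant described in the troubleshooting note above can be illustrated without any langchain dependency. In this sketch, plain dicts stand in for `UsageMetadata`, and `sum_usage_metadata` is a hypothetical helper for illustration, not part of the library:

```python
# Hypothetical sketch: plain dicts stand in for UsageMetadata.
# Invariant: input_tokens appears on exactly one chunk, while
# output_tokens are per-chunk counts whose sum must be nonzero.

def sum_usage_metadata(chunks: list[dict]) -> dict:
    """Aggregate per-chunk usage metadata, treating None as 0."""
    total = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
    for usage in chunks:
        for key in total:
            total[key] += usage.get(key) or 0
    return total

# Three chunks: only the first carries input_tokens; the others
# report 0 or None so input tokens are not double-counted.
chunks = [
    {"input_tokens": 7, "output_tokens": 4, "total_tokens": 11},
    {"input_tokens": 0, "output_tokens": 4, "total_tokens": 4},
    {"input_tokens": None, "output_tokens": 3, "total_tokens": 3},
]

total = sum_usage_metadata(chunks)
assert total["input_tokens"] == 7    # counted exactly once
assert total["output_tokens"] == 11  # nonzero sum across chunks
```

A provider that instead reported the full input token count on every chunk would fail this aggregation, which is exactly what the test guards against.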
Frequently Asked Questions

What does test_usage_metadata_streaming() do?
test_usage_metadata_streaming() is a method of ChatModelIntegrationTests, defined in libs/standard-tests/langchain_tests/integration_tests/chat_models.py, that verifies a chat model returns correct usage metadata in streaming mode.
Where is test_usage_metadata_streaming() defined?
test_usage_metadata_streaming() is defined in libs/standard-tests/langchain_tests/integration_tests/chat_models.py at line 1345.
What does test_usage_metadata_streaming() call?
test_usage_metadata_streaming() calls five functions: invoke_with_audio_input, invoke_with_audio_output, invoke_with_cache_creation_input, invoke_with_cache_read_input, invoke_with_reasoning_output.
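The call relationship in the answer above can be sketched as a dispatch table keyed by detail type. The helper names mirror the dependency diagram; `run_detail_checks` and the stub bodies are hypothetical stand-ins for illustration only:

```python
# Hypothetical stubs standing in for the real invoke_with_* helpers,
# which exercise the model in the actual integration test.
def invoke_with_audio_input():
    return "audio_input"

def invoke_with_audio_output():
    return "audio_output"

def invoke_with_reasoning_output():
    return "reasoning_output"

def invoke_with_cache_read_input():
    return "cache_read_input"

def invoke_with_cache_creation_input():
    return "cache_creation_input"

# One helper per supported usage-metadata detail type.
HELPERS = {
    "audio_input": invoke_with_audio_input,
    "audio_output": invoke_with_audio_output,
    "reasoning_output": invoke_with_reasoning_output,
    "cache_read_input": invoke_with_cache_read_input,
    "cache_creation_input": invoke_with_cache_creation_input,
}

def run_detail_checks(supported: dict) -> list[str]:
    """Run only the checks listed under the "stream" key."""
    return [HELPERS[d]() for d in supported.get("stream", []) if d in HELPERS]

checked = run_detail_checks({"stream": ["audio_input", "reasoning_output"]})
```

This mirrors how the `supported_usage_metadata_details` property gates which detail checks run: omitting a detail type from the list simply skips its helper.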