test_chat_result_with_reasoning_tokens() — langchain Function Reference

Architecture documentation for the test_chat_result_with_reasoning_tokens() function in test_chat_models.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  eb645f63_e23f_2db9_8cc4_849536d395f4["test_chat_result_with_reasoning_tokens()"]
  5bf2e477_37e0_3e98_4042_bc609f2f7f60["test_chat_models.py"]
  eb645f63_e23f_2db9_8cc4_849536d395f4 -->|defined in| 5bf2e477_37e0_3e98_4042_bc609f2f7f60
  style eb645f63_e23f_2db9_8cc4_849536d395f4 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/groq/tests/unit_tests/test_chat_models.py lines 578–621

def test_chat_result_with_reasoning_tokens() -> None:
    """Test that _create_chat_result properly includes reasoning tokens."""
    llm = ChatGroq(model="test-model")

    mock_response = {
        "id": "chatcmpl-123",
        "object": "chat.completion",
        "created": 1234567890,
        "model": "test-model",
        "choices": [
            {
                "index": 0,
                "message": {
                    "role": "assistant",
                    "content": "Test reasoning response",
                },
                "finish_reason": "stop",
            }
        ],
        "usage": {
            "prompt_tokens": 100,
            "completion_tokens": 450,
            "total_tokens": 550,
            "output_tokens_details": {"reasoning_tokens": 200},
        },
    }

    result = llm._create_chat_result(mock_response, {})

    assert len(result.generations) == 1
    message = result.generations[0].message
    assert isinstance(message, AIMessage)
    assert message.content == "Test reasoning response"

    assert message.usage_metadata is not None
    assert isinstance(message.usage_metadata, dict)
    assert message.usage_metadata["input_tokens"] == 100
    assert message.usage_metadata["output_tokens"] == 450
    assert message.usage_metadata["total_tokens"] == 550

    assert "output_token_details" in message.usage_metadata
    assert message.usage_metadata["output_token_details"]["reasoning"] == 200

    assert "input_token_details" not in message.usage_metadata

Frequently Asked Questions

What does test_chat_result_with_reasoning_tokens() do?
test_chat_result_with_reasoning_tokens() is a unit test for ChatGroq._create_chat_result in libs/partners/groq/tests/unit_tests/test_chat_models.py. It passes the method a mocked chat-completion payload whose usage block reports reasoning tokens, then asserts that the resulting AIMessage exposes the token counts in usage_metadata, with the reasoning count nested under output_token_details and no input_token_details key (see the usage sketch after these questions).
Where is test_chat_result_with_reasoning_tokens() defined?
test_chat_result_with_reasoning_tokens() is defined in libs/partners/groq/tests/unit_tests/test_chat_models.py at line 578.
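
In regular use, the same metadata surfaces on the AIMessage that ChatGroq returns from invoke(). The sketch below is illustrative only: "test-model" is the placeholder name taken from the test, a real call needs a valid GROQ_API_KEY and an actual Groq model, and whether a reasoning count is present depends on the model and the API response.

from langchain_groq import ChatGroq

# Illustrative usage sketch: replace "test-model" with a real Groq model name
# and export GROQ_API_KEY before running.
llm = ChatGroq(model="test-model")
ai_message = llm.invoke("Explain why the sky is blue.")

usage = ai_message.usage_metadata or {}
reasoning = usage.get("output_token_details", {}).get("reasoning")
print("input tokens:", usage.get("input_tokens"))
print("output tokens:", usage.get("output_tokens"))
print("reasoning tokens:", reasoning)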
