test_reasoning_modes_behavior() — langchain Function Reference

Architecture documentation for the test_reasoning_modes_behavior() function in test_chat_models_reasoning.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  0f1607dd_f69a_9e45_c553_caae8f54a9e6["test_reasoning_modes_behavior()"]
  5a5c2d7b_4823_4697_a3e1_c5e1c3fce238["test_chat_models_reasoning.py"]
  0f1607dd_f69a_9e45_c553_caae8f54a9e6 -->|defined in| 5a5c2d7b_4823_4697_a3e1_c5e1c3fce238
  style 0f1607dd_f69a_9e45_c553_caae8f54a9e6 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/ollama/tests/integration_tests/chat_models/test_chat_models_reasoning.py lines 181–226

def test_reasoning_modes_behavior(model: str) -> None:
    """Test the behavior differences between reasoning modes.

    This test documents how the Ollama API and LangChain handle reasoning content
    for DeepSeek R1 models across different reasoning settings.

    Current Ollama API behavior:
    - Ollama automatically separates reasoning content into a 'thinking' field
    - No <think> tags are present in responses
    - `think=False` prevents the 'thinking' field from being included
    - `think=None` includes the 'thinking' field (model default)
    - `think=True` explicitly requests the 'thinking' field

    LangChain behavior:
    - `reasoning=False`: Does not capture reasoning content
    - `reasoning=None`: Does not capture reasoning content (model default behavior)
    - `reasoning=True`: Captures reasoning in `additional_kwargs['reasoning_content']`
    """
    message = HumanMessage(content=SAMPLE)

    # Test with reasoning=None (model default - no reasoning captured)
    llm_default = ChatOllama(model=model, reasoning=None, num_ctx=2**12)
    result_default = llm_default.invoke([message])
    assert result_default.content
    assert "<think>" not in result_default.content
    assert "</think>" not in result_default.content
    assert "reasoning_content" not in result_default.additional_kwargs

    # Test with reasoning=False (explicit disable - no reasoning captured)
    llm_disabled = ChatOllama(model=model, reasoning=False, num_ctx=2**12)
    result_disabled = llm_disabled.invoke([message])
    assert result_disabled.content
    assert "<think>" not in result_disabled.content
    assert "</think>" not in result_disabled.content
    assert "reasoning_content" not in result_disabled.additional_kwargs

    # Test with reasoning=True (reasoning captured separately)
    llm_enabled = ChatOllama(model=model, reasoning=True, num_ctx=2**12)
    result_enabled = llm_enabled.invoke([message])
    assert result_enabled.content
    assert "<think>" not in result_enabled.content
    assert "</think>" not in result_enabled.content
    assert "reasoning_content" in result_enabled.additional_kwargs
    assert len(result_enabled.additional_kwargs["reasoning_content"]) > 0
    assert "<think>" not in result_enabled.additional_kwargs["reasoning_content"]
    assert "</think>" not in result_enabled.additional_kwargs["reasoning_content"]
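
Usage Sketch

For use outside the test harness, the snippet below is a minimal sketch built from the assertions above. It assumes langchain-ollama is installed, a local Ollama server is running, and a DeepSeek R1 model has been pulled; the model tag and prompt are illustrative, not taken from the test.

from langchain_core.messages import HumanMessage
from langchain_ollama import ChatOllama

# reasoning=True asks Ollama for the 'thinking' field and surfaces it in
# additional_kwargs['reasoning_content']; the visible content stays free
# of <think> tags, matching the assertions in the test above.
llm = ChatOllama(model="deepseek-r1:8b", reasoning=True, num_ctx=2**12)
result = llm.invoke([HumanMessage(content="What is 12 * 7?")])

print(result.content)  # final answer only
print(result.additional_kwargs.get("reasoning_content", ""))  # captured reasoning

With reasoning=False or reasoning=None, the same call returns content but no reasoning_content key, as the first two blocks of the test assert.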

Frequently Asked Questions

What does test_reasoning_modes_behavior() do?
test_reasoning_modes_behavior() is an integration test in the langchain codebase, defined in libs/partners/ollama/tests/integration_tests/chat_models/test_chat_models_reasoning.py. It verifies how ChatOllama handles DeepSeek R1 reasoning content across reasoning=None, reasoning=False, and reasoning=True: the first two settings leave reasoning uncaptured, while reasoning=True places it in additional_kwargs['reasoning_content'] and keeps <think> tags out of the visible content.
Where is test_reasoning_modes_behavior() defined?
test_reasoning_modes_behavior() is defined in libs/partners/ollama/tests/integration_tests/chat_models/test_chat_models_reasoning.py at line 181.
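
As a sketch of how this test might be exercised locally (an assumption, not documented on this page), the node id built from the path above can be passed straight to pytest. Because it is a live integration test, it requires a running local Ollama server with the test's parametrized model available.

import pytest

# Run only this integration test by node id; requires a local Ollama
# server and the test's parametrized model to be available.
pytest.main([
    "libs/partners/ollama/tests/integration_tests/chat_models/"
    "test_chat_models_reasoning.py::test_reasoning_modes_behavior",
    "-v",
])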
