
test_redacted_thinking() — langchain Function Reference

Architecture documentation for the test_redacted_thinking() function in test_chat_models.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  83f5d89e_ac66_1e93_6910_b8b01767660b["test_redacted_thinking()"]
  f27640dd_3870_5548_d153_f9504ae1021f["test_chat_models.py"]
  83f5d89e_ac66_1e93_6910_b8b01767660b -->|defined in| f27640dd_3870_5548_d153_f9504ae1021f
  style 83f5d89e_ac66_1e93_6910_b8b01767660b fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/anthropic/tests/integration_tests/test_chat_models.py, lines 1094–1155

def test_redacted_thinking(output_version: Literal["v0", "v1"]) -> None:
    llm = ChatAnthropic(
        # It appears that Sonnet 4.5 either: isn't returning redacted thinking blocks,
        # or the magic string is broken? Retry later once 3-7 finally removed
        model="claude-3-7-sonnet-latest",  # type: ignore[call-arg]
        max_tokens=5_000,  # type: ignore[call-arg]
        thinking={"type": "enabled", "budget_tokens": 2_000},
        output_version=output_version,
    )
    query = "ANTHROPIC_MAGIC_STRING_TRIGGER_REDACTED_THINKING_46C9A13E193C177646C7398A98432ECCCE4C1253D5E2D82641AC0E52CC2876CB"  # noqa: E501
    input_message = {"role": "user", "content": query}

    response = llm.invoke([input_message])
    value = None
    for block in response.content:
        assert isinstance(block, dict)
        if block["type"] == "redacted_thinking":
            value = block
        elif (
            block["type"] == "non_standard"
            and block["value"]["type"] == "redacted_thinking"
        ):
            value = block["value"]
        else:
            pass
        if value:
            assert set(value.keys()) == {"type", "data"}
            assert value["data"]
            assert isinstance(value["data"], str)
    assert value is not None

    # Test streaming
    full: BaseMessageChunk | None = None
    for chunk in llm.stream([input_message]):
        full = cast("BaseMessageChunk", chunk) if full is None else full + chunk
    assert isinstance(full, AIMessageChunk)
    assert isinstance(full.content, list)
    value = None
    for block in full.content:
        assert isinstance(block, dict)
        if block["type"] == "redacted_thinking":
            value = block
            assert set(value.keys()) == {"type", "data", "index"}
            assert "index" in block
        elif (
            block["type"] == "non_standard"
            and block["value"]["type"] == "redacted_thinking"
        ):
            value = block["value"]
            assert isinstance(value, dict)
            assert set(value.keys()) == {"type", "data"}
            assert "index" in block
        else:
            pass
        if value:
            assert value["data"]
            assert isinstance(value["data"], str)
    assert value is not None

    # Test pass back in
    next_message = {"role": "user", "content": "What?"}
    _ = llm.invoke([input_message, full, next_message])
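
The listing above enables Anthropic's extended thinking and checks that redacted thinking blocks survive invoke(), stream() aggregation, and being passed back as conversation history. Redacted thinking normally appears only when Anthropic redacts the model's reasoning; the test forces it with the magic trigger string shown above. Below is a minimal sketch of the same setup outside the test harness; it assumes ANTHROPIC_API_KEY is set and reuses the model name and token budgets from the test, both of which may change over time.

from langchain_anthropic import ChatAnthropic

# Sketch: enable extended thinking and scan the response for redacted_thinking blocks.
llm = ChatAnthropic(
    model="claude-3-7-sonnet-latest",
    max_tokens=5_000,
    thinking={"type": "enabled", "budget_tokens": 2_000},
)
response = llm.invoke([{"role": "user", "content": "Hello"}])
for block in response.content:
    if isinstance(block, dict) and block.get("type") == "redacted_thinking":
        # The "data" field is an opaque, encrypted string the client cannot decode.
        print("redacted thinking block of", len(block["data"]), "chars")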

Frequently Asked Questions

What does test_redacted_thinking() do?
test_redacted_thinking() is an integration test in the langchain-anthropic partner package, defined in libs/partners/anthropic/tests/integration_tests/test_chat_models.py. It sends a magic trigger string to a ChatAnthropic model with extended thinking enabled, then asserts that redacted_thinking content blocks (either top-level or wrapped in a non_standard block, depending on output_version) appear in both invoke() and stream() responses with a non-empty string data payload, and that the aggregated streamed message can be passed back as history in a follow-up call.
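
For reference, the test accepts redacted thinking in either of two block shapes, reflecting the two output_version formats (illustrative values only, not real payloads):

# v0-style: redacted thinking is a top-level content block.
v0_block = {"type": "redacted_thinking", "data": "<opaque encrypted string>"}

# v1-style: the same payload arrives wrapped in a non_standard block.
v1_block = {
    "type": "non_standard",
    "value": {"type": "redacted_thinking", "data": "<opaque encrypted string>"},
}

# Streamed top-level blocks additionally carry an "index" key, as asserted in the streaming loop.
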
Where is test_redacted_thinking() defined?
test_redacted_thinking() is defined in libs/partners/anthropic/tests/integration_tests/test_chat_models.py at line 1094.
