
test_code_execution() — langchain Function Reference

Architecture documentation for the test_code_execution() function in test_chat_models.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  c53fcf17_7ab0_0655_c4e7_9689b9b49eb2["test_code_execution()"]
  f27640dd_3870_5548_d153_f9504ae1021f["test_chat_models.py"]
  c53fcf17_7ab0_0655_c4e7_9689b9b49eb2 -->|defined in| f27640dd_3870_5548_d153_f9504ae1021f
  style c53fcf17_7ab0_0655_c4e7_9689b9b49eb2 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/anthropic/tests/integration_tests/test_chat_models.py lines 1722–1786

def test_code_execution(output_version: Literal["v0", "v1"]) -> None:
    """Note: this is a beta feature.

    TODO: Update to remove beta once generally available.
    """
    llm = ChatAnthropic(
        model=MODEL_NAME,  # type: ignore[call-arg]
        betas=["code-execution-2025-08-25"],
        output_version=output_version,
    )

    tool = {"type": "code_execution_20250825", "name": "code_execution"}
    llm_with_tools = llm.bind_tools([tool])

    input_message = {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": (
                    "Calculate the mean and standard deviation of "
                    "[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]"
                ),
            },
        ],
    }
    response = llm_with_tools.invoke([input_message])
    assert all(isinstance(block, dict) for block in response.content)
    block_types = {block["type"] for block in response.content}  # type: ignore[index]
    if output_version == "v0":
        assert block_types == {
            "text",
            "server_tool_use",
            "text_editor_code_execution_tool_result",
            "bash_code_execution_tool_result",
        }
    else:
        assert block_types == {"text", "server_tool_call", "server_tool_result"}

    # Test streaming
    full: BaseMessageChunk | None = None
    for chunk in llm_with_tools.stream([input_message]):
        assert isinstance(chunk, AIMessageChunk)
        full = chunk if full is None else full + chunk
    assert isinstance(full, AIMessageChunk)
    assert isinstance(full.content, list)
    block_types = {block["type"] for block in full.content}  # type: ignore[index]
    if output_version == "v0":
        assert block_types == {
            "text",
            "server_tool_use",
            "text_editor_code_execution_tool_result",
            "bash_code_execution_tool_result",
        }
    else:
        assert block_types == {"text", "server_tool_call", "server_tool_result"}

    # Test we can pass back in
    next_message = {
        "role": "user",
        "content": "Please add more comments to the code.",
    }
    _ = llm_with_tools.invoke(
        [input_message, full, next_message],
    )
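
The excerpt above omits any decorators, and the test body takes a bare output_version argument. In pytest, such an argument is typically supplied via parametrization; the sketch below shows that conventional wiring (the actual decorators on test_code_execution in the repository are not part of this excerpt, so treat this as an assumption, not the verbatim source).

import pytest

# Sketch only: a bare output_version parameter is usually driven by
# pytest.mark.parametrize, running the test body once per output version.
@pytest.mark.parametrize("output_version", ["v0", "v1"])
def test_code_execution(output_version: str) -> None:
    ...  # body as shown in the source listing above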

Frequently Asked Questions

What does test_code_execution() do?
test_code_execution() is an integration test for ChatAnthropic's beta code-execution server tool. It binds a code_execution_20250825 tool, asks the model to compute the mean and standard deviation of a list of numbers, and asserts that the response content blocks have the expected types for both the "v0" and "v1" output versions, for a single invoke call and for streaming; it then passes the streamed result back to the model in a follow-up turn. A minimal usage sketch of the same pattern appears after this FAQ.
Where is test_code_execution() defined?
test_code_execution() is defined in libs/partners/anthropic/tests/integration_tests/test_chat_models.py at line 1722.
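
The pattern exercised by this test can be reused outside the test harness. The sketch below mirrors the tool spec and beta flag from the source listing above; the model name is a placeholder assumption, since the test's MODEL_NAME constant is not shown in the excerpt, and an ANTHROPIC_API_KEY is assumed to be set in the environment.

from langchain_anthropic import ChatAnthropic

# Sketch, not the repository's own usage: beta flag and tool spec are copied
# from the test above; the model name is a placeholder for MODEL_NAME.
llm = ChatAnthropic(
    model="claude-sonnet-4-5",  # placeholder
    betas=["code-execution-2025-08-25"],
    output_version="v1",
)
llm_with_tools = llm.bind_tools(
    [{"type": "code_execution_20250825", "name": "code_execution"}]
)
response = llm_with_tools.invoke(
    "Calculate the mean and standard deviation of [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]"
)
for block in response.content:
    print(block["type"])  # for v1: text, server_tool_call, server_tool_result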
