test_code_interpreter() — langchain Function Reference
Architecture documentation for the test_code_interpreter() function in test_chat_models.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    63fe0f3d_b661_68dc_7569_363ce8e7b9f9["test_code_interpreter()"]
    af57ae60_607e_c138_9ab0_fb8bb1c5916a["test_chat_models.py"]
    63fe0f3d_b661_68dc_7569_363ce8e7b9f9 -->|defined in| af57ae60_607e_c138_9ab0_fb8bb1c5916a
    style 63fe0f3d_b661_68dc_7569_363ce8e7b9f9 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/groq/tests/integration_tests/test_chat_models.py lines 739–770
def test_code_interpreter() -> None:
    llm = ChatGroq(model="groq/compound-mini")
    input_message = {
        "role": "user",
        "content": (
            "Calculate the square root of 101 and show me the Python code you used."
        ),
    }
    full: AIMessageChunk | None = None
    for chunk in llm.stream([input_message]):
        full = chunk if full is None else full + chunk
    assert isinstance(full, AIMessageChunk)
    assert full.additional_kwargs["reasoning_content"]
    assert full.additional_kwargs["executed_tools"]
    assert [block["type"] for block in full.content_blocks] == [
        "reasoning",
        "server_tool_call",
        "server_tool_result",
        "text",
    ]
    next_message = {
        "role": "user",
        "content": "Now do the same for 102.",
    }
    response = llm.invoke([input_message, full, next_message])
    assert [block["type"] for block in response.content_blocks] == [
        "reasoning",
        "server_tool_call",
        "server_tool_result",
        "text",
    ]
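The streaming loop in the test builds the full message by repeatedly merging chunks with `+`. A minimal sketch of that accumulation idiom, using a hypothetical `Chunk` stand-in rather than the real `AIMessageChunk` (which merges content, tool calls, and metadata, not just text):

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    """Hypothetical stand-in for AIMessageChunk: merging concatenates text."""

    text: str

    def __add__(self, other: "Chunk") -> "Chunk":
        return Chunk(self.text + other.text)


def accumulate(stream):
    """Mirror the test's loop: full = chunk if full is None else full + chunk."""
    full = None
    for chunk in stream:
        full = chunk if full is None else full + chunk
    return full


full = accumulate([Chunk("sqrt(101) "), Chunk("is about "), Chunk("10.05")])
print(full.text)  # the merged content of every streamed chunk
```

The `None` seed matters: it distinguishes "no chunks received yet" from an empty message, which is why the test follows the loop with `assert isinstance(full, AIMessageChunk)`.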
Frequently Asked Questions
What does test_code_interpreter() do?
test_code_interpreter() is an integration test in the langchain codebase, defined in libs/partners/groq/tests/integration_tests/test_chat_models.py. It exercises server-side code execution through ChatGroq with the groq/compound-mini model: it streams a request to compute the square root of 101, accumulates the streamed chunks into a single message, and asserts that the merged response exposes reasoning, server_tool_call, server_tool_result, and text content blocks. It then passes that message back in a follow-up invoke() call ("Now do the same for 102.") and asserts the same block sequence.
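The block-type assertion in the test can be read as a plain list comprehension over dicts: content_blocks is a list of dicts, each carrying a "type" key. A sketch with hypothetical sample payloads (the keys other than "type" are illustrative, not the real schema):

```python
# Hypothetical content_blocks payload shaped like the one the test asserts on.
content_blocks = [
    {"type": "reasoning", "reasoning": "Need sqrt(101); run Python."},
    {"type": "server_tool_call", "name": "code_interpreter"},
    {"type": "server_tool_result", "output": "10.0499"},
    {"type": "text", "text": "The square root of 101 is about 10.05."},
]

# Same shape as the test's assertion: extract each block's type, in order.
block_types = [block["type"] for block in content_blocks]
assert block_types == ["reasoning", "server_tool_call", "server_tool_result", "text"]
```

Asserting on the ordered type sequence, rather than on block contents, keeps the test stable against nondeterministic model output while still verifying that the tool-call round trip happened.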
Where is test_code_interpreter() defined?
test_code_interpreter() is defined in libs/partners/groq/tests/integration_tests/test_chat_models.py at line 739.