test_chat_models.py — langchain Source File
Architecture documentation for test_chat_models.py, a Python file in the langchain codebase. 12 imports, 0 dependents.
Entity Profile
Dependency Diagram
graph LR
    b11341c3_5abf_2fac_d82a_55ca324bc761["test_chat_models.py"]
    7025b240_fdc3_cf68_b72f_f41dac94566b["json"]
    b11341c3_5abf_2fac_d82a_55ca324bc761 --> 7025b240_fdc3_cf68_b72f_f41dac94566b
    2a7f66a7_8738_3d47_375b_70fcaa6ac169["logging"]
    b11341c3_5abf_2fac_d82a_55ca324bc761 --> 2a7f66a7_8738_3d47_375b_70fcaa6ac169
    cfe2bde5_180e_e3b0_df2b_55b3ebaca8e7["collections.abc"]
    b11341c3_5abf_2fac_d82a_55ca324bc761 --> cfe2bde5_180e_e3b0_df2b_55b3ebaca8e7
    69e1d8cc_6173_dcd0_bfdf_2132d8e1ce56["contextlib"]
    b11341c3_5abf_2fac_d82a_55ca324bc761 --> 69e1d8cc_6173_dcd0_bfdf_2132d8e1ce56
    8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3["typing"]
    b11341c3_5abf_2fac_d82a_55ca324bc761 --> 8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3
    525a7d6f_f455_56e3_854a_c8a7da4a1417["unittest.mock"]
    b11341c3_5abf_2fac_d82a_55ca324bc761 --> 525a7d6f_f455_56e3_854a_c8a7da4a1417
    120e2591_3e15_b895_72b6_cb26195e40a6["pytest"]
    b11341c3_5abf_2fac_d82a_55ca324bc761 --> 120e2591_3e15_b895_72b6_cb26195e40a6
    1803c8c1_a347_1256_1454_9f04c3553d93["httpx"]
    b11341c3_5abf_2fac_d82a_55ca324bc761 --> 1803c8c1_a347_1256_1454_9f04c3553d93
    75137834_4ba7_dc43_7ec5_182c05eceedf["langchain_core.exceptions"]
    b11341c3_5abf_2fac_d82a_55ca324bc761 --> 75137834_4ba7_dc43_7ec5_182c05eceedf
    d758344f_537f_649e_f467_b9d7442e86df["langchain_core.messages"]
    b11341c3_5abf_2fac_d82a_55ca324bc761 --> d758344f_537f_649e_f467_b9d7442e86df
    bab3058f_1ec6_5a8a_2b11_86f46e62adb4["langchain_tests.unit_tests"]
    b11341c3_5abf_2fac_d82a_55ca324bc761 --> bab3058f_1ec6_5a8a_2b11_86f46e62adb4
    eefab93e_c3f9_d0fd_c7a7_310371e9d16f["langchain_ollama.chat_models"]
    b11341c3_5abf_2fac_d82a_55ca324bc761 --> eefab93e_c3f9_d0fd_c7a7_310371e9d16f
    style b11341c3_5abf_2fac_d82a_55ca324bc761 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
"""Unit tests for ChatOllama."""
import json
import logging
from collections.abc import Generator
from contextlib import contextmanager
from typing import Any
from unittest.mock import MagicMock, patch
import pytest
from httpx import Client, Request, Response
from langchain_core.exceptions import OutputParserException
from langchain_core.messages import ChatMessage, HumanMessage
from langchain_tests.unit_tests import ChatModelUnitTests
from langchain_ollama.chat_models import (
ChatOllama,
_parse_arguments_from_tool_call,
_parse_json_string,
)
MODEL_NAME = "llama3.1"
@contextmanager
def _mock_httpx_client_stream(
*_args: Any, **_kwargs: Any
) -> Generator[Response, Any, Any]:
yield Response(
status_code=200,
content='{"message": {"role": "assistant", "content": "The meaning ..."}}',
request=Request(method="POST", url="http://whocares:11434"),
)
dummy_raw_tool_call = {
"function": {"name": "test_func", "arguments": ""},
}
class TestChatOllama(ChatModelUnitTests):
@property
def chat_model_class(self) -> type[ChatOllama]:
return ChatOllama
@property
def chat_model_params(self) -> dict:
return {"model": MODEL_NAME}
def test__parse_arguments_from_tool_call() -> None:
"""Test that string arguments are preserved as strings in tool call parsing.
PR #30154
String-typed tool arguments (like IDs or long strings) were being incorrectly
processed. The parser should preserve string values as strings rather than
attempting to parse them as JSON when they're already valid string arguments.
Use a long string ID to ensure string arguments maintain their original type after
parsing, which is critical for tools expecting string inputs.
// ... (416 more lines)
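The docstring above describes the behavior under test: string-typed tool arguments must come back from the parser unchanged rather than being re-decoded as JSON. The sketch below illustrates that check; it is not the body of the real test. The tool name lookup_record, the record_id value, and the assumption that _parse_arguments_from_tool_call returns a plain dict of argument values are illustrative.

# Hedged sketch of the string-preservation check described in the docstring.
# Assumes _parse_arguments_from_tool_call returns a dict mapping argument
# names to values; the tool name and argument below are hypothetical.
from langchain_ollama.chat_models import _parse_arguments_from_tool_call


def example_string_argument_is_preserved() -> None:
    raw_tool_call = {
        "function": {
            "name": "lookup_record",  # hypothetical tool name
            "arguments": {"record_id": "0123456789abcdef0123456789abcdef"},
        },
    }
    parsed = _parse_arguments_from_tool_call(raw_tool_call)
    assert parsed is not None
    # The long ID must stay a str; it should not be JSON-decoded into
    # another type such as int.
    assert isinstance(parsed["record_id"], str)
    assert parsed["record_id"] == "0123456789abcdef0123456789abcdef"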
Domain
- CoreAbstractions
Subdomains
- MessageSchema
Functions
- _mock_httpx_client_stream()
- test__parse_arguments_from_tool_call()
- test__parse_arguments_from_tool_call_with_function_name_metadata()
- test_all_none_parameters_results_in_empty_options()
- test_arbitrary_roles_accepted_in_chatmessages()
- test_chat_ollama_ignores_strict_arg()
- test_explicit_options_dict_preserved()
- test_load_followed_by_content_response()
- test_load_response_with_actual_content_is_not_skipped()
- test_load_response_with_empty_content_is_skipped()
- test_load_response_with_whitespace_content_is_skipped()
- test_none_parameters_excluded_from_options()
- test_parse_json_string_failure_case_raises_exception()
- test_parse_json_string_skip_returns_input_on_failure()
- test_parse_json_string_success_cases()
- test_reasoning_param_passed_to_client()
- test_validate_model_on_init()
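Several of the tests listed above (the load-response and arbitrary-role cases in particular) run ChatOllama against canned HTTP responses rather than a live Ollama server. The sketch below shows one way the _mock_httpx_client_stream helper from this file could be wired in; the patch target httpx.Client.stream and the end-to-end flow are assumptions for illustration, not necessarily the seam the real tests patch.

# Hedged sketch: stubbing the HTTP layer with this module's mock stream helper.
# Assumes ChatOllama ultimately streams responses through httpx.Client.stream;
# the real tests may patch a different internal seam.
from unittest.mock import patch

from langchain_core.messages import HumanMessage
from langchain_ollama.chat_models import ChatOllama


def example_invoke_with_mocked_stream() -> None:
    # MODEL_NAME and _mock_httpx_client_stream are defined earlier in this module.
    llm = ChatOllama(model=MODEL_NAME)
    with patch("httpx.Client.stream", _mock_httpx_client_stream):
        result = llm.invoke([HumanMessage(content="What is the meaning of life?")])
    # If the patched seam is the one ChatOllama uses, the reply text comes from
    # the canned JSON payload yielded by _mock_httpx_client_stream.
    print(result.content)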
Classes
- TestChatOllama
Dependencies
- collections.abc
- contextlib
- httpx
- json
- langchain_core.exceptions
- langchain_core.messages
- langchain_ollama.chat_models
- langchain_tests.unit_tests
- logging
- pytest
- typing
- unittest.mock
Frequently Asked Questions
What does test_chat_models.py do?
test_chat_models.py contains the unit tests for ChatOllama, the Ollama chat model integration in langchain_ollama. The tests cover tool-call argument parsing, JSON-string parsing helpers, option handling, arbitrary chat message roles, and the handling of streamed "load" responses, all exercised against mocked HTTP traffic. It belongs to the CoreAbstractions domain, MessageSchema subdomain.
What functions are defined in test_chat_models.py?
test_chat_models.py defines 17 function(s): _mock_httpx_client_stream, test__parse_arguments_from_tool_call, test__parse_arguments_from_tool_call_with_function_name_metadata, test_all_none_parameters_results_in_empty_options, test_arbitrary_roles_accepted_in_chatmessages, test_chat_ollama_ignores_strict_arg, test_explicit_options_dict_preserved, test_load_followed_by_content_response, test_load_response_with_actual_content_is_not_skipped, test_load_response_with_empty_content_is_skipped, and 7 more.
What does test_chat_models.py depend on?
test_chat_models.py imports 12 module(s): collections.abc, contextlib, httpx, json, langchain_core.exceptions, langchain_core.messages, langchain_ollama.chat_models, langchain_tests.unit_tests, and 4 more.
Where is test_chat_models.py in the architecture?
test_chat_models.py is located at libs/partners/ollama/tests/unit_tests/test_chat_models.py (domain: CoreAbstractions, subdomain: MessageSchema, directory: libs/partners/ollama/tests/unit_tests).