test_openai_client_caching() — langchain Function Reference
Architecture documentation for the test_openai_client_caching() function in test_base.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    e0201335_c5a0_ac8d_759a_b875ca7815f8["test_openai_client_caching()"]
    48232d20_f8c1_b597_14fa_7dc407e9bfe5["test_base.py"]
    e0201335_c5a0_ac8d_759a_b875ca7815f8 -->|defined in| 48232d20_f8c1_b597_14fa_7dc407e9bfe5
    style e0201335_c5a0_ac8d_759a_b875ca7815f8 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/openai/tests/unit_tests/chat_models/test_base.py lines 104–125
def test_openai_client_caching() -> None:
    """Test that the OpenAI client is cached."""
    llm1 = ChatOpenAI(model="gpt-4.1-mini")
    llm2 = ChatOpenAI(model="gpt-4.1-mini")
    assert llm1.root_client._client is llm2.root_client._client

    llm3 = ChatOpenAI(model="gpt-4.1-mini", base_url="foo")
    assert llm1.root_client._client is not llm3.root_client._client

    llm4 = ChatOpenAI(model="gpt-4.1-mini", timeout=None)
    assert llm1.root_client._client is llm4.root_client._client

    llm5 = ChatOpenAI(model="gpt-4.1-mini", timeout=3)
    assert llm1.root_client._client is not llm5.root_client._client

    llm6 = ChatOpenAI(
        model="gpt-4.1-mini", timeout=httpx.Timeout(timeout=60.0, connect=5.0)
    )
    assert llm1.root_client._client is not llm6.root_client._client

    llm7 = ChatOpenAI(model="gpt-4.1-mini", timeout=(5, 1))
    assert llm1.root_client._client is not llm7.root_client._client
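The assertions group the parameters that matter for client reuse: two instances with identical configuration share one underlying httpx client, timeout=None behaves like the default and still reuses it, while a different base_url or any explicit timeout (an int, an httpx.Timeout, or a tuple) forces a distinct client. The sketch below illustrates the general technique of keying a client cache on those parameters; it is a simplified, hypothetical illustration, not langchain-openai's actual implementation, and the helper name, cache key, and default base URL are assumptions.

import httpx

# Hypothetical sketch: reuse an httpx.Client only when every client-relevant
# constructor parameter matches. Not the real langchain-openai code.
_client_cache: dict = {}

def _get_cached_httpx_client(base_url=None, timeout=None) -> httpx.Client:
    # repr() lets unhashable timeout values (e.g. httpx.Timeout) participate in the key.
    key = (base_url, repr(timeout))
    if key not in _client_cache:
        _client_cache[key] = httpx.Client(base_url=base_url or "https://api.openai.com/v1")
    return _client_cache[key]

With such a cache, constructing ChatOpenAI twice with the same settings hits the same key, while base_url="foo" or timeout=3 produces a new entry, which is exactly what the assertions above check.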
Frequently Asked Questions
What does test_openai_client_caching() do?
test_openai_client_caching() is a unit test in the langchain codebase, defined in libs/partners/openai/tests/unit_tests/chat_models/test_base.py. It verifies that ChatOpenAI reuses the underlying httpx client across instances with equivalent configuration: two instances created with the same model share one client, timeout=None is treated like the default and still reuses it, and changing base_url or passing an explicit timeout (an int, an httpx.Timeout, or a tuple) yields a distinct client.
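A minimal sketch of the behavior the test asserts, assuming langchain-openai is installed and an OPENAI_API_KEY is available in the environment:

from langchain_openai import ChatOpenAI

# Equivalent configurations share one underlying httpx client.
a = ChatOpenAI(model="gpt-4.1-mini")
b = ChatOpenAI(model="gpt-4.1-mini")
assert a.root_client._client is b.root_client._client

# A different client-relevant setting (here, an explicit timeout) gets its own client.
c = ChatOpenAI(model="gpt-4.1-mini", timeout=3)
assert a.root_client._client is not c.root_client._client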
Where is test_openai_client_caching() defined?
test_openai_client_caching() is defined in libs/partners/openai/tests/unit_tests/chat_models/test_base.py at line 104.