test_async_chat_openai_streaming() — langchain Function Reference
Architecture documentation for the test_async_chat_openai_streaming() function in test_azure.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    190dd465_6696_f5b0_964a_48ec027dedc9["test_async_chat_openai_streaming()"]
    c413d48d_e43d_eae6_47cb_3eea9394c77c["test_azure.py"]
    190dd465_6696_f5b0_964a_48ec027dedc9 -->|defined in| c413d48d_e43d_eae6_47cb_3eea9394c77c
    ccf90d51_5ff2_7b50_5922_b28b80cca584["_get_llm()"]
    190dd465_6696_f5b0_964a_48ec027dedc9 -->|calls| ccf90d51_5ff2_7b50_5922_b28b80cca584
    style 190dd465_6696_f5b0_964a_48ec027dedc9 fill:#6366f1,stroke:#818cf8,color:#fff
Relationship Graph
Source Code
libs/partners/openai/tests/integration_tests/chat_models/test_azure.py lines 140–161
async def test_async_chat_openai_streaming() -> None:
    """Test that streaming correctly invokes on_llm_new_token callback."""
    callback_handler = FakeCallbackHandler()
    callback_manager = CallbackManager([callback_handler])
    chat = _get_llm(
        max_tokens=10,
        streaming=True,
        temperature=0,
        callbacks=callback_manager,
        verbose=True,
    )
    message = HumanMessage(content="Hello")
    response = await chat.agenerate([[message], [message]])
    assert callback_handler.llm_streams > 0
    assert isinstance(response, LLMResult)
    assert len(response.generations) == 2
    for generations in response.generations:
        assert len(generations) == 1
        for generation in generations:
            assert isinstance(generation, ChatGeneration)
            assert isinstance(generation.text, str)
            assert generation.text == generation.message.content
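The core mechanism this test exercises is a callback handler whose token counter (`llm_streams`) increments once per streamed token. That pattern can be illustrated without langchain; the sketch below uses hypothetical names (`FakeHandler`, `fake_stream`) that stand in for `FakeCallbackHandler` and the model's async streaming path, and mirrors the `llm_streams > 0` assertion:

```python
import asyncio


class FakeHandler:
    """Counts on_llm_new_token calls, analogous to FakeCallbackHandler.llm_streams."""

    def __init__(self) -> None:
        self.llm_streams = 0

    def on_llm_new_token(self, token: str) -> None:
        self.llm_streams += 1


async def fake_stream(handler: FakeHandler) -> str:
    """Emit tokens one at a time, invoking the callback for each token."""
    tokens = ["Hel", "lo", "!"]
    chunks = []
    for token in tokens:
        await asyncio.sleep(0)  # simulate an async network hop between chunks
        handler.on_llm_new_token(token)
        chunks.append(token)
    return "".join(chunks)


handler = FakeHandler()
text = asyncio.run(fake_stream(handler))
assert handler.llm_streams > 0  # the property the real test asserts
```

In the real test the same idea holds, except the tokens arrive from the Azure OpenAI streaming API and the handler is wired in through a `CallbackManager`.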
Frequently Asked Questions
What does test_async_chat_openai_streaming() do?
test_async_chat_openai_streaming() is an async integration test in the langchain codebase, defined in libs/partners/openai/tests/integration_tests/chat_models/test_azure.py. It builds a streaming chat model via _get_llm(), calls agenerate() on two identical HumanMessage inputs, and asserts that the on_llm_new_token callback fired at least once and that the resulting LLMResult contains exactly one ChatGeneration per input whose text matches its message content.
Where is test_async_chat_openai_streaming() defined?
test_async_chat_openai_streaming() is defined in libs/partners/openai/tests/integration_tests/chat_models/test_azure.py at line 140.
What does test_async_chat_openai_streaming() call?
test_async_chat_openai_streaming() calls 1 function(s): _get_llm.
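The `_get_llm` helper is not shown in this excerpt; judging from the call site, it appears to be a test factory that forwards keyword overrides into the model constructor. The sketch below is an assumption about that shape, with a hypothetical `Model` dataclass standing in for the real Azure chat model so the example runs without langchain:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class Model:
    """Hypothetical stand-in; the real helper builds an Azure chat model."""

    params: dict[str, Any] = field(default_factory=dict)


def _get_llm(**kwargs: Any) -> Model:
    # Assumed factory pattern: merge test-suite defaults with
    # per-test overrides, with the caller's kwargs taking precedence.
    defaults: dict[str, Any] = {"max_tokens": 10, "temperature": 0}
    return Model(params={**defaults, **kwargs})


llm = _get_llm(streaming=True, temperature=0.5)
assert llm.params["streaming"] is True
assert llm.params["temperature"] == 0.5  # caller override wins over default
```

This keeps each test's call site short (only the options that matter to that test) while centralizing deployment-specific configuration in one place.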