test_audio_input_modality() — langchain Function Reference
Architecture documentation for the test_audio_input_modality() function in test_base.py from the langchain codebase.
Dependency Diagram
graph TD
    1985d864_97a0_f79b_2495_3fe06573e9fe["test_audio_input_modality()"]
    bd382a4e_442c_13ae_530c_6e34bc43623d["test_base.py"]
    1985d864_97a0_f79b_2495_3fe06573e9fe -->|defined in| bd382a4e_442c_13ae_530c_6e34bc43623d
    style 1985d864_97a0_f79b_2495_3fe06573e9fe fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/openai/tests/integration_tests/chat_models/test_base.py lines 919–956
# Imports used by this test, shown for context:
import base64
from pathlib import Path

from langchain_core.messages import AIMessage, BaseMessage, HumanMessage
from langchain_openai import ChatOpenAI


def test_audio_input_modality() -> None:
    llm = ChatOpenAI(
        model="gpt-4o-audio-preview",
        temperature=0,
        model_kwargs={
            "modalities": ["text", "audio"],
            "audio": {"voice": "alloy", "format": "wav"},
        },
    )
    filepath = Path(__file__).parent / "audio_input.wav"
    audio_data = filepath.read_bytes()
    b64_audio_data = base64.b64encode(audio_data).decode("utf-8")
    history: list[BaseMessage] = [
        HumanMessage(
            [
                {"type": "text", "text": "What is happening in this audio clip"},
                {
                    "type": "input_audio",
                    "input_audio": {"data": b64_audio_data, "format": "wav"},
                },
            ]
        )
    ]
    output = llm.invoke(history)
    assert isinstance(output, AIMessage)
    assert "audio" in output.additional_kwargs
    history.append(output)
    history.append(HumanMessage("Why?"))
    output = llm.invoke(history)
    assert isinstance(output, AIMessage)
    assert "audio" in output.additional_kwargs
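The multimodal message payload the test builds can be sketched independently of any model call. In this sketch, `make_audio_message_content` is a hypothetical helper name, but the content-block shape (a text block plus an `input_audio` block with base64 WAV data) is taken directly from the test above:

```python
import base64


def make_audio_message_content(wav_bytes: bytes, question: str) -> list[dict]:
    """Build the content list the test passes to HumanMessage.

    The "input_audio" block carries base64-encoded WAV data, matching the
    shape the test sends to the gpt-4o-audio-preview model.
    """
    b64 = base64.b64encode(wav_bytes).decode("utf-8")
    return [
        {"type": "text", "text": question},
        {
            "type": "input_audio",
            "input_audio": {"data": b64, "format": "wav"},
        },
    ]


# Usage with stand-in bytes (no real WAV file needed to inspect the shape):
content = make_audio_message_content(b"fake-wav-bytes", "What is happening in this audio clip")
```

The helper would typically be fed `Path(...).read_bytes()` output, as the test does with its `audio_input.wav` fixture.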
Frequently Asked Questions
What does test_audio_input_modality() do?
test_audio_input_modality() is an integration test in the langchain codebase, defined in libs/partners/openai/tests/integration_tests/chat_models/test_base.py. It exercises ChatOpenAI's audio input modality against the gpt-4o-audio-preview model: it sends a base64-encoded WAV clip as an input_audio content block alongside a text question, asserts the response is an AIMessage with "audio" in its additional_kwargs, then appends that response plus a follow-up HumanMessage ("Why?") to the history and asserts the same properties on the second response.
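The test only asserts that "audio" is present in additional_kwargs; consuming that audio is left implicit. A minimal sketch of decoding it, assuming the OpenAI audio response shape in which additional_kwargs["audio"] carries a base64-encoded "data" field (`extract_audio_bytes` is a hypothetical helper, not part of langchain):

```python
import base64
from typing import Optional


def extract_audio_bytes(additional_kwargs: dict) -> Optional[bytes]:
    """Decode base64 audio from an AIMessage's additional_kwargs, if present.

    Assumption: the "audio" entry follows OpenAI's response shape with a
    base64 "data" field. Returns None when no audio came back.
    """
    audio = additional_kwargs.get("audio")
    if not audio or "data" not in audio:
        return None
    return base64.b64decode(audio["data"])


# Example with a stand-in payload (no API call):
fake_kwargs = {"audio": {"data": base64.b64encode(b"wav-bytes").decode("utf-8")}}
audio_bytes = extract_audio_bytes(fake_kwargs)
```

In the test itself, the same dictionary would come from `output.additional_kwargs` after `llm.invoke(history)`.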
Where is test_audio_input_modality() defined?
test_audio_input_modality() is defined in libs/partners/openai/tests/integration_tests/chat_models/test_base.py at line 919.