test_with_llm() — langchain Function Reference
Architecture documentation for the test_with_llm() function in test_runnable_events_v2.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    1b8eb356_6b8d_641a_0884_12c890ec360a["test_with_llm()"]
    33c02978_2077_5819_7048_bc2a81e80625["test_runnable_events_v2.py"]
    1b8eb356_6b8d_641a_0884_12c890ec360a -->|defined in| 33c02978_2077_5819_7048_bc2a81e80625
    716d2a5e_dc8e_3cae_e044_b56b06bee655["_collect_events()"]
    1b8eb356_6b8d_641a_0884_12c890ec360a -->|calls| 716d2a5e_dc8e_3cae_e044_b56b06bee655
    style 1b8eb356_6b8d_641a_0884_12c890ec360a fill:#6366f1,stroke:#818cf8,color:#fff
Relationship Graph
Source Code
libs/core/tests/unit_tests/runnables/test_runnable_events_v2.py lines 1681–1808
async def test_with_llm() -> None:
    """Test with regular llm."""
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "You are Cat Agent 007"),
            ("human", "{question}"),
        ]
    ).with_config({"run_name": "my_template", "tags": ["my_template"]})
    llm = FakeStreamingListLLM(responses=["abc"])
    chain = prompt | llm
    events = await _collect_events(
        chain.astream_events({"question": "hello"}, version="v2")
    )
    _assert_events_equal_allow_superset_metadata(
        events,
        [
            {
                "data": {"input": {"question": "hello"}},
                "event": "on_chain_start",
                "metadata": {},
                "name": "RunnableSequence",
                "run_id": "",
                "parent_ids": [],
                "tags": [],
            },
            {
                "data": {"input": {"question": "hello"}},
                "event": "on_prompt_start",
                "metadata": {},
                "name": "my_template",
                "run_id": "",
                "parent_ids": [],
                "tags": ["my_template", "seq:step:1"],
            },
            {
                "data": {
                    "input": {"question": "hello"},
                    "output": ChatPromptValue(
                        messages=[
                            SystemMessage(content="You are Cat Agent 007"),
                            HumanMessage(content="hello"),
                        ]
                    ),
                },
                "event": "on_prompt_end",
                "metadata": {},
                "name": "my_template",
                "run_id": "",
                "parent_ids": [],
                "tags": ["my_template", "seq:step:1"],
            },
            {
                "data": {
                    "input": {
                        "prompts": ["System: You are Cat Agent 007\nHuman: hello"]
                    }
                },
                "event": "on_llm_start",
                "metadata": {},
                "name": "FakeStreamingListLLM",
                "run_id": "",
                "parent_ids": [],
                "tags": ["seq:step:2"],
            },
            {
                "data": {
                    "input": {
                        "prompts": ["System: You are Cat Agent 007\nHuman: hello"]
                    },
                    "output": {
                        "generations": [
                            [
                                {
                                    "generation_info": None,
                                    "text": "abc",
                                    "type": "Generation",
                                }
                            ]
                        ],
                        "llm_output": None,
(Listing truncated; the remaining expected events and the end of the test continue through line 1808 of the source file.)
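The test exercises the v2 astream_events API end to end over a prompt | llm chain. For orientation, here is a minimal sketch of the same flow outside the test harness, assuming a current langchain-core install and that FakeStreamingListLLM is importable from langchain_core.language_models (the fake model's import path is an assumption and may differ across versions):

# Minimal sketch (not the test itself): build the same chain and stream v2 events.
import asyncio

from langchain_core.language_models import FakeStreamingListLLM  # assumed path
from langchain_core.prompts import ChatPromptTemplate


async def main() -> None:
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "You are Cat Agent 007"),
            ("human", "{question}"),
        ]
    ).with_config({"run_name": "my_template", "tags": ["my_template"]})
    llm = FakeStreamingListLLM(responses=["abc"])
    chain = prompt | llm

    # Events arrive in the order the test asserts:
    # on_chain_start -> on_prompt_start -> on_prompt_end -> on_llm_start -> ...
    async for event in chain.astream_events({"question": "hello"}, version="v2"):
        print(event["event"], event["name"])


asyncio.run(main())

Each yielded event carries "event", "name", "run_id", "parent_ids", "tags", "metadata", and "data" keys, which is the structure the expected-events list above asserts against.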
Frequently Asked Questions
What does test_with_llm() do?
test_with_llm() is an async unit test in the langchain codebase, defined in libs/core/tests/unit_tests/runnables/test_runnable_events_v2.py. It pipes a ChatPromptTemplate into a FakeStreamingListLLM, streams the chain with astream_events(version="v2"), gathers the emitted events via _collect_events(), and asserts that the resulting sequence (on_chain_start, on_prompt_start, on_prompt_end, on_llm_start, and so on) matches the expected payloads.
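For reference, a single asserted event has the following shape, copied from the expected-events list in the test (run_id is compared as an empty string, presumably because the event-collection helper scrubs the random run UUID before comparison):

# One expected v2 stream event from the test's assertion list.
{
    "event": "on_prompt_start",
    "name": "my_template",
    "run_id": "",
    "parent_ids": [],
    "tags": ["my_template", "seq:step:1"],
    "metadata": {},
    "data": {"input": {"question": "hello"}},
}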
Where is test_with_llm() defined?
test_with_llm() is defined in libs/core/tests/unit_tests/runnables/test_runnable_events_v2.py at line 1681.
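To run only this test from a checkout of the langchain monorepo, an invocation along the following lines should work, assuming pytest and the repository's async test plugin are installed; the node id is derived from the path above:

# Sketch: run just test_with_llm() by node id from the monorepo root.
import pytest

pytest.main(
    [
        "libs/core/tests/unit_tests/runnables/test_runnable_events_v2.py::test_with_llm",
        "-q",
    ]
)

Running pytest with the same node id from a shell achieves the same thing.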
What does test_with_llm() call?
test_with_llm() calls one function tracked in the dependency graph: _collect_events().
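The source of _collect_events() is not shown on this page. As a purely hypothetical sketch, assuming it simply drains the async event stream into a list and blanks non-deterministic fields such as run_id (which the expected events above compare against empty strings), it might look roughly like this:

# Hypothetical reconstruction; the real helper in test_runnable_events_v2.py may differ.
from collections.abc import AsyncIterator
from typing import Any


async def _collect_events(events: AsyncIterator[Any]) -> list[dict[str, Any]]:
    """Drain an astream_events() iterator into a list for assertions."""
    collected: list[dict[str, Any]] = []
    async for event in events:
        event = dict(event)
        # Replace the random run UUID so expected events can use "".
        event["run_id"] = ""
        collected.append(event)
    return collected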