test_with_llm() — langchain Function Reference
Architecture documentation for the test_with_llm() function in test_runnable_events_v1.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    9d9ca4fa_50fe_f5e2_5299_a822943e33f6["test_with_llm()"]
    8ff41f3c_f250_f8de_8094_4f24860a10e0["test_runnable_events_v1.py"]
    9d9ca4fa_50fe_f5e2_5299_a822943e33f6 -->|defined in| 8ff41f3c_f250_f8de_8094_4f24860a10e0
    aac0453e_34cd_4dce_c7fa_f176ab20140b["_collect_events()"]
    9d9ca4fa_50fe_f5e2_5299_a822943e33f6 -->|calls| aac0453e_34cd_4dce_c7fa_f176ab20140b
    6ebe7fde_6e8f_dae5_d42f_9cea181617f5["_assert_events_equal_allow_superset_metadata()"]
    9d9ca4fa_50fe_f5e2_5299_a822943e33f6 -->|calls| 6ebe7fde_6e8f_dae5_d42f_9cea181617f5
    style 9d9ca4fa_50fe_f5e2_5299_a822943e33f6 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/core/tests/unit_tests/runnables/test_runnable_events_v1.py lines 1731–1860
async def test_with_llm() -> None:
    """Test with regular llm."""
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "You are Cat Agent 007"),
            ("human", "{question}"),
        ]
    ).with_config({"run_name": "my_template", "tags": ["my_template"]})
    llm = FakeStreamingListLLM(responses=["abc"])
    chain = prompt | llm
    events = await _collect_events(
        chain.astream_events({"question": "hello"}, version="v1")
    )
    _assert_events_equal_allow_superset_metadata(
        events,
        [
            {
                "data": {"input": {"question": "hello"}},
                "event": "on_chain_start",
                "metadata": {},
                "name": "RunnableSequence",
                "run_id": "",
                "parent_ids": [],
                "tags": [],
            },
            {
                "data": {"input": {"question": "hello"}},
                "event": "on_prompt_start",
                "metadata": {},
                "name": "my_template",
                "run_id": "",
                "parent_ids": [],
                "tags": ["my_template", "seq:step:1"],
            },
            {
                "data": {
                    "input": {"question": "hello"},
                    "output": ChatPromptValue(
                        messages=[
                            SystemMessage(content="You are Cat Agent 007"),
                            HumanMessage(content="hello"),
                        ]
                    ),
                },
                "event": "on_prompt_end",
                "metadata": {},
                "name": "my_template",
                "run_id": "",
                "parent_ids": [],
                "tags": ["my_template", "seq:step:1"],
            },
            {
                "data": {
                    "input": {
                        "prompts": ["System: You are Cat Agent 007\nHuman: hello"]
                    }
                },
                "event": "on_llm_start",
                "metadata": {},
                "name": "FakeStreamingListLLM",
                "run_id": "",
                "parent_ids": [],
                "tags": ["seq:step:2"],
            },
            {
                "data": {
                    "input": {
                        "prompts": ["System: You are Cat Agent 007\nHuman: hello"]
                    },
                    "output": {
                        "generations": [
                            [
                                {
                                    "generation_info": None,
                                    "text": "abc",
                                    "type": "Generation",
                                }
                            ]
                        ],
                        "llm_output": None,
                        # … (snippet truncated; the remaining expected events
                        # from lines 1731–1860 of the source file are not
                        # reproduced here)
Frequently Asked Questions
What does test_with_llm() do?
test_with_llm() is an async unit test in the langchain codebase, defined in libs/core/tests/unit_tests/runnables/test_runnable_events_v1.py. It composes a prompt | llm chain (a ChatPromptTemplate piped into a FakeStreamingListLLM) and asserts that chain.astream_events(..., version="v1") emits the expected sequence of events: on_chain_start, on_prompt_start, on_prompt_end, on_llm_start, and so on.
Where is test_with_llm() defined?
test_with_llm() is defined in libs/core/tests/unit_tests/runnables/test_runnable_events_v1.py at line 1731.
What does test_with_llm() call?
test_with_llm() calls 2 function(s): _assert_events_equal_allow_superset_metadata, _collect_events.