test_prompt_with_llm() — langchain Function Reference
Architecture documentation for the test_prompt_with_llm() function in test_runnable.py from the langchain codebase.
Entity Profile
test_prompt_with_llm() is an async test function defined in libs/core/tests/unit_tests/runnables/test_runnable.py (lines 1994–2186). Its only tracked outbound call is abatch().
Dependency Diagram
graph TD
    f1012a38_ff36_4def_e0f2_df496716b208["test_prompt_with_llm()"]
    26df6ad8_0189_51d0_c3c1_6c3248893ff5["test_runnable.py"]
    f1012a38_ff36_4def_e0f2_df496716b208 -->|defined in| 26df6ad8_0189_51d0_c3c1_6c3248893ff5
    8652094c_ec57_c551_fc44_9566d00cf872["abatch()"]
    f1012a38_ff36_4def_e0f2_df496716b208 -->|calls| 8652094c_ec57_c551_fc44_9566d00cf872
    style f1012a38_ff36_4def_e0f2_df496716b208 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/core/tests/unit_tests/runnables/test_runnable.py, lines 1994–2186 (excerpt)
async def test_prompt_with_llm(
    mocker: MockerFixture, snapshot: SnapshotAssertion
) -> None:
    prompt = (
        SystemMessagePromptTemplate.from_template("You are a nice assistant.")
        + "{question}"
    )
    llm = FakeListLLM(responses=["foo", "bar"])
    chain = prompt | llm

    assert isinstance(chain, RunnableSequence)
    assert chain.first == prompt
    assert chain.middle == []
    assert chain.last == llm
    assert dumps(chain, pretty=True) == snapshot

    # Test invoke
    prompt_spy = mocker.spy(prompt.__class__, "ainvoke")
    llm_spy = mocker.spy(llm.__class__, "ainvoke")
    tracer = FakeTracer()
    assert (
        await chain.ainvoke({"question": "What is your name?"}, {"callbacks": [tracer]})
        == "foo"
    )
    assert prompt_spy.call_args.args[1] == {"question": "What is your name?"}
    assert llm_spy.call_args.args[1] == ChatPromptValue(
        messages=[
            SystemMessage(content="You are a nice assistant."),
            HumanMessage(content="What is your name?"),
        ]
    )
    assert tracer.runs == snapshot
    mocker.stop(prompt_spy)
    mocker.stop(llm_spy)

    # Test batch
    prompt_spy = mocker.spy(prompt.__class__, "abatch")
    llm_spy = mocker.spy(llm.__class__, "abatch")
    tracer = FakeTracer()
    assert await chain.abatch(
        [
            {"question": "What is your name?"},
            {"question": "What is your favorite color?"},
        ],
        {"callbacks": [tracer]},
    ) == ["bar", "foo"]
    assert prompt_spy.call_args.args[1] == [
        {"question": "What is your name?"},
        {"question": "What is your favorite color?"},
    ]
    assert llm_spy.call_args.args[1] == [
        ChatPromptValue(
            messages=[
                SystemMessage(content="You are a nice assistant."),
                HumanMessage(content="What is your name?"),
            ]
        ),
        ChatPromptValue(
            messages=[
                SystemMessage(content="You are a nice assistant."),
                HumanMessage(content="What is your favorite color?"),
            ]
        ),
    ]
    assert tracer.runs == snapshot
    mocker.stop(prompt_spy)
    mocker.stop(llm_spy)

    # Test stream
    prompt_spy = mocker.spy(prompt.__class__, "ainvoke")
    llm_spy = mocker.spy(llm.__class__, "astream")
    tracer = FakeTracer()
    assert [
        token
        async for token in chain.astream(
            {"question": "What is your name?"}, {"callbacks": [tracer]}
        )
    ] == ["bar"]
    assert prompt_spy.call_args.args[1] == {"question": "What is your name?"}
    assert llm_spy.call_args.args[1] == ChatPromptValue(
        messages=[
            SystemMessage(content="You are a nice assistant."),
            HumanMessage(content="What is your name?"),
        ]
    )
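One detail worth calling out in the listing above: the assertions read call_args.args[1], not args[0], because mocker.spy() is applied to the class (prompt.__class__, llm.__class__), so the spied callable is the unbound method and the instance itself lands in args[0]. A minimal sketch of that pattern, using a hypothetical Greeter class and assuming pytest-mock is installed:

from pytest_mock import MockerFixture


class Greeter:
    def greet(self, name: str) -> str:
        return f"hello {name}"


def test_spy_on_class_method(mocker: MockerFixture) -> None:
    greeter = Greeter()
    spy = mocker.spy(Greeter, "greet")  # spy on the class, as the langchain test does
    assert greeter.greet("world") == "hello world"
    assert spy.call_args.args[0] is greeter  # args[0] is the instance (self)
    assert spy.call_args.args[1] == "world"  # args[1] is the actual input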
Frequently Asked Questions
What does test_prompt_with_llm() do?
test_prompt_with_llm() is an async unit test in the langchain codebase, defined in libs/core/tests/unit_tests/runnables/test_runnable.py. It composes a chat prompt with a FakeListLLM into a RunnableSequence (prompt | llm), checks the chain's structure and serialized form against a snapshot, and then exercises ainvoke, abatch, and astream, using pytest-mock spies and a FakeTracer to verify the inputs each step receives and the traced runs.
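The composition under test can be reproduced in isolation. A minimal sketch, assuming FakeListLLM and SystemMessagePromptTemplate are importable from the public langchain_core packages shown below (the test file itself may pull them from other paths):

import asyncio

from langchain_core.language_models import FakeListLLM
from langchain_core.prompts import SystemMessagePromptTemplate


async def main() -> None:
    # Same composition as the test: a system prompt plus a human "{question}" slot,
    # piped into a deterministic fake LLM.
    prompt = (
        SystemMessagePromptTemplate.from_template("You are a nice assistant.")
        + "{question}"
    )
    llm = FakeListLLM(responses=["foo", "bar"])
    chain = prompt | llm  # a RunnableSequence with the prompt first and the llm last

    # The first queued response ("foo") comes back from ainvoke, as the test asserts.
    print(await chain.ainvoke({"question": "What is your name?"}))


if __name__ == "__main__":
    asyncio.run(main())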
Where is test_prompt_with_llm() defined?
test_prompt_with_llm() is defined in libs/core/tests/unit_tests/runnables/test_runnable.py at line 1994.
What does test_prompt_with_llm() call?
According to the dependency graph, test_prompt_with_llm() has one tracked outbound call: abatch(). The source excerpt above also shows it invoking ainvoke() and astream() on the chain and spying on the prompt and LLM classes.
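For reference, abatch() takes a list of inputs, runs each through the sequence, and returns the outputs in input order. A minimal sketch under the same import assumptions as the earlier example:

import asyncio

from langchain_core.language_models import FakeListLLM
from langchain_core.prompts import SystemMessagePromptTemplate


async def main() -> None:
    prompt = (
        SystemMessagePromptTemplate.from_template("You are a nice assistant.")
        + "{question}"
    )
    chain = prompt | FakeListLLM(responses=["foo", "bar"])

    # Two inputs fan out through the chain; outputs come back in input order,
    # each drawn from the fake LLM's response queue.
    results = await chain.abatch(
        [
            {"question": "What is your name?"},
            {"question": "What is your favorite color?"},
        ]
    )
    print(results)


if __name__ == "__main__":
    asyncio.run(main())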