test_prompt_with_llm_parser() — langchain Function Reference
Architecture documentation for the test_prompt_with_llm_parser() function in test_runnable.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    d3e93cb3_c29c_f455_2d43_c53eed4a194a["test_prompt_with_llm_parser()"]
    26df6ad8_0189_51d0_c3c1_6c3248893ff5["test_runnable.py"]
    d3e93cb3_c29c_f455_2d43_c53eed4a194a -->|defined in| 26df6ad8_0189_51d0_c3c1_6c3248893ff5
    8652094c_ec57_c551_fc44_9566d00cf872["abatch()"]
    d3e93cb3_c29c_f455_2d43_c53eed4a194a -->|calls| 8652094c_ec57_c551_fc44_9566d00cf872
    style d3e93cb3_c29c_f455_2d43_c53eed4a194a fill:#6366f1,stroke:#818cf8,color:#fff
Relationship Graph
Source Code
libs/core/tests/unit_tests/runnables/test_runnable.py, lines 2190–2464 (excerpt)
async def test_prompt_with_llm_parser(
    mocker: MockerFixture, snapshot: SnapshotAssertion
) -> None:
    prompt = (
        SystemMessagePromptTemplate.from_template("You are a nice assistant.")
        + "{question}"
    )
    llm = FakeStreamingListLLM(responses=["bear, dog, cat", "tomato, lettuce, onion"])
    parser = CommaSeparatedListOutputParser()
    chain = prompt | llm | parser

    assert isinstance(chain, RunnableSequence)
    assert chain.first == prompt
    assert chain.middle == [llm]
    assert chain.last == parser
    assert dumps(chain, pretty=True) == snapshot

    # Test invoke
    prompt_spy = mocker.spy(prompt.__class__, "ainvoke")
    llm_spy = mocker.spy(llm.__class__, "ainvoke")
    parser_spy = mocker.spy(parser.__class__, "ainvoke")
    tracer = FakeTracer()
    assert await chain.ainvoke(
        {"question": "What is your name?"}, {"callbacks": [tracer]}
    ) == ["bear", "dog", "cat"]
    assert prompt_spy.call_args.args[1] == {"question": "What is your name?"}
    assert llm_spy.call_args.args[1] == ChatPromptValue(
        messages=[
            SystemMessage(content="You are a nice assistant."),
            HumanMessage(content="What is your name?"),
        ]
    )
    assert parser_spy.call_args.args[1] == "bear, dog, cat"
    assert tracer.runs == snapshot
    mocker.stop(prompt_spy)
    mocker.stop(llm_spy)
    mocker.stop(parser_spy)

    # Test batch
    prompt_spy = mocker.spy(prompt.__class__, "abatch")
    llm_spy = mocker.spy(llm.__class__, "abatch")
    parser_spy = mocker.spy(parser.__class__, "abatch")
    tracer = FakeTracer()
    assert await chain.abatch(
        [
            {"question": "What is your name?"},
            {"question": "What is your favorite color?"},
        ],
        {"callbacks": [tracer]},
    ) == [["tomato", "lettuce", "onion"], ["bear", "dog", "cat"]]
    assert prompt_spy.call_args.args[1] == [
        {"question": "What is your name?"},
        {"question": "What is your favorite color?"},
    ]
    assert llm_spy.call_args.args[1] == [
        ChatPromptValue(
            messages=[
                SystemMessage(content="You are a nice assistant."),
                HumanMessage(content="What is your name?"),
            ]
        ),
        ChatPromptValue(
            messages=[
                SystemMessage(content="You are a nice assistant."),
                HumanMessage(content="What is your favorite color?"),
            ]
        ),
    ]
    assert parser_spy.call_args.args[1] == [
        "tomato, lettuce, onion",
        "bear, dog, cat",
    ]
    assert len(tracer.runs) == 2
    assert all(
        run.name == "RunnableSequence"
        and run.run_type == "chain"
        and len(run.child_runs) == 3
        for run in tracer.runs
    )
    mocker.stop(prompt_spy)
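The pattern the test exercises is three stages composed with the `|` operator into a sequence, where each stage's output feeds the next: a prompt formats the input, a fake LLM returns canned text, and a parser splits it into a list. The following is a minimal, dependency-free sketch of that composition idea; the names here (`MiniRunnable`, the lambdas) are invented for illustration and are not LangChain APIs.

```python
class MiniRunnable:
    """A toy stand-in for a Runnable: wraps a function and composes with |."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Composing two runnables yields a new one that chains them,
        # mirroring how `prompt | llm | parser` builds a sequence.
        return MiniRunnable(lambda value: other.invoke(self.invoke(value)))


# Stage 1: "prompt" formats the input dict into a string.
prompt = MiniRunnable(lambda d: f"You are a nice assistant. {d['question']}")

# Stage 2: "llm" returns a canned response, like FakeStreamingListLLM.
llm = MiniRunnable(lambda _: "bear, dog, cat")

# Stage 3: "parser" splits comma-separated text into a list,
# like CommaSeparatedListOutputParser.
parser = MiniRunnable(lambda text: [item.strip() for item in text.split(",")])

chain = prompt | llm | parser
print(chain.invoke({"question": "What is your name?"}))  # ['bear', 'dog', 'cat']
```

The real `RunnableSequence` adds batching, async variants (`ainvoke`, `abatch`), streaming, and callback tracing on top of this same composition idea, which is exactly what the test above verifies.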
Frequently Asked Questions
What does test_prompt_with_llm_parser() do?
test_prompt_with_llm_parser() is an async unit test in the langchain codebase, defined in libs/core/tests/unit_tests/runnables/test_runnable.py. It composes a prompt template, a fake streaming LLM, and a CommaSeparatedListOutputParser into a RunnableSequence, then verifies the chain's structure, its serialized snapshot, and its ainvoke and abatch behavior, using spies on each stage and a FakeTracer to check callback runs.
Where is test_prompt_with_llm_parser() defined?
test_prompt_with_llm_parser() is defined in libs/core/tests/unit_tests/runnables/test_runnable.py at line 2190.
What does test_prompt_with_llm_parser() call?
According to the dependency graph, test_prompt_with_llm_parser() calls one tracked function: abatch(). The test body also calls helpers such as ainvoke(), mocker.spy(), and dumps() that are not tracked in the graph.