
test_seq_dict_prompt_llm() — langchain Function Reference

Architecture documentation for the test_seq_dict_prompt_llm() function, defined in libs/core/tests/unit_tests/runnables/test_runnable.py in the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  fn["test_seq_dict_prompt_llm()"]
  file["test_runnable.py"]
  fn -->|defined in| file
  invoke["invoke()"]
  fn -->|calls| invoke
  style fn fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/core/tests/unit_tests/runnables/test_runnable.py lines 2723–2800

def test_seq_dict_prompt_llm(
    mocker: MockerFixture, snapshot: SnapshotAssertion
) -> None:
    passthrough = mocker.Mock(side_effect=lambda x: x)

    retriever = FakeRetriever()

    prompt = (
        SystemMessagePromptTemplate.from_template("You are a nice assistant.")
        + """Context:
{documents}

Question:
{question}"""
    )

    chat = FakeListChatModel(responses=["foo, bar"])

    parser = CommaSeparatedListOutputParser()

    chain: Runnable = (
        {
            "question": RunnablePassthrough[str]() | passthrough,
            "documents": passthrough | retriever,
            "just_to_test_lambda": passthrough,
        }
        | prompt
        | chat
        | parser
    )

    assert repr(chain) == snapshot
    assert isinstance(chain, RunnableSequence)
    assert isinstance(chain.first, RunnableParallel)
    assert chain.middle == [prompt, chat]
    assert chain.last == parser
    assert dumps(chain, pretty=True) == snapshot

    # Test invoke
    prompt_spy = mocker.spy(prompt.__class__, "invoke")
    chat_spy = mocker.spy(chat.__class__, "invoke")
    parser_spy = mocker.spy(parser.__class__, "invoke")
    tracer = FakeTracer()
    assert chain.invoke("What is your name?", {"callbacks": [tracer]}) == [
        "foo",
        "bar",
    ]
    assert prompt_spy.call_args.args[1] == {
        "documents": [Document(page_content="foo"), Document(page_content="bar")],
        "question": "What is your name?",
        "just_to_test_lambda": "What is your name?",
    }
    assert chat_spy.call_args.args[1] == ChatPromptValue(
        messages=[
            SystemMessage(
                content="You are a nice assistant.",
                additional_kwargs={},
                response_metadata={},
            ),
            HumanMessage(
                content="Context:\n"
                "[Document(metadata={}, page_content='foo'), "
                "Document(metadata={}, page_content='bar')]\n"
                "\n"
                "Question:\n"
                "What is your name?",
                additional_kwargs={},
                response_metadata={},
            ),
        ]
    )
    assert parser_spy.call_args.args[1] == _any_id_ai_message(content="foo, bar")
    assert len([r for r in tracer.runs if r.parent_run_id is None]) == 1
    parent_run = next(r for r in tracer.runs if r.parent_run_id is None)
    assert len(parent_run.child_runs) == 4
    map_run = parent_run.child_runs[0]
    assert map_run.name == "RunnableParallel<question,documents,just_to_test_lambda>"
    assert len(map_run.child_runs) == 3
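The key behavior the test exercises is that a plain dict of runnables, when piped into a chain with `|`, is coerced into a parallel step: each value receives the same input, and the outputs are gathered into a dict under the corresponding keys before flowing into the prompt, chat model, and parser. The following is a minimal plain-Python sketch of that composition pattern, with no langchain dependency; all names here (`run_parallel`, `compose`, the fake components) are illustrative stand-ins, not langchain's API.

```python
def run_parallel(steps, value):
    """Feed the same input to every step; collect outputs under each key."""
    return {key: step(value) for key, step in steps.items()}


def compose(*steps):
    """Chain callables left to right, like the `|` operator in the test."""
    def chained(value):
        for step in steps:
            value = step(value)
        return value
    return chained


# Stand-ins for the test's components.
passthrough = lambda x: x
fake_retriever = lambda q: ["doc:foo", "doc:bar"]   # FakeRetriever stand-in
fake_prompt = lambda d: f"Context: {d['documents']}\nQuestion: {d['question']}"
fake_chat = lambda _prompt: "foo, bar"              # FakeListChatModel stand-in
parser = lambda text: [part.strip() for part in text.split(",")]

chain = compose(
    lambda q: run_parallel(
        {
            "question": passthrough,
            "documents": fake_retriever,
            "just_to_test_lambda": passthrough,
        },
        q,
    ),
    fake_prompt,
    fake_chat,
    parser,
)

print(chain("What is your name?"))  # ['foo', 'bar']
```

This mirrors the test's assertions: the parallel step produces the dict the prompt spy sees, the fake chat always answers "foo, bar", and the parser splits that into the final list.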

Frequently Asked Questions

What does test_seq_dict_prompt_llm() do?
test_seq_dict_prompt_llm() is a unit test in the langchain codebase, defined in libs/core/tests/unit_tests/runnables/test_runnable.py. It verifies that a dict of runnables piped into a prompt, chat model, and output parser composes into a RunnableSequence whose first step is a RunnableParallel, and that invoke() propagates the input through each step with the expected intermediate values and tracer runs.
Where is test_seq_dict_prompt_llm() defined?
test_seq_dict_prompt_llm() is defined in libs/core/tests/unit_tests/runnables/test_runnable.py at line 2723.
What does test_seq_dict_prompt_llm() call?
test_seq_dict_prompt_llm() calls one function: invoke().
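The final step of the chain under test turns the model's "foo, bar" reply into the list ["foo", "bar"]. A rough plain-Python approximation of that comma-separated parsing (a sketch of the behavior the test asserts, not the library's actual CommaSeparatedListOutputParser implementation):

```python
def parse_comma_separated(text: str) -> list[str]:
    """Split a comma-separated string into a list of trimmed items."""
    return [part.strip() for part in text.strip().split(",")]


print(parse_comma_separated("foo, bar"))  # ['foo', 'bar']
```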
