
_arun_llm() — langchain Function Reference

Architecture documentation for the _arun_llm() function in runner_utils.py from the langchain codebase.


Dependency Diagram

graph TD
  5dad9d8a_ecf9_b785_9a01_d927f6bea73e["_arun_llm()"]
  8253c602_7d0c_9195_a7e1_3e9b19304131["runner_utils.py"]
  5dad9d8a_ecf9_b785_9a01_d927f6bea73e -->|defined in| 8253c602_7d0c_9195_a7e1_3e9b19304131
  f3487723_0da7_fd94_b3c8_cdac21cf7e23["_arun_llm_or_chain()"]
  f3487723_0da7_fd94_b3c8_cdac21cf7e23 -->|calls| 5dad9d8a_ecf9_b785_9a01_d927f6bea73e
  cb0a59f9_bf61_2368_e170_04f16da99179["_get_prompt()"]
  5dad9d8a_ecf9_b785_9a01_d927f6bea73e -->|calls| cb0a59f9_bf61_2368_e170_04f16da99179
  c56a3c0a_b0e2_287c_e948_c9d9eb18b351["_get_messages()"]
  5dad9d8a_ecf9_b785_9a01_d927f6bea73e -->|calls| c56a3c0a_b0e2_287c_e948_c9d9eb18b351
  style 5dad9d8a_ecf9_b785_9a01_d927f6bea73e fill:#6366f1,stroke:#818cf8,color:#fff


Source Code

libs/langchain/langchain_classic/smith/evaluation/runner_utils.py lines 697–764

async def _arun_llm(
    llm: BaseLanguageModel,
    inputs: dict[str, Any],
    *,
    tags: list[str] | None = None,
    callbacks: Callbacks = None,
    input_mapper: Callable[[dict], Any] | None = None,
    metadata: dict[str, Any] | None = None,
) -> str | BaseMessage:
    """Asynchronously run the language model.

    Args:
        llm: The language model to run.
        inputs: The input dictionary.
        tags: Optional tags to add to the run.
        callbacks: Optional callbacks to use during the run.
        input_mapper: Optional function to map inputs to the expected format.
        metadata: Optional metadata to add to the run.

    Returns:
        The string or BaseMessage output of the language model.

    Raises:
        ValueError: If the LLM type is unsupported.
        InputFormatError: If the input format is invalid.
    """
    if input_mapper is not None:
        prompt_or_messages = input_mapper(inputs)
        if isinstance(prompt_or_messages, str) or (
            isinstance(prompt_or_messages, list)
            and all(isinstance(msg, BaseMessage) for msg in prompt_or_messages)
        ):
            return await llm.ainvoke(
                prompt_or_messages,
                config=RunnableConfig(
                    callbacks=callbacks,
                    tags=tags or [],
                    metadata=metadata or {},
                ),
            )
        msg = (
            "Input mapper returned invalid format"
            f" {prompt_or_messages}"
            "\nExpected a single string or list of chat messages."
        )
        raise InputFormatError(msg)

    try:
        prompt = _get_prompt(inputs)
        llm_output: str | BaseMessage = await llm.ainvoke(
            prompt,
            config=RunnableConfig(
                callbacks=callbacks,
                tags=tags or [],
                metadata=metadata or {},
            ),
        )
    except InputFormatError:
        llm_inputs = _get_messages(inputs)
        llm_output = await llm.ainvoke(
            **llm_inputs,
            config=RunnableConfig(
                callbacks=callbacks,
                tags=tags or [],
                metadata=metadata or {},
            ),
        )
    return llm_output
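The non-mapper branch above tries a prompt string first and falls back to chat messages when `_get_prompt()` raises `InputFormatError`. The sketch below isolates that control flow with toy stand-ins — `get_prompt`, `get_messages`, and the fake invoker are simplified assumptions, not langchain's actual helpers:

```python
class InputFormatError(Exception):
    """Raised when inputs cannot be coerced into the expected format."""


def get_prompt(inputs: dict) -> str:
    # Toy stand-in for _get_prompt(): expect a single "prompt" string.
    if isinstance(inputs.get("prompt"), str):
        return inputs["prompt"]
    raise InputFormatError(f"could not extract a prompt from {inputs}")


def get_messages(inputs: dict) -> list:
    # Toy stand-in for _get_messages(): expect a "messages" list.
    if isinstance(inputs.get("messages"), list):
        return inputs["messages"]
    raise InputFormatError(f"could not extract messages from {inputs}")


def run_llm(invoke, inputs: dict):
    # Same shape as _arun_llm's non-mapper branch: try the prompt path
    # first, and fall back to the messages path on InputFormatError.
    try:
        return invoke(get_prompt(inputs))
    except InputFormatError:
        return invoke(get_messages(inputs))


# A fake "model" that echoes whatever it was invoked with.
fake_invoke = lambda payload: f"echo:{payload}"
prompt_result = run_llm(fake_invoke, {"prompt": "hi"})
messages_result = run_llm(fake_invoke, {"messages": ["hi"]})
```

If neither extraction succeeds, the second `InputFormatError` propagates to the caller, mirroring the real function's behavior.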

Frequently Asked Questions

What does _arun_llm() do?
_arun_llm() asynchronously runs a language model over a single example's inputs. If an input_mapper is provided, it maps the inputs to a prompt string or a list of chat messages before calling llm.ainvoke(); otherwise it tries to extract a prompt string via _get_prompt(), falling back to chat messages via _get_messages() if that raises InputFormatError. It returns the model's str or BaseMessage output.
Where is _arun_llm() defined?
_arun_llm() is defined in libs/langchain/langchain_classic/smith/evaluation/runner_utils.py at line 697.
What does _arun_llm() call?
_arun_llm() calls 2 function(s): _get_messages, _get_prompt.
What calls _arun_llm()?
_arun_llm() is called by 1 function(s): _arun_llm_or_chain.
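The `input_mapper` parameter in the signature above lets callers adapt arbitrary dataset inputs into a format the model accepts. A minimal sketch, assuming a dataset whose rows carry a question field (the mapper name and row shape are hypothetical, not langchain API):

```python
# Hypothetical input_mapper: adapts a dataset example's inputs dict into
# the single prompt string that _arun_llm hands to llm.ainvoke(). The
# mapped value must be a str or a list of BaseMessage objects; anything
# else makes _arun_llm raise InputFormatError.
def question_mapper(row: dict) -> str:
    return f"Question: {row['question']}\nAnswer:"


prompt = question_mapper({"question": "What does _arun_llm() do?"})
```

When the mapped value is a plain string like this, `_arun_llm` invokes the model directly and skips the `_get_prompt`/`_get_messages` extraction entirely.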
