_run_llm() — langchain Function Reference
Architecture documentation for the _run_llm() function in runner_utils.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    916eedb8_7d10_72b9_9829_105f8ada65ab["_run_llm()"]
    8253c602_7d0c_9195_a7e1_3e9b19304131["runner_utils.py"]
    916eedb8_7d10_72b9_9829_105f8ada65ab -->|defined in| 8253c602_7d0c_9195_a7e1_3e9b19304131
    385c1e91_d947_1192_8746_ee1dd66ceb54["_run_llm_or_chain()"]
    385c1e91_d947_1192_8746_ee1dd66ceb54 -->|calls| 916eedb8_7d10_72b9_9829_105f8ada65ab
    cb0a59f9_bf61_2368_e170_04f16da99179["_get_prompt()"]
    916eedb8_7d10_72b9_9829_105f8ada65ab -->|calls| cb0a59f9_bf61_2368_e170_04f16da99179
    c56a3c0a_b0e2_287c_e948_c9d9eb18b351["_get_messages()"]
    916eedb8_7d10_72b9_9829_105f8ada65ab -->|calls| c56a3c0a_b0e2_287c_e948_c9d9eb18b351
    style 916eedb8_7d10_72b9_9829_105f8ada65ab fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain/langchain_classic/smith/evaluation/runner_utils.py lines 861–926
def _run_llm(
    llm: BaseLanguageModel,
    inputs: dict[str, Any],
    callbacks: Callbacks,
    *,
    tags: list[str] | None = None,
    input_mapper: Callable[[dict], Any] | None = None,
    metadata: dict[str, Any] | None = None,
) -> str | BaseMessage:
    """Run the language model on the example.

    Args:
        llm: The language model to run.
        inputs: The input dictionary.
        callbacks: The callbacks to use during the run.
        tags: Optional tags to add to the run.
        input_mapper: Optional function mapping an Example's inputs into the
            format the model expects.
        metadata: Optional metadata to add to the run.

    Returns:
        The model output: a string or a BaseMessage.

    Raises:
        ValueError: If the LLM type is unsupported.
        InputFormatError: If the input format is invalid.
    """
    # Most of this is legacy code; we could probably remove a lot of it.
    if input_mapper is not None:
        prompt_or_messages = input_mapper(inputs)
        if isinstance(prompt_or_messages, str) or (
            isinstance(prompt_or_messages, list)
            and all(isinstance(msg, BaseMessage) for msg in prompt_or_messages)
        ):
            llm_output: str | BaseMessage = llm.invoke(
                prompt_or_messages,
                config=RunnableConfig(
                    callbacks=callbacks,
                    tags=tags or [],
                    metadata=metadata or {},
                ),
            )
        else:
            msg = (
                "Input mapper returned invalid format: "
                f" {prompt_or_messages}"
                "\nExpected a single string or list of chat messages."
            )
            raise InputFormatError(msg)
    else:
        try:
            llm_prompts = _get_prompt(inputs)
            llm_output = llm.invoke(
                llm_prompts,
                config=RunnableConfig(
                    callbacks=callbacks,
                    tags=tags or [],
                    metadata=metadata or {},
                ),
            )
        except InputFormatError:
            llm_inputs = _get_messages(inputs)
            llm_output = llm.invoke(
                **llm_inputs,
                config=RunnableConfig(callbacks=callbacks, metadata=metadata or {}),
            )
    return llm_output
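The input_mapper branch above accepts only a plain string or a list of chat messages and raises InputFormatError otherwise. The validation can be sketched standalone as below; `BaseMessage`, `InputFormatError`, and `validate_mapped_input` are lightweight stand-ins for illustration, not the langchain classes.

```python
# Standalone sketch of _run_llm()'s input-mapper validation.
# BaseMessage and InputFormatError here are hypothetical stand-ins,
# not imports from langchain.
from dataclasses import dataclass


class InputFormatError(ValueError):
    """Raised when mapped inputs are neither a string nor chat messages."""


@dataclass
class BaseMessage:
    content: str


def validate_mapped_input(prompt_or_messages):
    """Mirror the isinstance checks _run_llm() applies after input_mapper."""
    if isinstance(prompt_or_messages, str) or (
        isinstance(prompt_or_messages, list)
        and all(isinstance(m, BaseMessage) for m in prompt_or_messages)
    ):
        return prompt_or_messages
    msg = (
        "Input mapper returned invalid format: "
        f" {prompt_or_messages}"
        "\nExpected a single string or list of chat messages."
    )
    raise InputFormatError(msg)
```

Note that a list mixing messages with other types fails the `all(...)` check and is rejected, the same as a bare dict.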
Frequently Asked Questions
What does _run_llm() do?
_run_llm() runs a language model on a single example's inputs. It converts the inputs into either a completion-style prompt string or a list of chat messages (optionally via a user-supplied input_mapper) and returns the model's output, a str or BaseMessage. It is defined in libs/langchain/langchain_classic/smith/evaluation/runner_utils.py.
Where is _run_llm() defined?
_run_llm() is defined in libs/langchain/langchain_classic/smith/evaluation/runner_utils.py at line 861.
What does _run_llm() call?
_run_llm() calls two helper functions: _get_prompt, which builds a completion-style prompt from the inputs, and _get_messages, which builds chat-style messages as a fallback.
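When no input_mapper is given, _run_llm() tries _get_prompt first and falls back to _get_messages if the inputs don't fit the prompt shape. That control flow can be sketched as below; `invoke_with_fallback` and the `InputFormatError` stand-in are hypothetical names for illustration, not the langchain API.

```python
class InputFormatError(ValueError):
    """Stand-in for langchain's InputFormatError (illustrative only)."""


def invoke_with_fallback(inputs, get_prompt, get_messages, invoke):
    # Mirror _run_llm()'s else-branch: try to build and run a
    # completion-style prompt; if the inputs don't fit that shape,
    # fall back to chat-style messages.
    try:
        prompt = get_prompt(inputs)
        return invoke(prompt)
    except InputFormatError:
        return invoke(get_messages(inputs))
```

The real function passes a RunnableConfig (callbacks, tags, metadata) to each invoke call, which this sketch omits.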
What calls _run_llm()?
_run_llm() has one caller: _run_llm_or_chain(), which dispatches each dataset example to either an LLM or a chain.