_load_run_evaluators() — langchain Function Reference
Architecture documentation for the _load_run_evaluators() function in runner_utils.py from the langchain codebase.
Dependency Diagram
graph TD
    5e72e350_cbb1_d312_db46_9ac4a7cf909f["_load_run_evaluators()"]
    8253c602_7d0c_9195_a7e1_3e9b19304131["runner_utils.py"]
    5e72e350_cbb1_d312_db46_9ac4a7cf909f -->|defined in| 8253c602_7d0c_9195_a7e1_3e9b19304131
    ae9076b6_e76d_5271_0240_412b70e62fda["_setup_evaluation()"]
    ae9076b6_e76d_5271_0240_412b70e62fda -->|calls| 5e72e350_cbb1_d312_db46_9ac4a7cf909f
    a089196b_8c1b_51e2_42d9_a8bd76f8a276["_get_keys()"]
    5e72e350_cbb1_d312_db46_9ac4a7cf909f -->|calls| a089196b_8c1b_51e2_42d9_a8bd76f8a276
    563aa52b_6aee_8625_107a_9d26008bcf24["_construct_run_evaluator()"]
    5e72e350_cbb1_d312_db46_9ac4a7cf909f -->|calls| 563aa52b_6aee_8625_107a_9d26008bcf24
    style 5e72e350_cbb1_d312_db46_9ac4a7cf909f fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain/langchain_classic/smith/evaluation/runner_utils.py lines 622–691
def _load_run_evaluators(
    config: smith_eval.RunEvalConfig,
    run_type: str,
    data_type: DataType,
    example_outputs: list[str] | None,
    run_inputs: list[str] | None,
    run_outputs: list[str] | None,
) -> list[RunEvaluator]:
    """Load run evaluators from a configuration.

    Args:
        config: Configuration for the run evaluators.
        run_type: The type of run.
        data_type: The type of dataset used in the run.
        example_outputs: The example outputs.
        run_inputs: The input keys for the run.
        run_outputs: The output keys for the run.

    Returns:
        A list of run evaluators.
    """
    run_evaluators = []
    input_key, prediction_key, reference_key = None, None, None
    if config.evaluators or (
        config.custom_evaluators
        and any(isinstance(e, StringEvaluator) for e in config.custom_evaluators)
    ):
        input_key, prediction_key, reference_key = _get_keys(
            config,
            run_inputs,
            run_outputs,
            example_outputs,
        )
    for eval_config in config.evaluators:
        run_evaluator = _construct_run_evaluator(
            eval_config,
            config.eval_llm,
            run_type,
            data_type,
            example_outputs,
            reference_key,
            input_key,
            prediction_key,
        )
        run_evaluators.append(run_evaluator)
    custom_evaluators = config.custom_evaluators or []
    for custom_evaluator in custom_evaluators:
        if isinstance(custom_evaluator, RunEvaluator):
            run_evaluators.append(custom_evaluator)
        elif isinstance(custom_evaluator, StringEvaluator):
            run_evaluators.append(
                smith_eval.StringRunEvaluatorChain.from_run_and_data_type(
                    custom_evaluator,
                    run_type,
                    data_type,
                    input_key=input_key,
                    prediction_key=prediction_key,
                    reference_key=reference_key,
                ),
            )
        elif callable(custom_evaluator):
            run_evaluators.append(run_evaluator_dec(custom_evaluator))
        else:
            msg = (  # type: ignore[unreachable]
                f"Unsupported custom evaluator: {custom_evaluator}."
                f" Expected RunEvaluator or StringEvaluator."
            )
            raise ValueError(msg)  # noqa: TRY004
    return run_evaluators
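Usage Sketch
For orientation, a minimal sketch of how a caller such as _setup_evaluation() might invoke this function. This is an assumption-laden illustration, not code from the repository: the import paths and the DataType value are guessed from the file path above and the langsmith SDK, while the RunEvalConfig fields used (evaluators, custom_evaluators) are those visible in the source listing.

# Hypothetical usage sketch; import paths may differ across langchain versions.
from langchain_classic.smith import evaluation as smith_eval
from langchain_classic.smith.evaluation.runner_utils import _load_run_evaluators
from langsmith.schemas import DataType

config = smith_eval.RunEvalConfig(
    evaluators=["qa"],        # built-in evaluator configurations
    custom_evaluators=[],     # RunEvaluator | StringEvaluator | plain callable
)
run_evaluators = _load_run_evaluators(
    config,
    run_type="chain",               # the type of run being evaluated
    data_type=DataType.kv,          # dataset type (kv / llm / chat)
    example_outputs=["answer"],     # output keys on dataset examples
    run_inputs=["question"],        # input keys produced by the run
    run_outputs=["answer"],         # output keys produced by the run
)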
Frequently Asked Questions
What does _load_run_evaluators() do?
_load_run_evaluators() builds the list of RunEvaluator instances used to score a LangSmith evaluation run. It resolves the input, prediction, and reference keys via _get_keys() whenever string evaluators are configured, constructs each configured evaluator via _construct_run_evaluator(), and normalizes custom evaluators: RunEvaluator instances are used as-is, StringEvaluator instances are wrapped in a StringRunEvaluatorChain, bare callables are wrapped with the run_evaluator decorator, and anything else raises a ValueError.
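As an illustration of the three accepted custom evaluator forms, here is a hypothetical sketch. The class and function names are invented for this example; RunEvaluator, EvaluationResult, and StringEvaluator are the real base types from langsmith and langchain.

# Hypothetical examples of the three custom_evaluators forms the function accepts.
from langsmith.evaluation import EvaluationResult, RunEvaluator
from langchain.evaluation.schema import StringEvaluator

class ExactRunEvaluator(RunEvaluator):
    # A RunEvaluator subclass is appended to the list unchanged.
    def evaluate_run(self, run, example=None) -> EvaluationResult:
        return EvaluationResult(key="exact", score=1.0)

class ExactStringEvaluator(StringEvaluator):
    # A StringEvaluator is wrapped in a StringRunEvaluatorChain so it can
    # read the resolved input/prediction/reference keys.
    def _evaluate_strings(self, *, prediction, reference=None, input=None, **kwargs):
        return {"score": float(prediction == reference)}

def length_evaluator(run, example):
    # A bare callable is wrapped with the run_evaluator decorator.
    return EvaluationResult(key="length", score=1.0)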
Where is _load_run_evaluators() defined?
_load_run_evaluators() is defined in libs/langchain/langchain_classic/smith/evaluation/runner_utils.py at line 622.
What does _load_run_evaluators() call?
_load_run_evaluators() directly calls two helper functions: _get_keys() (to resolve the input, prediction, and reference keys) and _construct_run_evaluator() (to build each configured evaluator). It also wraps custom evaluators via smith_eval.StringRunEvaluatorChain.from_run_and_data_type() and the run_evaluator_dec decorator.
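The listing shows only the call site of _get_keys(), not its body. Based on that call site alone, a hypothetical sketch of its contract (assumed behavior, not the real implementation; the config fields input_key, prediction_key, and reference_key exist on RunEvalConfig):

# Hypothetical sketch of the _get_keys() contract, inferred from its call site:
# resolve (input_key, prediction_key, reference_key) for string evaluators,
# preferring explicit keys on the config and falling back to a lone key.
def _get_keys_sketch(config, run_inputs, run_outputs, example_outputs):
    input_key = config.input_key or (
        run_inputs[0] if run_inputs and len(run_inputs) == 1 else None
    )
    prediction_key = config.prediction_key or (
        run_outputs[0] if run_outputs and len(run_outputs) == 1 else None
    )
    reference_key = config.reference_key or (
        example_outputs[0] if example_outputs and len(example_outputs) == 1 else None
    )
    return input_key, prediction_key, reference_key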
What calls _load_run_evaluators()?
_load_run_evaluators() is called by one function: _setup_evaluation().