_setup_evaluation() — langchain Function Reference

Architecture documentation for the _setup_evaluation() function in runner_utils.py from the langchain codebase.

Type: Function · Language: Python · Domain: LangChainCore / Runnables · Calls: 1 · Called by: 1

Entity Profile

Dependency Diagram

graph TD
  ae9076b6_e76d_5271_0240_412b70e62fda["_setup_evaluation()"]
  8253c602_7d0c_9195_a7e1_3e9b19304131["runner_utils.py"]
  ae9076b6_e76d_5271_0240_412b70e62fda -->|defined in| 8253c602_7d0c_9195_a7e1_3e9b19304131
  00d82cfb_ba59_4f67_e504_1faad0617f06["prepare()"]
  00d82cfb_ba59_4f67_e504_1faad0617f06 -->|calls| ae9076b6_e76d_5271_0240_412b70e62fda
  5e72e350_cbb1_d312_db46_9ac4a7cf909f["_load_run_evaluators()"]
  ae9076b6_e76d_5271_0240_412b70e62fda -->|calls| 5e72e350_cbb1_d312_db46_9ac4a7cf909f
  style ae9076b6_e76d_5271_0240_412b70e62fda fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain/langchain_classic/smith/evaluation/runner_utils.py lines 441–468

def _setup_evaluation(
    llm_or_chain_factory: MCF,
    examples: list[Example],
    evaluation: smith_eval.RunEvalConfig | None,
    data_type: DataType,
) -> list[RunEvaluator] | None:
    """Configure the evaluators to run on the results of the chain."""
    if evaluation:
        if isinstance(llm_or_chain_factory, BaseLanguageModel):
            run_inputs, run_outputs = None, None
            run_type = "llm"
        else:
            run_type = "chain"
            chain = llm_or_chain_factory()
            run_inputs = chain.input_keys if isinstance(chain, Chain) else None
            run_outputs = chain.output_keys if isinstance(chain, Chain) else None
        run_evaluators = _load_run_evaluators(
            evaluation,
            run_type,
            data_type,
            list(examples[0].outputs) if examples[0].outputs else None,
            run_inputs,
            run_outputs,
        )
    else:
        # TODO: Create a default helpfulness evaluator
        run_evaluators = None
    return run_evaluators
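
The branch above decides the run signature before loading evaluators: a bare language model is tagged `"llm"` with no declared keys, while a factory is invoked to build the chain and, if it is a `Chain`, its `input_keys`/`output_keys` are forwarded. Below is a minimal, self-contained sketch of that dispatch logic using stub classes in place of langchain's `BaseLanguageModel` and `Chain` (the stub names and keys are illustrative assumptions, not langchain APIs):

```python
class FakeLLM:
    """Stand-in for a BaseLanguageModel instance."""


class FakeChain:
    """Stand-in for a Chain with declared input/output keys."""
    input_keys = ["question"]
    output_keys = ["answer"]


def infer_run_signature(llm_or_chain_factory):
    """Mirror the run_type / run_inputs / run_outputs branch above."""
    if isinstance(llm_or_chain_factory, FakeLLM):
        # A plain language model: no declared input/output keys.
        return "llm", None, None
    # Otherwise treat the argument as a factory and call it to build the chain.
    chain = llm_or_chain_factory()
    run_inputs = chain.input_keys if isinstance(chain, FakeChain) else None
    run_outputs = chain.output_keys if isinstance(chain, FakeChain) else None
    return "chain", run_inputs, run_outputs


print(infer_run_signature(FakeLLM()))   # ('llm', None, None)
print(infer_run_signature(FakeChain))   # ('chain', ['question'], ['answer'])
```

Note the asymmetry the sketch preserves: an LLM is passed as an instance, while a chain is passed as a zero-argument factory that must be called before its keys can be read.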


Frequently Asked Questions

What does _setup_evaluation() do?
_setup_evaluation() configures the evaluators to run on the results of a chain or language model: given an evaluation config, it determines the run type ("llm" or "chain") and any input/output keys, then loads the matching run evaluators via _load_run_evaluators. It returns None when no evaluation config is provided. It is defined in libs/langchain/langchain_classic/smith/evaluation/runner_utils.py.
Where is _setup_evaluation() defined?
_setup_evaluation() is defined in libs/langchain/langchain_classic/smith/evaluation/runner_utils.py at line 441.
What does _setup_evaluation() call?
_setup_evaluation() calls 1 function(s): _load_run_evaluators.
What calls _setup_evaluation()?
_setup_evaluation() is called by 1 function(s): prepare.
