_prepare_eval_run() — langchain Function Reference
Architecture documentation for the _prepare_eval_run() function in runner_utils.py from the langchain codebase.
Dependency Diagram
graph TD
    prepareEvalRun["_prepare_eval_run()"]
    runnerUtils["runner_utils.py"]
    prepare["prepare()"]
    wrapInChainFactory["_wrap_in_chain_factory()"]
    runOnDataset["run_on_dataset()"]
    prepareEvalRun -->|defined in| runnerUtils
    prepare -->|calls| prepareEvalRun
    prepareEvalRun -->|calls| wrapInChainFactory
    prepareEvalRun -->|calls| runOnDataset
    style prepareEvalRun fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain/langchain_classic/smith/evaluation/runner_utils.py lines 1022–1082
def _prepare_eval_run(
    client: Client,
    dataset_name: str,
    llm_or_chain_factory: MODEL_OR_CHAIN_FACTORY,
    project_name: str,
    project_metadata: dict[str, Any] | None = None,
    tags: list[str] | None = None,
    dataset_version: str | datetime | None = None,
) -> tuple[MCF, TracerSession, Dataset, list[Example]]:
    wrapped_model = _wrap_in_chain_factory(llm_or_chain_factory, dataset_name)
    dataset = client.read_dataset(dataset_name=dataset_name)
    examples = list(
        client.list_examples(dataset_id=dataset.id, as_of=dataset_version)
    )
    if not examples:
        msg = f"Dataset {dataset_name} has no example rows."
        raise ValueError(msg)
    modified_at = [ex.modified_at for ex in examples if ex.modified_at]
    # Should always be defined in practice when fetched,
    # but the typing permits None
    max_modified_at = max(modified_at) if modified_at else None
    inferred_version = max_modified_at.isoformat() if max_modified_at else None

    try:
        project_metadata = project_metadata or {}
        git_info = get_git_info()
        if git_info:
            project_metadata = {
                **project_metadata,
                "git": git_info,
            }

        project_metadata["dataset_version"] = inferred_version
        project = client.create_project(
            project_name,
            reference_dataset_id=dataset.id,
            project_extra={"tags": tags} if tags else {},
            metadata=project_metadata,
        )
    except (HTTPError, ValueError, LangSmithError) as e:
        if "already exists " not in str(e):
            raise
        uid = uuid.uuid4()
        example_msg = f"""
run_on_dataset(
    ...
    project_name="{project_name} - {uid}", # Update since {project_name} already exists
)
"""
        msg = (
            f"Test project {project_name} already exists. Please use a different name:"
            f"\n\n{example_msg}"
        )
        raise ValueError(msg) from e
    comparison_url = dataset.url + f"/compare?selectedSessions={project.id}"
    print(  # noqa: T201
        f"View the evaluation results for project '{project_name}'"
        f" at:\n{comparison_url}\n\n"
        f"View all tests for Dataset {dataset_name} at:\n{dataset.url}",
        flush=True,
    )
    return wrapped_model, project, dataset, examples
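Usage Example
_prepare_eval_run() is a private helper and is normally reached through the public evaluation entry points rather than called directly. The following is a minimal illustrative sketch of a direct call, assuming a configured LangSmith client; the dataset name, project name, and my_chain factory are placeholder assumptions, and the dataset must already exist with example rows:

from langsmith import Client

client = Client()  # assumes LANGSMITH_API_KEY is set in the environment

wrapped_model, project, dataset, examples = _prepare_eval_run(
    client=client,
    dataset_name="my-eval-dataset",         # placeholder dataset name
    llm_or_chain_factory=lambda: my_chain,  # hypothetical chain factory
    project_name="my-eval-project",         # placeholder project name
    tags=["nightly"],
)
print(f"Prepared {len(examples)} examples for project {project.name}")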
Frequently Asked Questions
What does _prepare_eval_run() do?
_prepare_eval_run() performs the shared setup for an evaluation run: it wraps the supplied LLM or chain factory via _wrap_in_chain_factory(), reads the named dataset and its examples from the LangSmith client (raising a ValueError if the dataset has no example rows), infers a dataset version from the examples' latest modified_at timestamp, creates a tracing project that references the dataset (attaching git info, tags, and metadata, and raising a ValueError with a suggested unique name if the project already exists), prints the comparison URLs, and returns the wrapped model, project, dataset, and examples.
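As a minimal sketch of the duplicate-project branch described above (assuming a configured client and a hypothetical chain factory; all names are placeholders):

import uuid
from langsmith import Client

client = Client()

try:
    _prepare_eval_run(
        client=client,
        dataset_name="my-eval-dataset",
        llm_or_chain_factory=lambda: my_chain,  # hypothetical factory
        project_name="my-eval-project",  # assume this project already exists
    )
except ValueError as err:
    # The message embeds a run_on_dataset() snippet suggesting a unique
    # suffix; a caller can retry the same way:
    retry_name = f"my-eval-project - {uuid.uuid4()}"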
Where is _prepare_eval_run() defined?
_prepare_eval_run() is defined in libs/langchain/langchain_classic/smith/evaluation/runner_utils.py at line 1022.
What does _prepare_eval_run() call?
_prepare_eval_run() makes one direct internal call, to _wrap_in_chain_factory(). The run_on_dataset() edge counted in the dependency graph above comes from the suggested-fix snippet embedded in its duplicate-project error message, not from an actual call.
What calls _prepare_eval_run()?
_prepare_eval_run() is called by one function: prepare().