
prepare() — langchain Function Reference

Architecture documentation for the prepare() classmethod of _DatasetRunContainer, defined in runner_utils.py in the langchain codebase.

Dependency Diagram

graph TD
  00d82cfb_ba59_4f67_e504_1faad0617f06["prepare()"]
  3aaa6e94_b6a8_1c13_86d0_1709a1d93909["_DatasetRunContainer"]
  00d82cfb_ba59_4f67_e504_1faad0617f06 -->|defined in| 3aaa6e94_b6a8_1c13_86d0_1709a1d93909
  1ac4f6b0_183e_bd75_7f40_81c2246416b6["arun_on_dataset()"]
  1ac4f6b0_183e_bd75_7f40_81c2246416b6 -->|calls| 00d82cfb_ba59_4f67_e504_1faad0617f06
  0522e7ee_f1e6_b6f7_6738_1dad72cfffba["run_on_dataset()"]
  0522e7ee_f1e6_b6f7_6738_1dad72cfffba -->|calls| 00d82cfb_ba59_4f67_e504_1faad0617f06
  9a9f493e_7864_c75d_ebe0_af192df494f6["_prepare_eval_run()"]
  00d82cfb_ba59_4f67_e504_1faad0617f06 -->|calls| 9a9f493e_7864_c75d_ebe0_af192df494f6
  c2ae8ee6_ba74_2f11_df16_cafb61b88f1e["_wrap_in_chain_factory()"]
  00d82cfb_ba59_4f67_e504_1faad0617f06 -->|calls| c2ae8ee6_ba74_2f11_df16_cafb61b88f1e
  ae9076b6_e76d_5271_0240_412b70e62fda["_setup_evaluation()"]
  00d82cfb_ba59_4f67_e504_1faad0617f06 -->|calls| ae9076b6_e76d_5271_0240_412b70e62fda
  42662eb3_17b6_0aaa_f09d_12aabcf769e7["_validate_example_inputs()"]
  00d82cfb_ba59_4f67_e504_1faad0617f06 -->|calls| 42662eb3_17b6_0aaa_f09d_12aabcf769e7
  style 00d82cfb_ba59_4f67_e504_1faad0617f06 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain/langchain_classic/smith/evaluation/runner_utils.py lines 1221–1293

    def prepare(
        cls,
        client: Client,
        dataset_name: str,
        llm_or_chain_factory: MODEL_OR_CHAIN_FACTORY,
        project_name: str | None,
        evaluation: smith_eval.RunEvalConfig | None = None,
        tags: list[str] | None = None,
        input_mapper: Callable[[dict], Any] | None = None,
        concurrency_level: int = 5,
        project_metadata: dict[str, Any] | None = None,
        revision_id: str | None = None,
        dataset_version: datetime | str | None = None,
    ) -> _DatasetRunContainer:
        project_name = project_name or name_generation.random_name()
        if revision_id:
            if not project_metadata:
                project_metadata = {}
            project_metadata.update({"revision_id": revision_id})
        wrapped_model, project, dataset, examples = _prepare_eval_run(
            client,
            dataset_name,
            llm_or_chain_factory,
            project_name,
            project_metadata=project_metadata,
            tags=tags,
            dataset_version=dataset_version,
        )
        tags = tags or []
        for k, v in (project.metadata.get("git") or {}).items():
            tags.append(f"git:{k}={v}")
        run_metadata = {"dataset_version": project.metadata["dataset_version"]}
        if revision_id:
            run_metadata["revision_id"] = revision_id
        wrapped_model = _wrap_in_chain_factory(llm_or_chain_factory)
        run_evaluators = _setup_evaluation(
            wrapped_model,
            examples,
            evaluation,
            dataset.data_type or DataType.kv,
        )
        _validate_example_inputs(examples[0], wrapped_model, input_mapper)
        progress_bar = progress.ProgressBarCallback(len(examples))
        configs = [
            RunnableConfig(
                callbacks=[
                    LangChainTracer(
                        project_name=project.name,
                        client=client,
                        example_id=example.id,
                    ),
                    EvaluatorCallbackHandler(
                        evaluators=run_evaluators or [],
                        client=client,
                        example_id=example.id,
                        max_concurrency=0,
                    ),
                    progress_bar,
                ],
                tags=tags,
                max_concurrency=concurrency_level,
                metadata=run_metadata,
            )
            for example in examples
        ]
        return cls(
            client=client,
            project=project,
            wrapped_model=wrapped_model,
            examples=examples,
            configs=configs,
            batch_evaluators=evaluation.batch_evaluators if evaluation else None,
        )
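
prepare() is an internal step: callers normally reach it through the public run_on_dataset() or arun_on_dataset() entry points, which build the container and then execute every example with the per-example configs shown above. Below is a minimal sketch of the synchronous path; the langchain.smith import path, the dataset name, and the echo runnable are assumptions for illustration, not part of the source listing.

    from langchain_core.runnables import RunnableLambda
    from langchain.smith import RunEvalConfig, run_on_dataset  # assumed import path
    from langsmith import Client


    def chain_factory():
        # Stand-in system under test: echoes the question back.
        # A factory is passed so _wrap_in_chain_factory() can hand each example a fresh instance.
        return RunnableLambda(lambda inputs: {"answer": inputs["question"]})


    results = run_on_dataset(
        client=Client(),
        dataset_name="my-eval-dataset",               # assumed dataset name
        llm_or_chain_factory=chain_factory,
        evaluation=RunEvalConfig(evaluators=["qa"]),  # optional; drives _setup_evaluation()
        concurrency_level=5,
        project_name=None,                            # None -> prepare() generates a random name
    )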

Frequently Asked Questions

What does prepare() do?
prepare() is a classmethod of _DatasetRunContainer that assembles everything needed for a dataset evaluation run: it resolves the project, dataset, and examples via _prepare_eval_run(), wraps the model or chain in a chain factory, sets up run evaluators from the evaluation config, validates the first example's inputs, and builds one RunnableConfig per example (wiring in a LangChainTracer, an EvaluatorCallbackHandler, and a progress bar callback) before returning the populated container.
Where is prepare() defined?
prepare() is defined in libs/langchain/langchain_classic/smith/evaluation/runner_utils.py at line 1221.
What does prepare() call?
prepare() calls four functions: _prepare_eval_run() (resolves the wrapped model, project, dataset, and examples), _wrap_in_chain_factory() (normalizes the model or chain into a chain factory), _setup_evaluation() (builds run evaluators from the evaluation config and dataset type), and _validate_example_inputs() (checks the first example's inputs against the wrapped model and input mapper).
What calls prepare()?
prepare() is called by two functions: run_on_dataset() and arun_on_dataset(), the synchronous and asynchronous entry points for evaluating a chain or LLM over a LangSmith dataset.
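
For completeness, a matching sketch of the asynchronous caller; as above, the import path and dataset name are assumptions. prepare() itself runs synchronously in both entry points; only the per-example execution differs.

    import asyncio

    from langchain_core.runnables import RunnableLambda
    from langchain.smith import arun_on_dataset  # assumed import path
    from langsmith import Client


    async def main() -> None:
        # The same prepare() step runs under the hood; examples are then executed concurrently.
        results = await arun_on_dataset(
            client=Client(),
            dataset_name="my-eval-dataset",  # assumed dataset name
            llm_or_chain_factory=lambda: RunnableLambda(lambda x: {"answer": x["question"]}),
            concurrency_level=5,
        )
        print(results)


    asyncio.run(main())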
