_collect_metrics() — langchain Function Reference

Architecture documentation for the _collect_metrics() function in runner_utils.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  983ed1c6_8485_7927_a832_f9e88ee9bb16["_collect_metrics()"]
  3aaa6e94_b6a8_1c13_86d0_1709a1d93909["_DatasetRunContainer"]
  983ed1c6_8485_7927_a832_f9e88ee9bb16 -->|defined in| 3aaa6e94_b6a8_1c13_86d0_1709a1d93909
  7bd0a459_a7f0_719c_faf9_2cf0ffd65a8c["_collect_test_results()"]
  7bd0a459_a7f0_719c_faf9_2cf0ffd65a8c -->|calls| 983ed1c6_8485_7927_a832_f9e88ee9bb16
  style 983ed1c6_8485_7927_a832_f9e88ee9bb16 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain/langchain_classic/smith/evaluation/runner_utils.py lines 1151–1178

    def _collect_metrics(self) -> tuple[dict[str, _RowResult], dict[str, Run]]:
        all_eval_results: dict = {}
        all_runs: dict = {}
        for c in self.configs:
            for callback in cast("list", c["callbacks"]):
                if isinstance(callback, EvaluatorCallbackHandler):
                    eval_results = callback.logged_eval_results
                    for (_, example_id), v in eval_results.items():
                        all_eval_results.setdefault(str(example_id), {}).update(
                            {"feedback": v},
                        )
                elif isinstance(callback, LangChainTracer):
                    run = callback.latest_run
                    execution_time = (
                        (run.end_time - run.start_time).total_seconds()
                        if run and run.end_time
                        else None
                    )
                    run_id = str(run.id) if run else None
                    all_eval_results.setdefault(str(callback.example_id), {}).update(
                        {
                            "execution_time": execution_time,
                            "run_id": run_id,
                            "run": run,
                        },
                    )
                    all_runs[str(callback.example_id)] = run
        return cast("dict[str, _RowResult]", all_eval_results), all_runs
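
Below is a minimal, self-contained sketch of the same aggregation pattern, intended only to illustrate how the method merges evaluator feedback and tracer run metadata into per-example rows. FakeEvaluatorCallback, FakeTracerCallback, and FakeRun are hypothetical stand-ins for the real EvaluatorCallbackHandler, LangChainTracer, and LangSmith Run types; they exist only to make the example runnable in isolation.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class FakeRun:
    # Hypothetical stand-in for a LangSmith Run: only the fields the
    # aggregation below actually reads.
    id: str
    start_time: datetime
    end_time: Optional[datetime]


@dataclass
class FakeEvaluatorCallback:
    # Stand-in for EvaluatorCallbackHandler; keyed by (run_id, example_id)
    # like logged_eval_results on the real handler.
    logged_eval_results: dict


@dataclass
class FakeTracerCallback:
    # Stand-in for LangChainTracer: remembers the example it traced and
    # the most recent run it produced.
    example_id: str
    latest_run: Optional[FakeRun]


def collect_metrics(configs: list) -> tuple:
    """Merge feedback and run metadata into one dict keyed by example id."""
    all_eval_results: dict = {}
    all_runs: dict = {}
    for c in configs:
        for callback in c["callbacks"]:
            if isinstance(callback, FakeEvaluatorCallback):
                for (_, example_id), v in callback.logged_eval_results.items():
                    all_eval_results.setdefault(str(example_id), {}).update(
                        {"feedback": v},
                    )
            elif isinstance(callback, FakeTracerCallback):
                run = callback.latest_run
                execution_time = (
                    (run.end_time - run.start_time).total_seconds()
                    if run and run.end_time
                    else None
                )
                all_eval_results.setdefault(str(callback.example_id), {}).update(
                    {
                        "execution_time": execution_time,
                        "run_id": str(run.id) if run else None,
                        "run": run,
                    },
                )
                all_runs[str(callback.example_id)] = run
    return all_eval_results, all_runs


if __name__ == "__main__":
    start = datetime(2024, 1, 1, 12, 0, 0)
    run = FakeRun(id="run-1", start_time=start,
                  end_time=start + timedelta(seconds=2))
    configs = [
        {
            "callbacks": [
                FakeEvaluatorCallback(
                    {("run-1", "ex-1"): [{"key": "correctness", "score": 1}]}
                ),
                FakeTracerCallback(example_id="ex-1", latest_run=run),
            ],
        },
    ]
    results, runs = collect_metrics(configs)
    print(results["ex-1"]["execution_time"])  # 2.0
    print(results["ex-1"]["feedback"])        # [{'key': 'correctness', 'score': 1}]

Note how both branches key into all_eval_results by the stringified example id, so feedback from the evaluator callbacks and timing from the tracer land in the same row even though they arrive from different callbacks, possibly on different configs.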

Frequently Asked Questions

What does _collect_metrics() do?
_collect_metrics() is a method of the _DatasetRunContainer class in the langchain codebase, defined in libs/langchain/langchain_classic/smith/evaluation/runner_utils.py. It iterates over the callbacks attached to each run config, gathering evaluator feedback from EvaluatorCallbackHandler instances and run metadata (execution time, run id, and the Run object) from LangChainTracer instances, and returns both merged into dictionaries keyed by example id.
Where is _collect_metrics() defined?
_collect_metrics() is defined in libs/langchain/langchain_classic/smith/evaluation/runner_utils.py at line 1151.
What calls _collect_metrics()?
_collect_metrics() is called by one function: _collect_test_results().
