_evaluate_in_project() — langchain Function Reference

Architecture documentation for the _evaluate_in_project() function in evaluation.py from the langchain codebase.

Entity Profile

Type: Function · Language: Python · Domain: Observability · Subdomains: Tracers · Calls: 1 · Called by: 1

Dependency Diagram

graph TD
  0c53e289_4919_bbc6_c165_0a9bf3c71d14["_evaluate_in_project()"]
  d98d30f4_d5fd_24fc_54d0_e2f82eecc3cd["EvaluatorCallbackHandler"]
  0c53e289_4919_bbc6_c165_0a9bf3c71d14 -->|defined in| d98d30f4_d5fd_24fc_54d0_e2f82eecc3cd
  f8b5f1f4_e3b0_be12_bf7c_561a99a6e105["_persist_run()"]
  f8b5f1f4_e3b0_be12_bf7c_561a99a6e105 -->|calls| 0c53e289_4919_bbc6_c165_0a9bf3c71d14
  9cdfe3a0_d3ac_e744_24ad_fa3959981970["_log_evaluation_feedback()"]
  0c53e289_4919_bbc6_c165_0a9bf3c71d14 -->|calls| 9cdfe3a0_d3ac_e744_24ad_fa3959981970
  style 0c53e289_4919_bbc6_c165_0a9bf3c71d14 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/core/langchain_core/tracers/evaluation.py lines 118–160

    def _evaluate_in_project(self, run: Run, evaluator: langsmith.RunEvaluator) -> None:
        """Evaluate the run in the project.

        Args:
            run: The run to be evaluated.
            evaluator: The evaluator to use for evaluating the run.
        """
        try:
            if self.project_name is None:
                eval_result = self.client.evaluate_run(run, evaluator)
                eval_results = [eval_result]
            with tracing_v2_enabled(
                project_name=self.project_name, tags=["eval"], client=self.client
            ) as cb:
                reference_example = (
                    self.client.read_example(run.reference_example_id)
                    if run.reference_example_id
                    else None
                )
                evaluation_result = evaluator.evaluate_run(
                    # This is subclass, but getting errors for some reason
                    run,  # type: ignore[arg-type]
                    example=reference_example,
                )
                eval_results = self._log_evaluation_feedback(
                    evaluation_result,
                    run,
                    source_run_id=cb.latest_run.id if cb.latest_run else None,
                )
        except Exception:
            logger.exception(
                "Error evaluating run %s with %s",
                run.id,
                evaluator.__class__.__name__,
            )
            raise
        example_id = str(run.reference_example_id)
        with self.lock:
            for res in eval_results:
                run_id = str(getattr(res, "target_run_id", run.id))
                self.logged_eval_results.setdefault((run_id, example_id), []).append(
                    res
                )
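
For context, a minimal usage sketch: attaching an EvaluatorCallbackHandler as a callback is what causes _persist_run(), and in turn _evaluate_in_project(), to fire for each finished run. The NonEmptyOutputEvaluator class is hypothetical, and the handler's constructor arguments and the commented invoke() call are assumptions rather than excerpts from this page.

    from typing import Optional

    from langsmith.evaluation import EvaluationResult, RunEvaluator
    from langsmith.schemas import Example, Run

    from langchain_core.tracers.evaluation import EvaluatorCallbackHandler


    class NonEmptyOutputEvaluator(RunEvaluator):
        """Hypothetical evaluator: scores whether a run produced any output."""

        def evaluate_run(
            self, run: Run, example: Optional[Example] = None
        ) -> EvaluationResult:
            # run.outputs is empty until the traced run has finished.
            return EvaluationResult(
                key="non_empty_output", score=float(bool(run.outputs))
            )


    handler = EvaluatorCallbackHandler(evaluators=[NonEmptyOutputEvaluator()])

    # Passed via callbacks, the handler evaluates each finished run:
    # my_runnable.invoke(inputs, config={"callbacks": [handler]})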

Domain

Observability

Subdomains

Tracers

Called By

_persist_run(): the EvaluatorCallbackHandler method that dispatches each configured evaluator against a finished run.
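
Once evaluation completes, the results accumulated by _evaluate_in_project() can be read back from the handler. A minimal sketch, assuming the handler from the example above and that EvaluatorCallbackHandler exposes wait_for_futures() to drain any background evaluation work:

    # Results are keyed by (run_id, example_id), matching the final loop
    # of _evaluate_in_project() shown above.
    handler.wait_for_futures()
    for (run_id, example_id), results in handler.logged_eval_results.items():
        for res in results:
            print(run_id, example_id, res.key, res.score)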

Frequently Asked Questions

What does _evaluate_in_project() do?
_evaluate_in_project() evaluates a single traced run with a langsmith RunEvaluator: it fetches the run's reference example (if any), invokes the evaluator inside a tracing_v2_enabled context so the evaluation itself is traced, logs feedback via _log_evaluation_feedback(), and caches the results in logged_eval_results keyed by (run_id, example_id).
Where is _evaluate_in_project() defined?
_evaluate_in_project() is defined in libs/core/langchain_core/tracers/evaluation.py at line 118.
What does _evaluate_in_project() call?
_evaluate_in_project() calls one function: _log_evaluation_feedback().
What calls _evaluate_in_project()?
_evaluate_in_project() is called by one function: _persist_run().
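
The example= argument that _evaluate_in_project() passes to evaluate_run() enables reference-based scoring. A hedged sketch of an evaluator that uses it; the exact-match comparison is illustrative only:

    from typing import Optional

    from langsmith.evaluation import EvaluationResult, RunEvaluator
    from langsmith.schemas import Example, Run


    class ExactMatchEvaluator(RunEvaluator):
        """Illustrative evaluator comparing run outputs to the reference example."""

        def evaluate_run(
            self, run: Run, example: Optional[Example] = None
        ) -> EvaluationResult:
            if example is None or not example.outputs:
                # The run had no reference_example_id, so no example was fetched.
                return EvaluationResult(
                    key="exact_match", comment="no reference example"
                )
            return EvaluationResult(
                key="exact_match", score=float(run.outputs == example.outputs)
            )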
