
_log_evaluation_feedback() — langchain Function Reference

Architecture documentation for the _log_evaluation_feedback() method of EvaluatorCallbackHandler in libs/core/langchain_core/tracers/evaluation.py from the langchain codebase.

Function · Python · Observability / Tracers · Calls: 1 · Called by: 1

Entity Profile

Dependency Diagram

graph TD
  9cdfe3a0_d3ac_e744_24ad_fa3959981970["_log_evaluation_feedback()"]
  d98d30f4_d5fd_24fc_54d0_e2f82eecc3cd["EvaluatorCallbackHandler"]
  9cdfe3a0_d3ac_e744_24ad_fa3959981970 -->|defined in| d98d30f4_d5fd_24fc_54d0_e2f82eecc3cd
  0c53e289_4919_bbc6_c165_0a9bf3c71d14["_evaluate_in_project()"]
  0c53e289_4919_bbc6_c165_0a9bf3c71d14 -->|calls| 9cdfe3a0_d3ac_e744_24ad_fa3959981970
  b0dc41a9_2b64_0617_60e8_135a3bffaff5["_select_eval_results()"]
  9cdfe3a0_d3ac_e744_24ad_fa3959981970 -->|calls| b0dc41a9_2b64_0617_60e8_135a3bffaff5
  style 9cdfe3a0_d3ac_e744_24ad_fa3959981970 fill:#6366f1,stroke:#818cf8,color:#fff

Relationship Graph
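
The dependency diagram above captures three relationships: _log_evaluation_feedback() is defined as a method of EvaluatorCallbackHandler, it is called by _evaluate_in_project(), and it delegates result normalization to _select_eval_results(). As a rough illustration of that call chain (the body of _evaluate_in_project() is not reproduced on this page, so the sketch below is an assumption, not the actual source):

    # Hypothetical sketch of the call chain shown in the diagram.
    # _evaluate_in_project()'s real body is not shown on this page;
    # this illustrates the relationship, it is not the source.
    def _evaluate_in_project(self, run, evaluator):
        # Run the evaluator against the traced run (details omitted).
        evaluator_response = evaluator.evaluate_run(run)
        # Fan the response out to LangSmith feedback entries
        # (see the Source Code section below).
        self._log_evaluation_feedback(evaluator_response, run)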

Source Code

libs/core/langchain_core/tracers/evaluation.py lines 178–203

    def _log_evaluation_feedback(
        self,
        evaluator_response: EvaluationResult | EvaluationResults,
        run: Run,
        source_run_id: UUID | None = None,
    ) -> list[EvaluationResult]:
        results = self._select_eval_results(evaluator_response)
        for res in results:
            source_info_: dict[str, Any] = {}
            if res.evaluator_info:
                source_info_ = {**res.evaluator_info, **source_info_}
            run_id_ = getattr(res, "target_run_id", None)
            if run_id_ is None:
                run_id_ = run.id
            self.client.create_feedback(
                run_id_,
                res.key,
                score=res.score,
                value=res.value,
                comment=res.comment,
                correction=res.correction,
                source_info=source_info_,
                source_run_id=res.source_run_id or source_run_id,
                feedback_source_type=langsmith.schemas.FeedbackSourceType.MODEL,
            )
        return results
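
The method flattens the evaluator response into a list of EvaluationResult objects, then, for each result, merges any evaluator_info into the feedback's source_info, resolves the target run id (res.target_run_id if the evaluator addressed a specific run, otherwise the traced run's own id), and posts the feedback to LangSmith. The one in-codebase call is to _select_eval_results(), whose body is not shown here; a minimal sketch of the normalization it must perform, inferred from the call site above, might look like this:

    # Hypothetical sketch of _select_eval_results(); the real
    # implementation may differ. Given how _log_evaluation_feedback()
    # uses it, it must accept either a single EvaluationResult or an
    # EvaluationResults batch and return a flat list.
    from langsmith.evaluation import EvaluationResult, EvaluationResults

    def _select_eval_results(
        self,
        results: EvaluationResult | EvaluationResults,
    ) -> list[EvaluationResult]:
        if isinstance(results, EvaluationResult):
            # A single result: wrap it in a one-element list.
            return [results]
        # Assume the batch form carries its items under a "results" key.
        return list(results["results"])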

Domain

Observability

Subdomains

Tracers

Frequently Asked Questions

What does _log_evaluation_feedback() do?
_log_evaluation_feedback() normalizes an evaluator's response (a single EvaluationResult or a batch of EvaluationResults) into a list via _select_eval_results(), then records each result as feedback on the corresponding LangSmith run through client.create_feedback() and returns the list of results. A standalone example of the underlying client call follows this FAQ.
Where is _log_evaluation_feedback() defined?
_log_evaluation_feedback() is defined as a method of EvaluatorCallbackHandler in libs/core/langchain_core/tracers/evaluation.py, starting at line 178.
What does _log_evaluation_feedback() call?
Within the codebase graph, _log_evaluation_feedback() calls one function: _select_eval_results(). It also invokes create_feedback() on the LangSmith client.
What calls _log_evaluation_feedback()?
_log_evaluation_feedback() is called by one function: _evaluate_in_project().
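
Because create_feedback() is part of the public LangSmith client, the same logging can be reproduced outside the handler. Below is a minimal standalone sketch; the run id, feedback key, and values are placeholders, not taken from the source above:

    # Standalone illustration of the client call made inside
    # _log_evaluation_feedback(). All values here are placeholders.
    from uuid import UUID

    import langsmith
    from langsmith.schemas import FeedbackSourceType

    client = langsmith.Client()
    client.create_feedback(
        UUID("00000000-0000-0000-0000-000000000000"),  # target run id
        "correctness",  # feedback key, e.g. the evaluator's metric name
        score=1.0,  # numeric score assigned by the evaluator
        comment="Answer matched the reference.",
        source_info={"evaluator": "exact_match"},
        feedback_source_type=FeedbackSourceType.MODEL,
    )

Setting feedback_source_type to MODEL, as the handler does, records the feedback as model-generated rather than human-provided.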

Analyze Your Own Codebase

Get architecture documentation, dependency graphs, and domain analysis for your codebase in minutes.

Try Supermodel Free