
_complete_llm_run() — langchain Function Reference

Architecture documentation for the _complete_llm_run() function in libs/core/langchain_core/tracers/core.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  eb070888_82fd_5fcb_9ec5_0050da03bd4a["_complete_llm_run()"]
  70348e44_de0f_ccb4_c06a_8453289ed93e["_TracerCore"]
  eb070888_82fd_5fcb_9ec5_0050da03bd4a -->|defined in| 70348e44_de0f_ccb4_c06a_8453289ed93e
  66392709_99c4_1a2e_dfac_aed34eebe5f5["_get_run()"]
  eb070888_82fd_5fcb_9ec5_0050da03bd4a -->|calls| 66392709_99c4_1a2e_dfac_aed34eebe5f5
  style eb070888_82fd_5fcb_9ec5_0050da03bd4a fill:#6366f1,stroke:#818cf8,color:#fff


Source Code

libs/core/langchain_core/tracers/core.py lines 275–303

    def _complete_llm_run(self, response: LLMResult, run_id: UUID) -> Run:
        llm_run = self._get_run(run_id, run_type={"llm", "chat_model"})
        if getattr(llm_run, "outputs", None) is None:
            llm_run.outputs = {}
        else:
            llm_run.outputs = cast("dict[str, Any]", llm_run.outputs)
        if not llm_run.extra.get("__omit_auto_outputs", False):
            llm_run.outputs.update(response.model_dump())
        for i, generations in enumerate(response.generations):
            for j, generation in enumerate(generations):
                output_generation = llm_run.outputs["generations"][i][j]
                if "message" in output_generation:
                    output_generation["message"] = dumpd(
                        cast("ChatGeneration", generation).message
                    )
        llm_run.end_time = datetime.now(timezone.utc)
        llm_run.events.append({"name": "end", "time": llm_run.end_time})

        tool_call_count = 0
        for generations in response.generations:
            for generation in generations:
                if hasattr(generation, "message"):
                    msg = generation.message
                    if hasattr(msg, "tool_calls") and msg.tool_calls:
                        tool_call_count += len(msg.tool_calls)
        if tool_call_count > 0:
            llm_run.extra["tool_call_count"] = tool_call_count

        return llm_run


Frequently Asked Questions

What does _complete_llm_run() do?
_complete_llm_run() finalizes the tracer's record of an LLM run. It looks up the run by ID via _get_run(), merges the LLMResult into the run's outputs (unless the __omit_auto_outputs flag is set in the run's extra dict), serializes any chat messages with dumpd(), stamps the end time, appends an "end" event, and records a tool-call count when any generations contain tool calls.
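The __omit_auto_outputs guard shown in the source listing can be sketched with plain dicts. The `merge_outputs` helper below is illustrative only, not part of the langchain API:

```python
def merge_outputs(outputs, extra, response_dump):
    # Mirrors the guard in _complete_llm_run(): initialize outputs if
    # missing, then auto-merge the response's model dump unless the
    # caller opted out via the __omit_auto_outputs flag in extra.
    if outputs is None:
        outputs = {}
    if not extra.get("__omit_auto_outputs", False):
        outputs.update(response_dump)
    return outputs


print(merge_outputs(None, {}, {"generations": []}))
# → {'generations': []}
print(merge_outputs(None, {"__omit_auto_outputs": True}, {"generations": []}))
# → {}
```

The flag lets a caller that has already written custom outputs suppress the automatic merge while the rest of the finalization (end time, events) still runs.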
Where is _complete_llm_run() defined?
_complete_llm_run() is defined in libs/core/langchain_core/tracers/core.py at line 275.
What does _complete_llm_run() call?
_complete_llm_run() calls one function: _get_run().
