_errored_llm_run() — langchain Function Reference

Architecture documentation for the _errored_llm_run() function in core.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  34d25378_0f81_566b_54ab_1e4753830303["_errored_llm_run()"]
  70348e44_de0f_ccb4_c06a_8453289ed93e["_TracerCore"]
  34d25378_0f81_566b_54ab_1e4753830303 -->|defined in| 70348e44_de0f_ccb4_c06a_8453289ed93e
  66392709_99c4_1a2e_dfac_aed34eebe5f5["_get_run()"]
  34d25378_0f81_566b_54ab_1e4753830303 -->|calls| 66392709_99c4_1a2e_dfac_aed34eebe5f5
  c29d131b_982f_a6b1_9b0a_0c44ecc968f4["_get_stacktrace()"]
  34d25378_0f81_566b_54ab_1e4753830303 -->|calls| c29d131b_982f_a6b1_9b0a_0c44ecc968f4
  style 34d25378_0f81_566b_54ab_1e4753830303 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/core/langchain_core/tracers/core.py lines 305–327

    def _errored_llm_run(
        self, error: BaseException, run_id: UUID, response: LLMResult | None = None
    ) -> Run:
        llm_run = self._get_run(run_id, run_type={"llm", "chat_model"})
        llm_run.error = self._get_stacktrace(error)
        if response:
            if getattr(llm_run, "outputs", None) is None:
                llm_run.outputs = {}
            else:
                llm_run.outputs = cast("dict[str, Any]", llm_run.outputs)
            if not llm_run.extra.get("__omit_auto_outputs", False):
                llm_run.outputs.update(response.model_dump())
            for i, generations in enumerate(response.generations):
                for j, generation in enumerate(generations):
                    output_generation = llm_run.outputs["generations"][i][j]
                    if "message" in output_generation:
                        output_generation["message"] = dumpd(
                            cast("ChatGeneration", generation).message
                        )
        llm_run.end_time = datetime.now(timezone.utc)
        llm_run.events.append({"name": "error", "time": llm_run.end_time})

        return llm_run

Frequently Asked Questions

What does _errored_llm_run() do?
_errored_llm_run() marks an in-flight LLM (or chat model) run as failed. It looks up the run by run_id via _get_run(), records the formatted stacktrace on the run's error field via _get_stacktrace(), merges any partial LLMResult into the run's outputs (serializing each generation's message, and honoring the __omit_auto_outputs flag), stamps end_time with the current UTC time, appends an "error" event, and returns the updated Run.
Where is _errored_llm_run() defined?
_errored_llm_run() is defined in libs/core/langchain_core/tracers/core.py at line 305.
What does _errored_llm_run() call?
_errored_llm_run() calls two functions: _get_run() and _get_stacktrace().
