llm_result.py — langchain Source File
Architecture documentation for llm_result.py, a Python source file in the langchain codebase. 6 imports, 0 dependents.
Entity Profile
Dependency Diagram
graph LR
  10cc7dcd_2563_25b1_fd99_24d1711b41bb["llm_result.py"]
  e874d8a4_cef0_9d0b_d1ee_84999c07cc2c["copy"]
  10cc7dcd_2563_25b1_fd99_24d1711b41bb --> e874d8a4_cef0_9d0b_d1ee_84999c07cc2c
  8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3["typing"]
  10cc7dcd_2563_25b1_fd99_24d1711b41bb --> 8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3
  6e58aaea_f08e_c099_3cc7_f9567bfb1ae7["pydantic"]
  10cc7dcd_2563_25b1_fd99_24d1711b41bb --> 6e58aaea_f08e_c099_3cc7_f9567bfb1ae7
  3bd68d31_2a9b_9448_d8ba_c386f0e45f29["langchain_core.outputs.chat_generation"]
  10cc7dcd_2563_25b1_fd99_24d1711b41bb --> 3bd68d31_2a9b_9448_d8ba_c386f0e45f29
  7f324a3e_1ccd_321c_fef1_09174d50081d["langchain_core.outputs.generation"]
  10cc7dcd_2563_25b1_fd99_24d1711b41bb --> 7f324a3e_1ccd_321c_fef1_09174d50081d
  51a3ca9a_d40e_993f_6547_51772aeeda98["langchain_core.outputs.run_info"]
  10cc7dcd_2563_25b1_fd99_24d1711b41bb --> 51a3ca9a_d40e_993f_6547_51772aeeda98
  style 10cc7dcd_2563_25b1_fd99_24d1711b41bb fill:#6366f1,stroke:#818cf8,color:#fff
Relationship Graph
Source Code
"""`LLMResult` class."""
from __future__ import annotations
from copy import deepcopy
from typing import Literal
from pydantic import BaseModel
from langchain_core.outputs.chat_generation import ChatGeneration, ChatGenerationChunk
from langchain_core.outputs.generation import Generation, GenerationChunk
from langchain_core.outputs.run_info import RunInfo
class LLMResult(BaseModel):
"""A container for results of an LLM call.
Both chat models and LLMs generate an `LLMResult` object. This object contains the
generated outputs and any additional information that the model provider wants to
return.
"""
generations: list[
list[Generation | ChatGeneration | GenerationChunk | ChatGenerationChunk]
]
"""Generated outputs.
The first dimension of the list represents completions for different input prompts.
The second dimension of the list represents different candidate generations for a
given prompt.
- When returned from **an LLM**, the type is `list[list[Generation]]`.
- When returned from a **chat model**, the type is `list[list[ChatGeneration]]`.
`ChatGeneration` is a subclass of `Generation` that has a field for a structured
chat message.
"""
llm_output: dict | None = None
"""For arbitrary LLM provider specific output.
This dictionary is a free-form dictionary that can contain any information that the
provider wants to return. It is not standardized and is provider-specific.
Users should generally avoid relying on this field and instead rely on accessing
relevant information from standardized fields present in AIMessage.
"""
run: list[RunInfo] | None = None
"""List of metadata info for model call for each input.
See `langchain_core.outputs.run_info.RunInfo` for details.
"""
type: Literal["LLMResult"] = "LLMResult"
"""Type is used exclusively for serialization purposes."""
def flatten(self) -> list[LLMResult]:
"""Flatten generations into a single list.
Unpack `list[list[Generation]] -> list[LLMResult]` where each returned
`LLMResult` contains only a single `Generation`. If token usage information is
available, it is kept only for the `LLMResult` corresponding to the top-choice
`Generation`, to avoid over-counting of token usage downstream.
Returns:
List of `LLMResult` objects where each returned `LLMResult` contains a
single `Generation`.
"""
llm_results = []
for i, gen_list in enumerate(self.generations):
# Avoid double counting tokens in OpenAICallback
if i == 0:
llm_results.append(
LLMResult(
generations=[gen_list],
llm_output=self.llm_output,
)
)
else:
if self.llm_output is not None:
llm_output = deepcopy(self.llm_output)
llm_output["token_usage"] = {}
else:
llm_output = None
llm_results.append(
LLMResult(
generations=[gen_list],
llm_output=llm_output,
)
)
return llm_results
def __eq__(self, other: object) -> bool:
"""Check for `LLMResult` equality by ignoring any metadata related to runs.
Args:
other: Another `LLMResult` object to compare against.
Returns:
`True` if the generations and `llm_output` are equal, `False` otherwise.
"""
if not isinstance(other, LLMResult):
return NotImplemented
return (
self.generations == other.generations
and self.llm_output == other.llm_output
)
__hash__ = None # type: ignore[assignment]
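The snippet below is a minimal usage sketch, not part of the file itself: it constructs an `LLMResult` by hand to show the prompt-by-candidate shape of `generations`, how `flatten()` keeps `token_usage` only on the first per-prompt result, and how `__eq__` ignores run metadata. The prompt texts, token counts, and run IDs are invented for illustration.

from uuid import uuid4

from langchain_core.outputs import Generation, LLMResult, RunInfo

# Two input prompts; the second prompt has two candidate generations.
# All texts and token counts are made up for the example.
result = LLMResult(
    generations=[
        [Generation(text="Paris")],
        [Generation(text="4"), Generation(text="four")],
    ],
    llm_output={"token_usage": {"total_tokens": 42}},
)

# flatten() returns one LLMResult per input prompt. Token usage is kept only
# on the first result so downstream accounting does not double-count it.
flat = result.flatten()
assert len(flat) == 2
assert flat[0].llm_output == {"token_usage": {"total_tokens": 42}}
assert flat[1].llm_output == {"token_usage": {}}

# Equality ignores run metadata: identical generations and llm_output compare
# equal even though only one side carries RunInfo entries.
with_runs = LLMResult(
    generations=result.generations,
    llm_output=result.llm_output,
    run=[RunInfo(run_id=uuid4()), RunInfo(run_id=uuid4())],
)
assert result == with_runs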
Domain
- CoreAbstractions
Subdomains
- Serialization
Classes
- LLMResult
Dependencies
- copy
- langchain_core.outputs.chat_generation
- langchain_core.outputs.generation
- langchain_core.outputs.run_info
- pydantic
- typing
Source
- libs/core/langchain_core/outputs/llm_result.py
Frequently Asked Questions
What does llm_result.py do?
llm_result.py defines the `LLMResult` class, a Pydantic model that packages the outputs of an LLM or chat model call: per-prompt candidate generations, optional provider-specific `llm_output`, and optional per-prompt `RunInfo` metadata. It belongs to the CoreAbstractions domain, Serialization subdomain of the langchain codebase.
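In application code the class is typically imported from the package-level `langchain_core.outputs` module rather than from this file directly. A minimal, illustrative construction (the text is a placeholder):

from langchain_core.outputs import Generation, LLMResult

result = LLMResult(generations=[[Generation(text="placeholder")]])
print(result.type)  # "LLMResult", used only for serialization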
What does llm_result.py depend on?
llm_result.py imports 6 modules: copy, langchain_core.outputs.chat_generation, langchain_core.outputs.generation, langchain_core.outputs.run_info, pydantic, typing.
Where is llm_result.py in the architecture?
llm_result.py is located at libs/core/langchain_core/outputs/llm_result.py (domain: CoreAbstractions, subdomain: Serialization, directory: libs/core/langchain_core/outputs).