generation.py — langchain Source File
Architecture documentation for generation.py, a Python source file in the langchain codebase. It has 3 imports and 0 dependents.
Entity Profile
Dependency Diagram
graph LR
    f6b7d3d5_cc5d_7670_342a_fb0bc7ef1c38["generation.py"]
    8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3["typing"]
    f6b7d3d5_cc5d_7670_342a_fb0bc7ef1c38 --> 8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3
    36cce5da_d805_04c3_7e86_e1b4dd49b497["langchain_core.load"]
    f6b7d3d5_cc5d_7670_342a_fb0bc7ef1c38 --> 36cce5da_d805_04c3_7e86_e1b4dd49b497
    053c6d65_7a74_9819_7c2a_c7357c95d2b8["langchain_core.utils._merge"]
    f6b7d3d5_cc5d_7670_342a_fb0bc7ef1c38 --> 053c6d65_7a74_9819_7c2a_c7357c95d2b8
    style f6b7d3d5_cc5d_7670_342a_fb0bc7ef1c38 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
"""Generation output schema."""
from __future__ import annotations
from typing import Any, Literal
from langchain_core.load import Serializable
from langchain_core.utils._merge import merge_dicts
class Generation(Serializable):
"""A single text generation output.
Generation represents the response from an "old-fashioned" LLM (string-in,
string-out) that generates regular text (not chat messages).
This model is used internally by chat model and will eventually be mapped to a more
general `LLMResult` object, and then projected into an `AIMessage` object.
LangChain users working with chat models will usually access information via
`AIMessage` (returned from runnable interfaces) or `LLMResult` (available via
callbacks). Please refer to `AIMessage` and `LLMResult` for more information.
"""
text: str
"""Generated text output."""
generation_info: dict[str, Any] | None = None
"""Raw response from the provider.
May include things like the reason for finishing or token log probabilities.
"""
type: Literal["Generation"] = "Generation"
"""Type is used exclusively for serialization purposes.
Set to `'Generation'` for this class.
"""
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
return True
@classmethod
def get_lc_namespace(cls) -> list[str]:
"""Get the namespace of the LangChain object.
Returns:
`["langchain", "schema", "output"]`
"""
return ["langchain", "schema", "output"]
class GenerationChunk(Generation):
"""`GenerationChunk`, which can be concatenated with other `Generation` chunks."""
def __add__(self, other: GenerationChunk) -> GenerationChunk:
"""Concatenate two `GenerationChunk` objects.
Args:
other: Another `GenerationChunk` to concatenate with.
Raises:
TypeError: If other is not a `GenerationChunk`.
Returns:
A new `GenerationChunk` concatenated from self and other.
"""
if isinstance(other, GenerationChunk):
generation_info = merge_dicts(
self.generation_info or {},
other.generation_info or {},
)
return GenerationChunk(
text=self.text + other.text,
generation_info=generation_info or None,
)
msg = f"unsupported operand type(s) for +: '{type(self)}' and '{type(other)}'"
raise TypeError(msg)
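The `__add__` override is what lets streaming code fold partial outputs together with the `+` operator: `text` fields are concatenated and `generation_info` dicts are merged via `merge_dicts`. A minimal usage sketch, assuming `GenerationChunk` is re-exported from `langchain_core.outputs` (as in current langchain-core releases) and using made-up provider metadata keys:

from langchain_core.outputs import GenerationChunk

# Chunks as they might arrive from a streaming string-in/string-out LLM call.
chunks = [
    GenerationChunk(text="Hello", generation_info={"model": "demo-llm"}),
    GenerationChunk(text=", world"),
    GenerationChunk(text="!", generation_info={"finish_reason": "stop"}),
]

# __add__ concatenates text and merges generation_info via merge_dicts.
merged = chunks[0]
for chunk in chunks[1:]:
    merged = merged + chunk

print(merged.text)             # Hello, world!
print(merged.generation_info)  # {'model': 'demo-llm', 'finish_reason': 'stop'}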
Domain
CoreAbstractions
Subdomains
- Serialization
Classes
- Generation
- GenerationChunk
Dependencies
- langchain_core.load
- langchain_core.utils._merge
- typing
Source
libs/core/langchain_core/outputs/generation.py
Frequently Asked Questions
What does generation.py do?
generation.py defines the output schema for plain-text LLM completions: the `Generation` class and its streamable `GenerationChunk` subclass. It is written in Python and belongs to the CoreAbstractions domain, Serialization subdomain, of the langchain codebase.
What does generation.py depend on?
generation.py imports three modules: langchain_core.load, langchain_core.utils._merge, and typing.
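Of those imports, `langchain_core.utils._merge.merge_dicts` is the piece that powers `GenerationChunk.__add__`: it combines two `generation_info` dicts key by key. A minimal sketch of the disjoint-key case (key names are illustrative; see the helper's source for how conflicting keys are resolved):

from langchain_core.utils._merge import merge_dicts

left = {"model": "demo-llm"}       # metadata from an earlier chunk (illustrative keys)
right = {"finish_reason": "stop"}  # metadata from the final chunk

# Keys that appear on only one side are carried over unchanged.
print(merge_dicts(left, right))    # {'model': 'demo-llm', 'finish_reason': 'stop'}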
Where is generation.py in the architecture?
generation.py is located at libs/core/langchain_core/outputs/generation.py (domain: CoreAbstractions, subdomain: Serialization, directory: libs/core/langchain_core/outputs).
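Because `is_lc_serializable()` returns `True` and `get_lc_namespace()` pins the namespace to `["langchain", "schema", "output"]`, `Generation` objects can be round-tripped with the helpers in `langchain_core.load`. A hedged sketch, assuming the `dumpd` and `load` helpers available in recent langchain-core releases:

from langchain_core.load import dumpd, load
from langchain_core.outputs import Generation

gen = Generation(text="4", generation_info={"finish_reason": "stop"})

# dumpd emits a plain dict tagged with the class's namespace
# (["langchain", "schema", "output"]) and its type marker ("Generation").
payload = dumpd(gen)

# load reconstructs an equivalent Generation from that dict.
restored = load(payload)
assert restored == gen  # pydantic models compare field by field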