message.py — anthropic-sdk-python Source File
Architecture documentation for message.py, a Python file in the anthropic-sdk-python codebase. 8 imports, 3 dependents.
Entity Profile
Dependency Diagram
graph LR
  21a395d8_015e_6c08_b7bd_94e93c67aaf8["message.py"]
  aa80407b_6ce3_389a_8463_fe4caf9aa0bd["model.py"]
  21a395d8_015e_6c08_b7bd_94e93c67aaf8 --> aa80407b_6ce3_389a_8463_fe4caf9aa0bd
  a3d73c4a_4724_f0e3_fa64_0bb3dd55cd5e["usage.py"]
  21a395d8_015e_6c08_b7bd_94e93c67aaf8 --> a3d73c4a_4724_f0e3_fa64_0bb3dd55cd5e
  83325191_652d_66a8_5e97_acb665f88372["Usage"]
  21a395d8_015e_6c08_b7bd_94e93c67aaf8 --> 83325191_652d_66a8_5e97_acb665f88372
  699c911a_fb78_b883_732f_5ac6bf31d4a3["_models"]
  21a395d8_015e_6c08_b7bd_94e93c67aaf8 --> 699c911a_fb78_b883_732f_5ac6bf31d4a3
  f79bf88b_d25b_8452_5841_c144754741ff["stop_reason.py"]
  21a395d8_015e_6c08_b7bd_94e93c67aaf8 --> f79bf88b_d25b_8452_5841_c144754741ff
  713c0742_3b61_ba8f_1fce_05bd03381fe5["content_block.py"]
  21a395d8_015e_6c08_b7bd_94e93c67aaf8 --> 713c0742_3b61_ba8f_1fce_05bd03381fe5
  89ddcdd7_3ae1_4c7b_41bb_9f1e25f16875["typing"]
  21a395d8_015e_6c08_b7bd_94e93c67aaf8 --> 89ddcdd7_3ae1_4c7b_41bb_9f1e25f16875
  37c05070_ca59_d596_7250_de9d1939227f["typing_extensions"]
  21a395d8_015e_6c08_b7bd_94e93c67aaf8 --> 37c05070_ca59_d596_7250_de9d1939227f
  2eb1aa7f_c778_4d68_0a26_a102fadd9638["__init__.py"]
  2eb1aa7f_c778_4d68_0a26_a102fadd9638 --> 21a395d8_015e_6c08_b7bd_94e93c67aaf8
  3c1d116b_af29_e9f2_495b_ceebd1febd18["parsed_message.py"]
  3c1d116b_af29_e9f2_495b_ceebd1febd18 --> 21a395d8_015e_6c08_b7bd_94e93c67aaf8
  d0dcbd4b_c516_bdfd_b119_88ac7d8e1399["raw_message_start_event.py"]
  d0dcbd4b_c516_bdfd_b119_88ac7d8e1399 --> 21a395d8_015e_6c08_b7bd_94e93c67aaf8
  style 21a395d8_015e_6c08_b7bd_94e93c67aaf8 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

from typing import List, Optional
from typing_extensions import Literal

from .model import Model
from .usage import Usage
from .._models import BaseModel
from .stop_reason import StopReason
from .content_block import ContentBlock, ContentBlock as ContentBlock

__all__ = ["Message"]


class Message(BaseModel):
    id: str
    """Unique object identifier.

    The format and length of IDs may change over time.
    """

    content: List[ContentBlock]
    """Content generated by the model.

    This is an array of content blocks, each of which has a `type` that determines
    its shape.

    Example:

    ```json
    [{ "type": "text", "text": "Hi, I'm Claude." }]
    ```

    If the request input `messages` ended with an `assistant` turn, then the
    response `content` will continue directly from that last turn. You can use this
    to constrain the model's output.

    For example, if the input `messages` were:

    ```json
    [
      {
        "role": "user",
        "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
      },
      { "role": "assistant", "content": "The best answer is (" }
    ]
    ```

    Then the response `content` might be:

    ```json
    [{ "type": "text", "text": "B)" }]
    ```
    """

    model: Model
    """The model that will complete your prompt.

    See [models](https://docs.anthropic.com/en/docs/models-overview) for additional
    details and options.
    """

    role: Literal["assistant"]
    """Conversational role of the generated message.

    This will always be `"assistant"`.
    """

    stop_reason: Optional[StopReason] = None
    """The reason that we stopped.

    This may be one of the following values:

    - `"end_turn"`: the model reached a natural stopping point
    - `"max_tokens"`: we exceeded the requested `max_tokens` or the model's maximum
    - `"stop_sequence"`: one of your provided custom `stop_sequences` was generated
    - `"tool_use"`: the model invoked one or more tools
    - `"pause_turn"`: we paused a long-running turn. You may provide the response
      back as-is in a subsequent request to let the model continue.
    - `"refusal"`: when streaming classifiers intervene to handle potential policy
      violations

    In non-streaming mode this value is always non-null. In streaming mode, it is
    null in the `message_start` event and non-null otherwise.
    """

    stop_sequence: Optional[str] = None
    """Which custom stop sequence was generated, if any.

    This value will be a non-null string if one of your custom stop sequences was
    generated.
    """

    type: Literal["message"]
    """Object type.

    For Messages, this is always `"message"`.
    """

    usage: Usage
    """Billing and rate-limit usage.

    Anthropic's API bills and rate-limits by token counts, as tokens represent the
    underlying cost to our systems.

    Under the hood, the API transforms requests into a format suitable for the
    model. The model's output then goes through a parsing stage before becoming an
    API response. As a result, the token counts in `usage` will not match one-to-one
    with the exact visible content of an API request or response.

    For example, `output_tokens` will be non-zero, even for an empty string response
    from Claude.

    Total input tokens in a request is the summation of `input_tokens`,
    `cache_creation_input_tokens`, and `cache_read_input_tokens`.
    """
Domain
- AnthropicClient
Classes
- Message
Dependencies
- Usage
- _models
- content_block.py
- model.py
- stop_reason.py
- typing
- typing_extensions
- usage.py
Imported By
- __init__.py
- parsed_message.py
- raw_message_start_event.py
Frequently Asked Questions
What does message.py do?
message.py is a source file in the anthropic-sdk-python codebase, written in Python. It belongs to the AnthropicClient domain.
What does message.py depend on?
message.py imports 8 modules: Usage, _models, content_block.py, model.py, stop_reason.py, typing, typing_extensions, usage.py.
What files import message.py?
message.py is imported by 3 files: __init__.py, parsed_message.py, raw_message_start_event.py.
Where is message.py in the architecture?
message.py is located at src/anthropic/types/message.py (domain: AnthropicClient, directory: src/anthropic/types).
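Pulling the docstrings together, a consumer of a `Message`-shaped response might branch on `stop_reason` and total the input-token fields as the `usage` docstring describes. A minimal sketch using a plain dict in place of the parsed model (all field values are illustrative, including the model name):

```python
# A dict with the same shape as the Message model (values are illustrative).
message = {
    "id": "msg_123",
    "type": "message",
    "role": "assistant",
    "model": "example-model",  # illustrative, not a real model identifier
    "content": [{"type": "text", "text": "Hi, I'm Claude."}],
    "stop_reason": "end_turn",
    "stop_sequence": None,
    "usage": {
        "input_tokens": 10,
        "cache_creation_input_tokens": 3,
        "cache_read_input_tokens": 2,
        "output_tokens": 6,
    },
}

# Per the `usage` docstring, total input tokens is the sum of three fields.
u = message["usage"]
total_input = (
    u["input_tokens"] + u["cache_creation_input_tokens"] + u["cache_read_input_tokens"]
)

# `stop_reason` is always non-null in non-streaming responses; branch on it.
if message["stop_reason"] == "max_tokens":
    print("response was truncated")
else:
    print(f"stopped: {message['stop_reason']}, total input tokens: {total_input}")
```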