Message Class — anthropic-sdk-python Architecture
Architecture documentation for the Message class in message.py from the anthropic-sdk-python codebase.
Entity Profile
Dependency Diagram
```mermaid
graph TD
    01beb2f6_442c_779e_7c57_22078b50dcf4["Message"]
    17ce5647_6f06_0676_a4a5_e378a3f57cb1["BaseModel"]
    01beb2f6_442c_779e_7c57_22078b50dcf4 -->|extends| 17ce5647_6f06_0676_a4a5_e378a3f57cb1
    21a395d8_015e_6c08_b7bd_94e93c67aaf8["message.py"]
    01beb2f6_442c_779e_7c57_22078b50dcf4 -->|defined in| 21a395d8_015e_6c08_b7bd_94e93c67aaf8
```
Source Code
src/anthropic/types/message.py lines 15–117
````python
class Message(BaseModel):
    id: str
    """Unique object identifier.

    The format and length of IDs may change over time.
    """

    content: List[ContentBlock]
    """Content generated by the model.

    This is an array of content blocks, each of which has a `type` that determines
    its shape.

    Example:

    ```json
    [{ "type": "text", "text": "Hi, I'm Claude." }]
    ```

    If the request input `messages` ended with an `assistant` turn, then the
    response `content` will continue directly from that last turn. You can use this
    to constrain the model's output.

    For example, if the input `messages` were:

    ```json
    [
      {
        "role": "user",
        "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
      },
      { "role": "assistant", "content": "The best answer is (" }
    ]
    ```

    Then the response `content` might be:

    ```json
    [{ "type": "text", "text": "B)" }]
    ```
    """

    model: Model
    """
    The model that will complete your prompt.\n\nSee
    [models](https://docs.anthropic.com/en/docs/models-overview) for additional
    details and options.
    """

    role: Literal["assistant"]
    """Conversational role of the generated message.

    This will always be `"assistant"`.
    """

    stop_reason: Optional[StopReason] = None
    """The reason that we stopped.

    This may be one of the following values:

    - `"end_turn"`: the model reached a natural stopping point
    - `"max_tokens"`: we exceeded the requested `max_tokens` or the model's maximum
    - `"stop_sequence"`: one of your provided custom `stop_sequences` was generated
    - `"tool_use"`: the model invoked one or more tools
    - `"pause_turn"`: we paused a long-running turn. You may provide the response
      back as-is in a subsequent request to let the model continue.
    - `"refusal"`: when streaming classifiers intervene to handle potential policy
      violations

    In non-streaming mode this value is always non-null. In streaming mode, it is
    null in the `message_start` event and non-null otherwise.
    """

    stop_sequence: Optional[str] = None
    """Which custom stop sequence was generated, if any.

    This value will be a non-null string if one of your custom stop sequences was
    generated.
    """

    type: Literal["message"]
````
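The `content` docstring above describes prefilling: when the request `messages` end with an `assistant` turn, the returned `Message.content` continues directly from that turn, and `stop_reason` reports why generation stopped. The following is a minimal sketch, assuming `ANTHROPIC_API_KEY` is set in the environment; the model identifier and `max_tokens` value are illustrative assumptions, not part of message.py.

```python
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model identifier
    max_tokens=16,
    messages=[
        {
            "role": "user",
            "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun",
        },
        # Ending the input with an assistant turn constrains the continuation.
        {"role": "assistant", "content": "The best answer is ("},
    ],
)

# `message` is a Message instance with the fields documented above.
print(message.id, message.model, message.role)  # role is always "assistant"

# `content` is a list of content blocks; a text block carries the continuation.
first_block = message.content[0]
if first_block.type == "text":
    print(first_block.text)  # e.g. 'B)'

if message.stop_reason == "max_tokens":
    # Generation was truncated; retry with a larger max_tokens if needed.
    pass
```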
Defined In: src/anthropic/types/message.py (line 15)
Extends: BaseModel
Frequently Asked Questions
What is the Message class?
Message is the Pydantic model that represents a model-generated message in the anthropic-sdk-python codebase. It is defined in src/anthropic/types/message.py.
Where is Message defined?
Message is defined in src/anthropic/types/message.py at line 15.
What does Message extend?
Message extends BaseModel.
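Because Message extends the SDK's Pydantic-based BaseModel, instances support standard Pydantic serialization. A minimal sketch, assuming Pydantic v2 and a `message` value returned by a previous `client.messages.create(...)` call:

```python
# `message` is assumed to be a Message returned by a prior API call.
data = message.model_dump()           # plain dict of the fields shown above
payload = message.model_dump_json()   # JSON string, e.g. for logging or storage

assert data["type"] == "message"
assert data["role"] == "assistant"
```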