MessageCountTokensParams Class — anthropic-sdk-python Architecture
Architecture documentation for the MessageCountTokensParams class in message_count_tokens_params.py from the anthropic-sdk-python codebase.
Entity Profile
Dependency Diagram
```mermaid
graph TD
  2b84679b_89b3_e7d6_1f3d_705dd3ce5391["MessageCountTokensParams"]
  11b723a4_f0a6_3747_c358_91ffd0f2c339["message_count_tokens_params.py"]
  2b84679b_89b3_e7d6_1f3d_705dd3ce5391 -->|defined in| 11b723a4_f0a6_3747_c358_91ffd0f2c339
```
Source Code
src/anthropic/types/message_count_tokens_params.py lines 19–201
````python
class MessageCountTokensParams(TypedDict, total=False):
    messages: Required[Iterable[MessageParam]]
    """Input messages.

    Our models are trained to operate on alternating `user` and `assistant`
    conversational turns. When creating a new `Message`, you specify the prior
    conversational turns with the `messages` parameter, and the model then generates
    the next `Message` in the conversation. Consecutive `user` or `assistant` turns
    in your request will be combined into a single turn.

    Each input message must be an object with a `role` and `content`. You can
    specify a single `user`-role message, or you can include multiple `user` and
    `assistant` messages.

    If the final message uses the `assistant` role, the response content will
    continue immediately from the content in that message. This can be used to
    constrain part of the model's response.

    Example with a single `user` message:

    ```json
    [{ "role": "user", "content": "Hello, Claude" }]
    ```

    Example with multiple conversational turns:

    ```json
    [
      { "role": "user", "content": "Hello there." },
      { "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
      { "role": "user", "content": "Can you explain LLMs in plain English?" }
    ]
    ```

    Example with a partially-filled response from Claude:

    ```json
    [
      {
        "role": "user",
        "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
      },
      { "role": "assistant", "content": "The best answer is (" }
    ]
    ```

    Each input message `content` may be either a single `string` or an array of
    content blocks, where each block has a specific `type`. Using a `string` for
    `content` is shorthand for an array of one content block of type `"text"`. The
    following input messages are equivalent:

    ```json
    { "role": "user", "content": "Hello, Claude" }
    ```

    ```json
    { "role": "user", "content": [{ "type": "text", "text": "Hello, Claude" }] }
    ```

    See [input examples](https://docs.claude.com/en/api/messages-examples).

    Note that if you want to include a
    [system prompt](https://docs.claude.com/en/docs/system-prompts), you can use the
    top-level `system` parameter — there is no `"system"` role for input messages in
    the Messages API.

    There is a limit of 100,000 messages in a single request.
    """

    model: Required[ModelParam]
    """
    The model that will complete your prompt.\n\nSee
    [models](https://docs.anthropic.com/en/docs/models-overview) for additional
    details and options.
    """

    output_config: OutputConfigParam
    """Configuration options for the model's output, such as the output format."""

    system: Union[str, Iterable[TextBlockParam]]
    """System prompt.
````
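The parameters described by this TypedDict are the keyword arguments to the SDK's token-counting call. As a minimal sketch (the model id below is a placeholder, not taken from this document, and the SDK call itself is shown commented out because it requires an API key):

```python
from typing import Any

# Build a MessageCountTokensParams-shaped dict: `messages` and `model`
# are required; `system` and `output_config` are optional.
params: dict[str, Any] = {
    "model": "claude-sonnet-4-5",  # placeholder model id for illustration
    "messages": [
        {"role": "user", "content": "Hello there."},
        {"role": "assistant", "content": "Hi, I'm Claude. How can I help you?"},
        {"role": "user", "content": "Can you explain LLMs in plain English?"},
    ],
}

# With the SDK installed and ANTHROPIC_API_KEY configured, the call
# would look like this:
#
#   from anthropic import Anthropic
#   client = Anthropic()
#   count = client.messages.count_tokens(**params)
#   print(count.input_tokens)
```

Because the class is a TypedDict rather than a runtime model, the dict above is exactly what the client serializes into the request body.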
Frequently Asked Questions
What is the MessageCountTokensParams class?
MessageCountTokensParams is a TypedDict (declared with total=False) in the anthropic-sdk-python codebase, defined in src/anthropic/types/message_count_tokens_params.py. It describes the request parameters for counting the tokens in a Messages API request.
Where is MessageCountTokensParams defined?
MessageCountTokensParams is defined in src/anthropic/types/message_count_tokens_params.py at line 19.