MessageCountTokensParams Class — anthropic-sdk-python Architecture
Architecture documentation for the MessageCountTokensParams class in message_count_tokens_params.py from the anthropic-sdk-python codebase.
Entity Profile
Dependency Diagram
```mermaid
graph TD
  d5bbb81f_5d9f_b989_f56d_fbb6b967a2d0["MessageCountTokensParams"]
  579bd2f9_ae1a_5b30_88c4_93179ebb82b6["message_count_tokens_params.py"]
  d5bbb81f_5d9f_b989_f56d_fbb6b967a2d0 -->|defined in| 579bd2f9_ae1a_5b30_88c4_93179ebb82b6
```
Source Code
src/anthropic/types/beta/message_count_tokens_params.py lines 41–252
class MessageCountTokensParams(TypedDict, total=False):
    messages: Required[Iterable[BetaMessageParam]]
    """Input messages.

    Our models are trained to operate on alternating `user` and `assistant`
    conversational turns. When creating a new `Message`, you specify the prior
    conversational turns with the `messages` parameter, and the model then generates
    the next `Message` in the conversation. Consecutive `user` or `assistant` turns
    in your request will be combined into a single turn.

    Each input message must be an object with a `role` and `content`. You can
    specify a single `user`-role message, or you can include multiple `user` and
    `assistant` messages.

    If the final message uses the `assistant` role, the response content will
    continue immediately from the content in that message. This can be used to
    constrain part of the model's response.

    Example with a single `user` message:

    ```json
    [{ "role": "user", "content": "Hello, Claude" }]
    ```

    Example with multiple conversational turns:

    ```json
    [
      { "role": "user", "content": "Hello there." },
      { "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
      { "role": "user", "content": "Can you explain LLMs in plain English?" }
    ]
    ```

    Example with a partially-filled response from Claude:

    ```json
    [
      {
        "role": "user",
        "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
      },
      { "role": "assistant", "content": "The best answer is (" }
    ]
    ```

    Each input message `content` may be either a single `string` or an array of
    content blocks, where each block has a specific `type`. Using a `string` for
    `content` is shorthand for an array of one content block of type `"text"`. The
    following input messages are equivalent:

    ```json
    { "role": "user", "content": "Hello, Claude" }
    ```

    ```json
    { "role": "user", "content": [{ "type": "text", "text": "Hello, Claude" }] }
    ```

    See [input examples](https://docs.claude.com/en/api/messages-examples).

    Note that if you want to include a
    [system prompt](https://docs.claude.com/en/docs/system-prompts), you can use the
    top-level `system` parameter — there is no `"system"` role for input messages in
    the Messages API.

    There is a limit of 100,000 messages in a single request.
    """

    model: Required[ModelParam]
    """
    The model that will complete your prompt.\n\nSee
    [models](https://docs.anthropic.com/en/docs/models-overview) for additional
    details and options.
    """

    context_management: Optional[BetaContextManagementConfigParam]
    """Context management configuration.

    This allows you to control how Claude manages context across multiple requests,
    such as whether to clear function results or not.
    """
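Because `MessageCountTokensParams` is a `TypedDict`, a request body is just a plain dict at runtime. The sketch below builds one by hand and illustrates two points from the docstring: the prefill pattern (a trailing `assistant` turn) and the equivalence of string `content` and a single text block. The model id and the `normalize` helper are illustrative, not part of the SDK:

```python
# MessageCountTokensParams is a TypedDict, so at runtime it is just a dict.
# The model id below is illustrative only.
params = {
    "model": "claude-sonnet-4-0",
    "messages": [
        {"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
        # A trailing `assistant` turn prefills the start of the response.
        {"role": "assistant", "content": "The best answer is ("},
    ],
}

# String `content` is shorthand for one text block of type "text". This
# hypothetical helper expands the shorthand so the two forms compare equal.
def normalize(msg: dict) -> dict:
    content = msg["content"]
    if isinstance(content, str):
        content = [{"type": "text", "text": content}]
    return {"role": msg["role"], "content": content}

short = {"role": "user", "content": "Hello, Claude"}
explicit = {"role": "user", "content": [{"type": "text", "text": "Hello, Claude"}]}
```

A dict of this shape can then be passed to the beta `count_tokens` endpoint; type checkers validate it against `MessageCountTokensParams` without any runtime wrapper class.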
Frequently Asked Questions
What is the MessageCountTokensParams class?
MessageCountTokensParams is a class in the anthropic-sdk-python codebase, defined in src/anthropic/types/beta/message_count_tokens_params.py.
Where is MessageCountTokensParams defined?
MessageCountTokensParams is defined in src/anthropic/types/beta/message_count_tokens_params.py at line 41.
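The `TypedDict, total=False` + `Required[...]` pattern in the definition can be illustrated with a small stand-in class (a sketch mirroring the shape, not the SDK's own definition): `total=False` makes every key optional by default, and `Required[...]` opts individual keys back in.

```python
try:
    from typing import Required, TypedDict  # Python 3.11+
except ImportError:
    from typing_extensions import Required, TypedDict

# Stand-in mirroring the shape of MessageCountTokensParams: with total=False,
# a key is optional unless wrapped in Required[...].
class CountTokensSketch(TypedDict, total=False):
    model: Required[str]
    messages: Required[list]
    context_management: dict  # optional: no Required[...] wrapper

print(sorted(CountTokensSketch.__required_keys__))  # ['messages', 'model']
print(sorted(CountTokensSketch.__optional_keys__))  # ['context_management']
```

This is why `messages` and `model` must appear in every request while `context_management` may be omitted, even though all three live in the same `total=False` class.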