message_count_tokens_params.py — anthropic-sdk-python Source File
Architecture documentation for message_count_tokens_params.py, a Python file in the anthropic-sdk-python codebase. 12 imports, 1 dependent.
Entity Profile
Dependency Diagram
graph LR
    11b723a4_f0a6_3747_c358_91ffd0f2c339["message_count_tokens_params.py"]
    0b785e51_9b26_ae4b_cf32_6b73997c2811["model_param.py"]
    11b723a4_f0a6_3747_c358_91ffd0f2c339 --> 0b785e51_9b26_ae4b_cf32_6b73997c2811
    ee233da2_be45_4f72_f2d1_e5c3132da735["message_param.py"]
    11b723a4_f0a6_3747_c358_91ffd0f2c339 --> ee233da2_be45_4f72_f2d1_e5c3132da735
    5b37714c_c7af_1519_a0b1_0167fccc0751["MessageParam"]
    11b723a4_f0a6_3747_c358_91ffd0f2c339 --> 5b37714c_c7af_1519_a0b1_0167fccc0751
    ad175324_e5f5_a13e_bfab_419a3f7bb8cf["text_block_param.py"]
    11b723a4_f0a6_3747_c358_91ffd0f2c339 --> ad175324_e5f5_a13e_bfab_419a3f7bb8cf
    2bf44e5b_0987_db82_09a9_ca08ad9d6f15["TextBlockParam"]
    11b723a4_f0a6_3747_c358_91ffd0f2c339 --> 2bf44e5b_0987_db82_09a9_ca08ad9d6f15
    e379b36a_6cc4_73cb_a38b_f2eb2d9f7dc8["tool_choice_param.py"]
    11b723a4_f0a6_3747_c358_91ffd0f2c339 --> e379b36a_6cc4_73cb_a38b_f2eb2d9f7dc8
    75f45536_924c_c385_0b03_5cc1a67cf4ab["output_config_param.py"]
    11b723a4_f0a6_3747_c358_91ffd0f2c339 --> 75f45536_924c_c385_0b03_5cc1a67cf4ab
    cfd73339_00da_52d7_4ad2_7ed904085316["OutputConfigParam"]
    11b723a4_f0a6_3747_c358_91ffd0f2c339 --> cfd73339_00da_52d7_4ad2_7ed904085316
    821cd9e1_c9b7_aa68_181f_1b0002d949a2["thinking_config_param.py"]
    11b723a4_f0a6_3747_c358_91ffd0f2c339 --> 821cd9e1_c9b7_aa68_181f_1b0002d949a2
    297c2067_cd78_c65b_b9eb_ee1696eac623["message_count_tokens_tool_param.py"]
    11b723a4_f0a6_3747_c358_91ffd0f2c339 --> 297c2067_cd78_c65b_b9eb_ee1696eac623
    89ddcdd7_3ae1_4c7b_41bb_9f1e25f16875["typing"]
    11b723a4_f0a6_3747_c358_91ffd0f2c339 --> 89ddcdd7_3ae1_4c7b_41bb_9f1e25f16875
    37c05070_ca59_d596_7250_de9d1939227f["typing_extensions"]
    11b723a4_f0a6_3747_c358_91ffd0f2c339 --> 37c05070_ca59_d596_7250_de9d1939227f
    2eb1aa7f_c778_4d68_0a26_a102fadd9638["__init__.py"]
    2eb1aa7f_c778_4d68_0a26_a102fadd9638 --> 11b723a4_f0a6_3747_c358_91ffd0f2c339
    style 11b723a4_f0a6_3747_c358_91ffd0f2c339 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

from __future__ import annotations

from typing import Union, Iterable
from typing_extensions import Required, TypedDict

from .model_param import ModelParam
from .message_param import MessageParam
from .text_block_param import TextBlockParam
from .tool_choice_param import ToolChoiceParam
from .output_config_param import OutputConfigParam
from .thinking_config_param import ThinkingConfigParam
from .message_count_tokens_tool_param import MessageCountTokensToolParam

__all__ = ["MessageCountTokensParams"]


class MessageCountTokensParams(TypedDict, total=False):
    messages: Required[Iterable[MessageParam]]
    """Input messages.

    Our models are trained to operate on alternating `user` and `assistant`
    conversational turns. When creating a new `Message`, you specify the prior
    conversational turns with the `messages` parameter, and the model then generates
    the next `Message` in the conversation. Consecutive `user` or `assistant` turns
    in your request will be combined into a single turn.

    Each input message must be an object with a `role` and `content`. You can
    specify a single `user`-role message, or you can include multiple `user` and
    `assistant` messages.

    If the final message uses the `assistant` role, the response content will
    continue immediately from the content in that message. This can be used to
    constrain part of the model's response.

    Example with a single `user` message:

    ```json
    [{ "role": "user", "content": "Hello, Claude" }]
    ```

    Example with multiple conversational turns:

    ```json
    [
      { "role": "user", "content": "Hello there." },
      { "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
      { "role": "user", "content": "Can you explain LLMs in plain English?" }
    ]
    ```

    Example with a partially-filled response from Claude:

    ```json
    [
      {
        "role": "user",
        "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
      },

// ... (142 more lines)
Frequently Asked Questions
What does message_count_tokens_params.py do?
message_count_tokens_params.py defines the MessageCountTokensParams TypedDict, the request-parameter shape for the Messages token-counting endpoint. It is generated from the OpenAPI spec by Stainless and belongs to the AnthropicClient domain of the anthropic-sdk-python codebase.
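In practice these parameters are supplied to the client's token-counting method. A minimal usage sketch, assuming the standard `Anthropic` client and its `messages.count_tokens` method (the model name and message content are illustrative):

```python
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Keyword arguments mirror the fields of MessageCountTokensParams.
count = client.messages.count_tokens(
    model="claude-sonnet-4-5",  # illustrative model name
    messages=[{"role": "user", "content": "Hello, Claude"}],
)
print(count.input_tokens)  # number of input tokens the request would consume
```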
What does message_count_tokens_params.py depend on?
message_count_tokens_params.py imports 12 modules: MessageParam, OutputConfigParam, TextBlockParam, message_count_tokens_tool_param.py, message_param.py, model_param.py, output_config_param.py, text_block_param.py, thinking_config_param.py, tool_choice_param.py, typing, and typing_extensions.
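The imported param types hint at how the request fields compose. The sketch below shows a fully populated payload; only `messages` appears in the truncated listing above, so every other field name is an assumption inferred from the imports and from the analogous message-creation params, not from this file:

```python
from anthropic.types import MessageCountTokensParams

# Assumed field names: everything except `messages` is inferred, not shown above.
params: MessageCountTokensParams = {
    "model": "claude-sonnet-4-5",  # ModelParam -- illustrative model name
    "messages": [{"role": "user", "content": "Hello, Claude"}],  # Iterable[MessageParam]
    "system": [{"type": "text", "text": "Answer briefly."}],  # Iterable[TextBlockParam]
    "tool_choice": {"type": "auto"},  # ToolChoiceParam
    "thinking": {"type": "disabled"},  # ThinkingConfigParam
    "tools": [],  # Iterable[MessageCountTokensToolParam]
}
```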
What files import message_count_tokens_params.py?
message_count_tokens_params.py is imported by one file: __init__.py.
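Since the importing `__init__.py` sits in src/anthropic/types, the TypedDict is presumably re-exported under the public `anthropic.types` namespace, where it is handy for annotating thin wrappers around the token-counting call. A hypothetical sketch (the helper function below is not part of the SDK):

```python
from typing_extensions import Unpack

from anthropic import Anthropic
from anthropic.types import MessageCountTokensParams

client = Anthropic()


def count_input_tokens(**params: Unpack[MessageCountTokensParams]) -> int:
    """Hypothetical helper: forwards typed keyword arguments to count_tokens."""
    return client.messages.count_tokens(**params).input_tokens


tokens = count_input_tokens(
    model="claude-sonnet-4-5",  # illustrative model name
    messages=[{"role": "user", "content": "Hello, Claude"}],
)
```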
Where is message_count_tokens_params.py in the architecture?
message_count_tokens_params.py is located at src/anthropic/types/message_count_tokens_params.py (domain: AnthropicClient, directory: src/anthropic/types).