The Token Chunker splits text into chunks based on token count, ensuring each chunk stays within specified token limits.

Examples

Text Input

from chonkie.cloud import TokenChunker

# Initialize the chunker with a GPT-2 tokenizer, 512-token chunks,
# and a 128-token overlap between consecutive chunks
chunker = TokenChunker(
    tokenizer="gpt2",
    chunk_size=512,
    chunk_overlap=128
)

text = "Your text here..."
chunks = chunker.chunk(text)

File Input

from chonkie.cloud import TokenChunker

chunker = TokenChunker(
    tokenizer="gpt2",
    chunk_size=512,
    chunk_overlap=128
)

# Open in binary mode; the file is uploaded as multipart/form-data
with open("document.txt", "rb") as f:
    chunks = chunker.chunk(file=f)
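
Batch Input

The text parameter also accepts a list of strings, so several documents can be chunked in one call. A minimal sketch, assuming a list input returns one list of Chunk objects per input string (the exact return shape is not documented on this page):

from chonkie.cloud import TokenChunker

chunker = TokenChunker(
    tokenizer="gpt2",
    chunk_size=512,
    chunk_overlap=128
)

texts = ["First document...", "Second document..."]
results = chunker.chunk(texts)

# Assumed shape: results[i] holds the chunks for texts[i]
for doc_chunks in results:
    for chunk in doc_chunks:
        print(chunk.token_count)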

Request

Parameters

text
string | string[]
The text to chunk. Can be a single string or an array of strings for batch processing (see Batch Input above). Either text or file is required.
file
file
File to chunk. Use multipart/form-data encoding. Either text or file is required.
tokenizer
string
default:"gpt2"
Tokenizer to use for counting tokens. Options: “gpt2”, “character”, “word”, or any Hugging Face tokenizer.
chunk_size
integer
default:"512"
Maximum number of tokens per chunk.
chunk_overlap
integer
default:"0"
Number of tokens to overlap between consecutive chunks.
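
The Python client above wraps an HTTP endpoint, and the parameters here map directly onto the request body. A minimal sketch using requests; the endpoint path and Bearer-token auth header are assumptions, not confirmed by this page:

import requests

# Assumed endpoint and auth scheme; confirm against your Chonkie dashboard
API_URL = "https://api.chonkie.ai/v1/chunk/token"
API_KEY = "your-api-key"

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "text": "Your text here...",
        "tokenizer": "gpt2",
        "chunk_size": 512,
        "chunk_overlap": 128,
    },
)
response.raise_for_status()
chunks = response.json()  # array of Chunk objects, described under Response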

Response

Returns

Array of Chunk objects, each containing:
text
string
The chunk text content.
start_index
integer
Starting character position in the original text.
end_index
integer
Ending character position in the original text.
token_count
integer
Number of tokens in the chunk.

Example Chunk

{
  "text": "<string>",
  "start_index": 123,
  "end_index": 123,
  "token_count": 123
}
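Because start_index and end_index are character offsets into the original input, every chunk can be mapped back to the exact span it came from. A short sketch, assuming the client exposes the response fields as attributes on each Chunk:

from chonkie.cloud import TokenChunker

chunker = TokenChunker(tokenizer="gpt2", chunk_size=512, chunk_overlap=128)

text = "Your text here..."
chunks = chunker.chunk(text)

for chunk in chunks:
    # Each chunk's text should equal the matching slice of the input,
    # even when chunk_overlap makes neighboring spans overlap
    assert text[chunk.start_index:chunk.end_index] == chunk.text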