Slumber Chunker
Example request:

curl --request POST \
  --url https://api.chonkie.ai/v1/chunk/slumber \
  --header 'Content-Type: application/json' \
  --data '{"text": "<string>"}'

Example response (one Chunk object):

{
  "text": "<string>",
  "start_index": 123,
  "end_index": 123,
  "token_count": 123
}
The Slumber Chunker chunks long documents with a sliding-window approach, processing them efficiently while preserving context across chunk boundaries.

Examples

Text Input

from chonkie.cloud import SlumberChunker

chunker = SlumberChunker(
    chunk_size=512,
    recipe="markdown"
)

text = "Your text here..."
chunks = chunker.chunk(text)

File Input

from chonkie.cloud import SlumberChunker

chunker = SlumberChunker(
    chunk_size=512,
    recipe="markdown"
)

# Chunk from file
with open("document.txt", "rb") as f:
    chunks = chunker.chunk(file=f)

Request

Parameters

text (string | string[])
The text to chunk. Can be a single string or an array of strings for batch processing. Either text or file is required.

file (file)
File to chunk. Use multipart/form-data encoding. Either text or file is required.

tokenizer (string, default: "gpt2")
Tokenizer to use for counting tokens.

chunk_size (integer, default: 512)
Maximum number of tokens per chunk.

recipe (string, default: "default")
Recursive rules recipe to follow. See all recipes on our Hugging Face repo.

lang (string, default: "en")
Language of the document to chunk.

candidate_size (integer, default: 128)
The size of the candidate splits that the chunker will consider.

min_characters_per_chunk (integer, default: 24)
Minimum number of characters per chunk.
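For callers not using the Python SDK, the parameters above can be assembled into a raw HTTP request. The sketch below uses only the standard library; the payload fields mirror the documented parameters and defaults, while the Bearer authorization header is an assumption (check your dashboard for the exact scheme).

```python
import json
import urllib.request

API_URL = "https://api.chonkie.ai/v1/chunk/slumber"


def build_payload(text, **overrides):
    """Merge caller overrides into the documented parameter defaults."""
    payload = {
        "text": text,
        "tokenizer": "gpt2",
        "chunk_size": 512,
        "recipe": "default",
        "lang": "en",
        "candidate_size": 128,
        "min_characters_per_chunk": 24,
    }
    payload.update(overrides)
    return payload


def chunk_text(text, api_key, **overrides):
    """POST the text to the Slumber endpoint and return parsed chunks."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(text, **overrides)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Bearer auth is an assumption, not confirmed by this page.
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```

Passing a list of strings as text instead of a single string triggers batch processing, as noted above.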

Response

Returns

Array of Chunk objects, each containing:
text (string)
The chunk text content.

start_index (integer)
Starting character position in the original text.

end_index (integer)
Ending character position in the original text.

token_count (integer)
Number of tokens in the chunk.
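Because each chunk carries character offsets into the original text, a response can be validated by slicing: chunk.text should equal the original text between start_index and end_index. A minimal sketch, assuming the response has been parsed into a list of dicts shaped like the Chunk object above (the sample document and offsets are hypothetical):

```python
def reassemble(original_text, chunks):
    """Verify chunk offsets against the source and rebuild the document."""
    for chunk in chunks:
        # Each chunk's text should match its slice of the original.
        assert original_text[chunk["start_index"]:chunk["end_index"]] == chunk["text"]
    # Concatenating chunks in offset order should recover the full document.
    return "".join(c["text"] for c in sorted(chunks, key=lambda c: c["start_index"]))


# Hypothetical response for a short two-chunk document:
doc = "Hello world. Goodbye world."
chunks = [
    {"text": "Hello world. ", "start_index": 0, "end_index": 13, "token_count": 4},
    {"text": "Goodbye world.", "start_index": 13, "end_index": 27, "token_count": 4},
]
```

This invariant only holds for string input; for file input the offsets refer to the extracted text, not the raw bytes.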