POST /v1/chunk/sentence
curl --request POST \
  --url https://api.chonkie.ai/v1/chunk/sentence \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "text": "<string>",
  "tokenizer_or_token_counter": "gpt2",
  "chunk_size": 512,
  "chunk_overlap": 0,
  "min_sentences_per_chunk": 1,
  "min_characters_per_sentence": 1,
  "approximate": true,
  "delim": "<string>",
  "include_delim": "prev",
  "return_type": "chunks"
}'
[
  {
    "text": "<string>",
    "start_index": 123,
    "end_index": 123,
    "token_count": 123,
    "sentences": [
      {
        "text": "<string>",
        "start_index": 123,
        "end_index": 123,
        "token_count": 123
      }
    ]
  }
]
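
For reference, the same request can be made from any HTTP client. The following is a minimal Python sketch using the requests library that mirrors the curl call above; the CHONKIE_API_KEY environment variable and the sample text are illustrative assumptions, not part of the API.

import os

import requests

# Minimal sketch mirroring the curl request above.
# CHONKIE_API_KEY is an illustrative environment variable, not part of the API.
api_key = os.environ["CHONKIE_API_KEY"]

response = requests.post(
    "https://api.chonkie.ai/v1/chunk/sentence",
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    json={
        "text": "Chonkie splits long documents into sentence chunks. Each chunk stays within the token budget.",
        "tokenizer_or_token_counter": "gpt2",
        "chunk_size": 512,
        "chunk_overlap": 0,
        "return_type": "chunks",
    },
    timeout=30,
)
response.raise_for_status()
chunks = response.json()  # a list of sentence chunk objects, as in the sample above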

Authorizations

Authorization
string
header
required

Your API key from the Chonkie Cloud dashboard.

Body

application/json

Data to pass to the Sentence Chunker. An example payload combining these parameters follows the field descriptions below.

text
string | string[]
required

The input text or list of texts to be chunked.

tokenizer_or_token_counter
string
default:gpt2

Tokenizer to use. Can be a string identifier or a tokenizer instance.

chunk_size
integer
default:512

Maximum number of tokens per chunk.

chunk_overlap
integer
default:0

Number of overlapping tokens between chunks.

min_sentences_per_chunk
integer
default:1

Minimum number of sentences to include in each chunk.

min_characters_per_sentence
integer
default:1

Minimum number of characters per sentence.

approximate
boolean
default:true

(Deprecated) Use approximate token counting for faster processing.

delim
string | string[]
default:["\n",".","!","?"]

Delimiters to split sentences on.

include_delim
enum<string> | null
default:prev

Whether to include delimiters in the chunk text and, if so, whether to attach them to the previous or the next chunk.

Available options:
prev,
next

return_type
enum<string>
default:chunks

Whether to return chunks as text strings or as SentenceChunk objects.

Available options:
texts,
chunks
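
As a sketch of how these body parameters combine, here is an example payload with custom delimiters and plain-text output; the specific values are illustrative assumptions, not recommended settings.

# Illustrative request body combining the parameters documented above.
# Values are examples only, not recommended defaults.
payload = {
    "text": "First sentence. Second sentence! Third sentence?",
    "tokenizer_or_token_counter": "gpt2",
    "chunk_size": 256,                  # at most 256 tokens per chunk
    "chunk_overlap": 32,                # 32 overlapping tokens between chunks
    "min_sentences_per_chunk": 2,       # at least two sentences per chunk
    "min_characters_per_sentence": 10,  # minimum characters per sentence
    "delim": [".", "!", "?", "\n"],     # delimiters to split sentences on
    "include_delim": "prev",            # attach each delimiter to the preceding chunk
    "return_type": "texts",             # return plain strings instead of chunk objects
}

Such a payload can be sent with the same requests.post call shown in the request sketch above; with return_type set to texts, the response would be a list of plain strings rather than SentenceChunk objects.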

Response

200 - application/json
Successful response: a list of sentence chunk objects. A short parsing sketch follows the field descriptions below.

text
string

The actual text content of the chunk.

start_index
integer

The starting character index of the chunk within the original input text.

end_index
integer

The ending character index (exclusive) of the chunk within the original input text.

token_count
integer

The number of tokens in this specific chunk, according to the tokenizer used.

sentences
object[]

List of sentences contained within this chunk.

Each item represents a single sentence within the chunk, with its own text, start_index, end_index, and token_count fields.
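
To make the response schema concrete, here is a small Python sketch that walks a decoded response of the shape shown in the sample above; it assumes the request used return_type "chunks" and that chunks holds the parsed JSON list from the earlier request sketch.

# Walk a decoded response of the shape documented above.
# Assumes `chunks` is the parsed JSON list returned with return_type "chunks".
for chunk in chunks:
    print(f"chunk[{chunk['start_index']}:{chunk['end_index']}] "
          f"({chunk['token_count']} tokens)")
    for sentence in chunk["sentences"]:
        # Each sentence carries the same text, index, and token-count fields.
        print(f"  - {sentence['text']!r} "
              f"({sentence['start_index']}-{sentence['end_index']}, "
              f"{sentence['token_count']} tokens)")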