Run a saved pipeline to process text or documents and return chunks.
Examples
from chonkie.cloud import Pipeline
# Execute with text
pipeline = Pipeline.get("my-rag-pipeline")
chunks = pipeline.run(text="Your document text here...")
# Execute with multiple texts
chunks = pipeline.run(text=["First document", "Second document"])
# Execute with file (auto-uploaded)
chunks = pipeline.run(file="document.pdf")
# Access chunk data
for chunk in chunks:
    print(f"Text: {chunk.text[:100]}...")
    print(f"Tokens: {chunk.token_count}")
Path Parameters
The pipeline slug to execute.
Request
- `text` — Text content to process. Can be a single string or an array of strings.
- `file` — File reference for document processing. Use after uploading via the file upload endpoint.
  - File type (e.g., “document”).
  - The uploaded filename or file reference.

Provide either `text` or `file`, not both.
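As a sketch of the two mutually exclusive request shapes described above, expressed as Python dicts: the top-level `text` and `file` keys match the keyword arguments in the Examples section, but the nested `type` and `filename` keys are assumptions inferred from the field descriptions, not confirmed names.

```python
# Text variant: a single string or a list of strings.
text_body = {
    "text": ["First document", "Second document"],
}

# File variant: nested field names below are assumptions based on the
# descriptions above ("file type", "uploaded filename or file reference").
file_body = {
    "file": {
        "type": "document",          # assumed field name for the file type
        "filename": "document.pdf",  # assumed field name for the file reference
    },
}

# Provide either `text` or `file`; sending both (or neither) returns a 400.
```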
Response
Array of chunk objects returned by the pipeline. Each chunk includes:

- Starting character index in the original text.
- Ending character index in the original text.
- Number of tokens in the chunk.
- Optional context added by refinement steps.
- Optional embedding vector if an embeddings step was included.
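Building on the loop in the Examples section, the sketch below reads the remaining chunk fields. Only `text` and `token_count` appear in the example code; attribute names such as `start_index`, `end_index`, `context`, and `embedding` are assumed from the response descriptions above.

```python
for chunk in chunks:
    # Confirmed by the example above:
    print(chunk.text[:100], chunk.token_count)

    # Assumed attribute names, inferred from the response field descriptions:
    span = (chunk.start_index, chunk.end_index)  # character offsets in the original text

    if getattr(chunk, "context", None):          # present if a refinement step added context
        print(chunk.context)

    if getattr(chunk, "embedding", None) is not None:  # present if an embeddings step ran
        print(f"Embedding dimensions: {len(chunk.embedding)}")
```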
Errors
| Status | Description |
|---|---|
| 400 | Invalid request (neither `text` nor `file` provided, or both provided) |
| 404 | Pipeline not found |
| 422 | Processing error |
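A minimal sketch of guarding a run against the errors above. The specific exception types the SDK raises for 400/404/422 responses are not documented here, so this catches broadly; swap in the SDK's own exception classes if they are available.

```python
from chonkie.cloud import Pipeline

pipeline = Pipeline.get("my-rag-pipeline")

try:
    chunks = pipeline.run(text="Your document text here...")
except Exception as exc:  # replace with the SDK's specific exceptions if known
    # Covers invalid requests (400), missing pipelines (404), and processing errors (422).
    print(f"Pipeline run failed: {exc}")
else:
    print(f"Received {len(chunks)} chunks")
```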