
The no-nonsense, ultra-light, and lightning-fast chunking library that's ready to CHONK your texts!
Installation • Usage • Pipeline • Chunkers • Integrations • Benchmarks
Tired of making your gazillionth chunker? Sick of the overhead of large libraries? Want to chunk your texts quickly and efficiently? Chonkie the mighty hippo is here to help!
🚀 Feature-rich: All the CHONKs you'd ever need
✨ Easy to use: Install, Import, CHONK
⚡ Fast: CHONK at the speed of light! zooooom
🪶 Light-weight: No bloat, just CHONK
🌏 Wide support: Chonkie integrates with your favorite tokenizers, embedding models, and APIs!
💬 Multilingual: Out-of-the-box support for 56 languages
☁️ Cloud-Ready: CHONK locally or in the Chonkie Cloud
🦛 Cute CHONK mascot: psst it's a pygmy hippo btw
❤️ Moto Moto's favorite python library
Chonkie is a chunking library that "just works" ✨
To install chonkie, run:
pip install chonkie
Chonkie follows the rule of minimum installs.
Have a favorite chunker? Read our docs to install only what you need.
Don't want to think about it? Simply install all:
(Not recommended for production environments)
pip install chonkie[all]
Here's a basic example to get you started:
# First import the chunker you want from Chonkie
from chonkie import RecursiveChunker
# Initialize the chunker
chunker = RecursiveChunker()
# Chunk some text
chunks = chunker("Chonkie is the goodest boi! My favorite chunking hippo hehe.")
# Access chunks
for chunk in chunks:
    print(f"Chunk: {chunk.text}")
    print(f"Tokens: {chunk.token_count}")
Check out more usage examples in the docs!
Chonkie processes text using a pipeline approach to transform raw documents into refined, usable chunks, which keeps different chunking strategies flexible and efficient. We call this pipeline CHOMP (short for 'CHOnkie's Multi-step Pipeline').
Here's a conceptual overview of the pipeline:
The main stages are:
- Chef: cleans and prepares your text before chunking.
- Chunker: splits the prepared text into chunks.
- Refineries: apply different post-processing steps to the chunks.
- Porters: export the chunks (JSON export is supported out of the box).
- Handshakes: ingest the chunks into your vector database.

This modular pipeline allows Chonkie to be both powerful and easy to configure for various text chunking needs.
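The stages above can be sketched as plain callables threaded in order. This is a toy, framework-free illustration of the CHOMP idea; the stage names mirror Chonkie's terminology, but the implementations are stand-ins, not Chonkie's actual classes:

```python
import json

def chef(text: str) -> str:
    # Chef: clean the raw text before chunking (here: collapse whitespace).
    return " ".join(text.split())

def chunk(text: str, size: int = 20) -> list[str]:
    # Chunker: split the cleaned text into fixed-size pieces.
    return [text[i:i + size] for i in range(0, len(text), size)]

def refine(chunks: list[str]) -> list[str]:
    # Refinery: post-process the chunks (here: strip and drop empties).
    return [c.strip() for c in chunks if c.strip()]

def port(chunks: list[str]) -> str:
    # Porter: export the chunks (here: as a JSON string).
    return json.dumps(chunks)

def chomp(text: str) -> str:
    # Thread the document through the pipeline, stage by stage.
    return port(refine(chunk(chef(text))))
```

Each stage is swappable independently, which is the point of the modular design.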
Chonkie provides several chunkers to help you split your text efficiently for RAG applications. Here's a quick overview of the available chunkers:
Name | Alias | Description |
---|---|---|
TokenChunker | token | Splits text into fixed-size token chunks. |
SentenceChunker | sentence | Splits text into chunks based on sentences. |
RecursiveChunker | recursive | Splits text hierarchically using customizable rules to create semantically meaningful chunks. |
SemanticChunker | semantic | Splits text into chunks based on semantic similarity. Inspired by the work of Greg Kamradt. |
SDPMChunker | sdpm | Splits text using a Semantic Double-Pass Merge approach. |
LateChunker | late | Embeds text and then splits it to have better chunk embeddings. |
CodeChunker | code | Splits code into structurally meaningful chunks. |
NeuralChunker | neural | Splits text using a neural model. |
SlumberChunker | slumber | Splits text using an LLM to find semantically meaningful chunks. Also known as "AgenticChunker". |
Read more about these methods and the approaches behind them in the docs.
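As a rough illustration of what sentence-based chunking amounts to (the real SentenceChunker is considerably more careful than this), here is a naive stdlib-only sketch with a hypothetical helper name:

```python
import re

def naive_sentence_chunks(text: str, max_sentences: int = 2) -> list[str]:
    # Split on sentence-ending punctuation, then group sentences into chunks.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    return [
        " ".join(sentences[i:i + max_sentences])
        for i in range(0, len(sentences), max_sentences)
    ]
```

The library's chunkers layer token budgets, overlap, and language awareness on top of this basic idea.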
Chonkie boasts 19+ integrations across tokenizers, embedding providers, LLMs, porters, and vector databases, ensuring it fits seamlessly into your existing workflow.
Choose from supported tokenizers or provide your own custom token counting function. Flexibility first!
Name | Description | Optional Install |
---|---|---|
character | Basic character-level tokenizer. Default tokenizer. | default |
word | Basic word-level tokenizer. | default |
tokenizers | Load any tokenizer from the Hugging Face tokenizers library. | default |
tiktoken | Use OpenAI's tiktoken library (e.g., for gpt-4). | chonkie[tiktoken] |
transformers | Load tokenizers via AutoTokenizer from HF transformers. | chonkie[transformers] |
default indicates that the feature is available with the default pip install chonkie.
To use a custom token counter, you can pass in any function that takes a string and returns an integer! Something like this:
from chonkie import RecursiveChunker

def custom_token_counter(text: str) -> int:
    return len(text)

chunker = RecursiveChunker(tokenizer_or_token_counter=custom_token_counter)
You can use this to extend Chonkie to support any tokenization scheme you want!
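For instance, a whitespace-based counter needs nothing beyond the standard library (the function name here is ours, not Chonkie's):

```python
import re

def word_token_counter(text: str) -> int:
    # Treat each run of non-whitespace characters as one token.
    return len(re.findall(r"\S+", text))
```

Passing this as tokenizer_or_token_counter makes chunk sizes count words instead of characters.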
Seamlessly works with various embedding model providers. Bring your favorite embeddings to the CHONK party! Use AutoEmbeddings
to load models easily.
Provider / Alias | Class | Description | Optional Install |
---|---|---|---|
model2vec | Model2VecEmbeddings | Use Model2Vec models. | chonkie[model2vec] |
sentence-transformers | SentenceTransformerEmbeddings | Use any sentence-transformers model. | chonkie[st] |
openai | OpenAIEmbeddings | Use OpenAI's embedding API. | chonkie[openai] |
cohere | CohereEmbeddings | Use Cohere's embedding API. | chonkie[cohere] |
gemini | GeminiEmbeddings | Use Google's Gemini embedding API. | chonkie[gemini] |
jina | JinaEmbeddings | Use Jina AI's embedding API. | chonkie[jina] |
voyageai | VoyageAIEmbeddings | Use Voyage AI's embedding API. | chonkie[voyageai] |
Genies provide interfaces to interact with Large Language Models (LLMs) for advanced chunking strategies or other tasks within the pipeline.
Genie Name | Class | Description | Optional Install |
---|---|---|---|
gemini | GeminiGenie | Interact with Google Gemini APIs. | chonkie[gemini] |
openai | OpenAIGenie | Interact with OpenAI APIs. | chonkie[openai] |
You can also use the OpenAIGenie to interact with any LLM provider that supports the OpenAI API format, by simply changing the model, base_url, and api_key parameters. For example, here's how to use the OpenAIGenie to interact with the Llama-4-Maverick model via OpenRouter:
from chonkie import OpenAIGenie
genie = OpenAIGenie(
    model="meta-llama/llama-4-maverick",
    base_url="https://openrouter.ai/api/v1",
    api_key="your_api_key",
)
Porters help you save your chunks easily.
Porter Name | Class | Description | Optional Install |
---|---|---|---|
json | JSONPorter | Export chunks to a JSON file. | default |
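What a JSON export amounts to can be sketched with the stdlib alone; the chunk fields below are illustrative toy data, not JSONPorter's exact schema:

```python
import json
import os
import tempfile

# Toy chunk records with illustrative field names.
chunks = [
    {"text": "Chonkie is the goodest boi!", "token_count": 7},
    {"text": "My favorite chunking hippo hehe.", "token_count": 8},
]

# Write the chunks to a JSON file, as a porter would.
path = os.path.join(tempfile.gettempdir(), "chunks.json")
with open(path, "w") as f:
    json.dump(chunks, f, indent=2)
```

JSONPorter handles this serialization for you; the sketch just shows what lands on disk.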
Handshakes provide a unified interface to ingest chunks directly into your favorite vector databases.
Handshake Name | Class | Description | Optional Install |
---|---|---|---|
chroma | ChromaHandshake | Ingest chunks into ChromaDB. | chonkie[chroma] |
qdrant | QdrantHandshake | Ingest chunks into Qdrant. | chonkie[qdrant] |
pgvector | PgvectorHandshake | Ingest chunks into PostgreSQL with pgvector. | chonkie[pgvector] |
turbopuffer | TurbopufferHandshake | Ingest chunks into Turbopuffer. | chonkie[turbopuffer] |
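The handshake idea in miniature: embed each chunk, then upsert it under a stable id. The "embedding" and the store below are toys for illustration; real Handshakes talk to the databases listed above:

```python
from collections import Counter

# In-memory stand-in for a vector database collection.
store: dict[str, Counter] = {}

def toy_embed(text: str) -> Counter:
    # Bag-of-words stand-in for a real embedding model.
    return Counter(text.lower().split())

def handshake(chunks: list[str]) -> None:
    # Upsert each chunk's embedding under a stable id.
    for i, chunk in enumerate(chunks):
        store[f"chunk-{i}"] = toy_embed(chunk)

handshake(["Chonkie is fast", "Chonkie is light"])
```

A real handshake swaps the dict for a database client and the Counter for dense vectors, but the ingest loop is the same shape.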
With Chonkie's wide range of integrations, you can easily plug it into your existing infrastructure and start CHONKING!
"I may be smol hippo, but I pack a big punch!" 🦛
Chonkie is not just cute, it's also fast and efficient! Here's how it stacks up against the competition:
Size 📦
Speed ⚡
Check out our detailed benchmarks to see how Chonkie races past the competition! 🏃‍♂️💨
Want to help grow Chonkie? Check out CONTRIBUTING.md to get started! Whether you're fixing bugs, adding features, or improving docs, every contribution helps make Chonkie a better CHONK for everyone.
Remember: No contribution is too small for this tiny hippo! 🦛
Chonkie would like to CHONK its way through a special thanks to all the users and contributors who have helped make this library what it is today! Your feedback, issue reports, and improvements have helped make Chonkie the CHONKIEST it can be.
And of course, special thanks to Moto Moto for endorsing Chonkie with his famous quote:
"I like them big, I like them chonkie." ~ Moto Moto
If you use Chonkie in your research, please cite it as follows:
@software{chonkie2025,
author = {Minhas, Bhavnick and Nigam, Shreyash},
title = {Chonkie: A no-nonsense, fast, lightweight, and efficient text chunking library},
year = {2025},
publisher = {GitHub},
howpublished = {\url{https://github.com/chonkie-inc/chonkie}},
}