# Neum AI
Core library with Neum AI components to connect, load, chunk and sink vector embeddings. Neum AI is a data platform that helps developers leverage their data to contextualize Large Language Models through Retrieval Augmented Generation (RAG). This includes extracting data from existing data sources like document storage and NoSQL, processing the contents into vector embeddings, and ingesting the vector embeddings into vector databases for similarity search.

It provides a comprehensive solution for RAG that can scale with your application and reduces the time spent integrating services like data connectors, embedding models and vector databases.
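The extract → chunk → embed → ingest flow described above can be sketched in plain Python. This is an illustrative toy, not the Neum AI API: the `chunk` and `embed` functions and the in-memory "vector store" are stand-ins for a real chunker, embedding model, and vector database.

```python
# Toy sketch of a RAG ingestion + retrieval flow (not the neumai API).
import math

def chunk(text: str, size: int = 40) -> list[str]:
    # Naive fixed-width chunker; real pipelines split on sentences or tokens.
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str, dims: int = 8) -> list[float]:
    # Toy deterministic "embedding": character-frequency buckets, L2-normalized.
    vec = [0.0] * dims
    for ch in text.lower():
        vec[ord(ch) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-length, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# "Ingest": store (vector, chunk) pairs in a list standing in for the vector DB.
document = "Neum AI extracts data, turns it into embeddings, and ingests them."
store = [(embed(c), c) for c in chunk(document)]

# "Retrieve": rank stored chunks by similarity to the query embedding.
query_vec = embed("embeddings")
best = max(store, key=lambda pair: cosine(query_vec, pair[0]))
print(best[1])
```

A production pipeline swaps each stage for a real service (a document-store connector, an embedding model, a vector database), but the stage boundaries are the same.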
## Features
- 🏭 High-throughput distributed architecture to handle billions of data points. Allows high degrees of parallelization to optimize embedding generation and ingestion.
- 🧱 Built-in data connectors to common data sources, embedding services and vector stores.
- 🔄 Real-time synchronization of data sources to ensure your data is always up-to-date.
- 🤝 Cohesive data management to support hybrid retrieval with metadata. Neum AI automatically augments and tracks metadata to provide a rich retrieval experience.
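The hybrid-retrieval feature above combines a metadata filter with vector similarity: filter candidates on their metadata first, then rank the survivors by embedding distance. The sketch below illustrates that idea; the store layout, filter syntax, and `embed` function are all hypothetical stand-ins, not the actual Neum AI data model.

```python
# Illustrative hybrid retrieval: metadata filter + vector ranking.
import math

def embed(text: str, dims: int = 8) -> list[float]:
    # Toy deterministic "embedding" in place of a real model.
    vec = [0.0] * dims
    for ch in text.lower():
        vec[ord(ch) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Each entry carries a vector plus the metadata tracked during ingestion.
store = [
    {"text": "Q3 revenue report", "vector": embed("Q3 revenue report"),
     "metadata": {"source": "finance", "year": 2023}},
    {"text": "Onboarding handbook", "vector": embed("Onboarding handbook"),
     "metadata": {"source": "hr", "year": 2023}},
    {"text": "Q2 revenue report", "vector": embed("Q2 revenue report"),
     "metadata": {"source": "finance", "year": 2022}},
]

def hybrid_search(query: str, filters: dict, top_k: int = 1) -> list[dict]:
    qv = embed(query)
    # Step 1: keep only entries whose metadata matches every filter.
    candidates = [e for e in store
                  if all(e["metadata"].get(k) == v for k, v in filters.items())]
    # Step 2: rank the survivors by vector similarity.
    return sorted(candidates, key=lambda e: cosine(qv, e["vector"]),
                  reverse=True)[:top_k]

results = hybrid_search("revenue", {"source": "finance", "year": 2023})
print(results[0]["text"])  # only 2023 finance entries are considered
```

Because the metadata filter runs first, documents from the wrong source or time range never compete on similarity alone, which is what makes tracked metadata valuable at retrieval time.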
## Getting Started

### Neum AI Cloud
Sign up today at dashboard.neum.ai. See our quickstart to get started.

The Neum AI Cloud supports a large-scale, distributed architecture to run millions of documents through vector embedding. For the full set of features, see: Cloud vs Local.
### Local Development
Install the `neumai` package:

```bash
pip install neumai
```
To create your first data pipelines, visit our quickstart.
### Self-host
If you are interested in deploying Neum AI to your own cloud, contact us at founders@tryneum.com.

We will soon publish an open-source self-host option that leverages the framework's architecture for high-throughput data processing.
## Roadmap

- Connectors
- Search
- Extensibility
- Experimental