
ragql
This project implements a local-first RAG chat system that reads and processes various text-based log files. It splits the content into manageable chunks, generates embeddings using Ollama or OpenAI, and allows users to interactively query the logs for specific information. The application features a customizable response format and supports configuration for user preferences.
RagQL is a local-first Retrieval-Augmented Generation (RAG) system designed for natural language Q&A over your logs and databases. It provides a modular chat-like interface that can index and query local data sources such as log files and SQLite .db database files. With RagQL, you can ask questions about the contents of your logs or databases and get answers powered by a large language model, all while keeping your data private on your own machine. The system works by generating vector embeddings of your data and using those for context retrieval, then feeding relevant context to an LLM to produce answers. Its modular design makes it easy to extend (e.g. adding new data loaders or swapping components), and it supports dual LLM backends for maximum flexibility.
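The retrieval step described above (embed the question, find the most similar chunks, hand them to the LLM as context) can be sketched in a few lines. This toy uses a bag-of-words "embedding" and cosine similarity purely for illustration; RagQL's real pipeline uses Ollama or OpenAI embedding vectors and a FAISS index instead.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; RagQL would use real model vectors here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by similarity to the question and keep the top k.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "2024-05-01 ERROR disk full on /var",
    "2024-05-01 INFO backup completed",
    "2024-05-02 ERROR disk full on /home",
]
question = "what disk errors occurred?"
context = retrieve(question, chunks)
# The retrieved context is prepended to the question before it reaches the LLM.
prompt = "Answer using this context:\n" + "\n".join(context) + "\nQ: " + question
```

The real system differs only in scale and in where the vectors come from; the retrieve-then-prompt shape is the same.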
Importantly, RagQL supports both local and remote LLM/embedding backends. By default it favors a local setup using Ollama (which runs open-source models on your machine) for embedding generation and question answering. This means you can run RagQL completely offline. Alternatively, you can integrate OpenAI's API for embeddings and/or LLM responses – useful if you prefer OpenAI's models or if you don't have a suitable local model. This dual backend support is seamless, allowing you to switch between local and cloud as needed (with a simple flag). RagQL's chat interface and CLI tools make it easy to interactively query your data or automate queries via scripts.
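The backend choice described here can be summarized as a small decision function. This is a sketch of the documented behavior (local Ollama preferred, OpenAI when `--remote` is passed or no `OLLAMA_URL` is configured); the function name is mine, not part of RagQL's API.

```python
import os

def choose_backend(remote: bool) -> str:
    # Prefer a local Ollama server unless --remote forces OpenAI;
    # fall back to OpenAI when OLLAMA_URL is not configured.
    if remote or not os.environ.get("OLLAMA_URL"):
        return "openai"
    return "ollama"
```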
Key features:

Dual LLM backends – Use local Ollama models by default, or force the OpenAI API with the --remote flag. This gives you the choice between offline processing or OpenAI's latest models on demand.

Multiple data sources – Index plain-text log files and SQLite databases (.db). RagQL uses modular loader components to parse content (e.g. reading database tables via pandas or splitting log files into chunks) and then builds a vector index (using FAISS) for efficient similarity search. This modular design makes it easy to add support for new file types or data sources in the future.

Persistent configuration – RagQL reads a .env file for sensitive settings (like your OpenAI API key or Ollama server URL) and a config.json for persistent configuration (such as a list of data sources to index by default, or other preferences). A built-in config mode (--configs) allows you to add or remove indexed sources and set keys without manually editing files. These settings persist between runs, so you can "set and forget" your environment and data sources.

Lightweight and hackable – Built on familiar Python libraries (pandas for data handling, argparse for the CLI, etc.), the project remains lightweight and hackable. Developers can easily extend RagQL – for example, by adding new loader modules for different file formats, or integrating alternative vector stores – thanks to its clean, modular architecture.

Prerequisites: You'll need Python 3.10+ and, only if you want to contribute, Poetry (for dependency management); otherwise the requirements are just Python and the backend LLM provider of your choice. If you plan to use the local LLM mode, you should also install Ollama and have it running (Ollama is available for macOS, Linux, and Windows, and provides a local API endpoint for running models). For remote mode, you'll need an OpenAI account and API key.
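The "splitting log files into chunks" step mentioned above is simple to sketch. The chunk size and overlap values below are illustrative assumptions; RagQL's actual loaders may split differently (e.g. line-based for logs, row-based for database tables).

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    # Fixed-size character windows with overlap, so that information
    # spanning a chunk boundary still appears whole in one chunk.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk is then embedded and stored in the vector index, so retrieval returns passages small enough to fit in an LLM prompt.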
Follow these steps to install RagQL:
Via Poetry (recommended):
Clone the repo and enter it:
git clone https://github.com/yourusername/ragql.git
cd ragql
Install dependencies:
poetry install
Configure your .env and config.json as described below.
Install via PyPI:
RAGQL is also published on PyPI, so you can install it directly:
pip install ragql
Or, if you're using Poetry in another project:
poetry add ragql
After installing via PyPI, make sure to create a .env file in your working directory:
# .env
OPENAI_API_KEY=<your OpenAI key>
OLLAMA_URL=http://localhost:11434
Then you can run:
ragql --help
Configure Environment – Create a .env file in the project root (or wherever you run RagQL) to store configuration like API keys. At minimum you should add:
OPENAI_API_KEY=<your OpenAI key> (if you plan to use OpenAI for embeddings or answers)
OLLAMA_URL=http://localhost:11434 (or the appropriate URL if your Ollama server is running on a different port/host; by default Ollama listens on 11434)

RagQL will automatically load this .env file on startup. You can also configure these via environment variables directly, but using a .env is convenient for local development.
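The .env loading behavior amounts to reading KEY=VALUE lines into the environment without overriding variables that are already set. Here is a minimal stand-in sketch (RagQL's actual implementation may use a library such as python-dotenv):

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    # Minimal .env loader: KEY=VALUE lines, comments skipped,
    # existing environment variables take precedence.
    p = Path(path)
    if not p.exists():
        return
    for line in p.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())

load_env()
ollama_url = os.environ.get("OLLAMA_URL", "http://localhost:11434")
```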
(Optional) Configure Default Sources – By default, RagQL will create a rag_config.json to persist settings. You can manually create or edit this file to specify directories or files that should be indexed on startup (and other config options). However, you can also use RagQL's interactive config commands to set this up after installation (see Usage below), so manual editing isn't required.
Run RagQL – You're all set! You can now run the tool via Poetry:
poetry run ragql --help
Or, if the Poetry environment is active, simply:
ragql --help
This will show the help message and verify that the installation was successful. (If RagQL was installed as a package or script, the ragql command should be available in your PATH.)
RagQL can be used in multiple ways with various command-line options. Here's a comprehensive guide:
ragql [options] [command] [key_value]
--help, -h – Display help message and exit
--migrate – Migrate your config.json to the new schema while preserving unchanged fields
--query QUESTION, -q QUESTION – Run a single RAG-powered query and exit
--sources [SOURCES ...] – Specify one or more folders/text files/.db database files to index
--remote – Force using the OpenAI API even if OLLAMA_URL is set
--configs – Enter configuration mode

Interactive Chat Mode (REPL):
Run ragql with no arguments to start an interactive chat session; type exit or press Ctrl+C to leave.

One-off Query Mode:
ragql --query "Your question here" --sources path/to/data
# or
ragql -q "Your question here" --sources path/to/data
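The intro mentions automating queries via scripts; one way to do that is to shell out to the documented one-off mode. The helper names below are my own, not part of RagQL:

```python
import subprocess

def build_cmd(question: str, sources: list[str]) -> list[str]:
    # Assemble the documented one-off invocation: ragql -q "..." --sources ...
    return ["ragql", "-q", question, "--sources", *sources]

def ragql_query(question: str, sources: list[str]) -> str:
    # Run the query and return the model's answer from stdout.
    result = subprocess.run(build_cmd(question, sources),
                            capture_output=True, text=True, check=True)
    return result.stdout
```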
Configuration Mode:
ragql --configs
# or
ragql [command] [key_value]
The following commands can be used either in configuration mode (--configs) or directly as positional arguments:
add <path> – Add a single file to the index configuration
add-folder <directory> – Recursively add all files from a directory
remove <path> – Remove a file or folder from configuration
list – Display all configured source files/folders
set openai key <API_KEY> – Configure your OpenAI API key
help – Show available commands in config mode
exit – Exit configuration mode (when in interactive mode)

Index and Query a Single File:
ragql --sources ~/logs/system.log -q "What errors occurred today?"
Force Remote API Usage:
ragql --remote --sources data.db -q "Summarize this database"
Configure Sources via Command Line:
ragql add-folder ~/project/logs
ragql add ~/project/data/metrics.db
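Under the hood, commands like these persist your sources to the config file. A hypothetical sketch of what `ragql add <path>` might do (the real schema of rag_config.json is not documented here and may differ):

```python
import json
from pathlib import Path

CONFIG = Path("rag_config.json")

def add_source(path: str) -> None:
    # Load the existing config (or start fresh), append the new source
    # if it isn't already present, and write the file back.
    cfg = json.loads(CONFIG.read_text()) if CONFIG.exists() else {}
    sources = cfg.setdefault("sources", [])
    if path not in sources:
        sources.append(path)
    CONFIG.write_text(json.dumps(cfg, indent=2))
```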
Migrate Configuration:
ragql --migrate
Interactive Chat with Multiple Sources:
ragql --sources ~/logs/ ~/databases/metrics.db
RagQL uses two main configuration files:
.env - For sensitive settings:
OPENAI_API_KEY=<your OpenAI key>
OLLAMA_URL=http://localhost:11434
rag_config.json - For persistent configuration:
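The exact schema isn't shown above; a plausible minimal example, assuming a top-level list of sources (the field name is a guess):

```json
{
  "sources": [
    "~/project/logs",
    "~/project/data/metrics.db"
  ]
}
```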
Managed via --configs mode or the direct commands listed above.

RagQL is built on a stack of modern tools and libraries:
pandas – reads database tables from .db files and converts them into text or CSV for indexing.
.env support – settings are loaded from a .env file. This simplifies configuration of API keys and URLs without hardcoding them.
argparse – powers the CLI, including --sources, --remote, and subcommands in config mode.

Additionally, RagQL is structured in a modular way (with separate components for CLI, configuration management, data loading, embedding generation, and storage). This design makes it easy for developers to understand and modify the codebase or integrate other tools (for example, swapping FAISS with a different vector database, or adding a new loader for a different file type).
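To illustrate how a new loader might plug into such a modular design, here is a hypothetical sketch; RagQL's actual internal loader interface may differ, and both class names below are invented for this example.

```python
from typing import Iterable, Protocol

class Loader(Protocol):
    # Hypothetical loader interface: decide whether a path is supported,
    # and turn its contents into indexable text chunks.
    def supports(self, path: str) -> bool: ...
    def load(self, path: str) -> Iterable[str]: ...

class CsvLoader:
    # Example of a loader a developer might contribute for .csv files.
    def supports(self, path: str) -> bool:
        return path.endswith(".csv")

    def load(self, path: str) -> Iterable[str]:
        with open(path) as f:
            header = f.readline().strip()
            for line in f:
                # Pair each row with the header so chunks stay self-describing.
                yield f"{header}: {line.strip()}"
```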
FAISS – powers the vector similarity search over all indexed content (log files and .db files).