
Fluxy is a self-hosted AI agent that runs on the user's machine, with a full-stack workspace it can modify, a chat interface for remote control, and a relay system for public access via custom domains.
Fluxy is three separate codebases working together:
- The main Fluxy app (CLI, supervisor, worker, workspace) that runs on the user's machine
- Fluxy Relay (api.fluxy.bot) -- optional cloud service that maps username.fluxy.bot to the user's Cloudflare tunnel. Routes HTTP and WebSocket traffic. Only used with Quick Tunnels.
- Cloudflare tunnel -- exposes localhost:3000 to the internet. Two modes: a Quick Tunnel with an ephemeral *.trycloudflare.com URL that changes on restart (optionally paired with the relay for a permanent domain), or a Named Tunnel on the user's own domain.

The user chooses their tunnel mode during fluxy init via an interactive selector.
When fluxy start runs, the CLI spawns a single supervisor process. The supervisor then spawns child processes and manages their lifecycle:
```
CLI (bin/cli.js)
      |
    spawns
      v
Supervisor (supervisor/index.ts)   port 3000  HTTP server + WebSocket + reverse proxy
      |
      +-- Worker (worker/index.ts)         port 3001  Express API, SQLite, auth, conversations
      +-- Vite Dev Server                  port 3002  Serves workspace/client with HMR
      +-- Backend (workspace/backend/)     port 3004  User's custom Express server
      +-- cloudflared (tunnel)             Exposes port 3000 to the internet (quick or named)
      +-- Scheduler (supervisor/scheduler) PULSE + CRON job runner (in-process)
```
Port allocation: base port (default 3000), worker = base+1, Vite = base+2, backend = base+4.
All child processes auto-restart up to 3 times on crash (reset counter if alive >30s). The supervisor catches SIGINT/SIGTERM and tears everything down gracefully.
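A minimal sketch of that restart policy (function and field names here are hypothetical, not taken from the source):

```javascript
// Hypothetical sketch of the supervisor's restart policy: allow up to 3
// restarts, resetting the counter whenever the child stayed alive > 30s.
function shouldRestart(state, crashedAtMs) {
  const aliveMs = crashedAtMs - state.startedAtMs;
  if (aliveMs > 30_000) state.restarts = 0; // a stable run resets the counter
  state.restarts += 1;
  return state.restarts <= 3;
}
```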
The supervisor is a raw http.createServer (no Express) that routes every incoming request:
| Path | Target | Notes |
|---|---|---|
| /fluxy/widget.js | Direct file serve | Chat bubble script, no-cache |
| /sw.js, /fluxy/sw.js | Embedded service worker | PWA + push notification support |
| /app/api/* | Backend (port 3004) | Strips /app/api prefix before forwarding |
| /api/* | Worker (port 3001) | Auth middleware checks Bearer token on mutations |
| /fluxy/* | Static files from dist-fluxy/ | Pre-built chat SPA. HTML: no-cache. Hashed assets: immutable, 1yr max-age |
| Everything else | Vite dev server (port 3002) | Dashboard + HMR |
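The routing in the table can be sketched as a pure function (ports from the table above; the actual proxying and file serving are omitted, and the return shape is an assumption):

```javascript
// Sketch of the supervisor's request routing. Returns where a request
// should go; real code would then proxy or serve files accordingly.
function routeTarget(url) {
  if (url === '/fluxy/widget.js') return { serve: 'widget.js' };
  if (url === '/sw.js' || url === '/fluxy/sw.js') return { serve: 'service-worker' };
  if (url.startsWith('/app/api/')) {
    // user backend: the /app/api prefix is stripped before forwarding
    return { port: 3004, path: url.slice('/app/api'.length) };
  }
  if (url.startsWith('/api/')) return { port: 3001, path: url };   // worker
  if (url.startsWith('/fluxy/')) return { static: 'dist-fluxy/', path: url };
  return { port: 3002, path: url };                                // Vite dev server
}
```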
WebSocket upgrades:
- /fluxy/ws -- Fluxy chat. Auth-gated via query param token. Handled by an in-process WebSocketServer.

The supervisor also:
- Broadcasts app:hmr-update to all connected dashboard clients after file changes
- Watches workspace/backend/ for .ts/.js/.json changes and auto-restarts the backend
- Watches for .env changes (backend restart), a .restart trigger, and an .update trigger (deferred fluxy update)

The supervisor validates Bearer tokens on /api/* POST/PUT/DELETE requests by calling the worker's /api/portal/validate-token endpoint. Token results are cached for 60 seconds. Auth-exempt routes (login, onboard, health, push, auth endpoints) skip this check.
The /app/api/* route has no auth -- the user's workspace backend handles its own authentication.
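The 60-second token validation cache can be sketched as a small memoizing wrapper (names are hypothetical; the real validator calls the worker's /api/portal/validate-token endpoint):

```javascript
// Sketch of the supervisor's 60s token-validation cache. `validate` stands
// in for the HTTP call to the worker; results are cached per token.
function makeCachedValidator(validate, ttlMs = 60_000) {
  const cache = new Map(); // token -> { ok, at }
  return (token, now = Date.now()) => {
    const hit = cache.get(token);
    if (hit && now - hit.at < ttlMs) return hit.ok; // fresh cache hit
    const ok = validate(token);
    cache.set(token, { ok, at: now });
    return ok;
  };
}
```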
Express server on port 3001. Owns the database and all platform API logic. The supervisor never touches SQLite directly -- everything goes through HTTP. All API responses set Cache-Control: no-store, no-cache, must-revalidate to prevent stale responses through the relay/CDN.
Database: ~/.fluxy/memory.db (SQLite via better-sqlite3, WAL mode)
Tables:
- conversations -- chat sessions (id, title, model, session_id, timestamps)
- messages -- individual messages (role, content, tokens_in, tokens_out, model, audio_data, attachments as JSON)
- settings -- key-value store (onboard config, provider, model, portal credentials, etc.)
- sessions -- auth tokens with 7-day expiry
- push_subscriptions -- Web Push endpoints with VAPID keys (endpoint, keys_p256dh, keys_auth)

Auto-migrations add missing columns on startup (session_id, audio_data, attachments).
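The auto-migration step can be sketched as diffing the existing columns (e.g. from PRAGMA table_info) against the desired schema and emitting ALTER TABLE statements. Column types here are assumptions:

```javascript
// Sketch of startup auto-migration: compute ALTER TABLE statements for any
// desired column the table doesn't already have. Types are assumptions.
function missingColumnDDL(table, existingCols, wantedCols) {
  return Object.entries(wantedCols)
    .filter(([name]) => !existingCols.includes(name))
    .map(([name, type]) => `ALTER TABLE ${table} ADD COLUMN ${name} ${type}`);
}
```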
Key endpoints:
- /api/conversations -- CRUD for conversations and messages (paginated with before cursor)
- /api/settings -- key-value read/write
- /api/onboard -- saves wizard configuration (provider, model, portal password, whisper key, names)
- /api/onboard/status -- returns current setup state (names, portal, whisper, provider, handle)
- /api/portal/login -- password auth (POST with JSON body or GET with Basic Auth header)
- /api/portal/validate-token -- session token validation (POST or GET)
- /api/portal/verify-password -- check password without creating a session
- /api/whisper/transcribe -- audio-to-text via OpenAI Whisper API (10MB limit)
- /api/handle/* -- check availability, register, change relay handles
- /api/context/current, /api/context/set, /api/context/clear -- tracks which conversation is active
- /api/auth/claude/* -- Claude OAuth start, exchange, status
- /api/auth/codex/* -- OpenAI OAuth start, cancel, status
- /api/push/* -- VAPID public key, subscribe, unsubscribe, send notifications, status
- /api/files/* -- static file serving from attachment storage
- /api/health -- health check

Portal passwords are hashed with scrypt (random 16-byte salt, stored as salt:hash).
The workspace is a full-stack app template that Claude can freely modify. It lives at workspace/ in the project and gets copied to ~/.fluxy/workspace/ on first install.
```
workspace/
  client/         React + Vite + Tailwind dashboard
    src/App.tsx   Main app entry (error boundary, rebuild overlay, onboard iframe)
    index.html    PWA manifest, service worker registration, widget script
  backend/
    index.ts      Express server template (reads .env, opens app.db)
    .env          Environment variables for the backend
    app.db        SQLite database for workspace data
  MYSELF.md       Agent identity and personality
  MYHUMAN.md      Everything the agent knows about the user
  MEMORY.md       Long-term curated knowledge
  PULSE.json      Periodic wake-up config (interval, quiet hours)
  CRONS.json      Scheduled tasks with cron expressions
  memory/         Daily notes (YYYY-MM-DD.md files, append-only)
  skills/         Plugin directories with .claude-plugin/plugin.json
  MCP.json        MCP server configuration (optional)
  files/          Attachment storage (audio, images, documents)
```
The backend runs on port 3004, accessed at /app/api/* through the supervisor proxy. The /app/api prefix is stripped before reaching the backend, so routes are defined as /health not /app/api/health.
The frontend is served by Vite with HMR. When Claude edits files, Vite picks up changes instantly for the frontend. The supervisor restarts the backend process after any file write by the agent.
The workspace is the only directory Claude is allowed to modify. The system prompt explicitly tells it never to touch supervisor/, worker/, shared/, or bin/.
The agent has no persistent memory between sessions except through files in the workspace:
| File | Purpose |
|---|---|
| MYSELF.md | Agent identity, personality, operating manual. The agent's self-authored description of who it is. |
| MYHUMAN.md | Profile of the user -- preferences, context, everything the agent has learned about them. |
| MEMORY.md | Long-term curated knowledge. Distilled from daily notes into durable insights. |
| memory/YYYY-MM-DD.md | Daily notes. Raw, append-only log of events and observations for that day. |
All four are injected into the system prompt at query time by fluxy-agent.ts. The agent reads and writes these files itself -- there's no external process managing them.
The scheduler runs in-process within the supervisor, checking every 60 seconds.
Periodic wake-ups configured in workspace/PULSE.json:
{ "enabled": true, "intervalMinutes": 30, "quietHours": { "start": "23:00", "end": "07:00" } }
When a pulse fires, the scheduler triggers the agent with a system-generated prompt. The agent can check in, review notes, or take proactive action. Quiet hours suppress pulses.
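A quiet-hours check like the one in the example config (23:00 to 07:00, a window that crosses midnight) can be sketched as follows; the function name and time format are assumptions:

```javascript
// Sketch of the PULSE quiet-hours check. Handles windows that cross
// midnight (e.g. 23:00 -> 07:00) as well as same-day windows.
function inQuietHours(hhmm, start = '23:00', end = '07:00') {
  const toMin = (s) => {
    const [h, m] = s.split(':').map(Number);
    return h * 60 + m;
  };
  const t = toMin(hhmm), a = toMin(start), b = toMin(end);
  // same-day window: a <= t < b; cross-midnight window: t >= a OR t < b
  return a <= b ? (t >= a && t < b) : (t >= a || t < b);
}
```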
Scheduled tasks configured in workspace/CRONS.json:
[{ "id": "...", "schedule": "0 9 * * *", "task": "Check the weather", "enabled": true, "oneShot": false }]
Uses cron-parser to match cron expressions against the current minute. One-shot crons are auto-removed after firing.
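To illustrate the matching idea (the real implementation uses the cron-parser library), here is a deliberately minimal 5-field matcher that only supports plain numbers and `*`:

```javascript
// Minimal illustration of matching a cron expression against a Date.
// Supports only plain numbers and "*" -- the actual scheduler uses
// cron-parser, which handles ranges, steps, and lists.
function matchesMinute(expr, d) {
  const fields = expr.trim().split(/\s+/); // minute hour day-of-month month day-of-week
  const vals = [d.getMinutes(), d.getHours(), d.getDate(), d.getMonth() + 1, d.getDay()];
  return fields.every((f, i) => f === '*' || Number(f) === vals[i]);
}
```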
When a cron or pulse fires, the scheduler calls startFluxyAgentQuery with the task and collects the <Message> blocks from the agent's response.

The agent auto-discovers skills in workspace/skills/. Each skill is a folder containing .claude-plugin/plugin.json. These are loaded as tool plugins when the agent starts a query.
MCP servers can be configured in workspace/MCP.json. The agent loads them at query time and logs which servers are active.
The worker generates VAPID keys on first boot and stores them in settings. The chat SPA requests notification permission, subscribes via the Push API, and sends the subscription endpoint + keys to the worker.
When the scheduler (or any server-side event) needs to notify the user, it calls POST /api/push/send which fans out web-push notifications to all stored subscriptions. Expired subscriptions are auto-cleaned.
The chat UI is a standalone React SPA built separately (vite.fluxy.config.ts -> dist-fluxy/). It runs inside an iframe injected by widget.js on the dashboard.
If Claude introduces a bug that crashes the dashboard, the chat stays alive. The user can still talk to Claude and ask for a fix. The chat and dashboard are completely isolated -- different React trees, different build outputs, different error boundaries.
Vanilla JS that injects a chat bubble and a panel whose iframe loads the chat SPA from /fluxy/.

Communicates with the iframe via postMessage:
- fluxy:close -- iframe requests panel close
- fluxy:install-app -- iframe requests PWA install prompt
- fluxy:show-ios-install -- iframe requests iOS-specific install modal
- fluxy:onboard-complete -- iframe notifies onboarding finished, reloads bubble
- fluxy:rebuilding -- agent started modifying files, show rebuild overlay
- fluxy:rebuilt -- rebuild complete, dashboard should reload
- fluxy:build-error -- build failed, show error overlay
- fluxy:hmr-update -- supervisor notifies dashboard of file changes

Panel state is persisted in localStorage (fluxy_widget_open) to survive HMR reloads. The bubble is hidden during onboarding.
Client -> Server:
- user:message -- { content, conversationId?, attachments? } where attachments are { type, name, mediaType, data(base64) }
- user:stop -- abort current agent query
- user:clear-context -- clear conversation and agent session
- whisper:transcribe -- { audio: base64 } (bypasses relay POST limitation)
- settings:save -- { ...settings } (bypasses relay POST limitation)

Server -> Client:
- bot:typing -- agent started
- bot:token -- streamed text chunk
- bot:tool -- tool invocation (name, status)
- bot:response -- final complete response
- bot:error -- error message
- bot:done -- query complete, includes usedFileTools flag
- chat:conversation-created -- new conversation ID assigned
- chat:sync -- message from another connected client
- chat:state -- stream state on reconnect (catches up missed tokens)
- chat:cleared -- context was cleared
- app:hmr-update -- file changes detected, dashboard should reload

The client auto-reconnects with exponential backoff (1s -> 8s cap), queues messages during disconnection, and sends heartbeat pings every 25 seconds. The auth token is passed as a query parameter on connect.
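The reconnect backoff (doubling from 1s, capped at 8s) reduces to a one-liner; the function name is hypothetical:

```javascript
// Reconnect delay: 1s, 2s, 4s, 8s, then capped at 8s for every
// subsequent attempt, per the backoff described above.
function reconnectDelay(attempt) {
  return Math.min(1000 * 2 ** attempt, 8000);
}
```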
The chat SPA handles:
- beforeinstallprompt for Android PWA install

When the configured provider is Anthropic, chat messages are routed through the Claude Agent SDK instead of a raw API call:
- permissionMode: bypassPermissions -- full tool access, no confirmation prompts
- System prompt loaded from worker/prompts/fluxy-system-prompt.txt
- Skills loaded from workspace/skills/
- MCP servers loaded from workspace/MCP.json

The agent has access to all Claude Code tools (Read, Write, Edit, Bash, Grep, Glob, etc.). After a query completes, the supervisor checks if Write or Edit tools were used. If so, it restarts the backend and broadcasts an HMR update.
OAuth tokens are managed by worker/claude-auth.ts using PKCE flow against claude.ai. Tokens are stored in the macOS Keychain (primary) or ~/.claude/.credentials.json (fallback). Refresh tokens are used to renew access tokens with a 5-minute expiry buffer.
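The 5-minute expiry buffer is a simple threshold check; the function name and millisecond units are assumptions:

```javascript
// Refresh the access token once we're within 5 minutes of expiry,
// per the buffer described above.
const REFRESH_BUFFER_MS = 5 * 60 * 1000;

function needsRefresh(expiresAtMs, nowMs = Date.now()) {
  return nowMs >= expiresAtMs - REFRESH_BUFFER_MS;
}
```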
For non-Anthropic providers (OpenAI, Ollama), the supervisor falls back to ai.chat() with simple message history -- no agent tools, no file access.
The user's machine is behind NAT. We need a public URL so they can access their bot from their phone.
The user selects a tunnel mode during fluxy init. The mode is stored in ~/.fluxy/config.json as tunnel.mode.
Zero configuration. No Cloudflare account needed.
```
Phone browser
      |
      | https://bruno.fluxy.bot (via relay, optional)
      v
Fluxy Relay (api.fluxy.bot)   Cloud server, maps username -> tunnel URL
      |
      | https://random-abc.trycloudflare.com
      v
Cloudflare Quick Tunnel       Ephemeral tunnel, changes on restart
      |
      | http://localhost:3000
      v
Supervisor                    User's machine
```
Quick Tunnel mode:
- Runs cloudflared tunnel --url http://localhost:3000 --no-autoupdate
- Produces an ephemeral URL (*.trycloudflare.com)
- Optionally registers with the relay for a my.fluxy.bot/username or fluxy.bot/username URL

Named Tunnel mode provides a persistent URL with the user's own domain. Requires a Cloudflare account + domain.
```
Phone browser
      |
      | https://bot.mydomain.com
      v
Cloudflare Named Tunnel   Persistent tunnel, URL never changes
      |
      | http://localhost:3000
      v
Supervisor                User's machine
```
Named Tunnel mode:
- Set up via fluxy tunnel setup (interactive: login, create tunnel, generate config, print CNAME instructions)
- Runs cloudflared tunnel --config <configPath> run <name>
- The user's domain points a CNAME at <uuid>.cfargotunnel.com

Fluxy downloads the cloudflared binary to ~/.fluxy/bin/ on first run (validates minimum 10MB file size).

The relay is Node.js/Express + http-proxy + MongoDB, hosted on Railway. Only used when the user opts into Quick Tunnel mode and registers a handle.
Registration flow:
- The CLI sends POST /api/register with username + tier
- The relay token is stored in ~/.fluxy/config.json
- On boot, the supervisor sends PUT /api/tunnel with its cloudflared URL

Request proxying:
- Incoming requests are matched to a bot by subdomain (e.g. bruno.fluxy.bot)
- proxy.web(req, res, { target: tunnelUrl }) forwards everything -- headers, body, method

Presence:
- The supervisor sends POST /api/heartbeat every 30 seconds with its tunnel URL
- On shutdown, it sends POST /api/disconnect

Domain tiers:
| Tier | Subdomain | Path shortcut | Cost |
|---|---|---|---|
| Premium | bruno.fluxy.bot | fluxy.bot/bruno | $5/mo |
| Free | bruno.my.fluxy.bot | my.fluxy.bot/bruno | Free |
Same username can exist on both tiers independently. Compound unique index on username + tier.
WebSocket proxying:
The relay listens for HTTP upgrade events outside of Express middleware. This is critical -- Express middleware (body parsing, CORS) must not touch WebSocket upgrades. The upgrade handler parses the subdomain, looks up the bot, and calls proxy.ws().
The relay's express.json() middleware must run AFTER the subdomain resolver, not before. If body parsing runs first, it consumes the request stream and http-proxy has nothing to forward. This was a real bug -- the fix was scoping express.json() to /api routes only, letting proxied traffic pass through with raw streams intact.
The Fluxy chat has additional workarounds for this (sending settings and whisper data over WebSocket instead of POST), but with the relay fix these are no longer strictly necessary. They remain as defense-in-depth.
Multi-step wizard shown on first launch (inside the Fluxy chat iframe):
Settings are saved via WebSocket (settings:save message) to bypass relay POST limitations.
The CLI is the user-facing entry point. Commands:
| Command | Description |
|---|---|
| fluxy init | First-time setup: interactive tunnel mode chooser (Quick or Named), creates config, installs cloudflared, boots server, optionally installs systemd daemon |
| fluxy start | Boot the supervisor (or detect existing daemon and show status) |
| fluxy status | Health check via /api/health, shows uptime, tunnel URL, and relay URL |
| fluxy update | Downloads latest from npm registry, updates code directories, rebuilds UI, restarts daemon |
| fluxy tunnel | Named tunnel management (subcommands below) |
| fluxy daemon | Linux systemd management: install, start, stop, restart, status, logs, uninstall |
fluxy tunnel subcommands:
| Subcommand | Description |
|---|---|
| fluxy tunnel setup | Interactive named tunnel setup: login to Cloudflare, create tunnel, enter domain, generate config YAML, print CNAME instructions |
| fluxy tunnel status | Show current tunnel mode and configuration |
| fluxy tunnel reset | Switch back to quick tunnel mode |
fluxy init tunnel chooser:
During init, the user is presented with an interactive arrow-key menu to choose their tunnel mode:
- Quick Tunnel -- optionally register a my.fluxy.bot/username handle (free) or a premium fluxy.bot/username handle ($5 one-time).
- Named Tunnel -- use your own domain, e.g. bot.yourdomain.com or the root domain.

If Named Tunnel is selected, fluxy init immediately runs the named tunnel setup flow inline (same as fluxy tunnel setup).
The CLI spawns the supervisor via node --import tsx/esm supervisor/index.ts and waits for readiness markers on stdout (__TUNNEL_URL__, __RELAY_URL__, __VITE_WARM__, __READY__, __TUNNEL_FAILED__) with a 45-second timeout.
On Linux, fluxy daemon generates a systemd unit file that runs the supervisor as a service with auto-restart on failure.
Two installation paths:
Via curl (production):
curl -fsSL https://fluxy.bot/install | sh
The install script (scripts/install.sh) detects OS/arch, checks for Node.js >= 18 (or bundles Node 22.14.0), downloads the npm package, extracts to ~/.fluxy/, and adds fluxy to PATH.
Via npm (development):
npm install fluxy-bot
The postinstall script (scripts/postinstall.js) copies code directories to ~/.fluxy/, runs npm install --omit=dev there, builds the chat UI if missing, and creates a fluxy symlink.
Windows: scripts/install.ps1 (PowerShell equivalent).
| Path | Contents |
|---|---|
| ~/.fluxy/config.json | Port, username, AI provider, tunnel mode/config, relay token, tunnel URL |
| ~/.fluxy/cloudflared-config.yml | Named tunnel config (generated by fluxy tunnel setup) |
| ~/.fluxy/memory.db | SQLite -- conversations, messages, settings, sessions, push subscriptions |
| ~/.fluxy/bin/cloudflared | Cloudflare tunnel binary |
| ~/.fluxy/workspace/ | User's workspace copy (client, backend, memory files, skills, config) |
| ~/.codex/codedeck-auth.json | OpenAI OAuth tokens |
| ~/.claude/.credentials.json | Claude OAuth tokens (Linux/Windows) |
| macOS Keychain (Claude Code-credentials) | Claude OAuth tokens (macOS, source of truth) |
```
bin/cli.js            CLI entry point, startup sequence, update logic, daemon management
supervisor/
  index.ts            HTTP server, request routing, WebSocket handler, process orchestration
  worker.ts           Worker process spawn/stop/restart
  backend.ts          Backend process spawn/stop/restart
  tunnel.ts           Cloudflare tunnel lifecycle (quick + named), health watchdog
  vite-dev.ts         Vite dev server startup for dashboard HMR
  fluxy-agent.ts      Claude Agent SDK wrapper, session management, memory injection
  scheduler.ts        PULSE + CRON scheduler, 60s tick, push notification dispatch
  file-saver.ts       Attachment storage (audio, images, documents)
  widget.js           Chat bubble + panel injected into dashboard
chat/
  fluxy-main.tsx      Chat SPA entry -- auth, WS connection, push subscription, PWA
  onboard-main.tsx    Onboard SPA entry
  OnboardWizard.tsx   Multi-step setup wizard
  ARCHITECTURE.md     Network topology and relay workaround docs
  src/
    hooks/useChat.ts       Base chat state management
    hooks/useFluxyChat.ts  Fluxy-specific chat: DB persistence, sync, pagination, streaming
    lib/ws-client.ts       WebSocket client with reconnect + queue
    lib/auth.ts            Token storage and auth fetch wrapper
    components/Chat/
      ChatView.tsx         Main chat container
      InputBar.tsx         Text input, file/camera attachments, voice recording
      MessageBubble.tsx    Markdown rendering, syntax highlighting, attachments
      MessageList.tsx      Paginated message history with infinite scroll
      AudioBubble.tsx      Audio player for voice messages
      ImageLightbox.tsx    Image viewer modal
      TypingIndicator.tsx  "Bot is typing..." animation
    components/LoginScreen.tsx  Portal login UI
worker/
  index.ts            Express API server -- all platform endpoints
  db.ts               SQLite schema, CRUD operations, migrations
  claude-auth.ts      Claude OAuth PKCE flow, token refresh, Keychain integration
  codex-auth.ts       OpenAI OAuth PKCE flow, local callback server on port 1455
  prompts/fluxy-system-prompt.txt  System prompt that constrains the agent
shared/
  config.ts           Load/save ~/.fluxy/config.json
  paths.ts            All path constants (PKG_DIR, DATA_DIR, WORKSPACE_DIR)
  relay.ts            Relay API client (register, heartbeat, disconnect, tunnel update)
  ai.ts               AI provider abstraction (Anthropic, OpenAI, Ollama) with streaming
  logger.ts           Colored console logging with timestamps
scripts/
  install.sh          User-facing install script (curl-piped), Node bundling
  install.ps1         Windows PowerShell installer
  postinstall.js      npm postinstall: copies files to ~/.fluxy/, builds UI, creates symlink
workspace/
  client/
    index.html        Dashboard HTML shell, PWA manifest, widget script tag
    src/main.tsx      React DOM entry
    src/App.tsx       Dashboard root -- error boundary, rebuild overlay
    src/components/   Dashboard UI components
  backend/
    index.ts          Express server template with .env loading and SQLite
    .env              Environment variables
    app.db            Workspace SQLite database
  MYSELF.md           Agent identity and personality
  MYHUMAN.md          User profile (agent-maintained)
  MEMORY.md           Long-term curated knowledge
  PULSE.json          Periodic wake-up configuration
  CRONS.json          Scheduled task definitions
  memory/             Daily notes (YYYY-MM-DD.md)
  skills/             Plugin directories (.claude-plugin/plugin.json)
  MCP.json            MCP server configuration (optional)
  files/              Uploaded file storage (audio/, images/, documents/)
```
Why a supervisor + worker split instead of one process? Process isolation. If the worker crashes (bad DB migration, OOM), the supervisor keeps running, the tunnel stays up, the chat stays connected. The user can still talk to Claude. Same logic for the backend -- if Claude writes buggy code, only the backend dies.
Why serve the chat from pre-built static files instead of Vite?
The chat must survive dashboard crashes. If Vite dies or the workspace frontend throws, the chat iframe loads from dist-fluxy/ which is just static files. No build process, no dev server dependency.
Why WebSocket for chat instead of HTTP streaming? The relay couldn't reliably forward POST bodies (now fixed). WebSocket was the workaround. It also gives us bidirectional real-time communication, multi-device sync, and heartbeat detection for free.
Why bypassPermissions on the agent? The whole point is that the user talks to Claude from their phone and Claude does whatever's needed. Confirmation prompts would require a terminal session that doesn't exist. The workspace directory boundary + the system prompt are the safety rails.
Why two tunnel modes?
Quick Tunnel is the default for simplicity -- zero configuration, no Cloudflare account needed. The tradeoff is the URL changes on restart, which is why the relay exists as an optional stable domain layer. Named Tunnel is for advanced users who want full control -- their own domain, no dependency on the relay, permanent URLs. Both modes are offered during fluxy init.
Why two Vite configs?
vite.config.ts builds the workspace dashboard (user-facing app). vite.fluxy.config.ts builds the Fluxy chat SPA. They're separate apps with separate entry points, bundled independently. The chat is pre-built at publish time; the dashboard runs as a dev server with HMR.
Why memory files instead of a database for agent memory? Files are the natural interface for the Claude Agent SDK -- it can read and write them with its built-in tools. No custom tool needed, no API integration. The agent manages its own memory with the same tools it uses to edit code.