Subto.One
Comprehensive Website Analysis & AI Code Surgery - 100% Free Forever
Features
- Deep Runtime Analysis: Full Playwright-based crawling with JavaScript execution
- Network Interception: Captures all requests, responses, and timing data
- Interaction Simulation: Programmatically tests buttons, inputs, and interactive elements
- Lighthouse Integration: Performance, accessibility, SEO, and best practices scoring
- SEO & Markup Validation: W3C Validator, Google Mobile-Friendly Test integration
- Performance Testing: Google PageSpeed, WebPageTest, GTmetrix integration
- Security Audit: Mozilla Observatory, Security Headers, SSL Labs, Safe Browsing integration
- Malware Detection: VirusTotal, Hybrid Analysis, URLScan.io support
- AI API Selection: Ask AI to choose the best scanning API for your needs
- No-JS Differential: Compares JS-enabled vs disabled behavior
- AI Code Surgeon: Upload your code and get AI-powered fixes
Quick Start
Prerequisites
- Node.js 22+ (required by Lighthouse v13)
- npm
Installation
npm install
npx playwright install chromium
cp .env.example .env
Configuration
Edit .env and add your OpenRouter API key for AI features:
OPENROUTER_API_KEY=your_key_here
Get a free API key at OpenRouter
Optional API Keys
Add these to your .env for enhanced scanning capabilities (all have generous free tiers):
# Performance & Speed (Scalable Free Options)
GOOGLE_PAGESPEED_API_KEY=your_key # Google PageSpeed Insights
WEBPAGETEST_API_KEY=your_key # WebPageTest (public instance available)
GOOGLE_MOBILE_FRIENDLY_API_KEY=your_key # Google Mobile-Friendly Test
# SEO & Markup Validation
# W3C Markup Validator (no key needed) # HTML validation for SEO
# Security & Malware (No/Low Limits)
VIRUSTOTAL_API_KEY=your_key # VirusTotal (500 requests/day free)
GOOGLE_SAFE_BROWSING_API_KEY=your_key # Google Safe Browsing
URLSCAN_API_KEY=your_key # URLScan.io (free tier)
HYBRID_ANALYSIS_API_KEY=your_key # Hybrid Analysis (free tier)
# Always Free (No API Keys Needed)
# Mozilla Observatory, Security Headers, SSL Labs
All APIs have free tiers. See .env.example for details.
Running
npm run dev # development
npm start # production
Visit http://localhost:3000
Architecture
quantumreasoning/
├── public/ # Frontend assets
│ ├── index.html # Single-page app
│ ├── styles.css # Exact styling spec
│ └── app.js # Frontend logic
├── server/
│ ├── index.js # Express server + WebSocket
│ └── modules/
│ ├── scan-pipeline.js # 7-phase analysis engine
│ ├── ai-analyzer.js # OpenRouter AI integration
│ ├── file-manager.js # Upload/ZIP handling
│ └── data-store.js # In-memory storage
├── package.json
└── .env.example
Scan Pipeline
- Fetch Initial HTML - Loads page with Playwright
- Execute JavaScript Runtime - Captures all JS files and builds AST
- Intercept Network Requests - Records all network activity
- Simulate Interactions - Hovers, clicks, types on interactive elements
- Run Lighthouse - Google PageSpeed Insights API
- Audit Security - Mozilla Observatory + OWASP checks
- No-JS Differential - Compares behavior without JavaScript
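The seven phases above can be sketched as a sequential pipeline where each phase is an async function that reads a shared context and contributes its findings. The phase helpers here are hypothetical stubs, not the real implementations in scan-pipeline.js:

```javascript
// Sketch of a sequential 7-phase scan pipeline. Phase names mirror the list
// above; the phase bodies are placeholder stubs, not the real analysis code.
const phases = [
  ['fetchInitialHtml', async (ctx) => ({ html: '<html>...</html>' })],
  ['executeJsRuntime', async (ctx) => ({ scripts: [] })],
  ['interceptNetwork', async (ctx) => ({ requests: [] })],
  ['simulateInteractions', async (ctx) => ({ interactions: [] })],
  ['runLighthouse', async (ctx) => ({ scores: {} })],
  ['auditSecurity', async (ctx) => ({ findings: [] })],
  ['noJsDifferential', async (ctx) => ({ diff: {} })],
];

async function runPipeline(url) {
  const ctx = { url, results: {} };
  for (const [name, phase] of phases) {
    // Phases run in order; each can read the results of earlier phases.
    ctx.results[name] = await phase(ctx);
  }
  return ctx.results;
}
```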
API Endpoints
Scan
POST /api/v1/scan
Body: { "url": "https://subto.one" }
Response: { "scanId": "uuid", "status": "started" }
Notes on queuing and rate limiting:
- Concurrency limit: the server allows up to MAX_CONCURRENT_SCANS (default 50) scans to run concurrently.
- If the server is at capacity, API clients can opt into an automatic queue by sending the header X-Accept-Queue: true with the POST request. The request is then accepted and queued, and the response is 202 Accepted with JSON { scanId, status: 'queued', queuePosition }.
- If the client does not opt into queuing and the server is at capacity, the API returns 429 Too Many Requests with a short message instructing the client to retry or set X-Accept-Queue: true.
- The UI automatically sets X-Accept-Queue: true and displays Queuing plus the user's position (e.g., You are #3 in queue).
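A minimal client for this endpoint might look like the sketch below. The fetch implementation is injectable so the queue-handling logic can be exercised without a running server; in real use, pass the global fetch:

```javascript
// Client sketch for POST /api/v1/scan with the queueing behaviour described
// above. `fetchImpl` is injectable for testing; defaults to the global fetch.
async function startScan(url, { acceptQueue = true, fetchImpl = fetch } = {}) {
  const headers = { 'Content-Type': 'application/json' };
  if (acceptQueue) headers['X-Accept-Queue'] = 'true'; // opt into server-side queueing
  const res = await fetchImpl('/api/v1/scan', {
    method: 'POST',
    headers,
    body: JSON.stringify({ url }),
  });
  if (res.status === 202) {
    // Accepted and queued: { scanId, status: 'queued', queuePosition }
    return { ...(await res.json()), queued: true };
  }
  if (res.status === 429) {
    throw new Error('Server at capacity; retry later or set X-Accept-Queue: true');
  }
  // Started immediately: { scanId, status: 'started' }
  return { ...(await res.json()), queued: false };
}
```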
Get Results
GET /api/v1/scan/:scanId
Response: Full scan data
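Since a scan may still be running or queued when first fetched, a client typically polls this endpoint. A hedged sketch (the 'complete' terminal status value is an assumption; the injectable fetch is for testability):

```javascript
// Hypothetical polling helper: fetch GET /api/v1/scan/:scanId until the scan
// leaves the 'started'/'queued' states. Terminal status names are assumptions.
async function waitForScan(scanId, { fetchImpl = fetch, intervalMs = 2000, maxTries = 60 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const res = await fetchImpl(`/api/v1/scan/${scanId}`);
    const data = await res.json();
    if (data.status !== 'started' && data.status !== 'queued') return data;
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // wait before retrying
  }
  throw new Error('Scan timed out');
}
```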
AI Analysis
POST /api/v1/ai/analyze
Body: { "scanId": "uuid", "files": [...] }
Response: { "summary": "...", "changes": [...] }
File Upload
Supported Types
- JavaScript: .js, .ts, .jsx, .tsx
- Styles: .css, .scss
- Markup: .html, .vue, .svelte
- Data: .json, .env, .md
Limits
- Single file: 50 MB max
- Total upload: 250 MB max
- File count: 5,000 files max
Excluded Folders
- node_modules/
- .git/
- .next/, dist/, build/
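The upload rules above can be expressed as a client-side pre-check like the sketch below (the server re-validates regardless). The extension list, size/count limits, and excluded folders are taken directly from this README; the function shape is illustrative:

```javascript
// Client-side pre-check of the upload rules above. files: [{ path, size }].
const ALLOWED_EXT = ['.js', '.ts', '.jsx', '.tsx', '.css', '.scss',
                     '.html', '.vue', '.svelte', '.json', '.env', '.md'];
const EXCLUDED_DIRS = ['node_modules/', '.git/', '.next/', 'dist/', 'build/'];
const MAX_FILE = 50 * 1024 * 1024;   // 50 MB per file
const MAX_TOTAL = 250 * 1024 * 1024; // 250 MB per upload
const MAX_COUNT = 5000;              // 5,000 files per upload

function validateUpload(files) {
  // Drop excluded folders and unsupported file types before checking limits.
  const kept = files.filter((f) =>
    !EXCLUDED_DIRS.some((dir) => f.path.includes(dir)) &&
    ALLOWED_EXT.some((ext) => f.path.endsWith(ext)));
  if (kept.length > MAX_COUNT) throw new Error('Too many files (max 5,000)');
  if (kept.some((f) => f.size > MAX_FILE)) throw new Error('File exceeds 50 MB');
  const total = kept.reduce((sum, f) => sum + f.size, 0);
  if (total > MAX_TOTAL) throw new Error('Upload exceeds 250 MB');
  return kept;
}
```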
AI Models (Free Tier)
- Default: deepseek/deepseek-r1:free
- Code Analysis: qwen/qwen3-coder:free
- Security: mistralai/devstral-small:free
Rate Limits
- 5 seconds between requests
- 50 requests per day
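These limits can be enforced client-side with a simple gate like the sketch below (the factory function and its defaults are illustrative, not the app's actual implementation):

```javascript
// Gate implementing the limits above: at least 5 seconds between requests,
// at most 50 requests per day. `now` is injectable for testing.
function createRateLimiter({ minIntervalMs = 5000, dailyMax = 50, now = Date.now } = {}) {
  let lastAt = -Infinity;
  let dayStart = now();
  let count = 0;
  return function tryAcquire() {
    const t = now();
    if (t - dayStart >= 24 * 60 * 60 * 1000) { dayStart = t; count = 0; } // new day window
    if (count >= dailyMax) return false;          // daily budget exhausted
    if (t - lastAt < minIntervalMs) return false; // too soon since last request
    lastAt = t;
    count++;
    return true;
  };
}
```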
Data Retention
All scan data is automatically deleted after 24 hours. No user accounts required.
Security
- No binary file uploads
- Server-side validation even if client-side passes
- Streaming uploads to disk, not memory
- Directory traversal prevention
- MIME type validation
License
MIT
Deployment (Docker + nginx reverse-proxy)
This repo includes a simple production-ready scaffold under deploy/ that runs the Node app behind an nginx reverse proxy, which terminates TLS and forwards requests (including WebSocket upgrades) to the app.
Quick local test (self-signed cert):
cd deploy
./mk-self-signed.sh
docker compose up --build
Production notes:
- Use Node.js 22+ in your runtime (required by Lighthouse v13).
- Terminate TLS at your load balancer (Cloud Load Balancer, nginx, etc.) and forward plain HTTP to the Node container.
- Ensure WebSocket upgrades are forwarded by the proxy.
- Provide secrets via environment variables or your hosting secret manager (do NOT commit secrets).
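The WebSocket-forwarding requirement above translates into an nginx location block along these lines (upstream name and port are illustrative; adapt to your deploy/ setup):

```nginx
# Illustrative nginx config: forward HTTP and WebSocket upgrades to the Node
# container. The upstream host/port are examples.
location / {
    proxy_pass http://app:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;   # required for WebSocket upgrades
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```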