
@techwavedev/agi-agent-kit
Enterprise-Grade Agentic Framework - Modular skill-based AI assistant toolkit with deterministic execution, semantic memory, and platform-adaptive orchestration.
Português (BR) | English
Stop hallucinating. Start executing.
AGI Agent Kit is the enterprise-grade scaffolding that turns any AI coding assistant into a deterministic production machine. While LLMs are probabilistic (90% accuracy per step compounds to roughly 59% over 5 steps), this framework forces them through a 3-Layer Architecture (Intent → Orchestration → Execution) where business logic lives in tested scripts, not hallucinated code.
Most AI coding setups give you a prompt and hope for the best. AGI Agent Kit gives you:
```bash
npx @techwavedev/agi-agent-kit init
```

If this project helps you, consider supporting it here or simply ⭐ the repo.

Scaffold a new agent workspace in seconds:

```bash
npx @techwavedev/agi-agent-kit init

# Or install globally to ~/.agent to share skills across projects
npx @techwavedev/agi-agent-kit init --global
```
You'll be guided through an interactive wizard covering the install scope (project-local `.agent/` or `~/.agent` shared across projects), the `.agent/` structure, and platform settings such as `.claude/settings.json`. After installation the wizard shows your next steps, including:
```bash
# Boot the memory system (verifies Qdrant + Ollama, auto-fixes issues)
python3 execution/session_boot.py --auto-fix

# Run the platform setup wizard (auto-configures your AI platform)
python3 skills/plugin-discovery/scripts/platform_setup.py --project-dir .
```
| Feature | Description |
|---|---|
| Deterministic Execution | Separates business logic (Python scripts) from AI reasoning (Directives) |
| Modular Skill System | 1,191+ plug-and-play skills across 3 tiers, organized in 16 domain categories |
| Memory Mode Tiers | Solo → Team → Pro: start simple, add multi-tenancy and auth as needed, with no data migration |
| Distributed Agent Auth | HMAC-SHA256 signing, hash anchoring, project access control via shared Qdrant (Hyperledger Aries optional) |
| Real-Time Agent Events | Apache Pulsar event bus for push notifications between agents; graceful degradation if unavailable |
| Hybrid Memory | Qdrant vectors + BM25 keywords with weighted score merge (95% token savings) |
| Platform-Adaptive | Auto-detects Claude Code, Gemini CLI, Codex CLI, Cursor, Copilot, OpenCode, AdaL, Antigravity, Kiro |
| MCP Compatible | Memory + cross-agent coordination exposed as MCP tools (execution/mcp_server.py) for Claude Desktop and any MCP client |
| Multi-Agent Orchestration | Agent Teams, subagents, Powers, or sequential personas β adapts to platform |
| Structured Plan Execution | Batch or subagent-driven execution with two-stage review (spec + quality) |
| TDD Enforcement | Iron-law RED-GREEN-REFACTOR cycle; no production code without a failing test |
| Verification Gates | Evidence before claims; no completion without fresh verification output |
| Self-Healing Workflows | Agents read error logs, patch scripts, and update directives automatically |
| Skill Self-Improvement | Karpathy Loop: autonomous test β improve β commit/reset cycle with 18 binary assertion types |
| One-Shot Setup | Platform detection + project stack scan + auto-configuration in one command |
The agi framework adopts the best patterns from obra/superpowers and extends them with capabilities superpowers does not have:
| Capability | obra/superpowers | agi Framework |
|---|---|---|
| TDD Enforcement | ✅ | ✅ Adapted |
| Plan Execution + Review | ✅ | ✅ Adapted + platform-adaptive |
| Systematic Debugging | ✅ | ✅ Adapted + debugger agent |
| Verification Gates | ✅ | ✅ Adapted + 12 audit scripts |
| Two-Stage Code Review | ✅ | ✅ Adapted into orchestrator |
| Multi-Platform Orchestration | ❌ Claude only | ✅ 10 platforms |
| Semantic Memory (Qdrant) | ❌ | ✅ 90-100% token savings |
| 19 Specialist Agents | ❌ | ✅ Domain boundaries |
| Agent Boundary Enforcement | ❌ | ✅ File-type ownership |
| Dynamic Question Generation | ❌ | ✅ Trade-offs + priorities |
| Memory-First Protocol | ❌ | ✅ Auto cache-hit |
| Skill Creator + Catalog | ❌ | ✅ 1,191 composable skills |
| Platform Setup Wizard | ❌ | ✅ One-shot config |
| Multi-Platform Symlinks | ❌ Claude only | ✅ 10 platforms |
| MCP Server | ❌ | ✅ Memory + coordination |
The framework supports two orchestration modes. Here are real test results from execution/benchmark_modes.py running on local infrastructure (Qdrant + Ollama nomic-embed-text, zero cloud API calls):
```
MODE A: SUBAGENTS - Independent, fire-and-forget

🤖 Explore Auth Patterns  → ✅ stored in cache + memory (127ms)
🤖 Query Performance      → ❌ FAILED (timeout, fault tolerant)
🤖 Scan CVEs              → ✅ stored in cache + memory (14ms)

Summary: 2/3 completed, 1 failed, 0 cross-references

MODE B: AGENT TEAMS - Shared context, coordinated

🤖 Backend Specialist     → ✅ stored in shared memory (14ms)
🤖 Database Specialist    → ✅ stored in shared memory (13ms)
🤖 Frontend Specialist    → 📖 Read Backend + Database output first
   ✅ Got context from team-backend: "API contract: POST /api/messages..."
   ✅ Got context from team-database: "Schema: users(id UUID PK, name..."
   → ✅ stored in shared memory (14ms)

Summary: 3/3 completed, 0 failed, 2 cross-references
```
2nd run (cache warm): all queries hit the cache at score 1.000, reducing total time from 314ms → 76ms (Subagents) and 292ms → 130ms (Agent Teams).
| Metric | Subagents | Agent Teams |
|---|---|---|
| Execution model | Fire-and-forget (isolated) | Shared context (coordinated) |
| Tasks completed | 2/3 (fault tolerant) | 3/3 |
| Cross-references | 0 (not supported) | 2 (peers read each other's work) |
| Context sharing | ❌ Each agent isolated | ✅ Peer-to-peer via Qdrant |
| Two-stage review | ❌ | ✅ Spec + Quality |
| Cache hits (2nd run) | 5/5 | 5/5 |
| Embedding provider | Ollama local (nomic-embed-text 137M) | Ollama local (nomic-embed-text 137M) |
Try it yourself:
```bash
# 1. Start infrastructure
docker run -d -p 6333:6333 -v qdrant_storage:/qdrant/storage qdrant/qdrant
ollama serve &
ollama pull nomic-embed-text

# 2. Boot memory system
python3 execution/session_boot.py --auto-fix
# ✅ Memory system ready: 5 memories, 1 cached response

# 3. Run the full benchmark (both modes)
python3 execution/benchmark_modes.py --verbose

# 4. Or test individual operations:

# Store a decision (embedding generated locally via Ollama)
python3 execution/memory_manager.py store \
  --content "Chose PostgreSQL for relational data" \
  --type decision --project myapp
# → {"status": "stored", "point_id": "...", "token_count": 5}

# Auto-query: checks cache first, then retrieves context
python3 execution/memory_manager.py auto \
  --query "what database did we choose?"
# → {"source": "memory", "cache_hit": false, "context_chunks": [...]}

# Cache an LLM response for future reuse
python3 execution/memory_manager.py cache-store \
  --query "how to set up auth?" \
  --response "Use JWT with 24h expiry, refresh tokens in httpOnly cookies"

# Re-query: instant cache hit (score 1.000, zero re-computation)
python3 execution/memory_manager.py auto \
  --query "how to set up auth?"
# → {"source": "cache", "cache_hit": true, "tokens_saved_estimate": 12}
```
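The cache-hit behavior above can be sketched as a nearest-neighbor lookup over query embeddings. This is a simplified illustration, not the actual memory_manager.py logic; the 0.95 threshold and the tiny two-dimensional vectors are assumptions:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def cache_lookup(query_vec: list[float], cache: list[dict],
                 threshold: float = 0.95) -> dict:
    """Return a cached response when the nearest cached query clears the threshold."""
    best = max(cache, key=lambda e: cosine(query_vec, e["vec"]), default=None)
    if best and cosine(query_vec, best["vec"]) >= threshold:
        return {"source": "cache", "cache_hit": True, "response": best["response"]}
    return {"source": "memory", "cache_hit": False}

cache = [{"vec": [1.0, 0.0], "response": "Use JWT with 24h expiry"}]
print(cache_lookup([1.0, 0.0], cache)["cache_hit"])   # True  (score 1.000)
print(cache_lookup([0.0, 1.0], cache)["cache_hit"])   # False (unrelated query)
```

An exact repeat scores 1.000 and skips the LLM call entirely, which is where the tokens_saved_estimate in the output above comes from.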
The framework automatically detects your AI coding environment and activates the best available features.
Skills are installed to the canonical skills/ directory and symlinked to each platform's expected path:
| Platform | Skills Path | Instruction File | Orchestration Strategy |
|---|---|---|---|
| Claude Code | .claude/skills/ | CLAUDE.md | Agent Teams (parallel) or Subagents |
| Gemini CLI | .gemini/skills/ | GEMINI.md | Sequential personas via @agent |
| Codex CLI | .codex/skills/ | AGENTS.md | Sequential via prompts |
| Antigravity IDE | .agent/skills/ | AGENTS.md | Full agentic orchestration |
| Cursor | .cursor/skills/ | AGENTS.md | Chat-based via @skill |
| GitHub Copilot | N/A (paste) | COPILOT.md | Manual paste into context |
| OpenCode | .agent/skills/ | OPENCODE.md | Sequential personas via @agent |
| AdaL CLI | .adal/skills/ | AGENTS.md | Auto-load on demand |
| Kiro (AWS) | .kiro/skills/ | .kiro/steering/agents.md | Full agentic orchestration |
Run /setup to auto-detect and configure your platform, or use the setup script directly:
```bash
# Interactive (one Y/n question)
python3 skills/plugin-discovery/scripts/platform_setup.py --project-dir .

# Auto-apply everything
python3 skills/plugin-discovery/scripts/platform_setup.py --project-dir . --auto

# Preview without changes
python3 skills/plugin-discovery/scripts/platform_setup.py --project-dir . --dry-run
```
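Conceptually, platform detection boils down to probing for each platform's marker directory. A minimal sketch under that assumption (the marker list and priority order here are illustrative, not the wizard's actual logic):

```python
from pathlib import Path

# Illustrative subset of the platforms the setup wizard supports.
PLATFORM_MARKERS = [
    (".claude", "Claude Code"),
    (".gemini", "Gemini CLI"),
    (".codex", "Codex CLI"),
    (".cursor", "Cursor"),
    (".kiro", "Kiro (AWS)"),
]

def detect_platform(project_dir: str) -> str:
    """Return the first platform whose config directory exists in the project."""
    root = Path(project_dir)
    for marker, name in PLATFORM_MARKERS:
        if (root / marker).is_dir():
            return name
    return "unknown"
```

The real wizard also scans the project stack and writes the instruction-file symlinks shown below, but the directory probe is the core of auto-detection.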
```
your-project/
├── AGENTS.md                   # Master instruction file
├── GEMINI.md → AGENTS.md       # Platform symlinks
├── CLAUDE.md → AGENTS.md
├── OPENCODE.md → AGENTS.md
├── COPILOT.md → AGENTS.md
├── skills/                     # Up to 1,191 skills (depends on pack)
│   ├── webcrawler/             # Documentation harvesting
│   ├── qdrant-memory/          # Semantic caching & memory
│   └── ...                     # 877 more skills in full pack
├── .claude/skills → skills/    # Platform-specific symlinks
├── .gemini/skills → skills/
├── .codex/skills → skills/
├── .cursor/skills → skills/
├── .adal/skills → skills/
├── directives/                 # SOPs in Markdown
├── execution/                  # Deterministic Python scripts
│   ├── session_boot.py         # Session startup (Qdrant + Ollama check)
│   └── memory_manager.py       # Store/retrieve/cache operations
├── skill-creator/              # Tools to create new skills
├── .agent/                     # (medium/full) Agents, workflows, rules
└── workflows/                  # /setup, /deploy, /test, /debug, etc.
```
The system operates on three layers:
```
┌───────────────────────────────────────────────────────────┐
│  Layer 1: DIRECTIVES (Intent)                             │
│  └─ SOPs written in Markdown (directives/)                │
├───────────────────────────────────────────────────────────┤
│  Layer 2: ORCHESTRATION (Agent)                           │
│  ├─ LLM reads directive, decides which tool to call       │
│  └─ Platform-adaptive: Teams, Subagents, or Personas      │
├───────────────────────────────────────────────────────────┤
│  Layer 3: EXECUTION (Code)                                │
│  └─ Pure Python scripts (execution/) do the actual work   │
└───────────────────────────────────────────────────────────┘
```
Why? LLMs are probabilistic: 90% accuracy per step compounds to only 59% success over 5 steps. By pushing complexity into deterministic scripts, we achieve reliable execution.
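The arithmetic behind that claim, as a one-liner you can check:

```python
def chain_success_rate(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step of a sequential agent chain succeeds."""
    return per_step_accuracy ** steps

print(round(chain_success_rate(0.90, 5), 2))  # 0.59
```

Moving a step into a deterministic script pushes its per-step accuracy toward 1.0, which is the whole argument for Layer 3.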
The framework supports fully distributed agent deployments where multiple agents across different machines share context, authenticate writes, and receive real-time notifications β all through the shared Qdrant instance.
Set MEMORY_MODE in .env to choose your tier. All modes are backward-compatible; upgrade anytime without data migration.
| Mode | Use Case | Infrastructure | Key Feature |
|---|---|---|---|
| Solo | Single developer, one agent | Ollama + Qdrant | Full hybrid search, semantic cache |
| Team | Multiple agents sharing context | Same as Solo | Developer isolation + shared memories (--shared) |
| Pro | Enterprise / high-trust | Same + optional Aries | Signed writes, hash anchoring, access control, audit trail |
```bash
# Solo: just works
MEMORY_MODE=solo python3 execution/session_boot.py --auto-fix

# Team: agents share context, each has private + shared memories
MEMORY_MODE=team python3 execution/memory_manager.py store \
  --content "Use Redis for session cache" --type decision --project myapp --shared

# Pro: signed writes with tamper detection and access control
MEMORY_MODE=pro python3 execution/blockchain_auth.py init
python3 execution/blockchain_auth.py register --entity-type developer --entity-id you@co.com
python3 execution/blockchain_auth.py grant --entity-id you@co.com --project myapp --permissions read,write
python3 execution/memory_manager.py store --content "Decision" --type decision --project myapp --auth
# → {"status": "stored", "signature": "...", "blockchain_anchor": {"status": "anchored"}, "event": {"status": "published"}}
```
Pro mode adds cryptographic verification to every write: each write is signed with HMAC-SHA256 and hash-anchored in the agent_auth collection for tamper detection. All auth data is stored in the shared Qdrant instance; no separate database needed. See docs/blockchain-auth.md.
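The write-signing scheme can be sketched with the standard library alone. The payload shape and key handling below are illustrative; the real protocol (including hash anchoring) is documented in docs/blockchain-auth.md:

```python
import hashlib
import hmac
import json

SECRET = b"per-agent-secret"  # illustrative; real deployments manage keys properly

def sign_write(payload: dict, key: bytes = SECRET) -> dict:
    """Sign the canonical JSON form of a write so tampering is detectable."""
    body = json.dumps(payload, sort_keys=True).encode()
    signed = dict(payload)
    signed["signature"] = hmac.new(key, body, hashlib.sha256).hexdigest()
    return signed

def verify_write(record: dict, key: bytes = SECRET) -> bool:
    """Recompute the signature over everything except the signature field."""
    record = dict(record)
    sig = record.pop("signature")
    body = json.dumps(record, sort_keys=True).encode()
    return hmac.compare_digest(sig, hmac.new(key, body, hashlib.sha256).hexdigest())

record = sign_write({"content": "Decision", "project": "myapp"})
print(verify_write(record))                           # True
print(verify_write({**record, "content": "Edited"}))  # False
```

Because the signature covers the canonical JSON, any edit to a stored record after the fact fails verification on read.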
Optional add-on for team/pro modes. Without Pulsar, agents poll Qdrant on each query (~10ms). With Pulsar, events are pushed instantly.
```bash
# Start Pulsar (single container, ~256MB heap)
docker compose -f docker-compose.pulsar.yml up -d
pip install pulsar-client

# Events auto-publish on memory store
python3 execution/memory_manager.py store \
  --content "Switched to PostgreSQL" --type decision --project myapp
# → "event": {"status": "published", "topic": "persistent://agi/memory/myapp"}
```
If Pulsar is down, events are silently dropped; Qdrant stores always succeed. See docs/agent-events.md.
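The graceful-degradation contract can be sketched as a best-effort publish. The wrapper below is illustrative (the topic naming follows the output shown above); the pulsar-client calls are the library's real Client/create_producer/send API:

```python
def publish_memory_event(project: str, payload: bytes) -> dict:
    """Best-effort publish: a memory store must never fail because Pulsar is
    down, so any broker or import error is reported as skipped, not raised."""
    topic = f"persistent://agi/memory/{project}"
    try:
        import pulsar  # pip install pulsar-client
        client = pulsar.Client("pulsar://localhost:6650")
        try:
            client.create_producer(topic).send(payload)
            return {"status": "published", "topic": topic}
        finally:
            client.close()
    except Exception as exc:  # broker unreachable, client not installed, etc.
        return {"status": "skipped", "reason": str(exc)}
```

With a broker running this returns the published status shown above; without one, the Qdrant store proceeds and the event is simply dropped.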
```
┌─ Machine 1 ──────────────────┐     ┌─ Machine 2 ──────────────────┐
│ Agent A (Claude)             │     │ Agent B (Gemini)             │
│ ├─ memory_manager.py         │     │ ├─ memory_manager.py         │
│ ├─ Qdrant (shared) ◄─────────┼─────┼──► Qdrant (shared)           │
│ ├─ BM25 (auto-synced) ◄──────┼─────┼──► BM25 (auto-synced)        │
│ └─ Pulsar events ◄───────────┼─────┼──► Pulsar events             │
└──────────────────────────────┘     └──────────────────────────────┘
```
Every component is sourced from the shared Qdrant:

- Memories and cache: agent_memory + semantic_cache collections
- Auth data: agent_auth (pro mode)
- BM25 index: auto-synced on session_boot

For full details: docs/memory-modes.md · docs/blockchain-auth.md · docs/agent-events.md
Dual-engine retrieval: Qdrant vector similarity for semantic concepts + SQLite FTS5 BM25 for exact keyword matching. Automatically merges results with configurable weights.
| Scenario | Without Memory | With Memory | Savings |
|---|---|---|---|
| Repeated question | ~2000 tokens | 0 tokens | 100% |
| Similar architecture | ~5000 tokens | ~500 tokens | 90% |
| Past error resolution | ~3000 tokens | ~300 tokens | 90% |
| Exact ID/code lookup | ~3000 tokens | ~200 tokens | 93% |
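The weighted merge itself is simple; this sketch assumes scores are already normalized to 0..1 and uses 0.7/0.3 weights as placeholders for the framework's configurable defaults:

```python
def merge_scores(vector_hits: dict, bm25_hits: dict,
                 w_vector: float = 0.7, w_keyword: float = 0.3) -> list:
    """Merge two ranked result sets (doc id -> score) into one weighted ranking."""
    merged: dict = {}
    for doc_id, score in vector_hits.items():
        merged[doc_id] = w_vector * score
    for doc_id, score in bm25_hits.items():
        merged[doc_id] = merged.get(doc_id, 0.0) + w_keyword * score
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

vector_hits = {"doc-a": 0.92, "doc-b": 0.40}  # semantic similarity
bm25_hits = {"doc-b": 1.00, "doc-c": 0.65}    # exact keyword match
print([doc for doc, _ in merge_scores(vector_hits, bm25_hits)])
# ['doc-a', 'doc-b', 'doc-c']
```

A document that matches both engines (doc-b here) gets credit from each, which is how an exact ID lookup still benefits from surrounding semantic context.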
Setup (requires Qdrant + Ollama):
```bash
# Start Qdrant
docker run -d -p 6333:6333 -v qdrant_storage:/qdrant/storage qdrant/qdrant

# Start Ollama + pull embedding model
ollama serve &
ollama pull nomic-embed-text

# Boot memory system (auto-creates collections)
python3 execution/session_boot.py --auto-fix
```
Agents automatically run session_boot.py at session start (first instruction in AGENTS.md). Memory operations:
```bash
# Auto-query (check cache + retrieve context)
python3 execution/memory_manager.py auto --query "your task summary"

# Store a decision (auto-indexes into BM25)
python3 execution/memory_manager.py store --content "what was decided" --type decision

# Health check (includes BM25 index status)
python3 execution/memory_manager.py health

# Rebuild BM25 index from existing Qdrant data
python3 execution/memory_manager.py bm25-sync
```
Hybrid search modes (via hybrid_search.py):
```bash
# True hybrid (default): vector + BM25 merged
python3 skills/qdrant-memory/scripts/hybrid_search.py --query "ImagePullBackOff error" --mode hybrid

# Vector only (pure semantic)
python3 skills/qdrant-memory/scripts/hybrid_search.py --query "database architecture" --mode vector

# Keyword only (exact BM25 match)
python3 skills/qdrant-memory/scripts/hybrid_search.py --query "sg-018f20ea63e82eeb5" --mode keyword
```
The npx init command automatically creates a .venv and installs all dependencies. Just activate it:
```bash
source .venv/bin/activate   # macOS/Linux
# .venv\Scripts\activate    # Windows
```
If you need to reinstall or update dependencies:
```bash
.venv/bin/pip install -r requirements.txt
```
```bash
# Install the full skill pack
npx @techwavedev/agi-agent-kit init --pack=full
# To install globally instead of per-project:
npx @techwavedev/agi-agent-kit init --pack=full --global

# Configure your platform
python3 skills/plugin-discovery/scripts/platform_setup.py --project-dir .

# Update to the latest version
npx @techwavedev/agi-agent-kit@latest init --pack=full
# or use the built-in skill:
python3 skills/self-update/scripts/update_kit.py

# Health checks
python3 execution/session_boot.py --auto-fix
python3 execution/system_checkup.py --verbose

# Create a new skill and refresh the catalog
python3 skill-creator/scripts/init_skill.py my-skill --path skills/
python3 skill-creator/scripts/update_catalog.py --skills-dir skills/
```
Use these keywords, commands, and phrases to trigger specific capabilities:
| Command | What It Does |
|---|---|
/setup | Auto-detect platform and configure environment |
/setup-memory | Initialize Qdrant + Ollama memory system |
/create | Start interactive app builder dialogue |
/plan | Create a structured project plan (no code) |
/enhance | Add or update features in existing app |
/debug | Activate systematic debugging mode |
/test | Generate and run tests |
/deploy | Pre-flight checks + deployment |
/orchestrate | Multi-agent coordination for complex tasks |
/brainstorm | Structured brainstorming with multiple options |
/preview | Start/stop local dev server |
/status | Show project progress and status board |
/update | Update AGI Agent Kit to latest version |
/checkup | Verify agents, workflows, skills, and core files |
Agent Mentions (@agent)

| Mention | Specialist | When To Use |
|---|---|---|
@orchestrator | Multi-agent coordinator | Complex multi-domain tasks |
@project-planner | Planning specialist | Roadmaps, task breakdowns, phase planning |
@frontend-specialist | UI/UX architect | Web interfaces, React, Next.js |
@backend-specialist | API/DB engineer | Server-side, databases, APIs |
@mobile-developer | Mobile specialist | iOS, Android, React Native, Flutter |
@security-auditor | Security expert | Vulnerability scanning, audits, hardening |
@debugger | Debug specialist | Complex bug investigation |
@game-developer | Game dev specialist | 2D/3D games, multiplayer, VR/AR |
@devops-engineer | DevOps specialist | CI/CD, containers, cloud infrastructure |
@database-architect | Database specialist | Schema design, migrations, optimization |
@documentation-writer | Docs specialist | Technical writing, API docs, READMEs |
@test-engineer | Testing specialist | Test strategy, automation, coverage |
@qa-automation-engineer | QA specialist | E2E testing, regression, quality gates |
@performance-optimizer | Performance specialist | Profiling, bottlenecks, optimization |
@seo-specialist | SEO specialist | Search optimization, meta tags, rankings |
@penetration-tester | Pen testing specialist | Red team exercises, exploit verification |
@product-manager | Product specialist | Requirements, user stories, prioritization |
@code-archaeologist | Legacy code specialist | Understanding old codebases, migrations |
@explorer-agent | Discovery specialist | Codebase exploration, dependency mapping |
| Category | Trigger Words / Phrases | Skill Activated |
|---|---|---|
| Memory | "don't use cache", "no cache", "skip memory", "fresh" | Memory opt-out |
| Research | "research my docs", "check my notebooks", "deep search", "@notebooklm" | notebooklm-rag |
| Documentation | "update docs", "regenerate catalog", "sync documentation" | documentation |
| Quality | "lint", "format", "check", "validate", "static analysis" | lint-and-validate |
| Testing | "write tests", "run tests", "TDD", "test coverage" | testing-patterns / tdd-workflow |
| TDD | "test first", "red green refactor", "failing test" | test-driven-development |
| Plan Execution | "execute plan", "run the plan", "batch execution" | executing-plans |
| Verification | "verify", "prove it works", "evidence", "show me the output" | verification-before-completion |
| Debugging | "debug", "root cause", "investigate", "why is this failing" | systematic-debugging |
| Architecture | "design system", "architecture decision", "ADR", "trade-off" | architecture |
| Security | "security scan", "vulnerability", "audit", "OWASP" | red-team-tactics |
| Performance | "lighthouse", "bundle size", "core web vitals", "profiling" | performance-profiling |
| Design | "design UI", "color scheme", "typography", "layout" | frontend-design |
| Deployment | "deploy", "rollback", "release", "CI/CD" | deployment-procedures |
| API | "REST API", "GraphQL", "tRPC", "API design" | api-patterns |
| Database | "schema design", "migration", "query optimization" | database-design |
| Planning | "plan this", "break down", "task list", "requirements" | plan-writing |
| Brainstorming | "explore options", "what are the approaches", "pros and cons" | brainstorming |
| Code Review | "review this", "code quality", "best practices" | code-review-checklist |
| i18n | "translate", "localization", "RTL", "locale" | i18n-localization |
| AWS | "terraform", "EKS", "Lambda", "S3", "CloudFront" | aws-skills / terraform-skill |
| Infrastructure | "service mesh", "Kubernetes", "Helm" | docker-expert / server-management |
| What You Want | Command / Phrase |
|---|---|
| Boot memory | python3 execution/session_boot.py --auto-fix |
| Check before a task | python3 execution/memory_manager.py auto --query "..." |
| Store a decision | python3 execution/memory_manager.py store --content "..." --type decision |
| Cache a response | python3 execution/memory_manager.py cache-store --query "..." --response "..." |
| Health check | python3 execution/memory_manager.py health |
| Skip cache for this task | Say "fresh", "no cache", or "skip memory" in your prompt |
The Full tier includes 774 community skills adapted from the Antigravity Awesome Skills project (v5.4.0) by @sickn33, distributed under the MIT License.
This collection aggregates skills from 50+ open-source contributors and organizations including Anthropic, Microsoft, Vercel Labs, Supabase, Trail of Bits, Expo, Sentry, Neon, fal.ai, and many more. For the complete attribution ledger, see SOURCES.md.
Each community skill has been adapted for the AGI framework.
If these community skills help you, consider starring the original repo or supporting the author.
| Feature | Status | Description |
|---|---|---|
| Federated Agent Memory | ✅ Shipped | Cross-agent knowledge sharing via shared Qdrant. Multi-tenancy with developer isolation, --shared flag for team visibility. 15/15 tests. (docs) |
| Blockchain Agent Trust & Tenancy | ✅ Shipped | HMAC-SHA256 signed writes, hash anchoring, project access control, audit trail, all via the shared Qdrant agent_auth collection. Optional W3C DID via Hyperledger Aries ACA-Py 1.5.0. 36/36 tests. (docs) |
| Event-Driven Agent Streaming | ✅ Shipped | Apache Pulsar event bus with auto-publish on memory_manager.py store. Project-scoped topics, graceful degradation. 19/19 tests. (docs) |
| Memory Mode Tiers | ✅ Shipped | Solo → Team → Pro progression. Backward-compatible upgrades, no data migration. BM25 auto-synced from shared Qdrant on boot. (docs) |
| MCP Compatibility | ✅ Shipped | Memory + cross-agent coordination exposed as MCP tools via execution/mcp_server.py (13 tools) and skills/qdrant-memory/mcp_server.py (6 tools). Pure chat clients (Claude Desktop) get full memory access. (docs) |
| Platform-Adaptive Orchestration | ✅ Shipped | 10 platforms share one AGENTS.md via symlinks (Claude Code, Gemini CLI, Codex CLI, Cursor, Copilot, OpenCode, AdaL, Antigravity, OpenClaw, Kiro). Each uses its native orchestration strategy automatically. |
| Workflow Engine | ✅ Shipped | execution/workflow_engine.py executes data/workflows.json playbooks as guided multi-skill sequences with progress tracking, skip/abort, and state persistence in .tmp/playbook_state.json. |
| Skill Self-Improvement | ✅ Shipped | Karpathy Loop: run_skill_eval.py (18 binary assertion types) + karpathy_loop.py (autonomous test/improve/commit/reset). Skills include eval/evals.json for objective quality measurement. |
| Control Tower Orchestrator | 🚧 Active | Basic dispatcher for agent registration and heartbeat via Qdrant (control_tower.py). Needs dedicated docs, test coverage, and integration with session_boot. |
| Secrets Management (Vault) | 🔬 Design | HashiCorp Vault integration for secure secret sharing. Agents authenticate via Ed25519 keypair, access tenant-scoped secrets. Zero long-lived credentials. |
This package includes a pre-flight security scanner that checks for private terms before publishing. All templates are sanitized for public use.
If the AGI Agent Kit helps you build better AI-powered workflows, consider supporting the project:
Apache-2.0 © Elton Machado @TechWaveDev
Community skills in the Full tier are licensed under the MIT License. See THIRD-PARTY-LICENSES.md for details.