gsd-skill-creator

Adaptive learning layer for Claude Code — skills, agents, teams, and chipsets

Source: npm | Latest version: 1.49.551 | Weekly downloads: 20 (-91.94%) | Maintainers: 1
GSD Skill Creator

An adaptive learning and coprocessor architecture for Claude Code, built as an extension to GSD (Get Shit Done).


```sh
npx get-shit-done-cc@latest

npx gsd-skill-creator@latest
```

The Problem: Why AI-Assisted Development Breaks Down

AI coding assistants are powerful in short bursts, but they degrade on sustained, complex work. The core issues:

  • Context rot -- As a session fills its context window, the AI loses track of earlier decisions, repeats mistakes, and produces lower-quality output. By the time you're deep into implementation, the assistant has forgotten why you made the architectural choices it's now contradicting.
  • Amnesia between sessions -- Every new session starts from zero. The AI doesn't know what worked last time, what patterns you prefer, or what it should avoid. You re-explain the same things across every conversation.
  • No workflow memory -- Recurring sequences (test, fix, commit, verify) are executed ad-hoc every time. The AI never learns that "after a test failure, you always check the fixture data first" or that "deploy means these seven steps in this order."
  • Scaling complexity -- A single AI session can handle a function or a file. It cannot reliably coordinate a multi-phase feature across dozens of files without losing coherence, skipping steps, or drifting from the plan.

These aren't limitations of the models themselves. They're limitations of how work is structured around them.

How GSD and Skill Creator Solve It Together

GSD is the workflow engine. It solves context rot and scaling complexity by structuring work into phases with atomic execution boundaries. Each phase gets a fresh context window, a detailed plan, and persistent state tracking. The AI executes one well-scoped unit of work at a time, commits atomically, and hands off cleanly to the next phase. Context never rots because it never accumulates beyond what's needed for the current task.

Skill Creator is the learning layer that extends GSD. It doesn't replace any GSD functionality -- it observes how you work within the GSD lifecycle and builds reusable knowledge from your patterns:

  • Observe -- Watches tool sequences, file patterns, corrections, and phase outcomes across sessions
  • Detect -- Identifies recurring workflows using n-gram extraction and DBSCAN clustering when patterns repeat 3+ times
  • Suggest -- Proposes skill creation from detected patterns, always requiring explicit user confirmation
  • Apply -- Loads relevant skills automatically based on context, respecting a 2-5% token budget
  • Learn -- Refines skills from your corrections with bounded guardrails (minimum 3 corrections, 7-day cooldown, maximum 20% change per refinement)
  • Compose -- When skills consistently co-activate (5+ times over 7+ days), composes them into purpose-built agents and multi-agent teams
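The detection step can be illustrated with a rough sketch: a sliding-window n-gram count over the tool log with a repeat threshold. This is an illustrative simplification, not the package's actual implementation (which layers DBSCAN clustering on top), and `detectRepeatedSequences` is a hypothetical helper:

```typescript
// Sketch: count n-grams of tool invocations and flag those seen 3+ times.
// Hypothetical helper; the real detector clusters patterns with DBSCAN on top.
function detectRepeatedSequences(
  toolLog: string[],
  n: number,
  minRepeats = 3
): Map<string, number> {
  const counts = new Map<string, number>();
  for (let i = 0; i + n <= toolLog.length; i++) {
    const gram = toolLog.slice(i, i + n).join(" -> ");
    counts.set(gram, (counts.get(gram) ?? 0) + 1);
  }
  // Keep only sequences that meet the repetition threshold.
  return new Map([...counts].filter(([, c]) => c >= minRepeats));
}

const log = [
  "run-tests", "read-fixture", "edit",
  "run-tests", "read-fixture", "edit",
  "run-tests", "read-fixture", "edit",
  "commit",
];
const patterns = detectRepeatedSequences(log, 3);
// "run-tests -> read-fixture -> edit" occurs 3 times, so it is flagged.
```

In this toy log, only one 3-gram crosses the 3-repeat threshold, which is the point at which Skill Creator would propose a skill.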

The two systems working together solve a fundamentally different problem than either one alone. GSD prevents the AI from degrading during work. Skill Creator prevents the AI from forgetting between work. The result is an AI development environment that maintains quality over long projects and gets meaningfully better the more you use it.

What They Solve Together

| Problem | GSD's Role | Skill Creator's Role |
| --- | --- | --- |
| Context rot | Fresh context per phase, atomic execution | Pre-compiled skill activation eliminates re-explanation |
| Lost decisions | Persistent `.planning/` state artifacts | Session observations capture decision patterns |
| Repeated mistakes | Plan verification against requirements | Bounded learning from corrections refines behavior |
| Scaling complexity | Phase decomposition with dependency graphs | Agent teams coordinate specialized roles |
| Workflow amnesia | Structured lifecycle (plan/execute/verify) | Pattern discovery codifies recurring sequences |
| Cross-session continuity | STATE.md tracks position and blockers | Warm-start briefings restore learned context |

The Coprocessor Architecture

Complex agent systems face the same coordination challenges that early computer designers solved decades ago: multiple specialized processors need to share resources, communicate efficiently, and synchronize their work without a single bottleneck controlling everything.

Skill Creator uses two complementary chipset models: the Amiga chipset for resource coordination within a single agent context, and the Gastown chipset for orchestrating multiple agents across parallel workstreams.

Amiga Chipset: Resource Coordination

The original architecture is modeled after the Amiga's custom chipset -- a system where dedicated coprocessors handled graphics, sound, and I/O in parallel while a lightweight kernel coordinated scheduling and resource allocation. This isn't an analogy for presentation purposes; it's the actual architectural pattern used to coordinate multi-agent teams.

Just as the Amiga distributed work across specialized chips rather than routing everything through the CPU, Skill Creator distributes agent responsibilities across four domain-specific chips:

| Chip | Domain | Real Computer Analog |
| --- | --- | --- |
| Agnus | Context management -- memory allocation, context window budgets, state tracking | Memory controller |
| Denise | Output generation -- code production, documentation, rendering | Graphics processor |
| Paula | I/O operations -- file access, API calls, external tool integration | I/O controller |
| Gary | Glue logic -- routing, lifecycle coordination, inter-chip communication | Bus controller |

Each chip has dedicated budget channels (token budgets with guaranteed minimums), message ports (FIFO queues with reply-based ownership), and a 32-bit signal system for lightweight wake/sleep coordination -- the same primitives that real hardware uses for inter-processor communication.
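As an illustration of the last of those primitives, a 32-bit signal mask can be sketched in a few lines. The `SignalSet` class and its method names are hypothetical, not the package's API:

```typescript
// Sketch of a 32-bit signal mask for lightweight wake/sleep coordination.
// Hypothetical API; the real chips layer ports and budgets on top of this.
class SignalSet {
  private mask = 0;

  // Map a signal number (0-31) to its mask bit. A real allocator would
  // also track which bits are already taken.
  allocate(bit: number): number {
    if (bit < 0 || bit > 31) throw new RangeError("signal bit must be 0-31");
    return 1 << bit;
  }

  // The waker sets one or more bits.
  signal(bits: number): void {
    this.mask |= bits;
  }

  // The sleeper receives and consumes any pending bits matching waitMask.
  wait(waitMask: number): number {
    const received = this.mask & waitMask;
    this.mask &= ~received;
    return received >>> 0;
  }
}

const sigs = new SignalSet();
const PHASE_DONE = sigs.allocate(0);
const TESTS_PASS = sigs.allocate(1);
sigs.signal(PHASE_DONE | TESTS_PASS);
const got = sigs.wait(PHASE_DONE); // receives only the bit we waited on
```

The appeal of this primitive is cost: checking or setting a signal is a single bitwise operation, so chips can wake and sleep without exchanging full messages.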

Pipeline Lists: Declarative Workflow Programs

Pipeline Lists are declarative workflow programs -- sequences of WAIT, MOVE, and SKIP instructions synchronized to GSD lifecycle events. Inspired by the Amiga's Copper coprocessor (which executed display lists synced to the video beam), Pipeline Lists bring the same concept to workflow automation:

```yaml
# A Pipeline List synchronized to GSD lifecycle events
- wait: phase-planned        # Block until planning completes
- move:
    target: skill
    name: test-generator
    mode: sprite              # Lightweight activation (~200 tokens)
- wait: tests-passing        # Block until tests pass
- skip:
    condition: "!exists:.planning/phases/*/SUMMARY.md"
- move:
    target: script
    name: generate-docs
    mode: offload             # Execute outside context window
```

Pipeline Lists pre-compile during planning and execute automatically during phase transitions. The AI doesn't decide what skills to load at runtime -- the workflow program has already determined the optimal activation sequence based on observed patterns. This eliminates the overhead of skill selection from the critical path.
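To make the execution model concrete, here is a hedged sketch of an interpreter for the three instruction types. The `runPipeline` function and its event-stream model are illustrative assumptions, not the package's executor:

```typescript
// Sketch: a minimal Pipeline List interpreter. Instruction names follow the
// README (wait/move/skip); the event-stream executor here is a simplification.
type Instruction =
  | { op: "wait"; event: string }
  | { op: "move"; target: string; name: string }
  | { op: "skip"; condition: () => boolean }; // skip next instruction if true

function runPipeline(program: Instruction[], events: string[]): string[] {
  const activated: string[] = [];
  let pc = 0;       // program counter
  let eventIdx = 0; // position in the lifecycle event stream
  while (pc < program.length) {
    const ins = program[pc];
    if (ins.op === "wait") {
      // Block until the named lifecycle event arrives in the stream.
      while (eventIdx < events.length && events[eventIdx] !== ins.event) eventIdx++;
      if (eventIdx === events.length) break; // event never fired
      eventIdx++;
      pc++;
    } else if (ins.op === "move") {
      activated.push(`${ins.target}:${ins.name}`);
      pc++;
    } else {
      pc += ins.condition() ? 2 : 1; // skip the next instruction when true
    }
  }
  return activated;
}

const trace = runPipeline(
  [
    { op: "wait", event: "phase-planned" },
    { op: "move", target: "skill", name: "test-generator" },
    { op: "wait", event: "tests-passing" },
    { op: "skip", condition: () => false },
    { op: "move", target: "script", name: "generate-docs" },
  ],
  ["phase-planned", "tests-passing"]
);
// trace: ["skill:test-generator", "script:generate-docs"]
```

Because the program is data, it can be compiled during planning and replayed at phase transitions without any runtime decision-making, which is the property the section above emphasizes.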

The Offload Engine: Bulk Operations Outside the Context Window

The Offload engine handles deterministic operations that don't need AI reasoning -- running test suites, generating boilerplate, formatting code, computing metrics. These operations are "promoted" from skill metadata to standalone scripts and executed as child processes, freeing the context window for work that actually requires intelligence.

The Exec Kernel

A prioritized round-robin scheduler coordinates the chips, managing 18 typed message protocols for inter-team communication and per-team token budgets with burst mode for temporary overallocation. Teams at different priority levels (phase-critical at 60%, workflow at 15%, background at 10%, pattern detection at 10%) share resources without starvation.
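The budget split above can be sketched as a simple allocator. The percentages mirror the README; `allocateBudgets` and its treatment of the unallocated remainder as burst headroom are illustrative assumptions:

```typescript
// Sketch: per-team token budgets with guaranteed shares, matching the
// priority split described above (60/15/10/10, leaving ~5% as burst headroom).
// Simplified model; the real kernel's burst mode is more involved.
interface Team { name: string; share: number } // share as a fraction of total

function allocateBudgets(totalTokens: number, teams: Team[]): Map<string, number> {
  const budgets = new Map<string, number>();
  let used = 0;
  for (const t of teams) {
    const grant = Math.floor(totalTokens * t.share); // guaranteed minimum
    budgets.set(t.name, grant);
    used += grant;
  }
  // The unallocated remainder serves as headroom for temporary overallocation.
  budgets.set("burst", totalTokens - used);
  return budgets;
}

const budgets = allocateBudgets(100_000, [
  { name: "phase-critical", share: 0.60 },
  { name: "workflow", share: 0.15 },
  { name: "background", share: 0.10 },
  { name: "pattern-detection", share: 0.10 },
]);
// phase-critical gets 60,000 tokens; 5,000 remain as burst headroom.
```

Guaranteed minimums are what prevent starvation: even the lowest-priority team always receives its floor before any burst capacity is handed out.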

Why This Matters

The chipset architecture means that building a complex agent system -- one with specialized roles, coordinated communication, resource budgets, and synchronized execution -- uses the same proven patterns that make high-performance computers work. You don't architect message passing from scratch. You define chips, wire up ports, write Pipeline Lists, and the kernel handles scheduling. The system learns which Pipeline Lists work well from execution feedback, refining activation sequences over time.

Gastown Chipset: Multi-Agent Orchestration

The Gastown chipset, absorbed from steveyegge/gastown, provides a coordination model for multi-agent workspaces. Where the Amiga chipset manages resources within a single agent context, Gastown manages the orchestration of multiple agents working in parallel across a shared codebase.

Multi-rig architecture -- A Gastown "rig" is a complete autonomous coordination instance. Each rig has its own Mayor (the per-rig coordinator), Polecats (workers), Witness (observer), and Refinery (merge queue). A user's project can run multiple rigs in parallel — different teams working on different subsystems, each with their own mayor coordinating independently. Skill Creator's agent chipset can wire up and manage multiple rigs from a unified mission control dashboard, hiding the wiring complexity between the dashboard's control surface and the distributed rigs it manages.

Agent topology -- Four specialized roles coordinate work within each rig:

| Role | Function |
| --- | --- |
| Mayor | Per-rig coordinator -- plans work, assigns tasks, resolves conflicts within one rig |
| Polecat | Executor -- ephemeral autonomous worker, one per task, within one rig |
| Witness | Observer -- monitors agent health within a rig, validates outputs |
| Refinery | Merge queue -- sequential deterministic integration of parallel work per rig |

Communication channels -- Three messaging primitives cover different coordination needs:

| Channel | Type | Use |
| --- | --- | --- |
| Mail | Async durable | Task assignments, status reports, results that persist |
| Nudge | Sync immediate | Real-time coordination signals between active agents |
| Hook | Pull-based | Work assignment via GUPP (Generalized Universal Pull Protocol) |
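One way to model the three primitives in TypeScript is a discriminated union on the channel name. The type and field names here are illustrative assumptions, not Gastown's actual message schema:

```typescript
// Sketch: the three channel primitives as a discriminated union.
// Channel names mirror the table; the fields are illustrative assumptions.
type Message =
  | { channel: "mail"; from: string; to: string; body: string; persistent: true }
  | { channel: "nudge"; from: string; to: string; signal: string }
  | { channel: "hook"; queue: string; pulledBy?: string }; // GUPP-style pull

function describe(msg: Message): string {
  // The switch narrows each arm to its own field set.
  switch (msg.channel) {
    case "mail":  return `durable mail ${msg.from} -> ${msg.to}`;
    case "nudge": return `immediate nudge ${msg.from} -> ${msg.to}`;
    case "hook":  return `hook on queue ${msg.queue}`;
  }
}

const m = describe({
  channel: "mail",
  from: "mayor",
  to: "polecat-1",
  body: "task",
  persistent: true,
});
// "durable mail mayor -> polecat-1"
```

A discriminated union is a natural fit here because each channel carries different metadata, and the compiler enforces that handlers cover all three.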

Dispatch pipeline -- Work flows through two 7-stage pipelines:

  • Sling dispatches instructions from planning through validation, routing, and agent assignment
  • Done retires completed work through verification, merge, and state updates

Gastown is not a runtime dependency. Its coordination patterns are encoded as a declarative chipset (skills, types, and configuration) that teaches Claude Code how to orchestrate multiple agents. See the Gastown Integration Guide for the full 10-document reference.

Why Two Chipsets

The Amiga chipset handles the internal problem: how a single agent manages its context window, token budget, I/O, and output generation efficiently. The Gastown chipset handles the external problem: how multiple agents coordinate, communicate, and merge their work without conflicts. Together they provide full-stack orchestration from resource allocation to multi-agent dispatch.

Skills, agents, and teams generated by Skill Creator follow the official Claude Code and Agent Skills specifications. They work natively in Claude Code and export to OpenAI Codex CLI, Cursor, GitHub Copilot, and Gemini CLI.

GSD-OS Desktop Application

GSD-OS is a native desktop application that wraps the entire skill-creator ecosystem in an Amiga Workbench-inspired environment. Built with Tauri v2 (Rust backend + webview frontend), it provides a WebGL 8-bit graphics engine with CRT post-processing, an embedded terminal with native PTY, and a live planning dashboard -- all running locally with no cloud dependencies.

Features

  • WebGL CRT shader engine -- Scanlines, barrel distortion, phosphor glow, chromatic aberration, and vignette with per-effect intensity controls
  • 32-color indexed palette -- Five retro-computing presets (Amiga 1.3/2.0/3.1, C64, custom) with OKLCH-based palette generation and copper list raster effects
  • Native PTY terminal -- Rust-backed pseudo-terminal with xterm.js emulator, watermark-based flow control, and tmux session binding
  • Window manager -- Amiga-style depth cycling, drag/resize, custom chrome with frameless Tauri window
  • Desktop shell -- Taskbar with process indicators, pixel-art icons, system menu, keyboard shortcuts (Alt+Tab, F10, Ctrl+Q)
  • Live dashboard -- All 6 planning doc pages render inside GSD-OS windows with file-watcher-driven refresh
  • First-boot calibration -- Three-screen wizard for color picking, CRT tuning, and theme selection
  • Boot sequence -- Amiga chipset initialization animation (Agnus, Denise, Paula, Gary), skippable
  • Accessibility mode -- Auto-activates on prefers-reduced-motion or prefers-contrast, disabling CRT effects and applying a high-contrast palette

Prerequisites

Node.js 18+ and Rust (via rustup) are required.

Linux -- Install Tauri system dependencies:

```sh
# Debian/Ubuntu
sudo apt install libwebkit2gtk-4.1-dev build-essential curl wget file \
  libxdo-dev libssl-dev libayatana-appindicator3-dev librsvg2-dev

# Fedora
sudo dnf install webkit2gtk4.1-devel openssl-devel curl wget file \
  libxdo-devel libappindicator-gtk3-devel librsvg2-devel

# Arch
sudo pacman -S webkit2gtk-4.1 base-devel curl wget file openssl \
  libxdo libappindicator-gtk3 librsvg
```

macOS -- Install Xcode Command Line Tools:

```sh
xcode-select --install
```

See the Tauri v2 prerequisites guide for the full list.

Install

```sh
# Clone the repository
git clone <repository-url>
cd gsd-skill-creator

# Install root dependencies (skill-creator library)
npm install

# Install desktop frontend dependencies
cd desktop && npm install && cd ..
```

Run (Development)

```sh
# Launch GSD-OS in development mode (hot-reload for both Rust and webview)
npm run desktop:dev
```

This starts the Vite dev server on port 1420 and opens the Tauri window. Changes to desktop/src/ hot-reload instantly; changes to src-tauri/src/ trigger a Rust recompile.

Build (Production)

```sh
# Build a release binary
npm run desktop:build
```

Produces platform-specific packages in src-tauri/target/release/bundle/:

  • Linux: .deb and .AppImage
  • macOS: .dmg

Test

```sh
# Run desktop test suite
npm run desktop:test

# Run skill-creator library tests (24,700+ tests)
npm test
```

Technology Stack

| Component | Technology | Version |
| --- | --- | --- |
| Desktop framework | Tauri | v2.10.x |
| Frontend bundler | Vite | v6.x |
| Terminal emulator | xterm.js | v5.5.x |
| PTY management | portable-pty | 0.9.0 |
| File watching | notify | 8.2.0 |
| Color science | culori | v4.0 |
| Schema validation | Zod | v4.x |

Architecture

```
src/           TypeScript library (skill-creator CLI, dashboard, AMIGA, AGC, brainstorm)
src-tauri/     Rust backend (PTY, file watcher, tmux, Claude sessions, MCP host, IPC)
desktop/       Vite webview frontend (WebGL engine, desktop shell, terminal)
infra/         Bash infrastructure (PXE, VM provisioning, Minecraft, runbooks)
```

Strict module boundaries: src/ never imports @tauri-apps/api; desktop/ never imports Node.js modules. All communication between Rust and the webview goes through Tauri IPC commands, events, and channels. infra/ is self-contained bash scripts with no TypeScript dependencies.

Quick Start

Skill Creator CLI

```sh
# Clone and install
git clone <repository-url>
cd gsd-skill-creator
npm install && npm run build

# Link globally (optional)
npm link

# Verify
skill-creator help

# Create your first skill
skill-creator create

# See what patterns have been detected
skill-creator suggest

# Check active skills and token budget
skill-creator status
```

See INSTALL.md for detailed installation instructions.

Documentation

All documentation lives in docs/.

| Document | Description |
| --- | --- |
| Getting Started | Installation, quickstart, and tutorials |
| Features | Full capability table and version history |
| Core Concepts | Skills, scopes, observations, and agents |
| How It Works | The 6-step observe-detect-suggest-apply-learn-compose loop |
| CLI Reference | Complete command documentation |
| API Reference | Programmatic usage for library consumers |
| Skill Format | Frontmatter fields, descriptions, official vs extension |
| Official Format | Claude Code official skill/agent specification |
| Token Budget | Budget management and priority tiers |
| Bounded Learning | Refinement guardrails and drift tracking |
| Agent Generation | Auto-composed agents from skill clusters |
| Agent Teams | Multi-agent coordination and topologies |
| Pattern Discovery | Session log scanning and DBSCAN clustering |
| GSD Orchestrator | Intent classification and lifecycle routing |
| Workflows & Roles | Skill workflows, roles, bundles, and events |
| Configuration | Thresholds, retention, and cluster settings |
| File Structure | Project and source code layout |
| Development | Building, testing, and contributing |
| Requirements | All shipped requirements across 65 milestones |
| GSD Teams Guide | Teams vs subagents for GSD workflows |
| Comparison | Skills vs Agents vs Teams |
| Release History | Version index linking to per-release notes (89 versions) |
| Troubleshooting | Common issues and solutions |
| Cartridge Forge | Unified cartridge/chipset format, skill-creator cartridge CLI, migration guide |
| Examples | 34 ready-to-use skills, agents, and teams |

Security

See SECURITY.md for vulnerability reporting, threat model, and security boundaries.

Project Stats

89 milestones shipped (v1.0-v1.50.43) | 550+ phases | 1,320+ plans | ~783K LOC TypeScript, Rust, GLSL, Bash & Python | 24,700+ tests

License

Licensed under the Business Source License 1.1 (BSL 1.1). Converts to GPL 3.0 on 2030-03-11.

Contributing

  • Fork the repository
  • Create a feature branch
  • Make changes with tests
  • Submit a pull request

All contributions should include tests and pass the existing test suite.

Keywords

claude-code

Package last updated on 16 Apr 2026
