@pyrex/scud 1.2.1 (npm)

Fast, simple task master for AI-driven development - BMAD-TM workflow automation

SCUD CLI (Rust)

Fast, simple task master for AI-driven development - Rust implementation.

Overview

This is a high-performance Rust rewrite of the SCUD task management system. It replaces the external task-master CLI with a fast, single-binary solution that offers:

  • ⚡ 50x faster startup time (~10ms vs ~500ms)
  • 🎯 42x token reduction (~500 tokens vs ~21k tokens per operation)
  • 📦 Simple distribution - single binary, no dependencies
  • 🔧 Direct LLM integration - no MCP overhead

Architecture

scud (Rust Binary)
├── Core Commands (No AI - Instant)
│   ├── init               # Initialize .taskmaster/
│   ├── tags               # List epic tags
│   ├── use-tag            # Switch active epic
│   ├── list               # List tasks with filters
│   ├── show               # Show task details
│   ├── set-status         # Update task status
│   ├── next               # Find next available task
│   └── stats              # Show epic statistics
│
├── AI Commands (Direct Anthropic API)
│   ├── parse-prd          # Parse PRD markdown into tasks
│   ├── analyze-complexity # Analyze task complexity
│   ├── expand             # Break down complex tasks
│   └── research           # AI-powered research
│
└── Storage (JSON)
    ├── .taskmaster/tasks/tasks.json
    └── .taskmaster/workflow-state.json

Building

Development

cargo build

Release (Optimized)

cargo build --release

Usage

Core Commands

# Initialize SCUD
scud init

# List epic tags
scud tags

# Switch to an epic
scud use-tag epic-1-auth

# List tasks
scud list
scud list --status pending

# Show task details
scud show 3

# Update task status
scud set-status 3 in-progress

# Find next available task
scud next

# Show statistics
scud stats

AI Commands

Requires an API key environment variable for the configured provider (see Provider Configuration).

# Parse PRD into tasks
scud parse-prd docs/epics/auth.md --tag epic-1-auth

# Analyze complexity
scud analyze-complexity                # All tasks
scud analyze-complexity --task 5       # Specific task

# Expand complex tasks
scud expand 7                          # Specific task
scud expand --all                      # All tasks >13 complexity

# Research a topic
scud research "OAuth 2.0 best practices"

Performance Comparison

Operation      | Old (task-master) | New (Rust) | Improvement
Startup        | ~500ms            | ~10ms      | 50x faster
List tasks     | ~100ms            | ~5ms       | 20x faster
Parse PRD      | ~3-5s             | ~2-3s      | ~40% faster
Token overhead | ~21k              | ~500       | 42x reduction

Provider Configuration

SCUD supports multiple LLM providers: xAI (Grok), Anthropic (Claude), OpenAI (GPT), and OpenRouter.

Quick Start

# Initialize with xAI (Grok) - recommended for fast code generation
scud init --provider xai
export XAI_API_KEY=your-key

# Or initialize with Anthropic (Claude)
scud init --provider anthropic
export ANTHROPIC_API_KEY=your-key

# Interactive mode - prompt for provider
scud init

Configuration File

The configuration is stored in .taskmaster/config.toml:

[llm]
provider = "xai"
model = "grok-code-fast-1"
max_tokens = 4096

For complete provider documentation, see PROVIDERS.md.

Supported Providers

Provider   | Environment Variable | Default Model
xAI        | XAI_API_KEY          | grok-code-fast-1
Anthropic  | ANTHROPIC_API_KEY    | claude-sonnet-4-20250514
OpenAI     | OPENAI_API_KEY       | gpt-4-turbo
OpenRouter | OPENROUTER_API_KEY   | anthropic/claude-sonnet-4

Data Models

Task

struct Task {
    id: String,
    title: String,
    description: String,
    status: TaskStatus,         // pending, in-progress, done, etc.
    complexity: u32,            // Fibonacci scale: 1,2,3,5,8,13,21
    priority: Priority,         // high, medium, low
    dependencies: Vec<String>,  // Task IDs this depends on
    details: Option<String>,    // Technical details
    test_strategy: Option<String>,
    complexity_analysis: Option<String>,
    created_at: Option<String>,
    updated_at: Option<String>,
}

Epic

struct Epic {
    name: String,
    tasks: Vec<Task>,
}

Workflow State

struct WorkflowState {
    version: String,
    current_phase: String,      // ideation, planning, etc.
    active_epic: Option<String>,
    phases: HashMap<String, PhaseInfo>,
    history: Vec<Value>,
    completed_epics: Vec<CompletedEpic>,
    last_updated: Option<String>,
}
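
The status strings passed to `scud set-status` map onto TaskStatus. A minimal sketch of that parsing, assuming only the three statuses named above (the real enum in src/models/task.rs has more variants, per the "etc."):

```rust
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum TaskStatus {
    Pending,
    InProgress,
    Done,
}

impl FromStr for TaskStatus {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // Accept the kebab-case strings used on the CLI.
        match s {
            "pending" => Ok(TaskStatus::Pending),
            "in-progress" => Ok(TaskStatus::InProgress),
            "done" => Ok(TaskStatus::Done),
            other => Err(format!("unknown status: {other}")),
        }
    }
}

fn main() {
    assert_eq!("in-progress".parse::<TaskStatus>(), Ok(TaskStatus::InProgress));
    assert!("bogus".parse::<TaskStatus>().is_err());
}
```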

LLM Integration

Direct Anthropic API

  • No MCP server overhead
  • Simple HTTP requests
  • Minimal token usage
  • Fast response times
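
The "simple HTTP requests" above are POSTs to the provider's messages endpoint. A sketch of the request body such a client might build (field names follow the public Anthropic Messages API; the actual src/llm/client.rs is not shown in this README and a real client would use a JSON library rather than string formatting):

```rust
// Build a minimal Anthropic-style messages request body by hand.
fn build_request_body(model: &str, max_tokens: u32, prompt: &str) -> String {
    // Naive escaping for this sketch only.
    let escaped = prompt.replace('\\', "\\\\").replace('"', "\\\"");
    format!(
        r#"{{"model":"{model}","max_tokens":{max_tokens},"messages":[{{"role":"user","content":"{escaped}"}}]}}"#
    )
}

fn main() {
    let body = build_request_body("claude-sonnet-4-20250514", 4096, "Summarize this PRD");
    assert!(body.contains("\"max_tokens\":4096"));
    println!("{body}");
}
```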

Prompt Templates

Located in src/llm/prompts.rs:

  • parse_prd() - Converts markdown to structured tasks
  • analyze_complexity() - Scores task difficulty
  • expand_task() - Breaks down complex tasks
  • research_topic() - AI research assistant
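
A prompt template here is just a function that returns a string. A hypothetical sketch in the style of src/llm/prompts.rs (the real function bodies are not shown in this README, so the wording below is illustrative):

```rust
// Illustrative prompt builder; the real parse_prd() in src/llm/prompts.rs may differ.
fn parse_prd(prd_markdown: &str) -> String {
    format!(
        "Convert the following PRD into a JSON array of tasks with id, title, \
         description, complexity (Fibonacci scale), and dependencies:\n\n{prd_markdown}"
    )
}

fn main() {
    let prompt = parse_prd("# Auth Epic\n- Login flow");
    assert!(prompt.contains("Auth Epic"));
    println!("{prompt}");
}
```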

Integration with SCUD

The Rust CLI integrates seamlessly with the existing SCUD system:

  • bin/scud.js detects and delegates to Rust binary
  • Falls back to debug build if release not available
  • Auto-builds if binary not found
  • All agents and slash commands work unchanged

Development

Project Structure

scud-cli/
├── Cargo.toml
├── src/
│   ├── main.rs              # CLI entry point
│   ├── commands/
│   │   ├── mod.rs
│   │   ├── init.rs          # Core commands
│   │   ├── tags.rs
│   │   ├── ...
│   │   └── ai/              # AI commands
│   │       ├── parse_prd.rs
│   │       ├── analyze_complexity.rs
│   │       ├── expand.rs
│   │       └── research.rs
│   ├── models/
│   │   ├── task.rs
│   │   ├── epic.rs
│   │   └── workflow.rs
│   ├── storage/
│   │   └── mod.rs           # JSON I/O
│   └── llm/
│       ├── client.rs        # Anthropic API
│       └── prompts.rs       # Prompt templates

Adding New Commands

  • Add command to Commands enum in main.rs
  • Create handler in src/commands/
  • Add to rustCommands array in bin/scud.js
  • Update help text
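
The first two steps above can be sketched as follows. The real main.rs likely defines Commands via clap's derive API; the Export variant here is purely illustrative:

```rust
// One enum variant per subcommand (step 1), one handler per variant (step 2).
#[derive(Debug)]
enum Commands {
    Init,
    Tags,
    // Hypothetical new subcommand added here.
    Export { format: String },
}

fn run(cmd: Commands) -> String {
    match cmd {
        Commands::Init => "initialized .taskmaster/".to_string(),
        Commands::Tags => "epic-1-auth".to_string(),
        // The handler for a new command would live in src/commands/.
        Commands::Export { format } => format!("exporting tasks as {format}"),
    }
}

fn main() {
    assert_eq!(
        run(Commands::Export { format: "json".into() }),
        "exporting tasks as json"
    );
}
```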

Adding New LLM Prompts

  • Add prompt function to src/llm/prompts.rs
  • Create command handler in src/commands/ai/
  • Use LLMClient::complete() or complete_json()

Testing

# Build and test
cargo build
cargo test

# Test specific command
cargo run -- init
cargo run -- tags
cargo run -- --help

# Test AI commands (requires API key)
export ANTHROPIC_API_KEY=sk-...
cargo run -- parse-prd test.md --tag test

Distribution

As Standalone Binary

cargo build --release
# Binary: target/release/scud
# Copy to /usr/local/bin or similar

As Part of npm Package

The SCUD npm package includes the Rust binary:

  • Pre-built binaries for major platforms
  • Auto-built on first use if needed
  • Seamless integration via bin/scud.js

Future Enhancements

  • Cross-compilation for multiple platforms
  • Pre-built binaries in npm package
  • Configuration file support
  • Additional LLM providers (OpenAI, etc.)
  • Offline mode for core commands
  • Task export/import
  • Custom prompt templates
  • Parallel task execution analysis
  • Integration tests with real API calls

License

MIT

Contributing

See main SCUD repository for contribution guidelines.

Keywords

task-management

Package last updated on 22 Nov 2025