tryappstack-audit (npm)

16-module code audit + AI assistant for JS/TS projects. Runs in your terminal. Supports Claude, GPT-4o, Grok, Gemini, and DeepSeek.

Latest version: 1.1.2 · Maintainers: 1

TryAppStack Audit

Code audit + AI assistant for JS/TS projects — runs in your terminal, no IDE required


Website · npm · GitHub

What it does

tryappstack-audit is a terminal CLI for JS/TS projects with two distinct parts:

Part 1 — Static audit (no AI key required)
Scans your project across 16 modules and produces a scored report. Finds things that linters and TypeScript don't check: unused packages, dead components, missing error handling, security patterns, test coverage gaps, accessibility issues. Saves reports to audits/ so you can track changes over time.

Part 2 — AI assistant (your own API key)
Connects to Claude, GPT-4o, Grok, Gemini, or DeepSeek using your key. Reads the structured audit output rather than raw source files, so token usage stays low. Runs as a REPL where you talk to AI personas (@dev, @architect, @security, @qa, @pm) that write code directly to your files.

# No key needed
npx tryappstack-audit          # scan and score
npx tryappstack-audit insights # hidden issues report
npx tryappstack-audit codedocs # generate project documentation
npx tryappstack-audit context  # create .tas-context.md for any AI chat

# Add your key once, then
npx tryappstack-audit team     # interactive REPL — AI writes code to disk
npx tryappstack-audit bizplan  # business analysis from your codebase
npx tryappstack-audit testplan # test case generation
npx tryappstack-audit brand    # marketing copy and GTM notes
npx tryappstack-audit legal    # GDPR/CCPA checklist + ToS template

Part of the TryAppStack ecosystem:

  • tryappstack — production-ready boilerplates
  • tryappstack-audit — this package

Free vs AI-assisted

Free (no key)                    With your AI key
────────────────────────────     ────────────────────────────────────
✓ 16-module audit + scores       ✓ Everything in the free tier
✓ insights — hidden issues       ✓ AI priority list + production fixes
✓ codedocs — project docs        ✓ Architecture analysis in codedocs
✓ context — .tas-context.md      ✓ Executive summary in context output
✓ legal — static checklist       ✓ Full GDPR/CCPA + ToS template
✓ Pre-push git gate              ✓ bizplan — revenue + roadmap analysis
✓ Score trend + watch mode       ✓ features — feature gap analysis
Free forever                     ✓ estimate — sprint plan + story points
                                 ✓ testplan — test cases per route
                                 ✓ brand — channel strategy + copy
                                 ✓ team — REPL that writes code to files
                                 ~$0.01/run, your key, your data

Supported providers: Claude · GPT-4o · Grok · Gemini · DeepSeek

Your raw source code and env var contents never leave your machine. Only structured metadata (route paths, component names, dependency names, issue categories) is sent to the AI.
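For illustration only — the exact payload shape isn't documented here, and every field name below is an assumption — the structured metadata described above might look something like:

```json
{
  "stack": ["next", "typescript"],
  "routes": ["/api/users", "/api/orders"],
  "components": ["UserTable", "OrderForm"],
  "dependencies": ["express", "zod"],
  "issues": [{ "category": "error-handling", "count": 3 }]
}
```

Note that only names and categories appear; no file contents or values.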

Commands

Free (no AI key)

Command                       What it does                            Output
tas-audit                     Score 16 modules                        audits/*.md
tas-audit insights            Find hidden security and arch issues    audits/insights-*.md
tas-audit codedocs            Generate project documentation          PROJECT_DOCS.md
tas-audit context             Build portable AI context file          .tas-context.md
tas-audit legal               Static compliance checklist             audits/legal-*.md
tas-audit init                Create config, hooks, audits dir
tas-audit hook                Install pre-push git gate
tas-audit fix                 Auto-fix barrel exports and configs
tas-audit doctor              Check system dependencies
tas-audit trend               Plot score history
tas-audit watch               Re-audit on file save
tas-audit badge               Generate Shields.io badge
tas-audit compare a.md b.md   Diff two audit reports

With AI key

Run tas-audit ai-setup once to save your provider and key to ~/.tryappstack/config.

Command                      What it does
tas-audit team               Interactive REPL — AI writes code to your files
tas-audit bizplan            Revenue analysis, market position, 90-day roadmap
tas-audit features           Feature gaps, competitor notes, priority list
tas-audit estimate           Sprint plan, story points, cost range, risk register
tas-audit testplan           Test cases per route/component, E2E scenarios, CI config
tas-audit brand              ICP, positioning, GTM notes, copy by channel
tas-audit legal              GDPR/CCPA checklist, ToS clauses, Privacy Policy draft
tas-audit insights           Priority fixes, production checklist (AI-enhanced)
tas-audit codedocs           Architecture analysis, data flow, deployment guide
tas-audit context            Compressed context with AI executive summary
tas-audit ai-plan            2-week sprint plan from audit findings
tas-audit ai-chat            Ask questions about your codebase
tas-audit ai-estimate        Tech debt in hours by module
tas-audit ai-review <file>   Deep review with before/after code
tas-audit --ai               Append AI insights to main audit

team — the interactive REPL

tas-audit team is the main AI command. It scans your project once, builds a structured context, and opens a REPL where you pick an AI persona.

tas-audit team

Personas:

  • @dev — writes production code, creates and modifies files
  • @architect — reviews structure, patterns, trade-offs
  • @security — finds vulnerabilities, writes patches
  • @qa — writes test cases and E2E scenarios
  • @pm — defines requirements and acceptance criteria
  • @all — routes your question across all roles

Example:

@dev > create a rate limiting middleware for Express

  @dev  (Senior Developer)

  Code Changes Detected  (1 file)

  CREATE  src/middleware/rateLimit.ts
  + import rateLimit from 'express-rate-limit';
  + export const apiLimiter = rateLimit({
  +   windowMs: 15 * 60 * 1000,
  +   max: 100,
  + });

  Apply these changes? (y/n/skip) y
  ✓ Created: src/middleware/rateLimit.ts

Why the token count is lower than sending raw files: The scanner builds a structured summary of your project — routes, components, dependencies, issue categories, file shapes. That summary is what goes to the AI, not the raw source. A typical project context is 2,000–4,000 tokens rather than 50,000+.
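A quick back-of-envelope check of those figures, assuming the common rule of thumb of roughly 4 characters per token (an assumption for illustration, not a measurement of the tool):

```shell
# Assumption: ~4 characters per token (rough industry rule of thumb).
summary_tokens=3000   # mid-range of the 2,000-4,000 figure above
raw_tokens=50000      # the "50,000+" raw-source figure above
summary_chars=$((summary_tokens * 4))
raw_chars=$((raw_tokens * 4))
reduction=$((100 - summary_tokens * 100 / raw_tokens))
echo "summary: ~${summary_chars} chars, raw source: ~${raw_chars} chars (${reduction}% smaller)"
```

On those assumptions the structured summary is roughly 12 kB of text versus 200 kB+ of raw source, which is where the cost difference comes from.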

Tab autocomplete for slash commands: type / and press Tab to see all commands. Type /re and press Tab to complete to /remember, /retry, or /resume.

Slash commands available inside the REPL:

/help              full command reference
/context           show what context the AI is working from
/remember "fact"   save a fact to tas-memory.md (persists across sessions)
/memory            show tas-memory.md
/star              bookmark the last response
/search <query>    search past responses
/verify            fact-check the last response with the same model
/disagree          ask the AI to argue against its last answer
/confidence        toggle confidence badge on responses
/tag <label>       label this exchange for cost tracking
/cost              show session cost so far
/save-session <n>  save session to disk
/resume <name>     resume a saved session
/dry-run           preview the prompt without sending
/security          show where TAS stores data and check for exposed keys
/init-config       create .tasrc in the project root

Setup

# Install globally (optional — npx works too)
npm install -g tryappstack-audit

# Preview what any command's output looks like before running it
tas-audit show-template audit
tas-audit show-template team
tas-audit show-template bizplan

# Run static audit
tas-audit

# Set up AI (one time)
tas-audit ai-setup
# Choose: Claude / OpenAI / Grok / Gemini / DeepSeek
# Enter your API key
# Key saved to ~/.tryappstack/config (chmod 600, not in any project file)

# Start the team REPL
tas-audit team

Model selection

At ai-setup you choose both provider and model. Pick based on what you need:

Provider   General use        More careful reasoning   Fastest
Claude     claude-sonnet-4    claude-opus-4            claude-haiku-4
OpenAI     gpt-4o             o1-preview               gpt-4o-mini
Grok       grok-3             grok-3                   grok-3-mini
Gemini     gemini-2.0-flash   gemini-1.5-pro           gemini-1.5-flash
DeepSeek   deepseek-chat      deepseek-reasoner        deepseek-chat

Switch at any time by running tas-audit ai-setup again.

Per-project config (.tasrc)

To override model or temperature for one project:

# Inside the team REPL
/init-config

This creates .tasrc in your project root. Edit it:

{
  "provider": "claude",
  "model": "claude-opus-4-20250514",
  "temperature": 0.2,
  "budget": { "daily": 1.00, "monthly": 15.00 },
  "profiles": {
    "fast": { "provider": "gemini", "model": "gemini-2.0-flash" },
    "careful": { "provider": "claude", "model": "claude-opus-4-20250514" }
  }
}

API keys are never written to .tasrc. Switch profiles inside the REPL with /use fast.

context command

tas-audit context

Generates .tas-context.md — a structured summary of your codebase including stack, routes, components, dependencies, and current audit scores. Attach it to any AI chat tool (Claude.ai, ChatGPT, Gemini) to get answers about your project without pasting raw files.

The file includes ready-to-use prompt templates for common tasks:

  • Debug and fix critical issues
  • Add a new feature (stack-aware)
  • Audit the auth or payment flow
  • Write tests for critical paths
  • Refactor for production
  • Generate API documentation

What insights checks

Issues that static analysis tools typically miss, and which are common in code written quickly or with AI assistance:

Category         Checks
Security         Hardcoded secrets, SQL injection via template literals, dangerouslySetInnerHTML without sanitisation, eval()
API patterns     Express without helmet, no rate limiting, no input validation, no CORS config
Error handling   Empty catch {} blocks, async functions without try/catch, swallowed promise rejections
Components       Files over 300 lines, fetch calls directly in components, missing memoisation
Architecture     No service layer, hardcoded localhost URLs, missing .env.example
TypeScript       Widespread any usage

A note on AI output

All AI-generated output in this tool — estimates, plans, legal templates, business analysis — should be reviewed before use.

  • Estimates: AI doesn't know your team's context. Add buffer.
  • Legal: Review with a qualified attorney before publishing.
  • Business plans: Validate assumptions with real data.
  • Test cases: Check correctness before adding to CI.

CLI options

--ai                    Append AI insights to audit
--ai-provider <p>       claude | openai | grok | gemini | deepseek
--strict [N]            Exit 1 if score < N (useful in CI)
--json                  JSON output
--pre-push              Strict + minimal output
--exclude <dirs>        Skip directories
--include <dirs>        Audit only these directories
--verbose               Show all files

Run individual modules

npx tryappstack-audit --loc --security --tests --a11y

Available flags: --loc --unused-packages --dead-code --structure --bundle --deps --complexity --security --performance --best-practices --alternatives --env --git-health --tests --a11y --docs

16 audit modules

Module            What it checks
LOC Health        File sizes against framework-aware thresholds
Unused Packages   Dependencies that are not imported anywhere
Dead Code         Unused components, hooks, and utilities
Structure         Naming conventions, barrel files, duplicates, nesting depth
Dependencies      Lock file presence, version pinning, TypeScript strict mode
Complexity        Hook counts, state, any types, console.log in production
Security          Hardcoded secrets, .env exposure, XSS patterns, eval
Bundle            Heavy dependencies, available lighter alternatives
Performance       Missing memo/lazy, caching patterns, OnPush detection
Best Practices    Error boundaries, input validation patterns
Alternatives      40+ package replacement suggestions
Environment       CI/CD config, Docker presence, README quality
Git Health        Branch count, commit patterns, large tracked files
Test Coverage     Test-to-source ratio, test runner detection, untested critical files
Accessibility     alt attributes, ARIA roles, semantic HTML, skip links
Documentation     README completeness, JSDoc coverage, CHANGELOG, Swagger/OpenAPI

CI/CD

# Fail the pipeline if score drops below 70
- run: npx tryappstack-audit --strict 70

# Parse score from JSON output
- run: npx tryappstack-audit --json | tail -1 | jq '.score'
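If jq isn't available on the CI image, the same gate can be approximated with sed. This is a sketch: the hard-coded JSON stands in for the actual `npx tryappstack-audit --json | tail -1` output, whose exact shape is an assumption here.

```shell
# Stand-in for: json=$(npx tryappstack-audit --json | tail -1)
json='{"score": 82, "modules": 16}'
# Pull the numeric "score" field out without jq
score=$(printf '%s' "$json" | sed -n 's/.*"score":[[:space:]]*\([0-9]*\).*/\1/p')
if [ "$score" -ge 70 ]; then
  echo "gate passed (score $score)"
else
  echo "gate failed (score $score)"
  exit 1
fi
```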

Platform support

Platform              Mode
macOS / Linux         Bash (all 16 modules)
Windows + WSL         Bash via WSL (all 16 modules)
Windows without WSL   JS engine (9 modules)
Docker / CI           Bash (all 16 modules)
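You can check which mode your environment would get with a one-liner; this is an illustrative sketch of the fallback described above, not the tool's actual detection code:

```shell
# Full bash engine when bash is on PATH, otherwise the reduced JS engine
if command -v bash >/dev/null 2>&1; then
  engine="bash (all 16 modules)"
else
  engine="JS (9 modules)"
fi
echo "audit engine: $engine"
```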

Package size

Package                30 KB
Dependencies           5
Audit modules          16
AI providers           5
Commands               15+
Supported frameworks   10+

Contributing

See CONTRIBUTING.md for how to add audit modules, submit fixes, or improve commands.

License

MIT — TryAppStack


Package last updated on 26 Mar 2026
