Hyntx is a CLI tool that analyzes your Claude Code prompts and helps you become a better prompt engineer through retrospective analysis and actionable feedback.
🧪 BETA: This project is functional but still evolving. Feedback and contributions welcome!
Hyntx reads your Claude Code conversation logs and uses AI to detect common prompt engineering anti-patterns, then turns what it finds into actionable suggestions with concrete examples.
Think of it as a retrospective code review for your prompts.
npm install -g hyntx
npx hyntx
pnpm add -g hyntx
Run Hyntx with a single command:
hyntx
On first run, Hyntx will guide you through an interactive setup to select and configure an AI provider.
That's it! Hyntx will analyze today's prompts and show you improvement suggestions with concrete "Before/After" examples.
# Analyze today's prompts
hyntx
# Analyze yesterday
hyntx --date yesterday
# Analyze a specific date
hyntx --date 2025-01-20
# Analyze a date range
hyntx --from 2025-01-15 --to 2025-01-20
# Filter by project name
hyntx --project my-awesome-app
# Save report to file
hyntx --output report.md
# Preview without sending to AI
hyntx --dry-run
# Check reminder status
hyntx --check-reminder
# Watch mode - real-time analysis
hyntx --watch
# Watch specific project only
hyntx --watch --project my-app
# Analysis modes - control speed vs accuracy trade-off
hyntx --analysis-mode batch # Fast (default): ~300-400ms/prompt
hyntx --analysis-mode individual # Accurate: ~1000-1500ms/prompt
hyntx -m individual # Short form
# Analyze last week for a specific project
hyntx --from 2025-01-15 --to 2025-01-22 --project backend-api
# Generate markdown report for yesterday
hyntx --date yesterday --output yesterday-analysis.md
# Deep analysis with individual mode for critical project
hyntx -m individual --project production-api --date today
# Fast batch analysis across date range
hyntx --from 2025-01-15 --to 2025-01-20 --analysis-mode batch -o report.md
# Watch mode with individual analysis (slower but detailed)
hyntx --watch -m individual --project critical-app
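To make the `--watch` examples above more concrete, here is a sketch of what watch mode does conceptually: remember how far into the JSONL log it has read, and parse only complete, newly appended lines. The function name and the `content` field are illustrative assumptions, not Hyntx's actual internals.

```typescript
// Hypothetical sketch of incremental log tailing for watch mode.
// Hyntx's real watcher monitors log files on disk; here we show the
// offset-tracking idea on an in-memory string.

interface TailState {
  offset: number; // characters of the log already processed
}

function extractNewPrompts(
  log: string,
  state: TailState,
): { prompts: string[]; state: TailState } {
  // Only consume up to the last complete line; a partially written
  // line (no trailing newline yet) is left for the next pass.
  const end = log.lastIndexOf('\n') + 1;
  const fresh = log.slice(state.offset, end);
  const prompts: string[] = [];
  for (const line of fresh.split('\n')) {
    if (!line.trim()) continue;
    const entry = JSON.parse(line);
    // Assumption: user prompts live under a `content` field.
    if (typeof entry.content === 'string') prompts.push(entry.content);
  }
  return { prompts, state: { offset: end } };
}

// First pass sees one entry; the second pass sees only the new one.
const log1 = '{"content":"Fix the bug"}\n';
const first = extractNewPrompts(log1, { offset: 0 });
const log2 = log1 + '{"content":"Add tests"}\n';
const second = extractNewPrompts(log2, first.state);
```

Tracking the offset of the last complete line is what lets a watcher re-read a growing file cheaply without reprocessing old prompts.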
Hyntx offers two analysis modes to balance speed and accuracy based on your needs:
hyntx # Uses batch mode by default
hyntx --analysis-mode batch # Explicit batch mode
hyntx --analysis-mode individual # Use individual mode
hyntx -m individual # Short form
| Mode | Speed/Prompt | Use Case | Accuracy | When to Use |
|---|---|---|---|---|
| Batch | ~300-400ms | Daily analysis, monitoring | Good | Quick feedback, large datasets |
| Individual | ~1-1.5s | Deep analysis, learning | Better | Quality-focused reviews, critical prompts |
Speedup: Batch mode is 3-4x faster than individual mode.
Recommendation: Use batch mode (default) for daily analysis to get fast feedback. Switch to individual mode when accuracy matters more than speed, such as quality-focused reviews of critical prompts.
Performance Note: Numbers based on gemma3:4b on CPU. Actual speed varies by hardware, model size, and prompt complexity.
Detailed Guide: See Analysis Modes Documentation for comprehensive comparison, examples, and decision guidelines.
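The trade-off between the two modes can be pictured as a request-planning question: how many model calls do N prompts cost? The sketch below assumes a batch size of 10; the actual batch size and grouping logic are Hyntx internals not documented here.

```typescript
// Illustrative request planning for the two analysis modes.
// `batchSize = 10` is an assumption for this sketch, not Hyntx's real value.

type Mode = 'batch' | 'individual';

function planRequests(prompts: string[], mode: Mode, batchSize = 10): string[][] {
  if (mode === 'individual') {
    // One model call per prompt: slower, but each prompt gets full attention.
    return prompts.map((p) => [p]);
  }
  // Batch mode: group prompts so several share a single model call.
  const batches: string[][] = [];
  for (let i = 0; i < prompts.length; i += batchSize) {
    batches.push(prompts.slice(i, i + batchSize));
  }
  return batches;
}

const prompts = Array.from({ length: 25 }, (_, i) => `prompt ${i}`);
const batchPlan = planRequests(prompts, 'batch'); // 3 calls (10 + 10 + 5)
const individualPlan = planRequests(prompts, 'individual'); // 25 calls
```

Fewer calls means less per-request overhead, which is where the quoted 3-4x speedup comes from.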
Hyntx allows you to customize which analysis rules are enabled and their severity levels through a .hyntxrc.json file in your project root.
- `vague` - Detects vague requests lacking specificity
- `no-context` - Detects missing background information
- `too-broad` - Detects overly broad requests that should be broken down
- `no-goal` - Detects prompts without a clear outcome
- `imperative` - Detects commands without explanation

For each pattern, you can:

- Set `enabled: false` to skip detection
- Set `severity` to `"low"`, `"medium"`, or `"high"`

Create `.hyntxrc.json` in your project root:
{
"rules": {
"imperative": {
"enabled": false
},
"vague": {
"severity": "high"
},
"no-context": {
"severity": "high"
},
"too-broad": {
"severity": "medium"
}
}
}
Hyntx will warn you about invalid configuration, such as unknown rule names or unsupported severity values.
These warnings appear immediately when the configuration is loaded.
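To make the merge semantics concrete, here is a sketch of how overrides from `.hyntxrc.json` might be applied over built-in defaults, warning on unknown rule ids. The default severities and the merge logic are assumptions; only the rule ids come from the list above.

```typescript
// Hypothetical sketch of merging .hyntxrc.json rule overrides over defaults.
// Default severities below are assumptions, not Hyntx's documented values.

type Severity = 'low' | 'medium' | 'high';

interface RuleConfig {
  enabled: boolean;
  severity: Severity;
}

const DEFAULT_RULES: Record<string, RuleConfig> = {
  vague: { enabled: true, severity: 'medium' },
  'no-context': { enabled: true, severity: 'medium' },
  'too-broad': { enabled: true, severity: 'medium' },
  'no-goal': { enabled: true, severity: 'medium' },
  imperative: { enabled: true, severity: 'low' },
};

function mergeRules(overrides: Record<string, Partial<RuleConfig>>): {
  rules: Record<string, RuleConfig>;
  warnings: string[];
} {
  const rules = structuredClone(DEFAULT_RULES);
  const warnings: string[] = [];
  for (const [id, override] of Object.entries(overrides)) {
    if (!(id in rules)) {
      // Unknown rule ids are reported rather than silently ignored.
      warnings.push(`Unknown rule: ${id}`);
      continue;
    }
    rules[id] = { ...rules[id], ...override };
  }
  return { rules, warnings };
}

// Mirrors the example config above: disable `imperative`, raise `vague`.
const { rules, warnings } = mergeRules({
  imperative: { enabled: false },
  vague: { severity: 'high' },
  typo: { enabled: false }, // hypothetical unknown id -> triggers a warning
});
```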
Hyntx uses environment variables for configuration. The interactive setup can auto-save these to your shell config (~/.zshrc, ~/.bashrc).
Configure one or more providers in priority order. Hyntx will try each provider in order and fall back to the next if unavailable.
# Single provider (Ollama only)
export HYNTX_SERVICES=ollama
export HYNTX_OLLAMA_MODEL=gemma3:4b
# Multi-provider with fallback (tries Ollama first, then Anthropic)
export HYNTX_SERVICES=ollama,anthropic
export HYNTX_OLLAMA_MODEL=gemma3:4b
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here
# Cloud-first with local fallback
export HYNTX_SERVICES=anthropic,ollama
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here
export HYNTX_OLLAMA_MODEL=gemma3:4b
Ollama:
| Variable | Default | Description |
|---|---|---|
| `HYNTX_OLLAMA_MODEL` | `gemma3:4b` | Model to use |
| `HYNTX_OLLAMA_HOST` | `http://localhost:11434` | Ollama server URL |
Anthropic:
| Variable | Default | Description |
|---|---|---|
| `HYNTX_ANTHROPIC_MODEL` | `claude-3-5-haiku-latest` | Model to use |
| `HYNTX_ANTHROPIC_KEY` | - | API key (required) |
Google:
| Variable | Default | Description |
|---|---|---|
| `HYNTX_GOOGLE_MODEL` | `gemini-2.0-flash-exp` | Model to use |
| `HYNTX_GOOGLE_KEY` | - | API key (required) |
# Set reminder frequency (7d, 14d, 30d, or never)
export HYNTX_REMINDER=7d
# Add to ~/.zshrc or ~/.bashrc (or let Hyntx auto-save it)
export HYNTX_SERVICES=ollama,anthropic
export HYNTX_OLLAMA_MODEL=gemma3:4b
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here
export HYNTX_REMINDER=14d
# Optional: Enable periodic reminders
hyntx --check-reminder 2>/dev/null
Then reload your shell:
source ~/.zshrc # or source ~/.bashrc
Ollama runs AI models locally for privacy and cost savings.
Install Ollama: ollama.ai
Pull a model:
ollama pull gemma3:4b
Verify it's running:
ollama list
Run Hyntx (it will auto-configure on first run):
hyntx
Get API key from console.anthropic.com
Run Hyntx and select Anthropic during setup, or set manually:
export HYNTX_SERVICES=anthropic
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here
Get API key from ai.google.dev
Run Hyntx and select Google during setup, or set manually:
export HYNTX_SERVICES=google
export HYNTX_GOOGLE_KEY=your-google-api-key
Configure multiple providers for automatic fallback:
# If Ollama is down, automatically try Anthropic
export HYNTX_SERVICES=ollama,anthropic
export HYNTX_OLLAMA_MODEL=gemma3:4b
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here
When running, Hyntx will show fallback behavior:
⚠️ ollama unavailable, trying anthropic...
✅ anthropic connected
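The fallback behavior shown in that output boils down to an ordered scan over the `HYNTX_SERVICES` list. In the sketch below, `ping` is a hypothetical availability check standing in for Hyntx's real provider probing.

```typescript
// Illustrative fallback loop: try each configured provider in order and
// return the first one that responds. `ping` is a stand-in, not Hyntx's API.

type ProviderName = 'ollama' | 'anthropic' | 'google';

async function firstAvailable(
  services: ProviderName[],
  ping: (name: ProviderName) => Promise<boolean>,
): Promise<ProviderName> {
  for (const name of services) {
    if (await ping(name)) return name;
    console.warn(`${name} unavailable, trying next...`);
  }
  throw new Error('No configured provider is available');
}

// Simulate Ollama being down while Anthropic responds.
const chosen = await firstAvailable(
  ['ollama', 'anthropic'],
  async (name) => name === 'anthropic',
);
```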
📊 Hyntx - 2025-01-20
──────────────────────────────────────────────────
📈 Statistics
Prompts: 15
Projects: my-app, backend-api
Score: 6.5/10
⚠️ Patterns (3)
🔴 Missing Context (60%)
• "Fix the bug in auth"
• "Update the component"
💡 Include specific error messages, framework versions, and file paths
Before:
❌ "Fix the bug in auth"
After:
✅ "Fix authentication bug in src/auth/login.ts where users get
'Invalid token' error. Using Next.js 14.1.0 with next-auth 4.24.5."
🟡 Vague Instructions (40%)
• "Make it better"
• "Improve this"
💡 Define specific success criteria and expected outcomes
Before:
❌ "Make it better"
After:
✅ "Optimize the database query to reduce response time from 500ms
to under 100ms. Focus on adding proper indexes."
──────────────────────────────────────────────────
💎 Top Suggestion
"Add error messages and stack traces to debugging requests for
10x faster resolution."
──────────────────────────────────────────────────
Hyntx can run as a Model Context Protocol (MCP) server, enabling real-time prompt analysis directly within MCP-compatible clients like Claude Code.
Add hyntx to your Claude Code MCP configuration. You have two options:
Configuration visible only to you, stored in ~/.claude.json:
# Add using Claude Code CLI
claude mcp add hyntx
# Or manually edit ~/.claude.json
{
"mcpServers": {
"hyntx": {
"command": "hyntx",
"args": ["--mcp-server"]
}
}
}
Configuration shared with your team via Git, stored in .mcp.json at your project root:
{
"mcpServers": {
"hyntx": {
"command": "hyntx",
"args": ["--mcp-server"]
}
}
}
After adding the configuration, restart your Claude Code session. The hyntx tools will be available in your conversations.
Install Hyntx globally:

npm install -g hyntx

If using Ollama (recommended for privacy):
# Ensure Ollama is running
ollama serve
# Pull a model if needed
ollama pull gemma3:4b
# Set environment variables (add to ~/.zshrc or ~/.bashrc)
export HYNTX_SERVICES=ollama
export HYNTX_OLLAMA_MODEL=gemma3:4b
Hyntx exposes three tools through the MCP interface:
Analyze a prompt to detect anti-patterns, issues, and get improvement suggestions.
Input Schema:
| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | The prompt text to analyze |
| `date` | string | No | Date context in ISO format. Defaults to current date. |
Example Output:
{
"patterns": [
{
"id": "no-context",
"name": "Missing Context",
"severity": "high",
"frequency": "100%",
"suggestion": "Include specific error messages and file paths",
"examples": ["Fix the bug in auth"]
}
],
"stats": {
"promptCount": 1,
"overallScore": 4.5
},
"topSuggestion": "Add error messages and stack traces for faster resolution"
}
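For consumers of this tool's output, the example above maps naturally onto a TypeScript shape like the following. The field names mirror the sample; the severity union and exact typing are assumptions.

```typescript
// Hypothetical TypeScript model of the analyze-prompt output shown above.

interface PatternReport {
  id: string;
  name: string;
  severity: 'low' | 'medium' | 'high';
  frequency: string; // e.g. "100%"
  suggestion: string;
  examples: string[];
}

interface AnalyzePromptResult {
  patterns: PatternReport[];
  stats: { promptCount: number; overallScore: number };
  topSuggestion: string;
}

// The sample output above, typed:
const sample: AnalyzePromptResult = {
  patterns: [
    {
      id: 'no-context',
      name: 'Missing Context',
      severity: 'high',
      frequency: '100%',
      suggestion: 'Include specific error messages and file paths',
      examples: ['Fix the bug in auth'],
    },
  ],
  stats: { promptCount: 1, overallScore: 4.5 },
  topSuggestion: 'Add error messages and stack traces for faster resolution',
};
```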
Get concrete before/after rewrites showing how to improve a prompt.
Input Schema:
| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | The prompt text to analyze for improvements |
| `date` | string | No | Date context in ISO format. Defaults to current date. |
Example Output:
{
"improvements": [
{
"issue": "Missing Context",
"before": "Fix the bug in auth",
"after": "Fix authentication bug in src/auth/login.ts where users get 'Invalid token' error. Using Next.js 14.1.0 with next-auth 4.24.5.",
"suggestion": "Include specific error messages, framework versions, and file paths"
}
],
"summary": "Found 1 improvement(s)",
"topSuggestion": "Add error messages and stack traces for faster resolution"
}
Verify if a prompt has sufficient context for effective AI interaction.
Input Schema:
| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | The prompt text to check for context |
| `date` | string | No | Date context in ISO format. Defaults to current date. |
Example Output:
{
"hasSufficientContext": false,
"score": 4.5,
"issues": ["Missing Context", "Vague Instructions"],
"suggestion": "Include specific error messages and file paths",
"details": "Prompt lacks sufficient context for effective AI interaction"
}
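As a mental model of what a context check looks for, the toy heuristic below scores a prompt on a few surface signals. Hyntx's actual check is model-based; the signals and weights here are invented purely for illustration.

```typescript
// Toy heuristic: award points for surface signals of context.
// Hyntx's real check-context tool uses an AI model, not these rules.

function contextScore(prompt: string): number {
  let score = 2; // baseline
  if (prompt.length > 80) score += 2; // enough detail to act on
  if (/\.[a-z]{2,4}(\/|\b)/.test(prompt)) score += 2; // mentions a file path
  if (/error|exception|stack/i.test(prompt)) score += 2; // includes error info
  if (/\d+\.\d+/.test(prompt)) score += 2; // mentions a version number
  return Math.min(score, 10);
}

const vaguePrompt = 'Update the component to handle errors';
const richPrompt =
  'Fix authentication bug in src/auth/login.ts where users get an ' +
  '"Invalid token" error. Using Next.js 14.1.0 with next-auth 4.24.5.';
```

A prompt that names the file, quotes the error, and pins versions scores far higher than a one-liner, which is exactly the gap the Before/After examples in the report illustrate.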
Once configured, you can use these tools in your Claude Code conversations:
Analyze a prompt before sending:
Use the analyze-prompt tool to check: "Fix the login bug"
Get improvement suggestions:
Use suggest-improvements on: "Make the API faster"
Check if your prompt has enough context:
Use check-context to verify: "Update the component to handle errors"
Verify hyntx is installed globally:
which hyntx
# Should output: /usr/local/bin/hyntx or similar
Test manual startup:
hyntx --mcp-server
# Should output: MCP server running on stdio
Check environment variables are set (if using cloud providers):
echo $HYNTX_SERVICES
echo $HYNTX_ANTHROPIC_KEY # if using Anthropic
If using Ollama, ensure it's running:
ollama list
# If no output, start Ollama:
ollama serve
If using cloud providers, verify API keys are set:
# Check if keys are configured
env | grep HYNTX_
Restart Claude Code completely after config changes
Verify the config file exists and is in the correct location:
- `~/.claude.json` (user scope)
- `.mcp.json` (in project root)

Check the JSON syntax in the config file:
# Verify user-scoped config
cat ~/.claude.json | jq .
# Or verify project-scoped config
cat .mcp.json | jq .
# Or use Claude Code CLI to list MCP servers
claude mcp list
# Try a smaller model if analysis is slow
export HYNTX_OLLAMA_MODEL=gemma3:1b
Hyntx takes your privacy seriously:

- API keys (`sk-*`, `claude-*`) are redacted before analysis
- Cloud credentials (`AKIA*`, secret keys) are redacted before analysis
- Conversation logs are only read locally from `~/.claude/projects/`

For local analysis with Ollama, you need to have a compatible model installed. See docs/MINIMUM_VIABLE_MODEL.md for detailed recommendations and performance benchmarks.
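As an illustration of what such redaction can look like, the sketch below swaps recognizable key shapes for placeholders before anything is sent anywhere. The patterns are simplified assumptions; Hyntx's real sanitizer covers more formats.

```typescript
// Simplified secret redaction: replace recognizable key shapes with
// placeholders. Patterns are illustrative, not Hyntx's actual rule set.

const SECRET_PATTERNS: [RegExp, string][] = [
  [/sk-[A-Za-z0-9-]{10,}/g, '[REDACTED_API_KEY]'], // Anthropic/OpenAI-style keys
  [/AKIA[0-9A-Z]{16}/g, '[REDACTED_AWS_KEY]'], // AWS access key ids
];

function redact(text: string): { text: string; redactedCount: number } {
  let redactedCount = 0;
  let out = text;
  for (const [pattern, placeholder] of SECRET_PATTERNS) {
    out = out.replace(pattern, () => {
      redactedCount += 1;
      return placeholder;
    });
  }
  return { text: out, redactedCount };
}

const cleaned = redact(
  'Call the API with sk-ant-abc123def456ghi and AKIAABCDEFGHIJKLMNOP',
);
```

Running redaction before any provider call means even a misconfigured cloud provider never sees raw credentials.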
Quick picks:
| Use Case | Model | Parameters | Disk Size | Speed (CPU) | Quality |
|---|---|---|---|---|---|
| Daily use | gemma3:4b | 4B | ~2GB | ~2-5s/prompt | Good |
| Production | mistral:7b | 7B | ~4GB | ~5-10s/prompt | Better |
| Maximum quality | qwen2.5:14b | 14B | ~9GB | ~15-30s/prompt | Excellent |
Installation:
# Install recommended model (gemma3:4b)
ollama pull gemma3:4b
# Or choose a different model
ollama pull mistral:7b
ollama pull qwen2.5:14b
For complete model comparison, compatibility info, and performance notes, see the Model Requirements documentation.
Make sure you've used Claude Code at least once. Logs are stored in:
~/.claude/projects/<project-hash>/logs.jsonl
- Verify Ollama is running: `ollama list` (start it with `ollama serve` if needed)
- Check the server URL: `echo $HYNTX_OLLAMA_HOST` (default: `http://localhost:11434`)
- Use dates in `YYYY-MM-DD` format
- Run with `--dry-run` to see what logs are being read

Hyntx can also be used as a library in your Node.js applications for custom integrations, CI/CD pipelines, or building tooling on top of the analysis engine.
npm install hyntx
# or
pnpm add hyntx
import {
analyzePrompts,
sanitizePrompts,
readLogs,
createProvider,
getEnvConfig,
type AnalysisResult,
type ExtractedPrompt,
} from 'hyntx';
// Read Claude Code logs for a specific date
const { prompts } = await readLogs({ date: 'today' });
// Sanitize prompts to remove secrets
const { prompts: sanitizedTexts } = sanitizePrompts(
prompts.map((p: ExtractedPrompt) => p.content),
);
// Get environment configuration
const config = getEnvConfig();
// Create an AI provider
const provider = await createProvider('ollama', config);
// Analyze the prompts
const result: AnalysisResult = await analyzePrompts({
provider,
prompts: sanitizedTexts,
date: '2025-12-26',
});
// Use the results
console.log(`Overall score: ${result.stats.overallScore}/10`);
console.log(`Patterns detected: ${result.patterns.length}`);
result.patterns.forEach((pattern) => {
console.log(`- ${pattern.name}: ${pattern.severity}`);
console.log(` Suggestion: ${pattern.suggestion}`);
});
CI/CD Integration - Fail builds when prompt quality drops below threshold:
import { analyzePrompts, readLogs, createProvider, getEnvConfig } from 'hyntx';
const config = getEnvConfig();
const provider = await createProvider('ollama', config);
const { prompts } = await readLogs({ date: 'today' });
const result = await analyzePrompts({
provider,
prompts: prompts.map((p) => p.content),
date: new Date().toISOString().split('T')[0],
});
// Fail CI if quality score is too low
const QUALITY_THRESHOLD = 7.0;
if (result.stats.overallScore < QUALITY_THRESHOLD) {
console.error(
`Quality score ${result.stats.overallScore} below threshold ${QUALITY_THRESHOLD}`,
);
process.exit(1);
}
Custom Analysis - Analyze specific prompts without reading logs:
import { analyzePrompts, createProvider, getEnvConfig } from 'hyntx';
const config = getEnvConfig();
const provider = await createProvider('anthropic', config);
const customPrompts = [
'Fix the bug',
'Make it better',
'Refactor the authentication module to use JWT tokens instead of sessions',
];
const result = await analyzePrompts({
provider,
prompts: customPrompts,
date: '2025-12-26',
context: {
role: 'developer',
techStack: ['TypeScript', 'React', 'Node.js'],
},
});
console.log(result.patterns);
History Management - Track analysis over time:
import {
analyzePrompts,
saveAnalysisResult,
loadAnalysisResult,
compareResults,
type HistoryMetadata,
} from 'hyntx';
// Run analysis
const result = await analyzePrompts({
/* ... */
});
// Save to history
const metadata: HistoryMetadata = {
date: '2025-12-26',
promptCount: result.stats.promptCount,
score: result.stats.overallScore,
projectFilter: undefined,
provider: 'ollama',
};
await saveAnalysisResult(result, metadata);
// Load previous analysis
const previousResult = await loadAnalysisResult('2025-12-19');
// Compare results
const comparison = await compareResults('2025-12-19', '2025-12-26');
console.log(
`Score change: ${comparison.scoreChange > 0 ? '+' : ''}${comparison.scoreChange}`,
);
- `analyzePrompts(options: AnalysisOptions): Promise<AnalysisResult>` - Analyze prompts and detect anti-patterns
- `readLogs(options?: ReadLogsOptions): Promise<LogReadResult>` - Read Claude Code conversation logs
- `sanitize(text: string): SanitizeResult` - Remove secrets from a single text
- `sanitizePrompts(prompts: string[]): { prompts: string[]; totalRedacted: number }` - Remove secrets from multiple prompts
- `createProvider(type: ProviderType, config: EnvConfig): Promise<AnalysisProvider>` - Create an AI provider instance
- `getAvailableProvider(config: EnvConfig, onFallback?: Function): Promise<AnalysisProvider>` - Get the first available provider with fallback
- `getAllProviders(services: string[], config: EnvConfig): AnalysisProvider[]` - Get all configured providers
- `saveAnalysisResult(result: AnalysisResult, metadata: HistoryMetadata): Promise<void>` - Save analysis to history
- `loadAnalysisResult(date: string): Promise<HistoryEntry | null>` - Load analysis from history
- `listAvailableDates(): Promise<string[]>` - Get the list of dates with saved analyses
- `compareResults(beforeDate: string, afterDate: string): Promise<ComparisonResult>` - Compare two analyses
- `getEnvConfig(): EnvConfig` - Get environment configuration
- `claudeProjectsExist(): boolean` - Check if the Claude projects directory exists
- `parseDate(dateStr: string): Date` - Parse a date string to a Date object
- `groupByDay(prompts: ExtractedPrompt[]): DayGroup[]` - Group prompts by day
- `generateCacheKey(config: CacheKeyConfig): string` - Generate a cache key for analysis
- `getCachedResult(cacheKey: string): Promise<AnalysisResult | null>` - Get a cached result
- `setCachedResult(cacheKey: string, result: AnalysisResult, ttlMinutes?: number): Promise<void>` - Cache an analysis result

Hyntx is written in TypeScript and provides full type definitions. All types are exported:
import type {
AnalysisResult,
AnalysisPattern,
AnalysisStats,
ExtractedPrompt,
ProviderType,
EnvConfig,
HistoryEntry,
ComparisonResult,
} from 'hyntx';
See the TypeScript definitions for complete API documentation.
# Clone the repository
git clone https://github.com/jmlweb/hyntx.git
cd hyntx
# Install dependencies
pnpm install
# Run in development mode
pnpm dev
# Build
pnpm build
# Test the CLI
pnpm start
hyntx/
├── src/
│ ├── index.ts # Library entry point (re-exports api/)
│ ├── cli.ts # CLI entry point
│ ├── api/
│ │ └── index.ts # Public API surface
│ ├── core/ # Core business logic
│ │ ├── setup.ts # Interactive setup (multi-provider)
│ │ ├── reminder.ts # Reminder system
│ │ ├── log-reader.ts # Log parsing
│ │ ├── schema-validator.ts # Log schema validation
│ │ ├── sanitizer.ts # Secret redaction
│ │ ├── analyzer.ts # Analysis orchestration + batching
│ │ ├── reporter.ts # Output formatting (Before/After)
│ │ ├── watcher.ts # Real-time log file monitoring
│ │ └── history.ts # Analysis history management
│ ├── providers/ # AI providers
│ │ ├── base.ts # Interface & prompts
│ │ ├── ollama.ts # Ollama integration
│ │ ├── anthropic.ts # Claude integration
│ │ ├── google.ts # Gemini integration
│ │ └── index.ts # Provider factory with fallback
│ ├── utils/ # Utility functions
│ │ ├── env.ts # Environment config
│ │ ├── shell-config.ts # Shell auto-configuration
│ │ ├── paths.ts # System path constants
│ │ ├── logger-base.ts # Base logger (no CLI deps)
│ │ ├── logger.ts # CLI logger (with chalk)
│ │ └── terminal.ts # Terminal utilities
│ └── types/
│ └── index.ts # TypeScript type definitions
├── docs/
│ └── SPECS.md # Technical specifications
└── package.json
Contributions are welcome! Please:
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes and push the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request

For detailed development roadmap, planned features, and implementation status, see GitHub Issues and GitHub Projects.
MIT License - see LICENSE file for details.
Made with ❤️ for better prompt engineering