promptsentry

AI prompt security scanner with differential validation - detects vulnerabilities before they reach production

pip / PyPI · Version 0.2.0 · Maintainers: 1

🛡️ PromptSentry

AI Prompt Security Scanner - Detect and prevent vulnerabilities in AI prompts before they reach production.

Python 3.9+ · License: MIT · OWASP LLM Top 10

🚀 Features

  • 3-Stage Security Pipeline: Comprehensive analysis using prompt detection, pattern matching, and optional LLM validation
  • OWASP LLM Top 10 2025 Rules: Industry-standard vulnerability detection loaded as context
  • Differential Validation: No "moving goalposts" - only blocks for previously identified unfixed issues
  • Git Pre-commit Hook: Automatic scanning before every commit
  • Beautiful CLI: Rich terminal output with actionable recommendations
  • Local LLM Analysis: Uses Ollama with Qwen 2.5 Coder 0.5B (~500MB) for intelligent security assessment
  • Stable Fingerprinting: Advanced code normalization ensures issues are tracked across refactoring

📦 Installation

pip install promptsentry

⚡ Quick Start

# Initialize PromptSentry
promptsentry init

# Install git pre-commit hook
cd your-project
promptsentry install-hook

# Scan a file manually
promptsentry scan chatbot.py

# Scan without LLM (faster, pattern matching only)
promptsentry scan --no-llm chatbot.py

🔍 How It Works

PromptSentry uses a 3-stage pipeline to detect vulnerabilities:

┌─────────────┐    ┌──────────────┐    ┌─────────────┐
│  1. DETECT  │ -> │  2. PATTERNS │ -> │ 3. SLM      │
│   Prompts   │    │    Check     │    │   ANALYSIS  │
└─────────────┘    └──────────────┘    └─────────────┘
  Fast filter       OWASP LLM Top 10    Ollama + Qwen
   (0.1s)           rules (0.2s)        (optional, 2s)

Stage 1: Prompt Detection

Quickly identifies AI prompts in source code using heuristics and pattern matching.
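
A heuristic filter like this can be sketched in a few lines of Python. The signal patterns and scoring below are illustrative assumptions, not PromptSentry's actual detector:

```python
import re

# Heuristic signals that a string literal is an AI prompt.
# These patterns are illustrative; the real detector's heuristics differ.
PROMPT_SIGNALS = [
    re.compile(r"\byou are\b", re.IGNORECASE),                     # persona framing
    re.compile(r"\b(system|assistant|user)\s*:", re.IGNORECASE),   # chat-role markers
    re.compile(r"\b(answer|respond|translate|summarize)\b", re.IGNORECASE),
]

def prompt_confidence(text: str) -> float:
    """Fraction of signal patterns matched, used as a crude confidence score."""
    hits = sum(1 for pat in PROMPT_SIGNALS if pat.search(text))
    return hits / len(PROMPT_SIGNALS)

def looks_like_prompt(text: str, min_confidence: float = 0.6) -> bool:
    return prompt_confidence(text) >= min_confidence
```

Because this stage is pure string matching, it can filter an entire file in well under a second before the heavier stages run.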

Stage 2: Pattern Matching

Applies OWASP LLM Top 10 2025 rules for deterministic vulnerability detection.
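
A deterministic rule pass can be sketched as a list of compiled patterns checked against the source. The two rules below are illustrative stand-ins for the full OWASP LLM Top 10 rule set:

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    rule_id: str
    title: str
    pattern: re.Pattern

# Illustrative rules only; not PromptSentry's actual rule definitions.
RULES = [
    Rule("LLM01", "Direct concatenation of user input into a prompt",
         re.compile(r'["\']\s*\+\s*user_input|user_input\s*\+\s*["\']')),
    Rule("LLM05", "Untrusted data passed to eval/exec",
         re.compile(r"\b(eval|exec)\s*\(")),
]

def match_rules(source: str) -> list[tuple[str, str]]:
    """Return (rule_id, title) for every rule whose pattern fires."""
    return [(r.rule_id, r.title) for r in RULES if r.pattern.search(source)]
```

Because every rule is a fixed pattern, this stage produces the same findings on every run, which is what makes the differential validation described below possible.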

Stage 3: SLM Analysis (Optional)

Uses Ollama with Qwen 2.5 Coder 0.5B for intelligent, context-aware vulnerability assessment. OWASP rules are passed as context to the model for comprehensive analysis.
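
Ollama serves a local HTTP API (by default at `localhost:11434/api/generate`), so this stage amounts to one POST per prompt. A sketch of assembling a rules-as-context request; the prompt template and model tag below are assumptions for illustration, not PromptSentry's actual wording:

```python
OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def build_analysis_request(code_snippet: str, owasp_rules: str,
                           model: str = "qwen2.5-coder:0.5b") -> dict:
    """Assemble a single-shot analysis request for Ollama's generate API."""
    prompt = (
        "You are a security reviewer. Using these OWASP LLM Top 10 rules:\n"
        f"{owasp_rules}\n\n"
        "Report any vulnerabilities in this code as JSON:\n"
        f"{code_snippet}"
    )
    return {"model": model, "prompt": prompt, "stream": False}

# Sending it is one blocking call, e.g. with requests:
# response = requests.post(OLLAMA_URL, json=build_analysis_request(code, rules),
#                          timeout=30)
# model_answer = response.json()["response"]
```

Keeping the model local means no source code leaves the machine during a scan.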

🛡️ Vulnerabilities Detected

Based on OWASP LLM Top 10:

Category | Description | Detection
---------|-------------|----------
LLM01 | Prompt Injection | Direct concatenation, missing delimiters, weak system prompts
LLM02 | Sensitive Information Disclosure | PII, credentials, API keys in prompts
LLM03 | Supply Chain Vulnerabilities | Vulnerable dependencies, untrusted models/data
LLM04 | Data and Model Poisoning | Poisoned training data, malicious fine-tuning
LLM05 | Improper Output Handling | eval(), exec(), subprocess, SQL/XSS from LLM output
LLM06 | Excessive Agency | Unrestricted file/network access, auto-execution
LLM07 | System Prompt Leakage | Extractable logic, secrets in system prompts
LLM08 | Vector and Embedding Weaknesses | RAG poisoning, embedding manipulation
LLM09 | Misinformation | Hallucinations, lack of factual grounding
LLM10 | Unbounded Consumption | No rate limiting, infinite loops, resource exhaustion
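
The LLM05 row can be made concrete: never hand model output to an evaluator; treat it as data and parse it strictly. A minimal sketch, illustrative rather than PromptSentry's remediation code:

```python
import json

# ❌ LLM05 Improper Output Handling: executing model output directly
#    result = eval(llm_response)   # model-controlled code execution

# ✅ Ask the model for JSON and parse it as inert data
def parse_llm_response(llm_response: str) -> dict:
    """Parse a model reply strictly; raises ValueError on anything but a JSON object."""
    data = json.loads(llm_response)  # json.JSONDecodeError subclasses ValueError
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object from the model")
    return data
```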

💡 Key Innovation: Differential Validation

PromptSentry tracks issues across commits and blocks only on previously identified issues that remain unfixed:

  • First scan: Detects issues and tracks them
  • Fix and commit: Only checks if tracked issues are fixed
  • No moving goalposts: New findings are noted but don't block

This prevents the frustrating experience of fixing one issue only to have the scanner find new nitpicks.
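
Combined with the stable fingerprinting mentioned above, the mechanism can be sketched as hashing a normalized snippet and intersecting fingerprint sets. The normalization and storage here are simplified assumptions; the real implementation is richer:

```python
import hashlib
import re

def fingerprint(issue_type: str, snippet: str) -> str:
    """Stable issue ID: collapse whitespace so reflowing the code
    (a common refactor) keeps the same fingerprint."""
    normalized = re.sub(r"\s+", " ", snippet).strip()
    return hashlib.sha256(f"{issue_type}:{normalized}".encode()).hexdigest()[:12]

def should_block(current_findings: set, tracked_unfixed: set) -> bool:
    """Block only when a previously tracked issue is still present;
    brand-new findings are recorded but do not block the commit."""
    return bool(current_findings & tracked_unfixed)
```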

📋 CLI Commands

Setup

promptsentry init              # Initialize PromptSentry
promptsentry install-hook      # Install git pre-commit hook
promptsentry uninstall-hook    # Remove the hook

Scanning

promptsentry scan file.py      # Scan a single file (LLM enabled by default)
promptsentry scan .            # Scan current directory
promptsentry scan --staged     # Scan staged git files
promptsentry scan --no-llm     # Disable LLM for faster scanning

Issue Management

promptsentry issues list       # List tracked issues
promptsentry issues stats      # Show statistics
promptsentry issues clear file.py  # Clear issues for a file
promptsentry issues ignore ID  # Ignore a specific issue

Configuration

promptsentry config show       # Show current config
promptsentry config set scan.threshold 80  # Set blocking threshold
promptsentry config reset      # Reset to defaults

Rules

promptsentry rules            # List all detection rules

⚙️ Configuration

Configuration is stored in ~/.promptsentry/config.yaml:

scan:
  threshold: 50              # Score to block (0-100)
  min_confidence: 0.6        # Prompt detection threshold
  file_extensions:
    - .py
    - .js
    - .ts

llm:
  model_name: deepseek-r1:1.5b  # Ollama model
  enabled: true                    # Enable LLM analysis by default
  timeout: 30                      # LLM request timeout (seconds)
  
hook:
  enabled: true
  block_on_issues: true
  allow_bypass: true         # Allow --no-verify
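
Conceptually, a partial config.yaml only needs the keys being changed, with everything else falling back to defaults. A sketch of that overlay, illustrative rather than the actual loader:

```python
from copy import deepcopy

DEFAULTS = {
    "scan": {"threshold": 50, "min_confidence": 0.6},
    "llm": {"enabled": True, "timeout": 30},
    "hook": {"enabled": True, "block_on_issues": True, "allow_bypass": True},
}

def merge_config(defaults: dict, overrides: dict) -> dict:
    """Recursively overlay user settings on the defaults."""
    merged = deepcopy(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged
```

This is why `promptsentry config set scan.threshold 80` can change one value without disturbing the rest.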

🎬 Demo Flow

# First commit - issues detected
$ git add chatbot.py
$ git commit -m "Add chatbot"

🔍 PromptSentry: Scanning staged files...
   ✓ Found 1 prompt in chatbot.py

❌ COMMIT BLOCKED - 2 vulnerabilities found

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔴 HIGH - Direct Concatenation
   chatbot.py:15

   Problem:
   > prompt = "Translate: " + user_input

   Fix:
   > prompt = f"Translate: <input>{user_input}</input>"
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

# Fix and commit again
$ git commit -m "Fix: Add delimiters"

🔍 PromptSentry: Checking fixes...

✅ Fixed: Direct Concatenation ✓

🎉 All vulnerabilities resolved!
✅ COMMIT ALLOWED

🔧 Development

# Clone repository
git clone https://github.com/Brightlord5/PromptGuard
cd PromptGuard

# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run linting
black promptsentry
ruff check promptsentry

📄 License

MIT License - see LICENSE for details.

🙏 Acknowledgments

  • OWASP LLM Top 10 for vulnerability categories
  • Ollama for local LLM infrastructure
  • Qwen for the efficient coder model
  • Rich for beautiful terminal output

Made with ❤️ for secure AI development
