shadowlens (npm, v0.1.0)

ShadowLens

Enterprise AI Compliance Middleware for Node.js/TypeScript


ShadowLens is a comprehensive compliance middleware that automatically detects PII, blocks risky content, redacts sensitive information, tracks costs, and logs all AI interactions for audit trails. Drop-in replacement for OpenAI and Anthropic SDKs.

✨ Features

🔒 Security & Compliance

  • PII Detection - Automatically detects 9+ types of personally identifiable information (email, SSN, credit cards, phone numbers, etc.)
  • Auto-Redaction - Redacts sensitive information before sending to AI with configurable placeholders
  • Keyword Blocking - Configurable keyword filtering with 30+ default blocked terms
  • Risk Scoring - Intelligent risk assessment (0-100) with weighted calculations
  • Automatic Blocking - Prevents requests exceeding risk thresholds from reaching AI providers

📊 Monitoring & Analytics

  • Cost Tracking - Automatic token usage and cost monitoring across providers and models
  • Audit Logging - Complete audit trail of all AI interactions sent to your compliance endpoint
  • Real-time Statistics - Live metrics on requests, costs, PII detections, and blocks
  • Request History - Detailed history with configurable retention

🔌 Provider Support

  • OpenAI Integration - Full support for GPT-3.5, GPT-4, and all OpenAI models
  • Anthropic Integration - Complete Claude support including streaming responses
  • Unified API - Single interface for both providers with automatic configuration validation
  • Drop-in Replacement - Use like native SDKs with zero code changes

⚙️ Configuration

  • Environment-based - Secure configuration via environment variables
  • Flexible Thresholds - Configurable risk thresholds per deployment
  • Custom Keywords - Industry-specific keyword sets (healthcare, finance, enterprise)
  • Feature Flags - Enable/disable features individually
  • Runtime Updates - Update configuration without restarting

📦 Installation

npm install shadowlens

Peer Dependencies

npm install openai @anthropic-ai/sdk axios dotenv

🚀 Quick Start

OpenAI Example

import { ComplianceLayer } from 'shadowlens';

const client = new ComplianceLayer({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  compliance: {
    apiEndpoint: 'https://api.yourcompany.com/compliance/logs',
    apiKey: process.env.COMPLIANCE_API_KEY,
    riskThreshold: 70,
    autoRedact: true,
  }
});

// Use exactly like OpenAI SDK
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }]
});

await client.shutdown(); // Flush logs

Anthropic Example

import { ComplianceLayer } from 'shadowlens';

const client = new ComplianceLayer({
  provider: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY,
  compliance: {
    apiEndpoint: 'https://api.yourcompany.com/compliance/logs',
    apiKey: process.env.COMPLIANCE_API_KEY,
    riskThreshold: 70,
  }
});

// Use exactly like Anthropic SDK
const response = await client.messages.create({
  model: 'claude-3-sonnet-20240229',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello!' }]
});

await client.shutdown();

⚙️ Configuration

ComplianceLayer Configuration

| Option | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| provider | 'openai' \| 'anthropic' | ✅ Yes | - | AI provider to use |
| apiKey | string | ✅ Yes | - | API key for the AI provider |
| compliance.apiEndpoint | string | ✅ Yes | - | HTTPS URL for compliance logging |
| compliance.apiKey | string | ✅ Yes | - | API key for compliance endpoint |
| compliance.riskThreshold | number | ❌ No | 75 | Risk threshold (0-100) for blocking |
| compliance.blockedKeywords | string[] | ❌ No | [] | Custom blocked keywords |
| compliance.enablePIIDetection | boolean | ❌ No | true | Enable PII detection |
| compliance.enableKeywordScanning | boolean | ❌ No | true | Enable keyword scanning |
| compliance.autoRedact | boolean | ❌ No | false | Automatically redact PII |
| compliance.enableCostTracking | boolean | ❌ No | true | Track API costs |
| compliance.userId | string | ❌ No | - | User identifier for logging |

Redactor Configuration

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| redactorConfig.placeholders | object | See below | Custom redaction placeholders |
| redactorConfig.showPartial | boolean | false | Show partial PII (e.g., ****@example.com) |
| redactorConfig.preserveLength | boolean | false | Preserve original text length with * |

Default Placeholders:

{
  email: '[EMAIL_REDACTED]',
  phone: '[PHONE_REDACTED]',
  ssn: '[SSN_REDACTED]',
  creditCard: '[CC_REDACTED]',
  ipAddress: '[IP_REDACTED]'
}
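
For illustration, a redaction pass using the placeholders and showPartial options described above might look like the following sketch. This is not the package's actual redactor; the regex, the redactEmails helper, and the partial-masking format are illustrative only:

```javascript
// Illustrative email redaction sketch (not shadowlens internals).
const EMAIL_RE = /([A-Za-z0-9._%+-]+)@([A-Za-z0-9.-]+\.[A-Za-z]{2,})/g;

function redactEmails(text, { showPartial = false } = {}) {
  return text.replace(EMAIL_RE, (match, local, domain) =>
    // showPartial keeps the first character of the local part, as in "j***@example.com"
    showPartial ? `${local[0]}***@${domain}` : '[EMAIL_REDACTED]'
  );
}

console.log(redactEmails('My email is john@example.com'));
// My email is [EMAIL_REDACTED]
console.log(redactEmails('My email is john@example.com', { showPartial: true }));
// My email is j***@example.com
```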

Cost Tracker Configuration

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| costTrackerConfig.enableTracking | boolean | true | Enable cost tracking |
| costTrackerConfig.maxHistorySize | number | 1000 | Max requests to store in history |
| costTrackerConfig.customPricing | object | - | Custom pricing per model |

📚 API Reference

ComplianceLayer

Main class that wraps an AI provider and applies compliance checks to every request.

Constructor

new ComplianceLayer(config: ComplianceLayerConfig)

Methods

chat.completions.create() (OpenAI only)
await client.chat.completions.create(
  params: ChatCompletionCreateParamsNonStreaming,
  userId?: string
): Promise<ChatCompletion>

Creates an OpenAI chat completion with automatic compliance checks.

Throws: ComplianceError if risk exceeds threshold

messages.create() (Anthropic only)
await client.messages.create(
  params: MessageCreateParamsNonStreaming | MessageCreateParamsStreaming,
  userId?: string
): Promise<Message | Stream<Message>>

Creates a Claude message with automatic compliance checks.

Throws: ComplianceError if risk exceeds threshold

getLoggerStats()
client.getLoggerStats(): LoggerStats

Returns compliance logging statistics.

Returns:

{
  totalEvents: number;
  eventsSent: number;
  eventsPending: number;
  failedSends: number;
}

getCostTrackerStats()
client.getCostTrackerStats(): CostStats

Returns cost tracking statistics.

Returns:

{
  totalCost: number;
  totalInputTokens: number;
  totalOutputTokens: number;
  totalRequests: number;
  costByProvider: Record<string, number>;
  costByModel: Record<string, number>;
}

getComplianceConfig()
client.getComplianceConfig(): ComplianceConfig

Returns current compliance configuration.

updateComplianceConfig()
client.updateComplianceConfig(updates: {
  riskThreshold?: number;
  enablePIIDetection?: boolean;
  enableKeywordScanning?: boolean;
}): void

Updates compliance configuration at runtime.

shutdown()
await client.shutdown(): Promise<void>

Gracefully shuts down, flushing all pending logs.

Standalone Utilities

detectPII(text: string): PIIDetectionResult

Detects PII in text without making API calls.

Returns:

{
  hasPII: boolean;
  detectedTypes: string[];
  matches: Array<{
    type: string;
    value: string;
    redacted: string;
    position?: { start: number; end: number; };
  }>;
  riskScore: number; // 0-100
}
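
As a rough illustration of what produces that result shape, here is a self-contained regex-based sketch. The patterns, the placeholder risk weighting, and the sketchDetectPII name are illustrative assumptions, not the library's actual implementation:

```javascript
// Illustrative PII detector producing the documented result shape
// (patterns and scoring are placeholders, not shadowlens internals).
const PATTERNS = {
  email: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g,
  ssn: /\b\d{3}-\d{2}-\d{4}\b/g,
};

function sketchDetectPII(text) {
  const matches = [];
  for (const [type, re] of Object.entries(PATTERNS)) {
    for (const m of text.matchAll(re)) {
      matches.push({
        type,
        value: m[0],
        redacted: `[${type.toUpperCase()}_REDACTED]`,
        position: { start: m.index, end: m.index + m[0].length },
      });
    }
  }
  const detectedTypes = [...new Set(matches.map((m) => m.type))];
  return {
    hasPII: matches.length > 0,
    detectedTypes,
    matches,
    riskScore: Math.min(100, matches.length * 15), // placeholder weighting
  };
}
```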

KeywordScanner

Scans text for blocked keywords.

import { KeywordScanner } from 'shadowlens';

const scanner = new KeywordScanner(['confidential', 'secret']);
const result = scanner.scan('This is confidential information');
// { hasBlockedContent: true, matchedKeywords: ['confidential'], riskScore: 20 }

aggregateRisk()

Combines risk scores from multiple scanners.

import { aggregateRisk } from 'shadowlens';

const risk = aggregateRisk(piiResult, keywordResult);
// { overallScore: 45, riskLevel: 'medium', shouldBlock: false }

Types

// Main configuration
export interface ComplianceLayerConfig {
  provider: 'openai' | 'anthropic';
  apiKey: string;
  compliance: {
    apiEndpoint: string;
    apiKey: string;
    riskThreshold?: number;
    blockedKeywords?: string[];
    enablePIIDetection?: boolean;
    enableKeywordScanning?: boolean;
    autoRedact?: boolean;
    enableCostTracking?: boolean;
    userId?: string;
    redactorConfig?: RedactorConfig;
    costTrackerConfig?: CostTrackerConfig;
  };
}

// Compliance event (logged)
export interface ComplianceEvent {
  timestamp: string;
  provider: 'openai' | 'anthropic' | 'other';
  userId?: string;
  promptText: string;
  responseText?: string;
  riskScore: number;
  riskLevel: string;
  action: 'allowed' | 'blocked' | 'redacted';
  piiDetected: boolean;
  blockedKeywords: string[];
  estimatedCost?: number;
  inputTokens?: number;
  outputTokens?: number;
}

// Compliance error
export class ComplianceError extends Error {
  riskScore: number;
  riskLevel: string;
  details: {
    piiScore: number;
    keywordScore: number;
    piiDetected: boolean;
    blockedKeywords: string[];
  };
}

📖 Examples

Basic Usage

See the examples/ folder for fully working examples.

Common Use Cases

Healthcare (HIPAA Compliance)

const client = new ComplianceLayer({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  compliance: {
    apiEndpoint: process.env.COMPLIANCE_API_ENDPOINT,
    apiKey: process.env.COMPLIANCE_API_KEY,
    riskThreshold: 60, // Strict
    blockedKeywords: [
      'patient name', 'medical record', 'diagnosis',
      'prescription', 'health insurance'
    ],
    enablePIIDetection: true,
    autoRedact: true,
  }
});

Finance (PCI-DSS Compliance)

const client = new ComplianceLayer({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  compliance: {
    apiEndpoint: process.env.COMPLIANCE_API_ENDPOINT,
    apiKey: process.env.COMPLIANCE_API_KEY,
    riskThreshold: 50, // Very strict
    blockedKeywords: [
      'credit card', 'bank account', 'routing number',
      'CVV', 'PIN', 'account number'
    ],
    enablePIIDetection: true,
    autoRedact: true,
  }
});

Enterprise (Trade Secret Protection)

const client = new ComplianceLayer({
  provider: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY,
  compliance: {
    apiEndpoint: process.env.COMPLIANCE_API_ENDPOINT,
    apiKey: process.env.COMPLIANCE_API_KEY,
    riskThreshold: 70,
    blockedKeywords: [
      'confidential', 'proprietary', 'trade secret',
      'internal only', 'restricted'
    ],
  }
});

Auto-Redaction Example

const client = new ComplianceLayer({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  compliance: {
    apiEndpoint: process.env.COMPLIANCE_API_ENDPOINT,
    apiKey: process.env.COMPLIANCE_API_KEY,
    autoRedact: true,
    redactorConfig: {
      showPartial: true, // Show partial: "j***@example.com"
    }
  }
});

// User sends: "My email is john@example.com"
// AI receives: "My email is j***@example.com"
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{
    role: 'user',
    content: 'My email is john@example.com'
  }]
});

Cost Monitoring Example

import { OpenAIProvider } from 'shadowlens';

const provider = new OpenAIProvider({
  openaiApiKey: process.env.OPENAI_API_KEY,
  complianceConfig: {
    loggerApiEndpoint: process.env.COMPLIANCE_API_ENDPOINT,
    loggerApiKey: process.env.COMPLIANCE_API_KEY,
    enableCostTracking: true,
  }
});

// Make requests
await provider.chat.completions.create({ /* ... */ });
await provider.chat.completions.create({ /* ... */ });

// Get cost statistics
const stats = provider.getCostTrackerStats();
console.log('Total Cost:', stats.totalCost);
console.log('Cost by Model:', stats.costByModel);
console.log('Total Tokens:', stats.totalInputTokens + stats.totalOutputTokens);

🎯 Risk Scoring

How Risk Scores Are Calculated

Risk scores range from 0-100, with higher scores indicating higher risk.

PII Risk Scores (60% weight)

| PII Type | Points | Examples |
| --- | --- | --- |
| Email | +15 | john@example.com |
| Phone | +20 | 555-123-4567, (555) 123-4567 |
| SSN | +50 | 123-45-6789 |
| Credit Card | +60 | 4532-1234-5678-9010 |
| IP Address | +10 | 192.168.1.1 |

Maximum PII Score: 100 (capped)

Keyword Risk Scores (40% weight)

| Matches | Points |
| --- | --- |
| First keyword | +20 |
| Each additional occurrence | +5 |
| Maximum | 100 |

Overall Risk Calculation

// Weighted average
overallScore = (piiScore * 0.6) + (keywordScore * 0.4)

// Critical override: If any individual score > 80, use highest score
if (piiScore > 80 || keywordScore > 80) {
  overallScore = Math.max(piiScore, keywordScore)
}
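
The tables and formula above can be combined into a runnable sketch. The point values and weights are taken from this documentation; the function names and structure are illustrative, not the package's source:

```javascript
// Illustrative implementation of the documented scoring rules.
const PII_POINTS = { email: 15, phone: 20, ssn: 50, creditCard: 60, ipAddress: 10 };

function piiScore(detectedTypes) {
  const total = detectedTypes.reduce((sum, t) => sum + (PII_POINTS[t] ?? 0), 0);
  return Math.min(100, total); // maximum PII score is capped at 100
}

function keywordScore(matchCount) {
  if (matchCount === 0) return 0;
  return Math.min(100, 20 + (matchCount - 1) * 5); // +20 first, +5 each additional
}

function overallRisk(pii, keyword) {
  // Critical override: any individual score above 80 wins outright
  if (pii > 80 || keyword > 80) return Math.max(pii, keyword);
  return pii * 0.6 + keyword * 0.4; // weighted average: PII 60%, keywords 40%
}

console.log(overallRisk(piiScore(['ssn']), keywordScore(1))); // 38
console.log(overallRisk(piiScore(['ssn', 'creditCard']), keywordScore(0))); // 100
```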

Risk Levels

| Score | Level | Action | Description |
| --- | --- | --- | --- |
| 0-30 | Low | ✅ Allow | Safe content, no significant risks |
| 31-60 | Medium | ✅ Allow | Minor concerns, monitored |
| 61-80 | High | ⚠️ Allow/Log | Significant risk, requires review |
| 81-100 | Critical | ❌ Block | Severe risk, automatic blocking |

When Requests Are Blocked

A request is blocked when the overall risk score exceeds riskThreshold:

    if (overallScore > riskThreshold) {
      throw new ComplianceError('Request blocked: risk exceeds threshold');
    }

When a request is blocked:

  • A ComplianceError is thrown with details:

    {
      name: 'ComplianceError',
      message: 'Compliance check failed: Risk score 85 exceeds threshold 75',
      riskScore: 85,
      riskLevel: 'critical',
      details: {
        piiScore: 50,
        keywordScore: 20,
        piiDetected: true,
        blockedKeywords: ['confidential']
      }
    }
    
  • The request never reaches the AI provider; it is blocked at the compliance layer

  • The event is logged as 'blocked' to your compliance endpoint

Adjusting Risk Thresholds

// Strict (block more)
riskThreshold: 50

// Balanced (recommended)
riskThreshold: 75

// Permissive (block less)
riskThreshold: 90

Recommended thresholds by industry:

  • Healthcare (HIPAA): 60-70
  • Finance (PCI-DSS): 50-60
  • Enterprise: 70-80
  • General use: 75

📋 Logging

What Data Is Logged

Every AI interaction generates a ComplianceEvent that is sent to your compliance endpoint:

{
  timestamp: "2024-01-15T10:30:00.000Z",
  provider: "openai",
  userId: "user-123",
  promptText: "Hello, how are you?",      // Truncated to 500 chars
  responseText: "I'm doing well...",      // Truncated to 500 chars
  riskScore: 0,
  riskLevel: "low",
  action: "allowed",                      // 'allowed' | 'blocked' | 'redacted'
  piiDetected: false,
  blockedKeywords: [],
  estimatedCost: 0.0002,
  inputTokens: 10,
  outputTokens: 15,
  metadata: {}
}

Logging Behavior

  • Batched: Events are batched (default: 10 per batch) for efficiency
  • Retry Logic: Failed sends are retried 3 times with exponential backoff
  • Non-blocking: Logging failures don't affect AI requests
  • Truncation: Text is truncated to 500 characters to limit data size
  • Graceful Shutdown: client.shutdown() flushes all pending logs
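
The retry behavior described above can be sketched as follows. sendWithRetry and the send callback are illustrative stand-ins for the HTTP POST to your compliance endpoint, not the library's internals:

```javascript
// Illustrative retry loop: up to `attempts` tries with exponential backoff.
async function sendWithRetry(send, batch, attempts = 3, baseDelayMs = 500) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await send(batch);
    } catch (err) {
      if (i === attempts - 1) throw err; // give up after the final attempt
      const delay = baseDelayMs * 2 ** i; // 500ms, 1000ms, 2000ms, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Because logging is non-blocking, a real caller would fire this off without awaiting it on the request path.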

Privacy Considerations

What We Log

  • ✅ Truncated text (500 chars max)
  • ✅ Risk scores and levels
  • ✅ PII detection results (types found, not values)
  • ✅ Matched keywords (which keywords, not full text)
  • ✅ Cost and token usage
  • ✅ Timestamps and user IDs

What We Don't Log

  • ❌ Full prompt text (only first 500 chars)
  • ❌ Complete responses (only first 500 chars)
  • ❌ Actual PII values (only types detected)
  • ❌ API keys (never logged)
  • ❌ Unredacted sensitive content

Compliance Endpoint Requirements

Your compliance endpoint should:

  • Accept POST requests with JSON body
  • Authenticate using the apiKey header
  • Return 200 OK for successful logging
  • Handle batches of up to 100 events
  • Be HTTPS (required for security)

Example endpoint payload:

{
  "events": [
    {
      "timestamp": "2024-01-15T10:30:00.000Z",
      "provider": "openai",
      "riskScore": 15,
      "action": "allowed",
      ...
    },
    ...
  ]
}
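
Server-side, validating such a batch against the requirements above might look like this sketch. validateBatch is a hypothetical helper; authentication, HTTPS termination, and your web framework are omitted:

```javascript
// Hypothetical validation of an incoming compliance batch
// (enforces the documented 1-100 event batch size and basic event shape).
function validateBatch(body) {
  if (!body || !Array.isArray(body.events)) {
    return { ok: false, error: 'events array required' };
  }
  if (body.events.length === 0 || body.events.length > 100) {
    return { ok: false, error: 'batch must contain 1-100 events' };
  }
  const bad = body.events.find(
    (e) => typeof e.timestamp !== 'string' || typeof e.riskScore !== 'number'
  );
  if (bad) return { ok: false, error: 'malformed event' };
  return { ok: true }; // respond 200 OK in this case
}
```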

🛠️ Development

Running Tests

# Run all tests
npm test

# Run specific test file
npm test -- pii-detector.test.ts

# Run with coverage
npm run test:coverage

# Watch mode
npm run test:watch

Building

# Build TypeScript
npm run build

# Watch mode (auto-rebuild on changes)
npm run dev

Publishing to npm

# 1. Update version in package.json
# 2. Update CHANGELOG.md

# 3. Build and test
npm run build
npm test

# 4. Publish
npm publish --access public

# See PUBLISHING.md for detailed instructions

Project Structure

shadowlens/
├── src/
│   ├── providers/         # OpenAI and Anthropic integrations
│   ├── scanners/          # PII and keyword detection
│   ├── utils/             # Logging, redaction, cost tracking
│   └── index.ts           # Main exports
├── examples/              # Working examples
├── tests/                 # Test suites
└── dist/                  # Compiled output

Running Examples

# Setup
cp examples/env.example .env
# Add your API keys to .env

# Build
npm run build

# Run examples
node dist/examples/openai-basic.js
node dist/examples/pii-detection.js
node dist/examples/cost-tracking.js

See examples/README.md for detailed instructions.

Contributing

Contributions are welcome! Please:

  • Fork the repository
  • Create a feature branch (git checkout -b feature/amazing-feature)
  • Commit your changes (git commit -m 'Add amazing feature')
  • Push to the branch (git push origin feature/amazing-feature)
  • Open a Pull Request

Before submitting:

  • ✅ Run tests: npm test
  • ✅ Run linter: npm run lint (if available)
  • ✅ Update documentation
  • ✅ Add tests for new features

📄 License

MIT License

Copyright (c) 2024 ShadowLens

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

🙏 Acknowledgments

Built with:

📞 Support

For enterprise support and custom integrations:

Built with ❤️ for enterprise AI security

⭐ Star us on GitHub · 📦 View on npm

Keywords: ai

Package last updated on 23 Jan 2026
