# ShadowLens

Enterprise AI compliance middleware for Node.js/TypeScript.

ShadowLens is a compliance middleware that automatically detects PII, blocks risky content, redacts sensitive information, tracks costs, and logs every AI interaction for audit trails. It is a drop-in replacement for the OpenAI and Anthropic SDKs.
## ✨ Features

### 🔒 Security & Compliance

- **PII Detection** - Automatically detects 9+ types of personally identifiable information (emails, SSNs, credit cards, phone numbers, etc.)
- **Auto-Redaction** - Redacts sensitive information before it is sent to the AI, with configurable placeholders
- **Keyword Blocking** - Configurable keyword filtering with 30+ default blocked terms
- **Risk Scoring** - Weighted risk assessment on a 0-100 scale
- **Automatic Blocking** - Prevents requests that exceed the risk threshold from reaching AI providers

### 📊 Monitoring & Analytics

- **Cost Tracking** - Automatic token-usage and cost monitoring across providers and models
- **Audit Logging** - Complete audit trail of all AI interactions, sent to your compliance endpoint
- **Real-time Statistics** - Live metrics on requests, costs, PII detections, and blocks
- **Request History** - Detailed history with configurable retention

### 🔌 Provider Support

- **OpenAI Integration** - Full support for GPT-3.5, GPT-4, and all OpenAI models
- **Anthropic Integration** - Complete Claude support, including streaming responses
- **Unified API** - Single interface for both providers with automatic configuration validation
- **Drop-in Replacement** - Mirrors the native SDK interfaces, so existing call sites need no changes

### ⚙️ Configuration

- **Environment-based** - Secure configuration via environment variables
- **Flexible Thresholds** - Configurable risk thresholds per deployment
- **Custom Keywords** - Industry-specific keyword sets (healthcare, finance, enterprise)
- **Feature Flags** - Enable or disable features individually
- **Runtime Updates** - Update configuration without restarting
## 📦 Installation

```bash
npm install shadowlens
```

### Peer Dependencies

```bash
npm install openai @anthropic-ai/sdk axios dotenv
```
## 🚀 Quick Start

### OpenAI Example

```typescript
import { ComplianceLayer } from 'shadowlens';

const client = new ComplianceLayer({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  compliance: {
    apiEndpoint: 'https://api.yourcompany.com/compliance/logs',
    apiKey: process.env.COMPLIANCE_API_KEY,
    riskThreshold: 70,
    autoRedact: true,
  },
});

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});

// Flush pending compliance logs before exit
await client.shutdown();
```
### Anthropic Example

```typescript
import { ComplianceLayer } from 'shadowlens';

const client = new ComplianceLayer({
  provider: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY,
  compliance: {
    apiEndpoint: 'https://api.yourcompany.com/compliance/logs',
    apiKey: process.env.COMPLIANCE_API_KEY,
    riskThreshold: 70,
  },
});

const response = await client.messages.create({
  model: 'claude-3-sonnet-20240229',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello!' }],
});

// Flush pending compliance logs before exit
await client.shutdown();
```
## ⚙️ Configuration

### ComplianceLayer Configuration

| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| `provider` | `'openai' \| 'anthropic'` | ✅ Yes | - | AI provider to use |
| `apiKey` | `string` | ✅ Yes | - | API key for the AI provider |
| `compliance.apiEndpoint` | `string` | ✅ Yes | - | HTTPS URL for compliance logging |
| `compliance.apiKey` | `string` | ✅ Yes | - | API key for the compliance endpoint |
| `compliance.riskThreshold` | `number` | ❌ No | `75` | Risk threshold (0-100) for blocking |
| `compliance.blockedKeywords` | `string[]` | ❌ No | `[]` | Custom blocked keywords |
| `compliance.enablePIIDetection` | `boolean` | ❌ No | `true` | Enable PII detection |
| `compliance.enableKeywordScanning` | `boolean` | ❌ No | `true` | Enable keyword scanning |
| `compliance.autoRedact` | `boolean` | ❌ No | `false` | Automatically redact PII |
| `compliance.enableCostTracking` | `boolean` | ❌ No | `true` | Track API costs |
| `compliance.userId` | `string` | ❌ No | - | User identifier for logging |
### Redactor Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `redactorConfig.placeholders` | `object` | See below | Custom redaction placeholders |
| `redactorConfig.showPartial` | `boolean` | `false` | Show partial PII (e.g., `****@example.com`) |
| `redactorConfig.preserveLength` | `boolean` | `false` | Preserve the original text length with `*` characters |

Default placeholders:

```typescript
{
  email: '[EMAIL_REDACTED]',
  phone: '[PHONE_REDACTED]',
  ssn: '[SSN_REDACTED]',
  creditCard: '[CC_REDACTED]',
  ipAddress: '[IP_REDACTED]'
}
```
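To illustrate what `showPartial`-style redaction might do, here is a minimal sketch for emails only. The regex and masking rule are assumptions for illustration, not the library's exact implementation.

```typescript
// Sketch: partial redaction of email addresses, keeping the domain visible.
// The pattern and mask are illustrative assumptions.
function redactEmailPartial(text: string): string {
  return text.replace(
    /([A-Za-z0-9._%+-]+)@([A-Za-z0-9.-]+\.[A-Za-z]{2,})/g,
    (_match, _local, domain) => `****@${domain}`
  );
}
```

With `showPartial` disabled, a full placeholder such as `[EMAIL_REDACTED]` would be substituted instead.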
### Cost Tracker Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `costTrackerConfig.enableTracking` | `boolean` | `true` | Enable cost tracking |
| `costTrackerConfig.maxHistorySize` | `number` | `1000` | Maximum number of requests to keep in history |
| `costTrackerConfig.customPricing` | `object` | - | Custom pricing per model |
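To show how per-model pricing feeds into a cost estimate, here is a minimal sketch. The `ModelPricing` shape and the rates are hypothetical, not the library's actual `customPricing` format or real provider prices.

```typescript
// Sketch: cost = tokens/1000 * price-per-1K, summed for input and output.
interface ModelPricing { inputPer1K: number; outputPer1K: number; }

const pricing: Record<string, ModelPricing> = {
  'gpt-4': { inputPer1K: 0.03, outputPer1K: 0.06 }, // assumed example rates
};

function estimateCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = pricing[model];
  if (!p) return 0; // unknown models are tracked as zero cost in this sketch
  return (inputTokens / 1000) * p.inputPer1K + (outputTokens / 1000) * p.outputPer1K;
}
```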
## 📚 API Reference

### ComplianceLayer

Main class for interacting with AI providers through the compliance layer.

#### Constructor

```typescript
new ComplianceLayer(config: ComplianceLayerConfig)
```

#### Methods

##### `chat.completions.create()` (OpenAI only)

```typescript
await client.chat.completions.create(
  params: ChatCompletionCreateParamsNonStreaming,
  userId?: string
): Promise<ChatCompletion>
```

Creates a chat completion via OpenAI, with automatic compliance checks.

**Throws:** `ComplianceError` if the risk score exceeds the threshold.

##### `messages.create()` (Anthropic only)

```typescript
await client.messages.create(
  params: MessageCreateParamsNonStreaming | MessageCreateParamsStreaming,
  userId?: string
): Promise<Message | Stream<Message>>
```

Creates a message via Claude, with automatic compliance checks.

**Throws:** `ComplianceError` if the risk score exceeds the threshold.
##### `getLoggerStats()`

```typescript
client.getLoggerStats(): LoggerStats
```

Returns compliance logging statistics:

```typescript
{
  totalEvents: number;
  eventsSent: number;
  eventsPending: number;
  failedSends: number;
}
```

##### `getCostTrackerStats()`

```typescript
client.getCostTrackerStats(): CostStats
```

Returns cost tracking statistics:

```typescript
{
  totalCost: number;
  totalInputTokens: number;
  totalOutputTokens: number;
  totalRequests: number;
  costByProvider: Record<string, number>;
  costByModel: Record<string, number>;
}
```

##### `getComplianceConfig()`

```typescript
client.getComplianceConfig(): ComplianceConfig
```

Returns the current compliance configuration.

##### `updateComplianceConfig()`

```typescript
client.updateComplianceConfig(updates: {
  riskThreshold?: number;
  enablePIIDetection?: boolean;
  enableKeywordScanning?: boolean;
}): void
```

Updates the compliance configuration at runtime.

##### `shutdown()`

```typescript
await client.shutdown(): Promise<void>
```

Gracefully shuts down the client, flushing all pending logs.
### Standalone Utilities

#### `detectPII(text: string): PIIDetectionResult`

Detects PII in text without making any API calls. Returns:

```typescript
{
  hasPII: boolean;
  detectedTypes: string[];
  matches: Array<{
    type: string;
    value: string;
    redacted: string;
    position?: { start: number; end: number; };
  }>;
  riskScore: number;
}
```
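A minimal detector producing this return shape might look like the sketch below, covering only emails and SSNs. The patterns are illustrative assumptions; the per-type scores mirror the risk-scoring table later in this README.

```typescript
// Sketch: regex-based detection for two PII types, with capped risk scoring.
interface PIIMatch { type: string; value: string; redacted: string; }
interface PIIResult {
  hasPII: boolean;
  detectedTypes: string[];
  matches: PIIMatch[];
  riskScore: number;
}

const patterns: Record<string, { re: RegExp; score: number; placeholder: string }> = {
  email: {
    re: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g,
    score: 15,
    placeholder: '[EMAIL_REDACTED]',
  },
  ssn: {
    re: /\b\d{3}-\d{2}-\d{4}\b/g,
    score: 50,
    placeholder: '[SSN_REDACTED]',
  },
};

function detectPIISketch(text: string): PIIResult {
  const matches: PIIMatch[] = [];
  let score = 0;
  for (const [type, { re, score: s, placeholder }] of Object.entries(patterns)) {
    for (const value of text.match(re) ?? []) {
      matches.push({ type, value, redacted: placeholder });
      score += s;
    }
  }
  return {
    hasPII: matches.length > 0,
    detectedTypes: [...new Set(matches.map((m) => m.type))],
    matches,
    riskScore: Math.min(100, score), // capped at 100, matching the scoring table
  };
}
```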
#### `KeywordScanner`

Scans text for blocked keywords:

```typescript
import { KeywordScanner } from 'shadowlens';

const scanner = new KeywordScanner(['confidential', 'secret']);
const result = scanner.scan('This is confidential information');
```

#### `aggregateRisk()`

Combines risk scores from multiple scanners:

```typescript
import { aggregateRisk } from 'shadowlens';

const risk = aggregateRisk(piiResult, keywordResult);
```
### Types

```typescript
export interface ComplianceLayerConfig {
  provider: 'openai' | 'anthropic';
  apiKey: string;
  compliance: {
    apiEndpoint: string;
    apiKey: string;
    riskThreshold?: number;
    blockedKeywords?: string[];
    enablePIIDetection?: boolean;
    enableKeywordScanning?: boolean;
    autoRedact?: boolean;
    enableCostTracking?: boolean;
    userId?: string;
    redactorConfig?: RedactorConfig;
    costTrackerConfig?: CostTrackerConfig;
  };
}

export interface ComplianceEvent {
  timestamp: string;
  provider: 'openai' | 'anthropic' | 'other';
  userId?: string;
  promptText: string;
  responseText?: string;
  riskScore: number;
  riskLevel: string;
  action: 'allowed' | 'blocked' | 'redacted';
  piiDetected: boolean;
  blockedKeywords: string[];
  estimatedCost?: number;
  inputTokens?: number;
  outputTokens?: number;
}

export class ComplianceError extends Error {
  riskScore: number;
  riskLevel: string;
  details: {
    piiScore: number;
    keywordScore: number;
    piiDetected: boolean;
    blockedKeywords: string[];
  };
}
```
## 📖 Examples

### Basic Usage

See the `examples/` folder for fully working examples.

### Common Use Cases
#### Healthcare (HIPAA Compliance)

```typescript
const client = new ComplianceLayer({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  compliance: {
    apiEndpoint: process.env.COMPLIANCE_API_ENDPOINT,
    apiKey: process.env.COMPLIANCE_API_KEY,
    riskThreshold: 60,
    blockedKeywords: [
      'patient name', 'medical record', 'diagnosis',
      'prescription', 'health insurance'
    ],
    enablePIIDetection: true,
    autoRedact: true,
  },
});
```
#### Finance (PCI-DSS Compliance)

```typescript
const client = new ComplianceLayer({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  compliance: {
    apiEndpoint: process.env.COMPLIANCE_API_ENDPOINT,
    apiKey: process.env.COMPLIANCE_API_KEY,
    riskThreshold: 50,
    blockedKeywords: [
      'credit card', 'bank account', 'routing number',
      'CVV', 'PIN', 'account number'
    ],
    enablePIIDetection: true,
    autoRedact: true,
  },
});
```
#### Enterprise (Trade Secret Protection)

```typescript
const client = new ComplianceLayer({
  provider: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY,
  compliance: {
    apiEndpoint: process.env.COMPLIANCE_API_ENDPOINT,
    apiKey: process.env.COMPLIANCE_API_KEY,
    riskThreshold: 70,
    blockedKeywords: [
      'confidential', 'proprietary', 'trade secret',
      'internal only', 'restricted'
    ],
  },
});
```
### Auto-Redaction Example

```typescript
const client = new ComplianceLayer({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  compliance: {
    apiEndpoint: process.env.COMPLIANCE_API_ENDPOINT,
    apiKey: process.env.COMPLIANCE_API_KEY,
    autoRedact: true,
    redactorConfig: {
      showPartial: true,
    },
  },
});

// The email below is redacted (e.g., to ****@example.com) before the
// prompt is sent to the provider
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{
    role: 'user',
    content: 'My email is john@example.com'
  }],
});
```
### Cost Monitoring Example

```typescript
import { OpenAIProvider } from 'shadowlens';

const provider = new OpenAIProvider({
  openaiApiKey: process.env.OPENAI_API_KEY,
  complianceConfig: {
    loggerApiEndpoint: process.env.COMPLIANCE_API_ENDPOINT,
    loggerApiKey: process.env.COMPLIANCE_API_KEY,
    enableCostTracking: true,
  },
});

// Make a few requests (any chat-completion params work here)
await provider.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'First request' }],
});
await provider.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Second request' }],
});

const stats = provider.getCostTrackerStats();
console.log('Total Cost:', stats.totalCost);
console.log('Cost by Model:', stats.costByModel);
console.log('Total Tokens:', stats.totalInputTokens + stats.totalOutputTokens);
```
## 🎯 Risk Scoring

### How Risk Scores Are Calculated

Risk scores range from 0 to 100; higher scores indicate higher risk.

#### PII Risk Scores (60% weight)

| PII Type | Score | Example |
|----------|-------|---------|
| Email | +15 | `john@example.com` |
| Phone | +20 | `555-123-4567`, `(555) 123-4567` |
| SSN | +50 | `123-45-6789` |
| Credit Card | +60 | `4532-1234-5678-9010` |
| IP Address | +10 | `192.168.1.1` |

Maximum PII score: 100 (capped).

#### Keyword Risk Scores (40% weight)

| Condition | Score |
|-----------|-------|
| First keyword match | +20 |
| Each additional occurrence | +5 |
| Maximum | 100 |
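The keyword scoring rule can be sketched as a self-contained function. The constants mirror the table (+20 for the first hit, +5 per additional occurrence, capped at 100); the matching strategy (case-insensitive substring counting) is an assumption for illustration.

```typescript
// Sketch: count keyword occurrences and apply the documented scoring rule.
function keywordScore(text: string, keywords: string[]): number {
  const lower = text.toLowerCase();
  let occurrences = 0;
  for (const kw of keywords) {
    const needle = kw.toLowerCase();
    // count non-overlapping occurrences of each keyword
    let idx = lower.indexOf(needle);
    while (idx !== -1) {
      occurrences++;
      idx = lower.indexOf(needle, idx + needle.length);
    }
  }
  if (occurrences === 0) return 0;
  return Math.min(100, 20 + (occurrences - 1) * 5);
}
```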
#### Overall Risk Calculation

```typescript
overallScore = (piiScore * 0.6) + (keywordScore * 0.4)

// A critical score on either axis escalates to the higher raw score
if (piiScore > 80 || keywordScore > 80) {
  overallScore = Math.max(piiScore, keywordScore)
}
```
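The same calculation as a self-contained function (the function name is illustrative, not the library's internal API):

```typescript
// Sketch of the weighted aggregation, with the >80 escalation rule.
function overallRisk(piiScore: number, keywordScore: number): number {
  if (piiScore > 80 || keywordScore > 80) {
    return Math.max(piiScore, keywordScore);
  }
  return Math.round(piiScore * 0.6 + keywordScore * 0.4);
}
```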
### Risk Levels

| Score | Level | Action | Description |
|-------|-------|--------|-------------|
| 0-30 | Low | ✅ Allow | Safe content, no significant risks |
| 31-60 | Medium | ✅ Allow | Minor concerns, monitored |
| 61-80 | High | ⚠️ Allow/Log | Significant risk, requires review |
| 81-100 | Critical | ❌ Block | Severe risk, automatic blocking |
### When Requests Are Blocked

A request is blocked when the overall risk score exceeds `riskThreshold`:

```typescript
if (overallScore > riskThreshold) {
  throw new ComplianceError('Request blocked: risk exceeds threshold');
}
```

When this happens:

1. A `ComplianceError` is thrown with details:

   ```typescript
   {
     name: 'ComplianceError',
     message: 'Compliance check failed: Risk score 85 exceeds threshold 75',
     riskScore: 85,
     riskLevel: 'critical',
     details: {
       piiScore: 50,
       keywordScore: 20,
       piiDetected: true,
       blockedKeywords: ['confidential']
     }
   }
   ```

2. The request never reaches the AI provider; it is blocked at the compliance layer.
3. The event is logged as `'blocked'` to your compliance endpoint.
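A blocked request can be handled with a `try`/`catch`. In the sketch below, `ComplianceError` is a local stand-in with the documented fields so the example is self-contained; real code would import it from `shadowlens` and catch it around the `create()` call.

```typescript
// Local stand-in for the documented error class.
class ComplianceError extends Error {
  constructor(
    message: string,
    public riskScore: number,
    public riskLevel: string,
  ) {
    super(message);
    this.name = 'ComplianceError';
  }
}

// Mimics the documented blocking check.
function checkRisk(overallScore: number, riskThreshold: number): void {
  if (overallScore > riskThreshold) {
    throw new ComplianceError(
      `Compliance check failed: Risk score ${overallScore} exceeds threshold ${riskThreshold}`,
      overallScore,
      overallScore > 80 ? 'critical' : 'high',
    );
  }
}

function safeCheck(score: number, threshold: number): string {
  try {
    checkRisk(score, threshold);
    return 'allowed';
  } catch (err) {
    if (err instanceof ComplianceError) {
      // Blocked before reaching the provider; log and fall back gracefully
      return `blocked (${err.riskLevel}, score ${err.riskScore})`;
    }
    throw err; // unrelated errors propagate
  }
}
```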
### Adjusting Risk Thresholds

```typescript
riskThreshold: 50  // strict: blocks more requests
riskThreshold: 75  // balanced (the default)
riskThreshold: 90  // permissive: blocks only severe risks
```

Recommended thresholds by industry:

- Healthcare (HIPAA): 60-70
- Finance (PCI-DSS): 50-60
- Enterprise: 70-80
- General use: 75
## 📋 Logging

### What Data Is Logged

Every AI interaction generates a `ComplianceEvent` that is sent to your compliance endpoint:

```typescript
{
  timestamp: "2024-01-15T10:30:00.000Z",
  provider: "openai",
  userId: "user-123",
  promptText: "Hello, how are you?",
  responseText: "I'm doing well...",
  riskScore: 0,
  riskLevel: "low",
  action: "allowed",
  piiDetected: false,
  blockedKeywords: [],
  estimatedCost: 0.0002,
  inputTokens: 10,
  outputTokens: 15,
  metadata: {}
}
```
### Logging Behavior

- **Batched**: Events are batched (default: 10 per batch) for efficiency
- **Retry logic**: Failed sends are retried 3 times with exponential backoff
- **Non-blocking**: Logging failures don't affect AI requests
- **Truncation**: Logged text is truncated to 500 characters to limit data size
- **Graceful shutdown**: `client.shutdown()` flushes all pending logs
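The retry-with-exponential-backoff behavior can be sketched as follows; the attempt count and delay values here are illustrative assumptions, not the library's exact internals.

```typescript
// Sketch: retry a failing send up to maxRetries times, doubling the delay
// each attempt (e.g., 100ms, 200ms, 400ms).
async function sendWithRetry<T>(
  send: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await send();
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError; // all retries exhausted
}
```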
### Privacy Considerations

#### What We Log

- ✅ Truncated text (500 characters max)
- ✅ Risk scores and levels
- ✅ PII detection results (types found, not values)
- ✅ Matched keywords (which keywords, not the full text)
- ✅ Cost and token usage
- ✅ Timestamps and user IDs

#### What We Don't Log

- ❌ Full prompt text (only the first 500 characters)
- ❌ Complete responses (only the first 500 characters)
- ❌ Actual PII values (only the types detected)
- ❌ API keys (never logged)
- ❌ Unredacted sensitive content
### Compliance Endpoint Requirements

Your compliance endpoint should:

- Accept POST requests with a JSON body
- Authenticate requests using the API key header
- Return `200 OK` for successful logging
- Handle batches of up to 100 events
- Be served over HTTPS (required for security)

Example endpoint payload:

```json
{
  "events": [
    {
      "timestamp": "2024-01-15T10:30:00.000Z",
      "provider": "openai",
      "riskScore": 15,
      "action": "allowed",
      ...
    },
    ...
  ]
}
```
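On the receiving side, a payload-validation helper might look like the sketch below. The field checks follow the documented `ComplianceEvent` shape and the 100-event batch limit; your endpoint's actual rules may differ.

```typescript
// Sketch: validate an incoming batch before persisting it.
interface IncomingEvent {
  timestamp: string;
  provider: string;
  riskScore: number;
  action: string;
}

function validateBatch(body: unknown): body is { events: IncomingEvent[] } {
  if (typeof body !== 'object' || body === null) return false;
  const events = (body as { events?: unknown }).events;
  // reject empty batches and batches over the documented 100-event limit
  if (!Array.isArray(events) || events.length === 0 || events.length > 100) return false;
  return events.every(
    (e) =>
      typeof e === 'object' && e !== null &&
      typeof (e as IncomingEvent).timestamp === 'string' &&
      typeof (e as IncomingEvent).riskScore === 'number' &&
      ['allowed', 'blocked', 'redacted'].includes((e as IncomingEvent).action),
  );
}
```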
## 🛠️ Development

### Running Tests

```bash
npm test                          # run all tests
npm test -- pii-detector.test.ts  # run a single test file
npm run test:coverage             # run with coverage
npm run test:watch                # watch mode
```

### Building

```bash
npm run build
npm run dev
```

### Publishing to npm

```bash
npm run build
npm test
npm publish --access public
```
### Project Structure

```
shadowlens/
├── src/
│   ├── providers/   # OpenAI and Anthropic integrations
│   ├── scanners/    # PII and keyword detection
│   ├── utils/       # Logging, redaction, cost tracking
│   └── index.ts     # Main exports
├── examples/        # Working examples
├── tests/           # Test suites
└── dist/            # Compiled output
```
### Running Examples

```bash
cp examples/env.example .env
npm run build
node dist/examples/openai-basic.js
node dist/examples/pii-detection.js
node dist/examples/cost-tracking.js
```

See `examples/README.md` for detailed instructions.
### Contributing

Contributions are welcome! Please:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

Before submitting:

- ✅ Run tests: `npm test`
- ✅ Run the linter: `npm run lint` (if available)
- ✅ Update documentation
- ✅ Add tests for new features
## 📄 License

MIT License
Copyright (c) 2024 ShadowLens
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
## 🔗 Links

## 🙏 Acknowledgments

Built with:

## 📞 Support

For enterprise support and custom integrations: