
The "Git" for AI Behavior.
Snapshot, version, and diff AI model outputs. Detect drift before your users do.
You updated a prompt. Tests pass. You deploy. Three days later, users complain the bot is "acting weird."
The problem: Traditional tests don't catch AI behavior drift—subtle changes in tone, verbosity, or consistency that emerge over time or after model updates.
SafeStar fixes this by treating AI outputs like code:
No SaaS. No external dependencies. Works with any CLI command.
```bash
npm install --save-dev safestar
```
Create `scenarios/refund.yaml`:

```yaml
name: refund_bot_test
description: Ensure the refund bot doesn't hallucinate or get rude.
prompt: "I want a refund immediately."

# Run your AI however you want—Python, Node, curl, anything
exec: "python3 scripts/my_agent.py"

# Test multiple times to catch variance
runs: 5

# Heuristic guardrails
checks:
  max_length: 200
  must_contain:
    - "refund"
  must_not_contain:
    - "I am just an AI"
```
Note: SafeStar passes the prompt via `process.env.PROMPT` (or the equivalent environment variable in your language).
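For illustration, a hypothetical `scripts/my_agent.py` could read the prompt from that environment variable and print its reply to stdout. The reply logic here is a stand-in for a real model call, not SafeStar's API:

```python
import os

def reply(prompt: str) -> str:
    # Stand-in for a real model call; swap in your LLM client here.
    if "refund" in prompt.lower():
        return "I can help with that refund. Could you share your order number?"
    return "How can I help you today?"

if __name__ == "__main__":
    # SafeStar passes the scenario prompt via the PROMPT env var.
    prompt = os.environ.get("PROMPT", "")
    # SafeStar captures whatever the command prints to stdout.
    print(reply(prompt))
```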
Run your scenario:
```bash
npx safestar run scenarios/refund.yaml
```
Happy with the output? Lock it as your gold standard:

```bash
npx safestar baseline refund_bot_test
```

Later, compare new runs against that baseline:

```bash
npx safestar diff scenarios/refund.yaml
```
Example output:

```
--- SAFESTAR REPORT ---
Status: FAIL
Metrics:
  Avg Length: 45 chars
  Drift: +210% vs baseline (WARNING)
  Variance: 9.8 (High instability)
Violations:
  - must_not_contain "sorry sorry": failed in 2 runs
```
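The drift and variance figures above can be understood as simple arithmetic over output lengths across runs. This is a sketch of that arithmetic under my own reading of the report, not SafeStar's actual implementation:

```python
from statistics import mean, stdev

def length_metrics(outputs: list[str], baseline_avg: float) -> dict:
    """Summarize length stability across runs against a locked baseline average."""
    lengths = [len(o) for o in outputs]
    avg = mean(lengths)
    return {
        "avg_length": avg,
        # Percent change in average length versus the baseline run.
        "drift_pct": (avg - baseline_avg) / baseline_avg * 100,
        # Spread across runs; a high value suggests unstable behavior.
        "variance": stdev(lengths) if len(lengths) > 1 else 0.0,
    }
```

For example, five runs averaging 62 characters against a 20-character baseline yield a drift of +210%.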
| Check | Description |
|---|---|
| `max_length` | Fail if output exceeds N characters |
| `must_contain` | Fail if any listed string is missing from the output |
| `must_not_contain` | Fail if any listed string is found in the output |
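The three checks amount to simple string predicates applied to each run's output. A sketch of how they could be evaluated (my own illustration, not SafeStar's source):

```python
def run_checks(output: str, checks: dict) -> list[str]:
    """Return human-readable violations for one run's output."""
    violations = []
    # max_length: fail if the output exceeds N characters.
    if "max_length" in checks and len(output) > checks["max_length"]:
        violations.append(f"max_length: output is {len(output)} chars")
    # must_contain: fail if any listed string is missing.
    for needle in checks.get("must_contain", []):
        if needle not in output:
            violations.append(f'must_contain "{needle}": missing')
    # must_not_contain: fail if any listed string is present.
    for needle in checks.get("must_not_contain", []):
        if needle in output:
            violations.append(f'must_not_contain "{needle}": found')
    return violations
```

Running this against the scenario above, a short on-topic reply passes all three checks while an "I am just an AI" deflection fails two.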
`exec` Examples

SafeStar works with anything that prints to stdout:
```yaml
# Python
exec: "python3 bot.py"

# Node.js
exec: "node agent.js"

# cURL (test an API directly)
exec: "curl -s https://api.openai.com/v1/chat/completions -H 'Authorization: Bearer $OPENAI_KEY' -d '{\"model\":\"gpt-4\",\"messages\":[{\"role\":\"user\",\"content\":\"$PROMPT\"}]}'"

# Any CLI
exec: "./my-binary --prompt \"$PROMPT\""
```
To catch drift before it ships, run the diff in CI with a GitHub Actions workflow (e.g. `.github/workflows/guardrails.yml`):

```yaml
name: AI Guardrails
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx safestar diff scenarios/refund.yaml
```
Baselines are plain `.json` files you commit.

License: ISC