
pit-manager
The simplest way to track, version, and optimize your AI prompts
PIT provides dead-simple prompt tracking with automatic versioning, cost analytics, and chain execution tracking. Built for production AI applications that need visibility into prompt performance.
Note: Currently, only Human Behavior workers have access to the Python CLI, which is the recommended version. The TypeScript CLI via `npx pit` is functional but limited.
# Clone the repository
git clone git@github.com:humanbehavior-gh/pit.git
# Navigate to the directory
cd pit
# Install the Python CLI in development mode
pip install -e .
# Verify installation
pit help
# or
pit docs
Create a .env file in your project directory with your API keys:
# LLM Provider Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
# PIT Configuration (for online mode)
PIT_REPO_KEY=your-repo-key
PIT_SUPABASE_URL=your-supabase-url
PIT_SUPABASE_KEY=your-supabase-key
# If pit-manager is already in your package.json
pnpm install
# Otherwise, install it directly
pnpm install pit-manager
# Initialize with online backend (recommended)
pit init --online
# This creates a prompts/ folder in your project
Create a prompt template in the prompts/ folder:
# prompts/assistant.md
---
version: 1.0.0
description: General assistant prompt
---
You are a {{role}} assistant specialized in {{domain}}.
Task: {{task}}
Please be {{tone}} in your response.
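Variables are substituted positionally: the first variable fills the first `{{placeholder}}`, the second fills the next, and so on. The sketch below illustrates that substitution behavior only; `renderPositional` is a hypothetical helper, not PIT's actual implementation.

```typescript
// Hypothetical illustration of positional placeholder filling —
// not PIT's internal code, just the substitution rule sketched out.
function renderPositional(template: string, variables: string[]): string {
  let i = 0;
  // Each {{placeholder}} is replaced by the next variable, in order.
  return template.replace(/\{\{\s*\w+\s*\}\}/g, () =>
    i < variables.length ? variables[i++] : ""
  );
}

const template =
  "You are a {{role}} assistant specialized in {{domain}}.\n" +
  "Task: {{task}}\n" +
  "Please be {{tone}} in your response.";

const rendered = renderPositional(template, [
  "helpful AI",
  "data analysis",
  "analyze sales",
  "concise",
]);
console.log(rendered);
// First line reads: You are a helpful AI assistant specialized in data analysis.
```

Because substitution is by position rather than by name, the order of the array must match the order in which placeholders appear in the template.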
The simplified API lets you track and version your prompts with just a few lines of code.
import { prompts, model } from 'pit-manager';
// Load and render a prompt template
const prompt = prompts("assistant.md", [
"helpful AI", // replaces {{role}}
"data analysis", // replaces {{domain}}
"analyze sales", // replaces {{task}}
"concise" // replaces {{tone}}
]);
// Execute with automatic tracking
const response = await model.complete(
"gpt-4", // model name
prompt, // rendered prompt
"analysis-task" // tag for tracking
);
console.log(response.content);
That's it! Every execution is automatically tracked: the prompt is hashed and versioned, token usage is recorded for cost analytics, and chain relationships are captured.
Create multi-step workflows by passing responses between calls:
// Step 1: Analyze data
const analysis = await model.complete(
"gpt-4",
prompts("analyze.md", [data]),
"analyze"
);
// Step 2: Generate summary (automatically chains!)
const summary = await model.complete(
"claude-3-opus",
analysis, // Pass the previous response
"summarize"
);
// Step 3: Translate (chain continues)
const translation = await model.complete(
"gemini-pro",
summary,
"translate"
);
Get typed responses using native provider capabilities:
// Define your output structure
interface Analysis {
sentiment: 'positive' | 'negative' | 'neutral';
confidence: number;
keywords: string[];
}
// Get structured response
const result = await model.complete(
"gpt-4",
"Analyze: PIT is amazing for tracking prompts!",
"sentiment",
{
schema: {
type: "object",
properties: {
sentiment: { type: "string", enum: ["positive", "negative", "neutral"] },
confidence: { type: "number" },
keywords: { type: "array", items: { type: "string" } }
},
required: ["sentiment", "confidence", "keywords"]
}
}
);
// TypeScript knows the shape!
console.log(result.content.sentiment); // "positive"
console.log(result.content.confidence); // 0.95
console.log(result.content.keywords); // ["PIT", "amazing", "tracking", "prompts"]
Handle images and other media:
// Analyze an image
const imageAnalysis = await model.complete(
"gpt-4-vision",
{
text: "What's in this image?",
images: ["path/to/image.png"]
},
"image-analysis"
);
// Process Base64 encoded images
const base64Analysis = await model.complete(
"claude-3-opus",
{
text: "Describe this chart",
images: [`data:image/png;base64,${base64String}`]
},
"chart-analysis"
);
prompts(template, variables)
Load and render a prompt template:
const prompt = prompts("template.md", ["var1", "var2", "var3"]);
Templates are resolved from the prompts/ folder; variables replace the {{placeholders}} in order.
model.complete(model, prompt, tag, options?)
Execute a model with automatic tracking:
const response = await model.complete(
model: string, // "gpt-4", "claude-3", "gemini-pro", etc.
prompt: string | object, // Prompt text or multimodal content
tag: string, // Tag for tracking and analytics
options?: { // Optional parameters
schema?: object, // JSON schema for structured output
temperature?: number,
maxTokens?: number,
// ... other provider-specific options
}
);
Returns a ModelResponse object:
{
content: string | object, // The response content
model: string, // Model used
promptHash: string, // SHA-256 of prompt
executionId: string, // Unique execution ID
metadata: {
tag: string,
provider: string,
chainId?: string, // Present if part of a chain
tokens: {
prompt: number,
completion: number,
total: number
},
latencyMs: number,
structured: boolean
}
}
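The metadata in this object is what drives the analytics commands below. As a rough illustration of how you might consume it, here is a sketch that estimates cost from the token counts; the `ModelResponse` interface simply mirrors the documented shape, and the per-token rate is an illustrative placeholder, not a real price.

```typescript
// Mirrors the ModelResponse shape documented above.
interface ModelResponse {
  content: string | object;
  model: string;
  promptHash: string;
  executionId: string;
  metadata: {
    tag: string;
    provider: string;
    chainId?: string;
    tokens: { prompt: number; completion: number; total: number };
    latencyMs: number;
    structured: boolean;
  };
}

// Naive flat-rate estimate; real pricing varies per model and per
// token type (prompt vs. completion). The rate here is made up.
function estimateCost(r: ModelResponse, pricePerKTokens = 0.01): number {
  return (r.metadata.tokens.total / 1000) * pricePerKTokens;
}

const example: ModelResponse = {
  content: "ok",
  model: "gpt-4",
  promptHash: "abc123",
  executionId: "exec-1",
  metadata: {
    tag: "analysis-task",
    provider: "openai",
    tokens: { prompt: 800, completion: 200, total: 1000 },
    latencyMs: 1200,
    structured: false,
  },
};

console.log(estimateCost(example)); // 0.01 at the placeholder rate
```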
# View execution history
pit log
# Show execution analytics
pit analytics summary
# Launch interactive dashboard
pit dashboard
# View token usage
pit analytics tokens --days 7
# Track costs by model
pit analytics cost --group-by model
# Start the web dashboard
pit dashboard --web
# Access at http://localhost:3000
import { prompts, model } from 'pit-manager';
async function researchPipeline(topic: string) {
// Step 1: Generate research questions
const questions = await model.complete(
"gpt-4",
prompts("research/questions.md", [topic]),
"generate-questions",
{
schema: {
type: "object",
properties: {
questions: {
type: "array",
items: { type: "string" }
}
}
}
}
);
// Step 2: Research each question (parallel execution)
const research = await Promise.all(
questions.content.questions.map(q =>
model.complete(
"claude-3-opus",
prompts("research/investigate.md", [q]),
"research"
)
)
);
// Step 3: Synthesize findings
const synthesis = await model.complete(
"gpt-4",
research.map(r => r.content).join("\n\n"),
"synthesize"
);
// Step 4: Generate final report
const report = await model.complete(
"gpt-4",
synthesis,
"final-report",
{
schema: {
type: "object",
properties: {
title: { type: "string" },
summary: { type: "string" },
findings: {
type: "array",
items: {
type: "object",
properties: {
finding: { type: "string" },
confidence: { type: "string" },
evidence: { type: "string" }
}
}
},
recommendations: {
type: "array",
items: { type: "string" }
}
}
}
}
);
return report.content;
}
// Run the pipeline
const findings = await researchPipeline("AI safety");
console.log(findings);
# Create a new branch for experimentation
pit branch experiment/new-prompts
pit checkout experiment/new-prompts
# Edit your prompts and test
# ... make changes ...
# Merge back when satisfied
pit checkout main
pit merge experiment/new-prompts
# List all templates
pit templates list
# Show template details
pit templates show assistant.md
# Compare template versions
pit diff prompts/assistant.md HEAD~1
# Analyze costs by tag
pit analytics cost --group-by tag --days 30
# Find expensive prompts
pit analytics expensive --limit 10
# Compare model costs
pit analytics compare gpt-4 claude-3-opus
After initialization, your project will have:
your-project/
├── prompts/ # Your prompt templates
│ ├── assistant.md
│ ├── analyzer.md
│ └── summarizer.md
├── .pit/ # PIT repository (auto-managed)
│ ├── config.json # Repository configuration
│ ├── HEAD # Current branch reference
│ └── objects/ # Content-addressed storage
├── .env # Your API keys
└── package.json # Your project config
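The objects/ directory is content-addressed: an object's identity is derived from a hash of its bytes (the same SHA-256 used for promptHash above), so identical prompts deduplicate to a single entry. The sketch below shows the idea only; PIT's actual on-disk layout may differ, and the two-character directory split is an assumption borrowed from Git-style object stores.

```typescript
import { createHash } from "node:crypto";

// Content addressing: the object ID is the SHA-256 of the content,
// so storing the same prompt twice yields the same path.
function objectId(content: string): string {
  return createHash("sha256").update(content, "utf8").digest("hex");
}

const id = objectId("You are a helpful assistant.");
// Hypothetical Git-style layout: first two hex chars as a subdirectory.
console.log(`.pit/objects/${id.slice(0, 2)}/${id.slice(2)}`);
```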
Required for online mode:
# LLM Providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
# PIT Backend
PIT_REPO_KEY=your-repo-key
PIT_SUPABASE_URL=https://your-project.supabase.co
PIT_SUPABASE_KEY=your-supabase-anon-key
# Run all unit tests
npm test
# Run specific test suite
npm test -- --testPathPattern=storage
npm test -- --testPathPattern=chains
npm test -- --testPathPattern=versioning
The complete end-to-end test validates the entire PIT system including:
# Run the complete end-to-end test
./test-e2e-complete.sh
# The test will:
# 1. Create a temporary test directory
# 2. Initialize a PIT repository
# 3. Test prompt templates and model execution
# 4. Verify chain tracking and storage
# 5. Clean up after completion
For integration testing with real LLM providers:
# Set your API keys first
export OPENAI_API_KEY="your-key"
export ANTHROPIC_API_KEY="your-key"
# Run integration tests
tsx test/integration/test_simplified_api.ts
tsx test/integration/test-typescript-workflow.ts
"pit: command not found"
# Ensure you installed with pip install -e .
# Check your PATH includes Python scripts
echo $PATH | grep -i python
"Cannot find module 'pit-manager'"
# Ensure you ran pnpm install
pnpm install pit-manager
"No prompts folder found"
# Initialize your repository
pit init --online
MIT License - see LICENSE for details.