@flvmnt/pgfence

Postgres migration safety CLI: lock mode analysis, risk scoring, and safe rewrite recipes

npm | latest version 0.4.1 | 16 published versions | 1 maintainer
pgfence

Postgres migration safety CLI. Know your lock modes, risk levels, and safe rewrite recipes before you merge.


The Problem

Your ORM migration just took down production for 47 seconds.

A seemingly innocent ALTER TABLE users ADD COLUMN email_verified BOOLEAN NOT NULL DEFAULT false grabbed an ACCESS EXCLUSIVE lock on your 12M-row users table. Every query queued behind it. Your healthchecks failed. Pods restarted. Customers noticed.

This happens because ORMs hide the Postgres lock semantics from you. You can't fix what you can't see.

What pgfence Does

pgfence analyzes your SQL migration files before they hit production and tells you:

  • What lock mode each DDL statement acquires and what it blocks (reads, writes, or both)
  • Risk level for each operation, optionally adjusted by actual table size from your database
  • Safe rewrite recipes, the exact expand/contract sequence to run instead

Works with raw SQL, TypeORM, Prisma, Knex, Drizzle, and Sequelize migrations. No Ruby, no Rust, no Go. Just TypeScript.
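The lock/blocking relationships pgfence reports can be summarized in a small lookup. This is a simplified sketch based on the Postgres lock modes mentioned in this README, not pgfence's internal data structure:

```typescript
// Simplified sketch: which operations each table-level lock mode blocks.
// The real Postgres conflict matrix is larger; this covers the modes
// that appear in pgfence's reports.
type Blocks = "reads+writes" | "writes" | "ddl-only";

const lockBlocks: Record<string, Blocks> = {
  "ACCESS EXCLUSIVE": "reads+writes",  // blocks SELECT and all DML
  "EXCLUSIVE": "writes",               // blocks DML, allows plain SELECT
  "SHARE ROW EXCLUSIVE": "writes",     // blocks DML and most DDL
  "SHARE": "writes",                   // blocks INSERT/UPDATE/DELETE
  "SHARE UPDATE EXCLUSIVE": "ddl-only", // blocks VACUUM/DDL, not reads or writes
};
```

This is why the demo report shows `ACCESS EXCLUSIVE` as blocking "R + W" and `SHARE` (plain `CREATE INDEX`) as blocking only "W".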

Quick Demo

$ pgfence analyze migrations/add-email-verified.sql

pgfence - Migration Safety Report

┌─────────────────────────────────────────────────┬──────────────────┬──────────┬────────┐
│ Statement                                       │ Lock Mode        │ Blocks   │ Risk   │
├─────────────────────────────────────────────────┼──────────────────┼──────────┼────────┤
│ ALTER TABLE users ADD COLUMN email_verified     │ ACCESS EXCLUSIVE │ R + W    │ HIGH   │
│ BOOLEAN NOT NULL DEFAULT false                  │                  │          │        │
├─────────────────────────────────────────────────┼──────────────────┼──────────┼────────┤
│ CREATE INDEX idx_users_email ON users(email)    │ SHARE            │ W        │ MEDIUM │
└─────────────────────────────────────────────────┴──────────────────┴──────────┴────────┘

Policy Violations:
  ✗ Missing SET lock_timeout: add SET lock_timeout = '2s' at the start

Safe Rewrites:
  1. ADD COLUMN with NOT NULL + DEFAULT → split into expand/backfill/contract:
     • ALTER TABLE users ADD COLUMN IF NOT EXISTS email_verified BOOLEAN;
     • Backfill in batches: WITH batch AS (SELECT ctid FROM users WHERE email_verified IS NULL LIMIT 1000 FOR UPDATE SKIP LOCKED) UPDATE users t SET email_verified = <fill_value> FROM batch WHERE t.ctid = batch.ctid;
     • ALTER TABLE users ADD CONSTRAINT ... CHECK (email_verified IS NOT NULL) NOT VALID;
     • ALTER TABLE users VALIDATE CONSTRAINT ...;

  2. CREATE INDEX → use CONCURRENTLY:
     • CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email ON users(email);

=== Coverage ===
Analyzed: 2 statements  |  Unanalyzable: 0  |  Coverage: 100%

Postgres Version Support

pgfence is tested against PostgreSQL 14 through 17. The default assumption is PG 14+ (the oldest version still supported by the PostgreSQL project). Use --min-pg-version to override if needed:

pgfence analyze --min-pg-version 12 migrations/*.sql

Version-sensitive behavior:

  • ADD COLUMN ... DEFAULT <constant> is instant (metadata-only) on PG 11+
  • ALTER TYPE ADD VALUE can run inside a transaction block on PG 12+ (it is fast on all versions, but pre-12 it was disallowed in transactions)
  • REINDEX CONCURRENTLY available on PG 12+
  • RENAME COLUMN is instant on PG 14+
  • DETACH PARTITION CONCURRENTLY available on PG 14+

Known Limitations

pgfence performs static analysis. The following are not supported:

  • Dynamic SQL: template literals, string concatenation, runtime-computed table or column names
  • PL/pgSQL and stored procedures: DDL inside DO $$ ... $$ blocks is not analyzed
  • DDL inside functions: CREATE FUNCTION bodies are not parsed for migration safety
  • Non-migration SQL: arbitrary application queries are out of scope; pgfence targets migration DDL

When dynamic SQL is detected (TypeORM/Knex extractors), pgfence emits a warning rather than silently skipping it. Every report includes a coverage line showing how many statements were analyzed vs. skipped.

To explicitly acknowledge a statement pgfence cannot analyze, add -- pgfence-ignore before it; see Suppressing warnings.
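The dynamic-SQL warning described above could be driven by a heuristic like the following. This is an illustrative sketch, not pgfence's actual detector; the regexes are assumptions:

```typescript
// Illustrative heuristic: flag migration source that builds SQL dynamically,
// since static analysis cannot see the final statement text.
// NOT pgfence's actual implementation.
function looksDynamic(source: string): boolean {
  return (
    /\$\{[^}]*\}/.test(source) ||   // template-literal interpolation
    /['"`]\s*\+\s*\w/.test(source)  // string concatenation into SQL
  );
}
```

A file that trips this check would count against the coverage line rather than being silently skipped.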

Alternatives

Other tools in this space worth knowing about:

| Tool | Language | Focus |
| --- | --- | --- |
| Squawk | Rust | SQL linter with GitHub Action |
| Eugene | Rust | DDL lint + trace modes |
| strong_migrations | Ruby | Rails/ActiveRecord migration checks |

Squawk and Eugene analyze raw SQL only, and strong_migrations is limited to Rails. pgfence is the only migration linter that can analyze JavaScript/TypeScript ORM migration files (TypeORM, Prisma, Knex, Drizzle, Sequelize), with full AST-walking transpilers that convert builder patterns into analyzable SQL. It also provides DB-size-aware risk scoring and complete expand/contract rewrite recipes.

pgfence's safety rules have been adopted by postgres-language-server (5,100+ stars, Supabase community), which ported 18 rules with explicit source attribution.

VS Code Extension

Get real-time migration safety analysis directly in your editor:

  • Inline diagnostics: lock modes, risk levels, and policy violations as you type
  • Quick fixes: one-click safe rewrite replacements
  • Hover info: lock mode, blocked operations, and safe alternatives

Install from the VS Code Marketplace or search "pgfence" in the Extensions panel. Requires @flvmnt/pgfence installed in your project or globally. See the extension docs for configuration and commands.

Installation

npm install -g @flvmnt/pgfence

Or with pnpm:

pnpm add -g @flvmnt/pgfence

Usage

Install pre-commit or pre-push hook

Prevent footguns locally before committing or pushing by installing a git hook.

To install a pre-commit hook:

pgfence init

(Automatically detects .husky/ or .git/hooks/ and creates a pre-commit hook.)

If you prefer to run checks only when pushing to remote, simply rename the generated file:

# Standard Git
mv .git/hooks/pre-commit .git/hooks/pre-push

# Husky
mv .husky/pre-commit .husky/pre-push

Analyze SQL migrations

pgfence analyze migrations/*.sql

Analyze TypeORM migrations

pgfence analyze --format typeorm src/migrations/*.ts

Analyze Prisma migrations

pgfence analyze --format prisma prisma/migrations/**/migration.sql

Analyze Knex migrations

pgfence analyze --format knex migrations/*.ts

Auto-detect format

pgfence analyze migrations/*  # detects format from file content
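Content-based detection might look like the sketch below. The specific heuristics are assumptions for illustration, not pgfence's actual rules:

```typescript
// Illustrative sketch of format auto-detection from file content.
// The patterns here are guesses, not pgfence's real detection logic.
type Format = "typeorm" | "knex" | "sql";

function detectFormat(source: string): Format {
  if (/implements\s+MigrationInterface/.test(source)) return "typeorm";
  if (/exports\.up\s*=|knex\.schema/.test(source)) return "knex";
  return "sql"; // fall back to treating the file as raw SQL
}
```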

DB-size-aware risk scoring

You can provide table stats in two ways:

  • Live connection: pgfence connects to your database and queries pg_stat_user_tables:
pgfence analyze --db-url postgres://readonly@replica:5432/mydb migrations/*.sql
  • Stats snapshot file: use a pre-generated JSON file (e.g. from your CI) so pgfence never needs DB credentials:
pgfence analyze --stats-file pgfence-stats.json migrations/*.sql

If both --db-url and --stats-file are provided, --db-url is used and the stats file is ignored.
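A stats snapshot might look like the following. The field names here are an assumption for illustration; generate the real file with the script pgfence provides:

```typescript
// Hypothetical shape of a pgfence-stats.json snapshot. The field names are
// assumed for illustration and may differ from pgfence's actual schema.
interface TableStats {
  table: string;    // schema-qualified table name
  rowCount: number; // e.g. n_live_tup from pg_stat_user_tables
}

const snapshot: TableStats[] = [
  { table: "public.users", rowCount: 12_000_000 },
  { table: "public.sessions", rowCount: 48_000 },
];
```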

When stats are available (from either source), pgfence adjusts risk levels as follows:

| Table Size | Risk Adjustment |
| --- | --- |
| < 10K rows | No change |
| 10K – 1M rows | +1 level |
| 1M – 10M rows | +2 levels |
| > 10M rows | CRITICAL |
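The adjustment can be sketched as a small function (an assumed implementation; the thresholds come from the table above, and boundary handling is a guess):

```typescript
// Sketch of size-based risk escalation. Thresholds match the README table;
// exact boundary behavior is an assumption.
const LEVELS = ["LOW", "MEDIUM", "HIGH", "CRITICAL"] as const;
type Risk = (typeof LEVELS)[number];

function adjustRisk(base: Risk, rows: number): Risk {
  if (rows > 10_000_000) return "CRITICAL"; // huge tables: always critical
  const bump = rows >= 1_000_000 ? 2 : rows >= 10_000 ? 1 : 0;
  return LEVELS[Math.min(LEVELS.indexOf(base) + bump, LEVELS.length - 1)];
}
```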

Output formats

# Terminal table (default)
pgfence analyze migrations/*.sql

# Machine-readable JSON
pgfence analyze --output json migrations/*.sql

# GitHub PR comment markdown
pgfence analyze --output github migrations/*.sql

CI mode

# Exit 1 if any check exceeds MEDIUM risk
pgfence analyze --ci --max-risk medium migrations/*.sql

Suppressing warnings

Add an inline comment immediately before a statement to suppress checks for it:

-- pgfence-ignore
DROP TABLE old_sessions;  -- all checks suppressed for this statement

-- pgfence-ignore: drop-table
DROP TABLE old_logs;  -- only the drop-table check suppressed; others still fire

-- pgfence-ignore: drop-table, prefer-robust-drop-table
DROP TABLE old_queue;  -- multiple rules suppressed, comma-separated

The directive applies to the single statement immediately following the comment.

| Syntax | Effect |
| --- | --- |
| `-- pgfence-ignore` | Suppress all checks for the next statement |
| `-- pgfence-ignore: <ruleId>` | Suppress one specific rule |
| `-- pgfence-ignore: <ruleId>, <ruleId>` | Suppress multiple specific rules |
| `-- pgfence: ignore <ruleId>` | Legacy syntax, still supported |

Use --output json to see ruleId values for any check you want to suppress.
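A parser for these directives could be approximated as follows (an illustration of the documented grammar, not pgfence's actual code; the legacy `-- pgfence: ignore` form is omitted for brevity):

```typescript
// Approximate parser for pgfence-ignore comment directives.
// Returns null (not a directive), "all", or the list of suppressed rule IDs.
function parseIgnore(comment: string): null | "all" | string[] {
  const m = comment.match(/^--\s*pgfence-ignore(?::\s*(.+))?\s*$/);
  if (!m) return null;
  if (!m[1]) return "all";                       // bare form: suppress everything
  return m[1].split(",").map((id) => id.trim()); // comma-separated rule IDs
}
```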

What It Catches

pgfence checks 42 DDL patterns against Postgres's lock mode semantics:

Lock & Safety Checks

| # | Pattern | Lock Mode | Risk | Safe Alternative |
| --- | --- | --- | --- | --- |
| 1 | `ADD COLUMN ... NOT NULL` (no DEFAULT) | ACCESS EXCLUSIVE | HIGH | Add nullable, backfill, SET NOT NULL |
| 2 | `ADD COLUMN ... DEFAULT <volatile>` | ACCESS EXCLUSIVE | HIGH | Add without default, backfill in batches |
| 3 | `ADD COLUMN ... DEFAULT <constant>` | ACCESS EXCLUSIVE (instant) | LOW | Safe on PG 11+ (metadata-only) |
| 4 | `ADD COLUMN ... GENERATED STORED` | ACCESS EXCLUSIVE | HIGH | Add regular column + trigger + backfill |
| 5 | `CREATE INDEX` (non-concurrent) | SHARE | MEDIUM | `CREATE INDEX CONCURRENTLY` |
| 6 | `DROP INDEX` (non-concurrent) | ACCESS EXCLUSIVE | MEDIUM | `DROP INDEX CONCURRENTLY` |
| 7 | `ALTER COLUMN TYPE` (text/varchar widening) | ACCESS EXCLUSIVE | LOW | Metadata-only, no table rewrite |
|   | `ALTER COLUMN TYPE varchar(N)` | ACCESS EXCLUSIVE | MEDIUM | Safe if widening; verify with schema |
|   | `ALTER COLUMN TYPE` (cross-family) | ACCESS EXCLUSIVE | HIGH | Expand/contract pattern |
| 8 | `ALTER COLUMN SET NOT NULL` | ACCESS EXCLUSIVE | MEDIUM | CHECK constraint NOT VALID + validate |
| 9 | `ADD CONSTRAINT ... FOREIGN KEY` | SHARE ROW EXCLUSIVE | HIGH | NOT VALID + VALIDATE CONSTRAINT |
| 10 | `ADD CONSTRAINT ... CHECK` | SHARE ROW EXCLUSIVE | MEDIUM | NOT VALID + VALIDATE CONSTRAINT |
| 11 | `ADD CONSTRAINT ... UNIQUE` | SHARE ROW EXCLUSIVE | HIGH | CONCURRENTLY unique index + USING INDEX |
|    | `ADD CONSTRAINT ... UNIQUE USING INDEX` | SHARE UPDATE EXCLUSIVE | LOW | Instant, attaches pre-built index |
| 12 | `ADD CONSTRAINT ... EXCLUDE` | SHARE ROW EXCLUSIVE | HIGH | No concurrent alternative; use lock_timeout |
| 13 | `DROP TABLE` | ACCESS EXCLUSIVE | CRITICAL | Separate release |
| 14 | `DROP COLUMN` | ACCESS EXCLUSIVE | HIGH | Remove app references first, then drop |
| 15 | `TRUNCATE` | ACCESS EXCLUSIVE | CRITICAL | Batched DELETE |
| 16 | `TRUNCATE ... CASCADE` | ACCESS EXCLUSIVE | CRITICAL | Explicit per-table truncate or batched DELETE |
| 17 | `RENAME COLUMN` | ACCESS EXCLUSIVE | LOW | Instant on PG 14+ |
| 18 | `RENAME TABLE` | ACCESS EXCLUSIVE | HIGH | Rename + create view for backwards compat |
| 19 | `VACUUM FULL` | ACCESS EXCLUSIVE | HIGH | Use pg_repack |
| 20 | `ALTER TYPE ... ADD VALUE` (PG < 12) | ACCESS EXCLUSIVE | MEDIUM | Upgrade to PG 12+ to allow enum adds inside transactions |
|    | `ALTER TYPE ... ADD VALUE` (PG 12+) | EXCLUSIVE (instant) | LOW | Safe; new value usable only after the transaction commits |
| 21 | `ATTACH PARTITION` (PG < 12) | ACCESS EXCLUSIVE | HIGH | Create matching CHECK constraint first |
|    | `ATTACH PARTITION` (PG 12+) | SHARE UPDATE EXCLUSIVE | MEDIUM | Briefly locks parent; CHECK constraint helps |
| 22 | `DETACH PARTITION` (non-concurrent) | ACCESS EXCLUSIVE | HIGH | `DETACH PARTITION CONCURRENTLY` (PG 14+) |
| 23 | `REFRESH MATERIALIZED VIEW` | ACCESS EXCLUSIVE | HIGH | `REFRESH MATERIALIZED VIEW CONCURRENTLY` |
|    | `REFRESH MATERIALIZED VIEW CONCURRENTLY` | EXCLUSIVE | MEDIUM | Blocks writes; requires unique index |
| 24a | `REINDEX TABLE` (non-concurrent) | SHARE | HIGH | `REINDEX TABLE CONCURRENTLY` (PG 12+) |
| 24b | `REINDEX INDEX` (non-concurrent) | ACCESS EXCLUSIVE | HIGH | `REINDEX INDEX CONCURRENTLY` (PG 12+) |
| 24c | `REINDEX SCHEMA/DATABASE` (non-concurrent) | ACCESS EXCLUSIVE | CRITICAL | `REINDEX CONCURRENTLY` (PG 12+) |
| 25 | `CREATE TRIGGER` | SHARE ROW EXCLUSIVE | MEDIUM | Use lock_timeout to bound lock wait |
| 26 | `DROP TRIGGER` | ACCESS EXCLUSIVE | MEDIUM | Use lock_timeout to bound lock wait |
| 27 | `ENABLE/DISABLE TRIGGER` | SHARE ROW EXCLUSIVE | LOW | Blocks concurrent DDL only |
| 28 | `SET LOGGED/UNLOGGED` | ACCESS EXCLUSIVE | HIGH | Full table rewrite; no non-blocking alternative |

Data Type Best Practices

| # | Pattern | Risk | Suggestion |
| --- | --- | --- | --- |
| 29 | `ADD COLUMN ... json` | LOW | Use jsonb; json has no equality operator |
| 30 | `ADD COLUMN ... serial` | MEDIUM | Use GENERATED ALWAYS AS IDENTITY |
| 31 | `integer` / `int` columns | LOW | Use bigint to avoid future overflow + rewrite |
| 32 | `varchar(N)` columns | LOW | Use text; changing varchar length requires ACCESS EXCLUSIVE |
| 33 | `timestamp without time zone` | LOW | Use timestamptz to avoid timezone bugs |
| 34 | `char(N)` / `character(N)` columns | LOW | Use text; char pads with spaces and length changes require rewrite |
| 35 | `serial` / `bigserial` / `smallserial` | LOW | Use IDENTITY columns; cleaner semantics |

Destructive & Domain Checks

| # | Pattern | Lock Mode | Risk | Safe Alternative |
| --- | --- | --- | --- | --- |
| 36 | `DROP DATABASE` | ACCESS EXCLUSIVE | CRITICAL | Irreversible; requires separate process |
| 37 | `ALTER DOMAIN ADD CONSTRAINT` | SHARE | HIGH | Validates against all tables using domain |
| 38 | `CREATE DOMAIN` with constraint | ACCESS SHARE | LOW | Use table-level CHECK constraints instead |

Transaction & Policy Checks

| # | Pattern | Severity |
| --- | --- | --- |
| 39 | NOT VALID + VALIDATE CONSTRAINT in same transaction | error |
| 40 | Multiple ACCESS EXCLUSIVE statements compounding | warning |
| 41 | CREATE INDEX CONCURRENTLY inside transaction | error |
| 42 | Bulk UPDATE without WHERE in migration | warning |

Policy Checks

Beyond DDL analysis, pgfence enforces operational best practices:

  • Missing SET lock_timeout: prevents lock queue death spirals
  • Missing SET statement_timeout: safety net for long operations
  • Missing SET application_name: enables pg_stat_activity visibility
  • Missing SET idle_in_transaction_session_timeout: prevents orphaned locks
  • CREATE INDEX CONCURRENTLY inside transaction: will fail at runtime
  • NOT VALID + VALIDATE in same transaction: defeats the purpose of NOT VALID
  • Multiple ACCESS EXCLUSIVE statements: compounding lock duration
  • Bulk UPDATE without WHERE: should run out-of-band in batches
  • Inline ignore: -- pgfence-ignore: <ruleId> to suppress specific checks
  • Visibility logic: skips warnings for tables created in the same migration
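A policy check like the first bullet could be approximated as follows. This is a rough illustration; the real pgfence presumably works on a parsed statement list rather than a regex:

```typescript
// Rough approximation of the "missing SET lock_timeout" policy check.
// A regex is enough to show the idea; not pgfence's actual implementation.
function hasLockTimeout(migrationSql: string): boolean {
  return /^\s*SET\s+(LOCAL\s+)?lock_timeout\b/im.test(migrationSql);
}
```

A migration that starts with `SET lock_timeout = '2s';` passes; one that goes straight into DDL triggers the policy violation shown in the Quick Demo.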

Safe Rewrite Recipes

When pgfence detects a dangerous pattern, it outputs the exact safe alternative:

ADD COLUMN with NOT NULL + DEFAULT

Dangerous:

ALTER TABLE users ADD COLUMN email_verified BOOLEAN NOT NULL DEFAULT false;
-- ACCESS EXCLUSIVE lock on entire table for duration of rewrite

Safe (expand/contract):

-- Migration 1: Add nullable column (instant, no lock)
ALTER TABLE users ADD COLUMN IF NOT EXISTS email_verified BOOLEAN;

-- Migration 2: Create index (non-blocking)
CREATE INDEX CONCURRENTLY idx_users_email_verified ON users(email_verified);

-- Out-of-band backfill job (not in migration, repeat until 0 rows updated):
-- WITH batch AS (
--   SELECT ctid FROM users WHERE email_verified IS NULL LIMIT 1000 FOR UPDATE SKIP LOCKED
-- )
-- UPDATE users t SET email_verified = false FROM batch WHERE t.ctid = batch.ctid;

-- Migration 3: Add NOT NULL constraint
ALTER TABLE users ADD CONSTRAINT chk_email_verified CHECK (email_verified IS NOT NULL) NOT VALID;
ALTER TABLE users VALIDATE CONSTRAINT chk_email_verified;
ALTER TABLE users ALTER COLUMN email_verified SET NOT NULL;
ALTER TABLE users DROP CONSTRAINT chk_email_verified;
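The out-of-band backfill is a loop: run one batch, stop when a batch updates zero rows. A sketch of that driver, where `runBatch` stands in for executing the `WITH batch ... UPDATE` statement (e.g. via node-postgres) and returning the affected row count:

```typescript
// Driver loop for the batched backfill: repeat until a batch updates 0 rows.
// `runBatch` is a stand-in for executing one 1000-row UPDATE batch against
// the database and returning its row count.
async function backfill(runBatch: () => Promise<number>): Promise<number> {
  let total = 0;
  for (;;) {
    const updated = await runBatch(); // one FOR UPDATE SKIP LOCKED batch
    total += updated;
    if (updated === 0) return total;  // no rows left to fill
  }
}
```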

CI/CD Integration

GitHub Actions

- name: Check migration safety
  uses: flvmnt/pgfence@v1
  with:
    path: migrations/*.sql
    max-risk: medium

GitHub PR Comments

- name: Analyze migrations
  run: |
    npx pgfence analyze --output github migrations/*.sql > pgfence-report.md
- name: Comment on PR
  uses: marocchino/sticky-pull-request-comment@v2
  with:
    path: pgfence-report.md

GitHub Code Scanning (SARIF)

Upload pgfence findings to GitHub Code Scanning for inline PR annotations:

- name: Analyze migrations
  run: npx @flvmnt/pgfence analyze --output sarif migrations/*.sql > pgfence.sarif
- name: Upload to GitHub Code Scanning
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: pgfence.sarif

Trace Mode (Verified Analysis)

Run migrations against a real Postgres instance to verify pgfence's static analysis:

pgfence trace migrations/*.sql

Trace mode spins up a disposable Docker Postgres container, executes each statement, and compares actual lock behavior against pgfence's predictions. No credentials needed, no risk to real data.

# Specific PG version
pgfence trace --pg-version 14 migrations/*.sql

# Custom Docker image (for PostGIS, pgvector, etc.)
pgfence trace --docker-image postgis/postgis:17 migrations/*.sql

# CI mode (also fails on mismatches between static and traced locks)
pgfence trace --ci --max-risk medium migrations/*.sql

Each statement gets a verification status:

  • Confirmed: static prediction matches actual Postgres behavior
  • Mismatch: static prediction was wrong (trace result takes precedence)
  • Trace-only: trace found something static analysis missed (e.g., table rewrite)
  • Static-only: policy/best-practice check that trace cannot verify

Requires Docker. Use pgfence analyze for static-only analysis without Docker.

pgfence Cloud (Coming Soon)

Upgrade to pgfence Cloud for team-grade migration safety:

  • Approval workflows: require sign-off on HIGH+ risk migrations before merge
  • Exemptions with justification + expiry: bypass a warning with a recorded reason and expiration date
  • Centralized policies: enforce org-wide rules (e.g., "block all CRITICAL risk") that individual developers cannot override
  • SOC2 audit logging: immutable log of every analysis, approval, and bypass
  • Schema drift detection: compare your migrations against production schema
  • Migration history: track every analyzed migration across your org

pgfence Cloud never asks for database credentials. DB-size-aware scoring uses a stats snapshot: your CI runs a provided script against your read replica, outputs a JSON file, and pgfence consumes it locally.

Learn more at pgfence.com.

All cloud features are additive. The source-available CLI works exactly the same without an API key.

Plugins

pgfence supports custom rules via a plugin system. Create a module that exports rule or policy functions, then reference it in your config:

pgfence analyze --plugin ./my-rules.js migrations/*.sql

Plugin rule IDs are namespaced with plugin: to avoid collisions with built-in checks.
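A plugin module might look like the sketch below. The exported shape, field names, and rule signature are assumptions for illustration; check pgfence's plugin documentation for the real interface:

```typescript
// Hypothetical pgfence plugin module. The Finding shape and the
// per-statement rule signature are ASSUMED, not pgfence's documented API.
interface Finding {
  ruleId: string; // would surface as "plugin:no-money-float"
  message: string;
  severity: "warning" | "error";
}

export function noMoneyFloat(statementSql: string): Finding[] {
  // Flag floating-point columns with monetary-looking names.
  if (/\b(price|amount|balance)\w*\s+(real|double precision|float)/i.test(statementSql)) {
    return [{
      ruleId: "no-money-float",
      message: "Use numeric for monetary values, not floating point",
      severity: "warning",
    }];
  }
  return [];
}
```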

Schema Snapshots

For rules that need to know your actual column types (e.g., distinguishing safe varchar widenings from cross-type rewrites), pgfence can load a schema snapshot:

pgfence analyze --schema-file pgfence-schema.json migrations/*.sql

This replaces heuristic guesses with definitive type classification from your database. Generate the snapshot with pgfence extract-schema against a read replica.
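The widening distinction a schema snapshot enables can be sketched as follows (assumed logic, consistent with the rules table above: growing a `varchar(N)` limit or dropping it for `text` is metadata-only, while cross-family changes force a rewrite):

```typescript
// Sketch of type-change classification once actual column types are known.
// Assumed logic for illustration, not pgfence's exact rules.
function classifyTypeChange(oldType: string, newType: string): "metadata-only" | "rewrite" {
  const lim = (t: string) => t.match(/^varchar\((\d+)\)$/);
  const a = lim(oldType);
  const b = lim(newType);
  if (a && b) return Number(b[1]) >= Number(a[1]) ? "metadata-only" : "rewrite";
  if (a && newType === "text") return "metadata-only"; // dropping the limit is safe
  return "rewrite"; // cross-family change: assume a full table rewrite
}
```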

Contributing

Adding a new rule

  • Create src/rules/your-rule.ts implementing the check function
  • Add it to the rule pipeline in src/analyzer.ts
  • Add test fixtures in tests/fixtures/
  • Add tests in tests/analyzer.test.ts

Running locally

pnpm install
pnpm test        # Run tests
pnpm typecheck   # Type checking
pnpm lint        # Lint
pnpm build       # Compile

License

MIT © Munteanu Flavius-Ioan

Contact

contact@pgfence.com

Keywords: postgres

Package last updated on 16 Mar 2026
