
prisma-sql
Convert Prisma queries to optimized SQL with type safety. 2-7x faster than Prisma Client.
Prerender Prisma queries to SQL and execute them directly via postgres.js or better-sqlite3.
Same Prisma API. Same Prisma types. Lower read overhead.
import { PrismaClient } from '@prisma/client'
import { speedExtension, type SpeedClient } from './generated/sql'
import postgres from 'postgres'
const sql = postgres(process.env.DATABASE_URL!)
const basePrisma = new PrismaClient()
export const prisma = basePrisma.$extends(
speedExtension({ postgres: sql }),
) as SpeedClient<typeof basePrisma>
const users = await prisma.user.findMany({
where: { status: 'ACTIVE' },
include: { posts: true },
})
const dashboard = await prisma.$batch((batch) => ({
activeUsers: batch.user.count({ where: { status: 'ACTIVE' } }),
recentPosts: batch.post.findMany({
take: 10,
orderBy: { createdAt: 'desc' },
}),
taskStats: batch.task.aggregate({
_count: true,
_avg: { estimatedHours: true },
}),
}))
prisma-sql accelerates Prisma read queries by skipping Prisma's read execution path and running generated SQL directly through a database-native client.
It keeps the Prisma client for schema, migrations, types, and writes.

It accelerates:

- findMany
- findFirst
- findUnique
- count
- aggregate
- groupBy
- $batch

Prisma's DX is excellent, but read queries still pay runtime overhead for query-engine planning, validation, transformation, and result mapping. prisma-sql moves that work out of the hot path: supported reads run as generated SQL directly through postgres.js or better-sqlite3. The goal is simple: same Prisma API, same Prisma types, lower read overhead.
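The core mechanism, compiling Prisma-style arguments into parameterized SQL, can be sketched as a pure function. This is illustrative only; whereToSql is a hypothetical name, not part of the prisma-sql API:

```typescript
// Sketch only: convert a flat Prisma-style `where` object into a
// parameterized SQL fragment. The real generator handles nesting,
// relations, many more operators, and dialect differences.
type Cond = string | number | { in: Array<string | number> }

function whereToSql(where: Record<string, Cond>): {
  sql: string
  params: unknown[]
} {
  const clauses: string[] = []
  const params: unknown[] = []
  for (const [field, cond] of Object.entries(where)) {
    if (typeof cond === 'object') {
      // { in: [...] } becomes an IN list with one placeholder per value.
      const placeholders = cond.in.map((v) => {
        params.push(v)
        return `$${params.length}`
      })
      clauses.push(`"${field}" IN (${placeholders.join(', ')})`)
    } else {
      params.push(cond)
      clauses.push(`"${field}" = $${params.length}`)
    }
  }
  return { sql: clauses.join(' AND '), params }
}

const q = whereToSql({ status: 'ACTIVE', role: { in: ['ADMIN', 'MODERATOR'] } })
// q.sql    -> '"status" = $1 AND "role" IN ($2, $3)'
// q.params -> ['ACTIVE', 'ADMIN', 'MODERATOR']
```

Because this translation is deterministic, it can happen once at generate time (prebaked shapes) or cheaply at runtime, instead of on every request inside a query engine.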
For PostgreSQL:

npm install prisma-sql postgres

For SQLite:

npm install prisma-sql better-sqlite3
generator client {
provider = "prisma-client"
}
generator sql {
provider = "prisma-sql-generator"
}
model User {
id Int @id @default(autoincrement())
email String @unique
status String
posts Post[]
}
model Post {
id Int @id @default(autoincrement())
title String
authorId Int
author User @relation(fields: [authorId], references: [id])
}
npx prisma generate
This generates ./generated/sql/index.ts.
import { PrismaClient } from '@prisma/client'
import { speedExtension, type SpeedClient } from './generated/sql'
import postgres from 'postgres'
const sql = postgres(process.env.DATABASE_URL!)
const basePrisma = new PrismaClient()
export const prisma = basePrisma.$extends(
speedExtension({ postgres: sql }),
) as SpeedClient<typeof basePrisma>
import { PrismaClient } from '@prisma/client'
import { speedExtension, type SpeedClient } from './generated/sql'
import Database from 'better-sqlite3'
const db = new Database('./data.db')
const basePrisma = new PrismaClient()
export const prisma = basePrisma.$extends(
speedExtension({ sqlite: db }),
) as SpeedClient<typeof basePrisma>
Apply speedExtension last so it sees the final query surface.
import { PrismaClient } from '@prisma/client'
import { speedExtension, type SpeedClient } from './generated/sql'
import postgres from 'postgres'
const sql = postgres(process.env.DATABASE_URL!)
const basePrisma = new PrismaClient()
const extendedPrisma = basePrisma
.$extends(myCustomExtension)
.$extends(anotherExtension)
export const prisma = extendedPrisma.$extends(
speedExtension({ postgres: sql }),
) as SpeedClient<typeof extendedPrisma>
Accelerated (with $batch on PostgreSQL only):

- findMany
- findFirst
- findUnique
- count
- aggregate
- groupBy
- $batch

These continue to run through Prisma:

- create
- update
- delete
- upsert
- createMany
- updateMany
- deleteMany

If a query shape is unsupported or cannot be accelerated safely, the extension falls back to Prisma instead of returning incorrect results. Enable debug: true to see generated SQL and fallback behavior.
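Conceptually, the dispatch decision starts with a whitelist check against the accelerated method set. A minimal sketch (route and ACCELERATED are hypothetical names, not library exports; the real extension also inspects the query shape before committing to SQL):

```typescript
// Read methods eligible for SQL acceleration; everything else, and any
// unsupported shape, goes through Prisma unchanged.
const ACCELERATED = new Set([
  'findMany', 'findFirst', 'findUnique', 'count', 'aggregate', 'groupBy',
])

function route(method: string): 'sql' | 'prisma' {
  return ACCELERATED.has(method) ? 'sql' : 'prisma'
}
```

Writes always take the 'prisma' path, so validation, middleware, and write semantics are untouched.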
Any supported read query can be converted from Prisma args into SQL at runtime.
const users = await prisma.user.findMany({
where: {
status: 'ACTIVE',
email: { contains: '@example.com' },
},
orderBy: { createdAt: 'desc' },
take: 20,
})
@optimize

For the hottest query shapes, you can prebake SQL at generate time.
/// @optimize {
/// "method": "findMany",
/// "query": {
/// "where": { "status": "ACTIVE" },
/// "orderBy": { "createdAt": "desc" },
/// "skip": "$skip",
/// "take": "$take"
/// }
/// }
model User {
id Int @id @default(autoincrement())
email String @unique
status String
createdAt DateTime @default(now())
}
At runtime, findMany calls whose shape matches the annotation (with values bound to the $skip and $take placeholders) use the prebaked SQL instead of converting the query again.
$batch combines multiple independent read queries into one round trip.
const results = await prisma.$batch((batch) => ({
users: batch.user.findMany({ where: { status: 'ACTIVE' } }),
posts: batch.post.count(),
stats: batch.task.aggregate({ _count: true }),
}))
For supported include trees, prisma-sql can execute flat SQL and reduce rows back into Prisma-like nested results.
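The reduction from flat join rows back to nested results can be modeled as grouping by parent key. A simplified sketch assuming a User-to-Post LEFT JOIN; reduceRows and the row shapes are hypothetical, not the library's actual reducer:

```typescript
// One flat row per (user, post) pair from a LEFT JOIN; post columns are
// NULL when a user has no posts.
interface FlatRow {
  userId: number
  email: string
  postId: number | null
  title: string | null
}

interface NestedUser {
  id: number
  email: string
  posts: { id: number; title: string }[]
}

function reduceRows(rows: FlatRow[]): NestedUser[] {
  const byId = new Map<number, NestedUser>()
  for (const row of rows) {
    let user = byId.get(row.userId)
    if (!user) {
      user = { id: row.userId, email: row.email, posts: [] }
      byId.set(row.userId, user)
    }
    // A NULL child side means the LEFT JOIN matched no post for this row.
    if (row.postId !== null) {
      user.posts.push({ id: row.postId, title: row.title ?? '' })
    }
  }
  return [...byId.values()]
}

const nested = reduceRows([
  { userId: 1, email: 'a@example.com', postId: 10, title: 'First' },
  { userId: 1, email: 'a@example.com', postId: 11, title: 'Second' },
  { userId: 2, email: 'b@example.com', postId: null, title: null },
])
// nested[0].posts has two entries; nested[1].posts is empty
```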
Aggregates are mapped back to Prisma-style value types instead of flattening everything into strings or plain numbers.
That includes preserving types like:
- Decimal
- BigInt
- DateTime
- _count

Filter examples:

{ age: { gt: 18, lte: 65 } }
{ status: { in: ['ACTIVE', 'PENDING'] } }
{ status: { notIn: ['DELETED'] } }
{ email: { contains: '@example.com' } }
{ email: { startsWith: 'user' } }
{ email: { endsWith: '.com' } }
{ email: { contains: 'EXAMPLE', mode: 'insensitive' } }
{ AND: [{ status: 'ACTIVE' }, { verified: true }] }
{ OR: [{ role: 'ADMIN' }, { role: 'MODERATOR' }] }
{ NOT: { status: 'DELETED' } }
{ deletedAt: null }
{ deletedAt: { not: null } }
{
include: {
posts: true,
profile: true,
}
}
{
include: {
posts: {
where: { published: true },
orderBy: { createdAt: 'desc' },
take: 5,
include: {
comments: true,
},
},
},
}
{
where: {
posts: { some: { published: true } },
},
}
{
where: {
posts: { every: { published: true } },
},
}
{
where: {
posts: { none: { published: false } },
},
}
{
take: 10,
skip: 20,
orderBy: { createdAt: 'desc' },
}
{
cursor: { id: 100 },
skip: 1,
take: 10,
orderBy: { id: 'asc' },
}
{
orderBy: [
{ status: 'asc' },
{ priority: 'desc' },
{ createdAt: 'desc' },
],
}
For composite cursors, use an orderBy that starts with the cursor fields in the same order.
{
cursor: { tenantId: 10, id: 500 },
skip: 1,
take: 20,
orderBy: [
{ tenantId: 'asc' },
{ id: 'asc' },
],
}
This matches keyset pagination expectations and avoids unstable page boundaries.
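For an ascending composite cursor, keyset pagination boils down to a row-value comparison: the page after cursor (10, 500) is everything where (tenantId, id) > (10, 500). A sketch of the predicate construction (keysetPredicate is hypothetical; Postgres-style placeholders, ascending order assumed on all cursor fields):

```typescript
// Build a row-value keyset predicate for a composite ascending cursor.
function keysetPredicate(
  cursor: Record<string, number>,
): { sql: string; params: number[] } {
  const fields = Object.keys(cursor)
  const params = fields.map((f) => cursor[f])
  const placeholders = params.map((_, i) => `$${i + 1}`)
  return {
    sql: `(${fields.join(', ')}) > (${placeholders.join(', ')})`,
    params,
  }
}

const p = keysetPredicate({ tenantId: 10, id: 500 })
// p.sql    -> '(tenantId, id) > ($1, $2)'
// p.params -> [10, 500]
```

This is why the orderBy must lead with the cursor fields in the same order: the comparison and the sort have to agree for page boundaries to be stable.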
await prisma.user.count({
where: { status: 'ACTIVE' },
})
await prisma.task.aggregate({
where: { status: 'DONE' },
_count: { _all: true },
_sum: { estimatedHours: true },
_avg: { estimatedHours: true },
_min: { startedAt: true },
_max: { completedAt: true },
})
await prisma.task.groupBy({
by: ['status', 'priority'],
_count: { _all: true },
_avg: { estimatedHours: true },
having: {
status: {
_count: { gte: 5 },
},
},
})
The cardinality planner is the piece that decides how relation-heavy reads should be executed for best performance.
In practice, it helps choose between strategies such as a single joined query versus separate per-relation queries.
This matters because the fastest strategy depends on cardinality, not just query shape.
A profile include behaves very differently from a posts.comments.likes include.
A naive join strategy can explode row counts:
- User -> Profile is usually low fan-out
- User -> Posts -> Comments can multiply rows aggressively
- Organization -> Users -> Sessions -> Events can become huge very quickly

The planner tries to keep read amplification under control.
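The planner's concern can be illustrated with a back-of-envelope estimate: a single joined query returns roughly the product of the per-level fan-outs. A sketch with made-up numbers (estimateJoinRows is illustrative, not a planner API):

```typescript
// Estimate the joined row count for an include chain, given the average
// fan-out at each relation level. Illustrative numbers only.
function estimateJoinRows(parents: number, fanOuts: number[]): number {
  return fanOuts.reduce((rows, f) => rows * Math.max(f, 1), parents)
}

// 100 users with ~20 posts each and ~15 comments per post:
// a naive three-way join returns 100 * 20 * 15 = 30,000 rows.
estimateJoinRows(100, [20, 15])

// 100 users with at most one profile each: the join stays at 100 rows.
estimateJoinRows(100, [1])
```

When the estimate explodes, issuing separate per-relation queries and reducing in memory can be far cheaper than one wide join.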
To get the best results, prepare your schema and indexes so the planner can make good choices.
Use correct relation fields and uniqueness constraints.
Good examples:
model User {
id Int @id @default(autoincrement())
profile Profile?
posts Post[]
}
model Profile {
id Int @id @default(autoincrement())
userId Int @unique
user User @relation(fields: [userId], references: [id])
}
model Post {
id Int @id @default(autoincrement())
authorId Int
author User @relation(fields: [authorId], references: [id])
@@index([authorId])
}
Why this helps:
- @unique on one-to-one foreign keys tells the planner the relation is bounded

At minimum, index:

- @relation(fields: [...]) foreign keys on the child side
- fields used in where
- fields used in orderBy

Example:
model Comment {
id Int @id @default(autoincrement())
postId Int
createdAt DateTime @default(now())
published Boolean @default(false)
post Post @relation(fields: [postId], references: [id])
@@index([postId])
@@index([postId, createdAt])
@@index([postId, published])
}
When including collections, provide a stable order whenever practical.
const users = await prisma.user.findMany({
include: {
posts: {
orderBy: { createdAt: 'desc' },
take: 5,
},
},
})
That helps both the planner and the reducer keep result shapes predictable.
Use the cardinality planner wherever your generator/runtime exposes it.
Because config names can differ between versions, the safe rule is:
If your project has planner thresholds, start conservatively:
Use debug and onQuery.
Look for:
const prisma = basePrisma.$extends(
speedExtension({
postgres: sql,
debug: true,
onQuery: (info) => {
console.log(`${info.model}.${info.method} ${info.duration}ms`)
console.log(info.sql)
},
}),
) as SpeedClient<typeof basePrisma>
What good results look like:
- $batch or splitting it improves latency materially

The cardinality planner collects relation statistics and roundtrip cost measurements directly from the database during prisma generate. In CI/CD pipelines or containerized builds, the database is often unreachable.
Set PRISMA_SQL_SKIP_PLANNER=true to skip stats collection at generate time. The generator will emit default planner values instead.
PRISMA_SQL_SKIP_PLANNER=true npx prisma generate
Run prisma-sql-collect-stats as a pre-start step, after deployment, when the database is reachable.
prisma-sql-collect-stats \
--output dist/prisma/generated/sql/planner.generated.js \
--prisma-client dist/prisma/generated/client/index.js
| Flag | Default | Description |
|---|---|---|
| --output | ./dist/prisma/generated/sql/planner.generated.js | Path to the generated planner module |
| --prisma-client | @prisma/client | Path to the compiled Prisma client (must expose Prisma.dmmf) |
The script reads DATABASE_URL from the environment (supports .env via dotenv). If the connection fails or times out, it exits silently without blocking startup.
{
"prisma:generate": "PRISMA_SQL_SKIP_PLANNER=true prisma generate",
"collect-planner-stats": "prisma-sql-collect-stats --output dist/prisma/generated/sql/planner.generated.js --prisma-client dist/prisma/generated/client/index.js",
"start:production": "yarn collect-planner-stats; node dist/src/index.js"
}
The semicolon (;) after collect-planner-stats ensures the server starts even if stats collection fails. Use && instead if you want startup to abort on failure.
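The difference between ; and && can be checked directly in a shell:

```shell
# `;` runs the next command regardless of the previous exit status:
out_semicolon=$(false; echo "started anyway")

# `&&` short-circuits, so the echo is skipped when the first command fails:
out_and=$(false && echo "never printed") || true

echo "semicolon: $out_semicolon"
echo "and: '$out_and'"
```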
When stats are not collected, the planner uses conservative defaults:
- roundtripRowEquivalent: 73
- jsonRowFactor: 1.5
- relationStats: empty (all relations treated as unknown cardinality)

This means the planner cannot make informed decisions about join strategies. Queries still work correctly (the planner falls back to safe general-purpose strategies), but relation-heavy reads may not use the optimal execution plan.
Stats collection has a default timeout of 15 seconds. Override with:
PRISMA_SQL_PLANNER_TIMEOUT_MS=5000 yarn collect-planner-stats
For best results with the planner:

- mark one-to-one foreign keys @unique
- give collection includes an orderBy and a take
- combine independent reads with $batch

$batch runs multiple independent read queries in one PostgreSQL round trip.
const dashboard = await prisma.$batch((batch) => ({
totalUsers: batch.user.count(),
activeUsers: batch.user.count({
where: { status: 'ACTIVE' },
}),
recentProjects: batch.project.findMany({
take: 5,
orderBy: { createdAt: 'desc' },
include: { organization: true },
}),
taskStats: batch.task.aggregate({
_count: true,
_avg: { estimatedHours: true },
where: { status: 'IN_PROGRESS' },
}),
}))
Do not await inside the callback.
Incorrect:
await prisma.$batch(async (batch) => ({
users: await batch.user.findMany(),
}))
Correct:
await prisma.$batch((batch) => ({
users: batch.user.findMany(),
}))
Use $transaction when you need transactional guarantees.

const prisma = basePrisma.$extends(
speedExtension({
postgres: sql,
debug: true,
}),
) as SpeedClient<typeof basePrisma>
const prisma = basePrisma.$extends(
speedExtension({
postgres: sql,
onQuery: (info) => {
console.log(`${info.model}.${info.method}: ${info.duration}ms`)
console.log(`prebaked=${info.prebaked}`)
},
}),
) as SpeedClient<typeof basePrisma>
The callback receives:
interface QueryInfo {
model: string
method: string
sql: string
params: unknown[]
duration: number
prebaked: boolean
}
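One practical use of onQuery is aggregating timings per query shape so slow shapes stand out. A minimal sketch built on the QueryInfo shape above (makeStatsCollector is a hypothetical helper, not a library export):

```typescript
// Matches the QueryInfo shape documented above.
interface QueryInfo {
  model: string
  method: string
  sql: string
  params: unknown[]
  duration: number
  prebaked: boolean
}

// Accumulate count and total duration per model.method pair.
function makeStatsCollector() {
  const totals = new Map<string, { count: number; totalMs: number }>()
  return {
    onQuery(info: QueryInfo) {
      const key = `${info.model}.${info.method}`
      const entry = totals.get(key) ?? { count: 0, totalMs: 0 }
      entry.count += 1
      entry.totalMs += info.duration
      totals.set(key, entry)
    },
    report() {
      return [...totals.entries()].map(([key, { count, totalMs }]) => ({
        key,
        count,
        avgMs: totalMs / count,
      }))
    },
  }
}
```

Wire the collector's onQuery into the speedExtension options and dump report() periodically or on shutdown.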
generator sql {
provider = "prisma-sql-generator"
// optional
// dialect = "postgres"
// optional
// output = "./generated/sql"
// optional
// skipInvalid = "true"
}
@optimize examples

/// @optimize {
/// "method": "findMany",
/// "query": {
/// "where": { "status": "ACTIVE" }
/// }
/// }
model User {
id Int @id
status String
}
/// @optimize {
/// "method": "findMany",
/// "query": {
/// "where": { "status": "$status" },
/// "skip": "$skip",
/// "take": "$take"
/// }
/// }
model User {
id Int @id
status String
}
/// @optimize {
/// "method": "findMany",
/// "query": {
/// "include": {
/// "posts": {
/// "where": { "published": true },
/// "orderBy": { "createdAt": "desc" },
/// "take": 5
/// }
/// }
/// }
/// }
model User {
id Int @id
posts Post[]
}
import { PrismaClient } from '@prisma/client'
import { speedExtension, type SpeedClient } from './generated/sql'
import postgres from 'postgres'
const sql = postgres(process.env.DATABASE_URL!)
const basePrisma = new PrismaClient()
const prisma = basePrisma.$extends(
speedExtension({ postgres: sql }),
) as SpeedClient<typeof basePrisma>
export const config = { runtime: 'edge' }
export default async function handler() {
const users = await prisma.user.findMany()
return Response.json(users)
}
Use the standalone SQL generation API.
import { createToSQL } from 'prisma-sql'
import { MODELS } from './generated/sql'
const toSQL = createToSQL(MODELS, 'sqlite')
export default {
async fetch(request: Request, env: Env) {
const { sql, params } = toSQL('User', 'findMany', {
where: { status: 'ACTIVE' },
})
const result = await env.DB.prepare(sql)
.bind(...params)
.all()
return Response.json(result.results)
},
}
Performance depends on:
Typical gains are strongest when:
Run your own benchmarks on production-shaped data.
speedExtension requires postgres or sqlite client

Pass a database-native client to the generated extension.
const prisma = new PrismaClient().$extends(speedExtension({ postgres: sql }))
If generated code targets PostgreSQL, do not pass SQLite, and vice versa.
Override dialect in the generator if needed.
generator sql {
provider = "prisma-sql-generator"
dialect = "postgres"
}
Turn on debug logging and compare generated SQL with Prisma query logs.
const prisma = new PrismaClient().$extends(
speedExtension({
postgres: sql,
debug: true,
}),
)
If behavior differs, open an issue with:
Check these first:

- take
- orderBy
- $batch

Increase the postgres.js pool size if needed.
const sql = postgres(process.env.DATABASE_URL!, {
max: 50,
})
These should fall back to Prisma:

- search
- groupBy edge cases

Do I still need Prisma?
Yes. Prisma remains the source of truth for schema, migrations, types, writes, and fallback behavior.
Does this replace Prisma Client?
No. It extends Prisma Client.
What gets accelerated?
Supported read queries only.
What about writes?
Writes continue through Prisma.
Do I need @optimize?
No. It is optional. It only reduces the overhead of repeated hot query shapes.
Does $batch work with SQLite?
Not currently.
Is it safe to use in production?
Use it the same way you would adopt any query-path optimization layer: benchmark it on real data, compare against Prisma for parity, and keep Prisma fallback enabled for unsupported cases.
const prisma = new PrismaClient()
const users = await prisma.user.findMany()
import { PrismaClient } from '@prisma/client'
import { speedExtension, type SpeedClient } from './generated/sql'
import postgres from 'postgres'
const sql = postgres(process.env.DATABASE_URL!)
const basePrisma = new PrismaClient()
export const prisma = basePrisma.$extends(
speedExtension({ postgres: sql }),
) as SpeedClient<typeof basePrisma>
const users = await prisma.user.findMany()
- examples/generator-mode
- tests/e2e/postgres.test.ts
- tests/e2e/sqlite.e2e.test.ts
- tests/sql-injection/batch-transaction.test.ts

git clone https://github.com/multipliedtwice/prisma-to-sql
cd prisma-to-sql
npm install
npm run build
npm test
MIT