## Getting started

### Install

```bash
$ npm install postgres
```

### Use

```js
// db.js
const postgres = require('postgres')

const sql = postgres({ /* options */ })

module.exports = sql
```

```js
// other.js
const sql = require('./db.js')

const users = await sql`
  select name, age from users
`
```
## Connection options `postgres([url], [options])`

You can use either a `postgres://` url connection string or the options to define your database connection properties. Options in the object will override any present in the url.
```js
const sql = postgres('postgres://username:password@host:port/database', {
  host            : '',         // Postgres ip address or domain name
  port            : 5432,       // Postgres server port
  path            : '',         // unix socket path (usually '/tmp')
  database        : '',         // Name of database to connect to
  username        : '',         // Username of database user
  password        : '',         // Password of database user
  ssl             : false,      // true, or options for tls.connect
  max             : 10,         // Max number of connections
  idle_timeout    : 0,          // Idle connection timeout in seconds
  connect_timeout : 30,         // Connect timeout in seconds
  no_prepare      : false,      // No automatic creation of prepared statements
  types           : [],         // Array of custom types, see more below
  onnotice        : fn,         // Defaults to console.log
  onparameter     : fn,         // (key, value) when server param change
  debug           : fn,         // Is called with (connection, query, parameters)
  transform       : {
    column        : fn,         // Transforms incoming column names
    value         : fn,         // Transforms incoming row values
    row           : fn          // Transforms entire rows
  },
  connection      : {
    application_name : 'postgres.js', // Default application_name
    ...                         // Other connection parameters
  },
  target_session_attrs : null,  // Use 'read-write' with multiple hosts to only connect to the primary
  fetch_array_types    : true   // Automatically fetch array type information on connect
})
```
### SSL

More info for the `ssl` option can be found in the Node.js docs for tls connect options.

Although it is vulnerable to MITM attacks, a common configuration for the `ssl` option for some cloud providers like Heroku is to set `rejectUnauthorized` to `false` (if `NODE_ENV` is `production`):
```js
const sql =
  process.env.NODE_ENV === 'production'
    ? postgres({ ssl: { rejectUnauthorized: false } })
    : postgres()
```
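A safer alternative, sketched below under the assumption that your provider lets you download a CA certificate (the file path is a placeholder), is to verify the server certificate explicitly:

```js
const fs = require('fs')

// Verify the server certificate against the provider's CA bundle
// instead of disabling verification entirely
const sql = postgres({
  ssl: {
    rejectUnauthorized: true,
    ca: fs.readFileSync('/path/to/server-ca.pem').toString()
  }
})
```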
### Multi host connections - High Availability (HA)

Connection uri strings with multiple hosts work like psql multiple host uris.

Connecting to the specified hosts/ports will be tried in order, and on a successful connection retries will be reset. This ensures that hosts can come up and down seamlessly to your application.

If you specify `target_session_attrs: 'read-write'` or `PGTARGETSESSIONATTRS=read-write`, Postgres.js will only connect to a writeable host, allowing for zero-downtime failovers.
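For example (a sketch; host names, ports and credentials below are placeholders):

```js
// Hosts are tried in order; with target_session_attrs: 'read-write'
// only a primary (writeable) host will be accepted
const sql = postgres('postgres://user:pass@host1:5432,host2:5432/database', {
  target_session_attrs: 'read-write'
})
```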
### Auto fetching of array types

When Postgres.js first connects to the database it automatically fetches array type information.

If you have revoked access to `pg_catalog` this feature will no longer work and will need to be disabled.

You can disable fetching array types by setting `fetch_array_types` to `false` when creating an instance.
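For example:

```js
// Skip the initial pg_catalog lookup for array types
const sql = postgres({ fetch_array_types: false })
```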
### Environment Variables for Options

It is also possible to connect to the database without a connection string or any options. Postgres.js will fall back to the common environment variables used by `psql` as in the table below:

```js
const sql = postgres()
```

| Option            | Environment Variables    |
| ----------------- | ------------------------ |
| `host`            | `PGHOST`                 |
| `port`            | `PGPORT`                 |
| `database`        | `PGDATABASE`             |
| `username`        | `PGUSERNAME` or `PGUSER` |
| `password`        | `PGPASSWORD`             |
| `idle_timeout`    | `PGIDLE_TIMEOUT`         |
| `connect_timeout` | `PGCONNECT_TIMEOUT`      |
## Query ```sql` ` -> Promise```

A query will always return a `Promise` which resolves to a results array `[...]{ count, command, columns }`. Destructuring is great to immediately access the first element.

```js
const [new_user] = await sql`
  insert into users (
    name, age
  ) values (
    'Murray', 68
  )
  returning *
`
```
### Query parameters

Parameters are automatically inferred and handled by Postgres so that SQL injection isn't possible. No special handling is necessary, simply use JS tagged template literals as usual.

```js
let search = 'Mur'

const users = await sql`
  select
    name,
    age
  from users
  where
    name like ${ search + '%' }
`
```

Be careful with quotation marks here. Because Postgres infers the types, you don't need to wrap your interpolated parameters in quotes like `'${name}'`. In fact, this will cause an error because the tagged template replaces `${name}` with `$1` in the query string, leaving Postgres to do the interpolation. If you wrap that in a string, Postgres will see `'$1'` and interpret it as a string as opposed to a parameter.
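A minimal sketch of the pitfall:

```js
const name = 'Murray'

// wrong – the tag produces "... where name = '$1'", so Postgres
// sees the literal string '$1' rather than your value
await sql`select * from users where name = '${ name }'`

// right – the value is sent as a bound parameter
await sql`select * from users where name = ${ name }`
```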
### Arrays

Arrays will be handled by replacement parameters too, so `where in` queries are also simple.

```js
const users = await sql`
  select
    *
  from users
  where age in (${ [68, 75, 23] })
`
```
### TypeScript support

`postgres` has TypeScript support. You can pass a row list type for your queries in this way:

```ts
interface User {
  id: number
  name: string
}

const users = await sql<User[]>`SELECT * FROM users`
users[0].id      // ok => number
users[1].name    // ok => string
users[0].invalid // fails: `invalid` does not exist on `User`
```

However, be sure to check the array length to avoid accessing properties of `undefined` rows:

```ts
const users = await sql<User[]>`SELECT * FROM users WHERE id = ${id}`
if (!users.length)
  throw new Error('Not found')
return users[0]
```

You can also use destructuring when you only care about a fixed number of rows. In this case, we recommend using tuples to handle `undefined` properly:

```ts
const [user]: [User?] = await sql`SELECT * FROM users WHERE id = ${id}`
if (!user)
  throw new Error('Not found')
return user

const [first, second]: [User?] = await sql`SELECT * FROM users WHERE id = ${id}` // fails: `second` does not exist on `[User?]`
const [first, second] = await sql<[User?]>`SELECT * FROM users WHERE id = ${id}` // works: `second: User | undefined`
```

All the public API is typed. Also, TypeScript support is still in beta. Feel free to open an issue if you have trouble with types.
## Stream ```sql` `.stream(fn) -> Promise```

If you want to handle rows returned by a query one by one, you can use `.stream`, which returns a promise that resolves once there are no more rows.

```js
await sql`
  select created_at, name from events
`.stream(row => {
  // row = { created_at: Date, name: string }
})
```
## Cursor ```sql` `.cursor([rows = 1], fn) -> Promise```

Use cursors if you need to throttle the amount of rows being returned from a query. New results won't be requested until the promise / async callback function has resolved.

```js
await sql`
  select * from generate_series(1,4) as x
`.cursor(async row => {
  // row = { x: 1 }
  await http.request('https://example.com/wat', { row })
})
```

A single row will be returned by default, but you can also request batches by setting the number of rows desired in each batch as the first argument. That is useful if you can do work with the rows in parallel like in this example:

```js
await sql`
  select * from generate_series(1,1000) as x
`.cursor(10, async rows => {
  // rows = [{ x: 1 }, { x: 2 }, ...]
  await Promise.all(rows.map(row =>
    http.request('https://example.com/wat', { row })
  ))
})
```

If an error is thrown inside the callback function no more rows will be requested and the promise will reject with the thrown error.

You can also stop receiving any more rows early by returning an end token `sql.END` from the callback function.

```js
await sql`
  select * from generate_series(1,1000) as x
`.cursor(row => {
  return Math.random() > 0.9 && sql.END
})
```
## Raw ```sql``.raw()```

Using `.raw()` will return rows as an array with `Buffer` values for each column, instead of objects.

This can be useful to receive identically named columns, or for specific performance / transformation reasons. The column definitions are still included on the result array, with access to parsers for each column.
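A minimal sketch (the query and values are illustrative):

```js
// Duplicate column names would collide as object keys,
// but survive as separate array entries with .raw()
const rows = await sql`select 1 as a, 2 as a`.raw()

// rows[0] = [Buffer, Buffer] – one Buffer per column, in select order
```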
## Listen and notify

When you call listen, a dedicated connection will automatically be made to ensure that you receive notifications in real time. This connection will be used for any further calls to listen. Listen returns a promise which resolves once the `LISTEN` query to Postgres completes, or if there is already a listener active.

```js
await sql.listen('news', payload => {
  const json = JSON.parse(payload)
  console.log(json.this)
})
```

Notify can be done as usual in sql, or by using the `sql.notify` method.

```js
sql.notify('news', JSON.stringify({ no: 'this', is: 'news' }))
```
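As a sketch of the plain-query alternative, PostgreSQL's built-in `pg_notify()` function accepts the payload as a regular bound parameter:

```js
// Equivalent to the sql.notify call above
await sql`select pg_notify('news', ${ JSON.stringify({ no: 'this', is: 'news' }) })`
```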
## Tagged template function ``` sql`` ```

Tagged template functions are not just ordinary template literal strings. They allow the function to handle any parameters within before interpolation. This means that they can be used to enforce a safe way of writing queries, which is what Postgres.js does. Any generic value will be serialized according to an inferred type, replaced by a PostgreSQL protocol placeholder (`$1`, `$2`, ...) and then sent to the database as a parameter, letting it handle any need for escaping / casting.

This also means you cannot write dynamic queries or concatenate queries together by simple string manipulation. To enable dynamic queries in a safe way, the `sql` function doubles as a regular function which escapes any value properly. It also includes overloads for common cases of inserting, selecting, updating and querying.
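As a rough sketch of what the tag does with a query:

```js
// The tagged call below never concatenates the value into the query text
const age = 60
await sql`select name from users where age > ${ age }`

// What is sent to Postgres, roughly:
//   query:      'select name from users where age > $1'
//   parameters: [60]
```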
## Dynamic query helpers - `sql()` inside tagged template

Postgres.js has a safe, ergonomic way to aid you in writing queries. This makes it easier to write dynamic `insert`, `select` and `update` queries, and to pass `where` parameters.

### Insert

```js
const user = {
  name: 'Murray',
  age: 68
}

sql`
  insert into users ${
    sql(user, 'name', 'age')
  }
`

// Which results in:
insert into users (name, age) values ($1, $2)
```

You can leave out the column names and simply do `sql(user)` if you want to get all fields from the object as columns, but be careful not to allow users to supply columns you don't want.
### Multiple inserts in one query

If you need to insert multiple rows at the same time it's also much faster to do it with a single `insert`. Simply pass an array of objects to `sql()`.

```js
const users = [{
  name: 'Murray',
  age: 68,
  garbage: 'ignore'
}, {
  name: 'Walter',
  age: 78
}]

sql`
  insert into users ${
    sql(users, 'name', 'age')
  }
`
```
### Update

This is also useful for update queries:

```js
const user = {
  id: 1,
  name: 'Murray'
}

sql`
  update users set ${
    sql(user, 'name')
  } where
    id = ${ user.id }
`

// Which results in:
update users set name = $1 where id = $2
```
### Select

```js
const columns = ['name', 'age']

sql`
  select ${
    sql(columns)
  } from users
`

// Which results in:
select name, age from users
```
### Dynamic table name

```js
const table = 'users'

sql`
  select id from ${sql(table)}
`

// Which results in:
select id from users
```
## Arrays `sql.array(Array)`

PostgreSQL has a native array type which is similar to js arrays, but only allows the same type and shape for nested items. This method automatically infers the item type and serializes js arrays into PostgreSQL arrays.

```js
const types = sql`
  insert into types (
    integers,
    strings,
    dates,
    buffers,
    multi
  ) values (
    ${ sql.array([1,2,3,4,5]) },
    ${ sql.array(['Hello', 'Postgres']) },
    ${ sql.array([new Date(), new Date(), new Date()]) },
    ${ sql.array([Buffer.from('Hello'), Buffer.from('Postgres')]) },
    ${ sql.array([[[1,2],[3,4]],[[5,6],[7,8]]]) }
  )
`
```
## JSON `sql.json(object)`

```js
const body = { hello: 'postgres' }

const [{ body: json }] = await sql`
  insert into json (
    body
  ) values (
    ${ sql.json(body) }
  )
  returning body
`

// json = { hello: 'postgres' }
```
## File query `sql.file(path, [args], [options]) -> Promise`

Using an `.sql` file for a query. The contents will be cached in memory so that the file is only read once.

```js
sql.file(path.join(__dirname, 'query.sql'), [], {
  cache: true // Default true - disable for single-shot queries or memory reasons
})
```
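If the file contains positional parameters, pass them through `args`. A sketch, assuming a hypothetical `user.sql` containing `select * from users where id = $1`:

```js
// The second argument binds $1, $2, ... in the file's query
const [user] = await sql.file(path.join(__dirname, 'user.sql'), [42])
```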
## Subscribe / Realtime

Postgres.js implements the logical replication protocol of PostgreSQL to support subscription to realtime updates of `insert`, `update` and `delete` operations.

**NOTE** To make this work you must create the proper publications in your database, enable logical replication by setting `wal_level = logical` in `postgresql.conf` and connect using either a replication-enabled role or a superuser.

### Quick start

#### Create a publication (eg. in a migration)

```sql
CREATE PUBLICATION alltables FOR ALL TABLES
```

#### Subscribe to updates

```js
const sql = postgres({ publications: 'alltables' })

const { unsubscribe } = await sql.subscribe('insert:events', row =>
  // tell about new event row over eg. websockets or do something else
)
```
### Subscribe pattern

You can subscribe to specific operations, tables, or even rows with primary keys.

#### `operation` `:` `schema` `.` `table` `=` `primary_key`

**`operation`** is one of ```* | insert | update | delete``` and defaults to `*`

**`schema`** defaults to `public.`

**`table`** is a specific table name and defaults to `*`

**`primary_key`** can be used to only subscribe to specific rows
#### Examples

```js
sql.subscribe('*',              () => /* everything */ )
sql.subscribe('insert',         () => /* all inserts */ )
sql.subscribe('*:users',        () => /* all operations on the public.users table */ )
sql.subscribe('delete:users',   () => /* all deletes on the public.users table */ )
sql.subscribe('update:users=1', () => /* all updates on the users row with primary key = 1 */ )
```
## Transactions

### BEGIN / COMMIT `sql.begin(fn) -> Promise`

Calling begin with a function will return a Promise which resolves with the returned value from the function. The function provides a single argument which is `sql` with a context of the newly created transaction. `BEGIN` is automatically called, and if the Promise fails `ROLLBACK` will be called. If it succeeds `COMMIT` will be called.

```js
const [user, account] = await sql.begin(async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Alice'
    )
  `

  const [account] = await sql`
    insert into accounts (
      user_id
    ) values (
      ${ user.user_id }
    )
  `

  return [user, account]
})
```
### SAVEPOINT `sql.savepoint([name], fn) -> Promise`

```js
sql.begin(async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Alice'
    )
  `

  const [account] = (await sql.savepoint(sql =>
    sql`
      insert into accounts (
        user_id
      ) values (
        ${ user.user_id }
      )
    `
  ).catch(err => {
    // Account could not be created. ROLLBACK SAVEPOINT is called because we caught the rejection.
  })) || []

  return [user, account]
})
.then(([user, account]) => {
  // great success - COMMIT succeeded
})
.catch(() => {
  // not so good - ROLLBACK was called
})
```
Do note that you can often achieve the same result using `WITH` queries (Common Table Expressions) instead of using transactions.
## Custom Types

You can add ergonomic support for custom types, or simply pass an object with a `{ type, value }` signature that contains the Postgres `oid` for the type and the correctly serialized value. (`oid` values for types can be found in the `pg_catalog.pg_type` table.)

Adding query helpers is the recommended approach, which can be done like this:

```js
const sql = postgres({
  types: {
    rect: {
      to        : 1337,
      from      : [1337],
      serialize : ({ x, y, width, height }) => [x, y, width, height],
      parse     : ([x, y, width, height]) => ({ x, y, width, height })
    }
  }
})

const [custom] = await sql`
  insert into rectangles (
    name,
    rect
  ) values (
    'wat',
    ${ sql.types.rect({ x: 13, y: 37, width: 42, height: 80 }) }
  )
  returning *
`
```
## Teardown / Cleanup

To ensure proper teardown and cleanup on server restarts use `sql.end({ timeout: 0 })` before `process.exit()`.

Calling `sql.end()` will reject new queries and return a Promise which resolves when all queries are finished and the underlying connections are closed. If a timeout is provided any pending queries will be rejected once the timeout is reached and the connections will be destroyed.

#### Sample shutdown using Prexit

```js
import prexit from 'prexit'

prexit(async () => {
  await sql.end({ timeout: 5 })
  await new Promise(r => server.close(r))
})
```
## Numbers, bigint, numeric

`Number` in javascript is only able to represent 2^53 - 1 safely, which means that PostgreSQL types like `bigint` and `numeric` won't fit into `Number`.

Since Node.js v10.4 we can use `BigInt` to match the PostgreSQL type `bigint`, which is returned for eg. `count(*)`. Unfortunately it doesn't work with `JSON.stringify` out of the box, so Postgres.js will return it as a string.

If you want to use `BigInt` you can add this custom type:

```js
const sql = postgres({
  types: {
    bigint: postgres.BigInt
  }
})
```

There is currently no native way to handle `numeric` / `decimal` in Javascript, so these and similar types will be returned as `string`. You can also handle types like these using custom types if you want to.
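As a sketch of that custom-type approach (a hypothetical handler for `numeric`, oid `1700`; only safe when you know your values fit within `Number` precision):

```js
// Hypothetical numeric handler – loses precision for values beyond
// Number.MAX_SAFE_INTEGER or with many decimal digits
const sql = postgres({
  types: {
    numeric: {
      to        : 1700,        // oid of numeric
      from      : [1700],
      serialize : x => '' + x, // send as text
      parse     : x => +x      // read back as a JS number
    }
  }
})
```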
## The Connection Pool

Connections are created lazily once a query is made. This means that simply doing `const sql = postgres(...)` won't have any effect other than instantiating a new `sql` instance.

No connection will be made until a query is made.

This means that we get a much simpler story for error handling and reconnections. Queries will be sent over the wire immediately on the next available connection in the pool. Connections are automatically taken out of the pool if you start a transaction using `sql.begin()`, and automatically returned to the pool once your transaction is done.

Any query which was already sent over the wire will be rejected if the connection is lost. It'll automatically defer to the error handling you have for that query, and since connections are lazy it'll automatically try to reconnect the next time a query is made. The benefit of this is no weird generic "onerror" handler that tries to get things back to normal, and also simpler application code since you don't have to handle errors out of context.

There are no guarantees about queries executing in order unless using a transaction with `sql.begin()` or setting `max: 1`. Of course doing a series of queries, one awaiting the other, will work as expected, but that's just due to the nature of js async / promise handling, so it's not necessary for this library to be concerned with ordering.
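A short sketch of those two ordering behaviors (table names are placeholders):

```js
// Guaranteed order: sequential awaits
const [a] = await sql`select 1 as x`
const [b] = await sql`select 2 as x`

// No ordering guarantee: these may run on different pool connections
await Promise.all([
  sql`update counters set n = n + 1`,
  sql`select n from counters`
])
```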
### Idle timeout

By default, connections will not close until `.end()` is called. However, it may be useful to have them close automatically when:

- there is no activity for some period of time
- using Postgres.js in Lambdas / Serverless environments
- using Postgres.js with a database service that automatically closes connections after some time (see `ECONNRESET` issue)

This can be done using the `idle_timeout` option to specify the amount of seconds to wait before automatically closing an idle connection.

For example, to close idle connections after 2 seconds:

```js
const sql = postgres({
  idle_timeout: 2
})
```
## Prepared statements

Prepared statements will automatically be created for any queries where it can be inferred that the query is static. This can be disabled by using the `no_prepare` option, which is useful, for instance, when using PGBouncer in transaction mode.
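For example:

```js
// Disable automatic prepared statements, e.g. behind PGBouncer in transaction mode
const sql = postgres({ no_prepare: true })
```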
## `sql.unsafe` - Advanced unsafe use cases

### Unsafe queries `sql.unsafe(query, [args], [options]) -> promise`

If you know what you're doing, you can use `unsafe` to pass any string you'd like to postgres. Please note that this can lead to sql injection if you're not careful.

```js
sql.unsafe('select ' + danger + ' from users where id = ' + dragons)
```
## Errors

Errors are all thrown to related queries and never globally. Errors coming from PostgreSQL itself are always in the native Postgres format, and the same goes for any Node.js errors, eg. coming from the underlying connection.

Query errors will contain a stored error with the origin of the query to aid in tracing errors.

Query errors will also contain the `query` string and the `parameters`. These are not enumerable, to avoid accidentally leaking confidential information in logs. To log these it is required to specifically access `error.query` and `error.parameters`.
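For example (a sketch; the failing query is arbitrary):

```js
try {
  await sql`select wat from users`
} catch (error) {
  // query and parameters are non-enumerable, so access them explicitly
  console.error(error.query, error.parameters)
}
```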
There are also the following errors specifically for this library.

### UNDEFINED_VALUE
> Undefined values are not allowed

Postgres.js won't accept `undefined` as values in tagged template queries since it becomes ambiguous what to do with the value. If you want to set something to null, use `null` explicitly.

### MESSAGE_NOT_SUPPORTED
> X (X) is not supported

Whenever a message is received from Postgres which is not supported by this library. Feel free to file an issue if you think something is missing.

### MAX_PARAMETERS_EXCEEDED
> Max number of parameters (65534) exceeded

The postgres protocol doesn't allow more than 65534 (16-bit) parameters. If you run into this issue there are various workarounds such as using `sql([...])` to escape values instead of passing them as parameters.

### SASL_SIGNATURE_MISMATCH
> Message type X not supported

When using SASL authentication the server responds with a signature at the end of the authentication flow which needs to match the one on the client. This is to avoid man-in-the-middle attacks. If you receive this error the connection was cancelled because the server did not reply with the expected signature.

### NOT_TAGGED_CALL
> Query not called as a tagged template literal

Making queries has to be done using the sql function as a tagged template. This is to ensure parameters are serialized and passed to Postgres as query parameters with correct types and to avoid SQL injection.

### AUTH_TYPE_NOT_IMPLEMENTED
> Auth type X not implemented

Postgres supports many different authentication types. This one is not supported.

### CONNECTION_CLOSED
> write CONNECTION_CLOSED host:port

This error is thrown if the connection was closed without an error. This should not happen during normal operation, so please create an issue if this was unexpected.

### CONNECTION_ENDED
> write CONNECTION_ENDED host:port

This error is thrown if the user has called `sql.end()` and performed a query afterwards.

### CONNECTION_DESTROYED
> write CONNECTION_DESTROYED host:port

This error is thrown for any queries that were pending when the timeout to `sql.end({ timeout: X })` was reached.

### CONNECTION_CONNECT_TIMEOUT
> write CONNECTION_CONNECT_TIMEOUT host:port

This error is thrown if the startup phase of the connection (tcp, protocol negotiation and auth) took more than the default 30 seconds or what was specified using `connect_timeout` or `PGCONNECT_TIMEOUT`.
## Migration tools

Postgres.js doesn't come with any migration solution since it's way out of scope, but here are some modules that support Postgres.js for migrations:
## Thank you

A really big thank you to @JAForbes who introduced me to Postgres and still holds my hand navigating all the great opportunities we have.

Thanks to @ACXgit for initial tests and dogfooding.

Also thanks to Ryan Dahl for letting me have the `postgres` npm package name.