node-pg-migrate
Node.js database migration management built exclusively for postgres.
$ npm install node-pg-migrate
Installing this module adds a runnable file to your node_modules/.bin directory. If installed globally (with the -g option), you can run pg-migrate; otherwise, run ./node_modules/.bin/pg-migrate.
You can specify your database connection information using config.
// config/default.json
{
"db": "postgres://postgres:password@localhost:5432/name"
}
or
// config/default.json
{
"db": {
"user": "postgres",
"password": "",
"host": "localhost",
"port": 5432,
"name": "name"
}
}
You can also specify your database URL by setting the DATABASE_URL environment variable.
DATABASE_URL=postgres://postgres@localhost/name node-pg-migrate
You can specify a custom JSON file with config (the format is the same as for the db entry of the config file), for example:
// path/to/config.json
{
"user": "postgres",
"password": "",
"host": "localhost",
"port": 5432,
"name": "name"
}
If a .env file exists, it will be loaded using dotenv (if installed) when running the pg-migrate binary.
Depending on your project's setup, it may make sense to write some custom grunt tasks that set this env var and run your migration commands. More on that below.
The following are the available commands:
pg-migrate create {migration-name} - creates a new migration file with the name you give it. Spaces and underscores will be replaced by dashes and a timestamp is prepended to your file name.
pg-migrate up - runs all up migrations from the current state.
pg-migrate up {N} - runs N up migrations from the current state.
pg-migrate down - runs a single down migration.
pg-migrate down {N} - runs N down migrations from the current state.
pg-migrate unlock - unlocks migrations (if a previous up/down migration failed and was not automatically unlocked).
pg-migrate redo - redoes the last migration (runs a single down migration, then a single up migration).
pg-migrate redo {N} - redoes the last N migrations (runs N down migrations, then N up migrations).
You can adjust defaults by passing arguments to pg-migrate:
config-file (f) - The file with migration JSON config (defaults to undefined)
schema (s) - The schema on which migrations will be run (defaults to public)
database-url-var (d) - Name of the env variable with the database url string (defaults to DATABASE_URL)
migrations-dir (m) - The directory containing your migration files (defaults to migrations)
migrations-schema - The schema storing the table of which migrations have been run (defaults to the same value as schema)
migrations-table (t) - The table storing which migrations have been run (defaults to pgmigrations)
check-order - Check order of migrations before running them (defaults to true; to switch it off supply --no-check-order on the command line). There should be no migration with a timestamp lesser than the last run migration.
See all by running pg-migrate --help.
Most configuration options can also be specified in the config file.
For an SSL connection to the database, you can set the PGSSLMODE environment variable to any value other than disable, e.g. PGSSLMODE=require pg-migrate up (pg will take it into account).
You can use config or your own JSON file with configuration (the config-file command line option).
Available options are:
migrations-dir, migrations-schema, migrations-table, check-order - same as above
either url or [user], [password], host (defaults to localhost), port (defaults to 5432), name - for connection details
type-shorthands - for column type shorthands
You can specify custom types which will be expanded to a column definition. For example, with
module.exports = {
  "type-shorthands": {
    id: { type: 'uuid', primaryKey: true },
    createdAt: {
      type: 'timestamp',
      notNull: true,
      default: new require('node-pg-migrate').PgLiteral('current_timestamp')
    }
  }
}
the call pgm.createTable('test', { id: 'id', createdAt: 'createdAt' }); will produce the SQL CREATE TABLE "test" ("id" uuid PRIMARY KEY, "createdAt" timestamp DEFAULT current_timestamp NOT NULL);
pg-migrate automatically checks that no other migration is running. To do so, it locks the migration table and writes a comment there. There are other ways to do this, but I chose this one (see #88).
In some circumstances the lock may not be released (error message: Error: Unable to fetch migrations: Error: Another migration is already running). In that case you need to run pg-migrate unlock to release the lock again.
When you run pg-migrate create
a new migration file is created that looks like this:
exports.up = function up(pgm) {
}
exports.down = function down(pgm) {
}
pgm
is a helper object that provides migration operations and run
is the callback to call when you are done.
IMPORTANT
Calling the migration functions on pgm
doesn't actually migrate your database. These functions just add sql commands to a stack that is run.
If exports.down
is not present in a migration, pg-migrate will try to automatically infer the operations that make up the down migration by reversing the operations of the up migration. Only some operations have automatically inferrable equivalents (details below on each operation). Sometimes, migrations are destructive and cannot be rolled back. In this case, you can set exports.down = false
to tell pg-migrate that the down migration is impossible.
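For illustration, a minimal migration that relies on the inferred down operation might look like this (the file name, table, and column names are hypothetical):
// migrations/1481574372451_create-users.js (hypothetical)
exports.up = function up(pgm) {
  pgm.createTable('users', {
    id: 'id', // shorthand for { type: 'serial', primaryKey: true }
    email: { type: 'text', notNull: true, unique: true },
    createdAt: { type: 'timestamp', notNull: true, default: pgm.func('current_timestamp') }
  });
};
// no exports.down needed here - pg-migrate can infer dropTable('users')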
In some cases, you may want to perform some async operation during a migration, for example fetching some information from an external server, or inserting some data into the database. To make a migration block operate in async mode, just add another callback argument to the function signature. However, be aware that NONE of the pgm operations will be executed until run()
is called. Here's an example:
exports.up = function up(pgm, run) {
doSomethingAsync(function() {
run();
});
}
Another way to perform an async operation is to return a Promise from the up or down function. Example:
exports.up = function(pgm) {
return new Promise(resolve => {
// doSomethingAsync
resolve();
});
}
The pgm
object that is passed to each up/down block has many different operations available. Each operation is simply a function that generates some sql and stores it in the current pgm context.
By default, each migration will be run in a transaction. To disable transactions for a specific migration, call pgm.noTransaction().
This is required for some SQL operations that cannot be run within a transaction. It should be used carefully.
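As a rough sketch (table and column names are hypothetical), a migration that needs to run outside a transaction might look like:
exports.up = function up(pgm) {
  pgm.noTransaction();
  // CREATE INDEX CONCURRENTLY cannot run inside a transaction block
  pgm.createIndex('users', 'email', { concurrently: true });
};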
pgm.createTable( tablename, columns, options )
Create a new table - postgres docs
Arguments:
tablename [string] - name for the new table
columns [object] - column names / options -- see column definitions section
options [object] - table options (optional)
  temporary [bool] - default false
  ifNotExists [bool] - default false
  inherits [string] - table(s) to inherit from
  constraints [object] - table constraints
    check [string] - sql for a check constraint
    unique [string or array of strings] - names of unique columns
    primaryKey [string or array] - names of primary columns
    exclude [string] - sql for an exclude constraint
    deferrable [boolean] - flag for deferrable table
    deferred [boolean] - flag for initially deferred deferrable table
    foreignKeys [object or array of objects] - foreign keys specification
      columns [string or array of strings] - names of columns
      references [string] - names of foreign table and column names
      onDelete [string] - action to perform on delete
      onUpdate [string] - action to perform on update
      match [string] - FULL or SIMPLE
  like [string] - table(s) to inherit from
Reverse Operation: dropTable
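For example, a hypothetical table with a foreign key constraint could be created like this (all table and column names, and the references string format, are illustrative only):
pgm.createTable('orders', {
  id: 'id',
  userId: { type: 'integer', notNull: true },
  total: { type: 'numeric', notNull: true }
}, {
  ifNotExists: true,
  constraints: {
    foreignKeys: {
      columns: 'userId',
      references: 'users (id)',
      onDelete: 'CASCADE'
    }
  }
});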
pgm.dropTable( tablename, options )
Drop existing table - postgres docs
Arguments:
tablename [string] - name of the table to drop
options [object] - options:
  ifExists [boolean] - drops table only if it exists
  cascade [boolean] - drops also dependent objects
pgm.renameTable( tablename, new_tablename )
Rename a table - postgres docs
Arguments:
tablename [string] - name of the table to rename
new_tablename [string] - new name of the table
Reverse Operation: same operation in opposite direction
pgm.addColumns( tablename, new_columns )
Add columns to an existing table - postgres docs
Arguments:
tablename [string] - name of the table to alter
new_columns [object] - column names / options -- see column definitions section
Aliases: addColumn
Reverse Operation: dropColumns
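A sketch of adding several columns at once, including a reference column (all names are hypothetical):
pgm.addColumns('users', {
  lastLogin: { type: 'timestamp' },
  teamId: { type: 'integer', references: 'teams', onDelete: 'SET NULL' }
});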
pgm.dropColumns( tablename, columns, options )
Drop columns from a table - postgres docs
Arguments:
tablename [string] - name of the table to alter
columns [array of strings or object] - columns to drop (if object, uses keys)
options [object] - options:
  ifExists [boolean] - drops column only if it exists
  cascade [boolean] - drops also dependent objects
Aliases: dropColumn
pgm.renameColumn( tablename, old_column_name, new_column_name )
Rename a column - postgres docs
Arguments:
tablename [string] - name of the table to alter
old_column_name [string] - current column name
new_column_name [string] - new column name
Reverse Operation: same operation in opposite direction
pgm.alterColumn( tablename, column_name, column_options )
Alter a column (default value, type, allow null) - postgres docs
Arguments:
tablename [string] - name of the table to alter
column_name [string] - column to alter
column_options [object] - optional new column options
  default [string or null] - null, string
  type [string] - new datatype
  notNull [boolean] - sets NOT NULL if true
  using [string] - adds USING clause to change values in column
  collation [string] - adds COLLATE clause to change values in column
pgm.addConstraint( tablename, constraint_name, expression )
Add a named column constraint - postgres docs
Arguments:
tablename [string] - name of the table to alter
constraint_name [string] - name for the constraint
expression [string or object] - constraint expression (raw sql) or see constraints section of create table
Aliases: createConstraint
Reverse Operation: dropConstraint
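For instance, a check constraint could be added with a raw SQL expression (the table, constraint, and column names are hypothetical):
pgm.addConstraint('users', 'users_age_check', 'CHECK (age >= 18)');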
pgm.dropConstraint( tablename, constraint_name, options )
Drop a named column constraint - postgres docs
Arguments:
tablename [string] - name of the table to alter
constraint_name [string] - name for the constraint
options [object] - options:
  ifExists [boolean] - drops constraint only if it exists
  cascade [boolean] - drops also dependent objects
pgm.renameConstraint( tablename, old_constraint_name, new_constraint_name )
Rename a constraint - postgres docs
Arguments:
tablename [string] - name of the table to alter
old_constraint_name [string] - current constraint name
new_constraint_name [string] - new constraint name
Reverse Operation: same operation in opposite direction
pgm.createIndex( tablename, columns, options )
Create a new index - postgres docs
Arguments:
tablename [string] - name of the table to alter
columns [string or array of strings] - columns to add to the index
options [index options] - optional options:
  name [string] - name for the index (one will be inferred from table/columns if undefined)
  unique [boolean] - set to true if this is a unique index
  where [string] - raw sql for where clause of index
  concurrently [boolean] - create this index concurrently
  method [string] - btree | hash | gist | spgist | gin
Aliases: addIndex
Reverse Operation: dropIndex
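A possible index creation, assuming a hypothetical users table with lastName and firstName columns:
pgm.createIndex('users', ['lastName', 'firstName'], {
  name: 'users_name_idx',
  method: 'btree'
});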
pgm.dropIndex( tablename, columns, options )
Drop an index - postgres docs
Arguments:
tablename [string] - name of the table to alter
columns [string or array of strings] - column names, used only to infer an index name
options [index options] - optional options:
  name [string] - name of the index to drop
pgm.createExtension( extension )
Install postgres extension(s) - postgres docs
Arguments:
extension [string or array of strings] - name(s) of extensions to install
Aliases: addExtension
Reverse Operation: dropExtension
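For example (extension names as in standard PostgreSQL contrib):
pgm.createExtension('uuid-ossp');
// or several at once
pgm.createExtension(['uuid-ossp', 'hstore']);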
pgm.dropExtension( extension )
Un-install postgres extension(s) - postgres docs
Arguments:
extension [string or array of strings] - name(s) of extensions to uninstall
pgm.createType( type_name, values )
Create a new data type - postgres docs
Arguments:
type_name [string] - name of the new type
values [array of strings or object] - if an array, the contents are possible values for an enum type; if an object, names and types for a composite type
Aliases: addType
Reverse Operation: dropType
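A sketch of both forms, with hypothetical type names:
// enum type: values given as an array of strings
pgm.createType('order_status', ['pending', 'paid', 'shipped']);
// composite type: values given as an object of attribute names and types
pgm.createType('complex_number', { r: 'double precision', i: 'double precision' });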
pgm.dropType( type_name )
Drop a custom data type - postgres docs
Arguments:
type_name [string] - name of the type to drop
pgm.renameType( type_name, new_type_name )
Rename a data type - postgres docs
Arguments:
type_name [string] - name of the type to rename
new_type_name [string] - name of the new type
pgm.addTypeAttribute( type_name, attribute_name, attribute_type )
Add attribute to an existing data type - postgres docs
Arguments:
type_name [string] - name of the type
attribute_name [string] - name of the attribute to add
attribute_type [string] - type of the attribute to add
pgm.dropTypeAttribute( type_name, attribute_name, options )
Drop attribute from a data type - postgres docs
Arguments:
type_name [string] - name of the type
attribute_name [string] - name of the attribute to drop
options [object] - options:
  ifExists [boolean] - default false
pgm.setTypeAttribute( type_name, attribute_name, attribute_type )
Set data type of an existing attribute of data type - postgres docs
Arguments:
type_name [string] - name of the type
attribute_name [string] - name of the attribute
attribute_type [string] - new type of the attribute
pgm.addTypeValue( type_name, value, options )
Add value to a list of enum data type - postgres docs
Arguments:
type_name [string] - name of the type
value [string] - value to add to the list
options [object] - options:
  ifNotExists [boolean] - default false
  before [string] - value before which the new value should be added
  after [string] - value after which the new value should be added
pgm.renameTypeAttribute( type_name, attribute_name, new_attribute_name )
Rename an attribute of data type - postgres docs
Arguments:
type_name [string] - name of the type
attribute_name [string] - name of the attribute to rename
new_attribute_name [string] - new name of the attribute
pgm.createRole( role_name, role_options )
Create a new role - postgres docs
Arguments:
role_name [string] - name of the new role
role_options [object] - options:
  superuser [boolean] - default false
  createdb [boolean] - default false
  createrole [boolean] - default false
  inherit [boolean] - default true
  login [boolean] - default false
  replication [boolean] - default false
  bypassrls [boolean]
  limit [number] - connection limit for the role
  password [string] - password of the role
  encrypted [boolean] - default true
  valid [string] - timestamp
  inRole [string or array of strings] - role or array of roles
  role [string or array of strings] - role or array of roles
  admin [string or array of strings] - role or array of roles
Reverse Operation: dropRole
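A sketch of creating a login role (the role name and password are placeholders):
pgm.createRole('app_user', {
  login: true,
  password: 'change-me',
  inherit: true
});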
pgm.dropRole( role_name )
Drop a role - postgres docs
Arguments:
role_name [string] - name of the role to drop
pgm.alterRole( role_name, role_options )
Alter a role - postgres docs
Arguments:
role_name [string] - name of the role to alter
role_options [object] - see createRole
pgm.renameRole( old_role_name, new_role_name )
Rename a role - postgres docs
Arguments:
old_role_name [string] - old name of the role
new_role_name [string] - new name of the role
pgm.createFunction( function_name, function_params, function_options, definition )
Create a new function - postgres docs
Arguments:
function_name [string] - name of the new function
function_params [array] - parameters of the new function
  Either an array of strings or objects. If an array of strings, it is interpreted as is; if an array of objects:
  mode [string] - IN, OUT, INOUT, or VARIADIC
  name [string] - name of argument
  type [string] - datatype of argument
  default [string] - default value of argument
function_options [object] - options:
  returns [string] - returns clause
  language [string] - language name of function definition
  replace [boolean] - create or replace function
  delimiter [string] - delimiter for definition
  window [boolean] - window function
  behavior [string] - IMMUTABLE, STABLE, or VOLATILE
  onNull [boolean] - RETURNS NULL ON NULL INPUT
  parallel [string] - UNSAFE, RESTRICTED, or SAFE
definition [string] - definition of function
Reverse Operation: dropFunction
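A sketch of creating a simple plpgsql function (the function name and body are hypothetical):
pgm.createFunction(
  'add_integers',
  [
    { name: 'a', type: 'integer' },
    { name: 'b', type: 'integer' }
  ],
  { returns: 'integer', language: 'plpgsql', replace: true, behavior: 'IMMUTABLE' },
  `
BEGIN
  RETURN a + b;
END;
`
);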
pgm.dropFunction( function_name, function_params, drop_options )
Drop a function - postgres docs
Arguments:
function_name [string] - name of the function to drop
function_params [array] - see createFunction
drop_options [object] - options:
  ifExists [boolean] - drops function only if it exists
  cascade [boolean] - drops also dependent objects
pgm.renameFunction( old_function_name, function_params, new_function_name )
Rename a function - postgres docs
Arguments:
old_function_name [string] - old name of the function
function_params [array] - see createFunction
new_function_name [string] - new name of the function
pgm.createTrigger( table_name, trigger_name, trigger_options )
Create a new trigger - postgres docs
Arguments:
table_name [string] - name of the table where the new trigger will live
trigger_name [string] - name of the new trigger
trigger_options [object] - options:
  when [string] - BEFORE, AFTER, or INSTEAD OF
  operation [string or array of strings] - INSERT, UPDATE[ OF ...], DELETE or TRUNCATE
  constraint [boolean] - creates constraint trigger
  function [string] - the name of procedure to execute
  level [string] - STATEMENT, or ROW
  condition [string] - condition to be met to execute trigger
  deferrable [boolean] - flag for deferrable constraint trigger
  deferred [boolean] - flag for initially deferred deferrable constraint trigger
  definition [string] - optional definition of function which will be created with same name as trigger
Reverse Operation: dropTrigger
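A sketch of a row-level trigger wired to an existing procedure (the table, trigger, and function names are hypothetical):
pgm.createTrigger('users', 'users_updated_at_trigger', {
  when: 'BEFORE',
  operation: 'UPDATE',
  level: 'ROW',
  function: 'set_updated_at' // procedure assumed to exist already
});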
pgm.dropTrigger( table_name, trigger_name, drop_options )
Drop a trigger - postgres docs
Arguments:
table_name [string] - name of the table where the trigger lives
trigger_name [string] - name of the trigger to drop
drop_options [object] - options:
  ifExists [boolean] - drops trigger only if it exists
  cascade [boolean] - drops also dependent objects
pgm.renameTrigger( table_name, old_trigger_name, new_trigger_name )
Rename a trigger - postgres docs
Arguments:
table_name [string] - name of the table where the trigger lives
old_trigger_name [string] - old name of the trigger
new_trigger_name [string] - new name of the trigger
pgm.createSchema( schema_name, schema_options )
Create a new schema - postgres docs
Arguments:
schema_name [string] - name of the new schema
schema_options [object] - options:
  ifNotExists [boolean] - adds IF NOT EXISTS clause
  authorization [string] - alternative user to own the new schema
Reverse Operation: dropSchema
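For example (the schema name is hypothetical):
pgm.createSchema('reporting', { ifNotExists: true });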
pgm.dropSchema( schema_name, drop_options )
Drop a schema - postgres docs
Arguments:
schema_name [string] - name of the schema to drop
drop_options [object] - options:
  ifExists [boolean] - drops schema only if it exists
  cascade [boolean] - drops also dependent objects
pgm.renameSchema( old_schema_name, new_schema_name )
Rename a schema - postgres docs
Arguments:
old_schema_name [string] - old name of the schema
new_schema_name [string] - new name of the schema
pgm.sql( sql )
Run raw sql -- with some optional very basic mustache templating
Arguments:
sql [string] - SQL query to run
args [object] - (optional) key/val of arguments to replace
pgm.func( sql )
Inserts a raw string, which is not escaped, e.g. pgm.func('CURRENT_TIMESTAMP') to use in the default option of a column definition.
Arguments:
sql [string] - string to not be escaped
The createTable and addColumns methods both take a columns argument that specifies column names and options. It is an object (key/value) where each key is the name of the column, and the value is another object that defines the options for the column.
type [string] - data type (use normal postgres types)
collation [string] - collation of data type
unique [boolean] - set to true to add a unique constraint on this column
primaryKey [boolean] - set to true to make this column the primary key
notNull [boolean] - set to true to make this column not null
default [string] - adds DEFAULT clause for column
check [string] - sql for a check constraint for this column
references [string] - a table name that this column is a foreign key to
onDelete [string] - adds ON DELETE constraint for a reference column
onUpdate [string] - adds ON UPDATE constraint for a reference column
match [string] - FULL or SIMPLE
Data type strings will be passed through directly to postgres, so write types as you would if you were writing the queries by hand.
There are some aliases on types to make things more foolproof: (int, string, float, double, datetime, bool)
There is a shorthand to pass only the type instead of an options object:
pgm.addColumns('myTable', { age: 'integer' });
is equivalent to
pgm.addColumns('myTable', { age: { type: 'integer' } });
There is a shorthand for normal auto-increment IDs:
pgm.addColumns('myTable', { id: 'id' });
is equivalent to
pgm.addColumns('myTable', { id: { type: 'serial', primaryKey: true } });
Instead of passing a string as the name to pgm functions, you can pass an object with keys schema and name. E.g.
pgm.createTable( {schema: 'my_schema', name: 'my_table_name'}, {id: 'serial'});
will generate
CREATE TABLE "my_schema"."my_table_name" (
"id" serial
);
Why only Postgres? - By writing this migration tool specifically for postgres instead of accommodating many databases, we can actually provide a full-featured tool that is much simpler to use and maintain. I was tired of using crippled database tools just in case one day we switch our database.
Async / Sync - Everything is async in node, and that's great, but a migration tool should really just be a fancy wrapper that generates SQL. Most other migration tools force you to bring in control flow libraries or wrap everything in callbacks as soon as you want to do more than a single operation in a migration. Plus by building up a stack of operations, we can automatically infer down migrations (sometimes) to save even more time.
Naming / Raw Sql - Many tools force you to use their constants to do things like specify data types. Again, this tool should be a fancy wrapper that generates SQL, so whenever possible, it should just pass through user values directly to the SQL. The hard part is remembering the syntax of the specific operation, not remembering how to type "timestamp"!
The MIT License (MIT)
Copyright (c) 2016 Jan Dolezel <dolezel.jan@gmail.com>
Copyright (c) 2014 Theo Ephraim
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.