django-pg-zero-downtime-migrations

Django postgresql backend that applies migrations with respect to database locks.

Installation

pip install django-pg-zero-downtime-migrations

Usage

To enable zero downtime migrations for postgres, just set up the django backend provided by this package and add the safest settings:

DATABASES = {
    'default': {
        'ENGINE': 'django_zero_downtime_migrations.backends.postgres',
        #'ENGINE': 'django_zero_downtime_migrations.backends.postgis',
        ...
    }
}
ZERO_DOWNTIME_MIGRATIONS_LOCK_TIMEOUT = '2s'
ZERO_DOWNTIME_MIGRATIONS_STATEMENT_TIMEOUT = '2s'
ZERO_DOWNTIME_MIGRATIONS_FLEXIBLE_STATEMENT_TIMEOUT = True
ZERO_DOWNTIME_MIGRATIONS_RAISE_FOR_UNSAFE = True

NOTE: this backend brings zero downtime improvements only for migrations (schema and RunSQL operations, but not RunPython operations); for any other purpose it works the same as the standard django backend.

NOTE: this package is in beta; please check your migrations' SQL before applying them in production, and submit an issue for any questions.

Differences with standard django backend

This backend produces the same resulting state as the standard backend (except when ZERO_DOWNTIME_MIGRATIONS_KEEP_DEFAULT=True is used with django < 5.0), but it gets there a different way and with additional guarantees that avoid stuck table locks.

This backend doesn't use transactions for migrations (except for RunPython operations), because not all SQL fixes can run inside a transaction, and skipping transactions avoids deadlocks in complex migrations. The trade-off: if a migration fails partway through a migration file's operations, you have to fix the database state manually (instead of suffering potential downtime). For that reason it's good practice to keep migration modules as small as possible. ZERO_DOWNTIME_MIGRATIONS_IDEMPOTENT_SQL=True can also automate that manual fixing.

Deployment flow

There are a few requirements for zero downtime deployment:

  1. We have one database;
  2. We have several application instances - the application should always be available, even while one instance restarts;
  3. We have a load balancer in front of the instances;
  4. Our application works fine before, during, and after the migration - the old application works fine with both the old and new database schema versions;
  5. Our application works fine before, during, and after each instance update - both the old and new application versions work fine with the new database schema version.

[diagram: deployment timeline]

Flow:

  1. apply migrations
  2. disconnect an instance from the balancer, restart it, and return it to the balancer - repeat for each instance one by one

If a deployment doesn't satisfy the zero downtime deployment rules, split it into smaller deployments.

[diagram: deployment flow]

Settings

ZERO_DOWNTIME_MIGRATIONS_LOCK_TIMEOUT

Applies lock_timeout to SQL statements that require an ACCESS EXCLUSIVE lock, default None:

ZERO_DOWNTIME_MIGRATIONS_LOCK_TIMEOUT = '2s'

Allowed values:

  • None - the current postgres setting is used
  • other values - the timeout is applied; 0 and equivalents disable the timeout
ZERO_DOWNTIME_MIGRATIONS_STATEMENT_TIMEOUT

Applies statement_timeout to SQL statements that require an ACCESS EXCLUSIVE lock, default None:

ZERO_DOWNTIME_MIGRATIONS_STATEMENT_TIMEOUT = '2s'

Allowed values:

  • None - the current postgres setting is used
  • other values - the timeout is applied; 0 and equivalents disable the timeout
ZERO_DOWNTIME_MIGRATIONS_FLEXIBLE_STATEMENT_TIMEOUT

Sets statement_timeout to 0ms for SQL statements that require a SHARE UPDATE EXCLUSIVE lock. This is useful when statement_timeout is enabled globally and you run long operations like index creation or constraint validation, default False:

ZERO_DOWNTIME_MIGRATIONS_FLEXIBLE_STATEMENT_TIMEOUT = True
ZERO_DOWNTIME_MIGRATIONS_RAISE_FOR_UNSAFE

When enabled, potentially unsafe migrations raise an error instead of running, default False:

ZERO_DOWNTIME_MIGRATIONS_RAISE_FOR_UNSAFE = True
ZERO_DOWNTIME_DEFERRED_SQL

Defines how deferred sql is applied, default True:

ZERO_DOWNTIME_DEFERRED_SQL = True

Allowed values:

  • True - run deferred sql the default django way
  • False - run deferred sql as soon as possible
ZERO_DOWNTIME_MIGRATIONS_IDEMPOTENT_SQL

Defines idempotent mode, default False:

ZERO_DOWNTIME_MIGRATIONS_IDEMPOTENT_SQL = False

Allowed values:

  • True - skip SQL migration statements that have already been applied
  • False - standard non-atomic django behaviour

Because this backend doesn't use transactions for migrations, any failed migration can leave the process stopped in an intermediate state. To avoid manual schema manipulation, idempotent mode lets you rerun a failed migration once the underlying issue is fixed (e.g. a data issue or long-running CRUD queries).

NOTE: idempotent mode's checks rely only on names and on index and constraint validity, so it can miss name collisions; it is not recommended for CI checks.

ZERO_DOWNTIME_MIGRATIONS_EXPLICIT_CONSTRAINTS_DROP

Defines whether foreign keys, unique constraints, and indexes are dropped explicitly before a table or column is dropped, default True:

ZERO_DOWNTIME_MIGRATIONS_EXPLICIT_CONSTRAINTS_DROP = True

Allowed values:

  • True - before dropping a table, drop all foreign keys referencing it; before dropping a column, drop all foreign keys referencing it, unique constraints on it, and indexes using it.
  • False - standard django behaviour, which drops constraints in CASCADE mode (some constraints can still be dropped explicitly).

Explicitly dropping constraints and indexes before dropping tables or columns allows for splitting schema-only changes with an ACCESS EXCLUSIVE lock and the deletion of physical files, which can take significant time and cause downtime.
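With the setting enabled, the SQL generated for dropping a column looks roughly like this sketch (table, constraint, and index names are hypothetical, and the exact SQL the backend emits may differ):

```sql
-- explicit drops first: each is a fast, metadata-only change under ACCESS EXCLUSIVE
ALTER TABLE tbl DROP CONSTRAINT tbl_col_fk;    -- foreign key referencing the column
ALTER TABLE tbl DROP CONSTRAINT tbl_col_uniq;  -- unique constraint on the column
DROP INDEX tbl_col_idx;                        -- index using the column
-- then the column drop itself; physical file cleanup happens outside the lock
ALTER TABLE tbl DROP COLUMN col;
```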

ZERO_DOWNTIME_MIGRATIONS_KEEP_DEFAULT

Defines whether code defaults are kept or dropped at the database level when adding a new column, default False:

ZERO_DOWNTIME_MIGRATIONS_KEEP_DEFAULT = False

Allowed values:

  • True - after adding a column with a code default, the database-level default is kept. This allows ALTER TABLE ADD COLUMN SET DEFAULT NOT NULL to be used as a safe operation, which is much simpler and more efficient than creating the column without a database default and populating it afterwards.
  • False - after adding a column with a code default, the database-level default is dropped; this is the standard django behaviour.

NOTE: this option only works for django < 5.0; in django 5.0+ an explicit db_default should be used instead.

PgBouncer and timeouts

If you use PgBouncer and expect the timeouts to work as described, make sure migrations run through a session pool_mode or a direct database connection.

How it works

Postgres table level locks

Postgres has several table-level locks that can conflict with each other (https://www.postgresql.org/docs/current/static/explicit-locking.html#LOCKING-TABLES):

|                        | ACCESS SHARE | ROW SHARE | ROW EXCLUSIVE | SHARE UPDATE EXCLUSIVE | SHARE | SHARE ROW EXCLUSIVE | EXCLUSIVE | ACCESS EXCLUSIVE |
| ---------------------- | ------------ | --------- | ------------- | ---------------------- | ----- | ------------------- | --------- | ---------------- |
| ACCESS SHARE           |              |           |               |                        |       |                     |           | X                |
| ROW SHARE              |              |           |               |                        |       |                     | X         | X                |
| ROW EXCLUSIVE          |              |           |               |                        | X     | X                   | X         | X                |
| SHARE UPDATE EXCLUSIVE |              |           |               | X                      | X     | X                   | X         | X                |
| SHARE                  |              |           | X             | X                      |       | X                   | X         | X                |
| SHARE ROW EXCLUSIVE    |              |           | X             | X                      | X     | X                   | X         | X                |
| EXCLUSIVE              |              | X         | X             | X                      | X     | X                   | X         | X                |
| ACCESS EXCLUSIVE       | X            | X         | X             | X                      | X     | X                   | X         | X                |

Migration and business logic locks

Let's split these locks between migration and business logic operations.

  • Migration operations run synchronously in one thread and cover schema migrations (data migrations conflict with business logic operations the same way business logic operations conflict with each other).
  • Business logic operations run concurrently.
Migration locks

| lock                   | operations                                                                             |
| ---------------------- | -------------------------------------------------------------------------------------- |
| ACCESS EXCLUSIVE       | CREATE SEQUENCE, DROP SEQUENCE, CREATE TABLE, DROP TABLE *, ALTER TABLE **, DROP INDEX |
| SHARE                  | CREATE INDEX                                                                           |
| SHARE UPDATE EXCLUSIVE | CREATE INDEX CONCURRENTLY, DROP INDEX CONCURRENTLY, ALTER TABLE VALIDATE CONSTRAINT *** |

*: CREATE SEQUENCE, DROP SEQUENCE, CREATE TABLE, and DROP TABLE shouldn't cause conflicts, because your business logic shouldn't operate on newly created tables yet and shouldn't still operate on deleted tables.

**: Not all ALTER TABLE operations take an ACCESS EXCLUSIVE lock, but all of django's current migrations take it: https://github.com/django/django/blob/master/django/db/backends/base/schema.py, https://github.com/django/django/blob/master/django/db/backends/postgresql/schema.py and https://www.postgresql.org/docs/current/static/sql-altertable.html.

***: Django doesn't generate VALIDATE CONSTRAINT, but we will use it for some cases.

Business logic locks

| lock          | operations             | conflicts with lock                                  | conflicts with operations              |
| ------------- | ---------------------- | ---------------------------------------------------- | -------------------------------------- |
| ACCESS SHARE  | SELECT                 | ACCESS EXCLUSIVE                                     | ALTER TABLE, DROP INDEX                |
| ROW SHARE     | SELECT FOR UPDATE      | ACCESS EXCLUSIVE, EXCLUSIVE                          | ALTER TABLE, DROP INDEX                |
| ROW EXCLUSIVE | INSERT, UPDATE, DELETE | ACCESS EXCLUSIVE, EXCLUSIVE, SHARE ROW EXCLUSIVE, SHARE | ALTER TABLE, DROP INDEX, CREATE INDEX |

So all django schema changes to an existing table conflict with business logic, but fortunately they are generally either safe or have a safe alternative.

Postgres row level locks

As business logic mostly works with table rows, it's also important to understand lock conflicts at the row level (https://www.postgresql.org/docs/current/static/explicit-locking.html#LOCKING-ROWS):

|                   | FOR KEY SHARE | FOR SHARE | FOR NO KEY UPDATE | FOR UPDATE |
| ----------------- | ------------- | --------- | ----------------- | ---------- |
| FOR KEY SHARE     |               |           |                   | X          |
| FOR SHARE         |               |           | X                 | X          |
| FOR NO KEY UPDATE |               | X         | X                 | X          |
| FOR UPDATE        | X             | X         | X                 | X          |

The main point here: if two transactions update the same row, the second transaction waits until the first completes. So for business logic and data migrations it's better to avoid updates over the whole table and use batch operations instead.

NOTE: batch operations can also run faster, because postgres can use a more optimal execution plan with indexes for a small data range.
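For example, a whole-table data migration can be split into batches over the primary key (table name, column, and batch size here are hypothetical):

```sql
-- instead of a single UPDATE tbl SET col = 0; that locks every row until commit,
-- update a bounded id range per statement so row locks are short-lived
UPDATE tbl SET col = 0 WHERE id >= 1     AND id < 10001;
UPDATE tbl SET col = 0 WHERE id >= 10001 AND id < 20001;
-- ...continue until the maximum id is reached
```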

Transactions FIFO waiting

[diagram: postgres FIFO]

The same diagram appears in this interesting article: http://pankrat.github.io/2015/django-migrations-without-downtimes/.

From this diagram we can extract several concerns:

  1. operation time - the time spent changing the schema; long-running operations on tables with many rows, like CREATE INDEX or ALTER TABLE ADD CONSTRAINT, need a safe equivalent.
  2. waiting time - your migration waits until all in-flight transactions complete, which is a problem for long-running operations/transactions such as analytics; avoid them or disable them during migration.
  3. queries per second + execution time and the connection pool - many queries, especially long-running ones, can consume all available database connections until the lock is released; mitigate this by running migrations at the least busy time, decreasing query count and execution time, and splitting data.
  4. too many operations in one transaction - all the previous issues apply per operation, so the more operations in one transaction, the more likely you are to hit them; avoid too many simultaneous operations in a single transaction (or don't run them in a transaction at all, being careful when an operation fails).

Dealing with timeouts

Postgres has two settings for dealing with the waiting time and operation time shown in the diagram: lock_timeout and statement_timeout.

SET lock_timeout TO '2s' allows you to avoid downtime when a long-running query/transaction is active before the migration runs (https://www.postgresql.org/docs/current/static/runtime-config-client.html#GUC-LOCK-TIMEOUT).

SET statement_timeout TO '2s' allows you to avoid downtime when a migration query itself runs too long (https://www.postgresql.org/docs/current/static/runtime-config-client.html#GUC-STATEMENT-TIMEOUT).
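Put together, dangerous statements end up wrapped roughly like this sketch (values match the settings example above; the table and column names are hypothetical):

```sql
SET lock_timeout TO '2s';       -- abort if the ACCESS EXCLUSIVE lock isn't acquired in time
SET statement_timeout TO '2s';  -- abort if the statement itself runs too long
ALTER TABLE tbl ADD COLUMN new_col integer NULL;
```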

Deadlocks

Deadlocks don't cause downtime by themselves, but too many operations in one transaction can acquire highly conflicting locks and release them only at commit or rollback. So it's a good idea to avoid ACCESS EXCLUSIVE lock operations and long-running operations within one transaction. Deadlocks can also leave your migration stuck during production deployment when different tables get locked, for example by a FOREIGN KEY, which takes an ACCESS EXCLUSIVE lock on two tables.
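For example, adding a foreign key locks both tables; two concurrent transactions taking such locks on the same tables in opposite order can deadlock (table and constraint names here are hypothetical):

```sql
-- takes ACCESS EXCLUSIVE locks on both orders and users
ALTER TABLE orders ADD CONSTRAINT orders_user_fk
    FOREIGN KEY (user_id) REFERENCES users (id);
```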

Rows and values storing

Postgres stores values of different types in different ways. If you convert one type to another that is stored differently, postgres has to rewrite all the values. Fortunately, some types are stored the same way and postgres needs to do nothing to change the type; in other cases postgres only needs to check that all existing values satisfy the new type's limitations, for example a string length.

Multiversion Concurrency Control

According to the documentation (https://www.postgresql.org/docs/current/static/mvcc-intro.html), data consistency in postgres is maintained using a multiversion model, meaning each SQL statement sees a snapshot of the data. One advantage: adding and deleting columns without indexes, constraints, or defaults doesn't change existing data - a new version of a row is created on INSERT and UPDATE, and DELETE just marks the record expired. The garbage is collected later by VACUUM or AUTOVACUUM.

Django migrations hacks

Any schema change can be implemented by creating a new table and copying the data into it, but that can take significant time.

| #  | name                                       | safe | safe alternative             | description |
| -- | ------------------------------------------ | ---- | ---------------------------- | ----------- |
| 1  | CREATE SEQUENCE                            | X    |                              | safe operation, because your business logic shouldn't operate with the new sequence at migration time * |
| 2  | DROP SEQUENCE                              | X    |                              | safe operation, because your business logic shouldn't operate with this sequence at migration time * |
| 3  | CREATE TABLE                               | X    |                              | safe operation, because your business logic shouldn't operate with the new table at migration time * |
| 4  | DROP TABLE                                 | X    |                              | safe operation, because your business logic shouldn't operate with this table at migration time * |
| 5  | ALTER TABLE RENAME TO                      |      | use updatable view           | unsafe operation, because it's too hard to write business logic that operates with two tables simultaneously; use a temporary updatable view and switch names in a transaction * |
| 6  | ALTER TABLE SET TABLESPACE                 |      | add new table and copy data  | unsafe operation, but you probably don't need it at all or often * |
| 7  | ALTER TABLE ADD COLUMN                     | X    |                              | safe operation without SET NOT NULL, PRIMARY KEY, or UNIQUE * |
| 8  | ALTER TABLE ADD COLUMN SET DEFAULT         | X    |                              | safe operation; however, it can be unsafe when a code default is used with NOT NULL - with db_default or NULL there is no issue * |
| 9  | ALTER TABLE ADD COLUMN SET NOT NULL        | +/-  |                              | unsafe operation: it doesn't work without SET DEFAULT, and after the migration old code can insert rows without the new column and raise an exception; use ALTER TABLE ADD COLUMN SET DEFAULT with db_default, or ALTER TABLE ADD COLUMN, then populate the column, then ALTER TABLE ALTER COLUMN SET NOT NULL * and ** |
| 10 | ALTER TABLE ADD COLUMN PRIMARY KEY         |      | add index and add constraint | unsafe operation, because the migration spends time in CREATE INDEX; use ALTER TABLE ADD COLUMN, then CREATE INDEX CONCURRENTLY, then ALTER TABLE ADD CONSTRAINT PRIMARY KEY USING INDEX *** |
| 11 | ALTER TABLE ADD COLUMN UNIQUE              |      | add index and add constraint | unsafe operation, because the migration spends time in CREATE INDEX; use ALTER TABLE ADD COLUMN, then CREATE INDEX CONCURRENTLY, then ALTER TABLE ADD CONSTRAINT UNIQUE USING INDEX *** |
| 12 | ALTER TABLE ALTER COLUMN TYPE              | +/-  |                              | unsafe operation, because the migration spends time checking that all values in the column are valid or changing the type, but some changes are safe **** |
| 13 | ALTER TABLE ALTER COLUMN SET NOT NULL      |      | add check constraint before  | unsafe operation, because the migration spends time checking that all values in the column are NOT NULL; use ALTER TABLE ADD CONSTRAINT CHECK, then ALTER TABLE VALIDATE CONSTRAINT, then ALTER TABLE ALTER COLUMN SET NOT NULL ** |
| 14 | ALTER TABLE ALTER COLUMN DROP NOT NULL     | X    |                              | safe operation |
| 15 | ALTER TABLE ALTER COLUMN SET DEFAULT       | X    |                              | safe operation |
| 16 | ALTER TABLE ALTER COLUMN DROP DEFAULT      | X    |                              | safe operation |
| 17 | ALTER TABLE DROP COLUMN                    | X    |                              | safe operation, because your business logic shouldn't operate with this column at migration time; still, better to ALTER TABLE ALTER COLUMN DROP NOT NULL, ALTER TABLE DROP CONSTRAINT, and DROP INDEX first * and ***** |
| 18 | ALTER TABLE RENAME COLUMN                  |      | use updatable view           | unsafe operation, because it's too hard to write business logic that operates with two columns simultaneously; use a temporary updatable view and switch names in a transaction * |
| 19 | ALTER TABLE ADD CONSTRAINT CHECK           |      | add as not valid and validate | unsafe operation, because the migration spends time checking the constraint |
| 20 | ALTER TABLE DROP CONSTRAINT (CHECK)        | X    |                              | safe operation |
| 21 | ALTER TABLE ADD CONSTRAINT FOREIGN KEY     |      | add as not valid and validate | unsafe operation, because the migration spends time checking the constraint and locks two tables |
| 22 | ALTER TABLE DROP CONSTRAINT (FOREIGN KEY)  | X    |                              | safe operation, locks two tables |
| 23 | ALTER TABLE ADD CONSTRAINT PRIMARY KEY     |      | add index and add constraint | unsafe operation, because the migration spends time creating the index *** |
| 24 | ALTER TABLE DROP CONSTRAINT (PRIMARY KEY)  | X    |                              | safe operation *** |
| 25 | ALTER TABLE ADD CONSTRAINT UNIQUE          |      | add index and add constraint | unsafe operation, because the migration spends time creating the index *** |
| 26 | ALTER TABLE DROP CONSTRAINT (UNIQUE)       | X    |                              | safe operation *** |
| 27 | ALTER TABLE ADD CONSTRAINT EXCLUDE         |      | add new table and copy data  | |
| 28 | ALTER TABLE DROP CONSTRAINT (EXCLUDE)      | X    |                              | |
| 29 | CREATE INDEX                               |      | CREATE INDEX CONCURRENTLY    | unsafe operation, because the migration spends time creating the index |
| 30 | DROP INDEX                                 | X    | DROP INDEX CONCURRENTLY      | safe operation *** |
| 31 | CREATE INDEX CONCURRENTLY                  | X    |                              | safe operation |
| 32 | DROP INDEX CONCURRENTLY                    | X    |                              | safe operation *** |

*: the main point of migrating in production without downtime is that both the old and the new code must work correctly before and after the migration; see the Dealing with logic that should work before and after migration section.

**: postgres checks that all values in the column are NOT NULL, which takes time; see the Dealing with NOT NULL column constraint section.

***: postgres behaves the same if you skip ALTER TABLE ADD CONSTRAINT UNIQUE USING INDEX, and the difference from CONCURRENTLY remains unclear apart from locking; see Dealing with UNIQUE constraint.

****: see the Dealing with ALTER TABLE ALTER COLUMN TYPE section.

*****: if you check migrations on CI with python manage.py makemigrations --check, you can't drop a column in code without creating a migration; in that case a reversed deployment flow can be useful: apply the code to all instances first, then migrate the database.
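The "add as not valid and validate" alternative used for CHECK and FOREIGN KEY constraints in the table above can be sketched as (table and constraint names are hypothetical):

```sql
-- fast metadata-only change; existing rows are not checked yet
ALTER TABLE child ADD CONSTRAINT child_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent (id) NOT VALID;
-- full scan happens here, under the milder SHARE UPDATE EXCLUSIVE lock
ALTER TABLE child VALIDATE CONSTRAINT child_parent_fk;
```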

Dealing with logic that should work before and after migration
Adding and removing models and columns

Migrations: CREATE SEQUENCE, DROP SEQUENCE, CREATE TABLE, DROP TABLE, ALTER TABLE ADD COLUMN, ALTER TABLE DROP COLUMN.

These migrations are pretty safe, because your logic doesn't work with this data before the migration.

Rename models

Migrations: ALTER TABLE RENAME TO.

Standard django's approach doesn't allow old and new code to operate with the old and new table name simultaneously; fortunately, the following workaround renames a table by splitting the migration into a few steps:

  1. ship the code changes, but replace the standard migration with a SeparateDatabaseAndState sql operation that, in one transaction, renames the table and creates an updatable view with the old table name
    • old code can work with updatable view by old name
    • new code can work with table by new name
  2. after the new code is deployed, the old code is no longer used, so we can drop the view
    • new code can work with renamed table
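The two steps above can be sketched in SQL (old_name and new_name are hypothetical; in the actual migration this SQL sits inside a SeparateDatabaseAndState operation):

```sql
-- step 1: rename the table and create a compatibility view in one transaction
BEGIN;
ALTER TABLE old_name RENAME TO new_name;
CREATE VIEW old_name AS SELECT * FROM new_name;  -- simple single-table views are updatable
COMMIT;

-- step 2: after every instance runs the new code
DROP VIEW old_name;
```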
Rename columns

Migrations: ALTER TABLE RENAME COLUMN.

Standard django's approach doesn't allow old and new code to operate with the old and new column name simultaneously; fortunately, the following workaround renames a column by splitting the migration into a few steps:

  1. ship the code changes, but replace the standard migration with a SeparateDatabaseAndState sql operation that, in one transaction, renames the column, renames the table to a temporary name, and creates an updatable view with the old table name exposing both the old and new column names
  2. after the new code is deployed, the old code is no longer used, so in one transaction we can drop the view and rename the table back
    • new code can work with renamed column
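The two steps above can be sketched in SQL (tbl, tbl_tmp, old_col, and new_col are hypothetical; in the actual migration this SQL sits inside a SeparateDatabaseAndState operation):

```sql
-- step 1: rename the column, hide the table behind a view exposing both names
BEGIN;
ALTER TABLE tbl RENAME COLUMN old_col TO new_col;
ALTER TABLE tbl RENAME TO tbl_tmp;
CREATE VIEW tbl AS SELECT *, new_col AS old_col FROM tbl_tmp;
COMMIT;

-- step 2: after every instance runs the new code
BEGIN;
DROP VIEW tbl;
ALTER TABLE tbl_tmp RENAME TO tbl;
COMMIT;
```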
Changes for working logic

Migrations: ALTER TABLE SET TABLESPACE, ALTER TABLE ADD CONSTRAINT EXCLUDE.

For these migrations it's too hard to implement logic that works correctly for all instances, so there are two ways to deal with them:

  1. create a new table, copy the existing data, drop the old table
  2. downtime
Create column not null

Migrations: ALTER TABLE ADD COLUMN NOT NULL.

Postgres doesn't allow creating a NOT NULL column on a non-empty table unless a DEFAULT is provided, so you want ALTER TABLE ADD COLUMN DEFAULT NOT NULL. Django has two ways to define a column default: a code default, and db_default for django 5.0+. The main difference between them, for our purposes, is the migration operations they produce and how old-code inserts are handled after the migration:

Code default migration and business logic SQL:

-- migration
ALTER TABLE tbl ADD COLUMN new_col integer DEFAULT 0 NOT NULL;
ALTER TABLE tbl ALTER COLUMN new_col DROP DEFAULT;

-- business logic
INSERT INTO tbl (old_col) VALUES (1);  -- old code inserts fail
INSERT INTO tbl (old_col, new_col) VALUES (1, 1);  -- new code inserts work fine

db_default migration and business logic SQL:

-- migration
ALTER TABLE tbl ADD COLUMN new_col integer DEFAULT 0 NOT NULL;

-- business logic
INSERT INTO tbl (old_col) VALUES (1);  -- old code inserts work fine with default
INSERT INTO tbl (old_col, new_col) VALUES (1, 1);  -- new code inserts work fine

db_default is the most robust way to apply a default, and it works fine with NOT NULL constraints too. In django < 5.0 you can use ZERO_DOWNTIME_MIGRATIONS_KEEP_DEFAULT=True to emulate db_default behaviour for a field's default.

Dealing with NOT NULL column constraint

Postgres checks that all column values are NOT NULL (a full table scan) when applying ALTER TABLE ALTER COLUMN SET NOT NULL; for postgres 12+ this check is skipped if an appropriate valid CHECK CONSTRAINT exists. So to make an existing column NOT NULL the safe way, follow these steps:

  • ALTER TABLE ADD CONSTRAINT CHECK (column IS NOT NULL) NOT VALID - create an invalid check constraint for the column; this operation takes an ACCESS EXCLUSIVE lock only for the table metadata update
  • ALTER TABLE VALIDATE CONSTRAINT - validate the constraint; at this moment all column values must be NOT NULL; this operation holds a SHARE UPDATE EXCLUSIVE lock until the full table scan completes
  • ALTER TABLE ALTER COLUMN SET NOT NULL - set the column NOT NULL; the column values aren't checked when an appropriate valid CHECK CONSTRAINT exists, so this operation takes an ACCESS EXCLUSIVE lock only for the table metadata update
  • ALTER TABLE DROP CONSTRAINT - clean up the CHECK CONSTRAINT that now duplicates the column's NOT NULL; this operation takes an ACCESS EXCLUSIVE lock only for the table metadata update
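The four steps above in SQL (table, column, and constraint names are hypothetical):

```sql
ALTER TABLE tbl ADD CONSTRAINT tbl_col_notnull CHECK (col IS NOT NULL) NOT VALID;
ALTER TABLE tbl VALIDATE CONSTRAINT tbl_col_notnull;  -- full scan, SHARE UPDATE EXCLUSIVE lock
ALTER TABLE tbl ALTER COLUMN col SET NOT NULL;        -- metadata-only thanks to the valid check
ALTER TABLE tbl DROP CONSTRAINT tbl_col_notnull;      -- remove the now-redundant check
```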
Dealing with UNIQUE constraint

Postgres has two approaches to uniqueness: CREATE UNIQUE INDEX and ALTER TABLE ADD CONSTRAINT UNIQUE - both use a unique index internally. One difference we can find is that DROP INDEX CONCURRENTLY cannot be applied to a constraint. Beyond locking, though, the difference between DROP INDEX and DROP INDEX CONCURRENTLY remains unclear; as seen above, both are marked safe - DROP INDEX spends no time working, it just waits for the lock. Since django uses a constraint for uniqueness, we also have hacks to apply the constraint safely.
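The safe alternative from the hacks table - building the index concurrently, then attaching it as a constraint - can be sketched as (table, column, and index names are hypothetical):

```sql
-- build the unique index without blocking writes
CREATE UNIQUE INDEX CONCURRENTLY tbl_col_uniq ON tbl (col);
-- attach the existing index as a constraint; metadata-only change
ALTER TABLE tbl ADD CONSTRAINT tbl_col_uniq UNIQUE USING INDEX tbl_col_uniq;
```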

Dealing with ALTER TABLE ALTER COLUMN TYPE

The following type changes are safe:

  1. varchar(LESS) to varchar(MORE), where LESS < MORE
  2. varchar(ANY) to text
  3. numeric(LESS, SAME) to numeric(MORE, SAME), where LESS < MORE and the scale stays the same

For other changes, the proposal is to create a new column and copy the data into it. Some other type changes may also be safe, but you should verify them yourself.
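The new-column-and-copy approach for an unsafe type change can be sketched as (names and batch bounds are hypothetical; the copy should run in batches, and the final swap spans several deployments):

```sql
ALTER TABLE tbl ADD COLUMN col_new bigint NULL;              -- safe: nullable, no default
UPDATE tbl SET col_new = col WHERE id >= 1 AND id < 10001;   -- copy in batches
-- ...then, once the application writes both columns, drop the old
-- column and rename col_new in a later deployment
```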

django-pg-zero-downtime-migrations Changelog

0.17

  • added django 5.1 support
  • added python 3.13 support
  • added postgres 17 support
  • marked postgres 12 support as deprecated
  • marked postgres 13 support as deprecated
  • dropped django 3.2 support
  • dropped django 4.0 support
  • dropped django 4.1 support
  • dropped python 3.6 support
  • dropped python 3.7 support
  • dropped migrate_isnotnull_check_constraints command

0.16

  • changed ADD COLUMN DEFAULT NULL to a safe operation for code defaults
  • changed ADD COLUMN DEFAULT NOT NULL to a safe operation for db_default in django 5.0+
  • added the ZERO_DOWNTIME_MIGRATIONS_KEEP_DEFAULT setting and changed ADD COLUMN DEFAULT NOT NULL with this setting to a safe operation for django < 5.0
  • added the ZERO_DOWNTIME_MIGRATIONS_EXPLICIT_CONSTRAINTS_DROP setting and enabled dropping constraints and indexes before dropping a column or table
  • fixed sqlmigrate in idempotent mode
  • fixed unique constraint creation with the include parameter
  • fixed idempotent mode tests
  • updated unsafe migration links to the documentation
  • updated patched code to the latest django version
  • updated test image to ubuntu 24.04
  • improved README

0.15

  • added idempotent mode and the ZERO_DOWNTIME_MIGRATIONS_IDEMPOTENT_SQL setting
  • fixed django 3.2 degradation due to the missing skip_default_on_alter method
  • improved README
  • updated the release github action

0.14

  • fixed deferred sql errors
  • added django 5.0 support
  • added python 3.12 support
  • added postgres 16 support
  • dropped postgres 11 support
  • removed the ZERO_DOWNTIME_MIGRATIONS_USE_NOT_NULL setting
  • marked the migrate_isnotnull_check_constraints command as deprecated

0.13

  • added django 4.2 support
  • marked django 3.2 support as deprecated
  • marked django 4.0 support as deprecated
  • marked django 4.1 support as deprecated
  • marked postgres 11 support as deprecated
  • dropped postgres 10 support
  • updated the test docker image to ubuntu 22.04

0.12

  • added support for serial and integer, bigserial and bigint, as well as smallserial and smallint, implementing the same type changes as safe migrations
  • fixed the AutoField type change and concurrent insertion issue for django < 4.1
  • added sequence dropping and creation timeouts, as they can be used with the CASCADE keyword and may affect other tables
  • added django 4.1 support
  • added python 3.11 support
  • added postgres 15 support
  • marked postgres 10 support as deprecated
  • dropped django 2.2 support
  • dropped django 3.0 support
  • dropped django 3.1 support
  • dropped postgres 9.5 support
  • dropped postgres 9.6 support
  • added github actions checks for pull requests

0.11

  • fixed an issue where renaming a model while keeping db_table raised an ALTER_TABLE_RENAME error (#26)
  • added django 3.2 support
  • added django 4.0 support
  • added python 3.9 support
  • added python 3.10 support
  • added postgres 14 support
  • marked django 2.2 support as deprecated
  • marked django 3.0 support as deprecated
  • marked django 3.1 support as deprecated
  • marked python 3.6 support as deprecated
  • marked python 3.7 support as deprecated
  • marked postgres 9.5 support as deprecated
  • marked postgres 9.6 support as deprecated
  • switched to github actions for testing

0.10

  • added django 3.1 support
  • added postgres 13 support
  • dropped python 3.5 support
  • updated the test environment

0.9

  • fixed the decimal-to-float migration error
  • fixed tests for django 3.0.2 and later

0.8

  • added django 3.0 support
  • added support for concurrent index creation and removal operations
  • added support for exclude constraints as an unsafe operation
  • dropped postgres 9.4 support
  • dropped django 2.0 support
  • dropped django 2.1 support
  • removed the deprecated django_zero_downtime_migrations_postgres_backend module

0.7

  • added python 3.8 support
  • added support for postgres-specific indexes
  • improved test clarity
  • fixed regexp escaping warnings in the management command
  • fixed style checks
  • improved README
  • marked python 3.5 support as deprecated
  • marked postgres 9.4 support as deprecated
  • marked django 2.0 support as deprecated
  • marked django 2.1 support as deprecated

0.6

  • marked the ZERO_DOWNTIME_MIGRATIONS_USE_NOT_NULL option as deprecated for postgres 12+
  • added a management command for migrating from a CHECK IS NOT NULL constraint to a real NOT NULL constraint
  • added integration tests for postgres 12, postgres 11 (root), postgres 11 with compatible not null constraints, postgres 11 with standard not null constraints, as well as postgres 10, 9.6, 9.5, 9.4, and postgis databases
  • fixed bugs related to the deletion and creation of compatible check not null constraints via pg_attribute
  • minimized side effects with deferred sql execution between operations in one migration module
  • added safe NOT NULL constraint creation for postgres 12
  • added safe NOT NULL constraint creation for extra permissions for pg_catalog.pg_attribute when the ZERO_DOWNTIME_MIGRATIONS_USE_NOT_NULL=USE_PG_ATTRIBUTE_UPDATE_FOR_SUPERUSER option is enabled
  • marked AddField with the null=False parameter and the compatible CHECK IS NOT NULL constraint option as an unsafe operation, ignoring the ZERO_DOWNTIME_MIGRATIONS_USE_NOT_NULL value in this case
  • added versioning to the package
  • fixed pypi README image links
  • improved README

0.5

  • extracted zero-downtime-schema logic into a mixin to allow using it with other backends
  • moved the module from django_zero_downtime_migrations_postgres_backend to django_zero_downtime_migrations.backends.postgres
  • marked the django_zero_downtime_migrations_postgres_backend module as deprecated
  • added support for the postgis backend
  • improved README

0.4

  • changed the defaults for ZERO_DOWNTIME_MIGRATIONS_LOCK_TIMEOUT and ZERO_DOWNTIME_MIGRATIONS_STATEMENT_TIMEOUT from 0ms to None to match the default django behavior that respects postgres timeouts
  • updated the documentation with option defaults
  • updated the documentation with best practices for option usage
  • fixed the issue where adding a nullable field with a default did not raise an error or warning
  • added links to the documentation describing the issue and safe alternative usage for errors and warnings
  • updated the documentation with type casting workarounds

0.3

  • added django 2.2 support with the Meta.indexes and Meta.constraints attributes
  • fixed python deprecation warnings for regular expressions
  • removed the unused TimeoutException
  • improved README and PYPI description

0.2

  • added an option to disable statement_timeout for long operations, such as index creation and constraint validation, when statement_timeout is set globally

0.1.1

  • added long description content type

0.1

  • replaced default sql queries with safer alternatives
  • added options for statement_timeout and lock_timeout
  • added an option for NOT NULL constraint behavior
  • added an option for restricting unsafe operations
