@linked-db/linked-ql

A query client that extends standard SQL with new syntax sugars and enables auto-versioning capabilities on any database

Linked QL


Save yourself the overhead of working with SQL and structured data, from the time and effort spent figuring out relational queries to the labour of managing schemas! Try a modern, simplified take on SQL and databases in general!

Follow • Sponsor

Linked QL is a database query client that simplifies how you interact with your database and manage your schemas.

💥 Takes the ORM and friends out of the way and lets you just write SQL, but SQL that you will actually enjoy. (Linked QL extends standard SQL with new syntax sugars that let you write relational queries in less than half the code, and without a single JOIN clause in most cases.)

āš”ļø Takes the process out of schema management and lets you just ALTER away your DB, but in a safety net. (Linked QL extends your DB behind the scenes to automatically version each edit you make and have them kept as "savepoints" that you can always rollback to.)

💥 Brings the "schema-as-code" practice to its true meaning, letting your entire DB structure live in a single schema.json file that you edit in-place, as opposed to the "hundreds of migration files" experience. (Linked QL essentially rewrites your "migrations" experience.)

It comes as a small library and works with your DB of choice: from server-side Postgres, mariadb and MySQL, to the client-side IndexedDB, to an in-memory plain JSON object!


Getting Started

Install Linked QL:

npm install @linked-db/linked-ql

Obtain the Linked QL client for your target database:

  1. For SQL databases, install the regular SQL client you use for your DB. (Typically, pg for PostgreSQL, mariadb for mariadb, mysql/mysql2 for MySQL databases.)

    Using PostgreSQL as an example, install the pg client:

    npm install pg
    

    Use Linked QL as a wrapper over that:

    // Import pg and LinkedQl
    import pg from 'pg';
    import LinkedQl from '@linked-db/linked-ql/sql';
    
    // Connect pg
    const pgClient = new pg.Client({
        host: 'localhost',
        port: 5432,
    });
    await pgClient.connect();
    
    // Use LinkedQl as a wrapper over that
    const client = new LinkedQl(pgClient, { dialect: 'postgres' });
    
    See also: sample setup for mariadb

    Note that your mariadb database must be v10.5.2 or higher (MySQL v8, comparably). In addition, Linked QL needs to be able to run multiple statements in one query, so the multipleStatements connector parameter below is required. The bitOneIsBoolean parameter also needs to be in place.

    // Import mariadb and LinkedQl
    import mariadb from 'mariadb';
    import LinkedQl from '@linked-db/linked-ql/sql';
    
    // Connect mariadb
    const myConnection = await mariadb.createConnection({
        host: '127.0.0.1',
        user: 'root',
        port: 3306,
        // -------
        multipleStatements: true, // Required
        bitOneIsBoolean: true, // The default, but required
        trace: true, // Recommended
    });
    
    // Use LinkedQl as a wrapper over that
    const client = new LinkedQl(myConnection, { dialect: 'mysql' });
    
  2. For the client-side IndexedDB, import and instantiate the IDB client. (Coming soon)

    // Import IDB as LinkedQl
    import LinkedQl from '@linked-db/linked-ql/idb';
    
    // Create an instance.
    const client = new LinkedQl();
    
  3. To work with Linked QL's in-memory object database, import and instantiate the ODB client. (Coming soon)

    // Import ODB as LinkedQl
    import LinkedQl from '@linked-db/linked-ql/odb';
    
    // Create an instance.
    const LinkedQlClient = new LinkedQl();
    

All client instances above implement the same interface:

// Promise style
client.query('SELECT fname, lname FROM users WHERE role = $1', { values: ['admin'] }).then(result => {
    console.log(result);
});

// Equivalent async/await style
const result = await client.query('SELECT fname, lname FROM users WHERE role = $1', { values: ['admin'] });
console.log(result);

Other APIs are covered in The Linked QL API section. You'll find that, in addition to running pure SQL using client.query(), you can also compose queries programmatically if you want; one example is the client.createDatabase() API for a CREATE DATABASE statement.

Introducing Magic Paths

💥 Express relationships graphically.

JOINs are powerful, but they can quickly turn into a mess that obfuscates your entire query! But what if you didn't have to write JOINs to express certain relationships?

Meet Linked QL's magic path operators: a syntax extension to SQL that lets you reference columns on other tables without writing a single JOIN. Linked QL uses heuristics on your DB structure to figure out the details and the relevant JOINs behind the scenes.

Where you normally would write...

-- Regular SQL
SELECT title, users.fname AS author_name FROM posts
LEFT JOIN users ON users.id = posts.author

Linked QL lets you draw a path to express the relationship:

-- Linked QL
SELECT title, author ~> fname AS author_name FROM posts

And here's a scenario showing a typical schema and an example query each:

-- The users table
CREATE TABLE users (
    id int primary key generated always as identity,
    title varchar,
    name varchar,
    role int references roles (id),
    created_time timestamp
);
-- The books table
CREATE TABLE books (
    id int primary key generated always as identity,
    title varchar,
    content varchar,
    author int references users (id),
    created_time timestamp
);
-- Regular SQL
SELECT book.id, book.title, content, book.created_time, user.id AS author_id, user.title AS author_title, user.name AS author_name 
FROM books AS book LEFT JOIN users AS user ON user.id = book.author
-- Linked QL
SELECT id, title, content, created_time, author ~> id, author ~> title, author ~> name 
FROM books

✨ Now, that translates to about 50% less code, with the whole namespacing exercise eliminated! And yet, no questions asked about your schemas; none of the usual upfront relationship mapping!

Taking things further, multi-level relationships also get a corresponding pattern: multi-level paths:

-- Linked QL
SELECT * FROM books
WHERE author ~> role ~> codename = 'admin'

And for when you need to model the different forms of relationships out there (one-to-many, many-to-one, many-to-many), path operators can go in either direction:

-- Linked QL
SELECT * FROM users
WHERE author <~ books ~> title = 'Beauty and the Beast'

And it turns out, you can practically have the new magic together with the old craft:

-- Linked QL
SELECT users.* FROM users
LEFT JOIN some_other_table ON some_other_condition
WHERE author <~ books ~> title = 'Beauty and the Beast'

with zero implications!

This means, game on with the regular JOINs for whatever calls for them; take the "magic path" option for whatever doesn't benefit otherwise!
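To make the idea concrete, here is a hypothetical sketch (not Linked QL's actual engine) of the kind of rewrite a single-level path implies. The foreign-key details are supplied by hand here, whereas Linked QL derives them from your schema:

```javascript
// Hypothetical sketch of the rewrite behind "author ~> fname".
// `fk` describes the foreign key that the path rides on:
// { column, targetTable, targetColumn }.
function desugarPath(table, path, fk) {
    const [col, field] = path.split('~>').map((s) => s.trim());
    return (
        `SELECT ${fk.targetTable}.${field} FROM ${table} ` +
        `LEFT JOIN ${fk.targetTable} ON ` +
        `${fk.targetTable}.${fk.targetColumn} = ${table}.${col}`
    );
}

const sql = desugarPath('posts', 'author ~> fname', {
    column: 'author', targetTable: 'users', targetColumn: 'id',
});
console.log(sql);
// SELECT users.fname FROM posts LEFT JOIN users ON users.id = posts.author
```

The real engine handles aliasing, multi-level paths, and reverse (`<~`) paths; this only shows the basic shape of the transformation.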

We think this will make a lot of the tooling and manual work around SQL obsolete, and your codebase saner! And notice how this gives you back SQL, with everything else as only an extension of it!

Introducing Auto-Versioning

āš”ļø Create, Alter, and Drop schemas without needing to worry about versioning.

Databases have historically lacked the concept of versioning, pushing all of that engineering work down to the client application. If you've ever had to adopt a special process for defining and managing your schemas, wherein changes are handled through serially-named files within your application, each written as an UP/DOWN pair of actions, all supported by extra tooling...

app
  ├─ migrations
  │ │
  │ ├─ 20240523_1759_create_users_table_and_drop_accounts_table
  │ │ │
  │ │ ├─ up.sql
  │ │ │    CREATE TABLE users (id INT, first_n...);
  │ │ │    DROP TABLE accounts;
  │ │ └─ down.sql
  │ │      DROP TABLE users;
  │ │      CREATE TABLE accounts (id INT, first_name VAR...);
  │ │
  │ ├─ 20240523_1760_add_last_login_to_users_table_and_rename_order_status_table
  │ │ │
  │ │ ├─ up.sql
  │ │ │    ALTER TABLE users ADD COLUMN last_lo...;
  │ │ │    ALTER TABLE order_status RENAME TO o...;
  │ │ └─ down.sql
  │ │      ALTER TABLE users DROP COLUMN last_login;
  │ │      ALTER TABLE order_tracking RENAME TO order_status;
  │ │
  │ ├─ +256 more...

then you've faced the problem that this deficiency in databases creates! But what if databases magically did the heavy lifting?

Meet Linked QL's little addition to your database that does exactly that: it lets you alter your DB carefree, within the safety net of some behind-the-scenes magic that snapshots the relevant schemas before each alteration. Meet Automatic Schema Savepoints and Rollbacks!

You alter your schema and get back a reference to the savepoint automatically created for you:

// Alter schema
const savepoint = await client.query('CREATE TABLE public.users (id int, name varchar)', {
    description: 'Create users table',
});
// As an example of what you see:
console.log(savepoint.description);   // Create users table
console.log(savepoint.versionTag);    // 1
console.log(savepoint.savepointDate); // 2024-07-17T22:40:56.786Z
// Or to see everything:
console.table(savepoint.toJson());
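As a rough mental model (a hypothetical sketch, not the library's actual class), a savepoint can be pictured as a small record of what changed and when:

```javascript
// Hypothetical mental model of a savepoint record — not Linked QL's
// implementation, just the shape suggested by the properties above.
class SavepointRecord {
    constructor({ description, versionTag, savepointDate }) {
        this.description = description;   // e.g. 'Create users table'
        this.versionTag = versionTag;     // e.g. 1
        this.savepointDate = savepointDate;
    }
    toJson() {
        return {
            description: this.description,
            versionTag: this.versionTag,
            savepointDate: this.savepointDate,
        };
    }
}

const sp = new SavepointRecord({
    description: 'Create users table',
    versionTag: 1,
    savepointDate: new Date('2024-07-17T22:40:56.786Z'),
});
console.log(sp.toJson().versionTag); // 1
```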

And you're able to access the same savepoint on-demand using the database.savepoint() API:

const savepoint = await client.database('public').savepoint();

Either way, you get a nifty rollback button, should you want it:

// Rollback all associated changes (Gets the users table dropped)
await savepoint.rollback();

all the way back to a point in time, should you want to:

// Rollback to public@3
let savepoint;
while((savepoint = await client.database('public').savepoint()) && savepoint.versionTag <= 3) {
    await savepoint.rollback();
}

✨ Now, that translates to all the engineering work you once did manually having been moved to the database! Plus, your schema histories are now encoded as data (instead of as files), making them queryable, analyzable, and even visualizable as regular data!

Taking that further, you also get a way to roll forward from a rollback! (Much like hitting "Redo" to reverse an "Undo".) This time, you simply specify a "forward" movement from your current point in time:

// "Undo" the last rollback (Gets the users table re-created)
let savepoint = await client.database('public').savepoint({ direction: 'forward' });
await savepoint.rollback();

giving you time travel in either direction! You can move back or forward in time as seamlessly as scrubbing through a movie timeline!

Re-Introducing Schema-as-Code with schema.json

💥 Have your entire DB structure live in a single schema.json (or schema.yml) file that you edit in-place!

With schema versioning now handled by the database, much of the old conventions and formalities become irrelevant. We found that you could essentially streamline your whole "database" footprint from spanning dozens of migration files to fitting into a single schema.json (or schema.yml) file!

schema.json

[
    {
        // string
        "name": "database_1",
        // TableSchemaSpec[]
        "tables": []
    },
    {
        // string
        "name": "database_2",
        // TableSchemaSpec[]
        "tables": []
    }
]
See a full example
[
    {
        // string - required
        "name": "database_1",
        // TableSchemaSpec[]
        "tables": [
            {
                // string - required
                "name": "users",
                // ColumnSchemaSpec[] - required
                "columns": [
                    {
                        // string - required
                        "name": "id",
                        // string or array like ["int",3] - required
                        "type": "int",
                        // boolean or PrimaryKeySchemaSpec
                        "primaryKey": true,
                        // boolean or IdentityConstraintSchemaSpec
                        "identity": true
                    },
                    {
                        // string - required
                        "name": "first_name",
                        // array or string like "varchar" - required
                        "type": ["varchar", 101]
                    },
                    {
                        // string - required
                        "name": "last_name",
                        // array or string like "varchar" - required
                        "type": ["varchar", 101]
                    },
                    {
                        // string - required
                        "name": "full_name",
                        // array or string like "varchar" - required
                        "type": ["varchar", 101],
                        // string or ExpressionConstraintSchemaSpec
                        "expression": "(first_name || ' ' || last_name)"
                    },
                    {
                        // string - required
                        "name": "email",
                        // array or string like "varchar" - required
                        "type": ["varchar", 50],
                        // boolean or UniqueKeySchemaSpec
                        "uniqueKey": true,
                        // boolean
                        "notNull": true,
                        // string or CheckConstraintSchemaSpec
                        "check": "(email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$')"
                    },
                    {
                        // string - required
                        "name": "parent",
                        // string or array like ["int",3] - required
                        "type": "int",
                        // boolean
                        "notNull": true,
                        // ForeignKeySchemaSpec
                        "references": {
                            // string or string[] like ["database_2", "users"] - required
                            "targetTable": "users",
                            // string[] - required
                            "targetColumns": ["id"],
                            // string
                            "matchRule": "full",
                            // string or object like { rule: "cascade", columns: ["col1"] }
                            "updateRule": "cascade",
                            // string or object like { rule: "restrict", columns: ["col1"] }
                            "deleteRule": "restrict"
                        }
                    }
                ],
                // TableConstraintSchemaType[]
                "constraints": [
                    {
                        // string - required
                        "type": "PRIMARY_KEY",
                        // string[] - required
                        "columns": ["id"]
                    },
                    {
                        // string - required
                        "type": "FOREIGN_KEY",
                        // string[] - required
                        "columns": ["parent_2"],
                        // string or string[] like ["database_2", "users"] - required
                        "targetTable": "users",
                        // string[] - required
                        "targetColumns": ["id"],
                        // string
                        "matchRule": "full",
                        // string or object like { rule: "cascade", columns: ["col1"] }
                        "updateRule": "cascade",
                        // string or object like { rule: "restrict", columns: ["col1"] }
                        "deleteRule": "restrict"
                    },
                    {
                        // string - required
                        "type": "UNIQUE_KEY",
                        // string
                        "name": "constraint_name",
                        // string[] - required
                        "columns": ["parent", "full_name"]
                    },
                    {
                        // string - required
                        "type": "CHECK",
                        // string - required
                        "expr": "(email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$')"
                    }
                ],
                // IndexSchemaSpec[]
                "indexes": [
                    {
                        // string - required
                        "type": "FULLTEXT",
                        // string[] - required
                        "columns": ["full_name"]
                    },
                    {
                        // string - required
                        "type": "SPATIAL",
                        // string[] - required
                        "columns": ["full_name"]
                    }
                ]
            }
        ]
    },
    {
        // string - required
        "name": "database_2",
        // TableSchemaSpec[]
        "tables": []
    }
]
See the schema spec
interface DatabaseSchemaSpec {
    name: string;
    tables: TableSchemaSpec[];
}
interface TableSchemaSpec {
    name: string | string[];
    columns: ColumnSchemaSpec[];
    constraints: TableConstraintSchemaType[];
    indexes: IndexSchemaSpec[];
}
interface ColumnSchemaSpec {
    name: string;
    type: string | (string | number)[];
    primaryKey?: boolean | PrimaryKeySchemaSpec;
    [ foreignKey | references ]?: ForeignKeySchemaSpec;
    uniqueKey?: boolean | UniqueKeySchemaSpec;
    check?: string | CheckConstraintSchemaSpec;
    default?: string | DefaultConstraintSchemaSpec;
    expression?: string | ExpressionConstraintSchemaSpec;
    identity?: boolean | IdentityConstraintSchemaSpec;
    onUpdate?: string | OnUpdateConstraintSchemaSpec; // (MySQL-specific attributes)
    autoIncrement?: boolean; // (MySQL-specific attributes)
    notNull?: boolean;
    null?: boolean;
}
type TableConstraintSchemaType = TablePrimaryKeySchemaSpec | TableForeignKeySchemaSpec | TableUniqueKeySchemaSpec | TableCheckConstraintSchemaSpec;
interface TablePrimaryKeySchemaSpec extends PrimaryKeySchemaSpec {
    type: 'PRIMARY_KEY';
    columns: string[];
}

interface TableForeignKeySchemaSpec extends ForeignKeySchemaSpec {
    type: 'FOREIGN_KEY';
    columns: string[];
}

interface TableUniqueKeySchemaSpec extends UniqueKeySchemaSpec {
    type: 'UNIQUE_KEY';
    columns: string[];
}

interface TableCheckConstraintSchemaSpec extends CheckConstraintSchemaSpec {
    type: 'CHECK';
}
type ColumnConstraintSchemaType = PrimaryKeySchemaSpec | ForeignKeySchemaSpec | UniqueKeySchemaSpec | CheckConstraintSchemaSpec | DefaultConstraintSchemaSpec | ExpressionConstraintSchemaSpec | IdentityConstraintSchemaSpec | OnUpdateConstraintSchemaSpec;
interface PrimaryKeySchemaSpec {
    name: string;
}

interface ForeignKeySchemaSpec {
    name?: string;
    targetTable: string | string[];
    targetColumns: string[];
    matchRule?: string;
    updateRule?: string | { rule: string, columns: string[] };
    deleteRule?: string | { rule: string, columns: string[] };
}

interface UniqueKeySchemaSpec {
    name: string;
}

interface CheckConstraintSchemaSpec {
    name?: string;
    expr: string;
}

interface DefaultConstraintSchemaSpec {
    expr: string;
}

interface ExpressionConstraintSchemaSpec {
    expr: string;
    stored: boolean;
}

interface IdentityConstraintSchemaSpec {
    always: boolean;
}

interface OnUpdateConstraintSchemaSpec {
    expr: string;
}
interface IndexSchemaSpec {
    name?: string;
    type: string;
    columns: string[];
}
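As a quick illustration of the required fields in the spec above, here's a hypothetical validator sketch (an assumed helper, not part of Linked QL): every database needs a "name", every table a "name" and "columns", and every column a "name" and "type".

```javascript
// Hypothetical sanity check against the schema spec above.
// Not the official validator — just the required-field rules.
function validateSchema(databases) {
    const errors = [];
    for (const db of databases) {
        if (typeof db.name !== 'string') errors.push('database: missing "name"');
        for (const table of db.tables ?? []) {
            if (typeof table.name !== 'string') errors.push('table: missing "name"');
            if (!Array.isArray(table.columns)) {
                errors.push(`table "${table.name}": missing "columns"`);
                continue;
            }
            for (const col of table.columns) {
                if (typeof col.name !== 'string' || !col.type) {
                    errors.push(`table "${table.name}": invalid column`);
                }
            }
        }
    }
    return errors;
}

console.log(validateSchema([
    { name: 'database_1', tables: [{ name: 'users', columns: [{ name: 'id', type: 'int' }] }] },
])); // []
```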

Now, if you had that somewhere in your application, say at ./database/schema.json, Linked QL could help keep it in sync both ways with your database:

  • you add or remove a database object or table object or column object... and it is automatically reflected in your DB structure at the click of a command: linkedql migrate
  • your colleague makes new changes from their codebase... and it is automatically reflected in your local copy at your next git pull, or at the click of a command: linkedql refresh

āš”ļø You also get to see a version number on each database object in your schema essentially incrementing on each migrate operation (whether by you or by colleague), and decrementing on each rollback operation (whether by you or by colleague).

Thanks to a DB-native schema version control system, there's no need to maintain past states, or risk losing them; the DB now becomes the absolute source of truth for both itself and its client applications, rather than the other way around. (You may want to see how that brings us to true "Schema as Code" in practice.)

To set up:

  1. Make a directory within your application for database concerns. Linked QL will want to look in ./database, but you will be able to point to your preferred location when running Linked QL commands.

  2. Have a driver.js file in that directory with a default export function that returns a Linked QL instance. Linked QL will import and use this to interact with your database. It could look something like:

    import pg from 'pg';
    import SQLClient from '@linked-db/linked-ql/sql';
    
    const pgClient = new pg.Client({
        host: 'localhost',
        port: 5432,
    });
    await pgClient.connect();
    const sqlClient = new SQLClient(pgClient, { dialect: 'postgres' });
    
    export default function() {
        return sqlClient;
    }
    
  3. Have your DB structure defined in a schema.json (or schema.yml) file in that directory. (See schema.json above for a guide.)

    You can always extend your schema with new objects, and you can always drop objects or edit them in-place. For an existing database, table, column, constraint, or index, names may be changed, but not by editing them in-place! A "rename" operation is done by adding a temporary $name attribute:

    {
        "name": "old_name",
        "$name": "new_name"
    }
    

    The old name needs to remain in place so the target can be found during migration. The temporary $name attribute automatically disappears after the new name has been picked up by Linked QL at the next linkedql migrate.
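To illustrate the rename mechanics, here's a toy sketch (not Linked QL's internals) of how a temporary $name attribute resolves: the object is found by its old name, renamed, and the $name attribute then disappears.

```javascript
// Toy illustration of resolving a "$name" rename — hypothetical,
// not how Linked QL actually applies migrations.
function applyRename(schemaObject) {
    if (!('$name' in schemaObject)) return schemaObject; // no-op
    const { $name, ...rest } = schemaObject;
    return { ...rest, name: $name }; // new name in, "$name" gone
}

const after = applyRename({ name: 'old_name', $name: 'new_name' });
console.log(after); // { name: 'new_name' }
```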

To run:

  • Use linkedql migrate to walk through your staged local changes and interactively perform a migration against your database.
  • Use linkedql rollback to walk through the latest savepoint at each database and interactively perform a rollback.
  • Use linkedql leaderboard to just view the latest savepoint at each database.

(More details in the Linked QL CLI section.)

šŸ£ And that's a wrap!

DOCS

If you've made it this far, you may want to go here next:

Roadmap

  • [ONGOING] Improve support for MySQL.
  • [DONE] Implement support for a schema.yml alternative to schema.json file.
  • [PENDING] Implement support for IndexedDB.
  • [PENDING] Implement the in-memory database.

Much of that could happen sooner with your support! If you'd like to help out, please consider a sponsorship. PRs are also always welcome.

Issues

To report bugs or request features, please submit an issue to this repository.

License

MIT.


Package last updated on 02 Aug 2024
