@linked-db/linked-ql

A query client that extends standard SQL with new syntax sugars and enables auto-versioning capabilities on any database

Linked QL

Cut the overhead of working with SQL and structured data - from the time and effort spent figuring out relational queries to the labour of managing schemas! Try a modern, simpler take on SQL and databases in general!

Follow • Sponsor

Linked QL is a database query client that simplifies how you interact with your database and manage your schemas.

💥 Takes the ORM and friends out of the way and lets you just write SQL, but SQL that you will actually enjoy. (Linked QL extends standard SQL with new syntax sugars that let you write relational queries in less than half the code and, in most cases, without a single JOIN clause.)

āš”ļø Takes the process out of schema management and lets you just ALTER away your DB, but in a safety net. (Linked QL extends your DB behind the scenes to automatically version each edit you make and have them kept as "savepoints" that you can always rollback to.)

💥 Brings the "schema-as-code" practice to its true meaning and lets your entire DB structure live in a single schema.json file that you edit in-place, as opposed to the "hundreds of migration files" experience. (Linked QL essentially rewrites your "migrations" experience.)

It comes as a small library and is usable over your DB of choice - from server-side PostgreSQL, MariaDB, and MySQL, to client-side IndexedDB, to a plain JSON object!

Getting Started

Install Linked QL:

npm install @linked-db/linked-ql

Obtain the Linked QL client for your target database:

  1. For SQL databases, install the regular SQL client you use for your DB. (Typically, pg for PostgreSQL, mariadb for MariaDB, mysql/mysql2 for MySQL databases.)

    Using PostgreSQL as an example, install the pg client:

    npm install pg
    

    Use Linked QL as a wrapper over that:

    // Import pg and LinkedQl
    import pg from 'pg';
    import LinkedQl from '@linked-db/linked-ql/sql';
    
    // Connect pg
    const pgClient = new pg.Client({
        host: 'localhost',
        port: 5432,
    });
    await pgClient.connect();
    
    // Use LinkedQl as a wrapper over that
    const client = new LinkedQl(pgClient, { dialect: 'postgres' });
    
    Sample setup for MariaDB

    Note that your MariaDB database must be v10.3 or higher (comparably, MySQL v8). In addition, Linked QL needs to be able to run multiple statements in one query, so the multipleStatements connector parameter below is required. We also need the bitOneIsBoolean parameter in place.

    // Import mariadb and LinkedQl
    import mariadb from 'mariadb';
    import LinkedQl from '@linked-db/linked-ql/sql';
    
    // Connect mariadb
    const myConnection = await mariadb.createConnection({
        host: '127.0.0.1',
        user: 'root',
        port: 3306,
        // -------
        multipleStatements: true, // Required
        bitOneIsBoolean: true, // The default, but required
        trace: true, // Recommended
    });
    
    // Use LinkedQl as a wrapper over that
    const client = new LinkedQl(myConnection, { dialect: 'mysql' });
    
  2. For the client-side IndexedDB, import and instantiate the IDB client. (Coming soon)

    // Import IDB as LinkedQl
    import LinkedQl from '@linked-db/linked-ql/idb';
    
    // Create an instance.
    const client = new LinkedQl();
    
  3. To work with Linked QL's in-memory object database, import and instantiate the ODB client. (Coming soon)

    // Import ODB as LinkedQl
    import LinkedQl from '@linked-db/linked-ql/odb';
    
    // Create an instance.
    const LinkedQlClient = new LinkedQl();
    

All client instances above implement the same interface:

// Promise style:
client.query('SELECT fname, lname FROM users WHERE role = $1', { values: ['admin'] }).then(result => {
    console.log(result);
});

// Or, equivalently, with async/await:
const result = await client.query('SELECT fname, lname FROM users WHERE role = $1', { values: ['admin'] });
console.log(result);
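
For example, parameterized writes go through that same interface. Below is a minimal sketch; the table and columns echo the SELECT above, while the values are purely illustrative:

// A parameterized INSERT through the same query() interface
// (illustrative values)
await client.query(
    'INSERT INTO users (fname, lname, role) VALUES ($1, $2, $3)',
    { values: ['John', 'Doe', 'admin'] }
);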

Other APIs are covered in the Linked QL API section. You'll find that, in addition to writing pure SQL, you can also programmatically compose queries if you want; an example being the client.createDatabase() API for a CREATE DATABASE statement.
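
A minimal sketch of that could look like the following; the database name is illustrative, and the exact options client.createDatabase() accepts are covered in that section:

// Programmatic equivalent of: CREATE DATABASE database_1
// (illustrative database name; see the API section for available options)
await client.createDatabase('database_1');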

Introducing Magic Paths

💥 Express relationships graphically.

JOINs can be good, but they can also be a mess, as they almost always obfuscate your entire query! But what if you didn't have to write JOINs to express certain relationships?

Meet Linked QL's magic path operators, a syntax extension to SQL that lets you reach columns on other tables without writing a single JOIN. Linked QL uses heuristics on your DB structure to figure out the details and generate the relevant JOINs behind the scenes.

Where you normally would write...

-- Regular SQL
SELECT title, users.fname AS author_name FROM posts
LEFT JOIN users ON users.id = posts.author

Linked QL lets you draw a path to express the relationship:

-- Linked QL
SELECT title, author ~> fname AS author_name FROM posts

And here's a scenario showing a typical schema and an example query each:

-- The users table
CREATE TABLE users (
    id int primary key generated always as identity,
    title varchar,
    name varchar,
    role int references roles (id),
    created_time timestamp
);
-- The books table
CREATE TABLE books (
    id int primary key generated always as identity,
    title varchar,
    content varchar,
    author int references users (id),
    created_time timestamp
);
-- Regular SQL
SELECT book.id, book.title, content, book.created_time, user.id AS author_id, user.title AS author_title, user.name AS author_name 
FROM books AS book LEFT JOIN users AS user ON user.id = book.author
-- Linked QL
SELECT id, title, content, created_time, author ~> id, author ~> title, author ~> name 
FROM books

✨ PRO: About 50% less code, and the whole namespacing exercise eliminated; all with zero upfront setup!

Taking that further, paths can be multi-level:

-- Linked QL
SELECT * FROM books
WHERE author ~> role ~> codename = 'admin'

and they can also be used to express relationships in the reverse direction (from the referenced table back to the referencing table):

-- Linked QL
SELECT * FROM users
WHERE author <~ books ~> title = 'Beauty and the Beast'

(Pivot/junction/link tables now get an easier treatment too - see the sketch below!)
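
For illustration, assume a hypothetical junction-table setup: users, roles, and a user_roles table whose user_id column references users and whose role_id column references roles. A reverse path into the junction table can then be combined with forward paths out of it:

// Hypothetical junction-table traversal (table and column names assumed,
// not part of the schema shown above)
const admins = await client.query(
    `SELECT * FROM users
    WHERE user_id <~ user_roles ~> role_id ~> codename = 'admin'`
);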

Introducing Auto-Versioning

āš”ļø Create, Alter, and Drop schemas without needing to worry about versioning.

Databases have historically lacked the concept of versioning, and that has pushed all of the engineering work down to the client application. If you've ever had to adopt a special process for defining and managing your schemas, wherein changes are handled through serially-named files within your application, each written as an UP/DOWN pair of actions, all supported by tooling...

app
  ├─ migrations
  │ │
  │ ├─ 20240523_1759_create_users_table_and_drop_accounts_table
  │ │ │
  │ │ ├─ up.sql
  │ │ │    CREATE TABLE users (id INT, first_n...);
  │ │ │    DROP TABLE accounts;
  │ │ └─ down.sql
  │ │      DROP TABLE users;
  │ │      CREATE TABLE accounts (id INT, first_name VAR...);
  │ │
  │ ├─ 20240523_1760_add_last_login_to_users_table_and_rename_order_status_table
  │ │ │
  │ │ ├─ up.sql
  │ │ │    ALTER TABLE users ADD COLUMN last_lo...;
  │ │ │    ALTER TABLE order_status RENAME TO o...;
  │ │ └─ down.sql
  │ │      ALTER TABLE users DROP COLUMN last_login;
  │ │      ALTER TABLE order_tracking RENAME TO order_status;
  │ │
  │ ├─ +256 more...

then you've faced the problem that this deficiency in databases creates! But what if databases magically got to do the heavy lifting?

Meet Linked QL's little addition to your database that does exactly that and lets you alter your DB without a care, within the safety net of some behind-the-scenes magic that snapshots your schema before each alteration! Meet Automatic Schema Savepoints and Rollbacks!

You:

// Alter schema
const savepoint = await client.query('CREATE TABLE public.users (id int, name varchar)', {
    description: 'Create users table',
});

Linked QL:

// A savepoint automatically created for you
console.log(savepoint.description);   // Create users table
console.log(savepoint.versionTag);    // 1
console.log(savepoint.savepointDate); // 2024-07-17T22:40:56.786Z

(More details in the Savepoint API.)

✨ PRO: The whole engineering work is now essentially moved over to the DB where it rightly belongs; all with zero upfront setup!

Taking that further, you get a nifty rollback button should you want to:

// Rollback all associated changes (Gets the users table dropped)
await savepoint.rollback();

and you can go many levels back:

// Rollback to public@3
let savepoint;
while(savepoint = await client.database('public').savepoint()) {
    await savepoint.rollback();
    if (savepoint.versionTag === 3) break;
}

and you can "undo" a rollback, or in other words, roll forward to a point in time:

// "Undo" the last rollback (Gets the users table re-created)
let savepoint = await client.database('public').savepoint({ direction: 'forward' });
await savepoint.rollback();

You're essentially able to move back in time or forward in time as freely as iteration demands.

Re-Introducing Schema-as-Code with schema.json

💥 Have your entire DB structure live in a single schema.json (or schema.yml) file that you edit in-place!

With schema versioning now handled by the database, many of the old conventions and formalities become irrelevant. We found that we could essentially streamline the whole "database" footprint from spanning hundreds of migration files to fitting into a single schema.json (or schema.yml) file!

schema.json

[
    {
        "name": "database_1",
        "tables": [] // Table objects
    },
    {
        "name": "database_2",
        "tables": [] // Table objects
    }
]
See the database schema spec
interface DatabaseSchemaSpec {
    name: string;
    tables: TableSchemaSpec[];
}
Explore the structure further

└ Table schema example:

{
    "name": "users", // or something like ['db1', 'tbl1'] which would translate to db1.tbl1
    "columns": [], // Column objects (minimum of 1)
    "constraints": [], // Constraint objects
    "indexes": [] // Index objects
}
See the table schema spec
interface TableSchemaSpec {
    name: string | string[];
    columns: ColumnSchemaSpec[];
    constraints: TableConstraintSchemaType[];
    indexes: IndexSchemaSpec[];
}

└ Column schema examples:

{
    "name": "id",
    "type": "int",
    "primaryKey": true,
    "identity": true
}
{
    "name": "full_name",
    "type": ["varchar", 101],
    "generated": "(first_name || ' ' || last_name)"
}
{
    "name": "email",
    "type": ["varchar", "50"],
    "uniqueKey": true,
    "notNull": true,
    "check": "(email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')"
}
{
    "name": "parent",
    "type": "int",
    "notNull": true,
    "references": {
        "targetTable": "users",
        "targetColumns": ["id"],
        "matchRull": "full",
        "updateRule": "cascade",
        "deleteRule": "restrict"
    }
}
See the column schema spec
interface ColumnSchemaSpec {
    name: string;
    type: string | Array;
    primaryKey?: boolean | PrimaryKeySchemaSpec;
    [ foreignKey | references ]?: ForeignKeySchemaSpec;
    uniqueKey?: boolean | UniqueKeySchemaSpec;
    check?: string | CheckConstraintSchemaSpec;
    default?: string | DefaultConstraintSchemaSpec;
    expression?: string | ExpressionConstraintSchemaSpec;
    identity: boolean | IdentityConstraintSchemaSpec;
    onUpdate?: string | OnUpdateConstraintSchemaSpec; // (MySQL-specific attributes)
    autoIncrement?: boolean; // (MySQL-specific attributes)
    notNull?: boolean;
    null?: boolean;
}

└ Table constraint examples:

{
    "type": "PRIMARY_KEY",
    "name": "constraint_name",
    "columns": ["id"],
}
{
    "type": "UNIQUE_KEY",
    "columns": ["email"]
}
{
    "type": "FOREIGN_KEY",
    "columns": ["parent"],
    "targetTable": "users", // or something like ['db1', 'tbl1'] which would translate to db1.tbl1
    "targetColumns": ["id"],
    "matchRull": "full",
    "updateRule": "cascade",
    "deleteRule": { rule: "restrict", "columns": ["col1", "col2"] }
}
{
    "type": "CHECK",
    "expr": "(email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')"
}
See the table constraint schema spec
type TableConstraintSchemaType = TablePrimaryKeySchemaSpec | TableForeignKeySchemaSpec | TableUniqueKeySchemaSpec | TableCheckConstraintSchemaSpec;
interface TablePrimaryKeySchemaSpec extends PrimaryKeySchemaSpec {
    type: 'PRIMARY_KEY';
    columns: string[];
}

interface TableForeignKeySchemaSpec extends ForeignKeySchemaSpec {
    type: 'FOREIGN_KEY';
    columns: string[];
}

interface TableUniqueKeySchemaSpec extends UniqueKeySchemaSpec {
    type: 'UNIQUE_KEY';
    columns: string[];
}

interface TableCheckConstraintSchemaSpec extends CheckConstraintSchemaSpec {
    type: 'CHECK';
}
See the column constraint schema spec
type ColumnConstraintSchemaType = PrimaryKeySchemaSpec | ForeignKeySchemaSpec | UniqueKeySchemaSpec | CheckConstraintSchemaSpec | DefaultConstraintSchemaSpec | ExpressionConstraintSchemaSpec | IdentityConstraintSchemaSpec | OnUpdateConstraintSchemaSpec;
interface PrimaryKeySchemaSpec {
    name: string;
}

interface ForeignKeySchemaSpec {
    name?: string;
    targetTable: string | string[];
    targetColumns: string[];
    matchRule?: string;
    updateRule?: string | { rule: string, columns: string[] };
    deleteRule?: string | { rule: string, columns: string[] };
}

interface UniqueKeySchemaSpec {
    name: string;
}

interface CheckConstraintSchemaSpec {
    name?: string;
    expr: string;
}

interface DefaultConstraintSchemaSpec {
    expr: string;
}

interface ExpressionConstraintSchemaSpec {
    expr: string;
    stored: boolean;
}

interface IdentityConstraintSchemaSpec {
    always: boolean;
}

interface OnUpdateConstraintSchemaSpec {
    expr: string;
}

└ Index schema examples:

{
    "type": "FULLTEXT",
    "columns": ["full_name"]
}
{
    "type": "SPATIAL",
    "columns": ["full_name"]
}
See the index schema spec
interface IndexSchemaSpec {
    name?: string;
    type: string;
    columns: string[];
}
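
Putting the pieces above together, a complete (if minimal) schema.json could look like the following; the database, table, and column names are purely illustrative:

[
    {
        "name": "database_1",
        "tables": [
            {
                "name": "users",
                "columns": [
                    { "name": "id", "type": "int", "primaryKey": true, "identity": true },
                    { "name": "full_name", "type": ["varchar", 101] }
                ],
                "constraints": [],
                "indexes": []
            }
        ]
    }
]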

Now, if you had that somewhere in your application, say at ./database/schema.json, Linked QL could help keep it in sync both ways with your database:

  • you add or remove a database object, table object, or column object... and it is automatically reflected in your DB structure with one command: linkedql migrate
  • your colleague makes new changes from their codebase... and they are automatically reflected in your local copy at your next git pull, or with one command: linkedql refresh

šŸ„ You also get to see a version number on each database object in your schema essentially incrementing on each migrate operation (whether by you or by colleague), and decrementing on each rollback operation (whether by you or by colleague).

Thanks to a DB-native schema version control system, there's no need to maintain past states, or risk losing them; the DB now becomes the absolute source of truth for both itself and its client applications, rather than the other way around. (You may want to see how that brings us to true "Schema as Code" in practice.)

To set up:

  1. Make a directory within your application for database concerns. Linked QL looks in ./database by default, but you can point to your preferred location when running Linked QL commands.

  2. Have a driver.js file in that directory whose default export is a function that returns a Linked QL instance. This will be imported and used by Linked QL to interact with your database. It could look something like:

    import pg from 'pg';
    import SQLClient from '@linked-db/linked-ql/sql';
    
    const pgClient = new pg.Client({
        host: 'localhost',
        port: 5432,
    });
    await pgClient.connect();
    const sqlClient = new SQLClient(pgClient, { dialect: 'postgres' });
    
    export default function() {
        return sqlClient;
    }
    
  3. Have your DB structure defined in a schema.json (or schema.yml) file in that directory. (See schema.json above for a guide.)

    You can always extend your schema with new objects, and you can always drop objects or edit them in-place. For an existing database, table, column, constraint, or index, the name itself may also be changed - but not by simply overwriting it in-place! A "rename" operation is done with the addition of a temporary $name attribute:

    {
        "name": "old_name",
        "$name": "new_name"
    }
    

    The old name being in place is needed to find the target during migration. The temporary $name attribute automatically disappears after the new name has been picked up by Linked QL at the next linkedql migrate.

To run:

  • Use linkedql migrate to walk through your staged local changes and interactively perform a migration against your database.
  • Use linkedql rollback to walk through the latest savepoint at each database and interactively perform a rollback.
  • Use linkedql leaderboard to just view the latest savepoint at each database.

(More details in the Linked QL CLI section.)

šŸ£ And that's a wrap!

DOCS

If you've made it this far, you may want to head over to the docs next.

Roadmap

  • [ONGOING] Improve support for MySQL.
  • [DONE] Implement support for a schema.yml alternative to schema.json file.
  • [PENDING] Implement support for IndexedDB.
  • [PENDING] Implement the in-memory database.

Much of that could happen sooner with your support! If you'd like to help out, please consider a sponsorship. PRs are also always welcome.

Issues

To report bugs or request features, please submit an issue to this repository.

License

MIT.
