Comparing version 11.1.1 to 11.2.0
@@ -101,2 +101,15 @@ (function() {
    //-------------------------------------------------------------------------------------------------------
    this.create_function({
      name: prefix + 'str_is_blank',
      deterministic: true,
      varargs: false,
      call: function(s) {
        if (/^\s+$/.test(s)) {
          return 1;
        } else {
          return 0;
        }
      }
    });
    //-------------------------------------------------------------------------------------------------------
    this.create_table_function({
@@ -103,0 +116,0 @@ name: prefix + 'str_split',
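The `str_is_blank` UDF added in this release is a thin wrapper around a regular-expression test. A minimal plain-JavaScript sketch of the same predicate (outside SQLite, so its behavior can be checked in isolation) highlights one edge case worth knowing: with `/^\s+$/`, the empty string is *not* considered blank:

```javascript
// Stand-alone sketch of the predicate inside the new `str_is_blank` UDF;
// it mirrors the function body shown in the diff above.
const strIsBlank = (s) => (/^\s+$/.test(s) ? 1 : 0);

console.log(strIsBlank('   '));  // 1: spaces only
console.log(strIsBlank('\t\n')); // 1: any run of whitespace characters
console.log(strIsBlank(' x '));  // 0: contains non-whitespace
console.log(strIsBlank(''));     // 0: '' does not match /^\s+$/ (the `+` requires at least one character)
```

If empty strings should also count as blank, the pattern would need to be `/^\s*$/`; the diff as released uses `\s+`.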
{
  "name": "dbay",
  "version": "11.1.1",
  "version": "11.2.0",
  "description": "In-Process, In-Memory & File-Based Relational Data Processing with SQLite, BetterSQLite3",
  "main": "lib/main.js",
  "scripts": {
    "build": "coffee --map -o lib -c src",
    "test": "echo see 'https://github.com/loveencounterflow/hengist/tree/master/dev/dbay'",
    "preinstall": "./build-sqlite3"
  },
  "repository": {
@@ -27,9 +32,3 @@ "type": "git",
    "intertype": "7.7.0"
  },
  "scripts": {
    "build": "coffee --map -o lib -c src",
    "test": "echo see 'https://github.com/loveencounterflow/hengist/tree/master/dev/dbay'",
    "preinstall": "./build-sqlite3"
  },
"readme": "\n\n# π€DBay\n\n\n<!-- START doctoc generated TOC please keep comment here to allow auto update -->\n<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->\n**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*\n\n- [π€DBay](#%F0%93%86%A4dbay)\n - [Introduction](#introduction)\n - [Documentation](#documentation)\n - [Main](#main)\n - [Using Defaults](#using-defaults)\n - [Automatic Location](#automatic-location)\n - [Randomly Chosen Filename](#randomly-chosen-filename)\n - [Using Parameters](#using-parameters)\n - [Opening and Closing DBs](#opening-and-closing-dbs)\n - [Opening / Attaching DBs](#opening--attaching-dbs)\n - [Closing / Detaching DBs](#closing--detaching-dbs)\n - [Transactions and Context Handlers](#transactions-and-context-handlers)\n - [Query](#query)\n - [`SQL` Tag Function for Better Embedded Syntax](#sql-tag-function-for-better-embedded-syntax)\n - [Executing SQL](#executing-sql)\n - [User-Defined Functions (UDFs)](#user-defined-functions-udfs)\n - [Standard Library of SQL Functions (StdLib)](#standard-library-of-sql-functions-stdlib)\n - [List of Functions](#list-of-functions)\n - [Use Case for DBay Exceptions and Assertions: Enforcing Invariants](#use-case-for-dbay-exceptions-and-assertions-enforcing-invariants)\n - [Use Case for DBay Variables: Parametrized Views](#use-case-for-dbay-variables-parametrized-views)\n - [Safe Escaping for SQL Values and Identifiers](#safe-escaping-for-sql-values-and-identifiers)\n - [Purpose](#purpose)\n - [Escaping Identifiers, General Values, and List Values](#escaping-identifiers-general-values-and-list-values)\n - [Statement Interpolation](#statement-interpolation)\n - [SQL Statement Generation](#sql-statement-generation)\n - [Insert Statement Generation](#insert-statement-generation)\n - [Insert Statements with a `returning` Clause](#insert-statements-with-a-returning-clause)\n - [Trash Your DB for Fun and Profit](#trash-your-db-for-fun-and-profit)\n - 
[Motivation](#motivation)\n - [Properties of Trashed DBs](#properties-of-trashed-dbs)\n - [API](#api)\n - [Random](#random)\n - [Note on Package Structure](#note-on-package-structure)\n - [`better-sqlite3` an 'Unsaved' Dependency](#better-sqlite3-an-unsaved-dependency)\n - [Use npm, Not pnpm](#use-npm-not-pnpm)\n - [To Do](#to-do)\n\n<!-- END doctoc generated TOC please keep comment here to allow auto update -->\n\n\n\n# π€DBay\n\nDBay is built on [`better-sqlite3`](https://github.com/JoshuaWise/better-sqlite3), which is a NodeJS adapter\nfor [SQLite](https://www.sqlite.org). It provides convenient access to in-process, on-file and in-memory\nrelational databases. <!-- The mascot of DBay is the -->\n\n\nDBay is the successor to and a re-write of [ICQL-DBA](https://github.com/loveencounterflow/icql-dba). It is\nunder development and nearing feature-parity with its predecessor while already providing some significant\nimprovements in terms of ease of use and simplicity of implementation.\n\n## Introduction\n\nDBay provides\n* In-Process,\n* In-Memory & File-Based\n* Relational Data Processing\n* for NodeJS\n* with SQLite;\n* being based on [`better-sqlite3`](https://github.com/JoshuaWise/better-sqlite3),\n* it works (almost) exclusively in a synchronous fashion.\n\n## Documentation\n\n* **[Benchmarks](./README-benchmarks.md)**\n\n------------------------------------------------------------------------------------------------------------\n\n### Main\n\n#### Using Defaults\n\nIn order to construct (instantiate) a DBay object, you can call the constructor without any arguments:\n\n```coffee\n{ DBay } = require 'dbay'\ndb = new DBay()\n```\n\nThe `db` object will then have two properties `db.sqlt1` and `db.sqlt2` that are `better-sqlite3`\nconnections to the same temporary DB in the ['automatic location'](#automatic-location).\n\n#### Automatic Location\n\nThe so-called 'automatic location' is either\n\n* the directory `/dev/shm` on Linux systems that support **SH**ared 
**M**emory (a.k.a. a RAM disk)\n* the OS's temporary directory as announced by `os.tmpdir()`\n\nIn either case, a [file with a random name](#randomly-chosen-filename) will be created in that location.\n\n#### Randomly Chosen Filename\n\nFormat `dbay-NNNNNNNNNN.sqlite`, where `N` is a digit `[0-9]`.\n\n#### Using Parameters\n\nYou can also call the constructor with a configuration object that may have one or more of the following\nfields:\n\n* **`cfg.path`** (`?non-empty text`): Specifies which file system path to save the DB to; if the path given\n  is relative, it will be resolved in reference to the current directory (`process.cwd()`). When not\n  specified, `cfg.path` will be derived from [`DBay.C.autolocation`](#automatic-location) and a [randomly\n  chosen filename](#randomly-chosen-filename).\n\n* **`cfg.temporary`** (`?boolean`): Specifies whether the DB file is to be removed when the process exits or\n  `db.destroy()` is called explicitly. `cfg.temporary` defaults to `false` if `cfg.path` is given, and `true`\n  otherwise (when a random filename is chosen).\n\n\n\n------------------------------------------------------------------------------------------------------------\n\n### Opening and Closing DBs\n\n\n#### Opening / Attaching DBs\n\n* **`db.open cfg`**: [Attach](https://www.sqlite.org/lang_attach.html) a new or existing DB to the `db`'s\n  connections (`db.sqlt1`, `db.sqlt2`).\n* `cfg`:\n  * `schema` (non-empty string): Required property that specifies the name under which the newly attached\n    DB's objects can be accessed; having attached a DB as, say, `db.open { schema: 'foo', path:\n    'path/to/my.db', }`, one can then run queries like `db \"select * from foo.main;\"` against it. 
Observe\n  that\n  * the DB opened at object creation time (`db = new DBay()`) always has the implicit name `main`, and\n    schema `temp` is reserved for temporary databases.\n  * `path` (string): FS path to existing or to-be-created DB file; for compatibility, this may also be set\n    [to one of the special values that indicate an in-memory\n    DB](./README-benchmarks.md#sqlite-is-not-fast-except-when-it-is), although that is not recommended.\n  * `temporary` (boolean): Defaults to `false` when a `path` is given, and to `true` otherwise.\n\n* The custom SQLite library that is compiled when installing DBay has its `SQLITE_LIMIT_ATTACHED`\n  compilation parameter set to the maximum allowed value of 125 (instead of the default 10). This allows\n  developers to assemble a DB application from dozens of smaller pieces when desired.\n\n#### Closing / Detaching DBs\n\n————————————————————————————————————————————————————————————————————————————————————————\n————————————————————————————————————————————————————————————————————————————————————————\n————————————————————————————————————————————————————————————————————————————————————————\n\n\n\n------------------------------------------------------------------------------------------------------------\n\n### Transactions and Context Handlers\n\n————————————————————————————————————————————————————————————————————————————————————————\n————————————————————————————————————————————————————————————————————————————————————————\n————————————————————————————————————————————————————————————————————————————————————————\n\n------------------------------------------------------------------------------------------------------------\n\n### Query\n\n————————————————————————————————————————————————————————————————————————————————————————\n————————————————————————————————————————————————————————————————————————————————————————\n————————————————————————————————————————————————————————————————————————————————————————\n\n#### `SQL` Tag Function for Better 
Embedded Syntax\n\nMixing SQL and application code has the drawback that instead of editing SQL\nin your SQL-aware text editor, you are now editing bland string literals in\nyour SQL-aware editor. If only there were a way to tell the editor that some\nstrings contain SQL and should be treated as such!—Well, now there is. The\ncombined power of [JavaScript Tagged Templates](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals#tagged_template_literals)\nand an (experimental, proof-of-concept-level) [set of Sublime Text syntax\ndefinitions called `coffeeplus`](https://github.com/loveencounterflow/coffeeplus) makes it possible to embed\nSQL into JavaScript (and CoffeeScript) source code. The way this works is by\nproviding a 'tag function' that can be prepended to string literals. The name\nof the function together with the ensuing quotes can be recognized by the editor's\nhiliter so that constructs like `SQL\"...\"`, `SQL\"\"\"...\"\"\"` and so on will trigger\nswitching languages. The tag function does next to nothing; here is its definition:\n\n```coffee\nclass DBay\n  @SQL: ( parts, expressions... ) ->\n    R = parts[ 0 ]\n    for expression, idx in expressions\n      R += expression.toString() + parts[ idx + 1 ]\n    return R\n```\n\nIt can be used like this:\n\n```coffee\n{ DBay } = require 'dbay'\n{ SQL }  = DBay\n\ndb = new DBay { path: 'path/to/db.sqlite', }\n\nfor row from db SQL\"select id, name, price from products order by 1;\"\n  #             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  #             imagine proper embedded hiliting etc here\n  console.log row.id, row.name, row.price\n```\n\nBe aware that `coffeeplus` is more of an MVP than a polished package. 
As such, not even recognition of backticks has been implemented yet, so it is probably best used\nwith CoffeeScript.\n\n\n#### Executing SQL\n\nOne thing that sets DBay apart from other database adapters is the fact that the object returned from `new\nDBay()` is both the representative of the database opened *and* a callable function. This makes executing\nstatements and running queries very concise. This is an excerpt from the [DBay test suite]():\n\n```coffee\n{ DBay } = require H.dbay_path\ndb = new DBay()\ndb ->\n  db SQL\"drop table if exists texts;\"\n  db SQL\"create table texts ( nr integer not null primary key, text text );\"\n  db SQL\"insert into texts values ( 3, 'third' );\"\n  db SQL\"insert into texts values ( 1, 'first' );\"\n  db SQL\"insert into texts values ( ?, ? );\", [ 2, 'second', ]\n  #.......................................................................................................\n  T?.throws /cannot start a transaction within a transaction/, ->\n    db ->\n#.........................................................................................................\nT?.throws /UNIQUE constraint failed: texts\\.nr/, ->\n  db ->\n    db SQL\"insert into texts values ( 3, 'third' );\"\n#.........................................................................................................\nrows = db SQL\"select * from texts order by nr;\"\nrows = [ rows..., ]\nT?.eq rows, [ { nr: 1, text: 'first' }, { nr: 2, text: 'second' }, { nr: 3, text: 'third' } ]\n```\n\n> **Note** In the above, `SQL` has been set to `String.raw` and has no further effect on the string it\n> precedes; it is just used as a syntax marker (cool because then you can have nested syntax hiliting).\n\nAs shown by [benchmarks](./README-benchmarks.md), a crucial factor for getting maximum performance out of\nusing SQLite is strategically placed transactions. 
SQLite will never execute a DB query *outside* of a\ntransaction; when no transaction has been explicitly opened with `begin transaction`, the DB engine will\nprecede each query implicitly with (the equivalent of) `begin transaction` and follow it with either\n`commit` or `rollback`. This means that when a thousand `insert` statements are run, a thousand transactions will\nbe started and committed, leaving performance pretty much in the dust.\n\nTo avoid that performance hit, users are advised to always start and commit transactions when doing many\nconsecutive queries. DBay's callable `db` object makes that easy: just write `db -> many; inserts; here;`\n(JS: `db( () => { many; inserts; here; } )`), i.e. pass a function as the sole argument to `db`, and DBay\nwill wrap that function with a transaction. In case an error should occur, DBay guarantees to call\n`rollback` (in a `try ... finally ...` clause). Those who like to make things more explicit can also use\n`db.with_transaction ->`. Both formats allow passing in a configuration object with an attribute `mode` that\nmay be set to [one of `'deferred'`, `'immediate'`, or\n`'exclusive'`](https://www.sqlite.org/lang_transaction.html), the default being `'deferred'`.\n\nAnother slight performance hit may be caused by the logic DBay uses to (look up an SQL text in a cache or)\nprepare a statement and then decide whether to call `better-sqlite3`'s `Database::execute()`,\n`Statement::run()` or `Statement::iterate()`; in order to circumvent that extra work, users may choose to\nfall back on `better-sqlite3` explicitly:\n\n```coffee\ninsert = db.prepare SQL\"insert into texts values ( ?, ? 
);\" # returns a `better-sqlite3` `Statement` instance\ndb ->\n insert.run [ 2, 'second', ]\n```\n\n\n\n------------------------------------------------------------------------------------------------------------\n\n### User-Defined Functions (UDFs)\n\nββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\nββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\nββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\n\n------------------------------------------------------------------------------------------------------------\n\n\n### Standard Library of SQL Functions (StdLib)\n\n#### List of Functions\n\n* Strings\n * **`std_str_reverse()`**\n * **`std_str_join()`**\n * **`std_str_split()`**\n * **`std_str_split_re()`**\n * **`std_str_split_first()`**\n * **`std_re_matches()`**\n\n* XXX\n * **`std_generate_series()`**\n\n* Output\n * **`std_echo()`**\n * **`std_debug()`**\n * **`std_info()`**\n * **`std_warn()`**\n\n* Exceptions and Assertions\n * **`std_raise( message )`**βunconditionally throw an error with message given.\n * **`std_raise_json( facets_json )`**βunconditionally throw an error with informational properties encoded\n as a JSON string.\n * **`std_assert( test, message )`**βthrow an error with `message` if `test` is falsy.\n * **`std_warn_if( test, message )`**βprint an error `message` if `test` is truthy.\n * **`std_warn_unless()`**βprint an error `message` if `test` is falsy.\n\n* Variables\n * **`std_getv()`**\n * **`std_variables()`**\n\n#### Use Case for DBay Exceptions and Assertions: Enforcing Invariants\n\n* `std_assert: ( test, message ) ->` throws error if `test` is false(y)\n* `std_warn_unless: ( test, message ) ->` prints warning if `test` is false(y)\n* often one wants to ensure a given SQL statement returns / affects exactly zero or one rows\n* easy to do if some rows are affected, but more difficult when no rows are affected, because a 
function in\n the statement won't be called when there are no rows.\n* The trick is to ensure that at least one row is computed even when no rows match the query, and the way to\n do that is to include an aggregate function such as `count(*)`.\n* May want to include `limit 1` where appropriate.\n\n```sql\nselect\n *,\n std_assert(\n count(*) > 0,\n '^2734-1^ expected one or more rows, got ' || count(*) ) as _message\n from nnt\n where true\n and ( n != 0 );\n```\n\n```sql\nselect\n *,\n std_assert(\n count(*) > 0, -- using `count(*)` will cause the function to be called\n -- even in case there are no matching rows\n '^2734-2^ expected one or more rows, got ' || count(*) ) as _message\n from nnt\n where true\n and ( n != 0 )\n and ( t = 'nonexistant' ); -- this condition is never fulfilled\n```\n\n#### Use Case for DBay Variables: Parametrized Views\n\n* An alternative for user-defined table functions where those functions would perform queries against the\n DB, which is tricky.\n* Inside the view definition, use `std_getv( name )` to retrieve variable values *which must have been set\n immediately prior to accessing the view*.\n* Downside is that it's easy to forget to update a given value, so best done from inside a specialized\n method in your application.\n\n------------------------------------------------------------------------------------------------------------\n\n### Safe Escaping for SQL Values and Identifiers\n\n\n#### Purpose\n\n* Facilitate the creation of securely escaped SQL literals.\n* In general not thought of as a replacement for the value interpolation offered by `DBay::prepare()`,\n `DBay::query()` and so, except when\n * one wants to parametrize DB object names (e.g. 
use table or column names like variables),\n  * one wants to interpolate an SQL `values` list, as in `select employee from employees where department in\n    ( 'sales', 'HR' );`.\n\n#### Escaping Identifiers, General Values, and List Values\n\n* **`db.sql.I: ( name ) ->`**: returns a properly quoted and escaped SQL **I**dentifier.\n* **`db.sql.V: ( x ) ->`**: returns a properly quoted and escaped SQL **V**alue. Note that booleans\n  (`true`, `false`) will be converted to `1` and `0`, respectively.\n* **`db.sql.L: ( x ) ->`**: returns a bracketed SQL **L**ist of values (using `db.sql.V()` for each list\n  element).\n\n\n#### Statement Interpolation\n\n**`db.sql.interpolate( sql, values ) ->`** accepts a template (a string with placeholder formulas) and a list\nor object of values. It returns a string with the placeholder formulas replaced with the escaped values.\n\n```coffee\n# using named placeholders\nsql = SQL\"select $:col_a, $:col_b where $:col_b in $V:choices\"\nd = { col_a: 'foo', col_b: 'bar', choices: [ 1, 2, 3, ], }\nresult = db.sql.interpolate sql, d\n# > \"\"\"select \"foo\", \"bar\" where \"bar\" in ( 1, 2, 3 )\"\"\"\n```\n\n```coffee\n# using positional placeholders\nsql = SQL\"select ?:, ?: where ?: in ?V:\"\nd = [ 'foo', 'bar', 'bar', [ 1, 2, 3, ], ]\nresult = db.sql.interpolate sql, d\n# > \"\"\"select \"foo\", \"bar\" where \"bar\" in ( 1, 2, 3 )\"\"\"\n```\n\n```coffee\n# using an unknown format\nsql = SQL\"select ?:, ?X: where ?: in ?V:\"\nd = [ 'foo', 'bar', 'bar', [ 1, 2, 3, ], ]\nresult = db.sql.interpolate sql, d\n# throws \"unknown interpolation format 'X'\"\n```\n\n------------------------------------------------------------------------------------------------------------\n\n\n### SQL Statement Generation\n\nDBay offers limited support for the declarative generation of a small number of recurring classes of SQL\nstatements. 
These facilities are in no way intended to constitute or grow into a full-blown\nObject-Relational Mapper (ORM); instead, they are meant to make working with relational data less of a\nrepetitive chore.\n\n#### Insert Statement Generation\n\nTo pick one case in point, SQL `insert` statements when called from a procedural language have a nasty habit\nof demanding not two, but *three* copies of a table's column names:\n\n```coffee\ndb SQL\"\"\"\n create table xy (\n a integer not null primary key,\n b text not null,\n c boolean not null );\"\"\"\ndb SQL\"insert into xy ( b, c ) values ( $b, $c )\", { b, c, }\n# ^^^^^^^^ ^^^^^^^^^^ ^^^^^^^^^\n```\n\n<details><summary><ins>As stated above, DBay does not strive to implement full SQL statement generation.\nEven if one wanted to only generate SQL <code>insert</code> statements, one would still have to implement\nalmost all of SQL, as is evidenced by the screenshot of the <a\nhref=https://sqlite.org/lang_insert.html>SQLite <code>insert</code> Statement Railroad Diagram</a> that will\nbe displayed when clicking/tapping on this paragraph.</ins></summary> <img alt='SQLite Insert Statement\nRailroad Diagram'\nsrc=https://loveencounterflow.github.io/hengist/sqlite-syntax-diagrams/insert.railroad.png> </details>\n\nInstead, we implement facilities to cover the most frequent use cases and offer opportunities to insert SQL\nfragments at strategic points.\n\nOften, when an `insert` statement is being called for, one wants to insert full rows (minus `generate`d\ncolumns, for which see below) into tables. 
This is the default that DBay makes easy: A call to\n`db.prepare_insert()` with the insertion target identified with `into` will return a prepared statement that\ncan then be used as first argument to the `db` callable:\n\n```coffee\ninsert_into_xy = db.prepare_insert { into: 'xy', }\ndb insert_into_xy, { a, b, c, }\n```\n\nObserve that named parameters (as opposed to positional ones) are used, so values must be passed as an\nobject (as opposed to a list).\n\nIn case the actual SQL text of the statement is needed, call `db.create_insert()` instead:\n\n```coffee\ninsert_sql = db.create_insert { into: 'xy', }\n# 'insert into \"main\".\"xy\" ( \"a\", \"b\", \"c\" ) values ( $a, $b, $c );'\n```\n\nWhen one or more columns in a table are [`autoincrement`ed](https://sqlite.org/autoinc.html) or have a\n`default` value, then those columns are often intended not to be set explicitly. What's more, [columns with\n`generate`d values]() *must not* be set explicitly. For this reason, **`db.create_insert()` (and, by\nextension, `db.prepare_insert()`) will skip `generate`d columns** and allow to explicitly specify either\n*included* columns (as `fields`) or else *excluded* columns (as `exclude`):\n\n```coffee\ndb SQL\"\"\"\n create table t1(\n a integer primary key,\n b integer,\n c text,\n d integer generated always as (a*abs(b)) virtual,\n e text generated always as (substr(c,b,b+1)) stored );\"\"\"\ninsert_into_t1 = db.create_insert { into: 't1', }\n\n### Observe `d` and `e` are left out because they're generated, but `a` is present: ###\n# 'insert into \"main\".\"t1\" ( \"a\", \"b\", \"c\" ) values ( $a, $b, $c );'\n\n### You probably want either this: ###\ninsert_into_t1 = db.create_insert { into: 't1', fields: [ 'b', 'c', ], }\n# 'insert into \"main\".\"t1\" ( \"b\", \"c\" ) values ( $b, $c );'\n\n### Or this: ###\ninsert_into_t1 = db.create_insert { into: 't1', exclude: [ 'a', ], }\n# 'insert into \"main\".\"t1\" ( \"b\", \"c\" ) values ( $b, $c );'\n```\n\n> There's a 
subtle yet important semantic difference in how the `fields` and `exclude` settings are handled:\n> When `fields` is explicitly given, the table **does not have to exist** when generating the SQL; however,\n> when `fields` is not given, the table **must already exist** at the time of calling `create_insert()`.\n>\n> In either case, `prepare_insert()` can only succeed when all referenced objects in an SQL statement have\n> already been created.\n\nThe next important thing one often wants in inserts is resolving conflicts. DBay's `create_insert()` supports\nsetting `on_conflict` to either **(1)** an arbitrary string that should spell out a syntactically valid SQL\n`on conflict` clause, or **(2)** an object `{ update: true, }` to generate SQL that updates the explicitly\nor implicitly selected columns. This form has been chosen to leave the door open to future expansions of\nsupported features.\n\nWhen choosing the first option, observe that whatever string is passed in, `create_insert()` will prepend\n`'on conflict '` to it; therefore, to create an insert statement that ignores insert conflicts, and\naccording to the [`upsert` syntax railroad diagram](https://sqlite.org/lang_upsert.html): —\n\n![](artwork/upsert.railroad.svg)\n\n— the right thing to do is to call `db.create_insert { into: table_name, on_conflict: 'do nothing', }`.\nAssuming table `t1` has been declared as above, calling\n\n```coffee\ndb.create_insert { into: 't1', exclude: [ 'a', ], on_conflict: \"do nothing\", }\n```\n\nwill generate the (unformatted but properly escaped) equivalent to:\n\n```sql\ninsert into main.t1 ( b, c )\n values ( $b, $c )\n on conflict do nothing;\n -- |<------>|\n -- inserted string\n```\n\nwhile calling\n\n```coffee\ndb.create_insert { into: 't1', exclude: [ 'a', ], on_conflict: { update: true, }, }\n```\n\nwill generate the (unformatted but properly escaped) equivalent to:\n\n```sql\ninsert into main.t1 ( b, c )\n values ( $b, $c )\n on conflict do update set --| conflict 
resolution clause\n b = excluded.b, --| mandated by { update: true, }\n c = excluded.c; --| containing same fields as above\n```\n\n#### Insert Statements with a `returning` Clause\n\nIt is sometimes handy to have `insert` statements that return a useful value. Here's a toy example\nthat demonstrates how one can have a table with generated columns:\n\n```coffee\ndb SQL\"\"\"\n create table xy (\n a integer not null primary key,\n b text not null,\n c text generated always as ( '+' || b || '+' ) );\"\"\"\ninsert_into_xy_sql = db.create_insert { into: 'xy', on_conflict: SQL\"do nothing\", returning: '*', }\n# -> 'insert into \"main\".\"xy\" ( \"a\", \"b\" ) values ( $a, $b ) on conflict do nothing returning *;'\ndb.single_row insert_into_xy_sql, { a: 1, b: 'any', } # -> { a: 1, b: 'any', c: '+any+' }\ndb.single_row insert_into_xy_sql, { a: 2, b: 'duh', } # -> { a: 2, b: 'duh', c: '+duh+' }\ndb.single_row insert_into_xy_sql, { a: 3, b: 'foo', } # -> { a: 3, b: 'foo', c: '+foo+' }\n```\n\nGenerally, the `returning` clause must be defined by a non-empty string that is valid SQL for the position\nafter `returning` and the end of the statement. A star `*` will return the entire row that has been\ninserted; we here use `db.single_row()` to eschew the result iterator that would be returned by default.\n\n\n------------------------------------------------------------------------------------------------------------\n\n### Trash Your DB for Fun and Profit\n\n#### Motivation\n\n**The Problem**βyou have a great SQLite3 database with all the latest features (like `strict` tables,\n`generate`d columns, user-defined function calls in views and so on), and now you would like to use a tool\nlike [`visualize-sqlite`](https://lib.rs/crates/visualize-sqlite) or\n[SchemaCrawler](https://www.schemacrawler.com/diagramming.html) to get a nice ER diagram for your many\ntables. 
Well, now you have two problems.\n\nThing is, the moment you use UDFs in your DDL (as in, `create view v as select myfunction( x ) as x1 from\nt;`) your `*.sqlite` file stops being viable as a stand-alone DB; because UDFs are declared on the\nconnection and defined in the host app's environment, they are not stored inside `*.sqlite` files, nor are\nthey present in an SQL dump file. Your database and your application have become an inseparable unit with a\nmutual dependency on each other. But the way the common visualizers work is they require a standalone DB or\nan SQL dump to generate output from, and they will choke on stuff they don't understand (even though the ER\nrelationships might not even be affected by the use of a user-defined function).\n\n**The solution** to this conundrum that I've come up with is to prepare a copy of a given DB with all the\nfancy stuff removed but all the essential building blocksβtables, views, primary keys, secondary keys,\nuniqueness constraintsβpreserved.\n\nI call this functionality `trash` which is both a pun on `dump` (as in 'dump the DB to an SQL file') and a\nwarning to the user that this is not a copy. You *do* trash your DB using this feature.\n\n#### Properties of Trashed DBs\n\nThe following invariants of trashed DBs hold:\n\n* To trash a DB, an SQL script is computed that replicates the DB's salient structural features.\n* This script is either returned, written to a file, or used to produce a binary representation which is,\n again, either returned or written to a file.\n* The SQL script runs in a single transaction.\n* It starts by removing all relations, should they exist. This means one can always do `sqlite3 path/to/db <\n mytrasheddb.sql` even on an existing `path/to/db`.\n* All fields of all relations will be present in the trashed copy.\n* All trashed fields will have the same type declaration as the original DB (in the sense that they will use\n the same text used in the original DDL). 
However, depending on metadata as provided by SQLite3's internal\n  tables and pragmas, some views may lack some type information.\n* Empty type declarations and the missing type declaration of view fields will be rendered as `any` in the\n  trash DDL.\n* The trashed DB will contain no data (but see below).\n\n**Discussion and Possible Enhancements**\n\n* It is trivial to show that, on the one hand, in a properly structured RDB, views can always be\n  materialized to a table, complete with field names, data, and at least partial type information. However,\n  on the other hand, it is equally trivial to show that any given view (and any generated field, for that\n  matter) may use arbitrarily complex computations in its definition—imagine a UDF that fetches content from\n  the network as an example.\n  * In SQLite, not all fields of all views have an explicit type (and even fields of tables can lack an\n    explicit type or be of type `any`).\n* There's somewhat of a grey zone between the two extremes of a view just being a join of two tables or an\n  excerpt of a single one—something that would probably be reproducible in a trash DB with some effort\n  towards SQL parsing. Whether this would be worth the effort—tackling SQL parsing with the goal of preserving\n  views as views in a trash DB—is questionable. Observe that not even all built-in functions of SQLite3 are\n  guaranteed to be present in a given compiled library or command line tool because those can be (and often\n  are) configured to be left out; in this area there's also a certain variation across SQLite versions.\n* An alternative to tackling the generally unattainable goal of leaving views as views would be to use\n  user-defined prefixes for views (a view `\"comedy_shows\"` could be rendered as `\"(view) comedy_shows\"`).\n  In light of the complications outlined here, this option looks vastly superior.\n\n* The trashed DB will contain no data, but this could conceivably be changed in the future. 
When\n implemented, this will allow to pass around DBs 'for your information and pleasure only'. When this\n feature is implemented, a way to include/exclude specific relations will likely also be implemented.\n\n#### API\n\n**`trash_to_sql: ( { path: false, overwrite: false, walk: false, } ) ->`**\n * renders DB as SQL text\n * if `path` is given...\n * ... and a valid FS path, writes the SQL to that file and returns the path.\n * ... and `true`, a random path in DBay's `autolocation` will be chosen, written to, and returned.\n * ... and `false`, it will be treated as not given, see below.\n * if `path` exists, will fail unless `overwrite: true` is specified\n * if `path` is not given or `false`,\n * will return a string if `walk` is not `true`,\n * otherwise, will return an iterator over the lines of the produced SQL source.\n\n**`trash_to_sqlite: ( { path: false, overwrite: false, } ) ->`**\n * renders DB as an SQLite3 binary representation\n * handling of `path`, `overwrite`, and the return value is done as described above for `trash_to_sql()`.\n * instead of writing or returning an SQL string, this method will write or return a `Buffer` (or a\n `TypedArray`???)\n\nIn any event, parameters that make no sense in the given combination (such as omitting `path` but specifying\n`overwrite: true`) will be silently ignored.\n\n\n\n------------------------------------------------------------------------------------------------------------\n\n### Random\n\n\nββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\nββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\nββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\n\n\n\n------------------------------------------------------------------------------------------------------------\n\n## Note on Package Structure\n\n### `better-sqlite3` an 'Unsaved' Dependency\n\nSince DBay depends on 
[`better-sqlite3`](https://github.com/JoshuaWise/better-sqlite3) with a\n[custom-configured build of the SQLite C\nengine](https://github.com/JoshuaWise/better-sqlite3/blob/master/docs/compilation.md), it is (for whatever\nreason) important that **`better-sqlite3` must not be listed under `package.json#dependencies`**; otherwise,\ncompilation will not work properly. The [build script](./build-sqlite3) will run `npm install\nbetter-sqlite3@'^7.4.3'`, but with an added `--no-save` flag.\n\n## Use npm, Not pnpm\n\nAt the time of this writing (2021-09), the project compiles fine using npm v7.21.1 (on NodeJS\nv16.9.1 on Linux Mint), but it fails using pnpm v6.14.6 with `Unknown options: 'build-from-source',\n'sqlite3'`. Yarn has not been tried.\n\n**Note**: *These considerations only concern those who wish to fork/clone DBay to work on the code. Those who\njust want to use DBay as a dependency of their project can run either `npm install dbay` or `pnpm add\ndbay`; both package managers work fine.*\n\n## To Do\n\n* **[—]** port foundational code from hengist &c\n* **[—]** at construction time, allow `dbnick` when `path` is given and `ram` is `false`\n* **[—]** to solve the table-UDF-with-DB-access conundrum, consider\n * <del>**[+]** https://github.com/mapnik/mapnik/issues/797, where connection parameters are discussed (see also\n https://www.sqlite.org/c3ref/open.html);</del> <ins>nothing of interest AFAICS</ins>\n * **[—]** mirroring a given DB into a second (RAM or file) location, taking care to replay any goings-on\n on both instances. This is probably unattractive from a performance POV.\n * **[—]** using [NodeJS worker threads](https://nodejs.org/api/worker_threads.html) to perform updates;\n maybe one could even continuously mirror a RAM DB on disk to get a near-synchronous copy, obviating\n the necessity to explicitly call `db.save()`. 
See\n https://github.com/JoshuaWise/better-sqlite3/blob/master/docs/threads.md\n * **[—]** implementing **macros** so one could write e.g. `select * from foo( x ) as d;` to get `select *\n from ( select a, b, c from blah order by 1 ) as d` (i.e. inline expansion)\n * **[—]** observe that, seemingly, only *table-valued* UDFs hang, while with shared cache we already *can*\n issue `select`s from inside UDFs; so maybe there's a teeny, fixable difference between how the two are\n implemented that leads to the undesirable behavior\n* **[—]** let users choose between SQLite-only RAM DBs and `tmpfs`-based in-memory DBs (b/c the latter allow\n `pragma journal_mode = WAL` for better concurrent access). Cons include: `tmpfs`-based RAM DBs necessitate\n mounting a RAM disk, which needs `sudo` rights; might as well just instruct users to mount a RAM disk,\n then use that path? Still, it would be preferable to have some automatic copy-to-durable in place.\n* **[—]** implement a context handler for a discardable / temporary file\n* **[+]** implement `DBay::do()` as a method that unifies all of `better-sqlite3`'s `Statement::run()`,\n `Statement::iterate()`, and `Database::execute()`.\n* **[+]** allow calling `DBay::do -> ...` with a synchronous function with the same semantics as\n `DBay::with_transaction -> ...`.\n* **[+]** allow calling `DBay::do { mode: 'deferred', }, -> ...`.\n* **[—]** allow calling `DBay::do -> ...` with an asynchronous function\n* **[+]** make `db = new DBay()` an instance of `Function` that, when called, runs `DBay::do()` /\n `Database::execute()`.\n* **[—]** implement `statement = DBay::prepare.insert_into.<table> [ 'field1', 'field2', ..., ]`\n* **[+]** change classname(s) from `Dbay` to `DBay` to avoid spelling variant proliferation\n* **[—]** implement `DBay::open()`, `DBay::close()`\n* **[—]** ascertain how cross-schema foreign keys work when re-attaching DBs / schemas one by one\n* **[—]** demote `random` from a mixin to functions in `helpers`.\n* **[—]** implement `db.truncate()` 
/ `db.delete()`; allow retrieving the SQL.\n* **[—]** implement `DBay::insert_into.<table> [ 'field1', 'field2', ..., ], { field1, field2, ..., }`;\n allow retrieving the SQL.\n* **[—]** clarify whether UDFs get called at all when any argument is `null`, b/c it looks like they\n don't get called, which would be unfortunate\n* **[—]** add a schematic to clarify terms like *database*, *schema*, *connection*; hilite that UDFs are\n defined on *connections* (not *schemas* or *databases*, as would be the case in e.g. PostgreSQL).\n* **[—]** allow transparently treating key/value tables as caches\n* **[+]** let `db.do()` accept prepared statement objects.\n* **[—]** implement escaping of dollar-prefixed SQL placeholders (needed by `create_insert()`).\n* **[—]** implement\n * **[—]** `db.commit()`\n * **[—]** `db.rollback()`\n* **[—]** allow using sets with `sql.V()`\n* **[+]** make `first_row()`, `all_rows()` etc. accept statements and strings\n* **[+]** at the moment we use `cfg.prefix` for (inherently schema-less) UDF names (and require a trailing\n underscore to be part of the prefix), and `cfg.schema` for plugin-specific DB tables and views; in the\n future, we should use a single parameter for both (and make the underscore implicit). 
In addition, it\n should be possible to choose whether a plugin will create its objects with a prefix (in the same schema as\n the main DB) or within another schema.\n* **[+]** fix generated SQL `insert` statements without explicit fields\n* **[—]** implement an export/snapshot function that generates a DB with a simplified structure:\n * replace generated fields and results from function calls with constants\n * remove `strict` and similar newer attributes\n * the DB should be readable by tools like the `sqlite3` command line, [`visualize-sqlite`](https://lib.rs/crates/visualize-sqlite)\n* **[+]** consider implementing `trash()` as `trash_to_sql()` (`path` optional), `trash_to_sqlite()` (`path`\n optional)\n* **[—]** consider iterating over statements instead of lines in `trash_to_sql()`\n* **[—]** consider refactoring trash into a `dbay-trash` project, b/c either it or an additional module\n will be in need of an SQL parser to provide in-depth structural insights\n\n\n" | ||
} | ||
} | ||
} |
Sorry, the diff of this file is not supported yet
License Policy Violation
LicenseThis package is not allowed per your license policy. Review the package's license to ensure compliance.
Found 1 instance in 1 package