dbay - npm Package Compare versions

Comparing version 14.8.0 to 14.9.0


lib/stdlib-mixin.js

@@ -435,2 +435,7 @@ (function() {

//---------------------------------------------------------------------------------------------------------
dt_from_iso(iso_ts) {
return (this._dayjs(iso_ts)).utc().format(this._dt_dbay_timestamp_output_template);
}
//---------------------------------------------------------------------------------------------------------
dt_parse(dbay_timestamp) {

@@ -437,0 +442,0 @@ var R;


package.json
{
"name": "dbay",
"version": "14.8.0",
"version": "14.9.0",
"description": "In-Process, In-Memory & File-Based Relational Data Processing with SQLite, BetterSQLite3",

@@ -33,4 +33,3 @@ "main": "lib/main.js",

"preinstall": "./build-sqlite3"
},
"readme": "\n\n# 𓆤DBay\n\n\n<!-- START doctoc generated TOC please keep comment here to allow auto update -->\n<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->\n**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*\n\n- [𓆤DBay](#%F0%93%86%A4dbay)\n - [Introduction](#introduction)\n - [Documentation](#documentation)\n - [Main](#main)\n - [Using Defaults](#using-defaults)\n - [Automatic Location](#automatic-location)\n - [Randomly Chosen Filename](#randomly-chosen-filename)\n - [Using Parameters](#using-parameters)\n - [Opening and Closing DBs](#opening-and-closing-dbs)\n - [Opening / Attaching DBs](#opening--attaching-dbs)\n - [Closing / Detaching DBs](#closing--detaching-dbs)\n - [Transactions and Context Handlers](#transactions-and-context-handlers)\n - [Query](#query)\n - [Use the `alt`ernative Connection to Avoid Connection Busy Errors](#use-the-alternative-connection-to-avoid-connection-busy-errors)\n - [`SQL` Tag Function for Better Embedded Syntax](#sql-tag-function-for-better-embedded-syntax)\n - [Executing SQL](#executing-sql)\n - [User-Defined Functions (UDFs)](#user-defined-functions-udfs)\n - [Standard Library of SQL Functions (StdLib)](#standard-library-of-sql-functions-stdlib)\n - [List of Functions](#list-of-functions)\n - [Use Case for DBay Exceptions and Assertions: Enforcing Invariants](#use-case-for-dbay-exceptions-and-assertions-enforcing-invariants)\n - [Use Case for DBay Variables: Parametrized Views](#use-case-for-dbay-variables-parametrized-views)\n - [Safe Escaping for SQL Values and Identifiers](#safe-escaping-for-sql-values-and-identifiers)\n - [Purpose](#purpose)\n - [Escaping Identifiers, General Values, and List Values](#escaping-identifiers-general-values-and-list-values)\n - [Statement Interpolation](#statement-interpolation)\n - [SQL Statement Generation](#sql-statement-generation)\n - [Insert Statement Generation](#insert-statement-generation)\n - [Insert Statements with a `returning` Clause](#insert-statements-with-a-returning-clause)\n - [Random](#random)\n - [Note on Package Structure](#note-on-package-structure)\n - [`better-sqlite3` an 'Unsaved' Dependency](#better-sqlite3-an-unsaved-dependency)\n - [To Do](#to-do)\n - [Is Done](#is-done)\n\n<!-- END doctoc generated TOC please keep comment here to allow auto update -->\n\n\n\n# 𓆤DBay\n\nDBay is built on [`better-sqlite3`](https://github.com/JoshuaWise/better-sqlite3), which is a NodeJS adapter\nfor [SQLite](https://www.sqlite.org). It provides convenient access to in-process, on-file and in-memory\nrelational databases. <!-- The mascot of DBay is the -->\n\n\nDBay is the successor to and a re-write of [ICQL-DBA](https://github.com/loveencounterflow/icql-dba). 
It is\nunder development and nearing feature-parity with its predecessor while already providing some significant\nimprovements in terms of ease of use and simplicity of implementation.\n\n## Introduction\n\nDBay provides\n* In-Process,\n* In-Memory & File-Based\n* Relational Data Processing\n* for NodeJS\n* with SQLite;\n* being based on [`better-sqlite3`](https://github.com/JoshuaWise/better-sqlite3),\n* it works (almost) exclusively in a synchronous fashion.\n\n## Documentation\n\n* **[Benchmarks](./README-benchmarks.md)**\n\n------------------------------------------------------------------------------------------------------------\n\n### Main\n\n#### Using Defaults\n\nIn order to construct (instantiate) a DBay object, you can call the constructor without any arguments:\n\n```coffee\n{ DBay } = require 'dbay'\ndb = new DBay()\n```\n\n<del>The `db` object will then have two properties `db.sqlt1` and `db.sqlt2` that are `better-sqlite3`\nconnections to the same temporary DB in the ['automatic location'](#automatic-location).</del>\n\nThe `db` object will then have a (non-enumerable) property `db.sqlt1` which is a `better-sqlite3` connection\nto a temporary DB in the ['automatic location'](#automatic-location).\n\n#### Automatic Location\n\nThe so-called 'automatic location' is either\n\n* the directory `/dev/shm` on Linux systems that support **SH**ared **M**emory (a.k.a a RAM disk)\n* the OS's temporary directory as announced by `os.tmpdir()`\n\nIn either case, a [file with a random name](#randomly-chosen-filename) will be created in that location.\n\n#### Randomly Chosen Filename\n\nFormat `dbay-NNNNNNNNNN.sqlite`, where `N` is a digit `[0-9]`.\n\n#### Using Parameters\n\nYou can also call the constructor with a configuration object that may have one or more of the following\nfields:\n\n* **`cfg.path`** (`?non-empty text`): Specifies which file system path to save the DB to; if the path given\n is relative, it will be resolved in reference to the current directory (`process.cwd()`). When not\n specified, `cfg.path` will be derived from [`DBay.C.autolocation`](#automatic-location) and a [randomly\n chosen filename](#randomly-chosen-filename).\n\n* **`cfg.temporary`** (`?boolean`): Specifies whether DB file is to be removed when process exits or\n `db.destry()` is called explicitly. `cfg.temporary` defaults to `false` if `cfg.path` is given, and `true`\n otherwise (when a random filename is chosen).\n\n\n\n------------------------------------------------------------------------------------------------------------\n\n### Opening and Closing DBs\n\n\n#### Opening / Attaching DBs\n\n* **`db.open cfg`**: [Attach](https://www.sqlite.org/lang_attach.html) a new or existing DB to the `db`'s\n connection<del>s (`db.sqlt1`, `db.sqlt1`).</del> (`db.sqlt1`).\n* `cfg`:\n * `schema` (non-empty string): Required property that specifies the name under which the newly attached\n DB's objects can be accessed as; having attached a DB as, say, `db.open { schema: 'foo', path:\n 'path/to/my.db', }`, one can then run queries like `db \"select * from foo.main;\"` against it. 
Observe\n that\n * the DB opened at object creation time (`db = new DBay()`) always has the implicit name `main`, and\n schema `temp` is reserved for temporary databases.\n * `path` (string): FS path to existing or to-be-created DB file; for compatibility, this may also be set\n [to one of the special values that indicates a in-memory\n DB](./README-benchmarks.md#sqlite-is-not-fast-except-when-it-is), although that is not recommended.\n * `temporary` (boolean): Defaults to `false` when a `path` is given, and to `true` otherwise.\n\n* The custom SQLite library that is compiled when installing DBay has its `SQLITE_LIMIT_ATTACHED`\n compilation parameter set to the maximum allowed value of 125 (instead of the default 10). This allows\n developers to assemble a DB application from dozens of smaller pieces when desired.\n\n#### Closing / Detaching DBs\n\n▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊\n▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊\n▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊\n\n\n\n------------------------------------------------------------------------------------------------------------\n\n### Transactions and Context Handlers\n\n▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊\n▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊\n▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊\n\n------------------------------------------------------------------------------------------------------------\n\n### Query\n\n▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊\n▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊\n▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊\n\n#### Use the `alt`ernative Connection to Avoid Connection Busy Errors\n\nSQLite imposes certain restrictions on what one can and cannot do in a concurrent fashion, one restriction\nbeing that within the same connection, one cannot at the same time iterate over query results and insert\nvalues—not even into a completely unrelated table. There are ways to get around that limitation. For one,\nthe standard recommendation of the maker of `better-sqlite3` is to just fetch all the needed rows from the\nDB and then iterate over the list of values. This is totally doable and the simplest and most transparent\nsolution, but of course a nagging thought remains—what if the dataset gets really huge? In reality, this may\nturn out *never* to be a problem, realistically, but that consideration won't make that nagging thought\nvanish.\n\nThere's a (seemingly) better way. [Commit `57e062a`: make `alt` an on-demand clone of present\ninstance](https://github.com/loveencounterflow/dbay/commit/57e062a941c9a6d3b589bcdddebbdf82e7088812)\nintroduces the new (non-enumerable) property `db.alt` which represents a clone of the `db` object. Previous\nversions had two underlying DB connections `sqlt1` and `sqlt2` which could be used for the purposes\ndescribed in this section, but their drawback was that one falls back to the underlying `better-sqlite3` API\nwhich can be a little confusing.\n\nLet's have a look at a toy DB and see how to use `db.alt`. 
This is the definition, two tables with one\ninteger field each:\n\n```coffee\n#.................................................................................\ndb SQL\"\"\"\n create table foo ( n integer );\n create table bar ( n integer );\"\"\"\nfor n in [ 10 .. 12 ]\n db SQL\"insert into foo ( n ) values ( $n );\", { n, }\n```\n\nAnd here's what we want to accomplish: read data from one table and, based on that data, insert records into\nanother one. Naïvely one would perhaps write it this way (the transaction being added because we need it\nlater anyway):\n\n```coffee\ndb.with_transaction =>\n for row from db SQL\"select * from foo order by n;\"\n info '^806-2^', row\n db SQL\"insert into bar values ( $n );\", { n: n ** 2, }\n return null\n```\n\nThis will not run, however, but fail with `TypeError: This database connection is busy executing a query`.\nThis is where `db.alt` comes in: we have to use one connection for the iteration and another one for the\ninsertion; this works:\n\n```coffee\n#.................................................................................\n# (1)\ndb.with_transaction =>\n for { n, } from db.alt SQL\"select * from foo order by n;\"\n # ^^^^^^\n db SQL\"insert into bar values ( $n ) returning *;\", { n: n ** 2, }\n return null\n```\n\nThe following points should be kept in mind: **Explicit transactions and explicit prepared statements are\nthey key factors for speedy `insert`s. Since explicit transactions are crucial for concurrent inserts, it's\nrecommended to do *all* inserts within explicit transactions.**\n\nTherefore, because it's good practice to use explicit transactions and explicit prepared statements when\ndoing inserts, most of the time inserts should take on the form shown in snippets `(2)` or `(4)`:\n\n```coffee\n#.................................................................................\n# (2)\ninsert_into_bar = db.prepare SQL\"insert into bar values ( $n ) returning *;\"\ndb.with_transaction =>\n for { n, } from db.alt SQL\"select * from foo order by n;\"\n insert_into_bar.run { n: n ** 2, }\n return null\n```\n\nObserve that we have to explicitly exhaust the iterator that is returned from `insert ... returning`\nstatements; to do so, either use `db.first_row()` or call a prepared statement's `.get()` (instead of\n`.run()`) method:\n\n```coffee\n#.................................................................................\n# (3)\ndb.with_transaction =>\n for { n, } from db.alt SQL\"select * from foo order by n;\"\n new_row = db.first_row SQL\"insert into bar values ( $n ) returning *;\", { n: n ** 2, }\n return null\n```\n\n```coffee\n#.................................................................................\n# (4)\ninsert_into_bar = db.prepare SQL\"insert into bar values ( $n ) returning *;\"\ndb.with_transaction =>\n for { n, } from db.alt SQL\"select * from foo order by n;\"\n new_row = insert_into_bar.get { n: n ** 2, }\n return null\n```\n\n#### `SQL` Tag Function for Better Embedded Syntax\n\nMixing SQL and application code has the drawback that instead of editing SQL\nin your SQL-aware text editor, now you are editing bland string literals in\nyour SQL-aware editor. If there only was a way to tell the editor that some\nstrings contain SQL and should be treated as such!—Well, now there is. 
The\ncombined power of [JavaScript Tagged Templates]\n(https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals#tagged_template_literals)\nand an (exprimental proof-of-concept level) [set of Sublime Text syntax\ndefinitions called `coffeeplus`]\n(https://github.com/loveencounterflow/coffeeplus) makes it possible to embed\nSQL into JavaScript (and CoffeeScript) source code. The way this works is by\nproviding a 'tag function' that can be prepended to string literals. The name\nof the function together with the ensuing quotes can be recognized by the editor's\nhiliter so that constructs like `SQL\"...\"`, `SQL\"\"\"...\"\"\"` and so will trigger\nswitching languages. The tag function does next to nothing; here is its definition:\n\n```coffee\nclass DBay\n @SQL: ( parts, expressions... ) ->\n R = parts[ 0 ]\n for expression, idx in expressions\n R += expression.toString() + parts[ idx + 1 ]\n return R\n```\n\nIt can be used like this:\n\n```coffee\n{ DBay } = require 'dbay'\n{ SQL } = DBay\n\ndb = new DBay { path: 'path/to/db.sqlite', }\n\nfor row from db SQL\"select id, name, price from products order by 1;\"\n # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n # imagine proper embedded hiliting etc here\n console.log row.id, row.name, row.price\n```\n\nBe aware that `coffeeplus` is more of an MVP than a polished package. As such, not\neven reckognizing backticks has been implemented yet so is probably best used\nwith CoffeeScript.\n\n\n#### Executing SQL\n\nOne thing that sets DBay apart from other database adapters is the fact that the object returned from `new\nDBay()` is both the representative of the database opened *and* a callable function. This makes executing\nstatements and running queries very concise. This is an excerpt from the [DBay test suite]():\n\n```coffee\n{ DBay } = require H.dbay_path\ndb = new DBay()\ndb ->\n db SQL\"drop table if exists texts;\"\n db SQL\"create table texts ( nr integer not null primary key, text text );\"\n db SQL\"insert into texts values ( 3, 'third' );\"\n db SQL\"insert into texts values ( 1, 'first' );\"\n db SQL\"insert into texts values ( ?, ? );\", [ 2, 'second', ]\n #.......................................................................................................\n T?.throws /cannot start a transaction within a transaction/, ->\n db ->\n#.........................................................................................................\nT?.throws /UNIQUE constraint failed: texts\\.nr/, ->\n db ->\n db SQL\"insert into texts values ( 3, 'third' );\"\n#.........................................................................................................\nrows = db SQL\"select * from texts order by nr;\"\nrows = [ rows..., ]\nT?.eq rows, [ { nr: 1, text: 'first' }, { nr: 2, text: 'second' }, { nr: 3, text: 'third' } ]\n```\n\n> **Note** In the above `SQL` has been set to `String.raw` and has no further effect on the string it\n> precedes; it is just used as a syntax marker (cool because then you can have nested syntax hiliting).\n\nAs shown by [benchmarks](./README-benchmarks.md), a crucial factor for getting maximum performance out of\nusing SQLite is strategically placed transactions. SQLite will not ever execute a DB query *outside* of a\ntransaction; when no transaction has been explicitly opened with `begin transaction`, the DB engine will\nprecede each query implicitly with (the equivalent of) `begin transaction` and follow it with either\n`commit` or `rollback`. 
This means when a thousand `insert` statements are run, a thousand transactions will\nbe started and committed, leavin performance pretty much in the dust.\n\nTo avoid that performance hit, users are advised to always start and commit transactions when doing many\nconsecutive queries. DBay's callable `db` object makes that easy: just write `db -> many; inserts; here;`\n(JS: `db( () -> { many; inserts; here; })`), i.e. pass a function as the sole argument to `db`, and DBay\nwill wrap that function with a transaction. In case an error should occur, DBay guarantees to call\n`rollback` (in a `try ... finally ...` clause). Those who like to make things more explicit can also use\n`db.with_transaction ->`. Both formats allow to pass in a configuration object with an attribute `mode` that\nmay be set to [one of `'deferred'`, `'immediate'`, or\n`'exclusive'`](https://www.sqlite.org/lang_transaction.html), the default being `'deferred'`.\n\nAnother slight performance hit may be caused by the logic DBay uses to (look up an SQL text in a cache or)\nprepare a statement and then decide whether to call `better-sqlite3`'s' `Database::execute()`,\n`Statement::run()` or `Statement::iterate()`; in order to circumvent that extra work, users may choose to\nfall back on to `better-sqlite3` explicitly:\n\n```coffee\ninsert = db.prepare SQL\"insert into texts values ( ?, ? );\" # returns a `better-sqlite3` `Statement` instance\ndb ->\n insert.run [ 2, 'second', ]\n```\n\n\n\n------------------------------------------------------------------------------------------------------------\n\n### User-Defined Functions (UDFs)\n\n▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊\n▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊\n▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊\n\n------------------------------------------------------------------------------------------------------------\n\n\n### Standard Library of SQL Functions (StdLib)\n\n#### List of Functions\n\n* Strings\n * **`std_str_reverse()`**\n * **`std_str_join()`**\n * **`std_str_split()`**\n * **`std_str_split_re()`**\n * **`std_str_split_first()`**\n * **`std_re_matches()`**\n\n* XXX\n * **`std_generate_series()`**\n\n* Output\n * **`std_echo()`**\n * **`std_debug()`**\n * **`std_info()`**\n * **`std_warn()`**\n\n* Exceptions and Assertions\n * **`std_raise( message )`**—unconditionally throw an error with message given.\n * **`std_raise_json( facets_json )`**—unconditionally throw an error with informational properties encoded\n as a JSON string.\n * **`std_assert( test, message )`**—throw an error with `message` if `test` is falsy.\n * **`std_warn_if( test, message )`**—print an error `message` if `test` is truthy.\n * **`std_warn_unless()`**—print an error `message` if `test` is falsy.\n\n* Variables\n * **`std_getv()`**\n * **`std_variables()`**\n\n#### Use Case for DBay Exceptions and Assertions: Enforcing Invariants\n\n* `std_assert: ( test, message ) ->` throws error if `test` is false(y)\n* `std_warn_unless: ( test, message ) ->` prints warning if `test` is false(y)\n* often one wants to ensure a given SQL statement returns / affects exactly zero or one rows\n* easy to do if some rows are affected, but more difficult when no rows are affected, because a function in\n the statement won't be called when there are no rows.\n* The trick is to ensure that at least one row is computed even when no rows match the query, and the way to\n 
do that is to include an aggregate function such as `count(*)`.\n* May want to include `limit 1` where appropriate.\n\n```sql\nselect\n *,\n std_assert(\n count(*) > 0,\n '^2734-1^ expected one or more rows, got ' || count(*) ) as _message\n from nnt\n where true\n and ( n != 0 );\n```\n\n```sql\nselect\n *,\n std_assert(\n count(*) > 0, -- using `count(*)` will cause the function to be called\n -- even in case there are no matching rows\n '^2734-2^ expected one or more rows, got ' || count(*) ) as _message\n from nnt\n where true\n and ( n != 0 )\n and ( t = 'nonexistant' ); -- this condition is never fulfilled\n```\n\n#### Use Case for DBay Variables: Parametrized Views\n\n* An alternative for user-defined table functions where those functions would perform queries against the\n DB, which is tricky.\n* Inside the view definition, use `std_getv( name )` to retrieve variable values *which must have been set\n immediately prior to accessing the view*.\n* Downside is that it's easy to forget to update a given value, so best done from inside a specialized\n method in your application.\n\n------------------------------------------------------------------------------------------------------------\n\n### Safe Escaping for SQL Values and Identifiers\n\n\n#### Purpose\n\n* Facilitate the creation of securely escaped SQL literals.\n* In general not thought of as a replacement for the value interpolation offered by `DBay::prepare()`,\n `DBay::query()` and so, except when\n * one wants to parametrize DB object names (e.g. use table or column names like variables),\n * one wants to interpolate an SQL `values` list, as in `select employee from employees where department in\n ( 'sales', 'HR' );`.\n\n#### Escaping Identifiers, General Values, and List Values\n\n* **`db.sql.I: ( name ) ->`**: returns a properly quoted and escaped SQL **I**dentifier.\n* **`db.sql.L: ( x ) ->`**: returns a properly quoted and escaped SQL **V**alue. Note that booleans\n (`true`, `false`) will be converted to `1` and `0`, respectively.\n* **`db.sql.V: ( x ) ->`**: returns a bracketed SQL list of values (using `db.sql.V()` for each list\n element).\n\n\n#### Statement Interpolation\n\n**`db.interpolate( sql, values ) ->`** accepts a template (a string with placeholder formulas) and a list\nor object of values. It returns a string with the placeholder formulas replaced with the escaped values.\n\n```coffee\n# using named placeholders\nsql = SQL\"select $:col_a, $:col_b where $:col_b in $V:choices\"\nd = { col_a: 'foo', col_b: 'bar', choices: [ 1, 2, 3, ], }\nresult = db.sql.interpolate sql, d\n# > \"\"\"select \"foo\", \"bar\" where \"bar\" in ( 1, 2, 3 )\"\"\"\n```\n\n```coffee\n# using positional placeholders\nsql = SQL\"select ?:, ?: where ?: in ?V:\"\nd = [ 'foo', 'bar', 'bar', [ 1, 2, 3, ], ]\nresult = db.sql.interpolate sql, d\n# > \"\"\"select \"foo\", \"bar\" where \"bar\" in ( 1, 2, 3 )\"\"\"\n```\n\n```coffee\n# using an unknown format\nsql = SQL\"select ?:, ?X: where ?: in ?V:\"\nd = [ 'foo', 'bar', 'bar', [ 1, 2, 3, ], ]\nresult = db.sql.interpolate sql, d\n# throws \"unknown interpolation format 'X'\"\n```\n\n------------------------------------------------------------------------------------------------------------\n\n\n### SQL Statement Generation\n\nDBay offers limited support for the declarative generation of a small number of recurring classes of SQL\nstatements. 
These facilities are in no way intended to constitute or grow into a full-blown\nObject-Relational Mapper (ORM); instead, they are meant to make working with relational data less of a\nrepetitive chore.\n\n#### Insert Statement Generation\n\nTo pick one case in point, SQL `insert` statements when called from a procedural language have a nasty habit\nof demanding not two, but *three* copies of a table's column names:\n\n```coffee\ndb SQL\"\"\"\n create table xy (\n a integer not null primary key,\n b text not null,\n c boolean not null );\"\"\"\ndb SQL\"insert into xy ( b, c ) values ( $b, $c )\", { b, c, }\n# ^^^^^^^^ ^^^^^^^^^^ ^^^^^^^^^\n```\n\n<details><summary><ins>As stated above, DBay does not strive to implement full SQL statement generation.\nEven if one wanted to only generate SQL <code>insert</code> statements, one would still have to implement\nalmost all of SQL, as is evidenced by the screenshot of the <a\nhref=https://sqlite.org/lang_insert.html>SQLite <code>insert</code> Statement Railroad Diagram</a> that will\nbe displayed when clicking/tapping on this paragraph.</ins></summary> <img alt='SQLite Insert Statement\nRailroad Diagram'\nsrc=https://loveencounterflow.github.io/hengist/sqlite-syntax-diagrams/insert.railroad.png> </details>\n\nInstead, we implement facilities to cover the most frequent use cases and offer opportunities to insert SQL\nfragments at strategic points.\n\nOften, when an `insert` statement is being called for, one wants to insert full rows (minus `generate`d\ncolumns, for which see below) into tables. This is the default that DBay makes easy: A call to\n`db.prepare_insert()` with the insertion target identified with `into` will return a prepared statement that\ncan then be used as first argument to the `db` callable:\n\n```coffee\ninsert_into_xy = db.prepare_insert { into: 'xy', }\ndb insert_into_xy, { a, b, c, }\n```\n\nObserve that named parameters (as opposed to positional ones) are used, so values must be passed as an\nobject (as opposed to a list).\n\nIn case the actual SQL text of the statement is needed, call `db.create_insert()` instead:\n\n```coffee\ninsert_sql = db.create_insert { into: 'xy', }\n# 'insert into \"main\".\"xy\" ( \"a\", \"b\", \"c\" ) values ( $a, $b, $c );'\n```\n\nWhen one or more columns in a table are [`autoincrement`ed](https://sqlite.org/autoinc.html) or have a\n`default` value, then those columns are often intended not to be set explicitly. What's more, [columns with\n`generate`d values]() *must not* be set explicitly. 
For this reason, **`db.create_insert()` (and, by\nextension, `db.prepare_insert()`) will skip `generate`d columns** and allow to explicitly specify either\n*included* columns (as `fields`) or else *excluded* columns (as `exclude`):\n\n```coffee\ndb SQL\"\"\"\n create table t1(\n a integer primary key,\n b integer,\n c text,\n d integer generated always as (a*abs(b)) virtual,\n e text generated always as (substr(c,b,b+1)) stored );\"\"\"\ninsert_into_t1 = db.create_insert { into: 't1', }\n\n### Observe `d` and `e` are left out because they're generated, but `a` is present: ###\n# 'insert into \"main\".\"t1\" ( \"a\", \"b\", \"c\" ) values ( $a, $b, $c );'\n\n### You probably want either this: ###\ninsert_into_t1 = db.create_insert { into: 't1', fields: [ 'b', 'c', ], }\n# 'insert into \"main\".\"t1\" ( \"b\", \"c\" ) values ( $b, $c );'\n\n### Or this: ###\ninsert_into_t1 = db.create_insert { into: 't1', exclude: [ 'a', ], }\n# 'insert into \"main\".\"t1\" ( \"b\", \"c\" ) values ( $b, $c );'\n```\n\n> There's a subtle yet important semantic difference in how the `fields` and `exclude` settings are handled:\n> When `fields` are explicitly given, the table **does not have to exist** when generating the SQL; however,\n> when `fields` is not given, the table **must already exist** at the time of calling `create_insert()`.\n>\n> In either case, `prepare_insert()` can only succeed when all referenced object in an SQL statement have\n> already been created.\n\nThe next important thing one often wants in inserts is resolving conflicts. DBay `create_insert()` supports\nsetting `on_conflict` to either **(1)** an arbitrary string that should spell out a syntactically valid SQL\n`on conflict` clause, or **(2)** an object `{ update: true, }` to generate SQL that updates the explicitly\nor implicitly selected columns. This form has been chosen to leave the door open to future expansions of\nsupported features.\n\nWhen choosing the first option, observe that whatever string is passed in, `create_insert()` will prepend\n`'on conflict '` to it; therefore, to create an insert statement that ignores insert conflicts, and\naccording to the [`upsert` syntax railroad diagram](https://sqlite.org/lang_upsert.html): —\n\n![](artwork/upsert.railroad.svg)\n\n— the right thing to do is to call `db.create_insert { into: table_name, on_conflict: 'do nothing', }`.\nAssuming table `t1` has been declared as above, calling\n\n```coffee\ndb.create_insert { into: 't1', exclude: [ 'a', ], on_conflict: \"do nothing\", }\n```\n\nwill generate the (unformatted but properly escaped) equivalent to:\n\n```sql\ninsert into main.t1 ( b, c )\n values ( $b, $c )\n on conflict do nothing;\n -- |<------>|\n -- inserted string\n```\n\nwhile calling\n\n```coffee\ndb.create_insert { into: 't1', exclude: [ 'a', ], on_conflict: { update: true, }, }\n```\n\nwiil generate the (unformatted but properly escaped) equivalent to:\n\n```sql\ninsert into main.t1 ( b, c )\n values ( $b, $c )\n on conflict do update set --| conflict resolution clause\n b = excluded.b, --| mandated by { update: true, }\n c = excluded.c; --| containing same fields as above\n```\n\n#### Insert Statements with a `returning` Clause\n\nIt is sometimes handy to have `insert` statements that return a useful value. 
Here's a toy example\nthat demonstrates how one can have a table with generated columns:\n\n```coffee\ndb SQL\"\"\"\n create table xy (\n a integer not null primary key,\n b text not null,\n c text generated always as ( '+' || b || '+' ) );\"\"\"\ninsert_into_xy_sql = db.create_insert { into: 'xy', on_conflict: SQL\"do nothing\", returning: '*', }\n# -> 'insert into \"main\".\"xy\" ( \"a\", \"b\" ) values ( $a, $b ) on conflict do nothing returning *;'\ndb.single_row insert_into_xy_sql, { a: 1, b: 'any', } # -> { a: 1, b: 'any', c: '+any+' }\ndb.single_row insert_into_xy_sql, { a: 2, b: 'duh', } # -> { a: 2, b: 'duh', c: '+duh+' }\ndb.single_row insert_into_xy_sql, { a: 3, b: 'foo', } # -> { a: 3, b: 'foo', c: '+foo+' }\n```\n\nGenerally, the `returning` clause must be defined by a non-empty string that is valid SQL for the position\nafter `returning` and the end of the statement. A star `*` will return the entire row that has been\ninserted; we here use `db.single_row()` to eschew the result iterator that would be returned by default.\n\n\n\n\n------------------------------------------------------------------------------------------------------------\n\n### Random\n\n\n▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊\n▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊\n▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊▌▊\n\n\n\n------------------------------------------------------------------------------------------------------------\n\n## Note on Package Structure\n\n### `better-sqlite3` an 'Unsaved' Dependency\n\nSince DBay depends on [`better-sqlite3`](https://github.com/JoshuaWise/better-sqlite3) with a\n[custom-configured build of the SQLite C\nengine](https://github.com/JoshuaWise/better-sqlite3/blob/master/docs/compilation.md), it is (for whatever\nreason) important that **`better-sqlite3` must not be listed under `package.json#dependencies`**; otherwise,\ncompilation will not work properly. The [build script](./build-sqlite3) will run `npm install\nbetter-sqlite3@'^7.4.3'` but with an added `--no-save` flag.\n\n<del>## Use npm, Not pnpm</del>\n\n<del>Also, at the time of this writing (2021-09), while the project compiles fine using npm v7.21.1 (on NodeJS\nv16.9.1 on Linux Mint), but it fails using pnpm v6.14.6 with `Unknown options: 'build-from-source',\n'sqlite3'`. Yarn has not been tried.</del>\n\n<del>**Note**—*These considerations only concern those who wish to fork/clone DBay to work on the code. Those who\njust want to use DBay as a dependency of their project can both either run `npm install dbay` or `pnpm add\ndbay`, both package managers work fine.*</del>\n\n## To Do\n\n* **[–]** port foundational code from hengist &c\n* **[–]** at construction time, allow `dbnick` when `path` is given and `ram` is `false`\n* **[–]** to solve the table-UDF-with-DB-access conundrum, consider\n * <del>**[+]** https://github.com/mapnik/mapnik/issues/797, where connection parameters are discussed (see also\n https://www.sqlite.org/c3ref/open.html);</del> <ins>nothing of interested AFAICS</ins>\n * **[–]** mirroring a given DB into a second (RAM or file) location, taking care to replay any goings-on\n on both instances. 
This is probably unattractive from a performance POV.\n * **[–]** using [NodeJS worker threads](https://nodejs.org/api/worker_threads.html) to perform updates;\n maybe one could even continuously mirror a RAM DB on disk to get a near-synchronous copy, obliviating\n the necessity to explicitly call `db.save()`. See\n https://github.com/JoshuaWise/better-sqlite3/blob/master/docs/threads.md\n * **[–]** implementing **macros** so one could write eg `select * from foo( x ) as d;` to get `select *\n from ( select a, b, c from blah order by 1 ) as d` (i.e. inline expansion)\n * **[–]** Observe that, seemingly, only *table-valued* UDFs hang while with shared-cache we already *can*\n issue `select`s from inside UDFs, so maybe there's a teeny, fixable difference between how both are\n implemented that leads to the undesirable behavior\n* **[–]** let users choose between SQLite-only RAM DBs and `tmpfs`-based in-memory DBs (b/c the latter allow\n `pragma journal_mode = WAL` for better concurrent access). Cons include: `tmpfs`-based RAM DBs necessitate\n mounting a RAM disk which needs `sudo` rights, so might as well just instruct users to mount RAM disk,\n then use that path? Still, it would be preferrable to have some automatic copy-to-durable in place.\n* **[–]** implement context handler for discardable / temporary file\n* **[–]** allow to call `DBay::do -> ...` with an asynchronous function\n* **[–]** implement `DBay::open()`, `DBay::close()`\n* **[–]** ensure how cross-schema foreign keys work when re-attaching DBs / schemas one by one\n* **[–]** demote `random` from a mixin to functions in `helpers`.\n* **[–]** implement `db.truncate()` / `db.delete()`; allow to retrieve SQL.\n* **[–]** implement `DBay::insert_into.<table> [ 'field1', 'field2', ..., ], { field1, field2, ..., }`;\n allow to retrieve SQL.\n* **[–]** clarify whether UDFs get called at all when any argument is `null` b/c it looks like they\n don't get called which would be unfortunate\n* **[–]** add schematic to clarify terms like *database*, *schema*, *connection*; hilite that UDFs are\n defined on *connections* (not *schemas* or *databases* as would be the case in e.g. PostgreSQL).\n* **[–]** allow to transparently treat key/value tables as caches\n* **[–]** implement escaping of dollar-prefixed SQL placeholders (needed by `create_insert()`).\n* **[–]** implement\n * **[–]** `db.commit()`\n * **[–]** `db.rollback()`\n* **[–]** allow to use sets with `sql.V()`\n* **[–]** implement export/snapshot function that generates a DB with a simplified structure:\n * replace generated fields, results from function calls by constants\n * remove `strict` and similar newer attributes\n * DB should be readable by tools like `sqlite3` command line,\n [`visualize-sqlite`](https://lib.rs/crates/visualize-sqlite)\n* **[+]** <del>consider to implement `trash()` as `trash_to_sql()` (`path` optional), `trash_to_sqlite()`\n (`path` optional)</del> trash functionality now moved to\n [DeSQL](https://github.com/loveencounterflow/desql)\n* **[–]** rewrite all uses of plain `E` to `@E`\n* **[–]** limit support for schemas, especially in plugins; require a separate instance of DBay for each DB\n file (so that all DB objects are in the default `main` namespace and the `SQL\"#{schema}.xxx\"` constructs\n can become `SQL\"xxx\"`). 
Complex DBs can still be assembled with `db.open()`, but one must keep in mind\n that in SQLite, `foreign key`s do not work across schemas, only `join`s so, so that limits the usefulness\n of multi-schema connections.\n* **[–]** consider to change `call` argument in UDFs to `callee`\n* **[–]** add fields to `std_re_matches()`:\n\n ```coffee\n db.create_table_function\n name: prefix + '_re_matches'\n columns: [ 'match', 'capture', 'start', 'stop', ]\n parameters: [ 'text', 'pattern', ]\n rows: ( text, pattern ) ->\n regex = new RegExp pattern, 'g'\n while ( match = regex.exec text )?\n [ m, c, ] = match\n yield [ m, ( c ? null ), start: match.index, stop: match.index + m.length, ]\n return null\n ```\n\n* **[–]** update to an SQLite version that includes\n [`#9430ead7ba433cbf`](https://sqlite.org/src/info/9430ead7ba433cbf) to fix [an issue with window\n functions](https://sqlite.org/forum/forumpost/ba160cf2fe)\n* **[–]** write a chapter about application architecture best practices, including:\n * limitations of using UDFs (tools, `sqlite3` CLI will not work)\n * SQLite, like other popular DBs (i.e. Postgres) are notoriously bad at giving exact error locations.\n Using triggers (and generated columns) can exacerbate that problem (imagine an `on insert` trigger that\n performs inserts on another table using `select from t`; ahould an error occur in the last step, SQLite\n will still attribute it to the original `insert` statement (without giving it any kind of locality or\n mentioning `t`'s role))\n * because errors are badly located by SQLite, prefer writing many small steps instead of few big ones\n (i.e. prefer `db SQL\"do this;\"`, `db SQL\"do that;\"` over `db SQL\"do this; do that;\"`)\n* **[–]** implement `select * from t` SQL generation\n\n## Is Done\n\n* **[+]** implement `DBay::do()` as a method that unifies all of `better-sqlite3`'s `Statement::run()`,\n `Statement::iterate()`, and `Database::execute()`.\n* **[+]** allow to call `DBay::do -> ...` with a synchronous function with the same semantics as\n `DBay::with_transaction -> ...`.\n* **[+]** allow to call `DBay::do { mode: 'deferred', }, -> ...`.\n* **[+]** make `db = new DBay()` an instance of `Function` that, when called, runs `DBay::do()`\n `Database::execute()`.\n `statement = DBay::prepare.insert_into.<table> [ 'field1', 'field2', ..., ]`\n* **[+]** change classname(s) from `Dbay` to `DBay` to avoid spelling variant proliferation\n* **[+]** let `db.do()` accept prepared statement objects.\n* **[+]** make `first_row()`, `all_rows()` etc accept statements and strings\n* **[+]** at the moment we use `cfg.prefix` for (inherently schema-less) UDF names (and require a trailing\n underscore to be part of the prefix), and `cfg.schema` for plugin-in-specific DB tables and views; in the\n future, we should use a single parameter for both (and make the underscore implicit). In addition, it\n should be possible to choose whether a plugin will create its objects with a prefix (in the same schema as\n the main DB) or within another schema.\n* **[+]** fix generated SQL `insert` statements without explicit fields\n* **[+]** add hidden `E` attribute to instance giving access to error classes (mainly for plugin use)\n* **[+]** implement `as_object: ( key, sql, P... ) ->`\n* **[+]** modify time stamp format to make it viable for use in file names on most systems\n * new format is `YYYYMMDD-HHmmssZ`, e.g. 
`20220426-171916Z` is the time of this writing\n* **[+]** fix datetime output to use different formats for input, output so output contains literal `Z`\n instead of numerical offset\n\n\n\n\n\n\n\n\n\n"
}
}

@@ -39,2 +39,4 @@

- [Random](#random)
- [Notes on User Defined Functions (UDFs)](#notes-on-user-defined-functions-udfs)
- [(Outline for a) Draft for a Stored Procedure Feature Request](#outline-for-a-draft-for-a-stored-procedure-feature-request)
- [Note on Package Structure](#note-on-package-structure)

@@ -687,2 +689,107 @@ - [`better-sqlite3` an 'Unsaved' Dependency](#better-sqlite3-an-unsaved-dependency)

## Notes on User Defined Functions (UDFs)
* SQLite fundamentally differs from client/server RDBMSes in that it allows User Defined Functions (UDFs)
only on the DB connection of the host application
* these UDFs are written in the language and run in the environment of the host application
* I believe UDFs are, as such, a huge boon because they help to push many chores to the DB and allow for
such nifty things as having generated columns whose contents are not stored but indexed and that can call
into existing code of the hosting app without code duplication and without network roundtrips (both of
which are hallmarks of client/server architectures)
* but the downside of connection-defined UDFs is that SQLite DBs created with UDFs will break when the
environment changes (e.g. when opening a second connection to the same DB without recreating all UDFs, or
opening the DB with other tools such as the `sqlite3` command line tool); an abortive error will occur
as soon as any function is encountered (one may be able to perform some operations that do not cause a
function to be called); a short sketch after this list illustrates the point
* this means that a DB created with UDFs will not be amenable to *any* of the helpful tools that exist
(like ER diagrammers and so on)
* one can also not pass a DB file around for other people to have a gander at the data—using UDFs means
your host application (or all the relevant parts of it) has to be reproduced on the other machine, and
even then, only the host application can provide access to the data—again, no external tooling here
* because UDFs are so useful, it's probably worthwhile to think about how to work around the limitations.
Possible solutions include:
* *put all the logic in the application*—this is the most straightforward and classical approach. If a
prospective generated field cannot be readily or reasonably computed using only SQLite's built-in
functions, use an ordinary table field and precompute the value before inserting rows. If the UDF would
be called from a view, turn that view into a table and insert the rows from your application. **Rating:
+1** because it's so straightforward and classical.
* *open a feature request against SQLite* with a view to enabling support for something like SQL `CREATE
FUNCTION`; the body of such a function could be formulated in much the same way as the already existing
syntax for `CREATE TRIGGER`, which likewise allows defining a block with a sequence of SQL statements.
In its simplest form, `CREATE FUNCTION` would allow for parametrized views, which would be incredibly
useful. [Such a feature request has been issued before](https://sqlite.org/forum/info/7f554820209e0d8c)
(not listed on the [Open Feature Requests page](https://www2.sqlite.org/src/rptview?rn=3)) in 2021 and
led to an extended and informative discussion (see triggers, below), but so far, nothing has come of
it. **Rating: +0** because while everyone is encouraged to do it, hopes are not high IMHO; however [see
below for a draft](#outline-for-a-draft-for-a-stored-procedure-feature-request) that I think could have
*some* chances.
* *compile your UDFs into a loadable SQLite extension*—this can solve part of the problem, but only just
so. Most tools simply have no way or concept to load an SQLite extension, one exception being the
`sqlite3` command line tool, but even then, you must ship the extension alongside the DB file, and
the receiving party will have to do a bit more work to open the DB (and be willing to use the command
line). They must also be on a compatible system or your `*.so`/`*.dll` will not work, or else you must
compile the extension for multiple systems. **Rating: -1** because who wants to do the authoring and
testing and compiling when so little universal usability comes out of it.
* *use triggers*—this is a somewhat promising workaround that only occurred to me when reading [the OP of
the aforementioned feature request](https://sqlite.org/forum/forumpost/78a60bdeec7c1ee9): put your
functionality inside a trigger. Of course, this only makes sense if your function is readily expressible
in terms of SQLite's built-in functions, but potentially you can bundle your functionalities better.
**Rating: +1** because this is another classical technique; its main downside is that you still have to
(re-)produce your functionalities in pure SQL (with built-in functions), and in case the same
functionality is required in more than one place, there is no way around copy/paste. Maybe
code duplication could be avoided with code generation?

All of these solutions suck in one way or another.
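
To make the connection-bound nature of UDFs concrete, here is a minimal sketch. It does not use DBay's own
UDF helpers (their documentation is elided above); instead it registers a scalar function via the underlying
`better-sqlite3` connection `db.sqlt1` mentioned in the README, assuming that queries issued through the
`db` callable run on that same connection:

```coffee
{ DBay } = require 'dbay'

db = new DBay()
# Register a scalar UDF on the underlying better-sqlite3 connection; both the
# name `product` and its implementation exist on this one connection only.
db.sqlt1.function 'product', ( a, b ) -> a * b
result = db "select product( 6, 7 ) as x;"
rows   = [ result..., ]
# -> [ { x: 42 } ]
# A second connection to the same DB file, or the `sqlite3` CLI, knows nothing
# about `product` and errors out as soon as a query touches the function.
```

Everything that makes UDFs attractive is visible here (host-language code, no network roundtrip), and so is
the portability problem that the workarounds above try to address.
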
### (Outline for a) Draft for a Stored Procedure Feature Request
* minimal: the extension to SQL consists in introducing a `CREATE FUNCTION` statement; its body would be
much like the existing syntax for triggers. The default (and, initially, only) language to be used is SQL.
Example:
```sql
CREATE FUNCTION product( a number, b number ) RETURNS number LANGUAGE SQL
BEGIN
  SELECT a * b; /* or RETURN a * b; */
END;
```
> **Notes:**
> * Type annotations and the `RETURNS` clause should be optional as in `CREATE TABLE` statements, whereas
> `LANGUAGE SQL` should initially be made mandatory to avoid premature fixation of a bad default.
> * Initially at least, functions should not be multi-dispatch (i.e. a name can appear at most once).
> * Nice-to-have: `CREATE OR REPLACE`, `IF NOT EXISTS`, `DROP FUNCTION`.

This form is already useful because now you can bundle and name recurrent expressions—anything that can
appear in a scalar (single-valued) `SELECT` statement can be named and collected into libraries.
**Extension** Add to this `RETURNS SETOF $TYPE`, `RETURNS SETOF ROW`, and now you can have table-valued
functions a.k.a. *parametrized views*!
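
For comparison, the following sketch shows roughly what such a parametrized view looks like today when it
has to be defined from the host language. It is modeled on the `db.create_table_function` call quoted in the
To Do list further down; treat the exact signature and the query result as assumptions, not as verified DBay
API documentation:

```coffee
# Hypothetical table-valued UDF `my_range( first, last )`, i.e. a parametrized
# view implemented in CoffeeScript rather than in the proposed SQL syntax.
db.create_table_function
  name:       'my_range'
  columns:    [ 'n', ]
  parameters: [ 'first', 'last', ]
  rows: ( first, last ) ->
    for n in [ first .. last ]
      yield [ n, ]
    return null

result = db "select n from my_range( 1, 3 );"
rows   = [ result..., ]
# expected: [ { n: 1 }, { n: 2 }, { n: 3 } ]
```

The proposed `CREATE FUNCTION ... RETURNS SETOF ...` would let the same thing be written, stored, and shared
as plain SQL.
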
**Extension** Add to this statement sequences.
**Extension** Add to this branching (`IF`/`THEN` and/or `CASE`/`WHEN`). This extension would cross the line
where the language inside a function declaration becomes significantly different from the language outside
of it. OTOH branching could conceivably work outside of functions, much like C's preprocessor directives.
**Extension** Add to this `LOOP/BREAK/BREAK IF` loops.
**Extension** Add to this `YIELD` for use in table-valued functions.
**Extension** Add to this (function-local) variables. These are in principle already implemented in the form
of function parameters.
**Extension** Some support for dynamic SQL that could potentially be much less clunky than what PostgreSQL
offers; at first one would only need a syntax to signify safe interpolation as an identifier, e.g. `SELECT *
FROM @table_name;` or similar.
## Note on Package Structure

@@ -792,2 +899,5 @@

* **[–]** implement `select * from t` SQL generation
* **[–]** could the `SQL` string annotation / tagged literal function be syntactically extended to allow
simpler interpolation of escaped names? Could we instantiate it with a dictionary of values (implement in
[Guy](https://github.com/loveencounterflow/guy))? A hypothetical sketch follows below.
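
One conceivable shape for this idea, sketched purely as an illustration: the tag-function skeleton and
`db.sql.I()` are taken from the README above, while the name `SQLN` and the binding helper are made up:

```coffee
# Hypothetical: an SQL tag bound to a DBay instance that escapes every
# interpolated value as an identifier via db.sql.I().
sql_with_names = ( db ) -> ( parts, expressions... ) ->
  R = parts[ 0 ]
  for expression, idx in expressions
    R += ( db.sql.I expression ) + parts[ idx + 1 ]
  return R

SQLN  = sql_with_names db
table = 'products'
sql   = SQLN"select count(*) as n from #{table};"
# -> 'select count(*) as n from "products";'
```
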

@@ -794,0 +904,0 @@ ## Is Done

lib/stdlib-mixin.js.map

Sorry, the diff of this file is not supported yet

