RDB is the ultimate Object Relational Mapper for Node.js and TypeScript, offering seamless integration with popular databases like Postgres, MS SQL, MySQL, SAP ASE (Sybase), and SQLite. Whether you're building applications in TypeScript or JavaScript (including both CommonJS and ECMAScript modules), RDB has you covered.
This is the Modern TypeScript Documentation. Are you looking for the Classic Documentation?
$ npm install rdb
Next, install a database driver. Here we choose SQLite.
$ npm install sqlite3
📄 map.js
import rdb from 'rdb';
const map = rdb.map(x => ({
customer: x.table('customer').map(({ column }) => ({
id: column('id').numeric().primary().notNullExceptInsert(),
name: column('name').string(),
balance: column('balance').numeric(),
isActive: column('isActive').boolean(),
})),
order: x.table('_order').map(({ column }) => ({
id: column('id').numeric().primary().notNullExceptInsert(),
orderDate: column('orderDate').date().notNull(),
customerId: column('customerId').numeric().notNullExceptInsert(),
})),
orderLine: x.table('orderLine').map(({ column }) => ({
id: column('id').numeric().primary(),
orderId: column('orderId').numeric(),
product: column('product').string(),
})),
deliveryAddress: x.table('deliveryAddress').map(({ column }) => ({
id: column('id').numeric().primary(),
orderId: column('orderId').numeric(),
name: column('name').string(),
street: column('street').string(),
postalCode: column('postalCode').string(),
postalPlace: column('postalPlace').string(),
countryCode: column('countryCode').string(),
}))
})).map(x => ({
order: x.order.map(v => ({
customer: v.references(x.customer).by('customerId'),
lines: v.hasMany(x.orderLine).by('orderId'),
deliveryAddress: v.hasOne(x.deliveryAddress).by('orderId'),
}))
}));
export default map;
📄 update.js
import map from './map';
const db = map.sqlite('demo.db');
updateRow();
async function updateRow() {
const order = await db.order.getById(2, {
lines: true
});
order.lines.push({
product: 'broomstick'
});
await order.saveChanges();
}
📄 filter.js
import map from './map';
const db = map.sqlite('demo.db');
getRows();
async function getRows() {
const filter = db.order.lines.any(
line => line.product.contains('broomstick'))
.and(db.order.customer.name.startsWith('Harry'));
const orders = await db.order.getMany(filter, {
lines: true,
deliveryAddress: true,
customer: true
});
console.dir(orders, { depth: Infinity });
}
Each column in your database table is mapped with the column() method, in which you specify its name. This returns a reference to a column object on which you can declare further properties, such as the data type or whether it serves as a primary key.
Relationships between tables can also be outlined. Using methods like hasOne, hasMany, and references, you can establish connections that reflect the relationships in your data schema. In the example below, an 'order' is linked to a 'customer' reference, a 'deliveryAddress', and multiple 'lines'. The hasMany and hasOne relations represent ownership: the tables 'deliveryAddress' and 'orderLine' are owned by the 'order' table, and therefore they contain the 'orderId' column referring to their parent table, 'order'. Conversely, the customer table is independent and can exist without any knowledge of the 'order' table. Therefore, we say that the order table references the customer table, necessitating a 'customerId' column in the 'order' table.
📄 map.js
import rdb from 'rdb';
const map = rdb.map(x => ({
customer: x.table('customer').map(({ column }) => ({
id: column('id').numeric().primary().notNullExceptInsert(),
name: column('name').string(),
balance: column('balance').numeric(),
isActive: column('isActive').boolean(),
})),
order: x.table('_order').map(({ column }) => ({
id: column('id').numeric().primary().notNullExceptInsert(),
orderDate: column('orderDate').date().notNull(),
customerId: column('customerId').numeric().notNullExceptInsert(),
})),
orderLine: x.table('orderLine').map(({ column }) => ({
id: column('id').numeric().primary(),
orderId: column('orderId').numeric(),
product: column('product').string(),
})),
deliveryAddress: x.table('deliveryAddress').map(({ column }) => ({
id: column('id').numeric().primary(),
orderId: column('orderId').numeric(),
name: column('name').string(),
street: column('street').string(),
postalCode: column('postalCode').string(),
postalPlace: column('postalPlace').string(),
countryCode: column('countryCode').string(),
}))
})).map(x => ({
order: x.order.map(({ hasOne, hasMany, references }) => ({
customer: references(x.customer).by('customerId'),
deliveryAddress: hasOne(x.deliveryAddress).by('orderId'),
lines: hasMany(x.orderLine).by('orderId')
}))
}));
export default map;
The init.js script resets our SQLite database. It's worth noting that SQLite databases are represented as single files, which makes them wonderfully straightforward to manage.
At the start of the script, we import our database mapping from map.js and create the db object, which we'll use to interact with our SQLite database.
Then, we define a SQL string that outlines the structure of our SQLite database: it first drops the tables 'deliveryAddress', 'orderLine', '_order', and 'customer' if they exist, ensuring a clean slate, and then creates them anew with the necessary columns and constraints.
Because SQLite only executes one statement at a time through this interface, we split the SQL string into separate statements using the split() method, which breaks the string at every semicolon.
📄 init.js
import map from './map';
const db = map.sqlite('demo.db');
const sql = `DROP TABLE IF EXISTS deliveryAddress; DROP TABLE IF EXISTS orderLine; DROP TABLE IF EXISTS _order; DROP TABLE IF EXISTS customer;
CREATE TABLE customer (
id INTEGER PRIMARY KEY,
name TEXT,
balance NUMERIC,
isActive INTEGER
);
CREATE TABLE _order (
id INTEGER PRIMARY KEY,
orderDate TEXT,
customerId INTEGER REFERENCES customer
);
CREATE TABLE orderLine (
id INTEGER PRIMARY KEY,
orderId INTEGER REFERENCES _order,
product TEXT
);
CREATE TABLE deliveryAddress (
id INTEGER PRIMARY KEY,
orderId INTEGER REFERENCES _order,
name TEXT,
street TEXT,
postalCode TEXT,
postalPlace TEXT,
countryCode TEXT
)
`;
async function init() {
const statements = sql.split(';');
for (let i = 0; i < statements.length; i++) {
await db.query(statements[i]);
}
}
export default init;
In SQLite, columns with the INTEGER PRIMARY KEY attribute are designed to autoincrement by default. This means that each time a new record is inserted into the table, SQLite automatically produces a numeric key for the id column that is one greater than the largest existing key. This mechanism is particularly handy when you want to create unique identifiers for your table rows without manually entering each id.
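As a minimal sketch (the file name and inserted values are illustrative), inserting a row without an id lets SQLite assign the key:
📄 autoincrement.js
import map from './map';
const db = map.sqlite('demo.db');
demo();
async function demo() {
  //no id is supplied; SQLite assigns the next INTEGER PRIMARY KEY value
  const ron = await db.customer.insert({
    name: 'Ron',
    balance: 50,
    isActive: true
  });
  console.log(ron.id); //e.g. 3 if the largest existing id was 2
}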
SQLite
$ npm install sqlite3
import map from './map';
const db = map.sqlite('demo.db');
With connection pool
$ npm install sqlite3
import map from './map';
const db = map.sqlite('demo.db', { size: 10 });
From the browser
You can securely use RDB from the browser via the Express.js plugin, which keeps sensitive database credentials away from the client. Rather than transmitting raw SQL queries from the client to the server, it records method calls initiated by the client, which are then replayed and validated on the server. This reinforces security by never exposing raw SQL on the client side and also makes for smoother operation. Essentially, the method mirrors a traditional REST API, augmented with advanced TypeScript tooling. You can read more about it in the section called 'In the browser'.
📄 server.js
import map from './map';
import { json } from 'body-parser';
import express from 'express';
import cors from 'cors';
const db = map.sqlite('demo.db');
express().disable('x-powered-by')
.use(json({ limit: '100mb' }))
.use(cors())
//for demonstration purposes, authentication middleware is not shown here
.use('/rdb', db.express())
.listen(3000, () => console.log('Example app listening on port 3000!'));
📄 browser.js
import map from './map';
const db = map.http('http://localhost:3000/rdb');
MySQL
$ npm install mysql2
import map from './map';
const db = map.mysql('mysql://test:test@mysql/test');
MS SQL
$ npm install tedious
import map from './map';
const db = map.mssql({
server: 'mssql',
options: {
encrypt: false,
database: 'test'
},
authentication: {
type: 'default',
options: {
userName: 'sa',
password: 'P@assword123',
}
}
});
PostgreSQL
$ npm install pg
import map from './map';
const db = map.pg('postgres://postgres:postgres@postgres/postgres');
SAP Adaptive Server
$ npm install msnodesqlv8
import { fileURLToPath } from 'url';
import { dirname } from 'path';
import map from './map';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
//download the ODBC driver from SAP's web pages
const db = map.sap(`Driver=${__dirname}/libsybdrvodb.so;SERVER=sapase;Port=5000;UID=sa;PWD=sybase;DATABASE=test`);
In the code below, we first import the table mapping "map.js" and the setup script "init.js", both defined in the preceding steps. The setup script executes a raw query that creates the necessary tables. We then insert two customers, named "George" and "Harry", into the customer table by calling "db.customer.insert".
Next, we insert an array of two orders into the order table. Each order contains an orderDate, customer information, a deliveryAddress, and lines for the order items. We use the customer constants "george" and "harry" from the previous inserts. Observe that we don't pass in any primary keys; all tables here have autoincrementing keys. The second argument to "db.order.insert" is a fetching strategy, which determines the depth of the data retrieved from the database after insertion: it specifies which associated data to retrieve and include in the resulting orders. In this case, it instructs the database to retrieve the customer, deliveryAddress, and lines for each order.
Without a fetching strategy, "db.order.insert" would only return the root level of each order; you would only get the id, orderDate, and customerId.
import map from './map';
import init from './init';
const db = map.sqlite('demo.db');
insertRows();
async function insertRows() {
await init();
const george = await db.customer.insert({
name: 'George',
balance: 177,
isActive: true
});
const harry = await db.customer.insert({
name: 'Harry',
balance: 200,
isActive: true
});
const orders = await db.order.insert([
{
orderDate: new Date(2022, 0, 11, 9, 24, 47),
customer: george,
deliveryAddress: {
name: 'George',
street: 'Node street 1',
postalCode: '7059',
postalPlace: 'Jakobsli',
countryCode: 'NO'
},
lines: [
{ product: 'Bicycle' },
{ product: 'Small guitar' }
]
},
{
customer: harry,
orderDate: new Date(2021, 0, 11, 12, 22, 45),
deliveryAddress: {
name: 'Harry Potter',
street: '4 Privet Drive, Little Whinging',
postalCode: 'GU4',
postalPlace: 'Surrey',
countryCode: 'UK'
},
lines: [
{ product: 'Magic wand' }
]
}
], {customer: true, deliveryAddress: true, lines: true}); //fetching strategy
console.dir(orders, {depth: Infinity});
}
RDB has a rich querying model. As you read on, you'll learn the various methods available to retrieve data from your tables, whether you want to fetch all rows, many rows matching specific criteria, or a single row by primary key.
The fetching strategy in RDB is optional, and its use depends on your specific needs. You can define the fetching strategy either at the table level or at the column level. This granularity gives you the freedom to decide how much related data to pull along with your primary request.
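As a quick sketch of the difference, assuming the demo database from the earlier steps, compare fetching an order with and without a strategy:
import map from './map';
const db = map.sqlite('demo.db');
compare();
async function compare() {
  //without a fetching strategy: only root-level columns (id, orderDate, customerId)
  const shallow = await db.order.getById(1);
  //with a fetching strategy: related rows are fetched and included
  const deep = await db.order.getById(1, { customer: true, lines: true });
  console.dir(shallow, { depth: Infinity });
  console.dir(deep, { depth: Infinity });
}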
All rows
import map from './map';
const db = map.sqlite('demo.db');
getRows();
async function getRows() {
const orders = await db.order.getAll({
customer: true,
deliveryAddress: true,
lines: true
});
console.dir(orders, {depth: Infinity});
}
Limit, offset and order by
This script demonstrates how to fetch orders with customer, lines, and deliveryAddress, limiting the result set to 10 rows, skipping the first row, and sorting by orderDate descending, then by id. The lines are sorted by product.
import map from './map';
const db = map.sqlite('demo.db');
getRows();
async function getRows() {
const orders = await db.order.getAll({
offset: 1,
orderBy: ['orderDate desc', 'id'],
limit: 10,
customer: true,
deliveryAddress: true,
lines: {
orderBy: 'product'
},
});
console.dir(orders, {depth: Infinity});
}
Many rows filtered
import map from './map';
const db = map.sqlite('demo.db');
getRows();
async function getRows() {
const filter = db.order.lines.any(line => line.product.contains('i'))
.and(db.order.customer.balance.greaterThan(180));
const orders = await db.order.getMany(filter, {
customer: true,
deliveryAddress: true,
lines: true
});
console.dir(orders, {depth: Infinity});
}
Single row filtered
import map from './map';
const db = map.sqlite('demo.db');
getRows();
async function getRows() {
const filter = db.order.customer(customer => customer.isActive.eq(true)
.and(customer.name.startsWith('Harr')));
//equivalent, but creates slightly different sql:
// const filter = db.order.customer.isActive.eq(true).and(db.order.customer.name.startsWith('Harr'));
const order = await db.order.getOne(filter, {
customer: true,
deliveryAddress: true,
lines: true
});
console.dir(order, {depth: Infinity});
}
Single row by primary key
import map from './map';
const db = map.sqlite('demo.db');
getRows();
async function getRows() {
const order = await db.order.getById(1, {
customer: true,
deliveryAddress: true,
lines: true
});
console.dir(order, {depth: Infinity});
}
Many rows by primary key
import map from './map';
const db = map.sqlite('demo.db');
getRows();
async function getRows() {
const orders = await db.order.getMany([
{id: 1},
{id: 2}
],
{
customer: true,
deliveryAddress: true,
lines: true
});
console.dir(orders, {depth: Infinity});
}
To update rows, modify the property values and invoke saveChanges(). The method updates only the modified columns, not the entire row. Rows in child relations can also be updated, as long as the parent order owns the child tables. In our illustration, the order table owns both the deliveryAddress and the lines tables because they're part of a hasOne/hasMany relationship. By contrast, the customer is part of a reference relationship and therefore can't be updated here. You can, however, detach the reference to the customer by assigning it null or undefined. (Setting order.customerId to null or undefined achieves the same result.)
Updating a single row
import map from './map';
const db = map.sqlite('demo.db');
update();
async function update() {
const order = await db.order.getById(1, {
customer: true,
deliveryAddress: true,
lines: true
});
order.orderDate = new Date();
order.deliveryAddress = null;
order.lines.push({product: 'Cloak of invisibility'});
await order.saveChanges();
console.dir(order, {depth: Infinity});
}
Updating many rows
import map from './map';
const db = map.sqlite('demo.db');
update();
async function update() {
let orders = await db.order.getAll({
orderBy: 'id',
lines: true,
deliveryAddress: true,
customer: true
});
orders[0].orderDate = new Date();
orders[0].deliveryAddress.street = 'Node street 2';
orders[0].lines[1].product = 'Big guitar';
orders[1].orderDate = '2023-07-14T12:00:00'; //iso-string is allowed
orders[1].deliveryAddress = null;
orders[1].customer = null;
orders[1].lines.push({product: 'Cloak of invisibility'});
await orders.saveChanges();
console.dir(orders, {depth: Infinity});
}
Updating with concurrency
Rows are updated with an optimistic concurrency strategy by default: if a property you are editing was altered by someone else in the meantime, an exception is raised, indicating the row was modified by a different user. You can change the concurrency strategy at either the table or the column level.
Currently, there are three concurrency strategies:
optimistic (the default): raises an exception if the row was changed by another user in the meantime.
overwrite: writes your changes regardless of any interim changes.
skipOnConflict: silently skips the update when the row was changed in the meantime.
In the example below, we've set the concurrency strategy for orderDate to 'overwrite'. This implies that if other users modify orderDate while you're making changes, their updates will be overwritten.
import map from './map';
const db = map.sqlite('demo.db');
update();
async function update() {
const order = await db.order.getById(1, {
customer: true,
deliveryAddress: true,
lines: true
});
order.orderDate = new Date();
order.deliveryAddress = null;
order.lines.push({product: 'Cloak of invisibility'});
await order.saveChanges( {
orderDate: {
concurrency: 'overwrite'
}});
console.dir(order, {depth: Infinity});
}
Deleting a single row
import map from './map';
const db = map.sqlite('demo.db');
deleteRow();
async function deleteRow() {
const order = await db.order.getById(1);
await order.delete();
//will also delete deliveryAddress and lines
//but not customer
}
Deleting many rows
import map from './map';
const db = map.sqlite('demo.db');
deleteRows();
async function deleteRows() {
const filter = db.order.customer.name.eq('George');
let orders = await db.order.getMany(filter);
await orders.delete();
}
Deleting with concurrency
Concurrent operations can lead to conflicts. When you want to proceed with the deletion regardless of potential interim changes, you can use the 'overwrite' concurrency strategy. This example demonstrates deleting rows even if the deliveryAddress has been modified in the meantime. You can read more about concurrency strategies in the ['Updating rows' section](#user-content-updating-rows).
import map from './map';
const db = map.sqlite('demo.db');
deleteRows();
async function deleteRows() {
const filter = db.order.deliveryAddress.name.eq('George');
let orders = await db.order.getMany(filter, {
customer: true,
deliveryAddress: true,
lines: true
});
await orders.delete({
deliveryAddress: {
concurrency: 'overwrite'
}
});
}
Batch delete
When removing a large number of records matching a certain condition, batch deletion is efficient.
Note, however, that batch deletes don't follow the cascade-delete behavior by default. To achieve cascading in batch deletes, you must explicitly call the deleteCascade method.
import map from './map';
const db = map.sqlite('demo.db');
deleteRows();
async function deleteRows() {
const filter = db.order.deliveryAddress.name.eq('George');
await db.order.delete(filter);
}
Batch delete cascade
When deleting records, sometimes associated data in related tables also needs to be removed. This cascade delete helps maintain database integrity.
import map from './map';
const db = map.sqlite('demo.db');
deleteRows();
async function deleteRows() {
const filter = db.order.deliveryAddress.name.eq('George');
await db.order.deleteCascade(filter);
}
Batch delete by primary key
For efficiency, you can also delete records directly if you know their primary keys.
import map from './map';
const db = map.sqlite('demo.db');
deleteRows();
async function deleteRows() {
await db.customer.delete([{id: 1}, {id: 2}]);
}
📄 server.js
import map from './map';
import { json } from 'body-parser';
import express from 'express';
import cors from 'cors';
const db = map.sqlite('demo.db');
express().disable('x-powered-by')
.use(json({ limit: '100mb' }))
.use(cors())
.use('/rdb', db.express())
.listen(3000, () => console.log('Example app listening on port 3000!'));
📄 browser.js
import map from './map';
const db = map.http('http://localhost:3000/rdb');
updateRows();
async function updateRows() {
const filter = db.order.lines.any(
line => line.product.startsWith('Magic wand'))
.and(db.order.customer.name.startsWith('Harry')
);
const order = await db.order.getOne(filter, {
lines: true
});
order.lines.push({
product: 'broomstick'
});
await order.saveChanges();
}
Interceptors and base filter
In the next setup, axios interceptors are employed on the client side to add an Authorization header to requests. Meanwhile, on the server side, an Express.js middleware (validateToken) ensures the presence of the Authorization header, while a base filter on the order table restricts incoming requests to the customerId extracted from that header. This combined approach enhances security by ensuring that users can only access data relevant to their authorization level and that every request carries a token. In real-world applications, it's advisable to use a more comprehensive token system and to expand error handling to manage a wider range of potential issues.
One notable side effect, compared to the previous example, is that only the order table is exposed for interaction; all other tables in the database remain shielded from direct client access (except for related tables). If you want to expose a table without a baseFilter, just set its table name to an empty object, as shown in the sketch after this example.
📄 server.js
import map from './map';
import { json } from 'body-parser';
import express from 'express';
import cors from 'cors';
const db = map.sqlite('demo.db');
express().disable('x-powered-by')
.use(json({ limit: '100mb' }))
.use(cors())
.use('/rdb', validateToken)
.use('/rdb', db.express({
order: {
baseFilter: (db, req, _res) => {
const customerId = Number.parseInt(req.headers.authorization.split(' ')[1]); //Bearer 2
return db.order.customerId.eq(customerId);
}
}
}))
.listen(3000, () => console.log('Example app listening on port 3000!'));
function validateToken(req, res, next) {
// For demo purposes, we're just checking against existence of authorization header
// In a real-world scenario, this would be a dangerous approach because it bypasses signature validation
const authHeader = req.headers.authorization;
if (authHeader)
return next();
else
return res.status(401).json({ error: 'Authorization header missing' });
}
📄 browser.js
import map from './map';
const db = map.http('http://localhost:3000/rdb');
updateRows();
async function updateRows() {
db.interceptors.request.use((config) => {
// For demo purposes, we're just adding hardcoded token
// In a real-world scenario, use a proper JSON web token
config.headers.Authorization = 'Bearer 2'; //customerId
return config;
});
db.interceptors.response.use(
response => response,
(error) => {
if (error.response && error.response.status === 401) {
console.dir('Unauthorized, dispatch a login action');
//redirectToLogin();
}
return Promise.reject(error);
}
);
const filter = db.order.lines.any(
line => line.product.startsWith('Magic wand'))
.and(db.order.customer.name.startsWith('Harry')
);
const order = await db.order.getOne(filter, {
lines: true
});
order.lines.push({
product: 'broomstick'
});
await order.saveChanges();
}
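For illustration, here is a minimal sketch of the empty-object variant mentioned above; the customer entry is hypothetical and simply exposes that table with no baseFilter:
📄 server.js (variant)
import map from './map';
import { json } from 'body-parser';
import express from 'express';
const db = map.sqlite('demo.db');
express()
  .use(json({ limit: '100mb' }))
  .use('/rdb', db.express({
    order: {
      baseFilter: (db, req, _res) => {
        const customerId = Number.parseInt(req.headers.authorization.split(' ')[1]); //Bearer 2
        return db.order.customerId.eq(customerId);
      }
    },
    customer: {} //no baseFilter: the whole customer table is exposed
  }))
  .listen(3000, () => console.log('Example app listening on port 3000!'));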
Transactions
All operations inside the callback run in the same transaction; if the callback throws, the transaction is rolled back, otherwise it is committed.
import map from './map';
const db = map.sqlite('demo.db');
execute();
async function execute() {
await db.transaction(async tx => {
const customer = await tx.customer.getById(1);
customer.balance = 100;
await customer.saveChanges();
throw new Error('This will rollback');
});
}
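For contrast, a minimal sketch of a transaction that commits; the balance adjustment is illustrative:
import map from './map';
const db = map.sqlite('demo.db');
execute();
async function execute() {
  await db.transaction(async tx => {
    const customer = await tx.customer.getById(1);
    customer.balance += 50;
    await customer.saveChanges();
    //no error is thrown, so the transaction commits when the callback resolves
  });
}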
Data types
The example below maps the available column data types, including uuid, date, binary, and JSON columns. jsonOf gives you a typed JSON column (via a JSDoc typedef in JavaScript, a generic in TypeScript), while json() is untyped.
📄 map.js
import rdb from 'rdb';
/**
* @typedef {Object} Pet
* @property {string} name - The name of the pet.
* @property {string} kind - The kind of pet.
*/
/** @type {Pet} */
let pet;
const map = rdb.map(x => ({
demo: x.table('demo').map(x => ({
id: x.column('id').uuid().primary().notNull(),
name: x.column('name').string(),
balance: x.column('balance').numeric(),
regularDate: x.column('regularDate').date(),
tzDate: x.column('tzDate').dateWithTimeZone(),
picture: x.column('picture').binary(),
pet: x.column('pet').jsonOf(pet), //generic
pet2: x.column('pet2').json(), //non-generic
}))
}));
📄 map.ts
import rdb from 'rdb';
interface Pet {
name: string;
kind: string;
}
const map = rdb.map(x => ({
demo: x.table('demo').map(x => ({
id: x.column('id').uuid().primary().notNull(),
name: x.column('name').string(),
balance: x.column('balance').numeric(),
regularDate: x.column('regularDate').date(),
tzDate: x.column('tzDate').dateWithTimeZone(),
picture: x.column('picture').binary(),
pet: x.column('pet').jsonOf<Pet>(), //generic
pet2: x.column('pet2').json(), //non-generic
}))
}));
Validation
You can validate column values with a custom function passed to validate(), and validate JSON columns against a JSON Schema with JSONSchema(). The validator below rejects names longer than 10 characters.
📄 map.js
import rdb from 'rdb';
/**
* @typedef {Object} Pet
* @property {string} name - The name of the pet.
* @property {string} kind - The kind of pet.
*/
/** @type {Pet} */
let pet;
let petSchema = {
"properties": {
"name": { "type": "string" },
"kind": { "type": "string" }
}
};
function validateName(value) {
if (value && value.length > 10)
throw new Error('Length cannot exceed 10 characters');
}
const map = rdb.map(x => ({
demo: x.table('demo').map(x => ({
id: x.column('id').uuid().primary().notNullExceptInsert(),
name: x.column('name').string().validate(validateName),
pet: x.column('pet').jsonOf(pet).JSONSchema(petSchema)
}))
}));
export default map;
📄 map.ts
import rdb from 'rdb';
interface Pet {
name: string;
kind: string;
}
let petSchema = {
"properties": {
"name": { "type": "string" },
"kind": { "type": "string" }
}
};
function validateName(value?: string) {
if (value && value.length > 10)
throw new Error('Length cannot exceed 10 characters');
}
const map = rdb.map(x => ({
demo: x.table('demo').map(x => ({
id: x.column('id').uuid().primary().notNullExceptInsert(),
name: x.column('name').string().validate(validateName),
pet: x.column('pet').jsonOf<Pet>().JSONSchema(petSchema)
}))
}));
export default map;
Raw SQL filters
You can also filter with raw SQL, optionally combined with regular filters; parameters are passed separately rather than interpolated into the SQL string.
import map from './map';
const db = map.sqlite('demo.db');
getRows();
async function getRows() {
const rawFilter = {
sql: 'name like ?',
parameters: ['%arry']
};
const rowWithRawFilter = await db.customer.getOne(rawFilter);
const combinedFilter = db.customer.balance.greaterThan(100).and(rawFilter);
const rowWithCombinedFilter = await db.customer.getOne(combinedFilter);
}
Raw SQL queries
You can execute raw SQL directly with db.query().
import map from './map';
const db = map.sqlite('demo.db');
getRows();
async function getRows() {
const query = {
sql: 'select * from customer where name like ?',
parameters: ['%arry']
};
const rows = await db.query(query);
}
Counting rows
Use count() to get the number of rows matching a filter.
import map from './map';
const db = map.sqlite('demo.db');
getCount();
async function getCount() {
const filter = db.order.lines.any(
line => line.product.contains('broomstick')
);
const count = await db.order.count(filter);
console.log(count); //2
}
To prevent sensitive data from being serialized and possibly leaked, you can use the serializable(false) attribute on selected columns in your database schema. Here, serializable(false) has been applied to the balance column, so that field is excluded when a record is converted to a JSON string.
📄 map.js
import rdb from 'rdb';
const map = rdb.map(x => ({
customer: x.table('customer').map(({ column }) => ({
id: column('id').numeric().primary().notNullExceptInsert(),
name: column('name').string(),
balance: column('balance').numeric().serializable(false),
isActive: column('isActive').boolean(),
}))
}));
export default map;
📄 sensitive.js
import map from './map';
const db = map.sqlite('demo.db');
getRows();
async function getRows() {
const george = await db.customer.insert({
name: 'George',
balance: 177,
isActive: true
});
console.dir(JSON.stringify(george), {depth: Infinity});
//note that balance is excluded:
//'{"id":1,"name":"George","isActive":true}'
}
Logging
You can log all SQL statements and their parameters by listening to the 'query' event.
import rdb from 'rdb';
import map from './map';
const db = map.sqlite('demo.db');
rdb.on('query', (e) => {
console.log(e.sql);
if (e.parameters.length > 0)
console.log(e.parameters);
});
updateRow();
async function updateRow() {
const order = await db.order.getById(2, {
lines: true
});
order.lines.push({
product: 'broomstick'
});
await order.saveChanges();
}
output:
BEGIN
select _order.id as s_order0,_order.orderDate as s_order1,_order.customerId as s_order2 from _order _order where _order.id=2 order by _order.id limit 1
select orderLine.id as sorderLine0,orderLine.orderId as sorderLine1,orderLine.product as sorderLine2 from orderLine orderLine where orderLine.orderId in (2) order by orderLine.id
COMMIT
BEGIN
select _order.id as s_order0,_order.orderDate as s_order1,_order.customerId as s_order2 from _order _order where _order.id=2 order by _order.id limit 1
INSERT INTO orderLine (orderId,product) VALUES (2,?)
[ 'broomstick' ]
SELECT id,orderId,product FROM orderLine WHERE rowid IN (select last_insert_rowid())
select orderLine.id as sorderLine0,orderLine.orderId as sorderLine1,orderLine.product as sorderLine2 from orderLine orderLine where orderLine.orderId in (2) order by orderLine.id
COMMIT
FAQ
What about migrations?
The allure of ORMs handling SQL migrations is undeniable. However, this sweetness can become painful: auto-generated migration scripts might not capture all nuances. Using dedicated migration tools separate from the ORM, or managing migrations manually, might be the less painful route in the long run. RDB aims for database agnosticism, and when you're dealing with migrations, you often want features specific to a database platform. I might consider adding support for (non-auto-generated) migrations at a later point, but for now it is not on the roadmap.
What about NoSQL?
Applying ORMs to NoSQL, which inherently diverges from the relational model, can lead to data representation mismatches and a loss of specialized NoSQL features. Moreover, the added ORM layer can introduce performance inefficiencies, complicate debugging, and increase maintenance concerns. Given the unique capabilities of each NoSQL system, crafting custom data access solutions tailored to specific needs often yields better results than a generalized ORM approach.
What about GraphQL?
RDB already supports remote data operations via HTTP, eliminating the primary need for integrating GraphQL. RDB's built-in safety mechanisms and tailored optimization layers ensure secure and efficient data operations, which could be compromised by adding GraphQL. Furthermore, RDB's inherent expressivity and powerful querying capabilities could be overshadowed by it. Integrating GraphQL could introduce unnecessary complexity, potential performance overhead, and maintenance challenges, especially as both systems continue to evolve. Given RDB's robust features and design, supporting GraphQL might not offer enough advantages to warrant the associated complications.