dbq 🍢

dbq = (mysql + async for batch execution flow control) / (a preference for brevity × medium naiveté).
Example: four queries, executed in parallel, four results:
db( "select * from ricks order by rickness desc limit 1"
,"select * from mortys where dim=? order by mortyness desc limit 1",["c-137"]
,"select * from gazorpazorpians where father=?",["Morty"]
,"select * from donors where recipient=? and organ=?",["Shrimply Pibbles","heart"]
,(rickest,mortyest,mortyJr,heartDonors)=>{})
Pass a function as the last input, and it will receive the query results as arguments, in the order supplied:
db("select * from user where name=?",['morty']
,"select name,volume from dims where dim=?",['c-137']
,(morty,dim)=>{})
If the last input isn't a function, a bluebird promise is returned, so it's then-able:
db("select * from jerrys where dim=?",["c-137"]
,"select * from ricks where dim=?",["J19ζ7"]
).then(([jerry,doofusRick])=>{
})
.catch(errorHandler)
It can execute queries in series or parallel (assuming you have connection pooling on).
db(
"select * from user"
,"select * from book"
,"select * from dinosaur"
).then(([users,books,dinosaurs])=>{})
db.series(
"update cat set living=false"
,"update treaty set active=true where title='Spider Peace'"
,"insert into cat2 select * from cat where living=false"
)
mysql's ?-substitution syntax also works here; put each parameter array directly after its query, only where needed:
db( "select * from grandpa where name=?",["rick"]
,"select * from council"
,"select * from morty where ?",[{alignment:"evil"}]
,"select * from dinosaur"
).then(fiddle)
Queries are often performed to retrieve single value results, not arrays of objects. If you end a query with limit 1, it will take that one result out of its result [], returning just the row {}. If you also supply only one select clause column, the result will be just that value, not a {key:value}.
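For example, a quick sketch of both behaviors (the user table here and its id/name columns are just for illustration):
db( "select * from user where id=? limit 1",[42]
,"select name from user where id=? limit 1",[42]
,(row,name)=>{
// row is the single matching row as a plain {}, not a one-element []
// name is just the bare value, since only one select column was supplied
})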
If your credentials have information_schema access, db.schemize() will query it and put a representation of the database's tables and their columns at db.table for easy referencing elsewhere in code.
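A rough usage sketch; assuming schemize() is then-able like the other calls here, and that the layout of db.table is simply whatever it builds from information_schema:
db.schemize().then(()=>{
// every table and its columns are now described at db.table,
// handy when building column lists or validating field names elsewhere in code
console.log(db.table)
})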
Any key:value passed to the db options object is Object.assigned onto db, so it will overwrite defaults. This is useful for adding your own logging. For example, I like to add an ellipsize option to it & the logger so I can see partial or full queries when debugging.
var mysql=require("mysql").createPool({
host:'x',user:'x',password:'x',database:'x'
,useConnectionPooling:true
,connectionLimit:16
,connectTimeout:15*60*1000
})
,db=require("dbq")(mysql,{
verbose:true
,log:(query,rows,queryLine,took,db)=>{
console.log(`query in ${took}s:`,query.sql)
}
})
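A variant of the setup above, as a sketch; the ellipsize helper and the 200-character cutoff are my own additions, not part of dbq:
var ellipsize=(s,n)=>s.length>n?s.slice(0,n)+"…":s // plain string helper, nothing dbq-specific
,db=require("dbq")(mysql,{
verbose:true
,ellipsize:200 // Object.assigned onto db, so the logger can read it back as db.ellipsize
,log:(query,rows,queryLine,took,db)=>{
console.log(`query in ${took}s:`,ellipsize(query.sql,db.ellipsize))
}
})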
If you want, you can pass an object and its table name into db.attachCommonMethods(model,name,done) to attach an opinionated set of methods:
insert(rows[,done])
update(rows[,done])
delete(rows[,done])
get(key[,done])
get1(key[,done])
getBy${FieldName}(key[,done])
All of these use proper ?-substitution, support promise/callback responses, and accept either a {single} row or [many] rows supplied at once.
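A usage sketch; the morty table, its dim column (and so getByDim), and the exact shape of what comes back are assumptions here, not documented specifics:
var morty={} // your model object
db.attachCommonMethods(morty,"morty",()=>{
morty.insert([{name:"Evil Morty",dim:"c-137"},{name:"Morty Jr",dim:"c-137"}]) // [many] rows at once
.then(()=>morty.getByDim("c-137")) // generated per column; assumes a dim column exists
.then(mortys=>console.log(mortys))
})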
For anything more complex, consider writing clear SQL.
- variables and temp tables across multiple connections - since parallel execution requires a connection pool, queries will run on different connections, which means locally defined variables and temporary tables have no guarantee of existing from one query to the next, since they're connection-local. So: define your variables in code, not in queries, and consider refactoring before reaching for temp tables (see the sketch after this list).
- multiple cores - if your db is running on only one core, you won't benefit meaningfully from running queries in parallel with a connection pool; with 2+ cores you will. It'd also be appropriate to only have as many connections as cores. See test.js for benchmark numbers (≈25% faster, at least); in that benchmark the db was on the same server as the app, so the local core count was relevant.
- but isn't node single-threaded? Yes! But db requests go out to a separate system: node just makes the request and receives the data. And mysql / mariadb can handle multiple queries at once, so why not supply them when you can?
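For the first point, a sketch of what "define your variables in code" looks like; the session table and the 30-day cutoff are made up:
// connection-local state like @cutoff can't be counted on across pooled queries:
// db( "set @cutoff := date_sub(now(), interval 30 day)"
// ,"delete from session where last_seen < @cutoff")
// ...so compute the value in code and pass it in with ?-substitution instead:
var cutoff=new Date(Date.now()-30*24*60*60*1000)
db("delete from session where last_seen < ?",[cutoff]).catch(console.error)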