Research
Security News
Malicious npm Packages Inject SSH Backdoors via Typosquatted Libraries
Socket’s threat research team has detected six malicious npm packages typosquatting popular libraries to insert SSH backdoors.
nano-gcloud-datastore
Google Cloud Datastore Adapter for nanoSQL
This is the fastest and easiest way to get Google Cloud Datastore into your NodeJS app.
npm i --save nano-gcloud-datastore
import { nSQL } from "nano-sql";
import { GDatastoreAdapter } from "nano-gcloud-datastore";
// OR //
const nSQL = require("nano-sql").nSQL;
const GDatastoreAdapter = require("nano-gcloud-datastore").GDatastoreAdapter;
nSQL("users") // table name
  .model([ // data model
    {key: "id", type: "uuid", props: ["pk"]}, // primary key
    {key: "name", type: "string"},
    {key: "age", type: "int", props: ["idx"]} // secondary index
  ])
  .config({
    id: "myDB", // will be used as the namespace in Google Datastore.
    cache: false, // don't use the JavaScript object cache
    mode: new GDatastoreAdapter({ // required
      projectId: "my-project",
      keyFilename: "myAuth.json"
    })
  }).connect().then(() => {
    // add a record
    return nSQL("users").query("upsert", {name: "Jeb", age: 30}).exec();
  }).then(() => {
    // get all records
    return nSQL("users").query("select").exec();
  }).then((rows) => {
    console.log(rows); // [{id: "1df52039af3d-a5c0-4ca9-89b7-0e89aad5a61e", name: "Jeb", age: 30}]
  });
That's it. Everything nanoSQL can do, you can now do with Google Cloud Datastore.
Read about nanoSQL here.
The new GDatastoreAdapter constructor accepts a single argument, an object that's documented by Google here.
There's a large number of options here, but the 90/10 properties are below:

Pass true to disable auto-increment primary keys, trie props, and local indexes. If you plan to access the datastore from multiple NodeJS instances, turn this on. IMPORTANT: If you pass true into this property, also make sure you have cache: false passed into the main config object.

Pass true to make all reads strongly consistent at the cost of query speed. The default (eventual consistency) is much more performant and perfectly acceptable in most situations.

NanoSQL handles limit/offset queries much better than Google Datastore's default behavior.
If you query using offset directly against Google Datastore, you pay a performance penalty equal to the offset length. For example, if you sent .limit(20).offset(200) directly to Google Cloud Datastore, you'd pay for 220 entity reads: Datastore actually reads 220 entities, then returns only 20 of them. Using the .range(20, 200) query modifier with nanoSQL lets you bypass this entirely. NanoSQL first grabs a copy of the table index (a fast, 1-entity-cost read), then applies limit/offset against the index. No matter how large the offset gets, you only read the number of entities in the limit argument plus 1. This is potentially hundreds of times faster.
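To make the cost arithmetic above concrete, here's a small standalone sketch in plain JavaScript (no Datastore or nanoSQL required; the function names are illustrative, not part of either API):

```javascript
// Entity-read cost of offset paging sent straight to Datastore:
// the store reads (offset + limit) entities and bills for all of them.
function datastoreOffsetCost(limit, offset) {
  return offset + limit;
}

// Cost via nanoSQL's .range(limit, offset): one read for the table
// index, then only the `limit` matching entities.
function nanoSqlRangeCost(limit) {
  return limit + 1;
}

console.log(datastoreOffsetCost(20, 200)); // 220
console.log(nanoSqlRangeCost(20));         // 21
```

Note that the .range() cost is independent of the offset, which is where the "hundreds of times faster" claim comes from as offsets grow.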
There's no way around the NoSQL-style limitations of Datastore. Use .range() queries, or .where() queries limited to primary keys or secondary indexes combined with BETWEEN, =, or IN. Venture outside this safe zone and nanoSQL has to query the entire table of data to complete your request.
Assuming we use the data model at the top under Usage, the following queries would perform very well:
// select by primary key
nSQL("users").query("select").where(["id", "=", "1df52039af3d-a5c0-4ca9-89b7-0e89aad5a61e"]).exec()
// select by secondary index
nSQL("users").query("select").where(["age", "=", 30]).exec()
// select by secondary index range
nSQL("users").query("select").where(["age", "BETWEEN", [20, 30]]).exec()
// even combined where statements are fine as long as every column being checked is a primary key or secondary index
nSQL("users").query("select").where([["age", "=", 30], "OR", ["age", "=", 35]]).exec();
The queries below will work but require nanoSQL to grab a copy of the whole table/kind and read every row/entity to discover what matches.
// must check every name column and see if it's like john.
nSQL("users").query("select").where(["name", "LIKE", "john"]).exec();
// because the name column isn't indexed this is still very slow
nSQL("users").query("select").where(["name", "=", "john"]).exec();
// If you use a non primary key/secondary indexed column anywhere in a .where() statement it does a full table scan.
nSQL("users").query("select").where([["age", "=", 30], "OR", ["name", "=", "john"]]).exec()
The takeaway here is that there's no such thing as a free lunch: while you get RDBMS-style features, they aren't magically performant on a NoSQL backend.
There are a few workarounds for this situation, most of them being pretty simple.
Grabbing data directly by its key, or by a range of keys, is the ideal situation. But secondary indexes slow down your writes the more of them you have, so you can't just put a secondary index on every single column.
Set up tables where you cache low-performance queries on a regular basis; this way the expensive queries can happen infrequently and in the background.
For example, let's say you want to show total sales by day over the past week. Make a new table called "salesByDay" with the date as the primary key, then run a daily setInterval that reads the most recent day's orders and crunches them into a single row in the "salesByDay" table. Now when you need to show this data, just pull it directly from "salesByDay": very performant and easy to do!
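The daily roll-up described above can be sketched as follows. The "orders" table, the record shapes, and the helper name are assumptions for illustration, not part of the adapter's API; only the aggregation itself is shown runnable, with the nanoSQL wiring indicated in comments:

```javascript
// Roll one day's order rows into a single cached row for a
// hypothetical "salesByDay" table (field names are illustrative).
function crunchDailySales(date, orders) {
  const total = orders
    .filter((o) => o.date === date)        // keep only that day's orders
    .reduce((sum, o) => sum + o.amount, 0); // sum their amounts
  return { id: date, total }; // primary key is the date string
}

// In a real app you'd run this on a timer and upsert the result:
// setInterval(() => {
//   nSQL("orders").query("select").where(["date", "=", today()]).exec()
//     .then((rows) => nSQL("salesByDay")
//       .query("upsert", crunchDailySales(today(), rows)).exec());
// }, 24 * 60 * 60 * 1000);

const sample = [
  { date: "2018-06-01", amount: 25 },
  { date: "2018-06-01", amount: 75 },
  { date: "2018-05-31", amount: 10 }
];
console.log(crunchDailySales("2018-06-01", sample)); // { id: "2018-06-01", total: 100 }
```

The slow full-table read happens once per day in the background; the page that displays the chart only ever does a fast primary-key lookup against "salesByDay".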
If you can set up your data so that each table/kind only has a few thousand records, a full table scan should only take a second or two to complete.
Copyright (c) 2018 Scott Lott
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
FAQs
Google Cloud Datastore Adapter for nanoSQL
The npm package nano-gcloud-datastore receives a total of 1 weekly download. As such, nano-gcloud-datastore was classified as not popular.
We found that nano-gcloud-datastore demonstrated an unhealthy version release cadence and project activity because the last version was released a year ago. It has 1 open source maintainer collaborating on the project.