NoSQL Abstraction Library
Basic CRUD and query support for NoSQL databases, allowing for portable cloud native applications
This library is not intended to create databases/tables; use Terraform/ARM/CloudFormation etc. for that
Why not just use the name 'nosql' or 'pynosql'? Because they already exist on PyPI :-)
pip install 'abnosql[dynamodb]'
pip install 'abnosql[cosmos]'
pip install 'abnosql[firestore]'
For optional client-side field-level envelope encryption:
pip install 'abnosql[aws-kms]'
pip install 'abnosql[azure-kms]'
pip install 'abnosql[gcp-kms]'
By default, abnosql does not include database dependencies. This is to facilitate packaging abnosql into AWS Lambda or Azure Functions (for example), without over-bloating the packages
from abnosql import table
import os
os.environ['ABNOSQL_DB'] = 'dynamodb'
os.environ['ABNOSQL_KEY_ATTRS'] = 'hk,rk'
item = {
    'hk': '1',
    'rk': 'a',
    'num': 5,
    'obj': {
        'foo': 'bar',
        'num': 5,
        'list': [1, 2, 3],
    },
    'list': [1, 2, 3],
    'str': 'str'
}
tb = table('mytable')
# create/replace
tb.put_item(item)
# update - using ABNOSQL_KEY_ATTRS
updated_item = tb.put_item(
    {'hk': '1', 'rk': 'a', 'str': 'STR'},
    update=True
)
assert updated_item['str'] == 'STR'
# bulk
tb.put_items([item])
# note partition/hash key should be first kwarg
assert tb.get_item(hk='1', rk='a') == item
assert tb.query({'hk': '1'})['items'] == [item]
# scan
assert tb.query()['items'] == [item]
# be careful not to use cloud specific statements!
assert tb.query_sql(
    'SELECT * FROM mytable WHERE mytable.hk = @hk AND mytable.num > @num',
    {'@hk': '1', '@num': 4}
)['items'] == [item]
tb.delete_item({'hk': '1', 'rk': 'a'})
See API Docs
query() performs a DynamoDB Query using KeyConditionExpression (if a key is supplied) and an exact match on FilterExpression if filters are supplied. For Cosmos, SQL is generated. This is the safest / most cloud agnostic way to query and probably OK for most use cases.
query_sql() performs a DynamoDB ExecuteStatement, passing in the supplied PartiQL statement. Cosmos uses the NoSQL SELECT syntax.
During mocked tests, SQLGlot is used to execute the statement, so results may differ...
Care should be taken with query_sql() not to use SQL features that are specific to any one provider (breaking the abstraction capability of using abnosql in the first place).
The Firestore plugin uses SQLGlot to parse simple SQL statements (eg AND only supported)
Beyond partition and range keys defined on the table, indexes currently have limited support within abnosql: query() allows a secondary index to be specified via the optional index kwarg.
put_item() and put_items() support an update boolean attribute, which if supplied will do an update_item() on DynamoDB, and a patch_item() on Cosmos. For this to work however, you must specify the key attribute names, either via the ABNOSQL_KEY_ATTRS env var as a comma separated list (eg where multiple tables all share a common partition/range key scheme), or as the key_attrs config item when instantiating the table, eg:
tb = table('mytable', {'key_attrs': ['hk', 'rk']})
If you don't need to do any updates and only need to do create/replace, then these key attribute names do not need to be supplied
All items being updated must actually exist first, or an exception is raised
Firestore does not return the updated item, so if this is required, use the put_get = True config variable
If the check_exists config attribute is True, then CRUD operations will raise exceptions as follows:
- get_item() raises NotFoundException if the item doesn't exist
- put_item() raises ExistsException if the item already exists
- put_item(update=True) raises NotFoundException if the item doesn't exist to update
- delete_item() raises NotFoundException if the item doesn't exist
This adds some delay overhead as abnosql must check if the item exists.
This can also be enabled by setting the environment variable ABNOSQL_CHECK_EXISTS=TRUE
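For example, a minimal sketch of enabling check_exists and handling the resulting exceptions (the exception import path below is an assumption for illustration; check the API docs for the actual location):
from abnosql import table
# NOTE: exception import path assumed for illustration
from abnosql.exceptions import ExistsException, NotFoundException

tb = table('mytable', {'check_exists': True, 'key_attrs': ['hk', 'rk']})
try:
    tb.get_item(hk='1', rk='missing')
except NotFoundException:
    pass  # item doesn't exist
try:
    tb.put_item({'hk': '1', 'rk': 'a'})
except ExistsException:
    pass  # item already exists, so create is refused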
If for some reason you need to override this behaviour once enabled for a put_item() create operation, you can pass abnosql_check_exists=False into the item (this gets popped out, so it is not persisted), which will allow the create operation to overwrite the existing item without throwing ExistsException
config can define jsonschema to validate upon create or update operations (via put_item()).
A combination of the following config attributes is supported:
- schema: jsonschema dict or yaml string, applied to both create and update
- create_schema: jsonschema dict/yaml, only on create
- update_schema: jsonschema dict/yaml, only on update
- schema_errmsg: override default error message on both create and update
- create_schema_errmsg: override default error message on create
- update_schema_errmsg: override default error message on update
You can get details of validation errors through e.to_problem() or e.detail
NOTE: key_attrs required when updating (see Updates)
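For example, a minimal sketch validating items on create and update (the schema contents here are illustrative):
from abnosql import table

schema = {
    'type': 'object',
    'properties': {
        'hk': {'type': 'string'},
        'rk': {'type': 'string'},
        'num': {'type': 'number'}
    },
    'required': ['hk', 'rk']
}
# key_attrs needed because the schema also applies on update
tb = table('mytable', {'schema': schema, 'key_attrs': ['hk', 'rk']})
tb.put_item({'hk': '1', 'rk': 'a', 'num': 5})  # validated on create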
A few methods such as get_item(), delete_item() and query() need to know the partition/hash keys as defined on the table. To avoid having to configure this or look it up from the provider, the convention used is that the first kwarg or dictionary item is the partition key, and, if supplied, the second is the range/sort key.
query() and query_sql() accept limit and next optional kwargs, and return next in the response. Use these to paginate.
This works for AWS DynamoDB & Firestore, however Azure Cosmos has a limitation with its continuation token for cross partition queries (see the Python SDK documentation). For Cosmos, abnosql appends OFFSET and LIMIT to the SQL statement if not already present, and returns next. limit defaults to 100. See the tests for examples, and the sketch below.
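A minimal pagination sketch, continuing with tb from the usage example above:
items = []
next_token = None
while True:
    # limit/next kwargs and the 'next' response key are as documented above
    resp = tb.query({'hk': '1'}, limit=100, next=next_token)
    items.extend(resp['items'])
    next_token = resp.get('next')
    if not next_token:
        break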
The table config attribute audit_user will add the following to the item being written to the database:
- createdBy - value of audit_user, added if it does not exist in the item supplied to put_item()
- createdDate - UTC ISO timestamp string, added if it does not exist
- modifiedBy - value of audit_user, always added
- modifiedDate - UTC ISO timestamp string, always added
If snake_case over CamelCase is preferred, set env var ABNOSQL_CAMELCASE = FALSE
NOTE: created* will only be added if update is not True in a put_item() operation
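For example, a minimal sketch (the user value is illustrative):
from abnosql import table

tb = table('mytable', {'audit_user': 'some.user@example.com'})
tb.put_item({'hk': '1', 'rk': 'a'})
# the stored item now has createdBy/createdDate/modifiedBy/modifiedDate
item = tb.get_item(hk='1', rk='a')
assert item['modifiedBy'] == 'some.user@example.com'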
The table config attribute audit_callback, with its value as a function callback, can be used to hook into additional audit stores. The callback function must accept the following positional args:
- table_name - table name
- dt_iso - ISO date timestamp
- operation - create, update, get or delete
- key - key of the item, serialised in =; format
- audit_user - user performing the operation
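For example, a minimal sketch of a callback (printing to stdout is illustrative; a real callback might write to a dedicated audit store):
from abnosql import table

def my_audit_callback(table_name, dt_iso, operation, key, audit_user):
    # positional args as documented above
    print(f'{dt_iso}: {audit_user} did {operation} on {table_name} ({key})')

tb = table('mytable', {
    'audit_user': 'some.user@example.com',
    'audit_callback': my_audit_callback
})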
AWS DynamoDB Streams allow Lambda functions to be triggered upon create, update and delete table operations. The event sent to the lambda (see aws docs) contains eventName and eventSourceARN, where:
- eventName - name of the event, eg INSERT, MODIFY or REMOVE (see here)
- eventSourceARN - ARN of the table
This allows a single stream processor lambda to process events from multiple tables (eg for writing into ElasticSearch)
Like DynamoDB, Azure CosmosDB supports change feeds, however the event sent to the function (currently) omits the event source (table name), and delete event names are only available if a preview change feed mode is explicitly enabled.
Because both the eventName and eventSource are ideally needed (irrespective of preview mode or not), abnosql automatically adds changeMetadata to an item during create, update and delete, eg:
item = {
    "hk": "1",
    "rk": "a",
    "changeMetadata": {
        "eventName": "INSERT",
        "eventSource": "sometable"
    }
}
Because no REMOVE event is sent at all without the preview change feed mode above, abnosql must first update the item, and then delete it. This is also needed for the eventSource / table name to be captured in the event, so unfortunately until Cosmos supports both attributes, an update is needed before a delete. A 5 second synchronous sleep is added by default between the update and delete to allow CosmosDB to send the update event (0 seconds results in no update event). This can be controlled with the ABNOSQL_COSMOS_CHANGE_META_SLEEPSECS env var (defaults to 5 seconds), and disabled by setting it to 0.
This behaviour is enabled by default, however it can be disabled by setting the ABNOSQL_COSMOS_CHANGE_META env var to FALSE, or cosmos_change_meta=False in the table config. The ABNOSQL_CAMELCASE = FALSE env var can also be used to change the attribute names used to snake_case if needed
To write an Azure Function / AWS Lambda that is able to process both DynamoDB and Cosmos events, look for changeMetadata first, and if present use that; otherwise look for eventName and eventSourceARN in the event payload, assuming it's DynamoDB (see the sketch below).
Google Firestore should support triggering functions similar to DynamoDB Streams, so changeMetadata is not required
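A hedged sketch of such a handler, assuming the Cosmos document (carrying its changeMetadata) or a single DynamoDB Streams record is passed in as shown; the exact event envelope depends on your trigger bindings:
def get_event_info(record):
    # Cosmos: abnosql adds changeMetadata to the item itself
    meta = record.get('changeMetadata')
    if meta:
        return meta['eventName'], meta['eventSource']
    # otherwise assume a DynamoDB Streams record, where the table name
    # is embedded in the ARN (arn:...:table/<name>/stream/...)
    table_name = record['eventSourceARN'].split('/')[1]
    return record['eventName'], table_name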
If configured in the table config with the kms attribute, abnosql will perform client side encryption using AWS KMS, Azure KeyVault or Google KMS.
Each attribute value defined in the config is encrypted with a 256-bit AES-GCM data key generated for each attribute value:
- aws uses the AWS Encryption SDK for Python
- azure uses python cryptography to generate an AES-GCM data key and encrypt the attribute value, then uses an RSA CMK in Azure KeyVault to wrap/unwrap (envelope encryption) the AES-GCM data key. The plugin uses the azure-keyvault-keys python SDK for the wrap/unwrap functionality of the generated data key (Azure doesn't support generate-data-key as AWS does - see also the tink issue)
- gcp uses Google Tink
All providers use a 256-bit AES-GCM generated data key with AAD/encryption context (the Azure provider uses a 96-bit nonce). AES-GCM is an authenticated symmetric encryption scheme used by AWS, Azure & Google (and Hashicorp Vault)
See also AWS Encryption Best Practices
Example config:
{
    'kms': {
        # Azure example
        'key_ids': ['https://foo.vault.azure.net/keys/bar/45e36a1024a04062bd489db0d9004d09'],
        # AWS example
        # 'key_ids': ['arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab'],
        # Google example
        # 'key_ids': ['gcp-kms://projects/p1/locations/global/keyRings/kr1/cryptoKeys/ck1'],
        'key_attrs': ['hk', 'rk'],
        'attrs': ['obj', 'str']
    }
}
Where:
- key_ids: list of AWS KMS key ARNs, Azure KeyVault identifiers (URL to RSA CMK) or Google KMS URIs. This is picked up via the ABNOSQL_KMS_KEYS env var as a comma separated list (NOTE: env var recommended, to avoid provider specific code)
- key_attrs: list of key attributes in the item from which the AAD/encryption context is set. Taken from the ABNOSQL_KEY_ATTRS env var, or the table key_attrs if defined there
- attrs: list of attribute keys to encrypt
- key_bytes: optional for azure; use your own AES-GCM key if specified, otherwise one is generated
If the kms config attribute is present, abnosql will look for the ABNOSQL_KMS provider to load the appropriate provider KMS module (eg "aws" or "azure"), and if not present, use a default depending on the database (eg cosmos will use azure, dynamodb will use aws)
In the example above, the key_attrs ['hk', 'rk'] are used to define the encryption context / AAD, and attrs ['obj', 'str'] define which attributes to encrypt/decrypt.
With an item:
{
    'hk': '1',
    'rk': 'b',
    'obj': {'foo': 'bar'},
    'str': 'foobar'
}
The encryption context / AAD is set to hk=1 and rk=b, and the obj and str values are encrypted.
If you don't want to use any of these providers, then you can use the put_item_pre and get_item_post hooks to perform your own client side encryption.
See also AWS Multi-region encryption keys, and set the ABNOSQL_KMS_KEYS env var as a comma separated list of ARNs
It is recommended to use environment variables where possible to avoid provider specific application code
If the ABNOSQL_DB env var is not set, abnosql will attempt to apply defaults based on available environment variables:
- AWS_DEFAULT_REGION - sets database to dynamodb (see aws docs)
- FUNCTIONS_WORKER_RUNTIME - sets database to cosmos (see azure docs)
- K_SERVICE - sets database to firestore (though this could also get confused if running on knative)
Set the following environment variable and use the usual AWS environment variables that boto3 uses:
- ABNOSQL_DB = "dynamodb"
Or set the boto3 session in the config:
from abnosql import table
import boto3
tb = table(
    'mytable',
    config={'session': boto3.Session()},
    database='dynamodb'
)
Set the following environment variables:
- ABNOSQL_DB = "cosmos"
- ABNOSQL_COSMOS_ACCOUNT = your database account
- ABNOSQL_COSMOS_ENDPOINT = derived from ABNOSQL_COSMOS_ACCOUNT if not set
- ABNOSQL_COSMOS_CREDENTIAL = your cosmos credential; use Azure Key Vault References if using Azure Functions. Don't set this to use DefaultAzureCredential / managed identity
- ABNOSQL_COSMOS_DATABASE = cosmos database
OR use the connection string format:
- ABNOSQL_DB = "cosmos://account@credential:database", or "cosmos://account@:database" to use managed identity (the credential could also be "DefaultAzureCredential")
Alternatively, define these in config (though ideally you want to use env vars to avoid application / environment specific code):
from abnosql import table
tb = table(
    'mytable',
    config={'account': 'foo', 'database': 'bar'},
    database='cosmos'
)
Set the following environment variables:
- ABNOSQL_DB = "firestore"
- ABNOSQL_FIRESTORE_PROJECT or GOOGLE_CLOUD_PROJECT = google cloud project
- ABNOSQL_FIRESTORE_DATABASE = Firestore database
- ABNOSQL_FIRESTORE_CREDENTIALS = oauth, optional - if using the google CLI, it's also picked up from ~/.config/gcloud/application_default_credentials.json if found
OR use the connection string format:
- ABNOSQL_DB = "firestore://project@credential:database"
Alternatively, define these in config (though ideally you want to use env vars to avoid application / environment specific code):
from abnosql import table
tb = table(
    'mytable',
    config={'project': 'foo', 'database': 'bar'},
    database='firestore'
)
See also https://cloud.google.com/firestore/docs/authentication
abnosql uses pluggy and registers in the abnosql.table namespace.
The following hooks are available:
- set_config - set config
- get_item_pre
- get_item_post - called after get_item(), can return modified data
- put_item_pre
- put_item_post
- put_items_post
- delete_item_post
See the TableSpecs and example test_hooks()
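A hedged sketch of a hook implementation with pluggy; the marker name follows the abnosql.table namespace above, but the hook signature and registration mechanics are assumptions - see the TableSpecs and test_hooks() for the authoritative definitions:
import pluggy

hookimpl = pluggy.HookimplMarker('abnosql.table')  # namespace per above

class CustomHooks:
    @hookimpl
    def get_item_post(self, table, item):  # signature assumed
        # per the docs above, get_item_post can return modified data
        item['retrieved'] = True
        return item
# registration with abnosql not shown - see test_hooks() for an example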
Use the moto package and abnosql.mocks.mock_dynamodbx
mock_dynamodbx is used for query_sql(), and is only needed if/until moto provides full PartiQL support
Example:
from abnosql.mocks import mock_dynamodbx
from moto import mock_dynamodb
@mock_dynamodb
@mock_dynamodbx  # needed for query_sql only
def test_something():
    ...
More examples in tests/test_dynamodb.py
Use the responses package (which mocks requests) and abnosql.mocks.mock_cosmos
Example:
from abnosql.mocks import mock_cosmos
import responses

@mock_cosmos
@responses.activate
def test_something():
    ...
More examples in tests/test_cosmos.py
Use python-mock-firestore and pass MockFirestore() to the table config as the client attribute, or patch get_client()
Example:
from unittest.mock import patch
from mockfirestore import MockFirestore

from abnosql import table
from abnosql.plugins.table.firestore import Table as FirestoreTable

@patch.object(FirestoreTable, 'get_client', MockFirestore)
def test_something():
    tb = table('mytable', {})
    item = tb.get_item(foo='bar')
More examples in tests/test_firestore.py
A small abnosql CLI is installed, exposing a few of the commands above
Usage: abnosql [OPTIONS] COMMAND [ARGS]...
Options:
--help Show this message and exit.
Commands:
delete-item
get-item
put-item
put-items
query
query-sql
To install its dependencies:
pip install 'abnosql[cli]'
Example querying a table in Azure Cosmos, with a cosmos.json config file containing the endpoint, credential and database:
$ abnosql query-sql mytable 'SELECT * FROM mytable' -d cosmos -c cosmos.json
partkey id num obj list str
----------- ---- ----- ------------------------------------------- --------- -----
p1 p1.1 5 {'foo': 'bar', 'num': 5, 'list': [1, 2, 3]} [1, 2, 3] str
p2 p2.1 5 {'foo': 'bar', 'num': 5, 'list': [1, 2, 3]} [1, 2, 3] str
p2 p2.2 5 {'foo': 'bar', 'num': 5, 'list': [1, 2, 3]} [1, 2, 3] str