electrodb
A library to more easily create and interact with multiple entities and hierarchical relationships in DynamoDB.
ElectroDB is a DynamoDB library that simplifies the process of defining and interacting with DynamoDB tables. It provides a high-level abstraction for defining entities, managing relationships, and performing CRUD operations, making it easier to work with DynamoDB's complex data modeling and querying capabilities.
Entity Definition
This feature allows you to define an entity with its attributes and indexes. The code sample demonstrates how to create a User entity with attributes like userId, name, and email, and a primary index.
const { Entity } = require('electrodb');
const UserEntity = new Entity({
model: {
entity: 'User',
version: '1',
service: 'UserService'
},
attributes: {
userId: { type: 'string', required: true },
name: { type: 'string', required: true },
email: { type: 'string', required: true }
},
indexes: {
primary: {
pk: { field: 'pk', composite: ['userId'] },
sk: { field: 'sk', composite: [] }
}
}
});
CRUD Operations
This feature provides methods for performing CRUD operations on the defined entities. The code sample shows how to create, read, update, and delete a user entity.
const user = await UserEntity.put({
userId: '123',
name: 'John Doe',
email: 'john.doe@example.com'
}).go();
const fetchedUser = await UserEntity.get({ userId: '123' }).go();
const updatedUser = await UserEntity.update({ userId: '123' })
.set({ name: 'Jane Doe' })
.go();
const deletedUser = await UserEntity.delete({ userId: '123' }).go();
Querying
This feature allows you to perform complex queries on your entities. The code sample demonstrates how to query the User entity using the primary index.
const users = await UserEntity.query.primary({ userId: '123' }).go();
Relationships
This feature allows you to define and manage relationships between different entities. The code sample shows how to fetch a user along with their related orders using the Service class.
const { Service } = require('electrodb');
const UserService = new Service({
User: UserEntity,
Order: OrderEntity
});
const userWithOrders = await UserService.User.get({ userId: '123' }).include(OrderEntity).go();
DynamoDB Toolbox is a set of tools that makes it easier to work with Amazon DynamoDB. It provides a simple and consistent way to define and interact with DynamoDB tables and items. Compared to ElectroDB, DynamoDB Toolbox offers a more lightweight and flexible approach but may require more manual setup for complex data models.
The AWS SDK for JavaScript provides a comprehensive set of tools for interacting with AWS services, including DynamoDB. While it offers low-level access to DynamoDB's API, it lacks the high-level abstractions and convenience features provided by ElectroDB, making it more suitable for developers who need fine-grained control over their DynamoDB interactions.
Dynogels is a DynamoDB data mapper for Node.js that simplifies the process of defining and interacting with DynamoDB tables. It offers a similar high-level abstraction as ElectroDB but is less actively maintained and may not support some of the latest DynamoDB features.
ElectroDB is a DynamoDB library to ease the use of having multiple entities and complex hierarchical relationships in a single DynamoDB table.
Please submit issues/feedback or reach out on Twitter @tinkertamper.
For existing users, check out the CHANGELOG and/or the section Version 2 Migration to learn more about the recent move to 2.0.0 and the changes necessary to move to the newest version.
Try out and share ElectroDB Models, Services, and Single Table Design at electrodb.fun
- Compose complex, readable filters without having to hand-write ExpressionAttributeNames, ExpressionAttributeValues, and FilterExpressions.
- Compose update operations without having to format ExpressionAttributeNames, ExpressionAttributeValues, and UpdateExpressions.
- Use the .find() or .match() methods to dynamically and efficiently query based on defined sort key structures.
- Execute queries against your Entities, Services, and Models directly from the command line.
- Stand up a REST server to interact with your Entities, Services, and Models for easier prototyping.
Turn This
tasks
.patch({
team: "core",
task: "45-662",
project: "backend"
})
.set({ status: "open" })
.add({ points: 5 })
.append({
comments: [{
user: "janet",
body: "This seems half-baked."
}]
})
.where(( {status}, {eq} ) => eq(status, "in-progress"))
.go();
Into This
{
"UpdateExpression": "SET #status = :status_u0, #points = #points + :points_u0, #comments = list_append(#comments, :comments_u0), #updatedAt = :updatedAt_u0, #gsi1sk = :gsi1sk_u0",
"ExpressionAttributeNames": {
"#status": "status",
"#points": "points",
"#comments": "comments",
"#updatedAt": "updatedAt",
"#gsi1sk": "gsi1sk"
},
"ExpressionAttributeValues": {
":status0": "in-progress",
":status_u0": "open",
":points_u0": 5,
":comments_u0": [
{
"user": "janet",
"body": "This seems half-baked."
}
],
":updatedAt_u0": 1630977029015,
":gsi1sk_u0": "$assignments#tasks_1#status_open"
},
"TableName": "your_table_name",
"Key": {
"pk": "$taskapp#team_core",
"sk": "$tasks_1#project_backend#task_45-662"
},
"ConditionExpression": "attribute_exists(pk) AND attribute_exists(sk) AND #status = :status0"
}
ElectroDB focuses on simplifying the process of modeling, enforcing data constraints, querying across entities, and formatting complex DocumentClient parameters. Three important design considerations were made during the development of ElectroDB.
Install from NPM
npm install electrodb --save
Require/import Entity
and/or Service
from electrodb
:
const { Entity, Service } = require("electrodb");
// or
import { Entity, Service } from "electrodb";
To see full examples of ElectroDB in action, go to the Examples section.
Entity
allows you to create separate and individual business objects in a DynamoDB table. When queried, your results will not include other Entities that also exist in the same table. This allows you to easily achieve single table design as recommended by AWS. For more detail, read Entities.
Service
allows you to build relationships across Entities. A service imports Entity Models, builds individual Entities, and creates Collections to allow cross Entity querying. For more detail, read Services.
You can use Entities independent of Services; you do not need to import models into a Service to use them individually. However, if you intend to make queries that join or span multiple Entities, you will need to use a Service.
If you're looking to get started right away with ElectroDB, check out the code examples in the /examples directory, or the guided examples below in this document. Additionally, the section Building Queries shows examples of, and describes, every method available in ElectroDB. If you use TypeScript, the section TypeScript contains useful exported types to use in your project.
In ElectroDB an Entity represents a single business object. For example, in a simple task tracking application, one Entity could represent an Employee or a Task that is assigned to an employee.
Require or import Entity
from electrodb
:
const { Entity } = require("electrodb");
// or
import { Entity } from "electrodb";
When using TypeScript, for strong type checking, be sure to either add your model as an object literal to the Entity constructor or create your model using const assertions with the
as const
syntax.
In ElectroDB a Service represents a collection of related Entities. Services allow you to build queries that span across Entities. Similar to Entities, Services can coexist on a single table without collision. You can use Entities independent of Services; you do not need to import models into a Service to use them individually. However, you do need to use a Service if you intend to make queries that join multiple Entities.
Require:
const { Service } = require("electrodb");
// or
import { Service } from "electrodb";
Previously it was possible to generate type definition files (.d.ts) for your Models, Entities, and Services with the Electro CLI. New with version 0.10.0 is TypeScript support for Entities and Services.
As of this writing, this functionality is still a work in progress, and enforcement of some of ElectroDB's query constraints has not yet been written into the type checks. Most notably, the following constraints are not yet enforced by the type checker, but are enforced at query runtime:

- When performing a put or update type operation that impacts a composite attribute of a secondary index, ElectroDB performs a check at runtime to ensure all composite attributes of that key are included. This is detailed more in the section Composite Attribute and Index Considerations.
- The params method does not yet return strict types.
- The raw and includeKeys query options do not yet impact the returned types.

If you experience any issues using TypeScript with ElectroDB, your feedback is very important; please create a GitHub issue and it can be addressed.
See the section Exported TypeScript Types to read more about the useful types exported from ElectroDB.
New with version 0.10.0 is TypeScript support. To ensure accurate types, TypeScript users should create their services by passing an object literal or const object that maps Entity alias names to Entity instances.
const table = "my_table_name";
const employees = new Entity(EmployeesModel, { client, table });
const tasks = new Entity(TasksModel, { client, table });
const TaskApp = new Service({employees, tasks});
The property name you assign the entity will then be the "alias", or name, by which you can reference that entity through the Service. Aliases can be useful if you are building a service with multiple versions of the same entity or wish to change the reference name of an entity without impacting the schema/key names of that entity.
Services take an optional second parameter, similar to Entities, with a client and table. Using this constructor interface, the Service will use the client and table values from the joined entities, if they were provided, or can be passed values to override the client or table name on the individual entities.
While not yet typed, this pattern will also accept Models, or a mix of Entities and Models, in the same object literal format.
let TaskApp = new Service({
personnel: EmployeesModel, // available at TaskApp.entities.personnel
directives: TasksModel, // available at TaskApp.entities.directives
});
When joining a Model/Entity to a Service, ElectroDB will perform a number of validations to ensure that the Entity conforms to expectations collectively established by all joined Entities.
Create an Entity's schema, as shown in the example below.
const DynamoDB = require("aws-sdk/clients/dynamodb");
const {Entity, Service} = require("electrodb");
const client = new DynamoDB.DocumentClient();
const EmployeesModel = {
model: {
entity: "employees",
version: "1",
service: "taskapp",
},
attributes: {
employee: {
type: "string",
default: () => uuid(),
},
firstName: {
type: "string",
required: true,
},
lastName: {
type: "string",
required: true,
},
office: {
type: "string",
required: true,
},
title: {
type: "string",
required: true,
},
team: {
type: ["development", "marketing", "finance", "product", "cool cats and kittens"],
required: true,
},
salary: {
type: "string",
required: true,
},
manager: {
type: "string",
},
dateHired: {
type: "string",
validate: /^\d{4}-\d{2}-\d{2}$/
},
birthday: {
type: "string",
validate: /^\d{4}-\d{2}-\d{2}$/
},
},
indexes: {
employee: {
pk: {
field: "pk",
composite: ["employee"],
},
sk: {
field: "sk",
composite: [],
},
},
coworkers: {
index: "gsi1pk-gsi1sk-index",
collection: "workplaces",
pk: {
field: "gsi1pk",
composite: ["office"],
},
sk: {
field: "gsi1sk",
composite: ["team", "title", "employee"],
},
},
teams: {
index: "gsi2pk-gsi2sk-index",
pk: {
field: "gsi2pk",
composite: ["team"],
},
sk: {
field: "gsi2sk",
composite: ["title", "salary", "employee"],
},
},
employeeLookup: {
collection: "assignments",
index: "gsi3pk-gsi3sk-index",
pk: {
field: "gsi3pk",
composite: ["employee"],
},
sk: {
field: "gsi3sk",
composite: [],
},
},
roles: {
index: "gsi4pk-gsi4sk-index",
pk: {
field: "gsi4pk",
composite: ["title"],
},
sk: {
field: "gsi4sk",
composite: ["salary", "employee"],
},
},
directReports: {
index: "gsi5pk-gsi5sk-index",
pk: {
field: "gsi5pk",
composite: ["manager"],
},
sk: {
field: "gsi5sk",
composite: ["team", "office", "employee"],
},
},
},
};
const TasksModel = {
model: {
entity: "tasks",
version: "1",
service: "taskapp",
},
attributes: {
task: {
type: "string",
default: () => uuid(),
},
project: {
type: "string",
},
employee: {
type: "string",
},
description: {
type: "string",
},
},
indexes: {
task: {
pk: {
field: "pk",
composite: ["task"],
},
sk: {
field: "sk",
composite: ["project", "employee"],
},
},
project: {
index: "gsi1pk-gsi1sk-index",
pk: {
field: "gsi1pk",
composite: ["project"],
},
sk: {
field: "gsi1sk",
composite: ["employee", "task"],
},
},
assigned: {
collection: "assignments",
index: "gsi3pk-gsi3sk-index",
pk: {
field: "gsi3pk",
composite: ["employee"],
},
sk: {
field: "gsi3sk",
composite: ["project", "task"],
},
},
},
};
Property | Description |
---|---|
model.service | Name of the application using the entity, used to namespace all entities |
model.entity | Name of the entity that the schema represents |
model.version | The version number of the schema, used to namespace keys |
attributes | An object containing each attribute that makes up the schema |
indexes | An object containing table indexes, including the values for the table's default Partition Key and Sort Key |
Optional second parameter
Property | Description |
---|---|
table | The name of the dynamodb table in aws. |
client | (optional) An instance of the docClient from the aws-sdk for use when querying a DynamoDB table. This is optional if you wish to only use the params functionality, but required if you actually need to query against a database. |
Attributes define an Entity record. The AttributeName
represents the value your code will use to represent an attribute.
Pro-Tip: Using the field property, you can map an AttributeName to a different field name in your table. This can be useful to utilize existing tables, existing models, or even to reduce record sizes via shorter field names. For example, you may refer to an attribute as organization but want to save the attribute with a field name of org in DynamoDB.
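To make the tip above concrete, here is a minimal sketch of such an attribute definition. The organization/org names are just the example names from the tip, not part of any real schema:

```javascript
// Hypothetical attribute map illustrating the `field` property.
// In code the attribute is referenced as "organization", but DynamoDB
// stores it under the shorter field name "org".
const attributes = {
  organization: {
    type: "string",
    required: true,
    field: "org", // persisted field name in the table
  },
};
```

Your code continues to read and write organization; only the stored item uses org.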
Use the expanded syntax to build out more robust attribute options.
attributes: {
<AttributeName>: {
type: "string" | "number" | "boolean" | "list" | "map" | "set" | "any" | ReadonlyArray<string>;
required?: boolean;
default?: <type> | (() => <type>);
validate?: RegExp | ((value: <type>) => void | string);
field?: string;
readOnly?: boolean;
label?: string;
cast?: "number"|"string"|"boolean";
get?: (attribute: <type>, schema: any) => <type> | void | undefined;
set?: (attribute?: <type>, schema?: any) => <type> | void | undefined;
watch?: "*" | string[];
padding?: {
length: number;
char: string;
}
}
}
NOTE: When using get/set in TypeScript, be sure to use the ?: syntax to denote an optional attribute on set.
Property | Type | Required | Types | Description |
---|---|---|---|---|
type | string , ReadonlyArray<string> , string[] | yes | all | Accepts the values: "string" , "number" "boolean" , "map" , "list" , "set" , an array of strings representing a finite list of acceptable values: ["option1", "option2", "option3"] , or "any" which disables value type checking on that attribute. |
required | boolean | no | all | Flag an attribute as required to be present when creating a record. This attribute also acts as a type of NOT NULL flag, preventing it from being removed directly. When applied to nested properties, be mindful that default map values can cause required child attributes to fail validation. |
hidden | boolean | no | all | Flag an attribute as hidden to remove the property from results before they are returned. |
default | value , () => value | no | all | Either the default value itself or a synchronous function that returns the desired value. Applied before set and before required check. In the case of nested attributes, default values will apply defaults to children attributes until an undefined value is reached |
validate | RegExp , (value: any) => void , (value: any) => string | no | all | Either regex or a synchronous callback to return an error string (will result in exception using the string as the error's message), or thrown exception in the event of an error. |
field | string | no | all | The name of the attribute as it exists in DynamoDB, if named differently in the schema attributes. Defaults to the AttributeName as defined in the schema. |
readOnly | boolean | no | all | Prevents an attribute from being updated after the record has been created. Attributes used in the composition of the table's primary Partition Key and Sort Key are read-only by default. The one exception to readOnly is for properties that also use the watch property, read attribute watching for more detail. |
label | string | no | all | Used in index key composition to prefix key composite attributes. By default, the AttributeName is used as the label. |
padding | { length: number; char: string; } | no | string, number | Similar to label , this property only impacts the attribute's value during index key composition. Padding allows you to define a string pattern to left pad your attribute when ElectroDB builds your partition or sort key. This can be helpful to implementing zero-padding patterns with numbers and strings in sort keys. Note, this will not impact your attribute's stored value, if you want to transform the attribute's field value, use the set callback described below. |
set | (attribute, schema) => value | no | all | A synchronous callback allowing you to apply changes to a value before it is set in params or applied to the database. The first argument is the value passed to ElectroDB; the second is an object with the other attributes passed on that update/put. |
get | (attribute, schema) => value | no | all | A synchronous callback allowing you to apply changes to a value after it is retrieved from the database. The first argument is the value retrieved from the database; the second is an object with the other attributes retrieved. |
watch | Attribute[], "*" | no | root-only | Define other attributes that will always trigger your attribute's getter and setter callback after their getter/setter callbacks are executed. Only available on root level attributes. |
properties | {[key: string]: Attribute} | yes* | map | Define the properties available on a "map" attribute, required if your attribute is a map. Syntax for map properties is the same as root level attributes. |
items | Attribute | yes* | list | Define the attribute type your list attribute will contain, required if your attribute is a list. Syntax for list items is the same as a single attribute. |
items | "string" | "number" | yes* | set | Define the type of value your set attribute will contain; a set may only contain items of type "string" or "number". |
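As an illustration of the padding property described in the table above, here is a small sketch (not ElectroDB's internal code) of the left-padding it applies during key composition:

```javascript
// Sketch of the zero-padding behavior described by `padding`.
// A config of { length: 5, char: "0" } left-pads the attribute's value
// when it is written into a partition or sort key; the stored attribute
// value itself is unchanged.
const padding = { length: 5, char: "0" };

function padForKey(value, { length, char }) {
  return String(value).padStart(length, char);
}

// "00007" < "00042" < "00100" compare correctly as strings, which is
// the point of zero-padding numeric values in sort keys.
const segment = padForKey(42, padding); // → "00042"
```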
When using TypeScript, if you wish to also enforce this type, make sure to use the as const syntax. If TypeScript is not told this array is readonly, even when your model is passed directly to the Entity constructor, it will not resolve the unique values within that array.
This may be desirable, however, as enforcing the type value can require consumers of your model to do more work to resolve the type beyond just the type string
.
NOTE: Regardless of using TypeScript or JavaScript, ElectroDB will enforce values supplied match the supplied array of values at runtime.
The following example shows the differences in how TypeScript may enforce your enum value:
attributes: {
myEnumAttribute1: {
type: ["option1", "option2", "option3"] // TypeScript enforces as `string[]`
},
myEnumAttribute2: {
type: ["option1", "option2", "option3"] as const // TypeScript enforces as `"option1" | "option2" | "option3" | undefined`
},
myEnumAttribute3: {
required: true,
type: ["option1", "option2", "option3"] as const // TypeScript enforces as `"option1" | "option2" | "option3"`
}
}
Map attributes leverage DynamoDB's native support for object-like structures. The attributes within a Map are defined under the properties
property; a syntax that mirrors the syntax used to define root level attributes. You are not limited in the types of attributes you can nest inside a map attribute.
attributes: {
myMapAttribute: {
type: "map",
properties: {
myStringAttribute: {
type: "string"
},
myNumberAttribute: {
type: "number"
}
}
}
}
List attributes model array-like structures with DynamoDB's List type. The elements of a List attribute are defined using the items
property. Similar to Map properties, ElectroDB does not restrict the types of items that can be used with a list.
attributes: {
myStringList: {
type: "list",
items: {
type: "string"
},
},
myMapList: {
type: "list",
items: {
type: "map",
properties: {
myStringAttribute: {
type: "string"
},
myNumberAttribute: {
type: "number"
}
}
}
}
}
The Set attribute is arguably DynamoDB's most powerful type. ElectroDB supports String and Number Sets using the items
property set as either "string"
, "number"
, or an array of strings or numbers. When a ReadonlyArray is provided, ElectroDB will enforce those values as a finite list of acceptable values, similar to an Enum Attribute.
In addition to having the same modeling benefits you get with other attributes, ElectroDB also simplifies the use of Sets by removing the need to use DynamoDB's special createSet
class to work with Sets. ElectroDB Set Attributes accept Arrays, JavaScript native Sets, and objects from createSet
as values. ElectroDB will manage the casting of values to a DynamoDB Set value prior to saving and ElectroDB will also convert Sets back to JavaScript arrays on retrieval.
NOTE: If you are using TypeScript, Sets are currently typed as Arrays to simplify the type system. Again, ElectroDB will handle the conversion of these Arrays without the need to use
client.createSet()
.
attributes: {
myStringSet: {
type: "set",
items: "string"
},
myNumberSet: {
type: "set",
items: "number"
},
myEnumStringSet: {
type: "set",
items: ["RED", "GREEN", "BLUE"] as const // electrodb will only accept the included values "RED", "GREEN", and/or "BLUE"
},
myEnumNumberSet: {
type: "set",
items: [1, 2, 3] as const // electrodb will only accept the included values 1, 2, and/or 3
}
}
Using get
and set
on an attribute can allow you to apply logic before and just after modifying or retrieving a field from DynamoDB. Both callbacks should be pure synchronous functions and may be invoked multiple times during one query.
The first argument in an attribute's get or set callback is the value received in the query. The second argument, called "item", is an object containing the values of other attributes on the item as it was given or retrieved. If your attribute uses watch, the getter or setter of the attribute being watched will be invoked before your getter or setter, and the updated value will be on the "item" argument instead of the original.
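For example, a hypothetical attribute (not from the models in this document) could store a price in cents while exposing dollars to application code; both callbacks are pure and synchronous:

```javascript
// Hypothetical "price" attribute: set converts dollars -> cents before
// the value is written; get converts cents -> dollars after retrieval.
// Both callbacks tolerate undefined, since either may fire with no
// value present (e.g. when triggered via `watch`).
const priceInCents = {
  type: "number",
  set: (value) => (value === undefined ? undefined : Math.round(value * 100)),
  get: (value) => (value === undefined ? undefined : value / 100),
};
```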
NOTE: Using getters/setters on Composite Attributes is not recommended without considering the consequences of how that will impact your keys. When a Composite Attribute is supplied for a new record via a put or create operation, or is changed via a patch or update operation, the Attribute's set callback will be invoked prior to formatting/building your record's keys when creating or updating a record.
ElectroDB invokes an Attribute's get method in the following circumstances:

- When a put or create operation is performed, attribute getters are applied against the object originally received and returned.

ElectroDB invokes an Attribute's set callback in the following circumstances:

- When a create or put operation is performed.
- When a patch or update operation is performed.

NOTE: As of ElectroDB 1.3.0, the watch property is only possible for root level attributes. Watch is currently not supported for nested attributes like properties on a "map" or items of a "list".
Attribute watching is a powerful feature in ElectroDB that can be used to solve many unique challenges with DynamoDB. In short, you can define a column to have its getter/setter callbacks called whenever another attribute's getter or setter callbacks are called. If you haven't read the section on Attribute Getters and Setters, it will provide you with more context about when an attribute's mutation callbacks are called.
Because DynamoDB allows for a flexible schema, and ElectroDB allows for optional attributes, it is possible for items belonging to an entity to not have all attributes when setting or getting records. Sometimes values or changes to other attributes will require corresponding changes to another attribute. Sometimes, to fully leverage some advanced model denormalization or query access patterns, it is necessary to duplicate some attribute values with similar or identical values. This functionality has many uses; below are just a few examples of how you can use watch
:
NOTE: Using the watch property impacts the order in which getters and setters are called. You cannot watch another attribute that also uses watch, so ElectroDB first invokes the getters or setters of attributes without the watch property, then subsequently invokes the getters or setters of attributes that use watch.
myAttr: {
type: "string",
watch: ["otherAttr"],
set: (myAttr, {otherAttr}) => {
// Whenever "myAttr" or "otherAttr" are updated from an `update` or `patch` operation, this callback will be fired.
// Note: myAttr or otherAttr could be independently undefined because either attribute could have triggered this callback
},
get: (myAttr, {otherAttr}) => {
// Whenever "myAttr" or "otherAttr" are retrieved from a `query` or `get` operation, this callback will be fired.
// Note: myAttr or otherAttr could be independently undefined because either attribute could have triggered this callback.
}
}
If your attribute needs to watch for any changes to an item, you can model this by supplying the watch property a string value of "*":
myAttr: {
type: "string",
watch: "*", // <- "watch all"
set: (myAttr, allAttributes) => {
// Whenever an `update` or `patch` operation is performed, this callback will be fired.
// Note: myAttr or the attributes under `allAttributes` could be independently undefined because either attribute could have triggered this callback
},
get: (myAttr, allAttributes) => {
// Whenever a `query` or `get` operation is performed, this callback will be fired.
// Note: myAttr or the attributes under `allAttributes` could be independently undefined because either attribute could have triggered this callback
}
}
Example 1 - A calculated attribute that depends on the value of another attribute:
In this example, we have an attribute "fee"
that needs to be updated any time an item's "price"
attribute is updated. The attribute "fee"
uses watch
to have its setter callback called any time "price"
is updated via a put
, create
, update
, or patch
operation.
{
model: {
entity: "products",
service: "estimator",
version: "1"
},
attributes: {
product: {
type: "string"
},
price: {
type: "number",
required: true
},
fee: {
type: "number",
watch: ["price"],
set: (_, {price}) => {
return price * .2;
}
}
},
indexes: {
pricing: {
pk: {
field: "pk",
composite: ["product"]
},
sk: {
field: "sk",
composite: []
}
}
}
}
Example 2 - Making a virtual attribute that never persists to the database:
In this example we have an attribute "displayPrice" that needs its getter called anytime an item's "price" attribute is retrieved. The attribute "displayPrice" uses watch to return a formatted price string whenever an item with a "price" attribute is queried. Additionally, "displayPrice" always returns undefined from its setter callback to ensure that it will never write data back to the table.
{
model: {
entity: "services",
service: "costEstimator",
version: "1"
},
attributes: {
service: {
type: "string"
},
price: {
type: "number",
required: true
},
displayPrice: {
type: "string",
watch: ["price"],
get: (_, {price}) => {
return "$" + price;
},
set: () => undefined
}
},
indexes: {
pricing: {
pk: {
field: "pk",
composite: ["service"]
},
sk: {
field: "sk",
composite: []
}
}
}
}
Example 3 - Creating a more filter-friendly version of an attribute without impacting the original attribute:
In this example we have an attribute "descriptionSearch" which will help our users easily filter for transactions by "description". To ensure our filters will not take into account a description's character casing, descriptionSearch duplicates the value of "description" so it can be used in filters without impacting the original "description" value. Without ElectroDB's watch functionality, accomplishing this would require duplicating this logic or permanently modifying the property itself. Additionally, the "descriptionSearch" attribute uses hidden: true to ensure this value will not be presented to the user.
{
model: {
entity: "transaction",
service: "bank",
version: "1"
},
attributes: {
accountNumber: {
type: "string"
},
transactionId: {
type: "string"
},
amount: {
type: "number",
},
description: {
type: "string",
},
descriptionSearch: {
type: "string",
hidden: true,
watch: ["description"],
set: (_, {description}) => {
if (typeof description === "string") {
return description.toLowerCase();
}
}
}
},
indexes: {
transactions: {
pk: {
field: "pk",
composite: ["accountNumber"]
},
sk: {
field: "sk",
composite: ["transactionId"]
}
}
}
}
Example 4 - Creating an updatedAt
property:
In this example we can easily create both updatedAt
and createdAt
attributes on our model. createdAt
will use ElectroDB's set
and readOnly
attribute properties, while updatedAt
will make use of readOnly
, and watch
with the "watchAll" syntax: {watch: "*"}
. By supplying an asterisk, instead of an array of attribute names, attributes can be defined to watch all changes to all attributes.
Using watch
in conjunction with readOnly
is another powerful modeling technique. This combination allows you to model attributes that can only be modified via the model and not via the user. This is useful for attributes that need to be locked down and/or strictly calculated.
Notable about this example is that both updatedAt and createdAt use the set property without using its arguments. The readOnly property only prevents modification of an attribute via update and patch. By disregarding the arguments passed to set, the updatedAt and createdAt attributes are effectively locked down from user influence/manipulation.
{
model: {
entity: "transaction",
service: "bank",
version: "1"
},
attributes: {
accountNumber: {
type: "string"
},
transactionId: {
type: "string"
},
description: {
type: "string",
},
createdAt: {
type: "number",
readOnly: true,
set: () => Date.now()
},
updatedAt: {
type: "number",
readOnly: true,
watch: "*",
set: () => Date.now()
},
},
indexes: {
transactions: {
pk: {
field: "pk",
composite: ["accountNumber"]
},
sk: {
field: "sk",
composite: ["transactionId"]
}
}
}
}
See: Attribute Watching (Example 1).
See: Attribute Watching (Example 2).
See: Attribute Watching (Example 4).
The validate property allows for multiple function/type signatures. Here are the different combinations ElectroDB supports:
signature | behavior |
---|---|
Regexp | ElectroDB will call .test(val) on the provided regex with the value passed to this attribute |
(value: T) => string | If a string value with length is returned, the text will be considered the reason the value is invalid. ElectroDB will generate a new exception using this text as the message. |
(value: T) => boolean | If a boolean value is returned, true or truthy values will signify that a value is invalid, while false or falsey values will be considered valid. |
(value: T) => void | If a void or undefined value is returned, the value will be treated as valid; in this scenario you can throw an Error yourself to interrupt the query. |
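The three signatures above can be sketched as standalone values and functions. The validator names and rules here are purely illustrative, not from ElectroDB:

```javascript
// RegExp form: ElectroDB calls .test(value) on the expression for you.
// (No /g flag: a global RegExp keeps lastIndex state across .test() calls.)
const dateValidator = /^\d{4}-\d{2}-\d{2}$/;

// String-returning form: a non-empty string becomes the error message;
// an empty string (or undefined) means the value is valid.
const validateEmail = (value) =>
  value.includes("@") ? "" : "value must contain an '@' character";

// Boolean form: truthy means INVALID, falsey means valid.
const isInvalidSalary = (value) => value < 0;
```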
When using ElectroDB, indexes are referenced by their `AccessPatternName`. This allows you to maintain generic index names on your DynamoDB table while referencing domain-specific names in your ElectroDB Entity. These are referred to as "Access Patterns".
All DynamoDB tables start with at least a Partition Key and an optional Sort Key; this can be referred to as the "Table Index". The `indexes` object requires at least the definition of this Table Index Partition Key and (if applicable) Sort Key.
In your model, the Table Index is expressed as an Access Pattern without an `index` property. For Secondary Indexes (both GSIs and LSIs), use the `index` property to define the name of the index as defined on your DynamoDB table.
NOTE: The `index` property is simply a mapping of your AccessPatternName to your DynamoDB index name. ElectroDB does not create or alter DynamoDB tables, so your indexes will need to be created prior to use.
Within these Access Patterns, you define the Partition Key and (optionally) Sort Keys that are present on your DynamoDB table, and map the key's name on the table with the `field` property.
indexes: {
[AccessPatternName]: {
index?: string;
collection?: string | string[];
type?: 'isolated' | 'clustered';
pk: {
field: string;
composite: AttributeName[];
template?: string;
},
sk?: {
field: string;
composite: AttributeName[];
template?: string;
},
}
}
Property | Type | Required | Description |
---|---|---|---|
index | string | no | Required when the index defined is a Global/Local Secondary Index; omitted for the table's primary index. |
collection | string, string[] | no | Used when models are joined to a `Service`. When two entities share a `collection` on the same `index`, they can be queried with one request to DynamoDB. The name of the collection should represent what the query would return as a pseudo `Entity`. (See Collections below for more on this functionality.) |
type | isolated, clustered | no | Allows you to optimize your index for either entity isolation (high volume of records per partition) or [entity relationships](#clustered-indexes) (high relationship density per partition). When omitted, ElectroDB defaults to `isolated`. |
pk | object | yes | Configuration for the pk of that index or table |
pk.composite | string[] | yes | An array that represents the order in which composite attributes are concatenated to build the key (see Composite Attributes below for more on this functionality). |
pk.template | string | no | A string that represents the template from which attributes are composed to form a key (see Composite Attribute Templates below for more on this functionality). |
pk.field | string | yes | The name of the index Partition Key field as it exists in DynamoDB, if named differently than in the schema attributes. |
pk.casing | default, upper, lower, none | no | Choose a case for ElectroDB to convert your keys to, to avoid casing pitfalls when querying data. Default: lower. |
sk | object | no | Configuration for the sk of that index or table |
sk.composite | string[] | no | Either an array that represents the order in which composite attributes are concatenated to build the key, or a string for a composite attribute template (see Composite Attributes below for more on this functionality). |
sk.template | string | no | A string that represents the template from which attributes are composed to form a key (see Composite Attribute Templates below for more on this functionality). |
sk.field | string | yes | The name of the index Sort Key field as it exists in DynamoDB, if named differently than in the schema attributes. |
sk.casing | default, upper, lower, none | no | Choose a case for ElectroDB to convert your keys to, to avoid casing pitfalls when querying data. Default: lower. |
ElectroDB helps manage your key structure and works to abstract away the details of how your keys are created/formatted. Depending on your unique data set, you may need ElectroDB to optimize your index for either entity isolation (i.e. a high volume of records per partition) or [entity relationships](#clustered-indexes) (i.e. a high relationship density per partition).
This option changes how ElectroDB formats your keys for storage, so it is an important consideration to make early in your modeling phase. As a result, this choice cannot simply be walked back without a migration. The choice between `clustered` and `isolated` depends wholly on your unique dataset and access patterns.
NOTE: You can use Collections with both `isolated` and `clustered` indexes. Isolated indexes are limited to only querying across the Partition Key, while clustered indexes can also leverage the Sort Key.
By default, and when omitted, ElectroDB will create your index as an `isolated` index. Isolated indexes optimize your index structure for faster and more efficient retrieval of items within an individual Entity.
Choose `isolated` if you have strong access pattern requirements to retrieve records for only your entity on that index. While an `isolated` index is more limited in its ability to be used in a collection, it can perform better than a `clustered` index if a collection contains a highly unequal distribution of entities.
Don't choose `isolated` if the primary use-case for your index is to query across entities -- this index type limits the extent to which indexes can be leveraged to improve query efficiency.
When your index type is defined as `clustered`, ElectroDB will optimize your index for relationships within a partition. Clustered indexes optimize your index structure for more homogeneous partitions, which allows for more efficient queries across multiple entities.
Choose `clustered` if you have a high degree of grouped or similar data that needs to be frequently accessed together. This index type works best in collections when member entities are more evenly distributed within a partition.
Don't choose `clustered` if your need to query across entities is secondary to the index's primary purpose -- this index type limits the efficiency of querying your individual entity.
Indexes without Sort Keys should be expressed as an index without an `sk` property at all. Indexes without an `sk` cannot have a collection; see Collections for more detail.
NOTE: It is generally recommended to always use Sort Keys when using ElectroDB, as they allow for more advanced query opportunities. Even if your model doesn't need an additional property to define a unique record, having an `sk` with no defined composite attributes (e.g. an empty array) still opens the door to many more query opportunities, like collections.
// ElectroDB interprets as index *not having* an SK.
{
indexes: {
myIndex: {
pk: {
field: "pk",
composite: ["id"]
}
}
}
}
Indexes with Sort Keys should be expressed as an index with an `sk` property. If you don't wish to use the Sort Key in your model, but it does exist on the table, simply use an empty array for the `composite` property. An empty array is still very useful, and opens the door to more query opportunities and access patterns, like collections.
// ElectroDB interprets as index *having* an SK, but this model doesn't assign any composite attributes to it.
{
indexes: {
myIndex: {
pk: {
field: "pk",
composite: ["id"]
},
sk: {
field: "sk",
composite: []
}
}
}
}
If you have an index where the Partition or Sort Keys are expected to be numeric values, you can accomplish this with the `template` property on the index that requires numeric keys. Define the attribute used in the composite template as type "number", and then create a template string with only the attribute's name.
For example, this model defines both the Partition and Sort Key as numeric:
const schema = {
model: {
entity: "numeric",
service: "example",
version: "1"
},
attributes: {
number1: {
type: "number" // defined as number
},
number2: {
type: "number" // defined as number
}
},
indexes: {
record: {
pk: {
field: "pk",
template: "${number1}" // will build PK as numeric value
},
sk: {
field: "sk",
template: "${number2}" // will build SK as numeric value
}
}
}
}
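As a rough sketch of the idea (assumed behavior, not ElectroDB's actual implementation): a template consisting of exactly one `"number"` attribute lets the built key remain numeric instead of being cast to a string:

```javascript
// If the template is exactly one attribute reference, return the raw value
// (so a number stays a number); otherwise resolve it into a string key.
const buildKey = (template, values) => {
  const single = template.match(/^\$\{(\w+)\}$/);
  if (single) return values[single[1]];
  return template.replace(/\$\{(\w+)\}/g, (_, name) => String(values[name]));
};

console.log(buildKey("${number1}", { number1: 2024 })); // 2024 (numeric pk)
console.log(buildKey("${number2}", { number2: 5 }));    // 5 (numeric sk)
```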
DynamoDB is a case-sensitive data store, so it is common to convert the casing of keys to uppercase or lowercase prior to saving, updating, or querying data in your table. ElectroDB, by default, will lowercase all keys when preparing query parameters. For those who are using ElectroDB with an existing dataset, have preferences on upper or lowercase, or wish to not convert case at all, this can be configured on a per-index key field basis.
In the example below, we configure the casing ElectroDB will use individually for the Partition Key and Sort Key on the GSI "gsi1". For the index's PK, mapped to `gsi1pk`, ElectroDB will convert the key to uppercase prior to its use in queries. For the index's SK, mapped to `gsi1sk`, ElectroDB will not convert the case of the key prior to its use in queries.
{
indexes: {
myIndex: {
index: "gsi1",
pk: {
field: "gsi1pk",
casing: "upper", // Acct_0120 -> ACCT_0120
composite: ["organizationId"]
},
sk: {
field: "gsi1sk",
casing: "none", // Acct_0120 -> Acct_0120
composite: ["accountId"]
}
}
}
}
NOTE: Casing is a very important decision when modeling your data in DynamoDB. While choosing upper/lower is largely a personal preference, once you have begun loading records in your table it can be difficult to change your casing after the fact. Unless you have good reason, allowing for mixed case keys can make querying data difficult because it will require database consumers to always have a knowledge of their data's case.
Casing Option | Effect |
---|---|
default | The default for keys is lowercase, or `lower` |
lower | Will convert the key to lowercase prior to its use |
upper | Will convert the key to uppercase prior to its use |
none | Will not perform any casing changes when building keys |
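The effect of each option can be sketched with a small helper (illustrative only; ElectroDB applies casing per key field internally):

```javascript
// Apply a casing option to a fully built key segment.
const applyCasing = (key, casing = "default") => {
  switch (casing) {
    case "upper":
      return key.toUpperCase();
    case "none":
      return key;
    case "lower":
    case "default":
    default:
      return key.toLowerCase();
  }
};

console.log(applyCasing("Acct_0120", "upper")); // "ACCT_0120"
console.log(applyCasing("Acct_0120", "none"));  // "Acct_0120"
console.log(applyCasing("Acct_0120"));          // "acct_0120"
```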
A Composite Attribute is a segment of a key based on one of the attributes. Composite Attributes are concatenated together from either a Partition Key or a Sort Key, which define an `index`.
NOTE: Only attributes with a type of `"string"`, `"number"`, `"boolean"`, or `string[]` (enum) can be used as composite attributes.
There are two ways to provide composite attributes: as an array of attribute names (a Composite Attribute Array) or as a string template (a Composite Attribute Template).
For example, in the following Access Pattern, "locations" is made up of the composite attributes `storeId`, `mallId`, `buildingId` and `unitId`, which map to defined attributes in the model:
// Input
{
storeId: "STOREVALUE",
mallId: "MALLVALUE",
buildingId: "BUILDINGVALUE",
unitId: "UNITVALUE"
};
// Output:
{
pk: '$mallstoredirectory_1#storeid_storevalue',
sk: '$mallstores#mallid_mallvalue#buildingid_buildingvalue#unitid_unitvalue'
}
For `PK` values, the `service` and `version` values from the model are prefixed onto the key.
For `SK` values, the `entity` value from the model is prefixed onto the key.
Within a Composite Attribute Array, each element is the name of the corresponding Attribute defined in the Model. The attributes chosen, and the order in which they are specified, will translate to how your composite keys will be built by ElectroDB.
NOTE: If the Attribute has a `label` property, that label will be used to prefix the composite attributes; otherwise the full Attribute name will be used.
attributes: {
storeId: {
type: "string",
label: "sid",
},
mallId: {
type: "string",
label: "mid",
},
buildingId: {
type: "string",
label: "bid",
},
unitId: {
type: "string",
label: "uid",
}
},
indexes: {
locations: {
pk: {
field: "pk",
composite: ["storeId"]
},
sk: {
field: "sk",
composite: ["mallId", "buildingId", "unitId"]
}
}
}
// Input
{
storeId: "STOREVALUE",
mallId: "MALLVALUE",
buildingId: "BUILDINGVALUE",
unitId: "UNITVALUE"
};
// Output:
{
pk: '$mallstoredirectory_1#sid_storevalue',
sk: '$mallstores#mid_mallvalue#bid_buildingvalue#uid_unitvalue'
}
In a Composite Attribute Template, you provide a formatted template for ElectroDB to use when making keys. Composite Attribute Templates allow for potential ElectroDB adoption on already established tables and records.
Attributes are identified by surrounding the attribute name with `${...}` braces. For example, the syntax `${storeId}` will match the `storeId` attribute in the model.
The convention for composing a key uses the `#` symbol to separate attributes and an underscore to attach labels. For example, composing both `mallId` and `buildingId` would be expressed as `mid_${mallId}#bid_${buildingId}`.
NOTE: ElectroDB will not prefix templated keys with the Entity, Project, Version, or Collection. This gives you greater control of your keys but limits ElectroDB's ability to prevent leaking entities with some queries.
ElectroDB will continue to always add a trailing delimiter to composite attributes when keys are partially supplied. The section on BeginsWith Queries goes into more detail about how ElectroDB builds indexes from composite attributes.
{
model: {
entity: "MallStoreCustom",
version: "1",
service: "mallstoredirectory"
},
attributes: {
storeId: {
type: "string"
},
mallId: {
type: "string"
},
buildingId: {
type: "string"
},
unitId: {
type: "string"
}
},
indexes: {
locations: {
pk: {
field: "pk",
template: "sid_${storeId}"
},
sk: {
field: "sk",
template: "mid_${mallId}#bid_${buildingId}#uid_${unitId}"
}
}
}
}
// Input
{
storeId: "STOREVALUE",
mallId: "MALLVALUE",
buildingId: "BUILDINGVALUE",
unitId: "UNITVALUE"
};
// Output:
{
pk: 'sid_storevalue',
sk: 'mid_mallvalue#bid_buildingvalue#uid_unitvalue'
}
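The template resolution above can be approximated with a few lines of plain JavaScript (a simplified sketch; ElectroDB's real key builder also handles casing options and partially supplied keys):

```javascript
// Replace each ${attributeName} token with its lowercased value.
const resolveTemplate = (template, values) =>
  template.replace(/\$\{(\w+)\}/g, (_, name) => String(values[name]).toLowerCase());

const pk = resolveTemplate("sid_${storeId}", { storeId: "STOREVALUE" });
const sk = resolveTemplate("mid_${mallId}#bid_${buildingId}#uid_${unitId}", {
  mallId: "MALLVALUE",
  buildingId: "BUILDINGVALUE",
  unitId: "UNITVALUE",
});

console.log(pk); // "sid_storevalue"
console.log(sk); // "mid_mallvalue#bid_buildingvalue#uid_unitvalue"
```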
The example above shows indexes defined only with the `template` property. This property alone is enough to work with ElectroDB; however, it can be useful to also include a `composite` array with the names of the Composite Attributes included in the `template` string. Doing so achieves the following benefits:
ElectroDB will enforce that the template you have supplied actually resolves to the composite attributes specified in the array.
If you use ElectroDB with TypeScript, supplying the `composite` array will ensure the indexes' Composite Attributes are typed just the same as if you had not used a composite template.
An example of using `template` while also using `composite`:
{
indexes: {
locations: {
pk: {
field: "pk",
template: "sid_${storeId}",
composite: ["storeId"]
},
sk: {
field: "sk",
template: "mid_${mallId}#bid_${buildingId}#uid_${unitId}",
composite: ["mallId", "buildingId", "unitId"]
}
}
}
}
As described in the above two sections (Composite Attributes, Indexes), ElectroDB builds your keys using the attribute values defined in your model and provided on your query. Here are a few considerations to take into account when thinking about how to model your indexes:
Your table's primary Partition and Sort Keys cannot be changed after a record has been created. Be mindful not to use Attributes whose values can change as composite attributes for your primary table index.
When updating/patching an Attribute that is also a composite attribute for a secondary index, ElectroDB will perform a runtime check to ensure the operation will not leave a key in a partially built state. For example: if a Sort Key is defined as having the Composite Attributes `["prop1", "prop2", "prop3"]`, then an update to the `prop1` Attribute will require supplying the `prop2` and `prop3` Attributes as well. This prevents a loss of key fidelity, because ElectroDB is not able to partially update a key in place with its existing values.
As described and detailed in [Composite Attribute Arrays](#composite-attribute-arrays), you can use the `label` property on an Attribute to shorten a composite attribute's prefix on a key. This can help trim down the length of your keys.
It may be the case that an index field is also an attribute. For example, a table might be created with a Primary Index Partition Key of `accountId`, with that same field used to store the `accountId` value used by the application. The following are a few examples of how to model that schema with ElectroDB:
NOTE: If you have the unique opportunity to use ElectroDB with a new project, it is strongly recommended to use generically named index fields that are separate from your business attributes.
Using `composite`:
When your attribute's name, or the `field` property on an attribute, matches the `field` property on an index's `pk` or `sk`, ElectroDB will forego its usual index key prefixing.
{
model: {
entity: "your_entity_name",
service: "your_service_name",
version: "1"
},
attributes: {
accountId: {
type: "string"
},
productNumber: {
type: "number"
}
},
indexes: {
products: {
pk: {
field: "accountId",
composite: ["accountId"]
},
sk: {
field: "productNumber",
composite: ["productNumber"]
}
}
}
}
Using `template`:
Another approach uses the `template` property, which allows you to format exactly how your key should be built when interacting with DynamoDB. In this case `composite` is optional when using `template`, but including it helps with TypeScript typing.
{
model: {
entity: "your_entity_name",
service: "your_service_name",
version: "1"
},
attributes: {
accountId: {
type: "string" // string and number types are both supported
}
},
indexes: {
"your_access_pattern_name": {
pk: {
field: "accountId",
composite: ["accountId"],
template: "${accountId}"
},
sk: {...}
}
}
}
Advanced use of `template`:
When your `string` attribute is also an index key and you are using key templates, you can also add static prefixes and postfixes to your attribute. Under the covers, ElectroDB will leverage this template while interacting with DynamoDB, but will allow you to maintain a relationship with the attribute value itself.
For example, given the following model:
{
model: {
entity: "your_entity_name",
service: "your_service_name",
version: "1"
},
attributes: {
accountId: {
type: "string" // only string types are supported for this example
},
organizationId: {
type: "string"
},
name: {
type: "string"
}
},
indexes: {
"your_access_pattern_name": {
pk: {
field: "accountId",
composite: ["accountId"],
template: "prefix_${accountId}_postfix"
},
sk: {
field: "organizationId",
composite: ["organizationId"]
}
}
}
}
ElectroDB will accept a `get` request like this:
await myEntity.get({
accountId: "1111-2222-3333-4444",
organizationId: "AAAA-BBBB-CCCC-DDDD"
}).go()
ElectroDB will then query DynamoDB with the following params (note the pre/postfix on `accountId`):
NOTE: ElectroDB defaults keys to lowercase, though this can be configured using Index Casing.
{
Key: {
accountId: "prefix_1111-2222-3333-4444_postfix",
organizationId: `aaaa-bbbb-cccc-dddd`,
},
TableName: 'your_table_name'
}
When returned from a query, however, ElectroDB will trim the key of its prefix and postfix and return the following:
{
  accountId: "1111-2222-3333-4444",
  organizationId: "aaaa-bbbb-cccc-dddd",
  name: "your_item_name"
}
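Conceptually, the trimming step looks something like this (a sketch of the assumed behavior for the template `"prefix_${accountId}_postfix"`):

```javascript
// Strip a template's static prefix and postfix from the stored key to
// recover the original attribute value.
const trimAffixes = (stored, prefix, postfix) =>
  stored.slice(prefix.length, stored.length - postfix.length);

const raw = "prefix_1111-2222-3333-4444_postfix";
console.log(trimAffixes(raw, "prefix_", "_postfix")); // "1111-2222-3333-4444"
```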
A Collection is a grouping of Entities with the same Partition Key, allowing you to make efficient queries across multiple entities. If your background is SQL, imagine Partition Keys as Foreign Keys and a Collection as a View with multiple joined Entities.
NOTE: ElectroDB Collections use a single DynamoDB query to retrieve results. One query is made to retrieve results for all Entities (one of the benefits of single-table design); however, keep in mind that DynamoDB returns all records in order of the Entity's Sort Key. In cases where your partition contains a large volume of items, it is possible some entities will not return items during pagination. This can be mitigated through the use of Index Types.
Collections are defined on an Index, and the name of the collection should represent what the query would return as a pseudo `Entity`. Additionally, Collection names must be unique across a `Service`.
NOTE: A `collection` name should be unique to a single common index across entities.
const { Entity, Service } = require("electrodb");
const DynamoDB = require("aws-sdk/clients/dynamodb");
const table = "projectmanagement";
const client = new DynamoDB.DocumentClient();
const employees = new Entity({
model: {
entity: "employees",
version: "1",
service: "taskapp",
},
attributes: {
employeeId: {
type: "string"
},
organizationId: {
type: "string"
},
name: {
type: "string"
},
team: {
type: ["jupiter", "mercury", "saturn"]
}
},
indexes: {
staff: {
pk: {
field: "pk",
composite: ["organizationId"]
},
sk: {
field: "sk",
composite: ["employeeId"]
}
},
employee: {
collection: "assignments",
index: "gsi2",
pk: {
field: "gsi2pk",
composite: ["employeeId"],
},
sk: {
field: "gsi2sk",
composite: [],
},
}
}
}, { client, table })
const tasks = new Entity({
model: {
entity: "tasks",
version: "1",
service: "taskapp",
},
attributes: {
taskId: {
type: "string"
},
employeeId: {
type: "string"
},
projectId: {
type: "string"
},
title: {
type: "string"
},
body: {
type: "string"
}
},
indexes: {
project: {
pk: {
field: "pk",
composite: ["projectId"]
},
sk: {
field: "sk",
composite: ["taskId"]
}
},
assigned: {
collection: "assignments",
index: "gsi2",
pk: {
field: "gsi2pk",
composite: ["employeeId"],
},
sk: {
field: "gsi2sk",
composite: ["projectId"],
},
}
}
}, { client, table });
const TaskApp = new Service({employees, tasks});
await TaskApp.collections
.assignments({employeeId: "JExotic"})
.go();
// Equivalent Parameters
{
"TableName": 'projectmanagement',
"ExpressionAttributeNames": { '#pk': 'gsi2pk', '#sk1': 'gsi2sk' },
"ExpressionAttributeValues": { ':pk': '$taskapp#employeeid_jexotic', ':sk1': '$assignments' },
"KeyConditionExpression": '#pk = :pk and begins_with(#sk1, :sk1)',
"IndexName": 'gsi2'
}
To query across entities, collection queries make use of ElectroDB's Sort Key structure, which prefixes Sort Key fields with the collection name. Unlike an Entity Query, Collection queries for isolated indexes only leverage Composite Attributes from an access pattern's Partition Key, while Collection queries for clustered indexes allow you to query on both Partition and Sort Keys.
To better explain how Collection Queries are formed, here is a juxtaposition of an Entity Query's parameters vs a Collection Query's parameters:
Entity Query
await TaskApp.entities
.tasks.query
.assigned({employeeId: "JExotic"})
.go();
// Equivalent Parameters
{
KeyConditionExpression: '#pk = :pk and begins_with(#sk1, :sk1)',
TableName: 'projectmanagement',
ExpressionAttributeNames: { '#pk': 'gsi2pk', '#sk1': 'gsi2sk' },
ExpressionAttributeValues: {
':pk': '$taskapp#employeeid_jexotic',
':sk1': '$assignments#tasks_1'
},
IndexName: 'gsi2'
}
Collection Query
await TaskApp.collections
.assignments({employeeId: "JExotic"})
.go();
// Equivalent Parameters
{
KeyConditionExpression: '#pk = :pk and begins_with(#sk1, :sk1)',
TableName: 'projectmanagement',
ExpressionAttributeNames: { '#pk': 'gsi2pk', '#sk1': 'gsi2sk' },
ExpressionAttributeValues: { ':pk': '$taskapp#employeeid_jexotic', ':sk1': '$assignments' },
IndexName: 'gsi2'
}
The notable difference between the two is how much of the Sort Key is specified at query time.
Entity Query:
ExpressionAttributeValues: { ':sk1': '$assignments#tasks_1' },
Collection Query:
ExpressionAttributeValues: { ':sk1': '$assignments' },
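The practical effect of that difference can be sketched with DynamoDB's `begins_with` semantics (the sort key value below is illustrative):

```javascript
// begins_with(#sk1, :sk1) is a simple prefix match on the sort key.
const beginsWith = (sortKey, prefix) => sortKey.startsWith(prefix);

const taskSortKey = "$assignments#tasks_1#projectid_sd-204";

console.log(beginsWith(taskSortKey, "$assignments"));             // true: matched by the collection query
console.log(beginsWith(taskSortKey, "$assignments#tasks_1"));     // true: matched by the tasks entity query
console.log(beginsWith(taskSortKey, "$assignments#employees_1")); // false: not an employees record
```

The shorter collection prefix matches the sort keys of every member entity in the collection, while the longer entity prefix narrows results to a single entity.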
Unlike Entity Queries which return an array, Collection Queries return an object. This object will have a key for every Entity name (or Entity Alias) associated with that Collection, and an array for all results queried that belong to that Entity.
For example, using the "TaskApp" models defined above, we would expect the following response from a query to the "assignments" collection:
let results = await TaskApp.collections
.assignments({employeeId: "JExotic"})
.go();
{
data: {
tasks: [...], // tasks for employeeId "JExotic"
employees: [...] // employee record(s) with employeeId "JExotic"
},
cursor: null
}
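Internally, grouping one query's items into per-entity arrays can be sketched like this (the `__edb_e__` identity attribute and the sample items are assumptions for illustration):

```javascript
// Bucket raw query items by the entity name each record carries.
const items = [
  { __edb_e__: "employees", employeeId: "JExotic", name: "Joe" },
  { __edb_e__: "tasks", taskId: "T-1", employeeId: "JExotic" },
  { __edb_e__: "tasks", taskId: "T-2", employeeId: "JExotic" },
];

const grouped = items.reduce((acc, { __edb_e__: entity, ...attributes }) => {
  (acc[entity] = acc[entity] || []).push(attributes);
  return acc;
}, {});

console.log(Object.keys(grouped)); // ["employees", "tasks"]
console.log(grouped.tasks.length); // 2
```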
Because the Tasks and Employees Entities both associated their index (`gsi2`) with the same collection name (`assignments`), ElectroDB is able to associate the two entities via a shared Partition Key. As stated in the Collections section, querying across Entities by PK can be comparable to querying across a foreign key in a traditional relational database.
Sub-Collections are an extension of Collection functionality that allow you to model more advanced access patterns. Collections and Sub-Collections are defined on Indexes via a property called `collection`, as either a string or a string array respectively.
NOTE: Sub-Collections are only supported on "isolated" index types.
The following is an example of functionally identical collections, implemented first as a string (referred to as a "collection") and then as a string array (referred to as sub-collections):
As a string (collection):
{
collection: "assignments",
pk: {
field: "pk",
composite: ["employeeId"]
},
sk: {
field: "sk",
composite: ["projectId"]
}
}
As a string array (sub-collections):
{
collection: ["assignments"],
pk: {
field: "pk",
composite: ["employeeId"]
},
sk: {
field: "sk",
composite: ["projectId"]
}
}
Both implementations above will create a "collections" method called `assignments` when added to a Service.
const results = await TaskApp.collections
.assignments({employeeId: "JExotic"})
.go();
The advantage of using a string array to define collections is the ability to express sub-collections. Below is an example of three entities using sub-collections, followed by an explanation of their sub-collection definitions:
import {Entity, Service} from "electrodb"
import DynamoDB from "aws-sdk/clients/dynamodb";
const table = "projectmanagement";
const client = new DynamoDB.DocumentClient();
const employees = new Entity({
model: {
entity: "employees",
version: "1",
service: "taskapp",
},
attributes: {
employeeId: {
type: "string"
},
organizationId: {
type: "string"
},
name: {
type: "string"
},
team: {
type: ["jupiter", "mercury", "saturn"] as const
}
},
indexes: {
staff: {
pk: {
field: "pk",
composite: ["organizationId"]
},
sk: {
field: "sk",
composite: ["employeeId"]
}
},
employee: {
collection: "contributions",
index: "gsi2",
pk: {
field: "gsi2pk",
composite: ["employeeId"],
},
sk: {
field: "gsi2sk",
composite: [],
},
}
}
}, { client, table })
const tasks = new Entity({
model: {
entity: "tasks",
version: "1",
service: "taskapp",
},
attributes: {
taskId: {
type: "string"
},
employeeId: {
type: "string"
},
projectId: {
type: "string"
},
title: {
type: "string"
},
body: {
type: "string"
}
},
indexes: {
project: {
collection: "overview",
pk: {
field: "pk",
composite: ["projectId"]
},
sk: {
field: "sk",
composite: ["taskId"]
}
},
assigned: {
collection: ["contributions", "assignments"] as const,
index: "gsi2",
pk: {
field: "gsi2pk",
composite: ["employeeId"],
},
sk: {
field: "gsi2sk",
composite: ["projectId"],
},
}
}
}, { client, table });
const projectMembers = new Entity({
model: {
entity: "projectMembers",
version: "1",
service: "taskapp",
},
attributes: {
employeeId: {
type: "string"
},
projectId: {
type: "string"
},
name: {
type: "string"
},
},
indexes: {
members: {
collection: "overview",
pk: {
field: "pk",
composite: ["projectId"]
},
sk: {
field: "sk",
composite: ["employeeId"]
}
},
projects: {
collection: ["contributions", "assignments"] as const,
index: "gsi2",
pk: {
field: "gsi2pk",
composite: ["employeeId"],
},
sk: {
field: "gsi2sk",
composite: [],
},
}
}
}, { client, table });
const TaskApp = new Service({employees, tasks, projectMembers});
TypeScript Note: Use the `as const` syntax when defining `collection` as a string array for improved type support.
The last line of the code block above creates a Service called `TaskApp` using the Entity instances declared above it. By creating a Service, ElectroDB will identify and validate the sub-collections defined across all three models. The result in this case is three unique collections: "overview", "contributions", and "assignments".
The simplest collection to understand is `overview`. This collection is defined on the table's Primary Index, composed of a `projectId` in the Partition Key, and is currently implemented by two Entities: `tasks` and `projectMembers`. If another entity were to be added to our service, it could "join" this collection by implementing an identical Partition Key composite (`projectId`) and labeling itself as part of the `overview` collection. The following is an example of using the `overview` collection:
// overview
const results = await TaskApp.collections
.overview({projectId: "SD-204"})
.go();
// results
{
data: {
tasks: [...], // tasks associated with projectId "SD-204
projectMembers: [...] // employees of project "SD-204"
},
cursor: null,
}
// parameters
{
KeyConditionExpression: '#pk = :pk and begins_with(#sk1, :sk1)',
TableName: 'projectmanagement',
ExpressionAttributeNames: { '#pk': 'pk', '#sk1': 'sk' },
ExpressionAttributeValues: { ':pk': '$taskapp#projectid_sd-204', ':sk1': '$overview' }
}
Unlike `overview`, the collections `contributions` and `assignments` are more complex.
In the case of `contributions`, all three entities implement this collection on the `gsi2` index and compose their Partition Key with the `employeeId` attribute. The `assignments` collection, however, is only implemented by the `tasks` and `projectMembers` Entities. Below is an example of using these collections:
NOTE: Collection values of `collection: "contributions"` and `collection: ["contributions"]` are interpreted by ElectroDB as the same implementation.
// contributions
const results = await TaskApp.collections
.contributions({employeeId: "JExotic"})
.go();
// results
{
data: {
tasks: [...], // tasks assigned to employeeId "JExotic"
projectMembers: [...], // projects with employeeId "JExotic"
employees: [...] // employee record(s) with employeeId "JExotic"
},
cursor: null,
}
{
KeyConditionExpression: '#pk = :pk and begins_with(#sk1, :sk1)',
TableName: 'projectmanagement',
ExpressionAttributeNames: { '#pk': 'gsi2pk', '#sk1': 'gsi2sk' },
ExpressionAttributeValues: { ':pk': '$taskapp#employeeid_jexotic', ':sk1': '$contributions' },
IndexName: 'gsi2'
}
// assignments
const results = await TaskApp.collections
.assignments({employeeId: "JExotic"})
.go();
// results
{
data: {
tasks: [...], // tasks assigned to employeeId "JExotic"
projectMembers: [...], // projects with employeeId "JExotic"
},
cursor: null,
}
{
KeyConditionExpression: '#pk = :pk and begins_with(#sk1, :sk1)',
TableName: 'projectmanagement',
ExpressionAttributeNames: { '#pk': 'gsi2pk', '#sk1': 'gsi2sk' },
ExpressionAttributeValues: {
':pk': '$taskapp#employeeid_jexotic',
':sk1': '$contributions#assignments'
},
IndexName: 'gsi2'
}
Looking above, we can see that the `assignments` collection is actually a subset of the results that could be queried with the `contributions` collection. The power behind having the `assignments` sub-collection is the flexibility to further slice and dice your cross-entity queries into more specific and performant queries.
If you're interested in the naming used in the collection and access pattern definitions above, check out the section on Naming Conventions.
ElectroDB puts an emphasis on allowing users to define more domain specific naming. Instead of referring to indexes by their name on the table, ElectroDB allows users to define their indexes as Access Patterns.
Please refer to the Entities defined in the section Sub-Collection Entities as the source of examples within this section.
The following is an access pattern on the "employees" entity defined here:
staff: {
pk: {
field: "pk",
composite: ["organizationId"]
},
sk: {
field: "sk",
composite: ["employeeId"]
}
}
This Access Pattern is defined on the table's Primary Index (note the lack of an `index` property), is given the name `staff`, and is composed of an `organizationId` and an `employeeId`.
When deciding on an Access Pattern name, ask yourself: "What would the array of items returned represent if I only supplied the Partition Key?" In this example, the entity defines an "Employee" by its `organizationId` and `employeeId`. If you performed a query against this index and only provided `organizationId`, you would expect to receive all Employees for that Organization. From there, the name `staff` was chosen because the focus becomes "What are these Employees to that Organization?".
This convention also becomes evident when you consider that the Access Pattern name becomes the name of the method you use to query that index.
await employee.query.staff({organizationId: "nike"}).go();
The following are access patterns on entities defined here:
// employees entity
employee: {
collection: "contributions",
index: "gsi2",
pk: {
field: "gsi2pk",
composite: ["employeeId"],
},
sk: {
field: "gsi2sk",
composite: [],
},
}
// tasks entity
assigned: {
collection: ["contributions", "assignments"],
index: "gsi2",
pk: {
field: "gsi2pk",
composite: ["employeeId"],
},
sk: {
field: "gsi2sk",
composite: ["projectId"],
},
}
// projectMembers entity
projects: {
collection: ["contributions", "assignments"] as const,
index: "gsi2",
pk: {
field: "gsi2pk",
composite: ["employeeId"],
},
sk: {
field: "gsi2sk",
composite: [],
},
}
In the case of the entities above, we see an example of a sub-collection. ElectroDB will use the above definitions to generate two collections: contributions and assignments.
The considerations for naming a collection are nearly identical to the considerations for naming an index: What do the query results from supplying just the Partition Key represent? In the case of collections you must also consider what the results represent across all of the involved entities, and the entities that may be added in the future.
For example, the contributions collection is named such because, when given an employeeId, we receive the employee's details, the tasks assigned to that employee, and the projects where they are currently a member.
In the case of assignments, we receive a subset of contributions when supplying an employeeId: only the tasks and projects they are "assigned" to are returned.
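The mechanics behind sub-collections can be pictured as sort-key prefixes. The sketch below is a simplified illustration, not ElectroDB's exact key format: each collection name in the chain lengthens the begins_with prefix shared by its member entities, so the narrower assignments query simply uses a longer prefix than contributions.

```javascript
// Simplified illustration (not ElectroDB's exact key format): each
// collection in a sub-collection chain lengthens the shared sort-key
// prefix, so narrower collections match fewer entities.
const sortKeys = [
  "$contributions#employee_1",                    // employees entity
  "$contributions#$assignments#tasks_1",          // tasks entity
  "$contributions#$assignments#projectmembers_1", // projectMembers entity
];

// Querying the "contributions" collection: begins_with on the short prefix
const contributions = sortKeys.filter((sk) => sk.startsWith("$contributions"));

// Querying the "assignments" sub-collection: a longer begins_with prefix
const assignments = sortKeys.filter((sk) =>
  sk.startsWith("$contributions#$assignments")
);

console.log(contributions.length); // all three entities match
console.log(assignments.length);   // only tasks and projectMembers match
```

This is why assignments is a strict subset of contributions: every key that matches the longer prefix also matches the shorter one.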
The where() method is an improvement on the filter() method. Unlike filter, where will be compatible with upcoming features related to complex types.
Building thoughtful indexes can make queries simple and performant. Sometimes you need to filter results down further or add conditions to an update/patch/put/create/delete/remove action.
Below is the traditional way you would add a FilterExpression
to Dynamo's DocumentClient directly alongside how you would accomplish the same using the where
method.
animals.query
.exhibit({habitat: "Africa"})
.where(({isPregnant, offspring}, {exists, eq}) => `
${eq(isPregnant, true)} OR ${exists(offspring)}
`)
.go()
{
"KeyConditionExpression": "#pk = :pk and begins_with(#sk1, :sk1)",
"TableName": "zoo_manifest",
"ExpressionAttributeNames": {
"#isPregnant": "isPregnant",
"#offspring": "offspring",
"#pk": "gsi1pk",
"#sk1": "gsi1sk"
},
"ExpressionAttributeValues": {
":isPregnant0": true,
":pk": "$zoo#habitat_africa",
":sk1": "$animals_1#enclosure_"
},
"IndexName": "gsi1pk-gsi1sk-index",
"FilterExpression": "#isPregnant = :isPregnant0 OR attribute_exists(#offspring)"
}
Below is the traditional way you would add a ConditionExpression
to Dynamo's DocumentClient directly alongside how you would accomplish the same using the where
method.
animals.update({
animal: "blackbear",
name: "Isabelle"
})
// no longer pregnant because Ernesto was born!
.set({
isPregnant: false,
lastEvaluation: "2021-09-12",
lastEvaluationBy: "stephanie.adler"
})
// welcome to the world Ernesto!
.append({
offspring: [{
name: "Ernesto",
birthday: "2021-09-12",
note: "healthy birth, mild pollen allergy"
}]
})
// using the where clause can guard against making
// updates against stale data
.where(({isPregnant, lastEvaluation}, {lt, eq}) => `
${eq(isPregnant, true)} AND ${lt(lastEvaluation, "2021-09-12")}
`)
.go()
{
"UpdateExpression": "SET #isPregnant = :isPregnant_u0, #lastEvaluation = :lastEvaluation_u0, #lastEvaluationBy = :lastEvaluationBy_u0, #offspring = list_append(#offspring, :offspring_u0)",
"ExpressionAttributeNames": {
"#isPregnant": "isPregnant",
"#lastEvaluation": "lastEvaluation",
"#lastEvaluationBy": "lastEvaluationBy",
"#offspring": "offspring"
},
"ExpressionAttributeValues": {
":isPregnant0": true,
":lastEvaluation0": "2021-09-12",
":isPregnant_u0": false,
":lastEvaluation_u0": "2021-09-12",
":lastEvaluationBy_u0": "stephanie.adler",
":offspring_u0": [
{
"name": "Ernesto",
"birthday": "2021-09-12",
"note": "healthy birth, mild pollen allergy"
}
]
},
"TableName": "zoo_manifest",
"Key": {
"pk": "$zoo#animal_blackbear",
"sk": "$animals_1#name_isabelle"
},
"ConditionExpression": "#isPregnant = :isPregnant0 AND #lastEvaluation < :lastEvaluation0"
}
ElectroDB supports using the where() method with DynamoDB's complex attribute types: map, list, and set. When using the injected attributes object, simply drill into the attribute itself to apply your filter or condition directly to the required property.
The following are examples on how to filter on complex attributes:
Example 1: Filtering on a map attribute
animals.query
.farm({habitat: "Africa"})
.where(({veterinarian}, {eq}) => eq(veterinarian.name, "Herb Peterson"))
.go()
Example 2: Filtering on an element in a list attribute
animals.query
.exhibit({habitat: "Tundra"})
.where(({offspring}, {eq}) => eq(offspring[0].name, "Blitzen"))
.go()
Where functions allow you to write a FilterExpression or ConditionExpression without having to worry about the complexities of expression attributes. To accomplish this, ElectroDB injects an object attributes as the first parameter to all Filter Functions, and an object operations as the second parameter. Pass the properties from the attributes object to the methods found on the operations object, along with inline values, to set filters and conditions.
NOTE: where callbacks must return a string. All methods on the operations object return strings, so you can return the result of a single operations method or use a template string to compose an expression.
// A single filter operation
animals.update({habitat: "Africa", enclosure: "5b"})
.set({keeper: "Joe Exotic"})
.where((attr, op) => op.eq(attr.dangerous, true))
.go();
// A single filter operation w/ destructuring
animals.update({animal: "tiger", name: "janet"})
.set({keeper: "Joe Exotic"})
.where(({dangerous}, {eq}) => eq(dangerous, true))
.go();
// Multiple conditions - `op`
animals.update({animal: "tiger", name: "janet"})
.set({keeper: "Joe Exotic"})
.where((attr, op) => `
${op.eq(attr.dangerous, true)} AND ${op.notExists(attr.lastFed)}
`)
.go();
// Multiple usages of `where` (implicit AND)
animals.update({animal: "tiger", name: "janet"})
.set({keeper: "Joe Exotic"})
.where((attr, op) => `
${op.eq(attr.dangerous, true)} OR ${op.notExists(attr.lastFed)}
`)
.where(({birthday}, {between}) => {
const today = Date.now();
const lastMonth = today - 1000 * 60 * 60 * 24 * 30;
return between(birthday, lastMonth, today);
})
.go();
// "dynamic" filtering
function getAnimals(habitat, keepers) {
const query = animals.query.exhibit({habitat});
for (const name of keepers) {
query.where(({keeper}, {eq}) => eq(keeper, name));
}
return query.go();
}
const keepers = [
"Joe Exotic",
"Carol Baskin"
];
getAnimals("RainForest", keepers);
The attributes object contains every Attribute defined in the Entity's Model. The operations object contains the following methods:
operator | example | result |
---|---|---|
eq | eq(rent, maxRent) | #rent = :rent1 |
ne | ne(rent, maxRent) | #rent <> :rent1 |
gte | gte(rent, value) | #rent >= :rent1 |
gt | gt(rent, maxRent) | #rent > :rent1 |
lte | lte(rent, maxRent) | #rent <= :rent1 |
lt | lt(rent, maxRent) | #rent < :rent1 |
begins | begins(rent, maxRent) | begins_with(#rent, :rent1) |
exists | exists(rent) | attribute_exists(#rent) |
notExists | notExists(rent) | attribute_not_exists(#rent) |
contains | contains(rent, maxRent) | contains(#rent, :rent1) |
notContains | notContains(rent, maxRent) | not contains(#rent, :rent1) |
between | between(rent, minRent, maxRent) | (#rent between :rent1 and :rent2) |
name | name(rent) | #rent |
value | value(rent, maxRent) | :rent1 |
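To make the table above concrete, here is a simplified sketch (not ElectroDB's source) of how an operations method can both return an expression fragment and record the expression attribute names and values behind the scenes. For brevity, attributes are represented as plain strings here, whereas ElectroDB passes richer attribute objects.

```javascript
// Simplified sketch of how a where callback composes an expression while
// the library records ExpressionAttributeNames/Values behind the scenes.
// (Illustrative only; real ElectroDB passes attribute objects, not strings.)
function makeOperations(names, values) {
  const counts = {};
  return {
    eq(attr, val) {
      const n = (counts[attr] = (counts[attr] || 0) + 1);
      names[`#${attr}`] = attr;        // register the attribute name alias
      values[`:${attr}${n}`] = val;    // register the value placeholder
      return `#${attr} = :${attr}${n}`;
    },
    exists(attr) {
      names[`#${attr}`] = attr;
      return `attribute_exists(#${attr})`;
    },
  };
}

const names = {};
const values = {};
const op = makeOperations(names, values);

// Equivalent of:
// .where(({isPregnant, offspring}, {eq, exists}) =>
//   `${eq(isPregnant, true)} OR ${exists(offspring)}`)
const filterExpression =
  `${op.eq("isPregnant", true)} OR ${op.exists("offspring")}`;

console.log(filterExpression);
// "#isPregnant = :isPregnant1 OR attribute_exists(#offspring)"
```

Because each operation returns a string, callbacks are free to compose fragments with template literals, as the examples below demonstrate.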
It is possible to chain multiple where clauses. The resulting FilterExpressions (or ConditionExpressions) are concatenated with an implicit AND
operator.
let MallStores = new Entity(model, {table: "StoreDirectory"});
let stores = await MallStores.query
.leases({ mallId: "EastPointe" })
.between({ leaseEndDate: "2020-04-01" }, { leaseEndDate: "2020-07-01" })
.where(({ rent, discount }, {between, eq}) => `
${between(rent, "2000.00", "5000.00")} AND ${eq(discount, "1000.00")}
`)
.where(({ category }, {eq}) => `
${eq(category, "food/coffee")}
`)
.go();
// Equivalent Parameters
{
TableName: 'StoreDirectory',
ExpressionAttributeNames: {
'#rent': 'rent',
'#discount': 'discount',
'#category': 'category',
'#pk': 'idx2pk',
'#sk1': 'idx2sk'
},
ExpressionAttributeValues: {
':rent1': '2000.00',
':rent2': '5000.00',
':discount1': '1000.00',
':category1': 'food/coffee',
':pk': '$mallstoredirectory_1#mallid_eastpointe',
':sk1': '$mallstore#leaseenddate_2020-04-01#storeid_',
':sk2': '$mallstore#leaseenddate_2020-07-01#storeid_'
},
KeyConditionExpression: '#pk = :pk and #sk1 BETWEEN :sk1 AND :sk2',
IndexName: 'idx2',
FilterExpression: '(#rent between :rent1 and :rent2) AND (#discount = :discount1 AND #category = :category1)'
}
The parse method can be given a DocClient response and returns a typed and formatted ElectroDB item.
ElectroDB's parse() method accepts results from get, delete, put, update, query, and scan operations, applies all the same operations as though the item was retrieved by ElectroDB itself, and will return null (or an empty array for query results) if the item could not be parsed.
const myEntity = new Entity({...});
const getResults = await docClient.get({...}).promise();
const queryResults = await docClient.query({...}).promise();
const updateResults = await docClient.update({...}).promise();
const formattedGetResults = myEntity.parse(getResults);
const formattedQueryResults = myEntity.parse(queryResults);
Parse also accepts an optional options object as a second argument (see the section Query Options to learn more). Currently, the following query options are relevant to the parse() method:
Option | Type | Default | Notes |
---|---|---|---|
ignoreOwnership | boolean | true | This property defaults to true here, unlike elsewhere in the library where it defaults to false. You can overwrite the default here with your own preference. |
attributes | string[] | (all attributes) | The attributes option allows you to specify a subset of attributes to return |
For hands-on learners: the following example can be followed along with and executed on runkit: https://runkit.com/tywalch/electrodb-building-queries
ElectroDB queries use DynamoDB's query method to find records based on your table's indexes.
NOTE: To limit the number of items ElectroDB will retrieve, read more about the Query Options pages and limit, or use the ElectroDB Pagination API for fine-grained pagination support.
Forming a composite Partition Key and Sort Key is a critical step in planning Access Patterns in DynamoDB. When planning composite keys, it is crucial to consider the order in which they are composed. As of the time of writing this documentation, DynamoDB queries have the following constraint that should be taken into account when planning your Access Patterns: Sort Key conditions are limited to the operators begins_with, between, >, >=, <, <=, and equals.
Carefully considering your Composite Attribute order will allow ElectroDB to express hierarchical relationships and unlock more available Access Patterns for your application.
For example, let's say you have a StoreLocations
Entity that represents Store Locations inside Malls:
let schema = {
model: {
service: "MallStoreDirectory",
entity: "MallStore",
version: "1",
},
attributes: {
cityId: {
type: "string",
required: true,
},
mallId: {
type: "string",
required: true,
},
storeId: {
type: "string",
required: true,
},
buildingId: {
type: "string",
required: true,
},
unitId: {
type: "string",
required: true,
},
category: {
type: [
"spite store",
"food/coffee",
"food/meal",
"clothing",
"electronics",
"department",
"misc"
],
required: true
},
leaseEndDate: {
type: "string",
required: true
},
rent: {
type: "string",
required: true,
validate: /^(\d+\.\d{2})$/
},
discount: {
type: "string",
required: false,
default: "0.00",
validate: /^(\d+\.\d{2})$/
}
},
indexes: {
stores: {
pk: {
field: "pk",
composite: ["cityId", "mallId"]
},
sk: {
field: "sk",
composite: ["buildingId", "storeId"]
}
},
units: {
index: "gis1pk-gsi1sk-index",
pk: {
field: "gis1pk",
composite: ["mallId"]
},
sk: {
field: "gsi1sk",
composite: ["buildingId", "unitId"]
}
},
leases: {
index: "gis2pk-gsi2sk-index",
pk: {
field: "gis2pk",
composite: ["storeId"]
},
sk: {
field: "gsi2sk",
composite: ["leaseEndDate"]
}
}
}
};
const StoreLocations = new Entity(schema, {table: "StoreDirectory"});
Examples in this section use the MallStore schema defined above, and are available for interacting with here: https://runkit.com/tywalch/electrodb-building-queries
All queries start from the Access Pattern defined in the schema.
const MallStore = new Entity(schema, {table: "StoreDirectory"});
// Each Access Pattern is available on the Entity instance
// MallStore.query.stores()
// MallStore.query.malls()
All queries require (at minimum) the Composite Attributes included in its defined Partition Key. Composite Attributes you define on the Sort Key can be partially supplied, but must be supplied in the order they are defined.
IMPORTANT: Composite Attributes must be supplied in the order they are composed when invoking the Access Pattern. This is because composite attributes are used to form a concatenated key string, and if attributes are supplied out of order, it is not possible to fill the gaps in that concatenation.
const MallStore = new Entity({
model: {
service: "mallmgmt",
entity: "store",
version: "1"
},
attributes: {
cityId: "string",
mallId: "string",
storeId: "string",
buildingId: "string",
unitId: "string",
name: "string",
description: "string",
category: "string"
},
indexes: {
stores: {
pk: {
field: "pk",
composite: ["cityId", "mallId"]
},
sk: {
field: "sk",
composite: ["storeId", "unitId"]
}
}
}
}, {table: "StoreDirectory"});
const cityId = "Atlanta1";
const mallId = "EastPointe";
const storeId = "LatteLarrys";
const unitId = "B24";
const buildingId = "F34";
// Good: Includes at least the PK
StoreLocations.query.stores({cityId, mallId});
// Good: Includes at least the PK, and the first SK attribute
StoreLocations.query.stores({cityId, mallId, storeId});
// Good: Includes at least the PK, and all SK attributes
StoreLocations.query.stores({cityId, mallId, storeId, unitId});
// Bad: No PK composite attributes specified, will throw
StoreLocations.query.stores();
// Bad: Not All PK Composite Attributes included (cityId), will throw
StoreLocations.query.stores({mallId});
// Bad: Composite Attributes not included in order, will NOT throw, but will ignore `unitId` because `storeId` was not supplied as well
StoreLocations.query.stores({cityId, mallId, unitId});
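The last "Bad" case above can be pictured with a small sketch. This is illustrative only; the key format is simplified, not ElectroDB's exact output. A sort key is a left-to-right concatenation, so only trailing composite attributes may be omitted:

```javascript
// Illustrative sketch (simplified key format, not exact ElectroDB output):
// a sort key is built left to right, stopping at the first missing
// composite attribute -- gaps cannot be skipped over.
function buildSortKeyPrefix(prefix, order, supplied) {
  let key = prefix;
  for (const attr of order) {
    if (supplied[attr] === undefined) break; // first gap ends the prefix
    key += `#${attr.toLowerCase()}_${String(supplied[attr]).toLowerCase()}`;
  }
  return key;
}

const order = ["storeId", "unitId"];

// All SK attributes supplied: a full, exact sort key
buildSortKeyPrefix("$store_1", order, { storeId: "LatteLarrys", unitId: "B24" });
// "$store_1#storeid_lattelarrys#unitid_b24"

// Only the first SK attribute: usable as a begins_with prefix
buildSortKeyPrefix("$store_1", order, { storeId: "LatteLarrys" });
// "$store_1#storeid_lattelarrys"

// `unitId` without `storeId`: the gap means `unitId` cannot be used
buildSortKeyPrefix("$store_1", order, { unitId: "B24" });
// "$store_1"
```

This is why supplying unitId without storeId silently contributes nothing to the key condition: the prefix cannot be extended past the gap.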
operator | use case |
---|---|
begins | Keys starting with a particular set of characters. |
between | Keys between a specified range. |
gt | Keys greater than some value |
gte | Keys greater than or equal to some value |
lt | Keys less than some value |
lte | Keys less than or equal to some value |
Each record represents one Store location. All Stores are located in Malls we manage.
To satisfy requirements for searching based on location, you could use the following keys: each StoreLocations record would have a Partition Key with the store's storeId. This key alone is not enough to identify a particular store. To solve this, compose a Sort Key for the store's location attributes, ordered hierarchically (mall/building/unit): ["mallId", "buildingId", "unitId"].
The StoreLocations entity above, using just the stores Index alone, enables four Access Patterns:
1. LatteLarrys locations in all Malls
2. LatteLarrys locations in one Mall
3. LatteLarrys locations inside a specific Building of a Mall
4. A specific LatteLarrys inside of a Mall and Building
Queries in ElectroDB are built around the Access Patterns defined in the Schema and are capable of using partial key Composite Attributes to create performant lookups. To accomplish this, ElectroDB offers a predictable chainable API.
Examples in this section use the StoreLocations schema defined above and can be directly experimented with on runkit: https://runkit.com/tywalch/electrodb-building-queries
The methods Get (get), Create (put), Update (update), and Delete (delete) require all composite attributes described in the Entity's primary PK and SK.
DynamoDB offers three methods for updating and creating records: put, update, and batchWrite. For the uninitiated, all three of these methods will create an item if it doesn't exist. The difference between put/batchWrite and update is that a put will overwrite the existing item, while an update will only modify the provided fields if the item already exists.
ElectroDB offers a few mutation methods beyond put
, update
, and delete
to more ergonomically fit your use case. Below is a table that explains each ElectroDB method, which DynamoDB operation the method maps to, and a short description of the method's purpose.
ElectroDB Method | DynamoDB Method | Purpose |
---|---|---|
put | put , batchWrite | Creates or overwrites an existing item with the values provided |
create | put | Creates an item if the item does not currently exist, or throws if the item exists |
upsert | update | Upsert is similar to put in that it will create a record if one does not exist, except upsert will perform an update if that record already exists. |
update | update | Performs update on an existing record or creates a new record per the DynamoDB spec (read more here) |
patch | update | Performs an update on an existing item, or throws if that item does not already exist |
delete | delete , batchWrite | Deletes an item regardless of whether or not the specified item exists |
remove | delete | Deletes an item or throws if the item does not currently exist |
Provide all Table Index composite attributes in an object to the delete method to delete a record.
Example:
await StoreLocations.delete({
storeId: "LatteLarrys",
mallId: "EastPointe",
buildingId: "F34",
cityId: "Atlanta1"
}).go();
Response Format:
{
data: { YOUR_SCHEMA }
}
Equivalent DocClient Parameters:
{
"Key": {
"pk": "$mallstoredirectory#cityid_atlanta1#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_f34#storeid_lattelarrys"
},
"TableName": "YOUR_TABLE_NAME"
}
Provide all table index composite attributes in an array of objects to the delete method to batch delete records.
NOTE: Performing a Batch Delete will return an array of "unprocessed" records. An empty array signifies all records were processed. If you want the raw DynamoDB response you can always use the option {raw: true}, more detail found here: Query Options. Additionally, when performing a BatchWrite the .params() method will return an array of parameters, rather than just the parameters for one docClient query. This is because ElectroDB supports BatchWrite queries larger than the docClient's limit of 25 records.
If the number of records you are requesting is above the BatchWrite threshold of 25 records, ElectroDB will make multiple requests to DynamoDB and return the results in a single array. By default, ElectroDB will make these requests in series, one after another. If you are confident your table can handle the throughput, you can use the Query Option concurrent. This value can be set to any number greater than zero, and will execute that number of requests simultaneously.
For example, take 75 records (50 records over the DynamoDB maximum):
With the default concurrent value of 1, ElectroDB will execute a BatchWrite request of 25 records, then after that request has responded, make another BatchWrite request for 25 records, and then another.
If you set the Query Option concurrent to 2, ElectroDB will execute a BatchWrite request of 25 records and another BatchWrite request for 25 records without waiting for the first request to finish. After those two have finished, it will execute another BatchWrite request for 25 records.
It is important to consider your Table's throughput when setting this value.
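The chunking and concurrency behavior described above can be sketched in plain JavaScript. This is illustrative only; fakeBatchWrite is a hypothetical stand-in for the real DynamoDB call:

```javascript
// Illustrative sketch of the batching strategy: split items into chunks
// of 25, then execute `concurrent` chunks per "wave", awaiting each wave
// before starting the next.
function chunk(items, size = 25) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

async function batchExecute(items, batchWriteFn, concurrent = 1) {
  const unprocessed = [];
  const batches = chunk(items);
  for (let i = 0; i < batches.length; i += concurrent) {
    const wave = batches.slice(i, i + concurrent); // run these together
    const results = await Promise.all(wave.map(batchWriteFn));
    for (const result of results) unprocessed.push(...result);
  }
  return unprocessed;
}

// 75 records: chunked into [25, 25, 25]; with concurrent: 2 the waves
// are [25, 25] then [25].
const records = Array.from({ length: 75 }, (_, i) => ({ id: i }));
const batchSizes = [];
const fakeBatchWrite = async (batch) => {
  batchSizes.push(batch.length); // hypothetical stand-in for DynamoDB
  return [];                     // pretend everything was processed
};
batchExecute(records, fakeBatchWrite, 2).then(() => {
  console.log(batchSizes); // three batches of 25
});
```

The higher the concurrent value, the more load is placed on the table at once, which is why throughput deserves consideration here.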
Example:
let unprocessed = await StoreLocations.delete([
{
storeId: "LatteLarrys",
mallId: "EastPointe",
buildingId: "F34",
cityId: "LosAngeles1"
},
{
storeId: "MochaJoes",
mallId: "EastPointe",
buildingId: "F35",
cityId: "LosAngeles1"
}
]).go({concurrent: 1}); // `concurrent` value is optional and defaults to `1`
Response Format:
{
unprocessed: Array<YOUR_COMPOSITE_ATTRIBUTES>
}
Equivalent DocClient Parameters:
{
"RequestItems": {
"StoreDirectory": [
{
"DeleteRequest": {
"Key": {
"pk": "$mallstoredirectory#cityid_losangeles1#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_f34#storeid_lattelarrys"
}
}
},
{
"DeleteRequest": {
"Key": {
"pk": "$mallstoredirectory#cityid_losangeles1#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_f35#storeid_mochajoes"
}
}
}
]
}
}
Elements of the unprocessed
array are unlike results received from a query. Instead of containing all the attributes of a record, an unprocessed record only includes the composite attributes defined in the Table Index. This is in keeping with DynamoDB's practice of returning only Keys in the case of unprocessed records. For convenience, ElectroDB will return these keys as composite attributes, but you can pass the query option {unprocessed: "raw"} to override this behavior and return the Keys as they came from DynamoDB.
Provide all required Attributes as defined in the model to create a new record. ElectroDB will enforce any defined validations, defaults, casting, and field aliasing. A Put operation will trigger the default and set attribute callbacks when writing to DynamoDB. By default, after performing a put() or create() operation, ElectroDB will format and return the record through the same process as a Get/Query. This process will invoke the get callback on all included attributes. If this behaviour is not desired, use the Query Option response: "none" to return a null value.
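The default/set/get lifecycle described above can be sketched as follows. This is a simplified illustration, not ElectroDB internals; the rent and discount definitions here are hypothetical stand-ins:

```javascript
// Simplified illustration of the attribute lifecycle on a put():
// `default` fills omitted values, `set` runs before the write, and `get`
// runs when the written item is formatted for the response.
function simulatePut(attributeDefs, input) {
  const written = {};
  for (const [name, def] of Object.entries(attributeDefs)) {
    let value = input[name] !== undefined ? input[name] : def.default;
    if (def.set) value = def.set(value); // applied before writing
    written[name] = value;
  }
  const response = {};
  for (const [name, def] of Object.entries(attributeDefs)) {
    // formatting the response invokes `get` on every included attribute
    response[name] = def.get ? def.get(written[name]) : written[name];
  }
  return response;
}

const attributeDefs = {
  rent: { set: (v) => v.trim() }, // hypothetical normalization on write
  discount: { default: "0.00" },  // applied because it was omitted
};

simulatePut(attributeDefs, { rent: " 4500.00 " });
// -> { rent: "4500.00", discount: "0.00" }
```

Using the Query Option response: "none" would skip the response-formatting pass, which is why the get callbacks are not invoked in that case.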
Note: This example includes an optional conditional expression
Example:
await StoreLocations
.put({
cityId: "Atlanta1",
storeId: "LatteLarrys",
mallId: "EastPointe",
buildingId: "BuildingA1",
unitId: "B47",
category: "food/coffee",
leaseEndDate: "2020-03-22",
rent: "4500.00"
})
.where((attr, op) => op.eq(attr.rent, "4500.00"))
.go()
Response Format:
{
data: { YOUR_SCHEMA }
}
Equivalent DocClient Parameters:
{
"Item": {
"cityId": "Atlanta1",
"mallId": "EastPointe",
"storeId": "LatteLarrys",
"buildingId": "BuildingA1",
"unitId": "B47",
"category": "food/coffee",
"leaseEndDate": "2020-03-22",
"rent": "4500.00",
"discount": "0.00",
"pk": "$mallstoredirectory#cityid_atlanta1#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_buildinga1#storeid_lattelarrys",
"gis1pk": "$mallstoredirectory#mallid_eastpointe",
"gsi1sk": "$mallstore_1#buildingid_buildinga1#unitid_b47",
"gis2pk": "$mallstoredirectory#storeid_lattelarrys",
"gsi2sk": "$mallstore_1#leaseenddate_2020-03-22",
"__edb_e__": "MallStore",
"__edb_v__": "1"
},
"TableName": "StoreDirectory",
"ConditionExpression": "#rent = :rent_w1",
"ExpressionAttributeNames": {
"#rent": "rent"
},
"ExpressionAttributeValues": {
":rent_w1": "4500.00"
}
}
Provide all required Attributes as defined in the model to create records as an array to .put(). ElectroDB will enforce any defined validations, defaults, casting, and field aliasing. Another convenience ElectroDB provides is accepting BatchWrite arrays larger than the 25 record limit. This is achieved by making multiple, "parallel", requests to DynamoDB for batches of 25 records at a time. A failure with any of these requests will cause the query to throw, so be mindful of your table's configured throughput.
NOTE: Performing a Batch Put will return an array of "unprocessed" records. An empty array signifies all records were processed. If you want the raw DynamoDB response you can always use the option {raw: true}, more detail found here: Query Options. Additionally, when performing a BatchWrite the .params() method will return an array of parameters, rather than just the parameters for one docClient query. This is because ElectroDB supports BatchWrite queries larger than the docClient's limit of 25 records.
If the number of records you are requesting is above the BatchWrite threshold of 25 records, ElectroDB will make multiple requests to DynamoDB and return the results in a single array. By default, ElectroDB will make these requests in series, one after another. If you are confident your table can handle the throughput, you can use the Query Option concurrent. This value can be set to any number greater than zero, and will execute that number of requests simultaneously.
For example, take 75 records (50 records over the DynamoDB maximum):
With the default concurrent value of 1, ElectroDB will execute a BatchWrite request of 25 records, then after that request has responded, make another BatchWrite request for 25 records, and then another.
If you set the Query Option concurrent to 2, ElectroDB will execute a BatchWrite request of 25 records and another BatchWrite request for 25 records without waiting for the first request to finish. After those two have finished, it will execute another BatchWrite request for 25 records.
It is important to consider your Table's throughput when setting this value.
Example:
let unprocessed = await StoreLocations.put([
{
cityId: "LosAngeles1",
storeId: "LatteLarrys",
mallId: "EastPointe",
buildingId: "F34",
unitId: "a1",
category: "food/coffee",
leaseEndDate: "2022-03-22",
rent: "4500.00"
},
{
cityId: "LosAngeles1",
storeId: "MochaJoes",
mallId: "EastPointe",
buildingId: "F35",
unitId: "a2",
category: "food/coffee",
leaseEndDate: "2021-01-22",
rent: "1500.00"
}
]).go({concurrent: 1}); // `concurrent` value is optional and defaults to `1`
Response Format:
{
unprocessed: Array<YOUR_COMPOSITE_ATTRIBUTES>
}
Equivalent DocClient Parameters:
{
"RequestItems": {
"StoreDirectory": [
{
"PutRequest": {
"Item": {
"cityId": "LosAngeles1",
"mallId": "EastPointe",
"storeId": "LatteLarrys",
"buildingId": "F34",
"unitId": "a1",
"category": "food/coffee",
"leaseEndDate": "2022-03-22",
"rent": "4500.00",
"discount": "0.00",
"pk": "$mallstoredirectory#cityid_losangeles1#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_f34#storeid_lattelarrys",
"gis1pk": "$mallstoredirectory#mallid_eastpointe",
"gsi1sk": "$mallstore_1#buildingid_f34#unitid_a1",
"gis2pk": "$mallstoredirectory#storeid_lattelarrys",
"gsi2sk": "$mallstore_1#leaseenddate_2022-03-22",
"__edb_e__": "MallStore",
"__edb_v__": "1"
}
}
},
{
"PutRequest": {
"Item": {
"cityId": "LosAngeles1",
"mallId": "EastPointe",
"storeId": "MochaJoes",
"buildingId": "F35",
"unitId": "a2",
"category": "food/coffee",
"leaseEndDate": "2021-01-22",
"rent": "1500.00",
"discount": "0.00",
"pk": "$mallstoredirectory#cityid_losangeles1#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_f35#storeid_mochajoes",
"gis1pk": "$mallstoredirectory#mallid_eastpointe",
"gsi1sk": "$mallstore_1#buildingid_f35#unitid_a2",
"gis2pk": "$mallstoredirectory#storeid_mochajoes",
"gsi2sk": "$mallstore_1#leaseenddate_2021-01-22",
"__edb_e__": "MallStore",
"__edb_v__": "1"
}
}
}
]
}
}
Elements of the unprocessed
array are unlike results received from a query. Instead of containing all the attributes of a record, an unprocessed record only includes the composite attributes defined in the Table Index. This is in keeping with DynamoDB's practice of returning only Keys in the case of unprocessed records. For convenience, ElectroDB will return these keys as composite attributes, but you can pass the query option {unprocessed: "raw"} to override this behavior and return the Keys as they came from DynamoDB.
In DynamoDB, put operations will overwrite an existing record by default. In ElectroDB, the create method dynamically applies an attribute_not_exists() condition to ensure records are only "created", and never overwritten, when inserting new records into the table.
A Put operation will trigger the default and set attribute callbacks when writing to DynamoDB. By default, after writing to DynamoDB, ElectroDB will format and return the record through the same process as a Get/Query, which will invoke the get callback on all included attributes. If this behaviour is not desired, use the Query Option response: "none" to return a null value.
Example:
await StoreLocations
.create({
cityId: "Atlanta1",
storeId: "LatteLarrys",
mallId: "EastPointe",
buildingId: "BuildingA1",
unitId: "B47",
category: "food/coffee",
leaseEndDate: "2020-03-22",
rent: "4500.00"
})
.where((attr, op) => op.eq(attr.rent, "4500.00"))
.go()
Response Format:
{
data: { YOUR_SCHEMA }
}
Equivalent DocClient Parameters:
{
"Item": {
"cityId": "Atlanta1",
"mallId": "EastPointe",
"storeId": "LatteLarrys",
"buildingId": "BuildingA1",
"unitId": "B47",
"category": "food/coffee",
"leaseEndDate": "2020-03-22",
"rent": "4500.00",
"discount": "0.00",
"pk": "$mallstoredirectory#cityid_atlanta1#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_buildinga1#storeid_lattelarrys",
"gis1pk": "$mallstoredirectory#mallid_eastpointe",
"gsi1sk": "$mallstore_1#buildingid_buildinga1#unitid_b47",
"gis2pk": "$mallstoredirectory#storeid_lattelarrys",
"gsi2sk": "$mallstore_1#leaseenddate_2020-03-22",
"__edb_e__": "MallStore",
"__edb_v__": "1"
},
"TableName": "StoreDirectory",
"ConditionExpression": "attribute_not_exists(pk) AND attribute_not_exists(sk) AND #rent = :rent_w1",
"ExpressionAttributeNames": {
"#rent": "rent"
},
"ExpressionAttributeValues": {
":rent_w1": "4500.00"
}
}
Update Methods are available after the method update() is called, and allow you to alter an item stored in DynamoDB. Each Update Method corresponds to a DynamoDB UpdateExpression clause.
NOTE: ElectroDB will validate an attribute's type when performing an operation (e.g. that the subtract() method can only be performed on numbers), but will defer checking the logical validity of your update operation to the DocumentClient. For example, if your query performs multiple mutations on a single attribute, or performs other illogical operations given the nature of an item/attribute, ElectroDB will not validate these edge cases and will simply pass back any error(s) thrown by the Document Client.
Update/Patch Method | Attribute Types | Parameter |
---|---|---|
set | number string boolean enum map list set any | object |
remove | number string boolean enum map list set any | array |
add | number any set | object |
subtract | number | object |
append | any list | object |
delete | any set | object |
data | * | callback |
The methods above can be used (and reused) in a chain to form update parameters, finishing with a .params() or .go() terminal method. If your application requires the update method to return values related to the update (e.g. via the ReturnValues DocumentClient parameter), you can use the Query Option {response: "none" | "all_old" | "updated_old" | "all_new" | "updated_new"} with the value that matches your need. By default, the Update operation returns an empty object when using .go().
NOTE: The DynamoDB method update will create an item if one does not exist. Because updates have reduced attribute validations when compared to put, the practical ramification is that an update can create a record without all the attributes you'd expect from a newly created item. Depending on your project's unique needs, the methods patch or upsert may be a better fit.
ElectroDB adds some constraints to update calls to prevent the accidental loss of data. If an access pattern is defined with multiple composite attributes, ElectroDB ensures those attributes cannot be updated individually. If an attribute involved in an index composite is updated, the index key must also be updated; and if the whole key cannot be formed from the attributes supplied to the update, ElectroDB cannot create the composite key without overwriting the old data.
This example shows why a partial update to a composite key is prevented by ElectroDB:
{
"index": "my-gsi",
"pk": {
"field": "gsi1pk",
"composite": ["attr1"]
},
"sk": {
"field": "gsi1sk",
"composite": ["attr2", "attr3"]
}
}
The above secondary index definition would generate the following index keys:
{
"gsi1pk": "$service#attr1_value1",
"gsi1sk": "$entity_version#attr2_value2#attr3_value6"
}
If a user attempts to update the attribute attr2, then ElectroDB has no way of knowing the value of the attribute attr3, or whether forming the composite key without it would overwrite its value. The same problem exists if a user were to update attr3: ElectroDB cannot update the key without knowing each composite attribute's value.
In the event that a secondary index includes composite values from the table's primary index, ElectroDB will draw from the values supplied for the update key to address index gaps in the secondary index. For example:
For the defined indexes:
{
"accessPattern1": {
"pk": {
"field": "pk",
"composite": ["attr1"]
},
"sk": {
"field": "sk",
"composite": ["attr2"]
}
},
"accessPattern2": {
"index": "my-gsi",
"pk": {
"field": "gsi1pk",
"composite": ["attr3"]
},
"sk": {
"field": "gsi1sk",
"composite": ["attr2", "attr4"]
}
}
}
A user could update attr4
alone because ElectroDB is able to leverage the value for attr2
from values supplied to the update()
method:
Example:
entity.update({ attr1: "value1", attr2: "value2" })
.set({ attr4: "value4" })
.go();
Response Format:
{
data: { YOUR_SCHEMA }
}
Equivalent DocClient Parameters:
{
"UpdateExpression": "SET #attr4 = :attr4_u0, #gsi1sk = :gsi1sk_u0, #attr1 = :attr1_u0, #attr2 = :attr2_u0",
"ExpressionAttributeNames": {
"#attr4": "attr4",
"#gsi1sk": "gsi1sk",
"#attr1": "attr1",
"#attr2": "attr2"
},
"ExpressionAttributeValues": {
":attr4_u0": "value4",
// This index was successfully built
":gsi1sk_u0": "$update-edgecases_1#attr2_value2#attr4_value4",
":attr1_u0": "value1",
":attr2_u0": "value2"
},
"TableName": "YOUR_TABLE_NAME",
"Key": {
"pk": "$service#attr1_value1",
"sk": "$entity_version#attr2_value2"
}
}
NOTE: Included in the update are all attributes from the table's primary index. These values are automatically included on all updates in the event an update results in an insert.
The set() method will accept all attributes defined on the model. Provide a value to apply or replace on the item.
Example:
await StoreLocations
.update({cityId, mallId, storeId, buildingId})
.set({category: "food/meal"})
.where((attr, op) => op.eq(attr.category, "food/coffee"))
.go()
Response Format:
{
data: { YOUR_SCHEMA }
}
Equivalent DocClient Parameters:
{
"UpdateExpression": "SET #category = :category",
"ExpressionAttributeNames": {
"#category": "category"
},
"ExpressionAttributeValues": {
":category_w1": "food/coffee",
":category": "food/meal"
},
"TableName": "StoreDirectory",
"Key": {
"pk": "$mallstoredirectory#cityid_atlanta1#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_f34#storeid_lattelarrys"
},
"ConditionExpression": "#category = :category_w1"
}
The remove() method will accept all attributes defined on the model. Unlike most other update methods, the remove() method accepts an array containing the names of the attributes that should be removed.
NOTE that the attribute property required functions as a sort of NOT NULL flag. Because of this, if a property is defined as required: true, it will not be possible to remove that property individually. If the property is on a "map" attribute, and the "map" itself is not required, then the "map" can be removed.
Example:
await StoreLocations
.update({cityId, mallId, storeId, buildingId})
.remove(["category"])
.where((attr, op) => op.eq(attr.category, "food/coffee"))
.go()
Response Format:
{
data: { YOUR_SCHEMA }
}
Equivalent DocClient Parameters:
{
"UpdateExpression": "REMOVE #category",
"ExpressionAttributeNames": {
"#category": "category"
},
"ExpressionAttributeValues": {
":category0": "food/coffee"
},
"TableName": "StoreDirectory",
"Key": {
"pk": "$mallstoredirectory#cityid_atlanta#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_a34#storeid_lattelarrys"
},
"ConditionExpression": "#category = :category0"
}
The add()
method will accept attributes with type number
, set
, and any
defined on the model. In the case of a number
attribute, provide a number to add to the existing attribute's value on the item.
If the attribute is defined as any
, the syntax compatible with the attribute type set
will be used. For this reason, do not use the attribute type any
to represent a number
.
Example:
const newTenant = client.createSet("larry");
await StoreLocations
.update({cityId, mallId, storeId, buildingId})
.add({
rent: 100, // "number" attribute
tenant: ["larry"] // "set" attribute
})
.where((attr, op) => op.eq(attr.category, "food/coffee"))
.go()
Response Format:
{
data: { YOUR_SCHEMA }
}
Equivalent DocClient Parameters:
{
"UpdateExpression": "SET #rent = #rent + :rent0 ADD #tenant :tenant0",
"ExpressionAttributeNames": {
"#category": "category",
"#rent": "rent",
"#tenant": "tenant"
},
"ExpressionAttributeValues": {
":category0": "food/coffee",
":rent0": 100,
":tenant0": ["larry"]
},
"TableName": "StoreDirectory",
"Key": {
"pk": "$mallstoredirectory#cityid_atlanta#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_a34#storeid_lattelarrys"
},
"ConditionExpression": "#category = :category0"
}
The subtract()
method will accept attributes with type number
. In the case of a number
attribute, provide a number to subtract from the existing attribute's value on the item.
Example:
await StoreLocations
.update({cityId, mallId, storeId, buildingId})
.subtract({deposit: 500})
.where((attr, op) => op.eq(attr.category, "food/coffee"))
.go()
Response Format:
{
data: { YOUR_SCHEMA }
}
Equivalent DocClient Parameters:
{
"UpdateExpression": "SET #deposit = #deposit - :deposit0",
"ExpressionAttributeNames": {
"#category": "category",
"#deposit": "deposit"
},
"ExpressionAttributeValues": {
":category0": "food/coffee",
":deposit0": 500
},
"TableName": "StoreDirectory",
"Key": {
"pk": "$mallstoredirectory#cityid_atlanta#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_a34#storeid_lattelarrys"
},
"ConditionExpression": "#category = :category0"
}
The append() method will accept attributes with type any. This is a convenience method for working with DynamoDB lists, and is notably different from set because it will add an element to an existing array rather than overwrite the existing value.
Example:
await StoreLocations
.update({cityId, mallId, storeId, buildingId})
.append({
rentalAgreement: [{
type: "amendment",
detail: "no soup for you"
}]
})
.where((attr, op) => op.eq(attr.category, "food/coffee"))
.go()
Response Format:
{
data: { YOUR_SCHEMA }
}
Equivalent DocClient Parameters:
{
"UpdateExpression": "SET #rentalAgreement = list_append(#rentalAgreement, :rentalAgreement0)",
"ExpressionAttributeNames": {
"#category": "category",
"#rentalAgreement": "rentalAgreement"
},
"ExpressionAttributeValues": {
":category0": "food/coffee",
":rentalAgreement0": [
{
"type": "amendment",
"detail": "no soup for you"
}
]
},
"TableName": "StoreDirectory",
"Key": {
"pk": "$mallstoredirectory#cityid_atlanta#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_a34#storeid_lattelarrys"
},
"ConditionExpression": "#category = :category0"
}
The delete() method will accept attributes with type any or set. This operation removes items from the contact attribute, defined as a set attribute.
Example:
await StoreLocations
.update({cityId, mallId, storeId, buildingId})
.delete({contact: ['555-345-2222']})
.where((attr, op) => op.eq(attr.category, "food/coffee"))
.go()
Response Format:
{
data: { YOUR_SCHEMA }
}
Equivalent DocClient Parameters:
{
"UpdateExpression": "DELETE #contact :contact0",
"ExpressionAttributeNames": {
"#category": "category",
"#contact": "contact"
},
"ExpressionAttributeValues": {
":category0": "food/coffee",
":contact0": "555-345-2222"
},
"TableName": "StoreDirectory",
"Key": {
"pk": "$mallstoredirectory#cityid_atlanta#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_a34#storeid_lattelarrys"
},
"ConditionExpression": "#category = :category0"
}
The data() method allows for a different approach to updating your item, by accepting a callback with a similar argument signature to the where clause.
The callback provided to the data
method is injected with an attributes
object as the first parameter, and an operations
object as the second parameter. All operations accept an attribute from the attributes
object as a first parameter, and optionally accept a second value
parameter.
As mentioned above, this method is functionally similar to the where clause with one exception: the callback provided to data() is not expected to return a value. When you invoke an injected operation method, the side effects are applied directly to the update expression you are building.
operation | example | result | description |
---|---|---|---|
set | set(category, value) | #category = :category0 | Add or overwrite existing value |
add | add(tenant, name) | #tenant :tenant1 | Add value to existing set attribute (used when provided attribute is of type any or set ) |
add | add(rent, amount) | #rent = #rent + :rent0 | Mathematically add given number to existing number on record |
subtract | subtract(deposit, amount) | #deposit = #deposit - :deposit0 | Mathematically subtract given number from existing number on record |
remove | remove(petFee) | #petFee | Remove attribute/property from item |
append | append(rentalAgreement, amendment) | #rentalAgreement = list_append(#rentalAgreement, :rentalAgreement0) | Add element to existing list attribute |
delete | delete(tenant, name) | #tenant :tenant1 | Remove item from existing set attribute |
del | del(tenant, name) | #tenant :tenant1 | Alias for delete operation |
name | name(rent) | #rent | Reference another attribute's name; can be passed to other operations to leverage an existing attribute's value when calculating new values |
value | value(rent, amount) | :rent1 | Create a reference to a particular value; can be passed to other operations to leverage existing values when calculating new values |
ifNotExists | ifNotExists(rent, amount) | #rent = if_not_exists(#rent, :rent0) | Update a property's value only if that property doesn't yet exist on the record |
NOTE: Usage of the name and value operations allows for some escape hatching in the case that a custom operation needs to be expressed. When used, however, ElectroDB loses the context necessary to validate the expression created by the user. In practical terms, this means the validation function/regex on the impacted attribute will not be called.
Example:
await StoreLocations
.update({cityId, mallId, storeId, buildingId})
.data((a, o) => {
const newTenant = o.value(a.tenant, "larry");
o.set(a.category, "food/meal"); // electrodb "enum" -> dynamodb "string"
o.add(a.tenant, newTenant); // electrodb "set" -> dynamodb "set"
o.add(a.rent, 100); // electrodb "number" -> dynamodb "number"
o.subtract(a.deposit, 200); // electrodb "number" -> dynamodb "number"
o.remove(a.discount); // electrodb "number" -> dynamodb "number"
o.append(a.rentalAgreement, [{ // electrodb "list" -> dynamodb "list"
type: "amendment", // electrodb "map" -> dynamodb "map"
detail: "no soup for you"
}]);
o.delete(a.tags, ['coffee']); // electrodb "set" -> dynamodb "set"
o.del(a.contact, '555-345-2222'); // electrodb "string" -> dynamodb "string"
o.add(a.totalFees, o.name(a.petFee)); // electrodb "number" -> dynamodb "number"
o.add(a.leaseHolders, newTenant); // electrodb "set" -> dynamodb "set"
})
.where((attr, op) => op.eq(attr.category, "food/coffee"))
.go()
Response Format:
{
data: { YOUR_SCHEMA }
}
Equivalent DocClient Parameters:
{
"UpdateExpression": "SET #category = :category_u0, #deposit = #deposit - :deposit_u0, #rentalAgreement = list_append(#rentalAgreement, :rentalAgreement_u0), #totalFees = #totalFees + #petFee, #cityId = :cityId_u0, #mallId = :mallId_u0, #buildingId = :buildingId_u0, #storeId = :storeId_u0, #__edb_e__ = :__edb_e___u0, #__edb_v__ = :__edb_v___u0 REMOVE #discount ADD #tenant :tenant_u0, #rent :rent_u0, #leaseHolders :tenant_u0 DELETE #tags :tags_u0, #contact :contact_u0",
"ExpressionAttributeNames": {
"#category": "category",
"#tenant": "tenant",
"#rent": "rent",
"#deposit": "deposit",
"#discount": "discount",
"#rentalAgreement": "rentalAgreement",
"#tags": "tags",
"#contact": "contact",
"#totalFees": "totalFees",
"#petFee": "petFee",
"#leaseHolders": "leaseHolders",
"#buildingId": "buildingId",
"#cityId": "cityId",
"#mallId": "mallId",
"#storeId": "storeId",
"#__edb_e__": "__edb_e__", "#__edb_v__": "__edb_v__",
},
"ExpressionAttributeValues": {
":buildingId_u0": "A34",
":cityId_u0": "portland",
":category0": "food/coffee",
":category_u0": "food/meal",
":tenant_u0": ["larry"],
":rent_u0": 100,
":deposit_u0": 200,
":rentalAgreement_u0": [{
"type": "amendment",
"detail": "no soup for you"
}],
":tags_u0": ["coffee"],
":contact_u0": ["555-345-2222"],
":mallId_u0": "EastPointe",
":storeId_u0": "LatteLarrys",
":__edb_e___u0": "MallStore", ":__edb_v___u0": "1",
},
"TableName": "electro",
"Key": {
"pk": "$mallstoredirectory#cityid_portland#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_a34#storeid_lattelarrys"
},
"ConditionExpression": "#category = :category0"
}
ElectroDB supports updating DynamoDB's complex types (list
, map
, set
) with all of its Update Methods.
When using the chain methods set, add, subtract, remove, append, and delete, you can access map
properties, list
elements, and set
items by supplying the JSON path of the property as the name of the attribute.
The data() method also allows for working with complex types. Unlike the update chain methods, the data() method ensures type safety when using TypeScript. When using the injected attributes object, simply drill into the attribute itself to apply your update directly to the desired property.
The following are examples of how to update complex attributes, using both chain methods and the data() method.
Example 1: Set property on a map
attribute
Specifying a property on a map
attribute is expressed with dot notation.
// via Chain Method
await StoreLocations
.update({cityId, mallId, storeId, buildingId})
.set({'mapAttribute.mapProperty': "value"})
.go();
// via Data Method
await StoreLocations
.update({cityId, mallId, storeId, buildingId})
.data(({mapAttribute}, {set}) => set(mapAttribute.mapProperty, "value"))
.go()
Example 2: Removing an element from a list
attribute
Specifying an index on a list
attribute is expressed with square brackets containing the element's index number.
// via Chain Method
await StoreLocations
.update({cityId, mallId, storeId, buildingId})
.remove(['listAttribute[0]'])
.go();
// via Data Method
await StoreLocations
.update({cityId, mallId, storeId, buildingId})
.data(({listAttribute}, {remove}) => remove(listAttribute[0]))
.go();
Example 3: Adding an item to a set
attribute, on a map
attribute, that is an element of a list
attribute
All other complex structures are simply variations on the above two examples.
// Set values must use the DocumentClient to create a `set`
const newSetValue = StoreLocations.client.createSet("setItemValue");
// via Data Method
await StoreLocations
.update({cityId, mallId, storeId, buildingId})
.add({'listAttribute[1].setAttribute': newSetValue})
.go();
await StoreLocations
.update({cityId, mallId, storeId, buildingId})
.data(({listAttribute}, {add}) => {
add(listAttribute[1].setAttribute, newSetValue)
})
.go();
Example, using the patch() method (patch behaves like update, but includes a condition that the item already exists):
await entity.patch({ attr1: "value1", attr2: "value2" })
.set({ attr4: "value4" })
.go();
Response Format:
{
data: { YOUR_SCHEMA }
}
Equivalent DocClient Parameters:
{
"UpdateExpression": "SET #attr4 = :attr4_u0, #gsi1sk = :gsi1sk_u0, #attr1 = :attr1_u0, #attr2 = :attr2_u0",
"ExpressionAttributeNames": {
"#attr4": "attr4",
"#gsi1sk": "gsi1sk",
"#attr1": "attr1",
"#attr2": "attr2"
},
"ExpressionAttributeValues": {
":attr4_u0": "value4",
// This index was successfully built
":gsi1sk_u0": "$update-edgecases_1#attr2_value2#attr4_value4",
":attr1_u0": "value1",
":attr2_u0": "value2"
},
"TableName": "YOUR_TABLE_NAME",
"Key": {
"pk": "$service#attr1_value1",
"sk": "$entity_version#attr2_value2"
},
"ConditionExpression": "attribute_exists(pk) AND attribute_exists(sk)"
}
The upsert method is another ElectroDB exclusive method. Upsert is similar to the put method in that it will create a record if one does not exist. Unlike the put method, however, upsert performs an update if that record already exists.
When scanning for rows, you can use filters the same as you would with any query. For more information on filters, see the Where section.
Note: Scan
functionality will be scoped to your Entity. This means your results will only include records that match the Entity defined in the model.
Example:
await StoreLocations.scan
.where(({category}, {eq}) => `
${eq(category, "food/coffee")} OR ${eq(category, "spite store")}
`)
.where(({leaseEndDate}, {between}) => `
${between(leaseEndDate, "2020-03", "2020-04")}
`)
.go()
Response Format:
{
data: Array<YOUR_SCHEMA>,
cursor: string | undefined
}
Equivalent DocClient Parameters:
{
"TableName": "StoreDirectory",
"ExpressionAttributeNames": {
"#category": "category",
"#leaseEndDate": "leaseEndDate",
"#pk": "pk",
"#sk": "sk",
"#__edb_e__": "__edb_e__",
"#__edb_v__": "__edb_v__"
},
"ExpressionAttributeValues": {
":category_w1": "food/coffee",
":category_w2": "spite store",
":leaseEndDate_w1": "2020-03",
":leaseEndDate_w2": "2020-04",
":pk": "$mallstoredirectory#cityid_",
":sk": "$mallstore_1#buildingid_",
":__edb_e__": "MallStore",
":__edb_v__": "1"
},
"FilterExpression": "begins_with(#pk, :pk) AND #__edb_e__ = :__edb_e__ AND #__edb_v__ = :__edb_v__ AND begins_with(#sk, :sk) AND (#category = :category_w1 OR #category = :category_w2) AND (#leaseEndDate between :leaseEndDate_w1 and :leaseEndDate_w2)"
}
A convenience method for delete
with ConditionExpression that the item being deleted exists. Provide all Table Index composite attributes in an object to the remove
method to remove the record.
await StoreLocations.remove({
storeId: "LatteLarrys",
mallId: "EastPointe",
buildingId: "F34",
cityId: "Atlanta1"
}).go();
Response Format:
{
data: { YOUR_SCHEMA }
}
Equivalent DocClient Parameters:
{
"Key": {
"pk": "$mallstoredirectory#cityid_atlanta1#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_f34#storeid_lattelarrys"
},
"TableName": "YOUR_TABLE_NAME",
"ConditionExpression": "attribute_exists(pk) AND attribute_exists(sk)"
}
In DynamoDB, update operations will by default insert a record if the record being updated does not exist. In ElectroDB, the patch method will dynamically apply the attribute_exists() condition to ensure records are only "patched" and not inserted when updating.
For more detail on how to use the patch() method, see the section Update Record for all the transferable requirements and capabilities available to patch().
ElectroDB queries use DynamoDB's query method to find records based on your table's indexes. To read more about queries, check out the section Building Queries.
NOTE: To limit the number of items ElectroDB will retrieve, read more about the Query Options pages and limit, or use the ElectroDB Pagination API for fine-grain pagination support.
Provide all Table Index composite attributes in an object to the get
method. In the event no record is found, a value of null
will be returned.
NOTE: As part of ElectroDB's roll out of 1.0.0, a breaking change was made to the
get
method. Prior to 1.0.0, theget
method would return an empty object if a record was not found. This has been changed to now return a value ofnull
in this case.
Example:
let results = await StoreLocations.get({
storeId: "LatteLarrys",
mallId: "EastPointe",
buildingId: "F34",
cityId: "Atlanta1"
}).go();
Response Format:
{
data: { YOUR_SCHEMA } | null
}
Equivalent DocClient Parameters:
{
"Key": {
"pk": "$mallstoredirectory#cityid_atlanta1#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_f34#storeid_lattelarrys"
},
"TableName": "YOUR_TABLE_NAME"
}
Provide all Table Index composite attributes in an array of objects to the get
method to perform a BatchGet query.
NOTE: When performing a BatchGet, the .params() method will return an array of parameters, rather than just the parameters for one docClient query. This is because ElectroDB BatchGet allows queries larger than the docClient's limit of 100 records.
If the number of records you are requesting is above the BatchGet threshold of 100 records, ElectroDB will make multiple requests to DynamoDB and return the results in a single array. By default, ElectroDB will make these requests in series, one after another. If you are confident your table can handle the throughput, you can use the Query Option concurrent
. This value can be set to any number greater than zero, and will execute that number of requests simultaneously.
For example, 150 records (50 records over the DynamoDB maximum):
The default value of concurrent
will be 1
. ElectroDB will execute a BatchGet request of 100, then after that request has responded, make another BatchGet request for 50 records.
If you set the Query Option concurrent
to 2
, ElectroDB will execute a BatchGet request of 100 records, and another BatchGet request for 50 records without waiting for the first request to finish.
It is important to consider your Table's throughput when setting this value.
Example:
let [results, unprocessed] = await StoreLocations.get([
{
storeId: "LatteLarrys",
mallId: "EastPointe",
buildingId: "F34",
cityId: "Atlanta1"
},
{
storeId: "MochaJoes",
mallId: "WestEnd",
buildingId: "A21",
cityId: "Madison2"
}
]).go({concurrent: 1}); // `concurrent` value is optional and defaults to `1`
Response Format:
{
data: Array<YOUR_SCHEMA>,
unprocessed: Array<YOUR_COMPOSITE_ATTRIBUTES>
}
Equivalent DocClient Parameters:
{
"RequestItems": {
"YOUR_TABLE_NAME": {
"Keys": [
{
"pk": "$mallstoredirectory#cityid_atlanta1#mallid_eastpointe",
"sk": "$mallstore_1#buildingid_f34#storeid_lattelarrys"
},
{
"pk": "$mallstoredirectory#cityid_madison2#mallid_westend",
"sk": "$mallstore_1#buildingid_a21#storeid_mochajoes"
}
]
}
}
}
The two-dimensional array returned by batch get is most easily used when deconstructed into two variables, in the above case: results and unprocessed.
The results array contains records that were returned by DynamoDB as Responses on the BatchGet query. They will appear in the same format as other ElectroDB queries.
NOTE: By default ElectroDB will return items without concern for order. If the order returned by ElectroDB must match the order provided, the query option
preserveBatchOrder
can be used. When enabled, ElectroDB will ensure the order returned by a batchGet will be the same as the order provided. When enabled, if a record is returned from DynamoDB as "unprocessed", ElectroDB will return a null value at that index.
Elements of the unprocessed array are unlike results received from a query. Instead of containing all the attributes of a record, an unprocessed record only includes the composite attributes defined in the Table Index. This is in keeping with DynamoDB's practice of returning only Keys in the case of unprocessed records. For convenience, ElectroDB will return these keys as composite attributes, but you can pass the query option {unprocessed: "raw"} to override this behavior and return the Keys as they came from DynamoDB.
DynamoDB offers three methods to query records: get
, query
, and scan
. In ElectroDB, there is a fourth type: find
. Unlike get
and query
, the find
method does not require you to provide keys, but under the covers it will leverage the attributes provided to choose the best index to query on. Provide the find
method with all properties known to match a record and ElectroDB will generate the most performant query it can to locate the results. This can be helpful with highly dynamic querying needs. If an index cannot be satisfied with the attributes provided, scan
will be used as a last resort.
NOTE: The Find method is similar to the Match method with one exception: The attributes you supply directly to the
.find()
method will only be used to identify and fulfill your index access patterns. Any values supplied that do not contribute to a composite key will not be applied as query filters. Furthermore, if the values you provide do not resolve to an index access pattern, then a table scan will be performed. Use the where() chain method to further filter beyond keys, or use Match for the convenience of automatic filtering based on the values given directly to that method.
The Find method is useful when the index chosen does not matter or is not known. If your secondary indexes do not contain all attributes then this method might not be right for you. The mechanism that picks the best index for a given payload is subject to improvement and change without triggering a breaking change release version.
Example:
await StoreLocations.find({
mallId: "EastPointe",
buildingId: "BuildingA1",
}).go()
Response Format:
{
data: Array<YOUR_SCHEMA>,
cursor: string | undefined
}
Equivalent DocClient Parameters:
{
"KeyConditionExpression": "#pk = :pk and begins_with(#sk1, :sk1)",
"TableName": "StoreDirectory",
"ExpressionAttributeNames": {
"#mallId": "mallId",
"#buildingId": "buildingId",
"#pk": "gis1pk",
"#sk1": "gsi1sk"
},
"ExpressionAttributeValues": {
":mallId1": "EastPointe",
":buildingId1": "BuildingA1",
":pk": "$mallstoredirectory#mallid_eastpointe",
":sk1": "$mallstore_1#buildingid_buildinga1#unitid_"
},
"IndexName": "gis1pk-gsi1sk-index",
}
Match is a convenience method based off of ElectroDB's find method. Similar to Find, Match does not require you to provide keys, but under the covers it will leverage the attributes provided to choose the best index to query on.
NOTE: The Match method is useful when the index chosen does not matter or is not known. If your secondary indexes do not contain all attributes then this method might not be right for you. The mechanism that picks the best index for a given payload is subject to improvement and change without triggering a breaking change release version.
Match differs from Find in that it will also include all supplied values into a query filter.
Example:
await StoreLocations.match({
mallId: "EastPointe",
buildingId: "BuildingA1",
leaseEndDate: "2020-03-22",
rent: "1500.00"
}).go()
Response Format:
{
data: Array<YOUR_SCHEMA>,
cursor: string | undefined
}
Equivalent DocClient Parameters:
{
"KeyConditionExpression": "#pk = :pk and begins_with(#sk1, :sk1)",
"TableName": "StoreDirectory",
"ExpressionAttributeNames": {
"#mallId": "mallId",
"#buildingId": "buildingId",
"#leaseEndDate": "leaseEndDate",
"#rent": "rent",
"#pk": "gis1pk",
"#sk1": "gsi1sk"
},
"ExpressionAttributeValues": {
":mallId1": "EastPointe",
":buildingId1": "BuildingA1",
":leaseEndDate1": "2020-03-22",
":rent1": "1500.00",
":pk": "$mallstoredirectory#mallid_eastpointe",
":sk1": "$mallstore_1#buildingid_buildinga1#unitid_"
},
"IndexName": "gis1pk-gsi1sk-index",
"FilterExpression": "#mallId = :mallId1 AND #buildingId = :buildingId1 AND #leaseEndDate = :leaseEndDate1 AND #rent = :rent1"
}
After invoking the Access Pattern with the required Partition Key Composite Attributes, you can now choose what Sort Key Composite Attributes are applicable to your query. Examine the table in Sort Key Operations for more information on the available operations on a Sort Key.
When you define your indexes in your model, you are defining the Access Patterns of your entity. The composite attributes you choose, and their order, ultimately define the finite set of index queries that can be made. The more you can leverage these index queries the better from both a cost and performance perspective.
Unlike Partition Keys, Sort Keys can be partially provided. We can leverage this to multiply our available access patterns and use the Sort Key Operations: begins
, between
, lt
, lte
, gt
, and gte
. These queries are more performant and cost-effective than filters. The costs associated with DynamoDB directly correlate to how effectively you leverage Sort Key Operations.
For a comprehensive and interactive guide to build queries please visit this runkit: https://runkit.com/tywalch/electrodb-building-queries.
One important consideration when using Sort Key Operations is when to use, and when not to use, "begins".
It is possible to partially supply Sort Key composite attributes. Sort Key attributes must be provided in the order they are defined, but it's possible to provide only a subset of the Sort Key Composite Attributes to ElectroDB. By default, when you supply a partial Sort Key in the Access Pattern method, ElectroDB will create a beginsWith
query. The difference between that and using .begins()
is that, with a .begins()
query, ElectroDB will not post-pend the next composite attribute's label onto the query.
The difference is nuanced and makes better sense with an example, but the rule of thumb is that data passed to the Access Pattern method should represent values you know strictly equal the value you want.
The following examples will use the following Access Pattern definition for units
:
{
"units": {
"index": "gis1pk-gsi1sk-index",
"pk": {
"field": "gis1pk",
"composite attributes": [
"mallId"
]
},
"sk": {
"field": "gsi1sk",
"composite attributes": [
"buildingId",
"unitId"
]
}
}
}
The names you have given to your indexes on your entity model/schema express themselves as "Access Pattern" methods on your Entity's query
object:
// Example #1, access pattern `units`
StoreLocations.query.units({mallId, buildingId}).go();
// -----------------------^^^^^^^^^^^^^^^^^^^^^^
Data passed to the Access Pattern method is considered to be full, known data. In the above example, we are saying we know the complete mallId and buildingId.
Alternatively, if you only know the start of a piece of data, use .begins():
// Example #2
StoreLocations.query.units({mallId}).begins({buildingId}).go();
// ---------------------------------^^^^^^^^^^^^^^^^^^^^^
Data passed to the .begins() method is considered to be partial data. In the second example, we are saying we know the mallId, but only know the beginning of buildingId.
For the above queries we see two different sort keys:
"$mallstore_1#buildingid_f34#unitid_"
"$mallstore_1#buildingid_f34"
The first example shows how ElectroDB post-pends the label of the next composite attribute (unitId
) on the Sort Key to ensure that buildings such as "f340"
are not included in the query. This is useful to prevent common issues with overloaded sort keys like accidental over-querying.
The second example allows you to make queries that do include buildings such as "f340"
or "f3409"
or "f340356346"
.
For these reasons it is important to consider that attributes passed to the Access Pattern method are considered to be full, known, data.
Collections allow you to query across Entities. They can be used on a Service instance.
const DynamoDB = require("aws-sdk/clients/dynamodb");
const { Entity, Service } = require("electrodb");
const table = "projectmanagement";
const client = new DynamoDB.DocumentClient();
const employees = new Entity({
model: {
entity: "employees",
version: "1",
service: "taskapp",
},
attributes: {
employeeId: {
type: "string"
},
organizationId: {
type: "string"
},
name: {
type: "string"
},
team: {
type: ["jupiter", "mercury", "saturn"]
}
},
indexes: {
staff: {
pk: {
field: "pk",
composite: ["organizationId"]
},
sk: {
field: "sk",
composite: ["employeeId"]
}
},
employee: {
collection: "assignments",
index: "gsi2",
pk: {
field: "gsi2pk",
composite: ["employeeId"],
},
sk: {
field: "gsi2sk",
composite: [],
},
}
}
}, { client, table })
const tasks = new Entity({
model: {
entity: "tasks",
version: "1",
service: "taskapp",
},
attributes: {
taskId: {
type: "string"
},
employeeId: {
type: "string"
},
projectId: {
type: "string"
},
title: {
type: "string"
},
body: {
type: "string"
}
},
indexes: {
project: {
pk: {
field: "pk",
composite: ["projectId"]
},
sk: {
field: "sk",
composite: ["taskId"]
}
},
assigned: {
collection: "assignments",
index: "gsi2",
pk: {
field: "gsi2pk",
composite: ["employeeId"],
},
sk: {
field: "gsi2sk",
composite: [],
},
}
}
}, { client, table });
const TaskApp = new Service({employees, tasks});
Available on your Service are two objects: entities and collections. Entities available on entities have the same capabilities as they would if created individually. When a Model is added to a Service with join, however, its Collections are automatically added and validated against the other Models joined to that Service. These Collections are available on collections.
TaskApp.collections.assignments({employeeId: "JExotic"}).params();
// Results
{
TableName: 'projectmanagement',
ExpressionAttributeNames: { '#pk': 'gsi2pk', '#sk1': 'gsi2sk' },
ExpressionAttributeValues: { ':pk': '$taskapp_1#employeeid_jexotic', ':sk1': '$assignments' },
KeyConditionExpression: '#pk = :pk and begins_with(#sk1, :sk1)',
IndexName: 'gsi2'
}
Collections do not have the same query functionality as an Entity, though they do allow for inline filters like an Entity. The attributes available on the filter object include all attributes across entities.
TaskApp.collections
.assignments({employeeId: "CBaskin"})
.filter((attributes) => `
${attributes.project.notExists()} OR ${attributes.project.contains("murder")}
`)
// Results
{
TableName: 'projectmanagement',
ExpressionAttributeNames: { '#project': 'project', '#pk': 'gsi2pk', '#sk1': 'gsi2sk' },
ExpressionAttributeValues: {
':project1': 'murder',
':pk': '$taskapp_1#employeeid_cbaskin',
':sk1': '$assignments'
},
KeyConditionExpression: '#pk = :pk and begins_with(#sk1, :sk1)',
IndexName: 'gsi2',
FilterExpression: '\n\t\tattribute_not_exists(#project) OR contains(#project, :project1)\n\t'
}
Lastly, all query chains end with either a .go(), .params(), or .page() method invocation. These terminal methods will either execute the query against DynamoDB (.go()) or return formatted parameters for use with the DynamoDB docClient (.params()).
Both .params()
and .go()
take a query configuration object which is detailed more in the section Query Options.
The params
method ends a query chain, and synchronously formats your query into an object ready for the DynamoDB docClient.
For more information on the options available in the
config
object, check out the section Query Options.
let config = {};
let stores = MallStores.query
.leases({ mallId })
.between(
{ leaseEndDate: "2020-06-01" },
{ leaseEndDate: "2020-07-31" })
.filter((attr) => attr.rent.lte("5000.00"))
.params(config);
// Results:
{
IndexName: 'idx2',
TableName: 'electro',
ExpressionAttributeNames: { '#rent': 'rent', '#pk': 'idx2pk', '#sk1': 'idx2sk' },
ExpressionAttributeValues: {
':rent1': '5000.00',
':pk': '$mallstoredirectory_1#mallid_eastpointe',
':sk1': '$mallstore#leaseenddate_2020-06-01#rent_',
':sk2': '$mallstore#leaseenddate_2020-07-31#rent_'
},
KeyConditionExpression: '#pk = :pk and #sk1 BETWEEN :sk1 AND :sk2',
FilterExpression: '#rent <= :rent1'
}
The go
method ends a query chain, and asynchronously queries DynamoDB with the client
provided in the model.
For more information on the options available in the
config
object, check out the section Query Options.
let config = {};
let stores = MallStores.query
.leases({ mallId })
.between(
{ leaseEndDate: "2020-06-01" },
{ leaseEndDate: "2020-07-31" })
.filter(({rent}) => rent.lte("5000.00"))
.go(config);
All ElectroDB query and scan operations return a cursor, which is a stringified, base64url-encoded copy of DynamoDB's LastEvaluatedKey.
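As an illustration of that encoding, the sketch below round-trips a LastEvaluatedKey-shaped object through a base64url string. This is illustrative only; ElectroDB's actual cursor serialization is internal and may differ.

```javascript
// Illustrative sketch only: encode/decode a LastEvaluatedKey-shaped object
// as a base64url string. ElectroDB's real cursor format is internal and
// may differ from this.
function toCursor(lastEvaluatedKey) {
  return Buffer.from(JSON.stringify(lastEvaluatedKey)).toString("base64url");
}

function fromCursor(cursor) {
  return JSON.parse(Buffer.from(cursor, "base64url").toString("utf8"));
}

const key = {
  pk: "$mallstoredirectory_1#mallid_eastpointe",
  sk: "$mallstore#leaseenddate_2020-06-01",
};
const cursor = toCursor(key);
const roundTripped = fromCursor(cursor);
```

Because the cursor is an opaque string, it can be passed safely in URLs or API responses without exposing raw key structure to clients.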
The terminal method go()
accepts a cursor
when executing a query
or scan
to continue paginating for more results. Pass the cursor from the previous query to your next query and ElectroDB will continue its pagination where it left off.
const results1 = await MallStores.query
.leases({ mallId })
.go(); // no "cursor" passed to `.go()`
const results2 = await MallStores.query
.leases({ mallId })
.go({cursor: results1.cursor}); // Paginate by querying with the "cursor" from your first query
// results1
{
cursor: '...'
data: [{
mall: '3010aa0d-5591-4664-8385-3503ece58b1c',
leaseEnd: '2020-01-20',
sector: '7d0f5c19-ec1d-4c1e-b613-a4cc07eb4db5',
store: 'MNO',
unit: 'B5',
id: 'e0705325-d735-4fe4-906e-74091a551a04',
building: 'BuildingE',
category: 'food/coffee',
rent: '0.00'
},
{
mall: '3010aa0d-5591-4664-8385-3503ece58b1c',
leaseEnd: '2020-01-20',
sector: '7d0f5c19-ec1d-4c1e-b613-a4cc07eb4db5',
store: 'ZYX',
unit: 'B9',
id: 'f201a1d3-2126-46a2-aec9-758ade8ab2ab',
building: 'BuildingI',
category: 'food/coffee',
rent: '0.00'
}]
}
Pagination with services is also possible. Similar to Entity Pagination, calling the .go()
method returns the following structure:
type GoResults = {
cursor: string | null;
data: {
[entityName: string]: { /** EntityItem */ }[]
}
}
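Accumulating all pages of a Service collection query therefore means merging the per-entity arrays under data across pages. Below is a minimal sketch; the queryPage callback is a hypothetical stand-in for a call such as TaskApp.collections.assignments(...).go({ cursor }).

```javascript
// Sketch: drain a paginated collection query, merging each page's
// per-entity arrays (the GoResults shape above) into one result object.
// `queryPage` is a hypothetical stand-in for an ElectroDB collection call.
async function paginateCollection(queryPage) {
  const merged = {};
  let cursor = null;
  do {
    const { cursor: next, data } = await queryPage(cursor);
    for (const [entityName, items] of Object.entries(data)) {
      merged[entityName] = (merged[entityName] || []).concat(items);
    }
    cursor = next;
  } while (cursor !== null);
  return merged;
}

// Mocked pages standing in for DynamoDB responses:
const pages = [
  { cursor: "page2", data: { employees: [{ name: "A" }], tasks: [{ title: "t1" }] } },
  { cursor: null, data: { employees: [{ name: "B" }], tasks: [] } },
];
let call = 0;
const fakeQueryPage = async () => pages[call++];
```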
A notable Pagination Option is pager
. This property defines the post-processing ElectroDB should perform on a returned LastEvaluatedKey
, as well as how ElectroDB should interpret an incoming pager to use as an ExclusiveStartKey.
"raw": The "raw"
option returns the LastEvaluatedKey as it was returned by the DynamoDB DocClient.
// {pager: "raw"}
{
pk: '$taskapp#country_united states of america#state_oregon',
sk: '$offices_1#city_power#zip_34706#office_mobile branch',
gsi1pk: '$taskapp#office_mobile branch',
gsi1sk: '$workplaces#offices_1'
}
Simple pagination example:
async function getAllStores(mallId) {
let stores = [];
let cursor = null;
do {
const results = await MallStores.query
.leases({ mallId })
.go({ cursor });
stores = [...stores, ...results.data];
cursor = results.cursor;
} while (cursor !== null);
return stores;
}
For a comprehensive and interactive guide to build queries please visit this runkit: https://runkit.com/tywalch/electrodb-building-queries.
const cityId = "Atlanta1";
const mallId = "EastPointe";
const storeId = "LatteLarrys";
const unitId = "B24";
const buildingId = "F34";
const june = "2020-06";
const july = "2020-07";
const discount = "500.00";
const maxRent = "2000.00";
const minRent = "5000.00";
// Lease Agreements by StoreId
await StoreLocations.query.leases({storeId}).go()
// Lease Agreement by StoreId for March 22nd 2020
await StoreLocations.query.leases({storeId, leaseEndDate: "2020-03-22"}).go()
// Lease agreements by StoreId for 2020
await StoreLocations.query.leases({storeId}).begins({leaseEndDate: "2020"}).go()
// Lease Agreements by StoreId after March 2020
await StoreLocations.query.leases({storeId}).gt({leaseEndDate: "2020-03"}).go()
// Lease Agreements by StoreId after, and including, March 2020
await StoreLocations.query.leases({storeId}).gte({leaseEndDate: "2020-03"}).go()
// Lease Agreements by StoreId before 2021
await StoreLocations.query.leases({storeId}).lt({leaseEndDate: "2021-01"}).go()
// Lease Agreements by StoreId before February 2021
await StoreLocations.query.leases({storeId}).lte({leaseEndDate: "2021-02"}).go()
// Lease Agreements by StoreId between 2010 and 2020
await StoreLocations.query
.leases({storeId})
.between(
{leaseEndDate: "2010"},
{leaseEndDate: "2020"})
.go()
// Lease Agreements by StoreId after, and including, 2010 in the city of Atlanta and category containing food
await StoreLocations.query
.leases({storeId})
.gte({leaseEndDate: "2010"})
.where((attr, op) => `
${op.eq(attr.cityId, "Atlanta1")} AND ${op.contains(attr.category, "food")}
`)
.go()
// Rents by City and Store whose rent discounts match a certain rent/discount criteria
await StoreLocations.query
.units({mallId})
.begins({leaseEndDate: june})
.rentDiscount(discount, maxRent, minRent)
.go()
// Stores by Mall matching a specific category
await StoreLocations.query
.units({mallId})
.byCategory("food/coffee")
.go()
Query options can be added to .params() and .go() to change query behavior or add custom parameters to a query.
By default, ElectroDB enables you to work with records using the names and properties defined in the model. Additionally, it removes the need to deal directly with the docClient parameters, which can be complex for a team without much experience with DynamoDB. The Query Options object can be passed to both the .params() and .go() methods when building your query. Below are the options available:
{
params?: object;
table?: string;
data?: 'raw' | 'includeKeys' | 'attributes';
pager?: 'raw' | 'cursor';
originalErr?: boolean;
concurrent?: number;
unprocessed?: "raw" | "item";
response?: "default" | "none" | "all_old" | "updated_old" | "all_new" | "updated_new";
ignoreOwnership?: boolean;
limit?: number;
pages?: number | 'all';
logger?: (event) => void;
listeners?: Array<(event) => void>;
preserveBatchOrder?: boolean;
attributes?: string[];
order?: 'asc' | 'desc';
};
Option | Default | Description |
---|---|---|
params | {} | Properties added to this object will be merged onto the params sent to the document client. Any conflicts with ElectroDB will favor the params specified here. |
table | (from constructor) | Use a different table than the one defined in the Service Options |
attributes | (all attributes) | The attributes query option allows you to specify ProjectionExpression Attributes for your get or query operation. As of 1.11.0 only root attributes are allowed to be specified. |
data | "attributes" | Accepts the values 'raw' , 'includeKeys' , 'attributes' or undefined . Use 'raw' to return query results as they were returned by the docClient. Use 'includeKeys' to include item partition and sort key values in your return object. By default, ElectroDB does not return partition, sort, or global keys in its response. |
pager | cursor | Used with pagination calls to override ElectroDB's default behaviour of returning a serialized string cursor. See more detail about this in the sections for Pager Query Options. |
originalErr | false | By default, ElectroDB alters the stacktrace of any exceptions thrown by the DynamoDB client to give better visibility to the developer. Set this value equal to true to turn off this functionality and return the error unchanged. |
concurrent | 1 | When performing batch operations, how many requests (1 batch operation == 1 request) to DynamoDB should ElectroDB make at one time. Be mindful of your DynamoDB throughput configurations. |
unprocessed | "item" | Used in batch processing to override ElectroDB's default behaviour of breaking apart DynamoDB's Unprocessed records into composite attributes. See more detail about this in the sections for BatchGet, BatchDelete, and BatchPut. |
response | "default" | Used as a convenience for applying the DynamoDB parameter ReturnValues . The options here are the same as the parameter values for the DocumentClient except lowercase. The "none" option will cause the method to return null and will bypass ElectroDB's response formatting -- useful if formatting performance is a concern. |
ignoreOwnership | false | By default, ElectroDB interrogates items returned from a query for the presence of matching entity "identifiers". This helps to ensure other entities, or other versions of an entity, are filtered from your results. If you are using ElectroDB with an existing table/dataset you can turn off this feature by setting this property to true . |
limit | none | A target for the number of items to return from DynamoDB. If this option is passed, Queries on entities and through collections will paginate DynamoDB until this limit is reached or all items for that query have been returned. |
pages | 1 | How many DynamoDB pages should a query iterate through before stopping. To have ElectroDB automatically paginate through all results, pass the string value 'all' . |
order | 'asc' | Convenience option for ScanIndexForward, to change the order of queries based on your index's Sort Key -- valid options include 'asc' and 'desc'. [read more] |
listeners | [] | An array of callbacks that are invoked when internal ElectroDB events occur. |
logger | none | A convenience option for a single event listener that semantically can be used for logging. |
preserveBatchOrder | false | When used with a batchGet operation, ElectroDB will ensure the order returned by a batchGet will be the same as the order provided. When enabled, if a record is returned from DynamoDB as "unprocessed" (read more here), ElectroDB will return a null value at that index. |
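To make the precedence rule of the params option concrete, the sketch below shows the merge behavior described in the table above: user-supplied params are merged over the generated parameters, with user values winning on conflict. This is an illustration of the documented behavior, not ElectroDB's internal implementation.

```javascript
// Illustrative sketch (not ElectroDB source): the `params` query option is
// merged onto the generated document client parameters, and conflicting
// keys favor the user-supplied values.
function applyParamsOption(generated, userParams = {}) {
  return { ...generated, ...userParams };
}

// Hypothetical generated params and a user override:
const generated = { TableName: "electro", Limit: 50 };
const merged = applyParamsOption(generated, {
  TableName: "other_table",   // conflicts with ElectroDB: user value wins
  ConsistentRead: true,       // no conflict: simply added
});
```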
ElectroDB supports both the v2 and v3 aws clients. The client can be supplied when creating a new Entity or Service, or added to an Entity/Service instance via the setClient() method.
On the instantiation of an Entity
:
import { Entity } from 'electrodb';
import { DocumentClient } from "aws-sdk/clients/dynamodb";
const table = "my_table_name";
const client = new DocumentClient({
region: "us-east-1"
});
const task = new Entity({
// your model
}, {
client, // <----- client
table,
});
On the instantiation of a Service
:
import { Entity, Service } from 'electrodb';
import { DocumentClient } from "aws-sdk/clients/dynamodb";
const table = "my_table_name";
const client = new DocumentClient({
region: "us-east-1"
});
const task = new Entity({
// your model
});
const user = new Entity({
// your model
});
const service = new Service({ task, user }, {
client, // <----- client
table,
});
Via the setClient
method:
import { Entity } from 'electrodb';
import { DocumentClient } from "aws-sdk/clients/dynamodb";
const table = "my_table_name";
const client = new DocumentClient({
region: "us-east-1"
});
const task = new Entity({
// your model
});
task.setClient(client);
The v2 sdk will work out of the box with the DynamoDB DocumentClient.
Example:
import { DocumentClient } from "aws-sdk/clients/dynamodb";
const client = new DocumentClient({
region: "us-east-1"
});
The v3 client will work out of the box with the DynamoDBClient.
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
const client = new DynamoDBClient({
region: "us-east-1"
});
A logger callback function can be provided both at the instantiation of an Entity or Service instance and as a Query Option. The property logger is implemented as a convenience property; under the hood ElectroDB uses this property identically to how it uses a Listener.
On the instantiation of an Entity
:
import { DynamoDB } from 'aws-sdk';
import { Entity, ElectroEvent } from 'electrodb';
const table = "my_table_name";
const client = new DynamoDB.DocumentClient();
const logger = (event: ElectroEvent) => {
console.log(JSON.stringify(event, null, 4));
}
const task = new Entity({
// your model
}, {
client,
table,
logger // <----- logger listener
});
On the instantiation of a Service
:
import { DynamoDB } from 'aws-sdk';
import { Entity, Service, ElectroEvent } from 'electrodb';
const table = "my_table_name";
const client = new DynamoDB.DocumentClient();
const logger = (event: ElectroEvent) => {
console.log(JSON.stringify(event, null, 4));
}
const task = new Entity({
// your model
});
const user = new Entity({
// your model
});
const service = new Service({ task, user }, {
client,
table,
logger // <----- logger listener
});
As a Query Option:
const logger = (event: ElectroEvent) => {
console.log(JSON.stringify(event, null, 4));
}
task.query
.assigned({ userId })
.go({ logger });
ElectroDB can be supplied with callbacks (see: logging and listeners to learn how) to be invoked after certain request lifecycles. This can be useful for logging, analytics, expanding functionality, and more. The following are events currently supported by ElectroDB -- if you would like to see additional events feel free to create a github issue to discuss your concept/need!
The query
event occurs when a query is made via the terminal method go()
. The event includes the exact parameters given to the provided client, the ElectroDB method used, and the ElectroDB configuration provided.
Type:
interface ElectroQueryEvent<P extends any = any> {
type: 'query';
method: "put" | "get" | "query" | "scan" | "update" | "delete" | "remove" | "patch" | "create" | "batchGet" | "batchWrite";
config: any;
params: P;
}
Example Input:
const prop1 = "22874c81-27c4-4264-92c3-b280aa79aa30";
const prop2 = "366aade8-a7c0-4328-8e14-0331b185de4e";
const prop3 = "3ec9ed0c-7497-4d05-bdb8-86c09a618047";
entity.update({ prop1, prop2 })
.set({ prop3 })
.go();
Example Output:
{
"type": "query",
"method": "update",
"params": {
"UpdateExpression": "SET #prop3 = :prop3_u0, #prop1 = :prop1_u0, #prop2 = :prop2_u0, #__edb_e__ = :__edb_e___u0, #__edb_v__ = :__edb_v___u0",
"ExpressionAttributeNames": {
"#prop3": "prop3",
"#prop1": "prop1",
"#prop2": "prop2",
"#__edb_e__": "__edb_e__",
"#__edb_v__": "__edb_v__"
},
"ExpressionAttributeValues": {
":prop3_u0": "3ec9ed0c-7497-4d05-bdb8-86c09a618047",
":prop1_u0": "22874c81-27c4-4264-92c3-b280aa79aa30",
":prop2_u0": "366aade8-a7c0-4328-8e14-0331b185de4e",
":__edb_e___u0": "entity",
":__edb_v___u0": "1"
},
"TableName": "electro",
"Key": {
"pk": "$test#prop1_22874c81-27c4-4264-92c3-b280aa79aa30",
"sk": "$testcollection#entity_1#prop2_366aade8-a7c0-4328-8e14-0331b185de4e"
}
},
"config": { }
}
The results
event occurs when results are returned from DynamoDB. The event includes the exact results returned from the provided client, the ElectroDB method used, and the ElectroDB configuration provided. Note this event handles both failed (or thrown) and returned (or resolved) results.
Pro-Tip: Use this event to hook into DynamoDB's consumed capacity statistics to learn more about the impact and cost associated with your queries.
Type:
interface ElectroResultsEvent<R extends any = any> {
type: 'results';
method: "put" | "get" | "query" | "scan" | "update" | "delete" | "remove" | "patch" | "create" | "batchGet" | "batchWrite";
config: any;
results: R;
success: boolean;
}
Example Input:
const prop1 = "22874c81-27c4-4264-92c3-b280aa79aa30";
const prop2 = "366aade8-a7c0-4328-8e14-0331b185de4e";
entity.get({ prop1, prop2 }).go();
Example Output:
{
"type": "results",
"method": "get",
"config": { },
"success": true,
"results": {
"Item": {
"prop2": "366aade8-a7c0-4328-8e14-0331b185de4e",
"sk": "$testcollection#entity_1#prop2_366aade8-a7c0-4328-8e14-0331b185de4e",
"prop1": "22874c81-27c4-4264-92c3-b280aa79aa30",
"prop3": "3ec9ed0c-7497-4d05-bdb8-86c09a618047",
"__edb_e__": "entity",
"__edb_v__": "1",
"pk": "$test_1#prop1_22874c81-27c4-4264-92c3-b280aa79aa30"
}
}
}
ElectroDB can be supplied with callbacks (called "Listeners") to be invoked after certain request lifecycles. Unlike Attribute Getters and Setters, Listeners are implemented to react to events passively, not to modify values during the request lifecycle. Listeners can be useful for logging, analytics, expanding functionality, and more. Listeners can be provided both at the instantiation of an Entity or Service instance and as a Query Option.
Note: Listeners are treated as synchronous callbacks and are not awaited. In the event that a callback throws an exception, ElectroDB will quietly catch and log the exception with console.error to prevent the exception from impacting your query.
On the instantiation of an Entity
:
import { DynamoDB } from 'aws-sdk';
import { Entity, ElectroEvent } from 'electrodb';
const table = "my_table_name";
const client = new DynamoDB.DocumentClient();
const listener1 = (event: ElectroEvent) => {
// do work
}
const listener2 = (event: ElectroEvent) => {
// do work
}
const task = new Entity({
// your model
}, {
client,
table,
listeners: [
listener1,
listener2, // <----- supports multiple listeners
]
});
On the instantiation of a Service
:
import { DynamoDB } from 'aws-sdk';
import { Entity, Service, ElectroEvent } from 'electrodb';
const table = "my_table_name";
const client = new DynamoDB.DocumentClient();
const listener1 = (event: ElectroEvent) => {
// do work
}
const listener2 = (event: ElectroEvent) => {
// do work
}
const task = new Entity({
// your model
});
const user = new Entity({
// your model
});
const service = new Service({ task, user }, {
client,
table,
listeners: [
listener1,
listener2, // <----- supports multiple listeners
]
});
As a Query Option:
const listener1 = (event: ElectroEvent) => {
// do work
}
const listener2 = (event: ElectroEvent) => {
// do work
}
task.query
.assigned({ userId })
.go({ listeners: [listener1, listener2] });
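To make the listener contract concrete, here is a sketch of the behavior described in the note above (not ElectroDB's source): listeners are called synchronously, and a throwing listener cannot break the query or prevent other listeners from running.

```javascript
// Sketch of the documented listener behavior: callbacks are invoked
// synchronously, and a thrown exception is caught and logged with
// console.error so it cannot impact the query or later listeners.
function notifyListeners(listeners, event) {
  for (const listener of listeners) {
    try {
      listener(event);
    } catch (err) {
      console.error(err);
    }
  }
}

const seen = [];
const badListener = () => { throw new Error("boom"); }; // misbehaving listener
const goodListener = (event) => seen.push(event.type);  // still runs after the throw
notifyListeners([badListener, goodListener], { type: "query" });
```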
Error Code | Description |
---|---|
1000s | Configuration Errors |
2000s | Invalid Queries |
3000s | User Defined Errors |
4000s | DynamoDB Errors |
5000s | Unexpected Errors |
Code: 1001
Why this occurred:
If a DynamoDB DocClient is not passed to the constructor of an Entity or Service (client
), ElectroDB will be unable to query DynamoDB. This error will only appear when a query (using go()
) is made, because ElectroDB is still useful without a DocClient through the use of its params()
method.
What to do about it: For an Entity be sure to pass the DocClient as the second param to the constructor:
new Entity(schema, {client})
For a Service, the client is passed the same way, as the second param to the constructor:
new Service({ task, user }, {client});
Code: 1002
Why this occurred: You tried to modify the entity identifier on an Entity.
What to do about it: Make sure you have spelled the identifier correctly or that you actually passed a replacement.
Code: 1003
Why this occurred: You are trying to use the custom Key Composite Attribute Template, and the format you passed is invalid.
What to do about it: Check out the section on [Composite Attribute Templates](#composite-attribute-templates) and verify your template conforms to the rules detailed there.
Code: 1004
Why this occurred: Your model contains duplicate indexes. This could be because you accidentally included an index twice or even forgot to add an index name on a secondary index, which would be interpreted as "duplicate" to the Table's Primary index.
What to do about it:
Double-check the index names on your model for duplicate indexes. The error should specify which index has been duplicated. It is also possible that you have forgotten to include an index name. Each table must have at least one Table Index (which does not include an index
property in ElectroDB), but all Secondary and Local indexes must include an index
property with the name of that index as defined on the table.
{
indexes: {
index1: {
index: "idx1", // <-- duplicate "idx1"
pk: {},
sk: {}
},
index2: {
index: "idx1", // <-- duplicate "idx1"
pk: {},
sk: {}
}
}
}
Code: 1005
Why this occurred:
You have added a collection
to an index that does not have an SK. Because Collections are used to help query across entities via the Sort Key, not having a Sort Key on an index defeats the purpose of a Collection.
What to do about it: If your index does have a Sort Key, but you are unsure how to inform ElectroDB without setting composite attributes on the SK, add the SK object to the index and use an empty array for Composite Attributes:
// ElectroDB interprets as index *not having* an SK.
{
indexes: {
myIndex: {
pk: {
field: "pk",
composite: ["id"]
}
}
}
}
// ElectroDB interprets as index *having* SK, but this model doesn't attach any composite attributes to it.
{
indexes: {
myIndex: {
pk: {
field: "pk",
composite: ["id"]
},
sk: {
field: "sk",
composite: []
}
}
}
}
Code: 1006
Why this occurred: You have assigned the same collection name to multiple indexes. This is not allowed because collection names must be unique.
What to do about it: Determine a new naming scheme so that each collection name is unique.
Code: 1007
Why this occurred:
DynamoDB requires the definition of at least one Primary Index on the table. In ElectroDB this is defined as an Index without an index
property. Each model needs at least one, and the composite attributes used for this index must ensure each composite represents a unique record.
What to do about it: Identify the index you're using as the Primary Index and ensure it does not have an index property on its definition.
// ElectroDB interprets as the Primary Index because it lacks an `index` property.
{
indexes: {
myIndex: {
pk: {
field: "pk",
composite: ["org"]
},
sk: {
field: "sk",
composite: ["id"]
}
}
}
}
// ElectroDB interprets as a Global Secondary Index because it has an `index` property.
{
indexes: {
myIndex: {
index: "gsi1",
pk: {
field: "gsipk1",
composite: ["org"]
},
sk: {
field: "gsisk1",
composite: ["id"]
}
}
}
}
Code: 1008
Why this occurred: Some attribute on your model has an invalid configuration.
What to do about it: Use the error to identify which attribute needs to be examined, then double-check the properties on that attribute. Check out the section on Attributes for more information on how they are structured.
Code: 1009
Why this occurred: Some properties on your model are missing or invalid.
What to do about it: Check out the section on Models to verify your model against what is expected.
Code: 1010
Why this occurred: Some properties on your options object are missing or invalid.
What to do about it: Check out the section on Model/Service Options to verify your options against what is expected.
Code: 1014
Why this occurred:
An Index in your model references the same field twice across indexes. The field
property in the definition of an index is a mapping to the name of the field assigned to the PK or SK of an index.
What to do about it: This is likely a typo, if not double-check the names of the fields you assigned to be the PK and SK of your index, these field names must be unique.
Code: 1015
Why this occurred: Within one index you tried to use the same composite attribute in both the PK and SK. A composite attribute may only be used once within an index. With ElectroDB it is not uncommon to use the same value as both the PK and SK when a Sort Key exists on a table -- this is usually done because some value is required in that column, but for that entity it is not necessary. If this is your situation, remember that ElectroDB does put a value in the Sort Key even if it does not include a composite attribute; check out this section for more information.
What to do about it: Determine how you can change your access pattern to not duplicate the composite attribute. Remember that an empty array for an SK is valid.
Code: 1017
Why this occurred: You are trying to use the custom Key Composite Attribute Template, and a Composite Attribute Array on your model, and they do not contain identical composite attributes.
What to do about it: Check out the section on [Composite Attribute Templates](#composite-attribute-templates) and verify your template conforms to the rules detailed there. Both properties must contain the same attributes and be provided in the same order.
Code: 1018
Why this occurred: ElectroDB's design revolves around best practices related to modeling in single table design. This includes giving indexed fields generic names. If the PK and SK fields on your table indexes also match the names of attributes on your Entity you will need to make special considerations to make sure ElectroDB can accurately map your data.
What to do about it: Check out the section Using ElectroDB with existing data to learn more about considerations to make when using attributes as index fields.
Code: 1019
Why this occurred: Collections allow for unique access patterns to be modeled between entities. It does this by appending prefixes to your key composites. If an Entity leverages an attribute field as an index key, ElectroDB will be unable to prefix your value because that would result in modifying the value itself.
What to do about it: Check out the section Collections to learn more about collections, as well as the section Using ElectroDB with existing data to learn more about considerations to make when using attributes as index fields.
Code: 2002
Why this occurred: The current request is missing some composite attributes needed to complete the query based on the model definition. Composite Attributes are used to create the Partition and Sort Keys. In DynamoDB, Partition Keys cannot be partially included, and while Sort Keys can be partially included, they must be passed in the order they are defined on the model.
What to do about it: The error should describe the missing composite attributes, ensure those composite attributes are included in the query or update the model to reflect the needs of the access pattern.
Code: 2003
Why this occurred: You never specified a Table for DynamoDB to use.
What to do about it: Tables can be defined on the Service Options object when you create an Entity or Service, or, if that is not known at the time of creation, supplied as a Query Option on each query individually. It can be supplied to both; in that case the Query Option will override the Service Option.
Code: 2004
Why this occurred:
When performing a bulk operation (Batch Get, Batch Delete Records, Batch Put Records) you can pass a Query Option called concurrent
, which impacts how many batch requests can occur at the same time. Your value should pass the test of both !isNaN(parseInt(value))
and parseInt(value) > 0
.
What to do about it:
Expect this error only if you're providing a concurrent
option. Double-check the value you are providing is the value you expect to be passing, and that the value passes the tests listed above.
Code: 2005
Why this occurred:
When performing a query you can pass a Query Option called pages
, which impacts how many DynamoDB pages a query should iterate through. Your value should pass the test of both !isNaN(parseInt(value))
and parseInt(value) > 0
.
What to do about it:
Expect this error only if you're providing a pages
option. Double-check the value you are providing is the value you expect to be passing, and that the value passes the tests listed above.
Code: 2006
Why this occurred:
When performing a query you can pass a Query Option called limit
, which impacts how many DynamoDB items a query should return. Your value should pass the test of both !isNaN(parseInt(value))
and parseInt(value) > 0
.
What to do about it:
Expect this error only if you're providing a limit
option. Double-check the value you are providing is the value you expect to be passing, and that the value passes the tests listed above.
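The concurrent, pages, and limit options above all share the same validation; as a predicate it amounts to the following sketch (note that pages also accepts the special string 'all', which is handled separately from this check):

```javascript
// The shared validation described above: the value must parse to an
// integer greater than zero.
function isValidPositiveInt(value) {
  return !isNaN(parseInt(value)) && parseInt(value) > 0;
}
```

Numeric strings such as "3" pass because parseInt coerces them, while 0, negative numbers, and non-numeric strings fail.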
Code: 3001
Why this occurred: The value received for an attribute either failed type expectations (e.g. a "number" instead of a "string"), or the user-provided "validate" callback on an attribute rejected the value.
What to do about it:
Examine the error itself for more precise detail on why the failure occurred. The error object has a property called "fields" which contains an array of every attribute that failed validation, and a reason for each. If the failure originated from a "validate" callback, the originally thrown error will be accessible via the cause
property of the corresponding element within the fields array.
Below is the type definition for an ElectroValidationError:
ElectroValidationError<T extends Error = Error> extends ElectroError {
readonly name: "ElectroValidationError"
readonly code: number;
readonly date: number;
readonly isElectroError: boolean;
ref: {
readonly code: number;
readonly section: string;
readonly name: string;
readonly sym: unique symbol;
}
readonly fields: ReadonlyArray<{
/**
* The json path to the attribute that had a validation error
*/
readonly field: string;
/**
* A description of the validation error for that attribute
*/
readonly reason: string;
/**
* Index of the value passed (present only in List attribute validation errors)
*/
readonly index: number | undefined;
/**
* The error thrown from the attribute's validate callback (if applicable)
*/
readonly cause: T | undefined;
}>
}
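Given that shape, a caught validation error can be summarized for logging along these lines. This is a hedged sketch: summarizeValidationError is a hypothetical helper, and fakeError merely mimics the fields shape typed above rather than being a real ElectroValidationError.

```javascript
// Hypothetical helper: flatten the `fields` array of a validation-error
// shaped object into log-friendly summaries. `err` stands in for any
// object matching the ElectroValidationError type above.
function summarizeValidationError(err) {
  return err.fields.map(({ field, reason, cause }) => ({
    field,
    reason,
    cause: cause ? cause.message : undefined, // originating validate() error, if any
  }));
}

// Fake error object mimicking the documented shape:
const fakeError = {
  fields: [
    { field: "email", reason: "Invalid value", index: undefined, cause: new Error("bad email") },
  ],
};
const summary = summarizeValidationError(fakeError);
```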
Code: 4001
Why this occurred: DynamoDB did not like something about your query.
What to do about it:
By default, ElectroDB tries to keep the stack trace close to your code; ideally this can help you identify what might be going on. A tip to help with troubleshooting: use .params()
to get more insight into how your query is converted to DocClient params.
Code: 5004
Why this occurred: When using pagination with a Service, ElectroDB will try to identify which Entity is associated with the supplied pager. This error can occur when you supply an invalid pager, or when you are using a different pager option to a pager than what was used when retrieving it. Consult the section on Pagination to learn more.
What to do about it:
If you are sure the pager you are passing to .page()
is the same you received from .page()
this could be an unexpected error. To mitigate the issue use the Query Option {pager: "raw"}
and please open a support issue.
Code: 5005
Why this occurred:
When using pagination with a Service, ElectroDB will try to identify which Entity is associated with the supplied pager option. This error can occur when you supply a pager that resolves to more than one Entity. This can happen if your entities share the same composite attributes for the index you are querying on, and you are using the Query Option {pager: "item"}
.
What to do about it:
Because this scenario is possible with otherwise well-considered entity models, the default pager type used by ElectroDB is "named". To avoid this error, you will need to use either the "raw" or "named" pager options for any index that could result in an ambiguous Entity owner.
Want to just play with ElectroDB instead of reading about it? Try it out for yourself! https://runkit.com/tywalch/electrodb-building-queries
For an example, let's look at the needs of an application used to manage Employees. The application looks at employees, offices, tasks, and projects.
const EmployeesModel = {
model: {
entity: "employees",
version: "1",
service: "taskapp",
},
attributes: {
employee: "string",
firstName: "string",
lastName: "string",
office: "string",
title: "string",
team: ["development", "marketing", "finance", "product"],
salary: "string",
manager: "string",
dateHired: "string",
birthday: "string",
},
indexes: {
employee: {
pk: {
field: "pk",
composite: ["employee"],
},
sk: {
field: "sk",
composite: [],
},
},
coworkers: {
index: "gsi1pk-gsi1sk-index",
collection: "workplaces",
pk: {
field: "gsi1pk",
composite: ["office"],
},
sk: {
field: "gsi1sk",
composite: ["team", "title", "employee"],
},
},
teams: {
index: "gsi2pk-gsi2sk-index",
pk: {
field: "gsi2pk",
composite: ["team"],
},
sk: {
field: "gsi2sk",
composite: ["title", "salary", "employee"],
},
},
employeeLookup: {
collection: "assignments",
index: "gsi3pk-gsi3sk-index",
pk: {
field: "gsi3pk",
composite: ["employee"],
},
sk: {
field: "gsi3sk",
composite: [],
},
},
roles: {
index: "gsi4pk-gsi4sk-index",
pk: {
field: "gsi4pk",
composite: ["title"],
},
sk: {
field: "gsi4sk",
composite: ["salary", "employee"],
},
},
directReports: {
index: "gsi5pk-gsi5sk-index",
pk: {
field: "gsi5pk",
composite: ["manager"],
},
sk: {
field: "gsi5sk",
composite: ["team", "office", "employee"],
},
},
}
};
const TasksModel = {
model: {
entity: "tasks",
version: "1",
service: "taskapp",
},
attributes: {
task: "string",
project: "string",
employee: "string",
description: "string",
},
indexes: {
task: {
pk: {
field: "pk",
composite: ["task"],
},
sk: {
field: "sk",
composite: ["project", "employee"],
},
},
project: {
index: "gsi1pk-gsi1sk-index",
pk: {
field: "gsi1pk",
composite: ["project"],
},
sk: {
field: "gsi1sk",
composite: ["employee", "task"],
},
},
assigned: {
collection: "assignments",
index: "gsi3pk-gsi3sk-index",
pk: {
field: "gsi3pk",
composite: ["employee"],
},
sk: {
field: "gsi3sk",
composite: ["project", "task"],
},
},
},
};
const OfficesModel = {
model: {
entity: "offices",
version: "1",
service: "taskapp",
},
attributes: {
office: "string",
country: "string",
state: "string",
city: "string",
zip: "string",
address: "string",
},
indexes: {
locations: {
pk: {
field: "pk",
composite: ["country", "state"],
},
sk: {
field: "sk",
composite: ["city", "zip", "office"],
},
},
office: {
index: "gsi1pk-gsi1sk-index",
collection: "workplaces",
pk: {
field: "gsi1pk",
composite: ["office"],
},
sk: {
field: "gsi1sk",
composite: [],
},
},
},
};
Join models on a new Service called EmployeeApp
const DynamoDB = require("aws-sdk/clients/dynamodb");
const client = new DynamoDB.DocumentClient({region: "us-east-1"});
const { Service } = require("electrodb");
const table = "projectmanagement";
const EmployeeApp = new Service({
employees: EmployeesModel,
tasks: TasksModel,
offices: OfficesModel,
}, { client, table });
Fulfilling Requirement #1.
EmployeeApp.collections.assignments({employee: "CBaskin"}).go();
Returns the following:
{
data: {
employees: [{
employee: "cbaskin",
firstName: "carol",
lastName: "baskin",
office: "big cat rescue",
title: "owner",
team: "cool cats and kittens",
salary: "1,000,000",
manager: "",
dateHired: "1992-11-04",
birthday: "1961-06-06",
}],
tasks: [{
task: "Feed tigers",
description: "Prepare food for tigers to eat",
project: "Keep tigers alive",
employee: "cbaskin"
}, {
task: "Fill water bowls",
description: "Ensure the tigers have enough water",
project: "Keep tigers alive",
employee: "cbaskin"
}]
},
cursor: '...'
}
Fulfilling Requirement #2.
EmployeeApp.collections.workplaces({office: "big cat rescue"}).go()
Returns the following:
{
data: {
employees: [{
employee: "cbaskin",
firstName: "carol",
lastName: "baskin",
office: "big cat rescue",
title: "owner",
team: "cool cats and kittens",
salary: "1,000,000",
manager: "",
dateHired: "1992-11-04",
birthday: "1961-06-06",
}],
offices: [{
office: "big cat rescue",
country: "usa",
state: "florida",
city: "tampa",
zip: "12345",
address: "123 Kitty Cat Lane"
}]
},
cursor: '...'
}
Fulfilling Requirement #3.
EmployeeApp.entities.tasks.query.assigned({employee: "cbaskin"}).go();
Returns the following:
{
data: [
{
task: "Feed tigers",
description: "Prepare food for tigers to eat",
project: "Keep tigers alive",
employee: "cbaskin"
}, {
task: "Fill water bowls",
description: "Ensure the tigers have enough water",
project: "Keep tigers alive",
employee: "cbaskin"
}
],
cursor: '...',
}
Fulfilling Requirement #4.
EmployeeApp.entities.tasks.query.project({project: "Murder Carol"}).go();
Returns the following:
{
data: [
{
task: "Hire hitman",
description: "Find someone to murder Carol",
project: "Murder Carol",
employee: "jexotic"
}
],
cursor: '...'
}
Fulfilling Requirement #5.
EmployeeApp.entities.offices.query.locations({country: "usa", state: "florida"}).go()
Returns the following:
{
data: [
{
office: "big cat rescue",
country: "usa",
state: "florida",
city: "tampa",
zip: "12345",
address: "123 Kitty Cat Lane"
}
],
cursor: '...'
}
Fulfilling Requirement #6.
EmployeeApp.entities.employees
.query.roles({title: "animal wrangler"})
.lte({salary: "150.00"})
.go()
Returns the following:
{
data: [
{
employee: "ssaffery",
firstName: "saff",
lastName: "saffery",
office: "gw zoo",
title: "animal wrangler",
team: "keepers",
salary: "105.00",
manager: "jexotic",
dateHired: "1999-02-23",
birthday: "1960-07-11",
}
],
cursor: '...'
}
Fulfilling Requirement #7.
const startDate = "2020-05-01";
const endDate = "2020-06-01";
EmployeeApp.entities.employees
.query.workplaces({office: "gw zoo"})
.where(({ birthday, dateHired }, { between }) => `
${between(dateHired, startDate, endDate)} OR
${between(birthday, startDate, endDate)}
`)
.go()
Returns the following:
{
data: [
{
employee: "jexotic",
firstName: "joe",
lastName: "maldonado-passage",
office: "gw zoo",
title: "tiger king",
team: "founders",
salary: "10000.00",
manager: "jlowe",
dateHired: "1999-02-23",
birthday: "1963-03-05",
}
],
cursor: '...'
}
Fulfilling Requirement #8.
EmployeeApp.entities.employees
.query.directReports({manager: "jlowe"})
.go()
Returns the following:
{
data: [
{
employee: "jexotic",
firstName: "joe",
lastName: "maldonado-passage",
office: "gw zoo",
title: "tiger king",
team: "founders",
salary: "10000.00",
manager: "jlowe",
dateHired: "1999-02-23",
birthday: "1963-03-05",
}
],
cursor: '...'
}
For an example, let's look at the needs of an application used to manage Shopping Mall properties. The application assists employees in the day-to-day operations of multiple Shopping Malls.
Create a new Entity using the StoreLocations schema defined above
const DynamoDB = require("aws-sdk/clients/dynamodb");
const client = new DynamoDB.DocumentClient();
const StoreLocations = new Entity(model, {client, table: "StoreLocations"});
await StoreLocations.create({
mallId: "EastPointe",
storeId: "LatteLarrys",
buildingId: "BuildingA1",
unitId: "B47",
category: "spite store",
leaseEndDate: "2020-02-29",
rent: "5000.00",
}).go();
Returns the following:
{
"data": {
"mallId": "EastPointe",
"storeId": "LatteLarrys",
"buildingId": "BuildingA1",
"unitId": "B47",
"category": "spite store",
"leaseEndDate": "2020-02-29",
"rent": "5000.00",
"discount": "0.00"
}
}
When updating a record, you must include all Composite Attributes associated with the table's primary PK and SK.
let storeId = "LatteLarrys";
let mallId = "EastPointe";
let buildingId = "BuildingA1";
let unitId = "B47";
await StoreLocations.update({storeId, mallId, buildingId, unitId}).set({
leaseEndDate: "2021-02-28"
}).go();
Returns the following:
{
"data": {
"leaseEndDate": "2021-02-28"
}
}
When retrieving a specific record, you must include all Composite Attributes associated with the table's primary PK and SK.
let storeId = "LatteLarrys";
let mallId = "EastPointe";
let buildingId = "BuildingA1";
let unitId = "B47";
await StoreLocations.get({storeId, mallId, buildingId, unitId}).go();
Returns the following:
{
"mallId": "EastPointe",
"storeId": "LatteLarrys",
"buildingId": "BuildingA1",
"unitId": "B47",
"category": "spite store",
"leaseEndDate": "2021-02-28",
"rent": "5000.00",
"discount": "0.00"
}
When removing a specific record, you must include all Composite Attributes associated with the table's primary PK and SK.
let storeId = "LatteLarrys";
let mallId = "EastPointe";
let buildingId = "BuildingA1";
let unitId = "B47";
await StoreLocations.delete({storeId, mallId, buildingId, unitId}).go();
Returns the following:
{ "data": {} }
Fulfilling Requirement #1.
let mallId = "EastPointe";
let stores = await StoreLocations.malls({mallId}).query().go();
Fulfilling Requirement #1.
let mallId = "EastPointe";
let buildingId = "BuildingA1";
let stores = await StoreLocations.malls({mallId}).query({buildingId}).go();
Fulfilling Requirement #1.
let mallId = "EastPointe";
let buildingId = "BuildingA1";
let unitId = "B47";
let stores = await StoreLocations.malls({mallId}).query({buildingId, unitId}).go();
Fulfilling Requirement #2.
let mallId = "EastPointe";
let category = "food/coffee";
let stores = await StoreLocations.malls({mallId}).byCategory(category).go();
Fulfilling Requirement #3.
let mallId = "EastPointe";
let q2StartDate = "2020-04-01";
let stores = await StoreLocations.leases({mallId}).lt({leaseEndDate: q2StartDate}).go();
Fulfilling Requirement #3.
let mallId = "EastPointe";
let q4StartDate = "2020-10-01";
let q4EndDate = "2020-12-31";
let stores = await StoreLocations.leases({mallId})
.between(
{leaseEndDate: q4StartDate},
{leaseEndDate: q4EndDate})
.go();
Fulfilling Requirement #3.
let mallId = "EastPointe";
let yearStartDate = "2020-01-01";
let yearEndDate = "2020-12-31";
let storeId = "LatteLarrys";
let stores = await StoreLocations.leases({mallId})
.between(
{leaseEndDate: yearStartDate},
{leaseEndDate: yearEndDate})
.filter(attr => attr.category.eq("Spite Store"))
.go();
let mallId = "EastPointe";
let buildingId = "BuildingA1";
let unitId = "B47";
let storeId = "LatteLarrys";
let stores = await StoreLocations.malls({mallId}).query({buildingId, storeId}).go();
ElectroDB uses advanced dynamic typing techniques to automatically create types based on the configuration of your model. Changes to your model will automatically change the types returned by ElectroDB.
If you have a need for a custom attribute type (beyond those supported by ElectroDB), you can use the exported function CustomAttributeType or OpaquePrimitiveType. These functions accept a generic type parameter that allows you to specify a custom attribute with ElectroDB.
This function allows for a narrowing of ElectroDB's any type, which does not enforce runtime type checks. This can be useful for expressing complex attribute types.
The function CustomAttributeType takes one argument, which is the "base" type of the attribute. For complex objects and arrays, the base type would be "any", but you can also use a base type like "string", "number", or "boolean" to accomplish Opaque Keys, which can be used as Composite Attributes.
In this example we accomplish a complex union type:
import { Entity, CustomAttributeType } from 'electrodb';
const table = 'workplace_table';
type PersonnelRole = {
type: 'employee';
startDate: number;
endDate?: number;
} | {
type: 'contractor';
contractStartDate: number;
contractEndDate: number;
};
const person = new Entity({
model: {
entity: 'personnel',
service: 'workplace',
version: '1'
},
attributes: {
id: {
type: 'string'
},
role: {
type: CustomAttributeType<PersonnelRole>('any'),
required: true,
},
},
indexes: {
record: {
pk: {
field: 'pk',
composite: ['id']
},
sk: {
field: 'sk',
composite: [],
}
}
}
}, { table });
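Because CustomAttributeType<PersonnelRole>('any') narrows the static type only and performs no runtime checks, narrowing the union when reading items back is up to application code. A sketch, with PersonnelRole repeated so the example is self-contained:

```typescript
// Same union as in the entity definition above.
type PersonnelRole =
  | { type: "employee"; startDate: number; endDate?: number }
  | { type: "contractor"; contractStartDate: number; contractEndDate: number };

// The `type` discriminant lets TypeScript narrow to the right variant,
// so each branch can safely access variant-specific properties.
function roleStart(role: PersonnelRole): number {
  return role.type === "employee" ? role.startDate : role.contractStartDate;
}

const employeeStart = roleStart({ type: "employee", startDate: 1588291200 });
const contractorStart = roleStart({
  type: "contractor",
  contractStartDate: 1601510400,
  contractEndDate: 1609459200,
});
```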
If you use Opaque Keys for identifiers or other primitive types, you can use the function CustomAttributeType and pass it the primitive base type of your key ('string', 'number', 'boolean'). This can be useful to gain more precise control over which properties can be used as entity identifiers, to create unique unit types, etc.
import { Entity, CustomAttributeType } from 'electrodb';
const UniqueKeySymbol: unique symbol = Symbol();
type EmployeeID = string & {[UniqueKeySymbol]: any};
const UniqueAgeSymbol: unique symbol = Symbol();
type Month = number & {[UniqueAgeSymbol]: any};
const table = 'workplace_table';
const person = new Entity({
model: {
entity: 'personnel',
service: 'workplace',
version: '1'
},
attributes: {
employeeId: {
type: CustomAttributeType<EmployeeID>('string')
},
firstName: {
type: 'string',
required: true,
},
lastName: {
type: 'string',
required: true,
},
ageInMonths: {
type: CustomAttributeType<Month>('number')
}
},
indexes: {
record: {
pk: {
field: 'pk',
composite: ['employeeId']
},
sk: {
field: 'sk',
composite: [],
}
}
}
}, { table });
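The brand in an opaque key exists only at compile time, so creating values is a plain cast with no runtime cost. A minimal sketch; the toEmployeeId helper is illustrative and not part of ElectroDB:

```typescript
// Opaque ("branded") string type, as used in the entity above.
const UniqueKeySymbol: unique symbol = Symbol();
type EmployeeID = string & { [UniqueKeySymbol]: any };

// Runtime no-op; the narrowing is purely compile-time.
function toEmployeeId(raw: string): EmployeeID {
  return raw as EmployeeID;
}

const id = toEmployeeId("cbaskin");
// An EmployeeID is still usable anywhere a string is expected:
const display: string = id.toUpperCase();
// ...but a bare string is not assignable to EmployeeID:
// const bad: EmployeeID = "cbaskin"; // compile error
```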
The following types are exported for easier use while using ElectroDB with TypeScript. The naming convention for the types includes three different kinds:
xResponse -- Types with the postfix Response represent the interfaces returned directly from ElectroDB.
xItem -- Types with the postfix Item represent an Entity row. Queries return multiple items, a get returns a single item, etc. The type for an item is inferred based on the attributes and index definitions within your model. For example, if your attribute is marked as required then that attribute will never be undefined, if your attribute has a default value then it won't be required to be supplied on put, list attributes must be an array, etc.
xRecord -- In some cases it is helpful to have a type that represents all attributes of an item without nullable properties. Types with the postfix Record contain all properties in a non-nullable format.
The following highlights many of the utility types exported from ElectroDB:
The QueryResponse type is the same type returned by an ElectroDB Query.
Definition:
export type QueryResponse<E extends Entity<any, any, any, any>> = {
data: EntityItem<E>;
cursor: string | null;
}
Use:
type EntitySchema = QueryResponse<typeof MyEntity>
The EntityRecord type is an object containing every attribute of an Entity's model.
Definition:
type EntityRecord<E extends Entity<any, any, any, any>> =
E extends Entity<infer A, infer F, infer C, infer S>
? Item<A,F,C,S,S["attributes"]>
: never;
Use:
type Item = EntityRecord<typeof MyEntity>
This type represents an item as it is returned from a query. This is different from the EntityRecord in that this type reflects the required, hidden, default, etc. properties defined on the attribute.
Definition:
export type EntityItem<E extends Entity<any, any, any, any>> =
E extends Entity<infer A, infer F, infer C, infer S>
? ResponseItem<A, F, C, S>
: never;
Use:
type Item = EntityItem<typeof MyEntityInstance>;
This type represents an item returned from a collection query, and is similar to EntityItem.
Definition:
export type CollectionItem<SERVICE extends Service<any>, COLLECTION extends keyof SERVICE["collections"]> =
SERVICE extends Service<infer E>
? Pick<{
[EntityName in keyof E]: E[EntityName] extends Entity<infer A, infer F, infer C, infer S>
? COLLECTION extends keyof CollectionAssociations<E>
? EntityName extends CollectionAssociations<E>[COLLECTION]
? ResponseItem<A,F,C,S>[]
: never
: never
: never
}, COLLECTION extends keyof CollectionAssociations<E>
? CollectionAssociations<E>[COLLECTION]
: never>
: never
Use:
type CollectionResults = CollectionItem<typeof MyServiceInstance, "collectionName">
This type represents the value returned by the collection query itself.
Definition:
export type CollectionResponse<SERVICE extends Service<any>, COLLECTION extends keyof SERVICE["collections"]> = {
data: CollectionItem<SERVICE, COLLECTION>;
cursor: string | null;
}
Use:
type CollectionResults = CollectionResponse<typeof MyServiceInstance, "collectionName">
This type represents an item that you would pass to your entity's put or create method.
Definition:
export type CreateEntityItem<E extends Entity<any, any, any, any>> =
E extends Entity<infer A, infer F, infer C, infer S>
? PutItem<A, F, C, S>
: never;
Use:
type NewThing = CreateEntityItem<typeof MyEntityInstance>;
This type represents an item that you would pass to your entity's set method when using create or update.
Definition:
export type UpdateEntityItem<E extends Entity<any, any, any, any>> =
E extends Entity<infer A, infer F, infer C, infer S>
? SetItem<A, F, C, S>
: never;
Use:
type UpdateProperties = UpdateEntityItem<typeof MyEntityInstance>;
This type represents an item that you would pass to your entity's add method when using create or update.
Definition:
export type UpdateAddEntityItem<E extends Entity<any, any, any, any>> =
E extends Entity<infer A, infer F, infer C, infer S>
? AddItem<A, F, C, S>
: never;
This type represents an item that you would pass to your entity's subtract method when using create or update.
Definition:
export type UpdateSubtractEntityItem<E extends Entity<any, any, any, any>> =
E extends Entity<infer A, infer F, infer C, infer S>
? SubtractItem<A, F, C, S>
: never;
This type represents an item that you would pass to your entity's append method when using create or update.
Definition:
export type UpdateAppendEntityItem<E extends Entity<any, any, any, any>> =
E extends Entity<infer A, infer F, infer C, infer S>
? AppendItem<A, F, C, S>
: never;
This type represents an item that you would pass to your entity's remove method when using create or update.
Definition:
export type UpdateRemoveEntityItem<E extends Entity<any, any, any, any>> =
E extends Entity<infer A, infer F, infer C, infer S>
? RemoveItem<A, F, C, S>
: never;
This type represents an item that you would pass to your entity's delete method when using create or update.
Definition:
export type UpdateDeleteEntityItem<E extends Entity<any, any, any, any>> =
E extends Entity<infer A, infer F, infer C, infer S>
? DeleteItem<A, F, C, S>
: never;
When using ElectroDB with an existing table and/or data model, there are a few configurations you may need to make to your ElectroDB model. Read the sections below to see if any of the following cases fits your particular needs.
Whenever using ElectroDB with existing tables/data, it is best to use the Query Option ignoreOwnership. ElectroDB leaves some metadata on items to help ensure data queried and returned from DynamoDB does not leak between entities. Because your data was not made by ElectroDB, these checks could impede your ability to return data.
// when building params
.params({ignoreOwnership: true})
// when querying the table
.go({ignoreOwnership: true})
Your existing index fields have values with mixed case:
DynamoDB is case-sensitive, and ElectroDB will lowercase key values by default. In the case where you modeled your data with uppercase, or did not apply case modifications, ElectroDB can be configured to match this behavior. Check out the section on Index Casing to read more.
You have index field names that match attribute names:
With Single Table Design, it is encouraged to give index fields a generic name, like pk, sk, gsi1pk, etc. In reality, it is also common for tables to have index fields that are named after the domain itself, like accountId, organizationId, etc.
ElectroDB tries to abstract away key management when working with DynamoDB, so instead of defining pk or sk in your model's attributes, you define them as indexes and map other attributes onto those fields as a composite. By using separate item fields for keys and separate fields for the actual attributes you use in your application, you can leverage more advanced modeling techniques in DynamoDB.
If your existing table uses non-generic fields that also function as attributes, check out the section Attributes as Indexes to learn more about how ElectroDB handles these types of indexes.
NOTE: The ElectroCLI is currently in a beta phase and subject to change.
Electro is a CLI utility toolbox for extending the functionality of ElectroDB. Current functionality of the CLI allows you to:
Execute queries against your Entities, Services, and Models directly from the command line.
Dynamically stand up an HTTP service to interact with your Entities, Services, and Models.
For usage and installation details you can learn more here.
Prior to 2.0.0, ElectroDB had multiple unique response signatures depending on the method used. Queries now return responses within an envelope object with results typically on a property called data. The section Building Queries now has response format examples for all methods, and the section Exported Types has new utility types you can use to express response types in your code.
Version 2.0.0 removes the .page() terminal function and unifies pagination under the .go() method. The response signature for queries, scans, finds, and matches now includes a cursor string that can be passed back into the go method as a query option (e.g. go({cursor})). This new cursor is a departure from the destructured object ElectroDB returned prior for pagination, and is a base64url-encoded string, making it URL safe.
Note: It is still possible to return the native DynamoDB LastEvaluatedKey using the pager and/or data query options.
Another change to pagination involves the "auto-pagination" used with the .go() method. Prior to 2.0.0, the .go() method would paginate through all query results automatically. This was not the behavior for scan, which caused some confusion. All queries and query-like methods (scan, find, match, etc.) now query a single page by default. You can use the query options pages and limit to instruct ElectroDB to automatically iterate through multiple pages, or use pages: 'all' to have ElectroDB automatically exhaust pagination.
Check out the section Pagination Query Options to read more on this topic and to find an example of how to perform pagination with ElectroDB 2.0.0.
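When you do want every page, the cursor-based loop can also be driven by hand. A hedged sketch follows; fetchPage stands in for any ElectroDB query, e.g. (cursor) => tasks.query.assigned({ employee }).go({ cursor }):

```typescript
// Shape of a 2.0-style query response: items plus a cursor,
// where a null cursor means there are no further pages.
type Page<T> = { data: T[]; cursor: string | null };

// Repeatedly fetch pages, feeding each returned cursor back in,
// until the cursor comes back null.
async function collectAllPages<T>(
  fetchPage: (cursor: string | null) => Promise<Page<T>>
): Promise<T[]> {
  const items: T[] = [];
  let cursor: string | null = null;
  do {
    const page = await fetchPage(cursor);
    items.push(...page.data);
    cursor = page.cursor;
  } while (cursor !== null);
  return items;
}
```

This is essentially what pages: 'all' asks ElectroDB to do on your behalf.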
All ElectroDB query and scan operations return a cursor, which is a stringified copy of DynamoDB's LastEvaluatedKey with a base64url encoding. Read the section Pagination Cursor to learn more about how the cursor is formed and how to use it to accomplish pagination in ElectroDB.
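The idea of a base64url cursor can be sketched with plain Node primitives. This is not ElectroDB's internal implementation, just an illustration of serializing a LastEvaluatedKey into a URL-safe string and back:

```typescript
// A LastEvaluatedKey is a map of key field names to values.
type LastEvaluatedKey = Record<string, string>;

// JSON-stringify, then base64url encode. Node's "base64url" variant
// swaps "+/" for "-_" and drops "=" padding, making the result URL safe.
function encodeCursor(key: LastEvaluatedKey): string {
  return Buffer.from(JSON.stringify(key), "utf8").toString("base64url");
}

function decodeCursor(cursor: string): LastEvaluatedKey {
  return JSON.parse(Buffer.from(cursor, "base64url").toString("utf8"));
}

const cursor = encodeCursor({
  pk: "$taskapp#employee_cbaskin",
  sk: "$assignments#tasks_1",
});
const key = decodeCursor(cursor);
```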
This section is to detail any breaking changes made on the journey to a stable 1.0 product.
It became clear when I added the concept of a Service that the "version" paradigm of having the version in the PK wasn't going to work. This is because collection queries use the same PK for all entities, and this would prevent some entities in a Service from changing versions without impacting the service as a whole. The better move is to place the version in the SK after the entity name, so that all versions of an entity can be queried. This will work nicely with the migration feature I have planned that will help migrate between model versions.
To address this change, I decided it would be best to change the structure for defining a model, which is then used as a heuristic to determine where to place the version in the key (PK or SK). This has the benefit of not breaking existing models, but does increase some complexity in the underlying code.
Additionally, a change was made to the Service class. New Services take an object containing your entities instead of a service configuration object as before.
In the old scheme, the version came after the service name (see ^).
pk: $mallstoredirectory_1#mall_eastpointe
^
sk: $mallstores#building_buildinga#store_lattelarrys
In the new scheme, the version comes after the entity name (see ^).
pk: $mallstoredirectory#mall_eastpointe
sk: $mallstores_1#building_buildinga#store_lattelarrys
^
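The key formats above can be sketched as string builders. This is not ElectroDB's actual implementation, only an illustration of the new scheme: the PK is prefixed with the service name, the SK with the entity name plus version, and each composite attribute appends a "#name_value" segment, all lowercased:

```typescript
// Build a PK like "$mallstoredirectory#mall_eastpointe".
function makePk(service: string, composites: Record<string, string>): string {
  const segments = Object.entries(composites)
    .map(([name, value]) => `#${name}_${value}`)
    .join("");
  return `$${service}${segments}`.toLowerCase();
}

// Build an SK like "$mallstores_1#building_buildinga#store_lattelarrys";
// note the version sits right after the entity name.
function makeSk(
  entity: string,
  version: string,
  composites: Record<string, string>
): string {
  const segments = Object.entries(composites)
    .map(([name, value]) => `#${name}_${value}`)
    .join("");
  return `$${entity}_${version}${segments}`.toLowerCase();
}

const pk = makePk("mallstoredirectory", { mall: "EastPointe" });
// "$mallstoredirectory#mall_eastpointe"
const sk = makeSk("mallstores", "1", {
  building: "BuildingA",
  store: "LatteLarrys",
});
// "$mallstores_1#building_buildinga#store_lattelarrys"
```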
In practice the change looks like this for use of Entity
:
const DynamoDB = require("aws-sdk/clients/dynamodb");
const {Entity} = require("electrodb");
const client = new DynamoDB.DocumentClient();
const table = "dynamodb_table_name";
// old way
let old_schema = {
entity: "model_name",
service: "service_name",
version: "1",
table: table,
attributes: {...},
indexes: {...}
};
new Entity(old_schema, {client});
// new way
let new_schema = {
model: {
entity: "model_name",
service: "service_name",
version: "1",
},
attributes: {...},
indexes: {...}
};
new Entity(new_schema, {client, table});
Changes to usage of Service
would look like this:
const DynamoDB = require("aws-sdk/clients/dynamodb");
const {Service} = require("electrodb");
const client = new DynamoDB.DocumentClient();
const table = "dynamodb_table_name";
// old way
new Service({
service: "service_name",
version: "1",
table: table,
}, {client});
// new way
new Service({entity1, entity2, ...})
In preparation for moving the codebase to version 1.0, ElectroDB will now accept the facets property as either the composite and/or template properties. Using the facets property is still accepted by ElectroDB but will be deprecated sometime in the future (tbd).
This change stems from the fact that facets is already a defined term in the DynamoDB space, and that definition does not fit the use-case of how ElectroDB uses the term. To avoid confusion for new developers, the facets property shall now be called composite (as in Composite Attributes) when supplying an array of attributes, and template when supplying a string. These are two independent fields for two reasons:
ElectroDB will validate the Composite Attributes provided map to those in the template (more validation is always nice).
Allowing for the composite array to be supplied independently will allow Composite Attributes to remain typed even when using a Composite Attribute Template.
1.0.0 brings back a null response from the get() method when a record could not be found. Prior to 1.0.0, ElectroDB returned an empty object.
[2.3.3] - 2022-11-28
remove and delete functionality between update and patch methods.
FAQs
A library to more easily create and interact with multiple entities and hierarchical relationships in DynamoDB
The npm package electrodb receives a total of 411,753 weekly downloads. As such, electrodb popularity was classified as popular.
We found that electrodb demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 0 open source maintainers collaborating on the project.