redlock - npm Package Compare versions

Comparing version 5.0.0-alpha.0 to 5.0.0-beta.0

dist/cjs/index.js


CHANGELOG.md

@@ -24,1 +24,10 @@ ## v4.0.0
 - **BREAKING** Drop support for Node < 12
+## v5.0.0-beta1
+- Compile to both ESM and CJS (@ekosz via [#114](https://github.com/mike-marcacci/node-redlock/pull/114/)).
+- Add compatibility with TypeScript 4.4 (@slosd via [#104](https://github.com/mike-marcacci/node-redlock/pull/104)).
+- Use docker compose to test against real clusters in CI (via #101)
+- Add documentation for contributing.
+- Upgrade dependencies.
+- **BREAKING** Change types for "using" helper (@ekosz via [#113](https://github.com/mike-marcacci/node-redlock/pull/113/)).


dist/index.d.ts
 /// <reference types="node" />
 import { EventEmitter } from "events";
-import { Redis as IORedisClient } from "ioredis";
-declare type Client = IORedisClient;
+import { Redis as IORedisClient, Cluster as IORedisCluster } from "ioredis";
+declare type Client = IORedisClient | IORedisCluster;
 export declare type ClientExecutionResult = {

@@ -153,5 +153,5 @@ client: Client;
 */
-using<T>(resources: string[], duration: number, settings: Partial<Settings>, routine?: (signal: RedlockAbortSignal) => T): Promise<T>;
-using<T>(resources: string[], duration: number, routine: (signal: RedlockAbortSignal) => T): Promise<T>;
+using<T>(resources: string[], duration: number, settings: Partial<Settings>, routine?: (signal: RedlockAbortSignal) => Promise<T>): Promise<T>;
+using<T>(resources: string[], duration: number, routine: (signal: RedlockAbortSignal) => Promise<T>): Promise<T>;
 }
 export {};
 {
 "name": "redlock",
-"version": "5.0.0-alpha.0",
+"version": "5.0.0-beta.0",
 "description": "A node.js redlock implementation for distributed redis locks",

@@ -13,3 +13,11 @@ "license": "MIT",
 "bugs": "https://github.com/mike-marcacci/node-redlock/issues",
-"main": "dist/index.js",
+"main": "./dist/cjs/index.js",
+"module": "./dist/esm/index.js",
 "types": "./dist/index.d.ts",
+"exports": {
+".": {
+"import": "./dist/esm/index.js",
+"require": "./dist/cjs/index.js"
+}
+},
 "keywords": [

@@ -24,4 +32,6 @@ "nodejs",
 "dist/index.d.ts",
-"dist/index.js",
-"dist/index.js.map"
+"dist/esm/index.js",
+"dist/esm/index.js.map",
+"dist/cjs/index.js",
+"dist/cjs/index.js.map"
 ],

@@ -38,14 +48,14 @@ "engines": {
 "devDependencies": {
-"@types/ioredis": "^4.26.6",
-"@types/node": "^16.4.2",
-"@typescript-eslint/eslint-plugin": "^4.28.4",
-"@typescript-eslint/parser": "^4.28.4",
+"@types/ioredis": "^4.28.1",
+"@types/node": "^16.11.10",
+"@typescript-eslint/eslint-plugin": "^5.4.0",
+"@typescript-eslint/parser": "^5.4.0",
 "ava": "^3.13.0",
-"eslint": "^7.31.0",
+"eslint": "^8.3.0",
 "eslint-config-prettier": "^8.3.0",
-"eslint-plugin-prettier": "^3.2.0",
-"ioredis": "^4.19.2",
-"nodemon": "^2.0.6",
-"prettier": "^2.2.1",
-"typescript": "^4.1.2"
+"eslint-plugin-prettier": "^4.0.0",
+"ioredis": "^4.28.1",
+"nodemon": "^2.0.15",
+"prettier": "^2.5.0",
+"typescript": "~4.5.2"
 },

@@ -55,6 +65,6 @@ "scripts": {
 "lint": "prettier -c '**/*.{json,yml,md,ts}' && eslint src --ext ts",
-"build": "rm -rf dist && tsc",
-"build:development": "rm -rf dist && tsc --watch",
-"test": "ava --verbose dist/*.test.js",
-"test:development": "ava --verbose --watch dist/*.test.js",
+"build": "rm -f dist/**/*.{js,js.map,d.ts} && tsc && tsc -p tsconfig.cjs.json",
+"build:development": "rm -f dist/**/*.{js,js.map,d.ts} && tsc --watch",
+"test": "cd dist/esm && ava --verbose *.test.js",
+"test:development": "cd dist/esm && ava --verbose --watch *.test.js",
 "prepare": "yarn build",

@@ -64,3 +74,3 @@ "prepublishOnly": "yarn install && yarn lint && yarn build"
 "dependencies": {
-"node-abort-controller": "^2.0.0"
+"node-abort-controller": "^3.0.1"
 },

@@ -67,0 +77,1 @@
+"type": "module",

@@ -1,2 +1,2 @@
-[![Continuous Integration](https://github.com/mike-marcacci/node-redlock/workflows/Continuous%20Integration/badge.svg)](https://github.com/mike-marcacci/node-redlock/actions/workflows/ci.yml)
+[![Continuous Integration](https://github.com/mike-marcacci/node-redlock/workflows/Continuous%20Integration/badge.svg)](https://github.com/mike-marcacci/node-redlock/actions/workflows/ci.yml?query=branch%3Amain++)
 [![Current Version](https://badgen.net/npm/v/redlock)](https://npm.im/redlock)

@@ -11,30 +11,6 @@ [![Supported Node.js Versions](https://badgen.net/npm/node/redlock)](https://npm.im/redlock)

- [Usage](#usage)
- [Error Handling](#error-handling)
- [API](#api)
- [Guidance](#guidance)
### High-Availability Recommendations
- Use at least 3 independent servers or clusters
- Use an odd number of independent redis **_servers_** for most installations
- Use an odd number of independent redis **_clusters_** for massive installations
- When possible, distribute redis nodes across different physical machines
### Using Cluster/Sentinel
**_Please make sure to use a client with built-in cluster support, such as [ioredis](https://github.com/luin/ioredis)._**
It is completely possible to use a _single_ redis cluster or sentinel configuration by passing one preconfigured client to redlock. While you do gain high availability and vastly increased throughput under this scheme, the failure modes are a bit different, and it becomes theoretically possible that a lock is acquired twice:
Assume you are using eventually-consistent redis replication, and you acquire a lock for a resource. Immediately after acquiring your lock, the redis master for that shard crashes. Redis does its thing and fails over to the slave which hasn't yet synced your lock. If another process attempts to acquire a lock for the same resource, it will succeed!
This is why redlock allows you to specify multiple independent nodes/clusters: by requiring consensus between them, we can safely take out or fail over a minority of nodes without invalidating active locks.
To learn more about the algorithm, check out the [redis distlock page](http://redis.io/topics/distlock).
### How do I check if something is locked?
The purpose of redlock is to provide exclusivity guarantees on a resource over a duration of time; it is not designed to report the ownership status of a resource. For example, if you are on the smaller side of a network partition you will fail to acquire a lock, but you don't know whether the lock exists on the other side; all you know is that you can't guarantee exclusivity on yours. This is further complicated by retry behavior, and even more so when acquiring a lock on more than one resource.
That said, for many tasks it's sufficient to attempt a lock with `retryCount=0`, and treat a failure as the resource being "locked" or (more correctly) "unavailable".
Note that with `retryCount=-1` there will be unlimited retries until the lock is acquired.
## Installation

@@ -88,2 +64,41 @@

## Usage
The `using` method wraps and executes a routine in the context of an auto-extending lock, returning a promise of the routine's value. In the case that auto-extension fails, an AbortSignal will be updated to indicate that abortion of the routine is in order, and to pass along the encountered error.
```ts
await redlock.using([senderId, recipientId], 5000, async (signal) => {
// Do something...
await something();
// Make sure any attempted lock extension has not failed.
if (signal.aborted) {
throw signal.error;
}
// Do something else...
await somethingElse();
});
```
Alternatively, locks can be acquired and released directly:
```ts
// Acquire a lock.
let lock = await redlock.acquire(["a"], 5000);
try {
// Do something...
await something();
// Extend the lock.
lock = await lock.extend(5000);
// Do something else...
await somethingElse();
} finally {
// Release the lock.
await lock.release();
}
```
## Error Handling

@@ -109,42 +124,45 @@

-## Usage
-The `using` method wraps and executes a routine in the context of an auto-extending lock, returning a promise of the routine's value. In the case that auto-extension fails, an AbortSignal will be updated to indicate that abortion of the routine is in order, and to pass along the encountered error.
-```ts
-await redlock.using([senderId, recipientId], 5000, async (signal) => {
-  // Do something...
-  await something();
-  // Make sure any necessary lock extension has not failed.
-  if (signal.aborted) {
-    throw signal.error;
-  }
-  // Do something else...
-  await somethingElse();
-});
-```
-Alternatively, locks can be acquired and released directly:
-```ts
-// Acquire a lock.
-let lock = await redlock.acquire(["a"], 5000);
-try {
-  // Do something...
-  await something();
-  // Extend the lock.
-  lock = await lock.extend(5000);
-  // Do something else...
-  await somethingElse();
-} finally {
-  // Release the lock.
-  await lock.release();
-}
-```
-## API
-Please view the (very concise) source code or TypeScript definitions for a detailed breakdown of the API.
+## API
+Please view the (very concise) source code or TypeScript definitions for a detailed breakdown of the API.
+## Guidance
+### Contributing
+Please see [`CONTRIBUTING.md`](./CONTRIBUTING.md) for information on developing, running, and testing this library.
+### High-Availability Recommendations
+- Use at least 3 independent servers or clusters
+- Use an odd number of independent redis **_servers_** for most installations
+- Use an odd number of independent redis **_clusters_** for massive installations
+- When possible, distribute redis nodes across different physical machines
+### Using Cluster/Sentinel
+**_Please make sure to use a client with built-in cluster support, such as [ioredis](https://github.com/luin/ioredis)._**
+It is completely possible to use a _single_ redis cluster or sentinel configuration by passing one preconfigured client to redlock. While you do gain high availability and vastly increased throughput under this scheme, the failure modes are a bit different, and it becomes theoretically possible that a lock is acquired twice:
+Assume you are using eventually-consistent redis replication, and you acquire a lock for a resource. Immediately after acquiring your lock, the redis master for that shard crashes. Redis does its thing and fails over to the slave which hasn't yet synced your lock. If another process attempts to acquire a lock for the same resource, it will succeed!
+This is why redlock allows you to specify multiple independent nodes/clusters: by requiring consensus between them, we can safely take out or fail over a minority of nodes without invalidating active locks.
+To learn more about the algorithm, check out the [redis distlock page](http://redis.io/topics/distlock).
+Also note that when acquiring a lock on multiple resources, commands are executed in a single call to redis, and redis clusters require that all keys in a command belong to the same node. **If you are using a redis cluster or clusters and need to lock multiple resources together you MUST use [redis hash tags](https://redis.io/topics/cluster-spec#keys-hash-tags) (i.e. use `ignored{considered}ignored{ignored}` notation in resource strings) to ensure that all keys resolve to the same node.** Choosing what data to include must be done thoughtfully, because representing the same conceptual resource in more than one way defeats the purpose of acquiring a lock. Accordingly, it's generally wise to use a single very generic prefix to ensure that ALL lock keys resolve to the same node, such as `{redlock}my_resource`. This is the most straightforward strategy and may be appropriate when the cluster has additional purposes. However, when locks will always naturally share a common attribute (for example, an organization/tenant ID), this may be used for better key distribution and cluster utilization. You can also achieve ideal utilization by completely omitting a hash tag if you do _not_ need to lock multiple resources at the same time.
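The hash-tag rule above can be sketched as a small helper. This `hashTag` function is illustrative only (it is not part of redlock's API): per the Redis cluster spec, the slot is computed from the substring between the first `{` and the next `}` when such a non-empty substring exists, so any two keys sharing that substring are guaranteed to resolve to the same node.

```ts
// Extract the cluster hash tag per the Redis cluster spec: the substring
// between the first "{" and the first following "}", if non-empty.
// Otherwise the whole key is used for slot hashing.
function hashTag(key: string): string {
  const open = key.indexOf("{");
  if (open === -1) return key;
  const close = key.indexOf("}", open + 1);
  if (close === -1 || close === open + 1) return key; // no tag or empty "{}"
  return key.slice(open + 1, close);
}

// Both resources hash on "tenant:42", so they land on the same node
// and can safely be locked together in a single call.
hashTag("{tenant:42}user:1"); // "tenant:42"
hashTag("{tenant:42}user:2"); // "tenant:42"

// A generic prefix pins ALL lock keys to one node.
hashTag("{redlock}my_resource"); // "redlock"
```

Keys without any hash tag (e.g. `my_resource`) are hashed whole, which spreads them across the cluster; that is what makes the omit-the-tag strategy ideal for utilization when multi-resource locks are not needed.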
+### How do I check if something is locked?
+The purpose of redlock is to provide exclusivity guarantees on a resource over a duration of time; it is not designed to report the ownership status of a resource. For example, if you are on the smaller side of a network partition you will fail to acquire a lock, but you don't know whether the lock exists on the other side; all you know is that you can't guarantee exclusivity on yours. This is further complicated by retry behavior, and even more so when acquiring a lock on more than one resource.
+That said, for many tasks it's sufficient to attempt a lock with `retryCount=0`, and treat a failure as the resource being "locked" or (more correctly) "unavailable".
+Note that with `retryCount=-1` there will be unlimited retries until the lock is acquired.
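The `retryCount=0` pattern above can be sketched as follows. `isProbablyAvailable` is a hypothetical helper, not part of redlock; it is typed structurally so anything with the shape of redlock's `acquire` method (including a test double) can be passed in.

```ts
// A single-attempt lock probe: with retryCount 0, acquire() makes exactly
// one attempt, and a failure is treated as "unavailable".
type Lock = { release(): Promise<unknown> };
type Acquire = (
  resources: string[],
  duration: number,
  settings?: { retryCount?: number }
) => Promise<Lock>;

async function isProbablyAvailable(acquire: Acquire, resource: string): Promise<boolean> {
  try {
    const lock = await acquire([resource], 1000, { retryCount: 0 });
    await lock.release(); // release the probe lock immediately
    return true;
  } catch {
    // Failure only proves this attempt could not guarantee exclusivity.
    return false;
  }
}
```

Note that a `false` result is not an authoritative "locked" status; as described above, it only means exclusivity could not be guaranteed at that moment.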
+### Use in CommonJS projects
+Beginning in version 5, this package is published as an ECMAScript module. While this is universally accepted as the format of the future, there remain some quirks when it is used in CommonJS node applications. To provide better ergonomics for use in CommonJS projects, this package **also** distributes a CommonJS version. Please ensure that your project uses either the CommonJS or the ECMAScript version, **but NOT both**.
+In version 6, this package will stop distributing a CommonJS version.