Distributed locks with "prioritized lock acquisition queue" capabilities based on the Redis Database.
Each lock request is put into the request queue (each lock is hosted by its own queue, separate from other queues) and processed in order of priority (FIFO). Each lock request lives for some period of time (RTTL) (with requeue capabilities), which guarantees that the request queue will never be stacked.
In addition to the classic queued (FIFO) strategy, RQL supports a random (RANDOM) lock obtaining strategy where any acquirer from the lock queue can obtain the lock regardless of its position in the queue.
Provides flexible invocation flow, parametrized limits (lock request ttl, lock ttl, queue ttl, lock attempts limit, fast failing, etc), logging and instrumentation.
Requirements:
- Redis version: ~> 7.x (RESP3);
- redis-client: ~> 0.20;
- Ruby: >= 3.1;
- Performance note: ~3000 locks-per-second are obtained and released on an ongoing basis with the hiredis driver enabled (it is enabled by default on our projects where redis_queued_locks is used).
Soon: detailed explanation.
gem 'redis_queued_locks'
bundle install
# --- or ---
gem install redis_queued_locks
require 'redis_queued_locks'
require 'redis_queued_locks'
# Step 1: initialize RedisClient instance
redis_client = RedisClient.config.new_pool # NOTE: provide your own RedisClient instance
# Step 2: initialize RedisQueuedLock::Client instance
rq_lock_client = RedisQueuedLocks::Client.new(redis_client) do |config|
# NOTE:
# - your application-related configs;
# - for documentation see <Configuration> section in readme;
end
# Step 3: start to work with locks :)
rq_lock_client.lock("some-lock") { puts "Hello, lock!" }
redis_client = RedisClient.config.new_pool # NOTE: provide your own RedisClient instance
client = RedisQueuedLocks::Client.new(redis_client) do |config|
# (default: 3) (supports nil)
# - nil means "infinite retries" and you are only limited by the "try_to_lock_timeout" config;
config.retry_count = 3
# (milliseconds) (default: 200)
config.retry_delay = 200
# (milliseconds) (default: 25)
config.retry_jitter = 25
# (seconds) (supports nil)
# - nil means "no timeout" and you are only limited by "retry_count" config;
config.try_to_lock_timeout = 10
# (milliseconds) (default: 5_000)
# - lock's time to live
config.default_lock_ttl = 5_000
# (seconds) (default: 15)
# - lock request timeout. after this timeout your lock request in the queue will be requeued with a new position (at the end of the queue);
config.default_queue_ttl = 15
# (boolean) (default: false)
# - should all blocks of code be timed by default;
config.is_timed_by_default = false
# (boolean) (default: false)
# - When the lock acquirement try reaches the acquirement time limit (:timeout option) the
# `RedisQueuedLocks::LockAcquirementTimeoutError` is raised (when the `raise_errors` option
# of the #lock method is set to `true`). The error message contains the lock key name and
# the timeout value;
# - <true> adds additional details to the error message:
# - the current lock queue state (you can see which acquirer blocks your request and
# how many acquirers are in the queue);
# - the current lock data stored inside (for example: you can check the current acquirer and
# the lock meta state if you store some additional data there);
# - Realized as an option because the additional lock data requires two additional Redis
# queries: (1) get the current lock from redis and (2) fetch the lock queue state;
# - These two additional Redis queries have an async nature, so you can receive
# inconsistent data of the lock and of the lock queue in your error message because:
# - the required lock can be released after the error moment and before the error message is built;
# - the required lock can be obtained by another process after the error moment and
# before the error message is built;
# - the required lock queue can reach a state when the blocking acquirer starts to obtain the lock
# and is removed from the lock queue after the error moment and before the error message is built;
# - You should consider the async nature of this error message and use the received data
# from the error message correspondingly;
config.detailed_acq_timeout_error = false
# (symbol) (default: :queued)
# - Defines the way in which the lock should be obtained;
# - By default it is configured to obtain a lock in the classic `queued` way:
# you should wait for your position in the queue in order to obtain a lock;
# - Can be customized in the `#lock` and `#lock!` methods via the `:access_strategy` attribute (see the method signatures of the #lock and #lock! methods);
# - Supports different strategies:
# - `:queued` (FIFO): the classic queued behavior (default), your lock will be obtained if you are first in the queue and the required lock is free;
# - `:random` (RANDOM): obtain a lock without checking the positions in the queue (but with checking the limits,
# retries, timeouts and so on). if the lock is free to obtain - it will be obtained;
config.default_access_strategy = :queued
# (symbol) (default: :wait_for_lock)
# - Global default conflict strategy mode;
# - Can be customized in the `#lock` and `#lock!` methods via the `:conflict_strategy` attribute (see the method signatures of the #lock and #lock! methods);
# - A conflict strategy is the logical behavior for cases when the process that obtained the lock wants to acquire this lock again;
# - Realizes the "reentrant locks" abstraction (same process conflict / same process deadlock);
# - By default uses the `:wait_for_lock` strategy (classic way);
# - Strategies:
# - `:work_through` - continue working under the lock <without> lock's TTL extension;
# - `:extendable_work_through` - continue working under the lock <with> lock's TTL extension;
# - `:wait_for_lock` - (default) - work in the classic way (with timeouts, retry delays, retry limits, etc - in the classic way :));
# - `:dead_locking` - fail with a deadlock exception;
# - See the "Deadlocks and Reentrant Locks" documentation section in README.md for details;
config.default_conflict_strategy = :wait_for_lock
# (default: 100)
# - how many items will be released at a time in #clear_locks and in #clear_dead_requests (uses SCAN);
# - affects the performance of your Redis and Ruby Application (configure thoughtfully);
config.lock_release_batch_size = 100
# (default: 500)
# - how many items should be extracted from redis during the #locks, #queues, #keys
# #locks_info, and #queues_info operations (uses SCAN);
# - affects the performance of your Redis and Ruby Application (configure thoughtfully);
config.key_extraction_batch_size = 500
# (default: 1 day)
# - the default period of time (in milliseconds) after which a lock request is considered dead;
# - used for `#clear_dead_requests` as the default value of the `:dead_ttl` option;
config.dead_request_ttl = (1 * 24 * 60 * 60 * 1000) # one day in milliseconds
# (default: RedisQueuedLocks::Instrument::VoidNotifier)
# - instrumentation layer;
# - you can provide your own instrumenter that should realize the `#notify(event, payload = {})` interface:
# - event: <string> required;
# - payload: <hash> required;
# - disabled by default via `VoidNotifier`;
config.instrumenter = RedisQueuedLocks::Instrument::ActiveSupport
# (default: -> { RedisQueuedLocks::Resource.calc_uniq_identity })
# - a unique identifier that is unique per process/pod;
# - prevents potential lock-acquirement collisions between different processes/pods
# that have identical process_id/thread_id/fiber_id/ractor_id (identical acquirer ids);
# - it is calculated once per `RedisQueuedLocks::Client` instance;
# - expects the proc object;
# - `SecureRandom.hex(8)` by default;
config.uniq_identifier = -> { RedisQueuedLocks::Resource.calc_uniq_identity }
# (default: RedisQueuedLocks::Logging::VoidLogger)
# - the logger object;
# - should implement `debug(progname = nil, &block)` (minimal requirement) or be an instance of Ruby's `::Logger` class/subclass;
# - supports `SemanticLogger::Logger` (see "semantic_logger" gem)
# - at this moment, debug logs are only produced in the following cases:
# - "[redis_queued_locks.start_lock_obtaining]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.start_try_to_lock_cycle]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.dead_score_reached__reset_acquier_position]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.lock_obtained]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acq_time", "acs_strat");
# - "[redis_queued_locks.extendable_reentrant_lock_obtained]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acq_time", "acs_strat");
# - "[redis_queued_locks.reentrant_lock_obtained]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acq_time", "acs_strat");
# - "[redis_queued_locks.fail_fast_or_limits_reached_or_deadlock__dequeue]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.expire_lock]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.decrease_lock]" (logs "lock_key", "decreased_ttl", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - by default uses VoidLogger that does nothing;
config.logger = RedisQueuedLocks::Logging::VoidLogger
# (default: false)
# - adds additional debug logs;
# - enables additional logs for each internal try-retry lock acquiring (a lot of logs can be generated depending on your retry configurations);
# - it adds following debug logs in addition to the existing:
# - "[redis_queued_locks.try_lock.start]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.try_lock.rconn_fetched]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.try_lock.same_process_conflict_detected]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.try_lock.same_process_conflict_analyzed]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "spc_status");
# - "[redis_queued_locks.try_lock.reentrant_lock__extend_and_work_through]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "spc_status", "last_ext_ttl", "last_ext_ts");
# - "[redis_queued_locks.try_lock.reentrant_lock__work_through]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "spc_status", last_spc_ts);
# - "[redis_queued_locks.try_lock.acq_added_to_queue]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat")";
# - "[redis_queued_locks.try_lock.remove_expired_acqs]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.try_lock.get_first_from_queue]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "first_acq_id_in_queue");
# - "[redis_queued_locks.try_lock.exit__queue_ttl_reached]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.try_lock.exit__no_first]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "first_acq_id_in_queue", "<current_lock_data>");
# - "[redis_queued_locks.try_lock.exit__lock_still_obtained]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "first_acq_id_in_queue", "locked_by_acq_id", "<current_lock_data>");
# - "[redis_queued_locks.try_lock.obtain__free_to_acquire]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
config.log_lock_try = false
# (default: false)
# - enables <log sampling>: only the configured percent of RQL cases will be logged;
# - disabled by default;
# - works in tandem with the <config.log_sampling_percent> and <config.log_sampler> configs;
config.log_sampling_enabled = false
# (default: 15)
# - the percent of cases that should be logged;
# - takes effect when <config.log_sampling_enabled> is true;
# - works in tandem with <config.log_sampling_enabled> and <config.log_sampler> configs;
config.log_sampling_percent = 15
# (default: RedisQueuedLocks::Logging::Sampler)
# - a percent-based log sampler that decides whether an RQL case should be logged or not;
# - works in tandem with <config.log_sampling_enabled> and <config.log_sampling_percent> configs;
# - based on the ultra simple percent-based (weight-based) algorithm that uses SecureRandom.rand
# method so the algorithm error is ~(0%..13%);
# - you can provide your own log sampler with a better algorithm that should realize the
# `sampling_happened?(percent) => boolean` interface (see `RedisQueuedLocks::Logging::Sampler` for example);
config.log_sampler = RedisQueuedLocks::Logging::Sampler
# (default: false)
# - enables <instrumentation sampling>: only the configured percent of RQL cases will be instrumented;
# - disabled by default;
# - works in tandem with the <config.instr_sampling_percent> and <config.instr_sampler> configs;
config.instr_sampling_enabled = false
# (default: 15)
# - the percent of cases that should be instrumented;
# - takes effect when <config.instr_sampling_enabled> is true;
# - works in tandem with <config.instr_sampling_enabled> and <config.instr_sampler> configs;
config.instr_sampling_percent = 15
# (default: RedisQueuedLocks::Instrument::Sampler)
# - a percent-based instrumentation sampler that decides whether an RQL case should be instrumented or not;
# - works in tandem with <config.instr_sampling_enabled> and <config.instr_sampling_percent> configs;
# - based on the ultra simple percent-based (weight-based) algorithm that uses SecureRandom.rand
# method so the algorithm error is ~(0%..13%);
# - you can provide your own instrumentation sampler with a better algorithm that should realize the
# `sampling_happened?(percent) => boolean` interface (see `RedisQueuedLocks::Instrument::Sampler` for example);
config.instr_sampler = RedisQueuedLocks::Instrument::Sampler
end
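The instrumenter, logger and sampler settings above only require small duck-typed interfaces (#notify(event, payload = {}), #debug, and sampling_happened?(percent) respectively), so you can plug in your own objects. A minimal sketch with illustrative class names (MyInstrumenter and MyLogSampler are not part of the gem):
require 'logger'
require 'redis_queued_locks'

# Illustrative custom instrumenter: RQL only requires a `#notify(event, payload = {})` interface.
class MyInstrumenter
  def self.notify(event, payload = {})
    puts "[rql instrumentation] #{event} => #{payload.inspect}"
  end
end

# Illustrative custom log sampler: RQL only requires `sampling_happened?(percent) => boolean`.
module MyLogSampler
  def self.sampling_happened?(percent)
    rand(100) < percent # log roughly `percent`% of cases
  end
end

RedisQueuedLocks::Client.new(redis_client) do |config|
  config.instrumenter = MyInstrumenter
  config.logger = Logger.new($stdout) # any object responding to #debug works
  config.log_sampling_enabled = true
  config.log_sampling_percent = 10
  config.log_sampler = MyLogSampler
end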
#lock
- obtain a lock;
- the passed block of code (if any) can be timed via the timed: true option;

def lock(
lock_name,
ttl: config[:default_lock_ttl],
queue_ttl: config[:default_queue_ttl],
timeout: config[:try_to_lock_timeout],
timed: config[:is_timed_by_default],
retry_count: config[:retry_count],
retry_delay: config[:retry_delay],
retry_jitter: config[:retry_jitter],
raise_errors: false,
fail_fast: false,
conflict_strategy: config[:default_conflict_strategy],
access_strategy: config[:default_access_strategy],
identity: uniq_identity, # (attr_accessor) calculated during client instantiation via config[:uniq_identifier] proc;
meta: nil,
detailed_acq_timeout_error: config[:detailed_acq_timeout_error],
instrument: nil,
instrumenter: config[:instrumenter],
logger: config[:logger],
log_lock_try: config[:log_lock_try],
log_sampling_enabled: config[:log_sampling_enabled],
log_sampling_percent: config[:log_sampling_percent],
log_sampler: config[:log_sampler],
log_sample_this: false,
instr_sampling_enabled: config[:instr_sampling_enabled],
instr_sampling_percent: config[:instr_sampling_percent],
instr_sampler: config[:instr_sampler],
instr_sample_this: false,
&block
)
Accepts:
- lock_name - (required) [String] - the name of the lock;
- ttl - (optional) [Integer] - the lock's time to live (in milliseconds); config[:default_lock_ttl] by default;
- queue_ttl - (optional) [Integer] - the lock request's queue position lifetime (in seconds); config[:default_queue_ttl] by default;
- timeout - (optional) [Integer,NilClass] - the lock acquirement timeout (in seconds); config[:try_to_lock_timeout] by default;
- timed - (optional) [Boolean] - should the passed block of code be timed or not; config[:is_timed_by_default] by default (false by default);
- retry_count - (optional) [Integer,NilClass] - config[:retry_count] by default;
- retry_delay - (optional) [Integer] - config[:retry_delay] by default;
- retry_jitter - (optional) [Integer] - config[:retry_jitter] by default;
- instrumenter - (optional) [#notify] - config[:instrumenter] by default (the void notifier RedisQueuedLocks::Instrument::VoidNotifier);
- instrument - (optional) [NilClass,Any] - nil by default (means "no custom instrumentation data");
- raise_errors - (optional) [Boolean] - false by default;
- fail_fast - (optional) [Boolean] - false by default;
- access_strategy - (optional) [Symbol] - the way in which the lock should be obtained; by default the lock is obtained in the classic :queued way: you should wait for your position in the queue in order to obtain a lock; config[:default_access_strategy] by default;
  - :queued (FIFO): (default) the classic queued behavior, your lock will be obtained if you are first in the queue and the required lock is free;
  - :random (RANDOM): obtain a lock without checking the positions in the queue (but with checking the limits, retries, timeouts and so on); if the lock is free to obtain - it will be obtained;
- conflict_strategy - (optional) [Symbol] - the conflict strategy used when the same process tries to acquire the lock again; the :wait_for_lock strategy by default; config[:default_conflict_strategy] by default;
  - :work_through - continue working under the lock without lock's TTL extension;
  - :extendable_work_through - continue working under the lock with lock's TTL extension;
  - :wait_for_lock - (default) - work in the classic way (with timeouts, retry delays, retry limits, etc - in the classic way :));
  - :dead_locking - fail with a deadlock exception;
- identity - (optional) [String] - a unique string that is unique per RedisQueuedLock::Client instance. Resolves collisions between the same process_id/thread_id/fiber_id/ractor_id identifiers on different pods or/and nodes of your application; calculated once during RedisQueuedLock::Client instantiation and stored in the @uniq_identity ivar (accessed via the uniq_identity accessor method); config[:uniq_identifier] by default;
- meta - (optional) [NilClass,Hash<String|Symbol,Any>] - custom metadata appended to the lock data (should not contain the reserved lock data keys: lock_key, acq_id, ts, ini_ttl, rem_ttl); nil by default (means "no metadata");
- detailed_acq_timeout_error - (optional) [Boolean] - when the lock acquirement try reaches the acquirement time limit, RedisQueuedLocks::LockAcquirementTimeoutError is raised (when the raise_errors option is set to true); the error message contains the lock key name and the timeout value; config[:detailed_acq_timeout_error] by default;
- logger - (optional) [::Logger,#debug] - config[:logger] by default (the void logger RedisQueuedLocks::Logging::VoidLogger);
- log_lock_try - (optional) [Boolean] - config[:log_lock_try] by default (false by default);
- log_sampling_enabled - (optional) [Boolean] - works in tandem with the log_sampling_percent and log_sampler options; config[:log_sampling_enabled] by default;
- log_sampling_percent - (optional) [Integer] - takes effect when log_sampling_enabled is true; works in tandem with the log_sampling_enabled and log_sampler options; config[:log_sampling_percent] by default;
- log_sampler - (optional) [#sampling_happened?,Module<RedisQueuedLocks::Logging::Sampler>] - works in tandem with the log_sampling_enabled and log_sampling_percent options; should realize the sampling_happened?(percent) => boolean interface (see RedisQueuedLocks::Logging::Sampler for example); config[:log_sampler] by default;
- log_sample_this - (optional) [Boolean] - false by default;
- instr_sampling_enabled - (optional) [Boolean] - works in tandem with the instr_sampling_percent and instr_sampler options; config[:instr_sampling_enabled] by default;
- instr_sampling_percent - (optional) [Integer] - takes effect when instr_sampling_enabled is true; works in tandem with the instr_sampling_enabled and instr_sampler options; config[:instr_sampling_percent] by default;
- instr_sampler - (optional) [#sampling_happened?,Module<RedisQueuedLocks::Instrument::Sampler>] - works in tandem with the instr_sampling_enabled and instr_sampling_percent options; should realize the sampling_happened?(percent) => boolean interface (see RedisQueuedLocks::Instrument::Sampler for example); config[:instr_sampler] by default;
- instr_sample_this - (optional) [Boolean] - false by default;
- block - (optional) [Block] - a block of code executed under the obtained lock; the block execution can be timed via the timed: true option (rql.lock("my_lock", timed: true, ttl: 5_000) { ... });

Return value:
result = rql.lock("my_lock") { 1 + 1 }
result # => 2
result = rql.lock("my_lock")
result # =>
{
ok: true,
result: {
lock_key: "rql:lock:my_lock",
acq_id: "rql:acq:26672/2280/2300/2320/70ea5dbf10ea1056",
ts: 1711909612.653696,
ttl: 10000,
process: :lock_obtaining
}
}
Signature: [yield, Hash<Symbol,Boolean|Hash<Symbol,Numeric|String>>]
Format: { ok: true/false, result: <Symbol|Hash<Symbol,Hash>> }
Includes the :process key that describes the logical type of the lock obtaining process. Possible values:
- :lock_obtaining - classic lock obtaining process. Default behavior (conflict_strategy: :wait_for_lock);
- :extendable_conflict_work_through - reentrant lock acquiring process with lock's TTL extension. Suitable for conflict_strategy: :extendable_work_through;
- :conflict_work_through - reentrant lock acquiring process without lock's TTL extension. Suitable for conflict_strategy: :work_through;
- :conflict_dead_lock - the current process tries to acquire a lock that is already acquired by it. Suitable for conflict_strategy: :dead_locking;

For successful lock obtaining:
{
ok: true,
result: {
lock_key: String, # acquired lock key ("rql:lock:your_lock_name")
acq_id: String, # acquirer identifier ("process_id/thread_id/fiber_id/ractor_id/identity")
hst_id: String, # host identifier ("process_id/thread_id/ractor_id/identity")
ts: Float, # time (epoch) when lock was obtained (float, Time#to_f)
ttl: Integer, # lock's time to live in milliseconds (integer)
process: Symbol # which logical process has acquired the lock (:lock_obtaining, :extendable_conflict_work_through, :conflict_work_through, :conflict_dead_lock)
}
}
# example:
{
ok: true,
result: {
lock_key: "rql:lock:my_lock",
acq_id: "rql:acq:26672/2280/2300/2320/70ea5dbf10ea1056",
acq_id: "rql:acq:26672/2280/2320/70ea5dbf10ea1056",
ts: 1711909612.653696,
ttl: 10000,
process: :lock_obtaining # for custom conflict strategies may be: :conflict_dead_lock, :conflict_work_through, :extendable_conflict_work_through
}
}
For failed lock obtaining:
{ ok: false, result: :timeout_reached }
{ ok: false, result: :retry_count_reached }
{ ok: false, result: :conflict_dead_lock } # see <conflict_strategy> option for details (:dead_locking strategy)
{ ok: false, result: :fail_fast_no_try } # see <fail_fast> option
{ ok: false, result: :fail_fast_after_try } # see <fail_fast> option
{ ok: false, result: :unknown }
Examples:
rql.lock("my_lock") { print "Hello!" }
rql.lock("my_lock", ttl: 5_000) { print "Hello!" } # for 5 seconds
rql.lock("my_lock", ttl: 5_000, timed: true) { sleep(4) }
# => OK
rql.lock("my_lock", ttl: 5_000, timed: true) { sleep(6) }
# => fails with RedisQueuedLocks::TimedLockTimeoutError
rql.lock("my_lock", retry_count: nil, timeout: nil)
# First Ruby Process:
rql.lock("my_lock", ttl: 5_000) { sleep(4) } # acquire a long living lock
# Another Ruby Process:
rql.lock("my_lock", timeout: 2) # try to acquire but wait for a 2 seconds maximum
# =>
{ ok: false, result: :timeout_reached }
rql.lock("my_lock", ttl: 6_500) # blocks execution until the lock is obtained
puts "Let's go" # will be called immediately after the lock is obtained
An example with custom metadata (the :meta option):
rql.lock("my_lock", ttl: 123456, meta: { "some" => "data", key: 123.456 })
rql.lock_info("my_lock")
# =>
{
"lock_key" => "rql:lock:my_lock",
"acq_id" => "rql:acq:123/456/567/678/374dd74324",
"hst_id" => "rql:acq:123/456/678/374dd74324",
"ts" => 123456789,
"ini_ttl" => 123456,
"rem_ttl" => 123440,
"some" => "data",
"key" => "123.456" # NOTE: returned as a raw string directly from Redis
}
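Since Redis returns metadata values as raw strings (see the "key" => "123.456" note above), deserialize them yourself when you need typed values; a minimal sketch:
info = rql.lock_info("my_lock")
typed_value = Float(info["key"]) # => 123.456 (stored as the raw string "123.456" in Redis)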
An example with the lock request queue position lifetime (the :queue_ttl option): it sets a short time limit on a lock request's queue position (if a process fails to acquire
the lock within this period of time (and before the timeout/retry_count limits occur, of course),
its lock request will be moved to the end of the queue):
rql.lock("my_lock", queue_ttl: 5, timeout: 10_000, retry_count: nil)
# "queue_ttl: 5": 5 seconds time slot before the lock request moves to the end of queue;
# "timeout" and "retry_count" is used as "endless lock try attempts" example to show the lock queue behavior;
# lock queue: =>
[
"rql:acq:123/456/567/676/374dd74324",
"rql:acq:123/456/567/677/374dd74322", # <- long living lock
"rql:acq:123/456/567/679/374dd74321",
"rql:acq:123/456/567/683/374dd74322", # <== we are here
"rql:acq:123/456/567/685/374dd74329", # some other waiting process
]
# ... some period of time (2 seconds later)
# lock queue: =>
[
"rql:acq:123/456/567/677/374dd74322", # <- long living lock
"rql:acq:123/456/567/679/374dd74321",
"rql:acq:123/456/567/683/374dd74322", # <== we are here
"rql:acq:123/456/567/685/374dd74329", # some other waiting process
]
# ... some period of time (3 seconds later)
# ... queue_ttl time limit is reached
# lock queue: =>
[
"rql:acq:123/456/567/685/374dd74329", # some other waiting process
"rql:acq:123/456/567/683/374dd74322", # <== we are here (moved to the end of the queue)
]
An example with the :random access strategy (access_strategy: :random): in the :random strategy
any acquirer from the lock queue can obtain the lock regardless of its position in the lock queue:
# Current Process (process#1)
rql.lock('my_lock', ttl: 2_000, access_strategy: :random)
# => holds the lock
# Another Process (process#2)
rql.lock('my_lock', retry_delay: 7000, ttl: 4000, access_strategy: :random)
# => the lock is not free, stay in a queue and retry...
# Another Process (process#3)
rql.lock('my_lock', retry_delay: 3000, ttl: 3000, access_strategy: :random)
# => the lock is not free, stay in a queue and retry...
# lock queue:
[
"rql:acq:123/456/567/677/374dd74322", # process#1 (holds the lock)
"rql:acq:123/456/567/679/374dd74321", # process#2 (waiting for the lock, in retry)
"rql:acq:123/456/567/683/374dd74322", # process#3 (waiting for the lock, in retry)
]
# ... some period of time
# -> process#1 => released the lock;
# -> process#2 => delayed retry, waiting;
# -> process#3 => preparing for retry (the delay is over);
# lock queue:
[
"rql:acq:123/456/567/679/374dd74321", # process#2 (waiting for the lock, DELAYED)
"rql:acq:123/456/567/683/374dd74322", # process#3 (trying to obtain the lock, RETRYING now)
]
# ... some period of time
# -> process#2 => didn't have time to obtain the lock, delayed retry;
# -> process#3 => holds the lock;
# lock queue:
[
"rql:acq:123/456/567/679/374dd74321", # process#2 (waiting for the lock, DELAYED)
"rql:acq:123/456/567/683/374dd74322", # process#3 (holds the lock)
]
# `process#3` is the last in the queue, but has acquired the lock because its lock request "randomly" came first;
#lock!
- exceptional lock obtaining;
- fails with an error when the lock can not be obtained:
- (RedisQueuedLocks::LockAlreadyObtainedError) when fail_fast is true and the lock is already obtained;
- (RedisQueuedLocks::LockAcquiermentTimeoutError) when the timeout limit is reached before the lock is obtained;
- (RedisQueuedLocks::LockAcquiermentRetryLimitError) when the retry_count limit is reached before the lock is obtained;
- (RedisQueuedLocks::ConflictLockObtainError) when conflict_strategy: :dead_locking is used and the "same-process-dead-lock" happens (see Deadlocks and Reentrant Locks for details);

def lock!(
lock_name,
ttl: config[:default_lock_ttl],
queue_ttl: config[:default_queue_ttl],
timeout: config[:try_to_lock_timeout],
timed: config[:is_timed_by_default],
retry_count: config[:retry_count],
retry_delay: config[:retry_delay],
retry_jitter: config[:retry_jitter],
fail_fast: false,
identity: uniq_identity,
meta: nil,
detailed_acq_timeout_error: config[:detailed_acq_timeout_error],
logger: config[:logger],
log_lock_try: config[:log_lock_try],
instrument: nil,
instrumenter: config[:instrumenter],
access_strategy: config[:default_access_strategy],
conflict_strategy: config[:default_conflict_strategy],
log_sampling_enabled: config[:log_sampling_enabled],
log_sampling_percent: config[:log_sampling_percent],
log_sampler: config[:log_sampler],
log_sample_this: false,
instr_sampling_enabled: config[:instr_sampling_enabled],
instr_sampling_percent: config[:instr_sampling_percent],
instr_sampler: config[:instr_sampler],
instr_sample_this: false,
&block
)
See #lock
method documentation.
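A minimal sketch of exceptional lock obtaining with #lock!, rescuing the error classes listed above (do_work and handle_lock_failure are hypothetical application methods):
begin
  rql.lock!("my_lock", ttl: 5_000, timeout: 2, retry_count: 3) do
    do_work # hypothetical application method
  end
rescue RedisQueuedLocks::LockAcquiermentTimeoutError,
       RedisQueuedLocks::LockAcquiermentRetryLimitError => error
  # the lock was not obtained within the configured limits
  handle_lock_failure(error) # hypothetical application method
end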
#lock_info
- returns information about the required lock (by the lock name);
- returns nil if the lock does not exist;
- lock data (Hash<String,String|Integer>):
  - "lock_key" - string - lock key in redis;
  - "acq_id" - string - acquirer identifier (process_id/thread_id/fiber_id/ractor_id/identity);
  - "hst_id" - string - host identifier (process_id/thread_id/ractor_id/identity);
  - "ts" - numeric/epoch - the time when the lock was obtained;
  - "ini_ttl" - integer - (milliseconds) initial lock key ttl;
  - "rem_ttl" - integer - (milliseconds) remaining lock key ttl;
  - <custom metadata> - string/integer - custom metadata passed to the lock/lock! methods via the meta: keyword argument (see the #lock method documentation);
  - "spc_cnt" - integer - how many times the lock was obtained as a reentrant lock;
  - "l_spc_ts" - numeric/epoch - timestamp of the last non-extendable reentrant lock obtaining;
  - "spc_ext_ttl" - integer - (milliseconds) sum of the TTLs of each extendable reentrant lock (the total TTL extension time);
  - "l_spc_ext_ini_ttl" - integer - (milliseconds) TTL of the last reentrant lock;
  - "l_spc_ext_ts" - numeric/epoch - timestamp of the last extendable reentrant lock obtaining;

# <without custom metadata>
rql.lock_info("your_lock_name")
# =>
{
"lock_key" => "rql:lock:your_lock_name",
"acq_id" => "rql:acq:123/456/567/678/374dd74324",
"hst_id" => "rql:acq:123/456/678/374dd74324",
"ts" => 123456789.12345,
"ini_ttl" => 5_000,
"rem_ttl" => 4_999
}
# <with custom metadata>
rql.lock("your_lock_name", meta: { "kek" => "pek", "bum" => 123 })
rql.lock_info("your_lock_name")
# =>
{
"lock_key" => "rql:lock:your_lock_name",
"acq_id" => "rql:acq:123/456/567/678/374dd74324",
"hst_id" => "rql:acq:123/456/678/374dd74324",
"ts" => 123456789.12345,
"ini_ttl" => 5_000,
"rem_ttl" => 4_999,
"kek" => "pek",
"bum" => "123" # NOTE: returned as a raw string directly from Redis
}
# <for reentrant locks>
# (see `conflict_strategy:` kwarg attribute of #lock/#lock! methods and `config.default_conflict_strategy` config)
rql.lock("your_lock_name", ttl: 5_000)
rql.lock("your_lock_name", ttl: 3_000)
rql.lock("your_lock_name", ttl: 2_000)
rql.lock_info("your_lock_name")
# =>
{
"lock_key" => "rql:lock:your_lock_name",
"acq_id" => "rql:acq:123/456/567/678/374dd74324",
"hst_id" => "rql:acq:123/456/678/374dd74324",
"ts" => 123456789.12345,
"ini_ttl" => 5_000,
"rem_ttl" => 9_444,
# ==> keys for any type of reentrant lock:
"spc_count" => 2, # how many times the lock was obtained as reentrant lock
# ==> keys for extendable reentrant locks with `:extendable_work_through` strategy:
"spc_ext_ttl" => 5_000, # sum of TTL of the each <extendable> reentrant lock (3_000 + 2_000)
"l_spc_ext_ini_ttl" => 2_000, # TTL of the last <extendable> reentrant lock
"l_spc_ext_ts" => 123456792.12345, # timestamp of the last <extendable> reentrant lock obtaining
# ==> keys for non-extendable locks with `:work_through` strategy:
"l_spc_ts" => 123456.789 # timestamp of the last <non-extendable> reentrant lock obtaining
}
#queue_info
Returns information about the required lock queue by the lock name. The result represents the ordered lock request queue, ordered by score (Redis Sets), and shows the lock acquirers and their position in the queue. The async nature of Redis communication can lead to a situation when the queue becomes empty during the queue data extraction, so sometimes you can receive lock queue info with an empty queue value (an empty array).
- returns nil if the lock queue does not exist;
- lock queue data (Hash<String,String|Array<Hash<String|Numeric>>>):
  - "lock_queue" - string - lock queue key in redis;
  - "queue" - array - an array of lock requests (array of hashes):
    - "acq_id" - string - acquirer identifier (process_id/thread_id/fiber_id/ractor_id/identity by default);
    - "score" - float/epoch - the time when the lock request was made (epoch);

rql.queue_info("your_lock_name")
# =>
{
"lock_queue" => "rql:lock_queue:your_lock_name",
"queue" => [
{ "acq_id" => "rql:acq:123/456/567/678/fa76df9cc2", "score" => 1711606640.540842},
{ "acq_id" => "rql:acq:123/567/456/679/c7bfcaf4f9", "score" => 1711606640.540906},
{ "acq_id" => "rql:acq:555/329/523/127/7329553b11", "score" => 1711606640.540963},
# ...etc
]
}
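A small sketch (an assumption, not an official API) that combines #queue_info with #current_acquirer_id to find your own position in a lock's request queue:
queue = rql.queue_info("your_lock_name")
position = queue && queue["queue"].index { |req| req["acq_id"] == rql.current_acquirer_id }
puts(position ? "waiting at position #{position}" : "not enqueued")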
rql.locked?("your_lock_name") # => true/false
rql.queued?("your_lock_name") # => true/false
#release_lock
- releases the concrete lock and its lock request queue;

Accepts:
- lock_name - (required) [String] - the lock name that should be released;
- :logger - (optional) [::Logger,#debug] - config[:logger] by default;
- :instrumenter - (optional) [#notify] - config[:instrumenter] by default;
- :instrument - (optional) [NilClass,Any] - nil by default (no additional data);
- :log_sampling_enabled - (optional) [Boolean] - config[:log_sampling_enabled] by default;
- :log_sampling_percent - (optional) [Integer] - config[:log_sampling_percent] by default;
- :log_sampler - (optional) [#sampling_happened?,Module<RedisQueuedLocks::Logging::Sampler>] - config[:log_sampler] by default;
- :log_sample_this - (optional) [Boolean] - false by default;
- :instr_sampling_enabled - (optional) [Boolean] - config[:instr_sampling_enabled] by default;
- :instr_sampling_percent - (optional) [Integer] - config[:instr_sampling_percent] by default;
- :instr_sampler - (optional) [#sampling_happened?,Module<RedisQueuedLocks::Instrument::Sampler>] - config[:instr_sampler] by default;
- :instr_sample_this - (optional) [Boolean] - false by default;

Returns an ok: true result with operation timings and the :nothing_to_release result factor inside.

Return:
- [Hash<Symbol,Boolean|Hash<Symbol,Numeric|String|Symbol>>] ({ ok: true/false, result: Hash });
- :result format:
  - :rel_time - Float - time spent to process redis commands (in seconds);
  - :rel_key - String - released lock key (the RedisQueuedLocks-internal lock key name from Redis);
  - :rel_queue - String - released lock queue key (the RedisQueuedLocks-internal queue key name from Redis);
  - :queue_res - Symbol - :released (or :nothing_to_release if the required queue does not exist);
  - :lock_res - Symbol - :released (or :nothing_to_release if the required lock does not exist);

Consider that lock_res and queue_res can have different values because of the async nature of the invoked Redis commands.
rql.unlock("your_lock_name")
# =>
{
ok: true,
result: {
rel_time: 0.02, # time spent to lock release (in seconds)
rel_key: "rql:lock:your_lock_name", # released lock key
rel_queue: "rql:lock_queue:your_lock_name", # released lock key queue
queue_res: :released, # or :nothing_to_release
lock_res: :released # or :nothing_to_release
}
}
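When no block is passed, the lock stays obtained until its TTL expires or it is released manually; a minimal sketch of the manual acquire/release pairing (do_critical_work is a hypothetical application method):
acquired = rql.lock("my_lock", ttl: 10_000)
if acquired[:ok]
  begin
    do_critical_work # hypothetical application method
  ensure
    rql.unlock("my_lock") # release explicitly instead of waiting for the TTL
  end
end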
#release_locks
- releases all locks and their lock request queues (uses SCAN);

Accepts:
- :batch_size - (optional) [Integer] - config[:lock_release_batch_size] by default;
- :logger - (optional) [::Logger,#debug] - config[:logger] by default;
- :instrumenter - (optional) [#notify] - config[:instrumenter] by default;
- :instrument - (optional) [NilClass,Any] - custom instrumentation data passed under the :instrument key;
- :log_sampling_enabled - (optional) [Boolean] - config[:log_sampling_enabled] by default;
- :log_sampling_percent - (optional) [Integer] - config[:log_sampling_percent] by default;
- :log_sampler - (optional) [#sampling_happened?,Module<RedisQueuedLocks::Logging::Sampler>] - config[:log_sampler] by default;
- :log_sample_this - (optional) [Boolean] - false by default;
- :instr_sampling_enabled - (optional) [Boolean] - config[:instr_sampling_enabled] by default;
- :instr_sampling_percent - (optional) [Integer] - config[:instr_sampling_percent] by default;
- :instr_sampler - (optional) [#sampling_happened?,Module<RedisQueuedLocks::Instrument::Sampler>] - config[:instr_sampler] by default;
- :instr_sample_this - (optional) [Boolean] - false by default;

Returns:
- [Hash<Symbol,Numeric>] - Format: { ok: true, result: Hash<Symbol,Numeric> };
  - :rel_time - Numeric - time spent to release all locks and related queues;
  - :rel_key_cnt - Integer - the number of released Redis keys (queues+locks);

rql.clear_locks
# =>
{
ok: true,
result: {
rel_time: 3.07,
rel_key_cnt: 1234
}
}
#extend_lock_ttl
Accepts:
- lock_name - (required) [String] - the name of the lock whose TTL should be extended;
- milliseconds - (required) [Integer] - the number of milliseconds that should be added to the lock's TTL;
- :instrumenter - (optional) [#notify] - config[:instrumenter] by default;
- :instrument - (optional) [NilClass,Any] - nil by default (no additional data);
- :logger - (optional) [::Logger,#debug] - config[:logger] by default;
- :log_sampling_enabled - (optional) [Boolean] - config[:log_sampling_enabled] by default;
- :log_sampling_percent - (optional) [Integer] - config[:log_sampling_percent] by default;
- :log_sampler - (optional) [#sampling_happened?,Module<RedisQueuedLocks::Logging::Sampler>] - config[:log_sampler] by default;
- :log_sample_this - (optional) [Boolean] - false by default;
- :instr_sampling_enabled - (optional) [Boolean] - config[:instr_sampling_enabled] by default;
- :instr_sampling_percent - (optional) [Integer] - config[:instr_sampling_percent] by default;
- :instr_sampler - (optional) [#sampling_happened?,Module<RedisQueuedLocks::Instrument::Sampler>] - config[:instr_sampler] by default;
- :instr_sample_this - (optional) [Boolean] - false by default;

Returns:
- { ok: true, result: :ttl_extended } when the ttl is extended;
- { ok: false, result: :async_expire_or_no_lock } when the lock is not found or the lock has already expired during some steps of the invocation (see the Important section below);

rql.extend_lock_ttl("my_lock", 5_000) # NOTE: add 5_000 milliseconds
# => `ok` case
{ ok: true, result: :ttl_extended }
# => `failed` case
{ ok: false, result: :async_expire_or_no_lock }
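A sketch (not from the docs) of extending a held lock's TTL in the middle of a long-running job, using the #extend_lock_ttl call shown above (the generate_* methods are hypothetical):
rql.lock("report:42", ttl: 30_000) do
  generate_first_part  # hypothetical long step
  rql.extend_lock_ttl("report:42", 30_000) # add 30 more seconds before the next step
  generate_second_part # hypothetical long step
end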
#locks
- retrieves the set of obtained lock keys (uses SCAN under the hood);
- accepts:
  - :scan_size - Integer - config[:key_extraction_batch_size] by default;
  - :with_info - Boolean - false by default (for details see #locks_info);
- returns:
  - Set<String> (for with_info: false);
  - Set<Hash<Symbol,Any>> (for with_info: true); see #locks_info for details;

rql.locks # or rql.locks(scan_size: 123)
=>
#<Set:
{"rql:lock:locklock75",
"rql:lock:locklock9",
"rql:lock:locklock108",
"rql:lock:locklock7",
"rql:lock:locklock48",
"rql:lock:locklock104",
"rql:lock:locklock13",
"rql:lock:locklock62",
"rql:lock:locklock80",
"rql:lock:locklock28",
...}>
#queues
- retrieves the set of lock queue keys (uses SCAN under the hood);
- accepts:
  - :scan_size - Integer - config[:key_extraction_batch_size] by default;
  - :with_info - Boolean - false by default (for details see #queues_info);
- returns:
  - Set<String> (for with_info: false);
  - Set<Hash<Symbol,Any>> (for with_info: true); see #queues_info for details;

rql.queues # or rql.queues(scan_size: 123)
=>
#<Set:
{"rql:lock_queue:locklock75",
"rql:lock_queue:locklock9",
"rql:lock_queue:locklock108",
"rql:lock_queue:locklock7",
"rql:lock_queue:locklock48",
"rql:lock_queue:locklock104",
"rql:lock_queue:locklock13",
"rql:lock_queue:locklock62",
"rql:lock_queue:locklock80",
"rql:lock_queue:locklock28",
...}>
#keys
- retrieves the set of all RQL-related keys (locks and lock queues) (uses SCAN under the hood);
- accepts:
  - :scan_size - Integer - config[:key_extraction_batch_size] by default;
- returns: Set<String>

rql.keys # or rql.keys(scan_size: 123)
=>
#<Set:
{"rql:lock_queue:locklock75",
"rql:lock_queue:locklock9",
"rql:lock:locklock9",
"rql:lock_queue:locklock108",
"rql:lock_queue:locklock7",
"rql:lock:locklock7",
"rql:lock_queue:locklock48",
"rql:lock_queue:locklock104",
"rql:lock:locklock104",
"rql:lock_queue:locklock13",
"rql:lock_queue:locklock62",
"rql:lock_queue:locklock80",
"rql:lock:locklock80",
"rql:lock_queue:locklock28",
...}>
#locks_info
- uses SCAN under the hood;
- accepts the scan_size:/Integer option (config[:key_extraction_batch_size] by default);
- returns: Set<Hash<Symbol,Any>> (see #lock_info and the examples below for details); each entry has the format { lock: String, status: Symbol, info: Hash<String,Any> }:
  - :lock - String - lock key in Redis;
  - :status - Symbol - :released or :alive (:info for :released keys is empty ({}));
  - :info - Hash<String,Any> - the lock data (see #lock_info);

rql.locks_info # or rql.locks_info(scan_size: 123)
# =>
=> #<Set:
{{:lock=>"rql:lock:some-lock-123",
:status=>:alive,
:info=>{
"acq_id"=>"rql:acq:41478/4320/4340/4360/848818f09d8c3420",
"hst_id"=>"rql:hst:41478/4320/4360/848818f09d8c3420"
"ts"=>1711607112.670343,
"ini_ttl"=>15000,
"rem_ttl"=>13998}},
{:lock=>"rql:lock:some-lock-456",
:status=>:released,
:info=>{},
...}>
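A small sketch that iterates the #locks_info result shown above to report alive locks that are close to expiration (the 1_000 ms threshold is arbitrary):
rql.locks_info.each do |lock|
  next unless lock[:status] == :alive
  rem_ttl = lock[:info]["rem_ttl"]
  puts "#{lock[:lock]} expires in #{rem_ttl}ms" if rem_ttl < 1_000
end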
#queues_info
- uses SCAN under the hood;
- accepts the scan_size:/Integer option (config[:key_extraction_batch_size] by default);
- returns: Set<Hash<Symbol,Any>> (see #queue_info and the examples below for details); each entry has the format { queue: String, requests: Array<Hash<String,Any>> }:
  - :queue - String - lock queue key in Redis;
  - :requests - Array<Hash<String,Any>> - lock requests in the queue with their acquirer id and score;

rql.queues_info # or rql.queues_info(scan_size: 123)
=> #<Set:
{{:queue=>"rql:lock_queue:some-lock-123",
:requests=>
[{"acq_id"=>"rql:acq:38529/4500/4520/4360/66093702f24a3129", "score"=>1711606640.540842},
{"acq_id"=>"rql:acq:38529/4580/4600/4360/66093702f24a3129", "score"=>1711606640.540906},
{"acq_id"=>"rql:acq:38529/4620/4640/4360/66093702f24a3129", "score"=>1711606640.5409632}]},
{:queue=>"rql:lock_queue:some-lock-456",
:requests=>
[{"acq_id"=>"rql:acq:38529/4380/4400/4360/66093702f24a3129", "score"=>1711606640.540722},
{"acq_id"=>"rql:acq:38529/4420/4440/4360/66093702f24a3129", "score"=>1711606640.5407748},
{"acq_id"=>"rql:acq:38529/4460/4480/4360/66093702f24a3129", "score"=>1711606640.540808}]},
...}>
#clear_dead_requests
In some cases your lock requests may become "dead": a lock request lives in the lock queue in Redis without any processing. It can happen when a process that was enqueued to the lock queue fails unexpectedly (for some reason) before the lock acquirement moment occurs and no other process needs this lock anymore. In this case the lock request will be cleared only when some process tries to acquire this lock again (cuz lock acquirement triggers the removal of expired requests).
In order to deal with these dead requests you may periodically call #clear_dead_requests
with the corresponding :dead_ttl option, which is pre-configured by default via config[:dead_request_ttl].
The :dead_ttl option is required because there is no fast and resource-free way to understand which request
is dead right now and whether it is really dead, cuz each request queue can host its requests with
a custom queue ttl for each request.
Accepts:
- :dead_ttl - (optional) [Integer] - config[:dead_request_ttl] by default (1 day by default);
- :scan_size - (optional) [Integer] - config[:lock_release_batch_size] by default;
- :logger - (optional) [::Logger,#debug] - config[:logger] by default;
- :instrumenter - (optional) [#notify] - config[:instrumenter] by default;
- :instrument - (optional) [NilClass,Any] - nil by default (no additional data);
- :log_sampling_enabled - (optional) [Boolean] - config[:log_sampling_enabled] by default;
- :log_sampling_percent - (optional) [Integer] - config[:log_sampling_percent] by default;
- :log_sampler - (optional) [#sampling_happened?,Module<RedisQueuedLocks::Logging::Sampler>] - config[:log_sampler] by default;
- :log_sample_this - (optional) [Boolean] - false by default;
- :instr_sampling_enabled - (optional) [Boolean] - config[:instr_sampling_enabled] by default;
- :instr_sampling_percent - (optional) [Integer] - config[:instr_sampling_percent] by default;
- :instr_sampler - (optional) [#sampling_happened?,Module<RedisQueuedLocks::Instrument::Sampler>] - config[:instr_sampler] by default;
- :instr_sample_this - (optional) [Boolean] - false by default;

Returns: { ok: true, processed_queues: Set<String> } - the set of processed lock queues;
rql.clear_dead_requests(dead_ttl: 60 * 60 * 1000) # 1 hour in milliseconds
# =>
{
ok: true,
processed_queues: [
"rql:lock_queue:some-lock-123",
"rql:lock_queue:some-lock-456",
"rql:lock_queue:your-other-lock",
...
]
}
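Since dead requests are only cleaned up opportunistically, a periodic #clear_dead_requests call is a common pattern; a minimal sketch using a plain background thread (any scheduler - cron, sidekiq-scheduler, etc. - works just as well):
Thread.new do
  loop do
    rql.clear_dead_requests(dead_ttl: 60 * 60 * 1000) # 1 hour in milliseconds
    sleep(60 * 60) # run once per hour
  end
end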
"rql:acq:#{process_id}/#{thread_id}/#{fiber_id}/#{ractor_id}/#{identity}"
#lock
/#lock!
gives you a possibility to customize process_id
,
fiber_id
, thread_id
, ractor_id
and unique identity
identifiers the #current_acquirer_id
method provides this possibility too;Accepts:
process_id:
- (optional) [Integer,Any]
::Process.pid
by default;thread_id:
- (optional) [Integer,Any]
;
::Thread.current.object_id
by default;fiber_id:
- (optional) [Integer,Any]
;
::Fiber.current.object_id
by default;ractor_id:
- (optional) [Integer,Any]
;
::Ractor.current.object_id
by default;identity:
- (optional) [String,Any]
;
RedisQueuedLock::Client
instantiation and stored in @uniq_identity
;RedisQueuedLock::Client#uniq_identity
;config[:uniq_identifier]
;uniq_identifier
;rql.current_acquirer_id
# =>
"rql:acq:38529/4500/4520/4360/66093702f24a3129"
#current_host_id
- the "host" is a process/thread/ractor (without fiber) combination cuz we have no ability to extract all fiber objects from the current ruby process when at least one ractor object is defined (ObjectSpace loses the ability to extract Fiber and Thread objects after any ractor is created; Thread objects are analyzed via the Thread.list API which does not lose its abilities);
- the host identifier format: "rql:hst:#{process_id}/#{thread_id}/#{ractor_id}/#{uniq_identity}"
- as #lock/#lock! gives you the possibility to customize the process_id, thread_id, ractor_id and unique identity identifiers, the #current_host_id method provides this possibility too (except the fiber_id correspondingly);

Accepts:
- process_id: - (optional) [Integer,Any] - ::Process.pid by default;
- thread_id: - (optional) [Integer,Any] - ::Thread.current.object_id by default;
- ractor_id: - (optional) [Integer,Any] - ::Ractor.current.object_id by default;
- identity: - (optional) [String] - the unique identity string calculated once during RedisQueuedLock::Client instantiation and stored in @uniq_identity (accessible via RedisQueuedLock::Client#uniq_identity); config[:uniq_identifier] by default (see the uniq_identifier config);

rql.current_host_id
# =>
"rql:hst:38529/4500/4360/66093702f24a3129"
#possible_host_ids
- returns a list (Array<String>) of possible host identifiers that can be reached from the current ractor;
- the "host" is a process/thread/ractor (without fiber) combination (see #current_host_id above);
- the host identifier format: "rql:hst:#{process_id}/#{thread_id}/#{ractor_id}/#{uniq_identity}"

Accepts:
- identity - (optional) [String] - the unique identity string calculated once during RedisQueuedLock::Client instantiation and stored in @uniq_identity (accessible via RedisQueuedLock::Client#uniq_identity); config[:uniq_identifier] by default (see the uniq_identifier config);

rql.possible_host_ids
# =>
[
"rql:hst:18814/2300/2280/5ce0c4582fc59c06", # process id / thread id / ractor id / uniq identity
"rql:hst:18814/2320/2280/5ce0c4582fc59c06", # ...
"rql:hst:18814/2340/2280/5ce0c4582fc59c06", # ...
"rql:hst:18814/2360/2280/5ce0c4582fc59c06", # ...
"rql:hst:18814/2380/2280/5ce0c4582fc59c06", # ...
"rql:hst:18814/2400/2280/5ce0c4582fc59c06"
]
Eliminate zombie locks with a swarm.
This documentation section is in progress! (see the changelog and the usage preview for details at this moment)
(work and usage preview (temporary example-based docs))
redis_client = RedisClient.config.new_pool # NOTE: provide your own RedisClient instance
client = RedisQueuedLocks::Client.new(redis_client) do |config|
# NOTE: auto-swarm your RQL client after initialization (run swarm elements and their supervisor)
config.swarm.auto_swarm = false
# supervisor configs
config.swarm.supervisor.liveness_probing_period = 2 # NOTE: in seconds
# (probe_hosts) host probing configuration
config.swarm.probe_hosts.enabled_for_swarm = true # NOTE: run host-probing or not
config.swarm.probe_hosts.probe_period = 2 # NOTE: (in seconds) the period of time when the probing process is triggered
# (probe_hosts) individual redis config
config.swarm.probe_hosts.redis_config.sentinel = false # NOTE: individual redis config
config.swarm.probe_hosts.redis_config.pooled = false # NOTE: individual redis config
config.swarm.probe_hosts.redis_config.config = {} # NOTE: individual redis config
config.swarm.probe_hosts.redis_config.pool_config = {} # NOTE: individual redis config
# (flush_zombies) zombie flushing configuration
config.swarm.flush_zombies.enabled_for_swarm = true # NOTE: run zombie flushing or not
config.swarm.flush_zombies.zombie_flush_period = 10 # NOTE: (in seconds) period of time when the zombie flusher is triggered
config.swarm.flush_zombies.zombie_ttl = 15_000 # NOTE: (in milliseconds) when the lock/host/acquier is considered a zombie
config.swarm.flush_zombies.zombie_lock_scan_size = 500 # NOTE: scan size used during zombie flushing
config.swarm.flush_zombies.zombie_queue_scan_size = 500 # NOTE: scan size used during zombie flushing
# (flush_zombies) individual redis config
config.swarm.flush_zombies.redis_config.sentinel = false # NOTE: individual redis config
config.swarm.flush_zombies.redis_config.pooled = false # NOTE: individual redis config
config.swarm.flush_zombies.redis_config.config = {} # NOTE: individual redis config
config.swarm.flush_zombies.redis_config.pool_config = {} # NOTE: individual redis config
end
daiver => ~/Projects/redis_queued_locks master [$]
➜ bin/console
[1] pry(main)> rql = RedisQueuedLocks::Client.new(RedisClient.new);
[2] pry(main)> rql.swarmize!
/Users/daiver/Projects/redis_queued_locks/lib/redis_queued_locks/swarm/flush_zombies.rb:107: warning: Ractor is experimental, and the behavior may change in future versions of Ruby! Also there are many implementation issues.
=> {:ok=>true, :result=>:swarming}
[3] pry(main)> rql.lock('kekpek', ttl: 1111111111)
=> {:ok=>true,
:result=>
{:lock_key=>"rql:lock:kekpek",
:acq_id=>"rql:acq:17580/2260/2380/2280/3f16b93973612580",
:hst_id=>"rql:hst:17580/2260/2280/3f16b93973612580",
:ts=>1720305351.069259,
:ttl=>1111111111,
:process=>:lock_obtaining}}
[4] pry(main)> exit
daiver => ~/Projects/redis_queued_locks master [$] took 27.2s
➜ bin/console
[1] pry(main)> rql = RedisQueuedLocks::Client.new(RedisClient.new);
[2] pry(main)> rql.swarm_info
=> {"rql:hst:17580/2260/2280/3f16b93973612580"=>{:zombie=>true, :last_probe_time=>2024-07-07 01:35:53 12897/262144 +0300, :last_probe_score=>1720305353.0491982},
"rql:hst:17580/2300/2280/3f16b93973612580"=>{:zombie=>true, :last_probe_time=>2024-07-07 01:35:53 211107/4194304 +0300, :last_probe_score=>1720305353.0503318},
"rql:hst:17580/2320/2280/3f16b93973612580"=>{:zombie=>true, :last_probe_time=>2024-07-07 01:35:53 106615/2097152 +0300, :last_probe_score=>1720305353.050838},
"rql:hst:17580/2260/2340/3f16b93973612580"=>{:zombie=>true, :last_probe_time=>2024-07-07 01:35:53 26239/524288 +0300, :last_probe_score=>1720305353.050047},
"rql:hst:17580/2300/2340/3f16b93973612580"=>{:zombie=>true, :last_probe_time=>2024-07-07 01:35:53 106359/2097152 +0300, :last_probe_score=>1720305353.050716},
"rql:hst:17580/2320/2340/3f16b93973612580"=>{:zombie=>true, :last_probe_time=>2024-07-07 01:35:53 213633/4194304 +0300, :last_probe_score=>1720305353.050934},
"rql:hst:17580/2360/2280/3f16b93973612580"=>{:zombie=>true, :last_probe_time=>2024-07-07 01:35:53 214077/4194304 +0300, :last_probe_score=>1720305353.05104},
"rql:hst:17580/2360/2340/3f16b93973612580"=>{:zombie=>true, :last_probe_time=>2024-07-07 01:35:53 214505/4194304 +0300, :last_probe_score=>1720305353.051142},
"rql:hst:17580/2400/2280/3f16b93973612580"=>{:zombie=>true, :last_probe_time=>2024-07-07 01:35:53 53729/1048576 +0300, :last_probe_score=>1720305353.05124},
"rql:hst:17580/2400/2340/3f16b93973612580"=>{:zombie=>true, :last_probe_time=>2024-07-07 01:35:53 3365/65536 +0300, :last_probe_score=>1720305353.0513458}}
[3] pry(main)> rql.swarm_status
=> {:auto_swarm=>false,
:supervisor=>{:running=>false, :state=>"non_initialized", :observable=>"non_initialized"},
:probe_hosts=>{:enabled=>true, :thread=>{:running=>false, :state=>"non_initialized"}, :main_loop=>{:running=>false, :state=>"non_initialized"}},
:flush_zombies=>{:enabled=>true, :ractor=>{:running=>false, :state=>"non_initialized"}, :main_loop=>{:running=>false, :state=>"non_initialized"}}}
[4] pry(main)> rql.zombies_info
=> {:zombie_hosts=>
#<Set:
{"rql:hst:17580/2260/2280/3f16b93973612580",
"rql:hst:17580/2300/2280/3f16b93973612580",
"rql:hst:17580/2320/2280/3f16b93973612580",
"rql:hst:17580/2260/2340/3f16b93973612580",
"rql:hst:17580/2300/2340/3f16b93973612580",
"rql:hst:17580/2320/2340/3f16b93973612580",
"rql:hst:17580/2360/2280/3f16b93973612580",
"rql:hst:17580/2360/2340/3f16b93973612580",
"rql:hst:17580/2400/2280/3f16b93973612580",
"rql:hst:17580/2400/2340/3f16b93973612580"}>,
:zombie_acquirers=>#<Set: {"rql:acq:17580/2260/2380/2280/3f16b93973612580"}>,
:zombie_locks=>#<Set: {"rql:lock:kekpek"}>}
[5] pry(main)> rql.zombie_locks
=> #<Set: {"rql:lock:kekpek"}>
[6] pry(main)> rql.zombie_acquiers
=> #<Set: {"rql:acq:17580/2260/2380/2280/3f16b93973612580"}>
[7] pry(main)> rql.zombie_hosts
=> #<Set:
{"rql:hst:17580/2260/2280/3f16b93973612580",
"rql:hst:17580/2300/2280/3f16b93973612580",
"rql:hst:17580/2320/2280/3f16b93973612580",
"rql:hst:17580/2260/2340/3f16b93973612580",
"rql:hst:17580/2300/2340/3f16b93973612580",
"rql:hst:17580/2320/2340/3f16b93973612580",
"rql:hst:17580/2360/2280/3f16b93973612580",
"rql:hst:17580/2360/2340/3f16b93973612580",
"rql:hst:17580/2400/2280/3f16b93973612580",
"rql:hst:17580/2400/2340/3f16b93973612580"}>
[8] pry(main)> rql.swarmize!
/Users/daiver/Projects/redis_queued_locks/lib/redis_queued_locks/swarm/flush_zombies.rb:107: warning: Ractor is experimental, and the behavior may change in future versions of Ruby! Also there are many implementation issues.
=> {:ok=>true, :result=>:swarming}
[9] pry(main)> rql.swarm_info
=> {"rql:hst:17752/2260/2280/89beef198021f16d"=>{:zombie=>false, :last_probe_time=>2024-07-07 01:36:39 4012577/4194304 +0300, :last_probe_score=>1720305399.956673},
"rql:hst:17752/2300/2280/89beef198021f16d"=>{:zombie=>false, :last_probe_time=>2024-07-07 01:36:39 4015233/4194304 +0300, :last_probe_score=>1720305399.9573061},
"rql:hst:17752/2320/2280/89beef198021f16d"=>{:zombie=>false, :last_probe_time=>2024-07-07 01:36:39 4016755/4194304 +0300, :last_probe_score=>1720305399.957669},
"rql:hst:17752/2260/2340/89beef198021f16d"=>{:zombie=>false, :last_probe_time=>2024-07-07 01:36:39 1003611/1048576 +0300, :last_probe_score=>1720305399.957118},
"rql:hst:17752/2300/2340/89beef198021f16d"=>{:zombie=>false, :last_probe_time=>2024-07-07 01:36:39 2008027/2097152 +0300, :last_probe_score=>1720305399.957502},
"rql:hst:17752/2320/2340/89beef198021f16d"=>{:zombie=>false, :last_probe_time=>2024-07-07 01:36:39 2008715/2097152 +0300, :last_probe_score=>1720305399.95783},
"rql:hst:17752/2360/2280/89beef198021f16d"=>{:zombie=>false, :last_probe_time=>2024-07-07 01:36:39 4018063/4194304 +0300, :last_probe_score=>1720305399.9579809},
"rql:hst:17752/2360/2340/89beef198021f16d"=>{:zombie=>false, :last_probe_time=>2024-07-07 01:36:39 1004673/1048576 +0300, :last_probe_score=>1720305399.9581308}}
[10] pry(main)> rql.swarm_status
=> {:auto_swarm=>false,
:supervisor=>{:running=>true, :state=>"sleep", :observable=>"initialized"},
:probe_hosts=>{:enabled=>true, :thread=>{:running=>true, :state=>"sleep"}, :main_loop=>{:running=>true, :state=>"sleep"}},
:flush_zombies=>{:enabled=>true, :ractor=>{:running=>true, :state=>"running"}, :main_loop=>{:running=>true, :state=>"sleep"}}}
[11] pry(main)> rql.zombies_info
=> {:zombie_hosts=>#<Set: {}>, :zombie_acquirers=>#<Set: {}>, :zombie_locks=>#<Set: {}>}
[12] pry(main)> rql.zombie_acquiers
=> #<Set: {}>
[13] pry(main)> rql.zombie_hosts
=> #<Set: {}>
[14] pry(main)>
"rql:swarm:hsts"
Lock access strategies:
- by default, RQL obtains locks in the queued way: you should wait for your position in the queue in order to obtain a lock;
- the strategy can be customized in #lock and #lock! via the :access_strategy attribute (see the method signatures of the #lock and #lock! methods);
- supported strategies:
  - :queued (FIFO): the classic queued behavior (default), your lock will be obtained if you are first in the queue and the required lock is free;
  - :random (RANDOM): obtain a lock without checking the positions in the queue (but with checking the limits, retries, timeouts and so on); if the lock is free to obtain - it will be obtained;
- see the config.default_access_strategy config docs and the access_strategy attribute docs;

Deadlocks and Reentrant Locks (conflict strategies):
- supported strategies: :wait_for_lock (default), :work_through, :extendable_work_through, :dead_locking;
- with :wait_for_lock your lock obtaining process works in a classic way (limits, retries, etc);
- :work_through and :extendable_work_through work with limits too (timeouts, delays, etc), but the decision of
  "is your lock obtained or not" is made as you work with reentrant locks (your process continues to use the lock without/with
  lock's TTL extension accordingly);
- see the config.default_conflict_strategy config docs and the conflict_strategy attribute docs and the method result data;

Logging:
default logs (logged from #lock/#lock!):
"[redis_queued_locks.start_lock_obtaining]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
"[redis_queued_locks.start_try_to_lock_cycle]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
"[redis_queued_locks.dead_score_reached__reset_acquier_position]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
"[redis_queued_locks.lock_obtained]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acq_time");
"[redis_queued_locks.extendable_reentrant_lock_obtained]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "acq_time");
"[redis_queued_locks.reentrant_lock_obtained]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "acq_time");
"[redis_queued_locks.fail_fast_or_limits_reached_or_deadlock__dequeue]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
"[redis_queued_locks.expire_lock]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
"[redis_queued_locks.decrease_lock]" # (logs "lock_key", "decreased_ttl", "queue_ttl", "acq_id", "hst_id", "acs_strat");
detailed logs (logged from #lock/#lock! with config[:log_lock_try] == true):
"[redis_queued_locks.try_lock.start]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
"[redis_queued_locks.try_lock.rconn_fetched]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
"[redis_queued_locks.try_lock.same_process_conflict_detected]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
"[redis_queued_locks.try_lock.same_process_conflict_analyzed]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "spc_status");
"[redis_queued_locks.try_lock.reentrant_lock__extend_and_work_through]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "spc_status", "last_ext_ttl", "last_ext_ts");
"[redis_queued_locks.try_lock.reentrant_lock__work_through]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "spc_status", last_spc_ts);
"[redis_queued_locks.try_lock.single_process_lock_conflict__dead_lock]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "spc_status", "last_spc_ts");
"[redis_queued_locks.try_lock.acq_added_to_queue]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
"[redis_queued_locks.try_lock.remove_expired_acqs]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
"[redis_queued_locks.try_lock.get_first_from_queue]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "first_acq_id_in_queue");
"[redis_queued_locks.try_lock.exit__queue_ttl_reached]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
"[redis_queued_locks.try_lock.exit__no_first]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "first_acq_id_in_queue", "<current_lock_data>");
"[redis_queued_locks.try_lock.exit__lock_still_obtained]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "first_acq_id_in_queue", "locked_by_acq_id", "<current_lock_data>");
"[redis_queued_locks.try_lock.obtain__free_to_acquire]" # (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
NOTICE: logging can be sampled via:
- config.log_sampling_enabled = true (false by default);
- config.log_sampler = RedisQueuedLocks::Logging::Sampler (used by default);
# (default: RedisQueuedLocks::Logging::VoidLogger)
# - the logger object;
# - should implement `debug(progname = nil, &block)` (minimal requirement) or be an instance of Ruby's `::Logger` class/subclass;
# - supports `SemanticLogger::Logger` (see "semantic_logger" gem)
# - at this moment, debug logs are only emitted in the following cases:
# - "[redis_queued_locks.start_lock_obtaining]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.start_try_to_lock_cycle]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.dead_score_reached__reset_acquier_position]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.lock_obtained]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acq_time", "acs_strat");
# - "[redis_queued_locks.extendable_reentrant_lock_obtained]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acq_time", "acs_strat");
# - "[redis_queued_locks.reentrant_lock_obtained]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acq_time", "acs_strat");
# - "[redis_queued_locks.fail_fast_or_limits_reached_or_deadlock__dequeue]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.expire_lock]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.decrease_lock]" (logs "lock_key", "decreased_ttl", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - by default uses VoidLogger that does nothing;
config.logger = RedisQueuedLocks::Logging::VoidLogger
# (default: false)
# - adds additional debug logs;
# - enables additional logs for each internal try-retry lock acquiring (a lot of logs can be generated depending on your retry configurations);
# - it adds following debug logs in addition to the existing:
# - "[redis_queued_locks.try_lock.start]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.try_lock.rconn_fetched]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.try_lock.same_process_conflict_detected]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.try_lock.same_process_conflict_analyzed]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "spc_status");
# - "[redis_queued_locks.try_lock.reentrant_lock__extend_and_work_through]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "spc_status", "last_ext_ttl", "last_ext_ts");
# - "[redis_queued_locks.try_lock.reentrant_lock__work_through]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "spc_status", last_spc_ts);
# - "[redis_queued_locks.try_lock.acq_added_to_queue]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat")";
# - "[redis_queued_locks.try_lock.remove_expired_acqs]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.try_lock.get_first_from_queue]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "first_acq_id_in_queue");
# - "[redis_queued_locks.try_lock.exit__queue_ttl_reached]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
# - "[redis_queued_locks.try_lock.exit__no_first]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "first_acq_id_in_queue", "<current_lock_data>");
# - "[redis_queued_locks.try_lock.exit__lock_still_obtained]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat", "first_acq_id_in_queue", "locked_by_acq_id", "<current_lock_data>");
# - "[redis_queued_locks.try_lock.obtain__free_to_acquire]" (logs "lock_key", "queue_ttl", "acq_id", "hst_id", "acs_strat");
config.log_lock_try = false
# (default: false)
# - enables <log sampling>: only the configured percent of RQL cases will be logged;
# - disabled by default;
# - works in tandem with <config.log_sampling_percent> and <config.log_sampler> configs;
config.log_sampling_enabled = false
# (default: 15)
# - the percent of cases that should be logged;
# - takes effect when <config.log_sampling_enabled> is true;
# - works in tandem with <config.log_sampling_enabled> and <config.log_sampler> configs;
config.log_sampling_percent = 15
# (default: RedisQueuedLocks::Logging::Sampler)
# - percent-based log sampler that decides whether an RQL case should be logged or not;
# - works in tandem with <config.log_sampling_enabled> and <config.log_sampling_percent> configs;
# - based on the ultra simple percent-based (weight-based) algorithm that uses SecureRandom.rand
# method so the algorithm error is ~(0%..13%);
# - you can provide your own log sampler with a better algorithm that should realize
# `sampling_happened?(percent) => boolean` interface (see `RedisQueuedLocks::Logging::Sampler` for example);
config.log_sampler = RedisQueuedLocks::Logging::Sampler
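A minimal custom sampler sketch, built only on the sampling_happened?(percent) => boolean contract described above (MyLogSampler is an illustrative name, not part of the gem):

module MyLogSampler
  # percent: Integer in 0..100 - the portion of RQL cases that should be logged.
  def self.sampling_happened?(percent)
    rand(100) < percent
  end
end

# inside the client configuration block:
# config.log_sampler = MyLogSampler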
An instrumentation layer is encapsulated in an instrumenter object stored in the config (RedisQueuedLocks::Client#config[:instrumenter]).
Instrumentation can be sampled. See the Instrumentation Configuration section for details.
The instrumenter object should provide a notify(event, payload) method with the following signature:
- event - string;
- payload - hash<Symbol,Any>;
redis_queued_locks provides two instrumenters:
- RedisQueuedLocks::Instrument::ActiveSupport - an ActiveSupport::Notifications instrumenter that instruments events via the ActiveSupport::Notifications API;
- RedisQueuedLocks::Instrument::VoidNotifier - an instrumenter that does nothing;
By default RedisQueuedLocks::Client is configured with the void notifier (which means "instrumentation is disabled").
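A minimal custom instrumenter sketch that relies only on the notify(event, payload) contract above (StdoutInstrumenter is an illustrative name, not part of the gem):

class StdoutInstrumenter
  # event   - String ("redis_queued_locks.lock_obtained", etc.);
  # payload - Hash<Symbol,Any> (see the event payload documentation below);
  def self.notify(event, payload = {})
    puts "[#{event}] #{payload.inspect}"
  end
end

# inside the client configuration block:
# config.instrumenter = StdoutInstrumenter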
NOTICE: instrumentation can be sampled via:
- config.instr_sampling_enabled = true (false by default);
- config.instr_sampler = RedisQueuedLocks::Instrument::Sampler (used by default);
# (default: RedisQueuedLocks::Instrument::VoidNotifier)
# - instrumentation layer;
# - you can provide your own instrumenter that should realize `#notify(event, payload = {})` interface:
# - event: <string> required;
# - payload: <hash> required;
# - disabled by default via `VoidNotifier`;
config.instrumenter = RedisQueuedLocks::Instrument::ActiveSupport
# (default: false)
# - enables <instrumentation sampling>: only the configured percent of RQL cases will be instrumented;
# - disabled by default;
# - works in tandem with <config.instr_sampling_percent> and <config.instr_sampler> configs;
config.instr_sampling_enabled = false
# (default: 15)
# - the percent of cases that should be instrumented;
# - takes effect when <config.instr_sampling_enabled> is true;
# - works in tandem with <config.instr_sampling_enabled> and <config.instr_sampler> configs;
config.instr_sampling_percent = 15
# (default: RedisQueuedLocks::Instrument::Sampler)
# - percent-based sampler that decides whether an RQL case should be instrumented or not;
# - works in tandem with <config.instr_sampling_enabled> and <config.instr_sampling_percent> configs;
# - based on the ultra simple percent-based (weight-based) algorithm that uses SecureRandom.rand
# method so the algorithm error is ~(0%..13%);
# - you can provide your own instrumentation sampler with a better algorithm that should realize
# `sampling_happened?(percent) => boolean` interface (see `RedisQueuedLocks::Instrument::Sampler` for example);
config.instr_sampler = RedisQueuedLocks::Instrument::Sampler
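Putting the sampling options together, a sketch that uses only the settings documented above (the percentages are arbitrary examples):

rql = RedisQueuedLocks::Client.new(redis_client) do |config|
  # log only ~10% of RQL cases;
  config.log_sampling_enabled = true
  config.log_sampling_percent = 10

  # instrument only ~25% of RQL cases;
  config.instr_sampling_enabled = true
  config.instr_sampling_percent = 25
end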
List of instrumentation events:
- redis_queued_locks.lock_obtained;
- redis_queued_locks.extendable_reentrant_lock_obtained;
- redis_queued_locks.reentrant_lock_obtained;
- redis_queued_locks.lock_hold_and_release;
- redis_queued_locks.reentrant_lock_hold_completes;
- redis_queued_locks.explicit_lock_release;
- redis_queued_locks.explicit_all_locks_release;
Detailed event semantics and payload structure:
"redis_queued_locks.lock_obtained"
#lock
/#lock!
;:ttl
- integer
/milliseconds
- lock ttl;:acq_id
- string
- lock acquier identifier;:hst_id
- string
- lock's host identifier;:lock_key
- string
- lock name;:ts
- numeric
/epoch
- the time when the lock was obtaiend;:acq_time
- float
/milliseconds
- time spent on lock acquiring;:instrument
- nil
/Any
- custom data passed to the #lock
/#lock!
method as :instrument
attribute;"redis_queued_locks.extendable_reentrant_lock_obtained"
#lock
/#lock!
when the lock was obtained as reentrant lock;:lock_key
- string
- lock name;:ttl
- integer
/milliseconds
- last lock ttl by reentrant locking;:acq_id
- string
- lock acquier identifier;:hst_id
- string
- lock's host identifier;:ts
- numeric
/epoch
- the time when the lock was obtaiend as extendable reentrant lock;:acq_time
- float
/milliseconds
- time spent on lock acquiring;:instrument
- nil
/Any
- custom data passed to the #lock
/#lock!
method as :instrument
attribute;"redis_queued_locks.reentrant_lock_obtained"
#lock
/#lock!
when the lock was obtained as reentrant lock;:lock_key
- string
- lock name;:ttl
- integer
/milliseconds
- last lock ttl by reentrant locking;:acq_id
- string
- lock acquier identifier;:hst_id
- string
- lock's host identifier;:ts
- numeric
/epoch
- the time when the lock was obtaiend as reentrant lock;:acq_time
- float
/milliseconds
- time spent on lock acquiring;:instrument
- nil
/Any
- custom data passed to the #lock
/#lock!
method as :instrument
attribute;"redis_queued_locks.lock_hold_and_release"
#lock
/#lock!
when invoked with a block of code;:hold_time
- float
/milliseconds
- lock hold time;:ttl
- integer
/milliseconds
- lock ttl;:acq_id
- string
- lock acquier identifier;:hst_id
- string
- lock's host identifier;:lock_key
- string
- lock name;:ts
- numeric
/epoch
- the time when lock was obtained;:acq_time
- float
/milliseconds
- time spent on lock acquiring;:instrument
- nil
/Any
- custom data passed to the #lock
/#lock!
method as :instrument
attribute;"redis_queued_locks.reentrant_lock_hold_completes"
#lock
/#lock!
when the lock was obtained as reentrant lock;:hold_time
- float
/milliseconds
- lock hold time;:ttl
- integer
/milliseconds
- last lock ttl by reentrant locking;:acq_id
- string
- lock acquier identifier;:hst_id
- string
- lock's host identifier;:ts
- numeric
/epoch
- the time when the lock was obtaiend as reentrant lock;:lock_key
- string
- lock name;:acq_time
- float
/milliseconds
- time spent on lock acquiring;:instrument
- nil
/Any
- custom data passed to the #lock
/#lock!
method as :instrument
attribute;"redis_queued_locks.explicit_lock_release"
RedisQueuedLock#unlock
);#unlock
;:at
- float
/epoch
- the time when the lock was released;:rel_time
- float
/milliseconds
- time spent on lock releasing;:lock_key
- string
- released lock (lock name);:lock_key_queue
- string
- released lock queue (lock queue name);"redis_queued_locks.explicit_all_locks_release"
RedisQueuedLock#clear_locks
);#clear_locks
;:rel_time
- float
/milliseconds
- time spent on "realese all locks" operation;:at
- float
/epoch
- the time when the operation has ended;:rel_keys
- integer
- released redis keys count (released queue keys
+ released lock keys
);true
)
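When the ActiveSupport instrumenter is configured, these events can be consumed with regular ActiveSupport::Notifications subscriptions; a minimal sketch (payload keys as documented above):

require 'active_support/notifications'

ActiveSupport::Notifications.subscribe("redis_queued_locks.lock_obtained") do |_name, _start, _finish, _id, payload|
  puts "lock #{payload[:lock_key]} obtained in #{payload[:acq_time]}ms"
end

ActiveSupport::Notifications.subscribe("redis_queued_locks.lock_hold_and_release") do |_name, _start, _finish, _id, payload|
  puts "lock #{payload[:lock_key]} held for #{payload[:hold_time]}ms"
end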
- the swarm supervisor (when enabled) keeps trying to resurrect unexpectedly terminated swarm elements, and will notify about this;
- RedisClient instances that are fully independent (distributed redis instances);
- #lock_series - acquire a series of locks:
rql.lock_series('lock_a', 'lock_b', 'lock_c') { puts 'locked' }
- light mode: an ability to work without any debug and instrumentation logic and data (totally reduced debugging and instrumenting possibilities, but better performance);
- Dragonfly database backend (https://github.com/dragonflydb/dragonfly) (https://www.dragonflydb.io/);
- (for non-timed locks): a per-ruby-block-holding-the-lock sidecar Ractor and an in-progress queue in RedisDB that will extend the acquired lock for long-running blocks of code (invoked "under" a lock whose ttl may expire before the block execution completes); it makes sense for non-timed locks only;
- RedisQueuedLocks::Acquier::Try.try_to_lock - detailed analysis of the successful result;
- go-lang implementation;
Contributing:
- create your feature branch (git checkout -b feature/my-new-feature);
- commit your changes (git commit -am '[feature_context] Add some feature');
- push to the branch (git push origin feature/my-new-feature);
Released under MIT License.