Dual Window Cache

Dual Window Cache adaptively coordinates the ratio of LRU to LFU using two sliding windows.
Maintenance
The source code is maintained in the following repository:
https://github.com/falsandtru/spica
Abstract
The highest-performance constant-complexity cache algorithm.
Strategies
- Dynamic partition
- Sliding window
- Transitive wide MRU with cyclic replacement
- Sampled history
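As a rough illustration of the dynamic-partition strategy, a cache can split its capacity between a recency (LRU) list and a frequency (LFU) list and move the boundary as the workload shifts. The sketch below is NOT the actual DWC algorithm; the class name, promotion rule, and fixed `ratio` are invented purely to show the two-list structure.

```typescript
// Simplified sketch of a two-list cache with a dynamic partition.
// NOT the actual DWC implementation: promotion and partition rules
// here are invented purely to illustrate the structure.
class SketchCache<K, V> {
  private lru = new Map<K, V>(); // recency list (Map preserves insertion order)
  private lfu = new Map<K, V>(); // frequency list (entries hit at least twice)
  private ratio = 0.5;           // fraction of capacity granted to the LFU list
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (this.lfu.has(key)) {
      const value = this.lfu.get(key)!;
      this.lfu.delete(key);
      this.lfu.set(key, value); // refresh to the MRU position of the LFU list
      return value;
    }
    if (this.lru.has(key)) {
      const value = this.lru.get(key)!;
      this.lru.delete(key);
      this.lfu.set(key, value); // second hit: promote to the LFU list
      return value;
    }
    return undefined;
  }

  set(key: K, value: V): void {
    this.lru.delete(key);
    this.lfu.delete(key);
    this.lru.set(key, value);
    while (this.lru.size + this.lfu.size > this.capacity) {
      // Evict from whichever list exceeds its current share.
      const lfuMax = Math.round(this.capacity * this.ratio);
      const victims = this.lfu.size > lfuMax ? this.lfu : this.lru;
      victims.delete(victims.keys().next().value!);
    }
  }
}
```

In the real algorithm the partition is adjusted continuously by comparing hit statistics measured over two sliding windows; here `ratio` is left fixed for brevity.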
Properties
Generally superior and almost flawless.
- High performance
  - High hit ratio (DS1, S3, OLTP, GLI)
    - Highest hit ratio among all the general-purpose cache algorithms.
    - Near ARC (S3, OLTP).
    - Significantly higher than ARC (DS1, GLI).
  - Low overhead (high throughput)
    - Constant time complexity overhead decreasing in linear time.
    - Uses only two lists.
  - Low latency
    - Constant time complexity.
    - No batch processing, unlike LIRS and TinyLFU.
  - Parallel suitable
    - Separated lists are suitable for lock-free processing.
- Efficient
  - Low memory usage
    - Constant extra space complexity.
    - Retains only the keys of resident entries (no history).
  - Immediate release of evicted keys
    - A primary cache algorithm for a standard library must release memory immediately.
- High resistance
  - Scan, loop, and burst resistance
- Few tradeoffs
  - Not the highest hit ratio
  - A significantly small cache size can degrade the hit ratio
- Upward compatible with ARC
  - Comprehensively higher performance
- Upward compatible with Segmented LRU
  - Totally higher performance
- Suitable for TinyLFU
  - Better as (W-)TinyLFU's eviction algorithm.
Efficiency
Some other cache algorithms require extra memory space to retain evicted keys.
Linear time complexity indicates the existence of batch processing.
Note that an admission algorithm doesn't work without an eviction algorithm.
Algorithm | Type | Time complexity (Worst case) | Space complexity (Extra) | Key size | Data structures |
---|---|---|---|---|---|
LRU | Evict | Constant | Constant | 1x | 1 list |
DWC | Evict | Constant | Constant | 1x | 2 lists |
ARC | Evict | Constant | Linear | 2x | 4 lists |
LIRS | Evict | Linear | Linear | 3-2500x | 2 lists |
TinyLFU | Admit | Linear | Linear | 8bit * 10N * 4 | 5 arrays |
W-TinyLFU | Admit | Linear | Linear | 8bit * 10N * 4 | 1 list 4 arrays |
https://github.com/ben-manes/caffeine/wiki/Efficiency
https://github.com/zhongch4g/LIRS2/blob/master/src/replace_lirs_base.cc
Resistance
Burst resistance means resistance to continuous cache misses.
Algorithm | Type | Scan | Loop | Burst |
---|---|---|---|---|
LRU | Evict | | | ✓ |
DWC | Evict | ✓ | ✓ | ✓ |
ARC | Evict | ✓ | | ✓ |
LIRS | Evict | ✓ | ✓ | |
TinyLFU | Admit | ✓ | ✓ | |
W-TinyLFU | Admit | ✓ | ✓ | ✓ |
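The scan column above can be demonstrated concretely: a single pass over many cold keys wipes out a plain LRU's entire working set. Here is a minimal, hypothetical LRU built on Map insertion order (for illustration only; this is neither lru-cache nor spica/lru):

```typescript
// Plain LRU built on Map insertion order.
class LRU<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}
  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key);
    this.map.set(key, value); // move to the MRU position
    return value;
  }
  set(key: K, value: V): void {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      this.map.delete(this.map.keys().next().value!); // evict the LRU entry
    }
  }
}

const lru = new LRU<number, number>(4);
for (let k = 0; k < 4; k++) lru.set(k, k);     // hot working set 0..3
for (let k = 100; k < 200; k++) lru.set(k, k); // one-shot scan of cold keys
// The scan has evicted the entire hot set:
console.log(lru.get(0)); // undefined
```

Scan-resistant algorithms such as DWC keep one-shot keys from displacing the high-frequency entries in the other list.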
Tradeoffs
Note that LIRS and TinyLFU are risky cache algorithms.
- LRU
  - Low performance
  - No resistance
    - Scan access clears all entries.
- DWC
  - Not the highest hit ratio
  - A significantly small cache size can degrade the hit ratio
- ARC
  - Middle performance
  - Inefficient
  - High overhead
  - Little resistance
- LIRS
  - Extremely inefficient
  - Spike latency
    - Bulk deletion of low-frequency entries takes linear time.
  - Vulnerable algorithm
    - Continuous cache misses on the last LIR and HIR entries explode the key size.
- TinyLFU
  - Incomplete algorithm
    - Burst access degrades performance.
    - TinyLFU is worse than LRU in theory.
    - TinyLFU is just an incomplete implementation of W-TinyLFU.
  - Restricted delete operation
    - Bloom filters don't support the delete operation.
    - Frequent delete operations degrade performance.
  - Spike latency
    - A whole reset of the Bloom filters takes linear time.
  - Vulnerable algorithm
    - Burst access saturates the Bloom filters.
- W-TinyLFU
  - Restricted delete operation
    - Bloom filters don't support the delete operation.
    - Frequent delete operations degrade performance.
  - Spike latency
    - A whole reset of the Bloom filters takes linear time.
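The "whole reset takes linear time" item refers to the aging step of a TinyLFU-style frequency sketch: when the sample counter saturates, every counter is halved in one pass over the whole array. The minimal count-min-style sketch below (hypothetical names and hashing; not Ristretto's or Caffeine's actual code) makes that cost visible:

```typescript
// Minimal count-min-style frequency sketch with periodic halving (aging).
// Hypothetical illustration of why the reset step is linear in sketch size.
class FrequencySketch {
  private counters: Uint8Array;
  private additions = 0;
  private static readonly seeds = [0x9e3779b9, 0x85ebca6b];
  constructor(private width: number, private sampleSize: number) {
    this.counters = new Uint8Array(width);
  }
  private index(key: number, seed: number): number {
    // Trivial multiplicative hash, for illustration only.
    return (Math.imul(key ^ seed, 2654435761) >>> 0) % this.width;
  }
  increment(key: number): void {
    for (const seed of FrequencySketch.seeds) {
      const i = this.index(key, seed);
      if (this.counters[i] < 255) this.counters[i]++;
    }
    if (++this.additions >= this.sampleSize) this.reset();
  }
  frequency(key: number): number {
    return Math.min(
      ...FrequencySketch.seeds.map(seed => this.counters[this.index(key, seed)]),
    );
  }
  private reset(): void {
    // The spike: one pass over ALL counters, i.e. linear time.
    for (let i = 0; i < this.counters.length; i++) this.counters[i] >>= 1;
    this.additions >>= 1;
  }
}
```

Production implementations amortize or schedule this pass, but the worst-case latency of a single operation remains linear in the sketch size.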
Hit ratio
Note that other cache algorithms sometimes change their parameter values per workload to obtain a favorable result, as the TinyLFU paper changed the window size of W-TinyLFU.
All the results of DWC are measured with the same default parameter values.
TinyLFU's results are traces of Ristretto.
W-TinyLFU's results are traces of Caffeine.
- Set the datasets in ./benchmark/trace (see ./benchmark/ratio.ts).
- Run npm i.
- Run npm run bench.
- Click the DEBUG button to open a debug tab.
- Close the previous tab.
- Press the F12 key to open devtools.
- Select the console tab.
https://github.com/dgraph-io/benchmarks
https://github.com/ben-manes/caffeine/wiki/Efficiency
https://github.com/dgraph-io/ristretto
https://docs.google.com/spreadsheets/d/1G3deNz1gJCoXBE2IuraUSwLE7H_EMn4Sn2GU0HTpI5Y
https://github.com/jedisct1/rust-arc-cache/issues/1
DS1
W-TinyLFU > DWC, (LIRS) > (TinyLFU) > ARC > LRU
- DWC is significantly better than ARC.

DS1 1,000,000
LRU hit ratio 3.08%
DWC hit ratio 11.59%
DWC - LRU hit ratio delta 8.50%
DWC / LRU hit ratio rate 375%
DS1 2,000,000
LRU hit ratio 10.74%
DWC hit ratio 26.27%
DWC - LRU hit ratio delta 15.52%
DWC / LRU hit ratio rate 244%
DS1 3,000,000
LRU hit ratio 18.59%
DWC hit ratio 38.14%
DWC - LRU hit ratio delta 19.55%
DWC / LRU hit ratio rate 205%
DS1 4,000,000
LRU hit ratio 20.24%
DWC hit ratio 42.08%
DWC - LRU hit ratio delta 21.84%
DWC / LRU hit ratio rate 207%
DS1 5,000,000
LRU hit ratio 21.03%
DWC hit ratio 47.13%
DWC - LRU hit ratio delta 26.10%
DWC / LRU hit ratio rate 224%
DS1 6,000,000
LRU hit ratio 33.95%
DWC hit ratio 55.20%
DWC - LRU hit ratio delta 21.25%
DWC / LRU hit ratio rate 162%
DS1 7,000,000
LRU hit ratio 38.89%
DWC hit ratio 56.80%
DWC - LRU hit ratio delta 17.90%
DWC / LRU hit ratio rate 146%
DS1 8,000,000
LRU hit ratio 43.03%
DWC hit ratio 64.13%
DWC - LRU hit ratio delta 21.10%
DWC / LRU hit ratio rate 149%
S3
W-TinyLFU > (TinyLFU) > (LIRS) > DWC, ARC > LRU
- DWC is an approximation of ARC.

S3 100,000
LRU hit ratio 2.32%
DWC hit ratio 10.69%
DWC - LRU hit ratio delta 8.36%
DWC / LRU hit ratio rate 459%
S3 200,000
LRU hit ratio 4.63%
DWC hit ratio 19.17%
DWC - LRU hit ratio delta 14.54%
DWC / LRU hit ratio rate 414%
S3 300,000
LRU hit ratio 7.58%
DWC hit ratio 25.36%
DWC - LRU hit ratio delta 17.77%
DWC / LRU hit ratio rate 334%
S3 400,000
LRU hit ratio 12.03%
DWC hit ratio 30.71%
DWC - LRU hit ratio delta 18.67%
DWC / LRU hit ratio rate 255%
S3 500,000
LRU hit ratio 22.76%
DWC hit ratio 38.65%
DWC - LRU hit ratio delta 15.88%
DWC / LRU hit ratio rate 169%
S3 600,000
LRU hit ratio 34.63%
DWC hit ratio 47.37%
DWC - LRU hit ratio delta 12.74%
DWC / LRU hit ratio rate 136%
S3 700,000
LRU hit ratio 46.04%
DWC hit ratio 56.13%
DWC - LRU hit ratio delta 10.09%
DWC / LRU hit ratio rate 121%
S3 800,000
LRU hit ratio 56.59%
DWC hit ratio 64.31%
DWC - LRU hit ratio delta 7.71%
DWC / LRU hit ratio rate 113%
OLTP
W-TinyLFU > ARC, DWC > (LIRS) > LRU > (TinyLFU)
- DWC is an approximation of ARC.

OLTP 250
LRU hit ratio 16.47%
DWC hit ratio 17.65%
DWC - LRU hit ratio delta 1.17%
DWC / LRU hit ratio rate 107%
OLTP 500
LRU hit ratio 23.44%
DWC hit ratio 28.83%
DWC - LRU hit ratio delta 5.39%
DWC / LRU hit ratio rate 122%
OLTP 750
LRU hit ratio 28.28%
DWC hit ratio 34.57%
DWC - LRU hit ratio delta 6.29%
DWC / LRU hit ratio rate 122%
OLTP 1,000
LRU hit ratio 32.83%
DWC hit ratio 38.05%
DWC - LRU hit ratio delta 5.22%
DWC / LRU hit ratio rate 115%
OLTP 1,250
LRU hit ratio 36.20%
DWC hit ratio 40.05%
DWC - LRU hit ratio delta 3.85%
DWC / LRU hit ratio rate 110%
OLTP 1,500
LRU hit ratio 38.69%
DWC hit ratio 41.80%
DWC - LRU hit ratio delta 3.11%
DWC / LRU hit ratio rate 108%
OLTP 1,750
LRU hit ratio 40.78%
DWC hit ratio 43.17%
DWC - LRU hit ratio delta 2.39%
DWC / LRU hit ratio rate 105%
OLTP 2,000
LRU hit ratio 42.46%
DWC hit ratio 44.60%
DWC - LRU hit ratio delta 2.13%
DWC / LRU hit ratio rate 105%
GLI
W-TinyLFU, (LIRS) > DWC > (TinyLFU) >> ARC > LRU
- DWC is significantly better than ARC.

GLI 250
LRU hit ratio 0.93%
DWC hit ratio 15.29%
DWC - LRU hit ratio delta 14.36%
DWC / LRU hit ratio rate 1642%
GLI 500
LRU hit ratio 0.96%
DWC hit ratio 30.86%
DWC - LRU hit ratio delta 29.90%
DWC / LRU hit ratio rate 3201%
GLI 750
LRU hit ratio 1.16%
DWC hit ratio 41.78%
DWC - LRU hit ratio delta 40.62%
DWC / LRU hit ratio rate 3591%
GLI 1,000
LRU hit ratio 11.22%
DWC hit ratio 47.97%
DWC - LRU hit ratio delta 36.75%
DWC / LRU hit ratio rate 427%
GLI 1,250
LRU hit ratio 21.25%
DWC hit ratio 52.92%
DWC - LRU hit ratio delta 31.66%
DWC / LRU hit ratio rate 248%
GLI 1,500
LRU hit ratio 36.56%
DWC hit ratio 54.02%
DWC - LRU hit ratio delta 17.45%
DWC / LRU hit ratio rate 147%
GLI 1,750
LRU hit ratio 45.04%
DWC hit ratio 55.15%
DWC - LRU hit ratio delta 10.10%
DWC / LRU hit ratio rate 122%
GLI 2,000
LRU hit ratio 57.41%
DWC hit ratio 57.43%
DWC - LRU hit ratio delta 0.01%
DWC / LRU hit ratio rate 100%
LOOP
LOOP 100
LRU hit ratio 0.00%
DWC hit ratio 7.82%
DWC - LRU hit ratio delta 7.82%
DWC / LRU hit ratio rate Infinity%
LOOP 250
LRU hit ratio 0.00%
DWC hit ratio 19.46%
DWC - LRU hit ratio delta 19.46%
DWC / LRU hit ratio rate Infinity%
LOOP 500
LRU hit ratio 0.00%
DWC hit ratio 44.26%
DWC - LRU hit ratio delta 44.26%
DWC / LRU hit ratio rate Infinity%
LOOP 750
LRU hit ratio 0.00%
DWC hit ratio 67.88%
DWC - LRU hit ratio delta 67.88%
DWC / LRU hit ratio rate Infinity%
LOOP 1,000
LRU hit ratio 0.00%
DWC hit ratio 95.72%
DWC - LRU hit ratio delta 95.72%
DWC / LRU hit ratio rate Infinity%
LOOP 1,250
LRU hit ratio 99.80%
DWC hit ratio 99.80%
DWC - LRU hit ratio delta 0.00%
DWC / LRU hit ratio rate 100%
Throughput
70-110% of lru-cache.
Note that the number of trials per capacity for simulation 1,000,000 is insufficient.
No result with 10,000,000 because lru-cache crashes with the following error on the GitHub Actions machine below.
The error was verified to be thrown even when benchmarking only lru-cache.
Of course, DWC was verified to work fine under the same conditions.
Error: Uncaught RangeError: Map maximum size exceeded
System:
OS: Linux 5.15 Ubuntu 20.04.5 LTS (Focal Fossa)
CPU: (2) x64 Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz
Memory: 5.88 GB / 6.78 GB
ISCCache: lru-cache
LRUCache: spica/lru
DW-Cache: spica/cache
'ISCCache new x 10,563 ops/sec ±1.04% (113 runs sampled)'
'LRUCache new x 21,337,874 ops/sec ±1.11% (122 runs sampled)'
'DW-Cache new x 5,335,833 ops/sec ±1.14% (123 runs sampled)'
'ISCCache simulation 100 x 7,750,020 ops/sec ±1.97% (120 runs sampled)'
'LRUCache simulation 100 x 8,620,914 ops/sec ±2.74% (118 runs sampled)'
'DW-Cache simulation 100 x 5,929,238 ops/sec ±1.85% (120 runs sampled)'
'ISCCache simulation 1,000 x 7,155,541 ops/sec ±1.91% (119 runs sampled)'
'LRUCache simulation 1,000 x 7,602,324 ops/sec ±2.53% (118 runs sampled)'
'DW-Cache simulation 1,000 x 5,728,049 ops/sec ±2.02% (121 runs sampled)'
'ISCCache simulation 10,000 x 6,387,831 ops/sec ±1.94% (120 runs sampled)'
'LRUCache simulation 10,000 x 6,920,591 ops/sec ±2.40% (120 runs sampled)'
'DW-Cache simulation 10,000 x 5,969,294 ops/sec ±1.77% (121 runs sampled)'
'ISCCache simulation 100,000 x 3,075,606 ops/sec ±1.62% (113 runs sampled)'
'LRUCache simulation 100,000 x 3,533,877 ops/sec ±2.11% (117 runs sampled)'
'DW-Cache simulation 100,000 x 3,457,664 ops/sec ±2.08% (113 runs sampled)'
'ISCCache simulation 1,000,000 x 1,481,102 ops/sec ±3.37% (107 runs sampled)'
'LRUCache simulation 1,000,000 x 1,444,819 ops/sec ±3.79% (99 runs sampled)'
'DW-Cache simulation 1,000,000 x 1,361,458 ops/sec ±2.54% (113 runs sampled)'
// 80% of accesses target the hot range [0, capacity);
// 20% target a cold range 9 times as large.
const key = random() < 0.8
  ? random() * capacity * 1 | 0
  : random() * capacity * 9 + capacity | 0;
cache.get(key) ?? cache.set(key, {});
Comprehensive evaluation
Hit ratio
Class | Algorithms |
---|---|
Very high | W-TinyLFU |
High | DWC, (LIRS) |
Middle | ARC, (TinyLFU) |
Low | LRU |
Efficiency
Extra space | Algorithms |
---|---|
Constant | LRU, DWC |
Linear (< 1) | W-TinyLFU > (TinyLFU) |
Linear (1) | ARC |
Linear (> 1) | (LIRS) |
Resistance
Class | Effect | Algorithms |
---|---|---|
Total | High | W-TinyLFU, DWC |
Most | High | (TinyLFU), (LIRS) |
Few | Low | ARC |
None | None | LRU |
Throughput
Class | Algorithms |
---|---|
Bloom filter + Static list | (TinyLFU) |
Multiple lists (Lock-free) | DWC > (LIRS) > ARC |
Dynamic list + Bloom filter | W-TinyLFU |
Static list | LRU |
Latency
Time | Algorithms |
---|---|
Constant | LRU, DWC, ARC |
Linear (1) | W-TinyLFU > (TinyLFU) |
Linear (> 1) | (LIRS) |
Vulnerability
Class | Algorithms |
---|---|
Degrade | (TinyLFU) |
Crash | (LIRS) |
API
export namespace Cache {
  export interface Options<K, V = undefined> {
    readonly capacity?: number;
    readonly window?: number;
    readonly resource?: number;
    readonly age?: number;
    readonly eagerExpiration?: boolean;
    readonly disposer?: (value: V, key: K) => void;
    readonly capture?: {
      readonly delete?: boolean;
      readonly clear?: boolean;
    };
    readonly scope?: number;
    readonly sample?: number;
    readonly resolution?: number;
    readonly offset?: number;
    readonly sweep?: {
      readonly threshold?: number;
      readonly window?: number;
      readonly range?: number;
      readonly shift?: number;
    };
  }
}
export class Cache<K, V = undefined> {
  constructor(capacity: number, opts?: Cache.Options<K, V>);
  constructor(opts: Cache.Options<K, V>);
  put(key: K, value: V, opts?: { size?: number; age?: number; }): boolean;
  put(this: Cache<K, undefined>, key: K, value?: V, opts?: { size?: number; age?: number; }): boolean;
  set(key: K, value: V, opts?: { size?: number; age?: number; }): this;
  set(this: Cache<K, undefined>, key: K, value?: V, opts?: { size?: number; age?: number; }): this;
  get(key: K): V | undefined;
  has(key: K): boolean;
  delete(key: K): boolean;
  clear(): void;
  resize(capacity: number, resource?: number): void;
  readonly length: number;
  readonly size: number;
  [Symbol.iterator](): Iterator<[K, V], undefined, undefined>;
}