# bench-flumelog
a simple benchmark of flumedb log implementations.
## method
append as many items as can be added in 10 seconds,
then stream back all of those items, then read them at random for another 10 seconds.
for each phase, record the number of items per second and the mb per second.
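
for reference, here is a rough sketch of the three phases (not the exact code in this module), assuming a flumelog-style api: `log.append(value, cb)`, `log.get(seq, cb)`, and `log.stream(opts)` returning a pull-stream source. the item shape and the `report` helper are made up for illustration, and error handling and stack-safety for synchronously-calling logs are omitted.

```js
var pull = require('pull-stream')

var RUN_TIME = 10e3 // run each timed phase for 10 seconds

// turn raw counts into the columns reported below
function report (name, ops, bytes, ms) {
  var seconds = ms / 1000
  var mb = bytes / (1024 * 1024)
  return [name, ops / seconds, mb / seconds, ops, mb, seconds].join(', ')
}

// phase 1: append as many items as possible in RUN_TIME,
// remembering the seqs so the random phase can read them back.
function appendPhase (log, cb) {
  var start = Date.now(), ops = 0, bytes = 0, seqs = []
  ;(function next () {
    var value = { timestamp: Date.now(), random: Math.random() }
    log.append(value, function (err, seq) {
      if (err) return cb(err)
      ops++
      bytes += JSON.stringify(value).length
      seqs.push(seq)
      if (Date.now() - start < RUN_TIME) return next()
      cb(null, report('append', ops, bytes, Date.now() - start), seqs)
    })
  })()
}

// phase 2: stream every item back. this phase ends when the log
// is exhausted, which may be well before RUN_TIME has passed.
function streamPhase (log, cb) {
  var start = Date.now(), ops = 0, bytes = 0
  pull(
    log.stream({ seqs: false }), // values only
    pull.drain(function (value) {
      ops++
      bytes += JSON.stringify(value).length
    }, function (err) {
      cb(err, report('stream', ops, bytes, Date.now() - start))
    })
  )
}

// phase 3: read random seqs (as returned by append) for RUN_TIME.
function randomPhase (log, seqs, cb) {
  var start = Date.now(), ops = 0, bytes = 0
  ;(function next () {
    var seq = seqs[Math.floor(Math.random() * seqs.length)]
    log.get(seq, function (err, value) {
      if (err) return cb(err)
      ops++
      bytes += JSON.stringify(value).length
      if (Date.now() - start < RUN_TIME) return next()
      cb(null, report('random', ops, bytes, Date.now() - start))
    })
  })()
}
```

note that each phase issues one operation at a time, so ops/second here is effectively the inverse of the average per-operation latency.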
## results

| name   | ops/second | mb/second |     ops | total-mb | seconds |
|--------|-----------:|----------:|--------:|---------:|--------:|
| append | 109777.644 |    13.881 | 1097996 |  138.844 |  10.002 |
| stream | 105121.787 |    13.292 | 1051323 |  132.942 |  10.001 |
| random |  26169.883 |     3.309 |  261725 |   33.095 |  10.001 |

| name   | ops/second | mb/second |    ops | total-mb | seconds |
|--------|-----------:|----------:|-------:|---------:|--------:|
| append |  64168.894 |     8.116 | 643229 |   81.359 |  10.024 |
| stream |   69560.83 |     8.798 | 643229 |   81.359 |   9.247 |
| random |  31589.041 |     3.995 | 315922 |   39.959 |  10.001 |

the in-memory log is extremely fast, because the whole log is kept completely in memory.

| name   | ops/second | mb/second |          ops | total-mb | seconds |
|--------|-----------:|----------:|-------------:|---------:|--------:|
| append | 121065.388 |    15.313 |      1212712 |  153.391 |  10.017 |
| stream | 247025.213 |   574.498 |      1212712 |  153.391 |   0.267 |
| random | 662146.485 |    83.752 | -1967807.592 |  837.605 |  10.001 |

stream shows a higher ops/second than append, but the same total number of mb,
because the stream phase stops once it has read the entire log,
while the random phase continues for the full 10 seconds.
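
to make that concrete with the numbers above: the stream phase read the whole log in 0.267 seconds, so

```
153.391 mb / 0.267 s ≈ 574.5 mb/second
```

while the append and random figures are averaged over a full 10 seconds.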

indexed db (firefox)

| name   | ops/second | mb/second |   ops | total-mb | seconds |
|--------|-----------:|----------:|------:|---------:|--------:|
| append |   7738.949 |     0.979 | 81940 |   10.366 |  10.588 |
| stream |   2996.709 |     0.379 | 30054 |    3.802 |  10.029 |
| random |   4072.355 |     0.515 | 40805 |    5.162 |   10.02 |

indexed db (electron/chromium)

| name   | ops/second | mb/second |   ops | total-mb | seconds |
|--------|-----------:|----------:|------:|---------:|--------:|
| append |   6179.487 |     0.782 | 65552 |    8.295 |  10.608 |
| stream |   3189.681 |     0.403 | 31900 |    4.037 |  10.001 |
| random |   5695.873 |      0.72 | 57010 |    7.215 |  10.009 |
something about indexed db is making this very slow!