# Benchit

Benchit is a really simple benchmarking library that returns the running time of a block of code. It supports both synchronous and asynchronous code.
## Usage

### Synchronous code block

```coffee
benchit.one ->
  ...
# (will return the elapsed time in milliseconds)
```
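For example, assuming Benchit is required as a Node module (the module name below is a guess), timing a synchronous sort might look like this:

```coffee
benchit = require 'benchit' # assumed module name

# `one` runs the block once and returns the elapsed milliseconds.
elapsed = benchit.one ->
  [10000..1].sort (a, b) -> a - b
console.log "#{elapsed}ms"
```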
### Asynchronous code block

```coffee
benchit.one (done) ->
  ...
  done() # call when finished
, (elapsed) ->
  console.log elapsed
```
### Multiple code blocks

```coffee
benchit.many
  synchronousCode: ->
    ...
  asynchronousCode: (done) ->
    ...
    done() # call when finished
, (name, elapsed) -> console.log "#{name} elapsed: #{elapsed}ms"

###
Output:
synchronousCode elapsed: 100ms
asynchronousCode elapsed: 200ms
###
```

(note: the 2nd parameter, the callback, is optional)
## Improvements to be made

- This is more of an educational exercise at the moment. Benchmark.js is a far better alternative, especially for testing across browsers.
- However, Benchmark.js does have one weakness: if you want to benchmark something like sending larger packets across a network or sorting a larger list, the test function is not passed a parameter telling it to try a larger input "size". This matters because some algorithms' running times grow exponentially with larger inputs, which is different information from running the same algorithm on the same input many times.
- Ideally, what should happen is this (a rough sketch in code follows this list):
  - Run each test case at a starting size.
  - Increase the "size" until a certain amount of time has elapsed for any test case (say, a second). The number of times it ran (i.e. the iteration count) is tracked and sets the baseline, so that no single test runs for too long.
  - Run each test case for that number of iterations, repeatedly, until a fixed number of seconds has passed (say, 5-10 seconds).
  - Return the average number of operations per second and the standard deviation for each test case.
- However, what is preventing me from implementing this is that I'm still trying to understand what is meant by "statistical significance".
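A loose sketch of that procedure for synchronous tests, under one interpretation of the steps above. None of this exists in Benchit; the helpers, thresholds, and the assumption that each test takes a `size` parameter are all hypothetical:

```coffee
# Hypothetical sketch, not part of Benchit. Synchronous tests only;
# each test is a function of input size, e.g. (size) -> buildList(size).sort()
now = -> Date.now()

# Steps 1-2: starting small, double `size` until a one-second budget fits
# only a handful of runs of some test case, then record that iteration
# count as the batch baseline so no single batch runs for too long.
calibrate = (tests, budgetMs = 1000) ->
  size = 1
  loop
    for own name, test of tests
      iterations = 0
      start = now()
      while now() - start < budgetMs
        test size
        iterations++
      return { size, iterations } if iterations <= 10
    size *= 2

# Steps 3-4: run each test in batches of `iterations` for ~5 seconds,
# then report the mean operations per second and standard deviation.
measure = (tests, totalMs = 5000) ->
  { size, iterations } = calibrate tests
  results = {}
  for own name, test of tests
    samples = []
    start = now()
    while now() - start < totalMs
      batchStart = now()
      test size for i in [1..iterations]
      # (a real implementation would guard against a 0ms batch here)
      samples.push iterations / ((now() - batchStart) / 1000) # ops/sec
    mean = samples.reduce(((a, b) -> a + b), 0) / samples.length
    variance = samples.reduce(((a, s) -> a + Math.pow(s - mean, 2)), 0) / samples.length
    results[name] = { mean, stddev: Math.sqrt(variance) }
  results
```

The statistics question then becomes how many batch samples are enough for the mean to be trustworthy, which is exactly the "statistical significance" question above.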
## Discussion

- The function compilation feature in Benchmark.js aims to make benchmarks more accurate. However, as far as I know, the only extra cost of not inlining the test code into the timing loop is one extra function call per iteration, which is a constant-time operation (as long as each call uses the same number of parameters). I'm personally more interested in relative comparisons between code blocks, so I'm unlikely to implement this any time soon.
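To make that concrete, a minimal sketch (the names are made up): the only difference between the two timing loops below is one function call per iteration, and that overhead is roughly constant as long as the call shape stays the same:

```coffee
testFn = -> Math.sqrt 12345 # the code under test

# Roughly what compiling the test body into the loop produces:
inlined = (n) ->
  start = Date.now()
  Math.sqrt 12345 for i in [1..n]
  Date.now() - start

# Calling the test function on each iteration instead; the extra cost
# is one function call per iteration, constant for a fixed arity.
called = (n) ->
  start = Date.now()
  testFn() for i in [1..n]
  Date.now() - start
```

Since that overhead is the same for every test, it cancels out in relative comparisons between code blocks.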
## Interesting reads