benchit

Really simple code benchmarking library for nodejs/coffeescript

  • Version: 0.0.3 (latest)
  • Weekly downloads: 1
  • Maintainers: 1
  • Published: 26 Dec 2012

Benchit

This is a really simple benchmarking library that returns the running time of a block of code. It supports both synchronous and asynchronous code.

Usage

Synchronous code block

benchit.one ->
	...
# (will return elapsed time in milliseconds)

Asynchronous code block

benchit.one (done) ->
	...
	done() # call when finished
, (elapsed) ->
	console.log elapsed

Multiple code blocks

benchit.many
	synchronousCode: ->
		...

	asynchronousCode: (done) ->
		...
		done() # call when finished
, (name, elapsed) -> console.log "#{name} elapsed: #{elapsed}ms"

###
Output:
synchronousCode elapsed: 100ms
asynchronousCode elapsed: 200ms

(note: 2nd parameter is optional)
###

Improvements to be made

  • This is more of an educational exercise at the moment. Benchmark.js is definitely the better alternative to use, especially for testing across browsers.
    • However, it does have one weakness: if you want to test something like sending larger packets across a network or sorting a larger list, there is no parameter passed to the test function telling it to try a larger "size". This matters because some algorithm running times increase exponentially with larger inputs, which you cannot observe by running the same algorithm on the same input many times.
  • Ideally, what should happen is (see the sketch after this list):
    • Run each test case at a starting size.
    • Increase the "size" until a certain amount of time (say a second) has elapsed for any test case, and track how many times it ran (i.e. the iteration count); this sets the baseline so that no single test runs for too long.
    • For each test case, run it for that number of iterations repeatedly until a number of seconds have passed (say 5-10 seconds).
    • Return the average number of operations per second and the standard deviation for each test case.
  • However, what is preventing me from implementing this is that I'm still trying to understand what is meant by "statistical significance".
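
A minimal sketch of that scheme might look like the following. None of this is part of benchit: calibrate, measure, fn, size, targetMs and totalMs are all hypothetical names, the test function is assumed to take a "size" argument as proposed above, and plain Date.now() stands in for a proper high-resolution timer.

# Sketch only: hypothetical helpers, not part of benchit's API.

# Count how many runs of fn(size) fit into roughly targetMs; that count
# becomes the iteration baseline, so no single test runs for too long.
calibrate = (fn, size, targetMs = 1000) ->
	iterations = 0
	start = Date.now()
	while Date.now() - start < targetMs
		fn(size)
		iterations++
	iterations

# Run batches of `iterations` calls until totalMs has passed, then report
# the mean operations per second and the standard deviation across batches.
measure = (fn, size, iterations, totalMs = 5000) ->
	samples = []
	start = Date.now()
	while Date.now() - start < totalMs
		t0 = Date.now()
		fn(size) for i in [1..iterations]
		elapsed = Math.max 1, Date.now() - t0       # guard against a 0ms batch
		batchOps = (iterations * 1000) / elapsed    # operations per second for this batch
		samples.push batchOps
	mean = samples.reduce(((a, b) -> a + b), 0) / samples.length
	variance = samples.reduce(((a, s) -> a + Math.pow(s - mean, 2)), 0) / samples.length
	{ opsPerSecond: mean, stdDev: Math.sqrt(variance) }

Whether the difference between two mean operations-per-second figures is meaningful is exactly the "statistical significance" question above; the standard deviation across batches is the raw material for answering it.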

Discussion

  • The function compilation feature in benchmark.js aims to make benchmarks more accurate. However, as far as I know, the only extra cost of not inlining the test code into the loop is an extra function call, which is a constant-time operation (as long as the same number of parameters is used for each call). I'm personally more interested in relative comparisons between code blocks, so I'm most likely not going to implement this any time soon; a sketch of such a relative comparison follows below.
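
For instance, a relative comparison of two ways of building a long string could use benchit.many exactly as documented above. The code blocks here are only illustrative; concatenate and join are made-up names.

benchit.many
	concatenate: ->
		s = ''
		s += 'x' for i in [1..100000]
		s

	join: ->
		('x' for i in [1..100000]).join ''
, (name, elapsed) -> console.log "#{name} elapsed: #{elapsed}ms"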

Interesting reads
