
timer-set

Package Overview
Dependencies: 0
Maintainers: 1
Versions: 9

Comparing version 1.0.0 to 1.0.1


package.json
{
"name": "timer-set",
"version": "1.0.0",
"version": "1.0.1",
"description": "Code timing class",

@@ -5,0 +5,0 @@ "homepage": "https://github.com/BrenPatF/timer-set_nodejs#readme",

@@ -75,4 +75,8 @@ # timer-set

- Returns elapsed, user and system CPU times in ms per call.
+ Returns an object, with fields:
+ * `ela`: elapsed time per call in ms
+ * `usr`: user CPU time per call in ms
+ * `sys`: system CPU time per call in ms
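To make the return shape concrete, here is a minimal usage sketch. It assumes the module is imported as `timer-set` and that `getSelfTimer` is a static method like `formatSelfTimer` below; neither assumption is confirmed by this excerpt.

```js
// Hedged sketch: read the package's self-timing figures (all in ms per call).
const TimerSet = require('timer-set'); // assumed export shape

const self = TimerSet.getSelfTimer();  // assumed static call
console.log(self.ela); // elapsed time per call
console.log(self.usr); // user CPU time per call
console.log(self.sys); // system CPU time per call
```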
### TimerSet.formatSelfTimer(time_width, time_dp, time_ratio_dp);

@@ -94,12 +98,12 @@ Static method to return the results from getSelfTimer in a formatted string, with parameters as formatTimers (but any extra spaces are trimmed here).
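As a hedged illustration of the formatted variant, the call below uses made-up argument values for the field width and decimal places named in the heading above; the actual defaults and output layout are not shown in this excerpt.

```js
// Hedged sketch: same figures as getSelfTimer, returned as one formatted string.
const TimerSet = require('timer-set');            // assumed export shape, as above
const line = TimerSet.formatSelfTimer(10, 2, 5);  // illustrative width/precision values
console.log(line);
```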

- The package is fairly thoroughly tested using the Math Function Unit Testing design pattern (`See also` below). In this approach, a 'pure' wrapper function is constructed that takes input parameters and returns a value, and can be tested within a loop over scenario records read from a JSON file.
+ The package is tested using the Math Function Unit Testing design pattern (`See also` below). In this approach, a 'pure' wrapper function is constructed that takes input parameters and returns a value, and is tested within a loop over scenario records read from a JSON file.
The wrapper function represents a generalised transactional use of the package in which multiple timer sets may be constructed, and then timings carried out and reported on at the end of the transaction.
- This kind of package would usually be thought hard to unit-test, with CPU and elapsed times being inherently non-deterministic. However, this is a good example of the power of the design pattern that I recently introduced: One of the inputs is a yes/no flag indicating whether to mock the system timing calls, or not. The function calls used to return epochal CPU and elapsed times are actually parameters that take the (Windows) system calls as defaults, while in the mocked case deterministic versions are supplied by the test driver, that read the values to return from the input scenario data. In this way we can test correctness of the timing aggregations, independence of timer sets etc. using the deterministics functions; on the other hand, one of the key benefits of automated unit testing is to test the actual dependencies, and we do this in the non-mocked case by having the wrapper function return values that are tested against ranges of values.
+ This kind of package would usually be thought hard to unit-test, with CPU and elapsed times being inherently non-deterministic. However, it is a good example of the power of the design pattern I recently introduced: one of the inputs is a yes/no flag indicating whether or not to mock the system timing calls. The function calls used to return epochal CPU and elapsed times are parameters that default to the (Windows) system functions; in the mocked case, deterministic versions are supplied by the test driver, which read the values to return from the input scenario data. In this way we can test the correctness of the timing aggregations, the independence of timer sets, etc. using the deterministic functions; on the other hand, one of the key benefits of automated unit testing is to test the actual dependencies, and we do this in the non-mocked case by passing 'sleep' times into the wrapper function and testing the outputs against ranges of values.
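The injection mechanism described above can be illustrated with a small sketch; the names and signatures here are illustrative only, not the package's actual API. The timing functions are parameters with real system defaults, so a test driver can substitute deterministic versions whose return values come from the scenario data.

```js
// Illustrative sketch of the pattern (not the timer-set API itself).
function timeCall(fn, elaFn = () => Date.now(), cpuFn = () => process.cpuUsage()) {
  const ela0 = elaFn(), cpu0 = cpuFn();
  fn();
  const ela1 = elaFn(), cpu1 = cpuFn();
  return {
    ela: ela1 - ela0,                        // elapsed ms
    usr: (cpu1.user - cpu0.user) / 1000,     // user CPU ms (cpuUsage reports microseconds)
    sys: (cpu1.system - cpu0.system) / 1000  // system CPU ms
  };
}

// Mocked case: deterministic clocks whose return values come from scenario data,
// so the aggregation logic can be checked for exact equality.
const elaValues = [1000, 1250];
const cpuValues = [{ user: 0, system: 0 }, { user: 200000, system: 50000 }];
let e = 0, c = 0;
const mocked = timeCall(() => {}, () => elaValues[e++], () => cpuValues[c++]);
// mocked is { ela: 250, usr: 200, sys: 50 }, regardless of real timing.

// Non-mocked case: the real defaults are exercised and the outputs are
// tested against ranges rather than exact values.
const real = timeCall(() => { for (let i = 0; i < 1e6; i++); });
console.log(mocked, real);
```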
## See also
- - [trapit (unit testing package)](https://github.com/BrenPatF/trapit)
+ - [trapit (unit testing package)](https://github.com/BrenPatF/trapit_nodejs_tester)
## License
ISC