@arction/xydata
A data generator library.

The generator is used to generate data for the LightningChart® JS charting library. https://www.arction.com/

```sh
npm install --save @arction/xydata
```

Online documentation is available at arction.github.io/xydata
```js
import { createProgressiveRandomGenerator } from '@arction/xydata'

// create a new instance of the progressive random generator
createProgressiveRandomGenerator()
    // define that 1000 points should be generated
    .setNumberOfPoints(1000)
    // generate those 1000 points
    .generate()
    // set the stream to progress every 250 milliseconds
    .setStreamInterval(250)
    // set the stream to output 10 points at a time
    .setStreamBatchSize(10)
    // make the stream infinite
    .setStreamRepeat(true)
    // create a new stream with the previously defined stream settings
    .toStream()
    // every time the stream outputs data, run this function on each of the data points
    .forEach(data => {
        console.log(data)
    })
```
This creates a basic progressive random generator and uses the Stream API to output the data to the console.

Note: You should never create a new instance of any generator using the `new` keyword. Generators should only be created with the `create...` functions.
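For example (a minimal sketch; the class name `ProgressiveRandomGenerator` is hypothetical and shown only to illustrate the anti-pattern, since the concrete generator classes are an internal detail):

```js
import { createProgressiveRandomGenerator } from '@arction/xydata'

// Correct: use the factory function exported by the library.
const generator = createProgressiveRandomGenerator()

// Incorrect: do not construct generator classes directly with `new`.
// const generator = new ProgressiveRandomGenerator() // hypothetical class name, don't do this
```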
When calling `.generate()` on any data generator, a new instance of a `DataHost` is returned. The `.generate()` function can be called multiple times to get a new set of data with the same settings as before, but with different values each time.
```js
import { createTraceGenerator } from '@arction/xydata'

const generator = createTraceGenerator()
const dataSet1 = generator.generate()
const dataSet2 = generator.generate()
```
This gives you two different data sets that have been generated with the same settings but which will have different values.
When a data generator is created, it has default settings that depend on the generator type. To change any of these settings, call the corresponding `.set...` function, which creates a new data generator with that setting changed. You can't change multiple settings with a single call, or change the settings of a previously created generator. A change in settings always results in a new generator.
```js
import { createTraceGenerator } from '@arction/xydata'

const generator = createTraceGenerator()
    .setNumberOfPoints( 10 )
const derivedGenerator = generator.setNumberOfPoints( 20 )
const dataSet1 = derivedGenerator.generate()
const dataSet2 = generator.generate()
```
This creates two data sets with different values and settings: `dataSet1` has 20 data points and `dataSet2` has 10.
The data sets can output the data as a stream. These streams can be used to alter the data in multiple steps.
```js
import { createTraceGenerator } from '@arction/xydata'

createTraceGenerator()
    .setNumberOfPoints( 10 )
    .generate()
    .toStream()
    .map( value => ( { x: value.x, y: value.y * 2 } ) )
    .forEach( value => console.log( value ) )
```
This code creates a data generator and then streams the generated data through two functions, `map` and `forEach`. The `map` function alters the data by multiplying each y value by 2 and then streams it to the `forEach` function, which logs each individual point to the console.
The settings for the stream are set on the `DataHost` that is returned from the `.generate()` function. The stream settings can't be changed after the stream has been created.
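A minimal sketch combining the calls shown above, to make the ordering explicit: stream settings are configured on the `DataHost` after `.generate()` and before `.toStream()`:

```js
import { createTraceGenerator } from '@arction/xydata'

createTraceGenerator()
    .setNumberOfPoints( 100 )
    // .generate() returns the DataHost
    .generate()
    // stream settings are configured on the DataHost...
    .setStreamInterval( 500 )
    .setStreamBatchSize( 5 )
    // ...and are fixed once the stream has been created
    .toStream()
    .forEach( point => console.log( point ) )
```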
| Generator | Description |
|---|---|
| Delta Function | Generate mostly flat data with random spikes. |
| OHLC | Generate Open, High, Low, Close data. |
| Parametric Function | Sample user-defined X and Y functions for each t step. |
| Progressive Function | Sample a user-defined function with a given X step. |
| Progressive Random | Generate random progressive data that has a progressive X step. |
| Progressive Trace | Generate random trace data from the previous point, with a progressive X step. |
| Sample Data | Sample a given array with specified frequency and user-defined step. |
| Trace | Generate random trace data that can go in any direction on the XY coordinates. |
| White Noise | Generate white noise. |
| Spectrum Data | Generate spectrum data. |
| Water Drop Data | Generate water drop data. |
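As a sketch of how one of these is used, assuming the White Noise generator follows the same `create...` / `setNumberOfPoints` pattern as the examples above (the factory name `createWhiteNoiseGenerator` is inferred from that naming pattern):

```js
import { createWhiteNoiseGenerator } from '@arction/xydata'

// assumed factory name, following the create... pattern used throughout this README
createWhiteNoiseGenerator()
    .setNumberOfPoints( 100 )
    .generate()
    .toStream()
    .forEach( point => console.log( point ) )
```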
The project is developed using TypeScript. The build system relies heavily on Node.js. Dependencies are managed with npm, so remember to run `npm install` before anything else.

The project uses Rollup to create the distributable library files.

There are several npm scripts which are used in the development process:
| Name | Command | Description |
|---|---|---|
| test | npm test | run tests and watch |
| lint | npm run lint | run static analyzer and watch |
| ci:test | npm run ci:test | run tests once |
| ci:lint | npm run ci:lint | run static analyzer once |
| ci:watch | npm run ci:watch | run CI cycle and watch |
| build | npm run build | build the library |
| build:watch | npm run build:watch | build the library and watch |
| docs | npm run docs | build documentation |