video-quality-tools - npm Package Compare versions

Comparing version 1.1.0 to 2.0.0

examples/realtimeStats.js
CHANGELOG.md

@@ -1,14 +0,32 @@

# Changelog

### 2.0.0

IMPROVEMENTS:

- Function `processFrames` from the module with the same name actually calculates encoder statistics. To
  improve naming it was renamed to `processFrames.encoderStats`
  [[GH-10](https://github.com/LCMApps/video-quality-tools/issues/10)]
- `processFrames.accumulatePktSize` was renamed to `processFrames.calculatePktSize`
  [[GH-17](https://github.com/LCMApps/video-quality-tools/issues/17)]
- New function `processFrames.networkStats` for analyzing network link quality and losses in realtime. Check the
  README for more details.
  [[GH-17](https://github.com/LCMApps/video-quality-tools/issues/17)]
- Example for `processFrames.networkStats` at [examples/networkStats.js](examples/networkStats.js)
  [[GH-17](https://github.com/LCMApps/video-quality-tools/issues/17)]
- Dependencies were bumped

BUG FIXES:

- Fix of functional tests (aspectRatio -> displayAspectRatio)
  [[GH-12](https://github.com/LCMApps/video-quality-tools/pull/12)]
- ffprobe ran without `-fflags nobuffer`, so `FramesMonitor` received incorrect info at the time of the first
  analysis. Check [GH-18](https://github.com/LCMApps/video-quality-tools/pull/18) for more details.

### 1.1.0

- Added new fields `gopDuration`, `displayAspectRatio`, `width`, `height`, `hasAudioStream` to the result of
  _processFrames_ execution
- Added new methods to _processFrames_: `calculateGopDuration`, `calculateDisplayAspectRatio`, `hasAudioFrames`
- `FramesMonitor` fetches video and audio frames from the stream now.
- Added `width` and `height` info to video frames.
 {
     "name": "video-quality-tools",
-    "version": "1.1.0",
+    "version": "2.0.0",
     "description": "Set of tools to evaluate video stream quality.",

@@ -31,8 +31,8 @@ "main": "index.js",

     "data-driven": "^1.3.0",
-    "eslint": "^4.4.1",
-    "get-port": "^3.2.0",
+    "eslint": "^5.6.1",
+    "get-port": "^4.0.0",
     "istanbul": "v1.1.0-alpha.1",
-    "mocha": "^3.5.0",
-    "proxyquire": "^1.8.0",
-    "sinon": "^2.4.1"
+    "mocha": "^5.2.0",
+    "proxyquire": "^2.1.0",
+    "sinon": "^6.3.5"
 },

@@ -39,0 +39,0 @@ "dependencies": {

@@ -329,10 +329,68 @@ # Video Quality Tools module - helps to measure live stream characteristics by RTMP/HLS/DASH streams

`video-quality-tools` ships with functions that help determining live stream info based on the set of frames
collected from `FramesMonitor`:

- `processFrames.networkStats`
- `processFrames.encoderStats`

## `processFrames.networkStats(frames, durationInMsec)`

Receives an array of `frames` collected for a given time interval `durationInMsec`.

This method doesn't analyze the GOP structure and isn't dependent on the fullness of GOPs between runs. It reports
only the frame rate and bitrate of the received audio and video streams. In contrast to `processFrames.encoderStats`,
this method allows controlling the quality of the network link between the sender and the receiver (like an RTMP
server).

> Remember that this module must be located not far from the receiver server (that is under analysis). If the link
> between the receiver and the module affects the delivery of RTMP packages, this module indicates incorrect values.
> It's better to run this module near the receiver.
```javascript
const {processFrames} = require('video-quality-tools');
const INTERVAL_TO_ANALYZE_FRAMES = 5000; // in milliseconds
let frames = [];
framesMonitor.on('frame', frame => {
frames.push(frame);
});
setInterval(() => {
try {
const info = processFrames.networkStats(frames, INTERVAL_TO_ANALYZE_FRAMES);
console.log(info);
frames = [];
} catch(err) {
// only if arguments are invalid
console.log(err);
process.exit(1);
}
}, INTERVAL_TO_ANALYZE_FRAMES);
```
Here is the output for the example above:
```
{
videoFrameRate: 29,
audioFrameRate: 50,
videoBitrate: 1403.5421875,
audioBitrate: 39.846875
}
```
Check [examples/networkStats.js](examples/networkStats.js) to see an example code.
## `processFrames.encoderStats(frames)`

It relies on the [GOP structure](https://en.wikipedia.org/wiki/Group_of_pictures) of the stream. The following
example shows how to gather frames and pass them to the function that analyzes encoder statistics.
```javascript
const {processFrames} = require('video-quality-tools');
const AMOUNT_OF_FRAMES_TO_GATHER = 300;

@@ -350,3 +408,3 @@

     try {
-        const info = processFrames(frames);
+        const info = processFrames.encoderStats(frames);
         frames = info.remainedFrames;

@@ -386,27 +444,28 @@

In the given example, the frames are collected in the `frames` array and then the `processFrames.encoderStats`
function is used for sets of 300 frames (`AMOUNT_OF_FRAMES_TO_GATHER`). The function searches for the
[key frames](https://en.wikipedia.org/wiki/Video_compression_picture_types#Intra-coded_(I)_frames/slices_(key_frames))
and measures the distance between them.

It's impossible to detect the GOP structure for a set of frames with only one key frame, so
`processFrames.encoderStats` returns all passed frames back as an array in the `remainedFrames` field.

If there are more than 2 key frames, `processFrames.encoderStats` uses full GOPs to track fps and bitrate and
returns back all frames in the last GOP that was not finished. It's important to remember the `remainedFrames`
output and push a new frame to the `remainedFrames` array when it arrives.
For the full GOPs, `processFrames.encoderStats` calculates min/max/mean values of bitrates (in kbit/s), framerates
and GOP duration (in seconds) and returns them in the `payload` field. The result of the check for the similarity
of GOP structures for the collected GOPs is returned in the `areAllGopsIdentical` field. The fields `width`,
`height` and `displayAspectRatio` are taken from the first frame of the first collected GOP. The value of
`hasAudioStream` reflects the presence of audio frames.

To calculate the display aspect ratio, the method `processFrames::calculateDisplayAspectRatio` uses a list of
[video aspect ratio standards](https://en.wikipedia.org/wiki/Aspect_ratio_(image))
with an approximation of the frame's width-to-height ratio. If the ratio can't be found in the list of known
standards, even in a delta neighbourhood, then the
[GCD algorithm](https://en.wikipedia.org/wiki/Greatest_common_divisor) is used to simplify the returned value.
`processFrames.encoderStats` may throw `Errors.GopNotFoundError`.
Also, you may extend the metrics. Check `src/processFrames.js` to find common functions.

@@ -272,2 +272,4 @@ 'use strict';

     errorLevel,
+    '-fflags',
+    'nobuffer',
     '-show_frames',

@@ -274,0 +276,0 @@ '-show_entries',

@@ -7,2 +7,4 @@ 'use strict';

+const MSECS_IN_SEC = 1000;
+const AR_CALCULATION_PRECISION = 0.01;

@@ -25,9 +27,9 @@

-function processFrames(frames) {
+function encoderStats(frames) {
     if (!Array.isArray(frames)) {
-        throw new TypeError('process method is supposed to accept an array of frames.');
+        throw new TypeError('Method accepts only an array of frames');
     }
-    const videoFrames = processFrames.filterVideoFrames(frames);
-    const {gops, remainedFrames} = processFrames.identifyGops(videoFrames);
+    const videoFrames = filterVideoFrames(frames);
+    const {gops, remainedFrames} = identifyGops(videoFrames);

@@ -39,3 +41,3 @@ if (_.isEmpty(gops)) {

     let areAllGopsIdentical = true;
-    const hasAudioStream = processFrames.hasAudioFrames(frames);
+    const hasAudioStream = hasAudioFrames(frames);
     const baseGopSize = gops[0].frames.length;

@@ -47,7 +49,7 @@ const bitrates = [];

     gops.forEach(gop => {
         areAllGopsIdentical = areAllGopsIdentical && baseGopSize === gop.frames.length;
-        const accumulatedPktSize = processFrames.accumulatePktSize(gop);
-        const gopDuration = processFrames.gopDurationInSec(gop);
-        const gopBitrate = processFrames.toKbs(accumulatedPktSize / gopDuration);
+        const calculatedPktSize = calculatePktSize(gop.frames);
+        const gopDuration = gopDurationInSec(gop);
+        const gopBitrate = toKbs(calculatedPktSize / gopDuration);
         bitrates.push(gopBitrate);

@@ -98,17 +100,24 @@

-processFrames.identifyGops = identifyGops;
-processFrames.calculateBitrate = calculateBitrate;
-processFrames.calculateFps = calculateFps;
-processFrames.calculateGopDuration = calculateGopDuration;
-processFrames.filterVideoFrames = filterVideoFrames;
-processFrames.hasAudioFrames = hasAudioFrames;
-processFrames.gopDurationInSec = gopDurationInSec;
-processFrames.toKbs = toKbs;
-processFrames.accumulatePktSize = accumulatePktSize;
-processFrames.areAllGopsIdentical = areAllGopsIdentical;
-processFrames.findGcd = findGcd;
-processFrames.calculateDisplayAspectRatio = calculateDisplayAspectRatio;
-
-module.exports = processFrames;
+function networkStats(frames, durationInMsec) {
+    if (!Array.isArray(frames)) {
+        throw new TypeError('Method accepts only an array of frames');
+    }
+    if (!_.isInteger(durationInMsec) || durationInMsec <= 0) {
+        throw new TypeError('Method accepts only a positive integer as duration');
+    }
+    const videoFrames = filterVideoFrames(frames);
+    const audioFrames = filterAudioFrames(frames);
+    const durationInSec = durationInMsec / MSECS_IN_SEC;
+    return {
+        videoFrameRate: videoFrames.length / durationInSec,
+        audioFrameRate: audioFrames.length / durationInSec,
+        videoBitrate: toKbs(calculatePktSize(videoFrames) / durationInSec),
+        audioBitrate: toKbs(calculatePktSize(audioFrames) / durationInSec),
+    };
+}
 function identifyGops(frames) {

@@ -168,6 +177,6 @@ const GOP_TEMPLATE = {

     gops.forEach(gop => {
-        const accumulatedPktSize = processFrames.accumulatePktSize(gop);
-        const gopDurationInSec = processFrames.gopDurationInSec(gop);
-        const gopBitrate = processFrames.toKbs(accumulatedPktSize / gopDurationInSec);
+        const calculatedPktSize = calculatePktSize(gop.frames);
+        const durationInSec = gopDurationInSec(gop);
+        const gopBitrate = toKbs(calculatedPktSize / durationInSec);

@@ -184,4 +193,4 @@ bitrates.push(gopBitrate);

-function accumulatePktSize(gop) {
-    const accumulatedPktSize = gop.frames.reduce((accumulator, frame) => {
+function calculatePktSize(frames) {
+    const accumulatedPktSize = frames.reduce((accumulator, frame) => {
         if (!_.isNumber(frame.pkt_size)) {

@@ -247,4 +256,4 @@ throw new Errors.FrameInvalidData(

     gops.forEach(gop => {
-        const gopDurationInSec = processFrames.gopDurationInSec(gop);
-        const gopFps = gop.frames.length / gopDurationInSec;
+        const durationInSec = gopDurationInSec(gop);
+        const gopFps = gop.frames.length / durationInSec;

@@ -265,5 +274,5 @@ fps.push(gopFps);

     gops.forEach(gop => {
-        const gopDurationInSec = processFrames.gopDurationInSec(gop);
-        gopsDurations.push(gopDurationInSec);
+        const durationInSec = gopDurationInSec(gop);
+        gopsDurations.push(durationInSec);
     });

@@ -322,2 +331,6 @@

+function filterAudioFrames(frames) {
+    return frames.filter(frame => frame.media_type === 'audio');
+}
 function hasAudioFrames(frames) {

@@ -342,1 +355,18 @@ return frames.some(frame => frame.media_type === 'audio');

 }
+module.exports = {
+    encoderStats,
+    networkStats,
+    identifyGops,
+    calculateBitrate,
+    calculateFps,
+    calculateGopDuration,
+    filterVideoFrames,
+    hasAudioFrames,
+    gopDurationInSec,
+    toKbs,
+    calculatePktSize,
+    areAllGopsIdentical,
+    findGcd,
+    calculateDisplayAspectRatio
+};

@@ -45,4 +45,4 @@ 'use strict';

afterEach(() => {
-    spyOnFrame.reset();
-    spyOnStderr.reset();
+    spyOnFrame.resetHistory();
+    spyOnStderr.resetHistory();
});

@@ -100,5 +100,5 @@

afterEach(() => {
-    spyOnPFrame.reset();
-    spyOnIFrame.reset();
-    spyOnAudioFrame.reset();
+    spyOnPFrame.resetHistory();
+    spyOnIFrame.resetHistory();
+    spyOnAudioFrame.resetHistory();
});

@@ -108,5 +108,2 @@

const expectedReturnCode = 0;
-const expectedIFramesCount = 60;
-const expectedPFramesCount = 240;
-const expectedAudioFramesCount = 431;

@@ -130,6 +127,6 @@ const onFrame = {I: spyOnIFrame, P: spyOnPFrame};

-assert.strictEqual(spyOnAudioFrame.callCount, expectedAudioFramesCount);
-assert.strictEqual(spyOnIFrame.callCount, expectedIFramesCount);
-assert.strictEqual(spyOnPFrame.callCount, expectedPFramesCount);
+assert.isTrue(spyOnAudioFrame.called);
+assert.isTrue(spyOnIFrame.called);
+assert.isTrue(spyOnPFrame.called);

@@ -136,0 +133,0 @@ done();

@@ -35,3 +35,3 @@ 'use strict';

afterEach(() => {
-    spyOnCompleteFrame.reset();
+    spyOnCompleteFrame.resetHistory();
stubRunShowFramesProcess.restore();

@@ -38,0 +38,0 @@ stubHandleProcessingError.restore();

@@ -14,2 +14,4 @@ 'use strict';

     errorLevel,
+    '-fflags',
+    'nobuffer',
     '-show_frames',

@@ -16,0 +18,0 @@ '-show_entries',

@@ -39,3 +39,3 @@ 'use strict';

try {
-    processFrames.accumulatePktSize(invalidInput);
+    processFrames.calculatePktSize(invalidInput.frames);
     assert.isFalse(true, 'should not be here');

@@ -62,3 +62,3 @@ } catch (error) {

-const res = processFrames.accumulatePktSize({frames});
+const res = processFrames.calculatePktSize(frames);

@@ -65,0 +65,0 @@ assert.strictEqual(res, expectedRes);

