
@webex/web-media-effects
Web Media Effects (WFX) is a suite of media effects developed for web SDKs and WebRTC media applications.
There are three effects included in this library:
The effects are built on top of a plugin interface that makes building and extending effects more straightforward.
Each effect has four primary methods to control the plugin, plus a preload helper:

- `load(input)` — accepts a track or stream and returns a new track or stream with the effect applied
- `enable()` — enables the plugin after it's loaded
- `disable()` — disables the plugin after it's loaded
- `dispose()` — tears down the effect
- `preloadAssets()` — fetches all assets (e.g., WASM files, ONNX models) to optimize the load sequence

Upon enabling or disabling the effect, an event is fired.
```ts
effect.on('track-updated', (track: MediaStreamTrack) => {
  // Do something with the new track.
});
```
Additionally, there are a few convenience methods:

- `getOutputStream()` — returns the new outgoing (i.e., "effected") stream
- `getOutputTrack()` — returns the active output track
- `setEnabled(boolean)` — sets the effect state by passing in a boolean (convenient for state managers)

To optimize startup time for applying media effects, there is a preloading mechanism. It fetches critical assets, such as ONNX models for image segmentation, WASM modules for audio processing, and web workers for background processing, in advance of media availability. This ensures smoother integration of effects once the media stream is ready and improves the overall user experience. Assets can be preloaded using either a provided factory function or directly via the `preloadAssets()` API.
The library includes factory functions for scenarios that require asynchronous operations. Utilizing the async/await pattern, these functions provide a simple method for creating effects with their assets already preloaded. The factory function's second parameter is a boolean that indicates whether the assets should be preloaded.
```ts
const noiseReductionEffect = await createNoiseReductionEffect(
  {
    authToken: 'your-auth-token',
    // ...other options
  },
  true
);

const virtualBackgroundEffect = await createVirtualBackgroundEffect(
  {
    authToken: 'your-auth-token',
    mode: 'BLUR',
    // ...other options
  },
  true
);
```
By incorporating asset preloading, the preload API aims to minimize delays and performance hitches when activating effects, keeping the UI fluid and responsive.

For more fine-grained control over the preloading process, you can also call the `preloadAssets()` method directly on each effect instance. This approach lets you manage when and how assets are preloaded, providing flexibility to fit various application architectures and workflows:
```ts
const virtualBackgroundEffect = new VirtualBackgroundEffect(options);
await virtualBackgroundEffect.preloadAssets();

const noiseReductionEffect = new NoiseReductionEffect(options);
await noiseReductionEffect.preloadAssets();
```
This direct method is useful when you want to preload assets independently of effect instantiation, or in response to specific application states or events, letting you strategically preload assets at the most appropriate moment.
The virtual background effect is a wrapper around ladon-ts that provides a virtual background for video calling. The virtual background may be an image, an mp4 video, or the user's own background with blur applied. The blur option offers varied levels of strength and quality, where higher levels require more compute resources.
The virtual background effect takes an optional `VirtualBackgroundEffectOptions` config object in its constructor. The effect's options can be changed at runtime via an `updateOptions()` method. When disabled, the effect simply passes through the original video frames so that the outgoing stream does not need to be changed.
The effect uses a background worker thread by default to prevent slowdowns on the main UI thread. The main UI thread can be used instead by adding the property `generator: 'local'` to the `VirtualBackgroundEffectOptions` object; however, this is not recommended, as the worker thread performs much better.
NOTE: For backwards compatibility, the default `mode` is set to `BLUR`.
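Runtime reconfiguration via `updateOptions()` might look like the following sketch. The `buildModeOptions` helper and its validation are illustrative, not part of the library; they simply encode the per-mode required option (`blurStrength`, `bgImageUrl`, or `bgVideoUrl`) documented for this effect.

```ts
// Illustrative helper (not part of the library): builds a partial
// options object for updateOptions() and checks the option that each
// mode requires (blurStrength, bgImageUrl, or bgVideoUrl).
type Mode = 'BLUR' | 'IMAGE' | 'VIDEO' | 'PASSTHROUGH';

function buildModeOptions(mode: Mode, extra: Record<string, unknown> = {}) {
  const required: Partial<Record<Mode, string>> = {
    BLUR: 'blurStrength',
    IMAGE: 'bgImageUrl',
    VIDEO: 'bgVideoUrl',
  };
  const key = required[mode];
  if (key && !(key in extra)) {
    throw new Error(`${mode} mode requires the ${key} option`);
  }
  return { mode, ...extra };
}

// Hypothetical usage against an existing VirtualBackgroundEffect instance:
// await effect.updateOptions(buildModeOptions('IMAGE', { bgImageUrl: 'https://example.com/bg.png' }));
// await effect.updateOptions(buildModeOptions('BLUR', { blurStrength: 'STRONG' }));
```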
There are a few different options that can be supplied to the constructor or the `updateOptions()` method that affect the effect's behavior:
| Name | Description | Values | Required |
|---|---|---|---|
| `authToken` | Used to authenticate the request for the backend models | An encoded string token | Yes |
| `generator` | Determines where the model runs (on the main thread or a background thread) | `local`, `worker` | Defaults to `worker` |
| `frameRate` | Determines how many frames per second are sent to the model | 0–60 | Defaults to 30 |
| `quality` | Determines the accuracy of the model (higher requires more CPU) | `LOW`, `MEDIUM`, `HIGH`, `ULTRA` | Defaults to `LOW` |
| `mirror` | Whether the output image should be flipped horizontally | `true`, `false` | Defaults to `false` |
| `mode` | Determines what kind of background to render behind the user | `BLUR`, `IMAGE`, `VIDEO`, `PASSTHROUGH` | Defaults to `BLUR` |
| `blurStrength` | How strongly the background should be blurred | `WEAK`, `MODERATE`, `STRONG`, `STRONGER`, `STRONGEST` | Required in `BLUR` mode |
| `bgImageUrl` | Path to the background image that replaces the original background | Fully qualified URL | Required in `IMAGE` mode |
| `bgVideoUrl` | Path to the background video that replaces the original background | Fully qualified URL (mp4 only) | Required in `VIDEO` mode |
| `env` | Which environment the effect is running in | `EffectEnv.Production`, `EffectEnv.Integration` | Defaults to `EffectEnv.Production` |
| `avoidSimd` | Avoid using the SIMD processor even if SIMD is supported (for testing) | `true`, `false` | Defaults to `false` |
| `preventBackgroundThrottling` | If `true`, prevents the browser from throttling the effect frame rate when the page is hidden | `true`, `false` | Defaults to `false` |
The virtual background plugin applies a background effect to the original media stream by performing image segmentation on the incoming video frames. The plugin can apply four different kinds of effects, called modes: background blur, background image replacement, background video replacement, and passthrough.

The `mode` configuration option determines which background effect to apply. There are four accepted values: `BLUR`, `IMAGE`, `VIDEO`, and `PASSTHROUGH`. Each mode has at least one required option that must be set in the options object, as outlined in the Options section.
- `BLUR` mode requires the `blurStrength` option.
- `IMAGE` mode requires the `bgImageUrl` option.
- `VIDEO` mode requires the `bgVideoUrl` option.

NOTE: For TypeScript users, the mode can be selected using the exported `VirtualBackgroundMode` enum, for convenience.
Supply a video stream to the effect and, when loaded, it will return a new stream with the effect applied.

```ts
// Create a new video stream by getting the user's video media.
const originalVideoStream = await navigator.mediaDevices.getUserMedia({ video: { width, height } });

// Create the effect.
const effect = new VirtualBackgroundEffect({
  authToken: 'YOUR_AUTH_TOKEN',
  mode: 'BLUR',
  blurStrength: 'STRONG',
  quality: 'LOW',
});

// Load the effect with the input stream.
const newStream = await effect.load(originalVideoStream);

// Attach the new stream to a video element to see the effect in action.
myVideoElement.srcObject = newStream;
```
The virtual background effect supports the use of plugins to extend and customize its functionality. Plugins can be registered, initialized, and disposed of through the plugin manager. The two primary base classes to extend when creating plugins are `BaseBeforeInferencePlugin` and `BaseAfterInferencePlugin`.
Model inference refers to the process where the virtual background effect performs calculations (like segmentation or motion analysis) on each video frame to apply the selected effect (e.g., blur, image replacement, or video replacement). Plugins can hook into this process at two key points:
- **Before Inference**: Plugins that need to analyze or modify the video frame before the virtual background effect processes it can use the Before Inference stage. This is useful for plugins that might want to control whether the model should perform inference at all, based on conditions like motion in the frame. For more details, see Adaptive Frame Skipper.
- **After Inference**: Plugins that need to work with the results of the model inference can use the After Inference stage. These plugins get access to the results of frame processing (e.g., the segmented image or detected motion) and can use that information to make decisions or apply further effects. For more details, see Be Right Back and Rate Estimator.
Plugins should extend one of the following base classes depending on whether they operate before or after inference:
```ts
abstract class BaseBeforeInferencePlugin<T, O> {
  initialize(effect: VirtualBackgroundEffect): void;
  dispose(): void;
  onBeforeInference(timestamp: number, lastResult: InferenceResult): Promise<boolean>;
  updateOptions(newOptions: Partial<O>): void;
}

abstract class BaseAfterInferencePlugin<T, O> {
  initialize(effect: VirtualBackgroundEffect): void;
  dispose(): void;
  onAfterInference(timestamp: number, result?: InferenceResult): Promise<void>;
  updateOptions(newOptions: Partial<O>): void;
}
```
These base classes automatically handle registering and unregistering the appropriate callbacks (`addBeforeInferenceCallback`, `addAfterInferenceCallback`, `removeBeforeInferenceCallback`, `removeAfterInferenceCallback`) when the plugin is initialized or disposed. Plugin developers only need to implement the `onBeforeInference` or `onAfterInference` method based on their plugin's needs.
The VirtualBackgroundEffect also supports the following plugin methods:

Note: `addBeforeInferenceCallback`, `addAfterInferenceCallback`, `removeBeforeInferenceCallback`, and `removeAfterInferenceCallback` are handled automatically by the base plugin classes (`BaseBeforeInferencePlugin` and `BaseAfterInferencePlugin`), so plugin developers typically don't need to call them directly.
Plugins should be registered after creating the effect instance using the `registerPlugin` method. Here's an example of how to create and register plugins like the `BeRightBackPlugin`, `FrameSkipperPlugin`, and `RateEstimatorPlugin`. The example also demonstrates how to override default plugin options using `coreOptions`. For more details on these advanced features, see the plugin sections below.
```ts
// Create the BeRightBack plugin.
const beRightBackPlugin = new BeRightBackPlugin({
  mode: 'conservative',
  debug: true,
  coreOptions: {
    motionIouThreshold: 0.9,
    onHysteresisMaxMs: 3000,
    offHysteresisMaxMs: 2000,
  },
});

// Create the FrameSkipper plugin.
const frameSkipperPlugin = new FrameSkipperPlugin({
  mode: 'aggressive',
  debug: true,
  coreOptions: {
    baseMinSkipTime: 50,
    baseMaxSkipTime: 1000,
    historySize: 100,
    skipTimeIncrement: 50,
    forcedInferenceInterval: 2000,
    highMotionThreshold: 0.9,
    smoothingFactor: 0.5,
  },
});

// Create the RateEstimator plugin.
const rateEstimatorPlugin = new RateEstimatorPlugin({
  targetRate: 30,
  debug: true,
});

// Register the plugins with the effect instance.
effect.registerPlugin('beRightBack', beRightBackPlugin);
effect.registerPlugin('frameSkipper', frameSkipperPlugin);
effect.registerPlugin('rateEstimator', rateEstimatorPlugin);

// Initialize the plugins.
effect.initializePlugins();

// Enable the effect.
await effect.enable(); // or `await effect.setEnabled(true);`
```
To retrieve a registered plugin, use the `getPlugin` method and provide the plugin's name.

```ts
const brbPlugin = effect.getPlugin<BeRightBackPlugin>('beRightBack');
```
By using `coreOptions`, you can customize plugin behavior according to your application's requirements.
The Rate Estimator plugin monitors the processing rate (such as frame rates) of media effects and emits events when the rate changes. This is helpful for dynamically adjusting the system's behavior based on performance conditions.
| Option | Description | Default Value |
|---|---|---|
| `targetRate` | The desired target rate (frames per second) the estimator should maintain. | Required |
| `debug` | Whether to show debug information. | `false` |
| `coreOptions` | Optional overrides for specific rate estimator behavior, such as hysteresis margin, low rate threshold, and more (see core options). | – |
| Option | Description | Default Value |
|---|---|---|
| `hysteresisMargin` | Margin of tolerance around the low threshold to prevent rapid toggling between states, expressed as a percentage of the `lowThreshold`. | 0.05 (5%) |
| `lowDuration` | Duration in seconds that the rate must stay below the `lowThreshold` before the rate is considered sustainedly low. | 5 seconds |
| `lowThreshold` | Threshold below which the rate is considered low, expressed as a percentage of the target rate. | 80% of target rate |
| `minSamples` | Minimum number of samples to accumulate before making a rate estimation. | 30 |
| `maxSamples` | Maximum number of samples to consider for rate estimation, to prevent using stale data. | 120 |
The Rate Estimator emits events to indicate changes in the processing rate. You can use string values or, if using TypeScript, the enums provided by `RateEstimatorEvent`.
| Event | Description |
|---|---|
| `rate-ok` or `RateEstimatorEvent.RateOk` | Fired when the estimated rate returns to normal, above the `lowThreshold`. |
| `rate-low` or `RateEstimatorEvent.RateLow` | Fired when the estimated rate falls below the `lowThreshold`. |
| `rate-lagging` or `RateEstimatorEvent.RateLagging` | Fired when the low rate is sustained beyond the duration specified by `lowDuration`. |
```ts
import { RateEstimatorPlugin, RateEstimatorEvent } from '@webex/web-media-effects';

const rateEstimatorPlugin = new RateEstimatorPlugin({
  targetRate: 30, // Target fps
  debug: true,
  coreOptions: {
    hysteresisMargin: 0.05,
    lowThreshold: 24, // Consider the rate low if it falls below 24 fps
    lowDuration: 5,
    minSamples: 30,
    maxSamples: 120,
  },
});

rateEstimatorPlugin.on(RateEstimatorEvent.RateLow, (rate) => {
  console.log(`Rate is low: ${rate}`);
});

rateEstimatorPlugin.on(RateEstimatorEvent.RateOk, (rate) => {
  console.log(`Rate is ok: ${rate}`);
});

rateEstimatorPlugin.on(RateEstimatorEvent.RateLagging, (rate) => {
  console.log(`Rate is lagging: ${rate}`);
});
```
The Frame Skipper plugin is designed to optimize the performance of media effects by selectively skipping frames based on motion detection and other criteria. This helps reduce the computational load while maintaining acceptable quality.
| Option | Description | Default Value |
|---|---|---|
| `mode` | The frame skipping mode to use. | `conservative` |
| `debug` | Whether to enable debug logging for the frame skipper. | `false` |
| `coreOptions` | Optional overrides for specific frame skipping behavior, such as base skip time, motion thresholds, etc. | – |
| Option | Description | Default Value |
|---|---|---|
| `baseMinSkipTime` | The minimum time to wait before performing inference, in milliseconds. | 50 ms |
| `baseMaxSkipTime` | The maximum time to wait before performing inference, in milliseconds. | 1000 ms |
| `historySize` | The number of recent motion data points to consider when calculating stats. | 100 |
| `skipTimeIncrement` | The amount by which the skip time is adjusted based on motion variance. | 50 ms |
| `forcedInferenceInterval` | The maximum time before inference is forced, in milliseconds. | 2000 ms |
| `highMotionThreshold` | The motion threshold for determining high motion. | 0.9 |
| `smoothingFactor` | The smoothing factor for motion value calculation. | 0.5 |
```ts
import { FrameSkipperPlugin } from '@webex/web-media-effects';

const frameSkipperPlugin = new FrameSkipperPlugin({
  mode: 'aggressive', // the default is 'conservative'
  debug: true,
  coreOptions: {
    baseMinSkipTime: 50,
    baseMaxSkipTime: 1000,
    historySize: 100,
    skipTimeIncrement: 50,
    forcedInferenceInterval: 2000,
    highMotionThreshold: 0.9,
    smoothingFactor: 0.5,
  },
});
```
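One way to picture how these options interact is as a skip time clamped between `baseMinSkipTime` and `baseMaxSkipTime`, nudged by `skipTimeIncrement` depending on motion, with inference always forced once `forcedInferenceInterval` elapses. The functions below are a simplified, illustrative model of the behavior the options describe, not the plugin's actual algorithm.

```ts
// Simplified illustration of how the frame skipper options interact;
// not the plugin's actual algorithm. High motion shortens the skip
// time, low motion lengthens it, and inference is always forced once
// forcedInferenceInterval elapses.
interface SkipperConfig {
  baseMinSkipTime: number;         // ms
  baseMaxSkipTime: number;         // ms
  skipTimeIncrement: number;       // ms
  forcedInferenceInterval: number; // ms
  highMotionThreshold: number;     // 0..1
}

function nextSkipTime(current: number, motion: number, cfg: SkipperConfig): number {
  const adjusted =
    motion >= cfg.highMotionThreshold
      ? current - cfg.skipTimeIncrement  // High motion: infer more often.
      : current + cfg.skipTimeIncrement; // Low motion: skip for longer.
  return Math.min(cfg.baseMaxSkipTime, Math.max(cfg.baseMinSkipTime, adjusted));
}

function mustForceInference(sinceLastInferenceMs: number, cfg: SkipperConfig): boolean {
  return sinceLastInferenceMs >= cfg.forcedInferenceInterval;
}
```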
The Be Right Back (BRB) plugin is designed to detect when the user is away from the camera and apply appropriate actions, such as displaying a placeholder image or message.
| Option | Description | Default Value |
|---|---|---|
| `mode` | The mode for controlling the be right back behavior. | `conservative` |
| `debug` | Whether to enable debug logging for the be right back plugin. | `false` |
| `coreOptions` | Optional overrides for specific BRB behavior, such as motion thresholds and hysteresis times. | – |
| Option | Description | Default Value |
|---|---|---|
| `motionIouThreshold` | The amount of motion required to trigger the be right back state. | 0.9 |
| `onHysteresisMaxMs` | The amount of time required to switch the be right back state on. | 3000 ms |
| `offHysteresisMaxMs` | The amount of time required to switch the be right back state off. | 2000 ms |
The Be Right Back plugin emits three events, `on`, `off`, and `state-change`, to indicate changes in the user's presence in front of the camera. You can use string values or, if using TypeScript, the enums provided by `BeRightBackEvent`.
| Event | Description |
|---|---|
| `on` or `BeRightBackEvent.On` | Fired when the user is detected to have left the camera. |
| `off` or `BeRightBackEvent.Off` | Fired when the user returns to the camera. |
| `state-change` or `BeRightBackEvent.StateChange` | Fired whenever the user's state changes, whether they leave or return to the camera. |
```ts
import { BeRightBackPlugin, BeRightBackEvent } from '@webex/web-media-effects';

const beRightBackPlugin = new BeRightBackPlugin({
  mode: 'conservative',
  debug: true,
  coreOptions: {
    motionIouThreshold: 0.9,
    onHysteresisMaxMs: 3000,
    offHysteresisMaxMs: 2000,
  },
});

beRightBackPlugin.on(BeRightBackEvent.On, () => {
  console.log('User has left the camera');
});

beRightBackPlugin.on(BeRightBackEvent.Off, () => {
  console.log('User has returned to the camera');
});

beRightBackPlugin.on(BeRightBackEvent.StateChange, (newState, oldState) => {
  console.log('User has changed state: ', newState, oldState);
});
```
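The on/off hysteresis can be pictured as requiring the away (or present) condition to hold continuously for the configured duration before the state flips. The sketch below models only that timing, treating away detection itself as an opaque boolean input; it is illustrative, not the plugin's implementation.

```ts
// Illustrative timing model for the BRB hysteresis; away detection is
// treated as an opaque boolean input. The state only flips after the
// condition has held for onHysteresisMaxMs / offHysteresisMaxMs.
class BrbHysteresis {
  private away = false;
  private candidateSince: number | null = null;
  private onHysteresisMaxMs: number;
  private offHysteresisMaxMs: number;

  constructor(onHysteresisMaxMs: number, offHysteresisMaxMs: number) {
    this.onHysteresisMaxMs = onHysteresisMaxMs;
    this.offHysteresisMaxMs = offHysteresisMaxMs;
  }

  // Feed one observation; returns the (possibly updated) away state.
  update(timestampMs: number, looksAway: boolean): boolean {
    if (looksAway === this.away) {
      this.candidateSince = null; // Condition matches state; reset timer.
      return this.away;
    }
    if (this.candidateSince === null) this.candidateSince = timestampMs;
    const needed = looksAway ? this.onHysteresisMaxMs : this.offHysteresisMaxMs;
    if (timestampMs - this.candidateSince >= needed) {
      this.away = looksAway;
      this.candidateSince = null;
    }
    return this.away;
  }
}
```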
The noise reduction effect removes background noise from an audio stream to provide clear audio for calling.
The noise reduction effect takes a `NoiseReductionEffectOptions` config object in its constructor. A developer can optionally pass a `workletProcessorUrl` parameter (or `legacyProcessorUrl`) in the config to use a different or test version of the audio processor. An `audioContext` parameter can also be passed in the config to supply an existing `AudioContext`; otherwise, a new one will be created.
The effect loads an `AudioWorkletProcessor` into an `AudioWorklet`, which runs off the main thread, to keep the audio computations from impacting UI performance.
There are a few different options that can be supplied to the constructor or the `updateOptions()` method that affect the effect's behavior:
| Name | Description | Values | Required |
|---|---|---|---|
| `authToken` | Used to authenticate the request for the backend processors | An encoded string token | Yes |
| `audioContext` | An optional `AudioContext` for custom behavior | `AudioContext` | No |
| `mode` | Determines whether to run in `WORKLET` mode or `LEGACY` mode for older browsers | `WORKLET`, `LEGACY` | Defaults to `WORKLET` |
| `legacyProcessorUrl` | A URL from which to fetch the legacy processor that attaches to the deprecated `ScriptProcessorNode` | A fully qualified URL | No |
| `workletProcessorUrl` | A URL from which to fetch the `AudioWorkletProcessor` to attach to the `AudioWorkletNode` | A fully qualified URL | No |
| `env` | Which environment the effect is running in | `EffectEnv.Production`, `EffectEnv.Integration` | No |
The noise reduction effect supports a limited set of audio sample rates. If an unsupported sample rate is detected, the noise reduction effect will throw the following error: `Error: noise reduction: worklet processor error, "Error: Sample rate of X is not supported."`
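Rather than waiting for the worklet to throw, a pre-flight check against the sample rates your deployment supports can fail fast before loading the effect. The helper below is illustrative, not a library API, and the rate list passed to it is a placeholder; use the rates documented for your processor build.

```ts
// Illustrative pre-flight check (not a library API): verify the
// AudioContext sample rate against a list of supported rates before
// loading the effect. The list passed in is a placeholder; use the
// rates documented for your processor build.
function assertSupportedSampleRate(sampleRate: number, supportedRates: number[]): void {
  if (!supportedRates.includes(sampleRate)) {
    throw new Error(`Sample rate of ${sampleRate} is not supported`);
  }
}

// Hypothetical usage in the browser:
// const ctx = new AudioContext();
// assertSupportedSampleRate(ctx.sampleRate, SUPPORTED_RATES);
```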
Supply an audio track or stream to the effect; the effect will handle updating the stream on enable/disable. If a track is passed, listen to the `'track-updated'` event to receive the updated track on enable/disable.
```ts
// Create a new audio stream by getting the user's audio media.
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

// Create the effect.
const effect = new NoiseReductionEffect({
  authToken: 'YOUR_AUTH_TOKEN',
  workletProcessorUrl: 'https://my-worklet-processor-url', // For 'WORKLET' mode
  legacyProcessorUrl: 'https://my-legacy-processor-url', // For 'LEGACY' mode
  mode: 'WORKLET', // or 'LEGACY'
});

// Load the effect with the input stream.
await effect.load(stream);
```
The example app included in this repo is designed to help test functionality and troubleshoot issues. You can run the example app by following the instructions in the README in the example folder. You can also view a live example at https://effects.webex.com.
- Run `yarn` to install dependencies.
- Run `yarn prepare` to prepare dependencies.
- Run `yarn watch` to build and watch for updates.
- Run `yarn test` to build, run tests, lint, and run test coverage.

Install the recommended extensions when first opening the workspace (there should be a prompt). These extensions will help maintain high code quality and consistency across the project.

NOTE: VS Code is set up to apply formatting and linting rules on save (Prettier runs first, then ESLint). The rules applied are defined in settings.json.