web-media-effects

Web Media Effects (WFX) is a suite of media effects developed for web SDKs and WebRTC media applications.

Introduction

There are three effects included in this library:

  • Virtual background (e.g., blur, image replacement, video replacement)
  • Noise reduction (e.g., background noise removal)
  • Gain (for testing)

Common Methods

The effects are built on top of a plugin interface that makes building and extending effects more straightforward.

Each effect has five primary methods to control the plugin:

  • load(input) accepts a track or stream and returns a new track or stream with the effect applied
  • enable() enables the plugin after it's loaded
  • disable() disables the plugin after it's loaded
  • dispose() tears down the effect
  • preloadAssets() fetches all assets (e.g., WASM files, ONNX models) to optimize the load sequence

Upon enabling or disabling the effect, an event is fired.

effect.on('track-updated', (track: MediaStreamTrack) => {
  // do something with the new track.
});

Additionally, there are a few convenience methods:

  • getOutputStream() returns the new outgoing (i.e., "effected") stream
  • getOutputTrack() returns the active output track
  • setEnabled(boolean) sets the effect state by passing in a boolean (convenient for state managers)
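The common interface above can be sketched with a minimal stand-in class. The class name and its internals below are purely illustrative; only the method and event names mirror the documented interface:

```javascript
// Hypothetical sketch of the common effect interface: load/enable/disable/
// dispose plus the 'track-updated' event and setEnabled() convenience method.
class GainEffect {
  constructor() {
    this.enabled = false;
    this.loaded = false;
    this.handlers = { 'track-updated': [] };
    this.outputTrack = null;
  }

  on(event, handler) { this.handlers[event].push(handler); }
  emit(event, payload) { this.handlers[event].forEach((h) => h(payload)); }

  // load() accepts a track (or stream) and returns the effected output.
  async load(inputTrack) {
    this.loaded = true;
    this.outputTrack = { id: `${inputTrack.id}-effected` }; // stand-in for a real MediaStreamTrack
    return this.outputTrack;
  }

  async enable() {
    this.enabled = true;
    this.emit('track-updated', this.outputTrack); // an event fires on enable/disable
  }

  async disable() {
    this.enabled = false;
    this.emit('track-updated', this.outputTrack);
  }

  // setEnabled(boolean) is convenient for state managers.
  async setEnabled(enabled) { return enabled ? this.enable() : this.disable(); }

  getOutputTrack() { return this.outputTrack; }

  dispose() { this.loaded = false; this.outputTrack = null; }
}
```

With the real library, the same sequence applies: load the input media first, then toggle the effect and react to 'track-updated'.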

Preloading Assets

To reduce startup time when applying media effects, the library provides a preloading mechanism. It fetches critical assets, such as ONNX models for image segmentation, WASM modules for audio processing, and web workers for background processing, before the media is available, so that effects integrate smoothly once the media stream is ready. Assets can be preloaded either through a provided factory function or directly via the preloadAssets() API.

Using Factory Function for Asynchronous Initialization

The library includes factory functions for scenarios that require asynchronous operations. Utilizing the async/await pattern, these functions provide a simple method for creating effects with their assets already preloaded. The factory function's second parameter is a boolean that indicates whether the assets should be preloaded.

const noiseReductionEffect = await createNoiseReductionEffect({
    authToken: 'your-auth-token',
    // ...other options
  },
  true
);

const virtualBackgroundEffect = await createVirtualBackgroundEffect({
    authToken: 'your-auth-token',
    mode: 'BLUR',
    // ...other options
  },
  true
);

By incorporating asset preloading, the preload API aims to minimize delays and performance hitches when activating effects to keep the UI fluid and responsive.

Direct Use of preloadAssets() API

For more fine-grained control over the preloading process, you can also directly call the preloadAssets() method on each effect instance. This approach allows you to manually manage when and how assets are preloaded, providing flexibility to fit various application architectures and workflows:

const virtualBackgroundEffect = new VirtualBackgroundEffect(options);
await virtualBackgroundEffect.preloadAssets();

const noiseReductionEffect = new NoiseReductionEffect(options);
await noiseReductionEffect.preloadAssets();

This direct method is useful in scenarios where you might want to preload assets independently of the effect instantiation or in response to specific application states or events. It gives you the ability to strategically preload assets at the most appropriate moment.
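One natural moment to call preloadAssets() is while the user-media permission prompt is still pending, so both finish in parallel. A sketch of that pattern, with stand-in async functions in place of the real browser and library calls:

```javascript
// Sketch: preload assets concurrently with acquiring media, then load the effect.
// Both async functions below are stand-ins, not the real browser/library calls.
const events = [];

async function acquireMedia() { // stand-in for navigator.mediaDevices.getUserMedia
  events.push('media-start');
  await new Promise((r) => setTimeout(r, 20));
  events.push('media-ready');
  return { kind: 'stream' };
}

const effect = { // stand-in for an effect instance with preloadAssets()
  async preloadAssets() {
    events.push('preload-start');
    await new Promise((r) => setTimeout(r, 10));
    events.push('preload-done');
  },
  async load(stream) {
    events.push('load');
    return { ...stream, effected: true };
  },
};

async function start() {
  // Kick off both at once; neither waits for the other.
  const [stream] = await Promise.all([acquireMedia(), effect.preloadAssets()]);
  return effect.load(stream); // assets are already in place when media arrives
}
```

The same shape works with the real getUserMedia() call and a real effect instance.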

Virtual Background Effect

The virtual background effect is a wrapper around ladon-ts that provides a virtual background for video calling. The virtual background may be an image, an mp4 video, or the user's background with blur applied. The blur option allows for varied levels of strength and quality where higher levels require more compute resources.

The virtual-background-effect takes an optional VirtualBackgroundEffectOptions config object in its constructor. The effect's options can be changed at runtime via an updateOptions() method. When disabled, the effect simply passes through the original video images so that the outgoing stream does not need to be changed.

The effect uses a background thread worker by default to prevent slowdowns on the main UI thread. The main UI thread can be used instead by adding the property generator: 'local' in the VirtualBackgroundEffectOptions object. However, this is not recommended as the worker thread performs much better.

NOTE: For backwards compatibility, the default mode is set to BLUR.

Options

There are a few different options that can be supplied to the constructor or the updateOptions() method to change the effect's behavior.

| Name | Description | Values | Required |
| --- | --- | --- | --- |
| authToken | Used to authenticate the request for the backend models | An encoded string token | Yes |
| generator | Determines where the model runs (on main thread or background thread) | `local`, `worker` | Defaults to `worker` |
| frameRate | Determines how many frames per second are sent to the model | 0–60 | Defaults to 30 |
| quality | Determines the accuracy of the model (higher requires more CPU) | `LOW`, `MEDIUM`, `HIGH`, `ULTRA` | Defaults to `LOW` |
| mirror | Whether the output image should be flipped horizontally | `true`, `false` | Defaults to `false` |
| mode | Determines what kind of background to render behind the user | `BLUR`, `IMAGE`, `VIDEO`, `PASSTHROUGH` | Defaults to `BLUR` |
| blurStrength | How strongly the background should be blurred | `WEAK`, `MODERATE`, `STRONG`, `STRONGER`, `STRONGEST` | Required in `BLUR` mode |
| bgImageUrl | Path to the background image that replaces the original background | Fully qualified URL | Required in `IMAGE` mode |
| bgVideoUrl | Path to the background video that replaces the original background | Fully qualified URL (mp4 only) | Required in `VIDEO` mode |
| env | Which environment the effect is running in | `EffectEnv.Production`, `EffectEnv.Integration` | Defaults to `EffectEnv.Production` |
| avoidSimd | Avoid using the SIMD processor even if SIMD is supported (for testing) | `true`, `false` | Defaults to `false` |
| preventBackgroundThrottling | If `true`, prevents the browser from throttling the effect frame rate when the page is hidden | `true`, `false` | Defaults to `false` |

Mode

The virtual background plugin applies a background effect to the original media stream by performing image segmentation on the incoming video frames. The plugin is capable of applying four different kinds of effects called modes: background blur, background image replacement, background video replacement, and passthrough.

The mode configuration option determines which background effect to apply. There are four accepted values: BLUR, IMAGE, VIDEO, and PASSTHROUGH. Each mode has at least one required option that must be set in the options object, as outlined in the Options section above.

  • BLUR: Applies a blur effect to the background, with configurable blur strength levels.
  • IMAGE: Replaces the background with a static image specified by the bgImageUrl option.
  • VIDEO: Replaces the background with a video specified by the bgVideoUrl option.
  • PASSTHROUGH: Runs the Ladon AI model without rendering a virtual background, allowing plugins to process the inference results (e.g., motion detection, faces and landmarks).

NOTE: For TypeScript users, the mode can be selected using the exported VirtualBackgroundMode enum, for convenience.
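The per-mode requirements can be made concrete with a small validation helper. This helper is hypothetical (not part of the library); it only restates the mode/option pairings documented above:

```javascript
// Hypothetical helper mirroring the per-mode required options:
// BLUR needs blurStrength, IMAGE needs bgImageUrl, VIDEO needs bgVideoUrl,
// and PASSTHROUGH has no extra required option.
const REQUIRED_BY_MODE = {
  BLUR: 'blurStrength',
  IMAGE: 'bgImageUrl',
  VIDEO: 'bgVideoUrl',
  PASSTHROUGH: null,
};

function validateModeOptions(options) {
  const required = REQUIRED_BY_MODE[options.mode];
  if (required === undefined) {
    throw new Error(`Unknown mode: ${options.mode}`);
  }
  if (required && !options[required]) {
    throw new Error(`Mode ${options.mode} requires the "${required}" option`);
  }
  return true;
}
```

Checking options this way before constructing the effect can surface configuration mistakes earlier than the effect's own load sequence would.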

Usage

Supply a video stream to the effect; once loaded, it returns a new stream with the effect applied.

// Create a new video stream by getting the user's video media.
const originalVideoStream = await navigator.mediaDevices.getUserMedia({ video: { width, height } });

// Create the effect.
const effect = new VirtualBackgroundEffect({
  authToken: 'YOUR_AUTH_TOKEN',
  mode: 'BLUR',
  blurStrength: 'STRONG',
  quality: 'LOW',
});

// Load the effect with the input stream.
const newStream = await effect.load(originalVideoStream);

// Attach the new stream to a video element to see the effect in action.
myVideoElement.srcObject = newStream;

Plugins

The virtual background effect supports the use of plugins to extend and customize its functionality. Plugins can be registered, initialized, and disposed of through the plugin manager. The two primary base classes to extend when creating plugins are BaseBeforeInferencePlugin and BaseAfterInferencePlugin.

Model Inference

Model inference refers to the process where the virtual background effect performs calculations (like segmentation or motion analysis) on each video frame to apply the selected effect (e.g., blur, image replacement, or video replacement). Plugins can hook into this process at two key points:

  • Before Inference: Plugins that need to analyze or modify the video frame before the virtual background effect processes it can use the Before Inference stage. This is useful for plugins that might want to control whether the model should perform inference at all, based on conditions like motion in the frame. For more details, see Adaptive Frame Skipper.

  • After Inference: Plugins that need to work with the results of the model inference can use the After Inference stage. These plugins get access to the results of the frame processing (e.g., segmented image or detected motion) and can use that information to make decisions or apply further effects. For more details, see Be Right Back and Rate Estimator.

Plugin Interface

Plugins should extend one of the following base classes depending on whether they operate before or after inference:

abstract class BaseBeforeInferencePlugin<T, O> {
  initialize(effect: VirtualBackgroundEffect): void;
  dispose(): void;
  onBeforeInference(timestamp: number, lastResult: InferenceResult): Promise<boolean>;
  updateOptions(newOptions: Partial<O>): void;
}

abstract class BaseAfterInferencePlugin<T, O> {
  initialize(effect: VirtualBackgroundEffect): void;
  dispose(): void;
  onAfterInference(timestamp: number, result?: InferenceResult): Promise<void>;
  updateOptions(newOptions: Partial<O>): void;
}

These base classes automatically handle registering and unregistering the appropriate callbacks (addBeforeInferenceCallback, addAfterInferenceCallback, removeBeforeInferenceCallback, removeAfterInferenceCallback) when the plugin is initialized or disposed. Plugin developers only need to implement the onBeforeInference or onAfterInference methods based on their plugin’s needs.
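A minimal after-inference plugin might look like the sketch below. The base class here is a stand-in that only mimics the register-on-initialize / unregister-on-dispose behavior described above, and the MotionCounterPlugin (with its `motion` field on the inference result) is hypothetical:

```javascript
// Stand-in base class: registers the callback on initialize and removes it on
// dispose, mimicking what the real BaseAfterInferencePlugin is described to do.
class AfterInferencePluginSketch {
  initialize(effect) {
    this.effect = effect;
    this.callback = (ts, result) => this.onAfterInference(ts, result);
    effect.addAfterInferenceCallback(this.callback);
  }

  dispose() {
    this.effect.removeAfterInferenceCallback(this.callback);
  }
}

// Hypothetical plugin that counts frames in which any motion was detected.
class MotionCounterPlugin extends AfterInferencePluginSketch {
  constructor() {
    super();
    this.motionFrames = 0;
  }

  async onAfterInference(timestamp, result) {
    // 'motion' is an assumed field on the inference result, for illustration.
    if (result && result.motion > 0) this.motionFrames += 1;
  }

  updateOptions() {} // no options in this sketch
}
```

With the real base classes, only onAfterInference (or onBeforeInference) needs to be implemented; registration is handled for you.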

Virtual Background Effect Plugin Methods

The VirtualBackgroundEffect also supports the following plugin methods:

  • registerPlugin(name: string, plugin: PluginType): Register a plugin with the virtual background effect.
  • initializePlugins(): Initialize all registered plugins.
  • getPlugin<T>(name: string): Retrieve a plugin by name.

Note: addBeforeInferenceCallback, addAfterInferenceCallback, removeBeforeInferenceCallback, and removeAfterInferenceCallback are handled automatically by the base plugin classes (BaseBeforeInferencePlugin and BaseAfterInferencePlugin), so plugin developers typically don’t need to call them directly.

Using Plugins

Plugins should be registered after creating the effect instance using the registerPlugin method. Here's an example of how to create and register plugins like the BeRightBackPlugin, FrameSkipperPlugin, and RateEstimatorPlugin. The example also demonstrates how to override default plugin options using coreOptions. Each of these plugins is described in more detail in its own section below.

// Create the BeRightBack plugin
const beRightBackPlugin = new BeRightBackPlugin({
   "mode": "conservative",
   "debug": true,
   "coreOptions": {
      "motionIouThreshold": 0.9,
      "onHysteresisMaxMs": 3000,
      "offHysteresisMaxMs": 2000
   }
});

// Create the FrameSkipper plugin
const frameSkipperPlugin = new FrameSkipperPlugin({
   "mode": "aggressive",
   "debug": true,
   "coreOptions": {
      "baseMinSkipTime": 50,
      "baseMaxSkipTime": 1000,
      "historySize": 100,
      "skipTimeIncrement": 50,
      "forcedInferenceInterval": 2000,
      "highMotionThreshold": 0.9,
      "smoothingFactor": 0.5
   }
});

// Create the RateEstimator plugin
const rateEstimatorPlugin = new RateEstimatorPlugin({
   "targetRate": 30,
   "debug": true
});

// Register the plugins with the effect instance
effect.registerPlugin("beRightBack", beRightBackPlugin);
effect.registerPlugin("frameSkipper", frameSkipperPlugin);
effect.registerPlugin("rateEstimator", rateEstimatorPlugin);

// Initialize the plugins
effect.initializePlugins();

// Enable the effect
await effect.enable(); // or `await effect.setEnabled(true);`

Plugin Retrieval

To retrieve a registered plugin, use the getPlugin method and provide the plugin's name.

const brbPlugin = effect.getPlugin<BeRightBackPlugin>("beRightBack");

By using coreOptions, you can customize the behavior of plugins according to your application's requirements.

Rate Estimator Plugin

The Rate Estimator plugin monitors the processing rate (such as frame rates) of media effects and emits events when the rate changes. This is helpful to dynamically adjust the system's behavior based on performance conditions.

Configuration Options
| Option | Description | Default Value |
| --- | --- | --- |
| targetRate | The desired target rate (frames per second) the estimator should maintain. | Required |
| debug | Whether to show debug information. | `false` |
| coreOptions | Optional overrides for specific rate estimator behavior, such as hysteresis margin, low rate threshold, and more (see core options). | — |
Core Options
| Option | Description | Default Value |
| --- | --- | --- |
| hysteresisMargin | Margin of tolerance around the low threshold to prevent rapid toggling between states, expressed as a percentage of the lowThreshold. | 0.05 (5%) |
| lowDuration | Duration in seconds that the rate must stay below the lowThreshold before the rate is considered sustainedly low. | 5 seconds |
| lowThreshold | Threshold below which the rate is considered low, expressed as a percentage of the target rate. | 80% of target rate |
| minSamples | Minimum number of samples to accumulate before making a rate estimation. | 30 |
| maxSamples | Maximum number of samples to consider for rate estimation, to avoid stale data. | 120 |
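As a worked example of how these defaults interact: with a targetRate of 30, the default lowThreshold of 80% puts the low boundary at 24 fps, and the 5% hysteresisMargin means the rate must climb back above 24 × 1.05 = 25.2 fps before it is considered normal again. The sketch below illustrates that thresholding logic; it is not the library's implementation:

```javascript
// Sketch of hysteresis-based low-rate detection using the documented defaults.
// The boundary differs depending on which side you are coming from, which is
// what prevents rapid toggling between 'low' and 'ok'.
function makeRateClassifier({ targetRate, lowThreshold = 0.8, hysteresisMargin = 0.05 }) {
  const lowBoundary = targetRate * lowThreshold;                // 24 fps for target 30
  const recoverBoundary = lowBoundary * (1 + hysteresisMargin); // 25.2 fps
  return function classify(rate, previousState) {
    if (previousState === 'low') {
      return rate > recoverBoundary ? 'ok' : 'low';
    }
    return rate < lowBoundary ? 'low' : 'ok';
  };
}
```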
Methods
  • on(event: RateEstimatorEvent, callback: (rate: number) => void): Attach a handler to rate change events.
  • getRate(): Returns the current rate estimation.
  • getStatus(): Returns the current status of the rate estimator.
Events

The Rate Estimator emits events to indicate changes in the processing rate. You can use string values or, if using TypeScript, enums provided by RateEstimatorEvent.

| Event | Description |
| --- | --- |
| `rate-ok` / RateEstimatorEvent.RateOk | Fired when the estimated rate returns to normal (above the lowThreshold). |
| `rate-low` / RateEstimatorEvent.RateLow | Fired when the estimated rate falls below the lowThreshold. |
| `rate-lagging` / RateEstimatorEvent.RateLagging | Fired when the low rate is sustained beyond the duration specified by lowDuration. |
Usage
import { RateEstimatorPlugin, RateEstimatorEvent } from '@webex/web-media-effects';

const rateEstimatorPlugin = new RateEstimatorPlugin({
  targetRate: 30, // Target fps
  debug: true,
  coreOptions: {
    hysteresisMargin: 0.05,
    lowThreshold: 24, // Consider rate low if below 24 fps
    lowDuration: 5,
    minSamples: 30,
    maxSamples: 120,
  }
});

rateEstimatorPlugin.on(RateEstimatorEvent.RateLow, (rate) => {
  console.log(`Rate is low: ${rate}`);
});
rateEstimatorPlugin.on(RateEstimatorEvent.RateOk, (rate) => {
  console.log(`Rate is ok: ${rate}`);
});
rateEstimatorPlugin.on(RateEstimatorEvent.RateLagging, (rate) => {
  console.log(`Rate is lagging: ${rate}`);
});

Frame Skipper Plugin

The Frame Skipper plugin is designed to optimize the performance of media effects by selectively skipping frames based on motion detection and other criteria. This helps reduce the computational load while maintaining acceptable quality.

Configuration Options
| Option | Description | Default Value |
| --- | --- | --- |
| mode | The frame skipping mode to use. | `conservative` |
| debug | Whether to enable debug logging for the frame skipper. | `false` |
| coreOptions | Optional overrides for specific frame skipping behavior, such as base skip time, motion thresholds, etc. | — |
Core Options
| Option | Description | Default Value |
| --- | --- | --- |
| baseMinSkipTime | The minimum time to wait before performing inference, in milliseconds. | 50 ms |
| baseMaxSkipTime | The maximum time to wait before performing inference, in milliseconds. | 1000 ms |
| historySize | The number of recent motion data points to consider when calculating stats. | 100 |
| skipTimeIncrement | The amount by which skip time is adjusted based on motion variance. | 50 ms |
| forcedInferenceInterval | The maximum time before inference is forced, in milliseconds. | 2000 ms |
| highMotionThreshold | The motion threshold for determining high motion. | 0.9 |
| smoothingFactor | The smoothing factor for motion value calculation. | 0.5 |
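The way these options combine can be sketched as a simple decision function: grow the skip window while motion stays low, but force inference when motion is high or when skipping has gone on too long. This is an illustrative sketch under those assumptions, not the library's actual algorithm:

```javascript
// Hypothetical sketch of adaptive frame skipping: the skip window grows by
// skipTimeIncrement up to baseMaxSkipTime, while high motion or an elapsed
// forcedInferenceInterval forces inference and resets the window.
function nextSkipDecision(state, { motion, now }, opts) {
  const {
    baseMinSkipTime = 50,
    baseMaxSkipTime = 1000,
    skipTimeIncrement = 50,
    forcedInferenceInterval = 2000,
    highMotionThreshold = 0.9,
  } = opts;

  // Force inference if we've been skipping too long or motion is high.
  if (now - state.lastInference >= forcedInferenceInterval || motion >= highMotionThreshold) {
    return { infer: true, skipTime: baseMinSkipTime };
  }

  // Otherwise grow the skip window gradually, clamped to the configured max.
  const skipTime = Math.min(state.skipTime + skipTimeIncrement, baseMaxSkipTime);
  const infer = now - state.lastInference >= skipTime;
  return { infer, skipTime };
}
```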
Methods
  • updateOptions(newOptions: Partial): Update the options for the frame skipper.
  • getLatestDebugInfo(): Returns the latest debug information about frame skipping.
Usage
import { FrameSkipperPlugin } from '@webex/web-media-effects';

const frameSkipperPlugin = new FrameSkipperPlugin({
  mode: 'aggressive', // default is 'conservative'
  debug: true,
  coreOptions: {
    baseMinSkipTime: 50,
    baseMaxSkipTime: 1000,
    historySize: 100,
    skipTimeIncrement: 50,
    forcedInferenceInterval: 2000,
    highMotionThreshold: 0.9,
    smoothingFactor: 0.5
  }
});

Be Right Back Plugin

The Be Right Back (BRB) plugin is designed to detect when the user is away from the camera and apply appropriate actions, such as displaying a placeholder image or message.

Configuration Options
| Option | Description | Default Value |
| --- | --- | --- |
| mode | The mode controlling the be right back behavior. | `conservative` |
| debug | Whether to enable debug logging for the be right back plugin. | `false` |
| coreOptions | Optional overrides for specific BRB behavior, such as motion thresholds and hysteresis times. | — |
Core Options
| Option | Description | Default Value |
| --- | --- | --- |
| motionIouThreshold | The amount of motion required to trigger the be right back state. | 0.9 |
| onHysteresisMaxMs | The amount of time required to switch the be right back state on. | 3000 ms |
| offHysteresisMaxMs | The amount of time required to switch the be right back state off. | 2000 ms |
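The two hysteresis times mean the BRB state only flips after the away (or present) condition has held continuously for the configured duration. A sketch of that state machine, with the away condition abstracted to a boolean (how the library derives it from motionIouThreshold is not modeled here):

```javascript
// Hypothetical sketch of BRB on/off hysteresis: the state flips only after the
// opposing condition has held continuously for on/offHysteresisMaxMs.
function makeBrbStateMachine({ onHysteresisMaxMs = 3000, offHysteresisMaxMs = 2000 } = {}) {
  let state = 'off';
  let pendingSince = null;
  return function update(away, now) {
    const wantsOn = away && state === 'off';
    const wantsOff = !away && state === 'on';
    if (!wantsOn && !wantsOff) {
      pendingSince = null; // condition agrees with current state; reset the timer
      return state;
    }
    if (pendingSince === null) pendingSince = now;
    const needed = wantsOn ? onHysteresisMaxMs : offHysteresisMaxMs;
    if (now - pendingSince >= needed) {
      state = wantsOn ? 'on' : 'off';
      pendingSince = null;
    }
    return state;
  };
}
```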
Methods
  • on(event: BeRightBackEvent, callback: () => void): Attach handlers for BRB events like on, off, and state-change.
  • updateOptions(newOptions: Partial): Update the options for BRB.
Events

The Be Right Back plugin emits events to indicate changes in the user's presence in front of the camera. You can use string values or, if using TypeScript, enums provided by BeRightBackEvent.

| Event | Description |
| --- | --- |
| `on` / BeRightBackEvent.On | Fired when the user is detected to have left the camera. |
| `off` / BeRightBackEvent.Off | Fired when the user returns to the camera. |
| `state-change` / BeRightBackEvent.StateChange | Fired whenever the user's state changes, whether they leave or return to the camera. |
Usage
import { BeRightBackPlugin, BeRightBackEvent } from '@webex/web-media-effects';

const beRightBackPlugin = new BeRightBackPlugin({
  mode: 'conservative',
  debug: true,
  coreOptions: {
    motionIouThreshold: 0.9,
    onHysteresisMaxMs: 3000,
    offHysteresisMaxMs: 2000
  }
});

beRightBackPlugin.on(BeRightBackEvent.On, () => {
  console.log('User has left the camera');
});
beRightBackPlugin.on(BeRightBackEvent.Off, () => {
  console.log('User has returned to the camera');
});
beRightBackPlugin.on(BeRightBackEvent.StateChange, (newState, oldState) => {
  console.log('User has changed state: ', newState, oldState);
});

Noise Reduction Effect

The noise reduction effect removes background noise from an audio stream to provide clear audio for calling.

The noise-reduction-effect takes a NoiseReductionEffectOptions config object in its constructor. A developer can optionally pass a workletProcessorUrl parameter (or legacyProcessorUrl) in the config to use a different or test version of the audio processor. An audioContext parameter can also be passed in the config to supply an existing AudioContext; otherwise, a new one will be created.

The effect loads the background thread AudioWorkletProcessor into the main thread AudioWorklet in order to keep the audio computations from impacting UI performance.

Options

There are a few different options that can be supplied to the constructor or the updateOptions() method to change the effect's behavior.

| Name | Description | Values | Required |
| --- | --- | --- | --- |
| authToken | Used to authenticate the request for the backend processors | An encoded string token | Yes |
| audioContext | An optional AudioContext for custom behavior | AudioContext | No |
| mode | Determines whether to run in WORKLET mode or LEGACY mode for older browsers | `WORKLET`, `LEGACY` | Defaults to `WORKLET` |
| legacyProcessorUrl | A URL to fetch the legacy processor that attaches to the deprecated ScriptProcessorNode | A fully qualified URL | No |
| workletProcessorUrl | A URL to fetch the AudioWorkletProcessor to attach to the AudioWorkletNode | A fully qualified URL | No |
| env | Which environment the effect is running in | `EffectEnv.Production`, `EffectEnv.Integration` | No |

Supported Sample Rates

The noise reduction effect supports the following audio sample rates:

  • 16 kHz
  • 32 kHz
  • 44.1 kHz
  • 48 kHz

If an unsupported sample rate is detected, the noise reduction effect will throw the following error: Error: noise reduction: worklet processor error, "Error: Sample rate of X is not supported."
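Since the effect rejects unsupported sample rates at load time, it can be worth checking the audio context's rate up front. A small sketch (the helper name and guard are ours, not part of the library):

```javascript
// Hypothetical guard: check a sample rate against the documented supported set
// before loading the noise reduction effect.
const SUPPORTED_SAMPLE_RATES_HZ = [16000, 32000, 44100, 48000];

function assertSupportedSampleRate(sampleRateHz) {
  if (!SUPPORTED_SAMPLE_RATES_HZ.includes(sampleRateHz)) {
    throw new Error(`Sample rate of ${sampleRateHz} is not supported.`);
  }
  return sampleRateHz;
}

// In a browser you would check the actual context, e.g.:
// const ctx = new AudioContext();
// assertSupportedSampleRate(ctx.sampleRate);
```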

Usage

Supply an audio track or stream to the effect; the effect will handle updating the stream on enable/disable. If a track is passed, listen to the 'track-updated' event to receive the updated track on enable/disable.

// Create a new audio stream by getting a user's audio media.
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

// Create the effect.
const effect = new NoiseReductionEffect({
  authToken: 'YOUR_AUTH_TOKEN',
  workletProcessorUrl: 'https://my-worklet-processor-url', // For 'WORKLET' mode
  legacyProcessorUrl: 'https://my-legacy-processor-url', // For 'LEGACY' mode
  mode: 'WORKLET', // or 'LEGACY'
});

// Load the effect with the input stream.
await effect.load(stream);

Example

The example app included in this repo is designed to help test functionality and troubleshoot issues. You can run the example app by following the instructions in the README in the example folder. You can also view a live example at https://effects.webex.com.

Development

  • Run yarn to install dependencies.
  • Run yarn prepare to prepare dependencies.
  • Run yarn watch to build and watch for updates.
  • Run yarn test to build, run tests, lint, and run test coverage.

Visual Studio Code

Install the recommended extensions when first opening the workspace (there should be a prompt). These plugins will help maintain high code quality and consistency across the project.

NOTE: VS Code is set up to apply formatting and linting rules on save (Prettier runs first, then ESLint). The rules applied are defined in settings.json.

Package last updated on 26 Feb 2025