web-media-effects

Web Media Effects is a suite of media effects developed for web SDKs and WebRTC media applications.

Introduction

There are three effects included in this library:

  • Virtual background (e.g., blur, image replacement, video replacement)
  • Noise reduction (e.g., background noise removal)
  • Gain (for testing)

Common Methods

The effects are built on top of a plugin interface that makes building and extending effects more straightforward.

The effect plugins have five primary methods to control the plugin:

  • load(input) accepts a track or stream and returns a new track or stream with the effect applied
  • enable() enables the plugin after it's loaded
  • disable() disables the plugin after it's loaded
  • dispose() tears down the effect
  • preloadAssets() fetches all assets (e.g., WASM files, ONNX models) to optimize the load sequence

Upon enabling or disabling the effect, an event is fired.

effect.on('track-updated', (track: MediaStreamTrack) => {
  // do something with the new track.
});

Additionally, there are a few convenience methods:

  • getOutputStream() returns the new outgoing (i.e., "effected") stream
  • getOutputTrack() returns the active output track
  • setEnabled(boolean) sets the effect state by passing in a boolean (convenient for state managers)
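
Taken together, a typical lifecycle looks roughly like the sketch below. This is a non-authoritative sketch: it uses the virtual background effect, whose constructor options are documented later in this README, and it awaits methods that may or may not be asynchronous (awaiting is harmless if they are synchronous).

// Acquire camera video and create an effect (see the Virtual Background section for options).
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const effect = new VirtualBackgroundEffect({
  authToken: 'YOUR_AUTH_TOKEN',
  mode: 'BLUR',
  blurStrength: 'STRONG',
});

// load() returns a new stream with the effect applied.
const effectedStream = await effect.load(stream);

// Toggle the effect; 'track-updated' fires with the replacement track.
await effect.disable();
await effect.enable();
await effect.setEnabled(false); // equivalent to disable()

// Convenience getters for the outgoing media.
const outputStream = effect.getOutputStream();
const outputTrack = effect.getOutputTrack();

// Tear down the effect when finished.
await effect.dispose();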

Preloading Assets

To optimize startup time for applying media effects, the library provides a preloading mechanism. This mechanism fetches critical assets, such as ONNX models for image segmentation, WASM modules for audio processing, and web workers for background processing, in advance of media availability. This ensures smoother integration of effects once the media stream is ready and improves the overall user experience. Assets can be preloaded using either a provided factory function or directly using the preloadAssets() API.

Using Factory Function for Asynchronous Initialization

The library includes factory functions for scenarios that require asynchronous operations. Utilizing the async/await pattern, these functions provide a simple method for creating effects with their assets already preloaded. The factory function's second parameter is a boolean that indicates whether the assets should be preloaded.

const noiseReductionEffect = await createNoiseReductionEffect({
    authToken: 'YOUR_AUTH_TOKEN',
    // ...other options
  },
  true
);

const virtualBackgroundEffect = await createVirtualBackgroundEffect({
    authToken: 'YOUR_AUTH_TOKEN',
    mode: 'BLUR',
    // ...other options
  },
  true
);

By incorporating asset preloading, the preload API aims to minimize delays and performance hitches when activating effects to keep the UI fluid and responsive.

Direct Use of preloadAssets() API

For more fine-grained control over the preloading process, you can also directly call the preloadAssets() method on each effect instance. This approach allows you to manually manage when and how assets are preloaded, providing flexibility to fit various application architectures and workflows:

const virtualBackgroundEffect = new VirtualBackgroundEffect(options);
await virtualBackgroundEffect.preloadAssets();

const noiseReductionEffect = new NoiseReductionEffect(options);
await noiseReductionEffect.preloadAssets();

This direct method is useful in scenarios where you might want to preload assets independently of the effect instantiation or in response to specific application states or events. It gives you the ability to strategically preload assets at the most appropriate moment.

Virtual Background

The virtual background effect is a wrapper around ladon-ts that provides a virtual background for video calling. The virtual background may be an image, an mp4 video, or the user's background with blur applied. The blur option allows for varied levels of strength and quality where higher levels require more compute resources.

The virtual-background-effect takes an optional VirtualBackgroundEffectOptions config object in its constructor. The effect's options can be changed at runtime via an updateOptions() method. When disabled, the effect simply passes through the original video images so that the outgoing stream does not need to be changed.

The effect uses a background thread worker by default to prevent slowdowns on the main UI thread. The main UI thread can be used instead by adding the property generator: 'local' in the VirtualBackgroundEffectOptions object. However, this is not recommended as the worker thread performs much better.
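
For example, a minimal sketch of opting into main-thread processing for debugging (illustrative only, using the options documented below):

// Illustration only: run segmentation on the main UI thread instead of a worker.
const mainThreadEffect = new VirtualBackgroundEffect({
  authToken: 'YOUR_AUTH_TOKEN',
  mode: 'BLUR',
  blurStrength: 'STRONG',
  generator: 'local',
});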

NOTE: For backwards compatibility, the default mode is set to BLUR.

Options

There are a few different options that can be supplied to the constructor or updateOptions() method that affect the behavior:

| Name | Description | Values | Required |
| --- | --- | --- | --- |
| authToken | Used to authenticate the request for the backend models | An encoded string token | Yes |
| generator | Determines where the model runs (on main thread or background thread) | local, worker | Defaults to worker |
| frameRate | Determines how many frames per second are sent to the model | 0-60 | Defaults to 30 |
| quality | Determines the accuracy of the model (higher requires more CPU) | LOW, MEDIUM, HIGH, ULTRA | Defaults to LOW |
| mirror | Whether the output image should be flipped horizontally | true, false | Defaults to false |
| mode | Determines what kind of background to render behind the user | BLUR, IMAGE, VIDEO | Defaults to BLUR |
| blurStrength | How strongly the background should be blurred | WEAK, MODERATE, STRONG, STRONGER, STRONGEST | Required in BLUR mode |
| bgImageUrl | Path to the background image to replace the original background | Fully qualified URL | Required in IMAGE mode |
| bgVideoUrl | Path to the background video to replace the original background | Fully qualified URL (mp4 only) | Required in VIDEO mode |
| env | Which environment the effect is running in | EffectEnv.Production, EffectEnv.Integration | Defaults to EffectEnv.Production |
| avoidSimd | Avoid using the SIMD processor, if SIMD is supported (for testing) | true, false | Defaults to false |
| preventBackgroundThrottling | If true, prevents the browser from throttling the effect frame rate when the page is hidden | true, false | Defaults to false |

Mode

The virtual background plugin applies a background effect to the original media stream by performing image segmentation on the incoming video frames. The plugin is capable of applying three different kinds of effects called modes: background blur, background image replacement, and background video replacement.

The mode configuration option determines what background effect to apply. There are three accepted values for the mode: BLUR, IMAGE, and VIDEO. Each mode has at least one required option that needs to be set in the options object, as outlined above in the Options section.

NOTE: For Typescript users, the mode can be selected by using the exported VirtualBackgroundMode enum, for convenience.

Usage

Supply a video stream to the effect and when loaded, it will return a new stream with the effect applied.

// Desired capture resolution (example values).
const width = 1280;
const height = 720;

// Create a new video stream by getting a user's video media.
const originalVideoStream = await navigator.mediaDevices.getUserMedia({ video: { width, height } });

// Create the effect.
const effect = new VirtualBackgroundEffect({
  authToken: 'YOUR_AUTH_TOKEN',
  mode: 'BLUR',
  blurStrength: 'STRONG',
  quality: 'LOW',
});

// Load the effect with the input stream.
const newStream = await effect.load(originalVideoStream);

// Attach the new stream to a video element to see the effect in action.
myVideoElement.srcObject = newStream;
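
Options can also be changed while the effect is running via updateOptions(), as noted above. A hedged sketch of switching from blur to image replacement (the URL is a placeholder, and updateOptions() is assumed to accept the same fields as the constructor):

// Switch the loaded effect from BLUR to IMAGE mode at runtime.
await effect.updateOptions({
  mode: 'IMAGE',
  bgImageUrl: 'https://example.com/background.jpg', // placeholder URL
});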

Rate Estimator

The Rate Estimator is a utility designed to monitor and adapt to changes in the processing rate of media effects, such as frame rates for video effects. It provides insights into performance, allowing applications to respond dynamically to varying processing capabilities by emitting events based on the processing rate's performance against defined thresholds.

Configuration Options

Configure the Rate Estimator with the following options to tailor its behavior to your application's needs:

| Option | Description | Default Value |
| --- | --- | --- |
| hysteresisMargin | Margin of tolerance around the low threshold to prevent rapid toggling between states, expressed as a percentage of the lowThreshold | 0.05 (5%) |
| lowDuration | Duration in seconds that the rate must stay below the lowThreshold before the rate is considered persistently low | 5 seconds |
| lowThreshold | Threshold below which the rate is considered low, expressed as a percentage of the target rate | 80% of target rate |
| minSamples | Minimum number of samples to accumulate before making a rate estimation | 30 |
| maxSamples | Maximum number of samples to consider for rate estimation, to prevent using stale data | 120 |

Events

The Rate Estimator emits events to indicate changes in the processing rate. You can use string values or, if using TypeScript, enums provided by RateEstimatorEvent.

| Event | Description |
| --- | --- |
| rate-ok or RateEstimatorEvent.RateOk | Fired when the estimated rate returns to normal, above the lowThreshold. |
| rate-low or RateEstimatorEvent.RateLow | Fired when the estimated rate falls below the lowThreshold. |
| rate-lagging or RateEstimatorEvent.RateLagging | Fired when the low rate is sustained beyond the duration specified by lowDuration. |

Statuses

The Rate Estimator can be in one of the following statuses, accessible as strings or through the RateEstimatorStatus TypeScript enum:

| Status | Description |
| --- | --- |
| idle or RateEstimatorStatus.Idle | No data is being processed or the estimator has been reset. |
| init or RateEstimatorStatus.Init | Initial data is being collected. |
| lagging or RateEstimatorStatus.Lagging | The rate has been low for a sustained period. |
| low or RateEstimatorStatus.Low | The current rate is below the threshold. |
| ok or RateEstimatorStatus.Ok | The rate is normal, above the threshold. |

Usage

Instantiate the Rate Estimator with your target rate and optional configuration settings. Add timestamps periodically as frames or tasks are processed to enable the estimator to assess the current rate.

import { EffectEvent, RateEstimator, RateEstimatorEvent, RateEstimatorStatus } from '@webex/web-media-effects';

const rateEstimator = new RateEstimator(30, {
  lowThreshold: 24, // Consider rate low if below 24 fps
});

// Assuming 'effect' is an instance of a media effect using the Rate Estimator
effect.setOnFrameProcessedCallback((timestamp) => {
  rateEstimator.addTimestamp(timestamp);
});

// Reset the estimator when the effect is enabled or disabled
effect.on(EffectEvent.Enabled, () => rateEstimator.reset());
effect.on(EffectEvent.Disabled, () => rateEstimator.reset());

// Handle rate change events
rateEstimator.on(RateEstimatorEvent.RateLow, (rate) => {
  console.log(`Rate is low: ${rate}`);
});
rateEstimator.on(RateEstimatorEvent.RateOk, (rate) => {
  console.log(`Rate is ok: ${rate}`);
});
rateEstimator.on(RateEstimatorEvent.RateLagging, (rate) => {
  console.log(`Rate is lagging: ${rate}`);
});
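
One hedged way to act on these events, assuming the effect is a VirtualBackgroundEffect that exposes the updateOptions() and quality options described earlier, is to reduce quality when the rate lags:

// Drop to the lowest quality if the processing rate lags for a sustained period.
rateEstimator.on(RateEstimatorEvent.RateLagging, async () => {
  await effect.updateOptions({ quality: 'LOW' });
});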

Noise Reduction

The noise reduction effect removes background noise from an audio stream to provide clear audio for calling.

The noise-reduction-effect takes a NoiseReductionEffectOptions config object in its constructor. A developer can optionally pass a workletProcessorUrl parameter (or legacyProcessorUrl) in the config to use a different or test version of the audio processor. An audioContext parameter can also be passed in the config to supply an existing AudioContext; otherwise, a new one will be created.

The effect loads the background thread AudioWorkletProcessor into the main thread AudioWorklet in order to keep the audio computations from impacting UI performance.
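
A minimal sketch of supplying an existing AudioContext via the audioContext option described below (the shared context shown here is an assumption about the host application):

// Reuse an application-wide AudioContext instead of letting the effect create one.
const sharedAudioContext = new AudioContext();

const noiseReduction = new NoiseReductionEffect({
  authToken: 'YOUR_AUTH_TOKEN',
  audioContext: sharedAudioContext,
});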

Options

There are a few different options that can be supplied to the constructor or updateOptions() method that affect the behavior:

| Name | Description | Values | Required |
| --- | --- | --- | --- |
| authToken | Used to authenticate the request for the backend processors | An encoded string token | Yes |
| audioContext | An optional AudioContext for custom behavior | AudioContext | No |
| mode | Determines whether to run in WORKLET mode or LEGACY mode for older browsers | WORKLET, LEGACY | Defaults to WORKLET |
| legacyProcessorUrl | A URL to fetch the legacy processor that attaches to the deprecated ScriptProcessorNode | A fully qualified URL | No |
| workletProcessorUrl | A URL to fetch the AudioWorkletProcessor to attach to the AudioWorkletNode | A fully qualified URL | No |
| env | Which environment the effect is running in | EffectEnv.Production, EffectEnv.Integration | No |

Supported Sample Rates

The noise reduction effect supports the following audio sample rates:

  • 16 kHz
  • 32 kHz
  • 44.1 kHz
  • 48 kHz

If an unsupported sample rate is detected, the noise reduction effect will throw the following error: Error: noise reduction: worklet processor error, "Error: Sample rate of X is not supported."
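
A hedged sketch of guarding against unsupported sample rates before loading the effect (note that getSettings().sampleRate is not reported by every browser):

// Sample rates supported by the noise reduction effect, in Hz.
const SUPPORTED_SAMPLE_RATES = [16000, 32000, 44100, 48000];

const micStream = await navigator.mediaDevices.getUserMedia({ audio: true });
const { sampleRate } = micStream.getAudioTracks()[0].getSettings();

// sampleRate may be undefined in some browsers; only warn on known-unsupported rates.
if (sampleRate !== undefined && !SUPPORTED_SAMPLE_RATES.includes(sampleRate)) {
  console.warn(`Sample rate ${sampleRate} Hz is not supported by the noise reduction effect.`);
}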

Usage

Supply an audio track or stream to the effect; the effect will handle updating the stream on enable/disable. In the case of a track being passed, listen to the 'track-updated' event to receive the updated track on enable/disable (see the sketch after the example below).

// Create a new audio stream by getting a user's audio media.
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

// Create the effect.
const effect = new NoiseReductionEffect({
  authToken: 'YOUR_AUTH_TOKEN',
  workletProcessorUrl: 'https://my-worklet-processor-url', // For 'WORKLET' mode
  legacyProcessorUrl: 'https://my-legacy-processor-url', // For 'LEGACY' mode
  mode: 'WORKLET', // or 'LEGACY'
});

// Load the effect with the input stream.
await effect.load(stream);
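
When a track is supplied instead of a stream, the 'track-updated' event delivers the replacement track whenever the effect is enabled or disabled. A minimal, non-authoritative sketch of that flow:

// Track-based flow: load the effect with a single audio track and listen for updates.
const trackStream = await navigator.mediaDevices.getUserMedia({ audio: true });
const [audioTrack] = trackStream.getAudioTracks();

const trackEffect = new NoiseReductionEffect({
  authToken: 'YOUR_AUTH_TOKEN',
  mode: 'WORKLET',
});

// Fired whenever enable()/disable() swaps the underlying track.
trackEffect.on('track-updated', (track: MediaStreamTrack) => {
  // Replace the outgoing track here, e.g. on an RTCRtpSender.
});

const effectedTrack = await trackEffect.load(audioTrack);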

Publishing and Using TypeScript Definitions

The TypeScript definitions for Web Media Effects are maintained in a separate package to allow developers to use the types without importing the entire library. This package is published under the scope @webex on npm as @webex/web-media-effects-types.

Steps to Publish

  1. Prepare the Types: Run the prepare script to ensure that the latest index.d.ts is copied to the types directory and is ready for publication.

    npm run prepare
    
  2. Publish the Package: Navigate to the types directory and use npm to publish:

    cd types
    npm publish
    

    Ensure you are logged into npm with an account that has permissions to publish under the @webex scope.

GitHub Actions Automation

To automate the publishing process, GitHub Actions workflows are configured to handle the deployment of the types package upon changes to the main branch or manually via workflow_dispatch event. This ensures that the types are consistently updated and versioned in alignment with the source library.

Using the Published Types

Once published, the TypeScript definitions can be easily integrated into any TypeScript project.

  1. Install the Types Package: To add the types to your project, run:

    npm install --save-dev @webex/web-media-effects-types
    
  2. Configure TypeScript: In your tsconfig.json, ensure that your type roots or type references are configured to include the installed types:

    {
      "compilerOptions": {
        "typeRoots": ["./node_modules/@types", "./node_modules/@webex/web-media-effects-types"]
      }
    }
    

    Alternatively, you can use a direct reference in your TypeScript file if needed:

    /// <reference types="@webex/web-media-effects-types" />
    

Example

The example app included in this repo is designed to help test functionality and troubleshoot issues. You can run the example app by following the instructions in the README in the example folder. You can also view a live example at https://effects.webex.com.

Development

  1. Run yarn to install dependencies.
  2. Run yarn prepare to prepare dependencies.
  3. Run yarn watch to build and watch for updates.
  4. Run yarn test to build, run tests, lint, and run test coverage.

Visual Studio Code

Install the recommended extensions when first opening the workspace (there should be a prompt). These plugins will help maintain high code quality and consistency across the project.

NOTE: VS Code is set up to apply formatting and linting rules on save (Prettier runs first, then ESLint). The rules applied are defined in settings.json.
