
@pexip/media

A package to connect @pexip/media-control and @pexip/media-processor to create a streamlined media process.
npm install @pexip/media
createMedia

It creates an object to interact with the media stream, which is usually used for our main stream.
Its major features:

- Create a media pipeline to get and process the MediaStream, see createMediaPipeline

It also provides some media features, including:

- Provide media input related information
- Verify if we need to request a new MediaStream, see updateMedia
- Subscribe to events from @pexip/media-control and update the state accordingly
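As a minimal sketch of that flow, using only the calls demonstrated in the walkthrough further below (the 'minimal' scope name and the empty processor lists here are illustrative):

import {createMedia, createMediaSignals} from '@pexip/media';

const signals = createMediaSignals('minimal', []);

const media = createMedia({
  getMuteState: () => ({audio: false, video: false}),
  signals,
  audioProcessors: [], // no extra processing in this sketch
  videoProcessors: [],
});

// Kick off a gUM request through the pipeline
media.getUserMedia({audio: true, video: true});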
createAudioStreamProcess

A processor to process audio from the stream, to be used together with createMedia as one of the media processors of the pipeline.
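The walkthrough below only exercises the video variant, so the following is a hedged sketch that assumes createAudioStreamProcess accepts options analogous to the createVideoStreamProcess call shown later; check the package's type definitions for the exact signature:

import {createAudioStreamProcess} from '@pexip/media';

const audioStreamProcessor = createAudioStreamProcess({
  // Assumption: `shouldEnable` mirrors the option used for the video
  // processor in the walkthrough below
  shouldEnable: () => true,
});

// Pass it to createMedia via `audioProcessors: [audioStreamProcessor]`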
createVideoStreamProcess

A processor to process video from the stream, to be used together with createMedia as one of the media processors of the pipeline; see the walkthrough below for a full usage example.
createPreviewController

It creates an object to control the provided media stream, e.g. change the audio/video input device and apply the changes to the main stream when necessary.
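The controller's full API is not covered in this README, but the kind of operation it manages can be sketched with the documented applyConstraints call and the standard deviceId constraint (mediaController refers to the object created in the walkthrough below; the camera selection here is a placeholder):

// Enumerate cameras with the standard Media Capture API
const devices = await navigator.mediaDevices.enumerateDevices();
const cameras = devices.filter(d => d.kind === 'videoinput');
const newCameraId = cameras[0]?.deviceId ?? ''; // placeholder selection

// Apply the device change to the main stream
await mediaController.media.applyConstraints({
  video: {deviceId: {exact: newCameraId}},
});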
The package tries to follow the MediaStream API's Constraints pattern. Besides the MediaTrackConstraintSet specified in the Media Capture and Streams spec, we have extended it with the following additional media features:
export interface InputConstraintSet extends MediaTrackConstraints {
/**
* Same purpose as `deviceId` but it gives more information about the device
* so that we can have extra tolerance on device selection
*/
device?: DeviceConstraint | ConstraintDeviceParameters;
/**
* Whether or not to use video segmentation, e.g. background
* blur/replacement, and which effect to apply to the segment. Available
* effects are `none`, `blur`, `overlay` or `remove`
*/
videoSegmentation?: ConstrainDOMString;
/**
* Segmentation model to be used for video segmentation, currently only
* supports `mediapipeSelfie` and `personify`
*/
videoSegmentationModel?: ConstrainDOMString;
/**
* Whether or not to use our own noise suppression
*/
denoise?: ConstrainBoolean;
/**
* Voice Activity Detection
*/
vad?: ConstrainBoolean;
/**
* Audio Signal Detection for the purpose of checking if the audio input is
* hardware muted or unusable
*/
asd?: ConstrainBoolean;
/**
* Whether or not to mix the stream with additional media
*/
mixWithAdditionalMedia?: ConstrainBoolean;
/**
* Blur size/level parameter when using video segmentation with `blur`
* effects
*/
backgroundBlurAmount?: ConstrainULong;
/**
* Blur amount applied to the segmented person's edge
*/
edgeBlurAmount?: ConstrainULong;
/**
* Erode level for edge smoothing when using video segmentation
*/
foregroundThreshold?: ConstrainDouble;
/**
* Image Url that is being used for video overlay effects
*/
backgroundImageUrl?: ConstrainDOMString;
/**
* The ratio to be used for smoothing segmentation mask
*/
maskCombineRatio?: ConstrainDouble;
/**
* Resize mode for the video track, as specified in the Media Capture and
* Streams spec, e.g. `none` or `crop-and-scale`
*/
resizeMode?: ConstrainDOMString;
/**
* Whether to request pan, tilt and zoom access for a PTZ-controllable
* camera on the web
*/
pan?: boolean;
tilt?: boolean;
zoom?: boolean;
/**
* Content Hint for the track to apply
*/
contentHint?: ConstrainDOMString;
}
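For example, a getUserMedia request combining standard and extended constraints might look like the following (mediaController is the object created in the walkthrough below):

mediaController.getUserMedia({
  audio: {
    denoise: true, // use the package's own noise suppression...
    noiseSuppression: false, // ...instead of the browser built-in one
    vad: true, // enable Voice Activity Detection
  },
  video: {
    videoSegmentation: 'blur',
    videoSegmentationModel: 'mediapipeSelfie',
    backgroundBlurAmount: 30, // 0 - 100
  },
});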
createMedia

import {
  createMedia,
  createMediaSignals,
  createVideoStreamProcess,
  UserMediaStatus,
} from '@pexip/media';
import type {RenderEffects, Segmenters, Media} from '@pexip/media';
import {
createSegmenter,
createCanvasTransform,
createVideoProcessor,
} from '@pexip/media-processor';
// App states
const states: {
muted: {video: boolean; audio: boolean};
effects: RenderEffects;
foregroundThreshold: number;
backgroundBlurAmount: number;
maskCombineRatio: number;
backgroundImageUrl: string;
media?: Media;
status: UserMediaStatus;
} = {
muted: {
audio: false,
video: false,
},
// Could be `none`, `blur` or `overlay`
effects: 'none',
foregroundThreshold: 0.5, // 0.0 - 1.0
backgroundBlurAmount: 30, // 0 - 100
maskCombineRatio: 0.5, // 0.0 - 1.0
backgroundImageUrl: '/some/path/to/an/image/for/background/replacement.png',
status: UserMediaStatus.Initial,
};
// Create required signals with additional `onDevicesChanged`,
// `onStatusChanged` and `onStreamTrackEnabled` signals
export const signals = createMediaSignals('demo', [
'onDevicesChanged',
'onStatusChanged',
'onStreamTrackEnabled',
]);
// Set the base path to the `@mediapipe/tasks-vision` assets
// It will be passed directly to
// [FilesetResolver.forVisionTasks()](https://ai.google.dev/edge/api/mediapipe/js/tasks-vision.filesetresolver#filesetresolverforvisiontasks)
const tasksVisionBasePath =
'A base path to specify the directory the Wasm files should be loaded from';
const modelAsset = {
/**
* Path to mediapipe selfie segmentation model asset
*/
path: 'A path to selfie segmentation model',
modelName: 'selfie' as const,
};
// Create the selfie segmenter, delegating inference to the GPU
const selfie = createSegmenter(tasksVisionBasePath, {
  modelAsset,
  delegate: () => 'GPU', // Use GPU
});

export const segmenters: Partial<Segmenters> = {
  selfie,
};

// Create a processing transformer with the initial effects
const transformer = createCanvasTransform(selfie, {
  effects: states.effects,
  foregroundThreshold: states.foregroundThreshold,
  backgroundBlurAmount: states.backgroundBlurAmount,
  maskCombineRatio: states.maskCombineRatio,
  backgroundImageUrl: states.backgroundImageUrl,
});
const videoProcessor = createVideoProcessor([transformer]);
const videoStreamProcessor = createVideoStreamProcess({
shouldEnable: () => true, // Always Process Video Track
segmenters,
transformer,
videoProcessor,
});
// Instantiate the media object
export const mediaController = createMedia({
getMuteState: () => states.muted,
signals,
audioProcessors: [], // No audio processor
videoProcessors: [videoStreamProcessor],
});
// Hook-up the signals to get the update
// Subscribe the onMediaChanged signal to get the latest Media object change event
signals.onMediaChanged.add(media => {
  states.media = media;
});
// Subscribe the onStatusChanged signal to get the latest Media status change event
signals.onStatusChanged.add(status => setMediaStatus(status)); // Assume there is a local function `setMediaStatus`
// Subscribe onStreamTrackEnabled signal to get the MediaStreamTrack['enabled'] change event
signals.onStreamTrackEnabled.add(track => {
console.log(track);
});
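// Subscribe the onDevicesChanged signal created above (a sketch: the
// payload shape isn't shown in this README, so treat `devices` as an
// assumption and check the package's type definitions)
signals.onDevicesChanged.add(devices => {
  console.log('media devices changed', devices);
});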
// Later we can make a gUM (getUserMedia) call to get a MediaStream
mediaController.getUserMedia({
audio: true,
video: {
videoSegmentation: 'blur', // Use background blur
foregroundThreshold: states.foregroundThreshold,
backgroundBlurAmount: states.backgroundBlurAmount,
maskCombineRatio: states.maskCombineRatio,
backgroundImageUrl: states.backgroundImageUrl,
},
});
// Access the current `MediaStream`
mediaController.media.stream;
// Get audio mute state
mediaController.media.audioMuted;
// Mute audio
mediaController.media.muteAudio(true);
// Get video mute state
mediaController.media.videoMuted;
// Mute video
mediaController.media.muteVideo(true);
// Get the status
mediaController.media.status;
// Update the status
mediaController.media.status = UserMediaStatus.Initial;
// Stop the `MediaStream`
await mediaController.media.release();
// Later we can make changes to the media on-demand without initiating a new gUM call
// Turn off background blur
await mediaController.media.applyConstraints({
video: {videoSegmentation: 'none'},
});
// Turn on background blur
await mediaController.media.applyConstraints({
video: {videoSegmentation: 'blur'},
});
// Use our own noise suppression
await mediaController.media.applyConstraints({
audio: {denoise: true, noiseSuppression: false},
});
// Use built-in noise suppression
await mediaController.media.applyConstraints({
audio: {denoise: false, noiseSuppression: true},
});
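The same pattern extends to the other segmentation effects documented in InputConstraintSet above, e.g. background replacement:

// Switch to background replacement using the configured image
await mediaController.media.applyConstraints({
  video: {
    videoSegmentation: 'overlay',
    backgroundImageUrl: states.backgroundImageUrl,
  },
});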