agora-rte-extension - npm Package Compare versions

Comparing version 1.0.23 to 1.1.0

CHANGELOG.md

dist/agora-rte-extension.d.ts

```diff
@@ -154,2 +154,3 @@ declare interface AgoraApiExecutor<T> {
     protected abstract _createProcessor(): T;
+    checkCompatibility?(): boolean;
     createProcessor(): T;
@@ -254,2 +255,3 @@ }
     createProcessor(): T;
+    checkCompatibility?(): boolean;
 }
```
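
The notable API change in 1.1.0 is the new optional `checkCompatibility` method on `Extension`. A sketch of how an app might use it to guard registration, assuming the `AgoraRTC` entry points shown in the README below (`YourVideoExtension` is a hypothetical extension class):

```typescript
const extension = new YourVideoExtension();

// checkCompatibility is optional, so probe before calling it
if (extension.checkCompatibility?.() ?? true) {
  AgoraRTC.registerExtensions([extension]);
} else {
  console.warn("Extension is not supported in this browser; skipping registration.");
}
```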

package.json

```diff
 {
   "name": "agora-rte-extension",
-  "version": "1.0.23",
+  "version": "1.1.0",
   "description": "Agora RTE Extension",
@@ -11,3 +11,4 @@ "main": "./dist/agora-rte-extension.js",
     "dev": "webpack --mode=development --watch",
-    "test": "npx karma start"
+    "test": "npx karma start",
+    "wb": "rm -rf ./dist-white-brand && yarn build && wb"
   },
@@ -36,7 +37,8 @@ "author": "",
     "webpack": "^5.65.0",
-    "webpack-cli": "^4.9.1"
+    "webpack-cli": "^4.9.1",
+    "white-brand-cli": "^1.2.3"
   },
   "files": [
     "dist",
-    "WRITING_AN_EXTENSION.md"
+    "CHANGELOG.md"
   ],
```

README.md

[TOC]
# Agora-RTE-Extension

An extension system for Agora Web SDK NG.

Agora RTE Extension provides the ability for extension developers to interact with Agora RTC SDK NG's VideoTrack and AudioTrack objects, making video and audio processing possible.

By receiving a `MediaStreamTrack` or `AudioNode` as input, running a custom processing procedure such as a `WASM` module or an `AudioWorkletNode`, and finally generating a processed `MediaStreamTrack` or `AudioNode`, it constructs a media processing pipeline that allows custom media processing provided by developers.

## Installation

```shell
npm install --save agora-rte-extension
```

When using `Yarn`:

```shell
yarn add agora-rte-extension
```

## How Extension and Processor Interact With Agora RTC SDK NG

A `Processor` connects to other `Processor`s with the `pipe` method:

```typescript
processorA.pipe(processorB);
```

The `pipe` method returns the `Processor` passed in as its parameter, enabling a function-chaining style:

```typescript
// processor is actually processorB
const processor = processorA.pipe(processorB);
// function chaining
processorA.pipe(processorB).pipe(processorC);
```

On AgoraRTC SDK NG v4.10.0 and later, the `ILocalVideoTrack` and `ILocalAudioTrack` objects also have a `pipe` method:

```typescript
const localVideoTrack = await AgoraRTC.createCameraVideoTrack();
localVideoTrack.pipe(videoProcessor);
```

To make the processed media render locally and transmit through WebRTC, the `processorDestination` property on `ILocalVideoTrack` and `ILocalAudioTrack` has to be the final processor in the pipeline:

```typescript
localVideoTrack.pipe(videoProcessor).pipe(localVideoTrack.processorDestination);
```

----

An `Extension` receives injected utility functionality such as `logger` and `reporter` during the `AgoraRTC.registerExtensions` call:

```typescript
AgoraRTC.registerExtensions([videoExtension, audioExtension]);
```

`Extension` also provides the `createProcessor` method for constructing a `Processor` instance:

```typescript
const videoProcessor = videoExtension.createProcessor();
```

----

To wrap it up:

```typescript
const videoExtension = new VideoExtension();
AgoraRTC.registerExtensions([videoExtension]);
const localVideoTrack = await AgoraRTC.createCameraVideoTrack();
const videoProcessor = videoExtension.createProcessor();
localVideoTrack.pipe(videoProcessor).pipe(localVideoTrack.processorDestination);
```

## Extension and Processor APIs for extension developers

### Extension

#### `Extension._createProcessor`

Abstract class `Extension` has one abstract method, `_createProcessor`, that needs to be implemented:

```typescript
abstract class Extension<T extends BaseProcessor> {
  abstract _createProcessor(): T;
}
```

When implemented, it should return a `VideoProcessor` or `AudioProcessor` instance. When an `AgoraRTC` developer calls `extension.createProcessor()`, it returns the processor created by `_createProcessor`.

#### `Extension.checkCompatibility`

Abstract class `Extension` has one optional abstract public method, `checkCompatibility`, that may be implemented:

```typescript
abstract class Extension<T extends BaseProcessor> {
  public abstract checkCompatibility?(): boolean;
}
```

When implemented, it should return a `boolean` value indicating whether the extension can run in the current browser environment.
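
For example, a WASM-based video extension might verify the browser features it depends on before `AgoraRTC` attempts to use it. A minimal sketch — the specific feature tests and the `YourVideoProcessor` class are illustrative assumptions, not part of the API:

```typescript
class YourVideoExtension extends Extension<YourVideoProcessor> {
  protected _createProcessor(): YourVideoProcessor {
    return new YourVideoProcessor();
  }

  public checkCompatibility(): boolean {
    // e.g. a WASM-based extension cannot run without WebAssembly support,
    // and the canvas-based output shown later needs captureStream
    return typeof WebAssembly !== "undefined" &&
      typeof HTMLCanvasElement.prototype.captureStream === "function";
  }
}
```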

### VideoProcessor

#### `VideoProcessor.name`

Abstract property `name` on `VideoProcessor` has to be implemented in order to name the processor:

```typescript
abstract name: string;
```

#### `VideoProcessor.onPiped`

Abstract optional method `onPiped` may be implemented in order to be notified when the processor is connected to a pipeline with an `ILocalVideoTrack` as its source:

```typescript
abstract onPiped?(context: IProcessorContext): void;
```

It will only be called when an `ILocalVideoTrack` object from `AgoraRTC` is connected to the pipeline, or when the processor is connected to a pipeline that has an `ILocalVideoTrack` as its source.

> For a pipeline without an `ILocalVideoTrack` as its source, the `onPiped` method will not be called for processors belonging to that pipeline until an `ILocalVideoTrack` is connected to it.

```typescript
videoTrack.pipe(processor); // will be called
processorA.pipe(processorB); // will NOT be called
videoTrack.pipe(processorA); // will be called for both processorA and processorB
```

#### `VideoProcessor.onUnpiped`

Abstract optional method `onUnpiped` may be implemented in order to be notified when the processor is disconnected from a pipeline:

```typescript
abstract onUnpiped?(): void;
```

#### `VideoProcessor.onTrack`

Abstract optional method `onTrack` may be implemented in order to be notified when the previous processor or `ILocalVideoTrack` feeds an output `MediaStreamTrack` to the current processor:

```typescript
abstract onTrack?(track: MediaStreamTrack, context: IProcessorContext): void;
```

#### `VideoProcessor.onEnableChange`

Abstract optional method `onEnableChange` may be implemented in order to be notified when the processor's `_enabled` property has changed:

```typescript
abstract onEnableChange?(enabled: boolean): void | Promise<void>;
```

An `AgoraRTC` developer calling `processor.enable()` and `processor.disable()` may change the `_enabled` property and consequently trigger `onEnableChange`, but enabling an already enabled processor or disabling an already disabled processor will not.
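
A common pattern is to bypass processing while disabled, for example by forwarding the unprocessed input track. A minimal sketch using the `inputTrack`, `context`, and `output` members described below (the pass-through behavior is an assumption about your processor, not mandated by the API):

```typescript
class YourVideoProcessor extends VideoProcessor {
  public name = "YourVideoProcessor";

  onEnableChange(enabled: boolean): void {
    // when disabled, forward the original input track unchanged;
    // when re-enabled, your processing loop resumes emitting processed frames
    if (!enabled && this.inputTrack && this.context) {
      this.output(this.inputTrack, this.context);
    }
  }
}
```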

#### `VideoProcessor._enabled`

Property `_enabled` describes the enabled status of the current processor:

```typescript
protected _enabled: boolean = true;
```

It defaults to `true`, but can be changed inside the processor constructor:

```typescript
class CustomProcessor extends VideoProcessor {
  public constructor() {
    super();
    this._enabled = false;
  }
}
```

Other than that, it should not be modified directly.

#### `VideoProcessor.enabled`

Getter `enabled` describes the enabled status of the current processor:

```typescript
public get enabled(): boolean;
```

#### `VideoProcessor.inputTrack`

Optional property `inputTrack` will be set when the previous processor or `ILocalVideoTrack` feeds an output track to the current processor:

```typescript
protected inputTrack?: MediaStreamTrack;
```

#### `VideoProcessor.outputTrack`

Optional property `outputTrack` will be set when the current processor calls `output()` to generate an output `MediaStreamTrack`:

```typescript
protected outputTrack?: MediaStreamTrack;
```

#### `VideoProcessor.ID`

Readonly property `ID` is a random ID for the current processor instance:

```typescript
public readonly ID: string;
```

#### `VideoProcessor.kind`

Getter `kind` describes the current processor's kind, which is either `audio` or `video`:

```typescript
public get kind(): 'video' | 'audio';
```

#### `VideoProcessor.context`

Optional property `context` is the current processor's `IProcessorContext`:

```typescript
protected context?: IProcessorContext;
```

#### `VideoProcessor.output`

Method `output` should be called when the processor is about to generate a processed `MediaStreamTrack`:

```typescript
output(track: MediaStreamTrack, context: IProcessorContext): void;
```
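
For instance, a processor that receives a track in `onTrack`, processes it, and emits the result might look like the following sketch (`process` is a placeholder standing in for your own processing logic):

```typescript
class YourVideoProcessor extends VideoProcessor {
  public name = "YourVideoProcessor";

  onTrack(track: MediaStreamTrack, context: IProcessorContext) {
    // ...processing — `process` stands in for your own logic
    const processedTrack = this.process(track);
    // emit the processed MediaStreamTrack downstream
    this.output(processedTrack, context);
  }

  private process(track: MediaStreamTrack): MediaStreamTrack {
    return track; // pass-through placeholder
  }
}
```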

### AudioProcessor

`AudioProcessor` shares almost all properties and methods with `VideoProcessor`, with one exception: `AudioProcessor`'s processor context is an `IAudioProcessorContext`. It also has several additions:

#### `AudioProcessor.onNode`

Abstract optional method `onNode` may be implemented in order to be notified when the previous processor or `ILocalAudioTrack` feeds an output `AudioNode` to the current audio processor:

```typescript
abstract onNode?(node: AudioNode, context: IAudioProcessorContext): void;
```

#### `AudioProcessor.output`

Method `output` should be called when the audio processor is about to generate a processed `MediaStreamTrack` or `AudioNode`:

```typescript
output(track: MediaStreamTrack | AudioNode, context: IProcessorContext): void;
```

#### `AudioProcessor.inputNode`

Optional property `inputNode` will be set when the previous processor or `ILocalAudioTrack` feeds an output audio node to the current processor:

```typescript
protected inputNode?: AudioNode;
```

#### `AudioProcessor.outputNode`

Optional property `outputNode` will be set when the current processor calls `output()` to generate an output `AudioNode`:

```typescript
protected outputNode?: AudioNode;
```

### ProcessorContext

`ProcessorContext` provides the ability to interact with the processing pipeline's source, which is an `ILocalVideoTrack` or `ILocalAudioTrack`, and possibly affect media capture.

`ProcessorContext` will be assigned to the processor once the processor is connected to a pipeline that has an `ILocalVideoTrack` or `ILocalAudioTrack` as its source.

#### `ProcessorContext.requestApplyConstraints`

Method `requestApplyConstraints` provides the ability to change the `MediaTrackConstraints` used for getting the pipeline source's `MediaStreamTrack`:

```typescript
public requestApplyConstraints(constraints: MediaTrackConstraints, processor: IVideoProcessor): Promise<void>;
```

Constraints supplied in `requestApplyConstraints` will be merged with the original constraints used for creating the `ICameraVideoTrack`. If several processors inside the same pipeline all request to apply additional constraints, the pipe order will be considered when building the final constraints.

#### `ProcessorContext.requestRevertConstraints`

Method `requestRevertConstraints` provides the ability to revert a previous constraints request made with `requestApplyConstraints`:

```typescript
public requestRevertConstraints(processor: IVideoProcessor): void;
```
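
For example, a processor that needs a minimum capture resolution might request it when piped and revert it when unpiped. A minimal sketch (the 720p figure is an illustrative assumption):

```typescript
class YourVideoProcessor extends VideoProcessor {
  public name = "YourVideoProcessor";

  onPiped(context: IProcessorContext): void {
    // ask the source camera track for at least 720p frames
    context.requestApplyConstraints(
      { width: { min: 1280 }, height: { min: 720 } },
      this
    );
  }

  onUnpiped(): void {
    if (this.context) {
      // restore the constraints the track was originally created with
      this.context.requestRevertConstraints(this);
    }
  }
}
```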

### AudioProcessorContext

`AudioProcessorContext` inherits all the methods provided by `ProcessorContext`, with one addition: `getAudioContext`.

#### `getAudioContext`

Method `getAudioContext` provides the ability to get the `AudioContext` object of the current pipeline:

```typescript
public getAudioContext(): AudioContext;
```

### Ticker

`Ticker` is a utility class that helps with periodic tasks. It provides a simple interface for choosing a periodic task implementation, adding/removing a task, and starting/stopping it.

#### `new Ticker`

The `Ticker` constructor requires a ticker type and a tick interval as parameters:

```typescript
class Ticker {
  public constructor(type: "Timer" | "RAF" | "Oscillator", interval: number);
}
```

`Ticker` has three implementations to choose from:

- `Timer`: uses `setTimeout` as the internal timer.
- `RAF`: uses `requestAnimationFrame` as the internal timer. Most users should choose this type of `Ticker`, as it provides the best rendering performance.
- `Oscillator`: uses `WebAudio`'s `OscillatorNode` as the internal timer. It can keep running even when the browser tab is not focused.

`interval` sets the time between callbacks. It is best-effort timing, not exact timing.

#### `Ticker.add`

`Ticker.add` adds a task to the ticker:

```typescript
public add(fn: Function): void;
```

#### `Ticker.remove`

`Ticker.remove` removes the task previously added to the ticker:

```typescript
public remove(): void;
```

#### `Ticker.start`

`Ticker.start` starts running the added task with the configured ticker type and interval:

```typescript
public start(): void;
```

#### `Ticker.stop`

`Ticker.stop` stops the previously added task:

```typescript
public stop(): void;
```
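
Putting the `Ticker` API together, a frame loop might be driven like this sketch (the ~30 fps interval and the `processFrame` task are illustrative assumptions):

```typescript
// tick roughly every 33 ms (~30 fps); the interval unit is assumed to be milliseconds
const ticker = new Ticker("RAF", 33);

ticker.add(() => processFrame()); // processFrame is your own periodic task
ticker.start();

// later, when processing should halt:
ticker.stop();
ticker.remove();
```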

### Logger

`Logger` is a global utility singleton that helps with logging. It provides four log levels for logging to the console. When the extension has been registered with `AgoraRTC.registerExtensions` and the `AgoraRTC` developer chooses to upload logs, extension logs written with `Logger` will also be uploaded.

#### `Logger.info`, `Logger.debug`, `Logger.warning`, `Logger.error`

These methods log with different levels:

```typescript
public info(...args: any[]): void;
public debug(...args: any[]): void;
public warning(...args: any[]): void;
public error(...args: any[]): void;
```
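
Usage is analogous to `console`, assuming `logger` refers to the instance injected into your `Extension` during registration (the messages are examples):

```typescript
logger.info("processor piped, id:", processor.ID);
logger.warning("AudioWorklet unavailable, falling back to a Timer ticker");
logger.error("failed to load WASM module:", err);
```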

### Reporter

`Reporter` is a global utility singleton that helps with event reporting to the Agora analysis platform.

#### `Reporter.reportApiInvoke`

`Reporter.reportApiInvoke` can report a public API invocation event to the Agora analysis platform:

```typescript
interface ReportApiInvokeParams {
  name: string;
  options: any;
  reportResult?: boolean;
  timeout?: number;
}
interface AgoraApiExecutor<T> {
  onSuccess: (result: T) => void;
  onError: (err: Error) => void;
}
public reportApiInvoke<T>(params: ReportApiInvokeParams): AgoraApiExecutor<T>;
```

It accepts `ReportApiInvokeParams` as its parameter:

- `ReportApiInvokeParams.name`: the name of the public API
- `options`: the arguments, or any other options related to this API invocation
- `reportResult`: whether to report the API invocation result
- `timeout`: specifies how long the `Reporter` waits before it considers the API call timed out

It returns an `AgoraApiExecutor` with two callback methods, `onSuccess` and `onError`, which should be called when the API call succeeds or fails accordingly.
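
A sketch of reporting a public API call, assuming `reporter` refers to the instance injected into your `Extension` during registration (`setBeautyOptions` and `applyOptionsInternally` are hypothetical names):

```typescript
async function setBeautyOptions(options: any): Promise<void> {
  const executor = reporter.reportApiInvoke<void>({
    name: "YourExtension.setBeautyOptions", // hypothetical API name
    options,
    reportResult: true,
    timeout: 5000, // assumed to be milliseconds
  });
  try {
    await applyOptionsInternally(options); // your own logic
    executor.onSuccess(undefined);
  } catch (err) {
    executor.onError(err as Error);
    throw err;
  }
}
```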

## Extending Extension

Extending an `Extension` is fairly straightforward, as we only need to implement the `_createProcessor` abstract method. Depending on your media type, which is either video or audio, you should extend `Extension` or `AudioExtension` accordingly:

```typescript
import {Extension, AudioExtension} from 'agora-rte-extension'

// for a video extension
class YourVideoExtension extends Extension<YourVideoProcessor> {
  protected _createProcessor(): YourVideoProcessor {
    return new YourVideoProcessor();
  }
}

// for an audio extension
class YourAudioExtension extends AudioExtension<YourAudioProcessor> {
  protected _createProcessor(): YourAudioProcessor {
    return new YourAudioProcessor();
  }
}
```

## Extending Processor

Depending on your media type, which is either video or audio, you should extend `VideoProcessor` or `AudioProcessor` accordingly. There are several abstract methods that can be implemented; they are called at different points in the processing pipeline.

### `onTrack` and `onNode`

The `onTrack` and `onNode` methods will be called when the previous processor or LocalTrack generates output. They are the main entry points for us to process media:

```typescript
import {VideoProcessor, AudioProcessor} from "agora-rte-extension";
import type {IProcessorContext, IAudioProcessorContext} from "agora-rte-extension";

class CustomVideoProcessor extends VideoProcessor {
  protected onTrack(track: MediaStreamTrack, context: IProcessorContext) {}
}

class CustomAudioProcessor extends AudioProcessor {
  protected onNode(node: AudioNode, context: IAudioProcessorContext) {}
}
```

### Video Processing

Typically, video processing requires extracting each video frame as `ImageData` or an `ArrayBuffer`. As `InsertableStream` is not yet supported by all browser vendors, we use the `canvas` API here to extract video frame data:

```typescript
class CustomVideoProcessor extends VideoProcessor {
  private canvas: HTMLCanvasElement;
  private ctx: CanvasRenderingContext2D;
  private videoElement: HTMLVideoElement;

  constructor() {
    super();
    // initialize canvas element
    this.canvas = document.createElement('canvas');
    this.canvas.width = 640; // the canvas's width and height will be your output video stream's dimensions
    this.canvas.height = 480;
    this.ctx = this.canvas.getContext('2d')!;
    // initialize video element
    this.videoElement = document.createElement('video');
    this.videoElement.muted = true;
  }

  onTrack(track: MediaStreamTrack, context: IProcessorContext) {
    // load the MediaStreamTrack into the HTMLVideoElement
    this.videoElement.srcObject = new MediaStream([track]);
    this.videoElement.play();
    // extract ImageData
    this.ctx.drawImage(this.videoElement, 0, 0);
    const imageData = this.ctx.getImageData(0, 0, this.canvas.width, this.canvas.height);
  }
}
```

As we can see, video frame data is only extracted once inside the `onTrack` method, but we need to run the extraction in a constant loop to output a constant frame rate. Luckily, we can leverage `requestAnimationFrame` to do this for us:

```typescript
class CustomVideoProcessor extends VideoProcessor {
  onTrack(track: MediaStreamTrack, context: IProcessorContext) {
    this.videoElement.srcObject = new MediaStream([track]);
    this.videoElement.play();
    this.loop();
  }

  loop() {
    this.ctx.drawImage(this.videoElement, 0, 0);
    const imageData = this.ctx.getImageData(0, 0, this.canvas.width, this.canvas.height);
    this.process(imageData);
    requestAnimationFrame(() => this.loop());
  }

  process(imageData: ImageData) {
    // your custom video processing logic
  }
}
```
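
Alternatively, the `Ticker` utility described above can drive the same loop, which makes it easy to switch to the `Oscillator` implementation when processing must continue in unfocused tabs. A sketch under that assumption, reusing the fields from the previous snippet:

```typescript
class CustomVideoProcessor extends VideoProcessor {
  // keep processing even when the tab is in the background
  private ticker = new Ticker("Oscillator", 33);

  onTrack(track: MediaStreamTrack, context: IProcessorContext) {
    this.videoElement.srcObject = new MediaStream([track]);
    this.videoElement.play();
    this.ticker.add(() => this.loop());
    this.ticker.start();
  }
}
```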

### Generating Video Processing Output

When we've done the video processing, the `Processor`'s `output` method should be used to generate the video output. The `output` method requires a `MediaStreamTrack` and an `IProcessorContext` as its parameters, so we will need to assemble the processed video buffer into a `MediaStreamTrack`. Usually `canvas`'s `captureStream` helps us with this:

```typescript
class CustomVideoProcessor extends VideoProcessor {
  doneProcessing() {
    // make a MediaStream from the canvas and get its MediaStreamTrack
    const msStream = this.canvas.captureStream(30);
    const outputTrack = msStream.getVideoTracks()[0];
    // output the processed track
    if (this.context) {
      this.output(outputTrack, this.context);
    }
  }
}
```

### Audio Processing

Audio processing differs from video processing in that it typically requires `WebAudio`'s capabilities to do custom audio processing.

We can implement the `onNode` method to receive a notification when the previous audio processor or `ILocalAudioTrack` generates an output `AudioNode`:

```typescript
class CustomAudioProcessor extends AudioProcessor {
  onNode(node: AudioNode, context: IAudioProcessorContext) {}
}
```

We can call `IAudioProcessorContext.getAudioContext` to get the `AudioContext` and create our own `AudioNode`:

```typescript
class CustomAudioProcessor extends AudioProcessor {
  onNode(node: AudioNode, context: IAudioProcessorContext) {
    // acquire the AudioContext
    const audioContext = context.getAudioContext();
    // create a custom GainNode
    const gainNode = audioContext.createGain();
  }
}
```

Also, don't forget to connect the input audio node to our custom audio node:

```typescript
class CustomAudioProcessor extends AudioProcessor {
  onNode(node: AudioNode, context: IAudioProcessorContext) {
    const audioContext = context.getAudioContext();
    const gainNode = audioContext.createGain();
    // connect the input node to our node
    node.connect(gainNode);
  }
}
```

### Generating Audio Processing Output

When we've done the audio processing, the `Processor`'s `output` method should be used to generate the audio output. The `output` method requires a `MediaStreamTrack` or `AudioNode` and an `IAudioProcessorContext` as its parameters:

```typescript
class CustomAudioProcessor extends AudioProcessor {
  onNode(node: AudioNode, context: IAudioProcessorContext) {
    const audioContext = context.getAudioContext();
    const gainNode = audioContext.createGain();
    node.connect(gainNode);
    // output the processed node
    this.output(gainNode, context);
  }
}
```
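
Pulling the pieces together, here is a minimal, self-contained sketch of a complete audio processor that attenuates volume and bypasses itself when disabled (the 0.5 gain value and the bypass-via-unity-gain approach are illustrative assumptions):

```typescript
import {AudioProcessor} from "agora-rte-extension";
import type {IAudioProcessorContext} from "agora-rte-extension";

class VolumeProcessor extends AudioProcessor {
  public name = "VolumeProcessor";
  private gainNode?: GainNode;

  onNode(node: AudioNode, context: IAudioProcessorContext) {
    const audioContext = context.getAudioContext();
    this.gainNode = audioContext.createGain();
    this.gainNode.gain.value = this.enabled ? 0.5 : 1;
    node.connect(this.gainNode);
    this.output(this.gainNode, context);
  }

  onEnableChange(enabled: boolean) {
    // a unity gain effectively bypasses the effect while keeping the graph intact
    if (this.gainNode) {
      this.gainNode.gain.value = enabled ? 0.5 : 1;
    }
  }
}
```
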
## Testing
WIP
## Best Practices
### Audio Graph Connecting
### Handling Enable and Disable
### Error Handling