@q42/lib-jitsi-meet - npm Package Compare versions

Comparing version 1.0.1 to 1.1.0

modules/e2ee/Context.js


doc/API.md

Jitsi Meet API
```html
<script src="https://meet.jit.si/libs/lib-jitsi-meet.min.js"></script>
```
Now you can access the Jitsi Meet API through the `JitsiMeetJS` global object.

Components
----------
You can access the following methods and objects through the `JitsiMeetJS` object.
* `JitsiMeetJS.init(options)` - this method initializes the Jitsi Meet API.
The `options` parameter is a JS object with the following properties:
- `useIPv6` - boolean property

- `disableAudioLevels` - boolean property. Enables/disables audio levels.

- `enableAnalyticsLogging` - boolean property (default false). Enables/disables analytics logging.
- `externalStorage` - Object that implements the Storage interface. If specified this object will be used for storing data instead of `localStorage`.
- `callStatsCustomScriptUrl` - (optional) custom URL to access the callstats client script
- `disableRtx` - (optional) boolean property (defaults to false). Enables/disables the use of RTX.
- `disabledCodec` - the mime type of the codec that should not be negotiated on the peerconnection.
- `preferredCodec` - the mime type of the codec that should be made the preferred codec for the connection.
- `disableH264` - __DEPRECATED__. Use `disabledCodec` instead.
- `preferH264` - __DEPRECATED__. Use `preferredCodec` instead.
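Taken together, the properties above form a plain configuration object. A minimal sketch (every value here is an illustrative assumption, not a recommended default):

```javascript
// Illustrative JitsiMeetJS.init() options object; all values are assumptions.
const initOptions = {
    useIPv6: false,
    disableAudioLevels: false,
    enableAnalyticsLogging: false,   // defaults to false per the list above
    disableRtx: false,               // optional, defaults to false
    disabledCodec: 'video/H264',     // mime type of a codec to keep out of negotiation
    preferredCodec: 'video/VP8'      // mime type of the codec to prefer
};

// Once lib-jitsi-meet is loaded, it would be passed as:
// JitsiMeetJS.init(initOptions);
```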
* `JitsiMeetJS.JitsiConnection` - the `JitsiConnection` constructor. You can use it to create a new server connection.
* `JitsiMeetJS.setLogLevel` - changes the log level for the library. For example, to have only error messages:
```javascript
JitsiMeetJS.setLogLevel(JitsiMeetJS.logLevels.ERROR);
```
* `JitsiMeetJS.createLocalTracks(options, firePermissionPromptIsShownEvent)` - Creates the media tracks and returns them through a `Promise` object. If rejected, passes a `JitsiTrackError` instance to the catch block.
- `options` - JS object with configuration options for the local media tracks. You can change the following properties there:
1. `devices` - array with the devices - "desktop", "video" and "audio" that will be passed to GUM. If that property is not set GUM will try to get all available devices.
2. `resolution` - the preferred resolution for the local video.
3. `constraints` - the preferred encoding properties for the created track (replaces 'resolution' in newer releases of browsers)
4. `cameraDeviceId` - the deviceID for the video device that is going to be used
5. `micDeviceId` - the deviceID for the audio device that is going to be used
6. `minFps` - the minimum frame rate for the video stream (passed to GUM)
7. `maxFps` - the maximum frame rate for the video stream (passed to GUM)
8. `desktopSharingFrameRate`
- `min` - Minimum fps
- `max` - Maximum fps
9. `desktopSharingSourceDevice` - The device id or label for a video input source that should be used for screensharing.
10. `facingMode` - facing mode for a camera (possible values - 'user', 'environment')
- `firePermissionPromptIsShownEvent` - optional boolean parameter. If set to `true`, `JitsiMediaDevicesEvents.PERMISSION_PROMPT_IS_SHOWN` will be fired when the browser shows the gUM permission prompt.
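A hedged sketch of how the options and the returned `Promise` fit together (the specific values and the logging are illustrative, not prescribed):

```javascript
// Illustrative options for JitsiMeetJS.createLocalTracks; values are assumptions.
const trackOptions = {
    devices: [ 'audio', 'video' ],               // passed to GUM
    resolution: 720,                             // preferred local video resolution
    facingMode: 'user',
    desktopSharingFrameRate: { min: 5, max: 30 }
};

// Usage sketch; requires lib-jitsi-meet to be loaded and initialized.
function createTracks() {
    return JitsiMeetJS.createLocalTracks(trackOptions, true)
        .then(tracks => tracks)   // resolves with an array of JitsiTrack objects
        .catch(error => {
            // On rejection, error is a JitsiTrackError instance.
            console.error('Failed to create local tracks:', error);
        });
}
```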
* `JitsiMeetJS.createTrackVADEmitter(localAudioDeviceId, sampleRate, vadProcessor)` - Creates a TrackVADEmitter service that connects an audio track to a VAD (voice activity detection) processor in order to obtain VAD scores for individual PCM audio samples.
- `localAudioDeviceId` - The target local audio device.
- `sampleRate` - Sample rate at which the emitter will operate. Possible values: 256, 512, 1024, 4096, 8192, 16384. Other values will default to the closest neighbor. For example, a value of 4096 means the emitter processes bundles of 4096 PCM samples at a time; higher values mean longer but fewer calls, lower values mean more but shorter calls.
- `vadProcessor` - VAD processor that does the actual computation on a PCM sample. The processor needs to implement the following functions:
- `getSampleLength()` - Returns the sample size accepted by calculateAudioFrameVAD.
- `getRequiredPCMFrequency()` - Returns the PCM frequency at which the processor operates, e.g. 16 kHz, 44.1 kHz, etc.
- `calculateAudioFrameVAD(pcmSample)` - Processes a 32-bit float PCM sample of getSampleLength size.
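The vadProcessor contract can be satisfied by a plain object. A minimal, self-contained stub (the sample length, frequency, and constant score are assumptions for illustration; a real processor would run an actual VAD model):

```javascript
// Minimal stub implementing the vadProcessor interface described above.
const stubVadProcessor = {
    // Sample size accepted by calculateAudioFrameVAD.
    getSampleLength() {
        return 4096;
    },
    // PCM frequency the processor operates at, in Hz.
    getRequiredPCMFrequency() {
        return 16000;
    },
    // Processes one bundle of 32-bit float PCM samples. A real processor
    // would compute a voice-activity score here; this stub returns a
    // constant score for correctly sized input.
    calculateAudioFrameVAD(pcmSample) {
        return pcmSample.length === this.getSampleLength() ? 0.5 : 0;
    }
};

// It would then be passed as the third argument:
// JitsiMeetJS.createTrackVADEmitter(localAudioDeviceId, 4096, stubVadProcessor);
```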
* `JitsiMeetJS.enumerateDevices(callback)` - __DEPRECATED__. Use `JitsiMeetJS.mediaDevices.enumerateDevices(callback)` instead.
* `JitsiMeetJS.isDeviceChangeAvailable(deviceType)` - __DEPRECATED__. Use `JitsiMeetJS.mediaDevices.isDeviceChangeAvailable(deviceType)` instead.
* `JitsiMeetJS.isDesktopSharingEnabled()` - returns true if desktop sharing is supported and false otherwise. NOTE: this method can only be used after `JitsiMeetJS.init(options)` is completed; otherwise the result will always be null.
* `JitsiMeetJS.getActiveAudioDevice()` - goes through all audio devices on the system and returns information about one that is active, i.e. has audio signal. Returns a Promise resolving to an Object with the following structure:
- `deviceId` - string containing the device ID of the audio track found as active.
- `deviceLabel` - string containing the label of the audio device.
* `JitsiMeetJS.getGlobalOnErrorHandler()` - returns a function that can be attached to window.onerror; if options.enableWindowOnErrorHandler is enabled, returns the function used by the library (function(message, source, lineno, colno, error)).
* `JitsiMeetJS.mediaDevices` - JS object that contains methods for interaction with media devices. The following methods are available:
- `isDeviceListAvailable()` - returns true if retrieving the device list is supported and false otherwise
- `isDeviceChangeAvailable(deviceType)` - returns true if changing the input (camera / microphone) or output (audio) device is supported and false if not. `deviceType` is a type of device to change. Undefined or 'input' stands for input devices, 'output' - for audio output devices.
- `enumerateDevices(callback)` - returns the list of available devices as a parameter to the callback function. Every device is a MediaDeviceInfo object with the following properties:
- `label` - the name of the device
- `kind` - "audioinput", "videoinput" or "audiooutput"
- `deviceId` - the id of the device
- `groupId` - group identifier, two devices have the same group identifier if they belong to the same physical device; for example a monitor with both a built-in camera and microphone
- `setAudioOutputDevice(deviceId)` - sets current audio output device. `deviceId` - id of 'audiooutput' device from `JitsiMeetJS.enumerateDevices()`, '' is for default device.
- `getAudioOutputDevice()` - returns currently used audio output device id, '' stands for default device.
- `isDevicePermissionGranted(type)` - returns a Promise which resolves to true if the user granted permission to media devices. `type` - 'audio', 'video' or `undefined`. In case of `undefined`, checks whether both audio and video permissions were granted.
- `addEventListener(event, handler)` - attaches an event handler.
- `removeEventListener(event, handler)` - removes an event handler.
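As a small worked example of consuming the device list above, here is a self-contained helper that groups MediaDeviceInfo-like objects by kind (the helper and its sample data are illustrative, not part of the library):

```javascript
// Groups MediaDeviceInfo-like objects by their `kind`, as delivered to the
// enumerateDevices(callback) described above.
function groupDevicesByKind(devices) {
    const groups = { audioinput: [], videoinput: [], audiooutput: [] };
    for (const device of devices) {
        if (groups[device.kind]) {
            groups[device.kind].push(device.label || device.deviceId);
        }
    }
    return groups;
}

// Usage sketch with the real API (requires the library):
// JitsiMeetJS.mediaDevices.enumerateDevices(devices => {
//     console.log(groupDevicesByKind(devices));
// });
```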
* `JitsiMeetJS.events` - JS object that contains all events used by the API. You will need this object when you subscribe to connection or conference events.
We have two event types - connection and conference. You can access the events with the following code: `JitsiMeetJS.events.<event_type>.<event_name>`.
For example, for the conference event fired when somebody leaves the conference, use `JitsiMeetJS.events.conference.USER_LEFT`.
We support the following events:
1. `conference`
- `TRACK_ADDED` - stream received. (parameters - JitsiTrack)
- `TRACK_REMOVED` - stream removed. (parameters - JitsiTrack)
- `TRACK_MUTE_CHANGED` - JitsiTrack was muted or unmuted. (parameters - JitsiTrack)
- `TRACK_AUDIO_LEVEL_CHANGED` - audio level of JitsiTrack has changed. (parameters - participantId(string), audioLevel(number))
- `DOMINANT_SPEAKER_CHANGED` - the dominant speaker is changed. (parameters - id(string))
- `USER_JOINED` - new user joined a conference. (parameters - id(string), user(JitsiParticipant))
- `USER_LEFT` - a participant left conference. (parameters - id(string), user(JitsiParticipant))
- `MESSAGE_RECEIVED` - new text message received. (parameters - id(string), text(string), ts(number))
- `DISPLAY_NAME_CHANGED` - user has changed his display name. (parameters - id(string), displayName(string))
- `SUBJECT_CHANGED` - notifies that subject of the conference has changed (parameters - subject(string))
- `LAST_N_ENDPOINTS_CHANGED` - last n set was changed (parameters - leavingEndpointIds(array) ids of users leaving lastN, enteringEndpointIds(array) ids of users entering lastN)
- `CONFERENCE_JOINED` - notifies the local user that he joined the conference successfully. (no parameters)
- `CONFERENCE_LEFT` - notifies the local user that he left the conference successfully. (no parameters)
- `DTMF_SUPPORT_CHANGED` - notifies if at least one user supports DTMF. (parameters - supports(boolean))
- `USER_ROLE_CHANGED` - notifies that role of some user changed. (parameters - id(string), role(string))
- `USER_STATUS_CHANGED` - notifies that status of some user changed. (parameters - id(string), status(string))
- `CONFERENCE_FAILED` - notifies that user failed to join the conference. (parameters - errorCode(JitsiMeetJS.errors.conference))
- `CONFERENCE_ERROR` - notifies that error occurred. (parameters - errorCode(JitsiMeetJS.errors.conference))
- `KICKED` - notifies that user has been kicked from the conference.
- `START_MUTED_POLICY_CHANGED` - notifies that all new participants will join with muted audio/video stream (parameters - JS object with 2 properties - audio(boolean), video(boolean))
- `STARTED_MUTED` - notifies that the local user has started muted
- `CONNECTION_STATS` - __DEPRECATED__. Use `JitsiMeetJS.connectionQuality.LOCAL_STATS_UPDATED` instead.
- `BEFORE_STATISTICS_DISPOSED` - fired just before the statistics module is disposed and it's the last chance to submit some logs to the statistics service, before it gets disconnected
- `AUTH_STATUS_CHANGED` - notifies that authentication is enabled or disabled, or local user authenticated (logged in). (parameters - isAuthEnabled(boolean), authIdentity(string))
- `ENDPOINT_MESSAGE_RECEIVED` - notifies that a new message
from another participant is received on a data channel.
- `TALK_WHILE_MUTED` - notifies that a local user is talking while having the microphone muted.
- `NO_AUDIO_INPUT` - notifies that the currently selected input device has no signal.
- `AUDIO_INPUT_STATE_CHANGE` - notifies that the conference audio input switched between states, i.e. with or without audio input.
- `NOISY_MIC` - notifies that the current microphone used by the conference is noisy.
- `PARTICIPANT_PROPERTY_CHANGED` - notifies that user has changed his custom participant property. (parameters - user(JitsiParticipant), propertyKey(string), oldPropertyValue(string), propertyValue(string))
2. `connection`
- `CONNECTION_FAILED` - indicates that the server connection failed.
- `CONNECTION_ESTABLISHED` - indicates that we have successfully established server connection.
- `CONNECTION_DISCONNECTED` - indicates that we are disconnected.
- `WRONG_STATE` - indicates that the user has performed an action that can't be executed because the connection is in the wrong state.
3. `detection`
- `VAD_SCORE_PUBLISHED` - event generated by a TrackVADEmitter when it has computed a VAD score for an audio PCM sample.
4. `track`
- `LOCAL_TRACK_STOPPED` - indicates that a local track was stopped. This
event can be fired when `dispose()` method is called or for other reasons.
- `TRACK_AUDIO_OUTPUT_CHANGED` - indicates that audio output device for track was changed (parameters - deviceId (string) - new audio output device ID).
5. `mediaDevices`
- `DEVICE_LIST_CHANGED` - indicates that list of currently connected devices has changed (parameters - devices(MediaDeviceInfo[])).
- `PERMISSION_PROMPT_IS_SHOWN` - Indicates that the environment is currently showing a permission prompt to access camera and/or microphone (parameters - environmentType ('chrome'|'opera'|'firefox'|'safari'|'nwjs'|'react-native'|'android')).
6. `connectionQuality`
- `LOCAL_STATS_UPDATED` - New local connection statistics are received. (parameters - stats(object))
- `REMOTE_STATS_UPDATED` - New remote connection statistics are received. (parameters - id(string), stats(object))
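A sketch of subscribing to a few of the conference events above (the handler bodies are placeholders, and `conference.on` is assumed as the usual shorthand for `addEventListener` on a JitsiConference instance):

```javascript
// Subscribes to a handful of the conference events listed above.
// Requires lib-jitsi-meet to be loaded and a JitsiConference instance.
function subscribeToConferenceEvents(conference) {
    const events = JitsiMeetJS.events.conference;

    conference.on(events.USER_JOINED, (id, user) => {
        console.log(`user joined: ${id}`);
    });
    conference.on(events.USER_LEFT, id => {
        console.log(`user left: ${id}`);
    });
    conference.on(events.TRACK_ADDED, track => {
        console.log('track added');
    });
}
```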
* `JitsiMeetJS.errors` - JS object that contains all errors used by the API. You can use this object to check the reported errors from the API.
We have three error types - connection, conference and track. You can access the errors with the following code: `JitsiMeetJS.errors.<error_type>.<error_name>`.
For example, for the conference error fired when a password is required, use `JitsiMeetJS.errors.conference.PASSWORD_REQUIRED`.
We support the following errors:
1. `conference`
- `CONNECTION_ERROR` - the connection with the conference is lost.
- `SETUP_FAILED` - conference setup failed
- `AUTHENTICATION_REQUIRED` - user must be authenticated to create this conference
- `PASSWORD_REQUIRED` - this error can be passed when the connection to the conference failed. You should try to join the conference with a password.
- `PASSWORD_NOT_SUPPORTED` - indicates that conference cannot be locked
- `VIDEOBRIDGE_NOT_AVAILABLE` - video bridge issues.
- `RESERVATION_ERROR` - error in reservation system
- `GRACEFUL_SHUTDOWN` - graceful shutdown
- `JINGLE_FATAL_ERROR` - error in jingle (the original error is attached as a parameter).
- `CONFERENCE_DESTROYED` - conference has been destroyed
- `CHAT_ERROR` - chat error happened
- `FOCUS_DISCONNECTED` - focus error happened
- `FOCUS_LEFT` - focus left the conference
- `CONFERENCE_MAX_USERS` - The maximum users limit has been reached
2. `connection`
- `CONNECTION_DROPPED_ERROR` - indicates that the connection was dropped with an error which was most likely caused by some networking issues.
- `PASSWORD_REQUIRED` - passed when the connection to the server failed. You should try to authenticate with a password.
- `SERVER_ERROR` - indicates too many 5XX errors were received from the server.
- `OTHER_ERROR` - all other errors
3. `track`
- `GENERAL` - generic getUserMedia-related error.
- `UNSUPPORTED_RESOLUTION` - getUserMedia-related error, indicates that requested video resolution is not supported by camera.
- `PERMISSION_DENIED` - getUserMedia-related error, indicates that user denied permission to share requested device.
- `NOT_FOUND` - getUserMedia-related error, indicates that requested device was not found.
- `CONSTRAINT_FAILED` - getUserMedia-related error, indicates that some of requested constraints in getUserMedia call were not satisfied.
- `TRACK_IS_DISPOSED` - an error which indicates that the track has already been disposed and can no longer be used.
- `TRACK_NO_STREAM_FOUND` - an error which indicates that track has no MediaStream associated.
- `SCREENSHARING_GENERIC_ERROR` - generic error for screensharing.
- `SCREENSHARING_USER_CANCELED` - an error which indicates that the user canceled the screen sharing window selection dialog.
* `JitsiMeetJS.errorTypes` - constructors for Error instances that can be produced by the library. Useful for checks like `error instanceof JitsiMeetJS.errorTypes.JitsiTrackError`. The following errors are available:
1. `JitsiTrackError` - Error that happened to a JitsiTrack.
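Combining the track error constants and `errorTypes` above, a sketch of classifying a `createLocalTracks` rejection (this assumes the error constant is exposed on the error's `name` property; treat that as an assumption, not documented behavior):

```javascript
// Classifies a failure using the track error constants and errorTypes
// described above. Reading the constant from error.name is an assumption.
function describeTrackError(error) {
    if (error instanceof JitsiMeetJS.errorTypes.JitsiTrackError) {
        const trackErrors = JitsiMeetJS.errors.track;
        switch (error.name) {
            case trackErrors.PERMISSION_DENIED:
                return 'user denied device permission';
            case trackErrors.NOT_FOUND:
                return 'requested device not found';
            default:
                return 'generic getUserMedia error';
        }
    }
    return 'unknown error';
}
```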
* `JitsiMeetJS.logLevels` - object with the log levels:
1. `TRACE`
2. `DEBUG`
3. `INFO`
4. `LOG`
5. `WARN`
6. `ERROR`
JitsiConnection
------------
This object represents the server connection. You can create a new `JitsiConnection` object with the constructor `JitsiMeetJS.JitsiConnection`. `JitsiConnection` has the following methods:
1. `JitsiConnection(appID, token, options)` - constructor. Creates the connection object.
- `appID` - identification for the provider of Jitsi Meet video conferencing services. **NOTE: not implemented yet. You can safely pass `null`**
- `token` - secret generated by the provider of Jitsi Meet video conferencing services. The token will be sent to the provider from the Jitsi Meet server deployment for authorization of the current client.
- `options` - JS object with configuration options for the server connection. You can change the following properties there:
1. `serviceUrl` - XMPP service URL. For example 'wss://server.com/xmpp-websocket' for Websocket or '//server.com/http-bind' for BOSH.
2. `bosh` - DEPRECATED, use serviceUrl to specify either BOSH or Websocket URL.
3. `hosts` - JS Object
- `domain`
- `muc`
- `anonymousdomain`
4. `enableLipSync` - (optional) boolean property which enables the lipsync feature. Currently works only in Chrome and is disabled by default.
5. `clientNode` - The name of client node advertised in XEP-0115 'c' stanza
2. `connect(options)` - establish server connection
- `options` - JS Object with `id` and `password` properties.
3. `disconnect()` - destroys the server connection
4. `initJitsiConference(name, options)` - creates new `JitsiConference` object.
- `name` - the name of the conference
- `options` - JS object with configuration options for the conference. You can change the following properties there:
    - `openBridgeChannel` - Enables/disables bridge channel. Values can be "datachannel", "websocket", true (treated as "datachannel"), undefined (treated as "datachannel") and false (don't open any channel). **NOTE: we recommend setting this option to true**
- `recordingType` - the type of recording to be used
- `callStatsID` - callstats credentials
- `callStatsSecret` - callstats credentials
- `enableTalkWhileMuted` - boolean property. Enables/disables talk while muted detection, by default the value is false/disabled.
- `ignoreStartMuted` - ignores start muted events coming from jicofo.
    - `startSilent` - enables silent mode; audio will be marked as inactive and will not be sent or received
    - `confID` - Used for statistics to identify the conference; if tenants are supported, it will contain the tenant and the non-lowercase variant of the room name.
    - `siteID` - (optional) Used for statistics to identify the site where the user is coming from; if tenants are supported it will contain a unique identifier for that tenant. If not provided, the value will be inferred from `confID`.
- `statisticsId` - The id to be used as stats instead of default callStatsUsername.
- `statisticsDisplayName` - The display name to be used for stats, used for callstats.
- `focusUserJid` - The real JID of focus participant - can be overridden here
- `enableNoAudioDetection`
- `enableNoisyMicDetection`
- `enableRemb`
- `enableTcc`
- `useRoomAsSharedDocumentName`
- `channelLastN`
- `startBitrate`
- `stereo`
    - `forceJVB121Ratio` - "Math.random() < forceJVB121Ratio" determines whether a two-person conference should be moved to the JVB instead of P2P. The decision is made on the responder side, after ICE succeeds on the P2P connection.
- `hiddenDomain`
- `startAudioMuted`
- `startVideoMuted`
- `enableLayerSuspension` - if set to 'true', we will cap the video send bitrate when we are told we have not been selected by any endpoints (and therefore the non-thumbnail streams are not in use).
- `deploymentInfo`
- `shard`
- `userRegion`
- `p2p` - Peer to peer related options
        - `enabled` - enables or disables the peer-to-peer connection; if disabled, all media will be routed through the Jitsi Videobridge.
- `stunServers` - list of STUN servers e.g. `{ urls: 'stun:meet-jit-si-turnrelay.jitsi.net:443' }`
- `backToP2PDelay` - a delay given in seconds, before the conference switches back to P2P, after the 3rd participant has left the room.
        - `disabledCodec` - the mime type of the codec that should not be negotiated on the peerconnection.
        - `preferredCodec` - the mime type of the codec that needs to be made the preferred codec for the connection.
- `disableH264` - __DEPRECATED__. Use `disabledCodec` instead.
- `preferH264` - __DEPRECATED__. Use `preferredCodec` instead.
- `rttMonitor`
- `enabled`
- `initialDelay`
- `getStatsInterval`
- `analyticsInterval`
- `stunServers`
- `e2eping`
- `pingInterval`
- `abTesting` - A/B testing related options
- `enableSuspendVideoTest`
- `testing`
- `capScreenshareBitrate`
- `p2pTestMode`
- `octo`
- `probability`
**NOTE: if `callStatsID` and `callStatsSecret` are set, the library is going to send events to callstats. Otherwise the callstats integration will be disabled.**
5. `addEventListener(event, listener)` - Subscribes the passed listener to the event.
- `event` - one of the events from `JitsiMeetJS.events.connection` object.
- `listener` - handler for the event.
6. `removeEventListener(event, listener)` - Removes event listener.
- `event` - the event
- `listener` - the listener that will be removed.
7. `addFeature` - Adds new feature to the list of supported features for the local participant
- `feature` - string, the name of the feature
- `submit` - boolean, default false, if true - the new list of features will be immediately submitted to the others.
8. `removeFeature` - Removes a feature from the list of supported features for the local participant
- `feature` - string, the name of the feature
- `submit` - boolean, default false, if true - the new list of features will be immediately submitted to the others.
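Putting the options above together, here is a minimal sketch of creating a connection and conference. All server names below are placeholder assumptions, not defaults, and the factory is kept as a plain function so it is easy to exercise without a real deployment:

```javascript
// Placeholder deployment values -- substitute your own server details.
const connectionOptions = {
    serviceUrl: 'wss://example.com/xmpp-websocket',
    hosts: {
        domain: 'example.com',
        muc: 'conference.example.com'
    }
};

const confOptions = {
    openBridgeChannel: true // recommended, see the NOTE above
};

// Connects anonymously (appID and token may be null) and creates the
// conference once the connection is established.
function connectAndJoin(JitsiMeetJS, roomName, onJoined) {
    const connection = new JitsiMeetJS.JitsiConnection(null, null, connectionOptions);

    connection.addEventListener(
        JitsiMeetJS.events.connection.CONNECTION_ESTABLISHED,
        () => onJoined(connection.initJitsiConference(roomName, confOptions)));
    connection.connect();

    return connection;
}
```

Note that `initJitsiConference` is only called after the `CONNECTION_ESTABLISHED` event fires, which matches the flow described in Getting Started below.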

@@ -269,30 +317,30 @@ JitsiConference

1. `join(password)` - Joins the conference
    - `password` - string of the password. This parameter is not mandatory.
2. `leave()` - leaves the conference. Returns Promise.
3. `myUserId()` - get local user ID.
4. `getLocalTracks()` - Returns array with JitsiTrack objects for the local streams.
5. `addEventListener(event, listener)` - Subscribes the passed listener to the event.
- `event` - one of the events from `JitsiMeetJS.events.conference` object.
- `listener` - handler for the event.
6. `removeEventListener(event, listener)` - Removes event listener.
- `event` - the event
- `listener` - the listener that will be removed.
7. `on(event, listener)` - alias for addEventListener
8. `off(event, listener)` - alias for removeEventListener
9. `sendTextMessage(text)` - sends the given string to other participants in the conference.
10. `setDisplayName(name)` - changes the display name of the local participant.
- `name` - the new display name
11. `selectParticipant(participantId)` - Elects the participant with the given id to be the selected participant in order to receive higher video quality (if simulcast is enabled).
- `participantId` - the identifier of the participant

@@ -302,8 +350,8 @@ Throws NetworkError or InvalidStateError or Error if the operation fails.

12. `sendCommand(name, values)` - sends user defined system command to the other participants
- `name` - the name of the command.
- `values` - JS object. The object has the following structure:
```javascript
{

@@ -315,3 +363,3 @@

attributes: {}, // map with keys the name of the attribute and values - the values of the attributes.

@@ -327,87 +375,91 @@

13. `sendCommandOnce(name, values)` - Sends a user defined system command to the other participants only once
14. `removeCommand(name)` - removes a command from the list of the commands that are sent to the other participants
- `name` - the name of the command
15. `addCommandListener(command, handler)` - adds listener
- `command` - string for the name of the command
- `handler(values)` - the listener that will be called when a command is received from another participant.
16. `removeCommandListener(command)` - removes the listeners for the specified command
- `command` - the name of the command
17. `addTrack(track)` - Adds `JitsiLocalTrack` object to the conference. Throws an error if adding a second video stream. Returns Promise.
- `track` - the `JitsiLocalTrack`
18. `removeTrack(track)` - Removes `JitsiLocalTrack` object from the conference. Returns Promise.
- `track` - the `JitsiLocalTrack`
19. `isDTMFSupported()` - Check if at least one user supports DTMF.
20. `getRole()` - returns string with the local user role ("moderator" or "none")
21. `isModerator()` - checks if local user has "moderator" role
22. `lock(password)` - set password for the conference; returns Promise
- `password` - string password
Note: available only for moderator
23. `unlock()` - unset conference password; returns Promise
Note: available only for moderator
24. `kickParticipant(id)` - Kick participant from the conference
- `id` - string participant id
25. `setStartMutedPolicy(policy)` - make all new participants join with muted audio/video
- `policy` - JS object with following properties
- `audio` - boolean if audio stream should be muted
- `video` - boolean if video stream should be muted
Note: available only for moderator
26. `getStartMutedPolicy()` - returns the current policy with JS object:
- `policy` - JS object with following properties
- `audio` - boolean if audio stream should be muted
- `video` - boolean if video stream should be muted
27. `isStartAudioMuted()` - check if audio is muted on join
28. `isStartVideoMuted()` - check if video is muted on join
29. `sendFeedback(overallFeedback, detailedFeedback)` - Sends the given feedback through CallStats if enabled.
- `overallFeedback` - an integer between 1 and 5 indicating the user feedback
- `detailedFeedback` - detailed feedback from the user. Not yet used
30. `setSubject(subject)` - change subject of the conference
- `subject` - string new subject
Note: available only for moderator
31. `sendEndpointMessage(to, payload)` - Sends message via the data channels.
- `to` - the id of the endpoint that should receive the message. If "" the message will be sent to all participants.
- `payload` - JSON object - the payload of the message.
Throws NetworkError or InvalidStateError or Error if the operation fails.
32. `broadcastEndpointMessage(payload)` - Sends broadcast message via the datachannels.
- `payload` - JSON object - the payload of the message.
Throws NetworkError or InvalidStateError or Error if the operation fails.
33. `pinParticipant(participantId)` - Elects the participant with the given id to be the pinned participant in order to always receive video for this participant (even when last n is enabled).
- `participantId` - the identifier of the participant
Throws NetworkError or InvalidStateError or Error if the operation fails.
34. `setReceiverVideoConstraint(resolution)` - set the desired resolution to get from JVB (180, 360, 720, 1080, etc).
You should use that method if you are using simulcast.
35. `setSenderVideoConstraint(resolution)` - set the desired resolution to send to JVB or the peer (180, 360, 720).
36. `isHidden` - checks if local user has joined as a "hidden" user. This is a specialized role used for integrations.
37. `setLocalParticipantProperty(propertyKey, propertyValue)` - used to set a custom property for the local participant (e.g. "fullName": "Full Name", "favoriteColor": "red", "userId": 234). This can also be used to modify an already set custom property.
- `propertyKey` - string - custom property name
- `propertyValue` - string - custom property value
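To illustrate a few of the methods above together, here is a hedged sketch (it assumes `room` is a `JitsiConference` obtained via `initJitsiConference`; the display name, property and message are example values only):

```javascript
// Example values only; any JitsiConference instance works here.
function setupRoom(room) {
    room.setDisplayName('Jane');
    room.setLocalParticipantProperty('favoriteColor', 'red');
    room.sendTextMessage('Hello everyone!');

    return room.myUserId();
}
```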
JitsiTrack

@@ -418,48 +470,48 @@ ======

1. `getType()` - returns a string with the type of the track ("video" for video tracks and "audio" for audio tracks)
2. `mute()` - mutes the track. Returns Promise.
Note: This method is implemented only for the local tracks.
3. `unmute()` - unmutes the track. Returns Promise.
Note: This method is implemented only for the local tracks.
4. `isMuted()` - check if track is muted
5. `attach(container)` - attaches the track to the given container.
6. `detach(container)` - removes the track from the container.
7. `dispose()` - disposes the track. If the track is added to a conference the track will be removed. Returns Promise.
Note: This method is implemented only for the local tracks.
8. `getId()` - returns unique string for the track.
9. `getParticipantId()` - returns the id (string) of the track owner
Note: This method is implemented only for the remote tracks.
10. `setAudioOutput(audioOutputDeviceId)` - sets new audio output device for track's DOM elements. Video tracks are ignored.
11. `getDeviceId()` - returns device ID associated with track (for local tracks only)
12. `isEnded()` - returns true if track is ended
13. `setEffect(effect)` - Applies the effect by swapping out the existing MediaStream on the JitsiTrack with the new
MediaStream which has the desired effect. "undefined" is passed to this function for removing the effect and for
restoring the original MediaStream on the `JitsiTrack`.
The following methods have to be defined for the effect instance.
`startEffect()` - Starts the effect and returns a new MediaStream that is to be swapped with the existing one.
`stopEffect()` - Stops the effect.
`isEnabled()` - Checks if the local track supports the effect.
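A minimal sketch of an object satisfying this effect interface. A real effect would return a NEW `MediaStream` derived from the original (e.g. captured from a canvas); this pass-through version only illustrates the three required methods:

```javascript
class PassThroughEffect {
    // Checks if the local track supports the effect;
    // this sketch only applies to video tracks.
    isEnabled(track) {
        return track.getType() === 'video';
    }

    // Starts the effect; a real implementation returns a new MediaStream
    // with the effect applied (this sketch just passes it through).
    startEffect(originalStream) {
        this._originalStream = originalStream;

        return originalStream;
    }

    // Stops the effect.
    stopEffect() {
        this._originalStream = undefined;
    }
}

// Usage (sketch): localVideoTrack.setEffect(new PassThroughEffect());
// and localVideoTrack.setEffect(undefined) to restore the original stream.
```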

@@ -470,8 +522,8 @@ Note: This method is implemented only for the local tracks.

======
The object represents an error that happened to a `JitsiTrack`. It inherits from the JavaScript base `Error` object,
so the `"name"`, `"message"` and `"stack"` properties are available. For GUM-related errors,
it exposes an additional `"gum"` property, which is an object with the following properties:
- `error` - original GUM error
- `constraints` - GUM constraints object used for the call
- `devices` - array of devices requested in GUM call (possible values - "audio", "video", "screen", "desktop", "audiooutput")
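For example, a small logging helper built on these properties (a sketch; `error` is any object shaped like the `JitsiTrackError` described above):

```javascript
// Builds a human-readable message from a JitsiTrackError-shaped object.
function describeTrackError(error) {
    if (error.gum) {
        // GUM-related error: include the devices that were requested.
        return `getUserMedia failed for [${error.gum.devices.join(', ')}]: ${error.name} - ${error.message}`;
    }

    return `${error.name}: ${error.message}`;
}
```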

@@ -481,3 +533,3 @@ Getting Started

1. The first thing you must do in order to use Jitsi Meet API is to initialize `JitsiMeetJS` object:

@@ -506,3 +558,3 @@ ```javascript

4. After you receive the `CONNECTION_ESTABLISHED` event you are to create the `JitsiConference` object and
also you may want to attach listeners for conference events (we are going to add handlers for remote track, conference joined, etc.):

@@ -512,3 +564,2 @@

```javascript
room = connection.initJitsiConference("conference1", confOptions);

@@ -515,0 +566,0 @@ room.on(JitsiMeetJS.events.conference.TRACK_ADDED, onRemoteTrack);

@@ -1,3 +0,5 @@

# End-to-End Encryption using Insertable Streams
## Overview
**NOTE** e2ee is work in progress.

@@ -7,46 +9,69 @@ This document describes some of the high-level concepts and outlines the design.

This library implements End-to-End Encryption (E2EE) on supported endpoints (currently just browsers with support
for [insertable streams](https://github.com/w3c/webrtc-insertable-streams)).
This implementation follows the model outlined in [SFrame](https://tools.ietf.org/html/draft-omara-sframe-00) with
slight changes.
## Signaling
Each participant will have a randomly generated key which is used to encrypt the media. The key is distributed with
other participants (so they can decrypt the media) via an E2EE channel which
is established with [Olm](https://gitlab.matrix.org/matrix-org/olm).
### Key rotation
Each participant's key is rotated (a new random one is generated) every time a participant leaves. This new key is
then sent to every other participant over the E2EE Olm channel.
### Key ratcheting
Each participant ratchets their key when another participant joins. The new resulting key is not distributed since
every participant can derive it by ratcheting themselves.
Unlike described in [SFrame 4.3.5.1](https://tools.ietf.org/html/draft-omara-sframe-00#section-4.3.5.1)
we attempt to ratchet the key forward when we do not find a valid authentication tag. Note that we only update
the set of keys when we find a valid signature which avoids a denial of service attack with invalid signatures.
## Media
### Packet format
We are using a variant of [SFrame](https://tools.ietf.org/html/draft-omara-sframe-00)
that uses a trailer instead of a header. We call it JFrame.
At a high level the encrypted frame format looks like this:
```
+------------+------------------------------------------+^+
|unencrypted payload header (variable length) | |
+^+------------+------------------------------------------+ |
| | | |
| | | |
| | | |
| | | |
| | Encrypted Frame | |
| | | |
| | | |
| | | |
| | | |
+^+-------------------------------------------------------+ +
| | Authentication Tag | |
| +---------------------------------------+-+-+-+-+-+-+-+-+ |
| | CTR... (length=LEN + 1) |S|LEN |KID | |
| +---------------------------------------+-+-+-+-+-+-+-+-+^|
| |
+----+Encrypted Portion Authenticated Portion+---+
```
We do not encrypt the first few bytes of the packet that form the
[VP8 payload](https://tools.ietf.org/html/rfc6386#section-9.1) (10 bytes for key frames, 3 bytes for interframes) nor
the [Opus TOC byte](https://tools.ietf.org/html/rfc6716#section-3.1).
This allows the decoder to understand the frame a bit more and makes it decode the fun looking garbage we see in the
video. This also means the SFU does not know (ideally) that the content is end-to-end encrypted and there are no
changes in the SFU required at all.
### Using Web Workers
Insertable Streams are transferable and can be sent from the main JavaScript context to a
[Web Worker](https://developer.mozilla.org/en-US/docs/Web/API/Worker).
We are using a named worker (E2EEworker) which allows very easy inspection in Chrome DevTools.

@@ -21,3 +21,3 @@ JWT token authentication Prosody plugin

- 'exp' token expiration timestamp as defined in the RFC
- 'sub' contains EITHER the lowercase name of the tenant (for a conference like TENANT1/ROOM this would be 'tenant1') OR the lowercase name of the domain used when authenticating with this token (for a conference like /ROOM). By default, assuming that we have the full MUC 'conference1@muc.server.net', 'server.net' should be used here. Alternately, a '*' may be provided, allowing access to rooms in all tenants within the domain or all domains within the server.
- 'aud' application identifier. This value indicates what service is consuming the token. It should be negotiated with the service provider before generating the token.
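For example, a token payload combining the claims described here might look like this (all values are placeholders; only the field names follow this document):

```json
{
  "aud": "my_service",
  "sub": "tenant1",
  "room": "room1",
  "exp": 1700000000,
  "context": {
    "group": "engineering",
    "user": {
      "id": "user-123"
    }
  }
}
```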

@@ -31,4 +31,4 @@

In addition to the basic claims used in authentication, the token can also provide user display information in the 'context' field within the JWT payload. None of the information in the context field is used for token validation:
- 'group' is a string which specifies the group the user belongs to. Intended for use in reporting/analytics, not used for token validation.
- 'user' is an object which contains display information for the current user

@@ -88,3 +88,3 @@ - 'id' is a user identifier string. Intended for use in reporting/analytics

- when the user connects to Prosody through BOSH. The token value is passed as the 'token' query parameter of the BOSH URL. The user uses the XMPP anonymous authentication method.
- when a MUC room is being created/joined, Prosody compares the 'room' claim with the actual name of the room. In addition, the 'sub' claim is compared to either the tenant (for TENANT/ROOM URLs) or the base domain (for /ROOM URLs). This prevents a stolen token from being abused by unauthorized users to allocate new conference rooms in the system. Admin users (Jicofo, for example) are not required to provide a valid token.

@@ -132,3 +132,3 @@ ### Lib-jitsi-meet options

JWT token authentication requires prosody-trunk version at least 747. JWT tokens with websockets require prosody 0.11.6 or higher.

@@ -135,0 +135,0 @@ You can download latest prosody-trunk packages from [here]. Then install it with the following command:

@@ -223,3 +223,3 @@ import EventEmitter from 'events';

if (availableDevices.length > 0) {
// if we have devices info report device to stats

@@ -226,0 +226,0 @@ // normally this will not happen on startup as this method is called

@@ -27,2 +27,3 @@ /* global __filename */

import recordingConstants from './modules/recording/recordingConstants';
import Settings from './modules/settings/Settings';
import LocalStatsCollector from './modules/statistics/LocalStatsCollector';

@@ -179,2 +180,3 @@ import precallTest from './modules/statistics/PrecallTest';

init(options = {}) {
Settings.init(options.externalStorage);
Statistics.init(options);

@@ -181,0 +183,0 @@

@@ -19,3 +19,2 @@ // Karma configuration

'node_modules/core-js/index.js',
'./index.js',
'./modules/**/*.spec.js'

@@ -33,3 +32,2 @@ ],

'node_modules/core-js/**': [ 'webpack' ],
'./index.js': [ 'webpack' ],
'./**/*.spec.js': [ 'webpack', 'sourcemap' ]

@@ -67,4 +65,4 @@ },

webpack: require('./webpack.config.js')
webpack: require('./webpack-shared-config')
});
};

@@ -284,8 +284,34 @@ import { BrowserDetection } from '@jitsi/js-utils';

supportsInsertableStreams() {
return Boolean(typeof window.RTCRtpSender !== 'undefined'
if (!(typeof window.RTCRtpSender !== 'undefined'
&& (window.RTCRtpSender.prototype.createEncodedStreams
|| window.RTCRtpSender.prototype.createEncodedVideoStreams));
|| window.RTCRtpSender.prototype.createEncodedVideoStreams))) {
return false;
}
// Feature-detect transferable streams which we need to operate in a worker.
// See https://groups.google.com/a/chromium.org/g/blink-dev/c/1LStSgBt6AM/m/hj0odB8pCAAJ
const stream = new ReadableStream();
try {
window.postMessage(stream, '*', [ stream ]);
return true;
} catch {
return false;
}
}
/**
* Whether the browser supports the RED format for audio.
*/
supportsAudioRed() {
return Boolean(window.RTCRtpSender
&& window.RTCRtpSender.getCapabilities
&& window.RTCRtpSender.getCapabilities('audio').codecs.some(codec => codec.mimeType === 'audio/red')
&& window.RTCRtpReceiver
&& window.RTCRtpReceiver.getCapabilities
&& window.RTCRtpReceiver.getCapabilities('audio').codecs.some(codec => codec.mimeType === 'audio/red'));
}
/**
* Checks if the browser supports the "sdpSemantics" configuration option.

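The capability checks above can be wrapped defensively so they also return false in environments where the WebRTC APIs are absent entirely (e.g. Node or an old browser). A minimal sketch, mirroring the RED detection shown above; the guard style is an assumption, not the library's actual structure:

```javascript
// Sketch: audio/red support detection that safely returns false when the
// WebRTC sender/receiver capability APIs are not available at all.
function supportsAudioRed() {
    const sender = typeof RTCRtpSender !== 'undefined' ? RTCRtpSender : undefined;
    const receiver = typeof RTCRtpReceiver !== 'undefined' ? RTCRtpReceiver : undefined;

    // Both the sender and the receiver must advertise the audio/red codec.
    const hasRed = caps => Boolean(caps)
        && caps.codecs.some(codec => codec.mimeType === 'audio/red');

    return Boolean(sender && sender.getCapabilities
        && hasRed(sender.getCapabilities('audio'))
        && receiver && receiver.getCapabilities
        && hasRed(receiver.getCapabilities('audio')));
}
```

In a non-browser environment this simply reports no support instead of throwing on the missing globals.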
@@ -292,0 +318,0 @@ * https://webrtc.org/web-apis/chrome/unified-plan/

@@ -5,4 +5,2 @@ /* global __filename */

import { createWorkerScript } from './Worker';
const logger = getLogger(__filename);

@@ -27,29 +25,38 @@

export default class E2EEcontext {
/**
* Build a new E2EE context instance, which will be used in a given conference.
*
* @param {string} options.salt - Salt to be used for key derivation.
* FIXME: We currently use the MUC room name for this which has the same lifetime
* as this context. While not (pseudo)random as recommended in
* https://developer.mozilla.org/en-US/docs/Web/API/Pbkdf2Params
* this is easily available and the same for all participants.
* We currently do not enforce a minimum length of 16 bytes either.
*/
constructor(options) {
this._options = options;
constructor() {
// Determine the URL for the worker script. Relative URLs are relative to
// the entry point, not the script that launches the worker.
let baseUrl = '';
const ljm = document.querySelector('script[src*="lib-jitsi-meet"]');
// Initialize the E2EE worker.
this._worker = new Worker(createWorkerScript(), {
name: 'E2EE Worker'
});
if (ljm) {
const idx = ljm.src.lastIndexOf('/');
baseUrl = `${ljm.src.substring(0, idx)}/`;
}
// Initialize the E2EE worker. In order to avoid CORS issues, start the worker and have it
// synchronously load the JS.
const workerUrl = `${baseUrl}lib-jitsi-meet.e2ee-worker.js`;
const workerBlob
= new Blob([ `importScripts("${workerUrl}");` ], { type: 'application/javascript' });
const blobUrl = window.URL.createObjectURL(workerBlob);
this._worker = new Worker(blobUrl, { name: 'E2EE Worker' });
this._worker.onerror = e => logger.error(e);
}
// Initialize the salt and convert it once.
const encoder = new TextEncoder();
// Send initial options to worker.
/**
* Cleans up all state associated with the given participant. This is needed when a
* participant leaves the current conference.
*
* @param {string} participantId - The participant that just left.
*/
cleanup(participantId) {
this._worker.postMessage({
operation: 'initialize',
salt: encoder.encode(options.salt)
operation: 'cleanup',
participantId
});

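The CORS workaround in the constructor above boils down to generating a tiny bootstrap script that `importScripts()` the real worker from an absolute URL, since relative worker URLs resolve against the page and cross-origin `Worker` URLs are blocked. The string-building part can be isolated and tested on its own (a sketch; the helper name is illustrative):

```javascript
// Build the source of a bootstrap worker that synchronously loads the real
// worker script. Loading via importScripts avoids the CORS restriction on
// cross-origin Worker URLs.
function buildWorkerBootstrap(scriptSrc) {
    // Derive an absolute base URL from the src of the <script> tag that
    // loaded the library, as the constructor above does.
    const idx = scriptSrc.lastIndexOf('/');
    const baseUrl = idx >= 0 ? `${scriptSrc.substring(0, idx)}/` : '';

    return `importScripts("${baseUrl}lib-jitsi-meet.e2ee-worker.js");`;
}

// In the browser this source would be wrapped in a Blob and passed to
// new Worker(URL.createObjectURL(blob), { name: 'E2EE Worker' }).
const src = buildWorkerBootstrap('https://meet.example.com/libs/lib-jitsi-meet.min.js');
```

Because `importScripts` is not subject to the same-origin restriction that the `Worker` constructor URL is, the worker script can live on a CDN while the bootstrap blob stays same-origin.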
@@ -83,6 +90,7 @@ }

operation: 'decode',
readableStream: receiverStreams.readableStream,
writableStream: receiverStreams.writableStream,
readableStream: receiverStreams.readable || receiverStreams.readableStream,
writableStream: receiverStreams.writable || receiverStreams.writableStream,
participantId
}, [ receiverStreams.readableStream, receiverStreams.writableStream ]);
}, [ receiverStreams.readable || receiverStreams.readableStream,
receiverStreams.writable || receiverStreams.writableStream ]);
}

@@ -115,29 +123,24 @@

operation: 'encode',
readableStream: senderStreams.readableStream,
writableStream: senderStreams.writableStream,
readableStream: senderStreams.readable || senderStreams.readableStream,
writableStream: senderStreams.writable || senderStreams.writableStream,
participantId
}, [ senderStreams.readableStream, senderStreams.writableStream ]);
}, [ senderStreams.readable || senderStreams.readableStream,
senderStreams.writable || senderStreams.writableStream ]);
}
/**
* Sets the key to be used for E2EE.
* Set the E2EE key for the specified participant.
*
* @param {string} value - Value to be used as the new key. May be falsy to disable end-to-end encryption.
* @param {string} participantId - the ID of the participant whose key we are setting.
* @param {Uint8Array | boolean} key - the key for the given participant.
* @param {Number} keyIndex - the key index.
*/
setKey(value) {
let key;
if (value) {
const encoder = new TextEncoder();
key = encoder.encode(value);
} else {
key = false;
}
setKey(participantId, key, keyIndex) {
this._worker.postMessage({
operation: 'setKey',
key
participantId,
key,
keyIndex
});
}
}
/* global __filename */
import { getLogger } from 'jitsi-meet-logger';
import debounce from 'lodash.debounce';

@@ -9,5 +11,11 @@ import * as JitsiConferenceEvents from '../../JitsiConferenceEvents';

import E2EEContext from './E2EEContext';
import { OlmAdapter } from './OlmAdapter';
import { importKey, ratchet } from './crypto-utils';
const logger = getLogger(__filename);
// Period which we'll wait before updating / rotating our keys when a participant
// joins or leaves.
const DEBOUNCE_PERIOD = 5000;
/**

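The debounced rotation above (rotate at most once per `DEBOUNCE_PERIOD` despite a storm of join/leave events) follows the standard trailing-edge debounce pattern. The module uses `lodash.debounce`; the sketch below is a minimal hand-rolled equivalent with an injectable scheduler so the behaviour can be verified without real timers:

```javascript
// Minimal trailing-edge debounce: repeated calls within the wait window
// collapse into a single invocation of fn after the window elapses.
// schedule/cancel default to the real timer APIs but can be injected.
function debounce(fn, wait, schedule = setTimeout, cancel = clearTimeout) {
    let timer = null;

    return (...args) => {
        if (timer !== null) {
            cancel(timer); // restart the window on every call
        }
        timer = schedule(() => {
            timer = null;
            fn(...args);
        }, wait);
    };
}
```

With a 5000 ms wait, ten participants leaving in quick succession trigger one key rotation instead of ten.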
@@ -20,15 +28,45 @@ * This module integrates {@link E2EEContext} with {@link JitsiConference} in order to enable E2E encryption.

* @param {JitsiConference} conference - The conference instance for which E2E encryption is to be enabled.
* @param {Object} options
* @param {string} options.salt - Salt to be used for key derivation. Check {@link E2EEContext} for more details.
*/
constructor(conference, { salt }) {
constructor(conference) {
this.conference = conference;
this._e2eeCtx = new E2EEContext({ salt });
this._conferenceJoined = false;
this._enabled = false;
this._initialized = false;
this._key = undefined;
this._e2eeCtx = new E2EEContext();
this._olmAdapter = new OlmAdapter(conference);
// Debounce key rotation / ratcheting to avoid a storm of messages.
this._ratchetKey = debounce(this._ratchetKeyImpl, DEBOUNCE_PERIOD);
this._rotateKey = debounce(this._rotateKeyImpl, DEBOUNCE_PERIOD);
// Participant join / leave operations. Used for key advancement / rotation.
//
this.conference.on(
JitsiConferenceEvents._MEDIA_SESSION_STARTED,
this._onMediaSessionStarted.bind(this));
JitsiConferenceEvents.CONFERENCE_JOINED,
() => {
this._conferenceJoined = true;
});
this.conference.on(
JitsiConferenceEvents.PARTICIPANT_PROPERTY_CHANGED,
this._onParticipantPropertyChanged.bind(this));
this.conference.on(
JitsiConferenceEvents.USER_JOINED,
this._onParticipantJoined.bind(this));
this.conference.on(
JitsiConferenceEvents.USER_LEFT,
this._onParticipantLeft.bind(this));
// Conference media events in order to attach the encryptor / decryptor.
// FIXME add events to TraceablePeerConnection which will allow to see when there's new receiver or sender
// added instead of shenanigans around conference track events and track muted.
//
this.conference.on(
JitsiConferenceEvents._MEDIA_SESSION_STARTED,
this._onMediaSessionStarted.bind(this));
this.conference.on(
JitsiConferenceEvents.TRACK_ADDED,

@@ -42,5 +80,91 @@ track => track.isLocal() && this._onLocalTrackAdded(track));

this._trackMuteChanged.bind(this));
// Olm signalling events.
this._olmAdapter.on(
OlmAdapter.events.OLM_ID_KEY_READY,
this._onOlmIdKeyReady.bind(this));
this._olmAdapter.on(
OlmAdapter.events.PARTICIPANT_E2EE_CHANNEL_READY,
this._onParticipantE2EEChannelReady.bind(this));
this._olmAdapter.on(
OlmAdapter.events.PARTICIPANT_KEY_UPDATED,
this._onParticipantKeyUpdated.bind(this));
}
/**
* Indicates if E2EE is supported in the current platform.
*
* @param {object} config - Global configuration.
* @returns {boolean}
*/
static isSupported(config) {
return browser.supportsInsertableStreams()
&& OlmAdapter.isSupported()
&& !(config.testing && config.testing.disableE2EE);
}
/**
* Indicates whether E2EE is currently enabled or not.
*
* @returns {boolean}
*/
isEnabled() {
return this._enabled;
}
/**
* Enables / disables End-To-End encryption.
*
* @param {boolean} enabled - whether E2EE should be enabled or not.
* @returns {void}
*/
setEnabled(enabled) {
if (enabled === this._enabled) {
return;
}
this._enabled = enabled;
if (!this._initialized && enabled) {
// Need to re-create the peerconnections in order to apply the insertable streams constraint.
// TODO: this was necessary due to some audio issues when insertable streams are used
// even though encryption is not performed. This should be fixed in the browser eventually.
// https://bugs.chromium.org/p/chromium/issues/detail?id=1103280
this.conference._restartMediaSessions();
this._initialized = true;
}
// Generate a random key in case we are enabling.
this._key = enabled ? this._generateKey() : false;
// Send it to others using the E2EE olm channel.
this._olmAdapter.updateKey(this._key).then(index => {
// Set our key so we begin encrypting.
this._e2eeCtx.setKey(this.conference.myUserId(), this._key, index);
});
}
/**
* Generates a new 256 bit random key.
*
* @returns {Uint8Array}
* @private
*/
_generateKey() {
return window.crypto.getRandomValues(new Uint8Array(32));
}
/**
* Setup E2EE on the new track that has been added to the conference, apply it on all the open peerconnections.
* @param {JitsiLocalTrack} track - the new track that's being added to the conference.
* @private
*/
_onLocalTrackAdded(track) {
for (const session of this.conference._getMediaSessions()) {
this._setupSenderE2EEForTrack(session, track);
}
}
/**
* Sets up E2E encryption for the new session.

@@ -59,9 +183,21 @@ * @param {JingleSessionPC} session - the new media session.

/**
* Setup E2EE on the new track that has been added to the conference, apply it on all the open peerconnections.
* @param {JitsiLocalTrack} track - the new track that's being added to the conference.
* Publishes our own Olm id key in presence.
* @private
*/
_onLocalTrackAdded(track) {
for (const session of this.conference._getMediaSessions()) {
this._setupSenderE2EEForTrack(session, track);
_onOlmIdKeyReady(idKey) {
logger.debug(`Olm id key ready: ${idKey}`);
// Publish it in presence.
this.conference.setLocalParticipantProperty('e2ee.idKey', idKey);
}
/**
* Advances (using ratcheting) the current key when a new participant joins the conference.
* @private
*/
_onParticipantJoined(id) {
logger.debug(`Participant ${id} joined`);
if (this._conferenceJoined && this._enabled) {
this._ratchetKey();
}

@@ -71,17 +207,97 @@ }

/**
* Sets the key to be used for End-To-End encryption.
* Rotates the current key when a participant leaves the conference.
* @private
*/
_onParticipantLeft(id) {
logger.debug(`Participant ${id} left`);
this._e2eeCtx.cleanup(id);
if (this._enabled) {
this._rotateKey();
}
}
/**
* Event posted when the E2EE signalling channel has been established with the given participant.
* @private
*/
_onParticipantE2EEChannelReady(id) {
logger.debug(`E2EE channel with participant ${id} is ready`);
}
/**
* Handles an update in a participant's key.
*
* @param {string} key - the key to be used.
* @returns {void}
* @param {string} id - The participant ID.
* @param {Uint8Array | boolean} key - The new key for the participant.
* @param {Number} index - The new key's index.
* @private
*/
setKey(key) {
this._e2eeCtx.setKey(key);
_onParticipantKeyUpdated(id, key, index) {
logger.debug(`Participant ${id} updated their key`);
this._e2eeCtx.setKey(id, key, index);
}
/**
* Handles an update in a participant's presence property.
*
* @param {JitsiParticipant} participant - The participant.
* @param {string} name - The name of the property that changed.
* @param {*} oldValue - The property's previous value.
* @param {*} newValue - The property's new value.
* @private
*/
_onParticipantPropertyChanged(participant, name, oldValue, newValue) {
switch (name) {
case 'e2ee.idKey':
logger.debug(`Participant ${participant.getId()} updated their id key: ${newValue}`);
break;
}
}
/**
* Advances the current key by using ratcheting.
*
* @private
*/
async _ratchetKeyImpl() {
logger.debug('Ratcheting key');
const material = await importKey(this._key);
const newKey = await ratchet(material);
this._key = new Uint8Array(newKey);
const index = await this._olmAdapter.updateCurrentKey(this._key);
this._e2eeCtx.setKey(this.conference.myUserId(), this._key, index);
}
/**
* Rotates the local key. Rotating the key implies creating a new one, then distributing it
* to all participants and once they all received it, start using it.
*
* @private
*/
async _rotateKeyImpl() {
logger.debug('Rotating key');
this._key = this._generateKey();
const index = await this._olmAdapter.updateKey(this._key);
this._e2eeCtx.setKey(this.conference.myUserId(), this._key, index);
}
/**
* Setup E2EE for the receiving side.
*
* @returns {void}
* @private
*/
_setupReceiverE2EEForTrack(tpc, track) {
if (!this._enabled) {
return;
}
const receiver = tpc.findReceiverForTrack(track.track);

@@ -101,5 +317,9 @@

* @param {JitsiLocalTrack} track - the local track for which e2e encoder will be configured.
* @returns {void}
* @private
*/
_setupSenderE2EEForTrack(session, track) {
if (!this._enabled) {
return;
}
const pc = session.peerconnection;

@@ -106,0 +326,0 @@ const sender = pc && pc.findSenderForTrack(track.track);

@@ -1,378 +0,69 @@

// Worker for E2EE/Insertable streams. Currently served as an inline blob.
const code = `
// Polyfill RTCEncoded(Audio|Video)Frame.getMetadata() (not available in M83, available M84+).
// The polyfill can not be done on the prototype since its not exposed in workers. Instead,
// it is done as another transformation to keep it separate.
function polyFillEncodedFrameMetadata(encodedFrame, controller) {
if (!encodedFrame.getMetadata) {
encodedFrame.getMetadata = function() {
return {
// TODO: provide a more complete polyfill based on additionalData for video.
synchronizationSource: this.synchronizationSource,
contributingSources: this.contributingSources
};
};
}
controller.enqueue(encodedFrame);
}
/* global TransformStream */
/* eslint-disable no-bitwise */
// We use a ringbuffer of keys so we can change them and still decode packets that were
// encrypted with an old key.
// In the future when we don't rely on a globally shared key we will actually use it. For
// now set the size to 1 which means there is only a single key. This causes some
// glitches when changing the key but it's OK.
const keyRingSize = 1;
// Worker for E2EE/Insertable streams.
//
// We use a 96 bit IV for AES GCM. This is signalled in plain together with the
// packet. See https://developer.mozilla.org/en-US/docs/Web/API/AesGcmParams
const ivLength = 12;
import { Context } from './Context';
import { polyFillEncodedFrameMetadata } from './utils';
// We use a 128 bit key for AES GCM.
const keyGenParameters = {
name: 'AES-GCM',
length: 128
};
const contexts = new Map(); // Map participant id => context
// We copy the first bytes of the VP8 payload unencrypted.
// For keyframes this is 10 bytes, for non-keyframes (delta) 3. See
// https://tools.ietf.org/html/rfc6386#section-9.1
// This allows the bridge to continue detecting keyframes (only one byte needed in the JVB)
// and is also a bit easier for the VP8 decoder (i.e. it generates funny garbage pictures
// instead of being unable to decode).
// This is a bit for show and we might want to reduce to 1 unconditionally in the final version.
//
// For audio (where frame.type is not set) we do not encrypt the opus TOC byte:
// https://tools.ietf.org/html/rfc6716#section-3.1
const unencryptedBytes = {
key: 10,
delta: 3,
undefined: 1 // frame.type is not set on audio
};
onmessage = async event => {
const { operation } = event.data;
// Salt used in key derivation
// FIXME: We currently use the MUC room name for this which has the same lifetime
// as this worker. While not (pseudo)random as recommended in
// https://developer.mozilla.org/en-US/docs/Web/API/Pbkdf2Params
// this is easily available and the same for all participants.
// We currently do not enforce a minimum length of 16 bytes either.
let keySalt;
if (operation === 'encode') {
const { readableStream, writableStream, participantId } = event.data;
// Raw keyBytes used to derive the key.
let keyBytes;
/**
* Derives an AES-GCM key from the input using PBKDF2
* The key length can be configured above and should be either 128 or 256 bits.
* @param {Uint8Array} keyBytes - Value to derive key from
* @param {Uint8Array} salt - Salt used in key derivation
*/
async function deriveKey(keyBytes, salt) {
// https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/importKey
const material = await crypto.subtle.importKey('raw', keyBytes,
'PBKDF2', false, [ 'deriveBits', 'deriveKey' ]);
// https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/deriveKey#PBKDF2
return crypto.subtle.deriveKey({
name: 'PBKDF2',
salt,
iterations: 100000,
hash: 'SHA-256'
}, material, keyGenParameters, false, [ 'encrypt', 'decrypt' ]);
}
/** Per-participant context holding the cryptographic keys and
* encode/decode functions
*/
class Context {
/**
* @param {string} id - local muc resourcepart
*/
constructor(id) {
// An array (ring) of keys that we use for sending and receiving.
this._cryptoKeyRing = new Array(keyRingSize);
// A pointer to the currently used key.
this._currentKeyIndex = -1;
// We keep track of how many frames we have sent per ssrc.
// Starts with a random offset similar to the RTP sequence number.
this._sendCounts = new Map();
this._id = id;
if (!contexts.has(participantId)) {
contexts.set(participantId, new Context(participantId));
}
const context = contexts.get(participantId);
const transformStream = new TransformStream({
transform: context.encodeFunction.bind(context)
});
/**
* Derives a per-participant key.
* @param {Uint8Array} keyBytes - Value to derive key from
* @param {Uint8Array} salt - Salt used in key derivation
*/
async deriveKey(keyBytes, salt) {
const encoder = new TextEncoder();
const idBytes = encoder.encode(this._id);
// Separate both parts by a null byte to avoid ambiguity attacks.
const participantSalt = new Uint8Array(salt.byteLength + idBytes.byteLength + 1);
participantSalt.set(salt);
participantSalt.set(idBytes, salt.byteLength + 1);
readableStream
.pipeThrough(new TransformStream({
transform: polyFillEncodedFrameMetadata // M83 polyfill.
}))
.pipeThrough(transformStream)
.pipeTo(writableStream);
} else if (operation === 'decode') {
const { readableStream, writableStream, participantId } = event.data;
return deriveKey(keyBytes, participantSalt);
if (!contexts.has(participantId)) {
contexts.set(participantId, new Context(participantId));
}
/**
* Sets a key and starts using it for encrypting.
* @param {CryptoKey} key
*/
setKey(key) {
this._currentKeyIndex++;
this._cryptoKeyRing[this._currentKeyIndex % this._cryptoKeyRing.length] = key;
}
const context = contexts.get(participantId);
const transformStream = new TransformStream({
transform: context.decodeFunction.bind(context)
});
/**
* Construct the IV used for AES-GCM and sent (in plain) with the packet similar to
* https://tools.ietf.org/html/rfc7714#section-8.1
* It concatenates
* - the 32 bit synchronization source (SSRC) given on the encoded frame,
* - the 32 bit rtp timestamp given on the encoded frame,
* - a send counter that is specific to the SSRC. Starts at a random number.
* The send counter is essentially the pictureId but we currently have to implement this ourselves.
* There is no XOR with a salt. Note that this IV leaks the SSRC to the receiver, but since the SSRC
* is randomly generated and SFUs may not rewrite it, this is considered acceptable.
* The SSRC is used to allow demultiplexing multiple streams with the same key, as described in
* https://tools.ietf.org/html/rfc3711#section-4.1.1
* The RTP timestamp is 32 bits and advances by the codec clock rate (90 kHz for video, 48 kHz for
* Opus audio) every second. For video it rolls over roughly every 13 hours.
* The send counter will advance at the frame rate (30fps for video, 50fps for 20ms opus audio)
* every second. It will take a long time to roll over.
*
* See also https://developer.mozilla.org/en-US/docs/Web/API/AesGcmParams
*/
makeIV(synchronizationSource, timestamp) {
const iv = new ArrayBuffer(ivLength);
const ivView = new DataView(iv);
readableStream
.pipeThrough(new TransformStream({
transform: polyFillEncodedFrameMetadata // M83 polyfill.
}))
.pipeThrough(transformStream)
.pipeTo(writableStream);
} else if (operation === 'setKey') {
const { participantId, key, keyIndex } = event.data;
// having to keep our own send count (similar to a picture id) is not ideal.
if (!this._sendCounts.has(synchronizationSource)) {
// Initialize with a random offset, similar to the RTP sequence number.
this._sendCounts.set(synchronizationSource, Math.floor(Math.random() * 0xFFFF));
}
const sendCount = this._sendCounts.get(synchronizationSource);
ivView.setUint32(0, synchronizationSource);
ivView.setUint32(4, timestamp);
ivView.setUint32(8, sendCount % 0xFFFF);
this._sendCounts.set(synchronizationSource, sendCount + 1);
return iv;
if (!contexts.has(participantId)) {
contexts.set(participantId, new Context(participantId));
}
const context = contexts.get(participantId);
/**
* Function that will be injected in a stream and will encrypt the given encoded frames.
*
* @param {RTCEncodedVideoFrame|RTCEncodedAudioFrame} encodedFrame - Encoded video frame.
* @param {TransformStreamDefaultController} controller - TransportStreamController.
*
* The packet format is described below. One of the design goals was to not require
* changes to the SFU which for video requires not encrypting the keyframe bit of VP8
* as SFUs need to detect a keyframe (framemarking or the generic frame descriptor will
* solve this eventually). This also "hides" that a client is using E2EE a bit.
*
* Note that this operates on the full frame, i.e. for VP8 the data described in
* https://tools.ietf.org/html/rfc6386#section-9.1
*
* The VP8 payload descriptor described in
* https://tools.ietf.org/html/rfc7741#section-4.2
* is part of the RTP packet and not part of the frame and is not controllable by us.
* This is fine as the SFU keeps having access to it for routing.
*
* The encrypted frame is formed as follows:
* 1) Leave the first (10, 3, 1) bytes unencrypted, depending on the frame type and kind.
* 2) Form the GCM IV for the frame as described above.
* 3) Encrypt the rest of the frame using AES-GCM.
* 4) Allocate space for the encrypted frame.
* 5) Copy the unencrypted bytes to the start of the encrypted frame.
* 6) Append the ciphertext to the encrypted frame.
* 7) Append the IV.
* 8) Append a single byte for the key identifier. TODO: we don't need all the bits.
* 9) Enqueue the encrypted frame for sending.
*/
encodeFunction(encodedFrame, controller) {
const keyIndex = this._currentKeyIndex % this._cryptoKeyRing.length;
if (this._cryptoKeyRing[keyIndex]) {
const iv = this.makeIV(encodedFrame.getMetadata().synchronizationSource, encodedFrame.timestamp);
return crypto.subtle.encrypt({
name: 'AES-GCM',
iv,
additionalData: new Uint8Array(encodedFrame.data, 0, unencryptedBytes[encodedFrame.type])
}, this._cryptoKeyRing[keyIndex], new Uint8Array(encodedFrame.data,
unencryptedBytes[encodedFrame.type]))
.then(cipherText => {
const newData = new ArrayBuffer(unencryptedBytes[encodedFrame.type] + cipherText.byteLength
+ iv.byteLength + 1);
const newUint8 = new Uint8Array(newData);
newUint8.set(
new Uint8Array(encodedFrame.data, 0, unencryptedBytes[encodedFrame.type])); // copy first bytes.
newUint8.set(
new Uint8Array(cipherText), unencryptedBytes[encodedFrame.type]); // add ciphertext.
newUint8.set(
new Uint8Array(iv), unencryptedBytes[encodedFrame.type] + cipherText.byteLength); // append IV.
newUint8[unencryptedBytes[encodedFrame.type] + cipherText.byteLength + ivLength]
= keyIndex; // set key index.
encodedFrame.data = newData;
return controller.enqueue(encodedFrame);
}, e => {
console.error(e);
// We are not enqueuing the frame here on purpose.
});
}
/* NOTE WELL:
* This will send unencrypted data (only protected by DTLS transport encryption) when no key is configured.
* This is ok for demo purposes but should not be done once this becomes more relied upon.
*/
controller.enqueue(encodedFrame);
if (key) {
context.setKey(key, keyIndex);
} else {
context.setKey(false, keyIndex);
}
} else if (operation === 'cleanup') {
const { participantId } = event.data;
/**
* Function that will be injected in a stream and will decrypt the given encoded frames.
*
* @param {RTCEncodedVideoFrame|RTCEncodedAudioFrame} encodedFrame - Encoded video frame.
* @param {TransformStreamDefaultController} controller - TransportStreamController.
*
* The decrypted frame is formed as follows:
* 1) Extract the key index from the last byte of the encrypted frame.
* If there is no key associated with the key index, the frame is enqueued for decoding
* and these steps terminate.
* 2) Determine the frame type in order to look up the number of unencrypted header bytes.
* 3) Extract the 12-byte IV from its position near the end of the packet.
* Note: the IV is treated as opaque and not reconstructed from the input.
* 4) Decrypt the encrypted frame content after the unencrypted bytes using AES-GCM.
* 5) Allocate space for the decrypted frame.
* 6) Copy the unencrypted bytes from the start of the encrypted frame.
* 7) Append the plaintext to the decrypted frame.
* 8) Enqueue the decrypted frame for decoding.
*/
decodeFunction(encodedFrame, controller) {
const data = new Uint8Array(encodedFrame.data);
const keyIndex = data[encodedFrame.data.byteLength - 1];
if (this._cryptoKeyRing[keyIndex]) {
const iv = new Uint8Array(encodedFrame.data, encodedFrame.data.byteLength - ivLength - 1, ivLength);
const cipherTextStart = unencryptedBytes[encodedFrame.type];
const cipherTextLength = encodedFrame.data.byteLength - (unencryptedBytes[encodedFrame.type]
+ ivLength + 1);
return crypto.subtle.decrypt({
name: 'AES-GCM',
iv,
additionalData: new Uint8Array(encodedFrame.data, 0, unencryptedBytes[encodedFrame.type])
}, this._cryptoKeyRing[keyIndex], new Uint8Array(encodedFrame.data, cipherTextStart, cipherTextLength))
.then(plainText => {
const newData = new ArrayBuffer(unencryptedBytes[encodedFrame.type] + plainText.byteLength);
const newUint8 = new Uint8Array(newData);
newUint8.set(new Uint8Array(encodedFrame.data, 0, unencryptedBytes[encodedFrame.type]));
newUint8.set(new Uint8Array(plainText), unencryptedBytes[encodedFrame.type]);
encodedFrame.data = newData;
return controller.enqueue(encodedFrame);
}, e => {
console.error(e);
// TODO: notify the application about error status.
// TODO: For video we need a better strategy since we do not want to base any
// non-error frames on a garbage keyframe.
if (encodedFrame.type === undefined) { // audio, replace with silence.
const newData = new ArrayBuffer(3);
const newUint8 = new Uint8Array(newData);
newUint8.set([ 0xd8, 0xff, 0xfe ]); // opus silence frame.
encodedFrame.data = newData;
controller.enqueue(encodedFrame);
}
});
} else if (keyIndex >= this._cryptoKeyRing.length
&& this._cryptoKeyRing[this._currentKeyIndex % this._cryptoKeyRing.length]) {
// If we are encrypting but don't have a key for the remote drop the frame.
// This is a heuristic since we don't know whether a packet is encrypted,
// do not have a checksum and do not have signaling for whether a remote participant does
// encrypt or not.
return;
}
// TODO: this just passes through to the decoder. Is that ok? If we don't know the key yet
// we might want to buffer a bit but it is still unclear how to do that (and for how long etc).
controller.enqueue(encodedFrame);
}
contexts.delete(participantId);
} else {
console.error('e2ee worker', operation);
}
const contexts = new Map(); // Map participant id => context
onmessage = async event => {
const { operation } = event.data;
if (operation === 'initialize') {
keySalt = event.data.salt;
} else if (operation === 'encode') {
const { readableStream, writableStream, participantId } = event.data;
if (!contexts.has(participantId)) {
contexts.set(participantId, new Context(participantId));
}
const context = contexts.get(participantId);
const transformStream = new TransformStream({
transform: context.encodeFunction.bind(context)
});
readableStream
.pipeThrough(new TransformStream({
transform: polyFillEncodedFrameMetadata, // M83 polyfill.
}))
.pipeThrough(transformStream)
.pipeTo(writableStream);
if (keyBytes) {
context.setKey(await context.deriveKey(keyBytes, keySalt));
}
} else if (operation === 'decode') {
const { readableStream, writableStream, participantId } = event.data;
if (!contexts.has(participantId)) {
contexts.set(participantId, new Context(participantId));
}
const context = contexts.get(participantId);
const transformStream = new TransformStream({
transform: context.decodeFunction.bind(context)
});
readableStream
.pipeThrough(new TransformStream({
transform: polyFillEncodedFrameMetadata, // M83 polyfill.
}))
.pipeThrough(transformStream)
.pipeTo(writableStream);
if (keyBytes) {
context.setKey(await context.deriveKey(keyBytes, keySalt));
}
} else if (operation === 'setKey') {
keyBytes = event.data.key;
contexts.forEach(async context => {
if (keyBytes) {
context.setKey(await context.deriveKey(keyBytes, keySalt));
} else {
context.setKey(false);
}
});
} else {
console.error('e2ee worker', operation);
}
};
`;
export const createWorkerScript = () => URL.createObjectURL(new Blob([ code ], { type: 'application/javascript' }));
};

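The per-participant `Context` in the worker above keeps a fixed-size ring of keys indexed modulo its length, so frames encrypted with a recent previous key can still be decrypted after a rotation. That ring behaviour can be sketched standalone (illustrative only; the real class lives in `modules/e2ee/Context.js` and stores `CryptoKey` objects, not strings):

```javascript
// Minimal key ring: setKey advances the index and overwrites the oldest
// slot, so up to `size` recent keys remain available for decryption.
class KeyRing {
    constructor(size) {
        this._keys = new Array(size);
        this._currentIndex = -1; // no key set yet
    }

    setKey(key) {
        this._currentIndex++;
        this._keys[this._currentIndex % this._keys.length] = key;
    }

    currentKey() {
        return this._keys[this._currentIndex % this._keys.length];
    }

    keyAt(index) {
        return this._keys[index % this._keys.length];
    }
}

const ring = new KeyRing(4);
ring.setKey('k0');
ring.setKey('k1');
```

With `keyRingSize = 1` as in the worker above, every `setKey` immediately evicts the previous key, which is why key changes can cause short decode glitches.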
@@ -28,7 +28,5 @@ import { getLogger } from 'jitsi-meet-logger';

if (!peerconnection && !wsUrl) {
throw new TypeError(
'At least peerconnection or wsUrl must be given');
throw new TypeError('At least peerconnection or wsUrl must be given');
} else if (peerconnection && wsUrl) {
throw new TypeError(
'Just one of peerconnection or wsUrl must be given');
throw new TypeError('Just one of peerconnection or wsUrl must be given');
}

@@ -205,9 +203,8 @@

sendSetLastNMessage(value) {
const jsonObject = {
logger.log(`Sending lastN=${value}.`);
this._send({
colibriClass: 'LastNChangedEvent',
lastN: value
};
this._send(jsonObject);
logger.log(`Channel lastN set to: ${value}`);
});
}

@@ -223,5 +220,3 @@

sendPinnedEndpointMessage(endpointId) {
logger.log(
'sending pinned changed notification to the bridge for endpoint ',
endpointId);
logger.log(`Sending pinned endpoint: ${endpointId}.`);

@@ -243,5 +238,3 @@ this._send({

sendSelectedEndpointsMessage(endpointIds) {
logger.log(
'sending selected changed notification to the bridge for endpoints',
endpointIds);
logger.log(`Sending selected endpoints: ${endpointIds}.`);

@@ -260,4 +253,3 @@ this._send({

sendReceiverVideoConstraintMessage(maxFrameHeightPixels) {
logger.log('sending a ReceiverVideoConstraint message with '
+ `a maxFrameHeight of ${maxFrameHeightPixels} pixels`);
logger.log(`Sending ReceiverVideoConstraint with maxFrameHeight=${maxFrameHeightPixels}px`);
this._send({

@@ -303,5 +295,3 @@ colibriClass: 'ReceiverVideoConstraint',

GlobalOnErrorHandler.callErrorHandler(error);
logger.error(
'Failed to parse channel message as JSON: ',
data, error);
logger.error('Failed to parse channel message as JSON: ', data, error);

@@ -318,8 +308,4 @@ return;

logger.info(
'Channel new dominant speaker event: ',
dominantSpeakerEndpoint);
emitter.emit(
RTCEvents.DOMINANT_SPEAKER_CHANGED,
dominantSpeakerEndpoint);
logger.info(`New dominant speaker: ${dominantSpeakerEndpoint}.`);
emitter.emit(RTCEvents.DOMINANT_SPEAKER_CHANGED, dominantSpeakerEndpoint);
break;

@@ -331,7 +317,4 @@ }

logger.info(
`Endpoint connection status changed: ${endpoint} active ? ${
isActive}`);
emitter.emit(RTCEvents.ENDPOINT_CONN_STATUS_CHANGED,
endpoint, isActive);
logger.info(`Endpoint connection status changed: ${endpoint} active=${isActive}`);
emitter.emit(RTCEvents.ENDPOINT_CONN_STATUS_CHANGED, endpoint, isActive);

@@ -341,5 +324,3 @@ break;

case 'EndpointMessage': {
emitter.emit(
RTCEvents.ENDPOINT_MESSAGE_RECEIVED, obj.from,
obj.msgPayload);
emitter.emit(RTCEvents.ENDPOINT_MESSAGE_RECEIVED, obj.from, obj.msgPayload);

@@ -349,9 +330,7 @@ break;

case 'LastNEndpointsChangeEvent': {
// The new/latest list of last-n endpoint IDs.
// The new/latest list of last-n endpoint IDs (i.e. endpoints for which the bridge is sending video).
const lastNEndpoints = obj.lastNEndpoints;
logger.info('Channel new last-n event: ',
lastNEndpoints, obj);
emitter.emit(RTCEvents.LASTN_ENDPOINT_CHANGED,
lastNEndpoints, obj);
logger.info(`New forwarded endpoints: ${lastNEndpoints}`);
emitter.emit(RTCEvents.LASTN_ENDPOINT_CHANGED, lastNEndpoints);

@@ -358,0 +337,0 @@ break;

@@ -427,16 +427,17 @@ /* global __filename, module */

setAudioLevel(audioLevel, tpc) {
// The receiver seems to be reporting audio level immediately after the
// remote user has muted, so do not set the audio level on the track
// if it is muted.
if (browser.supportsReceiverStats()
&& !this.isLocalAudioTrack()
&& this.isWebRTCTrackMuted()) {
return;
let newAudioLevel = audioLevel;
// When using getSynchronizationSources on the audio receiver to gather audio levels for
// remote tracks, the browser reports the last known audio level even after the remote user
// has muted audio, so we need to reset the value to zero here so that the audio levels are cleared.
// Remote tracks have the tpc info present while local tracks do not.
if (browser.supportsReceiverStats() && typeof tpc !== 'undefined' && this.isMuted()) {
newAudioLevel = 0;
}
if (this.audioLevel !== audioLevel) {
this.audioLevel = audioLevel;
if (this.audioLevel !== newAudioLevel) {
this.audioLevel = newAudioLevel;
this.emit(
JitsiTrackEvents.TRACK_AUDIO_LEVEL_CHANGED,
audioLevel,
newAudioLevel,
tpc);

@@ -447,3 +448,3 @@

} else if (this.audioLevel === 0
&& audioLevel === 0
&& newAudioLevel === 0
&& this.isLocal()

@@ -453,3 +454,3 @@ && !this.isWebRTCTrackMuted()) {

JitsiTrackEvents.NO_AUDIO_INPUT,
audioLevel);
newAudioLevel);
}

@@ -456,0 +457,0 @@ }

@@ -352,2 +352,3 @@ /* global __filename */

_senderVideoConstraintsChanged(senderVideoConstraints) {
logger.info(`Received remote max frame height of ${senderVideoConstraints} on the bridge channel`);
this._senderVideoConstraints = senderVideoConstraints;

@@ -521,4 +522,4 @@ this.eventEmitter.emit(RTCEvents.SENDER_VIDEO_CONSTRAINTS_CHANGED);

iceConfig.encodedInsertableStreams = true;
iceConfig.forceEncodedAudioInsertableStreams = true; // legacy, to be removed in M85.
iceConfig.forceEncodedVideoInsertableStreams = true; // legacy, to be removed in M85.
iceConfig.forceEncodedAudioInsertableStreams = true; // legacy, to be removed in M88.
iceConfig.forceEncodedVideoInsertableStreams = true; // legacy, to be removed in M88.
}

@@ -525,0 +526,0 @@

@@ -103,2 +103,9 @@ /* global

/**
* An empty function.
*/
function emptyFuncton() {
// no-op
}
/**
* Initialize wrapper function for enumerating devices.

@@ -113,3 +120,9 @@ * TODO: remove this, it should no longer be needed.

navigator.mediaDevices.enumerateDevices()
.then(callback, () => callback([]));
.then(devices => {
updateKnownDevices(devices);
callback(devices);
}, () => {
updateKnownDevices([]);
callback([]);
});
};

@@ -326,7 +339,9 @@ }

// https://www.electronjs.org/docs/api/desktop-capturer
// Note. The documentation specifies that chromeMediaSourceId should not be present
// which, in the case a user has multiple monitors, leads to them being shared all
// at once. However, we tested with chromeMediaSourceId present and it seems to be
// working properly and also takes care of the previously mentioned issue.
constraints.audio = { mandatory: {
chromeMediaSource: constraints.video.mandatory.chromeMediaSource
} };
delete constraints.video.mandatory.chromeMediaSourceId;
}

@@ -625,3 +640,20 @@ }

/**
* Update known devices.
*
* @param {Array<Object>} pds - The new devices.
* @returns {void}
*
* NOTE: Use this function as a shared callback to handle both the devicechange event and the polling implementations.
* This prevents duplication and works around a chrome bug (verified to occur on 68) where devicechange fires twice in
* a row, which can cause async post devicechange processing to collide.
*/
function updateKnownDevices(pds) {
if (compareAvailableMediaDevices(pds)) {
onMediaDevicesListChanged(pds);
}
}
/**
* Event handler for the 'devicechange' event.

@@ -822,3 +854,3 @@ *

availableDevices = undefined;
availableDevices = [];
window.clearInterval(availableDevicesPollTimer);

@@ -901,18 +933,6 @@ availableDevicesPollTimer = undefined;

// Use a shared callback to handle both the devicechange event
// and the polling implementations. This prevents duplication
// and works around a chrome bug (verified to occur on 68) where
// devicechange fires twice in a row, which can cause async post
// devicechange processing to collide.
const updateKnownDevices = () => this.enumerateDevices(pds => {
if (compareAvailableMediaDevices(pds)) {
onMediaDevicesListChanged(pds);
}
});
if (browser.supportsDeviceChangeEvent()) {
navigator.mediaDevices.addEventListener(
'devicechange',
updateKnownDevices);
() => this.enumerateDevices(emptyFuncton));
} else {

@@ -922,3 +942,3 @@ // Periodically poll enumerateDevices() method to check if

availableDevicesPollTimer = window.setInterval(
updateKnownDevices,
() => this.enumerateDevices(emptyFuncton),
AVAILABLE_DEVICES_POLL_INTERVAL_TIME);

@@ -925,0 +945,0 @@ }

@@ -164,4 +164,7 @@

getDisplayMedia({ video: true,
audio: true })
getDisplayMedia({
video: true,
audio: true,
cursor: 'always'
})
.then(stream => {

@@ -189,5 +192,21 @@ let applyConstraintsPromise;

})
.catch(() =>
errorCallback(new JitsiTrackError(JitsiTrackErrors
.SCREENSHARING_USER_CANCELED)));
.catch(error => {
const errorDetails = {
errorName: error && error.name,
errorMsg: error && error.message,
errorStack: error && error.stack
};
logger.error('getDisplayMedia error', errorDetails);
if (errorDetails.errorMsg && errorDetails.errorMsg.indexOf('denied by system') !== -1) {
// On Chrome this is the only thing different between error returned when user cancels
// and when no permission was given on the OS level.
errorCallback(new JitsiTrackError(JitsiTrackErrors.PERMISSION_DENIED));
return;
}
errorCallback(new JitsiTrackError(JitsiTrackErrors.SCREENSHARING_USER_CANCELED));
});
}

@@ -194,0 +213,0 @@ };

import { getLogger } from 'jitsi-meet-logger';
import transform from 'sdp-transform';
import * as JitsiTrackEvents from '../../JitsiTrackEvents';
import * as MediaType from '../../service/RTC/MediaType';
import RTCEvents from '../../service/RTC/RTCEvents';
import * as VideoType from '../../service/RTC/VideoType';
import browser from '../browser';

@@ -26,4 +24,4 @@

* @param peerconnection - the tpc instance for which we have utility functions.
* @param videoBitrates - the bitrates to be configured on the video senders when
* simulcast is enabled.
* @param videoBitrates - the bitrates to be configured on the video senders for
* different resolutions both in unicast and simulcast mode.
*/

@@ -35,6 +33,15 @@ constructor(peerconnection, videoBitrates) {

/**
* The simulcast encodings that will be configured on the RTCRtpSender
* for the video tracks in the unified plan mode.
* The startup configuration for the stream encodings that are applicable to
* the video stream when a new sender is created on the peerconnection. The initial
* config takes into account the differences in browser's simulcast implementation.
*
* Encoding parameters:
* active - determine the on/off state of a particular encoding.
* maxBitrate - max. bitrate value to be applied to that particular encoding
* based on the encoding's resolution and config.js videoQuality settings if applicable.
* rid - Rtp Stream ID that is configured for a particular simulcast stream.
* scaleResolutionDownBy - the factor by which the encoding is scaled down from the
* original resolution of the captured video.
*/
this.simulcastEncodings = [
this.localStreamEncodingsConfig = [
{

@@ -59,8 +66,2 @@ active: true,

];
/**
* Resolution height constraints for the simulcast encodings that
* are configured for the video tracks.
*/
this.simulcastStreamConstraints = [];
}

@@ -103,3 +104,4 @@

/**
* Obtains stream encodings that need to be configured on the given track.
* Obtains stream encodings that need to be configured on the given track based
* on the track media type and the simulcast setting.
* @param {JitsiLocalTrack} localTrack

@@ -109,6 +111,11 @@ */

if (this.pc.isSimulcastOn() && localTrack.isVideoTrack()) {
return this.simulcastEncodings;
return this.localStreamEncodingsConfig;
}
return [ { active: true } ];
return localTrack.isVideoTrack()
? [ {
active: true,
maxBitrate: this.videoBitrates.high
} ]
: [ { active: true } ];
}

@@ -190,30 +197,9 @@

/**
* Constructs resolution height constraints for the simulcast encodings that are
* created for a given local video track.
* @param {MediaStreamTrack} track - the local video track.
* @returns {void}
*/
setSimulcastStreamConstraints(track) {
if (browser.isReactNative()) {
return;
}
const height = track.getSettings().height;
for (const encoding in this.simulcastEncodings) {
if (this.simulcastEncodings.hasOwnProperty(encoding)) {
this.simulcastStreamConstraints.push({
height: height / this.simulcastEncodings[encoding].scaleResolutionDownBy,
rid: this.simulcastEncodings[encoding].rid
});
}
}
}
/**
* Adds {@link JitsiLocalTrack} to the WebRTC peerconnection for the first time.
* @param {JitsiLocalTrack} track - track to be added to the peerconnection.
* @param {boolean} isInitiator - boolean that indicates if the endpoint is offerer
* in a p2p connection.
* @returns {void}
*/
addTrack(localTrack, isInitiator = true) {
addTrack(localTrack, isInitiator) {
const track = localTrack.getTrack();

@@ -240,7 +226,2 @@

}
// Construct the simulcast stream constraints for the newly added track.
if (localTrack.isVideoTrack() && localTrack.videoType === VideoType.CAMERA && this.pc.isSimulcastOn()) {
this.setSimulcastStreamConstraints(localTrack.getTrack());
}
}

@@ -278,9 +259,2 @@

transceiver.direction = 'sendrecv';
// Construct the simulcast stream constraints for the newly added track.
if (localTrack.isVideoTrack()
&& localTrack.videoType === VideoType.CAMERA
&& this.pc.isSimulcastOn()) {
this.setSimulcastStreamConstraints(localTrack.getTrack());
}
});

@@ -292,13 +266,33 @@ }

return transceiver.sender.replaceTrack(track)
.then(() => {
this.pc.localTracks.set(localTrack.rtcId, localTrack);
});
return transceiver.sender.replaceTrack(track);
}
/**
* Obtains the current local video track's height constraints based on the
* initial stream encodings configuration on the sender and the resolution
* of the current local track added to the peerconnection.
* @param {MediaStreamTrack} localTrack local video track
* @returns {Array[number]} an array containing the resolution heights of
* simulcast streams configured on the video sender.
*/
getLocalStreamHeightConstraints(localTrack) {
// React-native hasn't implemented MediaStreamTrack getSettings yet.
if (browser.isReactNative()) {
return null;
}
const localVideoHeightConstraints = [];
const height = localTrack.getSettings().height;
for (const encoding of this.localStreamEncodingsConfig) {
localVideoHeightConstraints.push(height / encoding.scaleResolutionDownBy);
}
return localVideoHeightConstraints;
}
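The division in `getLocalStreamHeightConstraints` can be sketched in isolation. The `scaleResolutionDownBy` factors of 4, 2 and 1 below are assumptions for illustration only; the real values live in `localStreamEncodingsConfig`, which is elided from this hunk:

```javascript
// Hypothetical stand-in for localStreamEncodingsConfig; the real
// scaleResolutionDownBy values are defined in the constructor above.
const encodingsConfig = [
    { scaleResolutionDownBy: 4 },
    { scaleResolutionDownBy: 2 },
    { scaleResolutionDownBy: 1 }
];

// Mirrors the loop in getLocalStreamHeightConstraints: each simulcast
// stream's height is the captured height divided by its scale factor.
function heightConstraints(capturedHeight) {
    return encodingsConfig.map(e => capturedHeight / e.scaleResolutionDownBy);
}

console.log(heightConstraints(720)); // [ 180, 360, 720 ]
```

Under these assumed factors, a 720p capture yields simulcast heights of 180, 360 and 720 pixels.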
/**
* Removes the track from the RTCRtpSender as part of the mute operation.
* @param {JitsiLocalTrack} localTrack - track to be removed.
* @returns {Promise<boolean>} - Promise that resolves to false if unmute
* operation is successful, a reject otherwise.
* @returns {Promise<void>} - resolved when done.
*/

@@ -316,8 +310,3 @@ removeTrackMute(localTrack) {

return transceiver.sender.replaceTrack(null)
.then(() => {
this.pc.localTracks.delete(localTrack.rtcId);
return Promise.resolve(false);
});
return transceiver.sender.replaceTrack(null);
}

@@ -335,2 +324,13 @@

const stream = newTrack.getOriginalStream();
// Ignore cases when the track is replaced while the device is in a muted state, like
// replacing camera when video muted or replacing mic when audio muted. These JitsiLocalTracks
// do not have a mediastream attached. Replace track will be called again when the device is
// unmuted and the track will be replaced on the peerconnection then.
if (!stream) {
this.pc.localTracks.delete(oldTrack.rtcId);
this.pc.localTracks.set(newTrack.rtcId, newTrack);
return Promise.resolve();
}
const track = mediaType === MediaType.AUDIO

@@ -363,13 +363,12 @@ ? stream.getAudioTracks()[0]

} else if (oldTrack && !newTrack) {
if (!this.removeTrackMute(oldTrack)) {
return Promise.reject(new Error('replace track failed'));
}
this.pc.localTracks.delete(oldTrack.rtcId);
this.pc.localSSRCs.delete(oldTrack.rtcId);
return this.removeTrackMute(oldTrack)
.then(() => {
this.pc.localTracks.delete(oldTrack.rtcId);
this.pc.localSSRCs.delete(oldTrack.rtcId);
});
} else if (newTrack && !oldTrack) {
const ssrc = this.pc.localSSRCs.get(newTrack.rtcId);
this.addTrackUnmute(newTrack)
return this.addTrackUnmute(newTrack)
.then(() => {
newTrack.emit(JitsiTrackEvents.TRACK_MUTE_CHANGED, newTrack);
this.pc.localTracks.set(newTrack.rtcId, newTrack);

@@ -379,4 +378,2 @@ this.pc.localSSRCs.set(newTrack.rtcId, ssrc);

}
return Promise.resolve();
}

@@ -394,3 +391,3 @@

setAudioTransferActive(active) {
this.setMediaTransferActive('audio', active);
this.setMediaTransferActive(MediaType.AUDIO, active);
}

@@ -427,2 +424,3 @@

logger.info(`${active ? 'Enabling' : 'Suspending'} ${mediaType} media transfer on ${this.pc}`);
transceivers.forEach((transceiver, idx) => {

@@ -452,4 +450,4 @@ if (active) {

setVideoTransferActive(active) {
this.setMediaTransferActive('video', active);
this.setMediaTransferActive(MediaType.VIDEO, active);
}
}

@@ -16,3 +16,19 @@ import { jitsiLocalStorage } from '@jitsi/js-utils';

export default {
/**
* The storage used to store the settings.
*/
_storage: jitsiLocalStorage,
/**
* Initializes the Settings class.
*
* @param {Storage|undefined} externalStorage - Object that implements the Storage interface. This object will be
* used for storing data instead of jitsiLocalStorage if specified.
*/
init(externalStorage) {
this._storage = externalStorage || jitsiLocalStorage;
},
/**
* Returns fake username for callstats

@@ -23,6 +39,6 @@ * @returns {string} fake username for callstats

if (!_callStatsUserName) {
_callStatsUserName = jitsiLocalStorage.getItem('callStatsUserName');
_callStatsUserName = this._storage.getItem('callStatsUserName');
if (!_callStatsUserName) {
_callStatsUserName = generateCallStatsUserName();
jitsiLocalStorage.setItem('callStatsUserName', _callStatsUserName);
this._storage.setItem('callStatsUserName', _callStatsUserName);
}

@@ -40,6 +56,6 @@ }

if (!_machineId) {
_machineId = jitsiLocalStorage.getItem('jitsiMeetId');
_machineId = this._storage.getItem('jitsiMeetId');
if (!_machineId) {
_machineId = generateJitsiMeetId();
jitsiLocalStorage.setItem('jitsiMeetId', _machineId);
this._storage.setItem('jitsiMeetId', _machineId);
}

@@ -58,3 +74,3 @@ }

// instance and that's why we should always re-read it.
return jitsiLocalStorage.getItem('sessionId');
return this._storage.getItem('sessionId');
},

@@ -68,5 +84,5 @@

if (sessionId) {
jitsiLocalStorage.setItem('sessionId', sessionId);
this._storage.setItem('sessionId', sessionId);
} else {
jitsiLocalStorage.removeItem('sessionId');
this._storage.removeItem('sessionId');
}

@@ -73,0 +89,0 @@ }
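The `externalStorage` object passed to `init` above only needs the subset of the Storage interface that Settings actually calls (`getItem`, `setItem`, `removeItem`). A minimal in-memory sketch, assuming nothing beyond this hunk:

```javascript
// Minimal in-memory object satisfying the Storage calls used by Settings.
const memoryStorage = {
    _map: new Map(),
    getItem(key) {
        // Storage.getItem returns null for missing keys.
        return this._map.has(key) ? this._map.get(key) : null;
    },
    setItem(key, value) {
        // Storage coerces stored values to strings.
        this._map.set(key, String(value));
    },
    removeItem(key) {
        this._map.delete(key);
    }
};

memoryStorage.setItem('sessionId', 'abc123');
console.log(memoryStorage.getItem('sessionId')); // 'abc123'
memoryStorage.removeItem('sessionId');
console.log(memoryStorage.getItem('sessionId')); // null
```

An object like this could then be handed to `JitsiMeetJS.init({ externalStorage: memoryStorage, ... })` per the `externalStorage` option documented at the top of the diff.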

@@ -39,4 +39,4 @@

return {
average: (this.stats.getAverage() * SECONDS).toFixed(2), // calc rate per min
maxDuration: this.maxDuration
avgRatePerMinute: (this.stats.getAverage() * SECONDS).toFixed(2), // calc rate per min
maxDurationMs: this.maxDuration
};

@@ -43,0 +43,0 @@ }

@@ -50,22 +50,20 @@ import EventEmitter from 'events';

return new Promise((resolve, reject) => {
if (!options.disableThirdPartyRequests) {
const appId = options.callStatsID;
const appSecret = options.callStatsSecret;
const userId = options.statisticsId || options.statisticsDisplayName || Settings.callStatsUserName;
const appId = options.callStatsID;
const appSecret = options.callStatsSecret;
const userId = options.statisticsId || options.statisticsDisplayName || Settings.callStatsUserName;
api.initialize(appId, appSecret, userId, (status, message) => {
if (status === 'success') {
api.on(PRECALL_TEST_RESULTS, (...args) => {
emitter.emit(PRECALL_TEST_RESULTS, ...args);
});
_initialized = true;
resolve();
} else {
reject({
status,
message
});
}
}, null, { disablePrecalltest: true });
}
api.initialize(appId, appSecret, userId, (status, message) => {
if (status === 'success') {
api.on(PRECALL_TEST_RESULTS, (...args) => {
emitter.emit(PRECALL_TEST_RESULTS, ...args);
});
_initialized = true;
resolve();
} else {
reject({
status,
message
});
}
}, null, { disablePrecalltest: true });
});

@@ -85,2 +83,8 @@ }

const { callStatsID, callStatsSecret, disableThirdPartyRequests } = options;
if (!callStatsID || !callStatsSecret || disableThirdPartyRequests) {
throw new Error('Callstats is disabled');
}
await _loadScript();

@@ -87,0 +91,0 @@ // eslint-disable-next-line new-cap

@@ -163,3 +163,3 @@ import EventEmitter from 'events';

this.callStatsIntegrationEnabled
= this.options.callStatsID && this.options.callStatsSecret
= this.options.callStatsID && this.options.callStatsSecret && this.options.enableCallStats

@@ -166,0 +166,0 @@ // Even though AppID and AppSecret may be specified, the integration

@@ -30,3 +30,22 @@

/**
* Calculates a unique hash for a given string similar to Java's
* implementation of String.hashCode()
*
* @param {String} string - String whose hash has to be calculated.
* @returns {number} - Unique hash code calculated.
*/
export function hashString(string) {
let hash = 0;
for (let i = 0; i < string.length; i++) {
hash += Math.pow(string.charCodeAt(i) * 31, string.length - i);
/* eslint-disable no-bitwise */
hash = hash & hash; // Convert to 32bit integer
}
return Math.abs(hash);
}
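The new `hashString` helper can be exercised on its own. This sketch simply copies the function from the hunk above and demonstrates that it is deterministic and non-negative; note that, like Java's `hashCode`, distinct strings may still collide:

```javascript
// Copied from the hunk above: 31-based hash truncated to a 32-bit integer.
function hashString(string) {
    let hash = 0;
    for (let i = 0; i < string.length; i++) {
        hash += Math.pow(string.charCodeAt(i) * 31, string.length - i);
        /* eslint-disable no-bitwise */
        hash = hash & hash; // Convert to 32bit integer
    }
    return Math.abs(hash);
}

const a = hashString('endpoint-1');
console.log(a === hashString('endpoint-1')); // true: same input, same hash
console.log(Number.isInteger(a) && a >= 0);  // true: 32-bit, non-negative
```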
/**

@@ -33,0 +52,0 @@ * Returns only the positive values from an array of numbers.

@@ -1066,13 +1066,17 @@ /* global $, __filename */

}
const jsonMessage = $(msg).find('>json-message').text();
const parsedJson = this.xmpp.tryParseJSONAndVerify(jsonMessage);
// We emit this event if the message is a valid json, and is not
// delivered after a delay, i.e. stamp is undefined.
// e.g. - subtitles should not be displayed if delayed.
if (parsedJson && stamp === undefined) {
    this.eventEmitter.emit(XMPPEvents.JSON_MESSAGE_RECEIVED,
        from, parsedJson);
    return;
}
const jsonMessage = $(msg).find('>json-message').text();
if (jsonMessage) {
    const parsedJson = this.xmpp.tryParseJSONAndVerify(jsonMessage);
    // We emit this event if the message is a valid json, and is not
    // delivered after a delay, i.e. stamp is undefined.
    // e.g. - subtitles should not be displayed if delayed.
    if (parsedJson && stamp === undefined) {
        this.eventEmitter.emit(XMPPEvents.JSON_MESSAGE_RECEIVED,
            from, parsedJson);
        return;
    }
}

@@ -1079,0 +1083,0 @@

@@ -28,2 +28,5 @@ import { Strophe } from 'strophe.js';

this.sentIQs = [];
this._proto = {
socket: undefined
};
}

@@ -62,2 +65,9 @@

simulateConnectionState(newState) {
if (newState === Strophe.Status.CONNECTED) {
this._proto.socket = {
readyState: WebSocket.OPEN
};
} else {
this._proto.socket = undefined;
}
this._connectCb(newState);

@@ -64,0 +74,0 @@ }

@@ -207,2 +207,9 @@ /* global $, Promise */

}
if (config.enableOpusRed === true) {
elem.c(
'property', {
name: 'enableOpusRed',
value: true
}).up();
}
if (config.minParticipants !== undefined) {

@@ -209,0 +216,0 @@ elem.c(

@@ -378,19 +378,13 @@ /* global $, __filename */

if (options.useStunTurn) {
let filter;
let filter;
if (options.useTurnUdp) {
filter = s => s.urls.startsWith('turn');
} else {
// By default we filter out STUN and TURN/UDP and leave only TURN/TCP.
filter = s => s.urls.startsWith('turn') && (s.urls.indexOf('transport=tcp') >= 0);
}
this.jvbIceConfig.iceServers = iceservers.filter(filter);
if (options.useTurnUdp) {
filter = s => s.urls.startsWith('turn');
} else {
// By default we filter out STUN and TURN/UDP and leave only TURN/TCP.
filter = s => s.urls.startsWith('turn') && (s.urls.indexOf('transport=tcp') >= 0);
}
if (options.p2p && options.p2p.useStunTurn) {
this.p2pIceConfig.iceServers = iceservers;
}
this.jvbIceConfig.iceServers = iceservers.filter(filter);
this.p2pIceConfig.iceServers = iceservers;
}, err => {

@@ -397,0 +391,0 @@ logger.warn('getting turn credentials failed', err);

@@ -17,10 +17,12 @@ import { getLogger } from 'jitsi-meet-logger';

/**
* Ping timeout error after 15 sec of waiting.
* Ping timeout error after 5 sec of waiting.
*/
const PING_TIMEOUT = 15000;
const PING_TIMEOUT = 5000;
/**
* Will close the connection after 3 consecutive ping errors.
* How many ping failures will be tolerated before the WebSocket connection is killed.
* The worst case scenario in case of ping timing out without a response is (25 seconds at the time of this writing):
* PING_THRESHOLD * PING_INTERVAL + PING_TIMEOUT
*/
const PING_THRESHOLD = 3;
const PING_THRESHOLD = 2;
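The worst-case figure quoted in the comment above follows from simple arithmetic. `PING_INTERVAL` is not shown in this hunk, so the 10-second value below is an assumption chosen to reproduce the quoted 25 seconds:

```javascript
const PING_TIMEOUT = 5000;   // from this hunk
const PING_THRESHOLD = 2;    // from this hunk
const PING_INTERVAL = 10000; // assumed; defined elsewhere in strophe.ping.js

// PING_THRESHOLD * PING_INTERVAL + PING_TIMEOUT, per the comment above.
const worstCaseMs = PING_THRESHOLD * PING_INTERVAL + PING_TIMEOUT;
console.log(worstCaseMs); // 25000
```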

@@ -42,10 +44,12 @@ /**

* Constructs new object
* @param {XMPP} xmpp the xmpp module.
* @param {Object} options
* @param {Function} options.onPingThresholdExceeded - Callback called when ping fails too many times (controlled
* by the {@link PING_THRESHOLD} constant).
* @constructor
*/
constructor(xmpp) {
constructor({ onPingThresholdExceeded }) {
super();
this.failedPings = 0;
this.xmpp = xmpp;
this.pingExecIntervals = new Array(PING_TIMESTAMPS_TO_KEEP);
this._onPingThresholdExceeded = onPingThresholdExceeded;
}

@@ -106,9 +110,3 @@

logger.error(errmsg, error);
// FIXME it doesn't help to disconnect when 3rd PING
// times out, it only stops Strophe from retrying.
// Not really sure what's the right thing to do in that
// situation, but just closing the connection makes no
// sense.
// self.connection.disconnect();
this._onPingThresholdExceeded && this._onPingThresholdExceeded();
} else {

@@ -115,0 +113,0 @@ logger.warn(errmsg, error);

@@ -11,2 +11,3 @@ /* global $ */

import browser from '../browser';
import { E2EEncryption } from '../e2ee/E2EEncryption';
import GlobalOnErrorHandler from '../util/GlobalOnErrorHandler';

@@ -21,3 +22,2 @@ import Listenable from '../util/Listenable';

import initStropheLogger from './strophe.logger';
import PingConnectionPlugin from './strophe.ping';
import RayoConnectionPlugin from './strophe.rayo';

@@ -153,5 +153,9 @@ import initStropheUtil from './strophe.util';

if (!this.options.disableRtx) {
// Disable RTX on Firefox because of https://bugzilla.mozilla.org/show_bug.cgi?id=1668028.
if (!(this.options.disableRtx || browser.isFirefox())) {
this.caps.addFeature('urn:ietf:rfc:4588');
}
if (this.options.enableOpusRed === true && browser.supportsAudioRed()) {
this.caps.addFeature('http://jitsi.org/opus-red');
}

@@ -179,3 +183,3 @@ // this is dealt with by SDP O/A so we don't need to announce this

if (browser.supportsInsertableStreams() && !(this.options.testing && this.options.testing.disableE2EE)) {
if (E2EEncryption.isSupported(this.options)) {
this.caps.addFeature('https://jitsi.org/meet/e2ee');

@@ -213,8 +217,4 @@ }

this.eventEmitter.emit(XMPPEvents.CONNECTION_STATUS_CHANGED, credentials, status, msg);
if (status === Strophe.Status.CONNECTED
|| status === Strophe.Status.ATTACHED) {
if (this.options.useStunTurn
|| (this.options.p2p && this.options.p2p.useStunTurn)) {
this.connection.jingle.getStunAndTurnCredentials();
}
if (status === Strophe.Status.CONNECTED || status === Strophe.Status.ATTACHED) {
this.connection.jingle.getStunAndTurnCredentials();

@@ -237,6 +237,2 @@ logger.info(`My Jabber ID: ${this.connection.jid}`);

// It is counterintuitive to start the ping task when it's not supported, but since PING is now mandatory
// it's done on purpose in order to print error logs and bring more attention.
this.connection.ping.startInterval(pingJid);
// check for speakerstats

@@ -655,3 +651,2 @@ identities.forEach(identity => {

this.connection.addConnectionPlugin('jingle', new JingleConnectionPlugin(this, this.eventEmitter, iceConfig));
this.connection.addConnectionPlugin('ping', new PingConnectionPlugin(this));
this.connection.addConnectionPlugin('rayo', new RayoConnectionPlugin());

@@ -755,2 +750,4 @@ }

} catch (e) {
logger.error(e);
return false;

@@ -757,0 +754,0 @@ }

@@ -6,5 +6,6 @@ import { getLogger } from 'jitsi-meet-logger';

import Listenable from '../util/Listenable';
import { getJitterDelay } from '../util/Retry';
import ResumeTask from './ResumeTask';
import LastSuccessTracker from './StropheLastSuccess';
import PingConnectionPlugin from './strophe.ping';

@@ -56,8 +57,2 @@ const logger = getLogger(__filename);

/**
* The counter increased before each resume retry attempt, used to calculate exponential backoff.
* @type {number}
* @private
*/
this._resumeRetryN = 0;
this._stropheConn = new Strophe.Connection(serviceUrl);

@@ -72,2 +67,4 @@ this._usesWebsocket = serviceUrl.startsWith('ws:') || serviceUrl.startsWith('wss:');

this._resumeTask = new ResumeTask(this._stropheConn);
/**

@@ -86,2 +83,10 @@ * @typedef DeferredSendIQ Object

this._deferredIQs = [];
// Ping plugin is mandatory for the Websocket mode to work correctly. It's used to detect when the connection
// is broken (WebSocket/TCP connection not closed gracefully).
this.addConnectionPlugin(
'ping',
new PingConnectionPlugin({
onPingThresholdExceeded: () => this._onPingErrorThresholdExceeded()
}));
}

@@ -95,3 +100,6 @@

get connected() {
return this._status === Strophe.Status.CONNECTED || this._status === Strophe.Status.ATTACHED;
const websocket = this._stropheConn && this._stropheConn._proto && this._stropheConn._proto.socket;
return (this._status === Strophe.Status.CONNECTED || this._status === Strophe.Status.ATTACHED)
&& (!this.isUsingWebSocket || (websocket && websocket.readyState === WebSocket.OPEN));
}
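The stricter `connected` getter above additionally requires the underlying socket to be in the OPEN state. `WebSocket.readyState` is a small standard enum, sketched here with its spec-defined constant values and a hypothetical connection shape mirroring `_stropheConn._proto.socket`:

```javascript
// Standard WebSocket readyState values (WHATWG HTML spec / RFC 6455).
const CONNECTING = 0;
const OPEN = 1;
const CLOSING = 2;
const CLOSED = 3;

// Mirrors the getter's socket check for a hypothetical connection shape.
function socketIsOpen(conn) {
    const socket = conn && conn._proto && conn._proto.socket;
    return Boolean(socket && socket.readyState === OPEN);
}

console.log(socketIsOpen({ _proto: { socket: { readyState: OPEN } } }));   // true
console.log(socketIsOpen({ _proto: { socket: { readyState: CLOSED } } })); // false
console.log(socketIsOpen({ _proto: {} }));                                 // false
```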

@@ -251,3 +259,7 @@

this._processDeferredIQs();
this._resumeTask.cancel();
this.ping.startInterval(this.domain);
} else if (status === Strophe.Status.DISCONNECTED) {
this.ping.stopInterval();
// FIXME add RECONNECTING state instead of blocking the DISCONNECTED update

@@ -284,3 +296,6 @@ blockCallback = this._tryResumingConnection();

closeWebsocket() {
this._stropheConn._proto && this._stropheConn._proto.socket && this._stropheConn._proto.socket.close();
if (this._stropheConn && this._stropheConn._proto) {
this._stropheConn._proto._closeSocket();
this._stropheConn._proto._onClose(null);
}
}

@@ -294,3 +309,3 @@

disconnect(...args) {
clearTimeout(this._resumeTimeout);
this._resumeTask.cancel();
clearTimeout(this._wsKeepAlive);

@@ -445,3 +460,4 @@ this._clearDeferredIQs();

result => resolve(result),
error => reject(error));
error => reject(error),
timeout);
} else {

@@ -468,2 +484,14 @@ const deferred = {

/**
* Called by the ping plugin when ping fails too many times.
*
* @returns {void}
*/
_onPingErrorThresholdExceeded() {
if (this.isUsingWebSocket) {
logger.warn('Ping error threshold exceeded - killing the WebSocket');
this.closeWebsocket();
}
}
/**
* Helper function to send presence stanzas. The main benefit is for sending presence stanzas for which you expect

@@ -537,29 +565,4 @@ * a responding presence stanza with the same id (for example when leaving a chat room).

if (resumeToken) {
clearTimeout(this._resumeTimeout);
this._resumeTask.schedule();
// FIXME detect internet offline
// The retry delay will be:
// 1st retry: 1.5s - 3s
// 2nd retry: 3s - 9s
// 3rd retry: 3s - 27s
this._resumeRetryN = Math.min(3, this._resumeRetryN + 1);
const retryTimeout = getJitterDelay(this._resumeRetryN, 1500, 3);
logger.info(`Will try to resume the XMPP connection in ${retryTimeout}ms`);
this._resumeTimeout = setTimeout(() => {
logger.info('Trying to resume the XMPP connection');
const url = new URL(this._stropheConn.service);
let { search } = url;
search += search.indexOf('?') === -1 ? `?previd=${resumeToken}` : `&previd=${resumeToken}`;
url.search = search;
this._stropheConn.service = url.toString();
streamManagement.resume();
}, retryTimeout);
return true;

@@ -566,0 +569,0 @@ }

{
"name": "@q42/lib-jitsi-meet",
"version": "1.0.1",
"version": "1.1.0",
"description": "Borrel fork for accessing Jitsi server side deployments",

@@ -19,9 +19,11 @@ "repository": {

"dependencies": {
"@jitsi/js-utils": "1.0.0",
"@jitsi/js-utils": "1.0.2",
"@jitsi/sdp-interop": "1.0.3",
"@jitsi/sdp-simulcast": "0.3.0",
"@jitsi/sdp-simulcast": "0.4.0",
"async": "0.9.0",
"base64-js": "1.3.1",
"current-executing-script": "0.1.3",
"jitsi-meet-logger": "github:jitsi/jitsi-meet-logger#5ec92357570dc8f0b7ffc1528820721c84c6af8b",
"lodash.clonedeep": "4.5.0",
"lodash.debounce": "4.0.8",
"lodash.isequal": "4.5.0",

@@ -32,2 +34,3 @@ "sdp-transform": "2.3.0",

"strophejs-plugin-stream-management": "github:jitsi/strophejs-plugin-stream-management#001cf02bef2357234e1ac5d163611b4d60bf2b6a",
"uuid": "8.1.0",
"webrtc-adapter": "7.5.0"

@@ -39,2 +42,3 @@ },

"@babel/plugin-proposal-export-namespace-from": "7.0.0",
"@babel/plugin-proposal-optional-chaining": "7.2.0",
"@babel/plugin-transform-flow-strip-types": "7.0.0",

@@ -64,2 +68,3 @@ "@babel/preset-env": "7.1.0",

"lint": "eslint . && flow",
"postinstall": "webpack -p",
"test": "karma start karma.conf.js",

@@ -66,0 +71,0 @@ "test-watch": "karma start karma.conf.js --no-single-run",

@@ -7,3 +7,4 @@ # Jitsi Meet API library

[Checkout the examples.](doc/API.md#installation)
- [Installation guide](doc/API.md#installation)
- [Checkout the example](doc/example)

@@ -10,0 +11,0 @@ ## Building the sources

@@ -1,95 +0,5 @@

/* global __dirname */
const process = require('process');
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');
const analyzeBundle = process.argv.indexOf('--analyze-bundle') !== -1;
const config = require('./webpack-shared-config');
const minimize
= process.argv.indexOf('-p') !== -1
|| process.argv.indexOf('--optimize-minimize') !== -1;
const config = {
// The inline-source-map is used to allow debugging the unit tests with Karma
devtool: minimize ? 'source-map' : 'inline-source-map',
mode: minimize ? 'production' : 'development',
module: {
rules: [ {
// Version this build of the lib-jitsi-meet library.
loader: 'string-replace-loader',
options: {
flags: 'g',
replace:
process.env.LIB_JITSI_MEET_COMMIT_HASH || 'development',
search: '{#COMMIT_HASH#}'
},
test: `${__dirname}/JitsiMeetJS.js`
}, {
// Transpile ES2015 (aka ES6) to ES5.
exclude: [
new RegExp(`${__dirname}/node_modules/(?!@jitsi/js-utils)`)
],
loader: 'babel-loader',
options: {
presets: [
[
'@babel/preset-env',
// Tell babel to avoid compiling imports into CommonJS
// so that webpack may do tree shaking.
{
modules: false,
// Specify our target browsers so no transpiling is
// done unnecessarily. For browsers not specified
// here, the ES2015+ profile will be used.
targets: {
chrome: 58,
electron: 2,
firefox: 54,
safari: 11
}
}
],
'@babel/preset-flow'
],
plugins: [
'@babel/plugin-transform-flow-strip-types',
'@babel/plugin-proposal-class-properties',
'@babel/plugin-proposal-export-namespace-from'
]
},
test: /\.js$/
} ]
},
node: {
// Allow the use of the real filename of the module being executed. By
// default Webpack does not leak path-related information and provides a
// value that is a mock (/index.js).
__filename: true
},
optimization: {
concatenateModules: minimize
},
output: {
filename: `[name]${minimize ? '.min' : ''}.js`,
path: process.cwd(),
sourceMapFilename: `[name].${minimize ? 'min' : 'js'}.map`
},
performance: {
hints: minimize ? 'error' : false,
maxAssetSize: 750 * 1024,
maxEntrypointSize: 750 * 1024
},
plugins: [
analyzeBundle
&& new BundleAnalyzerPlugin({
analyzerMode: 'disabled',
generateStatsFile: true
})
].filter(Boolean)
};
module.exports = [

@@ -104,3 +14,16 @@ Object.assign({}, config, {

})
})
}),
{
entry: {
worker: './modules/e2ee/Worker.js'
},
mode: 'production',
output: {
filename: 'lib-jitsi-meet.e2ee-worker.js',
path: process.cwd()
},
optimization: {
minimize: false
}
}
];

Sorry, the diff of this file is not supported yet

Sorry, the diff of this file is too big to display

