kurento-client-elements

kurento-client-elements - npm Package Compare versions

Comparing version 6.12.0 to 6.13.0


lib/complexTypes/IceCandidate.js

@@ -30,4 +30,4 @@ /* Autogenerated with Kurento Idl */

/**
* IceCandidate representation based on standard
* (http://www.w3.org/TR/webrtc/#rtcicecandidate-type).
* IceCandidate representation based on <code>RTCIceCandidate</code> interface.
* @see https://www.w3.org/TR/2018/CR-webrtc-20180927/#rtcicecandidate-interface
*

@@ -34,0 +34,0 @@ * @constructor module:elements/complexTypes.IceCandidate

@@ -49,3 +49,5 @@ /* Autogenerated with Kurento Idl */

/**
* Notifies a new local candidate. These candidates should be sent to the remote
* Notifies a new local candidate.
* These candidates should be sent to the remote peer, to complete the ICE
* negotiation process.
*

@@ -61,4 +63,4 @@ * @event module:elements#IceCandidateFound

/**
* Event fired when an ICE component state changes. See
* :rom:cls:`IceComponentState` for a list of possible states.
* Event fired when an ICE component state changes.
* See :rom:cls:`IceComponentState` for a list of possible states.
*

@@ -84,3 +86,3 @@ * @event module:elements#IceComponentStateChange

/**
* Event fired when a new pair of ICE candidates is used by the ICE library.
* This could also happen in the middle of a session, though not likely.

@@ -97,3 +99,4 @@ *

/**
* @deprecated</br>Event fired when a data channel is closed.
* Event fired when a data channel is closed.
* @deprecated Use <code>DataChannelClose</code> instead.
*

@@ -109,3 +112,4 @@ * @event module:elements#OnDataChannelClosed

/**
* @deprecated</br>Event fired when a new data channel is created.
* Event fired when a new data channel is created.
* @deprecated Use <code>DataChannelOpen</code> instead.
*

@@ -121,4 +125,6 @@ * @event module:elements#OnDataChannelOpened

/**
* @deprecated</br>Notifies a new local candidate. These candidates should be
* sent to the remote peer, to complete the ICE negotiation process.
* Notifies a new local candidate.
* These candidates should be sent to the remote peer, to complete the ICE
* negotiation process.
* @deprecated Use <code>IceCandidateFound</code> instead.
*

@@ -134,4 +140,5 @@ * @event module:elements#OnIceCandidate

/**
* @deprecated</br>Event fired when an ICE component state changes. See
* :rom:cls:`IceComponentState` for a list of possible states.
* Event fired when an ICE component state changes.
* See :rom:cls:`IceComponentState` for a list of possible states.
* @deprecated Use <code>IceComponentStateChange</code> instead.
*

@@ -151,3 +158,4 @@ * @event module:elements#OnIceComponentStateChanged

/**
* @deprecated</br>Event fired when all ICE candidates have been gathered.
* Event fired when all ICE candidates have been gathered.
* @deprecated Use <code>IceGatheringDone</code> instead.
*

@@ -154,0 +162,0 @@ * @event module:elements#OnIceGatheringDone

@@ -29,3 +29,3 @@ /* Autogenerated with Kurento Idl */

Object.defineProperty(exports, 'name', {value: 'elements'});
Object.defineProperty(exports, 'version', {value: '6.12.0'});
Object.defineProperty(exports, 'version', {value: '6.13.0'});

@@ -32,0 +32,0 @@

@@ -44,10 +44,12 @@ /* Autogenerated with Kurento Idl */

* @classdesc
* Provides the functionality to store contents.
* <p>
* Provides the functionality to store contents. The recorder can store in
* local
* files or in a network resource. It receives a media stream from another
* MediaElement (i.e. the source), and stores it in the designated location.
* The recorder can store in local files or in a network resource. It
* receives a
* media stream from another {@link module:core/abstracts.MediaElement
* MediaElement} (i.e. the source), and
* stores it in the designated location.
* </p>
* <p>
* The following information has to be provided In order to create a
* The following information has to be provided in order to create a
* RecorderEndpoint, and cannot be changed afterwards:

@@ -87,5 +89,7 @@ * </p>

* <li>
* The media profile (@MediaProfileSpecType) used to store the file. This
* will
* determine the encoding. See below for more details about media profile.
* The media profile ({@link
* module:elements.RecorderEndpoint#MediaProfileSpecType}) used to store
* the file.
* This will determine the encoding. See below for more details about media
* profile.
* </li>

@@ -109,7 +113,6 @@ * <li>

* Otherwise, the media server won't be able to store any information, and an
* ErrorEvent will be fired. Please note that if you haven't subscribed to
* that
* type of event, you can be left wondering why your media is not being
* saved,
* while the error message was ignored.
* {@link ErrorEvent} will be fired. Please note that if you haven't
* subscribed to
* that type of event, you can be left wondering why your media is not being
* saved, while the error message was ignored.
* </p>

@@ -126,6 +129,8 @@ * <p>

* </p>
* For example: Say that your pipeline will receive <b>VP8</b>-encoded video
* from
* WebRTC, and sends it to a RecorderEndpoint; depending on the format
* selected...
* <p>
* For example: Say that your pipeline will receive <b>VP8</b>-encoded video
* from
* WebRTC, and sends it to a RecorderEndpoint; depending on the format
* selected...
* </p>
* <ul>

@@ -148,6 +153,8 @@ * <li>

* </ul>
* From this you can see how selecting the correct format for your application
* is a
* very important decision.
* <p>
* From this you can see how selecting the correct format for your
* application is
* a very important decision.
* </p>
* <p>
* Recording will start as soon as the user invokes the record method. The

@@ -154,0 +161,0 @@ * recorder will then store, in the location indicated, the media that the

@@ -45,201 +45,326 @@ /* Autogenerated with Kurento Idl */

* @classdesc
* Control interface for Kurento WebRTC endpoint.
* <p>
* Control interface for Kurento WebRTC endpoint.
* </p>
* <p>
* This endpoint is one side of a peer-to-peer WebRTC communication,
* being the other peer a WebRTC capable browser -using the
* RTCPeerConnection API-, a native WebRTC app or even another Kurento
* Media Server.
* </p>
* <p>
* In order to establish a WebRTC communication, peers engage in an SDP
* negotiation process, where one of the peers (the offerer) sends an
* offer, while the other peer (the offeree) responds with an answer.
* This endpoint can function in both situations
* <ul>
* <li>
* As offerer: The negotiation process is initiated by the media
* server
* <ul style='list-style-type:circle'>
* <li>KMS generates the SDP offer through the
* <code>generateOffer</code> method. This <i>offer</i> must then
* be sent to the remote peer (the offeree) through the signaling
* channel, for processing.</li>
* <li>The remote peer process the <i>offer</i>, and generates an
* <i>answer</i> to this <i>offer</i>. The <i>answer</i> is sent
* back to the media server.</li>
* <li>Upon receiving the <i>answer</i>, the endpoint must invoke
* the <code>processAnswer</code> method.</li>
* </ul>
* </li>
* <li>
* As offeree: The negotiation process is initiated by the remote
* peer
* <ul>
* <li>The remote peer, acting as offerer, generates an SDP
* <i>offer</i> and sends it to the WebRTC endpoint in
* Kurento.</li>
* <li>The endpoint will process the <i>offer</i> invoking the
* <code>processOffer</code> method. The result of this method will
* <li>The SDP <i>answer</i> must be sent back to the offerer, so
* it can be processed.</li>
* </ul>
* </li>
* </ul>
* </p>
* <p>
* SDPs are sent without ICE candidates, following the Trickle ICE
* optimization. Once the SDP negotiation is completed, both peers
* proceed with the ICE discovery process, intended to set up a
* bidirectional media connection. During this process, each peer
* <ul>
* <li>Discovers ICE candidates for itself, containing pairs of IPs and
* <li>ICE candidates are sent via the signaling channel as they are
* discovered, to the remote peer for probing.</li>
* <li>ICE connectivity checks are run as soon as the new candidate
* description, from the remote peer, is available.</li>
* </ul>
* Once a suitable pair of candidates (one for each peer) is discovered,
* the media session can start. The harvesting process in Kurento, begins
* </p>
* <p>
* It's important to keep in mind that WebRTC connection is an
* asynchronous process, when designing interactions between different
* MediaElements. For example, it would be pointless to start recording
* before media is flowing. In order to be notified of state changes, the
* <ul>
* <li>
* <code>IceComponentStateChange</code>: This event informs only
* about changes in the ICE connection state. Possible values are:
* <ul style='list-style-type:circle'>
* <li><code>DISCONNECTED</code>: No activity scheduled</li>
* <li><code>GATHERING</code>: Gathering local candidates</li>
* <li><code>CONNECTING</code>: Establishing connectivity</li>
* <li><code>CONNECTED</code>: At least one working candidate
* pair</li>
* <li><code>READY</code>: ICE concluded, candidate pair selection
* is now final</li>
* <li><code>FAILED</code>: Connectivity checks have been
* completed, but media connection was not established</li>
* </ul>
* The transitions between states are covered in RFC5245.
* It could be said that it's network-only, as it only takes into
* account the state of the network connection, ignoring other higher
* </li>
* <li>
* <code>IceCandidateFound</code>: Raised when a new candidate is
* discovered. ICE candidates must be sent to the remote peer of the
* connection. Failing to do so for some or all of the candidates
* might render the connection unusable.
* </li>
* <li>
* <code>IceGatheringDone</code>: Raised when the ICE harvesting
* process is completed. This means that all candidates have already
* been discovered.
* </li>
* <li>
* <code>NewCandidatePairSelected</code>: Raised when a new ICE
* candidate pair gets selected. The pair contains both local and
* remote candidates being used for a component. This event can be
* raised during a media session, if a new pair of candidates with
* higher priority in the link are found.
* </li>
* <li>
* <code>DataChannelOpen</code>: Raised when a data channel is open.
* </li>
* <li>
* <code>DataChannelClose</code>: Raised when a data channel is
* closed.
* </li>
* </ul>
* </p>
* <p>
* Registering to any of above events requires the application to provide
* </p>
* <p>
* Flow control and congestion management is one of the most important
* features of WebRTC. WebRTC connections start with the lowest bandwidth
* </p>
* <p>
* The default bandwidth range of the endpoint is 100kbps-500kbps, but it
* <ul>
* <li>
* Input bandwidth control mechanism: Configuration interval used to
* inform remote peer the range of bitrates that can be pushed into
* this WebRtcEndpoint object.
* <ul style='list-style-type:circle'>
* <li>
* setMin/MaxVideoRecvBandwidth: sets Min/Max bitrate limits
* expected for received video stream.
* </li>
* <li>
* setMin/MaxAudioRecvBandwidth: sets Min/Max bitrate limits
* expected for received audio stream.
* </li>
* </ul>
* Max values are announced in the SDP, while min values are set to
* limit the lower value of REMB packages. It follows that min values
* </li>
* <li>
* Output bandwidth control mechanism: Configuration interval used to
* <ul style='list-style-type:circle'>
* <li>
* setMin/MaxVideoSendBandwidth: sets Min/Max bitrate limits for
* </li>
* </ul>
* </li>
* </ul>
* All bandwidth control parameters must be changed before the SDP
* negotiation takes place, and can't be changed afterwards.
* </p>
* <p>
* DataChannels allow other media elements that make use of the DataPad,
* to send arbitrary data. For instance, if there is a filter that
* publishes event information, it'll be sent to the remote peer through
* the channel. There is no API available for programmers to make use of
* this feature in the WebRtcElement. DataChannels can be configured to
* provide the following:
* <ul>
* <li>
* Reliable or partially reliable delivery of sent messages
* </li>
* <li>
* In-order or out-of-order delivery of sent messages
* </li>
* </ul>
* Unreliable, out-of-order delivery is equivalent to raw UDP semantics.
* The message may make it, or it may not, and order is not important.
* However, the channel can be configured to be <i>partially reliable</i>
* </p>
* <p>
* The possibility to create DataChannels in a WebRtcEndpoint must be
* explicitly enabled when creating the endpoint, as this feature is
* disabled by default. If this is the case, they can be created invoking
* <ul>
* <li>
* <code>label</code>: assigns a label to the DataChannel. This can
* help identify each possible channel separately.
* </li>
* <li>
* <code>ordered</code>: specifies if the DataChannel guarantees
* order, which is the default mode. If maxPacketLifetime and
* maxRetransmits have not been set, this enables reliable mode.
* </li>
* <li>
* <code>maxPacketLifeTime</code>: The time window in milliseconds,
* during which transmissions and retransmissions may take place in
* unreliable mode. This forces unreliable mode, even if
* <code>ordered</code> has been activated.
* </li>
* <li>
* <code>maxRetransmits</code>: maximum number of retransmissions
* that are attempted in unreliable mode. This forces unreliable
* mode, even if <code>ordered</code> has been activated.
* </li>
* <li>
* <code>Protocol</code>: Name of the subprotocol used for data
* communication.
* </li>
* </ul>
* This endpoint is one side of a peer-to-peer WebRTC communication, being
* the
* other peer a WebRTC capable browser -using the RTCPeerConnection API-, a
* native WebRTC app or even another Kurento Media Server.
* </p>
* <p>
* In order to establish a WebRTC communication, peers engage in an SDP
* negotiation process, where one of the peers (the offerer) sends an offer,
* while the other peer (the offeree) responds with an answer. This endpoint
* can
* function in both situations
* </p>
* <ul>
* <li>
* As offerer: The negotiation process is initiated by the media server
* <ul>
* <li>
* KMS generates the SDP offer through the
* <code>generateOffer</code> method. This <i>offer</i> must then be
* sent
* to the remote peer (the offeree) through the signaling channel, for
* processing.
* </li>
* <li>
* The remote peer processes the <i>offer</i>, and generates an
* <i>answer</i>. The <i>answer</i> is sent back to the media server.
* </li>
* <li>
* Upon receiving the <i>answer</i>, the endpoint must invoke the
* <code>processAnswer</code> method.
* </li>
* </ul>
* </li>
* <li>
* As offeree: The negotiation process is initiated by the remote peer
* <ul>
* <li>
* The remote peer, acting as offerer, generates an SDP <i>offer</i>
* and
* sends it to the WebRTC endpoint in Kurento.
* </li>
* <li>
* The endpoint will process the <i>offer</i> invoking the
* <code>processOffer</code> method. The result of this method will be
* a
* string, containing an SDP <i>answer</i>.
* </li>
* <li>
* The SDP <i>answer</i> must be sent back to the offerer, so it can be
* processed.
* </li>
* </ul>
* </li>
* </ul>
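The two negotiation roles described above can be sketched as follows. The stub endpoint below is a hypothetical stand-in for a real WebRtcEndpoint (which would come from `pipeline.create('WebRtcEndpoint', ...)`), and the calls are made synchronous for clarity; the real kurento-client methods return promises.

```javascript
// Hypothetical stub standing in for a real WebRtcEndpoint, so the sketch
// can run without a media server. Method names follow the docs above.
function makeStubEndpoint(log) {
  return {
    generateOffer() { log.push('generateOffer'); return 'v=0 (offer)'; },
    processOffer(offer) { log.push('processOffer'); return 'v=0 (answer)'; },
    processAnswer(answer) { log.push('processAnswer'); },
  };
}

// As offerer: KMS generates the offer, the remote peer answers over the
// signaling channel, and the endpoint processes that answer.
function negotiateAsOfferer(endpoint, remotePeer) {
  const offer = endpoint.generateOffer();
  const answer = remotePeer.answer(offer);
  endpoint.processAnswer(answer);
}

// As offeree: processOffer returns the SDP answer, which must then be
// sent back to the remote offerer through the signaling channel.
function negotiateAsOfferee(endpoint, remoteOffer) {
  return endpoint.processOffer(remoteOffer);
}

const log = [];
const ep = makeStubEndpoint(log);
negotiateAsOfferer(ep, { answer: () => 'v=0 (remote answer)' });
console.log(log.join(','));  // generateOffer,processAnswer
```

Either way, the SDP exchange itself travels over an application-provided signaling channel; Kurento does not transport it for you.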
* <p>
* SDPs are sent without ICE candidates, following the Trickle ICE
* optimization.
* Once the SDP negotiation is completed, both peers proceed with the ICE
* discovery process, intended to set up a bidirectional media connection.
* During
* this process, each peer
* </p>
* <ul>
* <li>
* Discovers ICE candidates for itself, containing pairs of IPs and ports.
* </li>
* <li>
* ICE candidates are sent via the signaling channel as they are
* discovered, to
* the remote peer for probing.
* </li>
* <li>
* ICE connectivity checks are run as soon as the new candidate
* description,
* from the remote peer, is available.
* </li>
* </ul>
* <p>
* Once a suitable pair of candidates (one for each peer) is discovered, the
* media session can start. The harvesting process in Kurento, begins with
* the
* invocation of the <code>gatherCandidates</code> method. Since the whole
* Trickle ICE purpose is to speed-up connectivity, candidates are generated
* asynchronously. Therefore, in order to capture the candidates, the user
* must
* subscribe to the event <code>IceCandidateFound</code>. It is important
* that
* the event listener is bound before invoking <code>gatherCandidates</code>,
* otherwise a suitable candidate might be lost, and connection might not be
* established.
* </p>
* <p>
* It's important to keep in mind that WebRTC connection is an asynchronous
* process, when designing interactions between different MediaElements. For
* example, it would be pointless to start recording before media is flowing. In
* order to be notified of state changes, the application can subscribe to
* events
* generated by the WebRtcEndpoint. Following is a full list of events
* generated
* by WebRtcEndpoint:
* </p>
* <ul>
* <li>
* <code>IceComponentStateChange</code>: This event informs only about
* changes
* in the ICE connection state. Possible values are:
* <ul>
* <li><code>DISCONNECTED</code>: No activity scheduled</li>
* <li><code>GATHERING</code>: Gathering local candidates</li>
* <li><code>CONNECTING</code>: Establishing connectivity</li>
* <li><code>CONNECTED</code>: At least one working candidate pair</li>
* <li>
* <code>READY</code>: ICE concluded, candidate pair selection is now
* final
* </li>
* <li>
* <code>FAILED</code>: Connectivity checks have been completed, but
* media
* connection was not established
* </li>
* </ul>
* The transitions between states are covered in RFC5245. It could be said
* that
* it's network-only, as it only takes into account the state of the
* network
* connection, ignoring other higher level stuff, like DTLS handshake, RTCP
* flow, etc. This implies that, while the component state is
* <code>CONNECTED</code>, there might be no media flowing between the
* peers.
* This makes this event useful only to receive low-level information about
* connection between peers. Even more, while other events might leave a
* graceful period of time before firing, this event fires immediately
* after
* the state change is detected.
* </li>
* <li>
* <code>IceCandidateFound</code>: Raised when a new candidate is
* discovered.
* ICE candidates must be sent to the remote peer of the connection.
* Failing to
* do so for some or all of the candidates might render the connection
* unusable.
* </li>
* <li>
* <code>IceGatheringDone</code>: Raised when the ICE harvesting process is
* completed. This means that all candidates have already been discovered.
* </li>
* <li>
* <code>NewCandidatePairSelected</code>: Raised when a new ICE candidate
* pair
* gets selected. The pair contains both local and remote candidates being
* used
* for a component. This event can be raised during a media session, if a
* new
* pair of candidates with higher priority in the link are found.
* </li>
* <li><code>DataChannelOpen</code>: Raised when a data channel is open.</li>
* <li><code>DataChannelClose</code>: Raised when a data channel is
* closed.</li>
* </ul>
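The IceComponentStateChange values listed above can be captured in a simple lookup; the helper name is an assumption of this sketch, not a kurento-client API. Note the caveat from the docs: `CONNECTED` only means the network path works, so media may still not be flowing.

```javascript
// States and meanings taken verbatim from the event list above.
const ICE_STATE_INFO = {
  DISCONNECTED: 'No activity scheduled',
  GATHERING: 'Gathering local candidates',
  CONNECTING: 'Establishing connectivity',
  CONNECTED: 'At least one working candidate pair',
  READY: 'ICE concluded, candidate pair selection is now final',
  FAILED: 'Connectivity checks completed, but no media connection',
};

// Hypothetical listener helper; the event payload shape ({ state }) is an
// assumption for illustration.
function describeIceState(event) {
  const info = ICE_STATE_INFO[event.state];
  return info ? `${event.state}: ${info}` : `unknown state ${event.state}`;
}

console.log(describeIceState({ state: 'CONNECTED' }));
```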
* <p>
* Registering to any of above events requires the application to provide a
* callback function. Each event provides different information, so it is
* recommended to consult the signature of the event listeners.
* </p>
* <p>
* Flow control and congestion management is one of the most important
* features
* of WebRTC. WebRTC connections start with the lowest bandwidth configured
* and
* slowly ramp up to the maximum available bandwidth, or to the higher limit
* of the exploration range in case no bandwidth limitation is detected. Notice
* that
* WebRtcEndpoints in Kurento are designed in a way that multiple WebRTC
* connections fed by the same stream share quality. When a new connection is
* added, as it requires to start with low bandwidth, it will cause the rest
* of
* connections to experience a transient period of degraded quality, until it
* stabilizes its bitrate. This doesn't apply when transcoding is involved.
* Transcoders will adjust their output bitrate based in bandwidth
* requirements,
* but it won't affect the original stream. If an incoming WebRTC stream
* needs to
* be transcoded, for whatever reason, all WebRtcEndpoints fed from
* transcoder
* output will share a separate quality than the ones connected directly to
* the
* original stream.
* </p>
* <p>
* The default bandwidth range of the endpoint is
* <strong>[100 kbps, 500 kbps]</strong>, but it can be changed separately
* for
* input/output directions and for audio/video streams.
* </p>
* <p>
* <strong>
* Check the extended documentation of these parameters in
* {@link module:core/abstracts.SdpEndpoint SdpEndpoint}, {@link
* module:core/abstracts.BaseRtpEndpoint BaseRtpEndpoint}, and
* {@link module:core/complexTypes.RembParams RembParams}.
* </strong>
* </p>
* <ul>
* <li>
* Input bandwidth: Configuration value used to inform remote peers about
* the
* bitrate that can be pushed into this endpoint.
* <ul>
* <li>
* <strong>{get,set}MinVideoRecvBandwidth</strong>: Minimum bitrate
* requested on the received video stream.
* </li>
* <li>
* <strong>{get,set}Max{Audio,Video}RecvBandwidth</strong>: Maximum
* bitrate
* expected for the received stream.
* </li>
* </ul>
* </li>
* <li>
* Output bandwidth: Configuration values used to control bitrate of the
* output
* video stream sent to remote peers. It is important to keep in mind that
* pushed bitrate depends on network and remote peer capabilities. Remote
* peers
* can also announce bandwidth limitation in their SDPs (through the
* <code>b={modifier}:{value}</code> tag). Kurento will always enforce
* bitrate
* limitations specified by the remote peer over internal configurations.
* <ul>
* <li>
* <strong>{get,set}MinVideoSendBandwidth</strong>: Minimum video
* bitrate
* sent to remote peer.
* </li>
* <li>
* <strong>{get,set}MaxVideoSendBandwidth</strong>: Maximum video
* bitrate
* sent to remote peer.
* </li>
* <li>
* <strong>RembParams.rembOnConnect</strong>: Initial local REMB
* bandwidth
* estimation that gets propagated when a new endpoint is connected.
* </li>
* </ul>
* </li>
* </ul>
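Since all of these parameters must be fixed before SDP negotiation, a sanity check up front is cheap insurance. This helper is a hypothetical sketch, not part of kurento-client; the defaults follow the documented [100 kbps, 500 kbps] range.

```javascript
// Validate a bandwidth configuration before applying it with
// set{Min,Max}{Audio,Video}{Recv,Send}Bandwidth. Values are kbps.
function checkBandwidthRange({ min = 100, max = 500 } = {}) {
  if (!Number.isFinite(min) || !Number.isFinite(max)) {
    throw new Error('bandwidth values must be numbers (kbps)');
  }
  if (min < 0 || max < 0) throw new Error('bandwidth must be non-negative');
  if (min > max) throw new Error(`min (${min}) exceeds max (${max})`);
  return { min, max };
}

console.log(checkBandwidthRange());  // default documented range
```

Remember that the check (and the real setters) must run before the SDP negotiation takes place; afterwards the values cannot be changed.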
* <p>
* <strong>
* All bandwidth control parameters must be changed before the SDP
* negotiation
* takes place, and can't be changed afterwards.
* </strong>
* </p>
* <p>
* DataChannels allow other media elements that make use of the DataPad, to
* send
* arbitrary data. For instance, if there is a filter that publishes event
* information, it'll be sent to the remote peer through the channel. There
* is no
* API available for programmers to make use of this feature in the
* WebRtcElement. DataChannels can be configured to provide the following:
* </p>
* <ul>
* <li>
* Reliable or partially reliable delivery of sent messages
* </li>
* <li>
* In-order or out-of-order delivery of sent messages
* </li>
* </ul>
* <p>
* Unreliable, out-of-order delivery is equivalent to raw UDP semantics. The
* message may make it, or it may not, and order is not important. However,
* the
* channel can be configured to be <i>partially reliable</i> by specifying
* the
* maximum number of retransmissions or setting a time limit for
* retransmissions:
* the WebRTC stack will handle the acknowledgments and timeouts.
* </p>
* <p>
* The possibility to create DataChannels in a WebRtcEndpoint must be
* explicitly
* enabled when creating the endpoint, as this feature is disabled by
* default. If
* this is the case, they can be created invoking the createDataChannel
* method.
* The arguments for this method, all of them optional, provide the necessary
* configuration:
* </p>
* <ul>
* <li>
* <code>label</code>: assigns a label to the DataChannel. This can help
* identify each possible channel separately.
* </li>
* <li>
* <code>ordered</code>: specifies if the DataChannel guarantees order,
* which
* is the default mode. If maxPacketLifetime and maxRetransmits have not
* been
* set, this enables reliable mode.
* </li>
* <li>
* <code>maxPacketLifeTime</code>: The time window in milliseconds, during
* which transmissions and retransmissions may take place in unreliable
* mode.
* This forces unreliable mode, even if <code>ordered</code> has been
* activated.
* </li>
* <li>
* <code>maxRetransmits</code>: maximum number of retransmissions that are
* attempted in unreliable mode. This forces unreliable mode, even if
* <code>ordered</code> has been activated.
* </li>
* <li>
* <code>Protocol</code>: Name of the subprotocol used for data
* communication.
* </li>
* </ul>
*
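The interaction between the createDataChannel options above can be made explicit: ordered delivery is the default, and setting either maxPacketLifeTime or maxRetransmits forces unreliable (partially reliable) mode even when ordered is requested. The resolver function itself is a hypothetical illustration; only the option names come from the docs.

```javascript
// Decide the effective DataChannel mode from the documented options.
function resolveDataChannelMode({ ordered = true,
                                  maxPacketLifeTime,
                                  maxRetransmits } = {}) {
  // Either retransmission limit forces unreliable mode.
  const unreliable = maxPacketLifeTime !== undefined ||
                     maxRetransmits !== undefined;
  return {
    ordered,
    reliability: unreliable ? 'partially-reliable' : 'reliable',
  };
}

console.log(resolveDataChannelMode());                      // defaults: ordered, reliable
console.log(resolveDataChannelMode({ maxRetransmits: 5 })); // forced partially-reliable
```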

@@ -273,2 +398,120 @@ * @extends module:core/abstracts.BaseRtpEndpoint

/**
* External (public) IP address of the media server.
* <p>
* If you know what will be the external or public IP address of the media
* server
* (e.g. because your deployment has a static IP), you can specify it here.
* Doing so has the advantage of not needing to configure STUN/TURN for the
* media
* server.
* </p>
* <p>
* STUN/TURN are needed only when the media server sits behind a NAT and needs
* to find out its own external IP address. However, if you set a static external
* address with this parameter, then there is no need for the STUN/TURN
* auto-discovery.
* </p>
* <p>
* The effect of this parameter is that ALL local ICE candidates that are
* gathered (for WebRTC) will contain the provided external IP address instead
* of the local one.
* </p>
* <p>
* <code>externalAddress</code> is an IPv4 or IPv6 address.
* </p>
* <p>Examples:</p>
* <ul>
* <li><code>externalAddress=10.70.35.2</code></li>
* <li><code>externalAddress=2001:0db8:85a3:0000:0000:8a2e:0370:7334</code></li>
* </ul>
*
* @alias module:elements.WebRtcEndpoint#getExternalAddress
*
* @param {module:elements.WebRtcEndpoint~getExternalAddressCallback} [callback]
*
* @return {external:Promise}
*/
WebRtcEndpoint.prototype.getExternalAddress = function(callback){
var transaction = (arguments[0] instanceof Transaction)
? Array.prototype.shift.apply(arguments)
: undefined;
var usePromise = false;
if (callback == undefined) {
usePromise = true;
}
if(!arguments.length) callback = undefined;
callback = (callback || noop).bind(this)
return disguise(this._invoke(transaction, 'getExternalAddress', callback), this)
};
/**
* @callback module:elements.WebRtcEndpoint~getExternalAddressCallback
* @param {external:Error} error
* @param {external:String} result
*/
/**
* External (public) IP address of the media server.
* <p>
* If you know what will be the external or public IP address of the media
* server
* (e.g. because your deployment has a static IP), you can specify it here.
* Doing so has the advantage of not needing to configure STUN/TURN for the
* media
* server.
* </p>
* <p>
* STUN/TURN are needed only when the media server sits behind a NAT and needs
* to find out its own external IP address. However, if you set a static external
* address with this parameter, then there is no need for the STUN/TURN
* auto-discovery.
* </p>
* <p>
* The effect of this parameter is that ALL local ICE candidates that are
* gathered (for WebRTC) will contain the provided external IP address instead
* of the local one.
* </p>
* <p>
* <code>externalAddress</code> is an IPv4 or IPv6 address.
* </p>
* <p>Examples:</p>
* <ul>
* <li><code>externalAddress=10.70.35.2</code></li>
* <li><code>externalAddress=2001:0db8:85a3:0000:0000:8a2e:0370:7334</code></li>
* </ul>
*
* @alias module:elements.WebRtcEndpoint#setExternalAddress
*
* @param {external:String} externalAddress
* @param {module:elements.WebRtcEndpoint~setExternalAddressCallback} [callback]
*
* @return {external:Promise}
*/
WebRtcEndpoint.prototype.setExternalAddress = function(externalAddress, callback){
var transaction = (arguments[0] instanceof Transaction)
? Array.prototype.shift.apply(arguments)
: undefined;
//
// checkType('String', 'externalAddress', externalAddress, {required: true});
//
var params = {
externalAddress: externalAddress
};
callback = (callback || noop).bind(this)
return disguise(this._invoke(transaction, 'setExternalAddress', params, callback), this)
};
/**
* @callback module:elements.WebRtcEndpoint~setExternalAddressCallback
* @param {external:Error} error
*/
/**
* the ICE candidate pair (local and remote candidates) used by the ice library

@@ -339,4 +582,145 @@ * for each stream.

/**
* address of the STUN server (Only IP address are supported)
* Local network interfaces used for ICE gathering.
* <p>
* If you know which network interfaces should be used to perform ICE (for
* WebRTC
* connectivity), you can define them here. Doing so has several advantages:
* </p>
* <ul>
* <li>
* The WebRTC ICE gathering process will be much quicker. Normally, it needs
* to gather local candidates for all of the network interfaces, but this step
* can
* be made faster if you limit it to only the interface that you know will
* work.
* </li>
* <li>
* It will ensure that the media server always decides to use the correct
* network interface. With WebRTC ICE gathering it's possible that, under
* some
* circumstances (in systems with virtual network interfaces such as
* <code>docker0</code>) the ICE process ends up choosing the wrong local
* IP.
* </li>
* </ul>
* <p>
* <code>networkInterfaces</code> is a comma-separated list of network
* interface
* names.
* </p>
* <p>Examples:</p>
* <ul>
* <li><code>networkInterfaces=eth0</code></li>
* <li><code>networkInterfaces=eth0,enp0s25</code></li>
* </ul>
*
* @alias module:elements.WebRtcEndpoint#getNetworkInterfaces
*
* @param {module:elements.WebRtcEndpoint~getNetworkInterfacesCallback} [callback]
*
* @return {external:Promise}
*/
WebRtcEndpoint.prototype.getNetworkInterfaces = function(callback){
var transaction = (arguments[0] instanceof Transaction)
? Array.prototype.shift.apply(arguments)
: undefined;
var usePromise = false;
if (callback == undefined) {
usePromise = true;
}
if(!arguments.length) callback = undefined;
callback = (callback || noop).bind(this)
return disguise(this._invoke(transaction, 'getNetworkInterfaces', callback), this)
};
/**
* @callback module:elements.WebRtcEndpoint~getNetworkInterfacesCallback
* @param {external:Error} error
* @param {external:String} result
*/
/**
* Local network interfaces used for ICE gathering.
* <p>
* If you know which network interfaces should be used to perform ICE (for
* WebRTC
* connectivity), you can define them here. Doing so has several advantages:
* </p>
* <ul>
* <li>
* The WebRTC ICE gathering process will be much quicker. Normally, it needs
* to gather local candidates for all of the network interfaces, but this step
* can
* be made faster if you limit it to only the interface that you know will
* work.
* </li>
* <li>
* It will ensure that the media server always decides to use the correct
* network interface. With WebRTC ICE gathering it's possible that, under
* some
* circumstances (in systems with virtual network interfaces such as
* <code>docker0</code>) the ICE process ends up choosing the wrong local
* IP.
* </li>
* </ul>
* <p>
* <code>networkInterfaces</code> is a comma-separated list of network
* interface
* names.
* </p>
* <p>Examples:</p>
* <ul>
* <li><code>networkInterfaces=eth0</code></li>
* <li><code>networkInterfaces=eth0,enp0s25</code></li>
* </ul>
*
* @alias module:elements.WebRtcEndpoint#setNetworkInterfaces
*
* @param {external:String} networkInterfaces
* @param {module:elements.WebRtcEndpoint~setNetworkInterfacesCallback} [callback]
*
* @return {external:Promise}
*/
WebRtcEndpoint.prototype.setNetworkInterfaces = function(networkInterfaces, callback){
var transaction = (arguments[0] instanceof Transaction)
? Array.prototype.shift.apply(arguments)
: undefined;
//
// checkType('String', 'networkInterfaces', networkInterfaces, {required: true});
//
var params = {
networkInterfaces: networkInterfaces
};
callback = (callback || noop).bind(this)
return disguise(this._invoke(transaction, 'setNetworkInterfaces', params, callback), this)
};
/**
* @callback module:elements.WebRtcEndpoint~setNetworkInterfacesCallback
* @param {external:Error} error
*/
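As a usage sketch for the property above, a pair of helpers can build and split the comma-separated `networkInterfaces` value. The helper names are hypothetical (not part of kurento-client); only the string format comes from the documentation:

```javascript
// Hypothetical helpers for the comma-separated networkInterfaces value.
// Only the string format itself is defined by the documentation above.
function buildNetworkInterfaces(names) {
  return names.map(function (name) { return name.trim(); })
              .filter(Boolean)
              .join(',');
}

function parseNetworkInterfaces(value) {
  return value.split(',')
              .map(function (name) { return name.trim(); })
              .filter(Boolean);
}

console.log(buildNetworkInterfaces(['eth0', 'enp0s25'])); // 'eth0,enp0s25'
```

With a live endpoint, the resulting string would then be passed to the `setNetworkInterfaces(networkInterfaces, callback)` method defined above.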
/**
* STUN server IP address.
* <p>The ICE process uses STUN to punch holes through NAT firewalls.</p>
* <p>
* <code>stunServerAddress</code> MUST be an IP address; domain names are NOT
* supported.
* </p>
* <p>
* You need to use a well-working STUN server. Use this to check if it
* works:<br />
 * https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/<br />
 * From that check, you should get at least one Server-Reflexive Candidate
 * (type <code>srflx</code>).
* </p>
*
* @alias module:elements.WebRtcEndpoint#getStunServerAddress

@@ -372,3 +756,16 @@ *

/**
* address of the STUN server (Only IP address are supported)
* STUN server IP address.
* <p>The ICE process uses STUN to punch holes through NAT firewalls.</p>
* <p>
* <code>stunServerAddress</code> MUST be an IP address; domain names are NOT
* supported.
* </p>
* <p>
* You need to use a well-working STUN server. Use this to check if it
* works:<br />
 * https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/<br />
 * From that check, you should get at least one Server-Reflexive Candidate
 * (type <code>srflx</code>).
* </p>
*
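Since `stunServerAddress` rejects domain names, an application can guard the value before calling `setStunServerAddress`. A minimal sketch (the validator is hypothetical, not part of the kurento-client API, and only accepts dotted-quad IPv4 addresses):

```javascript
// Hypothetical guard: reject anything that is not a dotted-quad IPv4
// address, since stunServerAddress does not accept domain names.
function isIpv4Address(value) {
  var parts = String(value).split('.');
  if (parts.length !== 4) return false;
  return parts.every(function (part) {
    return /^\d{1,3}$/.test(part) && Number(part) <= 255;
  });
}

console.log(isIpv4Address('198.51.100.7'));    // true
console.log(isIpv4Address('stun.example.com')); // false
```

Checking the value client-side avoids a silent misconfiguration on the media server, where a hostname would simply fail to resolve into working server-reflexive candidates.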

@@ -405,3 +802,3 @@ * @alias module:elements.WebRtcEndpoint#setStunServerAddress

/**
* port of the STUN server
* Port of the STUN server
*

@@ -438,3 +835,3 @@ * @alias module:elements.WebRtcEndpoint#getStunServerPort

/**
* port of the STUN server
* Port of the STUN server
*

@@ -471,4 +868,34 @@ * @alias module:elements.WebRtcEndpoint#setStunServerPort

/**
* TURN server URL with this format:
* <code>user:password@address:port(?transport=[udp|tcp|tls])</code>.</br><code>address</code>
* TURN server URL.
 * <p>
 * When STUN is not enough to open connections through some NAT firewalls,
 * using TURN is the remaining alternative.
 * </p>
 * <p>
 * Note that TURN is a superset of STUN, so you don't need to configure STUN
 * if you are using TURN.
 * </p>
* <p>The provided URL should follow one of these formats:</p>
* <ul>
* <li><code>user:password@ipaddress:port</code></li>
* <li>
* <code>user:password@ipaddress:port?transport=[udp|tcp|tls]</code>
* </li>
* </ul>
* <p>
* <code>ipaddress</code> MUST be an IP address; domain names are NOT
* supported.<br />
* <code>transport</code> is OPTIONAL. Possible values: udp, tcp, tls.
* Default: udp.
* </p>
* <p>
* You need to use a well-working TURN server. Use this to check if it
* works:<br />
 * https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/<br />
 * From that check, you should get at least one Server-Reflexive Candidate
 * (type <code>srflx</code>) AND one Relay Candidate (type <code>relay</code>).
* </p>
*

@@ -505,4 +932,34 @@ * @alias module:elements.WebRtcEndpoint#getTurnUrl

/**
* TURN server URL with this format:
* <code>user:password@address:port(?transport=[udp|tcp|tls])</code>.</br><code>address</code>
* TURN server URL.
 * <p>
 * When STUN is not enough to open connections through some NAT firewalls,
 * using TURN is the remaining alternative.
 * </p>
 * <p>
 * Note that TURN is a superset of STUN, so you don't need to configure STUN
 * if you are using TURN.
 * </p>
* <p>The provided URL should follow one of these formats:</p>
* <ul>
* <li><code>user:password@ipaddress:port</code></li>
* <li>
* <code>user:password@ipaddress:port?transport=[udp|tcp|tls]</code>
* </li>
* </ul>
* <p>
* <code>ipaddress</code> MUST be an IP address; domain names are NOT
* supported.<br />
* <code>transport</code> is OPTIONAL. Possible values: udp, tcp, tls.
* Default: udp.
* </p>
* <p>
* You need to use a well-working TURN server. Use this to check if it
* works:<br />
 * https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/<br />
 * From that check, you should get at least one Server-Reflexive Candidate
 * (type <code>srflx</code>) AND one Relay Candidate (type <code>relay</code>).
* </p>
*
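The accepted TURN URL formats listed above can be assembled with a small helper. The builder is hypothetical (not part of kurento-client); only the `user:password@ipaddress:port(?transport=...)` syntax comes from the documentation:

```javascript
// Hypothetical builder for the TURN URL format described above:
//   user:password@ipaddress:port(?transport=[udp|tcp|tls])
function buildTurnUrl(opts) {
  var url = opts.user + ':' + opts.password + '@' + opts.ip + ':' + opts.port;
  if (opts.transport) {
    if (['udp', 'tcp', 'tls'].indexOf(opts.transport) === -1) {
      throw new Error('transport must be one of: udp, tcp, tls');
    }
    url += '?transport=' + opts.transport;
  }
  return url; // when transport is omitted, the server defaults to udp
}

console.log(buildTurnUrl({
  user: 'kurento',
  password: 'secret',
  ip: '203.0.113.5',
  port: 3478,
  transport: 'tcp'
})); // 'kurento:secret@203.0.113.5:3478?transport=tcp'
```

The resulting string is what would be handed to `setTurnUrl`; remember that the `ip` field must be a literal IP address, not a hostname.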

@@ -612,17 +1069,33 @@ * @alias module:elements.WebRtcEndpoint#setTurnUrl

/**
* Create a new data channel, if data channels are supported. If they are not
* supported, this method throws an exception.
* Being supported means that the WebRtcEndpoint has been created with
* Otherwise, the method throws an exception, indicating that the
* operation is not possible.</br>
* Data channels can work in either unreliable mode (analogous to User
* The two modes have a simple distinction:
* <ul>
* <li>Reliable mode guarantees the transmission of messages and
* also the order in which they are delivered. This takes extra
* overhead, thus potentially making this mode slower.</li>
* <li>Unreliable mode does not guarantee every message will get to
* the other side nor what order they get there. This removes the
* overhead, allowing this mode to work much faster.</li>
* </ul>
* Create a new data channel, if data channels are supported.
* <p>
* Being supported means that the WebRtcEndpoint has been created with data
* channel support, the client also supports data channels, and they have been
* negotiated in the SDP exchange. Otherwise, the method throws an exception,
* indicating that the operation is not possible.
* </p>
 * <p>
 * Data channels can work in either unreliable mode (analogous to User
 * Datagram Protocol or UDP) or reliable mode (analogous to Transmission
 * Control Protocol or TCP). The two modes have a simple distinction:
 * </p>
* <ul>
 * <li>
 * Reliable mode guarantees the transmission of messages and also the order
 * in which they are delivered. This takes extra overhead, thus potentially
 * making this mode slower.
 * </li>
 * <li>
 * Unreliable mode does not guarantee every message will get to the other
 * side nor what order they get there. This removes the overhead, allowing
 * this mode to work much faster.
 * </li>
* </ul>
* <p>If data channels are not supported, this method throws an exception.</p>
*
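The reliability rule stated in the parameter descriptions can be sketched as a pure function. This is illustrative only, not part of the kurento-client API:

```javascript
// Illustrative sketch (not part of the kurento-client API) of the rule
// described above: providing maxPacketLifeTime or maxRetransmits forces
// the data channel into unreliable mode, even if ordered was activated.
function isReliableMode(opts) {
  return opts.maxPacketLifeTime == null && opts.maxRetransmits == null;
}

console.log(isReliableMode({ ordered: true }));                    // true
console.log(isReliableMode({ ordered: true, maxRetransmits: 2 })); // false
```

With a live endpoint, these options map onto the optional `createDataChannel` parameters documented above (`label`, `ordered`, `maxPacketLifeTime`, `maxRetransmits`, `protocol`).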

@@ -641,11 +1114,8 @@ * @alias module:elements.WebRtcEndpoint.createDataChannel

* The time window (in milliseconds) during which transmissions and
* retransmissions may take place in unreliable mode.</br>
* <hr/><b>Note</b> This forces unreliable mode, even if
* <code>ordered</code> has been activated
* retransmissions may take place in unreliable mode.
 * Note that this forces unreliable mode, even if <code>ordered</code> has
 * been activated.
*
* @param {external:Integer} [maxRetransmits]
* maximum number of retransmissions that are attempted in unreliable
* mode.</br>
* <hr/><b>Note</b> This forces unreliable mode, even if
* <code>ordered</code> has been activated
* maximum number of retransmissions that are attempted in unreliable mode.
 * Note that this forces unreliable mode, even if <code>ordered</code> has
 * been activated.
*

@@ -717,5 +1187,10 @@ * @param {external:String} [protocol]

/**
* Start the gathering of ICE candidates.</br>It must be called after
* SdpEndpoint::generateOffer or SdpEndpoint::processOffer for Trickle ICE. If
* invoked before generating or processing an SDP offer, the candidates gathered
* Start the gathering of ICE candidates.
* <p>
* It must be called after <code>SdpEndpoint::generateOffer</code> or
* <code>SdpEndpoint::processOffer</code> for <strong>Trickle ICE</strong>. If
 * invoked before generating or processing an SDP offer, the candidates
 * gathered will be added to the processed SDP.
* </p>
*

@@ -722,0 +1197,0 @@ * @alias module:elements.WebRtcEndpoint.gatherCandidates

{
"name": "kurento-client-elements",
"version": "6.12.0",
"version": "6.13.0",
"description": "JavaScript Client API for Kurento Media Server",

@@ -5,0 +5,0 @@ "repository": {

{
"name": "elements",
"version": "6.12.0",
"version": "6.13.0",
"kurentoVersion": "^6.7.0",

@@ -18,3 +18,3 @@ "imports": [

"mavenArtifactId": "kms-api-elements",
"mavenVersion": "6.12.0"
"mavenVersion": "6.13.0"
}

@@ -27,3 +27,3 @@ },

"mavenArtifactId": "kurento-client",
"mavenVersion": "6.12.0"
"mavenVersion": "6.13.0"
},

@@ -34,3 +34,3 @@ "js": {

"npmGit": "Kurento/kurento-client-elements-js",
"npmVersion": "6.12.0"
"npmVersion": "6.13.0"
}

@@ -367,3 +367,3 @@ },

"name": "RecorderEndpoint",
"doc": "\n<p>\n Provides the functionality to store contents. The recorder can store in local\n files or in a network resource. It receives a media stream from another\n MediaElement (i.e. the source), and stores it in the designated location.\n</p>\n\n<p>\n The following information has to be provided In order to create a\n RecorderEndpoint, and cannot be changed afterwards:\n</p>\n\n<ul>\n <li>\n URI of the resource where media will be stored. Following schemas are\n supported:\n <ul>\n <li>\n Files: mounted in the local file system.\n <ul>\n <li><code>file:///path/to/file</code></li>\n </ul>\n </li>\n\n <li>\n HTTP: Requires the server to support method PUT\n <ul>\n <li><code>http(s)://{server-ip}/path/to/file</code></li>\n <li>\n <code>http(s)://username:password@{server-ip}/path/to/file</code>\n </li>\n </ul>\n </li>\n </ul>\n </li>\n <li>\n Relative URIs (with no schema) are supported. They are completed prepending\n a default URI defined by property <i>defaultPath</i>. This property is\n defined in the configuration file\n <i>/etc/kurento/modules/kurento/UriEndpoint.conf.ini</i>, and the default\n value is <code>file:///var/lib/kurento/</code>\n </li>\n <li>\n The media profile (@MediaProfileSpecType) used to store the file. This will\n determine the encoding. See below for more details about media profile.\n </li>\n <li>\n Optionally, the user can select if the endpoint will stop processing once\n the EndOfStream event is detected.\n </li>\n</ul>\n\n<p>\n RecorderEndpoint requires access to the resource where stream is going to be\n recorded. If it's a local file (<code>file://</code>), the system user running\n the media server daemon (kurento by default), needs to have write permissions\n for that URI. If it's an HTTP server, it must be accessible from the machine\n where media server is running, and also have the correct access rights.\n Otherwise, the media server won't be able to store any information, and an\n ErrorEvent will be fired. 
Please note that if you haven't subscribed to that\n type of event, you can be left wondering why your media is not being saved,\n while the error message was ignored.\n</p>\n\n<p>\n The media profile is quite an important parameter, as it will determine\n whether the server needs to perform on-the-fly transcoding of the media. If\n the input stream codec if not compatible with the selected media profile, the\n media will be transcoded into a suitable format. This will result in a higher\n CPU load and will impact overall performance of the media server.\n</p>\n\nFor example: Say that your pipeline will receive <b>VP8</b>-encoded video from\nWebRTC, and sends it to a RecorderEndpoint; depending on the format selected...\n<ul>\n <li>\n WEBM: The input codec is the same as the recording format, so no transcoding\n will take place.\n </li>\n <li>\n MP4: The media server will have to transcode from <b>VP8</b> to <b>H264</b>.\n This will raise the CPU load in the system.\n </li>\n <li>\n MKV: Again, video must be transcoded from <b>VP8</b> to <b>H264</b>, which\n means more CPU load.\n </li>\n</ul>\nFrom this you can see how selecting the correct format for your application is a\nvery important decision.\n\n<p>\n Recording will start as soon as the user invokes the record method. The\n recorder will then store, in the location indicated, the media that the source\n is sending to the endpoint's sink. If no media is being received, or no\n endpoint has been connected, then the destination will be empty. The recorder\n starts storing information into the file as soon as it gets it.\n</p>\n\n<p>\n When another endpoint is connected to the recorder, by default both AUDIO and\n VIDEO media types are expected, unless specified otherwise when invoking the\n connect method. Failing to provide both types, will result in teh recording\n buffering the received media: it won't be written to the file until the\n recording is stopped. 
This is due to the recorder waiting for the other type\n of media to arrive, so they are synchronized.\n</p>\n\n<p>\n The source endpoint can be hot-swapped, while the recording is taking place.\n The recorded file will then contain different feeds. When switching video\n sources, if the new video has different size, the recorder will retain the\n size of the previous source. If the source is disconnected, the last frame\n recorded will be shown for the duration of the disconnection, or until the\n recording is stopped.\n</p>\n\n<p>\n It is recommended to start recording only after media arrives, either to the\n endpoint that is the source of the media connected to the recorder, to the\n recorder itself, or both. Users may use the MediaFlowIn and MediaFlowOut\n events, and synchronize the recording with the moment media comes in. In any\n case, nothing will be stored in the file until the first media packets arrive.\n</p>\n\n<p>\n Stopping the recording process is done through the stopAndWait method, which\n will return only after all the information was stored correctly. If the file\n is empty, this means that no media arrived at the recorder.\n</p>\n ",
"doc": "Provides the functionality to store contents.\n<p>\n The recorder can store in local files or in a network resource. It receives a\n media stream from another :rom:cls:`MediaElement` (i.e. the source), and\n stores it in the designated location.\n</p>\n<p>\n The following information has to be provided in order to create a\n RecorderEndpoint, and cannot be changed afterwards:\n</p>\n<ul>\n <li>\n URI of the resource where media will be stored. Following schemas are\n supported:\n <ul>\n <li>\n Files: mounted in the local file system.\n <ul>\n <li><code>file:///path/to/file</code></li>\n </ul>\n </li>\n <li>\n HTTP: Requires the server to support method PUT\n <ul>\n <li><code>http(s)://{server-ip}/path/to/file</code></li>\n <li>\n <code>http(s)://username:password@{server-ip}/path/to/file</code>\n </li>\n </ul>\n </li>\n </ul>\n </li>\n <li>\n Relative URIs (with no schema) are supported. They are completed prepending\n a default URI defined by property <i>defaultPath</i>. This property is\n defined in the configuration file\n <i>/etc/kurento/modules/kurento/UriEndpoint.conf.ini</i>, and the default\n value is <code>file:///var/lib/kurento/</code>\n </li>\n <li>\n The media profile (:rom:attr:`MediaProfileSpecType`) used to store the file.\n This will determine the encoding. See below for more details about media\n profile.\n </li>\n <li>\n Optionally, the user can select if the endpoint will stop processing once\n the EndOfStream event is detected.\n </li>\n</ul>\n<p>\n RecorderEndpoint requires access to the resource where stream is going to be\n recorded. If it's a local file (<code>file://</code>), the system user running\n the media server daemon (kurento by default), needs to have write permissions\n for that URI. 
If it's an HTTP server, it must be accessible from the machine\n where media server is running, and also have the correct access rights.\n Otherwise, the media server won't be able to store any information, and an\n :rom:evt:`Error` will be fired. Please note that if you haven't subscribed to\n that type of event, you can be left wondering why your media is not being\n saved, while the error message was ignored.\n</p>\n<p>\n The media profile is quite an important parameter, as it will determine\n whether the server needs to perform on-the-fly transcoding of the media. If\n the input stream codec is not compatible with the selected media profile, the\n media will be transcoded into a suitable format. This will result in a higher\n CPU load and will impact overall performance of the media server.\n</p>\n<p>\n For example: Say that your pipeline will receive <b>VP8</b>-encoded video from\n WebRTC, and sends it to a RecorderEndpoint; depending on the format\n selected...\n</p>\n<ul>\n <li>\n WEBM: The input codec is the same as the recording format, so no transcoding\n will take place.\n </li>\n <li>\n MP4: The media server will have to transcode from <b>VP8</b> to <b>H264</b>.\n This will raise the CPU load in the system.\n </li>\n <li>\n MKV: Again, video must be transcoded from <b>VP8</b> to <b>H264</b>, which\n means more CPU load.\n </li>\n</ul>\n<p>\n From this you can see how selecting the correct format for your application is\n a very important decision.\n</p>\n<p>\n Recording will start as soon as the user invokes the record method. The\n recorder will then store, in the location indicated, the media that the source\n is sending to the endpoint's sink. If no media is being received, or no\n endpoint has been connected, then the destination will be empty. 
The recorder\n starts storing information into the file as soon as it gets it.\n</p>\n<p>\n When another endpoint is connected to the recorder, by default both AUDIO and\n VIDEO media types are expected, unless specified otherwise when invoking the\n connect method. Failing to provide both types, will result in the recording\n buffering the received media: it won't be written to the file until the\n recording is stopped. This is due to the recorder waiting for the other type\n of media to arrive, so they are synchronized.\n</p>\n<p>\n The source endpoint can be hot-swapped, while the recording is taking place.\n The recorded file will then contain different feeds. When switching video\n sources, if the new video has different size, the recorder will retain the\n size of the previous source. If the source is disconnected, the last frame\n recorded will be shown for the duration of the disconnection, or until the\n recording is stopped.\n</p>\n<p>\n It is recommended to start recording only after media arrives, either to the\n endpoint that is the source of the media connected to the recorder, to the\n recorder itself, or both. Users may use the MediaFlowIn and MediaFlowOut\n events, and synchronize the recording with the moment media comes in. In any\n case, nothing will be stored in the file until the first media packets arrive.\n</p>\n<p>\n Stopping the recording process is done through the stopAndWait method, which\n will return only after all the information was stored correctly. If the file\n is empty, this means that no media arrived at the recorder.\n</p>\n ",
"extends": "UriEndpoint",
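The RecorderEndpoint doc above says relative URIs (with no schema) are completed by prepending the <i>defaultPath</i> property, which defaults to <code>file:///var/lib/kurento/</code>. The completion happens inside the media server; the helper below is purely an illustrative sketch of that rule:

```javascript
// Illustrative only: the media server itself performs this completion,
// using the defaultPath property from UriEndpoint.conf.ini.
var DEFAULT_PATH = 'file:///var/lib/kurento/';

function resolveRecordingUri(uri) {
  // URIs that already carry a schema are used as-is.
  if (/^[a-z][a-z0-9+.-]*:\/\//i.test(uri)) return uri;
  // Relative URIs are completed by prepending defaultPath.
  return DEFAULT_PATH + uri.replace(/^\/+/, '');
}

console.log(resolveRecordingUri('recordings/call1.webm'));
// 'file:///var/lib/kurento/recordings/call1.webm'
console.log(resolveRecordingUri('http://media.example.com/call1.mp4'));
// 'http://media.example.com/call1.mp4' (unchanged)
```

Whichever form the final URI takes, the system user running the media server daemon still needs write access to it, or an Error event is fired.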

@@ -451,3 +451,3 @@ "constructor": {

"name": "WebRtcEndpoint",
"doc": "<p>\n Control interface for Kurento WebRTC endpoint.\n </p>\n <p>\n This endpoint is one side of a peer-to-peer WebRTC communication, being the other peer a WebRTC capable browser -using the RTCPeerConnection API-, a native WebRTC app or even another Kurento Media Server.\n </p>\n <p>\n In order to establish a WebRTC communication, peers engage in an SDP negotiation process, where one of the peers (the offerer) sends an offer, while the other peer (the offeree) responds with an answer. This endpoint can function in both situations\n <ul>\n <li>\n As offerer: The negotiation process is initiated by the media server\n <ul style='list-style-type:circle'>\n <li>KMS generates the SDP offer through the <code>generateOffer</code> method. This <i>offer</i> must then be sent to the remote peer (the offeree) through the signaling channel, for processing.</li>\n <li>The remote peer process the <i>offer</i>, and generates an <i>answer</i> to this <i>offer</i>. The <i>answer</i> is sent back to the media server.</li>\n <li>Upon receiving the <i>answer</i>, the endpoint must invoke the <code>processAnswer</code> method.</li>\n </ul>\n </li>\n <li>\n As offeree: The negotiation process is initiated by the remote peer\n <ul>\n <li>The remote peer, acting as offerer, generates an SDP <i>offer</i> and sends it to the WebRTC endpoint in Kurento.</li>\n <li>The endpoint will process the <i>offer</i> invoking the <code>processOffer</code> method. The result of this method will be a string, containing an SDP <i>answer</i>.</li>\n <li>The SDP <i>answer</i> must be sent back to the offerer, so it can be processed.</li>\n </ul>\n </li>\n </ul>\n </p>\n <p>\n SDPs are sent without ICE candidates, following the Trickle ICE optimization. Once the SDP negotiation is completed, both peers proceed with the ICE discovery process, intended to set up a bidirectional media connection. 
During this process, each peer\n <ul>\n <li>Discovers ICE candidates for itself, containing pairs of IPs and ports.</li>\n <li>ICE candidates are sent via the signaling channel as they are discovered, to the remote peer for probing.</li>\n <li>ICE connectivity checks are run as soon as the new candidate description, from the remote peer, is available.</li>\n </ul>\n Once a suitable pair of candidates (one for each peer) is discovered, the media session can start. The harvesting process in Kurento, begins with the invocation of the <code>gatherCandidates</code> method. Since the whole Trickle ICE purpose is to speed-up connectivity, candidates are generated asynchronously. Therefore, in order to capture the candidates, the user must subscribe to the event <code>IceCandidateFound</code>. It is important that the event listener is bound before invoking <code>gatherCandidates</code>, otherwise a suitable candidate might be lost, and connection might not be established.\n </p>\n <p>\n It's important to keep in mind that WebRTC connection is an asynchronous process, when designing interactions between different MediaElements. For example, it would be pointless to start recording before media is flowing. In order to be notified of state changes, the application can subscribe to events generated by the WebRtcEndpoint. Following is a full list of events generated by WebRtcEndpoint:\n <ul>\n <li>\n <code>IceComponentStateChange</code>: This event informs only about changes in the ICE connection state. 
Possible values are:\n <ul style='list-style-type:circle'>\n <li><code>DISCONNECTED</code>: No activity scheduled</li>\n <li><code>GATHERING</code>: Gathering local candidates</li>\n <li><code>CONNECTING</code>: Establishing connectivity</li>\n <li><code>CONNECTED</code>: At least one working candidate pair</li>\n <li><code>READY</code>: ICE concluded, candidate pair selection is now final</li>\n <li><code>FAILED</code>: Connectivity checks have been completed, but media connection was not established</li>\n </ul>\n The transitions between states are covered in RFC5245.\n It could be said that it's network-only, as it only takes into account the state of the network connection, ignoring other higher level stuff, like DTLS handshake, RTCP flow, etc. This implies that, while the component state is <code>CONNECTED</code>, there might be no media flowing between the peers. This makes this event useful only to receive low-level information about the connection between peers. Even more, while other events might leave a graceful period of time before firing, this event fires immediately after the state change is detected.\n </li>\n <li>\n <code>IceCandidateFound</code>: Raised when a new candidate is discovered. ICE candidates must be sent to the remote peer of the connection. Failing to do so for some or all of the candidates might render the connection unusable.\n </li>\n <li>\n <code>IceGatheringDone</code>: Raised when the ICE harvesting process is completed. This means that all candidates have already been discovered.\n </li>\n <li>\n <code>NewCandidatePairSelected</code>: Raised when a new ICE candidate pair gets selected. The pair contains both local and remote candidates being used for a component. 
This event can be raised during a media session, if a new pair of candidates with higher priority in the link are found.\n </li>\n <li>\n <code>DataChannelOpen</code>: Raised when a data channel is open.\n </li>\n <li>\n <code>DataChannelClose</code>: Raised when a data channel is closed.\n </li>\n </ul>\n </p>\n <p>\n Registering to any of above events requires the application to provide a callback function. Each event provides different information, so it is recommended to consult the signature of the event listeners.\n </p>\n <p>\n Flow control and congestion management is one of the most important features of WebRTC. WebRTC connections start with the lowest bandwidth configured and slowly ramps up to the maximum available bandwidth, or to the higher limit of the exploration range in case no bandwidth limitation is detected. Notice that WebRtcEndpoints in Kurento are designed in a way that multiple WebRTC connections fed by the same stream share quality. When a new connection is added, as it requires to start with low bandwidth, it will cause the rest of connections to experience a transient period of degraded quality, until it stabilizes its bitrate. This doesn't apply when transcoding is involved. Transcoders will adjust their output bitrate based in bandwidth requirements, but it won't affect the original stream. 
If an incoming WebRTC stream needs to be transcoded, for whatever reason, all WebRtcEndpoints fed from transcoder output will share a separate quality than the ones connected directly to the original stream.\n </p>\n <p>\n The default bandwidth range of the endpoint is 100kbps-500kbps, but it can be changed separately for input/output directions and for audio/video streams.\n <ul>\n <li>\n Input bandwidth control mechanism: Configuration interval used to inform remote peer the range of bitrates that can be pushed into this WebRtcEndpoint object.\n <ul style='list-style-type:circle'>\n <li>\n setMin/MaxVideoRecvBandwidth: sets Min/Max bitrate limits expected for received video stream.\n </li>\n <li>\n setMin/MaxAudioRecvBandwidth: sets Min/Max bitrate limits expected for received audio stream.\n </li>\n </ul>\n Max values are announced in the SDP, while min values are set to limit the lower value of REMB packages. It follows that min values will only have effect in peers that support this control mechanism, such as Chrome.\n </li>\n <li>\n Output bandwidth control mechanism: Configuration interval used to control bitrate of the output video stream sent to remote peer. It is important to keep in mind that pushed bitrate depends on network and remote peer capabilities. Remote peers can also announce bandwidth limitation in their SDPs (through the <code>b=<modifier>:<value></code> tag). Kurento will always enforce bitrate limitations specified by the remote peer over internal configurations.\n <ul style='list-style-type:circle'>\n <li>\n setMin/MaxVideoSendBandwidth: sets Min/Max bitrate limits for video sent to remote peer\n </li>\n </ul>\n </li>\n </ul>\n All bandwidth control parameters must be changed before the SDP negotiation takes place, and can't be changed afterwards.\n </p>\n <p>\n DataChannels allow other media elements that make use of the DataPad, to send arbitrary data. 
For instance, if there is a filter that publishes event information, it'll be sent to the remote peer through the channel. There is no API available for programmers to make use of this feature in the WebRtcElement. DataChannels can be configured to provide the following:\n <ul>\n <li>\n Reliable or partially reliable delivery of sent messages\n </li>\n <li>\n In-order or out-of-order delivery of sent messages\n </li>\n </ul>\n Unreliable, out-of-order delivery is equivalent to raw UDP semantics. The message may make it, or it may not, and order is not important. However, the channel can be configured to be <i>partially reliable</i> by specifying the maximum number of retransmissions or setting a time limit for retransmissions: the WebRTC stack will handle the acknowledgments and timeouts.\n </p>\n <p>\n The possibility to create DataChannels in a WebRtcEndpoint must be explicitly enabled when creating the endpoint, as this feature is disabled by default. If this is the case, they can be created invoking the createDataChannel method. The arguments for this method, all of them optional, provide the necessary configuration:\n <ul>\n <li>\n <code>label</code>: assigns a label to the DataChannel. This can help identify each possible channel separately.\n </li>\n <li>\n <code>ordered</code>: specifies if the DataChannel guarantees order, which is the default mode. If maxPacketLifetime and maxRetransmits have not been set, this enables reliable mode.\n </li>\n <li>\n <code>maxPacketLifeTime</code>: The time window in milliseconds, during which transmissions and retransmissions may take place in unreliable mode. This forces unreliable mode, even if <code>ordered</code> has been activated.\n </li>\n <li>\n <code>maxRetransmits</code>: maximum number of retransmissions that are attempted in unreliable mode. 
This forces unreliable mode, even if <code>ordered</code> has been activated.\n </li>\n <li>\n <code>Protocol</code>: Name of the subprotocol used for data communication.\n </li>\n </ul>\n ",
"doc": "Control interface for Kurento WebRTC endpoint.\n<p>\n This endpoint is one side of a peer-to-peer WebRTC communication, being the\n other peer a WebRTC capable browser -using the RTCPeerConnection API-, a\n native WebRTC app or even another Kurento Media Server.\n</p>\n<p>\n In order to establish a WebRTC communication, peers engage in an SDP\n negotiation process, where one of the peers (the offerer) sends an offer,\n while the other peer (the offeree) responds with an answer. This endpoint can\n function in both situations\n</p>\n<ul>\n <li>\n As offerer: The negotiation process is initiated by the media server\n <ul>\n <li>\n KMS generates the SDP offer through the\n <code>generateOffer</code> method. This <i>offer</i> must then be sent\n to the remote peer (the offeree) through the signaling channel, for\n processing.\n </li>\n <li>\n The remote peer processes the <i>offer</i>, and generates an\n <i>answer</i>. The <i>answer</i> is sent back to the media server.\n </li>\n <li>\n Upon receiving the <i>answer</i>, the endpoint must invoke the\n <code>processAnswer</code> method.\n </li>\n </ul>\n </li>\n <li>\n As offeree: The negotiation process is initiated by the remote peer\n <ul>\n <li>\n The remote peer, acting as offerer, generates an SDP <i>offer</i> and\n sends it to the WebRTC endpoint in Kurento.\n </li>\n <li>\n The endpoint will process the <i>offer</i> invoking the\n <code>processOffer</code> method. The result of this method will be a\n string, containing an SDP <i>answer</i>.\n </li>\n <li>\n The SDP <i>answer</i> must be sent back to the offerer, so it can be\n processed.\n </li>\n </ul>\n </li>\n</ul>\n<p>\n SDPs are sent without ICE candidates, following the Trickle ICE optimization.\n Once the SDP negotiation is completed, both peers proceed with the ICE\n discovery process, intended to set up a bidirectional media connection. 
During\n this process, each peer\n</p>\n<ul>\n <li>\n Discovers ICE candidates for itself, containing pairs of IPs and ports.\n </li>\n <li>\n ICE candidates are sent via the signaling channel as they are discovered, to\n the remote peer for probing.\n </li>\n <li>\n ICE connectivity checks are run as soon as the new candidate description,\n from the remote peer, is available.\n </li>\n</ul>\n<p>\n Once a suitable pair of candidates (one for each peer) is discovered, the\n media session can start. The harvesting process in Kurento, begins with the\n invocation of the <code>gatherCandidates</code> method. Since the whole\n Trickle ICE purpose is to speed-up connectivity, candidates are generated\n asynchronously. Therefore, in order to capture the candidates, the user must\n subscribe to the event <code>IceCandidateFound</code>. It is important that\n the event listener is bound before invoking <code>gatherCandidates</code>,\n otherwise a suitable candidate might be lost, and connection might not be\n established.\n</p>\n<p>\n It's important to keep in mind that WebRTC connection is an asynchronous\n process, when designing interactions between different MediaElements. For\n example, it would be pointless to start recording before media is flowing. In\n order to be notified of state changes, the application can subscribe to events\n generated by the WebRtcEndpoint. Following is a full list of events generated\n by WebRtcEndpoint:\n</p>\n<ul>\n <li>\n <code>IceComponentStateChange</code>: This event informs only about changes\n in the ICE connection state. 
Possible values are:\n <ul>\n <li><code>DISCONNECTED</code>: No activity scheduled</li>\n <li><code>GATHERING</code>: Gathering local candidates</li>\n <li><code>CONNECTING</code>: Establishing connectivity</li>\n <li><code>CONNECTED</code>: At least one working candidate pair</li>\n <li>\n <code>READY</code>: ICE concluded, candidate pair selection is now final\n </li>\n <li>\n <code>FAILED</code>: Connectivity checks have been completed, but media\n connection was not established\n </li>\n </ul>\n The transitions between states are covered in RFC5245. It could be said that\n it's network-only, as it only takes into account the state of the network\n connection, ignoring other higher level stuff, like DTLS handshake, RTCP\n flow, etc. This implies that, while the component state is\n <code>CONNECTED</code>, there might be no media flowing between the peers.\n This makes this event useful only to receive low-level information about the\n connection between peers. Even more, while other events might leave a\n graceful period of time before firing, this event fires immediately after\n the state change is detected.\n </li>\n <li>\n <code>IceCandidateFound</code>: Raised when a new candidate is discovered.\n ICE candidates must be sent to the remote peer of the connection. Failing to\n do so for some or all of the candidates might render the connection\n unusable.\n </li>\n <li>\n <code>IceGatheringDone</code>: Raised when the ICE harvesting process is\n completed. This means that all candidates have already been discovered.\n </li>\n <li>\n <code>NewCandidatePairSelected</code>: Raised when a new ICE candidate pair\n gets selected. The pair contains both local and remote candidates being used\n for a component. 
This event can be raised during a media session, if a new\n pair of candidates with higher priority in the link is found.\n </li>\n <li><code>DataChannelOpen</code>: Raised when a data channel is open.</li>\n <li><code>DataChannelClose</code>: Raised when a data channel is closed.</li>\n</ul>\n<p>\n Registering to any of the above events requires the application to provide a\n callback function. Each event provides different information, so it is\n recommended to consult the signature of the event listeners.\n</p>\n<p>\n Flow control and congestion management is one of the most important features\n of WebRTC. WebRTC connections start with the lowest bandwidth configured and\n slowly ramp up to the maximum available bandwidth, or to the upper limit of\n the exploration range in case no bandwidth limitation is detected. Notice that\n WebRtcEndpoints in Kurento are designed in a way that multiple WebRTC\n connections fed by the same stream share quality. When a new connection is\n added, as it must start with low bandwidth, it will cause the rest of the\n connections to experience a transient period of degraded quality, until it\n stabilizes its bitrate. This doesn't apply when transcoding is involved.\n Transcoders will adjust their output bitrate based on bandwidth requirements,\n but it won't affect the original stream. 
If an incoming WebRTC stream needs to\n be transcoded, for whatever reason, all WebRtcEndpoints fed from the\n transcoder output will share a quality separate from the ones connected\n directly to the original stream.\n</p>\n<p>\n The default bandwidth range of the endpoint is\n <strong>[100 kbps, 500 kbps]</strong>, but it can be changed separately for\n input/output directions and for audio/video streams.\n</p>\n<p>\n <strong>\n Check the extended documentation of these parameters in\n :rom:cls:`SdpEndpoint`, :rom:cls:`BaseRtpEndpoint`, and\n :rom:ref:`RembParams`.\n </strong>\n</p>\n<ul>\n <li>\n Input bandwidth: Configuration value used to inform remote peers about the\n bitrate that can be pushed into this endpoint.\n <ul>\n <li>\n <strong>{get,set}MinVideoRecvBandwidth</strong>: Minimum bitrate\n requested on the received video stream.\n </li>\n <li>\n <strong>{get,set}Max{Audio,Video}RecvBandwidth</strong>: Maximum bitrate\n expected for the received stream.\n </li>\n </ul>\n </li>\n <li>\n Output bandwidth: Configuration values used to control the bitrate of the\n output video stream sent to remote peers. It is important to keep in mind\n that the pushed bitrate depends on network and remote peer capabilities.\n Remote peers can also announce bandwidth limitations in their SDPs (through\n the <code>b={modifier}:{value}</code> tag). 
Kurento will always enforce bitrate\n limitations specified by the remote peer over internal configurations.\n <ul>\n <li>\n <strong>{get,set}MinVideoSendBandwidth</strong>: Minimum video bitrate\n sent to remote peer.\n </li>\n <li>\n <strong>{get,set}MaxVideoSendBandwidth</strong>: Maximum video bitrate\n sent to remote peer.\n </li>\n <li>\n <strong>RembParams.rembOnConnect</strong>: Initial local REMB bandwidth\n estimation that gets propagated when a new endpoint is connected.\n </li>\n </ul>\n </li>\n</ul>\n<p>\n <strong>\n All bandwidth control parameters must be changed before the SDP negotiation\n takes place, and can't be changed afterwards.\n </strong>\n</p>\n<p>\n DataChannels allow other media elements that make use of the DataPad to send\n arbitrary data. For instance, if there is a filter that publishes event\n information, it'll be sent to the remote peer through the channel. There is no\n API available for programmers to make use of this feature in the\n WebRtcEndpoint. DataChannels can be configured to provide the following:\n</p>\n<ul>\n <li>\n Reliable or partially reliable delivery of sent messages\n </li>\n <li>\n In-order or out-of-order delivery of sent messages\n </li>\n</ul>\n<p>\n Unreliable, out-of-order delivery is equivalent to raw UDP semantics. The\n message may make it, or it may not, and order is not important. However, the\n channel can be configured to be <i>partially reliable</i> by specifying the\n maximum number of retransmissions or setting a time limit for retransmissions:\n the WebRTC stack will handle the acknowledgments and timeouts.\n</p>\n<p>\n The possibility to create DataChannels in a WebRtcEndpoint must be explicitly\n enabled when creating the endpoint, as this feature is disabled by default. 
If\n this is the case, they can be created by invoking the createDataChannel\n method. The arguments for this method, all of them optional, provide the\n necessary configuration:\n</p>\n<ul>\n <li>\n <code>label</code>: assigns a label to the DataChannel. This can help\n identify each possible channel separately.\n </li>\n <li>\n <code>ordered</code>: specifies if the DataChannel guarantees order, which\n is the default mode. If maxPacketLifeTime and maxRetransmits have not been\n set, this enables reliable mode.\n </li>\n <li>\n <code>maxPacketLifeTime</code>: The time window in milliseconds during\n which transmissions and retransmissions may take place in unreliable mode.\n This forces unreliable mode, even if <code>ordered</code> has been\n activated.\n </li>\n <li>\n <code>maxRetransmits</code>: maximum number of retransmissions that are\n attempted in unreliable mode. This forces unreliable mode, even if\n <code>ordered</code> has been activated.\n </li>\n <li>\n <code>protocol</code>: Name of the subprotocol used for data communication.\n </li>\n</ul>\n ",
"extends": "BaseRtpEndpoint",
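The "offeree" flow described in the doc string above can be sketched with the promise-style kurento-client API. A minimal sketch, assuming `webRtcEp` is a created WebRtcEndpoint and `sendToPeer` is a hypothetical signaling callback (not part of Kurento):

```javascript
// Sketch of the offeree negotiation flow: process the remote SDP offer,
// return the answer, and trickle local ICE candidates to the remote peer.
// `webRtcEp` is assumed to expose the promise-based kurento-client API;
// `sendToPeer` is a hypothetical signaling transport.
async function answerRemoteOffer(webRtcEp, sdpOffer, sendToPeer) {
  // Bind the listener BEFORE gathering, so no candidate is lost.
  webRtcEp.on('IceCandidateFound', (event) =>
    sendToPeer({ type: 'candidate', candidate: event.candidate }));

  // Process the remote offer; the result is the local SDP answer,
  // which must travel back to the offerer over signaling.
  const sdpAnswer = await webRtcEp.processOffer(sdpOffer);
  sendToPeer({ type: 'answer', sdp: sdpAnswer });

  // Start Trickle ICE candidate harvesting (only after processOffer).
  await webRtcEp.gatherCandidates();
  return sdpAnswer;
}
```

The same shape applies to the offerer case, with `generateOffer`/`processAnswer` in place of `processOffer`.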

@@ -494,4 +494,14 @@ "constructor": {

{
"name": "externalAddress",
"doc": "External (public) IP address of the media server.\n<p>\n If you know what will be the external or public IP address of the media server\n (e.g. because your deployment has an static IP), you can specify it here.\n Doing so has the advantage of not needing to configure STUN/TURN for the media\n server.\n</p>\n<p>\n STUN/TURN are needed only when the media server sits behind a NAT and needs to\n find out its own external IP address. However, if you set a static external IP\n address with this parameter, then there is no need for the STUN/TURN\n auto-discovery.\n</p>\n<p>\n The effect of this parameter is that ALL local ICE candidates that are\n gathered (for WebRTC) will contain the provided external IP address instead of\n the local one.\n</p>\n<p>\n <code>externalAddress</code> is an IPv4 or IPv6 address.\n</p>\n<p>Examples:</p>\n<ul>\n <li><code>externalAddress=10.70.35.2</code></li>\n <li><code>externalAddress=2001:0db8:85a3:0000:0000:8a2e:0370:7334</code></li>\n</ul>\n ",
"type": "String"
},
{
"name": "networkInterfaces",
"doc": "Local network interfaces used for ICE gathering.\n<p>\n If you know which network interfaces should be used to perform ICE (for WebRTC\n connectivity), you can define them here. Doing so has several advantages:\n</p>\n<ul>\n <li>\n The WebRTC ICE gathering process will be much quicker. Normally, it needs to\n gather local candidates for all of the network interfaces, but this step can\n be made faster if you limit it to only the interface that you know will\n work.\n </li>\n <li>\n It will ensure that the media server always decides to use the correct\n network interface. With WebRTC ICE gathering it's possible that, under some\n circumstances (in systems with virtual network interfaces such as\n <code>docker0</code>) the ICE process ends up choosing the wrong local IP.\n </li>\n</ul>\n<p>\n <code>networkInterfaces</code> is a comma-separated list of network interface\n names.\n</p>\n<p>Examples:</p>\n<ul>\n <li><code>networkInterfaces=eth0</code></li>\n <li><code>networkInterfaces=eth0,enp0s25</code></li>\n</ul>\n ",
"type": "String"
},
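The two constructor properties above would typically be passed when creating the endpoint, e.g. `pipeline.create('WebRtcEndpoint', props)` in kurento-client. A small illustrative helper (the helper itself is hypothetical; only the property names come from this IDL):

```javascript
// Assembles the WebRtcEndpoint constructor properties documented above,
// normalizing the interface list into the comma-separated string format.
// The result would be passed as pipeline.create('WebRtcEndpoint', props).
function webRtcProps({ externalAddress, networkInterfaces } = {}) {
  const props = {};
  if (externalAddress) props.externalAddress = externalAddress;
  if (networkInterfaces) {
    props.networkInterfaces = Array.isArray(networkInterfaces)
      ? networkInterfaces.join(',') // e.g. ['eth0','enp0s25'] -> 'eth0,enp0s25'
      : networkInterfaces;
  }
  return props;
}
```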
{
"name": "stunServerAddress",
"doc": "address of the STUN server (Only IP address are supported)",
"doc": "STUN server IP address.\n<p>The ICE process uses STUN to punch holes through NAT firewalls.</p>\n<p>\n <code>stunServerAddress</code> MUST be an IP address; domain names are NOT\n supported.\n</p>\n<p>\n You need to use a well-working STUN server. Use this to check if it works:<br />\n https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/<br />\n From that check, you should get at least one Server-Reflexive Candidate (type\n <code>srflx</code>).\n</p>\n ",
"type": "String"

@@ -501,3 +511,3 @@ },

"name": "stunServerPort",
"doc": "port of the STUN server",
"doc": "Port of the STUN server",
"type": "int"

@@ -507,3 +517,3 @@ },

"name": "turnUrl",
"doc": "TURN server URL with this format: <code>user:password@address:port(?transport=[udp|tcp|tls])</code>.</br><code>address</code> must be an IP (not a domain).</br><code>transport</code> is optional (UDP by default).",
"doc": "TURN server URL.\n<p>\n When STUN is not enough to open connections through some NAT firewalls, using\n TURN is the remaining alternative.\n</p>\n<p>\n Note that TURN is a superset of STUN, so you don't need to configure STUN if\n you are using TURN.\n</p>\n<p>The provided URL should follow one of these formats:</p>\n<ul>\n <li><code>user:password@ipaddress:port</code></li>\n <li>\n <code>user:password@ipaddress:port?transport=[udp|tcp|tls]</code>\n </li>\n</ul>\n<p>\n <code>ipaddress</code> MUST be an IP address; domain names are NOT supported.<br />\n <code>transport</code> is OPTIONAL. Possible values: udp, tcp, tls. Default: udp.\n</p>\n<p>\n You need to use a well-working TURN server. Use this to check if it works:<br />\n https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/<br />\n From that check, you should get at least one Server-Reflexive Candidate (type\n <code>srflx</code>) AND one Relay Candidate (type <code>relay</code>).\n</p>\n ",
"type": "String"
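The `turnUrl` format documented above can be captured in a small helper. This is a hypothetical sketch (not part of kurento-client) that only builds and validates the string:

```javascript
// Builds a turnUrl string in the format documented above:
// user:password@ipaddress:port?transport=[udp|tcp|tls]
// Host names are rejected, since only IP addresses are supported.
function buildTurnUrl(user, password, ipAddress, port, transport) {
  const isIPv4 = /^\d{1,3}(\.\d{1,3}){3}$/.test(ipAddress);
  if (!isIPv4) {
    throw new Error('turnUrl needs an IP address; domain names are not supported');
  }
  let url = `${user}:${password}@${ipAddress}:${port}`;
  if (transport !== undefined) {
    if (!['udp', 'tcp', 'tls'].includes(transport)) {
      throw new Error('transport must be udp, tcp or tls');
    }
    url += `?transport=${transport}`; // optional; the server defaults to udp
  }
  return url;
}
```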

@@ -528,3 +538,3 @@ },

"name": "gatherCandidates",
"doc": "Start the gathering of ICE candidates.</br>It must be called after SdpEndpoint::generateOffer or SdpEndpoint::processOffer for Trickle ICE. If invoked before generating or processing an SDP offer, the candidates gathered will be added to the SDP processed."
"doc": "Start the gathering of ICE candidates.\n<p>\n It must be called after <code>SdpEndpoint::generateOffer</code> or\n <code>SdpEndpoint::processOffer</code> for <strong>Trickle ICE</strong>. If\n invoked before generating or processing an SDP offer, the candidates gathered\n will be added to the SDP processed.\n</p>\n "
},
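The ordering requirement above (gatherCandidates only after generateOffer or processOffer) can be sketched for the offerer case. `webRtcEp` is assumed to follow the promise-style kurento-client API; `signaling` is a hypothetical duplex channel to the peer:

```javascript
// Offerer-side sketch: generate the offer first, then start ICE gathering.
// Remote candidates arriving over signaling are fed to addIceCandidate.
async function startAsOfferer(webRtcEp, signaling) {
  // Local candidates flow out to the remote peer as they are found.
  webRtcEp.on('IceCandidateFound', (e) => signaling.send('candidate', e.candidate));
  // Remote answer and candidates coming back are fed into the endpoint.
  signaling.on('answer', (sdp) => webRtcEp.processAnswer(sdp));
  signaling.on('candidate', (c) => webRtcEp.addIceCandidate(c));

  const sdpOffer = await webRtcEp.generateOffer();
  signaling.send('offer', sdpOffer);
  await webRtcEp.gatherCandidates(); // must come after generateOffer
  return sdpOffer;
}
```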

@@ -560,3 +570,3 @@ {

"name": "maxPacketLifeTime",
"doc": "The time window (in milliseconds) during which transmissions and retransmissions may take place in unreliable mode.</br>\n .. note:: This forces unreliable mode, even if <code>ordered</code> has been activated",
"doc": "The time window (in milliseconds) during which transmissions and retransmissions may take place in unreliable mode.\nNote that this forces unreliable mode, even if <code>ordered</code> has been activated.\n ",
"type": "int",

@@ -568,3 +578,3 @@ "optional": true,

"name": "maxRetransmits",
"doc": "maximum number of retransmissions that are attempted in unreliable mode.</br>\n .. note:: This forces unreliable mode, even if <code>ordered</code> has been activated",
"doc": "maximum number of retransmissions that are attempted in unreliable mode.\nNote that this forces unreliable mode, even if <code>ordered</code> has been activated.\n ",
"type": "int",

@@ -583,3 +593,3 @@ "optional": true,

"name": "createDataChannel",
"doc": "Create a new data channel, if data channels are supported. If they are not supported, this method throws an exception.\n Being supported means that the WebRtcEndpoint has been created with data channel support, the client also supports data channels, and they have been negotaited in the SDP exchange.\n Otherwise, the method throws an exception, indicating that the operation is not possible.</br>\n Data channels can work in either unreliable mode (analogous to User Datagram Protocol or UDP) or reliable mode (analogous to Transmission Control Protocol or TCP).\n The two modes have a simple distinction:\n <ul>\n <li>Reliable mode guarantees the transmission of messages and also the order in which they are delivered. This takes extra overhead, thus potentially making this mode slower.</li>\n <li>Unreliable mode does not guarantee every message will get to the other side nor what order they get there. This removes the overhead, allowing this mode to work much faster.</li>\n </ul>"
"doc": "Create a new data channel, if data channels are supported.\n<p>\n Being supported means that the WebRtcEndpoint has been created with data\n channel support, the client also supports data channels, and they have been\n negotiated in the SDP exchange. Otherwise, the method throws an exception,\n indicating that the operation is not possible.\n</p>\n<p>\n Data channels can work in either unreliable mode (analogous to User Datagram\n Protocol or UDP) or reliable mode (analogous to Transmission Control Protocol\n or TCP). The two modes have a simple distinction:\n</p>\n<ul>\n <li>\n Reliable mode guarantees the transmission of messages and also the order in\n which they are delivered. This takes extra overhead, thus potentially making\n this mode slower.\n </li>\n <li>\n Unreliable mode does not guarantee every message will get to the other side\n nor what order they get there. This removes the overhead, allowing this mode\n to work much faster.\n </li>\n</ul>\n<p>If data channels are not supported, this method throws an exception.</p>\n "
},
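The reliability rules for the createDataChannel options (documented in the doc string above) can be made concrete with a small classifier. The helper is purely illustrative; on a real endpoint the same option names would be passed to `createDataChannel`:

```javascript
// Classifies a data channel configuration per the rules above: supplying
// either maxPacketLifeTime or maxRetransmits forces unreliable (partially
// reliable) mode, even when `ordered` is set.
function channelMode({ ordered = true, maxPacketLifeTime, maxRetransmits } = {}) {
  const partiallyReliable =
    maxPacketLifeTime !== undefined || maxRetransmits !== undefined;
  return {
    ordered,                      // delivery order guarantee (default: true)
    reliable: !partiallyReliable, // reliable unless a retransmission limit is set
  };
}
```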

@@ -615,2 +625,20 @@ {

{
"typeFormat": "ENUM",
"values": [
"WEBM",
"MKV",
"MP4",
"WEBM_VIDEO_ONLY",
"WEBM_AUDIO_ONLY",
"MKV_VIDEO_ONLY",
"MKV_AUDIO_ONLY",
"MP4_VIDEO_ONLY",
"MP4_AUDIO_ONLY",
"JPEG_VIDEO_ONLY",
"KURENTO_SPLIT_RECORDER"
],
"name": "MediaProfileSpecType",
"doc": "Media Profile.\n\nCurrently WEBM, MKV, MP4 and JPEG are supported."
},
{
"typeFormat": "REGISTER",

@@ -635,3 +663,3 @@ "properties": [

"name": "IceCandidate",
"doc": "IceCandidate representation based on standard (http://www.w3.org/TR/webrtc/#rtcicecandidate-type)."
"doc": "IceCandidate representation based on <code>RTCIceCandidate</code> interface.\n@see https://www.w3.org/TR/2018/CR-webrtc-20180927/#rtcicecandidate-interface"
},

@@ -712,20 +740,2 @@ {

"values": [
"WEBM",
"MKV",
"MP4",
"WEBM_VIDEO_ONLY",
"WEBM_AUDIO_ONLY",
"MKV_VIDEO_ONLY",
"MKV_AUDIO_ONLY",
"MP4_VIDEO_ONLY",
"MP4_AUDIO_ONLY",
"JPEG_VIDEO_ONLY",
"KURENTO_SPLIT_RECORDER"
],
"name": "MediaProfileSpecType",
"doc": "Media Profile.\n\nCurrently WEBM, MKV, MP4 and JPEG are supported."
},
{
"typeFormat": "ENUM",
"values": [
"AES_128_CM_HMAC_SHA1_32",

@@ -794,2 +804,8 @@ "AES_128_CM_HMAC_SHA1_80",

{
"properties": [],
"extends": "Media",
"name": "EndOfStream",
"doc": "Event raised when the stream that the element sends out is finished."
},
{
"properties": [

@@ -804,3 +820,3 @@ {

"name": "OnIceCandidate",
"doc": "@deprecated</br>Notifies a new local candidate. These candidates should be sent to the remote peer, to complete the ICE negotiation process."
"doc": "Notifies a new local candidate.\nThese candidates should be sent to the remote peer, to complete the ICE negotiation process.\n@deprecated Use <code>IceCandidateFound</code> instead.\n "
},

@@ -817,3 +833,3 @@ {

"name": "IceCandidateFound",
"doc": "Notifies a new local candidate. These candidates should be sent to the remote peer, to complete the ICE negotiation process."
"doc": "Notifies a new local candidate.\nThese candidates should be sent to the remote peer, to complete the ICE negotiation process.\n "
},

@@ -824,3 +840,3 @@ {

"name": "OnIceGatheringDone",
"doc": "@deprecated</br>Event fired when al ICE candidates have been gathered."
"doc": "Event fired when all ICE candidates have been gathered.\n@deprecated Use <code>IceGatheringDone</code> instead.\n "
},

@@ -853,3 +869,3 @@ {

"name": "OnIceComponentStateChanged",
"doc": "@deprecated</br>Event fired when and ICE component state changes. See :rom:cls:`IceComponentState` for a list of possible states."
"doc": "Event fired when an ICE component state changes.\nSee :rom:cls:`IceComponentState` for a list of possible states.\n@deprecated Use <code>IceComponentStateChange</code> instead.\n "
},

@@ -876,3 +892,3 @@ {

"name": "IceComponentStateChange",
"doc": "Event fired when and ICE component state changes. See :rom:cls:`IceComponentState` for a list of possible states."
"doc": "Event fired when an ICE component state changes.\nSee :rom:cls:`IceComponentState` for a list of possible states.\n "
},

@@ -889,3 +905,3 @@ {

"name": "OnDataChannelOpened",
"doc": "@deprecated</br>Event fired when a new data channel is created."
"doc": "Event fired when a new data channel is created.\n@deprecated Use <code>DataChannelOpen</code> instead.\n "
},

@@ -914,3 +930,3 @@ {

"name": "OnDataChannelClosed",
"doc": "@deprecated</br>Event fired when a data channel is closed."
"doc": "Event fired when a data channel is closed.\n@deprecated Use <code>DataChannelClose</code> instead.\n "
},

@@ -939,11 +955,5 @@ {

"name": "NewCandidatePairSelected",
"doc": "Event fired when a new pair of ICE candidates is used by the ICE library. This could also happen in the middle of a session, though not likely."
"doc": "Event fired when a new pair of ICE candidates is used by the ICE library.\nThis could also happen in the middle of a session, though not likely.\n "
},
{
"properties": [],
"extends": "Media",
"name": "EndOfStream",
"doc": "Event raised when the stream that the element sends out is finished."
},
{
"properties": [

@@ -950,0 +960,0 @@ {
