When to Edit SDP Messages for WebRTC Connections

I'm following a Stack Overflow post on improving audio quality in WebRTC:
Is it really possible for webRTC to stream high quality audio without noise?
In one answer, they edit the SDP between createAnswer() and setLocalDescription().
However, this article on limiting the bandwidth of WebRTC connections says "we are not allowed to modify the offer between createOffer (or createAnswer) and setLocalDescription":
https://webrtchacks.com/limit-webrtc-bandwidth-sdp/
So my question is: which is correct?

The specification says you are not allowed to modify the description between createOffer/createAnswer and setLocalDescription.
For a number of historical reasons, browsers do allow "SDP munging", however.
What is allowed (and impossible for the specification to forbid) is modification of the SDP before setRemoteDescription, as described in the webrtchacks article. Please note that these days the setParameters API, as demonstrated by this sample, is the preferred way to change things like bandwidth limits without touching the SDP at all.
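To make that concrete, here is a minimal sketch of both approaches; the b=AS munging helper and the 500 kbps figure are illustrative assumptions, not code taken from the linked article:

```javascript
// Munging the REMOTE description is allowed: insert a b=AS (kbps)
// bandwidth line into the video m-section before setRemoteDescription.
function limitVideoBandwidth(sdp, kbps) {
  return sdp
    .split(/(?=m=)/) // split into session part + media sections
    .map((section) =>
      section.startsWith('m=video')
        ? section.replace(/(c=IN .*\r\n)/, `$1b=AS:${kbps}\r\n`)
        : section
    )
    .join('');
}

async function applyAnswer(pc, answer) {
  await pc.setRemoteDescription({
    type: answer.type,
    sdp: limitVideoBandwidth(answer.sdp, 500), // 500 kbps is an arbitrary example
  });
}

// The modern alternative: no SDP editing, just RTCRtpSender.setParameters.
async function limitSenderBitrate(pc, maxBitrateBps) {
  const sender = pc.getSenders().find((s) => s.track && s.track.kind === 'video');
  if (!sender) return; // no video sender yet
  const params = sender.getParameters();
  if (!params.encodings) params.encodings = [{}]; // some browsers omit this field
  params.encodings[0].maxBitrate = maxBitrateBps; // in bits per second
  await sender.setParameters(params);
}
```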

Related

ApiRTC - Media always sent to the cloud, even with meshOnlyEnabled

As a follow-up to my previous post (ApiRTC - Behaviour with meshModeEnabled and meshOnlyEnabled)
Hello,
You say that an SFU is necessary for any activity that requires centralizing all the streams (recording, bandwidth optimization, ...). However, in MESH mode, the files/media exchanged still manage to be recorded on the Apizee media server even though I don't go through the SFU. How is this possible?
Can this behaviour be disabled so that the exchanged documents never leave the MESH stream?
I have not found anything about this in the documentation.
By the way, the documentation often mentions the term "MCU". Does this mean that ApiRTC also uses an MCU server in addition to the SFU?
Thanks in advance.
Can this behaviour be disabled so that the exchanged documents never leave the MESH stream?
Concerning the recording of all the streams in the conversation (via the startRecording method of the Conversation object, see https://apirtc.github.io/references/apirtc-js/Conversation.html#startRecording__anchor):
--> The composition of multiple streams into one video file is done server-side by the SFU (v4.4.8).
Concerning the files (sent through the conversation.pushData method):
--> We manage the file transfer by uploading the file to storage and sharing the URI with all parties of a conversation. P2P transfer is not available (v4.4.8).
To exchange data in a P2P mode, you can use the Conversation.sendData method to send raw data across all participants.
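For illustration, a minimal sketch of what that might look like (the payload shape is made up, and the exact signature of sendData may differ; check the apirtc-js Conversation reference for details):

```javascript
// Hypothetical sketch: sendData ships raw data directly to the other
// participants, so the payload never goes through Apizee storage.
// Payload shape and call signature are assumptions - see the ApiRTC docs.
const payload = JSON.stringify({ type: 'note', text: 'stays inside the mesh' });
conversation.sendData(payload);
```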
Regarding your question about the MCU: no, ApiRTC doesn't use any MCU server to date (v4.4.8). The documentation refers to an MCU only for very specific on-premise deployments, which are not supported for ApiRTC users.
Cheers,
Romain

Does Google webrtc native implementation have support for SFU?

Does the Google WebRTC native implementation have support for an SFU?
Does the Google WebRTC native implementation support integrating custom/hardware encoders/decoders?
Not without alteration.
Internally, WebRTC's audio/video pipelines are directly tied to the encoders/decoders.
PeerConnectionFactory allows you to provide a video encoder/decoder factory, so you can short-circuit the logic there, grab the encoded frames, mock up a stream, and feed them directly into a new PeerConnection as a relay.
The audio end is more difficult. There isn't a codec factory, so you will probably have to short-circuit the logic there by altering libwebrtc itself.
The final question is RTCP termination, and how to override the quality/bandwidth control mechanisms so as not to create a "one goes out, they all go out" situation.
Since libwebrtc will be the SFU, it will receive RTCP feedback from its remote peer for the content it is proxying, and vice versa.
For a 1-1 situation, it needs to be able to forward the RTCP feedback to the remote peer.
For multipoint, it needs to perform some logic to determine whether one of the peers is problematic, and then stop sending it video, switch off its video feed, or attempt to switch to a lower-bitrate video stream. Basically, it needs to act as a conduit that attempts to predict why/how packet loss is occurring, and keep as many audio/video feeds as possible operating normally at the highest possible quality for each peer.
How exactly to hijack the RTCP feedback mechanisms in libwebrtc, I think, will again likely require some customization/hooks into libwebrtc.
I think it will be easier to try the GStreamer implementation of WebRTC. Although it is still in "Bad Plugins", it is way easier to get or provide encoded audio and video there. It was actually implemented with that in mind - to make implementing MCUs and SFUs easier.

TURN server - Questions on use of certain attributes in the context of WebRTC

I am implementing a TURN server specifically for WebRTC usage and have some questions regarding whether to not support certain attributes (sending an error response if the attribute is received) or to simply ignore them, plus some other doubts. Here they are:
EVEN-PORT If my SDP always signals a=rtcp-mux, will this attribute ever be used? And if so, would it be an error if it appears?
RESERVATION-TOKEN Does this play any role when TURN server is used in the WebRTC context?
SOFTWARE As in STUN, can this be safely ignored without any processing?
DONT-FRAGMENT Is there a preferred and well-accepted norm for this attribute in the WebRTC context?
What is the ideal length of NONCE in the WebRTC context?
Different issue: are there any statistics available on the use of TURN servers with transports other than UDP? I am thinking of supporting only UDP for now.
EVEN-PORT: WebRTC typically requires rtcp-mux (at least Chrome does), so I would not care about EVEN-PORT.
RESERVATION-TOKEN: no, it plays no role in the WebRTC context.
SOFTWARE: yes, it can be safely ignored; it is FYI only.
DONT-FRAGMENT: no; WebRTC implementations typically don't do path-MTU discovery but assume 1200 bytes.
NONCE: you mean the expiration? See https://medium.com/confrere/gone-in-1100-seconds-hunting-bugs-on-the-edge-of-webrtc-132a186c45dd
On transport statistics, see https://medium.com/the-making-of-whereby/what-kind-of-turn-server-is-being-used-d67dbfc2ff5d
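If you want to check from the client side which transport a session's TURN relay actually uses, a getStats sketch along these lines can help (field names follow the standard WebRTC stats API; relayProtocol support varies by browser):

```javascript
// Find the nominated candidate pair and report whether it is a TURN
// relay and, if so, which transport the allocation uses (udp/tcp/tls).
async function logSelectedCandidate(pc) {
  const stats = await pc.getStats();
  stats.forEach((report) => {
    if (report.type !== 'candidate-pair' || !report.nominated) return;
    const local = stats.get(report.localCandidateId);
    if (!local) return;
    if (local.candidateType === 'relay') {
      console.log('TURN relay in use, transport:', local.relayProtocol);
    } else {
      console.log('direct connection, candidate type:', local.candidateType);
    }
  });
}
```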

WebRTC: Why "CreateAnswer can't be called before SetRemoteDescription"?

Browser: Chrome
I am trying to debug a WebRTC application which works fine on three out of four fronts! I cannot get video from the receiver to the caller. I can get video and audio from the caller to the receiver, and audio from the receiver to the caller. The problem is that the receiver does not fire a video (sdpMid="video") ICE candidate. While desperately trying to solve this, I tried to call pc.createAnswer() before pc.setRemoteDescription(), and it gave the error quoted in the title.
My question is to understand the reason behind this. An answer SDP would just be the SDP based on the getUserMedia settings/constraints. So why do we have to wait for setRemoteDescription? I thought createAnswer would start the gathering of ICE candidates, and that this could be done earlier, without waiting for the remote description. That is not the case. Why?
Offers and answers aren't independent; they're part of an inherently asymmetric exchange.
An answer is a direct response to a specific offer (hence the name "answer"). Therefore the peer cannot answer before it has an offer, which you set with setRemoteDescription.
An offer contains specific limitations, or envelope (like m-lines), that an answer has to abide by/answer to/stay within. Another way to say it is that the answer is an iteration of the offer.
For instance, an offer created with offer options offerToReceiveVideo: false can only be answered with recvonly for video (meaning receive video from offerer to answerer only), never sendrecv.
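A minimal sketch of the required call order on the answering side (the signaling function is a stand-in for whatever channel you use):

```javascript
// Answerer side: the remote offer must be set first, because the
// answer is negotiated against the offer's m-lines.
async function handleOffer(pc, offer, sendToPeer /* your signaling channel */) {
  await pc.setRemoteDescription(offer);    // 1. learn the offer's envelope
  const answer = await pc.createAnswer();  // 2. only now can an answer exist
  await pc.setLocalDescription(answer);    // 3. this starts ICE gathering
  sendToPeer(answer);                      // 4. deliver the answer to the offerer
}
```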

API, dev specs or similar for TK102 GPS localizer

I'm using a TK102 GPS localizer. Along with it, I got only simple end-user docs. No API, dev specs or similar for writing code that will use this localizer.
I was told that it uses UDP, so I wrote a simple PHP listener. But either the localizer is not using UDP or something is wrong in the communication between it and the server. The listener works fine (it gets UDP packets from other clients) and the localizer is sending something (I'm being charged by my GSM operator for GPRS transmission), but the data it sends doesn't reach the server.
I asked about server and networking issues on Unix/Linux and SuperUser. Here I would only ask if someone knows of any API/dev specs for this localizer, so I can check whether it really uses UDP, or whether I have made some other error (in configuration, for example).
The localizer and its clones
We're talking about the Xexun TK102 Tracker here. The original one, that is, because there are many clones from other Chinese companies selling a similar GPS localizer with the same cover and logo, but with:
lower-performance electronics on board (for example, able to report location once per 20 or 30 seconds, not once per 5 seconds like the original),
firmware that sends less information (no direction/bearing, altitude, number of satellites used for the location fix, and more),
a different data format or a non-standard transmission protocol (for example, cheaper units are unable to use UDP and transmit data over TCP, using packets that do not always follow standards or definitions).
Coban and Kintech are only two of many clones sold on eBay and in e-shops, claiming to be original Xexun trackers.
On the other hand, the original Xexun and some clones (like Coban) are harder to control from your own script, because they require a correct answer from the server to which data is sent over GPRS. If the unit does not receive such a reply, it breaks the connection. The cheapest units do not perform this check and will always send location data to the specified IP address over the configured port.
Product description
Here is the product description of the original Xexun localizer (and here is a clone under the Kintech name).
A prospective buyer must be very careful (and should secure a return policy, which is why buying directly from China is not recommended), as there are many reports of sellers claiming to sell an original Xexun device and actually shipping a clone.
Though this device is five years old, it is still sold in many places (including eBay), but even from these sources it is very hard to get anything of value for developers beyond a simple, very basic user guide.
I have confirmed information (from two different sources) that there is no official API available for this device. The only option is to Google around, ask other users or use forums (see below).
If you own an original Xexun localizer, you may try to contact the company's international department and ask their technicians to include some changes in the device's source code and send you updated firmware with your changes - wow! That was confirmed by the company itself.
Forum
I found a perfect forum for the TK102 device, with a lot of questions and answers:
here is a general forum on the TK102 device (kept alive for 4.5 years, with 171 pages and 2000+ posts!),
here you'll find a more specific topic on receiving data from this localizer,
this forum is also about the TK102 unit, but it is entirely in French.
Many other devices are discussed there as well; in general, it is the biggest forum in the world for topics on localizers and similar matters.
GPRS Protocol Specs
In general, any TK102-related device opens a socket for direct TCP transmission (the original one can be switched to use UDP). Data is transmitted over the port specified by the user in the configuration, and over GPRS only (this requires a SIM card with GPRS enabled; there is no way to use WiFi).
The sending frequency, and the format and amount of data sent, depend entirely on the kind of device being used -- they are more extensive and more configurable in the original than in the clones.
Using FileDropper, I shared the GPRS Protocol Specification for the TK102 Geolocalizer. It contains basic information on how to set up the TK102 (and possibly all its clones) to send location data over GPRS, and on what sort of data you should expect to receive from it on the server side. This could be useful for someone.
BTW: if the links go dead, contact me for a re-upload, or I can send it over e-mail.
Correct server response problem
Make sure you're using the correct transport protocol! Many (really, many) cheap clones use TCP, while only the original TK102 allows switching to UDP. This is convenient, because you need only a really basic server configuration to handle TCP connections, while you need specific server-side software (like node.js) or a specific configuration (certain ports opened) to handle UDP. The key thing is to determine the correct protocol, as listening for TCP data while your localizer sends UDP will most certainly fail.
Take into consideration that many TK102 clones require a correct response from the server after each piece of data they send. Such a unit breaks the connection after sending what looks like a garbage welcome packet, because it never receives the reply it is waiting for.
It is quite hard (quite impossible?) to find any guide for many of these clones on what kind of responses the server should send. This often leads to a situation where the developer is unable to establish two-way communication between server and localizer. Many localizers are sold to be used only via SMS communication or through paid services that have signed an agreement with the producer and received a protocol specification containing the valid responses the server should generate for a particular TK102 clone.
Double-check that this is not the source of the problem if you can't communicate with your localizer from your app.
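To rule that out, here is a minimal Node.js listener sketch for both transports; the ports are arbitrary and the ACK string is a placeholder assumption - the real reply depends on your particular clone's protocol spec:

```javascript
// Minimal TCP listener for a TK102-style tracker (Node.js).
// Logs whatever the unit sends and replies with a placeholder ACK;
// replace ACK with whatever response your specific clone expects.
const net = require('net');
const dgram = require('dgram');

const ACK = 'ON'; // placeholder - consult your device's protocol spec

const tcpServer = net.createServer((socket) => {
  console.log('tracker connected:', socket.remoteAddress);
  socket.on('data', (buf) => {
    console.log('TCP received:', buf.toString('ascii').trim());
    socket.write(ACK); // many clones drop the connection without a reply
  });
});
tcpServer.listen(9000, () => console.log('listening on TCP port 9000'));

// If your (original Xexun) unit is configured for UDP instead:
const udpServer = dgram.createSocket('udp4');
udpServer.on('message', (buf, rinfo) => {
  console.log(`UDP from ${rinfo.address}:${rinfo.port}:`, buf.toString('ascii').trim());
});
udpServer.bind(9001, () => console.log('listening on UDP port 9001'));
```

If the TCP side logs data but the UDP side stays silent (or vice versa), you have found which transport your unit actually uses.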
You can check some models' protocol specs here:
http://www.traccar.org/docs/protocol.jsp