WebRTC - sharing one stream with multiple peers

I have three peers:
PeerA streams local media to the other peers.
PeerB accepts the stream and is located on the same machine as PeerA.
PeerC accepts the stream and is located on a different machine.
All peers are on the latest Chrome. The signalling server is functioning - I see all descriptions being sent and received as I'd expect.
The issue I'm encountering is that PeerC's ontrack event is firing and I have access to the same stream (with the same id) that PeerB successfully handles - but for some reason, the stream never appears in the DOM.
My code is based loosely on this: https://github.com/webrtc/samples/blob/gh-pages/src/content/peerconnection/multiple/js/main.js
What could be causing this?
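Without seeing the receiving code it is hard to say, but a frequent culprit is how the stream is attached inside the ontrack handler, combined with Chrome's autoplay policy. A minimal sketch (all names are illustrative, not taken from the question's code):

```javascript
// Minimal sketch: attach the remote stream to a <video> element inside the
// ontrack handler. A common cause of "the stream never appears" is that
// srcObject is never set, or autoplay is blocked because the element is
// not muted.
function wireRemoteVideo(pc, videoEl) {
  pc.ontrack = (event) => {
    const [stream] = event.streams;
    // ontrack fires once per track (audio + video), so guard against
    // reassigning the same stream and restarting playback
    if (videoEl.srcObject !== stream) {
      videoEl.srcObject = stream;
    }
  };
}
```

In the page markup, `<video autoplay playsinline muted>` sidesteps Chrome's autoplay restrictions while debugging; once that works, unmute and trigger playback from a user gesture.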

Related

When is an endpoint bundle-aware and when not?

From www.w3.org/TR/webrtc/#dom-rtcbundlepolicy, 4.2.5 RTCBundlePolicy Enum:
"If the remote endpoint is bundle-aware, all media tracks and data channels are bundled onto the same transport."
When is an endpoint bundle-aware and when not? And what does "bundle-aware" mean?
To establish a p2p connection, WebRTC will allocate and run STUN network checks on up to 3 ports (multiplied by the ways they can be reached) on either end. As these "ICE candidates" are discovered (which takes time), it asks JS to trickle-exchange info on each of them across a signaling channel - once for video, once for audio, and once for data (if you have it).
WebRTC does this mostly to support connecting to non-browser legacy devices, because all modern browsers support BUNDLE, which is when all but one candidate end up being thrown away and all media gets bundled over that single port.
WebRTC even has a "max-compat" mode that goes further, allocating a port for each piece of media, just in case the other endpoint is really old.
WebRTC doesn't know the other endpoint is a browser until it receives an "answer" from it, but if you know, you can specify "max-bundle" and save a couple of milliseconds.
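As a concrete illustration of that last point, the bundle policy is passed in the RTCPeerConnection configuration. A minimal sketch with illustrative values (the STUN URL is a placeholder):

```javascript
// Sketch, not a full setup: choose an RTCConfiguration based on whether the
// remote endpoint is known to be a modern browser (and hence bundle-aware).
function makeRtcConfig(remoteIsBrowser) {
  return {
    iceServers: [{ urls: 'stun:stun.example.org:3478' }],
    // 'max-bundle' offers a single transport up front; 'balanced' (the
    // default) keeps separate candidates per media kind for compatibility.
    bundlePolicy: remoteIsBrowser ? 'max-bundle' : 'balanced',
  };
}
// const pc = new RTCPeerConnection(makeRtcConfig(true));
```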

Difference Between TURN Server and SFU in WebRTC?

Scenario: PeerA wants to stream a video to PeerB and PeerC. PeerA won't receive anything from PeerB and PeerC, and there is no communication between PeerB and PeerC. Hole punching happens between the peers and the SFU, since the SFU is itself a WebRTC endpoint. An SFU solution saves resources and bandwidth on PeerA.
But an SFU communicates with peers through random ports acquired during the hole-punching process, whereas a TURN server allocates a single endpoint (ip:port) for a peer.
Now, unfortunately, PeerB and PeerC happen to be behind a symmetric NAT.
My observation is that the SFU approach will fail here, as it cannot successfully complete the hole-punching process with PeerB and PeerC.
So PeerA sends the feed to PeerB and PeerC via a TURN server. That means PeerA is sending the feed twice, which is inefficient.
Question:
Can an SFU with a public IP replace the need for a TURN server and connect to peers via a defined port? In that case the SFU could save PeerA's bandwidth by acquiring only one feed from PeerA, whereas the TURN server approach would have required two feeds from PeerA, one each for PeerB and PeerC.
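As background to the scenario above, the symmetric-NAT failure mode can be reproduced deliberately by restricting a connection to TURN-relayed candidates via iceTransportPolicy. A hypothetical sketch (server URL and credentials are placeholders):

```javascript
// Background sketch with illustrative values: behind a symmetric NAT,
// host/server-reflexive candidates usually fail and ICE falls back to the
// TURN relay candidates. Forcing 'relay' reproduces that worst case.
function makeRelayOnlyConfig(turnUrl, username, credential) {
  return {
    iceServers: [{ urls: turnUrl, username, credential }],
    iceTransportPolicy: 'relay', // use only TURN-relayed candidates
  };
}
// const pc = new RTCPeerConnection(
//   makeRelayOnlyConfig('turn:turn.example.org:3478', 'user', 'pass'));
```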

Best way to broadcast camera in real-time

I am trying to find the best way to broadcast a camera and send the stream to 200 connections.
If I use WebRTC alone, I am limited by CPU power. I've tried to use a server as a gateway, but the maximum number of connections I can handle is 60, and 120 with 2 servers.
I can't use WebSockets to send the stream because the TCP protocol creates latency.
Last solution: use the RTMP protocol, but that has 5-10 s of latency.
My question: Is there a solution to stream a camera to many clients (200/300) in real time?
Just using WebRTC would not work, as I assume the device with the camera would need a huge amount of bandwidth. The best way is to use an SFU: the device sends the video to the server once, and the server then broadcasts it to every peer. An SFU can normally handle 200 connections if only video is used.
I've implemented such a server using mediasoup. It also allows you to balance the load over several CPUs and multiple servers.
Here is a simple project where this library is used.
There are also other solutions like Janus Gateway or Kurento, although I haven't used them.
SECOND SOLUTION
I found this GitHub repository, which allows peer-to-peer video forwarding even for large audiences: each peer forwards the stream it receives on to other peers. I assume there will be a little more latency, as the video could be relayed through many peers.
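A rough way to see the latency trade-off of that relay approach: if each viewer forwards the stream to a fixed number of other viewers, the number of relay hops grows roughly logarithmically with audience size. A back-of-the-envelope sketch (not taken from the repository):

```javascript
// Back-of-the-envelope sketch: in a peer-relay tree where every viewer
// forwards the stream to `fanout` other viewers, end-to-end latency grows
// with the number of relay hops needed to cover the audience.
function relayHops(audience, fanout) {
  let hops = 0;
  let layer = 1;    // the broadcaster
  let reached = 0;  // viewers covered so far
  while (reached < audience) {
    layer *= fanout;   // viewers added at this depth
    reached += layer;
    hops += 1;
  }
  return hops;
}
// e.g. 200 viewers with fanout 3 -> 5 hops;
// an SFU is effectively fanout 200 -> 1 hop
```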

WebRTC - No Streams Apparently Means No ICE Transitions

My WebRTC peer-to-peer setup works perfectly with both audio and video, locally and remotely. The ICE connection state transitions as expected and finally lands in the "connected" state.
Now, if I don't add any audio or video streams to the peer, the session descriptions and ICE candidates are exchanged and applied successfully, but the ICE connection state never changes at all - not to checking, connected, disconnected, failed, or closed. No exceptions are thrown either.
If I add just an audio stream, again everything is exchanged and applied successfully, and the ICE connection state this time transitions to "checking", but nothing after that.
Any insight as to why this is?
If you look at the SDP generated, you'll see it has no m= sections. Those are necessary in order to have a=candidate lines, and without those you cannot establish a connection (and it would be surprising if you got candidates at all). There is some discussion around this issue here.
For the second question (the audio-only case stuck at "checking"), the answer is "it depends". This discusses how to use Chrome's webrtc-internals page for analysing the issue.
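One common workaround for the no-m=-section case is to create a data channel before calling createOffer(), so the SDP gains an m= section and ICE has something to negotiate even with no audio or video tracks. A minimal sketch (the channel label is arbitrary):

```javascript
// Sketch of the workaround: creating a (possibly unused) data channel
// before createOffer() ensures the offer SDP contains at least one
// m= section, which is what allows candidates to be gathered at all.
function createOfferWithMSection(pc) {
  pc.createDataChannel('ice-bootstrap'); // label is arbitrary
  return pc.createOffer();
}
```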

Multiple MQTT connections on a single IOT device

Using the azure-iot-sdk for python I have a program that opens a connection to the IoT Hub and continually listens for direct methods, using the MQTT protocol. This is working as expected. I have a second python program that I invoke from cron hourly, that connects to the IoT Hub and updates the device twin for my device. Again this is using MQTT. Everything is working fine.
However, I've come across in the documentation that a device can only have one MQTT connection at a time, and opening a second will cause the first to drop. I'm not seeing this behaviour - but is what I'm doing unsupported?
Should I have a single program doing both tasks and sharing a single connection?
Yes, that is correct: you can't have more than one connection with the same device ID to the IoT Hub. Over time you will eventually see inconsistent behaviour, and that scenario is unsupported. You should use a single program with a unique device ID doing both tasks.
Depending on the scenario, you may also want to consider using an iothubowner connection string to do service-side operations such as managing your IoT hub, and optionally sending messages, scheduling jobs, invoking direct methods, or sending desired-property updates to your IoT devices or modules.