Best way to broadcast a camera in real time - WebRTC

I am trying to find the best way to broadcast a camera and send the stream to 200 connections.
If I use WebRTC peer-to-peer, I am limited by CPU power. I've tried using a server as a gateway, but the maximum number of connections I can handle is 60, and 120 with 2 servers.
I can't use WebSockets to send the stream because the TCP protocol creates latency.
Last option: the RTMP protocol, but it has 5-10 s of latency.
My question: is there a solution to stream a camera to many clients (200/300) in real time?

Just using peer-to-peer WebRTC would not work, as I assume the device with the camera would need huge bandwidth. The best way is to use an SFU (Selective Forwarding Unit): the broadcaster sends the video once to the server, which then forwards it to every peer. An SFU is normally able to handle 200 connections if only video is used.
I've implemented such a server using mediasoup. It also allows you to balance the load over several CPUs and multiple servers.
Here is a simple project where this library is used.
There are also other solutions like Janus Gateway or Kurento, although I haven't used them.
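For a rough idea of what the server side looks like, here is a minimal sketch of the mediasoup v3 flow (the codec, transport options, and IPs are placeholder examples, and all signaling with the clients is omitted):

```javascript
// Minimal mediasoup v3 broadcast sketch: one producer, many consumers.
const mediasoup = require('mediasoup');

async function startSfu() {
  // Spawning one worker per CPU core is how the load gets spread.
  const worker = await mediasoup.createWorker();
  const router = await worker.createRouter({
    mediaCodecs: [{ kind: 'video', mimeType: 'video/VP8', clockRate: 90000 }],
  });

  // The broadcaster uploads its camera stream exactly once over this transport.
  const sendTransport = await router.createWebRtcTransport({
    listenIps: [{ ip: '0.0.0.0', announcedIp: '203.0.113.1' }], // example IPs
    enableUdp: true,
  });
  // The producer is created from rtpParameters signaled by the broadcaster:
  // const producer = await sendTransport.produce({ kind, rtpParameters });

  // Each viewer then gets its own transport plus a consumer of that single
  // producer, so the fan-out to 200+ peers happens on the server:
  // const recvTransport = await router.createWebRtcTransport({ ... });
  // const consumer = await recvTransport.consume({
  //   producerId: producer.id,
  //   rtpCapabilities, // signaled from the viewer's mediasoup-client Device
  // });
}
```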
SECOND SOLUTION
I found this GitHub repository, which allows peer-to-peer video forwarding even for large audiences: each peer forwards the stream it receives on to other peers, which do the same in turn. I assume there will be a little more latency, since the video can be relayed through many peers.
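The trick that makes such forwarding possible is that a track received on one RTCPeerConnection can be fed into another one. A minimal sketch of the relay step, assuming two already-signaled connections with the hypothetical names pcFromParent and pcToChild:

```javascript
// Relay a stream received from an upstream peer to a downstream peer.
pcFromParent.ontrack = (event) => {
  // Play the incoming stream locally...
  document.querySelector('video').srcObject = event.streams[0];
  // ...and forward the same track to the next peer in the tree. Every hop
  // adds a little latency, which is the trade-off mentioned above.
  pcToChild.addTrack(event.track, event.streams[0]);
};
```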

Related

WebRTC with SFU: using so many PeerConnections as consumers in a group call?

I'm developing a group-call app like Google Meet, using WebRTC with the SFU method for routing.
My project worked well, until I opened chrome://webrtc-internals to see the WebRTC connection status and compared it with Google Meet.
Google Meet:
only 1 peer connection is active.
My project:
1 peer connection active for broadcasting, and n-1 peer connections active as consumers.
So if there are 5 users in total in a room, then each client also has 5 active peer connections (1 as broadcaster, 4 as consumers).
So my question is: how can I use only 1 peer connection as a consumer, or 1 peer connection as both broadcaster and consumer? Is my method wrong, or have I misunderstood the implementation of SFU?
Any suggestions or solutions?
I am still discovering/learning the WebRTC stack and related architectures, so take what I am saying with a grain of salt.
With an SFU architecture you can use multiple strategies to distribute the streams between your clients. In all cases, you save bandwidth for the local user by sending their streams to the SFU only once.
As you state, for n users you can open 1 RTCPeerConnection with the SFU for the local user and n-1 RTCPeerConnections for the remote users.
Alternatively, you can open only one RTCPeerConnection with the SFU for any number of users in the "room". To achieve this, when a new user enters the SFU session, their streams need to be added to the tracks of the PeerConnection present at the SFU. This triggers a renegotiation through signaling, and your users will know a new track (stream) has been added. The client (JavaScript code, for example) needs to match the new tracks to a specific user; for that, you can include that user's information in the signaling payload. From the point of view of a given user, these new tracks (audio + video) will correspond to a new user.
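On the client, this second strategy boils down to a single ontrack handler plus a lookup from track to user. A minimal sketch, assuming the signaling payload gives you a mapping from each transceiver's mid to a user id (the names midToUserId and attachTrackToUserTile are made up for illustration):

```javascript
// One RTCPeerConnection to the SFU carries everyone's tracks.
const pc = new RTCPeerConnection();

// Hypothetical map filled in from the signaling payload, keyed by
// transceiver mid, e.g. { "0": "alice", "1": "alice", "2": "bob" }.
const midToUserId = {};

pc.ontrack = (event) => {
  // Identify which user the new track belongs to via the signaled mapping.
  const userId = midToUserId[event.transceiver.mid];
  attachTrackToUserTile(userId, event.track); // hypothetical UI helper
};
```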
The first approach is simpler but takes more resources: more ICE candidates to gather, more STUN requests, more connections to the SFU, etc.
The second one is more efficient but harder to implement, both on the client and the server.
Here is a link to bloggeek.me, which provides excellent resources for WebRTC and discusses these two approaches far better than I can.
The post states that the Jitsi server uses only one peer connection with the SFU per user.
Other strategies exist: the LiveKit server, an SFU implementation in Go, uses 2 PeerConnections per user, one for publishing the local user's streams and a second for receiving the streams of all other users. Here is a link to the client protocol of the LiveKit server.
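A sketch of that two-connection pattern (this shows the general shape only, not LiveKit's actual client code; renderRemoteTrack is a hypothetical UI helper):

```javascript
// Inside an async function: one connection only sends, the other only receives.
const publisher = new RTCPeerConnection();
const subscriber = new RTCPeerConnection();

// Publish the local camera and mic upstream only.
const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
for (const track of stream.getTracks()) {
  publisher.addTransceiver(track, { direction: 'sendonly' });
}

// Tracks from every remote participant arrive on the subscriber connection.
subscriber.ontrack = (event) => {
  renderRemoteTrack(event.track, event.streams[0]); // hypothetical UI helper
};
```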
For approaches 2 and 3, I really don't know how SFU servers wire all these streams up correctly between each PeerConnection and a local user. It seems very specific to each project.
You have to check the API of the SFU server you are using and see what is possible with it. But what you are looking for is definitely possible, given the "right" project for your use case.
On the client side, it depends on the project you are using too.
If you are in the early stages of your project, you could check out the LiveKit server. It is an open-source project (Apache 2.0 license) developed in Go, and it provides a lot of interesting features out of the box: auto-scaling of SFU instances through Redis, a Kubernetes setup, client libraries for JavaScript and Flutter, server SDKs in various languages for interacting with SFU instances, etc. The ecosystem seems really nice and the documentation is good too.
Hope this helps a bit.

When is an endpoint bundle-aware and when not?

From www.w3.org/TR/webrtc/#dom-rtcbundlepolicy, section 4.2.5 (RTCBundlePolicy Enum):
"If the remote endpoint is bundle-aware, all media tracks and data channels are bundled onto the same transport."
When is an endpoint bundle-aware and when not? And what does bundle-aware mean?
To establish a p2p connection, WebRTC will allocate and perform STUN network checks on up to 3 ports (multiplied by the ways they can be reached) on either end and, as they're discovered (which takes time), ask JS to trickle-exchange info about each of these "ICE candidates" across a signaling channel: once for video, once for audio, and once for data (if you have it).
WebRTC does this mostly to support connecting to non-browser legacy devices, because all modern browsers support BUNDLE, which is when all but one candidate end up being thrown away and all media gets bundled over that single port.
WebRTC even has a "max-compat" mode that goes further, allocating a port for each piece of media, just in case the other endpoint is really old.
WebRTC doesn't know the other endpoint is a browser until it receives an "answer" from it, but if you know in advance, you can specify "max-bundle" and save a couple of milliseconds.
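In code this is just a constructor option; "balanced" is the default value:

```javascript
// If you know the remote endpoint is a modern browser, ask for a single
// transport up front instead of gathering candidates per media section.
const pc = new RTCPeerConnection({
  bundlePolicy: 'max-bundle', // alternatives: 'balanced' (default), 'max-compat'
  iceServers: [{ urls: 'stun:stun.example.org' }], // example STUN server
});
```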

Is there a WebRTC RTCSwarmConnection architecture?

I would like to create an audio (mic) and video (camera) chat room with 12 people using WebRTC. I understand signalling and the need for external services like ICE & STUN to help peers connect with each other.
But I don't want to use a full mesh architecture where everyone connects to everyone else because it is less efficient. I don't want to use expensive TURN relay services. I want the swarm to propagate the streams automatically so that if a direct connection isn't possible, the network routes the stream packets via peers automatically using encapsulation.
I don't want to use star architecture because I don't want a bottleneck peer.
I would like a peer to connect to maybe 2 or 3 other peers (max) and broadcast their media across the network without worrying about the relaying between peers. The routing would obviously need to be controlled by some service, but I can't see how to do the stream encapsulation with RTCPeerConnection, since WebRTC only offers the RTCPeerConnection object (1 peer to 1 peer), and a peer would need to distinguish where an incoming stream is coming from and whether it needs to be relayed to another peer.
Is there a technology that extends WebRTC to allow for this more bandwidth efficient architecture?

Receive UDP broadcast packets on iOS

I'm almost completely done with an iOS client for my REST service. The only thing I'm missing is the ability for the client to listen on the network for a UDP broadcast that carries the host display name and the base URL for uploads. There could be multiple servers on the network broadcasting and waiting for uploads.
Asynchronous is preferred. The servers will be displayed to the user as the device discovers them and I want the user to be able to select a server at any point in time.
The broadcaster is sending to 255.255.255.255 and does not expect any data back.
I am a beginner in Objective-C, so something simple and easy to use is best.
I recommend looking at CocoaAsyncSocket. It can handle UDP sockets well. I haven't tried listening to a broadcast with it, but it's probably your best bet.
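Whichever library you pick, the steps are the same: bind a UDP socket to the known discovery port and handle datagrams as they arrive, with no replies needed. Here is the idea sketched in Node.js-style JavaScript (the port number is an example), just to show the shape of what the Objective-C listener has to do:

```javascript
// Listen for server announcements broadcast to 255.255.255.255.
const dgram = require('dgram');

const DISCOVERY_PORT = 9999; // example; use the port your servers broadcast on
const socket = dgram.createSocket({ type: 'udp4', reuseAddr: true });

socket.on('message', (msg, rinfo) => {
  // Each datagram is assumed to carry the display name and base URL;
  // parse it and add the server to the list shown to the user.
  console.log(`server at ${rinfo.address}:`, msg.toString());
});

socket.bind(DISCOVERY_PORT); // receive broadcasts; never send anything back
```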

What are common UDP usecases?

Can anyone tell me where to use the UDP protocol, apart from live streaming of music/video? What are typical use cases for UDP?
UDP is also good for broadcast, such as service discovery: finding that newly plugged-in printer.
Also of note is that broadcast is anonymous: you don't need to specify target hosts, so it can form the foundation of a convenient plug-and-play or high-availability network.
UDP is stateless and is good for applications that have large numbers of clients connecting to a server, such as time servers or DNS. The fact that no connection has to be established and maintained reduces the memory required by the server, and the absence of a handshake reduces traffic on the network. On the downside, if the transferred information requires multiple packets, there is no transmission control to ensure that all packets arrive, and in the correct order. But in games, lost packets are probably better than late or out-of-order ones.
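A stateless UDP server of this kind is only a few lines, which is exactly why it scales to many clients. A sketch of a trivial "time server" in Node.js (the port is an example):

```javascript
// Trivial UDP time server: no connection setup, no per-client state.
const dgram = require('dgram');

const server = dgram.createSocket('udp4');
server.on('message', (msg, rinfo) => {
  // Answer each datagram with the current time. Nothing is remembered
  // between requests, so memory use is independent of the client count.
  server.send(new Date().toISOString(), rinfo.port, rinfo.address);
});
server.bind(3700); // example port
```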
Anything else where you need performance but can survive if a packet gets lost along the way. Multiplayer games come to mind, for example.
A very common use case is DNS, since the overhead of creating a TCP connection would far outweigh the actual payload.
Additional use cases are NTP (Network Time Protocol) and most video games.
I use UDP to add chat capabilities to our applications; there is no need to create a server. It is also useful for dispatching events to all users of our applications.