Does Google Meet use WebRTC?

I am new to WebRTC development and would like to create a conference room with multiple connections. I have read about WebRTC and peer communication. I would like to know whether Google Meet uses one peer connection between every pair of participants. If so, how can it handle 250 participants without draining the browser's resources?

For group calls, Google Meet does not use peer-to-peer; if it did, it would drain one's bandwidth and browser resources very quickly.
Although I am not exactly sure about Meet's infrastructure, it most likely uses a Selective Forwarding Unit (SFU) to handle that many connections, since that is common practice for this sort of application.
The SFU acts as a WebRTC client on the server side: each participant uploads their stream once, and the SFU forwards it to every browser that requests it.
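A rough sketch of the client side of that model: the participant keeps a single RTCPeerConnection to the SFU no matter how many others join. The signaling WebSocket here is a hypothetical channel to the SFU's signaling endpoint, not part of any real Meet API.

```typescript
// Minimal sketch: one upstream connection to an SFU, assuming a
// hypothetical "signaling" WebSocket to the SFU's signaling endpoint.
async function joinViaSfu(localStream: MediaStream, signaling: WebSocket) {
  const pc = new RTCPeerConnection();

  // Upload our media exactly once, regardless of participant count.
  for (const track of localStream.getTracks()) {
    pc.addTrack(track, localStream);
  }

  // The SFU adds one incoming track per remote participant it forwards.
  pc.ontrack = (event) => {
    console.log("remote track from SFU:", event.track.kind);
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ type: "offer", sdp: offer.sdp }));
  return pc;
}
```

Contrast this with a mesh, where each of N participants holds N-1 peer connections and uploads its stream N-1 times.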

Related

Will WebRTC clients work with TURN servers which support only Channels and Not Data/Send mechanisms?

I was reading the TURN server RFCs. All related RFCs (5766 and the more recent 8656) support the Channel mechanism to avoid the 36-byte overhead of STUN headers (Section 2.5 of RFC 5766) required for the Send/Data approach:
For some applications (e.g., Voice over IP), the 36 bytes of overhead that a Send indication or Data indication adds to the application data can substantially increase the bandwidth required between the client and the server. To remedy this, TURN offers a second way for the client and server to associate data with a specific peer.
For WebRTC, clearly there is no point in using the Send/Data mechanism. How do browsers choose between the two relaying mechanisms? Is Send/Data a fallback? Will support for Channels alone in a TURN server be sufficient for the WebRTC use case?
They will usually use Send indications while waiting for the Channel to be created.
Send indications are also important if the relay receives something before the Channel is created. Some clients only create the Channel when they send, not right when the permission is created.
Firefox doesn't support TURN channels: https://bugzilla.mozilla.org/show_bug.cgi?id=857736
Chrome also uses Send/Binding indications until ICE is done (presumably to avoid the overhead of creating channels that are not used later).
Don't rely on partial implementations of the spec; that won't work.
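To make the overhead difference concrete, here is a minimal sketch of ChannelData framing as described in RFC 5766: the entire per-message header is 4 bytes, versus roughly 36 bytes of STUN header and attributes for a Send indication carrying the same payload.

```typescript
// A minimal sketch of TURN ChannelData framing (RFC 5766, Section 11.4).
// The whole header is 4 bytes: a 2-byte channel number (0x4000-0x7FFF)
// plus a 2-byte length, versus ~36 bytes of STUN header and attributes
// for a Send indication with the same payload.
function frameChannelData(channelNumber: number, payload: Uint8Array): Uint8Array {
  if (channelNumber < 0x4000 || channelNumber > 0x7fff) {
    throw new RangeError("channel numbers must be in 0x4000-0x7FFF");
  }
  const frame = new Uint8Array(4 + payload.length);
  const view = new DataView(frame.buffer);
  view.setUint16(0, channelNumber); // bytes 0-1: channel number
  view.setUint16(2, payload.length); // bytes 2-3: payload length
  frame.set(payload, 4); // application data follows immediately
  return frame;
}
```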

WebRTC - Video conferencing with only one emitter

I am new to WebRTC technology.
I want to create a video chat / video conference with one transmitter and many viewers (more than 1,000).
I have read a lot of documentation:
https://medium.com/linagora-engineering/scalability-in-video-conferencing-part-1-276f52b4acac
https://webrtcglossary.com/sfu/
But I still don't know which is the best solution in my case: a Selective Forwarding Unit (SFU) or a Multipoint Control Unit (MCU).
Can you help me to understand?
I think the best way is MCU but I am not sure.
Second question:
Can you suggest some sources and links that can help me set up such an architecture? Currently my project works perfectly peer-to-peer (mesh), but that is not the right solution at this scale. I have absolutely no idea how to set this up.
Thank you so much
It is possible to implement this using an SFU. The more peers connect, the more processing power you need to handle them. This can be addressed by using more threads and/or by forwarding streams to other machines.
With mediasoup you have control over this. With this tool you have routers that peers connect to in order to receive the stream. A router runs on a worker, which can handle a limited number of receiving peers (depending on CPU capacity). To allow more peers, you can pipe the stream to other routers, which expands the total capacity (see the sketch after the links below).
useful links:
https://mediasoup.org/documentation/v3/scalability/#one-to-many-broadcasting
https://mediasoup.org/documentation/v3/mediasoup/design/#architecture
https://mediasoup.discourse.group/t/scalability-in-mediasoup-example/793/2?u=dirvann
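A minimal sketch of that fan-out, assuming mediasoup v3; transport and producer setup are omitted, the codec list is only an example, and error handling is left out:

```typescript
// One-to-many fan-out across mediasoup v3 routers (sketch).
import * as mediasoup from "mediasoup";

async function createFanout() {
  // In practice you would create one worker per CPU core.
  const workerA = await mediasoup.createWorker();
  const workerB = await mediasoup.createWorker();

  const mediaCodecs: mediasoup.types.RtpCodecCapability[] = [
    { kind: "video", mimeType: "video/VP8", clockRate: 90000 },
  ];
  const routerA = await workerA.createRouter({ mediaCodecs });
  const routerB = await workerB.createRouter({ mediaCodecs });

  // The broadcaster publishes a producer on routerA (transport setup not
  // shown). Piping makes the same producer consumable from routerB, so
  // each router serves only its own share of the viewers:
  //
  //   await routerA.pipeToRouter({ producerId: producer.id, router: routerB });

  return { routerA, routerB };
}
```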

How do you handle newcomers efficiently in WebRTC signaling?

Signaling is not addressed by WebRTC (even if we do have JSEP as a starting point), but from what I understand, it works this way (see the sketch after the list):
client tells the server it's available at X
server holds that information and maps it to an identifier
other client comes and sends an identifier to get connection information from the first client
other client uses it to create its own connection information and sends it to the server
server sends this to first client
both clients can now talk
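A minimal client-side sketch of that flow; the signaling WebSocket and the message shape (`to`, `sdp`, `candidate`) are hypothetical conventions, not part of WebRTC itself:

```typescript
// Sketch of the offer side of the exchange, assuming a hypothetical
// WebSocket to a server that relays messages by peer ID.
async function connectToPeer(signaling: WebSocket, remoteId: string) {
  const pc = new RTCPeerConnection();

  // Trickle our connection information (ICE candidates) via the server.
  pc.onicecandidate = (event) => {
    if (event.candidate) {
      signaling.send(JSON.stringify({ to: remoteId, candidate: event.candidate }));
    }
  };

  pc.createDataChannel("chat"); // ensure the offer has something to negotiate
  await pc.setLocalDescription(await pc.createOffer());
  signaling.send(JSON.stringify({ to: remoteId, sdp: pc.localDescription }));

  // The server forwards the remote answer and candidates back to us.
  signaling.onmessage = async (event) => {
    const msg = JSON.parse(event.data);
    if (msg.sdp) await pc.setRemoteDescription(msg.sdp);
    else if (msg.candidate) await pc.addIceCandidate(msg.candidate);
  };
  return pc;
}
```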
This is all nice and well, but what happens if a third client arrives?
You have to redo the whole thing, which supposes the first two clients are STILL connected to the server, waiting for a third client to signal itself and start the exchange process again so they can get the third client's connection information.
So does it mean you are required to have some sort of permanent link to the server for each client (long polling, WebSocket, etc.)? If yes, is there a way to do that efficiently?
Because I don't see the point of having WebRTC if I have to set up Node.js or Tornado and make it scale to the number of my users. It doesn't sound very P2P-ish to me.
Please tell me I missed something.
What about a chat system? Do you really need to keep a permanent link to the server for each client? Of course, because otherwise you have no way of keeping track of a user's status. This "permanent" link can be maintained in different ways: you mentioned WebSocket and long polling, but simple periodic XHR polling works too (although this will affect the UX, depending on the interval).
So view it like a chat system, except that the media stream is P2P for reduced latency. Once a P2P WebRTC connection is established, the server may die and, of course, the P2P connection will be kept alive between the two clients. What I mean is: both users could block your server entirely once the P2P connection is established and still be connected to each other out in the wild Internet.
Understand me well: once the P2P connection is established, your server will not be doing any more WebRTC signalling. The connection is only needed to keep track of the statuses.
So it depends on your application. If you want to keep the statuses of users and make them visible to others, then you're in the same situation as a chat system: you need to keep a certain link, somehow, to make sure their statuses are synced. Otherwise, your server exists only to connect them together and is not needed afterwards. An example of the latter situation: a user goes to a webpage, the webpage provides him with a new room URL, the user shares this URL with another peer by other means, the other peer joins the room, the server connects them together (manages the WebRTC signalling) and then forgets them. They are now connected until one of them breaks the link. Just like this reference app.
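A minimal relay of that kind, sketched with the ws package (an assumption; any WebSocket server would do) and a hypothetical one-room-per-URL convention. It only forwards opaque messages and plays no part in the media path afterwards:

```typescript
// Minimal signaling relay: forwards offers, answers, and ICE candidates
// verbatim between the clients in a room.
import { WebSocketServer, WebSocket } from "ws";

const rooms = new Map<string, Set<WebSocket>>();
const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket, request) => {
  // Hypothetical convention: clients connect to ws://host:8080/<roomId>.
  const roomId = request.url ?? "/default";
  const peers = rooms.get(roomId) ?? new Set<WebSocket>();
  peers.add(socket);
  rooms.set(roomId, peers);

  // Relay every message to the other peers in the same room.
  socket.on("message", (data) => {
    for (const peer of peers) {
      if (peer !== socket && peer.readyState === WebSocket.OPEN) {
        peer.send(data.toString());
      }
    }
  });

  socket.on("close", () => peers.delete(socket));
});
```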
Instead of a central server keeping one connection per client, a mesh network could also be considered, albeit difficult to implement.

Does WebRTC allow one-to-many (multicast) connections?

I've read a lot about WebRTC, but there's one question that still remains. I hope you can help me with that:
Does WebRTC allow me to create a one-to-many connection? I don't mean "being able to have multiple connections to different computers", I really mean having one connection that multicasts its data to multiple endpoints without the need to "upload" the data once for each endpoint. Will it be possible to send one single packet to the web that, when it reaches the web, magically splits itself into multiple packets with different targets?
I hope you get what I'm looking for :)
Until now, I've only seen one-to-one connections, or solutions that have one connection to a central server that does the multicast for them (which usually results in twice the ping).
But to me, one-to-one connections don't seem to be really useful (due to clients' low upload bandwidth), and solutions with a central server are also possible without WebRTC (using WebSockets), so the only real use case for WebRTC would be one-to-many connections.
So.. is this something that will be possible in the future? Or is it already possible today?
Three things:
IP multicast on the Internet is not possible at the moment (multicast addresses are not routed by ISPs)
WebRTC fits many use cases beyond one-to-many communication, just have a look at this document: https://datatracker.ietf.org/doc/html/draft-ietf-rtcweb-use-cases-and-requirements-06
WebRTC connections between browsers are always encrypted (using SRTP for A/V data and DTLS for generic data), and the encryption parameters (session keys etc.) are negotiated separately for every connection. How would you do that in a multicast environment (think of it as a distribution tree)?
So no, WebRTC cannot be used with IP multicast.
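You can see that per-connection negotiation from a browser console: each RTCPeerConnection advertises its own DTLS certificate fingerprint in its SDP, which is exactly what a shared multicast stream would have no answer for. A minimal sketch:

```typescript
// Browser-only sketch: each RTCPeerConnection produces an offer with its
// own "a=fingerprint" line, showing that crypto is negotiated per connection.
async function showPerConnectionFingerprints(): Promise<void> {
  for (let i = 0; i < 2; i++) {
    const pc = new RTCPeerConnection();
    pc.createDataChannel("probe"); // ensure the offer contains a media section
    const offer = await pc.createOffer();
    const fingerprint = offer.sdp?.match(/^a=fingerprint:.*$/m)?.[0];
    console.log(`connection ${i}:`, fingerprint);
    pc.close();
  }
}
```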
I would answer "it doesn't, for now", because as a programmer I can tell you there are a number of ways browser developers could make it work if we (users) insisted on its importance. How? Since there is encryption, they could allow sharing the session's encryption keys with a group of "registered" (multicast) users. How would the keys be shared? The Web was created for sharing: the most obvious way is through web-server mediation and a JS WebRTC API function to load the user keys. Since multicast is most often used for efficient video distribution, you would have an RTP/SRTP video server, and the web server could coexist on the same machine. If this were ever extended to web browsers, the "server" role could simply be played by the browser that created the multicast stream (the sender); the clients would just need to know who that is.
Again: as of December 2013, this is still not possible. And multicast is allowed on the Internet only in:
some experimental WAN nets
some internet+video ISP nets
LANs (when enabled at the switch level; cheap switches transmit it to all ports). So unless you are an ISP, a researcher, or a LAN-only user, it is out of reach.

UDP Broadcast, Multicast, or Unicast for a "Toy Application"

I'm looking to write a toy application for my own personal use (and possibly to share with friends) for peer-to-peer shared status on a local network. For instance, let's say I wanted to implement it for the name of the current building you're in (let's pretend the network topology is weird, and multiple buildings occupy the same LAN). The idea is if you run the application, you can set what building you're in, and you can see the buildings of every other user running the application on the local network.
The question is, what's the best transport/network layer technology to use to implement this?
My initial inclination was to use UDP Multicast, but the more research I do about it, the more I'm scared off by it: while the technology is great and seems easy to use, if the application is not tailored for a particular site deployment, it also seems most likely to get you a visit from an angry network admin.
I'm wondering, therefore, since this is a relatively low bandwidth application — probably max one update every 4–5 minutes or so from each client, with likely no more than 25–50 clients — whether it might be "cheaper" in many ways to use another strategy:
Multicast: find a way to pick a well-known multicast address from 239.255/16 and have interested applications join the group when they start up.
Broadcast: send out a single UDP Broadcast message every time someone's status changes (and one "refresh" broadcast when the app launches, after which every client replies directly to the requesting user with their current status).
Unicast: send a UDP Broadcast at application start to announce interest, and when a client's status changes, it sends a UDP packet directly to every client who has announced. This results in the highest traffic, but might be less likely to annoy other systems with needless broadcast packets. It also introduces potential complications when apps crash (in terms of generating unnecessary traffic).
Multicast is most certainly the best technology for the job, but I'm wondering if the associated hassles are worth avoiding since this is just a "toy application," not a business-critical service intended for professional network admin deployment and configuration.
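For reference, the multicast option is only a few lines in practice. A minimal Node.js sketch using the built-in dgram module; the group address (any unused address from 239.255/16, the administratively scoped range) and port are arbitrary choices:

```typescript
// Join a multicast group, listen for status updates, and announce our own.
import * as dgram from "node:dgram";

const GROUP = "239.255.42.99"; // assumption: any unused 239.255/16 address
const PORT = 5007;

// reuseAddr lets several instances on the same host bind the same port.
const socket = dgram.createSocket({ type: "udp4", reuseAddr: true });

socket.on("message", (msg, rinfo) => {
  console.log(`${rinfo.address}: ${msg.toString()}`);
});

socket.bind(PORT, () => {
  socket.addMembership(GROUP); // join the group on the default interface
  // Announce our status once; every group member receives this single send.
  const status = Buffer.from("building=Library");
  socket.send(status, PORT, GROUP);
});
```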