I'm developing a group call app like Google Meet, using WebRTC with an SFU for routing.
My project works well, but when I open chrome://webrtc-internals to check the WebRTC connection status and compare it with Google Meet, I see a difference.
Google Meet:
only 1 peer connection is active.
My project:
1 peer connection active for broadcasting.
n-1 peer connections active as consumers.
So if there are 5 users in a room, each client also has 5 active peer connections (1 as broadcaster, 4 as consumers).
So my question is: how can I use only 1 peer connection as a consumer, or use a single peer connection for both broadcasting and consuming? Is my approach wrong, or have I misunderstood how an SFU should be implemented?
Any suggestions or solutions?
I am still discovering/learning the WebRTC stack and related architectures, so take what I am saying with a grain of salt.
With an SFU architecture you can use multiple strategies to distribute the streams between your clients. In all cases, you save upload bandwidth for the local user because their streams are sent only once, to the SFU.
As you state, for n users you can open 1 RTCPeerConnection with the SFU for the local user's streams and n-1 RTCPeerConnections for the remote users.
You can also open only one RTCPeerConnection with the SFU for any number of users in the "room". To achieve this, when a new user joins the SFU session, their streams need to be added as tracks to the PeerConnection that already exists on the SFU. This triggers a renegotiation through signaling, and your users will know a new track (stream) has been added. The client (JavaScript code, for example) needs to map the new tracks to a specific user; for that you can include that user's information in the signaling payload. From the point of view of a given user, these new tracks (audio + video) will correspond to a new user.
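To illustrate this second approach, here is a rough client-side sketch. The signaling transport and the message shapes ("track-added", "offer", userId, trackMid) are assumptions I made up for the example; a real SFU has its own protocol for announcing which user a new track belongs to.

```typescript
// Assumed to exist: a signaling connection to the SFU and a UI helper.
declare const signaling: WebSocket;
declare function attachTrackToUserElement(userId: string, track: MediaStreamTrack): void;

// A single RTCPeerConnection carries the local user's tracks and every remote track.
const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });

// Remember which remote user owns which transceiver ("mid"), based on what
// the SFU announces over signaling before it renegotiates.
const midToUser = new Map<string, string>();

signaling.addEventListener("message", async (event: MessageEvent) => {
  const msg = JSON.parse(event.data);
  if (msg.type === "track-added") {
    midToUser.set(msg.trackMid, msg.userId);
  } else if (msg.type === "offer") {
    // Renegotiation triggered by the SFU after it added the new user's tracks.
    await pc.setRemoteDescription({ type: "offer", sdp: msg.sdp });
    await pc.setLocalDescription(await pc.createAnswer());
    signaling.send(JSON.stringify({ type: "answer", sdp: pc.localDescription!.sdp }));
  }
});

pc.ontrack = ({ track, transceiver }) => {
  // Route the incoming track to the <video>/<audio> element of the right user.
  const userId = midToUser.get(transceiver.mid ?? "");
  if (userId) attachTrackToUserElement(userId, track);
};
```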
The first approach is simpler but uses more resources: more ICE candidates to gather, more STUN requests, more connections to the SFU, etc.
The second one is more efficient but harder to implement, both on the client and on the server.
Here is a link to bloggeek.me, which provides excellent resources for WebRTC and discusses these two approaches far better than I can.
The post states that the Jitsi server uses only one peer connection with the SFU per user.
Other strategies exist: the LiveKit server, an SFU implementation in Go, uses 2 PeerConnections per user, one for publishing the local user's streams and a second for receiving the streams of all other users. Here is a link to the client protocol of the LiveKit server.
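For illustration, a rough sketch of that two-connection pattern with the plain WebRTC API. This is only the idea, not LiveKit's actual client protocol, and connectToSfu() stands in for whatever signaling and negotiation the server expects:

```typescript
// Hypothetical helper: runs the offer/answer exchange with the SFU for one connection.
declare function connectToSfu(pc: RTCPeerConnection, role: "publisher" | "subscriber"): Promise<void>;

async function joinRoom() {
  // Connection 1: publish the local user's camera and microphone (send only).
  const publisher = new RTCPeerConnection();
  const media = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  for (const track of media.getTracks()) {
    publisher.addTransceiver(track, { direction: "sendonly" });
  }
  await connectToSfu(publisher, "publisher");

  // Connection 2: receive every other participant's tracks (receive only).
  const subscriber = new RTCPeerConnection();
  subscriber.ontrack = ({ track, streams }) => {
    // The SFU adds one track per remote stream; render it somewhere.
    console.log("remote track", track.kind, "stream", streams[0]?.id);
  };
  await connectToSfu(subscriber, "subscriber");
}
```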
For approaches 2 and 3, I really don't know how SFU servers wire up all these streams correctly between each user's PeerConnections; it seems very specific to each project.
You have to check the API of the SFU server you are using and see what it allows. But what you are looking for is definitely possible, given the "right" project for your use case.
On the client side, it also depends on which project you are using.
If you are in the early stages of your project, you could check out the LiveKit server. It is an open-source project (Apache 2.0 license) developed in Go that provides a lot of interesting features out of the box: auto-scaling of SFU instances through Redis, a Kubernetes setup, client libraries in JavaScript and Flutter, server SDKs in various languages to interact with the SFU instances, etc. The ecosystem seems really nice and the documentation is good too.
Hope it helps a bit
Related
From www.w3.org/TR/webrtc/#dom-rtcbundlepolicy, section 4.2.5 RTCBundlePolicy Enum:
"If the remote endpoint is bundle-aware, all media tracks and data channels are bundled onto the same transport."
When is an endpoint bundle-aware and when is it not? And what does bundle-aware mean?
To establish a p2p connection, WebRTC will allocate and do STUN network checks on up to 3 ports (multiplied by ways they can be reached) on either end, and as they're discovered (which takes time), ask JS to trickle-exchange info on each of these "ICE candidates" across a signaling channel, once for video, once for audio, and once for data (if you have it).
WebRTC does this mostly to support connecting to non-browser legacy devices, because all modern browsers support BUNDLE, which is when all but one candidate end up being thrown away, and all media gets bundled over that single port.
WebRTC even has a "max-compat" mode that goes even further, allocating a port for each piece of media, just in case the other endpoint is really old.
WebRTC doesn't know the other endpoint is a browser until it receives an "answer" from it, but if you know, you can specify "max-bundle" and save a couple of milliseconds.
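For reference, the bundle policy is chosen when constructing the RTCPeerConnection; a small sketch, assuming you already know the other endpoint is a modern browser:

```typescript
// "max-bundle": negotiate a single transport up front and assume the remote
// endpoint is bundle-aware (true for all modern browsers). The default
// "balanced" and the legacy-friendly "max-compat" gather more candidates.
const pc = new RTCPeerConnection({ bundlePolicy: "max-bundle" });

pc.onicecandidate = ({ candidate }) => {
  // With max-bundle you should only see candidates for a single transport (one mid).
  if (candidate) console.log(candidate.sdpMid, candidate.candidate);
};
```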
I would like to create an audio (mic) and video (camera) chat room with 12 people using webRTC. I understand signalling and the need for external services like ICE & STUN to help peers connect with each other.
But I don't want to use a full mesh architecture where everyone connects to everyone else because it is less efficient. I don't want to use expensive TURN relay services. I want the swarm to propagate the streams automatically so that if a direct connection isn't possible, the network routes the stream packets via peers automatically using encapsulation.
I don't want to use star architecture because I don't want a bottleneck peer.
I would like a peer to connect to maybe 2 or 3 other peers (max) and broadcast their media across the network without worrying about the relaying between peers. The routing would obviously need to be controlled by some service, but I can't see that it's possible to do the stream encapsulation with RTCPeerConnection, since WebRTC only allows 1-to-1 RTCPeerConnection objects and a peer would need to distinguish where an incoming stream is coming from and whether it needs to be relayed to another peer.
Is there a technology that extends WebRTC to allow for this more bandwidth efficient architecture?
I have started to look into WebRTC a bit and I am using it to build a simple peer to peer chat application using the data channel. I have the following questions:
Do I need to establish a RTCPeerConnection to each peer I want to talk to? So if there are three peers they each need 2 RTCPeerConnections (unless I use one of the peers as a sort of ad-hoc server).
If peer A sends out a candidate and sdp when creating a offer to peer B. Can peer B connect to peer A using that info and send its answer (with candidate and its sdp) over the RTCPeerConnection, i.e. using the RTCPeerConnection (before it's been completely established) as a signaling channel? I would assume that when the offer is created by peer A it starts to listen for connections on some port.
My understanding of WebRTC is a bit limited, so if I've misunderstood some concept of WebRTC in my questions above please point it out!
Yes, as a direct P2P protocol everybody must be directly connected to everybody else if they want to communicate; unless you create some kind of mesh network in which one peer forwards messages to other peers.
No, the SDP offer and answer and ICE candidates all need to be exchanged through a signalling server; the connection cannot be established until both peers have actually agreed on a specific session configuration and ICE route to use, so you cannot send the SDP answer over a connection which isn't complete yet.
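To make both points concrete, here is a rough sketch of one peer's side, assuming a WebSocket signaling server that relays JSON between peers (the message shapes and the peerId field are mine, purely for illustration). There is one RTCPeerConnection per remote peer, and offers, answers, and ICE candidates always travel over the signaling socket, never over the not-yet-established connection:

```typescript
declare const signaling: WebSocket;                  // assumed: relays JSON between peers
const peers = new Map<string, RTCPeerConnection>();  // one connection per remote peer

async function connectTo(peerId: string) {
  const pc = new RTCPeerConnection();
  peers.set(peerId, pc);

  // ICE candidates are trickled to the other peer through the signaling server.
  pc.onicecandidate = ({ candidate }) => {
    if (candidate) signaling.send(JSON.stringify({ to: peerId, type: "candidate", candidate }));
  };

  const channel = pc.createDataChannel("chat");
  channel.onmessage = (e) => console.log(`${peerId}:`, e.data);

  // The offer also goes through the signaling server; the answer comes back the
  // same way and is applied with pc.setRemoteDescription(answer).
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ to: peerId, type: "offer", sdp: offer.sdp }));
}
```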
Especially for a simple text-only chat, going through a server is often easier than using P2P; the processing and bandwidth requirements are so minimal that the complications of P2P connections are probably not worth it. And you need a signalling server anyway. P2P only becomes really interesting once you start sending large files or audio/video streams.
In principle it is possible to establish a WebRTC connection without a signalling server, but that requires an out of band exchange of session tokens between the peers. I.e. the user would have to copy a token from the application, somehow send it to another user and the other user would have to paste it.
Additionally those tokens cannot be reused, so this procedure would have to be repeated every time peers want to establish a connection.
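As a sketch of what that manual exchange could look like (using non-trickle ICE so the whole SDP can be copied in one piece; the copy/paste steps stand in for whatever out-of-band channel the users actually use):

```typescript
const pc = new RTCPeerConnection();
pc.createDataChannel("chat"); // gives the offer something to negotiate

// Create the offer, then wait for ICE gathering to finish so the local
// description already contains every candidate and can be copied as one token.
await pc.setLocalDescription(await pc.createOffer());
await new Promise<void>((resolve) => {
  if (pc.iceGatheringState === "complete") return resolve();
  pc.onicegatheringstatechange = () => {
    if (pc.iceGatheringState === "complete") resolve();
  };
});
console.log("Send this offer to the other user out of band:", JSON.stringify(pc.localDescription));

// The other user builds an answer the same way, and their token is pasted back here.
async function applyPastedAnswer(answerJson: string) {
  await pc.setRemoteDescription(JSON.parse(answerJson));
}
```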
So while it is theoretically possible, WebRTC is not distributed in practical terms.
There is some noise about specifying support for incoming connections and reusable peer contacts, but the progress on that is unclear.
Basically, I'm building a Dropbox clone that will avoid cloud storage. Ok, I'm not building it, but trying to estimate the amount of work needed.
I've been reading about different P2P options here on SO, but there are actually very few topics on centralised P2P connections and how to build them from the ground up. I'm not even sure if it's appropriate to call it P2P at all.
From my ActionScript background I know it can establish a UDP connection between 2 clients across the globe via a centralised server (RTMFP). It's highly abstracted: it doesn't even require opening ports, and the clients don't know each other's IPs. So the subset of given options is quite limited.
Anyway, I need to create a server-side app and a client-side app that will try to sync files between connected clients. I've read that socket connections are used for file transfers. The questions here are:
How to pair the clients?
What should server do?
What should client do?
Thank you.
NB
Establishing connections and file syncing solutions are out of the question.
I am having a hard time understanding the ZeroMQ messaging system, so before I dive in, I wanted to see if anyone knew if what I want to do is even possible.
I want to set up a pub/sub server with ZeroMQ that will publish certain streams of data, and to subscribe to some of those streams a user must authenticate so I can check whether they have access. Everything I have seen has the subscribing taking place with the zmq.SUBSCRIBE socket option.
Can this be modified to authenticate? Does it support it out of the box?
No, there is no such functionality out of the box. ZeroMQ operates at a lower level, and it is likely that auth features will never be in the core.
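To show what that means in practice, here is a bare pub/sub sketch (I'm using the zeromq npm package, v6-style API, in TypeScript purely as an example binding): the subscriber only sets a topic prefix filter, and nothing in the exchange carries an identity or credentials, which is why access control has to live somewhere outside ZeroMQ.

```typescript
import { Publisher, Subscriber } from "zeromq";

async function run() {
  const pub = new Publisher();
  await pub.bind("tcp://127.0.0.1:5556");

  const sub = new Subscriber();
  sub.connect("tcp://127.0.0.1:5556");
  // The whole "subscription" is this topic prefix filter: no identity, no
  // credentials, nothing the publisher could use to authorize the client.
  sub.subscribe("prices");

  // Publish loop, running concurrently with the receive loop below.
  void (async () => {
    for (;;) {
      await pub.send(["prices", JSON.stringify({ btc: 42000, at: Date.now() })]);
      await new Promise((r) => setTimeout(r, 1000));
    }
  })();

  for await (const [topic, payload] of sub) {
    console.log(topic.toString(), payload.toString());
  }
}

run().catch(console.error);
```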
If your pub/sub traffic runs over IP multicast (e.g. PGM), one option is to write an auth server that controls a network router and blocks all multicast traffic to a client by IP/port until that client has been authorized. You're free to choose the auth method in this case, of course.
If you can sacrifice ZeroMQ's stability and performance to save on development cost, just take ActiveMQ; it has authentication features.