Does a TURN server (WebRTC) eliminate redundant uploads (like an SFU)?

The problem with P2P apps is the amount of upload required, since most connections are asymmetrical (weak upload, strong download).
If you're connected to 10 peers you have to upload your own video stream 10 times, which quickly falls apart.
SFUs (Selective Forwarding Units) solve this by forwarding your single upload to all peers.
Does a TURN server do the same? Technically it could, since it's already acting as a relay, but my fear is that it emulates the underlying P2P protocol too closely and hence uploads are still redundant.

No. A TURN server does not decrypt the traffic it relays; an SFU needs to (and do a couple of other things that require more logic). They're different components, solving different problems.

No, by default a TURN server doesn't do that.
You can use a more complex WebRTC media server, such as Flussonic (or perhaps Wowza), to get a star topology.
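To make the distinction concrete, here is a minimal sketch (the TURN URL and credentials are hypothetical, and signalling is omitted) showing that even with a TURN relay configured, a mesh call still creates one RTCPeerConnection, and therefore one encrypted upload of your tracks, per remote peer:

    // Mesh topology: one RTCPeerConnection per remote peer, even when relayed via TURN.
    const config = {
      iceServers: [{
        urls: 'turn:turn.example.com:3478',   // hypothetical TURN server
        username: 'user',
        credential: 'pass'
      }]
    };

    async function joinMeshCall(peerIds) {
      const localStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
      const connections = new Map();
      for (const peerId of peerIds) {
        const pc = new RTCPeerConnection(config);
        // The same local tracks are added to every connection, so they are
        // encoded, encrypted and uploaded once per peer; TURN only relays the
        // packets, it never fans a single upload out to several receivers.
        localStream.getTracks().forEach(track => pc.addTrack(track, localStream));
        connections.set(peerId, pc);
        // ... offer/answer exchange via your signalling channel goes here ...
      }
      return connections;
    }

    joinMeshCall(['peerA', 'peerB', 'peerC']); // 3 peers => 3 separate uploads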

Related

How to properly tune network for Turn Server in a WebRTC application?

I'm working on a WebRTC solution for audio/picture communication and I'm a bit concerned about the lack of bandwidth control when two peers on a LAN are communicating.
Basically I want to be able to prioritize and pre-allocate bandwidth on my switch for WebRTC calls. But I couldn't see a proper way of filtering the packets when they are in a P2P call.
Also, I don't want to decode the packet to do that, because of the possible delay caused by this operation.
I hope you guys can show me a proper way, or just tell me if my theoretical solution could work.
I'm not 100% sure about the idea I'm planning to test, because I don't know how a TURN server works internally.
But here is the idea:
And what I don't know is: is it possible to make two TURN servers know about each other? Would they work like a two-layer proxy between the callers? If so, could you please show me what I have to do to make it work?
Just install an internal proxy server in front of the external TURN server and prioritize the proxy on your LAN.
(Answering my own question after realizing the solution was easier than I thought.)
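One way to make the media easy to classify on the switch, sketched below under the assumption that the internal relay/proxy has a fixed, known address, is to force the browsers to use relayed candidates only. All call traffic then flows to and from that one well-known host, which you can prioritize without decoding any packets:

    // Force every call through the (internal) relay so the switch can match
    // and prioritize traffic to/from a single well-known host.
    const pc = new RTCPeerConnection({
      iceServers: [{
        urls: 'turn:turn.internal.example:3478',   // hypothetical internal relay address
        username: 'user',
        credential: 'pass'
      }],
      // 'relay' tells ICE to discard host/server-reflexive candidates and use
      // only relayed candidates, so no direct peer-to-peer path is taken.
      iceTransportPolicy: 'relay'
    });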

what are disadvantages of having two PeerConnections for one call?

I am thinking of changing my application from using a single PeerConnection for transferring media both ways to one PeerConnection for upstream and one for downstream for a single call between two peers.
The advantages I foresee:
Less worry about the signalling state of the PeerConnection when changing the offered media from video+audio to audio-only and vice versa.
It might be easier to plug a media server like Kurento into the application (in a multi-user call, less upload bandwidth is required from each user).
(Not sure about this one) single responsibility principle: each PeerConnection has a single role.
The major reason I want to make this change: I am noticing that if one peer (peer1) offers only audio but the other peer (peer2) answers with both video+audio, peer1 receives only the audio for some reason; but if peer1 had been the answerer, it is able to receive both MediaTracks without any problem. I'm not sure if it is a bug in my app or in the browser (I got the same result in Firefox and Chrome). I was able to make a workaround by maintaining states and changing the offerer based on state, but I'm having problems when both peers change state (nearly) simultaneously. I thought the above proposal would be a simpler solution and I could get rid of maintaining states.
Other than the obvious disadvantages of the extra overhead of more ICE candidate requests (to STUN and TURN) and maintaining extra PeerConnections, are there any other issues possible with this design?
Nothing prevents you from doing that, but I suspect there's a simpler solution to your problem which you kind of buried:
The major reason I want to make this change: I am noticing that if one peer (peer1) offers only audio but the other peer (peer2) answers with both video+audio, peer1 receives only the audio for some reason,
Don't ask me why, but the default spec behavior when peer1 only offers audio is to only request audio from the other side. To override this and leave yourself open to receiving video as well if the other side has it, use RTCOfferOptions:
peer1.createOffer({ offerToReceiveVideo: true }).then( ... )
(or if you're using the legacy non-promise API it's the third argument.)
The nice thing with this is that it is intent-based so you don't need to track any state. e.g. always using { offerToReceiveVideo: true, offerToReceiveAudio: true } may be right for you.
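For illustration, a minimal sketch of such an intent-based renegotiation that always asks for both kinds of media regardless of what is currently being sent (the signalingChannel object is a placeholder for whatever signalling you already have):

    // Always offer to receive both kinds of media, whatever we currently send.
    const offerOptions = { offerToReceiveAudio: true, offerToReceiveVideo: true };

    async function renegotiate(pc, signalingChannel) {
      const offer = await pc.createOffer(offerOptions);
      await pc.setLocalDescription(offer);
      // Send the offer to the remote peer over your own signalling channel.
      signalingChannel.send(JSON.stringify({ sdp: pc.localDescription }));
    }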
A resource issue would be that you would be utilizing more ports, as both sides of each connection have to complete the DTLS handshake (which is done peer-to-peer and not through the signalling server).
A design challenge is keeping track of the two connections orthogonally. It could get hairy, and it will more readily expose errors in the underlying WebRTC implementation if the state is not handled properly (browser state errors, etc.).
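As a rough illustration of the proposed split (a sketch only; signalling, ICE handling and error handling are omitted, and the STUN server name is made up), one connection would be used purely for sending and the other purely for receiving:

    // One PeerConnection per direction for a two-party call (sketch only).
    const config = { iceServers: [{ urls: 'stun:stun.example.org' }] }; // hypothetical STUN server

    async function setupCall() {
      // Upstream: only local tracks are added here; no remote media is expected.
      const sendPc = new RTCPeerConnection(config);
      const localStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
      localStream.getTracks().forEach(track => sendPc.addTrack(track, localStream));

      // Downstream: no tracks are added; it only renders what the remote side sends.
      const recvPc = new RTCPeerConnection(config);
      recvPc.ontrack = (event) => {
        document.querySelector('#remoteVideo').srcObject = event.streams[0];
      };

      // Each connection needs its own offer/answer exchange and its own ICE
      // candidates, which is the extra overhead discussed above.
      return { sendPc, recvPc };
    }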

Does WebRTC allow one-to-many (multicast) connections?

I've read a lot about WebRTC, but there's one question that still remains. I hope you can help me with that:
Does WebRTC allow me to create a one-to-many connection? I don't mean "being able to have multiple connections to different computers", I really mean having one connection that multicasts its data to multiple endpoints without the need to "upload" the data once for each endpoint. Will it be possible to send one single packet to the web that, when it reaches the web, magically splits itself into multiple packets with different targets?
I hope you get what I'm looking for :)
Until now, I've only seen one-to-one connections, or solutions that have one connection to a central server that does the multicast for them (which usually results in twice the ping).
But to me, one-to-one connections don't seem to be really useful (due to the low upload bandwidth of clients), and solutions with a central server are also possible without WebRTC (using WebSockets), so the only real use case for WebRTC would be one-to-many connections.
So.. is this something that will be possible in the future? Or is it already possible today?
Three things:
IP multicast on the Internet is not possible at the moment (multicast addresses are not routed by ISPs)
WebRTC fits many use cases beyond one-to-many communication, just have a look at this document: https://datatracker.ietf.org/doc/html/draft-ietf-rtcweb-use-cases-and-requirements-06
WebRTC connections between browsers are always encrypted (using SRTP for A/V data and DTLS for generic data) and the encryption parameters (session keys etc.) are negotiated for every connection separately. How would you do that in a multicast environment (think of it as a distribution tree)?
So no, WebRTC cannot be used with IP multicast.
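To see the per-connection key negotiation mentioned above in practice, you can inspect the SDP a browser generates: every RTCPeerConnection carries its own DTLS certificate fingerprint, which is one reason a single encrypted stream can't simply be multicast to many receivers. A small sketch:

    // Each RTCPeerConnection negotiates its own DTLS/SRTP parameters; the
    // certificate fingerprint in the SDP is specific to that connection.
    async function showFingerprint() {
      const pc = new RTCPeerConnection();
      pc.createDataChannel('probe');            // ensure the offer contains a media section
      const offer = await pc.createOffer();
      const fingerprints = offer.sdp.match(/a=fingerprint:[^\r\n]+/g);
      console.log(fingerprints);                // e.g. ["a=fingerprint:sha-256 AB:CD:..."]
      pc.close();
    }

    showFingerprint();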
I would answer "it doesn't, for now", because as a programmer I can tell you that there are a number of ways for browser developers to make it work if we (users) insist on its importance. But how? Since there's encryption, they could allow sharing of the session's encryption keys with the group of 'registered' (multicast) users. How would the keys be shared? The Web was created for sharing: the most obvious way is through web server mediation and a JS WebRTC API function (to load the user keys). Since multicast is most often used for efficient video distribution, you'd have an RTP/SRTP video server; the web server can coexist on the same machine. If this were ever extended to web browsers, the "server" role could be played by the browser that created the multicast stream (the sender); the clients would just need to know which peer that is.
Again: as of December 2013, this is still not possible. And multicast is allowed on the Internet only in:
some experimental WAN networks
some internet+video ISP networks
LANs (when enabled at the switch level; cheap switches just forward it to all ports). But you may well be an ISP, a researcher, or a LAN user, so these cases are worth mentioning.

UDP Broadcast, Multicast, or Unicast for a "Toy Application"

I'm looking to write a toy application for my own personal use (and possibly to share with friends) for peer-to-peer shared status on a local network. For instance, let's say I wanted to implement it for the name of the current building you're in (let's pretend the network topology is weird, and multiple buildings occupy the same LAN). The idea is if you run the application, you can set what building you're in, and you can see the buildings of every other user running the application on the local network.
The question is, what's the best transport/network layer technology to use to implement this?
My initial inclination was to use UDP Multicast, but the more research I do about it, the more I'm scared off by it: while the technology is great and seems easy to use, if the application is not tailored for a particular site deployment, it also seems most likely to get you a visit from an angry network admin.
I'm wondering, therefore, since this is a relatively low bandwidth application — probably max one update every 4–5 minutes or so from each client, with likely no more than 25–50 clients — whether it might be "cheaper" in many ways to use another strategy:
Multicast: find a way to pick a well-known multicast address from 239.255/16 and have interested applications join the group when they start up.
Broadcast: send out a single UDP Broadcast message every time someone's status changes (and one "refresh" broadcast when the app launches, after which every client replies directly to the requesting user with their current status).
Unicast: send a UDP Broadcast at application start to announce interest, and when a client's status changes, it sends a UDP packet directly to every client who has announced. This results in the highest traffic, but might be less likely to annoy other systems with needless broadcast packets. It also introduces potential complications when apps crash (in terms of generating unnecessary traffic).
Multicast is most certainly the best technology for the job, but I'm wondering if the associated hassles are worth avoiding since this is just a "toy application," not a business-critical service intended for professional network admin deployment and configuration.
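If you do go the multicast route, here is a minimal sketch of the join/announce pattern in Node.js using the dgram module (the group address 239.255.42.99 and the port are arbitrary placeholder choices from the administratively scoped range):

    // Minimal "shared status" over UDP multicast (Node.js).
    const dgram = require('dgram');

    const GROUP = '239.255.42.99';   // arbitrary address from 239.255.0.0/16
    const PORT = 50000;              // arbitrary port

    const socket = dgram.createSocket({ type: 'udp4', reuseAddr: true });

    socket.on('message', (msg, rinfo) => {
      const status = JSON.parse(msg.toString());
      console.log(`${rinfo.address} says: ${status.user} is in ${status.building}`);
    });

    // Announce a status change (at most one every few minutes in this app).
    function announce(user, building) {
      const payload = Buffer.from(JSON.stringify({ user, building }));
      socket.send(payload, PORT, GROUP);
    }

    socket.bind(PORT, () => {
      socket.addMembership(GROUP);     // join the multicast group
      socket.setMulticastTTL(1);       // keep packets on the local network
      announce('alice', 'Building 7'); // example announcement once joined
    });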

using BOSH/similar technique for existing application/system

We have an existing system which connects to the back end via HTTP (Apache/SSL) and polls the server for new messages; needless to say, we have scalability issues.
I'm researching removing this polling and have come across BOSH/XMPP, but I'm not sure how we should adopt the BOSH technique (using long-lived HTTP connections).
I've seen there are a few libraries available, but the whole thing seems bloated since we do not need buddy lists etc. and simply want to notify the clients of available messages.
The client is written in C/C++ and works across most OS so that is an important factor. The server is in Java.
Does BOSH result in a huge number of httpd processes? Since it has to keep all the clients connected, what would be the limit on that? We are also planning to move to a 64-bit JVM/Apache; what would be the maximum number of clients in that case?
Any hints?
I would note that BOSH is separate from XMPP, so there's no "buddy lists" involved. XMPP-over-BOSH is what you're thinking of there.
Take a look at collecta.com and associated blog posts (probably by Jack Moffitt) about how they use BOSH (and also XMPP) to deliver real-time information to large numbers of users.
As for the scaling issues with Apache, I don't know — presumably each connection is using few resources, so you can increase the number of connections per Apache process. But you could also check out some of the connection manager technologies (like punjab) mentioned on the BOSH page above.
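The core idea BOSH builds on, independent of XMPP, is just long polling: the client keeps a request open, the server answers only when a message arrives or a timeout expires, and the client immediately re-requests. A minimal browser-side sketch of that pattern, for illustration only (the /messages endpoint and wait parameter are made up; your client is C/C++, but the loop is the same):

    // Long polling: the request is held open server-side until a message
    // arrives or a timeout expires, then the client immediately re-polls.
    async function pollLoop(handleMessage) {
      while (true) {
        try {
          const res = await fetch('/messages?wait=30');   // hypothetical endpoint
          if (res.ok) {
            const messages = await res.json();
            messages.forEach(handleMessage);
          }
        } catch (err) {
          // Network error: back off briefly before reconnecting.
          await new Promise(resolve => setTimeout(resolve, 5000));
        }
      }
    }

    pollLoop(msg => console.log('new message:', msg));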