Signaling between WebRTC client and SFU

If client A wants to connect to two other endpoints, B and C, how does A initiate a call with B and C using an SFU? I mean, how will A communicate to the SFU that it needs to connect to B and C? And how does ICE work in this setup?

Clients typically interact with SFUs in a publish/subscribe manner, so the signaling doesn't happen directly between them.
The workflow is usually as follows: client A publishes a stream to the SFU, then other clients, like B and C, may subscribe to A's feed. In either case (publishing or subscribing), the signaling happens between the client and the SFU's WebRTC agent. And, of course, since each client signals with the SFU, ICE candidates are exchanged with the SFU as well, not between the clients themselves.
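A rough sketch of that publish/subscribe signaling, assuming a WebSocket channel to the SFU; the message shapes ("publish", "subscribe", "candidate") and the URL are invented here, since every SFU (mediasoup, Janus, LiveKit, ...) defines its own protocol:

    // Assume the socket is already open before publish()/subscribe() are called.
    const signaling = new WebSocket("wss://sfu.example.com/signal");

    // A publishes: one RTCPeerConnection between A and the SFU, offer goes to the SFU.
    async function publish(localStream) {
      const pc = new RTCPeerConnection();
      localStream.getTracks().forEach(track => pc.addTrack(track, localStream));
      pc.onicecandidate = ({ candidate }) => {
        // Trickled candidates go to the SFU, not to B or C.
        if (candidate) signaling.send(JSON.stringify({ type: "candidate", candidate }));
      };
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      signaling.send(JSON.stringify({ type: "publish", sdp: pc.localDescription.sdp }));
      return pc; // the SFU's answer is applied later via pc.setRemoteDescription(...)
    }

    // B or C subscribes to A's feed over its own, separate connection to the SFU.
    function subscribe(feedId) {
      signaling.send(JSON.stringify({ type: "subscribe", feed: feedId }));
      // Media then flows SFU -> subscriber; A and B never signal with each other.
    }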

Related

Do Redis streams scale when creating a new stream per client?

I am attempting to create a microservice system where clients are connected to a service A over a TCP connection, and a variety of actions are performed on other services (say B, C, D, etc.) based on user interaction or other events. I need to propagate results from these services B, C and D back to service A to be returned to the client.
Since many of these services perform long-lived actions, does it make sense to use Redis streams as a buffer to store results from B, C and D, to then be propagated to the client by A? Considering that a different Redis key is used for each client, will this scale well to thousands of connections? Is Redis the right choice for event propagation on a 1:1 basis like this? (A sketch of what I mean is below.)
Kafka seems like a bad choice because every consumer is delivered every single message. Does it make sense to use something like ActiveMQ instead?
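For illustration, a minimal sketch of the per-client stream buffer described in the question, assuming Node.js with the ioredis package; the key naming (results:<clientId>) and the sendToClient helper are made up for this example:

    const Redis = require("ioredis");
    const redis = new Redis();

    // B, C or D appends a result to the stream that belongs to one client.
    async function publishResult(clientId, payload) {
      await redis.xadd(`results:${clientId}`, "*", "payload", JSON.stringify(payload));
    }

    // Service A blocks for new entries on that client's stream and forwards them.
    async function drainResults(clientId, lastId = "$") {
      const reply = await redis.xread("BLOCK", 5000, "STREAMS", `results:${clientId}`, lastId);
      if (!reply) return lastId;                 // timed out, nothing new
      const [, entries] = reply[0];              // [streamKey, [[id, fields], ...]]
      for (const [id, fields] of entries) {
        sendToClient(clientId, JSON.parse(fields[1])); // hypothetical write back over the TCP connection
        lastId = id;
      }
      return lastId;
    }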

In WebRTC, can ICE candidates be re-used across different RTCPeerConnections?

I am working on setting up group calls involving up to 8 peers using WebRTC.
Let's say a peer needs to set up 7 RTCPeerConnections to join a group call. Instead of relying on the onicecandidate event for every single RTCPeerConnection, I was wondering if I could track the client's ICE candidates in a central location and reuse them for each new RTCPeerConnection (e.g. the signaling server keeps track of a peer's full set of ICE candidates and shares them with other peers as soon as they need them).
I am unsure what the average number of ICE candidates each client will have, but with the ICE trickle process it seems that many duplicate HTTP or WebSocket calls will need to be made to the signaling server in order to exchange ICE candidates between any two peers.
So I was wondering if I could just "accumulate" ICE candidates locally and reuse them whenever a new RTCPeerConnection needs to be made with a new peer.
You cannot. ICE candidates are associated with the peer connection and its ICE username fragment and password.
There is a feature called ICE forking that would allow what you are asking for, but it is not implemented yet. https://bugs.chromium.org/p/webrtc/issues/detail?id=11252#c3 has some details.
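For reference, the per-connection exchange the question is trying to avoid looks roughly like this; sendToPeer() is a placeholder for whatever your signaling server exposes. Each connection's candidates are generated under that connection's own ufrag/password, which is why they cannot be replayed into another connection:

    // One handshake per remote peer; every candidate here belongs to this pc only.
    // sendToPeer() is a hypothetical signaling helper (WebSocket, HTTP, ...).
    async function connectTo(remotePeerId, localStream) {
      const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });
      localStream.getTracks().forEach(t => pc.addTrack(t, localStream));

      pc.onicecandidate = ({ candidate }) => {
        if (candidate) sendToPeer(remotePeerId, { type: "candidate", candidate });
      };

      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      sendToPeer(remotePeerId, { type: "offer", sdp: pc.localDescription });
      return pc;
    }

If the concern is the number of signaling round trips, one option is to skip trickle for a given connection and wait for the icegatheringstatechange event to report iceGatheringState === "complete" before sending the SDP (which then already contains all candidates), at the cost of slower call setup.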

WebRTC multi-peer connection

I have successfully connected clients A and B. The problem is that I want to add new clients, C and D, to build a group chat.
Do I need to spawn a new RTC connection and exchange offer/answer/ICE candidates for each pair of clients? For example:
A connects to B
A connects to C
A connects to D
B connects to C
B connects to D
C connects to D
Each of the above client combinations spawns its own RTCPeerConnection and goes through the WebRTC handshake (offer, ICE candidates, answer).
Do I need to spawn a new RTC connection and exchange offer/answer/ICE candidates for each pair of clients?
Exactly. Each client just needs to create a new RTCPeerConnection, attach its own audio and video tracks to it, and exchange SDP and ICE candidates every time a new client arrives (a rough sketch follows below the links).
An example is available here: https://webrtc.github.io/samples/src/content/peerconnection/multiple/
Source code: https://github.com/webrtc/samples/blob/gh-pages/src/content/peerconnection/multiple/js/main.js
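A minimal sketch of that per-pair handshake, with signalingSend() and attachRemoteVideo() standing in for your own signaling channel and UI code:

    const peers = {}; // remote peer id -> RTCPeerConnection

    async function callPeer(peerId, localStream) {
      const pc = new RTCPeerConnection();
      peers[peerId] = pc;
      localStream.getTracks().forEach(t => pc.addTrack(t, localStream));
      pc.onicecandidate = ({ candidate }) => {
        if (candidate) signalingSend(peerId, { type: "candidate", candidate });
      };
      pc.ontrack = ({ streams }) => attachRemoteVideo(peerId, streams[0]); // hypothetical UI helper
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      signalingSend(peerId, { type: "offer", sdp: pc.localDescription });
    }

    // When a new client joins, each existing client calls callPeer(newClientId, ...)
    // and the newcomer answers every offer with its own, separate RTCPeerConnection.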

How to set up a signaling server for WebRTC

How do I set up a signaling server for WebRTC when the systems are connected in a local area network? Is it mandatory to use STUN and TURN servers for signaling?
To make WebRTC run on a LAN, you will need a signaling server in that LAN. A signaling server is any web server that allows your web clients to exchange the SDP offer/answer and the ICE candidates generated by the WebRTC PeerConnection. This can be done using AJAX or WebSockets (a minimal WebSocket example is sketched below).
I have listed some top sources of information about WebRTC; please go through some of those links to better understand how WebRTC signaling works.
You will not require a STUN/TURN server, as your WebRTC clients (i.e. web browsers) will be on the LAN and accessible to each other. FYI, STUN/TURN servers are not part of the signaling but part of the media leg, and are usually required for NAT traversal of media.
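As a concrete illustration, a LAN signaling server can be as small as a WebSocket relay that forwards every message (offer, answer, ICE candidate) to the other connected clients. This sketch assumes Node.js with the ws package and does no rooms, routing or authentication:

    const { WebSocketServer, WebSocket } = require("ws");
    const wss = new WebSocketServer({ port: 8080 });

    wss.on("connection", (ws) => {
      ws.on("message", (data) => {
        // Broadcast the raw signaling message to everyone except the sender.
        for (const client of wss.clients) {
          if (client !== ws && client.readyState === WebSocket.OPEN) {
            client.send(data.toString());
          }
        }
      });
    });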
WebRTC needs some kind of signaling system for the initial negotiation: transferring the SDP, ICE candidates, sending and receiving offers, etc. The rest is done over the peer-to-peer connection. For the initial signaling you can use any technique, such as AJAX calls or socket.io.
STUN and TURN servers are required for NAT traversal; NAT traversal matters because it determines the path between peers. You can use the Google-provided STUN server address stun:stun.l.google.com:19302, or you can configure your own TURN server using an RFC 5766 TURN server implementation.
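In code, the STUN/TURN configuration is just part of the RTCPeerConnection constructor; on a pure LAN the iceServers list can even be left empty:

    // Host candidates alone are enough on a LAN; the STUN entry only matters
    // once peers sit behind NAT.
    const pc = new RTCPeerConnection({
      iceServers: [{ urls: "stun:stun.l.google.com:19302" }]
    });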
Making a signaling server for WebRTC is quite easy.
I used PHP, MySQL and AJAX to maintain the signaling data.
Suppose A wants to call B.
Then A creates an offer using the createOffer method. This method returns an offer object.
You have to transfer this offer object to user B; this is part of the signaling process.
Now create a MySQL table with the columns:
caller, callee, offer, answer, callerICE and calleeICE
The offer created by A is stored in the offer column with the help of an AJAX call.
(Remember to JSON.stringify the JS object before "POSTing" it to the server.)
User B then reads this offer column written by caller A, again with the help of an AJAX call.
In this way, the offer object created at user A can arrive at user B.
Now user B responds to the offer by calling the createAnswer method. This method returns an answer object, which can again be stored in the "answer" column of the database.
Then caller A polls this "answer" column written by callee B.
In this way, the answer object created by B can arrive at A.
To store the iceCandidate objects from caller A, use the "callerICE" column of the MySQL table. Note that callee B reads "callerICE" to learn caller A's candidates (and "calleeICE" works the same way in the other direction).
In this way we can transfer the iceCandidate objects describing the future peer.
After the iceCandidate objects have been transferred, the connectionState property changes to "connected", indicating the two peers are connected.
If you have any questions, let me know!
Cheers! You can now share the local media stream with the remote peer.
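A rough sketch of caller A's side of that flow, assuming a hypothetical signaling.php endpoint that reads and writes the columns described above; the parameter names and the polling interval are illustrative only:

    async function startCall(pc, caller, callee) {
      // Trickle candidates into the callerICE column as they appear.
      pc.onicecandidate = ({ candidate }) => {
        if (candidate) {
          fetch("signaling.php", {
            method: "POST",
            body: new URLSearchParams({ action: "callerICE", caller, candidate: JSON.stringify(candidate) })
          });
        }
      };

      // Create the offer and store it in the "offer" column.
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      await fetch("signaling.php", {
        method: "POST",
        body: new URLSearchParams({ action: "offer", caller, callee, offer: JSON.stringify(offer) })
      });

      // Poll the "answer" column until callee B has written it.
      const timer = setInterval(async () => {
        const res = await fetch(`signaling.php?action=answer&caller=${caller}`);
        const answer = await res.json();
        if (answer && answer.type) {
          await pc.setRemoteDescription(answer);
          clearInterval(timer);
        }
      }, 1000);
    }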

WCF two way HTTP communication to bypass firewalls

I want to use WCF to enable two-way communication without opening a port on the client.
I'm developing something like a P2P application (similar to TeamViewer/LogMeIn) where you don't need to open ports to communicate.
How do I accomplish two-way communication over HTTP/HTTPS without the need to open a port on the client?
Note: port 80 can be opened on the server; no issues on that side.
Thanks
Well, the systems you mention work as follows. They first try to make client A and client B communicate directly via a range of different topologies, which basically require one of them to allow incoming connections. If that fails, they fall back on a third party which acts as a man in the middle: client A talks to the server and sends it messages for client B, then gets the messages addressed to it back in response; client B sends its messages to the server and gets the messages from client A back from the server. This way both client A and client B always initiate the connection and never need a port open for incoming traffic.
If I understand correctly, in your case you would always want the man in the middle. To do this you would have to write a WCF service that provides all the relevant methods, for instance things like
void SendMessageToClient(Guid senderId, Guid recipientId, Message msg)
Message[] GetMessages(Guid recipientId)
then have those methods respectively store and retrieve the Message objects from somewhere (such as a database or a queue).
Then write a client that connects to the WCF service using the HTTP binding, calls the methods on the server and processes the results.
I hope you understand that
a) this isn't a very efficient way to communicate;
b) it's difficult to test, debug and understand what's going on, since there are so many parties involved and the communication is asynchronous, living in three different processes;
c) it adds an extra layer on top of the communication, so you need to keep it clear for yourself in your head (and preferably in code) when you are dealing with the infrastructure bits and when you are dealing with the actual protocol that client A and client B speak to each other in the Message objects.
Pseudo(code) example
In this example I assume the message object is nothing more than a string and the only command is "whattimeisit", to which the response is the local time in string form:
Client A makes a call to server.SendMessageToClient("clientA", "clientB", "whattimeisit");
The server stores this message in the database with ID 1
Client B makes a call to server.GetMessages("clientB");
The server retrieves the message with ID 1
Client B receives back "whattimeisit" as a response
Client B makes a call to server.SendMessageToClient("clientB", "clientA", "19:50:12");
The server stores this message in the database with ID 2
Client A makes a call to server.GetMessages("clientA");
The server retrieves the message with ID 2
Client A receives back "19:50:12" as a response
I'm not sure I understand. The purpose of digital firewalls is (generally) to control communication channels. If you want to communicate bypassing firewalls you have two choices:
Hide the message in something the firewall lets through
Use a communication channel the firewall doesn't control
In the case of the former:
You could pass messages to a proxy that passes them on (email is a good, but not exactly responsive, example).
In the case of the latter:
You could put the messages in, say, a file that some other transport layer carries.