Is it possible to retrieve the STUN server used once the RTCPeerConnection is connected - webrtc

Not sure the title makes a lot of sense. To add some context: we are building a WebRTC infrastructure, and to do so we have a few STUN servers up and running.
We sometimes have users complaining about calls taking too long to connect, so we would like to gather some analytics on the calls. Because we provide a list of STUN IPs (including some public STUN servers as backup), we would like to detect which STUN server successfully initiated the call.
We have collected a bunch of information thanks to RTCPeerConnection.getStats, but there is nothing related to the STUN server itself. So here are my questions:
Is there any JS API that allows us to retrieve the STUN server used?
Is there any tool that I am not aware of that could do the job?
Does the SDP contain any information related to STUN?
Hope all of this is clear, thanks for your kind replies.

The statistics do contain a server url:
https://w3c.github.io/webrtc-stats/#dom-rtcicecandidatestats-url
However, that is not implemented, and since STUN servers are not involved in the actual call, that information is unlikely to be useful.
For TURN servers you can get the active candidate pair and the IP of any relay involved from getStats. See https://webrtc.github.io/samples/src/content/peerconnection/constraints/ for a sample that shows how to determine the active candidate pair.
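Not the linked sample itself, but a minimal sketch of how the nominated candidate pair can be pulled out of getStats(); property names such as address/ip and the url field vary between browsers and spec revisions, so treat the access as best-effort.

```javascript
// Minimal sketch (assumes a connected RTCPeerConnection `pc`): find the nominated
// candidate pair and log its candidates. `url` is only populated in browsers that
// implement RTCIceCandidateStats.url; `address` falls back to the older `ip` field.
async function logActiveCandidatePair(pc) {
  const report = await pc.getStats();
  report.forEach(stat => {
    if (stat.type === 'candidate-pair' && stat.nominated && stat.state === 'succeeded') {
      const local = report.get(stat.localCandidateId);
      const remote = report.get(stat.remoteCandidateId);
      // candidateType === 'relay' means a TURN server is in the media path.
      console.log('local:', local.candidateType, local.address || local.ip, local.url);
      console.log('remote:', remote.candidateType, remote.address || remote.ip);
    }
  });
}
```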

Related

webRTC to setup signaling server

How do I set up a signaling server for WebRTC when the systems are connected in a Local Area Network? Is it mandatory to use STUN and TURN servers for signaling?
To make WebRTC run on a LAN, you will need a signaling server in that LAN. A signaling server is any web server that allows your web clients to exchange the SDP offer/answer and the ICE candidates generated by the WebRTC PeerConnection. This can be done using AJAX or WebSockets.
I have listed some top sources for information about WebRTC. Please go through some of the links on that page to better understand how WebRTC signaling works.
You will not require a STUN/TURN server, as your WebRTC clients (i.e. web browsers) will be on the LAN and reachable from each other. FYI, STUN/TURN servers are not part of the signaling but part of the media leg, and are usually required for NAT traversal of media.
WebRTC needs some kind of signalling system for the initial negotiation: transferring the SDP, ICE candidates, sending and receiving offers, etc. The rest is done over the peer-to-peer connection. For the initial signalling you can use any technique, such as AJAX calls or socket.io.
STUN and TURN servers are required for NAT traversal, which matters because it determines the path between peers. You can use Google's public STUN server address stun:stun.l.google.com:19302, or you can configure your own TURN server using the rfc5766-turn-server project.
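For reference, a minimal sketch of how those servers are handed to the browser; the TURN entry and its credentials are placeholders for your own deployment, not values from the answer above.

```javascript
// Sketch: ICE server configuration. The TURN URL and credentials are placeholders.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    { urls: 'turn:turn.example.com:3478', username: 'user', credential: 'secret' }
  ]
});
// On a pure LAN (as in the question) you can omit iceServers entirely:
// host candidates gathered from the local interfaces are usually sufficient.
```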
Making a signalling server for WebRTC is quite easy.
I used PHP, MySQL and AJAX to maintain the signalling data.
Suppose A wants to call B.
Then A creates an offer using the createOffer method. This method returns an offer object.
You have to transfer this offer object to user B; this is part of the signalling process.
Now create a MySQL table with the columns:
caller, callee, offer, answer, callerICE and calleeICE
Now the offer created by A is stored in the offer column with the help of an AJAX call.
(Remember to JSON.stringify the JS object before POSTing it to the server.)
Now user B polls this offer column filled by caller A, again with the help of an AJAX call.
In this way, the offer object created at user A can arrive at user B.
Now user B responds to the offer by calling the createAnswer method. This method returns an answer object, which can again be stored in the "answer" column of the database.
Then caller A polls this "answer" column filled by callee B.
In this way, the answer object created by B can arrive at A.
To store the iceCandidate objects representing caller A, use the "callerICE" column of the MySQL table. Note that callee B polls "callerICE" to learn the candidates of caller A (and vice versa with "calleeICE").
In this way we can exchange the iceCandidate objects between the two future peers.
After the iceCandidate objects have been transferred, the connectionState property changes to "connected", indicating the two peers are connected.
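A caller-side sketch of the flow just described, assuming a hypothetical /signaling.php endpoint in front of the MySQL table; the answer itself uses plain AJAX, fetch is used here only for brevity.

```javascript
// Caller (A) side of the flow above. /signaling.php?field=... is a hypothetical
// endpoint that reads or writes the corresponding MySQL column.
const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });

// Every local ICE candidate goes into the callerICE column.
pc.onicecandidate = (event) => {
  if (event.candidate) {
    fetch('/signaling.php?field=callerICE', { method: 'POST', body: JSON.stringify(event.candidate) });
  }
};

// Poll a column until callee B has written something into it.
async function poll(field) {
  for (;;) {
    const text = await (await fetch(`/signaling.php?field=${field}`)).text();
    if (text) return JSON.parse(text);
    await new Promise(resolve => setTimeout(resolve, 1000));
  }
}

async function call() {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  await fetch('/signaling.php?field=offer', { method: 'POST', body: JSON.stringify(offer) });

  await pc.setRemoteDescription(await poll('answer'));  // B's answer
  await pc.addIceCandidate(await poll('calleeICE'));    // B's candidate; in practice loop over new rows
}
```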
If you have any questions, let me know!
Cheers! You can now share the local media stream with the remote peer.

How to program pcap with Objective-C and get HTTP request and response values in text format

I am working with pcap in an OS X application to understand packet analysis.
I am working with the app https://github.com/jpiccari/MacAlyzer,
but I am only getting raw data, whereas I want to separate the traffic per domain and read the request and response values in a clear way. Please guide me on how to develop such an application with pcap.
I have tried some code, but it only renders the data in hex format. How do I convert that data into meaningful request and response objects like Charles and Fiddler show?
MacAlyzer wasn't developed for your needs. I know because I'm the author. As already stated, Charles and Fiddler are web proxies and work entirely differently (and serve different purposes).
Diving a bit deeper into your question, communication between client and server happens IP-to-IP and not domain-to-domain. Domain information is not contained in the packets at either the IP or TCP level. Instead, computers perform a domain-to-IP lookup (DNS), cache the result, and carry out the communication using the client and server IP addresses.
MacAlyzer, and really libpcap, don't have sophisticated packet dissection (like say Wireshark) and cannot display packet information as verbosely as other programs. Before I lost interest in the project I was planning a library that would allow much richer packet dissection and analysis, but free time became very limited.
As for adding domain information to MacAlyzer, I'll explain at a high level since it seems you know what you're doing. To show domain information instead of IP addresses in the Source and Destination columns, you could edit the function ip_host_string() in ip.m. This function controls how the client and server addresses are displayed. Modifying it to look up the hostname for the IP address and return the resulting string would cause domains to be displayed instead of IP addresses.
If you come up with some nice updates, consider submitting a pull request.
Here is some food for thought:
http://www.binarytides.com/packet-sniffer-code-c-linux/
Anyway, you will need to use C. Therefore, check the code of the relevant headers, for example:
http://www.eg.bucknell.edu/~cs363/2014-spring/code/tcp.h
Here is the documentation of "pcap":
http://www-01.ibm.com/support/knowledgecenter/#!/ssw_aix_71/com.ibm.aix.basetrf1/pcap_close.htm

How do you handle newcomers efficiently in WebRTC signaling?

Signaling is not addressed by WebRTC (even if we do have JSEP as a starting point), but from what I understand, it works this way:
client tells the server it's available at X
server holds that information and maps it to an identifier
other client comes and sends an identifier to get connection information from the first client
other client uses it to create its own connection information and sends it to the server
server sends this to first client
both client can now talk
This is all nice and well, but what happens if a 3rd client arrives?
You have to redo the whole thing, which supposes the first two clients are STILL connected to the server, waiting for a 3rd client to signal itself, so they can start the exchange process again and get the 3rd client's connection information.
So does it mean you are required to have some sort of permanent link to the server for each client (long polling, WebSocket, etc.)? If yes, is there a way to do that efficiently?
Because I don't see the point of having WebRTC if I have to set up Node.js or Tornado and make it scale to the number of my users. It doesn't sound very P2P-ish to me.
Please tell me I missed something.
Consider a chat system: do you really need to keep a permanent link to the server for each client? Of course, because otherwise you have no way of keeping track of a user's status. This "permanent" link can be implemented in different ways: you mentioned WebSockets and long polling, but simple periodic XHR polling works too (although this will affect the UX, depending on the interval).
So view it like a chat system, except that the media stream is P2P for reduced latency. Once a P2P WebRTC connection is established, the server may die and, of course, the P2P connection will be kept between the two clients. What I mean is: both users could even block your server once the P2P connection is established and still stay connected to each other across the wild Internet.
Understand me well: once the P2P connection is established, your server will not be doing any more WebRTC signalling. The link to the server is only needed to keep track of the statuses.
So it depends on your application. If you want to keep the statuses of users and make them visible to others, then you are in the same situation as a chat system: you need to keep a link, somehow, to make sure their statuses are synced. Otherwise, your server exists only to connect them together and is not needed afterwards. An example of the latter situation: a user goes to a webpage, the webpage provides him with a new room URL, the user shares this URL with another peer by other means, the other peer joins the room, the server connects them together (manages the WebRTC signalling) and then forgets them. They are now connected until one of them breaks the link. Just like this reference app.
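To make the "permanent link" concrete, here is a minimal sketch of such a presence/relay server using Node.js and the ws package (both are assumptions; the answer does not prescribe a stack). It only tracks who is online and forwards signaling messages to a target peer; once a P2P connection is up it has nothing left to do for that pair.

```javascript
// Minimal signaling relay: tracks connected peers and forwards messages by id.
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });
const peers = new Map(); // peerId -> socket (this is the per-client "status")

wss.on('connection', (socket) => {
  const peerId = Math.random().toString(36).slice(2); // hypothetical id scheme
  peers.set(peerId, socket);
  socket.send(JSON.stringify({ type: 'welcome', peerId, online: [...peers.keys()] }));

  socket.on('message', (raw) => {
    // Expected shape: { to: '<peerId>', payload: <offer | answer | ICE candidate> }
    const { to, payload } = JSON.parse(raw);
    const target = peers.get(to);
    if (target) target.send(JSON.stringify({ from: peerId, payload }));
  });

  socket.on('close', () => peers.delete(peerId)); // the peer goes "offline"
});
```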
Instead of a central server keeping one connection per client, a mesh network could also be considered, albeit difficult to implement.

Does WebRTC allow one-to-many (multicast) connections?

I've read a lot about WebRTC, but there's one question that still remains. I hope you can help me with that:
Does WebRTC allow me to create a one-to-many connection? I don't mean "being able to have multiple connections to different computers", I really mean having one connection that multicasts its data to multiple endpoints without the need to "upload" the data once per endpoint. Will it be possible to send one single packet to the web that, when it reaches the web, magically splits itself into multiple packets with different targets?
I hope you get what I'm looking for :)
Until now, I've only seen one-to-one connections, or solutions that have one connection to a central server that does the multicast for them (which usually results in twice the ping).
But to me, one-to-one connections don't seem to be really useful (due to the low upload bandwidth of clients), and solutions with a central server are also possible without WebRTC (using WebSockets), so the only real use case for WebRTC would be one-to-many connections.
So.. is this something that will be possible in the future? Or is it already possible today?
Three things:
IP multicast in the Internet is not possible at the moment (multicast addresses are not routed by ISPs)
WebRTC fits many use cases beyond one-to-many communication, just have a look at this document: https://datatracker.ietf.org/doc/html/draft-ietf-rtcweb-use-cases-and-requirements-06
WebRTC connections between browsers are always encrypted (using SRTP for A/V data and DTLS for generic data) and the encryption parameters (session keys etc.) are negotiated for every connection separately. How would you do that in a multicast environment (think of it as a distribution tree)?
So no, WebRTC cannot be used with IP multicast.
I would answer "it doesn't, for now", because as a programmer I can tell you that there are a number of ways for browser developers to make it work if we (users) insist on its importance. But how? Since there is encryption, they could allow sharing the session's encryption keys with the group of 'registered' (multicast) users. And how would the keys be shared? Well, the Web was created for sharing: the most obvious way is through web-server mediation and a JS WebRTC API function to load the user keys. Since multicast is most often used for efficient video distribution, you would have an RTP/SRTP video server, and the web server can coexist on the same machine. If this were ever extended to web browsers, the "server" role could simply be played by the web browser that created the multicast stream (the sender); the clients would just need to know which peer that is.
Again: as of December 2013, this is still not possible. And multicast is allowed on the Internet only in:
some experimental WAN networks
some Internet+video ISP networks
LANs (when enabled at the switch level; cheap switches transmit it to all ports). But since you might be an ISP, a researcher or a LAN user, it is still worth knowing.

iPhone GameKit: Clients detect other clients

I'm trying to set up a client-server architecture. I have one GKSession configured as a server, and two others as clients.
When either client uses the sendDataToAllPeers:withDataMode:error: method, it sends the data not only to the server but also to the other client.
I guess I could use the display name to exclude clients, so client data only goes to the server, but I'm not quite following why this is happening.
My server explicitly accepts a connection via acceptConnectionFromPeer:error:. But my client isn't accepting anything from anybody; it seems to just silently find the other client.
Should this be happening? I understand that in a peer-to-peer setup you'd want peers to just find each other; but in client-server, this seems a little weird.
Any clarification or advice would be greatly appreciated.
While a client cannot explicitly connect to another client, the sendData:toPeers:withDataMode:error: method should allow you to send data directly from one client to another, given that you have the correct peer ID.