Skype protocol and supernodes - UDP

I have a question about the skype protocol.
Supposedly, according to Wikipedia, the supernodes in Skype are used for UDP hole punching. The supernodes are nodes that are not behind firewalls/NATs.
My question is, how is this reliable? Isn't the vast majority of internet users behind NAT?
And if I were to create a P2P application using this technique, what happens if there are no peers without firewalls? I don't understand how you can launch an application that relies on there eventually being some peers without NAT.
Thanks

I can't comment on Skype specifically, but I have some experience with this (http://wiki.squeak.org/squeak/5629). We called our supernodes "big friendly giants" or BFGs :).
The idea behind supernodes is that while you hope they pop up in the network, giving new users more options for NAT hole punching, you as the P2P network operator provide a minimal set yourself (it could be just one or two machines; they are only needed for the initial hole punching, since real traffic gets re-routed directly anyway). As far as I'm aware, Skype does this as well - they run a minimal set of supernodes themselves.
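To make the rendezvous role concrete, here is a minimal Python sketch of how a supernode-style rendezvous and two NATed peers could interact. The hostnames, ports, and one-line message format are made up for illustration; this is not Skype's actual protocol, just the general hole-punching idea (real systems add retries, timeouts, and keep-alives).

    import socket

    def rendezvous_server(port=7000):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        waiting = None
        while True:
            _, addr = sock.recvfrom(64)      # addr is the peer's public ip:port as its NAT mapped it
            if waiting is None:
                waiting = addr
            else:
                # Introduce the two peers to each other, then step aside:
                # the supernode carries no media traffic after this point.
                sock.sendto("{}:{}".format(addr[0], addr[1]).encode(), waiting)
                sock.sendto("{}:{}".format(waiting[0], waiting[1]).encode(), addr)
                waiting = None

    def peer(server=("supernode.example.org", 7000)):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(b"register", server)     # this outbound packet creates our NAT mapping
        data, _ = sock.recvfrom(64)
        host, port = data.decode().rsplit(":", 1)
        other = (host, int(port))
        for _ in range(5):                   # both sides punch; the NATs now accept each other's packets
            sock.sendto(b"punch", other)
        msg, src = sock.recvfrom(1024)       # from here on, traffic flows peer to peer
        print("got", msg, "from", src)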
When Skype had issues earlier this year, a lot of people tried to reconnect and thus the supernodes got overloaded, resulting in a domino effect. Skype added supernodes, but the number of people trying to reconnect at that time was so massive that it took quite a while before the network rebuilt itself. It's quite funny - we saw the same with the above project - that a P2P network can be extremely resilient until it gets pushed over some edge and the whole thing crumbles.
[disclaimer: I work for eBay, former owner of Skype, but this is all my personal opinion and based on public information]

Read the papers on libjingle, which discuss services like STUN. When both parties are behind NAT, an external service is often required to relay traffic or to assist in punching a hole open on one side or the other.
http://code.google.com/apis/talk/libjingle/important_concepts.html
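For a flavour of what a STUN exchange looks like on the wire, here is a small Python sketch of an RFC 5389 Binding Request that asks a STUN server for your public ip:port (the address your NAT mapped you to). The server address is just a commonly cited public one; substitute your own, and note that real clients use a library rather than hand-rolling this.

    # Minimal STUN Binding Request and response parser (RFC 5389), for illustration only.
    import os
    import socket
    import struct

    MAGIC_COOKIE = 0x2112A442
    STUN_SERVER = ("stun.l.google.com", 19302)   # a commonly used public STUN server

    def public_endpoint():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(3)
        # 20-byte header: type=Binding Request, length=0, magic cookie, 96-bit transaction ID
        request = struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + os.urandom(12)
        sock.sendto(request, STUN_SERVER)
        data, _ = sock.recvfrom(2048)

        pos = 20                                  # walk the attributes after the header
        while pos + 4 <= len(data):
            attr_type, attr_len = struct.unpack_from("!HH", data, pos)
            if attr_type == 0x0020:               # XOR-MAPPED-ADDRESS (IPv4 assumed here)
                _, family, xport = struct.unpack_from("!BBH", data, pos + 4)
                xaddr = struct.unpack_from("!I", data, pos + 8)[0]
                port = xport ^ (MAGIC_COOKIE >> 16)
                addr = socket.inet_ntoa(struct.pack("!I", xaddr ^ MAGIC_COOKIE))
                return addr, port
            pos += 4 + attr_len + ((4 - attr_len % 4) % 4)   # attributes are 32-bit aligned

    print(public_endpoint())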

Related

How does signaling in WebRTC work in the following scenarios?

I read about this online and came to know that signaling is used for initiating the communication, but I am very confused about how I would get the ip:port combination for the other peer.
I thought of three scenarios:
1. On the same network (LAN)
2. On the internet (but not behind a proxy/NAT)
3. On the internet, but behind a proxy/NAT
For 1, I get that we can obtain the ip:port from the signaling process itself (if I am not wrong).
For 2 and 3, how would we get the ip:port, and how would STUN/TURN servers and ICE play their roles in these cases? I know what they are used for, but I can't figure out how each of them fits into these scenarios.
Update: The question posted in the comments complements what I assumed, namely that it should work in this way. What I don't know is how TURN would do the update: would it establish the communication using SDP, like it did for STUN? And is it absolutely necessary to deploy TURN in the cloud for it to work?

UDP hole punching for an existing application

I'm attempting to use YAWCam on my university network. I am using it as an MJPG streamer that another application behind another network needs to access. Unfortunately, there is no way to port forward on my university network. Enter UDP hole punching. I thought this was fantastic when I learned of it, but quickly realized that, unless I could figure out how to actually modify this program (which is not open source), I would not be able to make UDP hole punching work conventionally.
My question is: is there a way to hole punch without changing the original program? Possibly by sending packets from the same port YAWCam uses to punch the hole, and then letting regular requests refresh it? I'm a bit new to network code, so I'm not entirely sure what the "correct" method is for this.
No, it is not possible. Two different programs can't bind to the same port. The purpose of port numbers is to identify which running application instance to route the traffic to; if two applications were each to bind a socket to the same port number, that routing would become impossible.
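If you want to see this for yourself, a tiny experiment shows it (Python used purely for illustration; by default, without SO_REUSEADDR/SO_REUSEPORT, the second bind fails):

    import socket

    first = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    first.bind(("0.0.0.0", 5004))            # stands in for YAWCam already owning the port

    second = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        second.bind(("0.0.0.0", 5004))       # your "helper" program trying to use the same port
    except OSError as exc:
        print("second bind failed:", exc)    # e.g. "Address already in use"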

Does WebRTC allow one-to-many (multicast) connections?

I've read a lot about WebRTC, but there's one question that still remains. I hope you can help me with that:
Does WebRTC allow me to create a one-to-many connection? I don't mean "being able to have multiple connections to different computers", I really mean having one connection that multicasts its data to multiple endpoints without the need to "upload" the data once for each endpoint. Will it be possible to send one single packet to the web that, when it reaches the web, magically splits itself into multiple packets with different targets?
I hope you get what I'm looking for :)
Until now, I've only seen one-to-one connections, or solutions that have one connection to a central server that does the multicast for them (which usually results in twice the ping).
But to me, one-to-one connections don't seem to be really useful (due to the low upload bandwidth of clients), and solutions with a central server are also possible without WebRTC (using WebSockets), so the only real use case for WebRTC would be one-to-many connections.
So.. is this something that will be possible in the future? Or is it already possible today?
Three things:
IP multicast in the Internet is not possible at the moment (multicast addresses are not routed by ISPs)
WebRTC fits many use cases beyond one-to-many communication, just have a look at this document: https://datatracker.ietf.org/doc/html/draft-ietf-rtcweb-use-cases-and-requirements-06
WebRTC connections between browsers are always encrypted (using SRTP for A/V data and DTLS for generic data) and the encryption parameters (session keys etc.) are negotiated for every connection separately. How would you do that in a multicast environment (think of it as a distribution tree)?
So no, WebRTC cannot be used with IP multicast.
I would answer "not for now", because as a programmer I can tell you that there are a number of ways for browser developers to make it work, if we (users) insist on its importance. But how? Since there's encryption, they could allow sharing of the session's encryption keys with the group of "registered" (multicast) users. But how? Well, the web was created for sharing. The most obvious way is through web server mediation and a JS WebRTC API function (to load the user keys). Since multicast is most often used for efficient video distribution, you have an RTP/SRTP video server; the web server can coexist on the same machine. If they decide to extend this to web browsers, then the "server" role can be played by the web browser that created the multicast stream (the sender). The clients just need to know who that is.
Again: as of December 2013, this is still not possible. And multicast is allowed on the Internet only in:
some experimental WAN nets
some internet+video ISP nets
LANs (when enabled at the switch level; cheap switches just transmit it to all ports). But you might well be an ISP, a researcher, or a LAN user, so it can still matter to you.

UDP Broadcast, Multicast, or Unicast for a "Toy Application"

I'm looking to write a toy application for my own personal use (and possibly to share with friends) for peer-to-peer shared status on a local network. For instance, let's say I wanted to implement it for the name of the current building you're in (let's pretend the network topology is weird, and multiple buildings occupy the same LAN). The idea is if you run the application, you can set what building you're in, and you can see the buildings of every other user running the application on the local network.
The question is, what's the best transport/network layer technology to use to implement this?
My initial inclination was to use UDP Multicast, but the more research I do about it, the more I'm scared off by it: while the technology is great and seems easy to use, if the application is not tailored for a particular site deployment, it also seems most likely to get you a visit from an angry network admin.
I'm wondering, therefore, since this is a relatively low bandwidth application — probably max one update every 4–5 minutes or so from each client, with likely no more than 25–50 clients — whether it might be "cheaper" in many ways to use another strategy:
Multicast: find a way to pick a well-known multicast address from 239.255/16 and have interested applications join the group when they start up (see the sketch after this question).
Broadcast: send out a single UDP Broadcast message every time someone's status changes (and one "refresh" broadcast when the app launches, after which every client replies directly to the requesting user with their current status).
Unicast: send a UDP Broadcast at application start to announce interest, and when a client's status changes, it sends a UDP packet directly to every client who has announced. This results in the highest traffic, but might be less likely to annoy other systems with needless broadcast packets. It also introduces potential complications when apps crash (in terms of generating unnecessary traffic).
Multicast is most certainly the best technology for the job, but I'm wondering if the associated hassles are worth avoiding since this is just a "toy application," not a business-critical service intended for professional network admin deployment and configuration.
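For reference, a minimal Python sketch of the multicast option from the list above, with a made-up group address from 239.255/16, a made-up port, and the building name sent as a plain-text payload:

    import socket
    import struct

    GROUP, PORT = "239.255.42.99", 5007       # illustrative group/port, not a registered assignment

    def announce(building):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)   # keep it on the local net
        sock.sendto(building.encode(), (GROUP, PORT))

    def listen():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, addr = sock.recvfrom(1024)
            print(addr[0], "is in", data.decode())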

WiFi communication to embedded display

I'm trying to create an embedded outdoor display of bus arrival times at my university. I'd like the device to utilize my school's secured WiFi network to show arrival time updates determined from a server script I have running.
I was hoping to get some advice on the high-level operation of this thing -- would it be better for the display board to poll a hosted database via the WiFi network or should I have a script try to communicate with the board directly over 802.11? (Push or Pull?)
I was planning to use a WiFly or WIZnet ethernet board in combination with a wireless access hub. Mostly inspired by this project: http://www.circuitcellar.com/Wiznet/winners/001166.html Would anyone recommend something else over one of the WIZnet boards? I saw SPI/UART options and thought these boards could work with an AVR platform.
And out of curiosity -- if you were to 'cold start' this device (i.e., request a bus arrival time by pushing the display's on button), you might expect it to take 10-20 seconds to get assigned an IP and successfully connect to the database. Does that sound right?
I'd go with pull. In fact, I'd have the outdoor display make HTTP or HTTPS requests to the server. That way the server could tell it how long to show a given set of data before polling for a new one, using standard HTTP page expiration.
I think pull would make it easier to have multiple displays, and to test your server as well. I've also got a gut feeling that this would make your display more secure. Someone would have to hack your server to hijack your display.
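The polling loop described above is simple enough to sketch. Here it is in Python with a made-up URL, reading Cache-Control: max-age to decide how long to display the data before asking again (the embedded board would do the equivalent in its own firmware):

    import time
    import urllib.request

    URL = "http://example.edu/bus-times"       # hypothetical endpoint served by your script

    def poll_forever():
        while True:
            with urllib.request.urlopen(URL) as resp:
                body = resp.read().decode()
                max_age = 60                    # fall back to one minute if the header is missing
                for part in resp.headers.get("Cache-Control", "").split(","):
                    part = part.strip()
                    if part.startswith("max-age="):
                        max_age = int(part.split("=", 1)[1])
            print("showing:", body)             # update the display here
            time.sleep(max_age)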
There's a very cool-looking Arduino-targeted product called the WiShield. It seems super easy to use, and he provides some source code. It uses SPI for host communication. If you're not interested in going the Arduino route, I'm sure the source code wouldn't be too hard to port to something like avr-gcc. Check it out; it might save you some time and headaches for $55.