I'm looking to write a toy application for my own personal use (and possibly to share with friends) for peer-to-peer shared status on a local network. For instance, let's say I wanted to implement it for the name of the current building you're in (let's pretend the network topology is weird, and multiple buildings occupy the same LAN). The idea is if you run the application, you can set what building you're in, and you can see the buildings of every other user running the application on the local network.
The question is, what's the best transport/network layer technology to use to implement this?
My initial inclination was to use UDP Multicast, but the more research I do about it, the more I'm scared off by it: while the technology is great and seems easy to use, if the application is not tailored for a particular site deployment, it also seems most likely to get you a visit from an angry network admin.
I'm wondering, therefore, since this is a relatively low bandwidth application — probably max one update every 4–5 minutes or so from each client, with likely no more than 25–50 clients — whether it might be "cheaper" in many ways to use another strategy:
Multicast: find a way to pick a well-known multicast address from 239.255/16 and have interested applications join the group when they start up.
Broadcast: send out a single UDP Broadcast message every time someone's status changes (and one "refresh" broadcast when the app launches, after which every client replies directly to the requesting user with their current status).
Unicast: send a UDP Broadcast at application start to announce interest, and when a client's status changes, it sends a UDP packet directly to every client who has announced. This results in the highest traffic, but might be less likely to annoy other systems with needless broadcast packets. It also introduces potential complications when apps crash (in terms of generating unnecessary traffic).
Multicast is most certainly the best technology for the job, but I'm wondering if the associated hassles are worth avoiding since this is just a "toy application," not a business-critical service intended for professional network admin deployment and configuration.
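For what it's worth, the multicast option is only a few lines of socket code. Here's a minimal Python sketch; the group address and port are arbitrary picks from 239.255/16 for illustration, not registered values:

```python
import json
import socket
import struct

# Arbitrary (hypothetical) picks from the administratively scoped range.
MCAST_GROUP = "239.255.42.99"
MCAST_PORT = 54545

def open_multicast_socket():
    """Join the group so we receive every peer's status updates."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4s4s",
                       socket.inet_aton(MCAST_GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    # TTL of 1 keeps packets on the local network.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    return sock

def send_status(sock, user, building):
    """Announce a status change to everyone in the group."""
    payload = json.dumps({"user": user, "building": building}).encode()
    sock.sendto(payload, (MCAST_GROUP, MCAST_PORT))

def receive_status(sock):
    """Block until some peer announces a status."""
    data, addr = sock.recvfrom(4096)
    return json.loads(data), addr
```

The TTL of 1 also limits the angry-admin exposure: the packets never leave the local network.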
Related
I am new to WebRTC development, and I would like to create a conference room with multiple connections. I have read about WebRTC and peer communication. I would like to know whether Google Meet uses one peer connection between every pair of participants. If so, how can they handle 250 participants without draining the browser's resources?
For group calls, Google Meet does not use peer-to-peer; if it did, it would very quickly drain one's bandwidth and browser resources.
Although I am not exactly sure about Meet's infrastructure, they most likely use a Selective Forwarding Unit (SFU) to be able to handle lots of connections, since it's common practice for this sort of application.
The SFU acts as a WebRTC client on the server side, and it basically just forwards each incoming stream to the browser-side clients that request it.
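The scaling difference is easy to see with a little arithmetic: in a full mesh every participant maintains a connection (and an outgoing stream) to every other participant, while with an SFU each participant holds a single connection to the server. A quick sketch:

```python
def mesh_connections(n):
    # Full mesh: one peer connection per pair of participants.
    return n * (n - 1) // 2

def sfu_connections(n):
    # SFU: one connection per participant, all terminating at the server.
    return n

n = 250
print(mesh_connections(n))  # 31125 peer connections across the call
print(sfu_connections(n))   # 250 connections, one per participant
```

On top of that, each browser in a mesh would have to encode and upload its media 249 times, versus once with an SFU, which is why a mesh stops being viable long before 250 participants.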
I'm trying to get 2 XMPP clients to communicate, but this is not like messaging or chatting. It's more like an event caused at one end and an action performed at the other (realtime). I would like there to be no latency when Client A sends packets to Client B. If some latency is unavoidable, is there any way to minimize it so that it goes unnoticed? Is this possible, or can it be done by some other means?
First of all, that is still messaging.
As for your latency, there will always be some latency when sending data between processes. You haven't said what tolerance levels you are looking for as opposed to what you are getting, so it is hard to say what you should do to improve them.
The biggest factors in any latency you currently have will be message size and network speed. Of course, direct point-to-point communication would remove one hop for your message, but without knowing your application there is no way of saying whether this is an acceptable direction.
A small message should be delivered in a few milliseconds on a fast network. If it is a slow network, then your problems lie outside of any communications protocol.
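Before tuning anything, it's worth measuring what you actually have. A minimal round-trip probe over plain UDP (no XMPP involved, so it gives a lower bound for your network; host and port are placeholders):

```python
import socket
import time

def echo_server(port=9999):
    """Run this on Client B's machine: bounce every packet straight back."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, addr = sock.recvfrom(1024)
        sock.sendto(data, addr)

def measure_rtt(host, port=9999, count=100):
    """Run this on Client A's machine: average round trip in milliseconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    total = 0.0
    for _ in range(count):
        start = time.perf_counter()
        sock.sendto(b"ping", (host, port))
        sock.recvfrom(1024)
        total += time.perf_counter() - start
    return total / count * 1000.0
```

If that number is already within your tolerance, the remaining latency lives in your XMPP server and message sizes, not the network.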
I have a question about the Skype protocol.
Supposedly, according to wiki, the supernodes in Skype are used in UDP hole punching. The supernodes are nodes without firewalls/NATs.
My question is, how is this reliable? Isn't the vast majority of internet users behind NAT?
And, if I were to create a P2P application using this technique, what happens if there are no peers without firewalls? I don't understand how you can launch an application that relies on there eventually being some peers without NAT.
Thanks
I can't comment on Skype specifically, but I have some experience with this (http://wiki.squeak.org/squeak/5629). We called our supernodes "big friendly giants" or BFGs :).
The idea behind supernodes is that while you hope they pop up in the network, giving new users more options for NAT hole punching, you also provide, as the P2P network operator, a minimal set yourself (this could be just one or two machines; they are only needed for the initial hole punching, as real traffic gets re-routed directly anyway). As far as I'm aware, Skype does that as well - they run a minimal set of supernodes themselves.
When Skype had issues earlier this year, a lot of people tried to reconnect and thus the supernodes got overloaded, resulting in a domino effect. Skype added supernodes, but the amount of people trying to reconnect at that time was so massive that it took quite a while before the network rebuilt itself. It's quite funny - we had that with the above project as well - that a P2P network can be extremely resilient until it gets pushed over some edge and the whole thing crumbles.
[disclaimer: I work for eBay, former owner of Skype, but this is all my personal opinion and based on public information]
Read the papers on libjingle, which discuss services like STUN. When both parties are behind NAT, an external service is often required to relay traffic through, or to assist in punching a hole open on one side or the other.
http://code.google.com/apis/talk/libjingle/important_concepts.html
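To make the mechanics concrete, here's a rough Python sketch of the hole-punching handshake. The rendezvous hostname and the wire format are invented for illustration; real systems use STUN/ICE for this:

```python
import socket

# Hypothetical rendezvous server (a "supernode"): it must sit on a
# public address, which is why operators run a minimal set themselves.
RENDEZVOUS = ("supernode.example.com", 3478)

def punch(my_name):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # 1. Register: the server learns our public (NAT-mapped) endpoint
    #    simply from the source address of this packet.
    sock.sendto(b"register " + my_name.encode(), RENDEZVOUS)
    # 2. The server replies with the other peer's public endpoint
    #    (made-up "host:port" reply format).
    data, _ = sock.recvfrom(1024)
    host, port = data.decode().split(":")
    peer = (host, int(port))
    # 3. Both sides now send to each other's public endpoint. Each
    #    outgoing packet opens a mapping in the sender's own NAT; once
    #    both mappings exist, traffic flows directly, peer to peer.
    for _ in range(5):
        sock.sendto(b"punch", peer)
    return sock, peer
```

This only works when the NATs reuse the mapping created by outgoing packets; with stricter (symmetric) NATs on both sides, you fall back to relaying all traffic through a server, which is what TURN does.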
On my team at work, we use the IBM MQ technology a lot for cross-application communication. Lately I've been seeing posts on Hacker News and elsewhere about other MQ technologies like RabbitMQ. I have a basic understanding of what an MQ is (a commonly checked place to put and get messages), but what I want to know is: what exactly is it good at? How will I know where and when I want to use it? Why not just stick with more rudimentary forms of interprocess messaging?
All the explanations so far are accurate and to the point - but they might be missing something: one of the main benefits of message queuing is resilience.
Imagine this: you need to communicate with two or three other systems. A common approach these days is web services, which is fine if you need an answer right away.
However: web services can be down and not available - what do you do then? Putting your message into a message queue (which has a component on your machine/server, too) typically will work in this scenario - your message just doesn't get delivered and thus processed right now - but it will later on, when the other side of the service comes back online.
So in many cases, using message queues to connect disparate systems is a more reliable, more robust way of sending messages back and forth. It doesn't work well for everything (if you want to know the current stock price for MSFT, putting that request into a queue might not be the best of ideas) - but in lots of cases, like putting an order into your supplier's message queue, it works really well and can help ease some of the reliability issues with other technologies.
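As a concrete illustration of that store-and-forward behavior, here is a sketch using the Python pika client, assuming a RabbitMQ broker on localhost (queue name and payload are invented for the example):

```python
import pika

# Connect to a broker assumed to be running locally.
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()

# A durable queue survives a broker restart.
channel.queue_declare(queue="supplier_orders", durable=True)

# delivery_mode=2 marks the message itself as persistent, so an order
# placed while the consumer is down waits safely until it comes back.
channel.basic_publish(
    exchange="",
    routing_key="supplier_orders",
    body=b'{"sku": "ABC-123", "qty": 10}',
    properties=pika.BasicProperties(delivery_mode=2),
)
conn.close()
```

If the supplier's system is offline when this runs, nothing fails: the broker holds the message and delivers it once a consumer reappears.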
MQ stands for messaging queue.
It's an abstraction layer that allows multiple processes (likely on different machines) to communicate via various models (e.g., point-to-point, publish/subscribe, etc.). Depending on the implementation, it can be configured for things like guaranteed reliability, error reporting, security, discovery, performance, etc.
You can do all this manually with sockets, but it's very difficult.
For example: suppose you want two processes to communicate, but one of them can die in the middle and later get reconnected. How would you ensure that interim messages were not lost? MQ solutions can do that for you.
Message queuing systems are supposed to give you several bonuses. Among the most important ones are monitoring and transactional behavior.
Transactional design is important if you want to be immune to failures, such as power failure. Imagine that you want to notify a bank system of an ATM money withdrawal, and it has to be done exactly once per request, no matter what servers failed temporarily in the middle. MQ systems allow you to coordinate transactions across multiple databases, MQ systems, and other systems.
Needless to say, such systems are very slow compared to named pipes, TCP, or other non-transactional tools. If high performance is required, you would not allow your messages to be written through to disk - but achieving communication that is both reliable AND fast complicates your design and pushes the designer into really non-trivial tricks.
MQ systems normally allow users to watch the queue contents, write plugins, clear queues, etc.
MQ simply stands for Message Queue.
You would use one when you need to reliably send an inter-process/cross-platform/cross-application message that isn't time-dependent.
The Message Queue receives the message, places it in the proper queue, and waits for the application to retrieve the message when ready.
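The retrieving side of that, sketched with the same assumed pika/RabbitMQ setup as in the earlier example:

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.queue_declare(queue="supplier_orders", durable=True)

def handle(ch, method, properties, body):
    print("processing", body)
    # Explicit ack: the broker keeps the message until we confirm it,
    # so a crash mid-processing means redelivery, not loss.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="supplier_orders", on_message_callback=handle)
channel.start_consuming()  # blocks, dispatching messages as they arrive
```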
reference: web services can be down and not available - what do you do then?
As an extension to that: what if your local network and your local PC are down as well? While you wait for the system to recover, the dependent systems deployed elsewhere that are waiting for that data need to see an alternative data stream.
Otherwise, that might not be a good enough 'real time' response for today's - and, very soon, tomorrow's - Internet of Things (IoT) requirements.
If you want truly parallel, non-volatile storage of various FIFO streams (at least at some point along the signal chain), use an FPGA and FRAM memory. FRAM runs at clock speed, and FPGA devices can be reprogrammed on the fly, adding and removing however many independent parallel data streams are needed (within established constraints, of course).
I'm trying to create an embedded outdoor display of bus arrival times at my university. I'd like the device to utilize my school's secured WiFi network to show arrival time updates determined from a server script I have running.
I was hoping to get some advice on the high-level operation of this thing -- would it be better for the display board to poll a hosted database via the WiFi network or should I have a script try to communicate with the board directly over 802.11? (Push or Pull?)
I was planning to use a Wifly or WIZnet ethernet board in combination with a wireless access hub. Mostly inspired by this project: http://www.circuitcellar.com/Wiznet/winners/001166.html Would anyone recommend something else over one of the WIZnet boards? I saw SPI/UART options and thought these boards could work with an AVR platform.
And out of curiosity -- if you were to 'cold start' this device (i.e., request a bus arrival time by pushing the display's on button), you might expect it to take 10-20 seconds to get assigned an IP and successfully connect to the database -- does that sound right?
I'd go pull. In fact, I'd have the outdoor display make HTTP or HTTPS requests of the server. That way the server can tell it how long to show a given set of data before polling for a new one, using standard HTTP page expiration.
I think pull would make it easier to have multiple displays, and to test your server as well. I've also got a gut feeling that this would make your display more secure. Someone would have to hack your server to hijack your display.
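A pull loop like that is only a few lines. Here's a sketch of the idea in Python (the URL and the display hook are placeholders), with the server driving the refresh interval through a standard Cache-Control header:

```python
import time
import urllib.request

# Hypothetical endpoint served by your arrival-time script.
URL = "https://transit.example.edu/arrivals?stop=12"

def update_display(text):
    # Stand-in for driving the actual display hardware.
    print(text)

while True:
    with urllib.request.urlopen(URL) as resp:
        body = resp.read().decode()
        # The server controls the refresh rate via standard HTTP caching;
        # fall back to 30 seconds if no max-age header is sent.
        max_age = 30
        for part in resp.headers.get("Cache-Control", "").split(","):
            if part.strip().startswith("max-age="):
                max_age = int(part.strip().split("=", 1)[1])
    update_display(body)
    time.sleep(max_age)
```

On the embedded board you'd do the same thing in C against the WIZnet/WiFly module, but the shape of the protocol is identical.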
There's a very cool-looking Arduino-targeted product called the WiShield. It seems super easy to use, and he provides some source code. It uses SPI for host communication. If you're not interested in going the Arduino route, I'm sure the source code wouldn't be too hard to port to something like avr-gcc. Check it out; it might save you some time and headaches for $55. Worth checking out anyway.