Polling vs handshaking in hardware

Brookshear & Brylow's Computer Science: An Overview (12th ed.) states the following:
a process such as printing a document involves a constant two-way dialogue, known as handshaking, in which the computer and the peripheral device exchange information about the device’s status and coordinate their activities.
I'm more familiar with "handshaking" as the process of establishing a TCP connection, and "polling" as the technique of repeatedly checking the status of a hardware device.
This ScienceDirect summary complicates things further, mentioning two kinds of handshaking - hardware and software - neither of which has the meaning I'm familiar with.
So what is the exact relationship between "handshaking" and "polling"?

Handshaking is a multistage process where devices establish a connection and confirm that they are listening/talking to each other. It is a way to ensure, to a certain degree, that data will go to the right place and will be received.
Polling, as you said, is just repeatedly checking the status of a device. As for the relationship: you need some way to get that status data in the first place, and handshaking is often how the connection used for polling is established.
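To make the distinction concrete, here is a minimal sketch of a polling loop in C# (chosen because the serial-port answer further down uses C#). The port name, baud rate, and polling interval are illustrative placeholders, not anything from the book:

using System;
using System.IO.Ports;
using System.Threading;

class PollingSketch
{
    static void Main()
    {
        // "COM1", 9600 baud, and the 100 ms interval are illustrative placeholders.
        var port = new SerialPort("COM1", 9600);
        port.Open();

        while (true)
        {
            // Polling: the computer repeatedly asks "is there anything for me?"
            // instead of coordinating through a status dialogue (handshaking).
            if (port.BytesToRead > 0)
            {
                Console.WriteLine(port.ReadExisting());
            }
            Thread.Sleep(100); // wait, then check again
        }
    }
}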

Related

Losing data with UDP over WiFi when multicasting

I'm currently working on a network protocol which includes a client-to-client system with auto-discovery of clients on the current local network.
Right now, I'm periodically broadcasting over 255.255.255.255, and if a client doesn't emit anything for 30 seconds I consider it dead (then offline). The goal is to keep an up-to-date list of running clients. It's working well using UDP, but UDP does not ensure that the packets have been successfully delivered. So when it comes to the WiFi parts of the network, I sometimes have "false positives" of dead clients. Currently I've reduced the time between two broadcasts to work around the issue (it still doesn't work well), but I don't find this clean.
Is there anything I can do to keep a list of "online" clients without this risk of "false positives"?
To minimize false positives due to dropped packets, you should alter the logic of your heartbeat protocol a little.
Rather than relying on a single broadcast packet per N seconds, you can send a burst of 3 or more packets immediately one after the other every N seconds. This is the approach that the ping and traceroute tools follow. With this method you significantly decrease the probability of a lost announcement from a peer.
Furthermore, you can allow for a certain number of lost announcements before your application declares a peer dead. Also, to minimize the possibility of packet loss on a wireless network, keep the broadcast UDP packet as small as possible.
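A minimal sketch of the sending side of that scheme, assuming UDP broadcast; the port number, payload, burst size, and interval are all placeholder values:

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class HeartbeatSender
{
    static void Main()
    {
        var udp = new UdpClient { EnableBroadcast = true };
        var endpoint = new IPEndPoint(IPAddress.Broadcast, 5000);        // placeholder port
        byte[] payload = Encoding.ASCII.GetBytes("HEARTBEAT client-42"); // keep it small

        while (true)
        {
            // Burst of 3 packets back to back: a single lost datagram no longer
            // makes this client look dead to its peers.
            for (int i = 0; i < 3; i++)
                udp.Send(payload, payload.Length, endpoint);

            Thread.Sleep(TimeSpan.FromSeconds(10)); // N seconds between bursts
        }
    }
}

On the receiving side, only mark a peer offline after several consecutive bursts have been missed, rather than a single one.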
You can also turn this around, so that the server broadcasts a "ServerIsUp" message and every client then registers with the server. When a client goes offline it unregisters; otherwise you can consider it alive.
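A rough sketch of the client side of that reversed scheme; the port, message strings, and client id are made up for illustration, and a real implementation would keep listening for announcements rather than handling just one:

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class RegisteringClient
{
    static void Main()
    {
        // Wait for the server's "ServerIsUp" broadcast (5001 is a placeholder port).
        using (var listener = new UdpClient(5001))
        using (var udp = new UdpClient())
        {
            var server = new IPEndPoint(IPAddress.Any, 0);
            byte[] data = listener.Receive(ref server);

            if (Encoding.ASCII.GetString(data) == "ServerIsUp")
            {
                // Register directly with the server that announced itself.
                byte[] register = Encoding.ASCII.GetBytes("REGISTER client-42");
                udp.Send(register, register.Length, new IPEndPoint(server.Address, 5001));

                // ... run the application ...

                // Unregister on a clean shutdown; anything still registered is
                // treated as alive by the server.
                byte[] unregister = Encoding.ASCII.GetBytes("UNREGISTER client-42");
                udp.Send(unregister, unregister.Length, new IPEndPoint(server.Address, 5001));
            }
        }
    }
}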

What are ICE Candidates and how do the peer connection choose between them?

I recently wrote a simple chat application, but I didn't really understand the background of ICE candidates.
When the peers create a connection they get ICE candidates, exchange them, and finally set them on the peer connection.
So my question is: where do the ICE candidates come from, how are they used, and are they all really used?
I have noticed that my colleague got fewer candidates when he ran the application on his machine; what could be the reason for a different number of candidates?
The answer from Ichigo is correct, but it is a little bit bigger. Every ICE contains 'a node' of your network, until it has reached the outside. By this you send these ICEs to the other peer, so they know through which connection points they can reach you.
See it as a large building: one person is in the building and needs to tell another (who is not familiar with it) how to walk through it. Same here: if I have a lot of network devices, the incoming connection somehow needs to find the right way to my computer.
By providing all nodes, the RTC connection finds the shortest route itself. So when you connect to the computer next to you, which is connected to the same router/switch/whatever, it uses all the ICEs and determines the shortest route, which is directly through that point. That your colleague got fewer ICE candidates has to do with the number of devices the connection has to go through.
Please note that every network adapter inside your computer which has an IP address (I have a vEthernet switch from Hyper-V) also creates an ICE candidate for it.
ICE stands for Interactive Connectivity Establishment. It is a technique used for NAT (network address translator) traversal, establishing communication for VoIP, peer-to-peer, instant messaging, and other kinds of interactive media.
Typically an ICE candidate provides information about the IP address and port from which the data is going to be exchanged.
Its format is something like the following:
a=candidate:1 1 UDP 2130706431 192.168.1.102 1816 typ host
Here UDP specifies the protocol to be used, and typ host specifies which type of ICE candidate it is; host means the candidate was generated within the firewall, from a local interface.
If you use Wireshark to monitor the traffic, you can see that the ports used for data transfer are the same as the ones present in the ICE candidates.
Another type is relay, which denotes that the candidate can be used when communication has to happen outside the firewall.
A candidate may contain more information, depending on the browser you are using.
Many times I have seen 8-12 ICE candidates generated by the browser.
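To make the fields concrete, here is a small sketch that splits the example candidate line quoted above into its parts (the field layout follows the standard a=candidate format: foundation, component, transport, priority, address, port, type):

using System;

class CandidateFields
{
    static void Main()
    {
        // The example candidate line from the answer above.
        string line = "a=candidate:1 1 UDP 2130706431 192.168.1.102 1816 typ host";

        // Strip the "a=candidate:" prefix, then split the space-separated fields.
        string[] f = line.Substring("a=candidate:".Length).Split(' ');

        Console.WriteLine("foundation : " + f[0]); // 1
        Console.WriteLine("component  : " + f[1]); // 1
        Console.WriteLine("transport  : " + f[2]); // UDP
        Console.WriteLine("priority   : " + f[3]); // 2130706431
        Console.WriteLine("address    : " + f[4]); // 192.168.1.102
        Console.WriteLine("port       : " + f[5]); // 1816
        Console.WriteLine("type       : " + f[7]); // host (f[6] is the literal "typ")
    }
}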
Ichigo has a good answer, but doesn't emphasise how each candidate is used. I think MarijnS95's answer is plain wrong:
Every ICE contains 'a node' of your network, until it has reached the outside
By providing all nodes, the RTC connection finds the shortest route itself.
First, he means ICE candidate, but that part is fine. Maybe I'm misinterpreting him, but by saying 'until it has reached the outside', he makes it seem like a client (the initiating peer) is the innermost layer of an onion, and suggests the ICE candidate helps you peel the layers until you get to the 'internet', where you can get to the responding peer, perhaps peeling another onion to get to it. This is just not true. If an initiating peer fails to reach a responding peer through the transport address, it discards this candidate and will try a different candidate. It does not store any nodes anywhere in the candidate. The ICE candidates are generated before any communication with the responding peer. An ICE candidate does not help you peel the proverbial NAT onion. Also, regarding the second quote I made from his answer, he makes it seem like ICE is used in a shortest-path algorithm, even though 'shortest' does not show up in the ICE RFC at all.
From the RFC 8445 terminology list:
ICE allows the agents to discover enough information about their topologies to potentially find one or more paths by which they can establish a data session.
The purpose of ICE is to discover which pairs of addresses will work. The way that ICE does this is to systematically try all possible pairs (in a carefully sorted order) until it finds one or more that work.
Candidate, Candidate Information: A transport address that is a potential point of contact for receipt of data. Candidates also have properties -- their type (server reflexive, relayed, or host), priority, foundation, and base.
Transport Address: The combination of an IP address and the transport protocol (such as UDP or TCP) port.
So there you have it: an (ICE) candidate was defined (an IP address and port that could potentially receive data, but might not work), and the selection process was explained (the first transport address pair that works is used). Note that it is not a list of nodes or onion peels.
Different users may have different ICE candidates because of the process of "gathering candidates". There are different types of candidates, and some are obtained from the local interfaces. If you have an extra virtual interface on your device, then an extra candidate will be generated for it (I did not test this!). If you want to know how ICE candidates are 'gathered', read section 2.1, Gathering Candidates, of the RFC.
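As a rough way to see where host candidates would come from on your own machine, you can list the local interfaces and their addresses; each address is a potential host candidate. This is only a sketch of the idea, not how a browser actually gathers candidates:

using System;
using System.Net.NetworkInformation;
using System.Net.Sockets;

class HostCandidateSources
{
    static void Main()
    {
        // Every interface that is up and has an IP address (including virtual
        // ones such as a Hyper-V vEthernet switch) is a potential source of a
        // host candidate.
        foreach (var nic in NetworkInterface.GetAllNetworkInterfaces())
        {
            if (nic.OperationalStatus != OperationalStatus.Up)
                continue;

            foreach (var addr in nic.GetIPProperties().UnicastAddresses)
            {
                if (addr.Address.AddressFamily == AddressFamily.InterNetwork)
                    Console.WriteLine(nic.Name + ": " + addr.Address);
            }
        }
    }
}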

How do you handle newcomers efficiently in WebRTC signaling?

Signaling is not addressed by WebRTC (even if we do have JSEP as a starting point), but from what I understand, it works this way:
client tells the server it's available at X
server holds that information and maps it to an identifier
other client comes and sends an identifier to get connection information from the first client
other client uses it to create its own connection information and sends it to the server
server sends this to the first client
both clients can now talk
This is all nice and well, but what happens if a 3rd client arrives?
You have to redo the whole thing, which supposes the first two clients are STILL connected to the server, waiting for a 3rd client to signal itself, so that the exchange process can start again and they can get the 3rd client's connection information.
So does that mean you are required to have some sort of permanent link to the server for each client (long polling, WebSocket, etc.)? If yes, is there a way to do that efficiently?
Because I don't see the point of having WebRTC if I have to set up Node.js or Tornado and make it scale to the number of my users. It doesn't sound very P2P-ish to me.
Please tell me I missed something.
What about a chat system? Do you really need to keep a permanent link to the server for each client? Of course, because otherwise you have no way of keeping track of a user's status. This "permanent" link can be implemented in different ways: you mentioned WebSockets and long polling, but simple periodic XHR polling works too (although this will affect the UX, depending on the interval).
So view it like a chat system, except that the media stream is P2P for reduced latency. Once a P2P WebRTC connection is established, the server may die and, of course, the P2P connection will be kept between the two clients. What I mean is: both users may always block your server once the P2P connection is established and still be connected together in the wild Internets.
Understand me well: once the P2P connection is established, your server will not be doing any more WebRTC signalling. The connection is only needed to keep track of the statuses.
So it depends on your application. If you want to keep the statuses of users and make them visible to others, then you're in the same situation as a chat system: you need to keep a certain link, somehow, to make sure their statuses are synced. Otherwise, your server exists to connect them together and is not needed afterwards. An example of the latter situation is: a user goes to a webpage, the webpage provides him with a new room URL, the user shares this URL to another peer by another mean, the other peer joins the room, server connects them together (manages WebRTC signalling) and then forgets them. They are now connected until one of them breaks the link. Just like this reference app.
Instead of a central server keeping one connection per client, a mesh network could also be considered, albeit difficult to implement.
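For the "permanent link" part, here is a minimal sketch of a client keeping a persistent signalling/presence connection over a WebSocket. The server URL, JSON message shapes, and client id are invented for illustration; they are not part of any WebRTC API:

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class PresenceClient
{
    static async Task Main()
    {
        using (var ws = new ClientWebSocket())
        {
            // Placeholder signalling server URL.
            await ws.ConnectAsync(new Uri("wss://example.invalid/signalling"), CancellationToken.None);

            // Announce ourselves; the server maps this id to our connection.
            await SendText(ws, "{\"type\":\"join\",\"id\":\"client-42\"}");

            var buffer = new byte[4096];
            while (ws.State == WebSocketState.Open)
            {
                var result = await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
                if (result.MessageType == WebSocketMessageType.Close)
                    break;

                string msg = Encoding.UTF8.GetString(buffer, 0, result.Count);
                // A real client would dispatch here: a "peer-joined" message starts
                // a new offer/answer exchange, while "offer"/"answer"/"candidate"
                // messages are fed into the RTCPeerConnection. Once that P2P link
                // is up, the media no longer touches this server.
                Console.WriteLine("signalling message: " + msg);
            }
        }
    }

    static Task SendText(ClientWebSocket ws, string json) =>
        ws.SendAsync(new ArraySegment<byte>(Encoding.UTF8.GetBytes(json)),
                     WebSocketMessageType.Text, true, CancellationToken.None);
}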

UDP Broadcast, Multicast, or Unicast for a "Toy Application"

I'm looking to write a toy application for my own personal use (and possibly to share with friends) for peer-to-peer shared status on a local network. For instance, let's say I wanted to implement it for the name of the current building you're in (let's pretend the network topology is weird, and multiple buildings occupy the same LAN). The idea is if you run the application, you can set what building you're in, and you can see the buildings of every other user running the application on the local network.
The question is, what's the best transport/network layer technology to use to implement this?
My initial inclination was to use UDP Multicast, but the more research I do about it, the more I'm scared off by it: while the technology is great and seems easy to use, if the application is not tailored for a particular site deployment, it also seems most likely to get you a visit from an angry network admin.
I'm wondering, therefore, since this is a relatively low bandwidth application — probably max one update every 4–5 minutes or so from each client, with likely no more than 25–50 clients — whether it might be "cheaper" in many ways to use another strategy:
Multicast: find a way to pick a well-known multicast address from 239.255/16 and have interested applications join the group when they start up.
Broadcast: send out a single UDP Broadcast message every time someone's status changes (and one "refresh" broadcast when the app launches, after which every client replies directly to the requesting user with their current status).
Unicast: send a UDP Broadcast at application start to announce interest, and when a client's status changes, it sends a UDP packet directly to every client who has announced. This results in the highest traffic, but might be less likely to annoy other systems with needless broadcast packets. It also introduces potential complications when apps crash (in terms of generating unnecessary traffic).
Multicast is most certainly the best technology for the job, but I'm wondering if the associated hassles are worth avoiding since this is just a "toy application," not a business-critical service intended for professional network admin deployment and configuration.
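For reference, the multicast variant is only a few lines with UdpClient. The group address 239.255.0.42 and port 5002 are arbitrary placeholders from the administratively scoped range mentioned above:

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class BuildingStatus
{
    static void Main()
    {
        var group = IPAddress.Parse("239.255.0.42"); // placeholder group address
        const int port = 5002;                       // placeholder port

        var udp = new UdpClient(port);
        udp.JoinMulticastGroup(group);

        // Announce our own status once; a real app would re-send on change,
        // at most every few minutes, per the question.
        byte[] msg = Encoding.UTF8.GetBytes("alice: Building 7");
        udp.Send(msg, msg.Length, new IPEndPoint(group, port));

        // Print everyone else's announcements as they arrive.
        var from = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            byte[] data = udp.Receive(ref from);
            Console.WriteLine(from.Address + " -> " + Encoding.UTF8.GetString(data));
        }
    }
}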

Compact Framework serial port and balance

So, to open up a serial port and successfully transmit data from the balance through the serial port, I need to make sure that the settings on the serialPort object match the actual settings of the balance.
Now, the question is: how do I detect that the connection hasn't been established because the settings are different? serialPort.Open throws no exception, so it gives no indication of whether a usable connection has been established. Yes, the settings are valid, but if they don't match the device (balance) settings, I am in the dark as to why the weight from the balance is not being captured.
Any input here?
Without knowing more about the format of the data you expect from your balance, only general techniques for detecting mismatched serial port settings are applicable.
If the UART settings are significantly incorrect, you'll likely see a lot of framing errors: where the UART expects a stop bit of 1, it will in fact see a 0. You can detect this with the ErrorReceived event on the port.
private void OnErrorReceived(object sender, SerialErrorReceivedEventArgs e)
{
    if ((e.EventType & SerialError.Frame) == SerialError.Frame)
    {
        // your settings don't match, try something else
    }
}
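For completeness, the handler also has to be attached to the port instance (the serialPort name here matches the field from the question):

serialPort.ErrorReceived += OnErrorReceived;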
If things are close, but still incorrect, the .NET serial port object may not even give you an error (that is, until something catastrophic occurs).
My most common serial port communication failure occurs due to mismatched baud rates. If you have a message that you know you can get an 'echo' for, try that as part of a handshaking effort. Perhaps the device you're connecting to has a 'status' message. No harm will come from requesting it, and you will find out if communication is flowing correctly.
For software handshaking (XON/XOFF), there's very little you can do to detect whether or not it's configured correctly. The serial port object can do anything from ignoring it completely to throwing errors on its worker thread, depending on the underlying serial port driver implementation. I've had serial port drivers that completely ignore XON/XOFF and pass the characters straight into the program - yikes!
For hardware handshaking, the basic echo strategy for the baud rate may work, depending on how your device works. If you know that it will do hardware handshaking, you may be able to detect it and turn it on. If the device requires hardware handshaking and it's not on, you may get nothing, and vice versa.
Another setting that's more rarely used is the DTR pin - data terminal ready. Some serial devices require that this be asserted (i.e., set to true) to indicate that it's time to start sending data. It's set to false by default; give toggling it a whirl.
Note that the serial port object is ... finicky. While not necessarily required, I would consider closing the port before you make any changes.
Edit:
Thanks to your comments, it looks like this is your device. It says the default settings should be:
1200 baud
Odd parity
1 stop bit
Hardware handshaking
It doesn't specify how many data bits, but the device says it supports 7 and 8. I'd try both of those. It also says it supports 600, 1200, 2400, 4800, 9600, and 19200 baud.
If you've turned on hardware handshaking, enabled DTR (different things) and cycled through all the different baud rates, there's a good chance that it's not your settings. It could be that the serial cable that's being used may be wired incorrectly for your device. Some serial cables are 'passthrough' cables, where the 1-9 pins on one side match exactly with the 1-9 pins on the other. Then, you have 'crossover' cables, where the "TX" and "RX" cables are switched (so that when one side transmits, the other side receives, a very handy cable.)
Consider looking at the command table in the back of the manual there; there's a "print software version" command you could issue to get some type of echo back.
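Putting those defaults together, a configuration sketch might look something like this. The port name, timeout, and the "VERSION?" string are placeholders; the real version-query command would have to come from the command table in the balance's manual:

using System;
using System.IO.Ports;

class BalanceProbe
{
    static void Main()
    {
        using (var port = new SerialPort("COM1"))     // placeholder port name
        {
            // Defaults from the balance's documentation, as listed above.
            port.BaudRate = 1200;
            port.Parity = Parity.Odd;
            port.StopBits = StopBits.One;
            port.DataBits = 8;                        // try 7 as well
            port.Handshake = Handshake.RequestToSend; // hardware handshaking
            port.DtrEnable = true;                    // assert DTR, a separate setting
            port.ReadTimeout = 2000;                  // placeholder timeout in ms

            port.ErrorReceived += OnErrorReceived;    // the framing-error handler shown earlier
            port.Open();

            try
            {
                // Hypothetical echo test: "VERSION?" stands in for whatever the
                // manual's "print software version" command actually is.
                port.WriteLine("VERSION?");
                Console.WriteLine("Reply: " + port.ReadLine());
            }
            catch (TimeoutException)
            {
                Console.WriteLine("No reply - try other baud rates, data bits, or cabling.");
            }
        }
    }

    private static void OnErrorReceived(object sender, SerialErrorReceivedEventArgs e)
    {
        Console.WriteLine("Serial error: " + e.EventType);
    }
}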
Serial ports use a very, very old communications technology with a very, very old protocol called RS-232. This is pretty much as simple as it gets: the two end points have synchronized clocks and they test the line voltage every clock cycle to see if it is high or low (with high meaning 0 and low meaning 1, which is the opposite of most conventions - again, an artifact of the protocol's age). The clock synchronization is accomplished through the use of start and stop bits; the stop bits are really just rest time in between bytes. There are also a few other things thrown into the more advanced uses of the protocol, such as parity bits, XON/XOFF, etc., but those all ride on top of this very basic communication layer.
Detecting a mismatch of the clocks on each end of the serial line is going to be nearly impossible - you'll just get incorrect data on the receiving end. The protocol itself has no built-in way to identify this situation. I am unaware of any serial driver that is smart enough to notice the input data being clocked at an inappropriate frequency. If you're using one of the error detection schemes such as parity bits, probabilistically every byte will be declared an error. In short, the best you can do is check the incoming data for errors: parity errors should be detected by your driver/software layer, whereas errors in the data your app receives from that layer will need to be checked by your program (the latter can be assisted by the use of checksums).