Converting from UDP datagrams to UDS datagrams

I have a couple of questions regarding Unix Domain Sockets. We currently have an application that has a receiver service receiving datagram packets from multiple client processes on the same machine using the loopback address. We want to convert it over to using Unix Domain Sockets. Here are my questions:
The receiver process may be down while senders are running; with UDP, the packets were simply dropped. Is the behavior of UDS the same, or do the senders receive an error? (The senders may also have started before the receiver, so the UDS path may not have been bound yet.)
The receiver may go down and be restarted. Since it has to unlink the path before binding, do the running senders' packets make it to the receiver, or do the senders need to reset?
If the receiver is down for an extended period, do the oldest packets get dropped, or will the queue fill up and block the senders?
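For what it's worth, here is a minimal sketch of the sender side with AF_UNIX/SOCK_DGRAM (Linux flavour, untested; the socket path is a placeholder). Unlike UDP to loopback, the error cases tend to show up directly on sendto():

    /* Minimal sender sketch with AF_UNIX + SOCK_DGRAM (assumes Linux).
     * The socket path is a placeholder. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_UNIX, SOCK_DGRAM, 0);

        struct sockaddr_un dst = {0};
        dst.sun_family = AF_UNIX;
        strncpy(dst.sun_path, "/tmp/receiver.sock", sizeof dst.sun_path - 1);

        const char msg[] = "hello";
        if (sendto(s, msg, sizeof msg, 0, (struct sockaddr *)&dst, sizeof dst) < 0) {
            /* Typical cases on Linux:
             *   ENOENT       - the receiver has never created/bound the path
             *   ECONNREFUSED - the path exists but no socket is bound to it
             *                  (e.g. the receiver died and left a stale file)
             *   EAGAIN       - the receiver's queue is full and the socket is
             *                  non-blocking; a blocking socket just blocks   */
            perror("sendto");
        }

        close(s);
        return 0;
    }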

Related

Can IPv6 multicasting work when one or more receivers are unable to bind to the program's well-known port?

Consider a simple IPv6 multicast application:
A "talker" program periodically sends out IPv6 UDP packets to a well-known multicast-group, sending them to a well-known port.
Zero or more "listener" programs bind themselves to that well-known port and join the well-known multicast group, and they all receive the UDP packets.
That all works pretty well, except in the case where one or more of the listener programs is unable to bind to the well-known UDP port because a socket in some other (unrelated) program has already bound to that UDP port (and didn't set the SO_REUSEADDR and/or SO_REUSEPORT options to allow it to be shared with anyone else). AFAICT in that case the listener program is simply out of luck; there is nothing it can do to receive the multicast data, short of asking the user to terminate the interfering program in order to free up the port.
Or is there? For example, is there some technique or approach that would allow a multicast listener to receive all the incoming multicast packets for a given multicast-group, regardless of which UDP port they are being sent to?
If you want to receive all multicast traffic regardless of port, you'd need to use raw sockets to get the complete IP datagram. You could then directly inspect the IP header, check if it's using UDP, then check the UDP header before reading the application layer data. Note that methods of doing this are OS specific and typically require administrative privileges.
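As a rough illustration of that approach, here is a Linux-only sketch (untested, needs root or CAP_NET_RAW) that uses a packet socket rather than a classic raw socket, since that is the simplest way on Linux to see every IPv6 datagram on an interface; the interface name is a placeholder and IPv6 extension headers are ignored for brevity:

    /* Linux-only sketch: a packet socket sees every IPv6 datagram on the
     * interface, regardless of UDP port. Needs root or CAP_NET_RAW.
     * The interface name is a placeholder; extension headers are ignored. */
    #include <arpa/inet.h>
    #include <linux/if_ether.h>
    #include <linux/if_packet.h>
    #include <net/if.h>
    #include <netinet/ip6.h>
    #include <netinet/udp.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* SOCK_DGRAM packet socket: the kernel strips the Ethernet header. */
        int s = socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_IPV6));
        if (s < 0) { perror("socket"); return 1; }

        struct sockaddr_ll sll = {0};
        sll.sll_family   = AF_PACKET;
        sll.sll_protocol = htons(ETH_P_IPV6);
        sll.sll_ifindex  = if_nametoindex("eth0");        /* placeholder iface */
        if (bind(s, (struct sockaddr *)&sll, sizeof sll) < 0) { perror("bind"); return 1; }

        unsigned char buf[65536];
        for (;;) {
            ssize_t n = recv(s, buf, sizeof buf, 0);
            if (n < (ssize_t)(sizeof(struct ip6_hdr) + sizeof(struct udphdr)))
                continue;

            struct ip6_hdr *ip6 = (struct ip6_hdr *)buf;
            if (ip6->ip6_nxt != IPPROTO_UDP)              /* skip non-UDP / ext hdrs */
                continue;

            struct udphdr *udp = (struct udphdr *)(buf + sizeof *ip6);
            char dst[INET6_ADDRSTRLEN];
            inet_ntop(AF_INET6, &ip6->ip6_dst, dst, sizeof dst);
            printf("UDP to [%s]:%u, %u payload bytes\n",
                   dst, (unsigned)ntohs(udp->dest),
                   (unsigned)(ntohs(udp->len) - sizeof *udp));
            /* Compare ip6->ip6_dst against your multicast group here. */
        }
    }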
Regarding SO_REUSEADDR and SO_REUSEPORT: apps that set these options allow multiple programs to receive multicast packets sent to a given port. However, if you also need to receive unicast packets this method has issues. Incoming unicast packets may be delivered to both sockets, may always go to one specific socket, or may be delivered to each in an alternating fashion; this also differs based on the OS.
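A short sketch of that option dance (untested; the port is a placeholder, and SO_REUSEPORT is not defined on every platform):

    /* Sketch (untested): set both options before bind() so several listeners
     * can share the same multicast port. The port is a placeholder. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET6, SOCK_DGRAM, 0);
        int on = 1;

        setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);
    #ifdef SO_REUSEPORT
        setsockopt(s, SOL_SOCKET, SO_REUSEPORT, &on, sizeof on);
    #endif

        struct sockaddr_in6 addr = {0};
        addr.sin6_family = AF_INET6;
        addr.sin6_addr   = in6addr_any;
        addr.sin6_port   = htons(4321);                   /* placeholder shared port */

        if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0)
            perror("bind");
        /* ...then join the multicast group with IPV6_JOIN_GROUP as usual. */
        return 0;
    }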

What's the problem of always using QoS 0 in a MQTT session?

I've been developing an always-connected MQTT client on a strictly resource-constrained embedded device using the Paho C library. Here are my questions:
Apart from broker and client crashes, are there any other reasons for a QoS 0 message not to arrive at the destination?
In a subscription request, is it possible that the broker does not accept the requested QoS?
Under what circumstances could a QoS 1 message be received multiple times?
(1) A message delivered at QoS 0 via TCP/IP is only guaranteed to have reached the remote machine's TCP stack, not the actual application that is running (be that an MQTT client or an MQTT broker).
Messages sent at a higher QoS are acknowledged by the application, not just the TCP/IP stack of the host machine, which means you can be more certain the message has actually been processed.
(2) Some brokers may only support QoS 0 or QoS 0/1 (e.g. AWS IoT), and as mentioned in the doc the SUBACK message includes the QoS level that was granted, which may not match what was requested. So even if the subscribing client asks for a higher QoS, it should check the granted level in the SUBACK.
(3) If the client crashes after having processed the message but before sending the PUBACK, then the broker can try to deliver the message again when the client reconnects.
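To make (1) a bit more concrete, here is a rough Paho C sketch (untested; the broker URI, client ID and topic are placeholders). A QoS 0 publish is fire-and-forget, while at QoS 1 the delivery token can be waited on and only completes once the broker's PUBACK has arrived, i.e. an application-level acknowledgement:

    /* Rough Paho C sketch (untested). Broker URI, client ID and topic are
     * placeholders. QoS 0: returns once handed to the TCP stack. QoS 1: the
     * delivery token completes only after the broker's PUBACK comes back. */
    #include <string.h>
    #include "MQTTClient.h"

    int main(void)
    {
        MQTTClient client;
        MQTTClient_create(&client, "tcp://localhost:1883", "example-client",
                          MQTTCLIENT_PERSISTENCE_NONE, NULL);

        MQTTClient_connectOptions conn_opts = MQTTClient_connectOptions_initializer;
        conn_opts.keepAliveInterval = 20;
        conn_opts.cleansession = 1;
        MQTTClient_connect(client, &conn_opts);

        char payload[] = "hello";
        MQTTClient_message msg = MQTTClient_message_initializer;
        msg.payload    = payload;
        msg.payloadlen = (int)strlen(payload);
        msg.retained   = 0;
        MQTTClient_deliveryToken token;

        /* QoS 0: fire and forget - no way to know the broker processed it. */
        msg.qos = 0;
        MQTTClient_publishMessage(client, "example/topic", &msg, &token);

        /* QoS 1: block until the broker has acknowledged (PUBACK) the message. */
        msg.qos = 1;
        MQTTClient_publishMessage(client, "example/topic", &msg, &token);
        MQTTClient_waitForCompletion(client, token, 10000L);

        MQTTClient_disconnect(client, 10000);
        MQTTClient_destroy(&client);
        return 0;
    }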

Should an IPv6 UDP socket that is set up to receive multicast packets also be able to receive unicast packets?

I've got a little client program that listens on an IPv6 multicast group (e.g. ff12::blah:blah%en0) for multicast packets that are sent out by a server. It works well.
The server would also like to sometimes send a unicast packet to my client (since if the packet is only relevant to one client there is no point in bothering all the other members of the multicast group with it). So my server just does a sendto() to my client's IP address and the port that the client's IPv6 multicast socket is listening on.
If my client is running under MacOS/X, this works fine; the unicast packet is received by the same socket that receives the multicast packets. Under Windows, OTOH, the client never receives the unicast packet (even though it does receive the multicast packets without any problems).
My question is, is it expected that a multicast-listener IPv6 UDP socket should also be able to receive unicast packets on that same port (in which case perhaps I'm doing something wrong, or have Windows misconfigured)? Or is this something that "just happens to work" under MacOS/X but isn't guaranteed, so the fact that it doesn't work for me under Windows just means I had the wrong expectations?
It should work fine. As long as you bind to IN6ADDR_ANY, then join the multicast groups, you should be able to send and receive unicast packets with no problem.
It's important to bind to IN6ADDR_ANY (or INADDR_ANY for IPv4) when using multicast. If you bind to a specific interface, this breaks multicast on Linux systems.
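Something like this sketch (POSIX flavour, untested; the group, port and interface name are placeholders): bind to the wildcard address, join the group, and the same socket will then pick up both multicast and unicast datagrams sent to that port:

    /* Sketch (POSIX flavour, untested): bind the UDP socket to the wildcard
     * address, then join the group. Group, port and interface are placeholders. */
    #include <arpa/inet.h>
    #include <net/if.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET6, SOCK_DGRAM, 0);

        struct sockaddr_in6 local = {0};
        local.sin6_family = AF_INET6;
        local.sin6_addr   = in6addr_any;              /* wildcard, not the group */
        local.sin6_port   = htons(4321);              /* placeholder port */
        if (bind(s, (struct sockaddr *)&local, sizeof local) < 0) {
            perror("bind");
            return 1;
        }

        struct ipv6_mreq mreq = {0};
        inet_pton(AF_INET6, "ff12::1234", &mreq.ipv6mr_multiaddr);  /* placeholder group */
        mreq.ipv6mr_interface = if_nametoindex("en0");              /* placeholder iface */
        if (setsockopt(s, IPPROTO_IPV6, IPV6_JOIN_GROUP, &mreq, sizeof mreq) < 0) {
            perror("IPV6_JOIN_GROUP");
            return 1;
        }

        /* The same socket now receives both multicast and unicast datagrams
         * addressed to this port. */
        char buf[1500];
        struct sockaddr_in6 peer;
        socklen_t plen = sizeof peer;
        ssize_t n = recvfrom(s, buf, sizeof buf, 0, (struct sockaddr *)&peer, &plen);
        printf("got %zd bytes\n", n);
        return 0;
    }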

recv() fails on UDP

I’m writing a simple client-server app which for the time being will be for my own personal use. I’m using Winsock for the net communication. I have not done any networking for the last 10 years, so I am quite rusty. I’d like to use as little external code as possible, so I have written a home-made server discovery mechanism, as follows.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket. The server recv() the broadcast and then sendto() the client discovery socket the ‘name’ of its listening socket. The client then uses this info to connect to the server (on a different socket). This mechanism should allow the server to bind its listening socket to the first port it can within the dynamic port range (49152-65535) and the clients to discover where the server is and on which port it is listening.
The server part works fine: the server receives the broadcast messages and successfully sends its response.
On the client side the firewall log shows that the server’s response arrives to the machine and that it is addressed to the correct port (to the client’s discovery socket).
But the message never makes it to the client app. I’ve tried doing a recv() in blocking and non-blocking mode, and there is never any data available. ioctlsocket() always shows no data is available, even though I know the packet got to the machine.
The server succeeds in doing a recv() on the broadcast data, but the client fails to recv() the server’s response, which is addressed to its discovery socket.
The question is very vague: what gotchas should I watch for in this scenario? Why would recv() fail to get a packet which has actually arrived to the machine? The sockets are UDP, so the fact that they are not connected is irrelevant. Or is it?
Many thanks in advance.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket.
The message doesn't need to contain anything. Just broadcast an empty message from the 'discovery socket'. recvfrom() will tell the server where it came from, and it can just reply directly.
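A sketch of that simplified responder (Winsock flavour, untested; the discovery port and the reply payload are placeholders):

    /* Sketch of the simplified discovery responder (Winsock, untested).
     * The discovery port and the reply payload are placeholders. */
    #include <winsock2.h>
    #include <stdio.h>
    #include <string.h>
    #pragma comment(lib, "ws2_32.lib")

    int main(void)
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

        struct sockaddr_in local = {0};
        local.sin_family      = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port        = htons(49999);             /* placeholder discovery port */
        bind(s, (struct sockaddr *)&local, (int)sizeof local);

        for (;;) {
            char buf[64];
            struct sockaddr_in client;
            int clen = (int)sizeof client;

            /* The broadcast payload can be empty; what matters is 'client',
             * which recvfrom() fills in with the sender's address and port. */
            recvfrom(s, buf, (int)sizeof buf, 0, (struct sockaddr *)&client, &clen);

            /* Reply straight back to wherever the broadcast came from. */
            const char *reply = "listen_port=51000";      /* placeholder listening port */
            sendto(s, reply, (int)strlen(reply), 0, (struct sockaddr *)&client, clen);
        }
    }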
The server recv() the broadcast and then sendto() the client discovery socket the ‘name’ of its listening socket.
Fair enough, although actually the server could just broadcast its own TCP listening port every 5 seconds or whatever.
On the client side the firewall log shows that the server’s response arrives to the machine and that it is addressed to the correct port (to the client’s discovery socket). But the message never makes it to the client app
If it got to the host it must get to the application. You must have got the ports mixed up somehow. Simplify it as above and retry.
Well, it was one of those stupid situations: Windows Firewall was active in addition to the other firewall, and was silently dropping packets. Deactivating it solved the problem.
But I still don’t understand how it works, as it was allowing the server to receive packets sent through broadcasting. And when I was at my wits’ end and set the server to answer back through a broadcast, THOSE packets got dropped.
Two days of frustration. I hope someone profits from my experience.

Why is SNMP usually run over UDP and not TCP/IP?

This morning, there were big problems at work because an SNMP trap didn't "go through" because SNMP is run over UDP. I remember from the networking class in college that UDP isn't guaranteed delivery like TCP/IP. And Wikipedia says that SNMP can be run over TCP/IP, but UDP is more common.
I get that some of the advantages of UDP over TCP/IP are speed, broadcasting, and multicasting. But it seems to me that guaranteed delivery is more important for network monitoring than broadcasting ability. Particularly when there are serious high-security needs. One of my coworkers told me that UDP packets are the first to be dropped when traffic gets heavy. That is yet another reason to prefer TCP/IP over UDP for network monitoring (IMO).
So why does SNMP use UDP? I can't figure it out and can't find a good reason on Google either.
UDP is actually expected to work better than TCP in lossy networks (or congested networks). TCP is far better at transferring large quantities of data, but when the network fails it's more likely that UDP will get through. (in fact, I recently did a study testing this and it found that SNMP over UDP succeeded far better than SNMP over TCP in lossy networks when the UDP timeout was set properly). Generally, TCP starts behaving poorly at about 5% packet loss and becomes completely useless at 33% (ish) and UDP will still succeed (eventually).
So the right thing to do, as always, is pick the right tool for the right job. If you're doing routine monitoring of lots of data, you might consider TCP. But be prepared to fall back to UDP for fixing problems. Most stacks these days can actually use both TCP and UDP.
As for sending TRAPs, yes TRAPs are unreliable because they're not acknowledged. However, SNMP INFORMs are an acknowledged version of an SNMP TRAP. Thus if you want to know that the notification receiver got the message, use INFORMs. Note that TCP does not solve this problem, as it only provides a transport-level (TCP stack) indication that the message was received. There is no assurance that the notification receiver actually got it. SNMP INFORMs do application-level acknowledgement and are much more trustworthy than assuming a TCP ack means the application got it.
If systems sent SNMP traps via TCP they could block waiting for the packets to be ACKed if there was a problem getting the traffic to the receiver. If a lot of traps were generated, it could use up the available sockets on the system and the system would lock up. With UDP that is not an issue because it is stateless. A similar problem took out BitBucket in January although it was syslog protocol rather than SNMP--basically, they were inadvertently using syslog over TCP due to a configuration error, the syslog server went down, and all of the servers locked up waiting for the syslog server to ACK their packets. If SNMP traps were sent over TCP, a similar problem could occur.
http://blog.bitbucket.org/2012/01/12/follow-up-on-our-downtime-last-week/
Check out O'Reilly's writings on SNMP: https://library.oreilly.com/book/9780596008406/essential-snmp/18.xhtml
One advantage of using UDP for SNMP traps is that you can send the traps to a broadcast address and then field them with multiple management stations on that subnet.
The use of traps with SNMP is considered unreliable. You really should not be relying on traps.
SNMP was designed to be used as a request/response protocol. The protocol details are simple (hence the name, "simple network management protocol"). And UDP is a very simple transport. Try implementing TCP on your basic agent - it's considerably more complex than a simple agent coded using UDP.
SNMP get/getnext operations have a retry mechanism: if a response is not received within the timeout, the same request is re-sent, up to a maximum number of tries.
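The same pattern is easy to reproduce over any UDP request/response exchange; here is a generic sketch (POSIX flavour, untested; the address, port, payload, timeout and retry count are placeholders):

    /* Generic UDP request/retry sketch (POSIX flavour, untested). Address,
     * port, payload, timeout and retry count are placeholders. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        struct timeval tv = { .tv_sec = 2, .tv_usec = 0 };   /* per-try timeout */
        setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port   = htons(16100);                       /* placeholder port */
        inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);     /* placeholder address */

        const char req[] = "get sysUpTime";                  /* placeholder request */
        char resp[1500];

        for (int attempt = 1; attempt <= 3; attempt++) {     /* max 3 tries */
            sendto(s, req, sizeof req, 0, (struct sockaddr *)&srv, sizeof srv);

            ssize_t n = recv(s, resp, sizeof resp, 0);
            if (n >= 0) {
                printf("got %zd byte response on attempt %d\n", n, attempt);
                close(s);
                return 0;
            }
            /* recv() timed out (or failed); fall through and send again. */
        }
        fprintf(stderr, "no response after retries\n");
        close(s);
        return 1;
    }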
Usually, when you're doing SNMP, you're on a company network, you're not doing this over the long haul. UDP can be more efficient. Let's look at (a gross oversimplification of) the conversation via TCP, then via UDP...
TCP version:
client sends SYN to server
server sends SYN/ACK to client
client sends ACK to server - socket is now established
client sends DATA to server
server sends ACK to client
server sends RESPONSE to client
client sends ACK to server
client sends FIN to server
server sends FIN/ACK to client
client sends ACK to server - socket is torn down
UDP version:
client sends request to server
server sends response to client
generally, the UDP version succeeds since it's on the same subnet, or not far away (i.e. on the company network).
However, if there is a problem with either the initial request or the response, it's up to the app to decide what to do. (A) Can we get by with a missed packet? If so, who cares, just move on. (B) Do we need to make sure the message arrives? Simple, just redo the whole thing... client sends the request to the server, server sends the response to the client. The application can include a number with each request so that, if the recipient ends up receiving both copies, it knows it's really the same message being sent again.
This same technique is why DNS is done over UDP. It's much lighter weight and generally it works the first time because you are supposed to be near your DNS resolver.
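And for completeness, a tiny sketch of the duplicate-detection idea mentioned above (untested; the request layout and the one-deep "window" are made up purely for illustration):

    /* Tiny sketch (untested) of the duplicate-detection idea: the client stamps
     * each request with an id, so the server can recognise a retransmission of
     * a request it has already handled. The struct and the one-deep "window"
     * are made up purely for illustration. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct request {
        uint32_t id;                  /* incremented by the client per new request */
        char     body[64];
    };

    static uint32_t last_seen_id;     /* real servers keep this per client */

    static int is_duplicate(const struct request *r)
    {
        if (r->id == last_seen_id)
            return 1;                 /* same id as last time: a retransmission */
        last_seen_id = r->id;
        return 0;
    }

    int main(void)
    {
        struct request r = { .id = 7 };
        strcpy(r.body, "get sysUpTime");

        printf("first copy  duplicate? %d\n", is_duplicate(&r));   /* prints 0 */
        printf("second copy duplicate? %d\n", is_duplicate(&r));   /* prints 1 */
        return 0;
    }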