Should an IPv6 UDP socket that is set up to receive multicast packets also be able to receive unicast packets?

I've got a little client program that listens on an IPv6 multicast group (e.g. ff12::blah:blah%en0) for multicast packets that are sent out by a server. It works well.
The server would also like to sometimes send a unicast packet to my client (since if the packet is only relevant to one client there is no point in bothering all the other members of the multicast group with it). So my server just does a sendto() to my client's IP address and the port that the client's IPv6 multicast socket is listening on.
If my client is running under MacOS/X, this works fine; the unicast packet is received by the same socket that receives the multicast packets. Under Windows, OTOH, the client never receives the unicast packet (even though it does receive the multicast packets without any problems).
My question is, is it expected that a multicast-listener IPv6 UDP socket should also be able to receive unicast packets on that same port (in which case perhaps I'm doing something wrong, or have Windows misconfigured)? Or is this something that "just happens to work" under MacOS/X but isn't guaranteed, so the fact that it doesn't work for me under Windows just means I had the wrong expectations?

It should work fine. As long as you bind to IN6ADDR_ANY and then join the multicast groups, you should be able to send and receive unicast packets on the same socket with no problem.
It's important to bind to IN6ADDR_ANY (or INADDR_ANY for IPv4) when using multicast. If you instead bind to a specific address, multicast reception breaks on Linux systems.
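For reference, here is a minimal sketch of that setup using POSIX sockets (the group, port, and interface name are placeholders; Winsock needs WSAStartup/closesocket but the pattern is the same):
#include <arpa/inet.h>
#include <net/if.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET6, SOCK_DGRAM, 0);

    // Bind to the wildcard address, not a specific unicast address,
    // so the socket receives both multicast and unicast datagrams.
    sockaddr_in6 addr{};
    addr.sin6_family = AF_INET6;
    addr.sin6_addr = in6addr_any;
    addr.sin6_port = htons(12345);              // placeholder port
    bind(sock, (sockaddr*)&addr, sizeof(addr));

    // Join the multicast group on one interface.
    ipv6_mreq mreq{};
    inet_pton(AF_INET6, "ff12::1234", &mreq.ipv6mr_multiaddr);  // placeholder group
    mreq.ipv6mr_interface = if_nametoindex("en0");              // placeholder interface
    setsockopt(sock, IPPROTO_IPV6, IPV6_JOIN_GROUP, &mreq, sizeof(mreq));

    // recvfrom() now yields datagrams sent to the group *and* unicast
    // datagrams sent directly to this host's port.
    char buf[1500];
    ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, nullptr, nullptr);
    if (n >= 0) printf("received %zd bytes\n", n);
    close(sock);
}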

Related

When joining a multicast group, will a UDP listener stop listening to direct UDP messages?

I have built a small UPnP framework on top of ASIO (https://think-async.com/Asio/), and the UDP listener joins the UPnP multicast address like so:
socket_.set_option(multicast::join_group(
    address::from_string("239.255.255.250").to_v4(),
    host_address_));
This works nicely, as I can receive UPnP multicast queries. However, when sending unicast UDP messages directly to ip_address:1900, the UDP listener does not receive anything. Is this to be expected?
Reading this question: "Why can't I get UPnP unicast M-SEARCH to work instead of MultiCast M-SEARCH?", it seems like it is supposed to work.
Maybe this is a quirk with ASIO?
Edit: Testing the multicast (cpp11) ASIO example, it all works as expected. I can send multicast and unicast messages, all get received. But this is with port 30001 and most other ports I've tested with. EXCEPT port 1900. If I use port 1900, only multicast messages get received.
Can it be that another application has bound to 1900 in non-reuse mode?
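For what it's worth, here is a minimal sketch (recent standalone ASIO, values from the question) showing where the reuse option has to go. If another application already holds 1900 exclusively, the bind() below throws; if the port is shared via reuse, incoming unicast datagrams may be delivered to only one of the sharing sockets, which would match the symptom:
#include <asio.hpp>
#include <iostream>

int main() {
    asio::io_context io;
    asio::ip::udp::socket socket_(io);

    asio::ip::udp::endpoint listen_endpoint(asio::ip::address_v4::any(), 1900);
    socket_.open(listen_endpoint.protocol());

    // Must be set before bind(). If some other program has bound 1900
    // without reuse, this bind() still fails with an exception.
    socket_.set_option(asio::socket_base::reuse_address(true));
    socket_.bind(listen_endpoint);

    socket_.set_option(asio::ip::multicast::join_group(
        asio::ip::address::from_string("239.255.255.250").to_v4()));

    std::cout << "bound to 1900 with reuse_address" << std::endl;
}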

Can IPv6 multicasting work when one or more receivers are unable to bind to the program's well-known port?

Consider a simple IPv6 multicast application:
A "talker" program periodically sends out IPv6 UDP packets to a well-known multicast-group, sending them to a well-known port.
Zero or more "listener" programs bind themselves to that well-known port and join the well-known multicast group, and they all receive the UDP packets.
That all works pretty well, except in the case where one or more of the listener programs is unable to bind to the well-known UDP port, because a socket in some other (unrelated) program has already bound to that port without setting the SO_REUSEADDR and/or SO_REUSEPORT options that would allow it to be shared. AFAICT, in that case the listener program is simply out of luck: there is nothing it can do to receive the multicast data, short of asking the user to terminate the interfering program in order to free up the port.
Or is there? For example, is there some technique or approach that would allow a multicast listener to receive all the incoming multicast packets for a given multicast-group, regardless of which UDP port they are being sent to?
If you want to receive all multicast traffic regardless of port, you'd need to use raw sockets to get the complete IP datagram. You could then inspect the IP header directly, check whether the packet is UDP, and then parse the UDP header before reading the application-layer data. Note that the methods for doing this are OS-specific and typically require administrative privileges.
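For illustration, a rough sketch of that approach (Linux only, and IPv4 for brevity, since IPv6 raw sockets deliver no IPv6 header and need different parsing; requires root or CAP_NET_RAW). A raw IPPROTO_UDP socket sees a copy of every incoming UDP datagram regardless of destination port:
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/udp.h>
#include <sys/socket.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_RAW, IPPROTO_UDP);
    if (sock < 0) { perror("socket (root/CAP_NET_RAW needed)"); return 1; }

    unsigned char buf[65536];
    for (;;) {
        ssize_t n = recv(sock, buf, sizeof(buf), 0);
        if (n < (ssize_t)(sizeof(iphdr) + sizeof(udphdr))) continue;

        const iphdr*  ip  = (const iphdr*)buf;                  // IP header is included
        const udphdr* udp = (const udphdr*)(buf + ip->ihl * 4);

        // Filter however you like: multicast destination, port range, etc.
        printf("UDP datagram to port %u, %zd bytes on the wire\n",
               ntohs(udp->dest), n);
    }
}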
Regarding SO_REUSEADDR and SO_REUSEPORT: applications that set these options allow multiple programs to receive multicast packets sent to a given port. However, if you also need to receive unicast packets, this approach has issues: an incoming unicast packet may be delivered to both sockets, always to one specific socket, or to each socket in alternation. This behavior also differs by OS.
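As a hedged sketch of that sharing variant (POSIX; SO_REUSEPORT exists on Linux 3.9+ and the BSDs but not everywhere), each cooperating listener sets the options before bind() and then joins the group as usual:
#include <netinet/in.h>
#include <sys/socket.h>

// Hypothetical helper: create one of several sockets sharing `port`.
int make_shared_listener(unsigned short port) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
#ifdef SO_REUSEPORT
    setsockopt(sock, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on));
#endif
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    bind(sock, (sockaddr*)&addr, sizeof(addr));
    return sock;  // caller joins the multicast group next
}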

Converting from UDP datagram to UDS datagram

I have a couple of questions regarding Unix Domain Sockets. We currently have an application that has a receiver service receiving datagram packets from multiple client processes on the same machine using the loopback address. We want to convert it over to using Unix Domain Sockets. Here are my questions:
The receiver process may be down while senders are running; with UDP, those packets were simply dropped. Is the behavior of UDS the same, or do the senders receive an error (the senders may also have started before the receiver, in which case the UDS path may not be bound yet)?
The receiver may go down and be restarted. Since it has to unlink the path before binding, do the running senders' packets make it to the receiver, or do the senders need to reset?
If the receiver is down for an extended period, do the oldest packets get dropped, or will the queue fill up and block the senders?
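For what it's worth, a small sketch of the sender side (assuming Linux AF_UNIX datagram semantics; the path is a placeholder) shows the main behavioral difference you'd observe: where UDP drops silently, a UDS sendto() fails with ENOENT when nothing is bound at the path, ECONNREFUSED when the socket file exists but no process holds it, and a blocking send waits (rather than dropping) when the receiver's queue is full:
#include <sys/socket.h>
#include <sys/un.h>
#include <cstdio>
#include <cstring>

int main() {
    int sock = socket(AF_UNIX, SOCK_DGRAM, 0);

    sockaddr_un dest{};
    dest.sun_family = AF_UNIX;
    std::strcpy(dest.sun_path, "/tmp/receiver.sock");  // placeholder path

    const char msg[] = "hello";
    if (sendto(sock, msg, sizeof(msg), 0,
               (sockaddr*)&dest, sizeof(dest)) < 0) {
        // ENOENT / ECONNREFUSED: receiver not (yet) bound -- the sender
        // sees an error instead of the datagram being silently dropped.
        perror("sendto");
    }
    return 0;
}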

recv() fails on UDP

I’m writing a simple client-server app which for the time being will be for my own personal use. I’m using Winsock for the net communication. I have not done any networking for the last 10 years, so I am quite rusty. I’d like to use as little external code as possible, so I have written a home-made server discovery mechanism, as follows.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket. The server recv()s the broadcast and then sendto()s the ‘name’ of its listening socket to the client’s discovery socket. The client then uses this info to connect to the server (on a different socket). This mechanism should allow the server to bind its listening socket to the first port it can get within the dynamic port range (49152-65535), and the clients to discover where the server is and on which port it is listening.
The server part works fine: the server receives the broadcast messages and successfully sends its response.
On the client side the firewall log shows that the server’s response arrives at the machine and that it is addressed to the correct port (to the client’s discovery socket).
But the message never makes it to the client app. I’ve tried doing a recv() in blocking and non-blocking mode, and there is never any data available. ioctlsocket() always shows no data is available, even though I know the packet got to the machine.
The server succeeds in doing a recv() on the broadcast data, but the client fails to recv() the server’s response addressed to its discovery socket.
The question is admittedly vague: what gotchas should I watch for in this scenario? Why would recv() fail to get a packet which has actually arrived at the machine? The sockets are UDP, so the fact that they are not connected is irrelevant. Or is it?
Many thanks in advance.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket.
The message doesn't need to contain anything. Just broadcast an empty message from the 'discovery socket'. recvfrom() will tell the server where it came from, and it can just reply directly.
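A sketch of what that server side might look like (POSIX-style calls; Winsock is the same apart from WSAStartup and SOCKET types; the function and parameter names are made up):
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

// Hypothetical server-side handler: reply to whoever broadcast to us.
void answer_discovery(int sock, unsigned short listen_port) {
    char buf[64];
    sockaddr_in client{};
    socklen_t len = sizeof(client);

    // The broadcast body can be empty; the source address is what matters.
    ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, (sockaddr*)&client, &len);
    if (n < 0) return;

    // Reply straight to the client's discovery socket with our port.
    unsigned short port_net = htons(listen_port);
    sendto(sock, &port_net, sizeof(port_net), 0, (sockaddr*)&client, len);
}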
The server recv()s the broadcast and then sendto()s the ‘name’ of its listening socket to the client’s discovery socket.
Fair enough, although actually the server could just broadcast its own TCP listening port every 5 seconds or whatever.
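A sketch of that alternative (POSIX; the function and port names are placeholders):
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical announcer: broadcast our listening port periodically.
void announce_forever(unsigned short listen_port, unsigned short announce_port) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));  // allow broadcast

    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_addr.s_addr = htonl(INADDR_BROADCAST);   // 255.255.255.255
    dest.sin_port = htons(announce_port);

    unsigned short port_net = htons(listen_port);
    for (;;) {
        sendto(sock, &port_net, sizeof(port_net), 0, (sockaddr*)&dest, sizeof(dest));
        sleep(5);  // "every 5 seconds or whatever"
    }
}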
On the client side the firewall log shows that the server’s response arrives at the machine and that it is addressed to the correct port (to the client’s discovery socket). But the message never makes it to the client app.
If it got to the host it must get to the application. You must have got the ports mixed up somehow. Simplify it as above and retry.
Well, it was one of those stupid situations: Windows Firewall was active in addition to the other firewall, and it was silently dropping packets. Deactivating it solved the problem.
But I still don’t understand how it works, as it was allowing the server to receive packets sent via broadcast. And when I was at my wits’ end and set the server to answer back through a broadcast, THOSE packets got dropped.
Two days of frustration. I hope someone profits from my experience.

ICMP packets appearing with UDP packets

I am sending UDP packets from one PC to another, and I am observing the traffic in Wireshark on the PC where I am receiving the UDP packets. One interesting thing I see is ICMP packets suddenly appearing; then they disappear and appear again in a cyclic manner. What can be the reason for this? Am I doing something wrong? And what bad effects can it have on my UDP reception performance?
Please also find the attached Wireshark capture taken from the destination PC.
The ICMP packets are Destination Unreachable (Port Unreachable) messages, generated by the destination host whenever a datagram arrives on a UDP port that no socket has open, and sent back to the sender. The ICMP payload includes the first X bytes of the dropped packet so the sender can work out which session was affected.
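As a small illustration of how a sender can observe this (Linux/POSIX behavior; the address and port are placeholders): a connected UDP socket surfaces an incoming ICMP Port Unreachable as ECONNREFUSED on a later send() or recv():
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cerrno>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9);                         // placeholder: a closed port
    inet_pton(AF_INET, "192.0.2.1", &dest.sin_addr);  // placeholder address

    // connect() on UDP only fixes the peer; it also routes ICMP errors
    // for that peer back to this socket.
    connect(sock, (sockaddr*)&dest, sizeof(dest));
    send(sock, "ping", 4, 0);

    char buf[16];
    if (recv(sock, buf, sizeof(buf), 0) < 0 && errno == ECONNREFUSED)
        printf("ICMP Port Unreachable reported for the previous send\n");
    close(sock);
}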