What happens to the client's message if the server does not exist in UDP socket programming?

I ran only client.java. When I filled in the form and pressed the send button, the program froze and I could not do anything.
Is there any explanation for this?

TL;DR: the User Datagram Protocol (UDP) is "fire-and-forget".
Unreliable – When a UDP message is sent, it cannot be known if it will reach its destination; it could get lost along the way. There is no concept of acknowledgment, retransmission, or timeout.
So if a UDP message is sent and nobody listens then the packet is just dropped. (UDP packets can also be silently dropped due to other network issues/congestion.)
While there could be a prior error, such as failing to resolve the IP for the server (e.g. an invalid hostname) or attempting to use an invalid IP, once the UDP packet is out the door, it's out the door and is considered "successfully sent".
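To make that concrete, here is a minimal POSIX C sketch; the 192.0.2.1 address (a documentation address) and port 9999 are just placeholder values where nothing is listening:

/* Minimal sketch: a UDP send to a host/port where nothing is listening
 * still "succeeds" from the sender's point of view.
 * 192.0.2.1 and port 9999 are placeholder values. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9999);
    inet_pton(AF_INET, "192.0.2.1", &dest.sin_addr);

    const char *msg = "hello?";
    ssize_t sent = sendto(fd, msg, strlen(msg), 0,
                          (struct sockaddr *)&dest, sizeof(dest));

    /* Prints "sent 6 bytes" even though no server exists at that address:
     * the datagram is simply handed to the network and may be dropped. */
    printf("sent %zd bytes\n", sent);

    close(fd);
    return 0;
}

(Note this uses an unconnected socket; if you connect() a UDP socket first, a later send or receive on it may report ECONNREFUSED when an ICMP port-unreachable comes back, but the initial sendto() still reports success.)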
Now, if a program is waiting on a response that never comes (i.e. the server is down or the packet was otherwise lost), then that could be... problematic.
That is, this code which requires a UDP response message to continue would "hang":
sendUDPToServerThatNeverResponds();
// There is no guarantee the server will get the UDP message,
// much less that it will send a reply or the reply will get back
// to the client..
waitForUDPReplyFromServerThatWillNeverCome();
Since UDP has no reliability guarantee or retry mechanism, this must be handled in code. For example, in the above the code might wait for 1 second and retry sending the packet, and after 5 seconds with no response it would report an error to the user.
sendUDPToServerThatMayOrMayNotRespond();
reply = null;
i = 0;
while (i++ < 5) {
    reply = waitForUDPReplyForOneSecond();
    if (reply)
        break;
}
if (reply)
    doSomethingAwesome();
else
    showErrorToUser();
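A more concrete version of that retry pattern, sketched in C with POSIX sockets; the 127.0.0.1:5000 address, the "ping" payload, and the use of SO_RCVTIMEO for the one-second wait are all just assumptions for illustration:

/* Sketch of "send, wait up to 1 s for a reply, retry up to 5 times"
 * using SO_RCVTIMEO. Address, port, and payload are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };   /* 1-second receive timeout */
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port = htons(5000);                      /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);  /* placeholder host */

    char reply[512];
    ssize_t got = -1;

    for (int attempt = 0; attempt < 5 && got < 0; attempt++) {
        sendto(fd, "ping", 4, 0, (struct sockaddr *)&server, sizeof(server));
        got = recvfrom(fd, reply, sizeof(reply), 0, NULL, NULL);
        /* got < 0 here normally means the 1-second timeout expired */
    }

    if (got >= 0)
        printf("got a %zd-byte reply\n", got);
    else
        fprintf(stderr, "no reply after 5 attempts, giving up\n");

    close(fd);
    return 0;
}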
Of course, "just using TCP" can often make these sorts of tasks simpler due to the stream and reliability characteristics that the Transmission Control Protoocol (TCP) provides. For example, the pseudo-code above is not very robust as the client must also be prepared to handle latent/slow UDP packet arrival from previous requests.
(Also, given the current "screenshot", the code might be as flawed as while(true) {} - make sure to provide an SSCCE and relevant code with questions.)

Related

UDP hole punching logic puzzle

I am trying to solve a logic puzzle in my UDP hole punching implementation.
The puzzle is the following: can I guarantee that the two clients I am trying to connect will come to the same conclusion (hole punched / hole not punched) within a reasonable time (ideally no more than a few seconds after they were given each other's IP addresses)?
With UDP hole punching, both clients have to start sending "punch" packets at around the same time. Some of these initial packets will be lost because of the NAT/firewall; that is expected. Then, at some point, client #1's message gets through (let's take an optimistic scenario).
That does not mean that the message client #2 sent also gets through. The clients then have to reply with "ack" messages to confirm that the connection has been established. This is where a timing issue also comes into play: one client receives an ack before its timeout, while the other does not, and they come to different conclusions.
I also tried to make the logic more complex: keeping track of how many acks each client sent, giving each newly sent ack a unique number, and keeping track of how many distinct acks the given client received. Still, one client's "success" condition does not mean that the other client comes to the same conclusion within a limited time, given that UDP packets can get lost. And from a usability point of view, I cannot keep the user waiting forever; I have to present the result ideally within 2 seconds after the server connects the two clients.
To be more concrete, I can have a loop:
while (within_timeout)
{
    // check for new data
    // reply with acks if received
    if (acks_sent >= 3 && acks_received >= 3) break;
}
success = (acks_sent >= 3 && acks_received >= 3);
Client #1 in our case knows that it sent three or more acks and received at least three acks, so it leaves the loop. Client #2 must have sent at least three acks (because client #1 received that many), but it may not have received all the acks sent by client #1 before its timeout runs out.
The server can also designate each client as "master" or "slave", where the master has the final word. Still, even then the master has to tell the slave its decision and wait for an ack, which may not arrive within a reasonable timeout.
It may be that there is no 100% solution; is there a solution that approaches 100%?

What is the correct method to receive UDP data from several clients synchronously?

I have 1 server and several (maybe up to 20) clients. All clients send UDP datagrams at random times. Each datagram is quite short (about 10 B), but I must make sure all the data from each client is received correctly.
If I let all clients send their datagrams to the same port, and client B sends its datagram at the exact moment the server is receiving data from client A, it seems the server will miss the data from client A.
So what's the correct way to do this? Do I need to create a listener for each of the 20 clients?
When you bind a UDP socket to a port, the networking stack allocates a buffer that can hold a finite number of incoming UDP packets for you, so that (assuming you call recv() in a relatively timely manner) no incoming packets should get lost.
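As an illustration, here is a minimal POSIX C sketch of that single-socket approach; recvfrom() tells you which client each datagram came from (port 5000 and the 64-byte buffer are just placeholder choices):

/* One UDP socket bound to one port receives datagrams from any number
 * of clients; recvfrom() reports each sender's address. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);                /* placeholder port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    for (;;) {
        char buf[64];                           /* each datagram is ~10 bytes */
        struct sockaddr_in from;
        socklen_t fromlen = sizeof(from);

        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                             (struct sockaddr *)&from, &fromlen);
        if (n < 0)
            continue;

        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &from.sin_addr, ip, sizeof(ip));
        printf("%zd bytes from %s:%u\n", n, ip, (unsigned)ntohs(from.sin_port));
    }
}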
If you want to see your buffer sizes in a terminal, you can take a look at:
/proc/sys/net/core/rmem_default for receive
and
/proc/sys/net/core/wmem_default for send
I think the default buffer size on Linux is 131071 B.
On Linux, you can change the UDP buffer size (e.g. to 26214400) by (as root):
sysctl -w net.core.rmem_max=26214400
You can also make it permanent by adding this line to /etc/sysctl.conf:
net.core.rmem_max=26214400
Since each packet is only 10 B, this shouldn't be a problem.
If you are still worried about packet loss, you could implement a protocol where your client waits for an ACK from the server and resends otherwise. Many protocols use such a feature, but it is only possible if timing allows it. For example, in streaming data it is not useful because there is no time to resend.
Or consider using TCP (if it is an option).

Why does the TLS heartbeat extension allow user supplied data?

The heartbeat protocol requires the other end to reply with the same data that was sent to it, to know that the other end is alive. Wouldn't sending a certain fixed message be simpler? Is it to prevent some kind of attack?
At least the size of the packet seems to be relevant, because according to RFC 6520, section 5.1, the heartbeat message will be used with DTLS (i.e. TLS over UDP) for PMTU discovery, in which case it needs messages of different sizes. Apart from that, it might simply be modelled after ICMP ping, where you can also specify the payload content for no particular reason.
Just like with ICMP Ping, the idea is to ensure you can match up a "pong" heartbeat response you received with whichever "ping" heartbeat request you made. Some packets may get lost or arrive out of order and if you send the requests fast enough and all the response contents are the same, there's no way to tell which of your requests were answered.
One might think, "WHO CARES? I just got a response; therefore, the other side is alive and well, ready to do my bidding :D!" But what if the response was actually for a heartbeat request 10 minutes ago (an extreme case, maybe due to the server being overloaded)? If you just sent another heartbeat request a few seconds ago and the expected responses are the same for all (a "fixed message"), then you would have no way to tell the difference.
A timely response is important in determining the health of the connection. From RFC6520 page 3:
... after a number of retransmissions without
receiving a corresponding HeartbeatResponse message having the
expected payload, the DTLS connection SHOULD be terminated.
By allowing the requester to specify the return payload (and assuming the requester always generates a unique payload), the requester can match up a heartbeat response to a particular heartbeat request made, and therefore be able to calculate the round-trip time, expiring the connection if appropriate.
This of course only makes much sense if you are using TLS over a non-reliable protocol like UDP instead of TCP.
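To illustrate the matching idea outside of TLS, here is a rough C sketch over plain UDP; the 127.0.0.1:7 echo peer and the random nonce are assumptions for the example, not part of RFC 6520:

/* Not TLS itself -- just a sketch of the "echo my unique payload back"
 * idea. It assumes some peer at 127.0.0.1:7 echoes datagrams back.
 * The unique payload lets the sender match a reply to one specific
 * request and compute a round-trip time. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(7);                         /* placeholder echo port */
    inet_pton(AF_INET, "127.0.0.1", &peer.sin_addr);  /* placeholder address   */

    srand((unsigned)time(NULL));
    unsigned long nonce = (unsigned long)rand();      /* unique request payload */

    struct timespec sent_at, now;
    clock_gettime(CLOCK_MONOTONIC, &sent_at);
    sendto(fd, &nonce, sizeof(nonce), 0, (struct sockaddr *)&peer, sizeof(peer));

    unsigned long echoed = 0;
    ssize_t n = recvfrom(fd, &echoed, sizeof(echoed), 0, NULL, NULL);

    if (n == (ssize_t)sizeof(echoed) && echoed == nonce) {
        clock_gettime(CLOCK_MONOTONIC, &now);
        double rtt_ms = (now.tv_sec - sent_at.tv_sec) * 1000.0
                      + (now.tv_nsec - sent_at.tv_nsec) / 1e6;
        printf("reply matches this request, RTT %.2f ms\n", rtt_ms);
    } else {
        printf("reply is missing or belongs to some other request\n");
    }

    close(fd);
    return 0;
}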
So why allow the requester to specify the length of the payload? Couldn't it be inferred?
See this excellent answer: https://security.stackexchange.com/a/55608/44094
... seems to be part of an attempt at genericity and coherence. In the SSL/TLS standard, all messages follow regular encoding rules, using a specific presentation language. No part of the protocol "infers" length from the record length.
One gain of not inferring length from the outer structure is that it makes it much easier to include optional extensions afterwards. This was done with ClientHello messages, for instance.
In short, YES, it could have been, but for consistency with the existing format and for future-proofing, the size is spelled out so that other data can follow in the same message.

UDP transmit performance

I have an application that transmits some data in a loop.
The underlying protocol is UDP on WinSock. If I don't add a sleep(1 ms) after each transmit operation, most of the data is not sent (or Wireshark cannot capture it). Have you experienced such behaviour, where UDP does not handle repetitive sending in a loop?
First, check the return value when you send data, so you know whether each send actually succeeded or not.
Second, this can happen when the socket's internal send buffer cannot accommodate more data because previous data has not yet been transmitted. The simplest solution is to check whether your UDP socket is writable before each send. You can do that by calling select() or poll() on the UDP socket.
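For illustration, a POSIX-style C sketch of that check (WinSock's select() works much the same way); fd and dest are assumed to be your existing UDP socket and destination address:

/* Sketch: wait until the UDP socket is writable before each send,
 * and always check the return value of the send itself. */
#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>

static int send_when_writable(int fd, const void *data, size_t len,
                              const struct sockaddr *dest, socklen_t destlen) {
    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);

    struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };   /* wait up to 1 s */

    /* Block until the socket's send buffer has room (or the timeout expires). */
    if (select(fd + 1, NULL, &wfds, NULL, &tv) <= 0)
        return -1;                        /* not writable in time, or error */

    ssize_t n = sendto(fd, data, len, 0, dest, destlen);
    if (n < 0) {
        perror("sendto");                 /* first thing: check the result  */
        return -1;
    }
    return 0;
}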

Dealing with sendto failure for UDP socket

If sendto fails, according to the manpage:
"On success, these calls return the number of characters sent. On error, -1 is returned, and errno is set appropriately."
I know that with TCP that is definitely the case, and you should really attempt to send the remaining data, as pointed out in Beej's guide to network programming.
However, partially sending a UDP packet makes no sense to me, and this passage seems to imply as much:
If the message is too long to pass atomically through the underlying protocol, the error EMSGSIZE is returned, and the message
is not transmitted.
Can someone confirm for me that if I call sendto (or send) with a UDP packet that if it actually doesn't fit in the outbound buffer then I'll get -1 returned with errno set to EMSGSIZE and no partial send as with a stream (TCP) socket?
There is no hidden meaning; the function just returns the count of bytes sent, which is a standard pattern for Unix APIs. Datagram delivery is all-or-nothing. Receipt is more complicated if the network caused fragmentation to occur, but generally the stack hides all the details and presents each complete packet as it is reconstructed.
EMSGSIZE indicates that "the socket requires that the message be sent atomically, but the size of the message to be sent makes this impossible" (see man sendto).
However, the outbound buffer being full isn't necessarily the reason; note that Linux (for instance) apparently won't fragment UDP packets by default (see man udp).
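As a rough C sketch of how checking for that case can look (fd and dest are assumed to be an existing UDP socket and destination; the oversized buffer is only there to trigger the error):

/* An oversized datagram is rejected whole with EMSGSIZE; nothing is
 * partially sent. */
#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>

static void send_big_datagram(int fd, const struct sockaddr *dest,
                              socklen_t destlen) {
    static char payload[100 * 1024];      /* larger than any UDP datagram */

    ssize_t n = sendto(fd, payload, sizeof(payload), 0, dest, destlen);
    if (n < 0) {
        if (errno == EMSGSIZE)
            fprintf(stderr, "datagram too large, nothing was sent\n");
        else
            perror("sendto");
    } else {
        /* For UDP this is always the full datagram, never a partial send. */
        printf("sent %zd bytes\n", n);
    }
}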