RecvFrom missing a few UDP packets

OK, I am aware that UDP doesn't guarantee delivery, but I had hoped to catch them all by running RecvFrom in a thread with TimeCritical priority and just quickly moving the incoming messages into a buffer. However, when the rate gets up to about 1000 messages of 1500 bytes per second, a few are missed. I have verified with Wireshark that the messages actually are received by the computer.
I am pretty sure that the messages are lost in the extremely short interval between RecvFrom returning and being called again.
Is there any way to "catch all", since the messages apparently are received?
Thanks.

Is there any way to "catch all", since the messages apparently are received?
No. If you are not fast enough at reading messages from the socket's receive buffer and that buffer fills up, the messages simply get dropped. It does not matter that they were received by the computer and are visible in Wireshark; all that matters is whether they end up in the socket's receive buffer.
You might try to increase this buffer to make loss less likely, but it can still happen. Unreliable delivery is one of the trade-offs you accept with UDP, and there is no magic that will fix it. Either you can cope with packet loss, or you have to keep track of losses and somehow request that the missing messages be sent again.
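
The question does not say which language or socket API is being used, so the following is only a sketch in Java (port number, buffer size and queue capacity are invented) of the two things you can actually influence: asking the OS for a larger receive buffer, and keeping the receive loop as thin as possible by handing every datagram straight to another thread. The OS may clamp the requested buffer size, so check what you actually got.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FastUdpReceiver {
    public static void main(String[] args) throws Exception {
        BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(100_000);

        DatagramSocket socket = new DatagramSocket(5000);       // hypothetical port
        socket.setReceiveBufferSize(4 * 1024 * 1024);           // ask the OS for ~4 MB
        System.out.println("Effective SO_RCVBUF: " + socket.getReceiveBufferSize());

        Thread receiver = new Thread(() -> {
            byte[] buf = new byte[2048];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            while (!socket.isClosed()) {
                try {
                    packet.setLength(buf.length);               // reset before each receive
                    socket.receive(packet);
                    // Copy the datagram out immediately and hand it off; do all real work elsewhere.
                    byte[] copy = new byte[packet.getLength()];
                    System.arraycopy(packet.getData(), packet.getOffset(), copy, 0, packet.getLength());
                    queue.offer(copy);                          // drops if the application queue is full
                } catch (Exception e) {
                    break;
                }
            }
        });
        receiver.setPriority(Thread.MAX_PRIORITY);              // closest Java equivalent of "TimeCritical"
        receiver.start();

        // Worker: drain the queue and do the actual processing at its own pace.
        while (true) {
            byte[] msg = queue.take();
            // ... process msg ...
        }
    }
}

Note that on Linux the kernel caps SO_RCVBUF at net.core.rmem_max, so that sysctl may need raising before the larger buffer takes effect.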


Clear WebRTC Data Channel queue

I have been trying to use a WebRTC Data Channel for a game; however, I am unable to consistently send live player data without hitting the queue size limit (8 KB) after 50-70 seconds of playing.
Since the data is required to be real-time, I have no use for data that arrives out of order. I have initialized the data channel with the following attributes:
negotiated: true,
id: id,
ordered: true,
maxRetransmits: 0,
maxPacketLifetime: 66
The MDN Docs said that the buffer cannot be altered in any way.
Is there any way I can consistently send data without exceeding the buffer space? I don't mind purging the buffer, as it only contains data that has clogged up over time.
NOTE: The data is transmitting until the buffer size exceeds the 8KB space.
EDIT: I forgot to add that this issue only occurs when the two sides are on different networks. When both are on the same LAN, there is no buffering (higher bandwidth, I presume). I tried adding multiple data channels (8 in parallel), but this only increased the time before the failure occurred again; all 8 buffers were full. I also tried creating a new channel each time the buffer was close to full and switching to the new data channel while closing the previous, full one, but I found out the hard way (by reading the note in the MDN docs) that the buffer space is not released immediately; instead the channel keeps trying to transmit all the data in its buffer, taking away precious bandwidth.
Thanks in advance.
The maxRetransmits value is ignored if the maxPacketLifetime value is set; thus, you've configured your channel to resend packets for up to 66ms. For your application, it is probably better to use a pure unreliable channel by setting maxPacketLifetime to 0.
As Sean said, there is no way to flush the queue. What you can do is to drop packets before sending them if the channel is congested:
if(dc.bufferedAmount > 0)
return;
dc.send(data);
Finally, you should realise that buffering may happen in the network as well as at the sender: any router can buffer packets when it is congested, and many routers have very large buffers (this is known as bufferbloat). The WebRTC stack should prevent you from buffering too much data in the network, but if its behaviour is not aggressive enough for your needs, you will need to add explicit feedback from the receiver to the sender in order to avoid having too many packets in flight.
I don't believe you can flush the outbound buffer; you will probably need to watch bufferedAmount and adjust what you are sending if it grows.
Maybe handle the retransmissions yourself and discard old data if needed? WebRTC doesn't surface the SACKs from SCTP, so I think you will need to implement something yourself.
It's an interesting problem. I would love to hear the W3C WebRTC Working Group's take on whether exposing more information would make things easier for you.

What happens to client message if Server does not exist in UDP Socket programming?

I ran only client.java; when I filled in the form and pressed the send button, it jammed and I could not do anything.
Is there any explanation for this?
[screenshot omitted]
TL;DR: the User Datagram Protocol (UDP) is "fire-and-forget".
Unreliable – When a UDP message is sent, it cannot be known if it will reach its destination; it could get lost along the way. There is no concept of acknowledgment, retransmission, or timeout.
So if a UDP message is sent and nobody listens then the packet is just dropped. (UDP packets can also be silently dropped due to other network issues/congestion.)
While there could be a prior error, such as failing to resolve the server's IP (e.g. an invalid hostname) or attempting to use an invalid IP, once the UDP packet is out the door, it's out the door and is considered "successfully sent".
Now, if a program is waiting on a response that never comes (i.e. the server is down or the packet was otherwise lost), that could be... problematic.
That is, this code which requires a UDP response message to continue would "hang":
sendUDPToServerThatNeverResponds();
// There is no guarantee the server will get the UDP message,
// much less that it will send a reply or the reply will get back
// to the client..
waitForUDPReplyFromServerThatWillNeverCome();
Since UDP has no reliability guarantee or retry mechanism, this must be handled in code. For example, in the above maybe the code would wait for 1 second and retry sending a packet, and after 5 seconds of no responses it would report an error to the client.
sendUDPToServerThatMayOrMayNotRespond();
while (i++ < 5) {
reply = waitForUDPReplyForOneSecond();
if (reply)
break;
}
if (reply)
doSomethingAwesome();
else
showErrorToUser();
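For concreteness, here is roughly what that pseudo-code might look like in Java with DatagramSocket (the hostname, port, payload and timeouts are invented); setSoTimeout makes receive throw SocketTimeoutException instead of blocking forever, so the client can resend and eventually give up:

import java.net.*;

public class UdpClientWithRetry {
    public static void main(String[] args) throws Exception {
        byte[] request = "hello".getBytes();
        InetAddress server = InetAddress.getByName("example.com");   // hypothetical server
        int port = 9876;                                              // hypothetical port

        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(1000);   // wait at most 1 second per receive attempt

            byte[] buf = new byte[1024];
            DatagramPacket reply = new DatagramPacket(buf, buf.length);
            boolean gotReply = false;

            for (int attempt = 0; attempt < 5 && !gotReply; attempt++) {
                // send() returns as soon as the packet is handed to the OS,
                // even if no server is listening on the other end.
                socket.send(new DatagramPacket(request, request.length, server, port));
                try {
                    socket.receive(reply);      // throws SocketTimeoutException after 1 s
                    gotReply = true;
                } catch (SocketTimeoutException e) {
                    // no reply within 1 second: loop and resend
                }
            }

            if (gotReply) {
                System.out.println("Reply: " + new String(reply.getData(), 0, reply.getLength()));
            } else {
                System.out.println("No response from server after 5 attempts");
            }
        }
    }
}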
Of course, "just using TCP" can often make these sorts of tasks simpler due to the stream and reliability characteristics that the Transmission Control Protocol (TCP) provides. For example, the pseudo-code above is not very robust, as the client must also be prepared to handle late/slow UDP packets arriving from previous requests.
(Also, given the current "screenshot", the code might be as flawed as while(true) {} - make sure to provide an SSCCE and relevant code with questions.)

rabbitMQ unable to get heartbeat working with php-amqplib

I have observed RabbitMQ "stuck" with unacked messages. The queue shows a consumer which no longer exists, and I assume what's happening is that RabbitMQ is continuing to deliver messages to that consumer. They show as an ever-increasing count of unacked messages. I'm doing this in PHP with php-amqplib.
I can produce the problem by killing the consumer process (control-C on command line).
I tried specifying a heartbeat of 3 seconds and tried keep-alive both true and false. With heartbeat, the consumer will eventually fail:
Exception fwrite(): send of 573 bytes failed with errno=32 Broken pipe
PhpAmqpLib\Wire\IO\StreamIO->error_handler(8, 'fwrite(): send ...',
php-amqplib/PhpAmqpLib/Wire/IO/StreamIO.php(281): fwrite(Resource id #176, '\x01\x00\x01\x00\x00\x00\x15\x00<\x00(\x00\x00\fb...', 8192)
Issue #374 might relate: https://github.com/php-amqplib/php-amqplib/issues/374
The consumer is consuming from multiple queues, but I believe that shouldn't matter.
The problem I'm trying to solve is that RabbitMQ continues to think that a consumer exists when it doesn't, with the result that RabbitMQ delivers those messages nowhere, and they go unacknowledged. I'm looking for a way to get rid of that spurious connection so that those messages can be re-delivered to a live consumer. I think that's what heartbeat is for, but I haven't gotten it to work.
The first and most important thing to do in this case is to try just "printing" the message content and immediately acknowledging it, without running your real processing code. If you can "consume" the messages that way, the problem isn't in RabbitMQ but in your process: you are probably spending too much time before acknowledging the message, so RabbitMQ closes your connection.
I'm not saying this is necessarily your case; I'm just trying to help debug the problem.
In my case I changed my approach: each message carried many product ids, and the ACK took a long time because processing them hit the database. I made the messages smaller, and it worked well after that.
You could also change the approach, for example by creating other queues to hold these messages. I don't know your exact setup, but 90% of the time this is the problem.
You can read more about Detecting Dead TCP Connections with Heartbeats here
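The question uses php-amqplib, so purely as an illustration of the idea, here is how a requested heartbeat is configured with the official RabbitMQ Java client (host and interval are invented). Once a heartbeat is negotiated, the broker closes connections whose heartbeats stop arriving, and the unacked messages on those connections become eligible for redelivery to a live consumer.

import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class HeartbeatExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");          // hypothetical broker address
        // Ask for a heartbeat every 15 seconds; the effective value is negotiated with the broker.
        factory.setRequestedHeartbeat(15);

        Connection connection = factory.newConnection();
        // ... create channels, declare queues, consume as usual ...
        connection.close();
    }
}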

Why does RabbitMQ show activity on Message rates but not on Queued messages?

I have this issue: I want to verify that my RabbitMQ setup is working correctly.
I am not sending the message myself, so I'm not 100% sure it is being sent correctly. But the problem is this.
After everything is configured, I look at the RabbitMQ web manager, and when I supposedly send a message I see activity on the "Message rates" chart but nothing on "Queued messages".
I frankly don't know what's going on. Is it consumed so fast that the messages never need to be queued? Or is something misconfigured?
Any idea of the difference?
Thanks.
If RabbitMQ receives a non-routable message, it drops it. So while the message was received, it was never queued.
You may configure Alternate Exchanges to catch such messages.
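As a sketch with the RabbitMQ Java client (exchange and queue names are invented): the main exchange is declared with the alternate-exchange argument pointing at a fanout exchange whose queue collects every message the main exchange cannot route.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class AlternateExchangeSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                         // hypothetical broker
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {

            // Catch-all exchange and queue for messages the main exchange cannot route.
            ch.exchangeDeclare("unrouted.ae", "fanout", true);
            ch.queueDeclare("unrouted", true, false, false, null);
            ch.queueBind("unrouted", "unrouted.ae", "");

            // Main exchange delegates unroutable messages to the alternate exchange.
            Map<String, Object> args = new HashMap<>();
            args.put("alternate-exchange", "unrouted.ae");
            ch.exchangeDeclare("app.main", "direct", true, false, args);
        }
    }
}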
In my case,
Situation1:
when the exchange in rabbitTemplate.convertAndSend was not set properly -- the message was not sent to the correct queue -- "Queued messages" stayed empty the whole time.
However, "Message rates" was not zero; it showed that messages were being sent.
This corresponds to what the other answer says:
If RabbitMQ receives a non-routable message, it drops it.
Situation2:
when the exchange in rabbitTemplate.convertAndSend was set properly -- the message was sent to the correct queue -- "Queued messages" showed the messages queuing up.
Everything seemed fine.
Situation3:
(continue from Situation2)
And now I turn on the receiver service, which has the @RabbitListener.
"Queued messages" immediately drops to 0 and never goes up again.
But messages are still being transported fine.
Situation4:
(continue from Situation2)
And now I change the receiver service to use rabbitTemplate.receiveAndConvert,
manually receiving a message from the queue every 2 s in a loop
(messages are also sent from the sender service every 2 s in a loop, the same as in the situations above).
Now "Queued messages" stays constant -- a straight line
(at however many messages were queued before the receiver service came up; in my case 1, so it stays at 1).
Conclusion:
I suspect that when messages are consumed quickly enough, "Queued messages" will simply show 0.
This corresponds to what the OP is asking:
Is it consumed so fast that the messages never need to be queued?
(Or I could have messed up some setting in RabbitMQ and reached the wrong conclusion. I don't think so, but I'm not that familiar with RabbitMQ.)
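To make Situations 1 and 2 concrete, here is a minimal Spring AMQP sketch (broker address, exchange and routing keys are invented); the only difference is whether the exchange/routing-key pair matches an existing binding.

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class SendExample {
    public static void main(String[] args) {
        RabbitTemplate rabbitTemplate =
                new RabbitTemplate(new CachingConnectionFactory("localhost"));

        // Situation 1: the exchange exists but no queue is bound with this routing key,
        // so RabbitMQ drops the message: "Message rates" shows activity while
        // "Queued messages" stays at 0.
        rabbitTemplate.convertAndSend("app.exchange", "key.with.no.binding", "hello");

        // Situation 2: this routing key matches a bound queue, so the message is queued
        // until a consumer (e.g. an @RabbitListener) picks it up.
        rabbitTemplate.convertAndSend("app.exchange", "app.routing.key", "hello");
    }
}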

Explanation of NetworkStream.Read behaviour needed

I am consuming real-time data from a network stream using a blocking read as follows:
Do
Dim bytesRead As Integer = NetworkStream.Read(Bytes, 0, ReceiveBufferSize)
'Do stuff with the first bytesRead bytes of data here
Loop
Watching packets come in on the wire in Wireshark, I see that sometimes when a new packet comes in, .NET sees it immediately and unblocks, letting me process it. Other times, multiple packets will come in on the wire before NetworkStream.Read unblocks and returns the whole lot in one go - I've seen up to 8 packets buffered before the read unblocks.
Is this expected behaviour? Is there a way to grab and process each packet immediately as it is received across the wire? Will an Async receive model make any difference here? Or am I just fundamentally misunderstanding the way that TCP streams work?