Akka.net EchoServer Example deadletter when closing connection - akka.net

I am getting the following deadletter when closing the connection with the Akka.Net TcpEchoService Example (http://getakka.net/docs/IO):
[INFO][2015-09-04 09:13:51 AM][Thread 0013][akka://system/system/IO-TCP/selectors/$b/1] Message ChannelAcceptable from NoSender to akka://system/system/IO-TCP/selectors/$b/1 was not delivered. 1 dead letters encountered.
Is this by design? Or is there something I can do to stop the deadletter from happening?

Akka.NET uses messages and actors to manage its own behavior. For this reason, when you stop a connection or shut down a system, some of the system's internal actors may die before they receive system messages that are still in flight, as in your case. This isn't necessarily a bad thing; it's simply in the nature of asynchronous communication between actors.

Related

RabbitMQ: how to handle unwanted duplicate un-ack message after connection lost?

In my app (multiple instances), we occasionally see the case where the connection between my app and RabbitMQ is lost due to network issues (both my app and RabbitMQ are still alive); then, after the connection is recovered (re-established), we receive messages that were unacked.
This creates an issue for us, because my app wasn't dead and is still processing the same message it received before, but now the message is redelivered, which causes the app to process the message again (and that can be fatal for us).
Since the app has multiple instances, it is not easy for an instance to check whether another instance is processing the same message at the same time. We can't simply filter out redelivered messages, because we need redelivery to handle instance/app crashes and re-deployments.
There doesn't seem to be an API to tell RabbitMQ not to redeliver unacked messages.
So what is the recommended practice for handling this situation?
Thanks,
The general solution for such a scenario is to make the consumers handle the messages in an idempotent manner. Generally what I do on the producer side (in case there is no unique identifier in the message body) is add an idempotencyId attribute, a GUID, to the message body; on the consumer side this id is validated for each message against the value stored in the database, and any duplicates are rejected.
This approach also works for messages that might be shoveled from another cluster, or when multiple consumer instances are listening within the same cluster; it still guarantees one-time processing.
I would suggest going over the RabbitMQ Reliability Guide here.
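A minimal Python (pika) sketch of the idea, purely as an illustration: it assumes the producer puts an idempotencyId header on each message, and the in-memory set below stands in for the shared database of already-processed ids.

import uuid

import pika

# Stand-in for the shared database of processed idempotency ids
# (a real multi-instance deployment needs shared storage, e.g. a DB table).
processed_ids = set()

def handle_business_logic(body):
    pass  # placeholder for the real processing

def on_message(ch, method, properties, body):
    msg_id = (properties.headers or {}).get('idempotencyId')
    if msg_id in processed_ids:
        # Duplicate delivery: acknowledge and drop without reprocessing.
        ch.basic_ack(delivery_tag=method.delivery_tag)
        return
    handle_business_logic(body)
    processed_ids.add(msg_id)   # record only after successful processing
    ch.basic_ack(delivery_tag=method.delivery_tag)

# Producer side: attach a GUID so consumers can deduplicate.
def publish(channel, payload):
    channel.basic_publish(
        exchange='',
        routing_key='work',     # placeholder queue name
        body=payload,
        properties=pika.BasicProperties(headers={'idempotencyId': str(uuid.uuid4())}))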
Yeah, exactly-once delivery is not something RabbitMQ is good at. In fact, I'd say you should probably not be using it for these kinds of problems. Honestly, the only way to truly fix this is to use distributed transactions or locking.
Anyway, you could turn the problem on its head by ack'ing the message as soon as the consumer gets it, before it starts working on it. That would avoid the RabbitMQ-related duplication issue at least. This is at-most-once delivery.
Of course, it means that if the consumer crashes, the message is lost forever. So you need to persist the message right before you ack it so you can recover it later, and the consumer should remove the persisted copy once processing is complete.
Considering that crashes are rare, you can then have a single dedicated process that just works on those persisted messages. Or for that matter, handle them manually.
Just be aware that you are pushing the duplication problem in front of you, because the consumer might fail to remove the persisted message after it's done working with it anyway, but at least you have the option to implement it however you want.
Storage in this case could be anything from files, a RDBMS or something like ZooKeeper or Redis to lock/unlock in-flight messages.
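A rough sketch of that at-most-once approach in Python with pika; the file-based persist/remove helpers are just one assumption, any durable store would do:

import os
import uuid

import pika

STORE_DIR = 'inflight'   # placeholder directory standing in for durable storage
os.makedirs(STORE_DIR, exist_ok=True)

def process(body):
    pass  # placeholder for the actual work

def on_message(ch, method, properties, body):
    # Persist first, then ack immediately: RabbitMQ will not redeliver this message.
    path = os.path.join(STORE_DIR, str(uuid.uuid4()))
    with open(path, 'wb') as f:
        f.write(body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

    process(body)
    # Done: remove the persisted copy. If we crash before this point, the file
    # is left behind for a dedicated recovery process (or a human) to pick up.
    os.remove(path)

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.basic_consume(queue='work', on_message_callback=on_message)   # placeholder queue
channel.start_consuming()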

Resiliently processing messages from RabbitMQ

I'm not sure how to resiliently handle RabbitMQ messages in the event of an intermittent outage.
I subscribe in a Windows service, read the message, then store it in my database. If I can't process the record because of the data, I publish it to a dead letter queue for a human to address and reprocess.
I am not sure what to do if I have some intermittent technical issue that will fix itself (database reboot, network outage, drive space, etc.). I don't want hundreds of messages showing up in the dead letter queue that just needed to wait out a glitch but would now be waiting on a human.
Currently, I re-queue the event and retry it once, but it retries so fast the issue is not usually resolved. I thought of retrying forever but I don't want a real issue to get stuck in an infinite loop.
This is a broad topic, but from the server side you can persist your messages and make your queues durable. This means that in the event the server gets restarted they won't be lost; check more here: How to persist messages during RabbitMQ broker restart?
For the consumer (client) it will depend on how you configure your client, from the docs:
In the event of network failure (or a node crashing), messages can be duplicated, and consumers must be prepared to handle them. If possible, the simplest way to handle this is to ensure that your consumers handle messages in an idempotent way rather than explicitly deal with deduplication.
If a message is delivered to a consumer and then requeued (because it was not acknowledged before the consumer connection dropped, for example) then RabbitMQ will set the redelivered flag on it when it is delivered again (whether to the same consumer or a different one). This is a hint that a consumer may have seen this message before (although that's not guaranteed, the message may have made it out of the broker but not into a consumer before the connection dropped). Conversely if the redelivered flag is not set then it is guaranteed that the message has not been seen before. Therefore if a consumer finds it more expensive to deduplicate messages or process them in an idempotent manner, it can do this only for messages with the redelivered flag set.
Check more here: https://www.rabbitmq.com/reliability.html#consumer
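To make the server-side advice concrete, here is a small pika sketch (queue and helper names are placeholders) that declares a durable queue, publishes persistent messages, and only bothers with deduplication when the redelivered flag is set, as the quoted docs suggest:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# A durable queue plus persistent messages survive a broker restart.
channel.queue_declare(queue='task_queue', durable=True)
channel.basic_publish(
    exchange='',
    routing_key='task_queue',
    body=b'payload',
    properties=pika.BasicProperties(delivery_mode=2))   # 2 = persistent

def do_work(body):
    pass  # placeholder for the real processing

def on_message(ch, method, properties, body):
    if method.redelivered:
        # Only this path can be a duplicate; deduplicate or process idempotently here.
        pass
    do_work(body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='task_queue', on_message_callback=on_message)
channel.start_consuming()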

rabbitMQ unable to get heartbeat working with php-amqplib

I have observed RabbitMQ "stuck" with unacked messages. The queue shows a consumer which no longer exists, and I assume what's happening is that RabbitMQ is continuing to deliver messages to that consumer. They show as an ever-increasing count of unacked messages. I'm doing this in PHP with php-amqplib.
I can produce the problem by killing the consumer process (control-C on command line).
I tried specifying a heartbeat of 3 seconds and tried keep-alive both true and false. With heartbeat, the consumer will eventually fail:
Exception fwrite(): send of 573 bytes failed with errno=32 Broken pipe
PhpAmqpLib\Wire\IO\StreamIO->error_handler(8, 'fwrite(): send ...',
php-amqplib/PhpAmqpLib/Wire/IO/StreamIO.php(281): fwrite(Resource id #176, '\x01\x00\x01\x00\x00\x00\x15\x00<\x00(\x00\x00\fb...', 8192)
Issue #374 might relate: https://github.com/php-amqplib/php-amqplib/issues/374
The consumer is consuming from multiple queues, but I believe that shouldn't matter.
The problem I'm trying to solve is that RabbitMQ continues to think that a consumer exists when it doesn't, with the result that RabbitMQ delivers those messages nowhere, and they go unacknowledged. I'm looking for a way to get rid of that spurious connection so that those messages can be re-delivered to a live consumer. I think that's what heartbeat is for, but I haven't gotten it to work.
The first and most important thing to do in this case is to try just "printing" your message content and returning true (acknowledging) to the consumer, without running your real processing code. If you can "consume" the messages that way, the problem isn't in Rabbit but in our processing: we probably spend too much time before acknowledging the message, so Rabbit closes our connection.
I'm not saying that this is your case; I'm just trying to help debug the problem.
In my case I changed my approach to the problem: each message carried many product ids (in my case), and the ACK took a long time because processing reached the database. I made the messages smaller, and it worked well after that.
You could also change the approach by creating additional queues to hold these smaller messages; in my experience this is the cause of 90% of these problems.
You can read more about Detecting Dead TCP Connections with Heartbeats here.
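The question is about php-amqplib, but for comparison, in recent pika versions enabling heartbeats is just a connection parameter (the 30-second value is an arbitrary example):

import pika

# With a heartbeat configured, broker and client exchange heartbeat frames, so the
# broker drops connections it considers dead and requeues their unacked messages.
params = pika.ConnectionParameters(host='localhost', heartbeat=30)
connection = pika.BlockingConnection(params)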

"Unknown delivery tag" from RabbitMQ when ack'ing a message in a cluster with replicated queues

We've been using Rabbit successfully for about a year. We recently upgraded to v2.6.1, because we want to use clusters with replicated message queues.
My testing has hit a puzzling behavior that smells like a Rabbit bug to me. The test that uncovers this works with a two-node cluster. Both nodes are running v2.6.1. Both are disc nodes. Both are running on Mac OS, though I doubt this is pertinent.
I'm also running Alice on the node that runs the test. The test uses it to programmatically do a stop_app on one of the nodes, because the test is trying to validate that if the cluster master fails and a slave is elevated to take its place, we don't lose messages.
So, the test has a small thread pool, which is given tasks that periodically 1) publish messages, and 2) toggle the state of the Rabbit master node (stopped if running; started if stopped). Other threads are consuming messages from queues.
I'm using publisher confirms, and I'm also acknowledging the messages in the consumers (using autoAck=false for channel.basicConsume()).
When the master node is stopped, I see both the producers and consumers catching ShutdownSignalException. They handle this by attempting to reconnect to the cluster. This works fine. When reconnected, they continue with their business.
Sometimes, what I see is that a consumer has successfully fetched a message from the broker, and is calling channel.basicAck() when it gets that ShutdownSignalException.
Later, when the consumer has reconnected, it again pulls down the same message. (The message bodies are tagged with a UUID, so I know it is the same one.) This time, when the consumer attempts to basicAck() the message, it again gets ShutdownSignalException, but this one has the following text in it: "reply-text=PRECONDITION_FAILED - unknown delivery tag 7".
In fact, that is the same delivery tag that was offered to the consumer by the broker before the master went down and the consumer reconnected.
Googling suggests that this event means that the consumer is attempting to ack the same message more than once.
But, how can this be so? If the first ack succeeded, then the message should have been removed from the broker's queues, and the consumer shouldn't see the same message again.
Yet, if the first ack did not succeed, then the consumer shouldn't be dinged for attempting to re-ack the message.
Anyone seen this before? It smells like a bug in Rabbit's replicated queues to me, but I'm still new to Rabbit, and so am willing to believe there's a subtlety here in consuming from a clustered broker that I haven't yet grokked!
Thanks, --Steve
I'm not sure if my case matches yours, but I have seen a similar "unknown delivery tag" on attempts to ack after a reconnect, after which the same message arrived again. Initially it looked like a bug to me, but in fact this is expected behavior. A consumer with QOS > 1 may have some messages in its local buffer, and their delivery tags will all be invalid after a reconnect. On the other hand, attempting to ack even the current message after a reconnect doesn't make any sense, because that message was already nacked automatically when the connection was lost, which is why I got it again.
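In other words, delivery tags are only meaningful on the channel that issued them, so after a reconnect the safe pattern is to forget any pending tags and simply ack the fresh redelivery on the new channel. A rough pika sketch of that pattern (placeholder names, deliberately simplified reconnect loop):

import pika

def do_work(body):
    pass  # placeholder; should be idempotent, since redeliveries are expected

def consume_forever():
    while True:
        try:
            connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
            channel = connection.channel()

            def on_message(ch, method, properties, body):
                do_work(body)
                # Ack on the same channel the delivery came from; never reuse a
                # delivery tag from a previous (now dead) channel.
                ch.basic_ack(delivery_tag=method.delivery_tag)

            channel.basic_consume(queue='work', on_message_callback=on_message)
            channel.start_consuming()
        except pika.exceptions.AMQPConnectionError:
            # Connection lost: the broker requeues this channel's unacked messages
            # and will redeliver them once we reconnect, so just start over.
            continue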

How can I recover unacknowledged AMQP messages from other channels than my connection's own?

It seems the longer I keep my rabbitmq server running, the more trouble I have with unacknowledged messages. I would love to requeue them. In fact there seems to be an amqp command to do this, but it only applies to the channel that your connection is using. I built a little pika script to at least try it out, but I am either missing something or it cannot be done this way (how about with rabbitmqctl?)
import pika

credentials = pika.PlainCredentials('***', '***')
parameters = pika.ConnectionParameters(host='localhost', port=5672,
                                       credentials=credentials, virtual_host='***')

def handle_delivery(body):
    """Called when we receive a message from RabbitMQ"""
    print body

def on_connected(connection):
    """Called when we are fully connected to RabbitMQ"""
    connection.channel(on_channel_open)

def on_channel_open(new_channel):
    """Called when our channel has opened"""
    global channel
    channel = new_channel
    channel.basic_recover(callback=handle_delivery, requeue=True)

try:
    connection = pika.SelectConnection(parameters=parameters,
                                       on_open_callback=on_connected)
    # Loop so we can communicate with RabbitMQ
    connection.ioloop.start()
except KeyboardInterrupt:
    # Gracefully close the connection
    connection.close()
    # Loop until we're fully closed, will stop on its own
    connection.ioloop.start()
Unacknowledged messages are those which have been delivered across the network to a consumer and have not yet been acked or rejected, while that consumer hasn't yet closed the channel or connection over which it originally received them. Therefore the broker can't figure out whether the consumer is just taking a long time to process those messages or has forgotten about them. So it leaves them in an unacknowledged state until either the consumer dies or the messages get acked or rejected.
Since those messages could still be validly processed in the future by the still-alive consumer that originally consumed them, you can't (to my knowledge) insert another consumer into the mix and try to make external decisions about them. You need to fix your consumers to make decisions about each message as they get processed rather than leaving old messages unacknowledged.
If messages are unacked there are only two ways to get them back into the queue:
basic.nack
This command will cause the message to be placed back into the queue and redelivered.
Disconnect from the broker
This action will force all unacked messages from this channel to be put back into the queue.
NOTE: basic.recover will try to republish unacked messages on the same channel (to the same consumer), which is sometimes the desired behaviour.
RabbitMQ spec for basic.recover and basic.nack
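For completeness, a minimal pika illustration of the first option, rejecting a delivery so the broker puts it back into the queue (helper name is a placeholder):

import pika

def process(body):
    pass  # placeholder for the real work

def on_message(ch, method, properties, body):
    try:
        process(body)
        ch.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        # Put the message back into the queue for redelivery instead of
        # leaving it unacked on this channel.
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)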
The real question is: Why are the messages unacknowledged?
Possible scenarios to cause unacked messages:
Consumer fetching too many messages, then not processing and acking them quickly enough.
Solution: prefetch as few messages as appropriate (see the sketch after this list).
Buggy client library (I currently have this issue with pika 0.9.13): if the queue has a lot of messages, a certain number of messages will get stuck unacked, even hours later.
Solution: I have to restart the consumer several times until all unacked messages are gone from the queue.
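Limiting the prefetch is a one-liner in pika, for example:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
# Deliver at most one unacked message to this consumer at a time;
# raise the count only as far as the consumer can actually keep up.
channel.basic_qos(prefetch_count=1)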
All the unacknowledged messages will go to ready state once all the workers/consumers are stopped.
Ensure all workers are stopped by confirming with a grep on ps aux output, and stopping/killing them if found.
If you are managing workers with supervisor and it shows a worker as stopped, you may still want to check for zombies: supervisor can report the worker as stopped while zombie processes still turn up when you grep the ps aux output. Killing the zombie processes will bring the messages back to the ready state.