Is there a way to catch consumer cancelled events in aio-pika? - rabbitmq

The title says it all. I see that I can catch the channel close event but I get no error at all if I remotely delete the queue being consumed from. I'm using aio-pika 6.6.0 and Python 3.7.7.
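For context, here is a minimal sketch of the setup described above plus one possible workaround: since the cancelled consumer itself never reports an error, a second channel periodically does a passive declare of the queue and treats a failure as "the queue was deleted remotely". The connection URL, queue name and poll interval are made up for illustration, and the probe is just a detection hack, not an aio-pika consumer-cancel callback.

import asyncio
import aio_pika

async def main():
    connection = await aio_pika.connect_robust("amqp://guest:guest@localhost/")
    channel = await connection.channel()
    queue = await channel.declare_queue("work", durable=True)

    async def handler(message: aio_pika.IncomingMessage):
        async with message.process():  # ack on success, reject on exception
            print("got", message.body)

    await queue.consume(handler)

    # Workaround: poll the queue with a passive declare on a separate channel.
    # A passive declare only checks that the queue exists; once the queue has
    # been deleted remotely the declare fails, which is also the point at which
    # the broker cancels the consumer.
    probe = await connection.channel()
    while True:
        await asyncio.sleep(5)
        try:
            await probe.declare_queue("work", passive=True)
        except Exception:
            # the exact exception class depends on the aio-pika/aiormq version
            print("queue deleted - the broker has cancelled the consumer")
            break

asyncio.run(main())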

Related

Creating new channels after RabbitMQ connection was blocked

I have a very basic demo application for testing the RabbitMQ blocking behaviour. I use RabbitMQ 3.10.6 with the .NET library RabbitMQ.Client 6.2.4 in .NET Framework 4.8.
The disk is filled until the configured threshold in the RabbitMQ config file is exceeded. The connection state is "blocking".
I queue a message this way: AMQP properties are created with channel.CreateBasicProperties() and Persistent = true is set on them, then the message is published and confirmed:
var amqpProperties = sendChannel.CreateBasicProperties();
amqpProperties.Persistent = true;
sendChannel.BasicPublish("", "sendQueueName", amqpProperties, someBytes);
sendChannel.WaitForConfirmsOrDie(TimeSpan.FromSeconds(5));
WaitForConfirmsOrDie() closes the underlying channel when the broker is blocking or blocked, so once that happens I need to create a new channel if I want to queue messages again.
The connection state is "blocked".
First example: I catch the TimeoutException that is thrown, remove the resource alarm by providing enough disk space and create a new channel in the catch block. This works.
Second example: I catch the TimeoutException that is thrown but do nothing in the catch block. I remove the resource alarm by providing enough disk space and wait for the ConnectionUnblocked event to fire; in that handler I create a new channel. But this doesn't work: I get a TimeoutException.
Why can't I create any more channels outside the catch block once the connection was blocked?
The connection is created using ConnectionFactory.CreateConnection() and uses AutomaticRecoveryEnabled = true (although this doesn't seem to make any difference).
A channel is created using Connection.CreateModel().

Catch a disconnect event from ActiveMQ

Using the 1.6 version of NMS (1.6.3 activemq)
I'm setting up a listener to wait for messages.
The listener has a thread of its own (not mine) and my code goes out of scope (until the listener's function is called).
If the ActiveMQ server disconnects, I get a global exception which I can only catch globally.
(The thread that created the listener will not catch it; I have nothing to wrap in a try/catch.)
Is there a way to set a callback function, something like OnError += ErrorHandlingFunction, when I use the listener, so I can deal with this issue locally rather than with a global exception catcher?
Or is there a better way to deal with this issue? (I can't use Transport Failure, as I don't have any other options but to wait a while and disconnect, maybe log something or send a message that the server is offline.)
There is no mechanism in the client for hooking into the async message listener to find out whether the connection dropped during the processing of a message. You should really examine why you think you need such a thing there.
NMS API methods you use in the async callback will throw an exception when not connected, so if you did something like trying to ACK a message in the async message event handler, it would throw an exception if the connection was down.

NServiceBus SetHeader In Catch of Handler

We are using a try/catch in our message handler, which we realize is against the recommended best practice of not handling exceptions. That said, I have been asked to identify the last retry and send a message to another queue in a suppressed transaction. The sending of the message is working. However, I am calling message.SetHeader (I also tried Bus.CurrentMessageContext.Headers[EsbService.FirstLevelRetriesHeader] = currentFirstLevelRetryAttempt.ToString();) to implement my own tracking of the retry attempts. Basically I am looking to write an incrementing number to the header and check when it reaches a specific value to trigger the send of the message to another queue. It seems to write to it, but when the message is processed again, the header is never present. I am using transactions, so is it possible that the changes are getting rolled back when I throw the exception? I tried writing to the header in a suppressed transaction as well and that did not work.
Is there any way to update a header while still letting the exception bubble up to NSB?

RabbitMQ - what to do if client is unable to handle the message

I am wondering what the best practice is when a consumer is not able to handle the message it receives. What would be the mechanism to notify RabbitMQ so that it will either put the message back onto the queue or move it to an error queue?
I'm using the .NET client from RabbitMQ.
Either discard it or put it on an error queue. If there is a problem with the message such that the consumer cannot handle it, do not put it back on the queue, as the consumer will just try to read it again.
It is an exception, so handle it as such. In the exception handling you should raise an error message stating what happened and what you have done with the message. Best practice is to put it on an error queue where it can be handled manually.
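The question is about the .NET client, but the pattern itself is client-agnostic. Below is a rough sketch of the error-queue approach using Python's aio-pika (matching the question at the top of this page); the names "dlx", "errors" and "work" and the connection URL are made up, and rejecting with requeue=False relies on RabbitMQ's dead-letter-exchange feature to move the message to the error queue.

import asyncio
import aio_pika

async def main():
    connection = await aio_pika.connect_robust("amqp://guest:guest@localhost/")
    channel = await connection.channel()

    # error queue fed by a dead-letter exchange
    dlx = await channel.declare_exchange("dlx", aio_pika.ExchangeType.FANOUT)
    errors = await channel.declare_queue("errors", durable=True)
    await errors.bind(dlx)

    # work queue: rejected messages are dead-lettered to "dlx" instead of
    # being redelivered to the consumer
    work = await channel.declare_queue(
        "work",
        durable=True,
        arguments={"x-dead-letter-exchange": "dlx"},
    )

    async def handler(message: aio_pika.IncomingMessage):
        try:
            payload = message.body.decode("utf-8")  # stand-in for real processing
            print("handled", payload)
            await message.ack()
        except Exception:
            # do not requeue: the broker moves the message to the error queue
            await message.reject(requeue=False)

    await work.consume(handler)
    await asyncio.Future()  # keep consuming

asyncio.run(main())

In the .NET client the equivalent would be BasicAck on success and BasicNack or BasicReject with requeue: false on failure, with the same x-dead-letter-exchange argument on the queue declaration.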

Uncatchable errors in node.js

So I'm trying to write a simple TCP socket server that broadcasts information to all connected clients. When a user connects, they get added to the list of clients, and when the stream emits the close event, they get removed from the client list.
This works well, except that sometimes I'm sending a message just as a user disconnects.
I've tried wrapping stream.write() in a try/catch block, but no luck. It seems like the error is uncatchable.
The solution is to add a listener for the stream's 'error' event. This might seem counter-intuitive at first, but the justification for it is sound.
stream.write() sends data asynchronously. By the time node has realized that writing to the socket has raised an error, your code has moved on past the call to stream.write, so there's no way for it to raise the error there.
Instead, what node does in this situation is emit an 'error' event from the stream, and EventEmitter is coded such that if there are no listeners for an 'error' event, the error is raised as a top-level exception and the process ends.
Peter is quite right, and there is also another way: you can make a catch-all error handler with
process.on('uncaughtException', function (error) {
  // process the error
});
This will catch everything that is thrown...
It's usually better to do it Peter's way if possible; however, if you were writing, say, a test framework, it may be a good idea to use process.on('uncaughtException', ...).
Here is a gist which covers (I think) all the different ways of handling errors in Node.js:
http://gist.github.com/636290
I had the same problem with the time server example from here.
My clients get killed and the time server then tries to write to a closed socket.
Setting an error handler does not work, as the error event only fires on reception, and the time server does no receiving (see the stream event documentation).
My solution is to set a handler on the stream's close event:
stream.on('close', function() {
  subscribers.remove(stream);
  stream.end();
  console.log('Subscriber CLOSE: ' + subscribers.length + " total.\n");
});