I have a very basic demo application for testing the RabbitMQ blocking behaviour. I use RabbitMQ 3.10.6 with the .NET library RabbitMQ.Client 6.2.4 in .NET Framework 4.8.
The disk is filled until the configured threshold in the RabbitMQ config file is exceeded. The connection state is "blocking".
I queue a message this way:
AMQP properties are added to the message using channel.CreateBasicProperties() with Persistent = true. It is then queued:
sendChannel.BasicPublish("", "sendQueueName", amqpProperties, someBytes);
sendChannel.WaitForConfirmsOrDie(TimeSpan.FromSeconds(5));
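For reference, the property creation mentioned above looks roughly like this (a minimal sketch using the same sendChannel):

IBasicProperties amqpProperties = sendChannel.CreateBasicProperties();
amqpProperties.Persistent = true; // delivery mode 2, so the broker writes the message to disk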
WaitForConfirmsOrDie() closes the underlying channel when it times out because the broker is blocking or blocked. Since that is what happens here, the channel is closed and I need to create a new one if I want to publish messages again.
The connection state is "blocked".
First example: I catch the TimeoutException that is thrown, remove the resource alarm by providing enough disk space and create a new channel in the catch block. This works.
Second example: I catch the TimeoutException that is thrown but do nothing in the catch block. I remove the resource alarm by providing enough disk space and wait for the ConnectionUnblocked event to be fired. In its handler I create a new channel. But this does not work: I get a TimeoutException.
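Simplified, the second example looks roughly like this (a sketch; the wiring around it is shortened):

try
{
    sendChannel.BasicPublish("", "sendQueueName", amqpProperties, someBytes);
    sendChannel.WaitForConfirmsOrDie(TimeSpan.FromSeconds(5));
}
catch (TimeoutException)
{
    // intentionally empty in this example
}

// Subscribed once, right after the connection was created:
Connection.ConnectionUnblocked += (sender, args) =>
{
    sendChannel = Connection.CreateModel(); // this is where the TimeoutException occurs
    sendChannel.ConfirmSelect();
};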
Why can't I create any more channels outside the catch block once the connection was blocked?
The connection is created using ConnectionFactory.CreateConnection() and uses AutomaticRecoveryEnabled = true (although this doesn't seem to make any difference).
A channel is created using Connection.CreateModel().
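Put together, the setup is roughly this (a minimal sketch; host name and credentials omitted):

var factory = new ConnectionFactory { AutomaticRecoveryEnabled = true };
IConnection connection = factory.CreateConnection();
IModel channel = connection.CreateModel();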
Related question:
I have a very basic demo application for testing the RabbitMQ blocking behaviour. I use RabbitMQ 3.10.6 with the .NET library RabbitMQ.Client 6.2.4 in .NET Framework 4.8.
The connection is created using ConnectionFactory.CreateConnection() and uses AutomaticRecoveryEnabled = true.
The application creates one channel and one queue for sending messages:
IModel sendChannel = Connection.CreateModel();
sendChannel.ConfirmSelect();
sendChannel.QueueDeclare("sendQueueName", true, false, false);
For receiving messages, again one channel and one queue are created:
IModel receiveChannel = Connection.CreateModel();
receiveChannel.ConfirmSelect();
receiveChannel.QueueDeclare("receiveQueueName", true, false, false);
var receiveQueueConsumer = new QueueConsumer(receiveChannel); // This is my own class which inherits from 'DefaultBasicConsumer' and passes 'receiveChannel' to its base in the constructor.
receiveChannel.BasicConsume("receiveQueueName", false, receiveQueueConsumer);
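For context, QueueConsumer is essentially this (a simplified sketch; the body of the delivery handler is illustrative, not my actual code):

public class QueueConsumer : DefaultBasicConsumer
{
    public QueueConsumer(IModel channel) : base(channel) { }

    public override void HandleBasicDeliver(string consumerTag, ulong deliveryTag, bool redelivered,
        string exchange, string routingKey, IBasicProperties properties, ReadOnlyMemory<byte> body)
    {
        // Process the message, then acknowledge it on the channel that was passed to the base class.
        Model.BasicAck(deliveryTag, false);
    }
}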
Now I fill my disk until the configured threshold in the RabbitMQ config file is reached.
As expected, the ConnectionBlocked event is fired. The connection now is in state "blocking".
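For reference, the event subscription looks roughly like this (a minimal sketch; the logging is illustrative):

Connection.ConnectionBlocked += (sender, args) =>
    Console.WriteLine("Connection blocked: " + args.Reason);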
Now I queue a message. AMQP properties are added to the message using channel.CreateBasicProperties() with Persistent = true. It is then queued:
sendChannel.BasicPublish("", "sendQueueName", amqpProperties, someBytes);
sendChannel.WaitForConfirms(TimeSpan.FromSeconds(5)); // Returns as expected after 5 seconds with return value 'true'.
The connection now is in state "blocked".
Now I shut down my demo application and find that disposing does not work as expected.
sendChannel.Close(); // Blocks for 10 seconds.
if (receiveChannel.IsOpen) receiveChannel.BasicCancel(ConsumerTags.First()); // In 'receiveQueueConsumer'. Throws a 'TimeoutException'.
Connection.Close(); // Freezes for at least a minute.
The behaviour is the same when calling Dispose() or Abort() instead of Close(). When I finally force kill the application (or when I set a timeout for Abort()) then the application closes but the underlying connection and channels are not removed. The connection still is in state "blocked".
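The timed variant mentioned above is roughly this (a sketch; the timeout value is arbitrary):

sendChannel.Abort();
Connection.Abort(TimeSpan.FromSeconds(5)); // with a timeout, the application at least closes after 5 seconds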
At least, once there is enough space on the disk again, the blocked connections and their channels are automatically removed by the broker, without the need to restart it.
Here and here it sounds like the broker just won't react when it is "blocked".
There can be a broad number of reasons for a timeout, from a genuine connection interruption to a resource alarm in effect that prevents target node from reading any data coming from clients unless the alarm clears.
Nodes will temporarily block publishing connections by suspending reading from client connection.
This would mean I can't free my resources unless I restart the broker or make sure it has enough resources again so that the resource alarm clears. Is there official confirmation of this? Or how do I need to adjust the dispose mechanism to make it work while the broker is blocked?
I have written a program which requires interaction with multiple queues: the consumer of one queue writes messages to another queue, and the same program has a consumer that takes action on that second queue.
Problem: How do I handle network timeout issues with the queue while sending messages asynchronously using the Spring Rabbit AMQP library? Alternatively, the RabbitTemplate.send() function should throw an exception if there are network issues.
Currently, I call RabbitTemplate.send(), which returns immediately and works fine. But if the network is down, the send function still returns immediately, does not throw any exception, and the client code assumes success. As a result, I end up with an inconsistent state in the DB saying the message was successfully processed. Please note that the call to the send function is wrapped inside a transactional block, and the goal is that if writing to the queue fails, the DB commit must also roll back. I am exploring the following solutions, so far without success:
Can we configure rabbitTemplate to throw a runtime exception on any network connectivity issue so that the client call is notified? Please suggest how to do this.
Shall we use the synchronous sendAndReceive() call, even though it leads to a delay in processing? Another problem observed with this function: my consumer code gets notified while sendAndReceive() is still blocked writing the message to the queue. Please advise whether the notification to the queue can be delayed until sendAndReceive() has returned. A call to sendAndReceive() did throw an AMQP exception when the network was down, which we were able to capture, but it comes with a performance cost.
My application is multi-threaded; if multiple threads send messages using sendAndReceive(), how does the spring-amqp library manage the queue communication? Does it internally create a channel per request? If messages are delivered via the same channel, it would hurt performance a lot for a multi-threaded application.
Can someone share sample code for using the sendAndReceive function with best practices?
Do we have any function in the spring-amqp library to check the health of the RabbitMQ server before making the send call? I explored rabbitTemplate.isRunning() but did not get a proper result. If any specific configuration is required, please suggest it.
Is there any other solution to consider for guaranteed message delivery, or for handling network timeouts so that runtime exceptions are thrown to the client?
As per Gary's comment below, I have set rabbitTemplate.setChannelTransacted(true); and it makes the call synchronous. The next part of the problem is that if I have a transaction block on the outer function, the call to RabbitTemplate.send() returns immediately. I expect the transaction block of the outer function to wait for the inner function to return; otherwise I don't get the expected result, because my DB changes are persisted even though setChannelTransacted is set to true. I tried various transaction propagation levels but without success. Please advise if I am doing anything wrong, and review the transaction propagation settings below.
@Transactional
public void notifyQueueAndDB(DBRequest dbRequest) {
    logger.info("Updating Request in DB");
    dbService.updateRequest(dbRequest);
    //Below is the call to the RabbitMQ library
    mqService.sendMessage(dbRequest); //If sendMessage fails because of a network outage, I want the DB commit also to be rolled back.
}
MQService is defined in another library of the project; snippet below.
@Transactional(propagation = Propagation.NESTED)
private void sendMessage(......) {
    try {
        ....
        rabbitTemplate.send(this.queueExchange, queueName, amqpMessage);
    } catch (Exception exception) {
        throw exception;
    }
}
Enable transactions so that the send is synchronous.
or
Use Publisher confirms and wait for the confirmation to be received.
Either one will be quite a bit slower.
Using the 1.6 version of NMS (1.6.3 activemq)
I'm setting up a listener to wait for messages.
The listener has a thread of its own (not mine), and my code goes out of scope (until the listener's callback is invoked).
If the ActiveMQ server disconnects, I get a global exception which I can only catch globally
(the thread that created the listener will not catch it; I have nothing to wrap with "try" and "catch").
Is there a way to set a callback function, something like OnError += ErrorHandlingFunction, where I use the listener, so I can deal with this issue locally and not via a global exception catcher?
Is there a better way to deal with this issue? (I can't use the failover transport, as I have no option other than to wait a while and disconnect, maybe log something or send a message that the server is offline.)
There is no mechanism in the client for hooking in the async message listener to find out if the connection dropped during the processing of a message. You should really examine why you think you need such a thing there.
NMS API methods you use in the async callback will throw an exception when not connected, so if you did something like trying to ACK a message in the async message event handler, it would throw an exception if the connection was down.
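For example, something like this inside the async message handler surfaces the problem locally (a minimal sketch; consumer is an IMessageConsumer, client acknowledgement mode is assumed, and the logging is only illustrative):

consumer.Listener += message =>
{
    try
    {
        // Any NMS call made here will fail if the connection has dropped.
        message.Acknowledge();
    }
    catch (NMSException ex)
    {
        Console.WriteLine("Broker unreachable while handling the message: " + ex.Message);
    }
};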
I have an application which acts as a consumer for a queue in ActiveMQ. This application is written in C++ and uses activemq-cpp to access the ActiveMQ services.
What I want to achieve is this: when my application goes down and comes back up again, it should first delete all the messages that accumulated in the queue while the application was down, i.e. it should first delete all the old messages in the queue and only then start receiving new messages.
Is there any way to achieve this using activemq-cpp?
If you cast your Connection instance to an ActiveMQConnection, there is a destroyDestination method that will remove the destination from the broker along with all its messages, provided there are no active subscriptions when it is called; otherwise it will throw an exception, so be prepared for that. A small code snippet follows.
ActiveMQConnection* connection = dynamic_cast<ActiveMQConnection*>(cmsConnection);

try {
    connection->destroyDestination(destination);
} catch (Exception& ex) {
    // Thrown e.g. when the destination still has active subscriptions; handle or log as needed.
}
The below text is an effort to expand and add color to this question:
How do I prevent a misbehaving client from taking down the entire service?
I have essentially this scenario: a WCF service is up and running with a client callback using straightforward, simple one-way communication, not very different from this one:
public interface IMyClientContract
{
[OperationContract(IsOneWay = true)]
void SomethingChanged(simpleObject myObj);
}
I'm calling this method potentially thousands of times a second from the service to what will eventually be about 50 concurrently connected clients, with as low latency as possible (<15 ms would be nice). This works fine until I set a breakpoint on one of the client apps connected to the server; after maybe 2-5 seconds the service hangs and none of the other clients receive any data for about 30 seconds or so, until the service registers a connection fault event and disconnects the offending client. After this all the other clients continue on their merry way receiving messages.
I've done research on serviceThrottling, concurrency tweaking, setting threadpool minimum threads, WCF secret sauces and the whole 9 yards, but at the end of the day this article MSDN - WCF essentials, One-Way Calls, Callbacks and Events describes exactly the issue I'm having without really making a recommendation.
The third solution that allows the service to safely call back to the client is to have the callback contract operations configured as one-way operations. Doing so enables the service to call back even when concurrency is set to single-threaded, because there will not be any reply message to contend for the lock.
but earlier in the article it describes the issue I'm seeing, only from a client perspective
When one-way calls reach the service, they may not be dispatched all at once and may be queued up on the service side to be dispatched one at a time, all according to the service configured concurrency mode behavior and session mode. How many messages (whether one-way or request-reply) the service is willing to queue up is a product of the configured channel and the reliability mode. If the number of queued messages has exceeded the queue's capacity, then the client will block, even when issuing a one-way call
I can only assume that the reverse is true: the number of messages queued to the client has exceeded the queue's capacity, and the thread pool is now filled with threads attempting to call this client, all of which are blocked.
What is the right way to handle this? Should I research a way to check how many messages are queued at the service communication layer per client and abort their connections after a certain limit is reached?
It almost seems that if the WCF service itself is blocking on a queue filling up then all the async / oneway / fire-and-forget strategies I could ever implement inside the service will still get blocked whenever one client's queue gets full.
I don't know much about the client callbacks, but it sounds similar to generic WCF blocking issues. I often solve these problems by spawning a BackgroundWorker and performing the client call in that thread. Meanwhile, the main thread counts how long the child thread is taking; if the child has not finished in a few milliseconds, the main thread just moves on and abandons the thread (it eventually dies by itself, so there is no memory leak). This is basically what Mr.Graves suggests with the phrase "fire-and-forget".
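A rough sketch of that idea using a Task instead of a BackgroundWorker (assumes .NET 4.5+; callback, myObj and the 50 ms limit are placeholders):

var work = Task.Run(() => callback.SomethingChanged(myObj));
try
{
    if (!work.Wait(TimeSpan.FromMilliseconds(50)))
    {
        // Client is probably blocked: abandon the call and move on.
        // Observe a later fault so it does not surface as an unobserved task exception.
        work.ContinueWith(t => { var ignored = t.Exception; }, TaskContinuationOptions.OnlyOnFaulted);
    }
}
catch (AggregateException ex)
{
    // The callback threw within the wait window.
    Console.WriteLine("Callback failed: " + ex.GetBaseException().Message);
}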
Update:
I implemented a fire-and-forget setup to call the client's callback channel, and the server no longer blocks once the buffer to a client fills up.
MyEvent is an event with a delegate that matches one of the methods defined in the WCF client contract; when clients connect, I'm essentially adding their callback to the event:
MyEvent += OperationContext.Current.GetCallbackChannel<IFancyClientContract>().SomethingChanged
etc... and then to send this data to all clients, I'm doing the following
//serialize using protobuf
using (var ms = new MemoryStream())
{
    ProtoBuf.Serializer.Serialize(ms, new SpecialDataTransferObject(inputData));
    byte[] data = ms.ToArray(); // ToArray() copies only the bytes written; GetBuffer() can include unused trailing bytes
    Parallel.ForEach(MyEvent.GetInvocationList(), p => ThreadUtil.FireAndForget(p, data));
}
in the ThreadUtil class I made essentially the following change to the code defined in the fire-and-forget article
static void InvokeWrappedDelegate(Delegate d, object[] args)
{
try
{
d.DynamicInvoke(args);
}
catch (Exception ex)
{
//THIS will eventually throw once the client's WCF callback channel has filled up and timed out, and it will throw once for every single time you ever tried sending them a payload, so do some smarter logging here!!
Console.WriteLine("Error calling client, attempting to disconnect.");
try
{
MyService.SingletonServiceController.TerminateClientChannelByHashcode(d.Target.GetHashCode());//this is an IContextChannel object, kept in a dictionary of active connections, cross referenced by hashcode just for this exact occasion
}
catch (Exception ex2)
{
Console.WriteLine("Attempt to disconnect client failed: " + ex2.ToString());
}
}
}
I don't have any good ideas for how to kill all the pending packets the server is still waiting to deliver. Once I get the first exception I should, in theory, be able to go and terminate all the other requests queued up somewhere, but this setup is functional and meets the objectives.