JMSXGroupID/correlation-id to group messages on a STOMP client doesn't seem to work - ActiveMQ

I was trying to queue messages to the same consumer using stomp-js on a Node.js server.
Producer:
producer.send({
    'JMSXGroupID': JMSXGroupID,
    'destination': confMgr.getConfig("jmsqueue.destination"),
    'body': JSON.stringify(msg),
    'persistent': 'true'
}, false);
Consumer:
client.on('message', function(message) {
    client.ack(message.headers['message-id']);
});
I was sending two messages with the same JMSXGroupID, and it seems that the client processes both messages in parallel rather than processing message1, ack'ing it, and only then moving on to process and ack message2. I tried using 'correlation-id' and it doesn't seem to work either. Can anyone suggest a better way?
Thank you in advance,
Chandra.

I guess you are using this stomp-js lib (correct me if I'm wrong): https://github.com/benjaminws/stomp-js
Message groups are supported by ActiveMQ over STOMP, so you are most likely getting the messages in order. Processing them in order, however, requires you to process each message synchronously on the client, which is rather simple when you can control how many threads the listener runs in. This might not be as easy with JavaScript.
From what I can see, the lib you are using is not the best documented. The only setting you could tweak that might help (I have not tried it!) is to lower the prefetch size to one:
var headers = {
    destination: '/queue/test_stomp',
    ack: 'client',
    'activemq.prefetchSize': '1'
};
It may be that this lib still eagerly fetches the next message before the current one is acknowledged, but it's worth testing.
On the other hand, you might want to redesign the application to be sequence-independent, since you are running Node.js and JavaScript. Sequence independence is generally preferable with messaging anyway: it lets you optimize performance far better and avoid synchronous behaviour.
I don't know what you were trying to achieve with correlation-id, but that header is used to correlate a request with a reply, which is not the case here.
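For reference, a minimal sketch of how the same grouping looks with the plain ActiveMQ JMS client in Java, where controlling the consuming thread is straightforward; the broker URL and queue name are placeholders:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class GroupedProducer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("test_stomp"));

        for (int i = 1; i <= 2; i++) {
            TextMessage message = session.createTextMessage("message " + i);
            // All messages sharing a JMSXGroupID are pinned to one consumer,
            // so they are processed in order as long as that consumer is single-threaded.
            message.setStringProperty("JMSXGroupID", "GROUP-A");
            producer.send(message);
        }
        connection.close();
    }
}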

Related

ActiveMQ: How do I limit the number of messages being dispatched?

Let's say I have one ActiveMQ broker and an undefined number of consumers.
Problem:
- To process a message, consumers need an external service, which is either "DATA1" or "DATA2" (specified in the message)
- Each server, "DATA1" and "DATA2", can only handle 20 connections
- So at most 20 "DATA1" and 20 "DATA2" messages may be dispatched at any time
- Because of prioritization, the messages must be enqueued in the same queue
- Even if message A has a higher priority than message B, if A can't be processed because its external service has no free slots, message B needs to be processed instead
How can this be solved? As long as I was using message pulling (prefetch of 0), I was able to do this with a BrokerPlugin that, on messagePull, used semaphores and selectors; if the limits were reached, the pull returned null.
However, due to performance issues I had to set prefetch to 1 and use push instead, so my messagePull hack no longer works (it's never called).
So far I'm considering implementing a custom Cursor but I was wondering if someone knows a better solution.
Update: the custom cursor worked but broke features like message removal. I tried a custom Queue and QueueDispatchSelector (which is a pain to configure, since there isn't a proper API for it), and it mostly works, but I still have synchronisation issues.
Also, DispatchPolicy looks like a very suitable API; however, while it is referenced by Queue, it's never used.
Queues give you buffering for system processing time for free, and messages are delivered on demand. A prefetch of 0 or 1 should effectively get you there: a message will only be delivered to a consumer when the consumer is ready (i.e., during the consumer.receive() call).
consumer.receive() is a blocking call, so you should not need any custom plugin or other machinery to delay delivery until the consumer process (and its required downstream services) is ready to handle it.
This behavior should work out of the box; if it doesn't, there are details of your use case not provided here that would shed more light on the scenario.
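A minimal sketch of that on-demand pattern with the ActiveMQ JMS client, assuming a broker on localhost and a placeholder queue name; prefetch is set to 0 via a connection URL option so the broker only dispatches when receive() asks:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class OnDemandConsumer {
    public static void main(String[] args) throws JMSException {
        // queuePrefetch=0: the broker dispatches a message only when the
        // consumer actively asks for one inside receive().
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=0");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("WORK.QUEUE"));

        while (true) {
            // Blocks until the broker dispatches. Acquire your DATA1/DATA2
            // connection before calling receive(), so you only pull work
            // you can actually handle.
            Message message = consumer.receive();
            // ... process via the external service ...
            message.acknowledge();
        }
    }
}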

Does EasyNetQ support nack?

What I'm really trying to do is leave the message on the queue when it is rejected by the current consumer. In RabbitMQ I could send a NACK to accomplish this. Is NACK supported in EasyNetQ? Is there another way to achieve the behavior I'm looking for?
Update: not a lot of responses, so I'm wondering how people generally handle the lack of NACK in EasyNetQ. Not having an equivalent of basic.reject limits consumers to "I can always process every message" scenarios. I suppose consumers could throw a specific "rejected" exception to make EasyNetQ move the message to the error queue, and I could then requeue messages with those errors. Does anyone have other workarounds in place?
I used EasyNetQ for almost a year, but no matter how we tweaked it (among other things, we added our own implementation of IConsumerErrorStrategy), I never really got it to work the way I wanted. The fact that it is single-threaded gave us some unexpected behaviour (sometimes deadlocks) when performing RequestAsync inside a SubscribeAsync handler.
The solution for us was to move away from EasyNetQ. After working with the official RabbitMQ client for a while, I spent a few days writing a super-thin client on top of it. It is influenced by EasyNetQ and supports most of the concepts EasyNetQ has, but I also added some neat features like pluggable message contexts. I think the Nack feature of IAdvancedMessageContext that I just added could be something for you:
var client = service.GetService<IBusClient<AdvancedMessageContext>>();
client.RespondAsync<BasicRequest, BasicResponse>((req, ctx) =>
{
    ctx?.Nack(); // the context implements IAdvancedMessageContext.
    return Task.FromResult<BasicResponse>(null);
}, cfg => cfg.WithNoAck(false));
If you're interested, you can read more about it on the GitHub page (especially NackTests.cs).
I think you can change the behavior by implementing your own IConsumerErrorStrategy:
https://github.com/EasyNetQ/EasyNetQ/blob/master/Source/EasyNetQ/Consumer/DefaultConsumerErrorStrategy.cs
But if you need that kind of control you might consider just using the RabbitMQ client directly?
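For comparison, the raw reject-and-requeue that EasyNetQ hides looks roughly like this with the bare RabbitMQ Java client (the .NET client's BasicNack is the direct analogue); the queue name and the canProcess check are placeholders:

import com.rabbitmq.client.*;
import java.io.IOException;

public class NackConsumer {
    public static void main(String[] args) throws Exception {
        Connection connection = new ConnectionFactory().newConnection();
        final Channel channel = connection.createChannel();
        channel.queueDeclare("work", true, false, false, null);

        channel.basicConsume("work", false, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties,
                                       byte[] body) throws IOException {
                if (canProcess(body)) {
                    channel.basicAck(envelope.getDeliveryTag(), false);
                } else {
                    // requeue=true leaves the message on the queue for another consumer
                    channel.basicNack(envelope.getDeliveryTag(), false, true);
                }
            }
        });
    }

    static boolean canProcess(byte[] body) { return true; } // placeholder check
}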
It sounds like you are trying to handle failures. You can NACK a message, but that leaves it sitting at the head of the queue, which means you could end up with a pile of messages that genuinely cannot be processed, blocking the real messages behind them.
The solution I have always used with RabbitMQ is to rely on EasyNetQ's default error handling and have a separate application resend messages. That is, when a consumer throws an exception, EasyNetQ routes the message to a queue called "EasyNetQ_Default_Error_Queue". You can override this name and route different messages to different error queues, but for now let's stick with the default. You can then have a Windows Service/Azure Worker Role reading these messages and working out what to do. That may include keeping a "RetryCount" on your message envelope/wrapper to make sure it only loops around so many times. All in all, it's going to be a bit of work.
What you are finding is what many people run into when using RabbitMQ/EasyNetQ. She's pretty raw.
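A very rough sketch of that resend worker, shown with the RabbitMQ Java client for consistency with the other examples in this thread. Caveat: EasyNetQ actually wraps failed messages in its own JSON error envelope, which a real worker would need to unwrap before republishing; the "x-retry-count" header and the target queue name are assumptions standing in for the RetryCount idea above:

import com.rabbitmq.client.*;
import java.util.HashMap;
import java.util.Map;

public class ErrorQueueRequeuer {
    public static void main(String[] args) throws Exception {
        Connection connection = new ConnectionFactory().newConnection();
        Channel channel = connection.createChannel();

        GetResponse response = channel.basicGet("EasyNetQ_Default_Error_Queue", false);
        if (response == null) return; // nothing to retry right now

        Map<String, Object> headers = new HashMap<>();
        if (response.getProps().getHeaders() != null) {
            headers.putAll(response.getProps().getHeaders());
        }
        // "x-retry-count" is a made-up header playing the role of the
        // RetryCount envelope field described above.
        int retries = (Integer) headers.getOrDefault("x-retry-count", 0);
        headers.put("x-retry-count", retries + 1);
        AMQP.BasicProperties props = response.getProps().builder().headers(headers).build();

        if (retries < 3) {
            // NOTE: a real EasyNetQ worker must unwrap the JSON error envelope first.
            channel.basicPublish("", "work", props, response.getBody());
        }
        channel.basicAck(response.getEnvelope().getDeliveryTag(), false);
    }
}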

Auto spawn rabbit mq listener

I'm working on an application wherein I have a listener on a RabbitMQ queue. Depending on the kind of message, the listener performs a task. My problem is that I need a way to spawn a new listener if a single listener can't keep up with the queue. As far as I can tell, I can use the RabbitMQ JSON API to find the length of the queue and take action based on that. So I'd write a script that checks the queue length using curl and spawns a new listener process. Am I on the right path here? Is there a better way to achieve this? I'm looking for a solution that scales with load, at least up to a certain limit.
Checking the RabbitMQ management API to see the length of the queue is one way, and it would definitely work.
You should try to predict when load is going to spike so that you can increase the number of consumers gradually when needed, rather than seeing a sudden burst of instances spawning. Many instances spawning simultaneously could put unnecessary load on your system.
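A minimal sketch of the polling side, using the management plugin's HTTP API; the default guest credentials, the queue name "work", and the threshold are all assumptions, and a real JSON parser should replace the regex in anything serious:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class QueueDepthCheck {
    public static void main(String[] args) throws Exception {
        // GET /api/queues/<vhost>/<queue>; %2F is the default vhost "/"
        String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:15672/api/queues/%2F/work"))
                .header("Authorization", "Basic " + auth)
                .build();
        String json = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString()).body();

        // Crude extraction of the "messages" field from the JSON payload.
        int depth = Integer.parseInt(json.replaceAll(".*\"messages\":(\\d+).*", "$1"));
        if (depth > 100) {
            // e.g. new ProcessBuilder("node", "listener.js").start();
            System.out.println("Queue depth " + depth + ": spawn another listener");
        }
    }
}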

Persisting Data in a Twisted App

I'm trying to understand how to persist data in a Twisted application. Let's say I've decided to write a Twisted server that:
Accepts inbound SMTP requests
Sends the message to a 3rd party system for modification
Relays the modified message to its destination
A typical Twisted tutorial would have you build this app using Deferreds and callbacks, roughly:
A Factory handles inbound requests
Each time a full email is received a call is sent to the remote message processor, returning a deferred
Add an errback that substitutes the original message if anything goes wrong in the modify call.
Add a callback to send the message on to the recipient, which again returns a deferred.
A real server would add further callbacks/errbacks to retry or notify the sender or whatnot. For simplicity, assume we consider this an acceptable amount of effort and just log errors.
Of course, this persists NO data in the event of a crash/restart/anything else. I get that a solution involves a 3rd-party persistent datastore (RabbitMQ is often mentioned), and I could probably come up with a dozen ways to achieve the outcome.
However, I imagine there are a few approaches that work best in a Twisted app. What do they look like? How do they store (and restore in the event of a crash) the in-process messages?
If you found this question, you probably already know that Twisted is event-based. It sounds simple, but the "hardest" part of the answer is to get the persistence platform generating the events we need when we need them. Naturally, you can persist the data in a DB or a message queue, but some platforms don't naturally generate events. For example:
ZeroMQ has (or at least had) no callback for new data. It's also relatively poor at persistence.
In other cases, events are easy but reliability is a problem:
PostgreSQL can be configured to generate events using triggers, but they're one-time things, so you can't resume incomplete events
The light at the end of the tunnel seems to be something like RabbitMQ.
RabbitMQ can persist the message in a database to survive a crash
We can use acknowledgements on both legs (incoming SMTP to RabbitMQ, and RabbitMQ to outgoing SMTP) to ensure the application is reliable.
Finally, several of the RabbitMQ clients provide full asynchronous support (see for example pika, txAMQP, and puka).
It's enough for our purposes that the RabbitMQ client provides us an event-based interface.
At a more theoretical level, however, this need not be the case. In fact, despite the "notification" issue above, ZeroMQ has an event-based client. Even if our software is elegantly event-based, we will run into systems that aren't. In these cases, we have no choice but to fall back on polling. In principle, if not in practice, we simply query the message provider for messages. When we exhaust the current queue (or immediately, if there are no messages), we use a callLater call to check again in the future. It may feel like an anti-pattern, but it's (to the best of my knowledge, anyway) the right way to handle this particular case.
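Twisted's reactor.callLater is Python, but the poll-and-reschedule pattern itself is language-neutral. Here is a minimal sketch of it in Java (for consistency with the other examples in this thread), with a scheduled executor standing in for callLater and a stubbed-out message provider:

import java.util.Optional;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PollingBridge {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Drain whatever the provider currently has, then schedule a future
    // check: the callLater step.
    void poll() {
        Optional<String> message;
        while ((message = fetchFromProvider()).isPresent()) {
            process(message.get());
        }
        scheduler.schedule(this::poll, 500, TimeUnit.MILLISECONDS);
    }

    Optional<String> fetchFromProvider() { return Optional.empty(); } // stub
    void process(String message) { System.out.println(message); }

    public static void main(String[] args) {
        new PollingBridge().poll();
    }
}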

How to detect alarm-based blocking RabbitMQ producer?

I have a producer sending durable messages to a RabbitMQ exchange. If the RabbitMQ memory or disk exceeds the watermark threshold, RabbitMQ will block my producer. The documentation says that it stops reading from the socket, and also pauses heartbeats.
What I would like is a way to know in my producer code that I have been blocked. Currently, even with a heartbeat enabled, everything just pauses forever. I'd like to receive some sort of exception so that I know I've been blocked and I can warn the user and/or take some other action, but I can't find any way to do this. I am using both the Java and C# clients and would need this functionality in both. Any advice? Thanks.
Sorry to tell you, but with RabbitMQ (at least with 2.8.6) this isn't possible :-(
I had a similar problem, which centred around trying to establish a channel when the connection was blocked. The result was the same as what you're experiencing.
I did some investigation into the core of the RabbitMQ .NET client library and discovered that the root cause of the problem is that it goes into an infinite blocking state.
You can see more details on the RabbitMQ mailing list here:
http://rabbitmq.1065348.n5.nabble.com/Net-Client-locks-trying-to-create-a-channel-on-a-blocked-connection-td21588.html
One suggestion (which we didn't implement) was to do the work inside a thread and have some other component manage the timeout, killing the thread if it is exceeded. We just accepted the risk :-(
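For what it's worth, a minimal sketch of that unimplemented suggestion: run the potentially blocking call on a worker thread and give up after a timeout. The publish action itself is passed in as a Runnable:

import java.util.concurrent.*;

public class TimedPublish {
    static void publishWithTimeout(Runnable publish, long timeoutMs)
            throws InterruptedException, ExecutionException, TimeoutException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<?> result = executor.submit(publish);
            // If the broker has blocked the connection, get() never returns
            // on its own; the timeout is our only way out.
            result.get(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            executor.shutdownNow(); // interrupts the stuck worker thread
        }
    }

    public static void main(String[] args) throws Exception {
        try {
            publishWithTimeout(() -> {
                // stand-in for a publish call that hangs on a blocked connection
                try { Thread.sleep(60_000); } catch (InterruptedException ignored) { }
            }, 2_000);
        } catch (TimeoutException e) {
            System.out.println("Publish timed out: broker is probably blocking us");
        }
    }
}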
The RabbitMQ client uses a blocking RPC call that waits for a reply indefinitely.
If you look at the Java client API, what it does is:
AMQChannel.BlockingRpcContinuation k = new AMQChannel.SimpleBlockingRpcContinuation();
k.getReply(-1);
The -1 passed as the argument blocks until a reply is received.
The good thing is that you could pass in your own timeout to make the call return.
The bad thing is that you will have to update the client jars.
If you are OK with doing that, you could pass a timeout wherever a blocking call like the one above is made.
The code would look something like:
try {
    return k.getReply(200); // wait at most 200 ms instead of forever
} catch (TimeoutException e) {
    throw new MyCustomRuntimeOrTimeoutException("RabbitTimeout ex", e);
}
And in your code you could catch this exception and run your own logic for this event.
Some related classes that might require this fix would be:
com.rabbitmq.client.impl.AMQChannel
com.rabbitmq.client.impl.ChannelN
com.rabbitmq.client.impl.AMQConnection
FYI: I have tried this and it works.