How to process messages in parallel from a WebLogic JMS queue?

I am new to JMS, and I am trying to understand if there is a way to consume messages in parallel from a JMS queue and process them using Spring JMS.
I checked a few answers on Stack Overflow, but I am still confused.
The application I am working on uses Spring Boot with WebLogic JMS as the message broker. It listens to a JMS queue fed by a single producer, using a @JmsListener-annotated method.
In the application's configuration of the JMS listener container factory, the following parameter has been set:
DefaultJmsListenerContainerFactory.setConcurrency("6-10");
Does that mean that if there are currently 100 messages in the queue, up to 10 of them will be consumed and processed in parallel? If so, can I increase the value to process more messages in parallel, and are there any limitations on doing so?
I am also confused about what DefaultJmsListenerContainerFactory.setConcurrency and setConcurrentConsumers do.
The JMS client application currently processes messages very slowly, so I need suggestions for implementing parallel processing.

concurrentConsumers sets a fixed number of consumers, whereas concurrency can specify a variable range that scales up and down as needed. Also see maxConcurrentConsumers.
The actual behavior also depends on prefetch: if each consumer prefetches 100 messages, a single consumer might receive them all.
There is no hard limit on concurrency itself (aside from memory and CPU constraints).
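To make this concrete, here is a minimal sketch of a Spring configuration using setConcurrency (the bean wiring and queue name are assumptions; the WebLogic ConnectionFactory would typically be resolved via JNDI elsewhere):

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

@Configuration
@EnableJms
public class JmsConfig {

    // connectionFactory is assumed to be the WebLogic JMS ConnectionFactory,
    // e.g. looked up via JNDI elsewhere in the application.
    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(
            ConnectionFactory connectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // Start with 6 concurrent consumers; scale up to 10 under load.
        factory.setConcurrency("6-10");
        return factory;
    }
}

A listener method then refers to the factory by bean name (the destination is a placeholder):

@JmsListener(destination = "ORDERS_QUEUE", containerFactory = "jmsListenerContainerFactory")
public void onMessage(String payload) {
    // process the message; up to 10 invocations run in parallel
}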

Related

RabbitMQ delivery throttle

So I'm testing RabbitMQ on a single node. Plain and simple:
One producer sends messages to the queue,
Multiple consumers take tasks from that queue.
Currently the consumers process thousands of messages per second; they are too fast, so I need them to slow down. Throttling on the consumer side is not possible due to the unreliable nature of the network.
Collectively, the consumers must not take more than 10 messages per second from that queue.
Is there a way to configure RabbitMQ so as the queue dispatches a maximum of 10 messages per second?
If I remember correctly, once RabbitMQ has routed a message to the queue, it is up to the consumers to consume it. Consumers exist in various languages, and you haven't mentioned anything specific, so I'm giving a generic answer.
In my understanding, you shouldn't try to impose restrictions on RabbitMQ itself. Instead, consider implementing a pool of message consumers on the client side that handles no more than X messages simultaneously. Alternatively, you can use some kind of semaphore in the handler itself, but not on the RabbitMQ server; a sketch of this approach follows.
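For illustration, a minimal sketch with the RabbitMQ Java client, using a shared semaphore as a token bucket refilled once per second (the queue name "tasks" and the host are assumptions):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class ThrottledConsumer {
    public static void main(String[] args) throws Exception {
        // Token bucket shared by all handler threads: at most 10 tokens per second.
        Semaphore tokens = new Semaphore(10);
        ScheduledExecutorService refill = Executors.newSingleThreadScheduledExecutor();
        refill.scheduleAtFixedRate(() -> {
            tokens.drainPermits();   // discard unused tokens from the last second
            tokens.release(10);      // hand out a fresh batch
        }, 1, 1, TimeUnit.SECONDS);

        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: broker runs locally
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.basicQos(10); // keep the prefetch low so the broker does not flood us

        DeliverCallback onDeliver = (consumerTag, delivery) -> {
            try {
                tokens.acquire(); // blocks until the bucket has a token
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            // ... process delivery.getBody() here ...
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume("tasks", false, onDeliver, consumerTag -> { });
    }
}

Note that the budget lives in one JVM; if consumers run as separate processes, each process needs its own share of the 10-per-second limit.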

Implementing the reliability pattern in CloudHub with VM queues

I have more-or-less implemented the Reliability Pattern in my Mule application using persistent VM queues on CloudHub, as documented here. While everything works fine, it has left me with a number of questions about actually ensuring reliable delivery of my messages. To illustrate the points below, assume I have an http-request component within my "application logic flow" (see the diagram on the link above) that is throwing an exception because the endpoint is down, and I want to ensure that the in-flight message will eventually be delivered to the endpoint:
As detailed on the link above, I have observed that when the exception is thrown within my "application logic flow", and I have made the flow transactional, the message is put back on the VM queue. However, all that happens is the message is then repeatedly taken off the queue, processed by the flow, and the exception is thrown again, ad infinitum. There appears to be no way of configuring any sort of retry delay or maximum number of retries on VM queues, as is possible with, for example, ActiveMQ. The best workaround I have come up with is to surround the http-request message processor with the until-successful scope, but I'd rather have these sorts of things apply to my whole flow (without having to wrap the whole flow in until-successful). Is this sort of thing possible using only VM queues and CloudHub?
I have configured my until-successful to place the message on another VM queue, which I want to use as a dead-letter queue. Again, this works fine, and I can log in to CloudHub and see the messages populated on my DLQ, but there appears to be no way of moving messages from this queue back into the flow when the endpoint comes back up. All it seems you can do in CloudHub is clear your queue. Again, is this possible using VM queues and CloudHub only (i.e. no other queueing tool)?
VM queues are very basic, whether you use them in CloudHub or not.
VM queues have no capacity for delaying redelivery (like exponential back-offs). Use JMS queues if you need such features.
You need to create a flow for processing the DLQ, for example one that regularly consumes the queue via the requester module and re-injects the messages into the main queue. Again, with JMS, you would have better control.
As an alternative to JMS, you could consider hosted queues such as CloudAMQP, Iron.io or AWS SQS. You would lose transaction support on the inbound endpoint but gain better control over the (re)delivery behaviour.
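To illustrate the extra redelivery control JMS gives you, here is a minimal sketch using the ActiveMQ client, configuring delayed redelivery with exponential back-off before a message lands on the DLQ (the broker URL and the numbers are assumptions):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryConfig {
    public static ActiveMQConnectionFactory connectionFactory() {
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");
        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setInitialRedeliveryDelay(1000); // wait 1s before the first retry
        policy.setUseExponentialBackOff(true);
        policy.setBackOffMultiplier(2.0);       // 1s, 2s, 4s, ...
        policy.setMaximumRedeliveries(5);       // then the message goes to the DLQ
        return factory;
    }
}

None of this is available with plain VM queues, which is why the answer above suggests JMS for anything beyond basic buffering.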

ActiveMQ: one message per consumer

Can we configure ActiveMQ to send only one message at a time to each instance of an application?
I have Tomcat installed in cluster mode.
I'm using the Spring JMS template as the consumer.
You need to explain your question further; it's not clear what you are asking.
If you are talking about prefetch, IIRC ActiveMQ sets the prefetch to 1000 by default; set it to 0 to force messages to be distributed across all instances (at the cost of performance). Typically you will want to use prefetch, but you need to tune it for your needs.
Set the maxConcurrentConsumers property to 1. This should ensure that only one thread consumes from the queue on each node; see the sketch below.
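A minimal sketch combining both suggestions with the ActiveMQ client and Spring (the URL, queue name and prefetch value are assumptions to adapt):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class SingleMessageConsumerConfig {
    public static DefaultMessageListenerContainer container() {
        // queuePrefetch=1 asks the broker to hand each consumer one message
        // at a time, so messages spread across the clustered instances.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
            "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1");

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(factory);
        container.setDestinationName("work.queue"); // placeholder queue name
        container.setConcurrentConsumers(1);
        container.setMaxConcurrentConsumers(1); // one consuming thread per node
        container.setMessageListener((javax.jms.MessageListener) message -> {
            // handle one message at a time here
        });
        return container;
    }
}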

Does NServiceBus 4.x with RabbitMQ support round-robin consumers or the competing consumer model?

I'm using NServiceBus 4.x with RabbitMQ 3.2.x as my transport.
I made the assumption that by using RabbitMQ as my transport I would get the competing consumer model as an option. I understand that NServiceBus employs the "fanout" exchange type for all exchanges and does not support round robin at this time. However, is there a way to configure NServiceBus to take advantage of the levels of indirection (exchanges and channels) that RabbitMQ offers?
I have several consumers that I would like to compete for messages from a given queue. What I am observing is subscribers blocking further message retrieval from the queue until their current message is consumed. So having more than one consumer does me no good other than redundancy.
After reading some RabbitMQ documentation, I assume it's normal to block until the ack is sent by the subscriber. But I had assumed that subscriber #2 would have free access to the queue to fetch another message.
There is mention of increasing the prefetch count on the RabbitMQ channel.
Example:
channel.BasicQos(0, prefetchcount, false)
I don't see anywhere that I can change this setting via configuration in NServiceBus. Furthermore, as I read about what prefetch does, I'm not sure it's what I'm looking for.
Is it possible to use RabbitMQ without a distributor-type pattern like the one used with MSMQ? Or should I move to MassTransit or Rebus?
Put prefetchcount=2 in your connection string. Any value above 1 tells the broker to allow that many unacknowledged messages to be outstanding at once. You will need to experiment with this setting to find the optimum for your scenario.
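For example, the transport connection string might look like this (the host is a placeholder, and the exact key format can vary between NServiceBus RabbitMQ transport versions):

host=rabbitmq.example.com;prefetchcount=2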

ActiveMQ: Reject connections from producers when persistent store fills

I would like to configure my ActiveMQ producers to fail over (I'm using the Stomp protocol) when a broker reaches a configured limit, while allowing consumers to continue consuming from the overloaded broker unabated.
Reading ActiveMQ docs, it looks like I can configure ActiveMQ to do one of a few things when a broker reaches its limits (memory or disk):
Slow down messages using producerFlowControl="true" (by blocking the send)
Throw exceptions when using sendFailIfNoSpace="true"
Neither of the above, in which case I'm not sure what happens. Does it revert to TCP flow control?
It doesn't look like any of these options is designed to trigger a producer failover. A producer will fail over when it fails to connect but not, as far as I can tell, when it fails to send (due to producer flow control, for example).
So, is it possible to configure a broker to refuse connections when it reaches its limits? Or is my best bet to detect the slowdown on the producer side and manually reconfigure my producers to use a different broker at that time?
Thanks!
Your best bet is to use sendFailIfNoSpace, or better, sendFailIfNoSpaceAfterTimeout. This throws an exception up to your client, which can then attempt to resend the message to another broker at the application level (you can encapsulate this logic on top of your Stomp library and use that facade from your code). That said, if your ActiveMQ setup is wired correctly, your load, in terms of both production and consumption, should be more or less evenly distributed across your brokers, so this feature may not buy you a great deal.
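As a sketch of that application-level retry (using the JMS API rather than Stomp, with placeholder broker URLs), the producer can catch the broker's "no space" exception and try the next broker:

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverSender {
    // Ordered by preference; both URLs are placeholders.
    private static final String[] BROKERS =
        { "tcp://broker-a:61616", "tcp://broker-b:61616" };

    public static void send(String queueName, String text) throws JMSException {
        JMSException last = null;
        for (String url : BROKERS) {
            Connection connection = null;
            try {
                connection = new ActiveMQConnectionFactory(url).createConnection();
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer =
                    session.createProducer(session.createQueue(queueName));
                producer.send(session.createTextMessage(text));
                return; // delivered
            } catch (JMSException e) {
                // With sendFailIfNoSpace(AfterTimeout) set, a full broker surfaces
                // here (typically as a ResourceAllocationException); try the next one.
                last = e;
            } finally {
                if (connection != null) {
                    try { connection.close(); } catch (JMSException ignored) { }
                }
            }
        }
        throw last; // every broker refused the message
    }
}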
You would probably get a better result if you concentrated on fast consumption of the messages, and increased the storage limits to smooth out peaks in load.