Spring JMS with ActiveMQ: How to asynchronously read selective messages - activemq

I am using Spring JMS with ActiveMQ as the broker and running the application on Tomcat. I have seen many examples of receiving messages synchronously with a specified message selector using receiveSelected(..). But I cannot find any way to dynamically specify the message selector on a jms:listener-container to receive a message asynchronously. The selector will be known only at runtime.
The only way I can think of doing this is to use DefaultMessageListenerContainer directly and create a new instance every time I need a new selector. But I'm unsure whether this is the right approach and what the best practices are for doing so. For example, should the listener container associated with a selector be cached? When should it be shut down, etc.?
I would really appreciate it if someone could point me to an example or outline a strategy to handle this situation.
Thanks in advance!

You can't change the selector while the container is running (strictly speaking you can set it, but only new listener threads will use it). You can stop the container, modify the selector to include the new conditions, and start the container again.
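A minimal sketch of that stop/modify/start cycle, assuming you hold a reference to the DefaultMessageListenerContainer behind your jms:listener-container (the selector string and method name are illustrative):

```java
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class SelectorSwitcher {

    // Stop the container, swap the selector, and start it again so that
    // the newly created listener threads pick up the new selector.
    public void changeSelector(DefaultMessageListenerContainer container,
                               String newSelector) {
        container.stop();
        container.setMessageSelector(newSelector); // e.g. "orderId = '42'"
        container.start();
    }
}
```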

Related

RabbitMQ as both Producer and Consumer in a single application

I am currently learning RabbitMQ and AMQP in general. I started working with some tutorials I found online, and all of them show more or less the same example - a Spring Boot web app that, upon a REST call, produces a message and puts it onto a RabbitMQ queue, and then another class from the same app, configured as the consumer of that message, consumes it and runs the handler method.
I can't wrap my head around why this is beneficial in any way. The upside I understand is that the handler is executed in a separate thread, while the controller method can return right after sending the message to the queue. However, why would this be any better than just using Spring's @Async annotation on that handler method and calling it explicitly? In that case I suppose we would achieve the same thing, while not having to host and manage a separate instance of a message broker like RabbitMQ.
Can someone please explain? Thanks.
Very simply:
with RabbitMQ you get persistent messages and much safer, more consistent exception handling. If the machine crashes, messages that have already been pushed are not lost.
A message can be pushed to an exchange and consumed by multiple parallel consumers, which helps scale the application if the consumer code is too slow.
and a lot of other reasons...
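To illustrate the durability point, here is a sketch with the plain RabbitMQ Java client, assuming a local broker; the queue name and payload are illustrative. The queue is declared durable and the message is published with persistent delivery mode, so messages the broker has already accepted survive a restart:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class DurablePublish {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // durable = true: the queue itself survives a broker restart
            channel.queueDeclare("work", true, false, false, null);
            // PERSISTENT_TEXT_PLAIN sets delivery mode 2 (persistent)
            channel.basicPublish("", "work",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "job payload".getBytes());
        }
    }
}
```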

All Endpoint Instances subscribe and handle event

I have a notification service that handles events and publishes them to clients using various technologies, such as SignalR. I want every instance of my notification service to pick up and handle these events. However, NServiceBus only allows any one instance of my notification service endpoint to pick up the event, and the other instances never get it.
My current workaround for this is to create a separate named endpoint for each instance of my notification service (the name has the server host name added to it), but then I have to make sure I unsubscribe from the event when the instance goes down or is moved to another server.
Is there a better way to do this? It would be nice if I could configure NServiceBus to create a separate incoming queue for each endpoint instance in this case, but I can't figure out how to do that, or even if NServiceBus supports such a use case.
You are correct. NServiceBus does not support such a case. Subscribers are always treated as logical endpoints, so individualized queues would not be used even if they were available.
Differentiating the instances by modifying the endpoint name is the most straightforward way to achieve what you want.
Changing your differentiator to a controllable runtime value, for instance an environment variable, would at least alleviate the need to unsubscribe when an instance is moved.
Also, if you want to review the scenario in more detail please don't hesitate to reach out to us directly, we might have other approaches to suggest. Just open a support ticket.

Apache camel restarting the route

I have something like the following:
from("rabbitmq://...")
    .process(processor1)
    // ...
    .process(processorn)
    .process(SendToExternalAppProcessor)
The SendToExternalAppProcessor.process() method uses a producer template's sendBody() method to send a request, formed from the contents of the exchange parameter, to another broker, rabbitmq2.
The issue is that once SendToExternalAppProcessor.process() executes and the route above completes, the route restarts again, along with the listener on rabbitmq2.
What am I missing here? Is there any Apache Camel configuration that is slipping my attention?
PS: I know I have not given any concrete code here to replicate the scenario on your machine, but I hope experienced heads and eyes will be quick to recall and suggest something. (Also, I cannot share my project code directly, and it's big and complex.)
Update:
I tried commenting out sendBody() and the route still restarts. I must be missing some weird basic setting here...
I think this is just a misunderstanding of the way routes work. 'from' is not a one-shot event; it will keep accepting messages from the source until you explicitly tell the route to stop.
from() works as a normal RabbitMQ consumer; the route is designed to be always running.
If you just want to transfer exchanges to another RabbitMQ broker, to() is enough:
from("rabbitmq://...")
    .process(processor1)
    // ...
    .process(processorn)
    .to("rabbitmq://rabbit2...")
Please let us know which version of Camel you are using.
Are you using a transacted Camel flow? If any transaction mode is on, one possible issue could be the commit acknowledgment: Camel may be consuming and processing the message but never acknowledging it to RabbitMQ, so the message stays on the queue and is consumed again and again by the route. The default is auto-acknowledge, so this should not be the case if the route is not transacted.
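If acknowledgement is the suspect, it can help to make the mode explicit on the consuming endpoint. A hedged sketch (host, exchange, and queue names are placeholders; the autoAck option is per the camel-rabbitmq component, so check it against your Camel version):

```java
// Consume with explicit auto-acknowledge and forward to the second broker.
from("rabbitmq://localhost/myexchange?queue=myqueue&autoAck=true")
    .process(processor1)
    .to("rabbitmq://rabbit2host/rabbit2exchange");
```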

How to detect alarm-based blocking RabbitMQ producer?

I have a producer sending durable messages to a RabbitMQ exchange. If the RabbitMQ memory or disk exceeds the watermark threshold, RabbitMQ will block my producer. The documentation says that it stops reading from the socket, and also pauses heartbeats.
What I would like is a way to know in my producer code that I have been blocked. Currently, even with a heartbeat enabled, everything just pauses forever. I'd like to receive some sort of exception so that I know I've been blocked and I can warn the user and/or take some other action, but I can't find any way to do this. I am using both the Java and C# clients and would need this functionality in both. Any advice? Thanks.
Sorry to tell you, but with RabbitMQ (at least with 2.8.6) this isn't possible :-(
I had a similar problem, which centred around trying to establish a channel when the connection was blocked. The result was the same as what you're experiencing.
I did some investigation into the core of the RabbitMQ C# .NET library and discovered that the root cause of the problem is that it goes into an infinite blocking state.
You can see more details on the RabbitMQ mailing list here:
http://rabbitmq.1065348.n5.nabble.com/Net-Client-locks-trying-to-create-a-channel-on-a-blocked-connection-td21588.html
One suggestion (which we didn't implement) was to do the work inside of a thread and have some other component manage the timeout and kill the thread if it is exceeded. We just accepted the risk :-(
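The thread-plus-timeout workaround mentioned above can be sketched generically in Java: run the potentially blocking client call on a worker thread and give up after a deadline instead of hanging forever. The Callable body stands in for the real channel/publish call:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class BlockingCallGuard {

    // Run blockingCall on a worker thread; Future.get throws
    // TimeoutException instead of hanging if the deadline passes.
    public static <T> T callWithTimeout(Callable<T> blockingCall,
                                        long timeoutMillis) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<T> future = executor.submit(blockingCall);
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } finally {
            executor.shutdownNow(); // interrupt the worker if it is still stuck
        }
    }
}
```

The caller catches TimeoutException, warns the user, and decides whether to retry or fail.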
The RabbitMQ client uses a blocking RPC call that waits for a reply indefinitely.
If you look at the Java client API, what it does is:
AMQChannel.BlockingRpcContinuation k = new AMQChannel.SimpleBlockingRpcContinuation();
k.getReply(-1);
Passing -1 as the argument blocks until a reply is received.
The good thing is that you could pass in your own timeout to make the call return.
The bad thing is that you will have to update the client jars.
If you are OK with doing that, you could pass a timeout wherever a blocking call like the one above is made.
The code would look something like:
try {
    return k.getReply(200);
} catch (TimeoutException e) {
    throw new MyCustomRuntimeOrTimeoutException("RabbitTimeout ex", e);
}
And in your code you could handle this exception and perform your logic in this event.
Some related classes that might require this fix would be:
com.rabbitmq.client.impl.AMQChannel
com.rabbitmq.client.impl.ChannelN
com.rabbitmq.client.impl.AMQConnection
FYI: I have tried this and it works.

Using JavaMail from a webapp in GlassFish

I have set up a JavaMail session in a backing bean for my JSF application, and it turns out to be fairly easy to send e-mail. However depending on network conditions, it can take a fair amount of time. The Transport.send() method will block the calling thread until the e-mail is sent or the protocol fails somehow.
My question is: Is this okay to do in a JSF backing bean, considering the possibility of many users accessing the server at the same time?
I can create an application-scoped worker thread that would work off of a BlockingQueue to handle all the e-mail in background. Is this the right thing to do?
One possibility is to have an EJB produce JMS messages and an MDB (message-driven bean) that consumes the messages asynchronously and invokes Transport.send().
Look at this example: http://faeddalberto.blogspot.com/2011/03/sending-email-with-ejb-3-mdb-message.html
Yes, it's better to move anything that uses the network (and thus may be delayed unpredictably) into a separate thread.
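The application-scoped worker the question describes can be sketched like this, assuming Java 16+ (for records); the MailDispatcher and MailTask names are illustrative, and the Consumer parameter stands in for the actual Transport.send() call so the slow network work happens only on the background thread:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

public class MailDispatcher {

    public record MailTask(String to, String subject, String body) {}

    private final BlockingQueue<MailTask> queue = new LinkedBlockingQueue<>();

    public MailDispatcher(Consumer<MailTask> sender) {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    MailTask task = queue.take(); // blocks until work arrives
                    sender.accept(task);          // e.g. Transport.send(...)
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // shutdown requested
            }
        }, "mail-worker");
        worker.setDaemon(true);
        worker.start();
    }

    // Called from the JSF backing bean; never blocks on the network.
    public void enqueue(MailTask task) {
        queue.add(task);
    }
}
```

The backing bean calls enqueue() and returns immediately; failures inside sender should be caught and logged (or retried) by the worker rather than propagated to the request thread.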