How to create queues and exchanges at application start?

I'm using RabbitMQ with Spring Boot and I want all the declared queues and exchanges to be created when the application starts.
I have one exchange with two queues bound to it, and another queue that is not bound to any exchange.
The exchange declaration looks like this:
@Bean
TopicExchange exchange() {
    return new TopicExchange(name, false, false);
}
And the queues:
@Bean
Queue queue1() {
    return new Queue(name, false);
}

@Bean
Binding bindingLogger(Queue queue1, TopicExchange exchange) {
    return BindingBuilder.bind(queue1).to(exchange).with("routingKey");
}
And the queue without binding:
@Bean
Queue queue2() {
    return new Queue(name, false);
}
I have also annotated the classes with @Component.
I think the declarations themselves are fine, because if I add a "dummy" @RabbitListener, all the queues and the exchange are created. Something like this:
@Component
public class DummyListener {

    @RabbitListener(queues = {FAKE_QUEUE_NAME})
    public void dummyMethod(Message message, Channel channel) {
        // The code will never get here because nobody is going to
        // publish to this queue. This method exists only so that the
        // queues and exchange are created on startup.
    }
}
But I think this is a dirty solution: it requires a listener that will never be triggered and a queue that will never be used.
And, as I said, the queue and exchange declarations work perfectly and everything is created at startup as long as this "dummy listener" is present.
So, how can I create the exchange and queues (if they don't already exist) when the application starts? Is there a more elegant way?
I've read about RabbitAdmin, but I think that is for creating new queues at runtime (actually I don't know whether startup and runtime need to be handled differently).
Thanks in advance.

Those Declarables are populated into the RabbitMQ broker when a connection is opened.
In your case that happens when the listener container behind the @RabbitListener starts.
All the hard logic is done by the mentioned RabbitAdmin:
/**
 * If {@link #setAutoStartup(boolean) autoStartup} is set to true, registers a callback on the
 * {@link ConnectionFactory} to declare all exchanges and queues in the enclosing application context. If the
 * callback fails then it may cause other clients of the connection factory to fail, but since only exchanges,
 * queues and bindings are declared failure is not expected.
 *
 * @see InitializingBean#afterPropertiesSet()
 * @see #initialize()
 */
@Override
public void afterPropertiesSet() {
Another point of connection is, of course, a RabbitTemplate, when you produce a message to an exchange.
If you really are not going to do any consuming or producing, you can consider injecting an AmqpAdmin into your service and calling its initialize() method when you need it:
/**
 * Declares all the exchanges, queues and bindings in the enclosing application context, if any. It should be safe
 * (but unnecessary) to call this method more than once.
 */
@Override // NOSONAR complexity
public void initialize() {
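For example, a minimal sketch of calling it from an ApplicationRunner at startup (the class and bean names here are illustrative, not from the original post):
import org.springframework.amqp.core.AmqpAdmin;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;

@Component
public class BrokerDeclarer implements ApplicationRunner {

    private final AmqpAdmin amqpAdmin;

    public BrokerDeclarer(AmqpAdmin amqpAdmin) {
        this.amqpAdmin = amqpAdmin;
    }

    @Override
    public void run(ApplicationArguments args) {
        // Declares every Exchange, Queue and Binding bean found in the application context
        amqpAdmin.initialize();
    }
}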
However, that raises the question: what is the point of having all those declarations in your application if you are not going to use them in any further logic? It looks like a mix of concerns and an abuse of the AMQP API. It would be better to declare those entities outside of your application, e.g. using the RabbitMQ management console or a command-line utility.

You can simply open the connection. If you are using Spring Boot, see this answer.
If you are not using Spring Boot, add a @Bean that implements SmartLifecycle and open the connection in start().
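A minimal sketch of that idea, assuming Spring 5.1+ (where SmartLifecycle's other methods have defaults) and Spring AMQP's ConnectionFactory available as a bean; the bean name is illustrative:
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.SmartLifecycle;
import org.springframework.context.annotation.Bean;

@Bean
SmartLifecycle connectionOpener(ConnectionFactory connectionFactory) {
    return new SmartLifecycle() {

        private volatile boolean running;

        @Override
        public void start() {
            // Opening a connection fires the RabbitAdmin callback that declares
            // all exchanges, queues and bindings in the application context
            connectionFactory.createConnection().close();
            this.running = true;
        }

        @Override
        public void stop() {
            this.running = false;
        }

        @Override
        public boolean isRunning() {
            return this.running;
        }
    };
}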

Related

How to check ActiveMQ queue size from a message listener

I am using a message listener to perform some actions on ActiveMQ queues, and I want to check the size of a queue while processing.
I am using the logic below, but it only works outside the listener.
Any suggestions?
public class TestClass {

    MessageConsumer consumerTransformation;
    MessageListener listenerObjectTransformation;

    public static void main(String[] args) throws JMSException {
        ActiveMQModel activeMQModelObject = new ActiveMQModel();
        //String subject = "TRANSFORMATION_QUEUE";
        String subject = "IMPORT_QUEUE";

        //consumerTransformation = activeMQModelObject.getActiveMQConsumer(subject);
        // Here we set the listener to listen to all the messages in the queue
        //listenerObjectTransformation = new TransformationMessageListener();
        //consumerTransformation.setMessageListener(listenerObjectTransformation);

        boolean isQueueEmpty = activeMQModelObject.isMessageQueueEmpty(subject);
        System.out.println("Size " + isQueueEmpty);
    }

    /*private class TransformationMessageListener implements MessageListener {
        @Override
        public void onMessage(Message messagearg) {
            System.out.println("test....");
        }
    }*/
}
What is the way to check ActiveMQ queue size from a message listener?
The JMS API does not define methods for checking queue size or other metrics from a client; the API is meant to decouple clients from any server administration and from each other. A sender has no awareness of the receivers that might or might not be there, and a receiver is unaware of who might be producing or whether there is anything to consume at a given moment. By using the asynchronous listener you are subscribing to content that is either currently available or yet to be produced.
In some cases you can make use of the JMX metrics that the broker exposes from your code, but this is not good practice.
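If you do go the JMX route anyway, a hedged sketch might look like the following; it assumes the broker is named "localhost" and exposes JMX on port 1099, and reuses the IMPORT_QUEUE name from the question. Adjust all three for your setup:
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueSizeCheck {

    public static void main(String[] args) throws Exception {
        // Assumed JMX endpoint; change host/port to match your broker
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName queue = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Queue,destinationName=IMPORT_QUEUE");
            // QueueSize is exposed by ActiveMQ's QueueViewMBean
            Long queueSize = (Long) mbs.getAttribute(queue, "QueueSize");
            System.out.println("QueueSize = " + queueSize);
        } finally {
            connector.close();
        }
    }
}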

Channel.basicQos() ignored by RabbitMQ

I'm developing an application that uses RabbitMQ with the Micronaut (v1.1.3) framework. The goal of this application is to write the path of a file to the queue. The workers (RabbitListeners) consume from the queue and perform certain operations on the indicated file. These operations can be heavy, so I don't want the broker to immediately push every message to the first available worker, to avoid overloading it. I have read that you need to set the "prefetch_count" to prevent a worker from being overloaded.
The problem is that channel.basicQos(1) is completely ignored, and therefore the prefetch count is not applied.
@Singleton
public class ChannelPoolListener extends ChannelInitializer {

    @Override
    public void initialize(Channel channel) throws IOException {
        channel.basicQos(1);
        channel.exchangeDeclare("micronaut", BuiltinExchangeType.DIRECT, true);
        channel.queueDeclare("log", true, false, false, null);
        channel.queueBind("log", "micronaut", "log");
    }
}
The channel passed to the initializer is not guaranteed to be used beyond that scope. You need to set the prefetch in the Queue annotation. See https://micronaut-projects.github.io/micronaut-rabbitmq/latest/api/io/micronaut/configuration/rabbitmq/annotation/Queue.html#prefetch--
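For example, a minimal sketch of a listener with the prefetch set via the annotation (the class, method and payload type here are illustrative; the queue name matches the one declared above):
import io.micronaut.configuration.rabbitmq.annotation.Queue;
import io.micronaut.configuration.rabbitmq.annotation.RabbitListener;

@RabbitListener
public class LogListener {

    // prefetch = 1 tells the broker to deliver at most one unacknowledged
    // message at a time to this consumer
    @Queue(value = "log", prefetch = 1)
    public void receive(String path) {
        // heavy processing of the file at `path`
    }
}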

Change RabbitMQ bindings

I have one application which acts as a publisher and regularly sends messages to an exchange, and a dozen others (subscribers) which are grouped semantically by topics. My problem is that the subscribers can move between groups, so their topic subscriptions need to change, but I cannot figure out how to alter the bindings dynamically. Any ideas?
My config for each subscriber looks like this:
@Bean
TopicExchange exchange() {
    return new TopicExchange(exchangeName);
}

@Bean
Binding binding(Queue queue, TopicExchange exchange) {
    return BindingBuilder.bind(queue).to(exchange).with(routingKey);
}

@Bean
Queue queue(SystemInformationService systemInformationService) {
    return new Queue(systemInformationService.getInfo().getTemplateName() != null
            ? systemInformationService.getInfo().getTemplateName() : queueName, true);
}
P.S.: I must not restart my subscriber Spring Boot application; otherwise the solution would be obvious.
You cannot change bindings; you can, however, add and remove them.
You can either do that manually using the management UI or you can use a RabbitAdmin.
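A minimal sketch of the RabbitAdmin option, using an injected AmqpAdmin to add the new binding and then remove the old one at runtime (the service and method names are illustrative):
import org.springframework.amqp.core.AmqpAdmin;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.stereotype.Service;

@Service
public class BindingSwitcher {

    private final AmqpAdmin amqpAdmin;

    public BindingSwitcher(AmqpAdmin amqpAdmin) {
        this.amqpAdmin = amqpAdmin;
    }

    // Moves a queue from one topic pattern to another without touching the queue or exchange.
    // The new binding is declared before the old one is removed so no messages are dropped
    // during the switch.
    public void switchTopic(String queueName, String exchangeName, String oldKey, String newKey) {
        TopicExchange exchange = new TopicExchange(exchangeName);
        Queue queue = new Queue(queueName);
        amqpAdmin.declareBinding(BindingBuilder.bind(queue).to(exchange).with(newKey));
        amqpAdmin.removeBinding(BindingBuilder.bind(queue).to(exchange).with(oldKey));
    }
}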

Using Flux.cache to reuse Redis channel subscription in webflux

I am using Spring webflux to build an endpoint that will stream events received from a Redis channel subscription.
It's something like this:
class MyService(redisTemplate: ReactiveRedisOperations<String, String>) {

    private val redisChannelFlux = redisTemplate
        .listenToChannel("myChannel")
        .map { it.message }
        .cache(0) // turns this Flux into a reusable hot publisher

    fun watch() : Flux<String> {
        return redisChannelFlux
    }
}

class MyController(val svc: MyService) {

    @GetMapping("/api/watch", produces = [MediaType.TEXT_EVENT_STREAM_VALUE])
    fun watch() : Flux<String> {
        return svc.watch()
    }
}
It works. When a client subscribes to the /api/watch endpoint it starts receiving new events from the Redis channel, and I can confirm in the Redis monitor that "SUBSCRIBE" "myChannel" happens only once, regardless of how many clients are connected to my reactive endpoint. Awesome!
I am just not sure how safe it is to use Flux.cache() in this scenario. Am I flirting with disaster here? Is there a recommended way of reusing an existing Publisher with new Subscribers?
Flux.cache() is usually used for replaying the last N elements of that Flux to new subscribers. Since here you're setting that number to 0, it seems you're just interested in sharing resources, not in replaying the latest events to a new subscriber.
With that in mind, you can just use Flux.share() instead. This operator subscribes to your Redis instance as soon as the first subscriber comes in and shares the resources with all the others. Once all subscribers are gone, the connection to your Redis instance is closed, until another subscriber comes in, and so on.
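A minimal Reactor sketch (plain Java, independent of Redis) of what share() gives you:
import java.time.Duration;
import reactor.core.publisher.Flux;

public class ShareDemo {

    public static void main(String[] args) throws InterruptedException {
        // One upstream source, subscribed to once and shared by all downstream subscribers
        Flux<Long> shared = Flux.interval(Duration.ofSeconds(1)).share();

        shared.subscribe(v -> System.out.println("A got " + v));
        Thread.sleep(2500);

        // B joins late: it shares the same upstream and only sees values from now on;
        // nothing is replayed, unlike cache(n) with n > 0
        shared.subscribe(v -> System.out.println("B got " + v));
        Thread.sleep(2500);
    }
}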

Don't requeue if transaction fails

I have a job with the following config:
@Autowired
private ConnectionFactory connectionFactory;

@Bean
Step step() {
    return steps.get("step")
            .<~>chunk(chunkSize)
            .reader(reader())
            .processor(processor())
            .writer(writer())
            .build();
}

@Bean
ItemReader<Person> reader() {
    return new AmqpItemReader<>(amqpTemplate());
}

@Bean
AmqpTemplate amqpTemplate() {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setChannelTransacted(true);
    return rabbitTemplate;
}
Is it possible to change the behavior of RabbitResourceHolder so that it does not requeue the message in case of a transaction rollback? Does that make sense in Spring Batch?
Not when using an external transaction manager; the whole point of rolling back a transaction is to put things back the way they were before the transaction started.
If you don't use transactions (or just use a local transaction - via setChannelTransacted(true) and no transaction manager), you (or an ErrorHandler) can throw an AmqpRejectAndDontRequeueException (or set defaultRequeueRejected to false on the container) and the message will go to the DLQ.
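For that non-transaction-manager route, a minimal sketch of a listener container factory (the bean is illustrative and configures the listener-container side rather than the AmqpItemReader shown above):
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;

@Bean
SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // Local channel transaction only, no external transaction manager
    factory.setChannelTransacted(true);
    // Rejected deliveries are not requeued, so they can be dead-lettered
    factory.setDefaultRequeueRejected(false);
    return factory;
}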
I can see that this is inconsistent; the RabbitMQ documentation says:
On the consuming side, the acknowledgements are transactional, not the consuming of the messages themselves.
So RabbitMQ itself does not requeue the delivery but, as you point out, the resource holder does (the container, however, will reject the delivery when there is no transaction manager and one of the two conditions I described is true).
I think we need to provide at least an option for the behavior you want.
I opened AMQP-711.