I have one application that acts as a publisher and regularly sends messages to an exchange, and a dozen others (subscribers) that are grouped semantically by topics. My problem is that the subscribers can move between different groups, so their topic subscriptions should change, but I cannot figure out a way to alter the bindings dynamically. Any ideas?
My config for each subscriber looks like this:
@Bean
TopicExchange exchange() {
    return new TopicExchange(exchangeName);
}

@Bean
Binding binding(Queue queue, TopicExchange exchange) {
    return BindingBuilder.bind(queue).to(exchange).with(routingKey);
}

@Bean
Queue queue(SystemInformationService systemInformationService) {
    return new Queue(systemInformationService.getInfo().getTemplateName() != null
            ? systemInformationService.getInfo().getTemplateName()
            : queueName, true);
}
P.S.: I must not restart my subscriber Spring Boot application; otherwise the solution would be obvious.
You cannot change bindings; you can, however, add and remove them.
You can either do that manually using the management UI, or programmatically with a RabbitAdmin; see the example below.
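A minimal sketch of the programmatic route, assuming a durable queue; the BindingSwitcher class and rebind() method are hypothetical names, not Spring AMQP API:

import org.springframework.amqp.core.AmqpAdmin;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.stereotype.Service;

// Moves a subscriber's queue to a new topic group by removing the old
// binding and declaring a new one on the broker, at runtime.
@Service
public class BindingSwitcher {

    private final AmqpAdmin amqpAdmin;

    public BindingSwitcher(AmqpAdmin amqpAdmin) {
        this.amqpAdmin = amqpAdmin;
    }

    public void rebind(String queueName, String exchangeName, String oldKey, String newKey) {
        Queue queue = new Queue(queueName, true);
        TopicExchange exchange = new TopicExchange(exchangeName);
        amqpAdmin.removeBinding(BindingBuilder.bind(queue).to(exchange).with(oldKey));
        amqpAdmin.declareBinding(BindingBuilder.bind(queue).to(exchange).with(newKey));
    }
}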
Related
I'm using RabbitMQ and Spring Boot, and I want all the declared queues and exchanges to be created when the application starts.
I have one exchange and two queues bound to it. I also have another queue that is not bound to any exchange.
The exchange declaration is like this:
@Bean
TopicExchange exchange() {
    return new TopicExchange(name, false, false);
}
And the queues:
@Bean
Queue queue1() {
    return new Queue(name, false);
}

@Bean
Binding bindingLogger(Queue queue1, TopicExchange exchange) {
    return BindingBuilder.bind(queue1).to(exchange).with("routingKey");
}
And the queue without binding:
@Bean
Queue queue2() {
    return new Queue(name, false);
}
I have also annotated the classes with @Component.
I think this setup is fine, because if I add a "dummy" @RabbitListener, all the queues and the exchange are created. Adding something like this:
@Component
public class DummyListener {

    @RabbitListener(queues = {FAKE_QUEUE_NAME})
    public void dummyMethod(Message message, Channel channel) {
        // The code will never get here, because nothing ever inserts
        // data into the queue. This method exists only so the queues
        // and exchange are created on startup.
    }
}
But I think this is a dirty solution: it requires creating a listener that will never be triggered and a queue that will never be used.
And, as I said before, the queue and exchange declarations work perfectly and everything is created at startup if this "dummy listener" is implemented.
So, how can I create the exchange and queues (if they do not exist) when the application starts? Is there a more elegant way?
I've read about RabbitAdmin, but I think that is for creating a new queue at runtime (actually, I don't know whether startup and runtime have to be managed differently).
Thanks in advance.
Those Declarables are declared on the RabbitMQ broker when a connection is opened.
That is exactly what happens when the listener container behind that @RabbitListener starts.
All the hard logic is done by the mentioned RabbitAdmin:
/**
 * If {@link #setAutoStartup(boolean) autoStartup} is set to true, registers a callback on the
 * {@link ConnectionFactory} to declare all exchanges and queues in the enclosing application context. If the
 * callback fails then it may cause other clients of the connection factory to fail, but since only exchanges,
 * queues and bindings are declared failure is not expected.
 *
 * @see InitializingBean#afterPropertiesSet()
 * @see #initialize()
 */
@Override
public void afterPropertiesSet() {
Another point where a connection is opened is, of course, a RabbitTemplate, when you produce a message to an exchange.
If you really are not going to do any consuming or producing, you can consider injecting an AmqpAdmin into your service and calling its initialize() when you need to:
/**
 * Declares all the exchanges, queues and bindings in the enclosing application context, if any. It should be safe
 * (but unnecessary) to call this method more than once.
 */
@Override // NOSONAR complexity
public void initialize() {
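A minimal sketch of that approach, assuming constructor injection; the service name and method are hypothetical:

import org.springframework.amqp.core.AmqpAdmin;
import org.springframework.stereotype.Service;

// Hypothetical service that triggers declaration of all Declarable beans
// in the application context without consuming or producing anything.
@Service
public class DeclarationService {

    private final AmqpAdmin amqpAdmin;

    public DeclarationService(AmqpAdmin amqpAdmin) {
        this.amqpAdmin = amqpAdmin;
    }

    public void declareAll() {
        // Per the javadoc above, safe (but unnecessary) to call repeatedly.
        this.amqpAdmin.initialize();
    }
}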
However, that raises the question: what is the point of having all those declarations in your application if you are not going to use them in any further logic? It looks like a mixing of concerns and an abuse of the AMQP API. It would be better to declare those entities outside of your application, e.g. using the RabbitMQ Management console or a command-line utility...
You can simply open the connection. If you are using Spring Boot, see this answer.
If you are not using Spring Boot, add a @Bean that implements SmartLifecycle and open the connection in start(), as sketched below.
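A minimal sketch of such a bean, assuming a Spring AMQP ConnectionFactory is in the context; the bean name is hypothetical:

import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.SmartLifecycle;
import org.springframework.context.annotation.Bean;

// Opens (and immediately closes) a connection on startup, which triggers
// RabbitAdmin's callback to declare all exchanges, queues and bindings.
@Bean
public SmartLifecycle declarationTrigger(ConnectionFactory connectionFactory) {
    return new SmartLifecycle() {

        private volatile boolean running;

        @Override
        public void start() {
            connectionFactory.createConnection().close();
            this.running = true;
        }

        @Override
        public void stop() {
            this.running = false;
        }

        @Override
        public boolean isRunning() {
            return this.running;
        }
    };
}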
My (already "legacy" by now) implementation of a pub/sub solution using ServiceStack quickly ran out of clients when it reached the 20-client limit.
We do something like:
_redisConsumer = MqClientFactory.Instance.GetRedisClient(); // Returns an IRedisClient
_subscription = _redisConsumer.CreateSubscription();
_subscription.OnSubscribe = channel => CoreLog.Instance.Info($"Subscription started on {eventChannelName}");
_subscription.OnUnSubscribe = channel => CoreLog.Instance.Warning($"Unsubscribed from {eventChannelName}");
_subscription.OnMessage = (channel, msg) =>
{
    try
    {
        onMessageReceived(CoreRequestJsonEnvelope.CreateCoreRequestFromJson(msg));
    }
    catch (Exception ex)
    {
        CoreLog.Instance.Exception(ex);
    }
};
// Since it blocks execution, we put this in a Task:
Task.Run(() =>
{
    try
    {
        _subscription.SubscribeToChannels(eventChannelName); // blocking
    }
    catch (Exception e)
    {
    }
});
and when we have enough different channels to listen to, it runs out.
I then thought that maybe, instead of taking a new IRedisClient for each subscription, I could use the same IRedisClient for all of them:
_redisConsumer = mySavedRedisClient;
...
but that returns Unknown reply on multi-request after a few seconds/executions.
Lastly, I looked at the RedisPubSubServer, but it seems that I need to specify the channels in the constructor and cannot change them after that. I do need to add and remove channels at runtime, and the channels are not known from the start.
What is the recommended approach?
Is it to increase the max limit and continue as before?
Is it to use RedisPubSubServer? If so, how do I handle dynamic channels?
What does "unknown reply on multi-request" actually mean?
Thanks!
It's not clear what 20-client limit you're referring to, or how the client limit depends on channels or subscribers, but if this is your app's own limit, then it sounds like increasing it would be the easiest solution.
ServiceStack.Redis doesn't support changing the subscribed channels after a subscription has started. Instead of managing the IRedisSubscription yourself, you may want to consider ServiceStack.Redis's Managed Pub/Sub Server, which manages the background subscription thread with added resiliency and support for auto retries.
Whilst you can't change the subscribed channels at runtime, you can modify the Channels collection and restart the subscription to create a new subscription to the updated channels list, e.g.:
var pubSub = new RedisPubSubServer(clientsManager, chan1);
pubSub.Start();
//...
pubSub.Channels = new[] { chan1, chan2 };
pubSub.Restart();
Otherwise, depending on your use case, you may be able to subscribe to a channel pattern, which lets you subscribe to multiple dynamic channels matching a wildcard channel pattern:
var pubSub = new RedisPubSubServer(clientsManager) {
ChannelsMatching = new[] { "chan:*" }
}
.Start();
It will then handle any messages clients send that match the channel pattern.
I am using Spring WebFlux to build an endpoint that streams events received from a Redis channel subscription.
It's something like this:
class MyService(redisTemplate: ReactiveRedisOperations<String, String>) {

    private val redisChannelFlux = redisTemplate
        .listenToChannel("myChannel")
        .map { it.message }
        .cache(0) // turns this Flux into a reusable hot publisher

    fun watch(): Flux<String> {
        return redisChannelFlux
    }
}
class MyController(val svc: MyService) {

    @GetMapping("/api/watch", produces = [MediaType.TEXT_EVENT_STREAM_VALUE])
    fun watch(): Flux<String> {
        return svc.watch()
    }
}
It works. When a client subscribes to the /api/watch endpoint it starts receiving new events from the Redis channel, and I can confirm in the Redis monitor that "SUBSCRIBE" "myChannel" happens only once, regardless of how many clients are connected to my reactive endpoint. Awesome!
I am just not sure how safe it is to use Flux.cache() in this scenario. Am I flirting with disaster here? Is there a recommended way of reusing an existing Publisher with new Subscribers?
Flux.cache() is usually used to replay the last N elements of that Flux to new subscribers. Since you're setting that number to 0 here, it seems you're just interested in sharing resources, not in replaying the latest events to a new subscriber.
With that in mind, you can just use Flux.share() instead. This operator subscribes to your Redis instance as soon as the first subscriber comes in, and shares the resources with all the others. Once all subscribers are gone, the connection to your Redis instance is closed, until another subscriber comes in, and so on. See the sketch below.
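A minimal sketch of that change (written in Java, assuming the same ReactiveRedisOperations<String, String> bean as in the question):

import org.springframework.data.redis.core.ReactiveRedisOperations;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Flux;

@Service
public class MyService {

    private final Flux<String> redisChannelFlux;

    public MyService(ReactiveRedisOperations<String, String> redisTemplate) {
        this.redisChannelFlux = redisTemplate
                .listenToChannel("myChannel")
                .map(message -> message.getMessage())
                // share() multicasts to all current subscribers and cancels
                // the Redis subscription when the last subscriber is gone.
                .share();
    }

    public Flux<String> watch() {
        return this.redisChannelFlux;
    }
}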
I have a job with the following config:
@Autowired
private ConnectionFactory connectionFactory;

@Bean
Step step() {
    return steps.get("step")
            .<~>chunk(chunkSize)
            .reader(reader())
            .processor(processor())
            .writer(writer())
            .build();
}

@Bean
ItemReader<Person> reader() {
    return new AmqpItemReader<>(amqpTemplate());
}

@Bean
AmqpTemplate amqpTemplate() {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setChannelTransacted(true);
    return rabbitTemplate;
}
Is it possible to change the behavior of RabbitResourceHolder so that it does not requeue the message in case of a transaction rollback? Does that make sense in Spring Batch?
Not when using an external transaction manager; the whole point of rolling back a transaction is to put things back the way they were before the transaction started.
If you don't use transactions (or just use a local transaction - via setChannelTransacted(true) and no transaction manager), you (or an ErrorHandler) can throw an AmqpRejectAndDontRequeueException (or set defaultRequeueRejected to false on the container) and the message will go to the DLQ.
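A minimal sketch of the exception-based option, assuming a local transaction (channelTransacted, no transaction manager) and a queue configured with a dead-letter exchange; the listener class, queue name, and process() helper are hypothetical:

import org.springframework.amqp.AmqpRejectAndDontRequeueException;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class PersonListener {

    @RabbitListener(queues = "person.queue")
    public void handle(Person person) {
        try {
            process(person); // hypothetical processing step
        }
        catch (Exception e) {
            // Rejected without requeue: the broker routes the message to
            // the DLQ via the queue's dead-letter exchange.
            throw new AmqpRejectAndDontRequeueException("Failed to process message", e);
        }
    }

    private void process(Person person) {
        // ... real work here ...
    }
}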
I can see that this is inconsistent; the RabbitMQ documentation says:
On the consuming side, the acknowledgements are transactional, not the consuming of the messages themselves.
So RabbitMQ itself does not requeue the delivery but, as you point out, the resource holder does (but the container will reject the delivery when there is no transaction manager and one of the two conditions I described is true).
I think we need to provide at least an option for the behavior you want.
I opened AMQP-711.
I started a producer and a consumer concurrently. After 6 hours the producer had produced around 6 crore messages into the queue, and I then stopped it; the consumer, however, has been running continuously, and even after 18 hours there are still 4 crore messages in the queue. Could anyone please let me know why the consumer's performance is so slow?
Thanks in advance!
@Bean
public SimpleMessageListenerContainer listenerContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory());
    container.setQueueNames(this.queueName);
    container.setMessageListener(new MessageListenerAdapter(new TestMessageHandler(), new JsonMessageConverter()));
    return container;
}

@Bean
public ConnectionFactory connectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
    connectionFactory.setUsername("guest");
    connectionFactory.setPassword("guest");
    return connectionFactory;
}

@Bean
public RabbitTemplate rabbitTemplate() {
    RabbitTemplate template = new RabbitTemplate(connectionFactory());
    template.setMessageConverter(new JsonMessageConverter());
    template.setRoutingKey(this.queueName);
    template.setQueue(this.queueName);
    return template;
}

public class TestMessageHandler {

    // Receives messages from the container.
    public void handleMessage(MessageBeanTest msgBean) {
        // Storing bean data into CSV file
    }
}
As per Gary's suggestion, you can set these as follows. Check out @RabbitListener.
@Bean
public SimpleRabbitListenerContainerFactory listenerContainer() {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(baseConfig.connectionFactory());
    factory.setConcurrentConsumers(7); // choose a value
    factory.setPrefetchCount(1); // how many messages per consumer at a time
    factory.setMaxConcurrentConsumers(10); // choose a value
    factory.setDefaultRequeueRejected(false); // if you want to dead-letter
    return factory;
}
According to Wikipedia, a crore is 10,000,000, so you mean 60 million.
The container can only process messages as fast as your listener does - you need to analyze what you are doing with each message.
You also need to experiment with the container concurrency settings (concurrentConsumers), prefetch, etc., to obtain the optimum performance, but it will still be your listener that takes the majority of the processing time; the container itself has very little overhead. Increasing the concurrency won't help if your listener is not well constructed.
If you are using transactions, that will significantly slow down consumption.
Try using a listener that does nothing with the message; for example:
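A throwaway sketch (reusing the connectionFactory() and queueName from the question's config) that swaps in a no-op listener to measure the container's raw consumption rate:

import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;

// Same container as in the question, but with a listener that discards
// every message: any remaining slowness is then in the container/broker,
// not in your processing code.
@Bean
public SimpleMessageListenerContainer throughputTestContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory());
    container.setQueueNames(this.queueName);
    container.setMessageListener((MessageListener) message -> {
        // Intentionally empty: no conversion, no CSV write, no I/O.
    });
    return container;
}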
Finally, you should always show configuration when asking questions like this.