Using Flux.cache to reuse Redis channel subscription in webflux - spring-webflux

I am using Spring WebFlux to build an endpoint that will stream events received from a Redis channel subscription.
It's something like this:
class MyService(redisTemplate: ReactiveRedisOperations<String, String>) {

    private val redisChannelFlux = redisTemplate
        .listenToChannel("myChannel")
        .map { it.message }
        .cache(0) // turns this Flux into a reusable hot publisher

    fun watch(): Flux<String> {
        return redisChannelFlux
    }
}

class MyController(val svc: MyService) {

    @GetMapping("/api/watch", produces = [MediaType.TEXT_EVENT_STREAM_VALUE])
    fun watch(): Flux<String> {
        return svc.watch()
    }
}
It works. When a client subscribes to the /api/watch endpoint it starts receiving new events from the Redis channel, and I can confirm in the Redis monitor that "SUBSCRIBE" "myChannel" happens only once, regardless of how many clients are connected to my reactive endpoint. Awesome!
I am just not sure how safe it is to use Flux.cache() in this scenario. Am I flirting with disaster here? Is there a recommended way of reusing an existing Publisher with new Subscribers?

Flux.cache() is usually used for replaying the last N elements of a Flux to new subscribers. Since here you're setting that number to 0, it seems you're just interested in sharing resources, not replaying the latest events to a new subscriber.
With that in mind, you can just use Flux.share() instead. This operator will subscribe to your Redis instance as soon as the first subscriber comes in, and share the resources with all others. Once all subscribers are gone, the connection to your Redis instance is closed, until another subscriber comes, etc.
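For example, here is a minimal sketch of that change; it is written in Java rather than your Kotlin, and the names simply mirror the ones in your question:

class MyService {

    private final Flux<String> redisChannelFlux;

    MyService(ReactiveRedisOperations<String, String> redisTemplate) {
        this.redisChannelFlux = redisTemplate
                .listenToChannel("myChannel")
                .map(message -> message.getMessage())
                // multicasts to all current subscribers and unsubscribes
                // from Redis once the last subscriber cancels
                .share();
    }

    Flux<String> watch() {
        return this.redisChannelFlux;
    }
}

Note that share() is equivalent to publish().refCount(): the Redis subscription is created lazily on the first subscriber and torn down when the subscriber count drops to zero.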

Related

republishing message into RabbitMQ queue creates an infinite loop

I have a RabbitMQ queue that holds unprocessed messages. In the happy path, I read a message from the queue, process it, and remove it from the queue. But if certain criteria are met during processing, I have to republish the message to the queue. I am using a pollable channel adapter to fetch the messages. Since I want to fetch all the available messages in the queue while polling, I have set maxMessagesPerPoll to -1. This causes the code to go into an infinite loop: after the message is republished to the queue, the inbound polled adapter picks it up again immediately. How can I prevent this situation?
Is there any way to delay message delivery, or can we restrict processing to once per message within a single poll of the inbound polled adapter? What would be the best approach?
The inbound polled adapter is:
@Bean
public IntegrationFlow inboundIntegrationFlowPaymentRetry() {
    return IntegrationFlows
            .from(Amqp.inboundPolledAdapter(connectionFactory, RetryQueue),
                    e -> e.poller(Pollers.fixedDelay(20_000).maxMessagesPerPoll(-1)).autoStartup(true))
            .handle(message -> {
                channelRequestFromQueue()
                        .send(MessageBuilder.withPayload(message.getPayload()).copyHeaders(message.getHeaders())
                                .setHeader(IntegrationConstants.QUEUED_MESSAGE, message).build());
            }).get();
}
The first message is posted to the queue by:
@Bean
Binding bindingRetryQueue() {
    return BindingBuilder.bind(queueRetry()).to(exchangeRetry())
            .with(ProcessQueuedMessageService.RETRY_ROUTING_KEY);
}

@Bean
TopicExchange exchangeRetry() {
    return new TopicExchange(ProcessQueuedMessageService.RETRY_EXCHANGE);
}

@Bean
Queue queueRetry() {
    return new Queue(RetryQueue, false);
}

@Bean
@ServiceActivator(inputChannel = "channelAmqpOutbound")
public AmqpOutboundEndpoint outboundAmqp(AmqpTemplate amqpTemplate) {
    final AmqpOutboundEndpoint outbound = new AmqpOutboundEndpoint(amqpTemplate);
    outbound.setRoutingKey(RetryQueue);
    return outbound;
}
The message is republished by:
StaticMessageHeaderAccessor.getAcknowledgmentCallback(requeueMessage).acknowledge(Status.REQUEUE);
Is there any way to delay the message delivery
See Delayed Exchange feature in Rabbit MQ and its API in Spring AMQP: https://docs.spring.io/spring-amqp/docs/current/reference/html/#delayed-message-exchange
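As a rough sketch (assuming the rabbitmq_delayed_message_exchange plugin is enabled on the broker; the rabbitTemplate and payload names here are illustrative, not from your flow), you could declare the retry exchange as delayed and set a per-message delay when republishing:

@Bean
TopicExchange exchangeRetry() {
    TopicExchange exchange = new TopicExchange(ProcessQueuedMessageService.RETRY_EXCHANGE);
    exchange.setDelayed(true); // declared as an x-delayed-message exchange
    return exchange;
}

// Republish with a delay (e.g. 20 seconds), so the poller
// does not see the message again within the same cycle:
rabbitTemplate.convertAndSend(ProcessQueuedMessageService.RETRY_EXCHANGE,
        ProcessQueuedMessageService.RETRY_ROUTING_KEY, payload,
        message -> {
            message.getMessageProperties().setDelay(20_000); // milliseconds
            return message;
        });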
restrict the message processing once per message
For this scenario you can take a look at the Idempotent Receiver pattern and its implementation in Spring Integration: https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-endpoints.html#idempotent-receiver.
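As a sketch of that pattern (the "paymentId" header used as the idempotency key is an assumption, not something from your flow), an IdempotentReceiverInterceptor can discard messages whose key has already been seen:

@Bean
public IdempotentReceiverInterceptor idempotentReceiverInterceptor() {
    // Remembers each key in a MetadataStore; messages with an
    // already-seen key are sent to the discard channel instead.
    IdempotentReceiverInterceptor interceptor = new IdempotentReceiverInterceptor(
            new MetadataStoreSelector(
                    message -> message.getHeaders().get("paymentId", String.class)));
    interceptor.setDiscardChannelName("nullChannel"); // drop duplicates silently
    return interceptor;
}

The interceptor is then applied to the consuming endpoint, for example with the @IdempotentReceiver annotation on a handler method.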
The redelivered message is going to have an AmqpHeaders.REDELIVERED header.
See more in docs: https://www.rabbitmq.com/reliability.html#consumer-side
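So, as another sketch, your .handle() lambda could simply skip requeued deliveries:

.handle(message -> {
    Boolean redelivered = message.getHeaders()
            .get(AmqpHeaders.REDELIVERED, Boolean.class);
    if (Boolean.TRUE.equals(redelivered)) {
        return; // already delivered once; skip to break the loop
    }
    // ... existing channelRequestFromQueue().send(...) logic ...
})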

How to create queues and exchanges at application start?

I'm using RabbitMQ and Spring Boot, and I want all the declared queues and exchanges to be created when the application starts.
I have one exchange with two queues bound to it. I also have another queue that is not bound to any exchange.
The exchange declaration is like this:
@Bean
TopicExchange exchange() {
    return new TopicExchange(name, false, false);
}
And the queues:
@Bean
Queue queue1() {
    return new Queue(name, false);
}

@Bean
Binding bindingLogger(Queue queue1, TopicExchange exchange) {
    return BindingBuilder.bind(queue1).to(exchange).with("routingKey");
}
And the queue without binding:
@Bean
Queue queue2() {
    return new Queue(name, false);
}
I have also used the @Component annotation on these classes.
I think this setup is OK, because if I add a "dummy" @RabbitListener, all the queues and the exchange are created. Something like this:
@Component
public class DummyListener {

    @RabbitListener(queues = {FAKE_QUEUE_NAME})
    public void dummyMethod(Message message, Channel channel) {
        // Execution will never reach this point, because nothing
        // publishes to this queue. This method exists only so that
        // the queues and exchange are created on init.
    }
}
But I think this is a dirty solution: it requires creating a listener that will never be triggered and a queue that will never be used.
And, as I said before, the queue and exchange declarations work perfectly and everything is created at project start if this "dummy listener" is implemented.
So, how can I create the exchange and queues (if they don't exist) when the application starts? Is there a more elegant way?
I've read about RabbitAdmin, but I think that is for creating a new queue at runtime (actually, I don't know whether startup and runtime have to be handled differently).
Thanks in advance.
Those Declarables are declared on the RabbitMQ broker when a connection is opened.
That is exactly what happens when the listener container behind that @RabbitListener starts.
All the hard logic is done by the mentioned RabbitAdmin:
/**
 * If {@link #setAutoStartup(boolean) autoStartup} is set to true, registers a callback on the
 * {@link ConnectionFactory} to declare all exchanges and queues in the enclosing application context. If the
 * callback fails then it may cause other clients of the connection factory to fail, but since only exchanges,
 * queues and bindings are declared failure is not expected.
 *
 * @see InitializingBean#afterPropertiesSet()
 * @see #initialize()
 */
@Override
public void afterPropertiesSet() {
Another point of connection is, of course, a RabbitTemplate, when you produce a message to an exchange.
If you really are not going to do any consuming or producing, you can consider injecting an AmqpAdmin into your service and calling its initialize() when you need it:
/**
 * Declares all the exchanges, queues and bindings in the enclosing application context, if any. It should be safe
 * (but unnecessary) to call this method more than once.
 */
@Override // NOSONAR complexity
public void initialize() {
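A minimal sketch of that approach (the service name is illustrative):

@Service
public class DeclarationService {

    private final AmqpAdmin amqpAdmin;

    public DeclarationService(AmqpAdmin amqpAdmin) {
        this.amqpAdmin = amqpAdmin;
    }

    public void declareAll() {
        // Declares all the Declarable beans (queues, exchanges,
        // bindings) from the application context on the broker.
        this.amqpAdmin.initialize();
    }
}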
However, this raises the question: what is the point of having all those declarations in your application if you are not going to use them in any further logic? It looks like a mixing of concerns and an abuse of the AMQP API. It would be better to declare those entities outside of your application, e.g. using the RabbitMQ Management console or a command-line utility...
You can simply open the connection. If you are using Spring Boot, see this answer.
If you are not using Spring Boot, add a @Bean that implements SmartLifecycle and open the connection in start().
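For example, a minimal sketch of such a bean (assuming a Spring AMQP ConnectionFactory is configured):

@Bean
public SmartLifecycle declarationsStarter(ConnectionFactory connectionFactory) {
    return new SmartLifecycle() {

        private volatile boolean running;

        @Override
        public void start() {
            // Opening (and releasing) a connection fires RabbitAdmin's
            // declaration callback, creating the queues and exchanges.
            connectionFactory.createConnection().close();
            this.running = true;
        }

        @Override
        public void stop() {
            this.running = false;
        }

        @Override
        public boolean isRunning() {
            return this.running;
        }
    };
}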

ServiceStack RedisMqServer: No way to add or remove channels in runtime?

My (by now legacy) implementation of a pub/sub solution using ServiceStack quickly ran out of clients when it reached the 20-client limit.
We do something like:
_redisConsumer = MqClientFactory.Instance.GetRedisClient(); // Returns an IRedisClient
_subscription = _redisConsumer.CreateSubscription();
_subscription.OnSubscribe = channel => CoreLog.Instance.Info($"Subscription started on {eventChannelName}");
_subscription.OnUnSubscribe = channel => CoreLog.Instance.Warning($"Unsubscribed from {eventChannelName}");
_subscription.OnMessage = (channel, msg) =>
{
    try
    {
        onMessageReceived(CoreRequestJsonEnvelope.CreateCoreRequestFromJson(msg));
    }
    catch (Exception ex)
    {
        CoreLog.Instance.Exception(ex);
    }
};
// Since it blocks execution, we put this in a Task:
Task.Run(() =>
{
    try
    {
        _subscription.SubscribeToChannels(eventChannelName); // blocking
    }
    catch (Exception e)
    {
    }
});
and when we have enough different channels to listen to, it runs out.
I then thought that, instead of taking a new IRedisClient for each subscription, I could maybe use the same IRedisClient for all of them, so:
_redisConsumer = mySavedRedisClient;
...
but that returns "Unknown reply on multi-request" after a few seconds/executions.
Lastly, I looked at the RedisPubSubServer, but it seems that I need to specify the channels in the constructor and cannot change them after that. I do need to add and remove channels at runtime, and the channels are not known from the start.
What is the recommended approach?
Is it to increase the max limit and continue as before?
Is it to use RedisPubSub, and if so, how do I handle dynamic channels?
What does "unknown reply on multi-request" actually mean?
Thanks!
It's not clear what 20-client limit you're referring to or how the client limit depends on channels or subscribers, but if this is your app's own limit then it sounds like increasing it would be the easiest solution.
ServiceStack.Redis doesn't support changing the subscribed channels after a subscription has started. Instead of managing the IRedisSubscription yourself, you may want to consider ServiceStack.Redis's Managed Pub/Sub Server, which manages the background subscription thread with added resiliency and support for auto retries.
Whilst you can't change the subscribed channels at runtime, you can modify the Channels collection and restart the subscription to create a new subscription to the updated channels list, e.g.:
var pubSub = new RedisPubSubServer(clientsManager, chan1);
pubSub.Start();
//...
pubSub.Channels = new[] { chan1, chan2 };
pubSub.Restart();
Otherwise, depending on your use case, you may be able to subscribe to a channel pattern, which allows you to subscribe to multiple dynamic channels matching a wildcard channel pattern:
var pubSub = new RedisPubSubServer(clientsManager) {
ChannelsMatching = new[] { "chan:*" }
}
.Start();
Where it will handle any messages clients send that match the channel pattern.

Change RabbitMQ bindings

I have one application which acts as a publisher and regularly sends messages to an exchange, and a dozen others (subscribers) which are grouped semantically by topics. My problem is that the subscribers can move between different groups, hence their topic subscriptions should change, but I cannot figure out how to alter the bindings dynamically. Any ideas?
My config for each subscriber looks like this:
@Bean
TopicExchange exchange() {
    return new TopicExchange(exchangeName);
}

@Bean
Binding binding(Queue queue, TopicExchange exchange) {
    return BindingBuilder.bind(queue).to(exchange).with(routingKey);
}

@Bean
Queue queue(SystemInformationService systemInformationService) {
    return new Queue(systemInformationService.getInfo().getTemplateName() != null
            ? systemInformationService.getInfo().getTemplateName() : queueName, true);
}
P.S.: I must not restart my subscriber Spring Boot application; otherwise the solution would be quite obvious.
You cannot change bindings; you can, however, add and remove them.
You can either do that manually using the management UI or you can use a RabbitAdmin.
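For example, a rough sketch with an injected AmqpAdmin (the method and routing-key names are illustrative): remove the old binding and declare a new one, and the queue starts receiving messages for the new topic without a restart:

public void moveSubscriber(AmqpAdmin amqpAdmin, Queue queue, TopicExchange exchange,
        String oldRoutingKey, String newRoutingKey) {
    // Both operations take effect on the broker immediately.
    amqpAdmin.removeBinding(BindingBuilder.bind(queue).to(exchange).with(oldRoutingKey));
    amqpAdmin.declareBinding(BindingBuilder.bind(queue).to(exchange).with(newRoutingKey));
}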

How to wrap a Flux with a blocking operation in the subscribe?

In the documentation it is written that you should wrap blocking code into a Mono: http://projectreactor.io/docs/core/release/reference/#faq.wrap-blocking
But it is not written how to actually do it.
I have the following code:
@PostMapping(path = "some-path", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> doSomething(@Valid @RequestBody Flux<Something> something) {
    something.subscribe(item -> {
        // some blocking operation
    });
    // how to return Mono<Void> here?
}
The first problem I have here is that I need to return something, but I can't.
If I returned Mono.empty(), for example, the request would be closed before the work of the Flux is done.
The second problem is: how do I actually wrap the blocking code like the documentation suggests?
Mono blockingWrapper = Mono.fromCallable(() -> {
    return /* make a remote synchronous call */
});
blockingWrapper = blockingWrapper.subscribeOn(Schedulers.elastic());
You should not call subscribe within a controller handler; instead, just build a reactive pipeline and return it. Ultimately, the HTTP client will request data (through the Spring WebFlux engine), and that's what subscribes and requests data from the pipeline.
Subscribing manually will decouple the request processing from that other operation, which will 1) remove any guarantee about the order of operations and 2) break the processing if that other operation is using HTTP resources (such as the request body).
In this case, the source is not blocking; only the transform operation is. So we'd better use publishOn to signal that the rest of the chain should be executed on a specific Scheduler. If the operation here is I/O bound, then Schedulers.elastic() is the best choice; if it's CPU-bound, then Schedulers.parallel() is better. Here's an example:
@PostMapping(path = "/some-path", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> doSomething(@Valid @RequestBody Flux<Something> something) {
    return something.collectList()
            .publishOn(Schedulers.elastic())
            .map(things -> processThings(things))
            .then();
}

public ProcessingResult processThings(List<Something> things) {
    //...
}
For more information on that topic, check out the Scheduler section in the Reactor docs. If your application tends to do a lot of things like this, you're losing a lot of the benefits of reactive streams, and you might consider switching to a Servlet-based model where you can configure thread pools accordingly.