RabbitMQ consumer not working because Micronaut assigns executor threads to Kafka consumers

I am running Kafka in Micronaut v3.4.3 with Kotlin, and I recently integrated RabbitMQ into the server using micronaut-rabbitmq v3.4.0. The docs say to specify the executors for the RabbitMQ consumers in application.yml.
Now, when the server starts, the Kafka listeners are already holding the executor threads indefinitely, so the RabbitMQ consumers are never able to acquire one of those threads.
So, is there a way to segregate the consumer executor threads for Kafka and RabbitMQ?
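For reference, a sketch of the kind of segregation I mean. Micronaut lets you define named executors in application.yml under micronaut.executors, and micronaut-rabbitmq appears to let a consumer opt into one of them through the executor member of @Queue, so the RabbitMQ consumers would stop competing with the Kafka poll loops for the shared pool. The pool name rabbit-pool, its size, and the queue name are illustrative, and the executor member should be verified against the micronaut-rabbitmq 3.4.0 docs:

    # application.yml: a dedicated fixed pool for RabbitMQ consumers
    micronaut:
      executors:
        rabbit-pool:
          type: fixed
          nThreads: 10

A consumer would then be pinned to that pool instead of the default one that the Kafka listeners occupy indefinitely:

    import io.micronaut.rabbitmq.annotation.Queue
    import io.micronaut.rabbitmq.annotation.RabbitListener

    @RabbitListener
    class ProductListener {
        // runs on the dedicated pool, not the executor shared with Kafka
        @Queue(value = "product", executor = "rabbit-pool")
        fun receive(data: ByteArray) {
            println("Received ${data.size} bytes")
        }
    }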

Related

Flink connector for RabbitMQ Streams?

The RabbitMQ connector for Apache Flink apparently can't be used with RabbitMQ Streams; at least I wasn't able to make it work.
Does anyone have experience with this, and/or has anyone been able to connect Apache Flink to a RabbitMQ Stream queue?

How to batch process RabbitMQ messages in Quarkus

Is it possible to batch process RabbitMQ messages with Quarkus?
Based on the documentation, it seems that this is currently not supported, and there is no information on whether it's planned.

RabbitMQ auto-delete queues with timeouts

I have a k8s service that uses RabbitMQ as its message broker.
I want a specific queue to be deleted when the service deployment, which may have multiple pods, is stopped.
Reading the RabbitMQ Queues documentation, I found that the best option for my case is the auto-delete property of the queue.
Is there any option so that the auto-delete queue is not deleted immediately after the clients disconnect, but instead waits a few seconds for a reconnection?
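A sketch of the behavior I'm after, assuming the per-queue x-expires argument can stand in for auto-delete: RabbitMQ deletes such a queue only after it has been unused (no consumers) for the given number of milliseconds, which would provide exactly this reconnection window. Written against the RabbitMQ Java client from Kotlin; the queue name and the 30-second window are illustrative:

    import com.rabbitmq.client.ConnectionFactory

    fun main() {
        val factory = ConnectionFactory().apply { host = "localhost" } // broker host assumed
        factory.newConnection().use { connection ->
            connection.createChannel().use { channel ->
                // auto-delete would remove the queue the moment the last consumer
                // disconnects; x-expires instead deletes it only after 30 s without
                // any consumer, leaving time for the pods to reconnect
                val args = mapOf("x-expires" to 30_000)
                channel.queueDeclare("service-queue", true, false, false, args)
            }
        }
    }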

Configure MassTransit for both Kafka and RabbitMQ

I want to use MassTransit with RabbitMQ and Kafka as the message broker, and be able to decide in the deployment phase which one is used for transporting messages.
I know what ConsumeContext, ISendEndpointProvider, and IPublishEndpoint are, and according to the link these interfaces are shared between RabbitMQ and Kafka for publishing and consuming messages.
I configured both RabbitMQ and Kafka in the startup of my .NET Core app, and an appsettings value decides in the deployment phase which one gets registered.
Is this approach correct for having and configuring both Kafka and RabbitMQ in my app?
When using MassTransit, Kafka is a Rider, which is configured along with a bus – and the bus still requires a supported transport. If you have RabbitMQ, that would be your bus transport, and the Kafka Rider would exist alongside the bus, using the bus for publishing and sending messages.
With Kafka, messages can only be consumed or produced – they cannot be published, nor can they be sent. The ITopicProducer<T> interface is used to produce messages to Kafka topics.
Calling Publish or Send on the ConsumeContext will publish or send those messages to the rider's bus (which may be RabbitMQ, or any supported transport, including the InMemory transport). Producing messages to Kafka topics must be done using ITopicProducer<T> (which may be injected as a dependency into the consumer).
You can't "switch out" RabbitMQ with Kafka as the two services are very different.

JMS queue not synchronized across instances of a GlassFish cluster

I have a problem using Message Driven Beans in a clustered GlassFish 3.1.1. The problem is with the queue in GlassFish: the queue is not synchronized between the instances. I will try my best to explain the scenario below.
I created 2 instances in a GlassFish cluster, created a JMS QueueConnectionFactory, and created a JMS Queue, with their targets set to the cluster. Then I deployed the web application and the MessageDrivenBean module to the cluster. The web application sends a TextMessage to the JMS queue.
Everything works well up to here: the message is sent to the queue and served by the message-driven beans in both instances.
Then I disable the MessageDrivenBean module and request the web application, which sends messages to the JMS queue on both instances. Then I shut down myInstance2 and re-deploy the MDB to the cluster. Now here is the problem: the MessageDrivenBean only receives the messages of myInstance1, not the messages sent to myInstance2's queue. The messages in myInstance2's queue are only served when myInstance2 is started again. Can anyone help me with the settings GlassFish uses to synchronize the queue across both instances, so that when one instance is down and there are messages in that instance's queue, the other instance will take over those messages and serve them?
I am using OpenMQ and GlassFish 3.1.1, and I have turned on the HA (high availability) option in GlassFish, but it still does not work.
Thanks
The high-availability options for GlassFish and the high-availability options for Message Queue are configured separately. You need to configure your message queue cluster to be an "enhanced cluster" rather than a "conventional cluster". This is described in the GlassFish 3.1 High Availability Administration Guide.
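As a sketch, the reconfiguration is done with the asadmin configure-jms-cluster subcommand described in that guide; the cluster name, the JDBC store details, and the exact flag spellings below are assumptions to double-check against the 3.1 documentation, and the command has to be run while the cluster's instances are stopped:

    # switch the broker cluster from "conventional" to "enhanced" (HA) mode;
    # an enhanced cluster keeps its message store in a shared JDBC database,
    # so messages are no longer stranded on a stopped instance
    asadmin configure-jms-cluster --clustertype=enhanced \
        --configstoretype=jdbc --messagestoretype=jdbc \
        --dbvendor=mysql --dbuser=mquser \
        --dburl=jdbc:mysql://dbhost:3306/mqstore \
        myCluster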