Is there a way to have two exchanges or two queues for a single consumer, one for test messages and another for production messages? - rabbitmq

I have a requirement where, while deploying the RabbitMQ consumer component, any messages already on the queue should not be consumed immediately once deployment is done.
After the deployment there is a sanity test on this component; only once the sanity test is done should this consumer component start consuming messages from the queue.
I have set autoStartup = "false" on the consumer component so that the consumer will not consume messages once deployment is done.
After the sanity test is done, I start the listener container using a REST call.
The problem is that the sanity test also posts a message to the same queue. The sanity test is failing because the message it posts is waiting in the queue due to autoStartup = "false".
Is there a way for the sanity test message to be consumed while the production messages are still waiting on the queue?
@RabbitListener(id = LISTENER_ID,
        bindings = @QueueBinding(exchange = @Exchange(value = "${listener.exchange}", type = "topic"),
                value = @Queue(value = "${listener.queue}", durable = "true"), key = "${listener.routingKey}"),
        containerFactory = "rabbitListenerContainerFactory", autoStartup = "false")
public void receiveMessage(@Valid @Payload RequestMessage requestMessage,
        @Headers Map<String, Object> requestHeaders) {
    //some code
}

You would have to use 2 queues and 2 listeners (or change the listener to only listen on the sanity queue and then add the production queue to the listener container).
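One possible way to do the second option with Spring AMQP is to start the listener bound only to the sanity queue, then attach the production queue once the sanity test has passed. A minimal sketch, assuming the LISTENER_ID from the question and a hypothetical production queue name:

@Autowired
private RabbitListenerEndpointRegistry registry;

public void startForSanityTest() {
    // at this point the listener is bound only to the sanity queue
    container().start();
}

public void attachProductionQueue() {
    // call this once the sanity test has passed
    container().addQueueNames("production.queue"); // hypothetical queue name
}

private AbstractMessageListenerContainer container() {
    return (AbstractMessageListenerContainer) registry.getListenerContainer(LISTENER_ID);
}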

It is possible to have a single Queue listen to 2 exchanges.
It is also possible to have two queues listening to a single exchange.
RabbitMQ is all about bindings; all you need to do is set up the proper bindings. See the snippet below (Spring AMQP).
package com.savk.workout.spring.rabbitmqconversendreceivefanoutproducer;

import org.springframework.amqp.core.*;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AmqpConfig {
    private final String PREFIX = "savk-sndandrcv-fanout";
    private final String RK = PREFIX + "-" + "rk";
    private final String EXCHANGE = PREFIX + "-" + "exchange";
    private final String QUEUE = System.getenv("INSTANCE"); //PREFIX + "-" + "queue";

    @Bean
    public Exchange exchange() {
        return ExchangeBuilder.fanoutExchange(EXCHANGE).autoDelete().build();
    }

    @Bean
    public Queue queue() {
        return QueueBuilder.nonDurable(QUEUE).autoDelete().build();
    }

    @Bean
    public Queue queue2() {
        // the second queue needs its own name, otherwise both beans declare the same queue
        return QueueBuilder.nonDurable(QUEUE + "-2").autoDelete().build();
    }

    @Bean
    public Binding binding1(Exchange exchange) {
        return BindingBuilder.bind(queue()).to(exchange).with(RK).noargs();
    }

    @Bean
    public Binding binding2(Exchange exchange) {
        return BindingBuilder.bind(queue2()).to(exchange).with(RK).noargs();
    }
}

Related

How to check ActiveMQ queue size from a message listener

I am using a message listener to perform some actions on ActiveMQ queues, and I want to check the size of the queue while doing so.
I am using the logic below, but it only works outside the listener.
Any suggestions?
public class TestClass {
MessageConsumer consumerTransformation;
MessageListener listenerObjectTransformation;
public static void main(String []args) throws JMSException {
ActiveMQModel activeMQModelObject = new ActiveMQModel();
//String subject = "TRANSFORMATION_QUEUE";
String subject = "IMPORT_QUEUE";
//consumerTransformation = activeMQModelObject.getActiveMQConsumer(subject);
// Here we set the listener to listen to all the messages in the queue
//listenerObjectTransformation = new TransformationMessageListener();
//consumerTransformation.setMessageListener(listenerObjectTransformation);
boolean isQueueEmpty = activeMQModelObject.isMessageQueueEmpty(subject);
System.out.println("Size " + isQueueEmpty);
}
/*private class TransformationMessageListener implements MessageListener {
@Override
public void onMessage(Message messagearg) {
System.out.println("test....");
}
}*/
}
What is the way to check the ActiveMQ queue size from a message listener?
The JMS API does not define methods for checking queue size or other metrics from a client; the API is meant to decouple the clients from any server administration and from each other. A sender has no awareness of the receivers that might or might not be there, and the receiver is unaware of who might be producing or whether there is anything to consume at that given moment. By using the asynchronous listener you are subscribing for content either currently available or content yet to be produced.
You can in some cases make use of the JMX metrics that are available from the server in your code, but this is not good practice.
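If you do go the JMX route, here is a minimal sketch, assuming the broker exposes JMX on localhost:1099 under the broker name "localhost" (adjust the service URL, broker name, and queue name to your setup):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueSizeCheck {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // destination MBean naming scheme used by ActiveMQ 5.8+
            ObjectName queueMBean = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                            + "destinationType=Queue,destinationName=IMPORT_QUEUE");
            Number queueSize = (Number) connection.getAttribute(queueMBean, "QueueSize");
            System.out.println("QueueSize = " + queueSize.longValue());
        } finally {
            connector.close();
        }
    }
}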

Correct way of using spring webclient in spring amqp

I have the below tech stack for a Spring AMQP application consuming messages from RabbitMQ -
Spring boot 2.2.6.RELEASE
Reactor Netty 0.9.12.RELEASE
Reactor Core 3.3.10.RELEASE
Application is deployed on 4 core RHEL.
Below are some of the configurations being used for rabbitmq
@Bean
public CachingConnectionFactory connectionFactory() {
CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory();
cachingConnectionFactory.setHost(<<HOST NAME>>);
cachingConnectionFactory.setUsername(<<USERNAME>>);
cachingConnectionFactory.setPassword(<<PASSWORD>>);
cachingConnectionFactory.setChannelCacheSize(50);
return cachingConnectionFactory;
}
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(connectionFactory());
factory.setMaxConcurrentConsumers(50);
factory.setMessageConverter(new Jackson2JsonMessageConverter());
factory.setDefaultRequeueRejected(false); /** DLQ is in place **/
return factory;
}
The consumers make downstream API calls using Spring WebClient in synchronous mode. Below is the configuration for the WebClient:
@Bean
public WebClient webClient() {
ConnectionProvider connectionProvider = ConnectionProvider
.builder("fixed")
.lifo()
.pendingAcquireTimeout(Duration.ofMillis(200000))
.maxConnections(16)
.pendingAcquireMaxCount(3000)
.maxIdleTime(Duration.ofMillis(290000))
.build();
HttpClient client = HttpClient.create(connectionProvider);
client.tcpConfiguration(<<connection timeout, read timeout, write timeout is set here....>>);
WebClient.Builder builder =
WebClient.builder().baseUrl(<<base URL>>).clientConnector(new ReactorClientHttpConnector(client));
return builder.build();
}
This WebClient is autowired into a @Service class as
@Autowired
private WebClient webClient;
and used as below in two places. The first place is a single call -
public DownstreamStatusEnum downstream(String messageid, String payload, String contentType) {
return call(messageid,payload,contentType);
}
private DownstreamStatusEnum call(String messageid, String payload, String contentType) {
DownstreamResponse response = sendRequest(messageid,payload,contentType).block();
return response;
}
private Mono<DownstreamResponse> sendRequest(String messageid, String payload, String contentType) {
return webClient
.method(POST)
.uri(<<URI>>)
.contentType(MediaType.valueOf(contentType))
.body(BodyInserters.fromValue(payload))
.exchange()
.flatMap(response -> response.bodyToMono(DownstreamResponse.class));
}
The other place requires parallel downstream calls and has been implemented as below
private Flux<DownstreamResponse> getValues (List<DownstreamRequest> reqList, String messageid) {
return Flux
.fromIterable(reqList)
.parallel()
.runOn(Schedulers.elastic())
.flatMap(s -> {
return webClient
.method(POST)
.uri(<<downstream url>>)
.body(BodyInserters.fromValue(s))
.exchange()
.flatMap(response -> {
if(response.statusCode().isError()) {
return Mono.just(new DownstreamResponse());
}
return response.bodyToMono(DownstreamResponse.class);
});
}).sequential();
}
public List<DownstreamResponse> updateValue (List<DownstreamRequest> reqList,String messageid) {
return getValues(reqList,messageid).collectList().block();
}
The application has been working fine for the past year or so. Of late, we are seeing an issue whereby one or more consumers seem to just get stuck with the default prefetch (250) number of messages in unacked status. The only way to fix the issue is to restart the app.
We have not done any code changes recently, and there have been no infra changes recently either.
When this happens, we took thread dumps. The pattern observed is similar: most of the consumer threads are in TIMED_WAITING status while one or two consumers show in WAITING state with the below stacks -
"org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#0-13" waiting for condition ...
java.lang.Thread.State: WAITING (parking)
- parking to wait for ......
at .......
at .......
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(......
at reactor.core.publisher.Mono.block(....
at .........WebClientServiceImpl.call(...
Also see below -
"org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#0-13" waiting for condition ...
java.lang.Thread.State: WAITING (parking)
- parking to wait for ......
at .......
at .......
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(......
at reactor.core.publisher.Mono.block(....
at .........WebClientServiceImpl.updateValue(...
Not exactly sure if this thread dump is showing that the consumer threads are actually stuck at this "block" call.
Please help advise what could be the issue here and what steps need to be taken to fix it. Earlier we thought it might be some issue with RabbitMQ/Spring AMQP, but based on the thread dump it looks like an issue with the WebClient "block" call.
On adding BlockHound, it is printing the below stack trace in the log file -
Error has been observed at following site(s)
Checkpoint Request to POST https://....... [DefaultWebClient]
Stack Trace:
at java.lang.Object.wait
......
at java.net.InetAddress.checkLookupTable
at java.net.InetAddress.getAddressFromNameService
......
at io.netty.util.internal.SocketUtils$8.run
......
at io.netty.resolver.DefaultNameResolver.doResolve
Sorry, I just realized that the flatMap in the parallel flux call was actually like below -
.flatMap(response -> {
if(response.statusCode().isError()) {
return Mono.just(new DownstreamResponse());
}
return response.bodyToMono(DownstreamResponse.class);
});
So in error scenarios, I think the underlying connection was not being properly released. When I updated it as below, it seemed to fix the issue -
.flatMap(response -> {
if(response.statusCode().isError()) {
// release the unread body so the pooled connection is returned, then emit a placeholder
return response.releaseBody().thenReturn(new DownstreamResponse());
}
return response.bodyToMono(DownstreamResponse.class);
});

Reading messages from rabbitMQ queue at an interval is not working

What I am trying to achieve is to read messages from a RabbitMQ queue every 15 minutes. From the documentation, I could see that I can use the "receiveTimeout" method to set the interval.
Polling Consumer
The AmqpTemplate itself can be used for polled Message reception. By default, if no message is available, null is returned immediately. There is no blocking. Starting with version 1.5, you can set a receiveTimeout, in milliseconds, and the receive methods block for up to that long, waiting for a message.
But when I tried implementing it with Spring Integration, the receiveTimeout is not working as I expected.
My test code is given below.
@Bean
Queue createMessageQueue() {
return new Queue(RetryQueue, false);
}
@Bean
public SimpleMessageListenerContainer QueueMessageListenerContainer(ConnectionFactory connectionFactory) {
final SimpleMessageListenerContainer messageListenerContainer = new SimpleMessageListenerContainer(
connectionFactory);
messageListenerContainer.setQueueNames(RetryQueue);
messageListenerContainer.setReceiveTimeout(900000);
return messageListenerContainer;
}
@Bean
public AmqpInboundChannelAdapter inboundQueueChannelAdapter(
@Qualifier("QueueMessageListenerContainer") AbstractMessageListenerContainer messageListenerContainer) {
final AmqpInboundChannelAdapter amqpInboundChannelAdapter = new AmqpInboundChannelAdapter(
messageListenerContainer);
amqpInboundChannelAdapter.setOutputChannelName("channelRequestFromQueue");
return amqpInboundChannelAdapter;
}
@ServiceActivator(inputChannel = "channelRequestFromQueue")
public void activatorRequestFromQueue(Message<String> message) {
System.out.println("Message: " + message.getPayload() + ", recieved at: " + LocalDateTime.now());
}
I am getting the payload logged in the console in near real time.
Can anyone help? How long will the consumer be active once it starts?
UPDATE
The IntegrationFlow I used to retrieve messages from the queue at an interval:
@Bean
public IntegrationFlow inboundIntegrationFlowPaymentRetry() {
return IntegrationFlows
.from(Amqp.inboundPolledAdapter(connectionFactory, RetryQueue),
e -> e.poller(Pollers.fixedDelay(20_000).maxMessagesPerPoll(-1)).autoStartup(true))
.handle(message -> {
channelRequestFromQueue()
.send(MessageBuilder.withPayload(message.getPayload()).copyHeaders(message.getHeaders())
.setHeader(IntegrationConstants.QUEUED_MESSAGE, message).build());
}).get();
}
The Polling Consumer documentation is from the Spring AMQP documentation about the RabbitTemplate, and has nothing to do with the listener container or Spring Integration.
https://docs.spring.io/spring-amqp/docs/current/reference/html/#polling-consumer
Spring Integration's adapter is message-driven and you will get messages whenever they are available.
To get messages on demand, you need to call the RabbitTemplate on whatever interval you want.
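A minimal sketch of such on-demand polling with the RabbitTemplate, assuming a scheduling mechanism such as @Scheduled (with @EnableScheduling) and the RetryQueue name from the question:

@Autowired
private RabbitTemplate rabbitTemplate;

@Scheduled(fixedDelay = 900_000) // every 15 minutes
public void drainRetryQueue() {
    Object payload;
    // receiveAndConvert returns null as soon as the queue is empty
    while ((payload = rabbitTemplate.receiveAndConvert("RetryQueue")) != null) {
        System.out.println("Polled message: " + payload + ", received at: " + LocalDateTime.now());
    }
}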

RabbitMQ delayed message not working

I am trying to use the delayed exchange plugin, but somehow it's not working for me and messages are received without delay.
I tried the following things:
a) Enabled rabbitmq_delayed_message_exchange successfully and restarted the RabbitMQ server on Ubuntu 16.04.
b) Declaring exchange
Map<String,Object> props = new HashMap<String,Object>();
props.put("x-delayed-type", "direct");
this.automationExchange = new DirectExchange(exchangeName,true,false, props);
c) Pushing message as
DefaultClassMapper typeMapper = QueueUtils.classMapper;
typeMapper.setDefaultType(type);
Jackson2JsonMessageConverter converter = QueueUtils.converter;
converter.setClassMapper(typeMapper);
RabbitTemplate template = AMQPRabbitMQTemplate.getAMQPTemplate();
template.setMessageConverter(converter);
template.convertAndSend(routingKey, message, new MessagePostProcessor() {
@Override
public Message postProcessMessage(Message m) throws AmqpException {
m.getMessageProperties().setDelay(delayMiliSeconds);
m.getMessageProperties().setDeliveryMode(MessageDeliveryMode.PERSISTENT);
return m;
}
});
Now when I am printing the message
public void onMessage(Message message, Channel channel) throws Exception{
System.out.println(message.getMessageProperties().getDelay());
channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
}
It is printing null for getDelay, which ideally should be the negative of the set value as per https://www.rabbitmq.com/blog/2015/04/16/scheduling-messages-with-rabbitmq.
Please let me know if I am doing something wrong.
I am using the 1.6.8.RELEASE version of spring-amqp and spring-rabbit.
In order to avoid unexpected propagation of headers from an inbound message to an outbound message, certain headers for inbound messages are provided by MessageProperties.getReceived... methods.
In this case, the header is in MessageProperties.getReceivedDelay().
You also need setDelayed(true) on automationExchange before declaring it with the admin.
I presume you have set the exchange as the default in the RabbitTemplate too.
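A minimal sketch of declaring the exchange as delayed, assuming a RabbitAdmin declares the beans (the exchange name here is illustrative):

@Bean
public DirectExchange automationExchange() {
    DirectExchange exchange = new DirectExchange("automation.exchange", true, false);
    // declares the exchange with type x-delayed-message and x-delayed-type=direct
    exchange.setDelayed(true);
    return exchange;
}

On the consumer side, read the delay with message.getMessageProperties().getReceivedDelay() rather than getDelay().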

Delayed message in RabbitMQ

Is it possible to send a message via RabbitMQ with some delay?
For example, I want to expire a client session after 30 minutes, so I send a message which will be processed after 30 minutes.
There are two approaches you can try:
Old approach: set the TTL (time to live) header on each message, or on the queue via a policy, and introduce a dead-letter exchange to handle it. Once the TTL expires, the message is dead-lettered from the holding queue to the main queue so that your listener can process it.
Latest approach: RabbitMQ came up with the RabbitMQ Delayed Message Plugin, with which you can achieve the same thing; the plugin is available since RabbitMQ 3.5.8.
You can declare an exchange with the type x-delayed-message and then publish messages with the custom header x-delay expressing the delay time for the message in milliseconds. The message will be delivered to the respective queues after x-delay milliseconds (the exchange declaration itself is sketched after the publishing snippet below).
byte[] messageBodyBytes = "delayed payload".getBytes("UTF-8");
Map<String, Object> headers = new HashMap<String, Object>();
headers.put("x-delay", 5000);
AMQP.BasicProperties.Builder props = new
AMQP.BasicProperties.Builder().headers(headers);
channel.basicPublish("my-exchange", "", props.build(), messageBodyBytes);
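For completeness, the delayed exchange itself must be declared with the type x-delayed-message; a minimal sketch with the RabbitMQ Java client, assuming direct routing underneath:

Map<String, Object> exchangeArgs = new HashMap<String, Object>();
exchangeArgs.put("x-delayed-type", "direct");
// "x-delayed-message" is the exchange type provided by the delayed-message plugin
channel.exchangeDeclare("my-exchange", "x-delayed-message", true, false, exchangeArgs);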
More here: git
With the release of RabbitMQ v2.8, scheduled delivery is now available but as an indirect feature: http://www.javacodegeeks.com/2012/04/rabbitmq-scheduled-message-delivery.html
Thanks to Norman's answer, I could implement it in Node.js.
Everything is pretty clear from the code.
var ch = channel;
ch.assertExchange("my_intermediate_exchange", 'fanout', {durable: false});
ch.assertExchange("my_final_delayed_exchange", 'fanout', {durable: false});
// setup intermediate queue which will never be listened.
// all messages are TTLed so when they are "dead", they come to another exchange
ch.assertQueue("my_intermediate_queue", {
deadLetterExchange: "my_final_delayed_exchange",
messageTtl: 5000, // 5sec
}, function (err, q) {
ch.bindQueue(q.queue, "my_intermediate_exchange", '');
});
ch.assertQueue("my_final_delayed_queue", {}, function (err, q) {
ch.bindQueue(q.queue, "my_final_delayed_exchange", '');
ch.consume(q.queue, function (msg) {
console.log("delayed - [x] %s", msg.content.toString());
}, {noAck: true});
});
As I don't have enough reputation to add a comment, I am posting a new answer. This is just an addition to what has already been discussed at http://www.javacodegeeks.com/2012/04/rabbitmq-scheduled-message-delivery.html
Except that instead of setting the TTL on messages, you can set it at the queue level. You can also avoid creating a new exchange just for the sake of redirecting the messages to a different queue. Here is sample Java code:
Producer:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;
public class DelayedProducer {
private final static String QUEUE_NAME = "ParkingQueue";
private final static String DESTINATION_QUEUE_NAME = "DestinationQueue";
public static void main(String[] args) throws Exception{
ConnectionFactory connectionFactory = new ConnectionFactory();
connectionFactory.setHost("localhost");
Connection connection = connectionFactory.newConnection();
Channel channel = connection.createChannel();
Map<String, Object> arguments = new HashMap<String, Object>();
arguments.put("x-message-ttl", 10000);
arguments.put("x-dead-letter-exchange", "");
arguments.put("x-dead-letter-routing-key", DESTINATION_QUEUE_NAME );
channel.queueDeclare(QUEUE_NAME, false, false, false, arguments);
for (int i=0; i<5; i++) {
String message = "This is a sample message " + i;
channel.basicPublish("", QUEUE_NAME, null, message.getBytes());
System.out.println("message "+i+" got published to the queue!");
Thread.sleep(3000);
}
channel.close();
connection.close();
}
}
Consumer:
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.QueueingConsumer;
public class Consumer {
private final static String DESTINATION_QUEUE_NAME = "DestinationQueue";
public static void main(String[] args) throws Exception{
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare(DESTINATION_QUEUE_NAME, false, false, false, null);
System.out.println(" [*] Waiting for messages. To exit press CTRL+C");
QueueingConsumer consumer = new QueueingConsumer(channel);
boolean autoAck = false;
channel.basicConsume(DESTINATION_QUEUE_NAME, autoAck, consumer);
while (true) {
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
String message = new String(delivery.getBody());
System.out.println(" [x] Received '" + message + "'");
channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
}
}
}
It looks like this blog post describes using the dead letter exchange and message ttl to do something similar.
The code below uses CoffeeScript and Node.js to access Rabbit and implement something similar.
amqp = require 'amqp'
events = require 'events'
em = new events.EventEmitter()
conn = amqp.createConnection()

key = "send.later.#{new Date().getTime()}"

conn.on 'ready', ->
  conn.queue key, {
    arguments: {
      "x-dead-letter-exchange": "immediate"
      , "x-message-ttl": 5000
      , "x-expires": 6000
    }
  }, ->
    conn.publish key, {v:1}, {contentType:'application/json'}

  conn.exchange 'immediate'

  conn.queue 'right.now.queue', {
    autoDelete: false
    , durable: true
  }, (q) ->
    q.bind('immediate', 'right.now.queue')
    q.subscribe (msg, headers, deliveryInfo) ->
      console.log msg
      console.log headers
That's currently not possible. You have to store your expiration timestamps in a database or something similar, and then have a helper program that reads those timestamps and queues a message.
Delayed messages are an often requested feature, as they're useful in many situations. However, if your need is to expire client sessions I believe that messaging is not the ideal solution for you, and that another approach might work better.
Supposing you have control over the consumer, you could achieve the delay on the consumer side like this:
If we are sure that the nth message in the queue always has a smaller delay than the (n+1)th message (this can be true for many use cases), the producer sends timeInformation in the task conveying the time at which this job needs to be executed (currentTime + delay). The consumer:
1) Reads the scheduledTime from the task.
2) If currentTime > scheduledTime, goes ahead; else computes delay = scheduledTime - currentTime and sleeps for the time indicated by delay.
The consumer is always configured with a concurrency parameter, so the other messages will just wait in the queue until a consumer finishes the job. So this solution could work well, though it looks awkward, especially for big time delays, as sketched below.
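A minimal sketch of that idea with Spring AMQP, assuming the producer sets a hypothetical scheduledTime header holding epoch milliseconds:

public void onMessage(Message message, Channel channel) throws Exception {
    long scheduledTime = Long.parseLong(
            message.getMessageProperties().getHeaders().get("scheduledTime").toString());
    long delay = scheduledTime - System.currentTimeMillis();
    if (delay > 0) {
        // block this consumer thread until the scheduled time is reached
        Thread.sleep(delay);
    }
    // ... process the message ...
    channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
}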
The AMQP protocol does not support delayed messaging, but by using the Time-To-Live and Expiration and Dead Letter Exchange extensions, delayed messaging is possible. The solution is described in this link; I copied the following steps from that link. A Spring AMQP sketch of the delayed-queue declaration follows the steps.
Step by step:
Declare the delayed queue
Add the x-dead-letter-exchange argument property, and set it to the default exchange "".
Add the x-dead-letter-routing-key argument property, and set it to the name of the destination queue.
Add the x-message-ttl argument property, and set it to the number of milliseconds you want to delay the message.
Subscribe to the destination queue
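A minimal Spring AMQP sketch of the delayed-queue declaration described in the steps above (the queue names and the 30-second TTL are illustrative):

@Bean
public Queue delayedQueue() {
    return QueueBuilder.durable("delay.queue")
            .withArgument("x-dead-letter-exchange", "")                      // default exchange
            .withArgument("x-dead-letter-routing-key", "destination.queue")
            .withArgument("x-message-ttl", 30000)                            // delay in milliseconds
            .build();
}

@Bean
public Queue destinationQueue() {
    return QueueBuilder.durable("destination.queue").build();
}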
There is also a plugin for delayed messaging in the RabbitMQ repository on GitHub.
Note that there is also a solution called Celery, which supports delayed task queuing on a RabbitMQ broker through its apply_async() calling API. Celery supports Python, Node.js, and PHP.