RabbitMQ CLI status different from management portal

I've noticed that when I run the RabbitMQ status command from the command line (rabbitmqctl status), all of the reported numbers are way out of whack with what I know to be reality. My reality is confirmed by what I see in the Management web portal.
Here's an output of the CLI status:
[{pid,26647},
{running_applications,
[{rabbitmq_management,"RabbitMQ Management Console","3.5.4"},
{rabbitmq_web_dispatch,"RabbitMQ Web Dispatcher","3.5.4"},
{webmachine,"webmachine","1.10.3-rmq3.5.4-gite9359c7"},
{mochiweb,"MochiMedia Web Server","2.7.0-rmq3.5.4-git680dba8"},
{rabbitmq_management_agent,"RabbitMQ Management Agent","3.5.4"},
{rabbit,"RabbitMQ","3.5.4"},
{os_mon,"CPO CXC 138 46","2.3"},
{inets,"INETS CXC 138 49","5.10.4"},
{mnesia,"MNESIA CXC 138 12","4.12.4"},
{amqp_client,"RabbitMQ AMQP Client","3.5.4"},
{xmerl,"XML parser","1.3.7"},
{sasl,"SASL CXC 138 11","2.4.1"},
{stdlib,"ERTS CXC 138 10","2.3"},
{kernel,"ERTS CXC 138 10","3.1"}]},
{os,{unix,linux}},
{erlang_version,
"Erlang/OTP 17 [erts-6.3] [source] [64-bit] [smp:4:4] [async-threads:64] [kernel-poll:true]\n"},
{memory,
[{total,45994136},
{connection_readers,101856},
{connection_writers,54800},
{connection_channels,165968},
{connection_other,373008},
{queue_procs,175376},
{queue_slave_procs,0},
{plugins,437024},
{other_proc,13385792},
{mnesia,131904},
{mgmt_db,484216},
{msg_index,53112},
{other_ets,1119384},
{binary,3890640},
{code,20097289},
{atom,711569},
{other_system,4812198}]},
{alarms,[]},
{listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},
{vm_memory_high_watermark,0.4},
{vm_memory_limit,787190579},
{disk_free_limit,50000000},
{disk_free,778919936},
{file_descriptors,
[{total_limit,924},
{total_used,13},
{sockets_limit,829},
{sockets_used,11}]},
{processes,[{limit,1048576},{used,350}]},
{run_queue,0},
{uptime,911}]
The numbers for readers, writers, channels, and so on, basically every number, are thousands of times larger than they should be.
The numbers I see in the management portal (screenshot below) are correct: 10 total connections, each with two channels.
All of my queues are non-durable and I'm only sending non-persistent messages using fanout exchanges. As I understand it, this should mean nothing is ever persisted should something go wrong (which is fine for my needs).
I have noticed that whenever I spin up or down one of the modules that connects to the broker, the number of readers/writers goes up by ~17,000 on the command line, despite only going up/down 1 in the portal.
Here's my broker configuration code for reference:
private String endPoint;
private int port;
private String userName;
private String password;
private Exchange publisherExchange;
private ExchangeType publisherExchangeType;
private Map<Exchange, ExchangeType> subscriptionExchanges;
private Connection connection; // shared by both channels below
private Channel publishChannel;
private Channel subscriptionChannel;
private Consumer consumer;

private BrokerHandler(BrokerHandlerBuilder builder) throws ConnectException {
    this.endPoint = builder.endPoint;
    this.port = builder.port;
    this.userName = builder.userName;
    this.password = builder.password;
    this.publisherExchange = builder.publisherExchange;
    this.publisherExchangeType = builder.publisherExchangeType;
    this.subscriptionExchanges = builder.subscriptionExchanges;
    connect();
}

private void connect() throws ConnectException {
    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost(this.endPoint);
    factory.setPort(this.port);
    if (this.userName != null && this.password != null) {
        factory.setUsername(this.userName);
        factory.setPassword(this.password);
        factory.setAutomaticRecoveryEnabled(true);
        factory.setNetworkRecoveryInterval(RMQConstants.RABBITMQ_MAX_RETRY_DELAY);
    }
    try {
        log.info("Registering with broker on topic " + this.publisherExchange.toString() + " on " + this.endPoint + "...");
        connection = factory.newConnection();
        publishChannel = connection.createChannel();
        subscriptionChannel = connection.createChannel();
        configureConsumer();
        publishChannel.exchangeDeclare(this.publisherExchange.toString(), this.publisherExchangeType.toString());
    } catch (Exception e) {
        throw new ConnectException("Unable to connect to RabbitMQ broker.");
    }
    if (this.subscriptionExchanges.size() > 0) {
        subscribe(this.subscriptionExchanges);
    }
}
/**
 * Allows callers to publish a message to the broker, which will be broadcast to all listeners using a FANOUT strategy.
 * @throws ConnectException if the handler is not connected to the broker
 */
private void publishToBroker(String msg) throws ConnectException {
    try {
        publishChannel.basicPublish(this.publisherExchange.toString(), "", null, msg.getBytes());
    } catch (IOException e) {
        log.error("Unable to write message to broker.", e);
    }
}

private void subscribe(Map<Exchange, ExchangeType> exchanges) {
    try {
        // one anonymous, server-named queue receives messages from all subscribed exchanges
        String queueName = subscriptionChannel.queueDeclare().getQueue();
        exchanges.forEach((k, v) -> {
            try {
                subscriptionChannel.exchangeDeclare(k.toString(), v.toString());
                subscriptionChannel.queueBind(queueName, k.toString(), "");
            } catch (Exception e) {
                log.error("Error declaring exchanges for exchange: " + k.toString(), e);
            }
        });
        subscriptionChannel.basicConsume(queueName, true, consumer);
    } catch (Exception e) {
        log.error("Error declaring a queue for subscription channel", e);
    }
}

private void configureConsumer() {
    consumer = new DefaultConsumer(subscriptionChannel) {
        @Override
        public void handleDelivery(String consumerTag, Envelope envelope,
                AMQP.BasicProperties properties, byte[] body) throws IOException {
            String message = new String(body, "UTF-8");
            handleMessage(message);
        }
    };
}
Clients use a builder pattern to instantiate a broker connection, at which point they specify their publishing exchange and any number of exchanges they wish to subscribe to. There are only 19 exchanges in total in this system.
Messages are being published and received properly, but I've been getting reports that the broker is bogging down the servers. I'll be monitoring it more closely, but I'd really like to be able to explain these wacky results from the status call. I have tried stopping the app, then resetting and reconfiguring the broker, which brings the connection counts back to 0, but once modules start reconnecting the numbers climb again.
Thanks for taking the time to look through this. Any advice would be greatly appreciated!

The connection_readers, connection_writers, connection_channels, etc. figures you report are in the memory section of the output, and their units are bytes, not numbers of connections. This is entirely different data from the section of the Management UI you are comparing against.
To get connection counts through the CLI, use the rabbitmqctl list_connections command. Among the columns it can report:
channels
    Number of channels using the connection.
See also list_exchanges, list_queues, and list_consumers.
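For example, to list one row per connection together with its channel count (these are documented connectioninfoitems for rabbitmqctl; the available columns vary slightly by version):
rabbitmqctl list_connections name channels
With 10 connections of two channels each, this should print ten rows, each reporting 2 channels, matching the Management UI rather than the per-connection byte totals in the memory section above.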

Related

Correct way of using spring webclient in spring amqp

I have the below tech stack for a Spring AMQP application consuming messages from RabbitMQ -
Spring boot 2.2.6.RELEASE
Reactor Netty 0.9.12.RELEASE
Reactor Core 3.3.10.RELEASE
The application is deployed on 4-core RHEL.
Below are some of the configurations being used for RabbitMQ:
@Bean
public CachingConnectionFactory connectionFactory() {
    CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory();
    cachingConnectionFactory.setHost(<<HOST NAME>>);
    cachingConnectionFactory.setUsername(<<USERNAME>>);
    cachingConnectionFactory.setPassword(<<PASSWORD>>);
    cachingConnectionFactory.setChannelCacheSize(50);
    return cachingConnectionFactory;
}

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    factory.setMaxConcurrentConsumers(50);
    factory.setMessageConverter(new Jackson2JsonMessageConverter());
    factory.setDefaultRequeueRejected(false); // DLQ is in place
    return factory;
}
The consumers make downstream API calls using Spring WebClient in synchronous mode. Below is the configuration for the WebClient:
@Bean
public WebClient webClient() {
    ConnectionProvider connectionProvider = ConnectionProvider
            .builder("fixed")
            .lifo()
            .pendingAcquireTimeout(Duration.ofMillis(200000))
            .maxConnections(16)
            .pendingAcquireMaxCount(3000)
            .maxIdleTime(Duration.ofMillis(290000))
            .build();
    HttpClient client = HttpClient.create(connectionProvider);
    client = client.tcpConfiguration(<<connection timeout, read timeout, write timeout is set here....>>);
    WebClient.Builder builder =
            WebClient.builder().baseUrl(<<base URL>>).clientConnector(new ReactorClientHttpConnector(client));
    return builder.build();
}
This WebClient is autowired into a @Service class as
@Autowired
private WebClient webClient;
and used as below in two places. The first place is a single call -
public DownstreamStatusEnum downstream(String messageid, String payload, String contentType) {
    return call(messageid, payload, contentType);
}

private DownstreamStatusEnum call(String messageid, String payload, String contentType) {
    DownstreamResponse response = sendRequest(messageid, payload, contentType).block();
    return response;
}

private Mono<DownstreamResponse> sendRequest(String messageid, String payload, String contentType) {
    return webClient
            .method(POST)
            .uri(<<URI>>)
            .contentType(MediaType.valueOf(contentType))
            .body(BodyInserters.fromValue(payload))
            .exchange()
            .flatMap(response -> response.bodyToMono(DownstreamResponse.class));
}
The other place requires parallel downstream calls and has been implemented as below:
private Flux<DownstreamResponse> getValues(List<DownstreamRequest> reqList, String messageid) {
    return Flux
            .fromIterable(reqList)
            .parallel()
            .runOn(Schedulers.elastic())
            .flatMap(s -> {
                return webClient
                        .method(POST)
                        .uri(<<downstream url>>)
                        .body(BodyInserters.fromValue(s))
                        .exchange()
                        .flatMap(response -> {
                            if (response.statusCode().isError()) {
                                return Mono.just(new DownstreamResponse());
                            }
                            return response.bodyToMono(DownstreamResponse.class);
                        });
            }).sequential();
}

public List<DownstreamResponse> updateValue(List<DownstreamRequest> reqList, String messageid) {
    return getValues(reqList, messageid).collectList().block();
}
The application has been working fine for the past year or so. Of late, we are seeing an issue whereby one or more consumers seem to just get stuck with the default prefetch (250) of messages in unacked status. The only way to fix the issue is to restart the app.
We have not made any code changes recently, and there have been no infra changes either.
When this happens, we take thread dumps. The pattern observed is similar: most of the consumer threads are in TIMED_WAITING status, while one or two consumers show in WAITING state with the below stacks -
"org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#0-13" waiting for condition ...
java.lang.Thread.State: WAITING (parking)
- parking to wait for ......
at .......
at .......
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(......
at reactor.core.publisher.Mono.block(....
at .........WebClientServiceImpl.call(...
Also see below -
"org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#0-13" waiting for condition ...
java.lang.Thread.State: WAITING (parking)
- parking to wait for ......
at .......
at .......
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(......
at reactor.core.publisher.Mono.block(....
at .........WebClientServiceImpl.updateValue(...
Not exactly sure if this thread dump shows that the consumer threads are actually stuck at this "block" call.
Please advise what the issue could be here and what steps need to be taken to fix it. Earlier we thought it might be an issue with RabbitMQ/Spring AMQP, but based on the thread dumps it looks like an issue with the WebClient "block" call.
On adding BlockHound, it printed the below stack trace in the log file -
Error has been observed at following site(s)
Checkpoint Request to POST https://....... [DefaultWebClient]
Stack Trace:
at java.lang.Object.wait
......
at java.net.InetAddress.checkLookupTable
at java.net.InetAddress.getAddressFromNameService
......
at io.netty.util.internal.SocketUtils$8.run
......
at io.netty.resolver.DefaultNameResolver.doResolve
Sorry, just realized that the flatMap in parallel flux call was actually like below
.flatMap(response -> {
    if (response.statusCode().isError()) {
        return Mono.just(new DownstreamResponse());
    }
    return response.bodyToMono(DownstreamResponse.class);
});
So in error scenarios, I think the underlying connection was not being properly released. When I updated it like below, it seemed to have fixed the issue -
.flatMap(response -> {
    if (response.statusCode().isError()) {
        // release the unread body and return the fallback, so the pooled connection is freed
        return response.releaseBody().thenReturn(new DownstreamResponse());
    }
    return response.bodyToMono(DownstreamResponse.class);
});
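As a side note, here is a minimal sketch of the same call written with retrieve() instead of exchange(); retrieve() signals 4xx/5xx responses as exceptions and releases the connection for you, so no response body is left unconsumed. Type names are taken from the question, and the empty-object fallback on error is illustrative:
private Mono<DownstreamResponse> sendRequest(DownstreamRequest request) {
    return webClient
            .method(POST)
            .uri(<<downstream url>>)
            .body(BodyInserters.fromValue(request))
            .retrieve() // 4xx/5xx become WebClientResponseException; the connection is released
            .bodyToMono(DownstreamResponse.class)
            .onErrorResume(WebClientResponseException.class,
                    ex -> Mono.just(new DownstreamResponse())); // same fallback as the error branch above
}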

Reading messages from rabbitMQ queue at an interval is not working

What I am trying to achieve is to read messages from a RabbitMQ queue every 15 minutes. From the documentation, I could see that I can use the "receiveTimeout" method to set the interval.
Polling Consumer
The AmqpTemplate itself can be used for polled Message reception. By default, if no message is
available, null is returned immediately. There is no blocking. Starting with version 1.5, you can set
a receiveTimeout, in milliseconds, and the receive methods block for up to that long, waiting for a
message.
But when I tried implementing it with Spring Integration, the receiveTimeout did not work as I expected.
My test code is given below.
@Bean
Queue createMessageQueue() {
    return new Queue(RetryQueue, false);
}

@Bean
public SimpleMessageListenerContainer QueueMessageListenerContainer(ConnectionFactory connectionFactory) {
    final SimpleMessageListenerContainer messageListenerContainer = new SimpleMessageListenerContainer(
            connectionFactory);
    messageListenerContainer.setQueueNames(RetryQueue);
    messageListenerContainer.setReceiveTimeout(900000);
    return messageListenerContainer;
}

@Bean
public AmqpInboundChannelAdapter inboundQueueChannelAdapter(
        @Qualifier("QueueMessageListenerContainer") AbstractMessageListenerContainer messageListenerContainer) {
    final AmqpInboundChannelAdapter amqpInboundChannelAdapter = new AmqpInboundChannelAdapter(
            messageListenerContainer);
    amqpInboundChannelAdapter.setOutputChannelName("channelRequestFromQueue");
    return amqpInboundChannelAdapter;
}

@ServiceActivator(inputChannel = "channelRequestFromQueue")
public void activatorRequestFromQueue(Message<String> message) {
    System.out.println("Message: " + message.getPayload() + ", received at: " + LocalDateTime.now());
}
I am getting the payload logged to the console in near real-time.
Can anyone help? How long will the consumer stay active once it starts?
UPDATE
The IntegrationFlow I used to retrieve messages from the queue at an interval:
@Bean
public IntegrationFlow inboundIntegrationFlowPaymentRetry() {
    return IntegrationFlows
            .from(Amqp.inboundPolledAdapter(connectionFactory, RetryQueue),
                    e -> e.poller(Pollers.fixedDelay(20_000).maxMessagesPerPoll(-1)).autoStartup(true))
            .handle(message -> {
                channelRequestFromQueue()
                        .send(MessageBuilder.withPayload(message.getPayload()).copyHeaders(message.getHeaders())
                                .setHeader(IntegrationConstants.QUEUED_MESSAGE, message).build());
            }).get();
}
The Polling Consumer documentation you quoted is from the Spring AMQP documentation about the RabbitTemplate, and has nothing to do with the listener container or Spring Integration.
https://docs.spring.io/spring-amqp/docs/current/reference/html/#polling-consumer
Spring Integration's adapter is message-driven, and you will get messages whenever they are available.
To get messages on-demand, you need to call the RabbitTemplate on whatever interval you want.
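A minimal sketch of that approach, assuming a RabbitTemplate bean is available and @EnableScheduling is configured (the queue name is the RetryQueue constant from the question):
@Autowired
private RabbitTemplate rabbitTemplate;

@Scheduled(fixedDelay = 900_000) // every 15 minutes
public void pollRetryQueue() {
    Message message;
    // receive() returns null as soon as the queue is empty, so each run drains the backlog and stops
    while ((message = rabbitTemplate.receive(RetryQueue)) != null) {
        System.out.println("Message: " + new String(message.getBody()) + ", received at: " + LocalDateTime.now());
    }
}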

RabbitMQ delayed message not working

I am trying to use the delayed exchange plugin, but somehow it's not working for me and messages are received without delay.
I tried the following things:
a) Enabled rabbitmq_delayed_message_exchange successfully and restarted the RabbitMQ server on Ubuntu 16.04.
b) Declared the exchange:
Map<String,Object> props = new HashMap<String,Object>();
props.put("x-delayed-type", "direct");
this.automationExchange = new DirectExchange(exchangeName,true,false, props);
c) Pushed the message as:
DefaultClassMapper typeMapper = QueueUtils.classMapper;
typeMapper.setDefaultType(type);
Jackson2JsonMessageConverter converter = QueueUtils.converter;
converter.setClassMapper(typeMapper);
RabbitTemplate template = AMQPRabbitMQTemplate.getAMQPTemplate();
template.setMessageConverter(converter);
template.convertAndSend(routingKey, message, new MessagePostProcessor() {
    @Override
    public Message postProcessMessage(Message m) throws AmqpException {
        m.getMessageProperties().setDelay(delayMiliSeconds);
        m.getMessageProperties().setDeliveryMode(MessageDeliveryMode.PERSISTENT);
        return m;
    }
});
Now when I print the message:
public void onMessage(Message message, Channel channel) throws Exception {
    System.out.println(message.getMessageProperties().getDelay());
    channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
}
it prints null for getDelay(), which per https://www.rabbitmq.com/blog/2015/04/16/scheduling-messages-with-rabbitmq should ideally be the negative of the set value.
Please let me know if I am doing something wrong.
I am using version 1.6.8.RELEASE of spring-amqp and spring-rabbit.
In order to avoid unexpected propagation of headers from an inbound message to an outbound message, certain headers for inbound messages are provided by MessageProperties.getReceived... methods.
In this case, the header is in MessageProperties.getReceivedDelay().
You also need setDelayed(true) on automationExchange before declaring it with the admin.
I presume you have set the exchange as the default in the RabbitTemplate too.
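Putting both points together, a minimal sketch of the declaration and consumption (variable names follow the question; connectionFactory is assumed to be in scope, and as I understand the 1.6.x API, setDelayed(true) makes the admin declare the exchange with type x-delayed-message and an x-delayed-type argument matching the exchange's own type):
DirectExchange automationExchange = new DirectExchange(exchangeName, true, false, null);
automationExchange.setDelayed(true); // declared as x-delayed-message with x-delayed-type=direct

RabbitAdmin admin = new RabbitAdmin(connectionFactory);
admin.declareExchange(automationExchange);

// On the consumer side, the delay comes back as a received header:
Integer delay = message.getMessageProperties().getReceivedDelay(); // not getDelay()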

How do I make my ActiveMQ broker drop offline durable subscribers

We have an ActiveMQ broker that is connected to by very different clients using JMS, AMQP, and MQTT. For a reason we haven't figured out yet, a specific set of MQTT clients often (but not always) subscribes durably. This is a test environment where clients are added and removed quite often, the latter sometimes by pulling the plug on or rebooting an embedded device, so that they cannot properly unsubscribe. The effect (IIUC) is that the broker piles up "offline durable subscriptions" for devices it might never see again (I can see these under http://my_broker:8161/admin/subscribers.jsp), keeping messages on those topics forever, until it finally breaks down under its own memory footprint.
The root issue here is that the subscribers subscribe durably, and we need to find out why that's the case. However, it was also decided that clients doing this (unwittingly) shouldn't bring the broker to a grinding halt, so we need to solve this problem independently.
I have found there are settings for a timeout for offline durable subscriptions and put those into our broker configuration (last two lines):
<broker
xmlns="http://activemq.apache.org/schema/core"
brokerName="my_broker"
dataDirectory="${activemq.data}"
useJmx="true"
advisorySupport="false"
persistent="false"
offlineDurableSubscriberTimeout="1800000"
offlineDurableSubscriberTaskSchedule="60000">
If I understand correctly, the above should check every minute and dismiss clients it hasn't seen for half an hour. However, contrary to the docs, this doesn't seem to work: A consumer I had subscribe and then pulled the plug on days ago is still visible in the list of offline durable subscribers, the broker's memory footprint is constantly increasing, and if I delete subscribers manually in the broker's web interface I can see the memory footprint going down.
So here's my questions:
What determines whether a MQTT subscription to a topic on an ActiveMQ broker is durable?
What am I doing wrong in setting up the timeout for dropping offline durable subscriptions in the ActiveMQ settings?
I extracted the relevant code (doCleanup()) that removes timed-out durable subscriptions.
In the success case, it executes:
LOG.info("Destroying durable subscriber due to inactivity: {}", sub);
In the failure case, it executes:
LOG.error("Failed to remove inactive durable subscriber", e);
Look for the above log lines in your log file and match them with the details you observed in the admin/subscribers.jsp viewer. If neither line is printed, the subscriptions might be remaining active for some reason, or you may have stumbled into a bug.
Also, could you try removing the underscore (_) from the broker name if you can? The manual talks about problems with underscores in broker names.
Code:
public TopicRegion(RegionBroker broker, DestinationStatistics destinationStatistics, SystemUsage memoryManager,
        TaskRunnerFactory taskRunnerFactory, DestinationFactory destinationFactory) {
    super(broker, destinationStatistics, memoryManager, taskRunnerFactory, destinationFactory);
    if (broker.getBrokerService().getOfflineDurableSubscriberTaskSchedule() != -1
            && broker.getBrokerService().getOfflineDurableSubscriberTimeout() != -1) {
        this.cleanupTimer = new Timer("ActiveMQ Durable Subscriber Cleanup Timer", true);
        this.cleanupTask = new TimerTask() {
            @Override
            public void run() {
                doCleanup();
            }
        };
        this.cleanupTimer.schedule(cleanupTask,
                broker.getBrokerService().getOfflineDurableSubscriberTaskSchedule(),
                broker.getBrokerService().getOfflineDurableSubscriberTaskSchedule());
    }
}

public void doCleanup() {
    long now = System.currentTimeMillis();
    for (Map.Entry<SubscriptionKey, DurableTopicSubscription> entry : durableSubscriptions.entrySet()) {
        DurableTopicSubscription sub = entry.getValue();
        if (!sub.isActive()) {
            long offline = sub.getOfflineTimestamp();
            if (offline != -1 && now - offline >= broker.getBrokerService().getOfflineDurableSubscriberTimeout()) {
                LOG.info("Destroying durable subscriber due to inactivity: {}", sub);
                try {
                    RemoveSubscriptionInfo info = new RemoveSubscriptionInfo();
                    info.setClientId(entry.getKey().getClientId());
                    info.setSubscriptionName(entry.getKey().getSubscriptionName());
                    ConnectionContext context = new ConnectionContext();
                    context.setBroker(broker);
                    context.setClientId(entry.getKey().getClientId());
                    removeSubscription(context, info);
                } catch (Exception e) {
                    LOG.error("Failed to remove inactive durable subscriber", e);
                }
            }
        }
    }
}

// The toString method for the DurableTopicSubscription class
@Override
public synchronized String toString() {
    return "DurableTopicSubscription-" + getSubscriptionKey() + ", id=" + info.getConsumerId() + ", active="
            + isActive() + ", destinations=" + durableDestinations.size() + ", total="
            + getSubscriptionStatistics().getEnqueues().getCount() + ", pending=" + getPendingQueueSize()
            + ", dispatched=" + getSubscriptionStatistics().getDispatched().getCount() + ", inflight="
            + dispatched.size() + ", prefetchExtension=" + getPrefetchExtension();
}
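On the first question: as far as I know, ActiveMQ treats an MQTT subscription as a durable topic subscription when the client connects with cleanSession=false and subscribes at QoS 1 or 2. If you control the clients, forcing a clean session is a quick way to verify this; a minimal sketch with the Eclipse Paho Java client (broker URL, client id, and topic are illustrative):
MqttClient client = new MqttClient("tcp://my_broker:1883", "embedded-device-42");
MqttConnectOptions options = new MqttConnectOptions();
options.setCleanSession(true); // non-durable: the broker drops the subscription when the client disconnects
client.connect(options);
client.subscribe("some/topic", 0); // QoS 0 subscriptions are not durable either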

Delayed message in RabbitMQ

Is it possible to send message via RabbitMQ with some delay?
For example I want to expire client session after 30 minutes, and I send a message which will be processed after 30 minutes.
There are two approaches you can try:
Old Approach: Set the TTL (time to live) header on each message, or on the queue via a policy, and introduce a dead-letter exchange. Once the TTL expires, messages move from the holding queue to the main queue via the dead-letter exchange, so that your listener can process them.
Latest Approach: RabbitMQ has since come up with the RabbitMQ Delayed Message Plugin, with which you can achieve the same thing; this plugin is available since RabbitMQ 3.5.8.
You can declare an exchange with the type x-delayed-message and then publish messages with the custom header x-delay expressing, in milliseconds, a delay time for the message. The message will be delivered to the respective queues after x-delay milliseconds:
byte[] messageBodyBytes = "delayed payload".getBytes("UTF-8");
Map<String, Object> headers = new HashMap<String, Object>();
headers.put("x-delay", 5000);
AMQP.BasicProperties.Builder props = new AMQP.BasicProperties.Builder().headers(headers);
channel.basicPublish("my-exchange", "", props.build(), messageBodyBytes);
More here: git
With the release of RabbitMQ v2.8, scheduled delivery is now available but as an indirect feature: http://www.javacodegeeks.com/2012/04/rabbitmq-scheduled-message-delivery.html
Thanks to Norman's answer, I could implement it in Node.js.
Everything is pretty clear from the code.
var ch = channel;
ch.assertExchange("my_intermediate_exchange", 'fanout', {durable: false});
ch.assertExchange("my_final_delayed_exchange", 'fanout', {durable: false});

// setup intermediate queue which will never be listened to.
// all messages are TTLed, so when they are "dead" they go to another exchange
ch.assertQueue("my_intermediate_queue", {
    deadLetterExchange: "my_final_delayed_exchange",
    messageTtl: 5000, // 5sec
}, function (err, q) {
    ch.bindQueue(q.queue, "my_intermediate_exchange", '');
});

ch.assertQueue("my_final_delayed_queue", {}, function (err, q) {
    ch.bindQueue(q.queue, "my_final_delayed_exchange", '');
    ch.consume(q.queue, function (msg) {
        console.log("delayed - [x] %s", msg.content.toString());
    }, {noAck: true});
});
As I don't have enough reputation to add a comment, I'm posting a new answer. This is just an addition to what has already been discussed at http://www.javacodegeeks.com/2012/04/rabbitmq-scheduled-message-delivery.html
Except that instead of setting the TTL on messages, you can set it at the queue level. You can also avoid creating a new exchange just for the sake of redirecting the messages to a different queue. Here is sample Java code:
Producer:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class DelayedProducer {
    private final static String QUEUE_NAME = "ParkingQueue";
    private final static String DESTINATION_QUEUE_NAME = "DestinationQueue";

    public static void main(String[] args) throws Exception {
        ConnectionFactory connectionFactory = new ConnectionFactory();
        connectionFactory.setHost("localhost");
        Connection connection = connectionFactory.newConnection();
        Channel channel = connection.createChannel();
        Map<String, Object> arguments = new HashMap<String, Object>();
        arguments.put("x-message-ttl", 10000);
        arguments.put("x-dead-letter-exchange", "");
        arguments.put("x-dead-letter-routing-key", DESTINATION_QUEUE_NAME);
        channel.queueDeclare(QUEUE_NAME, false, false, false, arguments);
        for (int i = 0; i < 5; i++) {
            String message = "This is a sample message " + i;
            channel.basicPublish("", QUEUE_NAME, null, message.getBytes());
            System.out.println("message " + i + " got published to the queue!");
            Thread.sleep(3000);
        }
        channel.close();
        connection.close();
    }
}
Consumer:
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.QueueingConsumer;

public class Consumer {
    private final static String DESTINATION_QUEUE_NAME = "DestinationQueue";

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(DESTINATION_QUEUE_NAME, false, false, false, null);
        System.out.println(" [*] Waiting for messages. To exit press CTRL+C");
        QueueingConsumer consumer = new QueueingConsumer(channel);
        boolean autoAck = false;
        channel.basicConsume(DESTINATION_QUEUE_NAME, autoAck, consumer);
        while (true) {
            QueueingConsumer.Delivery delivery = consumer.nextDelivery();
            String message = new String(delivery.getBody());
            System.out.println(" [x] Received '" + message + "'");
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }
    }
}
It looks like this blog post describes using the dead letter exchange and message ttl to do something similar.
The code below uses CoffeeScript and Node.js to access Rabbit and implement something similar.
amqp = require 'amqp'
events = require 'events'
em = new events.EventEmitter()
conn = amqp.createConnection()
key = "send.later.#{new Date().getTime()}"
conn.on 'ready', ->
conn.queue key, {
arguments:{
"x-dead-letter-exchange":"immediate"
, "x-message-ttl": 5000
, "x-expires": 6000
}
}, ->
conn.publish key, {v:1}, {contentType:'application/json'}
conn.exchange 'immediate'
conn.queue 'right.now.queue', {
autoDelete: false
, durable: true
}, (q) ->
q.bind('immediate', 'right.now.queue')
q.subscribe (msg, headers, deliveryInfo) ->
console.log msg
console.log headers
That's currently not possible. You have to store your expiration timestamps in a database or something similiar, and then have a helper program that reads those timestamps and queues a message.
Delayed messages are an often requested feature, as they're useful in many situations. However, if your need is to expire client sessions I believe that messaging is not the ideal solution for you, and that another approach might work better.
Supposing you have control over the consumer, you could achieve the delay on the consumer side like this:
If we are sure that the nth message in the queue always has a smaller delay than the (n+1)th message (which is true for many use cases): the producer sends timing information in the task, conveying the time at which the job needs to be executed (currentTime + delay). The consumer then:
1) Reads the scheduledTime from the task
2) If currentTime > scheduledTime, goes ahead.
Otherwise sleeps for delay = scheduledTime - currentTime
The consumer is always configured with a concurrency parameter, so the other messages simply wait in the queue until a consumer finishes its job. This solution can work well, though it looks awkward, especially for large delays.
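A minimal sketch of that consumer-side check (Task, getScheduledTime(), and process() are illustrative names, not from any particular library):
void handle(Task task) throws InterruptedException {
    long delayMillis = task.getScheduledTime() - System.currentTimeMillis();
    if (delayMillis > 0) {
        Thread.sleep(delayMillis); // block this consumer thread until the scheduled time
    }
    process(task);
}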
The AMQP protocol does not support delayed messaging, but by using the Time-To-Live and Expiration and Dead Letter Exchanges extensions, delayed messaging is possible. The solution is described in this link; I copied the following steps from it:
Step by step:
Declare the delayed queue
Add the x-dead-letter-exchange argument property, and set it to the default exchange "".
Add the x-dead-letter-routing-key argument property, and set it to the name of the destination queue.
Add the x-message-ttl argument property, and set it to the number of milliseconds you want to delay the message.
Subscribe to the destination queue
There is also a plugin for delayed messaging in the RabbitMQ repository on GitHub.
Note that there is a solution called Celery, which supports delayed task queuing on a RabbitMQ broker via a calling API called apply_async(). Celery supports Python, Node, and PHP.