RabbitMQ consume most recent message

I have a real-time RabbitMQ queue that I'm running. I'd like to consume the most recent entry, ignoring all others.
Better yet, is it possible to have a fanout exchange whose bound queues hold only a single message?

Yes, this can be done by specifying the maximum queue length limit when declaring the queue.
As the documentation states,
The maximum length of a queue can be limited to a set number of messages, or a set number of bytes (the total of all message body lengths, ignoring message properties and any overheads), or both.
The default behaviour for RabbitMQ when a maximum queue length or size is set and the maximum is reached is to drop or dead-letter messages from the front of the queue (i.e. the oldest messages in the queue). To modify this behaviour, use the overflow setting described below.
If you're using Java, you would do the following:
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-max-length", 1);
// queueDeclare(queue, durable, exclusive, autoDelete, arguments)
channel.queueDeclare("myqueue", false, false, false, args);
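If you're using Python, a rough pika equivalent would be (a sketch - the local broker address is assumed):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Keep at most one message; by default RabbitMQ drops the oldest (head)
# message once the limit is reached, so the queue always holds the most
# recent entry.
channel.queue_declare(queue='myqueue', arguments={'x-max-length': 1})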

Related

Are there alternatives to Redis when buffering is undesired and unacceptable?

I want to stream real-time sensor data (webcam, laser point cloud, etc.) from one robot to multiple observers.
In this use case, only the newest data is useful. For example, when a new frame of point cloud arrives, the older ones will be useless.
Redis has nice publish/subscribe support, but it buffers messages, according to (Redis Pubsub and Message Queueing).
So are there better alternatives? Something like ROS's publishers/subscribers, which have a message queue size parameter:
/**
* The subscribe() call is how you tell ROS that you want to receive messages
* on a given topic.
*
* The second parameter to the subscribe() function is the size of the message
* queue. If messages are arriving faster than they are being processed, this
* is the number of messages that will be buffered up before beginning to throw
* away the oldest ones.
*/
ros::Subscriber sub = n.subscribe("chatter", 1000, chatterCallback);
Maybe you can use the Redis list data structure for your purpose, like a queue. Redis lists are implemented as linked lists, so pushing a new item is O(1). Whenever your robot produces data it can add it to the head of a list with the LPUSH command, and when an observer wants the latest item it can run LRANGE "key-name" 0 0, which retrieves the most recently pushed item. Also, if you don't want data to accumulate in the queue, you can use LTRIM before LRANGE to keep only the latest records. For example, LTRIM "key-name" 0 9 keeps only the 10 most recent elements. The trim length should be set according to your observers' processing speeds. ref: https://redis.io/docs/data-types/lists/
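The same flow with the redis-py client, as a sketch - the key name and the 10-element cap are illustrative:

import redis

r = redis.Redis()  # assumes a local Redis instance

def publish(data):
    r.lpush('sensor:pointcloud', data)  # newest item lands at the head
    r.ltrim('sensor:pointcloud', 0, 9)  # keep only the 10 most recent items

def latest():
    items = r.lrange('sensor:pointcloud', 0, 0)  # head of the list = newest
    return items[0] if items else None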

How to declare circular queue in pika

Is it possible to declare a queue in pika (python-pika) as a circular queue? If yes, how?
What I mean by a circular queue (or ring) is a queue where a consumed message is re-inserted at the end of the queue instead of being removed from it. For example, if the queue contains msg_3, msg_2, msg_1, then after a consumer gets msg_1, that message is inserted at the end, so the queue becomes msg_1, msg_3, msg_2 (instead of msg_3, msg_2).
Edit: As proposed by IMSoP, I can make the consumer republish every consumed message (for example, at the end of the callback function).
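A sketch of that republish-at-the-end approach with pika - the queue name 'ring' and the process() handler are placeholders:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='ring')

def process(body):
    print('handling', body)  # stand-in for the real work

def callback(ch, method, properties, body):
    process(body)
    # Re-insert the consumed message at the end of the same queue...
    ch.basic_publish(exchange='', routing_key='ring', body=body)
    # ...and only then acknowledge the original delivery.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='ring', on_message_callback=callback)
channel.start_consuming()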

How to make a queue switch from FIFO mode to priority mode?

I would like to implement a queue capable of operating both in FIFO mode and in priority mode. This is a message queue, and the priority is based first of all on the message type: for example, if type-A messages have higher priority than type-B messages, then all type-A messages are dequeued first and the type-B messages last.
Priority mode: my idea is to use multiple queues, one for each type of message; this way I can manage a priority based on the message type: take messages first from the highest-priority queue and then progressively from the lower-priority queues.
FIFO mode: how do I handle FIFO mode with multiple queues? In other words, the user does not see the multiple queues but uses the structure as if it were a single queue, so messages must leave in the order they arrived whenever priority mode is disabled. To achieve this second goal I thought of using a further queue that tracks the order of arrival of the message types; the following code snippet explains it better.
int NUMBER_OF_MESSAGE_TYPES = 4;
int CAPACITY = 50;
Queue<object>[] internalQueues = new Queue<object>[NUMBER_OF_MESSAGE_TYPES];
Queue<int> queueIndexes = new Queue<int>(CAPACITY);

void Enqueue(object message)
{
    int index = ... // the destination queue (i.e. its index) is chosen according to the type of message.
    internalQueues[index].Enqueue(message);
    queueIndexes.Enqueue(index);
}

object Dequeue()
{
    if (fifo_mode_enabled)
    {
        // What is the next type that was enqueued?
        int index = queueIndexes.Dequeue();
        return internalQueues[index].Dequeue();
    }
    if (priority_mode_enabled)
    {
        for (int i = 0; i < NUMBER_OF_MESSAGE_TYPES; i++)
        {
            if (internalQueues[i].Count > 0)
            {
                object result = internalQueues[i].Dequeue();
                // The following statement is fundamental to a later switch
                // from priority mode back to FIFO mode: the messages that
                // were not dequeued (because they had lower priority) remain
                // in the order in which they were queued.
                // (RemoveFirstOccurrence is a helper still to be written:
                // it removes the first occurrence of the given value.)
                queueIndexes.RemoveFirstOccurrence(i);
                return result;
            }
        }
    }
    return null; // neither mode enabled, or all queues are empty
}
What do you think about this idea?
Are there better or simpler implementations?
Should work. However, at a brief glance, my thoughts are:
a) Not thread safe, and a lot of work to make it so.
b) Not exception safe - i.e. an exception while enqueuing or dequeuing may leave an inconsistent state. Maybe that's not a problem (e.g. if an exception is fatal), but maybe it is.
c) Possibly over-complicated and fragile, although I do not know the context in which it's being used.
Personally, unless I had profiled and shown a performance problem, I would use one "container", and in priority mode walk through the container looking for the next highest-priority message - after all, it's only 50 messages. I would almost certainly use a linked list. My next optimization would be to keep, for each message type, a pointer to the first message of that type in the container, and update that pointer on dequeue of a message.
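For what it's worth, a minimal sketch of that single-container idea, in Python for brevity; the numeric priority field (lower value = higher priority) is my own assumption:

from collections import namedtuple

Message = namedtuple('Message', ['priority', 'payload'])

class SwitchableQueue:
    """One container for all messages; priority mode scans it on dequeue."""

    def __init__(self):
        self._items = []            # ~50 messages, so a linear scan is cheap
        self.priority_mode = False

    def enqueue(self, message):
        self._items.append(message)     # arrival order is preserved

    def dequeue(self):
        if not self._items:
            raise IndexError('queue is empty')
        if not self.priority_mode:
            return self._items.pop(0)   # FIFO: just take the head
        # Priority mode: walk the container for the highest-priority message.
        # min() returns the first minimum, so ties keep arrival order.
        best = min(range(len(self._items)),
                   key=lambda i: self._items[i].priority)
        return self._items.pop(best)

Switching modes is then just a matter of flipping priority_mode: the messages that were not dequeued never leave the container, so their arrival order is preserved automatically.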

How does ActiveMQPrefetchPolicy work? What does queueBrowserPrefetch mean?

I'm trying to use ActiveMQPrefetchPolicy but cannot quite understand how to use it.
I'm using a queue, and there are 3 params that I can define for the PrefetchPolicy:
queuePrefetch, queueBrowserPrefetch, inputStreamPrefetch
Actually I don't get the meaning of queueBrowserPrefetch and inputStreamPrefetch, so I don't know how to use them.
I assume that you have seen the ActiveMQ page on prefetch limits.
queueBrowserPrefetch sets the maximum number of messages sent to an ActiveMQQueueBrowser until acks are received.
inputStreamPrefetch sets the maximum number of messages sent through a JMS stream until acks are received.
Both the queue browser and the JMS stream are specialized consumers. You can read more about each of them, but if you are not using them it won't matter what you assign to their prefetch limits.
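For a feel of prefetch limits in code: if you consume from ActiveMQ over STOMP (e.g. with the Python stomp.py client) rather than JMS, the per-subscription prefetch can be capped with ActiveMQ's activemq.prefetchSize subscribe header. This corresponds to the plain queue prefetch only; the browser and stream variants are JMS-specific. A sketch, with broker address and credentials assumed:

import stomp

conn = stomp.Connection([('localhost', 61613)])  # assumed STOMP host/port
conn.connect('admin', 'admin', wait=True)        # assumed credentials

# Ask the broker to keep at most one unacknowledged message in flight
# to this subscription.
conn.subscribe(destination='/queue/myqueue', id=1, ack='client',
               headers={'activemq.prefetchSize': 1})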

One producer, Two consumers and usage of pthread_cond_signal & pthread_mutex_lock

I am fairly new to pthread programming and am trying to get my head around cond_signal & mutex_lock. I am writing a sample program which has One producer thread and Two consumer threads.
There is a queue between producer and the first consumer and a different queue between producer and the second consumer. My producer is basically a communication interface which reads packets from the network and based on a configured filter delivers the packets to either of the consumers.
I am trying to use pthread_cond_signal & pthread_mutex_lock the following way between producer and consumer.
[At producer]
0) Wait for packets to arrive
1) Lock the mutex pthread_mutex_lock(&cons1Mux)
2) Add the packet to the tail of the consumer queue
3) Signal the Consumer 1 process pthread_cond_signal(&msgForCons1)
4) Unlock the mutex pthread_mutex_unlock(&cons1Mux)
5) Go to step 0
[At consumer]
1) Lock the mutex pthread_mutex_lock(&cons1Mux)
2) Wait for signal pthread_cond_wait(&msgForCons1,&cons1Mux)
3) After waking up, read the packet
4) Delete from queue.
5) Unlock the mutex pthread_mutex_unlock(&cons1Mux)
6) Goto Step 1
Are the above steps correct? If a switch from the consumer thread to the producer thread happens exactly after step 5, is it possible that the producer signals that a packet is waiting even though the consumer hasn't yet started waiting for that signal? Will that cause a "missed signal"?
Are there any other problems with these steps?
Yes, you're correct, you could have a problem there: if there are no threads waiting, pthread_cond_signal is a no-op. It isn't queued up somewhere to trigger a subsequent wait.
What you're supposed to do is, in the consumer, once you've acquired the mutex, test whether there is any work to do. If there is - well, you hold the mutex; take ownership, update state, and do it. You only need to wait if there is nothing to do.
The canonical example is:
decrement_count()
{
    pthread_mutex_lock(&count_lock);
    while (count == 0)
        pthread_cond_wait(&count_nonzero, &count_lock);
    count = count - 1;
    pthread_mutex_unlock(&count_lock);
}

increment_count()
{
    pthread_mutex_lock(&count_lock);
    if (count == 0)
        pthread_cond_signal(&count_nonzero);
    count = count + 1;
    pthread_mutex_unlock(&count_lock);
}
Note how the "consumer" decrementing thread doesn't wait around if there's something to decrement. The pattern applies equally well to the case where count is replaced by a queue size or the validity of a struct containing a message.