Create a high priority serial dispatch queue with GCD - objective-c

How can I create a custom serial queue that runs at high priority?
Right now I'm using myQueue = dispatch_queue_create("com.MyApp.MyQueue", NULL); but this doesn't seem to allow for setting a priority?

Create a serial queue, then use dispatch_set_target_queue() to set its target queue to the high priority queue.
Here's how:
dispatch_set_target_queue(myQueue, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0));
Now myQueue should run serially with high priority.
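Putting it together, a minimal sketch (the queue label and the dispatched block are just placeholders):

dispatch_queue_t myQueue = dispatch_queue_create("com.MyApp.MyQueue", NULL);
dispatch_set_target_queue(myQueue, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0));

// Blocks submitted to myQueue still run one at a time (the queue stays serial),
// but they execute at the priority of the global high-priority queue.
dispatch_async(myQueue, ^{
    // do the work here
});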

The dispatch_queue_attr_make_with_qos_class function may be new since the accepted answer was posted, but something like:
dispatch_queue_attr_t priorityAttribute = dispatch_queue_attr_make_with_qos_class(
    DISPATCH_QUEUE_SERIAL, QOS_CLASS_USER_INITIATED, -1
);
myQueue = dispatch_queue_create("com.MyApp.MyQueue", priorityAttribute);
could give the queue a high priority ('quality of service'). There is a higher QOS class, but QOS_CLASS_USER_INITIATED is equivalent to DISPATCH_QUEUE_PRIORITY_HIGH.

Is it a requirement that you have a custom queue? If not, you could look at dispatching a block to the high-priority global queue, which you can retrieve using:
dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
Keep in mind that this is the global queue, so it may impact other concurrent operations.
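For example, a minimal sketch (the block body is a placeholder):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    // high-priority work here; the global queues are concurrent,
    // so blocks dispatched this way may run in parallel with each other
});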

Related

RabbitMQ consume most recent message

I have a real-time RabbitMQ queue that I'm running. I'd like to consume the most recent entry, ignoring all others.
Better yet, is it possible to have a fanout exchange with a queue that holds only a single message?
Yes, this can be done by specifying the maximum queue length limit when declaring the queue.
As the documentation states,
The maximum length of a queue can be limited to a set number of messages, or a set number of bytes (the total of all message body lengths, ignoring message properties and any overheads), or both.
The default behaviour for RabbitMQ when a maximum queue length or size is set and the maximum is reached is to drop or dead-letter messages from the front of the queue (i.e. the oldest messages in the queue). To modify this behaviour, use the overflow setting described below.
If you're using Java, you would do the following:
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-max-length", 1);
channel.queueDeclare("myqueue", false, false, false, args);

How to declare circular queue in pika

Is it possible to declare a queue in pika (python-pika) as a circular queue ? If yes, how ?
By a circular queue (or ring) I mean a queue where, when a message is consumed, it is re-inserted at the end of the queue instead of being removed. For example, if the queue contains: msg_3, msg_2, msg_1, then after a consumer gets msg_1, the latter is inserted at the end, so the queue becomes: msg_1, msg_3, msg_2 (instead of msg_3, msg_2).
Edit: As proposed by IMSoP, I can make the consumer republish every consumed message (for example, at the end of the callback function).

Is dispatch_apply synchronous or asynchronous?

I was told that I could use Grand Central Dispatch to run n processes simultaneously, in an asynchronous fashion. The documentation said that if the processes were in a for loop, I could use the function dispatch_apply. But now it's saying
Note that dispatch_apply is synchronous, so all the applied blocks
will have completed by the time it returns.
Does this mean the blocks that are submitted to a queue using dispatch_apply are executed in order? If so, what is the point of using concurrency? Won't the slowdown be the same?
dispatch_apply is, as stated in the docs, synchronous. It runs a block on the specified queue in parallel (if possible) and waits until all the blocks return. If you want to run a block just once asynchronously, use dispatch_async; if you want to run a block multiple times in parallel without blocking your current queue, just call dispatch_apply within dispatch_async:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    dispatch_apply(10, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^(size_t size) {
        NSLog(@"%lu", size);
    });
});
The purpose of the synchronous dispatch_apply is to dispatch the inner-loop iterations asynchronously onto the available parallel processing resources. Thus, the overall loop performance may speed up.
Faster loop performance? Very possibly, yes (see the caveat below).
Does it block the thread calling dispatch_apply? Yes, just as a plain loop blocks until it has completed.
For GCD, dispatch_apply is synchronous, since dispatch_apply will not return until all the asynchronous, parallel tasks that it creates have completed.
However, each individual task enqueued by dispatch_apply can run concurrently and asynchronously with the others if the target queue is a concurrent queue.
For example in Swift:
let batchCount: Int = 10
let queue = dispatch_get_global_queue(QOS_CLASS_UTILITY, 0)
dispatch_apply(batchCount, queue) { (i: Int) -> Void in
    print(i, terminator: " ")
}
print("\ndispatch_apply QOS_CLASS_UTILITY queue completed")
yields unordered output like:
0 8 1 9 2 3 4 5 6 7
dispatch_apply QOS_CLASS_UTILITY queue completed
So, dispatch_apply synchronously blocks when called, but the "batch" of tasks generated by dispatch_apply can run concurrently, asynchronously, in parallel to each other.
Keep in mind the caveat that ...
the work performed during each iteration is distinct from the work
performed during all other iterations, and the order in which each
successive loop finished is unimportant
Also, note that using a serial queue for the inner-loop tasks will not yield any performance gain.
Although using a serial queue is permissible and does the right thing
for your code, using such a queue has no real performance advantages
over leaving the loop in place.
You can get a performance speed-up by using gcl_create_dispatch_queue() with dispatch_apply().
For example:
@import Foundation;
@import OpenCL; // gcl_create_dispatch_queue()

int main() {
    dispatch_queue_t queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_ALL, NULL);
    dispatch_apply(10, queue, ^(size_t t) {
        // some code here
    });
    return 0;
}
More info:
OpenCL Programming Guide for Mac

How to make a queue switch from FIFO mode to priority mode?

I would like to implement a queue capable of operating both in FIFO mode and in priority mode. This is a message queue, and the priority is based primarily on the message type: for example, if messages of type A have higher priority than messages of type B, all type-A messages are dequeued first, and the type-B messages afterwards.
Priority mode: my idea is to use multiple queues, one for each type of message; this way I can manage priority based on the message type: I simply take messages first from the highest-priority queue and then, progressively, from the lower-priority queues.
FIFO mode: how do I handle FIFO mode using multiple queues? In other words, the user does not see multiple queues but uses the structure as if it were a single queue, so that messages leave in the order they arrived when priority mode is disabled. To achieve this I thought of using a further queue that records the order of arrival of the message types; let me explain with the following code snippet.
int NUMBER_OF_MESSAGE_TYPES = 4;
int CAPACITY = 50;
Queue[] internalQueues = new Queue[NUMBER_OF_MESSAGE_TYPES];
Queue<int> queueIndexes = new Queue<int>(CAPACITY);
void Enqueue(object message)
{
    int index = ... // the destination queue (i.e. its index) is chosen according to the type of message.
    internalQueues[index].Enqueue(message);
    queueIndexes.Enqueue(index);
}

object Dequeue()
{
    if (fifo_mode_enabled)
    {
        // What is the next type that has been enqueued?
        int index = queueIndexes.Dequeue();
        return internalQueues[index].Dequeue();
    }
    if (priority_mode_enabled)
    {
        for (int i = 0; i < NUMBER_OF_MESSAGE_TYPES; i++)
        {
            int currentQueueIndex = i;
            if (!internalQueues[currentQueueIndex].IsEmpty())
            {
                object result = internalQueues[currentQueueIndex].Dequeue();
                // The following statement is fundamental to a subsequent switching
                // from priority mode to FIFO mode: the messages that have not been
                // dequeued (since they had lower priority) remain in the order in
                // which they were queued.
                queueIndexes.RemoveFirstOccurrence(currentQueueIndex);
                return result;
            }
        }
    }
}
What do you think about this idea?
Are there better or simpler implementations?
It should work. However, at a brief glance, my thoughts are:
a) It is not thread safe, and it would be a lot of work to make it so.
b) It is not exception safe - i.e. an exception while enqueuing or dequeuing may leave an inconsistent state. Maybe that's not a problem (e.g. if an exception is fatal), but maybe it is.
c) It is possibly over-complicated and fragile, although I do not know the context in which it's being used.
Personally, unless profiling had shown a performance problem, I would use a single "container", and in priority mode I would walk through that container looking for the next highest-priority message - after all, it's only 50 messages. I would almost certainly use a linked list. My next optimization would be to keep, alongside that container, a pointer to the first message of each type, and to update those pointers as messages are dequeued. A rough sketch of the single-container idea is below.
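Roughly, that idea could look like this (shown in C rather than C# just to illustrate the shape; Node and MsgList are made-up types, a lower type value means higher priority, and enqueuing simply appends at the tail):

#include <stddef.h>

typedef struct Node {
    int type;               /* message type; doubles as the priority (lower = higher) */
    void *payload;
    struct Node *next;
} Node;

typedef struct { Node *head, *tail; } MsgList;

/* FIFO mode: simply take the head of the single list. */
static Node *dequeue_fifo(MsgList *q) {
    Node *n = q->head;
    if (n) {
        q->head = n->next;
        if (q->head == NULL) q->tail = NULL;
    }
    return n;
}

/* Priority mode: walk the list and unlink the first node with the smallest
   type value. Ties keep arrival order, and every message left in the list
   keeps its original position, so switching back to FIFO mode later still
   dequeues in arrival order. */
static Node *dequeue_priority(MsgList *q) {
    Node *best = q->head, *bestPrev = NULL;
    Node *prev = q->head;
    for (Node *cur = q->head ? q->head->next : NULL; cur; prev = cur, cur = cur->next) {
        if (cur->type < best->type) { best = cur; bestPrev = prev; }
    }
    if (best == NULL) return NULL;
    if (bestPrev) bestPrev->next = best->next; else q->head = best->next;
    if (q->tail == best) q->tail = bestPrev;
    return best;
}

The per-type pointers mentioned above would just be an array of Node*, one per message type, checked in priority order so that dequeue_priority can skip the full scan.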

One producer, Two consumers and usage of pthread_cond_signal & pthread_mutex_lock

I am fairly new to pthread programming and am trying to get my head around cond_signal & mutex_lock. I am writing a sample program which has One producer thread and Two consumer threads.
There is a queue between producer and the first consumer and a different queue between producer and the second consumer. My producer is basically a communication interface which reads packets from the network and based on a configured filter delivers the packets to either of the consumers.
I am trying to use pthread_cond_signal & pthread_mutex_lock the following way between producer and consumer.
[At producer]
0) Wait for packets to arrive
1) Lock the mutex pthread_mutex_lock(&cons1Mux)
2) Add the packet to the tail of the consumer queue
3) Signal the Consumer 1 process pthread_cond_signal(&msgForCons1)
4) Unlock the mutex pthread_mutex_unlock(&cons1Mux)
5) Go to step 0
[At consumer]
1) Lock the mutex pthread_mutex_lock(&cons1Mux)
2) Wait for signal pthread_cond_wait(&msgForCons1,&cons1Mux)
3) After waking up, read the packet
4) Delete from queue.
5) Unlock the mutex pthread_mutex_unlock(&cons1Mux)
6) Goto Step 1
Are the above steps correct? If a switch from the consumer thread to the producer thread happens exactly after step 5, is it possible that the producer signals that a packet is waiting even though the consumer hasn't yet started listening for that signal? Will that cause a "missed signal"?
Are there any other problems with these steps?
Yes, you're correct that you could have a problem there: if there are no threads waiting, pthread_cond_signal is a no-op. It isn't queued up somewhere to trigger a subsequent wait.
What you're supposed to do is, in the consumer, once you've acquired the mutex, test whether there is any work to do. If there is, well, you hold the mutex: take ownership, update the state, and do the work. You only need to wait if there is nothing to do.
The canonical example is:
decrement_count()
{
    pthread_mutex_lock(&count_lock);
    while (count == 0)
        pthread_cond_wait(&count_nonzero, &count_lock);
    count = count - 1;
    pthread_mutex_unlock(&count_lock);
}

increment_count()
{
    pthread_mutex_lock(&count_lock);
    if (count == 0)
        pthread_cond_signal(&count_nonzero);
    count = count + 1;
    pthread_mutex_unlock(&count_lock);
}
Note how the "consumer" decrementing thread doesn't wait around if there's something to decrement. The pattern applies equally well to the case where count is replaced by a queue size or the validity of a struct containing a message.
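Applied to your producer/consumer setup, the same shape might look roughly like this (a sketch only: Packet and the tiny linked-list queue are stand-ins for your real packet type and queue; the mutex and condition variable names are taken from your steps):

#include <pthread.h>

/* Hypothetical packet type standing in for the real one. */
typedef struct Packet { struct Packet *next; /* payload fields omitted */ } Packet;

static pthread_mutex_t cons1Mux    = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  msgForCons1 = PTHREAD_COND_INITIALIZER;
static Packet *cons1Head = NULL, *cons1Tail = NULL;   /* consumer 1's queue */

/* Producer side: called when a packet for consumer 1 arrives. */
void deliver_to_consumer1(Packet *pkt) {
    pthread_mutex_lock(&cons1Mux);
    pkt->next = NULL;                       /* append at the tail of the queue */
    if (cons1Tail) cons1Tail->next = pkt; else cons1Head = pkt;
    cons1Tail = pkt;
    pthread_cond_signal(&msgForCons1);      /* a no-op if nobody is waiting, which
                                               is fine: the consumer re-checks the
                                               queue before it ever waits */
    pthread_mutex_unlock(&cons1Mux);
}

/* Consumer 1's thread function. */
void *consumer1(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&cons1Mux);
        while (cons1Head == NULL)           /* wait only when there is no work */
            pthread_cond_wait(&msgForCons1, &cons1Mux);
        Packet *pkt = cons1Head;            /* take one packet off the queue */
        cons1Head = pkt->next;
        if (cons1Head == NULL) cons1Tail = NULL;
        pthread_mutex_unlock(&cons1Mux);
        /* process and release pkt here, outside the lock */
    }
    return NULL;
}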