I'd like to animate some loading points while the app is doing some computation in the background. I achieve this via an NSTimer:
self.timer = [NSTimer scheduledTimerWithTimeInterval:0.3f
                                              target:self
                                            selector:@selector(updateLoadingPoints:)
                                            userInfo:nil
                                             repeats:YES];
Unfortunately, when the computation gets pretty heavy, the method sometimes isn't fired and the update therefore doesn't happen. It seems as if the pending fires queue up and are only delivered after the heavy computation finishes.
Is there a way to give the NSTimer a higher priority to ensure that it's regularly calling my method? Or is there another way to achieve this?
NSTimer works by adding events to the queue on the main run loop; it's the same event queue used for touch events and I/O data received events and so on. The time interval you set isn't a precise schedule; basically on each pass through the run loop, the timers are checked to see if any are due to be fired.
Because of the way they are implemented, there is no way to increase the priority of a timer.
It sounds like your secondary thread is taking a lot of CPU time away from the main thread, and so the timers don't fire as often as you would like. In other words, the main thread is starved for CPU time.
Calling performSelectorOnMainThread: won't necessarily help, because these methods essentially add a single-fire timer to the main thread's event queue. So you'll just be setting up timers in a different way.
To fix your problem, I would suggest that you increase the relative priority of the main thread by decreasing the priority of your computation thread. (See [NSThread setThreadPriority:].)
It may seem counter-intuitive to have your important worker thread running at a lower priority than the main thread, which is just drawing stuff to the screen, but in a human-friendly application, keeping the screen up to date and responding to user input usually is the most important thing that the app should be doing.
In practice, the main thread needs very little CPU, so it won't really be slowing your worker thread down; rather, you are just ensuring that for the small amount of time that the main thread needs to do something, it gets done quickly.
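For example, a minimal sketch (the method names are hypothetical, and heavyComputation stands in for your existing work):

- (void)startComputation {
    // Spawn the worker explicitly so we control its priority.
    [NSThread detachNewThreadSelector:@selector(runComputation:)
                             toTarget:self
                           withObject:nil];
}

- (void)runComputation:(id)ignored {
    @autoreleasepool {
        // Drop this thread below the main thread so the run loop
        // (and therefore the timer) keeps getting CPU time.
        [NSThread setThreadPriority:0.3]; // 0.0 = lowest, 1.0 = highest
        [self heavyComputation];          // hypothetical: your existing work
    }
}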
The timer is added to the run loop it's been scheduled with. If you create the timer on a secondary thread (e.g. your worker thread), there's a good chance you also scheduled it on the secondary thread.
You want the UI updates on the main thread, so you want the timer scheduled on the main thread. If your updates are still slow, perhaps your main thread can do less work; also make sure you have a very low number of threads and that you are locking appropriately.
I suspect you created the timer on a secondary thread, which did not run its run loop as often as the timer wanted to fire. If that thread is doing a lot of prolonged work in the background and not running its run loop, the timer never gets a chance to fire, because the fire messages can only be delivered while the run loop is actually being run.
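If so, one fix (a sketch reusing the question's own timer code) is to make sure the timer is created, and therefore scheduled, on the main thread:

dispatch_async(dispatch_get_main_queue(), ^{
    // Created here, the timer is scheduled on the main run loop,
    // which keeps running for the lifetime of the app.
    self.timer = [NSTimer scheduledTimerWithTimeInterval:0.3f
                                                  target:self
                                                selector:@selector(updateLoadingPoints:)
                                                userInfo:nil
                                                 repeats:YES];
});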
Make your timer call from a separate thread rather than from the main thread. This will keep it independent of the main thread's processing, which should give you the results you want.
Perform your computation on a separate thread, using performSelectorInBackground:withObject:. Always do as little as possible in your UI loop; any work done there will block mouse clicks, cause SPoDs/beachballs, and delay timer handlers.
I suspect that it's not just your timer that's unresponsive, but the whole UI in general.
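For instance (a sketch; startLoading, runComputation: and heavyComputation are hypothetical wrappers around your existing work):

- (void)startLoading {
    // Heavy work goes to a background thread; the main run loop stays
    // free for drawing, input, and timers.
    [self performSelectorInBackground:@selector(runComputation:) withObject:nil];
}

- (void)runComputation:(id)ignored {
    @autoreleasepool { // threads spawned this way need their own autorelease pool
        [self heavyComputation]; // hypothetical: your existing computation
    }
}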
It is said that semaphores are designed for this, but how? It looks like I need to submit the work that signals the semaphore before I can submit the work that waits on it. Then what's the point of multithreading?
I'm using Skia (which has its own VkQueue) to draw the UI. I don't have access to the command buffer; I can only provide semaphores for it. It first waits on the scene-complete semaphore, then draws the UI and signals the present-ready semaphore.
It works fine when everything happens on a single thread. But after I moved the UI part to a second thread, it stopped working and I got validation errors like: VkQueue is waiting on a semaphore that has no way to be signaled. Of course: since it's on a different thread, the signaling work might not have been submitted to a queue yet.
The spec for vkQueuePresentKHR says
All elements of the pWaitSemaphores member of pPresentInfo must be semaphores that are signaled, or have semaphore signal operations previously submitted for execution
You can't submit work that waits on a semaphore whose signal operation you plan to submit later. If you have this kind of dependency in your code, you need to externally synchronize the submissions so that the command buffers that will signal are submitted BEFORE the dependent command buffers, regardless of the queue.
If you're using multiple threads, it sounds like you need to rely on some CPU-side synchronization primitive, like a CPU semaphore, to properly order the work between them. Pure Vulkan sync primitives won't help you there.
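For illustration, a minimal sketch using a CPU-side counting semaphore (dispatch_semaphore here; the function and queue names are invented):

#include <vulkan/vulkan.h>
#include <dispatch/dispatch.h>

// Created once at startup and shared by both threads.
static dispatch_semaphore_t gSceneSubmitted;

static void setupSync(void) {
    gSceneSubmitted = dispatch_semaphore_create(0);
}

// Thread A (scene): submit the work that signals the scene-complete
// VkSemaphore, then unblock thread B.
static void submitScene(VkQueue sceneQueue, const VkSubmitInfo *sceneSubmit) {
    vkQueueSubmit(sceneQueue, 1, sceneSubmit, VK_NULL_HANDLE);
    dispatch_semaphore_signal(gSceneSubmitted);
}

// Thread B (UI): don't let the UI-side submission happen until the
// signaling submission is actually in flight. In the Skia case the
// vkQueueSubmit below happens inside Skia, so the wait would go just
// before you hand the frame to Skia.
static void submitUI(VkQueue uiQueue, const VkSubmitInfo *uiSubmit) {
    dispatch_semaphore_wait(gSceneSubmitted, DISPATCH_TIME_FOREVER);
    vkQueueSubmit(uiQueue, 1, uiSubmit, VK_NULL_HANDLE);
}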
I was reading an interesting blog on avoiding dispatch_sync calls. The author of the post shows a snippet of code where you have a block that if executed, it creates a deadlock.
Here it is.
for (int i = 0; i < A_BIG_NUMBER; i++) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        dispatch_sync(dispatch_get_main_queue(), ^{
            // do something in the main queue
        });
    });
}
As I understand it, there is a big problem here. A queue is backed by a pool of threads, and GCD won't create any more once it has reached a certain number.
What I cannot understand is the following. Maybe it's due to my poor English :-).
What happens is that all of these background threads start queueing up
operations and saturate the thread pool. Meanwhile, the OS, on the
main thread, in the process of doing some of its own work, calls
dispatch_sync, which blocks waiting for a spot to open up in the
thread pool. Deadlock.
Based on my knowledge, when calling dispatch_sync on a queue I need to verify that it isn't the queue I'm already running on. But here, unless I'm mistaken, something different is being described.
Any thoughts?
Here is the source of my nightmares: Avoid dispatch_sync.
Long story short: in the general case, you can't know which queue you're running on, because queues target other queues in a hierarchical tree. (This is likely why dispatch_get_current_queue is deprecated -- there's really just not one answer to the question, "what queue am I running on?") You can figure out if you're on the main queue by calling [NSThread isMainThread]. The current recommended mechanism for tracking which queue you're on is to use dispatch_queue_set_specific() and dispatch_get_specific(). See this question for all the gory details.
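A minimal sketch of that mechanism (the queue label and key are placeholders):

#import <Foundation/Foundation.h>

static void *kMyQueueKey = &kMyQueueKey; // its own address serves as a unique key
static dispatch_queue_t myQueue;

static void setupQueue(void) {
    myQueue = dispatch_queue_create("com.example.myqueue", DISPATCH_QUEUE_SERIAL);
    // Tag the queue so blocks running on it (or on queues targeting it)
    // can recognize it later.
    dispatch_queue_set_specific(myQueue, kMyQueueKey, (void *)1, NULL);
}

// Run a block on myQueue without deadlocking if we're already on it.
static void runOnMyQueue(dispatch_block_t block) {
    if (dispatch_get_specific(kMyQueueKey) != NULL) {
        block(); // already on myQueue; dispatch_sync here would deadlock
    } else {
        dispatch_sync(myQueue, block);
    }
}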
You said the following quote is not clear:
What happens is that all of these background threads start queueing up
operations and saturate the thread pool. Meanwhile, the OS, on the
main thread, in the process of doing some of its own work, calls
dispatch_sync, which blocks waiting for a spot to open up in the
thread pool. Deadlock.
Let me try to re-state: There are a limited number of threads available to GCD. When you dispatch a block and it begins executing, it is "consuming" one of these threads until it returns. If the executing block does blocking I/O or otherwise goes to sleep for a long period of time, you can find yourself in a situation where all the available threads are consumed. If your main thread then attempts to dispatch_sync a block to a different queue, and no more threads are available, it is possible that your main thread will block waiting for a thread to be available. ("Starved" would have been a better word than "saturated" in the original quote.)
In the general case, this situation is not, strictly speaking, a "deadlock", because other blocks might eventually complete, and the threads they were consuming should become available. Unfortunately, based on empirical observation, GCD needs to have at least one thread available in order to wake up queues that are waiting for a thread as other blocks complete. Hence it is possible, in certain pathological situations, to starve out GCD such that it is effectively deadlocked. Although the example on the linked page probably does meet the strict definition of a deadlock, not all cases in which you end up frozen do; that doesn't change the fact that you're frozen, so it's probably not worth debating.
The standard advice on this is, somewhat unhelpfully, "Don't do that." In the ideal, you should never submit a block to GCD that could do blocking I/O or go to sleep. In the real world, this is probably at least half of what people use GCD for. What can I say? Concurrency is hard. For more detail on the thread limits associated with GCD you can check out my answer over here.
It sounds like what you're trying to achieve here is a recursive lock. The short version of this is that GCD queues are not locks. They can be used in a way that approximates a lock for certain situations, but locks and task queues are two different things, and are not 100% interchangeable.
I have come to believe that it is not possible to approximate a recursive lock using GCD queues, in a way that works for all possible arrangements of queue targeting, and without incurring a greater performance penalty than would be incurred by using an old-fashioned recursive lock. For an extended discussion of recursive locking with GCD, check out my answer over here.
EDIT: Specifically to the code snippet in the question, here's what happens. Imagine that the thread limit is 4. (This is not the actual limit, but the principle is the same no matter what the exact limit is.) Here's the snippet, for reference:
for (int i = 0; i < A_BIG_NUMBER; i++) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        dispatch_sync(dispatch_get_main_queue(), ^{
            // do something in the main queue
        });
    });
}
The first thing to remember, is the main thread is a run loop, with many things happening on it that you did not directly cause to happen. Let's assume, for the moment, that the main thread is busy doing some drawing.
The first pass of the loop will take 1 thread out of the thread pool and run the enqueued block. That thread will immediately block, because it's waiting to do something on the main thread, but the main thread is still busy drawing. There are 3 threads left in the GCD thread pool at this point.
The second pass of the loop will take 1 thread out of the thread pool and run the enqueued block. That thread will immediately block, because it's waiting to do something on the main thread, but the main thread is still busy drawing. There are 2 threads left in the GCD thread pool at this point.
The third pass of the loop will take 1 thread out of the thread pool and run the enqueued block. That thread will immediately block, because it's waiting to do something on the main thread, but the main thread is still busy drawing. There is 1 thread left in the GCD thread pool at this point.
The fourth pass of the loop will take 1 thread out of the thread pool and run the enqueued block. That thread will immediately block, because it's waiting to do something on the main thread, but the main thread is still busy drawing. There are now no threads left in the GCD thread pool at this point. None of these four background threads can proceed until the main thread becomes available to run their dispatch_sync blocks.
Now, over on the main thread, in the course of drawing, in code not visible to you (in AppKit or whatever) there's a call to dispatch_sync to synchronously perform an operation on some other background queue. This code would normally take a thread from the thread pool and do its work, and eventually the main thread would continue. But in this case, there are no threads left in the pool. All the threads there ever were are waiting for the main thread to be free. But the main thread is waiting for one of those background threads to be free. This is a deadlock. Why would some OS code be doing a dispatch_sync to a background queue? I have no idea, but the claim in the linked document is that this does occur (and I would totally believe it.)
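For what it's worth, one way out of this specific pattern, assuming the background blocks don't actually need to wait for the main-queue work to finish, is to make the inner hop asynchronous so no pool thread sleeps while waiting (a sketch, not the linked author's fix):

for (int i = 0; i < A_BIG_NUMBER; i++) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // dispatch_async instead of dispatch_sync: the worker thread returns
        // to the pool immediately instead of blocking until the main thread
        // gets around to running the inner block.
        dispatch_async(dispatch_get_main_queue(), ^{
            // do something in the main queue
        });
    });
}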
So I restructured a central part in my Cocoa application (I really had to!) and I am running into issues since then.
Quick outline: my application controls the playback of QuickTime movies so that they are in sync with external timecode.
Thus, external timecode arrives on a CoreMIDI callback thread and gets posted to the application about 25 times per sec. The sync is then checked and adjusted if it needs to be.
All this checking and adjusting is done on the main thread.
Even if I put all the processing on a background thread, it would be a ton of work: I'm currently using a lot of GCD blocks, and I would need to rewrite many functions so they can be called from an NSThread. So I would like to make sure first that it will solve my problem.
The problem
My CoreMIDI callback is always called on time, but the GCD block that is dispatched to the main queue is sometimes blocked for up to 500 ms. Understandably, adjusting the sync doesn't quite work when that happens. I couldn't find a reason for it, so I'm guessing that I'm doing something that blocks the main thread.
I'm familiar with Instruments, but I couldn't find the right mode to see what keeps my messages from being processed in time.
I would appreciate it if anyone could help; I don't know what I can do about it. Thanks in advance!
Watchdog
You can use Watchdog, a class that logs a warning whenever the main thread is blocked for longer than a defined threshold:
https://github.com/wojteklu/Watchdog
You can install it using CocoaPods:
pod 'Watchdog'
You may be blocking the main thread or you might be flooding it with events.
I would suggest three things:
Grab a timestamp when the timecode arrives in the CoreMIDI callback thread (see mach_absolute_time()). Then grab the current time when your main-thread work is actually being done. You can then adjust accordingly, based on how much time has elapsed between posting to the main thread and it actually being processed.
Create some kind of coalescing mechanism such that, when your main thread is blocked, interim timecode events (that are now out of date) are tossed. This can be as simple as a global NSUInteger that is incremented every time an event is received. The block dispatched to the main queue captures the current value at creation, then checks it when it is processed. If it differs by more than N (N for you to determine), toss the event because more are in flight. (A sketch follows this list.)
Consider not sending an event to the main thread for every timecode notification. 25 adjustments per second is a lot of work. If processing only 5 per second yields a "good enough" perceptual experience, then that is an awful lot of work saved.
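A minimal sketch of that coalescing idea, assuming C11 atomics and a generation counter (names and the placeholder comment are hypothetical; here N is effectively 0, i.e. any newer event supersedes this one):

#import <Foundation/Foundation.h>
#import <stdatomic.h>

static atomic_uint_fast64_t gTimecodeGeneration; // bumped on every MIDI callback

// Called ~25x/sec on the CoreMIDI thread.
static void timecodeArrived(void) {
    uint64_t generation = atomic_fetch_add(&gTimecodeGeneration, 1) + 1;
    dispatch_async(dispatch_get_main_queue(), ^{
        // If newer timecodes arrived while this block sat in the queue,
        // it is stale: toss it, a fresher event is already in flight.
        if (atomic_load(&gTimecodeGeneration) != generation) {
            return;
        }
        // ... check and adjust sync here (your existing code) ...
    });
}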
In general, instrumenting the main event loop is a bit tricky. The CPU profiler in Instruments can be quite helpful. It may come as a surprise, but so can the Allocations instrument. In particular, you can use the Allocations instrument to measure memory throughput. If there are tons of transient (short lived) allocations, it'll chew up a ton of CPU time doing all those allocations/deallocations.
I have an app that needs to send collected data every X milliseconds (and NOT sooner!). My first thought was to stack up the data on an NSMutableArray (array1) on thread1. When thread2 has finished waiting it's X milliseconds, it will swap out the NSMutableArray with a fresh one (array2) and then process its contents. However, I don't want thread1 to further modify array1 once thread2 has it.
This will probably work, but thread safety is not a field where you want to "just try it out." What are the pitfalls to this approach, and what should I do instead?
(Also, if thread2 is actually an NSTimer instance, how does the problem/answer change? Would it all happen on one thread [which would be fine for me, since the processing takes a tiny fraction of a millisecond]?).
You should use either NSOperationQueue or Grand Central Dispatch. Basically, you'll create an operation that receives your data and uploads it once X milliseconds have passed. Each operation is independent, and you can configure the queue with respect to how many concurrent ops you allow, op priority, etc.
The Apple docs on concurrency should help:
http://developer.apple.com/library/ios/#documentation/General/Conceptual/ConcurrencyProgrammingGuide/Introduction/Introduction.html
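For instance, here is a rough sketch of the GCD flavor of this (the class, queue label, and 50 ms interval are all placeholders): a private serial queue owns the mutable array, so the producer and the periodic swap never touch it concurrently.

#import <Foundation/Foundation.h>

@interface Collector : NSObject
- (void)addData:(id)item; // callable from any thread
@end

@implementation Collector {
    dispatch_queue_t _queue;   // serial queue that owns _buffer
    NSMutableArray *_buffer;
    dispatch_source_t _timer;
}

- (instancetype)init {
    if ((self = [super init])) {
        _queue = dispatch_queue_create("com.example.collector", DISPATCH_QUEUE_SERIAL);
        _buffer = [NSMutableArray array];
        _timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, _queue);
        dispatch_source_set_timer(_timer, DISPATCH_TIME_NOW,
                                  (uint64_t)(0.050 * NSEC_PER_SEC), // X = 50 ms, placeholder
                                  1 * NSEC_PER_MSEC);
        __weak typeof(self) weakSelf = self;
        dispatch_source_set_event_handler(_timer, ^{ [weakSelf drain]; });
        dispatch_resume(_timer);
    }
    return self;
}

// Producer side: always funnels through the serial queue.
- (void)addData:(id)item {
    dispatch_async(_queue, ^{ [self->_buffer addObject:item]; });
}

// Runs on _queue, so the swap is safe: the old array is handed off whole
// and the producer can never modify it again.
- (void)drain {
    NSArray *batch = _buffer;
    _buffer = [NSMutableArray array];
    NSLog(@"processing %lu items", (unsigned long)batch.count); // hypothetical: send batch here
}
@end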
The pitfalls of this approach have to do with when you "swap out" the NSArray for a fresh one. Imagine that thread1 gets a reference to the array, and at the same time thread2 swaps the arrays and finishes processing. Thread1 is now writing to a dead array (one that will no longer be processed), even if it's just for a few milliseconds. The way to prevent this, of course, is by using synchronized code blocks (i.e., making your code "thread-safe") in the critical sections, but it's kind of hard not to overshoot the mark and synchronize too much of your code (sacrificing performance).
So the risks are you'll:
Make code that is not thread-safe
Make code that overuses synchronization and is slow (and threads already have a performance overhead)
Make some combination of these two: slow, unsafe code.
The idea is to "migrate away from threads", which is what this link is about.
My software will simulate a few hundred hardware devices, each of which will send several thousand reports to a database server.
Trying it without threading did not give very good results, so now it's time to thread.
Since I am load-testing the d/b server, some of those transactions will succeed and a few may fail. The GUI of the main program needs to reflect this. How should the threads communicate their results back to the main program? Update global variables? Send a message? Or something else?
Now, if I update only at the end of each thread, the GUI is going to look rather boring (and I can't tell whether the program hung). It might be nicer to update the GUI periodically. But that might cause contention, with threads waiting for other threads to update (for instance, if I am writing to global variables, I need a mutex, which will block each thread that is waiting to write).
I'm new to threading. How is this normally done? Perhaps the main program could poll the threads, instead of the threads informing the main program?
One way to organize this is for your threads to add messages to a thread-safe queue (e.g. a ConcurrentQueue) as they get data. To keep things simple you can have a timer thread in your UI that periodically dequeues the queued messages to a private list and then renders them. This design allows your threads to easily queue and forget messages with minimal contention, and for your UI to periodically update itself without blocking your writers too much (i.e. for only the period it takes to dequeue current messages to a private list).
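ConcurrentQueue is a .NET type; in the Cocoa idiom used elsewhere on this page, a rough equivalent sketch might look like this (class name and interval are made up; the lock-protected array plays the role of the thread-safe queue):

#import <Foundation/Foundation.h>

@interface ResultSink : NSObject
- (void)postMessage:(id)msg; // callable from any worker thread
@end

@implementation ResultSink {
    NSMutableArray *_pending;
    NSLock *_lock;
}

// Create this on the main thread so the timer lands on the main run loop.
// (The scheduled timer retains self; invalidate it when you're done.)
- (instancetype)init {
    if ((self = [super init])) {
        _pending = [NSMutableArray array];
        _lock = [NSLock new];
        [NSTimer scheduledTimerWithTimeInterval:0.25
                                         target:self
                                       selector:@selector(drainToUI:)
                                       userInfo:nil
                                        repeats:YES];
    }
    return self;
}

// Workers queue-and-forget; contention is limited to this short critical section.
- (void)postMessage:(id)msg {
    [_lock lock];
    [_pending addObject:msg];
    [_lock unlock];
}

// Main thread: swap the shared list for an empty one, then render the snapshot
// without holding the lock.
- (void)drainToUI:(NSTimer *)timer {
    NSArray *batch;
    [_lock lock];
    batch = _pending;
    _pending = [NSMutableArray array];
    [_lock unlock];
    NSLog(@"updating GUI with %lu results", (unsigned long)batch.count); // placeholder render
}
@end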
Although you are attempting to simulate the load of hundreds of devices, using a thread per device is not the way to model this, as you can only run so many threads concurrently anyway.