NSThread high CPU usage - iOS 7

I'm trying to use a continuously running thread to perform tasks at a high rate in my app.
In this app I have a list of 1000 or so timestamps. I poll them until their time comes and then instruct an AUSampler to play.
The problem I have is that I seem to fundamentally misunderstand how NSThread works. In the simple example below the CPU shoots up to 100% despite no tasks being run.
In what way am I using NSThread incorrectly?
What would be a better way to create a very fast polling mechanism that doesn't hog the CPU?
audioThread = [[NSThread alloc] initWithTarget:self selector:@selector(test) object:nil];
[audioThread setThreadPriority:0];
[audioThread start];

- (void)test
{
    while (mycondition)
    {
        // do my work
        // cpu == 100%
    }
}

The solution was to find the minimum interval between sampler fire times (based on BPM/resolution) and make the thread sleep for that amount.
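A minimal sketch of that fix, assuming a precomputed minimumInterval (in seconds) derived from the BPM and resolution; the helper name is illustrative, not from the original code:

- (void)test
{
    while (mycondition)
    {
        // Fire any samplers whose timestamps have come due
        [self fireDueSamplers]; // illustrative helper

        // Sleep for the smallest gap between fire times so the loop
        // doesn't spin at 100% CPU between events
        [NSThread sleepForTimeInterval:minimumInterval];
    }
}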

Related

Why does the Metal triple buffering model matter in the official examples?

Metal Best Practices suggests using triple buffering for dynamic data buffers. But the listing provided in the documentation, and the default Metal example generated by Xcode, block every frame waiting for the GPU to finish its work:
- (void)render
{
    // Wait until the in-flight command buffer has completed its work
    dispatch_semaphore_wait(_frameBoundarySemaphore, DISPATCH_TIME_FOREVER);

    // TODO: Update dynamic buffers and send them to the GPU here!

    // (command buffer creation and encoding elided in the listing)
    __weak dispatch_semaphore_t semaphore = _frameBoundarySemaphore;
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> commandBuffer) {
        // GPU work is complete
        // Signal the semaphore to start the CPU work
        dispatch_semaphore_signal(semaphore);
    }];

    // CPU work is complete
    // Commit the command buffer and start the GPU work
    [commandBuffer commit];
}
So how does triple buffering improve anything here?
The important bit you didn't spot in the sample is:
_frameBoundarySemaphore = dispatch_semaphore_create(kMaxInflightBuffers);
As the documentation for dispatch_semaphore_create says:
Passing a value greater than zero is useful for managing a finite pool of resources, where the pool size is equal to the value.
kMaxInflightBuffers is set to 3 for triple buffering. The first 3 calls to dispatch_semaphore_wait will succeed without any waiting.
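In other words, the CPU can run up to three frames ahead of the GPU before it ever blocks. A rough sketch of how the dynamic buffers are typically rotated alongside that semaphore (the buffer array and index names are illustrative, not from Apple's sample; command buffer creation and encoding are again elided):

static const NSUInteger kMaxInflightBuffers = 3;

- (void)render
{
    // Blocks only if the GPU is already three full frames behind
    dispatch_semaphore_wait(_frameBoundarySemaphore, DISPATCH_TIME_FOREVER);

    // _dynamicUniformBuffers holds kMaxInflightBuffers MTLBuffers created up front.
    // Write this frame's data into a buffer the GPU is not currently reading.
    _currentFrameIndex = (_currentFrameIndex + 1) % kMaxInflightBuffers;
    id<MTLBuffer> uniformBuffer = _dynamicUniformBuffers[_currentFrameIndex];
    // ... update uniformBuffer contents and encode draw calls ...

    __weak dispatch_semaphore_t semaphore = _frameBoundarySemaphore;
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
        // This slot is free again; let the CPU start another frame
        dispatch_semaphore_signal(semaphore);
    }];
    [commandBuffer commit];
}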

Measure the time a thread spends waiting for a lock (locking by Core Data)

We are using Core Data on several threads... (too many concurrent contexts are bad - I know, I'm experiencing it)
Now, Core Data wraps a lock around every fetch/save.
I'd like to measure the time one thread spends blocked waiting to acquire that lock.
I thought I could just use the Time Profiler, the thread states instrument, or the Sampler.
but:
- Time Profiler just ignores waiting (likely because waiting doesn't consume CPU)
- the Sampler does too (even though it isn't in CPU-only mode)
- the thread states instrument doesn't show me the correct call stack either :(
[but maybe (and that's always a possibility) I overlooked an easy solution]
Here is a very simple app that also takes a lock and for which I also fail to get the wait time. Maybe you can help me measure the time the main thread spends waiting in this example, using some profiling technique I can then transfer to the Core Data case:
@implementation DDAppDelegate

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    NSLock *l = [[NSLock alloc] init];
    NSLog(@"made");
    [NSThread detachNewThreadSelector:@selector(hogTheLog:) toTarget:self withObject:l];
    NSLog(@"wait");
    [l lock];
    NSLog(@"got");
    [l unlock];
    NSLog(@"terminate");
    [NSApp terminate:nil];
}

- (void)hogTheLog:(NSLock *)l
{
    [l lock];
    sleep(3);
    [l unlock];
}

@end
I'm not sure how long Xcode has had this feature; I only noticed it in 5.1. In the "Time Profiler" instrument configuration there is now a checkbox "Record Waiting Threads". In this mode it records exactly how much time the application spent in each method/function, etc. (instead of measuring just the CPU time).
I set the sample interval to 20 ms, because otherwise the overall profiling becomes too slow.
UPD: also, set "Sample Perspective" (in the bottom-left corner) to "Running Sample Times" to exclude samples where the thread is simply idle.
Use the "System Calls" instrument in Instruments. You get this instrument if you select the "System Trace" template when launching Instruments. This will give you all sorts of scheduling information which should allow you to find your answer.

NSTimer sometimes freezes when the app is doing heavy computation

I'd like to animate some loading points while the app is doing some computation in the background. I achieve this via an NSTimer:
self.timer = [NSTimer scheduledTimerWithTimeInterval:0.3f
                                              target:self
                                            selector:@selector(updateLoadingPoints:)
                                            userInfo:nil
                                             repeats:YES];
Unfortunately, sometimes, when the computation becomes heavy, the method is not fired and the updating therefore doesn't happen. It seems like all the firings queue up and only run after the heavy computation.
Is there a way to give the NSTimer a higher priority to ensure that it's regularly calling my method? Or is there another way to achieve this?
NSTimer works by adding events to the queue on the main run loop; it's the same event queue used for touch events and I/O data received events and so on. The time interval you set isn't a precise schedule; basically on each pass through the run loop, the timers are checked to see if any are due to be fired.
Because of the way they are implemented, there is no way to increase the priority of a timer.
It sounds like your secondary thread is taking a lot of CPU time away from the main thread, and so the timers don't fire as often as you would like. In other words, the main thread is starved for CPU time.
Calling performSelectorOnMainThread: won't necessarily help, because these methods essentially add a single-fire timer to the main thread's event queue. So you'll just be setting up timers in a different way.
To fix your problem, I would suggest that you increase the relative priority of the main thread by decreasing the priority of your computation thread. (See [NSThread setThreadPriority:].)
It may seem counter-intuitive to have your important worker thread running at a lower priority than the main thread, which is just drawing stuff to the screen, but in a human-friendly application, keeping the screen up to date and responding to user input usually is the most important thing that the app should be doing.
In practice, the main thread needs very little CPU, so it won't really be slowing your worker thread down; rather, you are just ensuring that for the small amount of time that the main thread needs to do something, it gets done quickly.
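A minimal sketch of that suggestion (the computation method name is illustrative):

// Kick the heavy work off the main thread
[self performSelectorInBackground:@selector(runComputation) withObject:nil];

- (void)runComputation
{
    // Lower this thread's priority so the main thread
    // (run loop, timers, UI) is not starved of CPU time
    [NSThread setThreadPriority:0.1];

    // ... heavy computation ...
}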
The timer is added to the run loop it's been scheduled with. If you create the timer on a secondary thread (e.g. your worker thread), there's a good chance you also scheduled it on the secondary thread.
You want the UI updates on the main thread. Thus, you want the timer scheduled on the main thread. If your updates are still slow, perhaps your main thread can do less work; also ensure that you have a very low number of threads and that you are locking appropriately.
I suspect you created the timer on a secondary thread which did not run its run loop as often as the timer wanted to fire. If that thread is doing a lot of (prolonged) work in the background and not running its run loop, the timer has no chance to fire, because its messages cannot be delivered while the thread is still busy processing.
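If the timer is indeed being created on the worker thread, one way to make sure it is scheduled on the main run loop is to create it from the main queue (a sketch, reusing the selector and interval from the question):

dispatch_async(dispatch_get_main_queue(), ^{
    // scheduledTimerWithTimeInterval: schedules on the current run loop,
    // which here is the main run loop
    self.timer = [NSTimer scheduledTimerWithTimeInterval:0.3
                                                  target:self
                                                selector:@selector(updateLoadingPoints:)
                                                userInfo:nil
                                                 repeats:YES];
});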
Make your timer call from a separate thread rather than from the main thread. This will keep it separate from your main thread's processing, which should give you the desired results.
Perform your computation on a separate thread, using performSelectorInBackground:withObject:. Always do as little as possible on your UI thread, as any work done there will block mouse clicks, cause SPoDs/beachballs, and delay timer handlers.
I suspect that it's not just your TIMER being unresponsive, but the whole UI in general.
Sorry for having called out the wrong API in my earlier revision - copy/paste failure on my part.

Using mach_absolute_time() to slow down a simulation. Is this the proper way?

I am writing an application for OS X (Obj-C/Cocoa) that runs a simulation and displays the results to the user. In one case, I want the simulation to run in "real-time" so that the user can watch it go by at the same speed it would happen in real life. The simulation is run with a specific timestep, dt. Right now, I am using mach_absolute_time() to slow down the simulation. When I profile this code, I see that by far, most of my CPU time is spent in mach_absolute_time() and my CPU is pegged at 100%. Am I doing this right? I figured that if I'm slowing down the simulation such that the program isn't simulating anything most of the time then CPU usage should be down but mach_absolute_time() obviously isn't a "free call" so I feel like there might be a better way?
double nextT = mach_absolute_time();
while (runningSimulation)
{
    if (mach_absolute_time() >= nextT)
    {
        nextT += dt_ns;
        // Compute the next "frame" of the simulation
        // ...
    }
}
Do not spin at all.
That is the first rule of writing GUI apps where battery life and app responsiveness matter.
sleep() or nanosleep() can be made to work, but only if used on something other than the main thread.
A better solution is to use any of the time based constructs in GCD as that'll make more efficient use of system resources.
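A sketch of the GCD approach, using a dispatch timer source (the queue label and leeway are illustrative; dt_ns is the timestep in nanoseconds):

dispatch_queue_t queue = dispatch_queue_create("com.example.simulation", DISPATCH_QUEUE_SERIAL);
dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
dispatch_source_set_timer(timer,
                          dispatch_time(DISPATCH_TIME_NOW, 0),
                          dt_ns,        // fire once per simulation timestep
                          dt_ns / 10);  // leeway lets the system coalesce wakeups
dispatch_source_set_event_handler(timer, ^{
    // Compute the next "frame" of the simulation here
});
dispatch_resume(timer);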
If you want the simulation to appear smooth to the user, you'll really want to lock the slowed version to the refresh rate of the screen. On iOS, there is CADisplayLink. I don't know of a direct equivalent on the Mac.
You are doing busy spinning. If there is a lot of time before you need to simulate again, consider sleeping instead.
But no sleep call guarantees that it will sleep for exactly the duration specified. Depending on how accurate you need to be, you can sleep for a little less than the remaining time and then spin for the rest, as sketched below.
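A sketch of that sleep-then-spin idea (assumes the deadline is tracked in nanoseconds; the ~1 ms margin is an arbitrary choice, and on some hardware mach_absolute_time() ticks need converting via mach_timebase_info()):

#include <mach/mach_time.h>
#include <time.h>

// Convert mach ticks to nanoseconds (the ratio is often, but not always, 1:1)
static uint64_t NowNanos(void) {
    static mach_timebase_info_data_t info;
    if (info.denom == 0) mach_timebase_info(&info);
    return mach_absolute_time() * info.numer / info.denom;
}

// Sleep through most of the wait, spin only the last millisecond or so
static void WaitUntilNanos(uint64_t deadline) {
    uint64_t now = NowNanos();
    if (deadline > now + 1500000ULL) {                  // more than ~1.5 ms away
        uint64_t sleepNs = deadline - now - 1000000ULL; // leave ~1 ms to spin
        struct timespec ts = { (time_t)(sleepNs / 1000000000ULL),
                               (long)(sleepNs % 1000000000ULL) };
        nanosleep(&ts, NULL);
    }
    while (NowNanos() < deadline) {
        // brief spin for the final stretch, for accuracy
    }
}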

60 Hz NSTimer and autoreleased memory

I have an NSTimer firing at 60 fps. It updates a C++ model and then draws via Quartz 2D. This works well except memory accumulates quickly even though I am not allocating anything. Instruments reports no leaks but many CFRunLoopTimers (I guess from the repeating NSTimer?) seem to be accumulating. Clicking the window or pressing a key purges most of them which would seem to point to an autorelease pool not being drained frequently enough. Do I have to rely on events to cycle the autorelease pool(s) or is there a better way to clear out the memory?
Any help is appreciated, Thanks
-Sam
Timer creation (timer is an ivar):
timer = [NSTimer scheduledTimerWithTimeInterval:1.0f / 60
                                         target:self
                                       selector:@selector(update:)
                                       userInfo:nil
                                        repeats:YES];
update: method:
- (void)update:(NSTimer *)timer {
    controller->Update();
    [self.view setNeedsDisplay:YES];
}
Update:
After messing around with this a little more I've made a couple of additional observations.
1.) [self.view setNeedsDisplay:YES] seems to be the culprit in spawning these CFRunLoopTimers. Replacing it with [self.view display] gets rid of the issue but at the cost of performance.
2.) Lowering the frequency to 20-30 fps and keeping [self.view setNeedsDisplay:YES] also causes the issue to go away.
This would seem to imply that setNeedsDisplay: doesn't like to be called a lot (maybe more times per second than can actually be displayed?). I frankly can't understand what the problem with "over-calling" it is, if all it does is mark the view to be redisplayed at the end of the event loop.
I am sure I am missing something here and any additional help is greatly appreciated.
Usually the right solution would be to create a nested NSAutoreleasePool around your object-creation-heavy code.
But in this case, it seems the objects are autoreleased when the timer re-schedules itself, which is code you can't control. And you can't ask the topmost autorelease pool to drain itself without releasing it.
In your case, the solution would be to drop your NSTimer for frame-rate syncing, and to use a CADisplayLink instead:
CADisplayLink *frameLink = [CADisplayLink displayLinkWithTarget:self
                                                       selector:@selector(update:)];
// Notify the application at the refresh rate of the display (60 Hz)
frameLink.frameInterval = 1;
[frameLink addToRunLoop:[NSRunLoop mainRunLoop]
                forMode:NSDefaultRunLoopMode];
CADisplayLink is made to synchronize drawing with the refresh rate of the screen, so it seems like a good candidate for what you want to do. Besides, NSTimer is not precise enough to sync with the display refresh rate when running at 60 Hz.
Well, regardless of the memory-cleanup issue:
The documentation for NSTimer says: "Because of the various input sources a typical run loop manages, the effective resolution of the time interval for a timer is limited to on the order of 50-100 milliseconds."
1/60 is an interval of approx. 16.6 milliseconds so you're well beyond the effective resolution of NSTimer.
Your follow-up note indicates that lowering the frequency to 20-30 fps fixes it... 20 fps brings the interval to 50 ms, which is within the documented resolution.
The documentation also indicates that this shouldn't break anything... however, I've encountered some odd situations where Instruments caused memory issues that weren't previously there. Do you get memory issues/warnings running the app in a Release build, without Xcode or Instruments attached?
I guess at this point I'd recommend just moving on and trying out the tools in the other posted answers.
As already suggested by Kemenaran, I also think that you should try using a CADisplayLink object. One further reason is that NSTimer has an effective resolution limit of 50-100 milliseconds (source):
Because of the various input sources a typical run loop manages, the effective resolution of the time interval for a timer is limited to on the order of 50-100 milliseconds.
On the other hand, I am not sure that this will solve the problem. From what you describe, it seems to me that things could work like this (or in a similar way):
1. when you execute [self.view setNeedsDisplay:YES], the framework schedules a redraw of the view by means of a CFRunLoopTimer; this explains why so many are being created;
2. when the CFRunLoopTimer fires, the view is redrawn and the needsDisplay flag is reset;
3. in your case, when the update frequency is high, you call setNeedsDisplay more often than the refresh can actually happen; so for each actual refresh there are several calls to setNeedsDisplay, and several CFRunLoopTimers are created;
4. of all the CFRunLoopTimers created between two consecutive actual refreshes, only the first one gets released and destroyed; the others either have no chance to fire, or find the view with the needsDisplay flag already reset and thus possibly reschedule themselves.
For point 4: I think the most likely explanation is the first one: you build up a queue of CFRunLoopTimers at a frequency much higher than the one at which you can "consume" it. I am saying that redrawing takes longer than an update cycle because you say that performance suffers when you call [view display].
If this is correct, then the problem would persist with CADisplayLink as well (because it is related to calling update at too high a frequency compared to the redraw speed), and the only solution would be finding a different way to do the redraw (i.e., not using setNeedsDisplay:YES).
In fact, I checked the cocos2d sources and setNeedsDisplay:YES is hardly used at all. Redrawing (cocos2d achieves a frame rate of 60 fps) is done by drawing directly into an OpenGL buffer, and I suspect that this is the critical point in reaching that frame rate. You could also check whether you can replace your view's layer with a CAEAGLLayer (this should be pretty easy) and see whether you can draw directly to the GL buffer.
I hope this helps. Keep in mind that I am making many hypotheses here, so any of them could well be wrong. I am just laying out my reasoning.