I have an NSTimer firing at 60 fps. It updates a C++ model and then draws via Quartz 2D. This works well except memory accumulates quickly even though I am not allocating anything. Instruments reports no leaks but many CFRunLoopTimers (I guess from the repeating NSTimer?) seem to be accumulating. Clicking the window or pressing a key purges most of them which would seem to point to an autorelease pool not being drained frequently enough. Do I have to rely on events to cycle the autorelease pool(s) or is there a better way to clear out the memory?
Any help is appreciated, Thanks
-Sam
Timer creation (timer is an ivar):
timer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 60.0 target:self selector:@selector(update:) userInfo:nil repeats:YES];
update: method:
- (void)update:(NSTimer *)timer {
controller->Update();
[self.view setNeedsDisplay:YES];
}
Update:
After messing around with this a little more I've made a couple of additional observations.
1.) [self.view setNeedsDisplay:YES] seems to be the culprit in spawning these CFRunLoopTimers. Replacing it with [self.view display] gets rid of the issue but at the cost of performance.
2.) Lowering the frequency to 20-30 fps and keeping `[self.view setNeedsDisplay:YES]` also causes the issue to go away.
This would seem to imply setNeedsDisplay: doesn't like to be called a lot (maybe more times per second than can be displayed?). I frankly can't understand what the problem is with "overcalling" it, if all it does is tell the view to be redisplayed at the end of the event loop.
I am sure I am missing something here and any additional help is greatly appreciated.
Usually the right solution would be to create a nested NSAutoreleasePool around your object-creation-heavy code.
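For reference, a minimal sketch of that usual pattern, wrapped around the update: method from the question (pre-ARC, so NSAutoreleasePool is used directly):

- (void)update:(NSTimer *)timer {
    // Create a nested pool so objects autoreleased during this frame
    // are destroyed immediately instead of piling up in the outer pool.
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    controller->Update();
    [self.view setNeedsDisplay:YES];
    [pool drain];
}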
But in this case, it seems the objects are autoreleased when the timer reschedules itself, a piece of code you can't control. And you can't ask the topmost autorelease pool to drain itself without releasing it.
In your case, the solution would be to drop your NSTimer for frame-rate syncing, and to use a CADisplayLink instead:
CADisplayLink *frameLink = [CADisplayLink displayLinkWithTarget:self
                                                       selector:@selector(update:)];
// Notify the application at the refresh rate of the display (60 Hz)
frameLink.frameInterval = 1;
[frameLink addToRunLoop:[NSRunLoop mainRunLoop]
                forMode:NSDefaultRunLoopMode];
CADisplayLink is made to synchronize drawing with the refresh rate of the screen, so it seems like a good candidate for what you want to do. Besides, NSTimer is not precise enough to sync with the display refresh rate when running at 60 Hz.
Well, regardless of the memory-cleanup issue:
The documentation for NSTimer says: "Because of the various input sources a typical run loop manages, the effective resolution of the time interval for a timer is limited to on the order of 50-100 milliseconds."
1/60 s is an interval of approx. 16.6 milliseconds, so you're well below the effective resolution of NSTimer.
Your followup note indicates that lowering its frequency to 20-30 fps fixes it... 20 fps brings the interval to 50 ms -- within the documented resolution.
Documentation also indicates that this shouldn't break anything... however, I've encountered some odd situations where Instruments caused memory issues that weren't previously there. Do you get memory issues/warnings running the app in a Release build, without Xcode or Instruments attached?
I guess at this point I'd recommend just moving on and trying out the tools in the other posted answers.
As already suggested by Kemenaran, I also think that you should try using a CADisplayLink object. One further reason is that NSTimer has an effective resolution limit of 50-100 milliseconds (source):
Because of the various input sources a typical run loop manages, the effective resolution of the time interval for a timer is limited to on the order of 50-100 milliseconds.
On the other hand, I am not sure that this will solve the problem. From what you describe, it seems to me that things could work like this (or in a similar way):
1. When you execute [self.view setNeedsDisplay:YES];, the framework schedules a redraw of the view by means of a CFRunLoopTimer; this explains why so many are being created.
2. When the CFRunLoopTimer fires, the view is redrawn and the needsDisplay flag is reset.
3. In your case, when the update frequency is high, you call setNeedsDisplay: more often than the refresh can actually happen; so for each actual refresh you have several calls to setNeedsDisplay:, and several CFRunLoopTimers are also created.
4. Of all the CFRunLoopTimers created between two consecutive actual refresh operations, only the first one gets released and destroyed; the others either have no chance to fire, or find the view with its needsDisplay flag already reset and thus possibly reschedule themselves.
For point 4: I think the most likely explanation is the first one: you build up a queue of CFRunLoopTimers at a frequency much higher than the one at which you can "consume" it. I am saying that redrawing takes longer than an update cycle because you report that performance suffers when you call [view display].
If this is correct, then the problem would persist with CADisplayLink as well (because it is related to calling update: at too high a frequency compared to the redraw speed), and the only solution would be to find a different way to do the redraw (i.e., not using setNeedsDisplay:YES).
In fact, I checked the cocos2d sources, and setNeedsDisplay:YES is hardly used. Redrawing (cocos2d achieves a frame rate of 60 fps) is done by drawing directly into an OpenGL buffer, and I suspect this is the critical point in reaching that frame rate. You could also check whether you can replace your view's layer with a CAEAGLLayer (this should be pretty easy) and see whether you can draw directly to the GL buffer.
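If you want to try that route, the usual first step is to make the view's backing layer a CAEAGLLayer by overriding +layerClass (a minimal sketch; the actual OpenGL ES setup and drawing are omitted):

#import <QuartzCore/QuartzCore.h>

// In your UIView subclass: back the view with an OpenGL ES-capable layer.
+ (Class)layerClass {
    return [CAEAGLLayer class];
}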
I hope this helps. Keep in mind that I am making many hypotheses here, so any of them could be wrong. I am just proposing my reasoning.
Related
I am creating an application for Mac, in Objective-C, which will run in the menu bar and perform periodic desktop operations (such as changing the wallpaper). I am creating the application so that it stays in the menu bar at all times, allowing easy access to configuration options and other information. My main concern is how to schedule my app to run every X minutes to perform the desktop operations.
The most common solution I have seen is using NSTimer; however, I am concerned that it will not be memory efficient (after reading the following page in the Apple Developer docs). Using an NSTimer will prevent the laptop from going to sleep, and will need an always-running thread to check for when the NSTimer has elapsed. Is there a more memory-efficient way of using NSTimer to schedule these operations?
Alternatively, is there a way to use launchd to initiate a call to my application (which is in the menu bar) so that it can handle the event and perform the desktop operations? I think the second way is better, but I am not sure if it is possible.
First, excellent instincts on keeping this low-impact. But you're probably over-worried in this particular case.
When they say "waking the system from an idle state" they don't mean system-level "sleep" where the screen goes black. They mean idle state. The CPU can take little mini-naps for fractions of a second when there isn't work that immediately needs to be done. This can dramatically reduce power requirements, even while the system is technically "awake."
The problem with having lots of timers flying around isn't so much their frequencies as their tolerances. Say you have 10 timers with a 1-second frequency, but they're offset from each other by 100 ms (just by chance of what time it was when they happened to start). That means the longest possible "gap" is 100 ms. But if they were configured at 1 second with a 0.9-second tolerance (i.e. firing between 1 s and 1.9 s), then the system could schedule them all together, do a bunch of work, and spend most of the second idle. That's much better for power.
To be a good timer citizen, you should first set your timer at the interval at which you really want to do work. If it is common for your timer to fire but all you do is check some condition and reschedule the timer, then you're wasting power. (Sounds like you already have this in hand.) The second thing you should do is set a reasonable tolerance. The default is 0, which is a very small tolerance (it's not actually "zero tolerance," but it's very small compared to minutes). For your kind of problem, I'd probably use a tolerance of at least 1 s.
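For instance, a minimal sketch (the updateDesktop: selector is hypothetical, and the tolerance property requires OS X 10.9 or later):

// Fire every 10 minutes, but allow the system a full minute of leeway
// so it can coalesce the wake-up with other pending work.
NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:600.0
                                                  target:self
                                                selector:@selector(updateDesktop:)
                                                userInfo:nil
                                                 repeats:YES];
timer.tolerance = 60.0;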
I highly recommend the Energy Best Practices talk from WWDC 2013. You may also be interested in the later Writing Energy Efficient Code sessions from 2014 and Achieving All-day Battery Life from 2015.
It is of course possible to do this with launchd, but it adds a lot of complexity, especially at installation. I don't recommend it for the problem you're describing.
So I restructured a central part of my Cocoa application (I really had to!) and I have been running into issues ever since.
Quick outline: my application controls the playback of QuickTime movies so that they are in sync with external timecode.
Thus, external timecode arrives on a CoreMIDI callback thread and gets posted to the application about 25 times per sec. The sync is then checked and adjusted if it needs to be.
All this checking and adjusting is done on the main thread.
Even if I put all the processing on a background thread, it would be a ton of work: I'm currently using a lot of GCD blocks, and I would need to rewrite a lot of functions so that they can be called from an NSThread. So I would like to make sure first that it will solve my problem.
The problem
My CoreMIDI callback is always called in time, but the GCD block that is dispatched to the main queue is sometimes blocked for up to 500 ms. Understandably, adjusting the sync does not quite work when that happens. I couldn't find a reason for it, so I'm guessing that I'm doing something that blocks the main thread.
I'm familiar with Instruments, but I couldn't find the right mode to see what keeps my messages from being processed in time.
I would appreciate if anyone could help.
Don't know what I can do about it.
Thanks in advance!
Watchdog
You can use Watchdog, which reports whenever the main thread is blocked for longer than a given threshold:
https://github.com/wojteklu/Watchdog
You can install it using CocoaPods:
pod 'Watchdog'
You may be blocking the main thread or you might be flooding it with events.
I would suggest three things:
Grab a timestamp when the timecode arrives on the CoreMIDI callback thread (see mach_absolute_time()). Then grab the current time when your main-thread work is being done. You can then adjust accordingly, based on how much time has elapsed between posting to the main thread and the work actually being processed.
Create some kind of coalescing mechanism such that, when your main thread is blocked, interim timecode events (that are now out of date) are tossed (see the sketch after this list). This can be as simple as a global NSUInteger that is incremented every time an event is received. The block dispatched to the main queue captures the current value at creation, then checks it when it is processed. If it differs by more than N (N for you to determine), toss the event because more are in flight.
Consider not sending an event to the main thread for every timecode notification. 25 adjustments per second is a lot of work. If processing only 5 per second yields a "good enough" perceptual experience, that is an awful lot of work saved.
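Here's a rough sketch of the coalescing idea from suggestion 2; every name in it (Timecode, AdjustSync, PostTimecode) is illustrative, not from your code:

#import <Foundation/Foundation.h>
#import <libkern/OSAtomic.h>

typedef struct { int hh, mm, ss, ff; } Timecode;   // illustrative type
extern void AdjustSync(Timecode tc);               // your existing main-thread sync routine

static volatile int64_t gGeneration = 0;

// Called ~25 times/sec on the CoreMIDI thread.
static void PostTimecode(Timecode tc) {
    int64_t mine = OSAtomicIncrement64(&gGeneration);
    dispatch_async(dispatch_get_main_queue(), ^{
        // If newer timecodes arrived while this block sat in the queue,
        // it is stale: skip the expensive sync adjustment entirely.
        if (gGeneration - mine >= 2) return;
        AdjustSync(tc);
    });
}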
In general, instrumenting the main event loop is a bit tricky. The CPU profiler in Instruments can be quite helpful. It may come as a surprise, but so can the Allocations instrument. In particular, you can use the Allocations instrument to measure memory throughput. If there are tons of transient (short-lived) allocations, it'll chew up a ton of CPU time doing all those allocations/deallocations.
Hey, I'm running cocos2d, Box2D and several particle systems on the iPhone 4.
I've completed my first level which has numerous sprites allocated as well.
I have set my FPS limit to 30fps.
When the game first runs, it runs well, at a solid 30fps. This continues for about 3-4 minutes of smooth gameplay.
But after a while the fps starts to drop, and it turns into a gradual decline until it hits around the 12fps mark.
I remember I had this problem with a previous game that I abandoned.
Is this caused by a memory leak, possibly from not deallocating items?
In my -(void)dealloc methods I am releasing everything I allocated; could I have missed one? Or is there some other possibility I'm not considering?
Thanks!
Sounds like you're running out of resources. I'd try Instruments, as suggested by SB. Instruments can check for leaks using the Allocations instrument. You could also try the OpenGL profiling suite.
I have the same problem and I cannot find a resolution. When I unload the entire scene and reload it again, everything is back to normal. So it definitely seems to be a leak somewhere, but even with Instruments I cannot find the source of it.
The total memory usage is not growing and no leaks are reported, so I have the feeling that something in cocos2d or in Chipmunk is increasing the load.
I am using sprites which move in and out of the screen in a random manner, and I am creating and destroying them every time. Can it be that something is not released properly and cocos2d or Chipmunk is still calculating with those items?
I'm using AV Foundation to play an MP3 file loaded over the network, with code that is almost identical to the playback example here: Putting it all Together: Playing a Video File Using AVPlayerLayer, except without attaching a layer for video playback. I was trying to make my app respond to the playback buffer becoming empty on a slow network connection. To do this, I planned to use key-value observing on the AVPlayerItem's playbackBufferEmpty property, but the documentation did not say whether that was possible. I thought it might be, because the status property can be observed (and is in the example above) even though the documentation doesn't say that either.
So, in an attempt to create conditions where the buffer would empty, I added code on the server to sleep for two seconds after serving up each 8k chunk of the MP3 file. Much to my surprise, this caused my app's UI (updated using NSTimer) to freeze completely for long periods, despite the fact that it shows almost no CPU usage in the profiler. I tried loading the tracks on another queue with dispatch_async, but that didn't help at all.
Even without the sleep on the server, I've noticed that loading streams using AVPlayerItem keeps the UI from updating for the short time that the stream is being downloaded. I can't see why a slow file download should ever block the responsiveness of the UI. Any idea why this is happening or what I can do about it?
Okay, problem solved. It looks like passing AVURLAssetPreferPreciseDurationAndTimingKey in the options to URLAssetWithURL:options: causes the slowdown. This also only happens when the AVURLAsset's duration property or some other property relating to the stream's timing is accessed from the selector fired by the NSTimer. So if you can avoid polling for timing information, this problem may not affect you, but that's not an option for me. If precise timing is not requested, there's still a delay of around 0.75 seconds to 1 second, but that's all.
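For anyone hitting the same thing, the change amounts to creating the asset without that option (a minimal sketch with a placeholder URL):

#import <AVFoundation/AVFoundation.h>

// Omitting AVURLAssetPreferPreciseDurationAndTimingKey avoids the long stall,
// at the cost of approximate (or zero) duration information.
NSURL *url = [NSURL URLWithString:@"http://example.com/track.mp3"];  // placeholder
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];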
Looking back through it, the documentation does warn that precise timing might cause slower performance, but I never imagined 10+ second delays. Why the delay should scale with the loading time of the media is beyond me; it seems like it should only scale with the size of the file. Maybe iOS is doing some kind of heavy polling for new data and/or processing the same bytes over and over.
So now, without "precise timing and duration," the duration of the asset is permanently 0.0, even when it's fully loaded. I can also answer my original question about doing KVO on AVPlayerItem's playbackBufferEmpty property. It seems KVO would be useless anyway: the property starts out NO, changes to YES as soon as I start playback, and remains YES even while the media plays for minutes at a time. The documentation says this about the property:
Indicates whether playback has consumed all buffered media and that playback will stall or end.
So I guess that's not accurate, and, at least in this particular case, the property is not very useful.
I'm writing a Cocoa OS X (Leopard 10.5+) end-user program that uses timestamps to calculate statistics for how long something is displayed on the screen. Time is calculated periodically while the program runs, using a repeating NSTimer. [NSDate date] is used to capture the Start and Finish timestamps. Calculating the difference between the two dates in seconds is trivial.
A problem occurs if an end-user or ntp changes the system clock. [NSDate date] relies on the system clock, so if it's changed, the Finish variable will be skewed relative to the Start, messing up the time calculation significantly. My question:
1. How can I accurately calculate the time between Start and Finish, in seconds, even when the system clock is changed mid-way?
I'm thinking that I need a non-changing reference point in time so I can calculate how many seconds have passed since then. For example, system uptime. 10.6 has -(NSTimeInterval)systemUptime, part of NSProcessInfo, which provides the system uptime. However, this won't work, as my app must run on 10.5.
I've tried creating a time counter using NSTimer, but this isn't accurate. NSTimer has several different run modes and can only run in one at a time. By default, NSTimer is put into the default run mode. If a user manipulates the UI for long enough, the run loop enters NSEventTrackingRunLoopMode and skips the default run mode, which can lead to NSTimer firings being skipped, making it an inaccurate way of counting seconds.
I've also thought about creating a separate thread (NSRunLoop) to run an NSTimer second-counter, keeping it away from UI interactions. But I'm very new to multi-threading and I'd like to stay away from that if possible. Also, I'm not sure this would work accurately if the CPU gets pegged by another application (Photoshop rendering a large image, etc...), causing my NSRunLoop to be put on hold long enough to mess up its NSTimer.
I appreciate any help. :)
Depending on what's driving this code, you have two choices:
For absolute precision, use mach_absolute_time(). It will give the time interval exactly between the points at which you called the function.
But in a GUI app, this is often actually undesirable. Instead, you want the time difference between the events that started and finished your duration. If so, compare [[NSApp currentEvent] timestamp].
Okay, so this is a long shot, but you could try implementing something like the NSSystemClockDidChangeNotification available in Snow Leopard.
So bear with me here, because this is a strange idea and is definitely non-deterministic. But what if you had a watchdog thread running for the duration of your program? This thread would, every n seconds, read the system time and store it. For the sake of argument, let's make it 5 seconds. So every 5 seconds, it compares the previous reading to the current system time. If there's a "big enough" difference ("big enough" would definitely need to be greater than 5, but not too much greater, to account for the non-determinism of process scheduling and thread prioritization), it posts a notification that there has been a significant time change. You would need to play around with fuzzing the value that constitutes "big enough" (or small enough, if the clock was reset to an earlier time) for your accuracy needs.
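A rough sketch of what that thread's loop might look like (all names and thresholds are illustrative, and the slop value would need the fuzzing described above):

- (void)watchdogLoop {
    const NSTimeInterval period = 5.0;   // polling interval
    const NSTimeInterval slop = 2.0;     // tolerance for scheduling jitter
    NSTimeInterval last = [[NSDate date] timeIntervalSince1970];
    while (![[NSThread currentThread] isCancelled]) {
        [NSThread sleepForTimeInterval:period];
        NSTimeInterval now = [[NSDate date] timeIntervalSince1970];
        NSTimeInterval delta = now - last;
        // A delta far from the expected 5 s means the system clock moved.
        if (delta < period - slop || delta > period + slop) {
            [[NSNotificationCenter defaultCenter]
                postNotificationName:@"MyClockDidChangeNotification" object:nil];
        }
        last = now;
    }
}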
I know this is kind of hacky, but barring any other solution, what do you think? Might that, or something like that, solve your issue?
Edit
Okay so you modified your original question to say that you'd rather not use a watchdog thread because you are new to multithreading. I understand the fear of doing something a bit more advanced than you are comfortable with, but this might end up being the only solution. In that case, you might have a bit of reading to do. =)
And yeah, I know that something such as Photoshop pegging the crap out of the processor is a problem. Another (even more complicated) solution would be, instead of having a watchdog thread, to have a separate watchdog process with top priority, so it is a bit more immune to processor pegging. But again, this is getting really complicated.
Final Edit
I'm going to leave all my other ideas above for completeness' sake, but it seems that using the system's uptime will also be a valid way to deal with this. Since [[NSProcessInfo processInfo] systemUptime] only works in 10.6+, you can just call mach_absolute_time(). To get access to that function, just #include <mach/mach_time.h>. That should be the same value as returned by NSProcessInfo.
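For what it's worth, the usual conversion of those ticks to seconds goes through mach_timebase_info() (a minimal sketch):

#include <mach/mach_time.h>

// Convert a difference of mach_absolute_time() ticks into seconds.
static double MachTicksToSeconds(uint64_t ticks) {
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    // ticks * numer / denom yields nanoseconds.
    return (double)ticks * timebase.numer / timebase.denom / 1e9;
}

// Usage:
// uint64_t start = mach_absolute_time();
// ... elapsed interval to measure ...
// double seconds = MachTicksToSeconds(mach_absolute_time() - start);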
I figured out a way to do this using the UpTime() C function, provided in <CoreServices/CoreServices.h>. This returns Absolute Time (CPU-specific), which can easily be converted into Duration Time (milliseconds, or nanoseconds). Details here: http://www.meandmark.com/timingpart1.html (look under part 3 for UpTime)
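A conversion along these lines should work (a sketch using the Carbon-era calls from CoreServices):

#include <CoreServices/CoreServices.h>

// UpTime() returns AbsoluteTime; AbsoluteToNanoseconds() converts it.
AbsoluteTime now = UpTime();
uint64_t nanos = UnsignedWideToUInt64(AbsoluteToNanoseconds(now));
double seconds = (double)nanos / 1e9;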
I couldn't get mach_absolute_time() to work properly, likely due to my lack of knowledge about it and not being able to find much documentation on the web. It appears to grab the same time as UpTime(), but converting it into a double left me dumbfounded.
[[NSApp currentEvent] timestamp] did work, but only while the application was receiving NSEvents. If the application went into the background, it wouldn't receive events, and [[NSApp currentEvent] timestamp] would simply keep returning the same old timestamp in the NSTimer firing method until the end-user interacted with the app again.
Thanks for all your help Marc and Mike! You both definitely sent me in the right direction leading to the answer. :)