Long-running animations in iOS (up to 1/2 hour) - ios7

So I've got this animated pie chart working now. It can indicate, for example, progress over time (similar to UIProgressView).
For legacy reasons I am still using it with a timer that fires approximately every second and increases the progress. It should now be possible to get rid of this timer and set the overall duration of a pie animation to, say, half an hour, instead of letting the timer fire 30 * 60 times and starting as many short incremental animations.
So my question is this: are there any good reasons that speak against using such long (up to half-an-hour-long) animations in iOS? In the example of the pie chart, no more than roughly 360 frames would be needed, even over half an hour.
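For reference, here is roughly what I mean by a single long animation, assuming the pie layer exposes a custom animatable progress property (pieLayer and the key path are placeholders for my own code):

#import <QuartzCore/QuartzCore.h>

CABasicAnimation *fill = [CABasicAnimation animationWithKeyPath:@"progress"];
fill.fromValue = @0.0;
fill.toValue = @1.0;
fill.duration = 30.0 * 60.0;          // half an hour, in one single animation
fill.fillMode = kCAFillModeForwards;  // keep the final value on screen
fill.removedOnCompletion = NO;
[pieLayer addAnimation:fill forKey:@"progress"];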

There is a good reason against very long animations: memory.
CoreAnimation will create a presentationLayer copy for every frame (see, for example, your other question), and (at least up to iOS 7.1) it will allocate and initialize them in the background the moment you add the animation to the layer.
The frame rate depends on the device, not on the magnitude of the change of the animated property. Moreover, there doesn't seem to be a way to tweak CoreAnimation's frame rate on iOS (while on OS X NSAnimation has a frameRate property), so if you animate progress (it would be the same with any property) and set a duration of 30 minutes, you will end up wasting a lot of memory.
Some numbers. I scheduled some CABasicAnimations on the progress key path of your DZRoundProgressLayer and added some logging in -initWithLayer:. This revealed that on the simulator, roughly 50 shadow copies (frames) are created per second of animation.
This means about 90K shadow copies would be created for a 30-minute animation: for several seconds after the beginning of the animation, CoreAnimation was still allocating the first thousands of copies. Adding some data payload to the instance variables of DZRoundProgressLayer showed the memory usage rising by several MB in the first seconds (then some memory management took over the unconstrained allocations, presumably freeing the old copies).
Is it a bad idea? It's a waste of resources, memory and CPU, even if your layer only occupies a few bytes in memory, considering that the change in the pie area per frame is too small to be noticed. Setting up an NSTimer or KVO doesn't require many lines of code, so it may well be worth changing approach.
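A minimal sketch of the timer-driven alternative, assuming the layer exposes a progress property as DZRoundProgressLayer does (the timer, startDate and progressLayer properties here are placeholders):

- (void)startProgressTimerWithDuration:(NSTimeInterval)totalDuration {
    self.startDate = [NSDate date];
    // One update per second: 30 * 60 small model changes instead of one huge animation.
    self.timer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                  target:self
                                                selector:@selector(tick:)
                                                userInfo:@(totalDuration)
                                                 repeats:YES];
}

- (void)tick:(NSTimer *)timer {
    NSTimeInterval elapsed = -[self.startDate timeIntervalSinceNow];
    NSTimeInterval total = [timer.userInfo doubleValue];
    [CATransaction begin];
    [CATransaction setDisableActions:YES];  // avoid an implicit 0.25 s animation per step
    self.progressLayer.progress = MIN(elapsed / total, 1.0);
    [CATransaction commit];
    if (elapsed >= total) {
        [timer invalidate];
    }
}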

Related

Metal drawable musings... ugh

I have two issues in my Metal App.
My call to currentPassDescriptor is stalling. I have too many drawables, apparently.
I'm wholly confused about how to configure the multiple MTKViews I am using for the best performance.
Issue (1)
I have a problem with currentPassDescriptor in my app. It is occasionally blocking (for 1.00s) which, according to the docs, is because there is no currentDrawable available.
Background: I have 4 HD 1920x1080 videos playing concurrently, tiled out onto a 3840x2160 second external display as a debugging configuration. The pixel buffers of these AVPlayer instances are captured by 4 independent CVDisplayLink callbacks and, from within the callback, there is the draw call to its assigned MTKView. A total of 4 MTKViews are subviews tiled on a single NSWindow, and are configured for manual drawing.
I'm driving the drawing manually from the CVDisplayLink callbacks. If I don't, I get stutter when mousing up on the app's menus, for example.
Within each draw call, I do a bit of kernel shader work then attempt to obtain the currentPassDescriptor. If successful, I do one pass of a fragment/vertex shader and then present the drawable. My code flow follows Apple’s sample code as well as published examples.
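For context, the per-view setup looks roughly like this (simplified; DisplayLinkCallback and ConfigureVideoView are placeholder names, not my actual code):

#import <MetalKit/MetalKit.h>
#import <CoreVideo/CoreVideo.h>

static CVReturn DisplayLinkCallback(CVDisplayLinkRef displayLink,
                                    const CVTimeStamp *now,
                                    const CVTimeStamp *outputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *context)
{
    MTKView *view = (__bridge MTKView *)context;
    [view draw];   // runs the delegate's drawInMTKView:, which asks for the pass descriptor / drawable
    return kCVReturnSuccess;
}

static void ConfigureVideoView(MTKView *videoView, CVDisplayLinkRef displayLink)
{
    videoView.paused = YES;                 // the view does not run its own render loop
    videoView.enableSetNeedsDisplay = NO;   // drawing happens only when -draw is called
    CVDisplayLinkSetOutputCallback(displayLink, DisplayLinkCallback,
                                   (__bridge void *)videoView);
}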
According to the Metal System Trace, most of the draw calls take under 5 ms. The GPU is about 20-25% utilized and there's about 25% of the GPU memory free. I can also cause the main thread to usleep() for 1 second without any hiccups.
Without any user interaction, there's about a 5% chance of the videos stalling out in the first minute. If there's some UI work going on, then I see that as WindowServer work in Instruments. I also note that AVFoundation seems to cache about 15 frames of video onto the GPU for each AVPlayer.
If the cadence of the draw calls is upset, there's about a 10% chance that things stall completely or partially: some videos stall completely, some stall down to 1 Hz updates, and some won't stall at all. There's also less chance of stalling when running Metal System Trace. The movies that have stalled seem to have done so while obtaining a currentPassDescriptor.
It's really a poor design to have this currentPassDescriptor block for ≈1 s during a render loop. So much so that I'm thinking of eschewing the MTKView altogether and just drawing to a CAMetalLayer myself. But the docs on CAMetalLayer seem to indicate the same blocking behaviour will occur.
I also grab these 4 pixel buffers on the fly and render sub-size regions-of-interest to 4 smaller MTKViews on the main monitor; but the stutters still occur if this code is removed.
Is the drawable buffer limit per MTKView or per the backing CALayer? The docs for maximumDrawableCount on CAMetalLayer say the number needs to be 2 or 3. This question ties into the configuration of the views.
Issue (2)
My current setup is a 3840x2160 NSWindow with a single content view. This subclass of NSView does some hiding/revealing of the mouse cursor by introducing an NSTrackingRectTag. The MTKViews are tiled subviews on this content view.
Is this the best configuration? Namely, one NSWindow with tiled MTKViews… or should I do one MTKView per window?
I'm also not sure how to best configure these windows/layers, i.e. by setting (or clearing) wantsLayer, wantsUpdateLayer, and/or canDrawSubviewsIntoLayer. I'm currently just setting wantsLayer to YES on the single content view. Any hints on this would be great.
Does adjusting these properties collapse all the available drawables to the backing layer only; are there still 2 or 3 per MTKView?
NB: I've attached a sample run of my Metal app. The longest 'work' on the top graph is just under 5ms. The clumps of green/blue are rendering on the 4 MTKViews. The 'work' alternates a bit because one of the videos is a 60fps source; the others are all 30fps.

iOS framerate - always 30fps with UIImage?

I want to create a frame-by-frame animation with an array of images.
You can set the animationDuration; the default value is 1.
Can I be sure that it is always 30 fps on every iOS device when I start the animation with startAnimating?
So I need exactly 15 images for a 1-second animation - is there a special calculation I can use when I have more or fewer than 15 images that have to be animated in exactly one second?
E.g. 60 / 15 * (number of images in the array)
I don't think you can count on UIImageView animationImages to give you 30 FPS everywhere and every time.
Indeed, this is meant for very simple and "opportunistic" animations. If you google for it, you will find reports of that method hogging a device (in specific conditions). On the other hand, if your images are small, then chances are that it could work, but you get no guarantees (nor ways to enforce the FPS you need).
So I need exactly 15 images for a 1-second animation - is there a special calculation I can use when I have more or fewer than 15 images that have to be animated in exactly one second?
If I understand your question right, then you can try with:
duration = number_of_images / FPS; // e.g., 60 images / 30 FPS = 2 seconds
Of course, whether the device will then properly show the 60 images at 30 FPS is another story.
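For completeness, here is what that calculation looks like in code, assuming 60 frame images named frame_0 through frame_59 in the app bundle (the names and count are placeholders):

#import <UIKit/UIKit.h>

NSMutableArray<UIImage *> *frames = [NSMutableArray array];
for (NSUInteger i = 0; i < 60; i++) {
    UIImage *frame = [UIImage imageNamed:[NSString stringWithFormat:@"frame_%lu", (unsigned long)i]];
    if (frame) {
        [frames addObject:frame];
    }
}

UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
imageView.animationImages = frames;
imageView.animationDuration = frames.count / 30.0;  // 60 images / 30 FPS = 2 seconds
imageView.animationRepeatCount = 1;                  // 0 means repeat forever
[imageView startAnimating];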

UIView animateWithDuration: slows down animation frame rate

I am using CADisplayLink to draw frames with the EAGLView method in a game, 60 times per second.
When I call UIView animateWithDuration: the framerate drops down to exactly half, from 60 to 30 fps for the duration of the animation. Once the animation is over, the fps rises instantly back up to 60.
I also tried using an NSTimer-based animation method instead of CADisplayLink and still get the same result.
The same behavior happens when I press the volume buttons while the speaker icon is fading out, so that fade is presumably using animateWithDuration:. As I would like to handle the speaker icon smoothly in my app, this means I can't just rewrite my animation code to use something other than animateWithDuration:, but need to find a solution that works with it.
I am aware that there is an option to slow down animations for debugging on the simulator; however, I am experiencing this on the device and no such option is enabled. I also tried various options for animateWithDuration:, such as the linear curve and allowing user interaction, but none brought an improvement.
I am also aware I can design an engine that can still work with a frame rate that varies widely. However, this is not an ideal solution to this problem, as high fps is desirable for games.
Has someone seen this problem or solved it before?
The solution to this is to do your own animation and blit during the CADisplayLink callback.
1) For the volume issue, put a small volume icon in the corner, or show it if the user takes some predefined touch action, and give them touch controls. With that input you can use AVAudioPlayer to vary the volume, and just avoid the system control altogether. You might even be able to detect that the user has pressed the volume buttons and pop up a note asking them to do it your way. This gets you away from any animations triggered by the system.
2) When you have an animation you want to do, create a series of images in code (either then or beforehand), and every so many CADisplayLink callbacks blit the next image to the screen.
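A rough sketch of what 2) could look like for the speaker-icon fade, assuming the CADisplayLink callback you already have; speakerLayer, fadeStartTime and fadeDuration are placeholder properties, not part of your existing code:

- (void)displayLinkFired:(CADisplayLink *)link {
    // ... existing per-frame game rendering ...

    // elsewhere, when the fade should begin: self.fadeStartTime = CACurrentMediaTime();
    if (self.fadeStartTime > 0) {
        CFTimeInterval elapsed = link.timestamp - self.fadeStartTime;
        CGFloat t = MIN(elapsed / self.fadeDuration, 1.0);
        [CATransaction begin];
        [CATransaction setDisableActions:YES];   // no implicit Core Animation involved
        self.speakerLayer.opacity = 1.0 - t;     // simple linear fade-out, stepped by hand
        [CATransaction commit];
        if (t >= 1.0) {
            self.fadeStartTime = 0;              // fade finished
        }
    }
}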
Here's an old thread that describes similar drops in frame rate. In that case, the cause of the problem was adding two or more semi-transparent sprites, but I'd guess that any time you try to composite several layers together you may be doing enough work to cut the frame rate, and animateWithDuration very likely does exactly that kind of thing.
Either use OpenGL or CoreAnimation. They are not compatible.
To test this, remove any UIView animation; the frame rate will be what you expect. Add the UIView animation back, and it will drop to 30 fps.
You said:
When I call UIView animateWithDuration: the framerate drops down to exactly half, from 60 to 30 fps for the duration of the animation. Once the animation is over, the fps rises instantly back up to 60
I don't know why you're not accepting my answer; this is exactly what happens when you combine UIView animation with CA animation that doesn't use a UIView.

Animate screen while loading textures

My RPG-like game has random battles. When the player enters a random battle, it is necessary for my game to load the textures used within that battle (animated monsters, animations, etc.). There are quite a lot of textures, and they are rather big (the battles are very graphics intensive).
This process consumes a significant amount of time, and while it is loading, the whole screen freezes.
The game's map freezes, and the wait time is significant - I personally find it annoying.
I can't afford to preload the textures because, after doing some math, I realized:
If I preload all the textures at the beginning of the game, the application will definitely crash.
If I preload the textures that are used in a specific map when the player enters the map, the application is very likely to crash as well.
I can only afford to load the textures when I need them, and dispose of them as soon as the battle ends.
I'd prefer to not use a "loading screen" image because it affects my game's design and concept. I want to avoid this approach.
If I could do some kind of animation while loading the textures, it would be great, which leads to my question: is that possible? What kind of animation, you ask? Well, how about... you remember when Final Fantasy used to distort the screen while apparently loading the textures? Something like that. But well, distorting is quite a time-consuming process as well, so maybe just a cool frame-by-frame animation or something.
While writing this, I realized that I could make small pauses between textures (there are multiple textures), and during such pauses update the screen to reflect the animation's state. However, this is unlikely to work well, because each texture is 2048x2048, so the animation would be refreshed at a rather laggy (and annoying) rate. I'd prefer to avoid this as well.
In a similar bind, I chose to:
1. Convert all my animation textures to gzipped PVR. The load time (depending on the device) is improved by a factor of 2 to 4. Any artefacts caused by the conversion to PVR are not noticeable in motion.
2. Preload the idle animations (they are almost always on, except during a skill use or when hurt). I do this during the battle scene's fade-in: I control the fade-in myself on a tick rate of 50 ms, and at every tick I fire up a preload of one of the idles (there are at most 8 of them, and each takes 20 or so ms). A sketch of this is at the end of this answer.
3. Use an 'engagement' class which computes the entire fight ahead of time. When an animation is no longer needed, I unload it. Also, during a 'hurt' animation, I prefetch the next skill animation.
Loads of fun. Best of luck with your game.
PS: Don't trust the simulator for actual response times. Get onto devices early to determine whether you really have a performance issue.
PPS: Point 1 also caused a significant size reduction for my app.
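Roughly, the fade-in tick from point 2 looks like this (a sketch; preloadTexture:, idleTextureNames, advanceFadeInOneStep and the other names stand in for my engine's own calls):

- (void)startBattleFadeIn {
    self.nextIdleIndex = 0;
    self.fadeTimer = [NSTimer scheduledTimerWithTimeInterval:0.05   // 50 ms tick
                                                      target:self
                                                    selector:@selector(fadeTick:)
                                                    userInfo:nil
                                                     repeats:YES];
}

- (void)fadeTick:(NSTimer *)timer {
    [self advanceFadeInOneStep];  // placeholder: draws the next step of the fade overlay
    if (self.nextIdleIndex < self.idleTextureNames.count) {
        // One idle preloaded per tick (roughly 20 ms each) keeps the fade smooth.
        [self preloadTexture:self.idleTextureNames[self.nextIdleIndex]];
        self.nextIdleIndex += 1;
    }
    if (self.fadeInFinished && self.nextIdleIndex >= self.idleTextureNames.count) {
        [timer invalidate];  // all idles (at most 8) are loaded by now
    }
}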
Since the battles are supposed to be random, would it be possible to preload the textures for the next battle before that battle happens? Then the battle can start whenever the loading has completed.
Game decides battle should happen soon
Generate random encounter (monsters/background/etc?)
Load textures for the encounter
Start the encounter after the textures have loaded
The battles are still random, it's just that the encounter has been determined a bit before the user is aware a battle is about to happen.
You could load lower-resolution textures first, and in a background thread (NSOperation I think) kick off the load for the bigger textures, and 'swap' them when done.
As for animation, a lot of games start by loading the small textures when the player is far away, and as the player gets closer, the higher-res textures 'fade' in.
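A rough sketch of that background load-and-swap, assuming a loadTextureNamed: helper and a sprite whose texture can be replaced on the main thread (both are placeholders, not a specific engine API):

- (void)upgradeTexturesInBackground {
    // Note: with OpenGL the actual GPU upload may still need to happen on the thread
    // that owns the GL context; only the decoding/loading is pushed to the background here.
    NSOperationQueue *textureQueue = [[NSOperationQueue alloc] init];
    [textureQueue addOperationWithBlock:^{
        // Heavy work off the main thread: load the full-resolution texture.
        id fullResTexture = [self loadTextureNamed:@"monster_2048"];  // placeholder helper
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            // Swap once loading finishes, so the map never freezes.
            self.monsterSprite.texture = fullResTexture;  // placeholder sprite/property
        }];
    }];
}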

OpenGL - animation stuttering when in full screen

I'm currently running into a problem regarding animation in OpenGL. I have between 200 and 10000 gears on the screen at a time all rotating. When the window is not in maximized view, my CPU runs at about 10-20 % consistently. No spikes, no stuttering in the animation, it runs perfectly smooth regardless of the number of gears on screen. When I maximize the window though, everything falls apart. My CPU maxes out, I begin getting weird spikes in CPU usage, the animation begins stuttering as a result, and it just looks really ugly, even when I have only 200 gears on screen.
My animation technique looks like this:
While Animating
Calculate current rotation angle based on a running timer
draw Image
call glFlush()
End While
If it helps, I'm using the Tao framework in VB.NET. I'm not performing any calculations other than the rotation angle mentioned above, plus a few glRotated and glScaled calls in the method that draws the image.
In addition, I was under the impression that in an orthographic 2-dimensional drawing that scales when the window is resized, drawing would always take the same amount of time regardless of the window size. Is this a correct assumption?
Any help is greatly appreciated =)
Edit
Note that I've seen the animation run perfectly smoothly at full screen before. Every once in a while, OpenGL will decide it's happy and run perfectly at full screen, using between 10-20% of the CPU (same as when not maximized). I haven't pinpointed what causes this, though, because it will run perfectly one time, then, without changing anything, I will run it again and encounter the choppiness. I simply want to pinpoint what causes the animation to slow down and eliminate it.
I've run a dotTrace profile on my program and it says that the swapBuffers method is using 55% of my processing time, even though I'm never explicitly calling that method. Is the method called by something else that I can eliminate, or is this simply OpenGL's "dead time" method of limiting the animation to 60 fps?
I was under the impression that in an orthographic 2-dimensional drawing that scales when the window is resized, drawing would always take the same amount of time regardless of the window size. Is this a correct assumption?
If only :)
More pixels require more memory bandwidth/shader units/etc. Look into fillrate.