iOS framerate - always 30fps with UIImage? - objective-c

I want to create a frame-by-frame animation with an array of images.
There is an animationDuration property to set; the standard value is 1.
Can I be sure that the animation always runs at 30 fps on every iOS device when I start it with startAnimating?
So I would need exactly 15 images for a 1-second animation. Is there a special calculation I can use when I have more or fewer than 15 images that have to be animated in exactly one second?
E.g. 60 / 15 * (number of images in the array)?

I don't think you can count on UIImageView's animationImages to give you 30 FPS everywhere and every time.
Indeed, this is meant for very simple and "opportunistic" animations. If you google for it, you will find reports of that method hogging a device (in specific conditions). On the other hand, if your images are small, then chances are it could work, but you get no guarantees (nor any way to enforce the FPS you need).
So I would need exactly 15 images for a 1-second animation. Is there a special calculation I can use when I have more or fewer than 15 images that have to be animated in exactly one second?
If I understand your question right, then you can try with:
duration = number_of_images / FPS; // e.g., 60 images / 30 FPS = 2 seconds
Of course, whether the device will then properly show the 60 images at 30 FPS is another story.
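As a minimal sketch of that calculation in Objective-C, assuming 60 pre-rendered frames named frame0.png, frame1.png, ... and a 30 FPS target (the names and the target are placeholders, not from the question):

NSMutableArray *frames = [NSMutableArray array];
for (NSUInteger i = 0; i < 60; i++) {
    // load the pre-rendered frames; the naming scheme here is illustrative
    [frames addObject:[UIImage imageNamed:[NSString stringWithFormat:@"frame%lu", (unsigned long)i]]];
}

UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
imageView.animationImages = frames;
// duration = number of images / desired FPS; whether the device actually sustains 30 FPS is not guaranteed
imageView.animationDuration = frames.count / 30.0;
imageView.animationRepeatCount = 1; // play once
[imageView startAnimating];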

Could someone tell me why everything vibrates when the camera in my game moves?

I'm not sure why, but for some reason, whenever the camera in my game moves, everything but the character it's focusing on moves like it should but also seems to vibrate, and you can see a little trail at the back of each object, although it's very small. Can someone tell me why this is happening? Here's the code:
// Move the camera smoothly toward its target position
x += (xTo - x) / camera_speed_width;
y += (yTo - y) / camera_speed_height;

// Keep the camera inside the room bounds
x = clamp(x, CAMERA_WIDTH/2, room_width - CAMERA_WIDTH/2);
y = clamp(y, CAMERA_HEIGHT/2, room_height - CAMERA_HEIGHT/2);

// Track the followed instance, if any
if (follow != noone)
{
    xTo = follow.x;
    yTo = follow.y;
}

// Rebuild the view and projection matrices for this frame
var _view_matrix = matrix_build_lookat(x, y, -10, x, y, 0, 0, 1, 0);
var _projection_matrix = matrix_build_projection_ortho(CAMERA_WIDTH, CAMERA_HEIGHT, -10000, 10000);
camera_set_view_mat(camera, _view_matrix);
camera_set_proj_mat(camera, _projection_matrix);
I can think of 2 options:
Your game runs at a low frame rate (30 FPS or lower); a higher FPS will render moving graphics more smoothly (60 FPS being the usual minimum).
Another possibility is that your camera is being set to a target multiple times; perhaps one part (or block of code) follows the player earlier than another. You can also let a viewport follow an object in the room editor, so check whether that is set as well.
Try these and see if they help you out.
If your camera is low-resolution, you should consider rounding/flooring your camera coordinates; otherwise the instances are (relative to the camera) at fractional coordinates, at which point you are at the mercy of the GPU as to how they will be rendered. If the instances themselves also use fractional coordinates, you are going to get wobble as the combined fractions round to one number or the other.

Long-running animations in iOS (up to 1/2 hour)

So I've got this animated pie chart working now. It can indicate e.g. progress over time (similar to UIProgressView).
For legacy reasons I am still using it with a timer that fires approx. every second and increases progress. It should now be possible to get rid of this timer and set the overall duration of a pie animation e.g. to 1/2 hour instead of letting the timer fire 30 * 60 times and starting as many short incremental animations.
So my question is this: are there any good reasons that speak against using such long (say up to 1/2 hour long) animations in iOS? In the example of the pie chart no more than approx. 360 frames would be needed even over 1/2 hour.
There is a good reason against very long animations: memory.
CoreAnimation will create a presentationLayer for every frame (see for example your other question), and (at least up to iOS 7.1) it will allocate and initialize them in the background the moment you add the animation to the layer.
The frame rate depends on the device, not on the magnitude of the change of the animated property; moreover, there doesn't seem to be a way to tweak CoreAnimation's frame rate on iOS (while on OS X, NSAnimation has a frameRate property). So if you animate progress (though it would be the same with any property) and set a duration of 30 minutes, you will end up with a lot of wasted memory.
Some numbers. I scheduled some CABasicAnimations animating the progress key path of your DZRoundProgressLayer, and added some logging in -initWithLayer:. This revealed that on the simulator, roughly 50 shadow copies (frames) are needed per second of animation.
This means 90K shadow copies are going to be created for 30 minutes: for several seconds after the beginning of the animation, CoreAnimation was still allocating the first thousands of copies. Adding some data payload to the instance variables of DZRoundProgressLayer showed memory usage rising by several MB in the first seconds (then some memory management took over the unconstrained allocations, presumably freeing the old copies).
Is it a bad idea? It's a waste of resources, memory and CPU, even if your layer occupies only a few bytes in memory, considering that the change in the pie area per frame is too small to be noticed. Setting up an NSTimer or KVO doesn't require many lines of code, so it might be worth changing the approach.
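For completeness, a minimal sketch of the timer-based alternative, assuming a layer with an animatable progress property (such as the DZRoundProgressLayer mentioned above); the property and method names here are illustrative:

static const NSTimeInterval kTotalDuration = 30.0 * 60.0; // half an hour
static const NSTimeInterval kStep = 5.0;                   // one coarse update every 5 seconds

- (void)startProgress {
    self.timer = [NSTimer scheduledTimerWithTimeInterval:kStep
                                                  target:self
                                                selector:@selector(tick:)
                                                userInfo:nil
                                                 repeats:YES];
}

- (void)tick:(NSTimer *)timer {
    // Each tick nudges the layer's progress; there is no long-lived CAAnimation,
    // so no tens of thousands of presentation-layer copies are allocated up front.
    self.progressLayer.progress = MIN(1.0, self.progressLayer.progress + kStep / kTotalDuration);
    if (self.progressLayer.progress >= 1.0) {
        [timer invalidate];
        self.timer = nil;
    }
}

With a 5-second step the whole half hour amounts to only 360 updates, which lines up with the roughly 360 frames mentioned in the question.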

How to correctly create a thumbnail picker with a UISlider for selection from a video?

Hi, it's a simple question about a hard problem (for me, so far anyway).
I don't consider myself a beginner programmer, but for the love of god I cannot figure this one out. I currently have a problem with updating a UIImageView: it is not being updated when my slider moves left and right. The application does slow down a little when I drag the slider, which tells me that work is happening and that the interface is connected correctly to its methods. What I'm doing at the moment is trying to retrieve image data for a specific frame, specified by a time, so I can then select it as a thumbnail depending on the position of the UISlider. So it's a manual thumbnail picker.
I have tried many things, both by connecting it through Interface Builder and by doing it programmatically.
This is what I have done so far:
.h file
.m file
the sliderValueChanged method for the slider, which gets called
and finally my class method that I use to retrieve a thumbnail image returning NSData, passing in a video and a specified time position.
I have read here on Stack Overflow that updating a UIImageView can cause memory leaks if it is updated regularly, since it caches images, and that you should use [UIImage imageWithData:] instead to avoid any leaks. I have implemented this in my code, yet my thumbnail view still fails to load the images based on the slider's position. (The slider's minimum and maximum values are set from 0 to the duration of the video, so the slider can only ever have a value that maps to a valid time in the video in question.)
If anyone could guide me on how to fix this problem, I'd appreciate the help; it has been beyond me for hours now. Thank you.
I realised what was happening here: the slider values I had passed into my method weren't appropriate for what I required.
In the class method there is CMTimeMake(value1, value2), and after doing some research I understood how it works.
Basically, value2 is a timescale, the number of units that make up one second; in my case it was 60, based on some code I copied. value1 is the number of those units. They are a numerator and a denominator, so 1/60 equals one 60th of a second, and 2/60 is the time position of two 60ths of a second. If I wanted 3 seconds I would need 60 * 3 = 180, so 180/60 is 180 60ths of a second, which equates to 3 seconds in total. Because my slider values were mapped to the duration of the video, whose maximum was 15 seconds, the highest time I could get with the code in my question was 15 60ths of a second, which, time-wise, is not really different from the first 60th of a second; hence I could not see the UIImageView being updated.
To correct this I multiplied my slider values, which were mapped to seconds, by 60, and of course the image updated like a charm. However, now I need to figure out how to speed up this process, since at the moment it appears to be synchronous and it lags the interface greatly.
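Since the actual code from the question isn't shown, here is a rough sketch of both fixes, assuming the thumbnails come from an AVAssetImageGenerator; self.generator and self.thumbnailView are illustrative names, not from the question:

#import <AVFoundation/AVFoundation.h>

- (void)sliderValueChanged:(UISlider *)slider {
    // The slider is mapped to seconds, so multiply by the timescale of 60.
    CMTime time = CMTimeMake((int64_t)(slider.value * 60), 60);

    // Generating asynchronously keeps the heavy decoding off the main thread,
    // so dragging the slider no longer lags the interface.
    [self.generator generateCGImagesAsynchronouslyForTimes:@[[NSValue valueWithCMTime:time]]
                                         completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime,
                                                             AVAssetImageGeneratorResult result, NSError *error) {
        if (result == AVAssetImageGeneratorSucceeded) {
            UIImage *thumb = [UIImage imageWithCGImage:image];
            dispatch_async(dispatch_get_main_queue(), ^{
                self.thumbnailView.image = thumb;
            });
        }
    }];
}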

UIView animateWithDuration: slows down animation frame rate

I am using CADisplayLink to draw frames with the EAGLView method in a game, at 60 frames per second.
When I call UIView animateWithDuration: the framerate drops down to exactly half, from 60 to 30 fps for the duration of the animation. Once the animation is over, the fps rises instantly back up to 60.
I also tried using an NSTimer-based animation method instead of CADisplayLink and still get the same result.
The same behavior happens when I press the volume buttons while the speaker icon is fading out, so that fade may be using animateWithDuration. As I would like to handle the speaker icon smoothly in my app, this means I can't just rewrite my animation code to use a method other than animateWithDuration; I need to find a solution that works with it.
I am aware that there is an option to slow down animations for debugging in the simulator; however, I am experiencing this on the device and no such option is enabled. I also tried various options for animateWithDuration, such as the linear curve and allowing user interaction, but none of them made a difference.
I am also aware I can design an engine that can still work with a frame rate that varies widely. However, this is not an ideal solution to this problem, as high fps is desirable for games.
Has someone seen this problem or solved it before?
The solution to this is to do your own animation and blit during the CADisplayLink callback.
1) For the volume issue, put a small volume icon in the corner, or show it when the user takes some predefined touch action, and give them touch controls. With that input you can use AVAudioPlayer to vary the volume and avoid the system control altogether. You might even be able to detect that the user has pressed the volume buttons and pop up a note pointing them to your control. This gets you away from any animations run by the system.
2) When you have an animation you want to do, create a series of images in code (either at that moment or beforehand), and every so many display link callbacks, blit the next image to the screen.
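A rough sketch of point 2, assuming the fade frames are pre-rendered UIImages held in a mutable array; the names (fadeFrames, speakerImageView, frameCounter) are illustrative, not from the question:

- (void)displayLinkFired:(CADisplayLink *)link {
    // ... normal EAGLView / game rendering for this frame goes here ...

    // Step a pre-rendered fade sequence by hand instead of calling animateWithDuration:.
    // Advancing only every 2nd callback shows each image for two 60 Hz frames.
    self.frameCounter++;
    if (self.fadeFrames.count > 0 && self.frameCounter % 2 == 0) {
        self.speakerImageView.image = [self.fadeFrames objectAtIndex:0];
        [self.fadeFrames removeObjectAtIndex:0];
    }
}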
Here's an old thread that describes similar drops in frame rate. In that case, the cause of the problem was adding two or more semi-transparent sprites, but I'd guess that any time you try to composite several layers together you may be doing enough work to cut the frame rate, and animateWithDuration very likely does exactly that kind of thing.
Either use OpenGL or CoreAnimation. They are not compatible.
To test this, remove any UIView animation and the frame rate will be what you expect; add the UIView animation back and it will drop to 30 fps.
You said:
When I call UIView animateWithDuration: the framerate drops down to exactly half, from 60 to 30 fps for the duration of the animation. Once the animation is over, the fps rises instantly back up to 60
I don't know why you're not accepting my answer; this is exactly what happens when you combine UIView animation with CA animation that is not using a UIView.

cameraViewTransform and CGAffineTransformMakeScale

I'm trying to implement a digital zoom in an application, and I use the following line to change the zoom factor (it can be called many times while the camera interface is displayed):
picker.cameraViewTransform = CGAffineTransformMakeScale(zoomFactor, zoomFactor);
It works perfectly the first time I display the camera interface but not after that; the transform used by the camera is not the transform I set. Any ideas?
Not sure I understand exactly what you are doing, but I can tell you that transforms are not cumulative unless you feed the existing transform back in each time.
For example, say you have a transform that rotates an object 45 degrees and you want to use it to spin the object. The first time you call it, it rotates the object 45 degrees, but it doesn't rotate it any further on subsequent calls. This is because you're just setting the same exact transform over and over. A 45-degree transform is always the same.
To make the object spin you have to apply the 45-degree rotation, then take the resulting transform from that first operation and rotate it by another 45 degrees, then take the result of that and rotate it by 45 degrees again.
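In code, the difference looks something like this (spinningView is an illustrative name):

// Setting the same absolute transform each time: the view rotates to 45 degrees once and stays there.
spinningView.transform = CGAffineTransformMakeRotation(M_PI_4);

// Feeding the existing transform back in: each call adds another 45 degrees, so the view keeps spinning.
spinningView.transform = CGAffineTransformRotate(spinningView.transform, M_PI_4);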
You need to do something like:
picker.cameraViewTransform = CGAffineTransformScale(picker.cameraViewTransform, zoomFactor, zoomFactor);
That way, your transforms will accumulate and you can zoom up and down.
This isn't so much an answer as a clue. Each time you bring the camera back to the front of the app (presumably using presentModalViewController:), a new transform is created at cameraViewTransform. The tricky thing is, it seems to take about a second or so for this process to complete, and I can find no delegate method to let us know exactly when the new transform is safely in place. In my app, I end up waiting for about one second and THEN modifying the cameraViewTransform to suit my needs. Hacky, but it's the only solution I've found so far...
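A sketch of that workaround, reusing the picker and zoomFactor names from the question; the one-second delay is purely empirical, not an API guarantee:

[self presentModalViewController:picker animated:YES];

// Give the picker time to (re)build its camera view transform, then re-apply the zoom.
double delayInSeconds = 1.0;
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
dispatch_after(popTime, dispatch_get_main_queue(), ^{
    picker.cameraViewTransform = CGAffineTransformMakeScale(zoomFactor, zoomFactor);
});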