I'm streaming images to a client application, and I want them to display as fast as possible. I can receive images at 40-50 frames per second on the client, but can only draw them at a maximum of around 15 frames per second. I'd like to get at least 30 FPS, and ideally display them as fast as I receive them, up to 40 FPS. This is what I am currently doing; can it be drawn faster?
UIImage * image = [[UIImage alloc] initWithData:imageData];
[self.mainImageView.layer setContents:(__bridge id)(image.CGImage)];
You can animate them GIF-style, as shown here, and set the animation duration as follows:
self.mainImageView.animationDuration = NumberOfImages * 1.0f / 30.0f;
If that doesn't work, you might have to go into GL rendering.
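For reference, here is a minimal sketch of that GIF-style approach, assuming the incoming frames have already been collected into an array of NSData (receivedFrames is a hypothetical name):

NSMutableArray *frames = [NSMutableArray array];
for (NSData *frameData in receivedFrames) {   // receivedFrames is hypothetical
    [frames addObject:[[UIImage alloc] initWithData:frameData]];
}
self.mainImageView.animationImages = frames;
self.mainImageView.animationDuration = frames.count * 1.0f / 30.0f;  // target 30 FPS
[self.mainImageView startAnimating];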
Currently I have an image: a battery icon.
_MeterBar = [[UIImageView alloc] initWithFrame:CGRectMake(25, 40, 50, 40)];
_MeterBar.image = [UIImage imageNamed:@"battery-empty-icon.png"];
_MeterBar.backgroundColor = [UIColor clearColor];
I am attempting to add an animation that shows the battery loading to a specific point of the battery when a value is fetched from a web server.
Fetching the value is no problem, but adding the animation is. I basically copied and pasted code from a previous answer, but I am stuck: when I add the animation, there is no image.
CABasicAnimation *scaleToValue = [CABasicAnimation animationWithKeyPath:@"transform.scale.x"];
scaleToValue.toValue = [NSNumber numberWithFloat:100.0f];
scaleToValue.fromValue = [NSNumber numberWithFloat:0];
scaleToValue.duration = 1.0f;
scaleToValue.delegate = self;
_MeterBar.layer.anchorPoint = CGPointMake(0.5, 1);
[_MeterBar.layer addAnimation:scaleToValue forKey:@"scaleRight"];
CGAffineTransform scaleTo = CGAffineTransformMakeScale(1.0f, 50.0f);
_MeterBar.transform = scaleTo;
[_PowerNowView addSubview:_MeterBar];
The battery icon is wider than it is tall, so I'm animating from 0 to a percentage on the x axis.
If anyone has any hints, tips, or sample code that does this exact animation, I would greatly appreciate your help.
Thank you.
P.S. Here's the URL of the battery icon I found: Battery Icon
Sorry for not being clear, but I would like to have a color layer separate from the battery icon. I would like to create a bar over the battery image and have the bar dynamically animate with respect to the size of the battery, not actually resize the battery image itself. So there would be two layers: one for the battery, which is mainly static in size, and the other being the colored bar, maybe green, to represent how much of the battery image is filled.
A good reference of what I would like to do is what the MINT application does, whose purpose is to record your bank statements.
The final animated battery above is very nice, but I would just like to show a bar within the battery and animate it with respect to the pulled data value.
A couple of things.
Your scale of 100 would make your layer 100 times larger than normal. If it's 300 pixels on a side, a scale of 100 would display it as 30,000 pixels on a side.
You want your toValue to be 1.0, not 100.
Also, I'm not sure whether scaling the transform multiplies the previous scale by a growing factor. If it does, then a starting scale of 0 would prevent it from ever growing, since anything times zero is still zero. Try a fromValue of 0.0001 instead of 0.0.
If you want your battery image to grow from left to right, wouldn't you want your anchor point to be (0,.5) rather than (.5, 1)?
And, when you change the anchor point, it not only changes the center for transforms, it also moves the position of the image, so you will need to adjust the position to compensate. I always have to scratch my head and think about it really hard (and usually get it wrong anyway), but I think you would want to shift the X position of your view to the right by half of its width.
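A minimal sketch of that compensation, reusing the question's _MeterBar; capturing the frame before moving the anchor and restoring it afterwards is one common way to keep the view in place:

CGRect frame = _MeterBar.frame;                       // remember the on-screen rect
_MeterBar.layer.anchorPoint = CGPointMake(0.0, 0.5);  // anchor on the left edge
_MeterBar.frame = frame;                              // restore the position after the shift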
Note that scaling your image will cause it to stretch oddly. It will start out short and fat and stretch out to normal size.
It might be better to attach a CAShapeLayer to the view as a mask layer, and animate the shape layer's path from an empty rectangle (Width = 0, height = full height of view) to a rectangle that is the full size of the view. That would cause the view to animate in a "wipe" animation where it is revealed from left to right without stretching.
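Here is a minimal sketch of that mask approach, again reusing the question's _MeterBar; the green fill layer and the one-second duration are assumptions:

CALayer *fillLayer = [CALayer layer];
fillLayer.frame = _MeterBar.bounds;                   // sits on top of the battery image
fillLayer.backgroundColor = [UIColor greenColor].CGColor;

CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = fillLayer.bounds;
CGRect fullRect = fillLayer.bounds;
CGRect emptyRect = CGRectMake(0, 0, 0, fullRect.size.height);
maskLayer.path = [UIBezierPath bezierPathWithRect:fullRect].CGPath;  // final (model) value
fillLayer.mask = maskLayer;
[_MeterBar.layer addSublayer:fillLayer];

// Animate the mask path from an empty rect to the full rect; the green bar
// is revealed left to right without any stretching.
CABasicAnimation *wipe = [CABasicAnimation animationWithKeyPath:@"path"];
wipe.fromValue = (__bridge id)[UIBezierPath bezierPathWithRect:emptyRect].CGPath;
wipe.toValue = (__bridge id)[UIBezierPath bezierPathWithRect:fullRect].CGPath;
wipe.duration = 1.0;
[maskLayer addAnimation:wipe forKey:@"wipe"];

To fill only up to a fetched percentage, make the final rect's width that fraction of the full width instead of the full bounds.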
I am using UIImageView to display thumbnails of images that can then be selected to be viewed at full size. The UIImageView has its content mode set to aspect fit.
The images are usually scaled down from around 500px x 500px to 100px x 100px. On the retina iPad they display really well while on the iPad2 they are badly aliased until the size gets closer to the native image size.
Examples:
Original Image
Retina iPad rendering at 100px x 100px
iPad 2 rendering at 100px x 100px
The difference between iPad 2 and new iPad might just be the screen resolution or could be that the GPU is better equipped to scale images. Either way, the iPad 2 rendering is very poor.
I have tried first reducing the image size by creating a new context, setting the interpolation quality to high and drawing the image into the context. In this case, the image looks fine on both iPads.
Before I continue down the image copy/resize avenue, I wanted to check there wasn't something simpler I was missing. I appreciate that UIImage isn't meant to be scaled, but I was under the impression that UIImageView was there to handle scaling; at the moment, it doesn't seem to be doing a good job of scaling down. What (if anything) am I missing?
Update: the drop shadow on the rendered/resized images is added in code. Disabling it made no difference to the quality of the scaling.
Another approach I've tried that does seem to be improving things is to set the minificationFilter:
[imageView.layer setMinificationFilter:kCAFilterTrilinear];
The quality is certainly improved and I haven't noticed a performance hit.
Applying a small minification filter bias can help out with this if you don't want to resample the image yourself:
imageView.layer.minificationFilter = kCAFilterTrilinear
imageView.layer.minificationFilterBias = 0.1
The left image has no filtering applied to it. The right image has a 0.1 filter bias.
Note that no explicit rasterization is required.
Playing around with very small values, you can usually come up with one that smooths out the scaling artifacts just enough, and it's a lot easier than resizing the bitmap yourself. You certainly lose detail as the bias increases, so values even less than 0.1 are probably sufficient, though it all depends on the size of the image view's frame that's displaying the image.
Just realize that trilinear filtering effectively enables mipmapping on the layer, which basically means it generates extra copies of the bitmap at progressively smaller scales. It's a very common technique in rendering, used to increase speed and reduce scaling aliasing. The tradeoff is that it requires more memory, though since each successive downsampled bitmap is a quarter the size of the previous one, the whole chain adds only about a third more memory.
Another potential advantage to this technique, though I have not tried it myself, is that you can animate minificationFilterBias. So if you're going to be scaling an image view down quite a lot as part of an animation, consider also animating the filter bias from 0.0 to whatever small value you've determined is appropriate for the scaled down size.
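An untested sketch of that idea (the note above hasn't tried it either); it assumes minificationFilterBias animates like other CALayer float properties:

CABasicAnimation *biasAnim = [CABasicAnimation animationWithKeyPath:@"minificationFilterBias"];
biasAnim.fromValue = @0.0f;
biasAnim.toValue = @0.1f;
biasAnim.duration = 0.25;
[imageView.layer addAnimation:biasAnim forKey:@"bias"];
imageView.layer.minificationFilterBias = 0.1f;  // keep the model value in sync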
Finally, as others have noted, if your source image is very large, this technique isn't appropriate if overused, because Core Animation will always keep around the original bitmap. It's better to resize the image then discard the source image instead of using mipmapping in most cases, but for one-offs or cases where your image views are going to be deallocated quickly enough, this is fine.
If you just put a large image in a small image view, it will look really bad. The solution is to properly resize the image. Here's an example function that does the trick:
- (UIImage *)resizeImage:(UIImage *)image newSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;

    // A scale factor of 0 means "use the device's screen scale".
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // High-quality interpolation is what avoids the aliasing seen with the default filter.
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);

    // Core Graphics uses a flipped coordinate system relative to UIKit; compensate here.
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);

    CGContextDrawImage(context, newRect, imageRef);

    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();
    return newImage;
}
This function might take some time, so you may want to cache the result, for example to a file, as sketched below.
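A minimal caching sketch around that function; cacheKey, original, and the 100x100 target size are hypothetical:

NSString *cachePath = [NSTemporaryDirectory() stringByAppendingPathComponent:cacheKey];
UIImage *thumb = [UIImage imageWithContentsOfFile:cachePath];
if (!thumb) {
    thumb = [self resizeImage:original newSize:CGSizeMake(100, 100)];
    [UIImagePNGRepresentation(thumb) writeToFile:cachePath atomically:YES];  // persist for next time
}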
If you're not afraid of wasting memory and know what you're doing for a particular case, this works beautifully.
myView.layer.shouldRasterize = YES;
myView.layer.rasterizationScale = 2;
The resulting quality is much better than setMinificationFilter.
I am using images that are 256x256 and scaling them to something like 48 px. Obviously a saner solution here would be to downscale the images to the exact destination size.
The following helped me:
imageView.layer.minificationFilter = kCAFilterTrilinear
imageView.layer.shouldRasterize = true
imageView.layer.rasterizationScale = UIScreen.mainScreen().scale
Keep an eye on performance if used in scroll lists.
I'm trying to animate some images. They work well on non-Retina iPads, but their Retina counterparts are slow, and the animations will not cycle through at the specified rate. The code I'm using is below, with the method called every 1/25th of a second. This method appears to perform better than UIView animations.
if (counter < 285) {
    NSString *file = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"Animation HD1.2 png sequence/file_HD1.2_%d", counter] ofType:@"png"];
    @autoreleasepool {
        UIImage *someImage = [UIImage imageWithContentsOfFile:file];
        falling.image = someImage;
    }
    counter++;
} else {
    NSLog(@"Timer invalidated");
    [timer invalidate];
    timer = nil;
    counter = 1;
}
}
I realise there are a lot of images, but the performance is the same for animations with fewer frames. Like I said, the non-Retina animations work well. Each image above is about 90KB. Am I doing something wrong, or is this simply a limitation of the iPad? To be honest, I find it hard to believe it couldn't handle something like this when it can handle the likes of complex 3D games, so I imagine I'm doing something wrong. Any help would be appreciated.
EDIT 1:
From the answers below, I have edited my code but to no avail. Executing the code below results in the device crashing.
in viewDidLoad
NSString *fileName;
myArray = [[NSMutableArray alloc] init];
for (int i = 1; i < 285; i++) {
    fileName = [NSString stringWithFormat:@"Animation HD1.2 png sequence/HD1.2_%d.png", i];
    [myArray addObject:[UIImage imageNamed:fileName]];
    NSLog(@"Loaded image: %d", i);
}
falling.userInteractionEnabled = NO;
falling.animationImages = myArray;
falling.animationDuration = 11.3;
falling.animationRepeatCount = 1;
falling.contentMode = UIViewContentModeCenter;
the animation method
-(void) triggerAnimation {
[falling startAnimating];
}
First of all, animation performance on the Retina iPad is notoriously choppy. That said, there are a few things you can do to make sure you're getting the best performance for your animation (in no particular order).
Preloading the images - As some others have mentioned, your animation speed suffers when you have to wait for each image to be read before you can draw it. If you use UIImageView's animation properties, the preloading is taken care of automatically.
Using the right image type - Despite the advantage in file size, using JPEGs instead of PNGs will slow your animation down significantly. PNGs are less compressed and are easier for the system to decompress. Also, Apple has significantly optimized the iOS system for reading and drawing PNG images.
Reducing Blending - If at all possible, try to remove any transparency from your animation images. Make sure there is no alpha channel in your images, even if the image seems completely opaque. You can verify this by opening the image in Preview and opening the inspector. By reducing or removing these transparent pixels, you eliminate extra rendering passes the system has to do when displaying the image. This can make a significant difference.
Using a GPU-backed animation - Your current method of using a timer to animate the image is not recommended for optimal performance. By not using UIView animation or CAAnimation, you are forcing the CPU to do most of the animation work. Many of the animation techniques in Core Animation and UIView animation are optimized and backed by OpenGL, which uses the GPU to process images and animate. Graphics processing is what the GPU is made for, and by utilizing it you will maximize your animation performance.
Avoiding pixel misalignment - Make sure your animation images are at the right size on screen when displaying them. If you are stretching your image while animating, or using an incorrect frame, the system has to do more work to process each frame. Also, using whole numbers for any frame or point values keeps the system from anti-aliasing when it tries to position an image on a fractional pixel.
Be wary of shadows and rounded corners - CALayer has lots of easy ways to create shadows and rounded corners, but if you are moving these layers in animations, the system will often redraw the layer in each frame of the animation. This is the case when specifying a shadow using the shadowOffset property (using UILabel's shadow properties will not render every frame; a precomputed shadowPath also avoids the per-frame redraw, as sketched below). Also, borders and using masksToBounds and clipsToBounds are more performance-intensive than just cropping the actual asset in an image editor.
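A minimal sketch of the shadowPath idea; the view name and the particular offsets are assumptions:

view.layer.shadowOffset = CGSizeMake(0, 2);
view.layer.shadowOpacity = 0.5f;
// With an explicit path, Core Animation caches the shadow instead of
// recomputing it from the layer's alpha channel on every frame.
view.layer.shadowPath = [UIBezierPath bezierPathWithRect:view.bounds].CGPath;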
There are a few things to notice here:
If "falling" is UIImageView, make sure it's content mode says something like "center" and not some sort of scaling (make sure your images fit it, of course).
Other than that, as @FogleBird said, test whether your device has enough memory to preload all the images; if not, try to at least preload the data by creating NSData objects from the image files.
Your use of @autoreleasepool is not very useful; you end up creating an autorelease scope that does a single thing - remove a reference to an already-retained object - so there is no memory gain, just a performance loss. If anything, you should have wrapped the file-name formatting code, and considering this method is called by an NSTimer, it is already wrapped in an autorelease pool.
Just wanted to point out: when you are creating the NSString with the image name, what is the "Animation HD1.2 png sequence/HD1.2_%d.png"? It looks like you are trying to put a path there. Try just the image name, e.g. "HD1.2_%d.png".
I have a 64px by 64px redSquare.png file at a 326ppi resolution. I'm drawing it at the top left corner of my View Controller's window as follows:
myImage = [UIImage imageNamed:@"redSquare.png"];
myImageView = [[UIImageView alloc] initWithImage:myImage];
[self.view addSubview:myImageView];
Given that the iPhone 4S has a screen resolution of 960x640 (326ppi), there should be enough room for 9 more squares to fit next to the first one. However, there's only room for 4 more, i.e. the square is drawn larger than it should be given my measurements.
// even tried resizing UIImageView in case it was
// resizing my image to a different size, by adding
// this next line, but no success there either :
myImageView.frame = CGRectMake(0, 0, 64, 64);
I believe it has to do with the way the device "translates" my pixels. I read about the distinction between points and pixels in Apple's documentation, but it doesn't mention how to work around this problem. I know I'm measuring in pixels. Should I be measuring in points? And how would I do that? How exactly should I resize my image so that 9 more same-sized squares can fit next to it on the same horizontal line?
Thank you
To display an image at full resolution on a Retina display, it needs to have @2x appended to the end of its name. In practice, this means you should save the image you're currently using as redSquare@2x.png and a 32x32-pixel version of it as redSquare.png.
Once you have done this, there is no need to change your code. The appropriate image will be displayed depending on the device's capabilities. This will allow your app to render correctly on both Retina and non-Retina devices.
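As a sketch of the arithmetic: frames are specified in points, and on a 2x Retina screen one point is two pixels, so a 32-point square is 64 pixels wide, and ten of them fit across the iPhone's 320-point screen width:

myImageView.frame = CGRectMake(0, 0, 32, 32);  // 32 points = 64 pixels at @2x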
Possible Duplicate:
How to efficiently show many Images? (iPhone programming)
I have hundreds of images, which are frame images of one animation (24 images per second). Each image size is 1024x690.
My problem is, I need to make a smooth animation that iterates through each image frame in a UIImageView.
I know I can use animationImages of UIImageView, but it crashes because of memory problems.
Also, I can use imageView.image = [UIImage imageNamed:@""], which would cache each image so that the next repeat of the animation is smooth. But caching that many images crashes the app.
Now I use imageView.image = [UIImage imageWithContentsOfFile:@""], which does not crash the app, but doesn't make the animation smooth.
Maybe there is a better way to make a good animation of frame images?
Maybe I need to make some preparations to somehow achieve a better result. I need your advice. Thank you!
You could try caching, say, 10 images at a time in memory (you may have to play around with the correct limit; I doubt it's 10). Every time you change the image of the image view, you could do something like this:
// remove the image that is currently displayed from the cache
[images removeObjectAtIndex:0];
// set the image to the next image in the cache
imageView.image = [images objectAtIndex:0];
// add a new image to the end of the FIFO
[images addObject:[UIImage imageNamed:@"10thImage.png"]];
You can find your answer here: How to efficiently show many Images? (iPhone programming)
To summarize what is said in that link, you get better performance showing many images if you use low-level APIs like Core Animation and OpenGL, as opposed to UIKit.
You could create a buffer of several images using an array. This buffer array of images can be loaded/populated using imageWithContentsOfFile from a background thread (a concurrent GCD queue with dispatch_async, for example), as sketched below.
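A minimal sketch of that background loading; the queue label, nextName, and imageBuffer are assumptions:

dispatch_queue_t loadQueue = dispatch_queue_create("com.example.frameloader", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(loadQueue, ^{
    NSString *path = [[NSBundle mainBundle] pathForResource:nextName ofType:@"png"];
    UIImage *frame = [UIImage imageWithContentsOfFile:path];  // no global cache, unlike imageNamed:
    dispatch_async(dispatch_get_main_queue(), ^{
        [imageBuffer addObject:frame];  // hand the decoded frame back to the FIFO on the main thread
    });
});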