This does exactly what it needs to, except that it takes about 400 milliseconds, which is 350 milliseconds too much:
- (void)updateCompositeImage { // blends together the background and the sprites
    UIGraphicsBeginImageContext(CGSizeMake(480, 320));
    [bgImageView.image drawInRect:CGRectMake(0, 0, 480, 320)];
    for (int i = 0; i < numSprites; i++) {
        [spriteImage[spriteType[i]] drawInRect:spriteRect[i]
                                     blendMode:kCGBlendModeScreen
                                         alpha:spriteAlpha[i]];
    }
    compositeImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
The images are fairly small, and there are only three of them (the for loop only iterates twice). Is there any way of doing this faster, while still being able to use kCGBlendModeScreen and alpha?
You can:
get the UIImages' CGImages
draw them into a CGBitmapContext
produce an image from that
Using Core Graphics directly may itself be faster, and the other bonus is that you can perform the rendering on a background thread. Also consider how you can optimize that loop, and profile with Instruments.
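Here is a minimal sketch of that approach, not the original code: the ivar names are reused from the question, while the dispatch plumbing and the coordinate handling are assumptions.
- (void)updateCompositeImageAsync {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Raw bitmap context; unlike UIGraphicsBeginImageContext, this is safe off the main thread.
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, 480, 320, 8, 0, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);

        // Core Graphics uses a bottom-left origin; the full-frame background needs no conversion,
        // but each sprite rect (assumed to be in UIKit top-left coordinates) gets its y flipped.
        CGContextDrawImage(ctx, CGRectMake(0, 0, 480, 320), bgImageView.image.CGImage);
        for (int i = 0; i < numSprites; i++) {
            CGRect r = spriteRect[i];
            r.origin.y = 320 - r.origin.y - r.size.height;
            CGContextSetBlendMode(ctx, kCGBlendModeScreen);
            CGContextSetAlpha(ctx, spriteAlpha[i]);
            CGContextDrawImage(ctx, r, spriteImage[spriteType[i]].CGImage);
        }

        CGImageRef composited = CGBitmapContextCreateImage(ctx);
        CGContextRelease(ctx);
        UIImage *result = [UIImage imageWithCGImage:composited];
        CGImageRelease(composited);

        dispatch_async(dispatch_get_main_queue(), ^{
            compositeImageView.image = result; // UIKit work stays on the main thread
        });
    });
}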
Other considerations:
Can you reduce the interpolation quality?
Are the source images resized in any way? (It can help to pre-resize them.)
drawInRect: is slow, period; even with small images it's grossly inefficient.
If you are doing a lot of repeat drawing, have a look at CGLayer, which is designed to facilitate repeated rendering of the same bits (a sketch follows).
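As a hedged illustration of the CGLayer idea (ctx, spriteUIImage, and the 64x64 size are placeholders, not from the question): draw the sprite once into a layer, then stamp the layer repeatedly.
CGLayerRef spriteLayer = CGLayerCreateWithContext(ctx, CGSizeMake(64, 64), NULL);
CGContextRef layerCtx = CGLayerGetContext(spriteLayer);
CGContextDrawImage(layerCtx, CGRectMake(0, 0, 64, 64), spriteUIImage.CGImage);

for (int i = 0; i < numSprites; i++) {
    CGContextSetBlendMode(ctx, kCGBlendModeScreen);
    CGContextSetAlpha(ctx, spriteAlpha[i]);
    CGContextDrawLayerInRect(ctx, spriteRect[i], spriteLayer); // reuses the cached rendering
}
CGLayerRelease(spriteLayer);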
Related
I wrote the following code to apply a Sepia filter to an image:
- (void)applySepiaFilter {
    // Archive the current image so it can be restored later
    NSData *buffer = [NSKeyedArchiver archivedDataWithRootObject:self.mainImage.image];
    [_images push:[NSKeyedUnarchiver unarchiveObjectWithData:buffer]];

    UIImage *u = self.mainImage.image;
    CIImage *image = [[CIImage alloc] initWithCGImage:u.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
                                  keysAndValues:kCIInputImageKey, image,
                                                @"inputIntensity", @0.8, nil];
    CIImage *outputImage = [filter outputImage];
    self.mainImage.image = [self imageFromCIImage:outputImage];
}
- (UIImage *)imageFromCIImage:(CIImage *)ciImage {
    CIContext *ciContext = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:[ciImage extent]];
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}
When I run this code it seems to lag for 1-2 seconds. I heard that Core Image is faster than Core Graphics, but I am unimpressed with the rendering time. I was wondering if this would be faster to process in Core Graphics or even OpenCV (which is being used elsewhere in the project)? If not, is there any way I can optimize this code to run faster?
I can almost guarantee it will be slower in Core Graphics than using Core Image, depending on the size of the image. If the image is small, Core Graphics may be fine, but if you are doing a lot of processing, it will be much slower than rendering using the GPU.
Core Image is very fast; however, you have to be very conscious of what is going on. Most of the performance hit with Core Image comes from setting up the context and from copying images to and from Core Image. In addition to just copying bytes, Core Image may be converting between image formats as well.
Your code is doing the following every time:
Creating a CIContext. (slow)
Taking bytes from a CGImage and creating a CIImage.
Copying image data to GPU (slow).
Processing Sepia filter (fast).
Copying result image back to CGImage. (slow)
This is not a recipe for peak performance. Bytes from CGImage will typically live in CPU memory, but Core Image wants to use the GPU for its processing.
An excellent set of performance considerations is provided in the Getting the Best Performance section of the Core Image documentation:
Don’t create a CIContext object every time you render.
Contexts store a lot of state information; it’s more efficient to reuse them.
Evaluate whether your app needs color management. Don't use it unless you need it. See Does Your App Need Color Management?.
Avoid Core Animation animations while rendering CIImage objects with a GPU context.
If you need to use both simultaneously, you can set up both to use the CPU.
Make sure images don’t exceed CPU and GPU limits. (iOS)
Use smaller images when possible.
Performance scales with the number of output pixels. You can have Core Image render into a smaller view, texture, or framebuffer. Allow Core Animation to upscale to display size.
Use Core Graphics or Image I/O functions to crop or downsample, such as CGImageCreateWithImageInRect or CGImageSourceCreateThumbnailAtIndex (a downsampling sketch follows this list).
The UIImageView class works best with static images.
If your app needs to get the best performance, use lower-level APIs.
Avoid unnecessary texture transfers between the CPU and GPU.
Render to a rectangle that is the same size as the source image before applying a contents scale factor.
Consider using simpler filters that can produce results similar to algorithmic filters.
For example, CIColorCube can produce output similar to CISepiaTone, and do so more efficiently.
Take advantage of the support for YUV images in iOS 6.0 and later.
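To illustrate the crop/downsample point above, here is a hedged Image I/O example; sourceURL and maxDimension are placeholders, not from the question.
#import <ImageIO/ImageIO.h>

CGImageSourceRef src = CGImageSourceCreateWithURL((__bridge CFURLRef)sourceURL, NULL);
NSDictionary *options = @{
    (id)kCGImageSourceCreateThumbnailFromImageAlways: @YES,
    (id)kCGImageSourceCreateThumbnailWithTransform: @YES,
    (id)kCGImageSourceThumbnailMaxPixelSize: @(maxDimension)
};
// Produces a downsampled CGImage without keeping a full-resolution UIImage around.
CGImageRef thumb = CGImageSourceCreateThumbnailAtIndex(src, 0, (__bridge CFDictionaryRef)options);
UIImage *smallImage = [UIImage imageWithCGImage:thumb];
CGImageRelease(thumb);
CFRelease(src);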
If you demand real-time processing performance, you will want to use an OpenGL view that Core Image can render its output to, and read your image bytes directly into the GPU instead of pulling them from a CGImage. Using a GLKView and overriding drawRect: is a fairly simple way to get a view that Core Image can render directly to. Keeping data on the GPU is the best way to get peak performance out of Core Image.
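A rough sketch of that GLKView route; the setup and names here are assumptions rather than the answer's own code.
#import <GLKit/GLKit.h>

EAGLContext *glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
GLKView *glkView = [[GLKView alloc] initWithFrame:self.view.bounds context:glContext];
glkView.delegate = self; // assumes self conforms to GLKViewDelegate

// Create the CIContext once, backed by the same GL context, so filter output stays on the GPU.
CIContext *gpuContext = [CIContext contextWithEAGLContext:glContext];

// Then, inside -glkView:drawInRect: (or an overridden -drawRect:), render the filter output:
// [gpuContext drawImage:filteredImage
//                inRect:CGRectMake(0, 0, view.drawableWidth, view.drawableHeight)
//              fromRect:filteredImage.extent];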
Try to reuse as much as possible. Keep a CIContext around for subsequent renders (like the doc says). If you end up using an OpenGL view, these are also things you may want to re-use as much as possible.
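A hedged sketch of that reuse, applied to the sepia code from the question: the ciContext and sepiaFilter properties are assumptions, created once and kept around so each call only pays for the filter run and the final render.
- (UIImage *)sepiaImageFromImage:(UIImage *)source {
    if (!self.ciContext) {
        self.ciContext = [CIContext contextWithOptions:nil]; // reused across renders
    }
    if (!self.sepiaFilter) {
        self.sepiaFilter = [CIFilter filterWithName:@"CISepiaTone"];
        [self.sepiaFilter setValue:@0.8 forKey:@"inputIntensity"];
    }

    CIImage *input = [[CIImage alloc] initWithCGImage:source.CGImage];
    [self.sepiaFilter setValue:input forKey:kCIInputImageKey];

    CIImage *output = self.sepiaFilter.outputImage;
    CGImageRef cgImage = [self.ciContext createCGImage:output fromRect:output.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}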
You may also be able to get better performance by using software rendering, which avoids the copy to/from the GPU: [CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer: @YES}]. However, this has performance limitations in the actual render, since a CPU render is usually slower than a GPU render.
So, you can choose your level of difficulty to get maximum performance. The best performance can be more challenging, but a few tweaks may get you to "acceptable" performance for your use case.
I have a page-flip type application that needs to convert the contents of a fullscreen UIWebView to a UIImage very quickly (e.g. 200 ms tops). Sacrificing quality for speed is OK. I'm having a real tough time getting anywhere near this on an iPad 3 with a Retina display.
To create a UIImage I am using the common renderInContext: method:
UIGraphicsBeginImageContextWithOptions(frame.size, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
[view.layer renderInContext:context];
UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
On an iPad 3 Retina display I typically see 400-500 ms. Interestingly, the second time the method is run it is much quicker (around 100 ms), which doesn't help but suggests some sort of caching is happening.
I have tried the following things:
I have tried playing with the scale and opaque parameters of UIGraphicsBeginImageContextWithOptions in every possible combination. Setting the scale to, say, 0.5 or 1.0 actually makes it even slower.
Adding CGContextSetInterpolationQuality(context, kCGInterpolationNone). No change to performance.
Setting webview.layer.shouldRasterize = YES. No change to performance.
Shrinking the UIWebView to half size and then scaling back up to fullscreen. This does help, but it's still around 300 ms.
Any other ideas? Running this on an iPad 1 or 2 is OK, but the iPad 3 Retina display just kills the performance.
I am using UIImageView to display thumbnails of images that can then be selected to be viewed at full size. The UIImageView has its content mode set to aspect fit.
The images are usually scaled down from around 500px x 500px to 100px x 100px. On the retina iPad they display really well while on the iPad2 they are badly aliased until the size gets closer to the native image size.
Examples:
Original Image
Retina iPad rendering at 100px x 100px
iPad 2 rendering at 100px x 100px
The difference between iPad 2 and new iPad might just be the screen resolution or could be that the GPU is better equipped to scale images. Either way, the iPad 2 rendering is very poor.
I have tried first reducing the image size by creating a new context, setting the interpolation quality to high and drawing the image into the context. In this case, the image looks fine on both iPads.
Before I continue down the image copy/resize avenue, I wanted to check there wasn't something simpler I was missing. I appreciate that UIImage isn't there to be scaled, but I was under the impression that UIImageView was there to handle scaling; at the moment it doesn't seem to be doing a good job of scaling down. What (if anything) am I missing?
Update: Note: The drop shadow on the rendered / resized images is added in code. Disabling this made no difference to the quality of the scaling.
Another approach I've tried that does seem to be improving things is to set the minificationFilter:
[imageView.layer setMinificationFilter:kCAFilterTrilinear];
The quality is certainly improved and I haven't noticed a performance hit.
Applying a small minification filter bias can help out with this if you don't want to resample the image yourself:
imageView.layer.minificationFilter = kCAFilterTrilinear
imageView.layer.minificationFilterBias = 0.1
The left image has no filtering applied to it. The right image has a 0.1 filter bias.
Note that no explicit rasterization is required.
Playing around with very small values, you can usually come up with one that smooths out the scaling artifacts just enough, and it's a lot easier than resizing the bitmap yourself. Certainly, you lose detail as the bias increases, so values even less than 0.1 are probably sufficient, though it all depends on the size of the image view's frame that's displaying the image.
Just realize that trilinear filtering effectively enables mipmapping on the layer, which basically means it generates extra copies of the bitmap at progressively smaller scales. It's a very common technique used in rendering to increase render speed and also reduce scaling aliasing. The tradeoff is that it requires more memory, though the memory usage for successive downsampled bitmaps reduces exponentially.
Another potential advantage to this technique, though I have not tried it myself, is that you can animate minificationFilterBias. So if you're going to be scaling an image view down quite a lot as part of an animation, consider also animating the filter bias from 0.0 to whatever small value you've determined is appropriate for the scaled down size.
Finally, as others have noted, if your source image is very large, this technique isn't appropriate if overused, because Core Animation will always keep the original bitmap around. In most cases it's better to resize the image and then discard the source image instead of using mipmapping, but for one-offs, or cases where your image views are going to be deallocated quickly enough, this is fine.
If you just put the large image in a small image view, it will look really bad.
The solution is to properly resize the image. Here's an example function that does the trick:
- (UIImage *)resizeImage:(UIImage *)image newSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);

    // Flip the coordinate system so the CGImage is not drawn upside down
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);

    CGContextDrawImage(context, newRect, imageRef);

    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();
    return newImage;
}
This function might take some time, so you might want to save the result to a cache file; a minimal example follows.
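For instance, a hedged example of caching the resized result to disk; the cache path, originalImage, imageView, and the 100x100 size are placeholders.
NSString *cachePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"thumb_100.png"];
UIImage *thumbnail = [UIImage imageWithContentsOfFile:cachePath];
if (!thumbnail) {
    // Only pay the resize cost once; later calls read the cached file.
    thumbnail = [self resizeImage:originalImage newSize:CGSizeMake(100, 100)];
    [UIImagePNGRepresentation(thumbnail) writeToFile:cachePath atomically:YES];
}
imageView.image = thumbnail;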
If you're not afraid of wasting memory and know what you're doing for a particular case, this works beautifully.
myView.layer.shouldRasterize = YES;
myView.layer.rasterizationScale = 2;
The resulting quality is much better than setMinificationFilter.
I am using images that are 256x256 and scaling them to something like 48 px. Obviously a saner solution here would be to downscale the images to the exact destination size.
The following helped me:
imageView.layer.minificationFilter = kCAFilterTrilinear
imageView.layer.shouldRasterize = true
imageView.layer.rasterizationScale = UIScreen.mainScreen().scale
Keep an eye on performance if used in scroll lists.
I'm trying to animate some images. The images work well on non-Retina iPads, but their Retina counterparts are slow and the animations will not cycle through at the specified rate. The code I'm using is below, with the method called every 1/25th of a second. This method appears to perform better than UIView animations.
if (counter < 285) {
    NSString *file = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"Animation HD1.2 png sequence/file_HD1.2_%d", counter] ofType:@"png"];
    @autoreleasepool {
        UIImage *someImage = [UIImage imageWithContentsOfFile:file];
        falling.image = someImage;
    }
    counter++;
} else {
    NSLog(@"Timer invalidated");
    [timer invalidate];
    timer = nil;
    counter = 1;
}
I realise there are a lot of images, but the performance is the same for animations with fewer frames. Like I said, the non-Retina animations work well. Each image above is about 90 KB. Am I doing something wrong, or is this simply a limitation of the iPad? To be honest, I find it hard to believe that it couldn't handle something like this when it can handle the likes of complex 3D games, so I imagine I'm doing something wrong. Any help would be appreciated.
EDIT 1:
From the answers below, I have edited my code but to no avail. Executing the code below results in the device crashing.
in viewDidLoad
NSString *fileName;
myArray = [[NSMutableArray alloc] init];
for (int i = 1; i < 285; i++) {
    fileName = [NSString stringWithFormat:@"Animation HD1.2 png sequence/HD1.2_%d.png", i];
    [myArray addObject:[UIImage imageNamed:fileName]];
    NSLog(@"Loaded image: %d", i);
}
falling.userInteractionEnabled = NO;
falling.animationImages = myArray;
falling.animationDuration = 11.3;
falling.animationRepeatCount = 1;
falling.contentMode = UIViewContentModeCenter;
the animation method
-(void) triggerAnimation {
[falling startAnimating];
}
First of all, animation performance on the Retina iPad is notoriously choppy. That said, there are a few things you could do to make sure you're getting the best performance for your animation (in no particular order).
Preloading the images - As some others have mentioned, your animation speed suffers when you have to wait for the reading of your image before you draw it. If you use UIImageView's animation properties this preloading will be taken care of automatically.
Using the right image type - Despite the advantage in file size, using JPEGs instead of PNGs will slow your animation down significantly. PNGs are less compressed and are easier for the system to decompress. Also, Apple has significantly optimized the iOS system for reading and drawing PNG images.
Reducing Blending - If at all possible, try and remove any transparency from your animation images. Make sure there is no alpha channel in your images even if it seems completely opaque. You can verify by opening the image in Preview and opening the inspector. By reducing or removing these transparent pixels, you eliminate extra rendering passes the system has to do when displaying the image. This can make a significant difference.
Using a GPU-backed animation - Your current method of using a timer to animate the image is not recommended for optimal performance. By not using UIView animation or CAAnimation you are forcing the CPU to do most of the animation work. Many of the animation techniques of Core Animation and UIView animation are optimized and backed by OpenGL, which uses the GPU to process images and animate them. Graphics processing is what the GPU is made for, and by utilizing it you will maximize your animation performance (see the sketch after this list).
Avoiding pixel misalignment - Make sure your animation images are at the right size on screen when displaying them. If you are stretching your image while animating, or using an incorrect frame, the system has to do more work to process each frame. Also, using whole numbers for any frame or point values will keep the system from anti-aliasing when it tries to position an image on a fractional pixel.
Be wary of shadows and rounded corners - CALayer has lots of easy ways to create shadows and rounded corners, but if you are moving these layers in animations, oftentimes the system will redraw the layer in each frame of the animation. This is the case when specifying a shadow using the shadowOffset property (using UILabel's shadow properties will not render every frame). Also, borders and the masksToBounds and clipsToBounds properties will be more performance intensive than simply using an image editor to crop the actual asset.
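One hedged sketch of the GPU-backed route: it reuses the "falling" view and timings from the question, but the plain file names (per the comment below about dropping the folder prefix) and the memory headroom needed to hold 284 decoded frames are assumptions.
#import <QuartzCore/QuartzCore.h>

NSMutableArray *frames = [NSMutableArray array];
for (int i = 1; i < 285; i++) {
    UIImage *frame = [UIImage imageNamed:[NSString stringWithFormat:@"HD1.2_%d.png", i]];
    if (frame) {
        [frames addObject:(__bridge id)frame.CGImage];
    }
}

// Core Animation drives the frame timing on its own; no NSTimer needed.
CAKeyframeAnimation *anim = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
anim.values = frames;
anim.duration = 11.3;
anim.calculationMode = kCAAnimationDiscrete; // show each frame as-is, no cross-fade
[falling.layer addAnimation:anim forKey:@"frameAnimation"];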
There are a few things to notice here:
If "falling" is UIImageView, make sure it's content mode says something like "center" and not some sort of scaling (make sure your images fit it, of course).
Other than that, as @FogleBird said, test whether your device has enough memory to preload all the images; if not, try to at least preload the data by creating NSData objects from the image files.
Your use of @autoreleasepool is not very useful: you end up creating an autorelease scope that does a single thing, removing a reference to an already retained object; there is no memory gain, only a performance loss.
If anything, you should have wrapped the file-name formatting code, and since this method is called by an NSTimer, it is already wrapped in an autorelease pool anyway.
Just wanted to point out: when you are creating the NSString with the image name, what is the "Animation HD1.2 png sequence/HD1.2_%d.png"?
It looks like you are trying to put a path there; try just the image name, e.g. "HD1.2_%d.png".
What I'm doing is merging 2 images into a single image.
Here is the code:
UIGraphicsBeginImageContext(CGSizeMake(1024, 768));
[image1 drawInRect:CGRectMake(0, 0, 512, 768)];
[image2 drawInRect:CGRectMake(512, 0, 512, 768)];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But drawInRect: seems too slow.
Is there a faster way to merge images?
Option 0 - Make sure you draw only when you need to draw, and only what you need to draw.
Option 1
If you know the destination size (perhaps self.frame.size?), you can create one flattened image from the two source images at the destination size and avoid interpolation at draw time (sketched after this list). That could:
Reduce the memory you need
Reduce the number of images you must draw
Avoid interpolation (High CPU if you want it to look good)
Look better - the composite can use High Quality interpolation.
Of course, this only makes sense when the composite changes less frequently than it must be drawn.
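A hedged sketch of Option 1, reusing the image1/image2 names from the question; destinationSize is an assumption (e.g. self.frame.size).
CGSize destinationSize = self.frame.size;
UIGraphicsBeginImageContextWithOptions(destinationSize, YES, 0);
// Flatten once, at the final on-screen size, then reuse the result for subsequent draws.
[image1 drawInRect:CGRectMake(0, 0, destinationSize.width / 2, destinationSize.height)];
[image2 drawInRect:CGRectMake(destinationSize.width / 2, 0,
                              destinationSize.width / 2, destinationSize.height)];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();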
Option 2
Even if you want two images and you know their sizes will not change, just resize them to the size they must be drawn at (well, monitor your memory usage if you are enlarging them).
Option 3
If that's not an option, you could alter the CGContext's state and reduce the interpolation quality. If you're used to similar CALayer transformations, you would probably be satisfied with low-quality or no interpolation.
One thing you could do is not implement -drawRect: at all, but instead have two CALayers with your two images as their contents, then put one in front of the other. Not sure that would be faster, but I think it's likely (since CA could then handle the drawing asynchronously, on the GPU).
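A hedged sketch of that layer-based idea, reusing the rects from the question; the superlayer is an assumption.
CALayer *leftLayer = [CALayer layer];
leftLayer.frame = CGRectMake(0, 0, 512, 768);
leftLayer.contents = (__bridge id)image1.CGImage; // no drawRect: needed

CALayer *rightLayer = [CALayer layer];
rightLayer.frame = CGRectMake(512, 0, 512, 768);
rightLayer.contents = (__bridge id)image2.CGImage;

[self.view.layer addSublayer:leftLayer];
[self.view.layer addSublayer:rightLayer];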