How to get a high-performance UIWebView -> UIImage conversion on iPad retina? - objective-c

I have a page-flip-type application that needs to convert the contents of a fullscreen UIWebView to a UIImage very quickly (e.g. 200 ms tops). Sacrificing quality for speed is OK. I'm having a really tough time getting anywhere near this on an iPad 3 retina.
To create a UIImage I am using the common renderInContext: approach:
UIGraphicsBeginImageContextWithOptions(frame.size, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
[view.layer renderInContext:context];
UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
On an iPad 3 retina display I typically see 400-500 ms. Interestingly, the second time the method runs it is much quicker (around 100 ms), which doesn't help but suggests some sort of caching is happening.
I have tried the following things:
Playing with the scale and opaque parameters of UIGraphicsBeginImageContextWithOptions in every possible combination. Setting the scale to, say, 0.5 or 1.0 actually makes it even slower.
Adding CGContextSetInterpolationQuality(context, kCGInterpolationNone). No change to performance.
Setting webView.layer.shouldRasterize = YES. No change to performance.
Shrinking the UIWebView to half size and then scaling it back up to fullscreen (see the sketch after this list). This does help, but it is still around 300 ms.
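A simplified sketch of what I mean by that half-size variant (trimmed down; assumes the quality loss from the upscale is acceptable):

CGFloat renderScale = 0.5; // trade resolution for speed
CGSize smallSize = CGSizeMake(frame.size.width * renderScale, frame.size.height * renderScale);
UIGraphicsBeginImageContextWithOptions(smallSize, YES, 1.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextScaleCTM(ctx, renderScale, renderScale); // shrink the layer into the smaller context
CGContextSetInterpolationQuality(ctx, kCGInterpolationNone);
[view.layer renderInContext:ctx];
UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Displaying smallImage in a fullscreen UIImageView lets the GPU do the upscaling.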
Any other ideas? Running this on an iPad 1 or 2 is OK, but the iPad 3's retina display just kills the performance.

Related

How to save UIView with subviews without losing quality?

I am trying to save the view with its subviews, but the saved image is a little bit blurry (especially the labels' text).
I tried all the solutions given on Stack Overflow, with no luck.
Can anyone help me with this?
I am using the code below:
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The text comes out blurred, and the overall picture quality is low.
You could try a higher-resolution image. Scaling a high-resolution image down is fine, but scaling a low-resolution image up to a larger size will generally blur the image contents, as it stretches everything.
The preferred approach is [UIView snapshotViewAfterScreenUpdates:]. You should only use drawViewHierarchyInRect:afterScreenUpdates: if you plan to apply additional effects.
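A minimal sketch of the snapshot route (assuming you only need the result on screen rather than as bitmap data):

// snapshotViewAfterScreenUpdates: is fast and renders at the screen's native scale,
// so text stays sharp; note that it returns a UIView, not a UIImage.
UIView *snapshot = [view snapshotViewAfterScreenUpdates:YES];
snapshot.frame = view.frame;
[view.superview addSubview:snapshot];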
That said, there are several likely causes, depending on how you're manipulating or saving the image. For example, saving text in JPEG format will cause blurriness. Rotating or scaling the image without great care can make the text blurry. Drawing the image incorrectly (for instance, failing to pixel-align it) can make the text blurry. If your process involves multiple steps, simplify the problem and validate the quality at each step. To discuss it further on Stack Overflow, you need to provide details on how you're manipulating and displaying the image, not just how you generate it.
Text is extremely susceptible to artifacts. If you must take pictures of it (something you generally should avoid if at all possible), you should make sure to manipulate it as little as possible. It is always better to manipulate the text before it's drawn rather than after.
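For instance, if the snapshot eventually gets written to disk, a lossless format keeps the text crisp. A sketch, where newImage is the image from the question's code and outputPath is a hypothetical destination:

// PNG is lossless, so rendered text keeps its sharp edges; JPEG compression will smear them.
NSData *pngData = UIImagePNGRepresentation(newImage);
[pngData writeToFile:outputPath atomically:YES];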

iOS UIImageView scaling image down produces aliased image on iPad 2

I am using UIImageView to display thumbnails of images that can then be selected to be viewed at full size. The UIImageView has its content mode set to aspect fit.
The images are usually scaled down from around 500px x 500px to 100px x 100px. On the retina iPad they display really well, while on the iPad 2 they are badly aliased until the size gets closer to the native image size.
Examples:
Original Image
Retina iPad rendering at 100px x 100px
iPad 2 rendering at 100px x 100px
The difference between iPad 2 and new iPad might just be the screen resolution or could be that the GPU is better equipped to scale images. Either way, the iPad 2 rendering is very poor.
I have tried first reducing the image size by creating a new context, setting the interpolation quality to high and drawing the image into the context. In this case, the image looks fine on both iPads.
Before I continue down the image copy/resize avenue, I wanted to check there wasn't something simpler I was missing. I appreciate that UIImage isn't meant to be scaled, but I was under the impression that UIImageView was there to handle scaling; at the moment it doesn't seem to be doing a good job of scaling down. What (if anything) am I missing?
Update: The drop shadow on the rendered/resized images is added in code. Disabling it made no difference to the quality of the scaling.
Another approach I've tried that does seem to improve things is to set the minificationFilter:
[imageView.layer setMinificationFilter:kCAFilterTrilinear];
The quality is certainly improved and I haven't noticed a performance hit.
Applying a small minification filter bias can help out with this if you don't want to resample the image yourself:
imageView.layer.minificationFilter = kCAFilterTrilinear;
imageView.layer.minificationFilterBias = 0.1;
The left image has no filtering applied to it. The right image has a 0.1 filter bias.
Note that no explicit rasterization is required.
Playing around with very small values, you can usually come up with a value that smooths out the scaling artifacts just enough, and it's a lot easier than resizing the bitmap yourself. Certainly, you lose detail as the bias increases, so values even less than 0.1 are probably sufficient, though it all depends on the size of the image view's frame that's displaying the image.
Just realize that trilinear filtering effectively enables mipmapping on the layer, which basically means it generates extra copies of the bitmap at progressively smaller scales. It's a very common technique used in rendering to increase render speed and also reduce scaling aliasing. The tradeoff is that it requires more memory, though the memory usage for successive downsampled bitmaps reduces exponentially.
Another potential advantage to this technique, though I have not tried it myself, is that you can animate minificationFilterBias. So if you're going to be scaling an image view down quite a lot as part of an animation, consider also animating the filter bias from 0.0 to whatever small value you've determined is appropriate for the scaled down size.
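For example, a sketch of what that might look like (untested, as noted above; assumes minificationFilter is already set to trilinear as in the earlier snippet, and smallFrame is a hypothetical scaled-down destination frame):

CGFloat targetBias = 0.1;
CABasicAnimation *biasAnimation = [CABasicAnimation animationWithKeyPath:@"minificationFilterBias"];
biasAnimation.fromValue = @(0.0);
biasAnimation.toValue = @(targetBias);
biasAnimation.duration = 0.3;
[imageView.layer addAnimation:biasAnimation forKey:@"biasAnimation"];
imageView.layer.minificationFilterBias = targetBias; // update the model value so it sticks

[UIView animateWithDuration:0.3 animations:^{
    imageView.frame = smallFrame; // shrink the view alongside the bias change
}];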
Finally, as others have noted, if your source image is very large, this technique isn't appropriate if overused, because Core Animation will always keep the original bitmap around. In most cases it's better to resize the image and then discard the source image instead of using mipmapping, but for one-offs, or cases where your image views are going to be deallocated quickly enough, this is fine.
If you just put a large image in a small image view, it will look really bad.
The solution is to properly resize the image. Here's an example function that does the trick:
- (UIImage *)resizeImage:(UIImage *)image newSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Use high-quality interpolation for the downscale.
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);

    // Flip the coordinate system so CGContextDrawImage doesn't draw the image upside down.
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);

    // Draw the source image into the new, smaller rect.
    CGContextDrawImage(context, newRect, imageRef);

    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();

    return newImage;
}
This function might take some time, so you might want to cache the result.
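For example, a hypothetical NSCache-based wrapper (imageKey and originalImage are placeholder names) so the resize only runs once per image:

static NSCache *thumbnailCache = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    thumbnailCache = [[NSCache alloc] init];
});

UIImage *thumbnail = [thumbnailCache objectForKey:imageKey];
if (!thumbnail) {
    thumbnail = [self resizeImage:originalImage newSize:CGSizeMake(100, 100)];
    [thumbnailCache setObject:thumbnail forKey:imageKey];
}
imageView.image = thumbnail;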
If you're not afraid of wasting memory and know what you're doing for a particular case, this works beautifully.
myView.layer.shouldRasterize = YES;
myView.layer.rasterizationScale = 2;
The resulting quality is much better than setMinificationFilter.
I am using images that are 256x256 and scaling them to something like 48 px. Obviously a saner solution here would be to downscale the images to the exact destination size.
The following helped me:
imageView.layer.minificationFilter = kCAFilterTrilinear
imageView.layer.shouldRasterize = true
imageView.layer.rasterizationScale = UIScreen.mainScreen().scale
Keep an eye on performance if used in scroll lists.

Draw UIImage From CIImage In drawRect:

I'm learning about drawing UIImages and CGImages, using CIFilters etc. To test my knowledge I made a small test app with sliders that programmatically change the color of a potion sprite and display it on screen (using a CIHueBlendMode CIFilter). After I finished, I wanted to cleanup the relatively lengthy code and noticed that instead of going from the filter's outputted CIImage to a CGImage and then a UIImage, I could go directly from a CIImage to UIImage using UIImage's imageWithCIImage: method.
However, when I tried to draw the resultant UIImage using drawInRect:, nothing was drawn. Going through the CGImage stage rectifies this, of course. My understanding of this is that making a UIImage from a CIImage results in a NULL CGImage property in the UIImage, which is used in drawInRect:. Is this correct? If so, is there a better way to display a CIImage than to go through CGImage followed by UIImage? I could just draw a CGImage made with the CIImage, but that would flip the image, which leads to another question. Currently, I wrap anything I draw in a UIImage first to take care of flipping. Is there another, more efficient way?
Too Long; Didn't Read: Is there a better way to draw CIImages other than turning them into a CGImage, then a UIImage, and drawing that? What's the best way to handle flipping when drawing CGImages?
Thanks to anyone who can answer some of my questions. :)
After doing some research into what a CIImage is, I realize now that you cannot skip the step of making a CGImage from the CIImage, and even if you could, it wouldn't really be any more efficient, since you'd still have to process the CIImage regardless. As noted in Apple's documentation, a CIImage is not really an image; it's essentially a set of instructions that only gets processed when it's turned into a CGImage. That's also why, when I use Time Profiler on my project, I see that 99% of the time in my drawRect: method is spent in createCGImage:fromRect:, not in the CIFilters.
As for the most efficient way to cope with the coordinate system change between Core Graphics and the iPhone, it seems that wrapping the object in a UIImage instance is the easiest (not sure about best) way to go. It's simple, and relatively efficient. Another option would be to transform the graphics context.
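For what it's worth, here's a minimal sketch of the context-transform option (ciContext and filteredImage are my own placeholder names for a reusable CIContext and the filter's output):

- (void)drawRect:(CGRect)rect {
    // Render the CIImage into a CGImage once per draw (this is the expensive step).
    CGImageRef cgImage = [ciContext createCGImage:filteredImage fromRect:filteredImage.extent];

    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSaveGState(ctx);

    // Flip the context so the CGImage isn't drawn upside down in UIKit's coordinate space.
    CGContextTranslateCTM(ctx, 0, CGRectGetHeight(rect));
    CGContextScaleCTM(ctx, 1.0, -1.0);

    CGContextDrawImage(ctx, rect, cgImage);

    CGContextRestoreGState(ctx);
    CGImageRelease(cgImage);
}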
If I don't get a better answer than my own within three days, I'll mark it as accepted.

View behaviour differences between iOS 5.x and iOS 4.x

I've been trying to animate an image view to slide upwards on the screen. The view's height is also increased while its position moves upwards.
On my iPhone 4S (iOS 5.x) the image view behaves as expected: the view only moves upwards as its height is increased. However, on my iPhone 3G (iOS 4.1), the view also moves down a little bit during this animation.
This level of accuracy is needed because the image view is used to create an inexpensive shadow effect, and its alignment is important for the effect. The image is a resizable graphic.
This is how I change the position and size of the view:
CGRect oldShadow = self.shaddowView.frame;
oldShadow.size.height = oldShadow.size.height + 200;
oldShadow.origin.y = oldShadow.origin.y - 200;
self.shaddowView.frame = oldShadow;
This is how the image for the view is set up as resizable:
UIImage *shadow = [[UIImage imageNamed:@"shadow.png"] stretchableImageWithLeftCapWidth:20 topCapHeight:20];
self.shaddowView.image = shadow;
Thanks.
I used the following animation, with a border around not only the starting position but also the intended final position, so that I could confirm whether any undesired shifting of the view (other than the obvious upward expansion) took place, but it worked fine on iOS 4.2.1:
[UIView animateWithDuration:2.0
                 animations:^{
                     CGRect newImageFrame = imageView.frame;
                     newImageFrame.origin.y -= stretchBy;
                     newImageFrame.size.height += stretchBy;
                     imageView.frame = newImageFrame;
                 }];
I don't have an iOS 4.1 device sitting around (my old 3G test phone is running iOS 4.2.1, the latest supported iOS version for that device), so I can't speak to that, but it's fine with iOS 4.2.1.
I have to confess, though, that I find it very unlikely that when you animate a frame change, the final frame would not be precisely what you specified. If you NSLog the frame when you're done, is it not the correct value? Or are you saying that it momentarily moves during the animation but then ends up in the correct location? Or that it shifts down during the animation and ends up in the wrong position even after the animation is done?
I wonder if there's something else going on (e.g. is your animated view a subview of a scroll view, which itself might be shifting? or is there some code not shown here that is accidentally further adjusting the frame after the animation? etc.). Seems like a little debugging should confirm whether the frame is actually not what you intended, or whether there is some other issue going on.
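For example, a quick debugging sketch (reusing the frame change from the question) that logs the final frame once the animation completes:

[UIView animateWithDuration:2.0
                 animations:^{
                     CGRect newFrame = self.shaddowView.frame;
                     newFrame.origin.y -= 200;
                     newFrame.size.height += 200;
                     self.shaddowView.frame = newFrame;
                 }
                 completion:^(BOOL finished) {
                     // If this logs the expected frame, something else is moving the view afterwards.
                     NSLog(@"final frame: %@", NSStringFromCGRect(self.shaddowView.frame));
                 }];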
I originally answered suggesting the shouldRasterize option, but if you're trying to support old iPhone 3G devices, that may not be good enough; it definitely stutters a little on those old phones. Anyway, this was my original answer:
I assume you're doing this because the layer shadow feature is a little CPU intensive. But I've heard (but can't speak to it) that if you use the shouldRasterize option, it's a little better:
viewThatNeedsShadow.layer.shadowColor = [UIColor blackColor].CGColor;
viewThatNeedsShadow.layer.shadowOffset = CGSizeMake(3.0, 3.0);
viewThatNeedsShadow.layer.shadowOpacity = 0.5;
viewThatNeedsShadow.layer.shouldRasterize = YES;
viewThatNeedsShadow.clipsToBounds = NO;

UIImage drawInRect: is very slow; is there a faster way?

This does exactly what it needs to, except that it takes about 400 milliseconds, which is 350 milliseconds too much:
- (void)updateCompositeImage { // blends together the background and the sprites
    UIGraphicsBeginImageContext(CGSizeMake(480, 320));
    [bgImageView.image drawInRect:CGRectMake(0, 0, 480, 320)];
    for (int i = 0; i < numSprites; i++) {
        [spriteImage[spriteType[i]] drawInRect:spriteRect[i]
                                     blendMode:kCGBlendModeScreen
                                         alpha:spriteAlpha[i]];
    }
    compositeImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
The images are fairly small, and there are only three of them (the for loop only iterates twice).
Is there any way of doing this faster? While still being able to use kCGBlendModeScreen and alpha?
You can:
get the UIImages' CGImages,
draw them into a CGBitmapContext,
and produce an image from that.
Using Core Graphics directly may itself be faster. The other bonus is that you can perform the rendering on a background thread (a rough sketch follows the considerations below). Also consider how you can optimize that loop, and profile it using Instruments.
Other considerations:
Can you reduce the interpolation quality?
Are the source images being resized in any way? (It can help if you resize them in advance.)
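For reference, a rough sketch of that Core Graphics route, untested, assuming a 480x320 opaque canvas and the same ivars as in the question (bgImageView, spriteImage, spriteType, spriteRect, spriteAlpha, numSprites):

- (UIImage *)composeImageWithCoreGraphics {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, 480, 320, 8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // The background fills the whole canvas, so its rect is the same in CG and UIKit coordinates.
    CGContextDrawImage(ctx, CGRectMake(0, 0, 480, 320), bgImageView.image.CGImage);

    CGContextSetBlendMode(ctx, kCGBlendModeScreen);
    for (int i = 0; i < numSprites; i++) {
        // Convert each UIKit-style rect (top-left origin) to CG coordinates (bottom-left origin).
        CGRect r = spriteRect[i];
        CGRect cgRect = CGRectMake(r.origin.x, 320 - r.origin.y - r.size.height,
                                   r.size.width, r.size.height);
        CGContextSetAlpha(ctx, spriteAlpha[i]);
        CGContextDrawImage(ctx, cgRect, spriteImage[spriteType[i]].CGImage);
    }

    CGImageRef composed = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:composed];
    CGImageRelease(composed);
    CGContextRelease(ctx);
    return result;
}

You could call this on a background queue and assign the result to compositeImageView.image back on the main thread.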
drawInRect: is slow. Period. Even with small images it's grossly inefficient.
If you are doing a lot of repeat drawing, then have a look at CGLayer, which is designed to facilitate repeat-rendering of the same bits.
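A minimal sketch of that pattern (the 64x64 sprite size is hypothetical; assumes you're inside drawRect: or another callback with a current context):

CGContextRef ctx = UIGraphicsGetCurrentContext();

// Create the layer once and draw the sprite into it a single time.
CGLayerRef spriteLayer = CGLayerCreateWithContext(ctx, CGSizeMake(64, 64), NULL);
CGContextRef layerCtx = CGLayerGetContext(spriteLayer);
// ... draw the sprite into layerCtx here (paths, images, etc.) ...

// Stamping the layer repeatedly is much cheaper than re-rendering the source each time.
for (int i = 0; i < numSprites; i++) {
    CGContextDrawLayerInRect(ctx, spriteRect[i], spriteLayer);
}

CGLayerRelease(spriteLayer);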