How to save UIView with subviews without losing quality? - objective-c

I am trying to save a view together with its subviews, but the saved image is a little blurry (especially the label text).
I have tried all the solutions I could find on Stack Overflow, with no luck.
Can anyone help me with this?
I am using the code below:
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The text comes out blurred and the overall picture quality is low.

You could try a higher-resolution image. Scaling a high-resolution image down is generally fine, but scaling a low-resolution image up to a larger size will blur its contents, because everything gets stretched.

The preferred approach is [UIView snapshotViewAfterScreenUpdates:]. You should only use drawViewHierarchyInRect:afterScreenUpdates: if you plan to apply additional effects.
That said, there are several likely causes, depending on how you're manipulating or saving the image. Saving text in JPEG format will cause blurriness. Rotating or scaling the image without great care can blur the text, as can drawing the image incorrectly (for instance, failing to pixel-align it). If your process involves multiple steps, simplify it and validate the quality at each step. To discuss it further on Stack Overflow, you need to provide details on how you're manipulating and displaying the image, not just how you generate it.
Text is extremely susceptible to artifacts. If you must take pictures of it (something you generally should avoid if at all possible), you should make sure to manipulate it as little as possible. It is always better to manipulate the text before it's drawn rather than after.
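As a minimal sketch of that advice (assuming view is the view being captured, and that the result is stored as lossless PNG rather than JPEG), something like the following keeps label text sharp:
// Pixel-align the capture rect and render at the screen scale (0 = device scale).
CGRect captureRect = CGRectIntegral(view.bounds);
UIGraphicsBeginImageContextWithOptions(captureRect.size, NO, 0);
[view drawViewHierarchyInRect:captureRect afterScreenUpdates:YES];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// PNG is lossless; UIImageJPEGRepresentation would add compression artifacts around text.
NSData *pngData = UIImagePNGRepresentation(snapshot);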

Related

NSImageView with high-resolution image causes extreme slowdown when resizing the window

I am creating a simple photo filter app for OS X and I am displaying a photo on an NSImageView (actually two photos on top of each other with two NSImageViews, but the question applies to a single view too). Everything works great, but when I try to resize the window that contains the NSImageViews, the window (which also resizes the NSImageViews) resizes very slowly, at less than 1 fps, which hurts the user experience. I want window resizing to be as smooth as possible. When I disable resizing of the image views, the window resizes smoothly, so the cause of the slowdown is those NSImageViews.
I'm loading 20-megapixel images from my DSLR. When I scale them down to a reasonable size for screen (e.g. 1024x768) beforehand, they resize smoothly, so the problem is how NSImageView renders the images: it appears to re-render the full 20 MP image every time it needs to redraw into whatever the current frame of the view is.
How can I make NSImageView rescale more smoothly? Should I feed it with a scaled-down version of my images? I don't want to do that as it's a photo editing app that also targets retina display screens and the viewport would actually be quite large. I can do it, but it's my final option. Other than scaling down, how can I make NSImageView resize faster?
I believe part of the solution you are looking for is in NSImage's representations. You can add multiple representations to an image with addRepresentation:, and I believe some intelligent selection is done when drawing. In your case, I think you would need to add both representations (the scaled-down and the full-resolution bitmap) to the NSImage. I strongly suspect drawRect: would then pick the low-resolution version. I would also make sure "Scale Up or Down" is selected in the NSImageView, because the default is scale down only, which may force your full-resolution image to be used most of the time. There is some discussion of representation "matching" in Apple's documentation under "Setting the Image Representation Selection Criteria" for NSImage, although at first sight this may not be sufficient.
Then, whenever you need to do something with the full image, you would request the full resolution image by going through the representations ([NSImage representations] returns an array of NSImageRep).
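A minimal sketch of this idea (the file path, the 1024 px target width, and the imageView variable are placeholders, not from the original post):
// Full-resolution bitmap loaded from disk.
NSBitmapImageRep *fullRep = [NSBitmapImageRep imageRepWithData:
    [NSData dataWithContentsOfFile:@"/path/to/photo.jpg"]];
// Draw a scaled-down copy (~1024 px wide) for fast on-screen resizing.
NSInteger smallWidth  = 1024;
NSInteger smallHeight = fullRep.pixelsHigh * smallWidth / fullRep.pixelsWide;
NSImage *scratch = [[NSImage alloc] initWithSize:NSMakeSize(smallWidth, smallHeight)];
[scratch lockFocus];
[fullRep drawInRect:NSMakeRect(0, 0, smallWidth, smallHeight)];
[scratch unlockFocus];
NSBitmapImageRep *smallRep = [NSBitmapImageRep imageRepWithData:[scratch TIFFRepresentation]];
// One NSImage carrying both representations; AppKit can pick the smaller one
// when drawing at screen size, while the full-resolution rep stays available.
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(smallWidth, smallHeight)];
[image addRepresentation:smallRep];
[image addRepresentation:fullRep];
imageView.image = image;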

iOS UIImageView scaling image down produces aliased image on iPad 2

I am using UIImageView to display thumbnails of images that can then be selected to be viewed at full size. The UIImageView has its content mode set to aspect fit.
The images are usually scaled down from around 500px x 500px to 100px x 100px. On the retina iPad they display really well while on the iPad2 they are badly aliased until the size gets closer to the native image size.
Examples:
Original Image
Retina iPad rendering at 100px x 100px
iPad 2 rendering at 100px x 100px
The difference between iPad 2 and new iPad might just be the screen resolution or could be that the GPU is better equipped to scale images. Either way, the iPad 2 rendering is very poor.
I have tried first reducing the image size by creating a new context, setting the interpolation quality to high and drawing the image into the context. In this case, the image looks fine on both iPads.
Before I continue down the image copy/resize route, I wanted to check there isn't something simpler I'm missing. I appreciate that UIImage isn't meant to be scaled, but I was under the impression that UIImageView was there to handle scaling; at the moment, though, it doesn't seem to be doing a good job of scaling down. What (if anything) am I missing?
Update: the drop shadow on the rendered / resized images is added in code; disabling it made no difference to the quality of the scaling.
Another approach I've tried that does seem to improve things is to set the minificationFilter:
[imageView.layer setMinificationFilter:kCAFilterTrilinear];
The quality is certainly improved and I haven't noticed a performance hit.
Applying a small minification filter bias can help out with this if you don't want to resample the image yourself:
imageView.layer.minificationFilter = kCAFilterTrilinear;
imageView.layer.minificationFilterBias = 0.1;
The left image has no filtering applied to it. The right image has a 0.1 filter bias.
Note that no explicit rasterization is required.
Playing around with very small values, you can usually come up with a value that smooths out the scaling artifacts just enough, and it's a lot easier than resizing the bitmap yourself. Certainly, you lose detail as the bias increases, so values even smaller than 0.1 may be sufficient, though it all depends on the size of the image view's frame that's displaying the image.
Just realize that trilinear filtering effectively enables mipmapping on the layer, which basically means it generates extra copies of the bitmap at progressively smaller scales. It's a very common technique used in rendering to increase render speed and also reduce scaling aliasing. The tradeoff is that it requires more memory, though the memory usage for successive downsampled bitmaps reduces exponentially.
Another potential advantage to this technique, though I have not tried it myself, is that you can animate minificationFilterBias. So if you're going to be scaling an image view down quite a lot as part of an animation, consider also animating the filter bias from 0.0 to whatever small value you've determined is appropriate for the scaled down size.
Finally, as others have noted, if your source image is very large, this technique isn't appropriate if overused, because Core Animation will always keep around the original bitmap. It's better to resize the image then discard the source image instead of using mipmapping in most cases, but for one-offs or cases where your image views are going to be deallocated quickly enough, this is fine.
If you just put a large image into a small image view it will look really bad.
The solution is to properly resize the image; here is an example function that does the trick:
- (UIImage *)resizeImage:(UIImage *)image newSize:(CGSize)newSize {
    // Pixel-align the destination rect to avoid blurry edges.
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;

    // Scale factor 0 uses the device's screen scale.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);

    // Core Graphics uses a flipped coordinate system, so flip vertically before drawing.
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);

    // Draw the source image into the resized, pixel-aligned rect.
    CGContextDrawImage(context, newRect, imageRef);

    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();

    return newImage;
}
This function might take some time, so you might want to save the result to a cache file.
If you're not afraid of wasting memory and know what you're doing for a particular case, this works beautifully.
myView.layer.shouldRasterize = YES;
myView.layer.rasterizationScale = 2;
The resulting quality is much better than setMinificationFilter.
I am using images that are 256x256 and scaling them to something like 48 px. Obviously a saner solution here would be to downscale the images to the exact destination size.
The following helped me:
imageView.layer.minificationFilter = kCAFilterTrilinear
imageView.layer.shouldRasterize = true
imageView.layer.rasterizationScale = UIScreen.mainScreen().scale
Keep an eye on performance if used in scroll lists.

Images in NSButton and NSImageView Blurred

I am completely stumped here; I have a series of small images I'm tinkering with and making into buttons:
And as you can see they are all decently crisp and sharp, and retain this when I open the png files in Preview and what not.
However, when I use them in NSButtons and NSImageViews in Interface Builder, setting Scaling to None:
The images become horribly blurred. What am I doing wrong? I don't know where to start and what to try; should I go back to the icons and try to make them pixel perfect? Does it have to do with anti-aliasing or something along those lines?
EDIT:
For some reason, it seems as if the NSButtons and NSImageViews are loading the high resolution versions of the images, even though I'm on a normal display, which can be identified by a slight light blue stroke I added to them. For some reason, Quartz Debug does not identify these as high resolution images and there's no red tint. Removing references to the @2x images does fix the problem... but...
If you check out session 245 in the WWDC 2012 videos Advanced Tips and Tricks for High Resolution on OS X in the first section on NSImage you'll find out why.
NSImage doesn't have any concept of high resolution; it just uses the smallest representation that has more pixels than the space it has to fill. So if your NSImageView is bigger in dimensions than your 1x image, it will use the 2x image, since that one has more pixels.
I have had this problem before. It seems that if your image's DPI isn't 72, the reported image size will be wrong. You can get the real size using the code below.
NSImage *image = [NSImage imageNamed:@"image"];
// Read the bitmap representation to get the true pixel dimensions,
// then set the image's size to match so it draws correctly.
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
NSSize size = NSMakeSize([rep pixelsWide], [rep pixelsHigh]);
[image setSize:size];
When specifying image names in Interface Builder and [NSImage imageNamed:], make sure to use foo instead of foo.png. While iOS is smart enough to add the @2x in the latter case, Mac OS X is not. It will load the non-retina image in the latter case, but will add the @2x in the former case (if such an image is present).
Are you assigning the images to your Buttons in IB or in Code?
If you are doing it in code, maybe creating a copy of the image (e.g. [myImage copy]), and assigning that copy to your button may solve this.
In my case (drawing icons in a custom NSOutlineView), I had to make sure that the x,y origin of the drawing rect is rounded to integer values:
NSMakeRect(round(NSMinX(cellFrame) - iconSize.width),
           round(NSMidY(cellFrame) - (iconSize.height / 2.0f)), …);
This is actually a response to the earlier post about DPI, but I was unable to reply directly to it. The code in that post gave the true pixel dimensions for me (that is, it did not indicate any trouble). However, image DPI was definitely the culprit in my case. The symptoms I was seeing were:
With my NSImageViews set to No Scaling, the images would appear squashed.
With my NSImageViews set to Axes Independently, most images would appear correctly if the dimensions of the NSImageViews were set to exactly match the dimensions of the image.
However, even in this case, some images had strange artifacts in them that were not there when viewing the same image via Preview or elsewhere (or even via Interface Builder, for that matter -- they only appeared at runtime).
The images that had trouble were at a DPI other than 72. When I re-created the images at 72 DPI, all of the above behavior disappeared.
This was a pretty confounding issue -- I hope this helps someone!
For me, I just needed to set image scaling to none:
In Interface Builder
In code
// For an existing NSImageCell (e.g. your image view's cell):
NSImageCell *image;
[image setImageScaling:NSImageScaleNone];
// For an existing NSButtonCell (e.g. your button's cell):
NSButtonCell *button;
[button setImageScaling:NSImageScaleNone];

iOS: storing thumbnail images in Core Data

I need to take some images from the iPhone / iPad photo library from within my app, store them in a Core Data entity, and display them as small thumbnail images (48x48 pixels) in a UITableViewCell, and at about 80x80 pixels in a detail UIView. I've followed the Recipes sample app, where they use UIImageJPEGRepresentation(value, 0.1) to convert to NSData and store the bytes inside Core Data, and it doesn't end up taking much space, which is good. But when I retrieve the data using UIImage *uiImage = [[UIImage alloc] initWithData:value]; and display it as a thumbnail image with "Aspect Fit", it looks terrible and grainy. I tried changing the image quality value in the JPEG compression, but even setting it to 0.9 doesn't help.
Is that normal? Is there a better way to compress the image that doesn't cause so much graininess? Since I just want to show a small thumbnail, and then a slightly bigger thumbnail, I feel Core Data would be great for storing this, since it should (theoretically) also support iCloud. But if it's going to look terrible, then I'll have to reconsider.
Two things: are you resizing the image to the right size, and have you tried UIImagePNGRepresentation()? That should compress it without losing quality.
If UIImagePNGRepresentation (which is lossless) is giving you bad images, then the problem is in your image resizing code. Core Data just gives you back what you put in, so if you get bad images out, it's because you put bad images in.
Is one of your iPhone/iPad retina and the other isn't? If so, perhaps the problem is that you don't really want 48x48 pixel images, you want 48x48 point (which means you'll need 2x images 96x96 for retina quality display).
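A minimal sketch of that resizing advice (assuming fullImage is the photo picked from the library; the 48x48-point target and the thumbnailData attribute name are hypothetical, and aspect-ratio handling is omitted for brevity):
CGSize targetSize = CGSizeMake(48.0, 48.0);   // 48x48 points
// Scale 0 uses the screen scale, so this produces 96x96 pixels on a 2x retina display.
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0);
[fullImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// PNG is lossless, so the thumbnail stays crisp; low-quality JPEG is what causes the graininess.
NSData *thumbnailData = UIImagePNGRepresentation(thumbnail);
// [managedObject setValue:thumbnailData forKey:@"thumbnailData"];  // hypothetical Core Data attribute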

How to transparent UIImage gradually?

I have a small UIImage, and I want it to be solid at the center and fade gradually to transparent toward the border. Does anyone have a good idea how to do this?
Thanks so much!
At least two of the ways you could accomplish this are:
Create a series of images with the varying opacity in them. Then animate the UIImage by showing them in sequence.
Use a similar technique but use mask images. This involves using a series of images as a mask for your original image. Then animate the UIImage by redrawing it repeatedly using a different mask image each time to achieve the effect you want. See CALayer mask.
The second may be preferred because it will work with any image and allows you to change the image without having to generate the animation for it or load the image dynamically.
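For the mask-based approach, here is a minimal static sketch (no animation), assuming imageView is the UIImageView showing the image and QuartzCore is linked. It draws a radial alpha gradient with Core Graphics and sets it as the layer mask, so the center stays opaque and the edges fade to transparent:
#import <QuartzCore/QuartzCore.h>
// Build the radial mask image: opaque at the center, transparent at the border.
CGSize size = imageView.bounds.size;
UIGraphicsBeginImageContextWithOptions(size, NO, 0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
CGFloat components[] = { 1.0, 1.0,    // white, alpha 1 at the center
                         1.0, 0.0 };  // white, alpha 0 at the border
CGGradientRef gradient = CGGradientCreateWithColorComponents(space, components, NULL, 2);
CGPoint center = CGPointMake(size.width / 2, size.height / 2);
CGFloat radius = MIN(size.width, size.height) / 2;
CGContextDrawRadialGradient(ctx, gradient, center, 0, center, radius,
                            kCGGradientDrawsAfterEndLocation);
UIImage *maskImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGGradientRelease(gradient);
CGColorSpaceRelease(space);
// Layer masks use the alpha channel: opaque areas show through, transparent areas are hidden.
CALayer *maskLayer = [CALayer layer];
maskLayer.frame = imageView.bounds;
maskLayer.contents = (__bridge id)maskImage.CGImage;
imageView.layer.mask = maskLayer;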