UIImageView shadowPath - Objective-C

I've noticed that when you set the shadowPath on a UIImageView's layer, it kills the image quality. Can someone tell me why that happens and what the correct way of doing it is?
imageView.layer.shouldRasterize = YES;
imageView.layer.shadowPath = [UIBezierPath bezierPathWithRect:imageView.bounds].CGPath;
Update
It was the rasterization scale. You need to set it to your screen's scale; otherwise the layer rasterizes at 1x (non-Retina) when creating the bitmap!
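For reference, a minimal sketch of the working setup described above:
imageView.layer.shouldRasterize = YES;
// Rasterize at the screen's scale; the default of 1.0 produces a non-Retina bitmap.
imageView.layer.rasterizationScale = [UIScreen mainScreen].scale;
imageView.layer.shadowPath = [UIBezierPath bezierPathWithRect:imageView.bounds].CGPath;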

When you set shouldRasterize to YES on a layer, the layer draws its contents into an offscreen bitmap and reuses that bitmap. That's why the image becomes somewhat blurry when the rasterization scale doesn't match the screen.
If you omit the first line, the graphic quality won't change, but if the layer has a lot of content, re-rendering it every frame will hurt performance.

Related

How to save UIView with subviews without losing quality?

I am trying to save the view along with its subviews, but the saved image is a little bit blurry (especially the label's text).
I tried all the solutions given on Stack Overflow, with no luck.
Can anyone help me with this?
I am using the below code
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I'm getting blurred text, and the overall picture quality is low as well.
You could try a higher-resolution source image. Downscaling a high-resolution image is fine, but scaling a low-resolution image up to a larger size will generally blur its contents, since everything gets stretched.
The preferred approach is [UIView snapshotViewAfterScreenUpdates:]. You should only use drawViewHierarchyInRect:afterScreenUpdates: if you plan to apply additional effects.
That said, there are several likely causes, depending on how you're manipulating or saving the image. For example, saving text in JPEG format will cause blurriness. Rotating or scaling the image without great care can make the text blurry. Drawing the image incorrectly (for instance, failing to pixel-align it) can make the text blurry. If your process has multiple steps, simplify the problem and validate the quality at each step. To discuss it further on Stack Overflow, you need to provide details on how you're manipulating and displaying the image, not just how you generate it.
Text is extremely susceptible to artifacts. If you must take pictures of it (something you generally should avoid if at all possible), you should make sure to manipulate it as little as possible. It is always better to manipulate the text before it's drawn rather than after.
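As a rough sketch of the two options (assuming a plain UIView named view):
// Fast path: a live snapshot view, not a UIImage - ideal for transitions.
UIView *snapshot = [view snapshotViewAfterScreenUpdates:YES];

// Image path: only when you actually need a UIImage to process or save.
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0); // 0 = screen scale
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();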

How do I control the size of a UIView's background pattern colour?

I have set a patterned background on my UIView using:
myView.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"backgroundImage.png"]];
But the image appears stretched or scaled up and doesn't appear at the desired resolution. Is there a way to set the size or scale of a background pattern, or is the image's size used by default? Does the image's DPI have an effect?
The pattern is constructed by tiling the image until it fills the given area.
So there is no control on tile size other than the original image dimensions.
Now, if you want to provide Retina images, you should just have an @2x version and iOS will take care of that automatically (by the way, change the method call to [UIImage imageNamed:@"backgroundImage"] - the file extension is optional for PNG images).
Do not provide higher-DPI images for Retina; instead provide an image that is twice the pixel size of the non-Retina one (and obviously not by oversampling the image), as sketched below.
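In other words, something like this (the tile sizes are just for illustration):
// backgroundImage.png    -> 1x tile, e.g. 40x40 pixels
// backgroundImage@2x.png -> 2x tile, 80x80 pixels, picked up automatically on Retina
myView.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"backgroundImage"]];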
Finally, the only control you seem to have over the pattern (at least the only one that is documented) is the phase. Here is the relevant part of the official documentation:
By default, the phase of the returned color is 0, which causes the top-left corner of the image to be aligned with the drawing origin. To change the phase, make the color the current color and then use the CGContextSetPatternPhase function to change the phase.
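A minimal sketch of shifting the phase, assuming you're drawing inside something like drawRect: where a current context exists:
CGContextRef context = UIGraphicsGetCurrentContext();
[[UIColor colorWithPatternImage:[UIImage imageNamed:@"backgroundImage"]] setFill];
// Shift the tiling origin 10 points right and 20 points down.
CGContextSetPatternPhase(context, CGSizeMake(10, 20));
CGContextFillRect(context, self.bounds);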
Turns out I wasn't using the @2x naming convention, so images were appearing stretched. I added it and that fixed everything.

iOS UIImageView scaling image down produces aliased image on iPad 2

I am using UIImageView to display thumbnails of images that can then be selected to be viewed at full size. The UIImageView has its content mode set to aspect fit.
The images are usually scaled down from around 500px x 500px to 100px x 100px. On the Retina iPad they display really well, while on the iPad 2 they are badly aliased until the display size gets closer to the native image size.
Examples (screenshots omitted):
Original image
Retina iPad rendering at 100px x 100px
iPad 2 rendering at 100px x 100px
The difference between the iPad 2 and the new iPad might just be the screen resolution, or it could be that the new iPad's GPU is better equipped to scale images. Either way, the iPad 2 rendering is very poor.
I have tried first reducing the image size by creating a new context, setting the interpolation quality to high and drawing the image into the context. In this case, the image looks fine on both iPads.
Before I continue down the image copy/resize avenue, I wanted to check there wasn't something simpler I was missing. I appreciate that UIImage isn't meant to be scaled, but I was under the impression that UIImageView was there to handle scaling; at the moment, however, it doesn't seem to be doing a good job of scaling down. What (if anything) am I missing?
Update: the drop shadow on the rendered/resized images is added in code. Disabling it made no difference to the quality of the scaling.
Another approach I've tried that does seem to improve things is setting the minificationFilter:
[imageView.layer setMinificationFilter:kCAFilterTrilinear];
The quality is certainly improved and I haven't noticed a performance hit.
Applying a small minification filter bias can help out with this if you don't want to resample the image yourself:
imageView.layer.minificationFilter = kCAFilterTrilinear;
imageView.layer.minificationFilterBias = 0.1f;
(Comparison screenshot omitted.) The left image has no filtering applied to it; the right image has a 0.1 filter bias.
Note that no explicit rasterization is required.
Playing around with very small values, you can usually come up with a bias that smooths out the scaling artifacts just enough, and it's a lot easier than resizing the bitmap yourself. You do lose detail as the bias increases, so values even smaller than 0.1 may be sufficient; it all depends on the size of the image view's frame that's displaying the image.
Just realize that trilinear filtering effectively enables mipmapping on the layer, which means it generates extra copies of the bitmap at progressively smaller scales. It's a very common technique in rendering, used both to increase rendering speed and to reduce scaling aliasing. The tradeoff is that it requires more memory, though the memory used by each successive downsampled bitmap shrinks geometrically (each level has a quarter of the pixels of the previous one).
Another potential advantage of this technique, though I have not tried it myself, is that you can animate minificationFilterBias. So if you're going to scale an image view down quite a lot as part of an animation, consider also animating the filter bias from 0.0 to whatever small value you've determined is appropriate for the scaled-down size; see the sketch below.
Finally, as others have noted, if your source image is very large this technique isn't appropriate if overused, because Core Animation will always keep the original bitmap around. In most cases it's better to resize the image and then discard the source rather than rely on mipmapping, but for one-offs, or for image views that will be deallocated quickly enough, this is fine.
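A sketch of what that animation might look like, assuming minificationFilterBias animates like other CALayer properties (again, I haven't verified this):
// Hypothetical sketch: ease the bias in as the view scales down.
CABasicAnimation *bias = [CABasicAnimation animationWithKeyPath:@"minificationFilterBias"];
bias.fromValue = @0.0f;
bias.toValue = @0.1f;
bias.duration = 0.3;
[imageView.layer addAnimation:bias forKey:@"bias"];
imageView.layer.minificationFilterBias = 0.1f; // update the model value too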
If you just put a large image in a small image view, it will look really bad.
The solution is to resize the image properly. Here's an example function that does the trick:
- (UIImage *)resizeImage:(UIImage *)image newSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;

    // Scale of 0 means the bitmap context uses the device's screen scale.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);

    // CGContextDrawImage uses a flipped coordinate space, so flip it back.
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);
    CGContextDrawImage(context, newRect, imageRef);

    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();

    return newImage;
}
This function might take some time, so you might want to save the result to a cache.
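For example, a simple in-memory cache might look like this (imageKey and largeImage are placeholders for your own identifiers; NSCache could be swapped for a file cache):
static NSCache *thumbnailCache = nil;
if (!thumbnailCache) thumbnailCache = [[NSCache alloc] init];

UIImage *thumb = [thumbnailCache objectForKey:imageKey]; // imageKey: your unique key
if (!thumb) {
    // Do the expensive resize only once per key.
    thumb = [self resizeImage:largeImage newSize:CGSizeMake(100, 100)];
    [thumbnailCache setObject:thumb forKey:imageKey];
}
imageView.image = thumb;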
If you're not afraid of wasting memory and know what you're doing for a particular case, this works beautifully.
myView.layer.shouldRasterize = YES;
myView.layer.rasterizationScale = 2;
The resulting quality is much better than with setMinificationFilter.
I am using images that are 256x256 and scaling them down to something like 48px. Obviously a saner solution here would be to downscale the images to the exact destination size.
The following helped me:
imageView.layer.minificationFilter = kCAFilterTrilinear
imageView.layer.shouldRasterize = true
imageView.layer.rasterizationScale = UIScreen.mainScreen().scale
Keep an eye on performance if used in scroll lists.

How do I override the Points to Pixels iOS specificity and have my image drawn at the right size?

I have a 64px by 64px redSquare.png file at a 326ppi resolution. I'm drawing it at the top left corner of my View Controller's window as follows:
myImage = [UIImage imageNamed:@"redSquare.png"];
myImageView = [[UIImageView alloc] initWithImage:myImage];
[self.view addSubview:myImageView];
Given that the iPhone 4S has a screen resolution of 960x640 (326ppi), there should be enough room for 9 more squares to fit next to the first one. However, there's only room for 4 more; i.e. the square is drawn larger than it should be given my measurements.
// even tried resizing UIImageView in case it was
// resizing my image to a different size, by adding
// this next line, but no success there either :
myImageView.frame = CGRectMake(0, 0, 64, 64);
I believe it has to do with the way the device "translates" my pixels. I read about the distinction between points and pixels in Apple's documentation, but it doesn't mention how to work around this problem. I know I'm measuring in pixels. Should I be measuring in points, and how would I do that? How exactly should I resize my image so that 9 more same-sized squares fit next to it on the same horizontal line?
Thank you
To display an image at full resolution on a Retina display, it needs to have @2x appended to the end of its name. In practice, this means you should save the image you're currently using as redSquare@2x.png and a 32x32-pixel version of it as redSquare.png.
Once you have done this, there is no need to change your code. The appropriate image will be displayed depending on the device's capabilities. This will allow your app to render correctly on both Retina and non-Retina devices.
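Put concretely, with both files in place the code stays in points, and each square takes up 32 points (64 pixels on Retina), leaving room for the other nine:
// redSquare.png    -> 32x32 pixels, used on non-Retina screens
// redSquare@2x.png -> 64x64 pixels, used on Retina screens
myImage = [UIImage imageNamed:@"redSquare"]; // the extension is optional
myImageView = [[UIImageView alloc] initWithImage:myImage];
myImageView.frame = CGRectMake(0, 0, 32, 32); // frames are in points, not pixels
[self.view addSubview:myImageView];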

Images in NSButton and NSImageView Blurred

I am completely stumped here. I have a series of small images I'm tinkering with and making into buttons (screenshot omitted):
As you can see, they are all decently crisp and sharp, and they stay that way when I open the PNG files in Preview and the like.
However, when I use them in NSButtons and NSImageViews in Interface Builder, with Scaling set to None, the images become horribly blurred (screenshot omitted). What am I doing wrong? I don't know where to start or what to try; should I go back to the icons and try to make them pixel perfect? Does it have to do with anti-aliasing or something along those lines?
EDIT:
For some reason, it seems as if the NSButtons and NSImageViews are loading the high-resolution versions of the images, even though I'm on a normal display; I can identify them by a slight light-blue stroke I added to the @2x versions. Strangely, Quartz Debug does not flag these as high-resolution images and there's no red tint. Removing references to the @2x images does fix the problem... but...
If you check out session 245 from the WWDC 2012 videos, Advanced Tips and Tricks for High Resolution on OS X, the first section on NSImage explains why.
NSImage doesn't have any concept of high resolution; it just uses the smallest representation that has more pixels than the space it has to fill. So if your NSImageView is larger in dimension than your 1x image, it will use the 2x image, since that one has more pixels.
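To see which representations an image actually carries (and their pixel vs. point sizes), a quick diagnostic along these lines can help (the image name is hypothetical):
NSImage *image = [NSImage imageNamed:@"myIcon"];
for (NSImageRep *rep in image.representations) {
    // Pixel dimensions vs. the point size each rep reports.
    NSLog(@"%ldx%ld pixels for a %.0fx%.0f point size",
          (long)rep.pixelsWide, (long)rep.pixelsHigh,
          rep.size.width, rep.size.height);
}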
I have had this problem before. It seems that if your image's DPI isn't 72, the reported image size will be wrong. You can get the real pixel size using the code below.
NSImage *image = [NSImage imageNamed:@"image"];
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
// Use the bitmap's pixel dimensions rather than the DPI-derived point size.
NSSize size = NSMakeSize([rep pixelsWide], [rep pixelsHigh]);
[image setSize:size];
When specifying image names in Interface Builder and [NSImage imageNamed:], make sure to use foo instead of foo.png. While iOS is smart enough to add the @2x in the latter case, Mac OS X is not: it will load the non-Retina image in the latter case, but will add the @2x in the former (if such an image is present).
Are you assigning the images to your buttons in IB or in code?
If you are doing it in code, creating a copy of the image (e.g. [myImage copy]) and assigning that copy to your button may solve this.
In my case (drawing icons in a custom NSOutlineView), I had to make sure that the x,y origin of the draw rect was rounded to integer values:
NSMakeRect(round(NSMinX(cellFrame) - iconSize.width),
           round(NSMidY(cellFrame) - (iconSize.height / 2.0f)), …);
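The snippet above is abbreviated; a fuller sketch (icon and iconSize are placeholders for your own values) might look like:
// Pixel-align the icon's origin so AppKit doesn't resample it.
NSRect iconRect = NSMakeRect(round(NSMinX(cellFrame) - iconSize.width),
                             round(NSMidY(cellFrame) - iconSize.height / 2.0),
                             iconSize.width, iconSize.height);
[icon drawInRect:iconRect
        fromRect:NSZeroRect
       operation:NSCompositeSourceOver
        fraction:1.0];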
This is actually a response to the earlier post about DPI, but I was unable to reply directly to it. The code in that post gave the true pixel dimensions for me (that is, it did not indicate any trouble). However, image DPI was definitely the culprit in my case. The symptoms I was seeing were:
With my NSImageViews set to No Scaling, the images would appear squashed.
With my NSImageViews set to Axes Independently, most images would appear correctly if the dimensions of the NSImageViews were set to exactly match the dimensions of the image.
However, even in this case, some images had strange artifacts in them that were not there when viewing the same image via Preview or elsewhere (or even via Interface Builder, for that matter -- they only appeared at runtime).
The images that had trouble were at a DPI other than 72. When I re-created the images at 72 DPI, all of the above behavior disappeared.
This was a pretty confounding issue -- I hope this helps someone!
For me, I just needed to set image scaling to none:
In Interface Builder (screenshot omitted)
In code:
NSImageCell *image;   // your image view's cell
[image setImageScaling:NSImageScaleNone];
NSButtonCell *button; // your button's cell
[button setImageScaling:NSImageScaleNone];