I need a method for resizing a UIImage with "nearest neighbour" resampling, like in Photoshop. I've been looking for one, but everything I found was about Core Graphics tricks to improve the quality of bicubic resampling. My app has a pixel-style design: I create a lot of artwork pixel by pixel and then enlarge it with a 5x multiplier, which takes so long that I'm close to writing a Photoshop script for it. For example:
But I really don't want a result like this from the resampling:
Maybe someone can show me the right way.
When you draw your image into a graphics context, you can set the graphics context's interpolation quality to "none", like this (e.g. in a view's drawRect method):
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(c, kCGInterpolationNone);
UIImage *image = [UIImage imageNamed:@"pixels.png"];
[image drawInRect:self.bounds];
If you need the result as a UIImage (e.g. to assign it to a built-in UI control), you could do this with UIGraphicsBeginImageContext (you'll find lots of examples for that).
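For example, a minimal sketch of producing an enlarged UIImage with nearest-neighbour interpolation might look like this (the image name and 5x scale factor are just placeholders):
UIImage *image = [UIImage imageNamed:@"pixels.png"]; // placeholder image name
CGFloat scale = 5.0;                                 // hypothetical enlargement factor
CGSize newSize = CGSizeMake(image.size.width * scale, image.size.height * scale);
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationNone);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();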
An alternative would be to set the magnificationFilter property of an image view's layer:
pixelatedImageView.layer.magnificationFilter = kCAFilterNearest;
This is probably faster and more memory-efficient, because you don't need to redraw the image.
Related
I've noticed that when you set the shadowPath on a UIImageView's layer, it kills the image quality. Can someone tell me why that happens and what the correct way of doing it is?
imageView.layer.shouldRasterize = YES;
imageView.layer.shadowPath = [UIBezierPath bezierPathWithRect:imageView.bounds].CGPath;
Update
It was the rasterization scale. You need to set it to your screen's scale, otherwise the layer uses the non-retina image when creating the bitmap!
When you set shouldRasterize to YES on a layer, it causes the layer to draw its contents into a bitmap. That's why the image becomes somewhat blurry.
If you omit the first line, the graphic quality won't change, but if you have a lot of content it will hurt performance.
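Putting the update together with the original snippet, the complete fix might look something like this (assuming the same imageView as above):
imageView.layer.shouldRasterize = YES;
imageView.layer.rasterizationScale = [UIScreen mainScreen].scale; // match the screen scale so retina content isn't downsampled
imageView.layer.shadowPath = [UIBezierPath bezierPathWithRect:imageView.bounds].CGPath;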
Here is the view setup I have: a layer view that detects the user's touch, covering an image view that shows the image. The image view's content mode is aspect fit, so the image keeps its ratio. If I get a touch at (100, 240) in the layer view, that is in the layer view's coordinate space, not the image's. I would like to know how to convert the layer view's coordinates to the image's coordinates. In this example the image size may be 180*180, so the corresponding coordinate in the image would be (60, 90).
Thanks.
If I'm understanding this question correctly, you want to take a point, which is currently in relation to the layer's coordinate system, and convert it to the image view's coordinate system?
In that case, there are a couple of ways to do this.
Easiest is to use convertPoint:fromView: or convertPoint:toView:
CGPoint imageViewTouchPoint = [imageView convertPoint:touchPoint fromView:layerView];
CGPoint imageViewTouchPoint = [layerView convertPoint:touchPoint toView:imageView];
Either one should work.
EDIT - I realize now that this only works if the UIImage fills the UIImageView's frame exactly, which you said it might not, due to the UIViewContentModeScaleAspectFit content mode.
In this case, unless I'm mistaken, the image frame is calculated inside the UIImageView drawRect: method and isn't a property that gets set. This means you'll have to calculate this on your own.
Definitely get the imageViewTouchPoint from one of the methods above (just in case you want to use the same logic on a UIImageView which isn't the full screen size).
You will then need to calculate the scaled image frame. There are a couple of ways to do this. Some people go brute force and manually calculate it based on which side of the image is longer, then determine how each side should be scaled. Then they calculate the origin by centering the image: subtracting each image side from the corresponding image view side and dividing by two.
I like to write as little code as possible if it's unnecessary, even if it means importing a framework. If you import AVFoundation you get a method AVMakeRectWithAspectRatioInsideRect which you can use to actually calculate the scaled rectangle in one line of code.
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(image.size, imageView.bounds);
Whichever method you use, you will then simply translate your touched point with the scaled image origin:
CGPoint imageTouchPoint = CGPointMake(imageViewTouchPoint.x - imageRect.origin.x, imageViewTouchPoint.y - imageRect.origin.y);
You have to do the math yourself. Calculate the aspect ratio of your image and compare with the aspect ratio of the image view's bounds.
Look at this question: How to Get Image position in ImageView
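For reference, here is a rough sketch of that aspect-fit math done by hand (the variable names image, imageView, and touchPoint are just for illustration; touchPoint is assumed to be in the image view's coordinate space):
CGSize imageSize = image.size;
CGSize viewSize = imageView.bounds.size;
CGFloat scale = MIN(viewSize.width / imageSize.width, viewSize.height / imageSize.height);

// Frame of the aspect-fitted image, centered inside the image view.
CGSize fittedSize = CGSizeMake(imageSize.width * scale, imageSize.height * scale);
CGPoint fittedOrigin = CGPointMake((viewSize.width - fittedSize.width) / 2,
                                   (viewSize.height - fittedSize.height) / 2);

// Convert a point in the image view's coordinates to the image's own coordinates.
CGPoint imagePoint = CGPointMake((touchPoint.x - fittedOrigin.x) / scale,
                                 (touchPoint.y - fittedOrigin.y) / scale);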
After searching more, got a hack:
// resizedImageWithContentMode:bounds:interpolationQuality: comes from a UIImage resizing category (e.g. UIImage+Resize), not from UIKit.
CGSize imageInViewSize = [photo resizedImageWithContentMode:UIViewContentModeScaleAspectFit bounds:imageView.bounds.size interpolationQuality:kCGInterpolationNone].size;

CGRect overlayRect = CGRectMake((imageView.frame.size.width - imageInViewSize.width) / 2,
                                (imageView.frame.size.height - imageInViewSize.height) / 2,
                                imageInViewSize.width,
                                imageInViewSize.height);

NSLog(@"Frame of Image inside UIImageView: Left:%f Top:%f Width:%f Height:%f \n", overlayRect.origin.x, overlayRect.origin.y, overlayRect.size.width, overlayRect.size.height);
I am using UIImageView to display thumbnails of images that can then be selected to be viewed at full size. The UIImageView has its content mode set to aspect fit.
The images are usually scaled down from around 500px x 500px to 100px x 100px. On the retina iPad they display really well while on the iPad2 they are badly aliased until the size gets closer to the native image size.
Examples:
Original Image
Retina iPad rendering at 100px x 100px
iPad 2 rendering at 100px x 100px
The difference between iPad 2 and new iPad might just be the screen resolution or could be that the GPU is better equipped to scale images. Either way, the iPad 2 rendering is very poor.
I have tried first reducing the image size by creating a new context, setting the interpolation quality to high and drawing the image into the context. In this case, the image looks fine on both iPads.
Before I continue down the image copy/resize avenue, I wanted to check there wasn't something simpler I was missing. I appreciate that UIImage isn't meant to be scaled, but I was under the impression that UIImageView was there to handle scaling; at the moment, though, it doesn't seem to be doing a good job scaling down. What (if anything) am I missing?
Update: Note: The drop shadow on the rendered / resized images is added in code. Disabling this made no difference to the quality of the scaling.
Another approach I've tried that does seem to be improving things is to set the minificationFilter:
[imageView.layer setMinificationFilter:kCAFilterTrilinear];
The quality is certainly improved and I haven't noticed a performance hit.
Applying a small minification filter bias can help out with this if you don't want to resample the image yourself:
imageView.layer.minificationFilter = kCAFilterTrilinear
imageView.layer.minificationFilterBias = 0.1
The left image has no filtering applied to it. The right image has a 0.1 filter bias.
Note that no explicit rasterization is required.
Playing around with very small values, you can usually come up with a value that smooths out the scaling artifacts just enough, and it's a lot easier than resizing the bitmap yourself. Certainly you lose detail as the bias increases, so values even less than 0.1 may be sufficient, though it all depends on the size of the image view's frame that's displaying the image.
Just realize that trilinear filtering effectively enables mipmapping on the layer, which basically means it generates extra copies of the bitmap at progressively smaller scales. It's a very common technique used in rendering to increase render speed and also reduce scaling aliasing. The tradeoff is that it requires more memory, though the memory usage for successive downsampled bitmaps reduces exponentially.
Another potential advantage to this technique, though I have not tried it myself, is that you can animate minificationFilterBias. So if you're going to be scaling an image view down quite a lot as part of an animation, consider also animating the filter bias from 0.0 to whatever small value you've determined is appropriate for the scaled down size.
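As a rough, untested sketch of that idea (reusing the 0.1 bias from above; the duration is arbitrary and should match your scaling animation):
CABasicAnimation *biasAnimation = [CABasicAnimation animationWithKeyPath:@"minificationFilterBias"];
biasAnimation.fromValue = @0.0;
biasAnimation.toValue = @0.1;                 // the bias you settled on for the scaled-down size
biasAnimation.duration = 0.3;                 // match the duration of the accompanying scale animation
[imageView.layer addAnimation:biasAnimation forKey:@"minificationFilterBias"];
imageView.layer.minificationFilterBias = 0.1; // also set the final model value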
Finally, as others have noted, if your source image is very large, this technique isn't appropriate if overused, because Core Animation will always keep around the original bitmap. It's better to resize the image then discard the source image instead of using mipmapping in most cases, but for one-offs or cases where your image views are going to be deallocated quickly enough, this is fine.
If you just put a large image into a small image view, it will look really bad.
The solution is to properly resize the image. Here's an example function that does the trick:
- (UIImage *)resizeImage:(UIImage *)image newSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Draw with high interpolation quality; flip vertically because
    // CGContextDrawImage uses a bottom-left origin.
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);
    CGContextDrawImage(context, newRect, imageRef);

    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();

    return newImage;
}
This function might take some time, so you might want to cache the result to a file.
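For example, a hedged sketch of a simple file cache built around the function above (the file-naming scheme and helper name are just illustrations):
// Hypothetical file cache for resized images; reuses resizeImage:newSize: from above.
- (UIImage *)cachedResizedImageNamed:(NSString *)name size:(CGSize)size {
    NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) firstObject];
    NSString *fileName = [NSString stringWithFormat:@"%@_%.0fx%.0f.png", name, size.width, size.height];
    NSString *path = [cachesDir stringByAppendingPathComponent:fileName];

    // Return the cached file if it already exists.
    UIImage *cached = [UIImage imageWithContentsOfFile:path];
    if (cached) {
        return cached;
    }

    // Otherwise resize once, write it out, and return it.
    UIImage *resized = [self resizeImage:[UIImage imageNamed:name] newSize:size];
    [UIImagePNGRepresentation(resized) writeToFile:path atomically:YES];
    return resized;
}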
If you're not afraid of wasting memory and know what you're doing for a particular case, this works beautifully.
myView.layer.shouldRasterize = YES;
myView.layer.rasterizationScale = 2;
The resulting quality is much better than setMinificationFilter.
I am using images that are 256x256 and scaling them to something like 48 px. Obviously a saner solution here would be to downscale the images to the exact destination size.
The following helped me:
imageView.layer.minificationFilter = kCAFilterTrilinear
imageView.layer.shouldRasterize = true
imageView.layer.rasterizationScale = UIScreen.mainScreen().scale
Keep an eye on performance if used in scroll lists.
I'm learning about drawing UIImages and CGImages, using CIFilters etc. To test my knowledge I made a small test app with sliders that programmatically change the color of a potion sprite and display it on screen (using a CIHueBlendMode CIFilter). After I finished, I wanted to cleanup the relatively lengthy code and noticed that instead of going from the filter's outputted CIImage to a CGImage and then a UIImage, I could go directly from a CIImage to UIImage using UIImage's imageWithCIImage: method.
However, when I tried to draw the resultant UIImage using drawInRect:, nothing was drawn. Going through the CGImage stage rectifies this, of course. My understanding is that making a UIImage from a CIImage results in a UIImage whose CGImage property is NULL, which is what drawInRect: relies on. Is this correct? If so, is there a better way to display a CIImage than to go through CGImage followed by UIImage? I could just draw a CGImage made from the CIImage, but that would flip the image, which leads to another question. Currently, I wrap anything I draw in a UIImage first to take care of the flipping. Is there another, more efficient way?
Too Long; Didn't Read: Is there a better way to draw a CIImage other than turning it into a CGImage, then a UIImage, and drawing that? What's the best way to handle flipping when drawing CGImages?
Thanks to anyone who can answer some of my questions. :)
After doing some research into what a CIImage is, I realize now that you cannot skip the step of making a CGImage from the CIImage, and even if you could, it wouldn't really be any more efficient, since you'd still have to process the CIImage regardless. A CIImage is not really an image, as noted in Apple's documentation; it only gets processed when it's turned into a CGImage. That's also why, when I use Time Profiler on my project, I see that 99% of the time in my drawRect: method is spent in createCGImage:, not in the CIFilters.
As for the most efficient way to cope with the coordinate system change between Core Graphics and the iPhone, it seems that wrapping the object in a UIImage instance is the easiest (not sure about best) way to go. It's simple, and relatively efficient. Another option would be to transform the graphics context.
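To make that pipeline concrete, here is a rough sketch of the CIImage → CGImage → UIImage route (it assumes filter is the configured CIFilter and rect is the destination rect; the CIContext would normally be created once and reused):
CIContext *ciContext = [CIContext contextWithOptions:nil];  // ideally created once and reused
CIImage *outputImage = filter.outputImage;                  // 'filter' is assumed to be your configured CIFilter
CGImageRef cgImage = [ciContext createCGImage:outputImage fromRect:[outputImage extent]];
UIImage *result = [UIImage imageWithCGImage:cgImage];       // wrapping in a UIImage handles the flip when drawing
CGImageRelease(cgImage);
[result drawInRect:rect];                                   // e.g. inside drawRect:, with 'rect' as your target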
If I don't get a better answer than my own within three days, I'll mark it as accepted.
I'm building an iPhone application which requires some images to be built up in a very specific way. The problem is quite difficult to explain, so below is a diagram of what I'm trying to achieve. Basically, I want to "paint bucket" fill onto a UIImage (which will be a PNG). I assume the term "paint bucket" here equates to a tint?
After that, I want to create a mask object (which will be updatable and may consist of multiple shapes) and then when I apply another tint/paint bucket to the original image, the areas covered by the built-up mask will be unaffected. It's basically like wrapping some tape around an object, painting it and then removing the tape. As promised, here's a diagram of what I'm after. It's important to note that although I'm using a cross here, eventually the patterns may be quite complex and will have to be inside PNGs and not created in code. Thanks for any help you might be able to give!
Create your cross (or whatever shape you want) as a black image on a white background. Apply it to your graphics context using CGContextClipToMask. Then use CGContextFillRect to fill the bounds of your context with blue. Something like this should do it:
CGRect bounds = /* your context bounds */;
CGContextRef gc = /* your context */;
UIImage *cross = [UIImage imageNamed:@"cross"];
CGContextSaveGState(gc); {
    CGContextClipToMask(gc, bounds, cross.CGImage);
    CGContextSetFillColorWithColor(gc, [UIColor blueColor].CGColor);
    CGContextFillRect(gc, bounds);
} CGContextRestoreGState(gc);
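If you need the result back as a UIImage rather than drawing into an existing context, a sketch of wrapping the same code in an image context might look like this (the base image name is a placeholder):
UIImage *base = [UIImage imageNamed:@"base"];   // hypothetical source PNG to tint
UIGraphicsBeginImageContextWithOptions(base.size, NO, base.scale);
CGContextRef gc = UIGraphicsGetCurrentContext();
CGRect bounds = CGRectMake(0, 0, base.size.width, base.size.height);

[base drawInRect:bounds];                       // draw the original image first

UIImage *cross = [UIImage imageNamed:@"cross"];
CGContextSaveGState(gc); {
    // Note: a UIKit image context has a flipped coordinate system, so an
    // asymmetric mask image may need to be flipped; a cross is symmetric.
    CGContextClipToMask(gc, bounds, cross.CGImage);
    CGContextSetFillColorWithColor(gc, [UIColor blueColor].CGColor);
    CGContextFillRect(gc, bounds);
} CGContextRestoreGState(gc);

UIImage *tinted = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();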