Image Context produces artifacts when compositing UIImages - objective-c

I'm trying to overlay a custom semi-transparent image over a base image. The overlay image is stretchable and created like this:
[[UIImage imageNamed:@"overlay.png"] stretchableImageWithLeftCapWidth:5.0 topCapHeight:5.0]
Then I pass that off to a method that overlays it onto the background image for a button:
- (void)overlayImage:(UIImage *)overlay forState:(UIControlState)state {
    UIImage *baseImage = [self backgroundImageForState:state];
    CGRect frame = CGRectZero;
    frame.size = baseImage.size;
    // create a new image context
    UIGraphicsBeginImageContext(baseImage.size);
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // clear context
    CGContextClearRect(context, frame);
    // draw images
    [baseImage drawInRect:frame];
    [overlay drawInRect:frame]; // blendMode:kCGBlendModeNormal alpha:1.0];
    // get UIImage
    UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up context
    UIGraphicsEndImageContext();
    [self setBackgroundImage:overlaidImage forState:state];
}
The resulting overlaidImage looks mostly correct: it is the correct size, the alpha is blended correctly, and so on. However, it has vertical artifacts/noise.
(Example image: http://acaciatreesoftware.com/img/UIImage-artifacts.png)
I tried clearing the context first and then turning off PNG compression, which reduces the artifacting somewhat (and eliminates it entirely on non-stretched images, I think).
Does anyone know a method for drawing stretchable UIImages without this sort of artifacting?

So the answer is: Don't do this. Instead you can paint your overlay procedurally. Like so:
- (void)overlayWithColor:(UIColor *)overlayColor forState:(UIControlState)state {
    UIImage *baseImage = [self backgroundImageForState:state];
    CGRect frame = CGRectZero;
    frame.size = baseImage.size;
    // create a new image context
    UIGraphicsBeginImageContext(baseImage.size);
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // draw background image
    [baseImage drawInRect:frame];
    // overlay color
    CGContextSetFillColorWithColor(context, [overlayColor CGColor]);
    CGContextSetBlendMode(context, kCGBlendModeSourceAtop);
    CGContextFillRect(context, frame);
    // get UIImage
    UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up context
    UIGraphicsEndImageContext();
    [self setBackgroundImage:overlaidImage forState:state];
}
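As a usage note, here is a hypothetical call site, assuming the method lives in a UIButton subclass or category (it calls -backgroundImageForState: and -setBackgroundImage:forState: on self); myButton and the overlay color are illustrative only:
// Hypothetical call site; myButton and the tint color are illustrative
UIColor *tint = [UIColor colorWithWhite:1.0 alpha:0.3]; // semi-transparent highlight
[myButton overlayWithColor:tint forState:UIControlStateNormal];
[myButton overlayWithColor:tint forState:UIControlStateHighlighted];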

Are you being too miserly with your original image, forcing it to stretch rather than shrink? I've found the best results with images that match the target aspect ratio and are scaled down rather than up. That might not solve your problem, though.

Related

UIScrollView changes UIButton's image quality on zoom out

I've got a large display area that can be panned and zoomed to view different objects. The problem I'm running into is that the quality of a UIButton's PNG image becomes somewhat degraded when I'm zoomed out (it is back to normal when I zoom back in to 100%). It almost looks as if the image becomes oversharpened. Is this something I'm going to have to live with, or is there a way to get rid of this grainy edge effect? The aspect ratio of the images is always 1:1, by the way.
I was able to solve this by using the answer found here in my scrollViewDidEndZooming method. Here is my code:
Resize function
- (UIImage *)resizeImage:(UIImage*)image newSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);
    // Draw into the context; this scales the image
    CGContextDrawImage(context, newRect, imageRef);
    // Get the resized image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();
    return newImage;
}
ScrollView Method
(Widget is a UIViewController subclass which contains a button and a "widgetImage" which stores the full resolution of the image that the button should display)
- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale
{
    for (Widget *theWidget in widgets) {
        UIImage *newScaledImage = [self resizeImage:theWidget.widgetImage newSize:CGSizeMake(theWidget.view.frame.size.width * scale, theWidget.view.frame.size.height * scale)];
        [theWidget.widgetButton setImage:newScaledImage forState:UIControlStateNormal];
        // theWidget.widgetButton.currentImage = newScaledImage;
    }
}

iPhone 4 iOS: take a screenshot of the view without the NavBar and TabBar?

I got this code to be able to take a screenshot of the view.
UIGraphicsBeginImageContext(scrollView.bounds.size);
[scrollView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData * data = UIImagePNGRepresentation(image);
However, even if I set the context to be 320x480, parts of the scroll view are still not shown. The view that the scrollview manages can perfectly fit within the 320x480 frame, but parts of it are covered by the status bar, navBar and TabBar.
I would like to take full screen (320x480) screenshot of the view with the parts of the view, covered by status bar, TabBar and NavBar visible. Any pointers on how to do this?
An extra question, which may be related: the resulting image uses 1x scale and looks very blurry on the Retina display, which normally renders from a larger image scaled down. This means I'd need to render a 640x960 screenshot to reproduce the original crisp quality. How would I go about doing that?
Thank you!
I found the following on this site: http://www.icodeblog.com/2009/07/27/1188/
UIGraphicsBeginImageContext(YourView.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
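For the Retina follow-up above, here is a minimal sketch, assuming iOS 4 or later is available: passing 0.0 as the scale makes the context match the screen scale, so the capture comes out at 640x960 pixels on a Retina display.
// Minimal sketch: a scale of 0.0 means "use the device's screen scale"
if (NULL != UIGraphicsBeginImageContextWithOptions)
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
else
    UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *retinaImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();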
You may also want to check out this (Apple's example of how to make a screenshot): http://developer.apple.com/library/ios/#qa/qa1703/_index.html
First make a screenshot of the whole screen:
// Create a graphics context with the target size
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0);
else
    UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
    if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
    {
        // -renderInContext: renders in the coordinate space of the layer,
        // so we must first apply the layer's geometry to the graphics context
        CGContextSaveGState(context);
        // Center the context around the window's anchor point
        CGContextTranslateCTM(context, [window center].x, [window center].y);
        // Apply the window's transform about the anchor point
        CGContextConcatCTM(context, [window transform]);
        // Offset by the portion of the bounds left of and above the anchor point
        CGContextTranslateCTM(context,
                              -[window bounds].size.width * [[window layer] anchorPoint].x,
                              -[window bounds].size.height * [[window layer] anchorPoint].y);
        // Render the layer hierarchy to the current context
        [[window layer] renderInContext:context];
        // Restore the context
        CGContextRestoreGState(context);
    }
}
// Retrieve the screenshot image
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
Then crop it to the right size
CGRect smallBounds = CGRectMake(0, 64, 320, 372); // you should remove the hard-coded numbers
CGImageRef subImageRef = CGImageCreateWithImageInRect(screenshot.CGImage, smallBounds);
UIImage *cropped = [UIImage imageWithCGImage:subImageRef];
CGImageRelease(subImageRef);
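One caveat worth noting: CGImageCreateWithImageInRect works in pixel coordinates, while the rect above is in points. A sketch that scales the crop rect by the screenshot's scale, assuming screenshot was created with the Retina-aware context above:
// Scale the point-based crop rect into pixels before cropping the CGImage
CGFloat s = screenshot.scale;
CGRect pixelRect = CGRectMake(0.0 * s, 64.0 * s, 320.0 * s, 372.0 * s);
CGImageRef croppedRef = CGImageCreateWithImageInRect(screenshot.CGImage, pixelRect);
UIImage *croppedRetina = [UIImage imageWithCGImage:croppedRef
                                             scale:s
                                       orientation:UIImageOrientationUp];
CGImageRelease(croppedRef);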

UIButton image rotation issue

I have a [UIButton buttonWithType:UIButtonTypeCustom] that has an image (or a background image - same problem) created by [UIImage imageWithContentsOfFile:] pointing to a JPG file taken by the camera and saved in the documents folder by the application.
If I define the image for UIControlStateNormal only, then when I touch the button the image gets darker as expected, but it also rotates either 90 degrees or 180 degrees. When I remove my finger it returns to normal.
This does not happen if I use the same image for UIControlStateHighlighted, but then I lose the touch indication (darker image).
This only happens with an image read from a file. It does not happen with [UIImage imageNamed:].
I tried saving the file in PNG format rather than as JPG. In this case the image shows up in the wrong orientation to begin with, and is not rotated again when touched. This is not a good solution anyhow because the PNG is far too large and slow to handle.
Is this a bug or am I doing something wrong?
I was not able to find a proper solution to this and I needed a quick workaround. Below is a function which, given a UIImage, returns a new image which is darkened with a dark alpha fill. The context fill commands could be replaced with other draw or fill routines to provide different types of darkening.
This is unoptimized and was made with minimal knowledge of the graphics API.
You can use this function to set the UIControlStateHighlighted state image so that at least it will be darker.
+ (UIImage *)darkenedImageWithImage:(UIImage *)sourceImage
{
    UIImage *darkenedImage = nil;
    if (sourceImage)
    {
        // drawing prep
        CGImageRef source = sourceImage.CGImage;
        CGRect drawRect = CGRectMake(0.f,
                                     0.f,
                                     sourceImage.size.width,
                                     sourceImage.size.height);
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     drawRect.size.width,
                                                     drawRect.size.height,
                                                     CGImageGetBitsPerComponent(source),
                                                     CGImageGetBytesPerRow(source),
                                                     CGImageGetColorSpace(source),
                                                     CGImageGetBitmapInfo(source)
                                                     );
        // draw given image and then darken fill it
        CGContextDrawImage(context, drawRect, source);
        CGContextSetBlendMode(context, kCGBlendModeOverlay);
        CGContextSetRGBFillColor(context, 0.f, 0.f, 0.f, 0.5f);
        CGContextFillRect(context, drawRect);
        // get context result
        CGImageRef darkened = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        // convert to UIImage and preserve original orientation
        darkenedImage = [UIImage imageWithCGImage:darkened
                                            scale:1.f
                                      orientation:sourceImage.imageOrientation];
        CGImageRelease(darkened);
    }
    return darkenedImage;
}
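A hypothetical call site follows; the snippet doesn't show which class declares the method, so it is assumed here to be a UIImage category, and button and photoPath are illustrative only:
// Hypothetical usage; button and photoPath are illustrative
UIImage *photo = [UIImage imageWithContentsOfFile:photoPath];
[button setImage:photo forState:UIControlStateNormal];
[button setImage:[UIImage darkenedImageWithImage:photo] forState:UIControlStateHighlighted];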
To fix the rotation itself, you need an additional normalization function like this:
public extension UIImage {
    func normalizedImage() -> UIImage! {
        if self.imageOrientation == .Up {
            return self
        }
        UIGraphicsBeginImageContextWithOptions(self.size, false, self.scale)
        self.drawInRect(CGRectMake(0, 0, self.size.width, self.size.height))
        let normalized = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return normalized
    }
}
Then you can use it like this:
self.photoButton.sd_setImageWithURL(avatarURL,
                                    forState: .Normal,
                                    placeholderImage: UIImage(named: "user_avatar_placeholder")) {
    [weak self] (image, error, cacheType, url) in
    guard let strongSelf = self else {
        return
    }
    strongSelf.photoButton.setImage(image.normalizedImage(), forState: .Normal)
}

Image Cropping API for iOS

Is there any image-cropping API for Objective-C that crops images dynamically in an Xcode project? Please share some tricks or techniques for cropping camera images on the iPhone.
You can use the simple code below to crop an image. You have to pass the image and a CGRect that defines the cropping area. Here, I crop the image so that I get the center part of the original image, and the returned image is square.
// Returns largest possible centered cropped image.
- (UIImage *)centerCropImage:(UIImage *)image
{
    // Use smallest side length as crop square length
    CGFloat squareLength = MIN(image.size.width, image.size.height);
    // Center the crop area
    CGRect clippedRect = CGRectMake((image.size.width - squareLength) / 2, (image.size.height - squareLength) / 2, squareLength, squareLength);
    // Crop logic
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], clippedRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}
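A hypothetical call site, for example cropping a camera image inside the UIImagePickerController delegate callback (this assumes the delegate also implements -centerCropImage: from above):
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    // assumes this delegate also implements -centerCropImage: shown above
    UIImage *picked = [info objectForKey:UIImagePickerControllerOriginalImage];
    UIImage *square = [self centerCropImage:picked];
    // use the square image...
    [picker dismissViewControllerAnimated:YES completion:nil];
}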
EDIT - Swift Version
let imageView = UIImageView(image: image)
imageView.contentMode = .scaleAspectFill
imageView.clipsToBounds = true
All these solutions seem quite complicated, and many of them actually degrade the quality of the image.
You can do it much more simply using UIImageView's out-of-the-box properties.
Objective-C
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
[self.imageView setClipsToBounds:YES];
[self.imageView setImage:img];
This will crop your image based on the dimensions you've set for your UIImageView (I've called mine imageView here).
It's that simple and works much better than the other solutions.
You can use the Core Graphics framework to crop images dynamically.
Here is an example code excerpt for dynamic image cropping. I hope this will be helpful for you.
- (void)drawMaskLineSegmentTo:(CGPoint)ptTo withMaskWidth:(CGFloat)maskWidth inContext:(NXMaskDrawContext)context {
    if (context == nil)
        return;
    if (context.count <= 0) {
        [context addObject:[NSValue valueWithCGPoint:ptTo]];
        return;
    }
    CGPoint ptFrom = [context.lastObject CGPointValue];
    UIGraphicsBeginImageContext(self.maskImage.size);
    [self.maskImage drawInRect:CGRectMake(0, 0, self.maskImage.size.width, self.maskImage.size.height)];
    CGContextRef graphicsContext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(graphicsContext, kCGBlendModeCopy);
    CGContextSetRGBStrokeColor(graphicsContext, 1, 1, 1, 1);
    CGContextSetLineWidth(graphicsContext, maskWidth);
    CGContextSetLineCap(graphicsContext, kCGLineCapRound);
    CGContextMoveToPoint(graphicsContext, ptFrom.x, ptFrom.y);
    CGContextAddLineToPoint(graphicsContext, ptTo.x, ptTo.y);
    CGContextStrokePath(graphicsContext);
    self.maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIGraphicsBeginImageContext(self.displayableMaskImage.size);
    [self.displayableMaskImage drawInRect:CGRectMake(0, 0, self.displayableMaskImage.size.width, self.displayableMaskImage.size.height)];
    graphicsContext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(graphicsContext, kCGBlendModeCopy);
    CGContextSetStrokeColorWithColor(graphicsContext, self.displayableMaskColor.CGColor);
    CGContextSetLineWidth(graphicsContext, maskWidth);
    CGContextSetLineCap(graphicsContext, kCGLineCapRound);
    CGContextMoveToPoint(graphicsContext, ptFrom.x, ptFrom.y);
    CGContextAddLineToPoint(graphicsContext, ptTo.x, ptTo.y);
    CGContextStrokePath(graphicsContext);
    self.displayableMaskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [context addObject:[NSValue valueWithCGPoint:ptTo]];
}
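NXMaskDrawContext isn't defined in the snippet; judging by how it is used (count, addObject:, lastObject wrapping CGPoints) it appears to be a mutable array of points. A sketch of that assumption plus a hypothetical call site from touch handling:
// Assumed typedef (not part of the original snippet):
typedef NSMutableArray *NXMaskDrawContext;

// Hypothetical call site; maskDrawContext is an assumed property holding the current stroke
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint pt = [[touches anyObject] locationInView:self.view];
    [self drawMaskLineSegmentTo:pt withMaskWidth:20.0 inContext:self.maskDrawContext];
}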
Xcode 5, iOS 7, and 4-inch screen example: here is an open-source example, SimpleImageCropEditor (project zip and source code example). You can load the Image Crop Editor as a modal view controller and reuse it. Look at the code and please leave constructive comments about whether this example code answers the question "Image Cropping API for iOS".
The example Objective-C source code demonstrates the use of UIImagePickerController, @protocol, UIActionSheet, UIScrollView, UINavigationController, MFMailComposeViewController, and UIGestureRecognizer.

Creating a gradient fill for text using [UIColor colorWithPatternImage:]

I want to create a gradient for the fill color of my text. Currently I am doing it by setting the color of a UILabel's text as
UIImage *image = [UIImage imageNamed:@"GradientFillImage.png"];
myLabel.textColor = [UIColor colorWithPatternImage:image];
Where GradientFillImage.png is a simple image file with a linear gradient painted on it.
This works fine until I want to resize the font. Since the image file is of constant dimensions and does not resize when I resize the font, the gradient fill for the font gets messed up.
How do I create a custom size pattern image and apply it as a fill pattern for text?
I've just finished a UIColor class extension that makes this a 1 line + block thing.
https://github.com/bigkm/UIColor-BlockPattern
CGRect rect = CGRectMake(0.0, 0.0, 10.0, 10.0);
[UIColor colorPatternWithSize:rect.size andDrawingBlock:[[^(CGContextRef c) {
    UIImage *image = [UIImage imageNamed:@"FontGradientPink.png"];
    CGContextDrawImage(c, rect, [image CGImage]);
} copy] autorelease]];
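Since the drawing block receives the pattern size, a hedged sketch of regenerating the pattern at the label's current size, assuming the category method returns a UIColor (check the repository for the exact signature) and myLabel is the target label:
// Hypothetical usage: regenerate the pattern when the font/frame changes,
// so the gradient is drawn at the label's current size.
CGSize patternSize = myLabel.bounds.size;
myLabel.textColor = [UIColor colorPatternWithSize:patternSize andDrawingBlock:[[^(CGContextRef c) {
    UIImage *gradient = [UIImage imageNamed:@"FontGradientPink.png"];
    CGContextDrawImage(c, CGRectMake(0.0, 0.0, patternSize.width, patternSize.height), [gradient CGImage]);
} copy] autorelease]];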
OK, I figured it out. Basically, we can override drawTextInRect: and use our own pattern to color the fill. The advantage of doing this is that we can resize the image into our pattern frame.
First we create a CGPattern object and define a callback to draw the pattern. We also pass the size of the label as a parameter in the callback. We then use the pattern that is drawn in the callback and set it as the fill color of the text:
- (void)drawTextInRect:(CGRect)rect
{
    // set gradient as a pattern fill
    CGRect info[1] = {rect};
    static const CGPatternCallbacks callbacks = {0, &drawImagePattern, NULL};
    CGAffineTransform transform = CGAffineTransformMakeScale(1.0, -1.0);
    CGPatternRef pattern = CGPatternCreate((void *) info, rect, transform, 10.0, rect.size.height, kCGPatternTilingConstantSpacing, true, &callbacks);
    CGColorSpaceRef patternSpace = CGColorSpaceCreatePattern(NULL);
    CGFloat alpha = 1.0;
    CGColorRef patternColorRef = CGColorCreateWithPattern(patternSpace, pattern, &alpha);
    CGColorSpaceRelease(patternSpace);
    CGPatternRelease(pattern);
    self.textColor = [UIColor colorWithCGColor:patternColorRef];
    CGColorRelease(patternColorRef); // UIColor retains the CGColor, so release our reference
    self.shadowOffset = CGSizeZero;
    [super drawTextInRect:rect];
}
The callback draws the image into the context. The image is resized as per the frame size that is passed into the callback.
void drawImagePattern(void *info, CGContextRef context)
{
    UIImage *image = [UIImage imageNamed:@"FontGradientPink.png"];
    CGImageRef imageRef = [image CGImage];
    CGRect *rect = info;
    CGContextDrawImage(context, rect[0], imageRef);
}