UIButton image rotation issue

I have a [UIButton buttonWithType:UIButtonTypeCustom] that has an image (or a background image - same problem) created by [UIImage imageWithContentsOfFile:] pointing to a JPG file taken by the camera and saved in the documents folder by the application.
If I define the image for UIControlStateNormal only, then when I touch the button the image gets darker as expected, but it also rotates either 90 degrees or 180 degrees. When I remove my finger it returns to normal.
This does not happen if I use the same image for UIControlStateHighlighted, but then I lose the touch indication (darker image).
This only happens with an image read from a file. It does not happen with [UIImage imageNamed:].
I tried saving the file in PNG format rather than as JPG. In this case the image shows up in the wrong orientation to begin with, and is not rotated again when touched. This is not a good solution anyhow because the PNG is far too large and slow to handle.
Is this a bug or am I doing something wrong?

I was not able to find a proper solution to this and I needed a quick workaround. Below is a function which, given a UIImage, returns a new image which is darkened with a dark alpha fill. The context fill commands could be replaced with other draw or fill routines to provide different types of darkening.
This is unoptimized and was written with minimal knowledge of the graphics API.
You can use this function to set the UIControlStateHighlighted state image so that at least it will be darker.
+ (UIImage *)darkenedImageWithImage:(UIImage *)sourceImage
{
    UIImage *darkenedImage = nil;
    if (sourceImage)
    {
        // drawing prep
        CGImageRef source = sourceImage.CGImage;
        CGRect drawRect = CGRectMake(0.f,
                                     0.f,
                                     sourceImage.size.width,
                                     sourceImage.size.height);
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     drawRect.size.width,
                                                     drawRect.size.height,
                                                     CGImageGetBitsPerComponent(source),
                                                     CGImageGetBytesPerRow(source),
                                                     CGImageGetColorSpace(source),
                                                     CGImageGetBitmapInfo(source)
                                                     );
        // draw given image and then darken fill it
        CGContextDrawImage(context, drawRect, source);
        CGContextSetBlendMode(context, kCGBlendModeOverlay);
        CGContextSetRGBFillColor(context, 0.f, 0.f, 0.f, 0.5f);
        CGContextFillRect(context, drawRect);
        // get context result
        CGImageRef darkened = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        // convert to UIImage and preserve original orientation
        darkenedImage = [UIImage imageWithCGImage:darkened
                                            scale:1.f
                                      orientation:sourceImage.imageOrientation];
        CGImageRelease(darkened);
    }
    return darkenedImage;
}
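A minimal wiring sketch of how you might use it (hedged: photoPath and MyImageUtils are placeholder names for your own file path and whatever class hosts the method above):

// Hypothetical usage; photoPath and MyImageUtils are placeholder names.
UIImage *photo = [UIImage imageWithContentsOfFile:photoPath];
[button setImage:photo forState:UIControlStateNormal];
[button setImage:[MyImageUtils darkenedImageWithImage:photo]
        forState:UIControlStateHighlighted];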

To fix this, you need an additional normalization function like this:
public extension UIImage {
    func normalizedImage() -> UIImage! {
        if self.imageOrientation == .Up {
            return self
        }
        UIGraphicsBeginImageContextWithOptions(self.size, false, self.scale)
        self.drawInRect(CGRectMake(0, 0, self.size.width, self.size.height))
        let normalized = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return normalized
    }
}
Then you can use it like this:
self.photoButton.sd_setImageWithURL(avatarURL,
                                    forState: .Normal,
                                    placeholderImage: UIImage(named: "user_avatar_placeholder")) {
    [weak self] (image, error, cacheType, url) in
    guard let strongSelf = self else {
        return
    }
    strongSelf.photoButton.setImage(image.normalizedImage(), forState: .Normal)
}


How can I make a UIImage programmatically?

This isn't what you probably thought it was to begin with. I know how to use UIImages, but I now need to know how to create a "blank" UIImage using:
CGRect screenRect = [self.view bounds];
Well, those dimensions. Anyway, I want to know how I can create a UIImage with those dimensions colored all white. No actual images here.
Is this even possible? I am sure it is, but maybe I am wrong.
Edit
This needs to be a "white" image. Not a blank one. :)
You need to use CoreGraphics, as follows.
CGSize size = CGSizeMake(desiredWidth, desiredHeight);
UIGraphicsBeginImageContextWithOptions(size, YES, 0);
[[UIColor whiteColor] setFill];
UIRectFill(CGRectMake(0, 0, size.width, size.height));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The code creates a new Core Graphics image context with the options passed as parameters: size, opacity, and scale. By passing 0 for scale, iOS automatically chooses the appropriate value for the current device.
Then the fill color is set to [UIColor whiteColor], and the canvas is filled with that color using UIRectFill(), passing a rectangle that covers the whole canvas.
A UIImage is then created from the current context, and the context is closed. The image variable therefore contains a UIImage of the desired size, filled white.
Swift version:
extension UIImage {
    static func emptyImage(with size: CGSize) -> UIImage? {
        UIGraphicsBeginImageContext(size)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
If you want to draw just an empty image, you can use UIKit's UIGraphicsBeginImageContext() function.
UIGraphicsBeginImageContext(CGSizeMake(width, height));
CGContextAddRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height)); // this may not be necessary
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The code above assumes that you draw an image with a size of width x height. It adds a rectangle to the graphics context, but that may not be necessary; try it yourself. This is the way to go. :)
Or, if you want to create a snapshot of your current view, you would write code like:
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Don't forget to link the QuartzCore framework if you use the layer.
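In an older project that means adding QuartzCore to the linked frameworks and importing its umbrella header at the top of the file:

#import <QuartzCore/QuartzCore.h>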
Based on @HixField's and @Rudolf Adamkovič's answers, here's an extension which returns an optional, which I believe is the correct way to do this (correct me if I'm wrong!).
This extension allows you to create an empty UIImage of whatever size you need (up to the memory limit) with whatever fill color you want, which defaults to white. If you want the image to be the clear color, you would use something like the following:
let size = CGSize(width: 32.0, height: 32.0)
if let image = UIImage.imageWithSize(size: size, color: UIColor.clear) {
    // image was successfully created; do additional stuff with it here.
}
This is for Swift 3.x:
extension UIImage {
    static func imageWithSize(size: CGSize, color: UIColor = UIColor.white) -> UIImage? {
        var image: UIImage? = nil
        UIGraphicsBeginImageContext(size)
        if let context = UIGraphicsGetCurrentContext() {
            context.setFillColor(color.cgColor)
            context.addRect(CGRect(origin: CGPoint.zero, size: size))
            context.drawPath(using: .fill)
            image = UIGraphicsGetImageFromCurrentImageContext()
        }
        UIGraphicsEndImageContext()
        return image
    }
}
Using the latest UIGraphics APIs in Swift, this looks like the following (note the CGContextDrawPath call that is missing in the answer from user1834305; that is the reason it produces a transparent image):
static func imageWithSize(size: CGSize, color: UIColor) -> UIImage {
    UIGraphicsBeginImageContext(size)
    let context = UIGraphicsGetCurrentContext()
    CGContextSetFillColorWithColor(context, color.CGColor)
    CGContextAddRect(context, CGRect(x: 0, y: 0, width: size.width, height: size.height))
    CGContextDrawPath(context, .Fill)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
Same solution in C# for Xamarin.iOS :
UIGraphics.BeginImageContext(new CGSize(width, height));
var image = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();

Cocoa Touch - Adding texture with overlay view

I have a set of tiles as UIViews that have a programmable background color, and each one can be a different color. I want to add texture, like a side-lit bevel, to each one. Can this be done with an overlay view or by some other method?
I'm looking for suggestions that don't require a custom image file for each case.
This may help someone, although this was pieced together from other topics on SO.
To create a beveled tile image with an arbitrary color for normal and retina displays, I made a beveled image in Photoshop and set the saturation to zero, producing a grayscale image called tileBevel.png.
I also created one for the retina display (tileBevel@2x.png).
Here is the code:
+ (UIImage *)createTileWithColor:(UIColor *)tileColor {
    int pixelsHigh = 44;
    int pixelsWide = 46;
    UIImage *bottomImage;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] == 2.0) {
        pixelsHigh *= 2;
        pixelsWide *= 2;
        bottomImage = [UIImage imageNamed:@"tileBevel@2x.png"];
    }
    else {
        bottomImage = [UIImage imageNamed:@"tileBevel.png"];
    }
    CGImageRef theCGImage = NULL;
    CGContextRef tileBitmapContext = NULL;
    CGRect rectangle = CGRectMake(0, 0, pixelsWide, pixelsHigh);
    UIGraphicsBeginImageContext(rectangle.size);
    [bottomImage drawInRect:rectangle];
    tileBitmapContext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(tileBitmapContext, kCGBlendModeOverlay);
    CGContextSetFillColorWithColor(tileBitmapContext, tileColor.CGColor);
    CGContextFillRect(tileBitmapContext, rectangle);
    theCGImage = CGBitmapContextCreateImage(tileBitmapContext);
    UIGraphicsEndImageContext();
    UIImage *tileImage = [UIImage imageWithCGImage:theCGImage];
    CGImageRelease(theCGImage); // release the CGImageRef to avoid leaking it
    return tileImage;
}
This checks whether the retina display is in use, sizes the rectangle to draw in, picks the appropriate grayscale base image, sets the blend mode to overlay, and then draws a colored rectangle on top of the bottom image. All of this is done inside a graphics context bracketed by the UIGraphicsBeginImageContext and UIGraphicsEndImageContext calls, which set up the current context needed by UIImage's drawInRect: method. The Core Graphics functions take the context as a parameter, obtained by a call to UIGraphicsGetCurrentContext().
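A possible call site (hedged: the class name here is a placeholder for whatever class you put the method in):

// Hypothetical usage: tint the grayscale bevel with an arbitrary color.
UIImage *redTile = [TileFactory createTileWithColor:[UIColor redColor]];
UIImageView *tileView = [[UIImageView alloc] initWithImage:redTile];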
If you want to preserve the alpha channel of the source image, just add this to Jim's code before the fill rect:
// Apply mask
CGContextTranslateCTM(tileBitmapContext, 0, rectangle.size.height);
CGContextScaleCTM(tileBitmapContext, 1.0f, -1.0f);
CGContextClipToMask(tileBitmapContext, rectangle, bottomImage.CGImage);
A Swift 3 solution, essentially based on Jim's answer with Scriptease's addition, and some minor changes:
class func image(bottomImage: UIImage, topImage: UIImage, tileColor: UIColor) -> UIImage? {
    let pixelsHigh: CGFloat = bottomImage.size.height
    let pixelsWide: CGFloat = bottomImage.size.width
    let rectangle = CGRect(x: 0, y: 0, width: pixelsWide, height: pixelsHigh)
    UIGraphicsBeginImageContext(rectangle.size)
    bottomImage.draw(in: rectangle)
    if let tileBitmapContext = UIGraphicsGetCurrentContext() {
        tileBitmapContext.setBlendMode(.overlay)
        tileBitmapContext.setFillColor(tileColor.cgColor)
        // flip the coordinate system (translate, then scale) so the mask lines up;
        // without the translate the flipped rect lands off-canvas
        tileBitmapContext.translateBy(x: 0, y: rectangle.size.height)
        tileBitmapContext.scaleBy(x: 1.0, y: -1.0)
        tileBitmapContext.clip(to: rectangle, mask: bottomImage.cgImage!)
        tileBitmapContext.fill(rectangle)
        let theCGImage = tileBitmapContext.makeImage()
        UIGraphicsEndImageContext()
        if let theImage = theCGImage {
            return UIImage(cgImage: theImage)
        }
    }
    return nil
}

Image Cropping API for iOS

Is there an image-cropping API for Objective-C that crops images dynamically in an Xcode project? Please share any tricks or techniques for cropping camera images on the iPhone.
You can use the simple code below to crop an image. You pass in the image, and the method computes the CGRect that defines the cropping area. Here, I crop the image so that I get the center part of the original image and the returned image is square.
// Returns largest possible centered cropped image.
- (UIImage *)centerCropImage:(UIImage *)image
{
    // Use smallest side length as crop square length
    CGFloat squareLength = MIN(image.size.width, image.size.height);
    // Center the crop area
    CGRect clippedRect = CGRectMake((image.size.width - squareLength) / 2,
                                    (image.size.height - squareLength) / 2,
                                    squareLength,
                                    squareLength);
    // Crop logic
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], clippedRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}
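If you need an arbitrary crop rectangle rather than the centered square, a minimal sketch along the same lines might look like this (assuming the image has the default Up orientation and a scale of 1; camera images often need their orientation normalized first):

// Minimal sketch: crop to an arbitrary rect in pixel coordinates.
// Assumes UIImageOrientationUp and scale 1; adjust the rect otherwise.
- (UIImage *)cropImage:(UIImage *)image toRect:(CGRect)cropRect
{
    CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef);
    return cropped;
}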
EDIT - Swift Version
let imageView = UIImageView(image: image)
imageView.contentMode = .scaleAspectFill
imageView.clipsToBounds = true
All these solutions seem quite complicated, and many of them actually degrade the quality of the image.
You can do something much simpler using UIImageView's out-of-the-box properties.
Objective-C
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
[self.imageView setClipsToBounds:YES];
[self.imageView setImage:img];
This will crop your image based on the dimensions you've set for your UIImageView (I've called mine imageView here).
It's that simple, and it works much better than the other solutions.
You can use the Core Graphics framework to crop images dynamically.
Here is an example code snippet for dynamic image cropping. I hope it will be helpful for you.
- (void)drawMaskLineSegmentTo:(CGPoint)ptTo withMaskWidth:(CGFloat)maskWidth inContext:(NXMaskDrawContext)context
{
    if (context == nil)
        return;
    if (context.count <= 0) {
        [context addObject:[NSValue valueWithCGPoint:ptTo]];
        return;
    }
    CGPoint ptFrom = [context.lastObject CGPointValue];
    UIGraphicsBeginImageContext(self.maskImage.size);
    [self.maskImage drawInRect:CGRectMake(0, 0, self.maskImage.size.width, self.maskImage.size.height)];
    CGContextRef graphicsContext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(graphicsContext, kCGBlendModeCopy);
    CGContextSetRGBStrokeColor(graphicsContext, 1, 1, 1, 1);
    CGContextSetLineWidth(graphicsContext, maskWidth);
    CGContextSetLineCap(graphicsContext, kCGLineCapRound);
    CGContextMoveToPoint(graphicsContext, ptFrom.x, ptFrom.y);
    CGContextAddLineToPoint(graphicsContext, ptTo.x, ptTo.y);
    CGContextStrokePath(graphicsContext);
    self.maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIGraphicsBeginImageContext(self.displayableMaskImage.size);
    [self.displayableMaskImage drawInRect:CGRectMake(0, 0, self.displayableMaskImage.size.width, self.displayableMaskImage.size.height)];
    graphicsContext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(graphicsContext, kCGBlendModeCopy);
    CGContextSetStrokeColorWithColor(graphicsContext, self.displayableMaskColor.CGColor);
    CGContextSetLineWidth(graphicsContext, maskWidth);
    CGContextSetLineCap(graphicsContext, kCGLineCapRound);
    CGContextMoveToPoint(graphicsContext, ptFrom.x, ptFrom.y);
    CGContextAddLineToPoint(graphicsContext, ptTo.x, ptTo.y);
    CGContextStrokePath(graphicsContext);
    self.displayableMaskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [context addObject:[NSValue valueWithCGPoint:ptTo]];
}
Xcode 5, iOS 7, and 4-inch screen example: here is an open-source example of a SimpleImageCropEditor (project zip and source code example). You can load the image crop editor as a modal view controller and reuse it. Look at the code, and please leave constructive comments on whether this example code answers the question "Image Cropping API for iOS".
The example Objective-C source demonstrates the use of UIImagePickerController, @protocol, UIActionSheet, UIScrollView, UINavigationController, MFMailComposeViewController, and UIGestureRecognizer.

AVFoundation crop captured still image according to the preview aspect ratio

My question is mostly similar to this one:
Cropping image captured by AVCaptureSession
I have an application which uses AVFoundation for capturing still images. My AVCaptureVideoPreviewLayer has AVLayerVideoGravityResizeAspectFill video gravity, so the preview picture shown to the user is cropped at the top and bottom.
When the user presses the "Capture" button, the image actually captured differs from the preview picture shown to the user. My question is: how do I crop the captured image accordingly?
Thanks in advance.
I used the UIImage+Resize category provided here, plus some new methods I wrote to do the job. I reformatted some code to look better; it's not tested, but it should work. :))
- (UIImage *)cropAndResizeAspectFillWithSize:(CGSize)targetSize
                        interpolationQuality:(CGInterpolationQuality)quality {
    UIImage *outputImage = nil;
    UIImage *imageForProcessing = self;
    // crop center square (AspectFill)
    if (self.size.width != self.size.height) {
        CGFloat shorterLength = 0;
        CGPoint origin = CGPointZero;
        if (self.size.width > self.size.height) {
            origin.x = (self.size.width - self.size.height) / 2;
            shorterLength = self.size.height;
        }
        else {
            origin.y = (self.size.height - self.size.width) / 2;
            shorterLength = self.size.width;
        }
        imageForProcessing = [imageForProcessing normalizedImage];
        imageForProcessing = [imageForProcessing croppedImage:CGRectMake(origin.x, origin.y, shorterLength, shorterLength)];
    }
    outputImage = [imageForProcessing resizedImage:targetSize interpolationQuality:quality];
    return outputImage;
}

// fix image orientation, which may wrongly rotate the output.
- (UIImage *)normalizedImage {
    if (self.imageOrientation == UIImageOrientationUp) return self;
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    [self drawInRect:(CGRect){0, 0, self.size}];
    UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalizedImage;
}
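A possible call site (hedged: capturedImage stands for whatever UIImage you obtained from the still-image output, and these methods are assumed to live in the UIImage category above):

// Hypothetical usage after capturing a still image.
UIImage *squareImage = [capturedImage cropAndResizeAspectFillWithSize:CGSizeMake(320.f, 320.f)
                                                 interpolationQuality:kCGInterpolationHigh];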

Image Context produces artifacts when compositing UIImages

I'm trying to overlay a custom semi-transparent image over a base image. The overlay image is stretchable and created like this:
[[UIImage imageNamed:@"overlay.png"] stretchableImageWithLeftCapWidth:5.0 topCapHeight:5.0]
Then I pass that off to a method that overlays it onto the background image for a button:
- (void)overlayImage:(UIImage *)overlay forState:(UIControlState)state {
    UIImage *baseImage = [self backgroundImageForState:state];
    CGRect frame = CGRectZero;
    frame.size = baseImage.size;
    // create a new image context
    UIGraphicsBeginImageContext(baseImage.size);
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // clear context
    CGContextClearRect(context, frame);
    // draw images
    [baseImage drawInRect:frame];
    [overlay drawInRect:frame]; // blendMode:kCGBlendModeNormal alpha:1.0];
    // get UIImage
    UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up context
    UIGraphicsEndImageContext();
    [self setBackgroundImage:overlaidImage forState:state];
}
The resulting overlaidImage looks mostly correct, it is the correct size, the alpha is blended correctly, etc. however it has vertical artifacts/noise.
(example at http://acaciatreesoftware.com/img/UIImage-artifacts.png)
I tried clearing the context first and then turning off PNG compression, which reduces the artifacts somewhat (and eliminates them on non-stretched images, I think).
Does anyone know a method for drawing stretchable UIImages without this sort of artifacting happening?
So the answer is: Don't do this. Instead you can paint your overlay procedurally. Like so:
- (void)overlayWithColor:(UIColor *)overlayColor forState:(UIControlState)state {
    UIImage *baseImage = [self backgroundImageForState:state];
    CGRect frame = CGRectZero;
    frame.size = baseImage.size;
    // create a new image context
    UIGraphicsBeginImageContext(baseImage.size);
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // draw background image
    [baseImage drawInRect:frame];
    // overlay color
    CGContextSetFillColorWithColor(context, [overlayColor CGColor]);
    CGContextSetBlendMode(context, kCGBlendModeSourceAtop);
    CGContextFillRect(context, frame);
    // get UIImage
    UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up context
    UIGraphicsEndImageContext();
    [self setBackgroundImage:overlaidImage forState:state];
}
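Assuming the method lives in a UIButton category (as its use of backgroundImageForState: suggests), a possible call site is:

// Hypothetical usage: darken the highlighted state with translucent black.
[button overlayWithColor:[[UIColor blackColor] colorWithAlphaComponent:0.3f]
                forState:UIControlStateHighlighted];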
Are you being too miserly with your original image, forcing it to stretch rather than shrink? I've found the best results come from images that fit the same aspect ratio and are reduced in size. That might not solve your problem, though.