Invert colors of image with transparent background - cocoa-touch

I have a UIImage with a transparent background. If I invert the colors with the following method, the transparent background turns white and then black.
How can I keep the transparent pixels from being inverted?
- (UIImage *)getImageWithInvertedPixelsOfImage:(UIImage *)image {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, 2);
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeCopy);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeDifference);
    CGContextSetFillColorWithColor(UIGraphicsGetCurrentContext(), [UIColor whiteColor].CGColor);
    CGContextFillRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, image.size.width, image.size.height));
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

This is how I solved it.
It is the code you can find in other answers, but with the current image masked by itself, in a flipped context (to transform from CG* coords to UI* coords).
- (UIImage *)invertImage:(UIImage *)originalImage
{
    UIGraphicsBeginImageContext(originalImage.size);
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeCopy);
    CGRect imageRect = CGRectMake(0, 0, originalImage.size.width, originalImage.size.height);
    [originalImage drawInRect:imageRect];

    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeDifference);
    // translate/flip the graphics context (for transforming from CG* coords to UI* coords)
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0, originalImage.size.height);
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);

    // mask the image with itself, so the difference fill only touches non-transparent pixels
    CGContextClipToMask(UIGraphicsGetCurrentContext(), imageRect, originalImage.CGImage);
    CGContextSetFillColorWithColor(UIGraphicsGetCurrentContext(), [UIColor whiteColor].CGColor);
    CGContextFillRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, originalImage.size.width, originalImage.size.height));

    UIImage *returnImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return returnImage;
}

I created a Pod from @maggix's answer:
pod 'UIImage+InvertedImage'

This is a Swift 4.0 version of @maggix's answer:
func invertImage(with originalImage: UIImage) -> UIImage? {
    var image: UIImage?
    UIGraphicsBeginImageContext(originalImage.size)
    if let context = UIGraphicsGetCurrentContext() {
        context.setBlendMode(.copy)
        let imageRect = CGRect(x: 0, y: 0, width: originalImage.size.width, height: originalImage.size.height)
        originalImage.draw(in: imageRect)

        context.setBlendMode(.difference)
        // translate/flip the graphics context (for transforming from CG* coords to UI* coords)
        context.translateBy(x: 0, y: originalImage.size.height)
        context.scaleBy(x: 1.0, y: -1.0)

        // mask the image with itself, so the difference fill only touches non-transparent pixels
        context.clip(to: imageRect, mask: originalImage.cgImage!)
        context.setFillColor(UIColor.white.cgColor)
        context.fill(CGRect(x: 0, y: 0, width: originalImage.size.width, height: originalImage.size.height))

        image = UIGraphicsGetImageFromCurrentImageContext()
    }
    UIGraphicsEndImageContext()
    return image
}
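A hypothetical call site (the image and image view names are illustrative):

// Invert a logo that has a transparent background;
// transparent pixels stay transparent.
if let inverted = invertImage(with: logoImage) {
    logoImageView.image = inverted
}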

iOS: create a darker version of UIImage and leave transparent pixels unchanged?

I found
Create new UIImage by adding shadow to existing UIImage
and
UIImage, is there an easy way to make it darker or all black
But the selected answers do not work for me.
I have a UIImage which may have some transparent pixels in it, and I need to create a new UIImage with the non-transparent pixels darkened. Is there any way to do this? I was thinking of using UIBezierPath, but I don't know how to apply it to only the non-transparent pixels.
This is the class I use to color images even if they are transparent.
+ (UIImage *)colorizeImage:(UIImage *)image withColor:(UIColor *)color {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect area = CGRectMake(0, 0, image.size.width, image.size.height);

    // flip the context so the CGImage draws right side up
    CGContextScaleCTM(context, 1, -1);
    CGContextTranslateCTM(context, 0, -area.size.height);

    // fill only the non-transparent pixels with the tint color
    CGContextSaveGState(context);
    CGContextClipToMask(context, area, image.CGImage);
    [color set];
    CGContextFillRect(context, area);
    CGContextRestoreGState(context);

    // multiply the original image back over the tint
    CGContextSetBlendMode(context, kCGBlendModeMultiply);
    CGContextDrawImage(context, area, image.CGImage);

    UIImage *colorizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return colorizedImage;
}
To darken the image, you would pass the method a black or gray UIColor with reduced alpha.
How about trying a Core Image filter?
You could use the CIColorControls filter and lower the input brightness to darken the image.
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:sourceImage]; // your input image
CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
[filter setValue:inputImage forKey:@"inputImage"];
// negative inputBrightness values darken, positive values brighten
[filter setValue:[NSNumber numberWithFloat:-0.5] forKey:@"inputBrightness"];
// Your output image
UIImage *outputImage = [UIImage imageWithCGImage:[context createCGImage:filter.outputImage fromRect:filter.outputImage.extent]];
Read more about the CIFilter parameters here:
http://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html%23//apple_ref/doc/filter/ci/CIColorControls
Here's a quick Swift version (pre-Swift 3 syntax), using a CIExposureAdjust CIFilter :)
// Get the original image and set up the CIExposureAdjust filter
guard let originalImage = UIImage(named: "myImage"),
    let inputImage = CIImage(image: originalImage),
    let filter = CIFilter(name: "CIExposureAdjust") else { return }
// The inputEV value on the CIFilter adjusts exposure (negative values darken, positive values brighten)
filter.setValue(inputImage, forKey: "inputImage")
filter.setValue(-2.0, forKey: "inputEV")
// Break early if the filter was not a success (.outputImage is optional in Swift)
guard let filteredImage = filter.outputImage else { return }
let context = CIContext(options: nil)
let outputImage = UIImage(CGImage: context.createCGImage(filteredImage, fromRect: filteredImage.extent))
myImageView.image = outputImage // use the filtered UIImage as required.
Here's David's answer in Swift 5:
class func colorizeImage(_ image: UIImage, with color: UIColor) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext(), let cgImage = image.cgImage else { return nil }
    let area = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)

    // flip the context so the CGImage draws right side up
    context.scaleBy(x: 1, y: -1)
    context.translateBy(x: 0, y: -area.size.height)

    // fill only the non-transparent pixels with the tint color
    context.saveGState()
    context.clip(to: area, mask: cgImage)
    color.set()
    context.fill(area)
    context.restoreGState()

    // multiply the original image back over the tint
    context.setBlendMode(.multiply)
    context.draw(cgImage, in: area)

    return UIGraphicsGetImageFromCurrentImageContext()
}
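As a usage sketch for the darkening tip above (the enclosing class name ImageColorizer is hypothetical):

// Darken only the non-transparent pixels by colorizing with
// semi-transparent black; the multiply blend preserves detail.
let darkened = ImageColorizer.colorizeImage(sourceImage, with: UIColor.black.withAlphaComponent(0.5))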

CoreGraphics draw an image on a white canvas

I have a source image of variable width and height, which I must show in a fullscreen iPad UIImageView, but with a border added around the image itself. So my task is to create a new image with a white border around it that does not overlap the image. Currently the border overlaps the image; this is my code:
- (UIImage*)imageWithBorderFromImage:(UIImage*)source
{
    CGSize size = [source size];
    UIGraphicsBeginImageContext(size);
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    [source drawInRect:rect blendMode:kCGBlendModeNormal alpha:1.0];

    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBStrokeColor(context, 1.0, 1.0, 1.0, 1.0);
    CGContextSetLineWidth(context, 40.0);
    CGContextStrokeRect(context, rect);

    UIImage *testImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return testImg;
}
Can anyone tell me how to first draw a white canvas that is 40 pixels larger in each direction than the source image, and then draw the image on top of it?
I adjusted your code to make it work. Basically what it does:
Set canvas size to the size of your image + 2*margins
Fill the whole canvas with background color (white in your case)
Draw your image in the appropriate rectangle (with initial size and required margins)
Resulting code is:
- (UIImage*)imageWithBorderFromImage:(UIImage*)source
{
    const CGFloat margin = 40.0f;
    CGSize size = CGSizeMake([source size].width + 2 * margin, [source size].height + 2 * margin);
    UIGraphicsBeginImageContext(size);

    // fill the enlarged canvas with white first
    [[UIColor whiteColor] setFill];
    [[UIBezierPath bezierPathWithRect:CGRectMake(0, 0, size.width, size.height)] fill];

    // then draw the source image inset by the margin
    CGRect rect = CGRectMake(margin, margin, size.width - 2 * margin, size.height - 2 * margin);
    [source drawInRect:rect blendMode:kCGBlendModeNormal alpha:1.0];

    UIImage *testImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return testImg;
}
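For comparison, a minimal Swift sketch of the same approach using UIGraphicsImageRenderer (iOS 10+); untested, and the function name is mine:

import UIKit

func imageWithBorder(from source: UIImage, margin: CGFloat = 40) -> UIImage {
    // Canvas is the source size plus a margin on every side.
    let size = CGSize(width: source.size.width + 2 * margin,
                      height: source.size.height + 2 * margin)
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { context in
        // Fill the whole canvas with white first...
        UIColor.white.setFill()
        context.fill(CGRect(origin: .zero, size: size))
        // ...then draw the source image inset by the margin.
        source.draw(in: CGRect(x: margin, y: margin,
                               width: source.size.width,
                               height: source.size.height))
    }
}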

objective-c: draw line on top of UIImage or UIImageView

How do I draw a simple line on top of a UIImage or UIImageView?
The following code works by creating a new image the same size as the original, drawing a copy of the original image onto it, then drawing a 1-pixel line along the top of the new image.
// UIImage *originalImage = <the image you want to add a line to>
// UIColor *lineColor = <the color of the line>
UIGraphicsBeginImageContext(originalImage.size);
// Pass 1: Draw the original image as the background
[originalImage drawAtPoint:CGPointMake(0,0)];
// Pass 2: Draw the line on top of original image
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context, 1.0);
CGContextMoveToPoint(context, 0, 0);
CGContextAddLineToPoint(context, originalImage.size.width, 0);
CGContextSetStrokeColorWithColor(context, [lineColor CGColor]);
CGContextStrokePath(context);
// Create new image
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
// Tidy up
UIGraphicsEndImageContext();
Alternatively, you could create the line as a CAShapeLayer and add it as a sublayer of the UIImageView's layer (see this answer).
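A rough sketch of that alternative (the view and the line coordinates are assumptions):

// Decorate the view with a line layer instead of redrawing the image.
let lineLayer = CAShapeLayer()
let path = UIBezierPath()
path.move(to: CGPoint(x: 0, y: 0))
path.addLine(to: CGPoint(x: imageView.bounds.width, y: 0))
lineLayer.path = path.cgPath
lineLayer.strokeColor = UIColor.red.cgColor
lineLayer.lineWidth = 1
imageView.layer.addSublayer(lineLayer)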
For Swift 3.0
UIGraphicsBeginImageContext(originalImage.size)
originalImage.draw(at: CGPoint(x: 0, y: 0))
let context = UIGraphicsGetCurrentContext()!
context.setLineWidth(2)
context.move(to: CGPoint(x: 0, y: originalImage.size.height - 2))
context.addLine(to: CGPoint(x: originalImage.size.width, y: originalImage.size.height - 2))
context.setStrokeColor(UIColor.red.cgColor)
context.strokePath()
let finalImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
For Swift 2.3
func drawLineOnImage(size: CGSize, image: UIImage, from: CGPoint, to: CGPoint) -> UIImage {
    UIGraphicsBeginImageContext(size)
    image.drawAtPoint(CGPointZero)

    let context = UIGraphicsGetCurrentContext()
    CGContextSetLineWidth(context, 1.0)
    CGContextSetStrokeColorWithColor(context, UIColor.blueColor().CGColor)
    CGContextMoveToPoint(context, from.x, from.y)
    CGContextAddLineToPoint(context, to.x, to.y)
    CGContextStrokePath(context)

    let resultImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return resultImage
}
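A hypothetical call site for this helper, drawing a horizontal line 10 points from the top:

// Swift 2.3: the first parameter label is omitted at the call site.
let lined = drawLineOnImage(originalImage.size,
                            image: originalImage,
                            from: CGPoint(x: 0, y: 10),
                            to: CGPoint(x: originalImage.size.width, y: 10))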

Slightly-transparent white line around UIImage after adding overlays?

I have a function that overlays images on another image and returns the result as a UIImage. However, a slightly transparent white line appears around it. Here's what I have:
- (UIImage *)romanticFilterImage:(UIImage *)imageA {
    UIImage *vintageOne = [UIImage imageNamed:@"filter_vintage_1.png"];
    UIImage *vintageTwo = [UIImage imageNamed:@"filter_vintage_2.png"];
    UIImage *vintageThree = [UIImage imageNamed:@"filter_vintage_3.png"];

    UIGraphicsBeginImageContextWithOptions(CGSizeMake(imageA.size.width, imageA.size.height), YES, 0.0);
    [imageA drawAtPoint:CGPointMake(0, 0)];
    [vintageThree drawInRect:CGRectMake(0, 0, imageA.size.width, imageA.size.height) blendMode:kCGBlendModeMultiply alpha:1.0];
    [vintageTwo drawInRect:CGRectMake(0, 0, imageA.size.width + 2, imageA.size.height + 2) blendMode:kCGBlendModeColorBurn alpha:1.0];
    [vintageOne drawInRect:CGRectMake(0, 0, imageA.size.width, imageA.size.height) blendMode:kCGBlendModeMultiply alpha:1.0];
    UIImage *source = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGSize size = [source size];
    UIGraphicsBeginImageContext(size);
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    [source drawInRect:rect blendMode:kCGBlendModeNormal alpha:1.0];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBStrokeColor(context, 5.0, 5.0, 5.0, 5.0);
    CGContextStrokeRect(context, rect);
    UIImage *testImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return testImg;
}
Here is a screenshot (I have added a black background to highlight the slightly transparent border):
Aren't you stroking the image with white in these lines?
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBStrokeColor(context, 5.0, 5.0, 5.0, 5.0);
CGContextStrokeRect(context, rect);
UIImage *testImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Also, I am not sure what you intended with CGContextSetRGBStrokeColor(context, 5.0, 5.0, 5.0, 5.0); since 1.0 is the maximum for the RGBA values in that function, this effectively strokes the rect in opaque white.

How to erase some portion of a UIImageView's image on iOS?

I have a view with a UIImageView and an image set on it. I want to erase parts of the image, like the eraser tool in Photoshop. How do I achieve this? Also, how do I undo an erase?
If you know what area you want to erase, you can create a new image of the same size, set the mask to the full image minus the area you want to erase, draw the full image into the new image, and use that as the new image. To undo, simply use the previous image.
Edit
Sample code. Say the area you want to erase from the image view imgView is specified by erasePath:
- (void)clipImage
{
    UIImage *img = imgView.image;
    CGSize s = img.size;
    UIGraphicsBeginImageContext(s);
    CGContextRef g = UIGraphicsGetCurrentContext();

    // clip to everything except erasePath (even-odd rule)
    CGContextAddPath(g, erasePath);
    CGContextAddRect(g, CGRectMake(0, 0, s.width, s.height));
    CGContextEOClip(g);

    [img drawAtPoint:CGPointZero];
    imgView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
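Since undo just means restoring the previous image, a minimal sketch of an undo stack (Swift; names are illustrative):

import UIKit

// Push the current image before each erase; pop to undo.
var undoStack: [UIImage] = []

func willErase(from imageView: UIImageView) {
    if let current = imageView.image {
        undoStack.append(current) // remember the state for undo
    }
}

func undoErase(on imageView: UIImageView) {
    if let previous = undoStack.popLast() {
        imageView.image = previous
    }
}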
// Alternative approach: redraw the image, then stroke the touch segment
// with kCGBlendModeClear to erase it
UIGraphicsBeginImageContext(imgBlankView.frame.size);
[imgBlankView.image drawInRect:CGRectMake(0, 0, imgBlankView.frame.size.width, imgBlankView.frame.size.height)];
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), lineWidth);
CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeClear);
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), red, green, blue, 1.0);
CGContextBeginPath(UIGraphicsGetCurrentContext());
CGContextSetShouldAntialias(UIGraphicsGetCurrentContext(), YES);
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint1.x, lastPoint1.y);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
CGContextStrokePath(UIGraphicsGetCurrentContext());
imgBlankView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Swift 5.2 Version
UIGraphicsBeginImageContextWithOptions(imageView.frame.size, false, 1.0)
let currentContext = UIGraphicsGetCurrentContext()

// clip to everything except the shape layer's path (even-odd rule)
currentContext?.addPath(shapeLayer.path!)
currentContext?.addRect(CGRect(x: 0, y: 0, width: imageView.frame.size.width, height: imageView.frame.size.height))
currentContext?.clip(using: .evenOdd)

imageView.image?.draw(at: .zero)
let erasedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
imageView.image = erasedImage