iOS: create a darker version of a UIImage and leave transparent pixels unchanged? - objective-c

I found
Create new UIImage by adding shadow to existing UIImage
and
UIImage, is there an easy way to make it darker or all black
But the selected answers do not work for me.
I have a UIImage which may contain some transparent pixels. I need to create a new UIImage with the non-transparent pixels darkened. Is there any way to do this? I was thinking of using UIBezierPath, but I don't know how to apply it only to the non-transparent pixels.

This is the class I use to color images even if they are transparent.
+ (UIImage *)colorizeImage:(UIImage *)image withColor:(UIColor *)color {
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect area = CGRectMake(0, 0, image.size.width, image.size.height);
CGContextScaleCTM(context, 1, -1);
CGContextTranslateCTM(context, 0, -area.size.height);
CGContextSaveGState(context);
CGContextClipToMask(context, area, image.CGImage);
[color set];
CGContextFillRect(context, area);
CGContextRestoreGState(context);
CGContextSetBlendMode(context, kCGBlendModeMultiply);
CGContextDrawImage(context, area, image.CGImage);
UIImage *colorizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return colorizedImage;
}
To darken the image you would pass the method a black or gray UIColor with reduced alpha (i.e. partially transparent).
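For example, a hypothetical call (ImageUtils stands in for whichever class the method lives on, originalImage is your source image, and the 0.5-alpha black is just one choice of darkening color):
// Darken by multiplying with semi-transparent black; fully transparent pixels stay transparent
UIImage *darkened = [ImageUtils colorizeImage:originalImage withColor:[UIColor colorWithWhite:0.0 alpha:0.5]];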

How about trying a CoreImage Filter?
You could use the CIColorControls filter to adjust the input brightness and contrast to darken the image.
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:sourceImage]; //your input image
CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
[filter setValue:inputImage forKey:@"inputImage"];
[filter setValue:[NSNumber numberWithFloat:0.5] forKey:#"inputBrightness"];
// Your output image
UIImage *outputImage = [UIImage imageWithCGImage:[context createCGImage:filter.outputImage fromRect:filter.outputImage.extent]];
Read more about the CIFilter parameters here:
http://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html%23//apple_ref/doc/filter/ci/CIColorControls

Here's a quick Swift version, using a CIExposureAdjust CIFilter :)
// Get the original image and set up the CIExposureAdjust filter
guard let originalImage = UIImage(named: "myImage"),
let inputImage = CIImage(image: originalImage),
let filter = CIFilter(name: "CIExposureAdjust") else { return }
// The inputEV value on the CIFilter adjusts exposure (negative values darken, positive values brighten)
filter.setValue(inputImage, forKey: "inputImage")
filter.setValue(-2.0, forKey: "inputEV")
// Break early if the filter was not a success (.outputImage is optional in Swift)
guard let filteredImage = filter.outputImage else { return }
let context = CIContext(options: nil)
let outputImage = UIImage(CGImage: context.createCGImage(filteredImage, fromRect: filteredImage.extent))
myImageView.image = outputImage // use the filtered UIImage as required.

Here's David's answer in Swift 5:
class func colorizeImage(_ image: UIImage?, with color: UIColor?) -> UIImage? {
guard let image = image, let cgImage = image.cgImage, let color = color else { return nil }
UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
defer { UIGraphicsEndImageContext() }
guard let context = UIGraphicsGetCurrentContext() else { return nil }
let area = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
context.scaleBy(x: 1, y: -1)
context.translateBy(x: 0, y: -area.size.height)
context.saveGState()
context.clip(to: area, mask: cgImage)
color.set()
context.fill(area)
context.restoreGState()
context.setBlendMode(.multiply)
context.draw(cgImage, in: area)
return UIGraphicsGetImageFromCurrentImageContext()
}

Related

The rounded pure color image created by code is blurry

In Objective-C, I make a circle shape programmatically with the following code:
+(UIImage *)makeRoundedImage:(CGSize) size backgroundColor:(UIColor *) backgroundColor cornerRadius:(int) cornerRadius
{
UIImage* bgImage = [self imageWithColor:backgroundColor andSize:size];
CALayer *imageLayer = [CALayer layer];
imageLayer.frame = CGRectMake(0, 0, size.width, size.height);
imageLayer.contents = (id) bgImage.CGImage;
imageLayer.masksToBounds = YES;
imageLayer.cornerRadius = cornerRadius;
UIGraphicsBeginImageContext(bgImage.size);
[imageLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return roundedImage;
}
The imageWithColor method is as follows:
+(UIImage *)imageWithColor:(UIColor *)color andSize:(CGSize)size
{
// quick fix, otherwise the "CG invalid context 0x0" error pops up
if(size.width == 0) size.width = 1;
if(size.height == 0) size.height = 1;
//---quick fix
UIImage *img = nil;
CGRect rect = CGRectMake(0, 0, size.width, size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context,
color.CGColor);
CGContextFillRect(context, rect);
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
Then I used it to create a pure-color circle image, but I found that the circle image is not perfectly round. As an example, please see the following code:
CGSize size = CGSizeMake(diameter, diameter);
int r = ceil((float)diameter/2.0);
UIImage *imageNormal = [self makeRoundedImage:size backgroundColor:backgroundColor cornerRadius:r];
[slider setThumbImage:imageNormal forState:UIControlStateNormal];
First I created a circle image, then I set the image as the thumb of a UISlider. But what is shown is as in the picture below:
You can see the circle is not an exact circle. I'm thinking it is probably caused by a screen resolution issue, because if I use an image resource for the thumb I need to add an @2x version. Does anybody know the reason? Thanks in advance.
Updated on 8th Aug 2015.
Further to this question and the answer from @Noah Witherspoon, I found the blurry edge issue has been solved. But the circle still looks like it is being cut off. I used the following code:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f, radius*2.0f);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rect);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
And the circle looks like:
You can see the edge has been cut.
I changed the code as following:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f+4, radius*2.0f+4);
CGRect rectmin = CGRectMake(2.0f, 2.0f, radius*2, radius*2);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rectmin);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
You can see the circle looks better (the top edge and the bottom edge):
I made the fill rect size smaller, and the edge looks better, but I don't think it's a nice solution. Still, does anybody know why this happens?
From your screenshot it looks like you do actually have a circular image, but its scale is wrong—it’s not Retina—so it looks blurry and not-circular. The key thing is that instead of using UIGraphicsBeginImageContext which defaults to a scale of 1.0 (as compared to your screen, which is at a scale of 2.0 or 3.0), you should be using UIGraphicsBeginImageContextWithOptions. Also, you don’t need to make a layer or a view to draw a circle in an image context.
+ (UIImage *)makeCircleImageWithDiameter:(CGFloat)diameter color:(UIColor *)color {
UIGraphicsBeginImageContextWithOptions(CGSizeMake(diameter, diameter), NO, 0 /* scale (0 means “use the current screen’s scale”) */);
[color setFill];
CGContextFillEllipseInRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, diameter, diameter));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
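Tying this back to the question, usage for the slider thumb might look like this (assuming the same diameter, backgroundColor and slider variables from the question, and that the method lives on the same helper class):
UIImage *thumb = [self makeCircleImageWithDiameter:diameter color:backgroundColor];
[slider setThumbImage:thumb forState:UIControlStateNormal];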
If you want to get a circle every time try this:
- (UIImage *)makeCircularImage:(CGSize)size backgroundColor:(UIColor *)backgroundColor {
CGSize squareSize = CGSizeMake((size.width > size.height) ? size.width : size.height,
(size.width > size.height) ? size.width : size.height);
UIView *circleView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, squareSize.width, squareSize.height)];
circleView.layer.cornerRadius = circleView.frame.size.height * 0.5f;
circleView.backgroundColor = backgroundColor;
circleView.opaque = NO;
UIGraphicsBeginImageContextWithOptions(circleView.bounds.size, circleView.opaque, 0.0);
[circleView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}

Correct crop of CIGaussianBlur

I noticed that when CIGaussianBlur is applied to an image, the image's corners get blurred so that it looks smaller than the original. So I figured out that I need to crop it correctly to avoid having transparent edges. But how do I calculate how much I need to crop depending on the blur amount?
Example:
Original image:
Image with a CIGaussianBlur inputRadius of 50 (the blue color is the background of everything):
Take the following code as an example...
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:5.0f] forKey:@"inputRadius"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
This results in the images you provided above. But if I instead use the original image's rect to create the CGImage from the context, the resulting image is the desired size.
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
There are two issues. The first is that the blur filter samples pixels outside the edges of the input image. These pixels are transparent. That's where the transparent pixels come from.
The trick is to extend the edges before you apply the blur filter. This can be done by a clamp filter e.g. like this:
CIFilter *affineClampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
CGAffineTransform xform = CGAffineTransformMakeScale(1.0, 1.0);
[affineClampFilter setValue:[NSValue valueWithBytes:&xform
objCType:@encode(CGAffineTransform)]
forKey:@"inputTransform"];
This filter extends the edges infinitely and eliminates the transparency. The next step would be to apply the blur filter.
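A minimal sketch of that chaining, reusing inputImage and affineClampFilter from above (the radius value here is just an example):
// Feed the source image through the clamp, then blur the clamped (edge-extended) result
[affineClampFilter setValue:inputImage forKey:kCIInputImageKey];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:affineClampFilter.outputImage forKey:kCIInputImageKey];
[blurFilter setValue:@5.0f forKey:@"inputRadius"];
// The blurred image's extent is now infinite, so render/crop it using the original image's extent
CIImage *blurred = blurFilter.outputImage;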
The second issue is a bit weird. Some renderers produce a bigger output image for the blur filter and you must adapt the origin of the resulting CIImage by some offset e.g. like this:
CGImageRef cgImage = [context createCGImage:outputImage
fromRect:CGRectOffset([inputImage extent],
offset, offset)];
The software renderer on my iPhone needs three times the blur radius as the offset. The hardware renderer on the same iPhone does not need any offset at all. Maybe you could deduce the offset from the size difference of the input and output images, but I did not try...
To get a nice blurred version of an image with hard edges, you first need to apply a CIAffineClamp to the source image to extend its edges out, and then you need to ensure that you use the input image's extent when generating the output image.
The code is as follows:
CIContext *context = [CIContext contextWithOptions:nil];
UIImage *image = [UIImage imageNamed:@"Flower"];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setDefaults];
[clampFilter setValue:inputImage forKey:kCIInputImageKey];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:clampFilter.outputImage forKey:kCIInputImageKey];
[blurFilter setValue:@10.0f forKey:@"inputRadius"];
CIImage *result = [blurFilter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
UIImage *blurredImage = [[UIImage alloc] initWithCGImage:cgImage scale:image.scale orientation:UIImageOrientationUp];
CGImageRelease(cgImage);
Note this code was tested on iOS. It should be similar for OS X (substituting NSImage for UIImage).
I saw some of the solutions and wanted to recommend a more modern one, based off some of the ideas shared here:
private lazy var coreImageContext = CIContext() // Re-use this.
func blurredImage(image: CIImage, radius: CGFloat) -> CGImage? {
let blurredImage = image
.clampedToExtent()
.applyingFilter(
"CIGaussianBlur",
parameters: [
kCIInputRadiusKey: radius,
]
)
.cropped(to: image.extent)
return coreImageContext.createCGImage(blurredImage, from: blurredImage.extent)
}
If you need a UIImage afterward, you can of course get it like so:
let image = UIImage(cgImage: cgImage)
... For those wondering, the reason for returning a CGImage is (as noted in the Apple documentation):
Due to Core Image's coordinate system mismatch with UIKit, this filtering approach may yield unexpected results when displayed in a UIImageView with "contentMode". Be sure to back it with a CGImage so that it handles contentMode properly.
If you need a CIImage you could return that, but in this case if you're displaying the image, you'd probably want to be careful.
This works for me :)
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setDefaults];
[blurFilter setValue:inputImage forKey:@"inputImage"];
CGFloat blurLevel = 20.0f; // Set blur level
[blurFilter setValue:[NSNumber numberWithFloat:blurLevel] forKey:@"inputRadius"]; // set value for blur level
CIImage *outputImage = [blurFilter valueForKey:@"outputImage"];
CGRect rect = inputImage.extent; // Create Rect
rect.origin.x += blurLevel; // and set custom params
rect.origin.y += blurLevel; //
rect.size.height -= blurLevel*2.0f; //
rect.size.width -= blurLevel*2.0f; //
CGImageRef cgImage = [context createCGImage:outputImage fromRect:rect]; // Then apply new rect
imageView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
Here is the Swift 5 version of blurring the image. Set the clamp filter to its defaults so you don't need to supply a transform.
func applyBlurEffect() -> UIImage? {
let context = CIContext(options: nil)
let imageToBlur = CIImage(image: self)
let clampFilter = CIFilter(name: "CIAffineClamp")!
clampFilter.setDefaults()
clampFilter.setValue(imageToBlur, forKey: kCIInputImageKey)
// The CIAffineClamp filter sets the extent to infinite, which then confounds the context. Save off the pre-clamp extent CGRect and supply that to createCGImage(_:from:).
let inputImageExtent = imageToBlur!.extent
guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
return nil
}
currentFilter.setValue(clampFilter.outputImage, forKey: kCIInputImageKey)
currentFilter.setValue(10, forKey: "inputRadius")
guard let output = currentFilter.outputImage, let cgimg = context.createCGImage(output, from: inputImageExtent) else {
return nil
}
return UIImage(cgImage: cgimg)
}
Here is a Swift version:
func applyBlurEffect(image: UIImage) -> UIImage {
let context = CIContext(options: nil)
let imageToBlur = CIImage(image: image)
let blurfilter = CIFilter(name: "CIGaussianBlur")
blurfilter!.setValue(imageToBlur, forKey: "inputImage")
blurfilter!.setValue(5.0, forKey: "inputRadius")
let resultImage = blurfilter!.valueForKey("outputImage") as! CIImage
let cgImage = context.createCGImage(resultImage, fromRect: resultImage.extent)
let blurredImage = UIImage(CGImage: cgImage)
return blurredImage
}
See below for two Xamarin (C#) implementations.
1) Works for iOS 6
public static UIImage Blur(UIImage image)
{
using(var blur = new CIGaussianBlur())
{
blur.Image = new CIImage(image);
blur.Radius = 6.5f;
using(CIImage output = blur.OutputImage)
using(CIContext context = CIContext.FromOptions(null))
using(CGImage cgimage = context.CreateCGImage (output, new RectangleF(0, 0, image.Size.Width, image.Size.Height)))
{
return UIImage.FromImage(cgimage);
}
}
}
2) Implementation for iOS 7
The approach shown above no longer works properly on iOS 7 (at least at the moment, with Xamarin 7.0.1), so I decided to add cropping another way (the measures may depend on the blur radius).
private static UIImage BlurImage(UIImage image)
{
using(var blur = new CIGaussianBlur())
{
blur.Image = new CIImage(image);
blur.Radius = 6.5f;
using(CIImage output = blur.OutputImage)
using(CIContext context = CIContext.FromOptions(null))
using(CGImage cgimage = context.CreateCGImage (output, new RectangleF(0, 0, image.Size.Width, image.Size.Height)))
{
return UIImage.FromImage(Crop(CIImage.FromCGImage(cgimage), image.Size.Width, image.Size.Height));
}
}
}
private static CIImage Crop(CIImage image, float width, float height)
{
var crop = new CICrop
{
Image = image,
Rectangle = new CIVector(10, 10, width - 20, height - 20)
};
return crop.OutputImage;
}
Try this, passing the input's extent as the -createCGImage:fromRect: parameter:
-(UIImage *)gaussianBlurImageWithRadius:(CGFloat)radius {
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *input = [CIImage imageWithCGImage:self.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:input forKey:kCIInputImageKey];
[filter setValue:@(radius) forKey:kCIInputRadiusKey];
CIImage *output = [filter valueForKey:kCIOutputImageKey];
CGImageRef imgRef = [context createCGImage:output
fromRect:input.extent];
UIImage *outImage = [UIImage imageWithCGImage:imgRef
scale:UIScreen.mainScreen.scale
orientation:UIImageOrientationUp];
CGImageRelease(imgRef);
return outImage;
}
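Assuming the method above is declared in a UIImage category, usage would be along these lines (sourceImage and imageView are placeholders):
UIImage *blurred = [sourceImage gaussianBlurImageWithRadius:8.0];
imageView.image = blurred;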

Making a UIImageView grey

I'm setting UIImageViews within table cells using setImageWithURL from the AFNetworking library, but I need the images to be greyscale... is there any way I can do this? I've tried a few UIImage greyscale converters, but I guess they don't work because I'm setting something that hasn't downloaded yet.
Try this method:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);
// Create bitmap image info from pixel data in current context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
// Create a new UIImage object
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
// Release colorspace, context and bitmap information
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CFRelease(imageRef);
// Return the new grayscale image
return newImage;
}
I found it here: http://mobiledevelopertips.com/graphics/convert-an-image-uiimage-to-grayscale.html
I took the code from @Jesse Gumpo's example, above, but here it is as a UIImage category.
@implementation UIImage ( Greyscale )
- (UIImage *)greyscaleImage
{
CIImage * beginImage = [ CIImage imageWithCGImage:self.CGImage ] ; // self.CIImage is nil unless the UIImage was created from a CIImage
CIImage * evAdjustedCIImage = nil ;
{
CIFilter * filter = [ CIFilter filterWithName:#"CIColorControls"
keysAndValues:kCIInputImageKey, beginImage
, #"inputBrightness", #0.0
, #"inputContrast", #1.1
, #"inputSaturation", #0.0
, nil ] ;
evAdjustedCIImage = [ filter outputImage ] ;
}
CIImage * resultCIImage = nil ;
{
CIFilter * filter = [ CIFilter filterWithName:#"CIExposureAdjust"
keysAndValues:kCIInputImageKey, evAdjustedCIImage
, #"inputEV", #0.7
, nil ] ;
resultCIImage = [ filter outputImage ] ;
}
CIContext * context = [ CIContext contextWithOptions:nil ] ;
CGImageRef resultCGImage = [ context createCGImage:resultCIImage
fromRect:resultCIImage.extent ] ;
UIImage * result = [ UIImage imageWithCGImage:resultCGImage ] ;
CGImageRelease( resultCGImage ) ;
return result;
}
@end
Now you can just do this:
UIImage * downloadedImage = ... get from AFNetwork results ... ;
downloadedImage = [ downloadedImage greyscaleImage ] ;
... use 'downloadedImage' ...
Instead of a UIImageView, subclass UIView and give it
@property (strong, nonatomic) UIImage *image;
with
@synthesize image = _image;
Override the setter:
-(void)setImage:(UIImage *)image{
_image = [self makeImageBW:image];
[self setNeedsDisplay];
}
- (UIImage *)makeImageBW:(UIImage *)source
{
CIImage *beginImage = [CIImage imageWithCGImage:source.CGImage]; // source.CIImage is nil unless the UIImage was created from a CIImage
CIImage *blackAndWhite = [CIFilter filterWithName:@"CIColorControls" keysAndValues:kCIInputImageKey, beginImage, @"inputBrightness", [NSNumber numberWithFloat:0.0], @"inputContrast", [NSNumber numberWithFloat:1.1], @"inputSaturation", [NSNumber numberWithFloat:0.0], nil].outputImage;
CIImage *output = [CIFilter filterWithName:@"CIExposureAdjust" keysAndValues:kCIInputImageKey, blackAndWhite, @"inputEV", [NSNumber numberWithFloat:0.7], nil].outputImage;
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef ref = [context createCGImage:output fromRect:output.extent];
UIImage *newImage = [UIImage imageWithCGImage:ref];
CGImageRelease(ref);
return newImage;
}
-(void)drawRect:(CGRect)rect{
[_image drawInRect:rect];
}
somewhere else you can set the image with your NSURLRequest:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT,0),^{
NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil]; // "request" is your NSURLRequest
if (data){
UIImage *image = [UIImage imageWithData:data];
if (image) dispatch_async(dispatch_get_main_queue(),^{
[view setImage:image];
});
}
});
This thread is a bit old, but I came across it in my search. In this post I've pointed out two different methods in Swift to create a grayscale image that can be displayed in an image view, taking the alpha and scale of the image into account.
import CoreImage
extension UIImage
{
/// Applies grayscale with CIColorControls by settings saturation to 0.0.
/// - Parameter brightness: Default is 0.0.
/// - Parameter contrast: Default is 1.0.
/// - Returns: The grayscale image of self if available.
func grayscaleImage(brightness: Double = 0.0, contrast: Double = 1.0) -> UIImage?
{
if let ciImage = CoreImage.CIImage(image: self, options: nil)
{
let paramsColor: [String : AnyObject] = [ kCIInputBrightnessKey: NSNumber(double: brightness),
kCIInputContrastKey: NSNumber(double: contrast),
kCIInputSaturationKey: NSNumber(double: 0.0) ]
let grayscale = ciImage.imageByApplyingFilter("CIColorControls", withInputParameters: paramsColor)
let processedCGImage = CIContext().createCGImage(grayscale, fromRect: grayscale.extent)
return UIImage(CGImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
}
return nil
}
/// Create a grayscale image with alpha channel. Is 5 times faster than grayscaleImage().
/// - Returns: The grayscale image of self if available.
func convertToGrayScale() -> UIImage?
{
// Create image rectangle with current image width/height * scale
let pixelSize = CGSize(width: self.size.width * self.scale, height: self.size.height * self.scale)
let imageRect = CGRect(origin: CGPointZero, size: pixelSize)
// Grayscale color space
if let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceGray()
{
// Create bitmap content with current image size and grayscale colorspace
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.None.rawValue)
if let context: CGContextRef = CGBitmapContextCreate(nil, Int(pixelSize.width), Int(pixelSize.height), 8, 0, colorSpace, bitmapInfo.rawValue)
{
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, self.CGImage)
// Create bitmap image info from pixel data in current context
if let imageRef: CGImageRef = CGBitmapContextCreateImage(context)
{
let bitmapInfoAlphaOnly = CGBitmapInfo(rawValue: CGImageAlphaInfo.Only.rawValue)
if let contextAlpha = CGBitmapContextCreate(nil, Int(pixelSize.width), Int(pixelSize.height), 8, 0, nil, bitmapInfoAlphaOnly.rawValue)
{
CGContextDrawImage(contextAlpha, imageRect, self.CGImage)
if let mask: CGImageRef = CGBitmapContextCreateImage(contextAlpha)
{
// Create a new UIImage object
if let newCGImage = CGImageCreateWithMask(imageRef, mask)
{
// Return the new grayscale image
return UIImage(CGImage: newCGImage, scale: self.scale, orientation: self.imageOrientation)
}
}
}
}
}
}
// A required variable was unexpectedly nil
return nil
}
}

How to use a UIImage as a mask over a color in Objective-C

I have a UIImage that is all black with an alpha channel, so some parts are grayish and some parts are completely see-through. I want to use that image as a mask over some other color (let's say white to make it easy), so the final product is a white image with parts of it transparent.
I've been looking around on the Apple documentation site here:
https://developer.apple.com/library/archive/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_images/dq_images.html#//apple_ref/doc/uid/TP30001066-CH212-CJBHIJEB
But I can't really make sense of those examples.
In iOS 7+ you should use UIImageRenderingModeAlwaysTemplate instead. See https://stackoverflow.com/a/26965557/870313
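For reference, the template-image route looks roughly like this (the asset name is a placeholder):
// iOS 7+: render the black-with-alpha image as a template and let tintColor supply the color
UIImage *templateImage = [[UIImage imageNamed:@"mask"] imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
UIImageView *imageView = [[UIImageView alloc] initWithImage:templateImage];
imageView.tintColor = [UIColor whiteColor]; // the opaque parts take on the tint color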
Creating arbitrarily-colored icons from a black-with-alpha master image (iOS).
// Usage: UIImage *buttonImage = [UIImage ipMaskedImageNamed:@"UIButtonBarAction.png" color:[UIColor redColor]];
+ (UIImage *)ipMaskedImageNamed:(NSString *)name color:(UIColor *)color
{
UIImage *image = [UIImage imageNamed:name];
CGRect rect = CGRectMake(0, 0, image.size.width, image.size.height);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, image.scale);
CGContextRef c = UIGraphicsGetCurrentContext();
[image drawInRect:rect];
CGContextSetFillColorWithColor(c, [color CGColor]);
CGContextSetBlendMode(c, kCGBlendModeSourceAtop);
CGContextFillRect(c, rect);
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
}
Credits to Ole Zorn: https://gist.github.com/1102091
Translating Jano's answer into Swift:
func ipMaskedImageNamed(name:String, color:UIColor) -> UIImage {
let image = UIImage(named: name)
let rect:CGRect = CGRect(origin: CGPoint(x: 0, y: 0), size: CGSize(width: image!.size.width, height: image!.size.height))
UIGraphicsBeginImageContextWithOptions(rect.size, false, image!.scale)
let c:CGContextRef = UIGraphicsGetCurrentContext()
image?.drawInRect(rect)
CGContextSetFillColorWithColor(c, color.CGColor)
CGContextSetBlendMode(c, kCGBlendModeSourceAtop)
CGContextFillRect(c, rect)
let result:UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return result
}
Usage:
myButton.setImage(ipMaskedImageNamed("grayScalePNG", color: UIColor.redColor()), forState: .Normal)
Edit: According to this article, you can turn an image into a mask whose opaque areas are represented by the tint color. Within the asset catalog, under the Attributes Inspector of the image, change Render As to Template Image. No code necessary.
Here's a variation in Swift 3, written as an extension to UIImage:
extension UIImage {
func tinted(color:UIColor) -> UIImage? {
let image = self
let rect:CGRect = CGRect(origin: CGPoint(x: 0, y: 0), size: image.size)
UIGraphicsBeginImageContextWithOptions(rect.size, false, image.scale)
let context = UIGraphicsGetCurrentContext()!
image.draw(in: rect)
context.setFillColor(color.cgColor)
context.setBlendMode(.sourceAtop)
context.fill(rect)
let result = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return result
}
}
// Usage
let myImage = #imageLiteral(resourceName: "Check.png") // any image
let blueImage = myImage.tinted(color: .blue)
I converted this for OSX here as a Category and copied below. Note this is for non-ARC projects. For ARC projects, the autorelease can be removed.
- (NSImage *)cdsMaskedWithColor:(NSColor *)color
{
CGRect rect = CGRectMake(0, 0, self.size.width, self.size.height);
NSImage *result = [[NSImage alloc] initWithSize:self.size];
[result lockFocusFlipped:self.isFlipped];
NSGraphicsContext *context = [NSGraphicsContext currentContext];
CGContextRef c = (CGContextRef)[context graphicsPort];
[self drawInRect:NSRectFromCGRect(rect)];
CGContextSetFillColorWithColor(c, [color CGColor]);
CGContextSetBlendMode(c, kCGBlendModeSourceAtop);
CGContextFillRect(c, rect);
[result unlockFocus];
return [result autorelease];
}
+ (NSImage *)cdsMaskedImageNamed:(NSString *)name color:(NSColor *)color
{
NSImage *image = [NSImage imageNamed:name];
return [image cdsMaskedWithColor:color];
}

objective-c: draw line on top of UIImage or UIImageView

How do I draw a simple line on top of a UIImage or UIImageView?
The following code works by creating a new image the same size as the original, drawing a copy of the original image onto the new image, then drawing a 1-pixel line along the top of the new image.
// UIImage *originalImage = <the image you want to add a line to>
// UIColor *lineColor = <the color of the line>
UIGraphicsBeginImageContext(originalImage.size);
// Pass 1: Draw the original image as the background
[originalImage drawAtPoint:CGPointMake(0,0)];
// Pass 2: Draw the line on top of original image
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context, 1.0);
CGContextMoveToPoint(context, 0, 0);
CGContextAddLineToPoint(context, originalImage.size.width, 0);
CGContextSetStrokeColorWithColor(context, [lineColor CGColor]);
CGContextStrokePath(context);
// Create new image
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
// Tidy up
UIGraphicsEndImageContext();
Alternatively, you could create the line as a CAShapeLayer and add it as a sublayer of the UIImageView's layer (see this answer), as sketched below.
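A minimal sketch of that layer-based approach (assumes imageView is your UIImageView and draws a 1-point red line along its top edge; requires QuartzCore):
CAShapeLayer *lineLayer = [CAShapeLayer layer];
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:CGPointMake(0, 0)];
[path addLineToPoint:CGPointMake(CGRectGetWidth(imageView.bounds), 0)];
lineLayer.path = path.CGPath;
lineLayer.strokeColor = [UIColor redColor].CGColor;
lineLayer.lineWidth = 1.0;
[imageView.layer addSublayer:lineLayer];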
For Swift 3.0
UIGraphicsBeginImageContext(originalImage.size)
originalImage.draw(at: CGPoint(x: 0, y: 0))
let context = UIGraphicsGetCurrentContext()!
context.setLineWidth(2)
context.move(to: CGPoint(x: 0, y: originalImage.size.height - 2))
context.addLine(to: CGPoint(x: originalImage.size.width, y: originalImage.size.height - 2))
context.setStrokeColor(UIColor.red.cgColor)
context.strokePath()
let finalImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
For Swift 2.3
func drawLineOnImage(size: CGSize, image: UIImage, from: CGPoint, to: CGPoint) -> UIImage {
UIGraphicsBeginImageContext(size)
image.drawAtPoint(CGPointZero)
let context = UIGraphicsGetCurrentContext()
CGContextSetLineWidth(context, 1.0)
CGContextSetStrokeColorWithColor(context, UIColor.blueColor().CGColor)
CGContextMoveToPoint(context, from.x, from.y)
CGContextAddLineToPoint(context, to.x, to.y)
CGContextStrokePath(context)
let resultImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return resultImage
}