Making a UIImageView grey - objective-c

I'm setting UIImageViews within table cells using setImageWithUrl from the AFNetworking library, but I need the images to be greyscale. Is there any way I can do this? I've tried a few UIImage greyscale converters, but I guess they don't work because I'm setting something that hasn't downloaded yet.

Try this method:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);
// Create bitmap image info from pixel data in current context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
// Create a new UIImage object
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
// Release colorspace, context and bitmap information
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CFRelease(imageRef);
// Return the new grayscale image
return newImage;
}
I found it here: http://mobiledevelopertips.com/graphics/convert-an-image-uiimage-to-grayscale.html
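Since the images in the question only exist once the network request finishes, the conversion has to happen in the download completion, before the image is assigned to the cell. Here is a minimal Swift sketch of that flow; the names imageURL and cellImageView are assumptions, and the grayscale helper stands in for any of the conversion routines in this thread (e.g. the Swift convertToGrayScale shown later):
// Download off the main queue, convert, then assign on the main queue.
URLSession.shared.dataTask(with: imageURL) { data, _, _ in
    guard let data = data, let image = UIImage(data: data) else { return }
    let gray = convertToGrayScale(image: image) // hypothetical helper, e.g. the Swift version later in this thread
    DispatchQueue.main.async {
        cellImageView.image = gray
    }
}.resume()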

I took the code from @Jesse Gumpo's example, above, but here it is as a category on UIImage.
@implementation UIImage ( Greyscale )

- (UIImage *)greyscaleImage
{
    // Note: self.CIImage is nil for CGImage-backed images (e.g. downloaded ones),
    // so build a CIImage from the CGImage instead.
    CIImage * beginImage = [ CIImage imageWithCGImage:self.CGImage ] ;

    CIImage * evAdjustedCIImage = nil ;
    {
        CIFilter * filter = [ CIFilter filterWithName:@"CIColorControls"
                                        keysAndValues:kCIInputImageKey, beginImage
                              , @"inputBrightness", @0.0
                              , @"inputContrast", @1.1
                              , @"inputSaturation", @0.0
                              , nil ] ;
        evAdjustedCIImage = [ filter outputImage ] ;
    }

    CIImage * resultCIImage = nil ;
    {
        CIFilter * filter = [ CIFilter filterWithName:@"CIExposureAdjust"
                                        keysAndValues:kCIInputImageKey, evAdjustedCIImage
                              , @"inputEV", @0.7
                              , nil ] ;
        resultCIImage = [ filter outputImage ] ;
    }

    CIContext * context = [ CIContext contextWithOptions:nil ] ;
    CGImageRef resultCGImage = [ context createCGImage:resultCIImage
                                               fromRect:resultCIImage.extent ] ;
    UIImage * result = [ UIImage imageWithCGImage:resultCGImage ] ;
    CGImageRelease( resultCGImage ) ;

    // Return the new greyscale image
    return result;
}

@end
Now you can just do this:
UIImage * downloadedImage = ... get from AFNetwork results ... ;
downloadedImage = [ downloadedImage greyscaleImage ] ;
... use 'downloadedImage' ...

Instead of a UIImageView, subclass a UIView and give it
@property (strong, nonatomic) UIImage *image;
with
@synthesize image = _image;
override the setter
-(void)setImage:(UIImage *)image{
_image = [self makeImageBW:image];
[self setNeedsDisplay];
}
- (UIImage *)makeImageBW:(UIImage *)source
{
    // source.CIImage is nil for CGImage-backed images, so create a CIImage explicitly.
    CIImage *beginImage = [CIImage imageWithCGImage:source.CGImage];
    CIImage *blackAndWhite = [CIFilter filterWithName:@"CIColorControls" keysAndValues:kCIInputImageKey, beginImage, @"inputBrightness", [NSNumber numberWithFloat:0.0], @"inputContrast", [NSNumber numberWithFloat:1.1], @"inputSaturation", [NSNumber numberWithFloat:0.0], nil].outputImage;
    CIImage *output = [CIFilter filterWithName:@"CIExposureAdjust" keysAndValues:kCIInputImageKey, blackAndWhite, @"inputEV", [NSNumber numberWithFloat:0.7], nil].outputImage;
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef ref = [context createCGImage:output fromRect:output.extent];
    UIImage *newImage = [UIImage imageWithCGImage:ref];
    CGImageRelease(ref);
    return newImage;
}
-(void)drawRect:(CGRect)rect{
[_image drawInRect:rect];
}
Somewhere else you can set the image with your NSURLRequest (called request here):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil];
    if (data) {
        UIImage *image = [UIImage imageWithData:data];
        if (image) dispatch_async(dispatch_get_main_queue(), ^{
            [view setImage:image];
        });
    }
});
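For reference, the same setter-override pattern sketched in Swift; the class name and the grayscale helper are illustrative assumptions, not part of the original answer:
// A plain UIView whose image setter stores a grayscale copy and redraws.
class GrayscaleImageView: UIView {
    var image: UIImage? {
        didSet {
            // Convert with any grayscale helper from this thread.
            grayImage = image.map { convertToGrayScale(image: $0) }
            setNeedsDisplay()
        }
    }
    private var grayImage: UIImage?

    override func draw(_ rect: CGRect) {
        grayImage?.draw(in: rect)
    }
}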

This thread is a bit old, but I came across it in my search. Below I've pointed out two different methods in Swift to create a grayscale image that can be displayed in an image view, taking the alpha channel and the scale of the image into account.
import CoreImage
extension UIImage
{
/// Applies grayscale with CIColorControls by setting saturation to 0.0.
/// - Parameter brightness: Default is 0.0.
/// - Parameter contrast: Default is 1.0.
/// - Returns: The grayscale image of self if available.
func grayscaleImage(brightness: Double = 0.0, contrast: Double = 1.0) -> UIImage?
{
if let ciImage = CoreImage.CIImage(image: self, options: nil)
{
let paramsColor: [String : AnyObject] = [ kCIInputBrightnessKey: NSNumber(double: brightness),
kCIInputContrastKey: NSNumber(double: contrast),
kCIInputSaturationKey: NSNumber(double: 0.0) ]
let grayscale = ciImage.imageByApplyingFilter("CIColorControls", withInputParameters: paramsColor)
let processedCGImage = CIContext().createCGImage(grayscale, fromRect: grayscale.extent)
return UIImage(CGImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
}
return nil
}
/// Create a grayscale image with alpha channel. Is 5 times faster than grayscaleImage().
/// - Returns: The grayscale image of self if available.
func convertToGrayScale() -> UIImage?
{
// Create image rectangle with current image width/height * scale
let pixelSize = CGSize(width: self.size.width * self.scale, height: self.size.height * self.scale)
let imageRect = CGRect(origin: CGPointZero, size: pixelSize)
// Grayscale color space
if let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceGray()
{
// Create bitmap content with current image size and grayscale colorspace
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.None.rawValue)
if let context: CGContextRef = CGBitmapContextCreate(nil, Int(pixelSize.width), Int(pixelSize.height), 8, 0, colorSpace, bitmapInfo.rawValue)
{
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, self.CGImage)
// Create bitmap image info from pixel data in current context
if let imageRef: CGImageRef = CGBitmapContextCreateImage(context)
{
let bitmapInfoAlphaOnly = CGBitmapInfo(rawValue: CGImageAlphaInfo.Only.rawValue)
if let contextAlpha = CGBitmapContextCreate(nil, Int(pixelSize.width), Int(pixelSize.height), 8, 0, nil, bitmapInfoAlphaOnly.rawValue)
{
CGContextDrawImage(contextAlpha, imageRect, self.CGImage)
if let mask: CGImageRef = CGBitmapContextCreateImage(contextAlpha)
{
// Create a new UIImage object
if let newCGImage = CGImageCreateWithMask(imageRef, mask)
{
// Return the new grayscale image
return UIImage(CGImage: newCGImage, scale: self.scale, orientation: self.imageOrientation)
}
}
}
}
}
}
// A required variable was unexpectedly nil
return nil
}
}
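Hypothetical usage of the extension above (photo and imageView are assumed names):
// Core Image route, with a slightly raised contrast:
imageView.image = photo.grayscaleImage(contrast: 1.1)
// Faster Core Graphics route that also preserves the alpha channel:
imageView.image = photo.convertToGrayScale()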

Related

Apply Black and White Filter to UIImage

I need to apply a black-and-white filter to a UIImage. I have a view in which there's a photo taken by the user, but I don't have any idea how to transform the colors of the image.
- (void)viewDidLoad {
[super viewDidLoad];
self.navigationItem.title = NSLocalizedString(@"#Paint!", nil);
imageView.image = image;
}
How can I do that?
Objective-C
- (UIImage *)convertImageToGrayScale:(UIImage *)image {
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);
// Create bitmap image info from pixel data in current context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
// Create a new UIImage object
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
// Release colorspace, context and bitmap information
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CFRelease(imageRef);
// Return the new grayscale image
return newImage;
}
Swift
func convertToGrayScale(image: UIImage) -> UIImage {
// Create image rectangle with current image width/height
let imageRect:CGRect = CGRect(x:0, y:0, width:image.size.width, height: image.size.height)
// Grayscale color space
let colorSpace = CGColorSpaceCreateDeviceGray()
let width = image.size.width
let height = image.size.height
// Create bitmap content with current image size and grayscale colorspace
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
context?.draw(image.cgImage!, in: imageRect)
let imageRef = context!.makeImage()
// Create a new UIImage object
let newImage = UIImage(cgImage: imageRef!)
return newImage
}
Judging by the ciimage tag, perhaps the OP was thinking (correctly) that Core Image would provide a quick and easy way to do this?
Here's that, both in ObjC:
- (UIImage *)grayscaleImage:(UIImage *)image {
    CIImage *ciImage = [[CIImage alloc] initWithImage:image];
    CIImage *grayscale = [ciImage imageByApplyingFilter:@"CIColorControls"
                                    withInputParameters:@{kCIInputSaturationKey : @0.0}];
    return [UIImage imageWithCIImage:grayscale];
}
and Swift:
func grayscaleImage(image: UIImage) -> UIImage {
let ciImage = CIImage(image: image)
let grayscale = ciImage.imageByApplyingFilter("CIColorControls",
withInputParameters: [ kCIInputSaturationKey: 0.0 ])
return UIImage(CIImage: grayscale)
}
CIColorControls is just one of several built-in Core Image filters that can convert an image to grayscale. CIPhotoEffectMono, CIPhotoEffectNoir, and CIPhotoEffectTonal are different tone-mapping presets (each takes no parameters), and you can do your own tone mapping with filters like CIColorMap.
Unlike alternatives that involve creating and drawing into one's own CGBitmapContext, these preserve the size/scale and alpha of the original image without extra work.
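For example, here is a hedged sketch of the preset route in more recent Swift; the function name and the parameter-free CIPhotoEffectNoir choice are mine, not from the answer above:
import CoreImage
import UIKit

// Convert to grayscale using the CIPhotoEffectNoir preset.
func noirImage(_ image: UIImage) -> UIImage? {
    guard let ciImage = CIImage(image: image) else { return nil }
    let noir = ciImage.applyingFilter("CIPhotoEffectNoir", parameters: [:])
    guard let cgImage = CIContext().createCGImage(noir, from: noir.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}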
While PiratM's solution works, you lose the alpha channel. To preserve the alpha channel you need to do a few extra steps.
+(UIImage *)convertImageToGrayScale:(UIImage *)image {
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);
// Create bitmap image info from pixel data in current context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
// Release colorspace, context and bitmap information
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
context = CGBitmapContextCreate(nil,image.size.width, image.size.height, 8, 0, nil, kCGImageAlphaOnly );
CGContextDrawImage(context, imageRect, [image CGImage]);
CGImageRef mask = CGBitmapContextCreateImage(context);
// Create a new UIImage object
UIImage *newImage = [UIImage imageWithCGImage:CGImageCreateWithMask(imageRef, mask)];
CGImageRelease(imageRef);
CGImageRelease(mask);
// Return the new grayscale image
return newImage;
}
The version of @rickster looks good with respect to the alpha channel, but a UIImageView without an .AspectFit or .Fill contentMode can't display it. Therefore the UIImage has to be created from a CGImage. This version, implemented as a Swift UIImage extension, keeps the current scale and takes some optional input parameters:
import CoreImage
extension UIImage
{
/// Applies grayscale with CIColorControls by setting saturation to 0.0.
/// - Parameter brightness: Default is 0.0.
/// - Parameter contrast: Default is 1.0.
/// - Returns: The grayscale image of self if available.
func grayscaleImage(brightness: Double = 0.0, contrast: Double = 1.0) -> UIImage?
{
if let ciImage = CoreImage.CIImage(image: self, options: nil)
{
let paramsColor: [String : AnyObject] = [ kCIInputBrightnessKey: NSNumber(double: brightness),
kCIInputContrastKey: NSNumber(double: contrast),
kCIInputSaturationKey: NSNumber(double: 0.0) ]
let grayscale = ciImage.imageByApplyingFilter("CIColorControls", withInputParameters: paramsColor)
let processedCGImage = CIContext().createCGImage(grayscale, fromRect: grayscale.extent)
return UIImage(CGImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
}
return nil
}
}
The longer but faster way is a modified version of @ChrisStillwell's answer, implemented as a UIImage extension in Swift that takes the alpha channel and the current scale into account:
extension UIImage
{
/// Create a grayscale image with alpha channel. Is 5 times faster than grayscaleImage().
/// - Returns: The grayscale image of self if available.
func convertToGrayScale() -> UIImage?
{
// Create image rectangle with current image width/height * scale
let pixelSize = CGSize(width: self.size.width * self.scale, height: self.size.height * self.scale)
let imageRect = CGRect(origin: CGPointZero, size: pixelSize)
// Grayscale color space
if let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceGray()
{
// Create bitmap content with current image size and grayscale colorspace
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.None.rawValue)
if let context: CGContextRef = CGBitmapContextCreate(nil, Int(pixelSize.width), Int(pixelSize.height), 8, 0, colorSpace, bitmapInfo.rawValue)
{
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, self.CGImage)
// Create bitmap image info from pixel data in current context
if let imageRef: CGImageRef = CGBitmapContextCreateImage(context)
{
let bitmapInfoAlphaOnly = CGBitmapInfo(rawValue: CGImageAlphaInfo.Only.rawValue)
if let contextAlpha = CGBitmapContextCreate(nil, Int(pixelSize.width), Int(pixelSize.height), 8, 0, nil, bitmapInfoAlphaOnly.rawValue)
{
CGContextDrawImage(contextAlpha, imageRect, self.CGImage)
if let mask: CGImageRef = CGBitmapContextCreateImage(contextAlpha)
{
// Create a new UIImage object
if let newCGImage = CGImageCreateWithMask(imageRef, mask)
{
// Return the new grayscale image
return UIImage(CGImage: newCGImage, scale: self.scale, orientation: self.imageOrientation)
}
}
}
}
}
}
// A required variable was unexpectedly nil
return nil
}
}
In Swift 5, using Core Image to do the image filtering (thanks @rickster):
extension UIImage{
var grayscaled: UIImage?{
let ciImage = CIImage(image: self)
let grayscale = ciImage?.applyingFilter("CIColorControls",
parameters: [ kCIInputSaturationKey: 0.0 ])
if let gray = grayscale{
return UIImage(ciImage: gray)
}
else{
return nil
}
}
}
Updated @FBente's answer to Swift 5. This version does the conversion with Core Graphics bitmap contexts rather than Core Image:
extension UIImage
{
/// Create a grayscale image with alpha channel. Is 5 times faster than grayscaleImage().
/// - Returns: The grayscale image of self if available.
var grayScaled: UIImage?
{
// Create image rectangle with current image width/height * scale
let pixelSize = CGSize(width: self.size.width * self.scale, height: self.size.height * self.scale)
let imageRect = CGRect(origin: CGPoint.zero, size: pixelSize)
// Grayscale color space
let colorSpace: CGColorSpace = CGColorSpaceCreateDeviceGray()
// Create bitmap content with current image size and grayscale colorspace
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
if let context: CGContext = CGContext(data: nil, width: Int(pixelSize.width), height: Int(pixelSize.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
{
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
guard let cg = self.cgImage else{
return nil
}
context.draw(cg, in: imageRect)
// Create bitmap image info from pixel data in current context
if let imageRef: CGImage = context.makeImage(){
let bitmapInfoAlphaOnly = CGBitmapInfo(rawValue: CGImageAlphaInfo.alphaOnly.rawValue)
guard let context = CGContext(data: nil, width: Int(pixelSize.width), height: Int(pixelSize.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfoAlphaOnly.rawValue) else{
return nil
}
context.draw(cg, in: imageRect)
if let mask: CGImage = context.makeImage() {
// Create a new UIImage object
if let newCGImage = imageRef.masking(mask){
// Return the new grayscale image
return UIImage(cgImage: newCGImage, scale: self.scale, orientation: self.imageOrientation)
}
}
}
}
// A required variable was unexpectedly nil
return nil
}
}
Swift 3.0 version:
extension UIImage {
func convertedToGrayImage() -> UIImage? {
let width = self.size.width
let height = self.size.height
let rect = CGRect(x: 0.0, y: 0.0, width: width, height: height)
let colorSpace = CGColorSpaceCreateDeviceGray()
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
guard let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) else {
return nil
}
guard let cgImage = cgImage else { return nil }
context.draw(cgImage, in: rect)
guard let imageRef = context.makeImage() else { return nil }
let newImage = UIImage(cgImage: imageRef.copy()!)
return newImage
}
}
Swift3 + GPUImage
import GPUImage
extension UIImage {
func blackWhite() -> UIImage? {
guard let image: GPUImagePicture = GPUImagePicture(image: self) else {
print("unable to create GPUImagePicture")
return nil
}
let filter = GPUImageAverageLuminanceThresholdFilter()
image.addTarget(filter)
filter.useNextFrameForImageCapture()
image.processImage()
guard let processedImage: UIImage = filter.imageFromCurrentFramebuffer(with: UIImageOrientation.up) else {
print("unable to obtain UIImage from filter")
return nil
}
return processedImage
}
}
Swift 4 Solution
extension UIImage {
var withGrayscale: UIImage {
guard let ciImage = CIImage(image: self, options: nil) else { return self }
let paramsColor: [String: AnyObject] = [kCIInputBrightnessKey: NSNumber(value: 0.0), kCIInputContrastKey: NSNumber(value: 1.0), kCIInputSaturationKey: NSNumber(value: 0.0)]
let grayscale = ciImage.applyingFilter("CIColorControls", parameters: paramsColor)
guard let processedCGImage = CIContext().createCGImage(grayscale, from: grayscale.extent) else { return self }
return UIImage(cgImage: processedCGImage, scale: scale, orientation: imageOrientation)
}
}
Here is the Swift 1.2 version:
/// Convert the background image to grayscale.
///
/// - parameter flag: if true, the image will be rendered in grayscale
func convertBackgroundColorToGrayScale(flag: Bool) {
if flag == true {
let imageRect = self.myImage.frame
let colorSpace = CGColorSpaceCreateDeviceGray()
let width = imageRect.width
let height = imageRect.height
let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.None.rawValue)
var context = CGBitmapContextCreate(nil, Int(width), Int(height), 8, 0, colorSpace, bitmapInfo)
let image = self.musicBackgroundColor.image!.CGImage
CGContextDrawImage(context, imageRect, image)
let imageRef = CGBitmapContextCreateImage(context)
let newImage = UIImage(CGImage: CGImageCreateCopy(imageRef))
self.myImage.image = newImage
} else {
// do something else
}
}
In Swift 3.0:
func convertImageToGrayScale(image: UIImage) -> UIImage {
// Create image rectangle with current image width/height
let imageRect = CGRect(x: 0, y: 0,width: image.size.width, height : image.size.height)
// Grayscale color space
let colorSpace = CGColorSpaceCreateDeviceGray()
// Create bitmap content with current image size and grayscale colorspace
let context = CGContext(data: nil, width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: CGImageAlphaInfo.none.rawValue)
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
context?.draw(image.cgImage!, in: imageRect)
// Create bitmap image info from pixel data in current context
let imageRef = context!.makeImage()
// Create a new UIImage object
let newImage = UIImage(cgImage: imageRef!)
// Return the new grayscale image
return newImage
}
This code (Objective-C) works:
CIImage * ciimage = ...;
CIFilter * filter = [CIFilter filterWithName:@"CIColorControls" withInputParameters:@{kCIInputSaturationKey : @0.0, kCIInputContrastKey : @10.0, kCIInputImageKey : ciimage}];
CIImage * grayscale = [filter outputImage];
The kCIInputContrastKey : @10.0 is to obtain an almost black-and-white image.
(Based on @FBente's answer.)
Swift 5.4.2:
func grayscaled(brightness: Double = 0.0, contrast: Double = 1.0) -> UIImage? {
guard let ciImage = CoreImage.CIImage(image: self, options: nil) else { return nil }
let paramsColor: [String : AnyObject] = [ kCIInputBrightnessKey: NSNumber(value: brightness),
kCIInputContrastKey: NSNumber(value: contrast),
kCIInputSaturationKey: NSNumber(value: 0.0) ]
let grayscale = ciImage.applyingFilter("CIColorControls", parameters: paramsColor)
guard let processedCGImage = CIContext().createCGImage(grayscale, from: grayscale.extent) else { return nil }
return UIImage(cgImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
}

Correct crop of CIGaussianBlur

As I noticed, when CIGaussianBlur is applied to an image, the image's corners get blurred so that it looks smaller than the original. So I figured out that I need to crop it correctly to avoid having transparent edges on the image. But how do I calculate how much I need to crop depending on the blur amount?
Example:
Original image:
Image with 50 inputRadius of CIGaussianBlur (blue color is background of everything):
Take the following code as an example...
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:5.0f] forKey:@"inputRadius"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
This results in the images you provided above. But if I instead use the original image's rect to create the CGImage off the context, the resulting image is the desired size.
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
There are two issues. The first is that the blur filter samples pixels outside the edges of the input image; those pixels are transparent, and that's where the transparent edges come from.
The trick is to extend the edges before you apply the blur filter. This can be done with a clamp filter, e.g. like this:
CIFilter *affineClampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
CGAffineTransform xform = CGAffineTransformMakeScale(1.0, 1.0);
[affineClampFilter setValue:[NSValue valueWithBytes:&xform
                                           objCType:@encode(CGAffineTransform)]
                     forKey:@"inputTransform"];
This filter extends the edges infinitely and eliminates the transparency. The next step would be to apply the blur filter.
The second issue is a bit weird. Some renderers produce a bigger output image for the blur filter, and you must adapt the origin of the resulting CIImage by some offset, e.g. like this:
CGImageRef cgImage = [context createCGImage:outputImage
                                   fromRect:CGRectOffset([inputImage extent],
                                                         offset, offset)];
The software renderer on my iPhone needs three times the blur radius as offset. The hardware renderer on the same iPhone does not need any offset at all. Maybe you could deduce the offset from the size difference of the input and output images, but I did not try...
To get a nice blurred version of an image with hard edges, you first need to apply a CIAffineClamp to the source image to extend its edges out, and then ensure that you use the input image's extent when generating the output image.
The code is as follows:
CIContext *context = [CIContext contextWithOptions:nil];
UIImage *image = [UIImage imageNamed:@"Flower"];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setDefaults];
[clampFilter setValue:inputImage forKey:kCIInputImageKey];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:clampFilter.outputImage forKey:kCIInputImageKey];
[blurFilter setValue:@10.0f forKey:@"inputRadius"];
CIImage *result = [blurFilter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
UIImage *blurredImage = [[UIImage alloc] initWithCGImage:cgImage scale:image.scale orientation:UIImageOrientationUp];
CGImageRelease(cgImage);
Note this code was tested on iOS. It should be similar for OS X (substituting NSImage for UIImage).
I saw some of the solutions and wanted to recommend a more modern one, based off some of the ideas shared here:
private lazy var coreImageContext = CIContext() // Re-use this.
func blurredImage(image: CIImage, radius: CGFloat) -> CGImage? {
let blurredImage = image
.clampedToExtent()
.applyingFilter(
"CIGaussianBlur",
parameters: [
kCIInputRadiusKey: radius,
]
)
.cropped(to: image.extent)
return coreImageContext.createCGImage(blurredImage, from: blurredImage.extent)
}
If you need a UIImage afterward, you can of course get it like so:
let image = UIImage(cgImage: cgImage)
... For those wondering, the reason for returning a CGImage is (as noted in the Apple documentation):
Due to Core Image's coordinate system mismatch with UIKit, this filtering approach may yield unexpected results when displayed in a UIImageView with contentMode set. Be sure to back it with a CGImage so that it handles contentMode properly.
If you need a CIImage you could return that, but in this case if you're displaying the image, you'd probably want to be careful.
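For completeness, here is a hypothetical end-to-end call starting from a UIImage; photo and imageView are assumed names:
if let ciInput = CIImage(image: photo),
   let cgOutput = blurredImage(image: ciInput, radius: 8) {
    imageView.image = UIImage(cgImage: cgOutput)
}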
This works for me :)
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setDefaults];
[blurFilter setValue:inputImage forKey:@"inputImage"];
CGFloat blurLevel = 20.0f; // Set blur level
[blurFilter setValue:[NSNumber numberWithFloat:blurLevel] forKey:@"inputRadius"]; // set value for blur level
CIImage *outputImage = [blurFilter valueForKey:@"outputImage"];
CGRect rect = inputImage.extent; // Create Rect
rect.origin.x += blurLevel; // and set custom params
rect.origin.y += blurLevel; //
rect.size.height -= blurLevel*2.0f; //
rect.size.width -= blurLevel*2.0f; //
CGImageRef cgImage = [context createCGImage:outputImage fromRect:rect]; // Then apply new rect
imageView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
Here is the Swift 5 version of blurring the image. Set the clamp filter to defaults so you will not need to supply a transform.
func applyBlurEffect() -> UIImage? {
let context = CIContext(options: nil)
let imageToBlur = CIImage(image: self)
let clampFilter = CIFilter(name: "CIAffineClamp")!
clampFilter.setDefaults()
clampFilter.setValue(imageToBlur, forKey: kCIInputImageKey)
//The CIAffineClamp filter is setting your extent as infinite, which then confounds your context. Try saving off the pre-clamp extent CGRect, and then supplying that to the context initializer.
let inputImageExtent = imageToBlur!.extent
guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
return nil
}
currentFilter.setValue(clampFilter.outputImage, forKey: kCIInputImageKey)
currentFilter.setValue(10, forKey: "inputRadius")
guard let output = currentFilter.outputImage, let cgimg = context.createCGImage(output, from: inputImageExtent) else {
return nil
}
return UIImage(cgImage: cgimg)
}
Here is the Swift version:
func applyBlurEffect(image: UIImage) -> UIImage {
let context = CIContext(options: nil)
let imageToBlur = CIImage(image: image)
let blurfilter = CIFilter(name: "CIGaussianBlur")
blurfilter!.setValue(imageToBlur, forKey: "inputImage")
blurfilter!.setValue(5.0, forKey: "inputRadius")
let resultImage = blurfilter!.valueForKey("outputImage") as! CIImage
let cgImage = context.createCGImage(resultImage, fromRect: resultImage.extent)
let blurredImage = UIImage(CGImage: cgImage)
return blurredImage
}
See below two implementations for Xamarin (C#).
1) Works for iOS 6
public static UIImage Blur(UIImage image)
{
using(var blur = new CIGaussianBlur())
{
blur.Image = new CIImage(image);
blur.Radius = 6.5f;
using(CIImage output = blur.OutputImage)
using(CIContext context = CIContext.FromOptions(null))
using(CGImage cgimage = context.CreateCGImage (output, new RectangleF(0, 0, image.Size.Width, image.Size.Height)))
{
return UIImage.FromImage(cgimage);
}
}
}
2) Implementation for iOS 7
The approach shown above no longer works properly on iOS 7 (at least at the moment with Xamarin 7.0.1), so I decided to add cropping another way (the measures may depend on the blur radius).
private static UIImage BlurImage(UIImage image)
{
using(var blur = new CIGaussianBlur())
{
blur.Image = new CIImage(image);
blur.Radius = 6.5f;
using(CIImage output = blur.OutputImage)
using(CIContext context = CIContext.FromOptions(null))
using(CGImage cgimage = context.CreateCGImage (output, new RectangleF(0, 0, image.Size.Width, image.Size.Height)))
{
return UIImage.FromImage(Crop(CIImage.FromCGImage(cgimage), image.Size.Width, image.Size.Height));
}
}
}
private static CIImage Crop(CIImage image, float width, float height)
{
var crop = new CICrop
{
Image = image,
Rectangle = new CIVector(10, 10, width - 20, height - 20)
};
return crop.OutputImage;
}
Try this, passing the input image's extent as the parameter to -createCGImage:fromRect::
-(UIImage *)gaussianBlurImageWithRadius:(CGFloat)radius {
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *input = [CIImage imageWithCGImage:self.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:input forKey:kCIInputImageKey];
[filter setValue:@(radius) forKey:kCIInputRadiusKey];
CIImage *output = [filter valueForKey:kCIOutputImageKey];
CGImageRef imgRef = [context createCGImage:output
fromRect:input.extent];
UIImage *outImage = [UIImage imageWithCGImage:imgRef
scale:UIScreen.mainScreen.scale
orientation:UIImageOrientationUp];
CGImageRelease(imgRef);
return outImage;
}

iOS: create a darker version of a UIImage and leave transparent pixels unchanged?

I found
Create new UIImage by adding shadow to existing UIImage
and
UIImage, is there an easy way to make it darker or all black
But the selected answers do not work for me.
I have a UIImage which may have some transparent pixels in it. I need to create a new UIImage with the non-transparent pixels darkened. Is there any way to do this? I was thinking of using UIBezierPath, but I don't know how to do it for only the non-transparent pixels.
This is the class I use to color images even if they are transparent.
+ (UIImage *)colorizeImage:(UIImage *)image withColor:(UIColor *)color {
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect area = CGRectMake(0, 0, image.size.width, image.size.height);
CGContextScaleCTM(context, 1, -1);
CGContextTranslateCTM(context, 0, -area.size.height);
CGContextSaveGState(context);
CGContextClipToMask(context, area, image.CGImage);
[color set];
CGContextFillRect(context, area);
CGContextRestoreGState(context);
CGContextSetBlendMode(context, kCGBlendModeMultiply);
CGContextDrawImage(context, area, image.CGImage);
UIImage *colorizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return colorizedImage;
}
To darken the image you would pass the method a black or gray UIColor with lowered transparency.
How about trying a CoreImage Filter?
You could use the CIColorControls filter to adjust the input brightness and contrast to darken the image.
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:sourceImage]; //your input image
CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
[filter setValue:inputImage forKey:@"inputImage"];
[filter setValue:[NSNumber numberWithFloat:-0.5] forKey:@"inputBrightness"]; // negative values darken, positive values brighten
// Your output image
UIImage *outputImage = [UIImage imageWithCGImage:[context createCGImage:filter.outputImage fromRect:filter.outputImage.extent]];
Read more about the CIFilter parameters here:
http://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html%23//apple_ref/doc/filter/ci/CIColorControls
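For reference, here is the same idea as a hedged Swift sketch (the function name is an assumption). A negative inputBrightness darkens, and the alpha channel itself is left as-is:
import CoreImage
import UIKit

func darkened(_ image: UIImage, brightness: Double = -0.5) -> UIImage? {
    guard let input = CIImage(image: image) else { return nil }
    // CIColorControls with a negative brightness darkens the color channels.
    let output = input.applyingFilter("CIColorControls",
                                      parameters: [kCIInputBrightnessKey: brightness])
    guard let cgImage = CIContext().createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}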
Here's a quick Swift version, using a CIExposureAdjust CIFilter :)
// Get the original image and set up the CIExposureAdjust filter
guard let originalImage = UIImage(named: "myImage"),
let inputImage = CIImage(image: originalImage),
let filter = CIFilter(name: "CIExposureAdjust") else { return }
// The inputEV value on the CIFilter adjusts exposure (negative values darken, positive values brighten)
filter.setValue(inputImage, forKey: "inputImage")
filter.setValue(-2.0, forKey: "inputEV")
// Break early if the filter was not a success (.outputImage is optional in Swift)
guard let filteredImage = filter.outputImage else { return }
let context = CIContext(options: nil)
let outputImage = UIImage(CGImage: context.createCGImage(filteredImage, fromRect: filteredImage.extent))
myImageView.image = outputImage // use the filtered UIImage as required.
Here's David's answer in Swift 5:
class func colorizeImage(_ image: UIImage?, with color: UIColor?) -> UIImage? {
UIGraphicsBeginImageContextWithOptions(image?.size ?? CGSize.zero, false, image?.scale ?? 0.0)
let context = UIGraphicsGetCurrentContext()
let area = CGRect(x: 0, y: 0, width: image?.size.width ?? 0.0, height: image?.size.height ?? 0.0)
context?.scaleBy(x: 1, y: -1)
context?.translateBy(x: 0, y: -area.size.height)
context?.saveGState()
context?.clip(to: area, mask: (image?.cgImage)!)
color?.set()
context?.fill(area)
context?.restoreGState()
if let context = context {
context.setBlendMode(.multiply)
}
context!.draw((image?.cgImage)!, in: area)
let colorizedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return colorizedImage
}
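Hypothetical usage, assuming the method lives on a helper class named ImageHelper and sourceImage is the image to darken:
// Multiplying with semi-transparent black darkens only the visible pixels.
let darkened = ImageHelper.colorizeImage(sourceImage, with: UIColor(white: 0.0, alpha: 0.5))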

How to use a UIImage as a mask over a color in Objective-C

I have a UIImage that is all black with an alpha channel, so some parts are grayish and some parts are completely see-through. I want to use that image as a mask over some other color (let's say white to make it easy), so the final product is a white image with parts of it transparent.
I've been looking around on the Apple documentation site here:
https://developer.apple.com/library/archive/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_images/dq_images.html#//apple_ref/doc/uid/TP30001066-CH212-CJBHIJEB
But I can't really make sense of those examples.
In iOS 7+ you should use UIImageRenderingModeAlwaysTemplate instead. See https://stackoverflow.com/a/26965557/870313
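A minimal sketch of that approach in Swift (the image name is taken from the answer below; the rest is illustrative):
// The alpha mask is rendered in whatever tintColor the image view uses.
let template = UIImage(named: "UIButtonBarAction")?.withRenderingMode(.alwaysTemplate)
let imageView = UIImageView(image: template)
imageView.tintColor = .red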
Creating arbitrarily-colored icons from a black-with-alpha master image (iOS).
// Usage: UIImage *buttonImage = [UIImage ipMaskedImageNamed:@"UIButtonBarAction.png" color:[UIColor redColor]];
+ (UIImage *)ipMaskedImageNamed:(NSString *)name color:(UIColor *)color
{
UIImage *image = [UIImage imageNamed:name];
CGRect rect = CGRectMake(0, 0, image.size.width, image.size.height);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, image.scale);
CGContextRef c = UIGraphicsGetCurrentContext();
[image drawInRect:rect];
CGContextSetFillColorWithColor(c, [color CGColor]);
CGContextSetBlendMode(c, kCGBlendModeSourceAtop);
CGContextFillRect(c, rect);
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
}
Credits to Ole Zorn: https://gist.github.com/1102091
Translating Jano's answer into Swift:
func ipMaskedImageNamed(name:String, color:UIColor) -> UIImage {
let image = UIImage(named: name)
let rect:CGRect = CGRect(origin: CGPoint(x: 0, y: 0), size: CGSize(width: image!.size.width, height: image!.size.height))
UIGraphicsBeginImageContextWithOptions(rect.size, false, image!.scale)
let c:CGContextRef = UIGraphicsGetCurrentContext()
image?.drawInRect(rect)
CGContextSetFillColorWithColor(c, color.CGColor)
CGContextSetBlendMode(c, kCGBlendModeSourceAtop)
CGContextFillRect(c, rect)
let result:UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return result
}
Usage:
myButton.setImage(ipMaskedImageNamed("grayScalePNG", color: UIColor.redColor()), forState: .Normal)
Edit: According to this article, you can turn an image into a mask whose opaque areas are represented by the tint color. Within the asset catalog, under the Attributes Inspector of the image, change Render As to Template Image. No code necessary.
Here's a variation in Swift 3, written as an extension to UIImage:
extension UIImage {
func tinted(color:UIColor) -> UIImage? {
let image = self
let rect:CGRect = CGRect(origin: CGPoint(x: 0, y: 0), size: image.size)
UIGraphicsBeginImageContextWithOptions(rect.size, false, image.scale)
let context = UIGraphicsGetCurrentContext()!
image.draw(in: rect)
context.setFillColor(color.cgColor)
context.setBlendMode(.sourceAtop)
context.fill(rect)
if let result:UIImage = UIGraphicsGetImageFromCurrentImageContext() {
UIGraphicsEndImageContext()
return result
}
else {
return nil
}
}
}
// Usage
let myImage = #imageLiteral(resourceName: "Check.png") // any image
let blueImage = myImage.tinted(color: .blue)
I converted this for OSX here as a Category and copied below. Note this is for non-ARC projects. For ARC projects, the autorelease can be removed.
- (NSImage *)cdsMaskedWithColor:(NSColor *)color
{
CGRect rect = CGRectMake(0, 0, self.size.width, self.size.height);
NSImage *result = [[NSImage alloc] initWithSize:self.size];
[result lockFocusFlipped:self.isFlipped];
NSGraphicsContext *context = [NSGraphicsContext currentContext];
CGContextRef c = (CGContextRef)[context graphicsPort];
[self drawInRect:NSRectFromCGRect(rect)];
CGContextSetFillColorWithColor(c, [color CGColor]);
CGContextSetBlendMode(c, kCGBlendModeSourceAtop);
CGContextFillRect(c, rect);
[result unlockFocus];
return [result autorelease];
}
+ (NSImage *)cdsMaskedImageNamed:(NSString *)name color:(NSColor *)color
{
NSImage *image = [NSImage imageNamed:name];
return [image cdsMaskedWithColor:color];
}

kCAFilterNearest magnification filter (UIImageView)

I am using QREncoder library found here: https://github.com/jverkoey/ObjQREncoder
Basically, I looked at the example code by this author, and when he creates the QR code it comes out perfectly with no pixelation. The image itself that the library provides is 33 x 33 pixels, but he uses kCAFilterNearest to magnify and make it very clear (no pixelation). Here is his code:
UIImage* image = [QREncoder encode:@"http://www.google.com/"];
UIImageView* imageView = [[UIImageView alloc] initWithImage:image];
CGFloat qrSize = self.view.bounds.size.width - kPadding * 2;
imageView.frame = CGRectMake(kPadding, (self.view.bounds.size.height - qrSize) / 2,
qrSize, qrSize);
[imageView layer].magnificationFilter = kCAFilterNearest;
[self.view addSubview:imageView];
I have a UIImageView in a xib, and I am setting its image like this:
[[template imageVQRCode] setImage:[QREncoder encode:ticketNum]];
[[[template imageVQRCode] layer] setMagnificationFilter:kCAFilterNearest];
but the QR code is really blurry. In the example, it comes out crystal clear.
What am i doing wrong?
Thanks!
UPDATE: I found out that the problem isn't with scaling or anything to do with kCAFilterNearest. It has to do with generating the PNG image from the view. Here's how it looks on the device vs. how it looks when I save the UIView to its PNG representation (notice the quality of the QR codes):
UPDATE 2: This is how I am generating the PNG file from UIView:
UIGraphicsBeginImageContextWithOptions([[template view] bounds].size, YES, 0.0);
[[[template view] layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(viewImage) writeToFile:plistPath atomically:YES];
I have used the function below for resizing the image.
- (UIImage *)resizedImage:(CGSize)newSize interpolationQuality:(CGInterpolationQuality)quality
{
BOOL drawTransposed;
switch (self.imageOrientation) {
case UIImageOrientationLeft:
case UIImageOrientationLeftMirrored:
case UIImageOrientationRight:
case UIImageOrientationRightMirrored:
drawTransposed = YES;
break;
default:
drawTransposed = NO;
}
return [self resizedImage:newSize
transform:[self transformForOrientation:newSize]
drawTransposed:drawTransposed
interpolationQuality:quality];
}
- (UIImage *)resizedImage:(CGSize)newSize
transform:(CGAffineTransform)transform
drawTransposed:(BOOL)transpose
interpolationQuality:(CGInterpolationQuality)quality
{
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
CGImageRef imageRef = self.CGImage;
// Build a context that's the same dimensions as the new size
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
if((bitmapInfo == kCGImageAlphaLast) || (bitmapInfo == kCGImageAlphaNone))
bitmapInfo = kCGImageAlphaNoneSkipLast;
CGContextRef bitmap = CGBitmapContextCreate(NULL,
newRect.size.width,
newRect.size.height,
CGImageGetBitsPerComponent(imageRef),
0,
CGImageGetColorSpace(imageRef),
bitmapInfo);
// Rotate and/or flip the image if required by its orientation
CGContextConcatCTM(bitmap, transform);
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(bitmap, quality);
// Draw into the context; this scales the image
CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);
// Get the resized image from the context and a UIImage
CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
// Clean up
CGContextRelease(bitmap);
CGImageRelease(newImageRef);
return newImage;
}
UIImageWriteToSavedPhotosAlbum([image resizedImage:CGSizeMake(300, 300) interpolationQuality:kCGInterpolationNone], nil, nil, nil);
Please see the resulting image below and let me know if you need any help.
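As a related, untested idea in Swift: disabling interpolation in the snapshot context may also help keep the magnified QR pixels crisp (templateView is an assumed name carried over from the question, and this is a sketch, not a verified fix):
UIGraphicsBeginImageContextWithOptions(templateView.bounds.size, true, 0.0)
if let ctx = UIGraphicsGetCurrentContext() {
    ctx.interpolationQuality = .none // avoid smoothing when the layer's contents are scaled
    templateView.layer.render(in: ctx)
}
let snapshot = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()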