How to create an SKTexture with rounded corners without using a mask - iOS 7

I wish to create a simple node with the Facebook profile picture of a user, where the picture has rounded corners (or is a complete circle). I create the node as follows:
SKNode *friend = [[SKNode alloc] init];
SKTexture *texture = [SKTexture textureWithImage:user[@"fbProfilePicture"]];
SKSpriteNode *profilePic = [SKSpriteNode spriteNodeWithTexture:texture];
[friend addChild:profilePic];
I could not find any proper documentation on creating the image with rounded corners, other than using SKCropNode (which seems to be a bad workaround).
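For reference, the SKCropNode workaround mentioned above looks roughly like this. This is only a sketch (the function name, diameter, and white mask color are illustrative, not from the question):

import SpriteKit

// Sketch of the SKCropNode approach: a filled circle supplies the alpha mask.
func circularProfileNode(texture: SKTexture, diameter: CGFloat) -> SKNode {
    let crop = SKCropNode()
    let mask = SKShapeNode(circleOfRadius: diameter / 2)
    mask.fillColor = .white        // only the mask's alpha matters
    mask.lineWidth = 0
    crop.maskNode = mask
    let picture = SKSpriteNode(texture: texture, size: CGSize(width: diameter, height: diameter))
    crop.addChild(picture)
    return crop
}

The answers below avoid the crop node entirely by baking the rounded corners into the texture itself.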

This is how it looks in Swift, translating Sebastian's answer above. The method receives the name of the picture and returns a node with rounded corners.
class func roundSquareImage(imageName: String) -> SKSpriteNode {
let originalPicture = UIImage(named: imageName)
// create the image with rounded corners
UIGraphicsBeginImageContextWithOptions(originalPicture!.size, false, 0)
let rect = CGRectMake(0, 0, originalPicture!.size.width, originalPicture!.size.height)
let rectPath : UIBezierPath = UIBezierPath(roundedRect: rect, cornerRadius: 30.0)
rectPath.addClip()
originalPicture!.drawInRect(rect)
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext();
let texture = SKTexture(image: scaledImage)
let roundedImage = SKSpriteNode(texture: texture, size: CGSizeMake(originalPicture!.size.width, originalPicture!.size.height))
roundedImage.name = imageName
return roundedImage
}

Try this:
// your profile picture
UIImage *fbProfilePicture = [UIImage imageNamed:@"fbProfilePicture"];
// create the image with rounded corners
UIGraphicsBeginImageContextWithOptions(fbProfilePicture.size, NO, 0);
CGRect rect = CGRectMake(0, 0, fbProfilePicture.size.width, fbProfilePicture.size.height);
[[UIBezierPath bezierPathWithRoundedRect:rect cornerRadius:20.0] addClip];
[fbProfilePicture drawInRect:rect];
UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// use the roundedImage as texture for your sprite
SKTexture *texture = [SKTexture textureWithImage:roundedImage];
SKSpriteNode *profilePic = [SKSpriteNode spriteNodeWithTexture:texture size:CGSizeMake(fbProfilePicture.size.width, fbProfilePicture.size.height)];
[self addChild:profilePic];
The rounded corner part comes from this answer.

For 2017...
class YourSprite: SKSpriteNode {
func yourSetupFunction() {
texture = SKTexture( image: UIImage(named: "cat")!.circleMasked! )
}
}
It's that easy...
Code for circleMasked:
(All projects that deal with images will need this anyway.)
extension UIImage {
var isPortrait: Bool { return size.height > size.width }
var isLandscape: Bool { return size.width > size.height }
var breadth: CGFloat { return min(size.width, size.height) }
var breadthSize: CGSize { return CGSize(width: breadth, height: breadth) }
var breadthRect: CGRect { return CGRect(origin: .zero, size: breadthSize) }
var circleMasked: UIImage? {
UIGraphicsBeginImageContextWithOptions(breadthSize, false, scale)
defer { UIGraphicsEndImageContext() }
guard let cgImage = cgImage?.cropping(to: CGRect(origin:
CGPoint(
x: isLandscape ? floor((size.width - size.height) / 2) : 0,
y: isPortrait ? floor((size.height - size.width) / 2) : 0),
size: breadthSize))
else { return nil }
UIBezierPath(ovalIn: breadthRect).addClip()
UIImage(cgImage: cgImage, scale: 1, orientation: imageOrientation)
.draw(in: breadthRect)
return UIGraphicsGetImageFromCurrentImageContext()
}
// classic 'circleMasked' stackoverflow fragment
// courtesy Leo Dabius /a/29047372/294884
}
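On iOS 10 and later the same idea can be written with UIGraphicsImageRenderer. This is a sketch that reuses the breadth helpers from the extension above; it is not part of the original fragment:

import UIKit

extension UIImage {
    // Sketch: circle mask via UIGraphicsImageRenderer, assuming the isPortrait/isLandscape/
    // breadthSize/breadthRect helpers defined in the extension above.
    var circleMaskedWithRenderer: UIImage? {
        guard let squared = cgImage?.cropping(to: CGRect(
            origin: CGPoint(x: isLandscape ? floor((size.width - size.height) / 2) : 0,
                            y: isPortrait ? floor((size.height - size.width) / 2) : 0),
            size: breadthSize)) else { return nil }
        return UIGraphicsImageRenderer(size: breadthSize).image { _ in
            UIBezierPath(ovalIn: breadthRect).addClip()
            UIImage(cgImage: squared, scale: 1, orientation: imageOrientation)
                .draw(in: breadthRect)
        }
    }
}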

cipri.l's answer updated for Swift 5.2
class func roundSquareImage(imageName: String) -> SKSpriteNode {
let originalPicture = UIImage(named: imageName)
// create the image with rounded corners
UIGraphicsBeginImageContextWithOptions(originalPicture!.size, false, 0)
let rect = CGRect(x: 0, y: 0, width: originalPicture!.size.width, height: originalPicture!.size.height)
let rectPath : UIBezierPath = UIBezierPath(roundedRect: rect, cornerRadius: 30.0)
rectPath.addClip()
originalPicture!.draw(in: rect)
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext();
let texture = SKTexture(image: scaledImage!)
let roundedImage = SKSpriteNode(texture: texture, size: CGSize(width: originalPicture!.size.width, height: originalPicture!.size.height))
roundedImage.name = imageName
return roundedImage
}
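A quick usage sketch (the helper class, scene context, and asset name are placeholders, not from the answer):

// Assuming roundSquareImage lives on a helper class named SpriteFactory and this code
// runs inside an SKScene subclass; "fbProfilePicture" is an assumed asset name.
let profileNode = SpriteFactory.roundSquareImage(imageName: "fbProfilePicture")
profileNode.position = CGPoint(x: frame.midX, y: frame.midY)
addChild(profileNode)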

Related

Apply Black and White Filter to UIImage

I need to apply a black-and-white filter to a UIImage. I have a view with a photo taken by the user, but I have no idea how to transform the colors of the image.
- (void)viewDidLoad {
[super viewDidLoad];
self.navigationItem.title = NSLocalizedString(@"#Paint!", nil);
imageView.image = image;
}
How can I do that?
Objective-C
- (UIImage *)convertImageToGrayScale:(UIImage *)image {
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);
// Create bitmap image info from pixel data in current context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
// Create a new UIImage object
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
// Release colorspace, context and bitmap information
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CFRelease(imageRef);
// Return the new grayscale image
return newImage;
}
Swift
func convertToGrayScale(image: UIImage) -> UIImage {
// Create image rectangle with current image width/height
let imageRect:CGRect = CGRect(x:0, y:0, width:image.size.width, height: image.size.height)
// Grayscale color space
let colorSpace = CGColorSpaceCreateDeviceGray()
let width = image.size.width
let height = image.size.height
// Create bitmap content with current image size and grayscale colorspace
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
context?.draw(image.cgImage!, in: imageRect)
let imageRef = context!.makeImage()
// Create a new UIImage object
let newImage = UIImage(cgImage: imageRef!)
return newImage
}
Judging by the ciimage tag, perhaps the OP was thinking (correctly) that Core Image would provide a quick and easy way to do this?
Here's that, both in ObjC:
- (UIImage *)grayscaleImage:(UIImage *)image {
CIImage *ciImage = [[CIImage alloc] initWithImage:image];
CIImage *grayscale = [ciImage imageByApplyingFilter:@"CIColorControls"
withInputParameters: @{kCIInputSaturationKey : @0.0}];
return [UIImage imageWithCIImage:grayscale];
}
and Swift:
func grayscaleImage(image: UIImage) -> UIImage {
let ciImage = CIImage(image: image)
let grayscale = ciImage.imageByApplyingFilter("CIColorControls",
withInputParameters: [ kCIInputSaturationKey: 0.0 ])
return UIImage(CIImage: grayscale)
}
CIColorControls is just one of several built-in Core Image filters that can convert an image to grayscale. CIPhotoEffectMono, CIPhotoEffectNoir, and CIPhotoEffectTonal are different tone-mapping presets (each takes no parameters), and you can do your own tone mapping with filters like CIColorMap.
Unlike alternatives that involve creating and drawing into one's own CGBitmapContext, these preserve the size/scale and alpha of the original image without extra work.
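As a sketch of one of those presets (CIPhotoEffectNoir; swap the filter name for CIPhotoEffectMono or CIPhotoEffectTonal), rendered through a CIContext so the result behaves like any other UIImage. The helper name is mine, not from the answer:

import UIKit
import CoreImage

// Apply the CIPhotoEffectNoir preset and convert back to a CGImage-backed UIImage.
func noirImage(from image: UIImage) -> UIImage? {
    guard let input = CIImage(image: image) else { return nil }
    let filtered = input.applyingFilter("CIPhotoEffectNoir", parameters: [:])
    guard let cgImage = CIContext().createCGImage(filtered, from: filtered.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}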
While PiratM's solution works, you lose the alpha channel. To preserve the alpha channel you need to do a few extra steps.
+(UIImage *)convertImageToGrayScale:(UIImage *)image {
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);
// Create bitmap image info from pixel data in current context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
// Release colorspace, context and bitmap information
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, nil, kCGImageAlphaOnly);
CGContextDrawImage(context, imageRect, [image CGImage]);
CGImageRef mask = CGBitmapContextCreateImage(context);
CGContextRelease(context);
// Create a new UIImage object from the grayscale bitmap masked by the original alpha
CGImageRef maskedImageRef = CGImageCreateWithMask(imageRef, mask);
UIImage *newImage = [UIImage imageWithCGImage:maskedImageRef];
CGImageRelease(imageRef);
CGImageRelease(mask);
CGImageRelease(maskedImageRef);
// Return the new grayscale image
return newImage;
}
@rickster's version looks good and takes the alpha channel into account. But a UIImageView without .AspectFit or .Fill contentMode can't display it, so the UIImage has to be created from a CGImage. This version, implemented as a Swift UIImage extension, keeps the current scale and takes some optional input parameters:
import CoreImage
extension UIImage
{
/// Applies grayscale with CIColorControls by settings saturation to 0.0.
/// - Parameter brightness: Default is 0.0.
/// - Parameter contrast: Default is 1.0.
/// - Returns: The grayscale image of self if available.
func grayscaleImage(brightness: Double = 0.0, contrast: Double = 1.0) -> UIImage?
{
if let ciImage = CoreImage.CIImage(image: self, options: nil)
{
let paramsColor: [String : AnyObject] = [ kCIInputBrightnessKey: NSNumber(double: brightness),
kCIInputContrastKey: NSNumber(double: contrast),
kCIInputSaturationKey: NSNumber(double: 0.0) ]
let grayscale = ciImage.imageByApplyingFilter("CIColorControls", withInputParameters: paramsColor)
let processedCGImage = CIContext().createCGImage(grayscale, fromRect: grayscale.extent)
return UIImage(CGImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
}
return nil
}
}
The longer but faster way is a modified version of @ChrisStillwell's answer, implemented as a UIImage extension in Swift that considers the alpha channel and the current scale:
extension UIImage
{
/// Create a grayscale image with alpha channel. Is 5 times faster than grayscaleImage().
/// - Returns: The grayscale image of self if available.
func convertToGrayScale() -> UIImage?
{
// Create image rectangle with current image width/height * scale
let pixelSize = CGSize(width: self.size.width * self.scale, height: self.size.height * self.scale)
let imageRect = CGRect(origin: CGPointZero, size: pixelSize)
// Grayscale color space
if let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceGray()
{
// Create bitmap content with current image size and grayscale colorspace
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.None.rawValue)
if let context: CGContextRef = CGBitmapContextCreate(nil, Int(pixelSize.width), Int(pixelSize.height), 8, 0, colorSpace, bitmapInfo.rawValue)
{
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, self.CGImage)
// Create bitmap image info from pixel data in current context
if let imageRef: CGImageRef = CGBitmapContextCreateImage(context)
{
let bitmapInfoAlphaOnly = CGBitmapInfo(rawValue: CGImageAlphaInfo.Only.rawValue)
if let contextAlpha = CGBitmapContextCreate(nil, Int(pixelSize.width), Int(pixelSize.height), 8, 0, nil, bitmapInfoAlphaOnly.rawValue)
{
CGContextDrawImage(contextAlpha, imageRect, self.CGImage)
if let mask: CGImageRef = CGBitmapContextCreateImage(contextAlpha)
{
// Create a new UIImage object
if let newCGImage = CGImageCreateWithMask(imageRef, mask)
{
// Return the new grayscale image
return UIImage(CGImage: newCGImage, scale: self.scale, orientation: self.imageOrientation)
}
}
}
}
}
}
// A required variable was unexpectedly nil
return nil
}
}
In Swift 5, using Core Image to do the image filtering. Thanks @rickster.
extension UIImage{
var grayscaled: UIImage?{
let ciImage = CIImage(image: self)
let grayscale = ciImage?.applyingFilter("CIColorControls",
parameters: [ kCIInputSaturationKey: 0.0 ])
if let gray = grayscale{
return UIImage(ciImage: gray)
}
else{
return nil
}
}
}
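A usage sketch ("photo" is an assumed asset name). Note that the result is backed by a CIImage, which carries the display caveats mentioned earlier in the thread:

import UIKit

// Hypothetical usage of the grayscaled property above.
let grayPhoto = UIImage(named: "photo")?.grayscaled
let imageView = UIImageView(image: grayPhoto)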
@FBente's convertToGrayScale() updated to Swift 5, using Core Graphics for the conversion:
extension UIImage
{
/// Create a grayscale image with alpha channel. Is 5 times faster than grayscaleImage().
/// - Returns: The grayscale image of self if available.
var grayScaled: UIImage?
{
// Create image rectangle with current image width/height * scale
let pixelSize = CGSize(width: self.size.width * self.scale, height: self.size.height * self.scale)
let imageRect = CGRect(origin: CGPoint.zero, size: pixelSize)
// Grayscale color space
let colorSpace: CGColorSpace = CGColorSpaceCreateDeviceGray()
// Create bitmap content with current image size and grayscale colorspace
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
if let context: CGContext = CGContext(data: nil, width: Int(pixelSize.width), height: Int(pixelSize.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
{
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
guard let cg = self.cgImage else{
return nil
}
context.draw(cg, in: imageRect)
// Create bitmap image info from pixel data in current context
if let imageRef: CGImage = context.makeImage(){
let bitmapInfoAlphaOnly = CGBitmapInfo(rawValue: CGImageAlphaInfo.alphaOnly.rawValue)
guard let context = CGContext(data: nil, width: Int(pixelSize.width), height: Int(pixelSize.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfoAlphaOnly.rawValue) else{
return nil
}
context.draw(cg, in: imageRect)
if let mask: CGImage = context.makeImage() {
// Create a new UIImage object
if let newCGImage = imageRef.masking(mask){
// Return the new grayscale image
return UIImage(cgImage: newCGImage, scale: self.scale, orientation: self.imageOrientation)
}
}
}
}
// A required variable was unexpectedly nil
return nil
}
}
Swift 3.0 version:
extension UIImage {
func convertedToGrayImage() -> UIImage? {
let width = self.size.width
let height = self.size.height
let rect = CGRect(x: 0.0, y: 0.0, width: width, height: height)
let colorSpace = CGColorSpaceCreateDeviceGray()
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
guard let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) else {
return nil
}
guard let cgImage = cgImage else { return nil }
context.draw(cgImage, in: rect)
guard let imageRef = context.makeImage() else { return nil }
let newImage = UIImage(cgImage: imageRef.copy()!)
return newImage
}
}
Swift 3 + GPUImage
import GPUImage
extension UIImage {
func blackWhite() -> UIImage? {
guard let image: GPUImagePicture = GPUImagePicture(image: self) else {
print("unable to create GPUImagePicture")
return nil
}
let filter = GPUImageAverageLuminanceThresholdFilter()
image.addTarget(filter)
filter.useNextFrameForImageCapture()
image.processImage()
guard let processedImage: UIImage = filter.imageFromCurrentFramebuffer(with: UIImageOrientation.up) else {
print("unable to obtain UIImage from filter")
return nil
}
return processedImage
}
}
Swift 4 Solution
extension UIImage {
var withGrayscale: UIImage {
guard let ciImage = CIImage(image: self, options: nil) else { return self }
let paramsColor: [String: AnyObject] = [kCIInputBrightnessKey: NSNumber(value: 0.0), kCIInputContrastKey: NSNumber(value: 1.0), kCIInputSaturationKey: NSNumber(value: 0.0)]
let grayscale = ciImage.applyingFilter("CIColorControls", parameters: paramsColor)
guard let processedCGImage = CIContext().createCGImage(grayscale, from: grayscale.extent) else { return self }
return UIImage(cgImage: processedCGImage, scale: scale, orientation: imageOrientation)
}
}
Here is the Swift 1.2 version
/// convert background image to gray scale
///
/// - parameter flag: if true, the image will be rendered in grayscale
func convertBackgroundColorToGrayScale(flag: Bool) {
if flag == true {
let imageRect = self.myImage.frame
let colorSpace = CGColorSpaceCreateDeviceGray()
let width = imageRect.width
let height = imageRect.height
let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.None.rawValue)
var context = CGBitmapContextCreate(nil, Int(width), Int(height), 8, 0, colorSpace, bitmapInfo)
let image = self.musicBackgroundColor.image!.CGImage
CGContextDrawImage(context, imageRect, image)
let imageRef = CGBitmapContextCreateImage(context)
let newImage = UIImage(CGImage: CGImageCreateCopy(imageRef))
self.myImage.image = newImage
} else {
// do something else
}
}
In Swift 3.0:
func convertImageToGrayScale(image: UIImage) -> UIImage {
// Create image rectangle with current image width/height
let imageRect = CGRect(x: 0, y: 0,width: image.size.width, height : image.size.height)
// Grayscale color space
let colorSpace = CGColorSpaceCreateDeviceGray()
// Create bitmap content with current image size and grayscale colorspace
let context = CGContext(data: nil, width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: CGImageAlphaInfo.none.rawValue)
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
context?.draw(image.cgImage!, in: imageRect)
// Create bitmap image info from pixel data in current context
let imageRef = context!.makeImage()
// Create a new UIImage object
let newImage = UIImage(cgImage: imageRef!)
// Return the new grayscale image
return newImage
}
This code (Objective-C) works:
CIImage * ciimage = ...;
CIFilter *filter = [CIFilter filterWithName:@"CIColorControls" withInputParameters:@{kCIInputSaturationKey : @0.0, kCIInputContrastKey : @10.0, kCIInputImageKey : ciimage}];
CIImage *grayscale = [filter outputImage];
The kCIInputContrastKey : @10.0 is there to obtain an almost black-and-white image.
(Based on FBente's answer.)
Swift 5.4.2:
func grayscaled(brightness: Double = 0.0, contrast: Double = 1.0) -> UIImage? {
guard let ciImage = CoreImage.CIImage(image: self, options: nil) else { return nil }
let paramsColor: [String : AnyObject] = [ kCIInputBrightnessKey: NSNumber(value: brightness),
kCIInputContrastKey: NSNumber(value: contrast),
kCIInputSaturationKey: NSNumber(value: 0.0) ]
let grayscale = ciImage.applyingFilter("CIColorControls", parameters: paramsColor)
guard let processedCGImage = CIContext().createCGImage(grayscale, from: grayscale.extent) else { return nil }
return UIImage(cgImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
}

iOS: create a darker version of a UIImage and leave transparent pixels unchanged?

I found
Create new UIImage by adding shadow to existing UIImage
and
UIImage, is there an easy way to make it darker or all black
But the selected answers do not work for me.
I have a UIImage which may have some transparent pixels in it. I need to create a new UIImage with the non-transparent pixels darkened. Is there any way to do this? I was thinking of using UIBezierPath, but I don't know how to apply it to only the non-transparent pixels.
This is the class I use to color images even if they are transparent.
+ (UIImage *)colorizeImage:(UIImage *)image withColor:(UIColor *)color {
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect area = CGRectMake(0, 0, image.size.width, image.size.height);
CGContextScaleCTM(context, 1, -1);
CGContextTranslateCTM(context, 0, -area.size.height);
CGContextSaveGState(context);
CGContextClipToMask(context, area, image.CGImage);
[color set];
CGContextFillRect(context, area);
CGContextRestoreGState(context);
CGContextSetBlendMode(context, kCGBlendModeMultiply);
CGContextDrawImage(context, area, image.CGImage);
UIImage *colorizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return colorizedImage;
}
To darken the image you would pass the method a black or gray UIColor with lowered transparency.
How about trying a CoreImage Filter?
You could use the CIColorControls filter to adjust the input brightness and contrast to darken the image.
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:sourceImage]; //your input image
CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
[filter setValue:inputImage forKey:@"inputImage"];
[filter setValue:[NSNumber numberWithFloat:-0.5] forKey:@"inputBrightness"]; // negative brightness darkens
// Your output image
UIImage *outputImage = [UIImage imageWithCGImage:[context createCGImage:filter.outputImage fromRect:filter.outputImage.extent]];
Read more about the CIFilter parameters here:
http://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html%23//apple_ref/doc/filter/ci/CIColorControls
Here's a quick Swift version, using a CIExposureAdjust CIFilter :)
// Get the original image and set up the CIExposureAdjust filter
guard let originalImage = UIImage(named: "myImage"),
let inputImage = CIImage(image: originalImage),
let filter = CIFilter(name: "CIExposureAdjust") else { return }
// The inputEV value on the CIFilter adjusts exposure (negative values darken, positive values brighten)
filter.setValue(inputImage, forKey: "inputImage")
filter.setValue(-2.0, forKey: "inputEV")
// Break early if the filter was not a success (.outputImage is optional in Swift)
guard let filteredImage = filter.outputImage else { return }
let context = CIContext(options: nil)
let outputImage = UIImage(CGImage: context.createCGImage(filteredImage, fromRect: filteredImage.extent))
myImageView.image = outputImage // use the filtered UIImage as required.
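For reference, roughly the same CIExposureAdjust approach in current Swift. The function name and default EV value are mine, not from the answer:

import UIKit
import CoreImage

// Darken (or brighten) an image by adjusting exposure; negative EV darkens.
func adjustedExposure(of sourceImage: UIImage, ev: Double = -2.0) -> UIImage? {
    guard let input = CIImage(image: sourceImage),
          let filter = CIFilter(name: "CIExposureAdjust") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(ev, forKey: kCIInputEVKey)
    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: sourceImage.scale, orientation: sourceImage.imageOrientation)
}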
Here's David's answer in Swift 5:
class func colorizeImage(_ image: UIImage?, with color: UIColor?) -> UIImage? {
UIGraphicsBeginImageContextWithOptions(image?.size ?? CGSize.zero, false, image?.scale ?? 0.0)
let context = UIGraphicsGetCurrentContext()
let area = CGRect(x: 0, y: 0, width: image?.size.width ?? 0.0, height: image?.size.height ?? 0.0)
context?.scaleBy(x: 1, y: -1)
context?.translateBy(x: 0, y: -area.size.height)
context?.saveGState()
context?.clip(to: area, mask: (image?.cgImage)!)
color?.set()
context?.fill(area)
context?.restoreGState()
if let context = context {
context.setBlendMode(.multiply)
}
context!.draw((image?.cgImage)!, in: area)
let colorizedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return colorizedImage
}
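A usage sketch for the darkening case this question asks about (the helper type and asset name are assumptions): passing a translucent black means only the opaque pixels get darkened.

// Hypothetical usage, assuming colorizeImage lives on a helper type named ImageHelper
// and "badge" exists in the asset catalog.
let darkened = ImageHelper.colorizeImage(UIImage(named: "badge"),
                                         with: UIColor(white: 0.0, alpha: 0.5))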

Making a UIImageView grey

I'm setting UIImageViews within table cells using setImageWithURL: from the AFNetworking library, but I need the images to be greyscale... Is there any way I can do this? I've tried a few UIImage greyscale converters, but I guess they don't work because I'm setting something that hasn't downloaded yet.
Try this method:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);
// Create bitmap image info from pixel data in current context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
// Create a new UIImage object
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
// Release colorspace, context and bitmap information
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CFRelease(imageRef);
// Return the new grayscale image
return newImage;
}
I found it here: http://mobiledevelopertips.com/graphics/convert-an-image-uiimage-to-grayscale.html
I took the code from @Jesse Gumpo's example above, but here it is as a UIImage category.
@implementation UIImage ( Greyscale )
- (UIImage *)greyscaleImage
{
CIImage * beginImage = [ CIImage imageWithCGImage:self.CGImage ] ; // self.CIImage is nil unless the image was created from a CIImage
CIImage * evAdjustedCIImage = nil ;
{
CIFilter * filter = [ CIFilter filterWithName:#"CIColorControls"
keysAndValues:kCIInputImageKey, beginImage
, #"inputBrightness", #0.0
, #"inputContrast", #1.1
, #"inputSaturation", #0.0
, nil ] ;
evAdjustedCIImage = [ filter outputImage ] ;
}
CIImage * resultCIImage = nil ;
{
CIFilter * filter = [ CIFilter filterWithName:#"CIExposureAdjust"
keysAndValues:kCIInputImageKey, evAdjustedCIImage
, #"inputEV", #0.7
, nil ] ;
resultCIImage = [ filter outputImage ] ;
}
CIContext * context = [ CIContext contextWithOptions:nil ] ;
CGImageRef resultCGImage = [ context createCGImage:resultCIImage
fromRect:resultCIImage.extent ] ;
UIImage * result = [ UIImage imageWithCGImage:resultCGImage ] ;
CGImageRelease( resultCGImage ) ;
return result;
}
@end
Now you can just do this:
UIImage * downloadedImage = ... get from AFNetwork results ... ;
downloadedImage = [ downloadedImage greyscaleImage ] ;
... use 'downloadedImage' ...
Instead of a UIImageView, subclass UIView and give it
@property (strong, nonatomic) UIImage *image;
with
@synthesize image = _image;
override the setter
-(void)setImage:(UIImage *)image{
_image = [self makeImageBW:image];
[self setNeedsDisplay];
}
- (UIImage *)makeImageBW:(UIImage *)source
{
CIImage *beginImage = [CIImage imageWithCGImage:source.CGImage]; // source.CIImage is nil unless the UIImage was created from a CIImage
CIImage *blackAndWhite = [CIFilter filterWithName:@"CIColorControls" keysAndValues:kCIInputImageKey, beginImage, @"inputBrightness", [NSNumber numberWithFloat:0.0], @"inputContrast", [NSNumber numberWithFloat:1.1], @"inputSaturation", [NSNumber numberWithFloat:0.0], nil].outputImage;
CIImage *output = [CIFilter filterWithName:@"CIExposureAdjust" keysAndValues:kCIInputImageKey, blackAndWhite, @"inputEV", [NSNumber numberWithFloat:0.7], nil].outputImage;
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef ref = [context createCGImage:output fromRect:output.extent];
UIImage *newImage = [UIImage imageWithCGImage:ref];
CGImageRelease(ref);
return newImage;
}
-(void)drawRect:(CGRect)rect{
[_image drawInRect:rect];
}
somewhere else you can set the image with your NSURLRequest:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT,0),^{
NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil]; // 'request' is your NSURLRequest
if (data){
UIImage *image = [UIImage imageWithData:data];
if (image) dispatch_async(dispatch_get_main_queue(),^{
[view setImage:image];
});
}
});
This thread is a bit old, but I came across it in my search. In this post I've pointed out two different methods in Swift to create a grayscale image that can be displayed in an image view, taking the alpha and scale of the image into account.
import CoreImage
extension UIImage
{
/// Applies grayscale with CIColorControls by settings saturation to 0.0.
/// - Parameter brightness: Default is 0.0.
/// - Parameter contrast: Default is 1.0.
/// - Returns: The grayscale image of self if available.
func grayscaleImage(brightness: Double = 0.0, contrast: Double = 1.0) -> UIImage?
{
if let ciImage = CoreImage.CIImage(image: self, options: nil)
{
let paramsColor: [String : AnyObject] = [ kCIInputBrightnessKey: NSNumber(double: brightness),
kCIInputContrastKey: NSNumber(double: contrast),
kCIInputSaturationKey: NSNumber(double: 0.0) ]
let grayscale = ciImage.imageByApplyingFilter("CIColorControls", withInputParameters: paramsColor)
let processedCGImage = CIContext().createCGImage(grayscale, fromRect: grayscale.extent)
return UIImage(CGImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
}
return nil
}
/// Create a grayscale image with alpha channel. Is 5 times faster than grayscaleImage().
/// - Returns: The grayscale image of self if available.
func convertToGrayScale() -> UIImage?
{
// Create image rectangle with current image width/height * scale
let pixelSize = CGSize(width: self.size.width * self.scale, height: self.size.height * self.scale)
let imageRect = CGRect(origin: CGPointZero, size: pixelSize)
// Grayscale color space
if let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceGray()
{
// Create bitmap content with current image size and grayscale colorspace
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.None.rawValue)
if let context: CGContextRef = CGBitmapContextCreate(nil, Int(pixelSize.width), Int(pixelSize.height), 8, 0, colorSpace, bitmapInfo.rawValue)
{
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, self.CGImage)
// Create bitmap image info from pixel data in current context
if let imageRef: CGImageRef = CGBitmapContextCreateImage(context)
{
let bitmapInfoAlphaOnly = CGBitmapInfo(rawValue: CGImageAlphaInfo.Only.rawValue)
if let contextAlpha = CGBitmapContextCreate(nil, Int(pixelSize.width), Int(pixelSize.height), 8, 0, nil, bitmapInfoAlphaOnly.rawValue)
{
CGContextDrawImage(contextAlpha, imageRect, self.CGImage)
if let mask: CGImageRef = CGBitmapContextCreateImage(contextAlpha)
{
// Create a new UIImage object
if let newCGImage = CGImageCreateWithMask(imageRef, mask)
{
// Return the new grayscale image
return UIImage(CGImage: newCGImage, scale: self.scale, orientation: self.imageOrientation)
}
}
}
}
}
}
// A required variable was unexpectedly nil
return nil
}
}

iOS: create a UIImage or UIImageView with rounded corners

Is it possible to create a UIImage or a UIImageView with rounded corners? I want to take a UIImage and show it inside a UIImageView, but I don't know how to do it.
Yes, it is possible.
Import the QuartzCore (#import <QuartzCore/QuartzCore.h>) header and play with the layer property of the UIImageView.
yourImageView.layer.cornerRadius = yourRadius;
yourImageView.clipsToBounds = YES;
See the CALayer class reference for more info.
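The same thing in Swift, wrapped in a small helper so it is self-contained (a sketch; pick whatever radius you need):

import UIKit

// Round the layer's corners and clip the image to the rounded shape.
func roundCorners(of imageView: UIImageView, radius: CGFloat) {
    imageView.layer.cornerRadius = radius
    imageView.clipsToBounds = true
}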
Try this code for a round image. Import the QuartzCore framework.
A simple way to create a round image:
imageView.layer.backgroundColor=[[UIColor clearColor] CGColor];
imageView.layer.cornerRadius=20;
imageView.layer.borderWidth=2.0;
imageView.layer.masksToBounds = YES;
imageView.layer.borderColor=[[UIColor redColor] CGColor];
Objective-C
-(UIImage *)makeRoundedImage:(UIImage *) image
radius:(float)radius
{
CALayer *imageLayer = [CALayer layer];
imageLayer.frame = CGRectMake(0, 0, image.size.width, image.size.height);
imageLayer.contents = (id) image.CGImage;
imageLayer.masksToBounds = YES;
imageLayer.cornerRadius = radius;
UIGraphicsBeginImageContext(image.size);
[imageLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return roundedImage;
}
Swift 3
func makeRoundedImage(image: UIImage, radius: Float) -> UIImage {
let imageLayer = CALayer()
imageLayer.frame = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
imageLayer.contents = image.cgImage
imageLayer.masksToBounds = true
imageLayer.cornerRadius = CGFloat(radius)
UIGraphicsBeginImageContext(image.size)
imageLayer.render(in: UIGraphicsGetCurrentContext()!)
let roundedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return roundedImage ?? image
}
uiimageview.layer.cornerRadius = uiimageview.frame.size.height/2;
uiimageview.clipsToBounds = YES;
#import <QuartzCore/QuartzCore.h>
// UIImageView+OSExt.h
#import <UIKit/UIKit.h>
@interface UIImageView (OSExt)
- (void)setBorder:(CGFloat)borderWidth color:(UIColor*)color;
@end
// UIImageView+OSExt.m
#import "UIImageView+OSExt.h"
@implementation UIImageView (OSExt)
- (void)layoutSublayersOfLayer:(CALayer *)layer
{
for ( CALayer *sub in layer.sublayers )
{
if ( YES == [sub.name isEqual:@"border-shape"])
{
CGFloat borderHalf = floor([(CAShapeLayer*)sub lineWidth] * .5);
sub.frame = layer.bounds;
[sub setBounds:CGRectInset(layer.bounds, borderHalf, borderHalf)];
[sub setPosition:CGPointMake(CGRectGetMidX(layer.bounds),
CGRectGetMidY(layer.bounds))];
}
}
}
- (void)setBorder:(CGFloat)borderWidth color:(UIColor*)color
{
assert(self.frame.size.width == self.frame.size.height);
for ( CALayer *sub in [NSArray arrayWithArray:self.layer.sublayers] )
{
if ( YES == [sub.name isEqual:@"border-shape"])
{
[sub removeFromSuperlayer];
break;
}
}
CGFloat borderHalf = floor(borderWidth * .5);
self.layer.cornerRadius = self.layer.bounds.size.width * .5;
CAShapeLayer *circleLayer = [CAShapeLayer layer];
self.layer.delegate = (id<CALayerDelegate>)self;
circleLayer.name = @"border-shape";
[circleLayer setBounds:CGRectInset(self.bounds, borderHalf, borderHalf)];
[circleLayer setPosition:CGPointMake(CGRectGetMidX(self.layer.bounds),
CGRectGetMidY(self.layer.bounds))];
[circleLayer setPath:[[UIBezierPath bezierPathWithOvalInRect:circleLayer.bounds] CGPath]];
[circleLayer setStrokeColor:color.CGColor];
[circleLayer setFillColor:[UIColor clearColor].CGColor];
[circleLayer setLineWidth:borderWidth];
{
circleLayer.shadowOffset = CGSizeZero;
circleLayer.shadowColor = [[UIColor whiteColor] CGColor];
circleLayer.shadowRadius = borderWidth;
circleLayer.shadowOpacity = .9f;
circleLayer.shadowOffset = CGSizeZero;
}
// Add the sublayer to the image view's layer tree
[self.layer addSublayer:circleLayer];
// old variant
//CALayer *layer = self.layer;
//layer.masksToBounds = YES;
//layer.cornerRadius = self.frame.size.width * 0.5;
//layer.borderWidth = borderWidth;
//layer.borderColor = color;
}
@end
Setting cornerRadius and clipsToBounds is the right way to do this. However if the view's size changes, the radius will not update. In order to get proper resizing and animation behavior, you need to create a UIImageView subclass.
class RoundImageView: UIImageView {
override var bounds: CGRect {
get {
return super.bounds
}
set {
super.bounds = newValue
setNeedsLayout()
}
}
override func layoutSubviews() {
super.layoutSubviews()
layer.cornerRadius = bounds.width / 2.0
clipsToBounds = true
}
}
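A usage sketch (the image name and frame are placeholders):

// The view stays circular when its bounds change, e.g. during rotation or animation.
let avatarView = RoundImageView(image: UIImage(named: "avatar"))
avatarView.frame = CGRect(x: 0, y: 0, width: 80, height: 80)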
Try this to get rounded corners on the image view and also to color the border:
imageView.layer.cornerRadius = imageView.frame.size.height/2;
imageView.layer.masksToBounds = YES;
imageView.layer.borderColor = [UIColor colorWithRed:148/255. green:79/255. blue:216/255. alpha:1.0].CGColor;
imageView.layer.borderWidth=2;
Condition*: The height and the width of the imageView must be the same to get rounded corners.
layer.cornerRadius = imageviewHeight/2
layer.masksToBounds = true
It is possible, but I'd advise you to create a transparent PNG image (mask) with round corners and place it over your image with a UIImageView. It might be a quicker solution (for example if you need animations or scrolling).
Here is how I set my rounded avatar at the center of its containing view:
-(void)setRoundedAvatar:(UIImageView *)avatarView toDiameter:(float)newSize atView:(UIView *)containedView;
{
avatarView.layer.cornerRadius = newSize/2;
avatarView.clipsToBounds = YES;
avatarView.frame = CGRectMake(0, 0, newSize, newSize);
CGPoint centerValue = CGPointMake(containedView.frame.size.width/2, containedView.frame.size.height/2);
avatarView.center = centerValue;
}
Circle with UIBezierPath (Swift 3, UIImage extension)
class ViewController: UIViewController {
@IBOutlet weak var imageOutlet: UIImageView!
override func viewDidLoad() {
super.viewDidLoad()
let image = UIImage(named: "IMG_0001.JPG")
if let image = image {
let renderimage = image.imageCroppingBezierPath(path: UIBezierPath(arcCenter: CGPoint(x:image.size.width/2,y:image.size.width/2 ) , radius: 200, startAngle: 0, endAngle: (2 * CGFloat(M_PI) ), clockwise: true) )
imageOutlet.image = renderimage
}
}
}
extension UIImage {
func imageCroppingBezierPath(path:UIBezierPath) ->UIImage {
let frame = CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height)
//Defining a graphic context to paint on
UIGraphicsBeginImageContextWithOptions(self.size, false, 0.0)
//Get the current graphics context (if it exists)
let context = UIGraphicsGetCurrentContext()
//save the current graphic context
context?.saveGState()
// clipping area
path.addClip()
self.draw(in: frame)
//To extract an image from our canvas
let image = UIGraphicsGetImageFromCurrentImageContext()
//restore graphic context
context?.restoreGState()
//remove current context from stack
UIGraphicsEndImageContext()
return image!
}
}
Import the QuartzCore framework.
imageView.layer.cornerRadius = imageView.frame.size.width/2;
imageView.layer.masksToBounds = YES;
The height and the width of the imageView must be the same to get rounded corners.

How to use a UIImage as a mask over a color in Objective-C

I have a UIImage that is all black with an alpha channel, so some parts are grayish and some parts are completely see-through. I want to use that image as a mask over some other color (let's say white to make it easy), so the final product is a white image with parts of it transparent.
I've been looking around on the Apple documentation site here:
https://developer.apple.com/library/archive/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_images/dq_images.html#//apple_ref/doc/uid/TP30001066-CH212-CJBHIJEB
But I can't really make sense of those examples.
In iOS 7+ you should use UIImageRenderingModeAlwaysTemplate instead. See https://stackoverflow.com/a/26965557/870313
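A sketch of that iOS 7+ template approach (the asset name is assumed): the image's alpha acts as the mask and the view's tintColor supplies the color.

import UIKit

// Render the black-with-alpha master as a template image; opaque pixels take the tint color.
let master = UIImage(named: "blackWithAlphaMaster")?.withRenderingMode(.alwaysTemplate)
let maskedView = UIImageView(image: master)
maskedView.tintColor = .white   // whatever color the opaque parts should become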
Creating arbitrarily-colored icons from a black-with-alpha master image (iOS).
// Usage: UIImage *buttonImage = [UIImage ipMaskedImageNamed:@"UIButtonBarAction.png" color:[UIColor redColor]];
+ (UIImage *)ipMaskedImageNamed:(NSString *)name color:(UIColor *)color
{
UIImage *image = [UIImage imageNamed:name];
CGRect rect = CGRectMake(0, 0, image.size.width, image.size.height);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, image.scale);
CGContextRef c = UIGraphicsGetCurrentContext();
[image drawInRect:rect];
CGContextSetFillColorWithColor(c, [color CGColor]);
CGContextSetBlendMode(c, kCGBlendModeSourceAtop);
CGContextFillRect(c, rect);
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
}
Credits to Ole Zorn: https://gist.github.com/1102091
Translating Jano's answer into Swift:
func ipMaskedImageNamed(name:String, color:UIColor) -> UIImage {
let image = UIImage(named: name)
let rect:CGRect = CGRect(origin: CGPoint(x: 0, y: 0), size: CGSize(width: image!.size.width, height: image!.size.height))
UIGraphicsBeginImageContextWithOptions(rect.size, false, image!.scale)
let c:CGContextRef = UIGraphicsGetCurrentContext()
image?.drawInRect(rect)
CGContextSetFillColorWithColor(c, color.CGColor)
CGContextSetBlendMode(c, kCGBlendModeSourceAtop)
CGContextFillRect(c, rect)
let result:UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return result
}
Usage:
myButton.setImage(ipMaskedImageNamed("grayScalePNG", color: UIColor.redColor()), forState: .Normal)
Edit: According to this article, you can turn an image into a mask whose opaque areas are represented by the tint color. Within the asset catalog, under the Attributes Inspector of the image, change Render As to Template Image. No code necessary.
Here's a variation in Swift 3, written as an extension to UIImage:
extension UIImage {
func tinted(color:UIColor) -> UIImage? {
let image = self
let rect:CGRect = CGRect(origin: CGPoint(x: 0, y: 0), size: image.size)
UIGraphicsBeginImageContextWithOptions(rect.size, false, image.scale)
let context = UIGraphicsGetCurrentContext()!
image.draw(in: rect)
context.setFillColor(color.cgColor)
context.setBlendMode(.sourceAtop)
context.fill(rect)
if let result:UIImage = UIGraphicsGetImageFromCurrentImageContext() {
UIGraphicsEndImageContext()
return result
}
else {
return nil
}
}
}
// Usage
let myImage = #imageLiteral(resourceName: "Check.png") // any image
let blueImage = myImage.tinted(color: .blue)
I converted this for OS X here as a category and copied it below. Note this is for non-ARC projects; for ARC projects, the autorelease can be removed.
- (NSImage *)cdsMaskedWithColor:(NSColor *)color
{
CGRect rect = CGRectMake(0, 0, self.size.width, self.size.height);
NSImage *result = [[NSImage alloc] initWithSize:self.size];
[result lockFocusFlipped:self.isFlipped];
NSGraphicsContext *context = [NSGraphicsContext currentContext];
CGContextRef c = (CGContextRef)[context graphicsPort];
[self drawInRect:NSRectFromCGRect(rect)];
CGContextSetFillColorWithColor(c, [color CGColor]);
CGContextSetBlendMode(c, kCGBlendModeSourceAtop);
CGContextFillRect(c, rect);
[result unlockFocus];
return [result autorelease];
}
+ (NSImage *)cdsMaskedImageNamed:(NSString *)name color:(NSColor *)color
{
NSImage *image = [NSImage imageNamed:name];
return [image cdsMaskedWithColor:color];
}