How do I draw a simple line on top of a UIImage or UIImageView?
The following code works by creating a new image the same size as the original, drawing a copy of the original image onto the new image, then drawing a 1-pixel line along the top of the new image.
// UIImage *originalImage = <the image you want to add a line to>
// UIColor *lineColor = <the color of the line>
UIGraphicsBeginImageContext(originalImage.size);
// Pass 1: Draw the original image as the background
[originalImage drawAtPoint:CGPointMake(0,0)];
// Pass 2: Draw the line on top of original image
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context, 1.0);
CGContextMoveToPoint(context, 0, 0);
CGContextAddLineToPoint(context, originalImage.size.width, 0);
CGContextSetStrokeColorWithColor(context, [lineColor CGColor]);
CGContextStrokePath(context);
// Create new image
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
// Tidy up
UIGraphicsEndImageContext();
Alternatively, you could create the line as a CAShapeLayer and add it as a sublayer of the UIImageView's layer (see this answer).
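For illustration, here is a minimal Swift sketch of that alternative (imageView and lineColor are placeholder names for your existing image view and line color):
// Build a 1-point line along the top edge as a CAShapeLayer.
let lineLayer = CAShapeLayer()
let linePath = UIBezierPath()
linePath.move(to: CGPoint(x: 0, y: 0))
linePath.addLine(to: CGPoint(x: imageView.bounds.width, y: 0))
lineLayer.path = linePath.cgPath
lineLayer.strokeColor = lineColor.cgColor
lineLayer.lineWidth = 1.0
// Add it as a sublayer, not a subview; no new image is created.
imageView.layer.addSublayer(lineLayer)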
For Swift 3.0
UIGraphicsBeginImageContext(originalImage.size)
originalImage.draw(at: CGPoint(x: 0, y: 0))
let context = UIGraphicsGetCurrentContext()!
context.setLineWidth(2)
context.move(to: CGPoint(x: 0, y: originalImage.size.height - 2))
context.addLine(to: CGPoint(x: originalImage.size.width, y: originalImage.size.height - 2))
context.setStrokeColor(UIColor.red.cgColor)
context.strokePath()
let finalImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
For Swift 2.3
func drawLineOnImage(size: CGSize, image: UIImage, from: CGPoint, to: CGPoint) -> UIImage {
    UIGraphicsBeginImageContext(size)
    image.drawAtPoint(CGPointZero)
    let context = UIGraphicsGetCurrentContext()
    CGContextSetLineWidth(context, 1.0)
    CGContextSetStrokeColorWithColor(context, UIColor.blueColor().CGColor)
    CGContextMoveToPoint(context, from.x, from.y)
    CGContextAddLineToPoint(context, to.x, to.y)
    CGContextStrokePath(context)
    let resultImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return resultImage
}
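On iOS 10 and later, you could also let UIGraphicsImageRenderer manage the context (including the screen scale) for you. A sketch of the same top-line drawing, assuming the same originalImage and lineColor as above:
func imageWithTopLine(on originalImage: UIImage, lineColor: UIColor) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: originalImage.size)
    return renderer.image { rendererContext in
        // Draw the original image as the background
        originalImage.draw(at: .zero)
        // Stroke a 1-point line along the top edge
        let context = rendererContext.cgContext
        context.setLineWidth(1.0)
        context.setStrokeColor(lineColor.cgColor)
        context.move(to: CGPoint(x: 0, y: 0))
        context.addLine(to: CGPoint(x: originalImage.size.width, y: 0))
        context.strokePath()
    }
}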
Related
In Objective-C, I make a circle shape programmatically with the following code:
+(UIImage *)makeRoundedImage:(CGSize)size backgroundColor:(UIColor *)backgroundColor cornerRadius:(int)cornerRadius
{
    UIImage *bgImage = [self imageWithColor:backgroundColor andSize:size];
    CALayer *imageLayer = [CALayer layer];
    imageLayer.frame = CGRectMake(0, 0, size.width, size.height);
    imageLayer.contents = (id)bgImage.CGImage;
    imageLayer.masksToBounds = YES;
    imageLayer.cornerRadius = cornerRadius;
    UIGraphicsBeginImageContext(bgImage.size);
    [imageLayer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return roundedImage;
}
The imageWithColor method is as follows:
+(UIImage *)imageWithColor:(UIColor *)color andSize:(CGSize)size
{
    // Quick fix to avoid a "CG invalid context 0x0" error for zero sizes
    if (size.width == 0) size.width = 1;
    if (size.height == 0) size.height = 1;
    UIImage *img = nil;
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, color.CGColor);
    CGContextFillRect(context, rect);
    img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Then I used it to create a solid-color circle image, but I found that the circle is not perfectly round. As an example, see the following code:
CGSize size = CGSizeMake(diameter, diameter);
int r = ceil((float)diameter/2.0);
UIImage *imageNormal = [self makeRoundedImage:size backgroundColor:backgroundColor cornerRadius:r];
[slider setThumbImage:imageNormal forState:UIControlStateNormal];
First I created a circle image, then I set the image as the thumb of a UISlider. But what is shown is the picture below:
You can see the circle is not an exact circle. I'm thinking it's probably caused by a screen resolution issue? Because if I use an image resource for the thumb, I need to add an @2x version. Does anybody know the reason? Thanks in advance.
Updated on 8th Aug 2015.
Further to this question and the answer from @Noah Witherspoon, I found the blurry edge issue has been solved. But the circle still looks like it has been clipped. I used the following code:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f, radius*2.0f);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rect);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
And the circle looks like this:
You can see the edge has been cut.
I changed the code as follows:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f+4, radius*2.0f+4);
CGRect rectmin = CGRectMake(2.0f, 2.0f, radius*2, radius*2);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rectmin);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
You can see the circle looks better (at the top and bottom edges):
I made the fill rect smaller, and the edges look better, but I don't think it's a nice solution. Does anybody know why this happens?
From your screenshot it looks like you do actually have a circular image, but its scale is wrong (it's not Retina), so it looks blurry and non-circular. The key thing is that instead of UIGraphicsBeginImageContext, which defaults to a scale of 1.0 (as compared to your screen, which is at a scale of 2.0 or 3.0), you should use UIGraphicsBeginImageContextWithOptions. Also, you don't need to make a layer or a view to draw a circle in an image context.
+ (UIImage *)makeCircleImageWithDiameter:(CGFloat)diameter color:(UIColor *)color {
    // Pass 0 for scale, which means "use the current screen's scale"
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(diameter, diameter), NO, 0);
    [color setFill];
    CGContextFillEllipseInRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, diameter, diameter));
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
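For reference, a Swift sketch of the same fix (the function name is mine; UIGraphicsImageRenderer defaults to the screen's scale, so no explicit scale handling is needed):
func makeCircleImage(diameter: CGFloat, color: UIColor) -> UIImage {
    let size = CGSize(width: diameter, height: diameter)
    return UIGraphicsImageRenderer(size: size).image { _ in
        // Fill an ellipse that exactly fits the square canvas
        color.setFill()
        UIBezierPath(ovalIn: CGRect(origin: .zero, size: size)).fill()
    }
}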
If you want to get a circle every time, try this:
- (UIImage *)makeCircularImage:(CGSize)size backgroundColor:(UIColor *)backgroundColor {
    CGSize squareSize = CGSizeMake((size.width > size.height) ? size.width : size.height,
                                   (size.width > size.height) ? size.width : size.height);
    UIView *circleView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, squareSize.width, squareSize.height)];
    circleView.layer.cornerRadius = circleView.frame.size.height * 0.5f;
    circleView.backgroundColor = backgroundColor;
    circleView.opaque = NO;
    UIGraphicsBeginImageContextWithOptions(circleView.bounds.size, circleView.opaque, 0.0);
    [circleView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
I have a UIImage with a transparent background. If I invert the colors with the following method, the transparent background changes to white and then to black.
I don't want it to invert transparent pixels; how can I do that?
- (UIImage *)getImageWithInvertedPixelsOfImage:(UIImage *)image {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, 2);
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeCopy);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeDifference);
    CGContextSetFillColorWithColor(UIGraphicsGetCurrentContext(), [UIColor whiteColor].CGColor);
    CGContextFillRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, image.size.width, image.size.height));
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
This is how I solved it.
It is the same code that you can find in other answers, but it masks the difference fill with the image itself, in a flipped context (to transform from CG coordinates to UIKit coordinates).
- (UIImage *)invertImage:(UIImage *)originalImage
{
    UIGraphicsBeginImageContext(originalImage.size);
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeCopy);
    CGRect imageRect = CGRectMake(0, 0, originalImage.size.width, originalImage.size.height);
    [originalImage drawInRect:imageRect];
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeDifference);
    // Translate/flip the graphics context (for transforming from CG coords to UI coords)
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0, originalImage.size.height);
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);
    // Mask the difference fill with the image itself, so transparent pixels stay transparent
    CGContextClipToMask(UIGraphicsGetCurrentContext(), imageRect, originalImage.CGImage);
    CGContextSetFillColorWithColor(UIGraphicsGetCurrentContext(), [UIColor whiteColor].CGColor);
    CGContextFillRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, originalImage.size.width, originalImage.size.height));
    UIImage *returnImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return returnImage;
}
I created a Pod from @maggix's answer:
pod 'UIImage+InvertedImage'
This is a Swift 4.0 version of @maggix's answer...
func invertImage(with originalImage: UIImage) -> UIImage? {
    var image: UIImage?
    UIGraphicsBeginImageContext(originalImage.size)
    if let context = UIGraphicsGetCurrentContext() {
        context.setBlendMode(.copy)
        let imageRect = CGRect(x: 0, y: 0, width: originalImage.size.width, height: originalImage.size.height)
        originalImage.draw(in: imageRect)
        context.setBlendMode(.difference)
        // Translate/flip the graphics context (for transforming from CG coords to UI coords)
        context.translateBy(x: 0, y: originalImage.size.height)
        context.scaleBy(x: 1.0, y: -1.0)
        // Mask the difference fill with the image itself
        context.clip(to: imageRect, mask: originalImage.cgImage!)
        context.setFillColor(UIColor.white.cgColor)
        context.fill(CGRect(x: 0, y: 0, width: originalImage.size.width, height: originalImage.size.height))
        image = UIGraphicsGetImageFromCurrentImageContext()
    }
    UIGraphicsEndImageContext()
    return image
}
How do I create a reflection of a UIImage without using a UIImageView? I have seen the Apple sample code for reflection using two image views, but I don't want to add an image view to my application; I just want a single image containing both the original image and its reflection. Does anybody know how to do that?
This is from UIImage+FX.m, created by Nick Lockwood
UIImage *processedImage = ...; // your image here
processedImage = [processedImage imageWithReflectionWithScale:0.15f
                                                          gap:10.0f
                                                        alpha:0.305f];
Here scale is the height of the reflection relative to the image, gap is the distance between the image and the reflection, and alpha is the opacity of the reflection:
- (UIImage *)imageWithReflectionWithScale:(CGFloat)scale gap:(CGFloat)gap alpha:(CGFloat)alpha
{
    // Get reflected image
    UIImage *reflection = [self reflectedImageWithScale:scale];
    CGFloat reflectionOffset = reflection.size.height + gap;
    // Create drawing context
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(self.size.width, self.size.height + reflectionOffset * 2.0f), NO, 0.0f);
    // Draw reflection
    [reflection drawAtPoint:CGPointMake(0.0f, reflectionOffset + self.size.height + gap) blendMode:kCGBlendModeNormal alpha:alpha];
    // Draw image
    [self drawAtPoint:CGPointMake(0.0f, reflectionOffset)];
    // Capture resultant image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Return image
    return image;
}
and
- (UIImage *)reflectedImageWithScale:(CGFloat)scale
{
    // Get reflection dimensions
    CGFloat height = ceil(self.size.height * scale);
    CGSize size = CGSizeMake(self.size.width, height);
    CGRect bounds = CGRectMake(0.0f, 0.0f, size.width, size.height);
    // Create drawing context
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Clip to gradient
    CGContextClipToMask(context, bounds, [[self class] gradientMask]);
    // Draw reflected image
    CGContextScaleCTM(context, 1.0f, -1.0f);
    CGContextTranslateCTM(context, 0.0f, -self.size.height);
    [self drawInRect:CGRectMake(0.0f, 0.0f, self.size.width, self.size.height)];
    // Capture resultant image
    UIImage *reflection = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Return reflection image
    return reflection;
}
and gradientMask:
+ (CGImageRef)gradientMask
{
    static CGImageRef sharedMask = NULL;
    if (sharedMask == NULL)
    {
        // Create gradient mask
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(1, 256), YES, 0.0);
        CGContextRef gradientContext = UIGraphicsGetCurrentContext();
        CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
        CGPoint gradientStartPoint = CGPointMake(0, 0);
        CGPoint gradientEndPoint = CGPointMake(0, 256);
        CGContextDrawLinearGradient(gradientContext, gradient, gradientStartPoint,
                                    gradientEndPoint, kCGGradientDrawsAfterEndLocation);
        sharedMask = CGBitmapContextCreateImage(gradientContext);
        CGGradientRelease(gradient);
        CGColorSpaceRelease(colorSpace);
        UIGraphicsEndImageContext();
    }
    return sharedMask;
}
It returns the image with the reflection.
I wrote a blog post a while ago that uses a view's CAReplicatorLayer. It's really designed for handling dynamic updates to a view with reflection, but I think it would work for what you want to do as well.
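As a rough sketch of that CAReplicatorLayer idea (containerView and image are placeholders, and you may need to tune the geometry for your layout):
let imageSize = image.size
let replicator = CAReplicatorLayer()
// Twice the image height: the original fills the top half,
// the mirrored instance fills the bottom half.
replicator.frame = CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height * 2)
replicator.instanceCount = 2
replicator.instanceAlphaOffset = -0.6 // fade the reflected copy
// The instance transform is applied around the replicator's center,
// so a vertical flip mirrors the top half into the bottom half.
replicator.instanceTransform = CATransform3DMakeScale(1, -1, 1)
let contentLayer = CALayer()
contentLayer.frame = CGRect(origin: .zero, size: imageSize)
contentLayer.contents = image.cgImage
replicator.addSublayer(contentLayer)
containerView.layer.addSublayer(replicator)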
You can render the image and its reflection inside a graphics context, then get a CGImage from the context, and from that again a UIImage; a minimal sketch is shown below.
But the question is: why not use two views? Why would you think it is a problem or limitation?
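A minimal sketch of that render-into-a-context idea (no gradient fade, iOS 10+; the function name is mine):
func imageWithPlainReflection(_ image: UIImage) -> UIImage {
    let size = CGSize(width: image.size.width, height: image.size.height * 2)
    return UIGraphicsImageRenderer(size: size).image { rendererContext in
        // Draw the original in the top half
        image.draw(at: .zero)
        // Flip the context vertically, then draw a faded copy into the bottom half
        let cgContext = rendererContext.cgContext
        cgContext.translateBy(x: 0, y: size.height)
        cgContext.scaleBy(x: 1, y: -1)
        image.draw(in: CGRect(origin: .zero, size: image.size),
                   blendMode: .normal, alpha: 0.4)
    }
}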
I found "Create new UIImage by adding shadow to existing UIImage" and "UIImage, is there an easy way to make it darker or all black", but the selected answers do not work for me.
I have a UIImage which may have some transparent pixels in it, and I need to create a new UIImage with the non-transparent pixels darkened. Is there any way to do this? I was thinking of using UIBezierPath, but I don't know how to do it for only the non-transparent pixels.
This is the class I use to color images even if they are transparent.
+ (UIImage *)colorizeImage:(UIImage *)image withColor:(UIColor *)color {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect area = CGRectMake(0, 0, image.size.width, image.size.height);
    // Flip the context so the CGImage is not drawn upside down
    CGContextScaleCTM(context, 1, -1);
    CGContextTranslateCTM(context, 0, -area.size.height);
    CGContextSaveGState(context);
    // Fill only the non-transparent pixels with the tint color
    CGContextClipToMask(context, area, image.CGImage);
    [color set];
    CGContextFillRect(context, area);
    CGContextRestoreGState(context);
    // Multiply the original image over the tint
    CGContextSetBlendMode(context, kCGBlendModeMultiply);
    CGContextDrawImage(context, area, image.CGImage);
    UIImage *colorizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return colorizedImage;
}
To darken the image, you would pass the method a black or gray UIColor with reduced alpha.
How about trying a Core Image filter?
You could use the CIColorControls filter to adjust the input brightness and contrast to darken the image.
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:sourceImage]; // your input image
CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
[filter setValue:inputImage forKey:@"inputImage"];
// Negative brightness values darken the image; positive values brighten it
[filter setValue:[NSNumber numberWithFloat:-0.5] forKey:@"inputBrightness"];
// Your output image
UIImage *outputImage = [UIImage imageWithCGImage:[context createCGImage:filter.outputImage fromRect:filter.outputImage.extent]];
Read more about the CIFilter parameters here:
http://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html#//apple_ref/doc/filter/ci/CIColorControls
Here's a quick Swift version, using a CIExposureAdjust CIFilter :)
// Get the original image and set up the CIExposureAdjust filter
guard let originalImage = UIImage(named: "myImage"),
      let inputImage = CIImage(image: originalImage),
      let filter = CIFilter(name: "CIExposureAdjust") else { return }
// The inputEV value on the CIFilter adjusts exposure (negative values darken, positive values brighten)
filter.setValue(inputImage, forKey: "inputImage")
filter.setValue(-2.0, forKey: "inputEV")
// Break early if the filter was not a success (.outputImage is optional in Swift)
guard let filteredImage = filter.outputImage else { return }
let context = CIContext(options: nil)
let outputImage = UIImage(CGImage: context.createCGImage(filteredImage, fromRect: filteredImage.extent))
myImageView.image = outputImage // use the filtered UIImage as required.
Here's David's answer in Swift 5:
class func colorizeImage(_ image: UIImage?, with color: UIColor?) -> UIImage? {
    guard let image = image, let cgImage = image.cgImage, let color = color else { return nil }
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    let area = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
    // Flip the context so the CGImage is not drawn upside down
    context.scaleBy(x: 1, y: -1)
    context.translateBy(x: 0, y: -area.size.height)
    context.saveGState()
    // Fill only the non-transparent pixels with the tint color
    context.clip(to: area, mask: cgImage)
    color.set()
    context.fill(area)
    context.restoreGState()
    // Multiply the original image over the tint
    context.setBlendMode(.multiply)
    context.draw(cgImage, in: area)
    return UIGraphicsGetImageFromCurrentImageContext()
}
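For the darkening use case described earlier, usage might look like this (ImageHelper is a hypothetical class hosting the method above):
// Darken by colorizing with half-transparent black
let darkened = ImageHelper.colorizeImage(originalImage,
                                         with: UIColor.black.withAlphaComponent(0.5))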
I have a source image with variable width and height, which I must show fullscreen in an iPad UIImageView, but with borders added around the image itself. So my task is to create a new image with a white border around it that does not overlap the image. Currently I'm doing it with overlapping, via this code:
- (UIImage *)imageWithBorderFromImage:(UIImage *)source
{
    CGSize size = [source size];
    UIGraphicsBeginImageContext(size);
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    [source drawInRect:rect blendMode:kCGBlendModeNormal alpha:1.0];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBStrokeColor(context, 1.0, 1.0, 1.0, 1.0);
    CGContextSetLineWidth(context, 40.0);
    CGContextStrokeRect(context, rect);
    UIImage *testImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return testImg;
}
Can anyone tell me how I can first draw a white canvas that is 40 pixels bigger in each direction than the source image, and then draw the image on it?
I adjusted your code to make it work. Basically what it does:
1. Sets the canvas size to the size of your image + 2*margin in each dimension
2. Fills the whole canvas with the background color (white in your case)
3. Draws your image in the appropriate rectangle (with its initial size and the required margins)
The resulting code is:
- (UIImage *)imageWithBorderFromImage:(UIImage *)source
{
    const CGFloat margin = 40.0f;
    CGSize size = CGSizeMake([source size].width + 2 * margin, [source size].height + 2 * margin);
    UIGraphicsBeginImageContext(size);
    [[UIColor whiteColor] setFill];
    [[UIBezierPath bezierPathWithRect:CGRectMake(0, 0, size.width, size.height)] fill];
    CGRect rect = CGRectMake(margin, margin, size.width - 2 * margin, size.height - 2 * margin);
    [source drawInRect:rect blendMode:kCGBlendModeNormal alpha:1.0];
    UIImage *testImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return testImg;
}
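For comparison, here is a Swift sketch of the same margin approach using UIGraphicsImageRenderer (iOS 10+); the 40-point margin matches the question:
func imageWithBorder(from source: UIImage, margin: CGFloat = 40) -> UIImage {
    let size = CGSize(width: source.size.width + 2 * margin,
                      height: source.size.height + 2 * margin)
    return UIGraphicsImageRenderer(size: size).image { _ in
        // Fill the enlarged canvas with white...
        UIColor.white.setFill()
        UIBezierPath(rect: CGRect(origin: .zero, size: size)).fill()
        // ...then draw the source image inset by the margin on every side.
        source.draw(in: CGRect(x: margin, y: margin,
                               width: source.size.width,
                               height: source.size.height))
    }
}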