CoreGraphics: draw an image on a white canvas - Objective-C

I have a source image of variable width and height, which I must show in a fullscreen iPad UIImageView with borders added around the image itself. So my task is to create a new image with a white border around it, without the border overlapping the image. Currently the border does overlap the image, drawn via this code:
- (UIImage *)imageWithBorderFromImage:(UIImage *)source
{
    CGSize size = [source size];
    UIGraphicsBeginImageContext(size);
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    [source drawInRect:rect blendMode:kCGBlendModeNormal alpha:1.0];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBStrokeColor(context, 1.0, 1.0, 1.0, 1.0);
    CGContextSetLineWidth(context, 40.0);
    CGContextStrokeRect(context, rect);
    UIImage *testImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return testImg;
}
Can anyone tell me how to first draw a white canvas that is 40 pixels bigger in each direction than the source image, and then draw the image on top of it?

I adjusted your code to make it work. Basically, what it does is:
1. Set the canvas size to the size of your image plus 2 * margin.
2. Fill the whole canvas with the background color (white in your case).
3. Draw your image into the appropriate rectangle (at its initial size, inset by the required margins).
The resulting code is:
- (UIImage *)imageWithBorderFromImage:(UIImage *)source
{
    const CGFloat margin = 40.0f;
    CGSize size = CGSizeMake([source size].width + 2 * margin, [source size].height + 2 * margin);
    UIGraphicsBeginImageContext(size);
    [[UIColor whiteColor] setFill];
    [[UIBezierPath bezierPathWithRect:CGRectMake(0, 0, size.width, size.height)] fill];
    CGRect rect = CGRectMake(margin, margin, size.width - 2 * margin, size.height - 2 * margin);
    [source drawInRect:rect blendMode:kCGBlendModeNormal alpha:1.0];
    UIImage *testImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return testImg;
}
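One caveat worth adding here: UIGraphicsBeginImageContext always creates a 1.0-scale context, so on a Retina iPad the result will look soft. A minimal sketch of the same approach using UIGraphicsBeginImageContextWithOptions instead (my variation, not part of the original answer):
- (UIImage *)imageWithBorderFromImage:(UIImage *)source
{
    const CGFloat margin = 40.0f;
    CGSize size = CGSizeMake(source.size.width + 2 * margin, source.size.height + 2 * margin);
    //pass 0.0 as the scale so the context matches the main screen's scale
    UIGraphicsBeginImageContextWithOptions(size, YES, 0.0);
    [[UIColor whiteColor] setFill];
    UIRectFill(CGRectMake(0, 0, size.width, size.height));
    //draw the source at its original size, inset by the margin on every side
    [source drawInRect:CGRectMake(margin, margin, source.size.width, source.size.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
Passing YES for opaque is safe here because the white fill covers the whole canvas, and an opaque context is slightly cheaper to composite.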

Related

Draw lines in UIView not supported for multiple screen sizes?

I want to draw lines in a UIView based on the UIView's height, i.e. the line height should be equal to the UIView's height. For that I'm using the following code.
- (void)testDraw:(void (^)(UIImage *image, UIImage *selectedImage))done
{
    CGSize imageSize = CGSizeMake(1000, self.bounds.size.height);
    //Begin the image context, checking whether the screen is Retina or ordinary
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0f);
    else
        UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (NSInteger intSample = 0; intSample < 1000; intSample++) {
        if (intSample % self.samplesPerSecond == 0) {
            //draw separator line
            [self drawLine:context fromPoint:CGPointMake(intSample, 0) toPoint:CGPointMake(intSample, self.frame.size.height) color:self.lineColor lineWidth:2.0];
        }
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    CGRect drawRect = CGRectMake(0, 0, image.size.width, self.frame.size.height);
    [self.progressColor set];
    UIRectFillUsingBlendMode(drawRect, kCGBlendModeSourceAtop);
    UIImage *tintedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    done(image, tintedImage);
}
//Draw the line based on the values received
- (void)drawLine:(CGContextRef)context fromPoint:(CGPoint)fromPoint toPoint:(CGPoint)toPoint color:(UIColor *)color lineWidth:(float)lineWidth
{
    CGContextBeginPath(context);
    //add the lines with the sizes that you want
    CGContextSetAlpha(context, 1.0);
    CGContextSetLineWidth(context, lineWidth);
    CGContextSetStrokeColorWithColor(context, [color CGColor]);
    CGContextMoveToPoint(context, fromPoint.x, fromPoint.y);
    CGContextAddLineToPoint(context, toPoint.x, toPoint.y);
    CGContextStrokePath(context);
}
When I execute the above code on a 3.5-inch display, it works fine. But when I run the same code on a 4-inch display, the line height does not increase to match the UIView; the lines don't adapt the way Auto Layout would resize them.
Thanks in advance; can anyone suggest how to fix this issue?
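A likely cause, offered as a hedged guess rather than an answer from the thread: testDraw captures self.bounds before Auto Layout has settled on the final 4-inch height, so the rendered image keeps the old height. One workaround is to re-render whenever the view is laid out; a minimal sketch, assuming hypothetical waveformImage and selectedWaveformImage properties that hold the rendered results:
- (void)layoutSubviews
{
    [super layoutSubviews];
    //re-render so the image height always tracks the current bounds
    [self testDraw:^(UIImage *image, UIImage *selectedImage) {
        self.waveformImage = image;                 //hypothetical property
        self.selectedWaveformImage = selectedImage; //hypothetical property
        [self setNeedsDisplay];
    }];
}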

The rounded pure-color image created by code is blurry

In Objective-C, I make a circle shape programmatically with the following code:
+(UIImage *)makeRoundedImage:(CGSize)size backgroundColor:(UIColor *)backgroundColor cornerRadius:(int)cornerRadius
{
    UIImage *bgImage = [self imageWithColor:backgroundColor andSize:size];
    CALayer *imageLayer = [CALayer layer];
    imageLayer.frame = CGRectMake(0, 0, size.width, size.height);
    imageLayer.contents = (id)bgImage.CGImage;
    imageLayer.masksToBounds = YES;
    imageLayer.cornerRadius = cornerRadius;
    UIGraphicsBeginImageContext(bgImage.size);
    [imageLayer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return roundedImage;
}
The imageWithColor: method is as follows:
+(UIImage *)imageWithColor:(UIColor *)color andSize:(CGSize)size
{
    //quick fix, or it pops up a "CG invalid context 0x0" bug
    if (size.width == 0) size.width = 1;
    if (size.height == 0) size.height = 1;
    //---quick fix
    UIImage *img = nil;
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, color.CGColor);
    CGContextFillRect(context, rect);
    img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Then I used it to create a pure-color circle image, but what I found is that the circle is not perfectly round. As an example, see the following code:
CGSize size = CGSizeMake(diameter, diameter);
int r = ceil((float)diameter/2.0);
UIImage *imageNormal = [self makeRoundedImage:size backgroundColor:backgroundColor cornerRadius:r];
[slider setThumbImage:imageNormal forState:UIControlStateNormal];
First I created a circle image, then I set it as the thumb of a UISlider. But what is shown is the picture below:
You can see the circle is not an exact circle. I'm thinking it is probably caused by a screen-resolution issue, because if I use an image resource for the thumb, I need to add an @2x version. Does anybody know the reason? Thanks in advance.
Update (8 Aug 2015):
Further to this question and the answer from @Noah Witherspoon, the blurry-edge issue has been solved. But the circle still looks like it is being cut off. I used the following code:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f, radius*2.0f);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rect);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
And the circle looks like this:
You can see the edge has been cut off.
I changed the code as follows:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f+4, radius*2.0f+4);
CGRect rectmin = CGRectMake(2.0f, 2.0f, radius*2, radius*2);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rectmin);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
You can see the circle looks better (at the top and bottom edges):
I made the fill rect smaller, and the edge looks better, but I don't think this is a nice solution. Still, does anybody know why this happens?
From your screenshot it looks like you do actually have a circular image, but its scale is wrong—it’s not Retina—so it looks blurry and not-circular. The key thing is that instead of using UIGraphicsBeginImageContext which defaults to a scale of 1.0 (as compared to your screen, which is at a scale of 2.0 or 3.0), you should be using UIGraphicsBeginImageContextWithOptions. Also, you don’t need to make a layer or a view to draw a circle in an image context.
+ (UIImage *)makeCircleImageWithDiameter:(CGFloat)diameter color:(UIColor *)color {
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(diameter, diameter), NO, 0 /* scale (0 means “use the current screen’s scale”) */);
    [color setFill];
    CGContextFillEllipseInRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, diameter, diameter));
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
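For completeness, a short usage sketch tying this back to the slider thumb; MyImageHelper is a placeholder for whichever class hosts the method:
CGFloat diameter = 20.0f; //points, not pixels; the context scale handles Retina
UIImage *thumb = [MyImageHelper makeCircleImageWithDiameter:diameter color:[UIColor redColor]];
[slider setThumbImage:thumb forState:UIControlStateNormal];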
If you want to get a circle every time, try this:
- (UIImage *)makeCircularImage:(CGSize)size backgroundColor:(UIColor *)backgroundColor
{
    CGSize squareSize = CGSizeMake((size.width > size.height) ? size.width : size.height,
                                   (size.width > size.height) ? size.width : size.height);
    UIView *circleView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, squareSize.width, squareSize.height)];
    circleView.layer.cornerRadius = circleView.frame.size.height * 0.5f;
    circleView.backgroundColor = backgroundColor;
    circleView.opaque = NO;
    UIGraphicsBeginImageContextWithOptions(circleView.bounds.size, circleView.opaque, 0.0);
    [circleView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}

Add a white background to an NSImage that is larger than the original image

I am currently adding a solid white background to an NSImage so that in the case that the image has a transparent background it becomes opaque.
What I would like to be able to do is increase the size of the white background so that it extends out all around the original image (thus giving it a white border).
This is what I currently have, but I can't work out how to make the white image larger than the original.
NSImage* importedImage = [bitmapImage image];
NSSize imageSize = [importedImage size];
NSImage* background = [[NSImage alloc] initWithSize:imageSize];
[background lockFocus];
[[NSColor whiteColor] setFill];
[NSBezierPath fillRect:NSMakeRect(0, 0, imageSize.width, imageSize.height)];
[background unlockFocus];
theImage = [[NSImage alloc] initWithSize:imageSize];
[theImage lockFocus];
CGRect newImageRect = CGRectZero;
newImageRect.size = [theImage size];
[background drawInRect:newImageRect];
[importedImage drawInRect:newImageRect];
[theImage unlockFocus];
I finally solved this. First I get the size of the original image and create the white background at this size plus twice the width of the border that I want.
Then I create a new image, draw in the white background, and then draw the original image, making sure that its origin equals the border width.
-(NSImage *)addBorderToImage:(NSImage *)theImage
{
    //get the border slider value...
    float borderWidth = [borderSlider floatValue];
    //get the size of the original image and increase it to include the border size...
    NSBitmapImageRep *bitmap = [[theImage representations] objectAtIndex:0];
    NSSize imageSize = NSMakeSize(bitmap.pixelsWide + (borderWidth * 2), bitmap.pixelsHigh + (borderWidth * 2));
    //create a new image with this size...
    NSImage *borderedImage = [[NSImage alloc] initWithSize:imageSize];
    [borderedImage lockFocus];
    //fill the new image with white...
    [[NSColor whiteColor] setFill];
    [NSBezierPath fillRect:NSMakeRect(0, 0, imageSize.width, imageSize.height)];
    //rect of new image size...
    CGRect newImageRect = CGRectZero;
    newImageRect.size = [borderedImage size];
    //rect of old image size positioned inside the border...
    CGRect oldImageRect = CGRectZero;
    oldImageRect.size = [theImage size];
    oldImageRect.origin.x = borderWidth;
    oldImageRect.origin.y = borderWidth;
    //draw in the white filled image...
    [borderedImage drawInRect:newImageRect];
    //draw in the original image...
    [theImage drawInRect:oldImageRect];
    [borderedImage unlockFocus];
    return borderedImage;
}
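A short usage sketch, under the assumption that the method lives in a controller with an imageView outlet for displaying the result:
NSImage *bordered = [self addBorderToImage:importedImage];
[self.imageView setImage:bordered]; //imageView is an assumed outlet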

Reflection of UIImage without UIImageView

How do I create a reflection of a UIImage without using a UIImageView? I have seen the Apple sample code for reflection using two image views, but I don't want to add an image view to my application; I just want a single image containing both the original image and the reflection. Does anybody know how to do that?
This is from UIImage+FX.m, created by Nick Lockwood:
UIImage *processedImage = /* your image here */;
processedImage = [processedImage imageWithReflectionWithScale:0.15f
                                                          gap:10.0f
                                                        alpha:0.305f];
scale is the reflection's height as a fraction of the image's height, gap is the distance between the image and the reflection, and alpha is the opacity of the reflection.
- (UIImage *)imageWithReflectionWithScale:(CGFloat)scale gap:(CGFloat)gap alpha:(CGFloat)alpha
{
    //get reflected image
    UIImage *reflection = [self reflectedImageWithScale:scale];
    CGFloat reflectionOffset = reflection.size.height + gap;
    //create drawing context
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(self.size.width, self.size.height + reflectionOffset * 2.0f), NO, 0.0f);
    //draw reflection
    [reflection drawAtPoint:CGPointMake(0.0f, reflectionOffset + self.size.height + gap) blendMode:kCGBlendModeNormal alpha:alpha];
    //draw image
    [self drawAtPoint:CGPointMake(0.0f, reflectionOffset)];
    //capture resultant image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    //return image
    return image;
}
and
- (UIImage *)reflectedImageWithScale:(CGFloat)scale
{
    //get reflection dimensions
    CGFloat height = ceil(self.size.height * scale);
    CGSize size = CGSizeMake(self.size.width, height);
    CGRect bounds = CGRectMake(0.0f, 0.0f, size.width, size.height);
    //create drawing context
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();
    //clip to gradient
    CGContextClipToMask(context, bounds, [[self class] gradientMask]);
    //draw reflected image
    CGContextScaleCTM(context, 1.0f, -1.0f);
    CGContextTranslateCTM(context, 0.0f, -self.size.height);
    [self drawInRect:CGRectMake(0.0f, 0.0f, self.size.width, self.size.height)];
    //capture resultant image
    UIImage *reflection = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    //return reflection image
    return reflection;
}
and the gradientMask helper:
+ (CGImageRef)gradientMask
{
    static CGImageRef sharedMask = NULL;
    if (sharedMask == NULL)
    {
        //create gradient mask
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(1, 256), YES, 0.0);
        CGContextRef gradientContext = UIGraphicsGetCurrentContext();
        CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
        CGPoint gradientStartPoint = CGPointMake(0, 0);
        CGPoint gradientEndPoint = CGPointMake(0, 256);
        CGContextDrawLinearGradient(gradientContext, gradient, gradientStartPoint,
                                    gradientEndPoint, kCGGradientDrawsAfterEndLocation);
        sharedMask = CGBitmapContextCreateImage(gradientContext);
        CGGradientRelease(gradient);
        CGColorSpaceRelease(colorSpace);
        UIGraphicsEndImageContext();
    }
    return sharedMask;
}
It returns the image with the reflection.
I wrote a blog post a while ago that uses a view's CAReplicatorLayer. It's really designed for handling dynamic updates to a view with a reflection, but I think it would work for what you want to do as well.
You can render the image and its reflection inside a graphics context and then get a CGImage from the context and from that again a UIImage.
But the question is: why not use two views? Why would you think it is a problem or limitation?
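To make the render-it-yourself suggestion concrete, here is a minimal sketch of drawing an image and its mirror into one context. It skips the gradient fade that the category above applies; the gap and alpha parameters play the same roles as before:
UIImage *ImageWithSimpleReflection(UIImage *source, CGFloat gap, CGFloat alpha)
{
    CGSize size = CGSizeMake(source.size.width, source.size.height * 2.0f + gap);
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();
    //draw the original image at the top of the canvas
    [source drawAtPoint:CGPointZero];
    //flip the CTM so the second draw lands mirrored below the gap
    CGContextTranslateCTM(context, 0.0f, size.height);
    CGContextScaleCTM(context, 1.0f, -1.0f);
    [source drawAtPoint:CGPointZero blendMode:kCGBlendModeNormal alpha:alpha];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}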

UIGraphicsGetImageFromCurrentImageContext counterpart in desktop Cocoa?

I'm trying to create a simple Mac paint application. On iOS I used UIGraphicsGetImageFromCurrentImageContext to update a UIImageView's image in touchesMoved. I'm now trying to achieve the same thing in a Mac app; I want to do:
myNSImageView.image = UIGraphicsGetImageFromCurrentImageContext();
But that function only exists in Cocoa Touch, not in the Cocoa framework. Any tips on where, or whether, I can find a similar function in Cocoa?
Thanks.
Update
Some additional code for completeness.
- (void)mouseDragged:(NSEvent *)event
{
    NSInteger width = canvasView.frame.size.width;
    NSInteger height = canvasView.frame.size.height;
    NSGraphicsContext *graphicsContext = [NSGraphicsContext currentContext];
    CGContextRef contextRef = (CGContextRef)[graphicsContext graphicsPort];
    CGContextRef imageContextRef = CGBitmapContextCreate(0, width, height, 8, width * 4, [NSColorSpace genericRGBColorSpace].CGColorSpace, kCGImageAlphaPremultipliedFirst);
    NSPoint currentPoint = [event locationInWindow];
    CGContextSetRGBStrokeColor(contextRef, 49.0/255.0, 147.0/255.0, 239.0/255.0, 1.0);
    CGContextSetLineWidth(contextRef, 1.0);
    CGContextBeginPath(contextRef);
    CGContextMoveToPoint(contextRef, lastPoint.x, lastPoint.y);
    CGContextAddLineToPoint(contextRef, currentPoint.x, currentPoint.y);
    CGContextDrawPath(contextRef, kCGPathStroke);
    CGImageRef imageRef = CGBitmapContextCreateImage(imageContextRef);
    NSImage *image = [[NSImage alloc] initWithCGImage:imageRef size:NSMakeSize(width, height)];
    canvasView.image = image;
    CFRelease(imageRef);
    CFRelease(imageContextRef);
    lastPoint = currentPoint;
}
The result I'm getting now is that the painting works fine in the upper part of the view, but not in the rest, like in this image (the red part should be the canvasView):
NSInteger width = 1024;
NSInteger height = 768;
CGContextRef contextRef = CGBitmapContextCreate(0, width, height, 8, width*4, [NSColorSpace genericRGBColorSpace].CGColorSpace, kCGImageAlphaPremultipliedFirst);
CGContextSetRGBStrokeColor(contextRef, 49.0/255.0, 147.0/255.0, 239.0/255.0, 1.0);
CGContextFillRect(contextRef, CGRectMake(0.0f, 0.0f, width, height));
CGImageRef imageRef = CGBitmapContextCreateImage(contextRef);
NSImage* image = [[NSImage alloc] initWithCGImage:imageRef size:NSMakeSize(width, height)];
CFRelease(imageRef);
CFRelease(contextRef);
NSGraphicsContext * graphicsContext = [NSGraphicsContext currentContext];
CGContextRef contextRef = (CGContextRef) [graphicsContext graphicsPort];
CGContextRef imageContextRef = CGBitmapContextCreate(0, width, height, 8, width*4, [NSColorSpace genericRGBColorSpace].CGColorSpace, kCGImageAlphaPremultipliedFirst);
You're assuming, first off, that there is a current context, and then you create a separate context.
CGContextSetRGBStrokeColor(contextRef, 49.0/255.0, 147.0/255.0, 239.0/255.0, 1.0);
CGContextSetLineWidth(contextRef, 1.0);
(etc.)
And then you draw into the context you assume exists, not the context you created.
CGImageRef imageRef = CGBitmapContextCreateImage(imageContextRef);
And then you create an image from the context you created and never drew into.
Cut out the attempt to get [NSGraphicsContext currentContext] and its graphics port. You can't assume that there is a current context outside of drawRect:, anyway. Instead, draw into the context you created.
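To sketch what that advice could look like (my reconstruction, not code from the answer): create the bitmap context once, keep it in an ivar, and draw every drag segment into it. Here canvasContext is a hypothetical CGContextRef ivar created with the same CGBitmapContextCreate call as above, for example in awakeFromNib:
- (void)mouseDragged:(NSEvent *)event
{
    NSPoint currentPoint = [canvasView convertPoint:[event locationInWindow] fromView:nil];
    //draw into the context we own, not into [NSGraphicsContext currentContext]
    CGContextSetRGBStrokeColor(canvasContext, 49.0/255.0, 147.0/255.0, 239.0/255.0, 1.0);
    CGContextSetLineWidth(canvasContext, 1.0);
    CGContextBeginPath(canvasContext);
    CGContextMoveToPoint(canvasContext, lastPoint.x, lastPoint.y);
    CGContextAddLineToPoint(canvasContext, currentPoint.x, currentPoint.y);
    CGContextStrokePath(canvasContext);
    //snapshot the persistent context and hand the result to the view
    CGImageRef imageRef = CGBitmapContextCreateImage(canvasContext);
    canvasView.image = [[NSImage alloc] initWithCGImage:imageRef size:canvasView.bounds.size];
    CGImageRelease(imageRef);
    lastPoint = currentPoint;
}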