This is what I am trying to do:

Create a UIImageView.
Do some drawing on it.
Press a share button to share the image with the drawings on it.

My code works perfectly on iPad and iPhone; the problem appears on Retina displays. So my guess is that some scale factor is not handled correctly, but I am not sure what I am doing wrong. This is my code:
// Create the UIImageView named centerCanvas
// Do the drawing
CGPoint origin = centerCanvas.frame.origin;
CGSize size = centerCanvas.frame.size;
CGSize screenSize = [self returnScreenSize];
CGRect rect = CGRectMake(origin.x, origin.y, size.width, size.height);
UIGraphicsBeginImageContext(screenSize);
CGContextRef context = UIGraphicsGetCurrentContext();
[self.view.layer renderInContext:context];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], rect);
[shareView setImage:[UIImage imageWithCGImage:imageRef]];
- (CGSize)returnScreenSize
{
    CGRect screenBounds = [[UIScreen mainScreen] bounds];
    CGFloat screenScale = [[UIScreen mainScreen] scale];
    CGSize screenSize = CGSizeMake(screenBounds.size.width * screenScale, screenBounds.size.height * screenScale);
    return screenSize;
}
Try using
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
instead of the old UIGraphicsBeginImageContext, which uses a scale factor (the third argument above) of 1.0. Passing 0.0 as the scale factor gives you the scale factor of the current screen. Have a look at the reference.
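Note that the crop in the question then also needs the scale applied: CGImageCreateWithImageInRect works in pixels, not points, so the rect has to be multiplied by the image's scale. A minimal sketch of the whole capture-and-crop, assuming the centerCanvas and shareView from the question:

CGRect cropRect = centerCanvas.frame; // in points

UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// CGImageCreateWithImageInRect works in pixel coordinates, so scale
// the point-based crop rect by the snapshot's scale factor first.
CGFloat scale = img.scale;
CGRect pixelRect = CGRectMake(cropRect.origin.x * scale,
                              cropRect.origin.y * scale,
                              cropRect.size.width * scale,
                              cropRect.size.height * scale);
CGImageRef imageRef = CGImageCreateWithImageInRect(img.CGImage, pixelRect);

// Preserve the scale so the cropped image displays at the right size.
[shareView setImage:[UIImage imageWithCGImage:imageRef
                                        scale:scale
                                  orientation:UIImageOrientationUp]];
CGImageRelease(imageRef); // the create function returns a +1 reference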
Related
I want to draw lines in a UIView based on the UIView's height, i.e., the line height should be equal to the UIView's height. For that I'm using the following code.
- (void)testDraw:(void (^)(UIImage *image, UIImage *selectedImage))done
{
    CGSize imageSize = CGSizeMake(1000, self.bounds.size.height);
    // Begin the image context, checking whether the screen is Retina or ordinary
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0f);
    else
        UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (NSInteger intSample = 0; intSample < 1000; intSample++) {
        if (intSample % self.samplesPerSecond == 0) {
            // draw separator line
            [self drawLine:context fromPoint:CGPointMake(intSample, 0) toPoint:CGPointMake(intSample, self.frame.size.height) color:self.lineColor lineWidth:2.0];
        }
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    CGRect drawRect = CGRectMake(0, 0, image.size.width, self.frame.size.height);
    [self.progressColor set];
    UIRectFillUsingBlendMode(drawRect, kCGBlendModeSourceAtop);
    UIImage *tintedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    done(image, tintedImage);
}
// Draw the line based on the values received
- (void)drawLine:(CGContextRef)context fromPoint:(CGPoint)fromPoint toPoint:(CGPoint)toPoint color:(UIColor *)color lineWidth:(float)lineWidth
{
    CGContextBeginPath(context);
    // add the lines with the sizes that you want
    CGContextSetAlpha(context, 1.0);
    CGContextSetLineWidth(context, lineWidth);
    CGContextSetStrokeColorWithColor(context, [color CGColor]);
    CGContextMoveToPoint(context, fromPoint.x, fromPoint.y);
    CGContextAddLineToPoint(context, toPoint.x, toPoint.y);
    CGContextStrokePath(context);
}
When I execute the above code on a 3.5-inch display, it works fine. But when I run the same code on a 4-inch display, the line height does not increase with the UIView; the lines do not adapt the way Auto Layout views do.
Thanks in advance. Can anyone give a suggestion to fix this issue?
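One possibility, offered purely as an assumption since no answer appears here: the image is generated before Auto Layout has given the view its final 4-inch-screen bounds, so a stale height gets baked into the image. Regenerating the image whenever the view's size actually changes would address that. A minimal sketch:

- (void)layoutSubviews
{
    [super layoutSubviews];
    // Bounds are final at this point, so redraw the lines at the new height.
    [self testDraw:^(UIImage *image, UIImage *selectedImage) {
        // update whatever displays these images, e.g. an image view (assumed)
    }];
}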
In Objective-C, I make a circle shape programmatically with the following code:
+ (UIImage *)makeRoundedImage:(CGSize)size backgroundColor:(UIColor *)backgroundColor cornerRadius:(int)cornerRadius
{
    UIImage *bgImage = [self imageWithColor:backgroundColor andSize:size];
    CALayer *imageLayer = [CALayer layer];
    imageLayer.frame = CGRectMake(0, 0, size.width, size.height);
    imageLayer.contents = (id)bgImage.CGImage;
    imageLayer.masksToBounds = YES;
    imageLayer.cornerRadius = cornerRadius;
    UIGraphicsBeginImageContext(bgImage.size);
    [imageLayer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return roundedImage;
}
The imageWithColor method is as follows:
+ (UIImage *)imageWithColor:(UIColor *)color andSize:(CGSize)size
{
    // quick fix, or CG pops up an "invalid context 0x0" error
    if (size.width == 0) size.width = 1;
    if (size.height == 0) size.height = 1;
    // ---quick fix
    UIImage *img = nil;
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, color.CGColor);
    CGContextFillRect(context, rect);
    img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Then I used it to create a solid-color circle image, but I found that the circle image is not perfectly round. As an example, please see the following code:
CGSize size = CGSizeMake(diameter, diameter);
int r = ceil((float)diameter/2.0);
UIImage *imageNormal = [self makeRoundedImage:size backgroundColor:backgroundColor cornerRadius:r];
[slider setThumbImage:imageNormal forState:UIControlStateNormal];
First I created a circle image, then I set the image as the thumb of a UISlider. But what is shown is the picture below:
You can see the circle is not an exact circle. I'm thinking it is probably caused by a screen-resolution issue, because if I use an image resource for the thumb I need to add an @2x version. Does anybody know the reason? Thanks in advance.
Updated on 8 Aug 2015.
Further to this question and the answer from @Noah Witherspoon, I found that the blurry-edge issue has been solved. But the circle still looks as if it has been cut off. I used the following code:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f, radius*2.0f);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rect);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
And the circle looks like this:
You can see that the edge has been cut off.
I changed the code as follows:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f+4, radius*2.0f+4);
CGRect rectmin = CGRectMake(2.0f, 2.0f, radius*2, radius*2);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rectmin);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
You can see the circle looks better (note the top and bottom edges):
I made the fill rect smaller than the context, and the edge looks better, but I don't think that's a nice solution. Still, does anybody know why this happens?
From your screenshot it looks like you do actually have a circular image, but its scale is wrong—it’s not Retina—so it looks blurry and not-circular. The key thing is that instead of using UIGraphicsBeginImageContext which defaults to a scale of 1.0 (as compared to your screen, which is at a scale of 2.0 or 3.0), you should be using UIGraphicsBeginImageContextWithOptions. Also, you don’t need to make a layer or a view to draw a circle in an image context.
+ (UIImage *)makeCircleImageWithDiameter:(CGFloat)diameter color:(UIColor *)color
{
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(diameter, diameter), NO, 0 /* scale (0 means "use the current screen's scale") */);
    [color setFill];
    CGContextFillEllipseInRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, diameter, diameter));
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
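Applied to the slider-thumb case from the question, usage might look like this (a sketch; it assumes the method lives on the class that creates the thumb):

UIImage *thumb = [self makeCircleImageWithDiameter:diameter color:backgroundColor];
[slider setThumbImage:thumb forState:UIControlStateNormal];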
If you want to get a circle every time, try this:
- (UIImage *)makeCircularImage:(CGSize)size backgroundColor:(UIColor *)backgroundColor
{
    CGSize squareSize = CGSizeMake((size.width > size.height) ? size.width : size.height,
                                   (size.width > size.height) ? size.width : size.height);
    UIView *circleView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, squareSize.width, squareSize.height)];
    circleView.layer.cornerRadius = circleView.frame.size.height * 0.5f;
    circleView.backgroundColor = backgroundColor;
    circleView.opaque = NO;
    UIGraphicsBeginImageContextWithOptions(circleView.bounds.size, circleView.opaque, 0.0);
    [circleView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Hi, I want to convert my UIView to a UIImage at a particular frame size. Kindly help me.
I have a UITableView which is added as a subview of a UIScrollView for horizontal scrolling; my table view's frame size is (0, 0, 12000, 768).
I want to convert the currently visible area of my UITableView to a UIImage after scrolling.
Example:
If I scroll my table view horizontally some distance, the currently visible area might be (150, 0, 1200, 768), which is the full device screen.
If I use the following code:
UIGraphicsBeginImageContext(self.frame.size);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *backgroundImage = UIGraphicsGetImageFromCurrentImageContext();
it always captures the frame (0, 0, 1200, 768) only.
So, how can I set the origin of the image capture?
Kindly help me out... Thanks in advance...
- (UIImage *)rasterizedImageInView:(UIView *)view atRect:(CGRect)rect
{
    UIGraphicsBeginImageContextWithOptions(rect.size, view.opaque, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Shift the view's contents so the requested rect lands at the context's origin.
    CGContextTranslateCTM(context, -rect.origin.x, -rect.origin.y);
    [view.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
This will allow you to capture a UIImage from a particular region within a UIView as defined by a CGRect.
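For the visible-area case in the question, the rect's origin can come from the scroll view's content offset. A usage sketch (the scrollView and tableView variable names are assumed):

CGRect visibleRect = CGRectMake(scrollView.contentOffset.x,
                                scrollView.contentOffset.y,
                                scrollView.bounds.size.width,
                                scrollView.bounds.size.height);
UIImage *snapshot = [self rasterizedImageInView:tableView atRect:visibleRect];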
CGRect rect = [self.view bounds];
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[self.view.layer renderInContext:context];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Try this:
UIView *viewprint = [[UIView alloc] initWithFrame:CGRectMake(0, 0, width, height)];
viewprint.backgroundColor = [UIColor whiteColor];
tableView.frame = CGRectMake(0, 0, width, height);
[viewprint addSubview:tableView];
UIGraphicsBeginImageContext(viewprint.bounds.size); // UIView has no 'size' property; use bounds.size
CGContextRef context = UIGraphicsGetCurrentContext();
[viewprint.layer renderInContext:context]; // render the container, not an undefined 'view'
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
How do I create a reflection of a UIImage without using a UIImageView? I have seen the Apple sample code for reflections that uses two image views, but I don't want to add an image view to my application; I just want a single image containing the original image and the reflected image. Does anybody know how to do that?
This is from UIImage+FX.m, created by Nick Lockwood
UIImage *processedImage = ...; // your image here
processedImage = [processedImage imageWithReflectionWithScale:0.15f
                                                          gap:10.0f
                                                        alpha:0.305f];
scale is the relative size of the reflection, gap is the distance between the image and the reflection, and alpha is the opacity of the reflection:
- (UIImage *)imageWithReflectionWithScale:(CGFloat)scale gap:(CGFloat)gap alpha:(CGFloat)alpha
{
    //get reflected image
    UIImage *reflection = [self reflectedImageWithScale:scale];
    CGFloat reflectionOffset = reflection.size.height + gap;

    //create drawing context
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(self.size.width, self.size.height + reflectionOffset * 2.0f), NO, 0.0f);

    //draw reflection
    [reflection drawAtPoint:CGPointMake(0.0f, reflectionOffset + self.size.height + gap) blendMode:kCGBlendModeNormal alpha:alpha];

    //draw image
    [self drawAtPoint:CGPointMake(0.0f, reflectionOffset)];

    //capture resultant image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    //return image
    return image;
}
and
- (UIImage *)reflectedImageWithScale:(CGFloat)scale
{
    //get reflection dimensions
    CGFloat height = ceil(self.size.height * scale);
    CGSize size = CGSizeMake(self.size.width, height);
    CGRect bounds = CGRectMake(0.0f, 0.0f, size.width, size.height);

    //create drawing context
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();

    //clip to gradient
    CGContextClipToMask(context, bounds, [[self class] gradientMask]);

    //draw reflected image
    CGContextScaleCTM(context, 1.0f, -1.0f);
    CGContextTranslateCTM(context, 0.0f, -self.size.height);
    [self drawInRect:CGRectMake(0.0f, 0.0f, self.size.width, self.size.height)];

    //capture resultant image
    UIImage *reflection = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    //return reflection image
    return reflection;
}
gradientMask
+ (CGImageRef)gradientMask
{
    static CGImageRef sharedMask = NULL;
    if (sharedMask == NULL)
    {
        //create gradient mask
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(1, 256), YES, 0.0);
        CGContextRef gradientContext = UIGraphicsGetCurrentContext();
        CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
        CGPoint gradientStartPoint = CGPointMake(0, 0);
        CGPoint gradientEndPoint = CGPointMake(0, 256);
        CGContextDrawLinearGradient(gradientContext, gradient, gradientStartPoint,
                                    gradientEndPoint, kCGGradientDrawsAfterEndLocation);
        sharedMask = CGBitmapContextCreateImage(gradientContext);
        CGGradientRelease(gradient);
        CGColorSpaceRelease(colorSpace);
        UIGraphicsEndImageContext();
    }
    return sharedMask;
}
It returns the image with the reflection.
I wrote a blog post a while ago that uses a view's CAReplicatorLayer. It's really designed for handling dynamic updates to a view with a reflection, but I think it would work for what you want to do as well.
You can render the image and its reflection inside a graphics context, then get a CGImage from the context, and from that a UIImage again.
But the question is: why not use two views? Why would you think it is a problem or limitation?
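A minimal sketch of the render-in-one-context approach mentioned above (assuming a full-height reflection with no gradient fade; image is the source UIImage):

- (UIImage *)imageWithSimpleReflection:(UIImage *)image
{
    CGFloat width = image.size.width;
    CGFloat height = image.size.height;
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height * 2.0f), NO, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Original image in the top half.
    [image drawAtPoint:CGPointZero];

    // Mirror the coordinate system around y = height and draw again;
    // the second copy lands in the bottom half, vertically flipped.
    CGContextTranslateCTM(context, 0.0f, height * 2.0f);
    CGContextScaleCTM(context, 1.0f, -1.0f);
    [image drawAtPoint:CGPointZero];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}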
I am using the QREncoder library found here: https://github.com/jverkoey/ObjQREncoder
Basically, I looked at the example code by this author, and when he creates the QR code it comes out perfectly, with no pixelation. The image itself that the library provides is 33 x 33 pixels, but he uses kCAFilterNearest to magnify it and keep it very clear (no pixelation). Here is his code:
UIImage *image = [QREncoder encode:@"http://www.google.com/"];
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
CGFloat qrSize = self.view.bounds.size.width - kPadding * 2;
imageView.frame = CGRectMake(kPadding, (self.view.bounds.size.height - qrSize) / 2,
                             qrSize, qrSize);
[imageView layer].magnificationFilter = kCAFilterNearest;
[self.view addSubview:imageView];
I have a UIImageView in a xib, and I am setting its image like this:
[[template imageVQRCode] setImage:[QREncoder encode:ticketNum]];
[[[template imageVQRCode] layer] setMagnificationFilter:kCAFilterNearest];
but the QR code is really blurry. In the example, it comes out crystal clear.
What am I doing wrong?
Thanks!
UPDATE: I found out that the problem isn't with scaling or anything to do with kCAFilterNearest. It has to do with generating the PNG image from the view. Here's how it looks on the device vs. how it looks when I save the UIView to a PNG representation (notice the QR code's quality):
UPDATE 2: This is how I am generating the PNG file from the UIView:
UIGraphicsBeginImageContextWithOptions([[template view] bounds].size, YES, 0.0);
[[[template view] layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(viewImage) writeToFile:plistPath atomically:YES];
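One possibility, offered here as an assumption since no answer appears above: renderInContext: may not apply the layer's magnificationFilter, so the 33 x 33 QR image gets smoothed when the snapshot is scaled up. Drawing the QR CGImage directly with nearest-neighbor interpolation sidesteps that. A sketch, where qrImage is the image from QREncoder and qrSize is the final size from the question's layout (both assumed):

UIGraphicsBeginImageContextWithOptions(CGSizeMake(qrSize, qrSize), YES, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();

// Nearest-neighbor keeps the QR modules as hard-edged squares
// instead of smoothing them during the scale-up.
CGContextSetInterpolationQuality(context, kCGInterpolationNone);

// CGContextDrawImage uses Quartz coordinates, so flip the context
// to keep the code right side up in the resulting UIImage.
CGContextTranslateCTM(context, 0, qrSize);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(0, 0, qrSize, qrSize), qrImage.CGImage);

UIImage *sharpQR = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(sharpQR) writeToFile:plistPath atomically:YES]; // plistPath from the question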
I have used the function below for editing an image.
- (UIImage *)resizedImage:(CGSize)newSize interpolationQuality:(CGInterpolationQuality)quality
{
    BOOL drawTransposed;
    switch (self.imageOrientation) {
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            drawTransposed = YES;
            break;
        default:
            drawTransposed = NO;
    }
    return [self resizedImage:newSize
                    transform:[self transformForOrientation:newSize]
               drawTransposed:drawTransposed
         interpolationQuality:quality];
}
- (UIImage *)resizedImage:(CGSize)newSize
                transform:(CGAffineTransform)transform
           drawTransposed:(BOOL)transpose
     interpolationQuality:(CGInterpolationQuality)quality
{
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
    CGImageRef imageRef = self.CGImage;

    // Build a context that's the same dimensions as the new size
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    if ((bitmapInfo == kCGImageAlphaLast) || (bitmapInfo == kCGImageAlphaNone))
        bitmapInfo = kCGImageAlphaNoneSkipLast;
    CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                newRect.size.width,
                                                newRect.size.height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                0,
                                                CGImageGetColorSpace(imageRef),
                                                bitmapInfo);

    // Rotate and/or flip the image if required by its orientation
    CGContextConcatCTM(bitmap, transform);

    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(bitmap, quality);

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);

    // Get the resized image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    // Clean up
    CGContextRelease(bitmap);
    CGImageRelease(newImageRef);
    return newImage;
}
UIImageWriteToSavedPhotosAlbum([image resizedImage:CGSizeMake(300, 300) interpolationQuality:kCGInterpolationNone], nil, nil, nil);
Please find the image below and let me know if you need any help.