I want to rotate an image with a UISlider control.
I have done that with the function below:
- (void)rotateImage:(UIImageView *)image duration:(NSTimeInterval)duration
              curve:(int)curve degrees:(CGFloat)degrees
{
    // Set up the animation
    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationDuration:duration];
    [UIView setAnimationCurve:curve];
    [UIView setAnimationBeginsFromCurrentState:YES];

    // The transform matrix
    CGAffineTransform transform = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(degrees));
    image.transform = transform;

    // Commit the changes
    [UIView commitAnimations];
}
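Note that this method assumes a DEGREES_TO_RADIANS macro is defined somewhere in the file, for example:

#define DEGREES_TO_RADIANS(angle) ((angle) / 180.0 * M_PI)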
By using this function the animation works perfectly, but the problem is that I cannot get hold of the image in its rotated state, and I need that rotated image for further processing.
So how can I save the image in its rotated position?
Please help me with this issue.
Thanks.
I found the solution to my question. Use the method below:
- (UIImage *)upsideDownBunny:(CGFloat)radians withImage:(UIImage *)testImage {
    __block CGImageRef cgImg;
    __block CGSize imgSize;
    __block UIImageOrientation orientation;
    dispatch_block_t createStartImgBlock = ^(void) {
        // UIImages should only be accessed from the main thread
        UIImage *img = testImage;
        imgSize = [img size]; // this size will be pre-rotated
        orientation = [img imageOrientation];
        cgImg = CGImageRetain([img CGImage]); // this data is not rotated
    };
    if ([NSThread isMainThread]) {
        createStartImgBlock();
    } else {
        dispatch_sync(dispatch_get_main_queue(), createStartImgBlock);
    }
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    // in iOS 4+ you can let the context allocate memory by passing NULL
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 imgSize.width,
                                                 imgSize.height,
                                                 8,
                                                 imgSize.width * 4,
                                                 colorspace,
                                                 kCGImageAlphaPremultipliedLast);
    // rotate so the image respects the original UIImage's orientation;
    // these corrections are fixed right angles, independent of the requested rotation
    switch (orientation) {
        case UIImageOrientationDown:
            CGContextTranslateCTM(context, imgSize.width, imgSize.height);
            CGContextRotateCTM(context, -M_PI);
            break;
        case UIImageOrientationLeft:
            CGContextTranslateCTM(context, 0.0, imgSize.height);
            CGContextRotateCTM(context, 3.0 * -M_PI / 2.0);
            break;
        case UIImageOrientationRight:
            CGContextTranslateCTM(context, imgSize.width, 0.0);
            CGContextRotateCTM(context, -M_PI / 2.0);
            break;
        default:
            // there are mirrored modes possible,
            // but they aren't generated by the iPhone's camera
            break;
    }
    // rotate the image by the requested angle around its center
    CGContextTranslateCTM(context, +(imgSize.width * 0.5f), +(imgSize.height * 0.5f));
    CGContextRotateCTM(context, -radians);
    CGContextDrawImage(context,
                       CGRectMake(-imgSize.width * 0.5f, -imgSize.height * 0.5f,
                                  imgSize.width, imgSize.height),
                       cgImg);
    // grab the new rotated image
    CGContextFlush(context);
    CGImageRef newCgImg = CGBitmapContextCreateImage(context);
    __block UIImage *newImage;
    dispatch_block_t createRotatedImgBlock = ^(void) {
        // UIImages should only be accessed from the main thread
        newImage = [UIImage imageWithCGImage:newCgImg];
    };
    if ([NSThread isMainThread]) {
        createRotatedImgBlock();
    } else {
        dispatch_sync(dispatch_get_main_queue(), createRotatedImgBlock);
    }
    CGColorSpaceRelease(colorspace);
    CGImageRelease(newCgImg);
    CGImageRelease(cgImg);
    CGContextRelease(context);
    return newImage;
}
Call this method, passing the image you want to rotate as the second argument:

UIImage *rotated2 = [self upsideDownBunny:rotation withImage:sourceImage];

where rotation is the slider value, between 0 and 360 degrees, converted to radians before the call (the method expects radians).
Now we can save the rotated state.
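If saving also means writing the result to disk, a minimal sketch (the file name is illustrative):

NSData *pngData = UIImagePNGRepresentation(rotated2);
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"rotated.png"];
[pngData writeToFile:path atomically:YES];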
I tried the code above; it works, but only for a SQUARE image. Here is another working solution, which redraws the image and keeps the correct width/height:
- (UIImage *)rotatedImage:(UIImage *)imageRotation and:(CGFloat)rotation
{
    // Calculate the destination size
    CGAffineTransform t = CGAffineTransformMakeRotation(rotation);
    CGRect sizeRect = (CGRect){.size = imageRotation.size};
    CGRect destRect = CGRectApplyAffineTransform(sizeRect, t);
    CGSize destinationSize = destRect.size;

    // Draw the image, rotated around the center of the new canvas
    UIGraphicsBeginImageContext(destinationSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, destinationSize.width / 2.0f, destinationSize.height / 2.0f);
    CGContextRotateCTM(context, rotation);
    [imageRotation drawInRect:CGRectMake(-imageRotation.size.width / 2.0f,
                                         -imageRotation.size.height / 2.0f,
                                         imageRotation.size.width,
                                         imageRotation.size.height)];

    // Capture the result
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
The rotation parameter here is given in radians. You can convert from degrees by adding this macro above your function (or directly in your .m file); note that M_PI is already defined in math.h, so there is no need to redefine it:

#define DEGREES_TO_RADIANS(angle) ((angle) / 180.0 * M_PI)
Finally, I called this with an animation:

[UIView animateWithDuration:0.5f delay:0 options:UIViewAnimationOptionCurveEaseIn animations:^{
    // Only used for the ANIMATION of the image view
    imageView.transform = CGAffineTransformRotate(imageView.transform, DEGREES_TO_RADIANS(90));
} completion:^(BOOL finished) {
    // Workaround: set the previous transform back, or the next step will turn the image further
    imageView.transform = CGAffineTransformRotate(imageView.transform, DEGREES_TO_RADIANS(-90));
    imageView.image = [self rotatedImage:imageView.image and:DEGREES_TO_RADIANS(90)];
}];
I hope this helps too!
Cheers.
One approach to further processing is to apply more transformations in sequence, as in:
CGAffineTransform transform = CGAffineTransformScale(previousTransform, newScale, newScale);
in this case you would apply a scaling to your rotated image.
If you need to save this information in order to redo the transformation at some later point, you can simply store the rotation angle and the scaling factor (in my example), and then build the transform once again.
You could also store your CGAffineTransform in an ivar of your class, or use some other mechanism, as sketched below.
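A minimal sketch of that idea (the ivar names are hypothetical):

// Stored state, e.g. in the class extension:
// CGFloat _savedAngle;
// CGFloat _savedScale;

- (CGAffineTransform)rebuiltTransform
{
    // Rebuild the same transform from the stored angle and scale factor
    CGAffineTransform t = CGAffineTransformMakeRotation(_savedAngle);
    return CGAffineTransformScale(t, _savedScale, _savedScale);
}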
EDIT:
If by saving you mean saving to a file, you can convert your view to an image. Note that the following snippet uses AppKit (NSBitmapImageRep), so it applies to a macOS NSView; a UIKit version for iOS follows further down.
NSData *data;
NSBitmapImageRep *rep;
rep = [self bitmapImageRepForCachingDisplayInRect:[self frame]];
[self cacheDisplayInRect:[self frame] toBitmapImageRep:rep];
data = [rep TIFFRepresentation];
then you save the NSData to a file.
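For example (the path is illustrative):

[data writeToFile:@"/tmp/rotated.tiff" atomically:YES];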
For PNG:
UIGraphicsBeginImageContext(self.bounds.size);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* image1 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImagePNGRepresentation(image1);
[imageData writeToFile:filePath atomically:YES];
You can render your rotated image into another image:
UIGraphicsBeginImageContext(aCGSizeToHoldYourImage); // note: takes a CGSize, not a CGRect
CALayer* layer = myRotatedImageView.layer;
[layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
and then get the PNG data from that and save it to a file:

NSData *myPNGData = UIImagePNGRepresentation(viewImage);
[myPNGData writeToFile:@"aFileName.png" atomically:YES]; // use a full path in practice
Proviso, I typed this into StackOverflow, not a compiler :-)
Save the transform value of your UIImageView and apply it the next time you want to use this UIImageView, or even another UIImageView; a sketch follows.
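A minimal sketch of that, storing the transform as a string in NSUserDefaults (the key name is hypothetical):

// save the current transform
NSString *t = NSStringFromCGAffineTransform(imageView.transform);
[[NSUserDefaults standardUserDefaults] setObject:t forKey:@"savedTransform"];

// later, restore it on the same (or another) UIImageView
NSString *saved = [[NSUserDefaults standardUserDefaults] stringForKey:@"savedTransform"];
if (saved) {
    anotherImageView.transform = CGAffineTransformFromString(saved);
}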
In Objective-C, I make a circle shape programmatically with the following code:
+ (UIImage *)makeRoundedImage:(CGSize)size backgroundColor:(UIColor *)backgroundColor cornerRadius:(int)cornerRadius
{
    UIImage *bgImage = [self imageWithColor:backgroundColor andSize:size];
    CALayer *imageLayer = [CALayer layer];
    imageLayer.frame = CGRectMake(0, 0, size.width, size.height);
    imageLayer.contents = (id)bgImage.CGImage;
    imageLayer.masksToBounds = YES;
    imageLayer.cornerRadius = cornerRadius;

    UIGraphicsBeginImageContext(bgImage.size);
    [imageLayer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return roundedImage;
}
The imageWithColor: method is as follows:
+ (UIImage *)imageWithColor:(UIColor *)color andSize:(CGSize)size
{
    // quick fix, or CG pops up an "invalid context 0x0" error
    if (size.width == 0) size.width = 1;
    if (size.height == 0) size.height = 1;

    UIImage *img = nil;
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, color.CGColor);
    CGContextFillRect(context, rect);
    img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Then I used it to create a pure-color circle image, but I found that the circle is not perfectly round. As an example, see the following code:
CGSize size = CGSizeMake(diameter, diameter);
int r = ceil((float)diameter/2.0);
UIImage *imageNormal = [self makeRoundedImage:size backgroundColor:backgroundColor cornerRadius:r];
[slider setThumbImage:imageNormal forState:UIControlStateNormal];
First I created a circle image, then I set that image as the thumb of a UISlider. But what is shown is the picture below:
You can see the circle is not an exact circle. I'm thinking it's probably caused by a screen-resolution issue, because if I use an image resource for the thumb I need to add an @2x version. Does anybody know the reason? Thanks in advance.
Updated on 8 Aug 2015.
Further to this question and the answer from @Noah Witherspoon, I found that the blurry-edge issue has been solved. But the circle still looks like it is being cut off. I used the following code:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f, radius*2.0f);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rect);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
And the circle looks like this:
You can see the edge has been cut off. I changed the code as follows:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f+4, radius*2.0f+4);
CGRect rectmin = CGRectMake(2.0f, 2.0f, radius*2, radius*2);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rectmin);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
You can see the circle looks better (note the top and bottom edges):
I made the fill rect smaller than the context, and the edge looks better, but I don't think this is a nice solution. Still, does anybody know why this happens?
From your screenshot it looks like you do actually have a circular image, but its scale is wrong—it’s not Retina—so it looks blurry and not-circular. The key thing is that instead of using UIGraphicsBeginImageContext which defaults to a scale of 1.0 (as compared to your screen, which is at a scale of 2.0 or 3.0), you should be using UIGraphicsBeginImageContextWithOptions. Also, you don’t need to make a layer or a view to draw a circle in an image context.
+ (UIImage *)makeCircleImageWithDiameter:(CGFloat)diameter color:(UIColor *)color {
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(diameter, diameter), NO, 0 /* scale (0 means "use the current screen's scale") */);
    [color setFill];
    CGContextFillEllipseInRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, diameter, diameter));
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
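To plug this into the slider setup from the question, a usage sketch (the class name and diameter are placeholders):

UIImage *thumb = [MyHelperClass makeCircleImageWithDiameter:30.0f color:backgroundColor];
[slider setThumbImage:thumb forState:UIControlStateNormal];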
If you want to get a circle every time, try this:

- (UIImage *)makeCircularImage:(CGSize)size backgroundColor:(UIColor *)backgroundColor {
    CGFloat side = MAX(size.width, size.height);
    CGSize squareSize = CGSizeMake(side, side);

    UIView *circleView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, squareSize.width, squareSize.height)];
    circleView.layer.cornerRadius = circleView.frame.size.height * 0.5f;
    circleView.backgroundColor = backgroundColor;
    circleView.opaque = NO;

    UIGraphicsBeginImageContextWithOptions(circleView.bounds.size, circleView.opaque, 0.0);
    [circleView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
I am trying to create a snapshot of a UICollectionViewCell by creating a CGBitMapContext. I am not entirely clear on how to do this or how to use the associated classes, but after a bit of research, I have written the following method which is called from inside my UICollectionViewCell subclass:
- (void)snapShotOfCell
{
    float scaleFactor = [[UIScreen mainScreen] scale];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 self.frame.size.width * scaleFactor,
                                                 self.frame.size.height * scaleFactor,
                                                 8,
                                                 self.frame.size.width * scaleFactor * 4,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedFirst);

    CGImageRef image = CGBitmapContextCreateImage(context);
    UIImage *snapShot = [[UIImage alloc] initWithCGImage:image];

    UIImageView *imageView = [[UIImageView alloc] initWithFrame:self.frame];
    imageView.image = snapShot;
    imageView.opaque = YES;
    [self addSubview:imageView];

    CGImageRelease(image);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
}
The result is that the image does not appear. Upon debugging, I can determine that I have a valid (non-nil) context, CGImage, UIImage and UIImageView, but nothing appears on screen. Can someone tell me what I am missing?
You can add this as a category on UIView, and it will be accessible from any view:
- (UIImage *)snapshot
{
    UIGraphicsBeginImageContextWithOptions(self.frame.size, YES /*opaque*/, 0 /*auto scale*/);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Then you just need to call [self addSubview:[[UIImageView alloc] initWithImage:self.snapshot]]; from your cell object.
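For reference, a sketch of the category declaration this assumes (the category name is arbitrary):

// UIView+Snapshot.h
@interface UIView (Snapshot)
- (UIImage *)snapshot;
@end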
[EDIT]
Given the need for asynchronous rendering (totally understandable), this can be achieved using dispatch queues. I think this would work:
typedef void(^ImageOutBlock)(UIImage *image);

- (void)snapshotAsync:(ImageOutBlock)block
{
    CGFloat scale = [[UIScreen mainScreen] scale];
    CALayer *layer = self.layer;
    CGRect frame = self.frame;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^() {
        UIGraphicsBeginImageContextWithOptions(frame.size, YES /*opaque*/, scale);
        [layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        dispatch_async(dispatch_get_main_queue(), ^() {
            block(image);
        });
    });
}
[EDIT]
- (void)execute
{
    __weak typeof(self) weakSelf = self;
    [self snapshotAsync:^(UIImage *image) {
        [weakSelf addSubview:[[UIImageView alloc] initWithImage:image]];
    }];
}
I have an app that takes a screenshot of a UIImageView with the following code:
- (IBAction)screenShot:(id)sender {
    UIGraphicsBeginImageContext(sshot.frame.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
}
This works well, but I need to be able to position where I take the screenshot; basically, I need to grab only a third of the screen (the center portion). I tried using

UIGraphicsBeginImageContext(CGSizeMake(150, 150));

but found that everything is still taken from the (0,0) coordinates. Has anyone any idea how to position this correctly?
Well, the screenshot is taken from a canvas you draw.
So instead of drawing your layer into the whole context, anchored at the top-left corner, you draw it offset so that the portion you want to capture falls inside the context:
// first we make a UIImage from the whole view
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *sourceImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// now we position the image X/Y away from the top-left corner to get the portion we want
UIGraphicsBeginImageContext(sshot.frame.size);
[sourceImage drawAtPoint:CGPointMake(-50, -100)];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(croppedImage, nil, nil, nil);
Alternatively, shift the context before rendering:

UIGraphicsBeginImageContext(sshot.frame.size);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(c, -150, -150); // shift the layer so the region starting at (150,150) lands at the context origin
[self.view.layer renderInContext:c];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
Use this method to crop if you have an image and a specific rect to crop to:
- (UIImage *)cropImage:(UIImage *)image rect:(CGRect)cropRect
{
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
    UIImage *img = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return img;
}
Use it like this:
UIImage *img = [self cropImage:viewImage rect:CGRectMake(150,150,100,100)]; //example
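One caveat: CGImageCreateWithImageInRect works on the underlying CGImage, whose dimensions are in pixels, so if the source UIImage has a scale greater than 1.0 (a Retina capture), multiply the crop rect by that scale first. A sketch:

CGFloat scale = viewImage.scale; // 1.0 on non-Retina, 2.0/3.0 on Retina captures
CGRect pixelRect = CGRectMake(150 * scale, 150 * scale, 100 * scale, 100 * scale);
UIImage *img = [self cropImage:viewImage rect:pixelRect];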
If you like, you can refer to this code. In this example you can get the image covered by the rect from any position and at any zoom scale.
Happy coding :)
Some extracted code for reference is below.
This is the main function used to crop the photo:
- (UIImage *)croppedPhoto
{
    CGFloat ox = self.scrollView.contentOffset.x;
    CGFloat oy = self.scrollView.contentOffset.y;
    CGFloat zoomScale = self.scrollView.zoomScale;

    // the 2.0f here converts points to pixels on a Retina (@2x) screen
    CGFloat cx = (ox + self.cropRectangleButton.frame.origin.x + 15.0f) * 2.0f / zoomScale;
    CGFloat cy = (oy + self.cropRectangleButton.frame.origin.y + 15.0f) * 2.0f / zoomScale;
    CGFloat cw = 300.0f / zoomScale;
    CGFloat ch = 300.0f / zoomScale;
    CGRect cropRect = CGRectMake(cx, cy, cw, ch);
    NSLog(@"---------- cropRect: %@", NSStringFromCGRect(cropRect));
    NSLog(@"--- self.photo.size: %@", NSStringFromCGSize(self.photo.size));

    CGImageRef imageRef = CGImageCreateWithImageInRect([self.photo CGImage], cropRect);
    UIImage *result = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    NSLog(@"------- result.size: %@", NSStringFromCGSize(result.size));
    return result;
}
Details of how to use the example are given here.
Enjoy coding :)
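One tweak worth noting: the hardcoded 2.0f in the cx/cy calculation assumes a Retina (@2x) screen. Querying the screen scale keeps the math correct on @1x and @3x devices as well, for example:

CGFloat screenScale = [[UIScreen mainScreen] scale]; // 1.0, 2.0 or 3.0
CGFloat cx = (ox + self.cropRectangleButton.frame.origin.x + 15.0f) * screenScale / zoomScale;
CGFloat cy = (oy + self.cropRectangleButton.frame.origin.y + 15.0f) * screenScale / zoomScale;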
How do I create a reflection of a UIImage without using a UIImageView? I have seen the Apple sample code for reflection using two image views, but I don't want to add an image view to my application; I just want a single image containing both the original image and its reflection. Does anybody know how to do that?
This is from UIImage+FX.m, created by Nick Lockwood
UIImage *processedImage = ...; // your image here
processedImage = [processedImage imageWithReflectionWithScale:0.15f
                                                          gap:10.0f
                                                        alpha:0.305f];
Scale is the size of the reflection, gap the distance between the image and the reflection, and alpha the alpha of the reflection
- (UIImage *)imageWithReflectionWithScale:(CGFloat)scale gap:(CGFloat)gap alpha:(CGFloat)alpha
{
    // get reflected image
    UIImage *reflection = [self reflectedImageWithScale:scale];
    CGFloat reflectionOffset = reflection.size.height + gap;

    // create drawing context
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(self.size.width, self.size.height + reflectionOffset * 2.0f), NO, 0.0f);

    // draw reflection
    [reflection drawAtPoint:CGPointMake(0.0f, reflectionOffset + self.size.height + gap) blendMode:kCGBlendModeNormal alpha:alpha];

    // draw image
    [self drawAtPoint:CGPointMake(0.0f, reflectionOffset)];

    // capture resultant image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // return image
    return image;
}
and
- (UIImage *)reflectedImageWithScale:(CGFloat)scale
{
    // get reflection dimensions
    CGFloat height = ceil(self.size.height * scale);
    CGSize size = CGSizeMake(self.size.width, height);
    CGRect bounds = CGRectMake(0.0f, 0.0f, size.width, size.height);

    // create drawing context
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // clip to gradient
    CGContextClipToMask(context, bounds, [[self class] gradientMask]);

    // draw reflected image
    CGContextScaleCTM(context, 1.0f, -1.0f);
    CGContextTranslateCTM(context, 0.0f, -self.size.height);
    [self drawInRect:CGRectMake(0.0f, 0.0f, self.size.width, self.size.height)];

    // capture resultant image
    UIImage *reflection = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // return reflection image
    return reflection;
}
gradientMask
+ (CGImageRef)gradientMask
{
    static CGImageRef sharedMask = NULL;
    if (sharedMask == NULL)
    {
        // create gradient mask
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(1, 256), YES, 0.0);
        CGContextRef gradientContext = UIGraphicsGetCurrentContext();
        CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
        CGPoint gradientStartPoint = CGPointMake(0, 0);
        CGPoint gradientEndPoint = CGPointMake(0, 256);
        CGContextDrawLinearGradient(gradientContext, gradient, gradientStartPoint,
                                    gradientEndPoint, kCGGradientDrawsAfterEndLocation);
        sharedMask = CGBitmapContextCreateImage(gradientContext);
        CGGradientRelease(gradient);
        CGColorSpaceRelease(colorSpace);
        UIGraphicsEndImageContext();
    }
    return sharedMask;
}
It returns the image with the reflection.
I wrote a blog post a while ago that uses a view's CAReplicatorLayer. It's really designed for handling dynamic updates to a view with a reflection, but I think it would work for what you want to do as well.
You can render the image and its reflection inside a graphics context, then get a CGImage from the context, and from that again a UIImage (see the sketch below).
But the question is: why not use two views? Why would you think it is a problem or limitation?
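As a minimal sketch of that one-image approach (the method name and the simple full-height reflection are assumptions, not from the library mentioned above):

- (UIImage *)imageWithSimpleReflection:(UIImage *)image
{
    // canvas twice as tall as the image: original on top, reflection below
    CGSize size = CGSizeMake(image.size.width, image.size.height * 2.0f);
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // draw the original image in the top half
    [image drawAtPoint:CGPointZero];

    // flip the context vertically about the image's bottom edge...
    CGContextTranslateCTM(context, 0.0f, image.size.height * 2.0f);
    CGContextScaleCTM(context, 1.0f, -1.0f);

    // ...and draw the mirrored copy at reduced alpha in the bottom half
    [image drawAtPoint:CGPointZero blendMode:kCGBlendModeNormal alpha:0.3f];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}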
I am using the QREncoder library found here: https://github.com/jverkoey/ObjQREncoder
Basically, I looked at the example code by this author, and when he creates the QR code it comes out perfectly, with no pixelation. The image the library provides is 33 x 33 pixels, but he uses kCAFilterNearest to magnify it and keep it very clear (no pixelation). Here is his code:
UIImage *image = [QREncoder encode:@"http://www.google.com/"];
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
CGFloat qrSize = self.view.bounds.size.width - kPadding * 2;
imageView.frame = CGRectMake(kPadding, (self.view.bounds.size.height - qrSize) / 2,
                             qrSize, qrSize);
[imageView layer].magnificationFilter = kCAFilterNearest;
[self.view addSubview:imageView];
I have a UIImageView in a xib, and I am setting its image like this:

[[template imageVQRCode] setImage:[QREncoder encode:ticketNum]];
[[[template imageVQRCode] layer] setMagnificationFilter:kCAFilterNearest];

but the QR code is really blurry. In the example, it comes out crystal clear.
What am I doing wrong?
Thanks!
UPDATE: I found out that the problem isn't with scaling or anything to do with kCAFilterNearest. It has to do with generating the PNG image from the view. Here's how it looks on the device vs. how it looks when I save the UIView to its PNG representation (notice the QR code's quality):
UPDATE 2: This is how I am generating the PNG file from the UIView:
UIGraphicsBeginImageContextWithOptions([[template view] bounds].size, YES, 0.0);
[[[template view] layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(viewImage) writeToFile:plistPath atomically:YES];
I have used the function below for resizing the image:
- (UIImage *)resizedImage:(CGSize)newSize interpolationQuality:(CGInterpolationQuality)quality
{
    BOOL drawTransposed;
    switch (self.imageOrientation) {
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            drawTransposed = YES;
            break;
        default:
            drawTransposed = NO;
    }
    return [self resizedImage:newSize
                    transform:[self transformForOrientation:newSize]
               drawTransposed:drawTransposed
         interpolationQuality:quality];
}
- (UIImage *)resizedImage:(CGSize)newSize
                transform:(CGAffineTransform)transform
           drawTransposed:(BOOL)transpose
     interpolationQuality:(CGInterpolationQuality)quality
{
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
    CGImageRef imageRef = self.CGImage;

    // Build a context that's the same dimensions as the new size
    // (mask out the alpha bits so the comparison below is meaningful)
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGImageAlphaInfo alphaInfo = bitmapInfo & kCGBitmapAlphaInfoMask;
    if ((alphaInfo == kCGImageAlphaLast) || (alphaInfo == kCGImageAlphaNone))
        bitmapInfo = kCGImageAlphaNoneSkipLast;
    CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                newRect.size.width,
                                                newRect.size.height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                0,
                                                CGImageGetColorSpace(imageRef),
                                                bitmapInfo);

    // Rotate and/or flip the image if required by its orientation
    CGContextConcatCTM(bitmap, transform);

    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(bitmap, quality);

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);

    // Get the resized image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    // Clean up
    CGContextRelease(bitmap);
    CGImageRelease(newImageRef);
    return newImage;
}
UIImageWriteToSavedPhotosAlbum([image resizedImage:CGSizeMake(300, 300) interpolationQuality:kCGInterpolationNone], nil, nil, nil);
Please see the image below and let me know if you need any help.