I am working on an iOS app. I need to crop an image that is generated from a PDF.
Sometimes the image resolution can be very large.
I use the following code to generate the cropped image. My problem is that memory usage keeps increasing and is never released.
- (UIImage *)croppedImageWithFrame:(CGRect)frame angle:(NSInteger)angle
{
    UIImage *croppedImage = nil;
    CGPoint drawPoint = CGPointZero;
    UIGraphicsBeginImageContextWithOptions(frame.size, YES, self.scale);
    {
        CGContextRef context = UIGraphicsGetCurrentContext();

        // To conserve memory by not completely re-rendering the rotated image,
        // map the image to a view and then use Core Animation to manipulate its rotation
        if (angle != 0) {
            UIImageView *imageView = [[UIImageView alloc] initWithImage:self];
            imageView.layer.minificationFilter = @"nearest";
            imageView.layer.magnificationFilter = @"nearest";
            imageView.transform = CGAffineTransformRotate(CGAffineTransformIdentity, angle * (M_PI / 180.0f));
            CGRect rotatedRect = CGRectApplyAffineTransform(imageView.bounds, imageView.transform);
            UIView *containerView = [[UIView alloc] initWithFrame:(CGRect){CGPointZero, rotatedRect.size}];
            [containerView addSubview:imageView];
            imageView.center = containerView.center;
            CGContextTranslateCTM(context, -frame.origin.x, -frame.origin.y);
            [containerView.layer renderInContext:context];
        }
        else {
            CGContextTranslateCTM(context, -frame.origin.x, -frame.origin.y);
            [self drawAtPoint:drawPoint];
        }

        croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    }
    UIGraphicsEndImageContext();
    return croppedImage;
}
When I debug, it burns 100 MB at this line:
UIGraphicsBeginImageContextWithOptions(frame.size, YES, self.scale);
Then, when it runs into the following line, it burns another 150 MB:
[self drawAtPoint:drawPoint];
When it reaches the following line, it releases 100 MB:
UIGraphicsEndImageContext();
After it's done, the 150 MB is never released.
I thought UIGraphicsEndImageContext() should release all 250 MB. Why doesn't it?
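One thing worth checking (a guess, not from the original post): UIGraphicsGetImageFromCurrentImageContext() returns an autoreleased UIImage, so its large backing bitmap can stay alive until the enclosing autorelease pool drains. A minimal sketch of forcing an earlier drain, with pdfImage, cropFrame, and cropAngle as hypothetical placeholders:

@autoreleasepool {
    // Temporaries autoreleased while drawing (including the returned image,
    // if it is not kept) are freed when this pool drains, rather than at the
    // end of the current run-loop iteration.
    UIImage *cropped = [pdfImage croppedImageWithFrame:cropFrame angle:cropAngle];
    // ... use or persist `cropped` here ...
}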
I should point out that drawing and rendering in Objective-C is my weak point. Now, here's my problem.
I want to add a 'Day/Night' feature to my game. It has lots of objects on a map. Every object is a UIView containing some data in variables and some UIImageViews: the sprite, and, for some of the objects, a hidden ring (used to show selection).
I want to be able to darken the content of the UIView, but I can't figure out how. The sprite is a PNG with transparency. I've just managed to add a black rectangle behind the sprite using this:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextSetRGBFillColor(ctx, 0, 0, 0, 0.5);
CGContextFillRect(ctx, rect);
CGContextRestoreGState(ctx);
From what I've read, this should be done in the drawRect: method, as in the sketch below. Help, please!
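For reference, a minimal sketch of the snippet above placed inside a drawRect: override; the subclass name is hypothetical:

@implementation ObjectView // hypothetical UIView subclass for a map object

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSaveGState(ctx);
    CGContextSetRGBFillColor(ctx, 0, 0, 0, 0.5); // 50% black
    CGContextFillRect(ctx, rect);
    CGContextRestoreGState(ctx);
}

@end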
If you want to understand better my scenario, the App where I'm trying to do this is called 'Kipos', at the App Store.
Floris497's approach is a good strategy for a blanket darkening for more than one image at a time (probably more what you're after in this case). But here's a general purpose method to generate darker UIImages (while respecting alpha pixels):
+ (UIImage *)darkenImage:(UIImage *)image toLevel:(CGFloat)level
{
    // Create a temporary view to act as a darkening layer
    CGRect frame = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
    UIView *tempView = [[UIView alloc] initWithFrame:frame];
    tempView.backgroundColor = [UIColor blackColor];
    tempView.alpha = level;

    // Draw the image into a new graphics context
    UIGraphicsBeginImageContext(frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [image drawInRect:frame];

    // Flip the context vertically so we can draw the dark layer via a mask that
    // aligns with the image's alpha pixels (Quartz uses flipped coordinates)
    CGContextTranslateCTM(context, 0, frame.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextClipToMask(context, frame, image.CGImage);
    [tempView.layer renderInContext:context];

    // Produce a new image from this context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *toReturn = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    UIGraphicsEndImageContext();
    [tempView release];
    return toReturn;
}
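A hypothetical call site for this category method (spriteImage is an assumption):

// Darken a sprite to 50%, leaving fully transparent pixels untouched.
UIImage *darkSprite = [UIImage darkenImage:spriteImage toLevel:0.5];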
The best way would be to add a Core Image filter to the layer to darken it. You could use CIExposureAdjust.
CIFilter *filter = [CIFilter filterWithName:@"CIExposureAdjust"];
[filter setDefaults];
[filter setValue:[NSNumber numberWithFloat:-2.0] forKey:@"inputEV"];
view.layer.filters = [NSArray arrayWithObject:filter];
Here is how to do it:
// inputEV controls the exposure; the lower the value, the darker the result (e.g. -1 -> dark)
- (UIImage *)adjustImage:(UIImage *)image exposure:(float)inputEV
{
    CIImage *inputImage = [[CIImage alloc] initWithCGImage:[image CGImage]];
    UIImageOrientation originalOrientation = image.imageOrientation;

    CIFilter *adjustmentFilter = [CIFilter filterWithName:@"CIExposureAdjust"];
    [adjustmentFilter setDefaults];
    [adjustmentFilter setValue:inputImage forKey:@"inputImage"];
    [adjustmentFilter setValue:[NSNumber numberWithFloat:inputEV] forKey:@"inputEV"];

    CIImage *outputImage = [adjustmentFilter valueForKey:@"outputImage"];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef imgRef = [context createCGImage:outputImage fromRect:outputImage.extent];
    UIImage *img = [[UIImage alloc] initWithCGImage:imgRef scale:1.0 orientation:originalOrientation];
    CGImageRelease(imgRef);
    return img;
}
Remember to import:
#import <QuartzCore/QuartzCore.h>
And add CoreGraphics and CoreImage frameworks to your project.
Tested on an iPhone 3GS with iOS 5.1.
CIFilter is available starting from iOS 5.0.
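A hypothetical call site for the method above (spriteImage is an assumption):

// One stop darker; pass a more negative inputEV to darken further.
UIImage *nightSprite = [self adjustImage:spriteImage exposure:-1.0f];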
Draw a black UIView over everything and set "User Interaction Enabled" to NO.
Then use this to make it dark:

[UIView animateWithDuration:2
                 animations:^{ nightView.alpha = 0.4; }
                 completion:^(BOOL finished){ NSLog(@"done making it dark"); }];

And this to make it light again:

[UIView animateWithDuration:2
                 animations:^{ nightView.alpha = 0.0; }
                 completion:^(BOOL finished){ NSLog(@"done making it light again"); }];

Hope you can do something with this.
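As a minimal sketch of how the overlay itself might be set up (the nightView name and view-controller context are assumptions):

// Full-screen black overlay that starts fully transparent.
UIView *nightView = [[UIView alloc] initWithFrame:self.view.bounds];
nightView.backgroundColor = [UIColor blackColor];
nightView.alpha = 0.0;
nightView.userInteractionEnabled = NO; // let touches pass through to the map
[self.view addSubview:nightView];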
I'm using BCTabBarController in my app, and I'm trying to customize it to use Core Graphics to highlight the images automatically, so that I don't need four copies of each image (Retina, Retina selected, legacy, legacy selected).
User Ephraim has posted a great starting point for this, but it returns legacy-sized images. I've played with some of the settings, but I'm not very familiar with Core Graphics, so I'm shooting in the dark.
Ephraim's Code:
- (UIImage *)imageWithBackgroundColor:(UIColor *)bgColor
                          shadeAlpha1:(CGFloat)alpha1
                          shadeAlpha2:(CGFloat)alpha2
                          shadeAlpha3:(CGFloat)alpha3
                          shadowColor:(UIColor *)shadowColor
                         shadowOffset:(CGSize)shadowOffset
                           shadowBlur:(CGFloat)shadowBlur {
    UIImage *image = self;
    CGColorRef cgColor = [bgColor CGColor];
    CGColorRef cgShadowColor = [shadowColor CGColor];
    CGFloat components[16] = {1,1,1,alpha1, 1,1,1,alpha1, 1,1,1,alpha2, 1,1,1,alpha3};
    CGFloat locations[4] = {0, 0.5, 0.6, 1};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGGradientRef colorGradient = CGGradientCreateWithColorComponents(colorSpace, components, locations, (size_t)4);

    CGRect contextRect;
    contextRect.origin.x = 0.0f;
    contextRect.origin.y = 0.0f;
    contextRect.size = [image size];
    //contextRect.size = CGSizeMake([image size].width+5, [image size].height+5);

    // Retrieve source image and begin image context
    UIImage *itemImage = image;
    CGSize itemImageSize = [itemImage size];
    CGPoint itemImagePosition;
    itemImagePosition.x = ceilf((contextRect.size.width - itemImageSize.width) / 2);
    itemImagePosition.y = ceilf((contextRect.size.height - itemImageSize.height) / 2);

    UIGraphicsBeginImageContext(contextRect.size);
    CGContextRef c = UIGraphicsGetCurrentContext();

    // Setup shadow
    CGContextSetShadowWithColor(c, shadowOffset, shadowBlur, cgShadowColor);

    // Setup transparency layer and clip to mask
    CGContextBeginTransparencyLayer(c, NULL);
    CGContextScaleCTM(c, 1.0, -1.0);
    CGContextClipToMask(c, CGRectMake(itemImagePosition.x, -itemImagePosition.y, itemImageSize.width, -itemImageSize.height), [itemImage CGImage]);

    // Fill and end the transparency layer
    CGContextSetFillColorWithColor(c, cgColor);
    contextRect.size.height = -contextRect.size.height;
    CGContextFillRect(c, contextRect);
    CGContextDrawLinearGradient(c, colorGradient, CGPointZero, CGPointMake(contextRect.size.width * 1.0/4.0, contextRect.size.height), 0);
    CGContextEndTransparencyLayer(c);
    //CGPointMake(contextRect.size.width*3.0/4.0, 0)

    // Set selected image and end context
    UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGColorSpaceRelease(colorSpace);
    CGGradientRelease(colorGradient);
    return resultImage;
}
To implement this code, I've added a category to UIImage in my project, and then made the following changes to BCTab.h:
- (id)initWithIconImageName:(NSString *)imageName {
    if (self = [super init]) {
        self.adjustsImageWhenHighlighted = NO;
        self.background = [UIImage imageNamed:@"BCTabBarController.bundle/tab-background.png"];
        self.rightBorder = [UIImage imageNamed:@"BCTabBarController.bundle/tab-right-border.png"];
        self.backgroundColor = [UIColor clearColor];

        // NSString *selectedName = [NSString stringWithFormat:@"%@-selected.%@",
        //                           [imageName stringByDeletingPathExtension],
        //                           [imageName pathExtension]];

        UIImage *defImage = [UIImage imageNamed:imageName];
        [self setImage:[defImage imageWithBackgroundColor:[UIColor lightGrayColor] shadeAlpha1:0.4 shadeAlpha2:0.0 shadeAlpha3:0.6 shadowColor:[UIColor blackColor] shadowOffset:CGSizeMake(0.0, -1.0f) shadowBlur:3.0] forState:UIControlStateNormal];
        [self setImage:[defImage imageWithBackgroundColor:[UIColor redColor] shadeAlpha1:0.4 shadeAlpha2:0.0 shadeAlpha3:0.6 shadowColor:[UIColor blackColor] shadowOffset:CGSizeMake(0.0, -1.0f) shadowBlur:3.0] forState:UIControlStateSelected];
    }
    return self;
}
How can I make Ephraim's code work correctly with the Retina display?
After digging around the internet, a Google search led me back to Stack Overflow. I found this answer, which discusses a different method that should be used to set the scale factor of the image graphics context when it is initialized.
UIGraphicsBeginImageContext(contextRect.size); needs to be changed to UIGraphicsBeginImageContextWithOptions(contextRect.size, NO, scale);, where scale is the scale value you want to use. I grabbed it from [[UIScreen mainScreen] scale].
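Concretely, the only change inside Ephraim's method is the context setup; a sketch:

// Use the main screen's scale (2.0 on Retina) so the bitmap context
// is created at the device's native resolution.
CGFloat scale = [[UIScreen mainScreen] scale];
UIGraphicsBeginImageContextWithOptions(contextRect.size, NO, scale);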
My iPad app has a navigation area where I show screenshots of the different pages, and because I want to show more than one screenshot at once, I scale the container to around 24% of the original screenshot size (1024x768).
- (void)loadView
{
    // get landscape screen frame
    CGRect screenFrame = [UIScreen mainScreen].bounds;
    CGRect landscapeFrame = CGRectMake(0, 0, screenFrame.size.height, screenFrame.size.width);
    UIView *view = [[UIView alloc] initWithFrame:landscapeFrame];
    view.backgroundColor = [UIColor grayColor];
    self.view = view;

    // add container view for 2 images
    CGRect startFrame = CGRectMake(-landscapeFrame.size.width/2, 0, landscapeFrame.size.width*2, landscapeFrame.size.height);
    container = [[UIView alloc] initWithFrame:startFrame];
    container.backgroundColor = [UIColor whiteColor];

    // add image 1 (1024x768)
    UIImage *img1 = [UIImage imageNamed:@"01.jpeg"];
    UIImageView *img1View = [[UIImageView alloc] initWithImage:img1];
    [container addSubview:img1View];

    // add image 2 (1024x768)
    UIImage *img2 = [UIImage imageNamed:@"02.jpeg"];
    UIImageView *img2View = [[UIImageView alloc] initWithImage:img2];

    // move img2 to the right of img1
    CGRect newFrame = img2View.frame;
    newFrame.origin.x = 1024.0;
    img2View.frame = newFrame;
    [container addSubview:img2View];

    // scale to 24%
    container.transform = CGAffineTransformMakeScale(0.24, 0.24);
    [self.view addSubview:container];
}
but when I scale images with "small" text, it looks something like this:
I have to use the big screenshots because if a user taps an image it should scale to 100% and be crisply clear.
Is there a way to scale the images "smoothly" (on the fly) without ruining performance?
It would be enough to have two versions: the full-pixel one and one for the 24% view.
The reason the scaled-down image looks crappy is that it's being scaled in OpenGL, which uses fast-but-low-quality linear interpolation. As you probably know, UIView is built on top of CALayer, which is in turn a sort of wrapper for OpenGL textures. Because the contents of the layer reside on the video card, CALayer can do all of its magic on the GPU, independent of whether the CPU is busy loading a web site, blocked on disk access, or whatever.

I mention this only because it's useful to pay attention to what's actually in the textures inside your layers. In your case, the UIImageView's layer has the full 1024x768 bitmap image on its texture, and that isn't affected by the container's transform: the CALayer inside the UIImageView doesn't see that it's going to be (let's see...) 246x185 on-screen and re-scale its bitmap; it just lets OpenGL do its thing and scale down the bitmap every time it updates the display.
To get better scaling, we'll need to do it in CoreGraphics instead of OpenGL. Here's one way to do it:
- (UIImage *)scaleImage:(UIImage *)image by:(float)scale
{
    CGSize size = CGSizeMake(image.size.width * scale, image.size.height * scale);
    UIGraphicsBeginImageContextWithOptions(size, YES, 0.0);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imageCopy;
}
- (void)loadView
{
    // get landscape screen frame
    CGRect screenFrame = [UIScreen mainScreen].bounds;
    CGRect landscapeFrame = CGRectMake(0, 0, screenFrame.size.height, screenFrame.size.width);
    UIView *view = [[UIView alloc] initWithFrame:landscapeFrame];
    view.backgroundColor = [UIColor grayColor];
    self.view = view;

    // add container view for 2 images
    CGRect startFrame = CGRectMake(-landscapeFrame.size.width/2, 0, landscapeFrame.size.width*2, landscapeFrame.size.height);
    container = [[UIView alloc] initWithFrame:startFrame];
    container.backgroundColor = [UIColor whiteColor];

    // add image 1 (1024x768)
    UIImage *img1 = [UIImage imageNamed:@"01.png"];
    img1View = [[TapImageView alloc] initWithFrame:CGRectMake(0, 0, 1024, 768)];
    img1View.userInteractionEnabled = YES; // important!
    img1View.image = [self scaleImage:img1 by:0.24];
    [container addSubview:img1View];

    // add image 2 (1024x768)
    UIImage *img2 = [UIImage imageNamed:@"02.png"];
    img2View = [[TapImageView alloc] initWithFrame:CGRectMake(1024, 0, 1024, 768)];
    img2View.userInteractionEnabled = YES;
    img2View.image = [self scaleImage:img2 by:0.24];
    [container addSubview:img2View];

    // scale to 24% and layout subviews
    zoomed = YES;
    container.transform = CGAffineTransformMakeScale(0.24, 0.24);
    [self.view addSubview:container];
}
- (void)viewTapped:(id)sender
{
    zoomed = !zoomed;
    [UIView animateWithDuration:0.5 animations:^
    {
        if ( zoomed )
        {
            container.transform = CGAffineTransformMakeScale(0.24, 0.24);
        }
        else
        {
            img1View.image = [UIImage imageNamed:@"01.png"];
            img2View.image = [UIImage imageNamed:@"02.png"];
            container.transform = CGAffineTransformMakeScale(1.0, 1.0);
        }
    }
    completion:^(BOOL finished)
    {
        if ( zoomed )
        {
            UIImage *img1 = [UIImage imageNamed:@"01.png"];
            img1View.image = [self scaleImage:img1 by:0.24];
            UIImage *img2 = [UIImage imageNamed:@"02.png"];
            img2View.image = [self scaleImage:img2 by:0.24];
        }
    }];
}
And here's TapImageView, a UIImageView subclass that tells us when it's been tapped by sending an action up the responder chain:
@interface TapImageView : UIImageView
@end

@implementation TapImageView

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    [[UIApplication sharedApplication] sendAction:@selector(viewTapped:) to:nil from:self forEvent:event];
}

@end
Instead of scaling the container and all of its subviews, create a UIImageView from the contents of the container and adjust its frame size to 24% of the original.
UIGraphicsBeginImageContext(container.bounds.size);
[container.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *containerImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

UIImageView *containerImageView = [[UIImageView alloc] initWithImage:containerImage];
CGRect containerFrame = startFrame;
containerFrame.size.width *= 0.24;
containerFrame.size.height *= 0.24;
containerImageView.frame = containerFrame;
[self.view addSubview:containerImageView];
I want to rotate an image with a UISlider control.
I have done that with the function below:
- (void)rotateImage:(UIImageView *)image duration:(NSTimeInterval)duration
              curve:(int)curve degrees:(CGFloat)degrees
{
    // Setup the animation
    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationDuration:duration];
    [UIView setAnimationCurve:curve];
    [UIView setAnimationBeginsFromCurrentState:YES];

    // The transform matrix
    CGAffineTransform transform = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(degrees));
    image.transform = transform;

    // Commit the changes
    [UIView commitAnimations];
}
Using this function it works perfectly, but the problem is that I cannot save a reference to the rotated image. I have to use that rotated image for further processing.
So how can I save the image in its rotated position?
Please help me with this issue.
Thanks
I found the solution to my question.
Use the method below for this:
- (UIImage *)upsideDownBunny:(CGFloat)radians withImage:(UIImage *)testImage {
    __block CGImageRef cgImg;
    __block CGSize imgSize;
    __block UIImageOrientation orientation;
    dispatch_block_t createStartImgBlock = ^(void) {
        // UIImages should only be accessed from the main thread
        UIImage *img = testImage;
        imgSize = [img size]; // this size will be pre-rotated
        orientation = [img imageOrientation];
        cgImg = CGImageRetain([img CGImage]); // this data is not rotated
    };
    if ([NSThread isMainThread]) {
        createStartImgBlock();
    } else {
        dispatch_sync(dispatch_get_main_queue(), createStartImgBlock);
    }
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    // in iOS 4+ you can let the context allocate memory by passing NULL
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 imgSize.width,
                                                 imgSize.height,
                                                 8,
                                                 imgSize.width * 4,
                                                 colorspace,
                                                 kCGImageAlphaPremultipliedLast);
    // rotate so the image respects the original UIImage's orientation
    switch (orientation) {
        case UIImageOrientationDown:
            CGContextTranslateCTM(context, imgSize.width, imgSize.height);
            CGContextRotateCTM(context, -radians);
            break;
        case UIImageOrientationLeft:
            CGContextTranslateCTM(context, 0.0, imgSize.height);
            CGContextRotateCTM(context, 3.0 * -radians / 2.0);
            break;
        case UIImageOrientationRight:
            CGContextTranslateCTM(context, imgSize.width, 0.0);
            CGContextRotateCTM(context, -radians / 2.0);
            break;
        default:
            // there are mirrored modes possible,
            // but they aren't generated by the iPhone's camera
            break;
    }
    // rotate the image upside down
    CGContextTranslateCTM(context, +(imgSize.width * 0.5f), +(imgSize.height * 0.5f));
    CGContextRotateCTM(context, -radians);
    //CGContextDrawImage(context, CGRectMake(0.0, 0.0, imgSize.width, imgSize.height), cgImg);
    // Note: this rect uses imgSize.width for the y origin and the height as well,
    // which is why it only comes out right for square images (see the next answer)
    CGContextDrawImage(context, (CGRect){ .origin.x = -imgSize.width * 0.5f, .origin.y = -imgSize.width * 0.5f, .size.width = imgSize.width, .size.height = imgSize.width }, cgImg);
    // grab the new rotated image
    CGContextFlush(context);
    CGImageRef newCgImg = CGBitmapContextCreateImage(context);
    __block UIImage *newImage;
    dispatch_block_t createRotatedImgBlock = ^(void) {
        // UIImages should only be accessed from the main thread
        newImage = [UIImage imageWithCGImage:newCgImg];
    };
    if ([NSThread isMainThread]) {
        createRotatedImgBlock();
    } else {
        dispatch_sync(dispatch_get_main_queue(), createRotatedImgBlock);
    }
    CGColorSpaceRelease(colorspace);
    CGImageRelease(newCgImg);
    CGImageRelease(cgImg); // balance the retain above
    CGContextRelease(context);
    return newImage;
}
Call this method with:

UIImage *rotated2 = [self upsideDownBunny:radians withImage:sourceImage];

where radians is the slider value (0 to 360 degrees) converted to radians, and sourceImage is the image to rotate.
Now we can save the rotated state.
I tried the code above; it works, but only for a SQUARE image. Here is another working solution, which redraws the image and keeps the right width/height:
- (UIImage *)rotatedImage:(UIImage *)imageRotation and:(CGFloat)rotation
{
    // Calculate the destination size
    CGAffineTransform t = CGAffineTransformMakeRotation(rotation);
    CGRect sizeRect = (CGRect){ .size = imageRotation.size };
    CGRect destRect = CGRectApplyAffineTransform(sizeRect, t);
    CGSize destinationSize = destRect.size;

    // Draw the image
    UIGraphicsBeginImageContext(destinationSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, destinationSize.width / 2.0f, destinationSize.height / 2.0f);
    CGContextRotateCTM(context, rotation);
    [imageRotation drawInRect:CGRectMake(-imageRotation.size.width / 2.0f, -imageRotation.size.height / 2.0f, imageRotation.size.width, imageRotation.size.height)];

    // Save the image
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
The rotation parameter here is given in radians. You can perform the conversion by adding this above your function (or directly in your .m file):

#ifndef M_PI
#define M_PI 3.14159265358979323846264338327950288 /* pi; normally already defined in math.h */
#endif
#define DEGREES_TO_RADIANS(angle) ((angle) / 180.0 * M_PI)
Finally, I called this with an animation:

[UIView animateWithDuration:0.5f delay:0 options:UIViewAnimationOptionCurveEaseIn animations:^{
    // Only used for the ANIMATION of the UIImage
    imageView.transform = CGAffineTransformRotate(imageView.transform, DEGREES_TO_RADIANS(90));
} completion:^(BOOL finished) {
    // Workaround: set the previous transform back, or the next step would turn the image further
    imageView.transform = CGAffineTransformRotate(imageView.transform, DEGREES_TO_RADIANS(-90));
    imageView.image = [self rotatedImage:imageView.image and:DEGREES_TO_RADIANS(90)];
}];
I hope this could help too !
Cheers.
One approach to further processing is applying more transformations in sequence, as in:

CGAffineTransform transform = CGAffineTransformScale(previousTransform, newScale, newScale);

In this case you would apply a scaling to your rotated image.
If you need to save this information in order to redo the transformation at some later point, you can simply store the angle of the rotation and the scaling factor (in my example), and then build the transform once again.
You could also store your CGAffineTransform in an ivar of your class, or use some other mechanism.
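A minimal sketch of that idea (the variable names are assumptions):

// Store only the parameters of the transformation...
CGFloat savedAngle = 45.0 * M_PI / 180.0; // rotation, in radians
CGFloat savedScale = 1.5;

// ...and rebuild the same transform from them whenever it is needed.
CGAffineTransform rebuilt =
    CGAffineTransformScale(CGAffineTransformMakeRotation(savedAngle),
                           savedScale, savedScale);
imageView.transform = rebuilt;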
EDIT:
If by saving you mean saving to a file, you can convert your view to an image with this code (note that NSBitmapImageRep and cacheDisplayInRect:toBitmapImageRep: are AppKit/macOS APIs; on iOS, use the PNG approach below):

NSData *data;
NSBitmapImageRep *rep;
rep = [self bitmapImageRepForCachingDisplayInRect:[self frame]];
[self cacheDisplayInRect:[self frame] toBitmapImageRep:rep];
data = [rep TIFFRepresentation];

Then you save the NSData to a file.
For PNG:
UIGraphicsBeginImageContext(self.bounds.size);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* image1 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImagePNGRepresentation(image1);
[imageData writeToFile:filePath atomically:YES];
You can render your rotated image into another image:
UIGraphicsBeginImageContext(aCGRectToHoldYourImage);
CALayer* layer = myRotatedImageView.layer;
[layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
and then get the PNG data from that and save it to a file
NSData *myPNGData = UIImagePNGRepresentation(viewImage);
[myPNGData writeToFile:@"aFileName.png" atomically:YES];

Proviso: I typed this into Stack Overflow, not a compiler :-)
Save the transform value of your UIImageView and apply this transform value the next time you want to use this UIImageView, or even in another UIImageView.
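For example (both view names are hypothetical):

// Save the transform value...
CGAffineTransform savedTransform = imageView.transform;
// ...and apply it later, even to a different image view.
anotherImageView.transform = savedTransform;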
I am using the QREncoder library found here: https://github.com/jverkoey/ObjQREncoder
Basically, I looked at the example code by this author, and when he creates the QR code it comes out perfectly, with no pixelation. The image itself that the library provides is 33 x 33 pixels, but he uses kCAFilterNearest to magnify it and make it very clear (no pixelation). Here is his code:
UIImage *image = [QREncoder encode:@"http://www.google.com/"];
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
CGFloat qrSize = self.view.bounds.size.width - kPadding * 2;
imageView.frame = CGRectMake(kPadding, (self.view.bounds.size.height - qrSize) / 2,
                             qrSize, qrSize);
[imageView layer].magnificationFilter = kCAFilterNearest;
[self.view addSubview:imageView];
I have a UIImageView in a xib, and I am setting its image like this:
[[template imageVQRCode] setImage:[QREncoder encode:ticketNum]];
[[[template imageVQRCode] layer] setMagnificationFilter:kCAFilterNearest];
but the QR code is really blurry. In the example, it comes out crystal clear.
What am I doing wrong?
Thanks!
UPDATE: I found out that the problem isn't with scaling or anything to do with kCAFilterNearest. It has to do with generating the PNG image from the view. Here's how it looks on the device vs. how it looks when I save the UIView to its PNG representation (notice the QR codes' quality):
UPDATE 2: This is how I am generating the PNG file from the UIView:
UIGraphicsBeginImageContextWithOptions([[template view] bounds].size, YES, 0.0);
[[[template view] layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(viewImage) writeToFile:plistPath atomically:YES];
I have used the function below for editing an image.
- (UIImage *)resizedImage:(CGSize)newSize interpolationQuality:(CGInterpolationQuality)quality
{
    BOOL drawTransposed;
    switch (self.imageOrientation) {
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            drawTransposed = YES;
            break;
        default:
            drawTransposed = NO;
    }
    // transformForOrientation: is defined elsewhere in the same UIImage category
    return [self resizedImage:newSize
                    transform:[self transformForOrientation:newSize]
               drawTransposed:drawTransposed
         interpolationQuality:quality];
}

- (UIImage *)resizedImage:(CGSize)newSize
                transform:(CGAffineTransform)transform
           drawTransposed:(BOOL)transpose
     interpolationQuality:(CGInterpolationQuality)quality
{
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
    CGImageRef imageRef = self.CGImage;

    // Build a context that's the same dimensions as the new size.
    // Mask out the alpha info before comparing: CGBitmapInfo also carries
    // byte-order flags, so a direct == against an alpha constant can misfire.
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGImageAlphaInfo alphaInfo = (CGImageAlphaInfo)(bitmapInfo & kCGBitmapAlphaInfoMask);
    if ((alphaInfo == kCGImageAlphaLast) || (alphaInfo == kCGImageAlphaNone))
        bitmapInfo = kCGImageAlphaNoneSkipLast;
    CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                newRect.size.width,
                                                newRect.size.height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                0,
                                                CGImageGetColorSpace(imageRef),
                                                bitmapInfo);

    // Rotate and/or flip the image if required by its orientation
    CGContextConcatCTM(bitmap, transform);

    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(bitmap, quality);

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);

    // Get the resized image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    // Clean up
    CGContextRelease(bitmap);
    CGImageRelease(newImageRef);

    return newImage;
}
UIImageWriteToSavedPhotosAlbum([image resizedImage:CGSizeMake(300, 300) interpolationQuality:kCGInterpolationNone], nil, nil, nil);
Please find the image below and let me know if you need any help.