I created an application in which I can rotate, resize, and translate an image using gestures. Afterwards I need to get the resulting image out of the UIImageView. I found this part of the code somewhere on Stack Overflow. A similar question is answered there, but it requires the angle as input. The same person posted a better solution elsewhere, which I'm using, but it has a problem: it often returns a blank image, or an image truncated (usually from the top). So something is wrong with the code and it needs some changes. My problem is that I'm new to Core Graphics and badly stuck on this.
UIGraphicsBeginImageContext(imgView.image.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// Apply the image view's transform, flipped vertically for Quartz coordinates
CGAffineTransform transform = imgView.transform;
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0, 0, imgView.image.size.width, imgView.image.size.height), imgView.image.CGImage);
CGImageRef snapshot = CGBitmapContextCreateImage(context);
UIImage *newRotatedImage = [UIImage imageWithCGImage:snapshot];
CGImageRelease(snapshot); // CGBitmapContextCreateImage returns an owned reference
UIGraphicsEndImageContext();
EDIT 1.1
Thanks for the sample code, but it still has a problem. Let me explain in more detail: I'm using gestures to scale, translate, and resize the image in the image view, so all of that is captured in the image view's transform property. I found another method in Core Image, so I changed my code to:
CGRect bounds = CGRectMake(0, 0, imgView.image.size.width, imgView.image.size.height);
CIImage *ciImage = [[CIImage alloc] initWithCGImage:imgView.image.CGImage options:nil];
CGAffineTransform transform = imgView.transform;
ciImage = [ciImage imageByApplyingTransform:transform];
return [UIImage imageWithCIImage:ciImage];
Now I'm getting a squeezed, wrongly sized, mirrored image. Sorry to disturb you again. Can you guide me on how to get the proper image using the image view's transform in Core Image?
CIImage *ciImage = [[CIImage alloc] initWithCGImage:fximage.CGImage options:nil];
CGAffineTransform transform = fxobj.transform;
float angle = atan2(transform.b, transform.a);
transform = CGAffineTransformRotate(transform, - 2 * angle);
ciImage = [ciImage imageByApplyingTransform:transform];
UIImage *screenfxImage = [UIImage imageWithCIImage:ciImage];
Do remember to add the line transform = CGAffineTransformRotate(transform, -2 * angle); because the rotation direction is opposite.
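For completeness, here is a minimal sketch (assuming imgView is the transformed UIImageView) that renders the result through a CIContext so the returned UIImage is backed by a CGImage, which tends to behave better when the image is later drawn or saved:
// Untested sketch: apply the view's transform in Core Image and render via a CIContext.
CIImage *ciImage = [[CIImage alloc] initWithCGImage:imgView.image.CGImage options:nil];
CGAffineTransform transform = imgView.transform;
float angle = atan2(transform.b, transform.a);
transform = CGAffineTransformRotate(transform, -2 * angle); // compensate for the opposite rotation direction
ciImage = [ciImage imageByApplyingTransform:transform];
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:ciImage.extent];
UIImage *renderedImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);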
I created an Objective-C class just for this sort of thing. You can check it out on GitHub: ANImageBitmapRep. Here's how you would do rotation:
ANImageBitmapRep * ibr = [myImage image];
[ibr rotate:anAngle];
UIImage * rotated = [ibr image];
Note that here, anAngle is in radians.
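If your gesture code tracks the angle in degrees, the conversion is plain C and not specific to the library; degrees here is an assumed variable from your own code:
CGFloat anAngle = degrees * M_PI / 180.0; // convert degrees to the radians -rotate: expects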
Here is the link to the documentation:
http://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html
Sample code to rotate an image:
CIImage *inputImage = [[CIImage alloc] initWithImage:currentImage];
CIFilter *controlsFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[controlsFilter setValue:inputImage forKey:kCIInputImageKey];
// CIAffineTransform expects a transform, so build a rotation from the slider's angle (in radians)
CGAffineTransform rotation = CGAffineTransformMakeRotation(slider.value);
[controlsFilter setValue:[NSValue valueWithCGAffineTransform:rotation] forKey:kCIInputTransformKey];
CIImage *displayImage = controlsFilter.outputImage;
UIImage *finalImage = [UIImage imageWithCIImage:displayImage];
CIContext *context = [CIContext contextWithOptions:nil];
if (displayImage == nil || finalImage == nil) {
    // We did not get an output image. Display the original image instead.
    photoEditView.image = currentImage;
}
else {
    CGImageRef imageRef = [context createCGImage:displayImage fromRect:displayImage.extent];
    photoEditView.image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
}
context = nil;
[inputImage release];
I created a sample app to do this (minus the scaling part) in Objective-C. If anybody is interested, you can download it here: https://github.com/gene-code/coregraphics-drawing/tree/master/coregraphics-drawing/test
Related
I'm playing with the Core Image framework. As I understand it, if I have an image (NSImage), it first needs to be converted into a CIImage. I can do that.
NSImage *img1 = [[NSImage alloc] initWithContentsOfFile:imagepath];
NSRect rect1;
rect1.size.width = img1.size.width;
rect1.size.height = img1.size.height;
CGImageRef imageRef1 = [img1 CGImageForProposedRect:&rect1 context:[NSGraphicsContext currentContext] hints:nil];
CIImage *ciimage = [CIImage imageWithCGImage:imageRef1];
I have a function that applies a Core Image filter to a CIImage, which I want to test, and I want to add the output image to a window as a subview, so I need an NSImage. How can I convert this CIImage back into an NSImage? Searching Google hasn't turned up good results.
Thank you for your help.
I haven't tested it, but I think this should do it:
CIImage *ciImage = ...;
NSCIImageRep *rep = [NSCIImageRep imageRepWithCIImage:ciImage];
NSImage *nsImage = [[NSImage alloc] initWithSize:rep.size];
[nsImage addRepresentation:rep];
In Swift:
let ciImage = ...
let rep = NSCIImageRep(ciImage: ciImage)
let nsImage = NSImage(size: rep.size)
nsImage.addRepresentation(rep)
In Swift:
var rep: NSCIImageRep = NSCIImageRep(ciImage: gaussianBlurFilter.outputImage)
var nsImage: NSImage = NSImage(size: rep.size)
nsImage.addRepresentation(rep)
There are filters that extend the size of the image quite a lot, like CIMotionBlur.
For an original image of size 5120x1440 I ended up with an image whose extent was x,y,w,h = -126,-502,5184,2444. To convert that to an NSImage I use:
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cg_img = [context createCGImage:img fromRect:CGRectMake(0, 0, size.width, size.height)];
NSImage *ns_img = [[NSImage alloc] initWithCGImage:cg_img size:NSZeroSize];
CGImageRelease(cg_img); // Don't forget this! (memory leak)
Here size is the original image's size. I don't see another direct path from CIImage to NSImage that lets you specify the origin within the CIImage, while the CGImageRef conversion does.
How can I show only a portion of the original image in a UIImageView?
This question may be familiar and old, but the reason for asking again is that I could not find a workable idea from those answers.
Many said to set image.contentMode = UIViewContentModeCenter; (but it's not working).
I need roughly a rectangle containing the center of the original image. How do I get this?
I can make this work when I display a static image in my app and set the UIImageView's content mode to Aspect Fill.
But it doesn't work when I display an image from a URL using NSData.
Adding my code and the images:
NSString *weburl = @"http://topnews.in/files/Sachin-Tendulkar_15.jpg";
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(10, 50, 108, 86)];
NSURL *url = [NSURL URLWithString:weburl];
NSData *data = [NSData dataWithContentsOfURL:url];
imageView.image = [UIImage imageWithData:data];
[self.view addSubview:imageView];
If you added the UIImageView from a XIB, you can find a "Mode" property there and set the different modes from Interface Builder. Or programmatically, you can set the mode like this:
imageView.contentMode = UIViewContentModeCenter;
imageView.clipsToBounds = YES;
Try this:
self.imageView.layer.contentsRect = CGRectMake(0.25, 0.25, 0.5, 0.5);
self.imageView will display the middle part of the image. You can calculate the required values of the CGRect yourself, as in the sketch below.
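For example, here is a sketch of converting a crop rectangle given in image points into the normalized (0–1) coordinates that contentsRect expects; cropRect is just an assumed example region:
UIImage *image = self.imageView.image;
CGFloat w = image.size.width;
CGFloat h = image.size.height;
CGRect cropRect = CGRectMake(w * 0.25, h * 0.25, w * 0.5, h * 0.5); // the centre half, as an example
// Normalize the crop rect against the image size for contentsRect
self.imageView.layer.contentsRect = CGRectMake(cropRect.origin.x / w,
                                               cropRect.origin.y / h,
                                               cropRect.size.width / w,
                                               cropRect.size.height / h);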
For this kind of output, you need to crop the image according to your requirements.
Cropping code you can use is below.
-(UIImage *) CropImageFromTop:(UIImage *)image
{
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], CGRectMake(0, 12, image.size.width, image.size.height - 12));
UIImage *cropimage = [[[UIImage alloc] initWithCGImage:imageRef] autorelease];
CGImageRelease(imageRef);
return cropimage;
}
Try scaling the image, then adding it to the UIImageView and centering it so that it sits in the middle. The code to scale an image is:
-(UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize{
UIGraphicsBeginImageContext(newSize);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
Then set the center of that image and add it to the image view; I hope this works.
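A minimal usage sketch (imageView and originalImage are assumed to exist already):
// Scale the downloaded image to the image view's size, then centre the view in its superview.
UIImage *scaled = [self imageWithImage:originalImage scaledToSize:imageView.bounds.size];
imageView.image = scaled;
imageView.contentMode = UIViewContentModeCenter;
imageView.center = self.view.center;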
As I noticed, when CIGaussianBlur is applied to an image, the image's corners get blurred so that it looks smaller than the original. So I figured out that I need to crop it correctly to avoid having transparent edges. But how do I calculate how much I need to crop depending on the blur amount?
Example:
Original image:
Image with a CIGaussianBlur inputRadius of 50 (the blue color is the background of everything):
Take the following code as an example...
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:5.0f] forKey:@"inputRadius"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
This results in the images you provided above. But if I instead use the original image's rect to create the CGImage from the context, the resulting image is the desired size.
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
There are two issues. The first is that the blur filter samples pixels outside the edges of the input image. These pixels are transparent. That's where the transparent pixels come from.
The trick is to extend the edges before you apply the blur filter. This can be done by a clamp filter e.g. like this:
CIFilter *affineClampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
CGAffineTransform xform = CGAffineTransformMakeScale(1.0, 1.0);
[affineClampFilter setValue:[NSValue valueWithBytes:&xform
                                           objCType:@encode(CGAffineTransform)]
                     forKey:@"inputTransform"];
This filter extends the edges infinitely and eliminates the transparency. The next step would be to apply the blur filter.
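Note that the snippet above only sets the clamp's transform; as a sketch of the whole chain (assuming inputImage is your source CIImage), you would also feed the image into the clamp and pass its output on to the blur:
// Feed the source image into the clamp, then chain its output into the blur filter.
[affineClampFilter setValue:inputImage forKey:kCIInputImageKey];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:affineClampFilter.outputImage forKey:kCIInputImageKey];
[blurFilter setValue:@5.0f forKey:@"inputRadius"];
CIImage *outputImage = blurFilter.outputImage;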
The second issue is a bit weird. Some renderers produce a bigger output image for the blur filter and you must adapt the origin of the resulting CIImage by some offset e.g. like this:
CGImageRef cgImage = [context createCGImage:outputImage
                                   fromRect:CGRectOffset([inputImage extent],
                                                         offset, offset)];
The software renderer on my iPhone needs three times the blur radius as the offset. The hardware renderer on the same iPhone does not need any offset at all. Maybe you could deduce the offset from the size difference of the input and output images, but I did not try...
To get a nicely blurred version of an image with hard edges, you first need to apply a CIAffineClamp to the source image to extend its edges out, and then you need to make sure you use the input image's extent when generating the output image.
The code is as follows:
CIContext *context = [CIContext contextWithOptions:nil];
UIImage *image = [UIImage imageNamed:@"Flower"];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setDefaults];
[clampFilter setValue:inputImage forKey:kCIInputImageKey];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:clampFilter.outputImage forKey:kCIInputImageKey];
[blurFilter setValue:@10.0f forKey:@"inputRadius"];
CIImage *result = [blurFilter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
UIImage *blurredImage = [[UIImage alloc] initWithCGImage:cgImage scale:image.scale orientation:UIImageOrientationUp];
CGImageRelease(cgImage);
Note this code was tested on iOS. It should be similar for OS X (substituting NSImage for UIImage).
I saw some of the solutions and wanted to recommend a more modern one, based off some of the ideas shared here:
private lazy var coreImageContext = CIContext() // Re-use this.
func blurredImage(image: CIImage, radius: CGFloat) -> CGImage? {
let blurredImage = image
.clampedToExtent()
.applyingFilter(
"CIGaussianBlur",
parameters: [
kCIInputRadiusKey: radius,
]
)
.cropped(to: image.extent)
return coreImageContext.createCGImage(blurredImage, from: blurredImage.extent)
}
If you need a UIImage afterward, you can of course get it like so:
let image = UIImage(cgImage: cgImage)
... For those wondering, the reason for returning a CGImage is (as noted in the Apple documentation):
Due to Core Image's coordinate system mismatch with UIKit, this filtering approach may yield unexpected results when displayed in a UIImageView with "contentMode". Be sure to back it with a CGImage so that it handles contentMode properly.
If you need a CIImage you could return that, but in this case if you're displaying the image, you'd probably want to be careful.
This works for me :)
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setDefaults];
[blurFilter setValue:inputImage forKey:@"inputImage"];
CGFloat blurLevel = 20.0f; // Set blur level
[blurFilter setValue:[NSNumber numberWithFloat:blurLevel] forKey:@"inputRadius"]; // set value for blur level
CIImage *outputImage = [blurFilter valueForKey:@"outputImage"];
CGRect rect = inputImage.extent; // Create Rect
rect.origin.x += blurLevel; // and set custom params
rect.origin.y += blurLevel; //
rect.size.height -= blurLevel*2.0f; //
rect.size.width -= blurLevel*2.0f; //
CGImageRef cgImage = [context createCGImage:outputImage fromRect:rect]; // Then apply new rect
imageView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
Here is the Swift 5 version of blurring the image. Set the clamp filter to its defaults so you don't need to supply a transform.
func applyBlurEffect() -> UIImage? {
let context = CIContext(options: nil)
let imageToBlur = CIImage(image: self)
let clampFilter = CIFilter(name: "CIAffineClamp")!
clampFilter.setDefaults()
clampFilter.setValue(imageToBlur, forKey: kCIInputImageKey)
//The CIAffineClamp filter is setting your extent as infinite, which then confounds your context. Try saving off the pre-clamp extent CGRect, and then supplying that to the context initializer.
let inputImageExtent = imageToBlur!.extent
guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
return nil
}
currentFilter.setValue(clampFilter.outputImage, forKey: kCIInputImageKey)
currentFilter.setValue(10, forKey: "inputRadius")
guard let output = currentFilter.outputImage, let cgimg = context.createCGImage(output, from: inputImageExtent) else {
return nil
}
return UIImage(cgImage: cgimg)
}
Here is Swift version:
func applyBlurEffect(image: UIImage) -> UIImage {
let context = CIContext(options: nil)
let imageToBlur = CIImage(image: image)
let blurfilter = CIFilter(name: "CIGaussianBlur")
blurfilter!.setValue(imageToBlur, forKey: "inputImage")
blurfilter!.setValue(5.0, forKey: "inputRadius")
let resultImage = blurfilter!.valueForKey("outputImage") as! CIImage
let cgImage = context.createCGImage(resultImage, fromRect: resultImage.extent)
let blurredImage = UIImage(CGImage: cgImage)
return blurredImage
}
See below for two Xamarin (C#) implementations.
1) Works for iOS 6
public static UIImage Blur(UIImage image)
{
using(var blur = new CIGaussianBlur())
{
blur.Image = new CIImage(image);
blur.Radius = 6.5f;
using(CIImage output = blur.OutputImage)
using(CIContext context = CIContext.FromOptions(null))
using(CGImage cgimage = context.CreateCGImage (output, new RectangleF(0, 0, image.Size.Width, image.Size.Height)))
{
return UIImage.FromImage(cgimage);
}
}
}
2) Implementation for iOS 7
The approach shown above no longer works properly on iOS 7 (at least at the moment, with Xamarin 7.0.1), so I decided to add cropping in another way (the measures may depend on the blur radius).
private static UIImage BlurImage(UIImage image)
{
using(var blur = new CIGaussianBlur())
{
blur.Image = new CIImage(image);
blur.Radius = 6.5f;
using(CIImage output = blur.OutputImage)
using(CIContext context = CIContext.FromOptions(null))
using(CGImage cgimage = context.CreateCGImage (output, new RectangleF(0, 0, image.Size.Width, image.Size.Height)))
{
return UIImage.FromImage(Crop(CIImage.FromCGImage(cgimage), image.Size.Width, image.Size.Height));
}
}
}
private static CIImage Crop(CIImage image, float width, float height)
{
var crop = new CICrop
{
Image = image,
Rectangle = new CIVector(10, 10, width - 20, height - 20)
};
return crop.OutputImage;
}
Try this: pass the input's extent to -createCGImage:fromRect: as the rect parameter:
-(UIImage *)gaussianBlurImageWithRadius:(CGFloat)radius {
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *input = [CIImage imageWithCGImage:self.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:input forKey:kCIInputImageKey];
[filter setValue:#(radius) forKey:kCIInputRadiusKey];
CIImage *output = [filter valueForKey:kCIOutputImageKey];
CGImageRef imgRef = [context createCGImage:output
fromRect:input.extent];
UIImage *outImage = [UIImage imageWithCGImage:imgRef
scale:UIScreen.mainScreen.scale
orientation:UIImageOrientationUp];
CGImageRelease(imgRef);
return outImage;
}
I should point out that drawing and rendering in Objective-C is my weak point. Now, here's my problem.
I want to add a 'Day/Night' feature to my game. It has got lots of objects on a map. Every object is a UIView containing some data in variables and some UIImageViews: the sprite, and some of the objects have a hidden ring (used to show selection).
I want to be able to darken the content of the UIView, but I can't figure out how. The sprite is a PNG with transparency. I've just managed to add a black rectangle behind the sprite using this:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextSetRGBFillColor(ctx, 0, 0, 0, 0.5);
CGContextFillRect(ctx, rect);
CGContextRestoreGState(ctx);
As I've read, this should be done in the drawRect method. Help please!
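A minimal sketch of how that snippet sits inside drawRect: (assuming the object's view is a UIView subclass):
// The darkening fill inside the view subclass's drawRect:.
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSaveGState(ctx);
    CGContextSetRGBFillColor(ctx, 0, 0, 0, 0.5); // translucent black wash
    CGContextFillRect(ctx, rect);
    CGContextRestoreGState(ctx);
}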
If you want to understand better my scenario, the App where I'm trying to do this is called 'Kipos', at the App Store.
Floris497's approach is a good strategy for a blanket darkening for more than one image at a time (probably more what you're after in this case). But here's a general purpose method to generate darker UIImages (while respecting alpha pixels):
+ (UIImage *)darkenImage:(UIImage *)image toLevel:(CGFloat)level
{
// Create a temporary view to act as a darkening layer
CGRect frame = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
UIView *tempView = [[UIView alloc] initWithFrame:frame];
tempView.backgroundColor = [UIColor blackColor];
tempView.alpha = level;
// Draw the image into a new graphics context
UIGraphicsBeginImageContext(frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[image drawInRect:frame];
// Flip the context vertically so we can draw the dark layer via a mask that
// aligns with the image's alpha pixels (Quartz uses flipped coordinates)
CGContextTranslateCTM(context, 0, frame.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextClipToMask(context, frame, image.CGImage);
[tempView.layer renderInContext:context];
// Produce a new image from this context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage *toReturn = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
UIGraphicsEndImageContext();
[tempView release];
return toReturn;
}
The best way would be to add a Core Image filter to the layer to darken it. You could use CIExposureAdjust.
CIFilter *filter = [CIFilter filterWithName:@"CIExposureAdjust"];
[filter setDefaults];
[filter setValue:[NSNumber numberWithFloat:-2.0] forKey:@"inputEV"];
view.layer.filters = [NSArray arrayWithObject:filter];
Here is how to do it:
// inputEV controls the exposure; the lower the value, the darker the result (e.g. -1 -> dark)
-(UIImage *)adjustImage:(UIImage *)image exposure:(float)inputEV
{
    CIImage *inputImage = [[CIImage alloc] initWithCGImage:[image CGImage]];
    UIImageOrientation originalOrientation = image.imageOrientation;
    CIFilter *adjustmentFilter = [CIFilter filterWithName:@"CIExposureAdjust"];
    [adjustmentFilter setDefaults];
    [adjustmentFilter setValue:inputImage forKey:@"inputImage"];
    [adjustmentFilter setValue:[NSNumber numberWithFloat:inputEV] forKey:@"inputEV"]; // use the exposure value passed in
    CIImage *outputImage = [adjustmentFilter valueForKey:@"outputImage"];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef imgRef = [context createCGImage:outputImage fromRect:outputImage.extent];
    UIImage *img = [[UIImage alloc] initWithCGImage:imgRef scale:1.0 orientation:originalOrientation];
    CGImageRelease(imgRef);
    return img;
}
Remember to import:
#import <QuartzCore/QuartzCore.h>
And add CoreGraphics and CoreImage frameworks to your project.
Tested on iPhone 3GS with iOS 5.1
CIFilter is available starting from iOS 5.0.
Draw a UIView (a black one) over it and set "User Interaction Enabled" to NO.
Hope you can do something with this.
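A minimal sketch of creating that overlay (the nightView used below is assumed to cover the whole map view):
// A black overlay that starts fully transparent and ignores touches.
UIView *nightView = [[UIView alloc] initWithFrame:self.view.bounds];
nightView.backgroundColor = [UIColor blackColor];
nightView.alpha = 0.0;
nightView.userInteractionEnabled = NO;
[self.view addSubview:nightView];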
Then use this to make it dark:
[UIView animateWithDuration:2
                 animations:^{ nightView.alpha = 0.4; }
                 completion:^(BOOL finished){ NSLog(@"done making it dark"); }];
To make it light:
[UIView animateWithDuration:2
                 animations:^{ nightView.alpha = 0.0; }
                 completion:^(BOOL finished){ NSLog(@"done making it light again"); }];
I am using QREncoder library found here: https://github.com/jverkoey/ObjQREncoder
Basically, I looked at the example code by this author, and when he creates the QR code it comes out perfectly with no pixelation. The image the library provides is 33 x 33 pixels, but he uses kCAFilterNearest to magnify it and make it very clear (no pixelation). Here is his code:
UIImage* image = [QREncoder encode:@"http://www.google.com/"];
UIImageView* imageView = [[UIImageView alloc] initWithImage:image];
CGFloat qrSize = self.view.bounds.size.width - kPadding * 2;
imageView.frame = CGRectMake(kPadding, (self.view.bounds.size.height - qrSize) / 2,
qrSize, qrSize);
[imageView layer].magnificationFilter = kCAFilterNearest;
[self.view addSubview:imageView];
I have a UIImageView in a XIB, and I am setting its image like this:
[[template imageVQRCode] setImage:[QREncoder encode:ticketNum]];
[[[template imageVQRCode] layer] setMagnificationFilter:kCAFilterNearest];
but the QR code is really blurry. In the example, it comes out crystal clear.
What am I doing wrong?
Thanks!
UPDATE: I found out that the problem isn't with scaling or anything to do with kCAFilterNearest. It has to do with generating the PNG image from the view. Here's how it looks on the device vs. what it looks like when I save the UIView to its PNG representation (notice the QR code's quality):
UPDATE 2: This is how I am generating the PNG file from UIView:
UIGraphicsBeginImageContextWithOptions([[template view] bounds].size, YES, 0.0);
[[[template view] layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(viewImage) writeToFile:plistPath atomically:YES];
I have used the function below for editing the image.
- (UIImage *)resizedImage:(CGSize)newSize interpolationQuality:(CGInterpolationQuality)quality
{
BOOL drawTransposed;
switch (self.imageOrientation) {
case UIImageOrientationLeft:
case UIImageOrientationLeftMirrored:
case UIImageOrientationRight:
case UIImageOrientationRightMirrored:
drawTransposed = YES;
break;
default:
drawTransposed = NO;
}
return [self resizedImage:newSize
transform:[self transformForOrientation:newSize]
drawTransposed:drawTransposed
interpolationQuality:quality];
}
- (UIImage *)resizedImage:(CGSize)newSize
transform:(CGAffineTransform)transform
drawTransposed:(BOOL)transpose
interpolationQuality:(CGInterpolationQuality)quality
{
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
CGImageRef imageRef = self.CGImage;
// Build a context that's the same dimensions as the new size
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
// Mask off the alpha info before comparing; unsupported alpha formats fall back to skip-last
CGImageAlphaInfo alphaInfo = (CGImageAlphaInfo)(bitmapInfo & kCGBitmapAlphaInfoMask);
if ((alphaInfo == kCGImageAlphaLast) || (alphaInfo == kCGImageAlphaNone))
bitmapInfo = kCGImageAlphaNoneSkipLast;
CGContextRef bitmap = CGBitmapContextCreate(NULL,
newRect.size.width,
newRect.size.height,
CGImageGetBitsPerComponent(imageRef),
0,
CGImageGetColorSpace(imageRef),
bitmapInfo);
// Rotate and/or flip the image if required by its orientation
CGContextConcatCTM(bitmap, transform);
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(bitmap, quality);
// Draw into the context; this scales the image
CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);
// Get the resized image from the context and a UIImage
CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
// Clean up
CGContextRelease(bitmap);
CGImageRelease(newImageRef);
return newImage;
}
UIImageWriteToSavedPhotosAlbum([image resizedImage:CGSizeMake(300, 300) interpolationQuality:kCGInterpolationNone], nil, nil, nil);
Please see the image below and let me know if you need any help.