I am trying to resize a UIImage. I take the image, set its width to 640 (for example), work out the scale factor I had to use, and apply that same factor to the image's height.
Somehow the image sometimes flips: even if it was a portrait image, it becomes landscape.
I am probably overlooking something here...
// Scale 'newImage' so its width is 640, preserving the aspect ratio
CGRect screenRect = CGRectMake(0, 0, 640.0, newImage.size.height * (640.0 / newImage.size.width));
UIGraphicsBeginImageContext(screenRect.size);
[newImage drawInRect:screenRect];
UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
OK, I have the answer, but it's not in the code above; it's in how the image was loaded.
When I took the image from the asset library, I took it with:
CGImageRef imageRef = result.defaultRepresentation.fullResolutionImage;
This flips the image. Now I use:
CGImageRef imageRef = result.defaultRepresentation.fullScreenImage;
and the image is just fine.
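If the full-resolution pixels are needed rather than the screen-sized version, the flip can also be avoided by applying the representation's orientation and scale when wrapping the CGImage. A minimal sketch, assuming result is the ALAsset being read (ALAssetOrientation maps onto UIImageOrientation value for value):
ALAssetRepresentation *rep = result.defaultRepresentation;
CGImageRef imageRef = rep.fullResolutionImage;
// Wrap the CGImage with the asset's scale and orientation so UIKit displays it the right way up
UIImage *image = [UIImage imageWithCGImage:imageRef
                                     scale:rep.scale
                               orientation:(UIImageOrientation)rep.orientation];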
I am trying to create a simple cropping app. I have a view overlaid on a UIImage, and I want to crop a new image out of only the part of the UIImage where the view is overlaid. I used this code:
CGRect cropRect = _cropView.frame;
UIImage *cropImage = [UIImage imageWithData:imageData];
CGImageRef cropped = CGImageCreateWithImageInRect([cropImage CGImage], cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:cropped];
CGImageRelease(cropped);
self.imageView.image = croppedImage;
but it resulted in an unexpected image that seemed very zoomed in and not what I wanted. How can I crop a new image from the image behind the view?
A CGImage deals with pixels, not "points" (the abstract measurement unit of UIKit). When the crop view's contentScaleFactor is larger than 1, its pixel dimensions will be larger than its point dimensions. So, to get a crop rect that corresponds to the pixels, you need to multiply all of the coordinates and dimensions of the view's frame by its contentScaleFactor.
CGRect cropRect = _cropView.frame;
cropRect.origin.x *= _cropView.contentScaleFactor;
cropRect.origin.y *= _cropView.contentScaleFactor;
cropRect.size.width *= _cropView.contentScaleFactor;
cropRect.size.height *= _cropView.contentScaleFactor;
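With the rect converted to pixel coordinates, the crop itself can then proceed as in the question. A sketch, assuming cropImage is the full-size source UIImage from the question:
CGImageRef cropped = CGImageCreateWithImageInRect(cropImage.CGImage, cropRect);
// Keep the source image's scale and orientation in the cropped copy
UIImage *croppedImage = [UIImage imageWithCGImage:cropped
                                            scale:cropImage.scale
                                      orientation:cropImage.imageOrientation];
CGImageRelease(cropped);
self.imageView.image = croppedImage;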
Maybe this will help you:
// Create a new image context the size of the crop rect
CGSize size = _cropView.frame.size;
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
// Offset the drawing so the crop rect's origin lands at (0, 0);
// this assumes the image is displayed at its natural size in points
CGRect rect = _cropView.frame;
rect.origin.x = -rect.origin.x;
rect.origin.y = -rect.origin.y;
rect.size = existingImage.size;
// Draw the full image; only the part under the crop rect is kept
[existingImage drawInRect:rect];
// Save the image and end the image context
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I am making a UIImage from a CGImageRef.
CGImageRef drawImage = CGImageCreateWithImageInRect(croppedImage.CGImage, frame);
UIImage *newImage = [UIImage imageWithCGImage:drawImage];
CGImageRelease(drawImage);
And applying it to the image view:
self.imageview.image = newImage;
After this line (self.imageview.image = newImage;) my image view is resized to the size of the UIImage (newImage).
When I log self.imageview's frame I get the correct value, so I am not able to figure out the exact reason for this.
A UIImageView will automatically resize to fit its image's size if you didn't set the frame of the image view.
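A minimal sketch of that advice (the frame and content mode values here are assumptions): give the image view an explicit frame before assigning the image, so its size stays under your control.
// Hypothetical size; set the frame (and a content mode) explicitly
self.imageview.frame = CGRectMake(0, 0, 200, 200);
self.imageview.contentMode = UIViewContentModeScaleAspectFit;
self.imageview.image = newImage;   // the view keeps its 200x200 frame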
I have a UIImageView which is in scale-to-fill mode. Now I want to crop the image shown in the image view. I have a rectangle on this image view which the user can move and resize; after a particular size is selected, I want to crop the image using that frame. But since my UIImageView's mode is scale to fill, this messes up the cropping rectangle and a different part is cropped. Has anyone faced this problem? How do I crop properly?
Your options are to do the coordinate calculation yourself, or to use this workaround:
-(UIImage *)imageFromImageView:(UIImageView *)view withCropRect:(CGRect)cropRect
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * largeImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
UIImage *img = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return img;
}
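One caveat when calling the helper above: the rendered UIImage's backing CGImage is in pixels, so on a Retina screen the crop rect (in points) needs to be scaled first. A hypothetical usage, assuming the user-adjustable rectangle is _cropRectView:
CGRect cropRect = _cropRectView.frame;           // hypothetical rectangle view
CGFloat scale = [UIScreen mainScreen].scale;     // the render above used scale 0.0, i.e. the screen scale
cropRect = CGRectMake(cropRect.origin.x * scale, cropRect.origin.y * scale,
                      cropRect.size.width * scale, cropRect.size.height * scale);
UIImage *cropped = [self imageFromImageView:self.imageView withCropRect:cropRect];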
I loaded a JPG into a UIImageView. The image is oversized for the iPhone screen. How can I resize it to a specific CGRect frame?
UIImageView *uivSplash = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"iPhone-Splash.jpg"]];
[self.view addSubview:uivSplash];
A UIImageView is just a UIView, so you can change its frame property.
uivSplash.frame = CGRectMake(0, 0, width, height);
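If the goal is just to fit the oversized image on screen, a short follow-up sketch (the bounds and content mode here are assumptions, not part of the original snippet):
uivSplash.frame = self.view.bounds;                       // e.g. fill the screen
uivSplash.contentMode = UIViewContentModeScaleAspectFit;  // scale the oversized image down to fit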
You'll want a method like the following:
CGFloat newWidth = whateverYourDesiredWidth; // someView.bounds.size.width, for example
CGFloat newHeight = whateverYourDesiredHeight; // someView.bounds.size.height, for example
CGSize newSize = CGSizeMake(newWidth, newHeight);
UIGraphicsBeginImageContext(newSize);
[yourLargeImage drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
UIImage * newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
So this is getting your desired width and height (maybe the screen size, maybe hard-coded size, maybe a size based on a UIView) and re-drawing the image in a context that is that size.
~Good Luck
EDIT: it occurs to me I may have misunderstood what you want, so I'll also point out (as others have said) that UIImageView has a contentMode property that lets you fit the image to the view's size, scale to fill, preserve the aspect ratio, etc.
I want to save 2 UIImages that are moved, resized and rotated by the user. The problem is I don't want to use anything like a "print screen" function, because it makes both images lose a lot of their quality (resolution).
At the moment I use something like this:
UIGraphicsBeginImageContext(image1.size);
[image1 drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
[image2 drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
However, of course this just overlays the two images; their rotation, resizing and moving aren't handled here. Can anybody help with taking these 3 aspects into account in the code? Any help is appreciated!
My biggest thanks in advance :)
EDIT: images can be rotated and zoomed by user (handling touch events)!
You have to set the transform of the context to match your imageView's transform before you start drawing into it.
i.e.,
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, boundingRect.size.width/2, boundingRect.size.height/2);
transform = CGAffineTransformRotate(transform, angle);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
// Draw the image into the context
CGContextDrawImage(context, CGRectMake(-imageView.image.size.width/2, -imageView.image.size.height/2, imageView.image.size.width, imageView.image.size.height), imageView.image.CGImage);
// Get an image from the context (release the intermediate CGImage to avoid a leak)
CGImageRef rotatedRef = CGBitmapContextCreateImage(context);
rotatedImage = [UIImage imageWithCGImage:rotatedRef];
CGImageRelease(rotatedRef);
and check out Creating a UIImage from a rotated UIImageView.
EDIT: if you don't know the angle of rotation of the image, you can get the transform from the UIImageView's transform property:
UIGraphicsBeginImageContext(rotatedImageView.image.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform transform = rotatedImageView.transform;
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
// Draw the image into the context
CGContextDrawImage(context, CGRectMake(0, 0, rotatedImageView.image.size.width, rotatedImageView.image.size.height), rotatedImageView.image.CGImage);
// Get an image from the context (release the intermediate CGImage to avoid a leak)
CGImageRef newRef = CGBitmapContextCreateImage(context);
UIImage *newRotatedImage = [UIImage imageWithCGImage:newRef];
CGImageRelease(newRef);
UIGraphicsEndImageContext();
You will have to play about with the transform matrix to centre the image in the context and you will also have to calculate a bounding rectangle for the rotated image or it will be cropped at the corners (i.e., rotatedImageView.image.size is not big enough to encompass a rotated version of itself).
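A sketch of that bounding-rectangle calculation, reusing the view's transform from the EDIT above: applying the transform to a rect the size of the image gives the context size needed to avoid clipping the corners.
CGRect imageRect = CGRectMake(0, 0, rotatedImageView.image.size.width, rotatedImageView.image.size.height);
// The smallest rect that contains the transformed (rotated) image
CGRect boundingRect = CGRectApplyAffineTransform(imageRect, rotatedImageView.transform);
// Use boundingRect.size (rather than the raw image size) when beginning the image context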
Try this:
UIImage *temp = [[UIImage alloc] initWithCGImage:image1.CGImage scale:1.0 orientation:yourOrientation];
[temp drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
Similarly for image2. Rotation and resizing are handled by orientation and scale respectively. yourOrientation is a UIImageOrientation enum variable and can have a value from 0-7 (check the Apple documentation on the different UIImageOrientation values). Hope it helps...
EDIT: To handle rotations, just pass the orientation that corresponds to the rotation you require. You can rotate 90 degrees left/right or flip vertically/horizontally. For example, in the Apple documentation, UIImageOrientationUp is 0, UIImageOrientationDown is 1 and so on. Check out my github repo for an example.
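Putting that together with the compositing code from the question, a minimal sketch might look like this (the orientation values and the use of image1's size for the canvas are assumptions, not part of the original):
UIGraphicsBeginImageContext(image1.size);
// Re-wrap each CGImage with the orientation the user left it in (assumed values)
UIImage *oriented1 = [[UIImage alloc] initWithCGImage:image1.CGImage scale:1.0 orientation:UIImageOrientationUp];
UIImage *oriented2 = [[UIImage alloc] initWithCGImage:image2.CGImage scale:1.0 orientation:UIImageOrientationRight];
[oriented1 drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
[oriented2 drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();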