UIImage resize and crop to fit frame - objective-c

I know this question has been asked several times, but the answers make my images lose quality: they all become pixelated. So even though the code crops and resizes correctly, it loses quality.
Just so you can check it, this is the algorithm that appears in every post:
- (UIImage *)scaleAndCropImage:(UIImage *)aImage forSize:(CGSize)targetSize
{
    UIImage *sourceImage = aImage;
    UIImage *newImage = nil;
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

    if (CGSizeEqualToSize(imageSize, targetSize) == NO)
    {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;
        if (widthFactor > heightFactor)
        {
            scaleFactor = widthFactor; // scale to fill the width; the height will be cropped
        }
        else
        {
            scaleFactor = heightFactor; // scale to fill the height; the width will be cropped
        }
        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;

        // center the image
        if (widthFactor > heightFactor)
        {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        }
        else if (widthFactor < heightFactor)
        {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }

    UIGraphicsBeginImageContext(targetSize); // this will crop
    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = thumbnailPoint;
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;
    [sourceImage drawInRect:thumbnailRect];
    newImage = UIGraphicsGetImageFromCurrentImageContext();
    if (newImage == nil)
    {
        NSLog(@"could not scale image");
    }

    // pop the context to get back to the default
    UIGraphicsEndImageContext();
    return newImage;
}

On the line where you begin the image context, use UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext. Try something like this:
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
Notice the three parameters passed above; I'll go through them in order:
targetSize is the size of the image, measured in points (not pixels).
NO is a BOOL value (could be YES or NO) that indicates whether the image is 100% opaque. Setting it to NO preserves transparency by creating an alpha channel.
The most important part is the final parameter, 0.0. This is the image scale factor that will be applied. Passing 0.0 uses the scale factor of the current device's screen, which means quality is preserved and the result looks especially good on Retina displays.
Here's the Apple Documentation on UIGraphicsBeginImageContextWithOptions.
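Applied to the method in the question, only the context line needs to change:
// Before (renders at scale 1.0 and loses Retina resolution):
// UIGraphicsBeginImageContext(targetSize);
// After (renders at the device's screen scale):
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);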

You should use UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0) instead of UIGraphicsBeginImageContext(targetSize) so the correct scale factor gets applied to the bitmap.
Specifying 0.0 as the scale factor sets it to the scale of the device's main screen.
Calling plain UIGraphicsBeginImageContext() is the same as calling UIGraphicsBeginImageContextWithOptions(..) with a scale factor of 1.0.
For more details, take a look at: http://developer.apple.com/library/ios/documentation/UIKit/Reference/UIKitFunctionReference/Reference/reference.html#//apple_ref/c/func/UIGraphicsBeginImageContextWithOptions
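If you can target iOS 10 or later, UIGraphicsImageRenderer is an alternative that applies the screen's scale automatically; a minimal sketch, reusing sourceImage and thumbnailRect from the method above:
UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:targetSize];
UIImage *newImage = [renderer imageWithActions:^(UIGraphicsImageRendererContext *context) {
    // same aspect-fill rect math as in scaleAndCropImage:forSize:
    [sourceImage drawInRect:thumbnailRect];
}];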

Related

AVAsset video size (width * height)

I capture a screen recording and write it out to a file. My Mac has a Retina display.
I get the video size like this:
self.asset = [AVAsset assetWithURL:_assetURL];
AVAssetTrack *track = [[self.asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
self.vedioNaturalSize = CGSizeApplyAffineTransform(track.naturalSize, track.preferredTransform);
But this size is in pixels; I want the size in points.
When I play the video in QuickTime, the initial window is sized in points, not pixels, yet I can only get the size in pixels.
Does anybody know an approach? Thanks very much.
Try this code to convert pixels to points:
+ (CGFloat)pixelToPoints:(CGFloat)px {
    CGFloat pointsPerInch = 72.0; // see: http://en.wikipedia.org/wiki/Point%5Fsize#Current%5FDTP%5Fpoint%5Fsystem
    CGFloat scale = 1; // We don't use [[UIScreen mainScreen] scale] because we don't want native pixels; we want pixels as UIFont sees them - it does the Retina scaling for us
    float pixelPerInch; // aka dpi
    if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
        pixelPerInch = 132 * scale;
    } else if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPhone) {
        pixelPerInch = 163 * scale;
    } else {
        pixelPerInch = 160 * scale;
    }
    CGFloat result = px * pointsPerInch / pixelPerInch;
    return result;
}
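Note that the snippet above uses UIKit idioms. Since the question concerns a screen recording from a Retina Mac, a simpler option on macOS is to divide the pixel size by the screen's backing scale factor; a minimal sketch, assuming the video was captured on the main screen:
// Convert the pixel-based natural size to points (AppKit / macOS).
CGFloat backingScale = [[NSScreen mainScreen] backingScaleFactor]; // 2.0 on a Retina Mac
CGSize pixelSize = CGSizeApplyAffineTransform(track.naturalSize, track.preferredTransform);
CGSize pointSize = CGSizeMake(pixelSize.width / backingScale, pixelSize.height / backingScale);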

In iOS, how can I convert a web image into a lower-quality one?

The app downloads images from the web and shows them in a table view as thumbnails. However, the quality of these images is "too good": they are HD quality, and if there are many of them in a table view, they can slow down the UI a bit.
Before I set an image on a cell, how can I make it "lower quality" (occupying less memory and requiring less processing power to display)?
I tried something like this:
let smallerImage = UIImage(CGImage: image.CGImage!, scale: 0.2, orientation: image.imageOrientation)
but it's not working as expected. What's the right way to do it?
Here is my working code from an old project, from before the Swift era. An extra parameter, maxLength, passes the maximum height and width to use (for portrait it's the max height, for landscape the max width). Hope you know how to translate from Objective-C to Swift:
- (UIImage *)scaleImage:(UIImage *)image toMax:(float)maxLength
{
    // scale down image to fit within a square of maxLength x maxLength
    CGSize size = image.size;
    printf("scaleImage source size: %f, %f\n", size.width, size.height);
    float scaleX = size.width / maxLength;
    float scaleY = size.height / maxLength;
    float scale = scaleX > scaleY ? scaleX : scaleY;
    int newWidth = round(size.width / scale);
    int newHeight = round(size.height / scale);
    CGSize newSize = CGSizeMake(size.width / scale, size.height / scale);
    printf("scaleImage new size: %d, %d\n", newWidth, newHeight);
    UIGraphicsBeginImageContext(newSize); // a CGSize with the size you want
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)]; // image is the original UIImage
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Applying such scaling does indeed limit the total memory consumed by the thumbnails.
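Since the images arrive as compressed data anyway, another option worth knowing is to let ImageIO decode a size-bounded thumbnail directly from that data rather than decoding the full-resolution image first. A minimal sketch, where imageData is assumed to be the NSData you downloaded and 200 is an arbitrary size limit:
#import <ImageIO/ImageIO.h>

// Build a downscaled thumbnail straight from compressed image data.
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
NSDictionary *options = @{ (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
                           (id)kCGImageSourceCreateThumbnailWithTransform : @YES, // honor EXIF orientation
                           (id)kCGImageSourceThumbnailMaxPixelSize : @200 };      // longest side, in pixels
CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
UIImage *smallImage = [UIImage imageWithCGImage:thumbnail];
CGImageRelease(thumbnail);
CFRelease(source);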

UIImage Resize with aspect ratio -> SAVE JPG -> LOAD -> image loaded twice bigger

I'm using this code to resize an image:
- (UIImage *)resizeToSize:(CGSize)size {
    float height = self.size.height;
    float width = self.size.width;
    if (width > size.width) {
        width = size.width;
        height = size.width / (self.size.width / self.size.height);
    }
    if (height > size.height) {
        height = size.height;
        width = size.height / (self.size.height / self.size.width);
    }
    NSLog(@"Resize to size %@", NSStringFromCGSize(size));
    if (height == self.size.height && width == self.size.width) {
        return self;
    }
    CGSize newSize = CGSizeMake(width, height);
    UIGraphicsBeginImageContextWithOptions(newSize, YES, 0.0);
    [self drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSLog(@"Resized %@", NSStringFromCGSize(newImage.size));
    return newImage;
}
The image is resized, and in the next step I save it via [UIImageJPEGRepresentation(image, 1.0) writeToFile:pngPath atomically:YES];.
After that I load the file, and the image size is twice as big. Any hint why?
Thank you!
I suspect what is happening here is that you have an image with scale 2.0 (on a Retina device), but you are saving it without @2x in the filename, so when you reopen it, it comes back as a scale 1.0 image with dimensions twice as large as you want.
If that's the case, there are two solutions: you can fix your filename, or you can re-open the file by reading it into an NSData object and then use UIImage +imageWithData:scale: to get a properly scaled image.
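A minimal sketch of the second option, reusing the pngPath from the question:
// Reload the JPEG at the screen's scale so its point size matches the original.
NSData *data = [NSData dataWithContentsOfFile:pngPath];
UIImage *reloaded = [UIImage imageWithData:data scale:[UIScreen mainScreen].scale];
// reloaded.size is now in points again (half the pixel dimensions on a Retina device).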

CGContextDrawAngleGradient?

Dipping my feet into some more Core Graphics drawing, I'm attempting to recreate a wicked-looking metallic knob, and I've landed on what is probably a show-stopping issue.
There doesn't seem to be any way to draw angle gradients in Core Graphics. I see there's CGContextDrawRadialGradient() and CGContextDrawLinearGradient(), but nothing that would allow me to draw an angle gradient. Does anyone know of a workaround, or a bit of framework hidden away somewhere, to accomplish this without pre-rendering the knob into an image file?
(Image: AngleGradientKnob - http://dl.dropbox.com/u/3009808/AngleGradient.png)
This is kind of thrown together, but it's the approach I'd probably take: create the angle gradient by drawing it directly into a bitmap using some simple trig, then clip it to a circle. I create a grid of memory using a grayscale color space, calculate the angle from each point to the center, and then color that point based on a periodic function running between 0 and 255. You could of course expand this to do RGBA color as well.
Of course you'd cache this and play with the math to get the colors you want. This currently runs all the way from black to white, which doesn't look as good as you'd like.
- (void)drawRect:(CGRect)rect {
    CGImageAlphaInfo alphaInfo = kCGImageAlphaNone;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    size_t components = CGColorSpaceGetNumberOfComponents(colorSpace);
    size_t width = 100;
    size_t height = 100;
    size_t bitsPerComponent = 8;
    size_t bytesPerComponent = bitsPerComponent / 8;
    size_t bytesPerRow = width * bytesPerComponent * components;
    size_t dataLength = bytesPerRow * height;
    uint8_t data[dataLength];
    CGContextRef imageCtx = CGBitmapContextCreate(data, width, height, bitsPerComponent,
                                                  bytesPerRow, colorSpace, alphaInfo);
    NSUInteger offset = 0;
    for (NSUInteger y = 0; y < height; ++y) {
        for (NSUInteger x = 0; x < bytesPerRow; x += components) {
            CGFloat opposite = y - height / 2.;
            CGFloat adjacent = x - width / 2.;
            if (adjacent == 0) adjacent = 0.001;
            CGFloat angle = atan(opposite / adjacent);
            data[offset] = fabs(cos(angle * 2) * 255);
            offset += components * bytesPerComponent;
        }
    }
    CGImageRef image = CGBitmapContextCreateImage(imageCtx);
    CGContextRelease(imageCtx);
    CGColorSpaceRelease(colorSpace);

    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect buttonRect = CGRectMake(100, 100, width, width);
    CGContextAddEllipseInRect(ctx, buttonRect);
    CGContextClip(ctx);
    CGContextDrawImage(ctx, buttonRect, image);
    CGImageRelease(image);
}
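A sketch of the caching idea mentioned above: factor the bitmap generation out into a helper (createAngleGradientImage is a hypothetical name for the code above wrapped up to return a retained CGImageRef) and build it only once, keeping it in an ivar:
// gradientImage is an assumed CGImageRef ivar; built lazily, reused on every draw.
- (void)drawRect:(CGRect)rect {
    if (gradientImage == NULL) {
        gradientImage = [self createAngleGradientImage];
    }
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect buttonRect = CGRectMake(100, 100, 100, 100);
    CGContextAddEllipseInRect(ctx, buttonRect);
    CGContextClip(ctx);
    CGContextDrawImage(ctx, buttonRect, gradientImage);
}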
To expand on what's in the comments to the accepted answer, here's the code for generating an angle gradient using Core Image. This should work in iOS 8 or later.
// generate a dummy image of the required size
UIGraphicsBeginImageContextWithOptions(CGSizeMake(256.0, 256.0), NO, [[UIScreen mainScreen] scale]);
CIImage *dummyImage = [CIImage imageWithCGImage:UIGraphicsGetImageFromCurrentImageContext().CGImage];
UIGraphicsEndImageContext();

// define the kernel algorithm
NSString *kernelString = @"kernel vec4 circularGradientKernel(__color startColor, __color endColor, vec2 center, float radius) { \n"
    "    vec2 point = destCoord() - center;"
    "    float rsq = point.x * point.x + point.y * point.y;"
    "    float theta = mod(atan(point.y, point.x), radians(360.0));"
    "    return (rsq < radius*radius) ? mix(startColor, endColor, 0.5+0.5*cos(4.0*theta)) : vec4(0.0, 0.0, 0.0, 1.0);"
    "}";

// initialize a Core Image context and the filter kernel
CIContext *context = [CIContext contextWithOptions:nil];
CIColorKernel *kernel = [CIColorKernel kernelWithString:kernelString];

// argument array, corresponding to the first line of the kernel string
NSArray *args = @[ [CIColor colorWithRed:0.5 green:0.5 blue:0.5],
                   [CIColor colorWithRed:1.0 green:1.0 blue:1.0],
                   [CIVector vectorWithCGPoint:CGPointMake(CGRectGetMidX(dummyImage.extent), CGRectGetMidY(dummyImage.extent))],
                   [NSNumber numberWithFloat:200.0] ];

// apply the kernel to our dummy image, and convert the result to a UIImage
CIImage *ciOutputImage = [kernel applyWithExtent:dummyImage.extent arguments:args];
CGImageRef cgOutput = [context createCGImage:ciOutputImage fromRect:ciOutputImage.extent];
UIImage *gradientImage = [UIImage imageWithCGImage:cgOutput];
CGImageRelease(cgOutput);
This generates a 256×256 image of the circular angle gradient.

Why can't I set a zoomScale lower than 1?

I'm using some code extracted from the PhotoScroller example (WWDC10), but with a UIView instead of a UIImageView. This is the only change I made (in the UIScrollView subclass).
My problem is when I call configureForViewSize:. It tries to apply a zoomScale of 0.66 (to show the full view), but it doesn't work.
After setting self.minimumZoomScale = minScale, I look at the value of self.zoomScale and it is always equal to 1.
- (void)displayView:(UIView *)mView
{
    // clear the previous view
    [view removeFromSuperview];
    [view release];
    view = nil;

    // reset our zoomScale to 1.0 before doing any further calculations
    self.zoomScale = 1.0;
    [self addSubview:mView];
    [self configureForViewSize:mView.frame.size];
}

- (void)configureForViewSize:(CGSize)viewSize
{
    CGSize boundsSize = [self bounds].size;

    // set up our content size and min/max zoomscale
    CGFloat xScale = boundsSize.width / viewSize.width;   // the scale needed to perfectly fit the view width-wise
    CGFloat yScale = boundsSize.height / viewSize.height; // the scale needed to perfectly fit the view height-wise
    CGFloat minScale = MIN(xScale, yScale);               // use the minimum of these to allow the view to become fully visible

    // on high resolution screens we have double the pixel density, so we will be seeing every pixel
    // if we limit the maximum zoom scale to 0.5
    CGFloat maxScale = 1.0 / [[UIScreen mainScreen] scale];

    // don't let minScale exceed maxScale. (If the view is smaller than the screen, we don't want to force it to be zoomed.)
    if (minScale > maxScale) {
        minScale = maxScale;
    }

    NSLog(@"contentSize %f %f", viewSize.width, viewSize.height);
    NSLog(@"minScale %f", minScale);
    NSLog(@"maxScale %f", maxScale);

    self.contentSize = viewSize;
    self.maximumZoomScale = maxScale;
    self.minimumZoomScale = minScale;

    // --> THIS IS NOT WORKING! =(
    self.zoomScale = minScale; // start out with the content fully visible
    NSLog(@"self.zoomScale %f", self.zoomScale);
}
The console msgs are:
2011-03-29 10:18:39.950 MyProj[6259:207] contentSize 1536.000000 893.000000
2011-03-29 10:18:39.953 MyProj[6259:207] minScale 0.666293
2011-03-29 10:18:39.954 MyProj[6259:207] maxScale 1.000000
2011-03-29 10:18:39.955 MyProj[6259:207] self.zoomScale 1.000000
EDIT:
I figured it out myself: I had forgotten to assign view = mView in the displayView: method =(
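For completeness, a sketch of the corrected method (under manual reference counting, to match the release calls in the original):
- (void)displayView:(UIView *)mView
{
    // clear the previous view
    [view removeFromSuperview];
    [view release];
    view = [mView retain]; // <-- the assignment that was missing

    // reset our zoomScale to 1.0 before doing any further calculations
    self.zoomScale = 1.0;
    [self addSubview:view];
    [self configureForViewSize:view.frame.size];
}
With view set, the zooming view returned from viewForZoomingInScrollView: is no longer nil, which is presumably why the zoomScale change now takes effect.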