Given an image, add white to all surrounding areas within a region - iOS - objective-c

Given an image G with dimensions w x h, how can I surround the image with white to produce a new image that is 320 x 480, where G is centered and surrounded by white? The width and height of the incoming image might be larger than 320 or 480, respectively, so first I would need to scale it down to ensure it is at most 320x480.
-(UIImage *)whiteVersion:(UIImage *)img {
}
I would use these steps:
First, scale the image if needed, then:
1. Create a UIImage of size 320x480
2. Paint the whole thing white
3. Paint G (the image) in the center by using some simple math to find the location
However, I can't figure out step 1 or 2.

You don't draw into images. You draw into graphics contexts. There are different kinds of graphics contexts. An image graphics context lets you take a snapshot of its contents as an image.
- (UIImage *)whiteVersion:(UIImage *)myImage {
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(320, 480), YES, myImage.scale);
    [[UIColor whiteColor] setFill];
    UIRectFill(CGRectInfinite);
    [myImage drawAtPoint:CGPointMake((320 - myImage.size.width) / 2, (480 - myImage.size.height) / 2)];
    UIImage *myPaddedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return myPaddedImage;
}
You can learn more about drawing on iOS by perusing the Drawing and Printing Guide for iOS and the Quartz 2D Programming Guide.
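For the scale-down step mentioned in the question (not covered in the code above), a minimal sketch could look like this; the method name imageScaledToFit: and the hard-coded 320x480 bounds are my own assumptions:

-(UIImage *)imageScaledToFit:(UIImage *)img {
    // Shrink only; never enlarge an image that already fits.
    CGFloat scale = MIN(1.0, MIN(320.0 / img.size.width, 480.0 / img.size.height));
    if (scale >= 1.0) return img;
    CGSize newSize = CGSizeMake(img.size.width * scale, img.size.height * scale);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, img.scale);
    [img drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}

The result of this method can then be passed to whiteVersion: to produce the padded 320x480 image.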

Related

coordinate computation of the image thumbnail

This is a code snippet for creating a thumbnail-sized image (from an original large image) and placing it appropriately on top of a table view cell. As I was studying the code I got stuck at the part where the thumbnail is given a position by setting its abscissa and ordinate. In the method -(void)setThumbnailDataFromImage:(UIImage *)image they set the dimensions and coordinates for the project thumbnail:
-(void)setThumbnailDataFromImage:(UIImage *)image {
    CGSize origImageSize = [image size];
    // the rectangle of the thumbnail
    CGRect newRect = CGRectMake(0, 0, 40, 40);
    // figure out a scaling ratio to make sure we maintain the same aspect ratio
    float ratio = MAX(newRect.size.width / origImageSize.width, newRect.size.height / origImageSize.height);
    // create a transparent bitmap context with a scaling factor equal to that of the screen
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
    // create a path that is a rounded rectangle
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:newRect cornerRadius:5.0];
    // make all subsequent drawing clip to this rounded rectangle
    [path addClip];
    // center the image in the thumbnail rectangle
    CGRect projectRect;
    projectRect.size.width = ratio * origImageSize.width;
    projectRect.size.height = ratio * origImageSize.height;
    projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2;
    projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2;
    // draw the image into it
    [image drawInRect:projectRect];
    // get the image from the image context, keep it as our thumbnail
    UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
    [self setThumbnail:smallImage];
    // get the PNG representation of the image and set it as our archivable data
    NSData *data = UIImagePNGRepresentation(smallImage);
    [self setThumbnailData:data];
    // clean up image context resources; we're done
    UIGraphicsEndImageContext();
}
I understand the width and height computation, where we multiply origImageSize by the scaling ratio.
But then the following is used to give the thumbnail a position:
projectRect.origin.x= (newRect.size.width- projectRect.size.width)/2;
projectRect.origin.y= (newRect.size.height- projectRect.size.height)/2;
This I fail to understand; I cannot wrap my head around it.
Is this part of the centering process? I mean, are we using a mathematical relation here to position the thumbnail, or is it some arbitrary calculation that could have been anything? Am I missing some fundamental idea behind these two lines of code?
Those two lines are standard code for centering something, although they aren’t quite written in the most general way. You normally want to use:
projectRect.origin.x = newRect.origin.x + newRect.size.width / 2.0 - projectRect.size.width / 2.0;
projectRect.origin.y = newRect.origin.y + newRect.size.height / 2.0 - projectRect.size.height / 2.0;
In your case the author knows the origin is 0,0, so they omitted the first term in each line.
To center one rectangle in another, you want the centers of the two rectangles to line up on each axis. Take, say, half the container's width (the center of the outer rectangle) and subtract half the inner rectangle's width (which takes you from the center to the left side of the inner rectangle); that gives you where the inner rectangle's left side should be (i.e., its x origin) when it is correctly centered.
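A hypothetical worked example (the numbers are mine, not from the original post): with the 40x40 newRect above and an 80x120 original, ratio = MAX(40/80, 40/120) = 0.5, so projectRect is 40x60 and its origin works out to (0, -10). The image is centered vertically, and the 10 points hanging off the top and bottom are cropped away by the rounded-rectangle clip.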

-(void)drawRect:(CGRect)rect is using up nearly all of the iPhone CPU

- (void)drawRect:(CGRect)rect {
    float sliceSize = rect.size.width / imagesShownAtOnce;
    // Apply our clipping region and fill it with black
    [clippingRegion addClip];
    [clippingRegion fill];
    // Draw the 3 images (+1 for in between), offset by our scroll amount
    CGPoint loc;
    for (int i = 0; i < imagesShownAtOnce + 1; i++) {
        loc = CGPointMake(rect.origin.x + (i * sliceSize) - imageScroll, rect.origin.y);
        [[buttonImages objectAtIndex:i] drawAtPoint:loc];
    }
    // Draw the text region background
    [[UIColor blackColor] setFill];
    [textRegion fillWithBlendMode:kCGBlendModeNormal alpha:0.4f];
    // Draw the actual text
    CGRect textRectangle = CGRectMake(rect.origin.x + 16, rect.origin.y + rect.size.height * 4 / 5.6, rect.size.width / 1.5, rect.size.height / 3);
    [[UIColor whiteColor] setFill];
    [buttonText drawInRect:textRectangle withFont:[UIFont fontWithName:@"Avenir-HeavyOblique" size:22]];
}
clippingRegion and textRegion are UIBezierPaths that give me the rounded rectangles I want (the first as a clipping region, the second as an overlay for my text).
The middle section draws three images (plus one for in between) and lets them scroll along. I update this every two refreshes from a CADisplayLink, which invalidates the draw region by calling [self setNeedsDisplay] and also increases my imageScroll variable.
With that background out of the way, here is my issue:
It runs, and even runs smoothly, but it uses a very large amount of CPU time (80%+). How do I push this off to the phone's GPU instead? Someone told me about CALayers, but I've never dealt with them before.
Draw each component of your drawing once into something (a view or a layer) and let it hold the cached drawing. Then you just move or transform each component and, exactly as you say, it's all done by the GPU.
You could do this with individual views or with individual layers, but that doesn't really matter (a view is a layer, under the hood). The point is that there is no need to be constantly redrawing from scratch when all you really want is to move the same persistent pieces around.
Learning about CALayer would be a good idea, as it is in fact the basis of all drawing on iOS. What could be more important to know about than that?
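A minimal sketch of that idea, assuming the three button images have been pre-composited into one stripImage (all names here are illustrative, not from the original code):

CALayer *imageStrip = [CALayer layer];
imageStrip.frame = CGRectMake(0, 0, stripImage.size.width, self.bounds.size.height);
imageStrip.contents = (__bridge id)stripImage.CGImage; // drawn once, cached by Core Animation
[self.layer addSublayer:imageStrip];

// On each CADisplayLink tick, just move the cached layer; the GPU composites it.
[CATransaction begin];
[CATransaction setDisableActions:YES]; // suppress implicit animations for per-frame updates
imageStrip.position = CGPointMake(imageStrip.position.x - scrollStep, imageStrip.position.y);
[CATransaction commit];

The text overlay could be cached the same way in its own layer, so drawRect: never has to run at all during scrolling.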

Problems scaling a UIImage to fit a UIButton

I have a set of buttons and different-sized images. I want to scale each image so that it fits its button at the correct aspect ratio. Once I've scaled the image, I set the button's image property to the scaled version.
UIImage *scaledImage = [image scaledForButton:pickerButton];
[pickerButton setImage:scaledImage forState:UIControlStateNormal];
My scaledForButton method is defined in a category on UIImage. It looks like this:
- (UIImage *)scaledForButton:(UIButton *)button
{
    // Check which dimension (width or height) to pay respect to and
    // calculate the scale factor
    CGFloat imageRatio = self.size.width / self.size.height;
    CGFloat buttonRatio = button.frame.size.width / button.frame.size.height;
    CGFloat scaleFactor = (imageRatio > buttonRatio ? self.size.width / button.frame.size.width : self.size.height / button.frame.size.height);
    // Create image using scale factor
    UIImage *scaledImage = [UIImage imageWithCGImage:[self CGImage]
                                               scale:scaleFactor
                                         orientation:UIImageOrientationUp];
    return scaledImage;
}
When I run this on an iPad 2 it works fine and the images are scaled correctly. However, if I run it on a Retina display (both in the simulator and on a device) the image does not scale correctly and is squished into the button.
Any ideas why this would happen only on Retina? I've been scratching my head for a couple of days but can't figure it out. Both are running the same iOS version, and I've checked the scale and ratio outputs, which are always the same regardless of device. Many thanks.
Found the answer here: UIButton doesn't listen to content mode setting?
If you're setting .contentMode, it seems you have to set it on the UIButton's imageView property, not on the UIButton itself; then it works properly.
The problem on iPad 3 was as Herman suggested - the CGImage was still a lot larger than the UIButton, so even though it was scaled down, it still had to be resized to fit the button.
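In code, the fix described above might look like this (a sketch of my own based on the linked answer, with UIViewContentModeScaleAspectFit as an example mode):

pickerButton.imageView.contentMode = UIViewContentModeScaleAspectFit;
[pickerButton setImage:image forState:UIControlStateNormal];

With the image view doing aspect-fit scaling, the custom scaledForButton: step may not be needed at all.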

UIImageView, CGImage, and Retina Art

I've got a few CALayers in my interface, and I'm drawing images directly to the layers as opposed to imageViews.
Here's a snippet:
UIImage *anImage = [UIImage imageNamed:@"anyImage"];
CGImageRef anImageRef = [anImage CGImage];
CALayer *aLayer = [CALayer layer];
CGFloat anImageWidth = CGImageGetWidth(anImageRef);
CGFloat anImageHeight = CGImageGetHeight(anImageRef);
CGRect layerFrame = CGRectMake(0, 0, anImageWidth, anImageHeight);
aLayer.frame = layerFrame;
[aLayer setContents:(__bridge id)anImageRef];
[parentLayer addSublayer:aLayer];
So my problem is that I'm getting inconsistent results with the size of the image. On a Retina device, the image that appears is double the anticipated size (i.e., it matches the pixel size of the @2x image). On the simulator in Retina mode, the image drawn to the layer is the anticipated size (where points match the pixels of the non-Retina image).
Rather than statically setting the size, or halving it (which corrects the issue on the device but breaks compatibility with non-Retina displays), what is a good solution or workaround for this scenario? Why is it happening?
The UIImage contains a scale property. It will be 2.0 for retina display images. See the docs for more info.
CGImageGetWidth() and CGImageGetHeight() return the number of pixels whereas you need the image size in points. Use -[UIImage size] instead.
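Put together, a corrected version of the snippet might read as follows (my adaptation of the code above, not the answerer's):

UIImage *anImage = [UIImage imageNamed:@"anyImage"];
CALayer *aLayer = [CALayer layer];
aLayer.frame = CGRectMake(0, 0, anImage.size.width, anImage.size.height); // points, not pixels
aLayer.contents = (__bridge id)anImage.CGImage;
aLayer.contentsScale = anImage.scale; // maps the @2x bitmap at 2 pixels per point
[parentLayer addSublayer:aLayer];

Setting contentsScale to match the image's scale keeps the layer the same point size on Retina and non-Retina screens.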

How do I clip or change alpha of an image (pixels) in Quartz?

I'm working on making an iPhone app where there are two image views, and when you touch the top one, wherever you tapped, the bottom one shows through instead.
Basically, what I want to do is cut an ellipse/rounded rect out of an image. To do this I was thinking of either clipping the image or setting the alpha of the pixels in the rect to zero. I am new to Quartz 2D programming, so I am not sure how to do this.
Assuming I have:
UIImageView *topImage;
UIImageView *bottomImage;
How do I delete a CGRect/ellipse/rounded rect from these images?
This is kind of like those lottery tickets that you have to scratch off to reveal if you won.
I would generally try to make a mask from a path (here containing a rounded rectangle) and then mask the image with it, as demonstrated in the Apple docs. One of the benefits of this approach is that for hit testing all you need is CGPathContainsPoint with the point that was touched (i.e., it will test whether the touch was in the visible area of the image).
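A minimal sketch of that approach (the method name is mine, and it takes the shortcut of clipping to the path in an image context rather than building a separate CGImage mask):

- (UIImage *)image:(UIImage *)image maskedToPath:(UIBezierPath *)path {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [path addClip]; // everything drawn after this is limited to the path's interior
    [image drawAtPoint:CGPointZero];
    UIImage *masked = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return masked;
}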
I tried this code:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect frame = CGRectMake(100, 100, 100, 100);
CGPathRef roundedRectPath = [self newPathForRoundedRect:frame radius:5];
CGContextAddPath(ctx, roundedRectPath);
CGContextClip (ctx);
CGPathRelease(roundedRectPath);
(Together with the rounded-rect path function you sent.)
This is on a white view, and beneath the view there is a gray window, so I thought this would just show gray instead of white inside the CGRect frame, but it didn't do anything...