Adding a tint to an image - Objective-C

I'm creating an app which uses UIImagePickerController to present a camera to the user with a custom overlay which includes one of two grids/patterns over the camera "view" itself.
The grids themselves are .png files in a UIImageView which is added to the overlay. They're quite complex, so I would really like to steer away from drawing the grid in code, even though that would give a nice, clean and simple answer to my question.
I would like to be able to offer the grids in a variety of colours. The obvious solution is to create more .png images in different colours, but for each colour there would have to be four separate images (regular and retina for each of the grids), so that would quickly add up to a lot of assets.
The solution which, I think, would be ideal would be for me to just create the grids in white/gray and then apply a tint to colour them appropriately.
Is that possible? Or do I need to seek an alternative solution?

With thanks to Ananth for pointing me to iPhone - How do you color an image?
I've added this method to my code as suggested in the question, with the modification in willc2's answer:
- (UIImage *)colorizeImage:(UIImage *)baseImage color:(UIColor *)theColor {
    UIGraphicsBeginImageContext(baseImage.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect area = CGRectMake(0, 0, baseImage.size.width, baseImage.size.height);

    // Flip the context so the CGImage isn't drawn upside down.
    CGContextScaleCTM(ctx, 1, -1);
    CGContextTranslateCTM(ctx, 0, -area.size.height);

    CGContextSaveGState(ctx);
    // Restrict drawing to the opaque parts of the base image, then flood-fill with the tint colour.
    CGContextClipToMask(ctx, area, baseImage.CGImage);
    [theColor set];
    CGContextFillRect(ctx, area);
    CGContextRestoreGState(ctx);

    // Multiply the original image back over the flat colour so its shading is preserved.
    CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
    CGContextDrawImage(ctx, area, baseImage.CGImage);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
...and I'm getting exactly what I'm after.
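For anyone doing the same thing, calling it is as simple as something like this ("grid" and gridImageView are placeholders for the real asset name and overlay image view):

// Tint the white/gray grid asset and drop it into the overlay's image view.
UIImage *baseGrid = [UIImage imageNamed:@"grid"];
UIImage *tintedGrid = [self colorizeImage:baseGrid color:[UIColor greenColor]];
gridImageView.image = tintedGrid;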

Related

Blending two images and drawing resized image from two UIImageViews

I have two UIImageViews, one called imageView and the other called subView (which is a subview of imageView).
I want to blend the images on these views together, with the user being able to adjust the alpha of the blend with a pan gesture. My code works, but right now it is slow because the image is redrawn every time the pan gesture moves. Is there a faster/more efficient way of doing this?
BONUS Q: I want to allow my subView image to be drawn zoomed in. Currently I've set my subView's content mode to UIViewContentModeCenter, but I can't seem to draw a zoomed-in part of my image with this content mode. Is there any way around this?
My drawRect:
- (void)drawRect:(CGRect)rect
{
    float xCenter = self.center.x - self.currentImage1.size.width / 2.0;
    float yCenter = self.center.y - self.currentImage1.size.height / 2.0;

    subView.alpha = self.blendAmount; // Customize the opacity of the top image.

    UIGraphicsBeginImageContext(self.currentImage1.size);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(c, kCGBlendModeColorBurn);
    [imageView.layer renderInContext:c];
    self.blendedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [self.blendedImage drawAtPoint:CGPointMake(xCenter, yCenter)];
}
You need to use the GPU for image processing, which is far faster than using the CPU (as you're doing right now).
You can use the Core Image framework, which is very fast and easy to use but requires iOS 5, or you can use OpenGL ES directly, but that requires experience and some knowledge of OpenGL shaders.
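As a rough illustration of the Core Image route (topImage and baseImage are placeholders for your two UIImages, and CIColorBurnBlendMode mirrors the colour-burn blend used in the question; if that filter isn't available on your deployment target, any of the CI blend/compositing filters can be swapped in):

#import <CoreImage/CoreImage.h>

// Sketch only: blend two images on the GPU with Core Image.
CIImage *top = [CIImage imageWithCGImage:topImage.CGImage];
CIImage *base = [CIImage imageWithCGImage:baseImage.CGImage];

CIFilter *blend = [CIFilter filterWithName:@"CIColorBurnBlendMode"];
[blend setValue:top forKey:kCIInputImageKey];
[blend setValue:base forKey:kCIInputBackgroundImageKey];

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgResult = [context createCGImage:blend.outputImage fromRect:base.extent];
UIImage *result = [UIImage imageWithCGImage:cgResult];
CGImageRelease(cgResult);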

Divide UIImage into two parts along a UIBezierPath

How can I divide this UIImage into two parts along the black line? The upper contour is defined by a UIBezierPath.
I need to get two resulting UIImages. Is that possible?
The following set of routines creates versions of a UIImage with either only the content inside a path, or only the content outside that path.
Both make use of the compositeImage method, which uses CGBlendMode. CGBlendMode is very powerful for masking anything you can draw against anything else you can draw. Calling compositeImage: with other blend modes can have interesting (if not always useful) effects. See the CGContext Reference for all the modes.
The clipping method I described in my comment to your OP does work and is probably faster, but only if you have UIBezierPaths defining all the regions you want to clip.
- (UIImage *)compositeImage:(UIImage *)sourceImage onPath:(UIBezierPath *)path usingBlendMode:(CGBlendMode)blend
{
    // Create a new image of the same size as the source.
    UIGraphicsBeginImageContext([sourceImage size]);
    // First draw an opaque path...
    [path fill];
    // ...then composite with the image.
    [sourceImage drawAtPoint:CGPointZero blendMode:blend alpha:1.0];
    // With drawing complete, store the composited image for later use.
    UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
    // Graphics contexts must be ended manually.
    UIGraphicsEndImageContext();
    return maskedImage;
}
- (UIImage *)maskImage:(UIImage *)sourceImage toAreaInsidePath:(UIBezierPath *)maskPath
{
    return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceIn];
}

- (UIImage *)maskImage:(UIImage *)sourceImage toAreaOutsidePath:(UIBezierPath *)maskPath
{
    return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceOut];
}
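For example, assuming you already have the source image and the dividing UIBezierPath (sourceImage and dividingPath are just placeholder names):

// The two halves of the original image, split by the path.
UIImage *insidePart = [self maskImage:sourceImage toAreaInsidePath:dividingPath];
UIImage *outsidePart = [self maskImage:sourceImage toAreaOutsidePath:dividingPath];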
I tested clipping, and in a few different tests it was 25% slower than masking to achieve the same result as the maskImage:toAreaInsidePath: method in my other answer. For completeness I include it here, but please don't use it without a good reason.
- (UIImage *)clipImage:(UIImage *)sourceImage toPath:(UIBezierPath *)path
{
    // Create a new image of the same size as the source.
    UIGraphicsBeginImageContext([sourceImage size]);
    // Clipping means drawing only happens within the path.
    [path addClip];
    // Draw the image to the context.
    [sourceImage drawAtPoint:CGPointZero];
    // With drawing complete, store the composited image for later use.
    UIImage *clippedImage = UIGraphicsGetImageFromCurrentImageContext();
    // Graphics contexts must be ended manually.
    UIGraphicsEndImageContext();
    return clippedImage;
}
This can be done, but it requires some trigonometry. Let's consider the case for the upper image. First, determine the bottommost end point of the UIBezierPath and use UIGraphicsBeginImageContext to get the top part of the image, above the line.
Now, assuming that your line is straight, move pixel by pixel along the line, drawing vertical strokes of clearColor (this loop handles the top portion; proceed along similar lines for the bottom portion):
for (int currentPixel_x = 0; currentPixel_x < your_ui_image_top.size.width; currentPixel_x++) {
    UIGraphicsBeginImageContext(your_ui_image_top.size);
    [your_ui_image_top drawInRect:CGRectMake(0, 0, your_ui_image_top.size.width, your_ui_image_top.size.height)];
    CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 1.0);
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeClear);
    CGContextSetStrokeColorWithColor(UIGraphicsGetCurrentContext(), [UIColor clearColor].CGColor);
    CGContextBeginPath(UIGraphicsGetCurrentContext());
    CGContextMoveToPoint(UIGraphicsGetCurrentContext(), currentPixel_x, m*currentPixel_x + c);
    CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPixel_x, your_ui_image_top.size.height);
    CGContextStrokePath(UIGraphicsGetCurrentContext());
    your_ui_image_top = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Your UIBezierPath will have to be converted to a straight line of the form y = m*x + c. The x in this equation will be currentPixel_x above. Iterate through the width of the image, increasing currentPixel_x by 1 each time. next_y_point_on_your_line is calculated as:
next_y_point_on_your_line = m*currentPixel_x + c
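If the path is (or can be approximated by) a straight segment, m and c can be computed from its two end points, for example (the end points here are placeholders):

// Hypothetical end points of the dividing line, in image coordinates (y measured top-down).
CGPoint startPoint = CGPointMake(0.0, 120.0);
CGPoint endPoint = CGPointMake(320.0, 80.0);

// Slope and intercept for y = m*x + c (guard against a vertical line before dividing).
CGFloat m = (endPoint.y - startPoint.y) / (endPoint.x - startPoint.x);
CGFloat c = startPoint.y - m * startPoint.x;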
Each vertical stroke will be 1 pixel wide, and its height will depend on how you traverse the line. After enough iterations, everything below the line will have been cleared from the top image.
There are multiple ways to draw the clear strokes, and this is just one way of going about it. You can also draw clear strokes that are parallel to the given path if that gives better results.
Another way is to set the alpha of the pixels below the line to 0.
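A very rough sketch of that last approach for the top image, again assuming the boundary is the straight line y = m*x + c (the image, m and c are placeholder inputs, and the bitmap is assumed to be RGBA with premultiplied alpha):

// Placeholder inputs: the image to split and the line y = m*x + c (y measured top-down).
UIImage *sourceImage = [UIImage imageNamed:@"photo"];
CGFloat m = 0.25, c = 100.0;

CGImageRef cgImage = sourceImage.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);

// Draw the image into a bitmap context we own, so we can touch the pixels directly.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

uint8_t *pixels = CGBitmapContextGetData(ctx);
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(ctx);

// Zero out every pixel below the line (rows in the buffer run top to bottom).
for (size_t x = 0; x < width; x++) {
    CGFloat boundary = m * (CGFloat)x + c;
    if (boundary < 0) boundary = 0;
    if (boundary > (CGFloat)height) boundary = (CGFloat)height;
    for (size_t y = (size_t)boundary; y < height; y++) {
        uint8_t *p = pixels + y * bytesPerRow + x * 4;
        p[0] = p[1] = p[2] = p[3] = 0;   // premultiplied alpha: clear RGB and A together
    }
}

CGImageRef upperRef = CGBitmapContextCreateImage(ctx);
UIImage *upperPart = [UIImage imageWithCGImage:upperRef];
CGImageRelease(upperRef);
CGContextRelease(ctx);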

Blending a UITableViewCell with a background

I have a dilemma I can't seem to solve. I have a UITableView populated by custom UITableViewCells and I need the background of each cell to be blended (like Photoshop multiply) with the background image (the background image is part of the UIView behind the UITableView).
I've found several ways to do the blend (and it does somewhat work). The blending function I'm using is:
- (UIImage *)blendOverlay:(UIImage *)topImage withBaseImage:(UIImage *)baseImage toSize:(CGFloat)imageSize
{
    UIGraphicsBeginImageContext(CGSizeMake(imageSize, imageSize));
    [baseImage drawInRect:CGRectMake(0.0, 0.0, imageSize, imageSize)];
    [topImage drawInRect:CGRectMake(0.0, 0.0, imageSize, imageSize) blendMode:kCGBlendModeMultiply alpha:0.5];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
which is from http://www.cocoabuilder.com/archive/cocoa/280142-uiimageview-display-as-multiply.html.
Basically what I do now is chop out the piece of the background I'm over by getting the cell's frame and creating a new image which is the piece of the background in that frame. However, in order for the effect to work, it essentially needs to recalculate and update every frame (much like an alpha effect). Oh, and I tried just setting a timer to 1/60 seconds and calling
[tableView reloadData];
But that just slowed everything down and the redraws never actually occurred. Any help is much appreciated; I've been googling for nearly 3 hours to no avail.
Oh, and for posterity purposes, I chop the background image like so:
CGRect r = [tableView rectForRowAtIndexPath:indexPath];
CGRect r2 = [tableView convertRect:r toView:self.view];
CGImageRef imageRef = CGImageCreateWithImageInRect([m_background.image CGImage], r2);
// Get the image by calling [UIImage imageWithCGImage:imageRef]
CGImageRelease(imageRef);
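For reference, the two snippets come together roughly like this in tableView:cellForRowAtIndexPath: (a sketch only: cellBackgroundImage is a placeholder for the cell's own artwork, and blendOverlay:withBaseImage:toSize: is the method above):

// Blend the slice of background behind this row into the cell's background view.
CGRect r = [tableView rectForRowAtIndexPath:indexPath];
CGRect r2 = [tableView convertRect:r toView:self.view];
CGImageRef sliceRef = CGImageCreateWithImageInRect(m_background.image.CGImage, r2);
UIImage *slice = [UIImage imageWithCGImage:sliceRef];
CGImageRelease(sliceRef);

// blendOverlay:withBaseImage:toSize: takes a single (square) size, so this assumes
// square-ish cells; cellBackgroundImage is hypothetical.
UIImage *blended = [self blendOverlay:cellBackgroundImage
                        withBaseImage:slice
                               toSize:r2.size.width];
UIImageView *backgroundView = [[UIImageView alloc] initWithImage:blended];
cell.backgroundView = backgroundView;
// (release backgroundView here if you're not using ARC)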
Thanks!
The problem is that a UITableView doesn't call the drawRect: method of each individual UITableViewCell while moving the rows. It seems to cache the view at the start of the table movement and use that image while moving the whole table. Therefore the custom UITableViewCell has no opportunity to update the background image to show the movement of the blended backgrounds.
The one possible way I can see to animate the background image blending as the table cells move is to subclass UITableView and override drawRect: in that subclass.

Erasing after drawing with CGContext

I'm trying to build a simple drawing app for the iPad where you can draw on a picture, and I'm using CGContext calls to do it. The way I originally planned to handle erasing was to just draw over things in white... except I realized today that this doesn't work when you're drawing onto another image, because when you "erase" you'll also "erase" the background image as well.
Is there any way to support actual erasing?
Thanks!
I also needed erasing functionality. Based on @Jeremy's answer, here is what worked for me:
CGContextRef cgref = UIGraphicsGetCurrentContext();

if (erase == TRUE) // Erase to show background
{
    CGContextSetBlendMode(cgref, kCGBlendModeClear);
}
else // Draw with color
{
    CGContextSetBlendMode(cgref, kCGBlendModeNormal);
}
Display the user's drawing in a layer above the image. Then erasing is as simple as drawing a transparent patch on the drawing layer in order to let the image pixels below it show through.
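A rough sketch of that layered setup, with erasing done via kCGBlendModeClear as above (the asset name and the erase/lastPoint/currentPoint state are placeholders for what you'd track in your touch handlers):

// Keep the photo and the user's strokes in two separate image views, so
// erasing strokes never touches the photo underneath.
UIImageView *photoView = [[UIImageView alloc] initWithFrame:self.view.bounds];
photoView.image = [UIImage imageNamed:@"photo"];          // hypothetical asset name
UIImageView *drawingView = [[UIImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:photoView];
[self.view addSubview:drawingView];

// Placeholder touch state you would normally update in touchesMoved:
BOOL erase = YES;
CGPoint lastPoint = CGPointMake(40.0, 40.0);
CGPoint currentPoint = CGPointMake(60.0, 55.0);

// Re-render the stroke layer only; the photo stays untouched in photoView.
UIGraphicsBeginImageContextWithOptions(drawingView.bounds.size, NO, 0.0);
[drawingView.image drawInRect:drawingView.bounds];
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetLineCap(ctx, kCGLineCapRound);
CGContextSetLineWidth(ctx, 10.0);
if (erase) {
    CGContextSetBlendMode(ctx, kCGBlendModeClear);        // punch a transparent hole
} else {
    CGContextSetBlendMode(ctx, kCGBlendModeNormal);
    CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
}
CGContextMoveToPoint(ctx, lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(ctx, currentPoint.x, currentPoint.y);
CGContextStrokePath(ctx);
drawingView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();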
Clear all CGContextRef drawings:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClearRect(context, self.bounds);
[self setNeedsDisplay];

Crop image from a certain portion of the screen on iPhone programmatically

NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
CGSize contextSize = CGSizeMake(320, 400);
UIGraphicsBeginImageContext(contextSize);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *savedImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self setSaveImage:savedImg];
[pool release];
I use the code above to extract some part of the image from the main screen.
In UIGraphicsBeginImageContext I can only pass a size; is there any way to use a CGRect, or some other way to extract an image from a specific portion of the screen, i.e. something like (x, y, 320, 400)?
Hope this helps:
// Create a new image context the size of the crop region (retina safe)
UIGraphicsBeginImageContextWithOptions(cropSize, NO, 0.0);
// Offset the full-size image so the region you want lands at the context's origin
// (cropX/cropY are the origin of the crop region within existingImage)
CGRect rect = CGRectMake(-cropX, -cropY, existingImage.size.width, existingImage.size.height);
// Draw the image into the rect
[existingImage drawInRect:rect];
// Save the image and end the image context
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
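For the specific case in the question (grabbing, say, a 320x400 area starting at some x, y), one way is to render the whole view as in the question and then crop the result with CGImageCreateWithImageInRect; a rough sketch, where x and y are placeholders and the rect is converted to pixels because CGImageCreateWithImageInRect works in pixel coordinates:

// Hypothetical origin of the region to grab, in points.
CGFloat x = 0.0, y = 40.0;

// Render the full view hierarchy into an image (retina safe).
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *fullImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Crop out the region of interest, converting points to pixels via the image scale.
CGFloat scale = fullImage.scale;
CGRect cropRect = CGRectMake(x * scale, y * scale, 320.0 * scale, 400.0 * scale);
CGImageRef croppedRef = CGImageCreateWithImageInRect(fullImage.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                            scale:scale
                                      orientation:fullImage.imageOrientation];
CGImageRelease(croppedRef);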
This question is really a duplicate of several other questions including this: How to crop the UIImage?, but since it took me a while to find a solution, I will cross post again.
In my quest for a solution that I could more easily understand (and written in Swift), I arrived at this:
I wanted to be able to crop from a region based on an aspect ratio, and scale to a size based on an outer bounding extent. Here is my variation:
import UIKit
import AVFoundation
import ImageIO

class Image {

    class func crop(image: UIImage, crop source: CGRect, aspect: CGSize, outputExtent: CGSize) -> UIImage {

        let sourceRect = AVMakeRectWithAspectRatioInsideRect(aspect, source)
        let targetRect = AVMakeRectWithAspectRatioInsideRect(aspect, CGRect(origin: CGPointZero, size: outputExtent))

        let opaque = true, deviceScale: CGFloat = 0.0 // use scale of device's main screen
        UIGraphicsBeginImageContextWithOptions(targetRect.size, opaque, deviceScale)

        let scale = max(
            targetRect.size.width / sourceRect.size.width,
            targetRect.size.height / sourceRect.size.height)

        // Offset by the scaled crop origin and draw the whole image scaled up.
        let drawRect = CGRect(
            x: -sourceRect.origin.x * scale,
            y: -sourceRect.origin.y * scale,
            width: image.size.width * scale,
            height: image.size.height * scale)
        image.drawInRect(drawRect)

        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage
    }
}
There are a couple of things that I found confusing: the separate concerns of cropping and resizing. Cropping is handled with the origin of the rect that you pass to drawInRect, and scaling is handled by the size portion. In my case, I needed to relate the size of the cropping rect on the source to my output rect of the same aspect ratio. The scale factor is then output / input, and this needs to be applied to the drawRect (passed to drawInRect).
One caveat is that this approach effectively assumes that the image you are drawing is larger than the image context. I have not tested this, but I think you can use this code to handle cropping / zooming by explicitly setting the scale parameter to the aforementioned scale factor. By default, UIKit applies a multiplier based on the screen resolution.
Finally, it should be noted that this UIKit approach is higher level than the Core Graphics / Quartz and Core Image approaches, and it seems to handle image orientation issues. It is also worth mentioning that it is pretty fast, second only to ImageIO, according to this post: http://nshipster.com/image-resizing/