Stitch/composite multiple images vertically and save as one image (iOS, Objective-C)

I need help writing an Objective-C function that takes in an array of UIImages/PNGs and returns/saves one tall image of all the images stitched together in order, vertically. I am new to this, so please go slow and take it easy :)
My idea so far: create a UIView, add each image as a subview of that parent view, and then ???

Below are Swift 3 and Swift 2 examples that stitch images together vertically or horizontally. They use the dimensions of the largest image in the array provided by the caller to determine the common frame size that each individual image is stitched into.
Note: The Swift 3 example preserves each image's aspect ratio, while the Swift 2 example does not. See the note below the Swift 3 code regarding that.
UPDATE: Added Swift 3 example
Swift 3:
import UIKit
import AVFoundation

func stitchImages(images: [UIImage], isVertical: Bool) -> UIImage {
    var stitchedImages: UIImage!
    if images.count > 0 {
        // Find the largest width and height in the array
        var maxWidth = CGFloat(0), maxHeight = CGFloat(0)
        for image in images {
            if image.size.width > maxWidth {
                maxWidth = image.size.width
            }
            if image.size.height > maxHeight {
                maxHeight = image.size.height
            }
        }
        var totalSize: CGSize
        let maxSize = CGSize(width: maxWidth, height: maxHeight)
        if isVertical {
            totalSize = CGSize(width: maxSize.width, height: maxSize.height * CGFloat(images.count))
        } else {
            totalSize = CGSize(width: maxSize.width * CGFloat(images.count), height: maxSize.height)
        }
        UIGraphicsBeginImageContext(totalSize)
        // Use the element's position rather than index(of:), which would
        // return the wrong slot for duplicate images in the array
        for (index, image) in images.enumerated() {
            let offset = CGFloat(index)
            // AVMakeRect fits each image into its slot while preserving its aspect ratio
            let rect = AVMakeRect(aspectRatio: image.size, insideRect: isVertical ?
                CGRect(x: 0, y: maxSize.height * offset, width: maxSize.width, height: maxSize.height) :
                CGRect(x: maxSize.width * offset, y: 0, width: maxSize.width, height: maxSize.height))
            image.draw(in: rect)
        }
        stitchedImages = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
    return stitchedImages
}
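For example (the image names and image view here are hypothetical):
let stitched = stitchImages(images: [topImage, middleImage, bottomImage], isVertical: true)
imageView.image = stitched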
Note: The original Swift 2 example below does not preserve the
aspect ratio (in the Swift 2 example, all images are stretched to
fit the bounding box that represents the extrema of the widths and
heights of the images, so any non-square image can be stretched
disproportionately in one of its dimensions). If you're using Swift 2
and want to preserve the aspect ratio, please apply the AVMakeRect()
modification from the Swift 3 example. Since I no longer have access
to a Swift 2 playground and can't test it to ensure no errors, I'm not
updating the Swift 2 example here.
Swift 2: (Doesn't preserve aspect ratio. Fixed in Swift 3 example above)
import UIKit
import AVFoundation

func stitchImages(images: [UIImage], isVertical: Bool) -> UIImage {
    var stitchedImages: UIImage!
    if images.count > 0 {
        var maxWidth = CGFloat(0), maxHeight = CGFloat(0)
        for image in images {
            if image.size.width > maxWidth {
                maxWidth = image.size.width
            }
            if image.size.height > maxHeight {
                maxHeight = image.size.height
            }
        }
        var totalSize: CGSize, maxSize = CGSizeMake(maxWidth, maxHeight)
        if isVertical {
            totalSize = CGSizeMake(maxSize.width, maxSize.height * (CGFloat)(images.count))
        } else {
            totalSize = CGSizeMake(maxSize.width * (CGFloat)(images.count), maxSize.height)
        }
        UIGraphicsBeginImageContext(totalSize)
        for image in images {
            var rect: CGRect, offset = (CGFloat)((images as NSArray).indexOfObject(image))
            if isVertical {
                rect = CGRectMake(0, maxSize.height * offset, maxSize.width, maxSize.height)
            } else {
                rect = CGRectMake(maxSize.width * offset, 0, maxSize.width, maxSize.height)
            }
            image.drawInRect(rect)
        }
        stitchedImages = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
    return stitchedImages
}

The normal way is to create a bitmap image context, draw your images in at the required positions, and then get the image from the image context.
You can do this with UIKit, which is somewhat easier but isn't thread safe, so it will need to run on the main thread and will block the UI.
There is loads of example code around for this, but if you want to understand it properly, you should look at UIGraphicsBeginImageContext, UIGraphicsGetCurrentContext, UIGraphicsGetImageFromCurrentImageContext and UIImage's drawInRect:. Don't forget UIGraphicsEndImageContext.
You can also do this with Core Graphics, which is, AFAIK, safe to use on background threads (I've not had a crash from it yet). Efficiency is about the same, as UIKit just uses CG under the hood.
Key words for this are CGBitmapContextCreate, CGContextDrawImage, CGBitmapContextCreateImage, CGContextTranslateCTM, CGContextScaleCTM and CGContextRelease (no ARC for Core Graphics). The scaling and translating are needed because CG has the origin in the bottom left hand corner and Y increases upwards.
There is also a third way, which is to use CG for the context but save yourself all the coordinate pain by using a CALayer: set your CGImage (UIImage.CGImage) as the contents and then render the layer to the context. This is still thread safe and lets the layer take care of all the transformations. The keyword for this is renderInContext:.
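Here is a minimal sketch of the Core Graphics approach in Swift, safe to run off the main thread (illustrative only: it assumes the images share a common pixel width, and it ignores UIImage scale and orientation):
import UIKit

func cgStitchVertically(_ images: [UIImage]) -> UIImage? {
    guard let first = images.first?.cgImage else { return nil }
    let width = first.width
    let totalHeight = images.compactMap { $0.cgImage?.height }.reduce(0, +)
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: totalHeight,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0, // let CG choose the row stride
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }
    // CG's origin is at the bottom left, so draw from the bottom up,
    // walking the array in reverse to keep the first image on top
    var y = 0
    for image in images.reversed() {
        guard let cgImage = image.cgImage else { continue }
        context.draw(cgImage, in: CGRect(x: 0, y: y, width: cgImage.width, height: cgImage.height))
        y += cgImage.height
    }
    return context.makeImage().map { UIImage(cgImage: $0) }
}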

I know I'm a bit late here, but hopefully this can help someone out. If you're trying to create one large image out of an array, you can use this method:
- (UIImage *)mergeImagesFromArray:(NSArray *)imageArray {
    if ([imageArray count] == 0) return nil;

    UIImage *exampleImage = [imageArray firstObject];
    CGSize imageSize = exampleImage.size;
    CGSize finalSize = CGSizeMake(imageSize.width, imageSize.height * [imageArray count]);

    UIGraphicsBeginImageContext(finalSize);
    for (UIImage *image in imageArray) {
        [image drawInRect:CGRectMake(0, imageSize.height * [imageArray indexOfObject:image],
                                     imageSize.width, imageSize.height)];
    }
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return finalImage;
}

Swift 4
import UIKit
import AVFoundation

extension Array where Element: UIImage {
    func stitchImages(isVertical: Bool) -> UIImage {
        let maxWidth = self.compactMap { $0.size.width }.max()
        let maxHeight = self.compactMap { $0.size.height }.max()
        let maxSize = CGSize(width: maxWidth ?? 0, height: maxHeight ?? 0)
        let totalSize = isVertical ?
            CGSize(width: maxSize.width, height: maxSize.height * CGFloat(self.count)) :
            CGSize(width: maxSize.width * CGFloat(self.count), height: maxSize.height)
        let renderer = UIGraphicsImageRenderer(size: totalSize)

        return renderer.image { (context) in
            for (index, image) in self.enumerated() {
                let rect = AVMakeRect(aspectRatio: image.size, insideRect: isVertical ?
                    CGRect(x: 0, y: maxSize.height * CGFloat(index), width: maxSize.width, height: maxSize.height) :
                    CGRect(x: maxSize.width * CGFloat(index), y: 0, width: maxSize.width, height: maxSize.height))
                image.draw(in: rect)
            }
        }
    }
}
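Usage, including saving the result to disk as a PNG (the images and file name here are hypothetical):
let stitched = [topImage, middleImage, bottomImage].stitchImages(isVertical: true)
if let data = UIImagePNGRepresentation(stitched) {
    let url = FileManager.default.temporaryDirectory.appendingPathComponent("stitched.png")
    try? data.write(to: url)
}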

Try this piece of code; I tried stitching two images together and displayed them in an image view:
UIImage *bottomImage = [UIImage imageNamed:@"bottom.png"]; // first image
UIImage *image = [UIImage imageNamed:@"top.png"]; // foreground image
CGSize newSize = CGSizeMake(209, 260); // size of image view
UIGraphicsBeginImageContext(newSize);
// drawing 1st image
[bottomImage drawInRect:CGRectMake(0, 0, newSize.width/2, newSize.height/2)];
// drawing the 2nd image after the 1st
[image drawInRect:CGRectMake(0, newSize.height/2, newSize.width/2, newSize.height/2)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
join.image = newImage;
join is the name of the image view, and you get to see the two images as a single image.

The solutions above were helpful but had a serious flaw for me. The problem is that if the images are of different sizes, the resulting stitched image would have potentially large spaces between the parts. The solution I came up with combines all images right below each other, so that it looks more like a single image no matter the individual image sizes.
For Swift 3.x
static func combineImages(images: [UIImage]) -> UIImage {
    // Total height is the sum of all image heights; width is the widest image
    var totalHeight: CGFloat = 0.0
    var maxWidth: CGFloat = 0.0
    for image in images {
        totalHeight += image.size.height
        if image.size.width > maxWidth {
            maxWidth = image.size.width
        }
    }

    let finalSize = CGSize(width: maxWidth, height: totalHeight)
    UIGraphicsBeginImageContext(finalSize)

    // Draw each image directly below the previous one
    var runningHeight: CGFloat = 0.0
    for image in images {
        image.draw(in: CGRect(x: 0.0, y: runningHeight, width: image.size.width, height: image.size.height))
        runningHeight += image.size.height
    }

    let finalImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return finalImage!
}
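Usage is straightforward (the enclosing type ImageUtility and the image names are hypothetical):
let combined = ImageUtility.combineImages(images: [headerImage, chartImage, footerImage])
imageView.image = combined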

Related

Generate texture UIImage from small image in Swift

I want to create a full-size texture image, let's say 1000 x 1000, from an existing UIImage of size 40 x 40.
The code I am using to create the image is
let textureImg = UIImage(CGImage: CGImageCreateWithImageInRect(textureImage?.CGImage, CGRect(origin: CGPointZero, size: CGSizeMake(1000, 1000)))!)
where textureImage is some 40x40 image.
Please let me know a correct method to achieve this, as the code above is not giving a proper result: the whole image comes out white.
Here's how to use CIAffineTile to tile a 40x40 image into a 500x500 one. Be aware that I'm doing some unneeded force unwrapping, etc., because I pulled this from existing code. This is tested.
let inputImage = UIImage(named: "vermont.jpg")
let tiledImage = createTiledImage(inputImage:inputImage!)
print(inputImage!.size) // displays (40.0, 40.0)
print(tiledImage.size) // displays (500.0,500.0)
imageView.image = tiledImage
func createTiledImage(inputImage: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIAffineTile")
    filter!.setValue(CIImage(image: inputImage), forKey: "inputImage")
    let outputImage = filter?.outputImage?
        .applyingFilter("CIAffineTile", withInputParameters: nil)
        .cropping(to: CGRect(x: 0, y: 0, width: 500, height: 500))
    let ciCtx = CIContext(options: nil)
    let cgimg = ciCtx.createCGImage(outputImage!, from: (outputImage?.extent)!)
    return UIImage(cgImage: cgimg!)
}

Importing a Swift class into Objective-C works, but not for some custom files

My project has Objective-C files and Swift files (mix and match, thanks to https://developer.apple.com/library/content/documentation/Swift/Conceptual/BuildingCocoaApps/MixandMatch.html).
Now I've added a new folder named 'Helper' to my project, to contain functions which I can use from any class.
Any Swift class in my Controller folder is included in my-project-Swift.h,
and I can use it in any Objective-C class without any issue. But here is the problem: I can't use a Swift class inside the Helper folder which I created myself (it is outside the Controller folder), so I can't use it in my Objective-C classes. Any solution here? Here is my helper file:
import Foundation
import UIKit

public class imageHelper {
    func ResizeImage(image: UIImage, targetSize: CGSize) -> UIImage {
        let size = image.size
        let widthRatio = targetSize.width / image.size.width
        let heightRatio = targetSize.height / image.size.height

        // Figure out what our orientation is, and use that to form the rectangle
        var newSize: CGSize
        if widthRatio > heightRatio {
            newSize = CGSizeMake(size.width * heightRatio, size.height * heightRatio)
        } else {
            newSize = CGSizeMake(size.width * widthRatio, size.height * widthRatio)
        }

        // This is the rect that we've calculated out, and this is what is actually used below
        let rect = CGRectMake(0, 0, newSize.width, newSize.height)

        // Actually do the resizing to the rect using the image context stuff
        UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
        image.drawInRect(rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    }

    func imageWithSize(image: UIImage, size: CGSize) -> UIImage {
        if UIScreen.mainScreen().respondsToSelector("scale") {
            UIGraphicsBeginImageContextWithOptions(size, false, UIScreen.mainScreen().scale)
        } else {
            UIGraphicsBeginImageContext(size)
        }
        image.drawInRect(CGRectMake(0, 0, size.width, size.height))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    }

    // Summon this function VVV
    func resizeImageWithAspect(image: UIImage, scaledToMaxWidth width: CGFloat, maxHeight height: CGFloat) -> UIImage {
        let oldWidth = image.size.width
        let oldHeight = image.size.height
        let scaleFactor = (oldWidth > oldHeight) ? width / oldWidth : height / oldHeight
        let newHeight = oldHeight * scaleFactor
        let newWidth = oldWidth * scaleFactor
        let newSize = CGSizeMake(newWidth, newHeight)
        return imageWithSize(image, size: newSize)
    }
}
I figured it out!
Just change this code
public class imageHelper {
into this
class imageHelper: NSObject {
A Swift class only appears in the generated my-project-Swift.h header (and is therefore visible to Objective-C) if it inherits from NSObject or is otherwise exposed with @objc; the folder the file lives in doesn't matter.

How to add a gradient tint color to a UISlider in Xcode 6?

I'm working on a design application that has a section for selecting colors with three sliders for RGB.
As we can see in Xcode, where we want to select a color by RGB values, the slider tint color is a gradient that changes when we move the sliders. I want to use this in my application, but I have no idea how to do it.
I've found this code in a blog, but it didn't work for me:
- (void)setGradientToSlider:(UISlider *)Slider WithColors:(NSArray *)Colors {
    UIView *view = (UIView *)[[Slider subviews] objectAtIndex:0];
    UIImageView *maxTrackImageView = (UIImageView *)[[view subviews] objectAtIndex:0];
    CAGradientLayer *maxTrackGradient = [CAGradientLayer layer];
    CGRect rect = maxTrackImageView.frame;
    rect.origin.x = view.frame.origin.x;
    maxTrackGradient.frame = rect;
    maxTrackGradient.colors = Colors;
    [maxTrackGradient setStartPoint:CGPointMake(0.0, 0.5)];
    [maxTrackGradient setEndPoint:CGPointMake(1.0, 0.5)];
    [[maxTrackImageView layer] insertSublayer:maxTrackGradient atIndex:0];

    UIImageView *minTrackImageView = (UIImageView *)[[view subviews] objectAtIndex:1];
    CAGradientLayer *minTrackGradient = [CAGradientLayer layer];
    rect = minTrackImageView.frame;
    rect.size.width = maxTrackImageView.frame.size.width;
    rect.origin.x = 0;
    rect.origin.y = 0;
    minTrackGradient.frame = rect;
    minTrackGradient.colors = Colors;
    [minTrackGradient setStartPoint:CGPointMake(0.0, 0.5)];
    [minTrackGradient setEndPoint:CGPointMake(1.0, 0.5)];
    [minTrackImageView.layer insertSublayer:minTrackGradient atIndex:0];
}
I would appreciate any help. Thanks.
While it didn't give me the desired results, here is a down-and-dirty Swift version of the answer above for those who want to try it.
func setSlider(slider: UISlider) {
    let tgl = CAGradientLayer()
    let frame = CGRectMake(0, 0, slider.frame.size.width, 5)
    tgl.frame = frame
    tgl.colors = [UIColor.blueColor().CGColor, UIColor.greenColor().CGColor, UIColor.yellowColor().CGColor, UIColor.orangeColor().CGColor, UIColor.redColor().CGColor]
    tgl.startPoint = CGPointMake(0.0, 0.5)
    tgl.endPoint = CGPointMake(1.0, 0.5)

    UIGraphicsBeginImageContextWithOptions(tgl.frame.size, tgl.opaque, 0.0)
    tgl.renderInContext(UIGraphicsGetCurrentContext()!)
    var image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // resizableImageWithCapInsets returns a new image; the result must be kept
    image = image.resizableImageWithCapInsets(UIEdgeInsetsZero)
    slider.setMinimumTrackImage(image, forState: .Normal)
    //slider.setMaximumTrackImage(image, forState: .Normal)
}
UPDATE for Swift 4.0
func setSlider(slider: UISlider) {
    let tgl = CAGradientLayer()
    let frame = CGRect(x: 0, y: 0, width: slider.frame.size.width, height: 5)
    tgl.frame = frame
    tgl.colors = [UIColor.blue.cgColor, UIColor.green.cgColor, UIColor.yellow.cgColor, UIColor.orange.cgColor, UIColor.red.cgColor]
    tgl.startPoint = CGPoint(x: 0.0, y: 0.5)
    tgl.endPoint = CGPoint(x: 1.0, y: 0.5)

    UIGraphicsBeginImageContextWithOptions(tgl.frame.size, tgl.isOpaque, 0.0)
    tgl.render(in: UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    // End the context before using the image; always balance Begin/End
    UIGraphicsEndImageContext()
    if let image = image {
        // resizableImage(withCapInsets:) returns a new image; keep the result
        slider.setMinimumTrackImage(image.resizableImage(withCapInsets: UIEdgeInsets.zero), for: .normal)
    }
}
Here is a possible solution:
Usage:
//array of CGColor objects, color1 and color2 are UIColor objects
NSArray *colors = [NSArray arrayWithObjects:(id)color1.CGColor, (id)color2.CGColor, nil];
//your UISlider
[slider setGradientBackgroundWithColors:colors];
Implementation:
Create a category on UISlider:
- (void)setGradientBackgroundWithColors:(NSArray *)colors
{
    CAGradientLayer *trackGradientLayer = [CAGradientLayer layer];
    CGRect frame = self.frame;
    frame.size.height = 5.0; // set the height of the slider track
    trackGradientLayer.frame = frame;
    trackGradientLayer.colors = colors;
    // setting gradient as horizontal
    trackGradientLayer.startPoint = CGPointMake(0.0, 0.5);
    trackGradientLayer.endPoint = CGPointMake(1.0, 0.5);

    UIImage *trackImage = [[UIImage imageFromLayer:trackGradientLayer] resizableImageWithCapInsets:UIEdgeInsetsZero];
    [self setMinimumTrackImage:trackImage forState:UIControlStateNormal];
    [self setMaximumTrackImage:trackImage forState:UIControlStateNormal];
}
Where colors is an array of CGColor objects.
I have also created a category on UIImage which creates an image from a layer, as you need a UIImage to set the gradient on the slider.
+ (UIImage *)imageFromLayer:(CALayer *)layer
{
    UIGraphicsBeginImageContextWithOptions(layer.frame.size, layer.opaque, 0.0);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
For Swift 3, and to prevent the slider from scaling the Min image, apply this when setting its image. Recalculating the slider's left side is not necessary; only recalculate if you are changing the colors of the gradient. The Max image does not seem to scale, but you should probably apply the same setting for consistency; there is a slight difference on the Max image when its insets are not applied.
slider.setMinimumTrackImage(image?.resizableImage(withCapInsets: .zero), for: .normal)
For some reason it only works properly when resizableImage(withCapInsets: .zero) is applied in the same expression. Running that part separately does not work, and the image gets scaled.
Here is the entire routine in Swift 3:
func setSlider(slider: UISlider) {
    let tgl = CAGradientLayer()
    let frame = CGRect(x: 0.0, y: 0.0, width: slider.bounds.width, height: 5.0)
    tgl.frame = frame
    tgl.colors = [UIColor.yellow.cgColor, UIColor.black.cgColor]
    tgl.startPoint = CGPoint(x: 0.0, y: 1.0)
    tgl.endPoint = CGPoint(x: 1.0, y: 1.0)

    UIGraphicsBeginImageContextWithOptions(tgl.frame.size, false, 0.0)
    tgl.render(in: UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    slider.setMaximumTrackImage(image?.resizableImage(withCapInsets: .zero), for: .normal)
    slider.setMinimumTrackImage(image?.resizableImage(withCapInsets: .zero), for: .normal)
}
This is a really effective approach that I found after a lot of web searching, so it's better to share it here as a complete answer. The following code is a Swift class that you can use to create and use gradients as a UIView or a UIImage.
import Foundation
import UIKit

class Gradient: UIView {
    // Gradient color array
    private var Colors: [UIColor] = []
    // Start and end points of a linear gradient
    private var SP: CGPoint = CGPoint.zeroPoint
    private var EP: CGPoint = CGPoint.zeroPoint
    // Start and end centers of a radial gradient
    private var SC: CGPoint = CGPoint.zeroPoint
    private var EC: CGPoint = CGPoint.zeroPoint
    // Start and end radii of a radial gradient
    private var SR: CGFloat = 0.0
    private var ER: CGFloat = 0.0
    // Flag to specify if the gradient is radial or linear
    private var flag: Bool = false

    // Some overridden init methods
    required init(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
    }

    // drawRect method to draw the graphics on the context
    override func drawRect(rect: CGRect) {
        // Get context
        let context = UIGraphicsGetCurrentContext()
        // Get color space
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        // Create arrays to convert the UIColors to CGColors
        var colorComponent: [CGColor] = []
        var colorLocations: [CGFloat] = []
        var i: CGFloat = 0.0
        // Add colors into the color components, using an index variable for
        // their location in the array (the location runs from 0.0 to 1.0)
        for color in Colors {
            colorComponent.append(color.CGColor)
            colorLocations.append(i)
            i += CGFloat(1.0) / CGFloat(self.Colors.count - 1)
        }
        // Create the gradient with the colors and locations
        let gradient: CGGradientRef = CGGradientCreateWithColors(colorSpace, colorComponent, colorLocations)
        // Draw the suitable gradient based on the desired type
        if flag {
            CGContextDrawRadialGradient(context, gradient, SC, SR, EC, ER, 0)
        } else {
            CGContextDrawLinearGradient(context, gradient, SP, EP, 0)
        }
    }

    // Set the input data for a linear gradient
    func CreateLinearGradient(startPoint: CGPoint, endPoint: CGPoint, colors: UIColor...) {
        self.Colors = colors
        self.SP = startPoint
        self.EP = endPoint
        self.flag = false
    }

    // Set the input data for a radial gradient
    func CreateRadialGradient(startCenter: CGPoint, startRadius: CGFloat, endCenter: CGPoint, endRadius: CGFloat, colors: UIColor...) {
        self.Colors = colors
        self.SC = startCenter
        self.EC = endCenter
        self.SR = startRadius
        self.ER = endRadius
        self.flag = true
    }

    // Convert the gradient to a UIImage and return it
    func getImage() -> UIImage {
        // Begin image context
        UIGraphicsBeginImageContext(self.bounds.size)
        // Draw the gradient into the context
        self.drawRect(self.frame)
        // Get an image from the current context
        let image = UIGraphicsGetImageFromCurrentImageContext()
        // End image context
        UIGraphicsEndImageContext()
        // Return the resulting gradient as a UIImage
        return image
    }
}
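Usage might look like this (assuming a UISlider named slider; the points passed to CreateLinearGradient are in the view's own coordinate space, matching CGContextDrawLinearGradient):
let gradientView = Gradient(frame: CGRectMake(0, 0, slider.bounds.width, 5))
gradientView.CreateLinearGradient(CGPointMake(0, 2.5),
    endPoint: CGPointMake(slider.bounds.width, 2.5),
    colors: UIColor.redColor(), UIColor.yellowColor(), UIColor.greenColor())
slider.setMinimumTrackImage(gradientView.getImage(), forState: .Normal)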

UIImage resize with aspect ratio -> save JPG -> load -> image loaded twice as big

I'm using this code (a category method on UIImage) to resize an image:
- (UIImage *)resizeToSize:(CGSize)size {
    float height = self.size.height;
    float width = self.size.width;

    if (width > size.width) {
        width = size.width;
        height = size.width / (self.size.width / self.size.height);
    }
    if (height > size.height) {
        height = size.height;
        width = size.height / (self.size.height / self.size.width);
    }
    NSLog(@"Resize to size %@", NSStringFromCGSize(size));

    if (height == self.size.height && width == self.size.width) {
        return self;
    }

    CGSize newSize = CGSizeMake(width, height);
    UIGraphicsBeginImageContextWithOptions(newSize, YES, 0.0);
    [self drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSLog(@"Resized %@", NSStringFromCGSize(newImage.size));
    return newImage;
}
The image is resized, and in the next step I save it via [UIImageJPEGRepresentation(image, 1.0) writeToFile:pngPath atomically:YES];.
After that I load the file, and the image size is twice as big. Any hint why?
Thank you!
I suspect what is happening here is that you have an image with scale 2.0 (on a Retina device), but you are saving it without @2x in the filename, so when you reopen it, it is a scale 1.0 image with dimensions twice as large as you want.
If that's the case, there are two solutions: you can fix your filename, or you can re-open the file by reading it into an NSData object and then using UIImage's +imageWithData:scale: to get a properly scaled image.
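A minimal sketch of the second option in Swift (the function and URL are illustrative; pass the scale the image was originally rendered at, e.g. the screen scale):
import UIKit

// Reload saved image data at an explicit scale so the point size matches
// what was originally rendered on a Retina device.
func loadImage(from url: URL, scale: CGFloat = UIScreen.main.scale) -> UIImage? {
    guard let data = try? Data(contentsOf: url) else { return nil }
    return UIImage(data: data, scale: scale)
}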

Cocoa Touch - Adding texture with overlay view

I have a set of tiles as UIViews that have a programmable background color, and each one
can be a different color. I want to add texture, like a side-lit bevel, to each one. Can this be done with an overlay view or by some other method?
I'm looking for suggestions that don't require a custom image file for each case.
This may help someone, although this was pieced together from other topics on SO.
To create a beveled tile image with an arbitrary color, for normal and for Retina display, I made a beveled image in Photoshop and set the saturation to zero, making a grayscale image called tileBevel.png.
I also created one for the Retina display (tileBevel@2x.png).
Here is the code:
+ (UIImage *)createTileWithColor:(UIColor *)tileColor {
    int pixelsHigh = 44;
    int pixelsWide = 46;
    UIImage *bottomImage;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] == 2.0) {
        pixelsHigh *= 2;
        pixelsWide *= 2;
        bottomImage = [UIImage imageNamed:@"tileBevel@2x.png"];
    } else {
        bottomImage = [UIImage imageNamed:@"tileBevel.png"];
    }

    CGImageRef theCGImage = NULL;
    CGContextRef tileBitmapContext = NULL;
    CGRect rectangle = CGRectMake(0, 0, pixelsWide, pixelsHigh);

    UIGraphicsBeginImageContext(rectangle.size);
    [bottomImage drawInRect:rectangle];
    tileBitmapContext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(tileBitmapContext, kCGBlendModeOverlay);
    CGContextSetFillColorWithColor(tileBitmapContext, tileColor.CGColor);
    CGContextFillRect(tileBitmapContext, rectangle);
    theCGImage = CGBitmapContextCreateImage(tileBitmapContext);
    UIGraphicsEndImageContext();

    UIImage *tileImage = [UIImage imageWithCGImage:theCGImage];
    // CGBitmapContextCreateImage returns a +1 reference; release it to avoid a leak
    CGImageRelease(theCGImage);
    return tileImage;
}
This checks to see if the Retina display is used, sizes the rectangle to draw in, picks the appropriate grayscale base image, sets the blending mode to overlay, then draws a rectangle on top of the bottom image. All of this is done inside a graphics context bracketed by the UIGraphicsBeginImageContext and UIGraphicsEndImageContext calls. These set the current context needed by UIImage's drawInRect: method. The Core Graphics functions need the context as a parameter, which is obtained by a call to UIGraphicsGetCurrentContext.
And the result looks like this (screenshot omitted).
If you want to preserve the alpha channel of the source image, just add this to Jim's code before the fill rect:
// Apply mask
CGContextTranslateCTM(tileBitmapContext, 0, rectangle.size.height);
CGContextScaleCTM(tileBitmapContext, 1.0f, -1.0f);
CGContextClipToMask(tileBitmapContext, rectangle, bottomImage.CGImage);
Swift 3 solution, essentially based on Jim's answer with Scriptease's addition, and some minor changes:
class func image(bottomImage: UIImage, topImage: UIImage, tileColor: UIColor) -> UIImage? {
    // Note: topImage is not used in this variant; the bevel comes from bottomImage
    let pixelsHigh: CGFloat = bottomImage.size.height
    let pixelsWide: CGFloat = bottomImage.size.width
    let rectangle = CGRect(x: 0, y: 0, width: pixelsWide, height: pixelsHigh)

    UIGraphicsBeginImageContext(rectangle.size)
    bottomImage.draw(in: rectangle)
    if let tileBitmapContext = UIGraphicsGetCurrentContext() {
        tileBitmapContext.setBlendMode(.overlay)
        tileBitmapContext.setFillColor(tileColor.cgColor)
        // Flip the context so the mask is applied right side up,
        // mirroring the CGContextTranslateCTM/CGContextScaleCTM addition above
        tileBitmapContext.translateBy(x: 0, y: rectangle.size.height)
        tileBitmapContext.scaleBy(x: 1.0, y: -1.0)
        tileBitmapContext.clip(to: rectangle, mask: bottomImage.cgImage!)
        tileBitmapContext.fill(rectangle)
        let theCGImage = tileBitmapContext.makeImage()
        UIGraphicsEndImageContext()
        if let theImage = theCGImage {
            return UIImage(cgImage: theImage)
        }
    }
    return nil
}
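Usage might look like this (the enclosing type TileFactory, the image view, and the asset name are hypothetical):
let bevel = UIImage(named: "tileBevel")!
if let tile = TileFactory.image(bottomImage: bevel, topImage: bevel, tileColor: .red) {
    tileImageView.image = tile
}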