Generate texture UIImage from small image in Swift

I want to create a full-size texture image, let's say 1000 x 1000, from an existing UIImage of size 40 x 40.
The code I am using to create the image is
let textureImg = UIImage(CGImage: CGImageCreateWithImageInRect(textureImage?.CGImage, CGRect(origin: CGPointZero, size: CGSizeMake(1000, 1000)))!)
where textureImage is some 40 x 40 image.
Please let me know a correct method to achieve this, as the code above is not giving the proper result; the whole image comes out white.

Here's how to use CIAffineTile to tile a 40x40 image into a 500x500 one. Be aware, I'm doing some unneeded forced unwrapping, etc. because I pulled this from existing code. This is tested.
let inputImage = UIImage(named: "vermont.jpg")
let tiledImage = createTiledImage(inputImage:inputImage!)
print(inputImage!.size) // displays (40.0, 40.0)
print(tiledImage.size) // displays (500.0,500.0)
imageView.image = tiledImage
func createTiledImage(inputImage: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIAffineTile")
    filter!.setValue(CIImage(image: inputImage), forKey: "inputImage")
    let outputImage = filter?.outputImage?
        .applyingFilter("CIAffineTile", withInputParameters: nil)
        .cropping(to: CGRect(x: 0, y: 0, width: 500, height: 500))
    let ciCtx = CIContext(options: nil)
    let cgimg = ciCtx.createCGImage(outputImage!, from: (outputImage?.extent)!)
    return UIImage(cgImage: cgimg!)
}
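If you'd rather avoid Core Image, a minimal alternative sketch (assuming iOS 10+ for UIGraphicsImageRenderer; the 1000 x 1000 target size is taken from the question, and the function name is just an example) is to fill the target rect with a pattern color made from the small image:
func tiledImage(from tile: UIImage, size: CGSize = CGSize(width: 1000, height: 1000)) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        // A pattern color repeats the tile automatically when used as a fill.
        UIColor(patternImage: tile).setFill()
        UIRectFill(CGRect(origin: .zero, size: size))
    }
}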

Related

Translate detected rectangle from portrait CIImage to landscape CIImage

I am modifying code found here. In the code we are capturing video from the phone camera using AVCaptureSession and using CIDetector to detect a rectangle in the image feed. The feed has an image which is 640x842 (iPhone 5 in portrait). We then draw an overlay on the image so the user can see the detected rectangle (actually it's a trapezoid most of the time).
When the user presses a button on the UI, we capture an image from the video and re-run the rectangle detection on this larger image (3264x2448), which as you can see is landscape. We then do a perspective transform on the detected rectangle and crop the image.
This is working pretty well, but the issue I have is that in roughly 1 out of 5 captures the rectangle detected on the larger image is different from the one detected (and presented to the user) on the smaller image. Even though I only capture when I detect the phone is (relatively) still, the final image then does not represent the rectangle the user expected.
To resolve this, my idea is to use the coordinates of the originally detected rectangle and translate them to a rectangle on the captured still image. This is where I'm struggling.
I tried this with the detected rectangle:
CGFloat radians = -90 * (M_PI/180);
CGAffineTransform rotation = CGAffineTransformMakeRotation(radians);
CGRect rect = CGRectMake(detectedRect.bounds.origin.x, detectedRect.bounds.origin.y, detectedRect.bounds.size.width, detectedRect.bounds.size.height);
CGRect rotatedRect = CGRectApplyAffineTransform(rect, rotation);
So given a detected rect:
TopLeft: 88.213425, 632.31329
TopRight: 545.59302, 632.15546
BottomRight: 575.57819, 369.22321
BottomLeft: 49.973862, 369.40466
I now have this rotated rect:
origin = (x = 369.223206, y = -575.578186)
size = (width = 263.090088, height = 525.604309)
How do I translate the rotated rectangle coordinates in the smaller portrait image to coordinates to the 3264x2448 image?
Edit
Duh... reading my own approach, I realised that creating a rectangle out of a trapezoid will not solve my problem!
Supporting code to detect the rectangle etc...
// In this method we detect a rect from the video feed and overlay
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    // image is 640x852 on iPhone 5
    NSArray *rects = [[CIDetector detectorOfType:CIDetectorTypeRectangle context:nil options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}] featuresInImage:image];
    CIRectangleFeature *detectedRect = rects[0];
    // draw overlay on image code....
}
This is a summarized version of how the still image is obtained:
// code block to handle output from AVCaptureStillImageOutput
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
    CIImage *enhancedImage = [[CIImage alloc] initWithData:imageData options:@{kCIImageColorSpace : [NSNull null]}];
    imageData = nil;
    CIRectangleFeature *rectangleFeature = [self getDetectedRect:[[self highAccuracyRectangleDetector] featuresInImage:enhancedImage]];
    if (rectangleFeature) {
        enhancedImage = [self correctPerspectiveForImage:enhancedImage withTopLeft:rectangleFeature.topLeft andTopRight:rectangleFeature.topRight andBottomRight:rectangleFeature.bottomRight andBottomLeft:rectangleFeature.bottomLeft];
    }
}];
Thank you.
I had the same issue while doing this kind of stuff. I resolved it with the code below in Swift. Take a look and see if it can help you.
if let videoConnection = stillImageOutput.connection(withMediaType: AVMediaTypeVideo) {
    stillImageOutput.captureStillImageAsynchronously(from: videoConnection) { (imageDataSampleBuffer, error) -> Void in
        let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
        var img = UIImage(data: imageData!)!
        let outputRect = self.previewLayer?.metadataOutputRectOfInterest(for: (self.previewLayer?.bounds)!)
        let takenCGImage = img.cgImage
        let width = (takenCGImage?.width)!
        let height = (takenCGImage?.height)!
        let cropRect = CGRect(x: (outputRect?.origin.x)! * CGFloat(width),
                              y: (outputRect?.origin.y)! * CGFloat(height),
                              width: (outputRect?.size.width)! * CGFloat(width),
                              height: (outputRect?.size.height)! * CGFloat(height))
        let cropCGImage = takenCGImage!.cropping(to: cropRect)
        img = UIImage(cgImage: cropCGImage!, scale: 1, orientation: img.imageOrientation)
        let cropViewController = TOCropViewController(image: self.cropToBounds(image: img))
        cropViewController.delegate = self
        self.navigationController?.pushViewController(cropViewController, animated: true)
    }
}
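For completeness, if the goal is to map the detected corner points from the small portrait preview frame to the large landscape still image, a rough sketch is below. It assumes both images cover the same field of view and differ only by a 90° rotation plus a scale, and that both sets of points use the same coordinate origin convention; the rotation direction may need to be flipped depending on your capture orientation, so treat this as a starting point rather than a drop-in solution. The helper name is hypothetical.
// Hypothetical helper: map one corner point from the preview frame into the still image frame.
func mapPoint(_ p: CGPoint, fromPreview preview: CGSize, toStill still: CGSize) -> CGPoint {
    // The preview's height maps onto the still's width and vice versa (portrait -> landscape).
    let scaleX = still.width / preview.height
    let scaleY = still.height / preview.width
    // One of the two possible 90° rotations; use (preview.height - p.y, p.x) instead
    // if the result comes out mirrored for your orientation.
    return CGPoint(x: p.y * scaleX, y: (preview.width - p.x) * scaleY)
}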

How to generate thumbnail image from source server url?

I want to create a thumbnail image from the original source URL and show a blur effect in a table view, like WhatsApp does. I don't want to upload two separate URLs (low and high resolution) to the server. Does anyone have an idea how this can be done? Help.
For that you have to follow these steps:
1. Download the image from the URL using any library with a lazy-loading mechanism (a minimal download-and-cache sketch follows at the end of this answer).
2. Convert the image to a blurred one and show it in the UI.
3. After tapping on it, show the actual image from the cache.
You can add a blur effect to a UIImageView with the CIGaussianBlur filter:
func getBlurImageFrom(image image: UIImage) -> UIImage {
    let radius: CGFloat = 20
    let context = CIContext(options: nil)
    let inputImage = CIImage(CGImage: image.CGImage!)
    let filter = CIFilter(name: "CIGaussianBlur")
    filter?.setValue(inputImage, forKey: kCIInputImageKey)
    filter?.setValue(radius, forKey: kCIInputRadiusKey) // pass the number itself, not a string
    let result = filter?.valueForKey(kCIOutputImageKey) as! CIImage
    // Inset the crop rect so the soft, partially transparent edges of the blur are trimmed off
    let rect = CGRectMake(radius * 2, radius * 2, image.size.width - radius * 4, image.size.height - radius * 4)
    let cgImage = context.createCGImage(result, fromRect: rect)
    let returnImage = UIImage(CGImage: cgImage)
    return returnImage
}
Just follow the steps above to achieve it. I did it in one of my apps, but I implemented it in Objective-C.
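For steps 1 and 3, a minimal download-and-cache sketch (assuming plain URLSession and NSCache; the class name and wiring into your table view cells are up to you) might look like this:
import UIKit

final class ImageLoader {
    static let shared = ImageLoader()
    private let cache = NSCache<NSURL, UIImage>()

    func loadImage(from url: URL, completion: @escaping (UIImage?) -> Void) {
        // Serve the full-resolution image from the cache when we already have it (step 3).
        if let cached = cache.object(forKey: url as NSURL) {
            completion(cached)
            return
        }
        // Otherwise download it lazily (step 1) and cache it for later taps.
        URLSession.shared.dataTask(with: url) { data, _, _ in
            let image = data.flatMap { UIImage(data: $0) }
            if let image = image {
                self.cache.setObject(image, forKey: url as NSURL)
            }
            DispatchQueue.main.async { completion(image) }
        }.resume()
    }
}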

How do I resize a UIImage without smoothing? [duplicate]

I want to scale up a UIImage in such a way that the user can see the individual pixels very sharply. When I put it in a UIImageView and scale the transform matrix up, the UIImage appears antialiased and smoothed.
Is there a way to render in a bigger bitmap context by simply repeating every row and every column to get bigger pixels? How could I do that?
#import <QuartzCore/CALayer.h>
view.layer.magnificationFilter = kCAFilterNearest
When drawing directly into bitmap context, we can use:
CGContextSetInterpolationQuality(myBitmapContext, kCGInterpolationNone);
I found this on CGContextDrawImage very slow on iPhone 4
Swift 5
let image = UIImage(named: "Foo")!
let scaledImageSize = image.size.applying(CGAffineTransform(scaleX: 2, y: 2))
UIGraphicsBeginImageContext(scaledImageSize)
let scaledContext = UIGraphicsGetCurrentContext()!
scaledContext.interpolationQuality = .none
image.draw(in: CGRect(origin: .zero, size: scaledImageSize))
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
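A minimal sketch of the same idea with UIGraphicsImageRenderer (iOS 10+), assuming the same image and 2x scale as above:
let renderer = UIGraphicsImageRenderer(size: scaledImageSize)
let scaledImage = renderer.image { ctx in
    // Nearest-neighbour sampling keeps the pixel edges hard instead of smoothing them.
    ctx.cgContext.interpolationQuality = .none
    image.draw(in: CGRect(origin: .zero, size: scaledImageSize))
}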
I was also trying this (on a sublayer) and I couldn't get it working, it was still blurry. This is what I had to do:
const CGFloat PIXEL_SCALE = 2;
layer.magnificationFilter = kCAFilterNearest; //Nearest neighbor texture filtering
layer.transform = CATransform3DMakeScale(PIXEL_SCALE, PIXEL_SCALE, 1); //Scale layer up
//Rasterize w/ sufficient resolution to show sharp pixels
layer.shouldRasterize = YES;
layer.rasterizationScale = PIXEL_SCALE;
For UIImage created from CIImage you may use:
imageView.image = UIImage(CIImage: ciImage.imageByApplyingTransform(CGAffineTransformMakeScale(kScale, kScale)))

How can I make an UIImage programmatically?

This isn't what you probably thought it was to begin with. I know how to use UIImages, but I now need to know how to create a "blank" UIImage using:
CGRect screenRect = [self.view bounds];
Well, those dimensions. Anyway, I want to know how I can create a UIImage with those dimensions colored all white. No actual images here.
Is this even possible? I am sure it is, but maybe I am wrong.
Edit
This needs to be a "white" image. Not a blank one. :)
You need to use CoreGraphics, as follows.
CGSize size = CGSizeMake(desiredWidth, desiredHeight);
UIGraphicsBeginImageContextWithOptions(size, YES, 0);
[[UIColor whiteColor] setFill];
UIRectFill(CGRectMake(0, 0, size.width, size.height));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The code creates a new CoreGraphics image context with the options passed as parameters; size, opaqueness, and scale. By passing 0 for scale, iOS automatically chooses the appropriate value for the current device.
Then, the context fill colour is set to [UIColor whiteColor], and the canvas is filled with that colour by calling UIRectFill() with a rectangle that covers the whole canvas.
A UIImage is then created of the current context, and the context is closed. Therefore, the image variable contains a UIImage of the desired size, filled white.
Swift version:
extension UIImage {
    static func emptyImage(with size: CGSize) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, true, 0)
        UIColor.white.setFill()
        UIRectFill(CGRect(origin: .zero, size: size))
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
If you want to draw just an empty image, you could use UIKit's UIGraphicsBeginImageContext() / UIGraphicsBeginImageContextWithOptions() functions.
UIGraphicsBeginImageContext(CGSizeMake(width, height));
CGContextAddRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height)); // this may not be necessary
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The code above assumes that you draw an image with a size of width x height. It adds a rectangle to the graphics context, but that may not be necessary. Try it yourself. This is the way to go. :)
Or, if you want to create a snapshot of your current view, you would write code like:
UIGraphicsBeginImageContext(CGSizeMake(self.view.bounds.size.width, self.view.bounds.size.height));
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Don't forget to link the QuartzCore framework if you use the layer.
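On iOS 10 and later, a hedged alternative sketch for the snapshot case uses UIGraphicsImageRenderer together with drawHierarchy(in:afterScreenUpdates:) instead of rendering the layer:
let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
let snapshot = renderer.image { _ in
    // drawHierarchy captures the view as it appears on screen, including its subviews.
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
}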
Based on @HixField's and @Rudolf Adamkovič's answers, here's an extension which returns an optional, which I believe is the correct way to do this (correct me if I'm wrong!).
This extension allows you to create an empty UIImage of whatever size you need (up to the memory limit) with whatever fill color you want, which defaults to white. If you want the image to be the clear color, you would use something like the following:
let size = CGSize(width: 32.0, height: 32.0)
if let image = UIImage.imageWithSize(size: size, color: UIColor.clear) {
    //image was successfully created, do additional stuff with it here.
}
This is for Swift 3.x:
extension UIImage {
    static func imageWithSize(size: CGSize, color: UIColor = UIColor.white) -> UIImage? {
        var image: UIImage? = nil
        UIGraphicsBeginImageContext(size)
        if let context = UIGraphicsGetCurrentContext() {
            context.setFillColor(color.cgColor)
            context.addRect(CGRect(origin: CGPoint.zero, size: size))
            context.drawPath(using: .fill)
            image = UIGraphicsGetImageFromCurrentImageContext()
        }
        UIGraphicsEndImageContext()
        return image
    }
}
Using the latest UIGraphics classes, and in Swift, this looks like this (note the CGContextDrawPath that is missing in the answer from user1834305; that is the reason it produces a transparent image):
static func imageWithSize(size: CGSize, color: UIColor) -> UIImage {
    UIGraphicsBeginImageContext(size)
    let context = UIGraphicsGetCurrentContext()
    CGContextSetFillColorWithColor(context, color.CGColor)
    CGContextAddRect(context, CGRect(x: 0, y: 0, width: size.width, height: size.height))
    CGContextDrawPath(context, .Fill)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
The same solution in C# for Xamarin.iOS:
UIGraphics.BeginImageContext(new CGSize(width, height));
UIColor.White.SetFill();
UIGraphics.RectFill(new CGRect(0, 0, width, height));
var image = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();
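For completeness, a minimal modern sketch of the same white image using UIGraphicsImageRenderer (iOS 10+), which handles scale and context cleanup for you (the size here is just an example):
let size = CGSize(width: 100, height: 100)
let whiteImage = UIGraphicsImageRenderer(size: size).image { _ in
    UIColor.white.setFill()
    UIRectFill(CGRect(origin: .zero, size: size))
}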

Stitch/Composite multiple images vertically and save as one image (iOS, objective-c)

I need help writing an objective-c function that will take in an array of UIImages/PNGs, and return/save one tall image of all the images stitched together in order vertically. I am new to this so please go slow and take it easy :)
My ideas so far:
Draw a UIView, then addSubviews to each one's parent (of the images)
and then ???
Below are Swift 3 and Swift 2 examples that stitch images together vertically or horizontally. They use the dimensions of the largest image in the array provided by the caller to determine the common frame size that each individual image is stitched into.
Note: The Swift 3 example preserves each image's aspect ratio, while the Swift 2 example does not. See note inline below regarding that.
UPDATE: Added Swift 3 example
Swift 3:
import UIKit
import AVFoundation
func stitchImages(images: [UIImage], isVertical: Bool) -> UIImage {
    var stitchedImages: UIImage!
    if images.count > 0 {
        var maxWidth = CGFloat(0), maxHeight = CGFloat(0)
        for image in images {
            if image.size.width > maxWidth {
                maxWidth = image.size.width
            }
            if image.size.height > maxHeight {
                maxHeight = image.size.height
            }
        }
        var totalSize: CGSize
        let maxSize = CGSize(width: maxWidth, height: maxHeight)
        if isVertical {
            totalSize = CGSize(width: maxSize.width, height: maxSize.height * (CGFloat)(images.count))
        } else {
            totalSize = CGSize(width: maxSize.width * (CGFloat)(images.count), height: maxSize.height)
        }
        UIGraphicsBeginImageContext(totalSize)
        for image in images {
            let offset = (CGFloat)(images.index(of: image)!)
            let rect = AVMakeRect(aspectRatio: image.size, insideRect: isVertical ?
                CGRect(x: 0, y: maxSize.height * offset, width: maxSize.width, height: maxSize.height) :
                CGRect(x: maxSize.width * offset, y: 0, width: maxSize.width, height: maxSize.height))
            image.draw(in: rect)
        }
        stitchedImages = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
    return stitchedImages
}
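Usage of the Swift 3 function is straightforward; for example, to stack three images vertically (imageA, imageB and imageC are placeholder UIImage values, not from the original answer):
let stitched = stitchImages(images: [imageA, imageB, imageC], isVertical: true)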
Note: The original Swift 2 example below does not preserve the aspect ratio (in the Swift 2 example all images are expanded to fit the bounding box that represents the extrema of the widths and heights of the images, so any non-square image can be stretched disproportionately in one of its dimensions). If you're using Swift 2 and want to preserve the aspect ratio, please use the AVMakeRect() modification from the Swift 3 example. Since I no longer have access to a Swift 2 playground and can't test it to ensure there are no errors, I'm not updating the Swift 2 example here.
Swift 2: (Doesn't preserve aspect ratio. Fixed in Swift 3 example above)
import UIKit
import AVFoundation
func stitchImages(images: [UIImage], isVertical: Bool) -> UIImage {
    var stitchedImages: UIImage!
    if images.count > 0 {
        var maxWidth = CGFloat(0), maxHeight = CGFloat(0)
        for image in images {
            if image.size.width > maxWidth {
                maxWidth = image.size.width
            }
            if image.size.height > maxHeight {
                maxHeight = image.size.height
            }
        }
        var totalSize: CGSize, maxSize = CGSizeMake(maxWidth, maxHeight)
        if isVertical {
            totalSize = CGSizeMake(maxSize.width, maxSize.height * (CGFloat)(images.count))
        } else {
            totalSize = CGSizeMake(maxSize.width * (CGFloat)(images.count), maxSize.height)
        }
        UIGraphicsBeginImageContext(totalSize)
        for image in images {
            var rect: CGRect, offset = (CGFloat)((images as NSArray).indexOfObject(image))
            if isVertical {
                rect = CGRectMake(0, maxSize.height * offset, maxSize.width, maxSize.height)
            } else {
                rect = CGRectMake(maxSize.width * offset, 0, maxSize.width, maxSize.height)
            }
            image.drawInRect(rect)
        }
        stitchedImages = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
    return stitchedImages
}
The normal way is to create a bitmap image context, draw your images in at the required position, and then get the image from the image context.
You can do this with UIKit, which is somewhat easier, but it isn't thread safe, so it will need to run on the main thread and will block the UI.
There is loads of example code around for this, but if you want to understand it properly, you should look at UIGraphicsBeginImageContext, UIGraphicsGetCurrentContext, UIGraphicsGetImageFromCurrentImageContext and -[UIImage drawInRect:]. Don't forget UIGraphicsEndImageContext.
You can also do this with Core Graphics, which is, AFAIK, safe to use on background threads (I've not had a crash from it yet). Efficiency is about the same, as UIKit just uses CG under the hood.
Key words for this are CGBitmapContextCreate, CGContextDrawImage, CGBitmapContextCreateImage, CGContextTranslateCTM, CGContextScaleCTM and CGContextRelease (no ARC for Core Graphics). The scaling and translating are needed because CG has its origin in the bottom left hand corner and Y increases upwards.
There is also a third way, which is to use CG for the context but save yourself all the coordinate pain by using a CALayer: set your CGImage (UIImage.CGImage) as the contents and then render the layer into the context. This is still thread safe and lets the layer take care of all the transformations. The keyword for this is renderInContext:.
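A rough Swift sketch of the Core Graphics route described above, drawing each CGImage into a bitmap context at a y offset computed in CG's bottom-left coordinate space (the fixed tile size, RGBA pixel format and function name here are assumptions for the example, not part of the original answer):
import UIKit
import CoreGraphics

func stitchVertically(_ images: [UIImage], tileSize: CGSize) -> UIImage? {
    let width = Int(tileSize.width)
    let height = Int(tileSize.height) * images.count
    guard let ctx = CGContext(data: nil,
                              width: width,
                              height: height,
                              bitsPerComponent: 8,
                              bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }
    for (index, image) in images.enumerated() {
        guard let cgImage = image.cgImage else { continue }
        // CG's origin is bottom-left, so the first image goes at the top of the bitmap,
        // i.e. at the highest y value.
        let y = CGFloat(height) - CGFloat(index + 1) * tileSize.height
        ctx.draw(cgImage, in: CGRect(x: 0, y: y, width: tileSize.width, height: tileSize.height))
    }
    return ctx.makeImage().map { UIImage(cgImage: $0) }
}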
I know I'm a bit late here but hopefully this can help someone out. If you're trying to create one large image out of an array you can use this method
- (UIImage *)mergeImagesFromArray:(NSArray *)imageArray {
    if ([imageArray count] == 0) return nil;

    UIImage *exampleImage = [imageArray firstObject];
    CGSize imageSize = exampleImage.size;
    CGSize finalSize = CGSizeMake(imageSize.width, imageSize.height * [imageArray count]);

    UIGraphicsBeginImageContext(finalSize);
    for (UIImage *image in imageArray) {
        [image drawInRect:CGRectMake(0, imageSize.height * [imageArray indexOfObject:image],
                                     imageSize.width, imageSize.height)];
    }
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return finalImage;
}
Swift 4
import UIKit
import AVFoundation // for AVMakeRect

extension Array where Element: UIImage {
    func stitchImages(isVertical: Bool) -> UIImage {
        let maxWidth = self.compactMap { $0.size.width }.max()
        let maxHeight = self.compactMap { $0.size.height }.max()
        let maxSize = CGSize(width: maxWidth ?? 0, height: maxHeight ?? 0)
        let totalSize = isVertical ?
            CGSize(width: maxSize.width, height: maxSize.height * (CGFloat)(self.count)) :
            CGSize(width: maxSize.width * (CGFloat)(self.count), height: maxSize.height)
        let renderer = UIGraphicsImageRenderer(size: totalSize)
        return renderer.image { (context) in
            for (index, image) in self.enumerated() {
                let rect = AVMakeRect(aspectRatio: image.size, insideRect: isVertical ?
                    CGRect(x: 0, y: maxSize.height * CGFloat(index), width: maxSize.width, height: maxSize.height) :
                    CGRect(x: maxSize.width * CGFloat(index), y: 0, width: maxSize.width, height: maxSize.height))
                image.draw(in: rect)
            }
        }
    }
}
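Assuming imageA, imageB and imageC are placeholder UIImage values, usage of the extension above looks like:
let contactSheet = [imageA, imageB, imageC].stitchImages(isVertical: true)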
Try this piece of code; I tried stitching two images together and displayed them in an image view:
UIImage *bottomImage = [UIImage imageNamed:@"bottom.png"]; //first image
UIImage *image = [UIImage imageNamed:@"top.png"]; //foreground image
CGSize newSize = CGSizeMake(209, 260); //size of image view
UIGraphicsBeginImageContext(newSize);
// drawing 1st image
[bottomImage drawInRect:CGRectMake(0, 0, newSize.width/2, newSize.height/2)];
// drawing the 2nd image after the 1st
[image drawInRect:CGRectMake(0, newSize.height/2, newSize.width/2, newSize.height/2)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
join.image = newImage;
join is the name of the image view, and you get to see the two images as a single image.
The solutions above were helpful, but they had a serious flaw for me. The problem is that if the images are of different sizes, the resulting stitched image could have large spaces between the parts. The solution I came up with combines all the images directly below each other so that it looks more like a single image no matter the individual image sizes.
For Swift 3.x
static func combineImages(images: [UIImage]) -> UIImage {
    // totalHeight accumulates the combined height; maxWidth tracks the widest image.
    var totalHeight: CGFloat = 0.0
    var maxWidth: CGFloat = 0.0
    for image in images {
        totalHeight += image.size.height
        if image.size.width > maxWidth {
            maxWidth = image.size.width
        }
    }
    let finalSize = CGSize(width: maxWidth, height: totalHeight)
    UIGraphicsBeginImageContext(finalSize)
    var runningHeight: CGFloat = 0.0
    for image in images {
        image.draw(in: CGRect(x: 0.0, y: runningHeight, width: image.size.width, height: image.size.height))
        runningHeight += image.size.height
    }
    let finalImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return finalImage!
}