I am facing a problem taking a snapshot of a UIView that contains a CAEmitterLayer. Below is the code I am using to take the snapshot (editingView is my UIView):
UIGraphicsBeginImageContextWithOptions(editingView.bounds.size, NO, 0.0);
[editingView drawViewHierarchyInRect:editingView.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Please go through the links below for GIFs of the two behaviors:
Link for afterScreenUpdates:NO (GIF): in this case I am missing snapshot data.
Link for afterScreenUpdates:YES (GIF): in this case the UIView blinks before updating, and I am also facing a performance issue.
I've had a similar issue in the past. The following might work for you as a workaround:
UIGraphicsBeginImageContextWithOptions(editingView.bounds.size, NO, 0.0);
[editingView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Note that you may lose some accuracy in the screen capture, as renderInContext: may exclude blurs and some other Core Animation features.
Edit:
There seem to be issues/tradeoffs with both renderInContext: and drawViewHierarchyInRect:. The following code may be worth a try (taken from here for reference):
// Note: bitmapData is assumed to be an instance variable (void *bitmapData;)
// so the buffer can be reused and freed between captures.
- (CGContextRef)createBitmapContextOfSize:(CGSize)size {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    size_t bitmapByteCount;
    size_t bitmapBytesPerRow;
    bitmapBytesPerRow = (size_t)size.width * 4;
    bitmapByteCount = bitmapBytesPerRow * (size_t)size.height;
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (bitmapData != NULL) {
        free(bitmapData);
    }
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        CGColorSpaceRelease(colorSpace);
        fprintf(stderr, "Memory not allocated!");
        return NULL;
    }
    context = CGBitmapContextCreate(bitmapData,
                                    size.width,
                                    size.height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaNoneSkipFirst);
    CGColorSpaceRelease(colorSpace);
    // check the context *before* configuring it
    if (context == NULL) {
        free(bitmapData);
        bitmapData = NULL;
        fprintf(stderr, "Context not created!");
        return NULL;
    }
    CGContextSetAllowsAntialiasing(context, NO);
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, size.height);
    CGContextConcatCTM(context, flipVertical);
    return context;
}
Then you can do:
CGContextRef context = [self createBitmapContextOfSize:editingView.bounds.size];
[editingView.layer renderInContext:context];
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *background = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context); // the helper hands ownership of the context to the caller
I'm trying to draw a line with a specific width. I searched for examples online, but I only found examples using straight lines; I need curved lines. I also need to detect whether the user touched within the line. Is it possible to achieve this using Objective-C and Sprite Kit? If so, can someone provide an example?
You can use UIBezierPath to create Bézier curves (nice smooth curves). You can set this path on a CAShapeLayer and add that as a sublayer to your view:
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:CGPointMake(10, 150)];
[path addCurveToPoint:CGPointMake(110, 150) controlPoint1:CGPointMake(40, 100) controlPoint2:CGPointMake(80, 100)];
[path addCurveToPoint:CGPointMake(210, 150) controlPoint1:CGPointMake(140, 200) controlPoint2:CGPointMake(170, 200)];
[path addCurveToPoint:CGPointMake(310, 150) controlPoint1:CGPointMake(250, 100) controlPoint2:CGPointMake(280, 100)];
CAShapeLayer *layer = [CAShapeLayer layer];
layer.lineWidth = 10;
layer.strokeColor = [UIColor redColor].CGColor;
layer.fillColor = [UIColor clearColor].CGColor;
layer.path = path.CGPath;
[self.view.layer addSublayer:layer];
If you want to randomize it a little, you can just randomize some of the curves. If you want some fuzziness, add some shadow. If you want the ends to be round, specify a rounded line cap:
UIBezierPath *path = [UIBezierPath bezierPath];
CGPoint point = CGPointMake(10, 100);
[path moveToPoint:point];
CGPoint controlPoint1;
CGPoint controlPoint2 = CGPointMake(point.x - 5.0 - arc4random_uniform(50), 150.0);
for (NSInteger i = 0; i < 5; i++) {
    controlPoint1 = CGPointMake(point.x + (point.x - controlPoint2.x), 50.0);
    point.x += 40.0 + arc4random_uniform(20);
    controlPoint2 = CGPointMake(point.x - 5.0 - arc4random_uniform(50), 150.0);
    [path addCurveToPoint:point controlPoint1:controlPoint1 controlPoint2:controlPoint2];
}
CAShapeLayer *layer = [CAShapeLayer layer];
layer.lineWidth = 5;
layer.strokeColor = [UIColor redColor].CGColor;
layer.fillColor = [UIColor clearColor].CGColor;
layer.path = path.CGPath;
layer.shadowColor = [UIColor redColor].CGColor;
layer.shadowRadius = 2.0;
layer.shadowOpacity = 1.0;
layer.shadowOffset = CGSizeZero;
layer.lineCap = kCALineCapRound;
[self.view.layer addSublayer:layer];
If you want it to be even more irregular, break those Béziers into smaller segments, but the idea is the same. The only trick with conjoined Bézier curves is to make sure that the second control point of one curve is in line with the first control point of the next one (as the loop above does), or else you end up with sharp discontinuities in the curves.
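For reference, here is a minimal sketch of that collinearity rule, i.e. the reflection the loop above performs (the helper name is illustrative):

// Reflect the previous curve's second control point across the join point to get
// the next curve's first control point; the shared tangent keeps the join smooth.
static CGPoint smoothControlPoint(CGPoint joinPoint, CGPoint previousControlPoint2) {
    return CGPointMake(joinPoint.x + (joinPoint.x - previousControlPoint2.x),
                       joinPoint.y + (joinPoint.y - previousControlPoint2.y));
}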
If you want to detect if and when a user taps on it, that's more complicated. But what you have to do is:
Make a snapshot of the image:
- (UIImage *)captureView:(UIView *)view
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 1.0); // usually I'd use 0.0, but we'll use 1.0 here so that the tap point of the gesture matches the pixel of the snapshot
if ([view respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
    BOOL success = [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
    NSAssert(success, @"drawViewHierarchyInRect failed");
} else {
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
}
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
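You would capture the snapshot once, after the shape layers are in place, and keep it around for hit-testing. A wiring sketch, where self.image is a UIImage property assumed for this example:

// e.g. in viewDidLoad, after adding the shape layer(s)
self.image = [self captureView:self.view];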
Get the color of the pixel at the coordinate that the user tapped:
- (void)handleTap:(UITapGestureRecognizer *)gesture
{
CGPoint point = [gesture locationInView:gesture.view];
CGFloat red, green, blue, alpha;
UIColor *color = [self image:self.image colorAtPoint:point];
[color getRed:&red green:&green blue:&blue alpha:&alpha];
if (green < 0.9 && blue < 0.9 && red > 0.9)
    NSLog(@"tapped on curve");
else
    NSLog(@"didn't tap on curve");
}
Here I adapted Apple's code for getting the pixel buffer in order to determine the color of the pixel the user tapped:
// adapted from https://developer.apple.com/library/mac/qa/qa1509/_index.html
- (UIColor *)image:(UIImage *)image colorAtPoint:(CGPoint)point
{
UIColor *color = nil; // stays nil if the pixel data can't be read
CGImageRef imageRef = image.CGImage;
// Create the bitmap context
CGContextRef context = [self createARGBBitmapContextForImage:imageRef];
NSAssert(context, @"error creating context");
// Get image width, height. We'll use the entire image.
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGRect rect = {{0,0},{width,height}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(context, rect, imageRef);
// Now we can get a pointer to the image data associated with the bitmap
// context.
uint8_t *data = CGBitmapContextGetData (context);
if (data != NULL) {
    size_t offset = (NSInteger) point.y * 4 * width + (NSInteger) point.x * 4;
    uint8_t alpha = data[offset];
    uint8_t red = data[offset+1];
    uint8_t green = data[offset+2];
    uint8_t blue = data[offset+3];
    color = [UIColor colorWithRed:red / 255.0 green:green / 255.0 blue:blue / 255.0 alpha:alpha / 255.0];
}
// When finished, release the context
CGContextRelease(context);
// Free image data memory for the context
if (data) {
    free(data); // we used malloc in createARGBBitmapContextForImage, so free it
}
return color;
}
- (CGContextRef) createARGBBitmapContextForImage:(CGImageRef) inImage
{
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
size_t bitmapByteCount;
size_t bitmapBytesPerRow;
// Get image width, height. We'll use the entire image.
size_t pixelsWide = CGImageGetWidth(inImage);
size_t pixelsHigh = CGImageGetHeight(inImage);
// Declare the number of bytes per row. Each pixel in the bitmap in this
// example is represented by 4 bytes; 8 bits each of red, green, blue, and
// alpha.
bitmapBytesPerRow = (pixelsWide * 4);
bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
// Use the generic RGB color space.
colorSpace = CGColorSpaceCreateDeviceRGB(); // or CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
NSAssert(colorSpace, @"Error allocating color space");
// Allocate memory for image data. This is the destination in memory
// where any drawing to the bitmap context will be rendered.
bitmapData = malloc(bitmapByteCount);
NSAssert(bitmapData, @"Unable to allocate bitmap buffer");
// Create the bitmap context. We want pre-multiplied ARGB, 8-bits
// per component. Regardless of what the source image format is
// (CMYK, Grayscale, and so on) it will be converted over to the format
// specified here by CGBitmapContextCreate.
context = CGBitmapContextCreate (bitmapData,
pixelsWide,
pixelsHigh,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
(CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
NSAssert(context, @"Context not created!");
// Make sure and release colorspace before returning
CGColorSpaceRelease( colorSpace );
return context;
}
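For completeness, the gesture wiring is just a standard tap recognizer; this sketch assumes the methods above live in your view controller:

UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
[self.view addGestureRecognizer:tap];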
I want to change the color of my image without changing its alpha.
I am using the following code, which shifts the color toward blue.
But what I really want is to set every pixel to one particular RGB value,
e.g. R=116, G=170, B=243.
CGImageRef sourceImage = ImageView_Test.image.CGImage;
CFDataRef theData;
theData = CGDataProviderCopyData(CGImageGetDataProvider(sourceImage));
UInt8 *pixelData = (UInt8 *) CFDataGetBytePtr(theData);
int red = 0;
int green = 1;
int blue = 2;
int dataLength = CFDataGetLength(theData);
for (int index = 0; index < dataLength; index += 4)
{
    if (pixelData[index + blue] - 80 > 0)
    {
        pixelData[index + red] = pixelData[index + blue] - 139;
        pixelData[index + green] = pixelData[index + blue] - 85;
    }
    else
    {
        pixelData[index + green] = 0;
        pixelData[index + red] = 0;
    }
}
CGContextRef context;
context = CGBitmapContextCreate(pixelData,
CGImageGetWidth(sourceImage),
CGImageGetHeight(sourceImage),
8,
CGImageGetBytesPerRow(sourceImage),
CGImageGetColorSpace(sourceImage),
kCGImageAlphaPremultipliedLast);
CGImageRef newCGImage = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:newCGImage];
ImageView_Test.image = newImage;
CGContextRelease(context);
CFRelease(theData);
CGImageRelease(newCGImage);
I am using the following method to change the color of a UIImage without affecting its alpha:
-(UIImage *)didImageColorchanged:(NSString *)name withColor:(UIColor *)color
{
UIImage *img = [UIImage imageNamed:name];
UIGraphicsBeginImageContext(img.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[color setFill];
CGContextTranslateCTM(context, 0, img.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextSetBlendMode(context, kCGBlendModeColorBurn);
CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
CGContextDrawImage(context, rect, img.CGImage);
CGContextClipToMask(context, rect, img.CGImage);
CGContextAddRect(context, rect);
CGContextDrawPath(context,kCGPathFill);
UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return coloredImg;
}
Ex:
resultView.image = [self didImageColorchanged:@"xyz.png" withColor:[UIColor redColor]];
(Note that the method takes the image name, not a UIImage, as its first argument.)
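If you need every visible pixel set to exactly one RGB value (like the R=116, G=170, B=243 from the question) rather than a color-burn blend, a minimal sketch is to clip to the image's alpha and flood-fill; the method name here is illustrative, not from the original answer:

- (UIImage *)imageNamed:(NSString *)name filledWithColor:(UIColor *)color
{
    UIImage *img = [UIImage imageNamed:name];
    UIGraphicsBeginImageContextWithOptions(img.size, NO, img.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // flip the context so the mask is applied right side up
    CGContextTranslateCTM(context, 0, img.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
    // clip to the image's alpha, then fill with the target color;
    // transparent areas stay transparent
    CGContextClipToMask(context, rect, img.CGImage);
    [color setFill];
    CGContextFillRect(context, rect);
    UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return coloredImg;
}

Called, for example, as [self imageNamed:@"xyz.png" filledWithColor:[UIColor colorWithRed:116/255.0 green:170/255.0 blue:243/255.0 alpha:1.0]].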
You can also just use:
// load image
UIImage *image = [UIImage imageNamed:@"test.png"];
CGImageRef imageRef = image.CGImage;
// CGDataProviderCopyData hands back an immutable copy; take a mutable
// copy so the bytes can be modified safely
NSMutableData *data = [(__bridge_transfer NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef)) mutableCopy];
char *pixels = (char *)[data mutableBytes];
// this is where you manipulate the individual pixels
// assumes a 4 byte pixel consisting of rgb and alpha
// for PNGs without transparency use i+=3 and remove int a
for (int i = 0; i < [data length]; i += 4)
{
    int r = i;
    int g = i+1;
    int b = i+2;
    int a = i+3;
    pixels[r] = 0; // eg. remove red
    pixels[g] = pixels[g];
    pixels[b] = pixels[b];
    pixels[a] = pixels[a];
}
// create a new image from the modified pixel data
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data); // lets the provider keep the buffer alive
CGImageRef newImageRef = CGImageCreate (
width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorspace,
bitmapInfo,
provider,
NULL,
false,
kCGRenderingIntentDefault
);
// the modified image
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
// cleanup -- note: don't free(pixels), since the buffer is owned by `data`,
// and don't release imageRef, which this code never retained
CGColorSpaceRelease(colorspace);
CGDataProviderRelease(provider);
CGImageRelease(newImageRef);
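For the specific value asked about (R=116, G=170, B=243), the loop body would become something like the following, assuming the same non-premultiplied RGBA layout as above:

for (int i = 0; i < [data length]; i += 4)
{
    pixels[i]   = 116; // red
    pixels[i+1] = 170; // green
    pixels[i+2] = 243; // blue
    // pixels[i+3] (alpha) is deliberately left untouched
}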
I'm using the OpenCV framework with Xcode and want to convert from cv::Mat or IplImage to UIImage. How do I do that? Thanks.
Note: most implementations don't correctly handle an alpha channel or convert from OpenCV's BGR pixel format to iOS's RGB.
This will correctly convert from cv::Mat to UIImage:
+(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat {
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
CGColorSpaceRef colorSpace;
CGBitmapInfo bitmapInfo;
if (cvMat.elemSize() == 1) {
    colorSpace = CGColorSpaceCreateDeviceGray();
    bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
} else {
    colorSpace = CGColorSpaceCreateDeviceRGB();
    // OpenCV defaults to either BGR or ABGR. In CoreGraphics land,
    // this means using the "32Little" byte order, and potentially
    // skipping the first pixel. These may need to be adjusted if the
    // input matrix uses a different pixel format.
    bitmapInfo = kCGBitmapByteOrder32Little | (
        cvMat.elemSize() == 3 ? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
    );
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(
cvMat.cols, //width
cvMat.rows, //height
8, //bits per component
8 * cvMat.elemSize(), //bits per pixel
cvMat.step[0], //bytesPerRow
colorSpace, //colorspace
bitmapInfo, // bitmap info
provider, //CGDataProviderRef
NULL, //decode
false, //should interpolate
kCGRenderingIntentDefault //intent
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
And to convert from UIImage to cv::Mat:
+ (cv::Mat)cvMatWithImage:(UIImage *)image
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
size_t numberOfComponents = CGColorSpaceGetNumberOfComponents(colorSpace);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault;
// check whether the UIImage is greyscale already
if (numberOfComponents == 1) {
    cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
    bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
}
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to backing data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
bitmapInfo); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
return cvMat;
}
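A round-trip usage sketch; the wrapper class name Converter is assumed, and the cvtColor constant reflects that the bitmap context above produces RGBA bytes:

cv::Mat mat = [Converter cvMatWithImage:self.imageView.image];
cv::cvtColor(mat, mat, cv::COLOR_RGBA2GRAY); // any OpenCV processing
self.imageView.image = [Converter UIImageFromCVMat:mat]; // handles gray and color mats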
As of OpenCV 2.4.6, this functionality is already included.
Just include opencv2/highgui/ios.h.
In OpenCV 3 this include has changed to:
opencv2/imgcodecs/ios.h
And you can use these functions:
UIImage* MatToUIImage(const cv::Mat& image);
void UIImageToMat(const UIImage* image, cv::Mat& m, bool alphaExist = false);
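A minimal usage sketch of those built-ins, assuming a cv::Mat mat and a UIImage image already exist:

#import <opencv2/imgcodecs/ios.h> // or <opencv2/highgui/ios.h> on OpenCV 2.4.x

UIImage *converted = MatToUIImage(mat); // cv::Mat -> UIImage
cv::Mat m;
UIImageToMat(image, m); // UIImage -> cv::Mat; pass alphaExist = true if the UIImage really has alpha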
Here I am collecting all the needed conversion methods together.
Converting a color UIImage to a grayscale UIImage, using only iOS library functions (no OpenCV):
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);
/* changes start here */
// Create bitmap image info from pixel data in current context
CGImageRef grayImage = CGBitmapContextCreateImage(context);
// release the colorspace and graphics context
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
// make a new alpha-only graphics context
context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, nil, kCGImageAlphaOnly);
// draw image into context with no colorspace
CGContextDrawImage(context, imageRect, [image CGImage]);
// create alpha bitmap mask from current context
CGImageRef mask = CGBitmapContextCreateImage(context);
// release graphics context
CGContextRelease(context);
// make UIImage from grayscale image with alpha mask
UIImage *grayScaleImage = [UIImage imageWithCGImage:CGImageCreateWithMask(grayImage, mask) scale:image.scale orientation:image.imageOrientation];
// release the CG images
CGImageRelease(grayImage);
CGImageRelease(mask);
// return the new grayscale image
return grayScaleImage;
}
Converting a color UIImage to a color cv::Mat. Please note that you will find this piece of code at several links, but there is a small modification here: the "swap channels" portion. It keeps the colors intact; otherwise the red and blue channels end up swapped, because CoreGraphics produces RGBA bytes while OpenCV expects BGRA.
Also notice the following lines, which keep the orientation of the image undisturbed:
if (image.imageOrientation == UIImageOrientationLeft
    || image.imageOrientation == UIImageOrientationRight) {
    cols = image.size.height;
    rows = image.size.width;
}
- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
if (image.imageOrientation == UIImageOrientationLeft
    || image.imageOrientation == UIImageOrientationRight) {
    cols = image.size.height;
    rows = image.size.width;
}
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
//-- swap channels: CoreGraphics gave us RGBA, OpenCV expects BGRA --//
std::vector<cv::Mat> ch;
cv::split(cvMat, ch);
std::swap(ch[0], ch[2]);
cv::merge(ch, cvMat);
return cvMat;
}
Converting a UIImage to a gray cv::Mat. Notice the line
cv::Mat cvMat(rows, cols, CV_8UC4, Scalar(1,2,3,4)); // 8 bits per component, 4 channels
instead of
cv::Mat cvMat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
This line is needed; otherwise the code will throw an error, because the bitmap context below is created with a four-byte-per-pixel format (kCGImageAlphaNoneSkipLast with an RGB colorspace), which does not match a 1-channel matrix.
- (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
// cv::Mat cvMat(rows, cols, CV_8UC1); // 8 bits per component, 1 channels
cv::Mat cvMat(rows, cols, CV_8UC4, Scalar(1,2,3,4)); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
return cvMat;
}
Now finally, converting a cv::Mat (color, binary, or gray) to a UIImage (color, binary, or gray). Notice the line:
UIImage *finalImage = [UIImage imageWithCGImage:imageRef scale:1 orientation:self.originalImage.imageOrientation];
This line helps keep the original orientation of the image. Enjoy!
-(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat {
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
CGColorSpaceRef colorSpace;
CGBitmapInfo bitmapInfo;
if (cvMat.elemSize() == 1) {
    colorSpace = CGColorSpaceCreateDeviceGray();
    bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
} else {
    colorSpace = CGColorSpaceCreateDeviceRGB();
    bitmapInfo = kCGBitmapByteOrder32Little | (
        cvMat.elemSize() == 3 ? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
    );
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(
cvMat.cols, //width
cvMat.rows, //height
8, //bits per component
8 * cvMat.elemSize(), //bits per pixel
cvMat.step[0], //bytesPerRow
colorSpace, //colorspace
bitmapInfo, // bitmap info
provider, //CGDataProviderRef
NULL, //decode
false, //should interpolate
kCGRenderingIntentDefault //intent
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef scale:1 orientation:self.originalImage.imageOrientation];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
You should consider using the native OpenCV functions to convert back and forth:
#import <opencv2/imgcodecs/ios.h>
...
UIImage* MatToUIImage(const cv::Mat& image);
void UIImageToMat(const UIImage* image,
cv::Mat& m, bool alphaExist = false);
Note: if your UIImage comes from the camera, you should 'normalize' it (see iOS UIImagePickerController result image orientation after upload) before converting to cv::Mat, since OpenCV does not take Exif orientation data into account. If you don't do that, the result will be misoriented.
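A common normalization sketch: redraw the image so the pixel data matches the displayed orientation (drawInRect: honors imageOrientation):

static UIImage *normalizedImage(UIImage *image) {
    if (image.imageOrientation == UIImageOrientationUp) return image;
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized; // imageOrientation is now "up"
}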
As a category:
#import <UIKit/UIKit.h>
#import <opencv2/core/core.hpp>
using namespace cv;
@interface UIImage (OCV)
- (id)initWithOImage:(cv::Mat)oImage;
- (cv::Mat)oImage;
@end
and
#import "UIImage+OCV.h"
@implementation UIImage (OCV)
-(id)initWithOImage:(cv::Mat)oImage
{
NSData *data = [NSData dataWithBytes:oImage.data length:oImage.elemSize() * oImage.total()];
CGColorSpaceRef colorSpace;
if (oImage.elemSize() == 1) {
    colorSpace = CGColorSpaceCreateDeviceGray();
} else {
    colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGImageRef imageRef = CGImageCreate(oImage.cols, // Width
oImage.rows, // Height
8, // Bits per component
8 * oImage.elemSize(), // Bits per pixel
oImage.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info flags
provider, // CGDataProviderRef
NULL, // Decode
false, // Should interpolate
kCGRenderingIntentDefault); // Intent
self = [self initWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return self;
}
-(cv::Mat)oImage
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
CGFloat cols = self.size.width;
CGFloat rows = self.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to backing data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
CGContextRelease(contextRef);
return cvMat;
}
@end
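A usage sketch for the category (the image name is illustrative):

#import "UIImage+OCV.h"

UIImage *source = [UIImage imageNamed:@"photo.png"];
cv::Mat mat = [source oImage]; // UIImage -> cv::Mat
// ... process `mat` with OpenCV ...
UIImage *result = [[UIImage alloc] initWithOImage:mat]; // cv::Mat -> UIImage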
What I have experienced with converting between UIImage and cv::Mat is the following:
When I used the method:
UIImage* MatToUIImage(const cv::Mat& image);
for converting cv::Mat to UIImage, and the method:
void UIImageToMat(const UIImage* image, cv::Mat& m);
for converting UIImage to cv::Mat, these methods didn't work correctly in the Simulator.
After I deployed my app on a real device, there weren't any problems.
Best regards,
Nazar
I'm trying to create an image mask from a composite of two existing images.
First I create the composite, which consists of a small image (the masking image) and a larger image, which is the same size as the background:
UIImage *baseTextureImage = [UIImage imageNamed:@"background.png"];
UIImage *maskImage = [UIImage imageNamed:@"my_mask.jpg"];
UIImage *shapesBase = [UIImage imageNamed:@"largerimage.jpg"];
UIImage *maskImageFull;
CGSize finalSize = CGSizeMake(480.0, 320.0);
UIGraphicsBeginImageContext(finalSize);
[shapesBase drawInRect:CGRectMake(0, 0, 480, 320)];
[maskImage drawInRect:CGRectMake(150, 50, 250, 250)];
maskImageFull = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I can output this UIImage (maskImageFull) and it looks right: it is the full background size, with a white background and my mask object in black in the right place on the screen.
I then pass the maskImageFull UIImage through this:
CGImageRef maskRef = [maskImageFull CGImage];
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
CGImageGetHeight(maskRef),
CGImageGetBitsPerComponent(maskRef),
CGImageGetBitsPerPixel(maskRef),
CGImageGetBytesPerRow(maskRef),
CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef masked = CGImageCreateWithMask([image CGImage], mask); // `image` is the photo being masked
UIImage *retImage = [UIImage imageWithCGImage:masked];
The problem is that the retImage is all black. If I send a pre-made UIImage in as the mask it works fine, it is just when I try to make it from multiple images that it breaks.
I thought it was a colorspace thing but couldn't seem to fix it. Any help is much appreciated!
I tried the same thing with CGImageCreateWithMask, and got the same result. The solution I found was to use CGContextClipToMask instead:
// Fragment from a UIImage category method: targetSize, thumbnailPoint,
// scaledWidth and scaledHeight are assumed to be defined by the caller.
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;
colorSpace = CGColorSpaceCreateDeviceRGB();
// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate (NULL, targetSize.width, targetSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
// free the rgb colorspace
CGColorSpaceRelease(colorSpace);
if (mainViewContentContext == NULL)
    return NULL;
CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
// image is retained by the property setting above, so we can
// release the original
CGImageRelease(mainViewContentBitmapContext);
// return the image
return theImage;
- (UIImage *) maskImage:(UIImage *)image {
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
UIImage *maskImage = [UIImage imageNamed:@"MaskFinal.png"];
CGImageRef maskImageRef = [maskImage CGImage];
// create a bitmap graphics context the size of the image
CGContextRef mainViewContentContext = CGBitmapContextCreate (NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
if (mainViewContentContext == NULL)
    return NULL;
CGFloat ratio = maskImage.size.width / image.size.width;
if (ratio * image.size.height < maskImage.size.height) {
    ratio = maskImage.size.height / image.size.height;
}
CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
CGRect rect2 = {{-((image.size.width*ratio)-maskImage.size.width)/2 , -((image.size.height*ratio)-maskImage.size.height)/2}, {image.size.width*ratio, image.size.height*ratio}};
CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
UIImage *theImage = [UIImage imageWithCGImage:newImage];
CGImageRelease(newImage);
// return the image
return theImage;
}
The image to be masked MUST be created with an alpha channel; the alpha channel cannot be added in code afterwards.