I need to implement the SURF algorithm in Objective-C on iOS.
I have searched the OpenCV site and also tried to implement the examples by jonmarimba and ishai jaffe.
Neither example works for me, and I need to get at least one of them running so I can be confident that SURF can work on iOS at all. I have tried to build it from scratch, but by now I am totally confused.
I am trying to use OpenCV 2.4.2 in jonmarimba's example, with iOS 5.1.1 and Xcode 4.3.
First of all: go with OpenCV's C++ interface. Objective-C++ is a strict superset of C++, so you can mix the two freely; just rename the implementation files that touch OpenCV from .m to .mm.
To get a grip on the topic, take a look at OpenCV's official docs and the example code on feature description.
The next step is to grab a copy of a current OpenCV build for iOS. As of version 2.4.2, OpenCV has official iOS support and you just need the opencv2.framework.
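One common stumbling block with that framework is header order: OpenCV's C++ headers need to be seen before Apple's. A minimal sketch of the usual prefix-header guard (assuming your project uses a .pch file):
#ifdef __cplusplus
// Pull in OpenCV before any Apple headers, and only for
// C++ translation units (i.e. your .mm files).
#import <opencv2/opencv.hpp>
#endif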
To convert a cv::Mat to a UIImage, use this function:
static UIImage* MatToUIImage(const cv::Mat& m) {
    CV_Assert(m.depth() == CV_8U);
    NSData *data = [NSData dataWithBytes:m.data length:m.elemSize()*m.total()];
    CGColorSpaceRef colorSpace = m.channels() == 1 ?
        CGColorSpaceCreateDeviceGray() : CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Create a CGImage from the cv::Mat
    CGImageRef imageRef = CGImageCreate(m.cols, m.rows,    // width, height
                                        m.elemSize1()*8,   // bits per component
                                        m.elemSize()*8,    // bits per pixel
                                        m.step[0],         // bytes per row
                                        colorSpace,
                                        kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
… and vice versa, UIImage to cv::Mat:
static void UIImageToMat(const UIImage* image, cv::Mat& m) {
    // CGImageGetColorSpace follows the Core Foundation "Get" rule: the
    // returned color space is not owned by us and must not be released.
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width, rows = image.size.height;
    m.create(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
    CGContextRef contextRef = CGBitmapContextCreate(m.data, m.cols, m.rows, 8,
                                                    m.step[0], colorSpace,
                                                    kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
}
The rest of the work is plain OpenCV stuff, so grab yourself a coffee and get started.
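For instance, to sanity-check that SURF itself runs, here is a minimal sketch (in a .mm file), under the assumption that your opencv2.framework build includes the nonfree module, where SURF lives in OpenCV 2.4.x:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/nonfree/features2d.hpp> // SURF is in nonfree as of 2.4
static void detectSURF(UIImage *input) {
    cv::Mat rgba, gray;
    UIImageToMat(input, rgba);              // conversion function from above
    cv::cvtColor(rgba, gray, CV_RGBA2GRAY); // SURF expects a single-channel image
    cv::SurfFeatureDetector detector(400);  // Hessian threshold; tune to taste
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(gray, keypoints);
    cv::SurfDescriptorExtractor extractor;
    cv::Mat descriptors;                    // one row per keypoint
    extractor.compute(gray, keypoints, descriptors);
    NSLog(@"SURF found %lu keypoints", (unsigned long)keypoints.size());
}
If this logs a plausible keypoint count on the device, the SURF side of your problem is solved and the remaining work is in the examples you're porting.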
If you need some "inspiration", take a look at the gsoc2012 repo under /ios/trunk; it's dedicated to OpenCV + iOS.
I am trying to create an image mask in the kCGColorSpaceDisplayP3 color space to support the iPhone 7's wide-color display.
I can create the image mask correctly when using the sRGB color space on iPhone 6 and earlier devices running iOS 10 and earlier, but I have no clue where I am going wrong when creating the color space with kCGColorSpaceDisplayP3:
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceDisplayP3);
CGContextRef context = CGBitmapContextCreate(NULL, 320.0, 320.0, 32, 320.0*16, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapFloatComponents);
CGFloat radius = 10.0;
CGFloat components[] = {1.0,1.0,1.0,1.0, 1.0,1.0,1.0,1.0, 1.0,1.0,1.0,1.0, 1.0,1.0,1.0,1.0, 1.0,1.0,1.0,0.5, 1.0,1.0,1.0,0.0};
CGFloat locations[] = {0.0, 0.1, 0.2, 0.8, 0.9, 1.0};
CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, components, locations, 6); //colorSpaceP3
CGPoint center = CGPointMake(100.0, 100.0);
CGContextDrawRadialGradient(context, gradient, center, 0.1, center, radius, 0);
CGGradientRelease(gradient);
CGImageRef imageHole = CGBitmapContextCreateImage(context);
CGImageRef maskHole = CGImageMaskCreate(CGImageGetWidth(imageHole), CGImageGetHeight(imageHole), CGImageGetBitsPerComponent(imageHole), CGImageGetBitsPerPixel(imageHole), CGImageGetBytesPerRow(imageHole), CGImageGetDataProvider(imageHole), NULL, FALSE);
CGImageRelease(imageHole);
CGImageRef image = [UIImage imageNamed:@"prosbo_hires.jpg"].CGImage;
CGImageRef masked = CGImageCreateWithMask(image, maskHole);
CGImageRelease(maskHole);
UIImage *img = [UIImage imageWithCGImage:masked];
CGImageRelease(masked);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
The log says:
CGImageMaskCreate: invalid mask bits/component: 32.
I don't have much experience with Core Graphics. Can anyone suggest what I'm doing wrong here?
Thanks.
The documentation for the bitsPerComponent parameter of CGImageMaskCreate() says:
Image masks must be 1, 2, 4, or 8 bits per component.
You're passing CGImageGetBitsPerComponent(imageHole), which is 32 bits per component because the bitmap context was created with kCGBitmapFloatComponents and a 32-bit component size. As per both the documentation and the log message, that's not valid.
The implication is that image masks don't support floating-point bitmap formats.
It should be possible to create the bitmap context, and therefore the mask, with 8 bits per component; more or less, just leave out kCGBitmapFloatComponents and pass 8 for bitsPerComponent. I expect that will reduce the granularity of the mask's opacity, but it won't affect the color range of the masked images.
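A minimal sketch of that change, reusing the names from the question (the 8-bit RGBA byte geometry is my assumption):
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceDisplayP3);
// 8 bits per component, 4 bytes per pixel: a component size that
// CGImageMaskCreate accepts (1, 2, 4, or 8 bits per component)
CGContextRef context = CGBitmapContextCreate(NULL, 320, 320, 8, 320 * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
// ... draw the radial gradient exactly as before ...
// CGImageGetBitsPerComponent(imageHole) now returns 8, so the
// "invalid mask bits/component" error goes away.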
This fixed my issue:
contextRef = CGBitmapContextCreate(m.data,
                                   m.cols,
                                   m.rows,
                                   8,
                                   m.step[0],
                                   CGColorSpaceCreateDeviceRGB(),
                                   bitmapInfo);
(Note that creating the device-RGB color space inline like this leaks it; better to keep it in a variable and call CGColorSpaceRelease() when you're done.)
https://developer.apple.com/search/?q=CGColorSpaceCreate
I have an image-editing application that worked through OS X 10.10, but a bug came up in 10.11:
when I view a CIImage created with -imageWithCGLayer:, it shows as an empty image (of the correct size), but only on 10.11.
CGSize size = NSSizeToCGSize(rect.size);
size_t width = size.width;
size_t height = size.height;
size_t bitsPerComponent = 8;
size_t bytesPerRow = (width * 4 + 0x0000000F) & ~0x0000000F; // 16-byte aligned is good
size_t dataSize = bytesPerRow * height;
void *data = calloc(1, dataSize);
CGColorSpaceRef colorspace = [[[_imageController document] captureColorSpace] CGColorSpace];
CGContextRef bitmapContext = CGBitmapContextCreate(data, width, height, bitsPerComponent,
                                                   bytesPerRow, colorspace,
                                                   kCGImageAlphaNone | kCGBitmapByteOrder32Host);
CGLayerRef canvasLayer = CGLayerCreateWithContext(bitmapContext, scaledRect.size, NULL);
[self drawCanvasInLayer:canvasLayer inRect:scaledRect];
CIImage *test = [CIImage imageWithCGLayer:canvasLayer];
NSLog(@"%@", test);
So when I view CIImage *test on 10.10, it looks precisely as I want it. On 10.11 it is a blank image of the same size.
I tried looking at the API diffs for CGLayer and CIImage, but the documentation is too dense for me. Has anybody else stumbled across this issue? I imagine it must be something with the initialization of the CGContextRef, because everything else in the code is size-related.
That particular API was deprecated some time ago and completely removed in macOS 10.11, so your results are expected.
Since you already have a bitmapContext, modify your -drawCanvasInLayer: method to draw directly into the bitmap, then create the image from the bitmap context:
CGImageRef tmpCGImage = CGBitmapContextCreateImage(bitmapContext);
CIImage *myCIImage = [[CIImage alloc] initWithCGImage:tmpCGImage];
Remember to call CGImageRelease(tmpCGImage) after you are done with your CIImage.
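Putting it together, the flow might look like this sketch (drawCanvasInContext:inRect: is a hypothetical replacement for the layer-based drawing method):
// Draw straight into the bitmap context instead of into a CGLayer
[self drawCanvasInContext:bitmapContext inRect:scaledRect]; // hypothetical method
CGImageRef tmpCGImage = CGBitmapContextCreateImage(bitmapContext);
CIImage *test = [[CIImage alloc] initWithCGImage:tmpCGImage];
CGImageRelease(tmpCGImage);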
I recently solved this very problem and posted a sample Objective-C project that works around the loss of this API.
See http://www.blinddogsoftware.com/goodies/#DontSpillTheBits
Also, don't forget to read the header file where the API is declared; there is very often extremely useful information there (in Xcode, Command-click the specific API).
I'm using the following code to convert a UIImage into a cv::Mat:
-(cv::Mat)openCVMat {
    // Turn a UIImage into a cv::Mat by drawing the image into a bitmap
    // context backed by the Mat's pixel buffer.
    // CGImageGetColorSpace follows the "Get" rule: we don't own the returned
    // color space, so we must not release it.
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat columns = self.size.width;
    CGFloat rows = self.size.height;
    cv::Mat m(rows, columns, CV_8UC4, cv::Scalar(0,0,0,0)); // 8 bits per component, 4 channels (RGBA)
    CGContextRef context = CGBitmapContextCreate(m.data,
                                                 columns,
                                                 rows,
                                                 8,
                                                 m.step[0],
                                                 colorSpace,
                                                 kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
    // Draw the image into the Mat's memory
    CGContextDrawImage(context, CGRectMake(0, 0, columns, rows), self.CGImage);
    // Cleanup
    CGContextRelease(context);
    return m;
}
It seems to work fine, but for some images the resulting cv::Mat differs from what I get by reading the file directly with imread:
NSURL *url = [[NSBundle mainBundle] URLForResource:@"blue" withExtension:@"png"];
cv::Mat fromFile = cv::imread([url.path cStringUsingEncoding:NSUTF8StringEncoding],
                              CV_LOAD_IMAGE_UNCHANGED | CV_LOAD_IMAGE_COLOR);
cv::Mat withAlpha;
cv::cvtColor(fromFile, withAlpha, CV_BGRA2RGBA);
In this case the resulting cv::Mat (withAlpha) is slightly different, especially in the corners of the image.
For example, when reading the image as a UIImage and then converting to cv::Mat, the final pixel is:
[0, 0, 110, 255] (RGBA)
while the last pixel after reading it directly with imread is:
[0, 0, 255, 110] (also RGBA)
To make things more confusing, this doesn't happen with all images.
Any ideas why this is happening?
Here's the test image: blue.png
There's a mismatch between the format being read and the format with which the cv::Mat is being created.
I see that while loading the image from the resource bundle, you're using:
CV_LOAD_IMAGE_UNCHANGED | CV_LOAD_IMAGE_COLOR.
These are loading modes, not independent bit flags, so OR-ing them together doesn't do what you might expect. Just keep CV_LOAD_IMAGE_UNCHANGED and check; that should guarantee the alpha channel of your PNG is loaded as-is.
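A sketch of the suggested change (the path handling is copied from the question):
cv::Mat fromFile = cv::imread([url.path cStringUsingEncoding:NSUTF8StringEncoding],
                              CV_LOAD_IMAGE_UNCHANGED); // load the PNG as-is, alpha included
cv::Mat withAlpha;
cv::cvtColor(fromFile, withAlpha, CV_BGRA2RGBA);        // imread yields BGRA; convert to RGBA to compare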
See if that helps.
Possible Duplicate:
How to get UIImage from EAGLView?
So I was just wondering if anybody knows any way to save what is stored in an EAGLContext as a UIImage.
I am currently using:
UIGraphicsBeginImageContext(CGSizeMake(768, 1024));
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
in other apps of mine, and it works fine there; but obviously EAGLContext doesn't have a .layer property. I've tried casting to UIView, but that (unsurprisingly) doesn't work:
UIView *newView = [[UIView alloc] init];
newView = (UIView *)context;
I am drawing to an EAGLContext property on a UIView (technically an EAGLContext on a UIView on another UIView on a View Controller, but I figure that shouldn't make any difference) using OpenGLES 1.
If anybody knows anything about this, even if its just that I'm completely barking up an impossible tree, please let me know!
Matt
After a few days I finally got a working solution for this. There is code provided by Apple which produces a UIImage from an EAGLView; then you simply need to flip the image vertically, since UIKit is upside down relative to the GL coordinate system. The link to the documentation where I found this method doesn't exist anymore.
Method to capture the EAGLView:
-(UIImage *)drawableToCGImage
{
    GLint backingWidth2, backingHeight2;
    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer which is
    // already bound at this point, this call is redundant, but it is needed
    // if you're dealing with multiple renderbuffers.
    // Note: replace "viewRenderbuffer" with the actual name of the
    // renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);
    NSInteger x = 0, y = 0, width2 = backingWidth2, height2 = backingHeight2;
    NSInteger dataLength = width2 * height2 * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));
    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);
    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to
    // ignore the alpha channel; otherwise, use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);
    // OpenGL ES measures data in PIXELS; create a graphics context with the
    // target size measured in POINTS.
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to
        // take the scale into consideration. Set the scale parameter to your
        // OpenGL ES view's contentScaleFactor so that you get a
        // high-resolution snapshot when its value is greater than 1.0.
        CGFloat scale = self.contentScaleFactor;
        widthInPoints = width2 / scale;
        heightInPoints = height2 / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
        widthInPoints = width2;
        heightInPoints = height2;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }
    CGContextRef cgcontext = UIGraphicsGetCurrentContext();
    // The UIKit coordinate system is upside down relative to the GL/Quartz
    // coordinate system: flip the CGImage by rendering it to the bitmap
    // context. The size of the destination area is measured in POINTS.
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);
    return image;
}
Method to flip the image vertically:
- (UIImage *)flipImageVertically:(UIImage *)originalImage
{
    UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
    UIGraphicsBeginImageContext(tempImageView.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Flip the y-axis: scale by -1 vertically, then translate down by the height
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, tempImageView.frame.size.height);
    CGContextConcatCTM(context, flipVertical);
    [tempImageView.layer renderInContext:context];
    UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // [tempImageView release]; // not needed under ARC
    return flippedImage;
}
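For completeness, a usage sketch, assuming both methods live on your GL view subclass (glView is a hypothetical instance name):
UIImage *snapshot = [glView drawableToCGImage];
UIImage *upright  = [glView flipImageVertically:snapshot];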
I have built a simple but effective emboss effect with Core Graphics.
It works great, but only in the simulator...
Here is the result:
What I do is the following:
- From the picked image, I strip the alpha channel if it has one and fill it with white.
- I convert this RGB image to grayscale.
- I invert the colors of the image.
I then call a custom method to create the effect, with these parameters:
- canvasImg: a semi-transparent image to mask onto
- maskImg: the image I just created, grayscale and inverted
- opacity: the opacity of the resulting image
The method then makes a simple mask, applies shadows and opacity, and returns a brand-new UIImage.
I can't understand why it works in the simulator but not on the device.
While running on the device I do get a non-nil UIImage, though...
Please help!
Here is the code:
- (UIImage *)stampImage:(UIImage *)canvasImg withMask:(UIImage *)maskImg withOpacity:(CGFloat)opacity
{
    // Create the masked image
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskImg.size.width, maskImg.size.height,
                                                                8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (mainViewContentContext == NULL) return nil;
    CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, maskImg.size.width, maskImg.size.height), maskImg.CGImage);
    CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, maskImg.size.width, maskImg.size.height), canvasImg.CGImage);
    CGContextSetAllowsAntialiasing(mainViewContentContext, true);
    CGContextSetShouldAntialias(mainViewContentContext, true);
    CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);
    UIImage *maskedImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
    CGImageRelease(mainViewContentBitmapContext);

    // Apply a drop shadow
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef shadowContext = CGBitmapContextCreate(NULL, maskedImage.size.width + 10, maskedImage.size.height + 10,
                                                       CGImageGetBitsPerComponent(maskedImage.CGImage), 0,
                                                       colourSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colourSpace);
    CGContextSetShadowWithColor(shadowContext, CGSizeMake(0, -1), 1, [UIColor colorWithWhite:1.0 alpha:0.3].CGColor);
    CGContextSetAllowsAntialiasing(shadowContext, true);
    CGContextSetShouldAntialias(shadowContext, true);
    CGContextDrawImage(shadowContext, CGRectMake(0, 10, maskedImage.size.width, maskedImage.size.height), maskedImage.CGImage);
    CGImageRef shadowedCGImage = CGBitmapContextCreateImage(shadowContext);
    CGContextRelease(shadowContext);
    UIImage *stampImg = [UIImage imageWithCGImage:shadowedCGImage];
    CGImageRelease(shadowedCGImage);
    return stampImg;
}
Also be aware of the memory limitations of the device versus the simulator. I've had Core Graphics logic that would build and run fine on the simulator; the same logic would build and run on the device without emitting any errors, but the visual result was not the desired one. I'd suggest trying your logic on a considerably smaller image to verify that it works on the device. I had to abandon some very cool image-masking work that I'd come up with because the device didn't have the horsepower to pull it off for larger images.
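For a quick test along those lines, one way is to downscale the inputs before calling stampImage:withMask:withOpacity: (a sketch; the 1024-pixel cap is an arbitrary assumption):
static UIImage *scaledDown(UIImage *img, CGFloat maxDim) {
    // Shrink so the longer side is at most maxDim points, preserving aspect ratio
    CGFloat scale = MIN(1.0, maxDim / MAX(img.size.width, img.size.height));
    CGSize target = CGSizeMake(img.size.width * scale, img.size.height * scale);
    UIGraphicsBeginImageContextWithOptions(target, NO, 1.0);
    [img drawInRect:CGRectMake(0, 0, target.width, target.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
// e.g.
// UIImage *stamp = [self stampImage:scaledDown(canvasImg, 1024.0)
//                          withMask:scaledDown(maskImg, 1024.0)
//                       withOpacity:1.0];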