CGLayerRef turns out empty only in OS X 10.11 (El Capitan) - objective-c

I have an image-editing application that has worked through 10.10, but a bug came up in 10.11.
When I view a CIImage created with -imageWithCGLayer:, it shows as an empty image (of the correct size), but only on 10.11:
CGSize size = NSSizeToCGSize(rect.size);
size_t width = size.width;
size_t height = size.height;
size_t bitsPerComponent = 8;
size_t bytesPerRow = (width * 4 + 0x0000000F) & ~0x0000000F; // 16-byte aligned is good
size_t dataSize = bytesPerRow * height;
void* data = calloc(1, dataSize);
CGColorSpaceRef colorspace = [[[_imageController document] captureColorSpace] CGColorSpace];
CGContextRef bitmapContext = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorspace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
CGLayerRef canvasLayer = CGLayerCreateWithContext(bitmapContext, scaledRect.size, NULL);
[self drawCanvasInLayer:canvasLayer inRect:scaledRect];
CIImage *test = [CIImage imageWithCGLayer:canvasLayer];
NSLog(@"%@", test);
So when I view CIImage *test on 10.10, it looks precisely as I want it. On 10.11 it is a blank image of the same size.
I tried looking at the API diffs for CGLayer and CIImage, but the documentation is too dense for me. Has anybody else stumbled across this issue? I imagine it must be something with the initialization of the CGContextRef, because everything else in the code is size-related.

That particular API was deprecated some time ago and completely removed in macOS 10.11. So your results are expected.
Since you already have a bitmapContext, modify your -drawCanvasInLayer: method to draw directly into the bitmap, and then create the image from the bitmap context like this:
CGImageRef tmpCGImage = CGBitmapContextCreateImage( bitmapContext );
CIImage* myCIImage = [[CIImage alloc] initWithCGImage: tmpCGImage];
Remember to call CGImageRelease(tmpCGImage) after you are done with your CIImage.
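A minimal sketch of that workaround in context, reusing the bitmapContext and scaledRect from the question, and assuming a hypothetical -drawCanvasInContext:inRect: variant of the drawing method that takes the CGContextRef directly instead of a CGLayerRef:
// Draw straight into the existing bitmap context instead of a CGLayer,
// then wrap the result in a CIImage.
[self drawCanvasInContext:bitmapContext inRect:scaledRect]; // hypothetical CGContextRef-based variant of -drawCanvasInLayer:inRect:

CGImageRef tmpCGImage = CGBitmapContextCreateImage(bitmapContext);
CIImage *test = [[CIImage alloc] initWithCGImage:tmpCGImage];
CGImageRelease(tmpCGImage);
NSLog(@"%@", test);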
I recently solved this very problem and posted a sample Objective-C project that works around the loss of this API.
See http://www.blinddogsoftware.com/goodies/#DontSpillTheBits
Also, don't forget to read the header file where that API is declared. There is often extremely useful information there (in Xcode, Command-click the specific API).


Draw text into CGBitmapContext

I have an app that renders into a UIView's CGContext in -drawRect:. I also export those renderings using a background renderer. It uses the same rendering logic to render (faster than real time) into a CGBitmapContext, which I subsequently transform into an mp4 file.
I have noticed that the output video has a number of weird glitches, such as the image being rotated, weird duplications of the rendered images, random noise, and odd timing.
I'm looking for ways to debug this. For the timing issue, I thought I'd render a string that tells me which frame I'm currently viewing, only to find that rendering text into a CGContext is not very well documented. In fact, the documentation around much of Core Graphics is quite unforgiving to someone of my experience.
So specifically, I'd like to know how to render text into a context. If it's Core Text, must it somehow interoperate with the Core Graphics context? And in general, I'd appreciate any tips and advice on doing bitmap rendering and debugging the results.
According to another question:
How to convert Text to Image in Cocoa Objective-C
we can use CTLineDraw to draw text into a CGBitmapContext.
Sample code:
NSString* string = @"terry.wang";
CGFloat fontSize = 10.0f;
// Create an attributed string with string and font information
CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica Light"), fontSize, nil);
NSDictionary* attributes = [NSDictionary dictionaryWithObjectsAndKeys:
(id)font, kCTFontAttributeName,
nil];
NSAttributedString* as = [[NSAttributedString alloc] initWithString:string attributes:attributes];
CFRelease(font);
// Figure out how big an image we need
CTLineRef line = CTLineCreateWithAttributedString((CFAttributedStringRef)as);
CGFloat ascent, descent, leading;
double fWidth = CTLineGetTypographicBounds(line, &ascent, &descent, &leading);
// On iOS 4.0 and Mac OS X v10.6 you can pass null for data
size_t width = (size_t)ceilf(fWidth);
size_t height = (size_t)ceilf(ascent + descent);
void* data = malloc(width*height*4);
// Create the context and fill it with white background
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8, width*4, space, bitmapInfo);
CGColorSpaceRelease(space);
CGContextSetRGBFillColor(ctx, 1.0, 1.0, 1.0, 1.0); // white background
CGContextFillRect(ctx, CGRectMake(0.0, 0.0, width, height));
// Draw the text
CGFloat x = 0.0;
CGFloat y = descent;
CGContextSetTextPosition(ctx, x, y);
CTLineDraw(line, ctx);
CFRelease(line);
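For the original debugging use case (stamping a frame number on each exported frame), the same pieces can be folded into a small helper that draws directly into the bitmap context you are already exporting from. A minimal sketch, assuming an ARC Objective-C file; the helper name, font, and color are arbitrary choices:
#import <UIKit/UIKit.h>
#import <CoreText/CoreText.h>

// Draws a short debug label (e.g. @"frame 42") into an existing CGBitmapContext.
// The origin is in the context's own coordinate space (bottom-left by default).
static void DrawDebugLabel(CGContextRef ctx, NSString *text, CGPoint origin)
{
    CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica"), 18.0, NULL);
    NSDictionary *attributes = @{ (__bridge id)kCTFontAttributeName : (__bridge id)font,
                                  (__bridge id)kCTForegroundColorAttributeName : (__bridge id)[UIColor redColor].CGColor };
    NSAttributedString *as = [[NSAttributedString alloc] initWithString:text attributes:attributes];
    CTLineRef line = CTLineCreateWithAttributedString((__bridge CFAttributedStringRef)as);

    CGContextSetTextMatrix(ctx, CGAffineTransformIdentity); // reset in case something else changed it
    CGContextSetTextPosition(ctx, origin.x, origin.y);
    CTLineDraw(line, ctx);

    CFRelease(line);
    CFRelease(font);
}

// Usage inside the export loop (frameIndex is whatever counter you already have):
// DrawDebugLabel(exportContext, [NSString stringWithFormat:@"frame %d", frameIndex], CGPointMake(10.0, 10.0));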

Issue on image data processing with iOS 11

I really need your help today! I'm debugging an old Objective-C app created by another dev, and there's a new bug that only appears on iOS 11.
The bug comes from an image-processing function used when trying to create a "Scratch View", similar to this one -> https://github.com/joehour/ScratchCard
But since iOS 11 the function doesn't work anymore. In the code below I get the error [Unknown process name] CGImageMaskCreate: invalid image provider: NULL <-- the variable CGDataProviderRef dataProvider is not created (null).
// Method to change the view which will be scratched
- (void)setHideView:(UIView *)hideView
{
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
UIGraphicsBeginImageContextWithOptions(hideView.bounds.size, NO, 0);
[hideView.layer renderInContext:UIGraphicsGetCurrentContext()];
hideView.layer.contentsScale = scale;
_hideImage = UIGraphicsGetImageFromCurrentImageContext().CGImage;
UIGraphicsEndImageContext();
size_t imageWidth = CGImageGetWidth(_hideImage);
size_t imageHeight = CGImageGetHeight(_hideImage);
CFMutableDataRef pixels = CFDataCreateMutable(NULL, imageWidth * imageHeight);
_contextMask = CGBitmapContextCreate(CFDataGetMutableBytePtr(pixels), imageWidth, imageHeight , 8, imageWidth, colorspace, kCGImageAlphaNone);
CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData(pixels);
CFRelease(pixels);
CGContextSetFillColorWithColor(_contextMask, [UIColor blackColor].CGColor);
CGContextFillRect(_contextMask, self.frame);
CGContextSetStrokeColorWithColor(_contextMask, [UIColor whiteColor].CGColor);
CGContextSetLineWidth(_contextMask, _sizeBrush);
CGContextSetLineCap(_contextMask, kCGLineCapRound);
CGImageRef mask = CGImageMaskCreate(imageWidth, imageHeight, 8, 8, imageWidth, dataProvider, nil, NO);
_scratchImage = CGImageCreateWithMask(_hideImage, mask);
CGDataProviderRelease(dataProvider);
CGImageRelease(mask);
CGColorSpaceRelease(colorspace);
}
I'm not an expert in this kind of image processing, and I'm really lost debugging this part...
Does anyone know why this function doesn't work anymore on iOS 11?
Thanks for your help!
iOS 11 stopped accepting CGDataProviderCreateWithCFData on an empty buffer, so you need to set the length of pixels explicitly.
Something like:
...
CFMutableDataRef pixels = CFDataCreateMutable(NULL, imageWidth * imageHeight);
CFDataSetLength(pixels, imageWidth * imageHeight); // this is the line you're missing for iOS11+
_contextMask = ...
...
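For clarity, here is the relevant part of -setHideView: with that fix in place (colorspace is the gray color space created at the top of the method). The capacity passed to CFDataCreateMutable is only a hint, so the data's length stays 0 until CFDataSetLength is called, and that empty buffer is what iOS 11 rejects:
size_t imageWidth  = CGImageGetWidth(_hideImage);
size_t imageHeight = CGImageGetHeight(_hideImage);

CFMutableDataRef pixels = CFDataCreateMutable(NULL, imageWidth * imageHeight);
CFDataSetLength(pixels, imageWidth * imageHeight); // required on iOS 11+: give the buffer a real length

_contextMask = CGBitmapContextCreate(CFDataGetMutableBytePtr(pixels),
                                     imageWidth, imageHeight, 8, imageWidth,
                                     colorspace, kCGImageAlphaNone);
CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData(pixels);
CFRelease(pixels);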

CGAffineTransformMakeRotation bug when masking

I have a bug when masking a rotated image. The bug wasn't present on iOS 8 (even when the app was built with the iOS 9 SDK), and it still isn't present on non-retina iOS 9 iPads (iPad 2). I don't have any retina iPads left on iOS 8, and in the meantime I've also updated the build to support both 32-bit and 64-bit architectures. The point is, I can't be sure whether the bug came with the update to iOS 9 or with the change to building for both architectures.
The nature of the bug is this. I'm iterating through a series of images, rotating each by an angle determined by the number of segments I want to cut out of the picture, and masking the picture on each rotation to get a different part of it. Everything works fine EXCEPT when the source image is rotated by M_PI, 2*M_PI, 3*M_PI or 4*M_PI (or multiples, i.e. 5*M_PI, 6*M_PI etc.). When it's rotated by 1-3 times M_PI, the resulting images are incorrectly masked - the parts that should be transparent end up black. When it's rotated by 4*M_PI, masking produces a nil image, which crashes the application in the last step (where I add the resulting image to an array).
I'm using CIImage for rotation, and CGImage masking for performance reasons (this combination showed best performance), so I would prefer keeping this choice of methods if at all possible.
UPDATE: The code shown below is run on a background thread.
Here is the code snippet:
CIContext *context = [CIContext contextWithOptions:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:NO] forKey:kCIContextUseSoftwareRenderer]];
for (int i=0;i<[src count];i++){
//for every image in the source array
CIImage * newImage = [[CIImage alloc] init];
if (IS_RETINA){
newImage = [CIImage imageWithCGImage:[[src objectAtIndex:i] CGImage]];
}else {
CIImage *tImage = [CIImage imageWithCGImage:[[src objectAtIndex:i] CGImage]];
newImage = [tImage imageByApplyingTransform:CGAffineTransformMakeScale(0.5, 0.5)];
}
float angle = angleOff*M_PI/180.0;
//angleOff is just the inital angle offset, If I want to start cutting from a different start point
for (int j = 0; j < nSegments; j++){
//for the given number of circle slices (segments)
CIImage * ciResult = [newImage imageByApplyingTransform:CGAffineTransformMakeRotation(angle + ((2*M_PI/ (2 * nSegments)) + (2*M_PI/nSegments) * (j)))];
//the way the angle is calculated is specific for the application of the image later. In any case, the bug happens when the resulting angle is M_PI, 2*M_PI, 3*M_PI and 4*M_PI
CGPoint center = CGPointMake([ciResult extent].origin.x + [ciResult extent].size.width/2, [ciResult extent].origin.y+[ciResult extent].size.width/2);
for (int k = 0; k<[src count]; k++){
//this iteration is also specific, it has to do with how much of the image is being masked in each iteration, but the bug happens irrelevant to this iteration
CGSize dim = [[masks objectAtIndex:k] size];
if (IS_RETINA && (floor(NSFoundationVersionNumber)>(NSFoundationVersionNumber_iOS_7_1))) {
dim = CGSizeMake(dim.width*2, dim.height*2);
}//this correction was needed after iOS 7 was introduced, not sure why :)
CGRect newSize = CGRectMake(center.x-dim.width*1/2,center.y+((circRadius + orbitWidth*(k+1))*scale-dim.height)*1, dim.width*1, dim.height*1);
//the calculation of the new size is specific to the application, I don't find it relevant.
CGImageRef imageRef = [context createCGImage:ciResult fromRect:newSize];
CGImageRef maskRef = [[masks objectAtIndex:k] CGImage];
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
CGImageGetHeight(maskRef),
CGImageGetBitsPerComponent(maskRef),
CGImageGetBitsPerPixel(maskRef),
CGImageGetBytesPerRow(maskRef),
CGImageGetDataProvider(maskRef),
NULL,
YES);
CGImageRef masked = CGImageCreateWithMask(imageRef, mask);
UIImage * result = [UIImage imageWithCGImage:masked];
CGImageRelease(imageRef);
CGImageRelease(masked);
CGImageRelease(mask);
[temps addObject:result];
}
}
}
I would be eternally grateful for any tips anyone might have. This bug has me puzzled beyond words :)

`-drawInRect` seems to be different in 10.11? What could have changed?

Edit: I got a downvote, and I just wanted to make sure it wasn't because I didn't ask on the dev forums; I did (https://forums.developer.apple.com/thread/11593), but haven't gotten any response. This is a pretty big issue for us, so I figured it'd be best to cast a wide net; maybe somebody can help.
We have an app that works in 10.10 but has this issue in 10.11.
We call -drawInRect: within an NSGraphicsContext on an NSImage that is properly set up in both OS versions,
but in 10.11 this NSImage doesn't get drawn.
I'm pretty novice, but I have been debugging for quite a while to get to where I'm at now, and I'm just plain stuck. I wanted to see whether anybody has run into this before, or has any idea why this could be.
Here is the pertinent code:
(the layer object is a CGLayerRef that is passed into this method, from the -drawRect method)
Here is how the layer is instantiated:
NSRect scaledRect = [Helpers scaleRect:rect byScale:[self backingScaleFactorRelativeToZoom]];
CGSize size = NSSizeToCGSize(rect.size);
size_t width = size.width;
size_t height = size.height;
size_t bitsPerComponent = 8;
size_t bytesPerRow = (width * 4 + 0x0000000F) & ~0x0000000F; // 16-byte aligned is good
size_t dataSize = bytesPerRow * height;
void* data = calloc(1, dataSize);
CGColorSpaceRef colorspace = [[[_imageController document] captureColorSpace] CGColorSpace];
CGContextRef bitmapContext = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorspace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
CGLayerRef canvasLayer = CGLayerCreateWithContext(bitmapContext, scaledRect.size, NULL);
Here is the method that draws the image:
CGContextRef mainLayerContext = CGLayerGetContext(layer);
NSRect scaledBounds = [contextInfo[kContextInfoBounds] rectValue];
if( !_flatCache )
{
_flatCache = CGLayerCreateWithContext(mainLayerContext, scaledBounds.size, NULL);
CGContextRef flatCacheCtx = CGLayerGetContext(_flatCache);
CGLayerRef tempLayer = CGLayerCreateWithContext(flatCacheCtx, scaledBounds.size, NULL);
CIImage *tempImage = [CIImage imageWithCGLayer:tempLayer];
NSLog(@"%@", tempImage);
CGContextRef tempLayerCtx = CGLayerGetContext(tempLayer);
CGContextTranslateCTM(tempLayerCtx, -scaledBounds.origin.x, -scaledBounds.origin.y);
[NSGraphicsContext saveGraphicsState];
NSGraphicsContext* newContext = [NSGraphicsContext graphicsContextWithGraphicsPort:tempLayerCtx flipped:NO];
[NSGraphicsContext setCurrentContext:newContext];
if ( [_imageController background] )
{
NSRect bgRect = { [_imageController backgroundPosition], [_imageController backgroundSize] };
bgRect = [Helpers scaleRect:bgRect byScale:[self backingScaleFactorRelativeToZoom]];
[[_imageController background] drawInRect:bgRect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
}
CIImage *tempImage2 = [CIImage imageWithCGLayer:tempLayer];
NSLog(@"%@", tempImage2);
In 10.10 and 10.11, tempImage is an empty image with the correct size.
In 10.10, tempImage2 now has [_imageController background] properly drawn.
In 10.11, tempImage2 is the same as tempImage: a blank image with the correct size.
Unfortunately the person who originally wrote this code is gone now, and I'm too much of a novice to dig any lower without finding a book and reading it.
bgRect is not the issue; I've already tried modifying that. I have also messed with the translate arguments, but still couldn't learn anything.
Does anybody know how else I could debug this to find the issue? Or better yet, has anybody seen this issue and know what my problem is?
Just needed to add this line:
//`bounds` calculated from the size of the image
//mainLayerContext and flatCache defined above in question ^^
CGContextDrawLayerInRect(mainLayerContext, bounds, _flatCache);
at the very end of the method.
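Roughly, the end of the draw method then looks like this (a sketch assuming the method balances the earlier saveGraphicsState, and that bounds is a rect built from the image's size):
// ... drawing into flatCacheCtx / tempLayerCtx as shown in the question ...
[NSGraphicsContext restoreGraphicsState]; // balances the earlier saveGraphicsState

// Composite the flattened cache into the main layer's context as the last step.
CGContextDrawLayerInRect(mainLayerContext, bounds, _flatCache);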

How would I implement SURF in objC?

I need to implement the SURF algorithm in Objective-C on iOS.
I have searched OpenCV and also tried to implement the following examples:
jonmarimba and ishai jaffe
The examples are not working, and I need to make at least one of them work so I can at least be sure that SURF can run on iOS at all. I have tried to build it from scratch, but I am totally FUSED with SHORT CIRCUIT.
I am trying to use OpenCV 2.4.2 in jonmarimba's example,
and also iOS 5.1.1 with Xcode 4.3.
First of all: go with OpenCV's C++ interface. Objective-C++ lets you mix Objective-C and C++ in the same (.mm) file, so you can use it directly.
To get a grip on the topic, take a look at OpenCV's official docs and the example code about Feature Description.
The next step is to grab a copy of the current OpenCV version for iOS. As of version 2.4.2, OpenCV has official iOS support and you just need the opencv2.framework.
To convert a cv::Mat to a UIImage, use this function:
static UIImage* MatToUIImage(const cv::Mat& m) {
CV_Assert(m.depth() == CV_8U);
NSData *data = [NSData dataWithBytes:m.data length:m.elemSize()*m.total()];
CGColorSpaceRef colorSpace = m.channels() == 1 ?
CGColorSpaceCreateDeviceGray() : CGColorSpaceCreateDeviceRGB();
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(m.cols, m.rows, m.elemSize1()*8, m.elemSize()*8,
                                    m.step[0], colorSpace, kCGImageAlphaNoneSkipLast|kCGBitmapByteOrderDefault,
                                    provider, NULL, false, kCGRenderingIntentDefault);
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
… and vice-versa:
static void UIImageToMat(const UIImage* image, cv::Mat& m) {
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width, rows = image.size.height;
m.create(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(m.data, m.cols, m.rows, 8,
m.step[0], colorSpace, kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
// Note: colorSpace comes from CGImageGetColorSpace (a Get, not a Create/Copy), so it is not released here.
}
The rest of the work is plain OpenCV stuff, so grab a coffee and get to work.
If you need some "inspiration", take a look at this repo: gsoc2012 - /ios/trunk. It's dedicated to OpenCV + iOS.
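For reference, a minimal sketch (in an Objective-C++ .mm file) of running SURF on a UIImage with OpenCV 2.4.x, assuming your opencv2.framework build includes the nonfree module (which is where SURF lives) and reusing the UIImageToMat helper above; the Hessian threshold of 400 is just a starting point:
#import <UIKit/UIKit.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp> // SurfFeatureDetector / SurfDescriptorExtractor
#include <vector>

static void DetectSURF(UIImage *image)
{
    cv::Mat rgba, gray;
    UIImageToMat(image, rgba);              // helper from above
    cv::cvtColor(rgba, gray, CV_RGBA2GRAY); // SURF wants a single-channel image

    cv::SurfFeatureDetector detector(400);  // Hessian threshold; tune for your images
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(gray, keypoints);

    cv::SurfDescriptorExtractor extractor;
    cv::Mat descriptors;
    extractor.compute(gray, keypoints, descriptors);

    NSLog(@"Found %lu SURF keypoints", (unsigned long)keypoints.size());
}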