CGAffineTransformMakeRotation bug when masking - objective-c

I have a bug when masking a rotated image. The bug wasn't present on iOS 8 (even when built with the iOS 9 SDK), and it still isn't present on non-retina iOS 9 iPads (iPad 2). I no longer have any retina iPads still on iOS 8, and in the meantime I've also updated the build to support both 32-bit and 64-bit architectures. The point is, I can't be sure whether the bug was introduced by the update to iOS 9 or by the change to the architectures setting.
The nature of the bug is this. I'm iterating through a series of images, rotating each by an angle determined by the number of segments I want to get out of the picture, and masking the picture on each rotation to get a different part of it. Everything works fine EXCEPT when the source image is rotated by M_PI, 2*M_PI, 3*M_PI or 4*M_PI (or multiples thereof, i.e. 5*M_PI, 6*M_PI, etc.). When it's rotated by M_PI, 2*M_PI or 3*M_PI, the resulting images are incorrectly masked - the parts that should be transparent end up black. When it's rotated by 4*M_PI, the masking produces a nil image, which crashes the application in the last step (where I add the resulting image to an array).
I'm using CIImage for rotation and CGImage masking for performance reasons (this combination showed the best performance), so I would prefer to keep this choice of methods if at all possible.
UPDATE: The code shown below is run on a background thread.
Here is the code snippet:
CIContext *context = [CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer : @NO}];
for (int i = 0; i < [src count]; i++) {
    // for every image in the source array
    CIImage *newImage;
    if (IS_RETINA) {
        newImage = [CIImage imageWithCGImage:[[src objectAtIndex:i] CGImage]];
    } else {
        CIImage *tImage = [CIImage imageWithCGImage:[[src objectAtIndex:i] CGImage]];
        newImage = [tImage imageByApplyingTransform:CGAffineTransformMakeScale(0.5, 0.5)];
    }
    float angle = angleOff * M_PI / 180.0;
    // angleOff is just the initial angle offset, in case I want to start cutting from a different start point
    for (int j = 0; j < nSegments; j++) {
        // for the given number of circle slices (segments)
        CIImage *ciResult = [newImage imageByApplyingTransform:CGAffineTransformMakeRotation(angle + ((2*M_PI / (2 * nSegments)) + (2*M_PI / nSegments) * j))];
        // the way the angle is calculated is specific to how the image is used later.
        // In any case, the bug happens when the resulting angle is M_PI, 2*M_PI, 3*M_PI or 4*M_PI
        CGPoint center = CGPointMake([ciResult extent].origin.x + [ciResult extent].size.width/2,
                                     [ciResult extent].origin.y + [ciResult extent].size.width/2);
        // (note: size.width is used for the y offset too; the images are presumably square)
        for (int k = 0; k < [src count]; k++) {
            // this iteration is also specific; it determines how much of the image is masked
            // on each pass, but the bug happens regardless of it
            CGSize dim = [[masks objectAtIndex:k] size];
            if (IS_RETINA && (floor(NSFoundationVersionNumber) > NSFoundationVersionNumber_iOS_7_1)) {
                dim = CGSizeMake(dim.width*2, dim.height*2);
            } // this correction was needed after iOS 7 was introduced, not sure why :)
            CGRect newSize = CGRectMake(center.x - dim.width/2,
                                        center.y + ((circRadius + orbitWidth*(k+1))*scale - dim.height),
                                        dim.width, dim.height);
            // the calculation of the new size is specific to the application, I don't find it relevant
            CGImageRef imageRef = [context createCGImage:ciResult fromRect:newSize];
            CGImageRef maskRef = [[masks objectAtIndex:k] CGImage];
            CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                                CGImageGetHeight(maskRef),
                                                CGImageGetBitsPerComponent(maskRef),
                                                CGImageGetBitsPerPixel(maskRef),
                                                CGImageGetBytesPerRow(maskRef),
                                                CGImageGetDataProvider(maskRef),
                                                NULL,
                                                YES);
            CGImageRef masked = CGImageCreateWithMask(imageRef, mask);
            UIImage *result = [UIImage imageWithCGImage:masked];
            CGImageRelease(imageRef);
            CGImageRelease(masked);
            CGImageRelease(mask);
            [temps addObject:result];
        }
    }
}
I would be eternally grateful for any tips anyone might have. This bug has me puzzled beyond words. :)
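One speculative debugging aid (my addition, not part of the original question): since the crash comes from inserting a nil image into the array, snapping the crop rect to integral coordinates and skipping nil results at least localizes the failure. Rotations by exact multiples of M_PI may leave the CIImage extent with fractional or negative offsets that createCGImage:fromRect: handles badly, though that is an assumption on my part:

CGRect cropRect = CGRectIntegral(newSize); // assumption: fractional extents are the culprit
CGImageRef imageRef = [context createCGImage:ciResult fromRect:cropRect];
if (imageRef == NULL) {
    NSLog(@"createCGImage returned NULL for rect %@ (extent %@)",
          NSStringFromCGRect(cropRect), NSStringFromCGRect([ciResult extent]));
    continue; // skip this segment rather than crashing on [temps addObject:nil]
}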

Related

CGLayerRef turns out empty only in OS X 10.11 (El Capitan)

I have an image editing application that worked through 10.10, but a bug came up in 10.11.
When I view a CIImage created with -imageWithCGLayer, it shows as an empty image (of the correct size), but only on 10.11.
CGSize size = NSSizeToCGSize(rect.size);
size_t width = size.width;
size_t height = size.height;
size_t bitsPerComponent = 8;
size_t bytesPerRow = (width * 4+ 0x0000000F) & ~0x0000000F; // 16 byte aligned is good
size_t dataSize = bytesPerRow * height;
void* data = calloc(1, dataSize);
CGColorSpaceRef colorspace = [[[_imageController document] captureColorSpace] CGColorSpace];
CGContextRef bitmapContext = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorspace, kCGImageAlphaNone | kCGBitmapByteOrder32Host);
CGLayerRef canvasLayer = CGLayerCreateWithContext(bitmapContext, scaledRect.size, NULL);
[self drawCanvasInLayer:canvasLayer inRect:scaledRect];
CIImage *test = [CIImage imageWithCGLayer:canvasLayer];
NSLog(@"%@", test);
So when I view CIImage *test on 10.10, it looks precisely as I want it. On 10.11 it is a blank image of the same size.
I tried looking at the API diffs for CGLayer & CIImage, but the documentation is too dense for me. Has anybody else stumbled across this issue? I imagine it must be something with the initialization of the CGContextRef, because everything else in the code is size-related.
That particular API was deprecated some time ago and completely removed in macOS 10.11. So your results are expected.
Since you already have a bitmapContext, modify your -drawCanvasInLayer: method to draw directly into the bitmap, and then create the image using the bitmap context like this:
CGImageRef tmpCGImage = CGBitmapContextCreateImage(bitmapContext);
CIImage *myCIImage = [[CIImage alloc] initWithCGImage:tmpCGImage];
Remember to do CGImageRelease( tmpCGImage ) after you are done with your CIImage.
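To make that concrete, here is a minimal sketch of the whole replacement flow, reusing the variables from the question; -drawCanvasInContext:inRect: is a hypothetical reworking of -drawCanvasInLayer:inRect: that draws into a plain CGContextRef:

CGContextRef bitmapContext = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorspace, kCGImageAlphaNone | kCGBitmapByteOrder32Host);
[self drawCanvasInContext:bitmapContext inRect:scaledRect]; // hypothetical replacement for -drawCanvasInLayer:inRect:
CGImageRef tmpCGImage = CGBitmapContextCreateImage(bitmapContext);
CIImage *myCIImage = [[CIImage alloc] initWithCGImage:tmpCGImage];
CGImageRelease(tmpCGImage); // the CIImage keeps its own reference to the pixel data
CGContextRelease(bitmapContext);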
I recently solved this very problem and posted a sample Objective-C project to work around the loss of this API.
See http://www.blinddogsoftware.com/goodies/#DontSpillTheBits
Also, don't forget to read the header file where that API is declared. There is very often extremely useful information there (in Xcode, Command+click on the specific API)

How to align memory in Objective-C CGImageRef to fix EXC_ARM_DA_ALIGN error

I've used this snippet of code for years in my apps without fail. It cuts out a set of pieces from a CGImageRef I pass in, using a black-and-white mask.
I am attempting to migrate the code into an environment where the layout will be based on autolayout and constraints.
It worked perfectly with autolayout until I tried running it on an iPad 3, iPhone 5 or 4S.
After much research I believe it is crashing due to memory alignment issues, with this error:
EXC_BAD_ACCESS (code=EXC_ARM_DALIGN, address=0x5baa20)
I believe I need to adjust bits/bytes per component somehow to ensure the results always line up in memory, to avoid angering the ARMv7 gods who rule over the problem devices.
I found some examples on the web but I have not successfully adapted them to this situation.
I have an alternate path where I can hard-code my sizing, which doesn't cause the crash, but that means abandoning days' worth of work on autolayout in favor of hard coding.
Help please
- (UIImage *)maskImage2:(CGImageRef)image withMask:(UIImage *)maskImage withFrame:(CGRect)currentFrame {
    @autoreleasepool {
        CGFloat scale = [self adjustScale];
        CGRect scaledRect = CGRectMake(currentFrame.origin.x*scale, currentFrame.origin.y*scale, maskImage.size.width*scale, maskImage.size.height*scale);
        // Rect scaled up to account for iPad 3 sizing
        NSLog(@"%@ origin", NSStringFromCGRect(scaledRect));
        CGImageRef tempImage = CGImageCreateWithImageInRect(image, scaledRect); // Cut out the image at the size of the pieces
        CGImageRef maskRef = maskImage.CGImage; // Creates the mask to cut out the fine edge and add backing layer
        CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                            CGImageGetHeight(maskRef),
                                            CGImageGetBitsPerComponent(maskRef),
                                            CGImageGetBitsPerPixel(maskRef),
                                            CGImageGetBytesPerRow(maskRef),
                                            CGImageGetDataProvider(maskRef), NULL, false);
        CGImageRef colorsHighlights = CGImageCreateWithMask(tempImage, mask); // Colors with transparent surround
        CFRelease(tempImage);
        CGSize tempsize = CGSizeMake(maskImage.size.width, maskImage.size.height);
        CFRelease(mask);
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(tempsize.width+5, tempsize.height+5), NO, scale);
        CGContextRef context = UIGraphicsGetCurrentContext();
        if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
            CGContextSetShadow(context, CGSizeMake(1, 2), 2);
        } else {
            CGContextSetShadow(context, CGSizeMake(.5, .5), 1);
        }
        [[UIImage imageWithCGImage:colorsHighlights scale:scale orientation:UIImageOrientationUp] drawInRect:CGRectMake(0, 0, tempsize.width, tempsize.height)]; // <<<< Crash is here
        CFRelease(colorsHighlights);
        UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return finalImage;
    }
}
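No fix was posted for this one, but one commonly suggested workaround for alignment crashes around CGImageMaskCreate is to stop borrowing the mask image's own buffer geometry and instead redraw the mask into a bitmap whose bytes-per-row is explicitly aligned. This is an assumption on my part, not the original poster's solution; a minimal sketch:

// Hypothetical helper: redraws the mask with a 16-byte-aligned bytes-per-row.
// A DeviceGray image with no alpha is a legal mask for CGImageCreateWithMask,
// with the same black-shows/white-hides polarity as the image mask above.
static CGImageRef CreateAlignedMask(CGImageRef maskRef) {
    size_t width  = CGImageGetWidth(maskRef);
    size_t height = CGImageGetHeight(maskRef);
    size_t bytesPerRow = (width + 0x0000000F) & ~0x0000000F; // 8-bit gray, 16-byte aligned
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow, gray, (CGBitmapInfo)kCGImageAlphaNone);
    CGColorSpaceRelease(gray);
    if (!ctx) return NULL;
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), maskRef);
    CGImageRef alignedMask = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return alignedMask;
}

In -maskImage2: above, that would replace the whole CGImageMaskCreate(...) call with CGImageRef mask = CreateAlignedMask(maskRef); and leave the rest unchanged.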

Cropping out a face using CoreImage

I need to crop out a face (or multiple faces) from a given image and use the cropped face image elsewhere. I am using CIDetectorTypeFace from Core Image. The problem is that the new UIImage containing just the detected face needs to be bigger, since the hair or the lower jaw gets cut off. How do I increase the size of the initWithFrame:faceFeature.bounds?
Sample code I am using:
CIImage *image = [CIImage imageWithCGImage:staticBG.image.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
NSArray *features = [detector featuresInImage:image];
for (CIFaceFeature *faceFeature in features)
{
    UIView *faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
    faceView.layer.borderWidth = 1;
    faceView.layer.borderColor = [[UIColor redColor] CGColor];
    [staticBG addSubview:faceView];
    // cropping the face
    CGImageRef imageRef = CGImageCreateWithImageInRect([staticBG.image CGImage], faceFeature.bounds);
    [resultView setImage:[UIImage imageWithCGImage:imageRef]];
    CGImageRelease(imageRef);
}
Note: the red frame that I made to show the detected face region does not match the cropped-out image at all. Maybe I am not displaying the frame right, but since I do not need to show the frame (I really need the cropped-out face), I am not worrying about it much.
Not sure, but you could try
CGRect biggerRectangle = CGRectInset(faceFeature.bounds, someNegativeCGFloatToIncreaseSizeForXAxis, someNegativeCGFloatToIncreaseSizeForYAxis);
CGImageRef imageRef = CGImageCreateWithImageInRect([staticBG.image CGImage], biggerRectangle);
https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CGGeometry/Reference/reference.html#//apple_ref/c/func/CGRectInset
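A concrete usage sketch (the 20-point padding is an arbitrary value of mine, and the CGRectIntersection clamp is my addition, since CGImageCreateWithImageInRect fails if the inset rect spills outside the image). As an aside, CIDetector reports bounds in Core Image's bottom-left-origin coordinate system, so the rect may need to be flipped vertically for UIKit use, which could explain the mismatched red frame:

// Grow the detected bounds (negative insets expand the rect), then clamp to the image.
CGRect biggerRectangle = CGRectInset(faceFeature.bounds, -20.0, -20.0);
CGRect imageBounds = CGRectMake(0, 0, CGImageGetWidth(staticBG.image.CGImage), CGImageGetHeight(staticBG.image.CGImage));
CGRect cropRect = CGRectIntersection(biggerRectangle, imageBounds);
CGImageRef imageRef = CGImageCreateWithImageInRect([staticBG.image CGImage], cropRect);
[resultView setImage:[UIImage imageWithCGImage:imageRef]];
CGImageRelease(imageRef);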

Split a big UIImage for Texture Use

I need to split a big image (about 10000 px in height) into a number of smaller images to use as textures for OpenGL. Below is the way I'm doing it right now; does anybody have any ideas to do it faster? It is taking quite long.
NSMutableArray *images = [[NSMutableArray alloc] init];
for (int i = 0; i < numberOfImages; i++) {
    int t = i * origHeight;
    CGRect fromRect = CGRectMake(0, t, origWidth, origHeight); // or whatever rectangle
    CGImageRef drawImage = CGImageCreateWithImageInRect(sourceImage.CGImage, fromRect);
    UIImage *newImage = [UIImage imageWithData:UIImageJPEGRepresentation([UIImage imageWithCGImage:drawImage], 1.0)];
    [images addObject:newImage];
    CGImageRelease(drawImage);
}
You can pre-split them beforehand, e.g. using the convert command from ImageMagick, which you can get with brew:
http://www.imagemagick.org/discourse-server/viewtopic.php?f=1&t=15771
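If the split has to happen at runtime, note (my observation, not the answerer's) that the JPEG encode/decode round-trip inside the loop is likely the slow part, since the crop itself is cheap. A sketch of the same loop without it:

// Same split, but keeping the cropped CGImage directly instead of
// re-encoding every tile through UIImageJPEGRepresentation.
NSMutableArray *images = [[NSMutableArray alloc] init];
for (int i = 0; i < numberOfImages; i++) {
    CGRect fromRect = CGRectMake(0, i * origHeight, origWidth, origHeight);
    CGImageRef drawImage = CGImageCreateWithImageInRect(sourceImage.CGImage, fromRect);
    [images addObject:[UIImage imageWithCGImage:drawImage]];
    CGImageRelease(drawImage);
}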

How to convert a CVImageBufferRef to UIImage

I am trying to capture video from a camera. I have gotten the captureOutput:didOutputSampleBuffer: callback to trigger, and it gives me a sample buffer that I then convert to a CVImageBufferRef. I then attempt to convert that image to a UIImage that I can view in my app.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    /* Lock the image buffer */
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    /* Get information about the image */
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    /* Unlock the image buffer */
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* Create a CGImageRef from the CVImageBufferRef */
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    /* Release some components */
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    /* Display the result on the custom layer */
    /* self.customLayer.contents = (id)newImage; */
    /* Display the result on the image view (we need to change the orientation of the image so that the video is displayed correctly) */
    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
    self.capturedView.image = image;
    /* Release the CGImageRef */
    CGImageRelease(newImage);
}
The code seems to work fine up until the call to CGBitmapContextCreate. It always returns a NULL pointer, so consequently none of the rest of the function works. No matter what I seem to pass it, the function returns NULL. I have no idea why.
The way you are passing in the baseAddress presumes that the image data is in the form
ACCC
(where C is some color component: R, G, or B).
If you've set up your AVCaptureSession to capture the video frames in native format, more than likely you're getting the video data back in planar YUV420 format (see: link text). In order to do what you're attempting here, probably the easiest thing would be to specify that you want the video frames captured in kCVPixelFormatType_32RGBA. Apple recommends capturing the video frames in kCVPixelFormatType_32BGRA if you capture in non-planar format at all; the reasoning is not stated, but I can reasonably assume it's due to performance considerations.
Caveat: I've not done this, and am assuming that accessing the CVPixelBufferRef contents like this is a reasonable way to build the image. I can't vouch for this actually working, but I /can/ tell you that the way you are doing things right now reliably will not work due to the pixel format that you are (probably) capturing the video frames as.
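Not from the original answer, but the pixel-format request it describes would look roughly like this when configuring a standard AVCaptureVideoDataOutput:

// Ask AVFoundation for non-planar BGRA frames so the bitmap-context approach
// in the question has a chance of working.
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

With BGRA frames, the kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst combination in the question's CGBitmapContextCreate call matches the buffer's memory layout.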
If you need to convert a CVImageBufferRef to UIImage, it unfortunately seems to be much more difficult than it should be.
Essentially you need to first convert it to CIImage, then CGImage, and then finally UIImage. I wish I could tell you why. :)
- (void)screenshotOfVideoStream:(CVImageBufferRef)imageBuffer
{
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext
                             createCGImage:ciImage
                             fromRect:CGRectMake(0, 0,
                                                 CVPixelBufferGetWidth(imageBuffer),
                                                 CVPixelBufferGetHeight(imageBuffer))];
    UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
    [self doSomethingWithOurUIImage:image];
    CGImageRelease(videoImage);
}
This particular method worked for me when I was converting H.264 video using the VTDecompressionSession callback to get the CVImageBufferRef (but it should work for any CVImageBufferRef). I was using iOS 8.1 and Xcode 6.2.
You can directly call:
self.yourImageView.image=[[UIImage alloc] initWithCIImage:[CIImage imageWithCVPixelBuffer:imageBuffer]];
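(One caveat, not in the original answer: a UIImage created this way is backed by a CIImage rather than a bitmap, so its CGImage property is nil and it may not behave like a normal image everywhere; rendering through a CIContext, as in the previous answer, is the more general route.)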
Benjamin Loulier wrote a really good post on outputting a CVImageBufferRef, comparing multiple approaches with speed in mind.
You can also find a working example on github ;)
How about back in time? ;)
Here you go: http://web.archive.org/web/20140426162537/http://www.benjaminloulier.com/posts/ios4-and-direct-access-to-the-camera