BSXPCMessage Error on CIContext & Performance in iOS 8

I get the error BSXPCMessage received error for message: Connection interrupted whenever I create a CIContext for an image I am going to let the user edit. This error appears in other Stack Overflow questions, such as this one: BSXPCMessage received error for message: Connection interrupted on CIContext with iOS 8.
However, while the accepted answer (https://stackoverflow.com/a/26268910/2939977) does eliminate the error message, it spikes CPU usage to an unusable level and makes the UI very choppy (CPU usage goes from 50% to about 150% once the kCIContextUseSoftwareRenderer flag is set to YES). My filters are attached to sliders, so performance is paramount.
I tried following Apple's documentation to create an EAGL context for my CIContext, with the working color space set to null:
EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
NSDictionary *options = @{ kCIContextWorkingColorSpace : [NSNull null] };
imageContext = [CIContext contextWithEAGLContext:myEAGLContext options:options];
This creates the context without the error message, and performance is much, much better. Hooray! However, as soon as the user engages the editing controls to adjust the image, I get BSXPCMessage received error for message: Connection interrupted on the first adjustment. I never see the error message again, and it does not seem to affect the output at all.
[filter setValue:@(contrast) forKey:@"inputContrast"];
[filter setValue:@(color) forKey:@"inputSaturation"];
CIImage *outputImage = [filter outputImage];
CGImageRef cgimg = [imageContext createCGImage:outputImage fromRect:[outputImage extent]];
UIImage *newImage = [UIImage imageWithCGImage:cgimg scale:1 orientation:UIImageOrientationRight];
transitionImage.image = newImage;
CGImageRelease(cgimg);
In case you are wondering, I did start with GPUImage because of its better performance when driving filters with sliders, but it had some other issues with image scaling that caused me to drop it and go down the CIFilter route.
Any advice or help would be greatly appreciated :)

If you are going to create a CIContext with:
[CIContext contextWithEAGLContext:options:]
then you should draw directly into that context using:
[cictx drawImage:inRect:fromRect:]
This will be much faster than creating a CGImageRef and passing that to a UIImageView.
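For reference, a rough sketch of that approach (my own, not from the original answer; it assumes a GLKView whose context is the same myEAGLContext used to create imageContext, and that filter and imageContext are accessible ivars):

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    CIImage *outputImage = [filter outputImage];
    // drawImage:inRect:fromRect: works in pixels, so scale the point-based
    // view rect up by the screen scale.
    CGFloat scale = view.contentScaleFactor;
    CGRect destRect = CGRectMake(0, 0, rect.size.width * scale, rect.size.height * scale);
    // Draw straight into the EAGL-backed context; no intermediate CGImageRef.
    [imageContext drawImage:outputImage inRect:destRect fromRect:[outputImage extent]];
}

Calling [view setNeedsDisplay] from the slider callback then re-renders each change without ever allocating a CGImageRef or UIImage.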


NSImage drawInRect and NSView cacheDisplayInRect memory retained

The application I am working on processes images: the user drops in at most 4 images and the app lays them out based on the user's selected template. One image might be added 2-3 times in the final view.
Each image in the layout is drawn into an NSView (in NSView's drawRect: method, using drawInRect:). The final image (all images combined in the layout) is then created by saving the NSView as an image, and that all works very well.
The problem I am facing is that memory is retained by the app once all processing is done. I have used the Instruments Allocations tool and I don't see memory leaks, but "Persistent bytes" increase continuously with each session of the app, and one user reported usage in the GBs.
When I investigated further in Instruments, I saw which parts of the app are causing the retention: all of them are related to ImageIO and Core Image.
However, this seems to be a problem only on 10.10 and above. Testing the same version of the app on 10.9.x, memory usage stays within 60MB; during a session it goes up to 200MB, but once the session is done it drops back to the 50-60MB that is usual for this kind of app.
[_photoImage drawInRect: self.bounds fromRect: NSZeroRect operation: NSCompositeSourceOver fraction: 1.0 respectFlipped: YES hints: nil];
_photoImage = nil;
The code above is what I use to draw each image in NSView's drawRect: method; the code that renders the NSView into an image follows the pattern sketched below.
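A typical version of that snapshot code, reconstructed from the standard cacheDisplayInRect: pattern (the original post showed it only in a screenshot, so treat this as an approximation):

// Render the view into a cached bitmap rep, then wrap it in an NSImage.
NSBitmapImageRep *rep = [view bitmapImageRepForCachingDisplayInRect:view.bounds];
[view cacheDisplayInRect:view.bounds toBitmapImageRep:rep];
NSImage *finalImage = [[NSImage alloc] initWithSize:view.bounds.size];
[finalImage addRepresentation:rep];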
Update: After further investigation I found that it's CGImageSourceCreateWithData that is caching the TIFF data of the NSImage. Moreover, I am using the code below to crop the image, and if I comment it out, memory consumption is fine.
NSData *imgData = [imageToCrop TIFFRepresentation];
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)imgData, NULL);
CGImageRef maskRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CGImageRef imageRef = CGImageCreateWithImageInRect(maskRef, rect);
NSImage *cropped = [[NSImage alloc] initWithCGImage: imageRef size:rect.size];
CGImageRelease(maskRef);
CGImageRelease(imageRef);
CFRelease(source);
//CFRelease( options );
imgData = nil;
I have also tried explicitly setting kCGImageSourceShouldCache to false (I assumed it was false by default), but with the same results.
Please help to solve the memory retention issue.
Finally, after lots of debugging, it turned out that CGImageSourceCreateWithData was retaining the TIFF data of the NSImage somewhere. When I changed this line:
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)imgData, NULL);
with
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)[NSURL fileURLWithPath:path], NULL);
everything just started working fine, and the app's memory usage dropped from 300MB (for 6 images) to 50-60MB; that behaviour is consistent now.
Apart from the above change, something was still retaining memory, so to get rid of that I set each layer's image to nil once all processing was done, and that worked like a charm. I was under the impression that setting the parent to nil would release the images as well, but that was not happening.
Anyway, if anyone sees this issue with drawInRect or cacheDisplayInRect, make sure to clear out the image if it is not needed later on.
Update 2nd July 2016
I found that kCGImageSourceShouldCache defaults to false on 32-bit and to true on 64-bit. I was able to release the memory with the code below, by setting it to false explicitly.
// Explicitly disable ImageIO's decode cache (it defaults to true on 64-bit).
const void *keys[] = { kCGImageSourceShouldCache };
const void *values[] = { kCFBooleanFalse };
CFDictionaryRef optionsDictionary = CFDictionaryCreate(NULL, keys, values, 1, NULL, NULL);
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[image TIFFRepresentation], optionsDictionary);
Hope it helps someone.
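Putting both findings together, a crop routine along these lines (my sketch; the image's file path and the crop rect are assumed inputs named path and rect) reads from the file URL with caching disabled, avoiding the retained TIFF data entirely:

// Sketch: crop by reading straight from disk with ImageIO caching disabled.
const void *keys[] = { kCGImageSourceShouldCache };
const void *values[] = { kCFBooleanFalse };
CFDictionaryRef options = CFDictionaryCreate(NULL, keys, values, 1, NULL, NULL);

NSURL *url = [NSURL fileURLWithPath:path];
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, options);
CGImageRef fullImage = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CGImageRef croppedRef = CGImageCreateWithImageInRect(fullImage, rect);
NSImage *cropped = [[NSImage alloc] initWithCGImage:croppedRef size:rect.size];

CGImageRelease(croppedRef);
CGImageRelease(fullImage);
CFRelease(source);
CFRelease(options);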

Getting message in console: "CreateWrappedSurface() failed for a dataprovider-backed CGImageRef."

I updated to Xcode 7 and am getting this (warning?) message while an image is being rendered in an operation:
CreateWrappedSurface() failed for a dataprovider-backed CGImageRef.
There was no message like this under Xcode 6.4.
I tracked down which part of the code triggers the message:
if (!self.originalImage) // @property (nonatomic, strong) UIImage *originalImage;
    return;
CGImageRef originalCGImage = self.originalImage.CGImage;
NSAssert(originalCGImage, @"Cannot get CGImage from original image");
CIImage *inputCoreImage = [CIImage imageWithCGImage:originalCGImage]; // this line produces the console message
I replaced my CIImage creation to get it directly from the UIImage:
CIImage *originalCIImage = self.originalImage.CIImage;
NSAssert(originalCIImage, @"Cannot build CIImage from original image");
In this case I didn't get any console message, but had an assert: originalCIImage was nil.
The class reference of UIImage says:
@property(nonatomic, readonly) CIImage *CIImage
If the UIImage object was initialized using a CGImageRef, the value of the property is nil.
So I'm using the original code as fallback:
CIImage *originalCIImage = self.originalImage.CIImage;
if (!originalCIImage) {
    CGImageRef originalCGImageRef = self.originalImage.CGImage;
    NSAssert(originalCGImageRef, @"Unable to get CGImageRef of originalImage");
    originalCIImage = [CIImage imageWithCGImage:originalCGImageRef];
}
NSAssert(originalCIImage, @"Cannot build CIImage from original image");
The problem is, I'm still getting the warning messages in console.
Has anybody got this message before? What's the solution to nuke that warning(?) message?
Thanks,
Adam
Finally figured out the answer. Curious about the error, I studied up on how CIImage works (https://uncorkedstudios.com/blog/image-filters-with-core-graphics).
I noticed that the CGImageRef backing a CIImage is dataprovider-backed with premultiplied values (RGB and A), while the CGImage I was loading into a CIImage (via [CIImage imageWithCGImage:originalCGImage]) was only RGB, not RGBA. Sure enough, I was creating that image by taking a snapshot of a view with the standard UIGraphicsBeginImageContextWithOptions, and I had the opaque parameter set to YES.
I simply changed:
UIGraphicsBeginImageContextWithOptions(bounds, YES, 1.0f);
to
UIGraphicsBeginImageContextWithOptions(bounds, NO, 1.0f);
so that I am now creating an RGBA image, not an RGB image.
Now when I convert my CGImage to a CIImage, the CIImage has a proper dataprovider backing and the error goes away.
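As a quick diagnostic (my own snippet, not from the original answer), you can inspect the snapshot's alpha info to confirm which kind of bitmap you have:

// Opaque snapshots typically report kCGImageAlphaNoneSkipFirst/Last;
// snapshots taken with opaque == NO carry premultiplied alpha.
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(self.originalImage.CGImage);
BOOL hasAlpha = (alphaInfo == kCGImageAlphaPremultipliedFirst ||
                 alphaInfo == kCGImageAlphaPremultipliedLast ||
                 alphaInfo == kCGImageAlphaFirst ||
                 alphaInfo == kCGImageAlphaLast);
NSLog(@"snapshot has alpha: %@", hasAlpha ? @"YES" : @"NO");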
NOTE:
I was using a CIClamp filter for Gaussian-blur purposes, and with opaque set to NO the clamp doesn't work as effectively. I decided to just keep opaque at YES and ignore the log warnings; they don't seem to actually do anything.

Draw Image to PDF

I'd like to draw a full-page image into a PDF, but I've always had a hard time wrapping my head around CGContextRefs, so I don't know what to make of the error.
Note, this is NOT iOS. I'm making a desktop application.
So far, I have this:
-(void) addImage:(NSURL*) url toPage:(size_t) page toPDF:(CGPDFDocumentRef) pdf
{
    NSImage *image = [[NSImage alloc] initWithContentsOfURL:url];
    image = [image imageScaledToFitSize:pageSize.size]; // from Matt Gemmell's Crop extensions category
    [image lockFocus];
    CGContextRef context = [[NSGraphicsContext currentContext] graphicsPort];
    [image drawInRect:pageSize];
    [image unlockFocus];
    CGPDFPageRef pageRef = CGPDFDocumentGetPage(pdf, page);
    CGPDFContextBeginPage(context, pageInformation);
    CGContextDrawPDFPage(context, pageRef);
    CGPDFContextEndPage(context);
}
However, I'm greeted with the error:
CGPDFContextEndPage: invalid context 0x61000017b540. This is a serious error. This application, or a library it uses, is using an invalid context and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update.
What is wrong with my contexts please?
You need to create and use a CGPDFContext. Contexts are specific to particular rendering processes and destinations, so you need to choose the correct one: look at CGPDFContextCreateWithURL to create a PDF context that writes its data to a file.
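A rough sketch of that approach (the method name, page size, and scaling choice are my assumptions, not part of the original answer):

- (void)writeImage:(NSImage *)image toPDFAtURL:(NSURL *)url
{
    // US Letter media box, in PDF points.
    CGRect mediaBox = CGRectMake(0, 0, 612, 792);
    CGContextRef pdfContext = CGPDFContextCreateWithURL((__bridge CFURLRef)url, &mediaBox, NULL);
    if (!pdfContext)
        return;

    CGImageRef cgImage = [image CGImageForProposedRect:NULL context:nil hints:nil];
    CGPDFContextBeginPage(pdfContext, NULL);
    CGContextDrawImage(pdfContext, mediaBox, cgImage); // stretches the image to fill the page
    CGPDFContextEndPage(pdfContext);

    CGPDFContextClose(pdfContext);
    CGContextRelease(pdfContext);
}

Unlike the code in the question, every call here targets the PDF context itself, so there is no mismatch between the context and the destination.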

Only loading UIImage data used

I have 400 pattern images at 400x300 bundled with my app. I would like to write some kind of factory method that takes a portion of one of those images and loads it into a UIImageView. I've had some success with content mode and clipping to bounds, but when I load a ton of these into a view it can take upwards of 5 seconds for the view to load. Here is an example of my current method:
UIImageView *tinyImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"400x300testImage.png"]];
[tinyImageView setFrame:CGRectMake(0, 0, 10, 200)];
[tinyImageView setContentMode:UIViewContentModeTopLeft];
[tinyImageView setClipsToBounds:YES];
[self.tinyImagesView addSubview:tinyImageView];
I've been reading the ImageIO headers and I think my answer is in there, but I'm having a hard time putting together workable code. In another Stack Overflow question I came across this code:
// imageSource is assumed to have been created earlier, e.g. with CGImageSourceCreateWithURL().
CFDictionaryRef options = (__bridge CFDictionaryRef)[NSDictionary dictionaryWithObjectsAndKeys:
    (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
    (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageIfAbsent,
    (id)[NSNumber numberWithFloat:200.0f], (id)kCGImageSourceThumbnailMaxPixelSize,
    nil];
CGImageRef imgRef = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options);
UIImage *scaled = [UIImage imageWithCGImage:imgRef];
CGImageRelease(imgRef);
CFRelease(imageSource);
return scaled;
This has a load time similar to loading the full images and clipping.
Is it possible to read in only a 10x200 strip of an image file and load that into a UIImageView as fast as creating a dedicated 10x200 PNG and loading it with imageNamed:?
I'm pretty sure what you really want is a CATiledLayer, where you can point it at the set of images and have it automatically pull in what it needs.
You can back any UIView with a CATiledLayer, as sketched below.
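A minimal sketch of that (the PatternView class and its drawing are my illustration, not from the original answer): back the view with a CATiledLayer, and UIKit calls drawRect: once per visible tile with the context clipped to that tile, so only the strips actually on screen get rasterized.

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface PatternView : UIView
@end

@implementation PatternView
+ (Class)layerClass
{
    // Back this view with a tiled layer instead of a plain CALayer.
    return [CATiledLayer class];
}

- (void)drawRect:(CGRect)rect
{
    // Called once per tile; the context is clipped to `rect`, so drawing
    // the whole image only rasterizes the visible strip.
    [[UIImage imageNamed:@"400x300testImage.png"] drawInRect:self.bounds];
}
@end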

Using drawAtPoint with my CIImage not doing anything on screen

Stuck again. :(
I have the following code crammed into a procedure invoked when I click a button on my application's main window. I'm just trying to tweak a CIImage and then display the result. At this point I'm not even worried about exactly where or how to display it; I'm just trying to slam it up on the window to make sure my transform worked. The code seems to run down through the drawAtPoint: message, but I never see anything on the screen. What's wrong? Thanks.
Also, as far as displaying it in a particular location on the window: is the best technique to put a frame of some sort on the window, then get the coordinates of that frame and "draw into" that rectangle? Or use a specific control from IB? Or what? Thanks again.
// Earlier I initialize an NSImage from a JPG file on disk,
// then create an NSBitmapImageRep from the NSImage. This all works fine.
// Then ...
CIImage *inputCIimage = [[CIImage alloc] initWithBitmapImageRep:inputBitmap];
if (inputCIimage == nil)
    NSLog(@"could not create CI Image");
else {
    NSLog(@"CI Image created. working on transform");
    CIFilter *transform = [CIFilter filterWithName:@"CIAffineTransform"];
    [transform setDefaults];
    [transform setValue:inputCIimage forKey:@"inputImage"];
    NSAffineTransform *affineTransform = [NSAffineTransform transform];
    [affineTransform rotateByDegrees:3];
    [transform setValue:affineTransform forKey:@"inputTransform"];
    CIImage *myResult = [transform valueForKey:@"outputImage"];
    if (myResult == nil)
        NSLog(@"Transformation failed");
    else {
        NSLog(@"Created transformation successfully ... now render it");
        [myResult drawAtPoint:NSMakePoint(0, 0)
                     fromRect:NSMakeRect(0, 0, 128, 128)
                    operation:NSCompositeSourceOver
                     fraction:1.0]; // 100% opaque
        [inputCIimage release];
    }
}
Edit #1:
snip - removed the prior code sample mentioned below (in the comments about drawRect:), which did not work.
Edit #2: adding some code that DOES work, for anyone else who might get stuck on the same thing in the future. I'm not sure this is the BEST way to do it, but it works for my quick-and-dirty purposes. The code below replaces the entire [myResult drawAtPoint:...] message from my initial question; it takes the image created by the CIImage transform and displays it in an NSImageView control.
NSImage *outputImage;
NSCIImageRep *ir;
ir = [NSCIImageRep imageRepWithCIImage:myResult];
outputImage = [[[NSImage alloc] initWithSize: NSMakeSize(inputImage.size.width, inputImage.size.height)] autorelease];
[outputImage addRepresentation:ir];
[outputImageView setImage: outputImage]; //outputImageView is an NSImageView control on my application's main window
Drawing on screen in Cocoa normally takes place inside an -[NSView drawRect:] override. I take it you're not doing that, so you don't have a correctly set up graphics context.
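A minimal sketch of that (FilteredImageView is an illustrative class name, not from the original answer): move the drawing into an NSView subclass's drawRect:, where AppKit has already set up the current graphics context.

@interface FilteredImageView : NSView
@property (nonatomic, retain) CIImage *filteredImage;
@end

@implementation FilteredImageView
- (void)drawRect:(NSRect)dirtyRect
{
    if (self.filteredImage) {
        // Inside drawRect: there is a valid current NSGraphicsContext,
        // so CIImage's drawing methods actually reach the screen.
        [self.filteredImage drawInRect:self.bounds
                              fromRect:[self.filteredImage extent]
                             operation:NSCompositeSourceOver
                              fraction:1.0];
    }
}
@end

Setting filteredImage and calling setNeedsDisplay:YES would then trigger the draw.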
So one solution to this problem is to create an NSCIImageRep from the CIImage, then add that representation to a new NSImage; then it is easy to display the NSImage in a variety of ways. I've added the code I used above (see "Edit #2"), where I display the output image in an NSImageView control. Man ... what a PITA this was!