Edit: I got a downvote, and I just wanted to make sure it was because I didn't ask on the dev forums (which I did, https://forums.developer.apple.com/thread/11593, but haven't gotten any response). This is a pretty big issue for us, so I figured it'd be best to cast a wide net; maybe somebody can help.
We have an app that works in 10.10, but has this issue in 10.11.
In -drawRect: we draw an NSImage into an NSGraphicsContext; the image is properly set in both OS versions.
But in 10.11 this NSImage doesn't get drawn.
I'm pretty novice, but I have been debugging for quite a while to get where I am now, and I'm just plain stuck. I was wondering if anybody has run into this before, or has any idea why this could be.
Here is the pertinent code:
(the layer object is a CGLayerRef that is passed into this method from the -drawRect: method)
Here is how the layer is instantiated:
NSRect scaledRect = [Helpers scaleRect:rect byScale:[self backingScaleFactorRelativeToZoom]];
CGSize size = NSSizeToCGSize(rect.size);
size_t width = size.width;
size_t height = size.height;
size_t bitsPerComponent = 8;
size_t bytesPerRow = (width * 4 + 0x0000000F) & ~0x0000000F; // 16 byte aligned is good
size_t dataSize = bytesPerRow * height;
void* data = calloc(1, dataSize);
CGColorSpaceRef colorspace = [[[_imageController document] captureColorSpace] CGColorSpace];
CGContextRef bitmapContext = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorspace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
CGLayerRef canvasLayer = CGLayerCreateWithContext(bitmapContext, scaledRect.size, NULL);
Here is the method that draws the image:
CGContextRef mainLayerContext = CGLayerGetContext(layer);
NSRect scaledBounds = [contextInfo[kContextInfoBounds] rectValue];
if( !_flatCache )
{
_flatCache = CGLayerCreateWithContext(mainLayerContext, scaledBounds.size, NULL);
CGContextRef flatCacheCtx = CGLayerGetContext(_flatCache);
CGLayerRef tempLayer = CGLayerCreateWithContext(flatCacheCtx, scaledBounds.size, NULL);
CIImage *tempImage = [CIImage imageWithCGLayer:tempLayer];
NSLog(#"%#",tempImage);
CGContextRef tempLayerCtx = CGLayerGetContext(tempLayer);
CGContextTranslateCTM(tempLayerCtx, -scaledBounds.origin.x, -scaledBounds.origin.y);
[NSGraphicsContext saveGraphicsState];
NSGraphicsContext* newContext = [NSGraphicsContext graphicsContextWithGraphicsPort:tempLayerCtx flipped:NO];
[NSGraphicsContext setCurrentContext:newContext];
if ( [_imageController background] )
{
NSRect bgRect = { [_imageController backgroundPosition], [_imageController backgroundSize] };
bgRect = [Helpers scaleRect:bgRect byScale:[self backingScaleFactorRelativeToZoom]];
[[_imageController background] drawInRect:bgRect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
}
CIImage *tempImage2 = [CIImage imageWithCGLayer:tempLayer];
NSLog(#"%#",tempImage2);
In 10.10 and 10.11, tempImage is an empty image w/ the correct size.
In 10.10, tempImage2 now has [_imageController background] properly drawn.
In 10.11, tempImage2 is the same as tempImage: a blank image w/ the correct size.
Unfortunately, the person who originally wrote this code is gone now, and I'm too novice to dig any lower w/o finding a book and reading it.
bgRect is not the issue; I've already tried modifying that. I have also messed with the translate arguments, but still couldn't learn anything.
Does anybody know how else I could debug this to find the issue? Or better yet, has anybody seen this issue and know what my problem is?
Just needed to add this line:
//`bounds` calculated from the size of the image
//mainLayerContext and flatCache defined above in question ^^
CGContextDrawLayerInRect(mainLayerContext, bounds, _flatCache);
at the very end, so the flat cache actually gets composited back into the main layer's context.
Related
I have an app that renders into a UIView's CGContext in drawRect. I also export those renderings using a background renderer. It uses the same rendering logic to render (in faster than real time) into a CGBitmapContext (which I subsequently transform into an mp4 file).
I have noticed that the output video has a number of weird glitches, such as the image being rotated, strange duplications of the rendered images, random noise, and odd timing.
I'm looking for ways to debug this. For the timing issue, I thought I'd render a string that tells me which frame I'm currently viewing, only to find that rendering text into a CGContext is not very well documented. In fact, the documentation around much of Core Graphics is quite unforgiving to someone of my experience.
So specifically, I'd like to know how to render text into a context. If it's Core Text, must it interoperate somehow with the Core Graphics context? And in general, I'd appreciate any tips and advice on doing bitmap rendering and debugging the results.
According to another question:
How to convert Text to Image in Cocoa Objective-C
we can use CTLineDraw to draw text into a CGBitmapContext.
sample code:
NSString* string = @"terry.wang";
CGFloat fontSize = 10.0f;
// Create an attributed string with string and font information
CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica Light"), fontSize, nil);
NSDictionary* attributes = [NSDictionary dictionaryWithObjectsAndKeys:
(id)font, kCTFontAttributeName,
nil];
NSAttributedString* as = [[NSAttributedString alloc] initWithString:string attributes:attributes];
CFRelease(font);
// Figure out how big an image we need
CTLineRef line = CTLineCreateWithAttributedString((CFAttributedStringRef)as);
CGFloat ascent, descent, leading;
double fWidth = CTLineGetTypographicBounds(line, &ascent, &descent, &leading);
// On iOS 4.0 and Mac OS X v10.6 you can pass null for data
size_t width = (size_t)ceilf(fWidth);
size_t height = (size_t)ceilf(ascent + descent);
void* data = malloc(width*height*4);
// Create the context and fill it with white background
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8, width*4, space, bitmapInfo);
CGColorSpaceRelease(space);
CGContextSetRGBFillColor(ctx, 1.0, 1.0, 1.0, 1.0); // white background
CGContextFillRect(ctx, CGRectMake(0.0, 0.0, width, height));
// Draw the text
CGFloat x = 0.0;
CGFloat y = descent;
CGContextSetTextPosition(ctx, x, y);
CTLineDraw(line, ctx);
CFRelease(line);
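To actually get a picture out of the bitmap context afterwards, here is a minimal follow-up sketch (ctx and data are the variables from the sample above; UIImage is assumed since the question concerns a UIView-based renderer):
// Snapshot the bitmap context into a CGImage
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
UIImage* textImage = [UIImage imageWithCGImage:cgImage];
// Clean up the Core Graphics objects we own
CGImageRelease(cgImage);
CGContextRelease(ctx);
// If you passed NULL for data when creating the context (iOS 4.0+ / 10.6+),
// skip the free; otherwise free the buffer once you are done with the images.
free(data);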
I really need your help today! I'm debugging an old Objective-C app created by another dev, and there's a new bug that only appears on iOS 11.
The bug comes from an image-processing function used when trying to create a "Scratch View", similar to this one -> https://github.com/joehour/ScratchCard
But since iOS 11 the function doesn't work anymore; in the code below I get the error [Unknown process name] CGImageMaskCreate: invalid image provider: NULL. <-- the variable CGDataProviderRef dataProvider is not created (null)
// Method to change the view which will be scratched
- (void)setHideView:(UIView *)hideView
{
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
UIGraphicsBeginImageContextWithOptions(hideView.bounds.size, NO, 0);
[hideView.layer renderInContext:UIGraphicsGetCurrentContext()];
hideView.layer.contentsScale = scale;
_hideImage = UIGraphicsGetImageFromCurrentImageContext().CGImage;
UIGraphicsEndImageContext();
size_t imageWidth = CGImageGetWidth(_hideImage);
size_t imageHeight = CGImageGetHeight(_hideImage);
CFMutableDataRef pixels = CFDataCreateMutable(NULL, imageWidth * imageHeight);
_contextMask = CGBitmapContextCreate(CFDataGetMutableBytePtr(pixels), imageWidth, imageHeight, 8, imageWidth, colorspace, kCGImageAlphaNone);
CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData(pixels);
CFRelease(pixels);
CGContextSetFillColorWithColor(_contextMask, [UIColor blackColor].CGColor);
CGContextFillRect(_contextMask, self.frame);
CGContextSetStrokeColorWithColor(_contextMask, [UIColor whiteColor].CGColor);
CGContextSetLineWidth(_contextMask, _sizeBrush);
CGContextSetLineCap(_contextMask, kCGLineCapRound);
CGImageRef mask = CGImageMaskCreate(imageWidth, imageHeight, 8, 8, imageWidth, dataProvider, nil, NO);
_scratchImage = CGImageCreateWithMask(_hideImage, mask);
CGDataProviderRelease(dataProvider);
CGImageRelease(mask);
CGColorSpaceRelease(colorspace);
}
I'm not an expert in this kind of image processing, and I'm really lost debugging this part...
Does anyone know why this function doesn't work anymore on iOS 11?
Thanks for your help!
iOS 11 stopped handling CGDataProviderCreateWithCFData with zero-length data, therefore you will need to set the length of pixels explicitly.
Something like:
...
CFMutableDataRef pixels = CFDataCreateMutable(NULL, imageWidth * imageHeight);
CFDataSetLength(pixels, imageWidth * imageHeight); // this is the line you're missing for iOS11+
_contextMask = ...
...
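Put together, the buffer setup in -setHideView: would look something like this (a sketch; only the mask-buffer allocation changes, the rest of the method stays as it was):
// Allocate the mask buffer and give it an explicit length up front;
// on iOS 11, CGDataProviderCreateWithCFData rejects empty data.
CFMutableDataRef pixels = CFDataCreateMutable(NULL, imageWidth * imageHeight);
CFDataSetLength(pixels, imageWidth * imageHeight);
_contextMask = CGBitmapContextCreate(CFDataGetMutableBytePtr(pixels), imageWidth, imageHeight, 8, imageWidth, colorspace, kCGImageAlphaNone);
CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData(pixels);
CFRelease(pixels); // the provider retains the data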
I have an image editing application, that has been working through 10.10, but in 10.11 a bug came up
When I view a CIImage created w/ -imageWithCGLayer, it shows as an empty image (of the correct size) only in 10.11
CGSize size = NSSizeToCGSize(rect.size);
size_t width = size.width;
size_t height = size.height;
size_t bitsPerComponent = 8;
size_t bytesPerRow = (width * 4+ 0x0000000F) & ~0x0000000F; // 16 byte aligned is good
size_t dataSize = bytesPerRow * height;
void* data = calloc(1, dataSize);
CGColorSpaceRef colorspace = [[[_imageController document] captureColorSpace] CGColorSpace];
CGContextRef bitmapContext = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorspace, kCGImageAlphaNone | kCGBitmapByteOrder32Host);
CGLayerRef canvasLayer = CGLayerCreateWithContext(bitmapContext, scaledRect.size, NULL);
[self drawCanvasInLayer:canvasLayer inRect:scaledRect];
CIImage *test = [CIImage imageWithCGLayer:canvasLayer];
NSLog(#"%#",test);
So when I view CIImage *test on 10.10, it looks precisely as I want it. On 10.11 it is a blank image of the same size.
I tried looking at the API diffs for CGLayer & CIImage, but the documentation is too dense for me. Has anybody else stumbled across this issue? I imagine it must be something w/ the initialization of the CGContextRef, because everything else in the code is size-related.
That particular API was deprecated some time ago and completely removed in macOS 10.11. So your results are expected.
Since you already have a bitmapContext, modify your -drawCanvasInLayer: method to draw directly into the bitmap, then create the image from the bitmap context like this:
CGImageRef tmpCGImage = CGBitmapContextCreateImage( bitmapContext );
CIImage* myCIImage = [[CIImage alloc] initWithCGImage: tmpCGImage];
Remember to do CGImageRelease( tmpCGImage ) after you are done with your CIImage.
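For context, a minimal sketch of the full replacement flow; -drawCanvasInContext:inRect: is a hypothetical adaptation of the existing -drawCanvasInLayer:inRect: that takes a plain CGContextRef:
// Draw straight into the bitmap context instead of a CGLayer
[self drawCanvasInContext:bitmapContext inRect:scaledRect]; // hypothetical helper
// Snapshot the bitmap and wrap it in a CIImage
CGImageRef tmpCGImage = CGBitmapContextCreateImage(bitmapContext);
CIImage* myCIImage = [[CIImage alloc] initWithCGImage:tmpCGImage];
CGImageRelease(tmpCGImage); // the CIImage keeps its own reference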
I recently solved this very problem and posted a sample objective-C project to work around the loss of this API.
See http://www.blinddogsoftware.com/goodies/#DontSpillTheBits
Also, don't forget to read the header file where that API is declared. There is very often extremely useful information there (in Xcode, Command+click on the specific API)
I've used this snippet of code for years in my apps without fail. It cuts out a set of pieces from a CGImageRef I pass in and a black and white mask.
I am attempting to migrate the code into an environment where the layout will be based on autolayout and constraints.
It worked perfectly with autolayout until I tried running it on an iPad 3, iPhone 5, or 4s.
After much research I believe that it is crashing due to memory alignment issues with this error.
EXC_BAD_ACCESS (code=EXC_ARM_DALIGN, address=0x5baa20)
I believe I need to adjust the bits/bytes per component somehow to ensure the results always line up in memory, to avoid angering the ARMv7 gods who rule over the problem devices.
I found some examples on the web but I have not successfully adapted them to this situation.
I have an alternate path where I can hard-code my sizing that doesn't cause the crash, but I'd have to abandon days' worth of work on autolayout for hard coding.
Help please
- (UIImage*) maskImage2:(CGImageRef)image withMask:(UIImage *)maskImage withFrame:(CGRect)currentFrame {
@autoreleasepool {
CGFloat scale = [self adjustScale];
CGRect scaledRect = CGRectMake(currentFrame.origin.x*scale, currentFrame.origin.y*scale, maskImage.size.width*scale, maskImage.size.height*scale);
//Rect scaled up to account for iPad 3 sizing
NSLog( #"%# origin", NSStringFromCGRect(scaledRect));
CGImageRef tempImage = CGImageCreateWithImageInRect(image, scaledRect);// Cut out the image at the size of the pieces
CGImageRef maskRef = maskImage.CGImage; //Creates the mask to cut out the fine edge and add backing layer
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
CGImageGetHeight(maskRef),
CGImageGetBitsPerComponent(maskRef),
CGImageGetBitsPerPixel(maskRef),
CGImageGetBytesPerRow(maskRef),
CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef colorsHighlights = CGImageCreateWithMask(tempImage, mask); // Colors with transparent surround
CFRelease(tempImage);
CGSize tempsize = CGSizeMake(maskImage.size.width, maskImage.size.height);
CFRelease(mask);
UIGraphicsBeginImageContextWithOptions(CGSizeMake(tempsize.width+5, tempsize.height+5), NO, scale);
CGContextRef context = UIGraphicsGetCurrentContext();
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
CGContextSetShadow(context, CGSizeMake(1, 2), 2);
}else{
CGContextSetShadow(context, CGSizeMake(.5, .5), 1);
}
[[UIImage imageWithCGImage:colorsHighlights scale:scale orientation:UIImageOrientationUp] drawInRect:CGRectMake(0, 0, tempsize.width, tempsize.height)]; // <<<< Crash is here
CFRelease(colorsHighlights);
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return finalImage;
}
}
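No definitive fix here, but one workaround worth trying for the alignment theory: redraw the mask into a greyscale bitmap context whose bytesPerRow you control (16-byte aligned), and build the mask from that instead of from maskRef's raw provider. A hedged sketch (CreateAlignedMask is a made-up helper name):
static CGImageRef CreateAlignedMask(CGImageRef maskRef) {
    size_t width = CGImageGetWidth(maskRef);
    size_t height = CGImageGetHeight(maskRef);
    // One grey byte per pixel, with the row stride rounded up to 16 bytes
    size_t bytesPerRow = (width + 0x0000000F) & ~0x0000000F;
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow, gray, kCGImageAlphaNone);
    CGColorSpaceRelease(gray);
    if (!ctx) return NULL;
    // Redraw the mask so its backing buffer has a known, aligned layout
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), maskRef);
    CGImageRef grayImage = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(grayImage),
                                        CGImageGetHeight(grayImage),
                                        CGImageGetBitsPerComponent(grayImage),
                                        CGImageGetBitsPerPixel(grayImage),
                                        CGImageGetBytesPerRow(grayImage),
                                        CGImageGetDataProvider(grayImage), NULL, false);
    CGImageRelease(grayImage);
    return mask;
}
You would then replace the CGImageMaskCreate(...) call in maskImage2 with CGImageRef mask = CreateAlignedMask(maskRef);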
Is there an easy way to get a two-dimensional array or something similar that represents the pixel data of an image?
I have black & white PNG images, and I simply want to read the color value at a certain coordinate, for example the color value at (20, 100).
This category on UIImage might be helpful (Source):
#import <CoreGraphics/CoreGraphics.h>
#import "UIImage+ColorAtPixel.h"
@implementation UIImage (ColorAtPixel)
- (UIColor *)colorAtPixel:(CGPoint)point {
// Cancel if point is outside image coordinates
if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, self.size.width, self.size.height), point)) {
return nil;
}
// Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
// Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
NSInteger pointX = trunc(point.x);
NSInteger pointY = trunc(point.y);
CGImageRef cgImage = self.CGImage;
NSUInteger width = CGImageGetWidth(cgImage);
NSUInteger height = CGImageGetHeight(cgImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
int bytesPerPixel = 4;
int bytesPerRow = bytesPerPixel * 1;
NSUInteger bitsPerComponent = 8;
unsigned char pixelData[4] = { 0, 0, 0, 0 };
CGContextRef context = CGBitmapContextCreate(pixelData,
1,
1,
bitsPerComponent,
bytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextSetBlendMode(context, kCGBlendModeCopy);
// Draw the pixel we are interested in onto the bitmap context
CGContextTranslateCTM(context, -pointX, -pointY);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
CGContextRelease(context);
// Convert color values [0..255] to floats [0.0..1.0]
CGFloat red = (CGFloat)pixelData[0] / 255.0f;
CGFloat green = (CGFloat)pixelData[1] / 255.0f;
CGFloat blue = (CGFloat)pixelData[2] / 255.0f;
CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
@end
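A quick usage sketch (the asset name is a placeholder):
UIImage *image = [UIImage imageNamed:@"blackAndWhiteMask"]; // placeholder name
UIColor *color = [image colorAtPixel:CGPointMake(20, 100)];
CGFloat red, green, blue, alpha;
[color getRed:&red green:&green blue:&blue alpha:&alpha];
NSLog(@"r=%.2f g=%.2f b=%.2f a=%.2f", red, green, blue, alpha);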
You could put the PNG into an image view, and then use this method to get the pixel value from a graphics context that you draw the image into.
A class to do it for you, and explained too:
http://www.markj.net/iphone-uiimage-pixel-color/
The direct approach is slightly tedious, but here goes:
Get the CoreGraphics image.
CGImageRef cgImage = image.CGImage;
Get the "data provider", and from that get the data.
NSData * d = [(id)CGDataProviderCopyData(CGImageGetDataProvider(cgImage)) autorelease];
Figure out what format the data is in.
CGImageGetBitmapInfo();
CGImageGetBitsPerComponent();
CGImageGetBitsPerPixel();
CGImageGetBytesPerRow();
Figure out the colour space (PNG supports greyscale/RGB/paletted).
CGImageGetColorSpace()
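Put together, a sketch of the direct approach for the simplest case, an 8-bit greyscale image with no alpha (real code should branch on the format info above):
CGImageRef cgImage = image.CGImage;
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
// Only valid when bits-per-pixel is 8 (one grey component, no alpha)
if (CGImageGetBitsPerPixel(cgImage) == 8) {
    UInt8 grey = bytes[100 * bytesPerRow + 20]; // pixel at (20, 100)
    NSLog(@"brightness at (20, 100): %u", grey);
}
CFRelease(pixelData);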
The indirect approach is to draw the image to a context (note that you may need to specify the context's byte order if you want any guarantees) and read the bytes out.
If you only want single pixels, it might be faster to draw the image to a 1x1 context with the right rect
(something like (CGRect){{-x,-y},{imgWidth,imgHeight}}).
This will handle colour-space conversion for you. If you just want a brightness value, use a greyscale context.
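For example, a sketch of the 1x1 greyscale variant for reading a single brightness value (coordinates follow the rect given above; mind CG's flipped coordinates if you are coming from UIKit):
CGImageRef cgImage = image.CGImage;
size_t imgWidth = CGImageGetWidth(cgImage);
size_t imgHeight = CGImageGetHeight(cgImage);
UInt8 brightness = 0;
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
// 1x1 greyscale context; the draw handles any colour-space conversion
CGContextRef ctx = CGBitmapContextCreate(&brightness, 1, 1, 8, 1, gray, kCGImageAlphaNone);
CGColorSpaceRelease(gray);
// Offset the image so the pixel at (x, y) = (20, 100) lands on the 1x1 canvas
CGContextDrawImage(ctx, CGRectMake(-20, -100, imgWidth, imgHeight), cgImage);
CGContextRelease(ctx);
NSLog(@"brightness at (20, 100): %u", brightness);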