How to create a CGImageRef from an NSBitmapImageRep? - objective-c

How can I create a CGImageRef from an NSBitmapImageRep?
Or how can I define a completely new CGImageRef with the same properties as the NSBitmapImageRep? Defining the NSBitmapImageRep works fine, but I need the image as a CGImageRef.
unsigned char *plane = (unsigned char *)[data bytes]; // data = 3 bytes for each RGB pixel
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:&plane
                  pixelsWide:width
                  pixelsHigh:height
               bitsPerSample:depth
             samplesPerPixel:channel
                    hasAlpha:NO
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
              //bitmapFormat:NSAlphaFirstBitmapFormat
                 bytesPerRow:channel * width
                bitsPerPixel:channel * depth];
I have no idea how to create the CGImageRef from the NSBitmapImageRep, or how to define a new CGImageRef directly:
CGImageRef imageRef = CGImageCreate(width, height, depth, channel*depth, channel*width, CGColorSpaceCreateDeviceRGB(), ... );
Please, can somebody give me a hint?

The easy way is by using the CGImage property (introduced in 10.5):
CGImageRef image = imageRep.CGImage;
Documentation:
https://developer.apple.com/library/mac/documentation/Cocoa/Reference/ApplicationKit/Classes/NSBitmapImageRep_Class/index.html#//apple_ref/occ/instm/NSBitmapImageRep/CGImage
Return Value
Returns an autoreleased CGImageRef opaque type based on the receiver's current bitmap data.
Discussion
The returned CGImageRef has pixel dimensions that are identical to the receiver's. This method might return a preexisting CGImageRef opaque type or create a new one. If the receiver is later modified, subsequent invocations of this method might return different CGImageRef opaque types.
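Per those docs, you don't own the returned object. A minimal sketch of what that implies: if the CGImage must outlive the image rep (or survive later modifications to it), retain your own reference:
CGImageRef image = CGImageRetain(imageRep.CGImage);
// ... use image, even after imageRep is modified or deallocated ...
CGImageRelease(image);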

From your code snippet, it seems you're starting with an NSData object. So, your question seems to be how to create a CGImage from a data object. In that case, there's no reason to go through NSBitmapImageRep.
You were almost there with the call to CGImageCreate(). You just needed to figure out how to supply a CGDataProvider to it. You can create a CGDataProvider from an NSData pretty directly, once you realize that NSData is toll-free bridged with CFData. So:
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGColorSpaceRef colorspace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGImageRef image = CGImageCreate(width, height, depth, channel * depth, channel * width, colorspace, kCGImageAlphaNone, provider, NULL, TRUE, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorspace);
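For reference, here are the same calls wrapped as a self-contained helper (a sketch; the function name is mine, and depth/channel follow the question's variables: bits per sample and samples per pixel):
// Sketch: build a CGImage from tightly packed RGB bytes (no alpha).
// Assumes depth = bits per sample (e.g. 8) and channel = samples per
// pixel (e.g. 3), as in the question. Caller must CGImageRelease().
static CGImageRef CreateRGBImageFromData(NSData *data, size_t width, size_t height,
                                         size_t depth, size_t channel)
{
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGColorSpaceRef colorspace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CGImageRef image = CGImageCreate(width, height, depth, channel * depth,
                                     channel * width, colorspace, kCGImageAlphaNone,
                                     provider, NULL, TRUE, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorspace);
    return image;
}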

Related

CGImageRef gets corrupted after being returned from a method

My code creates a TIFFRepresentation of an image and I want to recode it to something different. That part is not problematic.
My ImgUtils function is:
+ (CGImageRef)processImageData:(NSData *)rep {
    NSBitmapImageRep *bitmapRep = [NSBitmapImageRep imageRepWithData:rep];
    int width = bitmapRep.size.width;
    int height = bitmapRep.size.height;
    size_t pixels_size = width * height;
    Byte raw_bytes[pixels_size * 3];
    //
    // processing, creates and stores raw byte stream
    //
    int bitsPerComponent = 8;
    int bytesPerPixel = 3;
    int bitsPerPixel = bytesPerPixel * bitsPerComponent;
    int bytesPerRow = bytesPerPixel * width;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                              raw_bytes,
                                                              pixels_size * bytesPerPixel,
                                                              NULL);
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(width,
                                        height,
                                        bitsPerComponent,
                                        bitsPerPixel,
                                        bytesPerRow,
                                        colorSpaceRef,
                                        bitmapInfo,
                                        provider,
                                        NULL,
                                        NO,
                                        renderingIntent);
    [ImgUtils saveToPng:imageRef withSuffix:@"-ok"];
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    return imageRef;
}
There is another method, that saves a CGImageRef to filesystem.
+ (BOOL)saveToPng:(CGImageRef)imageRef withSuffix:(NSString *)suffix {
    CFURLRef url = (__bridge CFURLRef)[NSURL fileURLWithPath:[NSString stringWithFormat:@"~/Downloads/pic%@.png", suffix]];
    CGImageDestinationRef destination = CGImageDestinationCreateWithURL(url, kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(destination, imageRef, nil);
    CGImageDestinationFinalize(destination);
    CFRelease(destination);
    return YES;
}
As you can see, immediately after processing the image, I save it to disk as pic-ok.png.
Here is the code, that calls the processing function:
CGImageRef cgImage = [ImgUtils processImageData:imageRep];
[ImgUtils saveToPng:cgImage withSuffix:@"-bad"];
The problem is that the two images differ; the second one, with the -bad suffix, is corrupted.
It seems the memory area the CGImageRef points to is released and overwritten immediately after the method returns.
I also tried return CGImageCreateCopy(imageRef);, but it changed nothing.
What am I missing?
CGDataProviderCreateWithData() does not copy the buffer you provide. Its purpose is to allow creation of a data provider that accesses that buffer directly.
Your buffer is created on the stack. It goes invalid after +processImageData: returns. However, the CGImage still refers to the provider and the provider still refers to the now-invalid buffer.
One solution would be to create the buffer on the heap and provide a callback via the releaseData parameter that frees it. Another would be to create a CFData from the buffer (which copies it) and then create the data provider using CGDataProviderCreateWithCFData(). Probably the best would be to create a CFMutableData of the desired capacity, set its length to match, and use its storage (CFDataGetMutableBytePtr()) as your buffer from the beginning. That's heap-allocated, memory-managed, and doesn't require any copying.
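A sketch of that last approach, reusing the variable names from the question:
// Heap-allocated, memory-managed pixel storage; no copying required.
CFMutableDataRef pixelData = CFDataCreateMutable(kCFAllocatorDefault, pixels_size * bytesPerPixel);
CFDataSetLength(pixelData, pixels_size * bytesPerPixel);
UInt8 *raw_bytes = CFDataGetMutableBytePtr(pixelData);
//
// processing, creates and stores raw byte stream into raw_bytes
//
CGDataProviderRef provider = CGDataProviderCreateWithCFData(pixelData);
CFRelease(pixelData); // the provider now holds its own reference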

OpenCV memory stacking up (not released properly)

I am using a 3rd-party library for image processing. This method seems to be the cause of large memory usage (+30MB) every time it executes, and the memory is not released properly. Repeated use ends up crashing the app (memory overload). The image used comes directly from the camera of my iPhone 6.
+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                  // Width
                                        cvMat.rows,                  // Height
                                        8,                           // Bits per component
                                        8 * cvMat.elemSize(),        // Bits per pixel
                                        cvMat.step[0],               // Bytes per row
                                        colorSpace,                  // Colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info flags
                                        provider,                    // CGDataProviderRef
                                        NULL,                        // Decode
                                        false,                       // Should interpolate
                                        kCGRenderingIntentDefault);  // Intent
    // UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return image;
}
I suspect the problem is here: (__bridge CFDataRef)data. I can't use CFRelease on it because that makes the app crash. The project uses ARC.
EDIT:
It seems the same code is also on the official OpenCV website:
http://docs.opencv.org/2.4/doc/tutorials/ios/image_manipulation/image_manipulation.html
Gah!
EDIT 2
Here is how I use it (the code below is also part of the 3rd-party lib, but I added some lines).
cv::Mat undistorted = cv::Mat(cvSize(maxWidth, maxHeight), CV_8UC4);  // here nothing
cv::Mat original = [MMOpenCVHelper cvMatFromUIImage:_adjustedImage];  // here +30MB
//NSLog(@"%f %f %f %f", ptBottomLeft.x, ptBottomRight.x, ptTopRight.x, ptTopLeft.x);
cv::warpPerspective(original, undistorted,
                    cv::getPerspectiveTransform(src, dst), cvSize(maxWidth, maxHeight)); // here +16MB
_cropRect.hidden = YES;
@autoreleasepool {
    _sourceImageView.image = [MMOpenCVHelper UIImageFromCVMat:undistorted]; // here +15MB (PROBLEM)
}
original.release();    // here -30MB (THIS IS OK)
undistorted.release(); // here -16MB (ok)
I guess this is a hard subject, since not many people know OpenCV that well. Most answers to similar problems involve wrapping the call in @autoreleasepool, but that does not seem to release the memory either.
As a temporary solution I resize the image fed to this method by half. At least the app lasts longer before it finally crashes. It just works.
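One direction worth trying (a sketch, not a verified fix): skip the intermediate autoreleased NSData inside UIImageFromCVMat and hand CGDataProviderCreateWithData a heap copy with a release callback, so the pixel buffer's lifetime is tied to the image rather than to an autorelease pool:
// Hypothetical variant of the copy step in UIImageFromCVMat.
static void releaseMatPixels(void *info, const void *data, size_t size) {
    free((void *)data);
}

size_t bufferSize = cvMat.elemSize() * cvMat.total();
void *buffer = malloc(bufferSize);
memcpy(buffer, cvMat.data, bufferSize);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferSize, releaseMatPixels);
// ... CGImageCreate(...) as before, then CGDataProviderRelease(provider) ...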

Creating and displaying a UIImage from raw BGRA data

I'm collecting image data from the camera using this code:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Called when a frame arrives
    // Should be in BGRA format
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    unsigned char *raw = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);
    // Copy memory into allocated buffer
    unsigned char *buffer = malloc(sizeof(unsigned char) * bytesPerRow * height);
    memcpy(buffer, raw, bytesPerRow * height);
    [self processVideoData:buffer width:width height:height bytesPerRow:bytesPerRow];
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
The processVideoData: method looks like this:
- (void)processVideoData:(unsigned char *)data width:(size_t)width height:(size_t)height bytesPerRow:(size_t)bytesPerRow
{
    dispatch_sync(dispatch_get_main_queue(), ^{
        CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data, bytesPerRow * height, NULL);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, dataProvider, NULL, NO, kCGRenderingIntentDefault);
        // Set layer contents???
        UIImage *objcImage = [UIImage imageWithCGImage:image];
        self.imageView.image = objcImage;
        free(data);
        CGDataProviderRelease(dataProvider);
        CGColorSpaceRelease(colorSpace);
        CGImageRelease(image);
    });
}
No complaints, no leaks, but nothing shows up in the image view; it just stays blank (yes, I have checked the outlet connection). Previously I had bitmapInfo set to just kCGBitmapByteOrderDefault, which caused a crash when setting the image view's image property; promisingly, though, the image view would go dark just before the crash.
I surmised that the crash was due to the image being BGRA, not BGR, so I set bitmapInfo to kCGBitmapByteOrderDefault | kCGImageAlphaLast. That fixed the crash, but no image appears.
I realise the image will look weird, since CGImageCreate expects RGB data and I'm passing it BGR, but that should only result in swapped channels, not a blank view. I have also logged the data I'm receiving, and it seems to be in order, something like b:65 g:51 r:42 a:255, with the alpha channel always 255 as expected.
I'm sorry if it's obvious but I can't work out what is going wrong.
You can use this flag combination to achieve BGRA format:
kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst
You should prefer this solution; it will be more performant than an OpenCV conversion.
Here is a more general way to convert sourcePixelFormat to bitmapInfo:
sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
bitmapInfo = @{
    @(kCVPixelFormatType_32ARGB) : @(kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst),
    @(kCVPixelFormatType_32BGRA) : @(kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst),
}[@(sourcePixelFormat)].unsignedIntegerValue;
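A hypothetical use of that mapping, assuming pixelBuffer has already been locked with CVPixelBufferLockBaseAddress and colorSpace has been created:
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
    CVPixelBufferGetBaseAddress(pixelBuffer),
    CVPixelBufferGetBytesPerRow(pixelBuffer) * CVPixelBufferGetHeight(pixelBuffer),
    NULL);
CGImageRef image = CGImageCreate(CVPixelBufferGetWidth(pixelBuffer),
                                 CVPixelBufferGetHeight(pixelBuffer),
                                 8,                                        // bits per component
                                 32,                                       // bits per pixel
                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                 colorSpace,
                                 bitmapInfo,                               // from the mapping above
                                 provider, NULL, NO, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);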
It turns out the data was just in the wrong format, and I wasn't feeding it into the CGImageCreate function correctly.
The data comes out in BGRA format, so I fed it into an IplImage structure (I'm using OpenCV 2.4.9) like so:
// Pack IplImage with data
IplImage *img = cvCreateImage(cvSize((int)width, (int)height), 8, 4);
img->imageData = (char *)data;
I then converted it to RGB like so:
IplImage *converted = cvCreateImage(cvSize((int)width, (int)height), 8, 3);
cvCvtColor(img, converted, CV_BGRA2RGB);
I then fed the data from the converted IplImage into a CGImageCreate function and it works nicely.
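That final step might look roughly like this (a sketch; it assumes converted holds 24-bit RGB, with widthStep as the row stride):
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                          converted->imageData,
                                                          converted->widthStep * height,
                                                          NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(width, height, 8, 24, converted->widthStep,
                                 colorSpace, kCGImageAlphaNone,
                                 provider, NULL, NO, kCGRenderingIntentDefault);
// ... wrap into a UIImage, then release image, provider, and colorSpace ...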

How to create CGImageRef from NSData string data (NOT UIImage)

How does one create a new CGImageRef without a UIImage? I can't use image.CGImage
I am receiving a base64 encoded image as a std::string from a server process. The first part of the code below simulates receiving the encoded string.
- (UIImage *)testChangeImageToBase64String
{
    UIImage *processedImage = [UIImage imageNamed:@"myFile.jpg"];
    // UIImage to unsigned char *
    CGImageRef imageRef = processedImage.CGImage;
    NSData *data = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
    // encode data to Base64 NSString
    NSString *base64EncodedDataString = [data base64EncodedStringWithOptions:0];
    // create encoded std::string
    std::string encoded([base64EncodedDataString UTF8String]);
    // ***************************************************************************
    // This is where we call the server method and receive the bytes in a std::string
    std::string received = encoded;
    // ***************************************************************************
    // get Base64 encoded std::string into NSString
    NSString *base64EncodedCstring = [NSString stringWithCString:received.c_str() encoding:[NSString defaultCStringEncoding]];
    // NSData from the Base64 encoded std::string
    NSData *nsdataFromBase64String = [[NSData alloc] initWithBase64EncodedString:base64EncodedCstring options:0];
Everything is good!!!! ... until I try to populate the newImage.
When I get the encoded string back, I need a CGImageRef to put the data into the correct format to populate a UIImage. If the data is not in the correct format, the UIImage will be nil.
I need to create a new CGImageRef from nsdataFromBase64String.
Something like:
CGImageRef base64ImageRef = [newCGImageRefFromString:nsdataFromBase64String];
Then I can use imageWithCGImage to put the data into a new UIImage.
Something like:
UIImage *imageFromImageRef = [UIImage imageWithCGImage: base64ImageRef];
Then I can return the UIImage.
return newImage;
}
Please note that the following line will NOT work:
UIImage *newImage = [[UIImage alloc] initWithData:nsdataFromBase64String];
The data needs to be in the correct format or the UIImage will be nil. Hence my question: how do I create a CGImageRef from NSData?
Short-ish answer, since this is mostly just going over what I mentioned in NSChat:
Figure out what the format of the image you're receiving is as well as its size (width and height, in pixels). You mentioned in chat that it's just straight ARGB8 data, so keep that in mind. I'm not sure how you're receiving the other info, if at all.
Using CGImageCreate, create a new image using what you know about the image already (i.e., presumably you know its width, height, and so on — if you don't, you should be packing this in with the image you're sending). E.g., this bundle of boilerplate that nobody likes to write:
// NOTE: have not tested if this even compiles -- consider it pseudocode.
CGImageRef image;
CFDataRef bridgedData;
CGDataProviderRef dataProvider;
CGColorSpaceRef colorSpace;
CGBitmapInfo infoFlags = kCGImageAlphaFirst; // ARGB
// Get a color space
colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
// Assuming the decoded data is only pixel data
bridgedData = (__bridge CFDataRef)decodedData;
dataProvider = CGDataProviderCreateWithCFData(bridgedData);
// Given size_t width, height which you should already have somehow
image = CGImageCreate(
    width, height, /* bpc */ 8, /* bpp */ 32, /* pitch */ width * 4,
    colorSpace, infoFlags,
    dataProvider, /* decode array */ NULL, /* interpolate? */ TRUE,
    kCGRenderingIntentDefault /* adjust intent according to use */
);
// Release things the image took ownership of.
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(colorSpace);
That code's written with the idea that it's guaranteed to be ARGB_8888, the data is correct, nothing could possibly return NULL, etc. Copy/pasting the above code could potentially cause everything in a three mile radius to explode. Error handling's up to you (e.g., CGColorSpaceCreateWithName can potentially return null).
Allocate a UIImage using the CGImage. Since the UIImage will take ownership of/copy the CGImage, release your CGImageRef (actually, the docs say nothing about what UIImage does with the CGImage, but you're not going to use it anymore, so you should release yours).
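In code, that last step might look like this (sketch):
// UIImage keeps its own reference to the CGImage, so drop ours.
UIImage *uiImage = [UIImage imageWithCGImage:image];
CGImageRelease(image);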

How do I access and manipulate JPEG image pixels?

I have a .jpg file. I need to convert it to pixel data and then change the color of some pixels. I do it like this:
NSString *string = [[NSBundle mainBundle] pathForResource:@"pic" ofType:@"jpg"];
NSData *data = [NSData dataWithContentsOfFile:string];
unsigned char *bytesArray = (unsigned char *)data.bytes;
NSUInteger bytesLength = data.length;
//--------pixel to array
NSMutableArray *array = [[NSMutableArray alloc] initWithCapacity:bytesLength];
for (int i = 0; i < bytesLength; i++) {
    [array addObject:[NSNumber numberWithUnsignedChar:bytesArray[i]]];
}
Here I try to change the color of pixels 95 through 154:
NSNumber *number = [NSNumber numberWithInt:200];
for (int i = 95; i < 155; i++) {
    [array replaceObjectAtIndex:i withObject:number];
}
But when I convert the array back to an image, I get a blurred picture. I don't understand why I don't seem to influence the specific pixels I target, and why I influence the picture as a whole instead.
The process of accessing pixel-level data is a little more complicated than your question might suggest, because, as Martin pointed out, JPEG can be a compressed image format. Apple discusses the approved technique for getting pixel data in Technical Q&A QA1509.
Bottom line, to get the uncompressed pixel data for a UIImage, you would:
Get the CGImage for the UIImage.
Get the data provider for that CGImageRef via CGImageGetDataProvider.
Get the binary data associated with that data provider via CGDataProviderCopyData.
Extract some of the information about the image, so you know how to interpret that buffer.
Thus:
UIImage *image = ...
CGImageRef imageRef = image.CGImage; // get the CGImageRef
NSAssert(imageRef, @"Unable to get CGImageRef");
CGDataProviderRef provider = CGImageGetDataProvider(imageRef); // get the data provider
NSAssert(provider, @"Unable to get provider");
NSData *data = CFBridgingRelease(CGDataProviderCopyData(provider)); // get copy of the data
NSAssert(data, @"Unable to copy image data");
// some other interesting details about the image
NSInteger bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
NSInteger bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
NSInteger bytesPerRow = CGImageGetBytesPerRow(imageRef);
NSInteger width = CGImageGetWidth(imageRef);
NSInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorspace = CGImageGetColorSpace(imageRef);
Given that you want to manipulate this, you presumably want some mutable pixel buffer. The easiest approach would be to make a mutableCopy of that NSData object and manipulate it there, but in these cases, I tend to fall back to C, creating a void *outputBuffer, into which I copy the original pixels and manipulate using traditional C array techniques.
To create the buffer:
void *outputBuffer = malloc(bytesPerRow * height); // bytesPerRow * height accounts for any per-row padding
NSAssert(outputBuffer, @"Unable to allocate buffer");
For the precise details on how to manipulate it, you have to look at bitmapInfo (which will tell you whether it's RGBA or ARGB; whether it's floating point or integer) and bitsPerComponent (which will tell you whether it's 8 or 16 bits per component, etc.). For example, a very common JPEG format is 8 bits per component, four components, RGBA (i.e. red, green, blue, and alpha, in that order). But you really need to check those various properties we extracted from the CGImageRef to make sure. See the discussion in the Quartz 2D Programming Guide - Bitmap Images and Image Masks for more information. I personally find "Figure 11-2" to be especially illuminating.
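For example, assuming 8-bit RGBA (verify against bitmapInfo and bitsPerComponent first), addressing a single pixel might look like this sketch:
uint8_t *pixels = (uint8_t *)outputBuffer;
size_t bytesPerPixel = bitsPerPixel / 8;             // 4 for RGBA8888
size_t offset = y * bytesPerRow + x * bytesPerPixel; // pixel (x, y)
pixels[offset + 0] = 255; // red
pixels[offset + 1] = 0;   // green
pixels[offset + 2] = 0;   // blue
pixels[offset + 3] = 255; // alpha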
The next logical question is when you're done manipulating the pixel data, how to create a UIImage for that. In short, you'd reverse the above process, e.g. create a data provider, create a CGImageRef, and then create a UIImage:
CGDataProviderRef outputProvider = CGDataProviderCreateWithData(NULL, outputBuffer, bytesPerRow * height, releaseData);
CGImageRef outputImageRef = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorspace,
bitmapInfo,
outputProvider,
NULL,
NO,
kCGRenderingIntentDefault);
UIImage *outputImage = [UIImage imageWithCGImage:outputImageRef];
CGImageRelease(outputImageRef);
CGDataProviderRelease(outputProvider);
Where releaseData is a C function that simply calls free on the pixel buffer associated with the data provider:
void releaseData(void *info, const void *data, size_t size)
{
free((void *)data);
}