Creating and displaying a UIImage from raw BGRA data - objective-c

I'm collecting image data from the camera using this code:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
// Called when a frame arrives
// Should be in BGRA format
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
unsigned char *raw = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);
// Copy memory into allocated buffer
unsigned char *buffer = malloc(sizeof(unsigned char) * bytesPerRow * height);
memcpy(buffer, raw, bytesPerRow * height);
[self processVideoData:buffer width:width height:height bytesPerRow:bytesPerRow];
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
The processVideoData: method looks like this:
- (void)processVideoData:(unsigned char *)data width:(size_t)width height:(size_t)height bytesPerRow:(size_t)bytesPerRow
{
dispatch_sync(dispatch_get_main_queue(), ^{
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data, bytesPerRow * height, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, dataProvider, NULL, NO, kCGRenderingIntentDefault);
// Set layer contents???
UIImage *objcImage = [UIImage imageWithCGImage:image];
self.imageView.image = objcImage;
free(data);
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(colorSpace);
CGImageRelease(image);
});
}
No complaints, no leaks, but nothing shows up in the image view; it just stays blank (yes, I have checked the outlet connection). Previously I had bitmapInfo set to just kCGBitmapByteOrderDefault, which caused a crash when setting the image view's image property; however, the image view went dark just before the crash, which was promising.
I surmised that the crash was due to the image being BGRA rather than BGR, so I set bitmapInfo to kCGBitmapByteOrderDefault | kCGImageAlphaLast. That fixed the crash, but there is still no image.
I realise the image will look weird, since the CGImageRef expects RGB and I'm passing it BGR, but that should only produce an odd-looking image due to channel swapping. I have also logged the data I'm getting and it looks sensible, something like b:65 g:51 r:42 a:255, with the alpha channel always 255 as expected.
I'm sorry if it's obvious but I can't work out what is going wrong.

You can use this flag combination to achieve BGRA format:
kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst
You should prefer this approach; it is more performant than the OpenCV-based conversion.
Here is a more general way to map sourcePixelFormat to bitmapInfo:
sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
bitmapInfo = [@{
@(kCVPixelFormatType_32ARGB) : @(kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst),
@(kCVPixelFormatType_32BGRA) : @(kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst),
}[@(sourcePixelFormat)] unsignedIntegerValue];
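Applied to the code in the question, the CGImageCreate call might then look roughly like this (a sketch on my part, assuming the capture output really delivers kCVPixelFormatType_32BGRA; the alpha byte is treated as padding):
// Sketch: BGRA bytes read as 32-bit little-endian words are ARGB, so use little-endian + skip-first
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data, bytesPerRow * height, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(width, height,
                                 8,             // bits per component
                                 32,            // bits per pixel
                                 bytesPerRow,
                                 colorSpace,
                                 kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst,
                                 dataProvider, NULL, NO, kCGRenderingIntentDefault);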

It turns out the data was just in the wrong format and I wasn't feeding it into the CGImageCreate function correctly.
The data comes out in BGRA format so I fed this data into an IplImage structure (I'm using OpenCV v 2.4.9) like so:
// Pack IplImage with data
IplImage *img = cvCreateImage(cvSize((int)width, (int)height), 8, 4);
img->imageData = (char *)data;
I then converted it to RGB like so:
IplImage *converted = cvCreateImage(cvSize((int)width, (int)height), 8, 3);
cvCvtColor(img, converted, CV_BGRA2RGB);
I then fed the data from the converted IplImage into a CGImageCreate function and it works nicely.
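For completeness, that last step might look something like this (my sketch rather than the poster's exact code; note that OpenCV may pad rows, hence widthStep):
// Sketch: wrap the converted 3-channel RGB buffer in a CGImage
// `converted` must stay alive as long as `image`, since the provider does not copy the bytes
size_t rgbBytesPerRow = converted->widthStep;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, converted->imageData, rgbBytesPerRow * height, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(width, height, 8, 24, rgbBytesPerRow, colorSpace,
                                 kCGBitmapByteOrderDefault | kCGImageAlphaNone,
                                 provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *uiImage = [UIImage imageWithCGImage:image];
CGImageRelease(image);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);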

Related

Image Quality getting affected on scaling the image using vImageScale_ARGB8888 - Cocoa Objective C

I am capturing my system's screen with AVCaptureSession and then creating a video file out of the captured image buffers. It works fine.
Now I want to scale the image buffers, maintaining the aspect ratio, to the video file's dimensions. I have used the following code to scale the images.
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
if (pixelBuffer == NULL) { return; }
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
size_t finalWidth = 1080;
size_t finalHeight = 720;
size_t sourceWidth = CVPixelBufferGetWidth(imageBuffer);
size_t sourceHeight = CVPixelBufferGetHeight(imageBuffer);
CGRect aspectRect = AVMakeRectWithAspectRatioInsideRect(CGSizeMake(sourceWidth, sourceHeight), CGRectMake(0, 0, finalWidth, finalHeight));
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t startY = aspectRect.origin.y;
size_t yOffSet = (finalWidth*startY*4);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
void* destData = malloc(finalHeight * finalWidth * 4);
vImage_Buffer srcBuffer = { (void *)baseAddress, sourceHeight, sourceWidth, bytesPerRow};
vImage_Buffer destBuffer = { (void *)destData+yOffSet, aspectRect.size.height, aspectRect.size.width, aspectRect.size.width * 4};
vImage_Error err = vImageScale_ARGB8888(&srcBuffer, &destBuffer, NULL, 0);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
OSType pixelFormat = CVPixelBufferGetPixelFormatType(imageBuffer);
CVImageBufferRef pixelBuffer1 = NULL;
CVReturn result = CVPixelBufferCreateWithBytes(NULL, finalWidth, finalHeight, pixelFormat, destData, finalWidth * 4, NULL, NULL, NULL, &pixelBuffer1);
}
I am able to scale the image with the above code, but the final image seems blurry compared to resizing the image with the Preview application. Because of this the video is not clear.
This works fine if I change the output pixel format to BGRA (an RGB-family format) with the code below.
output.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];
But I want the image buffers in YUV format (which is the default format for AVCaptureVideoDataOutput), since this will reduce the size of the buffer when transferring it over the network.
Image after scaling:
Image resized with Preview application:
I have tried using vImageScale_CbCr8 instead of vImageScale_ARGB8888 but the resulting image didn't contain correct RGB values.
I have also noticed there is a function to convert the image format: vImageConvert_422YpCbYpCr8ToARGB8888(const vImage_Buffer *src, const vImage_Buffer *dest, const vImage_YpCbCrToARGB *info, const uint8_t permuteMap[4], const uint8_t alpha, vImage_Flags flags);
But I don't know what should be the values for vImage_YpCbCrToARGB and permuteMap as I don't know anything about image processing.
Expected Solution:
How to convert YUV pixel buffers to RGB buffers and back to YUV (or) How to scale YUV pixel buffers without affecting the RGB values.
After a lot of searching and going through different questions related to image rendering, I found the code below to convert the pixel format of the image buffers. Thanks to the answer in this link.
CVPixelBufferRef imageBuffer;
CVPixelBufferCreate(kCFAllocatorDefault, sourceWidth, sourceHeight, kCVPixelFormatType_32ARGB, 0, &imageBuffer);
VTPixelTransferSessionTransferImage(pixelTransferSession, pixelBuffer, imageBuffer);
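The snippet above leaves out how pixelTransferSession is created; a minimal sketch (my assumption, not part of the original answer) is to create one VideoToolbox session up front and reuse it for every frame:
#import <VideoToolbox/VideoToolbox.h>
// Create once (e.g. when capture starts) and reuse; creating a session per frame would be wasteful
VTPixelTransferSessionRef pixelTransferSession = NULL;
OSStatus status = VTPixelTransferSessionCreate(kCFAllocatorDefault, &pixelTransferSession);
// ... per frame: VTPixelTransferSessionTransferImage(pixelTransferSession, pixelBuffer, imageBuffer); ...
// When capture stops:
VTPixelTransferSessionInvalidate(pixelTransferSession);
CFRelease(pixelTransferSession);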

CGImageRef gets corrupted after returned from a method

My code creates a TIFFRepresentation of an image, and I want to recode it to something different. This is not problematic.
My ImgUtils function is:
+ (CGImageRef) processImageData:(NSData*)rep {
NSBitmapImageRep *bitmapRep = [NSBitmapImageRep imageRepWithData:rep];
int width = bitmapRep.size.width;
int height = bitmapRep.size.height;
size_t pixels_size = width * height;
Byte raw_bytes[pixels_size * 3];
//
// processing, creates and stores raw byte stream
//
int bitsPerComponent = 8;
int bytesPerPixel = 3;
int bitsPerPixel = bytesPerPixel * bitsPerComponent;
int bytesPerRow = bytesPerPixel * width;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
raw_bytes,
pixels_size * bytesPerPixel,
NULL);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorSpaceRef,
bitmapInfo,
provider,
NULL,
NO,
renderingIntent);
[ImgUtils saveToPng:imageRef withSuffix:@"-ok"];
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
return imageRef;
}
There is another method, that saves a CGImageRef to filesystem.
+ (BOOL) saveToPng:(CGImageRef)imageRef withSuffix:(NSString*)suffix {
CFURLRef url = (__bridge CFURLRef)[NSURL fileURLWithPath:[NSString stringWithFormat:@"~/Downloads/pic%@.png", suffix]];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL(url, kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(destination, imageRef, nil);
CGImageDestinationFinalize(destination);
CFRelease(destination);
return YES;
}
As you can see, immediately after processing the image, I save it on a disk, as pic-ok.png.
Here is the code, that calls the processing function:
CGImageRef cgImage = [ImgUtils processImageData:imageRep];
[ImgUtils saveToPng:cgImage withSuffix:@"-bad"];
The problem is that the two images differ. The second one, with the -bad suffix, is corrupted.
See the examples below. It seems like the memory area the CGImageRef points to is released and overwritten immediately after the method returns.
I also tried return CGImageCreateCopy(imageRef); but it changed nothing.
What am I missing?
CGDataProviderCreateWithData() does not copy the buffer you provide. Its purpose is to allow creation of a data provider that accesses that buffer directly.
Your buffer is created on the stack. It goes invalid after +processImageData: returns. However, the CGImage still refers to the provider and the provider still refers to the now-invalid buffer.
One solution would be to create the buffer on the heap and provide a callback via the releaseData parameter that frees it. Another would be to create a CFData from the buffer (which copies it) and then create the data provider using CGDataProviderCreateWithCFData(). Probably the best would be to create a CFMutableData of the desired capacity, set its length to match, and use its storage (CFDataGetMutableBytePtr()) as your buffer from the beginning. That's heap-allocated, memory-managed, and doesn't require any copying.
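A rough sketch of that last suggestion (mine, for illustration; pixels_size and bytesPerPixel as in the question):
// Back the pixel buffer with a CFMutableData so its lifetime is managed for us
CFMutableDataRef pixelData = CFDataCreateMutable(kCFAllocatorDefault, pixels_size * bytesPerPixel);
CFDataSetLength(pixelData, pixels_size * bytesPerPixel);
UInt8 *raw_bytes = CFDataGetMutableBytePtr(pixelData);
// ... fill raw_bytes with the processed pixels ...
CGDataProviderRef provider = CGDataProviderCreateWithCFData(pixelData);
CFRelease(pixelData); // the provider holds its own reference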

How to create a CGImageRef from a NSBitmapImageRep?

How can I create a CGImageRef from a NSBitmapImageRep?
Or how can I define a completely new CGImageRef in the same way as the NSBitmapImageRep? Defining the NSBitmapImageRep works fine, but I need the image as a CGImageRef.
unsigned char *plane = (unsigned char *)[data bytes]; // data = 3 bytes for each RGB pixel
NSBitmapImageRep* imageRep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes: &plane
pixelsWide: width
pixelsHigh: height
bitsPerSample: depth
samplesPerPixel: channel
hasAlpha: NO
isPlanar: NO
colorSpaceName: NSCalibratedRGBColorSpace
//bitmapFormat: NSAlphaFirstBitmapFormat
bytesPerRow: channel * width
bitsPerPixel: channel * depth
];
I have no idea how to create the CGImageRef from the NSBitmapImageRep or how to define a new CGImageRef:
CGImageRef imageRef = CGImageCreate(width, height, depth, channel*depth, channel*width, CGColorSpaceCreateDeviceRGB(), ... );
Please, can somebody give me a hint?
The easy way is by using the CGImage property (introduced in 10.5):
CGImageRef image = imageRep.CGImage;
Documentation:
https://developer.apple.com/library/mac/documentation/Cocoa/Reference/ApplicationKit/Classes/NSBitmapImageRep_Class/index.html#//apple_ref/occ/instm/NSBitmapImageRep/CGImage
Return Value
Returns an autoreleased CGImageRef opaque type based on
the receiver’s current bitmap data.
Discussion
The returned CGImageRef has pixel dimensions that are
identical to the receiver’s. This method might return a preexisting
CGImageRef opaque type or create a new one. If the receiver is later
modified, subsequent invocations of this method might return different
CGImageRef opaque types.
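One caveat: because the returned CGImageRef belongs to the image rep (and may change if the rep is later modified), it seems safest to take your own reference if the image needs to outlive the rep, e.g.:
// Illustrative: keep our own reference to the CGImage
CGImageRef image = CGImageRetain(imageRep.CGImage);
// ... use image ...
CGImageRelease(image);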
From your code snippet, it seems you're starting with an NSData object. So, your question seems to be how to create a CGImage from a data object. In that case, there's no reason to go through NSBitmapImageRep.
You were almost there with the call to CGImageCreate(). You just needed to figure out how to supply a CGDataProvider to it. You can create a CGDataProvider from an NSData pretty directly, once you realize that NSData is toll-free bridged with CFData. So:
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGColorSpaceRef colorspace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGImageRef image = CGImageCreate(width, height, depth / 3, depth, channel*width, colorspace, kCGImageAlphaNone, provider, NULL, TRUE, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorspace);
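If you then want an NSImage for display, wrapping the result is straightforward (illustrative; passing NSZeroSize makes NSImage adopt the CGImage's pixel dimensions):
NSImage *nsImage = [[NSImage alloc] initWithCGImage:image size:NSZeroSize];
CGImageRelease(image); // NSImage keeps what it needs; we release our +1 reference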

How to create CGImageRef from NSData string data (NOT UIImage)

How does one create a new CGImageRef without a UIImage? I can't use image.CGImage
I am receiving a base64 encoded image as a std::string from a server process. The first part of the code below simulates receiving the encoded string.
- (UIImage *)testChangeImageToBase64String
{
UIImage *processedImage = [UIImage imageNamed:@"myFile.jpg"];
// UIImage to unsigned char *
CGImageRef imageRef = processedImage.CGImage;
NSData *data = (NSData *) CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
// encode data to Base64 NSString
NSString *base64EncodedDataString = [data base64EncodedStringWithOptions:0];
// create encoded std::string
std::string encoded([base64EncodedDataString UTF8String]);
// ***************************************************************************
// This is where we call the server method and receive the bytes in a std::string
std::string received = encoded;
// ***************************************************************************
// get Base64 encoded std::string into NSString
NSString *base64EncodedCstring = [NSString stringWithCString:encoded.c_str() encoding:[NSString defaultCStringEncoding]];
// NSData from the Base64 encoded std::string
NSData *nsdataFromBase64String = [[NSData alloc]initWithBase64EncodedString:base64EncodedCstring options:0];
Everything is good!!!!..... until I try to populate the newImage.
When I get the encoded string, I need to get a CGImageRef to get the data back into the correct format to populate a UIImage. If the data is not in the correct format the UIImage will be nil.
I need to create a new CGImageRef with the nsdataFromBase64String.
Something like:
CGImageRef base64ImageRef = [newCGImageRefFromString:nsdataFromBase64String];
Then I can use imageWithCGImage to put the data into a new UIImage.
Something like:
UIImage *imageFromImageRef = [UIImage imageWithCGImage: base64ImageRef];
Then I can return the UIImage.
return newImage;
}
Please note that the following line will NOT work:
UIImage *newImage = [[UIImage alloc] initWithData:nsdataFromBase64String];
The data needs to be in the correct format or the UIImage will be nil. Hence, my question, "How do I create a CGImageRef with NSData?"
Short-ish answer, since this is mostly just going over what I mentioned in NSChat:
Figure out what the format of the image you're receiving is as well as its size (width and height, in pixels). You mentioned in chat that it's just straight ARGB8 data, so keep that in mind. I'm not sure how you're receiving the other info, if at all.
Using CGImageCreate, create a new image using what you know about the image already (i.e., presumably you know its width, height, and so on — if you don't, you should be packing this in with the image you're sending). E.g., this bundle of boilerplate that nobody likes to write:
// NOTE: have not tested if this even compiles -- consider it pseudocode.
CGImageRef image;
CFDataRef bridgedData;
CGDataProviderRef dataProvider;
CGColorSpaceRef colorSpace;
CGBitmapInfo infoFlags = kCGImageAlphaFirst; // ARGB
// Get a color space
colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
// Assuming the decoded data is only pixel data
bridgedData = (__bridge CFDataRef)decodedData;
dataProvider = CGDataProviderCreateWithCFData(bridgedData);
// Given size_t width, height which you should already have somehow
image = CGImageCreate(
width, height, /* bpc */ 8, /* bpp */ 32, /* pitch */ width * 4,
colorSpace, infoFlags,
dataProvider, /* decode array */ NULL, /* interpolate? */ TRUE,
kCGRenderingIntentDefault /* adjust intent according to use */
);
// Release things the image took ownership of.
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(colorSpace);
That code's written with the idea that it's guaranteed to be ARGB_8888, the data is correct, nothing could possibly return NULL, etc. Copy/pasting the above code could potentially cause everything in a three mile radius to explode. Error handling's up to you (e.g., CGColorSpaceCreateWithName can potentially return null).
Allocate a UIImage using the CGImage. Since the UIImage will take ownership of/copy the CGImage, release your CGImageRef (actually, the docs say nothing about what UIImage does with the CGImage, but you're not going to use it anymore, so you must release yours).
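A sketch of that last step, continuing from the boilerplate above:
// Wrap the CGImage and give up our +1 reference; UIImage keeps what it needs
UIImage *uiImage = [UIImage imageWithCGImage:image];
CGImageRelease(image);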

How to convert hex data to UIImage?

I'd like to display a .cpbitmap file (it's the format iOS uses to save wallpapers) in a UIImageView. The problem is I need to convert it. I've already figured out that if you get its data (NSData) and look at each byte you get the colour components, so the first byte is R, then B, then G, and then alpha (I think). Now I need to "draw" a UIImage out of that info. Does anyone know how to do this?
Here is the link to the .cpbitmap file: https://www.dropbox.com/s/s9v4lahixm9cuql/LockBackground.cpbitmap
It would be really cool if someone could help me.
Thanks
EDIT
I found a working Python script; is someone able to translate it to Objective-C?
#!/usr/bin/python
from PIL import Image,ImageOps
import struct
import sys
if len(sys.argv) < 3:
print "Need two args: filename and result_filename\n";
sys.exit(0)
filename = sys.argv[1]
result_filename = sys.argv[2]
with open(filename) as f:
contents = f.read()
unk1, width, height, unk2, unk3, unk4 = struct.unpack('<6i', contents[-24:])
im = Image.fromstring('RGBA', (width,height), contents, 'raw', 'RGBA', 0, 1)
r,g,b,a = im.split()
im = Image.merge('RGBA', (b,g,r,a))
im.save(result_filename)
The basic process of converting RGBA data into an image is to create a CGDataProviderRef with the raw data, and then use CGImageCreate to create a CGImageRef, from which you can easily generate a UIImage. So, that gives you something like:
- (UIImage *) imageForBitmapData:(NSData *)data size:(CGSize)size
{
void * bitmapData;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
int bitmapBytesPerRow = (size.width * 4);
int bitmapByteCount = (bitmapBytesPerRow * size.height);
bitmapData = malloc( bitmapByteCount );
NSAssert(bitmapData, @"Unable to create buffer");
[data getBytes:bitmapData length:bitmapByteCount];
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bitmapData, bitmapByteCount, releasePixels);
CGImageRef imageRef = CGImageCreate(size.width,
size.height,
8,
32,
bitmapBytesPerRow,
colorSpace,
(CGBitmapInfo)kCGImageAlphaLast,
provider,
NULL,
NO,
kCGRenderingIntentDefault);
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
return image;
}
With a releasePixels function defined as follows:
void releasePixels(void *info, const void *data, size_t size)
{
free((void*)data);
}
The only trick was identifying the dimensions of the bitmap. There are 4,142,592 bytes of image data (there is some extra stuff at the end of the file, which is self evident if you examine the file in hexadecimal in Xcode). That doesn't correlate to any standard device dimensions. But if you look at the possible values that divide evenly into 4,142,592, you get a couple of promising ones (496, 522, 558, 576, 696, 744, 899, 928, and 992). And if you just try those out, it becomes obvious that the image is 744 x 1392.
You can then use those dimensions with the above method, and you get your image.
While I discovered the size of the image empirically, I noticed that those dimensions were encoded at the end of the file. This is confirmed by your Python code, which suggests that the image width is the fifth from the last UInt32, and the height is the fourth from the last UInt32. Thus, you can use the above routine like so, extracting the dimensions from those two 32-bit integers encoded near the end of the file:
NSString *path = [[NSBundle mainBundle] pathForResource:@"LockBackground" ofType:@"cpbitmap"];
NSData *data = [NSData dataWithContentsOfFile:path];
NSAssert(data, @"no data found");
UInt32 width;
UInt32 height;
[data getBytes:&width range:NSMakeRange([data length] - sizeof(UInt32) * 5, sizeof(UInt32))];
[data getBytes:&height range:NSMakeRange([data length] - sizeof(UInt32) * 4, sizeof(UInt32))];
self.imageView.image = [self imageForBitmapData:data size:CGSizeMake(width, height)];