How to convert hex data to UIImage? - objective-c

I'd like to display a .cpbitmap file (it's the format iOS uses to save wallpapers) in a UIImageView. The problem is I need to convert it. I've already figured out that if you take its data (NSData) and read it byte by byte, you get the color components: the first byte is R, then B, then G, and then alpha (I think). Now I need to "draw" a UIImage out of that info. Does anyone know how to do this?
Here is the link to the .cpbitmap file: https://www.dropbox.com/s/s9v4lahixm9cuql/LockBackground.cpbitmap
It would be really cool if someone could help me.
Thanks
EDIT
I found a working Python script; is someone able to translate it to Objective-C?
#!/usr/bin/python
from PIL import Image,ImageOps
import struct
import sys
if len(sys.argv) < 3:
    print "Need two args: filename and result_filename\n"
    sys.exit(0)

filename = sys.argv[1]
result_filename = sys.argv[2]

with open(filename) as f:
    contents = f.read()

unk1, width, height, unk2, unk3, unk4 = struct.unpack('<6i', contents[-24:])
im = Image.fromstring('RGBA', (width, height), contents, 'raw', 'RGBA', 0, 1)
r, g, b, a = im.split()
im = Image.merge('RGBA', (b, g, r, a))
im.save(result_filename)

The basic process of converting RGBA data into an image is to create a CGDataProviderRef with the raw data, and then use CGImageCreate to create a CGImageRef, from which you can easily generate a UIImage. So, that gives you something like:
- (UIImage *)imageForBitmapData:(NSData *)data size:(CGSize)size
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bitmapBytesPerRow = (size.width * 4);
    int bitmapByteCount = (bitmapBytesPerRow * size.height);
    void *bitmapData = malloc(bitmapByteCount);
    NSAssert(bitmapData, @"Unable to create buffer");
    [data getBytes:bitmapData length:bitmapByteCount];
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bitmapData, bitmapByteCount, releasePixels);
    CGImageRef imageRef = CGImageCreate(size.width,
                                        size.height,
                                        8,
                                        32,
                                        bitmapBytesPerRow,
                                        colorSpace,
                                        (CGBitmapInfo)kCGImageAlphaLast,
                                        provider,
                                        NULL,
                                        NO,
                                        kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    return image;
}
With a releasePixels function defined as follows:
void releasePixels(void *info, const void *data, size_t size)
{
    free((void *)data);
}
The only trick was identifying the dimensions of the bitmap. There are 4,142,592 bytes of image data (there is some extra stuff at the end of the file, which is self-evident if you examine the file in hexadecimal in Xcode). That doesn't correspond to any standard device dimensions. But if you look at the possible widths that divide evenly into 4,142,592, you get a couple of promising ones (496, 522, 558, 576, 696, 744, 899, 928, and 992). And if you just try those out, it becomes obvious that the image is 744 x 1392.
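For example, a quick brute-force search over candidate widths might look like the sketch below (the 4,142,592 figure comes from the file above; the 4-bytes-per-pixel assumption and the width range are illustrative):
// Sketch: list widths that divide the pixel count evenly
NSUInteger byteCount = 4142592;            // bytes of image data in the file
NSUInteger pixelCount = byteCount / 4;     // assuming 4 bytes (RGBA) per pixel
for (NSUInteger candidateWidth = 400; candidateWidth <= 1500; candidateWidth++) {
    if (pixelCount % candidateWidth == 0) {
        NSLog(@"candidate: %lu x %lu", (unsigned long)candidateWidth, (unsigned long)(pixelCount / candidateWidth));
    }
}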
You can then use those dimensions with the above method, and you get your image.
While I discovered the size of the image empirically, I noticed that those dimensions were encoded at the end of the file. This is confirmed by your Python code, which suggests that the image width is the fifth from the last UInt32, and the height is the fourth from the last UInt32. Thus, you can use the above routine like so, extracting the dimensions from those two 32-bit integers encoded near the end of the file:
NSString *path = [[NSBundle mainBundle] pathForResource:@"LockBackground" ofType:@"cpbitmap"];
NSData *data = [NSData dataWithContentsOfFile:path];
NSAssert(data, @"no data found");
UInt32 width;
UInt32 height;
[data getBytes:&width range:NSMakeRange([data length] - sizeof(UInt32) * 5, sizeof(UInt32))];
[data getBytes:&height range:NSMakeRange([data length] - sizeof(UInt32) * 4, sizeof(UInt32))];
self.imageView.image = [self imageForBitmapData:data size:CGSizeMake(width, height)];

Related

Creating and displaying a UIImage from raw BGRA data

I'm collecting image data from the camera using this code:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Called when a frame arrives
    // Should be in BGRA format
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    unsigned char *raw = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);
    // Copy memory into allocated buffer
    unsigned char *buffer = malloc(sizeof(unsigned char) * bytesPerRow * height);
    memcpy(buffer, raw, bytesPerRow * height);
    [self processVideoData:buffer width:width height:height bytesPerRow:bytesPerRow];
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
The processVideoData: method looks like this:
- (void)processVideoData:(unsigned char *)data width:(size_t)width height:(size_t)height bytesPerRow:(size_t)bytesPerRow
{
    dispatch_sync(dispatch_get_main_queue(), ^{
        CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data, bytesPerRow * height, NULL);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, dataProvider, NULL, NO, kCGRenderingIntentDefault);
        // Set layer contents???
        UIImage *objcImage = [UIImage imageWithCGImage:image];
        self.imageView.image = objcImage;
        free(data);
        CGDataProviderRelease(dataProvider);
        CGColorSpaceRelease(colorSpace);
        CGImageRelease(image);
    });
}
No complaints, no leaks, but nothing shows up in the image view; it just stays blank (yes, I have checked the outlet connection). Previously I had the bitmapInfo set to just kCGBitmapByteOrderDefault, which caused a crash when setting the image property of the image view; however, the image view would go dark just before the crash, which was promising.
I surmised that the crash was due to the image being in BGRA, not BGR, so I set the bitmapInfo to kCGBitmapByteOrderDefault | kCGImageAlphaLast, and that solved the crash, but no image appears.
I realise that the image will look weird, since the CGImageRef is expecting an RGB image and I'm passing it BGR, but that should only result in a weird-looking image due to channel swapping. I have also logged the data I'm getting, and it seems to be in order, something like b:65 g:51 r:42 a:255, with the alpha channel always 255 as expected.
I'm sorry if it's obvious, but I can't work out what is going wrong.
You can use this flag combination to achieve BGRA format:
kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst
You should prefer this solution; it will be more performant than converting via OpenCV.
Here is a more general way to map a sourcePixelFormat to a bitmapInfo:
sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
bitmapInfo = @{
    @(kCVPixelFormatType_32ARGB) : @(kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst),
    @(kCVPixelFormatType_32BGRA) : @(kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst),
}[@(sourcePixelFormat)].unsignedIntegerValue;
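Plugging that into the question's processVideoData: method might look like the following sketch (same width, height, bytesPerRow, colorSpace, and dataProvider variables as above; untested):
// BGRA, little-endian, alpha byte present but ignored
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow,
                                 colorSpace, bitmapInfo,
                                 dataProvider, NULL, NO, kCGRenderingIntentDefault);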
It turns out the data was just in the wrong format and I wasn't feeding it into the CGImageCreate function correctly.
The data comes out in BGRA format so I fed this data into an IplImage structure (I'm using OpenCV v 2.4.9) like so:
// Pack IplImage with data
IplImage *img = cvCreateImage(cvSize((int)width, (int)height), 8, 4);
img->imageData = (char *)data;
I then converted it to RGB like so:
IplImage *converted = cvCreateImage(cvSize((int)width, (int)height), 8, 3);
cvCvtColor(img, converted, CV_BGRA2RGB);
I then fed the data from the converted IplImage into a CGImageCreate function, and it works nicely.
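That final CGImageCreate call isn't shown above, but a sketch of it might look like this (assuming a tightly packed 24-bit RGB buffer; widthStep is the IplImage row stride):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, converted->imageData,
                                                          converted->widthStep * (int)height, NULL);
CGImageRef rgbImage = CGImageCreate(width, height, 8, 24, converted->widthStep,
                                    colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaNone,
                                    provider, NULL, NO, kCGRenderingIntentDefault);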

How to create CGImageRef from NSData string data (NOT UIImage)

How does one create a new CGImageRef without a UIImage? I can't use image.CGImage
I am receiving a base64 encoded image as a std::string from a server process. The first part of the code below simulates receiving the encoded string.
- (UIImage *)testChangeImageToBase64String
{
    UIImage *processedImage = [UIImage imageNamed:@"myFile.jpg"];
    // UIImage to unsigned char *
    CGImageRef imageRef = processedImage.CGImage;
    NSData *data = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
    // encode data to Base64 NSString
    NSString *base64EncodedDataString = [data base64EncodedStringWithOptions:0];
    // create encoded std::string
    std::string encoded([base64EncodedDataString UTF8String]);
    // ***************************************************************************
    // This is where we call the server method and receive the bytes in a std::string
    std::string received = encoded;
    // ***************************************************************************
    // get Base64 encoded std::string into NSString
    NSString *base64EncodedCstring = [NSString stringWithCString:received.c_str() encoding:[NSString defaultCStringEncoding]];
    // NSData from the Base64 encoded std::string
    NSData *nsdataFromBase64String = [[NSData alloc] initWithBase64EncodedString:base64EncodedCstring options:0];
Everything is good... until I try to populate the new image.
When I get the encoded string, I need to get a CGImageRef to get the data back into the correct format to populate a UIImage. If the data is not in the correct format, the UIImage will be nil.
I need to create a new CGImageRef with the nsdataFromBase64String.
Something like:
CGImageRef base64ImageRef = [newCGImageRefFromString:nsdataFromBase64String];
Then I can use imageWithCGImage to put the data into a new UIImage.
Something like:
UIImage *imageFromImageRef = [UIImage imageWithCGImage: base64ImageRef];
Then I can return the UIImage.
return newImage;
}
Please note that the following line will NOT work:
UIImage *newImage = [[UIImage alloc] initWithData:nsdataFromBase64String];
The data needs to be in the correct format or the UIImage will be nil. Hence my question: how do I create a CGImageRef from NSData?
Short-ish answer, since this is mostly just going over what I mentioned in NSChat:
Figure out what the format of the image you're receiving is as well as its size (width and height, in pixels). You mentioned in chat that it's just straight ARGB8 data, so keep that in mind. I'm not sure how you're receiving the other info, if at all.
Using CGImageCreate, create a new image using what you know about the image already (i.e., presumably you know its width, height, and so on — if you don't, you should be packing this in with the image you're sending). E.g., this bundle of boilerplate that nobody likes to write:
// NOTE: have not tested if this even compiles -- consider it pseudocode.
CGImageRef image;
CFDataRef bridgedData;
CGDataProviderRef dataProvider;
CGColorSpaceRef colorSpace;
CGBitmapInfo infoFlags = kCGImageAlphaFirst; // ARGB

// Get a color space
colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);

// Assuming the decoded data is only pixel data
bridgedData = (__bridge CFDataRef)decodedData;
dataProvider = CGDataProviderCreateWithCFData(bridgedData);

// Given size_t width, height which you should already have somehow
image = CGImageCreate(
    width, height, /* bpc */ 8, /* bpp */ 32, /* pitch */ width * 4,
    colorSpace, infoFlags,
    dataProvider, /* decode array */ NULL, /* interpolate? */ TRUE,
    kCGRenderingIntentDefault /* adjust intent according to use */
);

// Release things the image took ownership of.
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(colorSpace);
That code's written with the idea that it's guaranteed to be ARGB_8888, the data is correct, nothing could possibly return NULL, etc. Copy/pasting the above code could potentially cause everything in a three mile radius to explode. Error handling's up to you (e.g., CGColorSpaceCreateWithName can potentially return null).
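As a minimal illustration of that point, a null check on the color space might look like this (purely illustrative; how you recover is up to you):
colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
if (colorSpace == NULL) {
    // Handle the failure here rather than passing NULL into CGImageCreate.
}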
Allocate a UIImage using the CGImage. Since the UIImage will take ownership of/copy the CGImage, release your CGImageRef (actually, the docs say nothing about what UIImage does with the CGImage, but you're not going to use it anymore, so you must release yours).
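In code, that last step is just the one-liner plus the release (using the image variable from the boilerplate above):
UIImage *uiImage = [UIImage imageWithCGImage:image];
CGImageRelease(image); // done with our reference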

How do I access and manipulate JPEG image pixels?

I have a .jpg file. I need to convert it to pixel data and then change the color of some pixels. I do it like this:
NSString *string = [[NSBundle mainBundle] pathForResource:@"pic" ofType:@"jpg"];
NSData *data = [NSData dataWithContentsOfFile:string];
unsigned char *bytesArray = (unsigned char *)data.bytes;
NSUInteger bytesLength = data.length;
//-------- pixel to array
NSMutableArray *array = [[NSMutableArray alloc] initWithCapacity:bytesLength];
for (int i = 0; i < bytesLength; i++) {
    [array addObject:[NSNumber numberWithUnsignedChar:bytesArray[i]]];
}
Here I try to change the color of pixels 95 through 154:
NSNumber *number = [NSNumber numberWithInt:200];
for (int i = 95; i < 155; i++) {
    [array replaceObjectAtIndex:i withObject:number];
}
But when I convert the array back to an image, I get a blurred picture. I don't understand why I don't have an influence on those specific pixels, and why the picture as a whole is affected.
The process of accessing pixel-level data is a little more complicated than your question might suggest, because, as Martin pointed out, JPEG can be a compressed image format. Apple discusses the approved technique for getting pixel data in Technical Q&A QA1509.
Bottom line, to get the uncompressed pixel data for a UIImage, you would:
Get the CGImage for the UIImage.
Get the data provider for that CGImageRef via CGImageGetDataProvider.
Get the binary data associated with that data provider via CGDataProviderCopyData.
Extract some of the information about the image, so you know how to interpret that buffer.
Thus:
UIImage *image = ...
CGImageRef imageRef = image.CGImage; // get the CGImageRef
NSAssert(imageRef, @"Unable to get CGImageRef");
CGDataProviderRef provider = CGImageGetDataProvider(imageRef); // get the data provider
NSAssert(provider, @"Unable to get provider");
NSData *data = CFBridgingRelease(CGDataProviderCopyData(provider)); // get copy of the data
NSAssert(data, @"Unable to copy image data");
NSInteger bitsPerComponent = CGImageGetBitsPerComponent(imageRef); // some other interesting details about image
NSInteger bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
NSInteger bytesPerRow = CGImageGetBytesPerRow(imageRef);
NSInteger width = CGImageGetWidth(imageRef);
NSInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorspace = CGImageGetColorSpace(imageRef);
Given that you want to manipulate this, you presumably want some mutable pixel buffer. The easiest approach would be to make a mutableCopy of that NSData object and manipulate it there, but in these cases, I tend to fall back to C, creating a void *outputBuffer, into which I copy the original pixels and manipulate using traditional C array techniques.
To create the buffer:
void *outputBuffer = malloc(width * height * bitsPerPixel / 8);
NSAssert(outputBuffer, @"Unable to allocate buffer");
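Copying the original pixels into that working buffer is then a single memcpy (assuming the rows are tightly packed, i.e. bytesPerRow == width * bitsPerPixel / 8; otherwise copy row by row):
memcpy(outputBuffer, data.bytes, data.length); // assumes data.length == height * bytesPerRow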
For the precise details on how to manipulate it, you have to look at bitmapInfo (which will tell you whether it's RGBA or ARGB; whether it's floating point or integer) and bitsPerComponent (which will tell you whether it's 8 or 16 bits per component, etc.). For example, a very common JPEG format is 8 bits per component, four components, RGBA (i.e. red, green, blue, and alpha, in that order). But you really need to check those various properties we extracted from the CGImageRef to make sure. See the discussion in the Quartz 2D Programming Guide - Bitmap Images and Image Masks for more information. I personally find "Figure 11-2" to be especially illuminating.
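For instance, assuming the common 8-bits-per-component RGBA layout just described, setting a single pixel at hypothetical coordinates (x, y) to opaque red might look like this sketch:
uint8_t *pixels = (uint8_t *)outputBuffer;
size_t x = 10, y = 20;                    // hypothetical coordinates
size_t offset = y * bytesPerRow + x * 4;  // 4 bytes per pixel (RGBA)
pixels[offset]     = 255; // red
pixels[offset + 1] = 0;   // green
pixels[offset + 2] = 0;   // blue
pixels[offset + 3] = 255; // alpha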
The next logical question is when you're done manipulating the pixel data, how to create a UIImage for that. In short, you'd reverse the above process, e.g. create a data provider, create a CGImageRef, and then create a UIImage:
CGDataProviderRef outputProvider = CGDataProviderCreateWithData(NULL, outputBuffer, bytesPerRow * height, releaseData); // pass the buffer size, not sizeof(pointer)
CGImageRef outputImageRef = CGImageCreate(width,
                                          height,
                                          bitsPerComponent,
                                          bitsPerPixel,
                                          bytesPerRow,
                                          colorspace,
                                          bitmapInfo,
                                          outputProvider,
                                          NULL,
                                          NO,
                                          kCGRenderingIntentDefault);
UIImage *outputImage = [UIImage imageWithCGImage:outputImageRef];
CGImageRelease(outputImageRef);
CGDataProviderRelease(outputProvider);
Where releaseData is a C function that simply calls free() on the pixel buffer associated with the data provider:
void releaseData(void *info, const void *data, size_t size)
{
    free((void *)data);
}

CMSampleBufferRef to bitmap?

I'm playing around with the AVScreenShack example from Apple's website (Xcode project) which captures the desktop and displays the capture in a window in quasi real-time.
I have modified the project a little bit and inserted this delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    ...
}
My question is : How do I convert the CMSampleBufferRef instance to CGImageRef ?
Thank you.
Here is how you can create a UIImage from a CMSampleBufferRef. This worked for me when responding to captureStillImageAsynchronouslyFromConnection:completionHandler: on AVCaptureStillImageOutput.
// CMSampleBufferRef imageDataSampleBuffer;
CMBlockBufferRef buff = CMSampleBufferGetDataBuffer(imageDataSampleBuffer);
size_t len = CMBlockBufferGetDataLength(buff);
char *data = NULL;
CMBlockBufferGetDataPointer(buff, 0, NULL, &len, &data);
NSData *d = [[NSData alloc] initWithBytes:data length:len];
UIImage *img = [[UIImage alloc] initWithData:d];
It looks like the data coming out of CMBlockBufferGetDataPointer is JPEG data.
UPDATE: To fully answer your question, you can read the CGImage property of img from my code to actually get a CGImageRef.
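That is, assuming the JPEG data decoded successfully:
CGImageRef cgImage = img.CGImage; // owned by img; retain it if it must outlive the UIImage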

Read Multiframe DICOM images using ITK

I can read single-frame DICOM images fine, but I am not sure how to go about reading a multi-frame DICOM file, i.e. one that has multiple images in a single DICOM file. Here is the code I use to read a single-frame DICOM. I am thinking along the lines that the loaded image buffer (imageBuf) should be divided into as many parts as there are frames in the DICOM, with each part used to construct an image. Can that be done?
int imageWidth = 880;
int imageHeight = 635;
NSString *dicomPath = [[[[NSBundle mainBundle] resourcePath] stringByAppendingString:@"/"] stringByAppendingString:@"your dicom file name"];
const char *c_dicomPath = [dicomPath UTF8String];
typedef unsigned char InputPixelType;
const unsigned int InputDimension = 3;
typedef itk::Image< InputPixelType, InputDimension > InputImageType;
typedef itk::ImageFileReader< InputImageType > ReaderType;
ReaderType::Pointer reader = ReaderType::New();
reader->SetFileName(c_dicomPath);
typedef itk::GDCMImageIO ImageIOType;
ImageIOType::Pointer gdcmImageIO = ImageIOType::New();
reader->SetImageIO(gdcmImageIO);
InputPixelType *imageBuf = (InputPixelType*)malloc(sizeof(InputPixelType)*imageHeight*imageWidth*3);
reader->Update();
//get dicom image
memset(imageBuf, 0, sizeof(InputPixelType)*imageHeight*imageWidth*3);
gdcmImageIO->Read(imageBuf);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGDataProviderRef provider = CGDataProviderCreateWithData(nil, imageBuf, imageWidth*imageHeight*3*sizeof(InputPixelType), nil);
CGImageRef imageRef = CGImageCreate(imageWidth,                              // width
                                    imageHeight,                             // height
                                    8,                                       // size_t bitsPerComponent
                                    24,                                      // size_t bitsPerPixel
                                    imageWidth * sizeof(InputPixelType) * 3, // size_t bytesPerRow
                                    colorspace,                              // CGColorSpaceRef space
                                    kCGBitmapByteOrderDefault,               // CGBitmapInfo bitmapInfo
                                    provider,                                // CGDataProviderRef provider
                                    nil,                                     // const CGFloat *decode
                                    NO,                                      // bool shouldInterpolate
                                    kCGRenderingIntentDefault                // CGColorRenderingIntent intent
                                    );
// here is the DICOM image decoded from the DICOM file
UIImage *dicomImage = [[UIImage alloc] initWithCGImage:imageRef scale:1.0 orientation:UIImageOrientationUp];
Did you try itk::ImageSeriesReader?
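Alternatively, if you stay with GDCMImageIO, the buffer-slicing idea from the question seems plausible. Here is an untested sketch (numberOfFrames is a stand-in for however you obtain the frame count, e.g. from the DICOM metadata, and frames are assumed to be packed back to back in the buffer):
size_t frameSize = (size_t)imageWidth * imageHeight * 3; // pixels per frame, 3 components each
for (unsigned int frame = 0; frame < numberOfFrames; frame++) {
    InputPixelType *frameBuf = imageBuf + frame * frameSize;
    // Feed frameBuf through the same CGDataProviderCreateWithData / CGImageCreate
    // sequence shown above to build one image per frame.
}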