How do I access and manipulate JPEG image pixels? - objective-c

I have a JPG file. I need to convert it to pixel data and then change the color of some pixels. I do it like this:
NSString *string = [[NSBundle mainBundle] pathForResource:@"pic" ofType:@"jpg"];
NSData *data = [NSData dataWithContentsOfFile:string];
const unsigned char *bytesArray = data.bytes;
NSUInteger bytesLength = data.length;
//--------pixel to array
NSMutableArray *array = [[NSMutableArray alloc] initWithCapacity:bytesLength];
for (NSUInteger i = 0; i < bytesLength; i++) {
    [array addObject:[NSNumber numberWithUnsignedChar:bytesArray[i]]];
}
Here I try to change the color of pixels 95 through 154:
NSNumber *number = [NSNumber numberWithInt:200];
for (int i = 95; i < 155; i++) {
    [array replaceObjectAtIndex:i withObject:number];
}
But when I convert the array back to an image, I get a blurred picture. I don't understand why I can't affect individual pixels, and why my changes affect the picture as a whole.

The process of accessing pixel-level data is a little more complicated than your question might suggest because, as Martin pointed out, JPEG is a compressed image format: the bytes in the file are not pixels. Apple discusses the approved technique for getting pixel data in Technical Q&A QA1509.
Bottom line, to get the uncompressed pixel data for a UIImage, you would:
1. Get the CGImage for the UIImage.
2. Get the data provider for that CGImageRef via CGImageGetDataProvider.
3. Get the binary data associated with that data provider via CGDataProviderCopyData.
4. Extract some of the information about the image, so you know how to interpret that buffer.
Thus:
UIImage *image = ...;
CGImageRef imageRef = image.CGImage;                                // get the CGImageRef
NSAssert(imageRef, @"Unable to get CGImageRef");

CGDataProviderRef provider = CGImageGetDataProvider(imageRef);      // get the data provider
NSAssert(provider, @"Unable to get provider");

NSData *data = CFBridgingRelease(CGDataProviderCopyData(provider)); // get copy of the data
NSAssert(data, @"Unable to copy image data");

// some other interesting details about the image
NSInteger bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
NSInteger bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
NSInteger bytesPerRow = CGImageGetBytesPerRow(imageRef);
NSInteger width = CGImageGetWidth(imageRef);
NSInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorspace = CGImageGetColorSpace(imageRef);
Given that you want to manipulate this, you presumably want some mutable pixel buffer. The easiest approach would be to make a mutableCopy of that NSData object and manipulate it there, but in these cases, I tend to fall back to C, creating a void *outputBuffer, into which I copy the original pixels and manipulate using traditional C array techniques.
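For instance, a minimal sketch of the mutableCopy route:
NSMutableData *mutablePixels = [data mutableCopy];
uint8_t *pixels = mutablePixels.mutableBytes; // modify in place; ARC owns the buffer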
To create the C buffer instead:
void *outputBuffer = malloc(bytesPerRow * height); // bytesPerRow accounts for any row padding
NSAssert(outputBuffer, @"Unable to allocate buffer");
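Then copy the original pixels into it; a one-line sketch (for data copied from the provider, data.length should equal bytesPerRow * height):
memcpy(outputBuffer, data.bytes, data.length);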
For the precise details on how to manipulate it, you have to look at bitmapInfo (which will tell you whether it's RGBA or ARGB; whether it's floating point or integer) and bitsPerComponent (which will tell you whether it's 8 or 16 bits per component, etc.). For example, a very common JPEG format is 8 bits per component, four components, RGBA (i.e. red, green, blue, and alpha, in that order). But you really need to check those various properties we extracted from the CGImageRef to make sure. See the discussion in the Quartz 2D Programming Guide - Bitmap Images and Image Masks for more information. I personally find "Figure 11-2" to be especially illuminating.
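For illustration only, here is a sketch that sets one pixel to opaque red, assuming 8 bits per component and RGBA ordering (verify bitmapInfo before relying on this layout):
NSInteger x = 10, y = 20; // hypothetical coordinates
uint8_t *pixel = (uint8_t *)outputBuffer + y * bytesPerRow + x * (bitsPerPixel / 8);
pixel[0] = 255; // red
pixel[1] = 0;   // green
pixel[2] = 0;   // blue
pixel[3] = 255; // alpha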
The next logical question is when you're done manipulating the pixel data, how to create a UIImage for that. In short, you'd reverse the above process, e.g. create a data provider, create a CGImageRef, and then create a UIImage:
CGDataProviderRef outputProvider = CGDataProviderCreateWithData(NULL, outputBuffer, bytesPerRow * height, releaseData);
CGImageRef outputImageRef = CGImageCreate(width,
                                          height,
                                          bitsPerComponent,
                                          bitsPerPixel,
                                          bytesPerRow,
                                          colorspace,
                                          bitmapInfo,
                                          outputProvider,
                                          NULL,
                                          NO,
                                          kCGRenderingIntentDefault);
UIImage *outputImage = [UIImage imageWithCGImage:outputImageRef];
CGImageRelease(outputImageRef);
CGDataProviderRelease(outputProvider);
Where releaseData is a C function that simply calls free() on the pixel buffer associated with the data provider:
void releaseData(void *info, const void *data, size_t size)
{
    free((void *)data);
}

Related

Memory problems while converting cvMat to UIImage

Before posting this question here, I read all the materials and similar posts on it, but I can't get the main idea of what is happening and how to fix it. In about ten similar questions, everyone fixed this problem with @autoreleasepool, but in my case I was unable to achieve my goal that way. While converting a cv::Mat to a UIImage, memory keeps increasing, depending on the image size.
Below are the steps I perform before converting the Mat to a UIImage:
cv::Mat undistorted = cv::Mat(cvSize(maxWidth,maxHeight), CV_8UC1);
cv::Mat original = [MatStructure convertUIImageToMat:adjustedImage];
cv::warpPerspective(original, undistorted, cv::getPerspectiveTransform(src, dst), cvSize(maxWidth, maxHeight));
original.release();
adjustedImage = [MatStructure convertMatToUIImage:undistorted];
undistorted.release();
The problem is visible while converting my Mat to a UIImage: memory goes up to 400 MB and rises on every cycle.
+ (UIImage *)convertMatToUIImage:(cv::Mat)cvMat {
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGBitmapInfo bmInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
    CGImageRef imageRef = CGImageCreate(cvMat.cols,               // width
                                        cvMat.rows,               // height
                                        8,                        // bits per component
                                        8 * cvMat.elemSize(),     // bits per pixel
                                        cvMat.step.p[0],          // bytesPerRow
                                        colorSpace,               // colorspace
                                        bmInfo,                   // bitmap info
                                        provider,                 // CGDataProviderRef
                                        NULL,                     // decode
                                        false,                    // should interpolate
                                        kCGRenderingIntentDefault // intent
                                        );
    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    cvMat.release(); // this line is optional
    return image;
}
I have seen a lot of similar code, and every example works like this one.
I believe the problem lies in (__bridge CFDataRef), and that ARC can't clean up this data. If I try CFRelease((__bridge CFDataRef)data), the program crashes, because it looks for allocated memory that has already been freed.
I am using OpenCV 3 and have tried its MatToUIImage method, but the problem still exists. The Leaks profiler shows no leaks at all, and the most memory-expensive task is convertMatToUIImage.
I have been reading about this all day but can't find a useful solution yet.
Currently I work in Swift 3.0 with a class that inherits from class XXX; it uses the Objective-C class to crop something and then return a UIImage. In deinit I set this inherited class's property to nil, but the problem still exists. I also think that dataWithBytes duplicates the memory: if I have 16 MB at the start, it will be 32 MB after creating the NSData.
If you can suggest useful threads about this problem, I will be glad to read them all. Thanks for the help.
After working on this problem for more than three days, I had to rewrite the function, and then it worked 100%; I have tested it on five different devices.
CFRelease, free(), and @autoreleasepool did not help me at all, so I implemented this:
data = UIImageJPEGRepresentation([[UIImage alloc] initWithCGImage:imageRef], 0.2f); // because images are 30 MB and up
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *appFile = [documentsDirectory stringByAppendingPathComponent:@"MyFile.jpeg"];
[data writeToFile:appFile atomically:NO];
data = nil;
After this solution everything worked fine. So I grab the UIImage, convert it to NSData, save that to the local documents directory, and the only thing left is to read the data back from the directory. I hope this thread helps someone one day.
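To finish the round trip, the saved file can be read back when needed; a minimal sketch using the same appFile path:
NSData *savedData = [NSData dataWithContentsOfFile:appFile];
UIImage *reloadedImage = [UIImage imageWithData:savedData];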

How to create a CGImageRef from a NSBitmapImageRep?

How can I create a CGImageRef from an NSBitmapImageRep?
Or how can I define a completely new CGImageRef in the same way as the NSBitmapImageRep? Defining the NSBitmapImageRep works fine, but I need the image as a CGImageRef.
unsigned char *plane = (unsigned char *)[data bytes]; // data = 3 bytes for each RGB pixel
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:&plane
                  pixelsWide:width
                  pixelsHigh:height
               bitsPerSample:depth
             samplesPerPixel:channel
                    hasAlpha:NO
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
              //bitmapFormat:NSAlphaFirstBitmapFormat
                 bytesPerRow:channel * width
                bitsPerPixel:channel * depth];
I have no idea how to create the CGImageRef from the NSBitmapImageRep or how to define a new CGImageRef:
CGImageRef imageRef = CGImageCreate(width, height, depth, channel*depth, channel*width, CGColorSpaceCreateDeviceRGB(), ... );
Please, can somebody give me a hint?
The easy way is by using the CGImage property (introduced in 10.5):
CGImageRef image = imageRep.CGImage;
Documentation:
https://developer.apple.com/library/mac/documentation/Cocoa/Reference/ApplicationKit/Classes/NSBitmapImageRep_Class/index.html#//apple_ref/occ/instm/NSBitmapImageRep/CGImage
Return Value
Returns an autoreleased CGImageRef opaque type based on the receiver’s current bitmap data.

Discussion
The returned CGImageRef has pixel dimensions that are identical to the receiver’s. This method might return a preexisting CGImageRef opaque type or create a new one. If the receiver is later modified, subsequent invocations of this method might return different CGImageRef opaque types.
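One caveat worth noting: since the returned CGImageRef is autoreleased and may be tied to the image rep, you may want to retain it if it must outlive the receiver; a minimal sketch:
CGImageRef cgImage = CGImageRetain(imageRep.CGImage);
// ... use cgImage, even after imageRep is gone ...
CGImageRelease(cgImage);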
From your code snippet, it seems you're starting with an NSData object. So, your question seems to be how to create a CGImage from a data object. In that case, there's no reason to go through NSBitmapImageRep.
You were almost there with the call to CGImageCreate(). You just needed to figure out how to supply a CGDataProvider to it. You can create a CGDataProvider from an NSData pretty directly, once you realize that NSData is toll-free bridged with CFData. So:
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGColorSpaceRef colorspace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGImageRef image = CGImageCreate(width, height, depth / 3, depth, channel*width, colorspace, kCGImageAlphaNone, provider, NULL, TRUE, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorspace);

How to create CGImageRef from NSData string data (NOT UIImage)

How does one create a new CGImageRef without a UIImage? I can't use image.CGImage
I am receiving a base64 encoded image as a std::string from a server process. The first part of the code below simulates receiving the encoded string.
- (UIImage *)testChangeImageToBase64String
{
    UIImage *processedImage = [UIImage imageNamed:@"myFile.jpg"];

    // UIImage to unsigned char *
    CGImageRef imageRef = processedImage.CGImage;
    NSData *data = CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));

    // encode data to Base64 NSString
    NSString *base64EncodedDataString = [data base64EncodedStringWithOptions:0];

    // create encoded std::string
    std::string encoded([base64EncodedDataString UTF8String]);

    // ***************************************************************************
    // This is where we call the server method and receive the bytes in a std::string
    std::string received = encoded;
    // ***************************************************************************

    // get Base64 encoded std::string into NSString
    NSString *base64EncodedCstring = [NSString stringWithCString:received.c_str() encoding:[NSString defaultCStringEncoding]];

    // NSData from the Base64 encoded std::string
    NSData *nsdataFromBase64String = [[NSData alloc] initWithBase64EncodedString:base64EncodedCstring options:0];
Everything is good... until I try to populate the new UIImage.
When I get the encoded string, I need to get a CGImageRef to get the data back into the correct format to populate a UIImage. If the data is not in the correct format the UIImage will be nil.
I need to create a new CGImageRef with the nsdataFromBase64String.
Something like:
CGImageRef base64ImageRef = [newCGImageRefFromString:nsdataFromBase64String];
Then I can use imageWithCGImage to put the data into a new UIImage.
Something like:
UIImage *imageFromImageRef = [UIImage imageWithCGImage: base64ImageRef];
Then I can return the UIImage.
return newImage;
}
Please note that the following line will NOT work:
UIImage *newImage = [[UIImage alloc] initWithData:nsdataFromBase64String];
The data needs to be in the correct format or the UIImage will be nil. Hence, my question, "How do I create a CGImageRef with NSData?"
Short-ish answer, since this is mostly just going over what I mentioned in NSChat:
1. Figure out what the format of the image you're receiving is, as well as its size (width and height, in pixels). You mentioned in chat that it's just straight ARGB8 data, so keep that in mind. I'm not sure how you're receiving the other info, if at all.
2. Using CGImageCreate, create a new image using what you know about the image already (i.e., presumably you know its width, height, and so on; if you don't, you should be packing this in with the image you're sending). E.g., this bundle of boilerplate that nobody likes to write:
// NOTE: have not tested if this even compiles -- consider it pseudocode.
CGImageRef image;
CFDataRef bridgedData;
CGDataProviderRef dataProvider;
CGColorSpaceRef colorSpace;
CGBitmapInfo infoFlags = kCGImageAlphaFirst; // ARGB

// Get a color space
colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);

// Assuming the decoded data is only pixel data
bridgedData = (__bridge CFDataRef)decodedData;
dataProvider = CGDataProviderCreateWithCFData(bridgedData);

// Given size_t width, height which you should already have somehow
image = CGImageCreate(
    width, height, /* bpc */ 8, /* bpp */ 32, /* pitch */ width * 4,
    colorSpace, infoFlags,
    dataProvider, /* decode array */ NULL, /* interpolate? */ TRUE,
    kCGRenderingIntentDefault /* adjust intent according to use */
);

// Release things the image took ownership of.
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(colorSpace);
That code's written with the idea that it's guaranteed to be ARGB_8888, the data is correct, nothing could possibly return NULL, etc. Copy/pasting the above code could potentially cause everything in a three mile radius to explode. Error handling's up to you (e.g., CGColorSpaceCreateWithName can potentially return null).
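For example, a few hypothetical guard checks along those lines:
if (colorSpace == NULL || dataProvider == NULL) {
    // creation failed; clean up whatever was created and bail out
}
if (image == NULL) {
    // CGImageCreate rejected the parameters (size, flags, or data length)
}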
3. Allocate a UIImage using the CGImage. Since the UIImage will take ownership of/copy the CGImage, release your CGImageRef (actually, the docs say nothing about what UIImage does with the CGImage, but you're not going to use it anymore, so you should release your reference).
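That last step might look like this sketch:
UIImage *uiImage = [UIImage imageWithCGImage:image];
CGImageRelease(image); // done with our reference; uiImage keeps whatever it needs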

Edit Color Bytes in UIImage

I'm quite new to working with UIImages on the byte level, but I was hoping that someone could point me to some guides on this matter?
I am ultimately looking to edit the RGBA values of the bytes, based on certain parameters (position, color, etc.) and I know I've come across samples/tutorials for this before, but I just can't seem to find anything now.
Basically, I'm hoping to be able to break a UIImage down to its bytes and iterate over them and edit the bytes' RGBA values individually. Maybe some sample code here would be a big help as well.
I've already been working in the different image contexts and editing the images with the CG power tools, but I would like to be able to work at the byte level.
EDIT:
Sorry, but I do understand that you cannot edit the bytes in a UIImage directly. I should have asked my question more clearly. I meant to ask how can I get the bytes of a UIImage, edit those bytes and then create a new UIImage from those bytes.
As pointed out by @BradLarson, OpenGL is a better option for this, and there is a great library, which was created by @BradLarson, here. Thanks @CSmith for pointing it out!
@MartinR has the right answer; here is some code to get you started:
UIImage *image = your image;
CGImageRef imageRef = image.CGImage;
NSUInteger nWidth = CGImageGetWidth(imageRef);
NSUInteger nHeight = CGImageGetHeight(imageRef);
NSUInteger nBytesPerRow = CGImageGetBytesPerRow(imageRef);
NSUInteger nBitsPerPixel = CGImageGetBitsPerPixel(imageRef);
NSUInteger nBitsPerComponent = CGImageGetBitsPerComponent(imageRef);
NSUInteger nBytesPerPixel = nBitsPerPixel == 24 ? 3 : 4;

unsigned char *rawInput = malloc(nHeight * nBytesPerRow); // nBytesPerRow covers any row padding
CGColorSpaceRef colorSpaceRGB = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawInput, nWidth, nHeight, nBitsPerComponent, nBytesPerRow, colorSpaceRGB, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, nWidth, nHeight), imageRef);
// modify the pixels stored in the array of 4-byte pixels at rawInput
.
.
.
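// For example, a hypothetical inversion pass: with
// kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big, the in-memory byte
// order of each 4-byte pixel is X,R,G,B.
for (NSUInteger y = 0; y < nHeight; y++) {
    unsigned char *pixel = rawInput + y * nBytesPerRow;
    for (NSUInteger x = 0; x < nWidth; x++, pixel += 4) {
        pixel[1] = 255 - pixel[1]; // red
        pixel[2] = 255 - pixel[2]; // green
        pixel[3] = 255 - pixel[3]; // blue
    }
}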
CGImageRef imageRefNew = CGBitmapContextCreateImage(context);
UIImage *imageNew = [[UIImage alloc] initWithCGImage:imageRefNew];
CGImageRelease(imageRefNew); // CGBitmapContextCreateImage returns a +1 reference
CGContextRelease(context);
CGColorSpaceRelease(colorSpaceRGB);
free(rawInput);
You have no direct access to the bytes in a UIImage, and you cannot change them directly.
You have to draw the image into a CGBitmapContext, modify the pixels in the bitmap, and then create a new image from the bitmap context.

C and Objective-C - Correct way to free an unsigned char pointer

In my app I create an unsigned char pointer using this function:
- (unsigned char *)getRawData
{
    // First get the image into your data buffer
    CGImageRef image = [self CGImage];
    NSUInteger width = CGImageGetWidth(image);
    NSUInteger height = CGImageGetHeight(image);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), image);
    CGContextRelease(context);
    // Now rawData contains the image data in the RGBA8888 pixel format.
    return rawData;
}
And in another class I assign a property to that pointer like so: self.bitmapData = [image getRawData];
Where in this process can I free that malloc'd memory? When I try to free the property in dealloc, it gives me an EXC_BAD_ACCESS error. I feel like I'm missing a fundamental C or Objective-C concept here. All help is appreciated.
There is a good discussion about the safety of using malloc/free in Objective-C here.
As long as you correctly free() the memory that you malloc(), there should be no issue.
I personally think that using NSMutableData or NSMutableArray is just easier; if you don't need the ultimate performance, I would not use the C malloc/free calls directly.
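For example, the EXC_BAD_ACCESS in the question usually means the pointer was stored with the wrong property attributes or freed twice; a hypothetical sketch of a setup where free() in dealloc is safe:
@property (nonatomic, assign) unsigned char *bitmapData; // plain pointer, not an object

- (void)dealloc
{
    free(self.bitmapData); // free(NULL) is a safe no-op
    self.bitmapData = NULL;
}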
One way around this sort of issue is to use NSMutableData, so you can replace
unsigned char *rawData = malloc(height * width * 4);
with
myData = [[NSMutableData alloc] initWithCapacity:height * width * 4];
unsigned char *rawData = myData.mutableBytes;
you can then release myData in your deallocator.
Alternatively, you can do
myData = [NSMutableData dataWithCapacity:height * width * 4];
This means your myData is kept around for the duration of the event loop. You can, of course, change the return type of the getRawData method to return NSMutableData or NSData; that way it can be retained by other parts of your code. The only time I return raw bytes in my code is when I know they will be available for the life of the object that returns them; that way, if I need to hold onto the data, I can retain the owner class.
Apple will often use the
myData = [[NSMutableData alloc] initWithCapacity:height * width * 4];
unsigned char *rawData = myData.mutableBytes;
pattern, and then document that if you need the bytes beyond the current autorelease pool cycle, you have to copy them.
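Putting those suggestions together, a sketch of the method rewritten to return an autoreleased NSData instead of raw bytes (rawImageData is a hypothetical name; the drawing code is unchanged from the question):
- (NSData *)rawImageData
{
    CGImageRef image = [self CGImage];
    NSUInteger width = CGImageGetWidth(image);
    NSUInteger height = CGImageGetHeight(image);
    NSMutableData *data = [NSMutableData dataWithLength:height * width * 4]; // zero-filled
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(data.mutableBytes, width, height, 8, width * 4,
                                                 colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);
    return data; // autoreleased; callers never call free()
}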