New CGImage or CGImageRef from std::string using base64 - objective-c

Everyone:
I've been working on this for days. Here's a little bit of background. I'm sending an image to a server using protobuf. The image comes directly from the camera, so it is neither a JPEG nor a PNG. I found code that gets the data out of the UIImage by way of its CGImage, creating a CGImageRef. See the following code:
- (UIImage *)testProcessedImage:(UIImage *)processedImage
{
    CGImageRef imageRef = processedImage.CGImage;
    NSData *data1 = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
Google protobuf uses C++ code to send and receive the bytes to and from the server. When I tried to get the data bytes back into NSData and alloc/init a UIImage with that data, the UIImage was always nil. This tells me that my NSData is not in the correct format.
At first I thought my issue was with the C++ conversion, as shown in my previous question here. But after much frustration, I cut out everything in the middle and just created a UIImage with the CGImageRef, and it worked. See the following code:
- (UIImage *)testProcessedImage:(UIImage *)processedImage
{
    CGImageRef imageRef = processedImage.CGImage;
    NSData *data1 = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
    // Added this line and cut out everything in the middle
    UIImage *image = [UIImage imageWithCGImage:imageRef];
Following is a description of what I ultimately need to do. There are two parts. Part 1 takes a UIImage and converts it into a std::string.
take a UIImage
get the NSData from it
convert the data to unsigned char *
stuff the unsigned char * into a std::string
The string is what we would receive from the protobuf call. Part 2 takes the data from the string and converts it back into the NSData format to populate a UIImage. Following are the steps to do that (a sketch of both parts appears after this list):
convert the std::string to char array
convert the char array to a const char *
put the char * into NSData
return NSData
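To make the two parts concrete, here is a minimal sketch of both conversions; the function names are illustrative, and it assumes the raw bytes survive the round trip unchanged:

std::string stringFromImageData(NSData *data)
{
    // Copy the NSData bytes into a std::string; the (first, last) constructor
    // is length-based, so embedded NUL bytes are preserved.
    const unsigned char *bytes = (const unsigned char *)data.bytes;
    return std::string(bytes, bytes + data.length);
}

NSData *imageDataFromString(const std::string &str)
{
    // Copy the string's bytes back into an NSData of the same length.
    return [NSData dataWithBytes:str.data() length:str.length()];
}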
Now, with that background information, and armed with the fact that populating the UIImage with a CGImageRef works (meaning that data in that format is the correct format to populate the UIImage), I'm looking for help figuring out how to get the decoded base64 data into either a CFDataRef or a CGImageRef. Below is my test method:
- (UIImage *)testProcessedImage:(UIImage *)processedImage
{
    CGImageRef imageRef = processedImage.CGImage;
    NSData *data1 = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
    unsigned char *pixels = (unsigned char *)[data1 bytes];
    unsigned long size = [data1 length];

    // ***************************************************************************
    // This is where we would call transmit and receive the bytes in a std::string
    //
    // The following line simulates that:
    //
    const std::string byteString(pixels, pixels + size);
    //
    // ***************************************************************************

    // converting to base64
    std::string encoded = base64_encode(reinterpret_cast<const unsigned char *>(byteString.c_str()), byteString.length());

    // retrieving base64
    std::string decoded = base64_decode(encoded);

    // put byte array back into NSData format
    NSUInteger usize = decoded.length();
    const char *bytes = decoded.data();
    NSData *data2 = [NSData dataWithBytes:(const void *)bytes length:sizeof(unsigned char) * usize];
    NSLog(@"examine data");

    // But when I try to alloc init a UIImage with the data, the image is nil
    UIImage *image2 = [[UIImage alloc] initWithData:data2];
    NSLog(@"examine image2");

    // *********** Below is my convoluted approach at CFDataRef and CGImageRef ****************
    CFDataRef dataRef = CFDataCreate(NULL, (const UInt8 *)decoded.data(), decoded.length());
    NSData *myData = (__bridge NSData *)dataRef;
    //CGDataProviderRef ref = CGDataProviderCreateWithCFData(dataRef);
    id sublayer = (id)[UIImage imageWithCGImage:imageRef].CGImage;
    UIImage *image3 = [UIImage imageWithCGImage:(__bridge CGImageRef)(sublayer)];
    return image3;
}
As any casual observer can see, I need help. HELP!!! I've tried some of the other questions on SO, such as this one, this one, and this one, and cannot find the information I need for the solution. I admit part of my problem is that I do not understand much about images (like RGBA values and other stuff).
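For reference, raw pixel bytes cannot go through -[UIImage initWithData:], which expects an encoded container such as PNG or JPEG; the usual path back from raw bytes is CGDataProviderCreateWithCFData plus CGImageCreate. A minimal sketch, assuming the original imageRef is still around to copy the geometry and pixel format from:

// Rebuild a CGImage from the decoded raw pixel bytes. Every parameter must
// match the source bitmap exactly, so they are read off the original imageRef.
CFDataRef pixelData = CFDataCreate(kCFAllocatorDefault,
                                   (const UInt8 *)decoded.data(),
                                   decoded.length());
CGDataProviderRef provider = CGDataProviderCreateWithCFData(pixelData);
CGImageRef rebuiltRef = CGImageCreate(CGImageGetWidth(imageRef),
                                      CGImageGetHeight(imageRef),
                                      CGImageGetBitsPerComponent(imageRef),
                                      CGImageGetBitsPerPixel(imageRef),
                                      CGImageGetBytesPerRow(imageRef),
                                      CGImageGetColorSpace(imageRef),
                                      CGImageGetBitmapInfo(imageRef),
                                      provider,
                                      NULL,  // decode array
                                      NO,    // shouldInterpolate
                                      kCGRenderingIntentDefault);
UIImage *rebuilt = [UIImage imageWithCGImage:rebuiltRef];
CGImageRelease(rebuiltRef);
CGDataProviderRelease(provider);
CFRelease(pixelData);

In a real transmit/receive scenario the width, height, bytes-per-row, and bitmap info would have to travel alongside the pixel bytes, since the receiver has no imageRef to copy them from.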

Related

React Native, Objective C returning nil with initWithBase64EncodedString

I'm trying to pass base64 images from React Native to Objective-C so that I can retrieve the image's pixel data with the help of OpenCV.
However, each base64 image returns nil when using initWithBase64EncodedString.
I have tried multiple answers regarding this issue but none worked.
The base64 comes from the result of ImagePicker in React Native:
ImagePicker.launchImageLibrary({...})
Here is the code used in Objective-C:
RCT_EXPORT_METHOD(getImagePixels:(NSString *)imageAsBase64 callback:(RCTResponseSenderBlock)callback)
{
    NSUInteger paddedLength = imageAsBase64.length + ((4 - (imageAsBase64.length % 4)) % 4);
    NSString *correctBase64String = [imageAsBase64 stringByPaddingToLength:paddedLength withString:@"=" startingAtIndex:0];
    /* Here decodedData is always nil */
    NSData *decodedData = [[NSData alloc] initWithBase64EncodedString:correctBase64String options:NSDataBase64DecodingIgnoreUnknownCharacters];
    UIImage *image = [UIImage imageWithData:decodedData];
    cv::Mat imageMat;
    cv::Mat imageMatRGBA;
    UIImageToMat(image, imageMat);
    cv::cvtColor(imageMat, imageMatRGBA, cv::COLOR_BGR2RGBA);
    cv::Mat Image8bit;
    imageMat.convertTo(Image8bit, CV_8UC1);
    unsigned char *pixelsChar = Image8bit.data;
    NSString *pixels = [NSString stringWithUTF8String:(char *)pixelsChar];
    callback(@[[NSNull null], pixels]);
}
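One common cause of a nil result from initWithBase64EncodedString (an assumption here, since the exact ImagePicker payload isn't shown) is a data-URI prefix such as data:image/jpeg;base64, at the front of the string; its letters are themselves valid base64 characters, so even the lenient decoding option cannot skip it. A sketch of stripping it before decoding:

// Illustrative: drop everything up to and including a "base64," marker, if present.
NSRange marker = [imageAsBase64 rangeOfString:@"base64,"];
NSString *payload = (marker.location == NSNotFound)
    ? imageAsBase64
    : [imageAsBase64 substringFromIndex:NSMaxRange(marker)];
NSData *decodedData = [[NSData alloc] initWithBase64EncodedString:payload
                                                          options:NSDataBase64DecodingIgnoreUnknownCharacters];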

Appending random bytes to an image doesn't change the outcome?

So, to simplify the situation, I want to append a couple of random bytes to an image to change its MD5 hash every time.
I have the code set up to look up the image and then create an NSImage. After that it converts the NSImage to NSMutableData, which gives me the opportunity to append random bytes. I then end it all by exporting the altered NSImage to the desktop.
That all works fine and dandy until I run my program twice and compare the MD5 hashes of the two outputs. They are exactly the same! It doesn't matter whether I append 1 or 1000 random bytes; the two outputs are identical.
My code:
- (void)createNewImage:(NSString *)filePath {
    // NSImage from path
    NSImage *newImage = [[NSImage alloc] initWithContentsOfFile:filePath];
    // NSData to NSMutableData
    NSData *imgData = [newImage TIFFRepresentation];
    NSMutableData *mutableData = [imgData mutableCopy];
    // Get the random bytes
    NSData *randomData = [self createRandomBytes:10];
    // Append random data to new image
    [mutableData appendData:randomData];
    // (etc...)
    // Create file path for the new image
    NSString *fileName = @"/Users/Computer/Desktop/MD5/newImage.jpg";
    // Cache the new image
    NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:mutableData];
    NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:1.0] forKey:NSImageCompressionFactor];
    NSData *newData = [imageRep representationUsingType:NSJPEGFileType properties:imageProps];
    [newData writeToFile:fileName atomically:NO];
}

- (NSData *)createRandomBytes:(NSUInteger)amount {
    return [[NSFileHandle fileHandleForReadingAtPath:@"/dev/random"] readDataOfLength:amount];
}
UPDATE:
With the help of picciano, I found that exporting the edited NSData directly achieves my goal:
[mutableData writeToFile:fileName atomically:NO];
HOWEVER, the image is significantly larger. The source image is 182 KB while the new images are 503 KB. picciano's answer explains why this happens, but does anyone happen to have a workaround for the inflation?
You are adding random data, but it is not being used in creating the image. When the image is converted back to a JPG data representation, only the valid portion of the image data is used.
To verify this, check the length of your newData object.
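If the goal is only to change the hash, one possible workaround for the inflation (a sketch, not tested against the asker's setup) is to skip the decode/re-encode cycle entirely and append to the original file's bytes; JPEG readers generally ignore trailing data after the end-of-image marker, so the output stays within a few bytes of the source size:

// Append directly to the original file's bytes: no NSImage, no re-encode.
NSMutableData *fileData = [[NSData dataWithContentsOfFile:filePath] mutableCopy];
[fileData appendData:[self createRandomBytes:10]];
[fileData writeToFile:fileName atomically:NO];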

Image size anomaly

I have an image in the form of an NSURL as input. I converted this URL to an NSImage and then to NSData, from which I could get a CGImageRef. This imageRef helped me extract the raw data information from the image, such as the height, width, bytesPerRow, etc.
Here's the code that I used:
NSString *urlName = [url path];
NSImage *image = [[NSImage alloc] initWithContentsOfFile:urlName];
NSData *imageData = [image TIFFRepresentation];
// __bridge leaves ownership with ARC; the original CFBridgingRetain leaked a reference
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
NSUInteger numberOfBitsPerPixel = CGImageGetBitsPerPixel(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
...
...
Now, I checked the size of the image using:
int sz = [imageData length];
which is different from int sz' = bytesPerRow * height.
I cannot understand why there is such a difference; sz is actually half of sz'.
Am I making some mistake while extracting the various pieces of info? My best guess is that some compression is applied during the conversion of the image to NSData. In that case, what should I use to get reliable data?
I am new to the world of image processing in Objective-C, so please bear with me!
P.S. I actually checked the size of the file that I am getting as input in the form of an NSURL, and it is the same as sz.
Try This:
Instead of
NSData *imageData = [image TIFFRepresentation];
use this:
NSData *imageData = [image TIFFRepresentationUsingCompression:NSTIFFCompressionLZW factor:0];
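For what it's worth, the two numbers measure different things, so a mismatch is expected: [imageData length] is the size of the encoded TIFF container, while bytesPerRow * height is the size of the decoded bitmap in memory. A small check to see both side by side:

// Encoded (container) size vs. decoded (bitmap) size -- these rarely agree.
NSUInteger encodedSize = [imageData length];
size_t decodedSize = CGImageGetBytesPerRow(imageRef) * CGImageGetHeight(imageRef);
NSLog(@"encoded: %lu bytes, decoded: %zu bytes", (unsigned long)encodedSize, decodedSize);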

Building NSImage from bytes

I'm trying to build an NSImage from some strange bytes.
I'm using the BlackMagic SDK to get the bytes of a received frame:
unsigned char *frame3 = NULL;
unsigned char *frame2 = (Byte *)malloc(699840);
videoFrame->GetBytes((void **)&frame3);
memcpy(frame2, frame3, 699840);
// Note: sizeof(frame2) would be the size of the pointer (8 bytes), not the
// buffer, so the length has to be given explicitly.
NSData *data = [NSData dataWithBytes:frame2 length:699840];
NSImage *image = [[NSImage alloc] initWithData:data];
//(for now I use 699840 statically, because I know the frame's size)
The reason I said the bytes are strange is that the content of frame2 looks like this:
printf("content: %s", frame2);
\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200.........\200 (to the end)
It should be a blank black frame.
Does somebody know how I could figure out what is going on here?
You should use these APIs to get an image from data bytes:
NSString *filePath = [yourDirectory stringByAppendingPathComponent:@"imageName.jpg"];
[data writeToFile:filePath atomically:YES];
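A note on the symptoms: initWithData: expects an encoded container (PNG, JPEG, TIFF, ...), not raw frame bytes, so it will fail here regardless of content. Also, 699840 = 720 x 486 x 2 bytes, which suggests an 8-bit YUV 4:2:2 frame, and a run of \200 (0x80) bytes is consistent with YUV's neutral chroma value rather than RGB black. A hedged sketch of the usual raw-bytes route instead, assuming the frame has first been converted to packed 24-bit RGB (the geometry values are placeholders, and the calloc'd buffer stands in for the converted frame):

// Illustrative only: wrap already-converted RGB bytes in a bitmap rep.
int width = 720, height = 486;  // placeholder geometry
unsigned char *rgbBytes = (unsigned char *)calloc(width * height, 3); // stand-in for the converted frame
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:&rgbBytes
                  pixelsWide:width
                  pixelsHigh:height
               bitsPerSample:8
             samplesPerPixel:3
                    hasAlpha:NO
                    isPlanar:NO
              colorSpaceName:NSDeviceRGBColorSpace
                 bytesPerRow:width * 3
                bitsPerPixel:24];
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[image addRepresentation:rep];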

Processing images using Leptonica in an Xcode project

In Xcode, I am trying to pre-process an image prior to sending it off for OCR. The OCR engine, Tesseract, handles images via the Leptonica library.
As an example:
Leptonica offers the function pixConvertTo8(). Is there a way to "transfer" the raw image data from UIImage -> PIX (see pix.h from the Leptonica library), perform the pixConvertTo8(), and go back from PIX -> UIImage, preferably without saving it to a file for the transition, all in memory?
- (void)processImage:(UIImage *)uiImage
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    // preprocess UIImage here with e.g. pixConvertTo8();
    CGSize imageSize = [uiImage size];
    int bytes_per_line = (int)CGImageGetBytesPerRow([uiImage CGImage]);
    int bytes_per_pixel = (int)CGImageGetBitsPerPixel([uiImage CGImage]) / 8.0;
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider([uiImage CGImage]));
    const UInt8 *imageData = CFDataGetBytePtr(data);
    // this could take a while.
    char *text = tess->TesseractRect(imageData,
                                     bytes_per_pixel,
                                     bytes_per_line,
                                     0, 0,
                                     imageSize.width, imageSize.height);
These two functions will do the trick:
- (void)startTesseract
{
    //code from http://robertcarlsen.net/2009/12/06/ocr-on-iphone-demo-1043
    NSString *dataPath =
        [[self applicationDocumentsDirectory] stringByAppendingPathComponent:@"tessdata"];
    /*
     Set up the data in the docs dir
     want to copy the data to the documents folder if it doesn't already exist
     */
    NSFileManager *fileManager = [NSFileManager defaultManager];
    // If the expected store doesn't exist, copy the default store.
    if (![fileManager fileExistsAtPath:dataPath]) {
        // get the path to the app bundle (with the tessdata dir)
        NSString *bundlePath = [[NSBundle mainBundle] bundlePath];
        NSString *tessdataPath = [bundlePath stringByAppendingPathComponent:@"tessdata"];
        if (tessdataPath) {
            [fileManager copyItemAtPath:tessdataPath toPath:dataPath error:NULL];
        }
    }
    NSString *dataPathWithSlash = [[self applicationDocumentsDirectory] stringByAppendingString:@"/"];
    setenv("TESSDATA_PREFIX", [dataPathWithSlash UTF8String], 1);
    // init the tesseract engine.
    tess = new tesseract::TessBaseAPI();
    tess->Init([dataPath cStringUsingEncoding:NSUTF8StringEncoding], "eng");
}

- (NSString *)ocrImage:(UIImage *)uiImage
{
    //code from http://robertcarlsen.net/2009/12/06/ocr-on-iphone-demo-1043
    CGSize imageSize = [uiImage size];
    double bytes_per_line = CGImageGetBytesPerRow([uiImage CGImage]);
    double bytes_per_pixel = CGImageGetBitsPerPixel([uiImage CGImage]) / 8.0;
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider([uiImage CGImage]));
    const UInt8 *imageData = CFDataGetBytePtr(data);
    imageThresholder = new tesseract::ImageThresholder();
    imageThresholder->SetImage(imageData, (int)imageSize.width, (int)imageSize.height, (int)bytes_per_pixel, (int)bytes_per_line);
    // this could take a while. maybe needs to happen asynchronously.
    tess->SetImage(imageThresholder->GetPixRect());
    char *text = tess->GetUTF8Text();
    // Do something useful with the text!
    NSLog(@"Converted text: %@", [NSString stringWithCString:text encoding:NSUTF8StringEncoding]);
    return [NSString stringWithCString:text encoding:NSUTF8StringEncoding];
}
You will have to declare both tess and imageThresholder in the .h file:
tesseract::TessBaseAPI *tess;
tesseract::ImageThresholder *imageThresholder;
I've found some good code snippets in the Tesseract OCR engine showing how to do this, notably in the class ImageThresholder inside thresholder.cpp (see the link below). I didn't test it yet, but here is a short description:
The interesting part for me is the else block in which the depth is 32; here pixCreate(), pixGetData(), and pixGetWpl() do the actual work. The thresholder.cpp from the Tesseract engine uses the above-mentioned method.
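Based on that description, here is an untested sketch of the UIImage -> PIX direction, mirroring the depth == 32 branch of thresholder.cpp. It assumes 32-bit RGBA input; Leptonica packs 32bpp pixels per machine word, so a byte-order fixup such as pixEndianByteSwap() may still be needed on little-endian hosts.

#import <UIKit/UIKit.h>
#include "allheaders.h"  // Leptonica

// Hypothetical helper: copy a UIImage's raw 32-bit pixels into a Leptonica PIX.
static PIX *pixFromUIImage(UIImage *uiImage)
{
    CGImageRef cgImage = uiImage.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
    CFDataRef cfData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    const UInt8 *src = CFDataGetBytePtr(cfData);

    PIX *pix = pixCreate((l_int32)width, (l_int32)height, 32);
    l_uint32 *dst = pixGetData(pix);
    l_int32 wpl = pixGetWpl(pix);  // words (not bytes) per line
    for (size_t y = 0; y < height; y++) {
        memcpy(dst + y * wpl, src + y * bytesPerRow, width * 4);
    }
    CFRelease(cfData);
    return pix;  // e.g. PIX *pix8 = pixConvertTo8(pix, 0); ... pixDestroy(&pix);
}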