React Native, Objective-C returning nil with initWithBase64EncodedString - objective-c

I'm trying to pass base64 images from React Native to Objective-C so that I can retrieve an image's pixel data with the help of OpenCV.
However, initWithBase64EncodedString returns nil for every base64 image.
I have tried several answers regarding this issue, but none of them worked.
The base64 string comes from the result of ImagePicker in React Native:
ImagePicker.launchImageLibrary({...})
Here is the code used in Objective-C:
RCT_EXPORT_METHOD(getImagePixels:(NSString *)imageAsBase64 callback:(RCTResponseSenderBlock)callback)
{
  NSUInteger paddedLength = imageAsBase64.length + ((4 - (imageAsBase64.length % 4)) % 4);
  NSString *correctBase64String = [imageAsBase64 stringByPaddingToLength:paddedLength withString:@"=" startingAtIndex:0];
  /* Here decodedData is always nil */
  NSData *decodedData = [[NSData alloc] initWithBase64EncodedString:correctBase64String options:NSDataBase64DecodingIgnoreUnknownCharacters];
  UIImage *image = [UIImage imageWithData:decodedData];

  cv::Mat imageMat;
  cv::Mat imageMatRGBA;
  UIImageToMat(image, imageMat);
  cv::cvtColor(imageMat, imageMatRGBA, cv::COLOR_BGR2RGBA);

  cv::Mat Image8bit;
  imageMat.convertTo(Image8bit, CV_8UC1);

  unsigned char *pixelsChar = Image8bit.data;
  NSString *pixels = [NSString stringWithUTF8String:(char *)pixelsChar];
  callback(@[[NSNull null], pixels]);
}
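One common cause worth ruling out, although the question does not confirm it, is that the string handed over from React Native still contains a data:image/...;base64, prefix or embedded line breaks. A minimal sanitizing sketch before the decode, assuming the same imageAsBase64 parameter as above:

NSString *payload = imageAsBase64;
NSRange marker = [payload rangeOfString:@"base64,"];
if (marker.location != NSNotFound) {
  // strip a possible "data:<mime>;base64," prefix
  payload = [payload substringFromIndex:NSMaxRange(marker)];
}
// remove stray whitespace and newlines before decoding
payload = [[payload componentsSeparatedByCharactersInSet:
             [NSCharacterSet whitespaceAndNewlineCharacterSet]]
             componentsJoinedByString:@""];
NSData *decodedData = [[NSData alloc] initWithBase64EncodedString:payload
                                                           options:NSDataBase64DecodingIgnoreUnknownCharacters];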

Related

OpenCV QRCodeDetector not working for iOS Objective-C++

I am trying to complete an app for a university project that will allow the user to take a photo which contains a QR code and some specific colours that could change over time. My app should be able to detect the QR code and changes in the colours. I am trying to use React Native and OpenCV to complete this task, concentrating on iOS first (I had hoped it would be easier). So far, using the OpenCV tutorial by brainhubeu/react-native-opencv-tutorial on GitHub, I am able to take a picture and check for blur, but when it comes to detecting and decoding the QR code, it doesn't seem to locate any QR code in the image.
#import "RNOpenCVLibrary.h"
#import <React/RCTLog.h>
@implementation RNOpenCVLibrary
- (dispatch_queue_t)methodQueue
{
return dispatch_get_main_queue();
}
RCT_EXPORT_MODULE()
RCT_EXPORT_METHOD(addEvent:(NSString *)name location:(NSString *)location)
{
RCTLogInfo(#"Pretending to create an event %# at %#", name, location);
};
RCT_EXPORT_METHOD(checkForBlurryImage:(NSString *)imageAsBase64 callback:(RCTResponseSenderBlock)callback) {
RCTLog(#"%#", imageAsBase64);
UIImage* image = [self decodeBase64ToImage:imageAsBase64];
BOOL isImageBlurryResult = [self isImageBlurry:image];
id objects[] = { isImageBlurryResult ? @YES : @NO };
NSUInteger count = sizeof(objects) / sizeof(id);
NSArray *dataArray = [NSArray arrayWithObjects:objects
count:count];
callback(@[[NSNull null], dataArray]);
};
RCT_EXPORT_METHOD(checkForQRCode:(NSString *)imageAsBase64) {
RCTLog(#"%#", imageAsBase64);
UIImage* image = [self decodeBase64ToImage:imageAsBase64];
cv::QRCodeDetector qrDecoder = cv::QRCodeDetector();
BOOL isQRPresentResult = [self isQRPresent:image];
NSString *decodedQrData = [self decodeQRCode:image];
//BOOL isQRPresentResult = 1;
//std::string data = qrDecoder.detectAndDecode(matImage);
//std::string data = "testing";
//NSString* result = [NSString stringWithUTF8String:data.c_str()];
//NSString* result = #"test";
RCTLogInfo(#"Pretending to create an event %#", decodedQrData);
RCTLog(isQRPresentResult ? #"yes" : #"No");
};
-(BOOL)isQRPresent:(UIImage*)image{
cv::Mat matImage = [self convertUIImageToCVMat:image];
cv::Mat matImageGrey;
// converting image's color space (RGB) to grayscale
cv::cvtColor(matImage, matImageGrey, cv::COLOR_BGRA2GRAY);
std::vector<cv::Point> points;
cv::QRCodeDetector qrDecoder = cv::QRCodeDetector();
return qrDecoder.detect(matImageGrey, points);
};
-(NSString*)decodeQRCode:(UIImage*)image{
cv::Mat matImage = [self convertUIImageToCVMat:image];
cv::Mat matImageGrey;
// converting image's color space (RGB) to grayscale
cv::cvtColor(matImage, matImageGrey, cv::COLOR_BGRA2GRAY);
cv::QRCodeDetector qrDecoder = cv::QRCodeDetector();
std::string qrData;
qrData = qrDecoder.detectAndDecode(matImageGrey);
return [NSString stringWithUTF8String:qrData.c_str()];
};
- (cv::Mat)convertUIImageToCVMat:(UIImage *)image {
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
return cvMat;
};
- (UIImage *)decodeBase64ToImage:(NSString *)strEncodeData {
NSData *data = [[NSData alloc]initWithBase64EncodedString:strEncodeData options:NSDataBase64DecodingIgnoreUnknownCharacters];
return [UIImage imageWithData:data];
};
- (BOOL) isImageBlurry:(UIImage *) image {
// converting UIImage to OpenCV format - Mat
cv::Mat matImage = [self convertUIImageToCVMat:image];
cv::Mat matImageGrey;
// converting image's color space (RGB) to grayscale
cv::cvtColor(matImage, matImageGrey, cv::COLOR_BGRA2GRAY);
cv::Mat dst2 = [self convertUIImageToCVMat:image];
cv::Mat laplacianImage;
dst2.convertTo(laplacianImage, CV_8UC1);
// applying Laplacian operator to the image
cv::Laplacian(matImageGrey, laplacianImage, CV_8U);
cv::Mat laplacianImage8bit;
laplacianImage.convertTo(laplacianImage8bit, CV_8UC1);
unsigned char *pixels = laplacianImage8bit.data;
// 16777216 = 256*256*256
int maxLap = -16777216;
for (int i = 0; i < ( laplacianImage8bit.elemSize()*laplacianImage8bit.total()); i++) {
if (pixels[i] > maxLap) {
maxLap = pixels[i];
}
}
// one of the main parameters here: threshold sets the sensitivity for the blur check
// smaller number = less sensitive; default = 180
int threshold = 180;
return (maxLap <= threshold);
};
@end

New CGImage or CGImageRef from std::string using base64

everyone:
I've been working on this for days. Here's a little bit of background. I'm sending an image to a server using protobuf. The image is directly from the camera, so it is neither a JPEG nor a PNG. I found code to get the data from the UIImage using the CGImage to create a CGImageRef. See the following code:
- (UIImage *)testProcessedImage:(UIImage *)processedImage
{
CGImageRef imageRef = processedImage.CGImage;
NSData *data1 = (NSData *) CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
Google protobuf uses C++ code to send and receive the bytes to and from the server. When I tried to get the data bytes back into NSData and alloc/init a UIImage with that data, the UIImage was always nil. This tells me that my NSData is not in the correct format.
At first, I thought my issue was with the C++ conversion, as shown with my previous question here. But after much frustration, I cut out everything in the middle and just created a UIImage with the CGImageRef and it worked. See the following code:
- (UIImage *)testProcessedImage:(UIImage *)processedImage
{
CGImageRef imageRef = processedImage.CGImage;
NSData *data1 = (NSData *) CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
// Added this line and cut out everything in the middle
UIImage *image = [UIImage imageWithCGImage:imageRef];
Following is a description of what I ultimately need to do. There are two parts. Part 1 takes a UIImage and converts it into a std::string.
take a UIImage
get the NSData from it
convert the data to unsigned char *
stuff the unsigned char * into a std::string
The string is what we would receive from the protobuf call. Part 2 takes the data from the string and converts it back into the NSData format to populate a UIImage. Following are the steps to do that (a short sketch of both parts follows the list):
convert the std::string to char array
convert the char array to a const char *
put the char * into NSData
return NSData
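A minimal Objective-C++ sketch of those two parts, with hypothetical helper names, assuming the raw bytes are simply carried through unchanged:

// Part 1: UIImage -> raw pixel bytes -> std::string (hypothetical helper)
static std::string StringFromImage(UIImage *image)
{
  CGImageRef imageRef = image.CGImage;
  NSData *data = (NSData *)CFBridgingRelease(
      CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
  const unsigned char *bytes = (const unsigned char *)data.bytes;
  return std::string(bytes, bytes + data.length);
}

// Part 2: std::string -> NSData (hypothetical helper). Note that these are raw,
// uncompressed pixels, so -[UIImage initWithData:], which expects encoded data
// such as PNG or JPEG, cannot rebuild the image from this NSData on its own.
static NSData *DataFromString(const std::string &str)
{
  return [NSData dataWithBytes:str.data() length:str.size()];
}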
Now, with that background information and armed with the fact that populating the UIImage with a CGImageRef works, meaning that data in that format is the correct format to populate the UIImage, I'm looking for help in figuring out how to get the base64.data() into either a CFDataRef or a CGImageRef. Below is my test method:
- (UIImage *)testProcessedImage:(UIImage *)processedImage
{
CGImageRef imageRef = processedImage.CGImage;
NSData *data1 = (NSData *) CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
unsigned char *pixels = (unsigned char *)[data1 bytes];
unsigned long size = [data1 length];
// ***************************************************************************
// This is where we would call transmit and receive the bytes in a std::string
//
// The following line simulates that:
//
const std::string byteString(pixels, pixels + size);
//
// ***************************************************************************
// converting to base64
std::string encoded = base64_encode(reinterpret_cast<const unsigned char*>(byteString.c_str()), byteString.length());
// retrieving base64
std::string decoded = base64_decode(encoded);
// put byte array back into NSData format
NSUInteger usize = decoded.length();
const char *bytes = decoded.data();
NSData *data2 = [NSData dataWithBytes:(const void *)bytes length:sizeof(unsigned char)*usize];
NSLog(#"examine data");
// But when I try to alloc init a UIImage with the data, the image is nil
UIImage *image2 = [[UIImage alloc] initWithData:data2];
NSLog(#"examine image2");
// *********** Below is my convoluted approach at CFDataRef and CGImageRef ****************
CFDataRef dataRef = CFDataCreate( NULL, (const UInt8*) decoded.data(), decoded.length() );
NSData *myData = (__bridge NSData *)dataRef;
//CGDataProviderRef ref = CGDataProviderCreateWithCFData(dataRef);
id sublayer = (id)[UIImage imageWithCGImage:imageRef].CGImage;
UIImage *image3 = [UIImage imageWithCGImage:(__bridge CGImageRef)(sublayer)];
return image3;
}
As any casual observer can see, I need help. HELP!!! I've tried some of the other questions on SO, such as this one and this one and this one and cannot find the information I need for the solution. I admit part of my problem is that I do not understand much about images (like RGBA values and other stuff).
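Since the question is specifically how to get decoded.data() back into a CFDataRef or a CGImageRef, here is a minimal sketch of that direction. It assumes the pixel buffer is 8 bits per component and 32 bits per pixel, and that the width, height, bytes per row and bitmap info of the original CGImage are known on the receiving side (for example, sent along with the bytes), because raw pixels carry none of that information:

// Rebuild a UIImage from raw pixel bytes; width/height/bytesPerRow/bitmapInfo
// are assumed to come from the original CGImage.
static UIImage *ImageFromRawBytes(const std::string &decoded,
                                  size_t width, size_t height,
                                  size_t bytesPerRow, CGBitmapInfo bitmapInfo)
{
  CFDataRef dataRef = CFDataCreate(NULL,
                                   (const UInt8 *)decoded.data(),
                                   (CFIndex)decoded.size());
  CGDataProviderRef provider = CGDataProviderCreateWithCFData(dataRef);
  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
  CGImageRef imageRef = CGImageCreate(width, height,
                                      8,            // bits per component
                                      32,           // bits per pixel
                                      bytesPerRow,
                                      colorSpace,
                                      bitmapInfo,
                                      provider,
                                      NULL,         // no decode array
                                      false,        // no interpolation
                                      kCGRenderingIntentDefault);
  UIImage *image = [UIImage imageWithCGImage:imageRef];
  CGImageRelease(imageRef);
  CGColorSpaceRelease(colorSpace);
  CGDataProviderRelease(provider);
  CFRelease(dataRef);
  return image;
}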

CGImageRelease causing crash

I am using AGImagePickerController to pick multiple pictures from the album, and then push the selected assets to a viewController where it tries to convert each asset into a UIImage.
However, I found out that if I selected more than 20 images, I started to get memory low warnings and the app exited. Here is my code for the conversion:
for(int i =0 ; i < [self.selectedPictures count] ; i++)
{
NSLog(#"Object %d",i);
ALAsset *asset = [self.selectedPictures objectAtIndex:i];
ALAssetRepresentation *rep = [asset defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
UIImage *anImage = [UIImage imageWithCGImage:iref scale:[rep scale] orientation:(UIImageOrientation)[rep orientation]];
float newHeight = anImage.size.height / (anImage.size.width / 1280);
UIImage *resizedImage = [anImage resizedImageWithContentMode:UIViewContentModeScaleAspectFit bounds:CGSizeMake(newHeight, 1280.f) interpolationQuality:kCGInterpolationHigh];
UIImage *resizedThumbnailImage = [anImage resizedImageWithContentMode:UIViewContentModeScaleAspectFill bounds:CGSizeMake(290.0f, 300.f) interpolationQuality:kCGInterpolationHigh];
// JPEG to decrease file size and enable faster uploads & downloads
NSData *imageData = UIImageJPEGRepresentation(resizedImage, 0.6f);
//NSData *thumbnailImageData = UIImagePNGRepresentation(thumbnailImage);
NSData *thumbnailImageData = UIImageJPEGRepresentation(resizedThumbnailImage, 0.6f);
PFFile *photoFile = [PFFile fileWithData:imageData];
PFFile *thumbnailFile = [PFFile fileWithData:thumbnailImageData];
[photoFile saveInBackground];
[thumbnailFile saveInBackground];
}
So I figured out that I should add CGImageRelease(iref); after anImage to release the iref, and the memory warning is gone. However, my app crashes after the last asset is converted to a UIImage, and so far I could not find out why it is crashing.
You shouldn't be doing CGImageRelease(iref); unless you use CGImageCreate,
CGImageCreateCopy or CGImageRetain. That is the reason why it is crashing.
I found a way to fix this: use @autoreleasepool.
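Applied to the loop above, that looks roughly like this; the temporaries created for each asset are then drained every iteration instead of piling up until the loop ends:

for (int i = 0; i < [self.selectedPictures count]; i++)
{
  @autoreleasepool {
    NSLog(@"Object %d", i);
    ALAsset *asset = [self.selectedPictures objectAtIndex:i];
    ALAssetRepresentation *rep = [asset defaultRepresentation];
    CGImageRef iref = [rep fullResolutionImage];
    UIImage *anImage = [UIImage imageWithCGImage:iref
                                           scale:[rep scale]
                                     orientation:(UIImageOrientation)[rep orientation]];
    // ... resize, build the JPEG NSData and PFFiles, and save, exactly as above ...
  } // objects autoreleased during this iteration are released here
}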

Building NSImage from bytes

I'm trying to build an NSImage from some strange bytes.
I'm using the BlackMagic SDK to get the bytes of a received frame:
unsigned char* frame3 = NULL;
unsigned char* frame2 = (Byte*)malloc(699840);
videoFrame->GetBytes ( (void**)&frame3);
memcpy(frame2, frame3, 699840);
NSData* data = [NSData dataWithBytes:frame2 length:699840];
NSImage *image = [[NSImage alloc] initWithData:data];
// (for now I use 699840 statically, because I know the frame's size;
// note that sizeof(frame2) would only give the size of the pointer)
Why I said the bytes are strange is that the content of frame2 looks like this:
printf("content: %s",frame2);
\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200.........\200 (to the end)
It should be a blank black frame.
Does somebody know how I could figure out something with this?
You should use these APIs to get an image from the data bytes:
NSString *filePath = [yourDirectory stringByAppendingPathComponent:@"imageName.jpg"];
[data writeToFile:filePath atomically:YES];
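Then, presumably, the image is read back from that path (assuming the same filePath as above):

NSImage *image = [[NSImage alloc] initWithContentsOfFile:filePath];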

Processing images using Leptonica in an Xcode project

In Xcode, I am trying to preprocess an image prior to sending it for OCR. The OCR engine, Tesseract, handles images based on the Leptonica library.
As an example:
The Leptonica function pixConvertTo8("image.tif")... is there a way to "transfer" the raw image data from UIImage -> PIX (see pix.h from the Leptonica library), perform the pixConvertTo8(), and go back from PIX -> UIImage, preferably without saving it to a file for the transition, all in memory?
- (void) processImage:(UIImage *) uiImage
{
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
// preprocess UIImage here with fx: pixConvertTo8();
CGSize imageSize = [uiImage size];
int bytes_per_line = (int)CGImageGetBytesPerRow([uiImage CGImage]);
int bytes_per_pixel = (int)CGImageGetBitsPerPixel([uiImage CGImage]) / 8.0;
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider([uiImage CGImage]));
const UInt8 *imageData = CFDataGetBytePtr(data);
// this could take a while.
char* text = tess->TesseractRect(imageData,
bytes_per_pixel,
bytes_per_line,
0, 0,
imageSize.width, imageSize.height);
These two functions will do the trick:
- (void) startTesseract
{
//code from http://robertcarlsen.net/2009/12/06/ocr-on-iphone-demo-1043
NSString *dataPath =
[[self applicationDocumentsDirectory] stringByAppendingPathComponent:@"tessdata"];
/*
Set up the data in the docs dir
want to copy the data to the documents folder if it doesn't already exist
*/
NSFileManager *fileManager = [NSFileManager defaultManager];
// If the expected store doesn't exist, copy the default store.
if (![fileManager fileExistsAtPath:dataPath]) {
// get the path to the app bundle (with the tessdata dir)
NSString *bundlePath = [[NSBundle mainBundle] bundlePath];
NSString *tessdataPath = [bundlePath stringByAppendingPathComponent:@"tessdata"];
if (tessdataPath) {
[fileManager copyItemAtPath:tessdataPath toPath:dataPath error:NULL];
}
}
NSString *dataPathWithSlash = [[self applicationDocumentsDirectory] stringByAppendingString:@"/"];
setenv("TESSDATA_PREFIX", [dataPathWithSlash UTF8String], 1);
// init the tesseract engine.
tess = new tesseract::TessBaseAPI();
tess->Init([dataPath cStringUsingEncoding:NSUTF8StringEncoding], "eng");
}
- (NSString *) ocrImage: (UIImage *) uiImage
{
//code from http://robertcarlsen.net/2009/12/06/ocr-on-iphone-demo-1043
CGSize imageSize = [uiImage size];
double bytes_per_line = CGImageGetBytesPerRow([uiImage CGImage]);
double bytes_per_pixel = CGImageGetBitsPerPixel([uiImage CGImage]) / 8.0;
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider([uiImage CGImage]));
const UInt8 *imageData = CFDataGetBytePtr(data);
imageThresholder = new tesseract::ImageThresholder();
imageThresholder->SetImage(imageData,(int) imageSize.width,(int) imageSize.height,(int)bytes_per_pixel,(int)bytes_per_line);
// this could take a while. maybe needs to happen asynchronously.
tess->SetImage(imageThresholder->GetPixRect());
char* text = tess->GetUTF8Text();
// Do something useful with the text!
NSLog(#"Converted text: %#",[NSString stringWithCString:text encoding:NSUTF8StringEncoding]);
return [NSString stringWithCString:text encoding:NSUTF8StringEncoding]
}
You will have to declare both tess and imageThresholder in the .h file
tesseract::TessBaseAPI *tess;
tesseract::ImageThresholder *imageThresholder;
I've found some good code snippets in the Tesseract OCR engine about how to do this, notably in the class ImageThresholder inside thresholder.cpp - see the link below. I didn't test it yet, but here is a short description: the interesting part for me is the else block wherein the depth is 32; there, pixCreate(), pixGetData() and pixGetWpl() do the actual work.
The thresholder.cpp from the Tesseract engine uses the above-mentioned method.
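Based on that description, here is a rough sketch of going from a UIImage's raw RGBA bytes into a 32 bpp PIX entirely in memory, so that pixConvertTo8() can then be applied. It assumes the CGImage is 8 bits per component and 4 bytes per pixel, and that a plain row-by-row copy of 32-bit words is acceptable; Leptonica's expected byte order may still need adjusting:

#import "allheaders.h"   // Leptonica

- (PIX *)pixFromUIImage:(UIImage *)uiImage
{
  CGImageRef cgImage = [uiImage CGImage];
  int width  = (int)CGImageGetWidth(cgImage);
  int height = (int)CGImageGetHeight(cgImage);
  int bytesPerRow = (int)CGImageGetBytesPerRow(cgImage);
  CFDataRef cfData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
  const UInt8 *srcData = CFDataGetBytePtr(cfData);

  // 32 bpp PIX, as in the depth == 32 branch of thresholder.cpp
  PIX *pix = pixCreate(width, height, 32);
  l_uint32 *pixData = pixGetData(pix);
  int wpl = pixGetWpl(pix);   // 32-bit words per line

  for (int y = 0; y < height; y++) {
    // copy one row of 4-byte pixels into the PIX raster
    memcpy(pixData + y * wpl, srcData + y * bytesPerRow, width * 4);
  }

  CFRelease(cfData);
  // e.g. PIX *pix8 = pixConvertTo8(pix, 0); and later pixDestroy(&pix);
  return pix;
}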