Objective-C: NSBitmapImageRep setColor

I'm trying to read an NSImage into an NSBitmapImageRep so I can change the color of some pixels. The program should read the color value of each pixel and check whether it's equal to the color currently selected in a color well. I have an NSImageView named "MainImageView" and an NSColorWell named "MainColor". Here's my code:
- (IBAction)ScanImg:(id)sender {
    NSBitmapImageRep *ImgRep = [[NSBitmapImageRep alloc] initWithData:MainImageView.image.TIFFRepresentation];
    for (int pixelx = 0; pixelx < ImgRep.pixelsWide; pixelx++) {
        for (int pixely = 0; pixely < ImgRep.pixelsHigh; pixely++) {
            if ([ImgRep colorAtX:pixelx y:pixely] == MainColor.color) {
                [ImgRep setColor:[NSColor whiteColor] atX:pixelx y:pixely];
            } else {
                [ImgRep setColor:[NSColor blackColor] atX:pixelx y:pixely];
            }
        }
    }
    CGImageRef CG = [ImgRep CGImage];
    NSImage *Image = [[NSImage alloc] initWithCGImage:CG size:ImgRep.size];
    MainImageView.image = Image;
}
But the code changes nothing in the picture! What's the problem? Or is there another way to change a pixel's color?
Thank you for your help!
DaRi

I have the same issue. A workaround I found is to use the -setPixel method, although this feels a bit clumsy:
NSUInteger pix[4];
pix[0] = 0;
pix[1] = 0;
pix[2] = 0;
pix[3] = 255; // black
[ImgRep setPixel:pix atX:x y:y];
Note: The pixel data must be arranged according to the color space you are using; in this example, 0,0,0,255 is black in an RGBA color space.
But in the end, that's still better than copying and modifying the underlying unsigned char * data returned by [ImgRep bitmapData] ...
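For completeness, here is a minimal sketch of the whole scan using -setPixel:atX:y: (my own untested code, assuming a 4-component, 8-bit RGBA bitmap). Note that the original code's == compares NSColor pointers rather than color values, so it will practically never match; converting both colors to the same color space and comparing components with a small tolerance is one way around that:
- (IBAction)ScanImg:(id)sender {
    NSBitmapImageRep *imgRep = [[NSBitmapImageRep alloc] initWithData:MainImageView.image.TIFFRepresentation];
    // Convert the target color to a known RGB color space so its components
    // are comparable with those returned by colorAtX:y:.
    NSColor *target = [MainColor.color colorUsingColorSpaceName:NSCalibratedRGBColorSpace];
    NSUInteger white[4] = {255, 255, 255, 255};
    NSUInteger black[4] = {0, 0, 0, 255};
    for (NSInteger x = 0; x < imgRep.pixelsWide; x++) {
        for (NSInteger y = 0; y < imgRep.pixelsHigh; y++) {
            NSColor *c = [[imgRep colorAtX:x y:y] colorUsingColorSpaceName:NSCalibratedRGBColorSpace];
            // Compare components with a tolerance instead of pointer equality (==).
            BOOL match = fabs(c.redComponent   - target.redComponent)   < 0.01 &&
                         fabs(c.greenComponent - target.greenComponent) < 0.01 &&
                         fabs(c.blueComponent  - target.blueComponent)  < 0.01;
            [imgRep setPixel:(match ? white : black) atX:x y:y];
        }
    }
    CGImageRef cg = [imgRep CGImage];
    MainImageView.image = [[NSImage alloc] initWithCGImage:cg size:imgRep.size];
}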

Related

OpenCV QRCodeDetector not working in iOS Objective-C++

I am trying to complete an app for a university project that lets the user take a photo containing a QR code and some specific colours that could change over time. My app should be able to detect the QR code and changes in the colours. I am using React Native and OpenCV to complete this task, concentrating on iOS first (I had hoped it would be easier). So far, using the brainhubeu/react-native-opencv-tutorial on GitHub, I am able to take a picture and check it for blur, but when it comes to detecting and decoding the QR code, it doesn't seem to locate any QR code in the image.
#import "RNOpenCVLibrary.h"
#import <React/RCTLog.h>
#implementation RNOpenCVLibrary
- (dispatch_queue_t)methodQueue
{
return dispatch_get_main_queue();
}
RCT_EXPORT_MODULE()
RCT_EXPORT_METHOD(addEvent:(NSString *)name location:(NSString *)location)
{
RCTLogInfo(#"Pretending to create an event %# at %#", name, location);
};
RCT_EXPORT_METHOD(checkForBlurryImage:(NSString *)imageAsBase64 callback:(RCTResponseSenderBlock)callback) {
RCTLog(#"%#", imageAsBase64);
UIImage* image = [self decodeBase64ToImage:imageAsBase64];
BOOL isImageBlurryResult = [self isImageBlurry:image];
id objects[] = { isImageBlurryResult ? #YES : #NO };
NSUInteger count = sizeof(objects) / sizeof(id);
NSArray *dataArray = [NSArray arrayWithObjects:objects
count:count];
callback(#[[NSNull null], dataArray]);
};
RCT_EXPORT_METHOD(checkForQRCode:(NSString *)imageAsBase64) {
RCTLog(#"%#", imageAsBase64);
UIImage* image = [self decodeBase64ToImage:imageAsBase64];
cv::QRCodeDetector qrDecoder = cv::QRCodeDetector();
BOOL isQRPresentResult = [self isQRPresent:image];
NSString *decodedQrData = [self decodeQRCode:image];
//BOOL isQRPresentResult = 1;
//std::string data = qrDecoder.detectAndDecode(matImage);
//std::string data = "testing";
//NSString* result = [NSString stringWithUTF8String:data.c_str()];
//NSString* result = #"test";
RCTLogInfo(#"Pretending to create an event %#", decodedQrData);
RCTLog(isQRPresentResult ? #"yes" : #"No");
};
-(BOOL)isQRPresent:(UIImage*)image{
cv::Mat matImage = [self convertUIImageToCVMat:image];
cv::Mat matImageGrey;
// converting image's color space (RGB) to grayscale
cv::cvtColor(matImage, matImageGrey, cv::COLOR_BGRA2GRAY);
std::vector<cv::Point> points;
cv::QRCodeDetector qrDecoder = cv::QRCodeDetector();
return qrDecoder.detect(matImageGrey, points);
};
-(NSString*)decodeQRCode:(UIImage*)image{
cv::Mat matImage = [self convertUIImageToCVMat:image];
cv::Mat matImageGrey;
// converting image's color space (RGB) to grayscale
cv::cvtColor(matImage, matImageGrey, cv::COLOR_BGRA2GRAY);
cv::QRCodeDetector qrDecoder = cv::QRCodeDetector();
std::string qrData;
qrData = qrDecoder.detectAndDecode(matImageGrey);
return [NSString stringWithUTF8String:qrData.c_str()];
};
- (cv::Mat)convertUIImageToCVMat:(UIImage *)image {
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
return cvMat;
};
- (UIImage *)decodeBase64ToImage:(NSString *)strEncodeData {
NSData *data = [[NSData alloc]initWithBase64EncodedString:strEncodeData options:NSDataBase64DecodingIgnoreUnknownCharacters];
return [UIImage imageWithData:data];
};
- (BOOL) isImageBlurry:(UIImage *) image {
// converting UIImage to OpenCV format - Mat
cv::Mat matImage = [self convertUIImageToCVMat:image];
cv::Mat matImageGrey;
// converting image's color space (RGB) to grayscale
cv::cvtColor(matImage, matImageGrey, cv::COLOR_BGRA2GRAY);
cv::Mat dst2 = [self convertUIImageToCVMat:image];
cv::Mat laplacianImage;
dst2.convertTo(laplacianImage, CV_8UC1);
// applying Laplacian operator to the image
cv::Laplacian(matImageGrey, laplacianImage, CV_8U);
cv::Mat laplacianImage8bit;
laplacianImage.convertTo(laplacianImage8bit, CV_8UC1);
unsigned char *pixels = laplacianImage8bit.data;
// 16777216 = 256*256*256
int maxLap = -16777216;
for (int i = 0; i < ( laplacianImage8bit.elemSize()*laplacianImage8bit.total()); i++) {
if (pixels[i] > maxLap) {
maxLap = pixels[i];
}
}
// one of the main parameters here: threshold sets the sensitivity for the blur check
// smaller number = less sensitive; default = 180
int threshold = 180;
return (maxLap <= threshold);
};
#end
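No answer was posted for this one, but one way to narrow the problem down (an assumption on my part, not a confirmed fix): cv::QRCodeDetector is known to struggle with low-resolution input, so trying detection at a few scales, independent of the React Native bridge, can tell you whether the problem lies in the image itself or in the bridging code. A hypothetical Objective-C++ helper sketch, reusing the convertUIImageToCVMat: method above:
// Hypothetical diagnostic helper: tries QR detection at several scales to
// rule out resolution problems. Not part of the original question.
- (BOOL)isQRPresent:(UIImage *)image atAnyOfScales:(const std::vector<double> &)scales {
    cv::Mat matImage = [self convertUIImageToCVMat:image];
    cv::Mat grey;
    cv::cvtColor(matImage, grey, cv::COLOR_BGRA2GRAY);
    cv::QRCodeDetector qrDecoder;
    for (double scale : scales) {
        cv::Mat scaled;
        cv::resize(grey, scaled, cv::Size(), scale, scale, cv::INTER_CUBIC);
        std::vector<cv::Point> points;
        if (qrDecoder.detect(scaled, points)) {
            return YES;
        }
    }
    return NO;
}
// Usage:
//   std::vector<double> scales = {0.5, 1.0, 2.0};
//   BOOL found = [self isQRPresent:image atAnyOfScales:scales];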

How to take a screenshot with low quality

Is there a way to take a low-quality screenshot on OS X programmatically?
I developed a function like below:
CGImageRef resizeImage(CGImageRef imageRef) {
    CGRect thumbRect;
    thumbRect.origin = CGPointMake(0, 0);
    thumbRect.size.height = 225;
    thumbRect.size.width = 360;
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;
    CGContextRef bitmap = CGBitmapContextCreate(NULL, thumbRect.size.width, thumbRect.size.height,
                                                CGImageGetBitsPerComponent(imageRef), 4 * thumbRect.size.width,
                                                CGImageGetColorSpace(imageRef), alphaInfo);
    CGContextDrawImage(bitmap, thumbRect, imageRef);
    imageRef = CGBitmapContextCreateImage(bitmap);
    CGContextRelease(bitmap);
    return imageRef;
}
When I ran this function, I got an image between 150 KB and 600 KB. If I decrease the thumbRect size, I can't read any characters in the image. But I want to make these images as small as possible. Are there any suggestions or other possible solutions?
Thanks.
I found a solution, described below:
First of all, resize your image with the code in my question.
Then compress it :)
// imageRef is a CGImageRef
NSImage *image = [[NSImage alloc] initWithCGImage:imageRef size:NSZeroSize];
NSBitmapImageRep *bmpImageRep = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
CGFloat compressionFactor = 1.0; // Read it: (1)
NSDictionary *jpgProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                               [NSNumber numberWithDouble:compressionFactor], NSImageCompressionFactor,
                               [NSNumber numberWithBool:NO], NSImageProgressive, nil];
NSData *jpgData = [bmpImageRep representationUsingType:NSJPEGFileType properties:jpgProperties];
(1): https://developer.apple.com/library/mac/documentation/Cocoa/Reference/ApplicationKit/Classes/NSBitmapImageRep_Class/index.html#//apple_ref/doc/constant_group/Bitmap_image_properties
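As a usage note (my addition, not part of the original answer): NSImageCompressionFactor for JPEG runs from 0.0 to 1.0, so a value below 1.0 is what actually shrinks the file. For example, with an illustrative factor and output path:
// Sketch: write the resized image as a more aggressively compressed JPEG.
// The 0.3 factor and the /tmp path are illustrative, not from the answer.
NSDictionary *props = [NSDictionary dictionaryWithObjectsAndKeys:
                       [NSNumber numberWithDouble:0.3], NSImageCompressionFactor,
                       [NSNumber numberWithBool:NO], NSImageProgressive, nil];
NSData *smallJpg = [bmpImageRep representationUsingType:NSJPEGFileType properties:props];
[smallJpg writeToFile:@"/tmp/screenshot.jpg" atomically:YES];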

Objective-C: How to convert CGImageRef to UIImageView?

I'm using the XZINGObjC framework to create an EAN barcode image. Following the documentation, I'm doing it like this:
// in viewDidAppear
// XZING: create matrix
NSString *eanString = @"1234567890123"; // sth. like that
ZXBitMatrix *result = [writer encode:eanString
                              format:kBarcodeFormatEan13
                               width:500
                              height:500
                               error:&error];
if (result) {
    // XZING: convert matrix to CGImageRef
    CGImageRef imageRef = [[ZXImage imageWithMatrix:result] cgimage];
    // CRASH LINE HERE!! (this is NOT in the XZING documentation, but I cannot figure out the issue!)
    UIImage *uiImage = [[UIImage alloc] initWithCGImage:imageRef]; // <-- CRASH: EXC_BAD_ACCESS
    if (uiImage != nil) {
        // assigning image to UI
        self.barCodeImageView.image = uiImage;
    }
}
It works if I step through this code using breakpoints! However, I think at some point an image is not ready for use, but I cannot find the reason.
What I tried:
using imageRef and uiImage as local variables (EXC_BAD_ACCESS crash)
running the operation in a background thread (EXC_BAD_ACCESS crash)
In each case, everything worked if I used breakpoints and stepped through the code line by line. What is my mistake here? Any ideas? Thanks in advance!
After some trial-and-error programming, I was able to fix the issue by replacing the following lines
CGImageRef imageRef = [[ZXImage imageWithMatrix:result] cgimage];
UIImage* uiImage = [[UIImage alloc] initWithCGImage:imageRef]; //<--CRASH
with
UIImage* uiImage = [[UIImage alloc] initWithCGImage:[[ZXImage imageWithMatrix:result] cgimage]];
Still, I don't know why! I'm pretty sure something isn't being held in memory, or maybe the CGImageRef isn't ready when I try to convert it to a UIImage.
The problem is inside [ZXImage imageWithMatrix:result]: it creates a CGImage, and before assigning it to the ZXImage (which would increase its retain count), it releases the CGImage with CFRelease.
To fix this issue, replace the + (ZXImage *)imageWithMatrix:(ZXBitMatrix *)matrix method with the implementation below.
+ (ZXImage *)imageWithMatrix:(ZXBitMatrix *)matrix {
    int width = matrix.width;
    int height = matrix.height;
    int8_t *bytes = (int8_t *)malloc(width * height * 4);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            BOOL bit = [matrix getX:x y:y];
            int8_t intensity = bit ? 0 : 255;
            for (int i = 0; i < 3; i++) {
                bytes[y * width * 4 + x * 4 + i] = intensity;
            }
            bytes[y * width * 4 + x * 4 + 3] = 255;
        }
    }
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef c = CGBitmapContextCreate(bytes, width, height, 8, 4 * width, colorSpace,
                                           kCGBitmapAlphaInfoMask & kCGImageAlphaPremultipliedLast);
    CGImageRef image = CGBitmapContextCreateImage(c);
    ZXImage *zxImage = [[ZXImage alloc] initWithCGImageRef:image];
    CFRelease(colorSpace);
    CFAutorelease(image);
    CFRelease(c);
    free(bytes);
    return zxImage;
}
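With that fix in place, the call site from the question should work unchanged; for reference, a minimal usage sketch:
ZXImage *zxImage = [ZXImage imageWithMatrix:result];
UIImage *uiImage = [[UIImage alloc] initWithCGImage:zxImage.cgimage];
self.barCodeImageView.image = uiImage;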
For people writing Swift: I hit the same problem that longilong described. The answer was in the ZXingObjC Objective-C example projects. Remember: this is for generating a barcode only.
internal func encodeBarcode() {
    let writer: ZXMultiFormatWriter! = ZXMultiFormatWriter()
    var result: ZXBitMatrix!
    let hint: ZXEncodeHints! = ZXEncodeHints()
    do {
        hint.margin = 0
        result = try writer.encode("Example String", format: kBarcodeFormatAztec, width: 500, height: 500, hints: hint)
        let rawBarcode: ZXImage = ZXImage(matrix: result)
        barcodeUIImage.image = UIImage(CGImage: rawBarcode.cgimage)
    } catch {
        print("failed to generate barcode")
    }
}

Memory Leak with ARC

+ (void)setup {
    UIImage *spriteSheet = [UIImage imageNamed:@"mySpriteSheet.png"];
    CGRect rect;
    animation = [NSMutableArray arrayWithCapacity:numberOfFramesInSpriteSheet];
    int frameCount = 0;
    for (int row = 0; row < numberFrameRowsInSpriteSheet; row++) {
        for (int col = 0; col < numberFrameColsInSpriteSheet; col++) {
            frameCount++;
            if (frameCount <= numberOfFramesInSpriteSheet) {
                rect = CGRectMake(col * frameHeight, row * frameWidth, frameHeight, frameWidth);
                [animation addObject:[UIImage imageWithCGImage:CGImageCreateWithImageInRect(spriteSheet.CGImage, rect)]];
            }
        }
    }
}
I compiled the above code with ARC enabled. The Analyze tool reports a possible memory leak, since imageWithCGImage: returns a UIImage with a +1 count and the reference is then lost. The Leaks instrument reports no memory leaks at all. What's going on here?
Furthermore, since ARC prohibits manually calling release and the like, how does one fix the leak?
Thanks to anyone who can offer any advice.
ARC does not manage C types such as CGImageRef, which is effectively a Core Foundation-style object. You must release the ref manually when you are finished with it, using CGImageRelease(image):
+ (void)setup {
    UIImage *spriteSheet = [UIImage imageNamed:@"mySpriteSheet.png"];
    CGRect rect;
    animation = [NSMutableArray arrayWithCapacity:numberOfFramesInSpriteSheet];
    int frameCount = 0;
    for (int row = 0; row < numberFrameRowsInSpriteSheet; row++) {
        for (int col = 0; col < numberFrameColsInSpriteSheet; col++) {
            frameCount++;
            if (frameCount <= numberOfFramesInSpriteSheet) {
                rect = CGRectMake(col * frameHeight, row * frameWidth, frameHeight, frameWidth);
                // Store our image ref so we can release it later.
                // The Create Rule says that any C-interface function with "Create" in its name
                // returns a +1 object, which we must release manually.
                CGImageRef image = CGImageCreateWithImageInRect(spriteSheet.CGImage, rect);
                // Create a UIImage from our ref. It is now owned by the UIImage, so we may discard the ref.
                [animation addObject:[UIImage imageWithCGImage:image]];
                // Discard the ref.
                CGImageRelease(image);
            }
        }
    }
}
None of the Core Foundation data structures are managed by ARC, which often causes problems like this one. In these cases we have to release the memory manually.
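To make the ownership rule concrete, here is a minimal sketch (my own illustration, not from either answer) of the Create Rule under ARC:
// The Create Rule: Core Foundation / CoreGraphics functions with "Create" or
// "Copy" in their names return +1 references that ARC does not track; each
// must be balanced with an explicit release.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();  // +1: we own it
CGColorSpaceRef sameSpace  = CGColorSpaceRetain(colorSpace); // +1 more if retained
// ... use the color space ...
CGColorSpaceRelease(sameSpace);  // balance the retain
CGColorSpaceRelease(colorSpace); // balance the create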

Exception while retrieving data from the Address Book

I am getting this exception while retrieving data from the Address Book. I have searched the internet but haven't found any help for it.
Overflow allocating bitmap backing store. Cannot back bitmap with 320 bytes per row, -2147483648 height, and 1 planes
I am using the AddressBook framework to retrieve data from the Address Book. Is this a memory issue, or is it due to fetching the avatar information that I have set for an Address Book contact?
Please help. If you have any suggestions or recommendations, please share them...
Thanks for your reply.
As you said, I have checked all the code that draws a large image or view, and I found the function below, which I used for resizing images. Resizing will now be done on the server side instead. I still have doubts about this issue; you can see the function in the block of code below. I am now waiting for the customer's feedback on this issue.
Thanks again for your help.
- (NSData *)getCompressedImageDataFromData:(NSData *)imData
{
    NSImage *pImage = [[[NSImage alloc] initWithData:imData] autorelease];
    NSSize orgSize = [pImage size];
    int widthInput, heightInput;
    widthInput = orgSize.width;
    heightInput = orgSize.height;
    if (widthInput <= 72 && heightInput <= 72)
        return imData;
    double newheight = heightInput;
    NSSize newSize;
    if (widthInput >= 72)
    {
        double ratio;
        ratio = widthInput / heightInput;
        newheight = 72 / ratio;
        newSize = NSMakeSize(72, newheight);
    }
    else
        newSize = NSMakeSize(widthInput, newheight);
    NSImage *outputImage = [[[NSImage alloc] initWithSize:newSize] autorelease];
    if (![outputImage isValid])
        return nil;
    [outputImage lockFocus];
    [[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
    [pImage drawInRect:NSMakeRect(0, 0, newSize.width, newSize.height)
              fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
    [outputImage unlockFocus];
    NSData *imageData = [outputImage TIFFRepresentationUsingCompression:NSTIFFCompressionJPEG factor:0];
    return [imageData mutableCopy];
}
Are you creating one large view or image into which you're drawing multiple contacts in the address book? It sounds like you're trying to create too large an image/view.
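One concrete thing to check in the resize function above (an observation on my part, not a confirmed diagnosis): widthInput and heightInput are ints, so widthInput / heightInput is integer division. When the resize branch runs with widthInput < heightInput (a portrait avatar), the ratio truncates to 0, and 72 / ratio then divides by zero, producing an infinite height, which could plausibly yield the absurd bitmap dimensions in the error message. A sketch of the fix:
// Sketch: compute the aspect ratio in floating point so it cannot truncate
// to 0 (which would make 72 / ratio divide by zero).
double ratio = (double)widthInput / (double)heightInput;
double newheight = 72.0 / ratio; // finite for any nonzero input width
NSSize newSize = NSMakeSize(72, newheight);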