Objective-C: How to convert CGImageRef to UIImageView?

I'm using the ZXingObjC framework to create an EAN barcode image. Following the documentation, I'm doing it like this:
//in viewDidAppear
//ZXing: create matrix
NSString *eanString = @"1234567890123"; // something like that
ZXBitMatrix *result = [writer encode:eanString
                              format:kBarcodeFormatEan13
                               width:500
                              height:500
                               error:&error];
if (result) {
    //ZXing: convert matrix to CGImageRef
    CGImageRef imageRef = [[ZXImage imageWithMatrix:result] cgimage];
    //CRASH LINE HERE!! (this is NOT in the ZXingObjC documentation, but I cannot figure out the issue!)
    UIImage *uiImage = [[UIImage alloc] initWithCGImage:imageRef]; // <-- CRASH: EXC_BAD_ACCESS
    if (uiImage != nil) {
        //assigning image to UI
        self.barCodeImageView.image = uiImage;
    }
}
It works if I step through this code using breakpoints! However, I think at some point an image is not ready for use, but I cannot find the reason.
What I tried:
using imageRef and uiImage as local variables (EXC_BAD_ACCESS CRASH)
tried that operation in a background thread (EXC_BAD_ACCESS CRASH)
In both cases, the code worked if I used breakpoints and stepped through it line by line. What is my mistake here? Any ideas? Thanks in advance!

After some trial-and-error programming, I could fix the issue by replacing the following lines
CGImageRef imageRef = [[ZXImage imageWithMatrix:result] cgimage];
UIImage* uiImage = [[UIImage alloc] initWithCGImage:imageRef]; //<--CRASH
with
UIImage* uiImage = [[UIImage alloc] initWithCGImage:[[ZXImage imageWithMatrix:result] cgimage]];
Still, I don't know why. I'm pretty sure something isn't being held in memory, or maybe the CGImageRef isn't ready when I try to convert it to a UIImage.
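For what it's worth, a workaround that avoids patching the library, assuming the crash is caused by the ZXImage (and the CGImageRef it owns) being released before the UIImage is created: keep the ZXImage alive in a local strong reference until the conversion is done. A minimal sketch:
//Workaround sketch (assumption: the CGImageRef is owned by the ZXImage and dies with it)
ZXImage *zxImage = [ZXImage imageWithMatrix:result]; //strong local reference keeps the CGImage alive
if (zxImage != nil) {
    UIImage *uiImage = [[UIImage alloc] initWithCGImage:zxImage.cgimage];
    self.barCodeImageView.image = uiImage;
}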

The problem is inside [ZXImage imageWithMatrix:result]: it creates the CGImage and then releases it with CFRelease before the ZXImage (which would increase its retain count) has taken ownership of it.
To fix this issue, replace the + (ZXImage *)imageWithMatrix:(ZXBitMatrix *)matrix method with the implementation below.
+ (ZXImage *)imageWithMatrix:(ZXBitMatrix *)matrix {
    int width = matrix.width;
    int height = matrix.height;
    int8_t *bytes = (int8_t *)malloc(width * height * 4);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            BOOL bit = [matrix getX:x y:y];
            int8_t intensity = bit ? 0 : 255;
            for (int i = 0; i < 3; i++) {
                bytes[y * width * 4 + x * 4 + i] = intensity;
            }
            bytes[y * width * 4 + x * 4 + 3] = 255;
        }
    }
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef c = CGBitmapContextCreate(bytes, width, height, 8, 4 * width, colorSpace, kCGBitmapAlphaInfoMask & kCGImageAlphaPremultipliedLast);
    CGImageRef image = CGBitmapContextCreateImage(c);
    ZXImage *zxImage = [[ZXImage alloc] initWithCGImageRef:image];
    CFRelease(colorSpace);
    CFAutorelease(image);
    CFRelease(c);
    free(bytes);
    return zxImage;
}

For people writing Swift, I hit the same problem that longilong described. The answer was in the ZXingObjC Objective-C example projects. Remember: this is for generating a barcode only.
internal func encodeBarcode() {
    let writer: ZXMultiFormatWriter! = ZXMultiFormatWriter()
    var result: ZXBitMatrix!
    let hint: ZXEncodeHints! = ZXEncodeHints()
    do {
        hint.margin = 0
        result = try writer.encode("Example String", format: kBarcodeFormatAztec, width: 500, height: 500, hints: hint)
        let rawBarcode: ZXImage = ZXImage(matrix: result)
        barcodeUIImage.image = UIImage(CGImage: rawBarcode.cgimage)
    } catch {
        print("failed to generate barcode")
    }
}

Related

OpenCV QRCodeDetector not working for iOS Objective-C++

I am trying to complete an app for a university project that will allow the user to take a photo which contains a QR code and some specific colours that could change over time. My app should be able to detect the QR code and changes in the colours. I am trying to use React Native and OpenCV to complete this task, concentrating on iOS first (I had hoped it would be easier). So far, using the OpenCV tutorial by brainhubeu/react-native-opencv-tutorial on GitHub, I am able to take a picture and check for blur, but when it comes to detecting and decoding the QR code, it doesn't seem to be locating any QR code in the image.
#import "RNOpenCVLibrary.h"
#import <React/RCTLog.h>
@implementation RNOpenCVLibrary
- (dispatch_queue_t)methodQueue
{
return dispatch_get_main_queue();
}
RCT_EXPORT_MODULE()
RCT_EXPORT_METHOD(addEvent:(NSString *)name location:(NSString *)location)
{
RCTLogInfo(@"Pretending to create an event %@ at %@", name, location);
};
RCT_EXPORT_METHOD(checkForBlurryImage:(NSString *)imageAsBase64 callback:(RCTResponseSenderBlock)callback) {
RCTLog(@"%@", imageAsBase64);
UIImage* image = [self decodeBase64ToImage:imageAsBase64];
BOOL isImageBlurryResult = [self isImageBlurry:image];
id objects[] = { isImageBlurryResult ? @YES : @NO };
NSUInteger count = sizeof(objects) / sizeof(id);
NSArray *dataArray = [NSArray arrayWithObjects:objects
count:count];
callback(@[[NSNull null], dataArray]);
};
RCT_EXPORT_METHOD(checkForQRCode:(NSString *)imageAsBase64) {
RCTLog(@"%@", imageAsBase64);
UIImage* image = [self decodeBase64ToImage:imageAsBase64];
cv::QRCodeDetector qrDecoder = cv::QRCodeDetector();
BOOL isQRPresentResult = [self isQRPresent:image];
NSString *decodedQrData = [self decodeQRCode:image];
//BOOL isQRPresentResult = 1;
//std::string data = qrDecoder.detectAndDecode(matImage);
//std::string data = "testing";
//NSString* result = [NSString stringWithUTF8String:data.c_str()];
//NSString* result = @"test";
RCTLogInfo(@"Pretending to create an event %@", decodedQrData);
RCTLog(isQRPresentResult ? @"yes" : @"No");
};
-(BOOL)isQRPresent:(UIImage*)image{
cv::Mat matImage = [self convertUIImageToCVMat:image];
cv::Mat matImageGrey;
// converting image's color space (RGB) to grayscale
cv::cvtColor(matImage, matImageGrey, cv::COLOR_BGRA2GRAY);
std::vector<cv::Point> points;
cv::QRCodeDetector qrDecoder = cv::QRCodeDetector();
return qrDecoder.detect(matImageGrey, points);
};
-(NSString*)decodeQRCode:(UIImage*)image{
cv::Mat matImage = [self convertUIImageToCVMat:image];
cv::Mat matImageGrey;
// converting image's color space (RGB) to grayscale
cv::cvtColor(matImage, matImageGrey, cv::COLOR_BGRA2GRAY);
cv::QRCodeDetector qrDecoder = cv::QRCodeDetector();
std::string qrData;
qrData = qrDecoder.detectAndDecode(matImageGrey);
return [NSString stringWithUTF8String:qrData.c_str()];
};
- (cv::Mat)convertUIImageToCVMat:(UIImage *)image {
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
return cvMat;
};
- (UIImage *)decodeBase64ToImage:(NSString *)strEncodeData {
NSData *data = [[NSData alloc]initWithBase64EncodedString:strEncodeData options:NSDataBase64DecodingIgnoreUnknownCharacters];
return [UIImage imageWithData:data];
};
- (BOOL) isImageBlurry:(UIImage *) image {
// converting UIImage to OpenCV format - Mat
cv::Mat matImage = [self convertUIImageToCVMat:image];
cv::Mat matImageGrey;
// converting image's color space (RGB) to grayscale
cv::cvtColor(matImage, matImageGrey, cv::COLOR_BGRA2GRAY);
cv::Mat dst2 = [self convertUIImageToCVMat:image];
cv::Mat laplacianImage;
dst2.convertTo(laplacianImage, CV_8UC1);
// applying Laplacian operator to the image
cv::Laplacian(matImageGrey, laplacianImage, CV_8U);
cv::Mat laplacianImage8bit;
laplacianImage.convertTo(laplacianImage8bit, CV_8UC1);
unsigned char *pixels = laplacianImage8bit.data;
// 16777216 = 256*256*256
int maxLap = -16777216;
for (int i = 0; i < ( laplacianImage8bit.elemSize()*laplacianImage8bit.total()); i++) {
if (pixels[i] > maxLap) {
maxLap = pixels[i];
}
}
// one of the main parameters here: threshold sets the sensitivity for the blur check
// smaller number = less sensitive; default = 180
int threshold = 180;
return (maxLap <= threshold);
};
@end

How to get image size from URL in iOS

How can I get the size (height/width) of an image from a URL in Objective-C? I want to size my container according to the image. I am using AFNetworking 3.0.
I could use SDWebImage if it fulfills my requirement.
Knowing the size of an image before actually loading it can be necessary in a number of cases. For example, setting the height of a tableView cell in the heightForRowAtIndexPath method while loading the actual image later in the cellForRowAtIndexPath (this is a very frequent catch 22).
One simple way to do it is to read the image header from the server URL using the Image I/O interface:
#import <ImageIO/ImageIO.h>
NSMutableString *imageURL = [NSMutableString stringWithFormat:@"http://www.myimageurl.com/image.png"];
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)[NSURL URLWithString:imageURL], NULL);
NSDictionary* imageHeader = (__bridge NSDictionary*) CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
NSLog(@"Image header %@", imageHeader);
NSLog(@"PixelHeight %@", [imageHeader objectForKey:@"PixelHeight"]);
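Note that CGImageSourceCreateWithURL and CGImageSourceCopyPropertiesAtIndex follow the Core Foundation Create/Copy ownership rules, so the source should be released when you are done with it. A minimal sketch that wraps the same idea into a helper (the function name is mine) and reads only the header, not the pixel data:
#import <ImageIO/ImageIO.h>
static CGSize imageSizeAtURL(NSURL *url) {
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
    if (!source) return CGSizeZero;
    NSDictionary *options = @{ (__bridge NSString *)kCGImageSourceShouldCache : @NO };
    NSDictionary *header = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    CFRelease(source); //balance the Create call
    NSNumber *width = header[(__bridge NSString *)kCGImagePropertyPixelWidth];
    NSNumber *height = header[(__bridge NSString *)kCGImagePropertyPixelHeight];
    return (width && height) ? CGSizeMake(width.doubleValue, height.doubleValue) : CGSizeZero;
}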
Swift 4.x
Xcode 12.x
func sizeOfImageAt(url: URL) -> CGSize? {
    // with CGImageSource we avoid loading the whole image into memory
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else {
        return nil
    }
    let propertiesOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, propertiesOptions) as? [CFString: Any] else {
        return nil
    }
    if let width = properties[kCGImagePropertyPixelWidth] as? CGFloat,
       let height = properties[kCGImagePropertyPixelHeight] as? CGFloat {
        return CGSize(width: width, height: height)
    } else {
        return nil
    }
}
Use an asynchronous mechanism such as GCD in iOS to download the image without blocking your main thread.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Download image data from the URL
    NSData *data = [[NSData alloc] initWithContentsOfURL:URL];
    // Compose image from NSData
    UIImage *image = [[UIImage alloc] initWithData:data];
    dispatch_async(dispatch_get_main_queue(), ^{
        // UI update on main thread
        // Calculate height & width of image
        CGFloat height = image.size.height;
        CGFloat width = image.size.width;
    });
});
For Swift 4 use this:
let imageURL = URL(string: post.imageBigPath)!
let source = CGImageSourceCreateWithURL(imageURL as CFURL, nil)
let imageHeader = CGImageSourceCopyPropertiesAtIndex(source!, 0, nil)! as NSDictionary;
print("Image header: \(imageHeader)")
The header would look like this:
Image header: {
ColorModel = RGB;
Depth = 8;
PixelHeight = 640;
PixelWidth = 640;
"{Exif}" = {
PixelXDimension = 360;
PixelYDimension = 360;
};
"{JFIF}" = {
DensityUnit = 0;
JFIFVersion = (
1,
0,
1
);
XDensity = 72;
YDensity = 72;
};
"{TIFF}" = {
Orientation = 0;
}; }
So you can get the width and height from it.
You can try it like this:
NSData *data = [[NSData alloc]initWithContentsOfURL:URL];
UIImage *image = [[UIImage alloc]initWithData:data];
CGFloat height = image.size.height;
CGFloat width = image.size.width;

How to take a screenshot with low quality

Is there a way to take a screenshot (low quality) on OS X programmatically?
I developed a function like the one below:
CGImageRef resizeImage(CGImageRef imageRef) {
CGRect thumRect;
CGPoint point;
point.x = 0;
point.y = 0;
thumRect.origin = point;
thumRect.size.height = 225;
thumRect.size.width = 360;
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
if (alphaInfo == kCGImageAlphaNone)
alphaInfo = kCGImageAlphaNoneSkipLast;
CGContextRef bitmap = CGBitmapContextCreate(NULL, thumRect.size.width, thumRect.size.height, CGImageGetBitsPerComponent(imageRef), 4 * thumRect.size.width, CGImageGetColorSpace(imageRef), alphaInfo);
CGContextDrawImage(bitmap, thumRect, imageRef);
imageRef = CGBitmapContextCreateImage(bitmap);
CGContextRelease(bitmap);
return imageRef;
}
When I ran this function, the resulting image was between 150 KB and 600 KB. If I decrease the thumRect size, I can't read any characters in the image. But I want to make these images as small as possible. Is there any suggestion or another possible solution?
Thanks.
I found a solution like the one below:
First of all, resize your image with the code in my question.
Then compress it :)
//imageRef is a CGImageRef
NSImage *image = [[NSImage alloc] initWithCGImage:imageRef size:NSZeroSize];
NSBitmapImageRep *bmpImageRep = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
CGFloat compressionFactor = 1.0; //Read it : (1)
NSDictionary *jpgProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                               [NSNumber numberWithDouble:compressionFactor], NSImageCompressionFactor,
                               [NSNumber numberWithBool:NO], NSImageProgressive, nil];
NSData *jpgData = [bmpImageRep representationUsingType:NSJPEGFileType properties:jpgProperties];
(1):https://developer.apple.com/library/mac/documentation/Cocoa/Reference/ApplicationKit/Classes/NSBitmapImageRep_Class/index.html#//apple_ref/doc/constant_group/Bitmap_image_properties
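Keep in mind that an NSImageCompressionFactor of 1.0 means maximum quality; since the goal here is a small file, a lower value is what actually shrinks it. A hedged example (the 0.3 value and the output path are just illustrative):
CGFloat lowCompressionFactor = 0.3; //smaller value = smaller file, lower quality
NSDictionary *lowQualityProps = @{ NSImageCompressionFactor : @(lowCompressionFactor),
                                   NSImageProgressive : @NO };
NSData *smallJpgData = [bmpImageRep representationUsingType:NSJPEGFileType properties:lowQualityProps];
[smallJpgData writeToFile:@"/tmp/thumbnail.jpg" atomically:YES]; //hypothetical path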

Objective-C: NSBitmapImageRep SetColor

I'm trying to read an NSImage into an NSBitmapImageRep to change the color of some pixels. The program should read the color value of each pixel and check if it's equal to a color selected by a color well. I've got an ImageView named "MainImageView" and a ColorWell named "MainColor". Here's my code:
- (IBAction)ScanImg:(id)sender {
    NSBitmapImageRep *ImgRep = [[NSBitmapImageRep alloc] initWithData:MainImageView.image.TIFFRepresentation];
    for (int pixelx = 0; pixelx < ImgRep.pixelsWide; pixelx++) {
        for (int pixely = 0; pixely < ImgRep.pixelsHigh; pixely++) {
            if ([ImgRep colorAtX:pixelx y:pixely] == MainColor.color) {
                [ImgRep setColor:[NSColor whiteColor] atX:pixelx y:pixely];
            } else {
                [ImgRep setColor:[NSColor blackColor] atX:pixelx y:pixely];
            }
        }
    }
    struct CGImage *CG = [ImgRep CGImage];
    NSImage *Image = [[NSImage alloc] initWithCGImage:CG size:ImgRep.size];
    MainImageView.image = Image;
}
But the code changes nothing in the picture! What's the problem? Or is there another way to change the pixels' color?
Thank you for your help!
DaRi
I have the same issue. A workaround I found is to use the -setPixel method, although this feels a bit clumsy:
NSUInteger pix[4]; pix[0] = 0; pix[1] = 0; pix[2] = 0; pix[3] = 255; //black
[ImgRep setPixel:pix atX:x y:y];
Note: The pixel data must be arranged according to the color space you are using, in this example 0,0,0,255 is black in RGBA color space.
But in the end, that's still better than copying and modifying the underlying (unsigned char*) data that is returned by [ImgRep bitmapData] ...
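Applied to the loop from the question, the workaround could look roughly like this (note that comparing NSColor objects with == only compares pointers; -isEqual: after converting the target color to the bitmap's color space is assumed here, and real photos will usually need a tolerance instead of exact equality):
NSUInteger white[4] = {255, 255, 255, 255};
NSUInteger black[4] = {0, 0, 0, 255};
NSColor *target = [MainColor.color colorUsingColorSpace:ImgRep.colorSpace];
for (int pixelx = 0; pixelx < ImgRep.pixelsWide; pixelx++) {
    for (int pixely = 0; pixely < ImgRep.pixelsHigh; pixely++) {
        BOOL match = [[ImgRep colorAtX:pixelx y:pixely] isEqual:target];
        [ImgRep setPixel:(match ? white : black) atX:pixelx y:pixely];
    }
}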

Create a thumbnail or image of an AVPlayer at current time

I have implemented an AVPlayer and I want to take an image or thumbnail when clicking a toolbar button, then open it in a new UIViewController with a UIImageView. The image should be scaled exactly like the AVPlayer.
The segue is already working; I just have to implement getting the image at the current play time.
Thanks!
Objective-C
AVAsset *asset = [AVAsset assetWithURL:sourceURL];
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc]initWithAsset:asset];
CMTime time = CMTimeMake(1, 1);
CGImageRef imageRef = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:NULL];
UIImage *thumbnail = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // CGImageRef won't be released by ARC
Swift
var asset = AVAsset.assetWithURL(sourceURL)
var imageGenerator = AVAssetImageGenerator(asset: asset!)
var time = CMTimeMake(1, 1)
var imageRef = try! imageGenerator!.copyCGImageAtTime(time, actualTime: nil)
var thumbnail = UIImage.imageWithCGImage(imageRef)
CGImageRelease(imageRef) // CGImageRef won't be released by ARC
Swift 3.0
var sourceURL = URL(string: "Your Asset URL")
var asset = AVAsset(url: sourceURL!)
var imageGenerator = AVAssetImageGenerator(asset: asset)
var time = CMTimeMake(1, 1)
var imageRef = try! imageGenerator.copyCGImage(at: time, actualTime: nil)
var thumbnail = UIImage(cgImage:imageRef)
Note : Interpret Swift code according to your swift version.
Try this
- (UIImage *)takeScreenShot {
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:vidURL options:nil];
    AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    imageGenerator.appliesPreferredTrackTransform = YES;
    NSError *err = NULL;
    CMTime time = CMTimeMake(1, 60); // time at which you want the screenshot
    CGImageRef imgRef = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:&err];
    return [[UIImage alloc] initWithCGImage:imgRef];
}
Hope this helps !!!
Swift 2.x:
let asset = AVAsset(...)
let imageGenerator = AVAssetImageGenerator(asset: asset)
let screenshotTime = CMTime(seconds: 1, preferredTimescale: 1)
if let imageRef = try? imageGenerator.copyCGImageAtTime(screenshotTime, actualTime: nil) {
let image = UIImage(CGImage: imageRef)
// do something with your image
}
Add the code below to generate a thumbnail from a video.
AVURLAsset *assetURL = [[AVURLAsset alloc] initWithURL:partOneUrl options:nil];
AVAssetImageGenerator *assetGenerator = [[AVAssetImageGenerator alloc] initWithAsset:assetURL];
assetGenerator.appliesPreferredTrackTransform = YES;
NSError *err = NULL;
CMTime time = CMTimeMake(1, 2);
CGImageRef imgRef = [assetGenerator copyCGImageAtTime:time actualTime:NULL error:&err];
UIImage *one = [[UIImage alloc] initWithCGImage:imgRef];
This is how I get a shot of the current visible frame on the scene in Swift:
The key is to
get the current time of the player which is of type CMTime
convert that time into seconds of type Float64
switch the seconds back to CMTime using CMTimeMake. The first parameter, which is where the seconds go, should be cast to Int64
Code:
var myImage: UIImage?
guard let player = player else { return }
let currentTime: CMTime = player.currentTime() // step 1.
let currentTimeInSecs: Float64 = CMTimeGetSeconds(currentTime) // step 2.
let actionTime: CMTime = CMTimeMake(Int64(currentTimeInSecs), 1) // step 3.
let asset = AVAsset(url: fileUrl)
let imageGenerator = AVAssetImageGenerator(asset: asset)
imageGenerator.appliesPreferredTrackTransform = true // prevent image rotation
do{
let imageRef = try imageGenerator.copyCGImage(at: actionTime, actualTime: nil)
myImage = UIImage(cgImage: imageRef)
}catch let err as NSError{
print(err.localizedDescription)
}
Swift extension for generating thumbnails from video
extension AVPlayer {
func generateThumbnail(time: CMTime) -> UIImage? {
guard let asset = currentItem?.asset else { return nil }
let imageGenerator = AVAssetImageGenerator(asset: asset)
do {
let cgImage = try imageGenerator.copyCGImage(at: time, actualTime: nil)
return UIImage(cgImage: cgImage)
} catch {
print(error.localizedDescription)
}
return nil
}
}
When you need to create multiple thumbnails at once, the class AVAssetImageGenerator is golden, as it provides an async way.
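A minimal sketch of that async API (the asset and the requested times are just placeholders; error handling is trimmed):
AVAsset *asset = player.currentItem.asset; //or any AVAsset you already have
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform = YES;
NSArray<NSValue *> *times = @[[NSValue valueWithCMTime:CMTimeMake(1, 1)],
                              [NSValue valueWithCMTime:CMTimeMake(2, 1)]];
[generator generateCGImagesAsynchronouslyForTimes:times
                                completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
    if (result == AVAssetImageGeneratorSucceeded && image != NULL) {
        UIImage *thumb = [UIImage imageWithCGImage:image];
        //hand thumb back to the UI on the main queue
    }
}];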
If you need a thumbnail image of the player's current frame, simply render its view (platform-specific) or its layer (platform-independent):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGSize frameSize = _playerLayer.frame.size;
CGContextRef thumbnailContext = CGBitmapContextCreate(nil, frameSize.width, frameSize.height, 8, 0, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
[_playerLayer renderInContext:thumbnailContext];
CGImageRef playerThumbnail = CGBitmapContextCreateImage(thumbnailContext);
CGContextRelease(thumbnailContext);
This is super fast and works synchronously.
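If you need the result as a UIImage, something like this should follow (a small addition of mine), which also balances the Create call so the CGImageRef is not leaked:
UIImage *thumbnail = [UIImage imageWithCGImage:playerThumbnail];
CGImageRelease(playerThumbnail); //CGBitmapContextCreateImage returns a +1 reference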
Code for 2022:
seconds = .. the normal human-meaning desired time position in the video
guard let pl = .. your player ..
guard let ite = pl.currentItem ..
let testGen = AVAssetImageGenerator(asset: ite.asset)
testGen.maximumSize = CGSize(width: 0, height: .. height of your preview box)
testGen.requestedTimeToleranceBefore = .zero // during development
// or something like ... CMTime(value: .. your tolerance .., timescale: 600)
testGen.requestedTimeToleranceAfter = .zero // during development
// ditto
if #available(tvOS 16, *) {
Task { [weak self] ..
do {
let ct = CMTime(value: CMTimeValue(seconds), timescale: 1)
// NOTE THE "1"
let (foundImage, foundTime) = try await testGen.image(at: ct)
let foundAsSecs = CMTimeGetSeconds(foundTime)
print("tried gen at \(seconds) found as \(foundAsSecs) \n")
self. .. your preview .image = UIImage(cgImage: foundImage)
} catch {
print("gen err \(error)")
}
}
}
Setting the two tolerances is a sophisticated issue; search around for the details.
Watch out for the gotcha where a timescale of 1 is needed for the CMTime.