Let's say you have an NSImage backed by an NSBitmapImageRep (a raster image) of 16x16 pixels.
It can be a color image, or it can contain only black pixels with an alpha channel.
When it has only black pixels, I can set .isTemplate on the NSImage and handle it accordingly.
The question is: how do you quickly detect that it has only black pixels?
What is the fastest way to check whether a provided image is a template?
Here is how I do it now. It works, but it requires walking through all the pixels and checking them one by one. Even at 16x16, it takes about a second to process 10-20 images, so I am looking for a more optimized approach:
+ (BOOL)detectImageIsTemplate:(NSImage *)image
{
    BOOL result = NO;
    if (image)
    {
        // If we have a valid image, assume it's a template until we meet any non-black pixel
        result = YES;
        NSSize imageSize = image.size;
        NSRect imageRect = NSMakeRect(0, 0, imageSize.width, imageSize.height);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
        CGContextRef ctx = CGBitmapContextCreate(NULL,
                                                 imageSize.width,
                                                 imageSize.height,
                                                 8,
                                                 0,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
        NSGraphicsContext *gctx = [NSGraphicsContext graphicsContextWithCGContext:ctx flipped:NO];
        [NSGraphicsContext setCurrentContext:gctx];
        [image drawInRect:imageRect];
        // ......................................................
        size_t width = CGBitmapContextGetWidth(ctx);
        size_t height = CGBitmapContextGetHeight(ctx);
        size_t bytesPerRow = CGBitmapContextGetBytesPerRow(ctx); // rows may be padded, so don't assume width * 4
        uint8_t *data = (uint8_t *)CGBitmapContextGetData(ctx);
        for (size_t y = 0; y < height && result; y++)
        {
            uint32_t *pixel = (uint32_t *)(data + y * bytesPerRow);
            for (size_t x = 0; x < width; x++)
            {
                uint32_t rgba = *pixel++;
                uint8_t red   = (rgba & 0x000000ff) >> 0;
                uint8_t green = (rgba & 0x0000ff00) >> 8;
                uint8_t blue  = (rgba & 0x00ff0000) >> 16;
                if (red != 0 || green != 0 || blue != 0)
                {
                    result = NO;
                    break;
                }
            }
        }
        // ......................................................
        [NSGraphicsContext setCurrentContext:nil];
        CGContextRelease(ctx);
        CGColorSpaceRelease(colorSpace);
    }
    return result;
}
Pure black images fall into your template category. Is the source a color image, or does it contain only pixels with an alpha channel?
Why not judge the image type by the number of channels: RGBX, or only A?
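A minimal sketch of that channel-count idea, assuming the NSImage carries an NSBitmapImageRep and that an alpha-only rep (one sample per pixel, with alpha) is the case you care about; anything with color channels would still need the pixel scan:

// Hypothetical fast path: decide from the rep's channel layout instead of
// scanning pixels. A rep with a single alpha sample cannot carry color, so
// it can be treated as a template right away.
+ (BOOL)imageLooksLikeTemplate:(NSImage *)image
{
    for (NSImageRep *anyRep in image.representations) {
        if (![anyRep isKindOfClass:[NSBitmapImageRep class]]) {
            continue;
        }
        NSBitmapImageRep *rep = (NSBitmapImageRep *)anyRep;
        if (rep.samplesPerPixel == 1 && rep.hasAlpha) {
            return YES; // the "only A" case from the comment above
        }
    }
    // RGBX (or unknown) reps still need the per-pixel check.
    return [self detectImageIsTemplate:image];
}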
I am working on document edge detection using OpenCV in my iOS project and have successfully detected the edges of the document.
Now I want to rotate the image along with the detected rectangle. I referred to this
GitHub project to detect the edges.
For that, I first rotated the image and tried to re-detect the edges by again finding the largest rectangle in the image. Unfortunately, it is not giving me the exact rectangle.
Can somebody suggest a way to detect the rotated document's edges again, or should I rotate the detected rectangle along with the image?
Before Rotation Image
After Rotation Image
+(NSMutableArray *) getLargestSquarePoints: (UIImage *) image : (CGSize) size {
Mat imageMat;
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, cols, rows, 8, cvMat.step[0], colorSpace, kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
imageMat = cvMat;
cv::resize(imageMat, imageMat, cvSize(size.width, size.height));
// UIImageToMat(image, imageMat);
std::vector<std::vector<cv::Point> >rectangle;
std::vector<cv::Point> largestRectangle;
getRectangles(imageMat, rectangle);
getlargestRectangle(rectangle, largestRectangle);
if (largestRectangle.size() == 4)
{
// Thanks to: https://stackoverflow.com/questions/20395547/sorting-an-array-of-x-and-y-vertice-points-ios-objective-c/20399468#20399468
NSArray *points = @[
    [NSValue valueWithCGPoint:(CGPoint){(CGFloat)largestRectangle[0].x, (CGFloat)largestRectangle[0].y}],
    [NSValue valueWithCGPoint:(CGPoint){(CGFloat)largestRectangle[1].x, (CGFloat)largestRectangle[1].y}],
    [NSValue valueWithCGPoint:(CGPoint){(CGFloat)largestRectangle[2].x, (CGFloat)largestRectangle[2].y}],
    [NSValue valueWithCGPoint:(CGPoint){(CGFloat)largestRectangle[3].x, (CGFloat)largestRectangle[3].y}] ];
CGPoint min = [points[0] CGPointValue];
CGPoint max = min;
for (NSValue *value in points) {
CGPoint point = [value CGPointValue];
min.x = fminf(point.x, min.x);
min.y = fminf(point.y, min.y);
max.x = fmaxf(point.x, max.x);
max.y = fmaxf(point.y, max.y);
}
CGPoint center = {
0.5f * (min.x + max.x),
0.5f * (min.y + max.y),
};
NSLog(#"center: %#", NSStringFromCGPoint(center));
NSNumber *(^angleFromPoint)(id) = ^(NSValue *value){
CGPoint point = [value CGPointValue];
CGFloat theta = atan2f(point.y - center.y, point.x - center.x);
CGFloat angle = fmodf(M_PI - M_PI_4 + theta, 2 * M_PI);
return @(angle);
};
NSArray *sortedPoints = [points sortedArrayUsingComparator:^NSComparisonResult(id a, id b) {
return [angleFromPoint(a) compare:angleFromPoint(b)];
}];
NSLog(#"sorted points: %#", sortedPoints);
NSMutableArray *squarePoints = [[NSMutableArray alloc] init];
[squarePoints addObject: [sortedPoints objectAtIndex:0]];
[squarePoints addObject: [sortedPoints objectAtIndex:1]];
[squarePoints addObject: [sortedPoints objectAtIndex:2]];
[squarePoints addObject: [sortedPoints objectAtIndex:3]];
imageMat.release();
return squarePoints;
}
else{
imageMat.release();
return nil;
}
}
void getRectangles(cv::Mat& image, std::vector<std::vector<cv::Point>>&rectangles) {
// blur will enhance edge detection
cv::Mat blurred(image);
GaussianBlur(image, blurred, cvSize(11,11), 0);
cv::Mat gray0(blurred.size(), CV_8U), gray;
std::vector<std::vector<cv::Point> > contours;
// find squares in every color plane of the image
for (int c = 0; c < 3; c++)
{
int ch[] = {c, 0};
mixChannels(&blurred, 1, &gray0, 1, ch, 1);
// try several threshold levels
const int threshold_level = 2;
for (int l = 0; l < threshold_level; l++)
{
// Use Canny instead of zero threshold level!
// Canny helps to catch squares with gradient shading
if (l == 0)
{
Canny(gray0, gray, 10, 20, 3); //
// Canny(gray0, gray, 0, 50, 5);
// Dilate helps to remove potential holes between edge segments
dilate(gray, gray, cv::Mat(), cv::Point(-1,-1));
}
else
{
gray = gray0 >= (l+1) * 255 / threshold_level;
}
// Find contours and store them in a list
findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
// Test contours
std::vector<cv::Point> approx;
for (size_t i = 0; i < contours.size(); i++)
{
// approximate contour with accuracy proportional
// to the contour perimeter
approxPolyDP(cv::Mat(contours[i]), approx, arcLength(cv::Mat(contours[i]), true)*0.02, true);
// Note: absolute value of an area is used because
// area may be positive or negative - in accordance with the
// contour orientation
if (approx.size() == 4 &&
fabs(contourArea(cv::Mat(approx))) > 1000 &&
isContourConvex(cv::Mat(approx)))
{
double maxCosine = 0;
for (int j = 2; j < 5; j++)
{
double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
maxCosine = MAX(maxCosine, cosine);
}
if (maxCosine < 0.3)
rectangles.push_back(approx);
}
}
}
}
}
void getlargestRectangle(const std::vector<std::vector<cv::Point> >&rectangles, std::vector<cv::Point>& largestRectangle)
{
if (!rectangles.size())
{
return;
}
double maxArea = 0;
int index = 0;
for (size_t i = 0; i < rectangles.size(); i++)
{
cv::Rect rectangle = boundingRect(cv::Mat(rectangles[i]));
double area = rectangle.width * rectangle.height;
if (maxArea < area)
{
maxArea = area;
index = i;
}
}
largestRectangle = rectangles[index];
}
double angle(cv::Point pt1, cv::Point pt2, cv::Point pt0) {
double dx1 = pt1.x - pt0.x;
double dy1 = pt1.y - pt0.y;
double dx2 = pt2.x - pt0.x;
double dy2 = pt2.y - pt0.y;
return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}
+(UIImage *) getTransformedImage: (CGFloat) newWidth : (CGFloat) newHeight : (UIImage *) origImage : (CGPoint [4]) corners : (CGSize) size {
cv::Mat imageMat;
CGColorSpaceRef colorSpace = CGImageGetColorSpace(origImage.CGImage);
CGFloat cols = size.width;
CGFloat rows = size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,              // Pointer to backing data
                                                cols,                    // Width of bitmap
                                                rows,                    // Height of bitmap
                                                8,                       // Bits per component
                                                cvMat.step[0],           // Bytes per row
                                                colorSpace,              // Colorspace
                                                kCGImageAlphaNoneSkipLast |
                                                kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), origImage.CGImage);
CGContextRelease(contextRef);
imageMat = cvMat;
cv::Mat newImageMat = cv::Mat( cvSize(newWidth,newHeight), CV_8UC4);
cv::Point2f src[4], dst[4];
src[0].x = corners[0].x;
src[0].y = corners[0].y;
src[1].x = corners[1].x;
src[1].y = corners[1].y;
src[2].x = corners[2].x;
src[2].y = corners[2].y;
src[3].x = corners[3].x;
src[3].y = corners[3].y;
// (These destination values are dead code: they are immediately overwritten below.)
// dst[0] = (0, -10);                        dst[1] = (newWidth - 1, -10);
// dst[2] = (newWidth - 1, newHeight + 1);   dst[3] = (0, newHeight + 1);
dst[0].x = 0;
dst[0].y = 0;
dst[1].x = newWidth - 1;
dst[1].y = 0;
dst[2].x = newWidth - 1;
dst[2].y = newHeight - 1;
dst[3].x = 0;
dst[3].y = newHeight - 1;
cv::warpPerspective(imageMat, newImageMat, cv::getPerspectiveTransform(src, dst), cvSize(newWidth, newHeight));
//Transform to UIImage
NSData *data = [NSData dataWithBytes:newImageMat.data length:newImageMat.elemSize() * newImageMat.total()];
CGColorSpaceRef colorSpace2;
if (newImageMat.elemSize() == 1) {
colorSpace2 = CGColorSpaceCreateDeviceGray();
} else {
colorSpace2 = CGColorSpaceCreateDeviceGray();
// colorSpace2 = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGFloat width = newImageMat.cols;
CGFloat height = newImageMat.rows;
CGImageRef imageRef = CGImageCreate(width, height, 8, 8 * newImageMat.elemSize(),
newImageMat.step[0],
colorSpace2,
kCGImageAlphaNone | kCGBitmapByteOrderDefault, provider,
NULL, false, kCGRenderingIntentDefault);
UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace2);
return image;
}
If you use cv::minAreaRect, it gives the best enclosing rotated rectangle for a contour along with its angle in degrees, so you can rotate the image back.
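A rough Objective-C++ sketch of that suggestion, assuming largestRectangle is the contour produced by getlargestRectangle above (the angle normalization is an assumption; minAreaRect's angle convention differs between OpenCV versions):

// Hypothetical deskew step: take the rotated bounding box of the detected
// contour and rotate the image back by its angle.
cv::RotatedRect box = cv::minAreaRect(cv::Mat(largestRectangle));
double angle = box.angle;
if (angle < -45.0) {
    angle += 90.0; // normalize so we rotate by the smaller amount
}
cv::Mat rotation = cv::getRotationMatrix2D(box.center, angle, 1.0);
cv::Mat deskewed;
cv::warpAffine(imageMat, deskewed, rotation, imageMat.size(),
               cv::INTER_LINEAR, cv::BORDER_REPLICATE);
// Alternatively, apply the same 'rotation' matrix to the four detected
// corners (cv::transform) instead of re-running the detection.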
I've searched through various Apple docs and Stack Overflow answers, but nothing really helped; I still have a blank app window. I'm trying to display the contents of a pixel buffer in an NSWindow. To do that, I've allocated a buffer:
UInt8* row = (UInt8 *) malloc(WINDOW_WIDTH * WINDOW_HEIGHT * bytes_per_pixel);
UInt32 pitch = (WINDOW_WIDTH * bytes_per_pixel);
// For each row
for (UInt32 y = 0; y < WINDOW_HEIGHT; ++y) {
Pixel* pixel = (Pixel *) row;
// For each pixel in a row
for (UInt32 x = 0; x < WINDOW_WIDTH; ++x) {
*pixel++ = 0xFF000000;
}
row += pitch;
}
This should prepare a buffer with red pixels. Then I'm creating an NSBitmapImageRep:
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:(u8 *) row
pixelsWide:WINDOW_WIDTH
pixelsHigh:WINDOW_HEIGHT
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bytesPerRow:WINDOW_WIDTH * 4
bitsPerPixel:32];
Which then converted into NSImage:
NSSize imageSize = NSMakeSize(CGImageGetWidth([imageRep CGImage]), CGImageGetHeight([imageRep CGImage]));
NSImage *image = [[NSImage alloc] initWithSize:imageSize];
[image addRepresentation:imageRep];
Then I'm configuring the view:
NSView *view = [window contentView];
[view setWantsLayer: YES];
[[view layer] setContents: image];
Sadly this doesn't give me the result I expect.
Here are some problems with your code:
You are incrementing row by pitch at the end of each y-loop. You never saved the pointer to the beginning of the buffer. When you create your NSBitmapImageRep, you pass a pointer that is past the end of the buffer.
You are passing row as the first (planes) argument of initWithBitmapDataPlanes:..., but you need to pass &row. The documentation says
An array of character pointers, each of which points to a buffer containing raw image data.[…]
An “array of character pointers” means (in C) you pass a pointer to a pointer.
You say “This should prepare a buffer with red pixels.” But you filled the buffer with 0xFF000000, and you said hasAlpha:YES. Depending on the byte order used by the initializer, either you have set the alpha channel to 0, or you have set the alpha channel to 0xFF but set all of the color channels to 0.
As it happens, you have set each pixel to opaque black (alpha = 0xFF, colors all zero). Try setting each pixel to 0xFF00007F and you'll get a dimmed red (alpha = 0xFF, red = 0x7F).
Thus:
typedef struct {
uint8_t red;
uint8_t green;
uint8_t blue;
uint8_t alpha;
} Pixel;
@implementation AppDelegate
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
size_t width = self.window.contentView.bounds.size.width;
size_t height = self.window.contentView.bounds.size.height;
Pixel color = { .red=127, .green=0, .blue=0, .alpha=255 };
size_t pitch = width * sizeof(Pixel);
uint8_t *buffer = malloc(pitch * height);
for (size_t y = 0; y < height; ++y) {
Pixel *row = (Pixel *)(buffer + y * pitch);
for (size_t x = 0; x < width; ++x) {
row[x] = color;
}
}
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:&buffer
pixelsWide:width pixelsHigh:height
bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bytesPerRow:pitch bitsPerPixel:sizeof(Pixel) * 8];
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[image addRepresentation:rep];
self.window.contentView.wantsLayer = YES;
self.window.contentView.layer.contents = image;
}
@end
Result:
Note that I didn't free buffer. If you free buffer before rep is destroyed, things will go wrong. For example, if you just add free(buffer) to the end of applicationDidFinishLaunching:, the window appears gray.
This is a thorny problem to solve. If you use Core Graphics instead, the memory management is all handled properly. You can ask Core Graphics to allocate the buffer for you (by passing NULL instead of a valid pointer), and it will free the buffer when appropriate.
You have to release the Core Graphics objects you create to avoid memory leaks, but you can do that as soon as you're done with them. The Product > Analyze command can also help you find leaks of Core Graphics objects, but will not help you find leaks of un-freed malloc blocks.
Here's what a Core Graphics solution looks like:
typedef struct {
uint8_t red;
uint8_t green;
uint8_t blue;
uint8_t alpha;
} Pixel;
@implementation AppDelegate
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
size_t width = self.window.contentView.bounds.size.width;
size_t height = self.window.contentView.bounds.size.height;
CGColorSpaceRef rgb = CGColorSpaceCreateWithName(kCGColorSpaceLinearSRGB);
CGContextRef gc = CGBitmapContextCreate(NULL, width, height, 8, 0, rgb, kCGImageByteOrder32Big | kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(rgb);
size_t pitch = CGBitmapContextGetBytesPerRow(gc);
uint8_t *buffer = CGBitmapContextGetData(gc);
Pixel color = { .red=127, .green=0, .blue=0, .alpha=255 };
for (size_t y = 0; y < height; ++y) {
Pixel *row = (Pixel *)(buffer + y * pitch);
for (size_t x = 0; x < width; ++x) {
row[x] = color;
}
}
CGImageRef image = CGBitmapContextCreateImage(gc);
CGContextRelease(gc);
self.window.contentView.wantsLayer = YES;
self.window.contentView.layer.contents = (__bridge id)image;
CGImageRelease(image);
}
@end
Not sure what's going on, but here's code that has been working for years:
static NSImage* NewImageFromRGBA( const UInt8* rawRGBA, NSInteger width, NSInteger height )
{
size_t rawRGBASize = height*width*4/* sizeof(RGBA) = 4 */;
// Create a bitmap representation, allowing NSBitmapImageRep to allocate its own data buffer
NSBitmapImageRep* imageRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
pixelsWide:width
pixelsHigh:height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bytesPerRow:0
bitsPerPixel:0];
NSCAssert(imageRep != nil, @"failed to create NSBitmapImageRep");
NSCAssert((size_t)[imageRep bytesPerPlane] == rawRGBASize, @"alignment or size of CGContext buffer and NSImageRep do not agree");
// Copy the raw bitmap image into the new image representation
memcpy([imageRep bitmapData],rawRGBA,rawRGBASize);
// Create an empty NSImage then add the bitmap representation to it
NSImage* image = [[NSImage alloc] initWithSize:NSMakeSize(width,height)];
[image addRepresentation:imageRep];
return image;
}
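For context, a hypothetical call site (the 256x256 size and the solid red fill are made up for illustration):

// Fill a raw RGBA buffer with opaque red and wrap it in an NSImage.
NSInteger width = 256, height = 256;
UInt8 *rawRGBA = malloc(width * height * 4);
for (NSInteger i = 0; i < width * height; i++) {
    rawRGBA[i * 4 + 0] = 255; // R
    rawRGBA[i * 4 + 1] = 0;   // G
    rawRGBA[i * 4 + 2] = 0;   // B
    rawRGBA[i * 4 + 3] = 255; // A
}
NSImage *redImage = NewImageFromRGBA(rawRGBA, width, height);
free(rawRGBA); // NewImageFromRGBA memcpy's the bytes, so the buffer can be freed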
I'm trying to draw a line with a specific width. I searched for examples online, but I only found examples using straight lines. I need curved lines. Also, I need to detect whether the user touched within the line. Is it possible to achieve this using Objective-C and SpriteKit? If so, can someone provide an example?
You can use UIBezierPath to create bezier curves (nice smooth curves). You can specify this path for a CAShapeLayer and add that as a sublayer to your view:
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:CGPointMake(10, 150)];
[path addCurveToPoint:CGPointMake(110, 150) controlPoint1:CGPointMake(40, 100) controlPoint2:CGPointMake(80, 100)];
[path addCurveToPoint:CGPointMake(210, 150) controlPoint1:CGPointMake(140, 200) controlPoint2:CGPointMake(170, 200)];
[path addCurveToPoint:CGPointMake(310, 150) controlPoint1:CGPointMake(250, 100) controlPoint2:CGPointMake(280, 100)];
CAShapeLayer *layer = [CAShapeLayer layer];
layer.lineWidth = 10;
layer.strokeColor = [UIColor redColor].CGColor;
layer.fillColor = [UIColor clearColor].CGColor;
layer.path = path.CGPath;
[self.view.layer addSublayer:layer];
If you want to randomize it a little, you can just randomize some of the curves. If you want some fuzziness, add some shadow. If you want the ends to be round, specify a rounded line cap:
UIBezierPath *path = [UIBezierPath bezierPath];
CGPoint point = CGPointMake(10, 100);
[path moveToPoint:point];
CGPoint controlPoint1;
CGPoint controlPoint2 = CGPointMake(point.x - 5.0 - arc4random_uniform(50), 150.0);
for (NSInteger i = 0; i < 5; i++) {
controlPoint1 = CGPointMake(point.x + (point.x - controlPoint2.x), 50.0);
point.x += 40.0 + arc4random_uniform(20);
controlPoint2 = CGPointMake(point.x - 5.0 - arc4random_uniform(50), 150.0);
[path addCurveToPoint:point controlPoint1:controlPoint1 controlPoint2:controlPoint2];
}
CAShapeLayer *layer = [CAShapeLayer layer];
layer.lineWidth = 5;
layer.strokeColor = [UIColor redColor].CGColor;
layer.fillColor = [UIColor clearColor].CGColor;
layer.path = path.CGPath;
layer.shadowColor = [UIColor redColor].CGColor;
layer.shadowRadius = 2.0;
layer.shadowOpacity = 1.0;
layer.shadowOffset = CGSizeZero;
layer.lineCap = kCALineCapRound;
[self.view.layer addSublayer:layer];
If you want it to be even more irregular, break those beziers into smaller segments, but the idea would be the same. The only trick with conjoined bezier curves is that you want to make sure that the second control point of one curve is in line with the first control point of the next one, or else you end up with sharp discontinuities in the curves.
If you want to detect if and when a user taps on it, that's more complicated. But what you have to do is:
Make a snapshot of the view:
- (UIImage *)captureView:(UIView *)view
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 1.0); // usually I'd use 0.0, but we'll use 1.0 here so that the tap point of the gesture matches the pixel of the snapshot
if ([view respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
BOOL success = [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
NSAssert(success, @"drawViewHierarchyInRect failed");
} else {
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
}
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Get the color of the pixel at the coordinate that the user tapped:
- (void)handleTap:(UITapGestureRecognizer *)gesture
{
CGPoint point = [gesture locationInView:gesture.view];
CGFloat red, green, blue, alpha;
UIColor *color = [self image:self.image colorAtPoint:point];
[color getRed:&red green:&green blue:&blue alpha:&alpha];
if (green < 0.9 && blue < 0.9 && red > 0.9)
NSLog(#"tapped on curve");
else
NSLog(#"didn't tap on curve");
}
Here is how I adapted Apple's code for getting the pixel buffer in order to determine the color of the pixel the user tapped on:
// adapted from https://developer.apple.com/library/mac/qa/qa1509/_index.html
- (UIColor *)image:(UIImage *)image colorAtPoint:(CGPoint)point
{
UIColor *color;
CGImageRef imageRef = image.CGImage;
// Create the bitmap context
CGContextRef context = [self createARGBBitmapContextForImage:imageRef];
NSAssert(context, #"error creating context");
// Get image width, height. We'll use the entire image.
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGRect rect = {{0,0},{width,height}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(context, rect, imageRef);
// Now we can get a pointer to the image data associated with the bitmap
// context.
uint8_t *data = CGBitmapContextGetData (context);
if (data != NULL) {
size_t offset = (NSInteger) point.y * 4 * width + (NSInteger) point.x * 4;
uint8_t alpha = data[offset];
uint8_t red = data[offset+1];
uint8_t green = data[offset+2];
uint8_t blue = data[offset+3];
color = [UIColor colorWithRed:red / 255.0 green:green / 255.0 blue:blue / 255.0 alpha:alpha / 255.0];
}
// When finished, release the context
CGContextRelease(context);
// Free image data memory for the context
if (data) {
free(data); // we used malloc in createARGBBitmapContextForImage, so free it
}
return color;
}
- (CGContextRef) createARGBBitmapContextForImage:(CGImageRef) inImage
{
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
size_t bitmapByteCount;
size_t bitmapBytesPerRow;
// Get image width, height. We'll use the entire image.
size_t pixelsWide = CGImageGetWidth(inImage);
size_t pixelsHigh = CGImageGetHeight(inImage);
// Declare the number of bytes per row. Each pixel in the bitmap in this
// example is represented by 4 bytes; 8 bits each of red, green, blue, and
// alpha.
bitmapBytesPerRow = (pixelsWide * 4);
bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
// Use the device RGB color space.
colorSpace = CGColorSpaceCreateDeviceRGB(); // or CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
NSAssert(colorSpace, @"Error allocating color space");
// Allocate memory for image data. This is the destination in memory
// where any drawing to the bitmap context will be rendered.
bitmapData = malloc(bitmapByteCount);
NSAssert(bitmapData, #"Unable to allocate bitmap buffer");
// Create the bitmap context. We want pre-multiplied ARGB, 8-bits
// per component. Regardless of what the source image format is
// (CMYK, Grayscale, and so on) it will be converted over to the format
// specified here by CGBitmapContextCreate.
context = CGBitmapContextCreate (bitmapData,
pixelsWide,
pixelsHigh,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
(CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
NSAssert(context, #"Context not created!");
// Make sure and release colorspace before returning
CGColorSpaceRelease( colorSpace );
return context;
}
My problem is: UIImage is rotated after processing.
I use a helper class for the image processing called ProcessHelper. This class has two methods:
+ (unsigned char *) convertUIImageToBitmapRGBA8:(UIImage *) image;
+ (UIImage *) convertBitmapRGBA8ToUIImage:(unsigned char *)rawData
withWidth:(int) width
withHeight:(int) height;
implementation
+ (unsigned char *) convertUIImageToBitmapRGBA8:(UIImage *) image {
NSLog(#"Convert image [%d x %d] to RGBA8 char data", (int)image.size.width,
(int)image.size.height);
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
return rawData;
}
+ (UIImage *) convertBitmapRGBA8ToUIImage:(unsigned char *) rawData
withWidth:(int) width
withHeight:(int) height {
CGContextRef ctx = CGBitmapContextCreate(rawData,
width,
height,
8,
width * 4,
CGColorSpaceCreateDeviceRGB(),
kCGImageAlphaPremultipliedLast );
CGImageRef imageRef = CGBitmapContextCreateImage (ctx);
UIImage* rawImage = [UIImage imageWithCGImage:imageRef];
CGContextRelease(ctx);
free(rawData);
return rawImage;
}
On start I get pixel data:
rawData = [ProcessHelper convertUIImageToBitmapRGBA8:image];
Next I do some processing:
-(void)process_grayscale {
int byteIndex = 0;
for (int i = 0 ; i < workingImage.size.width * workingImage.size.height ; ++i)
{
int outputColor = (rawData[byteIndex] + rawData[byteIndex+1] + rawData[byteIndex+2]) / 3;
rawData[byteIndex] = rawData[byteIndex + 1] = rawData[byteIndex + 2] = (char) (outputColor);
byteIndex += 4;
}
workingImage = [ProcessHelper convertBitmapRGBA8ToUIImage:rawData
withWidth:CGImageGetWidth(workingImage.CGImage)
withHeight:CGImageGetHeight(workingImage.CGImage)];
}
After this I return workingImage to the parent class, and a UIImageView displays it, but at the old size. I mean: the image before is WxH and after it is still WxH but rotated (it should be HxW after rotation). I would like the image not to rotate.
This happens when I edit photos taken on the iPad. Screenshots are OK, and images from the internet, like backgrounds, are OK.
How can I do this correctly?
Use UIGraphicsPushContext(ctx); [image drawInRect:CGRectMake(0, 0, width, height)]; UIGraphicsPopContext(); instead of CGContextDrawImage. CGContextDrawImage will flip the image vertically.
Or scale and transform the context before calling CGContextDrawImage.
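A sketch of both suggestions applied to convertUIImageToBitmapRGBA8: from the question (only the drawing call changes; context, width, height, image and imageRef are the variables already defined there — treat this as an illustration, not a drop-in fix):

// Option 1: let UIKit draw the image; drawInRect: honors the UIImage's
// imageOrientation, which CGContextDrawImage ignores.
UIGraphicsPushContext(context);
[image drawInRect:CGRectMake(0, 0, width, height)];
UIGraphicsPopContext();

// Option 2 (assumes a plain vertical flip is all that's needed): flip the
// context's coordinate system before drawing with Core Graphics.
// CGContextTranslateCTM(context, 0, height);
// CGContextScaleCTM(context, 1.0, -1.0);
// CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);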
Very simple question... I have an array of pixels, how do I display them on the screen?
#define WIDTH 10
#define HEIGHT 10
#define SIZE WIDTH*HEIGHT
unsigned short pixels[SIZE];
for (int i = 0; i < WIDTH; i++) {
for (int j = 0; j < HEIGHT; j++) {
pixels[j*HEIGHT + i] = 0xFFFF;
}
}
That's it... now how can I show them on the screen?
Create a new "Cocoa Application" (if you don't know how to create a cocoa application go to Cocoa Dev Center)
Subclass NSView (if you don't know how to subclass a view read section "Create the NSView Subclass")
Set your NSWindow to size 400x400 on interface builder
Use this code in your NSView
#import "MyView.h"
@implementation MyView
#define WIDTH 400
#define HEIGHT 400
#define SIZE (WIDTH*HEIGHT)
#define BYTES_PER_PIXEL 2
#define BITS_PER_COMPONENT 5
#define BITS_PER_PIXEL 16
- (id)initWithFrame:(NSRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code here.
}
return self;
}
- (void)drawRect:(NSRect)dirtyRect
{
// Get current context
CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
// Colorspace RGB
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Pixel Matrix allocation
unsigned short *pixels = calloc(SIZE, sizeof(unsigned short));
// Random pixels will give you a non-organized RAINBOW
for (int i = 0; i < WIDTH; i++) {
for (int j = 0; j < HEIGHT; j++) {
pixels[i+ j*HEIGHT] = arc4random() % USHRT_MAX;
}
}
// Provider
CGDataProviderRef provider = CGDataProviderCreateWithData(nil, pixels, SIZE * sizeof(unsigned short), nil); // length is in bytes
// CGImage
CGImageRef image = CGImageCreate(WIDTH,
HEIGHT,
BITS_PER_COMPONENT,
BITS_PER_PIXEL,
BYTES_PER_PIXEL*WIDTH,
colorSpace,
kCGImageAlphaNoneSkipFirst,
// xRRRRRGGGGGBBBBB - 16-bits, first bit is ignored!
provider,
nil, //No decode
NO, //No interpolation
kCGRenderingIntentDefault); // Default rendering
// Draw
CGContextDrawImage(context, self.bounds, image);
// Once everything is written on screen we can release everything
CGImageRelease(image);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
free(pixels); // the provider was created with a NULL release callback, so free the buffer ourselves
}
@end
There's a bunch of ways to do this. One of the more straightforward is to use CGContextDrawImage. In drawRect:
CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
CGDataProviderRef provider = CGDataProviderCreateWithData(nil, bitmap, bitmap_bytes, nil);
CGImageRef img = CGImageCreate(..., provider, ...);
CGDataProviderRelease(provider);
CGContextDrawImage(ctx, dstRect, img);
CGImageRelease(img);
CGImageCreate has a bunch of arguments which I've left out here, as the correct values will depend on what your bitmap format is. See the CGImage reference for details.
Note that, if your bitmap is static, it may make sense to hold on to the CGImageRef instead of disposing of it immediately. You know best how your application works, so you decide whether that makes sense.
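A hedged sketch of that caching idea, reusing the format macros from the first answer (the lazy _cachedImage ivar, the solid-white fill, and ARC are illustrative assumptions, not part of this answer):

// Hypothetical: build the CGImage once for a static bitmap, reuse it in drawRect:.
@implementation MyView {
    CGImageRef _cachedImage;
    unsigned short *_pixels; // must stay alive while the provider/image may read it
}

- (CGImageRef)cachedImage {
    if (_cachedImage == NULL) {
        _pixels = calloc(SIZE, sizeof(unsigned short));
        for (int i = 0; i < SIZE; i++) {
            _pixels[i] = 0xFFFF; // solid white in xRRRRRGGGGGBBBBB
        }
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGDataProviderRef provider = CGDataProviderCreateWithData(nil, _pixels,
                                                                  SIZE * sizeof(unsigned short), nil);
        _cachedImage = CGImageCreate(WIDTH, HEIGHT,
                                     BITS_PER_COMPONENT, BITS_PER_PIXEL,
                                     BYTES_PER_PIXEL * WIDTH, colorSpace,
                                     kCGImageAlphaNoneSkipFirst, provider,
                                     nil, NO, kCGRenderingIntentDefault);
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpace);
    }
    return _cachedImage;
}

- (void)drawRect:(NSRect)dirtyRect {
    CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
    CGContextDrawImage(ctx, self.bounds, [self cachedImage]);
}

- (void)dealloc {
    CGImageRelease(_cachedImage);
    free(_pixels);
}
@end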
I solved this problem by using an NSImageView with an NSBitmapImageRep to create the image from the pixel values. There are lots of options for how you create the pixel values. In my case, I used 32-bit pixels (RGBA). In this code, pixels is the flat array of pixel values and display is the outlet for the NSImageView.
NSBitmapImageRep *myBitmap;
NSImage *myImage;
unsigned char *buff[4];
unsigned char *pixels;
int width, height, rectSize;
NSRect myBounds;
myBounds = [display bounds];
width = myBounds.size.width;
height = myBounds.size.height;
rectSize = width * height;
memset(buff, 0, sizeof(buff));
pixels = malloc(rectSize * 4);
// (fill in the pixels array here)
buff[0] = pixels;
myBitmap = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:buff
pixelsWide:width
pixelsHigh:height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bitmapFormat:0
bytesPerRow:(4 * width)
bitsPerPixel:32];
myImage = [[NSImage alloc] init];
[myImage addRepresentation:myBitmap];
[display setImage: myImage];
[myImage release];
[myBitmap release];
free(pixels);