I've searched through various Apple docs and StackOverflow answers, but nothing has really helped; I still have a blank app window. I'm trying to display the contents of a pixel buffer in an NSWindow. To do that, I've allocated a buffer:
UInt8* row = (UInt8 *) malloc(WINDOW_WIDTH * WINDOW_HEIGHT * bytes_per_pixel);
UInt32 pitch = (WINDOW_WIDTH * bytes_per_pixel);
// For each row
for (UInt32 y = 0; y < WINDOW_HEIGHT; ++y) {
Pixel* pixel = (Pixel *) row;
// For each pixel in a row
for (UInt32 x = 0; x < WINDOW_WIDTH; ++x) {
*pixel++ = 0xFF000000;
}
row += pitch;
}
This should prepare a buffer with red pixels. Then I'm creating an NSBitmapImageRep:
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:(u8 *) row
pixelsWide:WINDOW_WIDTH
pixelsHigh:WINDOW_HEIGHT
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bytesPerRow:WINDOW_WIDTH * 4
bitsPerPixel:32];
Which is then converted into an NSImage:
NSSize imageSize = NSMakeSize(CGImageGetWidth([imageRep CGImage]), CGImageGetHeight([imageRep CGImage]));
NSImage *image = [[NSImage alloc] initWithSize:imageSize];
[image addRepresentation:imageRep];
Then I'm configuring the view:
NSView *view = [window contentView];
[view setWantsLayer: YES];
[[view layer] setContents: image];
Sadly this doesn't give me the result I expect.
Here are some problems with your code:
You are incrementing row by pitch at the end of each y-loop. You never saved the pointer to the beginning of the buffer. When you create your NSBitmapImageRep, you pass a pointer that is past the end of the buffer.
You are passing row as the first (planes) argument of initWithBitmapDataPlanes:..., but you need to pass &row. The documentation says
An array of character pointers, each of which points to a buffer containing raw image data.[…]
An “array of character pointers” means (in C) you pass a pointer to a pointer.
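In code, that looks like this (a minimal sketch; buffer is assumed to point at the first byte of your pixel data):
unsigned char *planes[1] = { buffer }; // a one-element "array of character pointers", since isPlanar:NO
// pass `planes` (or, equivalently, `&buffer`) as the planes argument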
You say “This should prepare a buffer with red pixels.” But you filled the buffer with 0xFF000000, and you said hasAlpha:YES. Depending on the byte order used by the initializer, either you have set the alpha channel to 0, or you have set the alpha channel to 0xFF but set all of the color channels to 0.
As it happens, you have set each pixel to opaque black (alpha = 0xFF, colors all zero). Try setting each pixel to 0xFF00007F and you'll get a dimmed red (alpha = 0xFF, red = 0x7F).
Thus:
typedef struct {
uint8_t red;
uint8_t green;
uint8_t blue;
uint8_t alpha;
} Pixel;
@implementation AppDelegate
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
size_t width = self.window.contentView.bounds.size.width;
size_t height = self.window.contentView.bounds.size.height;
Pixel color = { .red=127, .green=0, .blue=0, .alpha=255 };
size_t pitch = width * sizeof(Pixel);
uint8_t *buffer = malloc(pitch * height);
for (size_t y = 0; y < height; ++y) {
Pixel *row = (Pixel *)(buffer + y * pitch);
for (size_t x = 0; x < width; ++x) {
row[x] = color;
}
}
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:&buffer
pixelsWide:width pixelsHigh:height
bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bytesPerRow:pitch bitsPerPixel:sizeof(Pixel) * 8];
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[image addRepresentation:rep];
self.window.contentView.wantsLayer = YES;
self.window.contentView.layer.contents = image;
}
@end
Result: the window is filled with dimmed red.
Note that I didn't free buffer. If you free buffer before rep is destroyed, things will go wrong. For example, if you just add free(buffer) to the end of applicationDidFinishLaunching:, the window appears gray.
This is a thorny problem to solve. If you use Core Graphics instead, the memory management is all handled properly. You can ask Core Graphics to allocate the buffer for you (by passing NULL instead of a valid pointer), and it will free the buffer when appropriate.
You have to release the Core Graphics objects you create to avoid memory leaks, but you can do that as soon as you're done with them. The Product > Analyze command can also help you find leaks of Core Graphics objects, but will not help you find leaks of un-freed malloc blocks.
Here's what a Core Graphics solution looks like:
typedef struct {
uint8_t red;
uint8_t green;
uint8_t blue;
uint8_t alpha;
} Pixel;
@implementation AppDelegate
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
size_t width = self.window.contentView.bounds.size.width;
size_t height = self.window.contentView.bounds.size.height;
CGColorSpaceRef rgb = CGColorSpaceCreateWithName(kCGColorSpaceLinearSRGB);
CGContextRef gc = CGBitmapContextCreate(NULL, width, height, 8, 0, rgb, kCGImageByteOrder32Big | kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(rgb);
size_t pitch = CGBitmapContextGetBytesPerRow(gc);
uint8_t *buffer = CGBitmapContextGetData(gc);
Pixel color = { .red=127, .green=0, .blue=0, .alpha=255 };
for (size_t y = 0; y < height; ++y) {
Pixel *row = (Pixel *)(buffer + y * pitch);
for (size_t x = 0; x < width; ++x) {
row[x] = color;
}
}
CGImageRef image = CGBitmapContextCreateImage(gc);
CGContextRelease(gc);
self.window.contentView.wantsLayer = YES;
self.window.contentView.layer.contents = (__bridge id)image;
CGImageRelease(image);
}
@end
Not sure what's going on, but here's code that has been working for years:
static NSImage* NewImageFromRGBA( const UInt8* rawRGBA, NSInteger width, NSInteger height )
{
size_t rawRGBASize = height*width*4/* sizeof(RGBA) = 4 */;
// Create a bitmap representation, allowing NSBitmapImageRep to allocate its own data buffer
NSBitmapImageRep* imageRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
pixelsWide:width
pixelsHigh:height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bytesPerRow:0
bitsPerPixel:0];
NSCAssert(imageRep!=nil,@"failed to create NSBitmapImageRep");
NSCAssert((size_t)[imageRep bytesPerPlane]==rawRGBASize,@"alignment or size of CGContext buffer and NSImageRep do not agree");
// Copy the raw bitmap image into the new image representation
memcpy([imageRep bitmapData],rawRGBA,rawRGBASize);
// Create an empty NSImage then add the bitmap representation to it
NSImage* image = [[NSImage alloc] initWithSize:NSMakeSize(width,height)];
[image addRepresentation:imageRep];
return image;
}
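For completeness, a call site might look like this (a sketch; the fill pattern is arbitrary, and WINDOW_WIDTH/WINDOW_HEIGHT are assumed from the question):
// Fill a buffer with opaque dimmed-red pixels, R-G-B-A byte order
UInt8 *rgba = malloc(WINDOW_WIDTH * WINDOW_HEIGHT * 4);
for (NSInteger i = 0; i < WINDOW_WIDTH * WINDOW_HEIGHT; i++) {
    rgba[4*i + 0] = 0x7F; // red
    rgba[4*i + 1] = 0x00; // green
    rgba[4*i + 2] = 0x00; // blue
    rgba[4*i + 3] = 0xFF; // alpha
}
NSImage *image = NewImageFromRGBA(rgba, WINDOW_WIDTH, WINDOW_HEIGHT);
free(rgba); // safe: the image rep memcpy'd the data into its own buffer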
Let's say you have an NSImage with an NSBitmapImageRep (raster image) of 16x16 pixels.
It can be a color image or contain only black pixels with an alpha channel.
When it only has black pixels, I can set .isTemplate for the NSImage and handle it correspondingly.
The question is: how do you quickly detect that it has black pixels only?
What is the fastest way to check if a provided image is a template?
Here is how I do it. It works, but requires moving through all the pixels and checking them one by one. Even at 16x16, it takes about a second to process 10-20 images. So I am looking for a more optimized approach:
+ (BOOL)detectImageIsTemplate:(NSImage *)image
{
BOOL result = NO;
if (image)
{
// If we have a valid image, assume it's a template until we face any non-black pixel
result = YES;
NSSize imageSize = image.size;
NSRect imageRect = NSMakeRect(0, 0, imageSize.width, imageSize.height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef ctx = CGBitmapContextCreate(NULL,
imageSize.width,
imageSize.height,
8,
0,
colorSpace,
kCGImageAlphaPremultipliedLast);
NSGraphicsContext* gctx = [NSGraphicsContext graphicsContextWithCGContext:ctx flipped:NO];
[NSGraphicsContext setCurrentContext:gctx];
[image drawInRect:imageRect];
// ......................................................
size_t width = CGBitmapContextGetWidth(ctx);
size_t height = CGBitmapContextGetHeight(ctx);
uint32_t* pixel = (uint32_t*)CGBitmapContextGetData(ctx);
for (unsigned y = 0; y < height; y++)
{
for (unsigned x = 0; x < width; x++)
{
uint32_t rgba = *pixel;
uint8_t red = (rgba & 0x000000ff) >> 0;
uint8_t green = (rgba & 0x0000ff00) >> 8;
uint8_t blue = (rgba & 0x00ff0000) >> 16;
if (red != 0 || green != 0 || blue != 0)
{
result = NO;
break;
}
pixel++; // Next pixel
}
if (result == NO) break;
}
// ......................................................
[NSGraphicsContext setCurrentContext:nil];
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
}
return result;
}
For the pure black images in your category: are they color images, or do they only have pixels with an alpha channel? Why not judge the image type by the number of channels, i.e. RGBX versus only A?
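A sketch of that idea (imageLooksLikeTemplate: is a hypothetical helper; it inspects the backing representation's format instead of scanning pixels):
+ (BOOL)imageLooksLikeTemplate:(NSImage *)image
{
    for (NSImageRep *rep in image.representations)
    {
        if ([rep isKindOfClass:[NSBitmapImageRep class]])
        {
            // Grayscale or alpha-only reps (1-2 samples per pixel) cannot hold colored pixels
            if (((NSBitmapImageRep *)rep).samplesPerPixel <= 2)
                return YES;
        }
    }
    return NO; // RGB(A) reps would still need a pixel scan to be sure
}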
I am trying to get the color value of a pixel on the screen with a Cocoa app. The idea is that the app should be able to get a color value anywhere on the screen, even outside the scope of the app itself.
I did a bunch of research, and this is essentially what I am doing:
- (void) keepRunning:(NSTimer *)timer{
NSPoint mouseLoc = [NSEvent mouseLocation];
uint32_t count = 0;
CGDirectDisplayID displayForPoint;
if (CGGetDisplaysWithPoint(NSPointToCGPoint(mouseLoc), 1, &displayForPoint, &count) != kCGErrorSuccess)
{
NSLog(#"Break");
return;
}
CGImageRef image = CGDisplayCreateImageForRect(displayForPoint, CGRectMake(mouseLoc.x-10, mouseLoc.y-10, 1, 1));
NSBitmapImageRep* bitmap = [[NSBitmapImageRep alloc] initWithCGImage:image];
CGImageRelease(image);
NSColor* color = [bitmap colorAtX:0 y:0];
NSLog(#"%#", color);
}
keepRunning fires every 100 ms or so. This seems to be the correct way of doing this. The problem is, I don't get the correct color values. All of the values output are gray and basically wrong. Any ideas about what I am doing wrong? Is it something to do with transparency?
I changed your code a bit and put it in my animation timer. This seems to work. Note that I'm only creating a 1 pixel image and I flipped the mouseLoc.y value.
CGFloat screenHeight = 900.0; // This is the display height of my machine
NSPoint mouseLoc = [NSEvent mouseLocation];
uint32_t count = 0;
CGDirectDisplayID displayForPoint;
if (CGGetDisplaysWithPoint(NSPointToCGPoint(mouseLoc), 1, &displayForPoint, &count) != kCGErrorSuccess)
{
NSLog(#"Break");
return;
}
CGImageRef image = CGDisplayCreateImageForRect(displayForPoint, CGRectMake(mouseLoc.x, screenHeight - mouseLoc.y, 1, 1)); // mouseLoc.y is flipped
NSBitmapImageRep* bitmap = [[NSBitmapImageRep alloc] initWithCGImage:image];
CGImageRelease(image);
NSColor* color = [bitmap colorAtX:0 y:0];
NSLog(#"%#", color);
I have a memory leak problem and I haven't been able to find an answer for a week. :(
When I load more than 20 images from my photos array in a loop, I always receive a memory warning.
Maybe someone can help me with my frustrating problem?
Debug Message
"Received memory warning."
Loop in my UIView
images = [[NSMutableArray alloc] init];
for (Photo *photo in photos) {
[images addObject:[[UIImage imageWithData:photo.image_compressed] ToSize:videoSize]];
}
ToSize Method
#import "UIImage+toSize.h"
@implementation UIImage (toSize)
- (UIImage *)ToSize:(CGSize)newSize {
float originalWidth = self.size.width;
float originalHeight = self.size.height;
float newWidth = newSize.width;
float newHeight = newSize.height;
float xMargin = 0.0f;
float yMargin = 0.0f;
float ratioWidth = (originalWidth / originalHeight) * newSize.height;
float ratioHeight = (originalHeight / originalWidth) * newSize.width;
// LEFT & RIGHT Margin
if (ratioHeight > newSize.height)
{
// set new image size
newWidth = ratioWidth;
newHeight = newSize.height;
// calculate margin
xMargin = (newSize.width - ratioWidth) / 2;
} else if (ratioWidth > newSize.width)
{
// set new image size
newWidth = newSize.width;
newHeight = ratioHeight;
// calculate margin
yMargin = (newSize.height - ratioHeight) / 2;
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, newSize.width, newSize.height,
CGImageGetBitsPerComponent(self.CGImage), 0,
colorSpace,
CGImageGetBitmapInfo(self.CGImage));
CGContextDrawImage(ctx, CGRectMake(xMargin, yMargin, newWidth, newHeight), self.CGImage);
CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
UIImage *img = [UIImage imageWithCGImage:cgimg scale:self.scale orientation:UIImageOrientationUp];
CGColorSpaceRelease(colorSpace);
CGContextRelease(ctx);
CGImageRelease(cgimg);
return img;
}
@end
"Received memory warning." is not a memory leak, that's the OS telling you it's short on memory, at which point it calls the delegate method associated with low memory:
- (void)didReceiveMemoryWarning
To give running Apps a chance to clear out some cruft in order to not have to terminate anything, if not enough space is made apps start getting killed. Unless you can see a leak in instruments I really don't think this is a memory leak.
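One thing that often helps in loops like yours (a suggestion, not a confirmed diagnosis of your project): the intermediate UIImages from imageWithData: are autoreleased and pile up until the enclosing pool drains. Draining per iteration keeps the peak memory down:
images = [[NSMutableArray alloc] init];
for (Photo *photo in photos) {
    @autoreleasepool {
        // the temporary image from imageWithData: is released at the end of each pass
        [images addObject:[[UIImage imageWithData:photo.image_compressed] ToSize:videoSize]];
    }
}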
I have found myself in a situation where I have several NSImage objects that I need to rotate by 90 degrees, change the colour of pixels that are one colour to another colour and then get the RGB565 data representation for it as an NSData object.
I found the vImageConvert_ARGB8888toRGB565 function in the Accelerate framework so this should be able to do the RGB565 output.
There are a few UIImage rotation examples I have found here on StackOverflow, but I'm having trouble converting them to NSImage, as it appears I have to use NSGraphicsContext, not CGContextRef?
Ideally I would like these in an NSImage category so I can just call:
NSImage *rotated = [inputImage rotateByDegrees:90];
NSImage *colored = [rotated changeColorFrom:[NSColor redColor] toColor:[NSColor blackColor]];
NSData *rgb565 = [colored rgb565Data];
I just don't know where to start as image manipulation is new to me.
I appreciate any help I can get.
Edit (22/04/2013)
I have managed to piece this code together to generate the RGB565 data, but it comes out upside down and with some small artefacts. I assume the first is due to different coordinate systems being used, and the second is possibly due to me going from PNG to BMP. I will do some more testing, using a BMP to start with and also a non-transparent PNG.
- (NSData *)RGB565Data
{
CGContextRef cgctx = CreateARGBBitmapContext(self.CGImage);
if (cgctx == NULL)
return nil;
size_t w = CGImageGetWidth(self.CGImage);
size_t h = CGImageGetHeight(self.CGImage);
CGRect rect = {{0,0},{w,h}};
CGContextDrawImage(cgctx, rect, self.CGImage);
void *data = CGBitmapContextGetData (cgctx);
CGContextRelease(cgctx);
if (!data)
return nil;
vImage_Buffer src;
src.data = data;
src.width = w;
src.height = h;
src.rowBytes = (w * 4);
void* destData = malloc((w * 2) * h);
vImage_Buffer dst;
dst.data = destData;
dst.width = w;
dst.height = h;
dst.rowBytes = (w * 2);
vImageConvert_ARGB8888toRGB565(&src, &dst, 0);
size_t dataSize = 2 * w * h; // RGB565 = 2 5-bit components and 1 6-bit (16 bits/2 bytes)
NSData *RGB565Data = [NSData dataWithBytes:dst.data length:dataSize];
free(destData);
free(data); // CGContextRelease does not free a caller-supplied buffer, so release it here
return RGB565Data;
}
- (CGImageRef)CGImage
{
return [self CGImageForProposedRect:NULL context:[NSGraphicsContext currentContext] hints:nil];
}
CGContextRef CreateARGBBitmapContext (CGImageRef inImage)
{
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
int bitmapByteCount;
int bitmapBytesPerRow;
size_t pixelsWide = CGImageGetWidth(inImage);
size_t pixelsHigh = CGImageGetHeight(inImage);
bitmapBytesPerRow = (int)(pixelsWide * 4);
bitmapByteCount = (int)(bitmapBytesPerRow * pixelsHigh);
colorSpace = CGColorSpaceCreateDeviceRGB();
if (colorSpace == NULL)
return NULL;
bitmapData = malloc( bitmapByteCount );
if (bitmapData == NULL)
{
CGColorSpaceRelease( colorSpace );
return NULL;
}
context = CGBitmapContextCreate (bitmapData,
pixelsWide,
pixelsHigh,
8,
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedFirst);
if (context == NULL)
{
free (bitmapData);
fprintf (stderr, "Context not created!");
}
CGColorSpaceRelease( colorSpace );
return context;
}
For most of this, you'll want to use Core Image.
Rotation you can do with the CIAffineTransform filter. This takes an NSAffineTransform object. You may have already worked with that class before. (You could do the rotation with NSImage itself, but it's easier with Core Image and you'll probably need to use it for the next step anyway.)
I don't know what you mean by “change the colour of pixels that are one colour to another colour”; that could mean any of a lot of different things. Chances are, though, there's a filter for that.
I also don't know why you need 565 data specifically, but assuming you have a real need for that, you're correct that that function will be involved. Use CIContext's lowest-level rendering method to get 8-bit-per-component ARGB output, and then use that vImage function to convert it to 565 RGB.
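That last step could look roughly like this (a sketch, untested; it assumes you already have a CIContext, such as one built from the current NSGraphicsContext, and a CIImage whose extent is integral):
CGRect extent = [ciImage extent];
size_t w = (size_t)extent.size.width;
size_t h = (size_t)extent.size.height;
// Render 8-bit-per-component ARGB into a malloc'd buffer
void *argb = malloc(w * h * 4);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
[ciContext render:ciImage toBitmap:argb rowBytes:w * 4 bounds:extent format:kCIFormatARGB8 colorSpace:space];
CGColorSpaceRelease(space);
// Convert to RGB565 with Accelerate (vImage_Buffer fields are data, height, width, rowBytes)
vImage_Buffer src = { argb, h, w, w * 4 };
vImage_Buffer dst = { malloc(w * h * 2), h, w, w * 2 };
vImageConvert_ARGB8888toRGB565(&src, &dst, kvImageNoFlags);
free(argb);
NSData *rgb565 = [NSData dataWithBytesNoCopy:dst.data length:w * h * 2 freeWhenDone:YES];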
I have managed to get what I want by using NSBitmapImageRep (accessing it with a bit of a hack). If anyone knows a better way of doing this, please do share.
The - (NSBitmapImageRep *)bitmap method is my hack. The NSImage starts off having only an NSBitmapImageRep; however, after the rotation method a CIImageRep is added, which takes priority over the NSBitmapImageRep and breaks the colour code (as NSImage renders the CIImageRep, which doesn't get coloured).
BitmapImage.m (Subclass of NSImage)
CGContextRef CreateARGBBitmapContext (CGImageRef inImage)
{
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
int bitmapByteCount;
int bitmapBytesPerRow;
size_t pixelsWide = CGImageGetWidth(inImage);
size_t pixelsHigh = CGImageGetHeight(inImage);
bitmapBytesPerRow = (int)(pixelsWide * 4);
bitmapByteCount = (int)(bitmapBytesPerRow * pixelsHigh);
colorSpace = CGColorSpaceCreateDeviceRGB();
if (colorSpace == NULL)
return NULL;
bitmapData = malloc( bitmapByteCount );
if (bitmapData == NULL)
{
CGColorSpaceRelease( colorSpace );
return NULL;
}
context = CGBitmapContextCreate (bitmapData,
pixelsWide,
pixelsHigh,
8,
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedFirst);
if (context == NULL)
{
free (bitmapData);
fprintf (stderr, "Context not created!");
}
CGColorSpaceRelease( colorSpace );
return context;
}
- (NSData *)RGB565Data
{
CGContextRef cgctx = CreateARGBBitmapContext(self.CGImage);
if (cgctx == NULL)
return nil;
size_t w = CGImageGetWidth(self.CGImage);
size_t h = CGImageGetHeight(self.CGImage);
CGRect rect = {{0,0},{w,h}};
CGContextDrawImage(cgctx, rect, self.CGImage);
void *data = CGBitmapContextGetData (cgctx);
CGContextRelease(cgctx);
if (!data)
return nil;
vImage_Buffer src;
src.data = data;
src.width = w;
src.height = h;
src.rowBytes = (w * 4);
void* destData = malloc((w * 2) * h);
vImage_Buffer dst;
dst.data = destData;
dst.width = w;
dst.height = h;
dst.rowBytes = (w * 2);
vImageConvert_ARGB8888toRGB565(&src, &dst, 0);
size_t dataSize = 2 * w * h; // RGB565 = 2 5-bit components and 1 6-bit (16 bits/2 bytes)
NSData *RGB565Data = [NSData dataWithBytes:dst.data length:dataSize];
free(destData);
free(data); // CGContextRelease does not free a caller-supplied buffer, so release it here
return RGB565Data;
}
- (NSBitmapImageRep*)bitmap
{
NSBitmapImageRep *bitmap = nil;
NSMutableArray *repsToRemove = [NSMutableArray array];
// Iterate through the representations that back the NSImage
for (NSImageRep *rep in self.representations)
{
// If the representation is a bitmap
if ([rep isKindOfClass:[NSBitmapImageRep class]])
{
bitmap = [(NSBitmapImageRep*)rep retain];
break;
}
else
{
[repsToRemove addObject:rep];
}
}
// If no bitmap representation was found, we create one (this shouldn't occur)
if (bitmap == nil)
{
bitmap = [[[NSBitmapImageRep alloc] initWithCGImage:self.CGImage] retain];
[self addRepresentation:bitmap];
}
for (NSImageRep *rep2 in repsToRemove)
{
[self removeRepresentation:rep2];
}
return [bitmap autorelease];
}
- (NSColor*)colorAtX:(NSInteger)x y:(NSInteger)y
{
return [self.bitmap colorAtX:x y:y];
}
- (void)setColor:(NSColor*)color atX:(NSInteger)x y:(NSInteger)y
{
[self.bitmap setColor:color atX:x y:y];
}
NSImage+Extra.m (NSImage Category)
- (CGImageRef)CGImage
{
return [self CGImageForProposedRect:NULL context:[NSGraphicsContext currentContext] hints:nil];
}
Usage
- (IBAction)load:(id)sender
{
NSOpenPanel* openDlg = [NSOpenPanel openPanel];
[openDlg setCanChooseFiles:YES];
[openDlg setCanChooseDirectories:YES];
if ( [openDlg runModalForDirectory:nil file:nil] == NSOKButton )
{
NSArray* files = [openDlg filenames];
for( int i = 0; i < [files count]; i++ )
{
NSString* fileName = [files objectAtIndex:i];
BitmapImage *image = [[BitmapImage alloc] initWithContentsOfFile:fileName];
imageView.image = image;
}
}
}
- (IBAction)colorize:(id)sender
{
float width = imageView.image.size.width;
float height = imageView.image.size.height;
BitmapImage *img = (BitmapImage*)imageView.image;
NSColor *newColor = [img colorAtX:1 y:1];
for (int x = 0; x < width; x++)
{
for (int y = 0; y < height; y++)
{
if ([img colorAtX:x y:y] == newColor)
{
[img setColor:[NSColor redColor] atX:x y:y];
}
}
}
[imageView setNeedsDisplay:YES];
}
- (IBAction)rotate:(id)sender
{
BitmapImage *img = (BitmapImage*)imageView.image;
BitmapImage *newImg = [img rotate90DegreesClockwise:NO];
imageView.image = newImg;
}
Edit (24/04/2013)
I have changed the following code:
- (RGBColor)colorAtX:(NSInteger)x y:(NSInteger)y
{
NSUInteger components[4];
[self.bitmap getPixel:components atX:x y:y];
//NSLog(#"R: %ld, G:%ld, B:%ld", components[0], components[1], components[2]);
RGBColor color = {components[0], components[1], components[2]};
return color;
}
- (BOOL)color:(RGBColor)a isEqualToColor:(RGBColor)b
{
return ((a.red == b.red) && (a.green == b.green) && (a.blue == b.blue));
}
- (void)setColor:(RGBColor)color atX:(NSUInteger)x y:(NSUInteger)y
{
NSUInteger components[4] = {(NSUInteger)color.red, (NSUInteger)color.green, (NSUInteger)color.blue, 255};
//NSLog(#"R: %ld, G: %ld, B: %ld", components[0], components[1], components[2]);
[self.bitmap setPixel:components atX:x y:y];
}
- (IBAction)colorize:(id)sender
{
float width = imageView.image.size.width;
float height = imageView.image.size.height;
BitmapImage *img = (BitmapImage*)imageView.image;
RGBColor oldColor = [img colorAtX:0 y:0];
RGBColor newColor;// = {255, 0, 0};
newColor.red = 255;
newColor.green = 0;
newColor.blue = 0;
for (int x = 0; x < width; x++)
{
for (int y = 0; y < height; y++)
{
if ([img color:[img colorAtX:x y:y] isEqualToColor:oldColor])
{
[img setColor:newColor atX:x y:y];
}
}
}
[imageView setNeedsDisplay:YES];
}
But now it changes the pixels to red the first time the colorize method is called, and then to blue the second time.
Edit 2 (24/04/2013)
The following code fixes it. It was because the rotation code was adding an alpha channel to the NSBitmapImageRep.
- (RGBColor)colorAtX:(NSInteger)x y:(NSInteger)y
{
if (self.bitmap.hasAlpha)
{
NSUInteger components[4];
[self.bitmap getPixel:components atX:x y:y];
RGBColor color = {components[1], components[2], components[3]};
return color;
}
else
{
NSUInteger components[3];
[self.bitmap getPixel:components atX:x y:y];
RGBColor color = {components[0], components[1], components[2]};
return color;
}
}
- (void)setColor:(RGBColor)color atX:(NSUInteger)x y:(NSUInteger)y
{
if (self.bitmap.hasAlpha)
{
NSUInteger components[4] = {255, (NSUInteger)color.red, (NSUInteger)color.green, (NSUInteger)color.blue};
[self.bitmap setPixel:components atX:x y:y];
}
else
{
NSUInteger components[3] = {color.red, color.green, color.blue};
[self.bitmap setPixel:components atX:x y:y];
}
}
Ok, I decided to spend the day researching Peter's suggestion of using Core Image.
I had done some research previously and decided it was too hard, but after an entire day of research I finally worked out what I needed to do, and amazingly it couldn't be easier.
Early on I had decided that the Apple ChromaKey Core Image example would be a great starting point, but the example code frightened me off due to the 3-dimensional colour cube. After watching the WWDC 2012 video on Core Image and finding some sample code on github (https://github.com/vhbit/ColorCubeSample) I decided to jump in and just give it a go.
Here are the important parts of the working code. I haven't included the RGB565Data method, as I haven't written it yet, but it should be easy using the method Peter suggested:
CIImage+Extras.h
- (NSImage*) NSImage;
- (CIImage*) imageRotated90DegreesClockwise:(BOOL)clockwise;
- (CIImage*) imageWithChromaColor:(NSColor*)chromaColor BackgroundColor:(NSColor*)backColor;
- (NSColor*) colorAtX:(NSUInteger)x y:(NSUInteger)y;
CIImage+Extras.m
- (NSImage*) NSImage
{
CGContextRef cg = [[NSGraphicsContext currentContext] graphicsPort];
CIContext *context = [CIContext contextWithCGContext:cg options:nil];
CGImageRef cgImage = [context createCGImage:self fromRect:self.extent];
NSImage *image = [[NSImage alloc] initWithCGImage:cgImage size:NSZeroSize];
CGImageRelease(cgImage); // createCGImage:fromRect: returns a +1 reference that initWithCGImage:size: retains
return [image autorelease];
}
- (CIImage*) imageRotated90DegreesClockwise:(BOOL)clockwise
{
CIImage *im = self;
CIFilter *f = [CIFilter filterWithName:@"CIAffineTransform"];
NSAffineTransform *t = [NSAffineTransform transform];
[t rotateByDegrees:clockwise ? -90 : 90];
[f setValue:t forKey:@"inputTransform"];
[f setValue:im forKey:@"inputImage"];
im = [f valueForKey:@"outputImage"];
CGRect extent = [im extent];
f = [CIFilter filterWithName:@"CIAffineTransform"];
t = [NSAffineTransform transform];
[t translateXBy:-extent.origin.x
yBy:-extent.origin.y];
[f setValue:t forKey:@"inputTransform"];
[f setValue:im forKey:@"inputImage"];
im = [f valueForKey:@"outputImage"];
return im;
}
- (CIImage*) imageWithChromaColor:(NSColor*)chromaColor BackgroundColor:(NSColor*)backColor
{
CIImage *im = self;
CIColor *backCIColor = [[CIColor alloc] initWithColor:backColor];
CIImage *backImage = [CIImage imageWithColor:backCIColor];
backImage = [backImage imageByCroppingToRect:self.extent];
[backCIColor release];
float chroma[3];
chroma[0] = chromaColor.redComponent;
chroma[1] = chromaColor.greenComponent;
chroma[2] = chromaColor.blueComponent;
// Allocate memory
const unsigned int size = 64;
const unsigned int cubeDataSize = size * size * size * sizeof (float) * 4;
float *cubeData = (float *)malloc (cubeDataSize);
float rgb[3];//, *c = cubeData;
// Populate cube with a simple gradient going from 0 to 1
size_t offset = 0;
for (int z = 0; z < size; z++){
rgb[2] = ((double)z)/(size-1); // Blue value
for (int y = 0; y < size; y++){
rgb[1] = ((double)y)/(size-1); // Green value
for (int x = 0; x < size; x ++){
rgb[0] = ((double)x)/(size-1); // Red value
float alpha = ((rgb[0] == chroma[0]) && (rgb[1] == chroma[1]) && (rgb[2] == chroma[2])) ? 0.0 : 1.0;
cubeData[offset] = rgb[0] * alpha;
cubeData[offset+1] = rgb[1] * alpha;
cubeData[offset+2] = rgb[2] * alpha;
cubeData[offset+3] = alpha;
offset += 4;
}
}
}
// Create memory with the cube data
NSData *data = [NSData dataWithBytesNoCopy:cubeData
length:cubeDataSize
freeWhenDone:YES];
CIFilter *colorCube = [CIFilter filterWithName:@"CIColorCube"];
[colorCube setValue:[NSNumber numberWithInt:size] forKey:@"inputCubeDimension"];
// Set data for cube
[colorCube setValue:data forKey:@"inputCubeData"];
[colorCube setValue:im forKey:@"inputImage"];
im = [colorCube valueForKey:@"outputImage"];
CIFilter *sourceOver = [CIFilter filterWithName:@"CISourceOverCompositing"];
[sourceOver setValue:im forKey:@"inputImage"];
[sourceOver setValue:backImage forKey:@"inputBackgroundImage"];
im = [sourceOver valueForKey:@"outputImage"];
return im;
}
- (NSColor*)colorAtX:(NSUInteger)x y:(NSUInteger)y
{
NSBitmapImageRep* bitmap = [[NSBitmapImageRep alloc] initWithCIImage:self];
NSColor *color = [bitmap colorAtX:x y:y];
[bitmap release];
return color;
}
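Putting the category to use, the wish-list from the original question translates roughly to this (a sketch; it assumes sourceImage has TIFF-encodable contents to derive the CIImage from):
CIImage *input = [CIImage imageWithData:[sourceImage TIFFRepresentation]];
CIImage *rotated = [input imageRotated90DegreesClockwise:YES];
CIImage *keyed = [rotated imageWithChromaColor:[NSColor redColor] BackgroundColor:[NSColor blackColor]];
NSImage *result = [keyed NSImage];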
Very simple question... I have an array of pixels, how do I display them on the screen?
#define WIDTH 10
#define HEIGHT 10
#define SIZE WIDTH*HEIGHT
unsigned short pixels[SIZE];
for (int i = 0; i < WIDTH; i++) {
for (int j = 0; j < HEIGHT; j++) {
pixels[j*HEIGHT + i] = 0xFFFF;
}
}
That's it... now how can I show them on the screen?
Create a new "Cocoa Application" (if you don't know how to create a cocoa application go to Cocoa Dev Center)
Subclass NSView (if you don't know how to subclass a view read section "Create the NSView Subclass")
Set your NSWindow to size 400x400 on interface builder
Use this code in your NSView
#import "MyView.h"
@implementation MyView
#define WIDTH 400
#define HEIGHT 400
#define SIZE (WIDTH*HEIGHT)
#define BYTES_PER_PIXEL 2
#define BITS_PER_COMPONENT 5
#define BITS_PER_PIXEL 16
- (id)initWithFrame:(NSRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code here.
}
return self;
}
- (void)drawRect:(NSRect)dirtyRect
{
// Get current context
CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
// Colorspace RGB
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Pixel Matrix allocation
unsigned short *pixels = calloc(SIZE, sizeof(unsigned short));
// Random pixels will give you a non-organized RAINBOW
for (int i = 0; i < WIDTH; i++) {
for (int j = 0; j < HEIGHT; j++) {
pixels[i+ j*HEIGHT] = arc4random() % USHRT_MAX;
}
}
// Provider (note: the length argument is in bytes, not pixels)
CGDataProviderRef provider = CGDataProviderCreateWithData(nil, pixels, SIZE * BYTES_PER_PIXEL, nil);
// CGImage
CGImageRef image = CGImageCreate(WIDTH,
HEIGHT,
BITS_PER_COMPONENT,
BITS_PER_PIXEL,
BYTES_PER_PIXEL*WIDTH,
colorSpace,
kCGImageAlphaNoneSkipFirst,
// xRRRRRGGGGGBBBBB - 16-bits, first bit is ignored!
provider,
nil, //No decode
NO, //No interpolation
kCGRenderingIntentDefault); // Default rendering
// Draw
CGContextDrawImage(context, self.bounds, image);
// Once everything is drawn on screen we can release everything
CGImageRelease(image);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
free(pixels); // the provider was given no release callback, so free the buffer ourselves
}
@end
There's a bunch of ways to do this. One of the more straightforward ways is to use CGContextDrawImage. In drawRect:
CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
CGDataProviderRef provider = CGDataProviderCreateWithData(nil, bitmap, bitmap_bytes, nil);
CGImageRef img = CGImageCreate(..., provider, ...);
CGDataProviderRelease(provider);
CGContextDrawImage(ctx, dstRect, img);
CGImageRelease(img);
CGImageCreate has a bunch of arguments which I've left out here, as the correct values will depend on what your bitmap format is. See the CGImage reference for details.
Note that, if your bitmap is static, it may make sense to hold on to the CGImageRef instead of disposing of it immediately. You know best how your application works, so you decide whether that makes sense.
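As a concrete sketch of both points, here is what the elided call might look like for the 16-bit xRRRRRGGGGGBBBBB format used earlier in this thread, with the image cached across draws (bitmap, bitmap_bytes, width, height and dstRect are assumed from the snippet above):
static CGImageRef cachedImage = NULL; // keep it around if the bitmap never changes
if (cachedImage == NULL) {
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bitmap, bitmap_bytes, NULL);
    cachedImage = CGImageCreate(width, height,
                                5,          // bitsPerComponent
                                16,         // bitsPerPixel
                                width * 2,  // bytesPerRow
                                space,
                                kCGImageAlphaNoneSkipFirst,
                                provider, NULL, NO, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(space);
}
CGContextDrawImage(ctx, dstRect, cachedImage);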
I solved this problem by using an NSImageView with NSBitmapImageRep to create the image from the pixel values. There are lots of options for how you create the pixel values. In my case, I used 32-bit pixels (RGBA). In this code, pixels is the giant array of pixel values. display is the outlet for the NSImageView.
NSBitmapImageRep *myBitmap;
NSImage *myImage;
unsigned char *buff[4];
unsigned char *pixels;
int width, height, rectSize;
NSRect myBounds;
myBounds = [display bounds];
width = myBounds.size.width;
height = myBounds.size.height;
rectSize = width * height;
memset(buff, 0, sizeof(buff));
pixels = malloc(rectSize * 4);
// (fill in pixels array)
buff[0] = pixels;
myBitmap = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:buff
pixelsWide:width
pixelsHigh:height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bitmapFormat:0
bytesPerRow:(4 * width)
bitsPerPixel:32];
myImage = [[NSImage alloc] init];
[myImage addRepresentation:myBitmap];
[display setImage: myImage];
[myImage release];
[myBitmap release];
free(pixels);