I have an NSImage and I want to make an OpenGL texture from it. So I do the following:
someNSData = [someNSImage TIFFRepresentation];
someNSBitmapImageRepData = [[NSBitmapImageRep alloc] initWithData:someNSData];
If someNSImage is a .png it works OK, but if someNSImage is a .jpg the texture comes out broken.
(The original post included two screenshots: the .png version renders correctly, while the .jpg version renders as a corrupted texture.)
What's wrong?
Try this:
@implementation NSImage (NSImageToCGImageRef)

- (NSBitmapImageRep *)bitmapImageRepresentation
{
    NSBitmapImageRep *ret = (NSBitmapImageRep *)[self bestRepresentationForDevice:nil];
    if (![ret isKindOfClass:[NSBitmapImageRep class]])
    {
        ret = nil;
        for (NSImageRep *rep in [self representations])
            if ([rep isKindOfClass:[NSBitmapImageRep class]])
            {
                ret = (NSBitmapImageRep *)rep;
                break;
            }
    }
    // If ret is nil we create a new representation.
    if (ret == nil)
    {
        NSSize size = [self size];
        size_t width = size.width;
        size_t height = size.height;
        size_t bitsPerComp = 32;
        size_t bytesPerPixel = (bitsPerComp / CHAR_BIT) * 4;
        size_t bytesPerRow = bytesPerPixel * width;
        size_t totalBytes = height * bytesPerRow;

        NSMutableData *data = [NSMutableData dataWithBytesNoCopy:calloc(totalBytes, 1) length:totalBytes freeWhenDone:YES];
        CGColorSpaceRef space = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
        CGContextRef ctx = CGBitmapContextCreate([data mutableBytes], width, height, bitsPerComp, bytesPerRow, space, kCGBitmapFloatComponents | kCGImageAlphaPremultipliedLast);
        if (ctx != NULL)
        {
            [NSGraphicsContext saveGraphicsState];
            [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:ctx flipped:[self isFlipped]]];
            [self drawAtPoint:NSZeroPoint fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
            [NSGraphicsContext restoreGraphicsState];

            CGImageRef img = CGBitmapContextCreateImage(ctx);
            ret = [[[NSBitmapImageRep alloc] initWithCGImage:img] autorelease];
            [self addRepresentation:ret];
            CFRelease(img);
            CFRelease(space);
            CGContextRelease(ctx);
        }
        else NSLog(@"%@ Couldn't create CGBitmapContext", self);
    }
    return ret;
}

@end
// In your code:
NSBitmapImageRep *tempRep = [image bitmapImageRepresentation];
The width and the height of a texture must be a power of 2, i.e. 128, 256, 512, 1024, etc., at least on OpenGL versions without non-power-of-two (NPOT) texture support.
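A minimal sketch of rounding a dimension up to meet that constraint (the helper name is my own):

static size_t nextPowerOfTwo(size_t n)
{
    // Round n up to the next power of two (returns n if it already is one).
    size_t p = 1;
    while (p < n)
        p <<= 1;
    return p;
}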
It looks like your image format isn't 32-bit.
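That is a plausible cause: a JPEG-backed NSBitmapImageRep is typically 24-bit with no alpha and padded rows, so uploading it as tightly packed 32-bit RGBA produces exactly this kind of corruption. A hedged sketch of an upload that accounts for both, assuming a bound GL_TEXTURE_2D and a non-planar rep named rep:

// Match the upload format to what the rep actually contains.
GLenum format = [rep hasAlpha] ? GL_RGBA : GL_RGB;
// bytesPerRow may include padding, so tell GL the real row length in pixels.
GLint rowLength = (GLint)([rep bytesPerRow] / ([rep bitsPerPixel] / 8));
glPixelStorei(GL_UNPACK_ROW_LENGTH, rowLength);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             (GLsizei)[rep pixelsWide], (GLsizei)[rep pixelsHigh],
             0, format, GL_UNSIGNED_BYTE, [rep bitmapData]);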
I have found myself in a situation where I have several NSImage objects that I need to rotate by 90 degrees, change the colour of pixels that are one colour to another colour, and then get the RGB565 data representation as an NSData object.
I found the vImageConvert_ARGB8888toRGB565 function in the Accelerate framework so this should be able to do the RGB565 output.
There are a few UIImage rotation examples I have found here on Stack Overflow, but I'm having trouble converting them to NSImage, as it appears I have to use NSGraphicsContext rather than CGContextRef.
Ideally I would like these in an NSImage category so I can just call:
NSImage *rotated = [inputImage rotateByDegrees:90];
NSImage *colored = [rotated changeColorFrom:[NSColor redColor] toColor:[NSColor blackColor]];
NSData *rgb565 = [colored rgb565Data];
I just don't know where to start as image manipulation is new to me.
I appreciate any help I can get.
Edit (22/04/2013)
I have managed to piece this code together to generate the RGB565 data. It generates the image upside down and with some small artefacts; I assume the first is due to the different coordinate systems being used, and the second is possibly due to me going from PNG to BMP. I will do some more testing using a BMP to start with, and also a non-transparent PNG.
- (NSData *)RGB565Data
{
    CGContextRef cgctx = CreateARGBBitmapContext(self.CGImage);
    if (cgctx == NULL)
        return nil;

    size_t w = CGImageGetWidth(self.CGImage);
    size_t h = CGImageGetHeight(self.CGImage);
    CGRect rect = {{0,0},{w,h}};
    CGContextDrawImage(cgctx, rect, self.CGImage);

    void *data = CGBitmapContextGetData(cgctx);
    CGContextRelease(cgctx);
    if (!data)
        return nil;

    vImage_Buffer src;
    src.data = data;
    src.width = w;
    src.height = h;
    src.rowBytes = (w * 4);

    void *destData = malloc((w * 2) * h);
    vImage_Buffer dst;
    dst.data = destData;
    dst.width = w;
    dst.height = h;
    dst.rowBytes = (w * 2);

    vImageConvert_ARGB8888toRGB565(&src, &dst, 0);

    size_t dataSize = 2 * w * h; // RGB565 = two 5-bit components and one 6-bit component (16 bits / 2 bytes)
    NSData *RGB565Data = [NSData dataWithBytes:dst.data length:dataSize];

    free(data); // the buffer malloc'd in CreateARGBBitmapContext is not freed by CGContextRelease
    free(destData);
    return RGB565Data;
}
- (CGImageRef)CGImage
{
return [self CGImageForProposedRect:NULL context:[NSGraphicsContext currentContext] hints:nil];
}
CGContextRef CreateARGBBitmapContext(CGImageRef inImage)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    bitmapBytesPerRow = (int)(pixelsWide * 4);
    bitmapByteCount = (int)(bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
        return NULL;

    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!\n");
    }
    CGColorSpaceRelease(colorSpace);

    return context;
}
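One note on the upside-down output mentioned in the edit above: Core Graphics contexts have a bottom-left origin, so a common fix (a sketch, assuming the context returned by CreateARGBBitmapContext) is to flip the CTM before the CGContextDrawImage call:

// Flip the context vertically so the rendered rows come out top-down.
CGContextTranslateCTM(cgctx, 0, h);
CGContextScaleCTM(cgctx, 1.0, -1.0);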
For most of this, you'll want to use Core Image.
Rotation you can do with the CIAffineTransform filter. This takes an NSAffineTransform object. You may have already worked with that class before. (You could do the rotation with NSImage itself, but it's easier with Core Image and you'll probably need to use it for the next step anyway.)
I don't know what you mean by “change the colour of pixels that are one colour to another colour”; that could mean any of a lot of different things. Chances are, though, there's a filter for that.
I also don't know why you need 565 data specifically, but assuming you have a real need for that, you're correct that that function will be involved. Use CIContext's lowest-level rendering method to get 8-bit-per-component ARGB output, and then use that vImage function to convert it to 565 RGB.
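As a rough illustration of that last step, here is a hedged sketch (ciContext and ciImage are assumed to already exist; buffer names are mine) that renders into an 8-bit ARGB buffer and hands it to vImage:

// Render the CIImage into an ARGB8888 buffer...
CGRect extent = [ciImage extent];
size_t w = (size_t)extent.size.width;
size_t h = (size_t)extent.size.height;
void *argb = malloc(w * h * 4);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
[ciContext render:ciImage
         toBitmap:argb
         rowBytes:w * 4
           bounds:extent
           format:kCIFormatARGB8
       colorSpace:space];
CGColorSpaceRelease(space);
// ...then convert it to RGB565 with vImage.
vImage_Buffer src = { argb, h, w, w * 4 };   // data, height, width, rowBytes
void *rgb565 = malloc(w * h * 2);
vImage_Buffer dst = { rgb565, h, w, w * 2 };
vImageConvert_ARGB8888toRGB565(&src, &dst, kvImageNoFlags);
free(argb);
// rgb565 now holds the 16-bit pixels; wrap it in NSData and free it when done.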
I have managed to get what I want by using NSBitmapImageRep (accessing it with a bit of a hack). If anyone knows a better way of doing this, please do share.
The - (NSBitmapImageRep *)bitmap method is my hack. The NSImage starts off having only an NSBitmapImageRep; however, after the rotation method an NSCIImageRep is added, which takes priority over the NSBitmapImageRep and breaks the colour code (as NSImage renders the NSCIImageRep, which doesn't get coloured).
BitmapImage.m (Subclass of NSImage)
CGContextRef CreateARGBBitmapContext(CGImageRef inImage)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    bitmapBytesPerRow = (int)(pixelsWide * 4);
    bitmapByteCount = (int)(bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
        return NULL;

    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!\n");
    }
    CGColorSpaceRelease(colorSpace);

    return context;
}
- (NSData *)RGB565Data
{
    CGContextRef cgctx = CreateARGBBitmapContext(self.CGImage);
    if (cgctx == NULL)
        return nil;

    size_t w = CGImageGetWidth(self.CGImage);
    size_t h = CGImageGetHeight(self.CGImage);
    CGRect rect = {{0,0},{w,h}};
    CGContextDrawImage(cgctx, rect, self.CGImage);

    void *data = CGBitmapContextGetData(cgctx);
    CGContextRelease(cgctx);
    if (!data)
        return nil;

    vImage_Buffer src;
    src.data = data;
    src.width = w;
    src.height = h;
    src.rowBytes = (w * 4);

    void *destData = malloc((w * 2) * h);
    vImage_Buffer dst;
    dst.data = destData;
    dst.width = w;
    dst.height = h;
    dst.rowBytes = (w * 2);

    vImageConvert_ARGB8888toRGB565(&src, &dst, 0);

    size_t dataSize = 2 * w * h; // RGB565 = two 5-bit components and one 6-bit component (16 bits / 2 bytes)
    NSData *RGB565Data = [NSData dataWithBytes:dst.data length:dataSize];

    free(data); // the buffer malloc'd in CreateARGBBitmapContext is not freed by CGContextRelease
    free(destData);
    return RGB565Data;
}
- (NSBitmapImageRep *)bitmap
{
    NSBitmapImageRep *bitmap = nil;
    NSMutableArray *repsToRemove = [NSMutableArray array];

    // Iterate through the representations that back the NSImage.
    for (NSImageRep *rep in self.representations)
    {
        // If the representation is a bitmap, keep it.
        if ([rep isKindOfClass:[NSBitmapImageRep class]])
        {
            bitmap = [(NSBitmapImageRep *)rep retain];
            break;
        }
        else
        {
            [repsToRemove addObject:rep];
        }
    }

    // If no bitmap representation was found, we create one (this shouldn't occur).
    if (bitmap == nil)
    {
        bitmap = [[NSBitmapImageRep alloc] initWithCGImage:self.CGImage]; // alloc already returns +1; no extra retain needed
        [self addRepresentation:bitmap];
    }

    for (NSImageRep *rep2 in repsToRemove)
    {
        [self removeRepresentation:rep2];
    }

    return [bitmap autorelease];
}
- (NSColor*)colorAtX:(NSInteger)x y:(NSInteger)y
{
return [self.bitmap colorAtX:x y:y];
}
- (void)setColor:(NSColor*)color atX:(NSInteger)x y:(NSInteger)y
{
[self.bitmap setColor:color atX:x y:y];
}
NSImage+Extra.m (NSImage Category)
- (CGImageRef)CGImage
{
return [self CGImageForProposedRect:NULL context:[NSGraphicsContext currentContext] hints:nil];
}
Usage
- (IBAction)load:(id)sender
{
NSOpenPanel* openDlg = [NSOpenPanel openPanel];
[openDlg setCanChooseFiles:YES];
[openDlg setCanChooseDirectories:YES];
if ( [openDlg runModalForDirectory:nil file:nil] == NSOKButton )
{
NSArray* files = [openDlg filenames];
for( int i = 0; i < [files count]; i++ )
{
NSString* fileName = [files objectAtIndex:i];
BitmapImage *image = [[BitmapImage alloc] initWithContentsOfFile:fileName];
imageView.image = image;
}
}
}
- (IBAction)colorize:(id)sender
{
float width = imageView.image.size.width;
float height = imageView.image.size.height;
BitmapImage *img = (BitmapImage*)imageView.image;
NSColor *newColor = [img colorAtX:1 y:1];
for (int x = 0; x < width; x++)
{
for (int y = 0; y < height; y++)
{
if ([img colorAtX:x y:y] == newColor)
{
[img setColor:[NSColor redColor] atX:x y:y];
}
}
}
[imageView setNeedsDisplay:YES];
}
- (IBAction)rotate:(id)sender
{
BitmapImage *img = (BitmapImage*)imageView.image;
BitmapImage *newImg = [img rotate90DegreesClockwise:NO];
imageView.image = newImg;
}
Edit (24/04/2013)
I have changed the following code:
- (RGBColor)colorAtX:(NSInteger)x y:(NSInteger)y
{
    NSUInteger components[4];
    [self.bitmap getPixel:components atX:x y:y];
    //NSLog(@"R: %ld, G: %ld, B: %ld", components[0], components[1], components[2]);
    RGBColor color = {components[0], components[1], components[2]};
    return color;
}
- (BOOL)color:(RGBColor)a isEqualToColor:(RGBColor)b
{
return ((a.red == b.red) && (a.green == b.green) && (a.blue == b.blue));
}
- (void)setColor:(RGBColor)color atX:(NSUInteger)x y:(NSUInteger)y
{
    NSUInteger components[4] = {(NSUInteger)color.red, (NSUInteger)color.green, (NSUInteger)color.blue, 255};
    //NSLog(@"R: %ld, G: %ld, B: %ld", components[0], components[1], components[2]);
    [self.bitmap setPixel:components atX:x y:y];
}
- (IBAction)colorize:(id)sender
{
float width = imageView.image.size.width;
float height = imageView.image.size.height;
BitmapImage *img = (BitmapImage*)imageView.image;
RGBColor oldColor = [img colorAtX:0 y:0];
RGBColor newColor;// = {255, 0, 0};
newColor.red = 255;
newColor.green = 0;
newColor.blue = 0;
for (int x = 0; x < width; x++)
{
for (int y = 0; y < height; y++)
{
if ([img color:[img colorAtX:x y:y] isEqualToColor:oldColor])
{
[img setColor:newColor atX:x y:y];
}
}
}
[imageView setNeedsDisplay:YES];
}
But now it changes the pixels to red the first time the colorize method is called, and then to blue the second time.
Edit 2 (24/04/2013)
The following code fixes it. It was because the rotation code was adding an alpha channel to the NSBitmapImageRep.
- (RGBColor)colorAtX:(NSInteger)x y:(NSInteger)y
{
if (self.bitmap.hasAlpha)
{
NSUInteger components[4];
[self.bitmap getPixel:components atX:x y:y];
RGBColor color = {components[1], components[2], components[3]};
return color;
}
else
{
NSUInteger components[3];
[self.bitmap getPixel:components atX:x y:y];
RGBColor color = {components[0], components[1], components[2]};
return color;
}
}
- (void)setColor:(RGBColor)color atX:(NSUInteger)x y:(NSUInteger)y
{
if (self.bitmap.hasAlpha)
{
NSUInteger components[4] = {255, (NSUInteger)color.red, (NSUInteger)color.green, (NSUInteger)color.blue};
[self.bitmap setPixel:components atX:x y:y];
}
else
{
NSUInteger components[3] = {color.red, color.green, color.blue};
[self.bitmap setPixel:components atX:x y:y];
}
}
OK, I decided to spend the day researching Peter's suggestion of using Core Image.
I had done some research previously and decided it was too hard, but after an entire day of research I finally worked out what I needed to do, and amazingly it couldn't be easier.
Early on I had decided that the Apple ChromaKey Core Image example would be a great starting point, but the example code frightened me off due to the 3-dimensional colour cube. After watching the WWDC 2012 video on Core Image and finding some sample code on GitHub (https://github.com/vhbit/ColorCubeSample), I decided to jump in and just give it a go.
Here are the important parts of the working code. I haven't included the RGB565Data method as I haven't written it yet, but it should be easy using the method Peter suggested:
CIImage+Extras.h
- (NSImage*) NSImage;
- (CIImage*) imageRotated90DegreesClockwise:(BOOL)clockwise;
- (CIImage*) imageWithChromaColor:(NSColor*)chromaColor BackgroundColor:(NSColor*)backColor;
- (NSColor*) colorAtX:(NSUInteger)x y:(NSUInteger)y;
CIImage+Extras.m
- (NSImage*) NSImage
{
CGContextRef cg = [[NSGraphicsContext currentContext] graphicsPort];
CIContext *context = [CIContext contextWithCGContext:cg options:nil];
CGImageRef cgImage = [context createCGImage:self fromRect:self.extent];
NSImage *image = [[NSImage alloc] initWithCGImage:cgImage size:NSZeroSize];
return [image autorelease];
}
- (CIImage*) imageRotated90DegreesClockwise:(BOOL)clockwise
{
    CIImage *im = self;
    CIFilter *f = [CIFilter filterWithName:@"CIAffineTransform"];
    NSAffineTransform *t = [NSAffineTransform transform];
    [t rotateByDegrees:clockwise ? -90 : 90];
    [f setValue:t forKey:@"inputTransform"];
    [f setValue:im forKey:@"inputImage"];
    im = [f valueForKey:@"outputImage"];

    // Translate so the rotated image's extent starts at the origin.
    CGRect extent = [im extent];
    f = [CIFilter filterWithName:@"CIAffineTransform"];
    t = [NSAffineTransform transform];
    [t translateXBy:-extent.origin.x
                yBy:-extent.origin.y];
    [f setValue:t forKey:@"inputTransform"];
    [f setValue:im forKey:@"inputImage"];
    im = [f valueForKey:@"outputImage"];

    return im;
}
- (CIImage*) imageWithChromaColor:(NSColor*)chromaColor BackgroundColor:(NSColor*)backColor
{
CIImage *im = self;
CIColor *backCIColor = [[CIColor alloc] initWithColor:backColor];
CIImage *backImage = [CIImage imageWithColor:backCIColor];
backImage = [backImage imageByCroppingToRect:self.extent];
[backCIColor release];
float chroma[3];
chroma[0] = chromaColor.redComponent;
chroma[1] = chromaColor.greenComponent;
chroma[2] = chromaColor.blueComponent;
// Allocate memory
const unsigned int size = 64;
const unsigned int cubeDataSize = size * size * size * sizeof (float) * 4;
float *cubeData = (float *)malloc (cubeDataSize);
float rgb[3];//, *c = cubeData;
// Populate cube with a simple gradient going from 0 to 1
size_t offset = 0;
for (int z = 0; z < size; z++){
rgb[2] = ((double)z)/(size-1); // Blue value
for (int y = 0; y < size; y++){
rgb[1] = ((double)y)/(size-1); // Green value
for (int x = 0; x < size; x ++){
rgb[0] = ((double)x)/(size-1); // Red value
float alpha = ((rgb[0] == chroma[0]) && (rgb[1] == chroma[1]) && (rgb[2] == chroma[2])) ? 0.0 : 1.0;
cubeData[offset] = rgb[0] * alpha;
cubeData[offset+1] = rgb[1] * alpha;
cubeData[offset+2] = rgb[2] * alpha;
cubeData[offset+3] = alpha;
offset += 4;
}
}
}
// Create memory with the cube data
NSData *data = [NSData dataWithBytesNoCopy:cubeData
length:cubeDataSize
freeWhenDone:YES];
CIFilter *colorCube = [CIFilter filterWithName:@"CIColorCube"];
[colorCube setValue:[NSNumber numberWithInt:size] forKey:@"inputCubeDimension"];
// Set data for cube
[colorCube setValue:data forKey:@"inputCubeData"];
[colorCube setValue:im forKey:@"inputImage"];
im = [colorCube valueForKey:@"outputImage"];
CIFilter *sourceOver = [CIFilter filterWithName:@"CISourceOverCompositing"];
[sourceOver setValue:im forKey:@"inputImage"];
[sourceOver setValue:backImage forKey:@"inputBackgroundImage"];
im = [sourceOver valueForKey:@"outputImage"];
return im;
}
- (NSColor*)colorAtX:(NSUInteger)x y:(NSUInteger)y
{
NSBitmapImageRep* bitmap = [[NSBitmapImageRep alloc] initWithCIImage:self];
NSColor *color = [bitmap colorAtX:x y:y];
[bitmap release];
return color;
}
This is my compression code:
NSBitmapImageRep* tmpRep = [[_image representations] objectAtIndex:0];
[tmpRep setPixelsWide:512];
[tmpRep setPixelsHigh:512];
[tmpRep setSize:NSMakeSize(SmallThumbnailWidth, SmallThumbnailHeight)];
NSDictionary* imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.3] forKey:NSImageCompressionFactor];
NSData* outputImageData = [tmpRep representationUsingType:NSJPEGFileType properties:imageProps];
NSString* imageFilePath = [NSString stringWithFormat:@"%@/thumbnail.jpg", imagePath];
[outputImageData writeToFile:imageFilePath atomically:YES];
The original image size is 960×960. I want to compress the original image to 512×512, but the output image's pixel size is still 960×960 when I check it in Finder, even though the file size on disk really has been reduced compared with the original. Could anyone tell me why? Thank you.
Try this one:
This will reduce the saved file size (in KB):
-(NSImage *)imageCompressedByFactor:(float)factor{
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithData:[self TIFFRepresentation]];
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:factor] forKey:NSImageCompressionFactor];
NSData *compressedData = [imageRep representationUsingType:NSJPEGFileType properties:options];
return [[NSImage alloc] initWithData:compressedData];
}
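Usage would be, for example (originalImage is just an illustrative name):

NSImage *smaller = [originalImage imageCompressedByFactor:0.5];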
This will reduce the image size in pixels:
Copied from here
@implementation NSImage (ProportionalScaling)
- (NSImage*)imageByScalingProportionallyToSize:(NSSize)targetSize{
NSImage* sourceImage = self;
NSImage* newImage = nil;
if ([sourceImage isValid]){
NSSize imageSize = [sourceImage size];
float width = imageSize.width;
float height = imageSize.height;
float targetWidth = targetSize.width;
float targetHeight = targetSize.height;
float scaleFactor = 0.0;
float scaledWidth = targetWidth;
float scaledHeight = targetHeight;
NSPoint thumbnailPoint = NSZeroPoint;
if ( NSEqualSizes( imageSize, targetSize ) == NO )
{
float widthFactor = targetWidth / width;
float heightFactor = targetHeight / height;
if ( widthFactor < heightFactor )
scaleFactor = widthFactor;
else
scaleFactor = heightFactor;
scaledWidth = width * scaleFactor;
scaledHeight = height * scaleFactor;
if ( widthFactor < heightFactor )
thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
else if ( widthFactor > heightFactor )
thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
}
newImage = [[NSImage alloc] initWithSize:targetSize];
[newImage lockFocus];
NSRect thumbnailRect;
thumbnailRect.origin = thumbnailPoint;
thumbnailRect.size.width = scaledWidth;
thumbnailRect.size.height = scaledHeight;
[sourceImage drawInRect: thumbnailRect
fromRect: NSZeroRect
operation: NSCompositeSourceOver
fraction: 1.0];
[newImage unlockFocus];
}
return [newImage autorelease];
}
@end
Create a category method like so, in order to incrementally compress the image till you meet the desired file size:
- (NSImage*) compressUnderMegaBytes:(CGFloat)megabytes {
CGFloat compressionRatio = 1.0;
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithData:[self TIFFRepresentation]];
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:compressionRatio] forKey:NSImageCompressionFactor];
NSData *compressedData = [imageRep representationUsingType:NSJPEGFileType properties:options];
while ([compressedData length]>(megabytes*1024*1024)) {
@autoreleasepool {
compressionRatio = compressionRatio * 0.9;
options = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:compressionRatio] forKey:NSImageCompressionFactor];
compressedData = [imageRep representationUsingType:NSJPEGFileType properties:options];
// Safety check, 0.4 is a reasonable compression size, anything below will become blurry
if (compressionRatio <= 0.4) {
break;
}
}
}
return [[NSImage alloc] initWithData: compressedData];
}
You can then use it like this:
NSImage *compressedImage = [myImage compressUnderMegaBytes: 0.5];
level ranges from 0.0 to 1.0:
func getImageQualityWithLevel(image: NSImage, level: CGFloat) -> NSImage {
let _image = image
var newRect: NSRect = NSMakeRect(0, 0, _image.size.width, _image.size.height)
let imageSizeH: CGFloat = _image.size.height * level
let imageSizeW: CGFloat = _image.size.width * level
var newImage = NSImage(size: NSMakeSize(imageSizeW, imageSizeH))
newImage.lockFocus()
NSGraphicsContext.currentContext()?.imageInterpolation = NSImageInterpolation.Low
_image.drawInRect(NSMakeRect(0, 0, imageSizeW, imageSizeH), fromRect: newRect, operation: NSCompositingOperation.CompositeSourceOver, fraction: 1)
newImage.unlockFocus()
return newImage
}
I'm developing a quick app in which I have a method that should rescale a @2x image to a regular one. The problem is that it doesn't :(
Why?
-(BOOL)createNormalImage:(NSString*)inputRetinaImagePath {
NSImage *inputRetinaImage = [[NSImage alloc] initWithContentsOfFile:inputRetinaImagePath];
NSSize size = NSZeroSize;
size.width = inputRetinaImage.size.width*0.5;
size.height = inputRetinaImage.size.height*0.5;
[inputRetinaImage setSize:size];
NSLog(@"%f", inputRetinaImage.size.height);
NSBitmapImageRep *imgRep = [[inputRetinaImage representations] objectAtIndex: 0];
NSData *data = [imgRep representationUsingType: NSPNGFileType properties: nil];
NSString *outputFilePath = [[inputRetinaImagePath substringToIndex:inputRetinaImagePath.length - 7] stringByAppendingString:@".png"];
NSLog(@"Normal version file path: %@", outputFilePath);
[data writeToFile:outputFilePath atomically: NO];
return true;
}
You have to be very wary of the size attribute of an NSImage. It doesn't necessarily refer to the bitmapRepresentation's pixel dimensions; it could refer to the displayed size, for example. An NSImage may have a number of bitmapRepresentations for use at different output sizes.
Likewise, changing the size attribute of an NSImage does nothing to alter the bitmapRepresentations.
So what you need to do is work out the size you want your output image to be, and then draw a new image at that size using a bitmapRepresentation from the source NSImage.
Getting that size depends on how you have obtained your input image and what you know about it. For example, if you are confident that your input image has only one bitmapImageRep you can use this type of thing (as a category on NSImage)
- (NSSize) pixelSize
{
NSBitmapImageRep* bitmap = [[self representations] objectAtIndex:0];
return NSMakeSize(bitmap.pixelsWide,bitmap.pixelsHigh);
}
Even if you have a number of bitmapImageReps, the first one should be the largest one, and if that is the size that your Retina image was created at, it should be the Retina size you are after.
When you have worked out your final size, you can make the image:
- (NSImage*) resizeImage:(NSImage*)sourceImage size:(NSSize)size
{
NSRect targetFrame = NSMakeRect(0, 0, size.width, size.height);
NSImage* targetImage = nil;
NSImageRep *sourceImageRep =
[sourceImage bestRepresentationForRect:targetFrame
context:nil
hints:nil];
targetImage = [[NSImage alloc] initWithSize:size];
[targetImage lockFocus];
[sourceImageRep drawInRect: targetFrame];
[targetImage unlockFocus];
return targetImage;
}
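Usage would be along these lines, combining it with the pixelSize category above (the halving is just illustrative):

NSSize target = [sourceImage pixelSize];
target.width *= 0.5;
target.height *= 0.5;
NSImage *halfSize = [self resizeImage:sourceImage size:target];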
update
Here is a more elaborate version of a pixel-size-getting category on NSImage... let's assume nothing about the image, how many imageReps it has, whether it has any bitmapImageReps... this will return the largest pixel dimensions it can find. If it can't find bitmapImageRep pixel dimensions it will use whatever else it can get, which will most likely be bounding box dimensions (used by EPS and PDFs).
NSImage+PixelSize.h
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>
@interface NSImage (PixelSize)
- (NSInteger) pixelsWide;
- (NSInteger) pixelsHigh;
- (NSSize) pixelSize;
@end
NSImage+PixelSize.m
#import "NSImage+PixelSize.h"
@implementation NSImage (PixelSize) // must match the category name declared in the header
- (NSInteger) pixelsWide
{
/*
returns the pixel width of NSImage.
Selects the largest bitmapRep by preference
If there is no bitmapRep returns largest size reported by any imageRep.
*/
NSInteger result = 0;
NSInteger bitmapResult = 0;
for (NSImageRep* imageRep in [self representations]) {
if ([imageRep isKindOfClass:[NSBitmapImageRep class]]) {
if (imageRep.pixelsWide > bitmapResult)
bitmapResult = imageRep.pixelsWide;
} else {
if (imageRep.pixelsWide > result)
result = imageRep.pixelsWide;
}
}
if (bitmapResult) result = bitmapResult;
return result;
}
- (NSInteger) pixelsHigh
{
/*
returns the pixel height of NSImage.
Selects the largest bitmapRep by preference
If there is no bitmapRep returns largest size reported by any imageRep.
*/
NSInteger result = 0;
NSInteger bitmapResult = 0;
for (NSImageRep* imageRep in [self representations]) {
if ([imageRep isKindOfClass:[NSBitmapImageRep class]]) {
if (imageRep.pixelsHigh > bitmapResult)
bitmapResult = imageRep.pixelsHigh;
} else {
if (imageRep.pixelsHigh > result)
result = imageRep.pixelsHigh;
}
}
if (bitmapResult) result = bitmapResult;
return result;
}
- (NSSize) pixelSize
{
return NSMakeSize(self.pixelsWide,self.pixelsHigh);
}
@end
You would #import "NSImage+PixelSize.h" in your current file to make it accessible.
With this image category and the resize: method, you would modify your method thus:
//size.width = inputRetinaImage.size.width*0.5;
//size.height = inputRetinaImage.size.height*0.5;
size.width = inputRetinaImage.pixelsWide*0.5;
size.height = inputRetinaImage.pixelsHigh*0.5;
//[inputRetinaImage setSize:size];
NSImage* outputImage = [self resizeImage:inputRetinaImage size:size];
//NSBitmapImageRep *imgRep = [[inputRetinaImage representations] objectAtIndex: 0];
NSBitmapImageRep *imgRep = [[outputImage representations] objectAtIndex: 0];
That should fix things for you (proviso: I haven't tested it on your code)
I modified the script I use to downscale my images for you :)
-(BOOL)createNormalImage:(NSString*)inputRetinaImagePath {
NSImage *inputRetinaImage = [[NSImage alloc] initWithContentsOfFile:inputRetinaImagePath];
//determine new size
NSBitmapImageRep* bitmapImageRep = [[inputRetinaImage representations] objectAtIndex:0];
NSSize size = NSMakeSize(bitmapImageRep.pixelsWide * 0.5,bitmapImageRep.pixelsHigh * 0.5);
NSLog(@"size = %@", NSStringFromSize(size));
//get CGImageRef
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[inputRetinaImage TIFFRepresentation], NULL);
CGImageRef oldImageRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(oldImageRef);
if (alphaInfo == kCGImageAlphaNone) alphaInfo = kCGImageAlphaNoneSkipLast;
// Build a bitmap context
CGContextRef bitmap = CGBitmapContextCreate(NULL, size.width, size.height, 8, 4 * size.width, CGImageGetColorSpace(oldImageRef), alphaInfo);
// Draw into the context, this scales the image
CGContextDrawImage(bitmap, CGRectMake(0, 0, size.width, size.height), oldImageRef);
// Get an image from the context
CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
//this does not work in my test.
NSString *outputFilePath = [[inputRetinaImagePath substringToIndex:inputRetinaImagePath.length - 7] stringByAppendingString:@".png"];
//but this does!
NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString* docsDirectory = [paths objectAtIndex:0];
NSString *newfileName = [docsDirectory stringByAppendingFormat:#"/%#", [outputFilePath lastPathComponent]];
CFURLRef url = (__bridge CFURLRef)[NSURL fileURLWithPath:newfileName];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL(url, kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(destination, newImageRef, nil);
if (!CGImageDestinationFinalize(destination)) {
NSLog(@"Failed to write image to %@", newfileName);
}
CFRelease(destination);
// Release the CG objects created above to avoid leaks.
CGImageRelease(newImageRef);
CGContextRelease(bitmap);
CGImageRelease(oldImageRef);
CFRelease(source);
return true;
}
I have an AVPlayerLayer which I would like to create an OpenGL texture out of. I'm comfortable with OpenGL textures, and even comfortable with converting a CGImageRef into an OpenGL texture. It seems to me the code below should work, but I get just plain black. What am I doing wrong? Do I need to set any properties on the CALayer / AVPlayerLayer first?
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
int width = (int)[layer bounds].size.width;
int height = (int)[layer bounds].size.height;
CGContextRef context = CGBitmapContextCreate(NULL,
width,
height,
8,
width * 4,
colorSpace,
kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
if (context== NULL) {
ofLog(OF_LOG_ERROR, "getTextureFromLayer: failed to create context 1");
return;
}
[[layer presentationLayer] renderInContext:context];
CGImageRef cgImage = CGBitmapContextCreateImage(context);
int bytesPerPixel = CGImageGetBitsPerPixel(cgImage)/8;
if(bytesPerPixel == 3) bytesPerPixel = 4;
GLubyte *pixels = (GLubyte *) malloc(width * height * bytesPerPixel);
CGContextRelease(context);
context = CGBitmapContextCreate(pixels,
width,
height,
CGImageGetBitsPerComponent(cgImage),
width * bytesPerPixel,
CGImageGetColorSpace(cgImage),
kCGImageAlphaPremultipliedLast);
if(context == NULL) {
ofLog(OF_LOG_ERROR, "getTextureFromLayer: failed to create context 2");
free(pixels);
return;
}
CGContextDrawImage(context, CGRectMake(0.0, 0.0, width, height), cgImage);
int glMode;
switch(bytesPerPixel) {
case 1:
glMode = GL_LUMINANCE;
break;
case 3:
glMode = GL_RGB;
break;
case 4:
default:
glMode = GL_RGBA; break;
}
if(texture.bAllocated() == false || texture.getWidth() != width || texture.getHeight() != height) {
NSLog(@"getTextureFromLayer: allocating texture %i, %i\n", width, height);
texture.allocate(width, height, glMode, true);
}
// test texture
// for(int i=0; i<width*height*4; i++) pixels[i] = ofRandomuf() * 255;
texture.loadData(pixels, width, height, glMode);
CGContextRelease(context);
CFRelease(cgImage);
free(pixels);
P.S. The variable 'texture' is a C++ OpenGL (ES-compatible) texture object which I know works. If I uncomment the 'test texture' for-loop filling the texture with random noise, I can see that, so the problem is definitely earlier in the pipeline.
UPDATE
In response to Nick Weaver's reply I tried a different approach, and now I'm always getting NULL back from copyNextSampleBuffer with status == 3 (AVAssetReaderStatusFailed). Am I missing something?
variables
AVPlayer *videoPlayer;
AVPlayerLayer *videoLayer;
AVAssetReader *videoReader;
AVAssetReaderTrackOutput*videoOutput;
init
videoPlayer = [[AVPlayer alloc] initWithURL:[NSURL fileURLWithPath:[NSString stringWithUTF8String:videoPath.c_str()]]];
if(videoPlayer == nil) {
NSLog(@"videoPlayer == nil ERROR LOADING %s\n", videoPath.c_str());
} else {
NSLog(@"videoPlayer: %@", videoPlayer);
videoLayer = [[AVPlayerLayer playerLayerWithPlayer:videoPlayer] retain];
videoLayer.frame = [ThreeDView instance].bounds;
// [[ThreeDView instance].layer addSublayer:videoLayer]; // test to see if it's loading and running
AVAsset *asset = videoPlayer.currentItem.asset;
NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (NSString*)kCVPixelBufferPixelFormatTypeKey, nil];
videoReader = [[AVAssetReader alloc] initWithAsset:asset error:nil];
videoOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:[tracks objectAtIndex:0] outputSettings:settings];
[videoReader addOutput:videoOutput];
[videoReader startReading];
}
draw loop
if(videoPlayer == 0) {
ofLog(OF_LOG_WARNING, "Shot::drawVideo: videoPlayer == 0");
return;
}
if(videoOutput == 0) {
ofLog(OF_LOG_WARNING, "Shot::drawVideo: videoOutput == 0");
return;
}
CMSampleBufferRef sampleBuffer = [videoOutput copyNextSampleBuffer];
if(sampleBuffer == 0) {
ofLog(OF_LOG_ERROR, "Shot::drawVideo: sampleBuffer == 0, status: %i", videoReader.status);
return;
}
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CFRelease(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
unsigned char *pixels = ( unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);
int width = CVPixelBufferGetWidth(imageBuffer);
int height = CVPixelBufferGetHeight(imageBuffer);
if(videoTexture.bAllocated() == false || videoTexture.getWidth() != width || videoTexture.getHeight() != height) {
NSLog(@"Shot::drawVideo() allocating texture %i, %i\n", width, height);
videoTexture.allocate(width, height, GL_RGBA, true);
}
videoTexture.loadData(pixels, width, height, GL_BGRA);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
I think "iOS4: how do I use video file as an OpenGL texture?" will be helpful for your question.
I've been trying to display an NSImage on a CALayer. Then I realised I apparently need to convert it to a CGImage first, and then display that...
I have this code, which doesn't seem to be working:
CALayer *layer = [CALayer layer];
NSImage *finderIcon = [[NSWorkspace sharedWorkspace] iconForFileType:NSFileTypeForHFSTypeCode(kFinderIcon)];
[finderIcon setSize:(NSSize){ 128.0f, 128.0f }];
CGImageSourceRef source;
source = CGImageSourceCreateWithData((CFDataRef)finderIcon, NULL);
CGImageRef finalIcon = CGImageSourceCreateImageAtIndex(source, 0, NULL);
layer.bounds = CGRectMake(128.0f, 128.0f, 4, 4);
layer.position = CGPointMake(128.0f, 128.0f);
layer.contents = finalIcon;
// Insert the layer into the root layer
[mainLayer addSublayer:layer];
Why? How can I get this to work?
From the comments: Actually, if you're on 10.6, you can also just set the CALayer's contents to an NSImage rather than a CGImageRef...
If you're on OS X 10.6 or later, take a look at NSImage's CGImageForProposedRect:context:hints: method.
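For instance, a minimal sketch of the 10.6+ route (layer and image are assumed to already be in scope):

// Get a CGImage from the NSImage and hand it to the layer.
CGImageRef cgImage = [image CGImageForProposedRect:NULL context:nil hints:nil];
layer.contents = (id)cgImage; // on 10.6+ you can also assign the NSImage directly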
If you're not, I've got this in a category on NSImage:
-(CGImageRef)CGImage
{
CGContextRef bitmapCtx = CGBitmapContextCreate(NULL/*data - pass NULL to let CG allocate the memory*/,
[self size].width,
[self size].height,
8 /*bitsPerComponent*/,
0 /*bytesPerRow - CG will calculate it for you if it's allocating the data. This might get padded out a bit for better alignment*/,
[[NSColorSpace genericRGBColorSpace] CGColorSpace],
kCGBitmapByteOrder32Host|kCGImageAlphaPremultipliedFirst);
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:bitmapCtx flipped:NO]];
[self drawInRect:NSMakeRect(0,0, [self size].width, [self size].height) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapCtx);
CGContextRelease(bitmapCtx);
return (CGImageRef)[(id)cgImage autorelease];
}
I think I wrote this myself. But it's entirely possible that I ripped it off from somewhere else like Stack Overflow. It's an older personal project and I don't really remember.
Here's some code which may help you - I sure hope the formatting of this does not get all messed up like it appears is going to happen - all I can offer is that this works for me.
// -------------------------------------------------------------------------------------
- (void)awakeFromNib
{
// setup our main window 'contentWindow' to use layers
[[contentWindow contentView] setWantsLayer:YES]; // NSWindow*
// create a root layer to contain all of our layers
CALayer *root = [[contentWindow contentView] layer];
// use constraint layout to allow sublayers to center themselves
root.layoutManager = [CAConstraintLayoutManager layoutManager];
// create a new layer which will contain ALL our sublayers
// -------------------------------------------------------
mContainer = [CALayer layer];
mContainer.bounds = root.bounds;
mContainer.frame = root.frame;
mContainer.position = CGPointMake(root.bounds.size.width * 0.5,
root.bounds.size.height * 0.5);
// insert layer on the bottom of the stack so it is behind the controls
[root insertSublayer:mContainer atIndex:0];
// make it resize when its superlayer does
root.autoresizingMask = kCALayerWidthSizable | kCALayerHeightSizable;
// make it resize when its superlayer does
mContainer.autoresizingMask = kCALayerWidthSizable | kCALayerHeightSizable;
}
// -------------------------------------------------------------------------------------
- (void) loadMyImage:(NSString*) path
n:(NSInteger) num
x:(NSInteger) xpos
y:(NSInteger) ypos
h:(NSInteger) hgt
w:(NSInteger) wid
b:(NSString*) blendstr
{
#ifdef __DEBUG_LOGGING__
NSLog(@"loadMyImage - ENTER [%@] num[%ld] x[%ld] y[%ld] h[%ld] w[%ld] b[%@]",
path, (long)num, (long)xpos, (long)ypos, (long)hgt, (long)wid, blendstr);
#endif
NSInteger xoffset = ((wid / 2) + xpos); // use CORNER versus CENTER for location
NSInteger yoffset = ((hgt / 2) + ypos);
CIFilter* filter = nil;
CGRect cgrect = CGRectMake((CGFloat) xoffset, (CGFloat) yoffset,
(CGFloat) wid, (CGFloat) hgt);
if(nil != blendstr) // would be equivalent to @"CIMultiplyBlendMode" or similar
{
filter = [CIFilter filterWithName:blendstr];
}
// read image file via supplied path
NSImage* theimage = [[NSImage alloc] initWithContentsOfFile:path];
if(nil != theimage)
{
[self setMyImageLayer:[CALayer layer]]; // create layer
myImageLayer.frame = cgrect; // locate & size image
myImageLayer.compositingFilter = filter; // nil is OK if no filter
[myImageLayer setContents:(id) theimage]; // deposit image into layer
// add new layer into our main layer [see awakeFromNib above]
[mContainer insertSublayer:myImageLayer atIndex:0];
[theimage release];
}
else
{
NSLog(@"ERROR loadMyImage - no such image [%@]", path);
}
}
+ (CGImageRef) getCachedImage:(NSString *) imageName
{
NSGraphicsContext *context = [NSGraphicsContext currentContext]; // CGImageForProposedRect wants an NSGraphicsContext, not its graphicsPort
NSImage *img = [NSImage imageNamed:imageName];
NSRect rect = NSMakeRect(0, 0, [img size].width, [img size].height);
return [img CGImageForProposedRect:&rect context:context hints:NULL];
}
+ (CGImageRef) getImage:(NSString *) imageName withExtension:(NSString *) extension
{
NSGraphicsContext *context = [NSGraphicsContext currentContext]; // same fix as above: pass the NSGraphicsContext itself
NSString* imagePath = [[NSBundle mainBundle] pathForResource:imageName ofType:extension];
NSImage* img = [[NSImage alloc] initWithContentsOfFile:imagePath];
NSRect rect = NSMakeRect(0, 0, [img size].width, [img size].height);
CGImageRef imgRef = [img CGImageForProposedRect:&rect context:context hints:NULL];
[img release];
return imgRef;
}
then you can set it:
yourLayer.contents = (id)[self getCachedImage:@"myImage.png"];
or
yourLayer.contents = (id)[self getImage:@"myImage" withExtension:@"png"];