I have an image in the form of an NSURL as input. I converted this URL to an NSImage, then to NSData, from which I could get a CGImageRef. This imageRef let me extract raw information from the image, such as its height, width, bytesPerRow, and so on.
Here's the code that I used:
NSString * urlName = [url path];
NSImage *image = [[NSImage alloc] initWithContentsOfFile:urlName];
NSData *imageData = [image TIFFRepresentation];
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL); // __bridge: no ownership transfer, avoids leaking imageData
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
NSUInteger numberOfBitsPerPixel = CGImageGetBitsPerPixel(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
...
...
Now, I checked the size of the image using:
NSUInteger sz = [imageData length];
which differs from sz' = bytesPerRow * height.
I cannot understand why there is such a difference; sz is actually half of sz'.
Am I making some mistake while extracting this information? From what I can tell, some decompression may happen during the conversion of the image to NSData. In that case, what should I use to get reliable data?
I am new to the world of image processing in Objective-C, so please bear with me!
P.S. I checked the size of the file I am getting as input in the form of an NSURL, and it is the same as sz.
Try This:
Instead of
NSData *imageData = [image TIFFRepresentation];
use this:
NSData *imageData = [image TIFFRepresentationUsingCompression:NSTIFFCompressionLZW factor:0];
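For what it's worth, the two numbers measure different things: [imageData length] is the size of the encoded TIFF container, while bytesPerRow * height is the size of the decoded pixel buffer. A minimal sketch to compare them, assuming imageRef and imageData from the question are still in scope:
size_t rawSize = CGImageGetBytesPerRow(imageRef) * CGImageGetHeight(imageRef);
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(imageRef)); // decoded pixels backing the CGImage
NSLog(@"encoded TIFF: %lu bytes, raw pixels: %zu bytes (provider: %ld)",
      (unsigned long)[imageData length], rawSize, (long)CFDataGetLength(pixelData));
CFRelease(pixelData);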
I'm using the following code to flip (rotate by 180 degrees) an NSImage.
But the new image is twice the size (MBs, not dimensions) of the original when saved to disk. I want it to be approximately the same as the original. How can I accomplish this?
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:[[_imageView image] TIFFRepresentation]];
NSImage *img = [[NSImage alloc] initWithSize:NSMakeSize(imageRep.pixelsWide, imageRep.pixelsHigh)];
[img lockFocus];
NSAffineTransform *rotator = [NSAffineTransform transform];
[rotator translateXBy:imageRep.pixelsWide yBy:imageRep.pixelsHigh];
[rotator scaleXBy:-1 yBy:-1];
[rotator concat];
[imageRep drawInRect:NSMakeRect(0, 0, imageRep.pixelsWide, imageRep.pixelsHigh)];
[img unlockFocus];
The code I'm using to save the image to disk:
[[NSFileManager defaultManager] createFileAtPath:path contents:[img TIFFRepresentation] attributes:nil];
Thanks in advance!
I still don't know the root cause of this, but one workaround is to save the JPEG representation instead of the TIFF. The method I wrote is as follows:
- (void)CompressAndSaveImg:(NSImage *)img ToDiskAt:(NSString *)path WithCompressionFactor:(float)value{
NSData *imgData = [img TIFFRepresentation];
NSBitmapImageRep *imgRep = [NSBitmapImageRep imageRepWithData:imgData];
NSNumber *compressionFactor = [NSNumber numberWithFloat:value];
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:compressionFactor forKey:NSImageCompressionFactor];
imgData = [imgRep representationUsingType:NSJPEGFileType properties:imageProps];
[imgData writeToFile:path atomically:YES];
}
You can play around with the WithCompressionFactor:(float)value parameter, but 0.75 works fine for me.
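For example, it might be called like this (the output path is just a placeholder):
[self CompressAndSaveImg:[_imageView image]
                ToDiskAt:@"/tmp/flipped.jpg"
   WithCompressionFactor:0.75];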
Hi everyone,
I've been working on this for days. Here's a little background: I'm sending an image to a server using protobuf. The image comes directly from the camera, so it is neither a JPEG nor a PNG. I found code to get the data from the UIImage, using its CGImage to create a CGImageRef. See the following code:
- (UIImage *)testProcessedImage:(UIImage *)processedImage
{
CGImageRef imageRef = processedImage.CGImage;
NSData *data1 = (NSData *) CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
Google protobuf uses C++ code to send and receive the bytes to and from the server. When I tried to get the data bytes back into NSData and alloc/init a UIImage with that data, the UIImage was always nil. This tells me that my NSData is not in the correct format.
At first, I thought my issue was with the C++ conversion, as shown in my previous question here. But after much frustration, I cut out everything in the middle and just created a UIImage with the CGImageRef, and it worked. See the following code:
- (UIImage *)testProcessedImage:(UIImage *)processedImage
{
CGImageRef imageRef = processedImage.CGImage;
NSData *data1 = (NSData *) CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
// Added this line and cut out everything in the middle
UIImage *image = [UIImage imageWithCGImage:imageRef];
Following is a description of what I ultimately need to do. There are two parts. Part 1 takes a UIImage and converts it into a std::string.
take a UIImage
get the NSData from it
convert the data to unsigned char *
stuff the unsigned char * into a std::string
The string is what we would receive from the protobuf call. Part 2 takes the data from the string and converts it back into the NSData format to populate a UIImage. Following are the steps to do that (a sketch of both parts appears after this list):
convert the std::string to char array
convert the char array to a const char *
put the char * into NSData
return NSData
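Taken literally, the two parts might look like the hypothetical Objective-C++ helpers below (the function names are mine, and this assumes the bytes survive the round trip unchanged; note the bytes from CGDataProviderCopyData are raw pixels, not an encoded image):
std::string StringFromImage(UIImage *image) {
    CGImageRef imageRef = image.CGImage;
    NSData *data = (NSData *)CFBridgingRelease(
        CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
    // Part 1, steps 2-4: NSData -> unsigned char * -> std::string
    const unsigned char *bytes = (const unsigned char *)[data bytes];
    return std::string(bytes, bytes + [data length]);
}
NSData *DataFromString(const std::string &str) {
    // Part 2, steps 1-4: std::string -> const char * -> NSData
    return [NSData dataWithBytes:str.data() length:str.size()];
}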
Now, with that background information, and armed with the fact that populating the UIImage with a CGImageRef works (meaning that data in that format is the correct format to populate the UIImage), I'm looking for help in figuring out how to get the base64-decoded data into either a CFDataRef or a CGImageRef. Below is my test method:
- (UIImage *)testProcessedImage:(UIImage *)processedImage
{
CGImageRef imageRef = processedImage.CGImage;
NSData *data1 = (NSData *) CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
unsigned char *pixels = (unsigned char *)[data1 bytes];
unsigned long size = [data1 length];
// ***************************************************************************
// This is where we would call transmit and receive the bytes in a std::string
//
// The following line simulates that:
//
const std::string byteString(pixels, pixels + size);
//
// ***************************************************************************
// converting to base64
std::string encoded = base64_encode(reinterpret_cast<const unsigned char*>(byteString.c_str()), byteString.length());
// retrieving base64
std::string decoded = base64_decode(encoded);
// put byte array back into NSData format
NSUInteger usize = decoded.length();
const char *bytes = decoded.data();
NSData *data2 = [NSData dataWithBytes:(const void *)bytes length:sizeof(unsigned char)*usize];
NSLog(#"examine data");
// But when I try to alloc init a UIImage with the data, the image is nil
UIImage *image2 = [[UIImage alloc] initWithData:data2];
NSLog(#"examine image2");
// *********** Below is my convoluted approach at CFDataRef and CGImageRef ****************
CFDataRef dataRef = CFDataCreate( NULL, (const UInt8*) decoded.data(), decoded.length() );
NSData *myData = (__bridge NSData *)dataRef;
//CGDataProviderRef ref = CGDataProviderCreateWithCFData(dataRef);
id sublayer = (id)[UIImage imageWithCGImage:imageRef].CGImage;
UIImage *image3 = [UIImage imageWithCGImage:(__bridge CGImageRef)(sublayer)];
return image3;
}
As any casual observer can see, I need help. HELP!!! I've tried some of the other questions on SO, such as this one, this one, and this one, and cannot find the information I need for the solution. I admit part of my problem is that I don't understand much about images (like RGBA values and other stuff).
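For reference, the underlying issue is that the bytes from CGDataProviderCopyData are raw, uncompressed pixels, while -[UIImage initWithData:] expects an encoded format such as PNG or JPEG, so it returns nil for raw pixels. Raw bytes can be rebuilt with CGImageCreate, provided every geometry and format parameter matches the source. A sketch under that assumption, reusing imageRef from the test method:
CFDataRef pixelData = CFDataCreate(NULL, (const UInt8 *)decoded.data(), decoded.length());
CGDataProviderRef provider = CGDataProviderCreateWithCFData(pixelData);
CGImageRef rebuilt = CGImageCreate(CGImageGetWidth(imageRef),
                                   CGImageGetHeight(imageRef),
                                   CGImageGetBitsPerComponent(imageRef),
                                   CGImageGetBitsPerPixel(imageRef),
                                   CGImageGetBytesPerRow(imageRef),
                                   CGImageGetColorSpace(imageRef),
                                   CGImageGetBitmapInfo(imageRef),
                                   provider, NULL, false, kCGRenderingIntentDefault);
UIImage *restored = [UIImage imageWithCGImage:rebuilt];
CGImageRelease(rebuilt);
CGDataProviderRelease(provider);
CFRelease(pixelData);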
I'm building an OS X app that needs to save the file to disk.
I'm currently using NSBitmapImageRep to represent the image in my code, and while saving the image to disk with the representationUsingType:properties: method, I want to control the alpha channel (hasAlpha) of the saved image, but the properties dictionary does not seem to support this.
So I've tried to create a no-alpha bitmap representation instead, but according to many SO questions, the 3-channel/24-bit combination is not supported. What should I do then?
Big thanks!
First off, I would try just making sure you create your NSBitmapImageRep with
-initWithBitmapDataPlanes:... hasAlpha:NO ...
And write it out and see if the result doesn't have alpha; one would kind of hope so.
If you’re trying to write out an image that has alpha, but not write the alpha, just copy it into a non-alpha image first, and write that out.
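Spelled out, that initializer might look like this (width and height are assumed to be defined; the layout values show one plausible no-alpha combination, though, as the question notes, some 24-bit configurations are reported not to work):
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL   // NULL: let the rep allocate its own buffer
                  pixelsWide:width
                  pixelsHigh:height
               bitsPerSample:8
             samplesPerPixel:3      // R, G, B -- no alpha sample
                    hasAlpha:NO
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0      // 0: computed from the other values
                bitsPerPixel:0];    // 0: samplesPerPixel * bitsPerSample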
NSURL *url = [NSURL fileURLWithPath:name];
NSImage *srcImage = [[NSImage alloc] initWithContentsOfURL:url];
NSLog(@"URL: %@", url);
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[srcImage TIFFRepresentation], NULL);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CFRelease(source);
CGRect rect = CGRectMake(0.f, 0.f, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef));
// Redraw the image into a context that has no alpha channel
CGContextRef bitmapContext = CGBitmapContextCreate(NULL,
                                                   rect.size.width,
                                                   rect.size.height,
                                                   CGImageGetBitsPerComponent(imageRef),
                                                   0, // 0: let CG compute bytesPerRow for the new pixel format
                                                   CGImageGetColorSpace(imageRef),
                                                   kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Little);
CGContextDrawImage(bitmapContext, rect, imageRef);
CGImageRef decompressedImageRef = CGBitmapContextCreateImage(bitmapContext);
NSImage *finalImage = [[NSImage alloc] initWithCGImage:decompressedImageRef size:NSZeroSize];
NSData *imageData = [finalImage TIFFRepresentation];
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.9] forKey:NSImageCompressionFactor];
imageData = [imageRep representationUsingType:NSPNGFileType properties:imageProps];
[imageData writeToFile:name atomically:NO];
CGImageRelease(imageRef);
CGImageRelease(decompressedImageRef);
CGContextRelease(bitmapContext);
ref:
https://github.com/bpolat/Alpha-Channel-Remover
I'm trying to build an NSImage from some strange bytes.
I'm using the BlackMagic SDK to get the bytes of a received frame:
unsigned char* frame3 = NULL;
unsigned char* frame2 = (Byte*)malloc(699840);
videoFrame->GetBytes((void**)&frame3);
memcpy(frame2, frame3, 699840);
// sizeof(frame2) would give the size of the pointer, not of the buffer
NSData* data = [NSData dataWithBytes:frame2 length:699840];
NSImage *image = [[NSImage alloc] initWithData:data];
// (for now I use 699840 statically, because I know the frame's size)
The reason I say the bytes are strange is that the content of frame2 looks like this:
printf("content: %s", frame2);
\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200.........\200 (to the end)
It should be a blank black frame.
Does anybody know how I could figure out what is going on with this?
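One observation that may help: -[NSImage initWithData:] expects encoded image data (TIFF, PNG, JPEG, and so on), not a raw pixel buffer, so it returns nil here no matter what the bytes are. The raw frame has to be wrapped in a rep whose declared layout matches the frame's actual pixel format. A sketch assuming a 32-bit RGBA frame (width, height and the format are assumptions; note 699840 = 720 x 486 x 2, which suggests the frame may really be a 2-byte-per-pixel YUV format that would need converting first):
unsigned char *planes[1] = { frame2 };
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:planes
                  pixelsWide:width
                  pixelsHigh:height
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSDeviceRGBColorSpace
                 bytesPerRow:width * 4
                bitsPerPixel:32];
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[image addRepresentation:rep];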
You should use these APIs to get an image file from the data bytes:
NSString *filePath = [yourDirectory stringByAppendingPathComponent:@"imageName.jpg"];
[data writeToFile:filePath atomically:YES];
I have a Cocoa Mac image editing app which lets users export JPEG images. I'm currently using the following code to export these images as JPEG files:
//this is user specified
NSInteger resolution;
NSImage* savedImage = [[NSImage alloc] initWithSize:NSMakeSize(600, 600)];
[savedImage lockFocus];
//draw here
[savedImage unlockFocus];
NSBitmapImageRep* savedImageBitmapRep = [NSBitmapImageRep imageRepWithData:[savedImage TIFFRepresentationUsingCompression:NSTIFFCompressionNone factor:1.0]];
NSDictionary* properties = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithFloat:1.0], NSImageCompressionFactor, nil];
//holds the jpeg file
NSData * imageData = nil;
imageData = [savedImageBitmapRep representationUsingType:NSJPEGFileType properties:properties];
However, I would like for the user to be able to provide the pixels per inch for this JPEG image (like you can in Photoshop's export options). What would I need to modify in the above code to adjust this value for the exported JPEG?
I couldn't find a way to do it with the NSImage APIs, but CGImage can, by setting kCGImagePropertyDPIHeight/Width.
I also set kCGImageDestinationLossyCompressionQuality, which I think is the same as NSImageCompressionFactor.
//this is user specified
NSInteger resolution = 100;
NSImage* savedImage = [[NSImage alloc] initWithSize:NSMakeSize(600, 600)];
[savedImage lockFocus];
//draw here
[savedImage unlockFocus];
NSBitmapImageRep* savedImageBitmapRep = [NSBitmapImageRep imageRepWithData:[savedImage TIFFRepresentationUsingCompression:NSTIFFCompressionNone factor:1.0]];
NSDictionary* properties = [NSDictionary dictionaryWithObjectsAndKeys:
                            [NSNumber numberWithFloat:1.0], (__bridge NSString *)kCGImageDestinationLossyCompressionQuality,
                            [NSNumber numberWithInteger:resolution], (__bridge NSString *)kCGImagePropertyDPIHeight,
                            [NSNumber numberWithInteger:resolution], (__bridge NSString *)kCGImagePropertyDPIWidth,
                            nil];
NSMutableData* imageData = [NSMutableData data];
CGImageDestinationRef imageDest = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)imageData, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(imageDest, [savedImageBitmapRep CGImage], (__bridge CFDictionaryRef)properties);
CGImageDestinationFinalize(imageDest);
CFRelease(imageDest); // balance the Create call
// Do something with imageData
if (![imageData writeToFile:[@"~/Desktop/test.jpg" stringByExpandingTildeInPath] atomically:NO])
    NSLog(@"Failed to write imageData");
For NSImage or NSImageRep you do not set the resolution directly but set the size instead.
For size, numberOfPixels and resolution the following equation holds:
size = numberOfPixels * 72.0 / resolution
size is a length expressed in points, i.e. units of 1/72 inch (size and resolution are floats). For example, 600 pixels at 300 dpi gives size = 600 * 72 / 300 = 144 points. You can see that for an image with dpi = 72, size and numberOfPixels are numerically the same (but the meaning is very different).
After creating an NSBitmapImageRep the size with the desired resolution can be set:
NSBitmapImageRep* savedImageBitmapRep = . . . ; // create the new rep
NSSize newSize;
newSize.width = [savedImageBitmapRep pixelsWide] * 72.0 / resolution; // x-resolution
newSize.height = [savedImageBitmapRep pixelsHigh] * 72.0 / resolution; // y-resolution
[savedImageBitmapRep setSize:newSize];
// save the rep
Two remarks: do you really need the lockFocus / unlockFocus way? The preferred way to build a new NSBitmapImageRep is to use NSGraphicsContext. See: http://www.mail-archive.com/cocoa-dev@lists.apple.com/msg74857.html
And: using TIFFRepresentation for an NSBitmapImageRep is very time and space consuming. Since 10.6 another way exists and costs nothing, because lockFocus and unlockFocus create an object of class NSCGImageSnapshotRep, which under the hood is a CGImage. (In OS versions before 10.6 it was an NSCachedImageRep.) The following does it:
[anImage lockFocus];
// draw something
[anImage unlockFocus];
// now anImage contains an NSCGImageSnapshotRep
CGImageRef cg = [anImage CGImageForProposedRect:NULL context:nil hints:nil];
NSBitmapImageRep *newRep = [[NSBitmapImageRep alloc] initWithCGImage:cg];
// set the resolution
// here you may NSLog anImage, cg and newRep
// save the newRep
// release the newRep if needed
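Filling in those last comments, the tail end might look like this (300 dpi and the output path are assumptions, and PNG is chosen arbitrarily):
// set the resolution via the size, as in the equation above
NSSize newSize = NSMakeSize([newRep pixelsWide] * 72.0 / 300.0,
                            [newRep pixelsHigh] * 72.0 / 300.0);
[newRep setSize:newSize];
// save the newRep
NSData *pngData = [newRep representationUsingType:NSPNGFileType properties:@{}];
[pngData writeToFile:[@"~/Desktop/out.png" stringByExpandingTildeInPath] atomically:YES];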