Objective-C / Cocoa:
I need to load the image from a JPG file into a two-dimensional array so that I can access each pixel. I am trying (unsuccessfully) to load the image into an NSBitmapImageRep. I have tried several variations on the following two lines of code:
NSString *filePath = [NSString stringWithFormat: @"%@%@", @"/Users/adam/Documents/phoneimages/", [outLabel stringValue]]; //this coming from a window control
NSImageRep *controlBitmap = [[NSImageRep alloc] imageRepWithContentsOfFile:filePath];
With the code shown, I get a runtime error: -[NSImageRep imageRepWithContentsOfFile:]: unrecognized selector sent to instance 0x100147070.
I have tried replacing the second line of code with:
NSImage *controlImage = [[NSImage alloc] initWithContentsOfFile:filePath];
NSBitmapImageRep *controlBitmap = [[NSBitmapImageRep alloc] initWithData:controlImage];
But this yields a compiler error, 'incompatible type', saying that initWithData: wants an NSData object, not an NSImage.
I have also tried various other ways to get this done, but all are unsuccessful either due to compiler or runtime error. Can someone help me with this? I will eventually need to load some PNG files in the same way (so it would be nice to have a consistent technique for both).
And if you know of an easier / simpler way to accomplish what I am trying to do (i.e., get the images into a two-dimensional array), rather than using NSBitmapImageRep, then please let me know! And by the way, I know the path is valid (confirmed with fileExistsAtPath) -- and the filename in outLabel refers to a file with a .jpg extension.
Thanks for any help!
Easy!
NSImage *controlImage = [[NSImage alloc] initWithContentsOfFile:filePath];
NSBitmapImageRep *imageRep = [[controlImage representations] objectAtIndex:0];
Then to get the actual bitmap pixel data:
unsigned char *pixelData = [imageRep bitmapData];
If your image has multiple representations (it probably doesn't), you can get them out of that same array. The same code will work for your .png images.
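To index into that buffer, here is a minimal sketch, assuming an 8-bit-per-sample, non-planar (meshed) representation such as the RGB data a JPEG typically decodes to; x and y are hypothetical coordinates:
// Sketch: read one pixel; the row stride and sample count come from the rep itself.
NSInteger bytesPerRow = [imageRep bytesPerRow];
NSInteger samplesPerPixel = [imageRep samplesPerPixel];
NSInteger x = 10, y = 20; // hypothetical pixel coordinates
unsigned char *pixel = pixelData + y * bytesPerRow + x * samplesPerPixel;
unsigned char red = pixel[0], green = pixel[1], blue = pixel[2];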
Edit:
Carl has a better answer. This is only good if you also want to manipulate the image in some way, like scaling or changing color mode.
Original:
I would probably use Core Graphics for this. NSImage and most NSImageRep subclasses were not designed to keep a pixel array sitting around. They convert source image data into pixels only when drawn.
When you create a CGBitmapContext, you can pass it a buffer of pixels to use. That is your two dimensional array, with whatever row bytes, color depth, pixel format and other properties you specify.
You can initialize a CGImage with JPG or PNG data using CGImageCreateWithJPEGDataProvider or CGImageCreateWithPNGDataProvider respectively.
Once you draw the image into the context with CGContextDrawImage the original buffer you passed to CGBitmapContextCreate is now filled with the pixel data of the image.
Create a CGDataProvider with the image data.
Create a CGImageRef with the data provider.
Create a buffer large enough for the image pixels.
Create a CGBitmapContext that is the size of the image with the pixel buffer.
Draw the image into the bitmap context.
Access the pixels in the buffer (see the sketch below).
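A minimal sketch of those steps, assuming the JPEG path from the question is in filePath and an RGBA, 8-bit-per-component buffer (swap in CGImageCreateWithPNGDataProvider for PNG files; error checking omitted):
// Steps 1-6: file -> CGImage -> bitmap context backed by our own pixel buffer.
CGDataProviderRef provider = CGDataProviderCreateWithFilename([filePath fileSystemRepresentation]);
CGImageRef image = CGImageCreateWithJPEGDataProvider(provider, NULL, false, kCGRenderingIntentDefault);
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
size_t bytesPerRow = width * 4;
unsigned char *pixels = calloc(height * bytesPerRow, 1); // your two-dimensional array, row-major
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
// pixels is now filled: pixels[y * bytesPerRow + x * 4] is the red sample of the
// pixel at column x, row y (rows counted from the bottom in CG coordinates).
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CGImageRelease(image);
CGDataProviderRelease(provider);
// free(pixels) when you are done with the pixel data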
If you want to use NSImage instead of CGImage, you can create an NSGraphicsContext from a CGContext with graphicsContextWithGraphicsPort:flipped: and set it as the current context. That basically replaces steps 1 and 2 above with whatever code you want to use to make an NSImage.
imageRepWithContentsOfFile: is a class method, not an instance method.
The correct way to use it:
[NSBitmapImageRep imageRepWithContentsOfFile:filePath];
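For example, a minimal sketch of the corrected call, reusing filePath from the question:
// The method is called on the class; the returned object is autoreleased.
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithContentsOfFile:filePath];
if (rep != nil) {
    unsigned char *pixelData = [rep bitmapData];
    // pixelData now points at the rep's pixel buffer
}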
Related
I'm trying to run a CoreML model using AVCaptureSession.
When I feed the same image to my CoreML model as input, it gives me the same result every time.
But when I use the image given by this function:
- (void) captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection {
__block CIImage* ciimage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
dispatch_sync(dispatch_get_main_queue(), ^{
VNImageRequestHandler* handler = [[VNImageRequestHandler alloc] initWithCIImage:ciimage options:@{}];
[handler performRequests:@[self.coreRequest] error:nil];
});
}
It doesn't give me exactly the same result, even though I don't move my phone and the background stays the same. (To be clear: my phone is sitting on my table, the camera is pointed at the floor of my room, and nothing is moving.)
I have tried to compare the two images pixel by pixel (previous and new image) and they are different.
I want to understand why these images are different.
Thanks,
Camera noise, most likely. The picture you get from a camera is never completely stable. The noise creates small differences in pixel values, even if the camera points at the same thing. These small differences can have a big influence on the predictions.
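For instance, a rough sketch of how that could be verified by measuring the per-pixel difference between two consecutive frames; comparePixelBuffers is a hypothetical helper, not a framework API, and it assumes both buffers have the same pixel format and size:
// Hypothetical helper: sums the absolute per-byte difference between two
// CVPixelBuffers of identical format and size.
static long comparePixelBuffers(CVPixelBufferRef a, CVPixelBufferRef b) {
    CVPixelBufferLockBaseAddress(a, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferLockBaseAddress(b, kCVPixelBufferLock_ReadOnly);
    size_t byteCount = CVPixelBufferGetHeight(a) * CVPixelBufferGetBytesPerRow(a);
    uint8_t *pa = (uint8_t *)CVPixelBufferGetBaseAddress(a);
    uint8_t *pb = (uint8_t *)CVPixelBufferGetBaseAddress(b);
    long totalDifference = 0;
    for (size_t i = 0; i < byteCount; i++) {
        totalDifference += labs((long)pa[i] - (long)pb[i]);
    }
    CVPixelBufferUnlockBaseAddress(b, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferUnlockBaseAddress(a, kCVPixelBufferLock_ReadOnly);
    return totalDifference;
}
A consistently non-zero total on a completely static scene is exactly what sensor noise looks like.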
I draw my design image using CGContextDrawImage but it's too blurry.
Please help me to improve its quality.
I am using this for drawing:
//NSImage* design_image
Both sizes are the same, and I use this for interpolation:
CGContextRef ct_ref = [context CGContext];
CGContextSetAllowsAntialiasing(ct_ref, YES);
CGContextSetShouldAntialias(ct_ref, YES);
CGContextSetInterpolationQuality(ct_ref, kCGInterpolationHigh);
id hints = [NSDictionary dictionaryWithObject:[NSNumber numberWithInteger:NSImageInterpolationDefault] forKey:NSImageHintInterpolation];
CGImageRef img_ref = [design_image CGImageForProposedRect:&image_dr_rct context:context hints:hints];
CGContextDrawImage(ct_ref, CGRectMake(image_dr_rct.origin.x, image_dr_rct.origin.y, image_dr_rct.size.width, image_dr_rct.size.height), img_ref);
If you're getting blurry results while drawing an image, the problem probably has to do with scaling the image. I can't tell from your question whether you're drawing the image larger or smaller than the original, but you can get poor results either way. You could try a different interpolation setting, like NSImageInterpolationHigh, but your best bet would be to create an original image at the size you need.
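Another thing worth ruling out, not covered above, is drawing on fractional pixel boundaries, which also produces soft edges. A sketch, reusing the names from the question, that snaps the destination rect to whole pixels before drawing:
// Snap the destination rect to integral pixel coordinates so the image
// is not resampled across pixel boundaries (names reused from the question).
NSRect snapped_rct = NSRectFromCGRect(CGRectIntegral(NSRectToCGRect(image_dr_rct)));
CGImageRef img_ref = [design_image CGImageForProposedRect:&snapped_rct context:context hints:hints];
CGContextDrawImage(ct_ref, NSRectToCGRect(snapped_rct), img_ref);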
I have an NSMutableDictionary containing two NSStrings and one NSImage, which I am trying to write to a file. However, the output file is huge compared to the input image file (88 kB to 1.1 MB). The image is loaded via an Image Well drag and drop.
My first try was:
NSImage * image = _imageView.image;
[dictionary setObject:image forKey:@"Image"];
[NSKeyedArchiver archiveRootObject:dictionary toFile:filePath];
Result: 1.1MB
Second try:
NSImage * image = _imageView.image;
NSData * imageData = [image TIFFRepresentation];
NSBitmapImageRep * encodedImage = [NSBitmapImageRep imageRepWithData:imageData];
[dictionary setObject:encodedImage forKey:@"Image"];
[NSKeyedArchiver archiveRootObject:dictionary toFile:filePath];
Result: still 1.1MB
At first I thought that NSBitmapImageRep would keep the image in an (at least relatively) compressed format, but the file size does not differ one bit.
How should I write the NSMutableDictionary to a file so that it does not exceed the initial image file size by so much? Or how should I store the image efficiently in the dictionary in the first place?
I don't know if it matters, but the original format is png.
When you drop an image into an Image Well, it gets loaded into an NSImage as a bitmap (color data for every pixel). It is no longer compressed the way the original image you dropped (PNG in your case) was. PNG uses lossless compression, and its data footprint will in most cases be smaller than an uncompressed bitmap or TIFF representation of the image.
By the time you request the TIFF representation of the image, it no longer has anything to do with the original PNG you loaded, except for the pixel data itself.
If you want to store back the image in a compressed format or keep the original image, you can try one of the following:
Ask for a compressed TIFF representation using TIFFRepresentationUsingCompression:factor:. It will still be a different format and you will lose all metadata.
Re-compress the image in your desired format, e.g. PNG or JPEG (for instance with the Quartz API), and store the compressed data in your dictionary, as sketched after this list. You will still need to preserve metadata yourself if you need it.
Keep a reference to the original image file, or store its data: override the Image View's drag/drop methods and keep track of the original image's URL, or load the file and store its raw data in the original format, then save that into your dictionary. You get an exact copy of the original image and don't have to deal with format conversions or metadata copying.
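A minimal sketch of the second option, assuming the same dictionary and filePath as in the question and storing compressed PNG data (an NSData) instead of the image object:
// Re-encode the dropped image as PNG and archive the compressed data.
NSImage *image = _imageView.image;
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
NSData *pngData = [rep representationUsingType:NSPNGFileType properties:nil];
[dictionary setObject:pngData forKey:@"Image"];
[NSKeyedArchiver archiveRootObject:dictionary toFile:filePath];
// Later, rebuild the image with: [[NSImage alloc] initWithData:pngData]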
I would like to know how to retrieve the dimensions of an image with objective-c. I have searched google and the documentation with no success, unfortunately.
NSImage has a size method, returning an NSSize struct containing the width and height of the image, or {0, 0} if the size couldn't be determined.
NSImage *image = [NSImage imageNamed:@"myimage.png"];
NSSize dimensions = [image size];
NSLog (#"Width: %lf, height: %lf", (double) dimensions.width, (double) dimensions.height);
Edit:
I have placed explicit casts to double because on some systems (64-bit especially) the width and height members may not be float types but double, and we would be providing the wrong types to NSLog. A double can represent more values than float, so we can cast up with a bit more safety.
In iOS, UIImage has a size property, which returns a CGSize.
In OS X, NSImage has a size method which returns an NSSize. It also has a representations property, which returns an NSArray of NSImageRep objects. NSImageRep has a size method.
The size method is correct for a size in points. Sometimes you need the pixel dimensions rather than the nominal size, in which case you must select an image rep (NSImages can have several) and query its pixelsWide and pixelsHigh properties. (Alternatively, load images using CGImage and use CGImageGetWidth() and CGImageGetHeight().)
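For example, a sketch of reading the pixel dimensions from an image's first representation (assuming the image has at least one rep):
// size is in points; pixelsWide/pixelsHigh report the actual pixel dimensions.
NSImage *image = [NSImage imageNamed:@"myimage.png"];
NSImageRep *rep = [[image representations] objectAtIndex:0];
NSLog(@"Pixels: %ld x %ld", (long)[rep pixelsWide], (long)[rep pixelsHigh]);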
You could use the NSString+FastImageSize category from the Additions section in https://github.com/danielgindi/drunken-danger-zone
I've only recently written it, and it seems to work fine.
This code actually opens the file and checks the headers for the dimensions (so it is really, really fast), and it supports PNG, JPG, GIF and BMP. Although I really hope you don't use BMP on a mobile device...
Enjoy!
I think I'm missing something really basic here. If I do this with a valid URL/path which I know exists:
NSImage* img = [[NSImage alloc] initWithContentsOfFile:[[selectedItem url] path]];
NSLog(#"Image width: %d height: %d", [img size].width, [img size].height);
then the console reports that the width is -2080177216 and the height 0, although I know that the width is actually 50 and the height 50. I tried calling isValid and it returns YES, and I also tried checking the size of the first representation and it returned the same messed-up values. How come the image is not loading properly?
The size method returns an NSSize, a struct whose width and height members are floating-point values (CGFloat), not integers, and you're printing them with %d. Use %f and everything should be fine.
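For example, the same log with matching format specifiers, reusing the code from the question:
// NSSize members are floating point, so log them with %f.
NSImage* img = [[NSImage alloc] initWithContentsOfFile:[[selectedItem url] path]];
NSLog(@"Image width: %f height: %f", [img size].width, [img size].height);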
Does this help?
setSize:
Sets the width and height of the image.
- (void)setSize:(NSSize)aSize
Discussion:
The size of an NSImage object must be set before it can be used. If the size of the image hasn’t already been set when an image representation is added, the size is taken from the image representation's data. For EPS images, the size is taken from the image's bounding box. For TIFF images, the size is taken from the ImageLength and ImageWidth attributes.
Changing the size of an NSImage after it has been used effectively resizes the image. Changing the size invalidates all its caches and frees them. When the image is next composited, the selected representation will draw itself in an offscreen window to recreate the cache.
Availability
Available in Mac OS X v10.0 and later.
See Also
See NSImage size not real size with some pictures?
You need to iterate through the image's NSImageRep objects and set the size from the largest one found.
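A sketch of that approach, assuming image is the NSImage whose reported size is wrong:
// Walk the representations and set the image size to the largest
// pixel dimensions found among them.
NSSize largest = NSZeroSize;
for (NSImageRep *rep in [image representations]) {
    if ([rep pixelsWide] > largest.width && [rep pixelsHigh] > largest.height) {
        largest = NSMakeSize([rep pixelsWide], [rep pixelsHigh]);
    }
}
if (!NSEqualSizes(largest, NSZeroSize)) {
    [image setSize:largest];
}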