NSImage - can't load image from file - objective-c

I am writing a command-line application that loads an image from Supporting Files. The image has been copied into Supporting Files, but when I use the following code, the variable imgC is nil.
NSString* pathC = @"galaxy.jpg";
NSImage* imgC = [NSImage imageNamed: pathC];
imgC is still nil even if I use the following code:
NSString* pathC = @"galaxy.jpg";
NSImage* imgC = [[NSImage alloc] initWithContentsOfFile: pathC];
Can someone please help me?
(PS: Sorry for my bad English.)
Many thanks, Peter

Swift Style:
if let imageRef = NSImage(byReferencingFile: "/path/to/galaxy.jpg") {
    print("image size \(imageRef.size.width):\(imageRef.size.height)")
}

Because the target is a command-line tool, there is no app bundle, so [NSBundle mainBundle] resource lookups return nil. You need to use an absolute path, not a relative path. Strangely, Xcode doesn't show any kind of warning about this.
NSImage *image = [[NSImage alloc] initWithContentsOfFile:@"/Users/Zenga/Documents/iOS/Research/Test/star.png"];
if (image == nil) {
    NSLog(@"image nil");
}
NSLog(@"%f and %f", image.size.width, image.size.height);
I found this path by clicking the image in Xcode and copying the full path shown in the inspector on the right; using it works fine for me.
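If you'd rather not hard-code an absolute path, here is a minimal sketch of one alternative: resolving the image relative to the built executable. It assumes galaxy.jpg is copied into the same directory as the binary, which depends on your build settings.
// Sketch: build an absolute path from the executable's location.
// Assumes the image file sits next to the built binary.
NSString *exePath = [[NSProcessInfo processInfo] arguments][0]; // path used to launch the tool
NSString *dir = [exePath stringByDeletingLastPathComponent];    // directory containing the binary
NSString *path = [dir stringByAppendingPathComponent:@"galaxy.jpg"];
NSImage *img = [[NSImage alloc] initWithContentsOfFile:path];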

Related

CGImageRef not init/alloc correctly

I am currently having problems with CGImageRef.
Whenever I create a CGImageRef and look at it in the Xcode debugger, it is nil.
Here's the code:
-(void)mouseMoved:(NSEvent *)theEvent{
    if (self.shoulddrag) {
        NSPoint event_location = [theEvent locationInWindow]; //direct from the docs
        NSPoint local_point = [self convertPoint:event_location fromView:nil]; //direct from the docs
        CGImageRef theImage = (__bridge CGImageRef)(self.image);
        CGImageRef theClippedImage = CGImageCreateWithImageInRect(theImage, CGRectMake(local_point.x, local_point.y, 1, 1));
        NSImage *image = [[NSImage alloc] initWithCGImage:theClippedImage size:NSZeroSize];
        self.pixelView.image = image;
        CGImageRelease(theClippedImage);
    }
}
Everything else seems to be working, though. I can't understand it. Any help would be appreciated.
Note: self.pixelView is an NSImageView instance that has not been overridden in any way.
Very likely local_point is not inside of the image. You've converted the point from the window to the view coordinates, but that may not be equivalent to the image coordinates. Test this to see if the lower-left corner of your image results in local_point being (0,0).
It's not clear how your view is laid out, but I suspect that what you want to do is subtract the origin of whatever region (possibly a subview) the user is interacting with relative to self.
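As a rough sketch of that suggestion (imageFrame here is a hypothetical rect describing where the image is drawn inside the view):
NSRect imageFrame = [self imageFrame]; // hypothetical accessor: the region the image occupies in this view
if (NSPointInRect(local_point, imageFrame)) {
    // translate the view-local point into image coordinates
    NSPoint imagePoint = NSMakePoint(local_point.x - imageFrame.origin.x,
                                     local_point.y - imageFrame.origin.y);
    // use imagePoint, not local_point, to build the crop rect
}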
Alright, I figured it out.
What I was using to create the CGImageRef was:
CGImageRef theImage = (__bridge CGImageRef)(self.image);
Apparently what I should have used is:
CGImageSourceRef theImage = CGImageSourceCreateWithData((CFDataRef)[self.image TIFFRepresentation], NULL);
I guess my problem was that for some reason I thought NSImage and CGImageRef had toll-free bridging.
Apparently, I was wrong.
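For reference, a minimal sketch of two ways to get a CGImageRef out of an NSImage, since the types are not toll-free bridged (the first uses NSImage's CGImageForProposedRect:context:hints:, available on 10.6+):
// Option 1: ask the NSImage for a CGImage directly (10.6+).
NSRect rect = NSMakeRect(0, 0, self.image.size.width, self.image.size.height);
CGImageRef cgImage = [self.image CGImageForProposedRect:&rect context:nil hints:nil];

// Option 2: go through a CGImageSource, as in the fix above.
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[self.image TIFFRepresentation], NULL);
CGImageRef fromSource = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CFRelease(source); // fromSource must eventually be released with CGImageRelease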

How to make a perfect crop (without changing the quality) in Objective-c/Cocoa (OSX)

Is there any way in Objective-c/cocoa (OSX) to crop an image without changing the quality of the image?
I am very near to a solution, but there are still some differences that I can detect in the color. I can notice it when zooming into the text. Here is the code I am currently using:
NSImage *target = [[[NSImage alloc] initWithSize:panelRect.size] autorelease];
target.backgroundColor = [NSColor greenColor];
//start drawing on target
[target lockFocus];
[NSGraphicsContext saveGraphicsState];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationNone];
[[NSGraphicsContext currentContext] setShouldAntialias:NO];
//draw the portion of the source image on target image
[source drawInRect:NSMakeRect(0, 0, panelRect.size.width, panelRect.size.height)
          fromRect:NSMakeRect(panelRect.origin.x, source.size.height - panelRect.origin.y - panelRect.size.height, panelRect.size.width, panelRect.size.height)
         operation:NSCompositeCopy
          fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
//end drawing
[target unlockFocus];
//create an NSBitmapImageRep
NSBitmapImageRep *bmpImageRep = [[[NSBitmapImageRep alloc] initWithData:[target TIFFRepresentation]] autorelease];
//add the NSBitmapImageRep to the representation list of the target
[target addRepresentation:bmpImageRep];
//get the data from the representation
NSData *data = [bmpImageRep representationUsingType:NSJPEGFileType
                                         properties:imageProps];
NSString *filename = [NSString stringWithFormat:@"%@%@.jpg", panelImagePrefix, panelNumber];
NSLog(@"This is the filename: %@", filename);
//write the data to a file
[data writeToFile:filename atomically:NO];
Here is a zoomed-in comparison of the original and the cropped image:
(Original image - above)
(Cropped image - above)
The difference is hard to see, but if you flick between them, you can notice it. You can use a colour picker to notice the difference as well. For example, the darkest pixel on the bottom row of the image is a different shade.
I also have a solution that works exactly the way I want it in iOS. Here is the code:
-(void)testMethod:(int)page forRect:(CGRect)rect{
    NSString *filePath = @"imageName";
    NSData *data = [HeavyResourceManager dataForPath:filePath]; //this just gets the image as NSData
    UIImage *image = [UIImage imageWithData:data];
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect); //crop in the rect
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:0 orientation:image.imageOrientation];
    CGImageRelease(imageRef);
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectoryPath = [paths objectAtIndex:0];
    [UIImageJPEGRepresentation(result, 1.0) writeToFile:[documentsDirectoryPath stringByAppendingPathComponent:@"output.jpg"] atomically:YES];
}
So, is there a way to crop an image in OSX so that the cropped image does not change at all? Perhaps I have to look into a different library, but I would be surprised if I could not do this with Objective-C...
Note: this is a follow-up question to my previous question here.
Update: I have tried (as per the suggestion) rounding the CGRect values to whole numbers, but did not notice a difference. Here is the code I used:
[source drawInRect:NSMakeRect(0, 0, (int)panelRect.size.width, (int)panelRect.size.height)
          fromRect:NSMakeRect((int)panelRect.origin.x, (int)(source.size.height - panelRect.origin.y - panelRect.size.height), (int)panelRect.size.width, (int)panelRect.size.height)
         operation:NSCompositeCopy
          fraction:1.0];
Update: I have tried mazzaroth's code and it works if I save as PNG, but if I try to save as JPEG, the image loses quality. So close, but not close enough. Still hoping for a complete answer...
Use CGImageCreateWithImageInRect.
// this chunk of code loads a jpeg image into a cgimage
// creates a second crop of the original image with CGImageCreateWithImageInRect
// writes the new cropped image to the desktop
// ensure that the xy origin of the CGRectMake call is smaller than the width or height of the original image
NSURL *originalImage = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"lockwood" ofType:@"jpg"]];
CGImageRef imageRef = NULL;
CGImageSourceRef loadRef = CGImageSourceCreateWithURL((CFURLRef)originalImage, NULL);
if (loadRef != NULL)
{
    imageRef = CGImageSourceCreateImageAtIndex(loadRef, 0, NULL);
    CFRelease(loadRef); // Release CGImageSource reference
}
CGImageRef croppedImage = CGImageCreateWithImageInRect(imageRef, CGRectMake(200., 200., 100., 100.));
CFURLRef saveUrl = (CFURLRef)[NSURL fileURLWithPath:[@"~/Desktop/lockwood-crop.jpg" stringByExpandingTildeInPath]];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL(saveUrl, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(destination, croppedImage, nil);
if (!CGImageDestinationFinalize(destination)) {
    NSLog(@"Failed to write image to %@", saveUrl);
}
CFRelease(destination);
CFRelease(imageRef);
CFRelease(croppedImage);
I also made a gist:
https://gist.github.com/4259594
Try changing the drawInRect: origin to 0.5, 0.5. Otherwise Quartz will distribute each pixel's color across the four adjacent pixels.
Set the color space of the target image. A mismatched color space can cause the result to look slightly different.
Try the various rendering intents and see which gives the best result: perceptual versus relative colorimetric, etc. There are four options, I think.
You mention that the colors get modified when saving as JPEG versus PNG.
You can specify the compression level when saving to JPEG; try something like 0.8 or 0.9. You can also save JPEG without compression using 1.0, but then PNG has a distinct advantage. You specify the compression level in the options dictionary for CGImageDestinationAddImage.
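A minimal sketch of that options dictionary, applied to the destination from the answer above:
// set an explicit JPEG compression quality on the image destination
NSDictionary *options = @{(__bridge NSString *)kCGImageDestinationLossyCompressionQuality : @0.9};
CGImageDestinationAddImage(destination, croppedImage, (__bridge CFDictionaryRef)options);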
Finally, if nothing here helps, you should open a TSI with DTS; they can certainly provide the guidance you seek.
The usual problem is that cropping sizes are floats, but image pixels are integers, and Cocoa interpolates automatically.
You need to floor, round, or ceil the sizes and coordinates to be sure that they are integers.
This may help.
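A minimal sketch of snapping the crop rect to integral coordinates before drawing, using the variables from the question:
NSRect crop = NSIntegralRect(panelRect); // expands origin/size outward to the nearest integers
[source drawInRect:NSMakeRect(0, 0, crop.size.width, crop.size.height)
          fromRect:NSMakeRect(crop.origin.x, source.size.height - crop.origin.y - crop.size.height, crop.size.width, crop.size.height)
         operation:NSCompositeCopy
          fraction:1.0];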
I have been deleting EXIF data from JPG files, and I think I caught the reason:
All losses and changes come from re-compressing your image when saving it to a file.
You may notice the change even if you just save the whole image again.
What I do is read the original JPG and re-compress it to a quality that gives an equivalent file size.
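A minimal sketch of that idea, stepping the JPEG quality down until the output is no larger than the original (paths are hypothetical):
NSData *original = [NSData dataWithContentsOfFile:@"/tmp/original.jpg"]; // hypothetical input
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:original];
NSData *output = nil;
for (float quality = 1.0f; quality >= 0.5f; quality -= 0.05f) {
    output = [rep representationUsingType:NSJPEGFileType
                               properties:@{NSImageCompressionFactor : @(quality)}];
    if (output.length <= original.length) break; // stop once the size roughly matches
}
[output writeToFile:@"/tmp/recompressed.jpg" atomically:YES]; // hypothetical output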

Check whether +[UIImage imageNamed:] found an image

I have a long if statement to decide which image to show in a UIImageView. They are all .png files, and I use this code:
if (whatever) {
    image = [UIImage imageNamed: @"imageName.png"];
}
Does anyone know of a way to check whether the program can find the image, so that if the image does not exist, it can display an error image or something?
+[UIImage imageNamed:]
will return nil if it couldn't find a corresponding image file. So just check for that:
UIImage *image = [UIImage imageNamed:@"foo.png"];
if (image == nil)
{
    [self displayErrorMessage];
}
The shortest snippet would be
image = [UIImage imageNamed:@"imageName"] ?: [UIImage imageNamed:@"fallback_image"];
but do you really want such code?
Make another check:
if (whatever) {
    image = [UIImage imageNamed: @"imageName.png"];
    if (image == nil) {
        image = [UIImage imageNamed: @"fallback_image"];
    }
}
it can be shortened further, like
if (!(image = [UIImage imageNamed: @"imageName.png"])) {
    ...
}
but you're toying with readability here.
First, I'd suggest using a dictionary instead of a long if; you can key each image name by whatever. If whatever isn't currently an object, make an enum that encapsulates that information and use NSNumbers to box the enum values (see the sketch after the snippet below).
Then, you can check for nil when you try to retrieve the image. imageNamed: uses nil to indicate failure, so:
if (!image) {
    // No image found
}
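A minimal sketch of that dictionary approach, with hypothetical names for the enum and its values:
typedef NS_ENUM(NSInteger, Thing) { ThingStar, ThingGalaxy }; // hypothetical enum standing in for "whatever"

NSDictionary *imageNames = @{ @(ThingStar)   : @"star.png",
                              @(ThingGalaxy) : @"galaxy.png" };
NSString *name = imageNames[@(thing)];            // thing is the current enum value
UIImage *image = name ? [UIImage imageNamed:name] : nil;
if (!image) {
    image = [UIImage imageNamed:@"fallback_image"]; // hypothetical error image
}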

Get pixel colour from a Webcam

I am trying to get the pixel colour from an image displayed by the webcam. I want to see how the pixel colour is changing with time.
My current solution sucks up a LOT of CPU. It works and gives me the correct answer, but I am not 100% sure whether I am doing this correctly or whether I could cut some steps out.
- (IBAction)addFrame:(id)sender
{
    // Get the most recent frame
    // This must be done in a @synchronized block because the delegate method that sets the most recent frame is not called on the main thread
    CVImageBufferRef imageBuffer;
    @synchronized (self) {
        imageBuffer = CVBufferRetain(mCurrentImageBuffer);
    }
    if (imageBuffer) {
        // Create an NSImage and add it to the movie
        // I think I can remove some steps here, but not sure where.
        NSCIImageRep *imageRep = [NSCIImageRep imageRepWithCIImage:[CIImage imageWithCVImageBuffer:imageBuffer]];
        NSSize n = {320, 160};
        //NSImage *image = [[[NSImage alloc] initWithSize:[imageRep size]] autorelease];
        NSImage *image = [[[NSImage alloc] initWithSize:n] autorelease];
        [image addRepresentation:imageRep];
        CVBufferRelease(imageBuffer);
        NSBitmapImageRep *raw_img = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
        NSLog(@"image width is %f", [image size].width);
        NSColor *color = [raw_img colorAtX:1279 y:120];
        float colourValue = [color greenComponent] + [color redComponent] + [color blueComponent];
        [graphView setXY:10 andY:200*colourValue/3];
        NSLog(@"%0.3f", colourValue);
    }
}
Any help is appreciated and I am happy to try other ideas.
Thanks guys.
There are a couple of ways that this could be made more efficient. Take a look at the imageFromSampleBuffer: method in this Tech Q&A, which presents a cleaner way of getting from a CVImageBufferRef to an image (the sample uses a UIImage, but it's practically identical for an NSImage).
You can also pull the pixel values straight out of the CVImageBufferRef without any conversion. Once you have the base address of the buffer, you can calculate the offset of any pixel and just read the values from there.
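A minimal sketch of that direct approach, assuming the buffer is a CVPixelBufferRef in kCVPixelFormatType_32BGRA (check the actual pixel format before relying on this byte order):
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)imageBuffer;
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t x = 160, y = 80;                          // hypothetical pixel of interest
uint8_t *pixel = base + y * bytesPerRow + x * 4; // 4 bytes per BGRA pixel
float colourValue = (pixel[0] + pixel[1] + pixel[2]) / 255.0f; // B + G + R
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);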

Image from URL for Retina Display

I have an application that pulls images from an NSURL. Is it possible to inform the application that they are retina ('@2x') versions (the images are of retina resolution)? I currently have the following but the images appear pixelated on the higher resolution displays:
NSURL *url = [NSURL URLWithString:self.imageURL];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data];
self.pictureImageView.image = image;
You need to rescale the UIImage before adding it to the image view.
NSURL *url = [NSURL URLWithString:self.imageURL];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data];
CGFloat screenScale = [UIScreen mainScreen].scale;
if (image.scale != screenScale)
image = [UIImage imageWithCGImage:image.CGImage scale:screenScale orientation:image.imageOrientation];
self.pictureImageView.image = image;
It's best to avoid hard-coding the scale value, thus the UIScreen call. See Apple’s documentation on UIImage’s scale property for more information about why this is necessary.
It’s also best to avoid using NSData’s -dataWithContentsOfURL: method (unless your code is running on a background thread), as it uses a synchronous network call which cannot be monitored or cancelled. You can read more about the pains of synchronous networking and the ways to avoid it in this Apple Technical Q&A.
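As an aside, a minimal sketch of that asynchronous alternative using NSURLSession (iOS 7+):
CGFloat scale = [UIScreen mainScreen].scale; // capture on the main thread
NSURL *url = [NSURL URLWithString:self.imageURL];
[[[NSURLSession sharedSession] dataTaskWithURL:url
                             completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    if (!data) return; // handle the error as appropriate
    UIImage *image = [UIImage imageWithData:data scale:scale];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.pictureImageView.image = image; // UIKit work back on the main thread
    });
}] resume];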
Try using imageWithData:scale: (iOS 6 and later)
NSData *imageData = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:imageData scale:[[UIScreen mainScreen] scale]];
You need to set the scale on the UIImage.
UIImage* img = [[UIImage alloc] initWithData:data];
CGFloat screenScale = [UIScreen mainScreen].scale;
if (screenScale != img.scale) {
    img = [UIImage imageWithCGImage:img.CGImage scale:screenScale orientation:img.imageOrientation];
}
The documentation says to be careful to construct all your UIImages at the same scale, otherwise you might get weird display issues where things show at half size, double size, half resolution, et cetera. To avoid all that, load all UIImages at retina resolution. Resources will be loaded at the correct scale automatically. For UIImages constructed from URL data, you need to set it.
Just to add to this, what I did specifically was the following; in the same situation, it works like a charm.
double scaleFactor = [UIScreen mainScreen].scale;
NSLog(@"Scale Factor is %f", scaleFactor);
if (scaleFactor == 1.0) {
    [cell.videoImageView setImageWithURL:[NSURL URLWithString:regularThumbnailURLString]];
} else if (scaleFactor == 2.0) {
    [cell.videoImageView setImageWithURL:[NSURL URLWithString:retinaThumbnailURLString]];
}
The @2x convention is just a convenient way of loading images from the application bundle.
If you want to show an image on a Retina display, you have to make it 2x bigger:
Image size 100x100
View size: 50x50.
Edit: I think if you're loading images from a server, the best solution would be to add an additional parameter (e.g. scale) and return images of the appropriate size:
www.myserver.com/get_image.php?image_name=img.png&scale=2
You can obtain the scale using [[UIScreen mainScreen] scale]
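A minimal sketch of building that request, using the hypothetical endpoint above:
NSInteger scale = (NSInteger)[[UIScreen mainScreen] scale];
NSString *urlString = [NSString stringWithFormat:@"http://www.myserver.com/get_image.php?image_name=img.png&scale=%ld", (long)scale];
NSURL *imageURL = [NSURL URLWithString:urlString];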
To tell iOS programmatically that a particular image is Retina, you can do something like this:
UIImage *img = [self getImageFromDocumentDirectory];
img = [UIImage imageWithCGImage:img.CGImage scale:2 orientation:img.imageOrientation];
In my case, the TabBarItem image was dynamic, i.e. it was downloaded from a server, so iOS could not identify it as Retina. The above code snippet worked for me like a charm.