I'm facing a weird issue: I'm drawing inside an NSImage using the following pseudo-code:
NSString *text = @"Hello world!";
NSDictionary *dict = [[[NSDictionary alloc] initWithObjectsAndKeys:
                          [NSColor colorWithCGColor:textColor], NSForegroundColorAttributeName,
                          font, NSFontAttributeName, nil] autorelease];
NSMutableAttributedString *str = [[[NSMutableAttributedString alloc] initWithString:text attributes:dict] autorelease];
NSSize stringSize = [str size];
NSImage* image = [[[NSImage alloc] initWithSize:stringSize] autorelease];
[image lockFocus];
NSRect drawRect = NSMakeRect(0,0,stringSize.width,stringSize.height);
[str drawInRect:drawRect];
[image unlockFocus];
Now the problem: with a dual-monitor configuration, if I keep my Retina display open the string is mangled (I get only half of the string drawn), while simply closing my Retina display and using only my Cinema Display draws the string correctly. It's as if the NSImage picks up the default context and a scaling factor from the Retina display.
Do you have any hints?
Thanks!
OK, I will keep this here for future reference, even though there is an existing question about displaying NSImage that covers the same aspect.
No matter which display is the primary one, the NSGraphicsContext seems to come with an affine transform that multiplies coordinates by 2 to address the Retina resolution.
You just need to reset the affine transform before drawing into the NSImage:
NSAffineTransform *trans = [[[NSAffineTransform alloc] init] autorelease];
[trans set];
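Putting the fix in context, a minimal sketch (reusing the str, stringSize and image variables from the question):
[image lockFocus];
// Reset any scale the focused context inherited from a Retina screen,
// so one point of the string maps to one unit in the image.
NSAffineTransform *trans = [[[NSAffineTransform alloc] init] autorelease];
[trans set];
[str drawInRect:NSMakeRect(0, 0, stringSize.width, stringSize.height)];
[image unlockFocus];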
I have run into problems when working on a Retina display. The NSImage size is correct, but if I create an NSBitmapImageRep from it and write that to a file, I get an image whose size is twice as big as the original image. There is no such problem when I do the same on a non-Retina display.
1. I create an NSImage from a file (1920x1080)
2. I do some drawing on it
3. I create an NSBitmapImageRep from the image with the drawings
4. I write it to a file
5. I get an image with 3840x2160 dimensions
What could cause that?
NSImage *originalImage = [[NSImage alloc] initWithContentsOfURL:fileUrl];
NSImage *editedImage = [[NSImage alloc] initWithSize:originalImage.size];
[editedImage lockFocus];
//I draw here NSBezierPaths
[editedImage unlockFocus];
NSBitmapImageRep *savingRep = [NSBitmapImageRep imageRepWithData:[editedImage TIFFRepresentation]];
NSData *savingData = [savingRep representationUsingType: NSPNGFileType properties: nil];
[savingData writeToFile:desiredFileLocationAndName atomically:NO];
If I open image and save it without editing I get the correct dimensions image
NSImage *imageFromFile = [[NSImage alloc] initWithContentsOfURL:fileURL];
NSBitmapImageRep *newRepresentation = [NSBitmapImageRep imageRepWithData:[imageFromFile TIFFRepresentation]];
NSData *savingData = [newRepresentation representationUsingType: NSPNGFileType properties: nil];
[savingData writeToFile:desiredFileLocationAndName atomically:NO];
A bitmap representation of the image is measured in pixels, so it is twice the size: the NSImage gives you a size in points, and on Retina devices a point measures 2 pixels. There is nothing wrong with what it's giving you.
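You can see the distinction by logging both values; a small diagnostic sketch, assuming the editedImage from the code above:
// Compare the point size of the NSImage with the pixel size of its rep.
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[editedImage TIFFRepresentation]];
NSLog(@"size in points: %@, size in pixels: %ldx%ld",
      NSStringFromSize(editedImage.size),
      (long)rep.pixelsWide, (long)rep.pixelsHigh);
// After lockFocus on a Retina screen, pixelsWide is typically twice size.width.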
I am cutting up a large image and saving it into many different images. I first implemented this in iOS and it works fine, but when I try to port the code to OSX, a thin white line (1 pixel) appears on the top and right of the image. The line is not pure white, or solid (see sample below).
Here is the iOS code to make one sub-image, that works like a champ:
-(void)testMethod:(int)page forRect:(CGRect)rect{
NSString *filePath = @"imageName";
NSData *data = [HeavyResourceManager dataForPath:filePath];//this just gets the image as NSData
UIImage *image = [UIImage imageWithData:data];
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);//crop in the rect
UIImage *result = [UIImage imageWithCGImage:imageRef scale:0 orientation:image.imageOrientation];
CGImageRelease(imageRef);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectoryPath = [paths objectAtIndex:0];
[UIImageJPEGRepresentation(result, 1.0) writeToFile:[documentsDirectoryPath stringByAppendingPathComponent:@"output.jpg"] atomically:YES];
}
Here is the ported code in OSX that causes the white lines to be added:
NSImage *source = [[[NSImage alloc]initWithContentsOfFile:imagePath] autorelease];
//init the image
NSImage *target = [[[NSImage alloc]initWithSize:panelRect.size] autorelease];
//start drawing
[target lockFocus];
[source drawInRect:NSMakeRect(0,0,panelRect.size.width,panelRect.size.height)
fromRect:NSMakeRect(panelRect.origin.x , source.size.height - panelRect.origin.y - panelRect.size.height, panelRect.size.width, panelRect.size.height)
operation:NSCompositeCopy
fraction:1.0];
[target unlockFocus];
//create a NSBitmapImageRep
NSBitmapImageRep *bmpImageRep = [[[NSBitmapImageRep alloc]initWithData:[target TIFFRepresentation]] autorelease];
//write to tiff
[[target TIFFRepresentation] writeToFile:@"outputImage.tiff" atomically:NO];
[target addRepresentation:bmpImageRep];
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:1.0] forKey:NSImageCompressionFactor];
//get the data from the representation
NSData *data = [bmpImageRep representationUsingType: NSJPEGFileType
properties: imageProps];
//write the data to a file
[data writeToFile: @"outputImage.jpg" atomically:NO];
data = [bmpImageRep representationUsingType: NSPNGFileType properties: imageProps];
//write the data to png
[data writeToFile: @"outputImage.png" atomically:NO];
The above code saves the image in three different formats to check whether the problem was in the save process of a specific format. It does not seem to be, because all three formats show the same problem.
Here is a blown up (4x) version of top right hand corner of the images:
(OSX, note the white line top and left. It looks like a blur here, because the image is blown up)
(iOS, note there are no white lines)
If someone could tell me why this might be happening, I would be very happy. Perhaps it has something to do with the quality difference (the OSX version seems lower quality - though you can't notice)? Perhaps there is a completely different way to do this?
For reference, here is the unscaled OSX image:
Update: Thanks to Daij-Djan, I was able to stop the drawInRect method from antialiasing:
//start drawing on target
[target lockFocus];
[NSGraphicsContext saveGraphicsState];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationNone];
[[NSGraphicsContext currentContext] setShouldAntialias:NO];
//draw the portion of the source image on target image
[source drawInRect:NSMakeRect(0,0,panelRect.size.width,panelRect.size.height)
fromRect:NSMakeRect(panelRect.origin.x , source.size.height - panelRect.origin.y - panelRect.size.height, panelRect.size.width, panelRect.size.height)
operation:NSCompositeDestinationAtop
fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
//end drawing
[target unlockFocus];
Update: Changed the interpolation to NSImageInterpolationNone, as this gives a better representation. The high interpolation setting makes minor adjustments, which is noticeable when zooming in on text. Removing interpolation stops pixels from jumping around, but there is still a small difference in color (164 versus 155 for a grey). It would be great to be able to just cut up an image like I can in iOS...
It looks like antialiasing... you have to round the float values you calculate when cutting/scaling the image.
Use roundf() on the float values.
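For example, a hedged sketch of snapping the source rect from the code above to whole pixels before drawing:
// Round the crop rectangle to whole pixels so drawInRect: never has to
// interpolate across pixel boundaries (round() for CGFloat values;
// roundf() is the plain-float variant).
NSRect snapped;
snapped.origin.x = round(panelRect.origin.x);
snapped.origin.y = round(source.size.height - panelRect.origin.y - panelRect.size.height);
snapped.size.width = round(panelRect.size.width);
snapped.size.height = round(panelRect.size.height);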
I'm experiencing some massive memory leaks that don't show up using the "leaks" instrument. I pop up a Modal View Controller and apply 2 CoreImage filters to 4 or 5 different images. Using Instruments I can see the memory jump up about 40-50 MB as these images are created, but even after I dismiss the Modal View Controller, I never get that memory back, and the application will crash after repeating this process 2 or 3 times. I'm happy for any advice you can provide because this is driving me absolutely crazy. Below is the method in question:
UIView *finalView = [[UIView alloc] initWithFrame:CGRectMake(1024, 0, 1792, 1345)];
UIImageView *templateImageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 1792, 1345)];
templateImageView.image = [UIImage imageNamed:[NSString stringWithFormat:@"%@.png", [theme objectForKey:@"template_background"]]];
//CI background Setup
NSString *filePath5 = [[NSBundle mainBundle] pathForResource:[theme objectForKey:@"template_background"] ofType:@"png"];
NSURL *fileNameAndPath5 = [NSURL fileURLWithPath:filePath5];
@autoreleasepool {
finalBackBeginImage = [CIImage imageWithContentsOfURL:fileNameAndPath5];
finalBackImage = [CIFilter filterWithName:@"CIHueAdjust" keysAndValues:@"inputAngle", [NSNumber numberWithFloat:[[boothPrefs objectForKey:@"templateBackground_hue"] floatValue]*6.28], @"inputImage", finalBackBeginImage, nil].outputImage;
finalBackImage = [CIFilter filterWithName:@"CIColorControls" keysAndValues:@"inputSaturation", [NSNumber numberWithFloat:([[boothPrefs objectForKey:@"templateBackground_saturation"] floatValue] * 5)], @"inputImage", finalBackImage, nil].outputImage;
finalBackContent = [CIContext contextWithOptions:nil];
CGImageRef cgimgFinalBack =
[finalBackContent createCGImage:finalBackImage fromRect:[finalBackImage extent]];
UIImage *newFinalBackImg = [UIImage imageWithCGImage:cgimgFinalBack];
[templateImageView setImage:newFinalBackImg];
CGImageRelease(cgimgFinalBack);
}
[finalView addSubview:templateImageView];
I've switched from using imageNamed to using imageWithData, with the code below. In 5 minutes of testing (sorry, I've spent close to 12 hours on this issue now), I see that real memory usage for the same operation is up to 50% lower (115 MB versus up to 230 MB), and the mysterious "push +80 MB, pop -30 MB" real-memory issue appears to be solved.
I'm keeping my fingers crossed though.
//use images like this as base images for CIFilters
NSData *imageData = [NSData dataWithContentsOfFile:[[NSBundle mainBundle] pathForResource:self.frameName ofType:nil]];
UIImage *imageForFilter = [UIImage imageWithData:imageData];
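One more mitigation worth trying (my assumption, not something from the original posts): contextWithOptions: allocates expensive GPU/CPU state, so creating the CIContext once and reusing it across filter passes may also help. A minimal sketch:
// Hypothetical helper: hand out one shared CIContext instead of
// creating a fresh one for every image.
- (CIContext *)sharedCIContext {
    static CIContext *context = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        context = [CIContext contextWithOptions:nil];
    });
    return context;
}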
I have a Cocoa Mac image editing app which lets users export JPEG images. I'm currently using the following code to export these images as JPEG files:
//this is user specified
NSInteger resolution;
NSImage* savedImage = [[NSImage alloc] initWithSize:NSMakeSize(600, 600)];
[savedImage lockFocus];
//draw here
[savedImage unlockFocus];
NSBitmapImageRep* savedImageBitmapRep = [NSBitmapImageRep imageRepWithData:[savedImage TIFFRepresentationUsingCompression:NSTIFFCompressionNone factor:1.0]];
NSDictionary* properties = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithFloat:1.0], NSImageCompressionFactor, nil];
//holds the jpeg file
NSData * imageData = nil;
imageData = [savedImageBitmapRep representationUsingType:NSJPEGFileType properties:properties];
However, I would like for the user to be able to provide the pixels per inch for this JPEG image (like you can in Photoshop's export options). What would I need to modify in the above code to adjust this value for the exported JPEG?
I couldn't find a way to do it with the NSImage APIs, but CGImage can, by setting kCGImagePropertyDPIHeight/Width.
I also set kCGImageDestinationLossyCompressionQuality, which I think is the same as NSImageCompressionFactor.
//this is user specified
NSInteger resolution = 100;
NSImage* savedImage = [[NSImage alloc] initWithSize:NSMakeSize(600, 600)];
[savedImage lockFocus];
//draw here
[savedImage unlockFocus];
NSBitmapImageRep* savedImageBitmapRep = [NSBitmapImageRep imageRepWithData:[savedImage TIFFRepresentationUsingCompression:NSTIFFCompressionNone factor:1.0]];
NSDictionary* properties = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithFloat:1.0], kCGImageDestinationLossyCompressionQuality,
[NSNumber numberWithInteger:resolution], kCGImagePropertyDPIHeight,
[NSNumber numberWithInteger:resolution], kCGImagePropertyDPIWidth,
nil];
NSMutableData* imageData = [NSMutableData data];
CGImageDestinationRef imageDest = CGImageDestinationCreateWithData((CFMutableDataRef) imageData, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(imageDest, [savedImageBitmapRep CGImage], (CFDictionaryRef) properties);
CGImageDestinationFinalize(imageDest);
CFRelease(imageDest); // the destination follows the Create rule and must be released
// Do something with imageData
if (![imageData writeToFile:[@"~/Desktop/test.jpg" stringByExpandingTildeInPath] atomically:NO])
NSLog(@"Failed to write imageData");
For NSImage or NSImageRep you do not set the resolution directly; you set the size instead.
For size, numberOfPixels and resolution the following equation holds:
size = numberOfPixels * 72.0 / resolution
size is a length expressed in points, the unit being 1/72 inch (size and resolution are floats). You can see that for an image with dpi = 72, size and numberOfPixels are numerically the same (but their meaning is very different).
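For example, an image 1800 pixels wide saved at a resolution of 300 dpi gets a width of 1800 * 72.0 / 300 = 432.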
After creating an NSBitmapImageRep, the size for the desired resolution can be set:
NSBitmapImageRep* savedImageBitmapRep = . . . ; // create the new rep
NSSize newSize;
newSize.width = [savedImageBitmapRep pixelsWide] * 72.0 / resolution; // x-resolution
newSize.height = [savedImageBitmapRep pixelsHigh] * 72.0 / resolution; // y-resolution
[savedImageBitmapRep setSize:newSize];
// save the rep
Two remarks: do you really need the lockFocus / unlockFocus way? The preferred way to build a new NSBitmapImageRep is to use NSGraphicsContext; see: http://www.mail-archive.com/cocoa-dev@lists.apple.com/msg74857.html
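A minimal sketch of that approach (the 600x600 canvas size is just taken from the question):
// Build the rep first, then draw into a context backed by it;
// no lockFocus/unlockFocus and no screen-dependent scale involved.
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL pixelsWide:600 pixelsHigh:600
               bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0 bitsPerPixel:0];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
// draw here
[NSGraphicsContext restoreGraphicsState];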
And: using TIFFRepresentation for an NSBitmapImageRep is very time and space consuming. Since 10.6 another way exists and costs nothing, because lockFocus and unlockFocus create an object of class NSCGImageSnapshotRep, which under the hood is a CGImage. (In OS versions before 10.6 it was an NSCachedImageRep.) The following does it:
[anImage lockFocus];
// draw something
[anImage unlockFocus];
// now anImage contains an NSCGImageSnapshotRep
CGImageRef cg = [anImage CGImageForProposedRect:NULL context:nil hints:nil];
NSBitmapImageRep *newRep = [[NSBitmapImageRep alloc] initWithCGImage:cg];
// set the resolution
// here you may NSLog anImage, cg and newRep
// save the newRep
// release the newRep if needed
How can I draw to an image in Objective-C? All I need to do is create an image with a size I set, draw a few antialiased lines, and save the image to a PNG file. I tried to find this in the Apple docs, but there are CGImage, NSImage, CIImage and more. Which one is easiest for my goal? I only need to support the latest Mac OS X version, so new things are not a problem.
Probably the easiest way is to use an NSImage and draw directly into it after calling lockFocus.
Example:
NSSize imageSize = NSMakeSize(512, 512);
NSImage *image = [[[NSImage alloc] initWithSize:imageSize] autorelease];
[image lockFocus];
//draw a line:
[NSBezierPath strokeLineFromPoint:NSMakePoint(100, 100) toPoint:NSMakePoint(200, 200)];
//...
NSBitmapImageRep *imageRep = [[[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, imageSize.width, imageSize.height)] autorelease];
NSData *pngData = [imageRep representationUsingType:NSPNGFileType properties:nil];
[image unlockFocus];
[pngData writeToFile:@"/path/to/your/file.png" atomically:YES];
Well your question is actually two questions in one.
First question is about how to draw an image. You should first read the docs about drawing images. Apple has a Cocoa Drawing Guide about this topic. Start from there to draw images.
Then you need to save the image to disk. Here is a nice piece of code from over here:
NSBitmapImageRep *bits = ...; // get a rep from your image, or grab from a view
NSData *data;
data = [bits representationUsingType: NSPNGFileType
properties: nil];
[data writeToFile: @"/path/to/wherever/test.png"
atomically: NO];