NSBitmapImageRep -initWithFocusedViewRect is doubling size of image - objective-c

I have the following Objective-C method, meant to resize an NSBitmapImageRep to a designated size.
Currently, when working with an image of size 2048x1536 and trying to resize it to 300x225, this method keeps returning an NSBitmapImageRep of size 600x450.
- (NSBitmapImageRep *)resizeImageRep:(NSBitmapImageRep *)anOriginalImageRep toTargetSize:(NSSize)aTargetSize
{
    NSImage *theTempImageRep = [[[NSImage alloc] initWithSize:aTargetSize] autorelease];
    [theTempImageRep lockFocus];
    [NSGraphicsContext currentContext].imageInterpolation = NSImageInterpolationHigh;
    NSRect theTargetRect = NSMakeRect(0.0, 0.0, aTargetSize.width, aTargetSize.height);
    [anOriginalImageRep drawInRect:theTargetRect];
    NSBitmapImageRep *theResizedImageRep = [[[NSBitmapImageRep alloc] initWithFocusedViewRect:theTargetRect] autorelease];
    [theTempImageRep unlockFocus];
    return theResizedImageRep;
}
Debugging it, I'm finding that theTargetRect is the proper size, but the call to initWithFocusedViewRect: returns a bitmap of 600x450 pixels (wide x high).
I'm at a complete loss as to why this may be happening. Does anyone have any insight?

Your technique won't reliably produce a resized image. For one thing, the method initWithFocusedViewRect: reads bitmap data back from the focused window and is meant for creating screen grabs; on a Retina (2x) display the window's backing store holds twice as many pixels in each dimension, which is exactly why you get 600x450 instead of 300x225.
You should instead create a new graphics context backed by an NSBitmapImageRep or NSImage of the desired size, then draw your image into that context.
Something like this.
NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep:theTempImageRep];
if (context)
{
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:context];
    // Use drawInRect: to scale into the target rect
    // (drawAtPoint: would draw at the original size instead).
    [anOriginalImageRep drawInRect:theTargetRect];
    [NSGraphicsContext restoreGraphicsState];
}
// Now your temp image rep should have the resized original.
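Putting that together, here is a minimal sketch of a drop-in replacement for the original method (manual retain/release, as in the question). Because the rep's pixel dimensions are pinned at creation time, the result is the same on standard and Retina displays:

- (NSBitmapImageRep *)resizeImageRep:(NSBitmapImageRep *)anOriginalImageRep toTargetSize:(NSSize)aTargetSize
{
    // Create a rep whose pixel dimensions match the target exactly;
    // no window backing store (and hence no backing scale factor) is involved.
    NSBitmapImageRep *theResizedImageRep = [[[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:aTargetSize.width
                      pixelsHigh:aTargetSize.height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0] autorelease];
    // Report the point size as equal to the pixel size (1:1).
    [theResizedImageRep setSize:aTargetSize];

    NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep:theResizedImageRep];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:context];
    context.imageInterpolation = NSImageInterpolationHigh;
    [anOriginalImageRep drawInRect:NSMakeRect(0.0, 0.0, aTargetSize.width, aTargetSize.height)];
    [NSGraphicsContext restoreGraphicsState];

    return theResizedImageRep;
}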

Related

Offloading expensive drawing to background - can I prevent drawRect from clearing the NSView's dirtyRect?

I have a custom NSView 'MyView' that displays an NSImage that is expensive to create. Ideally, the rendering should happen on a background thread, and MyView should update itself once rendering is done.
To achieve this, I followed the suggestion in WWDC 2013, Session 215 (around 4:00).
It works like this: when drawRect is called and the image hasn't been created yet, rendering is triggered on a background queue. There, the image is created, stored in an instance variable, and setNeedsDisplay is called again (on the main thread) to mark the view as dirty. That causes drawRect to be called a second time; the image is now present and can be drawn:
- (void)drawRect:(NSRect)dirtyRect
{
    // Do we have an image?
    if( self.image )
    {
        // Yes, we can draw the image (and invalidate it right away for demo purposes)
        [self.image drawInRect:self.bounds];
        self.image = nil;
    }
    else
    {
        // No, we have to async render the image first and mark the view as dirty afterwards
        CGSize imageSize = self.bounds.size;
        dispatch_async( dispatch_get_global_queue( QOS_CLASS_USER_INTERACTIVE, 0 ), ^
        {
            self.image = [self _renderImageWithSize:imageSize];
            dispatch_async( dispatch_get_main_queue(), ^
            {
                [self setNeedsDisplayInRect:dirtyRect];
            });
        });
    }
}
- (NSImage *)_renderImageWithSize:(NSSize)size
{
    // Simulate expensive image rendering (just for demo purposes)
    NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:nil
                                                                          pixelsWide:size.width
                                                                          pixelsHigh:size.height
                                                                       bitsPerSample:8
                                                                     samplesPerPixel:4
                                                                            hasAlpha:YES
                                                                            isPlanar:NO
                                                                      colorSpaceName:NSDeviceRGBColorSpace
                                                                         bytesPerRow:0
                                                                        bitsPerPixel:32];
    NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmapRep];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:context];
    // Draw oval with random hue.
    float hue = ( (float)( labs( random() % 100 )) / 100.0 );
    [[NSColor colorWithHue:hue saturation:0.5 brightness:1.0 alpha:1.0] setFill];
    [[NSBezierPath bezierPathWithOvalInRect:NSMakeRect( 0.0, 0.0, size.width, size.height )] fill];
    [NSGraphicsContext restoreGraphicsState];
    NSImage *image = [[NSImage alloc] init];
    [image addRepresentation:bitmapRep];
    // Simulate super-expensive rendering
    sleep( 1 );
    return image;
}
That code works fine but it creates an annoying flicker. It seems that the view is cleared in the first call to drawRect. It stays cleared until the second drawRect actually draws the rendered image.
Of course, I could just draw the old image, but I would prefer not to draw stale data unnecessarily.
Is there a way to keep the view from clearing in drawRect?
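One way to avoid the blank frame, along the lines of drawing the old image as the question itself suggests: keep the last rendered image around and only replace it once the new one is ready. A rough sketch (the imageIsStale and renderInFlight properties are my assumptions, not part of the original code):

- (void)drawRect:(NSRect)dirtyRect
{
    // Always draw the last rendered image, even if it is stale,
    // so the view is never left blank between renders.
    if( self.image )
    {
        [self.image drawInRect:self.bounds];
    }

    // Kick off a re-render only when needed, and only one at a time.
    if( ( self.image == nil || self.imageIsStale ) && !self.renderInFlight )
    {
        self.renderInFlight = YES;
        CGSize imageSize = self.bounds.size;
        dispatch_async( dispatch_get_global_queue( QOS_CLASS_USER_INTERACTIVE, 0 ), ^
        {
            NSImage *rendered = [self _renderImageWithSize:imageSize];
            dispatch_async( dispatch_get_main_queue(), ^
            {
                self.image = rendered;
                self.imageIsStale = NO;
                self.renderInFlight = NO;
                [self setNeedsDisplayInRect:dirtyRect];
            });
        });
    }
}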

CIFilter gaussianBlur and boxBlur are shrinking the image - how to avoid the resizing?

I am taking a snapshot of the contents of an NSView, applying a CIFilter, and placing the result back into the view. If the CIFilter is a form of blur, such as CIBoxBlur or CIGaussianBlur, the filtered result is slightly smaller than the original. As I am doing this iteratively, the result becomes increasingly small, which I want to avoid.
The issue is alluded to here, albeit in a slightly different context (Quartz Composer). Apple's FunHouse demo app applies a Gaussian blur without the image shrinking, but I haven't yet worked out how it does this (it seems to use OpenGL, which I am not familiar with).
Here is the relevant part of the code (inside an NSView subclass)
NSImage *background = [[NSImage alloc] initWithData:[self dataWithPDFInsideRect:[self bounds]]];
CIContext *context = [[NSGraphicsContext currentContext] CIContext];
CIImage *ciImage = [background ciImage]; // custom NSImage-to-CIImage conversion
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"
                              keysAndValues:kCIInputImageKey, ciImage,
                                            @"inputRadius", [NSNumber numberWithFloat:10.0], nil];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result
                                   fromRect:[result extent]];
NSImage *newBackground = [[NSImage alloc] initWithCGImage:cgImage size:background.size];
If I try a color-changing filter such as CISepiaTone, which does not shift pixels around, the shrinking does not occur.
I am wondering if there is a quick fix that doesn't involve diving into OpenGL?
They're actually not shrinking the image, they're expanding it (I think by 7 pixels around all edges), and the default UIView content scaling makes it look like it's been shrunk.
Crop your CIImage with:
CIImage *cropped = [output imageByCroppingToRect:CGRectMake(0, 0, view.bounds.size.width * scale, view.bounds.size.height * scale)];
where view is the view you originally drew into and scale is your [[UIScreen mainScreen] scale] (this answer is written in iOS terms; on the Mac the equivalent scale is the window's backingScaleFactor).
You probably want to clamp your image before using the blur:
// Category method on CIImage.
- (CIImage *)imageByClampingToExtent {
    CIFilter *clamp = [CIFilter filterWithName:@"CIAffineClamp"];
    [clamp setValue:[NSAffineTransform transform] forKey:@"inputTransform"];
    [clamp setValue:self forKey:@"inputImage"];
    return [clamp valueForKey:@"outputImage"];
}
Then blur, and then crop to the original extent. You'll get non-transparent edges this way.
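Putting the steps together for the Mac case in the question, a sketch (assuming the imageByClampingToExtent category above and a CIImage made from the view snapshot):

CIImage *BlurredImagePreservingExtent(CIImage *input, CGFloat radius)
{
    CGRect originalExtent = input.extent;

    // 1. Clamp: extend the edge pixels outward so the blur has data
    //    to sample beyond the image borders.
    CIImage *clamped = [input imageByClampingToExtent];

    // 2. Blur.
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:clamped forKey:kCIInputImageKey];
    [blur setValue:@(radius) forKey:@"inputRadius"];
    CIImage *blurred = [blur valueForKey:kCIOutputImageKey];

    // 3. Crop back to the original extent so iterating the filter
    //    never changes the size.
    return [blurred imageByCroppingToRect:originalExtent];
}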
@BBC_Z's solution is correct.
Although I find it more elegant to crop not according to the view, but according to the image.
And you can cut away the useless blurred edges:
// Crop transparent edges from blur
resultImage = [resultImage imageByCroppingToRect:(CGRect){
    .origin.x = blurRadius,
    .origin.y = blurRadius,
    .size.width = originalCIImage.extent.size.width - blurRadius * 2,
    .size.height = originalCIImage.extent.size.height - blurRadius * 2
}];

How to take region screenshot in Mac OS X using Cocoa and CGDisplayCreateImageForRect?

I found how to take a full-size screenshot (How to take screenshot in Mac OS X using Cocoa or C++), but how do I take a screenshot of just a region?
The second function below does exactly what you want.
// This is a cute function for creating a CGImageRef from an NSImage.
// I found it somewhere on SO but I do not remember the link, I am sorry..
CGImageRef CGImageCreateWithNSImage(NSImage *image) {
    NSSize imageSize = [image size];
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, imageSize.width, imageSize.height, 8, 0,
                                                       [[NSColorSpace genericRGBColorSpace] CGColorSpace],
                                                       kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst);
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:bitmapContext flipped:NO]];
    [image drawInRect:NSMakeRect(0, 0, imageSize.width, imageSize.height) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];
    CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);
    return cgImage;
}
NSImage *NSImageFromScreenWithRect(CGRect rect) {
    // Copy a full-screen screenshot to the clipboard; works on OS X only..
    system("screencapture -c -x");
    // Get an NSImage from the clipboard..
    NSImage *imageFromClipboard = [[NSImage alloc] initWithPasteboard:[NSPasteboard generalPasteboard]];
    // Get a CGImageRef from the NSImage for further cutting..
    CGImageRef screenShotImage = CGImageCreateWithNSImage(imageFromClipboard);
    // Cut the desired subimage out of the full-screen screenshot..
    CGImageRef screenShotCenter = CGImageCreateWithImageInRect(screenShotImage, rect);
    // Create an NSImage from the CGImageRef..
    NSImage *resultImage = [[NSImage alloc] initWithCGImage:screenShotCenter size:rect.size];
    // Release the CGImageRefs because ARC has no effect on them..
    CGImageRelease(screenShotCenter);
    CGImageRelease(screenShotImage);
    return resultImage;
}
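Since the question asks about CGDisplayCreateImageForRect specifically, here is a minimal sketch using that API directly instead of going through screencapture and the pasteboard. Note that recent macOS versions require the screen-recording permission for this call to return real window content:

NSImage *NSImageFromDisplayRect(CGRect rect) {
    // Grab the region straight from the main display; the rect is in
    // the display's coordinate space (origin at the top-left).
    CGImageRef cgImage = CGDisplayCreateImageForRect(CGMainDisplayID(), rect);
    if (cgImage == NULL) {
        return nil;
    }
    NSImage *result = [[NSImage alloc] initWithCGImage:cgImage size:rect.size];
    // CGImageRefs are not managed by ARC.
    CGImageRelease(cgImage);
    return result;
}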

Generate scaled image from off-screen NSView

I have a sequence of off screen NSViews in a Cocoa application, which are used to compose a PDF for printing. The views are not in an NSWindow, or visible in any way.
I'd like to be able to generate thumbnail images of that view, exactly as the PDF would look, but scaled down to fit a certain pixel size (constrained to a width or height). This needs to be as fast as possible, so I'd like to avoid rendering to PDF, then converting to raster and scaling - I'd like to go direct to the raster.
At the moment I'm doing:
NSBitmapImageRep *bitmapImageRep = [pageView bitmapImageRepForCachingDisplayInRect:pageView.bounds];
[pageView cacheDisplayInRect:pageView.bounds toBitmapImageRep:bitmapImageRep];
NSImage *image = [[NSImage alloc] initWithSize:bitmapImageRep.size];
[image addRepresentation:bitmapImageRep];
This approach is working well, but I can't work out how to apply a scaling to the NSView before rendering the bitmapImageRep. I want to avoid using scaleUnitSquareToSize, because as I understand it, that only changes the bounds, not the frame of the NSView.
Any suggestions on the best way of doing this?
This is what I ended up doing, which works perfectly. We draw directly into an NSBitmapImageRep, but scale the context explicitly using CGContextScaleCTM beforehand. graphicsContext.graphicsPort gives you the handle on the CGContextRef for the NSGraphicsContext.
NSView *pageView = [self viewForPageIndex:pageIndex];
float scale = width / pageView.bounds.size.width;
float height = scale * pageView.bounds.size.height;
NSRect targetRect = NSMakeRect(0.0, 0.0, width, height);
NSBitmapImageRep *bitmapRep;
bitmapRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:nil
                                                    pixelsWide:targetRect.size.width
                                                    pixelsHigh:targetRect.size.height
                                                 bitsPerSample:8
                                               samplesPerPixel:4
                                                      hasAlpha:YES
                                                      isPlanar:NO
                                                colorSpaceName:NSCalibratedRGBColorSpace
                                                  bitmapFormat:0
                                                   bytesPerRow:(4 * targetRect.size.width)
                                                  bitsPerPixel:32];
[NSGraphicsContext saveGraphicsState];
NSGraphicsContext *graphicsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmapRep];
[NSGraphicsContext setCurrentContext:graphicsContext];
CGContextScaleCTM(graphicsContext.graphicsPort, scale, scale);
[pageView displayRectIgnoringOpacity:pageView.bounds inContext:graphicsContext];
[NSGraphicsContext restoreGraphicsState];
NSImage *image = [[NSImage alloc] initWithSize:bitmapRep.size];
[image addRepresentation:bitmapRep];
return image;
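For reference, the snippet assumes width and pageIndex are supplied by an enclosing method; a hypothetical wrapper (the method name is mine, not from the original) and its use would look like:

- (NSImage *)thumbnailForPageIndex:(NSUInteger)pageIndex width:(float)width
{
    // ... the code above, unchanged ...
}

// Usage: render page 0 as a 200px-wide thumbnail.
NSImage *thumb = [self thumbnailForPageIndex:0 width:200.0];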
How about using scaleUnitSquareToSize: and then passing a smaller rect to bitmapImageRepForCachingDisplayInRect: and cacheDisplayInRect:toBitmapImageRep:?
So, if you downscale by a factor of 2, you'd pass a rect with half the width and height of the bounds.

NSBitmapImageRep and multi-page TIFFs

I've got a program that can open TIFF documents and display them. I'm using setFlipped:YES.
If I'm just dealing with single page image files, I can do
[image setFlipped: YES];
and that, in addition to the view being flipped, seems to draw the image correctly.
However, for some reason, setting the image's flippedness doesn't seem to affect the flippedness of the individual representations.
This is relevant because the multiple pages of a multi-page TIFF appear as different "representations" of the same image. So, if I just draw the IMAGE, it's flipped, but if I draw a specific representation, it isn't flipped. I also can't figure out how to choose which representation is the default one that gets drawn when you draw the NSImage.
thanks.
You shouldn't use the -setFlipped: method to control how the image is drawn. You should use a transform based on the flipped-ness of the context you are drawing into. Something like this (a category on NSImage):
@implementation NSImage (FlippedDrawing)

- (void)drawAdjustedInRect:(NSRect)dstRect fromRect:(NSRect)srcRect operation:(NSCompositingOperation)op fraction:(CGFloat)delta
{
    NSGraphicsContext *context = [NSGraphicsContext currentContext];
    BOOL contextIsFlipped = [context isFlipped];
    if (contextIsFlipped)
    {
        NSAffineTransform *transform;
        [context saveGraphicsState];
        // Flip the coordinate system back.
        transform = [NSAffineTransform transform];
        [transform translateXBy:0 yBy:NSMaxY(dstRect)];
        [transform scaleXBy:1 yBy:-1];
        [transform concat];
        // The transform above places the y-origin right where the image should be drawn.
        dstRect.origin.y = 0.0;
    }
    [self drawInRect:dstRect fromRect:srcRect operation:op fraction:delta];
    if (contextIsFlipped)
    {
        [context restoreGraphicsState];
    }
}

- (void)drawAdjustedAtPoint:(NSPoint)point
{
    [self drawAdjustedAtPoint:point fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
}

- (void)drawAdjustedInRect:(NSRect)rect
{
    [self drawAdjustedInRect:rect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
}

- (void)drawAdjustedAtPoint:(NSPoint)aPoint fromRect:(NSRect)srcRect operation:(NSCompositingOperation)op fraction:(CGFloat)delta
{
    NSSize size = [self size];
    [self drawAdjustedInRect:NSMakeRect(aPoint.x, aPoint.y, size.width, size.height) fromRect:srcRect operation:op fraction:delta];
}

@end
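With the category in place, drawing inside a flipped view is then just, for example:

// Inside a flipped NSView's drawRect:, the image comes out right side up.
[image drawAdjustedInRect:self.bounds];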
I believe the answer is: yes, the different pages are separate representations, and the correct way to deal with them is to turn each one into its own image with:
NSImage *im = [[NSImage alloc] initWithData:[representation TIFFRepresentation]];
[im setFlipped:YES];
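A minimal sketch of that approach for a whole document (the file path is a placeholder): NSBitmapImageRep hands back one representation per page, and each can be wrapped in its own NSImage:

// Load all pages of a multi-page TIFF; each page arrives as a
// separate NSBitmapImageRep.
NSData *tiffData = [NSData dataWithContentsOfFile:@"/path/to/document.tiff"];
NSArray *reps = [NSBitmapImageRep imageRepsWithData:tiffData];
NSMutableArray *pages = [NSMutableArray array];
for (NSBitmapImageRep *rep in reps) {
    NSImage *page = [[NSImage alloc] initWithSize:rep.size];
    [page addRepresentation:rep];
    [pages addObject:page];
}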