Generate scaled image from off-screen NSView - objective-c

I have a sequence of off-screen NSViews in a Cocoa application, which are used to compose a PDF for printing. The views are not in an NSWindow, or visible in any way.
I'd like to be able to generate thumbnail images of each view, exactly as the PDF would look, but scaled down to fit a certain pixel size (constrained to a width or height). This needs to be as fast as possible, so I'd like to avoid rendering to PDF, then converting to raster and scaling; I'd like to go direct to the raster.
At the moment I'm doing:
NSBitmapImageRep *bitmapImageRep = [pageView bitmapImageRepForCachingDisplayInRect:pageView.bounds];
[pageView cacheDisplayInRect:pageView.bounds toBitmapImageRep:bitmapImageRep];
NSImage *image = [[NSImage alloc] initWithSize:bitmapImageRep.size];
[image addRepresentation:bitmapImageRep];
This approach is working well, but I can't work out how to apply a scaling to the NSView before rendering the bitmapImageRep. I want to avoid using scaleUnitSquareToSize:, because as I understand it, that only changes the bounds, not the frame, of the NSView.
Any suggestions on the best way of doing this?

This is what I ended up doing, and it works perfectly. We draw directly into an NSBitmapImageRep, but scale the context explicitly with CGContextScaleCTM beforehand. graphicsContext.graphicsPort gives you the handle on the CGContextRef underlying the NSGraphicsContext.
NSView *pageView = [self viewForPageIndex:pageIndex];

// Derive the scale factor and target height from the requested width.
float scale = width / pageView.bounds.size.width;
float height = scale * pageView.bounds.size.height;
NSRect targetRect = NSMakeRect(0.0, 0.0, width, height);

NSBitmapImageRep *bitmapRep =
    [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                            pixelsWide:targetRect.size.width
                                            pixelsHigh:targetRect.size.height
                                         bitsPerSample:8
                                       samplesPerPixel:4
                                              hasAlpha:YES
                                              isPlanar:NO
                                        colorSpaceName:NSCalibratedRGBColorSpace
                                          bitmapFormat:0
                                           bytesPerRow:(4 * targetRect.size.width)
                                          bitsPerPixel:32];

[NSGraphicsContext saveGraphicsState];
NSGraphicsContext *graphicsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmapRep];
[NSGraphicsContext setCurrentContext:graphicsContext];

// Scale the CTM so the full-size view renders into the smaller bitmap.
CGContextScaleCTM(graphicsContext.graphicsPort, scale, scale);
[pageView displayRectIgnoringOpacity:pageView.bounds inContext:graphicsContext];
[NSGraphicsContext restoreGraphicsState];

NSImage *image = [[NSImage alloc] initWithSize:bitmapRep.size];
[image addRepresentation:bitmapRep];
return image;

How about using scaleUnitSquareToSize: and then passing a smaller rect in to bitmapImageRepForCachingDisplayInRect: and cacheDisplayInRect:toBitmapImageRep:?
So, if you downscale by a factor of 2, you'd pass a rect with half the width and height of the bounds.
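A hypothetical, untested sketch of that suggestion (note that scaleUnitSquareToSize: is cumulative, so the inverse scale is applied afterwards to restore the view):

// Untested sketch: scale the unit square, cache the now-smaller content
// rect, then apply the inverse scale so the view still renders at full
// size for the PDF.
[pageView scaleUnitSquareToSize:NSMakeSize(0.5, 0.5)];

// After scaling, the bounds report twice their former size, so half the
// new bounds covers the original content.
NSRect halfRect = NSMakeRect(0.0, 0.0,
                             pageView.bounds.size.width / 2.0,
                             pageView.bounds.size.height / 2.0);
NSBitmapImageRep *rep = [pageView bitmapImageRepForCachingDisplayInRect:halfRect];
[pageView cacheDisplayInRect:halfRect toBitmapImageRep:rep];

[pageView scaleUnitSquareToSize:NSMakeSize(2.0, 2.0)];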

Related

NSBitmapImageRep -initWithFocusedViewRect is doubling size of image

I have the following Objective-C function, meant to resize an NSBitmapImageRep to a designated size.
Currently, when working with an image of size 2048x1536 and trying to resize it to 300x225, this function keeps returning an NSBitmapImageRep of size 600x450.
- (NSBitmapImageRep *)resizeImageRep:(NSBitmapImageRep *)anOriginalImageRep toTargetSize:(NSSize)aTargetSize
{
    NSImage *theTempImageRep = [[[NSImage alloc] initWithSize:aTargetSize] autorelease];
    [theTempImageRep lockFocus];
    [NSGraphicsContext currentContext].imageInterpolation = NSImageInterpolationHigh;
    NSRect theTargetRect = NSMakeRect(0.0, 0.0, aTargetSize.width, aTargetSize.height);
    [anOriginalImageRep drawInRect:theTargetRect];
    NSBitmapImageRep *theResizedImageRep = [[[NSBitmapImageRep alloc] initWithFocusedViewRect:theTargetRect] autorelease];
    [theTempImageRep unlockFocus];
    return theResizedImageRep;
}
Debugging it, I'm finding that theTargetRect is of the proper size, but the call to initWithFocusedViewRect: returns a bitmap of 600x450 pixels (wide x high).
I'm at a complete loss as to why this may be happening. Does anyone have any insight?
Your technique won't produce a resized image. For one thing, the method initWithFocusedViewRect: reads bitmap data from the focused window and is used to create screen grabs.
You should create a new graphics context with a new NSBitmapImageRep or NSImage of the desired size, and then draw your image into that context.
Something like this.
// theTempImageRep must be an NSBitmapImageRep of the target size here.
NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep:theTempImageRep];
if (context)
{
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:context];
    // Draw the original into the target rect; the context scales it.
    [anOriginalImageRep drawInRect:theTargetRect];
    [NSGraphicsContext restoreGraphicsState];
}
// Now your temp image rep should have the resized original.
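Putting it together, a corrected version of the original method might look like the sketch below (it keeps the question's manual retain/release style; sizing the rep with explicit pixelsWide:/pixelsHigh: also avoids the doubling you would get from a 2x backing store):

- (NSBitmapImageRep *)resizeImageRep:(NSBitmapImageRep *)anOriginalImageRep toTargetSize:(NSSize)aTargetSize
{
    // Create the destination rep at an explicit pixel size so the result
    // is exactly aTargetSize, regardless of the screen's backing scale.
    NSBitmapImageRep *theResizedImageRep =
        [[[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                                 pixelsWide:aTargetSize.width
                                                 pixelsHigh:aTargetSize.height
                                              bitsPerSample:8
                                            samplesPerPixel:4
                                                   hasAlpha:YES
                                                   isPlanar:NO
                                             colorSpaceName:NSCalibratedRGBColorSpace
                                               bitmapFormat:0
                                                bytesPerRow:0   // 0 = let AppKit compute
                                               bitsPerPixel:0] autorelease];

    NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep:theResizedImageRep];
    if (context)
    {
        [NSGraphicsContext saveGraphicsState];
        [NSGraphicsContext setCurrentContext:context];
        context.imageInterpolation = NSImageInterpolationHigh;
        [anOriginalImageRep drawInRect:NSMakeRect(0.0, 0.0, aTargetSize.width, aTargetSize.height)];
        [NSGraphicsContext restoreGraphicsState];
    }
    return theResizedImageRep;
}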

Xcode making a pdf, trying to round corners

I am making a PDF in an iPad app. I can make the PDF, but I want to add a picture with a rounded-corner border. For example, to achieve the effect I want on the border of a simple view item, I use the following code.
self.SaveButtonProp.layer.cornerRadius = 8.0f;
self.SaveButtonProp.layer.masksToBounds = YES;
self.SaveButtonProp.layer.borderColor = [[UIColor blackColor] CGColor];
self.SaveButtonProp.layer.borderWidth = 1.0f;
For the PDF, I am using the following method to add the picture with the border:
CGContextRef currentContext = UIGraphicsGetCurrentContext();
UIImage *demoImage = [UIImage imageWithData:Image];
UIColor *borderColor = [UIColor blackColor];
CGRect rectFrame = CGRectMake(20, 125, 200, 200);
[demoImage drawInRect:rectFrame];
CGContextSetStrokeColorWithColor(currentContext, borderColor.CGColor);
CGContextSetLineWidth(currentContext, 2);
CGContextStrokeRect(currentContext, rectFrame);
How do I round the corners?
Thanks
While drawing you can set clipping masks. For example, it's relatively easy to create a Bezier path in the shape of a rounded rectangle and apply it as a clipping mask to your graphics context. Everything drawn subsequently will be clipped.
If you want to remove the clipping mask later (for example, because you have an image with rounded corners but follow it with other elements), you'll have to save the graphics state first, then apply your clipping mask, and restore the graphics state when you're done with your rounded corners.
You can see actual code that comes pretty close to what I think you need here:
UIImage with rounded corners
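As a minimal sketch of that approach, adapted to the question's PDF drawing code (the 8-point corner radius is an assumption):

CGContextRef currentContext = UIGraphicsGetCurrentContext();
UIImage *demoImage = [UIImage imageWithData:Image];
CGRect rectFrame = CGRectMake(20, 125, 200, 200);

// Save the state so the clipping mask can be removed afterwards.
CGContextSaveGState(currentContext);

// Clip to a rounded rectangle, draw the image, then stroke the border.
UIBezierPath *roundedPath = [UIBezierPath bezierPathWithRoundedRect:rectFrame cornerRadius:8.0f];
[roundedPath addClip];
[demoImage drawInRect:rectFrame];

[[UIColor blackColor] setStroke];
roundedPath.lineWidth = 2.0f;
[roundedPath stroke];

// Restore the state so later PDF drawing is not clipped.
CGContextRestoreGState(currentContext);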
You can use a method to render any UIView/UIImageView into PDF NSData:
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
NSData *data = [self makePDFfromView:imageView];
Method:
- (NSData *)makePDFfromView:(UIView *)view
{
    NSMutableData *pdfData = [NSMutableData data];
    UIGraphicsBeginPDFContextToData(pdfData, view.bounds, nil);
    UIGraphicsBeginPDFPage();
    CGContextRef pdfContext = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:pdfContext];
    UIGraphicsEndPDFContext();
    return pdfData;
}
Maybe you can change or use this code to help you with your problem.
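For instance, combining it with the layer properties from the question might look like this hypothetical usage (renderInContext: should honor cornerRadius and masksToBounds):

// Hypothetical usage: round the image view's corners, then capture it.
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
imageView.layer.cornerRadius = 8.0f;
imageView.layer.masksToBounds = YES;
imageView.layer.borderColor = [[UIColor blackColor] CGColor];
imageView.layer.borderWidth = 1.0f;

NSData *pdfData = [self makePDFfromView:imageView];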

CIFilter gaussianBlur and boxBlur are shrinking the image - how to avoid the resizing?

I am taking a snapshot of the contents of an NSView, applying a CIFilter, and placing the result back into the view. If the CIFilter is a form of blur, such as CIBoxBlur or CIGaussianBlur, the filtered result is slightly smaller than the original. As I am doing this iteratively, the result becomes increasingly small, which I want to avoid.
The issue is alluded to here, albeit in a slightly different context (Quartz Composer). Apple's FunHouse demo app applies a Gaussian blur without the image shrinking, but I haven't yet worked out how it does this (it seems to be using OpenGL, which I am not familiar with).
Here is the relevant part of the code (inside an NSView subclass)
NSImage *background = [[NSImage alloc] initWithData:[self dataWithPDFInsideRect:[self bounds]]];
CIContext *context = [[NSGraphicsContext currentContext] CIContext];
CIImage *ciImage = [background ciImage];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"
                              keysAndValues:kCIInputImageKey, ciImage,
                                            @"inputRadius", [NSNumber numberWithFloat:10.0], nil];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result
                                   fromRect:[result extent]];
NSImage *newBackground = [[NSImage alloc] initWithCGImage:cgImage size:background.size];
If I try a color-changing filter such as CISepiaTone, which does not shift pixels around, the shrinking does not occur.
I am wondering if there is a quick fix that doesn't involve diving into OpenGL?
They're actually not shrinking the image; they're expanding it (I think by 7 pixels around all edges), and the default scale-to-fit behavior of the view makes it look like it's been shrunk.
Crop your CIImage with:
CIImage *cropped=[output imageByCroppingToRect:CGRectMake(0, 0, view.bounds.size.width*scale, view.bounds.size.height*scale)];
where view is the original bounds of your NSView that you drew into and scale is your [[UIScreen mainScreen] scale].
You probably want to clamp your image before using the blur:
- (CIImage *)imageByClampingToExtent {
    CIFilter *clamp = [CIFilter filterWithName:@"CIAffineClamp"];
    [clamp setValue:[NSAffineTransform transform] forKey:@"inputTransform"];
    [clamp setValue:self forKey:@"inputImage"];
    return [clamp valueForKey:@"outputImage"];
}
Then blur, and then crop to the original extent. You'll get non-transparent edges this way.
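For example, a sketch of the full clamp-blur-crop pipeline, assuming the category method above and a hypothetical originalCIImage:

// Clamp so the blur samples edge pixels instead of transparency,
// then blur, then crop back to the original extent.
CIImage *clamped = [originalCIImage imageByClampingToExtent];

CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:clamped forKey:kCIInputImageKey];
[blur setValue:[NSNumber numberWithFloat:10.0] forKey:@"inputRadius"];

CIImage *blurred = [blur valueForKey:kCIOutputImageKey];
CIImage *result = [blurred imageByCroppingToRect:[originalCIImage extent]];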
@BBC_Z's solution is correct.
Although I find it more elegant to crop according to the image rather than the view.
You can also cut away the useless blurred edges:
// Crop transparent edges from blur
resultImage = [resultImage imageByCroppingToRect:(CGRect){
    .origin.x = blurRadius,
    .origin.y = blurRadius,
    .size.width = originalCIImage.extent.size.width - blurRadius * 2,
    .size.height = originalCIImage.extent.size.height - blurRadius * 2
}];

How to resize a jpg in a UIImageView?

I loaded a jpg into a UIImageView. The image is oversized for the iPhone screen. How can I resize it to a specific CGRect frame?
UIImageView *uivSplash = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"iPhone-Splash.jpg"]];
[self.view addSubview:uivSplash];
A UIImageView is just a UIView, so you can change its frame property.
uivSplash.frame = CGRectMake(0, 0, width, height);
You'll want a method like the following:
CGFloat newWidth = whateverYourDesiredWidth;   // e.g. someView.frame.size.width
CGFloat newHeight = whateverYourDesiredHeight; // e.g. someView.frame.size.height
CGSize newSize = CGSizeMake(newWidth, newHeight);
UIGraphicsBeginImageContext(newSize);
[yourLargeImage drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
UIImage * newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
So this is getting your desired width and height (maybe the screen size, maybe a hard-coded size, maybe a size based on a UIView) and re-drawing the image in a context of that size.
~Good Luck
EDIT: it occurs to me I may have misunderstood your desire, so I'll also point out (as others have said) that UIImageView has a contentMode property that lets you fit the image to size, scale to fill, preserve the aspect ratio, etc.
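For instance (a minimal illustration; the frame values are placeholders):

// Let the UIImageView scale the image for display instead of redrawing it.
uivSplash.frame = CGRectMake(0, 0, 320, 240);
uivSplash.contentMode = UIViewContentModeScaleAspectFit;
uivSplash.clipsToBounds = YES;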

Using NSImage operation to make a crop effect

I have an NSView that displays an image, and I'd like to make this view act as a cropping-image effect. I make three rectangles (imageRect, secRect and intersectRect): imageRect is the rect that shows the image, secRect simply darkens the whole imageRect, and intersectRect is like an observation rect. What I want to do is make a "hole" in secRect to see directly into imageRect (without the darkening). Here's my drawRect: method:
- (void)drawRect:(NSRect)rect {
    // Drawing code here.
    NSImage *image = [NSImage imageNamed:@"Lonely_Tree_by_sican.jpg"];
    NSRect imageRect = [self bounds];
    [image compositeToPoint:NSZeroPoint operation:NSCompositeSourceOver];
    if (NSIntersectsRect([myDrawRect currentRect], [self bounds])) {
        // Get the intersection rect.
        intersectionRect = NSIntersectionRect([myDrawRect currentRect], imageRect);
        // Draw the imageRect.
        [image compositeToPoint:imageRect.origin operation:NSCompositeSourceOver];
        // Draw the secRect and fill it with black at alpha 0.5.
        NSRect secRect = NSMakeRect(imageRect.origin.x, imageRect.origin.y, imageRect.size.width, imageRect.size.height);
        [[NSColor colorWithCalibratedRed:0.0 green:0.0 blue:0.0 alpha:0.5] set];
        [NSBezierPath fillRect:secRect];
        // Have no idea for the intersectRect:
        /*[image compositeToPoint:intersectionRect.origin
                       fromRect:secLayer
                      operation:NSCompositeXOR
                       fraction:1.0];*/
    }
    // Draw the rectangle.
    [myDrawRect beginDrawing];
}
I have my own class (myDrawRect) to draw a rectangle based on mouse clicks in [self bounds], so just ignore the beginDrawing command.
Any help would be appreciated. Thanks, Hebbian.
You're doing far more work than you need to, and you're using deprecated methods (compositeToPoint:operation: and compositeToPoint:fromRect:operation:fraction:) to do it.
All you need to do is send the image a single drawInRect:fromRect:operation:fraction: message. The fromRect: parameter is the rectangle you want to crop to; if you don't want to scale the cropped section, the destination rect (the drawInRect: parameter) should be the same size.
About the only extra work you may need to do is when the image may be bigger than the view and you want to draw only the section within the view's bounds: in that case, inset the crop rectangle by the difference in size between the crop rectangle and the view bounds.
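A minimal sketch of that single-message approach, using names from the question (cropRect stands in for whatever region you want to reveal):

// Draw only cropRect from the image, unscaled: the destination rect has
// the same size as the source rect, so no scaling occurs.
NSRect cropRect = [myDrawRect currentRect];
[image drawInRect:cropRect
         fromRect:cropRect
        operation:NSCompositeSourceOver
         fraction:1.0];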