Rendering PDF in Apple TV tvOS

I am working on an addition to my tvOS app that would allow viewing of PDFs stored in the app. However, without UIWebView, I'm at a loss on how to do this. I've asked this question in other places and get greeted with a link to a wordy and unhelpful document from Apple about the APIs that can be used; even here it has been referenced (CGPDFPage), but with no real guide on how to implement this. Has anyone successfully done this on tvOS, and if so, would you help me get started in this process?

Below is some code that I wrote and tested on tvOS. Note that this is in Objective-C.
I've created two functions to do the job, and one helper function to display the PDF images in a UIScrollView. The first one opens the PDF document from a URL; a web URL is used here, but a local file would also work in this sample.
There is also a helper function to open a document from a local file.
The second function renders the PDF document to a context. I chose to display the context by creating an image from it. There are other ways of handling the context too.
Opening the document is fairly straightforward, so there are no comments in the code for that. Rendering the document is slightly more involved, so there are comments explaining that function.
The complete application is below.
- (CGPDFDocumentRef)openPDFLocal:(NSString *)pdfURL {
    NSURL *NSUrl = [NSURL fileURLWithPath:pdfURL];
    return [self openPDF:NSUrl];
}

- (CGPDFDocumentRef)openPDFURL:(NSString *)pdfURL {
    NSURL *NSUrl = [NSURL URLWithString:pdfURL];
    return [self openPDF:NSUrl];
}

- (CGPDFDocumentRef)openPDF:(NSURL *)NSUrl {
    CFURLRef url = (CFURLRef)CFBridgingRetain(NSUrl);
    CGPDFDocumentRef myDocument = CGPDFDocumentCreateWithURL(url);
    if (myDocument == NULL) {
        NSLog(@"can't open %@", NSUrl);
        CFRelease(url);
        return NULL;
    }
    CFRelease(url);
    if (CGPDFDocumentGetNumberOfPages(myDocument) == 0) {
        CGPDFDocumentRelease(myDocument);
        return NULL;
    }
    return myDocument;
}
- (void)drawDocument:(CGPDFDocumentRef)pdfDocument
{
    // Get the total number of pages for the whole PDF document
    int totalPages = (int)CGPDFDocumentGetNumberOfPages(pdfDocument);
    NSMutableArray *pageImages = [[NSMutableArray alloc] init];
    // Iterate through the pages and add each page image to an array
    for (int i = 1; i <= totalPages; i++) {
        // Get the current page of the PDF document
        CGPDFPageRef page = CGPDFDocumentGetPage(pdfDocument, i);
        CGRect pageRect = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);
        // Begin the image context with the page size
        // and get the graphics context that we will draw into
        UIGraphicsBeginImageContext(pageRect.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        // Flip the coordinate system so the page displays correctly
        CGContextTranslateCTM(context, 0.0, pageRect.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextConcatCTM(context, CGPDFPageGetDrawingTransform(page, kCGPDFMediaBox, pageRect, 0, true));
        // Draw the page into the graphics context
        CGContextDrawPDFPage(context, page);
        // Get an image of the graphics context
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        [pageImages addObject:image];
    }
    // Add the page images to the scroll view
    [self addImagesToScrollView:pageImages];
}
- (void)addImagesToScrollView:(NSMutableArray *)imageArray {
    CGFloat height = 0;
    for (UIImage *image in imageArray) {
        UIImageView *imgView = [[UIImageView alloc] initWithImage:image];
        imgView.frame = CGRectMake(0, height, imgView.frame.size.width, imgView.frame.size.height);
        [_scrollView addSubview:imgView];
        height += imgView.frame.size.height;
    }
    // Tell the scroll view how tall the stacked pages are so it can scroll
    _scrollView.contentSize = CGSizeMake(_scrollView.frame.size.width, height);
}
And to tie it all together, you can do this:
CGPDFDocumentRef pdfDocument = [self openPDFURL:@"http://www.guardiansuk.com/uploads/accreditation/10testing.pdf"];
[self drawDocument:pdfDocument];
Note that I'm using a random PDF that was available for free on the web. I ran into some problems with https URLs, but I'm sure this can be resolved, and it's not actually related to the PDF opening question.

The tvOS documentation contains a section on creating, viewing, and transforming PDF documents, so I think it contains the functionality you need.
There’s lots of example code on that page, but here’s some code I use on iOS for the same purpose. It should work on tvOS, but I don’t have a way to test it:
func imageForPDF(URL: NSURL, pageNumber: Int, imageWidth: CGFloat) -> UIImage {
    let document = CGPDFDocumentCreateWithURL(URL)
    let page = CGPDFDocumentGetPage(document, pageNumber)

    var pageRect = CGPDFPageGetBoxRect(page, .MediaBox)
    let scale = imageWidth / pageRect.size.width
    pageRect.size = CGSizeMake(pageRect.size.width * scale, pageRect.size.height * scale)
    pageRect.origin = CGPointZero

    UIGraphicsBeginImageContext(pageRect.size)
    let ctx = UIGraphicsGetCurrentContext()
    CGContextSetRGBFillColor(ctx, 1.0, 1.0, 1.0, 1.0) // White background
    CGContextFillRect(ctx, pageRect)

    CGContextSaveGState(ctx)
    // Rotate the PDF so that it’s the right way around
    CGContextTranslateCTM(ctx, 0.0, pageRect.size.height)
    CGContextScaleCTM(ctx, 1.0, -1.0)
    CGContextConcatCTM(ctx, CGPDFPageGetDrawingTransform(page, .MediaBox, pageRect, 0, true))
    CGContextDrawPDFPage(ctx, page)
    CGContextRestoreGState(ctx)

    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}

This is @jbg's answer in Swift 5:
func imageForPDF(URL: NSURL, pageNumber: Int, imageWidth: CGFloat) -> UIImage? {
    guard let document = CGPDFDocument(URL) else { return nil }
    guard let page = document.page(at: pageNumber) else { return nil }

    var pageRect = page.getBoxRect(.mediaBox)
    let scale = imageWidth / pageRect.size.width
    pageRect.size = CGSize(width: pageRect.size.width * scale,
                           height: pageRect.size.height * scale)
    pageRect.origin = CGPoint.zero

    UIGraphicsBeginImageContext(pageRect.size)
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    context.setFillColor(red: 1.0, green: 1.0, blue: 1.0, alpha: 1.0)
    context.fill(pageRect)

    context.saveGState()
    // Rotate the PDF so that it’s the right way around
    context.translateBy(x: 0.0, y: pageRect.size.height)
    context.scaleBy(x: 1.0, y: -1.0)
    context.concatenate(page.getDrawingTransform(.mediaBox, rect: pageRect, rotate: 0, preserveAspectRatio: true))
    context.drawPDFPage(page)
    context.restoreGState()

    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
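A quick usage sketch of the function above (hypothetical: "sample.pdf", the width, and the enclosing view controller are illustrative assumptions, not from the original answer):
// Render page 1 of a bundled PDF at 1920 points wide and display it.
if let url = Bundle.main.url(forResource: "sample", withExtension: "pdf"),
   let pageImage = imageForPDF(URL: url as NSURL, pageNumber: 1, imageWidth: 1920) {
    let imageView = UIImageView(image: pageImage)
    view.addSubview(imageView)
}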

Related

Translate detected rectangle from portrait CIImage to landscape CIImage

I am modifying code found here. In the code we are capturing video from the phone camera using AVCaptureSession and using CIDetector to detect a rectangle in the image feed. The feed has an image which is 640x842 (iPhone 5 in portrait). We then draw an overlay on the image so the user can see the detected rectangle (actually it's a trapezoid most of the time).
When the user presses a button on the UI, we capture an image from the video and re-run the rectangle detection on this larger image (3264x2448), which as you can see is landscape. We then do a perspective transform on the detected rectangle and crop the image.
This is working pretty well, but the issue I have is that in roughly 1 out of 5 captures the rectangle detected on the larger image is different from the one detected (and presented to the user) on the smaller image. Even though I only capture when I detect the phone is (relatively) still, the final image then does not represent the rectangle the user expected.
To resolve this, my idea is to take the coordinates of the originally captured rectangle and translate them onto the captured still image. This is where I'm struggling.
I tried this with the detected rectangle:
CGFloat radians = -90 * (M_PI/180);
CGAffineTransform rotation = CGAffineTransformMakeRotation(radians);
CGRect rect = CGRectMake(detectedRect.bounds.origin.x, detectedRect.bounds.origin.y, detectedRect.bounds.size.width, detectedRect.bounds.size.height);
CGRect rotatedRect = CGRectApplyAffineTransform(rect, rotation);
So given a detected rect:
TopLeft: 88.213425, 632.31329
TopRight: 545.59302, 632.15546
BottomRight: 575.57819, 369.22321
BottomLeft: 49.973862, 369.40466
I now have this rotated rect:
origin = (x = 369.223206, y = -575.578186)
size = (width = 263.090088, height = 525.604309)
How do I translate the rotated rectangle coordinates in the smaller portrait image to coordinates in the 3264x2448 image?
Edit
Duh... re-reading my own approach, I realised that creating a rectangle out of a trapezoid will not solve my problem!
Supporting code to detect the rectangle etc...
// In this method we detect a rect from the video feed and overlay
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    // image is 640x852 on iPhone 5
    NSArray *rects = [[CIDetector detectorOfType:CIDetectorTypeRectangle context:nil options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}] featuresInImage:image];
    CIRectangleFeature *detectedRect = rects[0];
    // draw overlay on image code....
}
This is a summarized version of how the still image is obtained:
// code block to handle output from AVCaptureStillImageOutput
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
    CIImage *enhancedImage = [[CIImage alloc] initWithData:imageData options:@{kCIImageColorSpace : [NSNull null]}];
    imageData = nil;
    CIRectangleFeature *rectangleFeature = [self getDetectedRect:[[self highAccuracyRectangleDetector] featuresInImage:enhancedImage]];
    if (rectangleFeature) {
        enhancedImage = [self correctPerspectiveForImage:enhancedImage withTopLeft:rectangleFeature.topLeft andTopRight:rectangleFeature.topRight andBottomRight:rectangleFeature.bottomRight andBottomLeft:rectangleFeature.bottomLeft];
    }
}];
Thank you.
I had the same issue while doing this kind of work. I resolved it with the code below, in Swift. Take a look and see if it helps you.
if let videoConnection = stillImageOutput.connection(withMediaType: AVMediaTypeVideo) {
    stillImageOutput.captureStillImageAsynchronously(from: videoConnection) {
        (imageDataSampleBuffer, error) -> Void in
        let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
        var img = UIImage(data: imageData!)!
        // Normalized (0-1) rect of what the preview layer actually shows
        let outputRect = self.previewLayer?.metadataOutputRectOfInterest(for: (self.previewLayer?.bounds)!)
        let takenCGImage = img.cgImage
        let width = (takenCGImage?.width)!
        let height = (takenCGImage?.height)!
        // Scale the normalized rect up to pixel coordinates and crop to it
        let cropRect = CGRect(x: (outputRect?.origin.x)! * CGFloat(width),
                              y: (outputRect?.origin.y)! * CGFloat(height),
                              width: (outputRect?.size.width)! * CGFloat(width),
                              height: (outputRect?.size.height)! * CGFloat(height))
        let cropCGImage = takenCGImage!.cropping(to: cropRect)
        img = UIImage(cgImage: cropCGImage!, scale: 1, orientation: img.imageOrientation)
        let cropViewController = TOCropViewController(image: self.cropToBounds(image: img))
        cropViewController.delegate = self
        self.navigationController?.pushViewController(cropViewController, animated: true)
    }
}
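As for the coordinate translation the original question asks about, here is a rough sketch of the idea in Swift. This is an untested illustration, and the rotation direction is an assumption; if your results come out mirrored, invert the other axis instead (i.e. use (small.height - p.y, p.x) in place of (p.y, small.width - p.x)):
// Hedged sketch: map a point detected in the small portrait image
// (e.g. 640x852) into the large landscape still (e.g. 3264x2448),
// assuming the still is the preview rotated by 90 degrees.
func translate(_ p: CGPoint, fromPortrait small: CGSize, toLandscape large: CGSize) -> CGPoint {
    let scaleX = large.width / small.height   // landscape width spans the portrait height
    let scaleY = large.height / small.width   // landscape height spans the portrait width
    // Swap the axes (90-degree rotation), inverting one of them, then scale up.
    return CGPoint(x: p.y * scaleX, y: (small.width - p.x) * scaleY)
}
Apply this to each corner of the detected CIRectangleFeature rather than to a bounding CGRect, since (as the edit above notes) the feature is a trapezoid, not a rectangle.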

How Do I Blur a Scene in SpriteKit?

How would I add a gaussian blur to all nodes (there's no fixed number of nodes) in an SKScene in SpriteKit? A label will be added on top of the scene later, this will be my pause menu.
Almost anything would help!
Something like this is what I'm going for:
What you're looking for is an SKEffectNode. It applies a Core Image filter to itself (and thus all subnodes). Just make it the root node of your scene, give it one of Core Image's blur filters, and you're set.
For example, I set up an SKScene with an SKEffectNode as its first child node and a property, root, that holds a weak reference to it:
- (void)createLayers {
    SKEffectNode *node = [SKEffectNode node];
    [node setShouldEnableEffects:NO];
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:@"inputRadius", @1.0f, nil];
    [node setFilter:blur];
    [self setRoot:node];
}
And here's the method I use to (animate!) the blur of my scene:
- (void)blurWithCompletion:(void (^)())handler {
    CGFloat duration = 0.5f;
    [[self root] setShouldRasterize:YES];
    [[self root] setShouldEnableEffects:YES];
    [[self root] runAction:[SKAction customActionWithDuration:duration actionBlock:^(SKNode *node, CGFloat elapsedTime) {
        NSNumber *radius = [NSNumber numberWithFloat:(elapsedTime / duration) * 10.0];
        [[(SKEffectNode *)node filter] setValue:radius forKey:@"inputRadius"];
    }] completion:handler];
}
Note that, like you, I'm using this as a pause screen, so I rasterize the scene. If you want your scene to animate while blurred, you should probably set shouldRasterize: to NO.
And if you're not interested in animating the transition to the blur, you could always just set the filter to an initial radius of 10.0f or so and do a simple setShouldEnableEffects:YES when you want to switch it on.
See also: SKEffectNode class reference
UPDATE:
See Markus's comment below. He points out that SKScene is, in fact, a subclass of SKEffectNode, so you really ought to be able to call all of this on the scene itself rather than arbitrarily inserting an effect node in your node tree.
To add to this, using @Bendegúz's answer and code from http://www.bytearray.org/?p=5360
I was able to get this to work in my current game project, which is written in iOS 8 Swift. It's done a bit differently by returning an SKSpriteNode instead of a UIImage. Also note that my unwrapped currentScene.view! call is to a weak GameScene reference, but this should work with self.view.frame based on where you are calling these methods. My pause screen is called in a separate HUD class, hence why this is the case.
I would imagine this could be done more elegantly, maybe more like @jemmons's answer. I just wanted to possibly help out anyone else trying to do this in SpriteKit projects written in all or some Swift code.
func getBluredScreenshot() -> SKSpriteNode {
    // create the graphics context
    UIGraphicsBeginImageContextWithOptions(CGSize(width: currentScene.view!.frame.size.width, height: currentScene.view!.frame.size.height), true, 1)
    currentScene.view!.drawViewHierarchyInRect(currentScene.view!.frame, afterScreenUpdates: true)
    // retrieve graphics context
    let context = UIGraphicsGetCurrentContext()
    // query image from it
    let image = UIGraphicsGetImageFromCurrentImageContext()
    // end the screenshot context before doing the Core Image work
    UIGraphicsEndImageContext()
    // create Core Image context
    let ciContext = CIContext(options: nil)
    // create a CIImage; think of a CIImage as image data for processing, nothing is displayed at this point
    let coreImage = CIImage(image: image)
    // pick the filter we want
    let filter = CIFilter(name: "CIGaussianBlur")
    // pass our image as input
    filter.setValue(coreImage, forKey: kCIInputImageKey)
    // edit the amount of blur
    filter.setValue(3, forKey: kCIInputRadiusKey)
    // retrieve the processed image
    let filteredImageData = filter.valueForKey(kCIOutputImageKey) as CIImage
    // return a Quartz image from the Core Image context
    let filteredImageRef = ciContext.createCGImage(filteredImageData, fromRect: filteredImageData.extent())
    // final UIImage
    let filteredImage = UIImage(CGImage: filteredImageRef)
    // create a texture, pass the UIImage
    let texture = SKTexture(image: filteredImage!)
    // wrap it inside a sprite node
    let sprite = SKSpriteNode(texture: texture)
    // position the sprite in the center
    sprite.position = CGPointMake(CGRectGetMidX(currentScene.frame), CGRectGetMidY(currentScene.frame))
    let scale: CGFloat = UIScreen.mainScreen().scale
    sprite.size.width *= scale
    sprite.size.height *= scale
    return sprite
}
func loadPauseBGScreen() {
    let duration = 1.0
    let pauseBG: SKSpriteNode = self.getBluredScreenshot()
    //pauseBG.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame))
    pauseBG.alpha = 0
    pauseBG.zPosition = self.zPosition + 1
    pauseBG.runAction(SKAction.fadeAlphaTo(1, duration: duration))
    self.addChild(pauseBG)
}
This is my solution for the pause screen.
It will take a screenshot, blur it, and then show it with an animation.
I think you should do it this way if you don't want to waste too much performance on a live blur.
- (void)pause {
    SKSpriteNode *pauseBG = [SKSpriteNode spriteNodeWithTexture:[SKTexture textureWithImage:[self getBluredScreenshot]]];
    pauseBG.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
    pauseBG.alpha = 0;
    pauseBG.zPosition = 2;
    [pauseBG runAction:[SKAction fadeAlphaTo:1 duration:duration / 2]];
    [self addChild:pauseBG];
}
And this is the helper method:
- (UIImage *)getBluredScreenshot {
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 1);
    [self.view drawViewHierarchyInRect:self.view.frame afterScreenUpdates:YES];
    UIImage *ss = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [gaussianBlurFilter setDefaults];
    [gaussianBlurFilter setValue:[CIImage imageWithCGImage:[ss CGImage]] forKey:kCIInputImageKey];
    [gaussianBlurFilter setValue:@10 forKey:kCIInputRadiusKey];

    CIImage *outputImage = [gaussianBlurFilter outputImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGRect rect = [outputImage extent];
    // The blur expands the extent; crop back to the original screenshot size
    rect.origin.x += (rect.size.width - ss.size.width) / 2;
    rect.origin.y += (rect.size.height - ss.size.height) / 2;
    rect.size = ss.size;
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:rect];
    UIImage *image = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
    return image;
}
Swift 4:
Add this to your game scene if you want to blur everything in the scene:
let blur = CIFilter(name: "CIGaussianBlur", withInputParameters: ["inputRadius": 10.0])
self.filter = blur
self.shouldRasterize = true
self.shouldEnableEffects = false
Set self.shouldEnableEffects = true when you want to use it.
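A small sketch of how that toggle might look inside the scene (a hypothetical helper, assuming the filter was configured as above):
func setBlurred(_ blurred: Bool) {
    shouldEnableEffects = blurred  // the CIGaussianBlur configured above only renders while this is true
    isPaused = blurred             // optionally freeze the action while the pause menu is up
}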
This is another example of getting this done in Swift 2 without the layers:
func blurWithCompletion() {
    let duration: CGFloat = 0.5
    let filter: CIFilter = CIFilter(name: "CIGaussianBlur", withInputParameters: ["inputRadius": NSNumber(double: 1.0)])!
    scene!.filter = filter
    scene!.shouldRasterize = true
    scene!.shouldEnableEffects = true
    scene!.runAction(SKAction.customActionWithDuration(0.5, actionBlock: { (node: SKNode, elapsedTime: CGFloat) in
        let radius = (elapsedTime / duration) * 10.0
        (node as? SKEffectNode)!.filter!.setValue(radius, forKey: "inputRadius")
    }))
}
Swift 3 Update: This is @Chuck Gaffney's answer updated for Swift 3. I know this question is tagged objective-c, but this page ranked 2nd in Google for "swift spritekit blur". I changed currentScene to self.
func getBluredScreenshot() -> SKSpriteNode {
    // create the graphics context
    UIGraphicsBeginImageContextWithOptions(CGSize(width: self.view!.frame.size.width, height: self.view!.frame.size.height), true, 1)
    self.view!.drawHierarchy(in: self.view!.frame, afterScreenUpdates: true)
    // retrieve graphics context
    _ = UIGraphicsGetCurrentContext()
    // query image from it
    let image = UIGraphicsGetImageFromCurrentImageContext()
    // end the screenshot context before doing the Core Image work
    UIGraphicsEndImageContext()
    // create Core Image context
    let ciContext = CIContext(options: nil)
    // create a CIImage; think of a CIImage as image data for processing, nothing is displayed at this point
    let coreImage = CIImage(image: image!)
    // pick the filter we want
    let filter = CIFilter(name: "CIGaussianBlur")
    // pass our image as input
    filter?.setValue(coreImage, forKey: kCIInputImageKey)
    // edit the amount of blur
    filter?.setValue(3, forKey: kCIInputRadiusKey)
    // retrieve the processed image
    let filteredImageData = filter?.value(forKey: kCIOutputImageKey) as! CIImage
    // return a Quartz image from the Core Image context
    let filteredImageRef = ciContext.createCGImage(filteredImageData, from: filteredImageData.extent)
    // final UIImage
    let filteredImage = UIImage(cgImage: filteredImageRef!)
    // create a texture, pass the UIImage
    let texture = SKTexture(image: filteredImage)
    // wrap it inside a sprite node
    let sprite = SKSpriteNode(texture: texture)
    // position the sprite in the center
    sprite.position = CGPoint(x: self.frame.midX, y: self.frame.midY)
    let scale: CGFloat = UIScreen.main.scale
    sprite.size.width *= scale
    sprite.size.height *= scale
    return sprite
}
func loadPauseBGScreen() {
    let duration = 1.0
    let pauseBG: SKSpriteNode = self.getBluredScreenshot()
    pauseBG.alpha = 0
    pauseBG.zPosition = self.zPosition + 1
    pauseBG.run(SKAction.fadeAlpha(to: 1, duration: duration))
    self.addChild(pauseBG)
}
I was trying to do this same thing now, in May 2020 (Xcode 11 and iOS 13.x), but wasn't able to 'animate' the blur radius. In my case, I start with the scene fully blurred, and then 'unblur' it gradually (set inputRadius to 0).
Somehow, the new input radius value set in the custom action block wasn't reflected in the rendered scene. My code was as follows:
private func unblur() {
    run(SKAction.customAction(withDuration: unblurDuration, actionBlock: { [weak self] (_, elapsed) in
        guard let this = self else { return }
        let ratio = (TimeInterval(elapsed) / this.unblurDuration)
        let radius = this.maxBlurRadius * (1 - ratio) // goes to 0 as ratio goes to 1
        this.filter?.setValue(radius, forKey: kCIInputRadiusKey)
    }))
}
I even tried updating the value manually using SKScene.update(_:) and some variables for time book-keeping, but with the same result.
It occurred to me that perhaps I could force the refresh if I 're-assigned' the blur filter to the .filter property of my SKScene (see comments in ALL CAPS near the end of the code), and it worked.
The full code:
class MyScene: SKScene {
    private let maxBlurRadius: Double = 50
    private let unblurDuration: TimeInterval = 5

    override init(size: CGSize) {
        super.init(size: size)

        let filter = CIFilter(name: "CIGaussianBlur")
        filter?.setValue(maxBlurRadius, forKey: kCIInputRadiusKey)
        self.filter = filter
        self.shouldEnableEffects = true
        self.shouldRasterize = false

        // (...rest of the child nodes, etc...)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func didMove(to view: SKView) {
        super.didMove(to: view)
        self.unblur()
    }

    private func unblur() {
        run(SKAction.customAction(withDuration: unblurDuration, actionBlock: { [weak self] (_, elapsed) in
            guard let this = self else { return }
            let ratio = (TimeInterval(elapsed) / this.unblurDuration)
            let radius = this.maxBlurRadius * (1 - ratio) // goes to 0 as ratio goes to 1
            // OBTAIN THE FILTER
            let filter = this.filter
            // MODIFY ATTRIBUTE
            filter?.setValue(radius, forKey: kCIInputRadiusKey)
            // RE-ASSIGN TO SCENE
            this.filter = filter
        }))
    }
}
I hope this helps someone!

Mac OS X: Drawing into an offscreen NSGraphicsContext using CGContextRef C functions has no effect. Why?

Mac OS X 10.7.4
I am drawing into an offscreen graphics context created via +[NSGraphicsContext graphicsContextWithBitmapImageRep:].
When I draw into this graphics context using the NSBezierPath class, everything works as expected.
However, when I draw into this graphics context using the CGContextRef C functions, I see no results of my drawing. Nothing works.
For reasons I won't get into, I really need to draw using the CGContextRef functions (rather than the Cocoa NSBezierPath class).
My code sample is listed below. I am attempting to draw a simple "X". One stroke using NSBezierPath, one stroke using CGContextRef C functions. The first stroke works, the second does not. What am I doing wrong?
NSRect imgRect = NSMakeRect(0.0, 0.0, 100.0, 100.0);
NSSize imgSize = imgRect.size;

NSBitmapImageRep *offscreenRep = [[[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:imgSize.width
                  pixelsHigh:imgSize.height
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSDeviceRGBColorSpace
                bitmapFormat:NSAlphaFirstBitmapFormat
                 bytesPerRow:0
                bitsPerPixel:0] autorelease];

// set offscreen context
NSGraphicsContext *g = [NSGraphicsContext graphicsContextWithBitmapImageRep:offscreenRep];
[NSGraphicsContext setCurrentContext:g];

NSImage *img = [[[NSImage alloc] initWithSize:imgSize] autorelease];
CGContextRef ctx = [g graphicsPort];

// lock and draw
[img lockFocus];

// draw first stroke with Cocoa. this works!
NSPoint p1 = NSMakePoint(NSMaxX(imgRect), NSMinY(imgRect));
NSPoint p2 = NSMakePoint(NSMinX(imgRect), NSMaxY(imgRect));
[NSBezierPath strokeLineFromPoint:p1 toPoint:p2];

// draw second stroke with Core Graphics. This doesn't work!
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, 0.0, 0.0);
CGContextAddLineToPoint(ctx, imgSize.width, imgSize.height);
CGContextClosePath(ctx);
CGContextStrokePath(ctx);

[img unlockFocus];
You don't specify how you are looking at the results. I assume you are looking at the NSImage img and not the NSBitmapImageRep offscreenRep.
When you call [img lockFocus], you are changing the current NSGraphicsContext to be a context to draw into img. So, the NSBezierPath drawing goes into img and that's what you see. The CG drawing goes into offscreenRep which you aren't looking at.
Instead of locking focus onto an NSImage and drawing into it, create an NSImage and add the offscreenRep as one of its reps.
NSRect imgRect = NSMakeRect(0.0, 0.0, 100.0, 100.0);
NSSize imgSize = imgRect.size;

NSBitmapImageRep *offscreenRep = [[[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:imgSize.width
                  pixelsHigh:imgSize.height
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSDeviceRGBColorSpace
                bitmapFormat:NSAlphaFirstBitmapFormat
                 bytesPerRow:0
                bitsPerPixel:0] autorelease];

// set offscreen context
NSGraphicsContext *g = [NSGraphicsContext graphicsContextWithBitmapImageRep:offscreenRep];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:g];

// draw first stroke with Cocoa
NSPoint p1 = NSMakePoint(NSMaxX(imgRect), NSMinY(imgRect));
NSPoint p2 = NSMakePoint(NSMinX(imgRect), NSMaxY(imgRect));
[NSBezierPath strokeLineFromPoint:p1 toPoint:p2];

// draw second stroke with Core Graphics
CGContextRef ctx = [g graphicsPort];
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, 0.0, 0.0);
CGContextAddLineToPoint(ctx, imgSize.width, imgSize.height);
CGContextClosePath(ctx);
CGContextStrokePath(ctx);

// done drawing, so set the current context back to what it was
[NSGraphicsContext restoreGraphicsState];

// create an NSImage and add the rep to it
NSImage *img = [[[NSImage alloc] initWithSize:imgSize] autorelease];
[img addRepresentation:offscreenRep];

// then go on to save or view the NSImage
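For the saving step, one option is to write the NSImage out as a PNG via its bitmap rep. A sketch in Swift (since later answers here use it); the PNG choice and the function name are assumptions, not part of the answer above:
import AppKit

func savePNG(_ image: NSImage, to url: URL) throws {
    // Round-trip through TIFF data to get a bitmap rep, then encode as PNG.
    guard let tiff = image.tiffRepresentation,
          let rep = NSBitmapImageRep(data: tiff),
          let png = rep.representation(using: .png, properties: [:]) else {
        throw CocoaError(.fileWriteUnknown)
    }
    try png.write(to: url)
}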
The solution by @Robin Stewart worked well for me. I was able to condense it to an NSImage extension.
extension NSImage {
    convenience init(size: CGSize, actions: (CGContext) -> Void) {
        self.init(size: size)
        lockFocusFlipped(false)
        actions(NSGraphicsContext.current!.cgContext)
        unlockFocus()
    }
}
Usage:
let image = NSImage(size: CGSize(width: 100, height: 100), actions: { ctx in
    // Drawing commands here, for example:
    // ctx.setFillColor(.white)
    // ctx.fill(pageRect)
})
I wonder why everyone writes such complicated code for drawing to an image. Unless you care about the exact bitmap representation of an image (and usually you don't!), there is no need to create one. You can just create a blank image and draw directly to it. In that case the system will create an appropriate bitmap representation (or maybe a PDF representation or whatever the system believes to be more suitable for drawing).
The documentation of the init method
- (instancetype)initWithSize:(NSSize)aSize
which has existed since macOS 10.0 and still isn't deprecated, clearly says:
After using this method to initialize an image object, you are expected to provide the image contents before trying to draw the image. You might lock focus on the image and draw to the image or you might explicitly add an image representation that you created.
So here's how I would have written that code:
NSRect imgRect = NSMakeRect(0.0, 0.0, 100.0, 100.0);
NSImage * image = [[NSImage alloc] initWithSize:imgRect.size];
[image lockFocus];
// draw first stroke with Cocoa
NSPoint p1 = NSMakePoint(NSMaxX(imgRect), NSMinY(imgRect));
NSPoint p2 = NSMakePoint(NSMinX(imgRect), NSMaxY(imgRect));
[NSBezierPath strokeLineFromPoint:p1 toPoint:p2];
// draw second stroke with Core Graphics
CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, 0.0, 0.0);
CGContextAddLineToPoint(ctx, imgRect.size.width, imgRect.size.height);
CGContextClosePath(ctx);
CGContextStrokePath(ctx);
[image unlockFocus];
That's all folks.
graphicsPort is actually void *:
@property (readonly) void *graphicsPort
and documented as:
The low-level, platform-specific graphics context represented by the graphic port.
Which may be pretty much anything, but the final note says:
In OS X, this is the Core Graphics context, a CGContextRef object (opaque type).
This property was deprecated in 10.10 in favor of the new property
@property (readonly) CGContextRef CGContext
which is only available in 10.10 and later. If you have to support older systems, it's fine to still use graphicsPort.
Swift 4: I use this code, which replicates the convenient API from UIKit (but runs on macOS):
public class UIGraphicsImageRenderer {
    let size: CGSize

    init(size: CGSize) {
        self.size = size
    }

    func image(actions: (CGContext) -> Void) -> NSImage {
        let image = NSImage(size: size)
        image.lockFocusFlipped(true)
        actions(NSGraphicsContext.current!.cgContext)
        image.unlockFocus()
        return image
    }
}
Usage:
let renderer = UIGraphicsImageRenderer(size: imageSize)
let image = renderer.image { ctx in
    // Drawing commands here
}
Here are 3 ways of drawing the same image (Swift 4).
The method suggested by @Mecki produces an image without blurring artefacts (like blurred curves). But this can be fixed by adjusting CGContext settings (not included in this example).
public struct ImageFactory {

    public static func image(size: CGSize, fillColor: NSColor, rounded: Bool = false) -> NSImage? {
        let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        return drawImage(size: size) { context in
            if rounded {
                let radius = min(size.height, size.width)
                let path = NSBezierPath(roundedRect: rect, xRadius: 0.5 * radius, yRadius: 0.5 * radius).cgPath
                context.addPath(path)
                context.clip()
            }
            context.setFillColor(fillColor.cgColor)
            context.fill(rect)
        }
    }
}

extension ImageFactory {

    private static func drawImage(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
        return drawImageInLockedImageContext(size: size, drawingCalls: drawingCalls)
    }

    private static func drawImageInLockedImageContext(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
        let image = NSImage(size: size)
        image.lockFocus()
        guard let context = NSGraphicsContext.current else {
            image.unlockFocus()
            return nil
        }
        drawingCalls(context.cgContext)
        image.unlockFocus()
        return image
    }

    // Has scaling or antialiasing issues, like blurred curves.
    private static func drawImageInBitmapImageContext(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
        guard let offscreenRep = NSBitmapImageRep(pixelsWide: Int(size.width), pixelsHigh: Int(size.height),
                                                  bitsPerSample: 8, samplesPerPixel: 4, hasAlpha: true,
                                                  isPlanar: false, colorSpaceName: .deviceRGB) else {
            return nil
        }
        guard let context = NSGraphicsContext(bitmapImageRep: offscreenRep) else {
            return nil
        }
        NSGraphicsContext.saveGraphicsState()
        NSGraphicsContext.current = context
        drawingCalls(context.cgContext)
        NSGraphicsContext.restoreGraphicsState()
        let img = NSImage(size: size)
        img.addRepresentation(offscreenRep)
        return img
    }

    // Has scaling or antialiasing issues, like blurred curves.
    private static func drawImageInCGContext(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
        guard let context = CGContext(data: nil, width: Int(size.width), height: Int(size.height), bitsPerComponent: 8,
                                      bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) else {
            return nil
        }
        drawingCalls(context)
        guard let image = context.makeImage() else {
            return nil
        }
        return NSImage(cgImage: image, size: size)
    }
}
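Usage is then a one-liner; for example (the size and color here are arbitrary illustrations):
// A 24x24 round red dot.
let dot = ImageFactory.image(size: CGSize(width: 24, height: 24), fillColor: .systemRed, rounded: true)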

Porting creation of a PDF document from iOS to Mac OS X

I am porting my code from iPhone to Mac and I have no idea how to do this on the Mac. Here's the code that I am trying to convert, and I know that there's no UIGraphics on the Mac. Can someone point me to a guide or give me a quick hint? Thanks.
NSString *newFilePath = @"path/to/your/newfile.pdf";
NSString *templatePath = @"path/to/your/template.pdf";

// create empty PDF file
UIGraphicsBeginPDFContextToFile(newFilePath, CGRectMake(0, 0, 792, 612), nil);

CFURLRef url = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)templatePath, kCFURLPOSIXPathStyle, 0);
// open template file
CGPDFDocumentRef templateDocument = CGPDFDocumentCreateWithURL(url);
CFRelease(url);

// get amount of pages in template
size_t count = CGPDFDocumentGetNumberOfPages(templateDocument);

// for each page in template
for (size_t pageNumber = 1; pageNumber <= count; pageNumber++) {
    // get bounds of template page
    CGPDFPageRef templatePage = CGPDFDocumentGetPage(templateDocument, pageNumber);
    CGRect templatePageBounds = CGPDFPageGetBoxRect(templatePage, kCGPDFCropBox);

    // create empty page with corresponding bounds in new document
    UIGraphicsBeginPDFPageWithInfo(templatePageBounds, nil);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // flip context due to different origins
    CGContextTranslateCTM(context, 0.0, templatePageBounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);

    // copy content of template page onto the corresponding page in the new file
    CGContextDrawPDFPage(context, templatePage);

    // flip context back
    CGContextTranslateCTM(context, 0.0, templatePageBounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);

    /* Here you can do any drawings */
    [@"Test" drawAtPoint:CGPointMake(200, 300) withFont:[UIFont systemFontOfSize:20]];
}
CGPDFDocumentRelease(templateDocument);
UIGraphicsEndPDFContext();
Use CGPDFContextCreateWithURL instead of UIGraphicsBeginPDFContextToFile (the parameters are very similar). To begin/end pages, use CGPDFContextBeginPage and CGPDFContextEndPage. When you're done, call CGPDFContextClose instead of UIGraphicsEndPDFContext.
The rest can remain the same – Core Graphics exists on both iOS and Mac OS X – which also means that you could use the functions I've mentioned above on iOS as well if you want to use the same code on both platforms.
Swift 4, macOS High Sierra Update
func generatePdfWithFilePath(thefilePath: String)
{
    let url = URL(fileURLWithPath: thefilePath) as CFURL
    guard let currentContext = CGContext(url, mediaBox: nil, documentInfo as CFDictionary) else {
        return
    }
    self.context = currentContext

    self.context!.beginPDFPage(pageInfo as CFDictionary)
    drawReport()
    self.context!.endPDFPage()

    // Close the PDF context and write the contents out.
    self.context!.closePDF()
    self.context = nil

    //DebugLog("generatePdfWithFilePath() completed")
}
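The method above relies on a few members it doesn't show (context, documentInfo, pageInfo, drawReport). Minimal stand-ins might look like this; all names and values here are assumptions for illustration:
// Hypothetical stand-ins for what generatePdfWithFilePath(thefilePath:) assumes.
var context: CGContext?

// Optional PDF metadata; an empty dictionary also works.
let documentInfo: [String: Any] = [kCGPDFContextTitle as String: "My Report"]

// Per-page options; when empty, the context's default media box is used.
let pageInfo: [String: Any] = [:]

func drawReport() {
    // Core Graphics drawing calls against self.context go here.
}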

iPad App: Merge PDF files into 1 PDF document / Create PDF Document of multi-page scrollview

I am writing an iPad application which uses a scroll view with page control.
I need to create a PDF of all the pages as one PDF file.
So far, I figured that I should loop through all the sub-views (pages) and create PDF files for each (using CGPDFContext). But I need to combine all the files into one PDF document. Can you help me do so?
Or if you have a better way to create a PDF document with multiple pages from this scroll view, that would be even better!
Please help. I've searched everywhere and saw that Mac OS has something for this: PDFDocument and its insertPage function. I can't find a similar method for iOS.
To create a multi-page PDF:
- (CGContextRef)createPDFContext:(CGRect)inMediaBox path:(NSString *)path
{
    CGContextRef myOutContext = NULL;
    NSURL *url = [NSURL fileURLWithPath:path];
    if (url != NULL) {
        myOutContext = CGPDFContextCreateWithURL((__bridge CFURLRef)url, &inMediaBox, NULL);
    }
    return myOutContext;
}

- (void)createPdfFromScrollview:(UIScrollView *)scrollview
{
    CGContextRef pdfContext = [self createPDFContext:CGRectMake(0, 0, WIDTH, HEIGHT) path:outputFilePath];

    for (UIView *view in scrollview.subviews) {
        CGContextBeginPage(pdfContext, nil);

        // flip the coordinate system so the layer renders right side up
        CGAffineTransform transform = CGAffineTransformMakeTranslation(0, HEIGHT);
        transform = CGAffineTransformScale(transform, 1.0, -1.0);
        CGContextConcatCTM(pdfContext, transform);

        // draw view into PDF
        [view.layer renderInContext:pdfContext];
        CGContextEndPage(pdfContext);
    }
    CGContextRelease(pdfContext);
}
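As for the file-merging half of the question: on iOS (before PDFKit arrived) you can get the same effect by opening each source file with CGPDFDocument and redrawing its pages into a single CGPDFContext. A minimal Swift sketch of the idea (untested; the paths and function name are placeholders):
import CoreGraphics
import Foundation

func mergePDFs(at inputPaths: [String], to outputPath: String) {
    let outURL = URL(fileURLWithPath: outputPath) as CFURL
    guard let context = CGContext(outURL, mediaBox: nil, nil) else { return }
    for path in inputPaths {
        guard let doc = CGPDFDocument(URL(fileURLWithPath: path) as CFURL) else { continue }
        for pageNumber in stride(from: 1, through: doc.numberOfPages, by: 1) {
            guard let page = doc.page(at: pageNumber) else { continue }
            var box = page.getBoxRect(.mediaBox)
            context.beginPage(mediaBox: &box)   // start a page matching the source page's media box
            context.drawPDFPage(page)           // PDF-to-PDF drawing needs no coordinate flip
            context.endPage()
        }
    }
    context.closePDF()
}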
Hope this helps.