Obj-C View to PDF only renders first page - objective-c

I've got a UIView category called UIView+PDFView which takes the current view, splits it into pages, and renders a PDF document. My issue lies in the rendering part: I can successfully 'create' pages for the content, but all pages after the first are blank. My end goal is to take the current view, 'scale' its width to match the page, and paginate the rest of the view within a PDF.
What the issue isn't:
Attaching the PDF to e-mail
Sending the PDF to a printer/print simulator
Only printing one page while actually generating the PDF correctly
Correctly resizing the view (SO Question 4919388)
I do this before I call the method. Verified by making the frame 10 px tall; it still prints one (and only one) full page.
Correctly translating the view. Verified by applying both a translation and a scale; both correctly changed the view, but neither rendered on more than the first page.
My code is as follows:
@interface UIView (PDFView)
-(id)createPDFAndSaveToDocumentsWithFileName:(NSString*)aFilename andDocumentInfo:(NSDictionary *)documentInfo;
@end
@implementation UIView (PDFView)
-(id)createPDFAndSaveToDocumentsWithFileName:(NSString*)aFilename andDocumentInfo:(NSDictionary *)documentInfo {
// http://developer.apple.com/library/ios/DOCUMENTATION/GraphicsImaging/Reference/CGPDFContext/Reference/reference.html#//apple_ref/doc/constant_group/Auxiliary_Dictionary_Keys
// Creates a mutable data object for updating with binary data, like a byte array
NSMutableData *pdfData = [NSMutableData data];
// Points the pdf converter to the mutable data object and to the UIView to be converted
CGSize pageSize = CGSizeMake(self.bounds.size.width, 792);
UIGraphicsBeginPDFContextToData(pdfData, CGRectZero, documentInfo);
CGContextRef pdfContext = UIGraphicsGetCurrentContext();
NSInteger currentPage = 0;
BOOL done = NO;
do {
CGRect currentPageRect = CGRectMake(0, (pageSize.height*currentPage), pageSize.width, pageSize.height);
UIGraphicsBeginPDFPageWithInfo(currentPageRect, nil);
// draws rect to the view and thus this is captured by UIGraphicsBeginPDFContextToData
[self.layer renderInContext:pdfContext];
// If we're at the end of the view, exit the loop.
if ((pageSize.height*currentPage) > self.bounds.size.height) {
done = YES;
} else {
currentPage++;
}
} while (!done);
// remove PDF rendering context
UIGraphicsEndPDFContext();
if (aFilename == nil) {
return pdfData;
} else {
// Retrieves the document directories from the iOS device
NSArray* documentDirectories = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask,YES);
NSString* documentDirectory = [documentDirectories objectAtIndex:0];
NSString* documentDirectoryFilename = [documentDirectory stringByAppendingPathComponent:aFilename];
// instructs the mutable data object to write its context to a file on disk
[pdfData writeToFile:documentDirectoryFilename atomically:YES];
NSLog(#"documentDirectoryFileName: %#",documentDirectoryFilename);
return nil;
}
}

I think your problem lies here:
CGRect currentPageRect = CGRectMake(0, (pageSize.height*currentPage), pageSize.width, pageSize.height);
Instead of that, try using either of the statements below:
UIGraphicsBeginPDFPageWithInfo(CGRectMake(0.0, 0.0, 612.0, 792.0), nil);
UIGraphicsBeginPDFPage();
Every time you wish to add a new page to the context, call one of the statements above and a new page will be added.
If you wish to use the default page size of 612 x 792, you can call UIGraphicsBeginPDFPage() directly.
For a custom page size, use UIGraphicsBeginPDFPageWithInfo(CGRectMake(0.0, 0.0, 612.0, 792.0), nil) with your own rect.
I think that should solve your problem.
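For illustration only, here is a minimal sketch of such a pagination loop (not the poster's original code). It begins each page at the origin as suggested above and shifts the layer with a CTM translation so each page captures a different vertical slice of the view; the 612 x 792 page size and the method name are assumptions.
// Sketch: paginate a tall UIView into letter-sized PDF pages.
- (NSData *)pdfDataFromView:(UIView *)view
{
    CGSize pageSize = CGSizeMake(612, 792);          // assumed US-letter size in points
    NSMutableData *pdfData = [NSMutableData data];
    UIGraphicsBeginPDFContextToData(pdfData, CGRectMake(0, 0, pageSize.width, pageSize.height), nil);

    NSInteger pageCount = (NSInteger)ceil(view.bounds.size.height / pageSize.height);
    for (NSInteger page = 0; page < pageCount; page++) {
        UIGraphicsBeginPDFPage();                    // each page starts at (0, 0)
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSaveGState(ctx);
        // Shift the view up so this page renders the next vertical slice.
        CGContextTranslateCTM(ctx, 0, -pageSize.height * page);
        [view.layer renderInContext:ctx];
        CGContextRestoreGState(ctx);
    }

    UIGraphicsEndPDFContext();
    return pdfData;
}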

Related

iOS 8 API printing PDFs: broken when drawing text?

I have an app that has been happily generating PDFs using Quartz/UIKit since iOS 4, but since upgrading the project to iOS 8, it crashes whenever it tries to render text into the PDF context. Drawing lines & rectangles is fine, but any permutation of string rendering fails with an exception in one of the low-level libraries.
Rather than posting my own source, I tried working backwards from Apple's documentation. Granted it is out of date, but if it's no longer supposed to work, they ought to have fixed it.
https://developer.apple.com/library/ios/documentation/2DDrawing/Conceptual/DrawingPrintingiOS/GeneratingPDF/GeneratingPDF.html
Adapted source code:
- (void)producePDF
{
NSString *text=#"Bzorg blarf gloop foo!";
CFAttributedStringRef currentText = CFAttributedStringCreate(NULL, (CFStringRef)text, NULL);
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString(currentText);
NSString *pdfFileName = fullPath;
// Create the PDF context using the default page size of 612 x 792.
UIGraphicsBeginPDFContextToFile(pdfFileName, CGRectZero, nil);
CFRange currentRange = CFRangeMake(0, 0);
NSInteger currentPage = 0;
BOOL done = NO;
do {
// Mark the beginning of a new page.
UIGraphicsBeginPDFPageWithInfo(CGRectMake(0, 0, 612, 792), nil);
// Draw a page number at the bottom of each page.
currentPage++;
//[self drawPageNumber:currentPage];
// Render the current page and update the current range to
// point to the beginning of the next page.
//currentRange = [self renderPageWithTextRange:currentRange andFramesetter:framesetter];
currentRange=[self renderPage:currentPage withTextRange:currentRange andFramesetter:framesetter];
// If we're at the end of the text, exit the loop.
if (currentRange.location == CFAttributedStringGetLength((CFAttributedStringRef)currentText))
done = YES;
} while (!done);
// Close the PDF context and write the contents out.
UIGraphicsEndPDFContext();
// Release the framesetter.
CFRelease(framesetter);
// Release the attributed string.
CFRelease(currentText);
}
- (CFRange)renderPage:(NSInteger)pageNum withTextRange:(CFRange)currentRange
andFramesetter:(CTFramesetterRef)framesetter
{
// Get the graphics context.
CGContextRef currentContext = UIGraphicsGetCurrentContext();
// Put the text matrix into a known state. This ensures
// that no old scaling factors are left in place.
CGContextSetTextMatrix(currentContext, CGAffineTransformIdentity);
// Create a path object to enclose the text. Use 72 point
// margins all around the text.
CGRect frameRect = CGRectMake(72, 72, 468, 648);
CGMutablePathRef framePath = CGPathCreateMutable();
CGPathAddRect(framePath, NULL, frameRect);
// Get the frame that will do the rendering.
// The currentRange variable specifies only the starting point. The framesetter
// lays out as much text as will fit into the frame.
CTFrameRef frameRef = CTFramesetterCreateFrame(framesetter, currentRange, framePath, NULL);
CGPathRelease(framePath);
// Core Text draws from the bottom-left corner up, so flip
// the current transform prior to drawing.
CGContextTranslateCTM(currentContext, 0, 792);
CGContextScaleCTM(currentContext, 1.0, -1.0);
// Draw the frame.
CTFrameDraw(frameRef, currentContext);
// Update the current range based on what was drawn.
currentRange = CTFrameGetVisibleStringRange(frameRef);
currentRange.location += currentRange.length;
currentRange.length = 0;
CFRelease(frameRef);
return currentRange;
}
I've tried numerous permutations, and they all seem to fail at the exact point of rendering text. The Apple-derived example above dies at the line:
CTFrameDraw(frameRef, currentContext);
Other code attempts to get the minimum working:
NSMutableParagraphStyle* textStyle = NSMutableParagraphStyle.defaultParagraphStyle.mutableCopy;
textStyle.alignment = NSTextAlignmentLeft;
NSDictionary* textFontAttributes = @{
NSFontAttributeName: [UIFont fontWithName: @"Helvetica" size: 12], NSForegroundColorAttributeName: UIColor.redColor,
NSParagraphStyleAttributeName: textStyle};
[@"Hello, World!" drawAtPoint:CGPointZero withAttributes:textFontAttributes];
... crashes at the "drawAtPoint" call.
For what it's worth, if I execute the app on a device without the debugger attached (i.e. run/kill/launch from springboard), the PDF creation works just fine. Presumably whatever bogus exception was getting thrown just gets ignored in real life.
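As a purely diagnostic sketch of my own (not from the question, and it only catches Objective-C exceptions, not lower-level faults), one could wrap the drawing call in @try/@catch inside the PDF page-drawing loop to see whether the exception is actually survivable:
// Diagnostic sketch: call this while a PDF page context is current.
static void tryDrawingProbe(void)
{
    NSDictionary *attributes = @{ NSFontAttributeName : [UIFont fontWithName:@"Helvetica" size:12] };
    @try {
        [@"Hello, World!" drawAtPoint:CGPointZero withAttributes:attributes];
    }
    @catch (NSException *exception) {
        NSLog(@"string drawing threw: %@", exception);
    }
}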

Image generator (generateCGImagesAsynchronouslyForTimes method) doesn't give live results

Pretty new to Cocoa development and really stuck with what is probably a fundamental problem.
In short, my app UI is a simple window with an NSSlider at the bottom. What I need is to generate N images and place them onto N NSViews in my app window.
What it does so far:
I click on the slider (holding it) and drag it. While I'm dragging, nothing happens to my views (the pictures are not generated). When I release the slider, the pictures get generated and my views get filled with them.
What I want:
- I need the views to be filled with pictures as I'm moving the slider.
I found the little checkbox in the NSSlider properties, which is "Continuous", and I'm using it, but my image generator still doesn't do anything until I release the slider.
Here is my code:
// slider move action
- (IBAction)sliderMove:(id)sender
{
[self generateProcess:[_slider floatValue]];
}
// generation process
- (void) generateProcess:(Float64) startPoint
{
// create an array of times for frames to display
NSMutableArray *stops = [[NSMutableArray alloc] init];
for (int j = 0; j < _numOfFramesToDisplay; j++)
{
CMTime time = CMTimeMakeWithSeconds(startPoint, 60000);
[stops addObject:[NSValue valueWithCMTime:time]];
_currentPosition = initialTime; // set the current position to the last frame displayed
startPoint+=0.04; // the step between frames is 0.04sec
}
__block CMTime lastTime = CMTimeMake(-1, 1);
__block int count = 0;
[_imageGenerator generateCGImagesAsynchronouslyForTimes:stops
completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime,AVAssetImageGeneratorResult result, NSError *error)
{
if (result == AVAssetImageGeneratorSucceeded)
{
if (CMTimeCompare(actualTime, lastTime) != 0)
{
NSLog(#"new frame found");
lastTime = actualTime;
}
else
{
NSLog(#"skipping");
return;
}
// place the image onto the view
NSRect rect = CGRectMake((count+0.5) * 110, 500, 100, 100);
NSImageView *iView = [[NSImageView alloc] initWithFrame:rect];
[iView setImageScaling:NSScaleToFit];
NSImage *myImage = [[NSImage alloc] initWithCGImage:image size:(NSSize){50.0,50.0}];
[iView setImage:myImage];
[self.windowForSheet.contentView addSubview: iView];
[_viewsToRemove addObject:iView];
}
if (result == AVAssetImageGeneratorFailed)
{
NSLog(#"Failed with error: %#", [error localizedDescription]);
}
if (result == AVAssetImageGeneratorCancelled)
{
NSLog(#"Canceled");
}
count++;
}];
}
If you have any thoughts or ideas, please share with me, I will really appreciate it!
Thank you
In order to make your NSSlider continuous, open your window controller's XIB file in Interface Builder and click on the NSSlider. Then open the Utilities area, select the Attributes Inspector, and check the "Continuous" checkbox under the Control header. Once you've done this, your IBAction sliderMove: will be called as the slider is moved rather than once the mouse is released.
Note: Alternatively, with an
NSSlider *slider = //...
one can simply call
[slider setContinuous:YES];
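As a small usage sketch (the _slider outlet name is taken from the question's code; the rest is my assumption), the same wiring can also be done entirely in code, for example in -awakeFromNib:
// Sketch: configure the slider programmatically instead of in Interface Builder.
- (void)awakeFromNib
{
    [super awakeFromNib];
    [_slider setContinuous:YES];                // fire the action while dragging
    [_slider setTarget:self];
    [_slider setAction:@selector(sliderMove:)]; // same IBAction as above
}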

Using in-memory UIWebView to generate PDF in PhoneGap

I'm trying to work out how to do this.
NOTE: I'm not an experienced objective-c developer (hence why I'm using PhoneGap in the first place)
The short of it: my UIWebView (no, not the PhoneGap one that renders the webapp; a second UIWebView created in memory and not visible) is not rendering into the PDF. I just get a blank PDF. I'll post some of my thinking and code, and hopefully someone will know what I'm doing wrong.
My starting place is that there is already a print plugin for PhoneGap here:
https://github.com/phonegap/phonegap-plugins/tree/master/iPhone/PrintPlugin
This plugin creates a UIWebView on-the-fly, you pass some HTML to it via JavaScript, and then it calls some print controller to do the printing.
So I borrowed some ideas from that. Then I noticed this awesome blog post on generating PDFs:
http://www.ioslearner.com/convert-html-uiwebview-pdf-iphone-ipad/
So I'm trying to combine the two into my own PhoneGap plugin for taking some HTML (from my webapp) and generating a PDF on-the-fly.
HEADER:
#import <Foundation/Foundation.h>
#import <QuartzCore/QuartzCore.h>
#ifdef PHONEGAP_FRAMEWORK
#import <PhoneGap/PGPlugin.h>
#else
#import "PGPlugin.h"
#endif
@interface ExportPlugin : PGPlugin <UIWebViewDelegate> {
NSString* exportHTML;
}
@property (nonatomic, copy) NSString* exportHTML;
//This gets called from my HTML5 app (Javascript):
- (void) exportPdf:(NSMutableArray*)arguments withDict:(NSMutableDictionary*)options;
@end
MAIN:
#import "ExportPlugin.h"
@interface ExportPlugin (Private)
-(void) doExport;
-(void) drawPdf;
@end
@implementation ExportPlugin
@synthesize exportHTML;
- (void) exportPdf:(NSMutableArray*)arguments withDict:(NSMutableDictionary*)options{
NSUInteger argc = [arguments count];
if (argc < 1) {
return;
}
self.exportHTML = [arguments objectAtIndex:0];
[self doExport];
}
int imageName = 0;
double webViewHeight = 0.0;
- (void) doExport{
//Set the base URL to be the www directory.
NSString *dbFilePath = [[NSBundle mainBundle] pathForResource:@"www" ofType:nil ];
NSURL *baseURL = [NSURL fileURLWithPath:dbFilePath];
//Load custom html into a webview
UIWebView *webViewExport = [[UIWebView alloc] init];
webViewExport.delegate = self;
//[webViewExport loadHTMLString:exportHTML baseURL:baseURL];
[webViewExport loadHTMLString:@"<html><body><h1>testing</h1></body></html>" baseURL:baseURL];
}
- (BOOL)webView:(UIWebView *)theWebView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType
{
return YES;
}
- (void)webViewDidFinishLoad:(UIWebView *)webViewExport
{
webViewHeight = [[webViewExport stringByEvaluatingJavaScriptFromString:@"document.body.scrollHeight;"] integerValue];
CGRect screenRect = webViewExport.frame;
//WHY DO I HAVE TO SET THE SIZE? OTHERWISE IT IS 0
screenRect.size.width = 768;
screenRect.size.height = 1024;
double currentWebViewHeight = webViewHeight;
while (currentWebViewHeight > 0)
{
imageName ++;
UIGraphicsBeginImageContext(screenRect.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
//[[UIColor blackColor] set];
//CGContextFillRect(ctx, screenRect);
[webViewExport.layer renderInContext:ctx];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *pngPath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%d.png",imageName]];
if(currentWebViewHeight < 960)
{
CGRect lastImageRect = CGRectMake(0, 960 - currentWebViewHeight, webViewExport.frame.size.width, currentWebViewHeight);
CGImageRef imageRef = CGImageCreateWithImageInRect([newImage CGImage], lastImageRect);
newImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
}
[UIImagePNGRepresentation(newImage) writeToFile:pngPath atomically:YES];
[webViewExport stringByEvaluatingJavaScriptFromString:@"window.scrollBy(0,960);"];
currentWebViewHeight -= 960;
}
[self drawPdf];
}
- (void) drawPdf
{
CGSize pageSize = CGSizeMake(612, webViewHeight);
NSString *fileName = #"Demo.pdf";
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *pdfFileName = [documentsDirectory stringByAppendingPathComponent:fileName];
UIGraphicsBeginPDFContextToFile(pdfFileName, CGRectZero, nil);
// Mark the beginning of a new page.
UIGraphicsBeginPDFPageWithInfo(CGRectMake(0, 0, pageSize.width, pageSize.height), nil);
double currentHeight = 0.0;
for (int index = 1; index <= imageName ; index++)
{
NSString *pngPath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%d.png", index]];
UIImage *pngImage = [UIImage imageWithContentsOfFile:pngPath];
[pngImage drawInRect:CGRectMake(0, currentHeight, pageSize.width, pngImage.size.height)];
currentHeight += pngImage.size.height;
}
UIGraphicsEndPDFContext();
}
@end
The first indication that something is not right is that, above, I have to set the UIWebView frame size:
screenRect.size.width = 768;
screenRect.size.height = 1024;
But why? The PhoneGap PrintPlugin doesn't have to do this. If I don't set it, the size is 0, and then I get lots of context errors.
And then the next problem is that the UIWebView is not rendering anything. A symptom of the first problem perhaps?
How do I go about debugging this and working out what the problem is?
UPDATE
I'm pretty sure that it may be impossible to render the UIWebView layer into the image context unless that UIWebView is actually visible.
I'm not sure how the PhoneGap PrintPlugin works, then. It seems to render its UIWebView quite fine without it being visible.
I'm currently experimenting with rendering the actual PhoneGap UIWebView into the PDF (as opposed to my own UIWebView). But this is not ideal.
It means I have to hide all toolbars and whatnot, and then pan the UIWebView around so I capture everything outside the viewport. This is not ideal, because the user will visually see this occurring!
Point 1 above doesn't seem to work anyway, because the iPad is too slow to update the screen when dynamically fiddling with the layout. On iPad, if you do visual things very quickly (like panning the screen around), the iPad is too slow and just won't show it; you end up only seeing the end state. So when I take the screenshots, the screen visually hasn't panned (even though the DOM says it has). (Hope that makes sense.)
Agh, frustrating.
I've got a working solution now, but it's not ideal.
What I do is render the phonegap UIWebView into the PDF.
To do this is quite tricky. I have a couple of objective-c functions
- (void) takeScreenshot;
- (void) renderPdf;
that I call from Javascript.
Then I have to write a recursive JS algorithm that pans the screen in every direction whilst calling takeScreenshot.
In between calls to takeScreenshot I use setTimeout which gives a 20 millisecond break in the JS processing - enough time for the iPad to update the screen so the next screenshot can be taken.
It was a royal pain in the arse. Bounty is still open in case someone knows of a better way of dealing with this - I would be very curious to know!
If you want to render a UIWebView into a PDF, I think you could go for this:
1/ use the convertRect:fromView method implemented by your UIWebView object to get the CGRect
2/ see the UIPrintPageRenderer Class Reference to build something like a print preview
3/ Use UIGraphicsGetCurrentContext to get the CGContextRef out of it
4/ create the PDF from the CGRect and CGContextRef (you can use the help provided in the Apple sample code ZoomingPDFViewer for building PDFs using CGPDF); a rough sketch of steps 2-4 follows below.
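For illustration only, here is a minimal sketch of steps 2-4 using UIPrintPageRenderer with the web view's print formatter. The page size, margins, and the KVC trick for setting the read-only paperRect/printableRect properties are my assumptions, not part of the answer above:
// Sketch: render a UIWebView's content into a multi-page PDF via UIPrintPageRenderer.
- (NSData *)pdfDataFromWebView:(UIWebView *)webView
{
    UIPrintPageRenderer *renderer = [[UIPrintPageRenderer alloc] init];
    [renderer addPrintFormatter:webView.viewPrintFormatter startingAtPageAtIndex:0];

    CGRect paperRect = CGRectMake(0, 0, 612, 792);          // assumed US-letter page
    CGRect printableRect = CGRectInset(paperRect, 36, 36);  // assumed half-inch margins
    // paperRect/printableRect are read-only; setting them via KVC is a common workaround.
    [renderer setValue:[NSValue valueWithCGRect:paperRect] forKey:@"paperRect"];
    [renderer setValue:[NSValue valueWithCGRect:printableRect] forKey:@"printableRect"];

    NSMutableData *pdfData = [NSMutableData data];
    UIGraphicsBeginPDFContextToData(pdfData, paperRect, nil);
    for (NSInteger page = 0; page < renderer.numberOfPages; page++) {
        UIGraphicsBeginPDFPage();
        [renderer drawPageAtIndex:page inRect:UIGraphicsGetPDFContextBounds()];
    }
    UIGraphicsEndPDFContext();
    return pdfData;
}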
Here is how to render a PDF in a UIWebView (webPage being your UIWebView); your delegate (here "self") could implement the UIWebViewDelegate protocol:
- (void)loadingPDFwithURL:(NSURL *)anURL {
CGRect appFrame = [[UIScreen mainScreen] applicationFrame];
appFrame.origin.y = 0;
[self.webPage setFrame:appFrame];
[self.webPage setScalesPageToFit:YES];
[self.webPage setDelegate:self];
NSURLRequest *requestObj = [NSURLRequest requestWithURL:anURL];
[self.webPage loadRequest:requestObj];
[self.view addSubview:self.webPage];
}
So you're giving the URL of the PDF (if it's in memory, just write the file in your application as the URL can be a filepath).
Another possibility is to dive into CGPDF, but that gets harder.

Why does capturing images with AVFoundation give me 480x640 images when the preset is 640x480?

I have some pretty basic code to capture a still image using AVFoundation.
AVCaptureDeviceInput *newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:nil];
AVCaptureStillImageOutput *newStillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
AVVideoCodecJPEG, AVVideoCodecKey,
nil];
[newStillImageOutput setOutputSettings:outputSettings];
[outputSettings release];
AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
[newCaptureSession beginConfiguration];
newCaptureSession.sessionPreset = AVCaptureSessionPreset640x480;
[newCaptureSession commitConfiguration];
if ([newCaptureSession canAddInput:newVideoInput]) {
[newCaptureSession addInput:newVideoInput];
}
if ([newCaptureSession canAddOutput:newStillImageOutput]) {
[newCaptureSession addOutput:newStillImageOutput];
}
self.stillImageOutput = newStillImageOutput;
self.videoInput = newVideoInput;
self.captureSession = newCaptureSession;
[newStillImageOutput release];
[newVideoInput release];
[newCaptureSession release];
My method that captures the still image is also pretty simple and prints out the orientation which is AVCaptureVideoOrientationPortrait:
- (void) captureStillImage
{
AVCaptureConnection *stillImageConnection = [AVCamUtilities connectionWithMediaType:AVMediaTypeVideo fromConnections:[[self stillImageOutput] connections]];
if ([stillImageConnection isVideoOrientationSupported]){
NSLog(#"isVideoOrientationSupported - orientation = %d", orientation);
[stillImageConnection setVideoOrientation:orientation];
}
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
ALAssetsLibraryWriteImageCompletionBlock completionBlock = ^(NSURL *assetURL, NSError *error) {
if (error) { // HANDLE }
};
if (imageDataSampleBuffer != NULL) {
CFDictionaryRef exifAttachments = CMGetAttachment(imageDataSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments) {
NSLog(#"attachements: %#", exifAttachments);
} else {
NSLog(#"no attachments");
}
self.stillImageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
self.stillImage = [UIImage imageWithData:self.stillImageData];
UIImageWriteToSavedPhotosAlbum(self.stillImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
else
completionBlock(nil, error);
}];
}
So the device understands it's in portrait mode as it should be, and the EXIF attachments show me:
PixelXDimension = 640;
PixelYDimension = 480;
so it seems to know that we're in 640x480 and that means WxH (obviously...)
However, when I email the photo to myself from Apple's Photos app, I get a 480x640 image if I check the properties in Preview. This didn't make any sense to me until I dug further into the image properties and found that the image orientation is set to "6 (Rotated 90 degrees CCW)". I'm sure CCW is counter-clockwise.
So looking at the image in a browser:
http://tonyamoyal.com/stuff/things_that_make_you_go_hmm/photo.JPG
We see the image rotated 90 degrees CCW, and it is 640x480.
I'm really confused about this behavior. When I take a 640x480 still image using AVFoundation, I would expect the default to have no rotated orientation. I expect a 640x480 image oriented exactly as my eye sees the image in the preview layer. Can someone explain why this is happening and how to configure the capture so that when I save my image to the server to later display in a web view, it is not rotated 90 degrees CCW?
This happens because the orientation set in the metadata of the new image is being affected by the orientation of the AV system that creates it. The layout of the actual image data is, of course, different from the orientation mentioned in your metadata. Some image viewing programs respect the metadata orientation, some ignore it.
You can affect the metadata orientation of the AV system by calling:
AVCaptureConnection *videoConnection = ...;
if ([videoConnection isVideoOrientationSupported])
[videoConnection setVideoOrientation:AVCaptureVideoOrientationSomething];
You can affect the metadata orientation of a UIImage by calling:
UIImage *rotatedImage = [[UIImage alloc] initWithCGImage:image.CGImage scale:1.0f orientation:UIImageOrientationSomething];
But the actual data from the AVCapture system will always appear with the wider dimension as X and the narrower dimension as Y, and will appear to be oriented in LandscapeLeft.
If you want the actual data to line up with what your metadata claims, you need to modify the actual data. You can do this by writing the image out to a new image using CGContexts and affine transforms. Or there is an easier workaround: use the UIImage+Resize package as discussed here, and resize the image to its current size by calling:
UIImage *rotatedImage = [image resizedImage:CGSizeMake(image.size.width, image.size.height) interpolationQuality:kCGInterpolationDefault];
This will rectify the data's orientation as a side effect.
If you don't want to include the whole UIImage+Resize thing, you can check out its code and strip out the parts where the data is transformed.
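If you would rather avoid the extra dependency entirely, here is a minimal sketch of the same redraw-to-normalize idea (my own illustration, not part of UIImage+Resize); drawing the image into a fresh bitmap context bakes the orientation flag into the pixel data:
// Sketch: redraw a UIImage so its pixel data matches its reported orientation.
static UIImage *NormalizedImage(UIImage *image)
{
    if (image.imageOrientation == UIImageOrientationUp) {
        return image;  // already upright, nothing to do
    }
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    // -drawInRect: honors imageOrientation, so the redrawn copy comes out upright.
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}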

Rendering PDF document having 200 pages on iPad

I have to develop an application which takes a PDF as input; that PDF document has 200 pages. I am using a UIScrollView to swipe left and right, and on each swipe I draw a PDF document. The code is as follows:
- (id)initWithFrame:(CGRect)frame content:(NSString *)aPDFPage type:(NSString *)contentType
{
if ((self = [super initWithFrame:frame]))
{
// Initialization code
self.backgroundColor = [UIColor whiteColor];
pageRef = [[NSString alloc]initWithString:aPDFPage];
pageTypeRef = [[NSString alloc]initWithString:contentType];
}
return self;
}
- (void)drawRect:(CGRect)rect
{
ctx = UIGraphicsGetCurrentContext();
[self drawPDF];
}
-(void)drawPDF
{
NSString *pathToPdfDoc = [[NSBundle mainBundle] pathForResource:self.pageRef ofType:self.pageTypeRef];
NSURL *pdfUrl = [NSURL fileURLWithPath:pathToPdfDoc];
document = CGPDFDocumentCreateWithURL((CFURLRef)pdfUrl);
page = CGPDFDocumentGetPage(document, 1);
CGContextTranslateCTM(ctx, 0.0, [self bounds].size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGAffineTransform transform = aspectFit(CGPDFPageGetBoxRect(page, kCGPDFTrimBox),CGContextGetClipBoundingBox(ctx));
CGContextConcatCTM(ctx, transform);
CGContextSetInterpolationQuality(ctx, kCGInterpolationLow);
CGContextSetRenderingIntent(ctx, kCGRenderingIntentDefault);
CGContextDrawPDFPage(ctx, page);
CGPDFDocumentRelease(document);
}
-(void)dealloc
{
[pageRef release];
[pageTypeRef release];
[super dealloc];
}
This works fine, but if I swipe very fast, the subsequent pages often do not load instantly and the screen becomes white.
How do I solve this? Please guide me.
Regards,
Ranjan
One of the obvious issues is that drawRect: may be called regularly, and may only request that you draw part of the view's rect.
In your implementation, you:
read the PDF from disk. This takes a lot of time, especially since drawRect: may be called at a high frequency.
read the PDF. This takes some time; it can be avoided during drawRect:.
draw one page. Draw only what you need to draw, where possible.
dispose of the PDF. You should hold on to the PDF document while the PDF view is visible, rather than reading it from disk every time you need to draw; a rough sketch of this follows below.
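As an illustration only (not the poster's code), a sketch of that restructuring might look like the following; the class name PDFPageView is hypothetical, and the aspect-fit transform from the question is omitted for brevity:
// Sketch: open the document once, keep it, and only draw the cached page in drawRect:.
@interface PDFPageView : UIView {
    CGPDFDocumentRef _document;   // kept for the lifetime of the view
    CGPDFPageRef _page;           // owned by _document
}
@end

@implementation PDFPageView

- (id)initWithFrame:(CGRect)frame pdfURL:(NSURL *)url pageNumber:(size_t)pageNumber
{
    if ((self = [super initWithFrame:frame])) {
        self.backgroundColor = [UIColor whiteColor];
        _document = CGPDFDocumentCreateWithURL((CFURLRef)url);   // hit the disk once
        _page = CGPDFDocumentGetPage(_document, pageNumber);
    }
    return self;
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(ctx, 0.0, self.bounds.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    CGContextDrawPDFPage(ctx, _page);    // no disk I/O here
}

- (void)dealloc
{
    CGPDFDocumentRelease(_document);
    [super dealloc];   // manual reference counting, matching the question's code
}

@end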
Have a look at this thread: Fast and Lean PDF Viewer for iPhone / iPad / iOs - tips and hints?