I'm trying to work out how to do this.
NOTE: I'm not an experienced Objective-C developer (hence why I'm using PhoneGap in the first place).
The short of it: my UIWebView (no, not the PhoneGap one that renders the webapp, but a second UIWebView created in memory and not visible) is not rendering into the PDF. I just get a blank PDF. I'll post some of my thinking and code, and hopefully someone will know what I'm doing wrong.
My starting place is that there is already a print plugin for PhoneGap here:
https://github.com/phonegap/phonegap-plugins/tree/master/iPhone/PrintPlugin
This plugin creates a UIWebView on-the-fly, you pass some HTML to it via JavaScript, and then it calls some print controller to do the printing.
So I borrowed some ideas from that. Then I noticed this awesome blog post on generating PDFs:
http://www.ioslearner.com/convert-html-uiwebview-pdf-iphone-ipad/
So I'm trying to combine the two into my own PhoneGap plugin for taking some HTML (from my webapp) and generating a PDF on-the-fly.
HEADER:
#import <Foundation/Foundation.h>
#import <QuartzCore/QuartzCore.h>
#ifdef PHONEGAP_FRAMEWORK
#import <PhoneGap/PGPlugin.h>
#else
#import "PGPlugin.h"
#endif
@interface ExportPlugin : PGPlugin <UIWebViewDelegate> {
NSString* exportHTML;
}
@property (nonatomic, copy) NSString* exportHTML;
//This gets called from my HTML5 app (Javascript):
- (void) exportPdf:(NSMutableArray*)arguments withDict:(NSMutableDictionary*)options;
@end
MAIN:
#import "ExportPlugin.h"
@interface ExportPlugin (Private)
-(void) doExport;
-(void) drawPdf;
@end
@implementation ExportPlugin
@synthesize exportHTML;
- (void) exportPdf:(NSMutableArray*)arguments withDict:(NSMutableDictionary*)options{
NSUInteger argc = [arguments count];
if (argc < 1) {
return;
}
self.exportHTML = [arguments objectAtIndex:0];
[self doExport];
}
int imageName = 0;
double webViewHeight = 0.0;
- (void) doExport{
//Set the base URL to be the www directory.
NSString *dbFilePath = [[NSBundle mainBundle] pathForResource:@"www" ofType:nil];
NSURL *baseURL = [NSURL fileURLWithPath:dbFilePath];
//Load custom html into a webview
UIWebView *webViewExport = [[UIWebView alloc] init];
webViewExport.delegate = self;
//[webViewExport loadHTMLString:exportHTML baseURL:baseURL];
[webViewExport loadHTMLString:@"<html><body><h1>testing</h1></body></html>" baseURL:baseURL];
}
- (BOOL)webView:(UIWebView *)theWebView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType
{
return YES;
}
- (void)webViewDidFinishLoad:(UIWebView *)webViewExport
{
webViewHeight = [[webViewExport stringByEvaluatingJavaScriptFromString:@"document.body.scrollHeight;"] integerValue];
CGRect screenRect = webViewExport.frame;
//WHY DO I HAVE TO SET THE SIZE? OTHERWISE IT IS 0
screenRect.size.width = 768;
screenRect.size.height = 1024;
double currentWebViewHeight = webViewHeight;
while (currentWebViewHeight > 0)
{
imageName ++;
UIGraphicsBeginImageContext(screenRect.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
//[[UIColor blackColor] set];
//CGContextFillRect(ctx, screenRect);
[webViewExport.layer renderInContext:ctx];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *pngPath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%d.png",imageName]];
if(currentWebViewHeight < 960)
{
CGRect lastImageRect = CGRectMake(0, 960 - currentWebViewHeight, webViewExport.frame.size.width, currentWebViewHeight);
CGImageRef imageRef = CGImageCreateWithImageInRect([newImage CGImage], lastImageRect);
newImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
}
[UIImagePNGRepresentation(newImage) writeToFile:pngPath atomically:YES];
[webViewExport stringByEvaluatingJavaScriptFromString:@"window.scrollBy(0,960);"];
currentWebViewHeight -= 960;
}
[self drawPdf];
}
- (void) drawPdf
{
CGSize pageSize = CGSizeMake(612, webViewHeight);
NSString *fileName = @"Demo.pdf";
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *pdfFileName = [documentsDirectory stringByAppendingPathComponent:fileName];
UIGraphicsBeginPDFContextToFile(pdfFileName, CGRectZero, nil);
// Mark the beginning of a new page.
UIGraphicsBeginPDFPageWithInfo(CGRectMake(0, 0, pageSize.width, pageSize.height), nil);
double currentHeight = 0.0;
for (int index = 1; index <= imageName ; index++)
{
NSString *pngPath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%d.png", index]];
UIImage *pngImage = [UIImage imageWithContentsOfFile:pngPath];
[pngImage drawInRect:CGRectMake(0, currentHeight, pageSize.width, pngImage.size.height)];
currentHeight += pngImage.size.height;
}
UIGraphicsEndPDFContext();
}
@end
The first indication that something is not right is that, above, I have to set the UIWebView frame size:
screenRect.size.width = 768;
screenRect.size.height = 1024;
But why? The PhoneGap PrintPlugin doesn't have to do this. If I don't set it, the size is 0, and then I get lots of context errors.
And then the next problem is that the UIWebView is not rendering anything. A symptom of the first problem perhaps?
How do I go about debugging this and working out what the problem is?
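For clarity, here is roughly what I'm experimenting with: giving the off-screen web view an explicit frame at creation time. The hidden/addSubview lines are just an experiment on my part, not something the PrintPlugin does:
// Give the off-screen web view an explicit frame up front, and (as a further experiment)
// add it hidden to the view hierarchy so its layer actually gets laid out before
// renderInContext: is called.
UIWebView *webViewExport = [[UIWebView alloc] initWithFrame:CGRectMake(0, 0, 768, 1024)];
webViewExport.delegate = self;
webViewExport.hidden = YES;
[self.webView.superview addSubview:webViewExport]; // self.webView is the main PhoneGap web view
[webViewExport loadHTMLString:exportHTML baseURL:baseURL];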
UPDATE
I'm pretty sure it's impossible to render the UIWebView layer into the image context unless that UIWebView is actually visible.
I'm not sure how the PhoneGap PrintPlugin works then. It seems to render its UIWebView just fine without it being visible.
I'm currently experimenting with rendering the actual PhoneGap UIWebView into the PDF (as opposed to my own UIWebView). But this is not ideal.
It means I have to hide all toolbars and whatnot, and then pan the UIWebView around so I capture everything outside the viewport. This is not ideal, because the user will visually see this occurring!
The panning approach above doesn't seem to work anyway, because the iPad is too slow to update the screen when the layout is being fiddled with dynamically. If you do visual things very quickly (like panning the screen around), the iPad just won't show the intermediate states; you only end up seeing the end state. So when I take the screenshots, the screen visually hasn't panned, even though the DOM says it has. (Hope that makes sense.)
Agh, frustrating.
I've got a working solution now, but it's not ideal.
What I do is render the PhoneGap UIWebView into the PDF.
Doing this is quite tricky. I have a couple of Objective-C methods
- (void) takeScreenshot;
- (void) renderPdf;
that I call from Javascript.
Then I have to write a recursive JS algorithm that pans the screen in every direction whilst calling takeScreenshot.
In between calls to takeScreenshot I use setTimeout which gives a 20 millisecond break in the JS processing - enough time for the iPad to update the screen so the next screenshot can be taken.
It was a royal pain in the arse. Bounty is still open in case someone knows of a better way of dealing with this - I would be very curious to know!
If you want to render a UIWebView into a PDF, I think you could go for this:
1/ use the convertRect:fromView: method implemented by your UIWebView object to get the CGRect
2/ see the UIPrintPageRenderer Class Reference to build something like a print preview
3/ use UIGraphicsGetCurrentContext to get the CGContextRef out of it
4/ create the PDF from the CGRect and CGContextRef (you can use the help provided in the Apple sample code ZoomingPDFViewer for building PDFs using CGPDF); a rough sketch of these steps follows below
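A rough sketch of steps 2 to 4 could look like this (webViewExport is assumed to be the off-screen UIWebView from the question; setting paperRect/printableRect through key-value coding is a widely used workaround rather than official API, since those properties are read-only):
UIPrintPageRenderer *renderer = [[UIPrintPageRenderer alloc] init];
[renderer addPrintFormatter:webViewExport.viewPrintFormatter startingAtPageIndex:0];
// US Letter with a small margin.
CGRect paperRect = CGRectMake(0, 0, 612, 792);
CGRect printableRect = CGRectInset(paperRect, 20, 20);
[renderer setValue:[NSValue valueWithCGRect:paperRect] forKey:@"paperRect"];
[renderer setValue:[NSValue valueWithCGRect:printableRect] forKey:@"printableRect"];
NSMutableData *pdfData = [NSMutableData data];
UIGraphicsBeginPDFContextToData(pdfData, paperRect, nil);
for (NSInteger i = 0; i < renderer.numberOfPages; i++) {
    UIGraphicsBeginPDFPage();
    [renderer drawPageAtIndex:i inRect:UIGraphicsGetPDFContextBounds()];
}
UIGraphicsEndPDFContext();
// pdfData can now be written to disk or attached to an email.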
Here is how to render a PDF in a UIWebView (webPage being your UIWebView); your delegate (here self) could implement the UIWebViewDelegate protocol:
- (void)loadingPDFwithURL:(NSURL *)anURL {
CGRect appFrame = [[UIScreen mainScreen] applicationFrame];
appFrame.origin.y = 0;
self.webPage = [[UIWebView alloc] initWithFrame:appFrame]; // create the web view here rather than re-initing an existing instance
[self.webPage setScalesPageToFit:YES];
[self.webPage setDelegate:self];
NSURLRequest *requestObj = [NSURLRequest requestWithURL:anURL];
[self.webPage loadRequest:requestObj];
[self.view addSubview:self.webPage];
}
So you're giving the URL of the PDF (if it's in memory, just write it to a file in your application's sandbox, since the URL can be a file path).
Another possibility is to dive into CGPDF, but that gets harder.
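If you do go the CGPDF route, a minimal sketch of drawing the first page of an existing PDF into the current graphics context looks roughly like this (pdfURL is a placeholder for your file URL):
// Draw page 1 of an existing PDF file into the current graphics context.
CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((CFURLRef)pdfURL); // under ARC, cast with __bridge
CGPDFPageRef page = CGPDFDocumentGetPage(document, 1);                    // CGPDF pages are 1-indexed
CGRect pageRect = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
// PDF coordinates are flipped relative to UIKit, so flip the context before drawing.
CGContextTranslateCTM(ctx, 0, pageRect.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGContextDrawPDFPage(ctx, page);
CGContextRestoreGState(ctx);
CGPDFDocumentRelease(document);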
I'm trying to make a program where I need to show images and videos on an external screen. So I have a Table View where I can enter names and links to video files (mp4 for now) and image files (jpg for now).
I can't find a way to display still images in AVPlayer.
So to visualize the video files or the images, I created an AVPlayer and a UIImageView with the same size. The AVPlayer is placed above the UIImageView.
If I want to display an image, I hide the AVPlayer.
if ([self selectedVideoURL]!=nil){
NSString *myString = [[self selectedVideoURL] absoluteString];
NSString *extension = [myString substringFromIndex: [myString length] - 3];
if (![extension isEqual:@"jpg"]){
self.playerView.hidden = false;
[self playingVideo:[self selectedVideoURL]];
}
else{
self.playerView.hidden = true;
[self displayingImage:[self selectedVideoURL]];
}
}
Is there any way to make it simpler?
Thanks...
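One simpler way to branch on the file type (just a sketch of an alternative, not from the original code) is to use NSURL's pathExtension instead of slicing the absolute string, and to compare case-insensitively:
NSURL *url = [self selectedVideoURL];
if (url != nil) {
    NSString *extension = [[url pathExtension] lowercaseString];
    BOOL isImage = [extension isEqualToString:@"jpg"] || [extension isEqualToString:@"jpeg"];
    self.playerView.hidden = isImage;
    if (isImage) {
        [self displayingImage:url];
    } else {
        [self playingVideo:url];
    }
}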
First, I couldn't find anyone else having this problem. I'm working on a game in SpriteKit; there are mainTitle.h/.m and gamePlay.h/.m files. Below is the code for the share button to share your progress via text, Facebook, Twitter, etc. The code below is located in gamePlay.m inside a touch method. The code works, however after the user chooses to send his/her score via text message, the new-message window slides up and then the game appears to restart and load the mainTitle.m scene. Any ideas as to why this happens?
-(void)share {
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 1.0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSString *message = [NSString stringWithFormat:@"message"];
NSString *urlString = [NSString stringWithFormat:@"www..."];
NSURL *gmURL = [NSURL URLWithString:urlString];
UIActivityViewController *actVC = [[UIActivityViewController alloc]
initWithActivityItems:@[message, gmURL, image] applicationActivities:nil];
actVC.excludedActivityTypes = @[UIActivityTypePrint, UIActivityTypeAirDrop];
UIViewController *viewControl = self.view.window.rootViewController;
[viewControl presentViewController:actVC animated:YES completion:nil];
}
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
[self share];
}
It's probably not a good idea to call the share method in touchesBegan:, since it might be fired multiple times under some conditions. Use e.g. a UIButton instead.
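A minimal sketch of that suggestion, adding a real UIButton on top of the SKView when the scene is presented (the frame and title are placeholders):
// In the gamePlay scene: add a UIButton over the SKView instead of sharing from touchesBegan:.
- (void)didMoveToView:(SKView *)view {
    [super didMoveToView:view];
    UIButton *shareButton = [UIButton buttonWithType:UIButtonTypeSystem];
    shareButton.frame = CGRectMake(20, 20, 80, 44); // placeholder position and size
    [shareButton setTitle:@"Share" forState:UIControlStateNormal];
    [shareButton addTarget:self action:@selector(share) forControlEvents:UIControlEventTouchUpInside];
    [view addSubview:shareButton];
}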
I have a UICollectionView in my app, and each cell is a UIImageView and some text labels. The problem is that when I have the UIImageViews displaying their images, the scrolling performance is terrible. It's nowhere near as smooth as the scrolling experience of a UITableView or even the same UICollectionView without the UIImageView.
I found this question from a few months ago, and it seems like an answer was found, but it's written in RubyMotion, and I don't understand that. I tried to work out how to convert it to Objective-C, but since I have never used NSCache either, it's a little hard to. The poster there also pointed to another answer about implementing something in addition to their solution, but I'm not sure where to put that code either, possibly because I don't understand the code from the first question.
Would someone be able to help translate this into Objective-C?
def viewDidLoad
...
@images_cache = NSCache.alloc.init
@image_loading_queue = NSOperationQueue.alloc.init
@image_loading_queue.maxConcurrentOperationCount = 3
...
end
def collectionView(collection_view, cellForItemAtIndexPath: index_path)
cell = collection_view.dequeueReusableCellWithReuseIdentifier(CELL_IDENTIFIER, forIndexPath: index_path)
image_path = @image_paths[index_path.row]
if cached_image = @images_cache.objectForKey(image_path)
cell.image = cached_image
else
@operation = NSBlockOperation.blockOperationWithBlock lambda {
@image = UIImage.imageWithContentsOfFile(image_path)
Dispatch::Queue.main.async do
return unless collectionView.indexPathsForVisibleItems.containsObject(index_path)
@images_cache.setObject(@image, forKey: image_path)
cell = collectionView.cellForItemAtIndexPath(index_path)
cell.image = @image
end
}
@image_loading_queue.addOperation(@operation)
end
end
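For reference, a rough Objective-C equivalent of the RubyMotion snippet above might look like the following. This is only a sketch: the property names (imagesCache, imageLoadingQueue, imagePaths) and the custom cell class MyCell with an imageView are assumptions, not taken from the original project.
// In viewDidLoad (assumed properties: NSCache *imagesCache, NSOperationQueue *imageLoadingQueue, NSArray *imagePaths):
self.imagesCache = [[NSCache alloc] init];
self.imageLoadingQueue = [[NSOperationQueue alloc] init];
self.imageLoadingQueue.maxConcurrentOperationCount = 3;

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath {
    MyCell *cell = (MyCell *)[collectionView dequeueReusableCellWithReuseIdentifier:CELL_IDENTIFIER forIndexPath:indexPath];
    NSString *imagePath = self.imagePaths[indexPath.row];
    UIImage *cachedImage = [self.imagesCache objectForKey:imagePath];
    if (cachedImage) {
        cell.imageView.image = cachedImage;
    } else {
        [self.imageLoadingQueue addOperationWithBlock:^{
            UIImage *image = [UIImage imageWithContentsOfFile:imagePath];
            dispatch_async(dispatch_get_main_queue(), ^{
                if (!image) return;
                // Only touch the cell if this index path is still visible.
                if (![[collectionView indexPathsForVisibleItems] containsObject:indexPath]) return;
                [self.imagesCache setObject:image forKey:imagePath];
                MyCell *visibleCell = (MyCell *)[collectionView cellForItemAtIndexPath:indexPath];
                visibleCell.imageView.image = image;
            });
        }];
    }
    return cell;
}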
Here is the code from the second question that the asker of the first question said solved the problem:
UIImage *productImage = [[UIImage alloc] initWithContentsOfFile:path];
CGSize imageSize = productImage.size;
UIGraphicsBeginImageContext(imageSize);
[productImage drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
productImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Again, I'm not sure how/where to implement that.
Many thanks.
Here's the pattern I follow: always load asynchronously and cache the result. Make no assumption about the state of the view when the asynchronous load finishes. I have a class that simplifies the loads as follows:
//
// ImageRequest.h
// This class keeps track of in-flight instances, creating only one NSURLConnection for
// multiple matching requests (requests with matching URLs). It also uses NSCache to cache
// retrieved images. Set the cache count limit with the macro in this file.
#define kIMAGE_REQUEST_CACHE_LIMIT 100
typedef void (^CompletionBlock) (UIImage *, NSError *);
@interface ImageRequest : NSMutableURLRequest
- (UIImage *)cachedResult;
- (void)startWithCompletion:(CompletionBlock)completion;
@end
//
// ImageRequest.m
#import "ImageRequest.h"
NSMutableDictionary *_inflight;
NSCache *_imageCache;
@implementation ImageRequest
- (NSMutableDictionary *)inflight {
if (!_inflight) {
_inflight = [NSMutableDictionary dictionary];
}
return _inflight;
}
- (NSCache *)imageCache {
if (!_imageCache) {
_imageCache = [[NSCache alloc] init];
_imageCache.countLimit = kIMAGE_REQUEST_CACHE_LIMIT;
}
return _imageCache;
}
- (UIImage *)cachedResult {
return [self.imageCache objectForKey:self];
}
- (void)startWithCompletion:(CompletionBlock)completion {
UIImage *image = [self cachedResult];
if (image) {
completion(image, nil);
return;
}
NSMutableArray *inflightCompletionBlocks = [self.inflight objectForKey:self];
if (inflightCompletionBlocks) {
// a matching request is in flight, keep the completion block to run when we're finished
[inflightCompletionBlocks addObject:completion];
} else {
[self.inflight setObject:[NSMutableArray arrayWithObject:completion] forKey:self];
[NSURLConnection sendAsynchronousRequest:self queue:[NSOperationQueue mainQueue] completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
if (!error) {
// build an image, cache the result and run completion blocks for this request
UIImage *image = [UIImage imageWithData:data];
[self.imageCache setObject:image forKey:self];
id value = [self.inflight objectForKey:self];
[self.inflight removeObjectForKey:self];
for (CompletionBlock block in (NSMutableArray *)value) {
block(image, nil);
}
} else {
[self.inflight removeObjectForKey:self];
completion(nil, error);
}
}];
}
}
@end
Now the cell (collection or table) update is fairly simple:
-(UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath {
UICollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"Cell" forIndexPath:indexPath];
NSURL *url = [NSURL URLWithString:@"http:// some url from your model"];
// note that this can be a web url or file url
ImageRequest *request = [[ImageRequest alloc] initWithURL:url];
UIImage *image = [request cachedResult];
if (image) {
UIImageView *imageView = (UIImageView *)[cell viewWithTag:127];
imageView.image = image;
} else {
[request startWithCompletion:^(UIImage *image, NSError *error) {
if (image && [[collectionView indexPathsForVisibleItems] containsObject:indexPath]) {
[collectionView reloadItemsAtIndexPaths:@[indexPath]];
}
}];
}
return cell;
}
In general, bad scrolling behaviour for UICollectionViews or UITableViews happens because the cells are dequeued and constructed on the main thread by iOS. There is little freedom to precache cells or construct them on a background thread; instead they are dequeued and constructed as you scroll, blocking the UI. (Personally I find this bad design by Apple, although it does simplify matters because you don't have to be aware of potential threading issues. I think they should have provided a hook for a custom implementation of a UICollectionViewCell/UITableViewCell pool which can handle dequeuing/reusing of cells.)
The most important causes of the performance decrease are indeed related to image data and, in decreasing order of magnitude, are in my experience:
Synchronous calls to download image data: always do this asynchronously and call [UIImageView setImage:] with the constructed image on the main thread when it is ready.
Synchronous calls to construct images from data on the local file system, or from other serialized data: do this asynchronously as well (e.g. [UIImage imageWithContentsOfFile:], [UIImage imageWithData:], etc.).
Calls to [UIImage imageNamed:]: the first time an image is loaded it is served from the file system. You may want to precache such images (just by calling [UIImage imageNamed:] before the cell is actually constructed) so that they can be served from memory immediately instead; a short sketch of this follows below.
Calling [UIImageView setImage:] is not the fastest method either, but it can often not be avoided unless you use static images. For static images it is sometimes faster to use different image views which you set to hidden or not, depending on whether they should be displayed, instead of changing the image on the same image view.
The first time a cell is dequeued it is either loaded from a nib or constructed with alloc/init, and some initial layout or properties are set (probably also images, if you use them). This causes bad scrolling behaviour the first time a cell is used.
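A short sketch of the precaching idea from the third point above (the asset names are placeholders; call this e.g. in viewDidLoad so the images are already in the imageNamed: cache when the first cells are built):
// Warm the [UIImage imageNamed:] cache before the collection view asks for cells.
for (NSString *name in @[@"thumb-placeholder", @"star-icon", @"badge"]) { // placeholder asset names
    [UIImage imageNamed:name];
}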
Because I am very picky about smooth scrolling (even if it's only the first time a cell is used) I constructed a whole framework to precache cells by subclassing UINib (this is basically the only hook you get into the dequeuing process used by iOS). But that may be beyond your needs.
I had issues about UICollectionView scrolling.
What worked (almost) like a charm for me: I populated the cells with 90x90 PNG thumbnails. I say almost because the first complete scroll is not so smooth, but it never crashed anymore.
In my case, the cell size is 90x90.
I had many original PNG sizes before, and it was very choppy when the original PNG size was greater than ~1000x1000 (many crashes on the first scroll).
So I select 90x90 (or the like) for the UICollectionView and display the original PNGs (no matter the size). Hope it may help others.
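If it helps, here is a minimal sketch of generating such a thumbnail (the helper name and the 90x90 target size are just illustrative); doing this once when the image is imported, off the main thread, keeps the scroll smooth:
// Scale an arbitrary image down to a small thumbnail once, then reuse the thumbnail in cells.
UIImage *ThumbnailForImage(UIImage *original, CGSize targetSize) {
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0); // 0.0 = use the device's screen scale
    [original drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}
// Usage: UIImage *thumb = ThumbnailForImage(originalImage, CGSizeMake(90, 90));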
I am creating an app where a user can load their schedule into the app, and it is subsequently displayed.
When the image is allowed to be edited before loading, it shows up perfectly:
Once the user "chooses" the image, it shows up in the UIImageView blurred:
Here it is slightly zoomed in:
I know that the image resolution is okay because image displays perfectly beforehand. How can I stop this from being blurred?
I am using the basic method of zooming a UIImageView in a UIScrollView.
Here is the code I use to assign the image. zoomscroll is the UIScrollView and myschedule is the UIImageView:
-(void) imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
[self dismissModalViewControllerAnimated:YES];
//Obtaining saving path
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *imagePath = [documentsDirectory stringByAppendingPathComponent:@"myschedule.png"];
//Extracting image from the picker and saving it
NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
if ([mediaType isEqualToString:@"public.image"]){
UIImage *editedImage = [info objectForKey:UIImagePickerControllerEditedImage];
NSData *webData = UIImagePNGRepresentation(editedImage);
[webData writeToFile:imagePath atomically:YES];
myschedule.image = editedImage;
[zoomscroll addSubview:myschedule];
zoomscroll.contentSize = CGSizeMake(myschedule.frame.size.width , myschedule.frame.size.height);
zoomscroll.minimumZoomScale = 1.0;
zoomscroll.maximumZoomScale = 4.0;
}
}
Thank you!
iOS doesn't redraw views when it zooms them, it just scales the view up or down. The underlying implementation is basically a textured OpenGL polygon, which is why zooming is so fast. Regenerating the texture at a higher res is slow, so iOS doesn't do that unless you explicitly tell it to.
There are various ways you can fix this. The simplest is probably to set the contentSize of your scrollview to the actual size of the image and then zoom out initially, so that instead of zooming a small version of the image up to 400% (which results in blurring) the user is zooming back in from 25% up to 100%. Something like this:
myschedule.image = editedImage;
myschedule.frame = CGRectMake(0, 0, editedImage.size.width, editedImage.size.height);
[zoomscroll addSubview:myschedule];
zoomscroll.contentSize = editedImage.size;
zoomscroll.minimumZoomScale = 0.25;
zoomscroll.maximumZoomScale = 1.0;
zoomscroll.zoomScale = zoomscroll.minimumZoomScale;
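Note that zooming only works if the scroll view's delegate returns the view to scale; assuming the view controller is zoomscroll's delegate, that would be something like:
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView {
    return myschedule;
}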
I've got a UIView extension class called UIView+PDFView which takes the current view, splits it into pages, and renders a PDF document. My issue lies in the rendering part; I can successfully 'create' pages for content, however all pages after the first are blank. My end goal is to take the current view, 'scale' the width equal to the page, and paginate the rest of the view within a PDF.
What the issue isn't:
Attaching the PDF to e-mail
Sending the PDF to a printer/print simulator
Only printing one page while actually generating the PDF correctly
Correctly resizing the view (SO Question 4919388)
I do this before I call the method; verified by making the frame 10px tall, which still prints one (and only one) full page
Correctly translating the view. Verified by making both a translation and a scale; both correctly changed the view, however neither rendered on more than the first page.
My code is as follows:
@interface UIView (PDFView)
-(id)createPDFAndSaveToDocumentsWithFileName:(NSString*)aFilename andDocumentInfo:(NSDictionary *)documentInfo;
@end
@implementation UIView (PDFView)
-(id)createPDFAndSaveToDocumentsWithFileName:(NSString*)aFilename andDocumentInfo:(NSDictionary *)documentInfo {
// http://developer.apple.com/library/ios/DOCUMENTATION/GraphicsImaging/Reference/CGPDFContext/Reference/reference.html#//apple_ref/doc/constant_group/Auxiliary_Dictionary_Keys
// Creates a mutable data object for updating with binary data, like a byte array
NSMutableData *pdfData = [NSMutableData data];
// Points the pdf converter to the mutable data object and to the UIView to be converted
CGSize pageSize = CGSizeMake(self.bounds.size.width, 792);
UIGraphicsBeginPDFContextToData(pdfData, CGRectZero, documentInfo);
CGContextRef pdfContext = UIGraphicsGetCurrentContext();
NSInteger currentPage = 0;
BOOL done = NO;
do {
CGRect currentPageRect = CGRectMake(0, (pageSize.height*currentPage), pageSize.width, pageSize.height);
UIGraphicsBeginPDFPageWithInfo(currentPageRect, nil);
// draws rect to the view and thus this is captured by UIGraphicsBeginPDFContextToData
[self.layer renderInContext:pdfContext];
// If we're at the end of the view, exit the loop.
if ((pageSize.height*currentPage) > self.bounds.size.height) {
done = YES;
} else {
currentPage++;
}
} while (!done);
// remove PDF rendering context
UIGraphicsEndPDFContext();
if (aFilename == nil) {
return pdfData;
} else {
// Retrieves the document directories from the iOS device
NSArray* documentDirectories = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask,YES);
NSString* documentDirectory = [documentDirectories objectAtIndex:0];
NSString* documentDirectoryFilename = [documentDirectory stringByAppendingPathComponent:aFilename];
// instructs the mutable data object to write its context to a file on disk
[pdfData writeToFile:documentDirectoryFilename atomically:YES];
NSLog(@"documentDirectoryFileName: %@", documentDirectoryFilename);
return nil;
}
}
I think your problem lies here:
CGRect currentPageRect = CGRectMake(0, (pageSize.height*currentPage), pageSize.width, pageSize.height);
Instead of that, try using either of the statements below:
UIGraphicsBeginPDFPageWithInfo(CGRectMake(0.0, 0.0, 612.0, 792.0), nil);
UIGraphicsBeginPDFPage();
Every time you wish to add a new page to the context, use the above statements and you will be able to add pages to the context.
If you wish to use the default page size, i.e. 612 x 792, you can directly use UIGraphicsBeginPDFPage();
For a custom page size you can use UIGraphicsBeginPDFPageWithInfo(CGRectMake(0.0, 0.0, 612.0, 792.0), nil);
I think that should solve your problem.
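For what it's worth, here is a sketch of how the paging loop from the question could combine UIGraphicsBeginPDFPage() with a context translation so that each page shows the next slice of the view. The translation step is my assumption, not part of the answer above:
NSMutableData *pdfData = [NSMutableData data];
CGFloat pageHeight = 792.0;
CGRect pageRect = CGRectMake(0, 0, self.bounds.size.width, pageHeight);
UIGraphicsBeginPDFContextToData(pdfData, pageRect, nil);
CGContextRef pdfContext = UIGraphicsGetCurrentContext();
NSInteger currentPage = 0;
while (pageHeight * currentPage < self.bounds.size.height) {
    UIGraphicsBeginPDFPage();   // uses the default page rect passed to the context above
    CGContextSaveGState(pdfContext);
    // Shift the layer up so the slice belonging to this page lands on the page.
    CGContextTranslateCTM(pdfContext, 0, -pageHeight * currentPage);
    [self.layer renderInContext:pdfContext];
    CGContextRestoreGState(pdfContext);
    currentPage++;
}
UIGraphicsEndPDFContext();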