I have a paging UIScrollView with a contentSize large enough to hold a number of smaller UIScrollViews for zooming. The view returned from viewForZoomingInScrollView: is owned by a view controller and hosts a CALayer that a PDF page is drawn onto. This lets me navigate through a PDF much like the iBooks PDF reader.
The code that draws the PDF (Tiled Layers) is located in:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx;
Simply adding a 'page' to the visible screen calls this method automatically. When I change pages there is some delay before all the tiles are drawn, even though the page object has already been created.
What I want to do is render the next page before the user scrolls to it, thus preventing the visible tiling effect. However, I have found that if the layer is located offscreen, adding it to the scroll view doesn't trigger drawLayer:inContext:.
Any ideas/common gotchas here?
I have tried:
[viewController.view.layer setNeedsLayout];
[viewController.view.layer setNeedsDisplay];
NB: The fact that this replicates the iBooks functionality is irrelevant within the context of the full app.
As I mentioned above, CALayers don't render if they are offscreen.
I ended up not drawing the PDF directly into the layer. Instead, I render the PDF page to an image when needed (I render the focused page plus the page on either side of it).
Here is the render code:
- (UIImage *)renderPDFPageToImage:(int)pageNumber // candidate for an NSOperation
{
    // you may not want to retain the document ref for the whole app lifetime
    CGSize size = CGSizeMake(x, y); // x/y: the size you want the rendered page to be
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // flip the context vertically: PDF coordinates are bottom-up, UIKit is top-down
    // (the 750 here should match the height the page is drawn at)
    CGContextTranslateCTM(context, 0, 750);
    CGContextScaleCTM(context, 1.0, -1.0);

    CGPDFPageRef page = CGPDFDocumentGetPage(myDocumentRef, pageNumber); // could move to a class member
    CGContextDrawPDFPage(context, page);

    UIImage *pdfImage = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased
    UIGraphicsEndImageContext();
    return pdfImage;
}
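For context, here is a rough sketch of how the pre-rendering around the focused page can be wired up. The names preparePagesAroundIndex: and pageImageViewForIndex: are illustrative, not part of the code above:
- (void)preparePagesAroundIndex:(int)focusedPageIndex
{
    // render the focused page plus one page on either side, skipping indices outside the document
    int pageCount = (int)CGPDFDocumentGetNumberOfPages(myDocumentRef);
    for (int pageNumber = focusedPageIndex - 1; pageNumber <= focusedPageIndex + 1; pageNumber++) {
        if (pageNumber < 1 || pageNumber > pageCount) {
            continue;
        }
        UIImageView *pageView = [self pageImageViewForIndex:pageNumber]; // hypothetical lookup
        pageView.image = [self renderPDFPageToImage:pageNumber];
    }
}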
Here's my drawRect code:
- (void)drawRect:(CGRect)rect {
    if (currentLayer) {
        [currentLayer removeFromSuperlayer];
    }
    if (currentPath) {
        currentLayer = [[CAShapeLayer alloc] init];
        currentLayer.frame = self.bounds;
        currentLayer.path = currentPath.CGPath;
        if ([SettingsManager shared].isColorInverted) {
            currentLayer.strokeColor = [UIColor blackColor].CGColor;
        } else {
            currentLayer.strokeColor = [UIColor whiteColor].CGColor;
        }
        currentLayer.strokeColor = _strokeColorX.CGColor;
        currentLayer.fillColor = [UIColor clearColor].CGColor;
        currentLayer.lineWidth = self.test;
        currentLayer.contentsGravity = kCAGravityCenter;
        [self.layer addSublayer:currentLayer];
        //currentLayer.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
    }
    //[currentPath stroke];
}
"The portion of the view’s bounds that needs to be updated"
That's from the Apple dev docs.
I am dealing with a UIView and a device called a slate; the slate can record pencil drawings into my iOS app. I managed to get it working across the entire view, but I don't want the entire view to accept the slate input. Instead, I'd like the drawable area of the UIView to have a height of the phone's screen height minus 40 px. Makes sense?
I found out that if I set the frame of my currentLayer to new bounds, the area that can be drawn on is resized accordingly.
I think you're mixing up the meaning of the various rectangles in UIView. The drawRect: parameter designates a partial section of the view that needs to be redrawn. That can be because you called setNeedsDisplayInRect:, or because iOS thinks it needs to be redrawn (e.g. your app's drawing was thrown away while the screen was locked, and now the user has unlocked the screen and the current drawing is needed).
This rectangle has nothing to do with the size at which your content is drawn. The size of the area your view is drawn in is controlled by its frame and bounds, the former of which is usually driven by Auto Layout (i.e. layout constraint objects).
In general, while in drawRect:, you look at the bounds of your view to get the full rectangle to draw in. The parameter to drawRect:, on the other hand, is there to let you optimize your drawing and only redraw the parts that actually changed.
Also, it is generally a bad idea to manipulate the view and layer hierarchies from inside drawRect:. The UIView drawing mechanism expects you to only draw your current view's state there. Since it recursively walks the list of views and subviews to call drawRect:, changing the list of views or layers from inside it can cause weird side effects, like views being skipped until the next redraw.
Create and add new layers in your state-changing or event-handling methods, and position them using Auto Layout or from inside layoutSubviews.
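For instance, here is a minimal sketch of moving the layer work out of drawRect:, reusing the property names from the question (treat it as an outline, not a drop-in replacement):
// called from the event handler or setter where currentPath changes
- (void)updateCurrentPath:(UIBezierPath *)path
{
    currentPath = path;
    [currentLayer removeFromSuperlayer];

    currentLayer = [[CAShapeLayer alloc] init];
    currentLayer.path = currentPath.CGPath;
    currentLayer.strokeColor = _strokeColorX.CGColor;
    currentLayer.fillColor = [UIColor clearColor].CGColor;
    currentLayer.lineWidth = self.test;
    [self.layer addSublayer:currentLayer];
    [self setNeedsLayout];
}

// size the layer whenever the view itself is laid out
- (void)layoutSubviews
{
    [super layoutSubviews];
    currentLayer.frame = self.bounds;
}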
I am creating a PDF by taking a screenshot of a UIView. This currently works great on the iPad 3 with its Retina display, but when testing on other devices with lower-resolution screens I am having problems with text resolution.
Here is my code:
//start a new page with default size and info
//this can be changed later to include extra info
UIGraphicsBeginPDFPage();

//render the view's layer into an image context
//the last option specifies scale; if 0, it uses the device's scale
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 2.0);
CGContextRef context = UIGraphicsGetCurrentContext();
[view.layer renderInContext:context];
UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

//render the screenshot into the PDF page's CGContext
[screenShot drawInRect:view.bounds];

//close the PDF context (saves the PDF to the NSData object)
UIGraphicsEndPDFContext();
I have also tried to set the UIGraphicsBeginImageContextWithOptions scale to 2.0, but this gives no change. How can I force a view on an iPad2 to render at 2x resolution?
I ended up fixing this by recursively setting the contentScaleFactor property of the parent view and its subviews to 2.0.
The UIImage was rendering at the correct resolution, but the layer wasn't when renderInContext: was called.
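A hedged sketch of that recursive walk (the method name is mine, not from the original project):
// force a given content scale on a view and all of its subviews before renderInContext:,
// so the backing layers rasterize at Retina resolution even on a non-Retina device
- (void)applyContentScale:(CGFloat)scale toViewHierarchy:(UIView *)aView
{
    aView.contentScaleFactor = scale; // also updates the backing layer's contentsScale
    for (UIView *subview in aView.subviews) {
        [self applyContentScale:scale toViewHierarchy:subview];
    }
}

// usage, before rendering the view into the image/PDF context:
// [self applyContentScale:2.0 toViewHierarchy:view];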
I have two UIImageViews, one called imageView and the other called subView (which is a subview of imageView).
I want to blend the images in these views together, with the user able to adjust the alpha of the blend with a pan gesture. My code works, but right now it is slow because the image is redrawn every time the pan gesture moves. Is there a faster/more efficient way of doing this?
BONUS Q: I want to allow my subView image to be drawn zoomed in. Currently I've set my subView to UIViewContentModeCenter, but I can't seem to draw a zoomed-in part of my image with this content mode. Is there any way around this?
My drawRect: code:
- (void)drawRect:(CGRect)rect
{
    float xCenter = self.center.x - self.currentImage1.size.width/2.0;
    float yCenter = self.center.y - self.currentImage1.size.height/2.0;

    subView.alpha = self.blendAmount; // Customize the opacity of the top image.

    UIGraphicsBeginImageContext(self.currentImage1.size);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(c, kCGBlendModeColorBurn);
    [imageView.layer renderInContext:c];
    self.blendedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [self.blendedImage drawAtPoint:CGPointMake(xCenter, yCenter)];
}
You need to use the GPU for image processing, which is far faster than using the CPU (as you're doing right now).
You can use the Core Image framework, which is fast and easy to use but requires iOS 5 or later, or you can use OpenGL ES directly, but that requires more experience and some knowledge of OpenGL shaders.
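As a rough Core Image sketch of the alpha blend (CIDissolveTransition is one way to cross-fade two images; check that it is available on your deployment target, and reuse the CIContext between pan events):
#import <CoreImage/CoreImage.h>

// blend 'bottom' into 'top' on the GPU; 'amount' in 0..1 plays the role of the pan-driven alpha
- (UIImage *)blendedImageFrom:(UIImage *)bottom to:(UIImage *)top amount:(CGFloat)amount
{
    CIImage *bottomCI = [CIImage imageWithCGImage:bottom.CGImage];
    CIImage *topCI    = [CIImage imageWithCGImage:top.CGImage];

    CIFilter *blend = [CIFilter filterWithName:@"CIDissolveTransition"];
    [blend setValue:bottomCI forKey:kCIInputImageKey];
    [blend setValue:topCI forKey:kCIInputTargetImageKey];
    [blend setValue:@(amount) forKey:kCIInputTimeKey];

    CIContext *context = [CIContext contextWithOptions:nil]; // cache this in a property in real code
    CGImageRef cgImage = [context createCGImage:blend.outputImage fromRect:[bottomCI extent]];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}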
I'm building an app like Apple's Photos app on the iPad. I have large full-screen images and I show them using a scroll view to manage zooming and paging. The main problem happens when I try to create a grid of thumbnails of the images. I create them as UIImageViews overlapping UIButtons. It all works, but when I run the app on the iPad it uses a lot of memory; I suppose this comes from the rescaling of the image. Is there a way to create a UIImageView with a small image, rescaled from the larger one, without using so much memory?
You can use UIGraphics to create a thumbnail. Here's some code to do it:
// assumed setup (not in the original snippet): a square thumbnail of side 'length',
// cropped from the center of mainImage / mainImageView
CGFloat sideFull = MIN(mainImage.size.width, mainImage.size.height);
BOOL widthGreaterThanHeight = (mainImage.size.width > mainImage.size.height);
CGRect clippedRect = CGRectMake(0, 0, length, length);

UIGraphicsBeginImageContext(CGSizeMake(length, length));
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGContextClipToRect(currentContext, clippedRect);
CGFloat scaleFactor = length/sideFull;
if (widthGreaterThanHeight) {
    //a landscape image – shift the original image to the left when drawn into the context
    CGContextTranslateCTM(currentContext, -((mainImage.size.width - sideFull) / 2) * scaleFactor, 0);
}
else {
    //a portrait image – shift the original image upwards when drawn into the context
    CGContextTranslateCTM(currentContext, 0, -((mainImage.size.height - sideFull) / 2) * scaleFactor);
}
//this will automatically scale the layer down/up to the required thumbnail side (length)
//when it gets rendered into the context on the next line of code
CGContextScaleCTM(currentContext, scaleFactor, scaleFactor);
[mainImageView.layer renderInContext:currentContext];
UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext(); // balance the image context
I have an NSImage that came from a PDF, so it has one representation, of type NSPDFImageRep. I call [image setDataRetained:YES] to make sure that it remains an NSPDFImageRep. Later, I want to change the page, so I get the rep and set its current page. This is fine.
The problem is that when I draw the image, only the first page comes out.
My understanding is that when I draw an NSImage, it picks a representation and draws that representation. Now, the image only has one rep, so that's the one being drawn, and that's the PDF rep. So why, when I draw the image, is it not drawing the correct page?
HOWEVER, when I draw the representation itself, I get the correct page.
What am I missing?
NSImage caches the NSImageRep when it is first displayed. In the case of NSPDFImageRep, the setCacheMode: message has no effect, so the page that gets displayed will always be the first page. See this guide for more information.
You then have two solutions:
Draw the representation directly.
Send recache to the NSImage to force re-rasterization of the currently selected page.
An alternative mechanism to draw a PDF is to use the CGPDF* functions. To do this, use CGPDFDocumentCreateWithURL to create a CGPDFDocumentRef object. Then, use CGPDFDocumentGetPage to get a CGPDFPageRef object. You can then use CGContextDrawPDFPage to draw the page into your graphics context.
You may have to apply a transform to ensure that the document ends up sized like you want. Use a CGAffineTransform and CGContextConcatCTM to do this.
Here is some sample code pulled out of one of my projects:
// use your own constants here
NSString *path = @"/path/to/my.pdf";
NSUInteger pageNumber = 14;
CGSize size = [self frame].size;

// if we're drawing into an NSView, we need to get the current graphics context
CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];

CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)path, kCFURLPOSIXPathStyle, NO);
CGPDFDocumentRef document = CGPDFDocumentCreateWithURL(url);
CGPDFPageRef page = CGPDFDocumentGetPage(document, pageNumber);

// in my case, I wanted the PDF page to fill the view,
// so we apply a scaling transform to fit the page into the view
double ratio = size.width / CGPDFPageGetBoxRect(page, kCGPDFTrimBox).size.width;
CGAffineTransform transform = CGAffineTransformMakeScale(ratio, ratio);
CGContextConcatCTM(context, transform);

// now we draw the PDF into the context
CGContextDrawPDFPage(context, page);

// don't forget memory management!
CFRelease(url);
CGPDFDocumentRelease(document);
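If you want the page aspect-fitted and centered rather than just scaled to the view's width, CGPDFPageGetDrawingTransform can compute the transform for you; a brief sketch using the same context, page, and size as above:
// ask Core Graphics for a transform that fits the page's trim box into the target rect,
// preserving the aspect ratio, then concatenate it before drawing
CGRect target = CGRectMake(0, 0, size.width, size.height);
CGAffineTransform fitTransform = CGPDFPageGetDrawingTransform(page, kCGPDFTrimBox, target, 0, true);
CGContextConcatCTM(context, fitTransform);
CGContextDrawPDFPage(context, page);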