NSImageRep confusion - objective-c

I have an NSImage that came from a PDF, so it has one representation, of type NSPDFImageRep. I call [image setDataRetained:YES] to make sure that it remains an NSPDFImageRep. Later, I want to change the page, so I get the rep and set the current page. This is fine.
The problem is that when I draw the image, only the 1st page comes out.
My impression is that when I draw an NSImage, it picks a representation and draws that representation. Now, the image only has one rep, so that's the one being drawn, and that's the PDF rep. So why, when I draw the image, is it not drawing the correct page?
HOWEVER, when I draw the representation itself, I get the correct page.
What am I missing?

NSImage caches its NSImageRep when the image is first displayed. In the case of NSPDFImageRep, the setCacheMode: message has no effect, so the cached rasterization of the first page is what gets drawn every time.
You then have two solutions:
Draw the representation directly.
Send the recache message to the NSImage to force rasterization of the currently selected page (sketched below).
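Here is a minimal sketch of the second approach, assuming image is your data-retained NSImage and dstRect is where you want the page drawn (both names are placeholders):
// Assumes `image` is backed by a single NSPDFImageRep (data retained).
NSPDFImageRep *rep = (NSPDFImageRep *)[[image representations] objectAtIndex:0];
[rep setCurrentPage:3];
// Drop the cached rasterization of the old page so the next draw
// re-renders the newly selected page.
[image recache];
[image drawInRect:dstRect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];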

An alternative mechanism to draw a PDF is to use the CGPDF* functions. To do this, use CGPDFDocumentCreateWithURL to create a CGPDFDocumentRef object. Then, use CGPDFDocumentGetPage to get a CGPDFPageRef object. You can then use CGContextDrawPDFPage to draw the page into your graphics context.
You may have to apply a transform to ensure that the document ends up sized like you want. Use a CGAffineTransform and CGContextConcatCTM to do this.
Here is some sample code pulled out of one of my projects:
// use your own constants here
NSString *path = @"/path/to/my.pdf";
NSUInteger pageNumber = 14;
CGSize size = [self frame].size;
// if we're drawing into an NSView, then we need to get the current graphics context
CGContextRef context = (CGContextRef)([[NSGraphicsContext currentContext] graphicsPort]);
CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)path, kCFURLPOSIXPathStyle, NO);
CGPDFDocumentRef document = CGPDFDocumentCreateWithURL(url);
CGPDFPageRef page = CGPDFDocumentGetPage(document, pageNumber);
// in my case, I wanted the PDF page to fill in the view
// so we apply a scaling transform to fit the page into the view
double ratio = size.width / CGPDFPageGetBoxRect(page, kCGPDFTrimBox).size.width;
CGAffineTransform transform = CGAffineTransformMakeScale(ratio, ratio);
CGContextConcatCTM(context, transform);
// now we draw the PDF into the context
CGContextDrawPDFPage(context, page);
// don't forget memory management!
CFRelease(url);
CGPDFDocumentRelease(document);

how to display data from plist in pdf format in a new view?

I've been hammering my brain trying to figure this one out and can't find anything in the docs or on SO that is helpful so far. I have a project that allows the user to input data and save it to a plist. Is there a way to display the data that has been stored in the plist in a new view in PDF format? What I am trying to do is display the recorded data in a new view controller in PDF format so the user can print that list. I know there is a way, but I just can't figure it out, and I finally threw in the towel and here I am. I will be eternally grateful for any help, guys. And girls too.
I can create a new PDF with the following code. I just can't seem to understand how to get the data from the plist to display.
- (IBAction)didClickMakePDF {
    [self setupPDFDocumentNamed:@"NewPDF" Width:850 Height:1100];
    [self beginPDFPage];
    CGRect textRect = [self addText:@"This is some nice text here, don't you agree?"
                          withFrame:CGRectMake(kPadding, kPadding, 400, 200) fontSize:48.0f];
    CGRect blueLineRect = [self addLineWithFrame:CGRectMake(kPadding, textRect.origin.y + textRect.size.height + kPadding, _pageSize.width - kPadding*2, 4)
                                       withColor:[UIColor blueColor]];
    UIImage *anImage = [UIImage imageNamed:@"tree.jpg"];
    CGRect imageRect = [self addImage:anImage
                              atPoint:CGPointMake((_pageSize.width/2)-(anImage.size.width/2), blueLineRect.origin.y + blueLineRect.size.height + kPadding)];
    [self addLineWithFrame:CGRectMake(kPadding, imageRect.origin.y + imageRect.size.height + kPadding, _pageSize.width - kPadding*2, 4)
                 withColor:[UIColor redColor]];
    [self finishPDF];
}
So, you've got your PDF context and some text loaded from your plist. You need to decide how it will be laid out to be rendered into the PDF. Core Text can do a really nice job of it. The quick and easy route to get you started is:
start by flipping the context
CGContextScaleCTM(pdfContext, 1.0, -1.0);
CGContextTranslateCTM(pdfContext, 0.0, -bounds.size.height);
draw your text
[text drawAtPoint:CGPointMake(x, y) withFont:[UIFont boldSystemFontOfSize:48.0f]];
where you will obviously want to change:
the text content in a loop
the y position so each line is drawn further down the page
the font
Images can be drawn into the context in the same way.
Then, move on to Core Text to do a better job with paragraphs of text.
"When you draw to the PDF context using CGContext functions the drawing operations are recorded in PDF format. The PDF commands that represent the drawing are written to the destination specified when you create the PDF graphics context."
This comes from the CGPDFContext reference: https://developer.apple.com/library/ios/documentation/GraphicsImaging/Reference/CGPDFContext/Reference/reference.html
A CGPDFContext is "just" a CGContext. You could set a color in it using "CGContextSetCMYKFillColor" for example or draw text in it using the NSString "drawInRect" method.
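Putting it together, here is a minimal sketch, assuming the plist holds a flat array of strings and that plistPath, pdfPath, and kPadding are your own values (all hypothetical names). Note that UIKit string drawing inside a UIGraphicsBeginPDFContextToFile context already works in a top-left-origin coordinate space, so no manual flip is needed on this particular route:
// Hypothetical: plistPath, pdfPath, and kPadding are yours to define.
NSArray *lines = [NSArray arrayWithContentsOfFile:plistPath];
UIGraphicsBeginPDFContextToFile(pdfPath, CGRectMake(0, 0, 850, 1100), nil);
UIGraphicsBeginPDFPage();
UIFont *font = [UIFont boldSystemFontOfSize:24.0f];
CGFloat y = kPadding;
for (NSString *line in lines) {
    // draw each plist entry, stepping further down the page each time
    [line drawAtPoint:CGPointMake(kPadding, y) withFont:font];
    y += font.lineHeight + 4.0f;
    // (start a new page with UIGraphicsBeginPDFPage() when y runs off the bottom)
}
UIGraphicsEndPDFContext();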

Sequentially shift square blocks in UIImage

I am new to Objective-C, but I need to write a fast method that divides a UIImage into square blocks of fixed size and then mixes them. I have already implemented it in the following way:
Get UIImage
Represent it as PNG
Convert it to RGBA8 unsigned char array
For each block, calculate its coordinates, then xor each pixel with the pixel from the block that gets replaced
Assemble that RGBA8 meat back into a new UIImage
Return it
It works as intended, but it is extremely slow. It takes about 12 seconds to process a single 1024x768 PNG on an iPhone 4S. The inspector shows that methods somehow connected to PNGRepresentation eat up about 50% of the total run time.
Would it possibly be faster if I used Quartz2D here somehow? I am now simply trying to copy/paste a single rectangle from and to my _image, but I don't know how to go further. It returns a UIImage with the _image provided as is, without the blockLayer pasted inside it:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), YES, 1.0);
CGContextRef context = UIGraphicsGetCurrentContext();
/* Start drawing */
//Draw in my image first
[_image drawAtPoint:CGPointMake(0,0) blendMode:kCGBlendModeNormal alpha:1.0];
//Here I am trying to make a 400x400 square, starting presumably at the origin
CGLayerRef blockLayer = CGLayerCreateWithContext(context, CGSizeMake(400, 400), NULL);
//Then I attempt to draw it back at the middle
CGContextDrawLayerAtPoint(context, CGPointMake(1024/2, 768/2), blockLayer);
CGContextSaveGState(context);
/* End drawing */
//Make UIImage from context
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
You can follow these steps to do what you need:
Load the image
Split it up into squares (see the sketch after this list)
Create a CALayer for each image, setting the location to the place of the square in the image before shuffling
Go through the layers, and set their positions to their target locations after shuffling
Watch the squares move to their new places (if you don't want the animation, disable implicit actions as noted in the sketch below)
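A minimal sketch of steps 2-4, assuming a fixed block size, a containerView to host the layers (a hypothetical name), and pre-ARC memory management; the per-square cropping is done with CGImageCreateWithImageInRect:
CGFloat block = 256.0; // assumed block size
CGImageRef cgImage = [_image CGImage];
for (int row = 0; row < _image.size.height / block; row++) {
    for (int col = 0; col < _image.size.width / block; col++) {
        CGRect square = CGRectMake(col * block, row * block, block, block);
        CGImageRef tile = CGImageCreateWithImageInRect(cgImage, square);
        CALayer *layer = [CALayer layer];
        layer.contents = (id)tile;       // the cropped square
        layer.frame = square;            // place at its original location
        [containerView.layer addSublayer:layer];
        CGImageRelease(tile);
    }
}
// Later, set each layer.position to its shuffled destination; wrap the
// changes in [CATransaction begin] / [CATransaction setDisableActions:YES]
// if you don't want the implicit move animation.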

Divide UIImage into two parts along a UIBezierPath

How can I divide this UIImage along the black line into two parts? The upper contour is defined by a UIBezierPath.
I need to get the two resulting UIImages. Is this possible?
The following set of routines create versions of a UIImage with either only the content inside a path, or only content outside that path.
Both make use of the compositeImage method, which uses CGBlendMode. CGBlendMode is very powerful for masking anything you can draw against anything else you can draw. Calling compositeImage: with other blend modes can have interesting (if not always useful) effects. See the CGContext Reference for all the modes.
The clipping method I described in my comment to your OP does work and is probably faster, but only if you have UIBezierPaths defining all the regions you want to clip.
- (UIImage*) compositeImage:(UIImage*) sourceImage onPath:(UIBezierPath*) path usingBlendMode:(CGBlendMode) blend
{
    // Create a new image context of the same size as the source.
    UIGraphicsBeginImageContext([sourceImage size]);
    // First draw an opaque path...
    [path fill];
    // ...then composite with the image.
    [sourceImage drawAtPoint:CGPointZero blendMode:blend alpha:1.0];
    // With drawing complete, store the composited image for later use.
    UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
    // Graphics contexts must be ended manually.
    UIGraphicsEndImageContext();
    return maskedImage;
}
- (UIImage*) maskImage:(UIImage*) sourceImage toAreaInsidePath:(UIBezierPath*) maskPath
{
    return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceIn];
}
- (UIImage*) maskImage:(UIImage*) sourceImage toAreaOutsidePath:(UIBezierPath*) maskPath
{
    return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceOut];
}
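For the question as asked, getting the two parts is then just (assuming sourceImage and contourPath stand in for your image and path):
UIImage *upperPart = [self maskImage:sourceImage toAreaInsidePath:contourPath];
UIImage *lowerPart = [self maskImage:sourceImage toAreaOutsidePath:contourPath];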
I tested clipping, and in a few different tests it was 25% slower than masking to achieve the same result as the maskImage:toAreaInsidePath: method in my other answer. For completeness I include it here, but please don't use it without a good reason.
- (UIImage*) clipImage:(UIImage*) sourceImage toPath:(UIBezierPath*) path
{
    // Create a new image context of the same size as the source.
    UIGraphicsBeginImageContext([sourceImage size]);
    // Clipping means drawing only happens within the path.
    [path addClip];
    // Draw the image to the context.
    [sourceImage drawAtPoint:CGPointZero];
    // With drawing complete, store the clipped image for later use.
    UIImage *clippedImage = UIGraphicsGetImageFromCurrentImageContext();
    // Graphics contexts must be ended manually.
    UIGraphicsEndImageContext();
    return clippedImage;
}
This can be done, but it requires a little coordinate geometry. Let's consider the case of the upper image. First, determine the bottommost end point of the UIBezierPath and use UIGraphicsBeginImageContext to get the top part of the image above the line.
Now, assuming that your line is straight, move pixel by pixel along the line, drawing vertical strokes of clearColor (the loop below handles the top portion; proceed along similar lines for the bottom portion):
for (int currentPixel_x = 0; currentPixel_x < your_ui_image_top.size.width; currentPixel_x++) {
    UIGraphicsBeginImageContext(your_ui_image_top.size);
    [your_ui_image_top drawInRect:CGRectMake(0, 0, your_ui_image_top.size.width, your_ui_image_top.size.height)];
    CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 1.0);
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeClear);
    CGContextSetStrokeColorWithColor(UIGraphicsGetCurrentContext(), [UIColor clearColor].CGColor);
    CGContextBeginPath(UIGraphicsGetCurrentContext());
    CGContextMoveToPoint(UIGraphicsGetCurrentContext(), currentPixel_x, m*currentPixel_x + c);
    CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPixel_x, your_ui_image_top.size.height);
    CGContextStrokePath(UIGraphicsGetCurrentContext());
    your_ui_image_top = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Your UIBezierPath will have to be converted to a straight line of the form y = m*x + c. The x in this equation is currentPixel_x above. Iterate through the width of the image, increasing currentPixel_x by 1 each time. next_y_point_on_your_line will be calculated as:
next_y_point_on_your_line = m*currentPixel_x + c
Each vertical stroke will be 1 pixel wide, and its height will depend on how you traverse through them. After some iterations, the area of the image below the line will have been cleared.
There are multiple ways of how you draw the clear strokes and this is just one way of going about it. You can also have clear strokes that are parallel to the given path if it gives better results.
Another way is to set the alpha of the pixels below the line to 0.
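A minimal sketch of that alpha approach, assuming the dividing line is y = m*x + c in pixel coordinates (top-left origin) and image is your source UIImage (hypothetical names):
// Redraw the image into a bitmap context whose pixels we can edit directly.
CGImageRef cg = image.CGImage;
size_t w = CGImageGetWidth(cg);
size_t h = CGImageGetHeight(cg);
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, w * 4, cs,
                                         kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), cg);
uint8_t *pixels = CGBitmapContextGetData(ctx);
for (size_t x = 0; x < w; x++) {
    // The first buffer row is the top of the image, so rows >= yLine are "below".
    size_t yLine = (size_t)MAX(0.0, MIN((double)h, m * x + c));
    for (size_t y = yLine; y < h; y++) {
        uint8_t *p = pixels + (y * w + x) * 4;
        p[0] = p[1] = p[2] = p[3] = 0;   // premultiplied alpha: clear all channels
    }
}
CGImageRef topCG = CGBitmapContextCreateImage(ctx);
UIImage *topPart = [UIImage imageWithCGImage:topCG];
CGImageRelease(topCG);
CGContextRelease(ctx);
CGColorSpaceRelease(cs);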

CALayer and Off-Screen Rendering

I have a paging UIScrollView with a contentSize large enough to hold a number of small UIScrollViews for zooming. The viewForZoomingInScrollView is a view controller's view that holds a CALayer for drawing a PDF page onto. This allows me to navigate through a PDF much like the iBooks PDF reader.
The code that draws the PDF (Tiled Layers) is located in:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx;
And simply adding a 'page' to the visible screen calls this method automatically. When I change page there is some delay before all the tiles are drawn, even though the object (page) has already been created.
What I want to be able to do is render the next page before the user scrolls to it, thus preventing the visible tiling effect. However, I have found that if the layer is located offscreen, adding it to the scroll view doesn't call drawLayer.
Any Ideas/common gotchas here?
I have tried:
[viewController.view.layer setNeedsLayout];
[viewController.view.layer setNeedsDisplay];
NB: The fact that this replicates the iBooks functionality is irrelevant within the context of the full app.
As I mentioned above, CALayers don't render if they are offscreen.
I ended up not drawing the PDF directly to the layer. Instead, I render the PDF page to an image when needed (rendering the focused page plus one page on either side).
Here is the render code:
- (UIImage *)renderPDFPageToImage:(int)pageNumber // candidate for an NSOperation?
{
    // you may not want to permanently (app-lifetime) retain the document ref
    CGSize size = CGSizeMake(x, y); // x, y: your target page size in points
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // flip the context; 750 here should match the page height in points
    CGContextTranslateCTM(context, 0, 750);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGPDFPageRef page; // move to a class member
    page = CGPDFDocumentGetPage(myDocumentRef, pageNumber);
    CGContextDrawPDFPage(context, page);
    UIImage *pdfImage = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased
    UIGraphicsEndImageContext();
    return pdfImage;
}

CGContextDrawPDFPage displays white or garbled text

In the process of updating my iPad app I've been attempting to draw a page from an existing PDF document into a Core Graphics context, then save it as a new PDF, but I'm having difficulty getting the text to display properly. Images in the newly-created PDF look great, but text rarely appears correctly: more often than not it appears white/invisible or garbled. When the text is invisible, I am still able to select where it -should- be and copy/paste correctly into a text editor. Is this an issue related to the limited number of fonts available on the iPad?
My code is as follows:
CGPDFDocumentRef document = CGPDFDocumentCreateWithProvider(dataProvider);
CGPDFPageRef page = CGPDFDocumentGetPage(document, pageNumberToRetrieve);
CGRect pageRect = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);
UIGraphicsBeginPDFContextToFile(pathToFile, pageRect, nil);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextBeginPage(context, NULL);
// I don't think this line is necessary, but I have tried both with and without it.
CGContextSetTextDrawingMode (context, kCGTextFill);
CGContextDrawPDFPage(context, page);
CGContextEndPage(context);
UIGraphicsEndPDFContext();
CGDataProviderRelease(dataProvider);
CGPDFDocumentRelease(document);
If anyone has any suggestions I would greatly appreciate hearing them.
Thanks for your time.
Rob
Drawing into an image context does not pose a problem (text displays correctly).
What I am trying to do is create a -new- PDF file containing just a few pages from the original PDF. It seems that text does not draw correctly into the new file for some reason.
The information is there (I can select text by 'guessing' where it should be in Preview) but it doesn't render. I assume CGContextDrawPDFPage writes the string to the PDF file, but doesn't draw it because it doesn't know what the characters of that font 'look like'?
I thought the point of embedded fonts in PDFs was that programs would be able to perform these sorts of manipulations even if that font wasn't installed on the system (in this case, the iPad). Is this a limitation of the format, or the Quartz framework?
Do you want to render on the screen? If so, I don't see the need for UIGraphicsBeginPDFContextToFile.
However, to render on the screen, you can use something like this:
pageReference = CGPDFDocumentGetPage(pdfReference, page);
CGContextRef context = UIGraphicsGetCurrentContext();
@try {
    CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
    CGContextScaleCTM(context, scale, -scale);
    CGContextSaveGState(context);
    @try {
        CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(pageReference, kCGPDFCropBox, self.bounds, 0, true);
        CGContextConcatCTM(context, pdfTransform);
        CGContextDrawPDFPage(context, pageReference);
    }
    @finally {
        CGContextRestoreGState(context);
    }
}
@finally {
    UIGraphicsEndImageContext();
}