Drawing a layer shadow to a PDF context - objective-c

I have a bunch of UIViews to which I add shadows via their layers, in their drawRect method:
self.layer.shadowPath = path;
self.layer.shadowColor = [[UIColor blackColor] CGColor];
self.layer.shadowOpacity = .6;
self.layer.shadowOffset = CGSizeMake(2,3);
self.layer.shadowRadius = 2;
This works well, but my problem is I also need to create a PDF with those views.
I'm doing this by creating a PDF context and passing it to the drawing method so that the drawing happens in the PDF context.
That also works well, except that the shadows are not rendered in the PDF. I've experimented with a couple approaches, but haven't managed to find a proper, easy way to get those shadows to appear where they belong in the PDF.
Would anyone know how to do this?
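For reference, a minimal sketch of the kind of PDF render pass described above; renderViewsIntoContext: is a hypothetical stand-in for the drawing method that the PDF context gets passed to:
NSMutableData *pdfData = [NSMutableData data];
UIGraphicsBeginPDFContextToData(pdfData, CGRectMake(0, 0, 612, 792), nil);
UIGraphicsBeginPDFPage();
// The same drawing code runs here, but against the PDF context.
[self renderViewsIntoContext:UIGraphicsGetCurrentContext()];
UIGraphicsEndPDFContext();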

You will need to make the relevant Core Graphics calls in drawRect: to draw the shadows, rather than using the CALayer properties.
Check out the Apple docs on shadows.
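A minimal sketch of what that could look like, assuming path is the same CGPath used for shadowPath above and the white fill is just a placeholder:
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSaveGState(ctx);
    // Mirror the layer properties above: offset (2,3), blur 2, black at 60% opacity.
    CGContextSetShadowWithColor(ctx, CGSizeMake(2, 3), 2,
                                [[UIColor colorWithWhite:0 alpha:0.6] CGColor]);
    CGContextAddPath(ctx, path);
    CGContextSetFillColorWithColor(ctx, [[UIColor whiteColor] CGColor]);
    CGContextFillPath(ctx);
    CGContextRestoreGState(ctx);
}
Because the shadow state is set on the context itself, it is honored by whatever context is current, including a PDF context.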


how to customize the input of drawRect

Here's my drawRect code:
- (void)drawRect:(CGRect)rect {
    if (currentLayer) {
        [currentLayer removeFromSuperlayer];
    }
    if (currentPath) {
        currentLayer = [[CAShapeLayer alloc] init];
        currentLayer.frame = self.bounds;
        currentLayer.path = currentPath.CGPath;
        if ([SettingsManager shared].isColorInverted) {
            currentLayer.strokeColor = [UIColor blackColor].CGColor;
        } else {
            currentLayer.strokeColor = [UIColor whiteColor].CGColor;
        }
        currentLayer.strokeColor = _strokeColorX.CGColor;
        currentLayer.fillColor = [UIColor clearColor].CGColor;
        currentLayer.lineWidth = self.test;
        currentLayer.contentsGravity = kCAGravityCenter;
        [self.layer addSublayer:currentLayer];
        //currentLayer.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
    }
    //[currentPath stroke];
}
"The portion of the view’s bounds that needs to be updated"
That's from the Apple dev docs.
I am dealing with a UIView and a device called a Slate; the Slate can record pencil drawings into my iOS app. I managed to get it working on the entire view, but I don't want the entire view to be filled with the Slate input. Instead, I'd like to modify the UIView to have a height of the phone's screen height minus 40 px. Makes sense?
I found out that if I set the frame of my currentLayer to new bounds, the area that can be drawn on is resized accordingly.
I think you're mixing up the meaning of the various rectangles in UIView. The drawRect parameter designates a partial section of the view that needs to be redrawn. This can be because you called setNeedsDisplay(rect), or because iOS thinks it needs to be redrawn (e.g. because your app's drawings were discarded while the screen was locked, and now the user has unlocked the screen and the current drawing is needed).
This rectangle has nothing to do with the size at which your contents are drawn. The size of the area your view is drawn in is controlled by its frame and bounds, the former of which is usually controlled using Auto Layout (i.e. layout constraint objects).
In general, while in drawRect, you look at the bounds of your view to get the full rectangle to draw in. The parameter to drawRect, on the other hand, is there to allow you to optimize your drawing to only redraw the parts that actually changed.
Also, it is in general a bad idea to manipulate the view and layer hierarchies from inside drawRect. The UIView drawing mechanism expects you to only draw your current view's state there. Since it recursively walks the list of views and subviews to call drawRect, changing the list of views or layers can cause weird side effects, like views being skipped until the next redraw.
Create and add new layers in your state-changing methods or event handling methods, and position them using auto layout or from inside layoutSubviews.
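A rough sketch of that split, reusing the currentLayer and currentPath names from the question; updateCurrentPath: is a hypothetical method standing in for whatever updates the drawing state:
- (void)updateCurrentPath:(UIBezierPath *)path {
    currentPath = path;
    if (!currentLayer) {
        // Create and attach the layer once, outside of drawRect:.
        currentLayer = [CAShapeLayer layer];
        [self.layer addSublayer:currentLayer];
    }
    currentLayer.path = currentPath.CGPath;
    [self setNeedsLayout];
}

- (void)layoutSubviews {
    [super layoutSubviews];
    // Size the drawable area here, e.g. 40 pt shorter than the view, as described above.
    currentLayer.frame = CGRectMake(0, 0, CGRectGetWidth(self.bounds),
                                    CGRectGetHeight(self.bounds) - 40);
}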

iOS 7 iPad SpriteKit - Use irregular SKShapeNode as mask for SKCropNode

Does anyone know how to crop an image with SpriteKit for iOS using an irregular shape node? The problem is that when I use an SKCropNode on it, the shape has 2 layers, and so the cropping fails; to crop, the mask must be a single layer. Any idea how to rasterize a shape first, before the scene is loaded? I've tried an SKEffectNode with shouldRasterize on it, but that fails too, most likely because it contains 2 children as well, or because the rasterization occurs after the scene is loaded. I've also tried converting the shape to a texture, but that fails for the same reasons as the SKEffectNode does. I've looked at other possible solutions on Stack Overflow, and none seem to work or are limited to squares, so I'm thinking this is a bug that exists only in iOS 7. Please don't mark this as a duplicate without letting me check the suggested duplicate first to make sure it really is one.
Right now all signs point to not using an SKShapeNode with a fill to crop an image.
I basically kept wrapping it until it worked. The end-all-be-all solution seemed to be creating an SKEffectNode and adding my SKShapeNode as a child of the SKEffectNode. That wasn't enough, though: I also had to wrap the SKEffectNode in an SKSpriteNode, and then, finally, add the SKSpriteNode as the maskNode of the SKCropNode. See the code below; it includes additional settings that may be relevant to the solution as well.
SKCropNode *cropNode = [[SKCropNode alloc] init];
// Wrap the shape in an effect node and rasterize it so the mask is a single flattened layer.
SKEffectNode *rasterEffectNode = [SKEffectNode node];
[rasterEffectNode addChild:ballShape];
rasterEffectNode.shouldRasterize = YES;
// The mask shape is filled, with no visible stroke.
ballShape.lineWidth = 0;
ballShape.antialiased = YES;
[ballShape setStrokeColor:[SKColor clearColor]];
ballShape.fillColor = [SKColor blackColor];
// Wrapping the effect node in a plain sprite node was also needed before using it as the mask.
SKSpriteNode *spriteWrapperFix = [SKSpriteNode node];
[spriteWrapperFix addChild:rasterEffectNode];
[cropNode setMaskNode:spriteWrapperFix];
[cropNode addChild:sprite];
[container addChild:cropNode];

Drawing shadow around NSImageView

I'm trying to draw a shadow under an NSImageView. I'm accessing the CALayer of the view and setting its shadow properties there:
[self.originalImageView setWantsLayer:YES];
CALayer *imageLayer = self.originalImageView.layer;
[imageLayer setShadowRadius:5.f];
[imageLayer setShadowOffset:CGSizeZero];
[imageLayer setShadowOpacity:0.5f];
[imageLayer setShadowColor:CGColorCreateGenericGray(0, 1)];
imageLayer.masksToBounds = NO;
But even though I have set masksToBounds to NO, the shadow is clipped: it is displayed perfectly horizontally, but vertically it's cut off. The image is aspect-fitted, and both the image and the image view can be arbitrary sizes. If I try it with NSShadow instead of the layer's shadow, I get exactly the same result. I could use myView.clipsToBounds = NO on iOS and it used to solve this problem, but I can't find that property on Mac.
How can I draw a shadow under an arbitrarily-sized NSImageView without clipping?
With the latest Xcode, I've discovered that I can apply CI filters to views (I don't know whether that feature was available before), and I went ahead and applied a shadow to my image view on the container view's layer. It works perfectly. Bottom line: if I apply the filter on the image view's own layer, the same clipping happens, but if I apply it on the container view's layer, it works!
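For illustration, a rough sketch of one way to put the shadow on a wrapping container's layer instead of on the image view's own layer; containerView is an assumed name for a plain NSView that holds the image view, and this uses layer shadow properties rather than a CI filter:
[containerView setWantsLayer:YES];
CALayer *containerLayer = containerView.layer;
containerLayer.shadowRadius = 5;
containerLayer.shadowOffset = CGSizeZero;
containerLayer.shadowOpacity = 0.5;
containerLayer.shadowColor = [[NSColor blackColor] CGColor];
// The container is larger than the image view, so the shadow has room to render.
containerLayer.masksToBounds = NO;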

iOS Newbie: Need help understanding Core Graphics Layers

A bit of a newbie question for iOS: I just started learning about Core Graphics, but I'm still not sure how it's integrated into the actual view.
So I have a method that creates a CGLayer ref, draws some bezier curves on it, and then returns the CGLayer; here's a very condensed version of my procedure:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGLayerRef layer = CGLayerCreateWithContext(ctx, CGSizeMake(300.0,300.0), NULL);
CGContextRef layerCtx = CGLayerGetContext(layer);
Then I add some bezier fills that are dynamically generated by parsing an SVG file.
I've already verified that the SVG is being parsed properly, and I am using this code http://ponderwell.net/2011/05/converting-svg-paths-to-objective-c-paths/ to parse the path instructions (from the path tags' d parameters) into UIBezierPath objects
So I start by creating the fill
CGColorRef color = [UIColor colorWithRed:red green:green blue:blue alpha:1.0].CGColor;
CGContextSetFillColorWithColor(layerCtx, color);
Then adding the UIBezierPath
SvgToBezier *bezier = [[SvgToBezier alloc] initFromSVGPathNodeDAttr:pathAttr rect:CGRectMake(0, 0, 300, 300)];
UIBezierPath *bezierPath = bezier.bezier;
CGContextAddPath(layerCtx, bezierPath.CGPath);
CGContextDrawPath(layerCtx, kCGPathFill);
And finally I return the CGLayerRef like so
return layer;
My question: Now that I have this layer ref, I want to draw it to a UIView (or whatever type of object would be best to draw to)
The ultimate goal is to have the layer show up in a UITableViewCell.
Also, the method I've described above exists in a class I've created from scratch (which inherits from NSObject). There is separate method which uses this method to assign the layer ref to a property of the class instance.
I would be creating this instance from the view controller connected to my table view. Then hopefully adding the layer to a UITableView cell within the "cellForRowAtIndexPath" method.
I hope I included all of the relevant information and I greatly appreciate your help.
Cheers
The short answer is that the layer of the view you want to draw on needs to be set to your newly created layer.
[myCurrentViewController.view setLayer:newLayer];
The long answer can be found in Apple's excellent Quartz 2D Programming Guide.
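As an alternative sketch (not part of the answer above): a CGLayerRef can also be drawn into whatever context is current, for example from a custom UIView's drawRect:, where vectorLayer is a hypothetical property holding the layer returned by the method in the question:
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Composite the previously built CGLayer into this view's backing context.
    CGContextDrawLayerAtPoint(ctx, CGPointZero, self.vectorLayer);
}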

Saving photo with added overlay to photo library

I'm making an app where the user takes a photo, an overlay is placed over the image, and the user can then save the image or upload it to Facebook or other sites.
I have managed to get the app to take the photo, and to make the overlay I am using a UIImageView, which is placed over the top of the photo.
I'm not sure how exactly I can export the image, with the overlay, as one file, to the photo library, or to Facebook.
This should give you the idea. This code may need a tweak or two, though. (I admit that I haven't run it.)
Your UIImageView's base layer can render itself into a Core Graphics bitmap context (this will include all the sublayers, of course), which can then provide you with a new UIImage.
UIImage *finalImage;
// Get the layer
CALayer *layer = [myImageView layer];
// Create a bitmap context and make it the current context
UIGraphicsBeginImageContext(layer.bounds.size);
// Get a reference to the context
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Ask the layer to render itself (and its sublayers) into the context
[layer renderInContext:ctx];
// Then ask the context itself for the image
finalImage = UIGraphicsGetImageFromCurrentImageContext();
// Finally, pop the context off the stack
UIGraphicsEndImageContext();
These functions are all described in the UIKit Function Reference.
There's also a slightly more complicated way that gives you more control over things like color, which involves creating a CGBitmapContext and getting a CGImageRef, from which you can again produce a UIImage. That is described in the Quartz 2D Programming Guide, section "Creating a Bitmap Graphics Context".
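For completeness, a rough sketch of that lower-level route; the size, color space, and alpha settings here are assumptions, not taken from the guide:
CGSize size = myImageView.layer.bounds.size;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapCtx = CGBitmapContextCreate(NULL, (size_t)size.width, (size_t)size.height,
                                               8, 0, colorSpace,
                                               (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
// Core Graphics bitmap contexts are flipped relative to UIKit, so flip before rendering.
CGContextTranslateCTM(bitmapCtx, 0, size.height);
CGContextScaleCTM(bitmapCtx, 1, -1);
[myImageView.layer renderInContext:bitmapCtx];
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapCtx);
UIImage *result = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(bitmapCtx);
CGColorSpaceRelease(colorSpace);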