Flatten CALayer sublayers into one layer - objective-c

In my app I have one root layer, and many images which are sublayers of rootLayer. I'd like to flatten all the sublayers of rootLayer into a single layer/image that doesn't have any sublayers. I think I should do this by drawing all the sublayers into a Core Graphics context, but I don't know how to do that.
I hope you'll understand me, and sorry for my English.

From your own example for Mac OS X:
CGContextRef myBitmapContext = MyCreateBitmapContext(800, 600);
[rootLayer renderInContext:myBitmapContext];
CGImageRef myImage = CGBitmapContextCreateImage(myBitmapContext);
rootLayer.contents = (id)myImage;
rootLayer.sublayers = nil;
CGImageRelease(myImage);
CGContextRelease(myBitmapContext); // balance the context created above, or it leaks
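MyCreateBitmapContext is the bitmap-context helper from Apple's Quartz 2D Programming Guide and isn't defined above; a minimal Swift sketch of an equivalent (assuming sRGB and premultiplied alpha) might be:
func makeBitmapContext(width: Int, height: Int) -> CGContext? {
    // bytesPerRow of 0 lets Core Graphics choose an optimal stride
    return CGContext(data: nil,
                     width: width,
                     height: height,
                     bitsPerComponent: 8,
                     bytesPerRow: 0,
                     space: CGColorSpace(name: CGColorSpace.sRGB)!,
                     bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
}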
iOS:
UIGraphicsBeginImageContextWithOptions(rootLayer.bounds.size, NO, 0.0);
[rootLayer renderInContext: UIGraphicsGetCurrentContext()];
UIImage *layerImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
rootLayer.contents = (id) layerImage.CGImage;
rootLayer.sublayers = nil;
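On modern iOS, where the UIGraphicsBeginImageContext family is deprecated, the same flattening can be written with UIGraphicsImageRenderer; a Swift sketch, not part of the original answer:
let renderer = UIGraphicsImageRenderer(bounds: rootLayer.bounds)
let layerImage = renderer.image { ctx in
    // renders rootLayer and all of its sublayers into the context
    rootLayer.render(in: ctx.cgContext)
}
rootLayer.contents = layerImage.cgImage
rootLayer.sublayers = nil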
Also note the caveat in the docs:
The Mac OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values.

Do you still want the layer to be interactive? If not, call -renderInContext: and show the bitmap context.

I also once considered using render(in: CGContext).
However, I decided not to after reading this:
Important
The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of macOS may add support for rendering these layers and properties.
Seriously, no masks? I say, "WORK, Apple."
This caveat is too big to ignore, so I had to find a different approach.
solution
extension CALayer {
    func flatten(in size: CGSize) -> CALayer {
        let flattenedLayer = CALayer(in: size)
        flattenedLayer.contents = getCGImage(in: size)
        return flattenedLayer
    }
}
It converts the CALayer into a CGImage and makes a new CALayer with the flattened image as its contents.
Of course, I had to make getCGImage(in: CGSize) as well, using CARenderer.
implementation
import AppKit
import Metal

let device = MTLCreateSystemDefaultDevice()!
let context = CIContext()
var textureDescriptor: MTLTextureDescriptor = {
    let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: 0, height: 0, mipmapped: false)
    textureDescriptor.usage = [MTLTextureUsage.shaderRead, .shaderWrite, .renderTarget]
    return textureDescriptor
}()
let maxScaleFactor = NSScreen.screens.reduce(1) { max($0, $1.backingScaleFactor) }
let scaleTransform = CATransform3DMakeScale(maxScaleFactor, maxScaleFactor, 1)
let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)!

extension CALayer {
    convenience init(in size: CGSize, with sublayer: CALayer? = nil) {
        self.init()
        frame.size = size
        if let sublayer { addSublayer(sublayer) }
    }

    func getCGImage(in size: CGSize) -> CGImage {
        let ciImage = getCIImage(in: size)
        return context.createCGImage(ciImage, from: ciImage.extent)!
    }

    func getCIImage(in size: CGSize) -> CIImage {
        let superlayer = self.superlayer
        let ciImage = CALayer(in: size, with: self).ciImage
        if let superlayer { superlayer.addSublayer(self) }
        return ciImage
    }

    var ciImage: CIImage {
        let width = frame.size.width * maxScaleFactor
        let height = frame.size.height * maxScaleFactor
        textureDescriptor.width = Int(width)
        textureDescriptor.height = Int(height)
        let texture: MTLTexture = device.makeTexture(descriptor: textureDescriptor)!
        let renderer = CARenderer(mtlTexture: texture)
        transform = scaleTransform
        frame = .init(origin: .zero, size: .init(width: width, height: height))
        renderer.bounds = frame
        renderer.layer = self
        CATransaction.flush()
        CATransaction.commit()
        renderer.beginFrame(atTime: 0, timeStamp: nil)
        renderer.render()
        renderer.endFrame()
        return CIImage(mtlTexture: texture, options: [.colorSpace: colorSpace])!
    }
}
There are a few things to note:
- a CATransform3D scale is applied to match the resolution of HiDPI displays
- in getCIImage(in:), the layer is temporarily added as a sublayer of a fresh CALayer; this handles layers like CAShapeLayer whose frame size is zero, and also makes the image independent of the layer's frame origin/position
- CATransaction.flush() transfers ownership of the layer tree to the CARenderer's context
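For example, a hypothetical call site (rootLayer is assumed to be a layer with many image sublayers):
let flattened = rootLayer.flatten(in: rootLayer.bounds.size)
// flattened is a single CALayer with no sublayers; its contents is one
// composited CGImage, so it can stand in wherever rootLayer was used.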
extras
Since CALayer can have NSImage as its contents too in macOS 10.6 and later, you can change the function accordingly.
extension CALayer {
    func getNSImage(in size: CGSize) -> NSImage {
        let ciImage = getCIImage(in: size)
        return NSImage(data: ciImage.pngData)!
    }
}

extension CIImage {
    var pngData: Data { context.pngRepresentation(of: self, format: .RGBA8, colorSpace: colorSpace!)! }
    // jpeg, tiff, etc. can be created the same way
}
Having come this far, saving as an image file is not hard at all; using write...Representation(of: CIImage, ...) is probably the easiest way:
extension CALayer {
    func saveAsPNG(at url: URL, in size: CGSize? = nil) {
        let image = size == nil ? ciImage : getCIImage(in: size!)
        try! context.writePNGRepresentation(of: image, to: url, format: .RGBA8, colorSpace: colorSpace)
    }
    // jpeg, tiff, etc. can be saved the same way
}

Related

Translate detected rectangle from portrait CIImage to landscape CIImage

I am modifying code found here. In the code we are capturing video from the phone camera using AVCaptureSession and using CIDetector to detect a rectangle in the image feed. The feed has an image which is 640x842 (iPhone 5 in portrait). We then do an overlay on the image so the user can see the detected rectangle (actually it's a trapezoid most of the time).
When the user presses a button on the UI, we capture an image from the video and re-run the rectangle detection on this larger image (3264x2448), which as you can see is landscape. We then do a perspective transform on the detected rectangle and crop the image.
This is working pretty well, but the issue I have is that in roughly 1 out of 5 captures the rectangle detected on the larger image is different to the one detected (and presented to the user) on the smaller image. Even though I only capture when I detect the phone is (relatively) still, the final image then does not represent the rectangle the user expected.
To resolve this, my idea is to use the coordinates of the originally captured rectangle and translate them to a rectangle on the captured still image. This is where I'm struggling.
I tried this with the detected rectangle:
CGFloat radians = -90 * (M_PI/180);
CGAffineTransform rotation = CGAffineTransformMakeRotation(radians);
CGRect rect = CGRectMake(detectedRect.bounds.origin.x, detectedRect.bounds.origin.y, detectedRect.bounds.size.width, detectedRect.bounds.size.height);
CGRect rotatedRect = CGRectApplyAffineTransform(rect, rotation);
So given a detected rect:
TopLeft: 88.213425, 632.31329
TopRight: 545.59302, 632.15546
BottomRight: 575.57819, 369.22321
BottomLeft: 49.973862, 369.40466
I now have this rotated rect:
origin = (x = 369.223206, y = -575.578186)
size = (width = 263.090088, height = 525.604309)
How do I translate the rotated rectangle coordinates in the smaller portrait image to coordinates in the 3264x2448 image?
Edit
Duh… reading my own approach, I realised that creating a rectangle out of a trapezoid will not solve my problem!
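Building on that realisation, the corners probably need to be mapped individually; a Swift sketch of the idea (the rotation direction and any mirroring depend on the capture orientation, so this needs verifying against real captures):
func mapCorner(_ p: CGPoint,
               fromPortrait small: CGSize,          // e.g. 640 x 852
               toLandscape large: CGSize) -> CGPoint { // e.g. 3264 x 2448
    // Uniform content scale: the preview's width spans the still's height.
    let scale = large.height / small.width
    // One possible 90-degree rotation in Core Image's bottom-left-origin space.
    return CGPoint(x: p.y * scale, y: (small.width - p.x) * scale)
}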
Supporting code to detect the rectangle etc...
// In this method we detect a rect from the video feed and overlay
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    // image is 640x852 on iphone5
    NSArray *rects = [[CIDetector detectorOfType:CIDetectorTypeRectangle context:nil options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}] featuresInImage:image];
    CIRectangleFeature *detectedRect = rects[0];
    // draw overlay on image code....
}
This is a summarized version of how the still image is obtained:
// code block to handle output from AVCaptureStillImageOutput
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
    CIImage *enhancedImage = [[CIImage alloc] initWithData:imageData options:@{kCIImageColorSpace: [NSNull null]}];
    imageData = nil;
    CIRectangleFeature *rectangleFeature = [self getDetectedRect:[[self highAccuracyRectangleDetector] featuresInImage:enhancedImage]];
    if (rectangleFeature) {
        enhancedImage = [self correctPerspectiveForImage:enhancedImage withTopLeft:rectangleFeature.topLeft andTopRight:rectangleFeature.topRight andBottomRight:rectangleFeature.bottomRight andBottomLeft:rectangleFeature.bottomLeft];
    }
}];
Thank you.
I had the same issue while doing this kind of thing. I resolved it with the Swift code below; take a look and see if it helps you.
if let videoConnection = stillImageOutput.connection(withMediaType: AVMediaTypeVideo) {
    stillImageOutput.captureStillImageAsynchronously(from: videoConnection) {
        (imageDataSampleBuffer, error) -> Void in
        let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
        var img = UIImage(data: imageData!)!
        let outputRect = self.previewLayer?.metadataOutputRectOfInterest(for: (self.previewLayer?.bounds)!)
        let takenCGImage = img.cgImage
        let width = (takenCGImage?.width)!
        let height = (takenCGImage?.height)!
        let cropRect = CGRect(x: (outputRect?.origin.x)! * CGFloat(width), y: (outputRect?.origin.y)! * CGFloat(height), width: (outputRect?.size.width)! * CGFloat(width), height: (outputRect?.size.height)! * CGFloat(height))
        let cropCGImage = takenCGImage!.cropping(to: cropRect)
        img = UIImage(cgImage: cropCGImage!, scale: 1, orientation: img.imageOrientation)
        let cropViewController = TOCropViewController(image: self.cropToBounds(image: img))
        cropViewController.delegate = self
        self.navigationController?.pushViewController(cropViewController, animated: true)
    }
}

How do I take Cropped screenshot with Retina image quality in my snapshot implementation in swift

I am trying to take a screenshot of my UIView, crop it, and save it to my photo library. As I try to do this, there are three problems:
(1) I want the screenshot to include the blur, but the blur filter never gets saved in the screenshot.
(2) The image quality is very low.
(3) I am not able to crop the image.
This is my code -
@IBAction func Screenshot(_ sender: UIButton) {
    // Declare the snapshot boundaries
    let top: CGFloat = 70
    let bottom: CGFloat = 400

    // The size of the cropped image
    let size = CGSize(width: view.frame.size.width, height: view.frame.size.height - top - bottom)

    // Start the context
    UIGraphicsBeginImageContext(size)

    // we are going to use context in a couple of places
    let context = UIGraphicsGetCurrentContext()!

    // Transform the context so that anything drawn into it is displaced "top" pixels up
    // Something drawn at coordinate (0, 0) will now be drawn at (0, -top)
    // This will result in the "top" pixels being cut off
    // The bottom pixels are cut off because the context is shorter than the view
    context.translateBy(x: 0, y: -top)

    // Draw the view into the context (this is the snapshot)
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
    let snapshot = UIGraphicsGetImageFromCurrentImageContext()

    // End the context (this is required to not leak resources)
    UIGraphicsEndImageContext()

    // Save to photos
    UIImageWriteToSavedPhotosAlbum(snapshot!, nil, nil, nil)
}
You said:
I want to take Screenshot with Blur in it, As blur filter never gets saved in the screenshot.
I wonder if the view being snapshotted might not be the one with the UIVisualEffectView as a subview. Because when I use the code at the end of the answer, the blur effect (and the impact of changing the fractionComplete) is captured.
The image quality is very low.
If you use UIGraphicsBeginImageContextWithOptions with a scale of zero, it should capture the image at the resolution of the device:
UIGraphicsBeginImageContextWithOptions(size, isOpaque, 0)
I am not able to crop the image.
I personally capture the whole view, and then crop as needed. See UIView extension below.
In Swift 3:
class ViewController: UIViewController {

    var animator: UIViewPropertyAnimator?

    @IBOutlet weak var imageView: UIImageView!

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        let blur = UIBlurEffect(style: .light)
        let effectView = UIVisualEffectView(effect: blur)
        view.addSubview(effectView)
        effectView.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            effectView.leadingAnchor.constraint(equalTo: imageView.leadingAnchor),
            effectView.trailingAnchor.constraint(equalTo: imageView.trailingAnchor),
            effectView.topAnchor.constraint(equalTo: imageView.topAnchor),
            effectView.bottomAnchor.constraint(equalTo: imageView.bottomAnchor)
        ])
        animator = UIViewPropertyAnimator(duration: 0, curve: .linear) { effectView.effect = nil }
    }

    @IBAction func didChangeValueForSlider(_ sender: UISlider) {
        animator?.fractionComplete = CGFloat(sender.value)
    }

    @IBAction func didTapSnapshotButton(_ sender: AnyObject) {
        if let snapshot = view.snapshot(of: imageView.frame) {
            UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, nil)
        }
    }
}

extension UIView {

    /// Create snapshot
    ///
    /// - parameter rect: The `CGRect` of the portion of the view to return. If `nil` (or omitted),
    ///                   return snapshot of the whole view.
    ///
    /// - returns: Returns `UIImage` of the specified portion of the view.
    func snapshot(of rect: CGRect? = nil) -> UIImage? {
        // snapshot entire view
        UIGraphicsBeginImageContextWithOptions(bounds.size, isOpaque, 0)
        drawHierarchy(in: bounds, afterScreenUpdates: true)
        let wholeImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        // if no `rect` provided, return image of whole view
        guard let image = wholeImage, let rect = rect else { return wholeImage }

        // otherwise, grab specified `rect` of image
        let scale = image.scale
        let scaledRect = CGRect(x: rect.origin.x * scale, y: rect.origin.y * scale, width: rect.size.width * scale, height: rect.size.height * scale)
        guard let cgImage = image.cgImage?.cropping(to: scaledRect) else { return nil }
        return UIImage(cgImage: cgImage, scale: scale, orientation: .up)
    }
}
Or in Swift 2:
class ViewController: UIViewController {

    var animator: UIViewPropertyAnimator?

    @IBOutlet weak var imageView: UIImageView!

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)

        let blur = UIBlurEffect(style: .Light)
        let effectView = UIVisualEffectView(effect: blur)
        view.addSubview(effectView)
        effectView.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activateConstraints([
            effectView.leadingAnchor.constraintEqualToAnchor(imageView.leadingAnchor),
            effectView.trailingAnchor.constraintEqualToAnchor(imageView.trailingAnchor),
            effectView.topAnchor.constraintEqualToAnchor(imageView.topAnchor),
            effectView.bottomAnchor.constraintEqualToAnchor(imageView.bottomAnchor)
        ])
        animator = UIViewPropertyAnimator(duration: 0, curve: .Linear) { effectView.effect = nil }
    }

    @IBAction func didChangeValueForSlider(sender: UISlider) {
        animator?.fractionComplete = CGFloat(sender.value)
    }

    @IBAction func didTapSnapshotButton(sender: AnyObject) {
        if let snapshot = view.snapshot(of: imageView.frame) {
            UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, nil)
        }
    }
}

extension UIView {

    /// Create snapshot
    ///
    /// - parameter rect: The `CGRect` of the portion of the view to return. If `nil` (or omitted),
    ///                   return snapshot of the whole view.
    ///
    /// - returns: Returns `UIImage` of the specified portion of the view.
    func snapshot(of rect: CGRect? = nil) -> UIImage? {
        // snapshot entire view
        UIGraphicsBeginImageContextWithOptions(bounds.size, opaque, 0)
        drawViewHierarchyInRect(bounds, afterScreenUpdates: true)
        let wholeImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        // if no `rect` provided, return image of whole view
        guard let rect = rect, let image = wholeImage else { return wholeImage }

        // otherwise, grab specified `rect` of image
        let scale = image.scale
        let scaledRect = CGRect(x: rect.origin.x * scale, y: rect.origin.y * scale, width: rect.size.width * scale, height: rect.size.height * scale)
        guard let cgImage = CGImageCreateWithImageInRect(image.CGImage!, scaledRect) else { return nil }
        return UIImage(CGImage: cgImage, scale: scale, orientation: .Up)
    }
}
So, when I capture four images at four different slider positions, that yields:
I am not able to crop the image in the right way, as the navigation bar and status bar show with a blank (white) background. (The rest of the image crops well.)
Here is the code:
let top: CGFloat = 70
let bottom: CGFloat = 280

// The size of the cropped image
let size = CGSize(width: view.frame.size.width, height: view.frame.size.height - top - bottom)

// Start the context
UIGraphicsBeginImageContext(size)

// we are going to use context in a couple of places
let context = UIGraphicsGetCurrentContext()!

// Transform the context so that anything drawn into it is displaced "top" pixels up
// Something drawn at coordinate (0, 0) will now be drawn at (0, -top)
// This will result in the "top" pixels being cut off
// The bottom pixels are cut off because the context is shorter than the view
context.translateBy(x: 0, y: 0)

// Draw the view into the context (this is the snapshot)
UIGraphicsBeginImageContextWithOptions(size, view.isOpaque, 0)
self.view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
let snapshot = UIGraphicsGetImageFromCurrentImageContext()

Drawing a CIImage to a NSOpenGLView (without hitting main memory)

I feel like I must be missing something here. I've subclassed NSOpenGLView, and I'm attempting to draw a CIImage in the drawRect: call.
override func drawRect(dirtyRect: NSRect) {
    super.drawRect(dirtyRect)

    let start = CFAbsoluteTimeGetCurrent()

    openGLContext!.makeCurrentContext()
    let cglContext = openGLContext!.CGLContextObj
    let pixelFormat = openGLContext!.pixelFormat.CGLPixelFormatObj
    let colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB)!
    let options: [String: AnyObject] = [kCIContextOutputColorSpace: colorSpace]
    let context = CIContext(CGLContext: cglContext, pixelFormat: pixelFormat, colorSpace: colorSpace, options: options)

    context.drawImage(inputImage, inRect: self.bounds, fromRect: inputImage.extent)
    openGLContext!.flushBuffer()

    let end = CFAbsoluteTimeGetCurrent()
    Swift.print("updated view in \(end - start)")
}
I'm obviously under the mistaken impression that the NSOpenGLContext (and its underlying CGLContext) can be wrapped in a CIContext, and that rendering into that will produce an image in the view. But while work is being done in the above code, I have no idea where the actual pixels are ending up (because the view ends up blank).
If I just grab the current NSGraphicsContext and render into that, then I get an image in the NSOpenGLView, but the rendering seems to take about 10x as long (i.e. by changing the CIContext declaration to this):
// Works, but is slow
let context = NSGraphicsContext.currentContext()!.CIContext!
Also tried this, which is both slow AND doesn't actually display an image, making it a double fail:
// Doesn't work. AND it's slow.
inputImage.drawInRect(bounds, fromRect: inputImage.extent, operation: .CompositeDestinationAtop, fraction: 1.0)
The simple solution is just to render out to a CGImage, and then pop that onto the screen (via CGImage -> NSImage and then an NSImageView, or a backing CALayer). But that's not performant enough for my case. In my app, I'm looking to render a couple dozen thumbnail-sized images, each with its own chain of CIFilters, and refresh them in real time as their underlying base image changes. While they each render in a few milliseconds (with my current CGImage-bracketed pathway), the view updates are still on the order of a few frames per second.
I currently have this working with a path that looks something like CGImageRef -> CIImage -> Bunch of CIFilters -> CGImage -> assign to CALayer for display. But it appears that having CGImages at both ends of the rendering chain is killing performance.
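For context, that CGImage-bracketed path looks roughly like this (a sketch; the filter choice is illustrative):
import CoreImage

let ciContext = CIContext()   // GPU-backed by default

func filteredThumbnail(from source: CGImage, filter: CIFilter) -> CGImage? {
    filter.setValue(CIImage(cgImage: source), forKey: kCIInputImageKey)
    guard let output = filter.outputImage else { return nil }
    // createCGImage forces a readback into main memory; this is the expensive hop
    return ciContext.createCGImage(output, from: output.extent)
}
// The resulting CGImage is then assigned to a CALayer's contents, which
// uploads it to the GPU again for display.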
After profiling, it appears that MOST of the time is being spent just copying memory around, which I suppose is expected, but not very efficient. The backing CGImage needs to be shuttled to the GPU, then filtered, then goes back to main memory as a CGImage, then (presumably) goes right back to the GPU to be scaled and displayed by the CALayer. Ideally, the root images (before filtering) would just sit on the GPU, and results would be rendered directly to video memory, but I have no idea how to accomplish this. My current rendering pathway does the pixel smashing on the GPU (that's fantastic!), but is swamped by shuttling memory around to actually display the darned thing.
So can anyone enlighten me on how to do Core Image filtering with a pathway that keeps things on the GPU end-to-end? (or at least only has to swap that data in once?) If I have, say, a IOSurface-backed CIImage, how do I draw that directly to the UI without hitting main memory? Any hints?
After a day of banging my head against the wall on this, still no dice with NSOpenGLView... BUT, I think I'm able to do what I want via CAOpenGLLayer instead, which is just fine with me:
class GLLayer: CAOpenGLLayer {
    var image: CIImage?

    override func drawInCGLContext(ctx: CGLContextObj, pixelFormat pf: CGLPixelFormatObj, forLayerTime t: CFTimeInterval, displayTime ts: UnsafePointer<CVTimeStamp>) {
        if image == nil {
            let url = NSURL(fileURLWithPath: "/Users/doug/Desktop/test.jpg")
            image = CIImage(contentsOfURL: url)!
        }

        let colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB)!
        let options: [String: AnyObject] = [kCIContextOutputColorSpace: colorSpace]
        let context = CIContext(CGLContext: ctx, pixelFormat: pf, colorSpace: colorSpace, options: options)
        let targetRect = CGRectMake(-1, -1, 2, 2)
        context.drawImage(image!, inRect: targetRect, fromRect: image!.extent)
    }
}
One thing of note: the coordinate system for the CAOpenGLLayer is different. The drawing area is 2.0x2.0 units, with (0, 0) at the center (took a while to figure that out). Other than that, this is basically the same as the non-working code in my original question (except, you know, it actually works here). Perhaps the NSOpenGLView class isn't returning the proper context. Who knows. Still interested in why, but at least I can move on now.
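For instance, a sketch (not from the answer) of mapping a sub-rectangle given in unit coordinates into that 2x2 clip space:
func clipSpaceRect(fromUnitRect unit: CGRect) -> CGRect {
    // Clip space runs from (-1, -1) to (1, 1), so scale by 2 and shift by -1.
    return CGRect(x: unit.origin.x * 2 - 1,
                  y: unit.origin.y * 2 - 1,
                  width: unit.size.width * 2,
                  height: unit.size.height * 2)
}
// clipSpaceRect(fromUnitRect: CGRect(x: 0, y: 0, width: 1, height: 1))
// returns (-1, -1, 2, 2), the full-layer targetRect used above.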
The code should work with the following additions/modifications:
var ciContext: CIContext!

override func drawRect(dirtyRect: NSRect) {
    super.drawRect(dirtyRect)
    let start = CFAbsoluteTimeGetCurrent()
    openGLContext!.makeCurrentContext()
    let cglContext = openGLContext!.CGLContextObj
    let attr = [
        NSOpenGLPixelFormatAttribute(NSOpenGLPFAAccelerated),
        NSOpenGLPixelFormatAttribute(NSOpenGLPFANoRecovery),
        NSOpenGLPixelFormatAttribute(NSOpenGLPFAColorSize), 32,
        0
    ]
    let pixelFormat = NSOpenGLPixelFormat(attributes: attr)
    let colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB)!
    let options: [String: AnyObject] = [kCIContextOutputColorSpace: colorSpace, kCIContextWorkingColorSpace: colorSpace]
    ciContext = CIContext(CGLContext: cglContext, pixelFormat: pixelFormat!.CGLPixelFormatObj, colorSpace: colorSpace, options: options)
    glClearColor(0.0, 0.0, 0.0, 0.0)
    glClear(GLbitfield(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT))
    // setup viewing projection
    glMatrixMode(GLenum(GL_PROJECTION))
    // start with identity matrix
    glLoadIdentity()
    // Create an image
    let image = NSImage(contentsOfFile: "/Users/kuestere/Pictures/DSCI0004.JPG")
    let imageData = image?.TIFFRepresentation!
    let ciImage = CIImage(data: imageData!)
    if (ciImage == nil) {
        Swift.print("ciImage loading error")
    }
    let imageFrame = ciImage!.extent
    let viewFrame = self.visibleRect
    // setup a viewing world: left, right, bottom, top, zNear, zFar
    glOrtho(0.0, Double(viewFrame.width), 0.0, Double(viewFrame.height), -1.0, 1.0)
    ciContext.drawImage(ciImage!, inRect: self.visibleRect, fromRect: imageFrame)
    glFlush()
    let end = CFAbsoluteTimeGetCurrent()
    Swift.print("updated view in \(end - start)")
}

How Do I Blur a Scene in SpriteKit?

How would I add a gaussian blur to all nodes (there's no fixed number of nodes) in an SKScene in SpriteKit? A label will be added on top of the scene later, this will be my pause menu.
Almost anything would help!
Something like this is what I'm going for:
What you're looking for is an SKEffectNode. It applies a Core Image filter to itself (and thus all subnodes). Just make it the root node of your scene, give it one of Core Image's blur filters, and you're set.
For example, I set up an SKScene with an SKEffectNode as its first child node and a property, root, that holds a weak reference to it:
- (void)createLayers {
    SKEffectNode *node = [SKEffectNode node];
    [node setShouldEnableEffects:NO];
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:@"inputRadius", @1.0f, nil];
    [node setFilter:blur];
    [self setRoot:node];
}
And here's the method I use to (animate!) the blur of my scene:
- (void)blurWithCompletion:(void (^)())handler {
    CGFloat duration = 0.5f;
    [[self root] setShouldRasterize:YES];
    [[self root] setShouldEnableEffects:YES];
    [[self root] runAction:[SKAction customActionWithDuration:duration actionBlock:^(SKNode *node, CGFloat elapsedTime){
        NSNumber *radius = [NSNumber numberWithFloat:(elapsedTime/duration) * 10.0];
        [[(SKEffectNode *)node filter] setValue:radius forKey:@"inputRadius"];
    }] completion:handler];
}
Note that, like you, I'm using this as a pause screen, so I rasterize the scene. If you want your scene to animate while blurred, you should probably setShouldRasterize: to NO.
And if you're not interested in animating the transition to the blur, you could always just set the filter to an initial radius of 10.0f or so and do a simple setShouldEnableEffects:YES when you want to switch it on.
See also: SKEffectNode class reference
UPDATE:
See Markus's comment below. He points out that SKScene is, in fact, a subclass of SKEffectNode, so you really ought to be able to call all of this on the scene itself rather than arbitrarily inserting an effect node in your node tree.
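In other words, a minimal Swift sketch of that point (not from the original answer; scene stands for your SKScene instance):
// SKScene inherits from SKEffectNode, so the scene can host the filter itself:
scene.filter = CIFilter(name: "CIGaussianBlur")
scene.shouldEnableEffects = true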
To add to this, using @Bendegúz's answer and code from http://www.bytearray.org/?p=5360:
I was able to get this to work in my current game project, which is being done in iOS 8 Swift. I've done it a bit differently, returning an SKSpriteNode instead of a UIImage. Also note that my unwrapped currentScene.view! call is to a weak GameScene reference, but this should work with self.view.frame based on where you are calling these methods. My pause screen is called in a separate HUD class, hence why this is the case.
I would imagine this could be done more elegantly, maybe more like @jemmons's answer. I just wanted to possibly help out anyone else trying to do this in SpriteKit projects written in all or some Swift code.
func getBluredScreenshot() -> SKSpriteNode {
    // create the graphics context
    UIGraphicsBeginImageContextWithOptions(CGSize(width: currentScene.view!.frame.size.width, height: currentScene.view!.frame.size.height), true, 1)
    currentScene.view!.drawViewHierarchyInRect(currentScene.view!.frame, afterScreenUpdates: true)
    // retrieve graphics context
    let context = UIGraphicsGetCurrentContext()
    // query image from it
    let image = UIGraphicsGetImageFromCurrentImageContext()
    // create Core Image context
    let ciContext = CIContext(options: nil)
    // create a CIImage, think of a CIImage as image data for processing, nothing is displayed or can be displayed at this point
    let coreImage = CIImage(image: image)
    // pick the filter we want
    let filter = CIFilter(name: "CIGaussianBlur")
    // pass our image as input
    filter.setValue(coreImage, forKey: kCIInputImageKey)
    // edit the amount of blur
    filter.setValue(3, forKey: kCIInputRadiusKey)
    // retrieve the processed image
    let filteredImageData = filter.valueForKey(kCIOutputImageKey) as CIImage
    // return a Quartz image from the Core Image context
    let filteredImageRef = ciContext.createCGImage(filteredImageData, fromRect: filteredImageData.extent())
    // final UIImage
    let filteredImage = UIImage(CGImage: filteredImageRef)
    // create a texture, pass the UIImage
    let texture = SKTexture(image: filteredImage!)
    // wrap it inside a sprite node
    let sprite = SKSpriteNode(texture: texture)
    // make image the position in the center
    sprite.position = CGPointMake(CGRectGetMidX(currentScene.frame), CGRectGetMidY(currentScene.frame))
    var scale: CGFloat = UIScreen.mainScreen().scale
    sprite.size.width *= scale
    sprite.size.height *= scale
    return sprite
}
func loadPauseBGScreen() {
    let duration = 1.0
    let pauseBG: SKSpriteNode = self.getBluredScreenshot()

    //pauseBG.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame))
    pauseBG.alpha = 0
    pauseBG.zPosition = self.zPosition + 1
    pauseBG.runAction(SKAction.fadeAlphaTo(1, duration: duration))
    self.addChild(pauseBG)
}
This is my solution for the pause screen.
It takes a screenshot, blurs it, and after that shows it with an animation.
I think you should do it this way if you don't wanna waste too much FPS.
- (void)pause {
    SKSpriteNode *pauseBG = [SKSpriteNode spriteNodeWithTexture:[SKTexture textureWithImage:[self getBluredScreenshot]]];
    pauseBG.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
    pauseBG.alpha = 0;
    pauseBG.zPosition = 2;
    [pauseBG runAction:[SKAction fadeAlphaTo:1 duration:duration / 2]];
    [self addChild:pauseBG];
}
And this is the helper method:
- (UIImage *)getBluredScreenshot {
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 1);
    [self.view drawViewHierarchyInRect:self.view.frame afterScreenUpdates:YES];
    UIImage *ss = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [gaussianBlurFilter setDefaults];
    [gaussianBlurFilter setValue:[CIImage imageWithCGImage:[ss CGImage]] forKey:kCIInputImageKey];
    [gaussianBlurFilter setValue:@10 forKey:kCIInputRadiusKey];

    CIImage *outputImage = [gaussianBlurFilter outputImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGRect rect = [outputImage extent];
    rect.origin.x += (rect.size.width - ss.size.width) / 2;
    rect.origin.y += (rect.size.height - ss.size.height) / 2;
    rect.size = ss.size;
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:rect];
    UIImage *image = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
    return image;
}
Swift 4:
add this to your gameScene if you want to blur everything in the scene:
let blur = CIFilter(name: "CIGaussianBlur", withInputParameters: ["inputRadius": 10.0])
self.filter = blur
self.shouldRasterize = true
self.shouldEnableEffects = false
change self.shouldEnableEffects = true when you want to use it.
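For instance, a hypothetical toggle on the scene (names are made up) that only pays for the blur while the pause menu is up:
func setPauseMenuVisible(_ visible: Bool) {
    shouldEnableEffects = visible   // blur everything under the menu
    isPaused = visible              // freeze actions while paused
}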
This is another example of getting this done in Swift 2 without the layers:
func blurWithCompletion() {
    let duration: CGFloat = 0.5
    let filter: CIFilter = CIFilter(name: "CIGaussianBlur", withInputParameters: ["inputRadius": NSNumber(double: 1.0)])!
    scene!.filter = filter
    scene!.shouldRasterize = true
    scene!.shouldEnableEffects = true
    scene!.runAction(SKAction.customActionWithDuration(0.5, actionBlock: { (node: SKNode, elapsedTime: CGFloat) in
        let radius = (elapsedTime/duration) * 10.0
        (node as? SKEffectNode)!.filter!.setValue(radius, forKey: "inputRadius")
    }))
}
Swift 3 Update: This is @Chuck Gaffney's answer updated for Swift 3. I know this question is tagged objective-c, but this page ranked 2nd in Google for "swift spritekit blur". I changed currentScene to self.
func getBluredScreenshot() -> SKSpriteNode {
    // create the graphics context
    UIGraphicsBeginImageContextWithOptions(CGSize(width: self.view!.frame.size.width, height: self.view!.frame.size.height), true, 1)
    self.view!.drawHierarchy(in: self.view!.frame, afterScreenUpdates: true)
    // retrieve graphics context
    _ = UIGraphicsGetCurrentContext()
    // query image from it
    let image = UIGraphicsGetImageFromCurrentImageContext()
    // create Core Image context
    let ciContext = CIContext(options: nil)
    // create a CIImage, think of a CIImage as image data for processing, nothing is displayed or can be displayed at this point
    let coreImage = CIImage(image: image!)
    // pick the filter we want
    let filter = CIFilter(name: "CIGaussianBlur")
    // pass our image as input
    filter?.setValue(coreImage, forKey: kCIInputImageKey)
    // edit the amount of blur
    filter?.setValue(3, forKey: kCIInputRadiusKey)
    // retrieve the processed image
    let filteredImageData = filter?.value(forKey: kCIOutputImageKey) as! CIImage
    // return a Quartz image from the Core Image context
    let filteredImageRef = ciContext.createCGImage(filteredImageData, from: filteredImageData.extent)
    // final UIImage
    let filteredImage = UIImage(cgImage: filteredImageRef!)
    // create a texture, pass the UIImage
    let texture = SKTexture(image: filteredImage)
    // wrap it inside a sprite node
    let sprite = SKSpriteNode(texture: texture)
    // make image the position in the center
    sprite.position = CGPoint(x: self.frame.midX, y: self.frame.midY)
    let scale: CGFloat = UIScreen.main.scale
    sprite.size.width *= scale
    sprite.size.height *= scale
    return sprite
}
func loadPauseBGScreen() {
    let duration = 1.0
    let pauseBG: SKSpriteNode = self.getBluredScreenshot()
    pauseBG.alpha = 0
    pauseBG.zPosition = self.zPosition + 1
    pauseBG.run(SKAction.fadeAlpha(to: 1, duration: duration))
    self.addChild(pauseBG)
}
I was trying to do this same thing now, in May 2020 (Xcode 11 and iOS 13.x), but was unable to 'animate' the blur radius. In my case, I start with the scene fully blurred, and then 'unblur' it gradually (set inputRadius to 0).
Somehow, the new input radius value set in the custom action block wasn't reflected in the rendered scene. My code was as follows:
private func unblur() {
    run(SKAction.customAction(withDuration: unblurDuration, actionBlock: { [weak self] (_, elapsed) in
        guard let this = self else { return }
        let ratio = (TimeInterval(elapsed) / this.unblurDuration)
        let radius = this.maxBlurRadius * (1 - ratio) // goes to 0 as ratio goes to 1
        this.filter?.setValue(radius, forKey: kCIInputRadiusKey)
    }))
}
I even tried updating the value manually using SKScene.update(_:) and some variables for time book-keeping, but the same result.
It occurred to me that perhaps I could force a refresh by "re-assigning" the blur filter to the .filter property of my SKScene (see the comments in ALL CAPS near the end of the code), and it worked.
The full code:
class MyScene: SKScene {

    private let maxBlurRadius: Double = 50
    private let unblurDuration: TimeInterval = 5

    override init(size: CGSize) {
        super.init(size: size)

        let filter = CIFilter(name: "CIGaussianBlur")
        filter?.setValue(maxBlurRadius, forKey: kCIInputRadiusKey)
        self.filter = filter
        self.shouldEnableEffects = true
        self.shouldRasterize = false

        // (...rest of the child nodes, etc...)
    }

    // added so the subclass compiles; SKScene requires this initializer
    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func didMove(to view: SKView) {
        super.didMove(to: view)
        self.unblur()
    }

    private func unblur() {
        run(SKAction.customAction(withDuration: unblurDuration, actionBlock: { [weak self] (_, elapsed) in
            guard let this = self else { return }
            let ratio = (TimeInterval(elapsed) / this.unblurDuration)
            let radius = this.maxBlurRadius * (1 - ratio) // goes to 0 as ratio goes to 1

            // OBTAIN THE FILTER
            let filter = this.filter
            // MODIFY ATTRIBUTE
            filter?.setValue(radius, forKey: kCIInputRadiusKey)
            // RE-ASSIGN TO SCENE
            this.filter = filter
        }))
    }
}
I hope this helps someone!

Cocoa Touch - Adding texture with overlay view

I have a set of tiles as UIViews that have a programmable background color, and each one can be a different color. I want to add texture, like a side-lit bevel, to each one. Can this be done with an overlay view or by some other method?
I'm looking for suggestions that don't require a custom image file for each case.
This may help someone, although this was pieced together from other topics on SO.
To create a beveled tile image with an arbitrary color, for normal and for Retina displays, I made a beveled image in Photoshop and set the saturation to zero, making a grayscale image called tileBevel.png.
I also created one for the Retina display (tileBevel@2x.png).
Here is the code:
+ (UIImage *)createTileWithColor:(UIColor *)tileColor {
    int pixelsHigh = 44;
    int pixelsWide = 46;
    UIImage *bottomImage;
    if ([UIScreen respondsToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] == 2.0) {
        pixelsHigh *= 2;
        pixelsWide *= 2;
        bottomImage = [UIImage imageNamed:@"tileBevel@2x.png"];
    }
    else {
        bottomImage = [UIImage imageNamed:@"tileBevel.png"];
    }
    CGImageRef theCGImage = NULL;
    CGContextRef tileBitmapContext = NULL;
    CGRect rectangle = CGRectMake(0, 0, pixelsWide, pixelsHigh);

    UIGraphicsBeginImageContext(rectangle.size);
    [bottomImage drawInRect:rectangle];
    tileBitmapContext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(tileBitmapContext, kCGBlendModeOverlay);
    CGContextSetFillColorWithColor(tileBitmapContext, tileColor.CGColor);
    CGContextFillRect(tileBitmapContext, rectangle);
    theCGImage = CGBitmapContextCreateImage(tileBitmapContext);
    UIGraphicsEndImageContext();

    UIImage *result = [UIImage imageWithCGImage:theCGImage];
    CGImageRelease(theCGImage); // balance CGBitmapContextCreateImage, or it leaks
    return result;
}
This checks to see if the Retina display is used, sizes the rectangle to draw in, picks the appropriate grayscale base image, sets the blend mode to overlay, then draws a rectangle on top of the bottom image. All of this is done inside a graphics context bracketed by the BeginImageContext and EndImageContext calls. These set the current context needed by the UIImage drawInRect: method. The Core Graphics functions need the context as a parameter, which is obtained by a call to get the current context.
And the result looks like this:
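For reference, a modern Swift sketch of the same overlay-tint technique using UIGraphicsImageRenderer (not part of the original answer; the renderer handles Retina scale automatically):
func tintedTile(base: UIImage, color: UIColor) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: base.size)
    return renderer.image { ctx in
        let rect = CGRect(origin: .zero, size: base.size)
        base.draw(in: rect)                   // grayscale bevel underneath
        color.setFill()
        ctx.fill(rect, blendMode: .overlay)   // tint via overlay blend
    }
}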
If you want to preserve the alpha channel of the source image, just add this to Jim's code before the fill rect:
// Apply mask
CGContextTranslateCTM(tileBitmapContext, 0, rectangle.size.height);
CGContextScaleCTM(tileBitmapContext, 1.0f, -1.0f);
CGContextClipToMask(tileBitmapContext, rectangle, bottomImage.CGImage);
Swift 3 solution, essentially based on Jim's answer with Scriptease's addition, and some minor changes:
class func image(bottomImage: UIImage, topImage: UIImage, tileColor: UIColor) -> UIImage? {
    let pixelsHigh: CGFloat = bottomImage.size.height
    let pixelsWide: CGFloat = bottomImage.size.width
    let rectangle = CGRect(x: 0, y: 0, width: pixelsWide, height: pixelsHigh)

    UIGraphicsBeginImageContext(rectangle.size)
    bottomImage.draw(in: rectangle)
    if let tileBitmapContext = UIGraphicsGetCurrentContext() {
        tileBitmapContext.setBlendMode(.overlay)
        tileBitmapContext.setFillColor(tileColor.cgColor)
        tileBitmapContext.translateBy(x: 0, y: rectangle.size.height) // flip to match the mask, as in the Objective-C version
        tileBitmapContext.scaleBy(x: 1.0, y: -1.0)
        tileBitmapContext.clip(to: rectangle, mask: bottomImage.cgImage!)
        tileBitmapContext.fill(rectangle)
        let theCGImage = tileBitmapContext.makeImage()
        UIGraphicsEndImageContext()
        if let theImage = theCGImage {
            return UIImage(cgImage: theImage)
        }
    }
    return nil
}