How Do I Blur a Scene in SpriteKit? - objective-c

How would I add a gaussian blur to all nodes (there's no fixed number of nodes) in an SKScene in SpriteKit? A label will be added on top of the scene later, this will be my pause menu.
Almost anything would help!
Something like this is what I'm going for:

What you're looking for is an SKEffectNode. It applies a CoreImage filter to itself (and thus all subnodes). Just make it the root view of your scene, give it one of CoreImage's blur filters, and you're set.
For example, I set up an SKScene with an SKEffectNode as its first child node and a property, root, that holds a weak reference to it:
-(void)createLayers{
SKEffectNode *node = [SKEffectNode node];
[node setShouldEnableEffects:NO];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:@"inputRadius", @1.0f, nil];
[node setFilter:blur];
[self setRoot:node];
}
And here's the method I use to (animate!) the blur of my scene:
-(void)blurWithCompletion:(void (^)())handler{
CGFloat duration = 0.5f;
[[self root] setShouldRasterize:YES];
[[self root] setShouldEnableEffects:YES];
[[self root] runAction:[SKAction customActionWithDuration:duration actionBlock:^(SKNode *node, CGFloat elapsedTime){
NSNumber *radius = [NSNumber numberWithFloat:(elapsedTime/duration) * 10.0];
[[(SKEffectNode *)node filter] setValue:radius forKey:@"inputRadius"];
}] completion:handler];
}
Note that, like you, I'm using this as a pause screen, so I rasterize the scene. If you want your scene to animate while blurred, you should probably set shouldRasterize: to NO.
And if you're not interested in animating the transition to the blur, you could always just set the filter to an initial radius of 10.0f or so and do a simple setShouldEnableEffects:YES when you want to switch it on.
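For reference, a minimal Objective-C sketch of that static (non-animated) variant, reusing the root property from the code above (the method names here are just illustrative):
-(void)pauseWithStaticBlur{
// set the final radius once, then just switch effects on
[[[self root] filter] setValue:@10.0f forKey:@"inputRadius"];
[[self root] setShouldRasterize:YES];
[[self root] setShouldEnableEffects:YES];
}
-(void)resumeFromStaticBlur{
[[self root] setShouldEnableEffects:NO];
}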
See also: SKEffectNode class reference
UPDATE:
See Markus's comment below. He points out that SKScene is, in fact, a subclass of SKEffectNode, so you really ought to be able to call all of this on the scene itself rather than arbitrarily inserting an effect node in your node tree.
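In Objective-C that would look roughly like the following (an untested sketch, configuring the scene itself instead of inserting a child effect node):
-(void)didMoveToView:(SKView *)view{
// the scene is itself an SKEffectNode, so it can hold the filter directly
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:@10.0f forKey:@"inputRadius"];
[self setFilter:blur];
[self setShouldRasterize:YES];
[self setShouldEnableEffects:NO]; // flip to YES when you pause
}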

To add to this by using @Bendegúz's answer and code from http://www.bytearray.org/?p=5360
I was able to get this to work in my current game project, which is being done in iOS 8 Swift. It's done a bit differently by returning an SKSpriteNode instead of a UIImage. Also note that my unwrapped currentScene.view! call is to a weak GameScene reference, but it should work with self.view.frame depending on where you are calling these methods. My pause screen is called in a separate HUD class, hence why this is the case.
I imagine this could be done more elegantly, maybe more like @jemmons's answer. I just wanted to help out anyone else trying to do this in SpriteKit projects written in all or some Swift code.
func getBluredScreenshot() -> SKSpriteNode{
//create the graphics context
UIGraphicsBeginImageContextWithOptions(CGSize(width: currentScene.view!.frame.size.width, height: currentScene.view!.frame.size.height), true, 1)
currentScene.view!.drawViewHierarchyInRect(currentScene.view!.frame, afterScreenUpdates: true)
// retrieve graphics context
let context = UIGraphicsGetCurrentContext()
// query image from it and close the context
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
// create Core Image context
let ciContext = CIContext(options: nil)
// create a CIImage, think of a CIImage as image data for processing, nothing is displayed or can be displayed at this point
let coreImage = CIImage(image: image)
// pick the filter we want
let filter = CIFilter(name: "CIGaussianBlur")
// pass our image as input
filter.setValue(coreImage, forKey: kCIInputImageKey)
//edit the amount of blur
filter.setValue(3, forKey: kCIInputRadiusKey)
//retrieve the processed image
let filteredImageData = filter.valueForKey(kCIOutputImageKey) as CIImage
// return a Quartz image from the Core Image context
let filteredImageRef = ciContext.createCGImage(filteredImageData, fromRect: filteredImageData.extent())
// final UIImage
let filteredImage = UIImage(CGImage: filteredImageRef)
// create a texture, pass the UIImage
let texture = SKTexture(image: filteredImage!)
// wrap it inside a sprite node
let sprite = SKSpriteNode(texture:texture)
// make image the position in the center
sprite.position = CGPointMake(CGRectGetMidX(currentScene.frame), CGRectGetMidY(currentScene.frame))
var scale:CGFloat = UIScreen.mainScreen().scale
sprite.size.width *= scale
sprite.size.height *= scale
return sprite
}
func loadPauseBGScreen(){
let duration = 1.0
let pauseBG:SKSpriteNode = self.getBluredScreenshot()
//pauseBG.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame))
pauseBG.alpha = 0
pauseBG.zPosition = self.zPosition + 1
pauseBG.runAction(SKAction.fadeAlphaTo(1, duration: duration))
self.addChild(pauseBG)
}

This is my solution for the pause screen.
It will take a screenshot, blur it and after that show it with animation.
I think you should do it this way if you don't want to waste too much FPS.
-(void)pause {
SKSpriteNode *pauseBG = [SKSpriteNode spriteNodeWithTexture:[SKTexture textureWithImage:[self getBluredScreenshot]]];
pauseBG.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
pauseBG.alpha = 0;
pauseBG.zPosition = 2;
[pauseBG runAction:[SKAction fadeAlphaTo:1 duration:duration / 2]];
[self addChild:pauseBG];
}
And this is the helper method:
- (UIImage *)getBluredScreenshot {
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 1);
[self.view drawViewHierarchyInRect:self.view.frame afterScreenUpdates:YES];
UIImage *ss = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[gaussianBlurFilter setDefaults];
[gaussianBlurFilter setValue:[CIImage imageWithCGImage:[ss CGImage]] forKey:kCIInputImageKey];
[gaussianBlurFilter setValue:@10 forKey:kCIInputRadiusKey];
CIImage *outputImage = [gaussianBlurFilter outputImage];
CIContext *context = [CIContext contextWithOptions:nil];
CGRect rect = [outputImage extent];
rect.origin.x += (rect.size.width - ss.size.width ) / 2;
rect.origin.y += (rect.size.height - ss.size.height) / 2;
rect.size = ss.size;
CGImageRef cgimg = [context createCGImage:outputImage fromRect:rect];
UIImage *image = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);
return image;
}

Swift 4:
add this to your gameScene if you want to blur everything in the scene:
let blur = CIFilter(name:"CIGaussianBlur",withInputParameters: ["inputRadius": 10.0])
self.filter = blur
self.shouldRasterize = true
self.shouldEnableEffects = false
Change self.shouldEnableEffects to true when you want to use it.

This is another example of getting this done in Swift 2 without the layers:
func blurWithCompletion() {
let duration: CGFloat = 0.5
let filter: CIFilter = CIFilter(name: "CIGaussianBlur", withInputParameters: ["inputRadius" : NSNumber(double:1.0)])!
scene!.filter = filter
scene!.shouldRasterize = true
scene!.shouldEnableEffects = true
scene!.runAction(SKAction.customActionWithDuration(0.5, actionBlock: { (node: SKNode, elapsedTime: CGFloat) in
let radius = (elapsedTime/duration)*10.0
(node as? SKEffectNode)!.filter!.setValue(radius, forKey: "inputRadius")
}))
}

Swift 3 Update: This is @Chuck Gaffney's answer updated for Swift 3. I know this question is tagged objective-c, but this page ranked 2nd in Google for "swift spritekit blur". I changed currentScene to self.
func getBluredScreenshot() -> SKSpriteNode{
//create the graphics context
UIGraphicsBeginImageContextWithOptions(CGSize(width: self.view!.frame.size.width, height: self.view!.frame.size.height), true, 1)
self.view!.drawHierarchy(in: self.view!.frame, afterScreenUpdates: true)
// retrieve graphics context
_ = UIGraphicsGetCurrentContext()
// query image from it and close the context
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
// create Core Image context
let ciContext = CIContext(options: nil)
// create a CIImage, think of a CIImage as image data for processing, nothing is displayed or can be displayed at this point
let coreImage = CIImage(image: image!)
// pick the filter we want
let filter = CIFilter(name: "CIGaussianBlur")
// pass our image as input
filter?.setValue(coreImage, forKey: kCIInputImageKey)
//edit the amount of blur
filter?.setValue(3, forKey: kCIInputRadiusKey)
//retrieve the processed image
let filteredImageData = filter?.value(forKey: kCIOutputImageKey) as! CIImage
// return a Quartz image from the Core Image context
let filteredImageRef = ciContext.createCGImage(filteredImageData, from: filteredImageData.extent)
// final UIImage
let filteredImage = UIImage(cgImage: filteredImageRef!)
// create a texture, pass the UIImage
let texture = SKTexture(image: filteredImage)
// wrap it inside a sprite node
let sprite = SKSpriteNode(texture:texture)
// make image the position in the center
sprite.position = CGPoint(x: self.frame.midX, y: self.frame.midY)
let scale:CGFloat = UIScreen.main.scale
sprite.size.width *= scale
sprite.size.height *= scale
return sprite
}
func loadPauseBGScreen(){
let duration = 1.0
let pauseBG:SKSpriteNode = self.getBluredScreenshot()
pauseBG.alpha = 0
pauseBG.zPosition = self.zPosition + 1
pauseBG.run(SKAction.fadeAlpha(to: 1, duration: duration))
self.addChild(pauseBG)
}

I was trying to do this same thing now, in May 2020 (Xcode 11 and iOS 13.x), but wasn't able to 'animate' the blur radius. In my case, I start with the scene fully blurred, and then 'unblur' it gradually (set inputRadius to 0).
Somehow, the new input radius value set in the custom action block wasn't reflected in the rendered scene. My code was as follows:
private func unblur() {
run(SKAction.customAction(withDuration: unblurDuration, actionBlock: { [weak self] (_, elapsed) in
guard let this = self else { return }
let ratio = (TimeInterval(elapsed) / this.unblurDuration)
let radius = this.maxBlurRadius * (1 - ratio) // goes to 0 as ratio goes to 1
this.filter?.setValue(radius, forKey: kCIInputRadiusKey)
}))
}
I even tried updating the value manually using SKScene.update(_:) and some variables for time book-keeping, but got the same result.
It occurred to me that perhaps I could force the refresh if I 're-assigned' the blur filter to the .filter property of my SKScene (see the comments in ALL CAPS near the end of the code), and it worked.
The full code:
class MyScene: SKScene {
private let maxBlurRadius: Double = 50
private let unblurDuration: TimeInterval = 5
init(size: CGSize) {
super.init(size: size)
let filter = CIFilter(name: "CIGaussianBlur")
filter?.setValue(maxBlurRadius, forKey: kCIInputRadiusKey)
self.filter = filter
self.shouldEnableEffects = true
self.shouldRasterize = false
// (...rest of the child nodes, etc...)
}
override func didMove(to view: SKView) {
super.didMove(to: view)
self.unblur()
}
private func unblur() {
run(SKAction.customAction(withDuration: unblurDuration, actionBlock: { [weak self] (_, elapsed) in
guard let this = self else { return }
let ratio = (TimeInterval(elapsed) / this.unblurDuration)
let radius = this.maxBlurRadius * (1 - ratio) // goes to 0 as ratio goes to 1
// OBTAIN THE FILTER
let filter = this.filter
// MODIFY ATTRIBUTE
filter?.setValue(radius, forKey: kCIInputRadiusKey)
// RE-ASSIGN TO SCENE
this.filter = filter
}))
}
}
I hope this helps someone!

Related

Translate detected rectangle from portrait CIImage to landscape CIImage

I am modifying code found here. In the code we are capturing video from the phone camera using AVCaptureSession and using CIDetector to detect a rectangle in the image feed. The feed has an image which is 640x842 (iPhone 5 in portrait). We then draw an overlay on the image so the user can see the detected rectangle (actually it's a trapezoid most of the time).
When the user presses a button on the UI, we capture an image from the video and re-run the rectangle detection on this larger image (3264x2448) which as you can see is landscape. We then do a perspective transform on the detected rectangle and crop on the image.
This is working pretty well, but the issue I have is that in roughly 1 out of 5 captures the rectangle detected on the larger image is different from the one detected (and presented to the user) on the smaller image. Even though I only capture when I detect that the phone is (relatively) still, the final image then does not represent the rectangle the user expected.
To resolve this my idea is to use the coordinates of the originally captured rectangle and translate them to a rectangle on the captured still image. This is where I'm struggling.
I tried this with the detected rectangle:
CGFloat radians = -90 * (M_PI/180);
CGAffineTransform rotation = CGAffineTransformMakeRotation(radians);
CGRect rect = CGRectMake(detectedRect.bounds.origin.x, detectedRect.bounds.origin.y, detectedRect.bounds.size.width, detectedRect.bounds.size.height);
CGRect rotatedRect = CGRectApplyAffineTransform(rect, rotation);
So given a detected rect:
TopLeft: 88.213425, 632.31329
TopRight: 545.59302, 632.15546
BottomRight: 575.57819, 369.22321
BottomLeft: 49.973862, 369.40466
I now have this rotated rect:
origin = (x = 369.223206, y = -575.578186)
size = (width = 263.090088, height = 525.604309)
How do I translate the rotated rectangle coordinates in the smaller portrait image to coordinates to the 3264x2448 image?
Edit
Duh... reading my own approach, I realised that creating a rectangle out of a trapezoid will not solve my problem!
Supporting code to detect the rectangle etc...
// In this method we detect a rect from the video feed and overlay
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
// image is 640x852 on iphone5
NSArray *rects = [[CIDetector detectorOfType:CIDetectorTypeRectangle context:nil options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}] featuresInImage:image];
CIRectangleFeature *detectedRect = rects[0];
// draw overlay on image code....
}
This is a summarized version of how the still image is obtained:
// code block to handle output from AVCaptureStillImageOutput
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
CIImage *enhancedImage = [[CIImage alloc] initWithData:imageData options:@{kCIImageColorSpace:[NSNull null]}];
imageData = nil;
CIRectangleFeature *rectangleFeature = [self getDetectedRect:[[self highAccuracyRectangleDetector] featuresInImage:enhancedImage]];
if (rectangleFeature) {
enhancedImage = [self correctPerspectiveForImage:enhancedImage withTopLeft:rectangleFeature.topLeft andTopRight:rectangleFeature.topRight andBottomRight:rectangleFeature.bottomRight andBottomLeft:rectangleFeature.bottomLeft];
}
}
Thank you.
I had the same issue while doing this kind of thing. I resolved it with the Swift code below. Take a look to see if it helps you.
if let videoConnection = stillImageOutput.connection(withMediaType: AVMediaTypeVideo) {
stillImageOutput.captureStillImageAsynchronously(from: videoConnection) {
(imageDataSampleBuffer, error) -> Void in
let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
var img = UIImage(data: imageData!)!
let outputRect = self.previewLayer?.metadataOutputRectOfInterest(for: (self.previewLayer?.bounds)!)
let takenCGImage = img.cgImage
let width = (takenCGImage?.width)!
let height = (takenCGImage?.height)!
let cropRect = CGRect(x: (outputRect?.origin.x)! * CGFloat(width), y: (outputRect?.origin.y)! * CGFloat(height), width: (outputRect?.size.width)! * CGFloat(width), height: (outputRect?.size.height)! * CGFloat(height))
let cropCGImage = takenCGImage!.cropping(to: cropRect)
img = UIImage(cgImage: cropCGImage!, scale: 1, orientation: img.imageOrientation)
let cropViewController = TOCropViewController(image: self.cropToBounds(image: img))
cropViewController.delegate = self
self.navigationController?.pushViewController(cropViewController, animated: true)
}
}

How to add a gradient tint color to a UISlider in Xcode 6?

I'm working on a design application that has a section for selecting colors using three sliders for RGB.
As we can see in Xcode, where we want to select a color by RGB values, the slider's tint color is a gradient that changes when we move the sliders. I want to use this in my application, but I have no idea how to do it.
I've found this code in a blog, but it didn't work for me.
- (void)setGradientToSlider:(UISlider *)Slider WithColors:(NSArray *)Colors{
UIView * view = (UIView *)[[Slider subviews]objectAtIndex:0];
UIImageView * maxTrackImageView = (UIImageView *)[[view subviews]objectAtIndex:0];
CAGradientLayer * maxTrackGradient = [CAGradientLayer layer];
CGRect rect = maxTrackImageView.frame;
rect.origin.x = view.frame.origin.x;
maxTrackGradient.frame = rect;
maxTrackGradient.colors = Colors;
[maxTrackGradient setStartPoint:CGPointMake(0.0, 0.5)];
[maxTrackGradient setEndPoint:CGPointMake(1.0, 0.5)];
[[maxTrackImageView layer] insertSublayer:maxTrackGradient atIndex:0];
/////////////////////////////////////////////////////
UIImageView * minTrackImageView = (UIImageView *)[[view subviews]objectAtIndex:1];
CAGradientLayer * minTrackGradient = [CAGradientLayer layer];
rect = minTrackImageView.frame;
rect.size.width = maxTrackImageView.frame.size.width;
rect.origin.x = 0;
rect.origin.y = 0;
minTrackGradient.frame = rect;
minTrackGradient.colors = Colors;
[minTrackGradient setStartPoint:CGPointMake(0.0, 0.5)];
[minTrackGradient setEndPoint:CGPointMake(1.0, 0.5)];
[minTrackImageView.layer insertSublayer:minTrackGradient atIndex:0];
}
I would appreciate any helps. Thanks.
While it didn't give me the desired results, here is a down-and-dirty Swift version of the answer above for those who want to try it.
func setSlider(slider:UISlider) {
let tgl = CAGradientLayer()
let frame = CGRectMake(0, 0, slider.frame.size.width, 5)
tgl.frame = frame
tgl.colors = [UIColor.blueColor().CGColor, UIColor.greenColor().CGColor, UIColor.yellowColor().CGColor, UIColor.orangeColor().CGColor, UIColor.redColor().CGColor]
tgl.startPoint = CGPointMake(0.0, 0.5)
tgl.endPoint = CGPointMake(1.0, 0.5)
UIGraphicsBeginImageContextWithOptions(tgl.frame.size, tgl.opaque, 0.0);
tgl.renderInContext(UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
image.resizableImageWithCapInsets(UIEdgeInsetsZero)
slider.setMinimumTrackImage(image, forState: .Normal)
//slider.setMaximumTrackImage(image, forState: .Normal)
}
UPDATE for Swift 4.0
func setSlider(slider:UISlider) {
let tgl = CAGradientLayer()
let frame = CGRect.init(x:0, y:0, width:slider.frame.size.width, height:5)
tgl.frame = frame
tgl.colors = [UIColor.blue.cgColor, UIColor.green.cgColor, UIColor.yellow.cgColor, UIColor.orange.cgColor, UIColor.red.cgColor]
tgl.startPoint = CGPoint.init(x:0.0, y:0.5)
tgl.endPoint = CGPoint.init(x:1.0, y:0.5)
UIGraphicsBeginImageContextWithOptions(tgl.frame.size, tgl.isOpaque, 0.0);
tgl.render(in: UIGraphicsGetCurrentContext()!)
if let image = UIGraphicsGetImageFromCurrentImageContext() {
UIGraphicsEndImageContext()
image.resizableImage(withCapInsets: UIEdgeInsets.zero)
slider.setMinimumTrackImage(image, for: .normal)
}
}
Here is possible solution:
Usage:
//array of CGColor objects, color1 and color2 are UIColor objects
NSArray *colors = [NSArray arrayWithObjects:(id)color1.CGColor, (id)color2.CGColor, nil];
//your UISlider
[slider setGradientBackgroundWithColors:colors];
Implementation:
Create category on UISlider:
- (void)setGradientBackgroundWithColors:(NSArray *)colors
{
CAGradientLayer *trackGradientLayer = [CAGradientLayer layer];
CGRect frame = self.frame;
frame.size.height = 5.0; //set the height of slider
trackGradientLayer.frame = frame;
trackGradientLayer.colors = colors;
//setting gradient as horizontal
trackGradientLayer.startPoint = CGPointMake(0.0, 0.5);
trackGradientLayer.endPoint = CGPointMake(1.0, 0.5);
UIImage *trackImage = [[UIImage imageFromLayer:trackGradientLayer] resizableImageWithCapInsets:UIEdgeInsetsZero];
[self setMinimumTrackImage:trackImage forState:UIControlStateNormal];
[self setMaximumTrackImage:trackImage forState:UIControlStateNormal];
}
Where colors is array of CGColor.
I have also created a category on UIImage which creates image from layer as you need an UIImage for setting gradient on slider.
+ (UIImage *)imageFromLayer:(CALayer *)layer
{
UIGraphicsBeginImageContextWithOptions(layer.frame.size, layer.opaque, 0.0);
[layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return outputImage;
}
For Swift 3, and to prevent the slider from scaling the Min image, apply this when setting its image. Recalculating the slider's left side is not necessary; only recalculate if you are changing the color of the gradient. The Max image does not seem to scale, but you should probably apply the same setting for consistency. There is a slight difference on the Max image when its insets are not applied.
slider.setMinimumTrackImage(image?.resizableImage(withCapInsets:.zero), for: .normal)
For some reason it only works properly when resizableImage(withCapInsets:.zero) is all done at the same time. Running that part separate does not allow the image to work and gets scaled.
Here is the entire routine in Swift 3:
func setSlider(slider:UISlider) {
let tgl = CAGradientLayer()
let frame = CGRect(x: 0.0, y: 0.0, width: slider.bounds.width, height: 5.0 )
tgl.frame = frame
tgl.colors = [ UIColor.yellow.cgColor,UIColor.black.cgColor]
tgl.endPoint = CGPoint(x: 1.0, y: 1.0)
tgl.startPoint = CGPoint(x: 0.0, y: 1.0)
UIGraphicsBeginImageContextWithOptions(tgl.frame.size, false, 0.0)
tgl.render(in: UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
slider.setMaximumTrackImage(image?.resizableImage(withCapInsets:.zero), for: .normal)
slider.setMinimumTrackImage(image?.resizableImage(withCapInsets:.zero), for: .normal)
}
This is a really effective approach that I found after a lot of web searching, so it's better to share it here as a complete answer. The following code is a Swift class that you can use to create and use gradients as a UIView or UIImage.
import Foundation
import UIKit
class Gradient: UIView{
// Gradient Color Array
private var Colors: [UIColor] = []
// Start And End Points Of Linear Gradient
private var SP: CGPoint = CGPoint.zeroPoint
private var EP: CGPoint = CGPoint.zeroPoint
// Start And End Center Of Radial Gradient
private var SC: CGPoint = CGPoint.zeroPoint
private var EC: CGPoint = CGPoint.zeroPoint
// Start And End Radius Of Radial Gradient
private var SR: CGFloat = 0.0
private var ER: CGFloat = 0.0
// Flag To Specify If The Gradient Is Radial Or Linear
private var flag: Bool = false
// Some Overrided Init Methods
required init(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
}
override init(frame: CGRect) {
super.init(frame: frame)
}
// Draw Rect Method To Draw The Graphics On The Context
override func drawRect(rect: CGRect) {
// Get Context
let context = UIGraphicsGetCurrentContext()
// Get Color Space
let colorSpace = CGColorSpaceCreateDeviceRGB()
// Create Arrays To Convert The UIColor to CG Color
var colorComponent: [CGColor] = []
var colorLocations: [CGFloat] = []
var i: CGFloat = 0.0
// Add Colors Into The Color Components And Use An Index Variable For Their Location In The Array [The Location Is From 0.0 To 1.0]
for color in Colors {
colorComponent.append(color.CGColor)
colorLocations.append(i)
i += CGFloat(1.0) / CGFloat(self.Colors.count - 1)
}
// Create The Gradient With The Colors And Locations
let gradient: CGGradientRef = CGGradientCreateWithColors(colorSpace, colorComponent, colorLocations)
// Create The Suitable Gradient Based On Desired Type
if flag {
CGContextDrawRadialGradient(context, gradient, SC, SR, EC, ER, 0)
} else {
CGContextDrawLinearGradient(context, gradient, SP, EP, 0)
}
}
// Get The Input Data For Linear Gradient
func CreateLinearGradient(startPoint: CGPoint, endPoint: CGPoint, colors: UIColor...) {
self.Colors = colors
self.SP = startPoint
self.EP = endPoint
self.flag = false
}
// Get The Input Data For Radial Gradient
func CreateRadialGradient(startCenter: CGPoint, startRadius: CGFloat, endCenter: CGPoint, endRadius: CGFloat, colors: UIColor...) {
self.Colors = colors
self.SC = startCenter
self.EC = endCenter
self.SR = startRadius
self.ER = endRadius
self.flag = true
}
// Function To Convert Gradient To UIImage And Return It
func getImage() -> UIImage {
// Begin Image Context
UIGraphicsBeginImageContext(self.bounds.size)
// Draw The Gradient
self.drawRect(self.frame)
// Get Image From The Current Context
let image = UIGraphicsGetImageFromCurrentImageContext()
// End Image Context
UIGraphicsEndImageContext()
// Return The Result Gradient As UIImage
return image
}
}

UIImage from SKTexture

How to get UIImage from SKTexture?
I tried to get a UIImage from an SKTextureAtlas, but that doesn't seem to work either:
// p40_prop1 is a part of SKTextureAtlas
UIImage *image = [UIImage imageNamed:@"p40_prop1"];
image is nil.
Starting from iOS 9 it is a piece of cake. SKTexture now has a CGImage property, which is of type CGImageRef. So getting an image from a texture is just one line now:
let image : UIImage = UIImage(CGImage:texture.CGImage)
This seems to be working for me:
- (UIImage*) imageWithView:(UIView *)view
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
- (UIImage*) imageFromNode:(SKNode*)node
{
SKTexture* tex = [self.scene.view textureFromNode:node];
SKView* view = [[SKView alloc]initWithFrame:CGRectMake(0, 0, tex.size.width, tex.size.height)];
SKScene* scene = [SKScene sceneWithSize:tex.size];
SKSpriteNode* sprite = [SKSpriteNode spriteNodeWithTexture:tex];
sprite.position = CGPointMake( CGRectGetMidX(view.frame), CGRectGetMidY(view.frame) );
[scene addChild:sprite];
[view presentScene:scene];
return [self imageWithView:view];
}
- get the SKTexture for your node using the current SKView
- make another SKView that is just big enough for your texture
- add an SKSpriteNode with the texture into your new scene, placing it in the middle
- render the view into a graphics context
Or for those who prefer Swift:
func imageWithView(view : UIView) -> UIImage {
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0)
view.drawViewHierarchyInRect(view.bounds, afterScreenUpdates: true)
let img = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return img
}
func imageFromNode(node : SKNode) -> UIImage? {
if let tex = self.scene?.view?.textureFromNode(node) {
let view = SKView(frame:CGRectMake(0, 0, tex.size().width, tex.size().height))
let scene = SKScene(size: tex.size())
let sprite = SKSpriteNode(texture: tex)
sprite.position = CGPoint(x: CGRectGetMidX(view.frame), y: CGRectGetMidY(view.frame))
scene.addChild(sprite)
view.presentScene(scene)
return self.imageWithView(view)
}
return nil
}
There is actually a way to get a UIImage out of a SKView in iOS 7.0!
It uses regular UIView APIs to render the view into an ImageContext, then pulls a UIImage out of that. However, this solution is very limited in scope. It draws the SKView into a UIImage, then crops the resulting image to fit a given node's frame. So there must not be anything covering that node you want to snapshot. Also, both the view and scene must be visible on-screen (which is stricter than the usual -[SKView textureFromNode:] method). There may even be further restrictions that I haven't discovered.
Given all that, this procedure was still enough for what I needed, so I thought it was worth sharing.
+(UIImage *)imageFromNode:(SKNode *)node {
SKView *view = node.scene.view;
CGFloat scale = [UIScreen mainScreen].scale;
CGRect nodeFrame = [node calculateAccumulatedFrame];
// render SKView into UIImage
UIGraphicsBeginImageContextWithOptions(view.bounds.size, YES, 0.0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *sceneSnapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// crop to the requested node (making sure to flip the y-coordinate)
CGFloat originY = sceneSnapshot.size.height*scale - nodeFrame.origin.y*scale - nodeFrame.size.height*scale;
CGRect cropRect = CGRectMake(nodeFrame.origin.x * scale, originY, nodeFrame.size.width*scale, nodeFrame.size.height*scale);
CGImageRef croppedSnapshot = CGImageCreateWithImageInRect(sceneSnapshot.CGImage, cropRect);
UIImage *nodeSnapshot = [UIImage imageWithCGImage:croppedSnapshot];
CGImageRelease(croppedSnapshot);
return nodeSnapshot;
}
I've tested this on the simulator in 3.5" and 4" retina iPhones, retina and non-retina iPads. As for actual devices, it worked on iPhone 4S, iPhone 5S, and iPad 2, all running 7.0.4.
func loadBackground() {
guard let _ = childNode(withName: "background") as! SKSpriteNode? else {
let texture = SKTexture(image: UIImage(named: "stick.jpg")!)
let node = SKSpriteNode(texture: texture)
node.size = texture.size()
node.zPosition = StickHeroGameSceneZposition.backgroundZposition.rawValue
self.physicsWorld.gravity = CGVector(dx: 0, dy: gravity)
addChild(node)
return
}
}
As of iOS 7.0 there's no way to get a UIImage from SKTexture, SKTextureAtlas or the SKView.

Capturing full screenshot with status bar in iOS programmatically

I am using this code to capture a screenshot and to save it to the photo album.
-(void)TakeScreenshotAndSaveToPhotoAlbum
{
UIWindow *window = [UIApplication sharedApplication].keyWindow;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
UIGraphicsBeginImageContextWithOptions(window.bounds.size, NO, [UIScreen mainScreen].scale);
else
UIGraphicsBeginImageContext(window.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
}
But the problem is whenever the screenshot is saved, I see the status bar of iPhone is not captured. Instead a white space appears at the bottom. Like the following image:
What am I doing wrong?
The status bar is actually in its own UIWindow; in your code you are only rendering the view of your view controller, which does not include it.
The "official" screenshot method was here but now seems to have been removed by Apple, probably due to it being obsolete.
Under iOS 7 there is now a new method on UIScreen for getting a view holding the contents of the entire screen:
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
This will give you a view which you can then manipulate on screen for various visual effects.
If you want to draw the view hierarchy into a context, you need to iterate through the windows of the application ([[UIApplication sharedApplication] windows]) and call this method on each one:
- (BOOL)drawViewHierarchyInRect:(CGRect)rect afterScreenUpdates:(BOOL)afterUpdates
You may be able to combine the two above approaches and take the snapshot view, then use the above method on the snapshot to draw it.
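For what it's worth, a rough sketch of that combined idea (untested; one of the Swift answers further down does essentially the same thing):
// Snapshot the whole screen, then draw that snapshot into an image context.
UIView *snapshot = [[UIScreen mainScreen] snapshotViewAfterScreenUpdates:YES];
UIGraphicsBeginImageContextWithOptions(snapshot.bounds.size, YES, 0.0);
[snapshot drawViewHierarchyInRect:snapshot.bounds afterScreenUpdates:YES];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();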
The suggested "official" screenshot method doesn't capture status bar (it is not in the windows list of the application). As tested on iOS 5.
I believe, this is for security reasons, but there is no mention of it in the docs.
I suggest two options:
draw a stub status bar image from resources of your app (optionally update time indicator);
capture only your view, without status bar, or trim image afterwards (image size will differ from standard device resolution); status bar frame is known from corresponding property of application object.
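As a sketch of the second ("trim afterwards") option, assuming a portrait layout and a hypothetical fullImage captured with one of the methods below:
// Crop the status-bar strip off a captured UIImage (fullImage is hypothetical).
CGRect statusBarFrame = [[UIApplication sharedApplication] statusBarFrame];
CGFloat scale = fullImage.scale;
CGRect cropRect = CGRectMake(0, statusBarFrame.size.height * scale, fullImage.size.width * scale, (fullImage.size.height - statusBarFrame.size.height) * scale);
CGImageRef croppedRef = CGImageCreateWithImageInRect(fullImage.CGImage, cropRect);
UIImage *trimmedImage = [UIImage imageWithCGImage:croppedRef scale:scale orientation:fullImage.imageOrientation];
CGImageRelease(croppedRef);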
Here is my code to take a screenshot and store it as NSData (inside an IBAction). With the stored NSData you can then share it, email it, or do whatever you want.
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
{
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [window center].x, [window center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [window transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
-[window bounds].size.width * [[window layer] anchorPoint].x,
-[window bounds].size.height * [[window layer] anchorPoint].y);
// Render the layer hierarchy to the current context
[[window layer] renderInContext:context];
// Restore the context
CGContextRestoreGState(context);
}
}
// Retrieve the screenshot image
UIImage *imageForEmail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageDataForEmail = UIImageJPEGRepresentation(imageForEmail, 1.0);
The answer to the above question for Objective-C is already written there; here is the Swift version.
For Swift 3+
Take a screenshot and then use it to display somewhere or to send over the web.
extension UIImage {
class var screenShot: UIImage? {
let imageSize = UIScreen.main.bounds.size as CGSize;
UIGraphicsBeginImageContextWithOptions(imageSize, false, 0)
guard let context = UIGraphicsGetCurrentContext() else {return nil}
for obj : AnyObject in UIApplication.shared.windows {
if let window = obj as? UIWindow {
if window.responds(to: #selector(getter: UIWindow.screen)) || window.screen == UIScreen.main {
// so we must first apply the layer's geometry to the graphics context
context.saveGState();
// Center the context around the window's anchor point
context.translateBy(x: window.center.x, y: window.center.y);
// Apply the window's transform about the anchor point
context.concatenate(window.transform);
// Offset by the portion of the bounds left of and above the anchor point
context.translateBy(x: -window.bounds.size.width * window.layer.anchorPoint.x,
y: -window.bounds.size.height * window.layer.anchorPoint.y);
// Render the layer hierarchy to the current context
window.layer.render(in: context)
// Restore the context
context.restoreGState();
}
}
}
guard let image = UIGraphicsGetImageFromCurrentImageContext() else {return nil}
return image
}
}
Usage of the above screenshot
Let's display the above screenshot in a UIImageView:
yourImageView.image = UIImage.screenShot
Get image Data to save/send over web
if let img = UIImage.screenShot {
if let data = UIImagePNGRepresentation(img) {
//send this data over web or store it anywhere
}
}
Swift, iOS 13:
The code below (and other ways of accessing) will now crash the app with a message:
App called -statusBar or -statusBarWindow on UIApplication: this code must be changed as there's no longer a status bar or status bar window. Use the statusBarManager object on the window scene instead.
The window scenes and statusBarManager really only give us access to the frame; if this is still possible, I am not aware how.
Swift, iOS10-12:
The following works for me. After profiling all the methods for capturing programmatic screenshots, this is the quickest, and it's the recommended way from Apple following iOS 10:
let screenshotSize = CGSize(width: UIScreen.main.bounds.width * 0.6, height: UIScreen.main.bounds.height * 0.6)
let renderer = UIGraphicsImageRenderer(size: screenshotSize)
let statusBar = UIApplication.shared.value(forKey: "statusBarWindow") as? UIWindow
let screenshot = renderer.image { _ in
UIApplication.shared.keyWindow?.drawHierarchy(in: CGRect(origin: .zero, size: screenshotSize), afterScreenUpdates: true)
statusBar?.drawHierarchy(in: CGRect(origin: .zero, size: screenshotSize), afterScreenUpdates: true)
}
You don't have to scale your screenshot size down (you can use UIScreen.main.bounds directly if you want)
Capture the full screen of the iPhone and get the status bar by using KVC:
if let snapView = window.snapshotView(afterScreenUpdates: false) {
if let statusBarSnapView = (UIApplication.shared.value(forKey: "statusBar") as? UIView)?.snapshotView(afterScreenUpdates: false) {
snapView.addSubview(statusBarSnapView)
}
UIGraphicsBeginImageContextWithOptions(snapView.bounds.size, true, 0)
snapView.drawHierarchy(in: snapView.bounds, afterScreenUpdates: true)
let snapImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
}
The following works for me, capturing the status bar fine (iOS 9, Swift)
let screen = UIScreen.mainScreen()
let snapshotView = screen.snapshotViewAfterScreenUpdates(true)
UIGraphicsBeginImageContextWithOptions(snapshotView.bounds.size, true, 0)
snapshotView.drawViewHierarchyInRect(snapshotView.bounds, afterScreenUpdates: true)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

Flatten CALayer sublayers into one layer

In my app I have one root layer and many images which are sublayers of rootLayer. I'd like to flatten all the sublayers of rootLayer into one layer/image that doesn't have any sublayers. I think I should do this by drawing all the sublayers in a Core Graphics context, but I don't know how to do that.
I hope you'll understand me, and sorry for my English.
From your own example for Mac OS X:
CGContextRef myBitmapContext = MyCreateBitmapContext(800,600);
[rootLayer renderInContext:myBitmapContext];
CGImageRef myImage = CGBitmapContextCreateImage(myBitmapContext);
rootLayer.contents = (id) myImage;
rootLayer.sublayers = nil;
CGImageRelease(myImage);
iOS:
UIGraphicsBeginImageContextWithOptions(rootLayer.bounds.size, NO, 0.0);
[rootLayer renderInContext: UIGraphicsGetCurrentContext()];
UIImage *layerImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
rootLayer.contents = (id) layerImage.CGImage;
rootLayer.sublayers = nil;
Also note the caveat in the docs:
The Mac OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values.
Do you still want the layer to be interactive? If not, call -renderInContext: and show the bitmap context.
I also once considered using render(in: CGContext).
However, I decided not to after reading this:
Important
The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of macOS may add support for rendering these layers and properties.
Seriously, no masks? I say, "WORK, Apple."
As caveats go, it's too huge to ignore, so I had to find a different approach.
solution
extension CALayer {
func flatten(in size: CGSize) -> CALayer {
let flattenedLayer = CALayer(in: size)
flattenedLayer.contents = getCGImage(in: size)
return flattenedLayer
}
}
It converts the CALayer into a CGImage and makes a new CALayer with the flattened image as its contents.
Of course, I had to make getCGImage(in: CGSize) as well, using CARenderer.
implementation
import AppKit
import Metal
let device = MTLCreateSystemDefaultDevice()!
let context = CIContext()
var textureDescriptor: MTLTextureDescriptor = {
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: 0, height: 0, mipmapped: false)
textureDescriptor.usage = [MTLTextureUsage.shaderRead, .shaderWrite, .renderTarget]
return textureDescriptor
}()
let maxScaleFactor = NSScreen.screens.reduce(1) { max($0, $1.backingScaleFactor) }
let scaleTransform = CATransform3DMakeScale(maxScaleFactor, maxScaleFactor, 1)
let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)!
extension CALayer {
convenience init(in size: CGSize, with sublayer: CALayer? = nil) {
self.init()
frame.size = size
if let sublayer { addSublayer(sublayer) }
}
func getCGImage(in size: CGSize) -> CGImage {
let ciImage = getCIImage(in: size)
return context.createCGImage(ciImage, from: ciImage.extent)!
}
func getCIImage(in size: CGSize) -> CIImage {
let superlayer = self.superlayer
let ciImage = CALayer(in: size, with: self).ciImage
if let superlayer { superlayer.addSublayer(self) }
return ciImage
}
var ciImage: CIImage {
let width = frame.size.width * maxScaleFactor
let height = frame.size.height * maxScaleFactor
textureDescriptor.width = Int(width)
textureDescriptor.height = Int(height)
let texture: MTLTexture = device.makeTexture(descriptor: textureDescriptor)!
let renderer = CARenderer(mtlTexture: texture)
transform = scaleTransform
frame = .init(origin: .zero, size: .init(width: width, height: height))
renderer.bounds = frame
renderer.layer = self
CATransaction.flush()
CATransaction.commit()
renderer.beginFrame(atTime: 0, timeStamp: nil)
renderer.render()
renderer.endFrame()
return CIImage(mtlTexture: texture, options: [.colorSpace: colorSpace])!
}
}
There are a few things to note:
use of CATransform3D to match resolution of HiDPI display
temporarily added as a new CALayer's sublayer in getCIImage(in: CGSize) for layers like CAShapeLayer with zero frame size; this is also to get the image independent from its frame origin/position
use of CATransaction.flush() to transfer ownership of the layer tree to the CARenderer’s context
extras
Since CALayer can have NSImage as its contents too in macOS 10.6 and later, you can change the function accordingly.
extension CALayer {
func getNSImage(in size: CGSize) -> NSImage {
let ciImage = getCIImage(in: size)
return NSImage(data: ciImage.pngData)!
}
}
extension CIImage {
var pngData: Data { context.pngRepresentation(of: self, format: .RGBA8, colorSpace: colorSpace!)! }
// jpeg, tiff, etc. can be created the same way
}
Coming this far, saving as an image is not hard at all, but using write...Representation(of: CIImage, ...) is probably the easiest way:
extension CALayer {
func saveAsPNG(at url: URL, in size: CGSize? = nil) {
let image = size == nil ? ciImage : getCIImage(in: size!)
return try! context.writePNGRepresentation(of: image, to: url, format: .RGBA8, colorSpace: colorSpace)
}
// jpeg, tiff, etc. can be saved the same way
}