Drawing a CIImage to an NSOpenGLView (without hitting main memory) - objective-c

I feel like I must be missing something here. I've subclassed NSOpenGLView, and I'm attempting to draw a CIImage in the drawRect: call.
override func drawRect(dirtyRect: NSRect) {
    super.drawRect(dirtyRect)
    let start = CFAbsoluteTimeGetCurrent()

    openGLContext!.makeCurrentContext()
    let cglContext = openGLContext!.CGLContextObj
    let pixelFormat = openGLContext!.pixelFormat.CGLPixelFormatObj
    let colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB)!
    let options: [String: AnyObject] = [kCIContextOutputColorSpace: colorSpace]
    let context = CIContext(CGLContext: cglContext, pixelFormat: pixelFormat, colorSpace: colorSpace, options: options)

    context.drawImage(inputImage, inRect: self.bounds, fromRect: inputImage.extent)
    openGLContext!.flushBuffer()

    let end = CFAbsoluteTimeGetCurrent()
    Swift.print("updated view in \(end - start)")
}
I'm obviously under the mistaken impression that the NSOpenGLContext (and its underlying CGLContext) can be wrapped in a CIContext, and that rendering into that will produce an image in the view. But while work is being done in the above code, I have no idea where the actual pixels are ending up (because the view ends up blank).
If I just grab the current NSGraphicsContext and render into that, then I get an image in the NSOpenGLView, but the rendering seems to take about 10x as long (i.e. by changing the CIContext declaration to this):
// Works, but is slow
let context = NSGraphicsContext.currentContext()!.CIContext!
Also tried this, which is both slow AND doesn't actually display an image, making it a double fail:
// Doesn't work. AND it's slow.
inputImage.drawInRect(bounds, fromRect: inputImage.extent, operation: .CompositeDestinationAtop, fraction: 1.0)
The simple solution is just to render out to a CGImage, and then pop that onto the screen (via CGImage -> NSImage and an NSImageView, or a backing CALayer). But that's not performant enough for my case. In my app, I'm looking to render a couple dozen thumbnail-sized images, each with its own chain of CIFilters, and refresh them in realtime as their underlying base image changes. While each renders in a few milliseconds (with my current CGImage-bracketed pathway), the view updates are still on the order of a few frames per second.
I currently have this working with a path that looks something like CGImageRef -> CIImage -> Bunch of CIFilters -> CGImage -> assign to CALayer for display. But it appears that having CGImages at both ends of the rendering chain is killing performance.
After profiling, it appears that MOST of the time is being spent just copying memory around, which I suppose is expected, but not very efficient. The backing CGImage needs to be shuttled to the GPU, then filtered, then goes back to main memory as a CGImage, then (presumably) goes right back to the GPU to be scaled and displayed by the CALayer. Ideally, the root images (before filtering) would just sit on the GPU, and results would be rendered directly to video memory, but I have no idea how to accomplish this. My current rendering pathway does the pixel smashing on the GPU (that's fantastic!), but is swamped by shuttling memory around to actually display the darned thing.
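For reference, the round trip described above looks roughly like this (a minimal sketch in current Swift syntax; renderThumbnail, baseCGImage, filters, and thumbLayer are hypothetical stand-ins for my real code):
import CoreImage
import QuartzCore

// Sketch of the CGImage-bracketed path: the filtered result is read back
// into main memory (createCGImage) and then re-uploaded to the GPU when
// the CALayer displays it.
func renderThumbnail(from baseCGImage: CGImage,
                     through filters: [CIFilter],
                     into thumbLayer: CALayer,
                     using ciContext: CIContext = CIContext()) {
    var image = CIImage(cgImage: baseCGImage)            // CPU -> GPU upload
    for filter in filters {                              // filter chain runs on the GPU
        filter.setValue(image, forKey: kCIInputImageKey)
        image = filter.outputImage!
    }
    // GPU -> main memory: this readback is the expensive step
    let rendered = ciContext.createCGImage(image, from: image.extent)!
    thumbLayer.contents = rendered                       // main memory -> GPU again for display
}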
So can anyone enlighten me on how to do Core Image filtering with a pathway that keeps things on the GPU end-to-end? (or at least only has to swap that data in once?) If I have, say, an IOSurface-backed CIImage, how do I draw that directly to the UI without hitting main memory? Any hints?

After a day of banging my head against the wall on this, still no dice with NSOpenGLView... BUT, I think I'm able to do what I want via CAOpenGLLayer instead, which is just fine with me:
class GLLayer: CAOpenGLLayer {
    var image: CIImage?

    override func drawInCGLContext(ctx: CGLContextObj, pixelFormat pf: CGLPixelFormatObj, forLayerTime t: CFTimeInterval, displayTime ts: UnsafePointer<CVTimeStamp>) {
        if image == nil {
            let url = NSURL(fileURLWithPath: "/Users/doug/Desktop/test.jpg")
            image = CIImage(contentsOfURL: url)!
        }

        let colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB)!
        let options: [String: AnyObject] = [kCIContextOutputColorSpace: colorSpace]
        let context = CIContext(CGLContext: ctx, pixelFormat: pf, colorSpace: colorSpace, options: options)

        let targetRect = CGRectMake(-1, -1, 2, 2)
        context.drawImage(image!, inRect: targetRect, fromRect: image!.extent)
    }
}
One thing of note - the coordinate system for the CAOpenGLLayer is different. The drawing area is 2.0x2.0 units, centered on the origin, so the target rect runs from (-1,-1) to (1,1) (took a while to figure that out). Other than that, this is basically the same as the non-working code in my original question (except, you know, it actually works here). Perhaps the NSOpenGLView class isn't returning the proper context. Who knows. Still interested in why, but at least I can move on now.
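For completeness, hosting the layer is just the usual layer-backed NSView setup - a minimal sketch in current Swift syntax (hostView and the update calls are assumptions, not part of the code above):
import AppKit

// Host the CAOpenGLLayer subclass in a layer-backed NSView.
let hostView = NSView(frame: NSRect(x: 0, y: 0, width: 256, height: 256))
hostView.wantsLayer = true

let glLayer = GLLayer()
glLayer.frame = hostView.bounds
glLayer.isAsynchronous = false        // only redraw when asked
hostView.layer?.addSublayer(glLayer)

// When the image (or its filter chain) changes:
glLayer.image = CIImage(contentsOf: URL(fileURLWithPath: "/Users/doug/Desktop/test.jpg"))
glLayer.setNeedsDisplay()             // triggers drawInCGLContext on the next pass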

The code should work with the following additions/modifications:
var ciContext: CIContext!
override func drawRect(dirtyRect: NSRect) {
    super.drawRect(dirtyRect)
    let start = CFAbsoluteTimeGetCurrent()

    openGLContext!.makeCurrentContext()
    let cglContext = openGLContext!.CGLContextObj
    let attr = [
        NSOpenGLPixelFormatAttribute(NSOpenGLPFAAccelerated),
        NSOpenGLPixelFormatAttribute(NSOpenGLPFANoRecovery),
        NSOpenGLPixelFormatAttribute(NSOpenGLPFAColorSize), 32,
        0
    ]
    let pixelFormat = NSOpenGLPixelFormat(attributes: attr)
    let colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB)!
    let options: [String: AnyObject] = [kCIContextOutputColorSpace: colorSpace, kCIContextWorkingColorSpace: colorSpace]
    ciContext = CIContext(CGLContext: cglContext, pixelFormat: pixelFormat!.CGLPixelFormatObj, colorSpace: colorSpace, options: options)

    glClearColor(0.0, 0.0, 0.0, 0.0)
    glClear(GLbitfield(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT))

    // setup viewing projection
    glMatrixMode(GLenum(GL_PROJECTION))
    // start with identity matrix
    glLoadIdentity()

    // Create an image
    let image = NSImage(contentsOfFile: "/Users/kuestere/Pictures/DSCI0004.JPG")
    let imageData = image?.TIFFRepresentation!
    let ciImage = CIImage(data: imageData!)
    if ciImage == nil {
        Swift.print("ciImage loading error")
    }
    let imageFrame = ciImage!.extent
    let viewFrame = self.visibleRect

    // setup a viewing world: left, right, bottom, top, zNear, zFar
    glOrtho(0.0, Double(viewFrame.width), 0.0, Double(viewFrame.height), -1.0, 1.0)

    ciContext.drawImage(ciImage!, inRect: self.visibleRect, fromRect: imageFrame)
    glFlush()

    let end = CFAbsoluteTimeGetCurrent()
    Swift.print("updated view in \(end - start)")
}
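One refinement worth noting (my own suggestion, not part of the answer above): creating a CIContext is relatively expensive, so it is better to build it once and reuse it across drawRect: calls rather than rebuilding it every frame. A small helper along these lines, using the ciContext property already declared above and the same Swift syntax as the answer:
func sharedCIContext() -> CIContext {
    // Build the CIContext lazily, then reuse it; recreating it every frame
    // discards Core Image's cached GPU resources.
    if ciContext == nil {
        let colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB)!
        let options: [String: AnyObject] = [kCIContextOutputColorSpace: colorSpace,
                                            kCIContextWorkingColorSpace: colorSpace]
        ciContext = CIContext(CGLContext: openGLContext!.CGLContextObj,
                              pixelFormat: openGLContext!.pixelFormat.CGLPixelFormatObj,
                              colorSpace: colorSpace,
                              options: options)
    }
    return ciContext
}
drawRect: would then call sharedCIContext() instead of constructing a new context on every pass.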

Related

Translate detected rectangle from portrait CIImage to landscape CIImage

I am modifying code found here. In the code we are capturing video from the phone camera using AVCaptureSession and using CIDetector to detect a rectangle in the image feed. The feed has an image which is 640x842 (iphone5 in portrait). We then do an overlay on the image so the user can see the detected rectangle (actually it's a trapezoid most of the time).
When the user presses a button on the UI, we capture an image from the video and re-run the rectangle detection on this larger image (3264x2448), which as you can see is landscape. We then do a perspective transform on the detected rectangle and crop the image.
This works pretty well, but the issue is that in roughly 1 out of 5 captures the rectangle detected on the larger image is different from the one detected (and presented to the user) on the smaller image. Even though I only capture when I detect that the phone is (relatively) still, the final image then does not represent the rectangle the user expected.
To resolve this my idea is to use the coordinates of the originally captured rectangle and translate them to a rectangle on the captured still image. This is where I'm struggling.
I tried this with the detected rectangle:
CGFloat radians = -90 * (M_PI/180);
CGAffineTransform rotation = CGAffineTransformMakeRotation(radians);
CGRect rect = CGRectMake(detectedRect.bounds.origin.x, detectedRect.bounds.origin.y, detectedRect.bounds.size.width, detectedRect.bounds.size.height);
CGRect rotatedRect = CGRectApplyAffineTransform(rect, rotation);
So given a detected rect:
TopLeft: 88.213425, 632.31329
TopRight: 545.59302, 632.15546
BottomRight: 575.57819, 369.22321
BottomLeft: 49.973862, 369.40466
I now have this rotated rect:
origin = (x = 369.223206, y = -575.578186)
size = (width = 263.090088, height = 525.604309)
How do I translate the rotated rectangle coordinates in the smaller portrait image to coordinates in the 3264x2448 image?
Edit
Duh... rereading my own approach, I realised that creating a rectangle out of a trapezoid will not solve my problem!
Supporting code to detect the rectangle etc...
// In this method we detect a rect from the video feed and overlay
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    // image is 640x852 on iphone5
    NSArray *rects = [[CIDetector detectorOfType:CIDetectorTypeRectangle context:nil options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}] featuresInImage:image];
    CIRectangleFeature *detectedRect = rects[0];
    // draw overlay on image code....
}
This is a summarized version of how the still image is obtained:
// code block to handle output from AVCaptureStillImageOutput
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
    CIImage *enhancedImage = [[CIImage alloc] initWithData:imageData options:@{kCIImageColorSpace:[NSNull null]}];
    imageData = nil;
    CIRectangleFeature *rectangleFeature = [self getDetectedRect:[[self highAccuracyRectangleDetector] featuresInImage:enhancedImage]];
    if (rectangleFeature) {
        enhancedImage = [self correctPerspectiveForImage:enhancedImage withTopLeft:rectangleFeature.topLeft andTopRight:rectangleFeature.topRight andBottomRight:rectangleFeature.bottomRight andBottomLeft:rectangleFeature.bottomLeft];
    }
}];
Thank you.
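For what it's worth, since the detected shape is a trapezoid, each corner point would have to be mapped individually rather than as a CGRect. A rough, heavily hedged sketch of what that mapping might look like (in Swift; the rotation direction and the assumption of uniform scaling depend on the capture orientation, so the formulas may need to be swapped or flipped):
import UIKit

// Map one detected corner from the small portrait preview image
// (e.g. 640 x 852) into the large landscape still (e.g. 3264 x 2448).
// Assumes the preview is the still rotated 90 degrees and scaled.
func mapPoint(_ p: CGPoint, previewSize: CGSize, stillSize: CGSize) -> CGPoint {
    // Preview assumed to be the still rotated 90 degrees counter-clockwise:
    let x = p.y * (stillSize.width / previewSize.height)
    let y = stillSize.height - p.x * (stillSize.height / previewSize.width)
    return CGPoint(x: x, y: y)
    // If the result comes out mirrored, try the clockwise variant instead:
    //   x = stillSize.width - p.y * (stillSize.width / previewSize.height)
    //   y = p.x * (stillSize.height / previewSize.width)
}
Applying this to all four corners of the detected CIRectangleFeature would give the points to feed into the perspective correction on the still image.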
I had the same issue while doing this kind of stuff. I resolved it with the code below in Swift. Take a look and see if it helps you.
if let videoConnection = stillImageOutput.connection(withMediaType: AVMediaTypeVideo) {
    stillImageOutput.captureStillImageAsynchronously(from: videoConnection) {
        (imageDataSampleBuffer, error) -> Void in
        let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
        var img = UIImage(data: imageData!)!
        let outputRect = self.previewLayer?.metadataOutputRectOfInterest(for: (self.previewLayer?.bounds)!)
        let takenCGImage = img.cgImage
        let width = (takenCGImage?.width)!
        let height = (takenCGImage?.height)!
        let cropRect = CGRect(x: (outputRect?.origin.x)! * CGFloat(width), y: (outputRect?.origin.y)! * CGFloat(height), width: (outputRect?.size.width)! * CGFloat(width), height: (outputRect?.size.height)! * CGFloat(height))
        let cropCGImage = takenCGImage!.cropping(to: cropRect)
        img = UIImage(cgImage: cropCGImage!, scale: 1, orientation: img.imageOrientation)
        let cropViewController = TOCropViewController(image: self.cropToBounds(image: img))
        cropViewController.delegate = self
        self.navigationController?.pushViewController(cropViewController, animated: true)
    }
}

How to generate thumbnail image from source server url?

I want to create a thumbnail image from the original source URL and show a blur effect in a table view, like WhatsApp. I don't want to upload two separate URLs (low and high resolution) to the server. Does anyone have an idea how this can be done? Help.
For that, follow these steps:
1. Download the image from the URL using any library with a lazy-loading mechanism (a rough sketch of the whole flow follows the blur helper below)
2. Convert the image to a blurred one and show it in the UI
3. After tapping on it, show the actual image from the cache
You can add a blur effect to a UIImageView with the CIGaussianBlur filter:
func getBlurImageFrom(image image: UIImage) -> UIImage {
    let radius: CGFloat = 20
    let context = CIContext(options: nil)
    let inputImage = CIImage(CGImage: image.CGImage!)
    let filter = CIFilter(name: "CIGaussianBlur")
    filter?.setValue(inputImage, forKey: kCIInputImageKey)
    filter?.setValue(radius, forKey: kCIInputRadiusKey)
    let result = filter?.valueForKey(kCIOutputImageKey) as! CIImage
    // inset the output rect so the blurred (transparent) edges are cropped away
    let rect = CGRectMake(radius * 2, radius * 2, image.size.width - radius * 4, image.size.height - radius * 4)
    let cgImage = context.createCGImage(result, fromRect: rect)
    let returnImage = UIImage(CGImage: cgImage)
    return returnImage
}
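Putting the three steps together might look roughly like this (a minimal sketch in current Swift; the plain URLSession download and the tap-to-reveal step are assumptions - substitute your preferred lazy-loading/caching library):
import UIKit

// Download the full-size image once, show a blurred thumbnail first,
// then swap in the sharp image later (e.g. when the cell is tapped).
func loadBlurredThumbnail(from url: URL, into imageView: UIImageView) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data, let full = UIImage(data: data) else { return }
        let blurred = getBlurImageFrom(image: full)   // helper from above
        DispatchQueue.main.async {
            imageView.image = blurred                 // step 2: blurred placeholder
            // step 3: keep the full image in a cache and assign it on tap:
            // imageView.image = full
        }
    }.resume()
}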
Just follow the above steps to achieve it. I did this in one of my apps, but I implemented it in Objective-C.

Capturing full screenshot with status bar in iOS programmatically

I am using this code to capture a screenshot and to save it to the photo album.
- (void)TakeScreenshotAndSaveToPhotoAlbum
{
    UIWindow *window = [UIApplication sharedApplication].keyWindow;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
        UIGraphicsBeginImageContextWithOptions(window.bounds.size, NO, [UIScreen mainScreen].scale);
    else
        UIGraphicsBeginImageContext(window.bounds.size);

    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
}
But the problem is that whenever the screenshot is saved, the iPhone's status bar is not captured. Instead a white space appears at the bottom, like in the following image:
What am I doing wrong?
The status bar is actually in its own UIWindow; in your code you are only rendering the view of your view controller, which does not include it.
The "official" screenshot method was here but now seems to have been removed by Apple, probably due to it being obsolete.
Under iOS 7 there is now a new method on UIScreen for getting a view holding the contents of the entire screen:
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
This will give you a view which you can then manipulate on screen for various visual effects.
If you want to draw the view hierarchy into a context, you need to iterate through the windows of the application ([[UIApplication sharedApplication] windows]) and call this method on each one:
- (BOOL)drawViewHierarchyInRect:(CGRect)rect afterScreenUpdates:(BOOL)afterUpdates
You may be able to combine the two above approaches and take the snapshot view, then use the above method on the snapshot to draw it.
The suggested "official" screenshot method doesn't capture the status bar (it is not in the application's windows list), as tested on iOS 5.
I believe this is for security reasons, but there is no mention of it in the docs.
I suggest two options:
1. Draw a stub status bar image from your app's resources (optionally updating the time indicator).
2. Capture only your view, without the status bar, or trim the image afterwards (the image size will differ from the standard device resolution); the status bar frame is known from the corresponding property of the application object (a rough sketch follows below).
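A sketch of the second option, in Swift, where statusBarFrame is the "corresponding property" mentioned above; this assumes the blank strip is exactly the status bar area, so adjust the crop rect if it lands elsewhere in your capture:
import UIKit

// Trim the status-bar strip off a captured screenshot.
func trimStatusBarArea(from screenshot: UIImage) -> UIImage {
    let barHeight = UIApplication.shared.statusBarFrame.height
    let scale = screenshot.scale
    // UIImage.size is in points, CGImage is in pixels, so multiply by scale.
    let cropRect = CGRect(x: 0,
                          y: barHeight * scale,
                          width: screenshot.size.width * scale,
                          height: (screenshot.size.height - barHeight) * scale)
    guard let cgImage = screenshot.cgImage?.cropping(to: cropRect) else { return screenshot }
    return UIImage(cgImage: cgImage, scale: scale, orientation: screenshot.imageOrientation)
}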
Here is my code to take a screenshot and store it as NSData (inside an IBAction). With the stored NSData you can then share it, email it, or do whatever you want with it.
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
    UIGraphicsBeginImageContext(imageSize);

CGContextRef context = UIGraphicsGetCurrentContext();

// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
    if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
    {
        // -renderInContext: renders in the coordinate space of the layer,
        // so we must first apply the layer's geometry to the graphics context
        CGContextSaveGState(context);
        // Center the context around the window's anchor point
        CGContextTranslateCTM(context, [window center].x, [window center].y);
        // Apply the window's transform about the anchor point
        CGContextConcatCTM(context, [window transform]);
        // Offset by the portion of the bounds left of and above the anchor point
        CGContextTranslateCTM(context,
                              -[window bounds].size.width * [[window layer] anchorPoint].x,
                              -[window bounds].size.height * [[window layer] anchorPoint].y);
        // Render the layer hierarchy to the current context
        [[window layer] renderInContext:context];
        // Restore the context
        CGContextRestoreGState(context);
    }
}

// Retrieve the screenshot image
UIImage *imageForEmail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageDataForEmail = UIImageJPEGRepresentation(imageForEmail, 1.0);
The answer to the above question for Objective-C is already written above; here is the Swift version.
For Swift 3+
Take a screenshot and then use it to display somewhere or to send over the web.
extension UIImage {
    class var screenShot: UIImage? {
        let imageSize = UIScreen.main.bounds.size as CGSize
        UIGraphicsBeginImageContextWithOptions(imageSize, false, 0)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        for obj: AnyObject in UIApplication.shared.windows {
            if let window = obj as? UIWindow {
                if window.responds(to: #selector(getter: UIWindow.screen)) || window.screen == UIScreen.main {
                    // so we must first apply the layer's geometry to the graphics context
                    context.saveGState()
                    // Center the context around the window's anchor point
                    context.translateBy(x: window.center.x, y: window.center.y)
                    // Apply the window's transform about the anchor point
                    context.concatenate(window.transform)
                    // Offset by the portion of the bounds left of and above the anchor point
                    context.translateBy(x: -window.bounds.size.width * window.layer.anchorPoint.x,
                                        y: -window.bounds.size.height * window.layer.anchorPoint.y)
                    // Render the layer hierarchy to the current context
                    window.layer.render(in: context)
                    // Restore the context
                    context.restoreGState()
                }
            }
        }
        guard let image = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
        return image
    }
}
Usage of the above screenshot
Let's display the above screenshot in a UIImageView:
yourImageView.image = UIImage.screenShot
Get the image data to save/send over the web:
if let img = UIImage.screenShot {
    if let data = UIImagePNGRepresentation(img) {
        // send this data over the web or store it anywhere
    }
}
Swift, iOS 13:
The code below (and other ways of accessing the status bar) will now crash the app with this message:
App called -statusBar or -statusBarWindow on UIApplication: this code must be changed as there's no longer a status bar or status bar window. Use the statusBarManager object on the window scene instead.
The window scene's statusBarManager really only gives us access to the frame; if capturing the status bar is still possible, I am not aware how.
Swift, iOS10-12:
The following works for me; after profiling all the methods for capturing programmatic screenshots, this is the quickest, and it is the approach recommended by Apple following iOS 10:
let screenshotSize = CGSize(width: UIScreen.main.bounds.width * 0.6, height: UIScreen.main.bounds.height * 0.6)
let renderer = UIGraphicsImageRenderer(size: screenshotSize)
let statusBar = UIApplication.shared.value(forKey: "statusBarWindow") as? UIWindow
let screenshot = renderer.image { _ in
    UIApplication.shared.keyWindow?.drawHierarchy(in: CGRect(origin: .zero, size: screenshotSize), afterScreenUpdates: true)
    statusBar?.drawHierarchy(in: CGRect(origin: .zero, size: screenshotSize), afterScreenUpdates: true)
}
You don't have to scale your screenshot size down (you can use UIScreen.main.bounds directly if you want)
Capture the full screen of the iPhone, getting the status bar by using KVC:
if let snapView = window.snapshotView(afterScreenUpdates: false) {
    if let statusBarSnapView = (UIApplication.shared.value(forKey: "statusBar") as? UIView)?.snapshotView(afterScreenUpdates: false) {
        snapView.addSubview(statusBarSnapView)
    }
    UIGraphicsBeginImageContextWithOptions(snapView.bounds.size, true, 0)
    snapView.drawHierarchy(in: snapView.bounds, afterScreenUpdates: true)
    let snapImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
The following works for me, capturing the status bar fine (iOS 9, Swift)
let screen = UIScreen.mainScreen()
let snapshotView = screen.snapshotViewAfterScreenUpdates(true)
UIGraphicsBeginImageContextWithOptions(snapshotView.bounds.size, true, 0)
snapshotView.drawViewHierarchyInRect(snapshotView.bounds, afterScreenUpdates: true)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

UIButton image rotation issue

I have a [UIButton buttonWithType:UIButtonTypeCustom] that has an image (or a background image - same problem) created by [UIImage imageWithContentsOfFile:] pointing to a JPG file taken by the camera and saved in the documents folder by the application.
If I define the image for UIControlStateNormal only, then when I touch the button the image gets darker as expected, but it also rotates either 90 degrees or 180 degrees. When I remove my finger it returns to normal.
This does not happen if I use the same image for UIControlStateHighlighted, but then I lose the touch indication (darker image).
This only happens with an image read from a file. It does not happen with [UIImage imageNamed:].
I tried saving the file in PNG format rather than as JPG. In this case the image shows up in the wrong orientation to begin with, and is not rotated again when touched. This is not a good solution anyhow because the PNG is far too large and slow to handle.
Is this a bug or am I doing something wrong?
I was not able to find a proper solution to this and I needed a quick workaround. Below is a function which, given a UIImage, returns a new image which is darkened with a dark alpha fill. The context fill commands could be replaced with other draw or fill routines to provide different types of darkening.
This is un-optimized and was made with minimal knowledge of the graphics API.
You can use this function to set the UIControlStateHighlighted state image so that at least it will be darker.
+ (UIImage *)darkenedImageWithImage:(UIImage *)sourceImage
{
    UIImage *darkenedImage = nil;
    if (sourceImage)
    {
        // drawing prep
        CGImageRef source = sourceImage.CGImage;
        CGRect drawRect = CGRectMake(0.f,
                                     0.f,
                                     sourceImage.size.width,
                                     sourceImage.size.height);
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     drawRect.size.width,
                                                     drawRect.size.height,
                                                     CGImageGetBitsPerComponent(source),
                                                     CGImageGetBytesPerRow(source),
                                                     CGImageGetColorSpace(source),
                                                     CGImageGetBitmapInfo(source)
                                                     );

        // draw given image and then darken fill it
        CGContextDrawImage(context, drawRect, source);
        CGContextSetBlendMode(context, kCGBlendModeOverlay);
        CGContextSetRGBFillColor(context, 0.f, 0.f, 0.f, 0.5f);
        CGContextFillRect(context, drawRect);

        // get context result
        CGImageRef darkened = CGBitmapContextCreateImage(context);
        CGContextRelease(context);

        // convert to UIImage and preserve original orientation
        darkenedImage = [UIImage imageWithCGImage:darkened
                                            scale:1.f
                                      orientation:sourceImage.imageOrientation];
        CGImageRelease(darkened);
    }
    return darkenedImage;
}
The camera saves JPEGs with an EXIF orientation flag rather than rotating the pixels, which is most likely why the image appears rotated. To fix this you need an additional normalization function that redraws the image so its orientation becomes .Up, like this:
public extension UIImage {
    func normalizedImage() -> UIImage! {
        if self.imageOrientation == .Up {
            return self
        }
        UIGraphicsBeginImageContextWithOptions(self.size, false, self.scale)
        self.drawInRect(CGRectMake(0, 0, self.size.width, self.size.height))
        let normalized = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return normalized
    }
}
then you can use it like this:
self.photoButton.sd_setImageWithURL(avatarURL,
                                    forState: .Normal,
                                    placeholderImage: UIImage(named: "user_avatar_placeholder")) {
    [weak self] (image, error, cacheType, url) in
    guard let strongSelf = self else {
        return
    }
    strongSelf.photoButton.setImage(image.normalizedImage(), forState: .Normal)
}

Flatten CALayer sublayers into one layer

In my app I have one root layer, and many images which are sublayers of rootLayer. I'd like to flatten all the sublayers of rootLayer into one layer/image that doesn't have any sublayers. I think I should do this by drawing all the sublayers into a Core Graphics context, but I don't know how to do that.
I hope you'll understand me, and sorry for my English.
From your own example for Mac OS X:
CGContextRef myBitmapContext = MyCreateBitmapContext(800,600);
[rootLayer renderInContext:myBitmapContext];
CGImageRef myImage = CGBitmapContextCreateImage(myBitmapContext);
rootLayer.contents = (id) myImage;
rootLayer.sublayers = nil;
CGImageRelease(myImage);
iOS:
UIGraphicsBeginImageContextWithOptions(rootLayer.bounds.size, NO, 0.0);
[rootLayer renderInContext: UIGraphicsGetCurrentContext()];
UIImage *layerImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
rootLayer.contents = (id) layerImage.CGImage;
rootLayer.sublayers = nil;
Also note the caveat in the docs:
The Mac OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values.
Do you still want the layer to be interactive? If not, call -renderInContext: and show the bitmap context.
I also once considered using render(in: CGContext).
However, I decided not to after reading this:
Important
The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of macOS may add support for rendering these layers and properties.
Seriously, no masks? I say, "WORK, Apple."
As caveats go, it's too big to ignore, so I had to find a different approach.
solution
extension CALayer {
    func flatten(in size: CGSize) -> CALayer {
        let flattenedLayer = CALayer(in: size)
        flattenedLayer.contents = getCGImage(in: size)
        return flattenedLayer
    }
}
It converts the CALayer into a CGImage and makes a new CALayer with the flattened image as its contents.
Of course, I had to make getCGImage(in: CGSize) as well, using CARenderer.
implementation
import AppKit
import Metal

let device = MTLCreateSystemDefaultDevice()!
let context = CIContext()

var textureDescriptor: MTLTextureDescriptor = {
    let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: 0, height: 0, mipmapped: false)
    textureDescriptor.usage = [MTLTextureUsage.shaderRead, .shaderWrite, .renderTarget]
    return textureDescriptor
}()

let maxScaleFactor = NSScreen.screens.reduce(1) { max($0, $1.backingScaleFactor) }
let scaleTransform = CATransform3DMakeScale(maxScaleFactor, maxScaleFactor, 1)
let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)!

extension CALayer {
    convenience init(in size: CGSize, with sublayer: CALayer? = nil) {
        self.init()
        frame.size = size
        if let sublayer { addSublayer(sublayer) }
    }

    func getCGImage(in size: CGSize) -> CGImage {
        let ciImage = getCIImage(in: size)
        return context.createCGImage(ciImage, from: ciImage.extent)!
    }

    func getCIImage(in size: CGSize) -> CIImage {
        let superlayer = self.superlayer
        let ciImage = CALayer(in: size, with: self).ciImage
        if let superlayer { superlayer.addSublayer(self) }
        return ciImage
    }

    var ciImage: CIImage {
        let width = frame.size.width * maxScaleFactor
        let height = frame.size.height * maxScaleFactor
        textureDescriptor.width = Int(width)
        textureDescriptor.height = Int(height)
        let texture: MTLTexture = device.makeTexture(descriptor: textureDescriptor)!
        let renderer = CARenderer(mtlTexture: texture)

        transform = scaleTransform
        frame = .init(origin: .zero, size: .init(width: width, height: height))
        renderer.bounds = frame
        renderer.layer = self
        CATransaction.flush()
        CATransaction.commit()

        renderer.beginFrame(atTime: 0, timeStamp: nil)
        renderer.render()
        renderer.endFrame()

        return CIImage(mtlTexture: texture, options: [.colorSpace: colorSpace])!
    }
}
There are a few things to note:
use of CATransform3D to match resolution of HiDPI display
temporarily added as a new CALayer's sublayer in getCIImage(in: CGSize) for layers like CAShapeLayer with zero frame size; this is also to get the image independent from its frame origin/position
use of CATransaction.flush() to transfer ownership of the layer tree to the CARenderer’s context
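Usage is then a one-liner (a minimal sketch; someRootLayer and the size here are made-up names, not part of the implementation above):
// Collapse an existing layer tree into a single content-only layer
// and swap it into the hierarchy in place of the original.
let flattened = someRootLayer.flatten(in: CGSize(width: 400, height: 300))
someRootLayer.superlayer?.replaceSublayer(someRootLayer, with: flattened)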
extras
Since CALayer can have NSImage as its contents too in macOS 10.6 and later, you can change the function accordingly.
extension CALayer {
    func getNSImage(in size: CGSize) -> NSImage {
        let ciImage = getCIImage(in: size)
        return NSImage(data: ciImage.pngData)!
    }
}
extension CIImage {
    var pngData: Data { context.pngRepresentation(of: self, format: .RGBA8, colorSpace: colorSpace!)! }
    // jpeg, tiff, etc. can be created the same way
}
Having come this far, saving as an image is not hard at all, but using write...Representation(of: CIImage, ...) is probably the easiest way:
extension CALayer {
    func saveAsPNG(at url: URL, in size: CGSize? = nil) {
        let image = size == nil ? ciImage : getCIImage(in: size!)
        return try! context.writePNGRepresentation(of: image, to: url, format: .RGBA8, colorSpace: colorSpace)
    }
    // jpeg, tiff, etc. can be saved the same way
}