Fastest way to draw offscreen CALayer content

I'm looking for the fastest way to draw offscreen CALayer content (no alpha needed) on macOS. Note that these examples aren't threaded; the reason I'm not simply using CALayer.setNeedsDisplay is that I'm doing this drawing on a background thread.
My original code did this:
let size = layer.bounds.size
let contents = NSImage(size: size)
contents.lockFocusFlipped(true)
let context = NSGraphicsContext.current!.cgContext
layer.draw(in: context)
contents.unlockFocus()
layer.contents = contents

My current best is quite a bit faster:
let contentsScale = layer.contentsScale
let bounds = layer.bounds
let width = Int(bounds.width * contentsScale)
let height = Int(bounds.height * contentsScale)
// Round bytesPerRow up to a multiple of 64 bytes.
let bytesPerRow = width * 4
let alignedBytesPerRow = ((bytesPerRow + (64 - 1)) / 64) * 64
let context = CGContext(
    data: nil,
    width: width,
    height: height,
    bitsPerComponent: 8,
    bytesPerRow: alignedBytesPerRow,
    space: NSScreen.main?.colorSpace?.cgColorSpace ?? CGColorSpaceCreateDeviceRGB(),
    bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue
)!
context.scaleBy(x: contentsScale, y: contentsScale)
layer.draw(in: context)
layer.contents = context.makeImage()
Tips and recommendations for making it better/faster are welcome.
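
Since the whole point is off-main-thread rendering, here is a minimal sketch of how the bitmap approach might be driven from a background queue. The helper name and queue choice are my own, and it assumes the layer isn't mutated while the background draw is in flight:

import AppKit

// Hypothetical helper: render `layer` into a bitmap off the main thread,
// then assign the result as the layer's contents back on the main thread.
func renderOffscreen(_ layer: CALayer, queue: DispatchQueue = .global(qos: .userInitiated)) {
    queue.async {
        let contentsScale = layer.contentsScale
        let bounds = layer.bounds
        let width = Int(bounds.width * contentsScale)
        let height = Int(bounds.height * contentsScale)
        guard width > 0, height > 0, let context = CGContext(
            data: nil,
            width: width,
            height: height,
            bitsPerComponent: 8,
            bytesPerRow: 0, // 0 lets Core Graphics pick a suitable row alignment
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue
        ) else { return }
        context.scaleBy(x: contentsScale, y: contentsScale)
        layer.draw(in: context)
        let image = context.makeImage()
        DispatchQueue.main.async {
            layer.contents = image
        }
    }
}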

Related

CARenderer draws nothing into bound texture

I'm trying to set up a CARenderer to draw into an MTLTexture, but all my attempts to get this working in a playground draw nothing.
The resulting image is solid red; the yellow layer doesn't seem to be rendered at all.
Here's the simplest version of what I've tried:
import QuartzCore
import Metal
let layerTest = CATextLayer()
layerTest.frame = .init(origin: .zero, size: .init(width: 1920, height: 1080))
layerTest.string = "TEST"
layerTest.foregroundColor = .black
layerTest.backgroundColor = CGColor(red: 1.0, green: 1.0, blue: 0.0, alpha: 1.0)
layerTest.position = CGPoint(x:0.0, y:0.0)
layerTest.anchorPoint = CGPoint(x:0.0, y:0.0)
layerTest.masksToBounds = true
let device = MTLCreateSystemDefaultDevice()!
let context = CIContext(mtlDevice: device)
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: 1920, height: 1080, mipmapped: false)
textureDescriptor.usage = [.unknown]
textureDescriptor.storageMode = .private
let bytes = UnsafeMutablePointer<UInt8>.allocate(capacity: 1920 * 1080 * 4)
//fill the buffer with red
let pattern = UnsafeMutablePointer<UInt8>.allocate(capacity: 4)
(pattern + 0).initialize(to: 255)
(pattern + 1).initialize(to: 0)
(pattern + 2).initialize(to: 0)
(pattern + 3).initialize(to: 255)
memset_pattern4(bytes, pattern, 1920 * 1080 * 4)
let mtlBuffer = device.makeBuffer(bytes: bytes, length: 1920 * 1080 * 4)!
let mtlTexture = mtlBuffer.makeTexture(descriptor: textureDescriptor, offset: 0, bytesPerRow: 1920 * 4)!
let render = CARenderer(mtlTexture: mtlTexture)
render.bounds = layerTest.frame
render.layer = layerTest
render.setDestination(mtlTexture)
render.beginFrame(atTime: CACurrentMediaTime(), timeStamp: nil)
render.addUpdate(render.bounds)
render.render()
render.endFrame()
let ciImage = CIImage(mtlTexture: mtlTexture)!
let cgImage: CGImage = context.createCGImage(ciImage, from: ciImage.extent)! // <- this is just a red frame
I submitted this to Apple DTS and got a reply:
Setting the root layer of the CARenderer requires one implicit CATransaction to transfer ownership of the layer tree to the CARenderer’s context. CARenderer frame render methods will not work correctly until this ownership transfer is complete.
To complete the current transaction, we call CATransaction.flush() and then commit():
render.layer = layerTest
CATransaction.flush()
CATransaction.commit()
Adding these two lines resolved the issue for me.
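Putting it together, the fixed portion of the render sequence looks like this (everything here comes straight from the code above, plus the two added lines):

render.bounds = layerTest.frame
render.layer = layerTest
// Complete the implicit transaction that transfers ownership of the
// layer tree to the CARenderer's context (per the DTS reply above).
CATransaction.flush()
CATransaction.commit()
render.setDestination(mtlTexture)
render.beginFrame(atTime: CACurrentMediaTime(), timeStamp: nil)
render.addUpdate(render.bounds)
render.render()
render.endFrame()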

How to add overlay text on a captured video/image in Swift?

I want to add overlay text like "Company watermark" to a captured video or image programmatically in Swift. Any help will be appreciated.
// Create the text layer.
let titleLayer = CATextLayer()
titleLayer.backgroundColor = UIColor.white.cgColor
titleLayer.string = "Company watermark"
titleLayer.font = CTFontCreateWithName("Helvetica" as CFString, 28, nil)
titleLayer.fontSize = 28
titleLayer.shadowOpacity = 0.5
titleLayer.alignmentMode = .center
titleLayer.frame = CGRect(x: 0, y: 50, width: size.width, height: size.height / 6)
yourView.layer.addSublayer(titleLayer)
Hope it helps you add text to a video.
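
The snippet above only overlays the text on a view. To actually burn the watermark into an exported movie, the usual route is AVFoundation's AVVideoCompositionCoreAnimationTool. A minimal sketch, assuming asset, videoSize, and outputURL are supplied by you:

import AVFoundation
import QuartzCore

// Sketch: composite a CATextLayer over a video during export.
// `asset`, `videoSize`, and `outputURL` are placeholders.
func exportWatermarked(asset: AVAsset, videoSize: CGSize, outputURL: URL) {
    let titleLayer = CATextLayer()
    titleLayer.string = "Company watermark"
    titleLayer.fontSize = 28
    titleLayer.alignmentMode = .center
    titleLayer.frame = CGRect(x: 0, y: 50, width: videoSize.width, height: videoSize.height / 6)

    // The video frames are rendered into videoLayer; parentLayer
    // holds the video plus the overlay.
    let videoLayer = CALayer()
    let parentLayer = CALayer()
    parentLayer.frame = CGRect(origin: .zero, size: videoSize)
    videoLayer.frame = parentLayer.frame
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(titleLayer)

    let composition = AVMutableVideoComposition(propertiesOf: asset)
    composition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)

    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetHighestQuality) else { return }
    export.videoComposition = composition
    export.outputURL = outputURL
    export.outputFileType = .mov
    export.exportAsynchronously {
        // Check export.status / export.error here.
    }
}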

How do I get the frame of visible content from SKCropNode?

It appears that, in SpriteKit, when I use a mask in an SKCropNode to hide some content, the mask doesn't affect the frame calculated by calculateAccumulatedFrame. I'm wondering if there's any way to calculate the visible frame.
A quick example:
import SpriteKit
let par = SKCropNode()
let bigShape = SKShapeNode(rect: CGRect(x: 0, y: 0, width: 100, height: 100))
bigShape.fillColor = .red
bigShape.strokeColor = .clear
par.addChild(bigShape)
let smallShape = SKShapeNode(rect: CGRect(x: 0, y: 0, width: 20, height: 20))
smallShape.fillColor = .green
smallShape.strokeColor = .clear
par.maskNode = smallShape
par.calculateAccumulatedFrame() // returns (x: 0, y: 0, width: 100, height: 100)
I expected par.calculateAccumulatedFrame() to return (x=0, y=0, width=20, height=20) based on the crop node mask.
I thought maybe I could write the function myself as an extension that reimplements calculateAccumulatedFrame with support for SKCropNodes and their masks, but then it occurred to me that I would also need to consider the alpha of the mask to determine whether there is actual content that grows the frame. That sounds difficult.
Is there an easy way to calculate this?
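
There doesn't appear to be a built-in way, but if you can live with ignoring the per-pixel alpha issue described above, a rough approximation is to intersect the crop node's accumulated frame with its mask's accumulated frame. A sketch, assuming the crop node itself is untransformed (so both frames effectively share a coordinate space); the method name is my own:

import SpriteKit

extension SKCropNode {
    // Approximate visible frame: clip the accumulated frame to the mask's
    // accumulated frame. Fully transparent regions inside the mask are
    // ignored, so this can still overestimate the truly visible area.
    func visibleAccumulatedFrame() -> CGRect {
        let full = calculateAccumulatedFrame()
        guard let mask = maskNode else { return full }
        return full.intersection(mask.calculateAccumulatedFrame())
    }
}

With the example above, par.visibleAccumulatedFrame() returns (x: 0, y: 0, width: 20, height: 20), which matches the expected result.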

Can I Snapshot an SKNode?

I have an SKScene and I want to snapshot a specific layer. It could be a layer displayed on a white UIView with an SKView in it, but in the end I want to capture the state of this layer only.
Thanks in advance!
That's what worked for me:
let tempView = UIView(frame: CGRect(x: 0, y: 0, width: 500, height: 500))
let tempSKView = SKView(frame: tempView.frame)
let tempScene = SKScene(size: tempSKView.bounds.size)
// Render the layer node into a texture and wrap it in a sprite.
let lineTexture = skView.texture(from: scene.lineLayer)!
let lineLayerCopy = SKSpriteNode(texture: lineTexture)
lineLayerCopy.anchorPoint = CGPoint(x: 0.0, y: 0.0)
lineLayerCopy.size = CGSize(width: 500, height: 500)
tempScene.addChild(lineLayerCopy)
tempScene.backgroundColor = .white
tempSKView.presentScene(tempScene)
tempView.addSubview(tempSKView)
self.view.addSubview(tempView)
Note: the anchorPoint is essential; it forces the content to be positioned correctly in the scene so you get the complete image.
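
If you need an actual image rather than a view on screen, the captured texture can also be converted directly. A small sketch (SKTexture.cgImage() requires iOS 9 or later):

// Convert the node snapshot straight to a UIImage.
if let texture = skView.texture(from: scene.lineLayer) {
    let snapshot = UIImage(cgImage: texture.cgImage())
}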

How do I use the scanCrop property of a ZBar reader?

I am using the ZBar SDK for iPhone to scan a barcode. I want the reader to scan only a specific rectangle instead of the whole view; to do that, the reader's scanCrop property has to be set to the desired rectangle.
I'm having a hard time understanding the rectangle parameter that has to be set.
Can someone please tell me what rect I should pass as an argument if, in a portrait view, its coordinates would be CGRectMake(A, B, C, D)?
From ZBar's ZBarReaderView class documentation:
CGRect scanCrop
The region of the video image that will be scanned, in normalized image coordinates. Note that the video image is in landscape mode (default {{0, 0}, {1, 1}})
All of the arguments are normalized floats in the range 0 to 1. In normalized terms, theView.width is 1.0 and theView.height is 1.0, so the default rect is {{0, 0}, {1, 1}}.
For example, suppose I have a transparent UIView named scanView that serves as the scanning region for my readerView. Rather than doing:
readerView.scanCrop = scanView.frame;
we should normalize each argument first:
CGFloat x,y,width,height;
x = scanView.frame.origin.x / readerView.bounds.size.width;
y = scanView.frame.origin.y / readerView.bounds.size.height;
width = scanView.frame.size.width / readerView.bounds.size.width;
height = scanView.frame.size.height / readerView.bounds.size.height;
readerView.scanCrop = CGRectMake(x, y, width, height);
It works for me. Hope that helps.
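To make that concrete with hypothetical numbers: if readerView is 320 points wide and 480 points tall, and scanView's frame is (40, 120, 240, 120), the normalized crop comes out to CGRectMake(40.0/320, 120.0/480, 240.0/320, 120.0/480), i.e. CGRectMake(0.125, 0.25, 0.75, 0.25).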
You can set the scan crop area like this:
reader.scanCrop = CGRectMake(x, y, width, height);
For example:
reader.scanCrop = CGRectMake(0.25, 0.25, 0.5, 0.45);
I used this and it's working for me.
This is the right way to adjust the crop area; I wasted a ton of time on it. Because the video image is in landscape orientation (as the documentation quoted above notes) while the view is in portrait, the axes have to be swapped when normalizing:
readerView.scanCrop = [self getScanCrop:cropRect readerViewBounds:contentView.bounds];

- (CGRect)getScanCrop:(CGRect)rect readerViewBounds:(CGRect)rvBounds
{
    CGFloat x, y, width, height;
    x = rect.origin.y / rvBounds.size.height;
    y = 1 - (rect.origin.x + rect.size.width) / rvBounds.size.width;
    width = rect.size.height / rvBounds.size.height;
    height = rect.size.width / rvBounds.size.width;
    return CGRectMake(x, y, width, height);
}