In SpriteKit, when I use a mask in an SKCropNode to hide some content, the mask is not taken into account by calculateAccumulatedFrame(). I'm wondering if there's any way to calculate the visible frame.
A quick example:
import SpriteKit
let par = SKCropNode()
let bigShape = SKShapeNode(rect: CGRect(x: 0, y: 0, width: 100, height: 100))
bigShape.fillColor = UIColor.redColor()
bigShape.strokeColor = UIColor.clearColor()
par.addChild(bigShape)
let smallShape = SKShapeNode(rect: CGRect(x: 0, y: 0, width: 20, height: 20))
smallShape.fillColor = UIColor.greenColor()
smallShape.strokeColor = UIColor.clearColor()
par.maskNode = smallShape
par.calculateAccumulatedFrame() // returns (x=0, y=0, width=100, height=100)
I expected par.calculateAccumulatedFrame() to return (x=0, y=0, width=20, height=20) based on the crop node mask.
I thought I could write an extension that reimplements calculateAccumulatedFrame() with support for SKCropNodes and their masks, but then it occurred to me that I would also need to consider the alpha of the mask to determine whether there's actual content that grows the frame. That sounds difficult.
Is there an easy way to calculate this?
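One rough approach I've considered as a fallback is to simply intersect the crop node's accumulated frame with the mask's accumulated frame. It ignores the mask's actual alpha content and nested crop nodes, and it assumes the crop node itself sits at the origin without scaling (otherwise the mask frame would need converting into the same space), so it can still over-estimate, but it handles the simple case above (calculateVisibleFrame is just an illustrative name):

extension SKCropNode {
    // Approximates the visible frame by clipping the accumulated frame of the
    // children to the accumulated frame of the mask. Transparent regions inside
    // the mask's bounds and nested SKCropNodes are ignored.
    func calculateVisibleFrame() -> CGRect {
        let full = calculateAccumulatedFrame()
        guard let mask = maskNode else { return full }
        return full.intersection(mask.calculateAccumulatedFrame())
    }
}

// With the example above, par.calculateVisibleFrame() returns (0, 0, 20, 20).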
I'm trying to set up a CARenderer to draw into an MTLTexture, but all of my attempts to get this working in a playground don't draw anything.
The resulting image is solid red; the yellow layer doesn't seem to be rendered at all.
Here's the simplest version of what I've tried:
import QuartzCore
import Metal
let layerTest = CATextLayer()
layerTest.frame = .init(origin: .zero, size: .init(width: 1920, height: 1080))
layerTest.string = "TEST"
layerTest.foregroundColor = .black
layerTest.backgroundColor = CGColor(red: 1.0, green: 1.0, blue: 0.0, alpha: 1.0)
layerTest.position = CGPoint(x:0.0, y:0.0)
layerTest.anchorPoint = CGPoint(x:0.0, y:0.0)
layerTest.masksToBounds = true
let device = MTLCreateSystemDefaultDevice()!
let context = CIContext(mtlDevice: device)
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: 1920, height: 1080, mipmapped: false)
textureDescriptor.usage = [.unknown]
textureDescriptor.storageMode = .private
let bytes = UnsafeMutablePointer<UInt8>.allocate(capacity: 1920 * 1080 * 4)
//fill the buffer with red
let pattern = UnsafeMutablePointer<UInt8>.allocate(capacity: 4)
(pattern + 0).initialize(to: 255)
(pattern + 1).initialize(to: 0)
(pattern + 2).initialize(to: 0)
(pattern + 3).initialize(to: 255)
memset_pattern4(bytes, pattern, 1920 * 1080 * 4)
let mtlBuffer = device.makeBuffer(bytes: bytes, length: 1920 * 1080 * 4)!
let mtlTexture = mtlBuffer.makeTexture(descriptor: textureDescriptor, offset: 0, bytesPerRow: 1920 * 4)!
let render = CARenderer(mtlTexture: mtlTexture)
render.bounds = layerTest.frame
render.layer = layerTest
render.setDestination(mtlTexture)
render.beginFrame(atTime: CACurrentMediaTime(), timeStamp: nil)
render.addUpdate(render.bounds)
render.render()
render.endFrame()
let ciImage = CIImage(mtlTexture: mtlTexture)!
let cgImage: CGImage = context.createCGImage(ciImage, from: ciImage.extent)! // <- this is just a red frame
I submitted this to Apple DTS and got a reply:
Setting the root layer of the CARenderer requires one implicit CATransaction to transfer ownership of the layer tree to the CARenderer’s context. CARenderer frame render methods will not work correctly until this ownership transfer is complete.
To complete the current transaction, we call flush() and then commit() after assigning the layer:
render.layer = layerTest
CATransaction.flush()
CATransaction.commit()
Adding these two lines resolved the issue for me.
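For completeness, this is roughly how the relevant part of the playground looks with the fix in place (same layerTest and mtlTexture as above):

render.bounds = layerTest.frame
render.layer = layerTest
CATransaction.flush()
CATransaction.commit()

render.setDestination(mtlTexture)
render.beginFrame(atTime: CACurrentMediaTime(), timeStamp: nil)
render.addUpdate(render.bounds)
render.render()
render.endFrame()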
I have an app written with RxSwift which processes 500+ days of HealthKit data to draw a chart for the user.
The chart image is drawn incrementally using the code below. Starting from a black canvas, the previous image is drawn into the graphics context, then a new segment is drawn over it at a certain offset. The combined image is saved and the process repeats 70+ times; the image is saved each time so the user sees the update. The result is a single chart image which the user can export from the app.
Even with an autorelease pool, I see memory usage spike up to 1 GB, which prevents me from doing other resource-intensive processing.
How can I optimize incremental drawing of very large (1440 × 5000 pixels) image?
When the image is displayed or saved at 3x scale, it is actually 4320 × 15360.
Is there a better way than trying to draw over an image?
autoreleasepool {
    // activeEnergyCanvas is a custom data processing class
    let newActiveEnergySegment = activeEnergyCanvas.draw(in: CGRect(x: 0, y: 0, width: 1440, height: days * 10), with: energyPalette)

    let size = CGSize(width: 1440, height: height)
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)

    // draw the existing image
    self.activeEnergyImage.draw(in: CGRect(origin: CGPoint(x: 0, y: 0),
                                           size: size))

    // calculate where to draw the smaller image over the larger one
    let offsetRect = CGRect(origin: CGPoint(x: 0, y: offset * 10),
                            size: newActiveEnergySegment.size)
    newActiveEnergySegment.draw(in: offsetRect)

    // get the combined image
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // assign the combined image to be displayed
    if let unwrappedImage = newImage {
        self.activeEnergyImage = unwrappedImage
    }
}
Turns out my mistake was passing 0.0 as the drawing scale when creating the graphics context, which makes it default to the device's native screen scale.
On an iPhone 8 that scale is 3.0. The result is that extreme amounts of memory are needed to draw, zoom and export these images: even though all the debug logging prints that the image is 1440 pixels wide, the actual canvas ends up being 1440 * 3.0 = 4320.
Passing 1.0 as the drawing scale makes the image a little fuzzier, but reduces memory usage to less than 200 MB.
// UIGraphicsBeginImageContext() <- also ended up at @3x scale here, even when all display size printouts show the unscaled size
let drawingScale: CGFloat = 1.0
UIGraphicsBeginImageContextWithOptions(size, true, drawingScale)
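As a side note, if you can target iOS 10 or later, the same composite can be written with UIGraphicsImageRenderer, where the scale is stated explicitly on the format up front. This is only a sketch reusing the names from the code above (activeEnergyImage, newActiveEnergySegment, size, offset):

let format = UIGraphicsImageRendererFormat()
format.scale = 1.0   // draw at 1x instead of the device's native scale
format.opaque = true

let renderer = UIGraphicsImageRenderer(size: size, format: format)
let newImage = renderer.image { _ in
    // draw the existing image, then the new segment over it at its offset
    activeEnergyImage.draw(in: CGRect(origin: .zero, size: size))
    newActiveEnergySegment.draw(in: CGRect(origin: CGPoint(x: 0, y: offset * 10),
                                           size: newActiveEnergySegment.size))
}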
I am trying to use an alpha mask filter to apply a texture to a canvas element, but I cannot seem to get things to work. I have a base image which is a flat white color, to which I want to apply a color filter at runtime based on a user's selection. For example:
bitmap = new createjs.Bitmap(image);
bitmap.filters = [
    new createjs.ColorFilter(0, 0, 0.5, 1, 0, 0, 120, 0)
];
bitmap.cache(0, 0, 500, 500, 2);
I then want to use a second image, a transparent PNG with a shading texture, to add that texture to the first one. Looking over the docs, it seems I need to use an AlphaMaskFilter, but that does not work and nothing is rendered onto the canvas. For example:
// filterImage contains the transparent image which has a shaded texture
var bitmap2 = new createjs.Bitmap(filterImage);
bitmap2.cache(0, 0, 500, 500, 2);

var bitmap = new createjs.Bitmap(image);
bitmap.filters = [
    new createjs.ColorFilter(0, 0, 0.5, 1, 0, 0, 120, 0),
    new createjs.AlphaMaskFilter(bitmap2.cacheCanvas)
];
bitmap.cache(0, 0, 500, 500, 2);
Can someone point me in the right direction here, or tell me if I'm trying to do something that just isn't possible with this filter?
I'm looking for the fastest way to draw offscreen CALayer content (no alpha needed) on macOS. Note that these examples aren't threaded; the point (and the reason I'm not just using CALayer.setNeedsDisplay) is that I'm doing this drawing on a background thread.
My original code did this:
let bounds = layer.bounds.size
let contents = NSImage(size: bounds)
contents.lockFocusFlipped(true)
let context = NSGraphicsContext.current()!.cgContext
layer.draw(in: context)
contents.unlockFocus()
layer.contents = contents
My current best is quite a bit faster:
let contentsScale = layer.contentsScale
let width = Int(bounds.width * contentsScale)
let height = Int(bounds.height * contentsScale)
let bytesPerRow = width * 4
let alignedBytesPerRow = ((bytesPerRow + (64 - 1)) / 64) * 64
let context = CGContext(
    data: nil,
    width: width,
    height: height,
    bitsPerComponent: 8,
    bytesPerRow: alignedBytesPerRow,
    space: NSScreen.main()?.colorSpace?.cgColorSpace ?? CGColorSpaceCreateDeviceRGB(),
    bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue
)!
context.scaleBy(x: contentsScale, y: contentsScale)
layer.draw(in: context)
layer.contents = context.makeImage()
Tips and recommendations for making it better/faster are welcome.
I want to have a label which should be displayed upside down; that means that after creating the label I want to rotate it by 90 degrees. That works, but now the label ends up somewhere unexpected, and I don't understand how the label is rotated. Maybe someone can help me. The code is the following:
let label = CreatorClass.createLabelWithFrame(CGRect(x: 10, y: 10, width: 150, height: 15), text: "aString", size: 12.0, bold: false, textAlignment: .Left, textColor: UIColor.whiteColor(), addToView: self)
label.transform = CGAffineTransformMakeRotation(CGFloat(M_PI_2))
CreatorClass creates a label and adds it to a given view (here it adds it to self, because this code is called in a subclass of UIView). I think it's fairly self-explanatory.
Your label rotates around its center, which is at:
x = 10 + 75 = 85
y = 10 + 7.5 = 17.5
That point is also the center of the rotated label, which now has a width of 15 and a height of 150.
So the new rect of your label after the transform is:
x = 85 - 7.5 = 77.5
y = 17.5 - 75 = -57.5
width = 15, height = 150
That can put it partly outside of self's bounds.
If you want the label back at its original position, just reposition it after the rotation:
label.frame = CGRect(x: 10, y: 10, width: 15, height: 150)
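As a small sketch of the same arithmetic, you could also set the center instead of the frame (the frame property is less reliable once a transform is applied); this assumes the same 150 × 15 label at (10, 10) as above:

label.transform = CGAffineTransformMakeRotation(CGFloat(M_PI_2))
// the rotated label is 15 wide and 150 tall, so placing its center at
// (10 + 7.5, 10 + 75) puts its new frame origin back at (10, 10)
label.center = CGPoint(x: 17.5, y: 85)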