macOS MTKView Metal self.device.newBufferWithBytes crashes with assert - objective-c

I want to draw a simple triangle, and my app crashes when I try to create the MTLBuffer:
static float vertexes[] = {
     0.0,  0.5,  0.0,
    -0.5f, -0.5f, 0.0,
     0.5, -0.5f,  0.0
};
id<MTLBuffer> buffer = [self.device newBufferWithBytes:vertexes
                                                length:sizeof(vertexes)
                                               options:MTLResourceStorageModePrivate];
Here is the assert:
-[MTLDebugDevice newBufferWithBytes:length:options:]:392: failed assertion `storageModePrivate incompatible with ...WithBytes variant of newBuffer'
So how do I create a buffer from the vertexes using the MTLResourceStorageModePrivate option?

You must create a temporary blit buffer and use it to copy the contents to the private buffer. Here's example code:
buffer = [self.device newBufferWithLength:sizeof(vertexes)
                                   options:MTLResourceStorageModePrivate];

id<MTLBuffer> blitBuffer = [self.device newBufferWithBytes:vertexes
                                                    length:sizeof(vertexes)
                                                   options:MTLResourceCPUCacheModeDefaultCache];

id<MTLCommandBuffer> cmd_buffer = [commandQueue commandBuffer];
id<MTLBlitCommandEncoder> blit_encoder = [cmd_buffer blitCommandEncoder];
[blit_encoder copyFromBuffer:blitBuffer
                sourceOffset:0
                    toBuffer:buffer
           destinationOffset:0
                        size:sizeof(vertexes)];
[blit_encoder endEncoding];
[cmd_buffer commit];
[cmd_buffer waitUntilCompleted];
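For anyone doing the same thing in Swift, the equivalent staging-and-blit approach looks roughly like this (a sketch, not a drop-in for the code above; names like stagingBuffer are mine):
import Metal

let device = MTLCreateSystemDefaultDevice()!
let commandQueue = device.makeCommandQueue()!

let vertexes: [Float] = [
     0.0,  0.5, 0.0,
    -0.5, -0.5, 0.0,
     0.5, -0.5, 0.0
]
let length = vertexes.count * MemoryLayout<Float>.stride

// Destination buffer in private (GPU-only) storage.
let privateBuffer = device.makeBuffer(length: length, options: .storageModePrivate)!

// Temporary CPU-accessible staging buffer initialized with the vertex data.
let stagingBuffer = device.makeBuffer(bytes: vertexes, length: length, options: .storageModeShared)!

// Blit the staging buffer into the private buffer on the GPU.
let commandBuffer = commandQueue.makeCommandBuffer()!
let blitEncoder = commandBuffer.makeBlitCommandEncoder()!
blitEncoder.copy(from: stagingBuffer, sourceOffset: 0,
                 to: privateBuffer, destinationOffset: 0, size: length)
blitEncoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()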

Related

CARenderer draws nothing into bound texture

I'm trying to set up a CARenderer to draw into an MTLTexture, but all my attempts to get this working in a playground draw nothing.
The resulting image is solid red; the yellow layer doesn't seem to be rendered at all.
Here's the simplest version of what I've tried:
import QuartzCore
import Metal
import CoreImage
let layerTest = CATextLayer()
layerTest.frame = .init(origin: .zero, size: .init(width: 1920, height: 1080))
layerTest.string = "TEST"
layerTest.foregroundColor = .black
layerTest.backgroundColor = CGColor(red: 1.0, green: 1.0, blue: 0.0, alpha: 1.0)
layerTest.position = CGPoint(x:0.0, y:0.0)
layerTest.anchorPoint = CGPoint(x:0.0, y:0.0)
layerTest.masksToBounds = true
let device = MTLCreateSystemDefaultDevice()!
let context = CIContext(mtlDevice: device)
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: 1920, height: 1080, mipmapped: false)
textureDescriptor.usage = [.unknown]
textureDescriptor.storageMode = .private
let bytes = UnsafeMutablePointer<UInt8>.allocate(capacity: 1920 * 1080 * 4)
//fill the buffer with red
let pattern = UnsafeMutablePointer<UInt8>.allocate(capacity: 4)
(pattern + 0).initialize(to: 255)
(pattern + 1).initialize(to: 0)
(pattern + 2).initialize(to: 0)
(pattern + 3).initialize(to: 255)
memset_pattern4(bytes, pattern, 1920 * 1080 * 4)
let mtlBuffer = device.makeBuffer(bytes: bytes, length: 1920 * 1080 * 4)!
let mtlTexture = mtlBuffer.makeTexture(descriptor: textureDescriptor, offset: 0, bytesPerRow: 1920 * 4)!
let render = CARenderer(mtlTexture: mtlTexture)
render.bounds = layerTest.frame
render.layer = layerTest
render.setDestination(mtlTexture)
render.beginFrame(atTime: CACurrentMediaTime(), timeStamp: nil)
render.addUpdate(render.bounds)
render.render()
render.endFrame()
let ciImage = CIImage(mtlTexture: mtlTexture)!
let cgImage: CGImage = context.createCGImage(ciImage, from: ciImage.extent)! // <- this is just a red frame
I submitted this to Apple DTS and got a reply:
Setting the root layer of the CARenderer requires one implicit CATransaction to transfer ownership of the layer tree to the CARenderer’s context. CARenderer frame render methods will not work correctly until this ownership transfer is complete.
To complete the current transaction, we call CATransaction.flush() and then CATransaction.commit() right after assigning the layer:
render.layer = layerTest
CATransaction.flush()
CATransaction.commit()
Adding these two lines resolved the issue for me.
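Put together with the code from the question, the corrected render pass looks like this (same names as above; only the two transaction lines are new):
render.bounds = layerTest.frame
render.layer = layerTest
CATransaction.flush()
CATransaction.commit()
render.setDestination(mtlTexture)
render.beginFrame(atTime: CACurrentMediaTime(), timeStamp: nil)
render.addUpdate(render.bounds)
render.render()
render.endFrame()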

Fastest way to draw offscreen CALayer content

I'm looking for the fastest way to draw offscreen CALayer content (no alpha needed) on macOS. Note that these examples aren't threaded, but the point (and the reason I'm not just using CALayer.setNeedsDisplay) is that I'm doing this drawing on a background thread.
My original code did this:
let size = layer.bounds.size
let contents = NSImage(size: size)
contents.lockFocusFlipped(true)
let context = NSGraphicsContext.current()!.cgContext
layer.draw(in: context)
contents.unlockFocus()
layer.contents = contents
My current best is quite a bit faster:
let bounds = layer.bounds
let contentsScale = layer.contentsScale
let width = Int(bounds.width * contentsScale)
let height = Int(bounds.height * contentsScale)
let bytesPerRow = width * 4
let alignedBytesPerRow = ((bytesPerRow + (64 - 1)) / 64) * 64
let context = CGContext(
data: nil,
width: width,
height: height,
bitsPerComponent: 8,
bytesPerRow: alignedBytesPerRow,
space: NSScreen.main()?.colorSpace?.cgColorSpace ?? CGColorSpaceCreateDeviceRGB(),
bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue
)!
context.scaleBy(x: contentsScale, y: contentsScale)
layer.draw(in: context)
layer.contents = context.makeImage()
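(For reference, the alignedBytesPerRow computation just rounds each row up to the next multiple of 64 bytes, which bitmap contexts seem to prefer for performance. A quick illustration with made-up widths:)
// Round a row length up to the next multiple of 64 bytes.
func alignedRowBytes(width: Int, bytesPerPixel: Int = 4, alignment: Int = 64) -> Int {
    let bytesPerRow = width * bytesPerPixel
    return ((bytesPerRow + (alignment - 1)) / alignment) * alignment
}

alignedRowBytes(width: 250) // 1000 bytes per row -> padded to 1024
alignedRowBytes(width: 256) // 1024 bytes per row -> already aligned, stays 1024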
Tips and recommendations for making it better/faster are welcome.

Can I Snapshot a SKNode?

I have an SKScene and I want to snapshot a specific layer. It can be a layer displayed on a white UIView with an SKView in it, but in the end I want to take a snapshot of the state of this layer only.
Thanks in advance!
Here's what worked for me:
var tempView = UIView(frame: CGRectMake(0, 0, 500, 500))
var tempSKView = SKView(frame: tempView.frame)
var tempScene = SKScene(size: tempSKView.bounds.size)
var lineTexture = SKTexture()
lineTexture = skView.textureFromNode(scene.lineLayer)
var lineLayerCopy = SKSpriteNode(texture: lineTexture)
lineLayerCopy.anchorPoint = CGPoint(x: 0.0, y: 0.0)
lineLayerCopy.size = CGSizeMake(500, 500)
tempScene.addChild(lineLayerCopy)
tempScene.backgroundColor = UIColor.whiteColor()
tempSKView.presentScene(tempScene)
tempView.addSubview(tempSKView)
self.view.addSubview(tempView)
Note: the anchorPoint is essential. It forces the copied node to be positioned correctly within the scene, so you get the complete image.
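If what you ultimately need is an image rather than a view on screen, it may also be possible to skip the temporary view and pull a CGImage straight out of the texture (a sketch in current Swift syntax, assuming skView and scene.lineLayer as above; this is not what the snippet above does):
import SpriteKit
import UIKit

// Render just the node we care about into a texture, then convert it to a UIImage.
if let lineTexture = skView.texture(from: scene.lineLayer) {
    let snapshot = UIImage(cgImage: lineTexture.cgImage())
    // `snapshot` now contains the rendered contents of scene.lineLayer only.
}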

OpenGL ES 2.0 blending

I set glBlendFunc to
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
and for testing purposes I set the color in my fragment shader like this:
gl_FragColor = vec4(1.0,0.0,0.0,0.0);
Shouldn't the object be fully transparent? What could the reason be if it's not?
The first argument of glBlendFunc() is the source factor, the second is the destination factor. In your case:
sfactor = 1.0;
dfactor = 1.0 - src.alpha;
Since src.alpha = 0.0 in your gl_FragColor, this becomes:
sfactor = 1.0;
dfactor = 1.0;
So the color written to the buffer will be:
buffer = sfactor * src + dfactor * dst;
Substituting...
buffer = (1.0,0.0,0.0,0.0) + dst;
So, putting it simply, you are adding 1.0 to the red channel of the existing buffer contents.
If you want to make the output fully transparent, the usual function is:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
The one you wrote is usually used for premultiplied alpha in the source, but (1.0, 0.0, 0.0, 0.0) is obviously not a premultiplied-alpha value!

Initialising a C Struct Array - Objective C - OpenGLES

I have the following Vertex struct in my OpenGL ES app:
typedef struct Vertex {
    float Position[3];
    float Color[4];
} Vertex;
In my header I then declare:
Vertex *Vertices;
Then in my init method:
int array = 4;
Vertices = (Vertex *)malloc(array * sizeof(Vertex));
I then later set up the mesh as follows, where the vertices array in this case has 4 vertices:
- (void)setupMesh {
    int count = 0;
    for (VerticeObject *object in verticesArray) {
        Vertices[count].Position[0] = object.x;
        Vertices[count].Position[1] = object.y;
        Vertices[count].Position[2] = object.z;
        Vertices[count].Color[0] = 0.9f;
        Vertices[count].Color[1] = 0.9f;
        Vertices[count].Color[2] = 0.9f;
        Vertices[count].Color[3] = 1.0f;
        count++;
    }
}
Can anyone spot what I am doing wrong here? When I pass this Vertices array to OpenGL nothing is drawn, whereas if I hard-code the Vertices array as:
Vertex Vertices[] = {
    {{0.0, 0.0, 0}, {0.9, 0.9, 0.9, 1}},
    {{0.0, 0.0, 0}, {0.9, 0.9, 0.9, 1}},
    {{0.0, 0.0, 0}, {0.9, 0.9, 0.9, 1}},
    {{0.0, 0.0, 0}, {0.9, 0.9, 0.9, 1}},
};
everything works?
I think the problem is that before you had an array allocated on the stack, whereas now you have a pointer (a memory address) to a block of memory on the heap. So when you write something like sizeof(Vertices), the original array version gives you the size of 4 vertices, each holding 3 position floats and 4 color floats: 4 * (3 + 4) * 4 bytes per float = 112 bytes. With the pointer version, sizeof(Vertices) is just sizeof(a pointer), i.e. 4 bytes on a 32-bit system. OpenGL is a C library and not exactly easy to work with, so you should really brush up on your C skills before trying to get it running. Also, there is a GLKView class these days that makes all the setup a lot easier.
That means a call like
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
now uploads only 4 bytes. Instead, pass the same size you allocated for the array of vertices, in your case 4 * sizeof(Vertex):
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * 4, Vertices, GL_STATIC_DRAW);
If that doesn't work, you can easily fix the problem by replacing your dynamically allocated array with a statically allocated one, since you know at compile time how big it needs to be.
Vertex Vertices[4];
Then set the values in your loop as you do.