Multiple Render Targets in Metal - objective-c

I am trying to implement two distinct CAMetalLayers and use one MTLRenderCommandEncoder to render the same scene to both layers (Metal for OS X).
For this purpose, I've tried creating one MTLRenderPassDescriptor and attaching the two layers' textures to its color attachments. My render method looks like the following:
- (void)render {
    dispatch_semaphore_wait(_inflight_semaphore, DISPATCH_TIME_FOREVER);

    id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
    __block dispatch_semaphore_t block_sema = _inflight_semaphore;
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
        dispatch_semaphore_signal(block_sema);
    }];

    MTLRenderPassDescriptor *renderPass = [MTLRenderPassDescriptor renderPassDescriptor];
    for (int i = 0; i < [_metalLayers count]; i++) {
        _metalDrawables[i] = [_metalLayers[i] nextDrawable];
        renderPass.colorAttachments[i].texture = _metalDrawables[[_metalDrawables count] - 1].texture;
        renderPass.colorAttachments[i].clearColor = MTLClearColorMake(0.5, 0.5, (float)i / (float)[_metalLayers count], 1);
        renderPass.colorAttachments[i].storeAction = MTLStoreActionStore;
        renderPass.colorAttachments[i].loadAction = MTLLoadActionClear;
    }

    id<MTLRenderCommandEncoder> commandEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPass];
    [commandEncoder setRenderPipelineState:_pipeline];
    [commandEncoder setVertexBuffer:_positionBuffer offset:0 atIndex:0];
    [commandEncoder setVertexBuffer:_colorBuffer offset:0 atIndex:1];
    [commandEncoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:3 instanceCount:1];
    [commandEncoder endEncoding];

    for (int i = 0; i < [_metalDrawables count]; i++) {
        [commandBuffer presentDrawable:_metalDrawables[i]];
    }
    [commandBuffer commit];
}
However, the scene gets rendered to just one of the layers, which turns out to be the one associated with the first color attachment's texture. The other layer is cleared with the specified clear color, but nothing is drawn.
Does this approach have any chance of succeeding, or is using the render pass descriptor's color attachments entirely pointless when trying to render the same scene to multiple screens (i.e., multiple CAMetalLayers)? If so, is there any other conceivable approach to achieve this result?

To write to multiple render targets, you need to explicitly write to each render target from your fragment shader, as @lock has already pointed out.
struct MyFragmentOutput {
    // color attachment 0
    float4 clr_f [[ color(0) ]];
    // color attachment 1
    int4 clr_i [[ color(1) ]];
    // color attachment 2
    uint4 clr_ui [[ color(2) ]];
};

fragment MyFragmentOutput
my_frag_shader( ... )
{
    MyFragmentOutput f;
    ....
    f.clr_f = ...;
    f.clr_i = ...;
    ...
    return f;
}
However, this is overkill, since you don't really need the GPU to render the scene twice. The answer above by @Kacper is more accurate for your case. To add to it, I would recommend using a blit encoder (MTLBlitCommandEncoder), which can copy data between two textures on the GPU; I assume this should be much faster than going through the CPU.
https://developer.apple.com/library/mac/documentation/Miscellaneous/Conceptual/MetalProgrammingGuide/Blit-Ctx/Blit-Ctx.html#//apple_ref/doc/uid/TP40014221-CH9-SW4
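As an illustration of that recommendation (not part of the original answer), here is a rough, untested sketch against the render method in the question: render the scene once into the first drawable's texture, then blit the result into the second drawable's texture. It assumes both layers use the same pixel format and drawable size, and that framebufferOnly is set to NO on both layers so their textures can act as blit source and destination.

// Hypothetical sketch: render the scene once into the first drawable's texture,
// then copy the result into the second drawable's texture on the GPU.
// Assumes matching pixel formats/sizes and framebufferOnly = NO on both layers.
id<MTLTexture> src = _metalDrawables[0].texture;
id<MTLTexture> dst = _metalDrawables[1].texture;

id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer blitCommandEncoder];
[blitEncoder copyFromTexture:src
                 sourceSlice:0
                 sourceLevel:0
                sourceOrigin:MTLOriginMake(0, 0, 0)
                  sourceSize:MTLSizeMake(src.width, src.height, src.depth)
                   toTexture:dst
            destinationSlice:0
            destinationLevel:0
           destinationOrigin:MTLOriginMake(0, 0, 0)];
[blitEncoder endEncoding];

// Present both drawables as in the question, then commit.
[commandBuffer presentDrawable:_metalDrawables[0]];
[commandBuffer presentDrawable:_metalDrawables[1]];
[commandBuffer commit];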

From what I have read about this problem, you can try rendering to a single MTLTexture (not a drawable's texture) and then use the MTLTexture methods getBytes and replaceRegion to copy the texture data into the two drawable layers.
Currently I am working on rendering to an ordinary texture myself, but I am running into some artifacts and it is not working for me yet; maybe you will find a way to solve that.
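To show the shape of that API, here is an untested sketch of the CPU round trip. The offscreen texture sceneTexture is an assumption (not code from the question); it would need a storage mode that allows CPU reads (e.g. a managed texture on OS X, synchronized after rendering), and the layers would need framebufferOnly set to NO for replaceRegion: to be permitted on their drawables' textures.

// Rough, untested sketch of the CPU copy suggested above.
NSUInteger width  = sceneTexture.width;
NSUInteger height = sceneTexture.height;
NSUInteger bytesPerRow = width * 4;   // assuming a BGRA8Unorm texture
NSMutableData *pixels = [NSMutableData dataWithLength:bytesPerRow * height];

MTLRegion region = MTLRegionMake2D(0, 0, width, height);
[sceneTexture getBytes:pixels.mutableBytes
           bytesPerRow:bytesPerRow
            fromRegion:region
           mipmapLevel:0];

for (id<CAMetalDrawable> drawable in _metalDrawables) {
    [drawable.texture replaceRegion:region
                        mipmapLevel:0
                          withBytes:pixels.bytes
                        bytesPerRow:bytesPerRow];
}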

Related

Get Depth from detected face using Vision and ARKit iOS

I am trying to achieve something like this using Vision and ARKit: my idea is to get landmark points from Vision and place a node using those points. I am using this demo as a reference. So far, I have been able to find the landmark points of the face using Vision. The next step is to use those points in ARKit to add nodes to the scene, but I am unable to get the depth, which is essential for the node's position.
After searching SO, I found this post on converting a CGPoint to an SCNVector3, but I am having an issue here because I don't have any reference plane that I can use to get the depth by hit-testing against.
So, how can I get an accurate depth from the CGPoints without using hitTest, or is there any other way I can achieve the result shown in the video?
Here is the code I have implemented:
CGPoint faceRectCenter = (CGPoint){
    CGRectGetMidX(faceRect), CGRectGetMidY(faceRect)
}; // faceRect is the detected face bounding box

__block NSMutableArray<ARHitTestResult *> *testResults = [NSMutableArray new];
void (^hitTest)(void) = ^{
    NSArray<ARHitTestResult *> *hitTestResults = [self.sceneView hitTest:faceRectCenter types:ARHitTestResultTypeFeaturePoint];
    if (hitTestResults.count > 0) {
        // get the first result that is further than 10 cm away
        ARHitTestResult *firstResult = nil;
        for (ARHitTestResult *result in hitTestResults) {
            if (result.distance > 0.10) {
                firstResult = result;
                [testResults addObject:firstResult];
                break;
            }
        }
    }
};

for (int i = 0; i < 3; i++) {
    hitTest();
}

if (testResults.count > 0) {
    NSLog(@"%@", testResults);
    SCNVector3 postion = averagePostion([testResults copy]);
    NSLog(@"<%.1f,%.1f,%.1f>", postion.x, postion.y, postion.z);

    __block SCNNode *textNode = [ARTextNode nodeWithText:name Position:postion];
    SCNVector3 plane = [self.sceneView projectPoint:textNode.position];
    float projectedDepth = plane.z;
    NSLog(@"projectedDepth: %f", projectedDepth);

    dispatch_async(dispatch_get_main_queue(), ^{
        [self.sceneView.scene.rootNode addChildNode:textNode];
        [textNode show];
    });
} else {
    // NSLog(@"HitTest invalid");
}
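The averagePostion() helper used above is not shown in the question. For context, here is a hypothetical sketch of what such a helper could look like, assuming it simply averages the worldTransform translations of the collected hit-test results:

// Hypothetical sketch of the averagePostion() helper referenced above.
static SCNVector3 averagePostion(NSArray<ARHitTestResult *> *results) {
    simd_float3 sum = simd_make_float3(0.0f, 0.0f, 0.0f);
    for (ARHitTestResult *result in results) {
        // The fourth column of the world transform holds the hit position.
        simd_float4 translation = result.worldTransform.columns[3];
        sum += simd_make_float3(translation.x, translation.y, translation.z);
    }
    simd_float3 average = sum / (float)results.count;
    return SCNVector3Make(average.x, average.y, average.z);
}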
Any help will be great!!

Dividing MDLMesh into multiple SCNGeometry

I have an .obj file containing a 3D model, divided into multiple smaller meshes, e.g. a front and a back.
My goal is to load this file into a SceneKit View and interact with the different parts of the model, to color them, select them, hide them, or move them individually.
I was able to load the file into an MDLAsset containing the MDLMesh, which itself contains all of the sub-meshes as MDLSubmesh objects.
To display the loaded model I have to convert the MDLMesh to an SCNGeometry.
The basic approach is to call SCNGeometry geometryWithMDLMesh:(MDLMesh *)mdlMesh. This works fine, and the SCNGeometry contains different SCNGeometryElements.
However, a lot of the information that was in the MDLSubmesh is lost when converting this to the SceneKit geometry, and my ability to interact with the different sub-meshes is very limited.
It would be ideal if I could convert each MDLSubmesh to an SCNGeometry individually. I have tried two different approaches:
I tried to use [MDLMesh newSubdividedMesh:aMesh submeshIndex:i subdivisionLevels:0] for each of the sub-meshes, and then created an SCNGeometry from each of them.
The problem was that SceneKit didn't render the scene as expected. The geometry was working, but the lighting was not applied to the model, something that worked when I was converting the whole MDLMesh.
Generate a new mesh from each submesh:
for (NSInteger i = 0; i < [[mesh submeshes] count]; i++) {
    MDLMesh *submesh = [MDLMesh newSubdividedMesh:mesh submeshIndex:i subdivisionLevels:0];
    SCNGeometry *geometry = [SCNGeometry geometryWithMDLMesh:submesh];
    SCNNode *subNode = [SCNNode nodeWithGeometry:geometry];
    [node addChildNode:subNode];
}
The rendering produced by this approach (subdivided meshes) lacks the lighting effects that are present when converting the whole MDLMesh; notice the missing lighting in the rendering produced by the code above.
The second approach was to generate the SCNGeometry with the SCNGeometry geometryWithSources:elements: method. Although I doubt this is the 'right' way to do this, here is what I tried.
- (SCNNode *)loadMDLMesh:(MDLMesh *)mesh withSubmeshes:(bool)sub {
    if (sub) {
        // Generate a SceneKit node
        SCNNode *node = [SCNNode node];
        // Generate a new geometry from each submesh
        for (NSInteger i = 0; i < [[mesh submeshes] count]; i++) {
            // Create the geometry element from the MDL submesh
            SCNGeometryElement *element = [SCNGeometryElement geometryElementWithMDLSubmesh:[[mesh submeshes] objectAtIndex:i]];
            // Create a geometry source from the index buffer
            MDLMeshBufferMap *map = [[[[mesh submeshes] objectAtIndex:i] indexBuffer] map];
            SCNGeometrySource *source = [SCNGeometrySource geometrySourceWithVertices:[map bytes] count:[[[mesh submeshes] objectAtIndex:i] indexCount]];
            // Create the SCNGeometry from the source and the element
            SCNGeometry *subMesh = [SCNGeometry geometryWithSources:[NSArray arrayWithObject:source] elements:[NSArray arrayWithObject:element]];
            // Update the name
            subMesh.name = [[[mesh submeshes] objectAtIndex:i] name];
            // Create a subnode and add it to the object node
            SCNNode *subNode = [SCNNode nodeWithGeometry:subMesh];
            [node addChildNode:subNode];
        }
        return node;
    } else {
        return [SCNNode nodeWithMDLObject:mesh];
    }
}
Unfortunately, the app crashes with a bad access exception.
As you can see, I am not that experienced with Objective-C development. Any help fixing my ideas, or different approaches to subdividing the mesh, would be great.
Thank you.
[Re-posting comment as answer as it seems it provided a working solution]
It looks like you're missing the normals when using +newSubdividedMesh:submeshIndex:subdivisionLevels: to build a new mesh from a submesh. Maybe -initWithVertexBuffer:vertexCount:descriptor:submeshes: will lead to better results?
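To make that suggestion concrete, here is an untested sketch of how one might rebuild a one-submesh MDLMesh per submesh with that initializer, reusing the parent mesh's vertex buffer and vertex descriptor so attributes such as normals survive, and then converting each to an SCNGeometry. The variable mesh is the MDLMesh from the question; the rest is illustration, not verified code, and it assumes the mesh uses a single vertex buffer.

// Untested sketch: build one MDLMesh per submesh, reusing the parent mesh's
// vertex buffer and vertex descriptor so attributes such as normals survive.
SCNNode *node = [SCNNode node];
id<MDLMeshBuffer> vertexBuffer = mesh.vertexBuffers.firstObject; // assumes a single vertex buffer

for (MDLSubmesh *submesh in mesh.submeshes) {
    MDLMesh *singleMesh = [[MDLMesh alloc] initWithVertexBuffer:vertexBuffer
                                                    vertexCount:mesh.vertexCount
                                                     descriptor:mesh.vertexDescriptor
                                                      submeshes:@[submesh]];
    SCNGeometry *geometry = [SCNGeometry geometryWithMDLMesh:singleMesh];
    geometry.name = submesh.name;

    SCNNode *subNode = [SCNNode nodeWithGeometry:geometry];
    [node addChildNode:subNode];
}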

sprite kit - objective c: slow fps when I create a lot of nodes

I wanted to create a space background, so I made a for loop to create the stars. Here is the code:
for (int i = 0; i < 100; i++) {
    SKShapeNode *star = [SKShapeNode shapeNodeWithPath:Path.CGPath];
    star.fillColor = [UIColor whiteColor];
    star.physicsBody = nil;

    int xposition = arc4random() % 960;
    int yposition = arc4random() % 640;
    star.position = CGPointMake(xposition, yposition);

    float size = (arc4random() % 3 + 1) / 10.0;
    star.xScale = size;
    star.yScale = size;

    star.alpha = (arc4random() % 10 + 1) / 10.0;
    star.zPosition = -2;

    [self addChild:star];
}
But it uses a lot of CPU: when this code runs, CPU usage peaks at about 78% (I checked the code in the iPhone simulator).
Does somebody know how to fix it? Thanks.
Your physics bodies continue to be simulated even when they are off screen. You will need to remove those nodes once they go out of the frame, otherwise everything will slow to a crawl. (And, to echo what others have stated, you will eventually need to test on a real device.)
From this document: Jumping Into Sprite Kit
You can implement the "did simulate physics" method to get rid of the stars that fall off the bottom of the screen, like so:
- (void)didSimulatePhysics
{
    [self enumerateChildNodesWithName:@"star" usingBlock:^(SKNode *node, BOOL *stop) {
        if (node.position.y < 0) {
            [node removeFromParent];
        }
    }];
}
Note that you will first need to set the name of your star shapes by using the name property like so:
star.name = @"star";
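For example, inside the creation loop from the question (only the name assignment is new relative to that loop):

for (int i = 0; i < 100; i++) {
    SKShapeNode *star = [SKShapeNode shapeNodeWithPath:Path.CGPath];
    star.name = @"star";   // lets enumerateChildNodesWithName:@"star" find it later
    // ... configure position, scale, alpha as in the question ...
    [self addChild:star];
}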

cocos2d Generating multiple of the same sprites with velocity

I'm pretty new to iOS and cocos2d, and I'm having a problem getting the code to do what I want. Let me give you the rundown first, then I'll show what I've got.
What I have so far is a giant sprite in the middle; when that is touched, I want to have, say, 2000 of a different sprite generate from the center position and, like a particle system, shoot off in all directions.
First off, I tried porting the velocity code (written in Objective-C) over to cocos2d, and that didn't work. Here is the code:
- (void)ccTouchBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (CGRectContainsPoint([[self getChildByTag:1] boundingBox], location))
    {
        for (int i = 0; i < 100; i++)
        {
            CCSprite *ballGuySprite = [CCSprite spriteWithFile:@"ball.png"];
            [self addChild:ballGuySprite z:7];
            ballGuySprite.position = ccp(((s.width + i * 10) / 2), (s.height + i * 10) / 2);
        }
    }
}
What that does is: when I touch the first sprite, 100 of the other sprites appear on top of each other, trailing toward the top-right corner.
The velocity code that I used is as follows; when I try to apply it to the sprite, nothing happens:
- (void)checkCollisionWithScreenEdges
{
    if (ballGuysRect.origin.x <= 0)
    {
        ballVelocity.x = abs(ballVelocity.x);
    }
    if (ballGuysRect.origin.x >= VIEW_WIDTH - GUY_SIZE)
    {
        ballVelocity.x = -1 * abs(ballVelocity.x);
    }
    if (ballGuysRect.origin.y <= 0)
    {
        ballVelocity.y = abs(ballVelocity.y);
    }
    if (ballGuysRect.origin.y >= VIEW_HEIGHT - GUY_SIZE)
    {
        ballVelocity.y = -1 * abs(ballVelocity.y);
    }
}

- (void)updateModelWithTime:(CFTimeInterval)timestamp
{
    if (lastTime == 0.0)
    {
        lastTime = timestamp;
    }
    else
    {
        timeDelta = timestamp - lastTime;
        lastTime = timestamp;
        ballGuysRect.origin.x += ballVelocity.x * timeDelta;
        ballGuysRect.origin.y += ballVelocity.y * timeDelta;
        [self checkCollisionWithScreenEdges];
    }
}
When I attach that code to the sprite, nothing happens.
I also tried adding a CCParticleExplosion, which did do what I wanted, but I still want to attach a touch handler to each individual sprite that's generated, and the particles tend to just fade away.
So again, I'm still fairly new to this, and if anyone could give any advice that would be great.
Thanks for your patience and for taking the time to read this.
Your code looks good to me, but you never seem to update the position of your sprites. Somewhere in updateModelWithTime I would expect you to set ballGuySprite.position = ballGuysRect.origin plus half of its height or width, respectively.
Also, I don't see how updateModelWithTime can control 100 different sprites. I see only one instance of ballGuysRect here. You will need a ballGuysRect for each sprite, e.g. an array.
Finally, I'd say that you don't really need ballGuysRect, ballVelocity, and the sprite. Ball could be a subclass of CCSprite, including a velocity vector. Then all you need to do is keep an array of Balls and manage those.
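As a rough illustration of that last idea (untested, assuming cocos2d 2.x and that the layer's scheduled update drives the movement), a Ball subclass could look something like this:

// Untested sketch: a sprite that carries its own velocity and moves itself.
@interface Ball : CCSprite
@property (nonatomic, assign) CGPoint velocity;
@end

@implementation Ball

- (void)advanceBy:(ccTime)delta
{
    // Move according to the stored velocity.
    self.position = ccp(self.position.x + self.velocity.x * delta,
                        self.position.y + self.velocity.y * delta);
}

@end

// In the layer: keep an NSMutableArray of Ball instances and, from the
// scheduled update method, call [ball advanceBy:delta] on each one,
// removing balls that have moved off screen.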
I am not sure which version of cocos2d you are using, but a few things look a bit odd.
Your first problem appears to be that you are just using the same sprite over and over again.
Since you want so many different sprites shooting away, I would recommend that you use a CCSpriteBatchNode, as this should simplify things and speed things up.
The following code should help you get that set up and move them offscreen with CCMoveTo:
// in your header file:
CCSpriteBatchNode *batch;

// in your init method
batch = [CCSpriteBatchNode batchNodeWithFile:@"ball.png"];

// then in your ccTouches method
for (int i = 0; i < 100; i++)
{
    CCSprite *ballGuySprite = [CCSprite spriteWithFile:@"ball.png"];
    [batch addChild:ballGuySprite z:7 tag:0];
    ballGuySprite.position = ccp(/* wherever the center image is located */);

    id actionMove = [CCMoveTo actionWithDuration:actualDuration
                                        position:ccp(/* random off-screen location */)];
    [ballGuySprite runAction:actionMove];
}
Also usually your update method looks something like the following:
- (void)update:(ccTime)delta
{
    // check for sprites that have moved off screen and disable them.
}
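For instance, the body of that check could look like this (untested; assumes cocos2d 2.x, the batch node from above, and that removing an off-screen sprite counts as disabling it):

// Untested sketch of the off-screen check described above.
- (void)update:(ccTime)delta
{
    CGSize s = [[CCDirector sharedDirector] winSize];
    NSMutableArray *offscreen = [NSMutableArray array];

    for (CCSprite *sprite in [batch children]) {
        CGPoint p = sprite.position;
        if (p.x < 0 || p.x > s.width || p.y < 0 || p.y > s.height) {
            [offscreen addObject:sprite];
        }
    }
    // Remove outside the enumeration to avoid mutating the children array while iterating.
    for (CCSprite *sprite in offscreen) {
        [sprite removeFromParentAndCleanup:YES];
    }
}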
Hope this helps.

Render second vertex buffer with Apple's Metal

I'm stuck on a problem where I want to render just two triangles (each one stored in a separate buffer), and the Metal API rejects attempts to render the second vertex buffer. I suspect this is about alignment. The assertion message is: failed assertion `(length - offset)(0) must be >= 32 at buffer binding at index 0 for vertexArray[0].' Here is the code:
Vertex and constants structs:
struct VertexPositionColor
{
    VertexPositionColor(const simd::float4& pos,
                        const simd::float4& col)
        : position(pos), color(col) {}

    simd::float4 position;
    simd::float4 color;
};

typedef struct
{
    simd::float4x4 model_view_projection;
} constants_t;
This is how I store and add new buffers (the function gets called twice):
NSMutableArray<id<MTLBuffer>> *_vertexBuffer;
NSMutableArray<id<MTLBuffer>> *_uniformBuffer;
NSMutableArray<id<MTLBuffer>> *_indexBuffer;

- (void)linkGeometry:(metalGeometry *)geometry
{
    [_vertexBuffer addObject:[_device newBufferWithBytes:[geometry vertices]
                                                  length:[geometry vertices_length]
                                                 options:0]];
    [_uniformBuffer addObject:[_device newBufferWithLength:[geometry uniforms_length]
                                                   options:0]];

    RCB::constants_t *guts = (RCB::constants_t *)[[_uniformBuffer lastObject] contents];
    guts->model_view_projection = [geometry uniforms]->model_view_projection;

    [geometry linkTransformation:(RCB::constants_t *)[[_uniformBuffer lastObject] contents]];
}
And these are the lines where the assert fails (on the very last one):
[render setVertexBuffer:_vertexBuffer[0] offset:0 atIndex:0];
[render setVertexBuffer:_uniformBuffer[0] offset:0 atIndex:1];
[render drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:3];
[render setVertexBuffer:_vertexBuffer[1] offset:3*sizeof(VertexPositionColor) atIndex:0];
[render setVertexBuffer:_uniformBuffer[1] offset:sizeof(constants_t) atIndex:1];
[render drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:3 vertexCount:3];
So we just make the offsets equal to the amount of memory taken by the previous buffer's data. Note that the first triangle is rendered as expected if we comment out the last line.
Can anyone see what I've missed? I would really appreciate it.
Regards
The offset parameter expresses the offset to the beginning of the data in the provided buffer. If you're using separate buffers for each object, the offset should be 0.
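Applied to the snippet in the question, that would mean binding each buffer with offset 0. Presumably vertexStart would also be 0 for the second draw, since each buffer holds only one triangle (that last point is my assumption, not part of the answer above):

// Each triangle lives in its own buffer, so the data starts at offset 0 in each,
// and each draw starts at vertex 0 of the currently bound buffer.
[render setVertexBuffer:_vertexBuffer[0] offset:0 atIndex:0];
[render setVertexBuffer:_uniformBuffer[0] offset:0 atIndex:1];
[render drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:3];

[render setVertexBuffer:_vertexBuffer[1] offset:0 atIndex:0];
[render setVertexBuffer:_uniformBuffer[1] offset:0 atIndex:1];
[render drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:3];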