Dividing MDLMesh into multiple SCNGeometry - objective-c

I have an .obj file containing a 3D model that is divided into multiple smaller meshes, e.g. a front and a back.
My goal is to load this file into a SceneKit view and interact with the different parts of the model: color them, select them, hide them, or move them individually.
I was able to load the file into an MDLAsset containing the MDLMesh, which itself contains all of the sub meshes as MDLSubmesh objects.
To display the loaded model I have to convert the MDLMesh to an SCNGeometry.
The basic approach is to call SCNGeometry geometryWithMDLMesh:(MDLMesh *)mdlMesh. This works fine, and the SCNGeometry contains the different SCNGeometryElements.
However, a lot of information that was in the MDLSubmesh is lost when converting to the SceneKit geometry, and my ability to interact with the different sub meshes is very limited.
It would be ideal if I could convert each MDLSubmesh to an SCNGeometry individually. I have tried two different approaches:
I tried to use [MDLMesh newSubdividedMesh:aMesh submeshIndex:i subdivisionLevels:0] for each of the sub meshes and then created an SCNGeometry out of each of them.
The problem was that SceneKit didn't render the scene as expected. The geometry was correct, but the lighting was not applied to the model, something that worked when I converted the whole MDLMesh.
Generate a new mesh from each submesh:
for (NSInteger i = 0; i < [[mesh submeshes] count]; i++) {
    MDLMesh *submesh = [MDLMesh newSubdividedMesh:mesh submeshIndex:i subdivisionLevels:0];
    SCNGeometry *geometry = [SCNGeometry geometryWithMDLMesh:submesh];
    SCNNode *subNode = [SCNNode nodeWithGeometry:geometry];
    [node addChildNode:subNode];
}
Here is the resulting rendering with this approach (rendering with subdivided meshes) compared to the result when converting the whole MDLMesh (rendering the whole mesh). Notice the missing lighting effects in the first rendering, produced with the code above.
The second approach was to generate the SCNGeometry with the SCNGeometry geometryWithSources:elements: method. Although I doubt this is the 'right' way to do this, here is what I tried.
- (SCNNode *)loadMDLMesh:(MDLMesh *)mesh withSubmeshes:(bool)sub {
    if (sub) {
        //Generate a SceneKit node
        SCNNode *node = [SCNNode node];
        //Generate a new geometry from each submesh
        for (NSInteger i = 0; i < [[mesh submeshes] count]; i++) {
            //Create the geometry element from the MDL sub mesh
            SCNGeometryElement *element = [SCNGeometryElement geometryElementWithMDLSubmesh:[[mesh submeshes] objectAtIndex:i]];
            //Create a geometry source from the index buffer
            MDLMeshBufferMap *map = [[[[mesh submeshes] objectAtIndex:i] indexBuffer] map];
            SCNGeometrySource *source = [SCNGeometrySource geometrySourceWithVertices:[map bytes] count:[[[mesh submeshes] objectAtIndex:i] indexCount]];
            //Create the SCNGeometry from the source and the element
            SCNGeometry *subMesh = [SCNGeometry geometryWithSources:[NSArray arrayWithObject:source] elements:[NSArray arrayWithObject:element]];
            //Update the name
            subMesh.name = [[[mesh submeshes] objectAtIndex:i] name];
            //Create a subnode and add it to the object node
            SCNNode *subNode = [SCNNode nodeWithGeometry:subMesh];
            [node addChildNode:subNode];
        }
        return node;
    } else {
        return [SCNNode nodeWithMDLObject:mesh];
    }
}
Unfortunately, the app crashes with a bad access exception (EXC_BAD_ACCESS).
As you can see, I am not that experienced with Objective-C development. Any help fixing my ideas, or different approaches to splitting up the mesh, would be great.
Thank you.

[Re-posting comment as answer as it seems it provided a working solution]
It looks like you're missing the normals when using +newSubdividedMesh:submeshIndex:subdivisionLevels: to build a new mesh from a submesh. Maybe -initWithVertexBuffer:vertexCount:descriptor:submeshes: will lead to better results?
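For illustration, a minimal sketch of that idea (untested; it assumes the parent mesh keeps all of its vertex data in a single vertex buffer, so sharing the buffer and vertex descriptor carries the normals across):
//Build one MDLMesh per submesh, reusing the parent's vertex buffer and
//descriptor so normals and other vertex attributes are preserved, then convert.
SCNNode *node = [SCNNode node];
for (MDLSubmesh *submesh in [mesh submeshes]) {
    MDLMesh *singleMesh = [[MDLMesh alloc] initWithVertexBuffer:[[mesh vertexBuffers] firstObject]
                                                     vertexCount:[mesh vertexCount]
                                                      descriptor:[mesh vertexDescriptor]
                                                       submeshes:@[submesh]];
    SCNGeometry *geometry = [SCNGeometry geometryWithMDLMesh:singleMesh];
    geometry.name = submesh.name;
    [node addChildNode:[SCNNode nodeWithGeometry:geometry]];
}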

Related

Get Depth from detected face using Vision and ARKit iOS

I am trying to achieve something like this using Vision and ARKit: my idea is to get landmark points from Vision and place nodes using those points. I am using this demo as a reference. So far I have been able to find the landmark points of the face using Vision. Now I want to use those points in ARKit to add nodes to the scene, but I am unable to get the depth, which is essential for the node's position.
After searching SO, I found this post on converting a CGPoint to an SCNVector3, but I'm stuck because I don't have any reference plane that I can hit-test against to get the depth.
So, how can I get an accurate depth from CGPoints other than using hitTest, or is there any other way I can achieve the result shown in the video?
Here is the code I have implemented:
CGPoint faceRectCenter = (CGPoint){
    CGRectGetMidX(faceRect), CGRectGetMidY(faceRect)
}; // faceRect is the detected face's bounding box

__block NSMutableArray<ARHitTestResult *> *testResults = [NSMutableArray new];
void (^hitTest)(void) = ^{
    NSArray<ARHitTestResult *> *hitTestResults = [self.sceneView hitTest:faceRectCenter types:ARHitTestResultTypeFeaturePoint];
    if (hitTestResults.count > 0) {
        // get the first result that is farther than 10 cm
        ARHitTestResult *firstResult = nil;
        for (ARHitTestResult *result in hitTestResults) {
            if (result.distance > 0.10) {
                firstResult = result;
                [testResults addObject:firstResult];
                break;
            }
        }
    }
};

for (int i = 0; i < 3; i++) {
    hitTest();
}

if (testResults.count > 0) {
    NSLog(@"%@", testResults);
    SCNVector3 postion = averagePostion([testResults copy]);
    NSLog(@"<%.1f,%.1f,%.1f>", postion.x, postion.y, postion.z);
    __block SCNNode *textNode = [ARTextNode nodeWithText:name Position:postion];
    SCNVector3 plane = [self.sceneView projectPoint:textNode.position];
    float projectedDepth = plane.z;
    NSLog(@"projectedDepth: %f", projectedDepth);
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.sceneView.scene.rootNode addChildNode:textNode];
        [textNode show];
    });
} else {
    // NSLog(@"HitTest invalid");
}
Any help will be great!!
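For reference, a minimal sketch of reusing that projected depth to place other Vision landmark points without extra hit tests (untested; the helper name is made up, sceneView is the ARSCNView from the code above, and referenceDepth is the normalized z obtained from projectPoint: of a known nearby point):
// landmark: a Vision point already converted to view coordinates
- (SCNVector3)worldPositionForLandmark:(CGPoint)landmark
                        referenceDepth:(float)referenceDepth {
    // Reuse the normalized depth and unproject back into world space.
    SCNVector3 screenPoint = SCNVector3Make(landmark.x, landmark.y, referenceDepth);
    return [self.sceneView unprojectPoint:screenPoint];
}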

Multiple Render Targets in Metal

I am trying to implement two distinct CAMetalLayers and use one MTLRenderCommandEncoder to render the same scene to both layers (Metal for OS X).
For this purpose, I've tried creating one MTLRenderPassDescriptor and attaching the two layers' textures to its color attachments. My render method looks like the following:
- (void)render {
    dispatch_semaphore_wait(_inflight_semaphore, DISPATCH_TIME_FOREVER);

    id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
    __block dispatch_semaphore_t block_sema = _inflight_semaphore;
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
        dispatch_semaphore_signal(block_sema);
    }];

    MTLRenderPassDescriptor *renderPass = [MTLRenderPassDescriptor renderPassDescriptor];
    for (int i = 0; i < [_metalLayers count]; i++) {
        _metalDrawables[i] = [_metalLayers[i] nextDrawable];
        renderPass.colorAttachments[i].texture = _metalDrawables[[_metalDrawables count] - 1].texture;
        renderPass.colorAttachments[i].clearColor = MTLClearColorMake(0.5, 0.5, (float)i / (float)[_metalLayers count], 1);
        renderPass.colorAttachments[i].storeAction = MTLStoreActionStore;
        renderPass.colorAttachments[i].loadAction = MTLLoadActionClear;
    }

    id<MTLRenderCommandEncoder> commandEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPass];
    [commandEncoder setRenderPipelineState:_pipeline];
    [commandEncoder setVertexBuffer:_positionBuffer offset:0 atIndex:0];
    [commandEncoder setVertexBuffer:_colorBuffer offset:0 atIndex:1];
    [commandEncoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:3 instanceCount:1];
    [commandEncoder endEncoding];

    for (int i = 0; i < [_metalDrawables count]; i++) {
        [commandBuffer presentDrawable:_metalDrawables[i]];
    }
    [commandBuffer commit];
}
However, the scene gets rendered to just one of the layers, the one associated with the first color attachment's texture. The other layer is cleared with the specified clear color, but nothing is drawn into it.
Does this approach have any chance of succeeding, or is using the render pass descriptor's color attachments entirely pointless when trying to render the same scene to multiple screens (i.e. CAMetalLayers)? If so, is there any other conceivable approach to achieve this result?
To write to multiple render targets, you need to explicitly write out to each render target in your fragment shader, as @lock has already pointed out:
struct MyFragmentOutput {
    // color attachment 0
    float4 clr_f [[ color(0) ]];
    // color attachment 1
    int4 clr_i [[ color(1) ]];
    // color attachment 2
    uint4 clr_ui [[ color(2) ]];
};

fragment MyFragmentOutput
my_frag_shader( ... )
{
    MyFragmentOutput f;
    ....
    f.clr_f = ...;
    f.clr_i = ...;
    ...
    return f;
}
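For completeness, the render pipeline state must also declare a pixel format for every color attachment the shader writes to; a rough host-side sketch (the formats here are assumptions and must match the textures you actually attach):
MTLRenderPipelineDescriptor *pipelineDesc = [MTLRenderPipelineDescriptor new];
pipelineDesc.vertexFunction = vertexFunction;
pipelineDesc.fragmentFunction = fragmentFunction;
pipelineDesc.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;  // clr_f
pipelineDesc.colorAttachments[1].pixelFormat = MTLPixelFormatRGBA32Sint;  // clr_i
pipelineDesc.colorAttachments[2].pixelFormat = MTLPixelFormatRGBA32Uint;  // clr_ui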
However, this is overkill, since you don't really need the GPU to render the scene twice. So the answer above by @Kacper is more accurate for your case. To add to it, I would recommend using a blit encoder, which can copy data between two textures on the GPU; I assume this should be much faster than going through the CPU.
https://developer.apple.com/library/mac/documentation/Miscellaneous/Conceptual/MetalProgrammingGuide/Blit-Ctx/Blit-Ctx.html#//apple_ref/doc/uid/TP40014221-CH9-SW4
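A minimal sketch of that idea (untested; it assumes the scene has already been rendered into sceneTexture, that _metalDrawables holds the current frame's drawables, and that the drawables' pixel format and size match the source texture):
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
for (id<CAMetalDrawable> drawable in _metalDrawables) {
    // Copy the rendered scene into each layer's drawable on the GPU.
    [blit copyFromTexture:sceneTexture
              sourceSlice:0
              sourceLevel:0
             sourceOrigin:MTLOriginMake(0, 0, 0)
               sourceSize:MTLSizeMake(sceneTexture.width, sceneTexture.height, 1)
                toTexture:drawable.texture
         destinationSlice:0
         destinationLevel:0
        destinationOrigin:MTLOriginMake(0, 0, 0)];
}
[blit endEncoding];
for (id<CAMetalDrawable> drawable in _metalDrawables) {
    [commandBuffer presentDrawable:drawable];
}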
As far as I have read about this problem, you can try rendering to a single MTLTexture (not a drawable's texture) and then use the MTLTexture methods getBytes:... and replaceRegion:... to copy the texture data into the two drawable layers.
Currently I am working on rendering to an ordinary texture, but I am encountering some artifacts and it is not working for me yet; maybe you will find a way to solve that.

Stopping objects when collision occurs in sprite kit

I'm building a game using Apple's SpriteKit and SKPhysics that uses squares which move around on the screen based on user input. I'm having an issue with collisions: the squares move out of place when they collide. For example, if all the blocks move to the right, any blocks that are on the same "row" need to stack next to each other and not overlap or change position vertically. As of now, they change their vertical position. Here is my code:
self.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:self.size];
self.physicsBody.dynamic = YES;
self.physicsBody.allowsRotation = NO;
self.physicsBody.affectedByGravity = NO;
Are there any other settings that I'm missing?
The issue could be coming from your collisionBitMask. To solve it, first create categories for the blocks' physics bodies as follows:
struct PhysicsCategory {
    static let None  : UInt32 = 0
    static let All   : UInt32 = UInt32.max
    static let block : UInt32 = 0b1
}
Then set the blocks' physics body settings as follows:
block.physicsBody?.categoryBitMask = PhysicsCategory.block
block.physicsBody?.collisionBitMask = PhysicsCategory.None
This should prevent SpriteKit from automatically carrying out collision calculations.
If you're moving your sprites via user input (e.g. SKAction's moveTo), then you're most likely not using physics to move the sprite. In this case, you should set the physics body's velocity to zero; this will make the sprite completely rigid when it comes into contact with another object.
Try:
self.physicsBody.velocity = CGVectorMake(0, 0);
You should put this code inside your update loop.
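For example, a minimal sketch of resetting the velocities every frame (assuming the scene keeps references to its blocks in a blocks array; that property name is made up here):
- (void)update:(NSTimeInterval)currentTime {
    // Zero out any velocity that collisions imparted to the squares.
    for (SKSpriteNode *block in self.blocks) {
        block.physicsBody.velocity = CGVectorMake(0, 0);
        block.physicsBody.angularVelocity = 0;
    }
}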

Route Me: Location & Screen Position

I am drawing a bunch of markers on my map view.
The information about the positions of all objects (latitude and longitude) is stored in an array.
To optimize performance I don't want to draw ALL the markers; I only want to allocate markers for the area I am currently seeing. But the only position/screen information I get from the Route Me API is:
center.latitude;
center.longitude;
But this only returns the center of the whole map, and I want the position (latitude and longitude) of the view I am actually looking at. I am also able to get the GPS position, but not the center position of the screen. Do you think there is an easy way to get this information?
This is a part of my implementation:
UIImage *object = [UIImage imageNamed:@"object.png"];
CLLocationCoordinate2D objectLocation;
NSString *templatString;
NSString *templongString;
for (int i = 0; i < (myArray.count); i = i + 2)
{
    templongString = [myArray objectAtIndex:i];
    templatString = [myArray objectAtIndex:i + 1];
    objectLocation.latitude = [templatString floatValue];
    objectLocation.longitude = [templongString floatValue];
    myMarker = [[RMMarker alloc] initWithUIImage:object anchorPoint:CGPointMake(xspec, yspec)]; //0.5 1.0
    [markerManager1 addMarker:myMarker AtLatLong:objectLocation];
}
Take a look at latitudeLongitudeBoundingBoxForScreen in RMMapContents.
Ciao!
-- Randy
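For illustration, a hedged sketch of how that bounding box might be used to skip off-screen markers inside the loop above (untested; the RMSphericalTrapezium field names are from memory and may differ between Route Me versions):
RMSphericalTrapezium box = [mapView.contents latitudeLongitudeBoundingBoxForScreen];
BOOL onScreen =
    objectLocation.latitude  <= box.northEast.latitude  &&
    objectLocation.latitude  >= box.southWest.latitude  &&
    objectLocation.longitude <= box.northEast.longitude &&
    objectLocation.longitude >= box.southWest.longitude;
if (onScreen) {
    // Only allocate and add a marker when its coordinate is currently visible.
    myMarker = [[RMMarker alloc] initWithUIImage:object anchorPoint:CGPointMake(xspec, yspec)];
    [markerManager1 addMarker:myMarker AtLatLong:objectLocation];
}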

interacting with UIViews stored inside a NSMutableArray

a big noob needs help understanding things.
I have three UIViews stored inside an NSMutableArray:
lanes = [[NSMutableArray arrayWithCapacity:3] retain];

- (void)registerLane:(Lane *)lane {
    NSLog (@"registering lane:%i", lane);
    [lanes addObject:lane];
}
in the NSLog I see: registering lane:89183264
The value displayed in the NSLog (89183264) is what I am after.
I'd like to be able to save that number in a variable to be able to reuse it elsewhere in the code.
The closest I could come up with was this:
NSString *lane0 = [lanes objectAtIndex:0];
NSString *description0 = [lane0 description];
NSLog (@"description0:%@", description0);
The problem is that description0 gets the whole UIView object, not just the single number (dec 89183264 is hex 0x550d420)
description0's content:
description0:<Lane: 0x550d420; frame = (127 0; 66 460); alpha = 0.5; opaque = NO; autoresize = RM+BM; tag = 2; layer = <CALayer: 0x550d350>>
What I don't get is why NSLog shows the correct decimal value so easily, yet I seem unable to get it out of the NSMutableArray any other way. I am sure I am missing some basic knowledge here, and I would appreciate it if someone could take the time to explain what's going on so I can finally move on. It's been a long day of studying.
Why can't I save the 89183264 number easily with something like:
NSInteger * mylane = lane.id;
or
NSInteger * mylane = lane;
thank you all
I'm really confused as to why you want to save the memory location of the view, because that's what your '89183264' number is: the address stored in the pointer. When you call:
NSLog (@"registering lane:%i", lane);
...do you understand what's actually being printed there, and what that number means?
It seems like a really bad idea, especially since, if you're subclassing UIView, you've already got a lovely .tag property to which you can assign an int of your choosing.
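For example, a minimal sketch of the tag idea (the tag values here are made up; any ints of your choosing work):
- (void)registerLane:(Lane *)lane {
    // Identify each lane by a small int instead of its memory address.
    lane.tag = [lanes count];   // 0, 1, 2 for the three lanes
    [lanes addObject:lane];
}

// Later, look a lane up again by its tag:
for (Lane *lane in lanes) {
    if (lane.tag == 1) {
        NSLog(@"found lane 1: %@", lane);
    }
}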
You're making life infinitely more complex than it needs to be. Just use a pointer. Say I have an array containing lots of UIViews:
UIView *viewToCompare = [myArray objectAtIndex:3];
for (id object in myArray) {
    if (object == viewToCompare) {
        NSLog(@"Found it!");
    }
}
That does what you're trying to do - it compares two pointers - and doesn't need any faffing around with ints, etc.