Tearing graphics in SCNView - objective-c

Hi, I have an SCNView with some nodes. When rotating, I get some strange tearing: the nodes on top have a higher rendering order, but changing this seems to have no effect.
Is there anything I can do to get rid of the white lines?
It's like they're fighting for position.

As said above, it looks like you're experiencing z-fighting because your colored objects and your white object lie in the same plane.
You can avoid this:
- by slightly offsetting your geometries, though this trick does not work in every situation (the user might notice the gap depending on the point of view);
- by changing the renderingOrder of your nodes, but don't forget to tweak the writesToDepthBuffer and readsFromDepthBuffer properties of your materials.

When using the second solution mnuages introduced:
node.renderingOrder = 100; // a large value so this node renders last
// disable depth-buffer writes and reads for this node's material
node.firstMaterial.writesToDepthBuffer = NO;
node.firstMaterial.readsFromDepthBuffer = NO;
Note that this only works when the node's geometry sits on top of everything else; otherwise it will lead to a weird perspective scenario.
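For the first approach, a minimal sketch (whiteNode and the 0.001 offset are placeholders, not from the original answer; you'd tune the offset so the gap stays invisible from your camera angles):
// Hypothetical: nudge one of the coplanar nodes slightly toward the camera
// so the two surfaces no longer compete for the same depth values.
whiteNode.position = SCNVector3Make(whiteNode.position.x,
                                    whiteNode.position.y,
                                    whiteNode.position.z + 0.001);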

Related

Find out when SCNNode disappears from SCNScene

Does a particular method get called when an SCNNode is removed from the scene?
-(void)removeFromParentNode;
does not get called on the SCNNode object.
To set the scene:
I am using gravity to pull down an object. When an object goes too far down, it automatically disappears and the draw calls and polygon counts decrease. So the SCNNode is definitely being destroyed, but is there a way I could hook into the destruction?
Other answers covered this pretty well already, but to go a bit further:
First, your node isn't being removed from the scene — its content is passing outside the camera's viewing frustum, which means SceneKit knows it doesn't need to issue draw calls to the GPU to render it. If you enumerate the child nodes of the scene (or of whatever parent contains the nodes you're talking about), you'll see that they're still there. You lose some of the rendering performance cost because SceneKit doesn't need to issue draw calls for stuff that it knows won't be visible in the frame.
(As noted in Tanguy's answer, this may be because of your zFar setting. Or it may not; it depends on which direction the nodes are falling out of the camera's view.)
But if you keep adding nodes and letting physics drop them off the screen, you'll accumulate a pre-render performance cost, as SceneKit has to walk the scene graph every frame and figure out which nodes it'll need to issue draw calls for. This cost is pretty small for each node, but it could eventually add up to something you don't want to deal with.
And since you want to have something happen when the node falls out of frame anyway, you just need to find a good opportunity to both deal with that and clean up the disappearing node.
So where to do that? Well, you have a few options. As has been noted, you could put something into the render loop to check the visibility of every node on every frame:
- (void)renderer:(id<SCNSceneRenderer>)renderer didSimulatePhysicsAtTime:(NSTimeInterval)time {
    if (![renderer isNodeInsideFrustum:myNode withPointOfView:renderer.pointOfView]) {
        // it's gone; remove it from the scene
    }
}
But that's a somewhat expensive check to be running on every frame (remember, you're targeting 30 or 60 fps here). A better way might be to let the physics system help you:
1. Create a node with an SCNBox geometry that's big enough to "catch" everything that falls off the screen.
2. Give that node a static physics body, and set up the category and collision bit masks so that your falling nodes will collide with it.
3. Position that node just outside of the viewing frustum so that your falling objects hit it soon after they fall out of view. (A sketch of these setup steps follows the delegate code below.)
4. Implement a contact delegate method to destroy the falling nodes:
- (void)physicsWorld:(SCNPhysicsWorld *)world didBeginContact:(SCNPhysicsContact *)contact {
    if (/* sort out which node is which */) {
        [fallingNode removeFromParentNode];
        // ... and do whatever else you want to do when it falls offscreen.
    }
}
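For steps 1-3, here is a minimal sketch; the node names, sizes, and bitmask values are assumptions, not from the original answer:
// Hypothetical: an invisible static "catcher" box placed below the viewing
// frustum. Falling nodes should carry kFallingCategory in their
// categoryBitMask for the contact callback to fire.
static const NSUInteger kFallingCategory = 1 << 0;
static const NSUInteger kCatcherCategory = 1 << 1;

SCNBox *catchBox = [SCNBox boxWithWidth:100 height:1 length:100 chamferRadius:0];
SCNNode *catcherNode = [SCNNode nodeWithGeometry:catchBox];
catcherNode.position = SCNVector3Make(0, -50, 0); // just below the visible area
catcherNode.opacity = 0;                          // never seen, only collided with
catcherNode.physicsBody = [SCNPhysicsBody staticBody];
catcherNode.physicsBody.categoryBitMask = kCatcherCategory;
catcherNode.physicsBody.contactTestBitMask = kFallingCategory;
[scene.rootNode addChildNode:catcherNode];
Remember to assign the contact delegate (scene.physicsWorld.contactDelegate) so the physicsWorld:didBeginContact: method above actually gets called.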
Your object will disappear if it goes farther than the zFar property of your active camera (the default value is 100.0).
As said by David Rönnqvist in the comments, your node is not destroyed, and you can still modify its properties.
If you want to hook into the disappearance of your node's geometry, you can calculate the distance between your active camera and your node, check it every frame in your rendering loop, and trigger an action if it gets higher than 100.
If you want to render your node at a greater distance, you can simply increase the zFar property of your camera.
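A minimal sketch of that per-frame check (myNode is a placeholder, and this assumes your class is set as the view's renderer delegate):
// Hypothetical: compare the camera-to-node distance against the camera's
// zFar each frame, inside an SCNSceneRendererDelegate callback.
- (void)renderer:(id<SCNSceneRenderer>)renderer updateAtTime:(NSTimeInterval)time {
    SCNVector3 cam = renderer.pointOfView.position;
    SCNVector3 pos = myNode.position;
    double dx = pos.x - cam.x, dy = pos.y - cam.y, dz = pos.z - cam.z;
    double distance = sqrt(dx * dx + dy * dy + dz * dz);
    if (distance > renderer.pointOfView.camera.zFar) {
        // the node's geometry is no longer rendered; react here
    }
}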

Cocos3D - background shown through meshes

I imported the .pod file created from Blender, and the blue background is showing through the eyelash and eyebrow meshes. Does anyone know why I'm encountering this?
[Image: WITHOUT additional material, looking normal except at the root of the hair.]
[Image: WITH a new green material added to her left shoulder, the eyebrows and eyelashes show the background.]
This issue is caused by the order in which the nodes are being rendered in your scene.
In the first model, the hair is drawn first, then the skin, then the eyebrows and eyelashes. In the second model, the hair, eyebrows and eyelashes are all drawn before the skin. By the time the skin under the hair or eyelashes is drawn, the depth buffer indicates that something closer to the camera has already been drawn, and the engine doesn't bother rendering those skin pixels. But because the eyelashes, eyebrows and hair all contain transparency, we end up looking right through them onto the backdrop.
This use of a depth buffer is key to all 3D rendering. It's how the engine knows not to render pixels that are visually occluded by another object; otherwise, all we'd ever see would be the last object rendered. However, when rendering overlapping objects that contain transparency, it's important to get the rendering order correct, so that more distant objects that are behind closer transparent objects are rendered first.
In Cocos3D, there are several tools available for ordering your transparent objects for rendering:
1. The first, and primary, tool is the drawingSequencer that is managed by the CC3Scene. You can configure several different types of drawing sequencers. The default sequencer is smart enough to render all opaque objects first, then render the objects that contain transparency in decreasing order of distance from the camera (rendering farther objects first). This works best for most scenes, in particular where objects are moving around and can move in front of each other unpredictably. Unfortunately, in your custom CC3Scene initialization code (which you sent me per the question comments), you replaced the default drawing sequencer with one that does not sequence transparent objects based on distance. If you remove that change, everything works properly.
2. Objects that are not explicitly sequenced by distance (as in part 1 above) are rendered in the order in which they are added to the scene. You can therefore also define rendering order by ensuring that the objects are added to your scene in the order in which you want them rendered. This can work well for static models, such as your first character (if you change it to add the hair after the skin).
3. CC3Node also has a zOrder property, which allows you to override the rendering order explicitly, so that objects with larger zOrder values are rendered before those with smaller zOrder values. This is useful when you have a static model whose components cannot be added in rendering order, or to temporarily override the rendering order of two transparent objects that might pass in front of each other. Using the zOrder property does depend on using a drawingSequencer that makes use of it (the default drawing sequencer does).
4. Finally, you can temporarily turn off depth testing or masking when rendering particular nodes, by setting the shouldDisableDepthTest and shouldDisableDepthMask properties to YES on those nodes. A sketch of the last two options follows this list.
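A minimal sketch of options 3 and 4 (the node names are placeholders for the corresponding meshes in your model):
// Hypothetical: larger zOrder values render first, so give the skin a
// larger zOrder than the transparent eyelash/eyebrow meshes.
skinNode.zOrder = 10;    // rendered first
eyebrowNode.zOrder = 5;  // rendered after the skin
eyelashNode.zOrder = 5;
// Or stop the hair from writing to the depth buffer at all:
hairNode.shouldDisableDepthMask = YES;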

three.js: how to control rendering order

I'm using three.js.
How can I control the rendering order? Let's say I have three plane geometries and want to render them in a specific order, regardless of their spatial position.
Thanks.
You can set
renderer.sortObjects = false;
and the objects will be rendered in the order they were added to the scene.
Alternatively, you can leave sortObjects as true, the default, and specify for each object a value for object.renderOrder.
For more detail, see Transparent objects in Threejs
Another thing you can do is use the approach described here: How to change the zOrder of object with Threejs?
three.js r.71
Note: as of three.js r70 and higher, renderDepth has been removed.
Using object.renderDepth worked in my case. I had a glass case with transparent bubbles inside, and the bubbles were getting lost at certain angles.
Setting their renderDepth to a high number and playing with the depths of the other elements in the scene fixed the issue. Hooking up a dat.gui control to the renderDepth property made it very easy to tweak what needed to be at what depth to make the scene work.
In my fishScene, I have gravel, a tank, and bubbles. I hooked the gravel mesh up to a dat.gui control, and within a few seconds I had the depth I needed.
this.gui.add(this.fishScene.gravel, "renderDepth", 0, 200);
I had a bunch of objects cloned in a for loop at random x and y positions, with obj.z++ so they would line up in a row. Adding obj.renderOrder++ in the loop solved my issue.

How do I pad a graph so that lines aren't clipped by the edges?

I've had a pretty good look around for an answer to this and tried several solutions in my code; nothing has worked so far.
I have a line graph that I am plotting, a CPTScatterPlot graph, and I have got points adding to it correctly. I want to show each of these points as a dot about 3-5 pixels in diameter, and connected by lines that are about 3 pixels wide. This all works fine.
The problem is that when the plot runs straight along one of the edges of the graph hosting view, the lines and dots are clipped and don't look right at all.
This is a mockup of what it should look like:
And this is the effect I am seeing much of the time at the moment:
I apologise for the small images, but hopefully you can see that in the second one, the line and dots are rendered only a few pixels into the graph view, not fully in view. In the second one the data is actually at y=1 for the first 75%, then falls down to y=0.
How can I inset the drawing of the graph components by several pixels to prevent the clipping of any shapes?
So far:
- I have tried setting the padding on the graph, but that just contracts the area it draws to, I suppose to make room for titles, which I am not using.
- I have also tried adding to the min/max x/y range settings, which I recalculate based on the data I am updating in the background. This works, but obviously only if the amount I add to those values is correct in relation to the drawing scale that will be used for the data values I am inputting.
- I am on Mac OS using NSView (actually CPTGraphHostingView), so clipsToBounds isn't available. I also tried masksToBounds and masksToBorder on CPTXYGraph.
I think the easiest way to handle this is to simply extend your ranges by a small amount. There is a method in CPTPlotRange that makes it very easy to extend a given range by a fixed percentage (e.g. 1%). I think the main test app example even shows this in action.
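A sketch of that first option (the 1.05 factor, i.e. 5%, and the plotSpace name are assumptions; the expansion method lives on CPTPlotRange's mutable subclass):
// Pad both ranges slightly so values at the edges aren't clipped.
CPTMutablePlotRange *yRange = [plotSpace.yRange mutableCopy];
[yRange expandRangeByFactor:CPTDecimalFromDouble(1.05)];
plotSpace.yRange = yRange;

CPTMutablePlotRange *xRange = [plotSpace.xRange mutableCopy];
[xRange expandRangeByFactor:CPTDecimalFromDouble(1.05)];
plotSpace.xRange = xRange;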
Another option would be to turn off the masksToBounds and/or masksToBorder on the CPTPlotArea (plotArea) and possibly the CPTPlotAreaFrame (plotAreaFrame). You access them both via properties of the graph.
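For example (a sketch, assuming a CPTXYGraph named graph):
// Stop the plot area and its frame from clipping drawing at their edges.
graph.plotAreaFrame.masksToBorder = NO;
graph.plotAreaFrame.masksToBounds = NO;
graph.plotAreaFrame.plotArea.masksToBounds = NO;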
This might be of help:
The default padding on the graph itself (not the plot area frame) is 20 pixels on each side. You can change that, too.
graph.paddingLeft = 0.0;
graph.paddingTop = 0.0;
graph.paddingRight = 0.0;
graph.paddingBottom = 0.0;

OpenGL ES - Transparent texture blocking objects behind

I have some quads with a transparent texture, and some objects behind these quads. However, the objects behind don't seem to be shown. I know it has something to do with GL_BLEND, but I can't manage to make the objects behind show through.
I've tried with:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
but still not working. What I basically have is:
// I paint the object
draw_ac3d_file([actualObject getCurrentObject3d]);
// I paint the quad
paintQuadWithAlphaTexture();
There are two common scenarios that create this situation, and it is difficult to tell which one your program is doing, if either.
Draw Order
First, make sure you are drawing your objects in the correct order. You must draw from back-to-front or else the models will not be blended properly.
http://www.opengl.org/wiki/Transparency_Sorting
Note: as Arne Bergene Fossaa pointed out, front-to-back is the proper way to render objects that are not transparent, from a performance standpoint. Because of this, most renderers first draw all the models that have no transparency front-to-back, and then go back and render all models that have transparency back-to-front. This is covered in most 3D graphics texts out there.
[Figures: back-to-front vs. front-to-back rendering order; image credit: Geoff Leach, RMIT University]
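A rough sketch of that two-pass approach, adapted to the code in the question (drawOpaqueObjects, quadCount, and sortedQuads are placeholders for your own scene code):
// Pass 1: opaque geometry, no blending, depth writes on.
glDisable(GL_BLEND);
drawOpaqueObjects();

// Pass 2: transparent quads sorted farthest-to-nearest, depth testing
// still on but depth writes off so the quads don't occlude each other.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
for (int i = 0; i < quadCount; i++) {
    drawQuad(sortedQuads[i]);
}
glDepthMask(GL_TRUE);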
Lighting
The second most common issue is improper use of lighting. Normally in this case if you were using the fixed-function pipeline, people would advise you to simply call glDisable(GL_LIGHTING);
Now this should work (if it is the cause at all), but what if you want lighting? Then you would have to either employ custom shaders or set up proper material settings for the models.
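If you keep lighting on, a sketch of basic fixed-function material setup might look like this (the color values are illustrative only):
// Give the model explicit ambient/diffuse material colors so it is lit
// sensibly (OpenGL ES 1.x only accepts GL_FRONT_AND_BACK here).
GLfloat ambient[] = {0.2f, 0.2f, 0.2f, 1.0f};
GLfloat diffuse[] = {1.0f, 1.0f, 1.0f, 1.0f};
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, ambient);
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse);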
A discussion of using the material properties can be found at http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=285889