three.js: how to control rendering order - optimization

I am using three.js.
How can I control the rendering order? Let's say I have three plane geometries, and want to render them in a specific order regardless of their spatial position.
thanks

You can set
renderer.sortObjects = false;
and the objects will be rendered in the order they were added to the scene.
Alternatively, you can leave sortObjects as true, the default, and specify for each object a value for object.renderOrder.
For more detail, see Transparent objects in Threejs
Another thing you can do is use the approach described here: How to change the zOrder of object with Threejs?
three.js r.71
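As a rough illustration of the renderOrder approach above, something like the following should work; the plane sizes, colors, and the depthTest tweak are my own assumptions, not code from the answer:
// Sketch: three overlapping planes whose draw order is forced via renderOrder,
// independent of their distance from the camera.
var scene = new THREE.Scene();
var geometry = new THREE.PlaneGeometry(2, 2);

var back = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0xff0000 }));
var middle = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0x00ff00 }));
var front = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0x0000ff }));

// Lower renderOrder values are drawn first, so "front" is drawn last.
back.renderOrder = 0;
middle.renderOrder = 1;
front.renderOrder = 2;

// With depth testing disabled on the materials, draw order alone decides
// which plane appears on top, regardless of spatial position.
[back, middle, front].forEach(function (mesh) {
  mesh.material.depthTest = false;
  scene.add(mesh);
});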

As of three.js r70, renderDepth has been removed.

Using object.renderDepth worked in my case. I had a glass case and bubbles inside that were transparent. The bubbles were getting lost at certain angles.
So, setting their renderDepth to a high number and playing with the other elements' depths in the scene fixed the issue. Hooking up a dat.gui control to the renderDepth property made it very easy to tweak what needed to be at what depth to make the scene work.
So, in my fishScene, I have gravel, tank and bubbles. I hooked the gravel mesh up to a dat.gui control and within a few seconds I had the depth I needed.
this.gui.add(this.fishScene.gravel, "renderDepth", 0, 200);
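For context, the dat.gui hookup could look roughly like this; the gui and gravel variables are placeholders, and renderDepth only exists in pre-r70 builds of three.js, as noted above:
// Hypothetical sketch of the dat.gui hookup described above (pre-r70 three.js).
// "gravel" is a placeholder for whatever mesh you want to tweak.
var gui = new dat.GUI();
gui.add(gravel, "renderDepth", 0, 200).onChange(function (value) {
  // the slider just tweaks the value; rendering happens in the usual animation loop
  console.log("gravel renderDepth:", value);
});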

I had a bunch of objects that were cloned in a for loop at random x and y positions, with obj.z++ so they would line up in a line. Including obj.renderOrder++; in the loop solved my issue.
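A sketch of what that loop might look like; the base mesh, counts, and random ranges are placeholders, not the commenter's actual code:
// Sketch: clone a base mesh at random x/y, step z, and bump renderOrder
// so each clone is drawn after the one before it.
var renderOrder = 0;
var z = 0;
for (var i = 0; i < 20; i++) {
  var obj = baseMesh.clone(); // baseMesh is a placeholder mesh
  obj.position.x = Math.random() * 10;
  obj.position.y = Math.random() * 10;
  obj.position.z = z++;
  obj.renderOrder = renderOrder++;
  scene.add(obj);
}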

Related

Dynamic/scrollable container size in Cytoscape (+ cy.fit() issues)

So I've been giving Cytoscape a try recently. My project's goal is basically a collaborative graph that people will be able to add/remove nodes to/from, making it grow in the process. The graph will include many compound nodes.
Most of the examples I've seen use a container div that takes 100% of the screen space. This is fine for "controlled" graphs but won't work in my case because its size is intended to be dynamic.
Here's a JSFiddle using the circle layout within a fixed 3000px/3000px container:
https://jsfiddle.net/Jeto143/zj8ed82a/5/
1. Is there any way to have the container size be dynamic as opposed to stating it explicitly? Or do I have to compute the new optimal container size each time somehow, and then call cy.resize()?
edit: actually, using 100%/100% with cy.fit() might just work no matter how large the network is going to be, so please ignore this question if this is the case.
2. Is there a recommended layout for displaying large/unknown amounts of data in a non-hierarchical way that would "smartly" place nodes (including compound ones) in the most efficient way possible, all the while avoiding any overlap? (I guess that's a lot to ask...)
3. Why doesn't cy.fit() seem to be working in my example? I'm using it both at graph initialization and when CTRL+clicking nodes (to show closed neighborhood), but it doesn't seem to like the 3000x3000px container (seems better with 100%x100%).
edit: also ignore this question if you ignored 1., as again it seems fine with 100%/100%.
Any help/suggestions would be greatly appreciated.
Thank you very much in advance.
TLDR: It's (1).
Cytoscape has a pannable viewport, like a map. You can define the dimensions of the viewport (div) in CSS. What's displayed in the viewport is a function of the positions of the nodes, the zoom level, and the pan level -- just like what is visible in a map's viewport is a function of zoom, pan, and positions of points of interest.
So either you have to
(a) rethink your UI in terms of zoom and pan and use those in-built facilities in Cytoscape, or
(b) disable zoom and pan in Cytoscape (probably staying at (0, 0) at zoom 1) and let the user scroll the page as you add content to the graph and resize its container div to accommodate the new content (see the sketch below).
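For option (b), a minimal sketch might look like this; the container id, elements, and sizes are placeholders:
// Sketch for option (b): lock zoom and pan, then grow the container div as
// content is added and let the page scroll.
var cy = cytoscape({
  container: document.getElementById('cy'),
  elements: [ /* nodes and edges go here */ ],
  zoomingEnabled: false,
  panningEnabled: false,
  layout: { name: 'circle' }
});

function growContainer(widthPx, heightPx) {
  var div = cy.container();
  div.style.width = widthPx + 'px';
  div.style.height = heightPx + 'px';
  cy.resize();            // make Cytoscape re-read the container dimensions
  cy.pan({ x: 0, y: 0 }); // stay anchored at the origin at zoom 1
}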

Kivy: Depth Order not so in Depth

Now I could be wrong about this but after testing it all day, I have discovered...
When adding a widget and setting the z-index, the value "0" seems to be the magic depth.
If a widget's Z is at 0, it will be drawn on top of everything that's not at 0, Z wise.
It doesn't matter if a widget has a z-index of 99, -999, 10, -2 or whatever... It will not appear on top of a widget whose z-index is set to 0.
It gets more strange though...
Any index less than -2 or greater than 2 seems to create an "index out of range" error. Funny thing is...when I was working with a background and sprite widget, the background's Z was set to 999 and no errors. When I added another sprite widget, that's when the -2 to 2 z-index limitation appeared.
Yeah I know...sounds whacked!
My question is, am I right about "0" being the magic Z value?
If so, creating a simple 2.5D effect, like making a sprite move behind a big rock, will take some unwanted code.
Since you can only set Z when adding a widget, one must remove a widget and immediately add it back with the new Z value.
You'll have to do this with the moving sprite and the overlapping object in question. Hell, I already have that code practically written, but I want to find out from Kivy pros: is there a way to set the z-index without removing and adding a widget?
If not, I'll have to settle for the painful way.
My version of Kivy is 1.9.0
What do you mean by z-order? Drawing order is determined entirely by the order in which widgets are added to the parent, and the index argument to add_widget is just a list index at which the widget will be inserted. The correct way to change drawing order among widgets is to remove and add them (actually you can mess with the canvases manually, but this is the same thing just lower level, and not a better idea).
I found a working solution using basic logic, based on the fact that widgets have to be removed and added again in order to control depth/draw order.
I knew the Main Character widget had to be removed along with the object in question...so I created a Main Character Parent widget, which defines and controls the Main Character, apart from its Graphic widget.
My test involves the Main Character walking in front of a large rock, then behind it...creating a 2.5D effect.
I simply used the "y-" theory along with widget attach and detach code to create the desired effect.
The only thing that caught me off guard was the fact my Graphic widget for my Actor was loading textures. That was a big no no because the fps died.
Simple fix, moved the texture loading to the Main Character Parent widget and the loading is done once for all-time.
PS, if anyone knows how to hide the scrollbars and wishes to share that knowledge, it'll be much appreciated. I haven't looked for an API solution for it yet but I will soon.
Right now I'm just trying to make sure I can do the basic operations necessary for creating a commercial 2.5D game (for handhelds).
I'm a graphic artist and web developer so coming up with lovely visuals won't be an issue. I'm more concerned with what'll be "under the hood" so to say. Hopefully enough, lol.

Tearing graphics in SCNView

Hi, I have an SCNView with some nodes. When rotating, I get some strange tearing; the nodes on top have a higher rendering order, but changing this seems to have no effect.
Is there anything I can do to get rid of the white lines?
It's like it's fighting for position??
As said above, it looks like you're experiencing z-fighting because your colored objects and white object lie in the same plane.
You can avoid this
by slightly offsetting your geometries, but this trick does not work in every situation (the user might notice the gap depending on the point of view)
by changing the renderingOrder of your nodes, but don't forget to tweak the writesToDepthBuffer and readsFromDepthBuffer properties of your materials
When using the second solution mnuages introduced:
node.renderingOrder = 100; // a high value so this node renders after nodes with lower renderingOrder
// disable depth buffer reads and writes for this node's material
node.geometry.firstMaterial.writesToDepthBuffer = NO;
node.geometry.firstMaterial.readsFromDepthBuffer = NO;
This only works when the node's geometry sits at the top of the hierarchy; otherwise it will lead to a weird perspective scenario.

Cocos3D - background shown through meshes

I imported the .pod file created from Blender and the blue background is shown through the eyelash and eyebrow meshes. Does anyone know why I'm encountering this?
WITHOUT additional material (looking normal except the root of the hair).
WITH new green material added to her left shoulder, the eyebrow and eyelash began showing the background
This issue is caused by the order in which the nodes are being rendered in your scene.
In the first model, the hair is drawn first, then the skin, then the eyebrows and eyelashes. In the second model, the hair, eyebrows and eyelashes are all drawn before the skin. By the time the skin under the hair or eyelashes is drawn, the depth buffer indicates that something closer to the camera has already been drawn, and the engine doesn't bother rendering those skin pixels. But because the eyelashes, eyebrows and hair all contain transparency, we end up looking right through them onto the backdrop.
This use of a depth buffer is key to all 3D rendering. It's how the engine knows not to render pixels that are visually occluded by another object; otherwise, all we'd ever see would be the last object rendered. However, when rendering overlapping objects that contain transparency, it's important to get the rendering order correct, so that more distant objects behind closer transparent objects are rendered first.
In Cocos3D, there are several tools available for you to order your transparent objects for rendering:
1. The first, and primary tool, is the drawingSequencer that is managed by the CC3Scene. You can configure several different types of drawing sequencers. The default sequencer is smart enough to render all opaque objects first, then to render the objects that contain transparency in decreasing order of distance from the camera (rendering farther objects first). This works best for most scenes, and in particular where objects are moving around and can move in front of each other unpredictably. Unfortunately, in your custom CC3Scene initialization code (which you sent me per the question comments), you replaced the default drawing sequencer with one that does not sequence transparent objects based on distance. If you remove that change, everything works properly.
2. Objects that are not explicitly sequenced by distance (as in part 1 above) are rendered in the order in which they are added to the scene. You can therefore also define rendering order by ensuring that the objects are added to your scene in the order in which you want them rendered. This can work well for static models, such as your first character (if you change it to add the hair after the skin).
3. CC3Node also has a zOrder property, which allows you to override the rendering order explicitly, so that objects with larger zOrder value are rendered before those with smaller zOrder values. This is useful when you have a static model whose components cannot be added in rendering order, or to temporarily override the rendering order of two transparent objects that might be passing in front of each other. Using the zOrder property does depend on using a drawingSequencer that makes use of it (the default drawing sequencer does).
4. Finally, you can temporarily turn off depth testing or masking when rendering particular nodes, by setting the shouldDisableDepthTest and shouldDisableDepthMask properties to YES on those nodes.

Get size of an expanding circle in a CABasicAnimation at any point in time

I would like to know how I can get the diameter (or radius) of an expanding circle animation at any point in time during the animation. I will end up stopping the animation right after I get the size as well, but I figure I can't stop and remove it from the layer until I get the size of the circle.
For an example of how the expanding circle animation is implemented, it is a variation on the implementation shown in the addGrowingCircleAtPoint:(CGPoint)point method in the answer in the iPhone Quartz2D render expanding circle question.
I have tried to check various values on the layers, animation, etc., but can't seem to find anything. I figure worst case I can attempt to make a best guess by taking the current time into the animation and using that to figure out where it "should" be based on its to and from size states. This seems like overkill for what I would assume is a value that is incrementing someplace I can just get easily.
Update:
I have tried several properties on the presentation layer, including the transform, which never seems to change; the values are always the same regardless of what size the circle is at the time checked.
Okay, here is how you get the current state of an animation while it is animating.
While Rob was close, he left out two key pieces of information.
First, from layer.presentationLayer.sublayers you have to get the layer you are animating on, which for me is the only sublayer available.
Second, from this sublayer you cannot just access the transform directly; you have to use valueForKeyPath to get transform.scale.x. I used x because it's a circle and x and y are the same.
I then use this to calculate the size of the circle at that time, based on the values used to create the arc.
I assume what you're trying to get to is the current CATransform3D, and that from that, you can get to your circle size.
What you want is the layer.presentationLayer.transform. See the CALayer docs for details on the presentationLayer. Also see the Core Animation Rendering Architecture.