Problems with CGAL AABB tree

I am using CGAL's AABB tree to perform point-location queries for my project. I have a Cartesian grid in 3D and a surface immersed inside the grid. I need to find which elements of the grid are outside, inside, or cut by the surface. To do this, I cast a ray from each corner of a cell, count the intersections with the surface, and use the parity of that count to decide whether the corner is inside or outside.
This works fine as long as the corners of the grid do not coincide with nodes on the surface, but I get rubbish results when a corner point of a cell does coincide with a node on the surface. One such scenario is shown in the linked image ('Erroneous result from CGAL').
I tried using Simple_cartesian<double> and Exact_predicates_inexact_constructions_kernel, but the situation did not improve.
It seems that CGAL is very sensitive to floating-point operations here.
How can I solve this issue?
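For context, here is a minimal sketch of the kind of ray-parity test described above; the mesh type, names, and ray direction are assumptions, not the original code.

```cpp
// Sketch only: the ray-parity test described above, written against an
// assumed CGAL::Surface_mesh (not the original code).
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/AABB_tree.h>
#include <CGAL/AABB_traits.h>
#include <CGAL/AABB_face_graph_triangle_primitive.h>
#include <cstddef>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                       Mesh;
typedef CGAL::AABB_face_graph_triangle_primitive<Mesh>       Primitive;
typedef CGAL::AABB_traits<K, Primitive>                      Traits;
typedef CGAL::AABB_tree<Traits>                              Tree;

// Tree built elsewhere: Tree tree(faces(mesh).first, faces(mesh).second, mesh);

// An odd number of ray/surface intersections means the corner is inside.
// This is exactly the test that degenerates when the ray grazes the surface
// or the corner sits on a surface node.
bool inside_by_parity(const Tree& tree, const K::Point_3& corner)
{
  K::Ray_3 ray(corner, K::Vector_3(1, 0, 0));   // arbitrary fixed direction
  std::size_t hits = tree.number_of_intersected_primitives(ray);
  return (hits % 2) == 1;
}
```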

Without seeing the code you wrote, it is hard to say where the problem is. However, the class Side_of_triangle_mesh seems to be exactly what you need.
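For reference, a minimal usage sketch of Side_of_triangle_mesh under the same assumed Surface_mesh setup; note that ON_BOUNDARY reports exactly the coinciding-corner case:

```cpp
// Sketch only: classify grid corners with CGAL::Side_of_triangle_mesh.
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Side_of_triangle_mesh.h>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                       Mesh;

void classify_corners(const Mesh& surface, const std::vector<K::Point_3>& corners)
{
  // Construct once and reuse: it builds an internal AABB tree over the faces.
  CGAL::Side_of_triangle_mesh<Mesh, K> side_of(surface);

  for (const K::Point_3& p : corners) {
    CGAL::Bounded_side side = side_of(p);
    if (side == CGAL::ON_BOUNDARY) {
      // Corner lies exactly on the surface -- the case that broke the parity test.
    } else if (side == CGAL::ON_BOUNDED_SIDE) {
      // inside the surface
    } else {
      // outside (CGAL::ON_UNBOUNDED_SIDE)
    }
  }
}
```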

Related

GODOT: What is an efficient calculation for the AABB of a simple 3D model from a camera's view

I am attempting to come up with a quick and efficient means of translating a 3D mesh into a projected AABB. In the end, I would like to accomplish something similar to Figure 1, wherein only the area of the screen covered by the cube is located inside the bounding box highlighted in red. (If it is at all possible, getting the area even smaller, as highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating the point position on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending towards infinity. I understand it has something to do with tan, but frankly, it is too late for my brain to function anymore.
Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get it in a usable format I would need to project the result into 2d coordinates again.
Using the MeshInstance VisualInstance to create a texture wherein a pixel is white if it contains the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
What I am looking for:
An output that can be passed to a shader telling it where to perform certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to use a texture instead. It could also be rewritten to use polygons, but I am trying to keep the calculations in the shader to a minimum.
Certain solutions I have tried before have worked, slightly, but this one must be robust. The camera interfacing with the 3D object will be able to move completely around and through it, meaning that at times the view will be completely surrounded by the 3D model, with points both in front of and behind the camera.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.
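For reference, a rough engine-agnostic sketch of the first approach (projecting the box corners and taking their min/max), in plain C++ rather than the Godot API: view_to_screen() stands in for camera.unproject_position(), and the pinhole parameters are assumptions. The "trending towards infinity" problem comes from corners behind the camera, so each box edge is clipped against the near plane in view space before projecting.

```cpp
#include <algorithm>
#include <array>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Hypothetical pinhole projection; view space looks down -z.
// In Godot this role would be played by camera.unproject_position().
Vec2 view_to_screen(const Vec3& v, float fx, float fy, float cx, float cy)
{
  return { cx + fx * v.x / -v.z, cy - fy * v.y / -v.z };
}

// c: the 8 box corners in VIEW space, indexed by the bits (x, y, z) of i.
// Returns false if the box is entirely behind the near plane.
bool screen_aabb(const std::array<Vec3, 8>& c, float near_z,
                 float fx, float fy, float cx, float cy,
                 Vec2& out_min, Vec2& out_max)
{
  static const int edges[12][2] = {
    {0,1},{2,3},{4,5},{6,7},    // edges along x
    {0,2},{1,3},{4,6},{5,7},    // edges along y
    {0,4},{1,5},{2,6},{3,7} };  // edges along z

  bool any = false;
  out_min = {  1e30f,  1e30f };
  out_max = { -1e30f, -1e30f };

  auto accumulate = [&](const Vec3& v) {
    Vec2 s = view_to_screen(v, fx, fy, cx, cy);
    out_min.x = std::min(out_min.x, s.x); out_min.y = std::min(out_min.y, s.y);
    out_max.x = std::max(out_max.x, s.x); out_max.y = std::max(out_max.y, s.y);
    any = true;
  };

  for (const auto& e : edges) {
    Vec3 a = c[e[0]], b = c[e[1]];
    bool a_front = a.z < -near_z, b_front = b.z < -near_z;
    if (!a_front && !b_front) continue;          // whole edge behind the camera
    if (a_front != b_front) {                    // clip the edge at the near plane
      float t = (-near_z - a.z) / (b.z - a.z);
      Vec3 hit = { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), -near_z };
      (a_front ? b : a) = hit;
    }
    accumulate(a); accumulate(b);
  }
  return any;
}
```

Clipping the edges (rather than just skipping corners behind the camera) is what keeps the resulting rectangle tight when the camera is inside or very close to the box.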

How to deal with non-manifold vertices for 3d printing

I am working on modeling a shape for 3D printing, but I'm running into a problem with non-manifold vertices. I've ensured that the faces are oriented properly, but it still reports these vertices as non-manifold. Anyone have ideas?
Figured it out. The problem was that I was trying to make the long top, outside, and bottom piece a single face. The vertices where the center section met the inside of the outer part of the design were being overlapped by the edge of that single long face. Instead, I added vertices (points in red) in that spot, at the top, and in both places on the mirrored side of the shape. After that I recreated the faces between the new vertices, checked for non-manifold vertices again, and found none.

Setting control points for Cytoscape edges?

I'm trying to build a descending graph in Cytoscape. I've got the majority done quite well, but now I'm stuck on the edge types. I'd like to use something like the 'segments' curve-style, where my edges have points.
However, instead of being zig-zags, I would like the edges to be constrained to horizontal/vertical lines.
My graph is pretty constrained and the user cannot manipulate the positions. I would like the edges to start at the 'parent' element, go straight down a set amount, then hit a point, turn, head horizontally to the same X as the child, then straight down to the child element.
Right now the lines go straight, and I can add segments easily, but they aren't constrained; they are based on percentages that I won't have access to without doing a bunch of math, which I guess isn't terrible.
Current vs. desired edge routing (images omitted).
If you want specific absolute positions on segments edges, you'll need to convert the absolute co-ordinates to the relative co-ordinates that you specify for segments edges.
If you want a different type of edge for your use case, feel free to propose it in the issue tracker.
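For what it's worth, the conversion is just a projection onto the source-to-target line. Here is a sketch of the math, shown in C++ purely for illustration; in Cytoscape.js the results would feed the segment-weights and segment-distances style properties, and the sign of the distance may need flipping depending on the convention.

```cpp
// Sketch only: convert an absolute bend point into the relative (weight,
// distance) pair used by "segments" edges.  weight is the normalised
// position along the source->target line; distance is the perpendicular
// offset from that line.
#include <cmath>
#include <utility>

struct Pt { double x, y; };

std::pair<double, double> to_segment_coords(Pt source, Pt target, Pt bend)
{
  const double dx = target.x - source.x, dy = target.y - source.y;
  const double px = bend.x   - source.x, py = bend.y   - source.y;
  const double len2 = dx * dx + dy * dy;

  const double weight   = (px * dx + py * dy) / len2;            // 0 at source, 1 at target
  const double distance = (px * dy - py * dx) / std::sqrt(len2); // signed perpendicular offset
  return { weight, distance };
}
```

For the routing described in the question, the two bend points for each edge would be (parent.x, parent.y + drop) and (child.x, parent.y + drop), each converted with that edge's own source and target positions.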

In OpenGL ES 2.0, how can I draw a wireframe of triangles except for the lines on adjacent coplanar faces?

I vaguely remember seeing something in OpenGL (not ES, which was still at v1.0 on the iPhone when I came across this, which is why I never used it) that let me specify which edges of my polygons were considered outlines versus those that made up the interior of faces. So this isn't the same as the outline of the entire model (which I know how to do), but rather the outline of a planar face with all its tris basically blended into one poly. For instance, in a cube made of tris, each face is actually two tris. I want to render the outline of the square, but not the diagonal across the face. Same thing with a hexagon: that takes four tris, but just one outline for the face.
Now yes, I know I can simply test all the edges to see if they share coplanar faces, but I could have sworn I remember seeing, somewhere in the definition of the tri-mesh data, a way to say 'this line outlines a face, whereas this one is inside a face.' That way, when rendering, you could set a flag that basically says 'give me a wireframe, but only the wires around the edges of complete faces, not around the tris that make them up.'
BTW, my target is all platforms that support OpenGL ES 2.0, but my dev platform is iOS. Again, I'm pretty sure this was originally in OpenGL and may have been deprecated once shaders came on the scene, but I can't even find a reference to the feature to check whether that's the case.
The only way I know of now is to have one set of vertices but two separate sets of indices: one for rendering tris, and another for rendering the wireframes of the faces. It's a real pain since I end up hand-coding a lot of it, which, again, I'm 99% sure you can define when rendering the lines.
GL_QUADS, glEdgeFlag and glPolygonMode are not supported in OpenGL ES.
You could use GL_LINES to draw the wireframe. To get hidden-line removal, first draw black filled triangles (with depth testing on) and then draw the edges you are interested in with GL_LINES.
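A sketch of that two-pass approach in OpenGL ES 2.0; the buffer and uniform names are placeholders, and it assumes the shader program and vertex attributes are already bound. The edge index buffer contains only the face-outline edges, i.e. the second index set already described in the question.

```cpp
#include <GLES2/gl2.h>

// tri_ibo / edge_ibo: two index buffers over the SAME vertex buffer,
// one with the triangles, one with only the outline edges you want.
void draw_hidden_line_wireframe(GLuint tri_ibo, GLsizei tri_index_count,
                                GLuint edge_ibo, GLsizei edge_index_count,
                                GLint color_uniform)
{
  glEnable(GL_DEPTH_TEST);

  // Pass 1: fill the depth buffer with the solid mesh, drawn in the
  // background colour (black here).  Polygon offset pushes the fill slightly
  // away from the camera so the lines drawn next don't z-fight with it.
  glEnable(GL_POLYGON_OFFSET_FILL);
  glPolygonOffset(1.0f, 1.0f);
  glUniform4f(color_uniform, 0.0f, 0.0f, 0.0f, 1.0f);
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, tri_ibo);
  glDrawElements(GL_TRIANGLES, tri_index_count, GL_UNSIGNED_SHORT, 0);
  glDisable(GL_POLYGON_OFFSET_FILL);

  // Pass 2: draw only the edges you care about; edges behind the filled
  // geometry fail the depth test, which gives the hidden-line effect.
  glUniform4f(color_uniform, 1.0f, 1.0f, 1.0f, 1.0f);
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, edge_ibo);
  glDrawElements(GL_LINES, edge_index_count, GL_UNSIGNED_SHORT, 0);
}
```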

Which pixels did that drawmesh operation just draw to?

OK, it's a relatively simple problem: I want to know where, in screen space, a particular mesh was just drawn. I plan on then storing that information in a data store of some kind so that when I interact with something in screen space, I can look it up in the register and find the object, i.e. click on the spaceship drawn on the screen and then select it as a target, etc.
I can't find any way of finding out which pixels the mesh was drawn to though...
Alternatively, if I'm missing something obvious regarding what it is that I want to do, please let me know!
There is no easy way to do that directly. But you can use another texture as a render target and render those meshes with unique colors.
For example, you assign #FF0000 to mesh A and also draw it to your second render target in that color. When you read a pixel from the second render target and its color is #FF0000, you know that pixel is part of mesh A. That way you can easily pick the mesh drawn at a certain pixel when you click on it.
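A sketch of the colour-ID bookkeeping, in plain C++; the off-screen render target and the pixel read-back depend on your API (e.g. glReadPixels or the Direct3D equivalent) and are not shown, and the byte layout of the packed colour is an assumption that depends on the render-target format.

```cpp
#include <cstdint>

// Pack a mesh index into an opaque colour value (room for ~16.7M meshes):
// the high byte is forced to 255 (alpha), the low 24 bits carry the index.
inline uint32_t id_to_color(uint32_t mesh_id)
{
  return 0xFF000000u | (mesh_id & 0x00FFFFFFu);
}

// Recover the mesh index from the colour sampled at the clicked pixel.
inline uint32_t color_to_id(uint32_t packed_color)
{
  return packed_color & 0x00FFFFFFu;
}
```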
Why don't you unproject your screen-space coords into 3D space? The only complication I had was that I'd be left with a plane; I could check if a mesh intersected with that plane, but I often had multiple candidates for 'picking'.
Search Google for DirectX Unproject and you'll find various articles discussing it. It can be complicated to implement, but done well it's actually pretty nifty; don't be put off by the people online who say it doesn't work, because it does.
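A sketch of the unproject-based picking this answer describes; unproject() here is a stand-in for whatever your framework provides (e.g. XMVector3Unproject in DirectXMath), and all names are illustrative.

```cpp
// Sketch only: build a picking ray by unprojecting the mouse position at two
// depths, then intersect that ray with candidate meshes.
struct Vec3 { float x, y, z; };

// Placeholder: screen (x, y) plus a depth in [0, 1] -> world-space point.
Vec3 unproject(float screen_x, float screen_y, float depth);

void picking_ray(float mouse_x, float mouse_y, Vec3& origin, Vec3& direction)
{
  Vec3 near_pt = unproject(mouse_x, mouse_y, 0.0f);  // point on the near plane
  Vec3 far_pt  = unproject(mouse_x, mouse_y, 1.0f);  // point on the far plane

  origin    = near_pt;
  direction = { far_pt.x - near_pt.x,
                far_pt.y - near_pt.y,
                far_pt.z - near_pt.z };              // normalise before use

  // Intersect (origin, direction) with each mesh's bounding volume and keep
  // the closest hit; that resolves the "multiple candidates" problem above.
}
```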