I am working on modeling a shape for 3D printing, but I'm running into a problem with non-manifold vertices. I've ensured that the faces are oriented properly, but the checker still reports these vertices as non-manifold. Anyone have ideas?
Figured it out. The problem was that I was trying to make the long top, outside, and bottom piece one single face. The vertices where the center section met the inside of the outer part of the design were being overlapped by the edge of that single long face. Instead, I added vertices (points in red) in that spot, at the top, and in both corresponding places on the mirrored side of the shape. After that I recreated the faces between the new vertices, checked for non-manifold vertices, and got none.
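For anyone hitting the same issue: an edge is manifold only when at most two faces share it, so overlapping edges like the ones described above get flagged as soon as the checker merges coincident vertices. A minimal sketch of that check in C++17, assuming a simple index-based triangle list (not tied to any particular modeling tool):

```cpp
#include <algorithm>
#include <array>
#include <map>
#include <utility>
#include <vector>

// A triangle is three vertex indices; an edge is an ordered pair (lo, hi).
using Tri  = std::array<int, 3>;
using Edge = std::pair<int, int>;

// Flags edges shared by more than two triangles; such edges (and the
// vertices on them) are non-manifold and will upset 3D-print checkers.
std::vector<Edge> nonManifoldEdges(const std::vector<Tri>& tris) {
    std::map<Edge, int> faceCount;
    for (const Tri& t : tris)
        for (int i = 0; i < 3; ++i) {
            int a = t[i], b = t[(i + 1) % 3];
            faceCount[{std::min(a, b), std::max(a, b)}]++;
        }
    std::vector<Edge> bad;
    for (const auto& [edge, count] : faceCount)
        if (count > 2) bad.push_back(edge);  // more than two faces meet here
    return bad;
}
```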
I am attempting to come up with a quick and efficient means of translating a 3D mesh into a projected AABB. In the end, I would like to accomplish something similar to Figure 1, where the area of the screen covered by the cube is enclosed by the bounding box highlighted in red. (If at all possible, getting the area as small as possible, as highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating each point's position on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending towards infinity as points approach the camera plane. I understand it has something to do with the tangent of the field of view, but frankly, it is too late for my brain to function anymore. (There is a sketch of this approach at the end of this post.)
Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get it in a usable format I would need to project the result into 2D coordinates again.
Using the MeshInstance VisualInstance to create a texture in which a pixel is white if it is covered by the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
What I am looking for:
An output that can be passed to a shader telling it where to perform certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to use a texture instead. It could also be rewritten to use polygons, but I am trying to keep calculations in the shader to a minimum.
Certain solutions I have tried before have partially worked, but this one must be robust. The camera interacting with the 3D object will be able to move completely around and through it, meaning at times the view will be completely surrounded by the 3D model, with points both in front of and behind the camera.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.
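Update: for concreteness, here is a minimal sketch of the corner-projection idea from my first attempt, written in plain C++ (in Godot, camera.unproject_position() would play the role of project() here; the focal length, screen center, and camera-space convention are illustrative assumptions). It shows the two things that tripped me up: the perspective divide by depth is what makes pixel positions blow up near the camera plane, and a corner behind the camera means the screen rectangle is unbounded, so the safe fallback is the full screen.

```cpp
#include <algorithm>

struct Projected { float x, y; bool inFront; };

// Minimal pinhole projection. Assumes the point is already in camera
// space with the camera looking down -Z; focalPx is the focal length
// in pixels and (cx, cy) is the screen center (placeholder values).
Projected project(const float p[3], float focalPx, float cx, float cy) {
    Projected r;
    r.inFront = p[2] < 0.0f;             // -Z is in front of the camera
    float z = r.inFront ? -p[2] : 1.0f;  // this divide blows up near z = 0
    r.x = cx + focalPx * p[0] / z;
    r.y = cy - focalPx * p[1] / z;
    return r;
}

struct Rect { float minX, minY, maxX, maxY; };

// Screen-space bounding rectangle of a camera-space AABB: project all
// eight corners and take the min/max. Returns false when any corner is
// behind the camera; the caller should then fall back to the full
// screen, because the true rectangle is unbounded in that case.
bool screenRectOfAabb(const float lo[3], const float hi[3],
                      float focalPx, float cx, float cy, Rect& out) {
    out = { 1e30f, 1e30f, -1e30f, -1e30f };
    for (int i = 0; i < 8; ++i) {
        const float corner[3] = { (i & 1) ? hi[0] : lo[0],
                                  (i & 2) ? hi[1] : lo[1],
                                  (i & 4) ? hi[2] : lo[2] };
        Projected p = project(corner, focalPx, cx, cy);
        if (!p.inFront) return false;
        out.minX = std::min(out.minX, p.x);
        out.minY = std::min(out.minY, p.y);
        out.maxX = std::max(out.maxX, p.x);
        out.maxY = std::max(out.maxY, p.y);
    }
    return true;
}
```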
I am using CGAL's AABB tree to perform point-location queries for my project. I have a Cartesian grid in 3D and a surface immersed inside the grid. I need to find which elements of the grid are outside/inside/cut by the surface. For this, I cast a ray from each corner of the cell, count the number of intersections with the surface, and use the parity of that count to decide whether the corner is inside or outside.
This works fine as long as the corners of the grid do not coincide with nodes on the surface. But I get rubbish results when the corner points of the cell coincide with nodes on the surface. One such scenario is shown in the attached image (Erroneous result from CGAL).
I tried using Simple_cartesian<double> and Exact_predicates_inexact_constructions_kernel but the situation did not improve.
It seems that CGAL is very sensitive to floating-point operations in these degenerate cases.
How can I solve this issue?
Without seeing the code you wrote, it is hard to say where the problem is. However, the class Side_of_triangle_mesh seems to be exactly what you need.
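A minimal sketch of how it could replace the manual ray casting, assuming the surface is loaded into a CGAL::Surface_mesh (the class builds its own AABB tree internally, so construct it once and reuse it for every corner):

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Side_of_triangle_mesh.h>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                      Mesh;

// Classify each grid corner against a closed triangulated surface.
// A corner lying exactly on the surface is reported as ON_BOUNDARY
// instead of producing a wrong inside/outside parity.
std::vector<CGAL::Bounded_side>
classifyCorners(const Mesh& surface, const std::vector<K::Point_3>& corners) {
    CGAL::Side_of_triangle_mesh<Mesh, K> inside(surface);  // builds the tree once
    std::vector<CGAL::Bounded_side> result;
    result.reserve(corners.size());
    for (const K::Point_3& p : corners)
        result.push_back(inside(p));  // ON_BOUNDED_SIDE, ON_UNBOUNDED_SIDE, or ON_BOUNDARY
    return result;
}
```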
I am trying to UV map a cube in Blender 2.74, but even though all six faces are placed on the image on the left-hand side, only two of them actually show on the cube on the right-hand side. I have tried unwrapping in different ways and moving the squares on the left-hand side around, but still only two sides show the image.
When I try more complicated shapes (a tree), none of the faces show the texture, no matter how I unwrap.
However, when I export as a .obj file and draw it with OpenGL, all sides are textured, with the texture coordinates in the places where I UV mapped them.
So my problem is that I don't know what it is going to look like until I actually export the file.
How do I get all faces of an object to show textured as I do the mapping?
Simple solution: duplicate a couple of those light sources (the black sphere outlined with a dashed line that always appears in the default cube startup scene) and place them around your object. Also switch between the viewport shading options.
I'm attempting to calculate vertex normals for various game assets. The normals I calculate are used for "inflating" the model (drawn behind the real model to produce a thick outline).
I currently compute the normal for each face and average all of them (several other questions on Stack Overflow suggest this approach). However, this doesn't work for sharp corners like this one (adjacent faces' normals marked in orange, the normal I'm trying to calculate is outlined in green).
The object looks like a small pedestal and we're looking at the front-left corner. There are three adjoining faces (the bottom face isn't visible; its normal points straight down).
Blender computes an excellent normal that lies squarely in the middle of the three faces' normals; it seems like it somehow calculates a normal that has minimum rotation to each of the three face normals. Blender's normal also doesn't change when the quads are triangulated differently.
Averaging the faces' normals gives me a different normal that points slightly upward in the Z-axis (-0.45, -0.89, +0.08). Inflating my model this way doesn't produce a good outline because the bottom face of the outline is shifted up and doesn't enclose the original model.
I attempted to look at the Blender source code but couldn't find what I was looking for. If anyone can point me to the algorithm in the Blender source, I'd accept that also.
Weight the face normals by the angle each face subtends at the vertex. This is common practice in surface rendering (see the discussion here: http://www.bytehazard.com/code/vertnorm.html) and will ensure that your bottom face is weighted more strongly than the two slanted side faces. I don't know if Blender does it differently, but you should give it a try.
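A minimal, self-contained sketch of angle-weighted vertex normals (gathering the triangles incident to each vertex is assumed to happen elsewhere; requires C++17 for std::clamp):

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

static Vec3  sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(const Vec3& a, float s)     { return {a.x * s, a.y * s, a.z * s}; }
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(const Vec3& a) {
    float len = std::sqrt(dot(a, a));
    return len > 0 ? scale(a, 1.0f / len) : a;
}

// Angle-weighted vertex normal: each incident triangle contributes its
// face normal scaled by the corner angle at the vertex. Each entry of
// `incident` holds the vertex itself first, then the other two triangle
// corners in winding order.
Vec3 angleWeightedNormal(const std::vector<std::array<Vec3, 3>>& incident) {
    Vec3 sum;
    for (const auto& t : incident) {
        Vec3 e0 = normalize(sub(t[1], t[0]));  // edges leaving the vertex
        Vec3 e1 = normalize(sub(t[2], t[0]));
        Vec3 faceNormal = normalize(cross(e0, e1));
        float angle = std::acos(std::clamp(dot(e0, e1), -1.0f, 1.0f));
        sum = add(sum, scale(faceNormal, angle));  // weight by corner angle
    }
    return normalize(sum);
}
```

Because the corner angles of the triangles on one planar quad sum to the quad's corner angle however the quad is split, this weighting is also independent of the diagonal choice, matching the behavior you observed in Blender.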
I vaguely remember seeing something in OpenGL (not ES, which was still at v1.0 on the iPhone when I came across this, which is why I never used it) that let me specify which edges of my polygons were considered outlines versus those that made up the interior of faces. As such, this isn't the same as the outline of the entire model (which I know how to do), but rather the outline of a planar face with all its tris basically blended into one poly. For instance, in a cube made up of tris, each face is actually two tris. I want to render the outline of the square, but not the diagonal across the face. Same thing with a hexagon: that takes four tris, but just one outline for the face.
Now yes, I know I can simply test all the edges to see if they share coplanar faces, but I could have sworn I saw somewhere, when defining the tri mesh data, a way to say 'this line outlines a face whereas this one is inside a face.' That way when rendering, you could set a flag that basically says 'Give me a wireframe, but only the wires around the edges of complete faces, not around the tris that make them up.'
BTW, my target is all platforms that support OpenGL ES 2.0, but my dev platform is iOS. Again, I'm pretty sure this was originally in OpenGL and may have been deprecated once shaders came on the scene, but I can't even find a reference to this feature to check if that's the case.
The only way I know of now is to have one set of vertices but two separate sets of indices: one for rendering tris, and another for rendering the wireframes of the faces. It's a real pain since I end up hand-coding a lot of this, which, again, I'm 99% sure you could define when rendering the lines.
The feature you're remembering is glEdgeFlag, used together with glPolygonMode; however, GL_QUADS, glEdgeFlag and glPolygonMode are not supported in OpenGL ES.
You could use GL_LINES to draw the wireframe: to get hidden-line removal, first draw black filled triangles (with depth testing on) and then draw the edges you are interested in with GL_LINES.
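A minimal sketch of that two-pass setup in OpenGL ES 2.0 (buffer and count names such as triIndexBuffer and edgeIndexBuffer are placeholders, and a shader program is assumed to be bound already). The polygon offset is an extra safeguard so lines lying on the surface win the depth test:

```cpp
// Pass 1: fill the depth buffer with the solid mesh, drawn in the
// background color. The polygon offset pushes the fill slightly back
// so the edge lines drawn on top of it pass the depth test.
glEnable(GL_DEPTH_TEST);
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0f, 1.0f);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, triIndexBuffer);
glDrawElements(GL_TRIANGLES, triIndexCount, GL_UNSIGNED_SHORT, 0);
glDisable(GL_POLYGON_OFFSET_FILL);

// Pass 2: same vertex buffer, but a second index buffer that lists
// only the edges around complete faces (no diagonals), drawn as lines.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, edgeIndexBuffer);
glDrawElements(GL_LINES, edgeIndexCount, GL_UNSIGNED_SHORT, 0);
```

This is essentially the two-index-buffer approach you already described, with the depth pass making the interior edges disappear behind the model.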