Handling .obj files: Why is it possible to have more texture coordinates (vt) than vertices (v)?

I am working on a .obj handler in C++. Importing the data shouldn't be a problem, but I do not understand why it is possible for a .obj file (e.g. one exported from Blender) to have more 'vt' entries than 'v' entries. If someone could explain that to me, I would be very happy!
Thanks!

The number of position, normal and texture coordinates may be different because two vertices may share a coordinate in one space but differ in another.
Think of a box (8 verts) using 6 different rectangular shapes (one per face) in texture space -> that's 6*4=24 texture coordinates.
Edit: A common uv-map for a box looks like the one below (14 texture coordinates). I've annotated three different vertices: A, B and C. Note that in a box every vertex is adjacent to three faces, which has to be true in the uv-map as well. C gets a texture coordinate which is adjacent to three faces, but B has to be duplicated and A tripled to achieve that.
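For concreteness, here is a minimal hand-written .obj fragment (illustrative only, not real Blender output, embedded in a C++ string so it could be fed to a parser). The 'f' entries index the 'v' and 'vt' lists independently, which is exactly why the two lists can differ in length:

    // Illustrative .obj fragment: 4 positions ('v') but 6 texture
    // coordinates ('vt'). Each face corner is written as v-index/vt-index.
    // Both triangles share positions 1 and 4 (the quad's diagonal), but
    // each triangle maps those corners to its own 'vt' entry (a uv seam),
    // so the shared positions need two texture coordinates each.
    const char* kObjFragment = R"(
    v  0 0 0
    v  1 0 0
    v  0 1 0
    v  1 1 0
    vt 0.0 0.0
    vt 1.0 0.0
    vt 1.0 1.0
    vt 0.0 0.5
    vt 1.0 0.5
    vt 0.0 1.0
    f 1/1 2/2 4/3
    f 1/4 4/5 3/6
    )";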

I found the source of the problem. I had prematurely optimized my program, and didn't realize that texture coordinates can outnumber vertex coordinates, because textures are mapped per face rather than per vertex, so each vertex can have many texture coordinates mapped to it. Hopefully someone will learn from my mistakes.
Something I found strange, though, was that initializing an sf::RenderWindow prior to running my .obj parser resulted in no error messages being thrown and the crash being reported in a completely different area than where it was actually happening.
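A minimal sketch of the resulting fix (all names here are my own, and the index pairs are assumed to come straight from the parsed 'f' entries as 0-based values): build one output vertex per unique (position, texture coordinate) combination instead of assuming one texture coordinate per position.

    #include <map>
    #include <utility>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float u, v; };
    struct Vertex { Vec3 pos; Vec2 uv; };

    // positions: the 'v' list; texcoords: the 'vt' list;
    // corners: one (v index, vt index) pair per face corner, 0-based.
    std::vector<Vertex> deindex(const std::vector<Vec3>& positions,
                                const std::vector<Vec2>& texcoords,
                                const std::vector<std::pair<int, int>>& corners)
    {
        std::map<std::pair<int, int>, int> seen; // (v, vt) -> output index
        std::vector<Vertex> out;
        for (const auto& c : corners) {
            // Only emit a new vertex the first time this combination occurs.
            if (seen.emplace(c, (int)out.size()).second)
                out.push_back({positions[c.first], texcoords[c.second]});
        }
        return out;
    }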

There is much confusion caused by the naming in the OBJ format. The lines labelled 'v' are actually defining points, not vertices.
When the faces are defined, these points are converted into vertices, which gives a cube 24 vertices but only 8 points.

Related

Inflation layer not working in certain geometries in ANSYS meshing tool

I am trying to implement an inflation layer between two geometries in my mesh using ANSYS, and I am confused about the procedure.
I found online (see the answer from Gopinath N K on 1/17/22) that in the ANSYS meshing tool you cannot combine face meshing with inflation. So I tried to remove the face sizings, thinking that was what was being referred to, but it gave mixed results which I'll explain below.
Second, I saw here that to create inflation I might need to employ named selections instead of selecting the two geometries (a body and a face), but this also gave mixed results.
As to my mixed results: I successfully got an inflation layer to work for a cylindrical body inside another cylindrical one (see images below). The larger blue cylinder is the body (red arrow), and the green circles are the edges of the small cylinder inside (green arrows).
However, when I try to create an inflation layer between the Rotating Zone (the larger cylinder) and the Stationary Zone, the inflation layer fails. This occurs as soon as I select the rectangular larger body. I didn't bother to finish selecting the other faces, since next to Active it says "No, Invalid Method". The same thing occurs if I select the Structured Zone (the smallest cylinder) and the faces of the wing (an angled plate subtracted from the Structured Zone). So I really have no clue what is causing this, since it seems to occur as soon as I select the outer larger body geometry. Maybe I'm not selecting the right set of faces, or there is something else that is leading to this.
Thank you
So it turns out that the message saying "No, Invalid Method" is referring to a Hex Dominant method I created. There are certain mesh methods that inflation does not like to work with, and I haven't been able to find any reason why. I hope anyone using the ANSYS Mesher finds this helpful.

Undesired, locally very fine mesh when using detect_features

Dear users and developers,
Is there a way to prevent the mesh from becoming extremely fine at certain locations (see attached Fig. 1) when using "CGAL::make_mesh_3" to mesh an object for which "detect_features" is used to automatically detect edges?
I have tried varying the protection angle, but given that the angle of the edges to be detected varies significantly, there are always locations with the issue mentioned. With another mesher I managed to keep this from happening by setting a lower bound on the size of the triangles and tetrahedra making up the mesh, but in CGAL, from what I can tell, I can only specify upper bounds.
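In case it helps, this is roughly what my setup looks like (a minimal sketch; the types follow the CGAL polyhedral-domain examples and the numeric bounds are placeholders):

    #include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
    #include <CGAL/Polyhedron_3.h>
    #include <CGAL/Polyhedral_mesh_domain_with_features_3.h>
    #include <CGAL/Mesh_triangulation_3.h>
    #include <CGAL/Mesh_complex_3_in_triangulation_3.h>
    #include <CGAL/Mesh_criteria_3.h>
    #include <CGAL/make_mesh_3.h>

    typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
    typedef CGAL::Polyhedral_mesh_domain_with_features_3<K>     Mesh_domain;
    typedef CGAL::Mesh_triangulation_3<Mesh_domain>::type       Tr;
    typedef CGAL::Mesh_complex_3_in_triangulation_3<
        Tr, Mesh_domain::Corner_index, Mesh_domain::Curve_index> C3t3;
    typedef CGAL::Mesh_criteria_3<Tr>                           Mesh_criteria;

    C3t3 mesh_object(CGAL::Polyhedron_3<K>& polyhedron) {
        Mesh_domain domain(polyhedron);
        domain.detect_features(60); // protection angle in degrees; I varied this
        // Every size criterion below acts as an upper bound; I have found
        // no corresponding lower bound to stop the local over-refinement.
        Mesh_criteria criteria(CGAL::parameters::edge_size = 0.1,
                               CGAL::parameters::facet_angle = 25,
                               CGAL::parameters::facet_size = 0.1,
                               CGAL::parameters::facet_distance = 0.01,
                               CGAL::parameters::cell_size = 0.1);
        return CGAL::make_mesh_3<C3t3>(domain, criteria);
    }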
Regards,
Tim
Fig. 1: Undesired mesh refinement

MeshLab Face Count

Not sure if I'm supposed to ask this question here, but I'm going to give it a try, since MeshLab doesn't seem to respond quickly to issues on GitHub.
When I imported a mesh consisting of 100 vertices and 75 quad faces, MeshLab somehow recognizes it as having 146 faces. What is the problem here?
Please find here the OBJ file and below the screenshot:
Any help/advice would be greatly appreciated,
Thank you!
Tim
Yes, per the MeshLab homepage, Stack Overflow is now the recommended place to ask questions. GitHub should be reserved for reporting actual bugs.
It is important to understand that MeshLab is designed to work with large unstructured triangular meshes, and while it can do some things with quad and polygonal meshes, there are some limitations and idiosyncrasies.
MeshLab essentially treats all meshes as triangular for most operations; when a polygonal mesh is opened, MeshLab creates "faux edges" that subdivide the mesh into triangles. You can visualize the faux edges by turning "Polygonal Modality" on or off in the edge display pane. If you run "Compute Geometric Measures", it will report edge lengths both with and without the faux edges.

This is why MeshLab reports a higher number of faces for your model: it is counting the faces after triangulation, i.e. including the faux-edge subdivision. Splitting each of your 75 quad faces in two yields roughly double the number of triangular faces, hence the reported 146 (exactly double would be 150; the small difference suggests a few of the 75 faces are not true quads). Unfortunately I don't know of a way to have MeshLab report the number of faces without these faux edges.
Most filters only work on triangular meshes, and if run on a polygonal mesh the faux edges will be used. A few specific filters (e.g. those in the "Polygonal and Quad Mesh" category) work with quads, and for these the faux edges should be ignored. When exporting, if you check "polygonal" the faux edges should be discarded and the mesh will be saved with the proper polygons, otherwise the mesh will be permanently triangulated per the faux edges.
Hope this helps!

Tetrahedralization from surface mesh of thin-walled object

I need to generate a tetrahedral (volume) mesh of a thin-walled object. Think of objects like a bottle or a plastic bowl, etc., which are mostly hollow. The volumetric mesh is needed for an FEM simulation. A surface mesh of the outside surface of the object is available from measurement, using e.g. octomap or KinectFusion, so the vertex spacing is relatively regular. The inner surface of the object can be calculated from the outside surface by moving all points inwards, since the wall thickness is known.
So far, I have considered the following approaches:
1. Create a 3D Delaunay triangulation (which would destroy the existing surface meshes) and then remove all tetrahedra which are not between the two original surfaces. For this check, it might be required to create an implicit surface representation of the two surfaces.
2. Create a 3D Delaunay triangulation and remove tetrahedra which are "inside" (in the hollow space) or "outside" (of the outer surface) with alpha shapes.
3. Close the outside and inside meshes and load them into tetgen as the outer hull and as a hole, respectively (sketched below).
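For approach 3, the tetgen side would look roughly like this (a sketch; building the tetgenio point and facet arrays from the two closed meshes is omitted, and the seed coordinates are placeholders):

    #include "tetgen.h"  // tetgen's C++ interface

    // 'in' is assumed to already hold the merged, closed outer and inner
    // surfaces as a piecewise linear complex.
    void tetrahedralize_shell(tetgenio& in, tetgenio& out) {
        // Mark the hollow interior with one seed point inside the cavity;
        // tetgen then removes every tetrahedron reachable from that seed.
        in.numberofholes = 1;
        in.holelist = new REAL[3];
        in.holelist[0] = 0.0; // x of a point inside the cavity (placeholder)
        in.holelist[1] = 0.0; // y
        in.holelist[2] = 0.0; // z
        // "p" = mesh the PLC, "q" = quality bound on the radius-edge ratio,
        // "a" = maximum tetrahedron volume.
        tetrahedralize(const_cast<char*>("pq1.414a0.01"), &in, &out);
    }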
These approaches seem a bit inelegant to me, and they still have some pitfalls. I would probably need several libraries/tools for them. For 1 and 2, tetgen or another FEM meshing tool would probably still be required to create well-conditioned tetrahedra. Does anyone have a more straightforward solution? I guess this should also be a common problem in 3D printing.
Concerning tools/libraries, I have looked into PCL, meshlab and tetgen so far. They all seem to do only part of the job. Ideally, I would like to use only open source libraries and avoid tools which require manual intervention.
One way is to:
create a triangular mesh of the surface points,
extrude (move) that surface inwards by the given wall thickness, which produces a volume (triangular-prism) mesh of the wall,
split each prism into three tetrahedra (see the sketch below).
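A sketch of the last step (pure index bookkeeping; a0, a1, a2 are a triangle on the outer surface and b0, b1, b2 the same corners moved inwards):

    #include <array>

    using Tet = std::array<int, 4>; // four vertex indices per tetrahedron

    // Standard split of a triangular prism into three tetrahedra.
    // Caveat: neighbouring prisms must cut their shared quad faces along
    // the same diagonal (e.g. by ordering corners consistently), or the
    // resulting tetrahedra will not be conforming.
    std::array<Tet, 3> splitPrism(int a0, int a1, int a2,
                                  int b0, int b1, int b2) {
        return {{ {a0, a1, a2, b0},
                  {a1, a2, b0, b1},
                  {a2, b0, b1, b2} }};
    }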
The problem I see is aspect ratio.
A single layer of tetrahedra will not reproduce shell or bending behavior very well. A single element through the thickness will already require a large mesh. Putting more than one will likely break the bank in order to keep aspect ratios and angles acceptable.
I'd prefer brick or thick shell elements to tetrahedra in this case. I think the modeling will be easier and the behavior will be more faithful to the physics.

transform a path along an arc

I'm trying to transform a path along an arc.
My project is running on OS X 10.8.2 and the painting is done via CoreAnimation in CALayers.
There is a waveform in my project which is painted as a path. There are about 200 sample points, which are mirrored to the bottom side. These are painted 60 times per second and updated to match the song position.
Please ignore the white line, it is just a rotation indicator.
What I am trying to achieve is drawing the waveform along an arc. "Up" should point to the middle. It does not need to go all the way around. The waveform should be painted along the green circle. Please take a look at the sketch provided below.
I'm not sure how to achieve this in a performant manner. There are many points per second that need coordinate correction.
I tried coming up with some ideas of my own:
1) There is the possibility to add linear transformations to paths, which, I think, will not help me here. The only thing I can think of is adding a point, rotating the path with a transformation, adding another point, rotating again, and so on. But I think this would be very slow.
2) Drawing the path into an image and bending it would surely lead to image artifacts.
3) Maybe the best idea would be to precompute sample points on an arc, then save a vector to the center for each of them. Taking the y-coordinates of the waveform, placing them on the sample points and moving them along the vector to the center (sketched below).
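A sketch of what I mean by idea 3 (plain C++ for clarity; kCenter, kRadius and the angular range are placeholders):

    #include <cmath>
    #include <vector>

    struct P { float x, y; };

    const P     kCenter = {512.0f, 512.0f}; // center of the green circle
    const float kRadius = 300.0f;           // baseline radius of the waveform

    // Done once: one unit vector per sample point on the arc.
    std::vector<P> makeDirections(int samples, float startAngle, float endAngle) {
        std::vector<P> dirs(samples);
        for (int i = 0; i < samples; ++i) {
            float t = startAngle + (endAngle - startAngle) * i / (samples - 1);
            dirs[i] = { std::cos(t), std::sin(t) };
        }
        return dirs;
    }

    // Done per frame and per sample: no trigonometry, just one scale.
    // "Up" points to the middle, so the amplitude is subtracted.
    P placeSample(const P& dir, float amplitude) {
        float r = kRadius - amplitude;
        return { kCenter.x + dir.x * r, kCenter.y + dir.y * r };
    }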
But maybe I am just not seeing some kind of easy solution to this problem. Help is really appreciated and fresh ideas are very welcome. Thank you in advance!
IMHO, the most efficient way to go (in terms of CPU usage) would be to use some form of precomputed approach that takes the resolution of the display into account.
Cleverly precomputed values
I would go for the mathematical transformation (from linear to polar) and combine two facts:
There is no need to perform expensive mathematical computation
There is no need to render two points that are too close to each other
I have no ready-made algorithm for you, but you could use a pre-computed sin or cos table, and match the data range to the display size in order to work with integers.
For instance, imagine we have some data ranging from 0 to 1E6 and we need to display the sin value of each point in a rectangle 100 pixels high. We can use a precomputed sin table and work with integers. This way, displaying the sin value of a point would be much quicker. This concept can be refined to get a nicer result.
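A toy version of that idea in C++ (the table size and ranges are arbitrary placeholders):

    #include <cmath>
    #include <vector>

    const int  kTableSize = 4096;    // resolution of the lookup table
    const int  kPixHeight = 100;     // height of the display rectangle
    const long kDataMax   = 1000000; // data ranges from 0 to 1E6

    // Done once: sin values already scaled to integer pixel rows.
    std::vector<int> buildSinTable() {
        std::vector<int> table(kTableSize);
        for (int i = 0; i < kTableSize; ++i) {
            double phase = 2.0 * 3.14159265358979 * i / kTableSize;
            table[i] = (int)std::lround((std::sin(phase) * 0.5 + 0.5)
                                        * (kPixHeight - 1));
        }
        return table;
    }

    // Done per point: one integer multiply, divide and table lookup.
    inline int sinPixel(const std::vector<int>& table, long dataValue) {
        return table[(int)(dataValue * (kTableSize - 1) / kDataMax)];
    }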
Also, there are ways to retain only the significant points of a curve so that the displayed curve still looks like the original (see the Ramer–Douglas–Peucker algorithm on Wikipedia). But I found it to be inefficient for quickly displaying ever-changing data.
Using multicore rendering
You could compute different areas of the curve using multiple cores (can be tricky)
Or you could do the precomputation on several cores, and use one core to finish the job.
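A rough sketch of the multi-core split using std::thread (the per-sample work is a placeholder):

    #include <functional>
    #include <thread>
    #include <vector>

    // Each worker fills its own slice of the output buffer, so no
    // synchronization is needed until the final join.
    void computeSlice(std::vector<float>& ys, int begin, int end) {
        for (int i = begin; i < end; ++i)
            ys[i] = 0.0f; // placeholder for the real per-sample transform
    }

    void computeAll(std::vector<float>& ys, int cores) {
        std::vector<std::thread> workers;
        int chunk = (int)ys.size() / cores;
        for (int c = 0; c < cores; ++c) {
            int begin = c * chunk;
            int end = (c == cores - 1) ? (int)ys.size() : begin + chunk;
            workers.emplace_back(computeSlice, std::ref(ys), begin, end);
        }
        for (auto& w : workers)
            w.join(); // one core then finishes the job (assembly, drawing)
    }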