Assign mirror operation to vertices array - mesh

I understand the math of flipping vertex coordinates in a .obj vertices array to get the mirrored coordinate across a plane or axis. But how do you populate the vertices array for an actual mirror operation (as opposed to just a flip)?

Normally you don't mirror by flipping individual vertex values, but by applying an appropriate mirror transform matrix to every vertex. Keep in mind that a mirror transform reverses the orientation (winding) of the faces, so each triangle's index order usually needs to be reversed as well.
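For example, a reflection across a plane through the origin with unit normal n is the matrix R = I - 2nn^T; applying it to every vertex mirrors the mesh, and flipping each triangle's winding restores the correct face orientation. A minimal sketch in C++, assuming a hypothetical indexed-triangle-mesh layout:

#include <array>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

// Reflect a point across a plane through the origin with unit normal n:
// p' = p - 2 * dot(p, n) * n   (the matrix form is R = I - 2*n*n^T).
Vec3 reflect(const Vec3& p, const Vec3& n)
{
    float d = 2.0f * (p.x * n.x + p.y * n.y + p.z * n.z);
    return { p.x - d * n.x, p.y - d * n.y, p.z - d * n.z };
}

// Mirror the mesh in place: transform every vertex, then swap two
// indices per triangle so the faces keep pointing outward.
void mirrorMesh(std::vector<Vec3>& vertices,
                std::vector<std::array<int, 3>>& triangles,
                const Vec3& planeNormal)
{
    for (Vec3& v : vertices)
        v = reflect(v, planeNormal);
    for (std::array<int, 3>& tri : triangles)
        std::swap(tri[1], tri[2]);
}

With planeNormal = {1, 0, 0} this reduces to negating every x coordinate, which is exactly the "flip" special case; mirroring across a plane not through the origin additionally needs a translation to the origin and back.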

Why isn't there a 3D array image in Vulkan?

In the Vulkan API it was seen as valuable to include a VK_IMAGE_VIEW_TYPE_CUBE_ARRAY, but not a 3D array:
typedef enum VkImageViewType {
    VK_IMAGE_VIEW_TYPE_1D = 0,
    VK_IMAGE_VIEW_TYPE_2D = 1,
    VK_IMAGE_VIEW_TYPE_3D = 2,
    VK_IMAGE_VIEW_TYPE_CUBE = 3,
    VK_IMAGE_VIEW_TYPE_1D_ARRAY = 4,
    VK_IMAGE_VIEW_TYPE_2D_ARRAY = 5,
    VK_IMAGE_VIEW_TYPE_CUBE_ARRAY = 6,
} VkImageViewType;
Each set of 6 layers of the view in a cube array is another cube. I'm actually struggling to think of a use case for a cube array, and I don't really think one would be useful for a 3D array either, but why does the cube get an array type and not the 3D image? How is this cube array even supposed to be used? Is there even a cube array sampler?
Cube maps, cube map arrays, and 2D array textures are, in terms of the bits and bytes of storage, ultimately the same thing. All of these views are created from the same kind of image. You have to specify if you need a layered 2D image to be usable as an array or a cubemap (or both), but conceptually, they're all just the same thing.
Each mipmap level consists of L images of a size WxH, where W and H shrink based on the original size and the current mipmap level. L is the number of layers specified at image creation time, and it does not change with the mipmap level. Put simply, there are a constant number of 2D images per mipmap level. Cubemaps and cubemap arrays require L to be either 6 or a multiple of 6 respectively, but it's still constant.
A 3D image is not that. Each mipmap level consists of a single image of size WxHxD, where W, H, and D shrink based on the original size and current mipmap level. Even if you think of a mipmap level of a 3D image as being D number of WxH images, the number D is not constant between mipmap levels.
These are not the same things.
To have a 3D array image, you would need to have each mipmap level contain L 3D images of size WxHxD, where L is the same for each mipmap level.
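To make the difference concrete, here is a small computation of both mip chains, using an illustrative 256x256 image with 8 layers versus a 256x256x8 3D image:

#include <algorithm>
#include <cstdio>

int main()
{
    // The 2D array image keeps all 8 layers at every mipmap level;
    // the 3D image's depth shrinks along with its width and height.
    int w = 256, h = 256, d = 8;
    for (int level = 0; ; ++level) {
        std::printf("level %d: 2D array = %dx%d, 8 layers; 3D = %dx%dx%d\n",
                    level, w, h, w, h, d);
        if (w == 1 && h == 1 && d == 1)
            break;
        w = std::max(w / 2, 1);
        h = std::max(h / 2, 1);
        d = std::max(d / 2, 1);
    }
    return 0;
}

By level 3 the 2D array still has 8 layers of 32x32, while the 3D image is down to 32x32x1: the per-level layer count L would no longer be constant, which is what a 3D array type would require.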
As for the utility of a cubemap array, it's the same utility you would get out of a 2D array compared to a single 2D image. You use array textures when you need to specify one of a number of images to sample. It's just that in one case, each image is a single 2D image, while in another case, each image is a cubemap.
For a more specific example, many advanced forms of shadow mapping require the use of multiple shadow maps, selected at runtime. That's a good use case for an array texture. You can apply these techniques to point lights through the use of cube maps, but now you need to have the individual images in the array be cube maps, not just single 2D images.
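And yes, there is a cube array sampler: GLSL has samplerCubeArray, and in Vulkan you bind an image view of type VK_IMAGE_VIEW_TYPE_CUBE_ARRAY (creating one requires the imageCubeArray device feature). A minimal sketch of creating such a view, assuming the image was created as a 2D image with VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT and 12 array layers (two cubes):

#include <vulkan/vulkan.h>

// Create a cube-array view over a layered 2D image. Assumes 'image'
// was created with VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT and
// arrayLayers = 12, and that the imageCubeArray feature is enabled.
VkImageView createCubeArrayView(VkDevice device, VkImage image, VkFormat format)
{
    VkImageViewCreateInfo info = {};
    info.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    info.image = image;
    info.viewType = VK_IMAGE_VIEW_TYPE_CUBE_ARRAY;
    info.format = format;
    info.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
    info.subresourceRange.baseMipLevel = 0;
    info.subresourceRange.levelCount = VK_REMAINING_MIP_LEVELS;
    info.subresourceRange.baseArrayLayer = 0;
    info.subresourceRange.layerCount = 12; // 6 faces per cube, 2 cubes

    VkImageView view = VK_NULL_HANDLE;
    vkCreateImageView(device, &info, nullptr, &view);
    return view;
}

In the shader, such a view is sampled with a fourth coordinate selecting the cube, e.g. texture(myCubes, vec4(direction, cubeIndex)).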

How do I modify mesh attributes to send custom information in Blender?

I have a mesh in 3DS format. I imported this mesh into Blender and now I want to export it back to 3DS, but I want to associate a number (say, an id) with each vertex of this mesh. I only need the x, y, and z coordinates in the newly exported 3DS; I don't really care about the normals or the texture coordinates.
So one way of keeping the IDs intact could be to store that number in an unneeded attribute, say the x coordinate of each vertex normal or the first texture coordinate of each vertex.
Here's what I tried with normals:
import bpy
import bmesh

object_reference = bpy.context.active_object

# Build a BMesh copy of the active object's mesh data.
bm = bmesh.new()
bm.from_mesh(object_reference.data)

# Stash each vertex's index in the x component of its normal.
for vert in bm.verts:
    vert.normal[0] = vert.index

# Write the modified data back and release the BMesh.
bm.to_mesh(object_reference.data)
bm.free()
But the normals reverted to their defaults on export. So, how do I do this?
I couldn't figure out a way to set the texture coordinates at all; how can I do that? If I can't, then how can I make the vertex normal hack work? Is there a less hacky way of doing this?

Does CGAL 2D Conforming Mesh support fix points?

In my meshing application I will have to specify fix points within a domain. The idea is that the fix points must also be element points after the domain has been meshed.
Furthermore, the elements around the fix points should be denser. The general concept is that for each fix point there should exist a radius r around that point, such that the mesh size inside r differs from the mesh size outside of it. The mesh sizes inside and outside of r should be specifiable.
Are these two things doable with the CGAL 2D mesh algorithm?
Using your wording, all the input points of the initial constrained Delaunay triangulation will be fix points, because the 2D mesh generator only inserts new points into the triangulation: it never removes any point.
As for the density, you can copy, paste, and modify a criteria class, such as CGAL::Delaunay_mesh_size_criteria_2<CDT> so that the local size upper bound is smaller around the fix points.
Now, the difficulty is how to implement that new size policy. Your criteria class could store a const reference to another Delaunay_triangulation_2 that contains only the fix points you want. Then, for each triangle query, you can call nearest_vertex and check whether the distance between the query point and that nearest fix point is smaller than the radius bound of your circles. For a triangle, you can verify that either for its barycenter only, or for all three points of the triangle. Then, according to the result of that query (or those queries), you can modify the size bound in the code of your copy of CGAL::Delaunay_mesh_size_criteria_2<CDT>.
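A sketch of just that size-selection step, assuming CGAL's 2D triangulation types; the full solution would embed this inside your modified copy of CGAL::Delaunay_mesh_size_criteria_2<CDT>:

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_2.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Delaunay_triangulation_2<K> FixPointTree;
typedef K::Point_2 Point;

// Local size bound at a query point (e.g. a triangle's barycenter):
// size_inside if the query lies within radius r of the nearest fix
// point, size_outside otherwise. 'fix_points' is a (non-empty)
// triangulation containing only the fix points.
double local_size_bound(const FixPointTree& fix_points, const Point& query,
                        double r, double size_inside, double size_outside)
{
    FixPointTree::Vertex_handle v = fix_points.nearest_vertex(query);
    if (CGAL::squared_distance(v->point(), query) < r * r)
        return size_inside;
    return size_outside;
}

Inside the criteria's badness test you would then compare the triangle's size against local_size_bound(...) instead of against the single global bound.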
Yes, no points will be removed from the triangulation by the mesher.
Note, however, that if you insert points too close to a constraint, this will induce refinement of the constraint while it is not Gabriel (i.e. while a constrained subsegment's diametral circle still contains other points).

Meshes in 3DS Max do not have the same number of vertices

I have two meshes with the same number of vertices in 3DS Max, but when I export them, they no longer have the same number of vertices.
- I have to apply a "ProOptimizer" modifier to get the same number of vertices in all meshes.
- I export as ".obj" and uncheck all options except textures, which I keep.
- I import that into Blender and export it as ".fbx".
If I export directly from 3DS Max, the vertex counts are very different between the meshes, and I do not understand why.
How do I get the same number of vertices?
Can anyone help me please? Thank you very much.
Do both meshes have the same smoothing groups applied to the same respective triangles? And are the UV mappings similar?
Both normals (smoothing groups) and UV coordinate distribution can affect how many times a single vertex needs to be split in order to render correctly or be exported to a specific format. For example, one vertex can have many normals (one for each neighboring triangle, e.g. on a box), forcing the vertex to be counted several times. Or, on the contrary, a vertex can have a single normal, making all neighboring faces appear "smoothed" around the vertex.
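A sketch of why the counts diverge: exporters typically deduplicate on the whole (position, normal, uv) tuple rather than on position alone, so a box corner with three face normals becomes three exported vertices. Hypothetical types, plain C++:

#include <cstddef>
#include <map>
#include <tuple>
#include <vector>

// One entry per triangle corner; the same position can appear with
// different normals or UVs at hard edges and UV seams.
struct Corner { float px, py, pz, nx, ny, nz, u, v; };

// Count the vertices an exporter would emit: one per distinct
// (position, normal, uv) tuple, not one per distinct position.
std::size_t exportedVertexCount(const std::vector<Corner>& corners)
{
    typedef std::tuple<float, float, float, float,
                       float, float, float, float> Key;
    std::map<Key, int> unique;
    for (const Corner& c : corners)
        unique[Key(c.px, c.py, c.pz, c.nx, c.ny, c.nz, c.u, c.v)] = 1;
    return unique.size();
}

Two meshes with identical positions can therefore export to different vertex counts whenever their smoothing groups or UV seams differ.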

Reconstruct surface from 3D triangular meshes

I have a 3D model which consists of 3D triangular meshes. I want to partition the meshes into different groups, where each group represents a surface, such as a planar face or a cylindrical surface. This is something like surface recognition/reconstruction.
The input is a set of 3D triangular meshes. The output is the mesh segmentation, one segment per surface.
Is there any library that meets my requirement?
If you want to go into lots of mesh processing, then the Point Cloud Library is a good idea, but I'd also suggest CGAL: http://www.cgal.org for more algorithms and loads of structures aimed at meshes.
Lastly, the problem you describe is most easily solved on your own:
- Enumerate all vertices.
- Enumerate all polygons.
- Create an array of ints with one entry per vertex of your "big" mesh, initialized to 0.
- Create an array of ints with one entry per polygon of your "big" mesh, initialized to 0.
- Initialize a counter to 0.
- For each polygon in your mesh, look at its vertices and the value each one has in the vertex array.
- If the value for every vertex is zero, increment the counter and assign it to each of those entries in the vertex array and to the polygon's entry in the polygon array.
- If not, relabel all vertices and polygons that carry a higher label with the smallest non-zero label among them.
The relabeling can be done quickly with a lookup table.
This might save you lots of issues interfacing your code to some library you're not really interested in; a sketch of that labeling pass follows.
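A sketch of that labeling pass, assuming an indexed triangle list (three vertex indices per triangle); a disjoint-set (union-find) structure expresses the relabeling step directly:

#include <cstddef>
#include <numeric>
#include <vector>

// Disjoint-set over vertex indices; find() uses path halving.
struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(std::size_t n) : parent(n) {
        std::iota(parent.begin(), parent.end(), 0);
    }
    int find(int x) {
        while (parent[x] != x) x = parent[x] = parent[parent[x]];
        return x;
    }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

// Label each triangle with its connected component: triangles that
// share a vertex receive the same label.
std::vector<int> labelComponents(const std::vector<int>& triangles,
                                 std::size_t vertexCount)
{
    UnionFind uf(vertexCount);
    for (std::size_t t = 0; t + 2 < triangles.size(); t += 3) {
        uf.unite(triangles[t], triangles[t + 1]);
        uf.unite(triangles[t], triangles[t + 2]);
    }
    std::vector<int> label(triangles.size() / 3);
    for (std::size_t t = 0; t < label.size(); ++t)
        label[t] = uf.find(triangles[3 * t]);
    return label;
}

Note that this splits the mesh into connected pieces; segmenting a connected mesh into planar or cylindrical patches additionally needs a geometric criterion, such as clustering by face normals.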
You should have a look at the PCL library; it has all these features and much more: http://pointclouds.org/