I have a mesh (close to a polygon soup) with texture coordinates. I'd like to use CGAL for various operations on this mesh, most specifically the Nef_polyhedron class. I can "thicken" each triangle to make sure it's manifold and acceptable as a Nef polyhedron, but I don't know how to carry the texture coordinates through the operations so they are preserved for vertices and interpolated for cut edges.
Also, a single "point" may have multiple texture coordinates, as the texture coordinate is a function of both "face" and "point."
Are there examples or documentation for how to do this? Or does CGAL not support this in a mostly-built-in fashion?
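For concreteness, this is roughly the per-corner layout I have in mind (just an illustrative sketch, the struct names are made up):

```cpp
// Rough sketch of per-corner texture coordinates (illustrative names only):
// each face stores its own UV for each of its three corners, so the same
// geometric point can carry different UVs depending on which face references it.
#include <array>
#include <vector>

struct UV     { float u, v; };
struct Point3 { double x, y, z; };

struct Triangle {
    std::array<int, 3> vertex_index;  // indices into the shared point array
    std::array<UV, 3>  corner_uv;     // one UV per corner of this face
};

struct TexturedSoup {
    std::vector<Point3>   points;
    std::vector<Triangle> triangles;
};
```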
I am quite a beginner with Gmsh and am trying to create a mesh for a hydrodynamic simulation from coastlines. I used splines for the complex coastline for simplicity, but the produced mesh crosses over the coastlines. What should I do to keep the mesh from crossing over the bounding curves?
Image for reference
Your mesh is simply too coarse at the moment. The points of each triangle in the mesh lie on the real geometry/coastline, but the edges connect them linearly and do not care about the geometry in between.
To refine the mesh, you can press Mesh->Refine by Splitting a couple of times to split the current cells. The mesh will get finer and should not violate the geometry border by as much as it does right now.
BUT this only makes the "issue" less obvious to see. On a small enough scale you will always find mesh cells that lie partly "outside" the geometry border. You cannot prevent this with concave geometries like the one you have here. If you have something convex, like a circle, all elements will lie strictly inside the geometry border.
So as a first step, make a finer mesh until you are satisfied with the match between geometry and mesh.
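If you prefer scripting over clicking through the GUI, a rough sketch with the Gmsh C++ API might look like the following ("coastline.geo" is a placeholder for your own geometry file); gmsh::model::mesh::refine() is the API counterpart of Refine by Splitting:

```cpp
// Rough sketch with the Gmsh C++ API (link against the gmsh library).
#include <gmsh.h>

int main(int argc, char** argv) {
    gmsh::initialize(argc, argv);
    gmsh::open("coastline.geo");      // placeholder: your splines / coastline geometry

    gmsh::model::mesh::generate(2);   // initial 2D mesh

    // Split every element a couple of times, like pressing Refine by Splitting.
    gmsh::model::mesh::refine();
    gmsh::model::mesh::refine();

    gmsh::write("coastline_refined.msh");
    gmsh::finalize();
    return 0;
}
```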
I am trying to use MATLAB's camera calibrator to calibrate an infrared camera. I was able to get the intrinsic matrix by just feeding around 100 images to the calibrator. But I'm struggling with how to get the extrinsic matrix [R|t].
The extrinsic matrix maps the world frame to the camera frame, so in theory, when the camera (or the object) is moving, there will be many extrinsic matrices.
In the picture below, if the intrinsic matrix is determined using 50 images, then there are 50 extrinsic matrices, one corresponding to each image. Am I correct?
You are right. Usually, a by-product of an intrinsic calibration is the extrinsic matrix for each pattern observed; this is mostly used to draw the patterns with respect to the camera as in the picture you posted.
What you usually do afterwards is define some external reference frame that makes sense for your application, also known as the 'world' reference frame, and compute the pose of the camera with respect to it. That's the extrinsic matrix you always hear about.
For this, you:
Define the reference frame and take some points with known 3D coordinates on it; this can be a grid drawn on the floor, for example.
Take a picture of the 3D points with the calibrated camera and get the list of corresponding 2D (image) coordinates of the points.
Use a pose estimation function that takes the camera intrinsic parameters, the 3D points, and the corresponding 2D image points. I am more familiar with OpenCV, but the MATLAB function that seems to do the job is: https://www.mathworks.com/help/vision/ref/estimateworldcamerapose.html
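Since I'm more at home in OpenCV, here is a rough C++ sketch of that last step (all point values and intrinsics below are placeholders; the MATLAB function above plays the same role):

```cpp
// Rough sketch of step 3 with OpenCV (point values are placeholders):
// given known 3D points in your world frame and their 2D image projections,
// solvePnP returns the camera pose with respect to that world frame.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

int main() {
    // 3D points in the world frame you defined (e.g. a grid drawn on the floor).
    std::vector<cv::Point3f> worldPoints = {
        {0.0f, 0.0f, 0.0f}, {1.0f, 0.0f, 0.0f}, {1.0f, 1.0f, 0.0f}, {0.0f, 1.0f, 0.0f}
    };
    // Their measured 2D locations in the image taken with the calibrated camera.
    std::vector<cv::Point2f> imagePoints = {
        {320.0f, 240.0f}, {420.0f, 245.0f}, {415.0f, 340.0f}, {325.0f, 335.0f}
    };

    // Intrinsics from your calibration (fx, fy, cx, cy) and distortion coefficients.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                           0, 800, 240,
                                           0,   0,   1);
    cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);

    cv::Mat rvec, tvec;
    cv::solvePnP(worldPoints, imagePoints, K, dist, rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R);  // rotation vector -> 3x3 rotation matrix
    // [R | tvec] is the extrinsic matrix mapping world coordinates to camera coordinates.
    return 0;
}
```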
I'm using CGAL's 2D Delaunay triangulation to define a terrain. I can't use the terrain class because my triangulation has constraints, and constraints can't be used on terrains or 3D triangulations (at least that's what I've seen so far, since there are no terrain or 3D triangulation classes that accept constraints). Because of the constraints I'm using the make_conforming_Delaunay_2 function to refine the triangulation. I have a problem when using this function: everything compiles and runs OK, but the problem is with the results:
The function is inserting some points that lie outside any existing triangle face. Is this correct behavior?
Since it is a terrain, I need the elevation of these inserted points. Is there any way to make CGAL tell me which triangle face each inserted point lies in, so that I can calculate its elevation? I expected the points to lie only inside existing triangle faces.
Is there any way, even in a 2D triangulation, to use 3D points? (So that the interpolated points would come with the elevation already calculated.)
You can use the class CGAL::Projection_traits_xy_3 like in this example.
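In case the link goes stale, here is a minimal sketch of the idea (not the linked example verbatim, and the coordinates are made up): the triangulation is computed in the xy-plane, but each vertex keeps its full 3D point, so the elevation travels along.

```cpp
// Minimal sketch: a constrained Delaunay triangulation over 3D points,
// triangulated in the xy-plane while keeping z (the elevation) on each vertex.
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Projection_traits_xy_3.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>
#include <iostream>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Projection_traits_xy_3<K>                     Gt;
typedef CGAL::Constrained_Delaunay_triangulation_2<Gt>      CDT;
typedef K::Point_3                                          Point;

int main() {
    CDT cdt;
    // Insert 3D points; only x and y drive the triangulation, z is carried along.
    CDT::Vertex_handle va = cdt.insert(Point(0, 0, 10));
    CDT::Vertex_handle vb = cdt.insert(Point(2, 0, 12));
    cdt.insert(Point(0, 2, 11));
    cdt.insert(Point(2, 2, 15));
    cdt.insert_constraint(va, vb);  // a constrained edge

    for (CDT::Finite_vertices_iterator v = cdt.finite_vertices_begin();
         v != cdt.finite_vertices_end(); ++v)
        std::cout << v->point() << "\n";  // prints "x y z" per vertex
    return 0;
}
```

From there, a point location query (cdt.locate(...)) tells you which face a query point projects into, and you can interpolate the elevation from that face's three vertices.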
I want to display mesh models in OpenGL ES 2.0 in a way that clearly shows the actual mesh, so I don't want smooth shading across each primitive/triangle. The only two options I can think of are:
Each triangle has its own set of normals, all perpendicular to the triangle's surface (but then I guess I can't share vertices among the triangles with this option)
Indicate triangle/primitive edges using black lines and stick to the normal way with shared vertices and one normal for each vertex
Does it have to be like this? Why can't I simply read in primitives and don't specify any normals and somehow let OpenGL ES 2.0 make a flat shade on each face?
Similar Stack Overflow question, but with no suggested solution.
Because in order to have shading on your mesh (any shading, smooth or flat), you need a lighting model, and OpenGL ES can't guess it. There is no fixed pipeline in GL ES 2, so you can't use any built-in function that will do the job for you (using a built-in lighting model).
In flat shading, the whole triangle is drawn with the same color, computed from the angle between its normal and the direction to the light source (yes, you also need a light source, which could simply be the origin of the perspective view). This is why you need at least one normal per triangle.
Then, a GPU works in a very parallelized way, processing several vertices (and then fragments) at the same time. To be efficient, it can't share data among vertices. This is why you need to replicate normals for each vertex.
Also, your mesh can't share vertices among triangles anymore, as you said, because the triangles share only the vertex position, not the vertex normal. So you need to put 3 * NbTriangles vertices in your buffer, each one having one position and one normal. Nor can you benefit from triangle strips/fans, because none of your faces will have a vertex in common with another one (again, because of the different normals).
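To make that concrete, here is a rough CPU-side sketch (the structs and names are made up, not any particular engine's API) of building the non-indexed buffer with one normal per face, replicated across that face's three vertices:

```cpp
// Rough sketch: build a non-indexed vertex buffer where all three vertices
// of a triangle carry the same face normal, which is what flat shading needs.
#include <vector>
#include <cmath>
#include <cstddef>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Interleaved layout, suitable for two glVertexAttribPointer calls.
struct Vertex { Vec3 position; Vec3 normal; };

// positions: 3 entries per triangle, already de-indexed.
std::vector<Vertex> buildFlatShadedBuffer(const std::vector<Vec3>& positions) {
    std::vector<Vertex> out;
    out.reserve(positions.size());
    for (std::size_t i = 0; i + 2 < positions.size(); i += 3) {
        Vec3 a = positions[i], b = positions[i + 1], c = positions[i + 2];
        Vec3 n = normalize(cross(sub(b, a), sub(c, a)));  // one normal for the whole face
        out.push_back({ a, n });
        out.push_back({ b, n });
        out.push_back({ c, n });
    }
    return out;
}
```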
I'm creating heightmaps using Fractal Brownian Motion. I'm then coloring it based on the heights and mapping it to a sphere. My problem is that the heightmap doesn't wrap seamlessly. I've used the Diamond Square algorithm and it's pretty easy to make things seamless using it, but I can't seem to figure out how to do it with fBm and I seem to be having trouble finding an explanation for it on the web.
To clarify, by "seamless", I mean that when I map it to a sphere, it creates a seamless map on the sphere.
Instead of calculating the heightmap per pixel on the heightmap, calculate the heightmap in 3D space based on each point on the sphere and then map that to an image pixel. You're going to have trouble wrapping a 2D, rectangular heightmap like that onto a sphere without getting ugly results at the poles unless you start your calculations from the sphere.
fBM generalizes to 3 dimensions, so given a point on the sphere you can get the height at that point, and then you can do the math to map that value to where it should be stored in the heightmap image.
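As a rough sketch of that idea (fbm3 stands in for your 3D fBm implementation, and the equirectangular mapping is just one possible choice):

```cpp
// Rough sketch: evaluate 3D fBm on the sphere itself, then store the value
// at the corresponding heightmap pixel, so the result wraps seamlessly.
#include <cmath>
#include <vector>

float fbm3(float x, float y, float z);  // assumed: your 3D fractal Brownian motion

std::vector<float> buildSeamlessHeightmap(int width, int height) {
    std::vector<float> heightmap(width * height);
    const float pi = 3.14159265358979f;
    for (int row = 0; row < height; ++row) {
        // latitude from +pi/2 (north pole) down to -pi/2 (south pole)
        float lat = pi * (0.5f - (row + 0.5f) / height);
        for (int col = 0; col < width; ++col) {
            // longitude wraps around the sphere, so the left and right edges meet
            float lon = 2.0f * pi * (col + 0.5f) / width;
            float x = std::cos(lat) * std::cos(lon);
            float y = std::cos(lat) * std::sin(lon);
            float z = std::sin(lat);
            heightmap[row * width + col] = fbm3(x, y, z);
        }
    }
    return heightmap;
}
```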
Or you could use one of the traditional map projections. A cylindrical projection, (x, y) -> (x, sin y), would give you a seam along just one meridian, which you could rotate to the back. Or you could "antialias" the edge by one means or another.
With a stereographic projection, (x, y, z) -> (x/(z+1), y/(z+1)), there's only one singular point (the projection point itself).