I'm currently implementing a mesh extrusion algorithm for planar shapes; let's assume a rectangle for simplicity.
When I extrude this rectangle, I create four new sides (resulting in 8 new triangles) and a new bottom face for the 3D shape.
This works fine when I duplicate all vertices so that my final cube has 24 of them. But I'd like to avoid these extra vertices now so that I have only 8 vertices. Unfortunately, in this case I do not know how to calculate the UV coordinates and I keep getting wrong results as shown in the image below.
The correct result would look like this (with duplicated vertices):
My first question is: Is it possible to generate a good UV map with just 8 vertices (and hence 8 UV coordinates) for a cube?
Second is: How? :)
Thanks for your help.
I have 2 input images of a plane where the (static) camera is at an unknown angle. I managed to extract edges and points of interest using OpenCV, but I'm stuck on calculating real angles from the images.
From image #1 I need to calculate the camera angle relative to the plane. I know 3 points on the plane that form an equilateral triangle (60-degree angles). The center point of the triangle is also the center point of the plane. However, the plane's center point is covered by another object in the image.
From image #2 I need to calculate the real angle of an object (point C) on the plane relative to the line from one of the 3 points to the plane's center point (line A to B).
How can I calculate the real angle β as if the camera had no angle towards the plane?
Update:
I was looking for a solution for my problem at https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html
There are a number of functions, but I couldn't figure out how to apply them to my specific problem.
There is a function to calculate a homography from keypoints in two images, but I do not have images of the scene from different camera angles.
Then there is cv::findHomography, which finds a perspective transformation between two planes. I know 4 source points, but what are my 4 destination points?
I also looked at cv::solvePnP and cv::solvePnPRansac, but again I only know 4 source points on the plane; I don't know their corresponding 3D points.
What am I missing?
@Micka: Thanks for your input. I have 4 points for processing the image (the 3 static base points + the object at point C). I can assume these points are all located on the plane at z = 0. However, I have neither coordinates on a second plane nor the (x, y) of the corresponding 3D points.
Your description does not explicitly say it, but if you can assume that segment AB bisects the base of the triangle, then you have 4 point correspondences between the plane and its image, so you can use cv::findHomography.
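For illustration, here is a rough sketch of that approach in C++ with OpenCV. Every coordinate below is a placeholder, and treating β as the angle at A between the lines A-B and A-C is my assumption, not something taken from your setup:

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <iostream>

    int main()
    {
        // Four known points on the plane (any metric plane coordinate system, z = 0)
        // and where they appear in the image, in the same order. Placeholders only;
        // substitute your real correspondences.
        std::vector<cv::Point2f> planePts = { {0.f, 0.f}, {10.f, 0.f}, {10.f, 10.f}, {0.f, 10.f} };
        std::vector<cv::Point2f> imagePts = { {152.f, 310.f}, {422.f, 295.f}, {466.f, 88.f}, {134.f, 72.f} };

        // Homography mapping image coordinates to undistorted plane coordinates.
        cv::Mat H = cv::findHomography(imagePts, planePts);

        // Point A (one triangle corner) and point C (the object) as detected in the image.
        std::vector<cv::Point2f> detected = { {200.f, 250.f}, {380.f, 140.f} };  // placeholders
        std::vector<cv::Point2f> onPlane;
        cv::perspectiveTransform(detected, onPlane, H);
        cv::Point2f A = onPlane[0], C = onPlane[1];

        // Point B (the plane center) is hidden in the image, but its position on the
        // plane itself is known a priori (placeholder value here).
        cv::Point2f B(5.f, 5.f);

        // Angle beta at A between the rays A->B and A->C, measured in the real plane.
        cv::Point2f ab = B - A, ac = C - A;
        double beta = std::acos(ab.dot(ac) / (std::sqrt(ab.dot(ab)) * std::sqrt(ac.dot(ac))));
        std::cout << "beta = " << beta * 180.0 / CV_PI << " degrees" << std::endl;
        return 0;
    }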
Thanks for reading this question. My title is basically what I'm trying to achieve. I generated a surface mesh using Poisson_surface_reconstruction_3 (CGAL), but I can't figure out how to map the node identities of the resulting surface mesh back to my original point set.
The output of my Poisson surface generation is produced by the following lines:
CGAL::facets_in_complex_2_to_triangle_mesh(c2t3, output_mesh);
out << output_mesh;
In my output file, there are some x y z coordinates, followed by lines of 3 integers each, which I think indicate which nodes form a triangle. The problem is that the output points do not correspond to my initial point set: none of the x y z values match any of my original points. Yet I'm trying to figure out which points in my original point set form the Delaunay triangles.
Could someone suggest how I can do this in CGAL?
Many thanks.
The Poisson reconstruction algorithm consists of meshing an implicit function that approximately fits your input points. In practice, this means that your input points will not belong to the set of vertices of the output surface, and won't even lie exactly on the triangles of the output surface. However, they should not be too far from the output surface (unless some parts of your sampling are really sparse).
What you can do to locate your input points relative to the output surface is use the function closest_point_and_primitive() from the AABB tree class.
Here is an example of how to build the tree from a mesh.
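For example, a minimal sketch along the lines of the CGAL AABB tree examples, assuming the output is a CGAL::Surface_mesh (exact headers and traits names can vary a bit between CGAL versions):

    #include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
    #include <CGAL/Surface_mesh.h>
    #include <CGAL/AABB_tree.h>
    #include <CGAL/AABB_traits.h>
    #include <CGAL/AABB_face_graph_triangle_primitive.h>
    #include <iostream>

    typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
    typedef K::Point_3 Point;
    typedef CGAL::Surface_mesh<Point> Mesh;
    typedef CGAL::AABB_face_graph_triangle_primitive<Mesh> Primitive;
    typedef CGAL::AABB_traits<K, Primitive> Traits;
    typedef CGAL::AABB_tree<Traits> Tree;

    int main()
    {
      Mesh output_mesh;
      // ... fill output_mesh, e.g. with facets_in_complex_2_to_triangle_mesh(c2t3, output_mesh);

      // Build the AABB tree over the faces of the reconstructed surface.
      Tree tree(faces(output_mesh).first, faces(output_mesh).second, output_mesh);
      tree.accelerate_distance_queries();

      // For each of your original input points, query the closest surface point
      // and the face (output triangle) it lies on.
      Point query(0.0, 0.0, 0.0);  // replace with one of your input points
      Tree::Point_and_primitive_id pp = tree.closest_point_and_primitive(query);
      Point closest = pp.first;           // nearest point on the reconstructed surface
      Mesh::Face_index face = pp.second;  // which output triangle the input point maps to
      std::cout << "closest point: " << closest << std::endl;
      return 0;
    }

The face descriptor in pp.second is what lets you relate each of your input points to a triangle of the output mesh.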
I need to implement a Minkowski sum function that can return the Minkowski sum of either 2 circles, 2 convex polygons, or a circle and a convex polygon. I found this thread that explained how to do it for two convex polygons, but I'm not sure how to do it for a circle and a polygon. Also, how would I even represent the answer?! I'd like the algorithm to run in O(n) time, but beggars can't be choosers.
Circle + circle is trivial: just add the center points and add the radii. Circle + ConvexPoly is nearly as simple: move each segment perpendicularly outward by the circle radius, and connect adjacent segments with circular arcs centered at the original poly vertices. Then translate the whole thing by the circle's center point.
As for how you represent the answer: Well, it depends on what you want to do with it. You could convert it to a NURBS if you just want to draw it with a vector drawing library. You could approximate the circular arcs with polylines if you just want a polygonal approximation. Or you might store it as is -- "this polygon, expanded by such-and-such a radius". That would be the best choice for things like raycasting, for instance. Or as a compromise, you could connect adjacent segments linearly instead of with circular arcs, and store it as the union of the (new) convex polygon and a list of circles at the vertices.
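To make the two easy cases concrete, here's a minimal sketch using that "shape plus offset radius" representation; the types and names are purely illustrative, not from any particular library:

    #include <vector>

    struct Vec2       { double x, y; };
    struct Circle     { Vec2 center; double radius; };
    struct ConvexPoly { std::vector<Vec2> verts; };   // counter-clockwise

    // A convex polygon "inflated" by a radius: the exact result of poly (+) circle.
    struct RoundedPoly { ConvexPoly poly; double radius; };

    // Circle (+) Circle: add the centers, add the radii.
    Circle minkowski(const Circle& a, const Circle& b)
    {
        return { { a.center.x + b.center.x, a.center.y + b.center.y }, a.radius + b.radius };
    }

    // Circle (+) ConvexPoly: translate the polygon by the circle center and remember
    // the radius; the offset edges and corner arcs can be generated lazily when needed.
    RoundedPoly minkowski(const Circle& c, const ConvexPoly& p)
    {
        RoundedPoly out{ p, c.radius };
        for (Vec2& v : out.poly.verts) { v.x += c.center.x; v.y += c.center.y; }
        return out;
    }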
Oh, about ConvexPoly + ConvexPoly. That's the trickiest one, but still straightforward. The basic idea is that you take the list of segment vectors for each polygon (starting from some particular extremal point, like the point on each poly with the lowest X coordinate), then merge the two lists together, keeping it sorted by angle. Sum the two points you started with, then apply each vector from the merged vector list to produce the other points.
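Here's a rough sketch of that edge-merging step, assuming both polygons are convex, counter-clockwise, and free of repeated points (I start from the lowest point rather than the lowest X; any consistent extremal point works):

    #include <algorithm>
    #include <vector>

    struct Vec2 { double x, y; };

    static double cross(const Vec2& a, const Vec2& b) { return a.x * b.y - a.y * b.x; }
    static Vec2 operator+(const Vec2& a, const Vec2& b) { return { a.x + b.x, a.y + b.y }; }
    static Vec2 operator-(const Vec2& a, const Vec2& b) { return { a.x - b.x, a.y - b.y }; }

    // Rotate the vertex list so it starts at the lowest point (lowest y, then lowest x).
    static void startAtLowest(std::vector<Vec2>& p)
    {
        auto lowest = std::min_element(p.begin(), p.end(), [](const Vec2& a, const Vec2& b) {
            return a.y < b.y || (a.y == b.y && a.x < b.x);
        });
        std::rotate(p.begin(), lowest, p.end());
    }

    std::vector<Vec2> minkowskiSum(std::vector<Vec2> P, std::vector<Vec2> Q)
    {
        startAtLowest(P);
        startAtLowest(Q);
        // Duplicate the first two vertices so edge indexing can wrap around.
        P.push_back(P[0]); P.push_back(P[1]);
        Q.push_back(Q[0]); Q.push_back(Q[1]);

        std::vector<Vec2> result;
        std::size_t i = 0, j = 0;
        while (i < P.size() - 2 || j < Q.size() - 2) {
            result.push_back(P[i] + Q[j]);
            double c = cross(P[i + 1] - P[i], Q[j + 1] - Q[j]);
            if (c >= 0 && i < P.size() - 2) ++i;  // advance P while its edge angle is not larger
            if (c <= 0 && j < Q.size() - 2) ++j;  // advance Q while its edge angle is not larger
        }
        return result;
    }

This runs in O(n + m) for polygons with n and m vertices, since each edge of either polygon is consumed exactly once.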
I have a number of 2D (possibly intersecting) polygons which I rendered on the screen using OpenGL ES. All the polygons are completely contained within the screen. What is the fastest way to find the percentage of the total screen area covered by the union of these polygons? Speed matters here because the coverage percentage has to be updated immediately whenever a polygon is shifted.
Currently, I am representing each polygon as a 2D array of booleans. Using a point-in-polygon function (from a geometry package), I sample each point (x,y) on the screen to check if it belongs to the polygon, and set polygon[x][y] = true if so, false otherwise.
After doing that to all the polygons in the screen, I loop through all the screen pixels again, and check through each polygon array, counting that pixel as "covered" if any polygon has its polygon[x][y] value set to true.
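Condensed, what I'm doing is roughly equivalent to the following (simplified with illustrative types, and short-circuiting as soon as any polygon covers a pixel instead of keeping one boolean array per polygon):

    #include <vector>

    struct Pt { double x, y; };
    using Polygon = std::vector<Pt>;

    // Standard even-odd ray-casting test (stands in for the geometry-package function).
    bool pointInPolygon(const Polygon& poly, double x, double y)
    {
        bool inside = false;
        for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
            if (((poly[i].y > y) != (poly[j].y > y)) &&
                (x < (poly[j].x - poly[i].x) * (y - poly[i].y) / (poly[j].y - poly[i].y) + poly[i].x))
                inside = !inside;
        }
        return inside;
    }

    // Fraction of screen pixels covered by at least one polygon.
    double coverageRatio(const std::vector<Polygon>& polys, int width, int height)
    {
        long covered = 0;
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                for (const Polygon& p : polys)
                    if (pointInPolygon(p, x + 0.5, y + 0.5)) { ++covered; break; }
        return static_cast<double>(covered) / (static_cast<double>(width) * height);
    }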
This works, but the performance is not ideal as the number of polygons increases. Are there any better ways to do this, using open-source libraries if possible? I thought of:
(1) Unioning the polygons to get one or more non-overlapping polygons, then computing the area of each using the standard area-of-polygon formula and summing them up. I'm not sure how to get this to work, though.
(2) Using OpenGL somehow. Imagine that I am rendering all these polygons with a single color. Is it possible to count the number of pixels on the screen buffer with that particular color? That sounds like it would be a really nice solution.
Any efficient means for doing this?
If you know the background color and all polygons have other colors, you can read back all pixels from the framebuffer with glReadPixels() and simply count the pixels whose color differs from the background.
If the first condition is not met, you may consider creating a custom framebuffer and rendering all polygons with the same color (for example (0.0, 0.0, 0.0) for the background and (1.0, 0.0, 0.0) for the polygons). Then read back the resulting framebuffer and calculate the mean of the red channel across the whole screen.
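For example, a rough sketch of the read-back-and-count step for OpenGL ES 2.0, assuming the polygons were rendered in red on a black background and the viewport size is known (call it once the frame has finished rendering):

    #include <GLES2/gl2.h>
    #include <vector>

    // Fraction of framebuffer pixels covered by at least one polygon.
    double coveredFraction(int width, int height)
    {
        std::vector<unsigned char> pixels(static_cast<std::size_t>(width) * height * 4);
        // GL_RGBA / GL_UNSIGNED_BYTE is the read-back combination guaranteed by ES 2.0.
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

        long covered = 0;
        for (std::size_t i = 0; i < pixels.size(); i += 4)
            if (pixels[i] > 0)      // red channel non-zero => pixel belongs to some polygon
                ++covered;
        return static_cast<double>(covered) / (static_cast<double>(width) * height);
    }

Note that glReadPixels stalls the pipeline, so for continuous updates you may want to read back a downscaled render target instead of the full screen.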
If you want to get non-overlapping polygons, you can run a line intersection algorithm. A simple variant is the Bentley–Ottmann algorithm, but even faster algorithms of O(n log n + k) (with n vertices and k crossings) are possible.
At an intersection point, you can unify two polygons by creating a vertex there that connects both of them. Then you follow the vertices of one polygon that lie inside the other polygon (you can determine which direction to go using your point-in-polygon function) and remove those vertices and edges until you reach the outside of the polygon again. There you repair the polygon by creating a new vertex at the second intersection of the two polygons.
Unless I'm mistaken, this can run in O(n log n + k * p) time where p is the maximum overlap of the polygons.
After unifying the polygons, you can use an ordinary polygon area function to calculate the exact area.
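The area function is just the shoelace formula, e.g.:

    #include <cmath>
    #include <vector>

    struct Pt { double x, y; };

    // Shoelace formula for a simple (non-self-intersecting) polygon.
    double polygonArea(const std::vector<Pt>& poly)
    {
        double twiceArea = 0.0;
        for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++)
            twiceArea += poly[j].x * poly[i].y - poly[i].x * poly[j].y;
        return std::abs(twiceArea) * 0.5;
    }

Sum this over the unified, non-overlapping polygons and divide by the screen area to get the coverage percentage.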
I think that trying to calculate the area of polygons by counting pixels is too complicated and sometimes inaccurate. You can see something similar in the Stack Overflow answer about calculating the area covered by a polygon, and if you construct regular polygons, see area of a regular polygon.
I'm trying to implement a geometry templating engine. One of the parts is taking a prototypical polygonal mesh and aligning an instantiation with some points in the larger object.
So, the problem is this: given 3D point positions for some (perhaps all) of the verts in a polygonal mesh, find a scaled rotation that minimizes the difference between the transformed verts and the given point positions. I also have a center point that can remain fixed, if that helps. The correspondence between the verts and the 3D locations is fixed.
I'm thinking this could be done by solving for the coefficients of a transformation matrix, but I'm a little unsure how to build the system to solve.
An example of this is a cube. The prototype would be the unit cube, centered at the origin, with vert indices:
4----5
|\    \
| 6----7
| |    |
0 |  1 |
 \|    |
  2----3
An example of the vert locations to fit:
v0: 1.243,2.163,-3.426
v1: 4.190,-0.408,-0.485
v2: -1.974,-1.525,-3.426
v3: 0.974,-4.096,-0.485
v5: 1.974,1.525,3.426
v7: -1.243,-2.163,3.426
So, given that prototype and those points, how do I find the single scale factor, and the rotation about x, y, and z that will minimize the distance between the verts and those positions? It would be best for the method to be generalizable to an arbitrary mesh, not just a cube.
Assuming you have all points and their correspondences, you can fine-tune your match by solving the least squares problem:
minimize Norm(T*V-M)
where T is the transformation matrix you are looking for, V are the vertices to fit, and M are the vertices of the prototype. Norm refers to the Frobenius norm. M and V are 3xN matrices where each column is a 3-vector holding a prototype vertex and the corresponding vertex in the fitting vertex set, respectively. T is a 3x3 transformation matrix. The transformation matrix that minimizes the mean squared error is then M*transpose(V)*inverse(V*transpose(V)). The resulting matrix will in general not be orthogonal (you wanted one with no shear), so you can solve a matrix Procrustes problem to find the nearest orthogonal matrix with the SVD.
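For concreteness, here is a rough sketch of both steps. I'm using Eigen purely for illustration (it's not required by the approach), and I assume both point sets are already expressed relative to the fixed center point, with the notation above (columns of V are the vertices to fit, columns of M the corresponding prototype vertices):

    #include <Eigen/Dense>

    // Best-fit scaled rotation T (= s * R) with T * V ~= M.
    Eigen::Matrix3d fitScaledRotation(const Eigen::Matrix3Xd& V, const Eigen::Matrix3Xd& M)
    {
        // Unconstrained least-squares solution (normal equations).
        Eigen::Matrix3d T = M * V.transpose() * (V * V.transpose()).inverse();

        // Orthogonal Procrustes: project T onto the nearest rotation via the SVD.
        Eigen::JacobiSVD<Eigen::Matrix3d> svd(T, Eigen::ComputeFullU | Eigen::ComputeFullV);
        Eigen::Matrix3d R = svd.matrixU() * svd.matrixV().transpose();
        if (R.determinant() < 0.0) {            // flip one axis to avoid a reflection
            Eigen::Matrix3d D = Eigen::Matrix3d::Identity();
            D(2, 2) = -1.0;
            R = svd.matrixU() * D * svd.matrixV().transpose();
        }

        // One simple way to recover a single uniform scale: average the singular values.
        double s = svd.singularValues().mean();
        return s * R;
    }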
Now, if you don't know which given points will correspond to which prototype points, the problem you want to solve is called surface registration. This is an active field of research. See for example this paper, which also covers rigid registration, which is what you're after.
If you want to create a mesh on an arbitrary 3D geometry, this is not the way it's typically done.
You should look at octree mesh generation techniques. You'll have better success if you work with a true 3D primitive, which means tetrahedra instead of cubes.
If your geometry is a 3D body, all you'll have to start with is a surface description. Determining "optimal" interior points isn't meaningful, because you don't have any to begin with; the mesher has to create them. You'll want them arranged in such a way that the tetrahedra inside aren't too distorted, but that's the best you'll be able to do.