I'm learning compute shaders after several years of experience with fragment and vertex shaders. I'd like to convert the algorithms from one of my procedural fragment shaders into a compute shader that uses the same algorithms but outputs the resulting procedural map to a texture and sends it to the CPU. Does anyone know of a tutorial or sample code that will point me in the right direction? I just need a generic framework.
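Not a full tutorial, but here is a minimal sketch of the kind of framework involved, assuming a GL 4.3+ context (for GLES 3.1 change the version string to "310 es") and that a GL loader header is already included; compileComputeProgram() is a made-up helper that compiles and links the compute shader:

    #include <vector>

    // Hypothetical helper: compile + link a compute shader program.
    GLuint compileComputeProgram(const char* source);

    // Compute shader that writes a procedural map into an image.
    static const char* kComputeSrc = R"(
        #version 430
        layout(local_size_x = 16, local_size_y = 16) in;
        layout(rgba8, binding = 0) uniform writeonly image2D uOutput;
        void main() {
            ivec2 p = ivec2(gl_GlobalInvocationID.xy);
            vec4 color = vec4(vec2(p) / 512.0, 0.0, 1.0);  // replace with your procedural algorithm
            imageStore(uOutput, p, color);
        }
    )";

    void runProceduralCompute()
    {
        const int W = 512, H = 512;

        // Destination texture.
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, W, H);

        // Bind it to image unit 0 and run the compute shader.
        glUseProgram(compileComputeProgram(kComputeSrc));
        glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);
        glDispatchCompute(W / 16, H / 16, 1);

        // Make the image writes visible before reading back.
        glMemoryBarrier(GL_FRAMEBUFFER_BARRIER_BIT | GL_TEXTURE_UPDATE_BARRIER_BIT);

        // Read back to the CPU via an FBO (works on desktop GL and GLES;
        // glGetTexImage would also work, but only on desktop GL).
        std::vector<unsigned char> pixels(W * H * 4);
        GLuint fbo = 0;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);
        glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    }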
I'm trying to use Polygonal Surface Reconstruction (PSR) with a building point cloud to create simplified building models.
I did some first tests with this CGAL code example and got promising initial results.
As an example, I used this point cloud with vertex normals correctly oriented and got the following result from PSR. Some faces are clearly inverted (dark faces with normals pointing inside the watertight mesh and therefore not visible).
I was wondering if there is a way to fix this face orientation error. I've noticed orientation methods in the Polygon Mesh Processing package, but I don't really know how to apply them to the resulting PSR surface mesh. Conceptually, making the normals point outwards should not be too complicated, I guess.
Thanks in advance for any help
You can use the function reverse_face_orientations from the Polygon Mesh Processing package.
Note that this package has several functions that can help you to correct/modify your mesh.
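For reference, a minimal sketch of how that call looks on a CGAL::Surface_mesh (file names are placeholders; if your mesh is closed, orient_to_bound_a_volume() from the same package may also be worth a look, depending on your CGAL version):

    #include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
    #include <CGAL/Surface_mesh.h>
    #include <CGAL/Polygon_mesh_processing/orientation.h>
    #include <fstream>

    typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
    typedef CGAL::Surface_mesh<Kernel::Point_3>                 Mesh;

    int main()
    {
        Mesh mesh;
        std::ifstream in("psr_result.off");   // the PSR output (placeholder name)
        in >> mesh;

        // Flip the orientation of every face (useful when the mesh came out inside-out).
        CGAL::Polygon_mesh_processing::reverse_face_orientations(mesh);

        std::ofstream out("psr_result_oriented.off");
        out << mesh;
        return 0;
    }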
I’d like to perform a surface parametrization of a triangle mesh (for the purpose of texture mapping).
I tried using some of CGAL’s algorithms, e.g. ARAP, Discrete Conformal Map etc.
The problem is that the surface parameterization methods proposed by CGAL only deal with meshes which are homeomorphic (topologically equivalent) to discs.
Meshes with arbitrary topology can be parameterized, provided that the user specifies a cut graph (a set of edges), which defines the border of a topological disc.
So the problem now becomes: how to cut the mesh properly, i.e. how to define such a cut graph (using CGAL's interface).
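For context, here is roughly what the basic CGAL call looks like once a patch already is a topological disc (pattern from the CGAL examples, from memory, so check it against the current docs; ARAP needs Eigen, and the file name is a placeholder). The cut-graph step is exactly the part that is missing:

    #include <CGAL/Simple_cartesian.h>
    #include <CGAL/Surface_mesh.h>
    #include <CGAL/boost/graph/graph_traits_Surface_mesh.h>
    #include <CGAL/Surface_mesh_parameterization/ARAP_parameterizer_3.h>
    #include <CGAL/Surface_mesh_parameterization/Error_code.h>
    #include <CGAL/Surface_mesh_parameterization/parameterize.h>
    #include <CGAL/Polygon_mesh_processing/measure.h>
    #include <fstream>

    typedef CGAL::Simple_cartesian<double>      Kernel;
    typedef CGAL::Surface_mesh<Kernel::Point_3> Mesh;
    typedef boost::graph_traits<Mesh>::vertex_descriptor   vertex_descriptor;
    typedef boost::graph_traits<Mesh>::halfedge_descriptor halfedge_descriptor;

    namespace SMP = CGAL::Surface_mesh_parameterization;

    int main()
    {
        Mesh mesh;
        std::ifstream in("disc_patch.off");   // must already be a topological disc
        in >> mesh;

        // A halfedge on the border of the disc.
        halfedge_descriptor bhd =
            CGAL::Polygon_mesh_processing::longest_border(mesh).first;

        // Per-vertex UV coordinates computed by the parameterizer.
        Mesh::Property_map<vertex_descriptor, Kernel::Point_2> uv_map =
            mesh.add_property_map<vertex_descriptor, Kernel::Point_2>("v:uv").first;

        SMP::Error_code err =
            SMP::parameterize(mesh, SMP::ARAP_parameterizer_3<Mesh>(), bhd, uv_map);

        return (err == SMP::OK) ? 0 : 1;
    }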
I found a similar question from 3 years ago that went unanswered.
P.S.
If someone can point me to a different library that can do the job, that’ll be just as great.
Thanks.
I have a polygon mesh of a room in high resolution, and I want to extract the per-vertex color information and map it through a UV parameterization, so I can generate a texture atlas of the room.
After that, I want to remesh the model in order to reduce the number of polygons and map the hi-res texture onto the new mesh in lower resolution.
So far I've found this link for doing it in Blender, but I would like to do it programmatically. Do you know of any library/code that could help me with this task?
I guess that first of all I have to segment the model (a normal-based criterion could be helpful) and then cut each mesh segment, since only then will I be able to parameterize it. About parameterization, LSCM seems to provide good results for simple models. Once the texture atlas is available, I think the problem becomes a simple texture-mapping task.
My main problem is segmentation and mesh cutting. I'm using the CGAL library for that purpose, but the algorithm is too simple to cut complex shapes. Any hint about a better segmentation/cutting algorithm that performs well for room-sized models?
EDIT:
The mesh consists of a room reconstructed with an RGB-D camera, with 2.5 million vertices and 4.7 million faces. The point is to extract a high-resolution texture, remesh the model to reduce the number of polygons, and then remap the texture onto the new mesh. It's not a closed mesh and there are holes due to the reconstruction, so I'm wondering whether my task is possible to accomplish at all.
I attach a capture of the mesh.
I would suggest using the following 4-step procedure:
Step 1: remesh
For this type of mesh, which comes from computer vision, you need a remesher that is robust to holes, overlaps, skinny triangles, etc. You can use my GEOGRAM software [1] with the following command:
vorpalite my_input.obj my_output.obj pre=false post=false pts=30000
where 30000 is the desired number of points (adapt it to the complexity of your input). Note: I am deactivating pre- and post-processing (pre=false post=false), since they may remove too many parts of this type of mesh.
Step 2: segment the remesh
My favourite method is "Variational Shape Approximation" [3]. I like it because it is simple to implement and gives reasonable results in most cases.
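Not the implementation used in Graphite, but if you prefer to stay in code: CGAL also ships an implementation of Variational Shape Approximation (the Triangulated Surface Mesh Approximation package, CGAL 4.14+). A rough sketch, with parameter names from memory, so double-check against the CGAL documentation:

    #include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
    #include <CGAL/Surface_mesh.h>
    #include <CGAL/Surface_mesh_approximation/approximate_triangle_mesh.h>
    #include <fstream>

    typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
    typedef CGAL::Surface_mesh<Kernel::Point_3>                 Mesh;
    typedef boost::graph_traits<Mesh>::face_descriptor          face_descriptor;

    int main()
    {
        Mesh mesh;
        std::ifstream in("my_output.off");   // the remeshed model from step 1
        in >> mesh;

        // One proxy (segment) id per face.
        Mesh::Property_map<face_descriptor, std::size_t> proxy_map =
            mesh.add_property_map<face_descriptor, std::size_t>("f:proxy", 0).first;

        // Partition the mesh into at most 50 near-planar proxies (tune for your model).
        CGAL::Surface_mesh_approximation::approximate_triangle_mesh(mesh,
            CGAL::parameters::max_number_of_proxies(50)
                             .face_proxy_map(proxy_map));

        return 0;
    }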
Step 3: parameterize
Besides my LSCM method, you may use ABF++ [4], which we developed later and which gives much better results in most cases. You may also try ARAP [5].
Step 4: bake the texture
Once the simplified mesh is parameterized, you need to copy the colors from the original mesh onto the new one. This means determining, for each pixel of the texture, where it lies in 3D on the simplified mesh, then finding the nearest point of the original 3D mesh and copying its color.
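A hedged sketch of that nearest-point lookup using CGAL's AABB tree (this is not the Graphite implementation; texelPositionOn3DMesh(), colorAt() and writeRGB() are hypothetical helpers you would provide yourself):

    #include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
    #include <CGAL/Surface_mesh.h>
    #include <CGAL/AABB_tree.h>
    #include <CGAL/AABB_traits.h>
    #include <CGAL/AABB_face_graph_triangle_primitive.h>
    #include <vector>

    typedef CGAL::Exact_predicates_inexact_constructions_kernel  Kernel;
    typedef CGAL::Surface_mesh<Kernel::Point_3>                  Mesh;
    typedef CGAL::AABB_face_graph_triangle_primitive<Mesh>       Primitive;
    typedef CGAL::AABB_traits<Kernel, Primitive>                 Traits;
    typedef CGAL::AABB_tree<Traits>                              Tree;

    // Hypothetical helpers (you implement these):
    Kernel::Point_3 texelPositionOn3DMesh(const Mesh& simplified, int x, int y, int W, int H);
    Kernel::Vector_3 colorAt(const Mesh& hires, const Kernel::Point_3& p);
    void writeRGB(std::vector<unsigned char>& atlas, int x, int y, int W, const Kernel::Vector_3& rgb);

    // Bake the hi-res vertex colors into a W x H atlas of the simplified, parameterized mesh.
    void bakeTexture(const Mesh& hires, const Mesh& simplified,
                     int W, int H, std::vector<unsigned char>& atlas)
    {
        Tree tree(faces(hires).first, faces(hires).second, hires);
        tree.accelerate_distance_queries();

        atlas.assign(std::size_t(W) * H * 3, 0);
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                Kernel::Point_3 p = texelPositionOn3DMesh(simplified, x, y, W, H);
                Kernel::Point_3 q = tree.closest_point(p);   // nearest point on the hi-res mesh
                writeRGB(atlas, x, y, W, colorAt(hires, q));
            }
        }
    }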
Segmentation, parameterization and baking are implemented in my Graphite software [2] (use the old version 2.x, the newer version 3.x does not have all the texturing functionalities).
[1] geogram: http://alice.loria.fr/software/geogram/doc/html/index.html
[2] graphite: http://alice.loria.fr/software/graphite/doc/html/
[3] Variational Shape Approximation (Cohen-Steiner, Alliez, Desbrun, SIGGRAPH 2004): http://www.geometry.caltech.edu/pubs/CAD04.pdf
[4] ABF++: http://alice.loria.fr/index.php/publications.html?redirect=1&Paper=ABF_plus_plus#2004
[5] ARAP: cs.harvard.edu/~sjg/papers/arap.pdf
For reducing the number of polygons, I prefer using mesh decimation. My recommended workflow (input: a high-resolution mesh, mesh0, with vertex colors):
Compute UV coordinates for mesh0.
Generate a texture image (textureImage) from the vertex colors. You now have a textured mesh (mesh0 with UV coordinates plus textureImage).
Apply mesh decimation to mesh0; the decimation should take the UV coordinates into consideration (see the sketch below).
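As a starting point, a minimal sketch of plain decimation with CGAL's Surface_mesh_simplification package (header/class names may differ slightly between CGAL versions). Note that this basic call knows nothing about UVs; making it UV-aware means supplying a custom cost/placement policy or using a tool that handles texture coordinates directly:

    #include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
    #include <CGAL/Surface_mesh.h>
    #include <CGAL/Surface_mesh_simplification/edge_collapse.h>
    #include <CGAL/Surface_mesh_simplification/Policies/Edge_collapse/Count_ratio_stop_predicate.h>
    #include <fstream>

    typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
    typedef CGAL::Surface_mesh<Kernel::Point_3>                 Mesh;

    namespace SMS = CGAL::Surface_mesh_simplification;

    int main()
    {
        Mesh mesh;                            // mesh0 in the workflow above
        std::ifstream in("mesh0.off");
        in >> mesh;

        // Collapse edges until only 10% of them remain (ignores UVs).
        SMS::Count_ratio_stop_predicate<Mesh> stop(0.10);
        SMS::edge_collapse(mesh, stop);

        std::ofstream out("mesh0_decimated.off");
        out << mesh;
        return 0;
    }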
I have an example of this workflow on my site; see the example image: Decimation of texture mesh.
You can refer to my site for details.
Is it possible to emulate the complete fixed-function pipeline with shaders on the fly? By "on the fly" I mean not rewriting the fixed-function code to use shaders, but rather a sort of intermediate driver which receives fixed-function GLES calls (possibly caching them for a full frame, since there is no direct one-to-one translation from the fixed to the programmable pipeline) and outputs equivalent GLES 2.0 calls.
And even if it is possible, how much work would it really be?
For most of ES 1.1, that looks pretty straightforward. All the typical fixed functionality, like transformations, lights, and materials, translates directly into shader code.
For a complete replacement, you would obviously have to implement all the functionality. From skimming over the ES 1.1 entry points, I spotted a few items that would not directly translate to ES 2.0, where the last of these looks particularly problematic:
Arbitrary clipping planes. This is not available in ES 2.0, but it is not terribly hard to emulate in shaders by calculating a distance to the plane in the vertex shader and then discarding the clipped fragments in the fragment shader (see the sketch after this list).
ES 1.1 has something called "palette textures". From my understanding, it looks somewhat painful to implement in ES 2.0, but possible. You would probably need two textures, one for the indices, and one for the palette, with two levels of sampling in the fragment shader.
ES 1.1 supports logical operations (glLogicOp) as part of the per-fragment operations that are executed after the fragment shader. ES 2.0 does not have this, and I can't think of a good way to replicate it. The only thing that comes to mind is to render, read back the result, do the logical operation on the CPU, and then render the resulting image. And you would have to do that every time the operation is changed.
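For the first item, a hedged sketch of what the clip-plane emulation could look like in ES 2.0 shaders (uniform/attribute names are made up, and the plane must be expressed in the same space as the positions you test it against):

    // Vertex shader: compute the signed distance to the user clip plane.
    static const char* kClipVertexSrc = R"(
        uniform mat4 uModelViewProjection;
        uniform vec4 uClipPlane;            // plane equation (A, B, C, D)
        attribute vec4 aPosition;
        varying float vClipDistance;

        void main() {
            vClipDistance = dot(uClipPlane, aPosition);   // signed distance to the plane
            gl_Position   = uModelViewProjection * aPosition;
        }
    )";

    // Fragment shader: discard fragments on the clipped side.
    static const char* kClipFragmentSrc = R"(
        precision mediump float;
        varying float vClipDistance;

        void main() {
            if (vClipDistance < 0.0)
                discard;
            gl_FragColor = vec4(1.0);       // replace with the real shading
        }
    )";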
Whenever I look at sample shaders, it seems this type of stuff happens almost by magic; sometimes information is saved into special places like position/color, but other times a fragment shader uses parameters, and quite how the fragment shader knows where to get this data I can't follow.
Can anyone provide a medium-simple GLES shader which does this, and explain how it works?
Have a look at the OpenGL ES quick reference card.
You're interested in the "Built-In Inputs, Outputs, and Constants" section later on the card, where GLSL is described, in particular the vertex shader outputs and fragment shader inputs.
Additional VS outputs (which become FS inputs) should be declared in both shaders using the varying keyword, as in the sketch below.
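A minimal ES 2.0 vertex/fragment shader pair showing that flow (the names are arbitrary; the application binds the attributes and sets the uniform itself):

    // Vertex shader: reads attributes/uniforms supplied by the application,
    // writes the built-in gl_Position plus one user-defined varying.
    static const char* kVertexSrc = R"(
        uniform mat4 uMvp;            // set with glUniformMatrix4fv
        attribute vec4 aPosition;     // per-vertex data from glVertexAttribPointer
        attribute vec4 aColor;
        varying vec4 vColor;          // interpolated and handed to the fragment shader

        void main() {
            vColor      = aColor;
            gl_Position = uMvp * aPosition;
        }
    )";

    // Fragment shader: receives the varying (same name and type) and writes
    // the built-in output gl_FragColor.
    static const char* kFragmentSrc = R"(
        precision mediump float;
        varying vec4 vColor;

        void main() {
            gl_FragColor = vColor;
        }
    )";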