If the "layers" member of this struct is 3 for example, and I have an image view attachment in the framebuffer at [0] with three layers, will the shader run three times and will I get an in-shader variable telling me which layer it is?
I know that multi-layer rendering was something available in OpenGL, but the catch is that it only works with a geometry shader. Is this what the "layers" field of the framebuffer create info does?
If I want to write to multiple layers without the geometry shader, what are my options? Does VK_EXT_shader_viewport_index_layer help? I know that VK_KHR_multiview will do this, but just want to find out if it's possible without it.
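Roughly what I mean, as a sketch (all handles are assumed to be created elsewhere, and the image is assumed to have been created with arrayLayers = 3):

#include <vulkan/vulkan.h>

// Sketch of the setup in question: a color attachment view covering 3 array
// layers, and a framebuffer whose "layers" field is set to 3.
VkFramebuffer createLayeredFramebuffer(VkDevice device, VkRenderPass renderPass,
                                       VkImage colorImage,   // created with arrayLayers = 3
                                       uint32_t width, uint32_t height)
{
    VkImageViewCreateInfo viewInfo{};
    viewInfo.sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    viewInfo.image    = colorImage;
    viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D_ARRAY;
    viewInfo.format   = VK_FORMAT_R8G8B8A8_UNORM;
    viewInfo.subresourceRange.aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT;
    viewInfo.subresourceRange.baseMipLevel   = 0;
    viewInfo.subresourceRange.levelCount     = 1;
    viewInfo.subresourceRange.baseArrayLayer = 0;
    viewInfo.subresourceRange.layerCount     = 3;

    VkImageView colorView;
    vkCreateImageView(device, &viewInfo, nullptr, &colorView);

    VkFramebufferCreateInfo fbInfo{};
    fbInfo.sType           = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
    fbInfo.renderPass      = renderPass;
    fbInfo.attachmentCount = 1;
    fbInfo.pAttachments    = &colorView;
    fbInfo.width           = width;
    fbInfo.height          = height;
    fbInfo.layers          = 3;     // the "layers" member I'm asking about

    VkFramebuffer framebuffer;
    vkCreateFramebuffer(device, &fbInfo, nullptr, &framebuffer);
    return framebuffer;
}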
I'm trying to draw geometry for multiple models using a single draw call. All the geometry therefore resides within the same vertex/index buffers. The geometry for the different models shares the same vertex format, but the vertex count for each model can be different.
In the vertex/fragment shaders, what technique can be used to differentiate between the different models, so that I can access their appropriate transforms/textures/etc.?
Are these static models? For traditional static batching:
You only need a single transform relative to the batch origin (position the individual models relative to the batch origin as part of the offline data packaging step).
You can batch your textures into a single atlas (either a single 2D image with different coordinates for each object, or a texture array with a different layer for each object).
If you do it this way you don't need to differentiate the component models - they are effectively just "one large model", which has nice performance properties. (A rough packing sketch follows below.)
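As a very rough sketch of that offline packing step, in plain C++; the structs, the row-major matrix convention, and buildStaticBatch itself are placeholders for whatever formats and tooling you actually use:

#include <cstdint>
#include <vector>

// Hypothetical minimal vertex format shared by all models in the batch.
struct Vertex { float px, py, pz; float u, v; };
struct Mat4   { float m[16]; };   // placeholder 4x4 matrix, row-major here
struct Model  { std::vector<Vertex> vertices; std::vector<uint32_t> indices; Mat4 toBatchSpace; };

// Bake the model-to-batch transform directly into the vertex position.
Vertex transformToBatch(const Vertex& in, const Mat4& xf)
{
    Vertex out = in;  // keep UVs, transform only the position
    out.px = xf.m[0]*in.px + xf.m[1]*in.py + xf.m[2]*in.pz  + xf.m[3];
    out.py = xf.m[4]*in.px + xf.m[5]*in.py + xf.m[6]*in.pz  + xf.m[7];
    out.pz = xf.m[8]*in.px + xf.m[9]*in.py + xf.m[10]*in.pz + xf.m[11];
    return out;
}

// Offline packaging: append every model to one big vertex/index array,
// rebasing indices, so the batch draws as "one large model".
void buildStaticBatch(const std::vector<Model>& models,
                      std::vector<Vertex>& outVertices,
                      std::vector<uint32_t>& outIndices)
{
    for (const Model& model : models)
    {
        const uint32_t baseVertex = static_cast<uint32_t>(outVertices.size());
        for (const Vertex& v : model.vertices)
            outVertices.push_back(transformToBatch(v, model.toBatchSpace));
        for (uint32_t i : model.indices)
            outIndices.push_back(baseVertex + i);  // rebase into the merged buffer
    }
}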
For more modern methods, you can try indirect draws with a "drawCount" greater than one, using the per-draw index to look up the settings you want. This allows variable buffer offsets and triangle counts to be used, but the rest of the state needs to be the same for every draw (a sketch follows below).
As an alternative to texture arrays, with bindless texturing you can just programmatically select which texture to use in the shader at runtime. BUT you generally still want it to be at least warp-uniform to avoid a performance hit.
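A rough sketch of the indirect path in Vulkan terms; the ModelRange struct and buffer handles are placeholders, and OpenGL's glMultiDrawElementsIndirect takes an equivalent command struct. In the shader, gl_InstanceIndex (Vulkan) or gl_DrawID (GL with ARB_shader_draw_parameters) can then index per-model transforms and texture indices:

#include <vector>
#include <vulkan/vulkan.h>

// Hypothetical per-model layout info within the shared vertex/index buffers.
struct ModelRange { uint32_t indexCount; uint32_t firstIndex; int32_t baseVertex; };

// One indirect command per model. All draws share the same buffers and pipeline
// state; only counts/offsets differ, and firstInstance carries the model index.
std::vector<VkDrawIndexedIndirectCommand> buildIndirectCommands(const std::vector<ModelRange>& models)
{
    std::vector<VkDrawIndexedIndirectCommand> commands;
    for (uint32_t i = 0; i < static_cast<uint32_t>(models.size()); ++i)
    {
        VkDrawIndexedIndirectCommand cmd{};
        cmd.indexCount    = models[i].indexCount;
        cmd.instanceCount = 1;
        cmd.firstIndex    = models[i].firstIndex;   // offset into the shared index buffer
        cmd.vertexOffset  = models[i].baseVertex;   // offset into the shared vertex buffer
        cmd.firstInstance = i;                      // visible as gl_InstanceIndex in Vulkan
        commands.push_back(cmd);
    }
    return commands;
}

// Later, after copying the commands into an indirect buffer, one call draws everything:
//   vkCmdDrawIndexedIndirect(commandBuffer, indirectBuffer, 0,
//                            drawCount, sizeof(VkDrawIndexedIndirectCommand));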
I have thought of two ways to write my OpenGL ES 2.0 code.
First, I make many draw calls to draw the elements on the screen, with many VAOs and VBOs, or with only one VAO and many VBOs.
Second, I store the coordinates of all elements in one list, write all of those vertices into a single VAO and a single VBO, and draw all the vertices on the screen.
Which is the better way to follow?
These are the approaches I have thought of; what other ways are there?
The VAO is meant to save you some setup calls when setting the vertex attribute pointers and enabling/disabling the pipeline state related to that setup. Having just one VAO doesn't save you anything, because you will repeatedly re-bind the vertex buffers and change some settings. So you should aim to have multiple VAOs, one per "static" rendering batch, but not necessarily one per object drawn.
As to having all vertices in a single VBO or many VBOs - that really depends on the task.
Having all data in a single VBO has no benefit if you still draw it all in many calls. But there's also no point in allocating one VBO per sprite. It's always about the balance between the costs of the different calls used to set up the pipeline, so ideally you try different approaches and decide what's best for you in your particular case.
There might be restrictions on buffer sizes, and there are definitely "reasonable" sizes preferred by specific implementations. I remember some issues with old Intel drivers where rendering a portion of the buffer would process the entire buffer, skipping the unneeded vertices.
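A rough sketch of the "one VAO per static batch" idea in ES 2.0 terms. Note that this assumes the GL_OES_vertex_array_object extension (core ES 2.0 has no VAOs at all; the entry points are usually fetched via eglGetProcAddress), and the attribute locations and vertex layout are placeholders:

#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>   // glGenVertexArraysOES / glBindVertexArrayOES

// Build-time: one VAO + one VBO for a whole batch of sprites that share the
// same interleaved layout (position.xy + texcoord.uv).
GLuint batchVAO = 0, batchVBO = 0;

void createBatch(const GLfloat* vertices, GLsizeiptr byteSize)
{
    glGenVertexArraysOES(1, &batchVAO);
    glBindVertexArrayOES(batchVAO);

    glGenBuffers(1, &batchVBO);
    glBindBuffer(GL_ARRAY_BUFFER, batchVBO);
    glBufferData(GL_ARRAY_BUFFER, byteSize, vertices, GL_STATIC_DRAW);

    // Attribute locations 0/1 are placeholders for your shader's attributes.
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (void*)0);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (void*)(2 * sizeof(GLfloat)));

    glBindVertexArrayOES(0);
}

// Draw-time: binding the VAO restores all of the setup above in one call,
// and sub-ranges of the shared VBO can still be drawn separately if needed.
void drawBatch(GLint firstVertex, GLsizei vertexCount)
{
    glBindVertexArrayOES(batchVAO);
    glDrawArrays(GL_TRIANGLES, firstVertex, vertexCount);
    glBindVertexArrayOES(0);
}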
I am able to apply DeepLabV3+ to segment images, but I would also like to get the boundary around each individual detection.
For example, in the image segmentation mask above, I cannot distinguish between the two children on the horse. If I could draw a boundary around each individual child, or give each a different color, I would be able to distinguish them. Please let me know if there is any way to configure DeepLab to achieve that.
You are confusing two tasks: semantic segmentation and instance segmentation.
DeepLabV3+ (and many similar deep nets) solve the semantic segmentation problem: that is, labeling each pixel with the class it belongs to. You got a very nice result where all pixels belonging to "person" were colored pink. Semantic segmentation algorithms do not care how many "person"s there are in the image and make no attempt to label each person separately. As long as all "person" pixels are labeled as such, the task is considered well done.
On the other hand, what you are looking for is instance segmentation: labeling each "person" as a unique person in the image. This is a far more complex task: not only must you succeed in labeling all "person" pixels as "person", you must also group the "person" pixels into the different instances in the image.
Since instance segmentation is a more difficult task, you would need different models/nets to accomplish it.
I suggest Mask R-CNN as a good starting point for instance segmentation algorithms.
I have a polygon mesh of a room in high resolution, and I want to extract the per-vertex color information and map it via a UV map, so I can generate a texture atlas of the room.
After that, I want to remesh the model in order to reduce the number of polygons and map the hi-res texture onto the new mesh in lower resolution.
So far I've found this link to do it in Blender, but I would like to do it programmatically. Do you know of any library/code that could help me with this task?
I guess first of all I have to segment the model (a normal-based criterion could be helpful) and then cut each mesh segment, and only then am I able to parameterize it. Regarding parameterization, LSCM seems to provide good results for simple models. Once the texture atlas is available, I think the problem becomes a simple texture mapping task.
My main problem is segmentation and mesh cutting. I'm using the CGAL library for that purpose, but the algorithm is too simple to cut complex shapes. Any hint about a better segmentation/cutting algorithm that performs well for room-sized models?
EDIT:
The mesh consists of a room reconstructed with an RGB-D camera, with 2.5 million vertices and 4.7 million faces. The point is to extract a high-resolution texture, remesh the model to reduce the number of polygons, and then remap the texture onto it. It's not a closed mesh, and there are holes due to the reconstruction, so I'm wondering whether my task is possible to accomplish at all.
I attach a capture of the mesh.
I would suggest using the following 4-steps procedure:
Step 1: remesh
For this type of mesh that comes from computer vision, you need a remesher that is robust to holes, overlaps, skinny triangles etc... You can use my GEOGRAM software [1]. Use the following command:
vorpalite my_input.obj my_output.obj pre=false post=false pts=30000
where 30000 is the number of desired points (adapt it to the complexity of your input). Note: I am deactivating pre- and post-processing (pre=false post=false), which might otherwise remove too many parts of this type of mesh.
Step 2: segment the remesh
My favourite method is "Variational Shape Approximation" [3]. I like it because it is simple to implement and gives reasonable results in most cases.
Step 3: parameterize
Besides my LSCM method, you may use ABF++ [4], which we developed later and which gives much better results in most cases. You may also try ARAP [5].
Step 4: bake the texture
Once the simplified mesh is parameterized, you need to copy the colors from the original mesh onto the new one. This means determining for each pixel of the texture where it goes in 3D, and finding the nearest point in the original 3D mesh.
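A very small brute-force sketch of that color-copy step (hypothetical mesh structs, no acceleration structure) to make the idea concrete; in practice you would use a kd-tree or similar for the nearest-point query:

#include <cfloat>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical minimal structures for this sketch.
struct Vec3  { float x, y, z; };
struct Color { uint8_t r, g, b; };
struct SourceMesh { std::vector<Vec3> positions; std::vector<Color> colors; };  // original hi-res mesh

// For every texel of the new atlas we assume the parameterized, simplified mesh
// has already given us the 3D position that texel corresponds to (e.g. by
// rasterizing its triangles in UV space and interpolating positions with
// barycentric coordinates). The remaining work is a nearest-point lookup.
Color bakeTexel(const Vec3& texelPos3D, const SourceMesh& source)
{
    float bestDist2 = FLT_MAX;
    std::size_t bestVertex = 0;
    for (std::size_t i = 0; i < source.positions.size(); ++i)   // brute force for clarity
    {
        const Vec3& p = source.positions[i];
        const float dx = p.x - texelPos3D.x, dy = p.y - texelPos3D.y, dz = p.z - texelPos3D.z;
        const float d2 = dx*dx + dy*dy + dz*dz;
        if (d2 < bestDist2) { bestDist2 = d2; bestVertex = i; }
    }
    return source.colors[bestVertex];   // copy the nearest original vertex color into the atlas
}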
Segmentation, parameterization and baking are implemented in my Graphite software [2] (use the old version 2.x, the newer version 3.x does not have all the texturing functionalities).
[1] geogram: http://alice.loria.fr/software/geogram/doc/html/index.html
[2] graphite: http://alice.loria.fr/software/graphite/doc/html/
[3] Variational Shape Approximation (Cohen-Steiner, Alliez, Desbrun, SIGGRAPH 2004): http://www.geometry.caltech.edu/pubs/CAD04.pdf
[4] ABF++: http://alice.loria.fr/index.php/publications.html?redirect=1&Paper=ABF_plus_plus#2004
[5] ARAP: cs.harvard.edu/~sjg/papers/arap.pdf
For reducing the number of polygons, I prefer using mesh decimation. My recommended workflow (input: a high-resolution mesh (mesh0) with vertex colors):
Compute UV coordinates for mesh0.
Generate a texture image (textureImage) from the vertex colors. You now have a textured mesh (mesh0 with UV coordinates, plus textureImage).
Apply mesh decimation to mesh0; the decimation should take the UV coordinates into consideration (a basic decimation sketch follows below).
I have an example of this workflow on my site; see the example image: Decimation of texture mesh.
Or you can refer to my site for details.
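If you are already using CGAL (as mentioned in the question), a minimal edge-collapse decimation sketch looks like the following. Note that this basic form does not take UV coordinates into account - a UV-aware cost/placement policy or a dedicated tool is needed for that - and the exact policy names may vary between CGAL versions:

#include <CGAL/Simple_cartesian.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Surface_mesh_simplification/edge_collapse.h>
#include <CGAL/Surface_mesh_simplification/Policies/Edge_collapse/Count_ratio_stop_predicate.h>

typedef CGAL::Simple_cartesian<double>      Kernel;
typedef CGAL::Surface_mesh<Kernel::Point_3> Mesh;
namespace SMS = CGAL::Surface_mesh_simplification;

// Collapse edges until roughly 10% of the original edges remain.
// `mesh` is assumed to be loaded already (e.g. from the reconstructed scan).
int decimate(Mesh& mesh)
{
    SMS::Count_ratio_stop_predicate<Mesh> stop(0.10);
    return SMS::edge_collapse(mesh, stop);   // returns the number of removed edges
}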
I'm creating a DirectX 11 game that renders complex meshes in 3D space. I'm using vertex/index buffers/shaders and this all works fine. However I now want to perform some basic 'overlay' rendering - more specifically, I want to render wireframe boxes in 3D space to show the bounds of a particular area. There would only ever be one or two boxes in view at any one time, and their vertices would change position each frame.
I've therefore been searching for simpler DX11 rendering methods but most articles I find still prepare a vertex/index buffer for very simple rendering. I know that hardware is well optimised for processing vertex streams, but is the overhead of building and filling a vertex buffer every frame just to process 8 vertices really the most efficient method?
My question is therefore, what is the most efficient method for performing this very simple rendering in DX11? Is there any more primitive method ("DrawLine", "DrawLineList(D3DXVECTOR3[])", ...) that would be a better solution? It could be less efficient per-vertex than the standard method of passing vertex buffers because it's only ever going to be used for a handful of vertices per frame.
You should create a single vertex/index buffer for each primitive shape (box, sphere, ...) and use a transformation matrix to place it correctly in the world.
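A rough sketch of that approach for the wireframe box case in D3D11; the buffer handles, constant buffer layout, and shaders are placeholders, and the vertex/index buffers are assumed to be created once at startup from the arrays below:

#include <d3d11.h>
#include <DirectXMath.h>
using namespace DirectX;

// Build-time data: a unit cube as a line list, uploaded once into cubeVB/cubeIB.
static const XMFLOAT3 kUnitCubeVerts[8] = {
    {0,0,0},{1,0,0},{1,1,0},{0,1,0},{0,0,1},{1,0,1},{1,1,1},{0,1,1}
};
static const UINT16 kUnitCubeEdges[24] = {
    0,1, 1,2, 2,3, 3,0,   // bottom face
    4,5, 5,6, 6,7, 7,4,   // top face
    0,4, 1,5, 2,6, 3,7    // vertical edges
};

// Per-frame: scale/translate the unit cube to the box you want and draw it.
// `worldCB` is a constant buffer holding the world matrix read by the vertex shader.
void DrawWireBox(ID3D11DeviceContext* ctx,
                 ID3D11Buffer* cubeVB, ID3D11Buffer* cubeIB, ID3D11Buffer* worldCB,
                 XMFLOAT3 boxMin, XMFLOAT3 boxMax)
{
    XMMATRIX world = XMMatrixScaling(boxMax.x - boxMin.x,
                                     boxMax.y - boxMin.y,
                                     boxMax.z - boxMin.z) *
                     XMMatrixTranslation(boxMin.x, boxMin.y, boxMin.z);
    XMFLOAT4X4 worldData;
    XMStoreFloat4x4(&worldData, XMMatrixTranspose(world));   // HLSL default is column-major
    ctx->UpdateSubresource(worldCB, 0, nullptr, &worldData, 0, 0);

    const UINT stride = sizeof(XMFLOAT3);
    const UINT offset = 0;
    ctx->IASetVertexBuffers(0, 1, &cubeVB, &stride, &offset);
    ctx->IASetIndexBuffer(cubeIB, DXGI_FORMAT_R16_UINT, 0);
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_LINELIST);
    ctx->DrawIndexed(24, 0, 0);
}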