How to cut polyhedron with a plane or bounding box? - cgal

Within a polyhedron, how do I obtain the handle to any edge that intersects a given plane (the purpose is to further cut the polyhedron with CGAL::polyhedron_cut_plane_3)?
I currently have this snippet, but it doesn't work. I constructed it from pieces found in the CGAL documentation and examples (CGAL 4.14 - 3D Fast Intersection and Distance Computation (AABB Tree)):
#include <CGAL/Simple_cartesian.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/AABB_tree.h>
#include <CGAL/AABB_traits.h>
#include <CGAL/AABB_face_graph_triangle_primitive.h>
#include <iostream>

typedef CGAL::Simple_cartesian<double> Kernel;
typedef CGAL::Polyhedron_3<Kernel> Polyhedron;
typedef CGAL::AABB_face_graph_triangle_primitive<Polyhedron> Primitive;
typedef CGAL::AABB_traits<Kernel, Primitive> Traits;

Polyhedron poly = load_obj(argv[1]); // load from file using a helper
Kernel::Plane_3 plane(1, 0, 0, 0);   // I am certain this plane intersects the given mesh
CGAL::AABB_tree<Traits> tree(faces(poly).first, faces(poly).second, poly);
auto intersection = tree.any_intersection(plane);
if (intersection) {
    if (boost::get<Kernel::Segment_3>(&(intersection->first))) {
        // SHOULD enter here and I can do things with intersection->second
    } else {
        // BUT it enters here
    }
} else {
    std::cout << "No intersection." << std::endl;
}
Edit on 9/9/2019:
I changed the title. Old title: How to obtain the handle to some edge found in a plane-polyhedron intersection. With the methods provided in CGAL/Polygon_mesh_processing/clip.h, it is unnecessary to use an AABB tree to find the intersection.
To clip with one plane, one line is enough (see the sketch below): CGAL::Polygon_mesh_processing::clip(poly, plane);
To clip within some bounding box, as suggested by @sloriot, there is an internal function CGAL::Polygon_mesh_processing::internal::clip_to_bbox. Here is an example.
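For reference, here is a minimal, self-contained sketch of that one-line clip() call (my own illustration, not from the original post; the tetrahedron input, the Epick kernel with Surface_mesh, and CGAL 4.14+ are assumptions):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/boost/graph/generators.h>
#include <CGAL/Polygon_mesh_processing/clip.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3> Mesh;

int main() {
    Mesh poly;
    // A toy triangulated input straddling the plane x = 0.
    CGAL::make_tetrahedron(K::Point_3(-1, 0, 0), K::Point_3(1, 0, 0),
                           K::Point_3(0, 1, 0), K::Point_3(0, 0, 1), poly);
    // clip() keeps the part of the mesh on the negative side of the plane.
    CGAL::Polygon_mesh_processing::clip(poly, K::Plane_3(1, 0, 0, 0));
    return 0;
}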

The simplest way to do it would be to use the undocumented function clip_to_bbox() from the file CGAL/Polygon_mesh_processing/clip.h to turn a plane into a clipping bbox, and then call corefine() to embed the plane intersection into your mesh. If you want to get the intersection edges, pass an edge-constrained map to corefine() in the named parameters.
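A hedged sketch of that corefine() approach (my own illustration: the two overlapping tetrahedra are placeholder geometry, and the CGAL::parameters namespace assumes CGAL 5.x; in 4.x the named parameters live in CGAL::Polygon_mesh_processing::parameters):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/boost/graph/generators.h>
#include <CGAL/Polygon_mesh_processing/corefinement.h>
#include <iostream>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3> Mesh;

int main() {
    Mesh poly, clipper;
    CGAL::make_tetrahedron(K::Point_3(0, 0, 0), K::Point_3(2, 0, 0),
                           K::Point_3(0, 2, 0), K::Point_3(0, 0, 2), poly);
    CGAL::make_tetrahedron(K::Point_3(1, -1, -1), K::Point_3(3, -1, -1),
                           K::Point_3(1, 3, -1), K::Point_3(1, -1, 3), clipper);
    // Edges created on the intersection polyline are flagged in this map.
    auto ecm = poly.add_property_map<Mesh::Edge_index, bool>("e:constrained", false).first;
    CGAL::Polygon_mesh_processing::corefine(poly, clipper,
        CGAL::parameters::edge_is_constrained_map(ecm));
    for (Mesh::Edge_index e : poly.edges())
        if (get(ecm, e))
            std::cout << "intersection edge: " << e << "\n";
    return 0;
}

After corefine() returns, poly contains the intersection polyline as real edges, and the property map flags exactly those edges, which is the edge handle the original question asked for.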

Does order within push constant structs matter, even when using alignas()?

I have a "Sprite" struct that I hand over as a push constant.
struct Sprite
{
    glm::vec2 position;
    alignas(16) Rect uvRect; // a "Rect" is just 2 glm::vec2s
    alignas(4) glm::uint32 textureIndex;
    alignas(4) glm::float32 rotation;
};
In my .vert file, I describe its layout as:
layout(push_constant) uniform Push
{
    vec2 offset;  // the 'position' part of the Sprite
    vec2 origin;  // the first part of the Sprite struct's "uvRect"
    vec2 extent;  // the second part of the Sprite struct's "uvRect"
    uint textureIndex;
    float rotation;
} push;
This doesn't work: I get a black screen.
However, if I rearrange Sprite so that it goes:
struct Sprite
{
    glm::vec2 position;
    alignas(4) glm::uint32 textureIndex;
    alignas(16) Rect uvRect; // a "Rect" is just 2 glm::vec2s
    alignas(4) glm::float32 rotation;
};
...and then change the push-constant block in the .vert file accordingly, it suddenly works.
Does anyone know why this might be?
In your first structure you align the Rect to a 16-byte boundary, but the push constant block in Vulkan expects another vec2 to be tightly packed there. Assuming I'm reading the alignment spec correctly, structures are aligned on a multiple of a 16-byte boundary, whereas a two-component vector has an alignment twice that of its base component.
An array or structure type has an extended alignment equal to the largest extended alignment of any of its members, rounded up to a multiple of 16.
A scalar or vector type has an extended alignment equal to its base alignment.
A two-component vector has a base alignment equal to twice its scalar alignment.
A scalar of size N has a scalar alignment of N.
Thus the offset of uvRect in C is 16, but in GLSL the offset of origin is 8 and of extent is 16.
By changing the order, GLSL looks for origin at the next 8-byte boundary after offset and textureIndex, i.e. after 3 dwords (12 bytes) rounded up to an offset of 16, which then matches what C produces.
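To make the offsets concrete, here is a small self-contained check (my own sketch, not from the answer; the plain Vec2/Rect structs stand in for the glm types, assuming they are layout-compatible):

#include <cstddef>
#include <cstdint>

struct Vec2 { float x, y; };
struct Rect { Vec2 origin; Vec2 extent; }; // stand-in for the question's Rect

// The reordered Sprite from the question, with glm types replaced.
struct Sprite {
    Vec2 position;                          // offset 0
    alignas(4) std::uint32_t textureIndex;  // offset 8
    alignas(16) Rect uvRect;                // offset 16: origin at 16, extent at 24
    alignas(4) float rotation;              // offset 32
};

static_assert(offsetof(Sprite, textureIndex) == 8, "packs right after position");
static_assert(offsetof(Sprite, uvRect) == 16, "matches GLSL's origin at offset 16");
static_assert(offsetof(Sprite, rotation) == 32, "matches GLSL's rotation at offset 32");

int main() { return 0; }

If the asserts compile, the C layout of the reordered Sprite matches the offsets the GLSL push-constant block expects.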

Subtracting multiple polyhedra continuously from a polyhedron

I want to get the result of subtracting multiple (probably hundreds of) polyhedra from a polyhedron. I found that the CGAL library's "3D Boolean Operations on Nef Polyhedra" package supports boolean operations between polyhedra. I wanted to use this package to solve my problem, but I ran into a lot of trouble. While I knew CGAL was a powerful library, I was completely new to it and had no idea how to use it most effectively for this problem.
My goal is to use the CGAL library to compute one polyhedron minus multiple polyhedra. Below I go into more detail about the problem, my approach, and the errors the program produced. I'd appreciate it if you could tell me why the program produces these errors and how I can use CGAL efficiently to compute the difference.
The problem I want to solve with CGAL:
I used MATLAB to get some polyhedra: A, B1, B2, ..., Bi, ..., Bn (n is probably several hundred). I want to get the result of A - B1 - B2 - ... - Bn.
My approach to solving this problem with the CGAL library:
In fact, only A is a 2-manifold; each Bi is a 3-dimensional surface with boundary. In order to use the "3D Boolean Operations on Nef Polyhedra" package, I turned these surfaces into closed polyhedra. I saved A as an ".off" file named "blank.off". Each Bi was converted to ".off" format, and all Bi were saved in one file named "sv.off", separated by newline characters. I use CGAL::OFF_to_nef_3() to read "blank.off" into a Nef_polyhedron object nef1. Then I wrote a loop in which I use CGAL::OFF_to_nef_3() to read the next polyhedron from "sv.off" into a Nef_polyhedron object nef2 and do nef1 -= nef2.
The code is as follows:
#include <CGAL/Exact_predicates_exact_constructions_kernel.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/Nef_polyhedron_3.h>
#include <CGAL/IO/Polyhedron_iostream.h>
#include <CGAL/draw_nef_3.h>
#include <CGAL/OFF_to_nef_3.h>
#include <fstream>
#include <ctime>
#include <iostream>

typedef CGAL::Exact_predicates_exact_constructions_kernel Kernel;
typedef CGAL::Polyhedron_3<Kernel> Polyhedron;
typedef CGAL::Nef_polyhedron_3<Kernel> Nef_polyhedron;

int main() {
    Polyhedron p1, p2, res;
    int n = 0;
    std::ifstream fin1("blank.off");
    std::ifstream fin2("sv.off");
    std::cout << "how many polyhedra in sv.off\n";
    std::cin >> n;
    // load the Nef polyhedra and do the boolean operations
    Nef_polyhedron nef1(p1);
    Nef_polyhedron nef2(p2);
    CGAL::OFF_to_nef_3(fin1, nef1);
    fin1.close();
    for (int i = 0; i < n; i++) {
        nef2.clear();
        CGAL::OFF_to_nef_3(fin2, nef2);
        fin2.get(); // skip the newline separating the polyhedra
        nef1 -= nef2;
        std::cout << "A-B" << i + 1 << " has been calculated" << std::endl;
    }
    // convert nef1 to res.off
    nef1.convert_to_polyhedron(res);
    std::ofstream fout("res.off");
    fout << res;
    // draw
    //CGAL::draw(nef1);
    fin2.close();
    return 0;
}
You can download the blank.off and sv.off files from GitHub. Download links: blank.off and sv.off.
Problems that occur while the program is running:
On the fifth iteration (i == 4), the IDE sometimes stops responding or crashes without any exception being thrown. Why does this happen? Is it because the memory usage is too high?
After 12 iterations (i == 11), I want to convert nef1 to an .off file to save the current result. But it fails at the statement "nef1.convert_to_polyhedron(res)", since nef1.is_simple() returns false. I looked in the manual and realized this means nef1 is no longer a 2-manifold. But what causes nef1 to stop being a 2-manifold? Is there a function in CGAL that can modify nef1 to make it a 2-manifold again?
This is not an error, but the computation is too slow. Is there another way to do it faster?
Other problems:
In fact, what I originally got from MATLAB were sets of points for the polyhedron A (without boundary) and the surfaces Bi (with boundaries). In order to perform boolean operations on A and the Bi, I wrote some MATLAB programs to triangulate them and convert each Bi into a closed polyhedron. I know the quality of the triangulated mesh produced this way is not high, which may be the main reason so many errors occur. Can these steps be done entirely with the CGAL library? How?
The most important question is whether the CGAL library is suitable for performing hundreds of consecutive boolean operations on three-dimensional geometry.
Thank you very much for reading this question, I would appreciate it if you could help me.
After playing a little with your inputs, I can say that the most plausible cause of all your problems is that your inputs are not very clean. If you manage to remove degenerate faces from the OFFs in sv.off, everything should run fine. Now, there is a way to make everything much faster: corefine_and_compute_difference.
If I adapt your code to use it, it looks like this:
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/IO/Polyhedron_iostream.h>
#include <CGAL/Polygon_mesh_processing/corefinement.h>
#include <CGAL/draw_surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/IO/polygon_mesh_io.h>
#include <fstream>
#include <iostream>

typedef CGAL::Exact_predicates_inexact_constructions_kernel Epic;
typedef CGAL::Surface_mesh<Epic::Point_3> Surface_mesh;

int main() {
    int n = 0;
    std::cout << "how many polyhedra in sv.off\n";
    std::cin >> n;
    std::ifstream fin1("data/blank.off");
    std::ifstream fin2("data/sv.off");
    // load the meshes and do the boolean operations
    Surface_mesh s1, s2, out;
    CGAL::IO::read_OFF(fin1, s1);
    fin1.close();
    for (int i = 0; i < n; i++) {
        s2.clear();
        CGAL::IO::read_OFF(fin2, s2);
        fin2.get(); // skip the newline separating the polyhedra
        std::ofstream fout("s2.off");
        fout << s2;
        fout.close();
        CGAL::Polygon_mesh_processing::corefine_and_compute_difference(s1, s2, s1);
        std::cout << "A-B" << i + 1 << " has been calculated" << std::endl;
    }
    // write the result to res.off
    std::ofstream fout("res.off");
    fout << s1;
    fout.close();
    // draw
    CGAL::draw(s1);
    fin2.close();
    return 0;
}
But you have to be sure your sv meshes are "clean": they must not have degenerate faces, for example.
From the shape of the first sv mesh, I'd say you can use CGAL's Advancing Front Surface Reconstruction (example here) to get your meshes from your point sets, and they should be clean enough. (I tried with the first one and it worked well.)
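If it helps, here is a hedged sketch of that point-set route (my own illustration; the file name points.xyz is an assumption, and the facets come back as index triples into the point vector):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Advancing_front_surface_reconstruction.h>
#include <algorithm>
#include <array>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::Point_3 Point_3;
typedef std::array<std::size_t, 3> Facet; // indices into the point vector

int main() {
    std::ifstream in("points.xyz"); // one "x y z" point per line
    std::vector<Point_3> points;
    std::copy(std::istream_iterator<Point_3>(in), std::istream_iterator<Point_3>(),
              std::back_inserter(points));
    std::vector<Facet> facets;
    CGAL::advancing_front_surface_reconstruction(points.begin(), points.end(),
                                                 std::back_inserter(facets));
    std::cout << points.size() << " points, " << facets.size() << " facets\n";
    return 0;
}

The resulting index triples can then be assembled into a Surface_mesh, e.g. via CGAL::Polygon_mesh_processing::polygon_soup_to_polygon_mesh, before running the boolean operations.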

What are the normal methods for achieving texture mapping with raytracing?

When you create a BLAS (bottom level acceleration structure) you specify any number of vertex/index buffers to be part of the structure. How does that end up interacting with the shader, and how does it get specified in the descriptor set? How should I link these structures with materials?
How is texture mapping usually done with raytracing? I saw some sort of "materials table" in Q2RTX but the documentation is non-existent and the code is sparsely commented.
A common approach is to use a material buffer in combination with a texture array that is addressed in the shaders where you require the texture data. You then pass the material id, e.g. per-vertex or per-primitive, and use that to dynamically fetch the material and, with it, the texture index. Due to the requirements of Vulkan ray tracing you can simplify this by using the VK_EXT_descriptor_indexing extension (spec), which makes it possible to create a large descriptor set containing all textures required to render your scene.
The relevant shader parts:
// Enable required extension
...
#extension GL_EXT_nonuniform_qualifier : enable

// Material definition
struct Material {
    int albedoTextureIndex;
    int normalTextureIndex;
    ...
};

// Bindings
layout(binding = 6, set = 0) readonly buffer Materials { Material materials[]; };
layout(binding = 7, set = 0) uniform sampler2D[] textures;
...

// Usage
void main()
{
    Primitive primitive = unpackTriangle(gl_PrimitiveID, ...);
    Material material = materials[primitive.materialId];
    vec4 color = texture(textures[nonuniformEXT(material.albedoTextureIndex)], uv);
    ...
}
In your application you then create a buffer that stores the materials generated on the host, and bind it to the binding point of the shader.
For the textures, you pass them as an array of textures. An array texture would be an option too, but isn't as flexible due to its same-size-per-slice limitation. Note that the textures array does not have a size limitation in the above example, which is made possible by VK_EXT_descriptor_indexing and is only allowed for the final binding in a descriptor set. This adds some flexibility to your setup.
As for passing the material index that you fetch the data from: the easiest approach is to pass that information along with your vertex data, which you'll have to access/unpack in your shaders anyway:
struct Vertex {
    vec4 pos;
    vec4 normal;
    vec2 uv;
    vec4 color;
    int32_t materialIndex;
};
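On the host side, here is a minimal sketch of the matching material buffer (my own illustration, not from the answer; with std430 layout, two 32-bit ints pack tightly, so the plain C++ struct matches the GLSL Material byte-for-byte):

#include <cstdint>
#include <vector>

// Mirrors the GLSL Material struct above.
struct Material {
    std::int32_t albedoTextureIndex; // index into the sampler2D[] at binding 7
    std::int32_t normalTextureIndex;
};
static_assert(sizeof(Material) == 8, "two tightly packed 32-bit ints");

int main() {
    std::vector<Material> materials;
    materials.push_back({0, 1});
    materials.push_back({2, 3});
    // Upload materials.data() (materials.size() * sizeof(Material) bytes) to a
    // storage buffer and bind it at set 0, binding 6 to match the shader.
    return 0;
}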

OpenGL 4.5 - Shader storage: write in vertex shader, read in fragment shader

Both my fragment and vertex shaders contain the following two declarations:
struct Light {
    mat4 view;
    mat4 proj;
    vec4 fragPos;
};

layout (std430, binding = 0) buffer Lights {
    Light lights[];
};
My problem is that the last field, fragPos, is computed by the vertex shader as shown below, but the fragment shader does not see the changes the vertex shader makes to fragPos (or any changes at all):
aLight.fragPos = bias * aLight.proj * aLight.view * vec4(vs_frag_pos, 1.0);
... where aLight is lights[i] in a loop. As you can imagine, I'm computing the position of the vertex in the coordinate system of each light present, to be used in shadow mapping. Any idea what's wrong here? Am I doing something fundamentally wrong?
Here is how I initialize my storage:
struct LightData {
    glm::mat4 view;
    glm::mat4 proj;
    glm::vec4 fragPos;
};

glGenBuffers(1, &BBO);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, BBO);
glNamedBufferStorage(BBO, lights.size() * sizeof(LightData), NULL, GL_DYNAMIC_STORAGE_BIT);

// lights is a vector of a wrapper class for LightData.
for (unsigned int i = 0; i < lights.size(); i++) {
    glNamedBufferSubData(BBO, i * sizeof(LightData), sizeof(LightData), &(lights[i]->data));
}
It may be worth noting that if I move fragPos to a fixed-size out array in the vertex shader (out fragPos[2]), leave the results there, add the matching in fragPos[2] in the fragment shader, and use that for the rest of my computations, then things are OK. So what I want to know is why my fragment shader does not see the numbers crunched by the vertex shader.
I will not be very precise, but I will try to explain why your fragment shader does not see what your vertex shader writes.
When your vertex shader writes some values into your buffer, those values are not required to be written to video memory; they may be stored in a kind of cache. The same applies when your fragment shader reads the buffer: it may read values from a cache (which is not the same cache the vertex shader used).
To avoid this problem, you must do two things. First, declare your buffer as coherent (in the GLSL): layout(std430) coherent buffer ...
Once you have that, after your writes you must issue a barrier (loosely: "careful, I wrote values into the buffer; the values you are about to read may be stale, please pick up the new ones I wrote").
How do you do such a thing?
Use the function memoryBarrierBuffer after your writes: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/memoryBarrierBuffer.xhtml
BTW: don't forget to divide by w after your projection.

What is the correct way to structure a Cg program?

This tutorial uses explicit OUT structures, e.g.:
struct C3E1v_Output {
    float4 position : POSITION;
    float4 color : COLOR;
};

C3E1v_Output C3E1v_anyColor(float2 position : POSITION,
                            uniform float4 constantColor)
{
    C3E1v_Output OUT;
    OUT.position = float4(position, 0, 1);
    OUT.color = constantColor; // Some RGBA color
    return OUT;
}
But looking at one of my shaders I have explicit in/out parameters:
float4 slice_vp(
    // Vertex Inputs
    in float4 position : POSITION,   // Vertex position in model space
    out float4 oposition : POSITION,
    // Model Level Inputs
    uniform float4x4 worldViewProj) : TEXCOORD6
{
    // Calculate output position
    float4 p = mul(worldViewProj, position);
    oposition = p;
    return p;
}
I'm having some problems using HLSL2GLSL with this and wondered if my Cg format is to blame (even though it works fine as a Cg script). Is there a 'right' way or are the two simply different ways to the same end?
As you've seen, both ways work. However, I strongly endorse using structs -- especially for the output of vertex shaders (input of fragment shaders). The reasons are less to do with what the machine likes (it doesn't care), and more to do with creating code that can be safely re-used and shared between projects and people. The last thing you want to have to find and debug is a case where one programmer has assigned a value to TEXCOORD1 in some cases and is trying to read it from TEXCOORD2 in (some) other cases. Or any permutation of register mis-match. Use structs, your life will be better.