CGAL surface mesh - removing a face

Does the remove_face method change the mesh indices?
I get a segmentation fault with this code:
auto face_iterator = m.faces_around_target(m.halfedge(v3));
for (auto i = face_iterator.begin(); i != face_iterator.end(); ++i) {
    m.remove_face(*i);
}
According to my understanding of the documentation, as long as I don't call collect_garbage the faces are only marked as removed, so the indices should not change. What is happening?
Does remove_face also remove the face's halfedges / make them point to null_face? It does not seem to do so, and I don't understand why not.
Thank you.

The face is indeed simply marked as removed, but its iterator is invalidated by the removal (remember that the iterator only goes over non-removed elements).
As stated in the doc, remove_face "removes face f from the halfedge data structure without adjusting anything".
You need to use a higher-level function such as CGAL::Euler::remove_face().
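A minimal sketch of that approach, assuming a collect-then-remove pattern (not spelled out in the original answer): copy the face indices into a vector first, so the removal never invalidates the range being iterated, then delete each face with CGAL::Euler::remove_face(), which also updates the surrounding halfedges.
#include <CGAL/Simple_cartesian.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/boost/graph/Euler_operations.h>
#include <vector>
typedef CGAL::Simple_cartesian<double> Kernel;
typedef CGAL::Surface_mesh<Kernel::Point_3> Mesh;
void remove_faces_around(Mesh& m, Mesh::Vertex_index v3)
{
    // Collect first: the circulator range must not be mutated while iterating.
    std::vector<Mesh::Face_index> faces;
    for (Mesh::Face_index f : m.faces_around_target(m.halfedge(v3)))
        if (f != Mesh::null_face()) // a boundary position has no face
            faces.push_back(f);
    // Euler::remove_face turns the face's halfedges into border halfedges
    // (or removes them if they were border halfedges already).
    for (Mesh::Face_index f : faces)
        CGAL::Euler::remove_face(m.halfedge(f), m);
}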

Specifying push constant block offset in HLSL

I am writing a Vulkan renderer; I use glslangValidator with HLSL for shaders and am trying to implement push constants.
[[vk::push_constant]]
cbuffer cbFragment {
    float4 imageColor;
    float4 aaaa;
};
[[vk::push_constant]]
cbuffer cbMatrices {
    float4 bbbb;
};
The annotation [[vk::push_constant]] works: I use spirv_reflect for reflection, both push constants show up, and they work as intended.
The problem I'm having is that they seemingly overlap: if I assign "bbbb" a value, "imageColor" is affected in exactly the same way, and vice versa. In the reflection data both push constant blocks have offset 0, which explains the issue. However, I seem to be completely unable to change the offset of either push constant block.
[[vk::offset(x)]] does not work at all: it affects neither the individual member offsets nor the offsets of the push constant blocks. The only offset that works at all is HLSL's built-in packoffset, which only applies to the buffer members. And although it might technically be a solution to offset the members of one push constant block past the range of the other, I can hardly believe that is sensible; it also makes the validation layer fail, because offsetting the individual members simply increases the size of the push constant block unnecessarily, and the overlap itself is still present.
I would greatly appreciate any help on this matter and am willing to provide any necessary clarification. Thank you very much!
Push constants live in a single chunk of contiguous memory. The compiler doesn't try to append multiple blocks into that memory; like with the GLSL syntax, it's intended to just have one block containing all the push constant data.
This is consistent with other places where the compiler has to pack variables in a block: it only packs within a block, not across multiple blocks. Two separate non-push-constant cbuffers refer to two distinct buffers in memory, each with contents beginning at offset zero within its own buffer. There is only one "push constant buffer", hence you should decorate only one cbuffer with vk::push_constant.
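On the shader side the fix is thus to merge imageColor, aaaa and bbbb into one cbuffer marked [[vk::push_constant]]. On the API side that single block then matches exactly one VkPushConstantRange; a minimal C++ sketch follows (the fragment-only stage flags and the 48-byte size are assumptions based on the three float4 members above):
#include <vulkan/vulkan.h>
// One push-constant block in the shader maps to one range in the pipeline layout.
VkPushConstantRange range{};
range.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT; // assumption: used by the fragment stage only
range.offset = 0;
range.size = 3 * 4 * sizeof(float); // imageColor + aaaa + bbbb = 48 bytes
VkPipelineLayoutCreateInfo layoutInfo{};
layoutInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
layoutInfo.pushConstantRangeCount = 1;
layoutInfo.pPushConstantRanges = &range;
// ... pass &layoutInfo to vkCreatePipelineLayout as usual.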

How to get a Halide buffer with data on the GPU?

I'm new to Halide. I have a pointer to data that lives on the GPU, and I want to get a Halide buffer from this pointer without copying the data. I have searched a lot and found this /halidebuffer-on-gpu , which says that using Buffer::device_wrap_native will be helpful. I have read the docs of Buffer::device_wrap_native, but I'm a little confused about what value I should pass for device_interface; the docs for device_interface don't help me much.
For device_interface you want to pass halide_cuda_device_interface(), halide_opencl_device_interface(), or similar, depending on the API your data lives in. These functions are all defined in the HalideRuntime*.h headers. Here's the full list:
HalideRuntimeCuda.h: halide_cuda_device_interface();
HalideRuntimeD3D12Compute.h: halide_d3d12compute_device_interface();
HalideRuntimeHexagonDma.h: halide_hexagon_dma_device_interface();
HalideRuntimeHexagonHost.h: halide_hexagon_device_interface();
HalideRuntimeMetal.h: halide_metal_device_interface();
HalideRuntimeOpenCL.h: halide_opencl_device_interface();
HalideRuntimeOpenGL.h: halide_opengl_device_interface();
HalideRuntimeOpenGLCompute.h: halide_openglcompute_device_interface();
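A minimal sketch for the CUDA case, assuming an existing device allocation and known extents (the element type, sizes, and pointer are illustrative):
#include "HalideBuffer.h"
#include "HalideRuntimeCuda.h"
#include <cstdint>
// Wrap an existing CUDA allocation in a Halide buffer without copying.
Halide::Runtime::Buffer<float> wrap_device_ptr(uint64_t dev_ptr, int width, int height)
{
    // Shape only, no host allocation: the data stays on the GPU.
    Halide::Runtime::Buffer<float> buf(nullptr, width, height);
    buf.device_wrap_native(halide_cuda_device_interface(), dev_ptr);
    buf.set_device_dirty(); // the valid copy of the data is the device one
    return buf;
}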

Does the ordering of mesh elements change from run to run for a constrained triangulation under CGAL?

I iterate over finite_vertices, finite_edges and finite_faces after generating a constrained Delaunay triangulation with Lloyd optimization. I am on VS2012 using CGAL 4.12 in release mode. For a given case the finite_vertices list is repeatable (as is the vertex list under finite_faces); however, the ordering of the edges in finite_edges seems to change from run to run.
for (auto eit = cdtp.finite_edges_begin(); eit != cdtp.finite_edges_end(); ++eit)
{
    const auto isConstrainedEdge = cdtp.is_constrained(*eit);
    auto& cFace = *(eit->first);
    auto cwVert = cFace.vertex(cFace.cw(eit->second));
    auto ccwVert = cFace.vertex(cFace.ccw(eit->second));
}
I use the above code snippet to extract the vertex list, and the vertex pair associated with a given edge changes from run to run.
Any help resolving this is appreciated, as I am looking for consistent behavior in the code. My triangulation involves many line constraints on a two-dimensional domain.
I was told it is likely repeatable behaviour in practice, but there is no guarantee of order. IIRC the documentation says the traversal order is not guaranteed, so I think it is best to assume the iterators' traversal is not deterministic and could change.
You could use any of the _info extensions to embed information in the face, edge, etc. (a hash, perhaps?) which you could then check against to detect a change.
In my use case, I wanted to traverse the mesh in parallel and OpenMP didn't support the iterators, so I hold a vector of the Face_handles in memory, which I can then easily index over. In conjunction with the _info data, you could use this to build a vector of edges, faces, etc. with a guaranteed order, using unique information in the ->info() field; a sketch of this follows below.
Another _info example.
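A minimal sketch of that idea, assuming an int id stored in ->info() and assigned uniquely per vertex (with Lloyd optimization the ids would have to be assigned after the optimization step):
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>
#include <CGAL/Triangulation_vertex_base_with_info_2.h>
#include <algorithm>
#include <utility>
#include <vector>
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Triangulation_vertex_base_with_info_2<int, K> Vb;
typedef CGAL::Constrained_triangulation_face_base_2<K> Fb;
typedef CGAL::Triangulation_data_structure_2<Vb, Fb> Tds;
typedef CGAL::Constrained_Delaunay_triangulation_2<K, Tds> CDT;
// Collect every finite edge as a pair of vertex ids and sort, so the
// resulting order is identical on every run regardless of iterator order.
std::vector<std::pair<int, int>> stable_edge_list(const CDT& cdt)
{
    std::vector<std::pair<int, int>> edges;
    for (auto eit = cdt.finite_edges_begin(); eit != cdt.finite_edges_end(); ++eit) {
        const auto& f = *(eit->first);
        int a = f.vertex(f.cw(eit->second))->info();
        int b = f.vertex(f.ccw(eit->second))->info();
        edges.emplace_back(std::min(a, b), std::max(a, b));
    }
    std::sort(edges.begin(), edges.end());
    return edges;
}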

How to add a new syntax element in HM (HEVC Test Model)

I've been working on the HM reference software for a while to improve something in the intra prediction part. A new intra prediction algorithm has now been added to the code, and I let the encoder choose between my algorithm and HM's default algorithm (according to the RD cost, of course).
What I need now is to signal a flag for each PU, so that the decoder can perform the same algorithm that the encoder chose in the rate-distortion loop.
I want to know exactly what I should do to properly add this one-bit flag to the stream without breaking anything in the code.
Assuming that I want to use a CABAC context model to keep track of my flag's statistics, what else should I do besides:
adding a new context model like ContextModel3DBuffer m_cCUIntraAlgorithmSCModel to the TEncSbac.h file;
properly initializing the model (on both the encoder and decoder sides) by looking at how HM initializes its other context models;
calling m_pcBinIf->encodeBin(myFlag, cCUIntraAlgorithmSCModel) on the encoder side and m_pcTDecBinIf->decodeBin(myFlag, cCUIntraAlgorithmSCModel) on the decoder side.
I took these three steps, but apparently something breaks.
PS: Even equiprobable signaling (i.e. without using CABAC contexts) would be useful. I just want to send this flag peacefully!
Thanks in advance.
I finally solved this problem: it was a bug in the CABAC context initialization.
But I want to share the experience, as many people may want to do the same thing.
The three steps that I explained are essentially necessary to add a new syntax element, but one must be very careful about the following:
First, you need to decide whether you want a separate context model for your syntax element or want to reuse an existing one. If you want a separate context, you should define a ContextModel3DBuffer, and the best way to do that is to find a similar syntax element in the code, then duplicate its ContextModel3DBuffer definition and ALL of its occurrences in the code. This way you can be sure you have considered everything.
Encoding of each syntax element happens in two different places: first in the RDO loop, to make a "decision", and second during the actual encoding phase, when the decisions are being encoded (e.g. in the encodeCtu function).
The order of encoding/decoding syntax elements must be the same on the encoder and decoder sides. For example, if your new syntax element is encoded after splitFlag and before predMode on the encoder side, you must decode it exactly between splitFlag and predMode on the decoder side; a sketch of this symmetry follows below.
The context model is implemented as a 3D matrix in order to track the statistics of syntax elements separately for different block sizes, components, etc. This means that when you call encodeBin, you must make sure the correct index is being used. I made stupid mistakes in this part!
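As a hedged illustration of the encoder/decoder symmetry (the context buffer name follows the question; the PU-level getter/setter and the fixed context index 0 are hypothetical placeholders, not actual HM functions):
// Encoder side (e.g. in TEncSbac), written in HM's coding style:
UInt uiSymbol = pcCU->getMyIntraAlgFlag(uiAbsPartIdx) ? 1 : 0; // hypothetical getter
m_pcBinIf->encodeBin(uiSymbol, m_cCUIntraAlgorithmSCModel.get(0, 0, 0));
// Decoder side (e.g. in TDecSbac), at exactly the same point in the syntax order:
UInt uiSymbol = 0;
m_pcTDecBinIf->decodeBin(uiSymbol, m_cCUIntraAlgorithmSCModel.get(0, 0, 0));
pcCU->setMyIntraAlgFlag(uiAbsPartIdx, uiSymbol != 0); // hypothetical setter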
Apart from the above remarks, I found the function getState very useful for debugging. It returns the state of your CABAC context model at any place in the code where you have access to the model. It is very useful for comparing the states at the same point on the encoder and decoder sides when you have a mismatch. For example, it happens a lot that you encode a 1 but decode a 0; in that case, check the state of your CABAC context right before encoding and right before decoding. They should be the same, and if they are not, track back to find the first place of mismatch.
I hope it was helpful.

Computation of fault interpretation and polyline intersections

I am planning to build functionality that tests whether a borehole crosses a fault. My first idea was to make a workstep component that takes a Borehole and a Fault Interpretation as input and returns the number of intersections. I have already made a workstep that checks whether a fault interpretation intersects a surface. The core of that function is the following:
ICoordinateReferenceSystem inputCRS = PetrelProject.PrimaryProject.CoordinateReferenceSystem;
SpatialUnitsPolicy unitsPolicy = SpatialUnitsPolicy.AllDataInSI;
SpatialContext spatialCtx = new SpatialContext(inputCRS, unitsPolicy);
ISurfaceIntersectionService sis = CoreSystem.GetService<ISurfaceIntersectionService>(arguments.Surface);
foreach (FaultInterpretationPolyline p in arguments.Fault.GetPolylines()) {
    IEnumerable<PolylineSurfaceIntersection> intersections = sis.GetSurfacePolyLineIntersection(arguments.Surface, p.Polyline);
    foreach (PolylineSurfaceIntersection intersection in intersections) {
        arguments.NumberOfIntersections++;
    }
}
The above works fine, and I was thinking I could do something along the same lines to compute the intersection between a polyline (a well trajectory) and a surface generated from the collection of polylines representing the fault interpretation. The key question is: is there a way to get/generate a surface from a collection of polylines? The fault interpretation can be displayed as a (triangulated) surface; is this surface accessible from the API? The surface returned from the API must be usable as an argument to ISurfaceIntersectionService. If this is not possible through the Ocean API, is there a way the user could prepare the fault interpretation up front, making surfaces from the fault interpretations? Or maybe there is a completely different approach to solving the above efficiently?
The problem you will have is the creation of the surface. Currently you can only create a RegularHeightFieldSurface, a surface whose points are located on a regular lattice. A fault interpretation will not normally fit this model, as its points are not picked on a regular lattice. Creating a surface from a set of fault interpretation picks is therefore the problem.