CGAL Kruskal isolated edge

I have created a finite Delaunay graph and ran Kruskal's MST algorithm on it, and it seems that in one of my tests I have an isolated edge in the MST (not connected to the others).
Is that supposed to happen?
How can I prevent that from happening, so that all my edges are connected to at least one other edge of the graph?
Thanks!
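For reference, a minimal sketch of the kind of setup I mean (assuming CGAL's Delaunay_triangulation_2 together with Boost's kruskal_minimum_spanning_tree; the sample points, squared-length weights and index map are only illustrative choices):

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_2.h>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/kruskal_min_spanning_tree.hpp>
#include <iterator>
#include <map>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Delaunay_triangulation_2<K>                   Delaunay;
typedef Delaunay::Vertex_handle                             Vertex_handle;
typedef boost::adjacency_list<
    boost::vecS, boost::vecS, boost::undirectedS,
    boost::no_property,
    boost::property<boost::edge_weight_t, double> >         Graph;
typedef boost::graph_traits<Graph>::edge_descriptor         GEdge;

int main()
{
    std::vector<K::Point_2> points = {
        K::Point_2(0, 0), K::Point_2(1, 0), K::Point_2(0, 1),
        K::Point_2(2, 2), K::Point_2(3, 1)
    };

    Delaunay dt;
    dt.insert(points.begin(), points.end());

    // Number the finite triangulation vertices so they map onto Boost graph vertices.
    std::map<Vertex_handle, int> index;
    int n = 0;
    for (auto v = dt.finite_vertices_begin(); v != dt.finite_vertices_end(); ++v)
        index[v] = n++;

    // Copy every finite Delaunay edge into a Boost graph, weighted by squared length.
    Graph g(n);
    for (auto e = dt.finite_edges_begin(); e != dt.finite_edges_end(); ++e) {
        Vertex_handle u = e->first->vertex(dt.cw(e->second));
        Vertex_handle v = e->first->vertex(dt.ccw(e->second));
        double w = dt.segment(*e).squared_length();
        boost::add_edge(index[u], index[v], w, g);
    }

    // Kruskal returns the MST edges; on a connected input graph the
    // resulting edge set forms a single connected tree.
    std::vector<GEdge> mst;
    boost::kruskal_minimum_spanning_tree(g, std::back_inserter(mst));
    return 0;
}
```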


Do I need to create all the surfaces before creating the device?

just a quick question here...
So, as you know, when you create a Vulkan device, you need to make sure the physical device you chose supports presenting to a surface, via vkGetPhysicalDeviceSurfaceSupportKHR(), right? That means you need to create the surface before creating the device.
Now let's say that at run time the user presses a button which opens a new window, and stuff is going to be drawn to that window, so you need a new surface, right? But the device has already been created...
Does this mean I have to create all the surfaces before I create the device, or do I have to recreate the device? And if I need to recreate it, what happens to all the stuff that has been created/allocated from that device?
Does this mean I have to create all the surfaces before I create the device or do I have to recreate the device
Neither.
If the physical device cannot draw to the surface... then you need to find a physical device which can. This could happen if you have 2 GPUs, each plugged into a different monitor. Each GPU can only draw to surfaces that are on its monitor (though sometimes there are ways for implementations to get around this).
So if the physical device for the logical VkDevice you're using cannot draw to the surface, you don't "recreate" the device. You create a new device, one which is almost certainly unable to draw to the surface that the old device could draw to. So in this case, you'd need 2 separate devices to render to the two surfaces.
But for most multi-monitor cases this isn't an issue. If you have a single GPU with multi-monitor output support, then any windows you create will almost certainly be compatible with that GPU. Integrated GPU + discrete GPU cases also tend to support the same surfaces.
The Vulkan API simply requires that you check to see if there is an incompatibility, and then deal with it however you can. Which could involve moving the window to the proper monitor or other OS-specific things.
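As a minimal sketch of that check (the helper name is made up and error handling is omitted), when a new window and surface appear at run time you can simply ask the physical device behind your existing VkDevice whether any of its queue families can present to the new surface:

```cpp
#include <vulkan/vulkan.h>

// Returns true if any queue family of `physicalDevice` can present to `surface`.
bool canPresentToSurface(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface)
{
    uint32_t familyCount = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &familyCount, nullptr);

    for (uint32_t family = 0; family < familyCount; ++family)
    {
        VkBool32 supported = VK_FALSE;
        vkGetPhysicalDeviceSurfaceSupportKHR(physicalDevice, family, surface, &supported);
        if (supported == VK_TRUE)
            return true;   // the existing device's physical device can present to the new surface
    }
    return false;          // this surface would need a device created from a different physical device
}
```

If this returns true (the common case on a single GPU driving several monitors), you keep using the device you already have and just build a new swapchain for the new surface.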

How does the GPU process non-graphic data in parallel?

The introduction of programmable shaders in the graphics pipeline enabled the GPGPU concept, which makes use of the GPU as a general processing engine suited for parallel data.
However, as far as I know, because the GPU is still used far more for graphics processing than for GPGPU, it contains lots of fixed graphics pipeline stages that cannot be programmed.
If my understanding is correct, whenever data is processed by the GPU, regardless of its type (graphics or general), it should pass through the graphics pipeline, which includes both programmable stages and non-programmable fixed stages.
Does that mean non-graphics processing has to go through the graphics stages even though it doesn't make use of them? Or can it bypass those fixed stages used for graphics? If someone can explain how the GPU pipeline works for GPGPU, I would appreciate it.
TL;DR:
GPGPU completely bypasses the rendering pipeline, but the pipeline is still used today.
GPUs consist of two main parts (in relation to your question). The first one is the processing part, which consists of the memory, registers, warp units, dispatchers and streaming processors. The other part is a set of controllers that are responsible for geometry processing and the graphics pipeline. Those controllers just issue commands to the streaming processors on how to process the data for each of the steps of the rendering pipeline, either hardwired or based on user-supplied shaders. NVidia calls them the "PolyMorph Engine", AMD the "Geometric Processor".
Historically, some of those controllers were hardwired to do things a single way, so you could only program the vertex shader, fragment shader and pixel shader. The tessellation controller, for example, was hardwired on the GPU and not user-programmable. As demands grew, more and more of those controllers became user-programmable, and today most of them are completely programmable (Wikipedia).
In the early days of GPGPU, the only way to do computing was to hack the available shaders by putting your input data into a texture, rendering a full-screen quad to calculate the result, and then reading the rendered image back (see slide 26 of this introduction).
With CUDA, NVidia allowed users not only to program the shaders/PolyMorph Engine, but also to interact directly with the streaming processors and execute code on them (see slides 31 & 32).
This does not mean that the graphics pipeline became obsolete, but now there is a way to completely bypass it and run code directly on the GPU processors. Nvidia has a nice explanation of how the pipeline works today, where you can also see both the PolyMorph Engine and the streaming processors, here.
The graphics pipeline still helps the developer by offloading repetitive and more complicated parts of the process, like managing memory, managing warps, passing data around and all that stuff. Theoretically you could probably write your own pipeline directly on the streaming processors using CUDA and then render the result, but it would be tedious. Just as writing GPGPU code using shaders would be tedious.
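To make that bypass concrete, here is a minimal CUDA C++ sketch (the function names and launch sizes are only illustrative): the kernel is launched as a compute grid and runs directly on the streaming multiprocessors, with no vertex, rasterization or fragment stage involved at any point.

```cpp
#include <cuda_runtime.h>

// Element-wise vector addition; one thread handles one element.
__global__ void addVectors(const float* a, const float* b, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a[i] + b[i];
}

// Host-side wrapper: copy inputs to the GPU, launch the grid, copy the result back.
void addOnGpu(const float* hostA, const float* hostB, float* hostOut, int n)
{
    float *dA, *dB, *dOut;
    size_t bytes = n * sizeof(float);
    cudaMalloc(&dA, bytes);  cudaMalloc(&dB, bytes);  cudaMalloc(&dOut, bytes);
    cudaMemcpy(dA, hostA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hostB, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    addVectors<<<blocks, threads>>>(dA, dB, dOut, n);  // a compute launch, not a draw call

    cudaMemcpy(hostOut, dOut, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dA);  cudaFree(dB);  cudaFree(dOut);
}
```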
Although old GPUs had the pipeline hardcoded in the chip, a modern GPU is essentially just a large ASIC that can process vectorized data at stupidly fast speeds. It is humans who define what it can do. So the render pipeline is defined in the graphics library, like OpenGL, not in the GPU. Thus the GPU does not care what it is computing; as long as it is vectorized data, it can do all the computation needed and give you a result.

What is the relationship of RenderPass and Pipeline in Vulkan?

What is the logical relationship between RenderPass and Pipeline in Vulkan?
If you ignore RenderPass, my understanding of the rendering process is: first the vertex data and texture data prepared by the application layer are submitted to the driver, then they go through the various stages of the pipeline, and after writing to the framebuffer, a rendering is complete.
So what is the responsibility of the RenderPass? Is it an abstraction that provides metadata for each stage of rendering (such as formats), or does it have some other role?
And what kind of relationship do RenderPass and Pipeline have? Is it an ownership relationship, for example each Pipeline belonging to a Subpass? Or a dependency, for example the final output of the Pipeline being handled by the RenderPass? Or is it something else?
At the end of the day Vulkan is a nice, modern-ish OO API. All the objects in Vulkan are practically defined by nothing more than the parameters they take. Just saying this to ease your learning: you can look at vkCreateX and largely understand what VkX does in Vulkan.
VkPipeline is a GPU context. Think of the GPU as an FPGA (which it isn't, but bear with me). Doing vkCmdBindPipeline would set the GPU to the given gate configuration. Except the GPU is not an FPGA; in our case it sets the GPU to a state where it can execute the shader programs and fixed-function pipeline stages defined by the VkPipeline.
VkRenderPass is a data-oriented thing. It is necessitated by tiled-architecture GPUs (mobile GPUs). On desktop GPUs it can still fill the role of an optimization oracle and/or support partially-tiled architectures (or any other architecture that can make use of it).
Tiled-architecture GPUs need to "load" an image/buffer from general-purpose RAM into on-chip memory. When they are done, they "store" their results back to RAM.
VkRenderPass defines what kind of inputs (attachments) will be needed. It defines how they get loaded and stored before and after the render pass instance*, respectively. It also has subpasses. It defines the synchronization between them (replacing vkCmdPipelineBarrier calls). And it defines the purpose each render pass attachment serves (e.g. whether it is a color buffer or a depth buffer).
* A render pass instance is the thing created from a render pass by vkCmdBeginRenderPass. Yeah, not confusing at all, right?
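To make that "metadata" role concrete, here is a minimal sketch (the helper name and format are placeholders, error handling is omitted) of a single-subpass VkRenderPass with one color attachment; the load/store behaviour and the attachment's purpose are exactly the things the render pass declares:

```cpp
#include <vulkan/vulkan.h>

VkRenderPass createSimpleRenderPass(VkDevice device, VkFormat colorFormat)
{
    VkAttachmentDescription color = {};
    color.format         = colorFormat;
    color.samples        = VK_SAMPLE_COUNT_1_BIT;
    color.loadOp         = VK_ATTACHMENT_LOAD_OP_CLEAR;      // how the attachment gets "loaded"
    color.storeOp        = VK_ATTACHMENT_STORE_OP_STORE;     // how it gets "stored" back
    color.stencilLoadOp  = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
    color.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
    color.initialLayout  = VK_IMAGE_LAYOUT_UNDEFINED;
    color.finalLayout    = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;

    VkAttachmentReference colorRef = {};
    colorRef.attachment = 0;                                  // index into the attachment array
    colorRef.layout     = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;

    VkSubpassDescription subpass = {};
    subpass.pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
    subpass.colorAttachmentCount = 1;
    subpass.pColorAttachments    = &colorRef;                 // "this attachment serves as a color buffer"

    VkRenderPassCreateInfo info = {};
    info.sType           = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
    info.attachmentCount = 1;
    info.pAttachments    = &color;
    info.subpassCount    = 1;
    info.pSubpasses      = &subpass;

    VkRenderPass renderPass = VK_NULL_HANDLE;
    vkCreateRenderPass(device, &info, nullptr, &renderPass);
    return renderPass;
}
```

A graphics VkPipeline is then created against this render pass (and a specific subpass index), and vkCmdBindPipeline is only valid inside a render pass instance whose render pass is compatible with the one the pipeline was built for.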

HoloLens external rendering

Does anyone have a good solution for external rendering for Microsoft HoloLens apps? To be specific: is it possible to let my laptop render an amount of 3D objects that is too much for the HoloLens GPU, and then display them on the HoloLens over Wi-Fi, including spatial mapping and interaction?
It's possible to render remotely, both directly from the Unity editor and from a built application.
While neither achieves your goal of a "good solution", they both allow very intensive applications to at least run at all.
This walks you through how to add it to an app you're building.
https://learn.microsoft.com/en-us/windows/mixed-reality/add-holographic-remoting
This is for running directly from the editor:
https://blogs.unity3d.com/2018/05/30/create-enhanced-3d-visuals-with-holographic-emulation-in-uwp/
I don't think this is possible, since you can't really access the OS or the processor at all on the HoloLens. Even if you do manage to send the data to a third party to process, the data will still need to be run back through the HoloLens, which is really just the same as before.
You may find a way to perhaps hook up a VR backpack to it but even then, I highly doubt it would be possible.
If you are having trouble rendering 3D objects, then you should reduce the number of triangles, get a lower resolution shader on it, or reduce the size of the object. The biggest factor in processing 3D objects on the HoloLens is how much space is being drawn on the lens. If your object takes up 25% of the view instead of 100% it will be easier to process on the HoloLens.
Also, if you can't avoid having a lot of objects in the scene, maybe check out LOD (level of detail), which reduces the resolution of objects the farther away they are, and vice versa.

Why is rising edge preferred over falling edge

Flip-flops (registers, ...) are usually triggered by a rising or falling edge. But in code you mostly see an if-clause which uses rising-edge triggering. In fact, I have never seen code that uses the falling edge.
Why is that? Is it just that programmers naturally use the rising edge because they are used to it, or is there some physical/analog reason why rising-edge designs are faster/simpler/more energy-efficient/...?
As zennehoy says, it's convention - but one going back to when logic was done in discrete chips with a few gates or flip-flops in them. Those packages of flip-flops were always rising-edge triggered... as far as I recall, but maybe someone with better recollection of the yellow books will correct me!
So when synthesis came along, no doubt everyone felt comfortable carrying on that way!
Nothing more than a matter of convention.
Using the rising edge is more common, and most component libraries use the rising edge. This means that using those libraries requires you to also use rising edges, or add clock synchronization logic, or keep your paths so short that the delay is less than half a clock cycle. Just using rising edges everywhere is by far the easiest.
When you design a (single-edge) DFF in a chip, you must choose at which (rising or falling) clock edge it will operate. This decision is independent from the implementation approach (i.e., master-slave or pulsed-latch), and it does not alter the number of transistors in the DFF itself.
Since positive-edge is the typical default (as in FPGAs), to operate at the negative clock edge the usual procedure is to simply use a positive-edge DFF with an inverted version of the clock signal connected to its clock port. If this is done locally (near the DFF clock port), then two extra transistors are indeed needed (to build a CMOS inverter for the clock).
It is somewhat a matter of convention, but if you look at the design of falling- versus rising-edge flip-flops, the only difference is an added inverter, and it turned out to be two transistors fewer on the rising edge.
But there are designs out there that use both; for example, in some data caches you write on the rising edge and read on the falling edge, or vice versa, depending on design choices!
Good question - try it out, or take a course (maybe online) on digital integrated circuits.