CGAL 4.7 Arrangement of Bezier curves crashes on some inputs

I am using the Arrangements package of CGAL 4.7 (64-bit, on Windows) to build 2D arrangements of Bezier curves for a research project. Unfortunately, I keep experiencing crashes on some, usually (near-)degenerate, input when inserting Bezier curves.
As a simple example, I have added the contents of two data files that can be read by the Bezier_curve example project provided with CGAL 4.7 (found in .../CGAL-4.7/examples/Arrangement_on_surface_2).
The example crashes for me if I feed it either of the two files.
The example works correctly for me if I use the Bezier.dat file that comes with it, and on some other test cases that I tried.
Bezier_crash1.dat - (very simple test case)
1
4 0 100 100 0 100 200 0 100
Bezier_crash2.dat - (encountered and recorded in my own experiments)
6
4 2581853/262144 174874249452033/4398046511104 5673646619833933/35184372088832 2756888783932123/70368744177664 6296137/131072 15962699/131072 105/2 5687589/65536
4 105/2 5687589/65536 7466423/131072 6787657/131072 4884829/32768 1213073/16384 120 13200823/131072
4 120 13200823/131072 13772385/131072 14995659/131072 8262217/131072 13388069/131072 105/2 5687589/65536
4 105/2 5687589/65536 5500343/131072 9362287/131072 5544234768323137/35184372088832 5711427009345511/140737488355328 2581853/262144 183625004300137/2199023255552
4 696761914568827/4398046511104 3007857/16384 1156274078886441/17592186044416 301767055302015/8796093022208 4173567/65536 1173535/8192 97589/1024 428833/4096
4 97589/1024 428833/4096 8317825/65536 541797/8192 10142101/131072 1505657/16384 9752923/131072 1168223/16384
I don't know if I should (or am allowed to) post the code of the CGAL example; please let me know if it's needed. I also have more crashing test cases than just these two, but I am hoping that these crashes are a problem with my personal CGAL setup, and that the rest will magically be solved when I fix it :)
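For reference, the crashing part reduces to roughly the following sketch. The type definitions follow the Bezier_curves example that ships with CGAL 4.7, and the control points are the ones from Bezier_crash1.dat; treat the details as an approximation of the example, not a verbatim copy.

#include <CGAL/Cartesian.h>
#include <CGAL/CORE_algebraic_number_traits.h>
#include <CGAL/Arr_Bezier_curve_traits_2.h>
#include <CGAL/Arrangement_2.h>
#include <vector>

typedef CGAL::CORE_algebraic_number_traits            Nt_traits;
typedef CGAL::Cartesian<Nt_traits::Rational>          Rat_kernel;
typedef CGAL::Cartesian<Nt_traits::Algebraic>         Alg_kernel;
typedef CGAL::Arr_Bezier_curve_traits_2<Rat_kernel, Alg_kernel, Nt_traits>
                                                      Traits_2;
typedef CGAL::Arrangement_2<Traits_2>                 Arrangement_2;

int main()
{
  // Control points of the single cubic from Bezier_crash1.dat.
  std::vector<Rat_kernel::Point_2> pts;
  pts.push_back(Rat_kernel::Point_2(0, 100));
  pts.push_back(Rat_kernel::Point_2(100, 0));
  pts.push_back(Rat_kernel::Point_2(100, 200));
  pts.push_back(Rat_kernel::Point_2(0, 100));

  Traits_2::Curve_2 B(pts.begin(), pts.end());
  Arrangement_2 arr;
  CGAL::insert(arr, B);   // the insertion is where the crash occurs
  return 0;
}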

We have fixed the bug that most probably caused this problem. It was in the CGAL component that handles Bezier curves, namely Arr_Bezier_curve_traits_2.h.

Related

Using Alias to model bidirectional constraints in GAMS

I'm working on a big optimization problem in GAMS, so I can't post the entire code here, but I hope you can help me with where I am stuck. I have 4 power nodes in my model that are connected by 2 bidirectional transmission lines (r), where r_a and r_b are the current transmission line capacities. Power can flow in both directions, and I'm tracking power going from A to A' and B to B' as well as from A' to A and B' to B. So there are 4 power flows (f) in 2 transmission lines (r). My decision variables are how much capacity upgrade (c(f)) I need to build in each of these lines to satisfy more power flow needs. So in GAMS, I minimized the cost of upgrading as:
investment_cost.. cap_cost =e= sum(f,c(f)*capCost(f));
here capCost(f) is the capital cost of upgrading the transmission line capacity to be able to carry 1 extra GW per hour.
My constraint: In each time period (t), the total power flow p(f,t) must be less than or equal to the existing line capacity + newly upgraded capacity:
line_cap(f,t).. old_line_cap(f)+c(f) =g= p(f,t);
However, my solution looks something like this:
fAA': 6.7 (built 6.7 GW more of capacity in line from A to A')
fA'A: 5.0 (built 5.0 GW more of capacity in line from A' to A)
fBB': 5.5 (built 5.5 GW more of capacity in line from B to B')
fB'B: 8.1 (built 8.1 GW more of capacity in line from B' to B)
But this is not right, because if I upgrade line AA' by 6.7 GW, I don't need to upgrade line A'A, since they are the same line. Basically, I pay twice to upgrade the same line.
So to fix this, I'm trying to use Alias like this:
Alias(f,ff)
line_cap(f,t).. old_line_cap(f)+c(f)+ sum((f,ff)$[line_source(f)=line_sink(ff) and line_source(ff)=line_sink(f)],c(ff)) =g= p(f,t);
But that still does not fix my problem.
I'd appreciate any help! Thank you!
If you want to stick with your 4 fs, let's call them your virtual lines (since a pair of these virtual lines actually forms one physical line). Now, let's introduce a new set of physical lines (fPhys) and a mapping between the virtual and the physical lines (fMap):
Set fPhys / fAA, fBB /
fMap(f, fPhys) / ("fA'A", "fAA'").fAA
("fB'B", "fBB'").fBB /;
Now, you can stick with using the set f for your flows, and use fPhys for the investment decision, like this:
investment_cost.. cap_cost =e= sum(fPhys,c(fPhys)*capCost(fPhys));
line_cap(f,t).. sum(fMap(f,fPhys), old_line_cap(fPhys)+c(fPhys)) =g= p(f,t);
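Note that c and capCost then need to be declared over fPhys rather than f. A minimal sketch of those declarations (the descriptive text is mine, not from the original model):

Positive Variable c(fPhys)  'capacity upgrade per physical line (GW)';
Parameter capCost(fPhys)    'capital cost per GW of upgrade';

This way each physical line is paid for exactly once, while the flow constraint still sees the upgraded capacity from either direction via fMap.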

Why is VK_SAMPLE_COUNT_1_BIT an invalid choice for multisampling in Vulkan?

Hello people of StackOverflow,
I am currently working on a game engine using the Vulkan graphics API. In the past I was just setting anti-aliasing to the maximum it could be, but today I was trying to turn it off (to improve performance on weaker systems). To do this I tried to set the MSAA samples in my engine to VK_SAMPLE_COUNT_1_BIT; however, this produced the validation error:
Validation Error: [ VUID-VkSubpassDescription-pResolveAttachments-00848 ] Object 0: handle = 0x55aaa6e32828, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0xfad6c3cb | ValidateCreateRenderPass(): Subpass 0 requests multisample resolve from attachment 0 which has VK_SAMPLE_COUNT_1_BIT. The Vulkan spec states: If pResolveAttachments is not NULL, for each resolve attachment that is not VK_ATTACHMENT_UNUSED, the corresponding color attachment must not have a sample count of VK_SAMPLE_COUNT_1_BIT (https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#VUID-VkSubpassDescription-pResolveAttachments-00848)
I can work around this problem relatively easily, so it isn't really an issue for me; however, I was wondering why exactly this limit is in place. If I want to set the MSAA samples to 1, why can't I?
Thanks,
sckzor
A sample count of 1 means "not a multisampled image". And if you're doing a multisample resolve, resolving from a non-multisampled image doesn't make sense. This is also why you can't use such images for anything else that expects a multisampled image (you can't use an MS-style sampler or texture function on them).
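In practice, the clean way to "turn MSAA off" is to stop requesting a resolve at all when the sample count is 1. A minimal sketch, where colorRef, resolveRef and msaaSamples stand in for whatever the engine already has:

VkSubpassDescription subpass = {};
subpass.pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
subpass.colorAttachmentCount = 1;
subpass.pColorAttachments    = &colorRef;
/* Only request a resolve when the color attachment is actually
   multisampled; with VK_SAMPLE_COUNT_1_BIT there is nothing to resolve. */
subpass.pResolveAttachments =
    (msaaSamples == VK_SAMPLE_COUNT_1_BIT) ? NULL : &resolveRef;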

How to build Yocto hddimg on i.MX7 to boot from a USB stick

I have an i.MX7 SOM. I want to build a Yocto image which I can dd onto a USB stick to boot from. I believe that I want an hddimg image but cannot see how to create one (I have an sdimg which works perfectly).
I would appreciate advice.
I have set IMAGE_FSTYPES to "hddimg" but get "ERROR: Nothing PROVIDES 'syslinux'"
The SOM is the Technexion i.MX7. Layers are:
layer path priority
=======================================================
meta sources/poky/meta 5
meta-poky sources/poky/meta-poky 5
meta-oe sources/meta-openembedded/meta-oe 6
meta-multimedia sources/meta-openembedded/meta-multimedia 6
meta-freescale sources/meta-freescale 5
meta-freescale-3rdparty sources/meta-freescale-3rdparty 4
meta-freescale-distro sources/meta-freescale-distro 4
meta-powervault sources/meta-powervault 6
meta-python sources/meta-openembedded/meta-python 7
meta-networking sources/meta-openembedded/meta-networking 5
meta-virtualization sources/meta-virtualization 8
meta-filesystems sources/meta-openembedded/meta-filesystems 6
meta-cpan sources/meta-cpan 10
meta-mender-core sources/meta-mender/meta-mender-core 6
meta-mender-freescale sources/meta-mender/meta-mender-freescale 10
Nope, you certainly do not want an hddimg, as this is a mostly deprecated format for x86 systems. On ARM, you almost never want syslinux :-)
Usually your SOM comes with a Board Support Package in the form of a layer, which includes the MACHINE definition, which in turn defines the IMAGE_FSTYPES that this machine likes for booting. If in doubt, consult the manual or ask your vendor.
Having said that, if you specify the SOM and the layers in use, we can have a look if they are publicly accessible; but without those details it is impossible to give a proper answer.
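For illustration, the usual pattern is to let the machine .conf keep its own image types and only extend them in local.conf. A sketch only; "wic.gz" is a common choice for dd-able disk images on i.MX boards, but treat it as an assumption to check against your BSP:

# conf/local.conf (sketch -- not verified against this particular BSP)
# Add a dd-able disk image on top of whatever IMAGE_FSTYPES the
# machine .conf already sets; do not replace the machine's types.
IMAGE_FSTYPES += "wic.gz"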

How to fix 'getColor' returning the same color for every car?

I'm making a program that makes the cars running in the simulation pass their colors on to other cars; to achieve that I'm using the TraCI function 'getColor'. The problem is that every car I ask for its color returns (255,255,0,255), no matter what the actual color is. However, using 'getColor' inside a condition for the "contamination" makes the program work, maybe out of sheer luck. Please help me understand how to fix it and how it works.
I'm on Ubuntu 18.04.3 LTS with SUMO 0.32.0, using the traci library. I've tried modifying the program, running the simulation step by step, and even running the same line in different code with the same idea in mind.
This is the program in which the "contamination" works although it gets the wrong colors:
def run():
    # distancia(a, b) is a helper defined elsewhere in the script that
    # returns the distance between vehicles a and b.
    step = 0
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()
        step += 1
        if step > 2:
            if distancia("veh1", "veh0") < 5:
                traci.vehicle.setColor("veh1", (255, 0, 0, 255))
            if distancia("veh0", "veh2") < 5:
                traci.vehicle.setColor("veh2", (255, 0, 0, 255))
            if traci.vehicle.getColor("veh2") == (255, 0, 0, 255):
                if distancia("veh1", "veh2") < 5:
                    traci.vehicle.setColor("veh1", (255, 0, 0, 255))
            print(traci.vehicle.getColor("veh1"))
    traci.close()
    sys.stdout.flush()
I hoped that when I selected the red car I would get (255,0,0,255), but I got (255,255,0,255). It doesn't raise any error messages, it just shows the wrong color.
It seems that the default color in TraCI is yellow; I had to set every car to its own color from the Python code before it started doing what I wanted.
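A minimal sketch of that fix, to be run inside an active TraCI session once the vehicles have entered the simulation (the vehicle IDs are the ones from the question; the colors are arbitrary examples):

import traci

# Give each vehicle an explicit starting color, so getColor no longer
# returns the TraCI default yellow, (255, 255, 0, 255).
initial_colors = {
    "veh0": (255, 0, 0, 255),   # red
    "veh1": (0, 255, 0, 255),   # green
    "veh2": (0, 0, 255, 255),   # blue
}

for veh_id, color in initial_colors.items():
    traci.vehicle.setColor(veh_id, color)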

How to optimize a golang program that spends most of its time in runtime.osyield and runtime.usleep

I've been working on optimizing code that analyzes social graph data (with lots of help from https://blog.golang.org/profiling-go-programs) and I've successfully reworked a lot of slow code.
All data is loaded into memory from the db first, and the data analysis from there appears CPU bound (max memory consumption < 10 MB, CPU1 at 100%).
But now most of my program's time seems to be spent in runtime.osyield and runtime.usleep. How can I prevent that?
I've set GOMAXPROCS=1 and the code does not spawn any goroutines (other than what the golang libraries may call).
This is my top10 output from pprof
(pprof) top10
62550ms of 72360ms total (86.44%)
Dropped 208 nodes (cum <= 361.80ms)
Showing top 10 nodes out of 77 (cum >= 1040ms)
flat flat% sum% cum cum%
20760ms 28.69% 28.69% 20850ms 28.81% runtime.osyield
14070ms 19.44% 48.13% 14080ms 19.46% runtime.usleep
11740ms 16.22% 64.36% 23100ms 31.92% _/C_/code/sc_proto/cloudgraph.(*Graph).LeafProb
6170ms 8.53% 72.89% 6170ms 8.53% runtime.memmove
4740ms 6.55% 79.44% 10660ms 14.73% runtime.typedslicecopy
2040ms 2.82% 82.26% 2040ms 2.82% _/C_/code/sc_proto.mAvg
890ms 1.23% 83.49% 1590ms 2.20% runtime.scanobject
770ms 1.06% 84.55% 1420ms 1.96% runtime.mallocgc
760ms 1.05% 85.60% 760ms 1.05% runtime.heapBitsForObject
610ms 0.84% 86.44% 1040ms 1.44% _/C_/code/sc_proto/cloudgraph.(*Node).DeepestChildren
(pprof)
The _/C_/code/sc_proto/* functions are my code.
And the output from web (better, SVG version of the graph here: https://goo.gl/Tyc6X4).
Found the answer myself, so I'm posting it here for anyone else who is having a similar problem. Special thanks to @JimB for sending me down the right path.
As can be seen from the graph, the paths which lead to osyield and usleep are garbage collection routines. This program was using a linked list which generated a lot of pointers; that created a lot of work for the GC, which occasionally blocked execution of my code while it cleaned up my mess.
Ultimately the solution to this problem came from https://software.intel.com/en-us/blogs/2014/05/10/debugging-performance-issues-in-go-programs (which was an awesome resource, btw). I followed the instructions about the memory profiler there, and the recommendation to replace collections of pointers with slices cleared up my garbage collection issues; my code is much faster now!
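To illustrate the kind of change that made the difference (a sketch of the technique, not the actual sc_proto code):

package main

import "fmt"

// Before: a pointer-chasing linked list. Every node is a separate heap
// allocation, and every 'next' pointer is work for the GC to trace.
type listNode struct {
	value int
	next  *listNode
}

// After: the same values kept in one contiguous slice -- a single
// allocation with no interior pointers, so the GC has almost nothing
// to scan.
func sumSlice(values []int) int {
	total := 0
	for _, v := range values {
		total += v
	}
	return total
}

func main() {
	fmt.Println(sumSlice([]int{1, 2, 3}))
}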