CBC solver: how to run solver multiple times with same command - coin-or-cbc

I use CBC, and I have a problem that I want to solve with noise.
I want to ask CBC, with one command, to run x times, not just once; otherwise I have to rerun it manually each time.
How do you force CBC to run the same problem (with noise) multiple times without having to launch the solver manually each time?
Thanks.
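As far as I know, the cbc command-line tool has no built-in option to repeat a solve with perturbed data, so the usual workaround is to script the loop yourself. A minimal Python sketch, assuming the noise is injected by perturbing the objective coefficients and the model file is rebuilt for each run — `perturbed_objectives`, `run_cbc`, and the file names are illustrative placeholders, not CBC API:

```python
import random
import subprocess

def perturbed_objectives(base_coeffs, n_runs, sigma=0.01, seed=42):
    """Return n_runs noisy copies of the objective coefficients."""
    rng = random.Random(seed)
    return [[c + rng.gauss(0.0, sigma) for c in base_coeffs]
            for _ in range(n_runs)]

def run_cbc(lp_file):
    """Invoke the cbc command-line solver on one model file."""
    # 'cbc model.lp solve' is the usual CLI invocation.
    subprocess.run(["cbc", lp_file, "solve"], check=True)

if __name__ == "__main__":
    for i, coeffs in enumerate(perturbed_objectives([1.0, 2.5, -3.0], n_runs=5)):
        # Placeholder step: regenerate the .lp file with the noisy
        # coefficients, then call run_cbc() on it.
        print(i, coeffs)
```

Fixing the random seed makes the batch of noisy runs reproducible, which helps when comparing solutions afterwards.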

Related

How can I change the GUI refresh rate in SUMO without changing the simulation step frequency?

I am running long simulations using project_flow and SUMO, but I don't need the GUI to refresh at every step of the simulation. Is there any way to
decouple the simulation step from the GUI refresh rate?
I do not know how this works in the context of flow, but in general you can simply use the sumo executable instead of sumo-gui to get the same simulation without a GUI. Modifying the refresh rate is not possible, as far as I know, but minimizing the window should at least avoid all draw requests from the system.
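To make the switch concrete, here is a small Python sketch that launches the simulation headless by selecting the sumo binary instead of sumo-gui (the config file name is a placeholder; `-c` and `--end` are standard SUMO options):

```python
import subprocess

def sumo_command(cfg_file, gui=False, end_time=None):
    """Build a SUMO command line; the plain 'sumo' binary skips the GUI."""
    cmd = ["sumo-gui" if gui else "sumo", "-c", cfg_file]
    if end_time is not None:
        cmd += ["--end", str(end_time)]  # stop after this simulation time
    return cmd

# Headless run (uncomment to actually launch SUMO):
# subprocess.run(sumo_command("scenario.sumocfg"), check=True)
```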

adding processing delay within a block in gnuradio

I am working on a block in GNU Radio. I have come across a strange performance improvement: when I print some huge data to the terminal, performance improves, and it degrades without the print statement.
I infer that by printing to the terminal I am giving the GNU Radio block extra processing time. This is just my hunch and might not be the exact reason; kindly correct me if this is wrong.
So, is there a way to add a specific amount of processing delay within a block (like what I got when printing data to the terminal) in GNU Radio?
Thanks in advance.
First of all, the obvious: don't print large amounts of data to a terminal. Terminals aren't meant for that, and your CPU will pretty quickly be the limiting factor, as it tries to render all the text.
I infer that by printing to the terminal I am giving the GNU Radio block extra processing time. This is just my hunch and might not be the exact reason; kindly correct me if this is wrong.
Printing to the terminal is an IO operation. That means the program handling the data (typically the Linux kernel handling the PTY, or your terminal emulator's process if the data is handed off directly) sets a limit on how fast it accepts data from the printing program.
Your GNU Radio block's work function is simply blocked, because the resource you're trying to use is limited.
So, can I add a specific amount of processing delay within a block (like what I got when printing data to the terminal) in GNU Radio?
Yes, but it doesn't help in any way here.
You're IO bound. Doing something that is not printing to the terminal makes a lot of sense, because you can't read that fast anyway.
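For completeness: if you really do want an artificial delay, sleeping inside the block's work function is the blunt instrument. The toy below is plain Python with the GNU Radio scaffolding deliberately omitted so it stands alone; it shows why a per-item sleep throttles throughput without changing the output:

```python
import time

def work(samples, delay_per_item=0.0):
    """Toy stand-in for a GNU Radio work() function: process items,
    optionally sleeping to emulate the backpressure slow terminal IO imposes."""
    out = []
    for s in samples:
        out.append(s * 2)               # the actual "signal processing"
        if delay_per_item:
            time.sleep(delay_per_item)  # artificial throttling
    return out

t0 = time.perf_counter()
fast = work(range(1000))
t_fast = time.perf_counter() - t0

t0 = time.perf_counter()
slow = work(range(1000), delay_per_item=1e-4)
t_slow = time.perf_counter() - t0
```

The two runs produce identical output, but the throttled one takes at least 0.1 s longer; in a real flowgraph that delay simply starves downstream blocks.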

console still running after sub-optimal termination

I use Gurobi to solve an LP problem. The method is the barrier method, and I disabled crossover, since I'm satisfied with a sub-optimal solution and crossover takes forever.
However, after the sub-optimal termination, the console keeps running (with huge memory use). Solving takes 0.5 hours, but I have to wait for hours until I can continue.
I used the Python command line and Spyder.
Here is a summary of the (modified) log:
Barrier performed 500 iterations in 2000 seconds
Sub-optimal termination - objective XX
(here it takes hours)
warning: a sub-optimal solution is available
(here it takes hours)
The complete log is here: https://drive.google.com/open?id=1q2Z6QJNmXTWSRsJqxtEpUlCwhXnLIhFf
I expected the console to return the results immediately after termination. The solving itself is fast, but something else takes hours.
Is anything wrong? What can I do to make it faster?
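For reference, the setup described above (barrier method, crossover disabled) corresponds to a Gurobi parameter file along the lines of the following sketch; the parameter names are worth double-checking against your Gurobi version's documentation:

```
# model.prm -- assumed parameter file matching the description above
# Method 2 selects the barrier algorithm; Crossover 0 disables crossover
Method 2
Crossover 0
```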

Valgrind has long pause before running executables

Let me preface this question by saying that I know it takes programs longer to run in valgrind as there is a lot of overhead. This question is not about that.
To ensure that our implementations of data structures have the appropriate runtime, all test cases time out after a certain amount of time (usually around 10 times the amount of time the teacher's solutions take to run in Valgrind). I ran the test cases on my laptop early in the day and everything was fine. I made two very minor changes later at night (adding one to something and adding a counter for something else, both of which are constant-time operations). I reran the tests and timed out on even the most basic test cases, like inserting one node.

I was freaking out, so I went to the 24/7 computer lab on campus and ran my code on a virtual machine, and it worked fine. The binaries themselves are speedy when run directly on my laptop. I tried turning my computer off and back on, and that didn't fix anything, so I tried updating Valgrind, but it is up to date. I removed Valgrind and re-installed it, and that didn't fix the problem either.

To verify it is a problem with Valgrind and not my code, I made a hello_world.cpp and ran the binary in Valgrind with no extra flags. It takes about 15-20 seconds to run. I have absolutely no idea why this is happening; I've not made any changes to my computer. I've skimmed the Valgrind documentation, but I cannot pin down what is wrong. I run Fedora 27.

PITest: JavaLaunchHelper is implemented in both

Recently I started using PITest for mutation testing. After building my project using Maven, when I run the command mvn org.pitest:pitest-maven:mutationCoverage I get this error a bunch of times:
-stderr : objc[2787]: Class JavaLaunchHelper is implemented in both /Library/Java/JavaVirtualMachines/jdk1.8.0_74.jdk/Contents/Home/jre/bin/java and /Library/Java/JavaVirtualMachines/jdk1.8.0_74.jdk/Contents/Home/jre/lib/libinstrument.dylib. One of the two will be ustderr : sed. Which one is undefined.
Sometimes the error is followed by
PIT >> WARNING : Slave exited abnormally due to MEMORY_ERROR
or PIT >> WARNING : Slave exited abnormally due to TIMED_OUT
I use OS X version 10.10.4 and Java 8 (jdk1.8.0_74).
Any fix/work-around for this?
Don't worry about this:
-stderr : objc[2787]: Class JavaLaunchHelper is implemented in both /Library/Java/JavaVirtualMachines/jdk1.8.0_74.jdk/Contents/Home/jre/bin/java and /Library/Java/JavaVirtualMachines/jdk1.8.0_74.jdk/Contents/Home/jre/lib/libinstrument.dylib. One of the two will be ustderr : sed. Which one is undefined.
This is just information that there are two implementations of JavaLaunchHelper; the message tells you that one of the two will be used for the std-err output stream, but which one is undetermined. It is a known issue; see also this question.
The other two warnings are a result of what PIT is doing: it modifies the byte code, and this may affect not just the output of an operation (detected by a test) but the runtime behavior itself. For example, the boundaries of a loop may get changed such that the loop runs endlessly. PIT is capable of detecting this and prints an error. Mutations detected by either a memory error or a timeout error can be considered "killed", but you should check each of them individually, as they could be false positives, too.
PIT >> WARNING : Slave exited abnormally due to MEMORY_ERROR
means the modified code produces more or larger objects, so the forked JVM runs out of memory. Imagine a loop like this:
while (a < b) {
    list.add(new Object());
    a++;
}
If a++ gets mutated to a--, the loop may eventually end, but it's more likely you run out of memory before that.
From the documentation
A memory error might occur as a result of a mutation that increases the amount of memory used by the system, or may be the result of the additional memory overhead required to repeatedly run your tests in the presence of mutations. If you see a large number of memory errors consider configuring more heap and permgen space for the tests.
The timeout issue is similar: the reason could be either that you actually run an infinite loop, or that the system merely thinks you do, i.e. the system is too slow to compute the altered code. If you experience a lot of timeouts, you should consider increasing the timeout value. But be careful, as this may impact the overall execution time.
From the FAQ
Timeouts when running mutation tests are caused by one of two things:
1. A mutation that causes an infinite loop
2. PIT thinking an infinite loop has occurred but being wrong
In order to detect infinite loops PIT measures the normal execution time of each test without any mutations present. When the test is run in the presence of a mutation PIT checks that the test doesn’t run for any longer than
normal time * x + y
Unfortunately the real world is more complex than this.
Test times can vary due to the order in which the tests are run. The first test in a class may have an execution time much higher than the others, as the JVM will need to load the classes required for that test. This can be particularly pronounced in code that uses XML binding frameworks such as JAXB, where classloading may take several seconds.
When PIT runs the tests against a mutation, the order of the tests will be different. Tests that previously took milliseconds may now take seconds, as they now carry the overhead of classloading. PIT may therefore incorrectly flag the mutation as causing an infinite loop.
A fix for this issue may be developed in a future version of PIT. In the meantime, if you encounter a large number of timeouts, try increasing y in the equation above to a large value with --timeoutConst (timeoutConstant in Maven).
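The timeout rule above is easy to sanity-check numerically. A small Python helper, using assumed defaults of x = 1.25 (timeoutFactor) and y = 4000 ms (timeoutConst) — verify these against your PIT version's documentation:

```python
def allowed_runtime_ms(normal_ms, timeout_factor=1.25, timeout_const_ms=4000):
    """PIT flags a test as timed out once it runs longer than
    normal time * x + y. The default values here are assumptions,
    not guaranteed to match your PIT version."""
    return normal_ms * timeout_factor + timeout_const_ms

# A test that normally takes 200 ms may run up to 4250 ms before PIT
# suspects an infinite loop; raising timeout_const_ms (--timeoutConst)
# widens that margin for slow, classloading-heavy first tests.
```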