SCIP write best feasible solution in automated test

Based on the steps in http://scip.zib.de/doc/html/TEST.php, I have managed to set up an automated test using SCIP. However, I'd like to write the best feasible solution to a file, instead of just getting the objective value. Is there any way to do this in the automated test?
I did a hack in check.sh, replacing the OPTCOMMAND line with:
OPTCOMMAND=optimize; write solution myfilename.sol;
Unfortunately, it doesn't seem to work. When I run make TEST=mytest test, this line is observed in the output:
bash ./check.sh mytest bin/scip-3.1.0.linux.x86_64.gnu.opt.spx default scip-3.1.0.linux.x86_64.gnu.opt.spx 3600 2100000000 6144 1 default 10000 false false 3.1.0 spx false /tmp optimize;
write: solution is not logged in on myfilename.sol
I know it is possible to write the solution via the interactive shell, but I am trying to automate the test in order to retrieve both the solution and the objective value. Any help or clarification will be much appreciated!

You are getting an error because with the syntax you are using, you try to invoke a bash command called "write" because of the semicolon:
The write utility allows you to communicate with other users, by
copying lines from your terminal to theirs.
Just try it without the semicolons, quoting the whole value so that bash treats it as a single assignment ;)
The cleaner solution would be to modify the file "check/configuration_tmpfile_setup_scip.sh"
and add the line
echo write solution /absolute/path/to/solutions/${INSTANCE}.sol >> $TMPFILE
before the quit command. This configuration file sets up a batch file to feed SCIP with all commands that the interactive shell should execute, and you can model arbitrary user behavior.


How to test "main()" routine from "go test"?

I want to lock down the user-facing command-line API of my golang program by writing a few anti-regression tests that focus on testing my binary as a whole. Testing the "binary as a whole" means that go test should:
be able to feed STDIN to my binary
be able to check that my binary produces correct STDOUT
be able to ensure that error cases are handled properly by binary
However, it is not obvious to me what the best practice for this is in Go. If there is a good go test example, could you point me to it?
P.S. In the past I have been using autotools, and I am looking for something similar to AT_CHECK, for example:
AT_CHECK([echo "XXX" | my_binary -e arg1 -f arg2], [1], [],
[-f and -e can't be used together])
Just make your main() a single line:
package main

import "myapp"

func main() {
    myapp.Start()
}
And test the myapp package properly.
EDIT:
For example, the popular etcd configuration server uses this technique: https://github.com/coreos/etcd/blob/master/main.go
I think you're trying too hard: I just tried the following
package main

import (
    "os"
    "testing"
)

func TestMainProgram(t *testing.T) {
    // fake the command-line arguments, then drive main() directly
    os.Args = []string{"sherlock",
        "--debug",
        "--add", "zero",
        "--ruleset", "../scripts/ceph-log-filters/ceph.rules",
        "../scripts/ceph-log-filters/ceph.log"}
    main()
}
and it worked fine. I can make a normal table-driven test or a GoConvey BDD test from it pretty easily...
If you really want to do this type of testing in Go, you can use the os/exec package (https://golang.org/pkg/os/exec/) to execute your binary and test it as a whole - essentially the equivalent of a shell script written in Go. You can use StdinPipe (https://golang.org/pkg/os/exec/#Cmd.StdinPipe) and StdoutPipe/StderrPipe (https://golang.org/pkg/os/exec/#Cmd.StdoutPipe and https://golang.org/pkg/os/exec/#Cmd.StderrPipe) to feed the desired input and verify the output. The examples on the package documentation page should give you a good starting point.
However, testing compiled programs goes beyond unit testing, so it is worth considering other tools (not necessarily Go-based) that are more typically used for functional / acceptance testing, such as Cucumber (http://cucumber.io).
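Since the answer above mentions tooling outside Go, here is what such a black-box check could look like as a Python sketch, mirroring the AT_CHECK example from the question (my_binary, its flags, and its error message are the question's hypotheticals):
import subprocess

# Feed "XXX" on stdin, then assert on the exit status and stderr,
# mirroring: AT_CHECK([echo "XXX" | my_binary -e arg1 -f arg2], [1], [], ...)
# (Python 3.7+ for capture_output/text)
result = subprocess.run(
    ["my_binary", "-e", "arg1", "-f", "arg2"],
    input="XXX\n", capture_output=True, text=True,
)
assert result.returncode == 1
assert "-f and -e can't be used together" in result.stderr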

How do I tell Octave where to find functions without picking up other files?

I've written an Octave script, hello.m, which calls subfunc.m, and which takes a single input file as a command-line argument, data.txt, which it loads with load(argv(){1}).
If I put all three files in the same directory, and call it like
./hello.m data.txt
then all is well.
But if I've got another data.txt in another directory, and I want to run my script on it, and I call
../helloscript/hello.m data.txt
this fails because hello.m can't find subfunc.m.
If I call
octave --path "../helloscript" ../helloscript/hello.m data.txt
then that seems to work fine.
The problem is that if I don't have a data.txt in the current directory, the script will pick up any data.txt that is lying around in ../helloscript.
This seems a bit fragile. Is there any way to tell Octave, preferably in the script itself, to get subfunctions from the same directory as the script, but to load everything else relative to the current directory?
The most robust solution I can think of at the moment is to inline the subfunction in the script, which is a bit nasty.
Is there a good way to do this, or is it just a thorny problem that will cause occasional hard to find problems and can't be avoided?
Is this in fact just a general problem with scripting languages that I've never noticed before? How does e.g. Python deal with it?
It seems like there should be some sort of library-load-path that can be set without altering the data-load-path.
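For comparison, Python largely sidesteps this: the interpreter prepends the script's own directory to the module search path, so sibling modules (the analogue of subfunc.m) are found next to the script, while relative data paths still resolve against the current working directory. A minimal sketch:
import os
import sys

# sys.path[0] is the directory containing the running script, so an
# "import subfunc" would find a subfunc.py sitting next to this file...
print("modules are searched first in:", sys.path[0])
# ...while relative data paths such as sys.argv[1] resolve against the CWD:
print("data files resolve against:", os.getcwd())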
Adding all your subfunctions to your program file is not nasty at all. Why would you think so? It is perfectly normal to have function definitions in your script. The only language I know of that does not allow this is Matlab, but that's just braindead.
The other alternative you have is to check that the input file argument, data.txt exists. Like so:
fpath = argv (){1};
[info, err, msg] = stat (fpath);
if (err)
  error ("could not stat `%s' : %s", fpath, msg);
endif
## continue your script knowing the file exists
But really, I would recommend that you do both. Add your subfunctions to your main program (the only reason to keep them in a separate file is if you plan on sharing them with other programs), and always check your input arguments.

In Scala, is it possible to write a script which refers to another script

I am currently looking at using Scala scripts to control the life-cycle of a MySQL database instead of using MS-DOS scripts (I am on Windows XP).
I want to have a configuration script which only holds configuration information, and one or more management scripts which use the configuration information to perform various operations such as start, stop, show status, etc.
Is it possible to write a Scala script which includes/imports/references another Scala script?
I had a look at the -i option of the scala interpreter, but this launches an interactive session which is not what I want.
According to the Scala man page, script pre-loading only works in interactive mode.
As a workaround, you can exit the interactive mode after running the script. Here's the code of child.bat (a script that includes another, generic one):
::#!
@echo off
call scala -i generic.bat %0
goto :eof
::!#
def childFunc = "child"
println(genericFunc)
println(childFunc)
exit
genericFunc is defined in generic.bat.
The output of child.bat:
>child.bat
Loading generic.bat...
...
genericFunc: java.lang.String
Loading child.bat...
...
childFunc: java.lang.String
generic
child
I'd use Process and call the other Scala script just like any other command.
One option would be to have a script which concatenates the two files together and then launches the result, something like:
@echo off
type config.scala > temp.scala
type code.scala >> temp.scala
scala temp.scala
del temp.scala
or similar. Then you keep the two separate, as you wished.

Persistent Python Command-Line History

I'd like to be able to "up-arrow" to commands that I entered in a previous Python interpreter session. I have found the readline module, which offers functions like read_history_file, write_history_file, and set_startup_hook. I'm not quite savvy enough to put this into practice though, so could someone please help? My thoughts on the solution are:
(1) Set PYTHONSTARTUP in my .login to run a Python script.
(2) In that python script file do something like:
def command_history_hook():
    import readline
    readline.read_history_file('.python_history')

command_history_hook()
(3) Whenever the interpreter exits, write the history to the file. I guess the best way to do this is to define a function in your startup script and exit using that function:
def ex():
    import readline
    readline.write_history_file('.python_history')
    exit()
It's very annoying to have to exit using parentheses, though: ex(). Is there some Python sugar that would allow ex (without the parens) to run the ex function? (One possible trick is sketched after this question.)
Is there a better way to cause the history file to write each time? Thanks in advance for all solutions/suggestions.
Also, there are two architectural choices as I can see. One is to have a unified command history. The benefit is simplicity (the alternative that follows litters your home directory with a lot of files). The disadvantage is that interpreters you run in separate terminals will be populated with each other's command histories, and they will overwrite one another's histories. (This is okay for me, since I'm usually interested in closing an interpreter and reopening one immediately to reload modules, in which case that interpreter's commands will have been written to the file.) One possible solution for maintaining separate history files per terminal is to write an environment variable for each new terminal you create:
import os
import string
from random import choice

def random_key():
    return ''.join(choice(string.ascii_uppercase + string.digits) for _ in range(16))

def command_history_hook():
    import readline
    key = os.environ.get('command_history_key')
    if key:
        readline.read_history_file('.python_history_{0}'.format(key))
    else:
        os.environ['command_history_key'] = random_key()

def ex():
    import readline
    key = os.environ.get('command_history_key')
    if not key:
        key = random_key()
        os.environ['command_history_key'] = key
    readline.write_history_file('.python_history_{0}'.format(key))
    exit()
By decreasing the random key length from 16 to, say, 1, you could cut the number of files littering your directories to 36, at the expense of a possible overlap (a 2.8% chance for any two terminals).
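As for running ex without the parens: one trick (a sketch, not from the answers that follow) is to make ex an object whose __repr__ does the work, since the interactive interpreter calls repr() on any bare expression you type in order to display it:
import os
import readline
import sys

class Exiter(object):
    # runs whenever the REPL evaluates and displays the bare name `ex`
    def __repr__(self):
        readline.write_history_file(os.path.expanduser('~/.python_history'))
        sys.exit()

ex = Exiter()
# typing just `ex` at the prompt now writes the history and exits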
I think the suggestions in the Python documentation pretty much cover what you want. Look at the example pystartup file toward the end of section 13.3:
http://docs.python.org/tutorial/interactive.html
or see this page:
http://rc98.net/pystartup
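The pystartup file those pages describe boils down to the same readline-plus-atexit pattern. A condensed sketch (the file name and history path are illustrative; point PYTHONSTARTUP at wherever you save it):
import atexit
import os
import readline

history_path = os.path.expanduser('~/.pyhistory')

if os.path.exists(history_path):
    readline.read_history_file(history_path)

# write the history back out on any normal interpreter exit
atexit.register(readline.write_history_file, history_path)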
But, for an out of the box interactive shell that provides all this and more, take a look at using IPython:
http://ipython.scipy.org/moin/
Try using IPython as your Python shell. It already has everything you ask for. They have packages for most popular distros, so installation should be very easy.
Persistent history has been supported out of the box since Python 3.4 (it is saved to ~/.python_history by default). See this bug report.
Use pip to install the pyreadline package (a readline replacement for Windows, where the standard readline module is not available):
pip install pyreadline
If all you want is to use interactive history substitution without all the file stuff, all you need to do is import readline:
import readline
And then you can use the up/down keys to navigate past commands. The same works on Python 2 and 3.
This wasn't clear to me from the docs, but maybe I missed it.

Script for Testing with Input files and Output Solutions

I have a set of *.in files and a set of *.soln files with matching file names. I would like to run my program with each *.in file as input and compare the output to the contents of the matching *.soln file. What would be the best way to go about this? I can think of 3 options.
Write some driver in Java to list the files in the folder, run the program, and compare. This seems hard and tedious.
Write a bash script to do this. How?
Write a python script to do this?
I would go for the bash solution. Given that what you are doing is a test, I would always save the output of myprogram, so that if there are failures you still have the output to compare against.
#!/bin/bash
for infile in *.in; do
  basename="${infile%.*}"
  myprogram "$infile" > "$basename.output"
  diff "$basename.output" "$basename.soln"
done
Add checking of exit statuses etc. as required by your report.
If the program already exists, I suspect the bash script is the best bet.
If your soln files are named right, some kind of loop like
for file in base*.soln
do
  # run the program on the matching .in file
  myprogram "${file%.soln}.in" > "new_$file"
  diff "$file" "new_$file"
done
Of course, you can check the exit code of diff and do various other things to create a test report...
That looks simplest to me...
Karl
This is primarily a problem that requires use of the file system with minimal logic, and bash isn't a bad choice for such problems. If it turns out you want to do something more complicated than comparing for equality, Python becomes a more attractive choice (see the sketch below). Java doesn't seem like a good choice for a throwaway script such as this.
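For instance, a Python version might look like this (a sketch, assuming the same myprogram and .in/.soln layout described in the question; Python 3.7+ for capture_output/text):
import glob
import subprocess

for infile in glob.glob('*.in'):
    base = infile[:-len('.in')]
    # run the program on the input file and capture its stdout
    result = subprocess.run(['myprogram', infile],
                            capture_output=True, text=True)
    with open(base + '.soln') as f:
        expected = f.read()
    print('ok  ' if result.stdout == expected else 'FAIL', infile)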
Basic bash implementation might look something like this:
cd dir_with_files
program=your_program
input_ext=".in"
compare_to_ext=".soln"
for file in *"$input_ext"; do
  # diff the program's output against the matching .soln file
  diff <("$program" "$file") "${file%$input_ext}$compare_to_ext"
done
Untested.