The Java interface doesn't consider all constraints - scip

I've successfully generated the SCIP Java interface. During the testing phase, I've encountered this problem: the Java interface immediately returns a so-called optimal solution (problem is solved [optimal solution found]), but some of the constraints are not being considered - the solution generated is a zero solution (actually, not a solution at all).
I've tried running the same problem with the SCIP binary in the terminal, and it works fine.
This is the problem that I'm testing:
param fichier:="/home/sebastien/workspace/DodenPlanning_teamDC/data.txt";
param nb_custom:= 11 ;
set Ind :=
{ read fichier as "<1s>" comment "#" use nb_custom} ;
param px[Ind] :=
read fichier as "<1s> 2n" comment "#" use nb_custom ;
param py[Ind] :=
read fichier as "<1s> 3n" comment "#" use nb_custom;
defnumb dist(i,j) := sqrt((px[i]-px[j])^2 + (py[i]-py[j])^2) ;
var x[Ind*Ind] binary ;
var r[Ind] integer <= card(Ind) - 1 ; # depot has rank 0, others have ranks 1 .. nb_points
minimize cost: sum <i,j> in Ind*Ind : dist(i,j) * x[i,j] ;
subto unique_predecessor : forall <j> in Ind do sum <i> in Ind : x[i,j]==1;
subto unique_successor : forall <j> in Ind do sum <k> in Ind : x[j,k]==1;
subto first_rank : r["0"] == 0 ; # because 0 is the starting point.
subto other_rank : # if there is edge i-> j then rank(j) = rank(i) + 1
forall <i,j> in Ind*Ind with j != "0" do
vif x[i,j] == 1 then r[j] == r[i] + 1 end ;
It's a model of a VRP problem. The solution returned by the interface is one where each vertex is linked only to itself (as this minimises the cost: it equals 0), but this is not a solution once the other constraints are taken into account.
We have verified that it is not a translation problem: a problem file was generated manually with ZIMPL and was solved both by the SCIP binary in the terminal and by the Java interface.
The file is read by both: it generated the same number of variables, constraints, etc.
Is the number of constraints limited in the Java interface? We have solved smaller problems which returned the same solution in both the SCIP binary and the Java interface.


Can gather be used to unroll Junctions?

In this program:
use v6;
my $j = +any "33", "42", "2.1";
gather for $j -> $e {
    say $e;
}                    # prints 33␤42␤2.1␤
for $j -> $e {
    say $e;          # prints any(33, 42, 2.1)
}
How does gather in front of for change the behavior of the Junction, making it possible to loop over it? The documentation does not seem to reflect that behavior. Is that in the spec?
Fixed by jnthn in code and test commits.
Issue filed.
Golfed:
do put .^name for any 1 ; # Int
put .^name for any 1 ; # Mu
Ten of the thirteen statement prefixes listed in the doc can be used instead of do or gather with the same result. (supply unsurprisingly produces no output, and hyper and race are red herrings because they try and fail to apply methods to the junction values.)
Any type of junction produces the same results.
Any number of elements of the junction produces the same result for the for loop without a statement prefix, namely a single Mu. With a statement prefix the for loop repeats the primary statement (the put ...) the appropriate number of times.
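For example, combining the last two observations into one more snippet in the same style as the golf above (illustrative only, describing the pre-fix behavior):
do put .^name for all 1, 2 ; # Int␤Int
put .^name for all 1, 2 ;    # Mu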
I've searched both RT and GH issues and failed to find a related bug report.

Find and Replace an operation in Verilog using Yosys

I am trying to see if Yosys fits my requirements or not.
What I want to do is find an operation in Verilog code (e.g. temp = 16*val1 + 8*val2) and replace it with another operation like (temp = val1 << 4 + val2 << 3).
Which parts of Yosys do I need to learn and use? If anyone knows the set of commands to use, please let me know so I can shorten my learning curve.
Thanks.
For example consider the following verilog input (test.v):
module test(input [7:0] val1, val2, output [7:0] temp);
assign temp = 16*val1 + 8*val2;
endmodule
The command yosys -p 'prep; opt -full; show' test.v will display a circuit diagram of the resulting design, and the output written to the console contains this:
3.1. Executing OPT_EXPR pass (perform const folding).
Replacing multiply-by-16 cell `$mul$test.v:2$1' in module `\test' with shift-by-4.
Replacing multiply-by-8 cell `$mul$test.v:2$2' in module `\test' with shift-by-3.
Replacing $shl cell `$mul$test.v:2$1' (B=3'100, SHR=-4) in module `test' with fixed wiring: { \val1 [3:0] 4'0000 }
Replacing $shl cell `$mul$test.v:2$2' (B=2'11, SHR=-3) in module `test' with fixed wiring: { \val2 [4:0] 3'000 }
The two lines reading Replacing multiply-by-* cell are the transformation you mentioned. The two lines after that replace the constant shift operations with wiring, using {val1[3:0], 4'b0000} and {val2[4:0], 3'b000} as inputs for the adder.
This is done in the opt_expr pass. See passes/opt/opt_expr.cc for its source code to see how it's done.
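If you would rather get the rewritten design back as Verilog source than as a diagram, the same flow can be finished with write_verilog instead of show, e.g. (the output file name here is arbitrary):
yosys -p 'prep; opt -full; write_verilog -noattr rewritten.v' test.v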

IDL batch processing: fully automatic input selection

I need to process MODIS ocean level 2 data, and I obtained an external plugin for ENVI: https://github.com/dawhite/EPOC/releases. Now I want to batch process hundreds of images, for which I modified the code as shown below. The code runs fine, but I have to select the input file every time. Can anyone please help me make the program fully automatic? I really appreciate it and thanks a lot for your help!
Pro OCL2convert
dir = 'C:\MODIS\'
CD, dir
; batch processing of level 2 ocean chlorophyll data
files=file_search('*.L2_LAC_OC.x.hdf', count=numfiles)
; this command will search for all files in the directory which end with
; the specified one
counter=0
; this is a counter that tells IDL which file is being read-starts at 0
While (counter LT numfiles) Do begin
; this command tells IDL to start a loop and to only finish when the counter
; is equal to the number of files with the name specified
name=files(counter)
openr, 1, name
proj = envi_proj_create(/utm, zone=40, datum='WGS-84')
ps = [1000.0d,1000.0d]
no_bowtie = 0 ;same as not setting the keyword
no_msg = 1 ;same as setting the keyword
;OUTPUT CHOICES
;0 -> standard product only
;1 -> georeferenced product only
;2 -> standard and georeferenced products
output_choice = 2
;RETURNED VALUES
;r_fid -> ENVI FID for the standard product, if requested
;georef_fid -> ENVI FID for the georeferenced product, if requested
convert_oc_l2_data, fname=fname, output_path=output_path, $
proj=proj, ps=ps, output_choice=output_choice, r_fid=r_fid, $
georef_fid=georef_fid, no_bowtie=no_bowtie, no_msg=no_msg
print,'done!'
close, 1
counter=counter+1
Endwhile
End
Not knowing what convert_oc_l2_data does (it appears to be a program you created; there is no public documentation for it), I would say that the problem might be that the fname and output_path variables are never defined in the rest of your program.
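If the file-selection dialog comes from fname never being set, then - assuming convert_oc_l2_data takes its input file through the fname keyword and a destination directory through output_path, as the keyword names in your call suggest - an untested sketch of the missing assignments inside the loop would be:
; define the two variables that the convert_oc_l2_data call expects
fname = name           ; the current file from the file_search loop
output_path = dir      ; or any other existing directory for the converted products
convert_oc_l2_data, fname=fname, output_path=output_path, $
  proj=proj, ps=ps, output_choice=output_choice, r_fid=r_fid, $
  georef_fid=georef_fid, no_bowtie=no_bowtie, no_msg=no_msg
The openr, 1, name and close, 1 calls are then probably unnecessary, since the converter opens the HDF file itself.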

Why do the Io REPL and the interpreter give me two different values?

Consider this code:
OperatorTable addOperator(":", 2)
: := method(value,
list(self, value)
)
hash := "key": "value"
hash println
The return should be list(key, value), and when using this in the Io REPL that is exactly the return value. When using the interpreter (as in io somefile.io) the value returned is value. After some inspection the difference is here:
# In the REPL
OperatorTable addOperator(":", 2)
message("k" : "v") # => "k" :("v")
# Via the Interpreter
OperatorTable addOperator(":", 2)
message("k" : "v") # => "k" : "v"
Why is this happening?
File execution happens in these stages:
1. load the file
2. replace operators based on the current operator table
3. execute the contents
So operator-to-message conversion only happens when the file is initially loaded, in stage 2. By the time the operator registration code is executed in stage 3, this has already happened, so the operator has no effect.
You can control the order in which files get loaded and put the operator definitions in the first file loaded: for example, a file called operators.io which contains all operator definitions and is loaded before the files that use them.
After confirming with ticking, I arrived at the following solution:
main.io:
doFile("ops.io")
doFile("script.io")
ops.io:
OperatorTable addOperator(":", 2)
: := method(value,
list(self, value))
script.io:
hash := "key": "value"
hash println
As ticking explains, the whole file is loaded at once, so you have to split it up so that the loading order guarantees the operators are available.

How to prevent common sub-expression elimination (CSE) with GHC

Given the program:
import Debug.Trace
main = print $ trace "hit" 1 + trace "hit" 1
If I compile with ghc -O (7.0.1 or higher) I get the output:
hit
2
i.e. GHC has used common sub-expression elimination (CSE) to rewrite my program as:
main = print $ let x = trace "hit" 1 in x + x
If I compile with -fno-cse then I see hit appearing twice.
Is it possible to avoid CSE by modifying the program? Is there any sub-expression e for which I can guarantee e + e will not be CSE'd? I know about lazy, but can't find anything designed to inhibit CSE.
The background of this question is the cmdargs library, where CSE breaks the library (due to impurity in the library). One solution is to ask users of the library to specify -fno-cse, but I'd prefer to modify the library.
How about removing the source of the trouble -- the implicit effect -- by using a sequencing monad that introduces that effect? E.g. the strict identity monad with tracing:
import Debug.Trace (trace)

data Eval a = Done a
            | Trace String a

instance Monad Eval where
    return x        = Done x
    Done x    >>= k = k x
    Trace s a >>= k = trace s (k a)

runEval :: Eval a -> a
runEval (Done x) = x

track = Trace
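A side note if you try this on a current GHC: since GHC 7.10 a Monad instance also requires Functor and Applicative instances, so the definition above additionally needs roughly this boilerplate (a minimal sketch):
import Control.Monad (ap)

instance Functor Eval where
    fmap f (Done a)    = Done (f a)
    fmap f (Trace s a) = Trace s (f a)

instance Applicative Eval where
    pure  = Done
    (<*>) = ap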
Now we can write stuff with a guaranteed ordering of the trace calls:
main = print $ runEval $ do
    t1 <- track "hit" 1
    t2 <- track "hit" 1
    return (t1 + t2)
while still being pure code, and GHC won't try to get too clever, even with -O2:
$ ./A
hit
hit
2
So we introduce just the computation effect (tracing) sufficient to teach GHC the semantics we want.
This is extremely robust to compiler optimizations, so much so that GHC optimizes the math to 2 at compile time, yet still retains the ordering of the trace statements.
As evidence of how robust this approach is, here's the core with -O2 and aggressive inlining:
main2 =
  case Debug.Trace.trace string trace2 of
    Done x -> case x of
      I# i# -> $wshowSignedInt 0 i# []
    Trace _ _ -> err

trace2 = Debug.Trace.trace string d

d :: Eval Int
d = Done n

n :: Int
n = I# 2

string :: [Char]
string = unpackCString# "hit"
So GHC has done everything it could to optimize the code -- including computing the math statically -- while still retaining the correct tracing.
References: the useful Eval monad for sequencing was introduced by Simon Marlow.
Reading the source code to GHC, the only expressions that aren't eligible for CSE are those which fail the exprIsBig test. Currently that means the Expr values Note, Let and Case, and expressions which contain those.
Therefore, an answer to the above question would be:
unit = reverse "" `seq` ()

main = print $ trace "hit" (case unit of () -> 1) +
               trace "hit" (case unit of () -> 1)
Here we create a value unit which resolves to (), but which GHC can't determine the value for (by using a recursive function GHC can't optimise away - reverse is just a simple one to hand). This means GHC can't CSE the trace function and its 2 arguments, and we get hit printed twice. This works with both GHC 6.12.4 and 7.0.3 at -O2.
I think you can specify the -fno-cse option in the source file, i.e. by putting a pragma
{-# OPTIONS_GHC -fno-cse #-}
on top.
Another method to avoid common subexpression elimination or let floating in general is to introduce dummy arguments. For example, you can try
let x () = trace "hi" 1 in x () + x ()
This particular example won't necessarily work; ideally, you should specify a data dependency via dummy arguments. For instance, the following is likely to work:
let
  x dummy = trace "hi" $ dummy `seq` 1
  x1 = x ()
  x2 = x x1
in x1 + x2
The result of x now "depends" on the argument dummy and there is no longer a common subexpression.
I'm a bit unsure about Don's sequencing monad (posting this as an answer because the site doesn't let me add comments). Modifying the example a bit:
main :: IO ()
main = print $ runEval $ do
    t1 <- track "hit 1" (trace "really hit 1" 1)
    t2 <- track "hit 2" 2
    return (t1 + t2)
This gives us the following output:
hit 1
hit 2
really hit 1
That is, the first trace fires when the t1 <- ... statement is executed, not when t1 is actually evaluated in return (t1 + t2). If we define the monadic bind operator as
Done x >>= k = k x
Trace s a >>= k = k (trace s a)
instead, the output will reflect the actual evaluation order:
hit 1
really hit 1
hit 2
That is, the traces will fire when the (t1 + t2) statement is executed, which is (IMO) what we really want. For example, if we change (t1 + t2) to (t2 + t1), this solution produces the following output:
hit 2
hit 1
really hit 1
The output of the original version remains unchanged, and we don't see when our terms are really evaluated:
hit 1
hit 2
really hit 1
Like the original solution, this also works with -O3 (tested on GHC 7.0.3).