What is the most optimal way to apply a kernel to a large array in F#? - optimization

I have the following code:
// reduce by 25x
let smallOutput = Array2D.init (output.GetLength(0) / 5) (output.GetLength(1) / 5) (fun _ _ -> Int32.MinValue)
let weights =
    array2D [|
        [| 1; 1; 1; 1; 1 |]
        [| 1; 3; 3; 3; 1 |]
        [| 1; 3; 5; 3; 1 |]
        [| 1; 3; 3; 3; 1 |]
        [| 1; 1; 1; 1; 1 |]
    |]
let weightsSum = 45
for y in [0 .. smallOutput.GetLength(0) - 1] do
    for x in [0 .. smallOutput.GetLength(1) - 1] do
        let mutable v = 0
        for i in [0 .. 4] do
            for j in [0 .. 4] do
                v <- v + weights.[j, i] * output.[y * 5 + j, x * 5 + i]
        smallOutput.[y, x] <- v / weightsSum
It takes a large matrix (16k x 16k) and reduces it by 25x while applying weights.
I understand I can try to do this in a Parallel.ForEach loop, but I am wondering if there is anything built into F# that would make this faster in the first place.

I don't think there's much you can do to optimize it further, short of doing the summation already when you initialize the smallOutput variable; i.e.
let smallOutput = Array2D.init (output.GetLength(0) / 5) (output.GetLength(1) / 5) (fun y x ->
    let mutable v = 0
    for i in [0 .. 4] do
        for j in [0 .. 4] do
            v <- v + weights.[j, i] * output.[y * 5 + j, x * 5 + i]
    v / weightsSum)
The thing is, you need to loop over all entries in the larger array; there's no way of getting around that. If you know the structure of the weighting matrix beforehand, e.g. that it's symmetric in some way, you might be able to exploit that. Though to be honest, I'm not sure how much of an optimization that would yield.
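If you do want to parallelise it: FSharp.Core's Array.Parallel module only covers one-dimensional arrays, so for an Array2D you would typically fall back on the Task Parallel Library. A minimal sketch, assuming output, weights and weightsSum are defined as above, parallelising over the rows of the reduced array:

open System.Threading.Tasks

// Each row of the reduced array is independent, so rows can be computed in parallel.
let smallOutput = Array2D.zeroCreate (output.GetLength(0) / 5) (output.GetLength(1) / 5)
Parallel.For(0, smallOutput.GetLength(0), (fun y ->
    for x in 0 .. smallOutput.GetLength(1) - 1 do
        let mutable v = 0
        for i in 0 .. 4 do
            for j in 0 .. 4 do
                v <- v + weights.[j, i] * output.[y * 5 + j, x * 5 + i]
        smallOutput.[y, x] <- v / weightsSum)) |> ignore

Using plain ranges (0 .. 4) rather than list literals ([0 .. 4]) in the inner loops also avoids allocating a list on every iteration.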

Related

Linear programming if/then modification to cost function?

I'm setting up a linear programming optimization model using CPLEX and am wondering if it's possible to accomplish a modification of the cost function dependent upon which binary decision variables are 'active' in an arbitrary solution. This is mostly a question about how to formulate the LP model (if it's even possible), but responses in the context of CPLEX are welcome or even preferred.
Say I have an LP problem in canonical form:
minimize c^T x
s.t. Ax <= b
With cost function:
c = [c_1, c_2,...,c_100]
All variables are binary. I have this basic setup modeled and running effectively in CPLEX.
Now say I have a subset of variables:
efficiency_set = [x_1, x_2,...,x_5]
With the condition:
if any x_n in efficiency_set == 1
then c_n for all other x_n in the set = 0.9 * c_n
Essentially there is a dependency where if any x_n in the efficiency set is 'active', it becomes 10% less expensive for other variables in the set to appear in the solution.
I thought that CPLEX indicator constraints were what I was looking for, but after reading through the documentation, I don't think I can enforce an on-the-fly change to the cost function with them (I could be wrong). So I feel like it needs to be done through the formulation of the LP, but I can't reason how to accomplish it. Any ideas? Thanks.
In CPLEX you have many APIs; let me answer with the easiest one, OPL.
Your canonical form can be written as:
int n=3;
int m=4;
range N=1..n;
range M=1..m;
float A[N][M]=[[1,4,9,6],[8,5,0,8],[2,9,0,2]];
float B[M]=[3,1,3,0];
float C[N]=[1,1,1];
dvar boolean x[N];
minimize sum(i in N) C[i]*x[i];
subject to
{
forall(j in M) sum(i in N) A[i,j]*x[i]>=B[j];
}
and then you can write logical constraints:
int n=3;
int m=4;
range N=1..n;
range M=1..m;
float A[N][M]=[[1,4,9,6],[8,5,0,8],[2,9,0,2]];
float B[M]=[3,1,3,0];
float C[N]=[1,1,1];
{int} efficiencySet={1,2};
dvar boolean activeEfficiencySet;
dvar boolean x[N];
minimize sum(i in N) C[i]*x[i]*(1-0.1*activeEfficiencySet*(i not in efficiencySet));
subject to
{
forall(j in M) sum(i in N) A[i,j]*x[i]>=B[j];
activeEfficiencySet==(1<=sum(i in efficiencySet) x[i]);
}
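Restated in mathematical notation (my reading of the OPL model above, writing y for activeEfficiencySet, S for the efficiency set, and [i not in S] for an indicator that is 1 when i is outside S and 0 otherwise):

minimize    sum_i c_i * x_i * (1 - 0.1 * y * [i not in S])
s.t.        sum_i A_ij * x_i >= b_j        for all j
            y = 1  <=>  sum_{i in S} x_i >= 1
            x_i, y in {0, 1}

The objective is quadratic in the binaries (it multiplies y by x_i), which CPLEX can handle.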
Using Alex's data, I have written the program in docplex (the CPLEX Python API):
from docplex.mp.model import Model
n = 3
m = 4
A = {}
A[0, 0] = 1
A[0, 1] = 4
A[0, 2] = 9
A[0, 3] = 6
A[1, 0] = 8
A[1, 1] = 5
A[1, 2] = 0
A[1, 3] = 8
A[2, 0] = 2
A[2, 1] = 9
A[2, 2] = 0
A[2, 3] = 2
B = {}
B[0] = 3
B[1] = 1
B[2] = 3
B[3] = 0
C = {}
C[0] = 1
C[1] = 1
C[2] = 1
efficiencySet = [0, 1]
mdl = Model(name="")
activeEfficiencySet = mdl.binary_var()
x = mdl.binary_var_dict(range(n), name="x")
# constraint 1:
for j in range(m):
    mdl.add_constraint(mdl.sum(A[i, j] * x[i] for i in range(n)) >= B[j])
# constraint 2:
mdl.add(activeEfficiencySet == (mdl.sum(x[i] for i in efficiencySet) >= 1))
# objective function:
# expr = mdl.linear_expr()
lst = []
for i in range(n):
    if i not in efficiencySet:
        lst.append(C[i] * x[i] * (1 - 0.1 * activeEfficiencySet))
    else:
        lst.append(C[i] * x[i])
mdl.minimize(mdl.sum(lst))
mdl.solve()
for i in range(n):
    print(str(x[i]) + " : " + str(x[i].solution_value))
print(activeEfficiencySet.solution_value)

Haskell, function works when using numbers, but not with variables

I'm using ghci and I'm having a problem with a function for getting the factors of a number.
The code I would like to work is:
let factors n = [x | x <- [1..truncate (n/2)], mod n x == 0]
It doesn't complain when I then hit enter, but as soon as I try to use it (with 66 in this case) I get this error message:
Ambiguous type variable 't0' in the constraints:
(Integral t0)
arising from a use of 'factors' at <interactive>:30:1-10
(Num t0) arising from the literal '66' at <interactive>:30:12-13
(RealFrac t0)
arising from a use of 'factors' at <interactive>:30:1-10
Probable fix: add a type signature that fixes these type variable(s)
In the expression: factors 66
In the equation for 'it': it = factors 66
The following code works perfectly:
let factorsOfSixtySix = [x | x <- [1..truncate (66/2)], mod 66 x == 0]
I'm new to Haskell, and after looking up types and typeclasses, I'm still not sure what I'm meant to do.
Use div for integer division instead:
let factors n = [x | x <- [1.. n `div` 2], mod n x == 0]
The problem in your code is that / requires a RealFrac type for n, while mod requires an Integral one. This is fine during the definition, but when you apply the function you cannot choose a type that fits both constraints.
Another option could be to truncate n before using mod, but it is more cumbersome. After all, you do not wish to call factors 6.5, do you? ;-)
let factors n = [x | x <- [1..truncate (n/2)], mod (truncate n) x == 0]
If you put a type annotation on this top-level bind (idiomatic Haskell), you get different, possibly more useful error messages.
GHCi> let factors n = [x | x <- [1..truncate (n/2)], mod n x == 0]
GHCi> :t factors
factors :: (Integral t, RealFrac t) => t -> [t]
GHCi> let { factors :: Double -> [Double]; factors n = [x | x <- [1..truncate (n/2)], mod n x == 0]; }
<interactive>:30:64:
No instance for (Integral Double) arising from a use of `truncate'
Possible fix: add an instance declaration for (Integral Double)
In the expression: truncate (n / 2)
In the expression: [1 .. truncate (n / 2)]
In a stmt of a list comprehension: x <- [1 .. truncate (n / 2)]
GHCi> let { factors :: Integer -> [Integer]; factors n = [x | x <- [1..truncate (n/2)], mod n x == 0]; }
<interactive>:31:66:
No instance for (RealFrac Integer) arising from a use of `truncate'
Possible fix: add an instance declaration for (RealFrac Integer)
In the expression: truncate (n / 2)
In the expression: [1 .. truncate (n / 2)]
In a stmt of a list comprehension: x <- [1 .. truncate (n / 2)]
<interactive>:31:77:
No instance for (Fractional Integer) arising from a use of `/'
Possible fix: add an instance declaration for (Fractional Integer)
In the first argument of `truncate', namely `(n / 2)'
In the expression: truncate (n / 2)
In the expression: [1 .. truncate (n / 2)]
I am new to Haskell, so please forgive my courage to come up with an answer here, but recently I have done this as follows:
factors :: Int -> [Int]
factors n = f' ++ [n `div` x | x <- tail f', x /= exc]
  where lim = truncate (sqrt (fromIntegral n))
        exc = ceiling (sqrt (fromIntegral n))
        f'  = [x | x <- [1..lim], n `mod` x == 0]
I believe it's more efficient, since it only searches for divisors up to the square root of n and derives the larger cofactors from those. You will notice the difference if you try, for example:
sum (factors 33550336)

How to create cartesian product [duplicate]

This question already has answers here:
Generate all possible n-character passwords
(4 answers)
Closed 1 year ago.
I have a list of integers, a = [0, ..., n]. I want to generate all possible combinations of k elements from a; i.e., the Cartesian product of a with itself, k times. Note that n and k are both changeable at runtime, so this needs to be at least a somewhat adjustable function.
So if n was 3, and k was 2:
a = [0, 1, 2, 3]
k = 2
desired = [(0,0), (0, 1), (0, 2), ..., (2,3), (3,0), ..., (3,3)]
In python I would use the itertools.product() function:
for p in itertools.product(a, repeat=2):
    print p
What's an idiomatic way to do this in Go?
Initial guess is a closure that returns a slice of integers, but it doesn't feel very clean.
For example,
package main

import "fmt"

func nextProduct(a []int, r int) func() []int {
    p := make([]int, r)
    x := make([]int, len(p))
    return func() []int {
        p := p[:len(x)]
        for i, xi := range x {
            p[i] = a[xi]
        }
        for i := len(x) - 1; i >= 0; i-- {
            x[i]++
            if x[i] < len(a) {
                break
            }
            x[i] = 0
            if i <= 0 {
                x = x[0:0]
                break
            }
        }
        return p
    }
}

func main() {
    a := []int{0, 1, 2, 3}
    k := 2
    np := nextProduct(a, k)
    for {
        product := np()
        if len(product) == 0 {
            break
        }
        fmt.Println(product)
    }
}
Output:
[0 0]
[0 1]
[0 2]
[0 3]
[1 0]
[1 1]
[1 2]
[1 3]
[2 0]
[2 1]
[2 2]
[2 3]
[3 0]
[3 1]
[3 2]
[3 3]
The code to find the next product in lexicographic order is simple: starting from the right, find the first value that won't roll over when you increment it, increment that and zero the values to the right.
package main

import "fmt"

func main() {
    n, k := 5, 2
    ix := make([]int, k)
    for {
        fmt.Println(ix)
        j := k - 1
        for ; j >= 0 && ix[j] == n-1; j-- {
            ix[j] = 0
        }
        if j < 0 {
            return
        }
        ix[j]++
    }
}
I've changed "n" to mean the set is [0, 1, ..., n-1] rather than [0, 1, ..., n] as given in the question, since the latter is confusing: it has n+1 elements.
Just follow the answer to Implement Ruby style Cartesian product in Go; you can play with it at http://play.golang.org/p/NR1_3Fsq8F
package main

import "fmt"

// NextIndex sets ix to the lexicographically next value,
// such that for each i>0, 0 <= ix[i] < lens.
func NextIndex(ix []int, lens int) {
    for j := len(ix) - 1; j >= 0; j-- {
        ix[j]++
        if j == 0 || ix[j] < lens {
            return
        }
        ix[j] = 0
    }
}

func main() {
    a := []int{0, 1, 2, 3}
    k := 2
    lens := len(a)

    r := make([]int, k)
    for ix := make([]int, k); ix[0] < lens; NextIndex(ix, lens) {
        for i, j := range ix {
            r[i] = a[j]
        }
        fmt.Println(r)
    }
}

C language. Logic error: The left operand of '-' is a garbage value

I have the following code, but on the line x[i] = (rhs[i] - x[i - 1]) / b; the compiler is telling me that rhs[i] is a garbage value. Why does it happen? And how can I remove this warning?
double* getFirstControlPoints(double* rhs, const int n) {
    double *x;
    x = (double*)malloc(n * sizeof(double));
    double *tmp; // Temp workspace.
    tmp = (double*)malloc(n * sizeof(double));

    double b = 2.0;
    x[0] = rhs[0] / b;
    for (int i = 1; i < n; i++) // Decomposition and forward substitution.
    {
        tmp[i] = 1 / b;
        b = (i < n - 1 ? 4.0 : 3.5) - tmp[i];
        x[i] = (rhs[i] - x[i - 1]) / b; // The left operand of '-' is a garbage value
    }
    for (int i = 1; i < n; i++) {
        x[n - i - 1] -= tmp[n - i] * x[n - i]; // Backsubstitution.
    }
    free(tmp);
    return x;
}
You can see all compiler warnings and the call to getFirstControlPoints in the screenshots.
You need a check to make sure you have at least 4 points in the points array, because this loop (line 333):
for (NSInteger i = 1; i < n - 1; ++i) {
    // initialisation stuff
}
will not execute at all for n = 0, 1, 2.
Assume that points has 3 objects in it. At line 311 you set n to the count - 1, i.e. n == 2.
Then the loop condition is i < 2 - 1, i.e. i < 1.
I think you need the loop condition to be i < n.
If points.count is 0 or 1 you are facing some problems, because then n is -1 or 0, you access rhs[n - 1], and you malloc n * sizeof(double) bytes. Maybe that is the problem; could you put some range checks into the code?

How to optimize splitting in F# more?

This code splits a list into two pieces at the point where a predicate, applied to the accumulated prefix, first returns false.
let split pred ys =
    let rec split' l r =
        match r with
        | [] -> []
        | x::xs -> if pred (x::l) then x::(split' (x::l) xs) else []
    let res = split' [] ys
    let last = ys |> Seq.skip (Seq.length res) |> Seq.toList
    (res, last)
Does someone know a more optimal and simpler way to do that in F#?
Well, you can make it tail recursive, but then you have to reverse the list. You wouldn't want to fold it, since it can exit the recursive loop at any time. I did a little testing, and the cost of reversing the list is more than made up for by tail recursion.
// val pred : ('a list -> bool)
let split pred xs =
    let rec split' l xs ys =
        match xs with
        | [] -> [], ys
        | x::xs -> if pred (x::l) then (split' (x::l) xs (x::ys)) else x::xs, ys
    let last, res = split' [] xs []
    (res |> List.rev, last)
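For example, with a hypothetical predicate over the accumulated prefix (which this version passes newest element first):

let taken, rest = split (fun prefix -> List.sum prefix < 10) [1; 3; 5; 7; 2]
// taken = [1; 3; 5], rest = [7; 2]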
A version similar to Brian's that is tail recursive and takes a single value predicate.
// val pred : ('a -> bool)
let split pred xs =
    let rec split' xs ys =
        match xs with
        | [] -> [], ys
        | x::xs -> if pred x then (split' xs (x::ys)) else (x::xs), ys
    let last, res = split' xs []
    (res |> List.rev, last)
This is different from the library function partition in that it stops taking elements as soon as the predicate returns false, somewhat like Seq.takeWhile.
let li = [1; 3; 5; 7; 2; 4; 6; 8]

// library function
let x, y = List.partition (fun x -> x < 5) li
printfn "%A" x // [1; 3; 2; 4]
printfn "%A" y // [5; 7; 6; 8]

let x, y = split (fun x -> x < 5) li
printfn "%A" x // [1; 3]
printfn "%A" y // [5; 7; 2; 4; 6; 8]
Not tail-recursive, but:
let rec Break pred list =
    match list with
    | [] -> [], []
    | x::xs when pred x ->
        let a, b = Break pred xs
        x::a, b
    | x::xs -> [x], xs
let li = [1; 3; 5; 7; 2; 4; 6; 8]
let a, b = Break (fun x -> x < 5) li
printfn "%A" a // [1; 3; 5]
printfn "%A" b // [7; 2; 4; 6; 8]
// Also note this library function
let x, y = List.partition (fun x -> x < 5) li
printfn "%A" x // [1; 3; 2; 4]
printfn "%A" y // [5; 7; 6; 8]
Here is a fold-based way:
let split' pred xs =
    let f (ls, rs, cond) x =
        if cond (ls @ [x]) then (ls @ [x], rs, cond)
        else (ls, rs @ [x], (fun _ -> false))
    let ls, rs, _ = List.fold f ([], [], pred) xs
    ls, rs
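A quick check with a hypothetical length-based predicate (note that here the accumulated prefix is passed in its original order):

let ls, rs = split' (fun prefix -> List.length prefix <= 2) [1; 2; 3; 4; 5]
// ls = [1; 2], rs = [3; 4; 5]

Appending with @ at every step makes this version quadratic in the length of the first part.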