or-tools Maximize/Minimize OR / XOR - optimization

Simple example:
days = range(1, 10)
for d in days:
    model.AddBoolXOr(a, b, c, d, e, f, g)
Above I try to ensure that exactly one of a...g is true every day (strictly speaking, AddBoolXOr enforces that an odd number of its literals is true, which is why the answer below encodes the condition as a sum-equals-one constraint instead). But it is not always possible to achieve this every day, so I want to be able to maximize the number of days on which it is achieved. Something like...
array_bools = []
days = range(1, 10)
for d in days:
    day_bool = NewBoolVar('name')
    model.Add(day_bool == XOr(a, b, c, d, e, f, g))
    array_bools.append(day_bool)
model.Maximize(sum(array_bools[i] for i in range(len(array_bools))))

array_bools = []
days = range(1, 10)
for d in days:
    day_bool = model.NewBoolVar('day_%i' % d)  # give each day its own literal
    # day_bool <=> exactly one of a..g is true on this day
    model.Add(sum([a, b, c, d, e, f, g]) == 1).OnlyEnforceIf(day_bool)
    model.Add(sum([a, b, c, d, e, f, g]) != 1).OnlyEnforceIf(day_bool.Not())
    array_bools.append(day_bool)
model.Maximize(sum(array_bools))
See the OR-Tools CP-SAT documentation on channeling constraints.
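For completeness, here is a minimal runnable sketch of that pattern (my own example, not from the original post; the layout of seven fresh Booleans per day is an assumption):

from ortools.sat.python import cp_model

model = cp_model.CpModel()
days = range(1, 10)
# Assumption: a..g become one list of seven fresh Booleans per day.
x = {d: [model.NewBoolVar('x_%i_%i' % (d, k)) for k in range(7)] for d in days}

day_ok = []
for d in days:
    ok = model.NewBoolVar('ok_%i' % d)
    # ok <=> exactly one of this day's seven Booleans is true
    model.Add(sum(x[d]) == 1).OnlyEnforceIf(ok)
    model.Add(sum(x[d]) != 1).OnlyEnforceIf(ok.Not())
    day_ok.append(ok)

model.Maximize(sum(day_ok))
solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print('days with exactly one true:', solver.ObjectiveValue())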


How to fix "submatrix incorrectly defined" in Scilab?

I am trying to find three parameters (a, b, c) to fit my experimental data, using an ODE solver and least-squares optimization with Scilab's built-in functions.
However, I keep getting the message "submatrix incorrectly defined" at the line "y_exp(:,1) = [0.135 ...".
When I try another series of data (t, y_exp), such as the one used in the original template, I get no error messages. The template I use was found here: https://wiki.scilab.org/Non%20linear%20optimization%20for%20parameter%20fitting%20example
function dy = myModel ( t , y , a , b , c )
    // The right-hand side of the Ordinary Differential Equation.
    dy(1) = -a*y(1) - b*y(1)*y(2)
    dy(2) = a*y(1) - b*y(1)*y(2) - c*y(2)
endfunction

function f = myDifferences ( k )
    // Returns the difference between the simulated differential
    // equation and the experimental data.
    global MYDATA
    t = MYDATA.t
    y_exp = MYDATA.y_exp
    a = k(1)
    b = k(2)
    c = k(3)
    y0 = y_exp(1,:)
    t0 = 0
    y_calc = ode(y0', t0, t, list(myModel, a, b, c))
    diffmat = y_calc' - y_exp
    // Make a column vector
    f = diffmat(:)
    MYDATA.funeval = MYDATA.funeval + 1
endfunction
// Experimental data
t = [0,20,30,45,75,105,135,180,240]';
y_exp(:,1) = [0.135,0.0924,0.067,0.0527,0.0363,0.02445,0.01668,0.012,0.009]';
y_exp(:,2) = [0,0.00918,0.0132,0.01835,0.0261,0.03215,0.0366,0.0393,0.0401]';
// Store data for future use
global MYDATA;
MYDATA.t = t;
MYDATA.y_exp = y_exp;
MYDATA.funeval = 0;

function val = L_Squares ( k )
    // Computes the sum of squares of the differences.
    f = myDifferences ( k )
    val = sum(f.^2)
endfunction

// Initial guess
a = 0;
b = 0;
c = 0;
x0 = [a;b;c];
[fopt, xopt] = leastsq(myDifferences, x0)
Does anyone know how to approach this problem?
Just rewrite the two y_exp assignments as a single statement:
y_exp = [0.135,0.0924,0.067,0.0527,0.0363,0.02445,0.01668,0.012,0.009
         0,0.00918,0.0132,0.01835,0.0261,0.03215,0.0366,0.0393,0.0401]';
or insert a clear at the top of the script: you may have defined y_exp before with a different size, in which case assigning a 9-element column into y_exp(:,1) fails with exactly this "submatrix incorrectly defined" error.

Calculate time complexity for the following snippet

Can someone please calculate the number of steps it will take to execute the following code?
And verify the solution with some input values of n.
(I found some related questions, but they did not help.)
int count = 0;
for (int i = 1; i <= n; i = i * 2)
{
    for (int j = 1; j <= i; j = j * 2)
    {
        count++;
    }
}
We can make a table:
i = 1: j = 1 --> 1 count
i = 2: j = 1,2 --> 2 counts
i = 4: j = 1,2,4 --> 3 counts
i = 8: j = 1,2,4,8 --> 4 counts
The pattern should be clear from here. We can reimagine the pattern so that the counter runs over 1, 2, 3, 4, ..., and instead of going from 1 to n, it goes from 1 to log n. This means the total count should be the sum from i = 1 to log_2(n) of i. The sum from i = 1 to x of i is simply x(x+1)/2, so if x = log_2(n), this sum is log_2(n) * (log_2(n) + 1) / 2.
EDIT: It seems I made a mistake somewhere, and what I wrote is actually f(n/2) based on empirical tests. Thus, the correct answer is actually log_2(2n) * (log_2(2n) + 1) / 2. Nevertheless, this is the logic I would follow to solve a problem like this.
EDIT 2: Caught my mistake. Instead of saying "let's just say it goes from 1 to log n", I should have said "let's just say it goes from 0 to log n" (i.e., I need to take the log of every number in the series).
Inner-loop work per value of i:
i = 1 --> log(1) = 0
i = 2 --> log(2) = 1
i = 4 --> log(4) = 2
i = 8 --> log(8) = 3
i = 16 -> log(16) = 4
i = 32 -> log(32) = 5
i = 64 -> log(64) = 6
.
.
.
i = n -> log(n) = log(n)
That is the amount of work per outer iteration, and the outer loop stops after about log(n) iterations, once i reaches n.
1 + 2 + 3 + 4 +...+ log(n) = [(1+log(n))*log(n)]/2 = O(log^2(n))
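A quick empirical check (my own addition, not part of either answer): writing L = floor(log_2 n), the exact count is 1 + 2 + ... + (L + 1) = (L + 1)(L + 2)/2, which for n a power of two agrees with the corrected log_2(2n) * (log_2(2n) + 1) / 2 above. A short Python script confirms it:

def actual_count(n):
    # direct transcription of the nested loops
    count = 0
    i = 1
    while i <= n:
        j = 1
        while j <= i:
            count += 1
            j *= 2
        i *= 2
    return count

def closed_form(n):
    L = n.bit_length() - 1          # floor(log2(n)) for a positive int
    return (L + 1) * (L + 2) // 2   # 1 + 2 + ... + (L + 1)

for n in [1, 2, 3, 4, 8, 10, 100, 1000]:
    assert actual_count(n) == closed_form(n)
print('closed form matches the loop count')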

How to declare constraints with variable as array index in Z3Py?

Suppose x, y, z are int variables and A is a matrix; I want to express a constraint like:
z == A[x][y]
However this leads to an error:
TypeError: object cannot be interpreted as an index
What would be the correct way to do this?
=======================
A specific example:
I want to select 2 items with the best combination score,
where the score is given by the value of each item and a bonus on the selection pair.
For example,
for 3 items a, b, c with respective values [1,2,1] and bonuses on the pairs (a,b) = 2, (a,c) = 5, (b,c) = 3, the best selection is (a,c), because it has the highest score: 1 + 1 + 5 = 7.
My question is how to represent the constraint of selection bonus.
Suppose CHOICE[0] and CHOICE[1] are the selection variables and B is the bonus variable.
The ideal constraint should be:
B = bonus[CHOICE[0]][CHOICE[1]]
but it results in TypeError: object cannot be interpreted as an index
I know another way is to use nested loops to instantiate the CHOICE variables first and then express B, but this is really inefficient for large quantities of data.
Could anyone suggest a better solution?
If someone wants to play a toy example, here's the code:
from z3 import *

items = [0,1,2]
value = [1,2,1]
bonus = [[1,2,5],
         [2,1,3],
         [5,3,1]]
choices = [0,1]
# selection score
SCORE = [ Int('SCORE_%s' % i) for i in choices ]
# bonus
B = Int('B')
# final score
metric = Int('metric')
# selection variable
CHOICE = [ Int('CHOICE_%s' % i) for i in choices ]
# variable domain
domain_choice = [ And(0 <= CHOICE[i], CHOICE[i] < len(items)) for i in choices ]
# selection implication
constraint_sel = []
for c in choices:
    for i in items:
        constraint_sel += [Implies(CHOICE[c] == i, SCORE[c] == value[i])]
# choices not the same
constraint_neq = [CHOICE[0] != CHOICE[1]]
# bonus constraint; uncomment the second line to see the issue
constraint_b = []  # placeholder so the script runs
# constraint_b = [B == bonus[CHOICE[0]][CHOICE[1]]]
# metric definition
constraint_sumscore = [metric == sum([SCORE[i] for i in choices]) + B]
constraints = constraint_sumscore + constraint_sel + domain_choice + constraint_neq + constraint_b
opt = Optimize()
opt.add(constraints)
opt.maximize(metric)
if opt.check() == sat:
    m = opt.model()
    print [ m.evaluate(CHOICE[i]) for i in choices ]
    print m.evaluate(metric)
else:
    print "failed to solve"
Turns out the best way to deal with this problem is to actually not use arrays at all, but simply create integer variables. With this method, the 317x317 item problem originally posted actually gets solved in about 40 seconds on my relatively old computer:
[ 0.01s] Data loaded
[ 2.06s] Variables defined
[37.90s] Constraints added
[38.95s] Solved:
c0 = 19
c1 = 99
maxVal = 27
Note that the actual "solution" is found in about a second! But adding all the required constraints takes the bulk of the 40 seconds spent. Here's the encoding:
from z3 import *
import json
import time

start = time.time()

def tprint(s):
    global start
    now = time.time()
    etime = now - start
    print "[%ss] %s" % ('{0:5.2f}'.format(etime), s)

# load data
with open('data.json') as data_file:
    dic = json.load(data_file)
tprint("Data loaded")
items = dic['items']
valueVals = dic['value']
bonusVals = dic['bonusVals']
# one integer variable per pair (i, j) with j > i
vals = [[Int("val_%d_%d" % (i, j)) for j in items if j > i] for i in items]
tprint("Variables defined")
opt = Optimize()
for i in items:
    for j in items:
        if j > i:
            opt.add(vals[i][j-i-1] == valueVals[i] + valueVals[j] + bonusVals[i][j])
c0, c1 = Ints('c0 c1')
maxVal = Int('maxVal')
opt.add(Or([Or([And(c0 == i, c1 == j, maxVal == vals[i][j-i-1]) for j in items if j > i]) for i in items]))
tprint("Constraints added")
opt.maximize(maxVal)
r = opt.check()
if r == unsat or r == unknown:
    raise Z3Exception("Failed")
tprint("Solved:")
m = opt.model()
print "  c0     = %s" % m[c0]
print "  c1     = %s" % m[c1]
print "  maxVal = %s" % m[maxVal]
I think this is as fast as it'll get with Z3 for this problem. Of course, if you want to maximize multiple metrics, then you can probably structure the code so that you can reuse most of the constraints, thus amortizing the cost of constructing the model just once, and incrementally optimizing afterwards for optimal performance.
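One way to realize that reuse (a sketch of my own, assuming the opt object and maxVal variable from the code above; z3's Optimize supports push/pop scopes):

# build all the pairwise constraints once, as above, then:
opt.push()                      # snapshot the base constraints
opt.maximize(maxVal)            # objective for the first metric
if opt.check() == sat:
    print(opt.model()[maxVal])
opt.pop()                       # drop the objective, keep the constraints
# push again and maximize a different metric without rebuilding the model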

Convert Notes to Hertz (iOS)

I have tried to write a function that takes notes in MIDI form (C2, A4, Bb6) and returns their respective frequencies in hertz, and I'm not sure of the best way to do this. I am torn between two approaches: 1) a lookup-based one, where I switch on the input and return hard-coded frequency values, given that I may only have to do this for 88 notes (in the grand-piano case); 2) a simple mathematical approach, though my math skills are a limitation, as is converting the input string into a numerical value. I've been working on this for a while and could use some direction.
You can use a function based on this formula:
The basic formula for the frequencies of the notes of the equal tempered scale is
f_n = f_0 * a^n
where
f_0 = the frequency of one fixed note, which must be defined. A common choice is to set the A above middle C (A4) at f_0 = 440 Hz.
n = the number of half steps away from the fixed note. If you are at a higher note, n is positive; if you are on a lower note, n is negative.
f_n = the frequency of the note n half steps away.
a = 2^(1/12), the twelfth root of 2, i.e. the number which when multiplied by itself 12 times equals 2, approximately 1.059463094359.
http://www.phy.mtu.edu/~suits/NoteFreqCalcs.html
In Objective-C, this would be:
+ (double)frequencyForNote:(Note)note withModifier:(Modifier)modifier inOctave:(int)octave {
    int halfStepsFromA4 = note - A;
    halfStepsFromA4 += 12 * (octave - 4);
    halfStepsFromA4 += modifier;

    double frequencyOfA4 = 440.0;
    double a = 1.059463094359;
    return frequencyOfA4 * pow(a, halfStepsFromA4);
}
With the following enums defined:
typedef enum : int {
    C = 0,
    D = 2,
    E = 4,
    F = 5,
    G = 7,
    A = 9,
    B = 11,
} Note;

typedef enum : int {
    None = 0,
    Sharp = 1,
    Flat = -1,
} Modifier;
https://gist.github.com/NickEntin/32c37e3d31724b229696
Why don't you use the MIDI pitch number? The conversion is
f = 440 * 2^((d - 69) / 12)
where f is the frequency in Hz and d is the MIDI note number (A4 = 69).
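As a hedged sketch (my own, not from either answer above), here is the same conversion in Python, including the string parsing the question asks about; the note-name grammar (letter, optional # or b, octave digit) and the MIDI convention C-1 = 0 are assumptions:

NOTE_OFFSETS = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

def note_to_hz(name):
    # parse e.g. "C2", "A4", "Bb6": letter, optional accidental, octave
    semitone = NOTE_OFFSETS[name[0]]
    rest = name[1:]
    if rest[0] == '#':
        semitone += 1
        rest = rest[1:]
    elif rest[0] == 'b':
        semitone -= 1
        rest = rest[1:]
    octave = int(rest)
    d = 12 * (octave + 1) + semitone      # MIDI number, C-1 = 0, so A4 = 69
    return 440.0 * 2.0 ** ((d - 69) / 12.0)

print(note_to_hz('A4'))   # 440.0
print(note_to_hz('C2'))   # ~65.41
print(note_to_hz('Bb6'))  # ~1864.66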

Matlab dynamic parameter generation

Until now x had two columns and there were no problems, but now x can have a varying number of columns. How can I write the analogous code for a dynamic number of columns in x?
min_x = min(x);
max_x = max(x);
step = (max_x - min_x)/50;
[X, Y] = ndgrid(min_x(1):step(1):max_x(1), min_x(2):step(2):max_x(2));
You can use cell arrays to generate a comma-separated list:
%# sample data
x = rand(10,3); %# you can change the column numbers here
%# calculate step sizes
mn = min(x);
mx = max(x);
step = (mx-mn)/50;
%# vec{i} = mn(i):s(i):mx(i)
vec = arrayfun(@(a,s,b) a:s:b, mn, step, mx, 'UniformOutput',false);
%# [X,Y,...] = ndgrid(vec{1},vec{2},...)
C = cell(1,numel(vec));
[C{:}] = ndgrid( vec{:} );
%# result = [X(:),Y(:),...]
result = cell2mat( cellfun(@(v) v(:), C, 'UniformOutput',false) );
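For readers more at home in Python, here is a rough NumPy equivalent of the same trick (my own sketch, not part of the original answer); unpacking the list with np.meshgrid(*vecs, indexing='ij') plays the role of ndgrid(vec{:}):

import numpy as np

x = np.random.rand(10, 3)                      # any number of columns
mn, mx = x.min(axis=0), x.max(axis=0)
# 51 points per axis, like mn(i):step(i):mx(i) with step = (mx-mn)/50
vecs = [np.linspace(a, b, 51) for a, b in zip(mn, mx)]
grids = np.meshgrid(*vecs, indexing='ij')      # 'ij' matches ndgrid ordering
result = np.column_stack([g.ravel() for g in grids])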