I'm setting up a linear programming optimization model using CPLEX and am wondering whether it's possible to modify the cost function depending on which binary decision variables are 'active' in a given solution. This is mostly a question about how to formulate the LP model (if it's even possible), but responses in the context of CPLEX are welcome or even preferred.
Say I have an LP problem in canonical form:
minimize c^T x
s.t. Ax <= b
With cost function:
c = [c_1, c_2,...,c_100]
All variables are binary. I have this basic setup modeled and running effectively in CPLEX.
Now say I have a subset of variables:
efficiency_set = [x_1, x_2,...,x_5]
With the condition:
if any x_n in efficiency_set == 1
then c_n for all other x_n in the set = 0.9 * c_n
Essentially there is a dependency where if any x_n in the efficiency set is 'active', it becomes 10% less expensive for other variables in the set to appear in the solution.
I thought that CPLEX indicator constraints were what I was looking for, but after reading through the documentation, I don't think I can use them to enforce an on-the-fly change to the cost function (I could be wrong). So I feel like it needs to be done through the formulation of the LP, but I can't reason out how to accomplish it. Any ideas? Thanks.
In CPLEX you have many APIs; let me answer with the easiest one, OPL.
Your canonical form can be written as:
int n=3;
int m=4;
range N=1..n;
range M=1..m;
float A[N][M]=[[1,4,9,6],[8,5,0,8],[2,9,0,2]];
float B[M]=[3,1,3,0];
float C[N]=[1,1,1];
dvar boolean x[N];
minimize sum(i in N) C[i]*x[i];
subject to
{
forall(j in M) sum(i in N) A[i,j]*x[i]>=B[j];
}
and then you can write logical constraints:
int n=3;
int m=4;
range N=1..n;
range M=1..m;
float A[N][M]=[[1,4,9,6],[8,5,0,8],[2,9,0,2]];
float B[M]=[3,1,3,0];
float C[N]=[1,1,1];
{int} efficiencySet={1,2};
dvar boolean activeEfficiencySet;
dvar boolean x[N];
minimize sum(i in N) C[i]*x[i]*(1-0.1*activeEfficiencySet*(i not in efficiencySet));
subject to
{
forall(j in M) sum(i in N) A[i,j]*x[i]>=B[j];
activeEfficiencySet==(1<=sum(i in efficiencySet) x[i]);
}
Using Alex's data, I have written the program in docplex (the CPLEX Python API):
from docplex.mp.model import Model
n = 3
m = 4
A = {}
A[0, 0] = 1
A[0, 1] = 4
A[0, 2] = 9
A[0, 3] = 6
A[1, 0] = 8
A[1, 1] = 5
A[1, 2] = 0
A[1, 3] = 8
A[2, 0] = 2
A[2, 1] = 9
A[2, 2] = 0
A[2, 3] = 2
B = {}
B[0] = 3
B[1] = 1
B[2] = 3
B[3] = 0
C = {}
C[0] = 1
C[1] = 1
C[2] = 1
efficiencySet = [0, 1]
mdl = Model(name="")
activeEfficiencySet = mdl.binary_var()
x = mdl.binary_var_dict(range(n), name="x")
# constraint 1:
for j in range(m):
    mdl.add_constraint(mdl.sum(A[i, j] * x[i] for i in range(n)) >= B[j])
# constraint 2: activeEfficiencySet == 1 iff at least one variable of the efficiency set is active
mdl.add(activeEfficiencySet == (mdl.sum(x[i] for i in efficiencySet) >= 1))
# objective function:
# expr = mdl.linear_expr()
lst = []
for i in range(n):
    if i not in efficiencySet:
        lst.append(C[i] * x[i] * (1 - 0.1 * activeEfficiencySet))
    else:
        lst.append(C[i] * x[i])
mdl.minimize(mdl.sum(lst))
mdl.solve()
for i in range(n):
    print(str(x[i]) + " : " + str(x[i].solution_value))
print(activeEfficiencySet.solution_value)
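If you want to keep the objective strictly linear (the model above has a quadratic objective because it multiplies two binary variables, which CPLEX accepts), the product x[i] * activeEfficiencySet can be linearized with an auxiliary binary variable. The following is only a sketch of mine, not part of either answer; the names y and discounted are made up, the data is the same toy data as above, and the discount is applied to the variables outside the efficiency set, exactly as in the two answers:
from docplex.mp.model import Model

n = 3
C = {0: 1, 1: 1, 2: 1}
efficiencySet = [0, 1]
discounted = [i for i in range(n) if i not in efficiencySet]

mdl = Model(name="linearized_discount")
x = mdl.binary_var_dict(range(n), name="x")
a = mdl.binary_var(name="activeEfficiencySet")
y = mdl.binary_var_dict(discounted, name="y")  # intended to equal x[i] * a

# a == 1 iff at least one variable of the efficiency set is active
mdl.add(a == (mdl.sum(x[i] for i in efficiencySet) >= 1))

# standard linearization of the product of two binaries: y[i] = x[i] * a
for i in discounted:
    mdl.add_constraint(y[i] <= x[i])
    mdl.add_constraint(y[i] <= a)
    mdl.add_constraint(y[i] >= x[i] + a - 1)

# full price for every variable, minus the 10% discount captured by y
# (the Ax >= B constraints from the answers are omitted here for brevity)
mdl.minimize(mdl.sum(C[i] * x[i] for i in range(n))
             - 0.1 * mdl.sum(C[i] * y[i] for i in discounted))
mdl.solve()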
I am new to CVXPY; I learn by running examples, and I found this post: How to construct a SOCP problem and solve using cvxpy and cvxpylayers by @ben.
The author provided the code (below I've corrected the line that caused an error in the OP's EDIT 2):
import cvxpy as cp
import numpy as np
N = 5
Q_sqrt = cp.Parameter((N, N))
Q = cp.Parameter((N, N))
x = cp.Variable(N)
z = cp.Variable(N)
p = cp.Variable()
t = cp.Variable()
objective = cp.Minimize(p - t)
constraint_soc = [z == Q @ x, x.value * z >= t ** 2, z >= 0, x >= 0]  # <-- my question
constraint_other = [cp.quad_over_lin(Q_sqrt @ x, p) <= N * p, cp.sum(x) == 1, p >= 0, t >= 0]
constraint_all = constraint_other + constraint_soc
matrix = np.random.random((N, N))
a_matrix = matrix.T @ matrix
Q.value = a_matrix
Q_sqrt.value = np.sqrt(a_matrix)
prob = cp.Problem(objective, constraint_all)
prob.solve(verbose=True)
print("status:", prob.status)
print("optimal value", prob.value)
My question is about this part: x.value * z >= t ** 2
Why does only x get .value while z does not?
Actually, I tried x * z and x.value * z.value; they both throw errors. Only the one in the original post works, which is x.value * z.
I googled and found this page and this, which look most relevant, but I still failed to find an answer.
But both x and z are variables and defined as such:
x = cp.Variable(N)
z = cp.Variable(N)
Why does only x need a .value? Maybe it's a trivial question for experienced users, but I really don't get it. Thank you.
Update: two follow-up questions
Thanks to @MichalAdamaszek the first question above is clear: x.value didn't make sense here. The suggestion is to use constraint_soc = [z == Q @ x] + [z[i] >= cp.quad_over_lin(t, x[i]) for i in range(N)]
I have two following questions:
Is it better to write the second SOC constraint as [x[i] >= cp.quad_over_lin(t, z[i]) for i in range(N)]? Because in the question we only know for sure that z[i] > 0. Or does it not matter at all in cvxpy? I tried both, and both returned a similar optimal value.
I noticed that for the two constraints:
$x^T Q x <= N p^2$ and $x_i z_i >= t^2$
the quadratic terms were always intentionally split into two linear terms:
cp.quad_over_lin(Q_sqrt @ x, p) <= N * p and z[i] >= cp.quad_over_lin(t, x[i]), respectively.
Is it because cvxpy does not allow quadratic terms in (in)equality constraints? Is there documentation for those basic rules?
Thank you.
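For reference, here is a self-contained version of the program with the suggested constraint substituted in. This is my own consolidation, not the post author's code: I only replaced the x.value * z line with the quad_over_lin form from the comment above, wrote @ for matrix multiplication, and kept everything else (including the elementwise np.sqrt) as in the original post:
import cvxpy as cp
import numpy as np

N = 5
Q_sqrt = cp.Parameter((N, N))
Q = cp.Parameter((N, N))

x = cp.Variable(N)
z = cp.Variable(N)
p = cp.Variable()
t = cp.Variable()

objective = cp.Minimize(p - t)
# x_i * z_i >= t^2 written in DCP form as z_i >= quad_over_lin(t, x_i)
constraint_soc = [z == Q @ x, x >= 0, z >= 0] + [z[i] >= cp.quad_over_lin(t, x[i]) for i in range(N)]
constraint_other = [cp.quad_over_lin(Q_sqrt @ x, p) <= N * p, cp.sum(x) == 1, p >= 0, t >= 0]

matrix = np.random.random((N, N))
a_matrix = matrix.T @ matrix
Q.value = a_matrix
Q_sqrt.value = np.sqrt(a_matrix)  # elementwise sqrt, as in the original post

prob = cp.Problem(objective, constraint_other + constraint_soc)
prob.solve(verbose=True)
print("status:", prob.status)
print("optimal value", prob.value)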
Using Z3Py, once a model has been checked for an optimization problem, is there a way to convert ArithRef expressions into values?
Such as
y = If(x > 5, 0, 0.5 * x)
Once values have been found for x, can I get the evaluated value for y, without having to calculate again based on the given values for x?
Many thanks.
You do need to evaluate it, but the model can do that for you automatically:
from z3 import *
x = Real('x')
y = If(x > 5, 0, 0.5 * x)
s = Solver()
r = s.check()
if r == sat:
    m = s.model()
    print("x =", m.eval(x, model_completion=True))
    print("y =", m.eval(y, model_completion=True))
else:
    print("Solver said:", r)
This prints:
x = 0
y = 0
Note that we used the parameter model_completion=True since there are no constraints to force x (and consequently y) to any value in this model. If you have sufficient constraints added, you wouldn't need that parameter. (Of course, having it does not hurt.)
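As a small illustration of that last point (my example, not part of the answer above): once a constraint pins x down, the model determines x and y directly and model_completion is no longer needed:
from z3 import *

x = Real('x')
y = If(x > 5, 0, 0.5 * x)

s = Solver()
s.add(x == 10)  # a constraint that forces x (and hence y)
if s.check() == sat:
    m = s.model()
    print("x =", m.eval(x))  # 10
    print("y =", m.eval(y))  # 0, since x > 5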
I am trying to generate a sample of 100 scenarios (X, Y) where both X and Y are normally distributed, X ~ N(50, 5^2) and Y ~ N(30, 2^2), and X and Y are correlated with Cov(X, Y) = 0.4.
I have been able to generate 100 scenarios with the Cholesky decomposition:
using Distributions, LinearAlgebra

# We do a Cholesky decomposition to generate correlated scenarios
nScenarios = 10
Σ = [25 0.4; 0.4 4]
μ = [50, 30]
L = cholesky(Σ)
v = [rand(Normal(0, 1), nScenarios), rand(Normal(0, 1), nScenarios)]
X = reshape(zeros(nScenarios), 1, nScenarios)
Y = reshape(zeros(nScenarios), 1, nScenarios)
for i = 1:nScenarios
    # use the lower-triangular factor so that Cov([X, Y]) reproduces Σ
    X[1, i] = sum(L.L[1, j] * v[j][i] for j = 1:2) + μ[1]
    Y[1, i] = sum(L.L[2, j] * v[j][i] for j = 1:2) + μ[2]
end
However, I need the probability of each scenario, i.e. P(X = k and Y = p). My question would be: how can we get a sample from a given distribution together with the probability of each scenario?
Following BatWannaBe's explanation, normally I would do it like this:
julia> using Distributions
julia> d = MvNormal([50.0, 30.0], [25.0 0.4; 0.4 4.0])
FullNormal(
dim: 2
μ: [50.0, 30.0]
Σ: [25.0 0.4; 0.4 4.0]
)
julia> point = rand(d)
2-element Vector{Float64}:
52.807189619051485
32.693811008760676
julia> pdf(d, point)
0.0056519503173830515
How to sample without replacement in TensorFlow? Like numpy.random.choice(n, size=k, replace=False) for some very large integer n (e.g. 100k-100M), and smaller k (e.g. 100-10k).
Also, I want it to be efficient and on the GPU, so other solutions like this with tf.py_func are not really an option for me. Anything which would use tf.range(n) or so is also not an option because n could be very large.
This is one way:
n = ...
sample_size = ...
idx = tf.random_shuffle(tf.range(n))[:sample_size]
EDIT:
I had posted the answer above but then read the last line of your post. I don't think there is a good way to do it if you absolutely cannot produce a tensor of size O(n) (numpy.random.choice with replace=False is also implemented as a slice of a permutation). You could resort to a tf.while_loop until you have unique indices:
n = ...
sample_size = ...
# keep redrawing k indices until they happen to be all distinct
idx = tf.random_uniform([sample_size], maxval=n, dtype=tf.int64)
idx = tf.while_loop(
    lambda idx: tf.size(tf.unique(idx)[0]) < tf.size(idx),
    lambda idx: tf.random_uniform([sample_size], maxval=n, dtype=tf.int64),
    [idx])
EDIT 2:
About the average number of iterations in the previous method. If we call n the number of possible values and k the length of the desired vector (with k ≤ n), the probability that an iteration is successful is:
p = product((n - (i - 1)) / n for i in 1 .. k)
Since each iteration can be considered a Bernoulli trial, the average number of trials until the first success is 1 / p (proof here). Here is a function that calculates the average number of trials in Python for some k and n values:
def avg_iter(k, n):
    if k > n or n <= 0 or k < 0:
        raise ValueError()
    avg_it = 1.0
    for p in (float(n) / (n - i) for i in range(k)):
        avg_it *= p
    return avg_it
And here are some results:
+-------+------+----------+
| n | k | Avg iter |
+-------+------+----------+
| 10 | 5 | 3.3 |
| 100 | 10 | 1.6 |
| 1000 | 10 | 1.1 |
| 1000 | 100 | 167.8 |
| 10000 | 10 | 1.0 |
| 10000 | 100 | 1.6 |
| 10000 | 1000 | 2.9e+22 |
+-------+------+----------+
You can see it varies wildly depending on the parameters.
It is possible, though, to construct a vector in a fixed number of steps, although the only algorithm I can think of is O(k^2). In pure Python it goes like this:
import random
def sample_wo_replacement(n, k):
    sample = [0] * k
    for i in range(k):
        # draw the i-th value from a range that shrinks by one each iteration
        sample[i] = random.randint(0, n - 1 - i)
    # map the draws back to distinct values in [0, n)
    for i, v in reversed(list(enumerate(sample))):
        for p in reversed(sample[:i]):
            if v >= p:
                v += 1
        sample[i] = v
    return sample

random.seed(100)
print(sample_wo_replacement(10, 5))
# [2, 8, 9, 7, 1]
print(sample_wo_replacement(10, 10))
# [6, 5, 8, 4, 0, 9, 1, 2, 7, 3]
EDIT 3:
This is a possible way to do it in TensorFlow (not sure if it is the best one):
import tensorflow as tf
def sample_wo_replacement_tf(n, k):
    # First loop
    sample = tf.constant([], dtype=tf.int64)
    i = 0
    sample, _ = tf.while_loop(
        lambda sample, i: i < k,
        # This is ugly but I did not want to define more functions
        lambda sample, i: (tf.concat([sample,
                                      tf.random_uniform([1], maxval=tf.cast(n - tf.shape(sample)[0], tf.int64), dtype=tf.int64)],
                                     axis=0),
                           i + 1),
        [sample, i], shape_invariants=[tf.TensorShape((None,)), tf.TensorShape(())])
    # Second loop
    def inner_loop(sample, i):
        sample_size = tf.shape(sample)[0]
        v = sample[i]
        j = i - 1
        v, _ = tf.while_loop(
            lambda v, j: j >= 0,
            lambda v, j: (tf.cond(v >= sample[j], lambda: v + 1, lambda: v), j - 1),
            [v, j])
        return (tf.where(tf.equal(tf.range(sample_size), i), tf.tile([v], (sample_size,)), sample), i - 1)
    i = tf.shape(sample)[0] - 1
    sample, _ = tf.while_loop(lambda sample, i: i >= 0, inner_loop, [sample, i])
    return sample
And an example:
with tf.Graph().as_default(), tf.Session() as sess:
    tf.set_random_seed(100)
    sample = sample_wo_replacement_tf(10, 5)
    for i in range(10):
        print(sess.run(sample))
# [3 0 6 8 4]
# [5 4 8 9 3]
# [1 4 0 6 8]
# [8 9 5 6 7]
# [7 5 0 2 4]
# [8 4 5 3 7]
# [0 5 7 4 3]
# [2 0 3 8 6]
# [3 4 8 5 1]
# [5 7 0 2 9]
This is quite intensive on tf.while_loops, though, which are well known not to be particularly fast in TensorFlow, so I wouldn't know how fast you can really get with this method without some kind of benchmarking.
EDIT 4:
One last possible method. You can divide the range of possible values (0 to n) in "chunks" of size c and pick a random amount of numbers from each chunk, then shuffle everything. The amount of memory that you use is limited by c, and you don't need nested loops. If n is divisible by c, then you should get about a perfect random distribution, otherwise values in the last "short" chunk would receive some extra probability (this may be negligible depending on the case). Here is a NumPy implementation. It is somewhat long to account for different corner cases and pitfalls, but if c ≥ k and n mod c = 0 several parts get simplified.
import numpy as np
def sample_chunked(n, k, chunk=None):
    chunk = chunk or n
    last_chunk = chunk
    parts = n // chunk
    # Distribute k among chunks
    max_p = min(float(chunk) / k, 1.0)
    max_p_last = max_p
    if n % chunk != 0:
        parts += 1
        last_chunk = n % chunk
        max_p_last = min(float(last_chunk) / k, 1.0)
    p = np.full(parts, 2)
    # Iterate until a valid distribution is found
    while not np.isclose(np.sum(p), 1) or np.any(p > max_p) or p[-1] > max_p_last:
        p = np.random.uniform(size=parts)
        p /= np.sum(p)
    dist = (k * p).astype(np.int64)
    sample_size = np.sum(dist)
    # Account for rounding errors
    while sample_size < k:
        i = np.random.randint(len(dist))
        while (dist[i] >= chunk) or (i == parts - 1 and dist[i] >= last_chunk):
            i = np.random.randint(len(dist))
        dist[i] += 1
        sample_size += 1
    while sample_size > k:
        i = np.random.randint(len(dist))
        while dist[i] == 0:
            i = np.random.randint(len(dist))
        dist[i] -= 1
        sample_size -= 1
    assert sample_size == k
    # Generate sample parts
    sample_parts = []
    for i, v in enumerate(np.nditer(dist)):
        if v <= 0:
            continue
        c = chunk if i < parts - 1 else last_chunk
        base = chunk * i
        sample_parts.append(base + np.random.choice(c, v, replace=False))
    sample = np.concatenate(sample_parts, axis=0)
    np.random.shuffle(sample)
    return sample
np.random.seed(100)
print(sample_chunked(15, 5, 4))
# [ 8 9 12 13 3]
A quick benchmark of sample_chunked(100000000, 100000, 100000) takes about 3.1 seconds on my computer, while I haven't been able to run the previous algorithm (the sample_wo_replacement function above) to completion with the same parameters. It should be possible to implement it in TensorFlow, maybe using tf.TensorArray, although it would require significant effort to get it exactly right.
Use the Gumbel-max trick described here: https://github.com/tensorflow/tensorflow/issues/9260
z = -tf.log(-tf.log(tf.random_uniform(tf.shape(logits), 0, 1)))
_, indices = tf.nn.top_k(logits + z, K)
indices are what you want. This trick is so easy!
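To make that snippet self-contained, here is a sketch of mine (it assumes the TF 1.x API used elsewhere on this page; constant logits correspond to uniform sampling without replacement, and n and k are placeholder sizes I chose):
import tensorflow as tf

n = 1000   # population size
k = 10     # sample size
logits = tf.zeros([n])  # equal weights -> uniform sampling without replacement
z = -tf.log(-tf.log(tf.random_uniform(tf.shape(logits), 0, 1)))
_, indices = tf.nn.top_k(logits + z, k)

with tf.Session() as sess:
    print(sess.run(indices))  # k distinct indices in [0, n)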
The following works fairly fast on the GPU, and I did not encounter memory issues when using n~100M and k~10k (using NVIDIA GeForce GTX 1080 Ti):
import tensorflow as tf

def random_choice_without_replacement(n, k):
    """equivalent to 'numpy.random.choice(n, size=k, replace=False)'"""
    return tf.math.top_k(tf.random.uniform(shape=[n]), k, sorted=False).indices
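A quick usage sketch of mine (it assumes TF 2.x eager execution, so the result can be inspected directly; the sizes are arbitrary):
idx = random_choice_without_replacement(1000000, 100)
print(idx.shape)        # (100,)
print(idx.numpy()[:5])  # five of the 100 distinct indices drawn from [0, 1000000)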
Here is my code in matlab:
x = [1 2 3 4];
result = fft(x);
a = real(result);
b = imag(result);
Result from matlab:
a = [10,-2,-2,-2]
b = [ 0, 2, 0,-2]
And my runnable code in objective-c:
#import <Accelerate/Accelerate.h>  // needed for the vDSP FFT routines

int length = 4;
float* x = (float *)malloc(sizeof(float) * length);
x[0] = 1;
x[1] = 2;
x[2] = 3;
x[3] = 4;
// Setup the length
vDSP_Length log2n = log2f(length);
// Calculate the weights array. This is a one-off operation.
FFTSetup fftSetup = vDSP_create_fftsetup(log2n, FFT_RADIX2);
// For an FFT, numSamples must be a power of 2, i.e. is always even
int nOver2 = length/2;
// Define complex buffer
COMPLEX_SPLIT A;
A.realp = (float *) malloc(nOver2*sizeof(float));
A.imagp = (float *) malloc(nOver2*sizeof(float));
// Generate a split complex vector from the sample data
vDSP_ctoz((COMPLEX*)x, 2, &A, 1, nOver2);
// Perform a forward FFT using fftSetup and A
vDSP_fft_zrip(fftSetup, &A, 1, log2n, FFT_FORWARD);
//Take the fft and scale appropriately
Float32 mFFTNormFactor = 0.5;
vDSP_vsmul(A.realp, 1, &mFFTNormFactor, A.realp, 1, nOver2);
vDSP_vsmul(A.imagp, 1, &mFFTNormFactor, A.imagp, 1, nOver2);
printf("After FFT: \n");
printf("%.2f | %.2f \n",A.realp[0], 0.0);
for (int i = 1; i< nOver2; i++) {
printf("%.2f | %.2f \n",A.realp[i], A.imagp[i]);
}
printf("%.2f | %.2f \n",A.imagp[0], 0.0);
The output from objective c:
After FFT:
10.0 | 0.0
-2.0 | 2.0
The results are so close. I wonder where the rest is? I know I missed something but don't know what it is.
Update: I found another answer here. I updated the output:
After FFT:
10.0 | 0.0
-2.0 | 2.0
-2.0 | 0.0
but even then there's still one element missing: -2.0 | -2.0
Performing an FFT delivers a right hand spectrum and a left hand spectrum.
If you have N samples the frequencies you will return are:
( -f(N/2), -f(N/2-1), ... -f(1), f(0), f(1), f(2), ..., f(N/2-1) )
If A(f(i)) is the complex amplitude A of the frequency component f(i), the following relation is true:
Real{A(f(i))} = Real{A(-f(i))} and Imag{A(f(i))} = -Imag{A(-f(i))}
This means, the information of the right hand spectrum and the left hand spectrum is the same. However, the sign of the imaginary part is different.
Matlab returns the frequencies in a different order.
The Matlab order is:
( f(0), f(1), f(2), ..., f(N/2-1), -f(N/2), -f(N/2-1), ..., -f(1) )
To get the order shown above, use the Matlab function fftshift().
In the case of 4 samples you get in Matlab:
a = [10,-2,-2,-2]
b = [ 0, 2, 0,-2]
This means:
A(f(0)) = 10 (DC value)
A(f(1)) = -2 + 2i (first frequency component of the right hand spectrum)
A(-f(2)) = -2 (second frequency component of the left hand spectrum)
A(-f(1)) = -2 - 2i (first frequency component of the left hand spectrum)
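The same numbers can be checked quickly in NumPy, which uses the same output ordering as Matlab's fft (my addition, not part of the original answer):
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
X = np.fft.fft(x)
print(X)                   # [10.+0.j -2.+2.j -2.+0.j -2.-2.j]  -> Matlab order
print(np.fft.fftshift(X))  # [-2.+0.j -2.-2.j 10.+0.j -2.+2.j]  -> (-f(2), -f(1), f(0), f(1))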
I do not understand your Objective-C code in detail.
However, it seems to me that the program returns only the right hand spectrum.
So everything is fine.