CPLEX: Error 5002 "Objective is not convex" -> "Problem can be solved to global optimality with optimality target 3"

I am receiving this error in CPLEX Optimization Studio. The problem is a simple quadratic problem with one equality and two inequality constraints.
The .mod code is shown below (no .dat file is used):
/*********************************************
* OPL 12.10.0.0 Model
* Author: qdbra
* Creation Date: Sep 14, 2020 at 9:40:57 PM
*********************************************/
range R = 1..5;
range B= 6..10;
dvar float x[R];
dvar boolean y[B];
minimize
( x[1]^2 - 2*x[2]^2 + 3*x[3]^2 + 4*x[4]^2
- 5*x[5]^2 + 6*y[6]^2 + 7*y[7]^2 -
8*y[8]^2 + 9*y[9]^2 + 10*y[10]^2 +
8*x[1]*x[2] + 17*x[3]*y[8] - 20*y[6]*y[9]
+ 26*y[9]*y[10])/2 ;
subject to {
ct1:
x[1] + x[2] + x[3] + x[5] + y[6] + y[7] == 20;
ct2:
x[1] + x[4] + y[8] + y[9] + y[10] >= 1;
ct3:
x[2] - x[4] - y[6] + y[7] >= 0;
}

The objective is not convex because some of the squared terms have negative coefficients, so CPLEX raises error 5002. If you set the optimality target to 3, CPLEX will solve the nonconvex problem to global optimality and you will get a result:
execute
{
cplex.optimalitytarget=3;
}
range R = 1..5;
range B= 6..10;
dvar float x[R];
dvar boolean y[B];
minimize
( x[1]^2 - 2*x[2]^2 + 3*x[3]^2 + 4*x[4]^2
- 5*x[5]^2 + 6*y[6]^2 + 7*y[7]^2 -
8*y[8]^2 + 9*y[9]^2 + 10*y[10]^2 +
8*x[1]*x[2] + 17*x[3]*y[8] - 20*y[6]*y[9]
+ 26*y[9]*y[10])/2 ;
subject to {
ct1:
x[1] + x[2] + x[3] + x[5] + y[6] + y[7] == 20;
ct2:
x[1] + x[4] + y[8] + y[9] + y[10] >= 1;
ct3:
x[2] - x[4] - y[6] + y[7] >= 0;
}
will give:
x = [20 0 0 0 0];
y = [0 0 0 0 0];

Related

Theoretical time complexity calculation of nested dependent for loops [closed]

How do I calculate the big-O time complexity of the following nested for loop with dependent indices:
void function1(int n)
{
    int x = 0;
    for (int i = 0; i <= n/2; i += 3)
        for (int j = i; j <= n/4; j += 2)
            x++;
}
The complexity of the code is how many times the innermost statement is executed for a given n.
There are two ways to determine it.
Simulation: run the code for different values of n and count the iterations; here that count is simply the final value of x (see the sketch at the end of this answer).
Theoretical:
Let's first check, for each i, how many times the inner loop runs:
Using the arithmetic progression formula a_k = a_1 + (k-1)*d:
i=0 => n/4 = 0 + (k-1)*2 => n/8 + 1 times
i=3 => n/4 = 3 + (k-1)*2 => (n-12)/8 + 1 times
i=6 => n/4 = 6 + (k-1)*2 => (n-24)/8 + 1 times
i=9 => n/4 = 9 + (k-1)*2 => (n-36)/8 + 1 times
Let's check the last i's now:
i=n/4 => n/4 = n/4 + (k-1)*2 => 1 times
i=n/4 - 3 => n/4 = (n/4-3) + (k-1)*2 => 3/2 + 1 times
i=n/4 - 6 => n/4 = (n/4-6) + (k-1)*2 => 6/2 + 1 times
So total number of times inner loop will be running is:
= (1) + (3/2 + 1) + (6/2 + 1) + (9/2 + 1) ... + ((n-12)/8 + 1)+ (n/8 + 1)
=> (0/2 + 1) + (3/2 + 1) + (6/2 + 1) + (9/2 + 1) ... + ((n-12)/8 + 1)+ (n/8 + 1)
Can be written as:
=> (0/2 + 3/2 + 6/2 + ... (n-12)/8 + n/8) + (1 + 1 + 1 ... 1 + 1)
Let's assume there are P terms in total in the series. To find P:
n/8 = (0/2) + (P-1)*(3/2) => P = (n+12)/12
Now summing up the above series:
= [(P/2) (0/2 + (P-1) * 3/2)] + [P]
= P(3P+1)/4
= (n+12)(3(n+12)+12)/(4*12*12)
= (n^2 + 28n + 192)/192
So the final complexity of the code is
= (number of operations in each iteration) * (n^2 + 28n + 192)/192
Now look at the term (n^2 + 28n + 192)/192: for very large n it is dominated by the n^2 term, i.e. it grows like ~n^2.
Following is the complexity comparison (a linear scale was difficult to analyse, so I plotted it on a log scale; for small n you don't yet see the count converging to n^2).
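As promised above, here is a minimal simulation sketch (the class and method names are mine, purely illustrative) that counts the iterations directly and compares them with the approximate closed form (n^2 + 28n + 192)/192 derived above:
public class LoopCount {
    // Count how many times the inner statement of function1 runs for a given n.
    static long count(int n) {
        long x = 0;
        for (int i = 0; i <= n / 2; i += 3)
            for (int j = i; j <= n / 4; j += 2)
                x++;
        return x;
    }

    public static void main(String[] args) {
        for (int n : new int[]{100, 1000, 10000, 100000}) {
            long actual = count(n);
            double approx = (1.0 * n * n + 28.0 * n + 192) / 192.0;  // approximate closed form
            System.out.printf("n=%7d  actual=%12d  approx=%14.1f  ratio=%.4f%n",
                    n, actual, approx, actual / approx);
        }
    }
}
The ratio approaches 1 as n grows, which is consistent with the O(n^2) conclusion.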
Using a more relaxed approach, one can say that:
for (int i = 0; i <= n/2; i += 3) {
    for (int j = i; j <= n/4; j += 2) {
        x++;
    }
}
is the same as:
for (int i = 0; i <= n/4; i += 3) {
    for (int j = i; j <= n/4; j += 2) {
        x++;
    }
}
since for i > n/4 the inner loop does not execute. Moreover, to simplify the math, you can say that the code is approximately the same as:
for (int i = 0; i < n/4; i += 3) {
    for (int j = i; j < n/4; j += 2) {
        x++;
    }
}
since in the context of big-O this makes no difference to the upper bound of the double loop. The number of iterations of a loop of the form:
for (int j = a; j < b; j += c)
can be approximated by (b - a)/c. Hence, the inner loop runs approximately ((n/4) - i)/2 times, or n/8 - i/2 times.
The outer loop can be thought of as running with i = 3k for k = 0 up to n/12. So with both loops we have
the summation of [k=0 to n/12] of (n/8 - 3k/2),
which is equivalent to
the summation [k=0 to n/12] of n/8 - the summation [k=0 to n/12] of 3k/2.
Hence,
(n^2)/96 - the summation [k=0 to n/12] of 3k/2,
which is approximately (n^2)/192. Therefore, the upper bound is O(n^2).
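Spelling out the summation (a quick sketch in LaTeX notation, treating n/8 and n/12 as real numbers, which is all we need for the upper bound):
\sum_{k=0}^{n/12}\left(\frac{n}{8}-\frac{3k}{2}\right)
  = \frac{n}{8}\left(\frac{n}{12}+1\right) - \frac{3}{2}\cdot\frac{\frac{n}{12}\left(\frac{n}{12}+1\right)}{2}
  \approx \frac{n^2}{96}-\frac{n^2}{192}
  = \frac{n^2}{192} \in O(n^2)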

Problem using inprod() to summarise linear predictor

I am having a problem when trying to summarise my additive predictor:
mu[j] <- b0 + weights1[1] * A[j] + weights1[2] * A[j+1] + weights1[3] * A[j+2] + weights1[4] * A[j+3] +
weights1[5] * A[j+4] + weights1[6] * A[j+5] + weights1[7] * A[j+6] + weights1[8] * A[j+7] +
weights1[9] * A[j+8] + weights1[10] * A[j+9] + weights1[11] * A[j+10] + weights1[12] * A[j+11] +
weights2[1] * B[j] + weights2[2] * B[j+1] + weights2[3] * B[j+2] + weights2[4] * B[j+3] +
weights2[5] * B[j+4] + weights2[6] * B[j+5] + weights2[7] * B[j+6] + weights2[8] * B[j+7] +
weights2[9] * B[j+8] + weights2[10] * B[j+9] + weights2[11] * B[j+10] + weights2[12] * B[j+11]
by using inprod(). This is what I thought should be the equivalent:
mu[j] <- b0 + inprod(weights1[],A[j:(j+11)]) + inprod(weights2[],B[j:(j+11)])
While the model compiles and seems to work, it keeps updating forever: it has been running for hours without finishing, whereas the first approach finishes in a few minutes.
These are the priors, just in case:
weights1[1] ~ dnorm(0,1.0E-6)
weights2[1] ~ dnorm(0,1.0E-6)
for(t in 2:12) {
weights1[t]~dnorm(weights1[t-1],tauweight1)}
for(t in 2:12) {
weights2[t]~dnorm(weights2[t-1],tauweight2)}
b0 ~ dnorm(0,.001)
tau ~ dgamma(0.001, 0.001)
sigma <- 1/sqrt(tau)
tauweight1~dgamma(1.0E-3,1.0E-3)
tauweight2~dgamma(1.0E-3,1.0E-3)
In case it matters, I am calling OpenBUGS from R using R2OpenBUGS.
Thanks very much for your time!

Finding out the complexity of given program

I'm trying to find out the complexity of the given program. Suppose we have:
int a = 0;
for (int i = 0; i < n; i++) {
    for (int j = n; j > i; j--) {
        a = a + i + j;
    }
}
Complexity: O(N^2)
Explanation:
The inner statement runs a total of
N + (N - 1) + (N - 2) + ... + 1 + 0
= N * (N + 1) / 2
= 1/2 * N^2 + 1/2 * N
times, which is O(N^2).
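For anyone who wants to verify the sum, here is a small brute-force sketch (the class name is mine, purely illustrative) confirming that the count matches N * (N + 1) / 2:
public class CountCheck {
    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000}) {
            long a = 0;                              // counts inner-loop iterations
            for (int i = 0; i < n; i++)
                for (int j = n; j > i; j--)
                    a++;
            System.out.println("n=" + n + ": count=" + a
                    + ", n*(n+1)/2=" + (long) n * (n + 1) / 2);
        }
    }
}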

Accurately calculate moon phases

For a new project I'd like to calculate the moon phases. So far I haven't seen any code that does this, and I don't want to rely on online services for it.
I have tried some functions, but they are not 100% reliable. Functions I have tried:
NSInteger r = iYear % 100;
r %= 19;
if (r>9){ r -= 19;}
r = ((r * 11) % 30) + iMonth + iDay;
if (iMonth<3){r += 2;}
r -= ((iYear<2000) ? 4 : 8.3);
r = floor(r+0.5);
Another one:
float n = floor(12.37 * (iYear -1900 + ((1.0 * iMonth - 0.5)/12.0)));
float RAD = 3.14159265/180.0;
float t = n / 1236.85;
float t2 = t * t;
float as = 359.2242 + 29.105356 * n;
float am = 306.0253 + 385.816918 * n + 0.010730 * t2;
float xtra = 0.75933 + 1.53058868 * n + ((1.178e-4) - (1.55e-7) * t) * t2;
xtra = xtra + (0.1734 - 3.93e-4 * t) * sin(RAD * as) - 0.4068 * sin(RAD * am);
float i = (xtra > 0.0 ? floor(xtra) : ceil(xtra - 1.0));
float j1 = [self julday:iYear iMonth:iMonth iDay:iDay];
float jd = (2415020 + 28 * n) + i;
jd = fmodf((j1-jd + 30), 30);
And the last one:
NSInteger thisJD = [self julday:iYear iMonth:iMonth iDay:iDay];
float degToRad = 3.14159265 / 180;
float K0, T, T2, T3, J0, F0, M0, M1, B1, oldJ = 0.0;
K0 = floor((iYear-1900)*12.3685);
T = (iYear-1899.5) / 100;
T2 = T*T; T3 = T*T*T;
J0 = 2415020 + 29*K0;
F0 = 0.0001178*T2 - 0.000000155*T3 + (0.75933 + 0.53058868*K0) - (0.000837*T + 0.000335*T2);
M0 = 360*[self getFrac:((K0*0.08084821133)) + 359.2242 - 0.0000333*T2 - 0.00000347*T3];
M1 = 360*[self getFrac:((K0*0.07171366128)) + 306.0253 + 0.0107306*T2 + 0.00001236*T3];
B1 = 360*[self getFrac:((K0*0.08519585128)) + 21.2964 - (0.0016528*T2) - (0.00000239*T3)];
NSInteger phase = 0;
NSInteger jday = 0;
while (jday < thisJD) {
float F = F0 + 1.530588*phase;
float M5 = (M0 + phase*29.10535608)*degToRad;
float M6 = (M1 + phase*385.81691806)*degToRad;
float B6 = (B1 + phase*390.67050646)*degToRad;
F -= 0.4068*sin(M6) + (0.1734 - 0.000393*T)*sin(M5);
F += 0.0161*sin(2*M6) + 0.0104*sin(2*B6);
F -= 0.0074*sin(M5 - M6) - 0.0051*sin(M5 + M6);
F += 0.0021*sin(2*M5) + 0.0010*sin(2*B6-M6);
F += 0.5 / 1440;
oldJ=jday;
jday = J0 + 28*phase + floor(F);
phase++;
}
float jd = fmodf((thisJD-oldJ), 30);
All of them work more or less, but none really gives the correct dates of the full moon for 2017 and 2018.
Does anyone have a function that calculates the moon phases correctly, ideally also taking the time zone into account?
EDIT:
I only want the function for the moon phases. SwiftAA offers a lot more and would only add unneeded overhead to the app.

Why is the time complexity of bubble sort's best case O(n)?

I derived the time complexity of bubble sort in its best case following the method used in the book Algorithms, section 2.2, but the answer turned out to be O(n^2).
Here's my derivation; I hope someone can help me find where it goes wrong:
public void bubbleSort(int arr[]) {
    for (int i = 0, len = arr.length; i < len - 1; i++) {
        for (int j = 0; j < len - i - 1; j++) {
            if (arr[j + 1] < arr[j])
                swap(arr, j, j + 1);
        }
    }
}
Statement                    Cost    Times
i = 0, len = arr.length      c1      1
i < len - 1                  c2      n
i++                          c3      n - 1
j = 0                        c4      n - 1
j < len - i - 1              c5      t1(i=0) + t1(i=1) + ... + t1(i=n-2)
j++                          c6      t2(i=0) + t2(i=1) + ... + t2(i=n-2)
arr[j + 1] < arr[j]          c7      t3(i=0) + t3(i=1) + ... + t3(i=n-2)
swap(arr, j, j + 1)          c8      t4(i=0) + t4(i=1) + ... + t4(i=n-2)
T(n) = c1 + c2*n + c3*(n - 1) + c4*(n - 1) + c5*T1 + c6*T2 + c7*T3 + c8*T4,
where T1 = t1(i=0) + t1(i=1) + ... + t1(i=n-2), and T2, T3, T4 are defined analogously.
In the best case the sequence is already in ascending order before sorting, so T4 (the total number of swaps) should be 0:
T(n) = c1 + c2*n + c3*(n - 1) + c4*(n - 1) + c5*T1 + c6*T2 + c7*T3
So the time complexity still comes out as O(n^2).
Your implementation
public void bubbleSort(int arr[]) {
    for (int i = 0, len = arr.length; i < len - 1; i++) {
        for (int j = 0; j < len - i - 1; j++) {
            if (arr[j + 1] < arr[j])
                swap(arr, j, j + 1);
        }
    }
}
lacks a check for whether any swap happened in the inner loop, and the early exit from the outer loop when there wasn't one.
That check is what makes the best case (an already sorted array) O(n), because then no swap occurs the first time the inner loop runs.
public void bubbleSort(int arr[]) {
    boolean swapped = true;
    for (int i = 0, len = arr.length; swapped && i < len - 1; i++) {
        swapped = false;
        for (int j = 0; j < len - i - 1; j++) {
            if (arr[j + 1] < arr[j]) {
                swap(arr, j, j + 1);
                swapped = true;
            }
        }
    }
}
The best case for bubble sort is when the elements are already sorted.
The usual implementation gives O(n^2) time complexity for the best, average, and worst cases.
We can modify bubble sort by checking on every pass whether the array is already sorted (a swap indicates an unsorted array).
As soon as the array is found to be sorted (no swap occurs in a pass), control exits the loops; otherwise the loops keep executing up to length - 1 passes.
The same is true for insertion sort as well!
I am not sure what you are counting. In general, when you talk about comparison sort algorithms you should count the number of comparisons made, and bubble sort is regarded as such. In that case the algorithm you presented is O(n^2).
If you count the number of swaps, it's O(1), or one could even say O(0). It is, however, rare to analyse bubble sort like that.
You can, however, very easily improve bubble sort to get O(n) in the best case, e.g. by introducing a flag swap_was_made: if it is false at the end of the inner for loop, you can finish. In the best case this cuts the complexity to O(n) (one pass of the inner for loop). For a fairly even distribution of inputs it cuts the expected (average) complexity to roughly N^2/2, but please double-check me on that; I didn't do the math here.
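To see the best case concretely, here is a small sketch (my own illustrative check, reusing the flagged version from the earlier answer) that counts comparisons on an already sorted array; it does a single pass of n - 1 comparisons and stops:
public class BestCaseCheck {
    static long comparisons = 0;

    static void bubbleSort(int[] arr) {
        boolean swapped = true;
        for (int i = 0, len = arr.length; swapped && i < len - 1; i++) {
            swapped = false;
            for (int j = 0; j < len - i - 1; j++) {
                comparisons++;                               // count every comparison
                if (arr[j + 1] < arr[j]) {
                    int tmp = arr[j]; arr[j] = arr[j + 1]; arr[j + 1] = tmp;
                    swapped = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] sorted = new int[n];
        for (int i = 0; i < n; i++) sorted[i] = i;           // best case: already sorted
        bubbleSort(sorted);
        System.out.println(comparisons);                     // prints 999, i.e. n - 1
    }
}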