Hoare Logic, calculate precondition - conditional statements

if x < 15:
    x = x + 1
else:
    x = 0
The postcondition is: Q = {0 <= x <= 15}.
Is the correct precondition P1 = {-1 <= x} or P2 = {0 <= x <= 15}?
And how can I calculate it?

Both are valid preconditions for the code fragment and postcondition, so you want to choose the weaker one, which in this case is P1. (P2 specifies a narrower range of values for x, all of which are present in the range specified by P1.)
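To calculate it, apply the weakest-precondition rule for conditionals, wp(if C then S1 else S2, Q) = (C -> wp(S1, Q)) ∧ (¬C -> wp(S2, Q)), together with the assignment rule (substitute the assigned expression for the variable in Q). A worked sketch for this fragment:
wp( x := x+1, 0 <= x <= 15 )  =  0 <= x+1 <= 15  =  -1 <= x <= 14   # assignment rule
wp( x := 0,   0 <= x <= 15 )  =  0 <= 0 <= 15    =  true            # assignment rule
wp( if,       0 <= x <= 15 )  =  (x < 15 -> -1 <= x <= 14) ∧ (x >= 15 -> true)
                              =  (x >= 15) ∨ (-1 <= x <= 14)
                              =  -1 <= x                             # i.e. P1
Any stronger condition, such as P2, is also a valid precondition, but -1 <= x is the weakest.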

What is a time complexity of the following algorithm in Big Theta Notation?

res = 0
for i in range(1, n):
    j = i
    while j % 2 == 0:
        j = j / 2
        res = res + j
I understand that the upper bound is O(n log n); however, I'm wondering if it's possible to find a tighter bound. I'm stuck with the analysis.
Some ideas that may be helpful:
You could create a function g(n) that annotates your function f(n) to count how many operations occur when running f(n):
def f(n):
    res = 0
    for i in range(1, n):
        j = i
        while j % 2 == 0:
            j = j / 2
            res = res + j
    return res
def g(n):
    comparisons = 0
    operations = 0
    assignments = 0
    assignments += 1      # res = 0
    res = 0
    assignments += 1      # i = 1
    comparisons += 1      # i < n
    for i in range(1, n):
        assignments += 1  # j = i
        j = i
        operations += 1   # j % 2
        comparisons += 1  # == 0
        while j % 2 == 0:
            operations += 1   # j / 2
            assignments += 1  # assign to j
            j = j / 2
            operations += 1   # res + j
            assignments += 1  # assign to res
            res = res + j
            operations += 1   # j % 2
            comparisons += 1  # == 0
        operations += 1   # i + 1
        assignments += 1  # assign to i
        comparisons += 1  # i < n ?
    return operations + comparisons + assignments
For n = 1, the code runs without hitting any loops: assigning the value of res; assigning i as 1; comparing i to n and skipping the loop as a result.
For n > 1, you get into the for loop, and the for statement is all that changes the loop variable, so the loop body runs about n times and the complexity of the rest of the code is at least O(n).
Once in the loop:
If i is odd, you only assign j, perform the mod operation, and compare the result to zero. That is the case for half the values of i, so half of the iterations from 1 to n-1 add only a fixed handful of operations (including the loop bookkeeping). That alone is still O(n), just with a larger constant.
If i is even, we divide j by 2 until it becomes odd. This is the part whose impact we need to work out.
Based on my counting of the different operations, I get:
g_initial_setup = 3 (every time)
g_for_any_i = 6 (half the time, it is just this)
g_for_even_i = 6 for each time we divide by two (the other half of the time)
For a random i between 1 and n-1, half the values need at least one division by two, half of those need a second, half of those a third, and so on. So as n grows we get the series sum(1/2^m) for m >= 1 as the expected number of halvings per i, and each halving of j costs 6 operations.
I would expect from this:
g(n) = 3 + (n * 6) + (n * 6) * sum( 1 / pow(2,m) for m between 1 and n )
Given that the infinite series sum(1/2^m) converges to 1, we simplify that to:
g(n) = 3 + 12n as n approaches infinity.
That implies that the algorithm is O(n). Huh. I did not expect that.
Let's try out the function g(n) from above, counting all the operations that are occurring as f(n) is computed.
g(1) = 3 operations
g(2) = 9
g(3) = 21
g(4) = 27
g(5) = 45
g(10) = 123
g(100) = 1167
g(1000) = 11943
g(10000) = 119943
g(100000) = 1199931
g(1000000) = 11999919
g(10000000) = 119999907
Okay, unless I've really made a serious error here, it's O(n).
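As a further sanity check that doesn't depend on the exact operation accounting, you can count just the inner-loop iterations (the halvings of j), since that is the only part that could push the total above linear. A small sketch (my own illustration, not part of the code above):
def halvings(n):
    total = 0
    for i in range(1, n):
        j = i
        while j % 2 == 0:
            j //= 2
            total += 1
    return total

for n in (10, 1000, 100000, 1000000):
    print(n, halvings(n), halvings(n) / n)
The ratio halvings(n)/n stays below 1 and tends towards 1, which matches the sum(1/2^m) argument and the O(n) conclusion.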

Modelling if-then-else-logic in MILP/MIP

I would like to model the following for a mixed-integer linear programming problem:
Let y be a binary variable and x1 and x2 be continuous variables, while k1 is a constant (invariant) parameter.
if y == 1 then:
    x2 = k1*x1
else (y == 0):
    x2 = 0
The first idea that comes to mind is to do something like:
x2 >= k1*x1 - M*(1-y)
x2 <= k1*x1 + M*(1-y)
But here M would have to be at least as large as k1*x1 can get, and is therefore not an invariant parameter anymore. Does anyone have a better idea? Thank you!
Per your problem:
y: binary variable
x1: continuous variable
x2: continuous variable
Please see the constraints below:
x2 <= u*y
x2 <= k1*x1
x2 >= k1*x1 - u*(1 - y)
x2 >= 0
where u is an upper bound on k1*x1, i.e.
0 <= k1*x1 <= u
You don't require a big M here.
when y == 1 => x2 == k1*x1
when y == 0 => x2 == 0
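For illustration, here is a minimal PuLP sketch of these constraints (PuLP is also used further down this page); k1 = 2.0 and the bound u = 100 are made-up values, and the objective is just a placeholder so the model can be solved:
import pulp

k1, u = 2.0, 100.0           # illustrative values only

m = pulp.LpProblem("if_then_else", pulp.LpMaximize)
x1 = pulp.LpVariable("x1", lowBound=0, upBound=u / k1)  # keeps k1*x1 <= u
x2 = pulp.LpVariable("x2", lowBound=0)                  # enforces x2 >= 0
y = pulp.LpVariable("y", cat="Binary")

m += x2 <= u * y
m += x2 <= k1 * x1
m += x2 >= k1 * x1 - u * (1 - y)

m += x2                      # placeholder objective
m.solve()
print(pulp.LpStatus[m.status], y.value(), x1.value(), x2.value())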

Maximizing with constraint for number of distinct SKU not greater than X

I'm building an optimization tool using Pulp.
Its purpose is to decide which SKUs to take and which to leave from each warehouse.
I'm having trouble with the following constraint:
"The number of different SKUs selected should not exceed 500"
That is to say, no matter how many units you take, as long as they do not span more than 500 varieties (different SKUs), it's all good.
This is what I've got so far:
# simplex
df = pd.read_excel(ruta + "actual/202109.xlsx", nrows=20)  # read the new month's base
# Create variables and model
x = pulp.LpVariable.dicts("x", df.index, lowBound=0)
mod = pulp.LpProblem("Budget", pulp.LpMaximize)
# Objective function
objvals = {idx: (1.0)*(df['costo_unitario'][idx]) for idx in df.index}
mod += sum([x[idx]*objvals[idx] for idx in df.index])
# Lower and upper bounds:
for idx in df.index:
    mod += x[idx] <= df['unidades_sobrestock'][idx]
# Budget sum
mod += sum([x[idx] for idx in df.index]) <= max_uni
# Solve model
mod.solve()
# Output solution
for idx in df.index:
    print(str(idx) + " " + str(x[idx].value()))
print('Objective' + " " + str(pulp.value(mod.objective)))
In the same dataframe, I have a column with the SKU of each particular row df['SKU']
I'm imagining that the constraint should look something like:
for idx in df.index:
    mod += df['SKU'].count(distinct) <= 500
but that doesn't seem to work.
Thanks!
You will need a binary variable y[i] to indicate if a SKU is used. In math-like notation:
x[i] ≤ maxx[i]*y[i] (y[i] = 0 ==> x[i] = 0)
sum(i, y[i]) ≤ maxy (limit number of different SKUs)
y[i] ∈ {0,1} (binary variable)
where
maxx[i] = upper bound on x[i]
maxy = limit on number of different SKUs
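A sketch of how that could look in the PuLP model from the question above, reusing df, x, mod and the column names there, with one binary variable per distinct value in df['SKU'] (assuming those values are usable as dictionary keys):
max_sku = 500
skus = df['SKU'].unique()
y = pulp.LpVariable.dicts("y", skus, cat="Binary")

# A row can only be taken if its SKU is selected; the row's stock acts as maxx[i]
for idx in df.index:
    mod += x[idx] <= df['unidades_sobrestock'][idx] * y[df['SKU'][idx]]

# Limit the number of different SKUs selected (maxy = 500)
mod += pulp.lpSum(y[s] for s in skus) <= max_sku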

How to find the value of the integer k efficiently for which q divides b^k?

We are given two integers b and q, and we want to find the minimum integer k for which q completely divides b^k, or determine that no such k exists. Can we find the value of k efficiently, rather than just iterating over each value of k (0, 1, 2, 3, ...) and checking whether (b^k) % q == 0?
First of all, k will be zero only if q = 1, and k will be one only if q divides b.
Next, if you can factorize q and b, then you can reason about them.
If there are any prime factors of q that are not factors of b at all, then k does not exist. Otherwise, k has to be large enough so that every prime factor of q is covered, with its full multiplicity, by the factors of b^k.
Here's some pseudo-code:
if (q==1) return 0;
if (b % q == 0) return 1;
// qfactors and bfactors are arrays, one element per prime factor (with multiplicity)
let qfactors = prime_factorization(q);
let bfactors = prime_factorization(b);
let kmin = 0;
foreach (f in qfactors.unique) {
    let qcount = qfactors.count(f);
    let bcount = bfactors.count(f);
    if (bcount == 0) return -1; // f divides q but not b, so k does not exist
    let kmin_f = ceiling(qcount / bcount);
    if (kmin_f > kmin) let kmin = kmin_f;
}
return kmin;
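A direct Python translation of that pseudo-code, as a sketch (trial-division factorization, which is fine for small inputs):
from collections import Counter
from math import ceil

def prime_factorization(n):
    # Trial division; returns a Counter mapping prime -> multiplicity
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def min_k_by_factorization(b, q):
    if q == 1:
        return 0
    qfactors, bfactors = prime_factorization(q), prime_factorization(b)
    kmin = 0
    for f, qcount in qfactors.items():
        bcount = bfactors[f]
        if bcount == 0:
            return -1  # f divides q but not b, so k does not exist
        kmin = max(kmin, ceil(qcount / bcount))
    return kmin

print(min_k_by_factorization(6, 8))   # 3, since 6**3 = 216 = 8 * 27
print(min_k_by_factorization(2, 12))  # -1, since 3 divides 12 but not 2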
If q = 1: k = 0
If b = q: k = 1
If b > q and q divides b: k = 1
If b < q and they share factors: k may still exist (keep taking higher powers of b)
If b and q share no common factors (q > 1): no integer k exists
We know,
Dividend = Divisor x Quotient + Remainder
=> Dividend = Divisor x Quotient   [Here, Remainder = 0]
Now consider the extremes: the lower the value of the Quotient, the lower the value of k.
If you take the Quotient to be 1 (the lowest value, a special case), then the formula for k becomes
k = log q / log b
I found a solution:
If q divides pow(b, k), then all the prime factors of q are prime factors of b. We can iterate q = q / gcd(b, q) while gcd(q, b) ≠ 1. If q ≠ 1 after the iterations, then q has prime factors that are not prime factors of b and k doesn't exist; otherwise k = the number of iterations.
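A small Python sketch of this gcd-based approach (my own illustration of the idea):
from math import gcd

def min_k_gcd(b, q):
    # Smallest k >= 0 with q dividing b**k, or -1 if no such k exists
    if q == 1:
        return 0
    k = 0
    while gcd(b, q) != 1:
        q //= gcd(b, q)
        k += 1
        if q == 1:
            return k
    return -1  # q still has prime factors that do not divide b

print(min_k_gcd(2, 8))   # 3
print(min_k_gcd(6, 8))   # 3
print(min_k_gcd(2, 12))  # -1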

Axiomatic Semantics - How to calculate a weakest precondition

Here is an example:
x = y + 1;
y = y - 2;
{y < 3}
What is the weakest precondition of this example?
I think maybe y < 3 is the answer.
If not, can you tell me why, in detail?
Here is my first mistaken attempt at an answer based on a quick read of Predicate transformer semantics
WP( x := y + 1; y := y - 2, y < 3 ) # Initial problem
= WP( x := y + 1, WP( y := y - 2, y < 3 ) ) # Sequence rule
= WP( x := y + 1, y < 5 ) # Assignment rule
= WP( x - 1 = y, y < 5 ) # solve for y <--- this is wrong!
= WP( x - 1 < 5 ) # Assignment rule
= x < 6 # solve for x
However, as pointed out by Kris, since x := y + 1 is an assignment to x that doesn't affect y, the weakest precondition should just be y < 5, so the correct answer is:
WP( x := y + 1; y := y - 2, y < 3 ) # Initial problem
= WP( x := y + 1, WP( y := y - 2, y < 3 ) ) # Sequence rule
= WP( x := y + 1, y < 5 ) # Assignment rule
= y < 5
Thanks also to philipxy for identifying errors in my syntax, especially := vs =, since that made it easier to mistake assignments for equations, which was part of my initial confusion.
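As a quick, sample-based sanity check of that result (an illustration, not a proof), you can run the two assignments for a range of integer starting values of y and compare against y < 5:
def run(y):
    x = y + 1   # assignment to x; does not affect y
    y = y - 2
    return y < 3  # the postcondition

for y0 in range(-10, 11):
    assert run(y0) == (y0 < 5)  # y < 5 is the computed weakest precondition
print("y < 5 agrees with the postcondition on this integer sample")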