Pyomo: minimal production time / big-M variables

I am looking for a way to model a minimum required duty cycle (a minimum up time) in an optimization model.
After several attempts I have reached the limits of my knowledge and am hoping for some inspiration here.
The idea is that a binary variable mdl.ontime is set so that the sum of successive ontime values is greater than or equal to the minimum duty cycle:
def ontime(mdl, t):
    min_on_time = 3  # minimum on time in h
    if t < min_on_time:
        return mdl.ontime[t] == 0
    return sum(mdl.ontime[t - i] for i in range(min_on_time)) >= min_on_time
That works so far, except that the variable mdl.ontime is not yet coupled to the production variable at all.
I then tried three different coupling constraints; unfortunately they all gave the same result: CPLEX only finds infeasible results.
The first variant was:
def flag(mdl, t):
    return mdl.ontime[t] + (mdl.production[t] >= 0.1) >= 2
So if mdl.ontime is 1 and mdl.production is greater than or equal to 0.1 (that assumption is exact enough for my purposes), the sum should be greater than or equal to 2: a logical AND written as an addition term.
The second attempt was quite similar to the first:
def flag(mdl, t):
    return mdl.ontime[t] >= (mdl.production[t] >= 0.1)
If mdl.ontime is 1, it should be greater than or equal to the result of comparing mdl.production with 0.1.
And the third used a big-M constant:
def flag(mdl, t):
    bigM = 10**6
    return mdl.ontime[t] * bigM >= mdl.production[t]
bigM should be large enough in my case...
None of them works at all, and I have no idea why CPLEX reports that there is only an infeasible solution.
Basically, the model runs if I leave out the ontime integration.
Do you guys have any more ideas how I could implement this?
Best regards,
Mathias

It isn't really clear what the desired relationship between your variables/constraints is. That said, I don't think this is legal. I'm surprised that it isn't raising an error... and if it isn't raising an error, I'm pretty sure it isn't doing what you think:
def flag(mdl, t):
    return mdl.ontime[t] + (mdl.production[t] >= 0.1) >= 2
You are essentially burying an inferred binary variable in there with the test on mdl.production, which isn't going to work, I believe. You probably need to introduce another variable or a different formulation.
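A minimal sketch of the usual big-M coupling between the binary flag and production (the index set, bounds, and numbers here are illustrative assumptions, not taken from the model above):

from pyomo.environ import (ConcreteModel, Var, Constraint, Binary,
                           NonNegativeReals, RangeSet)

mdl = ConcreteModel()
mdl.T = RangeSet(0, 23)  # hypothetical hourly horizon
mdl.production = Var(mdl.T, domain=NonNegativeReals)
mdl.ontime = Var(mdl.T, domain=Binary)

bigM = 1000  # ideally a tight upper bound on production, not a huge 10**6

# If ontime[t] is 0, production[t] is forced to 0.
def prod_ub(mdl, t):
    return mdl.production[t] <= bigM * mdl.ontime[t]
mdl.prod_ub_con = Constraint(mdl.T, rule=prod_ub)

# If ontime[t] is 1, production[t] must be at least the 0.1 threshold.
def prod_lb(mdl, t):
    return mdl.production[t] >= 0.1 * mdl.ontime[t]
mdl.prod_lb_con = Constraint(mdl.T, rule=prod_lb)

Together these two linear constraints give the intended equivalence: production[t] is positive exactly when ontime[t] is 1, without ever embedding a comparison like mdl.production[t] >= 0.1 inside an expression.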

Related

Kotlin: Why do these two implementations of log base 10 give different results on specific inputs?

println(log(it.toDouble(), 10.0).toInt()+1) // n1
println(log10(it.toDouble()).toInt() + 1) // n2
I had to count the "length" of a number in base n for needs unrelated to this question and stumbled upon a bug (or rather unexpected behavior): for it == 1000 these two functions give different results.
n1(1000) = 3,
n2(1000) = 4.
Checking values before conversion to int resulted in:
n1_double(1000) = 3.9999999999999996,
n2_double(1000) = 4.0
I understand that some floating-point arithmetic magic is involved, but what is especially weird to me is that for 100, 10000, and the other inputs I checked, n1 == n2.
What is special about it == 1000? How do I ensure that log gives me the intended result (4, not 3.99...)? Right now I can't even figure out which cases I need to double-check, since it is not just powers of 10; it is 1000 (and probably some other numbers) specifically.
I looked into the implementations of log() and log10(); log is implemented as
if (base <= 0.0 || base == 1.0) return Double.NaN
return nativeMath.log(x) / nativeMath.log(base) //log() here is a natural logarithm
while log10 is implemented as
return nativeMath.log10(x)
I suspect this division in the first case is the reason for the error, but I can't figure out why it causes an error only in specific cases.
I also found this question:
Python math.log and math.log10 giving different results
But I already know that one is more precise than the other. However, there is no log10 analogue for an arbitrary base n, so I'm curious about the reason WHY it is specifically 1000 that goes wrong.
PS: I understand there are ways to compute the length of a number without floating-point arithmetic or base-n logarithms, but at this point it is scientific curiosity.
but I can't figure out why it causes an error only in specific cases.
return nativeMath.log(x) / nativeMath.log(base)
//log() here is a natural logarithm
Consider x = 1000 and nativeMath.log(x). The natural logarithm is not exactly representable. It is near
6.90775527898213_681... (Double answer)
6.90775527898213_705... (closer answer)
Consider base = 10 and nativeMath.log(base). The natural logarithm is not exactly representable. It is near
2.302585092994045_901... (Double)
2.302585092994045_684... (closer answer)
The only exactly correct nativeMath.log(x) for a finite x is when x == 1.0.
The quotient 6.90775527898213681... / 2.302585092994045901... is not exactly representable. It is near 2.9999999999999995559...
The conversion of the quotient to text is not exact.
So we have 4 computation errors with the system giving us a close (rounded) result instead at each step.
Sometimes these rounding errors cancel out in a way we find acceptable and the value of "3.0" is reported. Sometimes not.
Performed with higher-precision math, it is easy to see that log(1000) came out less than the higher-precision answer and log(10) came out more. These two round-off errors in opposite directions for a division compounded, leaving the quotient off (low) by 1 ULP more than hoped.
When log(x, 10) is computed for another x that is a power of 10 and log(x) is slightly more than the higher-precision answer, I'd expect the quotient to suffer a 1 ULP error less often. Perhaps it will be about 50/50 over all powers of 10.
log10(x) is designed to compute the logarithm in a different fashion, exploiting the fact that the base is 10.0, and it is certainly exact for powers of 10.
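The same effect is reproducible in Python's math module (the subject of the question linked above), since math.log(x, base) likewise divides two natural logarithms while math.log10 is a dedicated routine. A quick sketch:

import math

# Base-10 logarithm of 1000, two ways:
print(math.log(1000, 10))           # 2.9999999999999996: log(1000)/log(10), two roundings
print(math.log10(1000))             # 3.0: dedicated base-10 routine, exact here
print(int(math.log(1000, 10)) + 1)  # 3, the surprising "length"
print(int(math.log10(1000)) + 1)    # 4, the intended "length"

If exactness on powers of the base matters, prefer the dedicated function (log10 here), or round the quotient before truncating it to an integer.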

Minimize a difference that has to be positive

I have the following optimization problem:
minimize: obj(x) = f(x) - constant
subject to: lb < x < ub
subject to: f(x) - constant >= 0
I have the feeling that this is not the most convenient way to pose the optimization problem. Is there another formulation that might be more convenient computationally?
So far I have tried modifying the objective, for example using obj(x) = (f(x) - constant)^2, which lets me drop the second constraint. But that gives me some convergence problems depending on the value of the constant. Any ideas?
I can think of two other possible approaches - not sure whether they are any better, but here they are:
1. If [f(x) - constant]^2 gives you convergence issues (not sure why?), try replacing it with abs(f(x) - constant). Warning: the abs() function is not always the best behaved; it sometimes confuses optimization algorithms.
2. In your objective function, if (f(x) - constant) becomes negative, return a big value, proportional to how negative it becomes; otherwise return the normal difference. A sketch of this follows below.
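A minimal sketch of the second approach, with a made-up f(x) and constant and scipy.optimize.minimize standing in for the solver (none of these specifics come from the question):

import numpy as np
from scipy.optimize import minimize

CONSTANT = 3.5                      # hypothetical constant

def f(x):
    return (x[0] - 1.0) ** 2 + 3.0  # hypothetical f(x)

def obj(x):
    diff = f(x) - CONSTANT
    if diff < 0:
        # Infeasible region: return a large value that grows
        # with how negative the difference is.
        return 1e6 * -diff
    return diff

res = minimize(obj, x0=np.array([0.0]), bounds=[(-5.0, 5.0)])
print(res.x, res.fun)

Note that the sharp kink this penalty introduces at f(x) = constant can itself slow gradient-based solvers down, so a smooth penalty (for example, a large multiple of min(0, f(x) - constant)^2) is often the better-behaved variant.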

Randomly increasing sequence- Wolfram Mathematica

Good afternoon, I have a problem making a recurrence table with a randomly increasing sequence. I want it to return an increasing sequence with a random difference between consecutive elements. Right now I've got:
RecurrenceTable[{a[k+1]==a[k] + RandomInteger[{0,4}], a[1]==-12},a,{k,1,5}]
But it returns an arithmetic progression with one d chosen for all k (e.g. {-12,-8,-4,0,4,8,12,16,20,24}).
Also, I would be really grateful for an explanation of why, if I replace every k in my code with n, I get:
RecurrenceTable[{4+a[n] == a[n],a[1] == -12},a,{n,1,10}]
Thank you very much for your time!
I don't believe that RecurrenceTable is what you are looking for.
Try this instead
FoldList[Plus,-12,RandomInteger[{0,4},5]]
which returned, on one run,
{-12,-8,-7,-3,1,2}
and, on another run,
{-12,-9,-5,-3,0,1}
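For comparison, the same fold-with-random-steps idea written in Python (a sketch; itertools.accumulate plays the role of FoldList here):

import random
from itertools import accumulate

# Start at -12 and add a random step from 0..4 five times,
# mirroring FoldList[Plus, -12, RandomInteger[{0, 4}, 5]].
steps = [random.randint(0, 4) for _ in range(5)]
print(list(accumulate(steps, initial=-12)))  # e.g. [-12, -8, -7, -3, 1, 2]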

Pyomo: Unbounded objective function though bounded

I am currently implementing an optimization problem with Pyomo, and for some hours now I have been getting the message that my problem is unbounded. After searching for the issue, I came across one term that seems to be unbounded. I excluded this term from the objective function, and it turned out to take a very high negative value, which supports the assumption that it is unbounded towards -Inf.
But I have checked the problem further, and it should be impossible for the term to be unbounded, as the following code and results show:
model.nominal_cap_storage = Var(model.STORAGE, bounds=(0, None))  # lower bound is 0

# I assumed a very high CAPEX for each storage (see print)
dict_capex_storage = {'battery': capex_battery_storage,
                      'co2': capex_co2_storage,
                      'hydrogen': capex_hydrogen_storage,
                      'heat': capex_heat_storage,
                      'syncrude': capex_syncrude_storage}
print(dict_capex_storage)
>>> {'battery': 100000000000000000, 'co2': 100000000000000000,
     'hydrogen': 1000000000000000000, 'heat': 1000000000000000, 'syncrude': 10000000000000000000}
From these assumptions I already conclude that it should be impossible for the term to be unbounded towards -Inf, as the capacity has a lower bound of 0 and the CAPEX is a positive fixed value. But now it gets crazy. The following term has the issue of being unbounded:
model.total_investment_storage = Var()
def total_investment_storage_rule(model):
    return model.total_investment_storage == sum(model.nominal_cap_storage[storage] * dict_capex_storage[storage]
                                                 for storage in model.STORAGE)
model.total_investment_storage_con = Constraint(rule=total_investment_storage_rule)
If I exclude the term from the objective function, I get the following value after the optimization. It seems that it can take high negative values.
>>>>
Variable total_investment_storage
-1004724108.3426505
So I checked the term regarding the component model.nominal_cap_storage to see the value of the capacity:
model.total_cap_storage = Var()
def total_cap_storage_rule(model):
    return model.total_cap_storage == sum(model.nominal_cap_storage[storage] for storage in model.STORAGE)
model.total_cap_storage_con = Constraint(rule=total_cap_storage_rule)
>>>>
Variable total_cap_storage
0.0
I did the same for the dictionary but made a mistake: I forgot to delete model.nominal_cap_storage. Still, the result is confusing:
model.total_capex_storage = Var()
def total_capex_storage_rule(model):
    return model.total_capex_storage == sum(model.nominal_cap_storage[storage] * dict_capex_storage[storage]
                                            for storage in model.STORAGE)
model.total_capex_storage_con = Constraint(rule=total_capex_storage_rule)
>>>>
Variable total_capex_storage
0.0
So my question is: why is the term unbounded, and how is it possible that model.total_investment_storage and model.total_capex_storage have different solutions even though both are calculated identically? Any help is highly appreciated.
I think you are misinterpreting "unbounded." When the solver says the problem is unbounded, it means the objective function value is unbounded based on the variables and constraints in the problem. It has nothing to do with bounds on variables, unless one of those variable bounds prevents the objective from being unbounded.
If you want help on the above problem, you need to edit your post to include the full problem, with the objective function, and (if possible) the error. What you have now is a collection of snippets from different variations of a problem, which isn't really informative about the overall issue.
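As a minimal illustration of that meaning of "unbounded" (a toy model, not the one from the question): a Pyomo Var() declared without a domain or bounds lives on the whole real line, so nothing stops a minimization from driving it towards -Inf:

from pyomo.environ import ConcreteModel, Var, Objective, minimize

model = ConcreteModel()
model.x = Var()  # defaults to the reals: no bounds at all
model.obj = Objective(expr=model.x, sense=minimize)
# Any LP solver will declare this model unbounded, because x can
# decrease forever while remaining feasible.

This is also why setting bounds=(0, None) on the offending variable, as in the follow-up below, makes the message disappear.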
I solved the problem by setting a lower bound on the term that was taking negative values:
model.total_investment_storage = Var(bounds=(0, None))
I am still not sure why this term could take negative values, but this at least solved my problem.

Is this considered memoisation?

In optimising some code recently, we ended up performing what I think is a "type" of memoisation but I'm not sure we should be calling it that. The pseudo-code below is not the actual algorithm (since we have little need for factorials in our application, and posting said code is a firing offence) but it should be adequate for explaining my question. This was the original:
def factorial(n):
    if n == 1:
        return 1
    return n * factorial(n - 1)
Simple enough, but we added fixed points so that large numbers of calculations could be avoided for larger numbers, something like:
def factorial(n):
    if n == 1:
        return 1
    if n == 10:
        return 3628800
    if n == 20:
        return 2432902008176640000
    if n == 30:
        return 265252859812191058636308480000000
    if n == 40:
        return 815915283247897734345611269596115894272000000000
    # And so on.
    return n * factorial(n - 1)
This, of course, meant that 12! was calculated as 12 * 11 * 3628800 rather than the less efficient 12 * 11 * 10 * 9 * 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1.
But I'm wondering whether we should be calling this memoisation since that seems to be defined as remembering past results of calculations and using them. This is more about hard-coding calculations (not remembering) and using that information.
Is there a proper name for this process or can we claim that memoisation extends back not just to calculations done at run-time but also those done at compile-time and even back to those done in my head before I even start writing the code?
I'd call it pre-calculation rather than memoization. You're not really remembering any of the calculations you've done in the process of calculating a final answer for a given input, rather you're pre-calculating some fixed number of answers for specific inputs. Memoization as I understand it is really more akin to "caching" a set of results as you calculate them for later reuse. If you were to store each value calculated so that you didn't need to recalculate it again later, that would be memoization. Your solution differs in that you never store any "calculated" results from your program, only the fixed points that have been pre-calculated. With memoization if you reran the function with an input different than one of the pre-calculated ones it would not have to recalculate the result, it would simply reuse it.
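For contrast, here is what run-time memoisation of the same function might look like (a sketch using Python's functools.lru_cache; the original pseudo-code left the language unspecified):

from functools import lru_cache

@lru_cache(maxsize=None)
def factorial(n):
    if n == 1:
        return 1
    return n * factorial(n - 1)

factorial(12)  # computes and caches 1! through 12!
factorial(15)  # reuses the cached 12! and only multiplies by 13, 14, 15

The cache is filled by actual calls at run time, whereas the fixed points in the question were computed before the program ever ran.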
Whether or not you are hard-coding the results, this is still memoization, because you have already calculated results that you are expecting to calculate again. This may happen at run time or at compile time, but either way, it's memoization.
Memoization is done at run-time. You are optimizing at compile time. So, it is not.
See for example ... Wikipedia
Or ...
Memoization
The term memoization was coined by Donald Michie (1968) to refer to the process by which a function is made to automatically remember the results of previous computations. The idea has become more popular in recent years with the rise of functional languages; Field and Harrison (1988) devote a whole chapter to it. The basic idea is just to keep a table of previously computed input/result pairs.
Peter Norvig
University of California
(the bold is mine)
def memoisation(f):
    dct = {}
    def myfunction(x):
        if x not in dct:
            dct[x] = f(x)
        return dct[x]
    return myfunction

@memoisation
def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)

def nb_appels(n):
    # counts the recursive calls an unmemoised fibonacci(n) would make
    if n == 0 or n == 1:
        return 0
    else:
        return 1 + nb_appels(n-1) + 1 + nb_appels(n-2)

print(fibonacci(13))
print('nbappel', nb_appels(13))