Minimize a difference that has to be positive - optimization

I have the following optimization:
minimize:   obj(x) = f(x) - constant
subject to: lb < x < hb
subject to: f(x) - constant >= 0
I have the feeling that this is not the most convenient way to pose the optimization problem. Is there another formulation that might be more convenient computationally?
So far I have tried modifying the objective, for example using obj(x) = (f(x) - constant)^2, which lets me drop the second constraint. But it gives me convergence problems depending on the value of the constant. Any ideas?

I can think of two other possible approaches - not sure whether they are any better, but here they are:
If [f(x) - constant]^2 gives you convergence issues (not sure why?), try replacing it with abs(f(x) - constant). Warning: abs() is not always well behaved, since it is not differentiable at zero, and it sometimes confuses optimization algorithms.
In your objective function, if (f(x) - constant) becomes negative, return a big value proportional to how far negative it goes; otherwise return the normal difference. A sketch of this penalty idea follows below.
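A minimal sketch of that second idea in Python, assuming SciPy's minimize and a toy stand-in for f(x); the real f, bounds, and constant are whatever your problem dictates:

# Penalty sketch: the objective itself punishes f(x) - constant < 0,
# so the explicit constraint can be dropped. f, constant, lb, ub are
# toy placeholders, not the asker's actual problem.
from scipy.optimize import minimize

constant = 5.0
lb, ub = 0.0, 10.0

def f(x):
    return x[0] ** 2          # stand-in for the real f(x)

def obj(x):
    diff = f(x) - constant
    if diff < 0:
        return 1e6 * (-diff)  # big value, proportional to the violation
    return diff

res = minimize(obj, x0=[3.0], bounds=[(lb, ub)])
print(res.x, res.fun)         # expect x near sqrt(5), objective near 0

Note the kink at f(x) - constant == 0 can still bother gradient-based solvers - the same caveat as with abs() above.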

Related

Kotlin: Why do these two implementations of log base 10 give different results on specific inputs?

println(log(it.toDouble(), 10.0).toInt()+1) // n1
println(log10(it.toDouble()).toInt() + 1) // n2
I had to count the "length" of a number in base n for needs unrelated to this question, and stumbled upon a bug (or rather unexpected behavior): for it == 1000 these two functions give different results.
n1(1000) = 3,
n2(1000) = 4.
Checking values before conversion to int resulted in:
n1_double(1000) = 3.9999999999999996,
n2_double(1000) = 4.0
I understand that some floating-point arithmetic magic is involved, but what is especially weird to me is that for 100, 10000, and other inputs that I checked, n1 == n2.
What is special about it == 1000? How do I ensure that log gives me the intended result (4, not 3.99..)? Right now I can't even figure out which cases I need to double-check, since it is not just powers of 10; it is 1000 (and probably some other numbers) specifically.
I looked into the implementations of log() and log10(). log is implemented as
if (base <= 0.0 || base == 1.0) return Double.NaN
return nativeMath.log(x) / nativeMath.log(base) //log() here is a natural logarithm
while log10 is implemented as
return nativeMath.log10(x)
I suspect the division in the first case is the source of the error, but I can't figure out why it shows up only in specific cases.
I also found this question:
Python math.log and math.log10 giving different results
But I already know that one is more precise than the other. However, there is no log10 analogue for an arbitrary base n, so I'm curious about the reason WHY it is specifically 1000 that goes wrong.
PS: I understand there are ways to compute the length of a number without floating-point arithmetic or base-n logarithms, but at this point it is scientific curiosity.
but I can't figure out why it causes an error only in specific cases.
return nativeMath.log(x) / nativeMath.log(base)
//log() here is a natural logarithm
Consider x = 1000 and nativeMath.log(x). The natural logarithm is not exactly representable. It is near
6.90775527898213_681... (the Double value)
6.90775527898213_705... (closer answer)
Consider base = 10 and nativeMath.log(base). The natural logarithm is not exactly representable. It is near
2.302585092994045_901... (the Double value)
2.302585092994045_684... (closer answer)
The only exactly correct nativeMath.log(x) for a finite x is when x == 1.0.
The quotient of the division of 6.90775527898213681... / 2.302585092994045901... is not exactly representable. It is near 2.9999999999999995559...
The conversion of the quotient to text is not exact.
So we have 4 computation errors, with the system giving us a close (rounded) result instead of the exact one at each step.
Sometimes these rounding errors cancel out in a way we find acceptable and the value of "3.0" is reported. Sometimes not.
Performed with higher-precision math, it is easy to see that log(1000) rounded below the higher-precision answer while log(10) rounded above it. These two round-off errors in opposite directions compounded through the division, leaving the quotient 1 ULP lower than hoped.
When log(x, 10) is computed for another x that is a power of 10, and log(x) happens to round slightly above the higher-precision answer, I'd expect the quotient to end up with a 1-ULP error less often. Perhaps it is 50/50 across all powers of 10.
log10(x) is designed to compute the logarithm in a different fashion, exploiting the fact that the base is 10.0, and it is certainly exact for powers of 10.
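The effect is not Kotlin-specific: any language whose Double is an IEEE-754 binary64 reproduces it. Here is the same arithmetic in Python, purely as an illustration:

import math

# Quotient of two rounded natural logs: lands 1 ULP below 3.0.
print(math.log(1000) / math.log(10))   # 2.9999999999999996
# Direct log10: exact for this power of 10.
print(math.log10(1000))                # 3.0

# Truncating with int() then turns that 1-ULP error into a whole digit:
print(int(math.log(1000) / math.log(10)) + 1)  # 3
print(int(math.log10(1000)) + 1)               # 4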

pyomo: minimal production time / BIG M

I am looking for a way to model a minimum required duty cycle in an optimization model.
After several attempts, however, I have now reached the end of my knowledge and hope for some inspiration here.
The idea is that a binary variable mdl.ontime is set so that the sum of successive ontime values is greater than or equal to the minimum duty cycle:
def ontime(mdl, t):
    min_on_time = 3  # minimum on time in h
    if t < min_on_time:
        return mdl.ontime[t] == 0
    return sum(mdl.ontime[t-i] for i in range(min_on_time)) >= min_on_time
That works so far, as long as the variable mdl.ontime is not tied to anything else at all.
Then I tried three different constraints; unfortunately they all gave the same result: CPLEX only finds infeasible results.
The first variant was:
def flag(mdl, t):
    return mdl.ontime[t] + (mdl.production[t] >= 0.1) >= 2
So if mdl.ontime is 1 and mdl.production is greater than or equal to 0.1 (the assumption is exact enough), the sum should be greater than or equal to 2: a logical addition term.
The second attempt was quite similar to the first:
def flag(mdl, t):
    return mdl.ontime[t] >= (mdl.production[t] >= 0.1)
If mdl.ontime is 1, it should be greater than or equal to the result of comparing mdl.production with 0.1.
And the third with a big M variable:
def flag(mdl, t):
    bigM = 10**6
    return mdl.ontime[t] * bigM >= mdl.production[t]
bigM should be large enough in my case...
None of them work at all, and I have no idea why CPLEX reports that there is only an infeasible solution.
Basically the model runs if I leave out the ontime integration.
Do you guys have any more ideas on how I could implement this?
Many greetings,
Mathias
It isn't really clear what the desired relationship is between your variables/constraints. That said, I don't think this is legal. I'm surprised it isn't raising an error... and if it isn't raising an error, I'm pretty sure it isn't doing what you think:
def flag(mdl, t):
    return mdl.ontime[t] + (mdl.production[t] >= 0.1) >= 2
You are essentially burying an inferred binary variable in there with the test on mdl.production, which isn't going to work, I believe. You probably need to introduce another variable or some such; one common linking pattern is sketched below.
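For what it sounds like you want (ontime[t] forced to 1 whenever production runs, and production forced to at least 0.1 when on), the usual pattern is a pair of explicit linking constraints rather than an inline comparison. A hedged sketch - the horizon, the capacity of 100, and using that capacity as a tight big-M are all assumptions, not your model:

import pyomo.environ as pyo

mdl = pyo.ConcreteModel()
mdl.T = pyo.RangeSet(0, 23)                       # assumed hourly horizon
mdl.production = pyo.Var(mdl.T, bounds=(0, 100))  # assumed capacity
mdl.ontime = pyo.Var(mdl.T, domain=pyo.Binary)

bigM = 100  # the production upper bound, not 10**6

# production > 0 forces ontime = 1
def production_forces_ontime(mdl, t):
    return mdl.production[t] <= bigM * mdl.ontime[t]
mdl.flag_up = pyo.Constraint(mdl.T, rule=production_forces_ontime)

# ontime = 1 forces production >= 0.1 (the asker's threshold)
def ontime_forces_production(mdl, t):
    return mdl.production[t] >= 0.1 * mdl.ontime[t]
mdl.flag_dn = pyo.Constraint(mdl.T, rule=ontime_forces_production)

With ontime now genuinely tied to production, your min-on-time sum constraint has something real to act on, and a tight big-M usually behaves much better in CPLEX than 10**6.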

Using Subtraction in a Conditional Statement in Verilog

I'm relatively new to Verilog and I've been working on a project in which I would, in an ideal world, like to have an assignment statement like:
assign isinbufferzone = a > (packetlength-16384) ? 1:0;
The file with this type of line in it will compile, but isinbufferzone doesn't go high when it should. I'm assuming it's not happy with having subtraction in the conditional. I'm able to make the module work by moving stuff around, but the result is more complicated than I think it should need to be and the latency really starts to add up. Does anyone have any thoughts on what the most concise way to do this is? Thank you in advance for your help.
You probably expect isinbufferzone to go high if packetlength is 16384 or less regardless of a, but this is not what happens.
If packetlength is less than 16384, the value packetlength - 16384 is not a negative number −X, but some very large positive number (maybe 2^32 − X, or 2^17 − X, I'm not quite sure which, but it doesn't matter), because Verilog does unsigned arithmetic by default. This is called integer overflow.
You could maybe try to solve this by declaring some signals as signed, but in my opinion the safest way is to handle the overflow case explicitly and make sure the subtraction result is only evaluated for packetlength values of 16384 or greater:
assign isinbufferzone = (packetlength < 16384) ? 1 : (a > packetlength - 16384);
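To see the wraparound concretely, here is the same arithmetic modeled in Python; the 17-bit width is an assumption, since the real width depends on how packetlength and the subtraction are declared:

WIDTH = 17                       # assumed operand width in bits

def unsigned_sub(a, b, width=WIDTH):
    # Verilog-style unsigned subtraction: wraps modulo 2**width.
    return (a - b) % (1 << width)

print(unsigned_sub(1000, 16384))         # 115688, not -15384
print(9000 > unsigned_sub(1000, 16384))  # False, although 9000 > -15384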

Which is faster, doing float subtraction twice or doing float subtraction once, storing the result, and then using the stored result?

I have a piece of code in my program that needs to execute as fast as possible. I have two doubles that I need to subtract, and I need to use the result of that subtraction twice in this piece of code. Would storing the result in a variable and using the variable twice be faster, or would it be faster to just do the subtraction twice? Here's what I mean in pseudocode, where x and y are doubles:
Should I do this:
double difference = x - y;
if (difference >= 10.0)
    return 0;
else
    return tan(difference);
Or this:
if ((x - y) >= 10.0)
    return 0;
else
    return tan(x - y);
Bonus points if you can tell me whether a > comparison is significantly faster than a >= comparison. It's unlikely x - y will ever be exactly 10.0, so I could go with just > if that would be faster. This is in Objective-C for an iPhone app. Thanks.
Chances are good that the optimizer will "see" what you are doing and optimize the second code snippet to match the first. This optimization technique is called Common Subexpression Elimination.
Moreover, the optimizer would very likely eliminate the difference variable altogether, using the value from the register in the call to tan.
In the absence of optimization, the answer depends on the mixture of xs and ys: if a significant portion of the pairs is such that tan is not called, the second snippet would be slightly faster. If most of the pairs do call tan, performance would be dominated by the call to tan, which is significantly slower than a single subtraction or a single instruction to store a double.
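If you would rather measure than reason about it, a quick micro-benchmark settles it for your actual data. A sketch of the idea using Python's timeit - the absolute numbers will not transfer to compiled Objective-C, where the optimizer can do the CSE described above, but the tan-dominates effect shows up the same way:

import timeit

setup = "import math; x, y = 12.5, 3.25"  # x - y = 9.25, so tan() runs

stored = (
    "difference = x - y\n"
    "r = 0 if difference >= 10.0 else math.tan(difference)"
)
recomputed = "r = 0 if (x - y) >= 10.0 else math.tan(x - y)"

print("stored:    ", timeit.timeit(stored, setup=setup, number=1_000_000))
print("recomputed:", timeit.timeit(recomputed, setup=setup, number=1_000_000))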
Let's analyze your code using the RAM computational model, in which execution time is simply the count of primitive operations. The primitive operations for this model are:
Assigning a value to a variable
Calling a method
Performing arithmetic operations
Comparing two numbers
Indexing into an array
Returning from a method
Now let's analyze both snippets on this basis. For the first snippet, the primitive operation counts are:
double difference = x - y;   --------> 2
if (difference >= 10.0)      --------> 1
    return 0;                --------> 1
else
    return tan(difference);  --------> 2 + p (primitive operations in the tan function)
Total: 6 + p.
For the second snippet:
if ((x - y) >= 10.0)  --------> 2
    return 0;         --------> 1
else
    return tan(x - y); --------> 3 + p
the number of primitive operations is again 6 + p.
The performance of both snippets would be the same.
Two things:
1. Memory references are slow.
2. The compiler is smart.

Objective-C: how to implement a "Goal Seek"-like algorithm, similar to Excel?

I have a rather complicated equation with a single variable that I would like to vary. *The goal is to get the equation to equal 0.*
For example:
0 = variable * (complicated equation of constants and exponents)
My initial thought was to simply brute-force down from some large enough value of the variable, but I quickly realized that the number I'm "goal seeking" may contain a fractional component, so a simple integer decrement may not work.
Can someone suggest the correct "Goal Seek" algorithm implementation, like Excel's?
double result = 1;
double variable = 1000;
double tempVariable = variable;
double tolerance = 0.1;

while (fabs(result) > tolerance) {  // fabs(), not abs(): abs() truncates a double to an int
    variable--;
    result = variable * (complicated equation);
}
Is there an algorithm I can use to numerically solve the equation that I have?
Simulated annealing is a commonly used technique. In this case you'd want to minimize the absolute value of your complicated function, which finds the point where it is closest to 0.
Alternatively, you could use least-squares curve fitting (see lsqcurvefit in MATLAB). lsqcurvefit is much more powerful than Goal Seek for curve fitting, depending on the complexity of the problem to solve.
cheers,
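A minimal bisection sketch of what Excel's Goal Seek does, written in Python for brevity (it translates to Objective-C almost line for line). The f below is a hypothetical stand-in for your complicated equation; bisection only needs a bracket [lo, hi] where f changes sign:

import math

def f(v):
    return v * math.exp(-v) - 0.1    # hypothetical stand-in equation

def goal_seek(f, lo, hi, tolerance=1e-6):
    # Bisection: repeatedly halve a sign-changing bracket around the root.
    if f(lo) * f(hi) > 0:
        raise ValueError("f(lo) and f(hi) must have opposite signs")
    while hi - lo > tolerance:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid                 # root is in the lower half
        else:
            lo = mid                 # root is in the upper half
    return (lo + hi) / 2.0

print(goal_seek(f, 0.0, 1.0))        # about 0.112 for this placeholder f

In practice, a library routine implementing Brent's method (for example scipy.optimize.brentq in Python) converges faster than plain bisection from the same sign-changing bracket.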