How to implement a "Goal Seek"-like algorithm in Objective-C, similar to Excel?

I have a rather complicated equation with a single variable that I would like to vary. The goal is to get the equation to equal 0.
For example:
0 = variable * (complicated equation of constants and exponents)
My initial thought was to simply brute-force down from some large enough value of the variable, but I quickly realized that the number I'm "Goal seeking" may contain a fractional component, so a simple integer decrement may not work.
Can someone suggest the correct "Goal Seek" algorithm implementation, like Excel's?
double result = 1;
double variable = 1000;
double tolerance = 0.1;
while (fabs(result) > tolerance) {  /* fabs, not abs: abs() would truncate the double to an int */
    variable--;
    result = variable * (complicated equation);
}
Is there an algorithm I can use to numerically solve the equation that I have?

Simulated annealing is a commonly used technique. In this case you'd want to minimize the absolute value of your complicated function, which finds the value of the variable that brings it closest to 0.
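For illustration, here is a minimal simulated-annealing sketch for this one-variable case, written in plain C (so it drops straight into an Objective-C file). The function f, the starting point, and the cooling constants are placeholders of my own, not something from the question; you would tune them for the real equation.

    #include <math.h>
    #include <stdlib.h>

    /* minimize |f(x)| in one variable by simulated annealing (sketch) */
    double goal_seek(double (*f)(double), double start)
    {
        double x = start;
        double energy = fabs(f(x));          /* "energy" = distance from 0 */
        double temperature = 100.0;          /* placeholder initial temperature */

        while (temperature > 1e-9) {
            /* propose a random neighbour; step size shrinks as we cool */
            double candidate = x + temperature * ((double)rand() / RAND_MAX - 0.5);
            double candidateEnergy = fabs(f(candidate));

            /* always accept improvements; accept worse moves with a
               probability that decays as the temperature drops */
            if (candidateEnergy < energy ||
                (double)rand() / RAND_MAX < exp((energy - candidateEnergy) / temperature)) {
                x = candidate;
                energy = candidateEnergy;
            }
            temperature *= 0.999;            /* geometric cooling schedule */
        }
        return x;
    }

For a smooth single-variable equation where you can bracket a sign change, a simple bisection or secant iteration would converge much faster; annealing just has the advantage of needing no bracket or derivative.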

Alternatively, you could use least-squares curve fitting (see lsqcurvefit in MATLAB). Depending on the complexity of the problem, lsqcurvefit is much more powerful than Goal Seek for curve fitting.

Related

Minimize a difference that has to be positive

I have the following optimization:
minimize:    obj(x) = f(x) - constant
subject to:  lb < x < hb
subject to:  f(x) - constant >= 0
I have the feeling that this is not the most convenient way to pose the optimization problem. Is there another way that might be more convenient computationally?
So far I have tried modifying the objective, for example using obj(x) = (f(x) - constant)^2, which lets me drop the second constraint. But it gives me convergence problems depending on the value of the constant. Any ideas?
I can think of two other possible approaches; I am not sure whether they are any better, but here they are:
If [f(x) - constant]^2 gives you convergence issues (not sure why?), try replacing it with abs(f(x) - constant). Warning: the abs() function is not always well behaved and sometimes confuses optimization algorithms.
In your objective function, if (f(x) - constant) becomes negative, return a big value proportional to how negative it becomes. Otherwise return the normal difference (see the sketch below).
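A sketch of that second idea in C, with f and the constants as stand-ins of my own for the real problem:

    double f(double x);                  /* stand-in for the real model */

    #define CONSTANT 10.0                /* placeholder value */
    #define PENALTY  1.0e6               /* weight on constraint violations */

    /* penalized objective: feasible points return the plain difference,
       infeasible points return a big value proportional to the violation */
    double obj(double x)
    {
        double diff = f(x) - CONSTANT;
        if (diff < 0.0)
            return PENALTY * -diff;      /* grows with how negative it gets */
        return diff;
    }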

Conditional Equations with Variable in GAMS

I need your help to solve this little problem I'm having in GAMS.
In my objective function I have this term: z = [...] - TWC(j)*HS(j), where HS(j) is a variable.
Now, TWC(j) should be a parameter that works like this:
TWC(j) = 0 when HS(j) < 1000
and
TWC(j) = 3.21 when HS(j) >= 1000.
Any idea how to implement this in GAMS? My attempts have all failed.
EDIT: this is what I tried. I defined an equation called TWCup(j):
TWCup(j)$(HS.l(j) >= 1000).. TWC(j) =e= 3.21;
Thanks ;)
Probably not relevant for the OP anymore (since the question is more than 3 years old), but maybe useful for someone else who comes across this question.
If TWC(j) is a function of your variable HS(j), it is not a parameter; it is another variable. So you should define TWC(j) as a variable and not as a parameter. This is probably the reason you were getting errors.
There are a few ways to fix your problem. One is to actually turn TWC(j) into a variable, but that makes your problem non-linear, which may or may not be an issue. It could also require binary variables, which could likewise become a problem (or not).
But I think this issue can be resolved with a different specification of the LP. The cost function f(HS(j)) = TWC(j)*HS(j) is piecewise linear and convex, which you can represent in a standard LP using auxiliary variables (assuming you are minimizing).
* declare an auxiliary variable
Variable
    w(j);
* declare equations for the piecewise-linear cost function
Equation
    costfun1(j)
    costfun2(j);
* define costfun1 and costfun2
costfun1(j).. w(j) =g= 0;
costfun2(j).. w(j) =g= -3210 + 3.21*HS(j);
* redefine the objective function (note the change to '+', assuming this is
* a cost function that you are minimizing)
z = [...] + w(j)
This solution is very problem-dependent. I assumed you were minimizing, and I changed the sign in the objective function to '+'. If that is not the case, this would not work (it would not be convex), and we would need to look at other approaches.
The takeaway here is that something that is a function of a variable is itself a variable, but you may have options to reformulate your problem to get around that.

Using Subtraction in a Conditional Statement in Verilog

I'm relatively new to Verilog and I've been working on a project in which I would, in an ideal world, like to have an assignment statement like:
assign isinbufferzone = a > (packetlength-16384) ? 1:0;
The file with this type of line in it compiles, but isinbufferzone doesn't go high when it should. I'm assuming it's not happy with having subtraction in the conditional. I'm able to make the module work by moving things around, but the result is more complicated than I think it needs to be, and the latency really starts to add up. Does anyone have any thoughts on the most concise way to do this? Thank you in advance for your help.
You probably expect isinbufferzone to go high if packetlength is 16384 or less regardless of a, however this is not what happens.
If packetlength is less than 16384, the value packetlength - 16384 is not a negative number -X, but some very large positive number (maybe 2^32 - X, or 2^17 - X, I'm not quite sure which, but it doesn't matter), because Verilog does unsigned arithmetic by default. This is called integer overflow.
You could maybe try to solve this by declaring some signals as signed, but in my opinion the safest way is to explicitly handle the overflow case, making sure the subtraction result is only evaluated for packetlength values of 16384 or greater:
assign isinbufferzone = (packetlength < 16384) ? 1 : (a > packetlength - 16384);

How to correctly name a variable which represents a value of 1 - n?

It obviously depends on the context you are using them in but, I was wondering if there is a universally accepted way to name such variables, or at least in a mathematical context.
I've often seen:
float k = someValue;
float oneMinusK = 1 - k;
...which seems to me as descriptive as it is meaningless.
Please note that I'm not asking how to name a variable in general, but how to do it in this specific case. Examples and contexts where you have used them would be much appreciated.
Thanks.
In probability 1-k is the probability of X not occurring, given that k is the probability of X occurring.
So
float will_win_lottery = 0.00000000001;
float will_not_win_lottery = 1 - will_win_lottery;
You should name your variables based on what they mean in the domain you are working in, not the algorithm used to produce them. Thus if k represented your house number, k - 1 might represent your next-door neighbor's house number. Name it accordingly.
I would call it the Complement.
I would probably calculate that when I needed it. How much time do you think it saves to store it in a variable? Remember that premature optimization is the root of all evil.
There is no way to answer your question without knowing what "k" represents. Ironically, the reason that is not possible is the poor naming of the variable "k" in the first place, so that is what you should worry about instead. If you give "k" a more descriptive name, a good choice of name for "1 - k" should come naturally, as in the example of "will_win_lottery" and "will_not_win_lottery".
Your usage already seems descriptive enough, just go with it.
Are these supposed to be constants?
If you are doing it exclusively for legibility, why not create a method, à la Dan's suggestion:
float complement(float n) { return (1.0 - n); }
Does it really matter? Use i; it's not any less descriptive than k. Things like this need to be documented/commented if you're that OCD about code descriptiveness.

Comparing IEEE floats and doubles for equality

What is the best method for comparing IEEE floats and doubles for equality? I have heard of several methods, but I wanted to see what the community thought.
I think the best approach is to compare ULPs (units in the last place).
#include <cstdlib>  // for std::abs

bool is_nan(float f)
{
    return (*reinterpret_cast<unsigned __int32*>(&f) & 0x7f800000) == 0x7f800000
        && (*reinterpret_cast<unsigned __int32*>(&f) & 0x007fffff) != 0;
}

bool is_finite(float f)
{
    return (*reinterpret_cast<unsigned __int32*>(&f) & 0x7f800000) != 0x7f800000;
}

// if this symbol is defined, NaNs are never equal to anything (as is normal in IEEE floating point)
// if this symbol is not defined, NaNs are hugely different from regular numbers, but might be equal to each other
#define UNEQUAL_NANS 1
// if this symbol is defined, infinities are never equal to finite numbers (as they're unimaginably greater)
// if this symbol is not defined, infinities are 1 ULP away from +/- FLT_MAX
#define INFINITE_INFINITIES 1

// test whether two IEEE floats are within a specified number of representable values of each other
// this depends on the fact that IEEE floats are properly ordered when treated as signed-magnitude integers
bool equal_float(float lhs, float rhs, unsigned __int32 max_ulp_difference)
{
#ifdef UNEQUAL_NANS
    if (is_nan(lhs) || is_nan(rhs))
    {
        return false;
    }
#endif
#ifdef INFINITE_INFINITIES
    if ((is_finite(lhs) && !is_finite(rhs)) || (!is_finite(lhs) && is_finite(rhs)))
    {
        return false;
    }
#endif
    signed __int32 left(*reinterpret_cast<signed __int32*>(&lhs));
    // transform signed-magnitude ints into two's-complement signed ints
    if (left < 0)
    {
        left = 0x80000000 - left;
    }
    signed __int32 right(*reinterpret_cast<signed __int32*>(&rhs));
    // transform signed-magnitude ints into two's-complement signed ints
    if (right < 0)
    {
        right = 0x80000000 - right;
    }
    if (static_cast<unsigned __int32>(std::abs(left - right)) <= max_ulp_difference)
    {
        return true;
    }
    return false;
}
A similar technique can be used for doubles. The trick is to convert the floats so that they're ordered (as if integers) and then just see how different they are.
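For reference, a sketch of the same trick for doubles in plain C, using memcpy instead of pointer casts to sidestep strict-aliasing problems; the NaN/infinity guards from the float version are omitted here for brevity:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* map a double's bit pattern onto a monotonically ordered signed integer */
    int64_t ordered_bits(double d)
    {
        int64_t i;
        memcpy(&i, &d, sizeof i);
        /* flip negative signed-magnitude patterns so integer order matches float order */
        if (i < 0)
            i = INT64_MIN - i;
        return i;
    }

    bool equal_double(double lhs, double rhs, uint64_t max_ulp_difference)
    {
        int64_t a = ordered_bits(lhs);
        int64_t b = ordered_bits(rhs);
        /* take |a - b| in unsigned arithmetic to avoid signed overflow */
        uint64_t diff = a >= b ? (uint64_t)a - (uint64_t)b
                               : (uint64_t)b - (uint64_t)a;
        return diff <= max_ulp_difference;
    }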
The current version I am using is this:
bool is_equals(float A, float B,
               float maxRelativeError, float maxAbsoluteError)
{
    if (fabs(A - B) < maxAbsoluteError)
        return true;

    float relativeError;
    if (fabs(B) > fabs(A))
        relativeError = fabs((A - B) / B);
    else
        relativeError = fabs((A - B) / A);

    if (relativeError <= maxRelativeError)
        return true;

    return false;
}
This seems to take care of most problems by combining relative and absolute error tolerance. Is the ULP approach better? If so, why?
#DrPizza: I am no performance guru but I would expect fixed point operations to be quicker than floating point operations (in most cases).
It rather depends on what you are doing with them. A fixed-point type with the same range as an IEEE float would be many many times slower (and many times larger).
Things suitable for floats:
3D graphics, physics/engineering, simulation, climate simulation....
In numerical software you often want to test whether two floating point numbers are exactly equal. LAPACK is full of examples of such cases. Sure, the most common case is where you want to test whether a floating point number equals "Zero", "One", "Two", or "Half". If anyone is interested I can pick some algorithms and go into more detail.
Also in BLAS you often want to check whether a floating point number is exactly Zero or One. For example, the routine dgemv can compute operations of the form
y = beta*y + alpha*A*x
y = beta*y + alpha*A^T*x
y = beta*y + alpha*A^H*x
So if beta equals One you have a "plus assignment", and for beta equals Zero a "simple assignment". So you certainly can cut the computational cost if you give these (common) cases special treatment.
Sure, you could design the BLAS routines in such a way that you avoid exact comparisons (e.g. using some flags). However, LAPACK is full of examples where that is not possible.
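As a sketch of the idea (illustrative C of my own, not the actual reference dgemv), the exact comparisons select the cheap paths:

    /* dgemv-style y = beta*y + alpha*A*x with the common cases special-cased;
       A is n-by-n, row-major -- illustrative only, not the reference BLAS code */
    void gemv_sketch(int n, double alpha, const double *A,
                     const double *x, double beta, double *y)
    {
        if (beta == 0.0) {                        /* exact compare: simple assignment */
            for (int i = 0; i < n; ++i) y[i] = 0.0;
        } else if (beta != 1.0) {                 /* general case: scale y */
            for (int i = 0; i < n; ++i) y[i] *= beta;
        }                                         /* beta == 1.0: plus assignment, y untouched */
        if (alpha == 0.0) return;                 /* exact compare: skip the O(n^2) work */
        for (int i = 0; i < n; ++i) {
            double sum = 0.0;
            for (int j = 0; j < n; ++j)
                sum += A[i * n + j] * x[j];
            y[i] += alpha * sum;
        }
    }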
P.S.:
There are certainly many cases where you don't want to check for "is exactly equal". For many people this might even be the only case they ever have to deal with. All I want to point out is that there are other cases too.
Although LAPACK is written in Fortran the logic is the same if you are using other programming languages for numerical software.
Oh dear lord please don't interpret the float bits as ints unless you're running on a P6 or earlier.
Even if it causes it to copy from vector registers to integer registers via memory, and even if it stalls the pipeline, it's the best way to do it that I've come across, insofar as it provides the most robust comparisons even in the face of floating point errors.
i.e. it is a price worth paying.
This seems to take care of most problems by combining relative and absolute error tolerance. Is the ULP approach better? If so, why?
ULPs are a direct measure of the "distance" between two floating point numbers. This means that they don't require you to conjure up the relative and absolute error values, nor do you have to make sure to get those values "about right". With ULPs, you can express directly how close you want the numbers to be, and the same threshold works just as well for small values as for large ones.
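A tiny illustration of what a ULP distance means, assuming the equal_float() from the earlier answer is in scope (nextafterf comes from <math.h>):

    #include <math.h>
    #include <stdio.h>

    /* assumes equal_float() from the answer above is visible here */
    int main(void)
    {
        float a = 1.0f;
        float b = nextafterf(a, 2.0f);          /* the representable float just above 1.0f */
        printf("%d\n", equal_float(a, b, 1));   /* 1: exactly one ULP apart */
        printf("%d\n", equal_float(a, b, 0));   /* 0: not bit-identical */
        return 0;
    }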
If you have floating point errors you have even more problems than this. Although I guess that is up to personal perspective.
Even if we do the numeric analysis to minimize accumulation of error, we can't eliminate it and we can be left with results that ought to be identical (if we were calculating with reals) but differ (because we cannot calculate with reals).
If you are looking for two floats to be equal, then they should be identically equal in my opinion. If you are facing a floating point rounding problem, perhaps a fixed point representation would suit your problem better.
If you are looking for two floats to be equal, then they should be identically equal in my opinion. If you are facing a floating point rounding problem, perhaps a fixed point representation would suit your problem better.
Perhaps we cannot afford the loss of range or performance that such an approach would inflict.
#DrPizza: I am no performance guru but I would expect fixed point operations to be quicker than floating point operations (in most cases).
#Craig H: Sure. I'm totally okay with it printing that. If a or b store money then they should be represented in fixed point. I'm struggling to think of a real-world example where such logic ought to be applied to floats. Things suitable for floats:
weights
ranks
distances
real-world values (like from an ADC)
For all these things, either you crunch the numbers and simply present the results to the user for human interpretation, or you make a comparative statement (even if such a statement is, "this thing is within 0.001 of this other thing"). A comparative statement like mine is only useful in the context of the algorithm: the "within 0.001" part depends on what physical question you're asking. That's my 0.02. Or should I say 2/100ths?
It rather depends on what you are doing with them. A fixed-point type with the same range as an IEEE float would be many many times slower (and many times larger).
Okay, but if I want an infinitesimally small bit-resolution then it's back to my original point: == and != have no meaning in the context of such a problem.
An int lets me express ~10^9 values (regardless of the range) which seems like enough for any situation where I would care about two of them being equal. And if that's not enough, use a 64-bit OS and you've got about 10^19 distinct values.
I can express values in a range of 0 to 10^200 (for example) in an int; it is just the bit-resolution that suffers (resolution would be greater than 1, but, again, no application has that sort of range as well as that sort of resolution).
To summarize, I think in all cases one is either representing a continuum of values, in which case != and == are irrelevant, or one is representing a fixed set of values, which can be mapped to an int (or another fixed-precision type).
An int lets me express ~10^9 values (regardless of the range), which seems like enough for any situation where I would care about two of them being equal. And if that's not enough, use a 64-bit OS and you've got about 10^19 distinct values.
I have actually hit that limit... I was trying to juggle times in ps and times in clock cycles in a simulation where you easily hit 10^10 cycles. No matter what I did, I very quickly overflowed the puny range of 64-bit integers... 10^19 is not as much as you think it is. Gimme 128-bit computing now!
Floats allowed me to get a solution to the mathematical issues, as the values overflowed with lots of zeros at the low end. So you basically had a decimal point floating around in the number with no loss of precision (I could live with the more limited number of distinct values in the mantissa of a float compared to a 64-bit int, but desperately needed the range!).
And then things were converted back to integers to compare, etc.
Annoying, and in the end I scrapped the entire attempt and just relied on floats and < and > to get the work done. Not perfect, but works for the use case envisioned.
If you are looking for two floats to be equal, then they should be identically equal in my opinion. If you are facing a floating point rounding problem, perhaps a fixed point representation would suit your problem better.
Perhaps I should explain the problem better. In C++, the following code:
#include <iostream>
using namespace std;

int main()
{
    float a = 1.0;
    float b = 0.0;
    for (int i = 0; i < 10; ++i)
    {
        b += 0.1;
    }
    if (a != b)
    {
        cout << "Something is wrong" << endl;
    }
    return 1;
}
prints the phrase "Something is wrong". Are you saying that it should?
Oh dear lord please don't interpret the float bits as ints unless you're running on a P6 or earlier.
it's the best way to do it that I've come across, insofar as it provides the most robust comparisons even in the face of floating point errors.
If you have floating point errors you have even more problems than this. Although I guess that is up to personal perspective.