Is divide slower than Multiply? - objective-c

Ok, this might sound like a strange question but it is an interesting one. I am coding for iOS and have been told that it is always best to multiply rather than divide values as it is faster.
I know that processors these days probably make this a non-issue, but my curiosity has gotten the better of me and I am wondering if anyone might be able to shed some light on this for me.
SO..... My question is this -
is:
player.position = ccp(player.contentSize.width / 2, winSize.height / 2);
slower than:
player.position = ccp(player.contentSize.width * 0.5, winSize.height * 0.5);

Yes, division is usually much slower than multiplication.
However, when dividing by a literal (or anything that can be determined to be a constant at compile time), the compiler will usually optimize the division away: integer divisions by constants become shift/multiply sequences, and a floating-point division by a power of two can be replaced by multiplication with the exact reciprocal.

On most processors, division is slower than multiplication for the same data types. In your example the multiplication is a floating point operation; if width and height are integer types, the result may be very different and may depend on both your processor and your compiler.
However, most compilers (certainly GCC) will translate an integer division by a constant power of two, as in your example, into a right shift where that is more efficient. That would generally be faster than either a multiply or a divide.
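For illustration, a hand-written C sketch of what such a transformation amounts to (an optimizing compiler typically performs these rewrites itself for constant divisors):

/* Integer division by a constant power of two -> a shift. */
unsigned half_u(unsigned w) { return w >> 1; }   /* equivalent to w / 2 */

/* Float division by a constant power of two -> multiplication by the exact
   reciprocal (safe because 0.5 is exactly representable in binary). */
float half_f(float w) { return w * 0.5f; }       /* equivalent to w / 2.0f */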

Multiplication can, up to a certain degree, be done in parallel, so if you can use either, use multiplication.

Related

MPFR - Loss of precision after addition

First, sorry if this question looks "silly", because I'm new to MPFR, LOL.
I have two mpfr_t variables with a precision of 1024 bits, and they have the values 0.2 and 0.06 stored in them.
But when I add these variables, things go wrong and the result (which is also an mpfr_t variable) has the value 0.2599999...
This is strange, because the MPFR library should maintain the precision (shouldn't it?).
Could you please help me with this? Thanks so much, so much in advance.
MPFR numbers are represented in binary (base 2). In this system, the only numbers that can be represented exactly have the form N·2^k, where N and k are integers. Neither 0.2 = 1/5 nor 0.06 = 3/50 has this form, so they are approximated with some small error. When you add these variables, you are seeing a consequence of this error (there may also be another error in the addition operation, since in binary these numbers have many nonzero digits, unlike in decimal).
This is the same issue as the one described in: Is floating point math broken?
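For illustration, a minimal C sketch that reproduces the effect (assuming MPFR >= 3.0 for the MPFR_RNDN name; build with -lmpfr -lgmp):

#include <stdio.h>
#include <mpfr.h>

int main(void) {
    mpfr_t a, b, sum;
    mpfr_inits2(1024, a, b, sum, (mpfr_ptr) 0);

    /* Even at 1024 bits, 0.2 and 0.06 cannot be stored exactly in binary:
       mpfr_set_str rounds them to the nearest representable values. */
    mpfr_set_str(a, "0.2", 10, MPFR_RNDN);
    mpfr_set_str(b, "0.06", 10, MPFR_RNDN);
    mpfr_add(sum, a, b, MPFR_RNDN);

    /* With ~308 significant decimal digits of precision, the discrepancy only
       shows up very far out; printing enough digits exposes it. Using
       mpfr_set_d(a, 0.2, MPFR_RNDN) instead would inherit the error of the
       double constant and show a difference after only ~17 digits. */
    mpfr_printf("%.320Rf\n", sum);

    mpfr_clears(a, b, sum, (mpfr_ptr) 0);
    return 0;
}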
EDIT:
To answer the question in the comment, "Is there a way to avoid this situation?": no, there is no way to avoid it in practice, except in very specific cases. For instance, if all your numbers (inputs and results of all intermediate operations) are decimal numbers representable with a small enough number of digits, you can use decimal arithmetic (but MPFR can't do that). Computer algebra systems may help in some cases. There's also iRRAM... I'll come back to it later.
However, there are ways to hide the issues caused by numerical errors. You need to estimate the maximum possible error on a computed value. With an error analysis, you can obtain rigorous bounds, but this may be difficult or take time to do. Note that rigorous bounds are pessimistic in general, but if you use arbitrary precision (e.g. with MPFR), this is less of an issue. The analysis can also be done dynamically with interval arithmetic (still pessimistic, often even more so). But perhaps a simple estimate is sufficient for you. Once you have an estimate of the maximum error:
For the output, choose the number of displayed digits so that the error is less than the weight of the last displayed digit.
For discontinuous functions (e.g. equality test, floor, ceil): if the distance between the computed value and a discontinuity point is less than the maximum error, assume that the actual value is equal to the discontinuity point. Note that this is just a heuristic, but if it fails (this may remain unnoticed and will probably invalidate your estimate), this means that you have not done your computations with enough precision.
Note: MPFR won't do that for you. But you can write code to take these rules into account.
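For instance, a hedged plain-C sketch of the first rule (the helper name and the digit formula are illustrative assumptions, not an MPFR feature):

#include <math.h>
#include <stdio.h>

/* Print only the digits that the error estimate justifies: choose the number
   of significant digits so that the weight of the last displayed digit still
   exceeds the maximum error (a heuristic, as discussed above). */
void print_with_error_bound(double value, double max_err) {
    if (max_err <= 0.0 || value == 0.0) {
        printf("%.17g\n", value);
        return;
    }
    int digits = (int)floor(log10(fabs(value) / max_err));
    if (digits < 1) digits = 1;
    printf("%.*g\n", digits, value);
}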
The iRRAM package, which is based on MPFR, can track the error in a rigorous way (like with interval arithmetic) and automatically redo all the computations in a higher precision if it notices that the accuracy is too low. However, if some mathematical result is a discontinuity point, iRRAM won't help. In particular, it cannot provide a rigorous equality test.
Finally, I suggest that you have a look at Goldberg's paper What Every Computer Scientist Should Know About Floating-Point Arithmetic, in particular the notion of cancellation.

Numerical Accuracy: to scale or not?

I am working on a n-body gravitational simulator that takes input and produces output in metric MKS units. This involves dealing with some very large numbers (like solar masses expressed in kilograms, semimajor axes of planetary orbits expressed in meters, and timescales of years expressed in seconds), which get multiplied by some very small numbers (notably, the gravitational constant, which is 6.67384e-11 in MKS units), and also the occasional very small number getting added to or subtracted from a very large number (mainly when summing up pairwise accelerations), which gets me concerned about the effects of rounding errors.
I've already taken the step of replacing all masses m by Gm (premultiplying by the gravitational constant), which significantly reduces the total number of multiplies, and makes the mass numbers much smaller, and that seems to have had a positive effect on both efficiency and accuracy, as judged by how well the simulator conserves energy.
I am wondering, however: is it potentially worth trying to do some internal re-scaling into different units to further minimize floating point errors? And if so, what kind of range (for double-precision floats) should I be trying to get my numbers centered on for maximum accuracy?
In general, if you want precise results in physically based simulation you don't want to use floats or doubles, since they have significant rounding problems and thus introduce errors into your simulation.
If you need or want to stick with floats/doubles, you should probably rescale around zero. The reason is that floating point representations have a higher "density" of values around this point and tend to have fewer toward the min/max ends (any diagram of floating-point value density illustrates this).
I would suggest that you change all values to integer-based (fixed-point) variables. This removes representation rounding errors (over/underflow can still happen!) and can speed up the calculation, because CPUs tend to be fast at integer operations. On GPUs it's basically the same, but that's another story all by itself...
But before you take such an effort to further improve your accuracy, I would strongly advise trying an arbitrary-precision number library. This may come with a performance loss, but it should be way easier and yield better results than rescaling your values.
Most numerical mathematicians come across this problem.
First, let me remind you that you cannot deal with numbers (or physical values) smaller than the machine epsilon for each calculation. Unfortunately, the epsilon depends on the number you are working around. You can try eps(a) for any value of a in MATLAB; as far as I remember, eps(1.0) ~= 2.2e-16, while eps(0) is far smaller, down in the denormal range.
That's why in numerical methods you avoid calculations that mix very differently scaled numbers: the smaller one is simply absorbed (it is below the epsilon of the larger value), and rounding errors are inevitable.
But what else do people do? When they encounter such physical problems, mathematicians analyse the problem theoretically before coding and make simplifications so that similarly scaled numbers are used.
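For example, a tiny double-precision demonstration of that absorption effect (the values are chosen so that the smaller term is below one ulp of the larger one):

#include <stdio.h>

int main(void) {
    double big   = 1.0e16;  /* ulp(1e16) is 2.0 ... */
    double small = 1.0;     /* ... so adding 1.0 changes nothing */

    printf("%d\n", big + small == big);    /* prints 1: the small term is lost */
    printf("%g\n", (big + small) - big);   /* prints 0 */
    return 0;
}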

Is multiplying y by 2^x and subtracting y faster than multiplying y by [(2^x)-1] directly?

I have a rather theoretical question:
Is multiplying y by 2^x and subtracting y faster than
multiplying y by [(2^x)-1] directly?
(y*(2^x) - y) vs (y*((2^x)-1))
I implemented a moving average filter on some data I get from a sensor. The basic idea is that I want to average the last 2^x values by taking the old average, multiplying that by [(2^x)-1], adding the new value, and dividing again by 2^x. But because I have to do this more than 500 times a second, I want to optimize it as much as possible.
I know that floating point numbers are represented in IEEE 754, and therefore multiplying and dividing by a power of 2 should be rather fast (basically just adjusting the exponent), but how do I do that most efficiently? Should I simply stick with multiplying by ((2^x)-1), is multiplying by 2.0f and subtracting y better, or could I even do it more efficiently by manipulating the exponent bits directly? And if that is possible, how would I implement it properly?
Thank you very much!
I don't think that multiplying a floating-point number by a power of two is faster in practice than a generic multiplication (though I agree that in theory it should be, assuming no overflow/underflow). Put differently, I don't think there is a hardware optimization for it.
Now, I assume that you have a modern processor, i.e. one with an FMA (fused multiply-add). In this case, (y*(2^x) - y) is faster if performed as fma(y, 2^x, -y) (the way you have to write the expression depends on your language and implementation): an FMA should be as fast as a multiplication in practice.
Note also that the speed may also depend on the context. For instance, I've observed on simple code that doing more work can surprisingly yield faster code! So, you need to test (on your real code, not with an arbitrary benchmark).
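As an illustration only, here is a sketch of the filter update from the question written with fma() as suggested (the window size 2^x = 16 is an assumed example value; build with -lm):

#include <math.h>

static double average = 0.0;

/* avg = (avg*(2^x - 1) + new) / 2^x, rewritten as
   avg = (avg*2^x - avg + new) * (1/2^x), so the multiply-and-subtract is a
   single fma() and the division by a power of two becomes an exact multiply. */
void update_average(double new_sample) {
    const double pow2x = 16.0;   /* 2^x with x = 4 (assumed) */
    average = fma(average, pow2x, new_sample - average) * (1.0 / pow2x);
}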

approximating log10[x^k0 + k1]

Greetings. I'm trying to approximate the function
Log10[x^k0 + k1], where .21 < k0 < 21, 0 < k1 < ~2000, and x is integer < 2^14.
k0 & k1 are constant. For practical purposes, you can assume k0 = 2.12, k1 = 2660. The desired accuracy is 5*10^-4 relative error.
This function is virtually identical to Log[x], except near 0, where it differs a lot.
I have already come up with a SIMD implementation that is ~1.15x faster than a simple lookup table, but would like to improve it if possible, which I think is very hard due to the lack of efficient instructions.
My SIMD implementation uses 16bit fixed point arithmetic to evaluate a 3rd degree polynomial (I use least squares fit). The polynomial uses different coefficients for different input ranges. There are 8 ranges, and range i spans (64)2^i to (64)2^(i + 1).
The rationale behind this is that the derivatives of Log[x] drop rapidly with x, meaning a polynomial will fit it more accurately, since polynomials are an exact fit for functions whose derivatives vanish beyond a certain order.
SIMD table lookups are done very efficiently with a single _mm_shuffle_epi8(). I use SSE's float to int conversion to get the exponent and significand used for the fixed point approximation. I also software pipelined the loop to get ~1.25x speedup, so further code optimizations are probably unlikely.
What I'm asking is if there's a more efficient approximation at a higher level?
For example:
Can this function be decomposed into functions with a limited domain like
log2((2^x) * significand) = x + log2(significand)
hence eliminating the need to deal with different ranges (table lookups); see the frexp sketch after this list. The main problem, I think, is that adding the k1 term kills all those nice log properties that we know and love, making it not possible. Or is it?
Iterative method? I don't think so, because the Newton iteration for log[x] is already a complicated expression.
Exploiting locality of neighboring pixels? If the range of the 8 inputs falls within the same approximation range, then I can look up a single coefficient instead of looking up separate coefficients for each element. Thus, I can use this as a fast common case and use a slower, general code path when it isn't. But for my data, the range needs to be ~2000 before this property holds 70% of the time, which doesn't seem to make this method competitive.
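(As a scalar sketch of the decomposition idea in the first bullet, using frexpf; as noted, this only helps for a pure log2, since the + k1 term breaks the identity.)

#include <math.h>

float log2_via_decomposition(float x) {
    int e;
    float m = frexpf(x, &e);      /* x = m * 2^e, with m in [0.5, 1) */
    return (float)e + log2f(m);   /* log2f(m) is the part to replace with one polynomial */
}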
Please, give me some opinion, especially if you're an applied mathematician, even if you say it can't be done. Thanks.
You should be able to improve on least-squares fitting by using Chebyshev approximation. (The idea is, you're looking for the approximation whose worst-case deviation in a range is least; least-squares instead looks for the one whose summed squared difference is least.) I would guess this doesn't make a huge difference for your problem, but I'm not sure -- hopefully it could reduce the number of ranges you need to split into, somewhat.
If there's already a fast implementation of log(x), maybe compute P(x) * log(x) where P(x) is a polynomial chosen by Chebyshev approximation. (Instead of trying to do the whole function as a polynomial approx -- to need less range-reduction.)
I'm an amateur here -- just dipping my toe in as there aren't a lot of answers already.
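If it helps, here is a minimal sketch of how a Chebyshev (or minimax) expansion is evaluated once the coefficients are known, using Clenshaw's recurrence; fitting the coefficients themselves (e.g. at Chebyshev nodes or with a Remez routine) is assumed to happen offline:

/* Evaluate sum_{k=0}^{n-1} c[k] * T_k(t) for t in [-1, 1].
   Map an input x in [lo, hi] to t = (2*x - lo - hi) / (hi - lo) first. */
double cheb_eval(const double *c, int n, double t) {
    double b1 = 0.0, b2 = 0.0;
    for (int k = n - 1; k >= 1; --k) {
        double tmp = b1;
        b1 = 2.0 * t * b1 - b2 + c[k];
        b2 = tmp;
    }
    return t * b1 - b2 + c[0];
}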
One observation:
You can find an expression for how large x needs to be as a function of k0 and k1, such that the term x^k0 dominates k1 enough for the approximation:
x^k0 + k1 ~= x^k0,
allowing you to approximately evaluate the function as k0*Log10[x].
This would take care of all x's above some value.
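To make "some value" concrete (a back-of-the-envelope estimate, not part of the original observation):

\[
\log_{10}(x^{k_0} + k_1) = k_0 \log_{10} x + \log_{10}\!\left(1 + \frac{k_1}{x^{k_0}}\right)
\approx k_0 \log_{10} x + \frac{k_1}{x^{k_0} \ln 10},
\]

so dropping k1 keeps the relative error below a tolerance ε once

\[
\frac{k_1}{x^{k_0} \ln 10} \le \varepsilon \, k_0 \log_{10} x,
\qquad\text{i.e.}\qquad
x \ge \left(\frac{k_1}{\varepsilon \, k_0 \, \ln 10 \, \log_{10} x}\right)^{1/k_0}.
\]

For k0 = 2.12, k1 = 2660 and the stated 5*10^-4 tolerance, iterating this (x appears on both sides) settles somewhere around x ~ 450, which would cover most of the 0..2^14 input range; worth re-checking numerically before relying on it.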
I recently read how the sRGB model compresses physical tristimulus values into stored RGB values.
It is basically very similar to the function I am trying to approximate, except that it is defined piecewise:
k0 x, x < 0.0031308
k1 x^0.417 - k2 otherwise
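For reference, the concrete sRGB encoding this piecewise form corresponds to (constants taken from the sRGB specification) is just:

#include <math.h>

/* Linear segment near zero, power-law segment elsewhere: only two ranges. */
float srgb_encode(float linear) {
    return (linear < 0.0031308f)
        ? 12.92f * linear
        : 1.055f * powf(linear, 1.0f / 2.4f) - 0.055f;
}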
I was told the constant addition in Log[x^k0 + k1] was there to make the beginning of the function more linear. But that can easily be achieved with a piecewise approximation instead, which would make the approximation a lot more "uniform" - only 2 approximation ranges. This should be cheaper to compute, since there is no longer any need to compute an approximation range index (integer log) and do a SIMD coefficient lookup.
For now, I conclude this will be the best approach, even though it doesn't approximate the function precisely. The hard part will be proposing this change and convincing people to use it.

Are there compilers that optimise floating point operations for accuracy (as opposed to speed)?

We know that compilers are getting better and better at optimising our code and making it run faster, but my question is: are there compilers that can optimise floating point operations to ensure greater accuracy?
For example, a basic rule is to perform multiplications before additions. This is because multiplication and division of floating point numbers do not introduce inaccuracies as great as those of addition and subtraction, but they can magnify inaccuracies already introduced by addition and subtraction, so in many cases they should be done first.
So a floating point operation like
y = x*(a + b); // faster but less accurate
Should be changed to
y = x*a + x*b; // slower but more accurate
Are there any compilers that will optimise for improved floating point accuracy at the expense of speed, as I showed above? Or is the main concern of compilers speed, without regard to the accuracy of floating point operations?
Thanks
Update: The selected answer showed a very good example where this type of optimisation would not work, so it wouldn't be possible for the compiler to know beforehand which is the more accurate way to evaluate y. Thanks for the counter example.
Your premise is faulty. x*(a + b) is (in general) no less accurate than x*a + x*b. In fact, it will often be more accurate, because it performs only two floating point operations (and therefore incurs only two rounding errors), whereas the latter performs three operations.
If you know something about the expected distribution of values for x, a, and b a priori, then you could make an informed decision, but compilers almost never have access to that type of information.
That aside, what if the person writing the program actually meant x*(a+b) and specifically wanted exactly the roundings caused by that particular sequence of operations? This sort of thing is actually pretty common in high-quality numerical algorithms.
Better to do what the programmer wrote, not what you think he might have intended.
Edit -- An example to illustrate a case where the transformation you suggested results in a catastrophic loss of accuracy: suppose
x = 3.1415926535897931
a = 1.0e15
b = -(1.0e15 - 1.0)
Then, evaluating in double we get:
x*(a + b) = 3.1415926535897931
but
x*a + x*b = 3.0
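For anyone who wants to reproduce it, a minimal C program for this example (printed with 17 significant digits):

#include <stdio.h>

int main(void) {
    double x = 3.1415926535897931;
    double a = 1.0e15;
    double b = -(1.0e15 - 1.0);   /* a + b is exactly 1.0 in double */

    printf("x*(a + b) = %.17g\n", x * (a + b));   /* 3.1415926535897931 */
    printf("x*a + x*b = %.17g\n", x * a + x * b); /* 3 */
    return 0;
}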
Compilers typically "optimize" for accuracy over speed, where accuracy is defined as exact implementation of the IEEE 754 standard. Whereas integer operations can be reordered in any way that doesn't cause overflow, FP operations need to be performed exactly as the programmer specified. This may sacrifice numerical accuracy (ordinary C compilers are not equipped to optimize for that), but it faithfully implements what the programmer asked for.
A programmer who is sure he hasn't manually optimized for accuracy may enable compiler options like GCC's -funsafe-math-optimizations and -ffinite-math-only to possibly extract extra speed. But usually there isn't much gain.
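As a small illustration (the function name is just an example): with default options, GCC and Clang keep the written association of an expression like the one below; only under options such as -funsafe-math-optimizations (or the umbrella -ffast-math) may they reassociate it.

double sum3(double a, double b, double c) {
    /* Stays as (a + b) + c unless relaxed-math options are enabled. */
    return (a + b) + c;
}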
No, there isn't. Stephen Canon gives some good reasons why this would be a stupid idea, and he's correct; so you won't find a compiler that does this.
If you as the programmer have some knowledge about the ranges of numbers you're manipulating, you can use parentheses, temporary variables and similar constructs to strongly hint the compiler about how you want things done.