Why do we use mid = low + (high - low)/2; but not mid = (low/2) + (high/2)? - binary-search

In binary search, we use mid = low + (high - low)/2 instead of (low + high)/2 to avoid overflow. However, can't we calculate low/2 and high/2 separately and then sum them up, rather than using low + ((high - low)/2)?
P.S. If low + (high - low)/2 is more efficient, then why is it so?

The reason we don't use middle = high/2 + low/2 is that integer division truncates each term separately, which gives the wrong result. Let's say both low and high are 3; then middle = 3/2 + 3/2 = 1 + 1 = 2 instead of 3, which actually is quite bad. :-)

Can't we calculate low/2 and high/2 separately and then sum them up rather than using low+((high-low)/2)?
Sure.
If low+(high-low)/2 is more efficient, then why is it so?
On a lot of hardware, dividing is slower than adding and subtracting, so dividing twice may be slower than the method that divides once and uses an extra addition and subtraction.
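To make both failure modes concrete, here is a small self-contained snippet (the values are mine, chosen for illustration):

#include <climits>
#include <cstdio>

int main() {
    int low = 3, high = 3;
    std::printf("%d\n", low/2 + high/2);          // prints 2, not 3: truncation
    low = INT_MAX - 1; high = INT_MAX;
    // int bad = (low + high) / 2;                // signed overflow: undefined behaviour
    std::printf("%d\n", low + (high - low)/2);    // prints 2147483646, as expected
}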

Related

Faster calculation for large amounts of data / inner loop

So, I am programming a simple Mandelbrot renderer.
My inner loop (which is executed up to ~100,000,000 times each time I draw on screen) looks like this:
Complex position = {re, im};
Complex z = {0.0, 0.0};
uint32_t it = 0;

for (; it < maxIterations; it++)
{
    //Square z
    double old_re = z.re;
    z.re = z.re*z.re - z.im*z.im;
    z.im = 2*old_re*z.im;
    //Add c
    z.re = z.re + position.re;
    z.im = z.im + position.im;
    //Exit condition (mod(z) > 5)
    if (sqrt(z.re*z.re + z.im*z.im) > 5.0f)
        break;
}
//Color in the pixel according to value of 'it'
Just some very simple calculations. This takes between 0.5 and a couple of seconds, depending on the zoom and so on, but I need it to be much faster to enable (almost) smooth scrolling.
My question is: What is my best bet to achieve the maximum possible calculation speed?
OpenCl to use the GPU? Coding it in assembly? Dividing the image into small pieces and dispatch the calculation of each piece on another thread? A combination of those?
Any help is appreciated!
I have written a Mandelbrot set renderer several times... and here are the things that you should keep in mind...
The points that take the longest are the ones that never escape and use up all the iterations.
So you can carve out a region in the middle with a few rectangles and check that first; points inside the main cardioid and the period-2 bulb never escape, and both regions have cheap closed-form membership tests.
You can also cache the last 20 or 30 points in a rolling buffer; if you ever compute a point that is already in the buffer, you have found a cycle and the orbit will never escape.
You can use a more general escape test that doesn't require a square root: once z.re*z.re + z.im*z.im exceeds 4 (that is, |z| > 2), the orbit races out of control and can be considered escaped.
But you can also break the work up, because each point is independent: make a separate thread or GCD dispatch (or whatever) for each row or quadrant... it is a very easy problem to divide up and run in parallel, as in the sketch below.
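Here is a minimal sketch of that per-row split using std::thread (renderRow is a stand-in for the escape-time loop over one row of pixels, not the questioner's actual function):

#include <cstdint>
#include <thread>
#include <vector>

// stand-in: the real body would be the loop from the question, per pixel
void renderRow(uint32_t y) { (void)y; }

void renderParallel(uint32_t height) {
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;                           // fallback if unknown
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t)
        pool.emplace_back([=] {
            // interleave rows so threads near the set get similar work
            for (uint32_t y = t; y < height; y += n)
                renderRow(y);
        });
    for (auto& th : pool) th.join();
}

int main() { renderParallel(1080); }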
In addition to the comments by @Grady Player, you could start just by optimising your code:
//Add c
z.re += position.re;
z.im += position.im;
//Exit condition (mod(z) > 5, tested without the sqrt)
if (z.re*z.re + z.im*z.im > 25.0)
    break;
The compiler may optimise the first, but the second (dropping the sqrt from the exit test) will certainly help.
Also, why are you coding your own complex type rather than using complex.h?
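For reference, a sketch of the same iteration with std::complex, the C++ counterpart of complex.h (my code, not the questioner's); std::norm returns |z| squared, so the bailout needs no square root:

#include <complex>
#include <cstdint>
#include <cstdio>

uint32_t iterate(std::complex<double> c, uint32_t maxIterations) {
    std::complex<double> z(0.0, 0.0);
    uint32_t it = 0;
    for (; it < maxIterations; ++it) {
        z = z * z + c;
        if (std::norm(z) > 25.0)   // |z| > 5, no sqrt needed
            break;
    }
    return it;
}

int main() { std::printf("%u\n", iterate({-0.75, 0.1}, 1000)); }

Be aware that std::complex multiplication may carry extra NaN/infinity handling unless you compile with something like -ffast-math, so measure before switching.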

Precise Multiplication

First post!
I have a problem with a program that I'm writing for a numerical simulation, specifically with the multiplication. Basically, I am trying to calculate:
result1 = (a + b)*c
and this loops thousands of times. I need to expand this code to be
result2 = a*c + b*c
However, when I do that I start to get significant errors in my results. I used a high-precision library, which did improve things, but the simulation ran horribly slowly (it took 50 times longer), so that really isn't a practical solution. From this I realised that it isn't really the precision of the variables a, b, and c that is hurting me, but something in the way the arithmetic is done.
My question is: how can I multiply out these brackets in way so that result1 = result2?
Thanks.
SOLVED!!!!!!!!!
It was a problem with the addition. So I reordered the terms and applied Kahan addition by writing the following piece of code:
double Modelsimple::sum(double a, double b, double c, double d) {
    //reorder the variables from smallest to greatest
    double tempone = (a<b ? a : b);
    double temptwo = (c<d ? c : d);
    double tempthree = (a>b ? a : b);
    double tempfour = (c>d ? c : d);
    double one = (tempone<temptwo ? tempone : temptwo);
    double four = (tempthree>tempfour ? tempthree : tempfour);
    double tempfive = (tempone>temptwo ? tempone : temptwo);
    double tempsix = (tempthree<tempfour ? tempthree : tempfour);
    double two = (tempfive<tempsix ? tempfive : tempsix);
    double three = (tempfive>tempsix ? tempfive : tempsix);
    //kahan addition
    double total = one;
    double tempsum = one + two;
    double error = (tempsum - one) - two;
    total = tempsum;
    // first iteration complete
    double tempadd = three - error;
    tempsum = total + tempadd;
    error = (tempsum - total) - tempadd;
    total = tempsum;
    //second iteration complete
    tempadd = four - error;
    total += tempadd;
    return total;
}
This gives me results that are as close to the precise answer as makes no difference. However, in a fictitious simulation of a mine collapse, the code with the Kahan addition takes 2 minutes whereas the high precision library takes over a day to finish!!
Thanks to all the help here. This problem was really a pain in the a$$.
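For what it's worth, here is a free-standing check of the idea (my code, not the simulation's): the same sort-then-compensate scheme as sum() above, next to naive left-to-right addition, with terms chosen so the exact answer is 2.0:

#include <algorithm>
#include <cstdio>

double sortedKahan(double a, double b, double c, double d) {
    double v[4] = {a, b, c, d};
    std::sort(v, v + 4);                  // smallest to greatest
    double total = 0.0, error = 0.0;
    for (double x : v) {                  // classic Kahan loop
        double y = x - error;
        double t = total + y;
        error = (t - total) - y;
        total = t;
    }
    return total;
}

int main() {
    // naive order loses both 1.0 terms entirely
    std::printf("naive : %g\n", 1e16 + 1.0 + 1.0 + -1e16);            // 0
    std::printf("kahan : %g\n", sortedKahan(1e16, 1.0, 1.0, -1e16));  // 2
}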
I am presuming your numbers are all floating point values.
You should not expect result1 to equal result2 due to limitations in the scale of the numbers and precision in the calculations. Which one to use will depend upon the numbers you are dealing with. More important than result1 and result2 being the same is that they are close enough to the real answer (eg that you would have calculated by hand) for your application.
Imagine that a and b are both very large, and c much less than 1. (a + b) might overflow so that result1 will be incorrect. result2 would not overflow because it scales everything down before adding.
There are also problems with loss of precision when combining numbers of widely differing size, as the smaller number has significant digits reduced when it is converted to use the same exponent as the larger number it is added to.
If you give some specific examples of a, b and c which are causing you issues it might be possible to suggest further improvements.
I have been using the following program as a test, using values for a and b between 10^5 and 10^10, and c around 10^-5, but so far cannot find any differences.
Thinking about the storage of 10^5 vs 10^10: they need about 17 bits vs 34 bits, so you may lose about 17 bits of precision when you add a and b together in result1.
But multiplying them by the same value c essentially reduces the exponent and leaves the significand the same, so it should also lose about 17 bits of precision in result2.
A double significand usually stores 53 bits, so I suspect your results will still retain about 36 bits, or roughly 10 decimal digits of precision.
#include <stdio.h>

int main()
{
    double a = 13584.9484893449;
    double b = 43719848748.3911;
    double c = 0.00001483394434;
    double result1 = (a+b)*c;
    double result2 = a*c + b*c;
    double diff = result1 - result2;
    printf("size of double is %zu\n", sizeof(double));
    printf("a=%f\nb=%f\nc=%f\nr1=%f\nr2=%f\ndiff=%f\n", a, b, c, result1, result2, diff);
    return 0;
}
However I do find a difference if I change all the doubles to float and use c=0.00001083394434. Are you sure that you are using 64 (or 80) bit doubles when doing your calculations?
Usually "loss of precision" in these kinds of calculations can be traced to "poorly formulated problem". For example, when you have to add a series of numbers of very different sizes, you will get a different answer depending on the order in which you sum them. The problem is even more acute when you subtract numbers.
The best approach in your case above is to look not simply at this one line, but at the way that result1 is used in your subsequent calculations. In principle, an engineering calculation should not require precision in the final result beyond about three significant figures; but in many instances (for example, finite element methods) you end up subtracting two numbers that are very similar in magnitude - in which case you may lose many significant figures and get a meaningless answer. Given that you are talking about "materials properties" and "strain", I am suspecting that is actually at the heart of your problem.
One approach is to look at places where you compute a difference and see if you can reformulate the problem (for example, if you can differentiate your function, you can replace Y(x+dx) - Y(x) with dx * Y'(x)).
There are many excellent references on the subject of numerical stability. It is a complicated subject. Just "throwing more significant figures at the problem" is almost never the best solution.
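As a concrete instance of that reformulation idea (my example): sqrt(x+1) - sqrt(x) subtracts two nearly equal numbers for large x, while multiplying by the conjugate avoids the subtraction entirely:

#include <cmath>
#include <cstdio>

int main() {
    double x = 1e15;
    double direct = std::sqrt(x + 1) - std::sqrt(x);          // cancellation
    double stable = 1.0 / (std::sqrt(x + 1) + std::sqrt(x));  // no subtraction
    std::printf("direct: %.17g\nstable: %.17g\n", direct, stable);
}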

binary search middle value calculation

The following is the pseudocode I got from a TopCoder tutorial about binary search
binary_search(A, target):
    lo = 1, hi = size(A)
    while lo <= hi:
        mid = lo + (hi-lo)/2
        if A[mid] == target:
            return mid
        else if A[mid] < target:
            lo = mid+1
        else:
            hi = mid-1
    // target was not found
Why do we calculate the middle value as mid = lo + (hi - lo) / 2? What's wrong with (hi + lo) / 2?
I have a slight idea that it might be to prevent overflow, but I'm not sure; perhaps someone can explain it to me, and whether there are other reasons behind this.
Although this question is 5 years old, there is a great article on the Google Research blog which explains the problem and the solution in detail, and it is worth sharing.
It's worth mentioning that the current implementation of binary search in Java does not use the mid = lo + (hi - lo) / 2 calculation; instead it uses the faster and clearer alternative with the unsigned right shift operator:
int mid = (low + high) >>> 1;
Yes, (hi + lo) / 2 may overflow. This was an actual bug in Java's binary search implementation.
No, there are no other reasons for this.
From later on in the same tutorial:
"You may also wonder as to why mid is calculated using mid = lo + (hi-lo)/2 instead of the usual mid = (lo+hi)/2. This is to avoid another potential rounding bug: in the first case, we want the division to always round down, towards the lower bound. But division truncates, so when lo+hi would be negative, it would start rounding towards the higher bound. Coding the calculation this way ensures that the number divided is always positive and hence always rounds as we want it to. Although the bug doesn't surface when the search space consists only of positive integers or real numbers, I've decided to code it this way throughout the article for consistency."
It is indeed possible for (hi+lo) to overflow an integer. In the improved version it may seem pointless to subtract lo from hi and then add it back after the division, but there is a reason: computing hi-lo cannot overflow, and it has the same parity as hi+lo, so the truncated quotient (hi-lo)/2 plus lo gives exactly the same result as (hi+lo)/2 would.
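A quick brute-force check of that argument (my snippet, illustrative only):

#include <cassert>

int main() {
    // for non-negative bounds the two formulas agree exactly
    for (int lo = 0; lo <= 200; ++lo)
        for (int hi = lo; hi <= 200; ++hi)
            assert(lo + (hi - lo) / 2 == (lo + hi) / 2);
}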
Let us assume that the array we're searching in is of length INT_MAX.
Hence initially:
high = INT_MAX
low = 0
In the first iteration, we notice that the target element is greater than the middle element and so we shift the start index to mid as
low = mid + 1
In the next iteration, mid is calculated as (high + low)/2, which essentially translates to
(INT_MAX + low) / 2, where low is now about half of INT_MAX.
The first part of this operation, (high + low), overflows because the sum exceeds the maximum int value, INT_MAX.
Go has no unsigned right shift operator (>>>), so to avoid integer overflow while calculating the middle value in Go you can write it like this:
mid := int(uint(lo+hi) >> 1)
The "why" has been answered, but it is not easy to see why the solution works.
Let's say high is 10 and low is 5, and assume 10 is the highest value the integer type can hold (10 + 1 would overflow).
Instead of computing (10+5)/2 = 7 (where 10 + anything would overflow),
we compute 5 + (10-5)/2 = 5 + 2 = 7.

Reducing Number of Registers Used in CUDA Kernel

I have a kernel which uses 17 registers; reducing it to 16 would bring me 100% occupancy. My question is: are there methods that can be used to reduce the number of registers used, excluding completely rewriting my algorithms in a different manner? I have always kind of assumed the compiler is a lot smarter than I am, so for example I often use extra variables for clarity's sake alone. Am I wrong in this thinking?
Please note: I do know about the --maxrregcount flag, but the use of local memory would be more detrimental than a 25% lower occupancy (I should test this).
Occupancy can be a little misleading and 100% occupancy should not be your primary target. If you can get fully coalesced accesses to global memory then on a high end GPU 50% occupancy will be sufficient to hide the latency to global memory (for floats, even lower for doubles). Check out the Advanced CUDA C presentation from GTC last year for more information on this topic.
In your case, you should measure performance both with and without maxrregcount set to 16. The latency to local memory should be hidden as a result of having sufficient threads, assuming you don't random access into local arrays (which would result in non-coalesced accesses).
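For that measurement, the relevant switch is nvcc's register cap, something like:

nvcc --maxrregcount 16 kernel.cu -o kernel

and then compare against the same build without the flag.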
To answer your specific question about reducing registers, post the code for more detailed answers! Understanding how compilers work in general may help, but remember that nvcc is an optimising compiler with a large parameter space, so minimising register count has to be balanced with overall performance.
It's really hard to say; the nvcc compiler is not very smart, in my opinion.
You can try the obvious things: using short instead of int, passing and using variables by reference (e.g. &variable), unrolling loops, using templates (as in C++). If you have divisions or transcendental functions applied in sequence, try to turn them into a loop. Try to get rid of conditionals, possibly replacing them with redundant computations.
If you post some code, maybe you will get specific answers.
Utilizing shared memory as a cache may lead to less register usage and prevent register spilling to local memory.
Suppose the kernel calculates some values that are used by all of the threads:
__global__ void kernel(...) {
    int idx = threadIdx.x + blockDim.x * blockIdx.x;
    int id0 = blockDim.x * blockIdx.x;
    int reg = id0 * ...;
    int reg0 = reg * a / x + y;
    ...
    int val = reg + reg0 + 2 * idx;
    output[idx] = val > 10;
}
So, instead of keeping reg and reg0 in registers and letting them possibly spill to local memory (global memory), we may use shared memory:
__global__ void kernel(...) {
    __shared__ int cache[10];
    int idx = threadIdx.x + blockDim.x * blockIdx.x;
    if (threadIdx.x == 0) {
        int id0 = blockDim.x * blockIdx.x;
        cache[0] = id0 * ...;
        cache[1] = cache[0] * a / x + y;
    }
    __syncthreads();
    ...
    int val = cache[0] + cache[1] + 2 * idx;
    output[idx] = val > 10;
}
Take a look at this paper for further information.
It is not generally a good approach to minimize register pressure. The compiler does a good job of optimizing overall projected kernel performance, and it takes lots of factors into account, including registers.
How can reducing registers cause slower speed?
Most probably the compiler then has to spill the data that no longer fits in registers into "local" memory, which is physically the same as global memory, and thus very slow.
For optimization purposes I would recommend using keywords like const, volatile and so on where appropriate, to help the compiler in the optimization phase.
Anyway, it is usually not tiny issues like registers that make CUDA kernels run slow. I'd recommend optimizing work with global memory instead: the access pattern, caching in texture memory if possible, and transactions over the PCIe bus.
The increase in instruction count when lowering register usage has a simple explanation. The compiler may be using registers to store the results of operations that are used more than once in your code, to avoid recalculating those values; when forced to use fewer registers, it instead recalculates the values that would otherwise have been kept in registers.

What are your favorite low-level code optimization tricks? [closed]

I know that you should only optimize things when it is deemed necessary. But if it is deemed necessary, what are your favorite low-level (as opposed to algorithmic-level) optimization tricks?
For example: loop unrolling.
gcc -O2
Compilers do a much better job of it than you can.
Picking a power of two for filters, circular buffers, etc.
So very, very convenient.
-Adam
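For example, a minimal ring-buffer sketch (names are mine) where the power-of-two capacity turns the modulo into a single AND:

#include <cstddef>
#include <cstdio>

constexpr std::size_t CAP = 1024;   // must be a power of two
static_assert((CAP & (CAP - 1)) == 0, "CAP must be a power of two");

struct Ring {
    int data[CAP];
    std::size_t head = 0;
    void push(int x) {
        data[head & (CAP - 1)] = x;  // same as head % CAP, with no division
        ++head;
    }
};

int main() {
    Ring r;
    for (int i = 0; i < 3000; ++i) r.push(i);
    std::printf("%d\n", r.data[0]);  // slot 0 last held 2048
}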
Why, bit twiddling hacks, of course!
One of the most useful in scientific code is to replace pow(x,4) with x*x*x*x. Pow is almost always more expensive than multiplication. This is followed by
for (int i = 0; i < N; i++)
{
    z += x/y;
}

to

double denom = 1/y;
for (int i = 0; i < N; i++)
{
    z += x*denom;
}
But my favorite low-level optimization is figuring out which calculations can be moved out of a loop. It's always faster to do a calculation once rather than N times. Depending on your compiler, some of these may be done for you automatically.
Inspect the compiler's output, then try to coerce it to do something faster.
I wouldn't necessarily call it a low-level optimization, but I have saved orders of magnitude more cycles through judicious application of caching than through all my applications of low-level tricks combined. Many of these methods are application specific. Some examples:
Having an LRU cache of database queries (or any other IPC-based request); a sketch follows this list.
Remembering the last failed database query and returning a failure if re-requested within a certain time frame.
Remembering your location in a large data structure to ensure that if the next request is for the same node, the search is free.
Caching calculation results to prevent duplicate work. In addition to more complex scenarios, this is often found in if or for statements.
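The promised LRU sketch; the std::string key and value types here are placeholders for whatever the query and its result look like, not any particular database API:

#include <cstddef>
#include <list>
#include <optional>
#include <string>
#include <unordered_map>
#include <utility>

class LruCache {
    typedef std::list<std::pair<std::string, std::string>> List;
    std::size_t cap_;
    List items_;                                       // front = most recent
    std::unordered_map<std::string, List::iterator> index_;
public:
    explicit LruCache(std::size_t cap) : cap_(cap) {}

    std::optional<std::string> get(const std::string& key) {
        auto it = index_.find(key);
        if (it == index_.end()) return std::nullopt;        // miss: caller hits the DB
        items_.splice(items_.begin(), items_, it->second);  // mark as most recent
        return it->second->second;
    }

    void put(const std::string& key, const std::string& value) {
        auto it = index_.find(key);
        if (it != index_.end()) {                      // replace an existing entry
            items_.erase(it->second);
            index_.erase(it);
        }
        items_.emplace_front(key, value);
        index_[key] = items_.begin();
        if (items_.size() > cap_) {                    // evict the least recent
            index_.erase(items_.back().first);
            items_.pop_back();
        }
    }
};

int main() {
    LruCache cache(2);
    cache.put("q1", "rows for q1");
    cache.put("q2", "rows for q2");
    cache.get("q1");                 // q1 is now most recently used
    cache.put("q3", "rows for q3");  // evicts q2
}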
CPUs and compilers are constantly changing. Whatever low-level code trick made sense three CPU generations ago with a different compiler may actually be slower on the current architecture, and there is a good chance the trick will confuse whoever maintains the code in the future.
++i can be faster than i++, because it avoids creating a temporary.
Whether this still holds for modern C/C++/Java/C# compilers, I don't know. It might well be different for user-defined types with overloaded operators, whereas in the case of simple integers it probably doesn't matter.
But I've come to like the syntax... it reads like "increment i" which is a sensible order.
Using template metaprogramming to calculate things at compile time instead of at run-time.
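A small illustration of the idea (mine, using C++14 constexpr, which covers many of the cases classic template recursion was used for): a lookup table the compiler computes at build time, so nothing runs at program start-up:

#include <cstdio>

struct Table { int v[256]; };

constexpr Table makeSquares() {
    Table t{};
    for (int i = 0; i < 256; ++i) t.v[i] = i * i;
    return t;
}

constexpr Table squares = makeSquares();
static_assert(squares.v[12] == 144, "evaluated at compile time");

int main() { std::printf("%d\n", squares.v[7]); }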
Years ago with a not-so-smart compiler, I got great mileage from function inlining, walking pointers instead of indexing arrays, and iterating down to zero instead of up to a maximum.
When in doubt, a little knowledge of assembly will let you look at what the compiler is producing and attack the inefficient parts (in your source language, using structures friendlier to your compiler.)
Precalculating values.
For instance, instead of sin(a) or cos(a), if your application doesn't need angles to be very precise, maybe you can represent angles in 1/256 of a circle and create arrays of floats sine[] and cosine[], precalculating the sin and cos of those angles.
And, if you need a vector at some angle of a given length frequently, you might precalculate all those sines and cosines already multiplied by that length.
Or, to put it more generally, trade memory for speed.
Or, even more generally, "All programming is an exercise in caching" -- Terje Mathisen
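A sketch of those tables (the init function and constants are mine):

#include <cmath>
#include <cstdio>

// 256 steps per full circle; an angle is a byte index, sin/cos become loads
const double TWO_PI = 6.28318530717958647692;
static float sine[256], cosine[256];

void initTables() {
    for (int i = 0; i < 256; ++i) {
        sine[i]   = (float)std::sin(TWO_PI * i / 256.0);
        cosine[i] = (float)std::cos(TWO_PI * i / 256.0);
    }
}

int main() {
    initTables();
    unsigned char ang = 64;    // a quarter turn
    float len = 10.0f;         // the length factor could be baked in too
    std::printf("%f %f\n", cosine[ang] * len, sine[ang] * len);
}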
Some things are less obvious. For instance, traversing a two-dimensional array, you might do something like

for (x = 0; x < maxx; x++)
    for (y = 0; y < maxy; y++)
        do_something(a[x][y]);

You might find the processor cache likes it better if you do:

for (y = 0; y < maxy; y++)
    for (x = 0; x < maxx; x++)
        do_something(a[x][y]);
or vice versa.
Don't do loop unrolling. Don't do Duff's device. Make your loops as small as possible, anything else inhibits x86 performance and gcc optimizer performance.
Getting rid of branches can be useful, though - so getting rid of loops completely is good, and those branchless math tricks really do work. Beyond that, try never to go out of the L2 cache - this means a lot of precalculation/caching should also be avoided if it wastes cache space.
And, especially for x86, try to keep the number of variables in use at any one time down. It's hard to tell what compilers will do with that kind of thing, but usually having less loop iteration variables/array indexes will end up with better asm output.
Of course, this is for desktop CPUs; a slow CPU with fast memory access can precalculate a lot more, but in these days that might be an embedded system with little total memory anyway…
I've found that changing from a pointer to indexed access may make a difference; the compiler has different instruction forms and register usages to choose from. Vice versa, too. This is extremely low-level and compiler dependent, though, and only good when you need that last few percent.
E.g.
for (i = 0; i < n; ++i)
    *p++ = ...; // some complicated expression

vs.

for (i = 0; i < n; ++i)
    p[i] = ...; // some complicated expression
Optimizing cache locality - for example when multiplying two matrices that don't fit into cache.
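One common instance is reordering the loops so that both matrices are walked row by row. A sketch (my code), assuming row-major storage and a zeroed output matrix C:

// C = A * B for row-major n x n matrices; C must be zeroed first.
// The naive i-j-k order strides down columns of B; i-k-j walks B and C
// contiguously, which is far friendlier to the cache for large n.
void matmul(const double* A, const double* B, double* C, int n) {
    for (int i = 0; i < n; ++i)
        for (int k = 0; k < n; ++k) {
            double a = A[i * n + k];      // one load, reused across the row
            for (int j = 0; j < n; ++j)
                C[i * n + j] += a * B[k * n + j];
        }
}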
Allocating with new on a pre-allocated buffer using C++'s placement new.
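A minimal placement-new sketch (the type and pool are illustrative):

#include <cstddef>
#include <new>

struct Particle { float x, y, vx, vy; };

// one up-front buffer; objects are constructed into it, not on the heap
alignas(Particle) static unsigned char pool[1000 * sizeof(Particle)];

Particle* makeParticle(std::size_t i) {
    // construct in place; no allocation happens here
    return new (pool + i * sizeof(Particle)) Particle{0, 0, 0, 0};
}

int main() {
    Particle* p = makeParticle(0);
    p->~Particle();   // placement-new'd objects are destroyed explicitly
}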
Counting down a loop. It's cheaper to compare against 0 than N:
for (i = N; --i >= 0; ) ...
Shifting and masking by powers of two is cheaper than division and remainder (/ and %):

#define WORD_LOG 5
#define SIZE (1 << WORD_LOG)
#define MASK (SIZE - 1)

uint32_t bits[K];

void set_bit(unsigned i)
{
    bits[i >> WORD_LOG] |= (1 << (i & MASK));
}
Edit
(i >> WORD_LOG) == (i / SIZE) and
(i & MASK) == (i % SIZE)
because SIZE is 32, i.e. 2^5.
Jon Bentley's Writing Efficient Programs is a great source of low- and high-level techniques -- if you can find a copy.
Eliminating branches (if/elses) by using boolean math:

if (x == 0)
    x = 5;

// becomes:
x += (x == 0) * 5;

// if '5' was a base-2 number, let's say 4:
x += (x == 0) << 2;

// divide by 2 if flag is set
sum >>= (blendMode == BLEND);
This REALLY speeds things up, especially when those ifs are in a loop or somewhere that is being called a lot.
The one from Assembler:
xor ax, ax
instead of:
mov ax, 0
Classical optimization for program size and performance.
In SQL, if you only need to know whether any data exists or not, don't bother with COUNT(*):
SELECT 1 FROM table WHERE some_primary_key = some_value
If your WHERE clause is likely to return multiple rows, add a LIMIT 1 too.
(Remember that databases can't see what your code's doing with their results, so they can't optimise these things away on their own!)
Recycling the frame-pointer all of a sudden
Pascal calling-convention
Rewriting the stack frame for tail call optimization (although it sometimes messes with the above)
Using vfork() instead of fork() before exec()
And one I am still looking for, an excuse to use: data driven code-generation at runtime
Liberal use of __restrict to eliminate load-hit-store stalls.
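A sketch of what that buys (names are illustrative; __restrict is a common compiler extension, spelled restrict in C99):

// dst and src are promised never to alias, so the compiler may keep
// src values in registers across the stores instead of reloading src[i]
// after every write (the load-hit-store stall).
void scale(float* __restrict dst, const float* __restrict src,
           int n, float k) {
    for (int i = 0; i < n; ++i)
        dst[i] = src[i] * k;
}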
Rolling up loops.
Seriously, the last time I needed to do anything like this was in a function that took 80% of the runtime, so it was worth trying to micro-optimize if I could get a noticeable performance increase.
The first thing I did was to roll up the loop. This gave me a very significant speed increase. I believe this was a matter of cache locality.
The next thing I did was add a layer of indirection, and put some more logic into the loop, which allowed me to only loop through the things I needed. This wasn't as much of a speed increase, but it was worth doing.
If you're going to micro-optimize, you need to have a reasonable idea of two things: the architecture you're actually using (which is vastly different from the systems I grew up with, at least for micro-optimization purposes), and what the compiler will do for you.
A lot of the traditional micro-optimizations trade space for time. Nowadays, using more space increases the chances of a cache miss, and there goes your performance. Moreover, a lot of them are now done by modern compilers, and typically better than you're likely to do them.
Currently, you should (a) profile to see if you need to micro-optimize, and then (b) try to trade computation for space, in the hope of keeping as much as possible in cache. Finally, run some tests, so you know if you've improved things or screwed them up. Modern compilers and chips are far too complex for you to keep a good mental model, and the only way you'll know if some optimization works or not is to test.
In addition to Joshua's comment about code generation (a big win), and other good suggestions, ...
I'm not sure if you would call it "low-level", but (and this is downvote-bait) 1) stay away from using any more levels of abstraction than absolutely necessary, and 2) stay away from event-driven notification-style programming, if possible.
If a computer executing a program is like a car running a race, a method call is like a detour. That's not necessarily bad, except there's a strong temptation to nest those things, because once you've written a method call, you tend to forget what that call could cost you.
If you're relying on events and notifications, it's because you have multiple data structures that need to be kept in agreement. This is costly, and should only be done if you can't avoid it.
In my experience, the biggest performance killers are too much data structure and too much abstraction.
I was amazed at the speedup I got by replacing a for loop adding numbers together in structs:
#include <stdlib.h>

const unsigned long SIZE = 100000000;

typedef struct {
    int a;
    int b;
    int result;
} addition;

addition *sum;

void start() {
    unsigned int byte_count = SIZE * sizeof(addition);
    sum = malloc(byte_count);
    unsigned int i = 0;
    if (i < SIZE) {
        do {
            sum[i].a = i;
            sum[i].b = i;
            i++;
        } while (i < SIZE);
    }
}

void test_func() {
    unsigned int i = 0;
    if (i < SIZE) { // this is about 30% faster than the more obvious for loop, even with O3
        do {
            addition *s1 = &sum[i];
            s1->result = s1->b + s1->a;
            i++;
        } while (i < SIZE);
    }
}

void finish() {
    free(sum);
}
Why doesn't gcc optimise for loops into this? Or is there something I missed? Some cache effect?