Should the machine instruction fxtod be ridiculously slow - optimization

I have a program on which I was trying to perform some loop optimization. It's written in C++ and compiled using gcc.
Eventually, using a profiler, I tracked down more than half the execution time of the loop to the line:
double x_component = in.input_vector[in.dimension_to_process] - \
(center_of_bin_0 + (double) nn * grid_distance);
Everything on this line is of type double, with the exception of the loop index nn, which is of type long unsigned int.
The cast from long unsigned int to double generates the assembly instruction fxtod, which the profiler flagged.
As a test I removed the reference to nn from the line, thus removing the cast from unsigned int to double, and the execution time of the loop dropped by almost half, in a loop that performs about a dozen floating point operations on an Ultrasparc IV processor. I confirmed that this is also the case on an Ultrasparc II.
Is it normal for a cast from int to double to be much more expensive than a cache miss, let alone a floating point multiply? And if so, what does everyone else normally do about it?
A lookup table for all possible values of nn (which in this case have a known limited range) would be faster than this.
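For reference, one common workaround is to strength-reduce nn * grid_distance into a running double so the conversion never happens inside the loop. This is only a sketch: the variable values are hypothetical stand-ins for the question's data, and whether it helps depends on the compiler and optimization flags.
#include <stdio.h>

int main(void)
{
    /* Hypothetical stand-ins for the question's data. */
    double input_value     = 123.456;   /* plays the role of in.input_vector[in.dimension_to_process] */
    double center_of_bin_0 = 0.5;
    double grid_distance   = 0.25;
    unsigned long count    = 8;

    /* 'offset' tracks center_of_bin_0 + nn * grid_distance, so the per-iteration
       unsigned-to-double conversion (fxtod on SPARC) disappears.  Note that the
       repeated addition can accumulate a little rounding error compared with the
       original multiplication. */
    double offset = center_of_bin_0;
    for (unsigned long nn = 0; nn < count; ++nn) {
        double x_component = input_value - offset;
        printf("nn=%lu x_component=%f\n", nn, x_component);
        offset += grid_distance;
    }
    return 0;
}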

Related

What happens when adding or multiplying an integer exceeds its limit

What will happen when the integer crosses its limit? The output is 3595; how does that come about? And int is a 2-byte type here?
#include<stdio.h>
#include<conio.h>
void main()
{
    int n=12, res=1;
    clrscr();
    while(n>3)
    {
        n+=3;
        res*=3;
    }
    printf("%d", n*res);
    getch();
}
The program will have undefined behavior.
The loop condition you wrote never becomes false in a well-defined way.
You keep adding and multiplying, and at some point the values overflow. If n then happens to come out negative (or <= 3) the loop stops, but by that time res has also overflowed. Because signed integer overflow is undefined behavior, you cannot be sure how this program behaves or what the result will be.
The behaviour is undefined - you should not rely on anything specific. Common manifestations of int overflow are:
Wraparound such that 1 + INT_MAX becomes INT_MIN. This is what every Windows PC I have encountered does. The bit pattern produced by the operation matches the unsigned cousin exactly.
Clamping such that 1 + INT_MAX becomes INT_MAX. I last observed this on a machine (with signed magnitude int) running a variant of UNIX in the 1990s.
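If you need to avoid the overflow rather than guess at its manifestation, you can test before multiplying. A minimal sketch follows; the helper name mul_would_overflow is made up for illustration, and it assumes non-negative operands.
#include <limits.h>
#include <stdio.h>

/* Returns 1 if a * b would exceed INT_MAX; checked before multiplying,
   so no undefined behavior ever occurs.  Assumes a, b >= 0. */
static int mul_would_overflow(int a, int b)
{
    return b != 0 && a > INT_MAX / b;
}

int main(void)
{
    int res = 1;
    while (!mul_would_overflow(res, 3))
        res *= 3;                 /* stops at the largest power of 3 an int can hold */
    printf("%d\n", res);          /* 1162261467 with 32-bit int */
    return 0;
}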

Why write 1,000,000,000 as 1000*1000*1000 in C?

In code created by Apple, there is this line:
CMTimeMakeWithSeconds( newDurationSeconds, 1000*1000*1000 )
Is there any reason to express 1,000,000,000 as 1000*1000*1000?
Why not 1000^3 for that matter?
One reason to declare constants in a multiplicative way is to improve readability, while the run-time performance is not affected.
Also, to indicate that the writer was thinking in a multiplicative manner about the number.
Consider this:
double memoryBytes = 1024 * 1024 * 1024;
It's clearly better than:
double memoryBytes = 1073741824;
as the latter doesn't look, at first glance, like the third power of 1024.
As Amin Negm-Awad mentioned, the ^ operator is the binary XOR. Many languages lack a built-in, compile-time exponentiation operator, hence the multiplication.
There are reasons not to use 1000 * 1000 * 1000.
With 16-bit int, 1000 * 1000 overflows. So using 1000 * 1000 * 1000 reduces portability.
With 32-bit int, the following first line of code overflows.
long long Duration = 1000 * 1000 * 1000 * 1000; // overflow
long long Duration = 1000000000000; // no overflow, hard to read
I suggest that the lead value match the type of the destination, for readability, portability and correctness.
double Duration = 1000.0 * 1000 * 1000;
long long Duration = 1000LL * 1000 * 1000 * 1000;
Also, code could simply use e notation for values that are exactly representable as a double. Of course this requires knowing whether double can exactly represent the whole-number value - something of concern with values greater than 1e9. (See DBL_EPSILON and DBL_DIG.)
long Duration = 1000000000;
// vs.
long Duration = 1e9;
Why not 1000^3?
The result of 1000^3 is 1003. ^ is the bit-XOR operator.
Even though it does not deal with the question itself, I'll add a clarification. x^y does not always evaluate to x+y, as it happens to in the questioner's example. You have to XOR every bit. In the case of the example:
  1111101000₂ (1000₁₀)
^ 0000000011₂ (3₁₀)
= 1111101011₂ (1003₁₀)
But
  1111101001₂ (1001₁₀)
^ 0000000011₂ (3₁₀)
= 1111101010₂ (1002₁₀)
For readability.
Placing commas and spaces between the zeros (1 000 000 000 or 1,000,000,000) would produce a syntax error, and having 1000000000 in the code makes it hard to see exactly how many zeros are there.
1000*1000*1000 makes it apparent that it's 10^9, because our eyes can process the chunks more easily. Also, there's no runtime cost, because the compiler will replace it with the constant 1000000000.
For readability. For comparison, Java supports _ in numbers to improve readability (first proposed by Stephen Colebourne as a reply to Derek Foster's PROPOSAL: Binary Literals for Project Coin/JSR 334). One would write 1_000_000_000 here.
In roughly chronological order, from oldest support to newest:
XPL: "(1)1111 1111" (apparently not for decimal values, only for bitstrings representing binary, quartal, octal or hexadecimal values)
PL/M: 1$000$000
Ada: 1_000_000_000
Perl: likewise
Ruby: likewise
Fantom (previously Fan): likewise
Java 7: likewise
Swift: (same?)
Python 3.6
C++14: 1'000'000'000
It's relatively new for languages to realize they ought to support this feature (and then there's Perl). As in chux's excellent answer, 1000*1000... is a partial solution but opens the programmer up to bugs from overflowing the multiplication even if the final result is a large type.
It might be simpler to read and bring up an association with the 1,000,000,000 form.
From a technical point of view I guess there is no difference between the direct number and the multiplication; the compiler will generate the constant one-billion value either way.
If you're speaking about Objective-C, then 1000^3 won't work because there is no such syntax for pow (^ is XOR). Instead, the pow() function can be used, but in that case it will not be optimal: it will be a runtime function call, not a compiler-generated constant.
To illustrate the reasons consider the following test program:
$ cat comma-expr.c && gcc -o comma-expr comma-expr.c && ./comma-expr
#include <stdio.h>
#define BILLION1 (1,000,000,000)
#define BILLION2 (1000^3)
int main()
{
    printf("%d, %d\n", BILLION1, BILLION2);
}
0, 1003
$
Another way to achieve a similar effect in C for decimal numbers is to use literal floating point notation -- so long as a double can represent the number you want without any loss of precision.
IEEE 754 64-bit double can represent any non-negative integer <= 2^53 without problem. Typically, long double (80 or 128 bits) can go even further than that. The conversions will be done at compile time, so there is no runtime overhead, and with a good compiler you will likely get warnings if there is an unexpected loss of precision.
long lots_of_secs = 1e9;

Measuring Program Execution Time with Cycle Counters

I am confused by this particular line:
result = (double) hi * (1 << 30) * 4 + lo;
of the following code:
void access_counter(unsigned *hi, unsigned *lo)
// Set *hi and *lo to the high and low order bits of the cycle
// counter.
{
    asm("rdtscp; movl %%edx,%0; movl %%eax,%1" // Read cycle counter
        : "=r" (*hi), "=r" (*lo)               // and move results to
        : /* No input */                       // the two outputs
        : "%edx", "%eax");
}
double get_counter()
// Return the number of cycles since the last call to start_counter.
{
    unsigned ncyc_hi, ncyc_lo;
    unsigned hi, lo, borrow;
    double result;

    /* Get cycle counter */
    access_counter(&ncyc_hi, &ncyc_lo);
    lo = ncyc_lo - cyc_lo;
    borrow = lo > ncyc_lo;
    hi = ncyc_hi - cyc_hi - borrow;
    result = (double) hi * (1 << 30) * 4 + lo;
    if (result < 0) {
        fprintf(stderr, "Error: counter returns neg value: %.0f\n", result);
    }
    return result;
}
What I cannot understand is why hi is being multiplied by 2^30 and then by 4, and then lo added to it. Could someone please explain what is happening in this line of code? I do know what hi and lo contain.
The short answer:
That line turns a 64bit integer that is stored as 2 32bit values into a floating point number.
Why doesn't the code just use a 64bit integer? Well, gcc has supported 64bit numbers for a long time, but presumably this code predates that. In that case, the only way to support numbers that big is to put them into a floating point number.
The long answer:
First, you need to understand how rdtscp works. When this assembler instruction is invoked, it does 2 things:
1) Sets ecx to the IA32_TSC_AUX MSR. In my experience, this generally just means ecx gets set to zero.
2) Sets edx:eax to the current value of the processor’s time-stamp counter. This means that the lower 32 bits of the counter go into eax, and the upper 32 bits are in edx.
With that in mind, let's look at the code. When called from get_counter, access_counter is going to put edx in 'ncyc_hi' and eax in 'ncyc_lo.' Then get_counter is going to do:
lo = ncyc_lo - cyc_lo;
borrow = lo > ncyc_lo;
hi = ncyc_hi - cyc_hi - borrow;
What does this do?
Since the time is stored in 2 different 32bit numbers, if we want to find out how much time has elapsed, we need to do a bit of work to find the difference between the old time and the new. When it is done, the result is stored (again, using 2 32bit numbers) in hi / lo.
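To see that the borrow trick really computes a 64-bit difference, here is a small self-contained check (the hi/lo values are made up; on a compiler with long long you can compare against a plain 64-bit subtraction):
#include <stdio.h>

int main(void)
{
    /* Made-up old (cyc) and new (ncyc) counter readings, split into 32-bit halves. */
    unsigned cyc_hi  = 1, cyc_lo  = 0xFFFFFFF0u;
    unsigned ncyc_hi = 2, ncyc_lo = 0x00000010u;

    /* The borrow trick from the answer... */
    unsigned lo = ncyc_lo - cyc_lo;
    unsigned borrow = lo > ncyc_lo;
    unsigned hi = ncyc_hi - cyc_hi - borrow;

    /* ...matches an ordinary 64-bit subtraction. */
    unsigned long long diff = (((unsigned long long) ncyc_hi << 32) | ncyc_lo)
                            - (((unsigned long long) cyc_hi  << 32) | cyc_lo);

    printf("hi=%u lo=%u\n", hi, lo);  /* prints hi=0 lo=32 */
    printf("diff=%llu\n", diff);      /* prints diff=32 */
    return 0;
}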
Which finally brings us to your question.
result = (double) hi * (1 << 30) * 4 + lo;
If we could use 64bit integers, converting 2 32bit values to a single 64bit value would look like this:
unsigned long long result = hi;  // put hi into the 64bit number.
result <<= 32;                   // shift the 32 bits to the upper part of the number
result |= lo;                    // add in the lower 32bits.
If you aren't used to bit shifting, maybe looking at it like this will help. If lo = 1 and hi = 2, then expressed as hex numbers:
result = hi;     0x0000000000000002
result <<= 32;   0x0000000200000000
result |= lo;    0x0000000200000001
But if we assume the compiler doesn't support 64bit integers, that won't work. While floating point numbers can hold values that big, they don't support shifting. So we need to figure out a way to shift 'hi' left by 32bits, without using left shift.
Ok then, shifting left by 1 is really the same as multiplying by 2. Shifting left by 2 is the same as multiplying by 4. Shifting left by [omitted...] Shifting left by 32 is the same as multiplying by 4,294,967,296.
By an amazing coincidence, 4,294,967,296 == (1 << 30) * 4.
So why write it in that complicated fashion? Well, 4,294,967,296 is a pretty big number. In fact, it's too big to fit in a 32bit integer. Which means if we put it in our source code, a compiler that doesn't support 64bit integers may have trouble figuring out how to process it. Written like this, the compiler can generate whatever floating point instructions it might need to work on that really big number.
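As a quick sanity check (sample hi/lo values chosen arbitrarily), the floating point reconstruction agrees with the shift-and-or version on a compiler that does have 64-bit integers:
#include <stdio.h>

int main(void)
{
    unsigned hi = 2, lo = 1;   /* arbitrary sample halves of a counter value */

    double d = (double) hi * (1 << 30) * 4 + lo;                 /* the answer's expression */
    unsigned long long u = ((unsigned long long) hi << 32) | lo; /* the 64-bit equivalent */

    printf("%.0f\n%llu\n", d, u);   /* both print 8589934593 */
    return 0;
}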
Why the current code is wrong:
It looks like variations of this code have been wandering around the internet for a long time. Originally (I assume) access_counter was written using rdtsc instead of rdtscp. I'm not going to try to describe the difference between the two (google them), other than to point out that rdtsc does not set ecx, and rdtscp does. Whoever changed rdtsc to rdtscp apparently didn't know that, and failed to adjust the inline assembler stuff to reflect it. While your code might work fine despite this, it might do something weird instead. To fix it, you could do:
asm("rdtscp; movl %%edx,%0; movl %%eax,%1" // Read cycle counter
: "=r" (*hi), "=r" (*lo) // and move results to
: /* No input */ // the two outputs
: "%edx", "%eax", "%ecx");
While this will work, it isn't optimal. Registers are a valuable and scarce resource on i386. This tiny fragment uses 5 of them. With a slight modification:
asm("rdtscp" // Read cycle counter
: "=d" (*hi), "=a" (*lo)
: /* No input */
: "%ecx");
Now we have 2 fewer assembly statements, and we only use 3 registers.
But even that isn't the best we can do. In the (presumably long) time since this code was written, gcc has added both support for 64bit integers and a function to read the tsc, so you don't need to use asm at all:
unsigned int a;
unsigned long long result;
result = __builtin_ia32_rdtscp(&a);
'a' is the (useless?) value that was being returned in ecx. The function call requires it, but we can just ignore the returned value.
So, instead of doing something like this (which I assume your existing code does):
unsigned cyc_hi, cyc_lo;
access_counter(&cyc_hi, &cyc_lo);
// do something
double elapsed_time = get_counter(); // Find the difference between cyc_hi, cyc_lo and the current time
We can do:
unsigned int a;
unsigned long long before, after;
before = __builtin_ia32_rdtscp(&a);
// do something
after = __builtin_ia32_rdtscp(&a);
unsigned long long elapsed_time = after - before;
This is shorter, doesn't use hard-to-understand assembler, is easier to read and maintain, and produces the best possible code.
But it does require a relatively recent version of gcc.

Difference between Objective-C primitive numbers

What is the difference between Objective-C's C primitive numbers? I know what they are and how to use them (somewhat), but I'm not sure what the capabilities and uses of each one are. Could anyone clear up which ones are best for some scenarios and not others?
int
float
double
long
short
What can I store with each one? I know that some can store more precise numbers and some can only store whole numbers. Say, for example, I wanted to store a latitude (possibly retrieved from a CLLocation object); which one should I use to avoid losing any data?
I also noticed that there are unsigned variants of each one. What does that mean and how is it different from a primitive number that is not unsigned?
Apple has some interesting documentation on this, however it doesn't fully satisfy my question.
Well, first off types like int, float, double, long, and short are C primitives, not Objective-C. As you may be aware, Objective-C is sort of a superset of C. The Objective-C NSNumber is a wrapper class for all of these types.
So I'll answer your question with respect to these C primitives, and how Objective-C interprets them. Basically, each numeric type can be placed in one of two categories: Integer Types and Floating-Point Types.
Integer Types
short
int
long
long long
These can only store, well, integers (whole numbers), and are characterized by two traits: size and signedness.
Size means how much physical memory in the computer a type requires for storage, that is, how many bytes. Technically, the exact memory allocated for each type is implementation-dependent, but there are a few guarantees: (1) char will always be 1 byte; (2) sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long).
Signedness means, simply, whether or not the type can represent negative values. So a signed integer, or int, can represent a certain range of negative or positive numbers (traditionally –2,147,483,648 to 2,147,483,647), and an unsigned integer, or unsigned int, can represent the same range of numbers, but all positive (0 to 4,294,967,295).
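A tiny sketch (plain C; the printed sizes are whatever your platform uses, and the values in the comments are just the common 64-bit desktop case) that makes the size and signedness points concrete:
#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Sizes are implementation-defined; 2/4/8/8 is a typical 64-bit desktop result. */
    printf("short=%zu int=%zu long=%zu long long=%zu\n",
           sizeof(short), sizeof(int), sizeof(long), sizeof(long long));

    /* Signedness changes the representable range, not the storage size. */
    printf("int:          %d .. %d\n", INT_MIN, INT_MAX);
    printf("unsigned int: 0 .. %u\n", UINT_MAX);
    return 0;
}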
Floating-Point Types
float
double
long double
These are used to store decimal values (aka fractions) and are also categorized by size. Again the only real guarantee you have is that sizeof(float) <= sizeof(double) <= sizeof (long double). Floating-point types are stored using a rather peculiar memory model that can be difficult to understand, and that I won't go into, but there is an excellent guide here.
There's a fantastic blog post about C primitives in an Objective-C context over at RyPress. Lots of intro CPS textbooks also have good resources.
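For the latitude example in the question, a small sketch (the coordinate value is made up; plain C, which also compiles as Objective-C) showing why double is the safer choice:
#include <stdio.h>

int main(void)
{
    /* A made-up latitude with more digits than a float can hold. */
    double lat_d = 37.331821369;
    float  lat_f = 37.331821369f;

    /* float keeps roughly 7 significant decimal digits, double roughly 15,
       so the float copy typically drifts after the 7th digit. */
    printf("double: %.9f\n", lat_d);
    printf("float:  %.9f\n", lat_f);
    return 0;
}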
First, I would like to point out the difference between an unsigned int and an int. Say that you have a very high number, and that you write a loop iterating with an unsigned int:
for(unsigned int i=0; i< N; i++)
{ ... }
If N is a number defined with #define, it may be higher than the maximum value storable in an int, while still fitting in an unsigned int. If i overflows, it will start again from zero and you'll end up in an infinite loop; that's why I prefer to use an int for loops.
The same happens if by mistake you iterate with an int while comparing it to a long. If N is a long you should iterate with a long, but if N is an int you can still safely iterate with a long.
Another pitfall that may occur is when using the shift operator with an integer constant and then assigning the result to an int or long. Maybe you also log sizeof(long), notice that it returns 8, and don't care about portability, so you think you wouldn't lose precision here:
long i= 1 << 34;
But instead, 1 isn't a long, so the shift will overflow, and by the time it is converted to a long you have already lost precision. Instead you should type:
long i= 1l << 34;
Newer compilers will warn you about this.
Taken from this question: Converting Long 64-bit Decimal to Binary.
About float and double, there is a thing to consider: they use a mantissa and an exponent to represent the number. It's something like:
value= 2^exponent * mantissa
So the higher the exponent is, the less exact the floating point representation becomes. It may also happen that a number is so large that its representation is inaccurate enough that, surprisingly, if you print it you get a different number:
float f= 9876543219124567;
NSLog(@"%.0f", f); // On my machine it prints 9876543585124352
If I use a double it prints 9876543219124568, and if I use a long double with the %.0Lf format it prints the correct value. Always be careful when using floating point numbers; unexpected things may happen.
For example, it may also happen that two floating point numbers have almost the same value and you expect them to compare equal, but there is a subtle difference, so the equality comparison fails. This has been treated hundreds of times on Stack Overflow, so I will just post this link: What is the most effective way for float and double comparison?.
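As a rough illustration of what that linked question is about (one of several possible comparison schemes, with an arbitrarily chosen tolerance):
#include <math.h>
#include <stdio.h>

/* Relative-epsilon comparison: treat a and b as equal when their difference
   is tiny compared to their magnitude. */
static int nearly_equal(double a, double b, double rel_tol)
{
    double diff = fabs(a - b);
    double largest = fmax(fabs(a), fabs(b));
    return diff <= largest * rel_tol;
}

int main(void)
{
    double x = 0.1 + 0.2;
    printf("x == 0.3      -> %d\n", x == 0.3);                   /* usually prints 0 */
    printf("nearly_equal  -> %d\n", nearly_equal(x, 0.3, 1e-9)); /* prints 1 */
    return 0;
}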

Using a data type like uint64_t in CGAL's exact kernel

I am beginning with CGAL. What I would like to do is to create a point whose coordinates are numbers around 2^51.
typedef CGAL::Exact_predicates_exact_constructions_kernel K;
typedef K::Point_2 P;
uint_64 x,y;
//init them somehow
P sp0(x,y);
Then I get a long template error. Could someone help?
I guess you realize that changing the kernel may have other effects on your program.
Concerning your original question, if your integer values are smaller than 2^51, then they fit exactly in doubles (with a 53-bit mantissa), so one simple option is to cast them to double, as in:
P sp0((double)x,(double)y);
Otherwise, the Exact_predicates_exact_constructions_kernel should have its main number type able to read your uint64 values (maybe cast them to unsigned long long if that's OK on your platform):
typedef K::FT FT;
P sp0((FT)x,(FT)y);
CGAL number types are only documented to interoperate with int and double. I recently added some code so we can construct more number types from long (required for Eigen), and your code will work in the next version of CGAL (except that you typo-ed uint64_t) on platforms where uint64_t is unsigned int or unsigned long (not Windows). For long long support, since many of our number types are based on other libraries (GMP) that do not support long long themselves yet, it may have to wait a bit.
OK, I think that I found the solution. The problem was that I used the exact kernel, which supports only double; switching to the inexact kernel solved the problem. It was also possible to use just double (one of the requirements was to use a data type that supports integers up to 2^48).