Why is the maximum value of NSUInteger 2^32 - 1 instead of 2^32? Is there a relationship between this fact and the need for a NaN value? This is so confusing.
Count to 10 on your fingers. Really :)
The standard way to count to 10 is 1,2,3,..10 (the ordinality of each finger is counted). However, what about "0 fingers"?
Normally you might represent that by putting your hands behind your back, but that adds another piece of information to the system: are your hands in front (present) or behind your back (missing)?
In this case, putting your hands behind your back would be equivalent to assigning nil to an NSNumber variable. However, NSUInteger represents a native integer type which does not have this extra state and must still encode 0 to be useful.
The key to encode the value 0 on your fingers is to simply count 0,1,2..9 instead. The same number of fingers (or bits of information) are available, but now the useful 0 can be accounted for .. at the expense of not having a 10 value (there are still 10 fingers, but the 10th finger only represents the value 9). This is the same reason why unsigned integers have a maximum value of 2^n-1 and not 2^n: it allows 0 to be encoded with maximum efficiency.
Now, NaN is not a typical integer value, but rather comes from floating point encodings - think of float or CGFloat. One such common encoding is IEEE 754:
In computing, NaN, standing for not a number, is a numeric data type value representing an undefined or unrepresentable value, especially in floating-point calculations ..
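To see both parts concretely, here is a minimal Objective-C sketch (the printed maximum depends on the platform's NSUInteger width: 2^32 - 1 on a 32-bit build, 2^64 - 1 on a 64-bit one):

#import <Foundation/Foundation.h>
#include <math.h>

int main(void) {
    @autoreleasepool {
        // Counting starts at 0, so n bits give a maximum value of 2^n - 1.
        NSLog(@"NSUIntegerMax = %lu", (unsigned long)NSUIntegerMax);

        // NaN belongs to floating-point types, not to integers.
        double undefined = NAN;                   // the NaN constant from <math.h>
        NSLog(@"isnan = %d", isnan(undefined));   // prints 1
    }
    return 0;
}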
It's 2^32 - 1 because counting starts from 0. If it's easier, think of it as 2^32 - 2^0.
It is the largest value a 32-bit unsigned integer variable can hold. Add one to that, and it will wrap around to zero.
The reason for that is that the smallest unsigned number is zero, not one. Think of it: the largest number you can fit into four decimal digits is 9999, not 10000. That's 10^4 - 1.
You cannot store 2^32 in 4 bytes, but if you subtract one then it fits (result is 0xffffffff)
It's exactly the same reason why the odometer in your car shows a maximum of 999999 mi/km (assuming 6 digits): there are 10^6 possible values, but it can't show 10^6 itself, only 0 through 10^6 - 1.
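A quick sketch of that wrap-around, using a fixed-width 32-bit unsigned type so the behaviour is the same on any platform:

#import <Foundation/Foundation.h>
#include <stdint.h>

int main(void) {
    uint32_t odometer = UINT32_MAX;     // 4294967295 = 2^32 - 1, i.e. 0xffffffff
    NSLog(@"max     = %u", odometer);
    odometer += 1;                      // unsigned overflow is well defined: it wraps around
    NSLog(@"max + 1 = %u", odometer);   // prints 0
    return 0;
}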
Related
The upper limit for any int data type (excluding tinyint) is always one less than the absolute value of the lower limit.
For example, the upper limit for an int is 2,147,483,647 and ABS(lower limit) = 2,147,483,648.
Is there a reason why there is always one more negative int than positive int?
EDIT: Changed since question isn't directly related to DB's
The types you provided are signed integers. Let's look at a one-byte (8-bit) example. With 1 byte you have 2^8 combinations, which gives you 256 possible numbers to store.
Now you want to have the same number of positive and negative numbers (each group should have 128).
The point is that 0 doesn't have a +0 and a -0; there is only one 0, and it takes up one of the slots on the non-negative side.
So you end up with the range -128 .. -1, 0, 1 .. 127.
The same logic works for 16/32/64-bit.
EDIT:
Why is the range -128 to 127?
It depends on how you represent signed integers:
Signed magnitude representation
Ones' complement
Two's complement
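To make the 8-bit example above concrete, here is a small sketch using the limits from <stdint.h> (two's complement, which is what essentially all modern hardware uses):

#import <Foundation/Foundation.h>
#include <stdint.h>

int main(void) {
    // 2^8 = 256 bit patterns: 128 negative values, one zero, and 127 positive values.
    NSLog(@"INT8_MIN = %d, INT8_MAX = %d", INT8_MIN, INT8_MAX);   // -128, 127

    // The same bit pattern means different things read as unsigned vs. signed.
    uint8_t bits = 0x80;                                          // 1000 0000
    NSLog(@"0x80 unsigned = %u, signed = %d", (unsigned)bits, (int)(int8_t)bits);   // 128, -128
    return 0;
}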
This question isn't really related to databases.
As lad2025 points out, there are an even number of values. So, by including 0, there would be one more positive or negative value. The question you are asking seems to be: "Why is there one more negative value than positive value?"
Basically, the reason is the sign bit. One possible implementation of negative numbers is to use n - 1 bits for the absolute value and one bit for the sign (0 for positive, 1 for negative). The problem with this approach is that it permits both +0 and -0, which is not desirable.
To fix this, computer scientists devised the two's-complement representation for signed integers. (Wikipedia explains this in more detail.) Basically, this representation keeps the concept of a sign bit that can be tested, but it changes how the magnitude is encoded. If +1 is represented as 001, then -1 is represented as 111. In fact, the negative of a value is always generated by subtracting 1 and then taking the bit-wise complement.
The issue is then the value 100 (followed by any number of zeros). The sign bit is set, so it is negative. However, when you subtract 1 and invert, it becomes itself again (011 --> 100). There is an argument for calling this "infinity" or "not a number". Instead it is assigned the smallest possible negative number.
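Here is a small sketch of that subtract-one-then-complement rule in 8 bits, done in an unsigned type to sidestep signed-overflow issues; note how the pattern 1000 0000 maps back onto itself:

#import <Foundation/Foundation.h>
#include <stdint.h>

// Two's complement negation: subtract 1, then take the bit-wise complement
// (equivalent to complementing and then adding 1).
static uint8_t negate8(uint8_t bits) {
    return (uint8_t)~(uint8_t)(bits - 1);
}

int main(void) {
    NSLog(@"negate(0x01) = 0x%02x", negate8(0x01));   // 0xff, i.e. -1
    NSLog(@"negate(0xff) = 0x%02x", negate8(0xff));   // 0x01, i.e. +1
    NSLog(@"negate(0x80) = 0x%02x", negate8(0x80));   // 0x80 again: -128 has no positive partner
    return 0;
}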
Let's say you have a 4-byte (32-bit) integer. The range defined by C++ is -2^31 to 2^31 - 1.
So we end up with a range of -2^31 ... 0 ... 2^31 - 1.
We can think of this as having 2^31 non-negative integers (note that 0 is included) and 2^31 negative integers.
In the debug window, when I input this command:
po 1912/10.0
The output is 191.19999999999999.
What I really want to get back is 191.2.
Why is this happening, and how can I convert an int into a double with precision?
From What Every Programmer Should Know About Floating-Point Arithmetic:
Why don’t my numbers, like 0.1 + 0.2 add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?
Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
This is why programmers say you should only ever store money as an integer, for example int cents = 1995; rather than float dollars = 19.95.
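As a rough sketch of that advice (the variable names are just for illustration):

#import <Foundation/Foundation.h>

int main(void) {
    // 19.95 has no exact binary representation, so the float is already slightly off.
    float dollars = 19.95f;
    NSLog(@"as float: %.10f", dollars);                        // something like 19.9500007629

    // Whole cents are exact integers; format them as dollars only for display.
    int cents = 1995;
    NSLog(@"as cents: $%d.%02d", cents / 100, cents % 100);    // $19.95
    return 0;
}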
If your app doesn't need to be 100% precise (for example, if you're calculating screen coordinates or translucency or a color) just format your float rounded to 1 or 2 decimal places:
double someValue = 1912/10.0;
NSLog(#"2 decimals: %.2f", someValue);
NSLog(#"0 decimals: %.0f", someValue);
This code will output:
2 decimals: 191.20
0 decimals: 191
That's normal for a floating point number. A double is just a floating point number with more precision. If you want to keep the pristine decimal digits, then don't allow any float/double conversion. Instead, store the result as a scaled integer (in your case 1912) and place the decimal point manually.
Let me try to explain this another way. When you express a number with a fractional part as a float or double, precision is most often lost. There's no way around that. If you store 1912 as a float, store 10 as a float, and then divide the first stored value by the second, the result will NEVER be exactly 191.2. That's just the way floating point numbers work. If you look at the number in a debugger you'll see something like 191.19999999999999, as you describe. Even that is an approximation: the exact decimal expansion of the stored binary value runs to far more digits than the debugger bothers to show.
If you're going to use floating point, that's what you'll get. No way around it.
If you really want to get 191.2, then you can't use floating point, at least without doing rounding. Instead, you need to normalize the numbers by just storing the value as 1912 and printing the value with a decimal point to the left of the 2.
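A minimal sketch of that scaled-integer approach for the 191.2 case (the variable name is just illustrative):

#import <Foundation/Foundation.h>

int main(void) {
    // Keep the value in tenths (1912 means 191.2) and only insert the
    // decimal point when formatting for display.
    NSInteger tenths = 1912;
    NSString *display = [NSString stringWithFormat:@"%ld.%ld",
                         (long)(tenths / 10), (long)(tenths % 10)];
    NSLog(@"%@", display);   // 191.2
    return 0;
}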
There's another brief online description at http://floating-point-gui.de/basic/
Right now I have a line of code like this:
float x = (([self.machine micSensitivity] - 0.0075f) / 0.00025f);
Where [self.machine micSensitivity] is a float containing the value 0.010000
So,
0.01 - 0.0075 = 0.0025
0.0025 / 0.00025 = 10.0
But in this case, it keeps returning 9.999999
I'm assuming there's some kind of rounding error, but I can't seem to find a clean way of fixing it. micSensitivity is incremented/decremented by 0.00025, and that formula is meant to return a clean integer value for the user to reference, so I'd rather get the programming right than just adding 0.000000000001.
Thanks.
that formula is meant to return a clean integer value for the user to reference
If that is really important to you, then why not multiply all the numbers in this story by 100000 (so that even the 0.00025 step becomes a whole number), coerce to int, and do integer arithmetic?
Or, if you know that the answer is arbitrarily close to an integer, round to that integer and present it.
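A self-contained sketch of the rounding suggestion, using a stand-in value for micSensitivity from the question (lroundf comes from <math.h>):

#import <Foundation/Foundation.h>
#include <math.h>

int main(void) {
    // Stand-in for [self.machine micSensitivity] in the question.
    float micSensitivity = 0.010000f;

    float raw = (micSensitivity - 0.0075f) / 0.00025f;   // about 9.999999 due to rounding error
    long steps = lroundf(raw);                            // round to the nearest integer: 10
    NSLog(@"raw = %f, steps = %ld", raw, steps);
    return 0;
}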
Floating-point arithmetic is binary, not decimal. It will almost always give rounding errors. You need to take that into account. "float" has about six digits of precision. "double" has about 15 digits of precision. You throw away nine digits of precision for no reason.
Now think: What do you want to display? What do you want to display if the result of your calculation is 9.999999999? What would you want to display if the result is 9.538105712?
None of the numbers in your question, except 10.0, can be exactly represented in a float or a double on iOS. If you want to do float math with those numbers, you will have rounding errors.
You can round your result to the nearest integer easily enough:
float x = rintf((self.machine.micSensitivity - 0.0075f) / 0.00025f);
Or you can just multiply all your numbers, including the allowed values of micSensitivity, by 4000 (which is 1/0.00025), and thus work entirely with integers.
Or you can change the allowed values of micSensitivity so that its increment is a fraction whose denominator is a power of 2. For example, if you use an increment of 0.000244140625 (which is 2^-12), and change 0.0075 to 0.00732421875 (which is 30 * 2^-12), you should get exact results, as long as your micSensitivity is within the range ±4096 (since 4096 is 2^12 and a float has 24 bits of significand).
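And a sketch of the multiply-everything-by-4000 idea, keeping the sensitivity as an integer count of 0.00025 steps (the names here are illustrative, not from the original code):

#import <Foundation/Foundation.h>

int main(void) {
    // 1 / 0.00025 = 4000, so one "step" is 0.00025.
    int sensitivitySteps = 40;                // 40 * 0.00025 = 0.01
    int offsetSteps      = 30;                // 30 * 0.00025 = 0.0075

    int x = sensitivitySteps - offsetSteps;   // exact integer arithmetic: 10
    NSLog(@"x = %d", x);

    // Convert back to a float only when the real-valued sensitivity is needed.
    float micSensitivity = sensitivitySteps * 0.00025f;
    NSLog(@"micSensitivity = %f", micSensitivity);
    return 0;
}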
The code you have posted is correct and functioning properly. This is a known side effect of using floating point arithmetic. See the wiki on floating point accuracy problems for a dull explanation as to why.
There are several ways to work around the problem depending on what you need to use the number for.
If you need to compare two floats, then most everything works OK: less than and greater than do what you would expect. The only trouble is testing if two floats are equal.
// If x and y are within a very small number from each other then they are equal.
if (fabs(x - y) < verySmallNumber) { // verySmallNumber is usually called epsilon.
// x and y are equal (or at least close enough)
}
If you want to print a float, then you can specify a precision to round to.
// Get a string of the x rounded to five digits of precision.
NSString *xAsAString = [NSString stringWithFormat:@"%.5f", x];
9.999999... (with the 9s repeating forever) is equal to 10. Here is a proof:
x = 9.999999..., then 10x = 99.999999..., then 10x - x = 9x = 90, then x = 10
I have 2 buttons that each have a tag number that I pass into this string in which I am just trying to type in either 1,1,1,1,1,1,1,1,1 or 2,2,2,2,2,2,2 or shoot - even, 1,2,2,1,1,1.
Everything works fine until the 8th or 9th press of the "1" button, when the label says 111111112. Then if I press the 1 again the label says 111111168.
Maybe I am going about this totally wrong? Made sense in my head - but now I am just confused. Any help would be amazing, thank you!
-(IBAction)buttonDigitPressed:(id)sender {
    currentNumber = currentNumber * 10 + (float)[sender tag];
    NSLog(@"currentNumber: %.f", currentNumber);
    phoneNumberLabel.text = [NSString stringWithFormat:@"%.f", currentNumber];
}
This image shows me hitting the 1 a bunch of times.. you'd think it would just keep showing 1's all the way across, no?
If this is a string operation, you should not do it using numbers. Possible reasons for the error: running out of range (because float is not big enough), loss of precision (because of the nature of float), etc. What you should do instead is:
phoneNumberLabel.text = [phoneNumberLabel.text stringByAppendingFormat:@"%d", [sender tag]];
Single-precision floating point numbers use 23 bits for the mantissa (plus one implicit bit), so the largest integer up to which every whole number can be represented exactly by a float is 2^24 = 16777216.
Not all larger integers can be represented exactly by a float, therefore calculations with numbers having 8 or more digits cannot be relied upon to be exact when using float.
Double precision floating point numbers can represent numbers up to 2^53 = 9007199254740992 exactly.
A better solution might be to work with integer types (e.g. uint64_t), or with strings as suggested in H2CO3's answer.
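A small sketch of where that limit bites (a minimal, self-contained example):

#import <Foundation/Foundation.h>

int main(void) {
    // 2^24 = 16777216: every whole number up to here fits exactly in a float.
    float f = 16777216.0f;
    NSLog(@"float:  %.1f", f + 1.0f);   // still 16777216.0 -- the +1 is lost

    // A double has a 53-bit significand, so the same sum works fine.
    double d = 16777216.0;
    NSLog(@"double: %.1f", d + 1.0);    // 16777217.0
    return 0;
}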
I've a small problem and I can't find a solution!
My code is (this is only sample code, but my original code does something like this):
float x = [#"2.45" floatValue];
for(int i=0; i<100; i++)
x += 0.22;
NSLog(#"%f", x);
The output is 52.450001 and not 52.450000!
I don't know why this happens!
Thanks for any help!
~SOLVED~
Thanks to everybody! Yes, I solved it by switching to the double type!
Floats are a number representation with a certain precision. Not every value can be represented in this format. See here as well.
You can easily see why this is the case: there is an unlimited number of values just in the interval (-1..1), but a float only has a limited number of bits to represent all numbers in (-MAXFLOAT..MAXFLOAT).
More aptly put: a 32-bit integer representation covers a countable, finite set of integers, but there are uncountably many real values, which cannot all be captured by a limited representation of 32 or 64 bits. Therefore there is not only a limit to the highest and lowest representable real value, but also to the accuracy.
So why is a number with only a few digits after the decimal point affected? Because the representation is based on a binary system rather than a decimal one, so the numbers that are easy to represent are not the same as the ones that are easy to write in decimal.
See http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
Not every number can be represented exactly by a computer's floating point format. This leads to inaccuracy in some digits.
It's like me asking you what 1/3 is in decimal. No matter how hard you try, you're not going to be able to tell me what it is because decimal can't accurately describe that number.
Floats can't accurately describe some decimal numbers.
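To tie this back to the loop in the question, here is a small sketch comparing float and double; with %f formatting the float total typically shows a visible error in the last digits while the double does not (the exact trailing digits can vary):

#import <Foundation/Foundation.h>

int main(void) {
    float  f = [@"2.45" floatValue];
    double d = [@"2.45" doubleValue];

    for (int i = 0; i < 100; i++) {
        f += 0.22f;   // each 0.22 is only approximated; the error accumulates
        d += 0.22;    // a double has the same kind of error, just far smaller
    }
    NSLog(@"float:  %f", f);   // usually slightly off from 24.450000
    NSLog(@"double: %f", d);   // 24.450000 at this precision
    return 0;
}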