Convert.ToSingle() from double in vb.net returns wrong value - vb.net

Here is my question :
If we have the following value
0.59144706948010461
and we try to convert it to Single, we receive the following value:
0.591447055
As you can see, this is not what we should receive. Could you please explain how this value gets created and how I can avoid this situation?
Thank you!

As you can see, this is not what we should receive.
Why not? I strongly suspect that's the closest Single value to the Double you've given.
From the documentation for Single, having fixed the typo:
All floating-point numbers have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Single value has up to 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
Your Double value is 0.5914471 when limited to 7 significant digits - and so is the Single value you're getting. Your original Double value isn't exactly 0.59144706948010461 either... the exact values of the Double and Single values are:
Double: 0.5914470694801046146693579430575482547283172607421875
Single: 0.591447055339813232421875
It's important that you understand a bit about how binary floating point works - see my articles on binary floating point and decimal floating point for more background.
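If you want to see this from VB.NET itself, a minimal sketch (a few statements to drop into a console app; the "G17" and "G9" format strings print enough digits to round-trip the stored values):
Dim d As Double = 0.59144706948010461
Dim s As Single = Convert.ToSingle(d)
Console.WriteLine(d.ToString("G17"))   ' prints the Double with 17 significant digits
Console.WriteLine(s.ToString("G9"))    ' prints roughly 0.591447055, the value from the question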

When converting from double to float you're also rounding. The result should be the single-precision number that is closest to the number you are rounding.
That is exactly what you're getting here.
Single-precision floating-point numbers between 0.5 and 1 are of the form n / 2^24, where n is an integer between 2^23 and 2^24.
0.59144706948010461... = 9922835.23723472274456576... / 2^24
so the closest single-precision floating-point number is
9922835 / 2^24 = 0.5914470553...
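As a quick sanity check of that arithmetic, here is a rough VB.NET sketch (not from the original answer) that reproduces those numbers:
Dim d As Double = 0.59144706948010461
Dim scaled As Double = d * Math.Pow(2, 24)   ' about 9922835.2372347, as above
Dim n As Double = Math.Round(scaled)         ' 9922835, the nearest numerator
Console.WriteLine(n / Math.Pow(2, 24))       ' about 0.5914470553, the nearest single-precision value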

Related

Converting int to double screws up the decimal point

In the debug window, when I input this command:
po 1912/10.0
The output is 191.19999999999999.
What I really want to get back is 191.2.
Why is this happening, and how can I convert an int into a double with precision?
From What Every Programmer Should Know About Floating-Point Arithmetic:
Why don’t my numbers, like 0.1 + 0.2 add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?
Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
This is why programmers say you should only ever store money as an integer. For example int cents = 1995; rather than float dollars = 19.95.
If your app doesn't need to be 100% precise (for example, if you're calculating screen coordinates or translucency or a color) just format your float rounded to 1 or 2 decimal places:
double someValue = 1912/10.0;
NSLog(#"2 decimals: %.2f", someValue);
NSLog(#"0 decimals: %.0f", someValue);
This code will output:
2 decimals: 191.20
0 decimals: 191
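The equivalent rounding-on-output in VB.NET (the language of the question at the top of this page) would look roughly like this:
Dim someValue As Double = 1912 / 10.0
Console.WriteLine(String.Format("2 decimals: {0:F2}", someValue))   ' 2 decimals: 191.20
Console.WriteLine(String.Format("0 decimals: {0:F0}", someValue))   ' 0 decimals: 191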
That's normal for a floating point number. Double is just a floating point number with more precision. If you want to keep the pristine decimal digits, then don't allow any float/double conversion. Instead store the result as a scaled integer (in your case 1912) and place the decimal manually.
Let me try to explain this another way. When you express a number with a fractional part as a float or double, precision is most often lost. There's no way around that. If you store 1912 as a float and store 10 as a float, then divide the first stored value by the second, the value will NEVER be exactly 191.2. That's just the way floating point numbers work. If you look at the number in a debugger you'll see something like 191.19999999999999, as you describe. Even that is an approximation: the exact binary value that is stored has more decimal digits than the debugger displays, and it is not 191.2.
If you're going to use floating point, that's what you'll get. No way around it.
If you really want to get 191.2, then you can't use floating point, at least without doing rounding. Instead, you need to normalize the numbers by just storing the value as 1912 and printing the value with a decimal point to the left of the 2.
There's another brief online description at http://floating-point-gui.de/basic/
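If you want to follow the scaled-integer suggestion in VB.NET, a minimal sketch might look like this (the value 1912 and the one-decimal scale come from the example above; the rest is illustrative):
Dim scaledValue As Integer = 1912        ' 191.2 stored exactly, as tenths
Dim display As String = String.Format("{0}.{1}", scaledValue \ 10, scaledValue Mod 10)
Console.WriteLine(display)               ' 191.2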

Objective C, division between floats not giving an exact answer

Right now I have a line of code like this:
float x = (([self.machine micSensitivity] - 0.0075f) / 0.00025f);
Where [self.machine micSensitivity] is a float containing the value 0.010000
So,
0.01 - 0.0075 = 0.0025
0.0025 / 0.00025 = 10.0
But in this case, it keeps returning 9.999999
I'm assuming there's some kind of rounding error but I can't seem to find a clean way of fixing it. micSensitivity is incremented/decremented by 0.00025 and that formula is meant to return a clean integer value for the user to reference so I'd rather get the programming right than just adding 0.000000000001.
Thanks.
that formula is meant to return a clean integer value for the user to reference
If that is really important to you, then why do you not multiply all the numbers in this story by 10000, coerce to int, and do integer arithmetic?
Or, if you know that the answer is arbitrarily close to an integer, round to that integer and present it.
Floating-point arithmetic is binary, not decimal. It will almost always give rounding errors. You need to take that into account. "float" has about six digits of precision. "double" has about 15 digits of precision. You throw away nine digits of precision for no reason.
Now think: What do you want to display? What do you want to display if the result of your calculation is 9.999999999? What would you want to display if the result is 9.538105712?
None of the numbers in your question, except 10.0, can be exactly represented in a float or a double on iOS. If you want to do float math with those numbers, you will have rounding errors.
You can round your result to the nearest integer easily enough:
float x = rintf((self.machine.micSensitivity - 0.0075f) / 0.00025f);
Or you can just multiply all your numbers, including the allowed values of micSensitivity, by 4000 (which is 1/0.00025), and thus work entirely with integers.
Or you can change the allowed values of micSensitivity so that its increment is a fraction whose denominator is a power of 2. For example, if you use an increment of 0.000244140625 (which is 2^-12), and change 0.0075 to 0.00732421875 (which is 30 * 2^-12), you should get exact results, as long as your micSensitivity is within the range ±4096 (since 4096 is 2^12 and a float has 24 bits of significand).
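A hedged VB.NET sketch of the scale-to-integers idea (the 0.01 sensitivity, the 0.0075 offset and the 0.00025 step come from the question; the variable names are just illustrative):
' Work in whole "steps" of 0.00025 instead of raw floating-point values.
Dim sensitivitySteps As Integer = CInt(Math.Round(0.01 / 0.00025))   ' 40
Dim offsetSteps As Integer = CInt(Math.Round(0.0075 / 0.00025))      ' 30
Dim x As Integer = sensitivitySteps - offsetSteps
Console.WriteLine(x)                                                 ' exactly 10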
The code you have posted is correct and functioning properly. This is a known side effect of using floating point arithmetic. See the wiki on floating point accuracy problems for a dull explanation as to why.
There are several ways to work around the problem depending on what you need to use the number for.
If you need to compare two floats, then most everything works OK: less than and greater than do what you would expect. The only trouble is testing if two floats are equal.
// If x and y are within a very small number from each other then they are equal.
if (fabs(x - y) < verySmallNumber) { // verySmallNumber is usually called epsilon.
// x and y are equal (or at least close enough)
}
If you want to print a float, then you can specify a precision to round to.
// Get a string of the x rounded to five digits of precision.
NSString *xAsAString = [NSString stringWithFormat:@"%.5f", x];
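The same two ideas carry straight over to VB.NET; a rough sketch (the epsilon value here is an arbitrary choice):
Dim epsilon As Single = 0.0001F
Dim x As Single = 9.999999F
Dim y As Single = 10.0F
' Compare within a tolerance instead of testing exact equality.
If Math.Abs(x - y) < epsilon Then
    Console.WriteLine("x and y are equal (or at least close enough)")
End If
' Round on output by limiting the displayed precision to five decimal places.
Console.WriteLine(x.ToString("F5"))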
9.999... (repeating forever) is equal to 10. Here is a proof:
let x = 9.999..., then 10x = 99.999..., so 10x - x = 9x = 90, and therefore x = 10.
(Note that this only holds for the infinitely repeating decimal; the finite 9.999999 that the float prints is merely very close to 10.)

What is wrong with Math.Round() in VB.Net?

I have encountered a weird case with the Math.Round function in VB.Net.
Math.Round((32.625), 2)
Result : 32.62
Math.Round((32.635), 2)
Result : 32.64
I need 32.63, but the function is applying different logic in these two cases.
I could extract the decimal part and do what I want with it myself, but isn't this weird? One value is rounded up and the other is rounded down.
So how can I get 32.63 from 32.625 without messing with the decimal part myself (as in the natural logic of maths)?
Math.Round uses banker's rounding by default. You can change that by specifying a different MidpointRounding option. From the MSDN:
Rounding away from zero
Midpoint values are rounded to the next number away from zero. For example, 3.75 rounds to 3.8, 3.85 rounds to 3.9, -3.75 rounds to -3.8, and -3.85 rounds to -3.9. This form of rounding is represented by the MidpointRounding.AwayFromZero enumeration member. Rounding away from zero is the most widely known form of rounding.
Rounding to nearest, or banker's rounding
Midpoint values are rounded to the nearest even number. For example, both 3.75 and 3.85 round to 3.8, and both -3.75 and -3.85 round to -3.8. This form of rounding is represented by the MidpointRounding.ToEven enumeration member.
Rounding to nearest is the standard form of rounding used in financial and statistical operations. It conforms to IEEE Standard 754, section 4. When used in multiple rounding operations, it reduces the rounding error that is caused by consistently rounding midpoint values in a single direction. In some cases, this rounding error can be significant.
So, what you want is:
Math.Round(32.625, 2, MidpointRounding.AwayFromZero)
Math.Round(32.635, 2, MidpointRounding.AwayFromZero)
As others have mentioned, if precision is important, you should be using Decimal variables rather than floating point types. For instance:
Math.Round(32.625D, 2, MidpointRounding.AwayFromZero)
Math.Round(32.635D, 2, MidpointRounding.AwayFromZero)
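For example, using Decimal literals so the inputs really are exact midpoints, the two modes compare like this:
Console.WriteLine(Math.Round(32.625D, 2))                                 ' 32.62 (banker's rounding)
Console.WriteLine(Math.Round(32.635D, 2))                                 ' 32.64 (banker's rounding)
Console.WriteLine(Math.Round(32.625D, 2, MidpointRounding.AwayFromZero))  ' 32.63
Console.WriteLine(Math.Round(32.635D, 2, MidpointRounding.AwayFromZero))  ' 32.64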
Try this (from memory):
Math.Round((32.635), 2, MidpointRounding.AwayFromZero)
Try this.
Dim d As Decimal = 3.625D
Dim r As Decimal = Math.Ceiling(d * 100D) / 100D
MsgBox(r)
This should do what you want.
Here's a quick function you can add to simplify your life and make it so you don't have to type so much all the time.
Private Function roundd(dec As Decimal) As Decimal
    Dim d As Decimal = dec
    Dim r As Decimal = Math.Ceiling(d * 100D) / 100D
    Return r
End Function
Add this to your application then use the function
roundd(3.624)
or whatever you need.
To display the result, for example:
MsgBox(roundd(3.625))
This will display a message box with 3.63.
Textbox1.Text = roundd(3.625)
This will set Textbox1.Text to 3.63, and so on.
So if you need to round more than one number, it won't be so tedious and you can save a lot of typing.
Hope this helps.
You can't do this using floats, which is how numbers like 32.625 are treated in VB.Net. (There is also the issue of banker's rounding as mentioned by @StevenDoggart - you are probably going to have to deal with both issues.)
The issue is that the number stored is not always exactly what is entered, because most decimal fractions have no exact binary representation; e.g. 32.635 is actually stored as a nearby binary value that is not exactly 32.635, while 32.625 happens to be exactly representable and so sits exactly on the banker's-rounding midpoint.
The only way to be exact is to store the numbers as the type Decimal:
Dim num As Decimal
num = Convert.ToDecimal("32.625")

VB.NET Single data type calculation issue

I want to perform a basic calculation with fractional numbers using vb.net.
Dim a As Single = 7200.5F
Dim b As Single = 7150.3F
Dim c As Single = a - b
'Expected result = 50.2
MsgBox(a.ToString + " - " + b.ToString + " = " + c.ToString.Trim)
'Produced result is: 50.2002
Dim single1 As Single
Dim single2 As Single
Dim single3 As Single
single1 = 425000
single2 = 352922.2F
single3 = single1 - single2
'Expected result is: 72077.8
MsgBox(single3.ToString)
'Produced result is: 72077.81
How can the results be so inaccurate for such a simple calculation? The problem is solved when I change the data type to Decimal, but Decimal objects consume more memory (16 bytes). Is there any alternative data type that I can use to perform simple fractional calculations with accurate results?
This is to do with the way floating point numbers are stored in memory, and a Single in .Net is a single precision floating point number, which is much less accurate than a Decimal or a Double for storing decimal numbers.
When the computer calculates your number, it only has binary fractions to use and in a single precision floating point number, they're not very accurate.
See http://en.wikipedia.org/wiki/Single-precision_floating-point_format for more information.
EDIT: There's some more information specific to VB.Net here: http://msdn.microsoft.com/en-us/library/ae382yt8(v=vs.110).aspx
The Single and Double data types are not precise. They use the floating point method to store their values. Floating points use less memory and allow for faster calculations, but they are imprecise. That is the trade-off that you have to accept if you are going to use them. If precision is important, then they are not an option for you. Decimal is precise (to a certain number of fractional digits, that is), so usually that is the best choice for precise fractional numbers.
If you really need to save memory, and you are guaranteed that your numbers will be within a certain range, then you could use an Int16, Int32, or Int64 instead. For instance, if you only care about two fractional digits, you could simply multiply everything by 100 and then divide by 100 (using Decimal types for the division) before displaying it. In that way, you can store many numbers and perform many operations using less memory, and only need to use the Decimal data type when you need to display a result.
Dim a As Integer = 720050 '7200.5
Dim b As Integer = 715030 '7150.3
Dim c As Integer = a - b
Dim cDisplay As Decimal = CDec(c) / 100D
MessageBox.Show(String.Format("{0} - {1} = {2}", CDec(a) / 100D, CDec(b) / 100D, cDisplay))
You can use the Decimal data type instead. It will work great! This is because Decimal stores an exact scaled decimal value, whereas Single and Double are binary floating point types that cannot represent most decimal fractions exactly.
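For instance, a quick sketch re-running the two calculations from the question with Decimal:
Dim a As Decimal = 7200.5D
Dim b As Decimal = 7150.3D
Console.WriteLine(a - b)        ' 50.2, exactly
Dim d1 As Decimal = 425000D
Dim d2 As Decimal = 352922.2D
Console.WriteLine(d1 - d2)      ' 72077.8, exactly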

Trouble with floats in Objective-C

I've a small problem and I can't find a solution!
My code is (this is only a sample code, but my original code does something like this):
float x = [@"2.45" floatValue];
for(int i=0; i<100; i++)
x += 0.22;
NSLog(#"%f", x);
the output is 52.450001 and not 52.450000 !
I don't know because this happens!
Thanks for any help!
~SOLVED~
Thanks to everybody! Yes, I've solved with the double type!
Floats are a number representation with a certain precision. Not every value can be represented in this format. See here as well.
You can easily see why this is the case: there is an unlimited number of numbers just in the interval (-1..1), but a float only has a limited number of bits with which to represent all the numbers in (-MAXFLOAT..MAXFLOAT).
More aptly put: a 32-bit integer representation can only hold a finite, countable set of values, but there are uncountably many real numbers, and they cannot all be captured in a limited 32- or 64-bit representation. Therefore there is not only a limit to the highest and lowest representable real value, but also to the accuracy.
So why is a number with only a few digits after the decimal point affected? Because the representation is based on a binary system instead of a decimal one, so a different set of numbers can be represented exactly than the familiar decimal ones.
See http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
Floating point types cannot always represent decimal numbers exactly. This leads to inaccuracy in some digits.
It's like me asking you what 1/3 is in decimal. No matter how hard you try, you're not going to be able to tell me what it is because decimal can't accurately describe that number.
Floats can't accurately describe some decimal numbers.
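Back in VB.NET terms (the question this page started with), the same point can be seen by printing the stored value of a constant like the 0.22 used above; a small sketch:
' 0.22 has no exact binary representation, so the Single can only hold the nearest value.
Dim s As Single = 0.22F
Dim d As Decimal = 0.22D
Console.WriteLine(s.ToString("G9"))   ' shows the stored Single is not exactly 0.22
Console.WriteLine(d)                  ' 0.22, exactly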