Visual Studio expanding 1.1 to 1.1000000000000001 - vb.net

This is, at least for me, the most bizarre Visual Studio 2010 behaviour ever. I'm working on an MVC3 project, and I copied a line of code from another project (also VS2010, MVC1 if it matters) which looks like this:
target_height = height * 1.1
When I paste it into the MVC3 project, it gets expanded to
target_height = height * 1.1000000000000001
Now, if I type 1.2, it's fine, nothing happens, but if I type 1.12 it is expanded to 1.1200000000000001.
Both target_height and height are integers. Why does one Visual Studio display 1.1 while the other expands it to 1.1000000000000001?
What is going on?

I think autocomplete went overboard and started normalizing floating-point constants into "allowed" values. As described in http://accessmvp.com/Strive4Peace/VBA/VBA_L1_02_Crystal.pdf , VB autocomplete really tries to offer only "things that apply specifically to that data type". int * double is understandably not truncated to int * int (automatic conversions happen only as needed), so what you see is the double representation of 1.1 or 1.12 (unit roundoff is about 1.11e-16).
It would still take further checking to pin down the exact conditions under which this happens, but as I am not using VB.NET or MVCx, that is not something I am willing to do.
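As a sanity check that the extra digits come from the double itself rather than from anything project-specific, here is a minimal C# sketch (C# and VB.NET share the same IEEE 754 Double); the "G17" format prints enough significant digits to round-trip the stored value:
using System;
class DoubleDigits
{
    static void Main()
    {
        // "G17" shows up to 17 significant digits, enough to reproduce the bits.
        Console.WriteLine(1.1.ToString("G17"));  // 1.1000000000000001
        Console.WriteLine(1.2.ToString("G17"));  // 1.2
        Console.WriteLine(1.12.ToString("G17")); // 1.1200000000000001
    }
}
This matches the behaviour in the question: 1.2 happens to round-trip as "1.2", while 1.1 and 1.12 do not.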

The numerical literal 1.1 does not actually represent the quantity 11/10, but instead represents the quantity round[(2^53*11)/10]/(2^53), which is a tiny bit larger than 11/10. Although that value could be written out precisely as a decimal number with 53 significant figures, doing so would be about as useful as using an inch-denominated measuring tape to determine that something is 1 3/16" long and recording the measurement as 30.1625mm. If one couldn't distinguish a measurement that was longer or shorter by less than 1/64", the measurement would be 30.1625mm +/- 0.396875mm, which is functionally the same as 30.2mm +/- 0.4mm.
The fact that Visual Studio would choose to represent the numeric quantity closest to 1.1 as 1.1000000000000001 is curious. On the one hand, the literal 1.1 would be a more concise representation of the same value. On the other hand, even if the aforementioned literal would be indistinguishable from 1.1, the more verbose representation is not without advantage. In some cases, it may be helpful to know whether a quantity is slightly larger or slightly smaller than what it "appears" to be. Even though the difference between the numeric literal 1.1 and the mathematical value 11/10 is numerically insignificant (multiplying the numeric literal by ten yields precisely 11), the difference between (1.1-1.0) and (1/10) is noticeable (multiplying the numeric expression by 10 yields a value greater than one).
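To make that last point concrete, here is a small C# check of both products (the same doubles as in VB.NET):
using System;
class ElevenTenths
{
    static void Main()
    {
        // The double nearest 1.1, times ten, rounds to exactly 11 ...
        Console.WriteLine(1.1 * 10 == 11.0);                    // True
        // ... but the representation error survives subtracting 1.0,
        // so (1.1 - 1.0) * 10 lands strictly above one.
        Console.WriteLine((1.1 - 1.0) * 10 > 1.0);              // True
        Console.WriteLine(((1.1 - 1.0) * 10).ToString("G17"));  // 1.0000000000000009
    }
}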

1.1 and 1.12 do not have an exact binary representation.
See this: https://stackoverflow.com/questions/634206/what-every-programmer-should-know-about

Related

Issue with "CDbl" function while subtracting values of two textboxes

I am trying to subtract the values of two textboxes in Visual Studio 2012.
Example input and results:
textbox1 - textbox2 = label1
25.9 - 25.4 = 0.50 (it's ok)
173.07 - 173 = 0.06 (should be 0.07)
144.98 - 142.12 = 2.85 (should be 2.86)
My code (I tried all three lines separately):
label1.text = (Convert.ToDouble(textbox1.text) - Convert.ToDouble(textbox2.text)).ToString
label1.text = (CDbl(textbox1.text) - CDbl(textbox2.text)).ToString
label1.text = (Val(textbox1.text) - Val(textbox2.text)).ToString
This error (maybe not an error) occurs sometimes, not every time.
What am I missing here? And what should I use instead of "CDbl"?
what should I use instead of "CDbl" ?
When you start with a string, the best option is Double.Parse() or Double.TryParse(), depending on the possibility of bad data.
But even that's not enough in this case. Computers use something called IEEE 754 for floating-point arithmetic. This scheme for encoding floating-point numbers is designed as an efficient way to represent numbers in binary, and it has direct support in CPUs for arithmetic operations, meaning it is much faster than any available alternative (it's not even close). Pretty much every programming platform uses it.
The downside is some loss of precision. Treated as IEEE 754 doubles, 173.07 - 173 produces a value slightly less than 0.07 (about 0.069999999999999993), not 0.07 exactly.
You can solve this in two ways:
Round the results. This isn't an option when using division, but with just addition and subtraction you can track significant digits and round to get exact results. This is a pain, though.
Use the Decimal type. Decimal isn't perfect, but it does have a much greater degree of precision (at the cost of some performance), and for your sample data it produces exact results.
In short, try this code:
label1.text = (Decimal.Parse(textbox1.text) - Decimal.Parse(textbox2.text)).ToString()
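For comparison, here is a minimal C# sketch of the same subtraction in both types (VB.NET behaves identically, since Double and Decimal are the same .NET types):
using System;
class DoubleVsDecimal
{
    static void Main()
    {
        // Binary double: 173.07 has no exact representation, so the
        // difference comes out just under 0.07.
        Console.WriteLine((173.07 - 173.0).ToString("G17")); // 0.069999999999999993
        // Base-10 decimal: both literals are stored exactly, so the
        // subtraction is exact.
        Console.WriteLine(173.07m - 173m);                   // 0.07
    }
}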

Why is BigFloat.to_s not precise enough?

I am not sure if this is a bug, but I've been playing with the big library and I can't understand why this code works this way:
https://carc.in/#/r/2w96
Code
require "big"
x = BigInt.new(1<<30) * (1<<30) * (1<<30)
puts "BigInt: #{x}"
x = BigFloat.new(1<<30) * (1<<30) * (1<<30)
puts "BigFloat: #{x}"
puts "BigInt from BigFloat: #{x.to_big_i}"
Output
BigInt: 1237940039285380274899124224
BigFloat: 1237940039285380274900000000
BigInt from BigFloat: 1237940039285380274899124224
First I thought that BigFloat required changing BigFloat.default_precision to work with bigger numbers. But from this code it looks like the precision only matters for the #to_s output.
Same with precision of BigFloat set to 1024 (https://carc.in/#/r/2w98):
Output
BigInt: 1237940039285380274899124224
BigFloat: 1237940039285380274899124224
BigInt from BigFloat: 1237940039285380274899124224
BigFloat.to_s uses LibGMP.mpf_get_str(nil, out expptr, 10, 0, self), where the GMP documentation says:
mpf_get_str (char *str, mp_exp_t *expptr, int base, size_t n_digits, const mpf_t op)
Convert op to a string of digits in base base. The base argument may vary from 2 to 62 or from -2 to -36. Up to n_digits digits will be generated. Trailing zeros are not returned. No more digits than can be accurately represented by op are ever generated. If n_digits is 0 then that accurate maximum number of digits are generated.
Thanks.
In GMP (this applies to all languages, not just Crystal), integers (C mpz_t, Crystal BigInt) and floats (C mpf_t, Crystal BigFloat) have separate default precisions.
Also, note that using an explicit precision is better than setting a default one, because the default precision might not be reentrant (it depends on a configure-time switch), and if someone reads only part of your code, they may miss the place where the default precision is set and assume a wrong one. Although I do not know the Crystal binding well, I assume such functionality is exposed somewhere.
The zero parameter passed to mpf_get_str tells GMP to derive the digit count from the precision; the number of significant decimal digits is close to precision / log2(10), i.e. roughly 0.3 decimal digits per bit. Floating-point numbers have finite precision, so it was not the mpf_get_str call that zeroed the last digits - the internal representation never kept that data. It looks like your (default) precision is too small to store all the necessary digits.
To summarize, there are two solutions:
Set a global default precision. Although this approach works, it requires either changing the default precision frequently or using a single precision for the whole program. Either way, relying on the default precision is a form of procrastination that tends to take its revenge later.
Set the precision per variable. This is the better solution, and although it takes more code (one or two extra lines per variable initialization), it pays off later. For example, in a space-object tracking system, the physics calculations have to be extremely precise, while other subsystems can use lower-precision numbers to save time and memory.
I am still unsure what made the conversion BigFloat --> BigInt yield the missing digits.
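One plausible explanation (not verified against Crystal's internals): 2^90 is a power of two, so even a low-precision binary float stores it exactly - its significand is a single 1 bit - and only the decimal string output truncates to the "accurate" number of digits. An ordinary 53-bit double shows the same effect, sketched here in C# merely for convenience:
using System;
using System.Numerics;
class PowerOfTwoRoundTrip
{
    static void Main()
    {
        BigInteger exact = BigInteger.Pow(2, 90); // 1237940039285380274899124224
        // A double has only 53 significand bits, yet it holds 2^90 exactly,
        // because a power of two needs just one significand bit.
        double viaDouble = Math.Pow(2, 90);
        Console.WriteLine(new BigInteger(viaDouble) == exact); // True
        // 2^90 - 1 would need 90 significand bits, so it does not survive.
        double lossy = (double)(exact - 1);
        Console.WriteLine(new BigInteger(lossy) == exact - 1); // False
    }
}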

Why is my very small number not being stored precisely?

In an answer on StackOverflow en Español, I showed that Perl 6 avoids the calculation errors of many other languages because it keeps track of the numerators and denominators. That is to say, decimal numbers are actually represented as Ratios. However, it does make a small error with very small numbers:
> 0.000000000000000000071.nude.perl
(71, 1000000000000000000000)
> 0.0000000000000000000071.nude.perl
(71, 10000000000000000000000)
> 0.00000000000000000000071.nude.perl
(71, 99999999999999991611392)
Is this something that will be fixed in future versions?
I get the same answers using perl6/rakudo-star-2015.09 and perl6/rakudo-star-2015.11
Denominators are supposed to be limited to 64-bit - you need a FatRat to go beyond that.
However, said limit does not appear to be enforced in current Rakudo: If you do so manually, it will happily construct your number via Rat.new(71, 10**23).
My guess would be you have uncovered a bug in the handling of rational literals, but it might only trigger in code that is not future-proof anyway.
edit: It is possible to use angle brackets to get an allomorphic value, and this produces the correct value. In fact, regular rational literals are also specced to fall back to RatStr on overflow.
However, this fallback mechanism does not appear to be implemented in Rakudo.
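For what it's worth, the odd denominator is exactly what an IEEE double makes of 10^23, which supports the idea that the literal passes through a 64-bit float somewhere. A quick check (written in C# purely as a convenient source of doubles):
using System;
using System.Numerics;
class TenToThe23
{
    static void Main()
    {
        // 10^23 = 5^23 * 2^23, and 5^23 needs 54 significand bits - one more
        // than a double has - so the nearest double is 10^23 - 2^23.
        Console.WriteLine(new BigInteger(1e23)); // 99999999999999991611392
    }
}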

Compiler Type Promotion of Right Hand Side expressions automatically in an Assignment Statement

Why does the compiler not promote all expressions on the right-hand side of an assignment to at least the type of the left-hand side?
e.g.
"double x = (88.0 - 32) * 5 / 9" converts to Celsius from Fahrenheit correctly but...
"double x = (88.0 - 32) * (5 / 9)" will not.
My question is not why the second example does not return the desired result. My question is why the compiler does not promote the evaluation of (5/9) to a double.
Why does the compiler not promote all expressions on the right-hand side of an assignment to at least the type of the left-hand side?
Very good question. Let's suppose for a moment that the compiler did this automatically, and take an expression of the same shape:
double x = 88.0 - 32 * 5 / 9
The RHS of this assignment could be converted to double, token by token, in several ways. Here are some of them:
88.0 - 32 * (double)(5 / 9)
88.0 - 32 * 5 / 9 // default rule
88.0 - (double)(32 * 5) / 9
Individually type-casting to double every token that is not already a double.
Several other ways.
This turns into a combinatorial problem: "In how many ways can a given expression be promoted to double (or whatever the target type is)?"
Compiler designers do not want to guess among these alternatives: each can produce a different result, so the language would have to pick one arbitrarily, and the user can express the intended conversion far better by writing an explicit cast exactly where it belongs.
Making every conversion automatic would not always yield the result the user wants, whereas the opposite policy - promoting operands only as needed and leaving the rest to explicit casts - is what compilers do today, and it serves all purposes correctly, if with some extra effort, flawlessly.
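A minimal C# sketch of the difference (the integer-division rule is the same in Java and C):
using System;
class CelsiusConversion
{
    static void Main()
    {
        // 5 / 9 is integer division, evaluated before the assignment's target
        // type is ever consulted, so it yields 0 and the product collapses.
        double wrong = (88.0 - 32) * (5 / 9);
        Console.WriteLine(wrong); // 0
        // Making one operand floating point promotes the division itself.
        double right = (88.0 - 32) * (5.0 / 9);
        Console.WriteLine(right); // about 31.11
    }
}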

How does VB.NET 2008 round off integer numbers? [duplicate]

According to the documentation, the decimal.Round method uses a round-to-even algorithm which is not common for most applications. So I always end up writing a custom function to do the more natural round-half-up algorithm:
public static decimal RoundHalfUp(this decimal d, int decimals)
{
    if (decimals < 0)
    {
        throw new ArgumentException("The decimals must be non-negative",
            "decimals");
    }
    decimal multiplier = (decimal)Math.Pow(10, decimals);
    decimal number = d * multiplier;
    // Shift half a unit away from zero and truncate, so exact halves
    // always round up in magnitude (2.5 -> 3, 2.4 -> 2, -2.5 -> -3).
    number += (number < 0) ? -0.5m : 0.5m;
    return decimal.Truncate(number) / multiplier;
}
Does anybody know the reason behind this framework design decision?
Is there any built-in implementation of the round-half-up algorithm into the framework? Or maybe some unmanaged Windows API?
It can be misleading for beginners who simply write decimal.Round(2.5m, 0) expecting 3 as a result but get 2 instead.
The other answers with reasons why the banker's algorithm (a.k.a. round half to even) is a good choice are quite correct: it does not suffer from negative or positive bias as much as the round-half-away-from-zero method does over most reasonable distributions.
But the question was why .NET uses banker's rounding as the default - and the answer is that Microsoft followed the IEEE 754 standard. This is also mentioned in MSDN for Math.Round under Remarks.
Also note that .NET supports the alternative method specified by IEEE through the MidpointRounding enumeration. They could of course have provided more alternatives for resolving ties, but they chose to just fulfill the IEEE standard.
Probably because it's a better algorithm. Over the course of many roundings, the .5 cases average out, rounding up and down equally often, which gives better estimates of the true result if you are, for instance, adding up a batch of rounded numbers. I would say that even though it isn't what some may expect, it's probably the more correct thing to do.
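Here is a small C# illustration of that averaging effect; the ten midpoints 0.5 through 9.5 sum to exactly 50:
using System;
class MidpointBias
{
    static void Main()
    {
        decimal sumAway = 0, sumEven = 0;
        for (decimal v = 0.5m; v < 10m; v += 1m)
        {
            sumAway += Math.Round(v, MidpointRounding.AwayFromZero);
            sumEven += Math.Round(v, MidpointRounding.ToEven);
        }
        // Away-from-zero pushes every midpoint up, accumulating bias;
        // to-even alternates up and down, tracking the exact sum.
        Console.WriteLine(sumAway); // 55
        Console.WriteLine(sumEven); // 50
    }
}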
While I cannot answer the question of "Why did Microsoft's designers choose this as the default?", I just want to point out that an extra function is unnecessary.
Math.Round allows you to specify a MidpointRounding:
ToEven - When a number is halfway between two others, it is rounded toward the nearest even number.
AwayFromZero - When a number is halfway between two others, it is rounded toward the nearest number that is away from zero.
Decimals are mostly used for money; banker's rounding is common when working with money. Or you could say:
It is mostly bankers that need the decimal type; therefore it does “banker’s rounding”.
Banker's rounding has the advantage that, on average, you will get the same result whether you:
round a set of “invoice lines” before adding them up,
or add them up and then round the total.
Rounding before adding up saved a lot of work in the days before computers.
(In the UK, when we went decimal, banks would not deal with half pence, but for many years there was still a half-pence coin, and shops often had prices ending in a half penny - hence lots of rounding.)
Use another overload of the Round function, like this:
decimal.Round(2.5m, 0, MidpointRounding.AwayFromZero)
It will output 3. And if you use
decimal.Round(2.5m, 0, MidpointRounding.ToEven)
you will get banker's rounding.