Is MidpointRounding.AwayFromZero working right in .NET Core 3.1?

It's my understanding from the docs that MidpointRounding.ToEven is the default behavior of Math.Round(). I am using MidpointRounding.AwayFromZero to override that and it does not appear to work. I'm either confused about how MidpointRounding.AwayFromZero works or it isn't working right.
For example, I have a double: 0.285. In my simple mind, when rounding that to two decimal places, it should round to 0.29. However, .NET Core 3.1's Math.Round rounds it to 0.28, which is exactly the same behavior as the default for Math.Round and as MidpointRounding.ToEven. Because 0.29 is further from zero than 0.28, one would think that MidpointRounding.AwayFromZero would return 0.29, right? Why name it AwayFromZero and then return the number that is closer to zero? That doesn't make sense.
Math.Round(0.285, 2, MidpointRounding.AwayFromZero) // .NET says this is 0.28

By default, the Math.Round method uses Banker's Rounding, not "normal" rounding.
In banker's rounding, a number ending in 5 is rounded so that the last digit kept is even, rather than to the next larger number as you might expect. The idea is that, statistically, half of a sample of numbers are rounded up and half are rounded down.
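To make the two tie-breaking rules concrete, here is a small Kotlin sketch using java.math.BigDecimal with exact decimal inputs, so only the rounding strategy matters. RoundingMode.HALF_EVEN is the JVM's name for banker's rounding (what .NET calls MidpointRounding.ToEven), and HALF_UP corresponds to AwayFromZero; this only illustrates the strategies, not the .NET API itself.
import java.math.BigDecimal
import java.math.RoundingMode
BigDecimal("2.5").setScale(0, RoundingMode.HALF_EVEN) // 2 - banker's rounding: the tie goes to the even digit
BigDecimal("3.5").setScale(0, RoundingMode.HALF_EVEN) // 4
BigDecimal("2.5").setScale(0, RoundingMode.HALF_UP)   // 3 - "normal" rounding: the tie goes away from zero
BigDecimal("3.5").setScale(0, RoundingMode.HALF_UP)   // 4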
The reason your particular number is not rounded the way you expect is described in the official docs:
Notes to Callers
Because of the loss of precision that can result from representing decimal values as floating-point numbers or performing arithmetic operations on floating-point values, in some cases the Round(Double, Int32, MidpointRounding) method may not appear to round midpoint values as specified by the mode parameter. This is illustrated in the following example, where 2.135 is rounded to 2.13 instead of 2.14. This occurs because internally the method multiplies value by 10^digits, and the multiplication operation in this case suffers from a loss of precision.
This is indeed due to the limited precision of floating-point numbers. 0.5 can be stored perfectly in IEEE floating point, but 0.45, 0.445, etc. cannot. For example, the actual value that is stored when you specify 2.44445 is 11009049289107177/4503599627370496, which is 2.44449999999999989519494647... It should now be obvious why the number is rounded the way it is. (per cdhowie)
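You can see the same thing for the 0.285 in the question. Here is a quick Kotlin/JVM check; it assumes only that a JVM double is the same IEEE 754 binary64 value as a .NET double, and uses the BigDecimal(double) constructor to expose the exact stored value:
import java.math.BigDecimal
BigDecimal(0.285)   // 0.28499999999999997... - the stored double is slightly below 0.285
BigDecimal("0.285") // 0.285 exactly, for comparison
Because the stored value sits below the midpoint, rounding it to two places gives 0.28 whichever MidpointRounding mode you ask for.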
If you need to store fractional numbers precisely, consider using the decimal type instead.
Solution
Use the decimal overload: public static decimal Round(decimal d, int decimals, MidpointRounding mode);
Math.Round((decimal)0.285, 2, MidpointRounding.AwayFromZero); // .NET is 0.29
For more details about Math.Round, see this answer from Sergey Berezovskiy:
.NET Math.Round(,,MidpointRounding.AwayFromZero) not working correctly

Related

Why does Math.Round(1.275, 2) equal 1.27?

In certain cases, rounding under Visual Studio 2019 does not work for me as outlined in the documentation.
Under Visual Studio 2019, when I run Math.Round(1.275, 2) I get 1.27. Based on the default rounding, it should round to the nearest even digit, giving 1.28. If I run Math.Round(0.275, 2) I get 0.28.
The documentation also gives you the reason for this unexpected behavior:
When rounding midpoint values, the rounding algorithm performs an equality test. Because of problems of binary representation and precision in the floating-point format, the value returned by the method can be unexpected. For more information, see Rounding and precision.
Floating-point numbers are stored according to IEEE 754, which is not always a precise representation of the actual value you want to store. You will find lots of resources where you can learn about how floating-point numbers are represented in binary and how exactly IEEE 754 works.
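For the two values in the question, a quick Kotlin/JVM check (assuming only that JVM and .NET doubles share the same IEEE 754 binary64 format) shows what is actually stored:
import java.math.BigDecimal
BigDecimal(1.275) // 1.2749999999999999... - just below 1.275, so it rounds down to 1.27
BigDecimal(0.275) // 0.27500000000000002... - just above 0.275, so it rounds up to 0.28
Neither value is a true midpoint, so the ToEven tie-break never actually fires for them.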
Two things here.
First, this is likely a result of the underlying representation of floats. The documentation itself warns:
When rounding midpoint values, the rounding algorithm performs an equality test. Because of problems of binary representation and precision in the floating-point format, the value returned by the method can be unexpected.
Second, the Math.Round function also takes a rounding strategy argument as the third parameter. Based on the default rounding strategy, this behavior actually seems in line with what the documentation specifies. Check out this example they have:
// 3.4 = Math.Round(3.45, 1)
// -3.4 = Math.Round(-3.45, 1)
// 3.4 = Math.Round(3.45, 1, MidpointRounding.ToEven)
// 3.5 = Math.Round(3.45, 1, MidpointRounding.AwayFromZero)
// 3.4 = Math.Round(3.47, 1, MidpointRounding.ToZero)
// -3.4 = Math.Round(-3.45, 1, MidpointRounding.ToEven)
// -3.5 = Math.Round(-3.45, 1, MidpointRounding.AwayFromZero)
// -3.4 = Math.Round(-3.47, 1, MidpointRounding.ToZero)
It seems to me that the default rounds a midpoint so that the last digit kept is even: 2.75 gets rounded to 2.8, while 1.275 ends up at 1.27 because the stored double is actually slightly below 1.275. Maybe I'm mistaken, but either way, check out the MidpointRounding argument; that should probably solve your problem.

Kotlin BigDecimal multiplication wrong results

I need to use BigDecimal for some computation but am a bit surprised by the behaviour:
val thousand = BigDecimal(1000)
val fee = BigDecimal(0.005)
println(thousand * fee)
You'd expect the console to contain 5 but the result is 5.000000000000000104083408558608425664715468883514404296875000
I know that I can limit the precision and do some rounding with setScale, but the real question is: why is this needed in the first place? This result is obviously wrong.
What am I missing?
The issue is likely to be with the construction of the fee BigDecimal. This is taking a double value and converting it to a BigDecimal. Unfortunately, some fairly simple decimal fractions are impossible to precisely represent as doubles or floats, and this constructor for BigDecimal will take that imprecise double as its value.
From the documentation:
The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with a scale of 1), but it is actually equal to 0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double (or, for that matter, as a binary fraction of any finite length). Thus, the value that is being passed in to the constructor is not exactly equal to 0.1, appearances notwithstanding.
The way around this is to use the String constructor, which gets around the issue of having to convert "via" a double.
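For example, here is a minimal sketch of the fix, keeping the names from the question (stripTrailingZeros is optional and only tidies up the scale of the printed result):
import java.math.BigDecimal
val thousand = BigDecimal(1000) // integer values convert exactly, so this one was never the problem
val fee = BigDecimal("0.005")   // String constructor: exactly 0.005 (BigDecimal.valueOf(0.005) also works, via Double.toString)
println(thousand * fee)                        // 5.000 - the scale of 3 is carried over from the fee
println((thousand * fee).stripTrailingZeros()) // 5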

kotlin rounding off in BigDecimal

import java.math.BigDecimal
BigDecimal(0.235).setScale(2, BigDecimal.ROUND_HALF_UP) // 0.23
BigDecimal("0.235").setScale(2, BigDecimal.ROUND_HALF_UP) // 0.24
In Kotlin, when the input 0.235 is given as a double, the output is 0.23; when it is given as a string, the output is 0.24.
Here is the definition of ROUND_HALF_UP given in the documentation:
Rounding mode where values are rounded towards the nearest neighbor. Ties are broken by rounding up.
From the BigDecimal docs:
The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with a scale of 1), but it is actually equal to 0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double (or, for that matter, as a binary fraction of any finite length). Thus, the value that is being passed in to the constructor is not exactly equal to 0.1, appearances notwithstanding.
The String constructor, on the other hand, is perfectly predictable: writing new BigDecimal("0.1") creates a BigDecimal which is exactly equal to 0.1, as one would expect. Therefore, it is generally recommended that the String constructor be used in preference to this one.
The issue here is that in the first case you are calling the BigDecimal constructor using a floating point (read: not exact) literal. Consider the following script (in Java):
BigDecimal blah = new BigDecimal(0.235d);
System.out.println(blah);
This prints 0.23499999999999998667732370449812151491641998291015625 in my demo tool. That is, you are not actually passing in the literal 0.235, but rather a floating-point approximation of it. It so happens, in this case, that the actual stored value is slightly less than 0.235, leading round-half-up to produce 0.23 rather than 0.24.
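If you would rather keep a numeric literal than a string, one workaround is BigDecimal.valueOf, which routes through the double's canonical string form; here is a small sketch using the non-deprecated RoundingMode API:
import java.math.BigDecimal
import java.math.RoundingMode
BigDecimal.valueOf(0.235).setScale(2, RoundingMode.HALF_UP) // 0.24 - valueOf uses Double.toString(0.235), which is "0.235"
BigDecimal("0.235").setScale(2, RoundingMode.HALF_UP)       // 0.24 - same exact starting value
Note that valueOf only recovers the value you typed when the double's shortest decimal form is that value; for anything money-like, constructing from the original string remains the safer habit.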

If I only need 1 or 2 digit accuracy, is float better than decimal type?

If I only need 1 or 2 digits of accuracy after the decimal place, should I use float or still go with decimal(18,2)?
The numeric value in question represents salary.
You should use the decimal type or, better still, the money type, which is specially suited to these needs.
You should not use float, as float is an approximate representation of a decimal value.
See the MSDN documentation for why we should not use float here:
The float and real data types are known as approximate data types. The behavior of float and real follows the IEEE 754 specification on approximate numeric data types.
Approximate numeric data types do not store the exact values specified for many numbers; they store an extremely close approximation of the value. For many applications, the tiny difference between the specified value and the stored approximation is not noticeable. At times, though, the difference becomes noticeable.
Because of the approximate nature of the float and real data types, do not use these data types when exact numeric behavior is required, such as in financial applications, in operations involving rounding, or in equality checks. Instead, use the integer, decimal, money, or smallmoney data types.
The main question in deciding whether you want a binary floating-point type or a decimal type is not really accuracy. It's "how would I expect the calculations to proceed?"
The main thing you get from decimal is something you can calculate on paper, with fixed-point numbers. E.g.:
13.22
+ 7.3
-----
20.52
It's not really that decimal is more precise than a float (though it can be that as well, for certain applications). The point is that it makes the same mistakes you would make on paper; it's a base-10 number, not a binary one.
Or, in another line of thought: your inputs are definitely decimal numbers (typically arriving in a string or similar). So if you use a float, you get decimal -> binary -> calculation -> decimal. Binary to decimal means no loss of information (any finite binary fraction can be represented exactly as a finite decimal fraction), but the other way around this isn't true: even something as simple as 0.1 has no finite representation as a binary fraction (just as 1/3 has no finite representation as a decimal fraction, but works fine in, say, base 6).
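Here is a small Kotlin sketch of that round trip (the same base-2 versus base-10 point applies to SQL float, which is also IEEE 754; BigDecimal stands in for an exact decimal type):
import java.math.BigDecimal
BigDecimal("0.1") // 0.1 - decimal input kept decimal stays exact
BigDecimal(0.1)   // 0.1000000000000000055511151231257827021181583404541015625 - the detour through binary bakes in the error
var sum = 0.0
repeat(10) { sum += 0.1 } // add the binary approximation of 0.1 ten times
println(sum)      // 0.9999999999999999 - arithmetic on the binary values accumulates the error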

TSQL Money casted as float is rounding the precision

I have a database that is storing amounts and being displayed in a gridview. I have an amount that is input as 3,594,879.59 and when I look in the gridview I am getting 3,594,880.00.
The SQL money type is the default; nothing was done in SQL when creating the table to customize it. In LINQ I am casting the amount to a float?
What is causing this to happen? It only happens with big numbers (e.g. I put 1.5 in the db and 1.5 shows in the gridview).
Cast the SQL money type to the CLR type decimal. Decimal is a floating-point numeric type that uses a base-10 internal representation, so it can represent any decimal number within its range and precision without approximation.
It's slower than float, and you're trading range for precision, but for anything involving money, use decimal to avoid approximation errors.
EDIT: As for "why is this happening" - two reasons. Firstly, floating-point numbers use a base-2 internal representation, in which it is impossible to represent some decimal fractions exactly. Secondly, the reason floating-point numbers are called floating-point is that instead of using a fixed precision for the integer part and a fixed precision for the fractional part, they offer a continuous trade-off between magnitude and precision. Numbers where the integral part is relatively small - like 1.5 - allow the majority of the internal representation to be assigned to the fractional part, and so provide much greater accuracy. As the magnitude of the integral part increases, the bits that were previously used for precision are now needed to store the larger integer value and so the accuracy of the fractional part is compromised.
Very, very crudely, it's like having eleven digits and being able to put the decimal point wherever you like, so for small values you can represent very accurate fractions:
1.0000000123
but for larger values, you don't have nearly so much fractional precision available:
1234567890.2
For details of how this actually works, check out the IEEE 754 standard.
If the destination is a standard 32-bit float, then you are getting exactly what you should. Try keeping it as money, or change it to a scaled integer or a double-precision (64-bit) floating-point value.
A 32-bit float has six to seven significant figures of precision. 64-bit floats have just under 16 digits of precision.
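As a rough illustration in Kotlin (whose Float and Double are the same IEEE 754 binary32/binary64 formats as .NET's float and double; the 3,594,880.00 shown in the grid additionally reflects display formatting to roughly seven significant digits):
val amount = 3594879.59   // a 64-bit double keeps ~15-16 significant digits, so the cents survive
println(amount)           // 3594879.59
println(amount.toFloat()) // 3594879.5 - a 32-bit float keeps only ~7 significant digits, and the cents are gone
println(1.5f)             // 1.5 - small values like 1.5 still fit exactly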