vb.net subtraction wrong outcome - vb.net

A very weird issue happens when subtracting two numbers in VB.NET:
it returns an incorrect result for a very simple calculation.
I am using Visual Studio 2019.
I don't want to use Math.Round to fix the value for such a simple expression.
Any ideas why this issue happens?
Dim diff As Decimal = 100.1 - 100
MsgBox(diff)
it returns 0.0999999999999943

There is a lot more going on in your math than you assume! You have three different primitive types there:
100.1 is a Double
100 is an Integer
diff is a Decimal
If you put Option Strict On at the top of your code, it will not compile, and the compiler will tell you:
Option Strict On disallows implicit conversions from 'Double' to 'Decimal'.
So you can either cast explicitly (which is what your Option Strict Off version does implicitly):
Dim diff As Decimal = CDec(100.1 - 100)
but since it's the same thing you are already doing implicitly, you will get the same floating-point issue. The better approach is to start with a Decimal instead of a Double in the first place, such as
Dim diff = 100.1D - 100
By using the Decimal literal suffix D at the end of the floating-point number, the subtraction is actually done in Decimal math, and the result is 0.1, as expected.
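For completeness, here is a minimal sketch putting the two versions side by side; the printed values assume the standard Double and Decimal behavior described above:
Option Strict On

Module DiffDemo
    Sub Main()
        ' Double math: 100.1 has no exact binary representation,
        ' so the subtraction carries the representation error.
        Dim dblDiff As Double = 100.1 - 100
        Console.WriteLine(dblDiff)      ' 0.0999999999999943 (approximately)

        ' Decimal math: the D suffix keeps the whole expression in Decimal.
        Dim decDiff As Decimal = 100.1D - 100
        Console.WriteLine(decDiff)      ' 0.1
    End Sub
End Module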

Related

CInt vs. Math.Round in Visual Basic .NET

What is the difference between:
Dim a As Integer = CInt(2.2)
and
Dim a As Integer = Math.Round(2.2)
?
CInt returns an Integer but will round a fractional part of exactly .5 to the nearest even number, so:
2 = CInt(2.5)
4 = CInt(3.5)
Are both true, which might not be what you want.
Math.Round can be told to round away from zero, but it returns a Double, so we still need to cast it:
3 = CInt(Math.Round(2.5, MidpointRounding.AwayFromZero))
There are bigger differences between CInt(), Int() and Round()... and others.
Round has rounding parameters, so it is flexible and user friendly, but it does not change the variable type; there is no "type conversion".
Meanwhile CInt() is a bit cryptic, as it rounds too, and it does perform a type conversion to Integer.
2 = Int(2.555), 3 = CInt(2.555)
2 = Int(2.5), 2 = CInt(2.5)
Some documentation states:
When the fractional part of expression is exactly .5, CInt always rounds it to the nearest even number. For example, .5 rounds to 0, and 1.5 rounds to 2.
But I do not like that "exactly 0.5"; in the real world it is "0.5000001".
So, when doing integer math (like calculating a bitmap's address Hi and Lo bytes), do not use CInt(); use the old-school Int(). That works until you get to negative numbers... see the Fix() function.
If there is no need to convert the type, use Math.Floor().
I think all this chaos of number conversion exists for some sort of compatibility with ancient software.
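To make the comparison concrete, here is a small sketch reusing the values from above (the commented results follow the rounding rules just described):
Module RoundingDemo
    Sub Main()
        ' CInt uses banker's rounding at exact midpoints
        Console.WriteLine(CInt(2.5))    ' 2  (nearest even)
        Console.WriteLine(CInt(3.5))    ' 4  (nearest even)
        Console.WriteLine(CInt(2.555))  ' 3  (not a midpoint, rounds to nearest)

        ' Int and Math.Floor simply round down towards negative infinity
        Console.WriteLine(Int(2.555))        ' 2
        Console.WriteLine(Math.Floor(2.555)) ' 2

        ' Math.Round defaults to banker's rounding but can be told otherwise
        Console.WriteLine(Math.Round(2.5))                                ' 2
        Console.WriteLine(Math.Round(2.5, MidpointRounding.AwayFromZero)) ' 3
    End Sub
End Module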
The difference between those two functions is that they do totally different things:
CInt converts to an Integer type
Math.Round rounds the value to the nearest Integer
Math.Round in this instance will get you 2.0, as specified by the MSDN documentation. You are also using the function incorrectly; see the Math.Round documentation on MSDN.
Both will raise an exception if the conversion fails; you can use Try...Catch to handle this.
Side note: you're new to VB.NET, but you might want to try switching to C#. I find it to be a hybrid of VB.NET and C++, and it may be far easier for you to work with than VB.NET.

Why Does CLng Work Differently In These Scenarios And Can It Be Reproduced In SQL Server? (Not Banker's Rounding)

Executing the following statement in Access SQL gives:
CLNG((CCUR(1.225)/1)*100) = 123
The conversion goes: Decimal > Currency > Double > Double > Long
If I remove the CCUR conversion function:
CLNG(((1.225)/1)*100) = 122
The conversion here goes: Decimal > Double > Double > Long
What is the difference between these two?
The behavior also differs between VBA code and Access SQL.
In Access SQL:
CLng((CCUR(1.015)/1)*100)/100 = 1.01 (wrong rounding)
In Access VBA:
CLng((CCUR(1.015)/1)*100)/100 = 1.02 (appropriate rounding here)
Microsoft explains that the CLng function uses Banker's Rounding:
When the fractional part is exactly 0.5, CInt and CLng always round it to the nearest even number. For example, 0.5 rounds to 0, and 1.5 rounds to 2. CInt and CLng differ from the Fix and Int functions, which truncate, rather than round, the fractional part of a number. Also, Fix and Int always return a value of the same type as is passed in.
Looking at a similar question and the subsequent answer HERE, it explains that there are changes to the bit calculation behind the scenes, based on how it is calculated, but I'm not sure how the data type affects it.
What am I missing, and why is it calculating this way? How could I reproduce this behavior predictably in SQL Server?
EDIT
After some digging I believe that this is truly the result of a floating-point rounding issue. SQL Server will round a float to the nearest whole number if it falls outside the 15-digit maximum of precision. Access seems to hold more somehow, even though a Double is equivalent to a float(53) in T-SQL.
The difference in results is a combination of two different issues: Jet/ACE vs VBA expression evaluation and binary floating point representation of decimal numbers.
The first is that the Jet/ACE expression engine implicitly converts fractional numbers to Decimal while VBA converts them to Double. This can be easily demonstrated (note the Eval() function evaluates an expression using the Jet/ACE db engine):
?Typename(1.015), eval("typename(1.015)")
Double Decimal
The second issue is that of floating point arithmetic. This is somewhat more difficult to demonstrate because VBA always rounds its output, but the issue is more obvious using another language (Python, in this case):
>>> from decimal import Decimal
>>> Decimal(1.015)
Decimal('1.0149999999999999023003738329862244427204132080078125')
The Double type in VBA uses floating-point arithmetic, while the Decimal type uses integer arithmetic (it stores the position of the decimal point behind the scenes).
The upshot to this is that Banker's rounding or traditional rounding is a red herring. The determining factor is whether the binary floating point representation of the number is slightly greater or less than its decimal representation.
To see how this works in your original question see the following VBA:
?Eval("typename((CCUR(1.225)/1))"), Eval("typename(((1.225)/1))")
Double Decimal
?Eval("typename(CCUR(1.225))"), Eval("typename(1.225)")
Currency Decimal
And Python:
>>> Decimal(1.225)
Decimal('1.225000000000000088817841970012523233890533447265625')
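The same effect can be reproduced in plain VB.NET, outside Jet/ACE. A rough sketch (my own example, reusing the 1.225 value whose binary representation is shown above):
Module MidpointDemo
    Sub Main()
        ' Double: 1.225 is stored as 1.2250000000000000888..., so *100 lands
        ' slightly above 122.5 and CLng simply rounds up; no midpoint is involved.
        Dim d As Double = 1.225
        Console.WriteLine(CLng(d * 100))    ' 123

        ' Decimal: 1.225D is exact, *100 is exactly 122.5, and CLng applies
        ' banker's rounding at the true midpoint.
        Dim m As Decimal = 1.225D
        Console.WriteLine(CLng(m * 100))    ' 122
    End Sub
End Module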
I should also point out that your assumption of the conversion to Double in your second example is incorrect. The data type remains Decimal until the final conversion to Long. The difference between the first two functions is that multiplying a Decimal by a Currency type in Jet/ACE results in a Double. This seems like somewhat odd behavior to me, but the code bears it out:
?eval("TypeName(1.225)"), eval("TypeName(1.225)")
Decimal Decimal
?eval("TypeName(CCUR(1.225))"), eval("TypeName((1.225))")
Currency Decimal
?eval("TypeName(CCUR(1.225)/1)"), eval("TypeName((1.225)/1)")
Double Decimal
?eval("TypeName((CCUR(1.225)/1)*100)"), eval("TypeName(((1.225)/1)*100)")
Double Decimal
?eval("TypeName(CLNG((CCUR(1.225)/1)*100))"), eval("TypeName(CLNG(((1.225)/1)*100))")
Long Long
So the conversion in the two cases is actually:
Decimal > Currency > Double > Double > Long (as you correctly assumed); and
Decimal > Decimal > Decimal > Decimal > Long (correcting your initial assumption).
To answer your question in the comment below, Eval() uses the same expression engine as Jet/ACE, so it is functionally equivalent to entering the same formula in an Access query. For further proof, I present the following:
SELECT
TypeName(1.225) as A1,
TypeName(CCUR(1.225)) as A2,
TypeName(CCUR(1.225)/1) as A3,
TypeName((CCUR(1.225)/1)*100) as A4,
TypeName(CLNG((CCUR(1.225)/1)*100)) as A5
SELECT
TypeName(1.225) as B1,
TypeName((1.225)) as B2,
TypeName((1.225)/1) as B3,
TypeName(((1.225)/1)*100) as B4,
TypeName(CLNG(((1.225)/1)*100)) as B5

math.floor is supposed to return integer

I am trying to get the integer part of a number after dividing two variables.
i.e., get 3 if the value is 3.75.
displaycount and itemCount are both Integer variables.
Dim cntr As Integer
cntr = Math.Floor(Math.Abs(itemCount / displaycount))
That code produces a blue squiggle in VS2012 with the warning that "runtime errors may occur when converting Double to Integer", BUT Math.Floor is supposed to take a Decimal or Double and return an integer.
"Math.Floor is supposed to take a decimal or double and return an integer." No, it isn't. It returns a value of the same type as its argument. See the documentation, e.g. Math.Floor Method (Double).
I would have expected VS to suggest a fix of adding CInt() around the RHS of the assignment; did that not appear for you?
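For reference, a sketch of the explicit-cast version that warning is nudging toward (variable names taken from the question):
Dim cntr As Integer
' Math.Floor returns a Double here, so the conversion to Integer must be explicit.
cntr = CInt(Math.Floor(Math.Abs(itemCount / displaycount)))
' If both values are known to be non-negative, the integer division operator
' would give the same result more directly: cntr = itemCount \ displaycount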
If you need an Integer as the result, consider using either the CInt, Int or Fix functions.
CInt rounds to the nearest integer using banker's rounding (n.5 rounds towards the closest even number).
Int removes the fractional part. Negative numbers are truncated towards smaller numbers:
Int(-8.4) = -9.
Fix removes the fractional part. Negative numbers are truncated towards greater numbers:
Fix(-8.4) = -8.
See Conversion.Int Method and Type Conversion Functions (Visual Basic).
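A short sketch (my own values) showing how these functions, plus their Math counterparts, treat a negative number:
Module NegativeDemo
    Sub Main()
        Console.WriteLine(CInt(-8.4))           ' -8  (rounds to the nearest integer)
        Console.WriteLine(Int(-8.4))            ' -9  (towards negative infinity)
        Console.WriteLine(Fix(-8.4))            ' -8  (towards zero)
        Console.WriteLine(Math.Floor(-8.4))     ' -9  (same direction as Int)
        Console.WriteLine(Math.Truncate(-8.4))  ' -8  (same direction as Fix)
    End Sub
End Module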

Option Strict On and Constant in Visual Basic?

Please forgive me, I haven't used this site very much! I am working in Visual Studio with Visual Basic. I finished programming my project with Option Strict Off, then when I turned Option Strict on, I was alerted that this code was wrong:
Const TAX_Decimal As Decimal = 0.07
The explanation was that "Option Strict On disallows implicit conversions from 'Double' to 'Decimal'"
But I thought I had declared it as a decimal! It made me change it to:
Const TAX_Decimal As Decimal = CDec(0.07)
The only thing I did with this constant was multiply it by a decimal and save the result to a variable declared as a Decimal!
Can someone tell me why this is happening?
Double is 8 bytes and Decimal is 16 bytes. Option Strict prevents automatic type conversion. By default, if you write a number with a decimal point in VB.NET, it is treated as a Double and not a Decimal. To make it a Decimal you have to add a type suffix to the literal (for Decimal it is D), so if you declare
Const VAR As Decimal = 0.07D
then you won't require casting.
When the compiler sees a numeric literal, it selects a type based upon the size of the number, punctuation marks, and suffix (if any), and then translates the sequence of characters in it to that type; all of this is done without regard for what the compiler is going to do with the number. Once this is done, the compiler will only allow the number to be used as its own type, explicitly cast to another type, or in the two cases defined below implicitly converted to another type.
If the number is interpreted as any integer type (int, long, etc.) the compiler will allow it to be used to initialize any integer type in which the number is representable, as well as any binary or decimal floating-point type, without regard for whether or not the number can be represented precisely in that type.
If the number is type Single [denoted by an f suffix], the compiler will allow it to be used to initialize a Double, without regard for whether the resulting Double will accurately represent the literal with which the Single was initialized.
Numeric literals of type Double [including a decimal point, but with no suffix] or Decimal [a "D" suffix not followed immediately by a plus or minus] cannot be used to initialize a variable of any other type, even if the number would be representable precisely in the target type, or the result would be the target type's best representation of the numeric literal in question.
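A small sketch illustrating those rules under Option Strict On (the literal values are my own; the commented-out line mirrors the failing case from the question):
Option Strict On

Module LiteralDemo
    Sub Main()
        Dim a As Double = 100                  ' Integer literal initializing a Double: compiles
        Dim b As Decimal = 100                 ' Integer literal initializing a Decimal: compiles
        Dim c As Double = 0.07F                ' Single literal initializing a Double: compiles
        Const TAX_Decimal As Decimal = 0.07D   ' Decimal literal (D suffix): compiles
        'Const BAD As Decimal = 0.07           ' Double literal: "Option Strict On disallows
        '                                      '  implicit conversions from 'Double' to 'Decimal'."
        Console.WriteLine(b * TAX_Decimal)     ' 7.00 (Decimal math throughout)
    End Sub
End Module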
Note that conversions between type Decimal and the other floating-point types (double and float) should be avoided whenever possible, since the conversion methods are not very accurate. While there are many double values for which no exact Decimal representation exists, there is a wide numeric range in which Decimal values are more tightly packed than double values. One might expect that converting a double would choose the closest Decimal value, or at least one of the Decimal values which is between that number and the next higher or lower double value, but the normal conversion methods do not always do so. In some cases the result may be off by a significant margin.
If you ever find yourself having to convert Double to Decimal, you're probably doing something wrong. While there are some operations which are available on Double that are not available on Decimal, the act of converting between the two types means whatever Decimal result you end up with is apt to be less precise than if all computations had been done in Double.

Cleanest way to convert a `Double` or `Single` to `Integer`, without rounding

Converting a floating-point number to an integer using either CInt or CType will cause the value of that number to be rounded. The Int function and Math.Floor may be used to convert a floating-point number to a whole number, rounding toward negative infinity, but both functions return floating-point values which cannot be implicitly used as Integer values without a cast.
Is there a concise and idiomatic alternative to IntVar = CInt(Int(FloatingPointVar))? Pascal included Round and Trunc functions which returned Integer; is there some equivalent in either the VB.NET language or in the .NET Framework?
A similar question, CInt does not round Double value consistently - how can I remove the fractional part? was asked in 2011, but it simply asked if there was a way to convert a floating-point number to an integer; the answers suggested a two-step process, but it didn't go into any depth about what does or does not exist in the framework. I would find it hard to believe that the Framework wouldn't have something analogous to the Pascal Trunc function, given that such a thing will frequently be needed when performing graphical operations using floating-point operands [such operations need to be rendered as discrete pixels, and should be rounded in such a way that round(x)-1 = round(x-1) for all x that fit within the range of +/- (2^31-1); even if such operations are rounded, they should use Floor(x+0.5), rather than round-to-nearest-even, so as to ensure the above property]
Incidentally, in C# a typecast from Double to Int using (type)expr notation uses round-to-zero semantics; the fact that this differs from the VB.NET behavior suggests that one or both languages is using its own conversion routines rather than an explicit conversion operator included in the Framework. It would seem likely that the Framework should define a conversion operator? Does such an operator exist within the framework? What does it do? Is there a way to invoke it from C# and/or VB.NET?
After some searching, it seems that VB has no clean way of accomplishing that, short of writing an extension method.
The C# (int) cast translates directly into conv.i4 in IL. VB has no such operators, and no framework function seems to provide an alternative.
Usenet had an interesting discussion about this back in 2005 – of course a lot has changed since then but I think this still holds.
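For what it's worth, a sketch of the extension-method workaround mentioned above (the method name and placement are my own):
Imports System.Runtime.CompilerServices

Module DoubleExtensions
    <Extension()>
    Public Function TruncateToInteger(value As Double) As Integer
        ' Math.Truncate drops the fractional part (rounding towards zero);
        ' CInt on an already-whole Double then cannot change the value.
        Return CInt(Math.Truncate(value))
    End Function
End Module
Usage would then be something like Dim i As Integer = someDouble.TruncateToInteger(). Like the C# (int) cast and Pascal's Trunc, this rounds towards zero rather than towards negative infinity.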
You can use the Math.Truncate method.
Calculates the integral part of a specified double-precision floating-point number.
For example:
Dim a As Double = 1.6666666
Dim b As Integer = CInt(Math.Truncate(a)) ' b = 1 (Math.Truncate returns a Double, so the cast is still needed)
I know this is an old question, but I saw no one suggest the Math.Round() function.
Yes, Math.Round takes a Double and returns a Double. However, it returns a number that has been rounded to a whole number, so it converts easily and concisely to an Integer using CInt. Would that suffice?
CInt(Math.Round(10000.54564)) ' = 10001
CInt(Math.Round(10000.49564)) ' = 10000
You may need to extract the integer part of a float number; in C#, for example (this string-based approach assumes '.' is the decimal separator):
float num = 12.234f;                  // the 'f' suffix makes this a float literal
string toint = num.ToString();
string[] auxil = toint.Split('.');    // Split returns a string array
int newnum = int.Parse(auxil[0]);     // newnum = 12