Remove scientific notation representation from a number of type double in C# and WPF

I am working with WPF and a C# backend. I have a unit-conversion package which converts inches to millimetres, and all the values in this conversion are doubles. In the UI, when I type "0.000039", the value is converted to scientific notation as soon as I type the '3', and the '9' is then appended to the scientific-notation value. To get rid of this, I want to suppress the scientific-notation representation of numbers of type double. How can I do this?
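No answer is included in this excerpt, but the usual fix is a formatting one: a double has no notion of notation, only its string representation does. A minimal C# sketch (the format strings are illustrative, not from the original post):

using System;
using System.Globalization;

class Demo
{
    static void Main()
    {
        double value = 0.000039;
        // The default "G" format falls back to scientific notation for small magnitudes:
        Console.WriteLine(value.ToString());  // 3.9E-05
        // A custom pattern with enough placeholder digits stays in fixed-point notation:
        Console.WriteLine(value.ToString("0.####################", CultureInfo.InvariantCulture));  // 0.000039
        // "F" with an explicit digit count also works when the scale is known:
        Console.WriteLine(value.ToString("F6", CultureInfo.InvariantCulture));  // 0.000039
    }
}

In WPF the same format string can be applied in the binding itself, e.g. Text="{Binding MyValue, StringFormat={}{0:0.####################}}" (the property name here is hypothetical).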

Related

Data type character declaration (0.05D): why is the D not redundant after declaration?

I'm taking a Visual Basic class and I've been taught to use a type character after declaring a constant variable that is a decimal, like so:
Const VARIABLE_NAME As Decimal = 0.06D
It seems redundant to me to add the D at the end, as I have already declared the data type. I'm afraid to ask my teacher, because I assume she probably won't be able to give me a clear answer in front of the class. I previously took a class on microprocessors, so I have some (little) understanding of how floats are stored in memory using binary. Can anyone give me a clear explanation so I can share it with my other classmates?
The data type you declare for your entity (a constant) is not necessarily the data type of the expression used to initialize that entity. You declare the type on the left side of the =, and it does not extend to the right. If the data types do not match, a conversion will need to happen upon assignment.
As documented, the type of a literal expression is dictated by its shape. A literal that falls under "Numeric, fractional part" is interpreted as a Double by default.
If you enable Option Strict On (which you should), the declaration
Const VARIABLE_NAME As Decimal = 0.06
will fail with the error:
Option Strict On disallows implicit conversions from 'Double' to 'Decimal'.
This is because there is no implicit conversion from Double to Decimal, as the Double data type can possibly contain values that Decimal cannot represent.
To avoid the conversion, you provide a type character D that makes the literal Decimal in the first place.
Compare this to
Const VARIABLE_NAME As Decimal = 42
The left part is Decimal and the right part is Integer, yet no compile error occurs even with Option Strict On, because there is an implicit widening conversion from Integer to Decimal: Decimal can represent every value an Integer can possibly hold.
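For comparison, C# enforces the same rule with its m type suffix; a minimal sketch (the class and constant names are mine, not from the original answer):

class Rates
{
    // A fractional literal with no suffix is a double, and there is no
    // implicit conversion from double to decimal:
    // const decimal TaxBad = 0.06;    // error CS0664: add an 'm' suffix
    public const decimal Tax = 0.06m;  // the 'm' suffix makes the literal itself decimal
    public const decimal Answer = 42;  // fine: int widens implicitly to decimal
}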

Why does CLng work differently in these scenarios, and can it be reproduced in SQL Server? (Not Banker's rounding)

Executing the following statement in Access SQL results in:
CLng((CCur(1.225)/1)*100) = 123
The conversion goes: Decimal > Currency > Double > Double > Long.
If I remove the CCur conversion function:
CLng(((1.225)/1)*100) = 122
The conversion here goes: Decimal > Double > Double > Long.
What is the difference between these two?
The results also differ between VBA code and Access SQL.
In Access SQL
CLng((CCur(1.015)/1)*100)/100 = 1.01 (wrong rounding)
In Access VBA
CLng((CCur(1.015)/1)*100)/100 = 1.02 (appropriate rounding here)
Microsoft explains that the CLng function uses Banker's rounding, here.
When the fractional part is exactly 0.5, CInt and CLng always round it to the nearest even number. For example, 0.5 rounds to 0, and 1.5 rounds to 2. CInt and CLng differ from the Fix and Int functions, which truncate, rather than round, the fractional part of a number. Also, Fix and Int always return a value of the same type as is passed in.
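For reference, .NET's Math.Round defaults to the same midpoint-to-even behavior, which is easy to check in C# (this comparison is mine, not from the original post):

using System;

class BankersRounding
{
    static void Main()
    {
        // MidpointRounding.ToEven is the default, matching CInt/CLng:
        Console.WriteLine(Math.Round(0.5));  // 0
        Console.WriteLine(Math.Round(1.5));  // 2
        Console.WriteLine(Math.Round(2.5));  // 2
        // MidpointRounding.AwayFromZero gives "traditional" rounding instead:
        Console.WriteLine(Math.Round(2.5, MidpointRounding.AwayFromZero));  // 3
    }
}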
Looking at a similar question and the subsequent answer here, it explains that there are changes to the bit calculation behind the scenes based on how it is calculated, but I'm not sure how the data type affects it.
What am I missing, and why is it calculating this way? How could I reproduce this behavior predictably in SQL Server?
EDIT
After some digging, I believe that this is truly the result of a floating-point rounding issue. SQL Server will round floats to the nearest whole number if the value falls outside the 15-digit precision maximum. Access seems to hold more precision somehow, even though a Double is equivalent to a float(53) in T-SQL.
The difference in results is a combination of two different issues: Jet/ACE vs VBA expression evaluation and binary floating point representation of decimal numbers.
The first is that the Jet/ACE expression engine implicitly converts fractional numbers to Decimal while VBA converts them to Double. This can be easily demonstrated (note the Eval() function evaluates an expression using the Jet/ACE db engine):
?Typename(1.015), eval("typename(1.015)")
Double Decimal
The second issue is that of floating point arithmetic. This is somewhat more difficult to demonstrate because VBA always rounds its output, but the issue is more obvious using another language (Python, in this case):
>>> from decimal import Decimal
>>> Decimal(1.015)
Decimal('1.0149999999999999023003738329862244427204132080078125')
The Double type in VBA uses floating-point arithmetic, while the Decimal type uses integer arithmetic (it stores the position of the decimal point behind the scenes).
The upshot is that Banker's rounding versus traditional rounding is a red herring here. The determining factor is whether the binary floating-point representation of the number is slightly greater or slightly less than its decimal representation.
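The same effect can be shown in C# (my illustration, not part of the original answer): the stored double for 1.015 is slightly below 1.015, so the rounding direction is settled before any midpoint rule comes into play.

using System;

class Demo
{
    static void Main()
    {
        double d = 1.015;
        // "G17" prints enough digits to expose the actual stored value:
        Console.WriteLine(d.ToString("G17"));  // 1.0149999999999999
        // Scaled up, the double sits below the midpoint, so it rounds down:
        Console.WriteLine(Math.Round(d * 100));       // 101
        // Exact decimal arithmetic keeps the true midpoint, which rounds to even:
        Console.WriteLine(Math.Round(1.015m * 100));  // 102
    }
}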
To see how this works in your original question see the following VBA:
?Eval("typename((CCUR(1.225)/1))"), Eval("typename(((1.225)/1))")
Double Decimal
?Eval("typename(CCUR(1.225))"), Eval("typename(1.225)")
Currency Decimal
And Python:
>>> Decimal(1.225)
Decimal('1.225000000000000088817841970012523233890533447265625')
I should also point out that your assumption of the conversion to Double in your second example is incorrect. The data type remains Decimal until the final conversion to Long. The difference between the first two functions is that multiplying a Decimal by a Currency type in Jet/ACE results in a Double. This seems like somewhat odd behavior to me, but the code bears it out:
?eval("TypeName(1.225)"), eval("TypeName(1.225)")
Decimal Decimal
?eval("TypeName(CCUR(1.225))"), eval("TypeName((1.225))")
Currency Decimal
?eval("TypeName(CCUR(1.225)/1)"), eval("TypeName((1.225)/1)")
Double Decimal
?eval("TypeName((CCUR(1.225)/1)*100)"), eval("TypeName(((1.225)/1)*100)")
Double Decimal
?eval("TypeName(CLNG((CCUR(1.225)/1)*100))"), eval("TypeName(CLNG(((1.225)/1)*100))")
Long Long
So the conversion in the two cases is actually:
Decimal > Currency > Double > Double > Long (as you correctly assumed); and
Decimal > Decimal > Decimal > Decimal > Long (correcting your initial assumption).
To answer your question in the comment below, Eval() uses the same expression engine as Jet/ACE, so it is functionally equivalent to entering the same formula in an Access query. For further proof, I present the following:
SELECT
    TypeName(1.225) AS A1,
    TypeName(CCur(1.225)) AS A2,
    TypeName(CCur(1.225)/1) AS A3,
    TypeName((CCur(1.225)/1)*100) AS A4,
    TypeName(CLng((CCur(1.225)/1)*100)) AS A5

SELECT
    TypeName(1.225) AS B1,
    TypeName((1.225)) AS B2,
    TypeName((1.225)/1) AS B3,
    TypeName(((1.225)/1)*100) AS B4,
    TypeName(CLng(((1.225)/1)*100)) AS B5

Represent Double as floating point binary string using built-in functions

I'm using VB.NET, writing a WinForms application where I'm trying to convert a denary real number to a signed floating-point binary number, as a string representation. For example, 9.125 would become "0100100100000100" (the first ten digits are the significand and the last six digits the exponent).
I can write a function for this if I have to, but I'd rather not waste time if there's built-in functionality available. I know there's some ToString overload or something that works on Integers, but I haven't been able to find anything that works on Doubles.
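No answer is included in this excerpt. For what it's worth, there is no built-in that produces the custom 10-bit/6-bit layout in the question, but the raw IEEE-754 bits of a Double can be obtained with built-ins; packing them into the custom format would still need hand-written code. A C# sketch of the built-in part:

using System;

class Demo
{
    static void Main()
    {
        double value = 9.125;
        // DoubleToInt64Bits reinterprets the double's IEEE-754 bit pattern as a long...
        long bits = BitConverter.DoubleToInt64Bits(value);
        // ...and Convert.ToString(value, 2) renders that long in base 2:
        string binary = Convert.ToString(bits, 2).PadLeft(64, '0');
        Console.WriteLine(binary);  // 64-bit sign/exponent/mantissa layout
    }
}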

Option Strict On and Constant in Visual Basic?

Please forgive me, I haven't used this site very much! I am working in Visual Studio with Visual Basic. I finished programming my project with Option Strict Off, then when I turned Option Strict on, I was alerted that this code was wrong:
Const TAX_Decimal As Decimal = 0.07
The explanation was that "Option Strict On disallows implicit conversions from 'Double' to 'Decimal'"
But I thought I had declared it as a decimal! It made me change it to:
Const TAX_Decimal As Decimal = CDec(0.07)
The only thing I did with this constant was multiply it by a decimal and save the result to a variable declared as a decimal!
Can someone tell me why this is happening?
Double is 8 bytes and Decimal is 16 bytes. Option Strict prevents automatic type conversion. By default, if you write a number with a fractional part in VB.NET, it is considered a Double, not a Decimal. To make the literal a Decimal you have to use a type character (for Decimal in VB.NET that is D; m is the C# equivalent), so if you declare
Const VAR As Decimal = 0.07D
then you won't require casting.
When the compiler sees a numeric literal, it selects a type based upon the size of the number, punctuation marks, and suffix (if any), and then translates the sequence of characters in it to that type; all of this is done without regard for what the compiler is going to do with the number. Once this is done, the compiler will only allow the number to be used as its own type, explicitly cast to another type, or, in the two cases defined below, implicitly converted to another type.
If the number is interpreted as any integer type (int, long, etc.) the compiler will allow it to be used to initialize any integer type in which the number is representable, as well as any binary or decimal floating-point type, without regard for whether or not the number can be represented precisely in that type.
If the number is type Single [denoted by an f suffix], the compiler will allow it to be used to initialize a Double, without regard for whether the resulting Double will accurately represent the literal with which the Single was initialized.
Numeric literals of type Double [including a decimal point, but with no suffix] or Decimal [a "D" suffix not followed immediately by a plus or minus] cannot be used to initialize a variable of any other type, even if the number would be representable precisely in the target type, or the result would be the target type's best representation of the numeric literal in question.
Note that conversions between type Decimal and the other floating-point types (double and float) should be avoided whenever possible, since the conversion methods are not very accurate. While there are many double values for which no exact Decimal representation exists, there is a wide numeric range in which Decimal values are more tightly packed than double values. One might expect that converting a double would choose the closest Decimal value, or at least one of the Decimal values which is between that number and the next higher or lower double value, but the normal conversion methods do not always do so. In some cases the result may be off by a significant margin.
If you ever find yourself having to convert Double to Decimal, you're probably doing something wrong. While there are some operations which are available on Double that are not available on Decimal, the act of converting between the two types means whatever Decimal result you end up with is apt to be less precise than if all computations had been done in Double.

Convert SqlDecimal to Decimal

The problem is that the SqlDecimal datatype packs more bits than the Decimal datatype which is native to the CLR. So how does one map between the two in the most practical way? This won't work that well:
SqlDecimal x = ...
decimal z = x.Value; // can overflow
To let more numbers pass, one can strip trailing zeros. But if you accept the loss of precision that the conversion gives you, one would expect there'd be a function to do this lossy conversion.
Is there? Or what would be best practices here?
I've already made a function which both crops and removes trailing zeroes to do the conversion, but I'd rather use a standard .NET BCL function if one exists.
Use SqlDecimal.Round to round to a scale that the .NET decimal type can represent.
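A minimal C# sketch of that suggestion (the sample value is mine, and this assumes the integer part already fits in a decimal):

using System;
using System.Data.SqlTypes;

class Demo
{
    static void Main()
    {
        // SqlDecimal carries up to 38 significant digits; System.Decimal holds 28-29,
        // so reading .Value directly can throw OverflowException.
        SqlDecimal x = SqlDecimal.Parse("1.2345678901234567890123456789012345");

        // SqlDecimal.Round cuts the scale (digits after the decimal point),
        // a deliberately lossy step that brings the value into decimal's range.
        SqlDecimal rounded = SqlDecimal.Round(x, 27);
        decimal z = rounded.Value;  // no longer overflows
        Console.WriteLine(z);       // 1.234567890123456789012345679
    }
}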