double rounded to 1 when using MsgBox(d) and Console.WriteLine(d) - vb.net

Why does VB print out 1 when d is a double approximation to 1? Shouldn't it be 0.99999 or something similar? What if I really need the actual floating-point value, and how could I print it?
Dim d As Double
For i As Integer = 1 To 10
    d = d + 0.1   ' adds 0.1 ten times; the binary result is slightly below 1
Next
MsgBox(d)
Console.WriteLine(d)
thanks

When using MsgBox or Console.WriteLine, double.ToString() is called in order to convert the double to a string.
By default this uses the G format specifier.
The general ("G") format specifier converts a number to the most compact of either fixed-point or scientific notation, depending on the type of the number and whether a precision specifier is present. The precision specifier defines the maximum number of significant digits that can appear in the result string. If the precision specifier is omitted or zero, the type of the number determines the default precision, as indicated in the following table.
And:
However, if the number is a Decimal and the precision specifier is omitted, fixed-point notation is always used and trailing zeros are preserved.
The value stored in d is not an infinite 0.9999..., but a finite binary approximation just below 1 (roughly 0.99999999999999989). The default "G" conversion for a Double shows at most 15 significant digits, so the value gets rounded and prints as 1.
A simple test is to run this:
MsgBox((0.9999999999999999999999999).ToString())
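If you actually need to see the value that is stored in d rather than the rounded default, you can ask ToString for more digits. A minimal sketch using only the standard "R" and "G17" numeric format strings (the digits in the comments are indicative):
Dim d As Double
For i As Integer = 1 To 10
    d = d + 0.1
Next
' "R" (round-trip) and "G17" both emit enough digits to reconstruct the exact Double
Console.WriteLine(d.ToString("R"))    ' e.g. 0.99999999999999989
Console.WriteLine(d.ToString("G17"))  ' e.g. 0.99999999999999989
On .NET Core 3.0 and later the default Double.ToString is already round-trippable, so a plain Console.WriteLine(d) prints the unrounded value there.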

Related

SAS TO COBOL conversion variable declaration

Friends,
I am doing a SAS to COBOL conversion. I am stuck with the declaration and conversion below, and I am getting an S0C7 abend (data exception) in the COBOL run. Please provide some solution.
IP in SAS - PD3.5
OP in SAS - z6.5
My COBOL declaration below.
IP s9.9(5);
OP .9(5);
Please suggest a solution.
Thanks a lot!!
Packed Decimal is stored one digit per nibble, which is two digits per byte, with the last nibble storing the sign. The sign nibbles C, A, F, and E are treated as positive; the sign nibbles B and D are treated as negative. Sign nibbles C and D are referred to as "preferred sign". A sign nibble of F is considered "unsigned," meaning it is neither positive nor negative, though pragmatically you can think of it as positive for arithmetic purposes. +123 is stored in two bytes as x'123C', -456 is stored as x'456D'.
The SAS PD informat specifies PDw.d where w is the width of the field in bytes and d is the number of decimal places to the right within the field. So PD3.5 is a 3 byte field (which would store 5 digits and a sign) with all 5 digits to the right of the decimal point.
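For example, under PD3.5 the value .12345 occupies three bytes as x'12345C' (five digit nibbles plus a positive sign nibble), and the informat treats all five digits as decimal places, giving 0.12345.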
To obtain the COBOL declaration for a SAS PDw.d declaration...
a = (w * 2) - 1
b = a - d
if b = 0
PIC SV9(d) Packed-Decimal
else
PIC S9(b)V9(d) Packed-Decimal
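Applied to the field in the question: for PD3.5, a = (3 * 2) - 1 = 5 and b = 5 - 5 = 0, so the first branch applies and the picture is PIC SV9(5) Packed-Decimal, as used in the declarations below.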
The SAS Z format specifies Zw.d where w is the width of the field in characters and d is the number of decimal places to the right within the field. The field will be padded with zeroes on the left to make it w characters wide. So Z6.5 specifies a 6 character output field with 5 digits to the right of the decimal point. One character is taken by the decimal point itself, and unfortunately there is no room for the sign, which may be a bug or may be intentional (perhaps all the data is known to be positive).
IP PIC SV9(5) Packed-Decimal.
OP PIC .99999.
When you MOVE IP TO OP, COBOL does the conversion from packed decimal to the edited display format (with the decimal point inserted) for you.

How to write in vb.net a decimal number without rounding the 12 digits after the decimal point?

I am trying to write the result of dividing 5991 by 2987 in vb.net, without rounding, strictly equal to 2.005691329092 with 12 digits after the decimal point. The fractional part (0.005691329092...) can also be written as the fraction 17/2987. My tests always give me 2.005691329093 because of rounding!
2 17/2987
The actual value is correct; the issue is how to format the value so it displays as expected.
There is no built-in method, but you can multiply the value by a power of ten matching the required number of digits after the decimal point, truncate it, and divide it back down.
Dim value As Double = 5991.0 / 2987.0
' Scale by 10^12, truncate, then scale back to keep exactly 12 decimal digits
value = Math.Truncate(value * Math.Pow(10, 12)) / Math.Pow(10, 12)
Console.WriteLine($"Result: {value:F12}")

Internal error occurred during runtime generation of Program Dump ID: BCD_OVERFLOW

Internal error occurred during runtime generation of Program (Dump ID: BCD_OVERFLOW)
No error during check but activation gives this error.
This issue occurs in any ABAP code when you try to assign a value to a numeric attribute or variable that is out of range (i.e. it leads to an overflow). These are the value ranges of the numeric types:
Type Value Range
---------- ------------------------------------------------------------------------------
b 0 to 255
s -32,768 to +32,767
i -2,147,483,648 to +2,147,483,647
int8 -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807
p The valid length for packed numbers is between 1 and 16 bytes. Two places are
packed into one byte, where the last byte contains only one place and the sign,
which gives a number of places (digits) of 2 * len - 1. After the
decimal separator, up to 14 decimal places are allowed (the number of decimal
places should not exceed the number of places). Depending on the field length
len and the number of decimal places dec, the value range is: (-10^(2len-1)
+1) / (10^(+dec)) to (+10^(2len-1) -1) /(10^(+dec)) in increments of 10^(-dec).
Any intermediate values are rounded decimally. Invalid content produces undefined
behavior.
decfloat16 Decimal floating point numbers of this type are represented internally with 16
places in accordance with the IEEE-754-2008 standard. Valid values are numbers
between 1E385(1E-16 - 1) and -1E-383 for the negative range, 0 and +1E-383 to
1E385(1 - 1E-16) for the positive range. Values between the ranges form the
subnormal range and are rounded. Outside of the subnormal range, each 16-digit
decimal number can be represented exactly with a decimal floating point number
of this type.
decfloat34 Decimal floating point numbers of this type are represented internally with 34
places in accordance with the IEEE-754-2008 standard. Valid values are numbers
between 1E6145(1E-34 - 1) and -1E-6143 for the negative range, 0 to +1E-6143
and 1E6145(1 - 1E-34) for the positive range. Values between the ranges form
the subnormal range and are rounded. Outside of the subnormal range, each
34-digit decimal number can be represented exactly using a decimal floating
point number.
f Binary floating point numbers are represented internally according to the
IEEE-754 standard (double precision). In ABAP, 17 places are represented (one
integer digit and 16 decimal places). Valid values are numbers between
-1.7976931348623157E+308 and -2.2250738585072014E-308 for the negative range
and between +2.2250738585072014E-308 and +1.7976931348623157E+308 for the
positive range, plus 0. Both validity intervals are extended to the value zero
by subnormal numbers according to IEEE-754. Not every sixteen-digit number can
be represented exactly by a binary floating point number.
Minimal reproducible example:
REPORT ztest.
DATA num TYPE int1.
num = 1000. " <=== run time error
The solution is to use a larger data type such as int2 (up to 32,767) or i (a 4-byte integer):
REPORT ztest.
DATA num TYPE int2. " <=== larger type
num = 1000. " <=== no more error
NB: decfloat34 is the largest possible numeric data type; it can handle virtually any value.
This issue can occur in methods, function modules, or reports.
The main reason behind this issue is a value in the code that does not fit within the range of the type of the attribute or variable it is assigned to.
For example,
DATA: num type int1 value 256.
This statement is syntactically fine, but the value being assigned to the variable num is outside the range of type int1 (0 to 255), so the dump occurs already at activation.
Solution:
DATA: num TYPE int1 VALUE 255. " <=== any value in the range 0 to 255
Similarly, this error can occur in any case where a value conflicts with the range of the type it is assigned to.

Number format in Oracle SQL

I've been given the task of exporting data from an Oracle view to a fixed-length text file, together with a specification of how the data should be exported to the text file, i.e.:
quantity NUM (10)
price NUM (8,2)
participant_id CHAR (3)
brokerage NUM (10,2)
cds_fees NUM (8,2)
My confusion arises with the numeric types when the spec says (8,2). If I'm to write it out as text, does it effectively mean
10 characters (as to_char(<field name>, '9999999.99'))
or
8 characters (as to_char(<field name>, '99999.99'))
when exporting to fixed length text field in the text file?
I was looking at this question, which gave some insight, but not the full picture.
Appreciate if someone could enlighten me with some examples.
Thanks a lot.
According to the Oracle docs on types
Optionally, you can also specify a precision (total number of digits)
and scale (number of digits to the right of the decimal point):
If a precision is not specified, the column stores values as given. If
no scale is specified, the scale is zero.
So in your case, a NUMBER(8,2) has:
8 digits in total
2 of which are after the decimal point
This gives you a range of -999999.99 to 999999.99
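In terms of the fixed-length file, that suggests reserving 10 characters for the field: for example, to_char(1234.5, '999999.99') should come out as '   1234.50', i.e. the 9 positions of the format mask plus one leading position that to_char keeps for the sign. An FM modifier or '0' digits in the mask would change the padding, so check against the spec.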
I assume that you mean NUMBER data type by NUM.
When it says NUMBER(8,2), it means that there will be at most 8 digits in total and that the number is rounded to the nearest hundredth, which means up to 6 digits before and 2 digits after the decimal point.
Refer to the Oracle doc:
You use the NUMBER datatype to store fixed-point or floating-point
numbers. Its magnitude range is 1E-130 .. 10E125. If the value of an
expression falls outside this range, you get a numeric overflow or
underflow error. You can specify precision, which is the total number
of digits, and scale, which is the number of digits to the right of
the decimal point. The syntax follows:
NUMBER[(precision,scale)]
To declare fixed-point numbers, for which you must specify scale, use
the following form:
NUMBER(precision,scale)
To declare floating-point numbers, for which you cannot specify
precision or scale because the decimal point can "float" to any
position, use the following form:
NUMBER
To declare integers, which have no decimal point, use this form:
NUMBER(precision) -- same as NUMBER(precision,0)
You cannot use constants or variables to specify precision and scale;
you must use integer literals. The maximum precision of a NUMBER value
is 38 decimal digits. If you do not specify precision, it defaults to
38 or the maximum supported by your system, whichever is less.
Scale, which can range from -84 to 127, determines where rounding
occurs. For instance, a scale of 2 rounds to the nearest hundredth
(3.456 becomes 3.46). A negative scale rounds to the left of the
decimal point. For example, a scale of -3 rounds to the nearest
thousand (3456 becomes 3000). A scale of 0 rounds to the nearest whole
number. If you do not specify scale, it defaults to 0.
NUMBER(p,s)
p (precision) = total length of the number in digits
s (scale) = places after the decimal point
So NUMBER(8,2) in your example corresponds to '999999.99'
You can see more examples here.

Xcode decimal places

I would like to display a number value to the maximum number of decimal places it actually needs. If you don't format the float with:
@"%.1f"
then it will display the number as e.g. 1.000000. What I would like is for the number to have only as many decimal places as it needs, e.g.
1 would not need any
1.5 would need 1 decimal place
1.24 would need 2 decimal places
Is there some sort of code that formats the number to the max number of decimal places?
Replace "f" with "g".
From printf(3):
gG The double argument is converted in style f or e (or F or E for G
conversions). The precision specifies the number of significant digits. If the
precision is missing, 6 digits are given; if the precision is zero, it is
treated as 1. Style e is used if the exponent from its conversion is less
than -4 or greater than or equal to the precision. Trailing zeros are removed
from the fractional part of the result; a decimal point appears only if it is
followed by at least one digit.
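For instance, printing 1.24 with "%g" gives "1.24", 1.5 gives "1.5", and 1.0 gives just "1", which is the behaviour the question asks for.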