Xcode decimal places - objective-c

I would like to display a number value with only as many decimal places as it needs. If you don't format the float with something like:
@"%.1f"
then it will display the number as e.g. 1.000000. What I would like is for the number to have only the decimal places it actually needs, e.g.
1 would not need any
1.5 would need 1 decimal place
1.24 would need 2 decimal places
Is there some sort of code that formats the number with only the decimal places it needs?

Replace "f" with "g".
From printf(3):
gG    The double argument is converted in style f or e (or F or E for G conversions). The precision specifies the number of significant digits. If the precision is missing, 6 digits are given; if the precision is zero, it is treated as 1. Style e is used if the exponent from its conversion is less than -4 or greater than or equal to the precision. Trailing zeros are removed from the fractional part of the result; a decimal point appears only if it is followed by at least one digit.

Related

How to write a decimal number in VB.NET to 12 digits after the decimal point without rounding?

I am trying to write the result of dividing 5991 by 2987 in VB.NET, without rounding, so that it is strictly equal to 2.005691329092 with 12 digits after the decimal point. The fractional part (0.005691329092) can also be written as the fraction 17/2987, i.e. the value is the mixed number 2 17/2987. My tests always end up with the rounded value 2.005691329093!
The actual value is correct; the issue is how to format it so it displays as expected.
There is no built-in format for this, but you can shift the value left by the required number of decimal places (multiply by a power of 10), truncate it, and divide back down.
Dim value As Double = 5991.0 / 2987.0
' Shift 12 decimal places to the left, drop everything beyond them, then shift back.
value = Math.Truncate(value * Math.Pow(10, 12)) / Math.Pow(10, 12)
Console.WriteLine($"Result: {value:F12}")  ' Result: 2.005691329092

Number format SQL INSERT

I have created a table where a column has the format NUMBER(2,3).
I try to insert the value 5.73 but it doesn't work.
The error is:
ORA-01438 - "value larger than specified precision allowed for this column"
Cause: When inserting or updating records, a numeric value was entered that exceeded the precision defined for the column.
I read the documentation but I don't understand the scale.
So, what type accepts values 0-99 with 3 digits after the decimal point?
Thanks.
You are misunderstanding precision and scale. Your column has a precision of 2, meaning it stores only two significant digits, and a scale of 3, meaning the value is held to three digits to the right of the decimal point.
So your column can only represent values between -0.099 and 0.099.
What you want is NUMBER(5, 3): "precision - scale" is the number of digits allowed to the left of the decimal point, so precision 5 with scale 3 gives you 0-99 with three decimal places.
This has come from here:
https://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#CNCPT1832
Optionally, you can also specify a precision (total number of digits) and scale (number of digits to the right of the decimal point):
column_name NUMBER (precision, scale)
So in your example you are allowed a total of 2 digits (and 3 digits to the right of the decimal point), which doesn't work for 5.73. Perhaps you need a type of NUMBER(3,2), which would allow 3 digits, 2 of which can be to the right of the decimal point.
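To make the difference concrete, here is a minimal Oracle sketch (the table and column names are made up for illustration):

-- NUMBER(2,3): 2 significant digits, scale 3 -> only values between -0.099 and 0.099
-- NUMBER(5,3): 5 significant digits, scale 3 -> values up to 99.999
CREATE TABLE scale_demo (
    tiny  NUMBER(2,3),
    price NUMBER(5,3)
);

INSERT INTO scale_demo (tiny)  VALUES (0.057);  -- OK: both significant digits fit within the scale
INSERT INTO scale_demo (price) VALUES (5.73);   -- OK: the 0-99 range with 3 decimal places
INSERT INTO scale_demo (tiny)  VALUES (5.73);   -- fails with ORA-01438, as in the question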

Number format in Oracle SQL

I've been given a task of exporting data from an Oracle view to a fixed-length text file, and I've been given a specification of how the data should be exported to the text file, i.e.
quantity NUM (10)
price NUM (8,2)
participant_id CHAR (3)
brokerage NUM (10,2)
cds_fees NUM (8,2)
My confusion arises with the numeric types when the spec says (8,2). If I'm to write it out the same way as text, does it effectively mean
10 characters (as in to_char(<field name>, '9999999.99'))
or
8 characters (as in to_char(<field name>, '99999.99'))
when exporting to a fixed-length text field in the text file?
I was looking at this question, which gave some insight, but not the whole picture.
Appreciate if someone could enlighten me with some examples.
Thanks a lot.
According to the Oracle docs on types
Optionally, you can also specify a precision (total number of digits)
and scale (number of digits to the right of the decimal point):
If a precision is not specified, the column stores values as given. If
no scale is specified, the scale is zero.
So in your case, NUMBER(8,2) has:
8 digits in total
2 of which are after the decimal point
This gives you a range of -999999.99 to 999999.99
I assume that you mean NUMBER data type by NUM.
When it says NUMBER(8,2), it means that there will be 8 digits in total and that the number is rounded to the nearest hundredth, which means there will be up to 6 digits before and 2 digits after the decimal point.
Refer to the Oracle doc:
You use the NUMBER datatype to store fixed-point or floating-point numbers. Its magnitude range is 1E-130 .. 10E125. If the value of an expression falls outside this range, you get a numeric overflow or underflow error. You can specify precision, which is the total number of digits, and scale, which is the number of digits to the right of the decimal point. The syntax follows:
NUMBER[(precision,scale)]
To declare fixed-point numbers, for which you must specify scale, use the following form:
NUMBER(precision,scale)
To declare floating-point numbers, for which you cannot specify precision or scale because the decimal point can "float" to any position, use the following form:
NUMBER
To declare integers, which have no decimal point, use this form:
NUMBER(precision) -- same as NUMBER(precision,0)
You cannot use constants or variables to specify precision and scale; you must use integer literals. The maximum precision of a NUMBER value is 38 decimal digits. If you do not specify precision, it defaults to 38 or the maximum supported by your system, whichever is less.
Scale, which can range from -84 to 127, determines where rounding occurs. For instance, a scale of 2 rounds to the nearest hundredth (3.456 becomes 3.46). A negative scale rounds to the left of the decimal point. For example, a scale of -3 rounds to the nearest thousand (3456 becomes 3000). A scale of 0 rounds to the nearest whole number. If you do not specify scale, it defaults to 0.
NUMBER(p,s)
p (precision) = total length of the number in digits
s (scale) = places after the decimal point
So NUMBER(8,2) in your example corresponds to the format '999999.99'.
You can see more examples here.
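As a rough illustration of what that means for the fixed-length export (the literal values below are only examples, and the exact padding behaviour should be checked against your Oracle version):

-- NUMBER(8,2): 8 digits in total, 2 after the decimal point, so the widest
-- positive value prints as 999999.99 (9 characters); TO_CHAR with a numeric
-- mask also reserves one leading position for the sign.
SELECT TO_CHAR(999999.99, '999999.99') AS widest_value   -- ' 999999.99'
FROM dual;

-- The 8-character mask from the question cannot hold the largest values,
-- and Oracle signals the overflow by filling the result with '#' characters.
SELECT TO_CHAR(999999.99, '99999.99') AS too_narrow
FROM dual;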

double rounded to 1 when using MsgBox(d) and Console.WriteLine(d)

Why does VB print out 1 when d is a double approximation of 1? Shouldn't it be 0.99999 or something similar? What if I really need the underlying floating-point value, and how could I print it?
Dim d As Double
For i = 1 To 10
    d = d + 0.1   ' 0.1 has no exact binary representation, so d ends up just below 1
Next
MsgBox(d)             ' shows 1
Console.WriteLine(d)  ' prints 1
thanks
When using MsgBox or Console.WriteLine, double.ToString() is called in order to convert the double to a string.
By default this uses the G format specifier.
The general ("G") format specifier converts a number to the most compact of either fixed-point or scientific notation, depending on the type of the number and whether a precision specifier is present. The precision specifier defines the maximum number of significant digits that can appear in the result string. If the precision specifier is omitted or zero, the type of the number determines the default precision, as indicated in the following table.
And:
However, if the number is a Decimal and the precision specifier is omitted, fixed-point notation is always used and trailing zeros are preserved.
The accumulated value is not exactly 1 but a double just below it (about 0.99999999999999989). When it is converted to a string with the default general format, it is rounded to the default number of significant digits, and that rounding produces 1.
A simple test is to run this:
MsgBox((0.9999999999999999999999999).ToString())  ' the literal itself rounds to exactly 1 as a Double, so this shows 1

How do I interpret precision and scale of a number in a database?

I have the following column specified in a database: decimal(5,2)
How does one interpret this?
According to the properties on the column as viewed in SQL Server Management Studio, I can see that it means: decimal(Numeric precision, Numeric scale).
What do precision and scale mean in real terms?
It would be easy to interpret this as a decimal with 5 digits and two decimal places... i.e. 12345.12
P.S. I've been able to determine the correct answer from a colleague but had great difficulty finding an answer online. As such, I'd like to have the question and answer documented here on stackoverflow for future reference.
Numeric precision refers to the maximum number of digits that are present in the number.
i.e. 1234567.89 has a precision of 9
Numeric scale refers to the maximum number of decimal places
i.e. 123456.789 has a scale of 3
Thus the maximum allowed value for decimal(5,2) is 999.99
Precision of a number is the number of digits.
Scale of a number is the number of digits after the decimal point.
What is generally implied when setting precision and scale on a field definition is that they represent maximum values.
For example, a decimal field defined with precision=5 and scale=2 would allow the following values:
123.45 (p=5,s=2)
12.34 (p=4,s=2)
12345 (p=5,s=0)
123.4 (p=4,s=1)
0 (p=0,s=0)
The following values are not allowed or would cause data loss:
12.345 (p=5,s=3) => excess scale; typically rounded to 12.35 (p=4,s=2)
1234.56 (p=6,s=2) => out of range (4 digits to the left of the decimal point)
123.456 (p=6,s=3) => excess scale; typically rounded to 123.46 (p=5,s=2)
123450 (p=6,s=0) => out of range
Note that the range is determined by the precision and scale together: |value| < 10^(p-s).
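A small T-SQL sketch of the same rules (the temp table name is made up; note that SQL Server rounds, rather than rejects, values that only have too many digits after the decimal point):

CREATE TABLE #precision_demo (amount decimal(5,2));

INSERT INTO #precision_demo VALUES (123.45);   -- OK: p=5, s=2
INSERT INTO #precision_demo VALUES (999.99);   -- OK: the maximum value
INSERT INTO #precision_demo VALUES (12.345);   -- accepted, but stored as 12.35 (excess scale is rounded)
INSERT INTO #precision_demo VALUES (1234.56);  -- fails: arithmetic overflow (4 digits left of the decimal point)

SELECT amount FROM #precision_demo;
DROP TABLE #precision_demo;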
Precision, Scale, and Length in the SQL Server 2000 documentation reads:
Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number. For example, the number 123.45 has a precision of 5 and a scale of 2.
Precision refers to the total number of digits while scale refers to the digits allowed after the decimal.
The example quoted in the question, 12345.12, would have a precision of 7 and a scale of 2.
Moreover, DECIMAL(precision, scale) is an exact-value data type, unlike something like FLOAT(precision, scale), which stores approximate numeric data.
For example, a column defined as FLOAT(7,4) is displayed as -999.9999. MySQL performs rounding when storing values, so if you insert 999.00009 into a FLOAT(7,4) column, the approximate result is 999.0001.
Let me know if this helps!