How do I interpret precision and scale of a number in a database? - sql

I have the following column specified in a database: decimal(5,2)
How does one interpret this?
According to the properties on the column as viewed in SQL Server Management Studio, I can see that it means: decimal(Numeric precision, Numeric scale).
What do precision and scale mean in real terms?
It would be easy to interpret this as a decimal with 5 digits and two decimal places... i.e. 12345.12
P.S. I've been able to determine the correct answer from a colleague but had great difficulty finding an answer online. As such, I'd like to have the question and answer documented here on Stack Overflow for future reference.

Numeric precision refers to the maximum number of digits that are present in the number.
ie 1234567.89 has a precision of 9
Numeric scale refers to the maximum number of decimal places
ie 123456.789 has a scale of 3
Thus the maximum allowed value for decimal(5,2) is 999.99
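To see this in SQL Server, you can try a quick sketch like the following (the variable name is just for illustration):
DECLARE @d decimal(5,2);
SET @d = 999.99;   -- fits: 5 digits in total, 2 of them after the decimal point
SELECT @d;         -- 999.99
SET @d = 1000.00;  -- fails with an arithmetic overflow error: 4 integer digits don't fit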

Precision of a number is the number of digits.
Scale of a number is the number of digits after the decimal point.
What is generally implied when setting precision and scale on a field definition is that they represent maximum values.
For example, a decimal field defined with precision=5 and scale=2 would allow the following values:
123.45 (p=5,s=2)
12.34 (p=4,s=2)
12345 (p=5,s=0)
123.4 (p=4,s=1)
0 (p=0,s=0)
The following values are not allowed or would cause a data loss:
12.345 (p=5,s=3) => could be truncated into 12.35 (p=4,s=2)
1234.56 (p=6,s=2) => could be truncated into 1234.6 (p=5,s=1)
123.456 (p=6,s=3) => could be truncated into 123.46 (p=5,s=2)
123450 (p=6,s=0) => out of range
Note that the range of allowed values is determined by the precision and scale together: |value| < 10^(p-s), so for decimal(5,2) the magnitude must stay below 10^3 = 1000.
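A hedged T-SQL illustration of the values above (in my experience SQL Server rounds excess decimal digits on an explicit CAST rather than truncating, and raises an overflow error when the integer part is too wide):
SELECT CAST(12.345  AS decimal(5,2));  -- 12.35: the extra decimal digit is rounded away
SELECT CAST(123.456 AS decimal(5,2));  -- 123.46
SELECT CAST(123450  AS decimal(5,2));  -- error: arithmetic overflow, 6 integer digits don't fit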

The Precision, Scale, and Length topic in the SQL Server 2000 documentation reads:
Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number. For example, the number 123.45 has a precision of 5 and a scale of 2.

Precision refers to the total number of digits while scale refers to the digits allowed after the decimal.
The example quoted in the question (12345.12) would have a precision of 7 and a scale of 2.
Moreover, DECIMAL(precision, scale) is an exact value data type unlike something like a FLOAT(precision, scale) which stores approximate numeric data.
For example, a column defined as FLOAT(7,4) is displayed as -999.9999. MySQL performs rounding when storing values, so if you insert 999.00009 into a FLOAT(7,4) column, the approximate result is 999.0001.
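A small MySQL sketch of the difference (the table and column names are made up for illustration):
CREATE TABLE t (f FLOAT(7,4), d DECIMAL(7,4));
INSERT INTO t VALUES (999.00009, 999.00009);
SELECT f, d FROM t;
-- f holds a binary approximation and is displayed as 999.0001;
-- d is rounded once to 4 decimal places and then stored exactly as 999.0001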
Let me know if this helps!

Related

Losing precision in division with DB2

I've encountered some strange DB2 behaviour. An example will illustrate it:
SELECT CAST(11458.5648 AS DECIMAL(30,10)) / CAST(120.1 AS DECIMAL(30,10)), 11458.5648 / 120.1 FROM MYTABLE FETCH FIRST 1 ROW ONLY
returns:
1 | 2
---------------------------
95.4 | 95.4085328893
Of course, the correct result is in column 2, but why does DB2 make this awful error?
If I cast to DECIMAL(20,10), the result is also correct, but starting with DECIMAL(22,10), I lose 1 digit of precision in the result at each step...
Any idea about it?
thanks
You need to understand how decimal arithmetic is handled (for your platform and version of Db2).
For Db2 LUW v11.5:
Two decimal operands: If both operands are decimal, the operation is
performed in decimal. The result of any decimal arithmetic operation
is a decimal number with a precision and scale that are dependent on
the operation and the precision and scale of the operands. If the
operation is addition or subtraction and the operands do not have the
same scale, the operation is performed with a temporary copy of one of
the operands. The copy of the shorter operand is extended with
trailing zeros so that its fractional part has the same number of
digits as the longer operand.
The result of a decimal operation cannot have a precision greater than
31. The result of decimal addition, subtraction, and multiplication is derived from a temporary result which can have a precision greater
than 31. If the precision of the temporary result is not greater than
31, the final result is the same as the temporary result.
Decimal arithmetic in SQL: Use the formulas shown here to calculate the
precision and scale of the result of decimal operations in SQL. The
formulas use the following symbols:
p Precision of the first operand.
s Scale of the first operand.
p' Precision of the second operand.
s' Scale of the second operand.
Assuming the default mode (DEC31), the scale of a division result is 31 - p + s - s'. By casting the operands to decimal(30,10), your result has:
p = 31
s = 31 - 30 + 10 - 10 = 1
Moral of the story: don't artificially increase the precision and scale of your operands.
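For illustration, a sketch of the same query with the operands left at a smaller precision (as you already observed, DECIMAL(20,10) behaves well):
SELECT CAST(11458.5648 AS DECIMAL(20,10)) / CAST(120.1 AS DECIMAL(20,10))
FROM MYTABLE FETCH FIRST 1 ROW ONLY
-- result scale = 31 - 20 + 10 - 10 = 11, so the quotient keeps its fractional digits
-- (with DECIMAL(30,10) operands the same formula leaves a scale of only 1)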

Number format SQL INSERT

I have created a table where a column has the format NUMBER(2,3).
I try to insert the value 5.73 but it doesn't work.
The error is :
ORA-01438 - "value larger than specified precision allowed for this column"
Cause: When inserting or updating records, a numeric value was entered
that exceeded the precision defined for the column.
I read the documentation but I don't understand the scale.
So, what is the accepted format? Values from 0 to 99 with 3 digits after the decimal point?
Thanks.
You are misunderstanding precision and scale. You have a number with a precision of 2. That means that there are two significant digits. It has a scale of 3, which means that these are to the right of the decimal point.
So, your column can represent values between 0.000 and 0.099
What you want is NUMERIC(5, 3). "precision - scale" is the number of digits to the left of the decimal point.
This has come from here:
https://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#CNCPT1832
Optionally, you can also specify a precision (total number of digits) and scale (number of digits to the right of the decimal point):
column_name NUMBER (precision, scale)
So in your example you are allowed a total of 2 digits (and 3 digits to the right of the decimal point), which doesn't work for 5.73. Perhaps you need a type of NUMBER(3,2), which would allow 3 digits, 2 of which can be to the right of the decimal point.
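A minimal Oracle sketch of the two definitions (the table and column names are hypothetical):
CREATE TABLE t (a NUMBER(2,3), b NUMBER(5,3));
INSERT INTO t (a) VALUES (5.73);  -- ORA-01438: only values between -0.099 and 0.099 fit in NUMBER(2,3)
INSERT INTO t (b) VALUES (5.73);  -- works: NUMBER(5,3) allows values up to 99.999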

Default Scale value of Number column in Oracle

The Oracle docs mention that the default precision value is 38 and the scale is 0.
If a precision is not specified, the column stores values as given. If
no scale is specified, the scale is zero.
But the table mentioned there (Table 26-1) contradicts the statement.
Input Data | Specified As | Stored As
7,456,123.89 | NUMBER | 7456123.89
If the default scale is 0 (the number of digits to the right of the decimal point), then how come the above number is stored with 2 decimal digits, i.e. .89?
Or have I totally misunderstood the default scale concept?
It might be more helpful to consider four different cases:
No precision or scale NUMBER
Precision and no scale NUMBER(9)
Both precision and scale NUMBER(9,2)
Scale only NUMBER(*,2)
The quote ...
If no scale is specified, the scale is zero.
... refers to the second of those, in which a precision but no scale is specified, not the first.
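A sketch of the four cases applied to the question's input value (the column names are made up; the stored values in the comments follow Oracle's rounding rules):
CREATE TABLE t (
  a NUMBER,       -- no precision or scale: stored as given         -> 7456123.89
  b NUMBER(9),    -- precision but no scale: scale defaults to 0,
                  --   so the value is rounded to a whole number    -> 7456124
  c NUMBER(9,2),  -- both precision and scale                       -> 7456123.89
  d NUMBER(*,2)   -- scale only                                     -> 7456123.89
);
INSERT INTO t VALUES (7456123.89, 7456123.89, 7456123.89, 7456123.89);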

Number format in Oracle SQL

I've been given a task of exporting data from an Oracle view to a fixed-length text file, and I've been given a specification of how the data should be exported to the text file, i.e.
quantity NUM (10)
price NUM (8,2)
participant_id CHAR (3)
brokerage NUM (10,2)
cds_fees NUM (8,2)
My confusion arises with the numeric types, where it says (8,2). If I'm to output it as text, does it effectively mean
10 characters (as to_char(<field name>, '9999999.99'))
or
8 characters (as to_char(<field name>, '99999.99'))
when exporting to a fixed-length field in the text file?
I was looking at this question, which gave some insight, but not entirely.
Appreciate if someone could enlighten me with some examples.
Thanks a lot.
According to the Oracle docs on types
Optionally, you can also specify a precision (total number of digits)
and scale (number of digits to the right of the decimal point):
If a precision is not specified, the column stores values as given. If
no scale is specified, the scale is zero.
So in your case, a NUMBER(8,2) has:
8 digits in total
2 of which are after the decimal point
This gives you a range of -999999.99 to 999999.99
I assume that you mean the NUMBER data type by NUM.
When it says NUMBER(8,2), it means that there will be 8 digits and that the number should be rounded to the nearest hundredth, which means there will be 6 digits before and 2 digits after the decimal point.
Refer to the Oracle docs:
You use the NUMBER datatype to store fixed-point or floating-point
numbers. Its magnitude range is 1E-130 .. 10E125. If the value of an
expression falls outside this range, you get a numeric overflow or
underflow error. You can specify precision, which is the total number
of digits, and scale, which is the number of digits to the right of
the decimal point. The syntax follows:
NUMBER[(precision,scale)]
To declare fixed-point numbers, for which you must specify scale, use
the following form:
NUMBER(precision,scale)
To declare floating-point numbers, for which you cannot specify
precision or scale because the decimal point can "float" to any
position, use the following form:
NUMBER
To declare integers, which have no decimal point, use this form:
NUMBER(precision) -- same as NUMBER(precision,0)
You cannot use constants or variables to specify precision and scale;
you must use integer literals. The maximum precision of a NUMBER value
is 38 decimal digits. If you do not specify precision, it defaults to
38 or the maximum supported by your system, whichever is less.
Scale, which can range from -84 to 127, determines where rounding
occurs. For instance, a scale of 2 rounds to the nearest hundredth
(3.456 becomes 3.46). A negative scale rounds to the left of the
decimal point. For example, a scale of -3 rounds to the nearest
thousand (3456 becomes 3000). A scale of 0 rounds to the nearest whole
number. If you do not specify scale, it defaults to 0.
NUMBER(p,s)
p(precision) = length of the number in digits
s(scale) = places after the decimal point
So NUMBER(8,2) in your example is a '999999.99'.
You can see more examples here.
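A sketch of how that mask might be used for the export (the view and column names are placeholders, and it assumes NUM(8,2) in the spec means Oracle NUMBER(8,2)):
SELECT TO_CHAR(price, '999999.99') FROM my_view;
-- 6 integer digits + the decimal point + 2 decimal digits, left-padded to a fixed width,
-- which is what a fixed-length text file needs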

Why does decimal behave differently?

I am doing this small exercise:
declare @No decimal(38,5);
set @No = 12345678910111213.14151;
select @No*1000/1000, @No/1000*1000, @No;
Results are:
12345678910111213.141510
12345678910111213.141000
12345678910111213.14151
Why are the results of the first two expressions different when mathematically they should be the same?
It is not going to do algebra to convert 1000/1000 into 1; it is going to follow the order of operations and evaluate each step.
@No*1000/1000
yields: @No*1000 = 12345678910111213141.51000
then /1000 = 12345678910111213.141510
and
@No/1000*1000
yields: @No/1000 = 12345678910111.213141
then *1000 = 12345678910111213.141000
By dividing first you lose decimal digits.
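A sketch that makes the two intermediate results visible (using the same @No as above):
SELECT @No*1000, @No/1000;
-- 12345678910111213141.51000   (multiplying first keeps the full fraction)
-- 12345678910111.213141        (dividing first leaves only six decimal places)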
The second SQL expression first divides by 1000, giving 12345678910111.21314151, but the intermediate result only keeps six decimal places, so you lose the trailing decimal digits.
because when you divide first you get:
12345678910111.21314151
then only six decimal digits are left after the point:
12345678910111.213141
then *1000
12345678910111213.141
Because the intermediate result is itself limited to 38 digits of precision, dividing first gives you a loss of scale that's reflected in the truncated answer. Multiplying by 1000 first doesn't cause any loss because the product still fits within 38 digits.
It's probably because you lose part of the data by doing the division first. Notice that @No has 5 decimal places, so when you divide this number by 1000 you suddenly need 8 digits for the decimal part:
123.12345 / 1000 = 0.12312345
So the value has to be rounded (0.12312), and then this value is multiplied by 1000 -> 123.12 (you lose 0.00345).
I think that's why the result is what it is...
The first expression does @No*1000 and then divides it by 1000. The intermediate values are always able to represent all the decimal places. The second expression first divides by 1000, which throws away the last two decimal places, before multiplying back to the original value.
You can get around the problem by using CONVERT or CAST on the first value in your expression to increase the number of decimal places and avoid a loss of precision.
DECLARE @num decimal(38,5)
SET @num = 12345678910111213.14151
SELECT CAST(@num AS decimal(38,8)) / 1000 * 1000