Database field definitions - SQL

I need to do a data migration from a database and I'm not too familiar with databases, so I would like some clarification. I have some documentation that could apply to either an Oracle or a SQL Server database, and it has a column defined as NUMBER(10,5). I would like to know what this means. I think it means that the number has 10 digits with 5 after the decimal point, but I would like clarification. Also, would this be different between SQL Server and Oracle?

The first number is the precision; the second number is the scale. The equivalent in SQL Server is DECIMAL/NUMERIC, and you could define it like so:
DECLARE @MyDec decimal(18,2)
The 18 is the maximum total number of decimal digits that can be stored (for instance, in 123.45 the precision is 5, while the scale is 2). The 2 is the scale, and it specifies the maximum number of digits stored to the right of the decimal point.
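For instance, a minimal T-SQL sketch of that 123.45 example (the variable name is just illustrative):
DECLARE @MyDec decimal(5,2) = 123.45;  -- precision 5 (total digits), scale 2 (digits after the point)
SELECT @MyDec;                         -- returns 123.45, stored exactly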
Just remember: the higher the precision, the more storage bytes required, so keep it to a minimum where possible.
p (precision)
Specifies the maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point. The precision must be a value from 1 through the maximum precision. The maximum precision is 38. The default precision is 18.
s (scale)
Specifies the maximum number of decimal digits that can be stored to the right of the decimal point. Scale must be a value from 0 through p. Scale can be specified only if precision is specified. The default scale is 0; therefore, 0 <= s <= p. Maximum storage sizes vary, based on the precision.
Finally, it is worth mentioning that in Oracle you can define a scale greater than the precision; for instance, NUMBER(3,10) is valid in Oracle. SQL Server, on the other hand, requires that precision >= scale, so if you defined NUMBER(3,10) in Oracle, it would map into SQL Server as NUMERIC(10,10).
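A hedged sketch of that asymmetry (the table and column names are made up):
-- Oracle: the scale may exceed the precision
CREATE TABLE scale_demo (tiny NUMBER(3,10));   -- at most 3 significant digits, within 10 decimal places
INSERT INTO scale_demo VALUES (0.0000000999);  -- OK: only 3 digits are significant at scale 10
-- SQL Server requires precision >= scale, so the closest equivalent there is NUMERIC(10,10)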

Defining a column in Oracle as NUMBER(10,5) means that the value can have up to 10 significant digits in total, with at most 5 of them after the decimal point. That leaves at most 5 digits to the left of the decimal point, so the largest value the column supports is 99999.99999. For example, this value will be accepted by a column defined as NUMBER(10,5):
12345.67890
whereas 1234567890 will be rejected with ORA-01438, because it needs 10 digits to the left of the decimal point.
It made validation a pain.
MySQL and SQL Server don't support the NUMBER data type - to support decimals, you're looking at using DECIMAL (or FLOAT?). I haven't looked at PostgreSQL, but I would figure it to be similar to Oracle.
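If you want to verify the bounds yourself, here is a hedged Oracle sketch (the table name is made up):
CREATE TABLE migration_check (amount NUMBER(10,5));
INSERT INTO migration_check VALUES (12345.67890);  -- OK: 10 digits, 5 of them after the point
INSERT INTO migration_check VALUES (99999.99999);  -- OK: the largest value the column accepts
INSERT INTO migration_check VALUES (1234567890);   -- ORA-01438: needs 10 digits left of the point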

In Oracle, a column defined as NUMBER(4,5) requires a zero for the first digit after the decimal point and rounds all values past the fifth digit after the decimal point.
From the Oracle documentation:
NUMBER(p,s)
where p is the precision, or the total number of digits. Oracle guarantees the portability of numbers with precision ranging from 1 to 38. s is the scale, or the number of digits to the right of the decimal point. The scale can range from -84 to 127.
Here are some examples:
Actual data .000127 stored in NUMBER(4,5) becomes .00013
Actual data 7456123.89 stored in NUMBER(7,-2) becomes 7456100
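Both examples can be reproduced with a hedged Oracle sketch (the table name is made up):
CREATE TABLE rounding_demo (a NUMBER(4,5), b NUMBER(7,-2));
INSERT INTO rounding_demo VALUES (0.000127, 7456123.89);
SELECT a, b FROM rounding_demo;  -- returns .00013 and 7456100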
Edit: JonH mentions something noteworthy:
Oracle allows the scale to exceed the precision, and SQL Server will map that so that if s > p, then p becomes s. That is, NUMBER(3,4) in Oracle becomes NUMERIC(4,4) in SQL Server.

Related

SQL Server - float vs varchar

In SQL Server, I have decimal data to be stored in a table (which is never used for joins or filtering). This decimal data is variable - 80% of the time it has single-digit values (1, 4, 5) and the remaining 20% are 16-digit decimals (0.8999999761581421, 3.0999999046325684).
I am wondering if I can save any storage space by going with varchar instead of float, or whether I should stick with float since this is numeric data.
Here's an interesting observation:
Start with the mathematical value 0.9.
Convert that to a binary number. For the same reason that 1/3 cannot be expressed in a finite number of digits in base 10, the number 0.9 cannot be expressed in a finite number of digits in base 2. The exact mathematical value is:
0.1 1100 1100 1100 1100 1100 1100 1100 ... with the "1100" repeating forever.
Let's store this value in an IEEE-754 single-precision floating-point value. (In SQL Server, this is called the REAL type.) To do that we have to round to 24 significant bits (the 23 stored fraction bits plus the implied leading bit). The result is:
0.1 1100 1100 1100 1100 1100 110
Convert this to its exact decimal equivalent, and you get:
0.89999997615814208984375
Round that to 16 places after the decimal point. You get:
0.8999999761581421
Which is exactly the value you show in your example.
If you do the same thing to 3.1, you get 3.0999999046325684
Is it possible that all your inputs are simply numbers with one digit after the decimal point, which have been stored as a floating-point value, and then converted back into decimal?
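You can reproduce the round trip in T-SQL; a minimal sketch (the exact digits displayed may vary by client, so the values are forced into DECIMAL to make them explicit):
SELECT CONVERT(DECIMAL(17,16), CONVERT(FLOAT, CONVERT(REAL, 0.9)));  -- 0.8999999761581421
SELECT CONVERT(DECIMAL(17,16), CONVERT(FLOAT, CONVERT(REAL, 3.1)));  -- 3.0999999046325684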
Always use the most appropriate datatype! Since this is clearly numerical data, use a numerical type. That will let you, for example, sum the values or order by them - these are numbers, so treat and store them as such!
If you need to support fractions, you could use FLOAT or REAL, but those are notorious for rounding errors. Using DECIMAL(p,s) avoids those pitfalls - it's stable and precise, not prone to rounding errors - so that would be my logical choice.
See the official MS docs for DECIMAL for the details on how to define p (precision - the total number of digits overall) and s (scale - the number of digits after the decimal point).
And by the way: those are stored in fewer bytes than a varchar column large enough to hold these values would be!
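As a rough storage comparison, a sketch assuming a hypothetical table (byte counts per the SQL Server docs):
-- DECIMAL(18,16) fits both 0.8999999761581421 and 3.0999999046325684
-- and needs 9 bytes per value (precision 10-19 => 9 bytes), while a varchar
-- holding the 18-character string '3.0999999046325684' would need 18 + 2 bytes.
CREATE TABLE dbo.Readings (
    id      INT IDENTITY(1,1) PRIMARY KEY,
    reading DECIMAL(18,16) NOT NULL
);
INSERT INTO dbo.Readings (reading) VALUES (1), (4), (0.8999999761581421);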

Non-standard number data types in Oracle

Besides the "usual" number data types, where the precision is greater than the scale, there are many "non-standard" number data types where the scale is greater than the precision or where the scale is negative.
For example
NUMBER(2,5) means that there are 5 digits in the fractional part, 3 of which are obligatory zeros.
NUMBER(2,-6): here the scale is -6, which means the value is rounded to millions, and the precision is 2, so 2 significant digits can be stored.
Can somebody provide examples of using such data types in practice?
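To at least make the two shapes concrete, a hedged Oracle sketch (the table and values are made up):
CREATE TABLE number_edges (
    tiny NUMBER(2,5),   -- max 0.00099: 2 significant digits within 5 decimal places
    big  NUMBER(2,-6)   -- max 99000000: rounded to millions, keeping 2 significant digits
);
INSERT INTO number_edges VALUES (0.00012, 12345678);  -- stored as 0.00012 and 12000000
INSERT INTO number_edges VALUES (0.001, NULL);        -- ORA-01438: 0.001 needs 3 digits at scale 5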

Why does a column with datatype number(3,2) not accept 22.3 as a value in Oracle?

I have a column in a table whose datatype is number(3,2).
I try to insert 22.3 into this column and it gives me an error stating "value larger than specified precision".
My point is that 22.3 has a precision of 3, so why doesn't it accept this as a value?
Quoted from Oracle's documentation:
Optionally, you can also specify a precision (total number of digits) and scale (number of digits to the right of the decimal point)
So NUMBER(3,2) allows a total of 3 digits, 2 of which are to the right of the decimal point, leaving only one to the left of it. In other words, the largest number that could fit into this column is 9.99.
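A hedged sketch of the failure (the table name is made up):
CREATE TABLE price_demo (val NUMBER(3,2));
INSERT INTO price_demo VALUES (9.99);  -- OK: the largest value that fits
INSERT INTO price_demo VALUES (22.3);  -- ORA-01438: value larger than specified precision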

SQL - Which data type represents percentages well?

In SQL I am looking at decimal and float. Float says it is an approximation. I need to store percentages. They don't have to be very large or small. Some examples are
60.2
40
Which data type should I use?
decimal(x,y)
x is the total number of digits you want to be able to represent
y is the number of digits after the decimal point that you want to be able to represent
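For the percentages shown, a small sketch (the table and column names are just examples):
-- DECIMAL(5,2) covers -999.99 through 999.99, comfortably holding ordinary percentages
CREATE TABLE survey_results (pct DECIMAL(5,2));
INSERT INTO survey_results VALUES (60.2);
INSERT INTO survey_results VALUES (40);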

Ceiling(convert(decimal,decimalvalue)) vs Ceiling(convert(float,decimalvalue))

When we do CEILING(CONVERT(decimal, 8.09)), the result is 8, but with CEILING(CONVERT(float, 8.09)) the result is 9. Why?
DECIMAL in SQL is an exact numeric data type, unlike FLOAT, which is an approximate numeric data type. What this implies is that a DECIMAL stores not only the number but also a specification of exactly how precise it is, while a float is always a scaled binary approximation of a numeric value.
There are 3 pieces to a DECIMAL: the value, the P number (precision) and the S number (scale). The P number is the maximum total number of digits the data type can store, so a DECIMAL with a precision of 4 can go up to 9999 (with a scale of 0) or as low as 0.0001 (with a scale of 4). The default is 18 digits.
The problem you are having is your S number. The S number is the maximum number of digits after the decimal point, a sort of sub-maximum on top of the P number. So an S of 2 keeps two digits after the decimal point (.00 through .99), an S of 4 keeps four (.0000 through .9999), and so on. In combination with P this can cause overflow if you don't account for the maximum digits: converting 12345.12345 to DECIMAL(6,3) would keep 3 decimal digits (12345.123), but that value needs 8 digits in total while only P - S = 3 may sit to the left of the decimal point, so SQL Server raises an arithmetic overflow error rather than silently truncating. In order to have an S number, the P number must also be declared:
DECIMAL(P[,S])
In this way, due to the construction limits of P and S, P cannot be smaller than S, and S cannot be smaller than 0 (you cannot have 14 decimal places in a number whose maximum total number of digits is only 5):
P >= S >= 0
To solve your problem, declare how precise you need your decimal to be when you do your CONVERT, since by default the S value is set to 0:
SELECT CONVERT(DECIMAL(18,6), 8.09)
Here are a few examples to show the precision, run them and see how they work:
SELECT CONVERT(DECIMAL(10,1) , 12.345678) --10 Maximum Digits, 1 Decimal Place (Expect rounding)
SELECT CONVERT(DECIMAL(18,3) , 12.2345) --18 Maximum Digits, 3 Decimal Places (Expect 3rd decimal round up)
SELECT CONVERT(DECIMAL(3,4) , 123.456789) --3 Maximum Digits, 4 Decimal Places (Expect 4th decimal round up, but get overflow error as P < S )
SELECT CONVERT(DECIMAL(18,6) , 8.09) --18 Maximum Digits, 6 Decimal Places (Expect no data change in precision)
I hope that helps you out. If possible, always use DECIMAL and specify a precision where you know the bounds; it can be more efficient depending on the data and the nature of the procedure.
This is a scale issue.
Because
convert(decimal,8.09) == 8
whereas
convert(float,8.09) == 8.09
decimal defaults to decimal(18,0) - a precision of 18 but, crucially, a scale of 0 - so CONVERT(decimal, 8.09) rounds 8.09 to the whole number 8 before CEILING ever runs
float is float(53) (a synonym for double precision), which has 15 decimal digits of precision, so 8.09 survives the conversion and CEILING returns 9
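Putting it together, a quick check (the comments show the values I'd expect):
SELECT CEILING(CONVERT(decimal, 8.09));        -- 8: decimal defaults to (18,0), so 8.09 rounds to 8 first
SELECT CEILING(CONVERT(decimal(18,2), 8.09));  -- 9: the scale preserves .09, so CEILING rounds up
SELECT CEILING(CONVERT(float, 8.09));          -- 9: float keeps (approximately) 8.09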
What are you actually trying to do? What is the context?