How do we find out which values were truncated in an Oracle database - SQL

If a column is of integer datatype and the average of the values in that column is a decimal, the decimal part gets truncated in the output. How do you make sure the decimal part is not truncated?
Values to be inserted:
25
25.5
30.1
28.09
Column definition:
total_mark number(10)
Values actually stored:
25
25
30
28
How will I find out that the last 3 values got truncated?

If you are concerned about preserving decimal precision, use a column type that supports it. For example, to preserve two decimal places you could use NUMBER(10,2) instead of NUMBER(10); the latter has no decimal precision.
With NUMBER(10) you won't know which values got truncated, because in reality they all got truncated. It just so happens that 25 had no non-zero decimal component.
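Since the database itself can no longer tell you which stored values lost their fraction, the check has to happen before the insert. A minimal sketch in Python, using the values from the question (the variable names are illustrative):

```python
from decimal import Decimal

# The values from the question, as they arrive before insertion
# into an integer-only column such as NUMBER(10).
incoming = [Decimal("25"), Decimal("25.5"), Decimal("30.1"), Decimal("28.09")]

# Any value with a non-zero fractional part will lose that part in the
# column, so flag those values up front, before they are inserted.
lossy = [v for v in incoming if v % 1 != 0]

print(lossy)  # the three values whose decimal part would be lost
```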

Related

SQL: format all numbers to 2 decimal places (e.g. 20 to 20.00)

I have a data set with inconsistencies in a column of double values. Some are displayed as e.g. 24.55, others as 24.5 or 24. I want all values displayed to 2 decimals, so 24 should be 24.00, 23.1 should be 23.10, etc. What code would work in this instance?
In general, such conversions are both database-specific and GUI-specific. However, the database can convert the value to something with two decimal places by using numeric/decimal (those are equivalent):
select cast(value as numeric(10, 2))
The "2" is the number of digits after the decimal point, which any reasonable interface should then display as two decimals.
If you are using MySQL (as PHP suggests), you can use the format() function to accomplish this:
select format(value, 2)
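The same display-side fix works in any client language as well; a quick sketch in Python, with illustrative values from the question:

```python
# Sample values with inconsistent display precision, as in the question.
values = [24.55, 24.5, 24.0, 23.1]

# Format each value to exactly two decimal places for display.
formatted = [f"{v:.2f}" for v in values]

print(formatted)  # every value now shows two decimals
```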

Find float column max scale and precision

I have a column with datatype float in Teradata. I want to find the Maximum precision and scale for that column.
Note: My column's scale part has more than 10 digits in most of the places.
Sample Data
123.12321323002
13123213.13200003
33232.213123001
The output I need is
Precision 19 (scale + length of 13123213) and scale 11 (length of 12321323002)
or
8 (length of 13123213), 11 (length of 12321323002).
I tried to find them by converting the column to varchar, splitting it on the '.', making the integer and fractional parts two columns, and then finding the max length of each column. But when I select the data, Teradata rounds off the scale part, so after converting to char I get a smaller value for the scale.
For example:
org data: 1234.12312000123101
data when I select from Teradata: 1234.12312000123
This is a bit long for a comment.
Teradata uses the IEEE format for real/float values. This gives 15-17 digits of precision. Alas, you need 19 digits, so the values you want are not being stored in the database. You cannot recover them.
What you can do is fix the database, so it uses numeric/decimal/number. This supports what you want: NUMERIC(19, 11). You would then need to reload the data so it is correctly stored in the database.
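The precision loss can be reproduced outside Teradata, since Python floats are the same IEEE 754 doubles. A small illustration using the value from the question:

```python
original = "1234.12312000123101"   # 18 significant digits

as_float = float(original)         # IEEE 754 double: ~15-17 significant digits

# The shortest round-trip representation of a double never needs more than
# 17 significant digits, so the original 18-digit value cannot survive.
print(repr(as_float))
```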
When you need high precision without a predefined scale, simply switch to the NUMBER datatype, which is a mixture of DECIMAL and FLOAT.
Exact numeric, at least 38 digits precision, no predefined scale, range of 1E-130 .. 1E125.
Float on steroids :-)

Numeric field overflow exception

How should I rewrite my insert statement?
CREATE TABLE test_table (
rate decimal(16,8)
);
INSERT INTO test_table VALUES (round(3884.90000000 / 0.00003696, 8));
Exception:
ERROR: numeric field overflow
SQL state: 22003
Detail: A field with precision 16, scale 8 must round to an absolute value less than 10^8. Rounded overflowing value: 105110930.73593074
Database: Greenplum Database 4.3.8.0 build 1 (based on PostgreSQL 8.2.15)
You should use decimal(17,8)
CREATE TABLE test_table
(
rate decimal(17,8)
);
Use decimal in the following format:
decimal(precision, scale)
1) The precision of a numeric is the total count of significant digits in the whole number, that is, the number of digits to both sides of the decimal point
2) The scale of a numeric is the count of decimal digits in the fractional part, to the right of the decimal point
Since the result of your insert statement is 105110930.73593074, the total number of digits is 17 with 8 after the decimal point, so you should use decimal(17,8):
Select (round(3884.90000000 / 0.00003696, 8));
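You can verify the digit count with Python's decimal module, which does exact decimal arithmetic (unlike float):

```python
from decimal import Decimal, ROUND_HALF_UP

# Reproduce the division from the INSERT, rounded to 8 decimal places.
rate = (Decimal("3884.90000000") / Decimal("0.00003696")).quantize(
    Decimal("1E-8"), rounding=ROUND_HALF_UP
)

whole, frac = str(rate).split(".")
# 9 digits before the point + 8 after = 17 total digits,
# so decimal(16,8) overflows while decimal(17,8) fits.
print(rate, len(whole) + len(frac))
```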

Arithmetic overflow error on decimal field

I have a field cost with values 0.987878656435798654 , 0.765656787898767
I am trying to figure out what would be the datatype for this.
When I use decimal(15,15) and try to load the data, it throws the error:
Arithmetic overflow error converting varchar to data type numeric.
The problem is that you are not allocating any length to the value before the decimal.
DECIMAL (15, 15) means a scale of 15 digits after the decimal point, but only enough room for 15 digits total - thus leaving no room for any digits before the point.
This means that DECIMAL (15, 15) only supports values in the following range:
-0.999999999999999 to 0.999999999999999 (15 digits after the decimal).
You have 18 digits in your first example, so I would recommend using something like DECIMAL (21, 18)
DECIMAL (21, 18) will support values in the range from: -999.999999999999999999 to 999.999999999999999999 (18 digits after the decimal).
But, you should analyze your own data to see what the maximum value would be that you need to support.
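The supported range for any precision/scale pair follows directly from the definition; a quick helper to sanity-check a choice (the function name is illustrative):

```python
from decimal import Decimal

def decimal_max(precision: int, scale: int) -> Decimal:
    """Largest absolute value a DECIMAL(precision, scale) column can hold."""
    # precision - scale digits before the point, scale digits after it.
    return Decimal(10) ** (precision - scale) - Decimal(10) ** -scale

print(decimal_max(15, 15))  # no room to the left of the decimal point
print(decimal_max(21, 18))  # three digits to the left, eighteen to the right
```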
Try this...
SELECT LEN(YourColumn)
FROM YourTable
Then, if they are below 1 every time, try this...
SELECT CONVERT(DECIMAL(X, X-1), YourColumn)
where X is the value returned by the LEN statement and X-1 is one less than that.
Remember, it's DECIMAL(precision, scale), so you need to make sure there is enough room for the total value.

How to change timestamp column size in DB2?

Any idea how to change timestamp column size in DB2?
I tried altering the table, and dropping and recreating it. Neither worked.
Here are the queries I've tried:
alter table clnt_notes alter column lupd_ts set data type timestamp(26)
create table CLNT_NOTES
(NOTE_ID int not null generated always as identity (start with 1, increment by 1),
CLNT_ID varchar(10) not null,
TX varchar(200),
LUPD_TS timestamp(26) not null)
It depends on your DB2 platform and version. Timestamps in DB2 all used to have 6-digit precision for the fractional-seconds portion. In string form: "YYYY-MM-DD-HH:MM:SS.000000".
However, DB2 LUW 10.5 and DB2 for IBM i 7.2 support from 0 to 12 digits of precision for the fractional-seconds portion. In string form, you could have anything from "YYYY-MM-DD-HH:MM:SS" to "YYYY-MM-DD-HH:MM:SS.000000000000".
The default precision is 6, so if you specify a timestamp without a precision (length), you get six-digit precision. Otherwise you may specify a precision from 0 to 12.
create table mytable (
ts0 timestamp(0)
, ts6 timestamp
, ts6_also timestamp(6)
, ts12 timestamp(12)
);
Note, however, that while the external (not exactly a string) format the DBMS surfaces can vary from 19 to 32 bytes, the internal storage format may not. On DB2 for IBM i, at least, the internal storage of a TIMESTAMP field takes between 7 and 13 bytes depending on precision:
timestamp(0) -> 7 bytes
timestamp(6) -> 10 bytes (default)
timestamp(12) -> 13 bytes
Since you refer to 10 as the length, I'm going to assume you're looking in SYSIBM.SYSCOLUMNS (or another equivalent catalog table).
The LENGTH column in the catalog refers to the internal length of the field. You can calculate this using the following formula:
FLOOR( ((p+1)/2) + x )
p is the precision of the timestamp (the number of places after the decimal [the microseconds])
x is 7 for a timestamp without a timezone, or 9 if it has a timezone (if supported by your platform)
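The formula is easy to check against the storage sizes listed above; a sketch in Python (the function name is illustrative):

```python
import math

def internal_length(p: int, has_timezone: bool = False) -> int:
    """Internal storage length of a DB2 TIMESTAMP(p), per the catalog formula."""
    x = 9 if has_timezone else 7           # 9 only when a timezone is stored
    return math.floor((p + 1) / 2 + x)     # FLOOR(((p+1)/2) + x)

# Matches the sizes given earlier:
# timestamp(0) -> 7, timestamp(6) -> 10, timestamp(12) -> 13 bytes.
print(internal_length(0), internal_length(6), internal_length(12))
```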
If you are comparing that to a field in the SQLCA, that field will be the length of a character representation of the timestamp. See this Information Center article for an explanation of the difference between the two fields.
If you truly want to change the scale of your timestamp field, then you can use the following statement. x should be an integer for the number of places after the decimal in the seconds position.
The number of allowed decimals varies by platform and version. If you're on an older version, you likely cannot change the scale, which is fixed at 6. However, some of the newer platforms (like z/OS 10+ and LUW 9.7+) will allow you to set the scale to a number between 0 and 12 (inclusive).
ALTER TABLE SESSION.TSTAMP_TEST
ALTER COLUMN tstamp
SET DATA TYPE TIMESTAMP(x);