How to change timestamp column size in DB2?

Any idea how to change the timestamp column size in DB2?
I tried altering the table, and dropping and recreating it. Neither worked.
Here are the queries I tried:
alter table clnt_notes alter column lupd_ts set data type timestamp(26)
create table CLNT_NOTES
(NOTE_ID int not null generated always as identity (start with 1, increment by 1),
CLNT_ID varchar(10) not null,
TX varchar(200),
LUPD_TS timestamp(26) not null)

It depends on your DB2 platform and version. Timestamps in DB2 all used to have 6-digit precision for the fractional seconds portion; in string form, "YYYY-MM-DD-HH:MM:SS.000000".
However, DB2 LUW 10.5 and DB2 for IBM i 7.2 support from 0 to 12 digits of precision for the fractional seconds portion. In string form, you could have anything from "YYYY-MM-DD-HH:MM:SS" to "YYYY-MM-DD-HH:MM:SS.000000000000".
The default precision is 6, so if you specify a timestamp without a precision (length), you get six-digit precision. Otherwise you may specify a precision from 0 to 12.
create table mytable (
ts0 timestamp(0)          -- no fractional seconds
, ts6 timestamp           -- default precision of 6
, ts6_also timestamp(6)   -- same as the default
, ts12 timestamp(12)      -- maximum precision
);
Note, however, that while the external (not exactly a string) format the DBMS surfaces can vary from 19 to 32 bytes, the internal format the timestamp is stored in may not. On DB2 for IBM i, at least, the internal storage of a timestamp takes between 7 and 13 bytes depending on precision:
timestamp(0) -> 7 bytes
timestamp(6) -> 10 bytes (default)
timestamp(12) -> 13 bytes

Since you refer to 10 as the length, I'm going to assume you're looking in SYSIBM.SYSCOLUMNS (or an equivalent catalog table).
The LENGTH column in the catalog refers to the internal length of the field. You can calculate this using the following formula:
FLOOR( ((p+1)/2) + x )
p is the precision of the timestamp (the number of places after the decimal [the microseconds])
x is 7 for a timestamp without a timezone, or 9 if it has a timezone (if supported by your platform)
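For example, a default TIMESTAMP(6) column without a time zone gives FLOOR(((6+1)/2) + 7) = FLOOR(10.5) = 10, which matches the length of 10 you're seeing. A minimal sketch for checking this yourself (assuming a catalog exposing SYSIBM.SYSCOLUMNS with TBNAME, NAME, COLTYPE and LENGTH columns; names vary by platform):
SELECT name, coltype, length
FROM sysibm.syscolumns
WHERE tbname = 'CLNT_NOTES';
-- the LUPD_TS row should show LENGTH = 10 for a TIMESTAMP(6) column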
If you are comparing that to a field in the SQLCA, that field will be the length of a character representation of the timestamp. See this Information Center article for an explanation of the difference between the two fields.
If you truly want to change the scale of your timestamp field, you can use the following statement, where x is an integer giving the number of places after the decimal in the seconds position.
The number of allowed decimals varies by platform and version. If you're on an older version, you likely cannot change the scale, which is fixed at 6. However, some of the newer platforms (like z/OS 10+ and LUW 9.7+) will allow you to set the scale to a number between 0 and 12 (inclusive).
ALTER TABLE SESSION.TSTAMP_TEST
ALTER COLUMN tstamp
SET DATA TYPE TIMESTAMP(x);
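Applied to the table from the question, that would be something like the sketch below. Note that the original ALTER failed because 26 is the length of the character representation of a TIMESTAMP(6) ("YYYY-MM-DD-HH:MM:SS.000000" is 26 characters), not a valid precision; the precision can be at most 12, and only on platforms that support extended precision:
ALTER TABLE clnt_notes
ALTER COLUMN lupd_ts
SET DATA TYPE TIMESTAMP(12);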

Related

Import records with fractional seconds into a PostgreSQL database

I have a CSV file in which one datetime column holds nanosecond-precision values (e.g. 21/11/2021 01:00:05.120972944). I need to insert the CSV data into a PostgreSQL database. When I gave the datetime column the datatype timestamp(6), it threw an invalid-syntax error for that column. What would be the correct datatype for the datetime column in PostgreSQL?
The maximum precision for a timestamp is 6. You're providing data with a precision of 9.
laetitia=# select now()::timestamp(9);
WARNING: TIMESTAMP(9) precision reduced to maximum allowed, 6
LINE 1: select now()::timestamp(9);
^
now
----------------------------
2022-10-05 11:41:02.107602
(1 row)
So my suggestion is to load the data into a temporary table with this column as text, and then transform it into a timestamp when inserting it into your regular table. (Actually, when loading data from CSV files, I always suggest loading everything into a temporary table and then transforming it with SQL.)
For example:
laetitia=# select col::timestamp(9)
from (values ('01/11/2021 01:00:05.120972944')) as test(col);
WARNING: TIMESTAMP(9) precision reduced to maximum allowed, 6
LINE 1: select col::timestamp(9)
^
col
----------------------------
2021-01-11 01:00:05.120973
(1 row)
I guess the warning is acceptable in that case, or you can craft another query to avoid that warning too.
Oh, I almost forgot: make sure your default datetime format (datestyle) is the right one, because if Postgres expects MM/DD/YYYY, then 21/11/2021 is out of range!
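Putting it together, a minimal sketch of the staging approach (the staging and target table names and the file name are illustrative):
SET datestyle TO DMY;  -- so 21/11/2021 parses as day/month/year
CREATE TEMP TABLE staging (raw_ts text);
\copy staging FROM 'data.csv' WITH (FORMAT csv)
INSERT INTO target_table (ts_col)
SELECT raw_ts::timestamp(6)  -- the nanoseconds are rounded to microseconds
FROM staging;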

Find float column max scale and precision

I have a column with datatype float in Teradata. I want to find the Maximum precision and scale for that column.
Note: My column's scale part has more than 10 digits in most of the places.
Sample Data
123.12321323002
13123213.13200003
33232.213123001
The output I need is
Precision 19 (scale + length of 13123213) and scale 11 (length of 12321323002)
or
8 (length of 13123213), 11 (length of 12321323002).
I tried to find them by converting the column to varchar, splitting it on the '.', making the integer and fractional parts two columns, and then finding the max length of the two columns. But when I select the data, Teradata rounds off the scale part, so if I then convert to char I get a smaller value for the scale part.
For example:
org data: 1234.12312000123101
data when I select from Teradata: 1234.12312000123
This is a bit long for a comment.
Teradata uses the IEEE format for real/float values. This gives 15-17 digits of precision. Alas, you need 19 digits, so the values you want are not being stored in the database. You cannot recover them.
What you can do is fix the database so it uses numeric/decimal/number. This supports what you want: NUMERIC(19, 11). You would then need to reload the data so it is correctly stored in the database.
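Once the data is reloaded into NUMERIC(19, 11), the split-on-the-decimal-point approach from the question works; a hedged sketch, assuming a table mytable with column col and standard TRIM/POSITION/CHAR_LENGTH support (negative signs are not handled here):
SELECT MAX(POSITION('.' IN val) - 1)                AS max_integer_digits,
       MAX(CHAR_LENGTH(val) - POSITION('.' IN val)) AS max_fraction_digits
FROM (
    -- strip trailing zeros so the fixed stored scale of 11 doesn't mask the real scale
    SELECT TRIM(TRAILING '0' FROM TRIM(CAST(col AS VARCHAR(40)))) AS val
    FROM mytable
) AS t;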
When you need high precision without a predefined scale, simply switch to the NUMBER datatype, which is a mixture of DECIMAL and FLOAT:
Exact numeric, at least 38 digits precision, no predefined scale, range of 1E-130 .. 1E125.
Float on steroids :-)
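A quick sketch of what that buys you (table and column names are illustrative):
CREATE TABLE readings (val NUMBER);              -- exact, no predefined scale
INSERT INTO readings VALUES (1234.12312000123101);
SELECT val FROM readings;                        -- 1234.12312000123101, nothing lost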

Is there a native technique in PostgreSQL to force "timestamp without time zone" to not include milliseconds?

I am using PostgreSQL 9.6.17. (Migrating from MySQL)
A Java program writes dates into a table. The date format is the following:
2019-01-01 09:00:00
But it can also be 2019-01-01 09:00:00.00 or 2019-01-01 09:00:00.000 when inserted into the database, which messes up the date management in my program when the values are retrieved.
On insertion, I would like all the dates to have the very same format: 2019-01-01 09:00:00. The datatype used by the column is timestamp without time zone.
How can I tell PostgreSQL not to store milliseconds in a timestamp without time zone, via configuration or an SQL query?
This data types doc does not provide any information about that.
Quote from the manual
time, timestamp, and interval accept an optional precision value p which specifies the number of fractional digits retained in the seconds field. By default, there is no explicit bound on precision. The allowed range of p is from 0 to 6
So just define your column as timestamp(0), e.g.:
create table foo
(
some_timestamp timestamp(0)
);
If you have an existing table with data, you can simply ALTER the column:
alter table some_table
alter column some_timestamp type timestamp(0);
If you now insert a timestamp with milliseconds, the value will be rounded to remove the milliseconds.
Note that technically you still have milliseconds in the stored value, but they are always set to 0.
You can cast:
mytimestamptz::timestamp(0)
This will round the result to the nearest second. If you want to truncate instead:
date_trunc('second', mytimestamp)
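A quick illustration of the difference between the two:
SELECT ts::timestamp(0)         AS rounded,
       date_trunc('second', ts) AS truncated
FROM (VALUES (timestamp '2019-01-01 09:00:00.700')) AS t(ts);
-- rounded: 2019-01-01 09:00:01, truncated: 2019-01-01 09:00:00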
Retrieve it as a timestamp and, in the application querying the database, manage the precision however you want: e.g. via JDBC you'll get a Java LocalDateTime object, and in Python you'll get a datetime object.
If you want to retrieve timestamps as strings, there are lots of formatting options available:
SELECT to_char("when", 'YYYY-MM-DD HH24:MI:SS') FROM mytable;  -- "when" is quoted because WHEN is a reserved word
Drop any milliseconds on input by specifying the precision option to your timestamp type:
CREATE TABLE mytable (..., "when" TIMESTAMP(0));  -- again quoting the reserved word

How do we find out which values were truncated in an Oracle database?

If a column is of integer datatype and the average of the values in that column is a decimal, the decimal part gets truncated in the output. How do you make sure that the decimal part is not truncated?
The values as entered:
25
25.5
30.1
28.09
The values inserted into a column defined as total_mark number(10):
25
25
30
28
How will I find out that the last 3 values got truncated?
If you are concerned about preserving decimal precision, then use a column type which supports that. For example, if you wanted to preserve two decimal places, you could use NUMBER(10,2), instead of NUMBER(10), the latter which has no decimal precision.
Using NUMBER(10) you won't know which values got truncated, because in reality they all got truncated. It just so happens that in the case of 25 there was no non-zero decimal component.
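A minimal sketch of the difference (the table name is illustrative):
CREATE TABLE marks (total_mark NUMBER(10,2));
INSERT INTO marks VALUES (25.5);
INSERT INTO marks VALUES (28.09);
SELECT AVG(total_mark) FROM marks;   -- 26.795, the decimal parts are preserved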

Earliest Timestamp supported in PostgreSQL

I work with different databases in a number of different time zones (and periods of time) and one thing that normally originates problems, is the date/time definition.
For this reason, and since a date is a reference to a starting value, I try to store the base date, i.e. the minimum date supported in that particular computer/database, to keep track of how dates were calculated.
If I understand it correctly, this depends on the RDBMS and on the particular storage of the type.
In SQL Server, I found a couple of ways of calculating this "base date":
SELECT CONVERT(DATETIME, 0)
or
SELECT DATEADD(MONTH, 0, 0 )
or even a cast like this, where @dt is a datetime variable:
DECLARE @bin BINARY(8)
SET @bin = 0x00000000 + CAST(300 AS BINARY(4))
SET @dt = (SELECT CAST(@bin AS DATETIME) AS BASEDATE)
PRINT CAST(@dt AS NVARCHAR(100))
My question is, is there a similar way of calculating the base date in PostgreSQL, i.e.: the value that is the minimum date supported and is on the base of all calculations?
From the description of the date type, I can see that the minimum date supported is 4713 BC, but is there a way of getting this value programmatically (for instance as a formatted date string), as I do in SQL Server?
The manual states the values as:
Low value: 4713 BC
High value: 294276 AD
with the caveat, as Chris noted, that -infinity is also supported.
See the note later in the same page in the manual; the above is only true if you are using integer timestamps, which are the default in all vaguely recent versions of PostgreSQL. If in doubt:
SHOW integer_datetimes;
will tell you. If you're using floating-point datetimes instead, you get greater range and less (non-linear) precision. Any attempt to work out the minimum programmatically must cope with that restriction.
PostgreSQL does not just let you cast zero to a timestamp to get the minimum possible timestamp, nor would this make much sense if you were using floating-point datetimes. You can use the Julian date conversion function, but this gives you the epoch, not the minimum time:
postgres=> select to_timestamp(0);
to_timestamp
------------------------
1970-01-01 08:00:00+08
(1 row)
It does accept negative values, though. You'd think that giving it negative maxint would work, but the results are surprising, to the point where I wonder if we've got a wrap-around bug lurking here:
postgres=> select to_timestamp(-922337203685477);
to_timestamp
---------------------------------
294247-01-10 12:00:54.775808+08
(1 row)
postgres=> select to_timestamp(-92233720368547);
to_timestamp
---------------------------------
294247-01-10 12:00:54.775808+08
(1 row)
postgres=> select to_timestamp(-9223372036854);
to_timestamp
------------------------------
294247-01-10 12:00:55.552+08
(1 row)
postgres=> select to_timestamp(-922337203685);
ERROR: timestamp out of range
postgres=> select to_timestamp(-92233720368);
to_timestamp
---------------------------------
0954-03-26 09:50:36+07:43:24 BC
(1 row)
postgres=> select to_timestamp(-9223372036);
to_timestamp
------------------------------
1677-09-21 07:56:08+07:43:24
(1 row)
(Perhaps related to the fact that to_timestamp takes a double, even though timestamps are stored as integers these days?).
I think it's possibly wisest to just let the timestamp range be any timestamp you don't get an error on. After all, the range of valid timestamps is not continuous:
postgres=> SELECT TIMESTAMP '2000-02-29';
timestamp
---------------------
2000-02-29 00:00:00
(1 row)
postgres=> SELECT TIMESTAMP '2001-02-29';
ERROR: date/time field value out of range: "2001-02-29"
LINE 1: SELECT TIMESTAMP '2001-02-29';
so you can't assume that just because a value is between two valid timestamps, it is itself valid.
The earliest timestamp is '-infinity'. This is a special value. The other side is 'infinity' which is later than any specific timestamp.
I don't know of a way of getting this programmatically. I would just use the value hard-coded, the way you might use NULL. That means you have to handle infinities on the client side, though.
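A small sketch of what that looks like in practice:
SELECT timestamp '-infinity' AS earliest, timestamp 'infinity' AS latest;
-- earliest: -infinity, latest: infinity
SELECT timestamp '-infinity' < timestamp '4713-01-01 00:00:00 BC';  -- true: earlier than any concrete value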