Casting the BIGINT number returns NULL - sql

I need to convert an integer value to the largest numeric data type in Hive, as my value is 25 digits long.
select cast(18446744073709551614 as bigint);
The above select statement returns NULL.
I am well aware that the supplied number is greater than the maximum BIGINT value, but we receive such values, and I have to calculate max, min, sum and avg over them.
So how can I cast values like this so that I do not get NULLs?

Use decimal(38, 0) for storing numbers bigger than BIGINT: it can hold 38 digits, while BIGINT can hold 19. See also the manual on the decimal type.
For literals, the postfix BD is required. Example:
hive> select CAST(18446744073709551614BD AS DECIMAL(38,0))+CAST(18446744073709551614BD AS DECIMAL(38,0));
OK
36893488147419103228
Time taken: 0.334 seconds, Fetched: 1 row(s)
hive> select CAST(18446744073709551614BD AS DECIMAL(38,0))*2;
OK
36893488147419103228
Time taken: 0.129 seconds, Fetched: 1 row(s)
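Since the goal is to compute max, min, sum and avg over such values, the same decimal(38, 0) cast works inside aggregate functions. A minimal sketch, assuming a hypothetical table t with a string column val holding the big numbers:
select max(cast(val as decimal(38,0))) as mx,
       min(cast(val as decimal(38,0))) as mn,
       sum(cast(val as decimal(38,0))) as total,
       avg(cast(val as decimal(38,0))) as average
from t;
Note that sum() over decimal(38,0) keeps the maximum precision, so a sum that exceeds 38 digits will itself overflow to NULL; the inputs themselves, however, cast cleanly.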

Related

Transform number of seconds in (Date)Time format 'mm:ss' in Informix SQL

I'm querying an Informix database with SQL. I have a column with numbers that represent a number of seconds. I want to transform these numbers to a time (mm:ss) format in my SQL statement. For example, the number 90 should be transformed into '01:30'. It's important that the new field shouldn't be a string field, but a (date)time field.
You can construct the string form as:
select floor(secs / 60) || ':' || lpad(mod(secs, 60), 2, '0')
However, since you want a (date)time field rather than a string, use:
SELECT DATETIME(0:0) MINUTE TO SECOND + colname UNITS SECOND
FROM data_table
This would convert the row containing a numeric value 90 to the value 01:30 with type DATETIME MINUTE TO SECOND. You can vary the type to deal with larger values:
SELECT DATETIME(0:0:0) HOUR TO SECOND + colname UNITS SECOND
FROM data_table
This will process non-negative values from 0 to 86399, producing answers from 00:00:00 to 23:59:59 of type DATETIME HOUR TO SECOND.
You can add up to 5 fractional digits of seconds if desired.
If the input values can be negative, or 86400 or larger, then you have to define what you want; you will get an error if the value is 3600 or more in the first example, or 86400 or more in the second.
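If values of 86400 seconds or more are meaningful (several days' worth), one possible extension of the same pattern is to widen the target type to DAY TO SECOND; the exact DATETIME literal below is an assumption based on standard Informix qualifiers:
SELECT DATETIME(0 00:00:00) DAY TO SECOND + colname UNITS SECOND
FROM data_table
This yields values such as 1 01:00:00 for an input of 90000, at the cost of carrying a day component in the result type.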

Hive: Reduce millisecond precision in timestamp

In Hive, is there anyway to reduce millisecond precision (not rounding)?
For example I have the following timestamp with millisecond in 3 decimal places
2019-10-08 21:21:39.163
I want to get a timestamp with exactly 1 decimal place (remove the last two millisecond digits, 63):
2019-10-08 21:21:39.1
I have only gotten as far as turning the timestamp into a decimal with one digit of precision:
cast(floor(cast(timestamp('2019-10-08 21:21:39.163') AS double)/0.100)*0.100 AS decimal(16,1)) AS updatetime
This gives:
1570537299.1
The problem: I do not know how to turn the above value back into a timestamp with millisecond precision. Even better, if there is a better way to reduce timestamp precision from 3 decimal places to 1, I would appreciate it.
The reason I have to cast the above code to decimal is that if I only do:
floor(cast(timestamp('2019-10-08 21:21:39.163') AS double)/0.100)*0.100 AS exec_time
This gives something like:
1570537299.100000001
This is not good, since I need to join this table X with another table Y.
Table X has timestamp like 2019-10-08 21:21:39.163.
But table Y stores data in each 100ms interval, whose timestamp is exactly: 2019-10-08 21:21:39.1
The trailing 00000001 would prevent the timestamps from table X from matching table Y exactly.
If you need to remove the last two millisecond digits, use substr() and cast back to timestamp if necessary. For example:
with your_data as
(
select timestamp('2019-10-08 21:21:39.163') as original_timestamp --your example
)
select original_timestamp,
substr(original_timestamp,1,21) truncated_string,
timestamp(substr(original_timestamp,1,21)) truncated_timestamp --this may not be necessary, timestamp is compatible with string
from your_data
Returns:
original_timestamp truncated_string truncated_timestamp
2019-10-08 21:21:39.163 2019-10-08 21:21:39.1 2019-10-08 21:21:39.1
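Applied to the join described in the question, the truncated value can be compared directly against table Y. A sketch, assuming hypothetical table and column names (x.updatetime with 3 decimal places, y.updatetime stored with 1):
select x.*, y.*
  from table_x x
  join table_y y
    on timestamp(substr(x.updatetime, 1, 21)) = y.updatetime;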

Hive: varchar column could not return month

How do I return the month from a varchar column with values like "20180912" in Hive?
It's strange that the month() function used to work fine on the string type in Hive, but it returns NULL now.
And month(from_unixtime(unix_timestamp(date,'yyyymmdd'))) returns values that do not match the real month.
Use substr():
hive> select substr('20180912',5,2);
OK
09
Time taken: 1.675 seconds, Fetched: 1 row(s)
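As for why the month(from_unixtime(...)) attempt returns the wrong values: Hive's date patterns follow Java's SimpleDateFormat, where lowercase mm means minutes and uppercase MM means month, so 'yyyymmdd' parses the 09 as minutes. With the corrected pattern, the original approach should also work:
hive> select month(from_unixtime(unix_timestamp('20180912','yyyyMMdd')));
OK
9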

Initcap of word

I have a table x containing the column resource_name, with data like NASRI(SRI).
When I apply initcap on this column it gives Nasri(sri), but my expected output is Nasri(Sri).
How can I achieve the desired result?
Thank you
One possible solution is to use split() with concat_ws(). If the value does not contain '(', it will also work correctly. Demo with ():
hive> select concat_ws('(',initcap(split('NASRI(SRI)','\\(')[0]),
initcap(split('NASRI(SRI)','\\(')[1])
);
OK
Nasri(Sri)
Time taken: 0.974 seconds, Fetched: 1 row(s)
And for a value without () it also works correctly:
hive> select concat_ws('(',initcap(split('NASRI','\\(')[0]),
initcap(split('NASRI','\\(')[1])
);
OK
Nasri
Time taken: 0.697 seconds, Fetched: 1 row(s)

How to change timestamp column size in DB2?

Any idea how to change the timestamp column size in DB2?
I tried altering the table, and dropping and recreating it. Neither worked.
Here are the queries I've tried:
alter table clnt_notes alter column lupd_ts set data type timestamp(26)
create table CLNT_NOTES
(NOTE_ID int not null generated always as identity (start with 1, increment by 1),
CLNT_ID varchar(10) not null,
TX varchar(200),
LUPD_TS timestamp(26) not null)
It depends on your DB2 platform and version. Timestamps in DB2 used to all have 6 digit precision for the fractional seconds portion. In string form, "YYYY-MM-DD-HH:MM:SS.000000"
However, DB2 LUW 10.5 and DB2 for IBM i 7.2 support from 0 to 12 digits of precision for the fraction seconds portion. In string form, you could have from "YYYY-MM-DD-HH:MM:SS" to "YYYY-MM-DD-HH:MM:SS.000000000000".
The default precision is 6, so if you specify a timestamp without a precision (length), you get six-digit precision. Otherwise you may specify a precision from 0 to 12.
create table mytable (
ts0 timestamp(0)
, ts6 timestamp
, ts6_also timestamp(6)
, ts12 timestamp(12)
);
Note, however, that while the external (not exactly a string) format the DBMS surfaces can vary from 19 to 32 bytes, the internal format the timestamp is stored in may not. On DB2 for IBM i, at least, the internal storage of a timestamp field takes between 7 and 13 bytes depending on precision:
timestamp(0) -> 7 bytes
timestamp(6) -> 10 bytes (default)
timestamp(12) -> 13 bytes
Since you refer to 10 as the length, I'm going to assume you're looking in SYSIBM.SYSCOLUMNS (or another equivalent catalog table).
The LENGTH column in the catalog refers to the internal length of the field. You can calculate this using the following formula:
FLOOR( ((p+1)/2) + x )
p is the precision of the timestamp (the number of places after the decimal [the microseconds])
x is 7 for a timestamp without a timezone, or 9 if it has a timezone (if supported by your platform)
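As a worked check of the formula against the internal lengths listed earlier: for the default timestamp(6) without a timezone, p = 6 and x = 7, so
FLOOR( ((6+1)/2) + 7 ) = FLOOR(3.5 + 7) = FLOOR(10.5) = 10
which matches the 10-byte internal length; likewise timestamp(0) gives FLOOR(0.5 + 7) = 7 bytes and timestamp(12) gives FLOOR(6.5 + 7) = 13 bytes.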
If you are comparing this to a field in the SQLCA, that field will be the length of a character representation of the timestamp. See this Information Center article for an explanation of the difference between the two fields.
If you truly want to change the scale of your timestamp field, you can use the following statement, where x is an integer for the number of places after the decimal in the seconds position.
The number of allowed decimals varies by platform and version. On an older version you likely cannot change the scale, which is fixed at 6. However, some newer platforms (like z/OS 10+ and LUW 9.7+) allow a scale between 0 and 12 (inclusive).
ALTER TABLE SESSION.TSTAMP_TEST
ALTER COLUMN tstamp
SET DATA TYPE TIMESTAMP(x);