What is the equivalent of prestodb DATE_TRUNC in Hive?

I have a prestodb query which uses DATE_TRUNC like this:
DATE_TRUNC('week', DATE(dd.signup_timestamp))
What is its equivalent in Hive?
A similar follow-up: what is the Hive equivalent of this Presto expression:
date_sub(date_trunc('week', now()), 180)

In Hive you can truncate to the Monday of the week by subtracting (day-of-week - 1), where the 'u' pattern in from_unixtime() returns the ISO day of week (1 = Monday ... 7 = Sunday):
select date_sub('2018-09-05', cast(from_unixtime(unix_timestamp('2018-09-05', 'yyyy-MM-dd'), 'u') as int) - 1) as c;
output: 2018-09-03
And for the other:
date_sub(date_sub(to_date(from_unixtime(unix_timestamp())), cast(from_unixtime(unix_timestamp(), 'u') as int) - 1), 180)
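For context, Presto's date_trunc('week', ...) returns the Monday of the input date's week. A quick sanity check of the same expression against a Monday, a Wednesday and a Sunday of that week (a sketch; all three should return 2018-09-03):
select date_sub('2018-09-03', cast(from_unixtime(unix_timestamp('2018-09-03', 'yyyy-MM-dd'), 'u') as int) - 1),  -- Monday    -> 2018-09-03
       date_sub('2018-09-05', cast(from_unixtime(unix_timestamp('2018-09-05', 'yyyy-MM-dd'), 'u') as int) - 1),  -- Wednesday -> 2018-09-03
       date_sub('2018-09-09', cast(from_unixtime(unix_timestamp('2018-09-09', 'yyyy-MM-dd'), 'u') as int) - 1);  -- Sunday    -> 2018-09-03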

Related

Convert Teradata DAY() TO SECOND to BigQuery (GCP)

I am trying to convert the DAY() TO SECOND function in Teradata to GCP SQL.
Can someone help me convert this?
AVERAGE(((run_end_dttm - run_start_dttm )DAY(4) TO SECOND )) AS elapsed_time,
As was mentioned, BigQuery doesn't support the INTERVAL data type, which is what Teradata's DAY(4) TO SECOND expression actually returns.
If your aim is to get the interval difference between two timestamps in the Teradata-compliant output format, i.e. day hour:minute:second.millisecond, you might consider writing your own BigQuery UDF to perform this transformation.
Below is a BigQuery function prototype for this kind of conversion, leveraging some of the Timestamp and Time built-in functions:
CREATE TEMP FUNCTION time_conv(t1 TIMESTAMP, t2 TIMESTAMP) AS ((
  SELECT
    FORMAT('%d %d:%d:%d.%d',
      ABS(day),
      EXTRACT(HOUR FROM TIME(second)),
      EXTRACT(MINUTE FROM TIME(second)),
      EXTRACT(SECOND FROM TIME(second)),
      EXTRACT(MILLISECOND FROM TIME(second))) AS output
  FROM
    UNNEST([STRUCT(
      TIMESTAMP_DIFF(t1, t2, DAY) AS day,
      TIMESTAMP_SECONDS(TIMESTAMP_DIFF(t1, t2, SECOND)) AS second)])
));
WITH
  `example` AS (
    SELECT
      TIMESTAMP("2021-10-19 21:45:21") AS t1,
      TIMESTAMP("2021-10-15 18:17:56") AS t2)
SELECT
  time_conv(t1, t2)
FROM
  example
You can also tweak the BigQuery FORMAT() block to get the desired output, as shown below.
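For example, a zero-padded variant of the formatting step (a sketch; only the FORMAT() call changes, the rest of the UDF stays the same):
FORMAT('%d %02d:%02d:%02d.%03d',
  ABS(day),
  EXTRACT(HOUR FROM TIME(second)),
  EXTRACT(MINUTE FROM TIME(second)),
  EXTRACT(SECOND FROM TIME(second)),
  EXTRACT(MILLISECOND FROM TIME(second)))
With the example timestamps above this produces 4 03:27:25.000 rather than 4 3:27:25.0.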

Parse Julian Date in BigQuery

I converted the function CAST(CAST(Column1 AS CHAR(7)) AS DATE FORMAT 'YYYYDDD') from Teradata to BigQuery as FORMAT_DATE('%E4Y%j', PARSE_DATE('%E4Y%j', CAST(Column1 AS STRING))), where Column1 is DECIMAL in TD and therefore NUMERIC in BQ. If Column1 has the value '2020280' in BQ, I get 2020001 in the parse results, but I need it to be '2020280'. Where do I go wrong?
PARSE_DATE does not support %j. Use DATE_ADD as a workaround:
select FORMAT_DATE(
  '%E4Y%j',
  DATE_ADD(
    PARSE_DATE('%E4Y', LEFT('2020280', 4)),
    INTERVAL (CAST(RIGHT('2020280', 3) AS INT64) - 1) DAY
  )
);
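Applied to the NUMERIC column from the question (a sketch; my_table is a placeholder table name), the same idea produces a proper DATE instead of a string:
select
  DATE_ADD(
    PARSE_DATE('%E4Y', SUBSTR(CAST(Column1 AS STRING), 1, 4)),
    INTERVAL (CAST(SUBSTR(CAST(Column1 AS STRING), 5, 3) AS INT64) - 1) DAY
  ) AS julian_as_date
from my_table;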

Function to convert epoch to timestamp in Amazon Redshift

I'm working with an Amazon Redshift database and I have dates stored as milliseconds since the epoch. I want to convert them to timestamps. This query, which I found in another thread,
SELECT TIMESTAMP 'epoch' + column_with_time_in_ms/1000 *INTERVAL '1 second'
FROM table_name LIMIT 1000;
gives me the result in YYYY-MM-DD HH:MM:SS.
My question is:
How do I write a SQL function in Redshift that takes an integer parameter containing the milliseconds and does this conversion?
Thanks.
You seem to want a scalar UDF that wraps the conversion code.
In Redshift, you could write this as (using bigint for the parameter, since millisecond epoch values overflow a 4-byte integer):
create function ms_epoch_to_ts(bigint)
returns timestamp
immutable
as $$
  select timestamp 'epoch' + $1 / 1000 * interval '1 second'
$$ language sql;
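Usage then looks like this, reusing the table and column names from the question (the literal value and the output shown are just for illustration):
select ms_epoch_to_ts(1634635521000);   -- 2021-10-19 09:25:21
select ms_epoch_to_ts(column_with_time_in_ms)
from table_name
limit 1000;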

Migrating Oracle query to PostgreSQL

Can you please help me with this? How can I convert the query below to PostgreSQL?
The query below gives different output when executed in PostgreSQL than when executed in Oracle.
SELECT
to_char(to_date('01011970','ddmmyyyy') + 1/24/60/60 * 4304052,'dd-mon-yyyy hh24:mi:ss')
from dual;
Let's assume you want to use the same expression as in Oracle to compute the resulting value.
The reason it does not work when you simply remove from dual is that the expression 1/24/60/60 evaluates to 0, because integer division truncates towards 0.
select 1/24/60/60 * 4304052;
?column?
----------
0
(1 row)
If you make one of the operands a decimal, it gives the required result:
select 1.0/24/60/60 * 4304052;
?column?
-----------------------------
49.815416666666666347848000
Now, after changing this, your expression will return the same result you got in Oracle.
SELECT to_char( to_date('01011970','ddmmyyyy')
+ INTERVAL '1 DAY' * (1.0/24/60/60 * 4304052) ,'dd-mon-yyyy hh24:mi:ss') ;
to_char
----------------------
19-feb-1970 19:34:12
(1 row)
Note that I had to add an interval expression because, unlike Oracle, a Postgres DATE does not store a time component, and adding a fractional number of days to a date results in an error (date + integer only adds whole days). Using an interval ensures the expression is evaluated as a timestamp.
knayak=# select pg_typeof( current_date);
pg_typeof
-----------
date
(1 row)
knayak=# select pg_typeof( current_date + INTERVAl '1 DAY');
pg_typeof
-----------------------------
timestamp without time zone
(1 row)
I think you want:
select '1970-01-01'::date + 4304052 * interval '1 second';
You can use to_char() to convert this back to a string, if you really want:
select to_char('1970-01-01'::date + 4304052 * interval '1 second', 'YYYY-MM-DD HH24:MI:SS');
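To reproduce the exact format mask used in the Oracle query from the question, the same expression works with that mask as well (a sketch; the expected value matches the first answer's output above):
select to_char('1970-01-01'::date + 4304052 * interval '1 second',
               'dd-mon-yyyy hh24:mi:ss');
-- 19-feb-1970 19:34:12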

NUMTODSINTERVAL in PostgreSQL

Is there a function in PostgreSQL that is the same as NUMTODSINTERVAL(n, interval unit) in Oracle?
Just multiply your variable by the desired interval:
interval '1' day * n
Since Postgres 9.4 you can also use the function make_interval()
make_interval(days => n)
If you want functionality similar to this function (i.e. the unit is variable, not constant), a simple concatenation and a cast is enough in PostgreSQL:
select cast(num || unit as interval)
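A quick check of the cast approach (the aliases are arbitrary):
select cast(90 || ' second' as interval) as i;   -- 00:01:30
select cast(2.5 || ' hour' as interval) as i;    -- 02:30:00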
You can read more about interval input formats in the PostgreSQL documentation.
Or simply cast a string literal and do something like this:
'1 day'::interval
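If you want a drop-in replacement with the same call shape as Oracle's NUMTODSINTERVAL, here is a minimal sketch of a SQL function combining the ideas above (the function name is our own, not a built-in; the unit is expected to be day, hour, minute or second):
create function numtodsinterval(n numeric, unit text)
returns interval
language sql
immutable
as $$
  -- e.g. numtodsinterval(90, 'second') -> 00:01:30
  select n * ('1 ' || unit)::interval
$$;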