I need to add a minute to a date in my SQL query, which is written for Hive.
Select date_need from myschema.table order by date_need
When I try to use AS to give it an alias, it is not accepted. I need to add a minute.
For example, date_need holds 2021-05-09 03:30:24 and I need to display it as 2021-05-09 03:31:24.
date_need is declared as a string type in the query.
Use + INTERVAL '1' MINUTE:
Select cast(date_need as timestamp)+INTERVAL '1' MINUTE as date_need from myschema.table order by 1
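Because date_need is a string, the result of the cast and addition is a timestamp; if downstream consumers expect the original string type, you can cast back. A minimal sketch, using the same table and column names:

Select cast(cast(date_need as timestamp) + INTERVAL '1' MINUTE as string) as date_need
from myschema.table
order by 1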
I have a column of dates in my table (referred to as org_day).
I am trying to add a new column that represents the day after, that is
day_after = org_day + 1 day (or 24 hours), for all rows of org_day.
From what I've read, the DATE_ADD function of SQL does not
work on an entire column, so attempts like
DATE_ADD(org_day, INTERVAL 24 HOUR) or
DATE_ADD(DATE org_day, INTERVAL 24 HOUR)
do not work.
The usual examples that do work look like:
DATE_ADD(DATE '2019-12-22', INTERVAL 1 DAY),
But I want to perform this operation on the entire column,
not on a constant date.
Appreciate any help.
To update the entire column, you need to set every row of that column. Try this; hope it solves your problem:
ALTER TABLE table_name ADD COLUMN day_after DATE;  -- create the new column first
UPDATE table_name SET day_after = DATE_ADD(org_day, INTERVAL 1 DAY);
You can try this:
CREATE OR REPLACE TABLE mydataset.mytable AS
SELECT
  org_day,
  DATE_ADD(org_day, INTERVAL 1 DAY) AS day_after
FROM
  mydataset.mytable;
The statement above will rebuild the existing table with the new column added, without deleting existing data.
I would suggest using a view:
create view v_t as
select t.*, date_add(org_day, interval 1 day) as day_after
from t;
If you always want the new column to be in sync with the existing column, a view ensures that the data is consistent: the value is calculated when you query the data.
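For instance (using the names from the view definition above), day_after is recomputed on every read, so it can never drift out of sync with org_day:

select org_day, day_after
from v_t;  -- day_after is derived from org_day at query time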
I have 4 fields in my dataset (Spark SQL), and my aim is to extract the hour from the timestamp and then partition by hour_interval in a spark.sql query:
username(varchar)
timestamp(long)
ipaddress(varchar)
I need to partition by hour_interval, derived from the long timestamp.
So I created a test table in MySQL and tried the command below; it works for fetching the hour interval from the timestamp:
SELECT username, originaltime, ipaddress, HOUR(FROM_UNIXTIME(originaltime / 1000)) as hourinterval FROM testmyactivity;
This gives the output below:
suresasash3456 1557731954785 1.1.1.1 17
Now I need to partition by this hour_interval, but I am not able to.
Below is the query, which is not working:
SELECT username, ipaddress, HOUR(FROM_UNIXTIME(originaltime / 1000)) as hourinterval, OVER (partition by hourinterval) FROM testmyactivity;
The above gives me the error message:
right syntax to use near 'partition by hour interval)
Expected Output
Step 1: a Spark SQL query which can extract the hour from the timestamp and then partition by hour_interval.
Step 2: after the above step, I can perform groupByKey on hour_interval so that my dataset is distributed evenly across the available executors.
Here is the documentation.
import spark.implicits._  // enables the $"colName" column syntax (in spark-shell, spark is already in scope)
val partitioned_df = df.repartition($"colName")  // repartition by column; partitionBy exists only on DataFrameWriter
partitioned_df.explain
Now you can use partitioned_df for group-by queries.
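If you want to stay inside a spark.sql query, as the question asks, Spark SQL's DISTRIBUTE BY clause repartitions the result by a column. A minimal sketch, assuming the data is registered as a table named testmyactivity and originaltime holds milliseconds, as in the question:

SELECT username, ipaddress, hourinterval
FROM (
  SELECT username,
         ipaddress,
         HOUR(FROM_UNIXTIME(CAST(originaltime / 1000 AS BIGINT))) AS hourinterval  -- hour 0-23 in the session time zone
  FROM testmyactivity
) t
DISTRIBUTE BY hourinterval  -- shuffle so rows with the same hour land in the same partition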
Trying to use the statement:
SELECT *
FROM data.example
WHERE TIMESTAMP(timeCollected) < DATE_ADD(USEC_TO_TIMESTAMP(NOW()), 60, 'MINUTE')
to get data from my BigQuery table. It seems to return the same set of results even when the time is not within the range. timeCollected is of the format 2015-10-29 16:05:06.
I'm trying to build a query that returns data that is not older than an hour. So data collected within the last hour should be returned, and the rest should be ignored.
Using Standard SQL:
SELECT * FROM data
WHERE timestamp > TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL -60 MINUTE)
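If timeCollected is actually stored as a STRING in the format shown (an assumption based on the question), parse it first; a sketch:

SELECT *
FROM data.example
WHERE PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', timeCollected)
      > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 60 MINUTE)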
The query you made means "return to me anything that has a collection time smaller than an hour in the future", which will literally match your whole table. You want the following (from what I got through your comment, at least):
SELECT *
FROM data.example
WHERE TIMESTAMP(timeCollected) > DATE_ADD(USEC_TO_TIMESTAMP(NOW()), -60, 'MINUTE')
This means that any timeCollected that is NOT greater than an hour ago will not be returned. I believe this is what you want.
Also, unless you need it, SELECT * is not ideal in BigQuery. Since the data is stored by column, you can save money by selecting only the columns you need down the line. I don't know your use case, so * may be warranted, though.
To get table data collected within the last hour:
SELECT * FROM [data.example#-3600000--1]
https://cloud.google.com/bigquery/table-decorators
Using Standard SQL:
SELECT * FROM data WHERE timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 60 MINUTE)
I am trying to write a query to run on an Oracle database. The table ActionTable contains actionStartTime and actionEndTime columns. I need to find out which actions took longer than 1 hour to complete.
actionStartTime and actionEndTime are of timestamp type
I have a query which gives me the time taken for each action:
select (actionEndTime - actionStartTime) actionDuration from ActionTable
What would be my where clause that would return only actions that took longer than 1 hour to finish?
Subtracting two timestamps returns an interval. So you'd want something like
SELECT (actionEndTime - actionStartTime) actionDuration
FROM ActionTable
WHERE actionEndTime - actionStartTime > interval '1' hour
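If the columns were of DATE type rather than TIMESTAMP, subtraction would return a number of days instead of an interval, so the predicate changes; a sketch under that assumption:

SELECT (actionEndTime - actionStartTime) actionDuration
FROM ActionTable
WHERE actionEndTime - actionStartTime > 1/24  -- one hour expressed as a fraction of a day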
I have a table containing events which happen in my application like people logging in and people changing settings.
They have a date/time against the event in the following format:
2010-01-29 10:27:29
Is it possible to use SQL to select the events that have only happened in the last 5 mins?
So if the date/time was 2010-01-29 10:27:29 it would only select the events that happened between 10:27:29 and 10:22:29?
Cheers
Eef
SELECT foo FROM bar WHERE event_time > DATE_SUB(NOW(), INTERVAL 5 MINUTE)
(The unit keyword is the singular MINUTE, not MINUTES.)
WHERE my_timestamp > DATE_SUB(now(), INTERVAL 5 MINUTE)
You should provide table and column names to make it easy for us to answer your question.
You can write SQL as
SELECT *
FROM Table
WHERE DateTimeColumnName <= '2010/01/29 10:27:29'
AND DateTimeColumnName >= '2010/01/29 10:22:29'
or you can use BETWEEN
SELECT *
FROM Table
WHERE DateTimeColumnName BETWEEN '2010/01/29 10:22:29' AND '2010/01/29 10:27:29'
Now, MySQL also has datetime functions that do this date math for you, so you can pass a single timestamp, subtract 5 minutes from it, and use the result as the lower bound of the BETWEEN clause.
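A minimal sketch of that approach, reusing the placeholder table and column names above and taking the current time as the reference point:

SELECT *
FROM Table
WHERE DateTimeColumnName BETWEEN DATE_SUB(NOW(), INTERVAL 5 MINUTE) AND NOW()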