group-by/aggregation over date ranges - sql

Got a table ABC with one column, a date column "created". Sample values look like:
created
2009-06-18 13:56:00
2009-06-18 12:56:00
2009-06-17 14:02:00
2009-06-17 13:12:23
2009-06-16 10:02:10
I want to write a query so that the results are:
count created
2 2009-06-18
2 2009-06-17
1 2009-06-16
Basically, count how many entries belong to each date, ignoring the time.
This is PL/SQL with Oracle.
Any ideas?

The TRUNC function truncates a DATE to midnight, stripping the time portion:
select trunc(created),count(*) from ABC group by trunc(created)
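If you also want the newest day first, as in the sample output, add an ORDER BY (a minimal sketch against the same ABC table):
select trunc(created) as created, count(*) as cnt
from ABC
group by trunc(created)
order by trunc(created) desc;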

select count(*), to_char(created, 'YYYY-MM-DD') from ABC group by to_char(created, 'YYYY-MM-DD')

Just for completeness, I'll add a generalized variant that works for arbitrarily defined bucket sizes:
SELECT trunc( (created - to_date('2000.01.01', 'YYYY.MM.DD'))
* 24
) / 24
+ to_date('2000.01.01', 'YYYY.MM.DD') theDate
, count(*)
FROM ABC
GROUP BY trunc( (created - to_date('2000.01.01', 'YYYY.MM.DD'))
* 24
) / 24
+ to_date('2000.01.01', 'YYYY.MM.DD')
ORDER BY 1;
The query above will give you the counts by hour; use something smaller than 24 to get larger intervals, or bigger to create smaller intervals. By reversing the position of the * and /, you can make your buckets be in increments of days (e.g. doing " / 7 " instead of " * 24 " would give you buckets of one week each).
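For example, a hedged sketch of the week-sized variant (untested; note that the anchor date also determines which weekday each bucket starts on):
SELECT trunc( (created - to_date('2000.01.01', 'YYYY.MM.DD')) / 7 ) * 7
       + to_date('2000.01.01', 'YYYY.MM.DD') AS week_start
     , count(*)
FROM ABC
GROUP BY trunc( (created - to_date('2000.01.01', 'YYYY.MM.DD')) / 7 ) * 7
       + to_date('2000.01.01', 'YYYY.MM.DD')
ORDER BY 1;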

Related

SQL query to find the maximum hour of a particular event in a table

I have a single table with fields (crime_id int, crime_time timestamp, crime string, city string).
There are only 9 unique crimes in the table. I need to find the hour in which a particular crime occurs most frequently. E.g. if robbery happens most often between 10:00 and 11:00, it must show 10 or 11; the time may start from 00:00 and end at 23:59.
viod's answer is almost OK.
But you need a GROUP BY to count the robberies per time slot.
You also need an alias for the subquery, plus an ORDER BY and LIMIT to keep only the busiest hour.
SELECT period, max(nb) AS nb
FROM (
    SELECT extract(hour from crime_time) AS period, count(*) AS nb
    FROM crimes
    WHERE crime_string = 'Robbery'
    GROUP BY extract(hour from crime_time)
) AS subquery_alias
GROUP BY period
ORDER BY nb DESC
LIMIT 1
This should do it, but I have not tested it (and you may have to find Hive equivalents of the Postgres function I use, extract; docs are available here: http://www.postgresql.org/docs/9.1/static/functions-datetime.html).
SELECT max(nb), period
FROM (
    SELECT count(*) as nb, period
    FROM (
        SELECT crime_string, extract(hour from crime_time) as period
        FROM crimes
        WHERE crime_string = 'Robbery'
    )
    GROUP BY period
);
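For Hive specifically, a hedged sketch of the same query using Hive's built-in hour() function and the crimes table from the question (untested; note the question's column is crime rather than crime_string):
SELECT hour(crime_time) AS period, count(*) AS nb
FROM crimes
WHERE crime = 'Robbery'
GROUP BY hour(crime_time)
ORDER BY nb DESC
LIMIT 1;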

SQL sum 2 different columns by different conditions then subtraction and add

What I am trying to do is kind of complex; I will try my best to explain.
I achieved the first part, which is to sum the column by hours.
Example:
ID TIMESTAMP CUSTAFFECTED
1 10-01-2013 01:00:23 23
2 10-01-2013 03:00:23 55
3 10-01-2013 05:00:23 2369
4 10-01-2013 04:00:23 12
5 10-01-2013 01:00:23 1
6 10-01-2013 12:00:23 99
7 10-01-2013 01:00:23 22
8 10-01-2013 02:00:23 3
output would be
Hour TotalCALLS CUSTAFFECTED
10/1/2013 01:00 3 46
10/1/2013 02:00 1 3
10/1/2013 03:00 1 55
10/1/2013 04:00 1 12
10/1/2013 05:00 1 2369
10/1/2013 12:00 1 99
Query
SELECT TRUNC(STARTDATETIME, 'HH24') AS hour,
COUNT(*) AS TotalCalls,
sum(CUSTAFFECTED) AS CUSTAFFECTED
FROM some_table
where STARTDATETIME >= To_Date('09-12-2013 00:00:00','MM-DD-YYYY HH24:MI:SS') and
STARTDATETIME <= To_Date('09-13-2013 00:00:00','MM-DD-YYYY HH24:MI:SS')
GROUP BY TRUNC(STARTDATETIME, 'HH24')
What I need
I need to sum the 2 queries and group by timestamp/hour. The 2nd query is exactly the same as the first; only the where clause differs.
2nd query
SELECT TRUNC(RESTOREDDATETIME , 'HH24') AS hour,
COUNT(*) AS TotalCalls,
SUM(CUSTAFFECTED) AS CUSTRESTORED
FROM some_table
where RESTOREDDATETIME >= To_Date('09-12-2013 00:00:00','MM-DD-YYYY HH24:MI:SS') and
RESTOREDDATETIME <= To_Date('09-13-2013 00:00:00','MM-DD-YYYY HH24:MI:SS')
GROUP BY TRUNC(RESTOREDDATETIME , 'HH24')
So I need to subtract CUSTAFFECTED - CUSTRESTORED and display that total.
I added a link to an Excel file: http://goo.gl/ioo9hg
Thanks
OK, now that the correct SQL is in the question text, try this:
SELECT TRUNC(STARTDATETIME, 'HH24') AS hour,
COUNT(*) AS TotalCalls,
Sum(case when RESTOREDDATETIME is null Then 0 else 1 end) RestoredCount,
Sum(CUSTAFFECTED) as CUSTAFFECTED,
Sum(case when RESTOREDDATETIME is null Then 0 else CUSTAFFECTED end) CustRestored,
SUM(CUSTAFFECTED) -
Sum(case when RESTOREDDATETIME is null Then 0 else CUSTAFFECTED end) AS CUSTNotRestored
FROM some_table
where STARTDATETIME >= To_Date('09-12-2013 00:00:00','MM-DD-YYYY HH24:MI:SS')
and STARTDATETIME <= To_Date('09-13-2013 00:00:00','MM-DD-YYYY HH24:MI:SS')
GROUP BY TRUNC(STARTDATETIME, 'HH24')
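If you'd rather keep the two separate queries as written, a hedged alternative (untested) is to join their results on the hour; the FULL OUTER JOIN plus NVL covers hours that appear in only one of the two:
SELECT NVL(s.hr, r.hr) AS hour,
       NVL(s.CUSTAFFECTED, 0) - NVL(r.CUSTRESTORED, 0) AS CUSTNOTRESTORED
FROM (SELECT TRUNC(STARTDATETIME, 'HH24') AS hr, SUM(CUSTAFFECTED) AS CUSTAFFECTED
      FROM some_table
      WHERE STARTDATETIME >= To_Date('09-12-2013 00:00:00','MM-DD-YYYY HH24:MI:SS')
        AND STARTDATETIME <= To_Date('09-13-2013 00:00:00','MM-DD-YYYY HH24:MI:SS')
      GROUP BY TRUNC(STARTDATETIME, 'HH24')) s
FULL OUTER JOIN
     (SELECT TRUNC(RESTOREDDATETIME, 'HH24') AS hr, SUM(CUSTAFFECTED) AS CUSTRESTORED
      FROM some_table
      WHERE RESTOREDDATETIME >= To_Date('09-12-2013 00:00:00','MM-DD-YYYY HH24:MI:SS')
        AND RESTOREDDATETIME <= To_Date('09-13-2013 00:00:00','MM-DD-YYYY HH24:MI:SS')
      GROUP BY TRUNC(RESTOREDDATETIME, 'HH24')) r
  ON s.hr = r.hr
ORDER BY 1;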
I recently needed to do this and had to play with it some to get it to work.
The challenge is to link the results of one query to the results of another inside the same query, and then subtract the value in a given field of one resultset (call it FieldA) from the value in a field of a different resultset (call it FieldB). It doesn't matter whether the subject values come from an aggregation function like COUNT(...); they could be any numeric field in a resultset, grouped or not. Values from aggregation functions just mean you need to adjust your query logic to GROUP BY the proper fields. The approach requires creating in-line views in the query and using those as the source of data for the subtraction.
A red herring when dealing with this kind of thing is the MINUS operator (assuming you are using an Oracle database), but that will not work: MINUS does not subtract field values inside a resultset from one another; it subtracts one set of matching records from the final result set returned by the query. In addition, MINUS is not a standard SQL operator, so your database probably won't support it if it isn't Oracle. Still, it's awfully nice to have around when you need it.
OK, enough prelude. Here's the query form you will want to use, taking for example a date range we want grouped by YYYY-MM:
select inlineview1.year_mon, (inlineview1.CNT - inlineview2.CNT) as finalcnt from
(SELECT TO_CHAR(*date_field*, 'YYYY-MM') AS year_mon, count(*any_field_name*) as CNT
FROM *schemaname.tablename*
WHERE *date_field* > TO_DATE('*{a year}-{a month}-{a day}*', 'YYYY-MM-DD') and
*date_field* < TO_DATE('*{a year}-{a month}-{a day}*', 'YYYY-MM-DD') and
*another_field* = *{value_of_some_kind}* -- ... etc. ...
GROUP BY TO_CHAR(*date_field*, 'YYYY-MM')) inlineview1,
(SELECT TO_CHAR(*date_field*, 'YYYY-MM') AS year_mon, count(*any_field_name*) as CNT
FROM *schemaname.tablename*
WHERE *date_field* > TO_DATE('*{a year}-{a month}-{a day}*', 'YYYY-MM-DD') and
*date_field* < TO_DATE('*{a year}-{a month}-{a day}*', 'YYYY-MM-DD') and
*another_field* = *{value_of_some_kind}* -- ... etc. ...
GROUP BY TO_CHAR(*date_field*, 'YYYY-MM')) inlineview2
WHERE
inlineview1.year_mon = inlineview2.year_mon
order by *either or any of the final resultset's fields* -- optional
A bit less abstractly, an example wherein a bookseller wants to see the net number of books sold in any given month of 2013. To do this, the seller must subtract the number of books returned for refund from the number sold. He does not care when a returned book was originally sold, as he feels a return represents a loss of a sale and income statistically no matter when the book was sold. Example:
select bookssold.year_mon, (bookssold.CNT - booksreturned.CNT) as netsalescount from
(SELECT TO_CHAR(SALE_DATE, 'YYYY-MM') AS year_mon, count(TITLE) as CNT
FROM RETAILOPS.ACTIVITY
WHERE SALE_DATE > TO_DATE('2012-12-31', 'YYYY-MM-DD') and
SALE_DATE < TO_DATE('2014-01-01', 'YYYY-MM-DD') and
OPERATION = 'sale'
GROUP BY TO_CHAR(SALE_DATE, 'YYYY-MM')) bookssold,
(SELECT TO_CHAR(SALE_DATE, 'YYYY-MM') AS year_mon, count(TITLE) as CNT
FROM RETAILOPS.ACTIVITY
WHERE SALE_DATE > TO_DATE('2012-12-31', 'YYYY-MM-DD') and
SALE_DATE < TO_DATE('2014-01-01', 'YYYY-MM-DD') and
OPERATION = 'return'
GROUP BY TO_CHAR(SALE_DATE, 'YYYY-MM')) booksreturned
WHERE
bookssold.year_mon = booksreturned.year_mon
order by bookssold.year_mon desc
Note that to be sure the query returns as expected, the two in-line views must be equijoined based as shown above on some criteria, as in:
bookssold.year_mon = booksreturned.year_mon
or the subtraction of the counted records can't be done on a 1:1 basis, as the query parser will not know which of the records returned with a grouped count value is to be subtracted from which. Failing to specify an equijoin condition will yield a Cartesian join result, probably not what you want (though you may indeed want that). For example, adding 'booksreturned.year_mon' right after 'bookssold.year_mon' to the returned fields list in the top-level select statement in the above example and eliminating the
bookssold.year_mon = booksreturned.year_mon
criteria in its WHERE clause will produce a working query that does the subtraction calculation on the CNT values for the YYYY-MM values in the first two columns of the resultset and shows them in the third column. Handy to know this if you need it, as it has solid application in business trends analysis if you can compare sales and returns not just within a given atomic timeframe but as compared across such timeframes in a 1:N fashion.
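One caveat with that equijoin: months that had sales but no returns vanish from the result entirely. A hedged variant (same assumed RETAILOPS.ACTIVITY table, untested) keeps them by outer-joining and treating a missing return count as zero:
select bookssold.year_mon,
       (bookssold.CNT - nvl(booksreturned.CNT, 0)) as netsalescount
from
    (SELECT TO_CHAR(SALE_DATE, 'YYYY-MM') AS year_mon, count(TITLE) as CNT
     FROM RETAILOPS.ACTIVITY
     WHERE SALE_DATE > TO_DATE('2012-12-31', 'YYYY-MM-DD')
       and SALE_DATE < TO_DATE('2014-01-01', 'YYYY-MM-DD')
       and OPERATION = 'sale'
     GROUP BY TO_CHAR(SALE_DATE, 'YYYY-MM')) bookssold
left join
    (SELECT TO_CHAR(SALE_DATE, 'YYYY-MM') AS year_mon, count(TITLE) as CNT
     FROM RETAILOPS.ACTIVITY
     WHERE SALE_DATE > TO_DATE('2012-12-31', 'YYYY-MM-DD')
       and SALE_DATE < TO_DATE('2014-01-01', 'YYYY-MM-DD')
       and OPERATION = 'return'
     GROUP BY TO_CHAR(SALE_DATE, 'YYYY-MM')) booksreturned
  on bookssold.year_mon = booksreturned.year_mon
order by bookssold.year_mon desc;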

Using sql function generate_series() in redshift

I'd like to use the generate_series function in Redshift, but have not been successful.
The Redshift documentation says it's not supported. The following code does work:
select *
from generate_series(1,10,1)
outputs:
1
2
3
...
10
I'd like to do the same with dates. I've tried a number of variations, including:
select *
from generate_series(date('2008-10-01'),date('2008-10-10 00:00:00'),1)
kicks out:
ERROR: function generate_series(date, date, integer) does not exist
Hint: No function matches the given name and argument types.
You may need to add explicit type casts. [SQL State=42883]
Also tried:
select *
from generate_series('2008-10-01 00:00:00'::timestamp,
'2008-10-10 00:00:00'::timestamp,'1 day')
And tried:
select *
from generate_series(cast('2008-10-01 00:00:00' as datetime),
cast('2008-10-10 00:00:00' as datetime),'1 day')
both kick out:
ERROR: function generate_series(timestamp without time zone, timestamp without time zone, "unknown") does not exist
Hint: No function matches the given name and argument types.
You may need to add explicit type casts. [SQL State=42883]
If not, it looks like I'll use this code from another post:
SELECT to_char(DATE '2008-01-01'
+ (interval '1 month' * generate_series(0,57)), 'YYYY-MM-DD') AS ym
PostgreSQL generate_series() with SQL function as arguments
Amazon Redshift seems to be based on PostgreSQL 8.0.2. The timestamp arguments to generate_series() were added in 8.4.
Something like this, which sidesteps that problem, might work in Redshift.
SELECT current_date + (n || ' days')::interval
from generate_series (1, 30) n
It works in PostgreSQL 8.3, which is the earliest version I can test. It's documented in 8.0.26.
Later . . .
It seems that generate_series() is unsupported in Redshift. But given that you've verified that select * from generate_series(1,10,1) does work, the syntax above at least gives you a fighting chance. (Although the interval data type is also documented as being unsupported on Redshift.)
Still later . . .
You could also create a table of integers.
create table integers (
n integer primary key
);
Populate it however you like. You might be able to use generate_series() locally, dump the table, and load it on Redshift. (I don't know; I don't use Redshift.)
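For instance, a hedged way to populate it on a local PostgreSQL instance (not on Redshift itself) before dumping:
insert into integers (n)
select generate_series(0, 9999);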
Anyway, you can do simple date arithmetic with that table without referring directly to generate_series() or to interval data types.
select (current_date + n)
from integers
where n < 31;
That works in 8.3, at least.
Using Redshift today, you can generate a range of dates by using datetime functions and feeding in a number table.
select (getdate()::date - generate_series)::date from generate_series(1,30,1)
Generates this for me
date
2015-11-06
2015-11-05
2015-11-04
2015-11-03
2015-11-02
2015-11-01
2015-10-31
2015-10-30
2015-10-29
2015-10-28
2015-10-27
2015-10-26
2015-10-25
2015-10-24
2015-10-23
2015-10-22
2015-10-21
2015-10-20
2015-10-19
2015-10-18
2015-10-17
2015-10-16
2015-10-15
2015-10-14
2015-10-13
2015-10-12
2015-10-11
2015-10-10
2015-10-09
2015-10-08
The generate_series() function is not fully supported by Redshift. See the Unsupported PostgreSQL functions section of the developer guide.
UPDATE
generate_series is working with Redshift now.
SELECT CURRENT_DATE::TIMESTAMP - (i * interval '1 day') as date_datetime
FROM generate_series(1,31) i
ORDER BY 1
This will generate one row per day for the last 31 days.
Ref: generate_series function in Amazon Redshift
As of writing this, generate_series() on our instance of Redshift (1.0.33426) could not be used to, for example, create a table:
# select generate_series(1,100,1);
1
2
...
# create table normal_series as select generate_series(1,100,1);
INFO: Function "generate_series(integer, integer, integer) not supported.
ERROR: Specified types or functions (one per INFO message) not supported on Redshift tables.
However, WITH RECURSIVE works:
# create table recursive_series as with recursive t(n) as (select 1::integer union all select n+1 from t where n < 100) select n from t;
SELECT
-- modify as desired, here is a date series:
# select getdate()::date + n from recursive_series;
2021-12-18
2021-12-19
...
I needed to do something similar, but with 5-minute intervals over 7 days. So here's a CTE-based hack (ugly but not too verbose):
INSERT INTO five_min_periods
WITH
periods AS (select 0 as num UNION select 1 as num UNION select 2 UNION select 3 UNION select 4 UNION select 5 UNION select 6 UNION select 7 UNION select 8 UNION select 9 UNION select 10 UNION select 11),
hours AS (select num from periods UNION ALL select num + 12 from periods),
days AS (select num from periods where num <= 6),
rightnow AS (select CAST( TO_CHAR(GETDATE(), 'yyyy-mm-dd hh24') || ':' || trim(TO_CHAR((ROUND((DATEPART (MINUTE, GETDATE()) / 5), 1) * 5 ),'09')) AS TIMESTAMP) as start)
select
ROW_NUMBER() OVER(ORDER BY d.num DESC, h.num DESC, p.num DESC) as idx
, DATEADD(minutes, -p.num * 5, DATEADD( hours, -h.num, DATEADD( days, -d.num, n.start ) ) ) AS period_date
from days d, hours h, periods p, rightnow n
Should be able to extend this to other generation schemes. The trick here is using the Cartesian product join (i.e. no JOIN/WHERE clause) to multiply the hand-crafted CTEs, producing the necessary increments to apply to an anchor date.
Redshift's generate_series() function is a leader-node-only function, and as such you cannot use it for downstream processing on the compute nodes. It can be replaced by a recursive CTE (or by keeping a "dates" table in your database). I have an example of such in a recent answer:
Cross join Redshift with sequence of dates
One caution I like to give in answers like this is to be careful with inequality joins (or cross joins or any under-qualified joins) when working with VERY LARGE tables which can happen often in Redshift. If you are joining with a moderate Redshift table of say 1M rows then things will be fine. But if you are doing this on a table of 1B rows then the data explosion will likely cause massive performance issues as the query spills to disk.
I've written a couple of white papers on how to write this type of query in a data space sensitive way. This issue of massive intermediate results is not unique to Redshift and I first developed my approach solving a client's HIVE query issue. "First rule of writing SQL for Big Data - don't make more"
Per the comments of @Ryan Tuck and @Slobodan Pejic, generate_series() does not work on Redshift when joining to another table.
The workaround I used was to write out every value in the series in the query:
SELECT
'2019-01-01'::date AS date_month
UNION ALL
SELECT
'2019-02-01'::date AS date_month
Using a Python function like this:
import arrow

def generate_date_series(start, end):
    start = arrow.get(start)
    end = arrow.get(end)
    months = list(
        f"SELECT '{month.format('YYYY-MM-DD')}'::date AS date_month"
        for month in arrow.Arrow.range('month', start, end)
    )
    return "\nUNION ALL\n".join(months)
Perhaps not as elegant as other solutions, but here's how I did it:
drop table if exists #dates;
create temporary table #dates as
with recursive cte(val_date) as (
    select cast('2020-07-01' as date) as val_date
    union all
    select cast(dateadd(day, 1, val_date) as date) as val_date
    from cte
    where val_date <= getdate()
)
select val_date as yyyymmdd
from cte
order by val_date;
For five-minute buckets I would do the following:
select date_trunc('minute', getdate()) - (i || ' minutes')::interval
from generate_series(0, 60*5-1, 5) as i
You could replace 5 by any given interval, and 60 by the number of rows you want.
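For instance, a hedged 15-minute variant of the same pattern (96 rows covers one day):
select date_trunc('minute', getdate()) - (i || ' minutes')::interval
from generate_series(0, 15*96-1, 15) as i;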
SELECT CURRENT_DATE::TIMESTAMP - (i * interval '1 day') as date_datetime
FROM generate_series(1,(select datediff(day,'01-Jan-2021',now()::date))) i
ORDER BY 1

Group by data intervals

I have a single table which stores bandwidth usage on the network over a period of time. One column will contain the date time (primary key) and another column will record the bandwidth. Data is recorded every minute. We will have other columns recording other data at that moment in time.
If the user requests the data on 15 minute intervals (within a 24 hour period given start and end date), is it possible with a single query to get the data I require or would I have to write a stored procedure/cursor to do this? Users may then request 5 minute intervals data etc.
I will most likely be using Postgres, but are there NoSQL options which would be better?
Any ideas?
WITH t AS (
SELECT ts, (random()*100)::int AS bandwidth
FROM generate_series('2012-09-01', '2012-09-04', '1 minute'::interval) ts
)
SELECT date_trunc('hour', ts) AS hour_stump
,(extract(minute FROM ts)::int / 15) AS min15_slot
,count(*) AS rows_in_timeslice -- optional
,sum(bandwidth) AS sum_bandwidth
FROM t
WHERE ts >= '2012-09-02 00:00:00+02'::timestamptz -- user's time range
AND ts < '2012-09-03 00:00:00+02'::timestamptz -- careful with borders
GROUP BY 1, 2
ORDER BY 1, 2;
The CTE t provides data like your table might hold: one timestamp ts per minute with a bandwidth number. (You don't need that part, you work with your table instead.)
Here is a very similar solution for a very similar question - with detailed explanation how this particular aggregation works:
date_trunc 5 minute interval in PostgreSQL
Here is a similar solution for a similar question concerning running sums - with detailed explanation and links for the various functions used:
PostgreSQL: running count of rows for a query 'by minute'
Additional question in comment
WITH -- same as above ...
SELECT DISTINCT ON (1,2)
date_trunc('hour', ts) AS hour_stump
,(extract(minute FROM ts)::int / 15) AS min15_slot
,bandwidth AS bandwith_sample_at_min15
FROM t
WHERE ts >= '2012-09-02 00:00:00+02'::timestamptz
AND ts < '2012-09-03 00:00:00+02'::timestamptz
ORDER BY 1, 2, ts DESC;
Retrieves one un-aggregated sample per 15 minute interval - from the last available row in the window. This will be the 15th minute if the row is not missing. Crucial parts are DISTINCT ON and ORDER BY.
More information about the used technique here:
Select first row in each GROUP BY group?
select
date_trunc('hour', d) +
(((extract(minute from d)::integer / 5 * 5)::text) || ' minute')::interval
as "from",
date_trunc('hour', d) +
((((extract(minute from d)::integer / 5 + 1) * 5)::text) || ' minute')::interval
- '1 second'::interval
as "to",
sum(random() * 1000) as bandwidth
from
generate_series('2012-01-01', '2012-01-31', '1 minute'::interval) s(d)
group by 1, 2
order by 1, 2
;
That's for 5-minute ranges; for 15-minute ranges, divide by 15 instead.

Group DateTime into 5,15,30 and 60 minute intervals

I am trying to group some records into 5-, 15-, 30- and 60-minute intervals:
SELECT AVG(value) as "AvgValue",
sample_date/(5*60) as "TimeFive"
FROM DATA
WHERE id = 123 AND sample_date >= 3/21/2012
I want to run several queries, each of which would group my average values into the desired time increments. So the 5-min query would return results like this:
AvgValue TimeFive
6.90 1995-01-01 00:05:00
7.15 1995-01-01 00:10:00
8.25 1995-01-01 00:15:00
The 30-min query would result in this:
AvgValue TimeThirty
6.95 1995-01-01 00:30:00
7.40 1995-01-01 01:00:00
The datetime column is in yyyy-mm-dd hh:mm:ss format
I am getting implicit conversion errors on my datetime column. Any help is much appreciated!
Using
datediff(minute, '1990-01-01T00:00:00', yourDatetime)
will give you the number of minutes since 1990-1-1 (you can use the desired base date).
Then you can divide by 5, 15, 30 or 60, and group by the result of this division.
I've checked that it is evaluated as integer division, so you'll get an integer number you can use to group by.
i.e.
group by datediff(minute, '1990-01-01T00:00:00', yourDatetime) /5
UPDATE: As the original question was edited to require the data to be shown in date-time format after the grouping, I've added this simple query that will do what the OP wants:
-- This convert the period to date-time format
SELECT
-- note the 5, the "minute", and the starting point to convert the
-- period back to original time
DATEADD(minute, AP.FiveMinutesPeriod * 5, '2010-01-01T00:00:00') AS Period,
AP.AvgValue
FROM
-- this groups by the period and gets the average
(SELECT
P.FiveMinutesPeriod,
AVG(P.Value) AS AvgValue
FROM
-- This calculates the period (five minutes in this instance)
(SELECT
-- note the division by 5 and the "minute" to build the 5 minute periods
-- the '2010-01-01T00:00:00' is the starting point for the periods
datediff(minute, '2010-01-01T00:00:00', T.Time)/5 AS FiveMinutesPeriod,
T.Value
FROM Test T) AS P
GROUP BY P.FiveMinutesPeriod) AP
NOTE: I've divided this into 3 subqueries for clarity. You should read it from the inside out. It could, of course, be written as a single, compact query.
NOTE: if you change the period and the starting date-time, you can get any interval you need, like weeks starting from a given day, or whatever else you need.
If you want to generate test data for this query use this:
CREATE TABLE Test
( Id INT IDENTITY PRIMARY KEY,
Time DATETIME,
Value FLOAT)
INSERT INTO Test(Time, Value) VALUES('2012-03-22T00:00:22', 10)
INSERT INTO Test(Time, Value) VALUES('2012-03-22T00:03:22', 10)
INSERT INTO Test(Time, Value) VALUES('2012-03-22T00:04:45', 10)
INSERT INTO Test(Time, Value) VALUES('2012-03-22T00:07:21', 20)
INSERT INTO Test(Time, Value) VALUES('2012-03-22T00:10:25', 30)
INSERT INTO Test(Time, Value) VALUES('2012-03-22T00:11:22', 30)
INSERT INTO Test(Time, Value) VALUES('2012-03-22T00:14:47', 30)
The result of executing the query is this:
Period AvgValue
2012-03-22 00:00:00.000 10
2012-03-22 00:05:00.000 20
2012-03-22 00:10:00.000 30
Building on @JotaBe's answer (on which I cannot comment, otherwise I would), you could also try something like this, which does not require a subquery.
SELECT
AVG(value) AS 'AvgValue',
-- Add the rounded minutes back onto the base date to get the rounded time
DATEADD(
MINUTE,
(DATEDIFF(MINUTE, '1990-01-01T00:00:00', your_date) / 30) * 30,
'1990-01-01T00:00:00'
) AS 'TimeThirty'
FROM YourTable
-- WHERE your_date > some max lookback period
GROUP BY
(DATEDIFF(MINUTE, '1990-01-01T00:00:00', your_date) / 30)
This change removes temp tables and subqueries. It uses the same core logic for grouping by 30 minute intervals but, when presenting the data back as part of the result I'm just reversing the interval calculation to get the rounded date & time.
So, in case you googled this but need to do it in MySQL, as was my case:
In MySQL you can do
GROUP BY
CONCAT(
DATE_FORMAT(`timestamp`,'%m-%d-%Y %H:'),
FLOOR(DATE_FORMAT(`timestamp`,'%i')/5)*5
)
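A hedged full-query sketch around that GROUP BY (the readings table and the `timestamp`/`value` columns are illustrative names):
SELECT
    CONCAT(
        DATE_FORMAT(`timestamp`, '%m-%d-%Y %H:'),
        FLOOR(DATE_FORMAT(`timestamp`, '%i') / 5) * 5
    ) AS five_min_slot,
    AVG(`value`) AS avg_value
FROM readings
GROUP BY five_min_slot;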
In the new SQL Server 2022, you can use DATE_BUCKET, which rounds down to the nearest specified interval.
SELECT
DATE_BUCKET(minute, 5, d.sample_date) AS TimeFive,
AVG(d.value) AS AvgValue
FROM DATA d
WHERE d.id = 123
AND d.sample_date >= '20121203'
GROUP BY
DATE_BUCKET(minute, 5, d.sample_date);
You can use the following statement; it removes the seconds component, calculates the number of minutes past the five-minute mark, and uses that to round down to the time block. This is ideal if you want to change your window: simply change the mod value.
select dateadd(minute, - datepart(minute, [YOURDATE]) % 5, dateadd(minute, datediff(minute, 0, [YOURDATE]), 0)) as [TimeBlock]
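A hedged usage sketch grouping the question's DATA table by that block (SQL Server, untested):
SELECT dateadd(minute, - datepart(minute, d.sample_date) % 5,
       dateadd(minute, datediff(minute, 0, d.sample_date), 0)) AS TimeBlock,
       AVG(d.value) AS AvgValue
FROM DATA d
WHERE d.id = 123
GROUP BY dateadd(minute, - datepart(minute, d.sample_date) % 5,
         dateadd(minute, datediff(minute, 0, d.sample_date), 0));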
This does exactly what you want.
Replace dt with your datetime column, c with your value column, and astro_transit1 with your table. 300 seconds corresponds to 5 minutes; increase 300 to widen the time gap.
SELECT FROM_UNIXTIME(300 * ROUND(UNIX_TIMESTAMP(r.dt) / 300)) AS 5datetime,
       (SELECT r.c
        FROM astro_transit1 ra
        WHERE ra.dt = r.dt
        ORDER BY ra.dt DESC
        LIMIT 1) AS first_val
FROM astro_transit1 r
GROUP BY UNIX_TIMESTAMP(r.dt) DIV 300
LIMIT 0, 30;