I have a problem with a SQL query. I have a list of dates in a single column and I would like to create pairs of dates. The dates are sequenced, so I have to match the first date with the second and create a record, then the third date with the fourth and create a record, and so on, as in the following example:
ID DATA
50 10/04/2019
50 12/04/2019
50 13/04/2019
50 17/04/2019
50 18/04/2019
50 19/04/2019
Desired result:
ID DATA_START DATA_END
50 10/04/2019 12/04/2019
50 13/04/2019 17/04/2019
50 18/04/2019 19/04/2019
Thanks very much everyone for the help
You should mark which rows belong together (to be grouped into a single output row) and which role (start or end) each date will play.
Here's the code:
with a as (
    /* Source data */
    select 50 as id, convert(date, '2019-04-10', 23) as dt union all
    select 50 as id, convert(date, '2019-04-12', 23) as dt union all
    select 50 as id, convert(date, '2019-04-13', 23) as dt union all
    select 50 as id, convert(date, '2019-04-17', 23) as dt union all
    select 50 as id, convert(date, '2019-04-18', 23) as dt union all
    select 50 as id, convert(date, '2019-04-19', 23) as dt
)
select
    id,
    [1] as dt_start,
    [0] as dt_end
from (
    select
        id,
        dt,
        /*
          the first row of each pair (modulo = 1) is the start date
          and the second row (modulo = 0) is the end date
        */
        (row_number() over(partition by id order by dt)) % 2 as dt_role,
        /* integer division by 2 groups each pair of rows together */
        (row_number() over(partition by id order by dt) + 1) / 2 as dt_group
    from a
) as s
pivot (
    max(dt) for dt_role in ([0], [1])
) as p
GO
id | dt_start | dt_end
-: | :--------- | :---------
50 | 2019-04-10 | 2019-04-12
50 | 2019-04-13 | 2019-04-17
50 | 2019-04-18 | 2019-04-19
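For completeness, here is a variant of the same grouping idea that avoids PIVOT. Since each dt_group holds exactly one pair, plain MIN/MAX aggregation per group works too. This is only a sketch reusing the CTE a from above, not part of the original answer:

select
    id,
    min(dt) as dt_start,
    max(dt) as dt_end
from (
    select
        id,
        dt,
        /* same trick: integer division pairs up consecutive rows */
        (row_number() over(partition by id order by dt) + 1) / 2 as dt_group
    from a
) as s
group by id, dt_group;

Note that with an odd number of dates the last group collapses to a single date (dt_start = dt_end), so decide how you want to treat that case.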
I have records of calls coming into a call center. When a call comes into a call center, a ticket is opened.
So, let's say ticket 1 (T1) is opened on 8/1/19 and stays open until 8/5/19. If a person ran a query every day, then on 8/1 it would show 1 ticket open, and the same on day 2 through day 5. I want to get records by day to see how many tickets were open on each day.
In short, Frequency Distribution by Day.
Ticket Open_date Close_date
T1 8/1/2019 8/5/2019
T2 8/1/2019 8/6/2019
Result:
Date # Tickets_Open
8/1/2019 2
8/2/2019 2
8/3/2019 2
8/4/2019 2
8/5/2019 2
8/6/2019 1
8/7/2019 0
8/8/2019 0
8/9/2019 0
8/10/2019 0
You can handle this requirement with a calendar table, which stores all dates covering the full range in your data set.
WITH dates AS (
SELECT '2019-08-01' AS dt UNION ALL
SELECT '2019-08-02' UNION ALL
SELECT '2019-08-03' UNION ALL
SELECT '2019-08-04' UNION ALL
SELECT '2019-08-05' UNION ALL
SELECT '2019-08-06' UNION ALL
SELECT '2019-08-07' UNION ALL
SELECT '2019-08-08' UNION ALL
SELECT '2019-08-09' UNION ALL
SELECT '2019-08-10'
)
SELECT
d.dt,
COUNT(t.Open_date) AS num_tickets_open
FROM dates d
LEFT JOIN tickets t
ON d.dt BETWEEN t.Open_date AND t.Close_date
GROUP BY
d.dt;
Note that in practice, if you expect to have this reporting requirement in the long term, you might want to replace the dates CTE above with a bona fide table of dates.
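Here is a minimal sketch of such a persistent dates table, assuming SQL Server; the names calendar and dt are illustrative, not from the original answer:

CREATE TABLE calendar (dt DATE PRIMARY KEY);

-- Fill one year of dates with a recursive CTE; widen the bounds as needed.
WITH d AS (
    SELECT CAST('2019-01-01' AS DATE) AS dt
    UNION ALL
    SELECT DATEADD(DAY, 1, dt) FROM d WHERE dt < '2019-12-31'
)
INSERT INTO calendar (dt)
SELECT dt FROM d
OPTION (MAXRECURSION 366);

The reporting query is then the same left join as above, just against calendar instead of the CTE.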
This solution generates the list of dates from the tickets table using CTE recursion and calculates the count:
WITH Tickets(Ticket, Open_date, Close_date) AS
(
    -- string literals use single quotes (double quotes would be parsed as identifiers)
    SELECT 'T1', '8/1/2019', '8/5/2019'
    UNION ALL
    SELECT 'T2', '8/1/2019', '8/6/2019'
),
Ticket_dates(Ticket, Dates) AS
(
    SELECT t1.Ticket, CONVERT(DATETIME, t1.Open_date)
    FROM Tickets t1
    UNION ALL
    SELECT t1.Ticket, DATEADD(dd, 1, CONVERT(DATETIME, t1.Dates))
    FROM Ticket_dates t1
    INNER JOIN Tickets t2 ON t1.Ticket = t2.Ticket
    WHERE DATEADD(dd, 1, CONVERT(DATETIME, t1.Dates)) <= CONVERT(DATETIME, t2.Close_date)
)
SELECT CONVERT(varchar, Dates, 1) AS Dates, COUNT(*) AS tickets_open
FROM Ticket_dates
GROUP BY Dates
ORDER BY Dates
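One caveat with the recursive approach: SQL Server caps recursion at 100 levels by default, so a ticket that stays open for more than about three months will make this query fail unless you append an OPTION (MAXRECURSION 0) hint (or a higher explicit limit) to the statement.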
A "general purpose" trick is to generate a series of numbers, which can be done using CTE's but there are many alternatives, and from that create the needed range of dates. Once that exists then you can left join your ticket data to this and then count by date.
CREATE TABLE mytable(
Ticket VARCHAR(8) NOT NULL PRIMARY KEY
,Open_date DATE NOT NULL
,Close_date DATE NOT NULL
);
INSERT INTO mytable(Ticket,Open_date,Close_date) VALUES ('T1','8/1/2019','8/5/2019');
INSERT INTO mytable(Ticket,Open_date,Close_date) VALUES ('T2','8/1/2019','8/6/2019');
Also note I am using a cross apply in this example to "attach" the min and max dates of your tickets to each numbered row. You would need to include your own logic on what data to select here.
;WITH
cteDigits AS (
SELECT 0 AS digit UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL
SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9
)
, cteTally AS (
SELECT
[1s].digit
+ [10s].digit * 10
+ [100s].digit * 100 /* add more like this as needed */
AS num
FROM cteDigits [1s]
CROSS JOIN cteDigits [10s]
CROSS JOIN cteDigits [100s] /* add more like this as needed */
)
select
n.num + 1 rownum
, dateadd(day,n.num,ca.min_date) as on_date
, count(t.Ticket) as tickets_open
from cteTally n
cross apply (select min(Open_date), max(Close_date) from mytable) ca (min_date, max_date)
left join mytable t on dateadd(day,n.num,ca.min_date) between t.Open_date and t.Close_date
where dateadd(day,n.num,ca.min_date) <= ca.max_date
group by
n.num + 1
, dateadd(day,n.num,ca.min_date)
order by
rownum
;
result:
+--------+---------------------+--------------+
| rownum | on_date | tickets_open |
+--------+---------------------+--------------+
| 1 | 01.08.2019 00:00:00 | 2 |
| 2 | 02.08.2019 00:00:00 | 2 |
| 3 | 03.08.2019 00:00:00 | 2 |
| 4 | 04.08.2019 00:00:00 | 2 |
| 5 | 05.08.2019 00:00:00 | 2 |
| 6 | 06.08.2019 00:00:00 | 1 |
+--------+---------------------+--------------+
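As written, the three digit CTEs joined together yield a tally of 1,000 numbers (0 through 999), enough for roughly 2.7 years of consecutive dates; each additional CROSS JOIN of cteDigits multiplies that range by 10.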
I have a dataset something like below
|date|flag|
|20190503|0|
|20190504|1|
|20190505|1|
|20190506|1|
|20190507|1|
|20190508|0|
|20190509|0|
|20190510|0|
|20190511|1|
|20190512|1|
|20190513|0|
|20190514|0|
|20190515|1|
What I want to achieve is to group the consecutive dates where flag = 1 and add a counter column: 1 for the first day of each run of consecutive flag = 1 days, 2 for the second day, and so on, with 0 assigned wherever flag = 0.
|date|flag|counter|
|20190503|0|0|
|20190504|1|1|
|20190505|1|2|
|20190506|1|3|
|20190507|1|4|
|20190508|0|0|
|20190509|0|0|
|20190510|0|0|
|20190511|1|1|
|20190512|1|2|
|20190513|0|0|
|20190514|0|0|
|20190515|1|1|
I tried analytic functions and hierarchical queries, but still haven't found a solution. Any hint is appreciated!
Thanks,
Hong
You can define the groups using a cumulative sum of the zeros, then use row_number() within each group. For the sample data, the rows from 20190503 through 20190507 all land in group 1, 20190508 starts group 2, and 20190511-20190512 share group 4 with the flag = 0 row of 20190510 that precedes them:
select t.*,
       (case when flag = 0 then 0
             /* partition by flag as well: the flag = 0 row that opens each
                group would otherwise take row_number() = 1 */
             else row_number() over (partition by grp, flag order by date)
        end) as counter
from (select t.*,
             sum(case when flag = 0 then 1 else 0 end) over (order by date) as grp
      from t
     ) t;
A very different approach is to take the difference between the current date and a cumulative max of the flag = 0 date:
select t.*,
datediff(day,
max(case when flag = 0 then date end) over (order by date),
date
) as counter
from t;
Note that the logic of these two approaches is different, although they should produce the same results for the data you have provided: the first simply ignores missing dates, while the second increments the counter across them. For example, if the 20190506 row were absent, the first approach would give 20190507 a counter of 3 (its position among the remaining flag = 1 rows), while the second would give it 4 (days elapsed since the last flag = 0 date).
Well, Vertica has a very nice CONDITIONAL_CHANGE_EVENT() function that can help you there.
Every time the expression between the brackets changes, an integer is incremented by 1. This gives you a new group identifier, or a criterion to PARTITION BY, each time the flag changes. So: one SELECT to get the grouping info, then partition by the obtained group. Here goes:
WITH
input(dt,flag) AS (
SELECT '2019-05-03'::DATE,0
UNION ALL SELECT '2019-05-04'::DATE,1
UNION ALL SELECT '2019-05-05'::DATE,1
UNION ALL SELECT '2019-05-06'::DATE,1
UNION ALL SELECT '2019-05-07'::DATE,1
UNION ALL SELECT '2019-05-08'::DATE,0
UNION ALL SELECT '2019-05-09'::DATE,0
UNION ALL SELECT '2019-05-10'::DATE,0
UNION ALL SELECT '2019-05-11'::DATE,1
UNION ALL SELECT '2019-05-12'::DATE,1
UNION ALL SELECT '2019-05-13'::DATE,0
UNION ALL SELECT '2019-05-14'::DATE,0
UNION ALL SELECT '2019-05-15'::DATE,1
)
,
grp_input AS (
SELECT
*
, CONDITIONAL_CHANGE_EVENT(flag) OVER(ORDER BY dt) AS grp
FROM input
)
SELECT
dt
, flag
, CASE FLAG
WHEN 0 THEN 0
ELSE ROW_NUMBER() OVER(PARTITION BY grp ORDER BY dt)
END AS counter
FROM grp_input;
-- out dt | flag | counter
-- out ------------+------+---------
-- out 2019-05-03 | 0 | 0
-- out 2019-05-04 | 1 | 1
-- out 2019-05-05 | 1 | 2
-- out 2019-05-06 | 1 | 3
-- out 2019-05-07 | 1 | 4
-- out 2019-05-08 | 0 | 0
-- out 2019-05-09 | 0 | 0
-- out 2019-05-10 | 0 | 0
-- out 2019-05-11 | 1 | 1
-- out 2019-05-12 | 1 | 2
-- out 2019-05-13 | 0 | 0
-- out 2019-05-14 | 0 | 0
-- out 2019-05-15 | 1 | 1
-- out (13 rows)
I am looking to return all days from the past 6 months.
For example:
Column1
-------
01-OCT-18
30-SEP-18
29-SEP-18
........
01-APR-18
@TimBiegeleisen - Your solution pointed me in the right direction, so you get the points.
@MT0 - ADD_MONTHS, as far as I know, is not used in T-SQL, so I believe the clarification was necessary. But thank you for the pointer regarding the updates; I will refrain from doing that in the future.
We can compare each date in Column1 against SYSDATE minus 6 months, and then display the dates in the format you want using TO_CHAR with an appropriate format mask:
SELECT
TO_CHAR(Column1, 'dd/mm/yy') AS output
FROM yourTable
WHERE
Column1 >= ADD_MONTHS(SYSDATE, -6);
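One caveat worth hedging: SYSDATE carries a time-of-day component, so the filter above cuts off at the current time of day six months back. If Column1 holds midnight-only dates and you want whole days, truncating the boundary is a small variation on the same query (an assumption about your data, not part of the original answer):

SELECT
    TO_CHAR(Column1, 'dd/mm/yy') AS output
FROM yourTable
WHERE
    Column1 >= TRUNC(ADD_MONTHS(SYSDATE, -6));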
This will get you all the dates (in the format in your example) from the last 6 months:
Query 1:
SELECT TO_CHAR( SYSDATE - LEVEL + 1, 'DD-MON-RR' ) AS Column1
FROM DUAL
CONNECT BY SYSDATE - LEVEL + 1 >= ADD_MONTHS( SYSDATE, -6 )
Results:
| COLUMN1 |
|-----------|
| 11-OCT-18 |
| 10-OCT-18 |
| 09-OCT-18 |
...
| 13-APR-18 |
| 12-APR-18 |
| 11-APR-18 |
Update
From the comments: "the idea is to produce a list of days from the past 6 months and a count of how many times a particular value has been recorded against each date".
Oracle 11g R2 Schema Setup:
Create an example table with multiple rows for various days:
CREATE TABLE table_name ( value ) AS
SELECT TRUNC( SYSDATE ) - 0 FROM DUAL CONNECT BY LEVEL <= 5
UNION ALL SELECT TRUNC( SYSDATE ) - 1 FROM DUAL CONNECT BY LEVEL <= 3
UNION ALL SELECT TRUNC( SYSDATE ) - 2 FROM DUAL CONNECT BY LEVEL <= 7
UNION ALL SELECT TRUNC( SYSDATE ) - 3 FROM DUAL CONNECT BY LEVEL <= 2
UNION ALL SELECT TRUNC( SYSDATE ) - 4 FROM DUAL CONNECT BY LEVEL <= 1
Query 1:
SELECT TO_CHAR( c.Column1, 'DD-MON-RR' ) AS Column1,
COUNT( t.value ) AS num_values_per_day
FROM (
SELECT TRUNC( SYSDATE ) - LEVEL + 1 AS Column1
FROM DUAL
CONNECT BY TRUNC( SYSDATE ) - LEVEL + 1 >= ADD_MONTHS( SYSDATE, -6 )
) c
LEFT OUTER JOIN table_name t
ON ( c.column1 = t.value )
GROUP BY c.Column1
ORDER BY c.Column1 DESC
Results:
| COLUMN1 | NUM_VALUES_PER_DAY |
|-----------|--------------------|
| 11-OCT-18 | 5 |
| 10-OCT-18 | 3 |
| 09-OCT-18 | 7 |
| 08-OCT-18 | 2 |
| 07-OCT-18 | 1 |
| 06-OCT-18 | 0 |
| 05-OCT-18 | 0 |
...
| 14-APR-18 | 0 |
| 13-APR-18 | 0 |
| 12-APR-18 | 0 |
So for the additional task described in your comment, you might want to adjust Tim Biegeleisen's solution a little bit:
SELECT TRUNC(Column1) AS "Day"
     , COUNT(*) AS "Count"
FROM yourTable
WHERE Column1 >= ADD_MONTHS(SYSDATE, -6)
GROUP BY TRUNC(Column1);
I will add more to it, however this was the starting point I needed. I was overcomplicating the query by trying to use CONNECT BY LEVEL; the query I had before worked fine for dates in the future but would not return anything prior to SYSDATE. (I will play around a bit more with the above to figure out how it works, but for the time being I know enough.)
Thanks for the answer. I was able to figure out the solution for what I wanted via the following:
SELECT col1
FROM table1
WHERE col1 >= add_months(sysdate,-6)
I have a view like this:
Year | Month | Week | Category | Value |
2017 | 1 | 1 | A | 1
2017 | 1 | 1 | B | 2
2017 | 1 | 1 | C | 3
2017 | 1 | 2 | A | 4
2017 | 1 | 2 | B | 5
2017 | 1 | 2 | C | 6
2017 | 1 | 3 | A | 7
2017 | 1 | 3 | B | 8
2017 | 1 | 3 | C | 9
2017 | 1 | 4 | A | 10
2017 | 1 | 4 | B | 11
2017 | 1 | 4 | C | 12
2017 | 2 | 5 | A | 1
2017 | 2 | 5 | B | 2
2017 | 2 | 5 | C | 3
2017 | 2 | 6 | A | 4
2017 | 2 | 6 | B | 5
2017 | 2 | 6 | C | 6
2017 | 2 | 7 | A | 7
2017 | 2 | 7 | B | 8
2017 | 2 | 7 | C | 9
2017 | 2 | 8 | A | 10
2017 | 2 | 8 | B | 11
2017 | 2 | 8 | C | 12
And I need to make a new view which shows the average of the value column (let's call it avg_val) and the value from the max week of the month (max_val_of_month). Ex: the max week of January is 4, so the value for category A is 10. Or, to be clear, something like this:
Year | Month | Category | avg_val | max_val_of_month
2017 | 1 | A | 5.5 | 10
2017 | 1 | B | 6.5 | 11
2017 | 1 | C | 7.5 | 12
2017 | 2 | A | 5.5 | 10
2017 | 2 | B | 6.5 | 11
2017 | 2 | C | 7.5 | 12
I have used a window function, over partition by year, month, category, to get the avg value. But how can I get the value of the max week of each month?
Assuming that you need a monthly average and the value for the max week, not the max value per month:
SELECT year, month, category, avg_val, value max_week_val
FROM (
SELECT *,
AVG(value) OVER (PARTITION BY year, month, category) avg_val,
ROW_NUMBER() OVER (PARTITION BY year, month, category ORDER BY week DESC) rn
FROM view1
) q
WHERE rn = 1
ORDER BY year, month, category
Or a more verbose version without window functions:
SELECT q.year, q.month, q.category, q.avg_val, v.value max_week_val
FROM (
SELECT year, month, category, avg(value) avg_val, MAX(week) max_week
FROM view1
GROUP BY year, month, category
) q JOIN view1 v
ON q.year = v.year
AND q.month = v.month
AND q.category = v.category
AND q.max_week = v.week
ORDER BY year, month, category
And here is my NEW version.
My thanks to @peterm for pointing out the previously incorrect value of val_from_max_week_of_month. So, I corrected it:
SELECT
a.Year,
a.Month,
a.Category,
max(a.Week) AS max_week,
AVG(a.Value) AS avg_val,
(
SELECT b.Value
FROM decades AS b
WHERE
b.Year = a.Year AND
b.Month = a.Month AND
b.Week = max(a.Week) AND
b.Category = a.Category
) AS val_from_max_week_of_month
FROM decades AS a
GROUP BY
a.Year,
a.Month,
a.Category
;
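In case the shape of that query looks odd: the correlated scalar subquery is evaluated once per output group, and it is allowed to reference max(a.Week) because that aggregate belongs to the outer, grouped query; the subquery simply fetches the row holding the value for that group's latest week.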
First, you might need to check how you handle the first week of January. If the 1st of January is not a Monday, there are several interpretations, and not every one of them will fit the solutions here. You'll need to use either:
the ISO week concept, i.e. the week column should hold the ISO week and the year column should hold the ISO year (week-year, rather). Note: in this concept, the 1st of January sometimes actually belongs to the previous year;
or your own concept, where the first week of the year is "split" in two if the 1st of January is not a Monday.
Note: the solutions below will not work if (in your table) the first week of January can be 52 or 53.
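For reference, if you ever need to derive such columns, PostgreSQL can compute the ISO week and ISO week-year directly. A small side illustration (not part of the original answer) around a year boundary:

select d::date,
       extract(isoyear from d) as iso_year, -- week-year: 2018-12-31 already belongs to ISO 2019
       extract(week from d)    as iso_week  -- ISO week number, 1..53
from generate_series(date '2018-12-29', date '2019-01-02', interval '1 day') as g(d);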
avg_val is just a simple aggregation, while max_val_of_month calls for a typical greatest-n-per-group query. That problem has a lot of possible solutions in PostgreSQL, with varying performance. Fortunately, your query will naturally have an easily determined selectivity: you'll always need (approximately) a quarter of your data.
The usual winners (in performance) are the first two variants below. (This is no surprise, as these two tend to perform better and better as the portion of the original data you need grows.)
array_agg() with order by variant:
select year, month, category, avg(value) avg_val,
(array_agg(value order by week desc))[1] max_val_of_month
from table_name
group by year, month, category;
distinct on variant:
select distinct on (year, month, category) year, month, category,
avg(value) over (partition by year, month, category) avg_val,
value max_val_of_month
from table_name
order by year, month, category, week desc;
The pure window function variant is not that bad either:
row_number() variant:
select year, month, category, avg_val, max_val_of_month
from (select year, month, category, value max_val_of_month,
avg(value) over (partition by year, month, category) avg_val,
row_number() over (partition by year, month, category order by week desc) rn
from table_name) w
where rn = 1;
But the LATERAL variant is only viable with an index:
LATERAL variant:
create index idx_table_name_year_month_category_week_desc
on table_name(year, month, category, week desc);
select year, month, category,
avg(value) avg_val,
max_val_of_month
from table_name t
cross join lateral (select value max_val_of_month
from table_name
where (year, month, category) = (t.year, t.month, t.category)
order by week desc
limit 1) m
group by year, month, category, max_val_of_month;
But most of the solutions above can actually utilize this index, not just this last one.
Without the index: http://rextester.com/WNEL86809
With the index: http://rextester.com/TYUA52054
with data (yr, mnth, wk, cat, val) as
(
-- begin test data
select 2017 , 1 , 1 , 'A' , 1 from dual union all
select 2017 , 1 , 1 , 'B' , 2 from dual union all
select 2017 , 1 , 1 , 'C' , 3 from dual union all
select 2017 , 1 , 2 , 'A' , 4 from dual union all
select 2017 , 1 , 2 , 'B' , 5 from dual union all
select 2017 , 1 , 2 , 'C' , 6 from dual union all
select 2017 , 1 , 3 , 'A' , 7 from dual union all
select 2017 , 1 , 3 , 'B' , 8 from dual union all
select 2017 , 1 , 3 , 'C' , 9 from dual union all
select 2017 , 1 , 4 , 'A' , 10 from dual union all
select 2017 , 1 , 4 , 'B' , 11 from dual union all
select 2017 , 1 , 4 , 'C' , 12 from dual union all
select 2017 , 2 , 5 , 'A' , 1 from dual union all
select 2017 , 2 , 5 , 'B' , 2 from dual union all
select 2017 , 2 , 5 , 'C' , 3 from dual union all
select 2017 , 2 , 6 , 'A' , 4 from dual union all
select 2017 , 2 , 6 , 'B' , 5 from dual union all
select 2017 , 2 , 6 , 'C' , 6 from dual union all
select 2017 , 2 , 7 , 'A' , 7 from dual union all
select 2017 , 2 , 8 , 'A' , 10 from dual union all
select 2017 , 2 , 8 , 'B' , 11 from dual union all
select 2017 , 2 , 7 , 'B' , 8 from dual union all
select 2017 , 2 , 7 , 'C' , 9 from dual union all
select 2018 , 2 , 7 , 'C' , 9 from dual union all
select 2017 , 2 , 8 , 'C' , 12 from dual
-- end test data
)
select * from
(
select
-- data.*: all columns of the data table
data.*,
-- avrg: partition by a combination of year,month and category to work out -
-- the avg for each category in a month of a year
avg(val) over (partition by yr, mnth, cat) avrg,
-- mwk: partition by year and month to work out -
-- the max week of a month in a year
max(wk) over (partition by yr, mnth) mwk
from
data
)
-- as OP's interest is in the max week of each month of a year, -
-- "wk" column value is matched against
-- the derived column "mwk"
where wk = mwk
order by yr,mnth,cat;