Running total... with a twist - sql

I am trying to figure out the SQL to do a running total for a daily quota system. The system works like this...
Each day a user gets a quota of 2 "consumable things". If they use them all up, the next day they get another 2. If they somehow over use them (use more than 2), the next day they still get 2 (they can't have a negative balance). If they don't use them all, the remainder carries to the next day (which can carry to the next, etc...).
Here is a chart of data to use as validation. It's laid out as quota for the day, amount used that day, amount left at the end of the day:
2 - 2 - 0
2 - 0 - 2
4 - 3 - 1
3 - 0 - 3
5 - 7 - 0
2 - 1 - 1
3 - 0 - 3
5 - 2 - 3
5 - 1 - 4
6 - 9 - 0
The SQL to start off with would be:
WITH t(x, y) AS (
VALUES (2, '2013-09-16'),
(0, '2013-09-17'),
(3, '2013-09-18'),
(0, '2013-09-19'),
(7, '2013-09-20'),
(1, '2013-09-21'),
(0, '2013-09-22'),
(2, '2013-09-23'),
(1, '2013-09-24'),
(9, '2013-09-25')
)
For the life of me, trying recursive WITH statements and window aggregates, I cannot figure out how to make it work (but I can certainly see the pattern).
It should be something like 2 - x + SUM(previous row), but I don't know how to put that into SQL.

Try creating a custom aggregate function like:
CREATE FUNCTION quota_calc_func(numeric, numeric, numeric) -- carry over, daily usage and daily quota
RETURNS numeric AS
$$
SELECT GREATEST(0, $1 + $3 - $2);
$$
LANGUAGE SQL STRICT IMMUTABLE;
CREATE AGGREGATE quota_calc( numeric, numeric ) -- daily usage and daily quota
(
SFUNC = quota_calc_func,
STYPE = numeric,
INITCOND = '0'
);
WITH t(x, y) AS (
VALUES (2, '2013-09-16'),
(0, '2013-09-17'),
(3, '2013-09-18'),
(0, '2013-09-19'),
(7, '2013-09-20'),
(1, '2013-09-21'),
(0, '2013-09-22'),
(2, '2013-09-23'),
(1, '2013-09-24'),
(9, '2013-09-25')
)
SELECT x, y, quota_calc(x, 2) over (order by y)
FROM t;
May contain bugs, haven't tested it.
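For what it's worth, tracing the aggregate by hand against the validation chart in the question checks out: each step computes GREATEST(0, carry + 2 - used), which gives 0, 2, 1, 3, 0, 1, 3, 3, 4, 0 for the ten sample rows, matching the "amount left" column.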

they can't have a negative balance
That triggered my memory :-)
I had a similar problem >10 years ago on a Teradata system.
The logic could easily be implemented using recursion; for each row:
add 2 "new" quota and subtract the x "used" quota; if the result is less than zero, use zero instead.
I can't remember how I found that solution, but I finally implemented it using simple cumulative sums:
SELECT
dt.*,
CASE -- used in following calculation, this is just for illustration
WHEN MIN(quota_raw) OVER (ORDER BY datecol ROWS UNBOUNDED PRECEDING) >= 0 THEN 0
ELSE MIN(quota_raw) OVER (ORDER BY datecol ROWS UNBOUNDED PRECEDING)
END AS correction,
quota_raw
- CASE
WHEN MIN(quota_raw) OVER (ORDER BY datecol ROWS UNBOUNDED PRECEDING) >= 0 THEN 0
ELSE MIN(quota_raw) OVER (ORDER BY datecol ROWS UNBOUNDED PRECEDING)
END AS quota_left
FROM
(
SELECT quota, datecol,
SUM(quota) OVER (ORDER BY datecol ROWS UNBOUNDED PRECEDING) AS quota_used,
2*COUNT(*) OVER (ORDER BY datecol ROWS UNBOUNDED PRECEDING) AS quota_available,
quota_available - quota_used AS quota_raw
FROM t
) AS dt
ORDER BY datecol
The secret sauce is the moving min "correction" which adjusts negative results to zero.
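One portability note: the inner derived table reuses the select-list aliases quota_used and quota_available to compute quota_raw in the same SELECT, which Teradata accepts but engines such as PostgreSQL and SQL Server do not. A sketch of a portable variant simply nests one more derived table (the outer query stays the same):
-- Portable rewrite of the inner derived table only: the window aliases are
-- defined one level down before quota_raw is computed from them.
SELECT quota, datecol, quota_used, quota_available,
       quota_available - quota_used AS quota_raw
FROM
(
  SELECT quota, datecol,
         SUM(quota) OVER (ORDER BY datecol ROWS UNBOUNDED PRECEDING) AS quota_used,
         2*COUNT(*) OVER (ORDER BY datecol ROWS UNBOUNDED PRECEDING) AS quota_available
  FROM t
) AS base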

simple recursive cte solution, assuming you have no gaps in dates:
with recursive cte as (
select
t.dt,
2 as quote_day,
t.quote_used,
greatest(2 - t.quote_used, 0) as quote_left
from t
where t.dt = '2013-09-16'
union all
select
t.dt,
2 + c.quote_left as quote_day,
t.quote_used,
greatest(2 + c.quote_left - t.quote_used, 0) as quote_left
from cte as c
inner join t on t.dt = c.dt + 1
)
select *
from cte
sql fiddle demo
another solution - with cumulative aggregates:
with cte1 as (
select
dt, quote_used,
sum(2 - quote_used) over(order by dt asc) as quote_raw
from t
), cte2 as (
select
dt, quote_used, quote_raw,
least(min(quote_raw) over(order by dt asc), 0) as quote_corr
from cte1
)
select
dt,
2 + coalesce(lag(quote_raw - quote_corr) over(order by dt asc), 0) as quote_day,
quote_used,
quote_raw - quote_corr as quote_left
from cte2
sql fiddle demo

Related

SQL - SUMIF substitute?

This question is best asked using an example - if I have daily data (in this case, daily Domestic Box Office for the movie Elvis), how can I sum only the weekend values?
If the data looks like this:
Date      | DBO
6/24/2022 | 12755467
6/25/2022 | 9929779
6/26/2022 | 8526333
6/27/2022 | 4253038
6/28/2022 | 5267391
6/29/2022 | 4010762
6/30/2022 | 3577241
7/1/2022  | 5320812
7/2/2022  | 6841224
7/3/2022  | 6290576
7/4/2022  | 4248679
7/5/2022  | 3639110
7/6/2022  | 3002182
7/7/2022  | 2460108
7/8/2022  | 3326066
7/9/2022  | 4324040
7/10/2022 | 3530965
I'd like to be able to get results that look like this:
Weekend | DBO Sum
1       | 31211579
2       | 18452612
3       | 11181071
Also - not sure how tricky this would be but would love to include percent change v. last weekend.
Weekend | DBO Sum  | % Change
1       | 31211579 |
2       | 18452612 | -41%
3       | 11181071 | -39%
I tried this with CASE WHEN but I got the results in different columns, which was not what I was looking for.
SELECT
SUM(CASE
WHEN DATE BETWEEN '2022-06-24' AND '2022-06-26' THEN index
ELSE 0
END) AS Weekend1
,SUM(CASE
WHEN DATE BETWEEN '2022-07-01' AND '2022-07-03' THEN index
ELSE 0
END) AS Weekend2
,SUM(CASE
WHEN DATE BETWEEN '2022-07-08' AND '2022-07-10' THEN index
ELSE 0
END) AS Weekend3
FROM Elvis
I would start by filtering the data on weekend days only. Then we can group by week to get the index sum; the last step is to use window functions to compare each weekend with the previous one:
select iso_week,
row_number() over(order by iso_week) weekend_number,
sum(index) as dbo_sum,
( sum(index) - lag(sum(index)) over(order by iso_week) )
/ nullif(lag(sum(index)) over(order by iso_week), 0) as ratio_change
from (
select e.*, extract(isoweek from date) iso_week
from elvis e
where extract(dayofweek from date) in (1, 7)
) e
group by iso_week
order by iso_week
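Note that extract(dayofweek from date) in (1, 7) keeps only Sunday and Saturday (BigQuery numbers the days 1 = Sunday through 7 = Saturday), while the expected totals in the question also include the Friday figures; to reproduce them exactly the filter would need 6 (Friday) as well. The ISOWEEK grouping still works in that case, since the Friday, Saturday and Sunday of a given weekend fall in the same ISO week.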
Consider below
select *,
round(100 * safe_divide(dbo_sum - lag(dbo_sum) over(order by week), lag(dbo_sum) over(order by week)), 2) change_percent
from (
select extract(week from date + 2) week, sum(dbo) dbo_sum
from your_table
where extract(dayofweek from date + 2) in (1, 2, 3)
group by week
)
if applied to sample data in your question - output is

SQL - Calculate percentage by group, for multiple groups

I have a table in GBQ in the following format:
UserId Orders Month
XDT 23 1
XDT 0 4
FKR 3 6
GHR 23 4
... ... ...
It shows the number of orders per user and month.
I want to calculate the percentage of users who have orders; I did it as follows:
SELECT
HasOrders,
ROUND(COUNT(*) * 100 / CAST( SUM(COUNT(*)) OVER () AS float64), 2) Parts
FROM (
SELECT
*,
CASE WHEN Orders = 0 THEN 0 ELSE 1 END AS HasOrders
FROM `Table` )
GROUP BY
HasOrders
ORDER BY
Parts
It gives me the following result:
HasOrders Parts
0 35
1 65
I need to calculate the percentage of users who have orders, by month, in a way that every month = 100%
Currently to do this I execute the query once per month, which is not practical :
SELECT
HasOrders,
ROUND(COUNT(*) * 100 / CAST( SUM(COUNT(*)) OVER () AS float64), 2) Parts
FROM (
SELECT
*,
CASE WHEN Orders = 0 THEN 0 ELSE 1 END AS HasOrders
FROM `Table` )
WHERE Month = 1
GROUP BY
HasOrders
ORDER BY
Parts
Is there a way to execute a query once and have this result?
HasOrders Parts Month
0 25 1
1 75 1
0 45 2
1 55 2
... ... ...
SELECT
SIGN(Orders),
ROUND(COUNT(*) * 100.000 / SUM(COUNT(*)) OVER (PARTITION BY Month), 2) AS Parts,
Month
FROM T
GROUP BY Month, SIGN(Orders)
ORDER BY Month, SIGN(Orders)
Demo on Postgres:
https://dbfiddle.uk/?rdbms=postgres_10&fiddle=4cd2d1455673469c2dfc060eccea8020
You've stated that it's important for the total to be 100%, so you might consider rounding down in the case of no orders and rounding up in the case of has orders for those scenarios where the percentage falls precisely on an odd multiple of 0.5%. Or perhaps rounding toward even, or rounding the smallest value down, would be better options:
WITH DATA AS (
SELECT SIGN(Orders) AS HasOrders, Month,
COUNT(*) * 100.000 / SUM(COUNT(*)) OVER (PARTITION BY Month) AS PartsPercent
FROM T
GROUP BY Month, SIGN(Orders)
ORDER BY Month, SIGN(Orders)
)
select HasOrders, Month, PartsPercent,
PartsPercent - TRUNCATE(PartsPercent) AS Fraction,
CASE WHEN HasOrders = 0
THEN FLOOR(PartsPercent) ELSE CEILING(PartsPercent)
END AS PartsRound0Down,
CASE WHEN PartsPercent - TRUNCATE(PartsPercent) = 0.5
AND MOD(TRUNCATE(PartsPercent), 2) = 0
THEN FLOOR(PartsPercent) ELSE ROUND(PartsPercent) -- halfway up
END AS PartsRoundTowardEven,
CASE WHEN PartsPercent - TRUNCATE(PartsPercent) = 0.5 AND PartsPercent < 50
THEN FLOOR(PartsPercent) ELSE ROUND(PartsPercent) -- halfway up
END AS PartsSmallestTowardZero
from DATA
It's usually not advisable to test floating-point values for equality, and I don't know how BigQuery's float64 will behave in the comparison against 0.5; one half is, nevertheless, exactly representable in binary. See these options in a case where the breakout is 101 vs 99. I don't have immediate access to BigQuery, so be aware that Postgres's rounding behavior is different:
https://dbfiddle.uk/?rdbms=postgres_10&fiddle=c8237e272427a0d1114c3d8056a01a09
Consider below approach
select hasOrders, round(100 * parts, 2) as parts, month from (
select month,
countif(orders = 0) / count(*) `0`,
countif(orders > 0) / count(*) `1`
from your_table
group by month
)
unpivot (parts for hasOrders in (`0`, `1`))
with output like below

Determining how often and how long a specific event occurs in SQL data

I am curious what would be the best method to find out how often and how long an event occurred within a set of SQL data that is managed using Microsoft SQL Server Management Studio 17.
Below is a simplified data table to illustrate the type of thing I'd be interested in solving. Say data is collected by a sensor every 100 ms and I want to know how often and for how long the power dropped to 0.
I have a couple of ideas for how to do this using CTEs and/or window functions; however, my understanding of these functions doesn't seem to translate into SQL Server Management Studio, as my code keeps tripping errors at points that should theoretically be correct.
For example, I thought I could use window functions partitioned by the position number, filtered to the points when power was 0, and then subtract the LAST_VALUE from the FIRST_VALUE. However, the environment doesn't recognize these arguments.
I thought also about a CTE that already filters out the points where power was zero, but I couldn't bring that to a remotely functional point.
CREATE TABLE SensorData
(
[TimeStamp] DATETIME ,
[Position] INT,
[POWER] INT
);
INSERT INTO SensorData ([TimeStamp], [Position], [Power])
VALUES (4, 1, 59), (101, 1, 60), (207, 1, 50), (321, 1, 58),
(428, 1, 55), (534, 1, 59), (646, 1, 51), (755, 1, 0),
(868, 1, 0), (975, 1, 0), (1081, 1, 0), (1193, 2, 45),
(1307, 2, 52), (1412, 2, 51), (1519, 2, 55), (1629, 2, 58),
(1735, 2, 0), (1851, 2, 0), (1960, 2, 0), (2066, 2, 54);
SELECT *
FROM SensorData;
How the output looks at the end isn't so important. What's important is that I know the number of events where, in this case, the power went to zero, and how long each event lasted (the last TimeStamp within the event minus the first TimeStamp).
Any advice would be greatly appreciated!
This can be done in multiple CTEs, to keep things nicely organized, as follows:
with sensorevents as (
select
[TimeStamp]
, position
, power
, lag(power,1) over (order by timestamp) as prevPower
from SensorData
)
, powerloss as (
select
*
, case when [prevPower] > 0 and power = 0 then 'power loss'
when [prevPower] = 0 and power > 0 then 'power on'
end as status
, case when [prevPower] = 0 then lag(timestamp,1) over (order by timestamp)
end as powerOffTimestamp
, case when [prevPower] > 0 and power = 0 then 0
when [prevPower] = 0 and power > 0 then timestamp - lag(timestamp,1) over (order by timestamp)
end as duration
from Sensorevents
where ([prevPower] > 0 and power = 0)
or
([prevPower] = 0 and power > 0)
)
select
*
from powerloss
where status = 'power on'
The first CTE defines a new column, prevPower which tells us if we are at an edge where power loss occurs or power restoration occurs. The next CTE uses these edges and window functions again to find the timestamp where the previous event (loss) occurs for a power restoration event, and calculates the duration from the timestamp difference.
The last select statement just filters on the power restoration events:
TimeStamp position power prevPower nextPower status powerOffTimestamp duration
09/04/1903 00:00:00 2 45 0 52 power on 26/01/1902 00:00:00 15/03/1901 00:00:00
29/08/1905 00:00:00 2 54 0 power on 02/10/1904 00:00:00 28/11/1900 00:00:00
I just saw that the other reply partitions by position. To add that to this solution, you need to modify all window functions by adding a partition by position clause:
, lag(power,1) over (partition by position order by timestamp) as prevPower
https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=264deed484604cda3ace1fb60d674068
This is a gaps-and-islands problem. You need to assign a group to the "0" records. A handy identifier for the group is the number of non-zero values before the value. The rest is aggregation:
select position, min(timestamp), max(timestamp)
from (select sd.*,
sum(case when power <> 0 then 1 else 0 end) over (partition by position order by timestamp) as grp
from sensordata sd
) sd
where power = 0
group by position, grp;
Note that this assumes that you want the 0s per position.
Specifically because you are looking at power = 0, you can simplify the definition of the group: it is the sum of the power up to that point. This is constant for a group of adjacent rows with power = 0:
select position, min(timestamp), max(timestamp)
from (select sd.*,
sum(power) over (partition by position order by timestamp) as grp
from sensordata sd
) sd
where power = 0
group by position, grp;
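If the goal is literally "how often and how long", the same grouping can be extended to return one row per outage together with its duration and a total outage count. The following is just a sketch, assuming TimeStamp holds real datetimes (the sample INSERT supplies raw integers, which SQL Server implicitly converts), and the outage_start / duration_ms / total_outages names are illustrative:
WITH grouped AS (
    SELECT sd.*,
           SUM(CASE WHEN power <> 0 THEN 1 ELSE 0 END)
               OVER (PARTITION BY position ORDER BY timestamp) AS grp
    FROM SensorData sd
)
SELECT position,
       MIN(timestamp) AS outage_start,
       MAX(timestamp) AS outage_end,
       DATEDIFF(MILLISECOND, MIN(timestamp), MAX(timestamp)) AS duration_ms, -- how long each event lasted
       COUNT(*) OVER () AS total_outages                                     -- how many events in total
FROM grouped
WHERE power = 0
GROUP BY position, grp;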

Splitting up group by with relevant aggregates beyond the basic ones?

I'm not sure if this has been asked before because I'm having trouble even asking it myself. I think the best way to explain my dilemma is to use an example.
Say I've rated my happiness on a scale of 1-10 every day for 10 years and I have the results in a big table where I have a single date correspond to a single integer value of my happiness rating. I say, though, that I only care about my happiness over 60 day periods on average (this may seem weird but this is a simplified example). So I wrap up this information to a table where I now have a start date field, an end date field, and an average rating field where the start days are every day from the first day to the last over all 10 years, but the end dates are exactly 60 days later. To be clear, these 60 day periods are overlapping (one would share 59 days with the next one, 58 with the next, and so on).
Next I pick a threshold rating, say 5, where I want to categorize everything below it into a "bad" category and everything above into a "good" category. I could easily add another field and use a case structure to give every 60-day range a "good" or "bad" flag.
Then to sum it up, I want to display the total periods of "good" and "bad" from maximum beginning to maximum end date. This is where I'm stuck. I could group by the good/bad category and then just take min(start date) and max(end date), but then if, say, the ranges go from good to bad to good then to bad again, output would show overlapping ranges of good and bad. In the aforementioned situation, I would want to show four different ranges.
I realize this may seem clearer to me than it would to someone else, so if you need clarification just ask.
Thank you
---EDIT---
Here's an example of what the before would look like:
StartDate| EndDate| MoodRating
------------+------------+------------
1/1/1991 |3/1/1991 | 7
1/2/1991 |3/2/1991 | 7
1/3/1991 |3/3/1991 | 4
1/4/1991 |3/4/1991 | 4
1/5/1991 |3/5/1991 | 7
1/6/1991 |3/6/1991 | 7
1/7/1991 |3/7/1991 | 4
1/8/1991 |3/8/1991 | 4
1/9/1991 |3/9/1991 | 4
And the after:
MinStart| MaxEnd | Good/Bad
-----------+------------+----------
1/1/1991|3/2/1991 |good
1/3/1991|3/4/1991 |bad
1/5/1991|3/6/1991 |good
1/7/1991|3/9/1991 |bad
Currently my query with the group by rating would show:
MinStart| MaxEnd | Good/Bad
-----------+------------+----------
1/1/1991|3/6/1991 |good
1/3/1991|3/9/1991 |bad
This is something along the lines of
select min(StartDate), max(EndDate), Good_Bad
from sourcetable
group by Good_Bad
While Jason A Long's answer may be correct - I can't read it or figure it out, so I figured I would post my own answer. Assuming that this isn't a process that you're going to be constantly running, the CURSOR's performance hit shouldn't matter. But (at least to me) this solution is very readable and can be easily modified.
In a nutshell - we insert the first record from your source table into our results table. Next, we grab the next record and see if the mood score is the same as the previous record. If it is, we simply update the previous record's end date with the current record's end date (extending the range). If not, we insert a new record. Rinse, repeat. Simple.
Here is your setup and some sample data:
DECLARE @MoodRanges TABLE (StartDate DATE, EndDate DATE, MoodRating int)
INSERT INTO @MoodRanges
VALUES
('1/1/1991','3/1/1991', 7),
('1/2/1991','3/2/1991', 7),
('1/3/1991','3/3/1991', 4),
('1/4/1991','3/4/1991', 4),
('1/5/1991','3/5/1991', 7),
('1/6/1991','3/6/1991', 7),
('1/7/1991','3/7/1991', 4),
('1/8/1991','3/8/1991', 4),
('1/9/1991','3/9/1991', 4)
Next, we can create a table to store our results, as well as some variable placeholders for our cursor:
DECLARE @MoodResults TABLE(ID INT IDENTITY(1, 1), StartDate DATE, EndDate DATE, MoodScore varchar(50))
DECLARE @CurrentStartDate DATE, @CurrentEndDate DATE, @CurrentMoodScore INT,
@PreviousStartDate DATE, @PreviousEndDate DATE, @PreviousMoodScore INT
Now we put all of the sample data into our CURSOR:
DECLARE MoodCursor CURSOR FOR
SELECT StartDate, EndDate, MoodRating
FROM @MoodRanges
OPEN MoodCursor
FETCH NEXT FROM MoodCursor INTO @CurrentStartDate, @CurrentEndDate, @CurrentMoodScore
WHILE @@FETCH_STATUS = 0
BEGIN
IF @PreviousStartDate IS NOT NULL
BEGIN
IF (@PreviousMoodScore >= 5 AND @CurrentMoodScore >= 5)
OR (@PreviousMoodScore < 5 AND @CurrentMoodScore < 5)
BEGIN
UPDATE @MoodResults
SET EndDate = @CurrentEndDate
WHERE ID = (SELECT MAX(ID) FROM @MoodResults)
END
ELSE
BEGIN
INSERT INTO
@MoodResults
VALUES
(@CurrentStartDate, @CurrentEndDate, CASE WHEN @CurrentMoodScore >= 5 THEN 'GOOD' ELSE 'BAD' END)
END
END
ELSE
BEGIN
INSERT INTO
@MoodResults
VALUES
(@CurrentStartDate, @CurrentEndDate, CASE WHEN @CurrentMoodScore >= 5 THEN 'GOOD' ELSE 'BAD' END)
END
SET @PreviousStartDate = @CurrentStartDate
SET @PreviousEndDate = @CurrentEndDate
SET @PreviousMoodScore = @CurrentMoodScore
FETCH NEXT FROM MoodCursor INTO @CurrentStartDate, @CurrentEndDate, @CurrentMoodScore
END
CLOSE MoodCursor
DEALLOCATE MoodCursor
And here are the results:
SELECT * FROM @MoodResults
ID StartDate EndDate MoodScore
----------- ---------- ---------- --------------------------------------------------
1 1991-01-01 1991-03-02 GOOD
2 1991-01-03 1991-03-04 BAD
3 1991-01-05 1991-03-06 GOOD
4 1991-01-07 1991-03-09 BAD
Is this what you're looking for?
IF OBJECT_ID('tempdb..#MyDailyMood', 'U') IS NOT NULL
DROP TABLE #MyDailyMood;
CREATE TABLE #MyDailyMood (
TheDate DATE NOT NULL,
MoodLevel INT NOT NULL
);
WITH
cte_n1 (n) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) n (n)),
cte_n2 (n) AS (SELECT 1 FROM cte_n1 a CROSS JOIN cte_n1 b),
cte_n3 (n) AS (SELECT 1 FROM cte_n2 a CROSS JOIN cte_n2 b),
cte_Calendar (dt) AS (
SELECT TOP (DATEDIFF(dd, '2007-01-01', '2017-01-01'))
DATEADD(dd, ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1, '2007-01-01')
FROM
cte_n3 a CROSS JOIN cte_n3 b
)
INSERT #MyDailyMood (TheDate, MoodLevel)
SELECT
c.dt,
ABS(CHECKSUM(NEWID()) % 10) + 1
FROM
cte_Calendar c;
--==========================================================
WITH
cte_AddRN AS (
SELECT
*,
RN = ISNULL(NULLIF(ROW_NUMBER() OVER (ORDER BY mdm.TheDate) % 60, 0), 60)
FROM
#MyDailyMood mdm
),
cte_AssignGroups AS (
SELECT
*,
DateGroup = DENSE_RANK() OVER (PARTITION BY arn.RN ORDER BY arn.TheDate)
FROM
cte_AddRN arn
)
SELECT
BegOfRange = MIN(ag.TheDate),
EndOfRange = MAX(ag.TheDate),
AverageMoodLevel = AVG(ag.MoodLevel),
CASE WHEN AVG(ag.MoodLevel) >= 5 THEN 'Good' ELSE 'Bad' END
FROM
cte_AssignGroups ag
GROUP BY
ag.DateGroup;
Post OP update solution...
WITH
cte_AddRN AS ( -- Add a row number to each row that resets to 1 every 60 rows.
SELECT
*,
RN = ISNULL(NULLIF(ROW_NUMBER() OVER (ORDER BY mdm.TheDate) % 60, 0), 60)
FROM
#MyDailyMood mdm
),
cte_AssignGroups AS ( -- Use DENSE_RANK to create groups based on the RN added above.
-- How it works: RN sets the row number 1 - 60, then repeats itself,
-- but we don't want every 60th row grouped together. We want blocks of 60 consecutive rows grouped together.
-- DENSE_RANK accomplishes this by ranking within all the "1's", "2's"... and so on.
-- verify with the following query... SELECT * FROM cte_AssignGroups ag ORDER BY ag.TheDate
SELECT
*,
DateGroup = DENSE_RANK() OVER (PARTITION BY arn.RN ORDER BY arn.TheDate)
FROM
cte_AddRN arn
),
cte_AggRange AS ( -- This is just a straightforward aggregation/rollup. It produces results similar to the sample data you posted in your edit.
SELECT
BegOfRange = MIN(ag.TheDate),
EndOfRange = MAX(ag.TheDate),
AverageMoodLevel = AVG(ag.MoodLevel),
GorB = CASE WHEN AVG(ag.MoodLevel) >= 5 THEN 'Good' ELSE 'Bad' END,
ag.DateGroup
FROM
cte_AssignGroups ag
GROUP BY
ag.DateGroup
),
cte_CompactGroup AS ( -- This time we're using dense rank to group all of the consecutive "Good" and "Bad" values so that they can be further aggregated below.
SELECT
ar.BegOfRange, ar.EndOfRange, ar.AverageMoodLevel, ar.GorB, ar.DateGroup,
DenseGroup = ar.DateGroup - DENSE_RANK() OVER (PARTITION BY ar.GorB ORDER BY ar.BegOfRange)
FROM
cte_AggRange ar
)
-- The final aggregation step...
SELECT
BegOfRange = MIN(cg.BegOfRange),
EndOfRange = MAX(cg.EndOfRange),
cg.GorB
FROM
cte_CompactGroup cg
GROUP BY
cg.DenseGroup,
cg.GorB
ORDER BY
BegOfRange;
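For a compact point of reference, the collapse of consecutive "GOOD"/"BAD" rows (the step the cursor loop above and the cte_CompactGroup CTE both perform) can also be written with the row_number difference ("gaps and islands") trick. This is only a sketch against the @MoodRanges sample data from the earlier answer; the GoodBad and grp names are illustrative:
WITH flagged AS (
    SELECT StartDate, EndDate,
           CASE WHEN MoodRating >= 5 THEN 'GOOD' ELSE 'BAD' END AS GoodBad
    FROM @MoodRanges
), grouped AS (
    SELECT StartDate, EndDate, GoodBad,
           ROW_NUMBER() OVER (ORDER BY StartDate)
         - ROW_NUMBER() OVER (PARTITION BY GoodBad ORDER BY StartDate) AS grp -- constant within each consecutive run
    FROM flagged
)
SELECT MIN(StartDate) AS MinStart,
       MAX(EndDate) AS MaxEnd,
       GoodBad
FROM grouped
GROUP BY GoodBad, grp
ORDER BY MinStart;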

SQL to check for 2 or more consecutive negative week values

I want to count the number of 2 or more consecutive week periods that have negative values within a range of weeks.
Example:
Week | Value
201301 | 10
201302 | -5 <--| both weeks have negative values and are consecutive
201303 | -6 <--|
Week | Value
201301 | 10
201302 | -5
201303 | 7
201304 | -2 <-- negative but not consecutive to the last negative value in 201302
Week | Value
201301 | 10
201302 | -5
201303 | -7
201304 | -2 <-- 1st group of negative and consecutive values
201305 | 0
201306 | -12
201307 | -8 <-- 2nd group of negative and consecutive values
Is there a better way of doing this other than using a cursor and a reset variable and checking through each row in order?
Here is some of the SQL I have setup to try and test this:
IF OBJECT_ID('TempDB..#ConsecutiveNegativeWeekTestOne') IS NOT NULL DROP TABLE #ConsecutiveNegativeWeekTestOne
IF OBJECT_ID('TempDB..#ConsecutiveNegativeWeekTestTwo') IS NOT NULL DROP TABLE #ConsecutiveNegativeWeekTestTwo
CREATE TABLE #ConsecutiveNegativeWeekTestOne
(
[Week] INT NOT NULL
,[Value] DECIMAL(18,6) NOT NULL
)
-- I have a condition where I expect to see at least 2 consecutive weeks with negative values
-- TRUE : Week 201328 & 201329 are both negative.
INSERT INTO #ConsecutiveNegativeWeekTestOne
VALUES
(201327, 5)
,(201328,-11)
,(201329,-18)
,(201330, 25)
,(201331, 30)
,(201332, -36)
,(201333, 43)
,(201334, 50)
,(201335, 59)
,(201336, 0)
,(201337, 0)
SELECT * FROM #ConsecutiveNegativeWeekTestOne
WHERE Value < 0
ORDER BY [Week] ASC
CREATE TABLE #ConsecutiveNegativeWeekTestTwo
(
[Week] INT NOT NULL
,[Value] DECIMAL(18,6) NOT NULL
)
-- FALSE: The negative weeks are not consecutive
INSERT INTO #ConsecutiveNegativeWeekTestTwo
VALUES
(201327, 5)
,(201328,-11)
,(201329,20)
,(201330, -25)
,(201331, 30)
,(201332, -36)
,(201333, 43)
,(201334, 50)
,(201335, -15)
,(201336, 0)
,(201337, 0)
SELECT * FROM #ConsecutiveNegativeWeekTestTwo
WHERE Value < 0
ORDER BY [Week] ASC
My SQL fiddle is also here:
http://sqlfiddle.com/#!3/ef54f/2
First, would you please share the formula for calculating week number, or provide a real date for each week, or some method to determine if there are 52 or 53 weeks in any particular year? Once you do that, I can make my queries properly skip missing data AND cross year boundaries.
Now to queries: this can be done without a JOIN, which, depending on the exact indexes present, may improve performance a huge amount over any solution that does use JOINs. Then again, it may not. This is also harder to understand, so it may not be worth it if other solutions perform well enough (especially when the right indexes are present).
Simulate a PREORDER BY windowing function (respects gaps, ignores year boundaries):
WITH Calcs AS (
SELECT
Grp =
[Week] -- comment out to ignore gaps and gain year boundaries
-- Row_Number() OVER (ORDER BY [Week]) -- swap with previous line
- Row_Number() OVER
(PARTITION BY (SELECT 1 WHERE Value < 0) ORDER BY [Week]),
*
FROM dbo.ConsecutiveNegativeWeekTestOne
)
SELECT
[Week] = Min([Week])
-- NumWeeks = Count(*) -- if you want the count
FROM Calcs C
WHERE Value < 0
GROUP BY C.Grp
HAVING Count(*) >= 2
;
See a Live Demo at SQL Fiddle (1st query)
And another way, simulating LAG and LEAD with a CROSS JOIN and aggregates (respects gaps, ignores year boundaries):
WITH Groups AS (
SELECT
Grp = T.[Week] + X.Num,
*
FROM
dbo.ConsecutiveNegativeWeekTestOne T
CROSS JOIN (VALUES (-1), (0), (1)) X (Num)
)
SELECT
[Week] = Min(C.[Week])
-- Value = Min(C.Value)
FROM
Groups G
OUTER APPLY (SELECT G.* WHERE G.Num = 0) C
WHERE G.Value < 0
GROUP BY G.Grp
HAVING
Min(G.[Week]) = Min(C.[Week])
AND Max(G.[Week]) > Min(C.[Week])
;
See a Live Demo at SQL Fiddle (2nd query)
And, my original second query, but simplified (ignores gaps, handles year boundaries):
WITH Groups AS (
SELECT
Grp = (Row_Number() OVER (ORDER BY T.[Week]) + X.Num) / 3,
*
FROM
dbo.ConsecutiveNegativeWeekTestOne T
CROSS JOIN (VALUES (0), (2), (4)) X (Num)
)
SELECT
[Week] = Min(C.[Week])
-- Value = Min(C.Value)
FROM
Groups G
OUTER APPLY (SELECT G.* WHERE G.Num = 2) C
WHERE G.Value < 0
GROUP BY G.Grp
HAVING
Min(G.[Week]) = Min(C.[Week])
AND Max(G.[Week]) > Min(C.[Week])
;
Note: The execution plan for these may be rated as more expensive than other queries, but there will be only 1 table access instead of 2 or 3, and while the CPU may be higher it is still respectably low.
Note: I originally was not paying attention to only producing one row per group of negative values, and so I produced this query as only requiring 2 table accesses (respects gaps, ignores year boundaries):
SELECT
T1.[Week]
FROM
dbo.ConsecutiveNegativeWeekTestOne T1
WHERE
Value < 0
AND EXISTS (
SELECT *
FROM dbo.ConsecutiveNegativeWeekTestOne T2
WHERE
T2.Value < 0
AND T2.[Week] IN (T1.[Week] - 1, T1.[Week] + 1)
)
;
See a Live Demo at SQL Fiddle (3rd query)
However, I have now modified it to perform as required, showing only each starting date (respects gaps, ignores year boundaries):
SELECT
T1.[Week]
FROM
dbo.ConsecutiveNegativeWeekTestOne T1
WHERE
Value < 0
AND EXISTS (
SELECT *
FROM
dbo.ConsecutiveNegativeWeekTestOne T2
WHERE
T2.Value < 0
AND T1.[Week] - 1 <= T2.[Week]
AND T1.[Week] + 1 >= T2.[Week]
AND T1.[Week] <> T2.[Week]
HAVING
Min(T2.[Week]) > T1.[Week]
)
;
See a Live Demo at SQL Fiddle (3rd query)
Last, just for fun, here is a SQL Server 2012 and up version using LEAD and LAG:
WITH Weeks AS (
SELECT
PrevValue = Lag(Value, 1, 0) OVER (ORDER BY [Week]),
SubsValue = Lead(Value, 1, 0) OVER (ORDER BY [Week]),
PrevWeek = Lag(Week, 1, 0) OVER (ORDER BY [Week]),
SubsWeek = Lead(Week, 1, 0) OVER (ORDER BY [Week]),
*
FROM
dbo.ConsecutiveNegativeWeekTestOne
)
SELECT @Week = [Week]
FROM Weeks W
WHERE
(
[Week] - 1 > PrevWeek
OR PrevValue >= 0
)
AND Value < 0
AND SubsValue < 0
AND [Week] + 1 = SubsWeek
;
See a Live Demo at SQL Fiddle (4th query)
I am not sure I am doing this the best way as I haven't used these much, but it works nonetheless.
You should do some performance testing of the various queries presented to you, and pick the best one, considering that code should be, in order:
Correct
Clear
Concise
Fast
Seeing that some of my solutions are anything but clear, other solutions that are fast enough and concise enough will probably win out in the competition of which one to use in your own production code. But... maybe not! And maybe someone will appreciate seeing these techniques, even if they can't be used as-is this time.
So let's do some testing and see what the truth is about all this! Here is some test setup script. It will generate the same data on your own server as it did on mine:
IF Object_ID('dbo.ConsecutiveNegativeWeekTestOne', 'U') IS NOT NULL DROP TABLE dbo.ConsecutiveNegativeWeekTestOne;
GO
CREATE TABLE dbo.ConsecutiveNegativeWeekTestOne (
[Week] int NOT NULL CONSTRAINT PK_ConsecutiveNegativeWeekTestOne PRIMARY KEY CLUSTERED,
[Value] decimal(18,6) NOT NULL
);
SET NOCOUNT ON;
DECLARE
@f float = Rand(5.1415926535897932384626433832795028842),
@Dt datetime = '17530101',
@Week int;
WHILE @Dt <= '20140106' BEGIN
INSERT dbo.ConsecutiveNegativeWeekTestOne
SELECT
Format(@Dt, 'yyyy') + Right('0' + Convert(varchar(11), DateDiff(day, DateAdd(year, DateDiff(year, 0, @Dt), 0), @Dt) / 7 + 1), 2),
Rand() * 151 - 76
;
SET @Dt = DateAdd(day, 7, @Dt);
END;
This generates 13,620 weeks, from 175301 through 201401. I modified all the queries to select the Week values instead of the count, in the format SELECT @Week = Expression ... so that tests are not affected by returning rows to the client.
I tested only the gap-respecting, non-year-boundary-handling versions.
Results
Query Duration CPU Reads
------------------ -------- ----- ------
ErikE-Preorder 27 31 40
ErikE-CROSS 29 31 40
ErikE-Join-IN -------Awful---------
ErikE-Join-Revised 46 47 15069
ErikE-Lead-Lag 104 109 40
jods 12 16 120
Transact Charlie 12 16 120
Conclusions
The reduced reads of the non-JOIN versions are not significant enough to warrant their increased complexity.
The table is so small that the performance almost doesn't matter. 261 years of weeks is insignificant, so a normal business operation won't see any performance problem even with a poor query.
I tested with an index on Week (which is more than reasonable); doing two separate JOINs with a seek was far, far superior to any device that tries to get the relevant related data in one swoop. Charlie and jods were spot on in their comments.
This data is not large enough to expose real differences between the queries in CPU and duration. The values above are representative, though at times the 31 ms were 16 ms and the 16 ms were 0 ms. Since the resolution is ~15 ms, this doesn't tell us much.
My tricky query techniques do perform better. They might be worth it in performance critical situations. But this is not one of those.
Lead and Lag may not always win. The presence of an index on the lookup value is probably what determines this. The ability to still pull prior/next values based on a certain order even when the order by value is not sequential may be one good use case for these functions.
You could use a combination of EXISTS checks.
Assuming you only want to count groups (series of consecutive weeks that are all negative):
--Find the potential start weeks
;WITH starts as (
SELECT [Week]
FROM #ConsecutiveNegativeWeekTestOne AS s
WHERE s.[Value] < 0
AND NOT EXISTS (
SELECT 1
FROM #ConsecutiveNegativeWeekTestOne AS p
WHERE p.[Week] = s.[Week] - 1
AND p.[Value] < 0
)
)
SELECT COUNT(*)
FROM
Starts AS s
WHERE EXISTS (
SELECT 1
FROM #ConsecutiveNegativeWeekTestOne AS n
WHERE n.[Week] = s.[Week] + 1
AND n.[Value] < 0
)
If you have an index on Week this query should even be moderately efficient.
You can replace LEAD and LAG with a self-join.
The counting idea is basically to count the starts of negative sequences rather than trying to consider each row.
SELECT COUNT(*)
FROM ConsecutiveNegativeWeekTestOne W
LEFT OUTER JOIN ConsecutiveNegativeWeekTestOne Prev
ON W.week = Prev.week + 1
INNER JOIN ConsecutiveNegativeWeekTestOne Next
ON W.week = Next.week - 1
WHERE W.value < 0
AND (Prev.value IS NULL OR Prev.value > 0)
AND Next.value < 0
Note that I simply did "week + 1", which would not work when there is a year change.
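If the year boundary does matter, one workaround is to join on row position instead of week arithmetic, so the last week of one year and the first week of the next are still treated as adjacent. This is only a sketch and assumes every week is present in the table:
WITH numbered AS (
    SELECT [Week], Value,
           ROW_NUMBER() OVER (ORDER BY [Week]) AS rn
    FROM ConsecutiveNegativeWeekTestOne
)
SELECT COUNT(*)
FROM numbered W
LEFT OUTER JOIN numbered Prev ON W.rn = Prev.rn + 1
INNER JOIN numbered Next ON W.rn = Next.rn - 1
WHERE W.Value < 0
  AND (Prev.Value IS NULL OR Prev.Value > 0)
  AND Next.Value < 0;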