I am trying to get the members of a company that qualify for 'EMERITUS' status.
To qualify, a person must have been a member for 35 years from the date they joined (JOIN_DATE) and must be at least 65 years old (based on BIRTH_DATE). I want to see the qualifying year (>= 2015) under the 'EMERITUS' column. Does this query make sense?
SELECT
    N.ID, N.FULL_NAME, N.MEMBER_TYPE,
    N.JOIN_DATE, DA.BIRTH_DATE,
    (SELECT CASE
         WHEN DATEDIFF(YEAR, N.JOIN_DATE, GETDATE()) + 35 > DATEDIFF(YEAR, DA.BIRTH_DATE, GETDATE()) + 65
             THEN CONVERT(VARCHAR(4), YEAR(N.JOIN_DATE) + 35)
         WHEN DATEDIFF(YEAR, N.JOIN_DATE, GETDATE()) + 35 < DATEDIFF(YEAR, DA.BIRTH_DATE, GETDATE()) + 65
             THEN CONVERT(VARCHAR(4), YEAR(DA.BIRTH_DATE) + 65)
         ELSE NULL
     END) AS 'EMERITUS'
Based upon the comments above, it looks like you are on the right track.
Using the SQL below (with an example in the SQL Fiddle linked at the end) you should be able to get the year they will reach EMERITUS and the number of years until EMERITUS.
select N_sub.*
      ,case when DATEDIFF(d, GETDATE(), N_sub.EMERITUS) / 365.0 > 0
            then DATEDIFF(d, GETDATE(), N_sub.EMERITUS) / 365.0
            else 0
       end AS YEARS_UNTIL_EMERITUS
from (select N."ID"
            ,N.FULL_NAME
            ,N.MEMBER_TYPE
            ,N.JOIN_DATE
            ,N.BIRTH_DATE
            ,case
                 when DATEDIFF(d, N.JOIN_DATE, GETDATE()) / 365 + 35 > DATEDIFF(d, N.BIRTH_DATE, GETDATE()) / 365 + 65
                     then CONVERT(VARCHAR(10), DATEADD(year, 35, N.JOIN_DATE), 110)
                 when DATEDIFF(d, N.JOIN_DATE, GETDATE()) / 365 + 35 < DATEDIFF(d, N.BIRTH_DATE, GETDATE()) / 365 + 65
                     then CONVERT(VARCHAR(10), DATEADD(year, 65, N.BIRTH_DATE), 110)
                 else null
             end AS EMERITUS
      from N
     ) N_sub
SQL Fiddle: http://sqlfiddle.com/#!6/e464cc/7
This query is a bit better than comparing raw years, since it works from the number of days divided by 365 (logic could still be added to account for leap years). The results show the date each member reaches Emeritus and the number of years until they reach it, reported as 0 when that number would be zero or negative.
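For comparison, if the intent is simply "the date on which both requirements are met", an exact-date variant (a sketch only, reusing the same table N and columns as above) avoids the day-count division entirely by taking the later of the two anniversary dates:

-- sketch: the emeritus date is the later of the 35-year membership anniversary
-- and the 65th birthday (same table N and columns as in the query above)
SELECT
    N.ID, N.FULL_NAME, N.MEMBER_TYPE, N.JOIN_DATE, N.BIRTH_DATE,
    CASE
        WHEN DATEADD(year, 35, N.JOIN_DATE) >= DATEADD(year, 65, N.BIRTH_DATE)
            THEN DATEADD(year, 35, N.JOIN_DATE)
        ELSE DATEADD(year, 65, N.BIRTH_DATE)
    END AS EMERITUS_DATE
FROM N;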
I am trying to write a CASE in a SQL query that assigns several results depending on the condition.
For example:
Code:
INSERT INTO DESTINATION_TABLE (DT_TRT, NU_QUARTER, NU_YEAR) VALUES
(SELECT
CASE
WHEN #P_DT_TRT# = '1900-00-00'
THEN MAX(TT.DT_CTTT)
ELSE #P_DT_TRT#
END AS DT_TRT,
CASE
WHEN EXTRACT (MONTH FROM DT_TRT) < 4
THEN NU_QUARTER = 4 AND NU_YEAR = EXTRACT (YEAR FROM DT_TRT) - 1
ELSE NU_YEAR = EXTRACT (YEAR FROM DT_TRT)
END
CASE
WHEN EXTRACT (MONTH FROM DT_TRT) < 7
THEN 1
ELSE (CASE WHEN EXTRACT (MONTH FROM DT_TRT) < 10 THEN 2 ELSE 3 END AS NU_QUARTER)
END AS NU_QUARTER
FROM TARGET_TABLE TT);
Algorithm:
A date will be given in the program to enable the calculation (#P_DT_TRT#).
If the parameter is not supplied (value = 1900-00-00)
DT_TRT = the largest constitution date (DT_CTTT) in the target table (TARGET_TABLE TT)
Otherwise DT_TRT = date given in parameter
If DT_TRT month < 4
Quarter = 4
Year = Year of DT_TRT - 1
Otherwise Year = Year of DT_TRT
If DT_TRT month < 7
Quarter = 1
Otherwise
If DT_TRT month < 10
Quarter = 2
Otherwise Quarter = 3
Question: Is it possible to return several results (DT_TRT, NU_QUARTER, NU_YEAR) from one CASE? And if so, what is the syntax?
I work in Teradata Studio.
Thank you for your answers. :)
This seems to be your logic:
INSERT INTO DESTINATION_TABLE (DT_TRT, NU_QUARTER, NU_YEAR)
VALUES
(
-- If the parameter is not supplied (value = 1900-00-00)
-- DT_TRT = the largest constitution date (DT_CTTT) in the target table (TARGET_TABLE TT)
-- Otherwise DT_TRT = date given in parameter
CASE
WHEN #P_DT_TRT# = '1900-00-00'
THEN (SELECT Max(DT_CTTT) FROM TARGET_TABLE)
ELSE #P_DT_TRT#
END,
-- shift back year/quarter by three months to adjust for company's business year
td_quarter_of_year(Add_Months(DT_TRT, -3)),
Extract(YEAR From Add_Months(DT_TRT, -3))
)
;
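If the derived DT_TRT needs to be reused by the quarter and year expressions, one alternative (a sketch only, assuming #P_DT_TRT# is substituted as a date literal by the calling program) is to compute it once in a derived table and use INSERT ... SELECT instead of VALUES:

INSERT INTO DESTINATION_TABLE (DT_TRT, NU_QUARTER, NU_YEAR)
SELECT
    d.DT_TRT,
    -- same business-year adjustment as above: shift back three months
    td_quarter_of_year(Add_Months(d.DT_TRT, -3)),
    Extract(YEAR FROM Add_Months(d.DT_TRT, -3))
FROM (
    -- fall back to the latest constitution date when the parameter is not supplied
    SELECT CASE
               WHEN #P_DT_TRT# = '1900-00-00' THEN Max(DT_CTTT)
               ELSE #P_DT_TRT#
           END AS DT_TRT
    FROM TARGET_TABLE
) d;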
I have a date field called BIRTH_DAT and the date of births have been recorded like this:
YYYMMDD, where the second number of the year is missing.
So an example would be a date of birth of 4th March 1978 would appear in this field as: 1780304
The rule in this field is that if it begins with a '1' the date is in the 1900s, and if it begins with a '2' the date is in the 2000s.
So what I want to do is create another column that shows the correctly written version of the date, so I can calculate age from it.
e.g.
Column 1 is called BIRTH_DAT and has values:
1801204, 1601228, 1980803 ...
Column 2 is a new column called PROPER_DOB and has values:
19801204, 19601228, 19980803 ...
How do I go about this?
Original answer - post was tagged MySQL:
Using a CASE expression and operating on the string with the LEFT, SUBSTRING and CONCAT functions will get you the desired result. Based on the first character, we replace:
1 with 19
2 with 20
and for any other case, when the first character is not in (1, 2), we print 'unsupported date format':
select
birth_dat,
case
when left(birth_dat,1) = '1' then concat('19', substring(birth_dat from 2))
when left(birth_dat,1) = '2' then concat('20', substring(birth_dat from 2))
else 'unsupported date format'
end AS proper_dob
from yourtable
As @Siyual and @JanDoggen suggested, the right type for your column is DATE, which you can get by converting the string with the str_to_date function and the appropriate format, like this:
select
birth_dat,
case
when left(birth_dat,1) = '1' then str_to_date(concat('19', substring(birth_dat from 2)), '%Y%m%d')
when left(birth_dat,1) = '2' then str_to_date(concat('20', substring(birth_dat from 2)), '%Y%m%d')
else 'unsupported date format'
end AS proper_dob
from yourtable
Live example for both queries: SQL fiddle
It turns out that the OP is using SQL Server, so here's the edited answer:
Use CAST / CONVERT to achieve the same thing, since there is no str_to_date function in SQL Server.
Use STUFF to insert a string into another string value:
SELECT *,
YEAR(GETDATE())-YEAR(PROPER_DATE) AS Age,
CASE WHEN YEAR(GETDATE())-YEAR(PROPER_DATE) < 19 THEN '0-19y' ELSE '20y+' END AS AGE_GROUP
FROM(
SELECT BIRTH_DAT,
CASE WHEN CHARINDEX('1',BIRTH_DAT) = 1 THEN STUFF(CAST(BIRTH_DAT AS VARCHAR(30)),2,0,'9')
WHEN CHARINDEX('2',BIRTH_DAT) = 1 THEN STUFF(CAST(BIRTH_DAT AS VARCHAR(30)),2,0,'0')
END AS PROPER_DATE
FROM my_table
)M
To get a count per age group rather than one row per person, group on the same CASE expression (note that SELECT * cannot be combined with GROUP BY here):
SELECT
    CASE
        WHEN YEAR(GETDATE())-YEAR(PROPER_DATE) < 5 THEN '0-5Y'
        WHEN YEAR(GETDATE())-YEAR(PROPER_DATE) BETWEEN 5 AND 9 THEN '5-9Y'
        WHEN YEAR(GETDATE())-YEAR(PROPER_DATE) BETWEEN 10 AND 14 THEN '10-14Y'
        WHEN YEAR(GETDATE())-YEAR(PROPER_DATE) BETWEEN 15 AND 19 THEN '15-19Y'
        WHEN YEAR(GETDATE())-YEAR(PROPER_DATE) >= 20 THEN '20+Y'
        ELSE 'NK'
    END AS AGE_GROUP,
    COUNT(*) AS CNT
FROM(
SELECT BIRTH_DAT
,SEX
,CASE
WHEN CHARINDEX('1',BIRTH_DAT) = 1 THEN STUFF(CAST(BIRTH_DAT AS VARCHAR(10)),2,0,'9')
WHEN CHARINDEX('2',BIRTH_DAT) = 1 THEN STUFF(CAST(BIRTH_DAT AS VARCHAR(10)),2,0,'0')
END AS PROPER_DATE
FROM mytable
)M
GROUP BY (CASE
WHEN YEAR(GETDATE())-YEAR(PROPER_DATE) < 5 THEN '0-5Y'
WHEN YEAR(GETDATE())-YEAR(PROPER_DATE) BETWEEN 5 AND 9 THEN '5-9Y'
WHEN YEAR(GETDATE())-YEAR(PROPER_DATE) BETWEEN 10 AND 14 THEN '10-14Y'
WHEN YEAR(GETDATE())-YEAR(PROPER_DATE) BETWEEN 15 AND 19 THEN '15-19Y'
WHEN YEAR(GETDATE())-YEAR(PROPER_DATE) >= 20 THEN '20+Y'
ELSE 'NK'
END)
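As an aside, YEAR(GETDATE()) - YEAR(PROPER_DATE) only compares the year parts, so it overstates the age of anyone whose birthday has not yet occurred this year. A more exact calculation might look like this (a sketch, assuming the rebuilt YYYYMMDD string converts cleanly to DATE):

SELECT BIRTH_DAT,
       PROPER_DATE,
       -- subtract 1 when this year's birthday has not happened yet
       DATEDIFF(YEAR, PROPER_DATE, GETDATE())
         - CASE WHEN DATEADD(YEAR, DATEDIFF(YEAR, PROPER_DATE, GETDATE()), PROPER_DATE) > GETDATE()
                THEN 1 ELSE 0 END AS Age
FROM (
    SELECT BIRTH_DAT,
           CONVERT(DATE,
                   CASE WHEN CHARINDEX('1', BIRTH_DAT) = 1 THEN STUFF(CAST(BIRTH_DAT AS VARCHAR(10)), 2, 0, '9')
                        WHEN CHARINDEX('2', BIRTH_DAT) = 1 THEN STUFF(CAST(BIRTH_DAT AS VARCHAR(10)), 2, 0, '0')
                   END) AS PROPER_DATE
    FROM mytable
) M;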
I want to calculate a query result based on percentages, which are set from the admin panel of the website.
There are four statuses: Gold, Silver, Bronze and Medallion.
My current formula is:
select *,
    isnull(CASE
               WHEN RowNumber <= (@totaltopPromoters*@Gold/100) THEN 1
               WHEN RowNumber >= (@totaltopPromoters*@Gold/100) and RowNumber <= (@totaltopPromoters*@Gold/100) + (@totaltopPromoters*@Silver/100) THEN 2
               WHEN RowNumber >= (@totaltopPromoters*@Silver/100) and RowNumber <= (@totaltopPromoters*@Gold/100) + (@totaltopPromoters*@Silver/100) + (@totaltopPromoters*@Bronze/100) THEN 3
               WHEN RowNumber >= (@totaltopPromoters*@Medallion/100) and RowNumber <= (@totaltopPromoters*@Gold/100) + (@totaltopPromoters*@Silver/100) + (@totaltopPromoters*@Bronze/100) + (@totaltopPromoters*@Medallion/100) THEN 4
           END, 0) as TrophyType
Can anyone guide me on this?
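One way to keep the bands from overlapping is to compare RowNumber against cumulative thresholds, so the first matching WHEN wins. This is a sketch only, assuming @totaltopPromoters and the four percentage variables are already populated and the (unshown) FROM clause stays as in your query; dividing by 100.0 avoids integer truncation:

SELECT *,
       ISNULL(CASE
                  WHEN RowNumber <= @totaltopPromoters * @Gold / 100.0                                    THEN 1  -- Gold
                  WHEN RowNumber <= @totaltopPromoters * (@Gold + @Silver) / 100.0                        THEN 2  -- Silver
                  WHEN RowNumber <= @totaltopPromoters * (@Gold + @Silver + @Bronze) / 100.0              THEN 3  -- Bronze
                  WHEN RowNumber <= @totaltopPromoters * (@Gold + @Silver + @Bronze + @Medallion) / 100.0 THEN 4  -- Medallion
              END, 0) AS TrophyType
FROM ... -- same source as in the original query (not shown in the question)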
I am using the SQL code below in the Crystal Reports "Command" to display the current year's sales units and dollars (all closed sales versus sales closed using a discount). I need to add last year's unit sales quantity. Does anyone have any idea of the best way to do this? Thanks to anyone who has ideas.
Code:
SELECT
convert(char(4),datepart(yy,m.close_dt)) +
right('00' + convert(varchar,datepart(m,m.close_dt)),2) AS SortMonth,
replace(right(convert(varchar(11), m.close_dt, 106), 8), ' ', '-') AS DisplayMonth,
sum(case when lt.um_ch177= 'BAWRM' then 1 else 0 end) as Close_Units_Disc ,
sum(case when lt.um_ch177= 'BAWRM' then m.tot_ln_amt else 0 end) as Close_Dollars_Disc,
sum(case when m.close_dt >= '{?Date1}'
and m.close_dt <= '{?Date2}' then 1 else 0 end) as Close_Units_All,
sum(case when m.close_dt >= '{?Date1}'
and m.close_dt <= '{?Date2}' then tot_ln_amt else 0 end) as Close_Dollars_All
FROM
pro2sql.lt_master m WITH (NOLOCK)
LEFT OUTER JOIN pro2sql.ltuch_master lt WITH (NOLOCK) ON m.lt_acnt=lt.lt_acnt
WHERE
m.stage = 60
and m.loan_purpose <> 7
and m.app_number <> 0
and m.brch_entry {?BranchList}
and m.close_dt >= '{?Date1}'
and m.close_dt <'{?Date2}'
Group by
convert(char(4),datepart(yy,m.close_dt)) + right('00' + convert(varchar,datepart(m,m.close_dt)),2)
,replace(right(convert(varchar(11), m.close_dt, 106), 8), ' ', '-')
I can't upload a pic - not sure if this is going to be a jumble but here is the output - the last two columns are what I need to add:
DisplayMonth | Close_Units_Disc | Close_Dollars_Disc | Close_Units_All | Close_Dollars_All | %Units | %Dollars | DisplayMonth LY | CloseUnits All
Feb-2014 | 115 | $48,919,800 | 190 | $83,942,650 | 61% | 58% | Feb-2013 |
Mar-2014 | 202 | $91,077,780 | 238 | $109,300,903 | 85% | 83% | Mar-2013 |
Apr-2014 | 219 | $89,157,481 | 238 | $95,892,509 | 92% | 93% | Apr-2013 |
"Enterprise system" is not an adequate answer. What type of database is this? Oracle? Microsoft (and which version if its MSS)? Something else?
That may influence what solutions are possible, or at least the syntax
You have at least two options off the top of my head though:
One:
Expand the date range to pull in the previous year's data;
modify all of your SUM(CASE ...) expressions to include the current year only;
add an additional SUM(CASE ...) with appropriate date range conditions for the previous year's units (see the sketch after this list).
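A rough sketch of option one, where {?Date3} and {?Date4} are hypothetical new Crystal parameters for last year's range; the grouping is switched to the calendar month number only so that this year's and last year's rows land on the same output row (the full query would keep the discount columns and display formatting from above):

SELECT
    datepart(m, m.close_dt) AS MonthNum,
    sum(case when m.close_dt >= '{?Date1}' and m.close_dt < '{?Date2}'
             then 1 else 0 end) AS Close_Units_All,
    sum(case when m.close_dt >= '{?Date3}' and m.close_dt < '{?Date4}'   -- hypothetical last-year parameters
             then 1 else 0 end) AS Close_Units_All_LY
FROM pro2sql.lt_master m WITH (NOLOCK)
WHERE m.stage = 60
  AND m.loan_purpose <> 7
  AND m.app_number <> 0
  AND m.brch_entry {?BranchList}
  AND (   (m.close_dt >= '{?Date1}' AND m.close_dt < '{?Date2}')
       OR (m.close_dt >= '{?Date3}' AND m.close_dt < '{?Date4}'))
GROUP BY datepart(m, m.close_dt);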
Two:
duplicate your query as a subquery for the previous year and join to it
e.g.:
LEFT OUTER JOIN (
SELECT replace(right(convert(varchar(11), m.close_dt, 106), 8), ' ', '-') AS DisplayMonth,
sum(case when lt.um_ch177= 'BAWRM' then 1 else 0 end) as Close_Units_Disc
FROM pro2sql.lt_master m WITH (NOLOCK)
LEFT OUTER JOIN pro2sql.ltuch_master lt WITH (NOLOCK) ON m.lt_acnt=lt.lt_acnt
WHERE m.stage = 60 and m.loan_purpose <> 7 and m.app_number <> 0
and m.brch_entry {?BranchList}
and m.close_dt >= '{?Date3}' -- last year's start date
and m.close_dt < '{?Date4}'  -- last year's end date
Group by
convert(char(4),datepart(yy,m.close_dt)) +
right('00' + convert(varchar,datepart(m,m.close_dt)),2),
replace(right(convert(varchar(11), m.close_dt, 106), 8), ' ', '-')
) as lastYear
on left(replace(right(convert(varchar(11), m.close_dt, 106), 8), ' ', '-'), 3) = left(lastYear.DisplayMonth, 3) -- compare month names only, since the years differ
If you go with one of these, you may want to consult a DBA at your company to see which would be more efficient. I don't know whether running one query with a larger result set is more intensive than running two nearly identical queries, and that may change from one architecture to another (or even with different environment variables/parameters configured on the server).
I want to count the number of 2 or more consecutive week periods that have negative values within a range of weeks.
Example:
Week | Value
201301 | 10
201302 | -5 <--| both weeks have negative values and are consecutive
201303 | -6 <--|
Week | Value
201301 | 10
201302 | -5
201303 | 7
201304 | -2 <-- negative but not consecutive to the last negative value in 201302
Week | Value
201301 | 10
201302 | -5
201303 | -7
201304 | -2 <-- 1st group of negative and consecutive values
201305 | 0
201306 | -12
201307 | -8 <-- 2nd group of negative and consecutive values
Is there a better way of doing this other than using a cursor and a reset variable and checking through each row in order?
Here is some of the SQL I have setup to try and test this:
IF OBJECT_ID('TempDB..#ConsecutiveNegativeWeekTestOne') IS NOT NULL DROP TABLE #ConsecutiveNegativeWeekTestOne
IF OBJECT_ID('TempDB..#ConsecutiveNegativeWeekTestTwo') IS NOT NULL DROP TABLE #ConsecutiveNegativeWeekTestTwo
CREATE TABLE #ConsecutiveNegativeWeekTestOne
(
[Week] INT NOT NULL
,[Value] DECIMAL(18,6) NOT NULL
)
-- I have a condition where I expect to see at least 2 consecutive weeks with negative values
-- TRUE : Week 201328 & 201329 are both negative.
INSERT INTO #ConsecutiveNegativeWeekTestOne
VALUES
(201327, 5)
,(201328,-11)
,(201329,-18)
,(201330, 25)
,(201331, 30)
,(201332, -36)
,(201333, 43)
,(201334, 50)
,(201335, 59)
,(201336, 0)
,(201337, 0)
SELECT * FROM #ConsecutiveNegativeWeekTestOne
WHERE Value < 0
ORDER BY [Week] ASC
CREATE TABLE #ConsecutiveNegativeWeekTestTwo
(
[Week] INT NOT NULL
,[Value] DECIMAL(18,6) NOT NULL
)
-- FALSE: The negative weeks are not consecutive
INSERT INTO #ConsecutiveNegativeWeekTestTwo
VALUES
(201327, 5)
,(201328,-11)
,(201329,20)
,(201330, -25)
,(201331, 30)
,(201332, -36)
,(201333, 43)
,(201334, 50)
,(201335, -15)
,(201336, 0)
,(201337, 0)
SELECT * FROM #ConsecutiveNegativeWeekTestTwo
WHERE Value < 0
ORDER BY [Week] ASC
My SQL fiddle is also here:
http://sqlfiddle.com/#!3/ef54f/2
First, would you please share the formula for calculating week number, or provide a real date for each week, or some method to determine if there are 52 or 53 weeks in any particular year? Once you do that, I can make my queries properly skip missing data AND cross year boundaries.
Now to queries: this can be done without a JOIN, which depending on the exact indexes present, may improve performance a huge amount over any solution that does use JOINs. Then again, it may not. This is also harder to understand so may not be worth it if other solutions perform well enough (especially when the right indexes are present).
Simulate a PREORDER BY windowing function (respects gaps, ignores year boundaries):
WITH Calcs AS (
SELECT
Grp =
[Week] -- comment out to ignore gaps and gain year boundaries
-- Row_Number() OVER (ORDER BY [Week]) -- swap with previous line
- Row_Number() OVER
(PARTITION BY (SELECT 1 WHERE Value < 0) ORDER BY [Week]),
*
FROM dbo.ConsecutiveNegativeWeekTestOne
)
SELECT
[Week] = Min([Week])
-- NumWeeks = Count(*) -- if you want the count
FROM Calcs C
WHERE Value < 0
GROUP BY C.Grp
HAVING Count(*) >= 2
;
See a Live Demo at SQL Fiddle (1st query)
And another way, simulating LAG and LEAD with a CROSS JOIN and aggregates (respects gaps, ignores year boundaries):
WITH Groups AS (
SELECT
Grp = T.[Week] + X.Num,
*
FROM
dbo.ConsecutiveNegativeWeekTestOne T
CROSS JOIN (VALUES (-1), (0), (1)) X (Num)
)
SELECT
[Week] = Min(C.[Week])
-- Value = Min(C.Value)
FROM
Groups G
OUTER APPLY (SELECT G.* WHERE G.Num = 0) C
WHERE G.Value < 0
GROUP BY G.Grp
HAVING
Min(G.[Week]) = Min(C.[Week])
AND Max(G.[Week]) > Min(C.[Week])
;
See a Live Demo at SQL Fiddle (2nd query)
And, my original second query, but simplified (ignores gaps, handles year boundaries):
WITH Groups AS (
SELECT
Grp = (Row_Number() OVER (ORDER BY T.[Week]) + X.Num) / 3,
*
FROM
dbo.ConsecutiveNegativeWeekTestOne T
CROSS JOIN (VALUES (0), (2), (4)) X (Num)
)
SELECT
[Week] = Min(C.[Week])
-- Value = Min(C.Value)
FROM
Groups G
OUTER APPLY (SELECT G.* WHERE G.Num = 2) C
WHERE G.Value < 0
GROUP BY G.Grp
HAVING
Min(G.[Week]) = Min(C.[Week])
AND Max(G.[Week]) > Min(C.[Week])
;
Note: The execution plan for these may be rated as more expensive than other queries, but there will be only 1 table access instead of 2 or 3, and while the CPU may be higher it is still respectably low.
Note: I originally was not paying attention to only producing one row per group of negative values, and so I produced this query as only requiring 2 table accesses (respects gaps, ignores year boundaries):
SELECT
T1.[Week]
FROM
dbo.ConsecutiveNegativeWeekTestOne T1
WHERE
Value < 0
AND EXISTS (
SELECT *
FROM dbo.ConsecutiveNegativeWeekTestOne T2
WHERE
T2.Value < 0
AND T2.[Week] IN (T1.[Week] - 1, T1.[Week] + 1)
)
;
See a Live Demo at SQL Fiddle (3rd query)
However, I have now modified it to perform as required, showing only each starting date (respects gaps, ignored year boundaries):
SELECT
T1.[Week]
FROM
dbo.ConsecutiveNegativeWeekTestOne T1
WHERE
Value < 0
AND EXISTS (
SELECT *
FROM
dbo.ConsecutiveNegativeWeekTestOne T2
WHERE
T2.Value < 0
AND T1.[Week] - 1 <= T2.[Week]
AND T1.[Week] + 1 >= T2.[Week]
AND T1.[Week] <> T2.[Week]
HAVING
Min(T2.[Week]) > T1.[Week]
)
;
See a Live Demo at SQL Fiddle (3rd query)
Last, just for fun, here is a SQL Server 2012 and up version using LEAD and LAG:
WITH Weeks AS (
SELECT
PrevValue = Lag(Value, 1, 0) OVER (ORDER BY [Week]),
SubsValue = Lead(Value, 1, 0) OVER (ORDER BY [Week]),
PrevWeek = Lag(Week, 1, 0) OVER (ORDER BY [Week]),
SubsWeek = Lead(Week, 1, 0) OVER (ORDER BY [Week]),
*
FROM
dbo.ConsecutiveNegativeWeekTestOne
)
SELECT @Week = [Week]
FROM Weeks W
WHERE
(
[Week] - 1 > PrevWeek
OR PrevValue >= 0
)
AND Value < 0
AND SubsValue < 0
AND [Week] + 1 = SubsWeek
;
See a Live Demo at SQL Fiddle (4th query)
I am not sure I am doing this the best way as I haven't used these much, but it works nonetheless.
You should do some performance testing of the various queries presented to you, and pick the best one, considering that code should be, in order:
Correct
Clear
Concise
Fast
Seeing that some of my solutions are anything but clear, other solutions that are fast enough and concise enough will probably win out in the competition of which one to use in your own production code. But... maybe not! And maybe someone will appreciate seeing these techniques, even if they can't be used as-is this time.
So let's do some testing and see what the truth is about all this! Here is some test setup script. It will generate the same data on your own server as it did on mine:
IF Object_ID('dbo.ConsecutiveNegativeWeekTestOne', 'U') IS NOT NULL DROP TABLE dbo.ConsecutiveNegativeWeekTestOne;
GO
CREATE TABLE dbo.ConsecutiveNegativeWeekTestOne (
[Week] int NOT NULL CONSTRAINT PK_ConsecutiveNegativeWeekTestOne PRIMARY KEY CLUSTERED,
[Value] decimal(18,6) NOT NULL
);
SET NOCOUNT ON;
DECLARE
    @f float = Rand(5.1415926535897932384626433832795028842),
    @Dt datetime = '17530101',
    @Week int;
WHILE @Dt <= '20140106' BEGIN
    INSERT dbo.ConsecutiveNegativeWeekTestOne
    SELECT
        Format(@Dt, 'yyyy') + Right('0' + Convert(varchar(11), DateDiff(day, DateAdd(year, DateDiff(year, 0, @Dt), 0), @Dt) / 7 + 1), 2),
        Rand() * 151 - 76
    ;
    SET @Dt = DateAdd(day, 7, @Dt);
END;
This generates 13,620 weeks, from 175301 through 201401. I modified all the queries to select the Week values instead of the count, in the format SELECT @Week = Expression ... so that tests are not affected by returning rows to the client.
I tested only the gap-respecting, non-year-boundary-handling versions.
Results
Query Duration CPU Reads
------------------ -------- ----- ------
ErikE-Preorder 27 31 40
ErikE-CROSS 29 31 40
ErikE-Join-IN -------Awful---------
ErikE-Join-Revised 46 47 15069
ErikE-Lead-Lag 104 109 40
jods 12 16 120
Transact Charlie 12 16 120
Conclusions
The reduced reads of the non-JOIN versions are not significant enough to warrant their increased complexity.
The table is so small that the performance almost doesn't matter. 261 years of weeks is insignificant, so a normal business operation won't see any performance problem even with a poor query.
I tested with an index on Week (which is more than reasonable); doing two separate JOINs with a seek was far, far superior to any device that tries to get the relevant related data in one swoop. Charlie and jods were spot on in their comments.
This data is not large enough to expose real differences between the queries in CPU and duration. The values above are representative, though at times the 31 ms were 16 ms and the 16 ms were 0 ms. Since the resolution is ~15 ms, this doesn't tell us much.
My tricky query techniques do perform better. They might be worth it in performance critical situations. But this is not one of those.
Lead and Lag may not always win. The presence of an index on the lookup value is probably what determines this. The ability to still pull prior/next values based on a certain order even when the order by value is not sequential may be one good use case for these functions.
You could use a combination of EXISTS checks.
Assuming you only want to know the groups (series of consecutive weeks that are all negative):
--Find the potential start weeks
;WITH starts as (
SELECT [Week]
FROM #ConsecutiveNegativeWeekTestOne AS s
WHERE s.[Value] < 0
AND NOT EXISTS (
SELECT 1
FROM #ConsecutiveNegativeWeekTestOne AS p
WHERE p.[Week] = s.[Week] - 1
AND p.[Value] < 0
)
)
SELECT COUNT(*)
FROM
Starts AS s
WHERE EXISTS (
SELECT 1
FROM #ConsecutiveNegativeWeekTestOne AS n
WHERE n.[Week] = s.[Week] + 1
AND n.[Value] < 0
)
If you have an index on Week this query should even be moderately efficient.
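For reference, a minimal sketch of such an index on the temp table from the question (assuming there is one row per week, so it can be unique):

CREATE UNIQUE CLUSTERED INDEX IX_ConsecutiveNegativeWeek
    ON #ConsecutiveNegativeWeekTestOne ([Week]);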
You can replace LEAD and LAG with a self-join.
The counting idea is basically to count the starts of negative sequences rather than trying to consider each row.
SELECT COUNT(*)
FROM ConsecutiveNegativeWeekTestOne W
LEFT OUTER JOIN ConsecutiveNegativeWeekTestOne Prev
ON W.week = Prev.week + 1
INNER JOIN ConsecutiveNegativeWeekTestOne Next
ON W.week = Next.week - 1
WHERE W.value < 0
AND (Prev.value IS NULL OR Prev.value >= 0) -- previous week missing or not negative
AND Next.value < 0
Note that I simply did "week + 1", which would not work when there is a year change.
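If the year change matters, one way to keep the same self-join idea (a sketch, assuming every week is present in the table so a dense row number lines up with "previous week") is to number the rows first and join on that number instead of week + 1:

-- sketch: number the weeks densely so "previous week" is rn - 1 even across
-- the 201352 -> 201401 jump (assumes no missing weeks in the table)
WITH Numbered AS (
    SELECT [Week], [Value],
           ROW_NUMBER() OVER (ORDER BY [Week]) AS rn
    FROM ConsecutiveNegativeWeekTestOne
)
SELECT COUNT(*)                                      -- number of runs of 2+ negative weeks
FROM Numbered W
LEFT JOIN Numbered Prev ON Prev.rn = W.rn - 1
INNER JOIN Numbered Next ON Next.rn = W.rn + 1
WHERE W.[Value] < 0
  AND (Prev.[Value] IS NULL OR Prev.[Value] >= 0)    -- W starts the run
  AND Next.[Value] < 0;                              -- and the run continues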