Error building a SQL query while trying to get orders by day for a period of a week - sql

I have an ASP.NET Core application; through a controller endpoint I pass the #by and #period string values to the SQL query.
#by takes one of the following values: day, week
#period takes one of the following values: week, month, year
When #period is month or year, #by is week; otherwise it's day.
I have the following query, which works when #period is month or year:
SELECT
    l.region_id AS region_id,
    'Region ' + r.region_desc AS region_name,
    MIN(DATEADD(D, -(DATEPART(WEEKDAY, s.pos_date) - 1), s.pos_date)) AS date_pos,
    CONVERT(VARCHAR(20), MIN(DATEADD(D, -(DATEPART(WEEKDAY, s.pos_date) - 1), s.pos_date)), 107) AS display_date_pos
FROM
    incent_summary s
INNER JOIN
    location l ON s.store_num = l.store_num
INNER JOIN
    region r ON l.region_id = r.region_id
WHERE
    s.pos_date >= DATEADD(day, #period, CONVERT(date, GETDATE()))
    AND s.pos_date <= GETDATE()
GROUP BY
    DATEPART(#by, s.pos_date),
    l.region_id, r.region_desc
ORDER BY
    DATEPART(#by, pos_date),
    l.region_id, r.region_desc
The issue is that when #period is week and #by is day, the expression
MIN(DATEADD(D, -(DATEPART(WEEKDAY, s.pos_date) - 1), s.pos_date)) AS date_pos
returns the same day for all 7 days.
Sample output when #period = year and #by = week:
region_id region_name date_pos display_date_pos
---------------------------------------------------------------------
34 Region 43 2019-12-29 00:00:00.000 Dec 29, 2019
50 Region 22 2019-12-29 00:00:00.000 Dec 29, 2019
34 Region 43 2020-01-05 00:00:00.000 Jan 05, 2020
50 Region 22 2020-01-05 00:00:00.000 Jan 05, 2020
34 Region 43 2020-01-12 00:00:00.000 Jan 12, 2020
50 Region 22 2020-01-12 00:00:00.000 Jan 12, 2020
34 Region 43 2020-01-19 00:00:00.000 Jan 19, 2020
50 Region 22 2020-01-19 00:00:00.000 Jan 19, 2020
34 Region 43 2020-01-26 00:00:00.000 Jan 26, 2020
50 Region 22 2020-01-26 00:00:00.000 Jan 26, 2020
Sample output when #period = week and #by = day:
region_id region_name date_pos display_date_pos
--------------------------------------------------------------------
34 Region 43 2020-07-12 00:00:00.000 Jul 12, 2020
50 Region 22 2020-07-12 00:00:00.000 Jul 12, 2020
34 Region 43 2020-07-12 00:00:00.000 Jul 12, 2020
50 Region 22 2020-07-12 00:00:00.000 Jul 12, 2020
34 Region 43 2020-07-19 00:00:00.000 Jul 19, 2020
50 Region 22 2020-07-19 00:00:00.000 Jul 19, 2020
34 Region 43 2020-07-19 00:00:00.000 Jul 19, 2020
50 Region 22 2020-07-19 00:00:00.000 Jul 19, 2020
34 Region 43 2020-07-19 00:00:00.000 Jul 19, 2020
50 Region 22 2020-07-19 00:00:00.000 Jul 19, 2020
How can I fix this?

SELECT
    DATEADD(D, -(DATEPART(WEEKDAY, s.pos_date) - 1), s.pos_date)
will always return the first day of the week, because the logic is: "subtract from my date its weekday number, then add 1 day back."
Sunday: 1 - 1 + 1 = 1 = Sunday
Monday: 2 - 2 + 1 = 1 = Sunday
...
Saturday: 7 - 7 + 1 = 1 = Sunday
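For example, a quick sanity check of that week-start expression (a sketch; it assumes the default DATEFIRST of 7, i.e. Sunday as the first day of the week, and uses 2020-07-15, a Wednesday, as the sample date):
SELECT DATEADD(D, -(DATEPART(WEEKDAY, CONVERT(date, '2020-07-15')) - 1), CONVERT(date, '2020-07-15'));
-- returns 2020-07-12, the Sunday that starts that week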
That's fine when you want the first Sunday of the year/month/whatever, but the first Sunday of every week is always... Sunday. In this case you really just need to take MIN(s.pos_date) when #period is week.
There's probably some crazy way to do this in a single statement using quaternions or something else super mathy, but it's easiest to just use a case statement:
MIN(
    CASE
        WHEN '#by' = 'day' THEN s.pos_date
        ELSE DATEADD(D, -(DATEPART(WEEKDAY, s.pos_date) - 1), s.pos_date)
    END
)
I'm not a C# programmer, so I can't tell you the exact way to make sure the string DAY is passed to the query as "DAY", but I'm sure you can handle that part.
ALSO IMPORTANT: The datepart "day" is day of month, so if your span can be greater than one month (but under a year), use dayofyear instead.
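Putting the pieces together, the corrected query might end up looking roughly like this (a sketch only, reusing the tables and columns from the question; '#by' and #period are still whatever your controller substitutes, and for spans longer than a month the substituted datepart should be dayofyear rather than day, as noted above):
SELECT
    l.region_id AS region_id,
    'Region ' + r.region_desc AS region_name,
    MIN(CASE WHEN '#by' = 'day' THEN s.pos_date
             ELSE DATEADD(D, -(DATEPART(WEEKDAY, s.pos_date) - 1), s.pos_date)
        END) AS date_pos,
    CONVERT(VARCHAR(20),
            MIN(CASE WHEN '#by' = 'day' THEN s.pos_date
                     ELSE DATEADD(D, -(DATEPART(WEEKDAY, s.pos_date) - 1), s.pos_date)
                END), 107) AS display_date_pos
FROM
    incent_summary s
INNER JOIN
    location l ON s.store_num = l.store_num
INNER JOIN
    region r ON l.region_id = r.region_id
WHERE
    s.pos_date >= DATEADD(day, #period, CONVERT(date, GETDATE()))
    AND s.pos_date <= GETDATE()
GROUP BY
    DATEPART(#by, s.pos_date),
    l.region_id, r.region_desc
ORDER BY
    DATEPART(#by, s.pos_date),
    l.region_id, r.region_desc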

Related

Get the last 4 weeks prior to the current week and the same 4 weeks of last year

I have a list of date, fiscal week, and fiscal year:
DATE_VALUE FISCAL_WEEK FISCAL_YEAR_VALUE
14-Dec-20 51 2020
15-Dec-20 51 2020
16-Dec-20 51 2020
17-Dec-20 51 2020
18-Dec-20 51 2020
19-Dec-20 51 2020
20-Dec-20 51 2020
21-Dec-20 52 2020
22-Dec-20 52 2020
23-Dec-20 52 2020
24-Dec-20 52 2020
25-Dec-20 52 2020
26-Dec-20 52 2020
27-Dec-20 52 2020
28-Dec-20 1 2021
29-Dec-20 1 2021
30-Dec-20 1 2021
31-Dec-20 1 2021
1-Jan-21 1 2021
2-Jan-21 1 2021
3-Jan-21 1 2021
4-Jan-21 2 2021
5-Jan-21 2 2021
6-Jan-21 2 2021
7-Jan-21 2 2021
8-Jan-21 2 2021
9-Jan-21 2 2021
10-Jan-21 2 2021
11-Jan-21 3 2021
12-Jan-21 3 2021
13-Jan-21 3 2021
14-Jan-21 3 2021
15-Jan-21 3 2021
16-Jan-21 3 2021
17-Jan-21 3 2021
18-Jan-21 4 2021
19-Jan-21 4 2021
20-Jan-21 4 2021
21-Jan-21 4 2021
22-Jan-21 4 2021
23-Jan-21 4 2021
24-Jan-21 4 2021
20-Dec-21 52 2021
21-Dec-21 52 2021
22-Dec-21 52 2021
23-Dec-21 52 2021
24-Dec-21 52 2021
25-Dec-21 52 2021
26-Dec-21 52 2021
27-Dec-21 53 2021
28-Dec-21 53 2021
29-Dec-21 53 2021
30-Dec-21 53 2021
31-Dec-21 53 2021
1-Jan-22 53 2021
2-Jan-22 53 2021
3-Jan-22 1 2022
4-Jan-22 1 2022
5-Jan-22 1 2022
6-Jan-22 1 2022
7-Jan-22 1 2022
8-Jan-22 1 2022
9-Jan-22 1 2022
10-Jan-22 2 2022
11-Jan-22 2 2022
12-Jan-22 2 2022
13-Jan-22 2 2022
14-Jan-22 2 2022
15-Jan-22 2 2022
16-Jan-22 2 2022
17-Jan-22 3 2022
18-Jan-22 3 2022
19-Jan-22 3 2022
20-Jan-22 3 2022
21-Jan-22 3 2022
22-Jan-22 3 2022
23-Jan-22 3 2022
24-Jan-22 4 2022
25-Jan-22 4 2022
26-Jan-22 4 2022
27-Jan-22 4 2022
28-Jan-22 4 2022
29-Jan-22 4 2022
30-Jan-22 4 2022
I want to pull the last 4 weeks prior to the current week AND the same 4 weeks of the year before. Please see example 1. This works fine when all 4 weeks are within the same year. But when it comes to the beginning of a year, when 1 or more weeks are in the current year but the others are in the previous year, I am not able to get the desired output below:
FISCAL_YEAR_VALUE FISCAL_WEEK
2020 51
2020 52
2021 2
2021 1
2021 52
2021 53
2022 1
2022 2
The code I have is below. I am using the date of 21-JAN-22 as an example:
SELECT
FISCAL_YEAR_VALUE,
FISCAL_WEEK
FROM TABLE_NAME
WHERE FISCAL_YEAR_VALUE IN (SELECT *
FROM (WITH T AS (
SELECT DISTINCT FISCAL_YEAR_VALUE
FROM TABLE_NAME
WHERE TRUNC(DATE_VALUE) <= TRUNC(TO_DATE('21-JAN-22'))--TEST DATE
ORDER BY FISCAL_YEAR_VALUE DESC
FETCH NEXT 2 ROWS ONLY
)
SELECT FISCAL_YEAR_VALUE
FROM T ORDER BY FISCAL_YEAR_VALUE
)
)
AND FISCAL_WEEK IN (SELECT *
FROM (WITH T AS (
SELECT DISTINCT FISCAL_WEEK, FISCAL_YEAR_VALUE
FROM TABLE_NAME
WHERE TRUNC(DATE_VALUE) <= TRUNC(TO_DATE('21-JAN-22'))--TEST DATE
ORDER BY FISCAL_YEAR_VALUE DESC, FISCAL_WEEK DESC
OFFSET 1 ROWS
FETCH NEXT 4 ROWS ONLY
)
SELECT FISCAL_WEEK
FROM T ORDER BY FISCAL_YEAR_VALUE, FISCAL_WEEK
)
)
GROUP BY FISCAL_YEAR_VALUE, FISCAL_WEEK
ORDER BY FISCAL_YEAR_VALUE, FISCAL_WEEK
Output of the code is:
FISCAL_YEAR_VALUE FISCAL_WEEK
2021 2
2021 1
2021 52
2021 53
2022 1
2022 2
As you can see, the last 2 weeks of year 2020 are not included. Please see example 2. How can I also include this exception in the code to make it dynamic? Any help would be greatly appreciated!
To find the values for this year, you can use:
SELECT DISTINCT fiscal_year_value, fiscal_week
FROM table_name
WHERE date_value < TRUNC(SYSDATE, 'IW')
AND date_value >= TRUNC(SYSDATE, 'IW') - INTERVAL '28' DAY
To find the values from the previous year, you can take the maximum fiscal week from this year, subtract 1 from the year, and use that to find the upper bound of date_value for the last fiscal year; given that, you can use a similar range for last year:
WITH this_year (fiscal_year_value, fiscal_week) AS (
SELECT fiscal_year_value, fiscal_week
FROM table_name
WHERE date_value < TRUNC(SYSDATE, 'IW')
AND date_value >= TRUNC(SYSDATE, 'IW') - INTERVAL '28' DAY
),
max_last_year (max_date_value) AS (
SELECT MAX(date_value) + INTERVAL '1' DAY
FROM table_name
WHERE (fiscal_year_value, fiscal_week) IN (
SELECT fiscal_year_value - 1, fiscal_week
FROM this_year
ORDER BY fiscal_year_value DESC, fiscal_week DESC
FETCH FIRST ROW ONLY
)
)
SELECT fiscal_year_value, fiscal_week
FROM this_year
UNION
SELECT t.fiscal_year_value, t.fiscal_week
FROM table_name t
INNER JOIN max_last_year m
ON ( t.date_value < m.max_date_value
AND t.date_value >= m.max_date_value - INTERVAL '28' DAY);
Which, for the sample data:
Create Table table_name(DATE_VALUE DATE, FISCAL_WEEK INT, FISCAL_YEAR_VALUE INT);
INSERT INTO table_name (date_value, fiscal_week, fiscal_year_value)
SELECT DATE '2019-12-30' + LEVEL - 1, CEIL(LEVEL/7), 2020
FROM DUAL
CONNECT BY LEVEL <= 7 * 52
UNION ALL
SELECT DATE '2020-12-28' + LEVEL - 1, CEIL(LEVEL/7), 2021
FROM DUAL
CONNECT BY LEVEL <= 7 * 53
UNION ALL
SELECT DATE '2022-01-03' + LEVEL - 1, CEIL(LEVEL/7), 2022
FROM DUAL
CONNECT BY LEVEL <= 7 * 52;
Outputs:
FISCAL_YEAR_VALUE  FISCAL_WEEK
2022               38
2022               39
2022               40
2022               41
2021               38
2021               39
2021               40
2021               41
And if today's date were 2022-01-01, it would output:
FISCAL_YEAR_VALUE  FISCAL_WEEK
2021               52
2021               53
2022               1
2022               2
2020               51
2020               52
2021               1
2021               2
There may be a simpler method, but without knowing how you calculate a fiscal year, that is not immediately possible.
fiddle

SQL dense_rank counter within time period

I would like to add a counter to the records in my table that fall in the last rolling 12 months.
Below is what I've tried and what I would like to achieve. The current counter doesn't reset to 1 for a different customer ID.
select *,
       case when order_date >= DATEADD(MONTH, -12, GETDATE())
            then dense_rank() over (partition by customer_id
                                    order by case when order_date >= DATEADD(MONTH, -12, GETDATE())
                                                  then order_date end asc, order_no)
       end as counter
from mydata
customer_id  order_no  order_date                   counter  what I want
ABC          1213      Wednesday, May 12, 2021      1        1
ABC          1257      Saturday, May 15, 2021       2        2
ABC          1345      Saturday, May 22, 2021       3        3
ABC          4562      Saturday, May 22, 2021       4        4
ABC          4362      Saturday, May 29, 2021       5        5
ABC          1421      Tuesday, June 1, 2021        6        6
GHI          5525      Wednesday, January 20, 2021  NULL     NULL
GHI          2452      Friday, February 26, 2021    NULL     NULL
GHI          1452      Tuesday, March 2, 2021       3        1
GHI          3525      Wednesday, March 3, 2021     4        2
GHI          4242      Thursday, March 4, 2021      5        3
GHI          1341      Thursday, March 4, 2021      6        4
GHI          1341      Thursday, March 4, 2021      6        4
GHI          5241      Saturday, March 13, 2021     7        5
GHI          1425      Saturday, March 20, 2021     8        6
GHI          5213      Wednesday, March 31, 2021    9        7
GHI          6312      Saturday, April 17, 2021     10       8
GHI          6312      Saturday, April 17, 2021     10       8
If you want the counter to restart at a given point, you need to partition at that point: not only by customer_id but also by your condition:
SELECT mydata.*
, CASE
WHEN calc.in_scope = 1
THEN DENSE_RANK() OVER (PARTITION BY mydata.customer_id, calc.in_scope
ORDER BY order_date, order_no)
END AS counter
FROM mydata
CROSS
APPLY (SELECT IIF(order_date >= DATEADD(MONTH, -12, GETDATE()), 1, 0) AS in_scope
) calc
I've moved the calculation into CROSS APPLY to avoid repeating it; if it needs to change in the future, you'd then only need to change it in one place.
Working demo on dbfiddle
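If you want to try this locally, a minimal setup might look like the following sketch (the table and column names are taken from the question; the dates are generated relative to GETDATE() so that one row falls outside the rolling 12 months and gets a NULL counter):
Declare @mydata table (customer_id varchar(10), order_no int, order_date date);
Insert Into @mydata (customer_id, order_no, order_date)
Values ('ABC', 1213, DATEADD(MONTH, -2, GETDATE()))
     , ('ABC', 1257, DATEADD(MONTH, -1, GETDATE()))
     , ('GHI', 5525, DATEADD(MONTH, -16, GETDATE())) -- older than 12 months, so counter stays NULL
     , ('GHI', 1452, DATEADD(MONTH, -3, GETDATE()))
     , ('GHI', 3525, DATEADD(MONTH, -2, GETDATE()));
SELECT mydata.*
     , CASE WHEN calc.in_scope = 1
            THEN DENSE_RANK() OVER (PARTITION BY mydata.customer_id, calc.in_scope
                                    ORDER BY order_date, order_no)
       END AS counter
FROM @mydata AS mydata
CROSS APPLY (SELECT IIF(order_date >= DATEADD(MONTH, -12, GETDATE()), 1, 0) AS in_scope) calc;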

Getting datetime count range in SQL Server

I'm trying to get the date ranges between data changes in SQL Server.
My query is:
select count(1) as qty, Info, convert(char,dFError,100) dErr
from TableData
group by Info, convert(char,dFError,100)
order by dErr asc
I have this. qty is the number of requests to a server, Info is the server's IP, and dErr is the date when a request is sent to another server.
qty  Info  dErr
1    1.97  Aug 11 2021 9:01AM
1    1.97  Aug 11 2021 9:06AM
88   1.33  Dec 21 2021 2:04PM
1    1.95  Dec 22 2021 9:44PM
9    1.95  Dec 22 2021 9:45PM
1    1.33  Dec 22 2021 9:51PM
19   1.33  Dec 22 2021 9:52PM
3    1.33  Dec 22 2021 9:53PM
6    1.33  Dec 27 2021 7:10PM
17   1.33  Dec 27 2021 7:11PM
15   1.95  Dec 27 2021 7:17PM
8    1.95  Dec 27 2021 7:18PM
and I want this: on Aug 11 at 9:06AM all requests are going to 1.97, and from Dec 21 at 2:04PM all are going to 1.33; that is, I want the date of the change and the Info value.
qty  Info  dErr
2    1.97  Aug 11 2021 9:06AM
88   1.33  Dec 21 2021 2:04PM
10   1.95  Dec 22 2021 9:45PM
46   1.33  Dec 27 2021 7:11PM
23   1.95  Dec 27 2021 7:18PM
On the same day there can be the same group of numbers at distinct hours:
qty  Info  dErr
1    1.97  Jan 24 2022 9:39AM
1    1.97  Jan 24 2022 9:51AM
1    1.97  Jan 24 2022 9:58AM
4    1.97  Jan 24 2022 10:08AM
1    1.97  Jan 24 2022 10:12AM
8    1.95  Jan 24 2022 10:24AM
2    1.95  Jan 24 2022 10:32AM
10   1.33  Jan 24 2022 10:33AM
1    1.33  Jan 24 2022 11:37AM
8    1.95  Jan 24 2022 11:59AM
1    1.95  Jan 24 2022 12:00PM
2    1.95  Jan 24 2022 12:08PM
and it needs to be displayed like this:
qty  Info  dErr
8    1.97  Jan 24 2022 10:12AM
10   1.95  Jan 24 2022 10:32AM
11   1.33  Jan 24 2022 11:37AM
11   1.95  Jan 24 2022 12:08PM
A double row_number can be used to calculate a ranking.
Then the ranking can be used in the aggregation to solve this Gaps-And-Islands type of problem.
select sum(qty) as qty, Info, max(dFError) as dErr
from (
select Info, dFError, qty
, convert(date, dFError) as dErrorDate
, Rnk = row_number() over (order by dFError)
+ row_number() over (partition by Info order by dFError desc)
from TableData
) q
group by Info, Rnk
order by dErr;
qty  Info  dErr
2    1.97  2021-08-11 09:06:00.000
88   1.33  2021-12-21 14:04:00.000
10   1.95  2021-12-22 21:45:00.000
46   1.33  2021-12-27 19:11:00.000
23   1.95  2021-12-27 19:18:00.000
8    1.97  2022-01-24 10:12:00.000
10   1.95  2022-01-24 10:32:00.000
11   1.33  2022-01-24 11:37:00.000
11   1.95  2022-01-24 12:08:00.000
Demo on db<>fiddle here
select
SUM(P_COUNT) as "COUNT",
P_DATA as "DATA",
MAX(FECHA) as "FECHA"
from
TABLEA
GROUP BY
P_DATA, CONVERT(DATE, FECHA)
ORDER BY "FECHA"
Your expected results don't match the given data - in the first set you have rows for 12/22 with both 1.33 and 1.95, but they are not included in your expected results.
It seems to me you want to group by either the date or the date/hour. Here is an example of both:
Declare #testTable table (qty int, Info numeric(3,2), dErr datetime);
Insert Into #testTable (qty, Info, dErr)
Values ( 1, 1.97, 'Aug 11 2021 9:01AM')
, ( 1, 1.97, 'Aug 11 2021 9:06AM')
, (88, 1.33, 'Dec 21 2021 2:04PM')
, ( 1, 1.95, 'Dec 22 2021 9:44PM')
, ( 9, 1.95, 'Dec 22 2021 9:45PM')
, ( 1, 1.33, 'Dec 22 2021 9:51PM')
, (19, 1.33, 'Dec 22 2021 9:52PM')
, ( 3, 1.33, 'Dec 22 2021 9:53PM')
, ( 6, 1.33, 'Dec 27 2021 7:10PM')
, (17, 1.33, 'Dec 27 2021 7:11PM')
, (15, 1.95, 'Dec 27 2021 7:17PM')
, ( 8, 1.95, 'Dec 27 2021 7:18PM')
, ( 1, 1.97, 'Jan 24 2022 9:39AM')
, ( 1, 1.97, 'Jan 24 2022 9:51AM')
, ( 1, 1.97, 'Jan 24 2022 9:58AM')
, ( 4, 1.97, 'Jan 24 2022 10:08AM')
, ( 1, 1.97, 'Jan 24 2022 10:12AM')
, ( 8, 1.95, 'Jan 24 2022 10:24AM')
, ( 2, 1.95, 'Jan 24 2022 10:32AM')
, (10, 1.33, 'Jan 24 2022 10:33AM')
, ( 1, 1.33, 'Jan 24 2022 11:37AM')
, ( 8, 1.95, 'Jan 24 2022 11:59AM')
, ( 1, 1.95, 'Jan 24 2022 12:00PM')
, ( 2, 1.95, 'Jan 24 2022 12:08PM');
--==== Grouped by date
Select total_qty = sum(tt.qty)
, tt.Info
, latest_date = max(tt.dErr)
From #testTable tt
Group By
tt.Info
, cast(tt.dErr As date)
Order By
cast(tt.dErr As date);
--==== Grouped by date\hour
Select total_qty = sum(tt.qty)
, tt.Info
, latest_date = max(tt.dErr)
From #testTable tt
Group By
tt.Info
, cast(tt.dErr As date)
, datepart(Hour, tt.dErr)
Order By
cast(tt.dErr As date)
, datepart(Hour, tt.dErr);

Query Sales Group by week no and Month

I have to calculate sales based on WeekNo(Yearly) and Month.
Table1: Sales
ID SalDate Amount Region
1 2020-12-27 1000 USA
2 2020-12-28 1000 EU
3 2020-12-29 1000 AUS
4 2021-01-01 1000 USA
5 2021-01-02 1000 EU
6 2021-01-05 1000 AUS
7 2020-09-30 1000 EU
8 2020-10-01 1000 AUS
Select DateName(Month,SalDate)+' - '+Convert(Varchar(10),Year(SalDate)) Months,
'Week '+ Convert(Varchar(50), DATEPART(WEEK, SalDate)) As Weeks,
Sum(Amount) OrderValue
From [Sales]
Group By DateName(Month,SalDate)+' - '+Convert(Varchar(10),Year(SalDate)),
Convert(Varchar(50), DATEPART(WEEK, SalDate))
Months Weeks Total
January - 2021 Week 1 2000.00
January - 2021 Week 2 1000.00
December - 2020 Week 53 3000.00
October - 2020 Week 40 1000.00
September - 2020 Week 40 1000.00
But the client wants to merge the Week 1 total with Week 53, show Week 2 as Week 1, and merge Week 40 with Sep 2020.
I want the result like below:
Months Weeks Total
January - 2021 Week 1 1000.00
December - 2020 Week 53 5000.00 (2000+3000)
September - 2020 Week 40 2000.00 (1000+1000)
Kindly help me to fix this problem.
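One possible direction (a sketch only, not a verified solution; it assumes SQL Server's default DATEFIRST of 7, i.e. weeks starting on Sunday): group by the Sunday that starts each week and derive the month and year labels from that week-start date. That folds the Dec 27 - Jan 2 week under December 2020 and the Oct 1 sale under September 2020; the renumbering of January's Week 2 as Week 1 would still need a custom week definition on top of this.
Select DateName(Month, w.WeekStart) + ' - ' + Convert(Varchar(10), Year(w.WeekStart)) As Months,
       'Week ' + Convert(Varchar(50), DATEPART(WEEK, w.WeekStart)) As Weeks,
       Sum(Amount) As OrderValue
From [Sales]
Cross Apply (Select DATEADD(day, 1 - DATEPART(WEEKDAY, SalDate), Convert(date, SalDate)) As WeekStart) w
Group By DateName(Month, w.WeekStart) + ' - ' + Convert(Varchar(10), Year(w.WeekStart)),
         DATEPART(WEEK, w.WeekStart)
Order By Min(w.WeekStart);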

Select and group past data relative to a certain date

I have data in two tables, ORDERS and Training.
Different kinds of training are provided to various customers. I would like to see what the effect of this training was on revenue for the customers involved. To achieve this, I would like to look at the revenue for the past 90 days and the future 90 days for each customer from the date of receiving the training. In other words, if a customer received a training on March 30 2014, I would like to look at the revenue from Jan 1 2014 till June 30 2014. I have come up with the following query, which pretty much does the job:
select
o.custno,
sum(ISNULL(a.revenue,0)) as Revenue,
o.Date as TrainingDate,
DATEADD(mm, DATEDIFF(mm, 0, a.created), 0) as RevenueMonth
from ORDERS a,
(select distinct custno, max(Date) as Date from Training group by custno) o
where a.custnum = o.custno
and a.created between DATEADD(day, -90, o.Date) and DATEADD(day, 90, o.Date)
group by o.custno, o.Date, DATEADD(mm, DATEDIFF(mm, 0, a.created), 0)
order by o.custno
The sample output of this query looks something like this:
custno Revenue TrainingDate RevenueMonth
0000100 159.20 2014-06-02 00:00:00.000 2014-03-01 00:00:00.000
0000100 199.00 2014-06-02 00:00:00.000 2014-04-01 00:00:00.000
0000100 79.60 2014-06-02 00:00:00.000 2014-05-01 00:00:00.000
0000100 29.85 2014-06-02 00:00:00.000 2014-06-01 00:00:00.000
0000100 79.60 2014-06-02 00:00:00.000 2014-07-01 00:00:00.000
0000100 99.50 2014-06-02 00:00:00.000 2014-08-01 00:00:00.000
0000250 437.65 2013-02-26 00:00:00.000 2012-11-01 00:00:00.000
0000250 4181.65 2013-02-26 00:00:00.000 2012-12-01 00:00:00.000
0000250 4146.80 2013-02-26 00:00:00.000 2013-01-01 00:00:00.000
0000250 6211.93 2013-02-26 00:00:00.000 2013-02-01 00:00:00.000
0000250 2199.72 2013-02-26 00:00:00.000 2013-03-01 00:00:00.000
0000250 4452.65 2013-02-26 00:00:00.000 2013-04-01 00:00:00.000
Desired output example:
If the training was provided on March 15 2014, for customer number 100, I’d want revenue data in the following format:
CustNo Revenue TrainingDate RevenueMonth
100 <Some revenue figure> March 15 2014 Dec 15 2013 – Jan 14 2014 (Past month 1)
100 <Some revenue figure> March 15 2014 Jan 15 2014 – Feb 14 2014 (Past month 2)
100 <Some revenue figure> March 15 2014 Feb 15 2014 – Mar 14 2014 (Past month 3)
100 <Some revenue figure> March 15 2014 Mar 15 2014 – Apr 14 2014 (Future month 1)
100 <Some revenue figure> March 15 2014 Apr 15 2014 – May 14 2014 (Future month 2)
100 <Some revenue figure> March 15 2014 May 15 2014 – Jun 14 2014 (Future month 3)
Here, the RevenueMonth column doesn't need to be in this format as long as it has the data relative to the training date. The ‘past' and ‘future' month references in parentheses are only to explain the output; they need not be present in the output.
My query gets the data and groups the data by month. I would like the months to be grouped relative to the training date. For example, if the training was given on March 15, I would like the past month to be from Feb 15 till March 14; my query doesn't do that. I believe a little tweak in this query might just achieve what I'm after.
Any help with the query would be highly appreciated.
Something along these lines may do what you want:
select
t.custno,
sum(ISNULL(o.revenue,0)) as Revenue,
t.maxDate as TrainingDate,
((DATEDIFF(day, TrainingDate, o.created) + 89) / 30) - 3 as DeltaMonth
from ORDERS o
join (select custno, max([Date]) as maxDate from Training group by custno) t
on o.custnum = t.custno
where o.created
between DATEADD(day, -89, t.maxDate) and DATEADD(day, 90, t.maxDate)
group by t.custno, t.maxDate, DeltaMonth
order by t.custno
The general strategy is to compute a difference in months (or 30-day periods, really) from the training date, and group by that. This version uses the range from 89 days before to 90 days after the training, because a range from -90 to +90 would contain one more day (the training day itself) than divides evenly into 30-day periods.
The query follows the general structure of the original, but there are several changes:
it computes DeltaMonth (from -3 to 2) as an index of 30-day periods relative to the training date, with the training date being the last day of DeltaMonth number -1.
it uses the DeltaMonth alias in the GROUP BY clause instead of repeating its formula. That provides better clarity, simplicity, and maintainability.
I changed the table aliases. I simply could not handle there being a table named "ORDERS" and an alias "o", with the latter not being associated with the former.
the query uses modern join syntax, for better clarity
the GROUP BY clause was updated accordingly.
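One practical note: SQL Server won't accept the TrainingDate and DeltaMonth aliases where they are used above, since a column alias can't be referenced inside the same SELECT list or in GROUP BY. A minimal way to keep the "define the formula once" idea (a sketch, reusing the same assumed table and column names) is to compute DeltaMonth in a CROSS APPLY:
select
    t.custno,
    sum(ISNULL(o.revenue, 0)) as Revenue,
    t.maxDate as TrainingDate,
    c.DeltaMonth
from ORDERS o
join (select custno, max([Date]) as maxDate from Training group by custno) t
  on o.custnum = t.custno
cross apply (
    -- 30-day bucket index relative to the training date: -3..-1 are past, 0..2 are future
    select ((DATEDIFF(day, t.maxDate, o.created) + 89) / 30) - 3 as DeltaMonth
) c
where o.created between DATEADD(day, -89, t.maxDate) and DATEADD(day, 90, t.maxDate)
group by t.custno, t.maxDate, c.DeltaMonth
order by t.custno, c.DeltaMonth;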