Rolling sum for historical dates - SQL

I have a table orders that looks like this:

order_id | sales_amount | order_time | store_id
---------|--------------|------------|---------
1412412  | 30           | 2022/03/28 | 456
1551211  | 5            | 2022/03/27 | 145
I am interested in calculating the sales from stores that had their first order in the last 28 days, on a rolling basis. The following will give me this for the most recent day:
with first_order_dates AS (
    select
        min(order_time) as first_order_time,
        store_id
    from Orders
    group by store_id
)
select
    dateadd(day, -1, cast(getdate() as date)) AS date,
    sum(sales_amount) AS new_revenue_last_28d
from Orders
left join first_order_dates
    on first_order_dates.store_id = Orders.store_id
where first_order_time between dateadd(day, -29, cast(getdate() as date))
                           and dateadd(day, -1, cast(getdate() as date))
group by dateadd(day, -1, cast(getdate() as date))
Resulting in:

Date       | new_revenue_last_28d
-----------|---------------------
2022/04/06 | 5400
What I want is to go back and calculate this for every historical day, i.e. to end up with:

Date       | new_revenue_last_28d
-----------|---------------------
2022/04/06 | 5400
2022/04/05 | 5732
2022/04/04 | 4300

and so on, so I can chart this. I have run out of ideas - how can I do this with only the info I have available? Using Snowflake, ideally.

So if you want to only show sales for shops that had their first sale in the last 28 days, and, for those 28 days, keep a rolling window of the sum of those sales:
WITH data as (
    select * from values
        (100, '2022-04-07'::date, 10),
        (100, '2022-04-06'::date, 8),
        (100, '2022-04-05'::date, 11),
        (100, '2022-04-01'::date, 12),
        (101, '2022-04-02'::date, 110),
        (101, '2022-04-01'::date, 120)
        t(store_id, order_date, sales_amount)
), store_valid_orders as (
    select
        store_id
        ,order_date
        ,sales_amount
    from data
    -- keep only stores whose first order falls within the last 28 days
    qualify min(order_date) over(partition by store_id) >= current_date() - 28
), those_28_days as (
    -- one row per day, counting back from today
    select current_date() - row_number()over(order by null) + 1 as date
    from table(generator(ROWCOUNT => 29))
), day_join_sales as (
    select
        d.date
        ,s.store_id
        ,sum(s.sales_amount) as sales_amount
    from those_28_days as d
    left join store_valid_orders as s on d.date = s.order_date
    group by 1,2
)
select
    date
    ,store_id
    ,sum(sales_amount) over(partition by store_id order by date rows between 28 preceding and current row) as prior_28_days_sales
from day_join_sales
-- drop the generated days that matched no valid store
qualify store_id is not null;
gives:

DATE       | STORE_ID | PRIOR_28_DAYS_SALES
-----------|----------|--------------------
2022-04-01 | 100      | 12
2022-04-05 | 100      | 23
2022-04-06 | 100      | 31
2022-04-07 | 100      | 41
2022-04-01 | 101      | 120
2022-04-02 | 101      | 230
That is actually more complex than it needs to be... but I half have the concept for solving rolling windows of days which include the first sales with respect to a rolling date. That is more complex, but the above might be enough to answer your question, so I will stop here.
Take 2:

With 28 days of sales per store, rolled into a single daily total:
WITH data as (
    select * from values
        (100, '2022-04-07'::date, 10),
        (100, '2022-04-06'::date, 8),
        (100, '2022-04-05'::date, 11),
        (100, '2022-04-01'::date, 12),
        (101, '2022-04-02'::date, 110),
        (101, '2022-04-01'::date, 120)
        t(store_id, order_date, sales_amount)
), store_first_orders as (
    select
        store_id
        ,min(order_date) as first_order
    from data
    group by 1
), _29_rows as (
    select
        row_number()over(order by null) - 1 as rn
    from table(generator(ROWCOUNT => 29))
), those_29_rows as (
    -- pair every store with each of the 29 row numbers, spanning
    -- the 29 days starting at that store's first order
    select
        v.store_id
        ,dateadd(day, r.rn, v.first_order) as date
    from _29_rows as r
    full join store_first_orders as v
), first_28_days_of_data as (
    select
        r.store_id
        ,r.date
        ,d.sales_amount
    from those_29_rows r
    left join data as d
        on d.store_id = r.store_id AND d.order_date = r.date
), per_site_dailies as (
    -- running total of each store's sales across its first 29 days
    select
        store_id
        ,date
        ,sum(sales_amount) over(partition by store_id order by date) as roll_sales
    from first_28_days_of_data
    order by 2,1
)
select
    date,
    sum(roll_sales) as new_revenue_last_28d
from per_site_dailies
group by 1
having date <= current_date()
order by 1;
gives:

DATE       | NEW_REVENUE_LAST_28D
-----------|---------------------
2022-04-01 | 132
2022-04-02 | 242
2022-04-03 | 242
2022-04-04 | 242
2022-04-05 | 253
2022-04-06 | 261
2022-04-07 | 271
2022-04-08 | 271
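To point Take 2 at the real orders table from the question instead of the inline sample, only the data CTE needs to change; a minimal sketch, assuming order_time casts cleanly to a date and the column names match the question:

WITH data as (
    select
        store_id
        ,cast(order_time as date) as order_date
        ,sales_amount
    from Orders
)
-- store_first_orders and the remaining CTEs continue unchanged from Take 2 above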

Related

Calculate the amount between two dates in the table

I have these 2 tables and I need to merge them into one table, with a column holding the sum of expenses that fall between two consecutive profit dates. How can I do it?
Profits:

Id | Date       | Money
---|------------|------
1  | 01.01.2022 | 100
2  | 15.01.2022 | 50
3  | 25.01.2022 | 30
Expenses:

Id | Date       | Money
---|------------|------
1  | 01.01.2022 | 20
2  | 03.01.2022 | 30
3  | 30.01.2022 | 40
Result:

Id | Date       | Profits | Expenses(Sum)
---|------------|---------|--------------
1  | 01.01.2022 | 100     | 50
2  | 15.01.2022 | 50      |
3  | 25.01.2022 | 30      | 40
The following statement is a possible option.

Data:

SELECT *
INTO Profits
FROM (VALUES
    (1, CONVERT(date, '01.01.2022', 104), 100),
    (2, CONVERT(date, '15.01.2022', 104), 50),
    (3, CONVERT(date, '25.01.2022', 104), 30)
) v (Id, [Date], [Money])

SELECT *
INTO Expenses
FROM (VALUES
    (1, CONVERT(date, '01.01.2022', 104), 20),
    (2, CONVERT(date, '03.01.2022', 104), 30),
    (3, CONVERT(date, '30.01.2022', 104), 40)
) v (Id, [Date], [Money])
Statement:

SELECT p.Id, p.Date, p.Money AS Profits, SUM(e.Money) AS Expenses
FROM (
    -- LEAD() pairs each profit row with the date of the next one, so expenses
    -- can be matched to the range between consecutive profit dates
    -- (open-ended for the last row)
    SELECT *, LEAD(Date) OVER (ORDER BY Date) AS NextDate
    FROM Profits
) p
LEFT JOIN Expenses e ON p.Date <= e.Date AND (e.Date <= p.NextDate OR p.NextDate IS NULL)
GROUP BY p.Id, p.Date, p.Money
Result:

Id | Date       | Profits | Expenses
---|------------|---------|---------
1  | 2022-01-01 | 100     | 50
2  | 2022-01-15 | 50      |
3  | 2022-01-25 | 30      | 40
For SQL Server 2008 you have to replace LEAD() with a self-join (a simplified approach when there are no gaps in the Id column):

SELECT p.Id, p.Date, p.Money AS Profits, SUM(e.Money) AS Expenses
FROM (
    SELECT p1.*, p2.Date AS NextDate
    FROM Profits p1
    LEFT JOIN Profits p2 ON p1.Id = p2.Id - 1
) p
LEFT JOIN Expenses e ON p.Date <= e.Date AND (e.Date <= p.NextDate OR p.NextDate IS NULL)
GROUP BY p.Id, p.Date, p.Money
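If the Id column can have gaps, a gap-safe variant for SQL Server 2008 is to look up the next profit date with OUTER APPLY (available since SQL Server 2005) instead of the Id-based self-join; a sketch under the same schema, not part of the original answer:

SELECT p1.Id, p1.Date, p1.Money AS Profits, SUM(e.Money) AS Expenses
FROM Profits p1
OUTER APPLY (
    -- earliest profit date after this row, regardless of Id numbering
    SELECT TOP 1 p2.Date AS NextDate
    FROM Profits p2
    WHERE p2.Date > p1.Date
    ORDER BY p2.Date
) nx
LEFT JOIN Expenses e ON p1.Date <= e.Date AND (e.Date <= nx.NextDate OR nx.NextDate IS NULL)
GROUP BY p1.Id, p1.Date, p1.Money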

Table with daily historical stock prices. How to pull stocks where the price reached a certain number for the first time

I have a table with historical stocks prices for hundreds of stocks. I need to extract only those stocks that reached $10 or greater for the first time.
Stock | Price | Date
------|-------|-----------
AAA   | 9     | 2021-10-01
AAA   | 10    | 2021-10-02
AAA   | 8     | 2021-10-03
AAA   | 10    | 2021-10-04
BBB   | 9     | 2021-10-01
BBB   | 11    | 2021-10-02
BBB   | 12    | 2021-10-03
Is there a way to count how many times each stock hit >= 10, in order to pull only those where the count = 1 (in this case it would be stock BBB, considering it never reached 10 in the past)?
Since I couldn't figure out how to create that count, I tried the manipulations with min/max dates below, but this looks like a bit of an awkward approach. Any idea of a simpler solution?
with query1 as (
    select Stock, min(date) as min_greater10_dt
    from t
    where Price >= 10
    group by Stock
), query2 as (
    select Stock, max(date) as max_greater10_dt
    from t
    where Price >= 10
    group by Stock
)
select Stock
from t a
join query1 b on b.Stock = a.Stock
join query2 c on c.Stock = a.Stock
where not(a.Price < 10 and a.Date between b.min_greater10_dt and c.max_greater10_dt)
This is a type of gaps-and-islands problem which can be solved as follows:
- detect the change from < 10 to >= 10 using a lagged price;
- count the number of such changes;
- filter in only stocks where this has happened exactly once;
- take the first row, since you only want the stock (you could group by here, but a row number allows you to select the entire row should you wish to).
declare @Table table (Stock varchar(3), Price money, [Date] date);

insert into @Table (Stock, Price, [Date])
values
    ('AAA', 9, '2021-10-01'),
    ('AAA', 10, '2021-10-02'),
    ('AAA', 8, '2021-10-03'),
    ('AAA', 10, '2021-10-04'),
    ('BBB', 9, '2021-10-01'),
    ('BBB', 11, '2021-10-02'),
    ('BBB', 12, '2021-10-03');

with cte1 as (
    select Stock, Price, [Date]
        -- rows numbered separately within the < 10 and >= 10 groups of each stock
        , row_number() over (partition by Stock, case when Price >= 10 then 1 else 0 end order by [Date] asc) rn
        -- previous price for the stock (0 for the first row)
        , lag(Price,1,0) over (partition by Stock order by [Date] asc) LaggedStock
    from @Table
), cte2 as (
    select Stock, Price, [Date], rn, LaggedStock
        -- count the < 10 -> >= 10 transitions per stock
        , sum(case when Price >= 10 and LaggedStock < 10 then 1 else 0 end) over (partition by Stock) StockOver10
    from cte1
)
select Stock
--, Price, [Date], rn, LaggedStock, StockOver10 -- debug
from cte2
where Price >= 10
and StockOver10 = 1 and rn = 1;
Returns:

Stock
-----
BBB
Note: providing DDL+DML as shown above makes it much easier for people to assist.

Calculate standard deviation over time

I have information about sales per day. For example:

Date       | Product | Amount
-----------|---------|-------
01-07-2020 | A       | 10
01-03-2020 | A       | 20
01-02-2020 | B       | 10
Now I would like to know the average sales per day and the standard deviation for the last year. For the average I can just count the number of entries per product, then take 365 minus that count as the number of 0's, but I wonder what the best way is to calculate the standard deviation while incorporating the 0's for the days where there are no sales.
Use a hierarchical (or recursive) query to generate daily dates for the year, then use a PARTITION OUTER JOIN to join it to your product data; you can then find the average and standard deviation with the AVG and STDDEV aggregate functions, using COALESCE to fill in NULL values with zeroes:
WITH start_date ( dt ) AS (
    SELECT DATE '2020-01-01' FROM DUAL
),
calendar ( dt ) AS (
    -- one row per day for a full year starting at start_date
    SELECT dt + LEVEL - 1
    FROM   start_date
    CONNECT BY dt + LEVEL - 1 < ADD_MONTHS( dt, 12 )
)
SELECT product,
       AVG( COALESCE( amount, 0 ) ) AS average_sales_per_day,
       STDDEV( COALESCE( amount, 0 ) ) AS stddev_sales_per_day
FROM   calendar c
       LEFT OUTER JOIN (
           SELECT t.*
           FROM   test_data t
                  INNER JOIN start_date s
                  ON (    s.dt <= t."DATE"
                      AND t."DATE" < ADD_MONTHS( s.dt, 12 ) )
       ) t
       -- PARTITION BY makes the outer join apply per product, so every
       -- product gets a row for every calendar day (missing days are NULL)
       PARTITION BY ( t.product )
       ON ( c.dt = t."DATE" )
GROUP BY product
So, for your sample data:
CREATE TABLE test_data ( "DATE", Product, Amount ) AS
SELECT DATE '2020-07-01', 'A', 10 FROM DUAL UNION ALL
SELECT DATE '2020-03-01', 'A', 20 FROM DUAL UNION ALL
SELECT DATE '2020-02-01', 'B', 10 FROM DUAL;
This outputs:

PRODUCT | AVERAGE_SALES_PER_DAY                      | STDDEV_SALES_PER_DAY
--------|--------------------------------------------|------------------------------------------
A       | .0819672131147540983606557377049180327869  | 1.16752986363678031669548047505759328696
B       | .027322404371584699453551912568306010929   | .5227083734893166933219264686616717636897
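As a quick sanity check of those numbers (my arithmetic, not part of the original answer): product A sold 10 + 20 = 30 in total, and 2020 is a leap year with 366 days, so its expected daily average is 30/366 ≈ 0.0819672; likewise 10/366 ≈ 0.0273224 for product B. Both match the output above.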

Group sales data by time interval for every 15 min for one day

I have a sales table that contains sales figures by store along with timing. Let's say that in one day, at one store, we have done 10,000 transactions; I need to find the total sales for every 15 minutes of that particular business date, keeping in mind that if there are no sales between, for example, 12:00 PM and 12:15 PM, the value should be zero or null.
In a day we have 24 hours, so that means 96 columns for the 15-minute intervals.
Sales Table:
SiteName Time Amount BusinessDate
----------------------------------------------------------
A 7:01:02 AM 20 2017-01-02
A 7:03:22 AM 25 2017-01-02
A 7:05:03 AM 33 2017-01-02
A 7:11:02 AM 55 2017-01-02
A 7:13:05 AM 46 2017-01-02
A 7:17:02 AM 21 2017-01-02
A 8:01:52 AM 18 2017-01-02
A 8:55:42 AM 7 2017-01-02
A 8:56:33 AM 7 2017-01-02
A 8:58:55 AM 31 2017-01-02
and so on
How can I accomplish this?!
Dynamic Example

Declare @SQL varchar(max) = Stuff((Select ',' + QuoteName(T)
                                   From (Select Top 96 T=format(DateAdd(Minute,(Row_Number() Over (Order By (Select null))-1)*15,0),'HH:mm') From master..spt_values n1) A
                                   Order by 1
                                   For XML Path('')),1,1,'')

Select @SQL = '
Select *
From (
      Select [SiteName]
            ,Col   = format(DateAdd(MINUTE,(DatePart(HOUR,[Time])*60) + ((DatePart(MINUTE,[Time]) / 15)*15),0),''HH:mm'')
            ,Value = [Amount]
      From Sales
     ) A
Pivot (sum(Value) For [Col] in (' + @SQL + ') ) p'

Exec(@SQL);
Returns 96 columns from 00:00 to 23:45
The Code Generated
Select *
From (
      Select [SiteName]
            ,Col   = format(DateAdd(MINUTE,(DatePart(HOUR,[Time])*60) + ((DatePart(MINUTE,[Time]) / 15)*15),0),'HH:mm')
            ,Value = [Amount]
      From Sales
     ) A
Pivot (sum(Value) For [Col] in ([00:00],[00:15],[00:30],[00:45],[01:00],[01:15],[01:30],[01:45],[02:00],[02:15],[02:30],[02:45],[03:00],[03:15],[03:30],[03:45],[04:00],[04:15],[04:30],[04:45],[05:00],[05:15],[05:30],[05:45],[06:00],[06:15],[06:30],[06:45],[07:00],[07:15],[07:30],[07:45],[08:00],[08:15],[08:30],[08:45],[09:00],[09:15],[09:30],[09:45],[10:00],[10:15],[10:30],[10:45],[11:00],[11:15],[11:30],[11:45],[12:00],[12:15],[12:30],[12:45],[13:00],[13:15],[13:30],[13:45],[14:00],[14:15],[14:30],[14:45],[15:00],[15:15],[15:30],[15:45],[16:00],[16:15],[16:30],[16:45],[17:00],[17:15],[17:30],[17:45],[18:00],[18:15],[18:30],[18:45],[19:00],[19:15],[19:30],[19:45],[20:00],[20:15],[20:30],[20:45],[21:00],[21:15],[21:30],[21:45],[22:00],[22:15],[22:30],[22:45],[23:00],[23:15],[23:30],[23:45]) ) p
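If zeroes are preferred over NULLs for the empty slots, each pivoted column would have to be wrapped in an outer select; for example (my addition, not part of the generated code):

Select [SiteName]
      ,IsNull([00:00], 0) AS [00:00]
      ,IsNull([00:15], 0) AS [00:15]
      -- ...and so on for the remaining 94 columns
From ( /* same subquery as above */ ) A
Pivot ( /* same pivot as above */ ) p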
This is an option that does NOT use dynamic SQL, and instead of 96 columns wide per row, generates one row per time slot. First, I am starting with a sample table of your data.
create table #Sales
( SiteName nvarchar(1),
  SaleTime time,
  Amount decimal,
  BusinessDate date );

insert into #Sales ( SiteName, SaleTime, Amount, BusinessDate )
values
    ( 'A', '7:01:02', 20, '2017-01-02' ),
    ( 'A', '7:03:22', 25, '2017-01-02' ),
    ( 'A', '7:05:03', 33, '2017-01-02' ),
    ( 'A', '7:11:02', 55, '2017-01-02' ),
    ( 'A', '7:13:05', 46, '2017-01-02' ),
    ( 'A', '7:17:02', 21, '2017-01-02' ),
    ( 'A', '8:01:52', 18, '2017-01-02' ),
    ( 'A', '8:55:42', 7, '2017-01-02' ),
    ( 'A', '8:56:33', 7, '2017-01-02' ),
    ( 'A', '8:58:55', 31, '2017-01-02' );
And the query, which I will explain shortly:

select
    allTimes.TimeStart,
    allTimes.TimeEnd,
    coalesce( count(S.Amount), 0 ) as NumEntries,
    coalesce( sum( S.Amount), 0 ) as SumValues
from
    ( select
          cast( DateAdd( minute, 15 * (timeSlots.Row - 1), '2017-01-01' ) as time ) as TimeStart,
          cast( DateAdd( minute, 15 * timeSlots.Row, '2017-01-01' ) as time ) as TimeEnd
      from
          ( SELECT top 96
                ROW_NUMBER() OVER(Order by AnyColumnInYourTable) Row
            FROM AnyTableThatHasAtLeast96Rows ) timeSlots
    ) allTimes
    LEFT JOIN #Sales S
        on  allTimes.TimeStart <= S.SaleTime
        AND S.SaleTime < allTimes.TimeEnd
        AND ( allTimes.TimeEnd < allTimes.TimeStart
              OR S.SaleTime <= allTimes.TimeEnd )
group by
    allTimes.TimeStart,
    allTimes.TimeEnd
Now, the explanation...

First, the inner-most query, aliased "timeSlots". This can query from ANY table that has at least the 96 rows needed for the 15-minute time slots; it does nothing but return a result set numbered sequentially from 1 to 96.

Now that we have 96 rows, we get to the next outer query, aliased "allTimes". This basically does date/time math, adding 15 minutes * whatever the "row" number is, to create all 96 time slots. I have explicitly created a start and an end time so I can apply >= and <. This query does nothing but create the explicit time slots. And since I am casting the DATEADD() result to just the TIME, it does not matter what fixed "date" value I start with -- in this case, 2017-01-01. All I care about are the time slots themselves. The results will be like...
TimeStart TimeEnd
00:00:00 00:15:00
00:15:00 00:30:00
00:30:00 00:45:00
...
23:30:00 23:45:00
23:45:00 00:00:00 -- This one is special for the JOIN clause for time
Now, the LEFT JOIN... This is the SLIGHTLY tricky one.

LEFT JOIN #Sales S
    on  allTimes.TimeStart <= S.SaleTime
    AND S.SaleTime < allTimes.TimeEnd
    AND ( allTimes.TimeEnd < allTimes.TimeStart
          OR S.SaleTime <= allTimes.TimeEnd )
Here, left joining to the sales will always allow every time slot to be in the final result set. However, which slot does a given sale fit into? The sale time must be GREATER THAN OR EQUAL to the starting 15-minute interval...
AND...
either the end time is less than the start (via the special 23:45 - 00:00 slot that wraps to the next morning), OR the sale time is LESS than the beginning of the next time slot. Ex: the 08:30 - 08:45 time slot actually covers up to 08:44:xx... precision, but always less than 08:45.
By doing this way with one row per time slot, I can get a count of transactions, sum of the transactions, you could even do avg, min, max for sales activity too, for finding trends.
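For instance, the select list of the query above could be extended like this (my sketch; same #Sales table, FROM, JOIN, and GROUP BY as in the query above):

select
    allTimes.TimeStart,
    allTimes.TimeEnd,
    coalesce( count(S.Amount), 0 ) as NumEntries,
    coalesce( sum( S.Amount), 0 ) as SumValues,
    avg( S.Amount ) as AvgSale,  -- NULL (not 0) for slots with no sales
    min( S.Amount ) as MinSale,
    max( S.Amount ) as MaxSale
-- FROM / LEFT JOIN / GROUP BY exactly as in the query above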
WITH dates AS (
    SELECT CAST('2009-01-01' AS datetime) 'date'
    UNION ALL
    SELECT DATEADD(mi, 15, t.date)
    FROM dates t
    WHERE DATEADD(mi, 15, t.date) < '2009-01-02')
SELECT cast([date] as time) as [date] from dates

Use the above code to get the 96 rows of 15-minute intervals for a day.
Join the sales table with the above CTE.
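A minimal sketch of that join, reusing the #Sales sample table from the answer above; the flooring expression that maps each sale to the start of its 15-minute slot is my addition, not from the original answer:

WITH dates AS (
    SELECT CAST('2009-01-01' AS datetime) 'date'
    UNION ALL
    SELECT DATEADD(mi, 15, t.date)
    FROM dates t
    WHERE DATEADD(mi, 15, t.date) < '2009-01-02')
SELECT d.slot, COALESCE(SUM(s.Amount), 0) AS Total
FROM (SELECT cast([date] as time) AS slot FROM dates) d
LEFT JOIN #Sales s
    -- floor SaleTime to the start of its 15-minute slot
    ON d.slot = cast(DATEADD(mi, (DATEDIFF(mi, 0, cast(s.SaleTime as datetime)) / 15) * 15, 0) as time)
GROUP BY d.slot
ORDER BY d.slot;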
Here the recursive CTE generates 15-minute intervals for 24 hours (96 rows). This result is then LEFT JOINed to a subquery in which Amount is grouped into 15-minute intervals for every hour.
In the result, 00:00:00 corresponds to the sum of amounts that happened from 00:00:00 to 00:14:59,
00:15:00 = from 00:15:00 to 00:29:59,
00:30:00 = from 00:30:00 to 00:44:59,
00:45:00 = from 00:45:00 to 00:59:59,
and so on for all 24 hours.
create table #Sales
( SiteName nvarchar(1),
  SaleTime time,
  Amount decimal,
  BusinessDate date );

insert into #Sales ( SiteName, SaleTime, Amount, BusinessDate )
values
    ( 'A', '13:22:36', 888, '2017-01-02' ),
    ( 'A', '00:00:00', 20, '2017-01-02' ),
    ( 'A', '00:00:00', 30, '2017-01-02' ),
    ( 'A', '00:45:00', 88, '2017-01-02' ),
    ( 'A', '12:46:05', 22, '2017-01-02' ),
    ( 'A', '12:59:59', 22, '2017-01-02' ),
    ( 'A', '23:59:59', 10, '2017-01-02' );
-- Below is the actual query:
with rec as (
    select cast('00:00:00' as time) as dt
    union all
    select DATEADD(mi, 15, dt)
    from rec
    where dt < cast('23:45:00' as time)
)
select rec.dt, t1.summ
from rec
left join
    (select part, sum(Amount) as summ from (
        select *, case
            when DATEPART(mi, SaleTime) < 15 then concat(SUBSTRING(cast(SaleTime as varchar), 1, 2), ':00:00')
            when DATEPART(mi, SaleTime) between 15 and 29 then concat(SUBSTRING(cast(SaleTime as varchar), 1, 2), ':15:00')
            when DATEPART(mi, SaleTime) between 30 and 44 then concat(SUBSTRING(cast(SaleTime as varchar), 1, 2), ':30:00')
            else concat(SUBSTRING(cast(SaleTime as varchar), 1, 2), ':45:00')
        end as part
        from #Sales
        where BusinessDate = '2017-01-02'
    ) t
    group by part) t1
    on rec.dt = t1.part
order by rec.dt
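As an aside, the string-building CASE above can be replaced by the same date-arithmetic flooring used in the pivot answer earlier; a sketch of the inner aggregation only (my rewrite, not from the original answer):

select cast(DATEADD(mi, (DATEDIFF(mi, 0, cast(SaleTime as datetime)) / 15) * 15, 0) as time) as part
      ,sum(Amount) as summ
from #Sales
where BusinessDate = '2017-01-02'
group by cast(DATEADD(mi, (DATEDIFF(mi, 0, cast(SaleTime as datetime)) / 15) * 15, 0) as time)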

SQL spread month value into weeks

I have a table with values by month and I want to spread these values across weeks, taking into account that a week which spans two months needs to take part of each month's value, weighted by the number of days that fall in each month.
For example I have the table with a different price of steel by month
Product Month Price
------------------------------------
Steel 1/Jan/2014 100
Steel 1/Feb/2014 200
Steel 1/Mar/2014 300
I need to convert it into weeks as follows
Product Week Price
-------------------------------------------
Steel 06-Jan-14 100
Steel 13-Jan-14 100
Steel 20-Jan-14 100
Steel 27-Jan-14 128.57
Steel 03-Feb-14 200
Steel 10-Feb-14 200
Steel 17-Feb-14 200
As you see above, the week that overlaps between Jan and Feb needs to be calculated as follows:

(100*5/7) + (200*2/7) = 71.43 + 57.14 = 128.57

This takes into account that the week of the 27th has 5 days that fall into Jan and 2 into Feb.
Is there any possible way to create a query in SQL that would achieve this?
I tried the following
First attempt:
select
    WD.week,
    PM.PRICE,
    DATEADD(m,1,PM.Month),
    SUM(PM.PRICE/7) * COUNT(*)
from
    ( select '2014-1-1' as Month, 100 as PRICE
      union
      select '2014-2-1' as Month, 200 as PRICE
    ) PM
join
    ( select '2014-1-20' as week
      union
      select '2014-1-27' as week
      union
      select '2014-2-3' as week
    ) WD
    on  WD.week >= PM.Month
    and WD.week < DATEADD(m,1,PM.Month)
group by
    WD.week, PM.PRICE, DATEADD(m,1,PM.Month)
This gives me the following (the last two columns are the DATEADD(m,1,Month) and SUM expressions from the select list):

week      | PRICE | DATEADD(m,1,Month)      | SUM
----------|-------|-------------------------|----
2014-1-20 | 100   | 2014-02-01 00:00:00.000 | 14
2014-1-27 | 100   | 2014-02-01 00:00:00.000 | 14
2014-2-3  | 200   | 2014-03-01 00:00:00.000 | 28
I also tried the following:

;with x as (
    select price,
        datepart(week, dateadd(day, n.n-2, t1.month)) wk,
        dateadd(day, n.n-1, t1.month) dt
    from
        (select '2014-1-1' as Month, 100 as PRICE
         union
         select '2014-2-1' as Month, 200 as PRICE) t1
    cross apply (
        select datediff(day, t.month, dateadd(month, 1, t.month)) nd
        from
            (select '2014-1-1' as Month, 100 as PRICE
             union
             select '2014-2-1' as Month, 200 as PRICE) t
        where t1.month = t.month) ndm
    inner join
        (SELECT (a.Number * 256) + b.Number AS N FROM
            (SELECT number FROM master..spt_values WHERE type = 'P' AND number <= 255) a (Number),
            (SELECT number FROM master..spt_values WHERE type = 'P' AND number <= 255) b (Number)) n --numbers
        on n.n <= ndm.nd
)
select min(dt) as week, cast(sum(price)/count(*) as decimal(9,2)) as price
from x
group by wk
having count(*) = 7
order by wk
This gives me the following:

week                    | price
------------------------|-------
2014-01-07 00:00:00.000 | 100.00
2014-01-14 00:00:00.000 | 100.00
2014-01-21 00:00:00.000 | 100.00
2014-02-04 00:00:00.000 | 200.00
2014-02-11 00:00:00.000 | 200.00
2014-02-18 00:00:00.000 | 200.00
Thanks
If you have a calendar table it's a simple join:

SELECT
    product,
    calendar_date - (day_of_week-1) AS week,
    SUM(price/7) * COUNT(*)
FROM prices AS p
JOIN calendar AS c
    ON  c.calendar_date >= month
    AND c.calendar_date < DATEADD(m,1,month)
GROUP BY product,
    calendar_date - (day_of_week-1)
This could be further simplified to join only to mondays and then do some more date arithmetic in a CASE to get 7 or less days.
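A sketch of that simplification, with an assumed is_monday flag on the calendar table (the flag and the overlap arithmetic are mine, not from the original answer): join each month only to the Mondays of the weeks it overlaps, then weight the price by how many of that week's 7 days fall inside the month:

SELECT
    product,
    c.calendar_date AS week,
    SUM(price / 7.0 *
        -- days of [week, week+7) that fall inside [month, month + 1 month)
        DATEDIFF(day,
                 CASE WHEN c.calendar_date > month THEN c.calendar_date ELSE month END,
                 CASE WHEN DATEADD(day, 7, c.calendar_date) < DATEADD(m, 1, month)
                      THEN DATEADD(day, 7, c.calendar_date)
                      ELSE DATEADD(m, 1, month) END)) AS price
FROM prices AS p
JOIN calendar AS c
    ON  c.is_monday = 1
    AND DATEADD(day, 7, c.calendar_date) > month   -- week ends after the month starts
    AND c.calendar_date < DATEADD(m, 1, month)     -- week starts before the month ends
GROUP BY product, c.calendar_date

For the week of 27-Jan-14 this yields 100/7.0*5 + 200/7.0*2 = 128.57, matching the expected output above.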
Edit:
Your last query returned Jan 31st two times; you need to remove the = from "on n.n <= ndm.nd", making it "on n.n < ndm.nd". And as you seem to work with ISO weeks, you had better change the DATEPART to avoid problems with different DATEFIRST settings.
Based on your last query I created a fiddle.
;with x as (
    select price,
        datepart(isowk, dateadd(day, n.n, t1.month)) wk,
        dateadd(day, n.n-1, t1.month) dt
    from
        (select '2014-1-1' as Month, 100.00 as PRICE
         union
         select '2014-2-1' as Month, 200.00 as PRICE) t1
    cross apply (
        select datediff(day, t.month, dateadd(month, 1, t.month)) nd
        from
            (select '2014-1-1' as Month, 100.00 as PRICE
             union
             select '2014-2-1' as Month, 200.00 as PRICE) t
        where t1.month = t.month) ndm
    inner join
        (SELECT (a.Number * 256) + b.Number AS N FROM
            (SELECT number FROM master..spt_values WHERE type = 'P' AND number <= 255) a (Number),
            (SELECT number FROM master..spt_values WHERE type = 'P' AND number <= 255) b (Number)) n --numbers
        on n.n < ndm.nd
)
select min(dt) as week, cast(sum(price)/count(*) as decimal(9,2)) as price
from x
group by wk
having count(*) = 7
order by wk
Of course the dates might be from multiple years, so you need to GROUP BY by the year, too.
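T-SQL has no ISO-year datepart, so one way around the year boundary is to group by the Monday of each week instead of the week number, using the fact that day 0 (1900-01-01) was a Monday; a sketch replacing the final select of the query above (the expression is my suggestion, not from the original answer):

select dateadd(day, (datediff(day, 0, dt) / 7) * 7, 0) as week  -- Monday of dt's week
      ,cast(sum(price)/count(*) as decimal(9,2)) as price
from x
group by dateadd(day, (datediff(day, 0, dt) / 7) * 7, 0)
having count(*) = 7
order by week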
Actually, you need to spread it over days, and then get the averages by week. To get the days we'll use the Numbers table.
;with x as (
    select product, price,
        datepart(week, dateadd(day, n.n-2, t1.month)) wk,
        dateadd(day, n.n-1, t1.month) dt
    from #t t1
    cross apply (
        select datediff(day, t.month, dateadd(month, 1, t.month)) nd
        from #t t
        where t1.month = t.month and t1.product = t.product) ndm
    inner join numbers n on n.n <= ndm.nd
)
select product, min(dt) as week, cast(sum(price)/count(*) as decimal(9,2)) as price
from x
group by product, wk
having count(*) = 7
order by product, wk
The result of datepart(week,dateadd(day, n.n-2, t1.month)) expression depends on SET DATEFIRST so you might need to adjust accordingly.
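To illustrate (my example date, not from the original answer), compare the results of the two settings; DATEPART(iso_week, ...) -- alias isowk, as used in the fiddle above -- does not depend on DATEFIRST:

SET DATEFIRST 7;  -- Sunday as first day of the week (the US default)
SELECT DATEPART(week, '2014-01-26') AS wk;

SET DATEFIRST 1;  -- Monday as first day of the week
SELECT DATEPART(week, '2014-01-26') AS wk;

SELECT DATEPART(iso_week, '2014-01-26') AS iso_wk;  -- independent of DATEFIRST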