I usually use PROC SQL when I'm joining tables on a key that also has a date condition (i.e., target_date falls between start_date and end_date).
I've been able to translate this successfully to a hash join for the INNER JOIN case:
data hash_join;
    if _n_ = 1 then do;
        declare hash add1(dataset:'table_2', multidata:'Y');
        add1.defineKey('key_1');
        add1.defineData('start_date','end_date','value_1');
        add1.defineDone();
    end;
    format
        start_date date9.
        end_date   date9.
        value_1    10.5
    ;
    set table_1 (keep=key_1 target_date);
    if add1.find() = 0 then do until (add1.find_next());
        if start_date le target_date le end_date then output;
    end;
run;
Which is the same thing as:
proc sql;
    create table sql_join as select
        b.start_date,
        b.end_date,
        b.value_1,
        a.key_1,
        a.target_date
    from table_1 a
    inner join table_2 b
        on a.key_1 = b.key_1 and
           a.target_date between b.start_date and b.end_date
    ;
quit;
I'm having trouble figuring out what the equivalent of a LEFT JOIN would be, though. If something doesn't join at all, I'd still want to output the row, which I think is straightforward:
if add1.find() ne 0 then output;
And if it JOINs and the date is between, that seems straightforward as well:
if add1.find() = 0 then do until (add1.find_next());
if start_date le target_date le end_date then output;
end;
But how do I get the rest of the records from table_1 that join on the key but don't have target_date between start_date and end_date? For instance, say table_2 holds the start_date and end_date of a sale, and for key_1 = 'Clothes' the sale didn't start until February 1st. If table_1 has 'Clothes' with a target_date of January 1st, it will join on the key, but I want to output the row with a blank value. Any ideas on how to do this?
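For reference, the PROC SQL version of what I'm after would just be a LEFT JOIN with the date test kept in the ON clause, something like:
proc sql;
    create table sql_left_join as select
        a.key_1,
        a.target_date,
        b.start_date,
        b.end_date,
        b.value_1
    from table_1 a
    left join table_2 b
        on a.key_1 = b.key_1 and
           a.target_date between b.start_date and b.end_date
    ;
quit;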
Any help would be much appreciated!
You just need to keep track of whether you've found a match. Since the hash FIND only checks the key, not the 'between' part, you have to track the date condition yourself.
See this example. Here I modify SASHELP.CLASS to look like your input tables, then add a bit of logic to see if anything was found.
data table_1;
    set sashelp.class;
    rename age=target_date name=key_1;
    drop height weight;
run;
data table_2;
    set sashelp.class;
    do _i = 1 to mod(_n_,3);
        start_date = age-3+_i;
        end_date   = age+1-_i;
        if start_date le end_date then output;
    end;
    rename name=key_1 height=value_1;
    keep height weight start_date age end_date name;
run;
data hash_join;
    if _n_ = 1 then do;
        declare hash add1(dataset:'table_2', multidata:'Y');
        add1.defineKey('key_1');
        add1.defineData('start_date','end_date','value_1');
        add1.defineDone();
    end;
    format
        start_date date9.
        end_date   date9.
        value_1    10.5
    ;
    set table_1 (keep=key_1 target_date);
    if add1.find() = 0 then do until (add1.find_next());
        if start_date le target_date le end_date then do;
            found = 1;
            output;
        end;
    end;
    call missing(start_date, end_date, value_1); *clear all of the hash data elements so a no-match row outputs blanks;
    if not (found) then output;
run;
I think you just need to track if something has the key, but not in the range:
if add1.find() ^= 0 then output;
else do;
    found = 0;
    do until (add1.find_next());
        if start_date le target_date le end_date then do;
            output;
            found = 1;
        end;
    end;
    if ^found then output;
end;
No data to test with, so this is just me coding in SO. Let me know if it doesn't work.
I'm using a Crystal Reports 13 Add Command for record selection from an Oracle database connected through the Oracle 11g Client. The error I am receiving is ORA-00933: SQL command not properly ended, but I can't find anything the matter with my code (incomplete):
/* Determine units with billing code effective dates in the previous month */
SELECT "UNITS"."UnitNumber", "BILL"."EFF_DT"
FROM "MFIVE"."BILL_UNIT_ACCT" "BILL"
LEFT OUTER JOIN "MFIVE"."VIEW_ALL_UNITS" "UNITS" ON "BILL"."UNIT_ID" = "UNITS"."UNITID"
WHERE "UNITS"."OwnerDepartment" LIKE '580' AND TO_CHAR("BILL"."EFF_DT", 'MMYYYY') = TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'MMYYYY')
INNER JOIN
/* Loop through previously identified units and determine last billing code change prior to previous month */
(
SELECT "BILL2"."UNIT_ID", MAX("BILL2"."EFF_DT")
FROM "MFIVE"."BILL_UNIT_ACCT" "BILL2"
WHERE TO_CHAR("BILL2"."EFF_DT", 'MMYYYY') < TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'MMYYYY')
GROUP BY "BILL2"."UNIT_ID"
)
ON "BILL"."UNIT_ID" = "BILL2"."UNIT_ID"
ORDER BY "UNITS"."UnitNumber", "BILL"."EFF_DT" DESC
We are a state entity that leases vehicles (units) to other agencies. Each unit has a billing code with an associated effective date. The application is to develop a report of units with billing code changes in the previous month.
Complicating the matter is that for each unit above, the report must also show the latest billing code and associated effective date prior to the previous month. A brief example:
Given this data and assuming it is now April 2016 (ordered for clarity)...
Unit Billing Code Effective Date Excluded
---- ------------ -------------- --------
1    A            04/15/2016     Present month
1    B            03/29/2016
1    A            03/15/2016
1    C            03/02/2016
1    B            01/01/2015
2    C            03/25/2016
2    A            03/04/2016
2    B            07/24/2014
2    A            01/01/2014     A later effective date prior to previous month exists
3    D            11/28/2014     No billing code change during previous month
The report should return the following...
Unit Billing Code Effective Date
---- ------------ --------------
1    B            03/29/2016
1    A            03/15/2016
1    C            03/02/2016
1    B            01/01/2015
2    C            03/25/2016
2    A            03/04/2016
2    B            07/24/2014
Any assistance resolving the error will be appreciated.
You have a WHERE clause before the INNER JOIN clause. This is invalid syntax - if you swap them it should work:
SELECT "UNITS"."UnitNumber",
"BILL"."EFF_DT"
FROM "MFIVE"."BILL_UNIT_ACCT" "BILL"
LEFT OUTER JOIN
"MFIVE"."VIEW_ALL_UNITS" "UNITS"
ON "BILL"."UNIT_ID" = "UNITS"."UNITID"
INNER JOIN
/* Loop through previously identified units and determine last billing code change prior to previous month */
(
SELECT "UNIT_ID",
MAX("EFF_DT")
FROM "MFIVE"."BILL_UNIT_ACCT"
WHERE TO_CHAR("EFF_DT", 'MMYYYY') < TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'MMYYYY')
GROUP BY "UNIT_ID"
) "BILL2"
ON "BILL"."UNIT_ID" = "BILL2"."UNIT_ID"
WHERE "UNITS"."OwnerDepartment" LIKE '580'
AND TO_CHAR("BILL"."EFF_DT", 'MMYYYY') = TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'MMYYYY')
ORDER BY "UNITS"."UnitNumber", "BILL"."EFF_DT" DESC
Also, the "BILL2" alias needs to move outside the parentheses: the table inside the subquery doesn't need an alias, but the derived table itself does.
Are you really sure you need the double quotes? Double quotes enforce case sensitivity in identifiers; by default, Oracle converts all table and column names to upper case to hide case sensitivity from the user. Since you are using both double quotes and upper-case names, the quotes seem redundant.
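For example, these two statements reference the same column, because Oracle folds the unquoted identifiers to upper case (a mixed-case name such as "UnitNumber" would still need its quotes if the column was created that way):

SELECT "BILL"."EFF_DT"
FROM "MFIVE"."BILL_UNIT_ACCT" "BILL";

SELECT BILL.EFF_DT
FROM MFIVE.BILL_UNIT_ACCT BILL;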
SOLUTION:
SELECT DISTINCT "BILL2".*
FROM "MFIVE"."BILL_UNIT_ACCT" "BILL"
LEFT JOIN
/* Returns all rows for each unit with effective date >= maximum prior effective date and effective date < current month */
(
SELECT "UNITS"."UnitNumber" AS "UNITNO",
UPPER("UNITS"."SpecNoDescription") AS "TS_DESCR",
"UNITS"."UsingDepartment" AS "USING_DEPT",
"UNITS"."OwnerDepartment" AS "OWNING_DEPT",
"BILL"."BILLING_CODE",
"BILL"."EFF_DT",
"BILL"."UNIT_ID"
FROM "MFIVE"."BILL_UNIT_ACCT" "BILL"
LEFT OUTER JOIN "MFIVE"."VIEW_ALL_UNITS" "UNITS" ON "BILL"."UNIT_ID" = "UNITS"."UNITID"
WHERE TO_NUMBER(TO_CHAR(TRUNC("BILL"."EFF_DT"), 'YYYYMM'), '999999') < TO_NUMBER(TO_CHAR(TRUNC(ADD_MONTHS(SYSDATE, 0)), 'YYYYMM'), '999999') AND
"UNITS"."OwnerDepartment" = '580' AND
"BILL"."EFF_DT" >=
/* Returns maximum effective date prior to previous month for each unit */
(
SELECT MAX("BILL3"."EFF_DT")
FROM "MFIVE"."BILL_UNIT_ACCT" "BILL3"
WHERE "BILL3"."UNIT_ID" = "BILL"."UNIT_ID" AND
TO_NUMBER(TO_CHAR(TRUNC("BILL3"."EFF_DT"), 'YYYYMM'), '999999') < TO_NUMBER(TO_CHAR(TRUNC(ADD_MONTHS(SYSDATE, -1)), 'YYYYMM'), '999999')
GROUP BY "BILL3"."UNIT_ID"
)
) BILL2
ON "BILL"."UNIT_ID" = "BILL2"."UNIT_ID"
WHERE TO_CHAR("BILL"."EFF_DT", 'YYYYMM') = TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'YYYYMM')
ORDER BY "BILL2"."UNITNO", "BILL2"."EFF_DT" DESC
If keeping the WHERE before the join really matters to you, use a CTE (i.e., a WITH clause that builds a temporary result set which you then join on).
With c as (SELECT "UNITS"."UnitNumber", "BILL"."EFF_DT","BILL"."UNIT_ID" -- Correction: Was " BILL"."UNIT_ID" (spacetanker)
FROM "MFIVE"."BILL_UNIT_ACCT" "BILL" -- Returning unit id column too, to be used in join
LEFT OUTER JOIN "MFIVE"."VIEW_ALL_UNITS" "UNITS" ON "BILL"."UNIT_ID" = "UNITS"."UNITID"
WHERE "UNITS"."OwnerDepartment" LIKE '580' AND TO_CHAR("BILL"."EFF_DT", 'MMYYYY') = TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'MMYYYY'))
select * from c --Filter out your required columns from the c and d alias results, e.g. c.UNIT_ID
INNER JOIN
--Loop through previously identified units and determine last billing code change prior to previous month
(
SELECT "BILL2"."UNIT_ID", MAX("BILL2"."EFF_DT")
FROM "MFIVE"."BILL_UNIT_ACCT" "BILL2"
WHERE TO_CHAR("BILL2"."EFF_DT", 'MMYYYY') < TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'MMYYYY')
GROUP BY "BILL2"."UNIT_ID"
) d
ON c."UNIT_ID" = d."UNIT_ID"
order by c."UnitNumber", c."EFF_DT" desc -- Correction: Removed semicolon that Crystal Reports didn't like (spacetanker)
This query looks like it has plenty of scope for tuning, though; whoever has access to the data and the requirement specification is the best judge of that.
EDIT :
You are not able to see data PRIOR to the previous month because your original question's SELECT uses BILL.EFF_DT, which is filtered to return only dates from the previous month (... AND TO_CHAR("BILL"."EFF_DT", 'MMYYYY') = TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'MMYYYY')).
To get the data you want, I think you have to use the BILL2 section (the d part in my subquery): give MAX(EFF_DT) an alias and use that alias in your SELECT clause.
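For example, here is a sketch of the CTE version with that alias added (I've used a hypothetical name "PRIOR_EFF_DT", and switched the comparison format to 'YYYYMM' so the string comparison orders correctly):

With c as (
    SELECT "UNITS"."UnitNumber", "BILL"."EFF_DT", "BILL"."UNIT_ID"
    FROM "MFIVE"."BILL_UNIT_ACCT" "BILL"
    LEFT OUTER JOIN "MFIVE"."VIEW_ALL_UNITS" "UNITS" ON "BILL"."UNIT_ID" = "UNITS"."UNITID"
    WHERE "UNITS"."OwnerDepartment" LIKE '580'
      AND TO_CHAR("BILL"."EFF_DT", 'YYYYMM') = TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'YYYYMM')
)
select c."UnitNumber", c."EFF_DT", d."PRIOR_EFF_DT"
from c
inner join (
    SELECT "BILL2"."UNIT_ID", MAX("BILL2"."EFF_DT") AS "PRIOR_EFF_DT"
    FROM "MFIVE"."BILL_UNIT_ACCT" "BILL2"
    WHERE TO_CHAR("BILL2"."EFF_DT", 'YYYYMM') < TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'YYYYMM')
    GROUP BY "BILL2"."UNIT_ID"
) d
on c."UNIT_ID" = d."UNIT_ID"
order by c."UnitNumber", c."EFF_DT" desc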
I give up! I've been trying to make this work for some time now, but I can't get the logic and/or code right.
What I'm trying to do is to count ongoing cases for every date in a defined period. I have a table looking basically like this:
CaseId OpenDate  CloseDate
1      01JAN2014 05JAN2014
2      02JAN2014 04JAN2014
3      02JAN2014 .
4      03JAN2014 04JAN2014
5      06JAN2014 08JAN2014
6      07JAN2014 .
I created a data set iterating dates from today-30 to today as comparative dates (CompDate).
Definition of ongoing case is (CompDate <= OpenDate and CompDate < CloseDate) or CloseDate = .
My plan was to join the dates with the table and get something like this
CompDate  OngoingCases
01JAN2014 1
02JAN2014 3
03JAN2014 4
04JAN2014 2
05JAN2014 1
06JAN2014 2
07JAN2014 3
So far I've come up with this code, which gives me something else...
proc sql;
create table Ongoing as
select distinct
a.CompDate,
count(distinct case when (a.CompDate <= datepart(b.caseopendate) and (a.CompDate < datepart(b.caseclosedate) or b.caseclosedate = . )) then b.caseid end) as Cases
from List_of_dates as a
left outer join dcms_cases as b
on a.Date
where
.
.
group by a.Date
;
quit;
I think all you need is to join the two tables, applying your date condition in the ON clause. The following SQL will do the trick:
proc sql;
create table Ongoing as
select a.CompDate, count(b.CaseId) as OngoingCases
from List_of_dates a
left join dcms_cases b
on b.OpenDate<=a.CompDate and (b.CloseDate>a.CompDate or b.CloseDate=.)
group by a.CompDate
;
quit;
I made a few assumptions:
1) You meant to say the comp date is greater than (or equal to) the open date but less than the close date, or the close date is missing.
2) You want the final result set to show how many ongoing cases there were based on the open date rather than the comp date, since the comp date would be one static value, based on your description.
The code below uses your sample data (named 'AA1') and will give you the result set described.
Data AA1;
    set AA1;
    Format CompDate date9. Ongoing $3.;
    CompDate = today() - 100; /*Change this to whatever your criteria is for the comparison*/
    If ((CompDate >= opendate) and (CompDate < closedate)) or (closedate = .) then Ongoing = 'Y';
    else Ongoing = 'N';
run;
/*Sort the table by open date in order to do a count of the ongoing cases*/
proc sort data = AA1;
    by opendate;
run;
/*Count how many ongoing cases exist based on the open date values*/
data AA1(rename=(count=OngoingCases));
    set AA1;
    by opendate;
    count + 1;
    if first.opendate then count = 1;
run;
/*Clean up to keep the variables you want in your result set*/
data final;
    set AA1(keep=opendate OngoingCases);
run;
One way is to iterate over your date range for every record in your dataset, and output records which satisfy your criteria...
%LET START = today()-30 ;
%LET END = today() ;
data want ;
set have ;
do CompDate = &START to &END ;
OngoingCase = (OpenDate <= CompDate < CloseDate)
or (OpenDate <= CompDate and missing(CloseDate)) ;
if OngoingCase then output ;
end ;
format CompDate date9. ;
run ;
proc summary data=want nway ;
class CompDate ;
var OngoingCase ;
output out=case_sum (drop=_:) sum= ;
run ;
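If you'd rather do the summing in SQL instead of PROC SUMMARY, the same counts can be pulled from the expanded dataset (a sketch, reusing the want dataset and variable names from above):

proc sql;
    create table case_sum as
    select CompDate format=date9.,
           sum(OngoingCase) as OngoingCases
    from want
    group by CompDate;
quit;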
I modified the current query slightly and moved the CASE condition to the WHERE clause.
proc sql;
create table Ongoing as
select distinct
a.CompDate,
count(distinct b.caseid ) as Cases
from List_of_dates as a
left outer join dcms_cases as b
on a.Date
where (a.CompDate <= datepart(b.caseopendate) and a.CompDate < datepart(b.caseclosedate))
or b.caseclosedate = .
group by a.Date
;
quit;
How do you create a moving average in SQL?
Current table:
Date       Clicks
2012-05-01 2,230
2012-05-02 3,150
2012-05-03 5,520
2012-05-04 1,330
2012-05-05 2,260
2012-05-06 3,540
2012-05-07 2,330
Desired table or output:
Date       Clicks 3 day Moving Average
2012-05-01 2,230
2012-05-02 3,150
2012-05-03 5,520  4,360
2012-05-04 1,330  3,330
2012-05-05 2,260  3,120
2012-05-06 3,540  3,320
2012-05-07 2,330  3,010
This is an evergreen Joe Celko question.
I don't know which DBMS platform is being used, but in any case Joe was able to answer it more than 10 years ago with standard SQL.
Joe Celko SQL Puzzles and Answers citation:
"That last update attempt suggests that we could use the predicate to
construct a query that would give us a moving average:"
SELECT S1.sample_time, AVG(S2.load) AS avg_prev_hour_load
FROM Samples AS S1, Samples AS S2
WHERE S2.sample_time
BETWEEN (S1.sample_time - INTERVAL 1 HOUR)
AND S1.sample_time
GROUP BY S1.sample_time;
Is the extra column or the query approach better? The query is
technically better because the UPDATE approach will denormalize the
database. However, if the historical data being recorded is not going
to change and computing the moving average is expensive, you might
consider using the column approach.
MS SQL Example:
CREATE TABLE #TestDW
( Date1 datetime,
LoadValue Numeric(13,6)
);
INSERT INTO #TestDW VALUES('2012-06-09' , '3.540' );
INSERT INTO #TestDW VALUES('2012-06-08' , '2.260' );
INSERT INTO #TestDW VALUES('2012-06-07' , '1.330' );
INSERT INTO #TestDW VALUES('2012-06-06' , '5.520' );
INSERT INTO #TestDW VALUES('2012-06-05' , '3.150' );
INSERT INTO #TestDW VALUES('2012-06-04' , '2.230' );
SQL Puzzle query:
SELECT S1.date1, AVG(S2.LoadValue) AS avg_prev_3_days
FROM #TestDW AS S1, #TestDW AS S2
WHERE S2.date1
BETWEEN DATEADD(d, -2, S1.date1 )
AND S1.date1
GROUP BY S1.date1
order by 1;
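For context, the "column approach" that the quote weighs against the query would mean storing the moving average in an extra column and refreshing it with an UPDATE. Against the #TestDW table above, it would look roughly like this (a sketch, with a hypothetical column name, and it does denormalize the table as the quote warns):

ALTER TABLE #TestDW ADD avg_prev_3_days Numeric(13,6);
-- run the ALTER in its own batch so the new column is visible below
GO

UPDATE T1
SET avg_prev_3_days =
    (SELECT AVG(T2.LoadValue)
     FROM #TestDW AS T2
     WHERE T2.Date1 BETWEEN DATEADD(d, -2, T1.Date1) AND T1.Date1)
FROM #TestDW AS T1;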
One way to do this is to join the same table to itself a few times.
select
(Current.Clicks
+ isnull(P1.Clicks, 0)
+ isnull(P2.Clicks, 0)
+ isnull(P3.Clicks, 0)) / 4 as MovingAvg3
from
MyTable as Current
left join MyTable as P1 on P1.Date = DateAdd(day, -1, Current.Date)
left join MyTable as P2 on P2.Date = DateAdd(day, -2, Current.Date)
left join MyTable as P3 on P3.Date = DateAdd(day, -3, Current.Date)
Adjust the DateAdd component of the ON clauses to control whether your moving average runs strictly from the past through now, or from days ago through days ahead.
This works nicely for situations where you need a moving average over only a few data points.
This is not an optimal solution for moving averages with more than a few data points.
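For instance, here is a sketch of the "days-ago through days-ahead" variant (a centered window of one day back through one day ahead), using the same hypothetical MyTable:

select
    (isnull(P1.Clicks, 0)
    + Cur.Clicks
    + isnull(N1.Clicks, 0)) / 3 as CenteredMovingAvg3
from
    MyTable as Cur
    left join MyTable as P1 on P1.Date = DateAdd(day, -1, Cur.Date)
    left join MyTable as N1 on N1.Date = DateAdd(day, 1, Cur.Date)

The same caveat applies: isnull() treats missing neighbours as zero under a fixed divisor, which skews the results at the edges of the date range.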
select t2.date, round(sum(ct.clicks)/3) as avg_clicks
from
(select date from clickstable) as t2,
(select date, clicks from clickstable) as ct
where datediff(t2.date, ct.date) between 0 and 2
group by t2.date
Obviously you can change the interval to whatever you need. You could also use count() instead of a magic number to make it easier to change, but that will also slow it down.
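For example, the count() variant of the query above would be:

select t2.date, round(sum(ct.clicks)/count(ct.clicks)) as avg_clicks
from
(select date from clickstable) as t2,
(select date, clicks from clickstable) as ct
where datediff(t2.date, ct.date) between 0 and 2
group by t2.date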
General template for rolling averages that scales well for large data sets
WITH moving_avg AS (
SELECT 0 AS [lag] UNION ALL
SELECT 1 AS [lag] UNION ALL
SELECT 2 AS [lag] UNION ALL
SELECT 3 AS [lag] --ETC
)
SELECT
DATEADD(day,[lag],[date]) AS [reference_date],
[otherkey1],[otherkey2],[otherkey3],
AVG([value1]) AS [avg_value1],
AVG([value2]) AS [avg_value2]
FROM [data_table]
CROSS JOIN moving_avg
GROUP BY [otherkey1],[otherkey2],[otherkey3],DATEADD(day,[lag],[date])
ORDER BY [otherkey1],[otherkey2],[otherkey3],[reference_date];
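As a concrete illustration, applied to the question's data (assuming a table named ClicksTable with columns [date] and [clicks] and no other grouping keys), the template reduces to:

WITH moving_avg AS (
    SELECT 0 AS [lag] UNION ALL
    SELECT 1 AS [lag] UNION ALL
    SELECT 2 AS [lag]
)
SELECT
    DATEADD(day,[lag],[date]) AS [reference_date],
    AVG([clicks]) AS [avg_clicks]
FROM ClicksTable
CROSS JOIN moving_avg
GROUP BY DATEADD(day,[lag],[date])
ORDER BY [reference_date];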
And for weighted rolling averages:
WITH weighted_avg AS (
SELECT 0 AS [lag], 1.0 AS [weight] UNION ALL
SELECT 1 AS [lag], 0.6 AS [weight] UNION ALL
SELECT 2 AS [lag], 0.3 AS [weight] UNION ALL
SELECT 3 AS [lag], 0.1 AS [weight] --ETC
)
SELECT
DATEADD(day,[lag],[date]) AS [reference_date],
[otherkey1],[otherkey2],[otherkey3],
AVG([value1] * [weight]) / AVG([weight]) AS [wavg_value1],
AVG([value2] * [weight]) / AVG([weight]) AS [wavg_value2]
FROM [data_table]
CROSS JOIN weighted_avg
GROUP BY [otherkey1],[otherkey2],[otherkey3],DATEADD(day,[lag],[date])
ORDER BY [otherkey1],[otherkey2],[otherkey3],[reference_date];
select *
, (select avg(c2.clicks) from #clicks_table c2
where c2.date between dateadd(dd, -2, c1.date) and c1.date) mov_avg
from #clicks_table c1
Use a different join predicate:
SELECT current.date
,avg(periods.clicks)
FROM current left outer join current as periods
ON periods.date BETWEEN dateadd(d,-2, current.date) AND current.date
GROUP BY current.date HAVING COUNT(*) >= 3
The HAVING clause will prevent any dates without at least N values from being returned.
assume x is the value to be averaged, xDate is the date value, and @endDate is the end of the 3-day window you want to average over:
SELECT avg(x) FROM myTable WHERE xDate BETWEEN dateadd(d, -2, @endDate) AND @endDate
In Hive, maybe you could try:
select date, clicks, avg(clicks) over (order by date rows between 2 preceding and current row) as moving_avg from clicktable;
For the purpose, I'd like to create an auxiliary/dimensional date table like
create table date_dim(date date, date_1 date, date_2 date, date_3 date, ...)
where date is the key, date_1 is the day itself, date_2 the day before, date_3 two days before, and so on.
Then you can do the equi-join in Hive.
Using a view like:
select date as ref_date, date as win_date from date_dim
union all
select date as ref_date, date_add(date, -1) as win_date from date_dim
union all
select date as ref_date, date_add(date, -2) as win_date from date_dim
union all
select date as ref_date, date_add(date, -3) as win_date from date_dim
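The equi-join itself would then be something like this (a sketch, assuming the view above is saved as date_window and the clicks live in clicktable):

select w.ref_date, avg(c.clicks) as moving_avg
from date_window w
join clicktable c
  on c.date = w.win_date
group by w.ref_date;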
NOTE: THIS IS NOT AN ANSWER but an enhanced code sample based on Diego Scaravaggi's answer. I am posting it as an answer because the comment section is insufficient. Note that I have parameterized the period for the moving average.
declare @p int = 3
declare @t table(d int, bal float)
insert into @t values
(1,94),
(2,99),
(3,76),
(4,74),
(5,48),
(6,55),
(7,90),
(8,77),
(9,16),
(10,19),
(11,66),
(12,47)
select a.d, avg(b.bal)
from
@t a
left join @t b on b.d between a.d-(@p-1) and a.d
group by a.d
--@p1 is the period of the moving average, @o1 is the offset
declare @p1 as int
declare @o1 as int
set @p1 = 5;
set @o1 = 3;
with np as (
    select *, rank() over(partition by cmdty, tenor order by markdt) as r
    from p_prices p1
    where 1=1
)
, x1 as (
    select s1.*, avg(s2.val) as avgval from np s1
    inner join np s2
        on s1.cmdty = s2.cmdty and s1.tenor = s2.tenor
        and s2.r between s1.r - (@p1 - 1) - (@o1) and s1.r - (@o1)
    group by s1.cmdty, s1.tenor, s1.markdt, s1.val, s1.r
)
select * from x1
I'm not sure that your expected result (output) shows a classic "simple moving (rolling) average" for 3 days, because, for example, the first triple of numbers by definition gives:
ThreeDaysMovingAverage = (2.230 + 3.150 + 5.520) / 3 = 3.6333333
but you expect 4.360, and that's confusing.
Nevertheless, I suggest the following solution, which uses the window function AVG. This approach is much more efficient (clearer and less resource-intensive) than the self-joins introduced in other answers (and I'm surprised that no one has given a better solution).
-- Oracle-SQL dialect
with
data_table as (
select date '2012-05-01' AS dt, 2.230 AS clicks from dual union all
select date '2012-05-02' AS dt, 3.150 AS clicks from dual union all
select date '2012-05-03' AS dt, 5.520 AS clicks from dual union all
select date '2012-05-04' AS dt, 1.330 AS clicks from dual union all
select date '2012-05-05' AS dt, 2.260 AS clicks from dual union all
select date '2012-05-06' AS dt, 3.540 AS clicks from dual union all
select date '2012-05-07' AS dt, 2.330 AS clicks from dual
),
param as (select 3 days from dual)
select
dt AS "Date",
clicks AS "Clicks",
case when rownum >= p.days then
avg(clicks) over (order by dt
rows between p.days - 1 preceding and current row)
end
AS "3 day Moving Average"
from data_table t, param p;
You can see that AVG is wrapped in case when rownum >= p.days then to force NULLs in the first rows, where a "3 day Moving Average" is meaningless.
We can apply Joe Celko's "dirty" left outer join method (as cited above by Diego Scaravaggi) to answer the question as it was asked.
declare #ClicksTable table ([Date] date, Clicks int)
insert into #ClicksTable
select '2012-05-01', 2230 union all
select '2012-05-02', 3150 union all
select '2012-05-03', 5520 union all
select '2012-05-04', 1330 union all
select '2012-05-05', 2260 union all
select '2012-05-06', 3540 union all
select '2012-05-07', 2330
This query:
SELECT
T1.[Date],
T1.Clicks,
-- AVG ignores NULL values so we have to explicitly NULLify
-- the days when we don't have a full 3-day sample
CASE WHEN count(T2.[Date]) < 3 THEN NULL
ELSE AVG(T2.Clicks)
END AS [3-Day Moving Average]
FROM #ClicksTable T1
LEFT OUTER JOIN #ClicksTable T2
ON T2.[Date] BETWEEN DATEADD(d, -2, T1.[Date]) AND T1.[Date]
GROUP BY T1.[Date]
Generates the requested output:
Date       Clicks 3-Day Moving Average
2012-05-01 2,230
2012-05-02 3,150
2012-05-03 5,520  4,360
2012-05-04 1,330  3,330
2012-05-05 2,260  3,120
2012-05-06 3,540  3,320
2012-05-07 2,330  3,010