I have to generate a report of the number of tasks done per day and per hour. This way, the report will look like a grid.
I'd like the days of the month (from 1 to 31) in the horizontal axis, and the hours (from 8:00 to 18:00) vertically.
How do I select this kind of data from a database using SQL in PostgreSQL?
The query that you are looking for is a SQL aggregation query. It may seem a bit complicated, but the structure is pretty easy.
select extract(hour from date_time_of_task) as thehour,
sum(case when extract(day from date_time_of_task) = 1 then 1 else 0 end) as day_01,
sum(case when extract(day from date_time_of_task) = 2 then 1 else 0 end) as day_02,
sum(case when extract(day from date_time_of_task) = 3 then 1 else 0 end) as day_03,
sum(case when extract(day from date_time_of_task) = 4 then 1 else 0 end) as day_04,
... up to day 31
from tasks -- the FROM clause was missing; substitute your task table
group by extract(hour from date_time_of_task)
order by 1
This is simply grouping by the hour of the day. Then it pivots the data manually for each day of the month. The "sum" counts the number of rows that meet the two conditions at the same time -- the hour of the row and the day of the column.
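If you only need the 8:00 to 18:00 window from the question, a WHERE clause keeps the other hours out of the grid. A minimal sketch of the same idea, assuming the table is called tasks as in the answer below:
select extract(hour from date_time_of_task) as thehour,
       sum(case when extract(day from date_time_of_task) = 1 then 1 else 0 end) as day_01,
       sum(case when extract(day from date_time_of_task) = 2 then 1 else 0 end) as day_02
       -- ... day_03 through day_31, as above
from tasks
where extract(hour from date_time_of_task) between 8 and 18
group by 1
order by 1;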
The key elements to an elegant solution are generate_series(), date_part(), a CTE, GROUP BY and count(*), LEFT JOIN and finally: the crosstab() function (with 1 parameter) from the additional module tablefunc. To install it, run once per database:
CREATE EXTENSION tablefunc;
Query:
SELECT *
FROM crosstab($x$
WITH x AS (
SELECT date_part('day', date_time_of_task)::int AS d
,date_part('hour', date_time_of_task)::int AS h
,count(*)::int AS ct
FROM tasks
GROUP BY 1,2
)
SELECT d, h, ct
FROM (SELECT d, h FROM generate_series(1,31) d, generate_series(8,18) h) t -- one row per (day, hour) pair; hours 8-18 match the column list below
LEFT JOIN x USING (d,h)
ORDER BY 1,2
$x$)
AS orders(
day int
,h8 int, h9 int, h10 int, h11 int, h12 int, h13 int, h14 int, h15 int
,h16 int, h17 int, h18 int);
This produces a matrix of days and hours with the count of tasks in each cell, as you describe. The LEFT JOIN against the generated day/hour grid guarantees a row for every combination, which the one-parameter form of crosstab() needs because it fills the output columns positionally.
BTW: I used this auxiliary query to generate the target column definition list:
SELECT 'day int, ' || string_agg(x, ', ')
FROM (SELECT ('h' || generate_series(8,18) || ' int') AS x) a;
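That should return something like the following, ready to paste into the AS clause (the hour order follows the generate_series):
day int, h8 int, h9 int, h10 int, h11 int, h12 int, h13 int, h14 int, h15 int, h16 int, h17 int, h18 int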
This question is best asked using an example - if I have daily data (in this case, daily Domestic Box Office for the movie Elvis), how can I sum only the weekend values?
If the data looks like this:
Date        DBO
---------   --------
6/24/2022   12755467
6/25/2022    9929779
6/26/2022    8526333
6/27/2022    4253038
6/28/2022    5267391
6/29/2022    4010762
6/30/2022    3577241
7/1/2022     5320812
7/2/2022     6841224
7/3/2022     6290576
7/4/2022     4248679
7/5/2022     3639110
7/6/2022     3002182
7/7/2022     2460108
7/8/2022     3326066
7/9/2022     4324040
7/10/2022    3530965
I'd like to be able to get results that look like this:
Weekend   DBO Sum
-------   --------
1         31211579
2         18452612
3         11181071
Also, I am not sure how tricky this would be, but I would love to include the percent change vs. the previous weekend:
Weekend   DBO Sum    % Change
-------   --------   --------
1         31211579
2         18452612   -41%
3         11181071   -39%
I tried this with CASE WHEN but I got the results in different columns, which was not what I was looking for.
SELECT
SUM(CASE
WHEN DATE BETWEEN '2022-06-24' AND '2022-06-26' THEN index
ELSE 0
END) AS Weekend1
,SUM(CASE
WHEN DATE BETWEEN '2022-07-01' AND '2022-07-03' THEN index
ELSE 0
END) AS Weekend2
,SUM(CASE
WHEN DATE BETWEEN '2022-07-08' AND '2022-07-10' THEN index
ELSE 0
END) AS Weekend3
FROM Elvis
I would start by filtering the data on the weekend days only (a box-office weekend here runs Friday through Sunday). Then we can group by week to get the sum; the last step is to use window functions to compare each weekend with the previous one:
select iso_week,
       row_number() over(order by iso_week) as weekend_number,
       sum(index) as dbo_sum,
       ( sum(index) - lag(sum(index)) over(order by iso_week) )
         / nullif(lag(sum(index)) over(order by iso_week), 0) as ratio_change
from (
    select e.*, extract(isoweek from date) as iso_week
    from elvis e
    -- 1 = Sunday, 6 = Friday, 7 = Saturday; Fri-Sun of one weekend share an ISO week
    where extract(dayofweek from date) in (1, 6, 7)
) e
group by iso_week
order by iso_week
Consider below
select *,
round(100 * safe_divide(dbo_sum - lag(dbo_sum) over(order by week), lag(dbo_sum) over(order by week)), 2) change_percent
from (
select extract(week from date + 2) week, sum(dbo) dbo_sum
from your_table
where extract(dayofweek from date + 2) in (1, 2, 3)
group by week
)
Shifting the date by two days moves a Friday and its following Saturday and Sunday into the same Sunday-based week, so each Fri-Sun weekend groups together. Applied to the sample data in your question, the output is:
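Worked out by hand from the sample data (treat the exact week numbers as illustrative; they come from BigQuery's default Sunday-based week numbering):
week   dbo_sum    change_percent
----   --------   --------------
26     31211579   null
27     18452612   -40.88
28     11181071   -39.41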
My background is Oracle, but we've moved to Hadoop on AWS and I'm accessing our logs using Hive SQL. I've been asked to produce a report where the number of high-severity errors of any given type on the system exceeds 9 in any rolling period of 30 days (9 in production, but I use 2 in the example to keep the example data volume down), broken down by uptime. I've written code that does this, but I don't really understand performance tuning in Hive, and a lot of what I learned in Oracle doesn't seem applicable.
Can this be improved?
Data is roughly
CREATE TABLE LOG_TABLE
(SYSTEM_ID VARCHAR(1),
EVENT_TYPE VARCHAR(2),
EVENT_ID VARCHAR(3),
EVENT_DATE DATE,
UPTIME INT);
INSERT INTO LOG_TABLE
VALUES
('1','A1','138','2018-10-29',34),
('1','A2','146','2018-11-13',49),
('1','A3','140','2018-11-02',38),
('1','B1','130','2018-10-13',18),
('1','B1','150','2018-11-19',55),
('1','B2','137','2018-10-27',32),
('2','A1','128','2018-10-11',59),
('2','A1','131','2018-10-16',64),
('2','A1','136','2018-10-25',73),
('2','A2','139','2018-10-31',79),
('2','A2','145','2018-11-11',90),
('2','A2','147','2018-11-14',93),
('2','A3','135','2018-10-24',72),
('2','B1','124','2018-10-03',51),
('2','B1','133','2018-10-19',67),
('2','B2','134','2018-10-22',70),
('2','B2','142','2018-11-06',85),
('2','B2','148','2018-11-15',94),
('2','B2','149','2018-11-17',96),
('3','A2','127','2018-10-10',122),
('3','A3','123','2018-10-01',113),
('3','A3','125','2018-10-06',118),
('3','A3','126','2018-10-07',119),
('3','A3','141','2018-11-05',148),
('3','A3','144','2018-11-10',153),
('3','B1','132','2018-10-18',130),
('3','B1','143','2018-11-08',151),
('3','B2','129','2018-10-12',124);
and code that works is as follows. I do a self join on the log table to return all the records with the gap between them, keeping those with a gap of 30 days or less. I then select those where there are more than 2 events into a second CTE, and from those I count distinct event types and event IDs by system and uptime range.
WITH EVENTGAP AS
(SELECT T1.EVENT_TYPE,
T1.SYSTEM_ID,
T1.EVENT_ID,
T2.EVENT_ID AS EVENT_ID2,
T1.EVENT_DATE,
T2.EVENT_DATE AS EVENT_DATE2,
T1.UPTIME,
DATEDIFF(T2.EVENT_DATE,T1.EVENT_DATE) AS EVENT_GAP
FROM LOG_TABLE T1
INNER JOIN LOG_TABLE T2
ON (T1.EVENT_TYPE=T2.EVENT_TYPE
AND T1.SYSTEM_ID=T2.SYSTEM_ID)
WHERE DATEDIFF(T2.EVENT_DATE,T1.EVENT_DATE) BETWEEN 0 AND 30
AND T1.UPTIME BETWEEN 0 AND 299
AND T2.UPTIME BETWEEN 0 AND 330),
EVENTCOUNT
AS (SELECT EVENT_TYPE,
SYSTEM_ID,
EVENT_ID,
EVENT_DATE,
COUNT(1)
FROM EVENTGAP
GROUP BY EVENT_TYPE,
SYSTEM_ID,
EVENT_ID,
EVENT_DATE
HAVING COUNT(1)>2)
SELECT EVENTGAP.SYSTEM_ID,
CASE WHEN FLOOR(UPTIME/50) = 0 THEN '0-49'
WHEN FLOOR(UPTIME/50) = 1 THEN '50-99'
WHEN FLOOR(UPTIME/50) = 2 THEN '100-149'
WHEN FLOOR(UPTIME/50) = 3 THEN '150-199'
WHEN FLOOR(UPTIME/50) = 4 THEN '200-249'
WHEN FLOOR(UPTIME/50) = 5 THEN '250-299' END AS UPTIME_BAND,
COUNT(DISTINCT EVENTGAP.EVENT_ID2) AS EVENT_COUNT,
COUNT(DISTINCT EVENTGAP.EVENT_TYPE) AS TYPE_COUNT
FROM EVENTGAP
WHERE EVENTGAP.EVENT_ID IN (SELECT DISTINCT EVENTCOUNT.EVENT_ID FROM EVENTCOUNT)
GROUP BY EVENTGAP.SYSTEM_ID,
CASE WHEN FLOOR(UPTIME/50) = 0 THEN '0-49'
WHEN FLOOR(UPTIME/50) = 1 THEN '50-99'
WHEN FLOOR(UPTIME/50) = 2 THEN '100-149'
WHEN FLOOR(UPTIME/50) = 3 THEN '150-199'
WHEN FLOOR(UPTIME/50) = 4 THEN '200-249'
WHEN FLOOR(UPTIME/50) = 5 THEN '250-299' END
This gives the following result, which should be the unique counts of event IDs and event types that have 3 or more events falling in any rolling 30-day period. Some events may fall in more than one period but will only be counted once.
EVENTGAP.SYSTEM_ID UPTIME_BAND EVENT_COUNT TYPE_COUNT
2 50-99 10 3
3 100-149 4 1
In both Hive and Oracle, you would want to do this using window functions, using a window frame clause. The exact logic is different in the two databases.
In Hive you can use range between if you convert event_date to a number. A typical method is to subtract a fixed value from it. Another method is to use unix timestamps:
select lt.*
from (select lt.*,
count(*) over (partition by system_id, event_type -- per system and type, as in the question
order by unix_timestamp(event_date)
range between 60*60*24*30 preceding and current row -- 30 days in seconds
) as rolling_count
from log_table lt
) lt
where rolling_count >= 2 -- or 9
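The query above only flags the rows; to finish the report you would still roll those up into the 50-hour uptime bands. A rough, untested sketch, where flagged is a hypothetical name for the query above wrapped in a CTE and the band label is built arithmetically instead of with a long CASE:
select system_id,
       concat(cast(floor(uptime / 50) * 50 as string), '-',
              cast(floor(uptime / 50) * 50 + 49 as string)) as uptime_band,
       count(distinct event_id) as event_count,
       count(distinct event_type) as type_count
from flagged -- the rolling_count query above as a CTE or subquery
where rolling_count > 2 -- "more than 2" per the question; use > 9 in production
group by system_id,
         concat(cast(floor(uptime / 50) * 50 as string), '-',
                cast(floor(uptime / 50) * 50 + 49 as string));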
I am trying to calculate the sum of working days per month in an Oracle materialized view.
Here is my query:
CREATE MATERIALIZED VIEW DIM_DATE_MV
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
START WITH sysdate NEXT (TRUNC(sysdate)+1) + 7 / 24
as SELECT
CAL.DATE_D as ID_DATE,
(CASE WHEN (
(TRIM(TO_CHAR(CAL.DATE_D,'Day','nls_date_language=english')) IN ('Saturday','Sunday')) OR
(TRIM(TO_CHAR(CAL.DATE_D,'DD-MM')) IN ('01-01', '01-05', '08-05', '14-07', '15-08', '01-11', '11-11', '25-12')) OR
(TO_CHAR(CAL.DATE_D, 'DD-MM-YYYY') IN (SELECT TO_CHAR(DOFF.DATE_OFF, 'DD-MM-YYYY') FROM ODSISIC.DAY_OFF DOFF where DOFF.IMPACT='ALL'))
) THEN 0 ELSE 1 END) as IS_WORKING_DAY,
(CASE WHEN TO_CHAR(CAL.DATE_D , 'YYYY-MM') = TO_CHAR(CAL.DATE_D , 'YYYY-MM') THEN (Select SUM(IS_WORKING_DAY) from DIM_DATE_MV group by CAL.YEAR_MONTH_NUM) ELSE 0 END)
as NB_WORKING_DAY_MONTH
FROM ODSISIC.ORACLE_CALENDAR CAL
LEFT JOIN ODSISIC.DAY_OFF DOFF
ON DOFF.DATE_OFF = CAL.DATE_D
IS_WORKING_DAY = 0 if the date is a holiday, a weekend day, or a date in the DAY_OFF table, which contains the holidays whose dates differ from year to year.
I want NB_WORKING_DAY_MONTH to hold the monthly SUM (GROUP BY month) of IS_WORKING_DAY for each date's month.
How can I calculate this sum directly in my query, rather than creating an intermediate table for my join with the DAY_OFF table?
Thanks :)
After some more thought, I solved it by rewriting my SQL query:
CREATE MATERIALIZED VIEW DIM_DATE_MV
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
START WITH sysdate NEXT (TRUNC(sysdate)+1) + 7 / 24
as SELECT
CAL.DATE_D as ID_DATE,
IS_WORKING_DAY as IS_WORKING_DAY,
A.SUM as NB_WORKING_DAY_MONTH
FROM (SELECT SUM(IS_WORKING_DAY) as SUM, OCAL.YEAR_MONTH_NUM as ID_MONTH from ODSISIC.ORACLE_CALENDAR OCAL group by OCAL.YEAR_MONTH_NUM) A
INNER JOIN ODSISIC.ORACLE_CALENDAR CAL
on CAL.YEAR_MONTH_NUM = A.ID_MONTH
LEFT JOIN ODSISIC.DAY_OFF DOFF
ON DOFF.DATE_OFF = CAL.DATE_D
;
I calculate the working days before creating the view (which implies that my DATE_OFF table must be populated before ORACLE_CALENDAR), and I added a join to populate the table according to ID_MONTH.
It's working fine now.
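For what it's worth, an analytic (window) function would avoid the extra aggregate-and-join pass entirely. A minimal sketch, assuming ORACLE_CALENDAR already carries the IS_WORKING_DAY flag as the rewritten query above implies:
SELECT CAL.DATE_D AS ID_DATE,
       CAL.IS_WORKING_DAY,
       SUM(CAL.IS_WORKING_DAY) OVER (PARTITION BY CAL.YEAR_MONTH_NUM) AS NB_WORKING_DAY_MONTH
FROM ODSISIC.ORACLE_CALENDAR CAL;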
I have the following table log:
event_time | name |
-------------------------
2014-07-16 11:40 Bob
2014-07-16 10:00 John
2014-07-16 09:20 Bob
2014-07-16 08:20 Bob
2014-07-15 11:20 Bob
2014-07-15 10:20 John
2014-07-15 09:00 Bob
I would like to generate a report, where I can group data by number of entries per day and by entry day. So the resulting report for the table above would be something like this:
event_date | 0-2 | 3 | 4-99 |
-------------------------------
2014-07-16 1 1 0
2014-07-15 2 0 0
I used the following approaches to try to solve it:
Select with grouping in range
How to select the count of values grouped by ranges
If I find the answer before anybody posts it here, I will share it.
Added:
I would like to count the number of daily entries for each name. Then I check which column that count belongs to, and add 1 to that column.
I took it in two steps. Inner query gets the base counts. The outer query uses case statements to sum counts.
SQL Fiddle Example
select event_date,
sum(case when cnt between 0 and 2 then 1 else 0 end) as "0-2",
sum(case when cnt = 3 then 1 else 0 end) as "3",
sum(case when cnt between 4 and 99 then 1 else 0 end) as "4-99"
from
(select cast(event_time as date) as event_date,
name,
count(1) as cnt
from log
group by cast(event_time as date), name) baseCnt
group by event_date
order by event_date
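To see why this matches the expected grid, these are the per-day, per-name counts the inner query produces on the sample data:
event_date | name | cnt
------------------------
2014-07-15   Bob    2
2014-07-15   John   1
2014-07-16   Bob    3
2014-07-16   John   1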
Try it like this:
select da,
       sum(case when c < 3 then 1 else 0 end) as "0-2",
       sum(case when c = 3 then 1 else 0 end) as "3",
       sum(case when c > 3 then 1 else 0 end) as "4-99"
from (select cast(event_time as date) as da, count(*) as c
      from table1
      group by cast(event_time as date), name) as aa
group by da
First aggregate in two steps:
SELECT day, CASE
WHEN ct < 3 THEN '0-2'
WHEN ct > 3 THEN '4_or_more'
ELSE '3'
END AS cat
,count(*)::int AS val
FROM (
SELECT event_time::date AS day, count(*) AS ct
FROM tbl
GROUP BY 1
) sub
GROUP BY 1,2
ORDER BY 1,2;
Names should be completely irrelevant according to your description.
Then take the query and run it through crosstab():
SELECT *
FROM crosstab(
$$SELECT day, CASE
WHEN ct < 3 THEN '0-2'
WHEN ct > 3 THEN '4_or_more'
ELSE '3'
END AS cat
,count(*)::int AS val
FROM (
SELECT event_time::date AS day, count(*) AS ct
FROM tbl
GROUP BY 1
) sub
GROUP BY 1,2
ORDER BY 1,2$$
,$$VALUES ('0-2'::text), ('3'), ('4_or_more')$$
) AS f (day date, "0-2" int, "3" int, "4_or_more" int);
crosstab() is supplied by the additional module tablefunc. Details and instructions in this related answer:
PostgreSQL Crosstab Query
This is a variation on a PIVOT query (although PostgreSQL supports this via the crosstab(...) table functions). The existing answers cover the basic technique, I just prefer to construct queries without the use of CASE, where possible.
To get started, we need a couple of things. The first is essentially a Calendar Table, or entries from one (if you don't already have one, they're among the most useful dimension tables). If you don't have one, the entries for the specified dates can easily be generated:
WITH Calendar_Range AS (SELECT startOfDay, startOfDay + INTERVAL '1 DAY' AS nextDay
FROM GENERATE_SERIES(CAST('2014-07-01' AS DATE),
CAST('2014-08-01' AS DATE),
INTERVAL '1 DAY') AS dr(startOfDay))
SQL Fiddle Demo
This is primarily used to create the first step in the double aggregate, like so:
SELECT Calendar_Range.startOfDay, COUNT(Log.name)
FROM Calendar_Range
LEFT JOIN Log
ON Log.event_time >= Calendar_Range.startOfDay
AND Log.event_time < Calendar_Range.nextDay
GROUP BY Calendar_Range.startOfDay, Log.name
SQL Fiddle Demo
Remember that most aggregate columns with a nullable expression (here, COUNT(Log.name)) will ignore null values (not count them). This is also one of the few times it's acceptable to not include a grouped-by column in the SELECT list (normally it makes the results ambiguous). For the actual queries I'll put this into a subquery, but it would also work as a CTE.
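A quick way to see the null-ignoring behavior (hypothetical values):
SELECT COUNT(x) FROM (VALUES (1), (NULL), (3)) t(x); -- returns 2, not 3: the NULL is not counted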
We also need a way to construct our COUNT ranges. That's pretty easy too:
Count_Range AS (SELECT text, start, LEAD(start) OVER(ORDER BY start) as next
FROM (VALUES('0 - 2', 0),
('3', 3),
('4+', 4)) e(text, start))
SQL Fiddle Demo
We'll be querying these as "exclusive upper-bound" as well.
We now have all the pieces we need to do the query. We can actually use these virtual tables to make queries in both veins of the current answers.
First, the SUM(CASE...) style.
For this query, we'll take advantage of the null-ignoring qualities of aggregate functions again:
WITH Calendar_Range AS (SELECT startOfDay, startOfDay + INTERVAL '1 DAY' AS nextDay
FROM GENERATE_SERIES(CAST('2014-07-14' AS DATE),
CAST('2014-07-17' AS DATE),
INTERVAL '1 DAY') AS dr(startOfDay)),
Count_Range AS (SELECT text, start, LEAD(start) OVER(ORDER BY start) as next
FROM (VALUES('0 - 2', 0),
('3', 3),
('4+', 4)) e(text, start))
SELECT startOfDay,
COUNT(Zero_To_Two.text) AS Zero_To_Two,
COUNT(Three.text) AS Three,
COUNT(Four_And_Up.text) AS Four_And_Up
FROM (SELECT Calendar_Range.startOfDay, COUNT(Log.name) AS count
FROM Calendar_Range
LEFT JOIN Log
ON Log.event_time >= Calendar_Range.startOfDay
AND Log.event_time < Calendar_Range.nextDay
GROUP BY Calendar_Range.startOfDay, Log.name) Entry_Count
LEFT JOIN Count_Range Zero_To_Two
ON Zero_To_Two.text = '0 - 2'
AND Entry_Count.count >= Zero_To_Two.start
AND Entry_Count.count < Zero_To_Two.next
LEFT JOIN Count_Range Three
ON Three.text = '3'
AND Entry_Count.count >= Three.start
AND Entry_Count.count < Three.next
LEFT JOIN Count_Range Four_And_Up
ON Four_And_Up.text = '4+'
AND Entry_Count.count >= Four_And_Up.start
GROUP BY startOfDay
ORDER BY startOfDay
SQL Fiddle Example
The other option is of course the crosstab query, where the CASE was being used to segment the results. We'll use the Count_Range table to decode the values for us:
SELECT startOfDay, "0 - 2", "3", "4+"
FROM CROSSTAB($$WITH Calendar_Range AS (SELECT startOfDay, startOfDay + INTERVAL '1 DAY' AS nextDay
FROM GENERATE_SERIES(CAST('2014-07-14' AS DATE),
CAST('2014-07-17' AS DATE),
INTERVAL '1 DAY') AS dr(startOfDay)),
Count_Range AS (SELECT text, start, LEAD(start) OVER(ORDER BY start) as next
FROM (VALUES('0 - 2', 0),
('3', 3),
('4+', 4)) e(text, start))
SELECT Entry_Count.startOfDay, Count_Range.text, COUNT(*) AS count
FROM (SELECT Calendar_Range.startOfDay, COUNT(Log.name) AS count
FROM Calendar_Range
LEFT JOIN Log
ON Log.event_time >= Calendar_Range.startOfDay
AND Log.event_time < Calendar_Range.nextDay
GROUP BY Calendar_Range.startOfDay, Log.name) Entry_Count
JOIN Count_Range
ON Entry_Count.count >= Count_Range.start
AND (Entry_Count.count < Count_Range.next OR Count_Range.next IS NULL)
GROUP BY Entry_Count.startOfDay, Count_Range.text
ORDER BY Entry_Count.startOfDay, Count_Range.text$$,
$$VALUES('0 - 2'), ('3'), ('4+')$$) Data(startOfDay DATE, "0 - 2" INT, "3" INT, "4+" INT)
(I believe this is correct, but don't have a way to test it - Fiddle doesn't seem to have the crosstab functionality loaded. In particular, CTEs probably must go inside the function itself, but I'm not sure....)
Given the following table structure:
CrimeID | No_Of_Crimes | CrimeDate | Violence | Robbery | ASB
1 1 22/02/2011 Y Y N
2 3 18/02/2011 Y N N
3 3 23/02/2011 N N Y
4 2 16/02/2011 N N Y
5 1 17/02/2011 N N Y
Is there a chance of producing a result set that looks like this with T-SQL?
Category | This Week | Last Week
Violence 1 3
Robbery 1 0
ASB 3 1
where Last Week should be data with dates less than '20/02/2011', and This Week should be dates greater than or equal to '20/02/2011'.
I'm not looking for someone to code this for me (though a code snippet would be handy :) ), just some advice on whether this is possible and how I should go about it in SQL Server.
For info, I'm currently performing all this aggregation using LINQ on the web server, but that requires 19 MB to be sent over the network every time this request is made (the table has lots of categories and > 150,000 rows). I want the DB to do all the work and send only a small amount of data over the network.
Many thanks
EDIT: removed incorrect SQL for clarity.
EDIT: Forget the above; try the below:
select *
from (
select wk, crime, SUM(number) number
from (
select case when datepart(week, crimedate) = datepart(week, GETDATE()) then 'This Week'
when datepart(week, crimedate) = datepart(week, GETDATE())-1 then 'Last Week'
else 'OLDER' end as wk,
crimedate,
case when violence ='Y' then no_of_crimes else 0 end as violence,
case when robbery ='Y' then no_of_crimes else 0 end as robbery,
case when asb ='Y' then no_of_crimes else 0 end as asb
from crimetable) as src
UNPIVOT
(number for crime in
(violence, robbery, asb)) as pivtab
group by wk, crime
) z
PIVOT
( sum(number)
for wk in ([This Week], [Last Week])
) as pivtab
Late to the party, but a solution with an optimal query plan:
Sample data
create table crimes(
CrimeID int, No_Of_Crimes int, CrimeDate datetime,
Violence char(1), Robbery char(1), ASB char(1));
insert crimes
select 1,1,'20110221','Y','Y','N' union all
select 2,3,'20110218','Y','N','N' union all
select 3,3,'20110223','N','N','Y' union all
select 4,2,'20110216','N','N','Y' union all
select 5,1,'20110217','N','N','Y';
Make more data - about 10240 rows in total in addition to the 5 above, each 5 being 2 weeks prior to the previous 5. Also create an index that will help on crimedate.
insert crimes
select crimeId+number*5, no_of_Crimes, DATEADD(wk,-number*2,crimedate),
violence, robbery, asb
from crimes, master..spt_values
where type='P'
create index ix_crimedate on crimes(crimedate)
From here on, check output of each to see where this is going. Check also the execution plan.
Standard Unpivot to break the categories.
select CrimeID, No_Of_Crimes, CrimeDate, Category, YesNo
from crimes
unpivot (YesNo for Category in (Violence,Robbery,ASB)) upv
where YesNo='Y'
Notes:
The filter on YesNo is actually applied AFTER unpivoting. You can comment it out to see.
Unpivot again, but this time select data only for last week and this week.
select CrimeID, No_Of_Crimes, Category,
Week = sign(datediff(d,w.firstDayThisWeek,CrimeDate)+0.1)
from crimes
unpivot (YesNo for Category in (Violence,Robbery,ASB)) upv
cross join (select DATEADD(wk, DateDiff(wk, 0, getdate()), 0)) w(firstDayThisWeek)
where YesNo='Y'
and CrimeDate >= w.firstDayThisWeek -7
and CrimeDate < w.firstDayThisWeek +7
Notes:
(select DATEADD(wk, DateDiff(wk, 0, getdate()), 0)) w(firstDayThisWeek) makes a single-column table where the column contains the pivotal date for this query, being the first day of the current week (using DATEFIRST setting)
The filter on CrimeDate is actually applied on the BASE TABLE prior to unpivoting. Check plan
Sign() just breaks the data into 3 buckets (-1/0/+1). Adding +0.1 ensures that there are only two buckets -1 and +1.
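A quick sanity check of that bucketing (hypothetical day differences):
select sign(0 + 0.1)  as first_day_this_week, -- 1
       sign(6 + 0.1)  as last_day_this_week,  -- 1
       sign(-7 + 0.1) as first_day_last_week; -- -1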
The final query, pivoting by this/last week
select Category, isnull([1],0) ThisWeek, isnull([-1],0) LastWeek
from
(
select Category, No_Of_Crimes,
Week = sign(datediff(d,w.firstDayThisWeek,CrimeDate)+0.1)
from crimes
unpivot (YesNo for Category in (Violence,Robbery,ASB)) upv
cross join (select DATEADD(wk, DateDiff(wk, 0, getdate()), -1)) w(firstDayThisWeek)
where YesNo='Y'
and CrimeDate >= w.firstDayThisWeek -7
and CrimeDate < w.firstDayThisWeek +7
) p
pivot (sum(No_Of_Crimes) for Week in ([-1],[1])) pv
order by Category Desc
Output
Category ThisWeek LastWeek
--------- ----------- -----------
Violence 1 3
Robbery 1 0
ASB 3 3
I would try this:
declare @FirstDayOfThisWeek date = '20110220';

select cat.category,
       ThisWeek = sum(case when crt.CrimeDate >= @FirstDayOfThisWeek
                           then crt.No_of_crimes else 0 end),
       LastWeek = sum(case when crt.CrimeDate >= @FirstDayOfThisWeek
                           then 0 else crt.No_of_crimes end)
from crimetable crt
cross apply (values
    ('Violence', crt.Violence),
    ('Robbery', crt.Robbery),
    ('ASB', crt.ASB))
    cat (category, incategory)
where cat.incategory = 'Y'
  and crt.CrimeDate >= dateadd(day, -7, @FirstDayOfThisWeek)
group by cat.category;
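Worked through against the five sample rows, this returns the same figures as the pivot solution above (row order not guaranteed without an ORDER BY):
category   ThisWeek   LastWeek
--------   --------   --------
ASB        3          3
Robbery    1          0
Violence   1          3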