I am collecting min and max page-count values per interval (quarterly) from printers stored in an MS SQL database.
SELECT
p_printerName AS Printer
, DATEPART(q, pmh_pollDate) AS Quartal
, DATEPART(yy, pmh_pollDate) AS Year
, MIN(pmh_pageCount) AS first
, MAX(pmh_pageCount) AS last
, MIN(pmh_pageCountMono) AS first_Mono
, MAX(pmh_pageCountMono) AS last_Mono
, MIN(pmh_pageCountColor) AS first_Color
, MAX(pmh_pageCountColor) AS last_Color
FROM [group] INNER JOIN printerGroup ON g_groupID = pg_groupID
INNER JOIN printer ON pg_printerID = p_printerID
INNER JOIN printerMeterHistory ON p_printerID = pmh_printerID
WHERE
(p_printerName LIKE 'WBMP015')
GROUP BY
g_groupName
, p_printerName
, DATEPART(yy, pmh_pollDate)
, DATEPART(q, pmh_pollDate)
The table with printer data looks something like this (shortened):
p_printerName pmh_pollDate pmh_pageCount pmh_pageCountMono pmh_pageCountColor
printer1 01.10.2022 12:32 12273 7826 4447
printer1 02.10.2022 12:32 12274 7826 4448
printer1 08.10.2022 12:32 12275 7826 4449
printer1 15.10.2022 12:32 12276 7826 4450
printer1 31.10.2022 12:32 12278 7826 4452
In this example, the page count for printer1 would be 0 for mono and 5 for color (the last-minus-first calculation is done in Power Query).
If the values in the table rows increment like this, the result of the calculation is correct.
Printer Quartal Year first last first_Mono last_Mono first_Color last_Color
printer1 4 2022 12273 12278 7826 7826 4447 4452
Sometimes, however, a record shows an incorrect value (-1 instead of the real count):
printer1 20.10.2022 12:32 12276 -1 4450
In this case the mono page count for printer1 would be 7827, which is wrong.
Printer Quartal Year first last first_Mono last_Mono first_Color last_Color
printer1 4 2022 12273 12278 -1 7826 4447 4452
The -1 value is related to how the page count is retrieved from the printer, and unfortunately this cannot be fixed.
I need help modifying the query so that the first (MIN) and last (MAX) values (mono, color, all) at the beginning and end of the quarterly interval ignore records where the value is -1.
Use a HAVING clause:
...
GROUP BY
g_groupName
, p_printerName
, DATEPART(yy, pmh_pollDate)
, DATEPART(q, pmh_pollDate)
HAVING
(MIN(pmh_pageCountMono) <> -1)
AND (MAX(pmh_pageCountColor) <> -1)
To get the next-smallest MIN value (the minimum ignoring -1), you can do this:
1. Run the query into a temporary table (#tmp) without any grouping, just to get all the rows.
2. Delete from #tmp the records with -1 in the columns of interest.
3. Rerun the query with GROUP BY over the temporary table #tmp.
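The steps above can be sketched in T-SQL like this (the #tmp layout is an assumption; table and column names are taken from the question, and the group join is left out for brevity):

```sql
-- 1. Stage all raw rows, no grouping yet.
SELECT p_printerName, pmh_pollDate,
       pmh_pageCount, pmh_pageCountMono, pmh_pageCountColor
INTO #tmp
FROM printer
INNER JOIN printerMeterHistory ON p_printerID = pmh_printerID
WHERE p_printerName LIKE 'WBMP015';

-- 2. Remove the rows carrying the -1 sentinel in any counter of interest.
DELETE FROM #tmp
WHERE pmh_pageCount      = -1
   OR pmh_pageCountMono  = -1
   OR pmh_pageCountColor = -1;

-- 3. Rerun the quarterly aggregation over the cleaned rows.
SELECT p_printerName               AS Printer,
       DATEPART(q,  pmh_pollDate) AS Quartal,
       DATEPART(yy, pmh_pollDate) AS [Year],
       MIN(pmh_pageCount)         AS first,
       MAX(pmh_pageCount)         AS last,
       MIN(pmh_pageCountMono)     AS first_Mono,
       MAX(pmh_pageCountMono)     AS last_Mono,
       MIN(pmh_pageCountColor)    AS first_Color,
       MAX(pmh_pageCountColor)    AS last_Color
FROM #tmp
GROUP BY p_printerName, DATEPART(yy, pmh_pollDate), DATEPART(q, pmh_pollDate);
```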
The query below should return a row for every Reading_Type, plus either the saved Reading value for that Reading_Type and date, or 0 if no Reading has been saved.
SELECT
t.*
, ISNULL(r.Reading, 0) AS Reading
FROM
Reading_Type t
LEFT JOIN
Reading r ON t.Reading_Type_ID = r.Reading_Type_ID
WHERE
r.Reading_Date = @date
OR r.Reading_Date IS NULL
It does work if there are no Readings saved for any date.
It does work if the only Readings saved are for the selected date.
It does not work if a Reading_Type has a saved Reading for date X, no saved Reading for date Y, and the search is for date Y.
Reading_Type Table:
Reading_Type_ID Reading_Type
-----------------------------
1 Red
2 Blue
3 Green
Reading table (table is empty):
Reading_ID Reading_Type_ID Reading Reading_Date
-----------------------------------------------------
Query with @date = April 15, 2016 returns:
Reading_Type_ID Reading_Type Reading
----------------------------------------
1 Red 0
2 Blue 0
3 Green 0
Reading table (table has data for April 15):
Reading_ID Reading_Type_ID Reading Reading_Date
-----------------------------------------------------
1 1 5 April 15, 2016
2 3 8 April 15, 2016
Query with @date = April 15, 2016 returns:
Reading_Type_ID Reading_Type Reading
----------------------------------------
1 Red 5
2 Blue 0
3 Green 8
Query with @date = April 7, 2016 returns:
Reading_Type_ID Reading_Type Reading
----------------------------------------
1 Red 0
3 Green 0
The third query should still return a row for Reading_Type = Blue, with 0 for Reading. How do I fix my query?
Your WHERE criteria are causing your filter problem (I've done this myself only a million times or so). Try this instead:
SELECT
t.*
, ISNULL(r.Reading, 0) AS Reading
FROM
Reading_Type t
LEFT JOIN
Reading r ON t.Reading_Type_ID = r.Reading_Type_ID
AND r.Reading_Date = @date
Leave out the WHERE clause in this instance (unless you want to further filter your data).
Here's some information which helps detail this SQL feature: Specifying Joins in FROM or WHERE clauses
If r.Reading_Date can be NULL and you want to include those rows, then:
SELECT t.*, ISNULL(r.Reading, 0) AS Reading
FROM Reading_Type t
LEFT JOIN Reading r
ON r.Reading_Type_ID = t.Reading_Type_ID
AND isnull(r.Reading_Date, @date) = @date
The query is doing what you're asking it to. It performs a left join (which returns all records from t, plus records from r if/when they exist)... then it applies your WHERE condition. In the case that r.Reading_Date is neither NULL nor @date, those records will be excluded from the result set.
I think what you want, if I've understood correctly, is a left join on a subselect... so something more like this:
SELECT
t.*, ISNULL(r.Reading, 0) AS Reading
FROM
Reading_Type t
LEFT JOIN (
SELECT Reading_Type_ID, Reading
FROM Reading
WHERE Reading_Date = @date
OR Reading_Date IS NULL
) r
ON t.Reading_Type_ID = r.Reading_Type_ID
I have a table called Register which contains the following fields:
Date, AMPM, Mark.
A day can have up to two records (AM and PM). It's fairly easy to select and display all the records in a list ordered by date ascending.
What I would like to do is display the data as a grid. Something along the lines of.
| Mon | Tues| Wed| Thurs| Fri | Sat
9/8/2014 | /\ | /P | /\ | L | /\ | /
Each row would have a week-beginning date and group that week's days together. I'm not even sure SQL is the best option for this, but the GROUP BY commands seem to suggest it may be able to do this.
The Data structure is as follows.
Date, AMPM, Mark
9/8/2014, AM, /
9/8/2014, PM, \
9/9/2014, AM, /
9/9/2014, PM, P
9/10/2014, AM, /
9/10/2014, PM, \
9/11/2014, PM, L
....
The mark field can contain a number of letters. P for instance means they are participating in a sporting activity. L means they were late.
Does anyone have any resources that could point me in the right direction? I'm not even sure what this type of report is called, or whether I should be using SQL or JavaScript to group this data into a presentable format. The / represents AM and the \ represents PM.
The following query would get you the desired result. If you need Sunday also, you'll have to add a small condition to test for when days_after_last_Monday = 6 in the CASE statement.
select
last_Monday Week_Starting,
max(
case
when days_after_last_Monday = 0 then mark
else null
end) Mon, --if the # of days between previous Monday and reg_date is zero, then get the according mark
max(
case
when days_after_last_Monday = 1 then mark
else null
end) Tues,
max(
case
when days_after_last_Monday = 2 then mark
else null
end) Wed,
max(
case
when days_after_last_Monday = 3 then mark
else null
end) Thurs,
max(
case
when days_after_last_Monday = 4 then mark
else null
end) Fri,
max(
case
when days_after_last_Monday = 5 then mark
else null
end) Sat
from
(
select
reg_date,
last_Monday,
julianday(reg_date) - julianday(last_Monday) as days_after_last_monday, --determine the number of days between previous Monday and reg_date
mark
from
(
select
reg_date,
case
when cast (strftime('%w', reg_date) as integer) = 1 then date(reg_date, 'weekday 1')
else date(reg_date, 'weekday 1', '-7 days')
end last_monday, --determine the date of previous Monday
mark
from
(
select
reg_date,
group_concat(mark, '') mark --concatenate am and pm marks for each reg_date
from
(
SELECT
reg_date,
ampm,
mark
FROM register
order by reg_date, ampm --order by ampm so that am rows are selected before pm
)
group by reg_date
)
)
)
group by last_Monday
order by last_Monday;
SQL Fiddle demo
I have the below query, which groups by week (Sun-Sat) and does the necessary calculation. The output obviously gives the week number of the year. As a first step I can store this data in a table; then, when I want to use this data, I want to convert the week number of the year to the actual date range. Below is the query.
SELECT
DATEPART(WW,aa.Time) ddtt ,bb.Nd ,'Percentages' Report
,case when SUM(ZZ) = 0 then 0 else convert(decimal(18,3),SUM((CCC+PSY))*100/SUM(ZZ)) end Cond1
,case when SUM(ZZ) = 0 then 0 else convert(decimal(18,3),SUM(USN)*100/SUM(ZZ)) end Cond2
FROM db2000.dbo.Table aa join db2000.dbo.List bb on aa.Device = bb.DeviceID
where aa.Time between '2013/12/15' AND '2014/1/15 23:00' and Nd like '_s1'
group by bb.Nd ,DATEPART(WW,aa.Time)
order by ddtt
The output of this query is
ddtt Nd Report Cond1 Cond2
1 21S Percentages 94.787 63.998
1 41S Percentages 94.592 63.473
1 61S Percentages 94.356 65.845
2 21S Percentages 93.802 64.594
2 41S Percentages 94.141 65.486
2 61S Percentages 93.849 66.144
3 21S Percentages 94.572 65.940
3 41S Percentages 95.123 67.261
3 61S Percentages 95.044 67.211
51 21S Percentages 94.042 65.245
51 41S Percentages 94.857 65.847
51 61S Percentages 94.036 67.019
52 21S Percentages 94.592 65.469
52 41S Percentages 95.071 66.159
52 61S Percentages 93.932 66.989
53 21S Percentages 94.786 65.391
53 41S Percentages 95.266 66.883
53 61S Percentages 94.526 67.504
I want the column ddtt to show the actual dates instead, e.g. 05/01/2014 - 11/01/2014. A separate query to accomplish this would be OK too.
To get the dates from Sunday to Saturday given the week number, you can use:
SELECT dateadd(dd, -datepart(wk, '2014-01-08') - 1
, dateadd(ww, @weeknum, '2014-01-01'))
, dateadd(dd, -datepart(wk, '2014-01-08') - 2
, dateadd(ww, @weeknum + 1, '2014-01-01'))
where @weeknum is the week number. For the day-of-week offset I used datepart(wk, '2014-01-08'), because using the first of January would always return 1 regardless of the real day of the week.
Your query will become:
SELECT DATEADD(dd, -DATEPART(wk, '2014-01-08') - 1
, DATEADD(ww, DATEPART(WW,aa.Time), '2014-01-01'))
, DATEADD(dd, -DATEPART(wk, '2014-01-08') - 2
, DATEADD(ww, DATEPART(WW,aa.Time) + 1, '2014-01-01'))
, bb.Nd
, 'Percentages' Report
, CASE WHEN SUM(ZZ) = 0 THEN 0
ELSE convert(decimal(18,3),SUM((CCC+PSY))*100/SUM(ZZ))
END Cond1
, CASE WHEN SUM(ZZ) = 0 THEN 0
ELSE convert(decimal(18,3),SUM(USN)*100/SUM(ZZ))
END Cond2
FROM db2000.dbo.Table aa
JOIN db2000.dbo.List bb ON aa.Device = bb.DeviceID
WHERE aa.Time BETWEEN '2013/12/15' AND '2014/1/15 23:00' AND Nd LIKE '_s1'
GROUP BY bb.Nd ,DATEPART(WW,aa.Time)
ORDER BY ddtt
or something similar if you want to join the two dates in a string.
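A sketch of joining the two dates into one string (style 103 = dd/mm/yyyy is an assumption about the desired format, matching the "05/01/2014 - 11/01/2014" example):

```sql
-- Hypothetical: format the week range as a single text column.
DECLARE @weeknum int = 2;

SELECT CONVERT(varchar(10),
           DATEADD(dd, -DATEPART(wk, '2014-01-08') - 1,
               DATEADD(ww, @weeknum, '2014-01-01')), 103)
     + ' - '
     + CONVERT(varchar(10),
           DATEADD(dd, -DATEPART(wk, '2014-01-08') - 2,
               DATEADD(ww, @weeknum + 1, '2014-01-01')), 103) AS WeekRange;
```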
For a development aid project I am helping a small town in Nicaragua improve their water-network administration.
There are about 150 households, and every month a person checks the meter and charges the household according to the consumed water (the reading from this month minus the reading from last month). Today everything is done on paper, and I would like to digitize the administration to avoid calculation errors.
I have an MS Access Table in mind - e.g.:
*HousholdID* *Date* *Meter*
0 1/1/2013 100
1 1/1/2013 130
0 1/2/2013 120
1 1/2/2013 140
...
From this data I would like to create a query that calculates the consumed water (the meter-difference of one household between two months)
*HouseholdID* *Date* *Consumption*
0 1/2/2013 20
1 1/2/2013 10
...
Please, how would I approach this problem?
This query returns every date together with its previous date, even if there are missing months:
SELECT TabPrev.*, Tab.Meter as PrevMeter, TabPrev.Meter-Tab.Meter as Diff
FROM (
SELECT
Tab.HousholdID,
Tab.Data,
Max(Tab_1.Data) AS PrevData,
Tab.Meter
FROM
Tab INNER JOIN Tab AS Tab_1 ON Tab.HousholdID = Tab_1.HousholdID
AND Tab.Data > Tab_1.Data
GROUP BY Tab.HousholdID, Tab.Data, Tab.Meter) As TabPrev
INNER JOIN Tab
ON TabPrev.HousholdID = Tab.HousholdID
AND TabPrev.PrevData=Tab.Data
Here's the result:
HousholdID Data PrevData Meter PrevMeter Diff
----------------------------------------------------------
0 01/02/2013 01/01/2013 120 100 20
1 01/02/2013 01/01/2013 140 130 10
The query above will return every delta, for every household, for every month (or for every interval). If you are just interested in the last delta, you could use this query:
SELECT
MaxTab.*,
TabCurr.Meter as CurrMeter,
TabPrev.Meter as PrevMeter,
TabCurr.Meter-TabPrev.Meter as Diff
FROM ((
SELECT
Tab.HousholdID,
Max(Tab.Data) AS CurrData,
Max(Tab_1.Data) AS PrevData
FROM
Tab INNER JOIN Tab AS Tab_1
ON Tab.HousholdID = Tab_1.HousholdID
AND Tab.Data > Tab_1.Data
GROUP BY Tab.HousholdID) As MaxTab
INNER JOIN Tab TabPrev
ON TabPrev.HousholdID = MaxTab.HousholdID
AND TabPrev.Data=MaxTab.PrevData)
INNER JOIN Tab TabCurr
ON TabCurr.HousholdID = MaxTab.HousholdID
AND TabCurr.Data=MaxTab.CurrData
and (depending on what you are after) you could filter only the current month:
WHERE
DateSerial(Year(CurrData), Month(CurrData), 1)=
DateSerial(Year(DATE()), Month(DATE()), 1)
This way, if you miss a check for a particular household, it won't show.
Or you might be interested in showing the last month present in the table (which can be different from the current month):
WHERE
DateSerial(Year(CurrData), Month(CurrData), 1)=
(SELECT MAX(DateSerial(Year(Data), Month(Data), 1))
FROM Tab)
(Here I am taking into consideration the fact that checks might be on different days.)
I think the best approach is to use a correlated subquery to get the previous date and join back to the original table. This ensures that you get the previous record, even if there is more or less than a 1 month lag.
So the right query looks like:
select t.*, tprev.date, tprev.meter
from (select t.*,
             -- correlate on household so the previous date comes
             -- from the same household's readings
             (select top 1 date
              from t t2
              where t2.housholdid = t.housholdid
                and t2.date < t.date
              order by date desc) as prevdate
      from t
     ) t join
     t tprev
     on tprev.housholdid = t.housholdid
     and tprev.date = t.prevdate
In an environment such as the one you describe, it is very important not to make assumptions about the frequency of reading the meter. Although they may be read on average once per month, there will always be exceptions.
Testing with the following data:
HousholdID Date Meter
0 01/12/2012 100
1 01/12/2012 130
0 01/01/2013 120
1 01/01/2013 140
0 01/02/2013 120
1 01/02/2013 140
The following query:
SELECT a.housholdid,
a.date,
b.date,
a.meter,
b.meter,
a.meter - b.meter AS Consumption
FROM (SELECT *
FROM water
WHERE Month([date]) = Month(Date())
AND Year([date])=year(Date())) a
LEFT JOIN (SELECT *
FROM water
WHERE DateSerial(Year([date]),Month([date]),Day([date]))
=DateSerial(Year(Date()),Month(Date())-1,Day([date])) ) b
ON a.housholdid = b.housholdid
The above query selects the records for this month (Month([date]) = Month(Date())) and compares them to the records for last month (Month([date]) = Month(Date()) - 1).
Please do not use Date as a field name.
Returns the following result.
housholdid a.date b.date a.meter b.meter Consumption
0 01/02/2013 01/01/2013 120 100 20
1 01/02/2013 01/01/2013 140 130 10
Try
select t.householdID
, max(s.theDate) as billingMonth
, max(s.meter)-max(t.meter) as waterUsed
from myTbl t join (
select householdID, max(theDate) as theDate, max(meter) as meter
from myTbl
group by householdID ) s
on t.householdID = s.householdID and t.theDate <> s.theDate
group by t.householdID
This works in SQL Server; I'm not sure about Access.
You can use the LAG() function in certain SQL dialects. I found this to be much faster and easier to read than joins.
Source: http://blog.jooq.org/2015/05/12/use-this-neat-window-function-trick-to-calculate-time-differences-in-a-time-series/
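A minimal LAG() sketch for the meter-difference problem (SQL Server 2012+; the table name Tab and the bracketed [Date] column are assumptions based on the question, and Access itself does not support LAG()):

```sql
-- Consumption = current reading minus the previous reading for the
-- same household, ordered by reading date; NULL for the first reading.
SELECT HousholdID,
       [Date],
       Meter,
       Meter - LAG(Meter) OVER (PARTITION BY HousholdID
                                ORDER BY [Date]) AS Consumption
FROM Tab;
```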
Given the following table structure:
CrimeID | No_Of_Crimes | CrimeDate | Violence | Robbery | ASB
1 1 22/02/2011 Y Y N
2 3 18/02/2011 Y N N
3 3 23/02/2011 N N Y
4 2 16/02/2011 N N Y
5 1 17/02/2011 N N Y
Is there a chance of producing a result set that looks like this with T-SQL?
Category | This Week | Last Week
Violence 1 3
Robbery 1 0
ASB 3 1
where "last week" should be dates less than '20/02/2011' and "this week" dates greater than or equal to '20/02/2011'.
I'm not looking for someone to code this out for me (though a code snippet would be handy :) ), just some advice on whether this is possible and how I should go about it with SQL Server.
For info, I'm currently performing all this aggregation using LINQ on the web server, but that requires 19 MB being sent over the network every time this request is made. (The table has lots of categories and > 150,000 rows.) I want the DB to do all the work and send only a small amount of data over the network.
Many thanks
EDIT: removed incorrect SQL for clarity.
EDIT: forget the above; try the below:
select *
from (
select wk, crime, SUM(number) number
from (
select case when datepart(week, crimedate) = datepart(week, GETDATE()) then 'This Week'
when datepart(week, crimedate) = datepart(week, GETDATE())-1 then 'Last Week'
else 'OLDER' end as wk,
crimedate,
case when violence ='Y' then no_of_crimes else 0 end as violence,
case when robbery ='Y' then no_of_crimes else 0 end as robbery,
case when asb ='Y' then no_of_crimes else 0 end as asb
from crimetable) as src
UNPIVOT
(number for crime in
(violence, robbery, asb)) as pivtab
group by wk, crime
) z
PIVOT
( sum(number)
for wk in ([This Week], [Last Week])
) as pivtab
Late to the party, but a solution with an optimal query plan:
Sample data
create table crimes(
CrimeID int, No_Of_Crimes int, CrimeDate datetime,
Violence char(1), Robbery char(1), ASB char(1));
insert crimes
select 1,1,'20110221','Y','Y','N' union all
select 2,3,'20110218','Y','N','N' union all
select 3,3,'20110223','N','N','Y' union all
select 4,2,'20110216','N','N','Y' union all
select 5,1,'20110217','N','N','Y';
Make more data - about 10240 rows in total in addition to the 5 above, each 5 being 2 weeks prior to the previous 5. Also create an index that will help on crimedate.
insert crimes
select crimeId+number*5, no_of_Crimes, DATEADD(wk,-number*2,crimedate),
violence, robbery, asb
from crimes, master..spt_values
where type='P'
create index ix_crimedate on crimes(crimedate)
From here on, check output of each to see where this is going. Check also the execution plan.
Standard Unpivot to break the categories.
select CrimeID, No_Of_Crimes, CrimeDate, Category, YesNo
from crimes
unpivot (YesNo for Category in (Violence,Robbery,ASB)) upv
where YesNo='Y'
Notes:
The filter on YesNo is actually applied AFTER unpivoting. You can comment it out to see.
Unpivot again, but this time select data only for last week and this week.
select CrimeID, No_Of_Crimes, Category,
Week = sign(datediff(d,CrimeDate,w.firstDayThisWeek)+0.1)
from crimes
unpivot (YesNo for Category in (Violence,Robbery,ASB)) upv
cross join (select DATEADD(wk, DateDiff(wk, 0, getdate()), 0)) w(firstDayThisWeek)
where YesNo='Y'
and CrimeDate >= w.firstDayThisWeek -7
and CrimeDate < w.firstDayThisWeek +7
Notes:
(select DATEADD(wk, DateDiff(wk, 0, getdate()), 0)) w(firstDayThisWeek) makes a single-column table where the column contains the pivotal date for this query, being the first day of the current week (using DATEFIRST setting)
The filter on CrimeDate is actually applied on the BASE TABLE prior to unpivoting. Check plan
Sign() just breaks the data into 3 buckets (-1/0/+1). Adding +0.1 ensures that there are only two buckets -1 and +1.
The final query, pivoting by this/last week
select Category, isnull([1],0) ThisWeek, isnull([-1],0) LastWeek
from
(
select Category, No_Of_Crimes,
Week = sign(datediff(d,w.firstDayThisWeek,CrimeDate)+0.1)
from crimes
unpivot (YesNo for Category in (Violence,Robbery,ASB)) upv
cross join (select DATEADD(wk, DateDiff(wk, 0, getdate()), -1)) w(firstDayThisWeek)
where YesNo='Y'
and CrimeDate >= w.firstDayThisWeek -7
and CrimeDate < w.firstDayThisWeek +7
) p
pivot (sum(No_Of_Crimes) for Week in ([-1],[1])) pv
order by Category Desc
Output
Category ThisWeek LastWeek
--------- ----------- -----------
Violence 1 3
Robbery 1 0
ASB 3 3
I would try this:
declare @FirstDayOfThisWeek date = '20110220';
select cat.category,
ThisWeek = sum(case when crt.CrimeDate >= @FirstDayOfThisWeek
then crt.No_of_crimes else 0 end),
LastWeek = sum(case when crt.CrimeDate >= @FirstDayOfThisWeek
then 0 else crt.No_of_crimes end)
from crimetable crt
cross apply (values
('Violence', crt.Violence),
('Robbery', crt.Robbery),
('ASB', crt.ASB))
cat (category, incategory)
where cat.incategory = 'Y'
and crt.CrimeDate >= @FirstDayOfThisWeek-7
group by cat.category;