I'm trying out MS Access SQL queries. My data is structured like this:
Rent Table
The idea is that I want to split the table to collect records from the latest start back up to 12 months, using [Year_Start] and [Month_Start] as the basis. So my rough code would be:
SELECT [Renter_Name], [Amount]
FROM RentTable1
WHERE [Year_Start] = Max([Year_Start]) AND [Month_Start] = Max([Month_Start])
ORDER BY [Renter_Name];
Subsequently, other month tables will conceptually be coded like this:
SELECT [Renter_Name], [Amount]
FROM RentTable1
WHERE [Year_Start] = Max([Year_Start]) AND [Month_Start] = Max([Month_Start]) - 1
ORDER BY [Renter_Name];
And then subsequent months will be adjusted using the minus sign.
SELECT [Renter_Name], [Amount]
FROM RentTable1
WHERE [Year_Start] = Max([Year_Start]) AND [Month_Start] = Max([Month_Start]) - 2
ORDER BY [Renter_Name];
I'm also considering the case where [Month_Start] = Max([Month_Start]) - x would be zero (0) or a negative number, so a theoretical version would be:
SELECT [Renter_Name], [Amount]
FROM RentTable1
IF Max([Month_Start]) - X <= 0 THEN
WHERE [Year_Start] = Max([Year_Start]) - 1 AND [Month_Start] = Max([Month_Start]) - X
ELSE
WHERE [Year_Start] = Max([Year_Start]) AND [Month_Start] = Max([Month_Start]) - X
END IF
ORDER BY [Renter_Name];
*** X being the months backward from the latest start month and year.
Clearly, you can see my SQL coding skills are really weak. Pardon me, as I'm really a beginner, which is why there are touches of other standard programming constructs like If-Then-Else statements.
I was hoping someone could propose corrections to the above code.
Thanks! I appreciate everyone who stumbles upon this question.
EDIT 1:
Just to clarify, this is the expected thought:
In the example the latest period is 2016 and 4. So it should pick it up for TABLE1.
A subsequent query is to be made to minus one month from the latest period so the result should be 2016 and 3. This goes on until 2016 and 1.
When 4 - 4 happens, which equals 0, the query should be able to skip this illogical step, go to (2016 - 1), and then get the max month for that year, so the result is 2015 and 12.
First of all, here is my advice: when dealing with dates, use the DATE data type. You can specify the StartDate as 2016-04-01 and the EndDate as 2017-04-30. Even better: specify the EndDate as 2017-05-01 and always remember that you need to use >= for the StartDate and < for the EndDate.
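For example, to cover all of April 2016 with such a half-open range (a minimal sketch, assuming a hypothetical single Date/Time column [Start_Date] instead of the separate year/month columns):
SELECT [Renter_Name], [Amount]
FROM RentTable1
WHERE [Start_Date] >= DateSerial(2016, 4, 1)   -- inclusive: first day of the month
  AND [Start_Date] <  DateSerial(2016, 5, 1)   -- exclusive: first day of the next month
ORDER BY [Renter_Name];
This way you never have to think about whether a month has 28, 30 or 31 days.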
Now, to your problem. You need to combine the columns into a proper date using the DateSerial() function, like this:
SELECT [Renter_Name], [Amount]
FROM RentTable1
WHERE DateSerial([Year_Start], [Month_Start], 1) =
      (SELECT Max(DateSerial([Year_Start], [Month_Start], 1)) AS dt FROM RentTable1)
ORDER BY [Renter_Name];
To get the details for the previous month, use the DateAdd() function. Here is the example:
SELECT [Renter_Name], [Amount]
FROM RentTable1
WHERE DateSerial([Year_Start], [Month_Start], 1) =
      (SELECT DateAdd('m', -1, Max(DateSerial([Year_Start], [Month_Start], 1))) AS dt FROM RentTable1)
ORDER BY [Renter_Name];
And here is the universal query to get the details for X months ago:
SELECT [Renter_Name], [Amount]
FROM RentTable1
WHERE DateSerial([Year_Start], [Month_Start], 1) =
      (SELECT DateAdd('m', [X] * (-1), Max(DateSerial([Year_Start], [Month_Start], 1))) AS dt FROM RentTable1)
ORDER BY [Renter_Name];
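In Access, an unresolved identifier like [X] is treated as a query parameter and you will be prompted for it when the query runs. If you prefer to declare it explicitly, a PARAMETERS clause is one option (a sketch, assuming the same table and columns as above):
PARAMETERS [X] Long;
SELECT [Renter_Name], [Amount]
FROM RentTable1
WHERE DateSerial([Year_Start], [Month_Start], 1) =
      (SELECT DateAdd('m', [X] * (-1), Max(DateSerial([Year_Start], [Month_Start], 1))) FROM RentTable1)
ORDER BY [Renter_Name];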
Related
So I cannot use a temp table to accomplish this. Basically I need to create an expiration date based on the next effective date available.
So for one procedure code I have 10 fee schedules with only effective dates. I tried writing this, but it's not working because it gives some records the same expiration date. I know an ORDER BY is needed, but I placed it in various places and it still gives me the same issue. Any help is greatly appreciated.
SELECT a.*,
( CASE
WHEN (SELECT TOP 1 b.effectivedate - 1
FROM ietl_profileprocedure b
WHERE b.effectivedate > a.effectivedate
AND a.profilesid = b.profilesid) IS NULL THEN (SELECT
Dateadd(mm, Datediff(mm, 0, Getdate()) + 1, -1))
ELSE (SELECT TOP 1 b.effectivedate - 1
FROM ietl_profileprocedure b
WHERE b.effectivedate > a.effectivedate
AND a.profilesid = b.profilesid)
END ) AS 'ExpDate'
FROM ietl_profileprocedure a
WHERE profilesid = '4197'
AND procedurecode = '90685'
Thank you Juan, I looked up LEAD and LAG and was able to come up with the code below to suit my needs! Now I can take the date something was posted and more easily find the amount that pertained to the ProcedureCode at that time.
SELECT s.ProfileSID, s.ProcedureCode, s.Amount,
       cast(s.EffectiveDate as date) as EffectiveDate,
       cast(isnull(dateadd(day, -1,
                LEAD(EffectiveDate) OVER (ORDER BY ProfileSID, ProcedureCode, EffectiveDate)),
            getdate()) as date) as ExpDate
FROM iETL_ProfileProcedure s
WHERE ProfileSID IN ('4197') AND ProcedureCode = '90685'
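Note that the LEAD here has no PARTITION BY, so it relies on the WHERE clause restricting the rows to a single ProfileSID and ProcedureCode. If the filter is later widened, adding a PARTITION BY keeps each code's expiration from reading the next code's effective date; a sketch assuming the same table and columns:
SELECT s.ProfileSID,
       s.ProcedureCode,
       s.Amount,
       cast(s.EffectiveDate as date) as EffectiveDate,
       -- look up the "next" effective date only within the same profile and procedure code
       cast(isnull(dateadd(day, -1,
                LEAD(s.EffectiveDate) OVER (PARTITION BY s.ProfileSID, s.ProcedureCode
                                            ORDER BY s.EffectiveDate)),
            getdate()) as date) as ExpDate
FROM iETL_ProfileProcedure s;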
I'm creating a report using SQL to pull logged labor hours from our labor database for the previous month. I have it working great, but need to add logic to prevent it from breaking when it runs in January. I've tried adding If/Then statements and CASE logic, but I don't know if I'm just not doing it right, or if our system can't process it. Here's the snippet that pulls the date range:
SELECT
...
FROM
...
WHERE
...
AND
YEAR(ENTERDATE) = YEAR(current date) AND MONTH(ENTERDATE) = (MONTH(current date)-1)
Just use AND as a barrier on each branch, like this. In January, the second clause will be used instead of the first one:
SELECT
...
FROM
...
WHERE
...
AND
(
(
(MONTH(current date) > 1) AND
(YEAR(ENTERDATE) = YEAR(current date) AND MONTH(ENTERDATE) = (MONTH(current date)-1))
-- this one gets used from Feb-Dec
)
OR
(
(MONTH(current date) = 1) AND
(YEAR(ENTERDATE) = YEAR(current date) - 1 AND MONTH(ENTERDATE) = 12)
-- alternatively, in Jan only this one gets used
)
)
If your report is always going to be for the previous month, then I think the simplest idea is to declare the year and month of the previous month and then reference those in the Where clause. For example:
DECLARE @LastMo_Month INT = MONTH(DATEADD(MONTH, -1, GETDATE()));
DECLARE @LastMo_Year INT = YEAR(DATEADD(MONTH, -1, GETDATE()));
Select ...
Where MONTH(EnterDate) = @LastMo_Month
and YEAR(EnterDate) = @LastMo_Year
You could even take it a step further and allow the report to be created for any number of months ago:
DECLARE @Delay INT = -1;
DECLARE @LastMo_Month INT = MONTH(DATEADD(MONTH, @Delay, GETDATE()));
DECLARE @LastMo_Year INT = YEAR(DATEADD(MONTH, @Delay, GETDATE()));
Select ...
Where MONTH(EnterDate) = @LastMo_Month
and YEAR(EnterDate) = @LastMo_Year
Hope this helps.
PS - This is my first answer on StackOverflow, so sorry if the formatting isn't right!
if(month(getdate()) = 1)
begin
your jan logic
end
else
begin
your logic
end
The above answer with the CASE is OK, but running a CASE on a huge result set would be pretty costly:
WHERE
...
AND
DATEPART(yy, ENTERDATE) = DATEPART(yy, DATEADD(m, -1, GETDATE()))
AND DATEPART(m, ENTERDATE) = DATEPART(m, DATEADD(m, -1, GETDATE()))
Which Dialect of SQL are you speaking?
As opposed to doing it all with CASE statements, just use the built-in date/time functions to subtract a month from the current date, which should handle crossing year boundaries.
Transact-SQL:
WHERE
YEAR(ENTERDATE) = year(dateadd(MONTH,-1, CURRENT_TIMESTAMP))
AND MONTH(ENTERDATE) = month(dateadd(MONTH,-1, CURRENT_TIMESTAMP))
MySQL:
WHERE
YEAR(ENTERDATE) = YEAR(date_sub(curdate(),INTERVAL 1 MONTH))
AND MONTH(ENTERDATE) = MONTH(date_sub(curdate(),INTERVAL 1 MONTH) )
Try adding the previous month and year to your SELECT statement:
SELECT
...
,CASE MONTH(current date)
WHEN 1 THEN 12
ELSE MONTH(current date)-1
END AS previous_month
,CASE MONTH(current date)
WHEN 1 THEN YEAR(current date)-1
ELSE YEAR(current date)
END AS previous_year
FROM
...
WHERE
...
AND YEAR(ENTERDATE) = previous_year
AND MONTH(ENTERDATE) = previous_month
This sets the values once and avoids creating two entirely separate clauses or using OR. Note that many dialects do not allow the WHERE clause to reference select-list aliases directly, in which case you can wrap the computation in a derived table, as sketched below.
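A minimal sketch of that wrapping, assuming a dialect that supports derived tables (LaborTable is a hypothetical name standing in for the real source table):
SELECT *
FROM (
    SELECT
        t.*,
        CASE MONTH(current date)
            WHEN 1 THEN 12
            ELSE MONTH(current date) - 1
        END AS previous_month,
        CASE MONTH(current date)
            WHEN 1 THEN YEAR(current date) - 1
            ELSE YEAR(current date)
        END AS previous_year
    FROM LaborTable t
) AS x
WHERE YEAR(x.ENTERDATE) = x.previous_year
  AND MONTH(x.ENTERDATE) = x.previous_month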
I want to count the number of 2 or more consecutive week periods that have negative values within a range of weeks.
Example:
Week | Value
201301 | 10
201302 | -5 <--| both weeks have negative values and are consecutive
201303 | -6 <--|
Week | Value
201301 | 10
201302 | -5
201303 | 7
201304 | -2 <-- negative but not consecutive to the last negative value in 201302
Week | Value
201301 | 10
201302 | -5
201303 | -7
201304 | -2 <-- 1st group of negative and consecutive values
201305 | 0
201306 | -12
201307 | -8 <-- 2nd group of negative and consecutive values
Is there a better way of doing this other than using a cursor and a reset variable and checking through each row in order?
Here is some of the SQL I have setup to try and test this:
IF OBJECT_ID('TempDB..#ConsecutiveNegativeWeekTestOne') IS NOT NULL DROP TABLE #ConsecutiveNegativeWeekTestOne
IF OBJECT_ID('TempDB..#ConsecutiveNegativeWeekTestTwo') IS NOT NULL DROP TABLE #ConsecutiveNegativeWeekTestTwo
CREATE TABLE #ConsecutiveNegativeWeekTestOne
(
[Week] INT NOT NULL
,[Value] DECIMAL(18,6) NOT NULL
)
-- I have a condition where I expect to see at least 2 consecutive weeks with negative values
-- TRUE : Week 201328 & 201329 are both negative.
INSERT INTO #ConsecutiveNegativeWeekTestOne
VALUES
(201327, 5)
,(201328,-11)
,(201329,-18)
,(201330, 25)
,(201331, 30)
,(201332, -36)
,(201333, 43)
,(201334, 50)
,(201335, 59)
,(201336, 0)
,(201337, 0)
SELECT * FROM #ConsecutiveNegativeWeekTestOne
WHERE Value < 0
ORDER BY [Week] ASC
CREATE TABLE #ConsecutiveNegativeWeekTestTwo
(
[Week] INT NOT NULL
,[Value] DECIMAL(18,6) NOT NULL
)
-- FALSE: The negative weeks are not consecutive
INSERT INTO #ConsecutiveNegativeWeekTestTwo
VALUES
(201327, 5)
,(201328,-11)
,(201329,20)
,(201330, -25)
,(201331, 30)
,(201332, -36)
,(201333, 43)
,(201334, 50)
,(201335, -15)
,(201336, 0)
,(201337, 0)
SELECT * FROM #ConsecutiveNegativeWeekTestTwo
WHERE Value < 0
ORDER BY [Week] ASC
My SQL fiddle is also here:
http://sqlfiddle.com/#!3/ef54f/2
First, would you please share the formula for calculating week number, or provide a real date for each week, or some method to determine if there are 52 or 53 weeks in any particular year? Once you do that, I can make my queries properly skip missing data AND cross year boundaries.
Now to queries: this can be done without a JOIN, which, depending on the exact indexes present, may improve performance a huge amount over any solution that does use JOINs. Then again, it may not. This is also harder to understand, so it may not be worth it if other solutions perform well enough (especially when the right indexes are present).
Simulate a PREORDER BY windowing function (respects gaps, ignores year boundaries):
WITH Calcs AS (
SELECT
Grp =
[Week] -- comment out to ignore gaps and gain year boundaries
-- Row_Number() OVER (ORDER BY [Week]) -- swap with previous line
- Row_Number() OVER
(PARTITION BY (SELECT 1 WHERE Value < 0) ORDER BY [Week]),
*
FROM dbo.ConsecutiveNegativeWeekTestOne
)
SELECT
[Week] = Min([Week])
-- NumWeeks = Count(*) -- if you want the count
FROM Calcs C
WHERE Value < 0
GROUP BY C.Grp
HAVING Count(*) >= 2
;
See a Live Demo at SQL Fiddle (1st query)
And another way, simulating LAG and LEAD with a CROSS JOIN and aggregates (respects gaps, ignores year boundaries):
WITH Groups AS (
SELECT
Grp = T.[Week] + X.Num,
*
FROM
dbo.ConsecutiveNegativeWeekTestOne T
CROSS JOIN (VALUES (-1), (0), (1)) X (Num)
)
SELECT
[Week] = Min(C.[Week])
-- Value = Min(C.Value)
FROM
Groups G
OUTER APPLY (SELECT G.* WHERE G.Num = 0) C
WHERE G.Value < 0
GROUP BY G.Grp
HAVING
Min(G.[Week]) = Min(C.[Week])
AND Max(G.[Week]) > Min(C.[Week])
;
See a Live Demo at SQL Fiddle (2nd query)
And, my original second query, but simplified (ignores gaps, handles year boundaries):
WITH Groups AS (
SELECT
Grp = (Row_Number() OVER (ORDER BY T.[Week]) + X.Num) / 3,
*
FROM
dbo.ConsecutiveNegativeWeekTestOne T
CROSS JOIN (VALUES (0), (2), (4)) X (Num)
)
SELECT
[Week] = Min(C.[Week])
-- Value = Min(C.Value)
FROM
Groups G
OUTER APPLY (SELECT G.* WHERE G.Num = 2) C
WHERE G.Value < 0
GROUP BY G.Grp
HAVING
Min(G.[Week]) = Min(C.[Week])
AND Max(G.[Week]) > Min(C.[Week])
;
Note: The execution plan for these may be rated as more expensive than other queries, but there will be only 1 table access instead of 2 or 3, and while the CPU may be higher it is still respectably low.
Note: I originally was not paying attention to only producing one row per group of negative values, and so I produced this query as only requiring 2 table accesses (respects gaps, ignores year boundaries):
SELECT
T1.[Week]
FROM
dbo.ConsecutiveNegativeWeekTestOne T1
WHERE
Value < 0
AND EXISTS (
SELECT *
FROM dbo.ConsecutiveNegativeWeekTestOne T2
WHERE
T2.Value < 0
AND T2.[Week] IN (T1.[Week] - 1, T1.[Week] + 1)
)
;
See a Live Demo at SQL Fiddle (3rd query)
However, I have now modified it to perform as required, showing only each starting date (respects gaps, ignores year boundaries):
SELECT
T1.[Week]
FROM
dbo.ConsecutiveNegativeWeekTestOne T1
WHERE
Value < 0
AND EXISTS (
SELECT *
FROM
dbo.ConsecutiveNegativeWeekTestOne T2
WHERE
T2.Value < 0
AND T1.[Week] - 1 <= T2.[Week]
AND T1.[Week] + 1 >= T2.[Week]
AND T1.[Week] <> T2.[Week]
HAVING
Min(T2.[Week]) > T1.[Week]
)
;
See a Live Demo at SQL Fiddle (3rd query)
Last, just for fun, here is a SQL Server 2012 and up version using LEAD and LAG:
WITH Weeks AS (
SELECT
PrevValue = Lag(Value, 1, 0) OVER (ORDER BY [Week]),
SubsValue = Lead(Value, 1, 0) OVER (ORDER BY [Week]),
PrevWeek = Lag(Week, 1, 0) OVER (ORDER BY [Week]),
SubsWeek = Lead(Week, 1, 0) OVER (ORDER BY [Week]),
*
FROM
dbo.ConsecutiveNegativeWeekTestOne
)
SELECT @Week = [Week]
FROM Weeks W
WHERE
(
[Week] - 1 > PrevWeek
OR PrevValue >= 0
)
AND Value < 0
AND SubsValue < 0
AND [Week] + 1 = SubsWeek
;
See a Live Demo at SQL Fiddle (4th query)
I am not sure I am doing this the best way as I haven't used these much, but it works nonetheless.
You should do some performance testing of the various queries presented to you, and pick the best one, considering that code should be, in order:
Correct
Clear
Concise
Fast
Seeing that some of my solutions are anything but clear, other solutions that are fast enough and concise enough will probably win out in the competition of which one to use in your own production code. But... maybe not! And maybe someone will appreciate seeing these techniques, even if they can't be used as-is this time.
So let's do some testing and see what the truth is about all this! Here is some test setup script. It will generate the same data on your own server as it did on mine:
IF Object_ID('dbo.ConsecutiveNegativeWeekTestOne', 'U') IS NOT NULL DROP TABLE dbo.ConsecutiveNegativeWeekTestOne;
GO
CREATE TABLE dbo.ConsecutiveNegativeWeekTestOne (
[Week] int NOT NULL CONSTRAINT PK_ConsecutiveNegativeWeekTestOne PRIMARY KEY CLUSTERED,
[Value] decimal(18,6) NOT NULL
);
SET NOCOUNT ON;
DECLARE
    @f float = Rand(5.1415926535897932384626433832795028842),
    @Dt datetime = '17530101',
    @Week int;
WHILE @Dt <= '20140106' BEGIN
    INSERT dbo.ConsecutiveNegativeWeekTestOne
    SELECT
        Format(@Dt, 'yyyy') + Right('0' + Convert(varchar(11), DateDiff(day, DateAdd(year, DateDiff(year, 0, @Dt), 0), @Dt) / 7 + 1), 2),
        Rand() * 151 - 76
    ;
    SET @Dt = DateAdd(day, 7, @Dt);
END;
This generates 13,620 weeks, from 175301 through 201401. I modified all the queries to select the Week values instead of the count, in the format SELECT @Week = Expression ... so that tests are not affected by returning rows to the client.
I tested only the gap-respecting, non-year-boundary-handling versions.
Results
Query Duration CPU Reads
------------------ -------- ----- ------
ErikE-Preorder 27 31 40
ErikE-CROSS 29 31 40
ErikE-Join-IN -------Awful---------
ErikE-Join-Revised 46 47 15069
ErikE-Lead-Lag 104 109 40
jods 12 16 120
Transact Charlie 12 16 120
Conclusions
The reduced reads of the non-JOIN versions are not significant enough to warrant their increased complexity.
The table is so small that the performance almost doesn't matter. 261 years of weeks is insignificant, so a normal business operation won't see any performance problem even with a poor query.
I tested with an index on Week (which is more than reasonable); doing two separate JOINs with a seek was far, far superior to any device that tries to get the relevant related data in one swoop. Charlie and jods were spot on in their comments.
This data is not large enough to expose real differences between the queries in CPU and duration. The values above are representative, though at times the 31 ms were 16 ms and the 16 ms were 0 ms. Since the resolution is ~15 ms, this doesn't tell us much.
My tricky query techniques do perform better. They might be worth it in performance critical situations. But this is not one of those.
Lead and Lag may not always win. The presence of an index on the lookup value is probably what determines this. The ability to still pull prior/next values based on a certain order even when the order by value is not sequential may be one good use case for these functions.
You could use a combination of NOT EXISTS and EXISTS.
Assuming you only want to count groups (series of consecutive weeks that are all negative):
--Find the potential start weeks
;WITH starts as (
SELECT [Week]
FROM #ConsecutiveNegativeWeekTestOne AS s
WHERE s.[Value] < 0
AND NOT EXISTS (
SELECT 1
FROM #ConsecutiveNegativeWeekTestOne AS p
WHERE p.[Week] = s.[Week] - 1
AND p.[Value] < 0
)
)
SELECT COUNT(*)
FROM
Starts AS s
WHERE EXISTS (
SELECT 1
FROM #ConsecutiveNegativeWeekTestOne AS n
WHERE n.[Week] = s.[Week] + 1
AND n.[Value] < 0
)
If you have an index on Week, this query should even be moderately efficient.
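A possible index, sketched against the temp table from the question (assuming [Week] is unique, as it is in the sample data):
CREATE UNIQUE INDEX IX_Week
    ON #ConsecutiveNegativeWeekTestOne ([Week])
    INCLUDE ([Value]);
With this in place, both the NOT EXISTS probe and the EXISTS probe should become single-row index seeks.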
You can replace LEAD and LAG with a self-join.
The counting idea is basically to count the starts of negative sequences rather than trying to consider each row.
SELECT COUNT(*)
FROM ConsecutiveNegativeWeekTestOne W
LEFT OUTER JOIN ConsecutiveNegativeWeekTestOne Prev
ON W.week = Prev.week + 1
INNER JOIN ConsecutiveNegativeWeekTestOne Next
ON W.week = Next.week - 1
WHERE W.value < 0
AND (Prev.value IS NULL OR Prev.value > 0)
AND Next.value < 0
Note that I simply did "week + 1", which would not work when there is a year change.
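One way around that, sketched under the assumption that ranking the existing weeks in order is acceptable (note this also treats gaps in the data as if the weeks were adjacent): map the YYYYWW keys to a gapless sequence first, then self-join on that sequence instead of on week + 1.
WITH Seq AS (
    SELECT [Week], [Value],
           DENSE_RANK() OVER (ORDER BY [Week]) AS WeekSeq  -- e.g. 201352 and 201401 become adjacent
    FROM ConsecutiveNegativeWeekTestOne
)
SELECT COUNT(*)
FROM Seq W
LEFT OUTER JOIN Seq Prev ON W.WeekSeq = Prev.WeekSeq + 1
INNER JOIN Seq Nxt ON W.WeekSeq = Nxt.WeekSeq - 1
WHERE W.[Value] < 0
  AND (Prev.[Value] IS NULL OR Prev.[Value] >= 0)
  AND Nxt.[Value] < 0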
We've got a query that is taking a very long time to complete with a large dataset. I think I've tracked it down to a table-value function in the SQL server.
The query is designed to return the difference in printing usage between two dates. So if a printer had usage of 100 at date x and 200 at date y a row needs to be returned which reflects that it has had a usage change of 100.
These readings are taken periodically (but not every day) and stored in a table called MeterReadings. The code for the table-value function is below. This is then called from another SQL query which joins the returned table on a devices table with an inner join to get extra device information.
Any advice as to how to optimise the below would be appreciated.
ALTER FUNCTION [dbo].[DeviceUsage]
-- Add the parameters for the stored procedure here
( @StartDate DateTime , @EndDate DateTime )
RETURNS table
AS
RETURN
(
SELECT MAX(dbo.MeterReadings.ScanDateTime) AS MX,
MAX(dbo.MeterReadings.DeviceTotal - reading.DeviceTotal) AS TotalDiff,
MAX(dbo.MeterReadings.TotalCopy - reading.TotalCopy) AS CopyDiff,
MAX(dbo.MeterReadings.TotalPrint - reading.TotalPrint) AS PrintDiff,
MAX(dbo.MeterReadings.TotalScan - reading.TotalScan) AS ScanDiff,
MAX(dbo.MeterReadings.TotalFax - reading.TotalFax) AS FaxDiff,
MAX(dbo.MeterReadings.TotalMono - reading.TotalMono) AS MonoDiff,
MAX(dbo.MeterReadings.TotalColour - reading.TotalColour) AS ColourDiff,
MIN(reading.ScanDateTime) AS MN, dbo.MeterReadings.DeviceID
FROM dbo.MeterReadings INNER JOIN (SELECT * FROM dbo.MeterReadings WHERE
(dbo.MeterReadings.ScanDateTime > @StartDate) AND
(dbo.MeterReadings.ScanDateTime < @EndDate) )
AS reading ON dbo.MeterReadings.DeviceID = reading.DeviceID
WHERE (dbo.MeterReadings.ScanDateTime > @StartDate) AND (dbo.MeterReadings.ScanDateTime < @EndDate)
GROUP BY dbo.MeterReadings.DeviceID);
On the assumption that a value can only ever increase over time, it can certainly be simplified.
SELECT
DeviceID,
MIN(ScanDateTime) AS MN,
MAX(ScanDateTime) AS MX,
MAX(DeviceTotal ) - MIN(DeviceTotal) AS TotalDiff,
MAX(TotalCopy ) - MIN(TotalCopy ) AS CopyDiff,
MAX(TotalPrint ) - MIN(TotalPrint ) AS PrintDiff,
MAX(TotalScan ) - MIN(TotalScan ) AS ScanDiff,
MAX(TotalFax ) - MIN(TotalFax ) AS FaxDiff,
MAX(TotalMono ) - MIN(TotalMono ) AS MonoDiff,
MAX(TotalColour ) - MIN(TotalColour) AS ColourDiff
FROM
dbo.MeterReadings
WHERE
ScanDateTime > @StartDate
AND ScanDateTime < @EndDate
GROUP BY
DeviceID
This assumes that if you have readings on dates 1, 3, 5, 7, 9 and you want to report on 2 -> 8, then you want reading 7 - reading 3. I would have thought you wanted reading 7 - reading 1?
The above query should be fine for relatively small ranges. If you have huge ranges of time, the MAX() - MIN() will be operating on large numbers of rows. This can then possibly be improved even further with the following (with correlated sub-queries to look up just the two rows that you want).
As a side benefit, this also works even if the values can go down as well as up.
(I assume the existence of a Device table for a simpler query and faster performance.)
SELECT
Device.DeviceID,
start.ScanDateTime AS MN,
finish.ScanDateTime AS MX,
finish.DeviceTotal - start.DeviceTotal AS TotalDiff,
finish.TotalCopy - start.TotalCopy AS CopyDiff,
finish.TotalPrint - start.TotalPrint AS PrintDiff,
finish.TotalScan - start.TotalScan AS ScanDiff,
finish.TotalFax - start.TotalFax AS FaxDiff,
finish.TotalMono - start.TotalMono AS MonoDiff,
finish.TotalColour - start.TotalColour AS ColourDiff
FROM
dbo.Device AS device
INNER JOIN
dbo.MeterReadings AS start
ON start.DeviceID = device.DeviceID
AND start.ScanDateTime = (SELECT MIN(ScanDateTime)
FROM dbo.MeterReadings
WHERE DeviceID = device.DeviceID
AND ScanDateTime > @startDate
AND ScanDateTime < @endDate)
INNER JOIN
dbo.MeterReadings AS finish
ON finish.DeviceID = device.DeviceID
AND finish.ScanDateTime = (SELECT MAX(ScanDateTime)
FROM dbo.MeterReadings
WHERE DeviceID = device.DeviceID
AND ScanDateTime > @startDate
AND ScanDateTime < @endDate)
This can also be modified to pick up the start as being the first date on or before @startDate, if required.
EDIT: Modification to pick the start reading as being for the first date on or before @startDate.
SELECT
Device.DeviceID,
start.ScanDateTime AS MN,
finish.ScanDateTime AS MX,
COALESCE(finish.DeviceTotal, 0) - COALESCE(start.DeviceTotal, 0) AS TotalDiff,
COALESCE(finish.TotalCopy , 0) - COALESCE(start.TotalCopy , 0) AS CopyDiff,
COALESCE(finish.TotalPrint , 0) - COALESCE(start.TotalPrint , 0) AS PrintDiff,
COALESCE(finish.TotalScan , 0) - COALESCE(start.TotalScan , 0) AS ScanDiff,
COALESCE(finish.TotalFax , 0) - COALESCE(start.TotalFax , 0) AS FaxDiff,
COALESCE(finish.TotalMono , 0) - COALESCE(start.TotalMono , 0) AS MonoDiff,
COALESCE(finish.TotalColour, 0) - COALESCE(start.TotalColour, 0) AS ColourDiff
FROM
dbo.Device AS device
LEFT JOIN
dbo.MeterReadings AS start
ON start.DeviceID = device.DeviceID
AND start.ScanDateTime = (SELECT MAX(ScanDateTime)
FROM dbo.MeterReadings
WHERE DeviceID = device.DeviceID
AND ScanDateTime < @startDate)
LEFT JOIN
dbo.MeterReadings AS finish
ON finish.DeviceID = device.DeviceID
AND finish.ScanDateTime = (SELECT MAX(ScanDateTime)
FROM dbo.MeterReadings
WHERE DeviceID = device.DeviceID
AND ScanDateTime < @endDate)
Your query seems to compute a cross-product of all readings in a time range for each particular device. This works semantically because the MIN and MAX aggregates don't care about duplicates. But this is very slow. If you are comparing 100 dates with themselves you need to process 10,000 rows.
I suggest you calculate the MIN and MAX values for each metric/column over the entire time interval and then subtract them. That way you don't need to join, and you need only a single pass over the data. Like this:
select Diff = MAX(col) - MIN(col)
from readings
group by DeviceID
I'd like to be able to roll up the count of commitments to a product over years.
The data for new commitments in each year looks like this:
Year | Count of new commitments | Count of new commitments to date (what I'd like)
1986 |                        4 |  4
1987 |                       22 | 26
1988 |                       14 | 40
1989 |                        1 | 41
I know that within a year you can do year to date, month to date etc, but I need to do it over multiple years.
The MDX that gives me the first 2 columns is (really simple, but I don't know where to go from here):
select [Measures].[Commitment Count] on 0
, [Date Dim].[CY Hierarchy].[Calendar Year] on 1
from [Cube]
Any help would be great
In MDX, something along these lines:
with member [x] as sum(
[Date Dim].[CY Hierarchy].[Calendar Year].members(0) : [Date Dim].[CY Hierarchy].currentMember,
[Measures].[Commitment Count]
)
select [x] on 0, [Date Dim].[CY Hierarchy].[Calendar Year] on 1 from [Cube]
Use a common table expression:
with sums (year,sumThisYear,cumulativeSum)
as (
select year
, sum(commitments) as sumThisYear
, sum(commitments) as cumulativeSum
from theTable
where year = (select min(year) from theTable)
group by year
union all
select child.year
, sum(child.commitments) as sumThisYear
, sum(child.commitments) + parent.cumulativeSum as cumulativeSum
from sums parent
JOIN thetable child on parent.year = child.year - 1
group by child.year,parent.cumulativeSum
)
select * from sums
There's a bit of a "trick" in there grouping on parent.cumulativeSum. We know that this will be the same value for all rows, and we need to add it to sum(child.commitments), so we group on it so SQL Server will let us refer to it. That can probably be cleaned up to remove what might be called a "smell", but it will work.
Warning: 11:15pm where I am, written off the top of my head, may need a tweak or two.
EDIT: forgot the group by in the anchor clause, added that in
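If you're on an engine that supports windowed aggregates (SQL Server 2012+, for example), a running SUM avoids the recursion altogether. A sketch against the same hypothetical theTable:
select year
     , sum(commitments) as sumThisYear
     , sum(sum(commitments)) over (order by year
           rows between unbounded preceding and current row) as cumulativeSum
from theTable
group by year;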