I have a large but slim table that records time spent on activities.
Two tables exist: Activities and RecordedTime. RecordedTime holds a date stamp signifying the day the time was spent.
I need to get a list of activities that only have time recorded against them within a given date range.
Currently I have code which builds an exclusion list and stores those activities into a temporary table:
DECLARE @DontInclude TABLE (ActivityID INT)

INSERT INTO @DontInclude
SELECT DISTINCT ActivityID
FROM RecordedTime
WHERE DateStamp < @StartDate

INSERT INTO @DontInclude
SELECT DISTINCT ActivityID
FROM RecordedTime
WHERE DateStamp > @EndDate
The trouble with this is that a lot of data lies outside of small date ranges, so populating the exclusion list takes a long time.
I can't simply use BETWEEN, because that brings back activities with any time in the range, not activities that have ONLY had time recorded within the specific date range.
I've reviewed the Estimated Execution Plan and created the indexes SQL Server suggested.
This portion of my SP is still the bottleneck. Can anyone suggest what other changes I can make to improve performance?
The query that you want sounds like this:
select a.*
from activities a
where not exists (select 1
                  from RecordedTime rt
                  where rt.activityId = a.activityId and
                        rt.dateStamp < @StartDate
                 ) and
      not exists (select 1
                  from RecordedTime rt
                  where rt.activityId = a.activityId and
                        rt.dateStamp > @EndDate
                 ) and
      exists (select 1
              from RecordedTime rt
              where rt.activityId = a.activityId
             );
For performance, you want an index on RecordedTime(activityId, datestamp).
Note that the use of three subqueries is quite intentional. Each subquery should make optimal use of the indexes, so the query should be fairly fast.
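The set logic behind the three subqueries can be sketched outside SQL: an activity qualifies when it has at least one recorded row and every one of its dates falls inside the range. A minimal Python illustration with hypothetical data (not the real tables):

```python
from datetime import date

# Hypothetical RecordedTime rows: (activity_id, date_stamp)
recorded_time = [
    (1, date(2023, 1, 5)),
    (1, date(2023, 1, 10)),
    (2, date(2023, 1, 7)),
    (2, date(2023, 2, 20)),   # outside the range -> activity 2 excluded
    (3, date(2022, 12, 1)),   # entirely before the range -> excluded
]

def activities_only_in_range(rows, start, end):
    """IDs with at least one row, whose dates ALL fall within [start, end]."""
    by_activity = {}
    for activity_id, stamp in rows:
        by_activity.setdefault(activity_id, []).append(stamp)
    return {a for a, stamps in by_activity.items()
            if all(start <= s <= end for s in stamps)}

result = activities_only_in_range(recorded_time, date(2023, 1, 1), date(2023, 1, 31))
```

`all(...)` over the dates corresponds to the pair of NOT EXISTS checks, and requiring the activity to appear in the dictionary at all corresponds to the final EXISTS.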
You could combine the two INSERT statements into one query to make it more efficient, like this:
DECLARE @DontInclude TABLE (ActivityID INT)

INSERT INTO @DontInclude
SELECT DISTINCT ActivityID
FROM RecordedTime
WHERE DateStamp < @StartDate OR DateStamp > @EndDate
Of course, as @Gordon Linoff mentions, adding a non-clustered index on your RecordedTime table would make it considerably faster!
How about first gathering a list of the activities in the range, then removing the ones that should be excluded:
SELECT DISTINCT tmpId = r.ActivityID
INTO #tmp
FROM RecordedTime r
WHERE r.DateStamp >= @StartDate and r.DateStamp < @EndDate

DELETE FROM #tmp
WHERE EXISTS (select 1 from RecordedTime r
              where r.ActivityID = tmpId
                and (r.DateStamp < @StartDate or
                     r.DateStamp > @EndDate))
This should be quicker, since you only check the exclusion conditions ("not exists") on the activities that might be included, rather than running "not exists" against everything in the table.
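The gather-then-delete idea, expressed as plain logic: first collect candidates with at least one in-range row, then drop any candidate that also has an out-of-range row. A small Python sketch with hypothetical data:

```python
from datetime import date

# Hypothetical RecordedTime rows: (activity_id, date_stamp)
rows = [
    (1, date(2023, 1, 5)),
    (2, date(2023, 1, 7)),
    (2, date(2023, 2, 20)),   # out-of-range row disqualifies activity 2
    (3, date(2022, 12, 1)),   # never a candidate
]

def only_in_range(rows, start, end):
    # Phase 1: candidates with at least one row inside the range (the SELECT INTO #tmp)
    candidates = {a for a, s in rows if start <= s <= end}
    # Phase 2: remove candidates that also have rows outside the range (the DELETE)
    excluded = {a for a, s in rows if a in candidates and (s < start or s > end)}
    return candidates - excluded

result = only_in_range(rows, date(2023, 1, 1), date(2023, 1, 31))
```

The second pass only has to consider activities that survived the first, which mirrors why the SQL version avoids scanning the whole table for the "not exists" test.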
Related
I have a transactional fact table with irregular effectiveFrom and effectiveTo dates, with a cumulative total and net changes over ID and time.
Now I want to sum the net changes between two snapshots, for example 2012-01-01 and 2015-07-01, which should include all the rows on and between the snapshot dates.
I would like to select the relevant rows using the snapshot dates so that I can perform a sum on NC_Total grouped on ID.
What is the most efficient way to do this? Can I create a table-valued function for this?
This is how the table looks selected for ID IN (1,2):
This is the resultset I should get back using the snapshot dates:
Select rows on snapshot date
And this is how I would like to finally sum it:
Sum of snapshot dates
I think you just want inequalities in a where clause:
select t.*
from t
where effectiveTo >= '2012-01-01' and
      effectiveFrom <= '2015-07-01';
Then you can use aggregation:
select id, sum(nc_total) as sum_net_changes
from t
where effectiveTo >= '2012-01-01' and
      effectiveFrom <= '2015-07-01'
group by id;
So I ended up writing a table-valued function for this, but the gist of the selection had less to do with SQL than with thinking through the date logic. The solution was not as elegant as I would have wanted:
DECLARE @t1 DATETIME = '2012-01-01'
DECLARE @t2 DATETIME = '2015-07-01'
AND (EffectiveFrom >= @t1 AND EffectiveFrom <= @t2) -- All transactions between the snapshots
I have a report that I need to create from a UserHistory table in SQL Server 2008 R2. A sample is below:
For billing purposes we need to calculate how many distinct users have had a particular status across a particular period. The period can be as low as 1 day or up to something like 2 months. So if one user was active, not active and then active again across a period then that counts as one active user.
The curveball now comes into play. If a user had no histories within the specified period, the stored proc needs to hunt for the first history entry before the start of the period to see if it matches the status. Obviously, if I was active on the 20th of October and then my account was suspended on the 15th of November, and the period was November, I was still active.
At first I did this with a cursor going through each user and running the queries per user and its user histories; however, even with the relevant indexes this takes about 40 seconds for 200,000 users. 40s for each day in a period, for 9 statuses, adds up very quickly.
My colleague and I came up with this method, which has come the closest:
ALTER PROCEDURE [dbo].[cpUserHistory_CountByStatus]
    @StartDate DATETIME2,
    @EndDate DATETIME2,
    @Status VARCHAR(50)
AS
SET NOCOUNT ON

SELECT SUM(CASE WHEN CurrentMonth.UserId IS NOT NULL
                  OR PreviousTime.UserId IS NOT NULL THEN 1 ELSE 0 END)
FROM [User]
LEFT JOIN
    (SELECT UserId
     FROM UserHistory
     INNER JOIN
         (SELECT MAX(UserHistoryId) AS UserHistoryId
          FROM UserHistory
          WHERE EditedDate BETWEEN @StartDate AND @EndDate
          GROUP BY UserId) LastStatus
         ON UserHistory.UserHistoryId = LastStatus.UserHistoryId
     WHERE UserHistory.Status = @Status -- EDIT: the status you're interested in
    ) CurrentMonth ON [User].UserId = CurrentMonth.UserId
LEFT JOIN
    (SELECT UserId
     FROM UserHistory
     INNER JOIN
         (SELECT MAX(UserHistoryId) AS UserHistoryId
          FROM UserHistory
          WHERE EditedDate < @StartDate
          GROUP BY UserId) LastStatus
         ON UserHistory.UserHistoryId = LastStatus.UserHistoryId
     WHERE UserHistory.Status = @Status -- EDIT: the status you're interested in
    ) PreviousTime ON [User].UserId = PreviousTime.UserId
But as you can see in the simplified report sample, a user was cancelled on the 3rd of the month and then reactivated on the 5th; however, the total does not reflect that at some point in the period there was 1 cancelled user.
Does anyone have any ideas or better ways to go about doing this? I could really use some input.
I settled on calling a stored procedure once for the entire range, which brings back all UserHistory entries matching my criteria; in code I can then count and filter them as I please.
Not ideal, as there could be quite a lot of data transferred from the DB to the application, and once in the app it can use a bit of memory - but it works, is fast and, more importantly, is accurate.
ALTER PROCEDURE [dbo].[cpUserHistory_SelectForBillingPeriod]
    @StartDate DATETIME2,
    @EndDate DATETIME2
AS
BEGIN
    DECLARE @Uh TABLE (
        UserHistoryId INT,
        UserId INT,
        UserStatus VARCHAR(50),
        EditedDate DATETIME2
    )

    INSERT INTO @Uh
    SELECT UserHistoryId,
           UserId,
           [Status],
           EditedDate
    FROM UserHistory
    WHERE EditedDate >= @StartDate
      AND EditedDate <= @EndDate

    INSERT INTO @Uh
    SELECT UserHistory.UserHistoryId,
           UserId,
           [Status],
           EditedDate
    FROM UserHistory
    INNER JOIN (
        SELECT MAX(UserHistoryId) AS UserHistoryId
        FROM UserHistory
        WHERE EditedDate < @StartDate
        GROUP BY UserId
    ) LastStatus ON UserHistory.UserHistoryId = LastStatus.UserHistoryId

    SELECT *
    FROM @Uh
    ORDER BY EditedDate DESC
END
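Once the stored procedure returns the in-period rows plus each user's last pre-period row, the application-side counting is straightforward. A simplified Python sketch of that counting logic, with hypothetical history rows (user 1 carries an "Active" status into the period before being suspended):

```python
from datetime import date

# Hypothetical rows, shaped like the @Uh contents: (user_id, status, edited_date)
history = [
    (1, "Active",    date(2013, 10, 20)),  # last entry before the period
    (1, "Suspended", date(2013, 11, 15)),  # during the period
    (2, "Cancelled", date(2013, 11, 3)),
    (2, "Active",    date(2013, 11, 5)),
    (3, "Active",    date(2013, 9, 1)),    # only entry, before the period
]

def users_with_status(history, status, start, end):
    """Users who held `status` at some point in [start, end], including carry-over."""
    matched = set()
    last_before = {}  # user -> (date, status) of latest entry before the period
    for user, st, when in history:
        if start <= when <= end:
            if st == status:
                matched.add(user)
        elif when < start:
            prev = last_before.get(user)
            if prev is None or when > prev[0]:
                last_before[user] = (when, st)
    # A status carried over from before the period also counts
    for user, (_, st) in last_before.items():
        if st == status:
            matched.add(user)
    return matched
```

This is why the cancelled-then-reactivated user is counted correctly: any status held at any point in the period contributes, not just the final status.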
Realizing that another question I asked before may be too difficult, I'm changing my requirements.
I work for a credit card company. Our database has a customer table and a transaction table. Fields in the customer table are SSN and CustomerKey. Fields in the transaction table are CustomerKey, transaction date (Transdate), and transaction amount (TransAmt).
I need a query that can identify each SSN where the sum of any of their transaction amounts exceeds $1,000 within a two-day period in 2012. If an SSN has transaction amounts over $1,000 within a two-day period, I need the query to return all the transactions for that SSN.
Here is an example of the raw data in the Transaction Table:
Trans#-----CustKey-----Date--------Amount
1-----------12345----01/01/12--------$600
2-----------12345----01/02/12--------$500
3-----------67890----01/03/12--------$10
4-----------98765----04/01/12--------$600
5-----------43210----04/02/12--------$600
6-----------43210----04/03/12--------$100
7-----------13579----04/02/12--------$600
8-----------24568----04/03/12--------$100
Here is an example of the raw data in the Customer Table:
CustKey-----SSN
12345------123456789
67890------123456789
98765------987654321
43210------987654321
13579------246801357
24568------246801357
Here are the results I need:
Trans#------SSN---------Date---------Amount
1--------123456789----01/01/12---------$600
2--------123456789----01/02/12---------$500
3--------123456789----01/03/12----------$10
4--------987654321----04/01/12---------$600
5--------987654321----04/02/12---------$600
6--------987654321----04/03/12---------$100
As you can see, my results include all transactions for SSNs 123456789 and 987654321, and exclude SSN 246801357.
One way of doing this is to roll through each two-day period within the year. Here is an SQL Fiddle example.
The idea is pretty simple:
1) Create a temp table to store all matching customers
create table CustomersToShow
(
    SSN int
)
2) Loop through the year and populate the temp table with customers that match the amount criteria
declare @firstDayOfTheYear datetime = '1/1/2012';
declare @lastDayOfTheYear datetime = '12/31/2012';
declare @currentDate datetime = @firstDayOfTheYear;
declare @amountThreshold money = 1000;

while @currentDate <= @lastDayOfTheYear
begin
    insert into CustomersToShow(SSN)
    select b.SSN
    from transactions a
    join customers b
        on a.CustKey = b.CustKey
    where TransactionDate >= @currentDate
      and TransactionDate < DATEADD(day, 2, @currentDate)  -- a two-day window: [d, d + 2)
    group by b.SSN
    having SUM(a.TransactionAmount) > @amountThreshold

    set @currentDate = DATEADD(day, 1, @currentDate)  -- step one day so no two-day window is skipped
end
3) And then just select
select distinct a.TransNumber, b.SSN, a.TransactionDate, a.TransactionAmount
from transactions a
join customers b
    on a.CustKey = b.CustKey
join CustomersToShow c
    on b.SSN = c.SSN
Note: This will be slow...
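The sliding-window flagging can be sketched in plain code, which also shows why stepping one day at a time matters (a window may start on any date). A Python illustration using the question's sample amounts (dates are hypothetical stand-ins):

```python
from datetime import date, timedelta

# Hypothetical joined rows: (ssn, trans_date, amount)
transactions = [
    ("123456789", date(2012, 1, 1), 600),
    ("123456789", date(2012, 1, 2), 500),
    ("987654321", date(2012, 4, 1), 600),
    ("987654321", date(2012, 4, 2), 600),
    ("246801357", date(2012, 4, 2), 600),
    ("246801357", date(2012, 4, 3), 100),
]

def ssns_over_threshold(transactions, threshold=1000, window_days=2):
    """SSNs whose transactions in some window of [d, d + window_days) exceed threshold."""
    by_ssn = {}
    for ssn, d, amt in transactions:
        by_ssn.setdefault(ssn, []).append((d, amt))
    flagged = set()
    for ssn, rows in by_ssn.items():
        # A maximal window always starts on an actual transaction date
        for start in sorted({d for d, _ in rows}):
            end = start + timedelta(days=window_days)
            if sum(amt for d, amt in rows if start <= d < end) > threshold:
                flagged.add(ssn)
                break
    return flagged

flagged = ssns_over_threshold(transactions)
```

Starting windows only at actual transaction dates is safe because shifting a window so its start lands on the first transaction inside it never decreases the window's sum.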
While you could probably come up with a hacky way to do this via standard SQL, this is a problem that IMO is more suited to being solved by code (i.e. not by set-based logic / SQL).
It would be easy to solve if you sort the transaction list by customerKey and date, then loop through the data. Ideally I would do this in code, but alternatively you could write a stored procedure and use a loop and a cursor.
This is easy and well-suited to set-based logic if you look at it right. You simply need to join to a table that has every date range you're interested in. Every T-SQL database should have a utility table named integers (Oracle has an equivalent built in) - it's useful surprisingly often:
CREATE TABLE integers ( n smallint, CONSTRAINT PK_integers PRIMARY KEY CLUSTERED (n))

INSERT integers
SELECT TOP 1000 row_number() OVER (ORDER BY o.id)
FROM sysobjects o CROSS JOIN sysobjects s
Your date table then looks like:
SELECT dateadd(day, n-1, '2012') AS dtFrom, dateadd(day, n+1, '2012') AS dtTo
from integers where n <= 366
You can then (abbreviating):
SELECT ssn, dtFrom
FROM yourTables t
JOIN ( SELECT dateadd(day, n-1, '2012') AS dtFrom, dateadd(day, n+1, '2012') AS dtTo
       FROM integers WHERE n <= 366 ) d
    ON t.date BETWEEN d.dtFrom AND d.dtTo
GROUP BY ssn, dtFrom
HAVING sum(amount) > 1000
You can select all your transactions:
WHERE ssn in ( SELECT distinct ssn from ( <above query> ) t )
I have a table with order information in an E-commerce store. Schema looks like this:
[Orders]
Id|SubTotal|TaxAmount|ShippingAmount|DateCreated
This table only contains a row for each order. So if a day goes by without any orders, there is no sales data for that day.
I would like to select the subtotal per day for the last 30 days, including the days with no sales.
The resultset would look like this:
Date | SalesSum
2009-08-01 | 15235
2009-08-02 | 0
2009-08-03 | 340
2009-08-04 | 0
...
Doing this only gives me data for the days that have orders:
select DateCreated as Date, sum(ordersubtotal) as SalesSum
from Orders
group by DateCreated
You could create a table called Dates, select from that table, and join the Orders table. But I really want to avoid that, because it doesn't work well enough when dealing with different time zones and such...
Please don't laugh. SQL is not my kind of thing... :)
Create a function that can generate a date table as follows:
(stolen from http://www.codeproject.com/KB/database/GenerateDateTable.aspx)
Create Function dbo.fnDateTable
(
    @StartDate datetime,
    @EndDate datetime,
    @DayPart char(5) -- supports 'day','month','year','hour'; default 'day'
)
Returns @Result Table
(
    [Date] datetime
)
As
Begin
    Declare @CurrentDate datetime
    Set @CurrentDate = @StartDate
    While @CurrentDate <= @EndDate
    Begin
        Insert Into @Result Values (@CurrentDate)
        Select @CurrentDate =
            Case
                When @DayPart = 'year'  Then DateAdd(yy, 1, @CurrentDate)
                When @DayPart = 'month' Then DateAdd(mm, 1, @CurrentDate)
                When @DayPart = 'hour'  Then DateAdd(hh, 1, @CurrentDate)
                Else DateAdd(dd, 1, @CurrentDate)
            End
    End
    Return
End
Then, join against that table
SELECT dates.Date AS Date, ISNULL(SUM(SubTotal + TaxAmount + ShippingAmount), 0) AS SalesSum
FROM [dbo].[fnDateTable] (DATEADD(month, -1, CONVERT(VARCHAR(10), GETDATE(), 111)), CONVERT(VARCHAR(10), GETDATE(), 111), 'day') dates
LEFT JOIN Orders
    ON dates.Date = DateCreated
GROUP BY dates.Date
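The date-table-plus-left-join pattern amounts to building a dense date axis and defaulting missing days to zero. A minimal Python sketch of that zero-fill step, with hypothetical per-day sums as the database would return them:

```python
from datetime import date, timedelta

# Hypothetical GROUP BY output: days with no orders are simply absent
db_rows = {date(2009, 8, 1): 15235, date(2009, 8, 3): 340}

def fill_missing_days(db_rows, start, end):
    """One (day, sum) row per calendar day, 0 where the database had nothing."""
    out = []
    d = start
    while d <= end:
        out.append((d, db_rows.get(d, 0)))  # the dict lookup plays the LEFT JOIN role
        d += timedelta(days=1)
    return out

report = fill_missing_days(db_rows, date(2009, 8, 1), date(2009, 8, 4))
```

Doing this in the application also sidesteps the time-zone concern from the question, since the date axis can be generated in whatever zone the report needs.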
declare @oldest_date datetime
declare @daily_sum numeric(18,2)
declare @temp table(
    sales_date datetime,
    sales_sum numeric(18,2)
)

select @oldest_date = dateadd(day, -30, getdate())

while @oldest_date <= getdate()
begin
    set @daily_sum = (select sum(SubTotal) from SalesTable where DateCreated = @oldest_date)
    insert into @temp(sales_date, sales_sum) values(@oldest_date, @daily_sum)
    set @oldest_date = dateadd(day, 1, @oldest_date)
end

select * from @temp
OK - I missed that 'last 30 days' part. The above, while not as clean, IMHO, as the date table, should work. Another variant would be to use the while loop to fill a temp table with just the last 30 days and do a left outer join with the result of my original query.
including those days with no sales.
That's the difficult part. I don't think the first answer will help you with that. I did something similar to this with a separate date table.
You can find the directions on how to do so here:
Date Table
I have a Log table with a LogID identity column from which I never delete any records; LogID runs from 1 to ~10,000,000. Using this table I can write:
select
    s.ddate, SUM(isnull(o.SubTotal, 0))
from
(
    select
        cast(datediff(d, LogID, getdate()) as datetime) AS ddate
    from
        Log
    where
        LogID < 31
) s left join orders o on o.orderdate = s.ddate
group by s.ddate
I actually did this today. We also have an e-commerce application. I don't want to fill our database with "useless" dates. I just do the GROUP BY, create all the days for the last N days in Java, and pair them with the date/sales results from the database.
Where is this ultimately going to end up? I ask only because it may be easier to fill in the empty days with whatever program is going to deal with the data instead of trying to get it done in SQL.
SQL is a wonderful language, and it is capable of a great many things, but sometimes you're just better off working the finer points of the data in the program instead.
(Revised a bit--I hit enter too soon)
I started poking at this, and as it hits some pretty tricky SQL concepts it quickly grew into the following monster. If feasible, you might be better off adapting THEn's solution; or, as many others advise, using application code to fill in the gaps could be preferable.
-- A temp table holding the 30 dates that you want to check
DECLARE @Foo TABLE (Date smalldatetime not null)

-- Populate the table using a common "tally table" methodology (I got this from SQL Server magazine long ago)
;WITH
    L0 AS (SELECT 1 AS C UNION ALL SELECT 1),    -- 2 rows
    L1 AS (SELECT 1 AS C FROM L0 AS A, L0 AS B), -- 4 rows
    L2 AS (SELECT 1 AS C FROM L1 AS A, L1 AS B), -- 16 rows
    L3 AS (SELECT 1 AS C FROM L2 AS A, L2 AS B), -- 256 rows
    Tally AS (SELECT ROW_NUMBER() OVER(ORDER BY C) AS Number FROM L3)
INSERT @Foo (Date)
SELECT dateadd(dd, datediff(dd, 0, dateadd(dd, -Number + 1, getdate())), 0)
FROM Tally
WHERE Number < 31
Step 1 is to build a temp table containing the 30 dates that you are concerned with. That abstract weirdness is about the fastest known way to build a table of consecutive integers; add a few more subqueries and you can populate millions of rows in mere seconds. I take the first 30 and use DATEADD and the current date/time to convert them into dates. If you already have a "fixed" table that has 1-30, you can use that and skip the CTE entirely (by replacing table "Tally" with your table).
The outer two date function calls remove the time portion of the generated value.
(Note that I assume that your order date also has no time portion -- otherwise you've got another common problem to resolve.)
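The growth trick behind the chained CTEs is that each cross join squares the row count (2, 4, 16, 256, ...), so very few levels are needed for huge tables. A tiny Python sketch of the same doubling-by-self-cross idea:

```python
def tally(n):
    """Generate 1..n the way the chained CTEs do: square the row count until it's big enough."""
    rows = [1, 1]  # L0 has 2 rows
    while len(rows) < n:
        # Cross join the set with itself: 2 -> 4 -> 16 -> 256 -> ...
        rows = [1 for _ in rows for _ in rows]
    # ROW_NUMBER() equivalent: number the rows and keep the first n
    return list(range(1, len(rows) + 1))[:n]

first_thirty = tally(30)
```

Four squarings already give 65,536 rows, which is why adding "a few more subqueries" scales to millions so quickly.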
For testing purposes I built table #Orders, and this gets you the rest:
SELECT f.Date, sum(ordersubtotal) AS SalesSum
FROM @Foo f
LEFT OUTER JOIN #Orders o
    ON o.DateCreated = f.Date
GROUP BY f.Date
I created the Function DateTable as JamesMLV pointed out to me.
And then the SQL looks like this:
SELECT dates.Date, ISNULL(SUM(ordersubtotal), 0) AS Sales
FROM [dbo].[DateTable] ('2009-08-01', '2009-08-31', 'day') dates
LEFT JOIN Orders
    ON CONVERT(VARCHAR(10), Orders.DateCreated, 111) = dates.Date
GROUP BY dates.Date
SELECT DateCreated,
SUM(SubTotal) AS SalesSum
FROM Orders
GROUP BY DateCreated
I feel like I've seen this question asked before, but neither the SO search nor google is helping me... maybe I just don't know how to phrase the question. I need to count the number of events (in this case, logins) per day over a given time span so that I can make a graph of website usage. The query I have so far is this:
select
count(userid) as numlogins,
count(distinct userid) as numusers,
convert(varchar, entryts, 101) as date
from
usagelog
group by
convert(varchar, entryts, 101)
This does most of what I need (I get a row per date as the output containing the total number of logins and the number of unique users on that date). The problem is that if no one logs in on a given date, there will not be a row in the dataset for that date. I want it to add in rows indicating zero logins for those dates. There are two approaches I can think of for solving this, and neither strikes me as very elegant.
Add a column to the result set that lists the number of days between the start of the period and the date of the current row. When I'm building my chart output, I'll keep track of this value and if the next row is not equal to the current row plus one, insert zeros into the chart for each of the missing days.
Create a "date" table that has all the dates in the period of interest and outer join against it. Sadly, the system I'm working on already has a table for this purpose that contains a row for every date far into the future... I don't like that, and I'd prefer to avoid using it, especially since that table is intended for another module of the system and would thus introduce a dependency on what I'm developing currently.
Any better solutions or hints at better search terms for google? Thanks.
Frankly, I'd do this programmatically when building the final output. You're essentially trying to read something from the database which is not there (data for days that have no data). SQL isn't really meant for that sort of thing.
If you really want to do that, though, a "date" table seems your best option. To make it a bit nicer, you could generate it on the fly, using e.g. your DB's date functions and a derived table.
I had to do exactly the same thing recently. This is how I did it in T-SQL (YMMV on speed, but I've found it performant enough over a couple million rows of event data):
DECLARE @DaysTable TABLE ( [Year] INT, [Day] INT )

DECLARE @StartDate DATETIME
SET @StartDate = whatever

WHILE (@StartDate <= GETDATE())
BEGIN
    INSERT INTO @DaysTable ( [Year], [Day] )
    SELECT DATEPART(YEAR, @StartDate), DATEPART(DAYOFYEAR, @StartDate)

    SELECT @StartDate = DATEADD(DAY, 1, @StartDate)
END

-- This gives me a table of all days since whenever
-- (you could select @StartDate as the minimum date of your usage log)

SELECT days.Year, days.Day, events.NumEvents
FROM @DaysTable AS days
LEFT JOIN (
    SELECT
        COUNT(*) AS NumEvents,
        DATEPART(YEAR, LogDate) AS [Year],
        DATEPART(DAYOFYEAR, LogDate) AS [Day]
    FROM LogData
    GROUP BY
        DATEPART(YEAR, LogDate),
        DATEPART(DAYOFYEAR, LogDate)
) AS events ON days.Year = events.Year AND days.Day = events.Day
Create a memory table (a table variable) where you insert your date ranges, then outer join the logins table against it. Group by your start date, then you can perform your aggregations and calculations.
The strategy I normally use is to UNION with the opposite of the query, generally a query that retrieves data for rows that don't exist.
If I wanted to get the average mark for a course, but some courses weren't taken by any students, I'd need to UNION with those not taken by anyone to display a row for every class:
SELECT AVG(mark), course FROM `marks`
UNION
SELECT NULL, course FROM courses WHERE course NOT IN
(SELECT course FROM marks)
Your query will be more complex but the same principle should apply. You may indeed need a table of dates for your second query
Option 1
You can create a temp table, insert the dates within the range, and do a left outer join with the usagelog.
Option 2
You can programmatically insert the missing dates while iterating over the result set to produce the final output.
WITH q(n) AS
(
    SELECT 0
    UNION ALL
    SELECT n + 1
    FROM q
    WHERE n < 99
),
qq(n) AS
(
    SELECT 0
    UNION ALL
    SELECT n + 1
    FROM qq
    WHERE n < 99
),
dates AS
(
    SELECT q.n * 100 + qq.n AS ndate
    FROM q, qq
)
SELECT COUNT(userid) as numlogins,
       COUNT(DISTINCT userid) as numusers,
       CAST('2000-01-01' AS DATETIME) + ndate as date
FROM dates
LEFT JOIN
     usagelog
     ON entryts >= CAST('2000-01-01' AS DATETIME) + ndate
    AND entryts <  CAST('2000-01-01' AS DATETIME) + ndate + 1
GROUP BY
     ndate
This will select up to 10,000 dates constructed on the fly, that should be enough for 30 years.
SQL Server limits a CTE to 100 recursions by default, which is why each inner query returns at most 100 rows.
If you need more than 10,000, just add a third CTE qqq(n) and cross-join with it in dates.
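The composition trick - two 0..99 sequences combined as `q.n * 100 + qq.n` to cover 0..9999 - is easy to see in miniature. A Python sketch of the same construction (the base date is arbitrary):

```python
from datetime import date, timedelta

def dates_from(base, count):
    """Build `count` consecutive dates by composing two 0..99 sequences,
    mirroring the q/qq CTEs: offset = q.n * 100 + qq.n covers 0..9999."""
    q = range(100)
    qq = range(100)
    ndates = sorted(a * 100 + b for a in q for b in qq)[:count]
    return [base + timedelta(days=n) for n in ndates]

days = dates_from(date(2000, 1, 1), 5)
```

Adding a third factor (`qqq.n * 10000`) extends the range to a million offsets, exactly as the answer suggests for the SQL version.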