I have a table with order information in an e-commerce store. The schema looks like this:
[Orders]
Id|SubTotal|TaxAmount|ShippingAmount|DateCreated
This table only contains data for days that have orders, so if a day goes by without any orders, there is no sales data for that day.
I would like to select subtotal-per-day for the last 30 days, including those days with no sales.
The resultset would look like this:
Date | SalesSum
2009-08-01 | 15235
2009-08-02 | 0
2009-08-03 | 340
2009-08-04 | 0
...
Doing this only gives me data for those days with orders:
select DateCreated as Date, sum(ordersubtotal) as SalesSum
from Orders
group by DateCreated
You could create a table called Dates, select from that table and join the Orders table. But I really want to avoid that, because it doesn't work well enough when dealing with different time zones and such...
Please don't laugh. SQL is not my kind of thing... :)
Create a function that can generate a date table as follows:
(stolen from http://www.codeproject.com/KB/database/GenerateDateTable.aspx)
Create Function dbo.fnDateTable
(
    @StartDate datetime,
    @EndDate datetime,
    @DayPart char(5) -- supports 'day','month','year','hour'; default 'day'
)
Returns @Result Table
(
    [Date] datetime
)
As
Begin
    Declare @CurrentDate datetime
    Set @CurrentDate = @StartDate
    While @CurrentDate <= @EndDate
    Begin
        Insert Into @Result Values (@CurrentDate)
        Select @CurrentDate =
            Case
                When @DayPart = 'year'  Then DateAdd(yy, 1, @CurrentDate)
                When @DayPart = 'month' Then DateAdd(mm, 1, @CurrentDate)
                When @DayPart = 'hour'  Then DateAdd(hh, 1, @CurrentDate)
                Else DateAdd(dd, 1, @CurrentDate)
            End
    End
    Return
End
Then, join against that table
SELECT dates.Date as Date, sum(SubTotal+TaxAmount+ShippingAmount) as SalesSum
FROM [fnDateTable] (dateadd(m,-1,CONVERT(VARCHAR(10),GETDATE(),111)),CONVERT(VARCHAR(10),GETDATE(),111),'day') dates
LEFT JOIN Orders
ON dates.Date = DateCreated
GROUP BY dates.Date
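On days with no orders that SUM comes back as NULL rather than 0; wrapping it in ISNULL (as the follow-up query further down the thread also does) gives the zero rows the question asks for:
SELECT dates.Date as Date, ISNULL(sum(SubTotal+TaxAmount+ShippingAmount), 0) as SalesSum
FROM [fnDateTable] (dateadd(m,-1,CONVERT(VARCHAR(10),GETDATE(),111)),CONVERT(VARCHAR(10),GETDATE(),111),'day') dates
LEFT JOIN Orders
ON dates.Date = DateCreated
GROUP BY dates.Date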
declare @oldest_date datetime
declare @daily_sum numeric(18,2)
declare @temp table(
    sales_date datetime,
    sales_sum numeric(18,2)
)
select @oldest_date = dateadd(day,-30,getdate())
while @oldest_date <= getdate()
begin
    set @daily_sum = (select sum(SubTotal) from Orders where DateCreated = @oldest_date)
    insert into @temp(sales_date, sales_sum) values(@oldest_date, @daily_sum)
    set @oldest_date = dateadd(day,1,@oldest_date)
end
select * from @temp
OK - I missed the 'last 30 days' part. The bit above, while not as clean as the date table (IMHO), should work. Another variant would be to use the while loop to fill a temp table with just the last 30 days and do a left outer join against the result of my original query, as sketched below.
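A minimal sketch of that variant, reusing the column names from the question (and assuming DateCreated holds dates with no time portion):
declare @dates table (sales_date datetime)
declare @d datetime
set @d = dateadd(dd, datediff(dd, 0, getdate()) - 30, 0)  -- midnight, 30 days back
while @d <= getdate()
begin
    insert into @dates (sales_date) values (@d)
    set @d = dateadd(day, 1, @d)
end
select d.sales_date as [Date], isnull(o.SalesSum, 0) as SalesSum
from @dates d
left outer join (
    select DateCreated, sum(SubTotal) as SalesSum
    from Orders
    group by DateCreated
) o on o.DateCreated = d.sales_date
order by d.sales_date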
including those days with no sales.
That's the difficult part. I don't think the first answer will help you with that. I did something similar to this with a separate date table.
You can find the directions on how to do so here:
Date Table
I have a Log table with a LogID identity column from which I never delete any records; it has values from 1 to ~10,000,000. Using this table I can write:
select
    s.ddate, SUM(isnull(o.SubTotal,0)) as SalesSum
from
(
    select
        cast(datediff(d, LogID, getdate()) as datetime) AS ddate
    from
        Log
    where
        LogID < 31
) s left join orders o on o.DateCreated = s.ddate
group by s.ddate
I actually did this today. We also have an e-commerce application. I don't want to fill our database with "useless" dates. I just do the group by in SQL, generate all the days for the last N days in Java, and pair them with the date/sales results from the database.
Where is this ultimately going to end up? I ask only because it may be easier to fill in the empty days with whatever program is going to deal with the data instead of trying to get it done in SQL.
SQL is a wonderful language, and it is capable of a great many things, but sometimes you're just better off working the finer points of the data in the program instead.
(Revised a bit--I hit enter too soon)
I started poking at this, and as it touches some pretty tricky SQL concepts, it quickly grew into the following monster. If feasible, you might be better off adapting THEn's solution; or, as many others advise, using application code to fill in the gaps may be preferable.
-- A table variable holding the 30 dates that you want to check
DECLARE @Foo TABLE (Date smalldatetime not null)
-- Populate the table using a common "tally table" methodology (I got this from SQL Server magazine long ago)
;WITH
L0 AS (SELECT 1 AS C UNION ALL SELECT 1), --2 rows
L1 AS (SELECT 1 AS C FROM L0 AS A, L0 AS B),--4 rows
L2 AS (SELECT 1 AS C FROM L1 AS A, L1 AS B),--16 rows
L3 AS (SELECT 1 AS C FROM L2 AS A, L2 AS B),--256 rows
Tally AS (SELECT ROW_NUMBER() OVER(ORDER BY C) AS Number FROM L3)
INSERT @Foo (Date)
select dateadd(dd, datediff(dd, 0, dateadd(dd, -number + 1, getdate())), 0)
from Tally
where Number < 31
Step 1 is to build a table variable containing the 30 dates that you are concerned with. That abstract weirdness is about the fastest way known to build a table of consecutive integers; add a few more subqueries, and you can populate millions or more in mere seconds. I take the first 30, and use dateadd and the current date/time to convert them into dates. If you already have a "fixed" table that has 1-30, you can use that and skip the CTE entirely (by replacing table "Tally" with your table).
The outer two date function calls remove the time portion of the generated date.
(Note that I assume that your order date also has no time portion -- otherwise you've got another common problem to resolve.)
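For reference, here is that truncation idiom on its own; the inner DATEDIFF counts whole days between day zero (1900-01-01) and the value, and the outer DATEADD adds those days back, dropping the time of day:
-- e.g. if GETDATE() returns 2009-08-20 14:35:12.000 ...
SELECT dateadd(dd, datediff(dd, 0, getdate()), 0)  -- ... this returns 2009-08-20 00:00:00.000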
For testing purposes I built table #Orders, and this gets you the rest:
SELECT f.Date, sum(ordersubtotal) as SalesSum
from @Foo f
left outer join #Orders o
on o.DateCreated = f.Date
group by f.Date
I created the Function DateTable as JamesMLV pointed out to me.
And then the SQL looks like this:
SELECT dates.date, ISNULL(SUM(ordersubtotal), 0) as Sales
FROM [dbo].[DateTable] ('2009-08-01','2009-08-31','day') dates
LEFT JOIN Orders ON CONVERT(VARCHAR(10), Orders.datecreated, 111) = dates.date
group by dates.date
SELECT DateCreated,
SUM(SubTotal) AS SalesSum
FROM Orders
GROUP BY DateCreated
Related
I'm designing a report that returns purchase orders due in the coming week.
The query I've added below returns the purchase orders due for a particular commodity, with the amount due and the delivery date.
Obviously it only returns PO dates that are in the table. What I want is to also include dates where no PO is expected, i.e. null for those cells.
One possibility is to LEFT JOIN the dataset with a set of dates for the coming week on the date column, which will make the result null where no purchase order is expected.
In Firebird I don't know how to select a week-long list of dates and then use it in the join.
SELECT
PURCHASE_ORDER_DET.COMMODITYID AS COM_ID,
PURCHASE_ORDER_DET.DELIVERYDATE + CAST ('29.12.1899' AS DATE) as DLV_DATE,
SUM(PURCHASE_ORDER_DET.REQQUANTITY) as DLV_DUE
FROM
PURCHASE_ORDER_DET
LEFT JOIN PURCHASE_ORDER_HDR on PURCHASE_ORDER_HDR.POH_ID =
PURCHASE_ORDER_DET.POH_ID
WHERE
PURCHASE_ORDER_DET.COMMODITYID = 1
AND PURCHASE_ORDER_HDR.STATUS in (0,1,2)
AND PURCHASE_ORDER_DET.DELIVERYDATE + CAST ('30.12.1899' AS TIMESTAMP) >= '3.01.2019'
AND PURCHASE_ORDER_DET.DELIVERYDATE + CAST ('30.12.1899' AS TIMESTAMP) <= '9.01.2019'
AND PURCHASE_ORDER_DET.DELETED is NULL
Group by
PURCHASE_ORDER_DET.COMMODITYID,
PURCHASE_ORDER_DET.DELIVERYDATE
DataSet
COM_ID DLV_DATE DLV_DUE
1 3.01.2019 50.000000
1 5.01.2019 10.000000
Expected
COM_ID DLV_DATE DLV_DUE
1 3.01.2019 50.000000
1 4.01.2019 null
1 5.01.2019 10.000000
1 6.01.2019 null
1 7.01.2019 null
1 8.01.2019 null
1 9.01.2019 null
Ignoring your odd use of datatypes*, there are several possible solutions:
1. Use a 'calendar' table that contains dates, and right join to that table (or left join from that table). The downside of course is having to populate this table (but that is a one-off cost).
2. Use a selectable stored procedure to generate a date range and join on that.
3. Generate the range in a recursive common table expression in the query itself.
Option 1 is pretty self-explanatory.
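If you go the calendar-table route, a one-off population could look roughly like this (a sketch only; the table and column names are assumed, and in isql you would need to change the statement terminator with SET TERM before running the EXECUTE BLOCK):
CREATE TABLE calendar (dateval DATE NOT NULL PRIMARY KEY);
EXECUTE BLOCK AS
  DECLARE VARIABLE d DATE;
BEGIN
  d = DATE '2015-01-01';
  WHILE (d <= DATE '2030-12-31') DO
  BEGIN
    INSERT INTO calendar (dateval) VALUES (:d);
    d = d + 1;
  END
END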
Option 2 would look something like:
CREATE OR ALTER PROCEDURE date_range(startdate date, enddate date)
RETURNS (dateval date)
AS
BEGIN
dateval = startdate;
while (dateval <= enddate) do
BEGIN
suspend;
dateval = dateval + 1;
END
END
And then use this in your query like:
select date_range.dateval, ...
from date_range(date'2019-01-03', date'2019-01-09') -- use date_range(?, ?) for parameters
left join ...
on date_range.dateval = ...
Option 3 would look something like:
WITH RECURSIVE date_range AS (
SELECT date'2019-01-03' dateval -- start date, use cast(? as date) if you need a parameter
FROM rdb$database
UNION ALL
SELECT dateval + 1
FROM date_range
WHERE dateval < date'2019-01-09' -- end date use ? if you need a parameter
)
SELECT *
FROM date_range
LEFT JOIN ...
ON date_range.dateval = ...
Recursive common table expressions have a maximum recursion depth of 1024, which means that it isn't suitable if you need a span wider than 1024 days.
*: I'd suggest that you start using DATE instead of what looks like the number of days since 30-12-1899. That avoids having to do awkward calculations like you do now. If you do need that number of days, you can for example use datediff(DAY FROM date'1899-12-30' TO somedatevalue) or somedatevalue - date'1899-12-30' to convert a date to that numeric value.
I have a large but slim table that records time spent on activities.
Two tables exist: Activities and RecordedTime. RecordedTime holds a date stamp signifying the day the time was spent.
I have a need to get a list of activities that only have time recorded against them in a date range.
Currently I have code which builds an exclusion list and stores those activities into a temporary table:
DECLARE @DontInclude TABLE (ActivityID INT)
INSERT INTO @DontInclude
SELECT DISTINCT ActivityID
FROM RecordedTime
WHERE DateStamp < @StartDate
INSERT INTO @DontInclude
SELECT DISTINCT ActivityID
FROM RecordedTime
WHERE DateStamp > @EndDate
The trouble with this is that a lot of data lies outside of small date ranges, so building the exclusion list takes a long time.
I can't use BETWEEN on its own, as it doesn't restrict the results to activities that have ONLY had time recorded within the specified date range.
I've reviewed the Estimated Execution Plan and created any indexes SQL suggested.
This portion of my SP is still the bottleneck. Can anyone suggest what other changes I can make to improve performance?
The query that you want sounds like this:
select a.*
from activities a
where not exists (select 1
from RecordedTime rt
where rt.activityId = a.activityId and
dateStamp < @StartDate
) and
not exists (select 1
from RecordedTime rt
where rt.activityId = a.activityId and
dateStamp > @EndDate
) and
exists (select 1
from RecordedTime rt
where rt.activityId = a.activityId
);
For performance, you want an index on RecordedTime(activityId, datestamp).
Note that the use of three subqueries is quite intentional. Each subquery should make optimal use of the indexes, so the query should be fairly fast.
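A sketch of that index DDL (the index name is up to you):
CREATE INDEX IX_RecordedTime_ActivityId_DateStamp
    ON RecordedTime (activityId, dateStamp);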
You could combine the two insert statements into one query to make it more efficient, like this:
DECLARE @DontInclude TABLE (ActivityID INT)
INSERT INTO @DontInclude
SELECT DISTINCT ActivityID
FROM RecordedTime
WHERE DateStamp < @StartDate OR DateStamp > @EndDate
Of course, as @Gordon Linoff mentions, adding a non-clustered index on your RecordedTime table would make it quite a bit faster!
How about first gathering a list of ones in the range, then removing the ones that should be excluded:
SELECT DISTINCT tmpId = r.ActivityID
INTO #tmp
FROM RecordedTime r
WHERE r.DateStamp >= @StartDate and r.DateStamp < @EndDate
DELETE FROM #tmp
WHERE exists(select 1 from RecordedTime r
             where r.ActivityID = tmpID
             and (r.DateStamp < @StartDate or
                  r.DateStamp > @EndDate))
This should be quicker since you only check exclusion conditions ("not exists") on the ones that might be included; rather than running "not exists" on everything in the table.
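A possible final step, joining the surviving IDs back to the Activities table (assuming it is keyed by ActivityID, as described in the question):
SELECT a.*
FROM Activities a
JOIN #tmp t ON t.tmpId = a.ActivityID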
I'm creating an internal holiday booking system and I need to put business logic rules into place. I need to check how many people are booked off on the dates between the start and end date because, for example, 2 apprentices may only be booked off on 1 day, but I have no way of grabbing the dates in between.
Any help would be appreciated
Below is the job role table
You haven't posted the RDBMS, or the names of the tables, or what exactly the Job Role table is supposed to be doing ... but I'll take a shot at this anyway. I'm using a recursive CTE to generate a list of dates, but it would be far better for you to use a Date table, and I don't even know whether your RDBMS will support this. I've also posted the syntax for a table variable below that populates data mimicking your sample.
The final output, naturally, will need to be customized to do whatever you need to do. This does show you, however, a list of every date when more than one employee is on vacation. Add extra conditions to the second JOIN or to the WHERE clause to filter on other things (like JobRole) if necessary.
-- Code originally from http://smehrozalam.wordpress.com/2009/06/09/t-sql-using-common-table-expressions-cte-to-generate-sequences/
-- Define start and end limits
DECLARE @todate DATETIME, @fromdate DATETIME
SELECT @fromdate='2014-01-01', @todate=GETDATE()-1
DECLARE @TimeOff TABLE (StartDate DATETIME, EndDate DATETIME, EmployeeID INT)
INSERT INTO @TimeOff (StartDate, EndDate, EmployeeID)
SELECT '1/1/2014', '1/7/2014', 7 UNION
SELECT '2/1/2014', '2/7/2014', 7 UNION
SELECT '3/3/2014', '3/9/2014', 7 UNION
SELECT '2/5/2014', '2/6/2014', 8
;WITH DateSequence( Date ) AS -- this will list all dates. Use this if you don't have a date table
(
SELECT @fromdate as Date
UNION ALL
SELECT DATEADD(DAY, 1, Date)
FROM DateSequence
WHERE Date < @todate
)
--select result
SELECT DateSequence.Date, TimeOffA.StartDate, TimeOffB.EndDate, TimeOffA.EmployeeID
FROM
DateSequence -- a full list of all possible dates
INNER JOIN
@TimeOff TimeOffA ON -- all dates when an employee is on vacation -- replace this with your actual table's name
DateSequence.Date BETWEEN TimeOffA.StartDate AND TimeOffA.EndDate
INNER JOIN
@TimeOff TimeOffB ON -- all dates when an employee who is NOT employee A is on vacation -- replace this with your actual table's name
DateSequence.Date BETWEEN TimeOffB.StartDate AND TimeOffB.EndDate AND
TimeOffA.EmployeeID <> TimeOffB.EmployeeID
option (MaxRecursion 2000)
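If what you ultimately need is a head count per day, the same date sequence can be aggregated instead. A sketch, reusing the variables and the sample @TimeOff table above (swap in your real table):
;WITH DateSequence( Date ) AS
(
    SELECT @fromdate as Date
    UNION ALL
    SELECT DATEADD(DAY, 1, Date)
    FROM DateSequence
    WHERE Date < @todate
)
SELECT DateSequence.Date, COUNT(DISTINCT t.EmployeeID) AS PeopleBookedOff
FROM DateSequence
INNER JOIN @TimeOff t ON DateSequence.Date BETWEEN t.StartDate AND t.EndDate
GROUP BY DateSequence.Date
HAVING COUNT(DISTINCT t.EmployeeID) > 1 -- e.g. only show days with more than one person off
OPTION (MAXRECURSION 2000)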
Realizing that another question I asked before may be too difficult, I'm changing my requirements.
I work for a credit card company. Our database has a customer table and a transaction table. Fields in the customer table are SSN and CustomerKey. Fields in the transaction table are CustomerKey, transaction date (Transdate), and transaction amount (TransAmt).
I need a query that can identify each SSN where the sum of any of their transaction amounts is > 1000 within a two-day period in 2012. If an SSN has transaction amounts > 1000 within a two-day period, I need the query to return all the transactions for that SSN.
Here is an example of the raw data in the Transaction Table:
Trans#   CustKey   Date       Amount
1        12345     01/01/12   $600
2        12345     01/02/12   $500
3        67890     01/03/12   $10
4        98765     04/01/12   $600
5        43210     04/02/12   $600
6        43210     04/03/12   $100
7        13579     04/02/12   $600
8        24568     04/03/12   $100
Here is an example of the raw data in the Customer Table:
CustKey   SSN
12345     123456789
67890     123456789
98765     987654321
43210     987654321
13579     246801357
24568     246801357
Here are the results I need:
Trans#   SSN         Date       Amount
1        123456789   01/01/12   $600
2        123456789   01/02/12   $500
3        123456789   01/03/12   $10
4        987654321   04/01/12   $600
5        987654321   04/02/12   $600
6        987654321   04/03/12   $100
As you can see, my results include all transactions for SSNs 123456789 and 987654321, and exclude SSN 246801357.
One way of doing this is to roll through each two day period within a year. Here is an SQL Fiddle example.
The idea is pretty simple:
1) Create a temp table to store all matching customers
create table CustomersToShow
(
SSN int
)
2) Loop through the year and populate the temp table with customers that match the amount criteria
declare @firstDayOfTheYear datetime = '1/1/2012';
declare @lastDayOfTheYear datetime = '12/31/2012';
declare @currentDate datetime = @firstDayOfTheYear;
declare @amountThreshold money = 1000;
while @currentDate <= @lastDayOfTheYear
begin
    insert into CustomersToShow(SSN)
    select b.SSN
    from transactions a
    join customers b
        on a.CustKey = b.CustKey
    where TransactionDate >= @currentDate
        and TransactionDate <= DATEADD(day, 2, @currentDate)
    group by b.SSN
    having SUM(a.TransactionAmount) >= @amountThreshold
    set @currentDate = DATEADD(day,2,@currentDate)
end
3) And then just select
select a.TransNumber, b.SSN, a.TransactionDate, a.TransactionAmount
from transactions a
join customers b
on a.CustKey = b.CustKey
join CustomersToShow c
on b.SSN = c.SSN
Note: This will be slow...
While you could probably come up with a hacky way to do this via standard SQL, this is a problem that IMO is more suited to being solved by code (i.e. not by set-based logic / SQL).
It would be easy to solve if you sort the transaction list by customerKey and date, then loop through the data. Ideally I would do this in code, but alternatively you could write a stored procedure and use a loop and a cursor.
This is easy and well-suited to set-based logic if you look at it right. You simply need to join to a table that has every date range you're interested in. Every T-SQL database (Oracle has it built-in) should have a utility table named integers - it's very useful surprisingly often:
CREATE TABLE integers ( n smallint, constraint PK_integers primary key clustered (n))
INSERT integers select top 1000 row_number() over (order by o.id) from sysobjects o cross join sysobjects
Your date table then looks like:
SELECT dateadd(day, n-1, '2012') AS dtFrom, dateadd(day, n+1, '2012') AS dtTo
from integers where n <= 366
You can then (abbreviating):
SELECT ssn, dtFrom
FROM yourTables t
JOIN ( SELECT dateadd(day, n-1, '2012') as dtFrom, dateadd(day, n+1, '2012') AS dtTo
from integers where n <= 366 ) d on t.date between d.dtFrom and d.dtTo
GROUP BY ssn, dtFrom
HAVING sum(amount) > 1000
You can select all your transactions:
WHERE ssn in ( SELECT distinct ssn from ( <above query> ) t )
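Assembled, the whole thing might look roughly like this (a sketch only; I've assumed the tables are named Transactions and Customers and used the column names from the question):
SELECT t.*
FROM Transactions t
JOIN Customers c ON c.CustomerKey = t.CustomerKey
WHERE c.SSN IN
(
    SELECT DISTINCT ssn FROM
    (
        SELECT c2.SSN AS ssn, d.dtFrom
        FROM Transactions t2
        JOIN Customers c2 ON c2.CustomerKey = t2.CustomerKey
        JOIN ( SELECT dateadd(day, n-1, '2012') AS dtFrom, dateadd(day, n+1, '2012') AS dtTo
               FROM integers WHERE n <= 366 ) d
          ON t2.Transdate BETWEEN d.dtFrom AND d.dtTo
        GROUP BY c2.SSN, d.dtFrom
        HAVING SUM(t2.TransAmt) > 1000
    ) q
)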
I feel like I've seen this question asked before, but neither the SO search nor google is helping me... maybe I just don't know how to phrase the question. I need to count the number of events (in this case, logins) per day over a given time span so that I can make a graph of website usage. The query I have so far is this:
select
count(userid) as numlogins,
count(distinct userid) as numusers,
convert(varchar, entryts, 101) as date
from
usagelog
group by
convert(varchar, entryts, 101)
This does most of what I need (I get a row per date as the output containing the total number of logins and the number of unique users on that date). The problem is that if no one logs in on a given date, there will not be a row in the dataset for that date. I want it to add in rows indicating zero logins for those dates. There are two approaches I can think of for solving this, and neither strikes me as very elegant.
1) Add a column to the result set that lists the number of days between the start of the period and the date of the current row. When I'm building my chart output, I'll keep track of this value and if the next row is not equal to the current row plus one, insert zeros into the chart for each of the missing days.
2) Create a "date" table that has all the dates in the period of interest and outer join against it. Sadly, the system I'm working on already has a table for this purpose that contains a row for every date far into the future... I don't like that, and I'd prefer to avoid using it, especially since that table is intended for another module of the system and would thus introduce a dependency on what I'm developing currently.
Any better solutions or hints at better search terms for google? Thanks.
Frankly, I'd do this programmatically when building the final output. You're essentially trying to read something from the database which is not there (data for days that have no data). SQL isn't really meant for that sort of thing.
If you really want to do that, though, a "date" table seems your best option. To make it a bit nicer, you could generate it on the fly, using e.g. your DB's date functions and a derived table.
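For example, in SQL Server the last 30 dates can be derived on the fly from a built-in numbers source (a sketch; master.dbo.spt_values is just one convenient supply of small integers):
SELECT DATEADD(dd, DATEDIFF(dd, 0, GETDATE()) - v.number, 0) AS [date]
FROM master.dbo.spt_values v
WHERE v.type = 'P' AND v.number BETWEEN 0 AND 29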
I had to do exactly the same thing recently. This is how I did it in T-SQL (YMMV on speed, but I've found it performant enough over a couple of million rows of event data):
DECLARE @DaysTable TABLE ( [Year] INT, [Day] INT )
DECLARE @StartDate DATETIME
SET @StartDate = whatever
WHILE (@StartDate <= GETDATE())
BEGIN
    INSERT INTO @DaysTable ( [Year], [Day] )
    SELECT DATEPART(YEAR, @StartDate), DATEPART(DAYOFYEAR, @StartDate)
    SELECT @StartDate = DATEADD(DAY, 1, @StartDate)
END
-- This gives me a table of all days since whenever
-- (you could select @StartDate as the minimum date of your usage log)
SELECT days.Year, days.Day, events.NumEvents
FROM @DaysTable AS days
LEFT JOIN (
    SELECT
        COUNT(*) AS NumEvents,
        DATEPART(YEAR, LogDate) AS [Year],
        DATEPART(DAYOFYEAR, LogDate) AS [Day]
    FROM LogData
    GROUP BY
        DATEPART(YEAR, LogDate),
        DATEPART(DAYOFYEAR, LogDate)
) AS events ON days.Year = events.Year AND days.Day = events.Day
Create a memory table (a table variable) where you insert your date ranges, then outer join the logins table against it. Group by your start date, then you can perform your aggregations and calculations.
The strategy I normally use is to UNION with the opposite of the query, generally a query that retrieves data for rows that don't exist.
If I wanted to get the average mark for a course, but some courses weren't taken by any students, I'd need to UNION with those not taken by anyone to display a row for every class:
SELECT AVG(mark), course FROM `marks`
UNION
SELECT NULL, course FROM courses WHERE course NOT IN
(SELECT course FROM marks)
Your query will be more complex, but the same principle should apply. You may indeed need a table of dates for your second query.
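Applied to the login counts, that could look something like this (a sketch only; it assumes you have a dates table, here called dates with a column thedate, covering the period of interest):
SELECT COUNT(userid) AS numlogins,
       COUNT(DISTINCT userid) AS numusers,
       CONVERT(VARCHAR, entryts, 101) AS date
FROM usagelog
GROUP BY CONVERT(VARCHAR, entryts, 101)
UNION
SELECT 0, 0, CONVERT(VARCHAR, d.thedate, 101)
FROM dates d
WHERE CONVERT(VARCHAR, d.thedate, 101) NOT IN
      (SELECT CONVERT(VARCHAR, entryts, 101) FROM usagelog)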
Option 1
You can create a temp table and insert dates with the range and do a left outer join with the usagelog
Option 2
You can programmatically insert the missing dates while evaluating the result set to produce the final output
WITH q(n) AS
(
SELECT 0
UNION ALL
SELECT n + 1
FROM q
WHERE n < 99
),
qq(n) AS
(
SELECT 0
UNION ALL
SELECT n + 1
FROM q
WHERE n < 99
),
dates AS
(
SELECT q.n * 100 + qq.n AS ndate
FROM q, qq
)
SELECT COUNT(userid) as numlogins,
COUNT(DISTINCT userid) as numusers,
CAST('2000-01-01' AS DATETIME) + ndate as date
FROM dates
LEFT JOIN
usagelog
ON entryts >= CAST('2000-01-01' AS DATETIME) + ndate
AND entryts < CAST('2000-01-01' AS DATETIME) + ndate + 1
GROUP BY
ndate
This will select up to 10,000 dates constructed on the fly, which should be enough for 30 years.
SQL Server has a default limit of 100 recursions per CTE; that's why each inner query returns up to 100 rows.
If you need more than 10,000, just add a third CTE qqq(n) and cross-join with it in dates.
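A sketch of that extension (untested; qqq sits alongside q and qq in the WITH list, and dates is redefined to combine all three counters, giving up to 1,000,000 dates):
qqq(n) AS
(
    SELECT 0
    UNION ALL
    SELECT n + 1
    FROM q
    WHERE n < 99
),
dates AS
(
    SELECT q.n * 10000 + qq.n * 100 + qqq.n AS ndate
    FROM q, qq, qqq
)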