Realizing that another question I asked before may be too difficult, I'm changing my requirements.
I work for a credit card company. Our database has a customer table and a transaction table. Fields in the customer table are SSN and CustomerKey. Fields in the transaction table are CustomerKey, transaction date (Transdate), and transaction amount (TransAmt).
I need a query that can identify each SSN where the sum of any of their transaction amounts is > 1000 within a two-day period in 2012. If an SSN has transaction amounts > 1000 within a two-day period, I need the query to return all the transactions for that SSN.
Here is an example of the raw data in the Transaction Table:
Trans# | CustKey | Date     | Amount
1      | 12345   | 01/01/12 | $600
2      | 12345   | 01/02/12 | $500
3      | 67890   | 01/03/12 | $10
4      | 98765   | 04/01/12 | $600
5      | 43210   | 04/02/12 | $600
6      | 43210   | 04/03/12 | $100
7      | 13579   | 04/02/12 | $600
8      | 24568   | 04/03/12 | $100
Here is an example of the raw data in the Customer Table:
CustKey | SSN
12345   | 123456789
67890   | 123456789
98765   | 987654321
43210   | 987654321
13579   | 246801357
24568   | 246801357
Here are the results I need:
Trans# | SSN       | Date     | Amount
1      | 123456789 | 01/01/12 | $600
2      | 123456789 | 01/02/12 | $500
3      | 123456789 | 01/03/12 | $10
4      | 987654321 | 04/01/12 | $600
5      | 987654321 | 04/02/12 | $600
6      | 987654321 | 04/03/12 | $100
As you can see, my results include all transactions for SSNs 123456789 and 987654321, and exclude SSN 246801357.
One way of doing this is to roll through each two day period within a year. Here is an SQL Fiddle example.
The idea is pretty simple:
1) Create a temp table to store all matching customers
create table CustomersToShow
(
SSN int
)
2) Loop through a year and populate the temp table with customers that match the amount criteria
declare @firstDayOfTheYear datetime = '1/1/2012';
declare @lastDayOfTheYear datetime = '12/31/2012';
declare @currentDate datetime = @firstDayOfTheYear;
declare @amountThreshold money = 1000;

while @currentDate <= @lastDayOfTheYear
begin
    insert into CustomersToShow(SSN)
    select b.SSN
    from transactions a
    join customers b
        on a.CustKey = b.CustKey
    where TransactionDate >= @currentDate
        and TransactionDate <= DATEADD(day, 2, @currentDate)
    group by b.SSN
    having SUM(a.TransactionAmount) >= @amountThreshold

    set @currentDate = DATEADD(day, 2, @currentDate)
end
3) And then just select
select a.TransNumber, b.SSN, a.TransactionDate, a.TransactionAmount
from transactions a
join customers b
on a.CustKey = b.CustKey
join CustomersToShow c
on b.SSN = c.SSN
Note: This will be slow...
While you could probably come up with a hacky way to do this via standard SQL, this is a problem that IMO is more suited to being solved by code (i.e. not by set-based logic / SQL).
It would be easy to solve if you sort the transaction list by customerKey and date, then loop through the data. Ideally I would do this in code, but alternatively you could write a stored procedure and use a loop and a cursor.
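For illustration, a rough sketch of that cursor idea in T-SQL follows. This is not the author's code: it borrows the table and column names used in the answer above (transactions, customers, TransNumber, TransactionDate, TransactionAmount) and assumes amounts are non-negative, so a trailing two-day buffer per SSN is enough to catch any qualifying window.
declare @flagged table (SSN int primary key);
declare @buffer  table (TransactionDate datetime, TransactionAmount money);
declare @ssn int, @prevSsn int = null, @dt datetime, @amt money;

declare trans_cur cursor local fast_forward for
    select c.SSN, t.TransactionDate, t.TransactionAmount
    from transactions t
    join customers c on c.CustKey = t.CustKey
    order by c.SSN, t.TransactionDate;

open trans_cur;
fetch next from trans_cur into @ssn, @dt, @amt;
while @@FETCH_STATUS = 0
begin
    -- new customer: start a fresh window
    if @prevSsn is null or @ssn <> @prevSsn
        delete from @buffer;

    -- drop rows that fell out of the trailing two-day window, then add the current row
    delete from @buffer where TransactionDate <= DATEADD(day, -2, @dt);
    insert into @buffer (TransactionDate, TransactionAmount) values (@dt, @amt);

    if (select SUM(TransactionAmount) from @buffer) > 1000
       and not exists (select 1 from @flagged where SSN = @ssn)
        insert into @flagged (SSN) values (@ssn);

    set @prevSsn = @ssn;
    fetch next from trans_cur into @ssn, @dt, @amt;
end
close trans_cur;
deallocate trans_cur;

-- return every transaction for the flagged SSNs
select t.TransNumber, c.SSN, t.TransactionDate, t.TransactionAmount
from transactions t
join customers c on c.CustKey = t.CustKey
where c.SSN in (select SSN from @flagged);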
This is easy and well-suited to set-based logic if you look at it right. You simply need to join to a table that has every date range you're interested in. Every T-SQL database should have a utility table named integers (Oracle has an equivalent built in) - it's useful surprisingly often:
CREATE TABLE integers ( n smallint, constraint PK_integers primary key clustered (n))
INSERT integers select top 1000 row_number() over (order by o.id) from sysobjects o cross join sysobjects
Your date table then looks like:
SELECT dateadd(day, n-1, '2012') AS dtFrom, dateadd(day, n+1, '2012') AS dtTo
from integers where n <= 366
You can then (abbreviating):
SELECT ssn, dtFrom
FROM yourTables t
JOIN ( SELECT dateadd(day, n-1, '2012') as dtFrom, dateadd(day, n+1, '2012') AS dtTo
from integers where n <= 366 ) d on t.date between d.dtFrom and d.dtTo
GROUP BY ssn, dtFrom
HAVING sum(amount) > 1000
You can select all your transactions:
WHERE ssn in ( SELECT distinct ssn from ( <above query> ) t )
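Assembled, the whole thing might look roughly like this (still abbreviated, as above: yourTables stands for your customer/transaction join, and ssn/date/amount for your actual column names):
SELECT t.*
FROM yourTables t
WHERE t.ssn IN
(
    SELECT DISTINCT ssn
    FROM (
        SELECT ssn, dtFrom
        FROM yourTables t2
        JOIN ( SELECT dateadd(day, n-1, '2012') AS dtFrom, dateadd(day, n+1, '2012') AS dtTo
               FROM integers WHERE n <= 366 ) d ON t2.date BETWEEN d.dtFrom AND d.dtTo
        GROUP BY ssn, dtFrom
        HAVING SUM(amount) > 1000
    ) flagged
)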
Related
I have following SQL statement:
declare @dateFrom datetime = '2015-01-01';
declare @dateTo datetime = '2015-12-31';
select
DATEPART(WEEK, OrderDate) Week, Count(*) Number
from
table
where
OrderDate between @dateFrom and @dateTo
group by
DATEPART(WEEK, OrderDate)
order by
Week
It returns the number of orders per week, but if there were no orders at all this respective week is omitted.
How can I change the statement so it will also include weeks with 0 orders?
Gofr1 was on the right track but there are issues with the query.
1 - You do not want to use the datediff() of the begin and end dates as the stopping condition. It works for a whole year but will not work for partial ranges.
2 - I would add year to the key since that will allow you to handle cross year cases.
3 - You need to roll up the sales before using the Year Week Common Table Expression. Otherwise you just toss out the nulls again (order dates) with the WHERE clause.
Remember, logically the join is applied then the where clause.
The code below uses the Adventure Works 2012 DW database and obtains the correct answer.
Uses a tally table for some numbers.
Generates weekly dates and calculates year/week key for given range.
Rolls up sales from the fact table for given range.
Left joins the keys to the sales and turns null totals to zero.
Code:
-- Declare start and end date
DECLARE @dte_From datetime = '2005-07-01';
DECLARE @dte_To datetime = '2007-12-31';
-- About 200K numbers
WITH cte_Tally (n) as
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_views a
CROSS JOIN sys.all_views b
),
-- Create year/week key
cte_YearWeekKey (MyKey) as
(
SELECT
year(dateadd(week, t.n, @dte_From)) * 1000 +
datepart(week, dateadd(week, t.n, @dte_From)) as MyKey
FROM
cte_Tally as t
WHERE
dateadd(week, t.n, @dte_From) < @dte_To
),
-- Must roll up here
cte_Sales (MyKey, MyTotal) as
(
SELECT
YEAR(F.OrderDate) * 1000 +
DATEPART(WEEK, F.OrderDate) as MyKey,
COUNT(*) as MyTotal
FROM
[AdventureWorksDW2012].[dbo].[FactResellerSales] F
WHERE
F.OrderDate between @dte_From and @dte_To
GROUP BY
YEAR(F.OrderDate) * 1000 +
DATEPART(WEEK, F.OrderDate)
)
-- Join the results
SELECT
K.MyKey, ISNULL(S.MyTotal, 0) as Total
FROM
cte_YearWeekKey as K
LEFT JOIN
cte_Sales as S
ON
k.MyKey = S.MyKey
I'm creating an internal holiday booking system and I need to put business logic rules into place. I need to check how many people are booked off on the dates between the Start and End date because, for example, only 2 apprentices may be booked off on any 1 day, but I have no way of grabbing the dates in between.
Any help would be appreciated
Below is the job role table
You haven't posted the RDBMS, or the names of the tables, or what exactly the Job Role table is supposed to be doing ... but I'll take a shot at this anyway. I'm using a recursive CTE to generate a list of dates, but it would be far better for you to use a Date table, and I don't even know whether your RDBMS will support this. I've also posted the syntax for a table variable below that populates data mimicking your sample.
The final output, naturally, will need to be customized to do whatever you need to do. This does show you, however, a list of every date when more than one employee is on vacation. Add extra conditions to the second JOIN or to the WHERE clause to filter on other things (like JobRole) if necessary.
-- Code originally from http://smehrozalam.wordpress.com/2009/06/09/t-sql-using-common-table-expressions-cte-to-generate-sequences/
-- Define start and end limits
DECLARE @todate DATETIME, @fromdate DATETIME
SELECT @fromdate='2014-01-01', @todate=GETDATE()-1
DECLARE @TimeOff TABLE (StartDate DATETIME, EndDate DATETIME, EmployeeID INT)
INSERT INTO @TimeOff (StartDate, EndDate, EmployeeID)
SELECT '1/1/2014', '1/7/2014', 7 UNION
SELECT '2/1/2014', '2/7/2014', 7 UNION
SELECT '3/3/2014', '3/9/2014', 7 UNION
SELECT '2/5/2014', '2/6/2014', 8
;WITH DateSequence( Date ) AS -- this will list all dates. Use this if you don't have a date table
(
SELECT @fromdate as Date
UNION ALL
SELECT DATEADD(DAY, 1, Date)
FROM DateSequence
WHERE Date < @todate
)
--select result
SELECT DateSequence.Date, TimeOffA.StartDate, TimeOffB.EndDate, TimeOffA.EmployeeID
FROM
DateSequence -- a full list of all possible dates
INNER JOIN
@TimeOff TimeOffA ON -- all dates when an employee is on vacation -- replace this with your actual table's name
DateSequence.Date BETWEEN TimeOffA.StartDate AND TimeOffA.EndDate
INNER JOIN
@TimeOff TimeOffB ON -- all dates when an employee who is NOT employee A is on vacation -- replace this with your actual table's name
DateSequence.Date BETWEEN TimeOffB.StartDate AND TimeOffB.EndDate AND
TimeOffA.EmployeeID <> TimeOffB.EmployeeID
option (MaxRecursion 2000)
I have a select statement that returns two columns, a date column, and a count(value) column. When the count(value) column doesn't have any records, I need it to return a 0. Currently, it just skips that date record all together.
Here are the basics of the query.
select convert(varchar(25), DateTime, 101) as recordDate,
count(Value) as recordCount
from History
where Value < 700
group by convert(varchar(25), DateTime, 101)
Here are some results that I'm getting.
+------------+-------------+
| recordDate | recordCount |
+------------+-------------+
| 02/26/2014 |         143 |
| 02/27/2014 |         541 |
| 03/01/2014 |          21 |
| 03/02/2014 |          60 |
| 03/03/2014 |         113 |
+------------+-------------+
Notice it skips 2/28/2014. This is because the count(value) column doesn't have anything to count. How can I add the record in there that has the date of 2/28/2014, with a recordCount of 0?
To generate rows for missing dates you can join your data to a date dimension table
It would look something like this:
select convert(varchar(25), ddt.DateField, 101) as recordDate,
       count(h.Value) as recordCount
from History h
right join dbo.DateDimensionTable ddt
    on ddt.DateField = convert(varchar(25), h.DateTime, 101)
    and h.Value < 700
group by convert(varchar(25), ddt.DateField, 101)
If your table uses the DateTime column to store dates only (meaning the time is always midnight), then you can replace this
right join dbo.DateDimensionTable ddt
on ddt.DateField = convert(varchar(25), h.DateTime, 101)
with this
right join dbo.DateDimensionTable ddt
on ddt.DateField = h.DateTime
You may use COUNT(*). It will return zero if nothing is found for the column. You may also group the result set by the Value column if needed.
select convert(varchar(25), DateTime, 101) as recordDate,
CASE WHEN count(value) =0 THEN 0 ELSE COUNT(value) END recordCount
from History
where Value < 700
group by convert(varchar(25), DateTime, 101)
When you use a group by, it only creates a distinct list of values that exist in your records. Since 20140228 has no records, it will not show up in the group by.
Your best bet is to generate a list of values, dates in your case, and left join or apply that table against your history table.
I can't seem to copy my T-SQL in here so here's a hastebin.
http://hastebin.com/winaqutego.vbs
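In case the link goes stale, a minimal sketch of that idea looks like this: generate the date list with a recursive CTE and move the Value filter into the join so empty days survive (the date bounds are just examples):
DECLARE @from date = '2014-02-26';
DECLARE @to   date = '2014-03-03';

WITH Dates AS
(
    SELECT @from AS d
    UNION ALL
    SELECT DATEADD(DAY, 1, d) FROM Dates WHERE d < @to
)
SELECT CONVERT(varchar(25), d.d, 101) AS recordDate,
       COUNT(h.Value)                 AS recordCount   -- counts only matching rows, so 0 on empty days
FROM Dates d
LEFT JOIN History h
       ON CONVERT(date, h.DateTime) = d.d
      AND h.Value < 700                                -- filter in the ON clause to keep empty days
GROUP BY d.d
ORDER BY d.d
OPTION (MAXRECURSION 0);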
The best practice would be for you to have a datamart where a separate dimensional table for dates is kept with all dates you might be interested at - even if they lack amounts. DMason's answer shows the query with such a dimensional table.
To keep with the best practices you would have a fact table where you'd keep these historical data already pre-grouped at the granularity level you need (daily, in this case), so you wouldn't need a GROUP BY unless you needed a coarser granularity (weekly, monthly, yearly).
And in both your operational and datamart databases the dates would be stored as dates, not...
But then, since this is the real world and you might not be able to change what somebody else made... If you: a) only care about the dates that appear in [History], and b) such dates are never stored with hours/minutes; then the following query might be what you need:
SELECT MyDates.DateTime, COUNT(H.DateTime) AS RecordCount
FROM (
SELECT DISTINCT DateTime FROM History
) MyDates
LEFT JOIN History H
ON MyDates.DateTime = H.Datetime
AND H.Value < 700
GROUP BY MyDates.DateTime
Do try to add an index on DateTime and to further constrain the query with an earliest/latest date for better performance.
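For example (the index name and the date bounds below are illustrative, not taken from the question):
-- an index to support the date join and the Value filter
CREATE INDEX IX_History_DateTime ON History (DateTime) INCLUDE (Value);

-- the same query as above, constrained to a known date range
SELECT MyDates.DateTime, COUNT(H.DateTime) AS RecordCount
FROM (
    SELECT DISTINCT DateTime
    FROM History
    WHERE DateTime >= '2014-02-01' AND DateTime < '2014-04-01'
) MyDates
LEFT JOIN History H
       ON MyDates.DateTime = H.DateTime
      AND H.Value < 700
GROUP BY MyDates.DateTime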
I agree that a Dates table (AKA time dimension) is the right solution, but there is a simpler option:
SELECT
CONVERT(VARCHAR(25), DateTime, 101) AS RecordDate,
SUM(CASE WHEN Value < 700 THEN 1 ELSE 0 END) AS RecordCount
FROM
History
GROUP BY
CONVERT(VARCHAR(25), DateTime, 101)
Try this:
DECLARE @Records TABLE (
    [RecordDate] DATETIME,
    [RecordCount] INT
)

DECLARE @Date DATETIME = '02/26/2014' -- Enter whatever date you want to start with
DECLARE @EndDate DATETIME = '03/31/2014' -- Enter whatever date you want to stop

WHILE (1=1)
BEGIN
    -- Insert the date into the temp table along with the count
    INSERT INTO @Records (RecordDate, RecordCount)
    VALUES (CONVERT(VARCHAR(25), @Date, 101),
            (SELECT COUNT(*) FROM dbo.YourTable WHERE RecordDate = @Date))

    -- Go to the next day
    SET @Date = DATEADD(d, 1, @Date)

    -- If we have surpassed the end date, break out of the loop
    IF (@Date > @EndDate) BREAK;
END

SELECT * FROM @Records
If your dates have time components, you would need to modify this to check for start and end of day in the SELECT COUNT(*)... query.
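For example, the correlated count inside the loop could become a range check instead of an equality test (a sketch, assuming RecordDate is a datetime that may carry a time part):
(SELECT COUNT(*)
 FROM dbo.YourTable
 WHERE RecordDate >= @Date
   AND RecordDate <  DATEADD(d, 1, @Date))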
I have a table with order information in an E-commerce store. Schema looks like this:
[Orders]
Id|SubTotal|TaxAmount|ShippingAmount|DateCreated
This table only contains data for actual orders, so if a day goes by without any orders, there is no sales data for that day.
I would like to select subtotal-per-day for the last 30 days, including those days with no sales.
The resultset would look like this:
Date | SalesSum
2009-08-01 | 15235
2009-08-02 | 0
2009-08-03 | 340
2009-08-04 | 0
...
Doing this only gives me data for the days that have orders:
select DateCreated as Date, sum(ordersubtotal) as SalesSum
from Orders
group by DateCreated
You could create a table called Dates, and select from that table and join the Orders table. But I really want to avoid that, because it doesn't work well enough when dealing with different time zones and things...
Please don't laugh. SQL is not my kind of thing... :)
Create a function that can generate a date table as follows:
(stolen from http://www.codeproject.com/KB/database/GenerateDateTable.aspx)
Create Function dbo.fnDateTable
(
    @StartDate datetime,
    @EndDate datetime,
    @DayPart char(5) -- support 'day','month','year','hour', default 'day'
)
Returns @Result Table
(
    [Date] datetime
)
As
Begin
    Declare @CurrentDate datetime
    Set @CurrentDate = @StartDate
    While @CurrentDate <= @EndDate
    Begin
        Insert Into @Result Values (@CurrentDate)
        Select @CurrentDate =
            Case
                When @DayPart='year'  Then DateAdd(yy,1,@CurrentDate)
                When @DayPart='month' Then DateAdd(mm,1,@CurrentDate)
                When @DayPart='hour'  Then DateAdd(hh,1,@CurrentDate)
                Else DateAdd(dd,1,@CurrentDate)
            End
    End
    Return
End
Then, join against that table
SELECT dates.Date as Date, ISNULL(sum(SubTotal+TaxAmount+ShippingAmount), 0) as SalesSum
FROM dbo.fnDateTable (dateadd(month,-1,CONVERT(VARCHAR(10),GETDATE(),111)),CONVERT(VARCHAR(10),GETDATE(),111),'day') dates
LEFT JOIN Orders
ON dates.Date = DateCreated
GROUP BY dates.Date
declare @oldest_date datetime
declare @daily_sum numeric(18,2)
declare @temp table(
    sales_date datetime,
    sales_sum numeric(18,2)
)

select @oldest_date = dateadd(day,-30,getdate())

while @oldest_date <= getdate()
begin
    set @daily_sum = (select sum(SubTotal) from SalesTable where DateCreated = @oldest_date)
    insert into @temp(sales_date, sales_sum) values(@oldest_date, @daily_sum)
    set @oldest_date = dateadd(day,1,@oldest_date)
end

select * from @temp
OK - I missed that 'last 30 days' part. The bit above, while not as clean, IMHO, as the date table, should work. Another variant would be to use the while loop to fill a temp table just with the last 30 days and do a left outer join with the result of my original query.
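That variant might look something like this sketch: fill a table variable with just the last 30 dates, then left join it to the plain GROUP BY query from the question (assuming DateCreated has no time portion, as elsewhere in this thread), so missing days come back as 0:
declare @days table (sales_date datetime not null)
declare @d datetime = dateadd(day, -30, convert(varchar(10), getdate(), 111))

while @d <= getdate()
begin
    insert into @days(sales_date) values(@d)
    set @d = dateadd(day, 1, @d)
end

select d.sales_date as Date, isnull(s.SalesSum, 0) as SalesSum
from @days d
left outer join (
    select DateCreated as Date, sum(SubTotal) as SalesSum
    from Orders
    group by DateCreated
) s on s.Date = d.sales_date
order by d.sales_date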
including those days with no sales.
That's the difficult part. I don't think the first answer will help you with that. I did something similar to this with a separate date table.
You can find the directions on how to do so here:
Date Table
I have a Log table with a LogID index column from which I never delete any records; the IDs run from 1 to ~10000000. Using this table I can write:
select
s.ddate, SUM(isnull(o.SubTotal,0))
from
(
select
cast(datediff(d,LogID,getdate()) as datetime) AS ddate
from
Log
where
LogID <31
) s right join orders o on o.orderdate = s.ddate
group by s.ddate
I actually did this today. We also have an e-commerce application. I don't want to fill our database with "useless" dates. I just do the group by, create all the days for the last N days in Java, and pair them with the date/sales results from the database.
Where is this ultimately going to end up? I ask only because it may be easier to fill in the empty days with whatever program is going to deal with the data instead of trying to get it done in SQL.
SQL is a wonderful language, and it is capable of a great many things, but sometimes you're just better off working the finer points of the data in the program instead.
(Revised a bit--I hit enter too soon)
I started poking at this, and as it hits some pretty tricky SQL concepts it quickly grew into the following monster. If feasible, you might be better off adapting THEn's solution; or, like many others advise, using application code to fill in the gaps could be preferable.
-- A temp table holding the 30 dates that you want to check
DECLARE @Foo Table (Date smalldatetime not null)
-- Populate the table using a common "tally table" methodology (I got this from SQL Server magazine long ago)
;WITH
L0 AS (SELECT 1 AS C UNION ALL SELECT 1), --2 rows
L1 AS (SELECT 1 AS C FROM L0 AS A, L0 AS B),--4 rows
L2 AS (SELECT 1 AS C FROM L1 AS A, L1 AS B),--16 rows
L3 AS (SELECT 1 AS C FROM L2 AS A, L2 AS B),--256 rows
Tally AS (SELECT ROW_NUMBER() OVER(ORDER BY C) AS Number FROM L3)
INSERT @Foo (Date)
select dateadd(dd, datediff(dd, 0, dateadd(dd, -number + 1, getdate())), 0)
from Tally
where Number < 31
Step 1 is to build a temp table containing the 30 dates that you are concerned with. That abstract weirdness is about the fastest way known to build a table of consecutive integers; add a few more subqueries, and you can populate millions or more in mere seconds. I take the first 30, and use dateadd and the current date/time to convert them into dates. If you already have a "fixed" table that has 1-30, you can use that and skip the CTE entirely (by replacing table "Tally" with your table).
The outer two date function calls remove the time portion of the generated date.
(Note that I assume that your order date also has no time portion -- otherwise you've got another common problem to resolve.)
For testing purposes I built table #Orders, and this gets you the rest:
SELECT f.Date, sum(ordersubtotal) as SalesSum
from @Foo f
left outer join #Orders o
on o.DateCreated = f.Date
group by f.Date
I created the Function DateTable as JamesMLV pointed out to me.
And then the SQL looks like this:
SELECT dates.date, ISNULL(SUM(ordersubtotal), 0) as Sales FROM [dbo].[DateTable] ('2009-08-01','2009-08-31','day') dates
LEFT JOIN Orders ON CONVERT(VARCHAR(10),Orders.datecreated, 111) = dates.date
group by dates.date
SELECT DateCreated,
SUM(SubTotal) AS SalesSum
FROM Orders
GROUP BY DateCreated
I feel like I've seen this question asked before, but neither the SO search nor google is helping me... maybe I just don't know how to phrase the question. I need to count the number of events (in this case, logins) per day over a given time span so that I can make a graph of website usage. The query I have so far is this:
select
count(userid) as numlogins,
count(distinct userid) as numusers,
convert(varchar, entryts, 101) as date
from
usagelog
group by
convert(varchar, entryts, 101)
This does most of what I need (I get a row per date as the output containing the total number of logins and the number of unique users on that date). The problem is that if no one logs in on a given date, there will not be a row in the dataset for that date. I want it to add in rows indicating zero logins for those dates. There are two approaches I can think of for solving this, and neither strikes me as very elegant.
1) Add a column to the result set that lists the number of days between the start of the period and the date of the current row. When I'm building my chart output, I'll keep track of this value and if the next row is not equal to the current row plus one, insert zeros into the chart for each of the missing days.
2) Create a "date" table that has all the dates in the period of interest and outer join against it. Sadly, the system I'm working on already has a table for this purpose that contains a row for every date far into the future... I don't like that, and I'd prefer to avoid using it, especially since that table is intended for another module of the system and would thus introduce a dependency on what I'm developing currently.
Any better solutions or hints at better search terms for google? Thanks.
Frankly, I'd do this programmatically when building the final output. You're essentially trying to read something from the database which is not there (data for days that have no data). SQL isn't really meant for that sort of thing.
If you really want to do that, though, a "date" table seems your best option. To make it a bit nicer, you could generate it on the fly, using, e.g., your DB's date functions and a derived table.
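A sketch of that on-the-fly approach, using a derived table of the last 30 days built from ROW_NUMBER() (the table and column names come from the question; the 30-day span is just an example):
SELECT d.dt                      AS [date],
       COUNT(u.userid)           AS numlogins,
       COUNT(DISTINCT u.userid)  AS numusers
FROM (
    SELECT TOP (30)
           DATEADD(DAY, 1 - ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
                   CONVERT(varchar, GETDATE(), 101)) AS dt
    FROM sys.all_objects
) d
LEFT JOIN usagelog u
       ON u.entryts >= d.dt
      AND u.entryts <  DATEADD(DAY, 1, d.dt)
GROUP BY d.dt
ORDER BY d.dt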
I had to do exactly the same thing recently. This is how I did it in T-SQL (YMMV on speed, but I've found it performant enough over a coupla million rows of event data):
DECLARE @DaysTable TABLE ( [Year] INT, [Day] INT )

DECLARE @StartDate DATETIME
SET @StartDate = whatever

WHILE (@StartDate <= GETDATE())
BEGIN
    INSERT INTO @DaysTable ( [Year], [Day] )
    SELECT DATEPART(YEAR, @StartDate), DATEPART(DAYOFYEAR, @StartDate)

    SELECT @StartDate = DATEADD(DAY, 1, @StartDate)
END

-- This gives me a table of all days since whenever
-- (you could select @StartDate as the minimum date of your usage log)
SELECT days.Year, days.Day, events.NumEvents
FROM @DaysTable AS days
LEFT JOIN (
SELECT
COUNT(*) AS NumEvents,
DATEPART(YEAR, LogDate) AS [Year],
DATEPART(DAYOFYEAR, LogDate) AS [Day]
FROM LogData
GROUP BY
DATEPART(YEAR, LogDate),
DATEPART(DAYOFYEAR, LogDate)
) AS events ON days.Year = events.Year AND days.Day = events.Day
Create a memory table (a table variable) where you insert your date ranges, then outer join the logins table against it. Group by your start date, then you can perform your aggregations and calculations.
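A hedged sketch of that approach, using the column names from the question (usagelog, userid, entryts) and the last 30 days as the range:
DECLARE @days TABLE (daystart DATETIME NOT NULL, dayend DATETIME NOT NULL)
DECLARE @d DATETIME = DATEADD(DAY, -30, CONVERT(varchar, GETDATE(), 101))

WHILE @d <= GETDATE()
BEGIN
    INSERT INTO @days (daystart, dayend) VALUES (@d, DATEADD(DAY, 1, @d))
    SET @d = DATEADD(DAY, 1, @d)
END

SELECT d.daystart                AS [date],
       COUNT(u.userid)           AS numlogins,
       COUNT(DISTINCT u.userid)  AS numusers
FROM @days d
LEFT JOIN usagelog u
       ON u.entryts >= d.daystart
      AND u.entryts <  d.dayend
GROUP BY d.daystart
ORDER BY d.daystart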
The strategy I normally use is to UNION with the opposite of the query, generally a query that retrieves data for rows that don't exist.
If I wanted to get the average mark for a course, but some courses weren't taken by any students, I'd need to UNION with those not taken by anyone to display a row for every class:
SELECT AVG(mark), course FROM `marks`
UNION
SELECT NULL, course FROM courses WHERE course NOT IN
(SELECT course FROM marks)
Your query will be more complex but the same principle should apply. You may indeed need a table of dates for your second query
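Applied to the login question, the principle would look roughly like this (assuming a Dates table with a thedate column; the UNION branch supplies the zero rows):
SELECT COUNT(userid) AS numlogins,
       COUNT(DISTINCT userid) AS numusers,
       CONVERT(varchar, entryts, 101) AS date
FROM usagelog
GROUP BY CONVERT(varchar, entryts, 101)
UNION
SELECT 0, 0, CONVERT(varchar, d.thedate, 101)
FROM Dates d
WHERE CONVERT(varchar, d.thedate, 101) NOT IN
      (SELECT CONVERT(varchar, entryts, 101) FROM usagelog)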
Option 1
You can create a temp table, insert the dates in the range, and do a left outer join with the usagelog (a sketch follows below these options)
Option 2
You can programmatically insert the missing dates while evaluating the result set to produce the final output
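Here is a sketch of Option 1, populating a temp table from a numbers source and left joining it to usagelog (master.dbo.spt_values is just one convenient, if undocumented, source of numbers; a real date table or a tally table works the same way):
CREATE TABLE #dates (d DATETIME NOT NULL)

INSERT INTO #dates (d)
SELECT DATEADD(DAY, -v.number, CONVERT(varchar, GETDATE(), 101))
FROM master.dbo.spt_values v
WHERE v.type = 'P' AND v.number BETWEEN 0 AND 29   -- last 30 days

SELECT dt.d                      AS [date],
       COUNT(u.userid)           AS numlogins,
       COUNT(DISTINCT u.userid)  AS numusers
FROM #dates dt
LEFT OUTER JOIN usagelog u
       ON u.entryts >= dt.d
      AND u.entryts <  DATEADD(DAY, 1, dt.d)
GROUP BY dt.d
ORDER BY dt.d

DROP TABLE #dates
Either way, it is the outer join against the generated dates that supplies the zero rows.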
WITH q(n) AS
(
SELECT 0
UNION ALL
SELECT n + 1
FROM q
WHERE n < 99
),
qq(n) AS
(
SELECT 0
UNION ALL
SELECT n + 1
FROM qq
WHERE n < 99
),
dates AS
(
SELECT q.n * 100 + qq.n AS ndate
FROM q, qq
)
SELECT COUNT(userid) as numlogins,
COUNT(DISTINCT userid) as numusers,
CAST('2000-01-01' AS DATETIME) + ndate as date
FROM dates
LEFT JOIN
usagelog
ON entryts >= CAST('2000-01-01' AS DATETIME) + ndate
AND entryts < CAST('2000-01-01' AS DATETIME) + ndate + 1
GROUP BY
ndate
This will select up to 10,000 dates constructed on the fly, that should be enough for 30 years.
SQL Server has a limitation of 100 recursions per CTE, that's why the inner queries can return up to 100 rows each.
If you need more than 10,000, just add a third CTE qqq(n) and cross-join with it in dates.