Getting totals over trailing three-week periods - SQL

I have a large table of (say) users and their visits to various webpages. An example of the table could be:
schema.page_views
date| user_id| webpage|
----------+---------+----------------+
01/05/2012|aaaaaaaaa|www.example1.com|
01/06/2012|aaaaaaaaa|www.example2.com|
01/06/2012|ababababa|www.example1.com|
...
01/05/2013|aaaaaaaaa|www.example2.com|
01/05/2013|ababababa|www.example1.com|
03/05/2013|aaaaaaaaa|www.example2.com|
Additionally, I have a calendar lookup table:
schema.weeks
week_number| begin_date| end_date|year|
-----------+-----------+-----------+----+
18| 2012-04-23| 2012-04-30|2012|
17| 2012-04-15| 2012-04-22|2012|
16| 2012-04-08| 2012-04-14|2012|
What I would ultimately like from these two tables is a trailing three week total of page views. More specifically, if I was issuing this query today, I would like something along the lines of:
webpage| 1-3| 4-6| 7-9| 10-12|
----------------+-------+-------+-------+-------+
www.example1.com| 124875| 175682| 168542| 159780|
www.example2.com| three week period numbers |
where '1-3' refers to the most recent three weeks, '4-6' refers to four to six weeks ago, etc.
So far, I've obtained the weekly total page views for each website and week, as follows:
-- join to get begin and end_dates
with t1 as (
select a.*, b.* from schema.page_views a
join schema.weeks b
on a.date between b.begin_date and b.end_date),
-- now aggregate per week
weekly_vol as (
select webpage, year, week_number, begin_date, end_date,
count(*) volume
from t1
group by 1, 2, 3, 4, 5)
select * from weekly_vol limit 1;
gives, for example,
webpage| year| week_number| begin_date| end_date| volume|
----------------+-----+------------+-----------+-----------+-------+
www.example1.com| 2012| 18| 2012-04-23| 2012-04-30| 41871|
With a table like this, how can I get to the table I would like? I have thought of joining it on itself with something like on a.week_number between b.week_number and b.week_number + 2 but I don't think this idea will work when the last twelve weeks include the start of a new year.
Has anyone done something similar, and if so, how did you approach the problem? Suggestions for a different approach would be appreciated equally.

Your question is a bit unclear. Are week numbers just within a year or over all time? Do you want the sums relative to the last week in the data or relative to the current date?
In any case, the following does something that is at least similar to what you want. The key is the conditional aggregation in the outer query:
with t1 as (
select pv.*, w.*,
max(w.week_number) over (partition by webpage, year) as last_week_number
from schema.page_views pv join
schema.weeks w
on pv.date between w.begin_date and w.end_date
),
weekly_vol as (
select webpage, year, week_number, begin_date, end_date,
count(*) as volume
from t1
group by webpage, year, week_number, begin_date, end_date
)
select webpage, year,
sum(case when last_week_number - week_number < 3
then volume else 0
end) as week_1_3,
sum(case when last_week_number - week_number >= 3 and last_week_number - week_number < 6
then volume else 0
end) as week_4_6
from weekly_vol
group by webpage, year;
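If the trailing windows have to cross a year boundary (the concern raised in the question), one option is to derive a consecutive week index from the calendar table instead of relying on (year, week_number). A minimal sketch in standard SQL, assuming the schema above; treat it as illustrative rather than tested:
-- give every calendar week a consecutive index so that
-- "n weeks ago" arithmetic works across year boundaries
with indexed_weeks as (
select begin_date, end_date,
dense_rank() over (order by begin_date) as week_idx
from schema.weeks),
weekly_vol as (
select pv.webpage, w.week_idx, count(*) as volume
from schema.page_views pv
join indexed_weeks w
on pv.date between w.begin_date and w.end_date
group by pv.webpage, w.week_idx),
latest as (
select max(week_idx) as max_idx from weekly_vol)
select webpage,
sum(case when max_idx - week_idx between 0 and 2 then volume else 0 end) as weeks_1_3,
sum(case when max_idx - week_idx between 3 and 5 then volume else 0 end) as weeks_4_6,
sum(case when max_idx - week_idx between 6 and 8 then volume else 0 end) as weeks_7_9,
sum(case when max_idx - week_idx between 9 and 11 then volume else 0 end) as weeks_10_12
from weekly_vol cross join latest
group by webpage;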

Related

Retrieve Customers with a Monthly Order Frequency greater than 4

I am trying to optimize the below query to fetch all customers in the last three months who have a monthly order frequency of 4 or more for each of the past three months.
Customer ID| Feb| Mar| Apr|
-----------+----+----+----+
       0001|   4|   5|   6|
       0002|   3|   2|   4|
       0003|   4|   2|   3|
In the above table, only the customer with Customer ID 0001 should be picked, as they consistently have 4 or more orders per month.
Below is a query I have written, which pulls all customers with an average purchase frequency of 4 in the last 90 days, but it does not consider whether there were consistently 4 or more purchases in each of the last three months.
Query:
SELECT distinct lines.customer_id Customer_ID, (COUNT(lines.order_id)/90) PurchaseFrequency
from fct_customer_order_lines lines
LEFT JOIN product_table product
ON lines.entity_id= product.entity_id
AND lines.vendor_id= product.vendor_id
WHERE LOWER(product.country_code)= "IN"
AND lines.date >= DATE_SUB(CURRENT_DATE() , INTERVAL 90 DAY )
AND lines.date < CURRENT_DATE()
GROUP BY Customer_ID
HAVING PurchaseFrequency >=4;
I tried to use window functions, but I am not sure whether they are needed in this case.
I would sum the orders per month instead of computing the average, and then retrieve those customers whose monthly sum is greater than or equal to 4 in each of the last three months.
Also, I think you should select your interval using "month(CURRENT_DATE()) - 3" instead of using a window of 90 days. Of course, if needed, you should handle the case where current_date is in Jan-Feb-Mar, and in that case go back to Oct-Nov-Dec of the previous year.
I'm not familiar with Google BigQuery so I can't write your query, but I hope this helps.
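For what it's worth, a minimal BigQuery-style sketch of that idea, reusing the table and column names from the question (untested, and the product/country join is omitted for brevity):
-- count orders per customer per month, then keep customers whose
-- every month in the three-month window has at least 4 orders
WITH monthly AS (
SELECT lines.customer_id,
DATE_TRUNC(lines.date, MONTH) AS order_month,
COUNT(lines.order_id) AS monthly_orders
FROM fct_customer_order_lines lines
WHERE lines.date >= DATE_TRUNC(DATE_SUB(CURRENT_DATE(), INTERVAL 3 MONTH), MONTH)
AND lines.date < DATE_TRUNC(CURRENT_DATE(), MONTH)
GROUP BY customer_id, order_month
)
SELECT customer_id
FROM monthly
GROUP BY customer_id
HAVING COUNT(*) = 3 -- present in all three months
AND MIN(monthly_orders) >= 4; -- at least 4 orders in each of them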
So I've found the solution to this using the WITH operator, as below:
WITH filtered_orders AS (
select
distinct customer_id ID,
extract(MONTH from date) Order_Month,
count(order_id) CountofOrders
from customer_order_lines lines
where EXTRACT(YEAR FROM date) = 2022 AND EXTRACT(MONTH FROM date) IN (2,3,4)
group by ID, Order_Month
having CountofOrders>=4)
select distinct ID
from filtered_orders
group by ID
having count(Order_Month) =3;
Hope this helps!
An option could be to first count the orders by month and then filter for users whose purchases in all months are above your threshold:
WITH ORDERS_BY_MONTH AS (
SELECT
DATE_TRUNC(lines.date, MONTH) PurchaseMonth,
lines.customer_id Customer_ID,
COUNT(lines.order_id) PurchaseFrequency
FROM fct_customer_order_lines lines
LEFT JOIN product_table product
ON lines.entity_id= product.entity_id
AND lines.vendor_id= product.vendor_id
WHERE LOWER(product.country_code)= "IN"
AND lines.date >= DATE_SUB(CURRENT_DATE() , INTERVAL 90 DAY )
AND lines.date < CURRENT_DATE()
GROUP BY PurchaseMonth, Customer_ID
)
SELECT
Customer_ID,
AVG(PurchaseFrequency) AvgPurchaseFrequency
FROM ORDERS_BY_MONTH
GROUP BY Customer_ID
HAVING COUNT(1) = COUNTIF(PurchaseFrequency >= 4)

Finding id's available in previous weeks but not in current week

How can I find ids which were present in previous weeks but are not available in the current week, on a rolling basis? For example:
Week1 has id 1,2,3,4,5
Week2 has id 3,4,5,7,8
Week3 has id 1,3,5,10,11
So I found that ids 1 and 2 are missing in week 2, and ids 2, 4, 7, 8 are missing in week 3 relative to the previous 2 weeks. But how can this be done on a rolling window for a large amount of data distributed over a period of 20+ years?
Please find the sample dataset and expected output below. I am expecting the output to be partitioned based on the WEEK_END date.
Dataset
ID|WEEK_START|WEEK_END|APPEARING_DATE
7152|2015-12-27|2016-01-02|2015-12-27
8350|2015-12-27|2016-01-02|2015-12-27
7152|2015-12-27|2016-01-02|2015-12-29
4697|2015-12-27|2016-01-02|2015-12-30
7187|2015-12-27|2016-01-02|2015-01-01
8005|2015-12-27|2016-01-02|2015-12-27
8005|2015-12-27|2016-01-02|2015-12-29
6254|2016-01-03|2016-01-09|2016-01-03
7962|2016-01-03|2016-01-09|2016-01-04
3339|2016-01-03|2016-01-09|2016-01-06
7834|2016-01-03|2016-01-09|2016-01-03
7962|2016-01-03|2016-01-09|2016-01-05
7152|2016-01-03|2016-01-09|2016-01-07
8350|2016-01-03|2016-01-09|2016-01-09
2403|2016-01-10|2016-01-16|2016-01-10
0157|2016-01-10|2016-01-16|2016-01-11
2228|2016-01-10|2016-01-16|2016-01-14
4697|2016-01-10|2016-01-16|2016-01-14
Expected Output
Partition1: WEEK_END=2016-01-02
ID|MAX(LAST_APPEARING_DATE)
7152|2015-12-29
8350|2015-12-27
4697|2015-12-30
7187|2015-01-01
8005|2015-12-29
Partition2: WEEK_END=2016-01-09
ID|MAX(LAST_APPEARING_DATE)
7152|2016-01-07
8350|2016-01-09
4697|2015-12-30
7187|2015-01-01
8005|2015-12-29
6254|2016-01-03
7962|2016-01-05
3339|2016-01-06
7834|2016-01-03
Partition3: WEEK_END=2016-01-16
ID|MAX(LAST_APPEARING_DATE)
7152|2016-01-07
8350|2016-01-09
4697|2016-01-14
7187|2015-01-01
8005|2015-12-29
6254|2016-01-03
7962|2016-01-05
3339|2016-01-06
7834|2016-01-03
2403|2016-01-10
0157|2016-01-11
2228|2016-01-14
Please use the below query:
select ID, MAX(APPEARING_DATE) from table_name
group by ID, WEEK_END;
Or, including WEEK_END:
select ID, WEEK_END, MAX(APPEARING_DATE) from table_name
group by ID, WEEK_END;
You can use aggregation:
select id, max(week_end)
from t
group by id
having max(week_end) < '2016-01-02';
Adjust the date in the having clause for the week end that you want.
Actually, your question is a bit unclear. I'm not sure if a later week end would keep the row or not. If you want "as of" data, then include a where clause:
select t.id, max(week_end)
from t
where week_end < '2016-01-02'
group by id
having max(week_end) < '2016-01-02';
If you want this for a range of dates, then you can use a derived table:
select we.the_week_end, t.id, max(week_end)
from (select '2016-01-02' as the_week_end union all
select '2016-01-09' as the_week_end
) we cross join
t
where t.week_end < we.the_week_end
group by id, we.the_week_end
having max(t.week_end) < we.the_week_end;
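If the list of week ends shouldn't be hand-written, the derived table can presumably be built from the data itself. A sketch under the same table assumptions as above (the WHERE already restricts every row to earlier weeks, so the HAVING becomes redundant and is dropped):
select we.the_week_end, t.id, max(t.week_end) as last_week_end
from (select distinct week_end as the_week_end from t) we
cross join t
where t.week_end < we.the_week_end
group by we.the_week_end, t.id;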

Expected payments by day given start and end date

I'm trying to create a SQL view that gives me the expected amount to be received by calendar day for recurring transactions. I have a table containing recurring commitments data, with the following columns:
id,
start_date,
end_date (null if still active),
payment day (1,2,3,etc.),
frequency (monthly, quarterly, semi-annually, annually),
commitment amount
For now, I do not need to worry about business days vs calendar days.
In its simplest form, the end result would contain every historical calendar day as well as future dates for the next year, and produce how much was/is expected to be received in those particular days.
I've done quite a bit of researching, but cannot seem to find an answer that addresses the specific problem. Any direction on where to start would be greatly appreciated.
The expect output would look something like this:
| Date | Expected Amount |
|1/1/18 | 100 |
|1/2/18 | 200 |
|1/3/18 | 150 |
Thank you ahead of time!
Link to data table in db-fiddle
Expected Output Spreadsheet
It's something like this, but I've never used Netezza:
SELECT
cal.d, sum(r.amount) as expected_amount
FROM
(
SELECT (SELECT MIN(start_date) FROM recurring) + ROW_NUMBER() OVER(ORDER BY NULL) as d
FROM recurring a, recurring b, recurring c
) cal
LEFT JOIN
recurring r
ON
(
(r.frequency = 'monthly' AND r.payment_day = DATE_PART('DAY', cal.d)) OR
(r.frequency = 'annually' AND DATE_PART('MONTH', cal.d) = DATE_PART('MONTH', r.start_date) AND r.payment_day = DATE_PART('DAY', cal.d))
) AND
r.start_date <= cal.d AND
(r.end_date >= cal.d OR r.end_date IS NULL)
GROUP BY cal.d
In essence, we Cartesian-join our recurring table with itself a few times to generate a load of rows, number them, and add the number onto the min date to get an incrementing date series.
The payments data table is left joined onto this incrementing date series on:
(the day of the date from the series) = (the payment day) for monthlies
(the month and day of the date from the series) = (the month of the start_date and the payment day) for annuals
Finally, the whole lot is grouped and summed.
I don't have a test instance of Netezza, so if you encounter some minor syntax errors, do please have a stab at fixing them up yourself (to make it faster for you to get a solution). If you reach a point where you can't work out what the query is doing, let me know.
Disclaimer: I'm no expert on Netezza, so I decided to write you a standard SQL that may need some tweaking to run on Netezza.
with
digit as (select 0 as x union select 1 union select 2 union select 3 union select 4
union select 5 union select 6 union select 7 union select 8 union select 9
),
number as ( -- produces numbers from 0 to 9999 (28 years)
select d1.x + d2.x * 10 + d3.x * 100 + d4.x * 1000 as n
from digit d1
cross join digit d2
cross join digit d3
cross join digit d4
),
expected_payment as ( -- expands all expected payments
select
c.start_date + nb.n as day,
c.committed_amount
from recurring_commitment c
cross join number nb
where c.start_date + nb.n <= c.end_date
and c.frequency ... -- add logic for monthly, quarterly, etc. here
)
select
day,
sum(committed_amount) as expected_amount
from expected_payment
group by day
order by day
This solution is valid for commitments that do not exceed 28 years, since the number CTE (Common Table Expression) is producing up to a maximum of 9999 days. Expand with a fifth digit if you need longer commitments.
Note: I think the way I'm adding a number of days to a date may not be correct in Netezza's SQL. The expression c.start_date + nb.n may need to be rephrased.
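For completeness, the elided frequency logic might look something like the sketch below. This is an illustration only: the EXTRACT/MOD syntax and the date arithmetic may need rephrasing for Netezza, and the one-year horizon for a null end_date (still-active commitments) is my assumption based on the question:
-- candidate WHERE logic for the expected_payment CTE:
-- a generated day is a payment day when it matches payment_day
-- and falls in a month implied by the frequency
where c.start_date + nb.n <= coalesce(c.end_date, current_date + 365)
and extract(day from c.start_date + nb.n) = c.payment_day
and ( c.frequency = 'monthly'
or (c.frequency = 'quarterly'
and mod(extract(month from c.start_date + nb.n) - extract(month from c.start_date), 3) = 0)
or (c.frequency = 'semi-annually'
and mod(extract(month from c.start_date + nb.n) - extract(month from c.start_date), 6) = 0)
or (c.frequency = 'annually'
and extract(month from c.start_date + nb.n) = extract(month from c.start_date)) )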

Need to count unique transactions by month but ignore records that occur 3 days after 1st entry for that ID

I have a table with just two columns: User_ID and fail_date. Each time somebody's card is rejected, they are logged in the table; their card is automatically tried again 3 days later, and if it fails again, another entry is added to the table. I am trying to write a query that counts unique failures by month, so I only want to count the first entry, not the 3-day retries, if they exist. My data set looks like this:
user_id fail_date
222 01/01
222 01/04
555 02/15
777 03/31
777 04/02
222 10/11
so my desired output would be something like this:
month unique_fails
jan 1
feb 1
march 1
april 0
oct 1
I'll be running this in Vertica, but I'm not so much looking for perfect syntax in replies. Just help around how to approach this problem as I can't really think of a way to make it work. Thanks!
You could use lag() to get the previous timestamp per user. If the current and the previous timestamp are less than or exactly three days apart, it's a follow up. Mark the row as such. Then you can filter to exclude the follow ups.
It might look something like:
SELECT month,
count(*) unique_fails
FROM (SELECT month(fail_date) month,
CASE
WHEN datediff(day,
lag(fail_date) OVER (PARTITION BY user_id
ORDER BY fail_date),
fail_date) <= 3 THEN
1
ELSE
0
END follow_up
FROM elbat) x
WHERE follow_up = 0
GROUP BY month;
I'm not so sure about the exact syntax in Vertica, so it might need some adaptations. I also don't know if fail_date actually is some date/time type or just a string. If it's just a string, the date/time specific functions may not work on it and would have to be replaced, or the string would have to be converted prior to passing it to the functions.
If the data spans several years, you might also want to include the year in addition to the month to keep months from different years apart. In the inner SELECT, add a column year(fail_date) year, then add year to the column list and the GROUP BY of the outer SELECT, as sketched below.
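That adaptation might look like this (same syntax caveats as the query above):
SELECT year,
month,
count(*) unique_fails
FROM (SELECT year(fail_date) year,
month(fail_date) month,
CASE
WHEN datediff(day,
lag(fail_date) OVER (PARTITION BY user_id
ORDER BY fail_date),
fail_date) <= 3 THEN
1
ELSE
0
END follow_up
FROM elbat) x
WHERE follow_up = 0
GROUP BY year, month;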
You can add a flag about whether this is a "unique_fail" by doing:
select t.*,
(case when lag(fail_date) over (partition by user_id order by fail_date) >= fail_date - 3
then 0 else 1
end) as first_failure_flag
from t;
Then, you want to count this flag by month:
select to_char(fail_date, 'Mon'), -- should always include the year
sum(first_failure_flag)
from (select t.*,
(case when lag(fail_date) over (partition by user_id order by fail_date) >= fail_date - 3
then 0 else 1
end) as first_failure_flag
from t
) t
group by to_char(fail_date, 'Mon')
order by min(fail_date)
In a derived table, determine the previous fail_date (prev_fail_date) for a specific user_id and fail_date, using a correlated subquery.
Using the derived table dt, count the failure if the difference in days between the current fail_date and prev_fail_date is greater than 3.
The DateDiff() function alongside the If() function is used to determine the cases which are not repeated tries.
To group this result by month, you can use the MONTH function.
But the data can span multiple years, so you need to separate them out by year as well, using a multi-level GROUP BY with the YEAR function.
Try the following (in MySQL); you can get the idea for other RDBMSs as well:
SELECT YEAR(dt.fail_date) AS year_fail_date,
MONTH(dt.fail_date) AS month_fail_date,
COUNT( IF(dt.prev_fail_date IS NULL OR DATEDIFF(dt.fail_date, dt.prev_fail_date) > 3, user_id, NULL) ) AS unique_fails
FROM (
SELECT
t1.user_id,
t1.fail_date,
(
SELECT t2.fail_date
FROM your_table AS t2
WHERE t2.user_id = t1.user_id
AND t2.fail_date < t1.fail_date
ORDER BY t2.fail_date DESC
LIMIT 1
) AS prev_fail_date
FROM your_table AS t1
) AS dt
GROUP BY
year_fail_date,
month_fail_date
ORDER BY
year_fail_date ASC,
month_fail_date ASC

Calculating business days in Teradata

I need help with a business days calculation.
I have two tables:
1) One table ACTUAL_TABLE containing order date and contact date with timestamp datatypes.
2) The second table BUSINESS_DATES has each of the calendar dates listed and has a flag to indicate weekend days.
Using these two tables, I need to ensure business days, and not calendar days (which is the current logic), are calculated between these two fields.
My thought process was to first get a range of dates by comparing ORDER_DATE with the TABLE_DATE field and then do a similar comparison of CONTACT_DATE to the TABLE_DATE field. This would get me a range from the BUSINESS_DATES table which I can then use to calculate COUNT(*) of days and SUM(Holiday_WKND_Flag), making the result look like:
Order# | Count(*) As DAYS | SUM(WEEKEND DATES)
100 | 25 | 8
However, this only works when I use a specific order number and I can't bring all order numbers into a subquery.
My Query:
SELECT SUM(Holiday_WKND_Flag), COUNT(*) FROM
(
SELECT
* FROM
BUSINESS_DATES
WHERE TABLE_DATE BETWEEN (SELECT ORDER_DATE FROM ACTUAL_TABLE
WHERE ORDER# = '100'
)
AND
(SELECT CONTACT_DATE FROM ACTUAL_TABLE
WHERE ORDER# = '100'
)
) TEMP
Uploading the table structure for your reference.
SELECT ORDER#, SUM(Holiday_WKND_Flag), COUNT(*)
FROM business_dates bd
INNER JOIN actual_table at ON bd.table_date BETWEEN at.order_date AND at.contact_date
GROUP BY ORDER#
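If you want the business day count directly rather than its two components, the same join can presumably produce it in one pass (assuming, per the question, that Holiday_WKND_Flag is 1 on weekend/holiday dates):
SELECT ORDER#,
COUNT(*) AS total_days,
SUM(Holiday_WKND_Flag) AS weekend_days,
COUNT(*) - SUM(Holiday_WKND_Flag) AS business_days
FROM business_dates bd
INNER JOIN actual_table at ON bd.table_date BETWEEN at.order_date AND at.contact_date
GROUP BY ORDER#;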
Instead of joining on a BETWEEN (which always results in a bad Product Join) followed by a COUNT, you'd better assign a business day number to each date (in the best case this is calculated only once and added as a column to your calendar table). Then it's two Equi-Joins and no aggregation is needed:
WITH cte AS
(
SELECT
Cast(table_date AS DATE) AS table_date,
-- assign a consecutive number to each business day, i.e. not increased during weekends, etc.
Sum(CASE WHEN Holiday_WKND_Flag = 1 THEN 0 ELSE 1 end)
Over (ORDER BY table_date
ROWS Unbounded Preceding) AS business_day_nbr
FROM business_dates
)
SELECT ORDER#,
Cast(t.contact_date AS DATE) - Cast(t.order_date AS DATE) AS #_of_days,
b2.business_day_nbr - b1.business_day_nbr AS #_of_business_days
FROM actual_table AS t
JOIN cte AS b1
ON Cast(t.order_date AS DATE) = b1.table_date
JOIN cte AS b2
ON Cast(t.contact_date AS DATE) = b2.table_date
Btw, why are table_date and order_date timestamp instead of a date?
Porting from Oracle?
You can use this query; hope it helps:
select order#,
order_date,
contact_date,
(select count(1)
from business_dates_table
where table_date between a.order_date and a.contact_date
and holiday_wknd_flag = 0
) business_days
from actual_table a