I have 3 tables:
INVENTORY_IN:
ID | INV_TIMESTAMP     | PRODUCT_ID | IN_QUANTITY | SUPPLIER_ID
...
1  | 10.03.21 01:00:00 | 101        | 100         | 4
2  | 11.03.21 02:00:00 | 101        | 50          | 3
3  | 14.03.21 01:00:00 | 101        | 10          | 2
INVENTORY_OUT:
ID | INV_TIMESTAMP     | PRODUCT_ID | OUT_QUANTITY | CUSTOMER_ID
...
1  | 10.03.21 02:00:00 | 101        | 30           | 1
2  | 11.03.21 01:00:00 | 101        | 40           | 2
3  | 12.03.21 01:00:00 | 101        | 80           | 1
INVENTORY_BALANCE:
INV_DATE | PRODUCT_ID | QUANTITY
...
09.03.21 | 101        | 20
10.03.21 | 101        | 90
11.03.21 | 101        | 100
12.03.21 | 101        | 20
13.03.21 | 101        | 20
14.03.21 | 101        | 30
I want to use FIFO (first in, first out) logic for the inventory, and to see which quantities correspond to each SUPPLIER-CUSTOMER combination.
The desired output looks like this (queried for dates >= 2021-03-10):
PRODUCT_ID | SUPPLIER_ID | CUSTOMER_ID | QUANTITY
101        |             | 1           | 20
101        | 4           | 1           | 60
101        | 4           | 2           | 40
101        | 3           | 1           | 30
101        | 3           |             | 20
101        | 2           |             | 10
Edit: fixed a little typo in the numbers.
Edit: added a diagram which explains every row. All of the black arrows correspond to supplier-customer combinations. There are 7 of them because, for supplier_id = 4 and customer_id = 1, the desired result is the sum of the matched quantities between them; that is why there are 7 arrows while the desired result contains only 6 rows.
Option 1
This is probably a job for PL/SQL. Starting with the data types to output:
CREATE TYPE supply_details_obj AS OBJECT(
product_id NUMBER,
quantity NUMBER,
supplier_id NUMBER,
customer_id NUMBER
);
CREATE TYPE supply_details_tab AS TABLE OF supply_details_obj;
Then we can define a pipelined function to read the INVENTORY_IN and INVENTORY_OUT tables one row at a time and merge the two, keeping a running total of the remaining inventory or the amount still to supply:
CREATE FUNCTION assign_suppliers_to_customers (
i_product_id IN INVENTORY_IN.PRODUCT_ID%TYPE
)
RETURN supply_details_tab PIPELINED
IS
v_supplier_id INVENTORY_IN.SUPPLIER_ID%TYPE;
v_customer_id INVENTORY_OUT.CUSTOMER_ID%TYPE;
v_quantity_in INVENTORY_IN.IN_QUANTITY%TYPE := NULL;
v_quantity_out INVENTORY_OUT.OUT_QUANTITY%TYPE := NULL;
v_cur_in SYS_REFCURSOR;
v_cur_out SYS_REFCURSOR;
BEGIN
OPEN v_cur_in FOR
SELECT in_quantity, supplier_id
FROM INVENTORY_IN
WHERE product_id = i_product_id
ORDER BY inv_timestamp;
OPEN v_cur_out FOR
SELECT out_quantity, customer_id
FROM INVENTORY_OUT
WHERE product_id = i_product_id
ORDER BY inv_timestamp;
LOOP
IF v_quantity_in IS NULL THEN
FETCH v_cur_in INTO v_quantity_in, v_supplier_id;
IF v_cur_in%NOTFOUND THEN
v_supplier_id := NULL;
END IF;
END IF;
IF v_quantity_out IS NULL THEN
FETCH v_cur_out INTO v_quantity_out, v_customer_id;
IF v_cur_out%NOTFOUND THEN
v_customer_id := NULL;
END IF;
END IF;
EXIT WHEN v_cur_in%NOTFOUND AND v_cur_out%NOTFOUND;
IF v_quantity_in > v_quantity_out THEN
PIPE ROW(
supply_details_obj(
i_product_id,
v_quantity_out,
v_supplier_id,
v_customer_id
)
);
v_quantity_in := v_quantity_in - v_quantity_out;
v_quantity_out := NULL;
ELSE
PIPE ROW(
supply_details_obj(
i_product_id,
NVL( v_quantity_in, v_quantity_out ), -- if the supply cursor is exhausted, pipe the remaining demand
v_supplier_id,
v_customer_id
)
);
v_quantity_out := v_quantity_out - v_quantity_in;
v_quantity_in := NULL;
END IF;
END LOOP;
CLOSE v_cur_in;
CLOSE v_cur_out;
RETURN; -- a pipelined function must end with a plain RETURN
END;
/
Then, for the sample data:
CREATE TABLE INVENTORY_IN ( ID, INV_TIMESTAMP, PRODUCT_ID, IN_QUANTITY, SUPPLIER_ID ) AS
SELECT 0, TIMESTAMP '2021-03-09 00:00:00', 101, 20, 0 FROM DUAL UNION ALL
SELECT 1, TIMESTAMP '2021-03-10 01:00:00', 101, 100, 4 FROM DUAL UNION ALL
SELECT 2, TIMESTAMP '2021-03-11 02:00:00', 101, 50, 3 FROM DUAL UNION ALL
SELECT 3, TIMESTAMP '2021-03-14 01:00:00', 101, 10, 2 FROM DUAL;
CREATE TABLE INVENTORY_OUT ( ID, INV_TIMESTAMP, PRODUCT_ID, OUT_QUANTITY, CUSTOMER_ID ) AS
SELECT 1, TIMESTAMP '2021-03-10 02:00:00', 101, 30, 1 FROM DUAL UNION ALL
SELECT 2, TIMESTAMP '2021-03-11 01:00:00', 101, 40, 2 FROM DUAL UNION ALL
SELECT 3, TIMESTAMP '2021-03-12 01:00:00', 101, 80, 1 FROM DUAL;
The query:
SELECT product_id,
supplier_id,
customer_id,
SUM( quantity ) AS quantity
FROM TABLE( assign_suppliers_to_customers( 101 ) )
GROUP BY
product_id,
supplier_id,
       customer_id;
(The collection type does not include the timestamp, so ordering by MIN( inv_timestamp ) is not possible here; the rows are piped in FIFO order, but if you need a guaranteed order after the GROUP BY, add an ordering attribute to supply_details_obj.)
Outputs:
PRODUCT_ID | SUPPLIER_ID | CUSTOMER_ID | QUANTITY
---------: | ----------: | ----------: | -------:
101 | 0 | 1 | 20
101 | 4 | 1 | 60
101 | 4 | 2 | 40
101 | 3 | 1 | 30
101 | 3 | null | 20
101 | 2 | null | 10
Option 2
A (very) complicated SQL query:
WITH in_totals ( ID, INV_TIMESTAMP, PRODUCT_ID, IN_QUANTITY, SUPPLIER_ID, TOTAL_QUANTITY ) AS (
SELECT i.*,
SUM( in_quantity ) OVER ( PARTITION BY product_id ORDER BY inv_timestamp )
FROM inventory_in i
),
out_totals ( ID, INV_TIMESTAMP, PRODUCT_ID, OUT_QUANTITY, CUSTOMER_ID, TOTAL_QUANTITY ) AS (
SELECT o.*,
SUM( out_quantity ) OVER ( PARTITION BY product_id ORDER BY inv_timestamp )
FROM inventory_out o
),
split_totals ( product_id, inv_timestamp, supplier_id, customer_id, quantity ) AS (
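  /* The SUM below is the length of the intersection of the supply interval
     (i.total_quantity - i.in_quantity, i.total_quantity] with the demand
     interval (o.total_quantity - o.out_quantity, o.total_quantity] on the
     running-total axis: the LEAST of the four end-minus-start combinations
     equals LEAST(both ends) - GREATEST(both starts). */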
SELECT i.product_id,
MIN( COALESCE( LEAST( i.inv_timestamp, o.inv_timestamp ), i.inv_timestamp ) )
AS inv_timestamp,
i.supplier_id,
o.customer_id,
SUM(
COALESCE(
LEAST(
i.total_quantity - o.total_quantity + o.out_quantity,
o.total_quantity - i.total_quantity + i.in_quantity,
i.in_quantity,
o.out_quantity
),
0
)
)
FROM in_totals i
LEFT OUTER JOIN
out_totals o
ON ( i.product_id = o.product_id
AND i.total_quantity - i.in_quantity <= o.total_quantity
AND i.total_quantity >= o.total_quantity - o.out_quantity )
GROUP BY
i.product_id,
i.supplier_id,
o.customer_id
ORDER BY
inv_timestamp
),
missing_totals ( product_id, inv_timestamp, supplier_id, customer_id, quantity ) AS (
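  /* Supply that was never matched to a customer: the supplier's total
     minus what split_totals already assigned, reported with a NULL customer. */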
SELECT i.product_id,
i.inv_timestamp,
i.supplier_id,
NULL,
i.in_quantity - COALESCE( s.quantity, 0 )
FROM inventory_in i
INNER JOIN (
SELECT product_id,
supplier_id,
SUM( quantity ) AS quantity
FROM split_totals
GROUP BY product_id, supplier_id
) s
ON ( i.product_id = s.product_id
AND i.supplier_id = s.supplier_id )
ORDER BY i.inv_timestamp
)
SELECT product_id, supplier_id, customer_id, quantity
FROM (
SELECT product_id, inv_timestamp, supplier_id, customer_id, quantity
FROM split_totals
WHERE quantity > 0
UNION ALL
SELECT product_id, inv_timestamp, supplier_id, customer_id, quantity
FROM missing_totals
WHERE quantity > 0
ORDER BY inv_timestamp
);
Which, for the sample data above, outputs:
PRODUCT_ID | SUPPLIER_ID | CUSTOMER_ID | QUANTITY
---------: | ----------: | ----------: | -------:
101 | 0 | 1 | 20
101 | 4 | 1 | 60
101 | 4 | 2 | 40
101 | 3 | 1 | 30
101 | 3 | null | 20
101 | 2 | null | 10
db<>fiddle here
If your system controls the timestamps so that you cannot consume what has not yet been supplied (I've met systems that didn't track intraday balance), then you can use a SQL solution with an interval join. The only thing to take care of here is to track the last supply that was not consumed in full: it should be added as supply with no customer.
Here's the query with comments:
CREATE TABLE INVENTORY_IN ( ID, INV_TIMESTAMP, PRODUCT_ID, IN_QUANTITY, SUPPLIER_ID ) AS
SELECT 0, TIMESTAMP '2021-03-09 00:00:00', 101, 20, 0 FROM DUAL UNION ALL
SELECT 1, TIMESTAMP '2021-03-10 01:00:00', 101, 100, 4 FROM DUAL UNION ALL
SELECT 2, TIMESTAMP '2021-03-11 02:00:00', 101, 50, 3 FROM DUAL UNION ALL
SELECT 3, TIMESTAMP '2021-03-14 01:00:00', 101, 10, 2 FROM DUAL;
CREATE TABLE INVENTORY_OUT ( ID, INV_TIMESTAMP, PRODUCT_ID, OUT_QUANTITY, CUSTOMER_ID ) AS
SELECT 1, TIMESTAMP '2021-03-10 02:00:00', 101, 30, 1 FROM DUAL UNION ALL
SELECT 2, TIMESTAMP '2021-03-11 01:00:00', 101, 40, 2 FROM DUAL UNION ALL
SELECT 3, TIMESTAMP '2021-03-12 01:00:00', 101, 80, 1 FROM DUAL;
with i as (
select
/*Get the total per product and supplier at each timestamp,
so we can compute the running sum over timestamps without needing an over(... rows between) clause to resolve ties*/
inv_timestamp
, product_id
, supplier_id
, sum(in_quantity) as quan
, sum(sum(in_quantity)) over(
partition by product_id
order by
inv_timestamp asc
, supplier_id asc
) as rsum
from INVENTORY_IN
group by
product_id
, supplier_id
, inv_timestamp
)
, o as (
select /*The same for customer*/
inv_timestamp
, product_id
, customer_id
, sum(out_quantity) as quan
, sum(sum(out_quantity)) over(
partition by product_id
order by
inv_timestamp asc
, customer_id asc
) as rsum
/*Last consumption per product: lead() returns its default (1) only when it looks beyond the partition, i.e. on the last row*/
, lead(0, 1, 1) over(
partition by product_id
order by
inv_timestamp asc
, customer_id asc
) as last_consumption
from INVENTORY_OUT
group by
product_id
, customer_id
, inv_timestamp
)
, distr as (
select
/*Distribute the quantity. This is the basic interval intersection:
new_value_to = least(t1.value_to, t2.value_to)
new_value_from = greatest(t1.value_from, t2.value_from)
so what we need is the length of the intersection
*/
i.product_id
, least(i.rsum, nvl(o.rsum, i.rsum))
- greatest(i.rsum - i.quan, nvl(o.rsum - o.quan, i.rsum - i.quan)) as supplied_quan
/*At the last supply we can have something not used.
Calculate it to add later as not consumed
*/
, case
when last_consumption = 1
and i.rsum > nvl(o.rsum, i.rsum)
then i.rsum - o.rsum
end as rest_quan
, i.supplier_id
, o.customer_id
, i.inv_timestamp as i_ts
, o.inv_timestamp as o_ts
from i
left join o
on i.product_id = o.product_id
/*No equality here, because the values are continuous:
with >= the boundary value would fall into two intervals whenever one row's value_to
equals another row's value_to (which is value_from for the next interval)*/
and i.rsum > o.rsum - o.quan
and o.rsum > i.rsum - i.quan
)
select
product_id
, supplier_id
, customer_id
, sum(quan) as quan
from (
select /*Get distributed quantities*/
product_id
, supplier_id
, customer_id
, supplied_quan as quan
, i_ts
, o_ts
from distr
union all
select /*Add not consumed part of last consumed supply*/
product_id
, supplier_id
, null
, rest_quan
, i_ts
, null /*No consumption*/
from distr
where rest_quan is not null
)
group by
product_id
, supplier_id
, customer_id
order by
min(i_ts) asc
/*To order not consumed last*/
, min(o_ts) asc nulls last
PRODUCT_ID | SUPPLIER_ID | CUSTOMER_ID | QUAN
---------: | ----------: | ----------: | ---:
101 | 0 | 1 | 20
101 | 4 | 1 | 60
101 | 4 | 2 | 40
101 | 3 | 1 | 30
101 | 3 | null | 20
101 | 2 | null | 10
db<>fiddle here
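As a side note, the INVENTORY_BALANCE table from the question is not needed by either approach, but it can serve as a sanity check. Here is a minimal sketch (assuming the sample tables above; the balance rows mirror the question's data): each day's balance should equal the previous day's balance plus that day's inflows minus its outflows.
CREATE TABLE INVENTORY_BALANCE ( INV_DATE, PRODUCT_ID, QUANTITY ) AS
SELECT DATE '2021-03-09', 101, 20 FROM DUAL UNION ALL
SELECT DATE '2021-03-10', 101, 90 FROM DUAL UNION ALL
SELECT DATE '2021-03-11', 101, 100 FROM DUAL UNION ALL
SELECT DATE '2021-03-12', 101, 20 FROM DUAL UNION ALL
SELECT DATE '2021-03-13', 101, 20 FROM DUAL UNION ALL
SELECT DATE '2021-03-14', 101, 30 FROM DUAL;

SELECT b.inv_date,
       b.product_id,
       b.quantity AS balance,
       -- previous balance + today's in - today's out
       LAG( b.quantity ) OVER ( PARTITION BY b.product_id ORDER BY b.inv_date )
         + NVL( i.in_qty, 0 ) - NVL( o.out_qty, 0 ) AS recomputed
FROM   inventory_balance b
LEFT OUTER JOIN (
       SELECT TRUNC( inv_timestamp ) AS inv_date, product_id, SUM( in_quantity ) AS in_qty
       FROM   inventory_in
       GROUP BY TRUNC( inv_timestamp ), product_id
     ) i ON ( i.inv_date = b.inv_date AND i.product_id = b.product_id )
LEFT OUTER JOIN (
       SELECT TRUNC( inv_timestamp ) AS inv_date, product_id, SUM( out_quantity ) AS out_qty
       FROM   inventory_out
       GROUP BY TRUNC( inv_timestamp ), product_id
     ) o ON ( o.inv_date = b.inv_date AND o.product_id = b.product_id )
ORDER BY b.inv_date;
RECOMPUTED should match BALANCE for every day after the first; a mismatch means the movement tables and the balance table disagree.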
I am trying to find the customer count and sales by the type of customer (New and Returning) and the number of times they have purchased.
txn_date  | Customer_ID | Transaction_Number | Sales | Reference (not in the SQL table) | Customer type (not in the SQL table)
1/2/2019  | 1           | 12345              | $10   | Second Purchase SLS              | Repeat
4/3/2018  | 1           | 65890              | $20   | First Purchase SLS               | Repeat
3/22/2019 | 3           | 64453              | $30   | First Purchase SLS               | new
4/3/2019  | 4           | 88567              | $20   | First Purchase SLS               | new
5/21/2019 | 4           | 85446              | $15   | Second Purchase SLS              | new
1/23/2018 | 5           | 89464              | $40   | First Purchase SLS               | Repeat
4/3/2019  | 5           | 99674              | $30   | Second Purchase SLS              | Repeat
4/3/2019  | 6           | 32224              | $20   | Second Purchase SLS              | Repeat
1/23/2018 | 6           | 46466              | $30   | First Purchase SLS               | Repeat
1/20/2018 | 7           | 56558              | $30   | First Purchase SLS               | new
I am using the below code to get the aggregate sales and customer count for the total customers:
select seqnum, count(distinct customer_id), sum(sales) from (
select co.*,
row_number() over (partition by customer_id order by txn_date) as seqnum
from somya co)
group by seqnum
order by seqnum;
I want to get the same data by the customer type:
For example, for the new customers my result should show:
New Customers | Customer_Count | Sum(Sales)
1st Purchase  | 3              | $80
2nd Purchase  | 1              | $15

Returning Customers | Customer_Count | Sum(Sales)
1st Purchase        | 3              | $90
2nd Purchase        | 3              | $60
I am trying the below query to get the data for new and repeat customers:
New Customers:
select seqnum, count(distinct customer_id), sum(sales)
from (
select co.*,
row_number() over (partition by customer_id order by trunc(txn_date)) as seqnum,
MIN (TRUNC (TXN_DATE)) OVER (PARTITION BY customer_id) as MIN_TXN_DATE
from somya co
)
where MIN_TXN_DATE between '01-JAN-19' and '31-DEC-19'
group by seqnum
order by seqnum asc;
Returning Customers:
select seqnum, count(distinct customer_id), sum(sales)
from (
select co.*,
row_number() over (partition by customer_id order by trunc(txn_date)) as seqnum,
MIN (TRUNC (TXN_DATE)) OVER (PARTITION BY customer_id) as MIN_TXN_DATE
from somya co
)
where MIN_TXN_DATE <'01-JAN-19'
group by seqnum
order by seqnum asc;
I am not able to figure out what is wrong with my query, or if there is a problem with my logic.
This is just sample data; I have transactions from all years in my database, so I need to narrow down the transaction dates in the query. But as soon as I narrow down the data using the transaction date, the repeat-customer query doesn't return anything, and the new-customer query returns the total customers for that period.
If I understand correctly, you need to know the first time someone becomes a customer, and then use that:
select (case when first_year < 2019 then 'returning' else 'new' end) as custtype,
seqnum, count(*), sum(sales)
from (select co.*,
row_number() over (partition by customer_id, extract(year from txn_date) order by txn_date) as seqnum,
min(extract(year from txn_date)) over (partition by customer_id) as first_year
from somya co
) s
where txn_date >= date '2019-01-01' and
txn_date < date '2020-01-01'
group by (case when first_year < 2019 then 'returning' else 'new' end),
seqnum
order by custtype, seqnum;
You can categorize your sales data to assign a customer type and a purchase sequence using windowing functions, like this:
SELECT sd.txn_date,
sd.customer_id,
sd.transaction_number,
sd.sales,
case when min(txn_date) over ( partition by customer_id ) < DATE '2019-01-01'
AND max(txn_date) OVER ( partition by customer_id ) >= DATE '2019-01-01'
THEN 'Repeat'
ELSE 'New' END customer_type,
row_number() over ( partition by customer_id order by txn_date) purchase_sequence
FROM sales_data sd
+-----------+-------------+--------------------+-------+---------------+-------------------+
| TXN_DATE | CUSTOMER_ID | TRANSACTION_NUMBER | SALES | CUSTOMER_TYPE | PURCHASE_SEQUENCE |
+-----------+-------------+--------------------+-------+---------------+-------------------+
| 03-APR-18 | 1 | 65890 | 20 | Repeat | 1 |
| 02-JAN-19 | 1 | 12345 | 10 | Repeat | 2 |
| 22-MAR-19 | 3 | 64453 | 30 | New | 1 |
| 03-APR-19 | 4 | 88567 | 20 | New | 1 |
| 21-MAY-19 | 4 | 85446 | 15 | New | 2 |
| 23-JAN-18 | 5 | 89464 | 40 | Repeat | 1 |
| 03-APR-19 | 5 | 99674 | 30 | Repeat | 2 |
| 23-JAN-18 | 6 | 46466 | 30 | Repeat | 1 |
| 03-APR-19 | 6 | 32224 | 20 | Repeat | 2 |
| 20-JAN-18 | 7 | 56558 | 30 | New | 1 |
+-----------+-------------+--------------------+-------+---------------+-------------------+
Then, you can wrap that in a common table expression (aka "WITH" clause) and summarize by the customer type and purchase sequence:
WITH categorized_sales_data AS (
SELECT sd.txn_date,
sd.customer_id,
sd.transaction_number,
sd.sales,
case when min(txn_date) over ( partition by customer_id ) < DATE '2019-01-01' AND max(txn_date) OVER ( partition by customer_id ) >= DATE '2019-01-01' THEN 'Repeat' ELSE 'New' END customer_type,
row_number() over ( partition by customer_id order by txn_date) purchase_sequence
FROM sales_data sd)
SELECT customer_type, purchase_sequence, count(*), sum(sales)
FROM categorized_sales_data
group by customer_type, purchase_sequence
order by customer_type, purchase_sequence
+---------------+-------------------+----------+------------+
| CUSTOMER_TYPE | PURCHASE_SEQUENCE | COUNT(*) | SUM(SALES) |
+---------------+-------------------+----------+------------+
| New | 1 | 3 | 80 |
| New | 2 | 1 | 15 |
| Repeat | 1 | 3 | 90 |
| Repeat | 2 | 3 | 60 |
+---------------+-------------------+----------+------------+
Here's a full SQL with test data:
with sales_data (txn_date, Customer_ID, Transaction_Number, Sales ) as (
SELECT TO_DATE('1/2/2019','MM/DD/YYYY'), 1, 12345, 10 FROM DUAL UNION ALL
SELECT TO_DATE('4/3/2018','MM/DD/YYYY'), 1, 65890, 20 FROM DUAL UNION ALL
SELECT TO_DATE('3/22/2019','MM/DD/YYYY'), 3, 64453, 30 FROM DUAL UNION ALL
SELECT TO_DATE('4/3/2019','MM/DD/YYYY'), 4, 88567, 20 FROM DUAL UNION ALL
SELECT TO_DATE('5/21/2019','MM/DD/YYYY'), 4, 85446, 15 FROM DUAL UNION ALL
SELECT TO_DATE('1/23/2018','MM/DD/YYYY'), 5, 89464, 40 FROM DUAL UNION ALL
SELECT TO_DATE('4/3/2019','MM/DD/YYYY'), 5, 99674, 30 FROM DUAL UNION ALL
SELECT TO_DATE('4/3/2019','MM/DD/YYYY'), 6, 32224, 20 FROM DUAL UNION ALL
SELECT TO_DATE('1/23/2018','MM/DD/YYYY'), 6, 46466, 30 FROM DUAL UNION ALL
SELECT TO_DATE('1/20/2018','MM/DD/YYYY'), 7, 56558, 30 FROM DUAL ),
-- Query starts here
/* WITH */ categorized_sales_data AS (
SELECT sd.txn_date,
sd.customer_id,
sd.transaction_number,
sd.sales,
case when min(txn_date) over ( partition by customer_id ) < DATE '2019-01-01' AND max(txn_date) OVER ( partition by customer_id ) >= DATE '2019-01-01' THEN 'Repeat' ELSE 'New' END customer_type,
row_number() over ( partition by customer_id order by txn_date) purchase_sequence
FROM sales_data sd)
SELECT customer_type, purchase_sequence, count(*), sum(sales)
FROM categorized_sales_data
group by customer_type, purchase_sequence
order by customer_type, purchase_sequence
Response to comment from OP
all the customers whose first purchase date is in 2019 would be a new customer. Any customer who has transacted in 2019 but their first purchase date is before 2019 would be a repeat customer
So, change
case when min(txn_date) over ( partition by customer_id ) < DATE '2019-01-01'
AND max(txn_date) OVER ( partition by customer_id ) >= DATE '2019-01-01'
THEN 'Repeat' ELSE 'New' END customer_type
to
case when min(txn_date) over ( partition by customer_id )
BETWEEN DATE '2019-01-01' AND DATE '2020-01-01' - INTERVAL '1' SECOND
THEN 'New' ELSE 'Repeat' END customer_type
i.e., if and only if a customer's first purchase was in 2019 then they are "new".
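If you prefer to avoid the interval arithmetic, the same predicate can be written with EXTRACT; a minimal equivalent sketch (same tables as above):
case when extract(year from min(txn_date) over ( partition by customer_id )) = 2019
     THEN 'New' ELSE 'Repeat' END customer_type
Both forms are equivalent for DATE values, since Oracle dates have second precision and the BETWEEN version runs up to one second before 2020-01-01.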
I am playing around with BigQuery and hit an interesting use case. I have a collection of customers and account balances. The account balances collection records every account balance change.
Customers:
+---------+--------+
| ID | Name |
+---------+--------+
| 1 | Alice |
| 2 | Bob |
+---------+--------+
Accounts balances:
+---------+---------------+---------+------------+
| ID | customer_id | value | timestamp |
+---------+---------------+---------+------------+
| 1 | 1 | -500 | 2019-02-12 |
| 2 | 1 | -200 | 2019-02-10 |
| 3 | 2 | 200 | 2019-02-10 |
| 4 | 1 | 0 | 2019-02-09 |
+---------+---------------+---------+------------+
The goal is to find out for how long a customer has had a negative account balance. The resulting collection would look like this:
+---------+--------+---------------------------------+
| ID | Name | Negative account balance since |
+---------+--------+---------------------------------+
| 1 | Alice | 2 days |
+---------+--------+---------------------------------+
Bob is not in the collection, because his last account record shows a positive value.
I think the following steps are involved:
get last account balance per customer, see if it is negative
go through the account balance values until you hit a positive (or no more) value
compute datediff
Is something like this even possible in SQL? Do you have any ideas on how to create such a query? To get customers that currently have a negative account balance, I use this query:
SELECT customer_id FROM (
SELECT t.account_balance, ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY timestamp DESC) as seqnum FROM `account_balances` t
) t
WHERE seqnum = 1 AND account_balance<0
Below is for BigQuery Standard SQL
#standardSQL
SELECT customer_id, name,
SUM(IF(negative_positive < 0, days, 0)) negative_days,
SUM(IF(negative_positive = 0, days, 0)) zero_days,
SUM(IF(negative_positive > 0, days, 0)) positive_days
FROM (
SELECT customer_id, negative_positive, grp,
1 + DATE_DIFF(MAX(ts), MIN(ts), DAY) days
FROM (
SELECT customer_id, ts, SIGN(value) negative_positive,
COUNTIF(flag) OVER(PARTITION BY customer_id ORDER BY ts) grp
FROM (
SELECT *, SIGN(value) = IFNULL(LEAD(SIGN(value)) OVER(PARTITION BY customer_id ORDER BY ts), 0) flag
FROM `project.dataset.balances`
)
)
GROUP BY customer_id, negative_positive, grp
)
LEFT JOIN `project.dataset.customers`
ON id = customer_id
GROUP BY customer_id, name
You can test and play with the above using the sample data from your question, as in the example below:
#standardSQL
WITH `project.dataset.balances` AS (
SELECT 1 customer_id, -500 value, DATE '2019-02-12' ts UNION ALL
SELECT 1, -200, '2019-02-10' UNION ALL
SELECT 2, 200, '2019-02-10' UNION ALL
SELECT 1, 0, '2019-02-09'
), `project.dataset.customers` AS (
SELECT 1 id, 'Alice' name UNION ALL
SELECT 2, 'Bob'
)
SELECT customer_id, name,
SUM(IF(negative_positive < 0, days, 0)) negative_days,
SUM(IF(negative_positive = 0, days, 0)) zero_days,
SUM(IF(negative_positive > 0, days, 0)) positive_days
FROM (
SELECT customer_id, negative_positive, grp,
1 + DATE_DIFF(MAX(ts), MIN(ts), DAY) days
FROM (
SELECT customer_id, ts, SIGN(value) negative_positive,
COUNTIF(flag) OVER(PARTITION BY customer_id ORDER BY ts) grp
FROM (
SELECT *, SIGN(value) = IFNULL(LEAD(SIGN(value)) OVER(PARTITION BY customer_id ORDER BY ts), 0) flag
FROM `project.dataset.balances`
)
)
GROUP BY customer_id, negative_positive, grp
)
LEFT JOIN `project.dataset.customers`
ON id = customer_id
GROUP BY customer_id, name
-- ORDER BY customer_id
with the result:
Row | customer_id | name  | negative_days | zero_days | positive_days
1   | 1           | Alice | 3             | 1         | 0
2   | 2           | Bob   | 0             | 0         | 1
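If you only need the "negative since" output from the question, here is a minimal sketch along the same lines (assuming the same tables; last_state and last_ok_ts are names made up for this example). For customers whose latest balance is negative, the current streak starts at the first negative row after their most recent non-negative row:
#standardSQL
WITH last_state AS (
  SELECT customer_id,
    ARRAY_AGG(value ORDER BY ts DESC LIMIT 1)[OFFSET(0)] AS last_value, -- latest balance
    MAX(IF(value >= 0, ts, NULL)) AS last_ok_ts -- most recent non-negative balance, if any
  FROM `project.dataset.balances`
  GROUP BY customer_id
)
SELECT b.customer_id, c.name,
  MIN(b.ts) AS negative_since,
  DATE_DIFF(CURRENT_DATE(), MIN(b.ts), DAY) AS days_negative
FROM `project.dataset.balances` b
JOIN last_state l USING (customer_id)
LEFT JOIN `project.dataset.customers` c ON c.id = b.customer_id
WHERE l.last_value < 0 -- currently negative
  AND (l.last_ok_ts IS NULL OR b.ts > l.last_ok_ts) -- rows of the current streak
GROUP BY b.customer_id, c.name
For the sample data this returns Alice with negative_since = 2019-02-10; Bob is filtered out because his latest balance is positive.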
I would like to identify the returning customers from an Oracle(11g) table like this:
CustID | Date
-------|----------
XC321 | 2016-04-28
AV626 | 2016-05-18
DX970 | 2016-06-23
XC321 | 2016-05-28
XC321 | 2016-06-02
So I can see which customers returned within various windows, e.g. within 10, 20, 30, 40 or 50 days:
CustID | 10_day | 20_day | 30_day | 40_day | 50_day
-------|--------|--------|--------|--------|--------
XC321 | | | 1 | |
XC321 | | | | 1 |
I would even accept a result like this:
CustID | Date | days_from_last_visit
-------|------------|---------------------
XC321 | 2016-05-28 | 30
XC321 | 2016-06-02 | 5
I guess it would use a partition by windowing clause with unbounded following and preceding clauses... but I cannot find any suitable examples.
Any ideas...?
Thanks
No need for window functions here; you can simply do it with conditional aggregation using a CASE expression (note that aliases starting with a digit must be double-quoted in Oracle):
SELECT t.custID,
       COUNT(CASE WHEN (last_visit - t.date) <= 10 THEN 1 END) as "10_day",
       COUNT(CASE WHEN (last_visit - t.date) between 11 and 20 THEN 1 END) as "20_day",
       COUNT(CASE WHEN (last_visit - t.date) between 21 and 30 THEN 1 END) as "30_day",
       .....
FROM (SELECT s.custID,
             LEAD(s.date) OVER(PARTITION BY s.custID ORDER BY s.date DESC) as last_visit
      FROM YourTable s) t
GROUP BY t.custID
Oracle Setup:
CREATE TABLE customers ( CustID, Activity_Date ) AS
SELECT 'XC321', DATE '2016-04-28' FROM DUAL UNION ALL
SELECT 'AV626', DATE '2016-05-18' FROM DUAL UNION ALL
SELECT 'DX970', DATE '2016-06-23' FROM DUAL UNION ALL
SELECT 'XC321', DATE '2016-05-28' FROM DUAL UNION ALL
SELECT 'XC321', DATE '2016-06-02' FROM DUAL;
Query:
SELECT *
FROM (
SELECT CustID,
Activity_Date AS First_Date,
COUNT(1) OVER ( PARTITION BY CustID
ORDER BY Activity_Date
RANGE BETWEEN CURRENT ROW AND INTERVAL '10' DAY FOLLOWING )
- 1 AS "10_Day",
COUNT(1) OVER ( PARTITION BY CustID
ORDER BY Activity_Date
RANGE BETWEEN CURRENT ROW AND INTERVAL '20' DAY FOLLOWING )
- 1 AS "20_Day",
COUNT(1) OVER ( PARTITION BY CustID
ORDER BY Activity_Date
RANGE BETWEEN CURRENT ROW AND INTERVAL '30' DAY FOLLOWING )
- 1 AS "30_Day",
COUNT(1) OVER ( PARTITION BY CustID
ORDER BY Activity_Date
RANGE BETWEEN CURRENT ROW AND INTERVAL '40' DAY FOLLOWING )
- 1 AS "40_Day",
COUNT(1) OVER ( PARTITION BY CustID
ORDER BY Activity_Date
RANGE BETWEEN CURRENT ROW AND INTERVAL '50' DAY FOLLOWING )
- 1 AS "50_Day",
ROW_NUMBER() OVER ( PARTITION BY CustID ORDER BY Activity_Date ) AS rn
FROM Customers
)
WHERE rn = 1;
Output
CUSTID FIRST_DATE          10_Day     20_Day     30_Day     40_Day     50_Day         RN
------ ------------------- ---------- ---------- ---------- ---------- ---------- ----------
AV626 2016-05-18 00:00:00 0 0 0 0 0 1
DX970 2016-06-23 00:00:00 0 0 0 0 0 1
XC321 2016-04-28 00:00:00 0 0 1 2 2 1
Here is an answer that works for me. I have based it on your answers above; thanks to MT0 and Sagi for their contributions:
SELECT CustID,
visit_date,
Prev_Visit ,
COUNT( CASE WHEN (Days_between_visits) <=10 THEN 1 END) AS "0-10_day" ,
COUNT( CASE WHEN (Days_between_visits) BETWEEN 11 AND 20 THEN 1 END) AS "11-20_day" ,
COUNT( CASE WHEN (Days_between_visits) BETWEEN 21 AND 30 THEN 1 END) AS "21-30_day" ,
COUNT( CASE WHEN (Days_between_visits) BETWEEN 31 AND 40 THEN 1 END) AS "31-40_day" ,
COUNT( CASE WHEN (Days_between_visits) BETWEEN 41 AND 50 THEN 1 END) AS "41-50_day" ,
COUNT( CASE WHEN (Days_between_visits) >50 THEN 1 END) AS "51+_day"
FROM
(SELECT CustID,
visit_date,
Lead(T1.visit_date) over (partition BY T1.CustID order by T1.visit_date DESC) AS Prev_visit,
visit_date - Lead(T1.visit_date) over (
partition BY T1.CustID order by T1.visit_date DESC) AS Days_between_visits
FROM T1
) T2
WHERE Days_between_visits >0
GROUP BY T2.CustID ,
T2.visit_date ,
T2.Prev_visit ,
T2.Days_between_visits;
This returns:
CUSTID | VISIT_DATE | PREV_VISIT | DAYS_BETWEEN_VISITS | 0-10_DAY | 11-20_DAY | 21-30_DAY | 31-40_DAY | 41-50_DAY | 51+_DAY
XC321  | 2016-05-28 | 2016-04-28 | 30                  |          |           | 1         |           |           |
XC321  | 2016-06-02 | 2016-05-28 | 5                   | 1        |           |           |           |           |
I have a table with the following info
|date | user_id | week_beg | month_beg|
SQL to create table with test values:
CREATE TABLE uniques
(
  date DATE,
  user_id INT,
  week_beg DATE,
  month_beg DATE
);
INSERT INTO uniques VALUES ('2013-01-01', 1, '2012-12-30', '2013-01-01');
INSERT INTO uniques VALUES ('2013-01-03', 3, '2012-12-30', '2013-01-01');
INSERT INTO uniques VALUES ('2013-01-06', 4, '2013-01-06', '2013-01-01');
INSERT INTO uniques VALUES ('2013-01-07', 4, '2013-01-06', '2013-01-01');
INPUT TABLE:
| date | user_id | week_beg | month_beg |
| 2013-01-01 | 1 | 2012-12-30 | 2013-01-01 |
| 2013-01-03 | 3 | 2012-12-30 | 2013-01-01 |
| 2013-01-06 | 4 | 2013-01-06 | 2013-01-01 |
| 2013-01-07 | 4 | 2013-01-06 | 2013-01-01 |
OUTPUT TABLE:
| date | time_series | cnt |
| 2013-01-01 | D | 1 |
| 2013-01-01 | W | 1 |
| 2013-01-01 | M | 1 |
| 2013-01-03 | D | 1 |
| 2013-01-03 | W | 2 |
| 2013-01-03 | M | 2 |
| 2013-01-06 | D | 1 |
| 2013-01-06 | W | 1 |
| 2013-01-06 | M | 3 |
| 2013-01-07 | D | 1 |
| 2013-01-07 | W | 1 |
| 2013-01-07 | M | 3 |
I want to calculate the number of distinct user_id's for a date:
For that date
For that week up to that date (Week to date)
For the month up to that date (Month to date)
1 is easy to calculate.
For 2 and 3 I am trying to use queries like these:
SELECT
date,
'W' AS "time_series",
(COUNT DISTINCT user_id) COUNT (user_id) OVER (PARTITION BY week_beg) AS "cnt"
FROM user_subtitles
SELECT
date,
'M' AS "time_series",
(COUNT DISTINCT user_id) COUNT (user_id) OVER (PARTITION BY month_beg) AS "cnt"
FROM user_subtitles
Postgres does not allow window functions for DISTINCT calculation, so this approach does not work.
I have also tried out a GROUP BY approach, but it does not work, as it gives me numbers for whole weeks/months.
What's the best way to approach this problem?
Count all rows
SELECT date, '1_D' AS time_series, count(DISTINCT user_id) AS cnt
FROM uniques
GROUP BY 1
UNION ALL
SELECT DISTINCT ON (1)
date, '2_W', count(*) OVER (PARTITION BY week_beg ORDER BY date)
FROM uniques
UNION ALL
SELECT DISTINCT ON (1)
date, '3_M', count(*) OVER (PARTITION BY month_beg ORDER BY date)
FROM uniques
ORDER BY 1, time_series
Your columns week_beg and month_beg are 100% redundant and can easily be replaced by
date_trunc('week', date + 1) - 1 and date_trunc('month', date) respectively.
Your week seems to start on Sunday (off by one), hence the + 1 .. - 1.
The default frame of a window function with ORDER BY in the OVER clause is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. That's exactly what you need; see the snippet below.
Use UNION ALL, not UNION.
Your unfortunate choice for time_series (D, W, M) does not sort well, so I renamed the values to make the final ORDER BY easier.
This query can deal with multiple rows per day. Counts include all peers for a day.
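For illustration, spelling out that default frame changes nothing; both columns below always return the same count (a sketch against the uniques table from the question):
SELECT date,
       count(*) OVER (PARTITION BY week_beg ORDER BY date) AS implicit_frame,
       count(*) OVER (PARTITION BY week_beg ORDER BY date
                      RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS explicit_frame
FROM   uniques;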
More about DISTINCT ON:
Select first row in each GROUP BY group?
DISTINCT users per day
To count every user only once per day, use a CTE with DISTINCT ON:
WITH x AS (SELECT DISTINCT ON (1,2) date, user_id FROM uniques)
SELECT date, '1_D' AS time_series, count(user_id) AS cnt
FROM x
GROUP BY 1
UNION ALL
SELECT DISTINCT ON (1)
date, '2_W'
,count(*) OVER (PARTITION BY (date_trunc('week', date + 1)::date - 1)
ORDER BY date)
FROM x
UNION ALL
SELECT DISTINCT ON (1)
date, '3_M'
,count(*) OVER (PARTITION BY date_trunc('month', date) ORDER BY date)
FROM x
ORDER BY 1, 2
DISTINCT users over dynamic period of time
You can always resort to correlated subqueries. They tend to be slow with big tables!
Building on the previous queries:
WITH du AS (SELECT date, user_id FROM uniques GROUP BY 1,2)
,d AS (
SELECT date
,(date_trunc('week', date + 1)::date - 1) AS week_beg
,date_trunc('month', date)::date AS month_beg
FROM uniques
GROUP BY 1
)
SELECT date, '1_D' AS time_series, count(user_id) AS cnt
FROM du
GROUP BY 1
UNION ALL
SELECT date, '2_W', (SELECT count(DISTINCT user_id) FROM du
WHERE du.date BETWEEN d.week_beg AND d.date )
FROM d
GROUP BY date, week_beg
UNION ALL
SELECT date, '3_M', (SELECT count(DISTINCT user_id) FROM du
WHERE du.date BETWEEN d.month_beg AND d.date)
FROM d
GROUP BY date, month_beg
ORDER BY 1,2;
SQL Fiddle for all three solutions.
Faster with dense_rank()
@Clodoaldo came up with a major improvement: use the window function dense_rank(). Here is another idea for an optimized version. It should be even faster to exclude daily duplicates right away. The performance gain grows with the number of rows per day.
Building on a simplified and sanitized data model:
- without the redundant columns
- day as column name instead of date
date is a reserved word in standard SQL and a basic type name in PostgreSQL and shouldn't be used as an identifier.
CREATE TABLE uniques(
day date -- instead of "date"
,user_id int
);
Improved query:
WITH du AS (
SELECT DISTINCT ON (1, 2)
day, user_id
,date_trunc('week', day + 1)::date - 1 AS week_beg
,date_trunc('month', day)::date AS month_beg
FROM uniques
)
SELECT day, count(user_id) AS d, max(w) AS w, max(m) AS m
FROM (
SELECT user_id, day
,dense_rank() OVER(PARTITION BY week_beg ORDER BY user_id) AS w
,dense_rank() OVER(PARTITION BY month_beg ORDER BY user_id) AS m
FROM du
) s
GROUP BY day
ORDER BY day;
SQL Fiddle demonstrating the performance of 4 faster variants. It depends on your data distribution which is fastest for you.
All of them are about 10x as fast as the correlated subqueries version (which isn't bad for correlated subqueries).
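As a standalone sketch of the identity the dense_rank() variants rely on: within a partition, the maximum of dense_rank() ordered by a column equals count(DISTINCT <column>) for that partition, which is exactly the aggregate Postgres refuses to compute as a window function:
SELECT week_beg, max(dr) AS distinct_users  -- = count(DISTINCT user_id) per week
FROM  (
   SELECT week_beg,
          dense_rank() OVER (PARTITION BY week_beg ORDER BY user_id) AS dr
   FROM   uniques
   ) sub
GROUP BY week_beg;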
Without correlated subqueries. SQL Fiddle
with u as (
select
"date", user_id,
date_trunc('week', "date" + 1)::date - 1 week_beg,
date_trunc('month', "date")::date month_beg
from uniques
)
select
"date", count(distinct user_id) D,
max(week_dr) W, max(month_dr) M
from (
select
user_id, "date",
dense_rank() over(partition by week_beg order by user_id) week_dr,
dense_rank() over(partition by month_beg order by user_id) month_dr
from u
) s
group by "date"
order by "date"
Try
SELECT
    *
FROM
    (
    SELECT dates, count(user_id), 'D' as time_series FROM users_data GROUP BY dates
    UNION
    SELECT max(dates), count(user_id), 'W' FROM users_data GROUP BY date_part('year',dates)+date_part('week',dates)
    UNION
    SELECT max(dates), count(user_id), 'M' FROM users_data GROUP BY date_part('year',dates)+date_part('month',dates)
    ) temp
ORDER BY dates, time_series
SQL Fiddle
Try queries like this
SELECT count(distinct user_id), to_char(date, 'YYYY-MM-DD') as date_period
FROM   uniques
GROUP BY date_period