Mondrian MDX Last Element Aggregation - mdx

In the telco industry it is very important to know what a customer's status was at a given point in time (end of week, month, etc.).
So, I have an SCD Type II dimension with: customer_tk, customerID, status, date.
We use it in custom reports to find what the state is on a given day, for example:
Date = '2015-10-01'
Group    Active   Terminated   Suspended   Order
-------------------------------------------------
Group1   25       2            2           8
Group2   45       8            0           12
Group3   15       18           5           2
Group4   65       2            1           29
This is pivoted from the query:
SELECT *
FROM dim_customer
INNER JOIN (
    SELECT max(customer_tk) AS maxId, customerId
    FROM dim_customer
    WHERE date <= '2015-10-01'
    GROUP BY customerId
) AS maxCust
ON dim_customer.customer_tk = maxCust.maxId
And it works perfectly (the date is a report parameter).
I want to put this in a cube, but how do I create this type of join? I need a cumulative count of customers.
I tried MDX Tail(Filter(...)) expressions but didn't manage to get correct numbers.
So, basically, with no filters, it should return status = 8 for customer 29841 and status = 2 for customer 28425.
But if I choose year = 2014, it should return status = 2 for both customers.
Thanks

Related

SQL - GROUP BY 3 values of the same column

I have this table in GBQ:
ClientID   Type   Month
XXX        A      4
YYY        C      4
FFX        B      5
FFF        B      6
XXX        C      6
XXX        A      6
YRE        C      7
AAR        A      7
FFF        A      8
EGT        B      8
FFF        B      9
ETT        C      9
I am counting the number of Type per ClientID and Month, with this basic query:
SELECT ClientID,
COUNT(DISTINCT Type) NbTypes,
Month
FROM Table
GROUP BY ClientID, Month
The result looks like this:
ClientID   NbTypes   Month
XXX        1         4
XXX        2         6
FFF        1         6
FFF        1         8
FFF        1         9
...        ...       ...
What I need to do is count the number of Type per ClientID and, for each Month, over the last 3 months.
For example:
For ClientID = XXX and Month = 8, I want the count of distinct Type where Month is 6, 7 or 8.
Is there a way to do this with GROUP BY ?
Thank you
You could use HAVING in your statement:
SELECT ClientID,
COUNT(DISTINCT Type) NbTypes,
Month
FROM Table
GROUP BY ClientID, Month
HAVING Month = EXTRACT(MONTH FROM CURRENT_DATE())
OR Month = EXTRACT(MONTH FROM DATE_SUB(DATE_TRUNC(CURRENT_DATE(), MONTH), INTERVAL 1 MONTH))
OR Month = EXTRACT(MONTH FROM DATE_SUB(DATE_TRUNC(CURRENT_DATE(), MONTH), INTERVAL 2 MONTH))
Note that your table seems to have no column to determine the year, so this statement will group all values whose month matches the current month or the current month minus one or two, regardless of year. So, for example, data from December, November and October of 2021, 2020, 2019, etc. would all be selected by this query.
Also note that I could not test this statement, since I don't use BigQuery.
Here is the source for the Date-Functions:
https://cloud.google.com/bigquery/docs/reference/standard-sql/date_functions
You can use a SELECT within a SELECT, if that is allowed in Google BigQuery:
SELECT ClientID,
       COUNT(DISTINCT Type) NbTypes,
       Month,
       MAX((select count(distinct Type)
            from Table t2
            where t1.ClientID = t2.ClientID
              and t1.month - t2.month between 0 and 2
           )) as NbType_3_months
FROM Table t1
GROUP BY ClientID, Month
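As a hedged alternative sketch (not tested on BigQuery), the same rolling three-month distinct count can be written as a self-join instead of a correlated subquery; the table name Table and the ClientID/Type/Month columns come from the question, and Month values are assumed to fall within a single year:
-- Sketch only: for each client/month present in the data, count distinct types
-- over that month and the two months before it.
SELECT t1.ClientID,
       t1.Month,
       COUNT(DISTINCT t2.Type) AS NbTypes_3_months
FROM Table t1
JOIN Table t2
  ON t2.ClientID = t1.ClientID
 AND t2.Month BETWEEN t1.Month - 2 AND t1.Month
GROUP BY t1.ClientID, t1.Month
Note that, like the correlated subquery above, this only produces rows for client/month combinations that actually appear in the data.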
You can group rows by ClientID and Month, count the number of types, sort rows by ClientID in ascending order and by Month in descending order, and then select from each group the rows of the past three months. It is roundabout and complicated to handle such a scenario in SQL because SQL only implements set orientation halfway. For your case, you have to get the largest month for each ClientID, find the eligible records through a join filter, and then perform the grouping and count. The usual workaround is to fetch the original data out of the database and process it in Python or SPL. SPL, an open-source Java package, is easy to integrate into a Java program and produces much simpler code. It gets the task done with only two lines of code:
     A
1    =GBQ.query("SELECT CLIENTID, COUNT(DISTINCT TYPE) AS NBTYPES, MONTH FROM t2 GROUP BY CLIENTID, MONTH ORDER BY CLIENTID, MONTH DESC")
2    =A1.group#o(#1).run(m=~.#3-3,~=~.select(MONTH>m)).conj()

Find Individuals who have purchased 10 times within a rolling 1 year period

So let's say I have 2 tables. One table is for consumers, and another is for sales.
Consumers
ID   Name         ...
1    John Johns   ...
2    Cathy Dans   ...

Sales
ID   consumer_id   purchase_date   ...
1    1             01/03/05        ...
2    1             02/04/10        ...
3    1             03/04/11        ...
4    2             02/14/07        ...
5    2             09/24/08        ...
6    2             12/15/09        ...
I want to find all instances of consumers who made more than 10 purchases within any 6 month rolling period.
SELECT
consumers.id
, COUNT(sales.id)
FROM
consumers
JOIN sales ON consumers.id = sales.consumer_id
GROUP BY
consumers.id
HAVING
COUNT(sales.id) >= 10
ORDER BY
COUNT(sales.id) DESC
So I have this code, which just gives me a list of consumers who have made more than 10 purchases ALL TIME. But how do I incorporate the rolling 6 month period logic?!
Any help or guidance on which functions can help me accomplish this would be appreciated!
You can use window functions to count the number of sales in a six-month period. Then just filter down to those consumers:
select distinct consumer_id
from (select s.*,
             count(*) over (partition by consumer_id
                            order by purchase_date
                            range between current row and interval '6 month' following
                           ) as six_month_count
      from sales s
     ) s
where six_month_count > 10;
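If you also want the consumer names and not just the IDs, here is a hedged sketch that joins the flagged IDs back to consumers; the table and column names (consumers.name, sales.purchase_date) follow the question's schema:
-- Sketch only: same six-month window count as above, joined back to consumers.
select c.id, c.name
from consumers c
where c.id in (
    select flagged.consumer_id
    from (select consumer_id,
                 count(*) over (partition by consumer_id
                                order by purchase_date
                                range between current row and interval '6 month' following
                               ) as six_month_count
          from sales
         ) flagged
    where flagged.six_month_count > 10
);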

Access SQL Moving Average of Top N With 2 Criteria

I have been searching the forum and found a single post that is a little similar to my problem here: Calculate average for Top n combined with SQL Group By.
My situation is:
I have a table tblWEIGHT that contains: ID, Date, idPONR, Weight.
I have a second table tblSALES that contains: ID, Date, Sales, idPONR.
I have a third table tblPONR that contains: ID, PONR, idProduct.
And a fourth table tblPRODUCT that contains: ID, Product.
The linking:
tblWEIGHT.idPONR = tblPONR.ID
tblSALES.idPONR = tblPONR.ID
tblPONR.idProduct = tblPRODUCT.ID
The main table of my query is tblSALES. I want all my sales listed, with the moving average of the top 5
weights of the PRODUCT where the date of the weight is less than the sales date, and the product is the same as the sold product. It's IMPORTANT that the result isn't grouped by the date. I need all the records of tblSALES.
I have gotten as far as getting the top 1 weight, but I'm not able to get the moving average instead.
The query that gets the top 1 is the following, and I am guessing that the query I need is going to look a lot like it.
SELECT tblSALES.ID, tblSALES.Date, tblPONR.idPRODUCT,
(
    SELECT TOP 1 Weight
    FROM tblWEIGHT INNER JOIN tblPONR ON tblWEIGHT.idPONR = tblPONR.ID
    WHERE tblPONR.idPRODUCT = idPRODUCT
      AND tblSALES.Date > tblWEIGHT.Date
    ORDER BY tblWEIGHT.Date DESC
) AS LatestWeight
FROM tblSALES INNER JOIN tblPONR ON tblSALES.idPONR = tblPONR.ID
This is not my exact query, since mine is in Danish and it wouldn't make sense here. I know I'm not supposed to use Date as a field name.
I imagine the final query would be something like:
SELECT tblSALES.ID..... avg(SELECT TOP 5 weight .........)
but doing this I keep getting the error "At most 1 record can be returned by this subquery".
Final question:
How do I make a query that creates a moving average of the top 5 weights of my sold product, where the date of the weight is earlier than the date I sold the product?
EDIT: Sample data
DATE FORMAT: dd/mm/yyyy
tblWEIGHT
ID   Date         idPONR   Weight
1    01-01-2020   1        100
2    02-01-2020   2        200
3    03-01-2020   3        200
4    04-01-2020   3        400
5    05-01-2020   2        250
6    06-01-2020   1        150
7    07-01-2020   2        200

tblSALES
ID   Date         Sales(amt)   idPONR
1    05-01-2020   30           1
2    06-01-2020   15           2
3    10-01-2020   20           3

tblPONR
ID   PONR (production number)   idProduct
1    2521                       1
2    1548                       1
3    5484                       2

tblPRODUCT
ID   Product
1    Bricks
2    Tiles

Desired outcome (read the comments for AvgWeight):
tblSALES.ID   tblSALES.Date   tblSALES.Sales(amt)   AvgWeight
1             05-01-2020      30                    123   --> avg(top 5 newest weights of both idPONR 1 and 2, because they are the same product, where tblWEIGHT.Date < 05-01-2020)
2             06-01-2020      15                    123   --> avg(top 5 newest weights of both idPONR 1 and 2, because they are the same product, where tblWEIGHT.Date < 06-01-2020)
3             10-01-2020      20                    123   --> avg(top 5 newest weights of idPONR 3, since that's the only idPONR with that product, where tblWEIGHT.Date < 10-01-2020)
Consider:
Query1
SELECT tblWeight.ID AS WeightID, tblWeight.Date AS WtDate,
tblWeight.idPONR, tblPONR.PONR, tblPONR.idProduct, tblWeight.Weight, tblSales.SalesAmt,
tblSales.ID AS SalesID, tblSales.Date AS SalesDate
FROM (tblPONR INNER JOIN tblWeight ON tblPONR.ID = tblWeight.idPONR)
INNER JOIN tblSales ON tblPONR.ID = tblSales.idPONR;
Query2
SELECT * FROM Query1 WHERE WeightID IN (
SELECT TOP 5 WeightID FROM Query1 AS Dupe WHERE Dupe.idProduct = Query1.idProduct
AND Dupe.WtDate<Query1.SalesDate ORDER BY Dupe.WtDate);
Query3
SELECT Query2.SalesID, Query2.SalesDate, Query2.SalesAmt,
First(DAvg("Weight","Query2","idProduct=" & [idProduct] & " AND WtDate<#" & [SalesDate] & "#")) AS AvgWt
FROM Query2
GROUP BY Query2.SalesID, Query2.SalesDate, Query2.SalesAmt;

SQL How to calculate Average time between Order Purchases? (do sql calculations based on next and previous row)

I have a simple table that contains the customer email, their order count (so if this is their 1st order, 3rd, 5th, etc), the date that order was created, the value of that order, and the total order count for that customer.
Here is what my table looks like
Email             Order   Date         Value   Total
r2n1w#gmail.com   1       12/1/2016    85      5
r2n1w#gmail.com   2       2/6/2017     125     5
r2n1w#gmail.com   3       2/17/2017    75      5
r2n1w#gmail.com   4       3/2/2017     65      5
r2n1w#gmail.com   5       3/20/2017    130     5
ation#gmail.com   1       2/12/2018    150     1
ylove#gmail.com   1       6/15/2018    36      3
ylove#gmail.com   2       7/16/2018    41      3
ylove#gmail.com   3       1/21/2019    140     3
keria#gmail.com   1       8/10/2018    54      2
keria#gmail.com   2       11/16/2018   65      2
What I want to do is calculate the average time between purchases for each customer. So let's take customer ylove. The first purchase is on 6/15/18. The next one is 7/16/18, so that's 31 days, and the next purchase is on 1/21/2019, so that is 189 days. The average time between orders would be 110 days.
But I have no idea how to make SQL look at the next row and calculate based on that, but then restart when it reaches a new customer.
Here is my query to get that table:
SELECT
F.CustomerEmail
,F.OrderCountBase
,F.Date_Created
,F.Total
,F.TotalOrdersBase
FROM #FullBase F
ORDER BY f.CustomerEmail
If anyone can give me some suggestions, that would be greatly appreciated.
And then maybe I can calculate value differences (in percentage). So for example, ylove spent $36 on their first order, $41 on their second, which is a 13% increase. Then their third order was $140, which is a 341% increase. So on average, this customer increased their purchase order value by 177%. Unrelated to SQL, but is this the correct way of calculating a metric like this?
Looking at your sample, you could try using the difference between the min and max date divided by the total:
select email, datediff(day, min(Order_Date), max(Order_Date))/(total-1) as avg_days
from your_table
group by email
And to also handle customers with only one order:
select email,
case when total-1 > 0 then
datediff(day, min(Order_Date), max(Order_Date))/(total-1)
else datediff(day, min(Order_Date), max(Order_Date)) end as avg_days
from your_table
group by email
The simplest formulation is:
select email,
datediff(day, min(Order_Date), max(Order_Date)) / nullif(total-1, 0) as avg_days
from t
group by email;
You can see why this is the case. Consider three orders with od1, od2, and od3 as the order dates. The average is:
( (od2 - od1) + (od3 - od2) ) / 2
Check the arithmetic:
--> ( od2 - od1 + od3 - od2 ) / 2
--> ( od3 - od1 ) / 2
This pretty obviously generalizes to more orders.
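With n orders, the same telescoping applies:
( (od2 - od1) + (od3 - od2) + ... + (odn - od(n-1)) ) / (n - 1)
--> ( odn - od1 ) / (n - 1)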
Hence the max() minus min().
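For the second part of the question (the average percentage increase in order value), one option is LAG. This is only a sketch, assuming the Total column selected from #FullBase holds the order value, as in the question's SELECT list:
-- Sketch only: per-customer average % change between consecutive order values,
-- assuming #FullBase.Total is the order value and Date_Created orders the rows.
SELECT CustomerEmail,
       AVG(pct_increase) AS avg_pct_increase
FROM (
    SELECT CustomerEmail,
           100.0 * (Total - LAG(Total) OVER (PARTITION BY CustomerEmail ORDER BY Date_Created))
                 / LAG(Total) OVER (PARTITION BY CustomerEmail ORDER BY Date_Created) AS pct_increase
    FROM #FullBase
) d
WHERE pct_increase IS NOT NULL
GROUP BY CustomerEmail;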

Return rows where specific number is reached for the first time (postgres)

Have hit a roadblock.
Context: am using PostgreSQL 9.5.8
I have a table, as follows, with customers' points accrued. The table has multiple rows per customer as it records every change in points (like an event table). i.e. customer 1 may buy 1 item and accrue 10 points which is one row, then on another day spend some of these points and be left with 5 points which is another row, and then purchase another item and accrue a further 10 bringing them back up to 15 which displays as another row. Each of these rows with point amounts has a created_at column.
Example table:
Customer ID   created_at   no_points   row
123           17/09/2017   5           1
123           09/10/2017   8           2
124           10/10/2017   12          3
123           10/10/2017   15          4
125           12/10/2017   12          5
126           17/09/2017   6           6
123           11/10/2017   11          7
123           12/10/2017   9           8
127           17/09/2017   5           9
124           11/10/2017   5           10
125           13/10/2017   5           11
123           13/10/2017   12          12
I want to track the first time a customer reaches a certain threshold i.e. >= 10 points. It doesn't matter how much they go over 10 points, the only criteria is that I select the first time the customer reaches this threshold. I would also like this query to fetch only rows where the customer has reached the threshold of 10 for the first time in the last week.
Following these rules, in the above example, I would like my query to select rows 3, 4 and 5.
I have tried the following query:
SELECT x.id,
       min(x.created_at)
FROM (
    SELECT p.id AS id,
           p.created_at AS created_at,
           p.amount AS amount
    FROM "points" p
    WHERE p.amount >= 10
) x
WHERE x.created_at >= (now()::date - 7)
  AND x.created_at < now()::date
GROUP BY x.id
I'm unsure whether I'm retrieving the right thing from the result set I am seeing, and the result set is huge so it's not evident. Could someone sense-check this?
Thanks in advance.
Use cumulative functions:
select p.*
from (select p.*,
             sum(num_points) over (partition by p.customer_id order by p.created_at) as cume_num_points
      from points p
     ) p
where cume_num_points >= 10 and
      (cume_num_points - num_points) < 10;
EDIT:
I may have misunderstood the question. If you just want the first break, one method uses window functions:
select p.*
from (select p.*,
             lag(num_points) over (partition by p.customer_id order by p.created_at) as prev_num_points
      from points p
     ) p
where num_points >= 10 and
      prev_num_points < 10;
Or, without a subquery:
select distinct on (p.customer_id) p.*
from points p
where num_points >= 10
order by p.customer_id, p.created_at;
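To also apply the "first time in the last week" condition from the question, here is a hedged sketch on top of the DISTINCT ON approach; the table and column names (points, customer_id, created_at, no_points) follow the question's example table:
-- Sketch only: take each customer's first row at or above the threshold,
-- then keep it only if that first crossing happened in the last 7 days.
select *
from (select distinct on (customer_id) *
      from points
      where no_points >= 10
      order by customer_id, created_at
     ) first_over_threshold
where created_at >= now()::date - 7
  and created_at < now()::date;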