Advanced partitions query - SQL

I have a table that contains something similar to the following columns:
infopath_form_id (integer)
form_type (integer)
approver (varchar)
event_timestamp (datetime)
This table contains the approval history for an InfoPath form; each form submitted in the system is given a unique infopath_form_id that its history is stored against. There is no consistent number of approvers per form (it differs based on the value of the transaction), but there are always at least two approvers for a form. Each approval task is written as another row, and only the history of previous approvals is stored in this table.
What I need to find out is the average time taken between approvals for each form type. I've tried tackling this every which way using partitions, but I'm getting stuck given that there isn't a fixed number of approvers per form. How should I approach this problem?

I believe you want this:
SELECT infopath_form_id
     , DATEDIFF(MINUTE, MIN(event_timestamp), MAX(event_timestamp)) / CAST(COUNT(*) - 1 AS FLOAT) AS avg_minutes_between
FROM Table
GROUP BY infopath_form_id
That will give you the average number of minutes between consecutive entries for each infopath_form_id.
Explanation of functions used:
MIN() returns the earliest date
MAX() returns the latest date
DATEDIFF() returns the difference between two dates in a given unit (Minutes in this example)
COUNT() returns the number of rows per grouping item (i.e. per infopath_form_id)
So simply divide the total minutes elapsed by one less than the number of records giving you the average number of minutes between events.
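Since the question asks for the figure per form type rather than per form, you can average those per-form gaps in an outer query. A minimal sketch, assuming each form has a single form_type and that the table is named ApprovalHistory (the real table name isn't given in the question):
SELECT form_type
     , AVG(avg_minutes) AS avg_minutes_between_approvals
FROM (
    SELECT form_type
         , infopath_form_id
         , DATEDIFF(MINUTE, MIN(event_timestamp), MAX(event_timestamp)) / CAST(COUNT(*) - 1 AS FLOAT) AS avg_minutes
    FROM ApprovalHistory
    GROUP BY form_type, infopath_form_id
) per_form
GROUP BY form_type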

Related

SQL - Returning max count, after breaking down a day into hourly rows

I need to write a SQL query that returns the highest order count in a given hourly range. The problem is that my table just logs orders as they come in and has no identifier separating one hour from the next.
So basically, I need to find the highest number of orders in any given hour between 7/08/2022 and 7/15/2022, from a table that does not distinguish hour buckets and simply logs orders as they arrive.
I have tried a query that combines MAX(), COUNT(), and DATETIME(), but to no avail.
Can I please receive some help?
I've had to tackle this kind of measurement in the past.
Here's what I did for 15-minute intervals:
My datetime column is named datreg in my database log area.
cast(round(floor(cast(datreg as float(53))*24*4)/(24*4),5) as smalldatetime)
I multiply by an extra 4 in this formula to get four intervals inside each hour of my 24-hour period. For you, it would look like this to get hourly intervals:
cast(round(floor(cast(datreg as float(53))*24)/(24),5) as smalldatetime)
This is a little piece of magic when it comes to dashboards and reports.
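Applied to the original question, a minimal sketch: group by that hourly bucket, count the rows in each bucket, and keep the largest count. This assumes SQL Server and an Orders table with the datetime column datreg (both names are hypothetical):
SELECT TOP 1
       CAST(ROUND(FLOOR(CAST(datreg AS FLOAT(53)) * 24) / 24, 5) AS SMALLDATETIME) AS hour_bucket
     , COUNT(*) AS order_count
FROM Orders
WHERE datreg >= '2022-07-08' AND datreg < '2022-07-16'  -- covers 7/08 through the end of 7/15
GROUP BY CAST(ROUND(FLOOR(CAST(datreg AS FLOAT(53)) * 24) / 24, 5) AS SMALLDATETIME)
ORDER BY COUNT(*) DESC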

SQL Retention based on cohort and period

I have already seen all the related posts, but none have been able to help me.
I have the following fields:
Where:
SOLD_AT is the date of each transaction
CUSTOMER_ID is a unique ID for each customer
COHORT is the date (Year-Month) of the first purchase of the user in that row
ORDER_MONTH is the date (Year-Month) of the purchase in that row
PERIOD_NUMBER is the date difference in months between COHORT and ORDER_MONTH
N_CUSTOMERS is the number of customers in each PERIOD_NUMBER in each COHORT
In case it is useful, I have the queries with which I obtained these fields, but I think including them would only add noise, since the definition of each variable is more useful.
What I need to do, and have not been able to, is add an additional field for the retention of each period number of each cohort (not a pivot table with the period numbers of each cohort as columns).
Specifically, I need the retention of each period number to be the number of users of that period divided by the number of users of the previous period.
To do this in Python, I simply do:
cohort_pivot = df_cohort.pivot_table(index='cohort',
                                     columns='period_number',
                                     values='n_customers')
cohort_size = cohort_pivot.iloc[:, 0]
retention_matrix1 = cohort_pivot.divide(cohort_size, axis=0)
and I can then unpivot and take out the retention for each period of each cohort to create an additional column with this value.
One of the answers I tried, because it was the closest thing I found, was the accepted answer in this post, but I am not able to know in advance how many period numbers or historical months I will have, since the code has to be dynamic for any company that is loaded. (For example, DBT, the tool I'm using, can create dynamic pivot tables instead of static ones that require knowing this in advance, but as I say, I need to create the field, not the pivot table.)
Any ideas will be more than welcome. Thank you very much!
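For illustration, the same division can be expressed directly in SQL with a window function, which avoids pivoting entirely and works for any number of periods. A minimal sketch, assuming the fields above live in a table or CTE called cohort_counts (a hypothetical name); FIRST_VALUE mirrors the Python above by dividing by period 0's size, and swapping in LAG(N_CUSTOMERS) would give the previous-period version described in the text:
SELECT COHORT
     , PERIOD_NUMBER
     , N_CUSTOMERS
     , CAST(N_CUSTOMERS AS FLOAT)
       / FIRST_VALUE(N_CUSTOMERS) OVER (PARTITION BY COHORT ORDER BY PERIOD_NUMBER) AS RETENTION
FROM cohort_counts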

Boolean conditions that span rows in Spark

I'm trying to calculate a boolean column based on a group and date range.
I have a table that records transactions with the following row structure:
Person GUID - Date - Payment Amount
There are multiple rows per person.
What I want is a new boolean column, called Recent, that is determined by whether the person had a transaction within a time period of, say, 3 days prior. It would be True if they have and False if they have not.
Any idea for a query to do this?
It depends on when the start time for the beginning of "prior" is. If it's "now" (the current time), then it's quite easy: you want to find the max date per person and then filter on that being no more than some distance from the current time.
Take a look at window functions in Spark and how they can be used with time series.
To find the max date you'll use an expression such as
max(Date) over (partition by Person) as max_date
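Putting it together for the "prior means now" case, a minimal Spark SQL sketch, assuming the rows live in a table called transactions with columns Person, Date, Amount (hypothetical names):
SELECT Person
     , Date
     , Amount
     , datediff(current_date(), max(Date) OVER (PARTITION BY Person)) <= 3 AS Recent
FROM transactions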
Hope this helps.

SQL GROUPING SETS averages with multiple many-to-many dimensions

I have a table of data with the following:
User,Platform,Dt,Activity_Flag,Total_Purchases
1,iOS,05/05/2016,1,1
1,Android,05/05/2016,1,2
2,iOS,05/05/2016,1,0
2,Android,05/05/2016,1,2
3,iOS,05/05/2016,1,1
3,Android,06/05/2016,1,3
1,iOS,06/05/2016,1,2
4,Android,06/05/2016,1,2
1,Android,06/05/2016,1,0
3,iOS,07/05/2016,1,2
2,iOS,08/05/2016,1,0
I want to do a GROUPING SETS (Platform,Dt,(Platform,Dt),()) aggregation to be able to find for each combination of Platform and Dt the following:
Total Purchases
Total Unique Users
Average Purchases per User per Day
The first two are simple as these can be achieved via a sum(Total_Purchases) and count(distinct user) respectively.
The problem I have is with the last metric. The result set should look like this, but I don't know how to get the last column calculated correctly:
Platform,Dt,Total_Purchases,Total_Unique_Users,Average_Purchases_Per_User_Per_Day
Android,05/05/2016,4,2,2.0
iOS,05/05/2016,2,3,0.7
Android,06/05/2016,5,3,1.7
iOS,06/05/2016,2,1,2.0
iOS,07/05/2016,2,1,2.0
iOS,08/05/2016,0,1,0.0
,05/05/2016,6,3,2.0
,06/05/2016,7,3,2.3
,07/05/2016,2,1,2.0
,08/05/2016,0,1,0.0
Android,,9,4,1.8
iOS,,6,3,1.2
,,15,4,1.6
For the first ten rows, the average purchases per user per day is simply Total_Purchases divided by Total_Unique_Users, since each of those rows represents a single date. But for the final three rows, that division does not give the desired result: the metric needs to take the per-user average for each day in turn and then average those daily figures to get the overall per-day amount.
If this isn't clear please let me know and I'll be happy to explain better. This is my first post on this site!
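For illustration, a minimal sketch of that two-step logic in SQL Server syntax: one pass computes the per-user daily rate at both platform/day and all-platform/day granularity, a second pass computes the requested cells, and a correlated subquery averages the matching daily rates. The table name user_activity is hypothetical, and this assumes Platform and Dt are never NULL in the raw data:
WITH daily AS (
    -- per-user rate for each day, at platform/day and all-platform/day granularity
    SELECT Platform, Dt,
           CAST(SUM(Total_Purchases) AS FLOAT) / COUNT(DISTINCT [User]) AS rate
    FROM user_activity
    GROUP BY GROUPING SETS ((Platform, Dt), (Dt))
),
cells AS (
    -- purchases and distinct users for every requested grouping-set cell
    SELECT Platform, Dt,
           SUM(Total_Purchases) AS Total_Purchases,
           COUNT(DISTINCT [User]) AS Total_Unique_Users
    FROM user_activity
    GROUP BY GROUPING SETS ((Platform, Dt), (Platform), (Dt), ())
)
SELECT c.Platform, c.Dt, c.Total_Purchases, c.Total_Unique_Users,
       (SELECT AVG(d.rate)
        FROM daily d
        WHERE (d.Platform = c.Platform OR (d.Platform IS NULL AND c.Platform IS NULL))
          AND (d.Dt = c.Dt OR c.Dt IS NULL)) AS Average_Purchases_Per_User_Per_Day
FROM cells c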

Creating a calculated column (not aggregate) that changes value based on context SSAS tabular DAX

Data: I have a single row that represents an annual subscription to a product. It has an overall startDate and endDate; there is also a third date, startDate + 1 month, called endDateNew. I also have a non-related date table (called table X).
Output I'm looking for: I need a new column called Categorisation that will return 'New' if the date selected in table X is between startDate and endDateNew and 'Existing' if the date is between startDate and endDate.
Problem: The column seems to evaluate immediately without taking into account the date context from the non-related date table. I kind of expected this to happen in Visual Studio (where it assumes the context is all records?), but when previewing in Excel it carries the same value through.
The bit that is working: I have an aggregate (an active subscriber count) that correctly counts the subscription as active over the months selected in table X.
The SQL equivalent on an individual date:
case
    when '2015-10-01' between startDate and endDateNew then 'New'
    when '2015-10-01' < endDate then 'Existing'
end as Category
where the value would be calculated for each date in table X
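In set terms, that is the case expression applied to every (date, subscription) pair; a minimal sketch of the SQL equivalent with illustrative table and key names:
SELECT x.[Date]
     , s.SubscriptionId
     , CASE
           WHEN x.[Date] BETWEEN s.startDate AND s.endDateNew THEN 'New'
           WHEN x.[Date] < s.endDate THEN 'Existing'
       END AS Category
FROM TableX x
CROSS JOIN Subscriptions s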
Thanks!
Ross
Calculated columns are only evaluated at model refresh/process time. This is by design. There is no way to make a calculated column change based on run-time changes in filter context from a pivot table.
Ross,
Calculated columns work differently than they do in Excel. Optimally, the value is known when the record is first added to the model.
Your example is kinda similar to a slowly changing dimension.
There are several possible solutions. Here are two and a half:
Do a full process on the last 32 days of data every time you process the subscriptions table (which may be unacceptably inefficient).
OR
Create a new table 'Subscription scd' with the primary key from the subscriptions table and your single calculated column, 'Subscription Age in Days', like an outrigger. This table can be reprocessed more efficiently than the subscriptions table, so process the subscriptions table as incrementals only and do a full process on this outrigger for the data within the last 32 days instead.
OR
Decide which measures are interesting within the 'new/existing' context and write explicit measures for them, using a dynamic filter on the date column in the measures.
E.g. define:
'Sum of Sales - New Subscriptions',
'Sum of Sales - Existing Subscriptions',
'Distinct Count of New Subscriptions - Last 28 Days', etc