I have the following data:
country objectid objectuse
record_date
2022-07-20 chile 0 4
2022-07-01 chile 1 4
2022-07-02 chile 1 4
2022-07-03 chile 1 4
2022-07-04 chile 1 4
... ... ... ...
2022-07-26 peru 3088 4
2022-07-27 peru 3088 4
2022-07-28 peru 3088 4
2022-07-30 peru 3088 4
2022-07-31 peru 3088 4
The data describes the daily usage of an object within a country for a single month (July 2022), and not all objects are used every day. One of the things I am interested in finding is the sum of the per-object maximums for the month:
WITH month_max AS (
SELECT
country,
objectid,
MAX(objectuse) AS maxuse
FROM mytable
GROUP BY
country,
objectid
)
SELECT
country,
SUM(maxuse)
FROM month_max
GROUP BY country;
Which results in this:
country sum
-------------
chile 1224
peru 17008
But what I actually want is the rolling sum of the maxima from the beginning of the month up to each date, so that I get something that looks like:
country sum
record_date
2022-07-01 chile 1
2022-07-01 peru 1
2022-07-02 chile 2
2022-07-02 peru 3
... ... ...
2022-07-31 chile 1224
2022-07-31 peru 17008
I tried using a window function like this to no avail:
SELECT
*,
SUM(objectuse) OVER (
PARTITION BY country
ORDER BY record_date ROWS 30 PRECEDING
) as cumesum
FROM mytable
ORDER BY cumesum DESC;
Is there a way I can achieve the desired result in SQL?
Thanks in advance.
EDIT: For what it's worth, I asked the same question for pandas and received an answer; perhaps it helps in figuring out how to do it in SQL.
What ended up working is probably not the most efficient approach. I essentially create a backward-looking block for each day in the month, reaching back to the beginning of the month. Within each of these blocks I take the maximum of objectuse for each objectid, then sum across all those maxima for that backward-looking period. I do this for every day in the data.
Here is the query that does it:
WITH daily_lookback AS (
SELECT
A.record_date,
A.country,
B.objectid,
MAX(B.objectuse) AS maxuse
FROM mytable AS A
LEFT JOIN mytable AS B
ON A.record_date >= B.record_date
AND A.country = B.country
AND DATE_PART('month', A.record_date) = DATE_PART('month', B.record_date)
AND DATE_PART('year', A.record_date) = DATE_PART('year', B.record_date)
GROUP BY
A.record_date,
A.country,
B.objectid
)
SELECT
record_date,
country,
SUM(maxuse) AS usetotal
FROM daily_lookback
GROUP BY
record_date,
country
ORDER BY
record_date;
Which gives me exactly what I was looking for: the cumulative sum of the objectid maximums for the backward-looking period, like this:
country sum
record_date
2022-07-01 chile 1
2022-07-01 peru 1
2022-07-02 chile 2
2022-07-02 peru 3
... ... ...
2022-07-31 chile 1224
2022-07-31 peru 17008
You need to change your inner query to use the windowed maximum:
WITH month_max AS (
SELECT record_date, country, objectid,
MAX(objectuse) over (PARTITION BY country, objectid ORDER BY record_date) AS mx
FROM mytable
)
SELECT record_date, country, SUM(mx) as "sum"
FROM month_max
GROUP BY record_date, country;
This does assume one row per object per date.
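If there can be more than one row per object per date, a minimal sketch of a workaround (assuming the same mytable schema as above): collapse to one row per object and date first, then apply the running maximum.
WITH deduped AS (
-- collapse duplicates to one row per (country, objectid, record_date)
SELECT record_date, country, objectid, MAX(objectuse) AS objectuse
FROM mytable
GROUP BY record_date, country, objectid
), month_max AS (
SELECT record_date, country, objectid,
MAX(objectuse) OVER (PARTITION BY country, objectid ORDER BY record_date) AS mx
FROM deduped
)
SELECT record_date, country, SUM(mx) AS "sum"
FROM month_max
GROUP BY record_date, country
ORDER BY record_date, country;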
Here's a rewritten version of your query. With appropriate indexing it may run faster:
select record_date, country, min(usetotal) as usetotal
from mytable d inner join lateral (
select distinct sum(max(objectuse)) over () as usetotal from mytable a
where a.record_date between date_trunc('month', d.record_date) and d.record_date
and a.country = d.country
group by objectid
) T on 1 = 1
group by record_date, country
order by record_date, country;
https://dbfiddle.uk/?rdbms=postgres_14&fiddle=63760e30aecf4c885ec4967045b6cd03
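For the indexing mentioned above, a minimal sketch (the index name is illustrative, and whether it helps depends on your data):
-- Illustrative name: supports the a.country = d.country equality and the
-- record_date range filter inside the lateral subquery.
CREATE INDEX mytable_country_date_idx ON mytable (country, record_date);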
Related
I have table in Teradata SQL like below:
ID trans_date
------------------------
123 | 2021-01-01
887 | 2021-01-15
123 | 2021-02-10
45 | 2021-03-11
789 | 2021-10-01
45 | 2021-09-02
And I need to calculate the average monthly number of transactions made by customers in the period between 2021-01-01 and 2021-09-01, so the client with "ID" = 789 will not be counted because they made their transaction later.
In the first month (01) there were 2 transactions
In the second month there was 1 transaction
In the third month there was 1 transaction
In the ninth month there was 1 transaction
So the result should be (2+1+1+1) / 4 = 1.25, shouldn't it?
How can I calculate this in Teradata SQL? Of course, I have only shown you a sample of my data.
SELECT ID, AVG(txns) FROM
(SELECT ID, TRUNC(trans_date,'MON') as mth, COUNT(*) as txns
FROM mytable
-- WHERE condition matches the question but likely want to
-- use end date 2021-09-30 or use mth instead of trans_date
WHERE trans_date BETWEEN date'2021-01-01' and date'2021-09-01'
GROUP BY id, mth) mth_txn
GROUP BY id;
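As the inline comment notes, the BETWEEN filter stops at 2021-09-01 and so drops most of September (the sample row 45 | 2021-09-02 would be excluded). A hedged variant that filters on the truncated month instead:
SELECT ID, AVG(txns) FROM
(SELECT ID, TRUNC(trans_date,'MON') as mth, COUNT(*) as txns
FROM mytable
-- keep every transaction whose month falls in Jan..Sep 2021
WHERE TRUNC(trans_date,'MON') BETWEEN date'2021-01-01' and date'2021-09-01'
GROUP BY id, mth) mth_txn
GROUP BY id;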
Your logic translated to SQL:
--(2+1+1+1) / 4 = 1.25; cast before dividing so integer division doesn't truncate the result to 1
SELECT id,
CAST(COUNT(*) AS DECIMAL(9,2)) / COUNT(DISTINCT TRUNC(trans_date,'MON')) AS avg_tx
FROM mytable
WHERE trans_date BETWEEN date'2021-01-01' and date'2021-09-01'
GROUP BY id;
You should compare to Fred's answer to see which is more efficient on your data.
It's probably very easy, but somehow I cannot get the desired result.
I have a large table of items sold. Each item has a category assigned (here A-D) and a country. I would like to calculate how many items were sold in Europe in each category, and what share each category contributes to total sales.
My data looks like this:
country   item_id   item_cat
----------------------------
Europe    1         A
Europe    2         A
Europe    3         B
Europe    4         B
Europe    5         C
Europe    6         C
Europe    7         C
USA       8         D
USA       9         D
USA       10        D
My desired output looks like this:
country   item_cat   cat_sales   total_sales   share
-----------------------------------------------------
Europe    A          2           7             0.29
Europe    B          2           7             0.29
Europe    C          3           7             0.43
what I tried is:
SELECT
country,
item_cat,
count(*) as cat_sales,
count(*) OVER () as total_sales,
cat_sales / total_sales as share
FROM data
where country='Europe'
group by item_cat
but SQL tells me I cannot group and use windowing in one request.
How could I solve this?
Thanks in advance
A few ways; one would be to pre-count the total sales in a CTE and then select from it for the remaining aggregate.
I don't use Impala, but this should work in standard SQL:
with tot as (
select *,
Count(*) over(partition by country) * 1.0 as total_sales
from t
)
select country, item_cat,
Count(*) as cat_sales,
total_sales,
Round(Count(*) / total_sales, 2) as Share
from tot
where country='europe'
group by country, item_cat, total_sales
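Another option, sketched in standard SQL (I can't vouch for Impala specifics): window functions are evaluated after GROUP BY, so you can nest the regular aggregate inside the window and skip the CTE:
select country, item_cat,
Count(*) as cat_sales,
-- window over the grouped counts: total per country
Sum(Count(*)) over(partition by country) as total_sales,
Round(Count(*) * 1.0 / Sum(Count(*)) over(partition by country), 2) as share
from t
where country='europe'
group by country, item_cat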
I'm working on a query to compute the distinct users of particular features of an app within a moving window. So, if there's a range from 15-20 October, I want the query to go from 8-15 Oct, 9-16 Oct, etc. and get the count of distinct users per feature. So for each date it should have x rows, where x is the number of features.
I have the following query so far:
WITH V1(edate, code, total) AS
(
SELECT date, featurecode,
DENSE_RANK() OVER (PARTITION BY featurecode ORDER BY accountid ASC) + DENSE_RANK() OVER (PARTITION BY featurecode ORDER BY accountid DESC) - 1
FROM....
GROUP BY edate, featurecode, appcode, accountid
HAVING appcode='sample' AND eventdate BETWEEN '15-10-2018' And '20-10-2018'
)
Select distinct date, code, total
from V1
WHERE date between '2018-10-15' AND '2018-10-20'
This returns the same set of values for all the dates. Is there any way to do this efficiently? It's a DB2 database by the way, but I'm looking for insight from PostgreSQL users too.
Present result: all the totals are being repeated.
date code total
10/15/2018 appname-feature1 123
10/15/2018 appname-feature2 234
10/15/2018 appname-feature3 321
10/16/2018 appname-feature1 123
10/16/2018 appname-feature2 234
10/16/2018 appname-feature3 321
Desired result.
date code total
10/15/2018 appname-feature1 123
10/15/2018 appname-feature2 234
10/15/2018 appname-feature3 321
10/16/2018 appname-feature1 212
10/16/2018 appname-feature2 577
10/16/2018 appname-feature3 2345
This is not easy to do efficiently. DISTINCT counts aren't incrementally maintainable (unless you go down the route of inexact DISTINCT counts such as HyperLogLog).
It is easy to code in SQL, though; try the usual indexing etc. to help.
It is probably not possible, however, to code with OLAP functions, not least because you can only use RANGE BETWEEN with SUM(), COUNT(), MAX(), etc., but not with RANK() or DENSE_RANK(). So just use a traditional correlated sub-select.
First some data
CREATE TABLE T(D DATE,F CHAR(1),A CHAR(1));
INSERT INTO T (VALUES
('2018-10-10','X','A')
, ('2018-10-11','X','A')
, ('2018-10-15','X','A')
, ('2018-10-15','X','A')
, ('2018-10-15','X','B')
, ('2018-10-15','Y','A')
, ('2018-10-16','X','C')
, ('2018-10-18','X','A')
, ('2018-10-21','X','B')
)
;
Now a simple select
WITH B AS (
SELECT DISTINCT D, F FROM T
)
SELECT D,F
, (SELECT COUNT(DISTINCT A)
FROM T
WHERE T.F = B.F
AND T.D BETWEEN B.D - 3 DAYS AND B.D + 4 DAYS
) AS DISTINCT_A_MOVING_WEEK
FROM
B
ORDER BY F,D
;
giving, e.g.
D F DISTINCT_A_MOVING_WEEK
---------- - ----------------------
2018-10-10 X 1
2018-10-11 X 2
2018-10-15 X 3
2018-10-16 X 3
2018-10-18 X 3
2018-10-21 X 2
2018-10-15 Y 1
I am looking into a table with transaction data from a two-sided platform, where you have buyers and sellers. I want to know the total number of unique combinations of buyers and sellers. Let's say Abe buys from Brandon in January; that's 1 combination. If Abe buys from Cece in February, that makes 2, but if Abe then buys from Brandon again, it's still 2.
My solution was to use the DENSE_RANK() function:
WITH
combos AS (
SELECT
t.buyerid, t.sellerid,
DENSE_RANK() OVER (ORDER BY t.buyerid, t.sellerid) AS combinations
FROM transactions t
WHERE t.transaction_date < '2018-05-01'
)
SELECT
MAX(combinations) AS total_combinations
FROM combos
This works fine. Each new combo gets a higher rank, and if you select the MAX of that result, you know the amount of unique combos.
However, I want to know this total amount of unique combos on a per month basis. The problem here is that if I group per transaction month, it only counts the unique combos in that month. In the example of Abe, it would be a unique combo in January, and then another combo in the next month, because that's how grouping works in SQL.
Example:
transaction_date buyerid sellerid
2018-01-03 3828 219
2018-01-08 2831 123
2018-02-10 3828 219
The output of DENSE_RANK() named combinations over all these rows is:
transaction_date buyerid sellerid combinations
2018-01-03 3828 219 1
2018-01-08 2831 123 2
2018-02-10 3828 219 2
And therefore, when selecting the MAX of combinations you know the number of unique buyer/seller combos, which is 2 here.
However, I would like to see a running total of unique combos as of the start of each month, for all months until now. But when we group by month, it goes like this:
transaction_date buyerid sellerid month combinations
2018-01-03 3828 219 jan 1
2018-01-08 2831 123 jan 2
2018-02-10 3828 219 feb 1
While I actually would want an output like:
month total_combinations_at_month_start
jan 0
feb 2
mar 2
How should I solve this? I've tried to find help on all kinds of window functions, but no luck until now. Thanks!
Here is one method:
WITH combos AS (
SELECT t.*,
ROW_NUMBER() OVER (PARTITION BY sellerid, buyerid ORDER BY t.transaction_date) as combo_seqnum,
ROW_NUMBER() OVER (PARTITION BY sellerid, buyerid, date_trunc('month', t.transaction_date) ORDER BY t.transaction_date) as combo_month_seqnum
FROM transactions t
WHERE t.transaction_date < '2018-05-01'
)
SELECT 'Overall' as which, COUNT(*)
FROM combos
WHERE combo_seqnum = 1
UNION ALL
SELECT to_char(transaction_date, 'YYYY-MM'), COUNT(*)
FROM combos
WHERE combo_month_seqnum = 1
GROUP BY to_char(transaction_date, 'YYYY-MM');
This puts the results in separate rows. If you want a cumulative number and number per month:
SELECT to_char(transaction_date, 'YYYY-MM'),
SUM( (combo_month_seqnum = 1)::int ) as uniques_in_month,
SUM(SUM( (combo_seqnum = 1)::int )) OVER (ORDER BY to_char(transaction_date, 'YYYY-MM')) as uniques_through_month
FROM combos
GROUP BY to_char(transaction_date, 'YYYY-MM')
Here is a rextester illustrating the solution.
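And if you want the total known at each month start, as in the desired output, a hedged sketch reusing the combos CTE from above: compute the running total per month, then shift it back one row with LAG().
WITH monthly AS (
SELECT to_char(transaction_date, 'YYYY-MM') as mon,
-- cumulative count of first-ever appearances through each month
SUM(SUM( (combo_seqnum = 1)::int )) OVER (ORDER BY to_char(transaction_date, 'YYYY-MM')) as through_month
FROM combos
GROUP BY to_char(transaction_date, 'YYYY-MM')
)
SELECT mon,
COALESCE(LAG(through_month) OVER (ORDER BY mon), 0) as total_combinations_at_month_start
FROM monthly;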
I need to count a value (M_Id) at each change of a date (RS_Date) and create a column, grouped by RS_Date, that holds the running active total as of that date.
So the table is:
Ep_Id Oa_Id M_Id M_StartDate RS_Date
--------------------------------------------
1 2001 5 1/1/2014 1/1/2014
1 2001 9 1/1/2014 1/1/2014
1 2001 3 1/1/2014 1/1/2014
1 2001 11 1/1/2014 1/1/2014
1 2001 2 1/1/2014 1/1/2014
1 2067 7 1/1/2014 1/5/2014
1 2067 1 1/1/2014 1/5/2014
1 3099 12 1/1/2014 3/2/2014
1 3099 14 2/14/2014 3/2/2014
1 3099 4 2/14/2014 3/2/2014
So my goal is something like:
RS_Date Active
-----------------
1/1/2014 5
1/5/2014 7
3/2/2014 10
If M_StartDate = RS_Date I need to count the M_Id. Then, for each RS_Date that is not equal to the start date, I need to count the M_Id and add that count to the M_StartDate count, then count the next RS_Date and add that to the last active count.
I can get the basic counts with something like
COUNT(Case when M_StartDate <= RS_Date
then [M_Id] end) as Test.
But I am stuck on how to get to the result I want.
Any help would be greatly appreciated.
Brian
Added in response to comments: I am using SQL Server version 10 (i.e. SQL Server 2008).
If using SQL Server 2012+ you can use ROWS with the analytic/window functions:
;with cte AS (SELECT RS_Date
,COUNT(DISTINCT M_ID) AS CT
FROM Table1
GROUP BY RS_Date
)
SELECT *,SUM(CT) OVER(ORDER BY RS_Date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS Run_CT
FROM cte
Demo: SQL Fiddle
If stuck using something prior to 2012 you can use:
;with cte AS (SELECT RS_Date
,COUNT(DISTINCT M_ID) AS CT
FROM Table1
GROUP BY RS_Date
)
SELECT a.RS_Date
,SUM(b.CT)
FROM cte a
LEFT JOIN cte b
ON a.RS_DAte >= b.RS_Date
GROUP BY a.RS_Date
Demo: SQL Fiddle
You need a cumulative sum, which is easy in SQL Server 2012 using windowed aggregate functions. Based on your description, this will return the expected result:
SELECT Ep_Id, RS_Date,
SUM(COUNT(*))
OVER (PARTITION BY Ep_Id
ORDER BY RS_Date
ROWS UNBOUNDED PRECEDING)
FROM tab
GROUP BY Ep_Id, RS_Date
It looks like you want something like this:
SELECT
RS_Date,
SUM(c) OVER (PARTITION BY M_StartDate ORDER BY RS_Date ROWS UNBOUNDED PRECEDING)
FROM
(
SELECT M_StartDate, RS_Date, COUNT(DISTINCT M_Id) AS c
FROM my_table
GROUP BY M_StartDate, RS_Date
) counts
The inline view computes the counts of distinct M_Id values within each (M_StartDate, RS_Date) group (distinctness enforced only within the group), and the outer query uses the analytic version of SUM() to add up the counts within each M_StartDate.
Note that this particular query will not exactly reproduce your example results. It will instead produce:
RS_Date Active
-----------------
1/1/2014 5
1/5/2014 7
3/2/2014 8
3/2/2014 2
This is on account of some rows in your example data with RS_Date 3/2/2014 having a later M_StartDate than others. If this is not what you want then you need to clarify the question, which currently seems a bit inconsistent.
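If instead you want a single row per RS_Date with a grand running total that reproduces the example exactly, a sketch (again SQL Server 2012+ syntax): drop the PARTITION BY and accumulate over RS_Date alone.
SELECT
RS_Date,
-- sum the per-group counts per RS_Date, then run a total over dates
SUM(SUM(c)) OVER (ORDER BY RS_Date ROWS UNBOUNDED PRECEDING) AS Active
FROM
(
SELECT M_StartDate, RS_Date, COUNT(DISTINCT M_Id) AS c
FROM my_table
GROUP BY M_StartDate, RS_Date
) counts
GROUP BY RS_Date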
Unfortunately, the ordered analytic form of SUM() is not available until SQL Server 2012. In earlier versions the job is messier. It could be done like this:
WITH gc AS (
SELECT M_StartDate, RS_Date, COUNT(DISTINCT M_Id) AS c
FROM my_table
GROUP BY M_StartDate, RS_Date
)
SELECT
RS_Date,
(
SELECT SUM(c)
FROM gc AS gc2
WHERE gc2.M_StartDate = gc.M_StartDate AND gc2.RS_Date <= gc.RS_Date
) AS Active
FROM gc
If you are using SQL Server 2012 or newer, you can use window functions to produce a running total. (Note that LAG, documented below, reads a prior row; for a running total you want SUM() with ORDER BY inside the OVER clause.)
https://msdn.microsoft.com/en-us/library/hh231256(v=sql.110).aspx
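A minimal sketch of that running-total pattern, assuming the table and column names from the question:
SELECT RS_Date,
-- window the grouped distinct counts into a running total
SUM(COUNT(DISTINCT M_Id))
OVER (ORDER BY RS_Date
ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS Active
FROM my_table
GROUP BY RS_Date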