Help me in executing a SQL query

I have a table like the one below. I want to calculate the sum of Amount for the first 5% of customers, then the next 20%, the next 25%, the next 25%, and finally the remainder. This is just a sample of the DB table.
5% = 1 row, so the sum is 100
Next 20% = 4 rows, so the sum is 1800 (200+500+300+800)
Next 25% = 5 rows, so the sum is 2900 (600+800+500+400+600)
Next 25% = 5 rows, so the sum is 2500 (300+800+300+800+300)
Rest = 4 rows, so the sum is 1400 (800+100+200+300)
Cus_ID Amount
1004 100
1064 200
1126 500
1280 300
1678 800
1719 600
1862 800
2109 500
2892 400
2957 600
3097 300
3205 800
3399 300
3460 800
4169 300
4380 800
4689 100
4886 200
4906 300
Result:
5%    Next 20%   Next 25%   Next 25%   Rest
100   1800       2900       2500       1400

WITH T(Cus_ID,Amount ) AS
(
SELECT 1004, 100 UNION ALL
SELECT 1064, 200 UNION ALL
SELECT 1126, 500 UNION ALL
SELECT 1280, 300 UNION ALL
SELECT 1678, 800 UNION ALL
SELECT 1719, 600 UNION ALL
SELECT 1862, 800 UNION ALL
SELECT 2109, 500 UNION ALL
SELECT 2892, 400 UNION ALL
SELECT 2957, 600 UNION ALL
SELECT 3097, 300 UNION ALL
SELECT 3205, 800 UNION ALL
SELECT 3399, 300 UNION ALL
SELECT 3460, 800 UNION ALL
SELECT 4169, 300 UNION ALL
SELECT 4380, 800 UNION ALL
SELECT 4689, 100 UNION ALL
SELECT 4886, 200 UNION ALL
SELECT 4906, 300
), T2 AS
(
-- number the customers and compute each row's running position as a fraction of the total row count
SELECT *,
       ROW_NUMBER() OVER (ORDER BY Cus_ID) AS RN,
       ROW_NUMBER() OVER (ORDER BY Cus_ID) / CAST(COUNT(*) OVER() AS FLOAT) AS Pct
FROM T
), T3(Amount, Grp) AS
(
-- bucket each row by the cumulative percentage of the *previous* row
-- (ISNULL treats the first row, which has no previous row, as 0%)
SELECT a.Amount, CASE WHEN ISNULL(b.Pct, 0) < 0.05 THEN 1
                      WHEN b.Pct < 0.25 THEN 2
                      WHEN b.Pct < 0.50 THEN 3
                      WHEN b.Pct < 0.75 THEN 4
                      ELSE 5
                 END
FROM T2 a LEFT JOIN T2 b ON b.RN = a.RN - 1
)
SELECT SUM(Amount) AS Amount, Grp
FROM T3
GROUP BY Grp
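As a side note, the previous-row self-join can be avoided by shifting the cumulative fraction with (RN - 1), so the first row starts at 0%. This is only a sketch (SQL Server assumed; Customers(Cus_ID, Amount) is a hypothetical table holding the sample rows, not a name from the question), not part of the original answer:
-- sketch only: Customers(Cus_ID, Amount) is a hypothetical table with the sample rows
WITH T2 AS
(
SELECT Amount,
       -- fraction of rows that come strictly before each row,
       -- so the first row gets 0 and lands in the 5% bucket with no self-join
       (ROW_NUMBER() OVER (ORDER BY Cus_ID) - 1) / CAST(COUNT(*) OVER() AS FLOAT) AS Pct
FROM Customers
)
SELECT Grp, SUM(Amount) AS Amount
FROM (
    SELECT Amount,
           CASE WHEN Pct < 0.05 THEN 1
                WHEN Pct < 0.25 THEN 2
                WHEN Pct < 0.50 THEN 3
                WHEN Pct < 0.75 THEN 4
                ELSE 5
           END AS Grp
    FROM T2
) g
GROUP BY Grp
ORDER BY Grp;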


Rank with the sum

Below is the table:
category weightage As_of_date
123abc 50 1/1/2020
456abc 100 1/2/2020
456abc 100 1/3/2020
678def 200 1/4/2020
678def 200 1/4/2020
123def 50 2/1/2020
123def 50 2/1/2020
123def 50 2/3/2020
123def 50 2/1/2020
123def 50 6/7/2020
I want to rank the category based on weightage descending; expected results:
category weightage As_of_date dense_rank
123abc 50 1/1/2020 4
456abc 100 1/2/2020 3
456abc 100 1/3/2020 3
678def 200 1/4/2020 1
678def 200 1/4/2020 1
123def 50 2/1/2020 2
123def 50 2/1/2020 2
123def 50 2/3/2020 2
123def 50 2/1/2020 2
123def 50 6/7/2020 2
What was already tried: select dense_rank() over (partition by category order by weightage desc), but I need to rank based on sum(weightage) per category.
With a CTE you can do the calculations one by one: first calculate the dense rank for the sum per category (ranked_by_sum), then join back to the original table to get the dense rank value for the individual rows:
WITH test_data (category, weightage, as_of_date) AS
(
SELECT '123abc',50, TO_DATE('1/1/2020','DD/MM/YYYY') FROM DUAL UNION ALL
SELECT '456abc',100,TO_DATE('1/2/2020','DD/MM/YYYY') FROM DUAL UNION ALL
SELECT '456abc',100,TO_DATE('1/3/2020','DD/MM/YYYY') FROM DUAL UNION ALL
SELECT '678def',200,TO_DATE('1/4/2020','DD/MM/YYYY') FROM DUAL UNION ALL
SELECT '678def',200,TO_DATE('1/4/2020','DD/MM/YYYY') FROM DUAL UNION ALL
SELECT '123def',50, TO_DATE('2/1/2020','DD/MM/YYYY') FROM DUAL UNION ALL
SELECT '123def',50, TO_DATE('2/1/2020','DD/MM/YYYY') FROM DUAL UNION ALL
SELECT '123def',50, TO_DATE('2/3/2020','DD/MM/YYYY') FROM DUAL UNION ALL
SELECT '123def',50, TO_DATE('2/1/2020','DD/MM/YYYY') FROM DUAL UNION ALL
SELECT '123def',50, TO_DATE('6/7/2020','DD/MM/YYYY') FROM DUAL
), ranked_by_sum (category,sum_weightage, drnk)
AS
(
SELECT category, SUM(weightage),DENSE_RANK () OVER (
ORDER BY SUM(weightage) DESC )
FROM test_data
GROUP BY category
)
SELECT t.category, t.weightage, t.as_of_date, r.drnk
FROM test_data t
JOIN ranked_by_sum r ON t.category = r.category
ORDER BY r.drnk DESC
CATEGO WEIGHTAGE AS_OF_DATE DRNK
------ ---------- ----------- ----------
123abc 50 01-JAN-2020 4
456abc 100 01-FEB-2020 3
456abc 100 01-MAR-2020 3
123def 50 02-JAN-2020 2
123def 50 06-JUL-2020 2
123def 50 02-JAN-2020 2
123def 50 02-JAN-2020 2
123def 50 02-MAR-2020 2
678def 200 01-APR-2020 1
678def 200 01-APR-2020 1
You can do it without a self-join using SUM as an analytic function in a nested sub-query:
SELECT category,
weightage,
as_of_date,
DENSE_RANK() OVER (ORDER BY total_weightage DESC) AS dense_rank
FROM (
SELECT t.*,
SUM(weightage) OVER (PARTITION BY category) AS total_weightage
FROM table_name t
)
Which, for the sample data:
CREATE TABLE table_name (category, weightage, as_of_date) AS
SELECT '123abc',50, DATE'2020-01-01' FROM DUAL UNION ALL
SELECT '456abc',100,DATE'2020-02-01' FROM DUAL UNION ALL
SELECT '456abc',100,DATE'2020-03-01' FROM DUAL UNION ALL
SELECT '678def',200,DATE'2020-04-01' FROM DUAL UNION ALL
SELECT '678def',200,DATE'2020-04-01' FROM DUAL UNION ALL
SELECT '123def',50, DATE'2020-01-02' FROM DUAL UNION ALL
SELECT '123def',50, DATE'2020-01-02' FROM DUAL UNION ALL
SELECT '123def',50, DATE'2020-03-02' FROM DUAL UNION ALL
SELECT '123def',50, DATE'2020-01-02' FROM DUAL UNION ALL
SELECT '123def',50, DATE'2020-07-06' FROM DUAL;
Outputs:
CATEGORY   WEIGHTAGE   AS_OF_DATE   DENSE_RANK
678def     200         01-APR-20    1
678def     200         01-APR-20    1
123def     50          02-JAN-20    2
123def     50          02-MAR-20    2
123def     50          06-JUL-20    2
123def     50          02-JAN-20    2
123def     50          02-JAN-20    2
456abc     100         01-MAR-20    3
456abc     100         01-FEB-20    3
123abc     50          01-JAN-20    4
db<>fiddle here

Best 10 of 12 in SQL

Scoring for a running race series: members get points at each monthly race based on their finish, and their total score is the sum of their best 10 of 12 monthly races. How do I get that for each member?
tblRacePoints
memnum - Membership number
RaceNo - YYYYMM, e.g., 201910
Points
For each member I want their total score across all races, the total of their best 10 of 12, and their lowest two scores for the year. Not everyone has done all the races, so they may not have 12 entries for the year.
How do I write a query to do this, and then rank them by their best-10-of-12 points?
If you are using a SQL Server (MSSQL) database, you can use ROW_NUMBER as below to achieve the required output. The same logic can be used in some other databases too.
Note: the table structure is just an assumption.
WITH your_table(player_id,dt,points)
AS
(
SELECT 1,'20190101', 100 UNION ALL SELECT 1,'20190201', 200 UNION ALL
SELECT 1,'20190301', 300 UNION ALL SELECT 1,'20190401', 400 UNION ALL
SELECT 1,'20190501', 500 UNION ALL SELECT 1,'20190601', 600 UNION ALL
SELECT 1,'20190701', 700 UNION ALL SELECT 1,'20190801', 800 UNION ALL
SELECT 1,'20190901', 900 UNION ALL SELECT 1,'20191001', 1000 UNION ALL
SELECT 1,'20191101', 1100 UNION ALL SELECT 1,'20191201', 1200 UNION ALL
SELECT 2,'20190101', 400 UNION ALL SELECT 2,'20190201', 200 UNION ALL
SELECT 2,'20190301', 300 UNION ALL SELECT 2,'20190401', 400 UNION ALL
SELECT 2,'20190501', 500 UNION ALL SELECT 2,'20190601', 600 UNION ALL
SELECT 2,'20190701', 700 UNION ALL SELECT 2,'20190801', 800 UNION ALL
SELECT 2,'20190901', 900 UNION ALL SELECT 2,'20191001', 1000 UNION ALL
SELECT 2,'20191101', 1100 UNION ALL SELECT 2,'20191201', 1200
)
SELECT
player_id,
YEAR(dt) Year,
SUM(Points) total_point
FROM
(
SELECT *,
       -- rank each player's races within the year, highest points first
       ROW_NUMBER() OVER (PARTITION BY player_id, YEAR(dt) ORDER BY Points DESC) RN
FROM your_table
) A
WHERE RN <= 10
GROUP BY player_id, YEAR(dt)
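The query above only returns the best-10 total per year. Below is a sketch of a fuller version against the table actually described in the question, tblRacePoints(memnum, RaceNo, Points) with RaceNo formatted as YYYYMM, that also reports the overall total, each member's two lowest scores, and a rank by the best-10 total. SQL Server is assumed, and everything beyond the question's column names is my assumption rather than part of the original answer:
WITH ranked AS
(
SELECT memnum,
       LEFT(RaceNo, 4) AS RaceYear,
       Points,
       -- rn_best: 1 = best race of the year; rn_worst: 1 = worst race of the year
       ROW_NUMBER() OVER (PARTITION BY memnum, LEFT(RaceNo, 4) ORDER BY Points DESC) AS rn_best,
       ROW_NUMBER() OVER (PARTITION BY memnum, LEFT(RaceNo, 4) ORDER BY Points ASC)  AS rn_worst
FROM tblRacePoints
)
SELECT memnum,
       RaceYear,
       SUM(Points)                                         AS total_all_races,
       SUM(CASE WHEN rn_best <= 10 THEN Points ELSE 0 END) AS best_10_total,
       MAX(CASE WHEN rn_worst = 1 THEN Points END)         AS lowest_score,
       MAX(CASE WHEN rn_worst = 2 THEN Points END)         AS second_lowest_score,
       RANK() OVER (PARTITION BY RaceYear
                    ORDER BY SUM(CASE WHEN rn_best <= 10 THEN Points ELSE 0 END) DESC) AS series_rank
FROM ranked
GROUP BY memnum, RaceYear
ORDER BY RaceYear, series_rank;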

How to perform rolling sum in BigQuery

I have sample data in BigQuery as follows:
with temp as (
select DATE("2016-10-02") date_field , 200 as salary
union all
select DATE("2016-10-09"), 500
union all
select DATE("2016-10-16"), 350
union all
select DATE("2016-10-23"), 400
union all
select DATE("2016-10-30"), 190
union all
select DATE("2016-11-06"), 550
union all
select DATE("2016-11-13"), 610
union all
select DATE("2016-11-20"), 480
union all
select DATE("2016-11-27"), 660
union all
select DATE("2016-12-04"), 690
union all
select DATE("2016-12-11"), 810
union all
select DATE("2016-12-18"), 950
union all
select DATE("2016-12-25"), 1020
union all
select DATE("2017-01-01"), 680
) ,
temp2 as (
select * , DATE("2017-01-01") as current_date
from temp
)
select * from temp2
I want to perform a rolling sum on this table. As an example, I have set the current date to 2017-01-01. Now, this being the current date, I want to go back 30 days and take the sum of the salary field. Hence, with 2017-01-01 being the current date, the total that should be returned is for the month of December 2016, which is 690+810+950+1020. How can I do this using Standard SQL?
Below is for BigQuery Standard SQL, for a rolling last-30-days SUM:
#standardSQL
SELECT *,
SUM(salary) OVER(
ORDER BY UNIX_DATE(date_field)
RANGE BETWEEN 30 PRECEDING AND 1 PRECEDING
) AS rolling_30_days_sum
FROM `project.dataset.your_table`
You can test and play with the above using the sample data from your question, as below:
#standardSQL
WITH temp AS (
SELECT DATE("2016-10-02") date_field , 200 AS salary UNION ALL
SELECT DATE("2016-10-09"), 500 UNION ALL
SELECT DATE("2016-10-16"), 350 UNION ALL
SELECT DATE("2016-10-23"), 400 UNION ALL
SELECT DATE("2016-10-30"), 190 UNION ALL
SELECT DATE("2016-11-06"), 550 UNION ALL
SELECT DATE("2016-11-13"), 610 UNION ALL
SELECT DATE("2016-11-20"), 480 UNION ALL
SELECT DATE("2016-11-27"), 660 UNION ALL
SELECT DATE("2016-12-04"), 690 UNION ALL
SELECT DATE("2016-12-11"), 810 UNION ALL
SELECT DATE("2016-12-18"), 950 UNION ALL
SELECT DATE("2016-12-25"), 1020 UNION ALL
SELECT DATE("2017-01-01"), 680
)
SELECT *,
SUM(salary) OVER(
ORDER BY UNIX_DATE(date_field)
RANGE BETWEEN 30 PRECEDING AND 1 PRECEDING
) AS rolling_30_days_sum
FROM temp
-- ORDER BY date_field
with the result:
Row date_field salary rolling_30_days_sum
1 2016-10-02 200 null
2 2016-10-09 500 200
3 2016-10-16 350 700
4 2016-10-23 400 1050
5 2016-10-30 190 1450
6 2016-11-06 550 1440
7 2016-11-13 610 1490
8 2016-11-20 480 1750
9 2016-11-27 660 1830
10 2016-12-04 690 2300
11 2016-12-11 810 2440
12 2016-12-18 950 2640
13 2016-12-25 1020 3110
14 2017-01-01 680 3470
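Note that the frame above deliberately excludes the current row (30 PRECEDING AND 1 PRECEDING), matching the question's "go back 30 days" wording. If the current row should be counted as well, a variant sketch (my assumption, not part of the original answer, reusing the same temp CTE) would shift the frame:
#standardSQL
-- variant: last 30 days *including* the current row
SELECT *,
  SUM(salary) OVER(
    ORDER BY UNIX_DATE(date_field)
    RANGE BETWEEN 29 PRECEDING AND CURRENT ROW
  ) AS rolling_30_days_sum_incl
FROM temp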
This is not exactly a "rolling sum", but it's the exact answer to "I want to go back 30 days and take sum of salary field. Hence, with 2017-01-01 being the current date, the total that should be returned is for the month of December"
with temp as (
select DATE("2016-10-02") date_field , 200 as salary
union all
select DATE("2016-10-09"), 500
union all
select DATE("2016-10-16"), 350
union all
select DATE("2016-10-23"), 400
union all
select DATE("2016-10-30"), 190
union all
select DATE("2016-11-06"), 550
union all
select DATE("2016-11-13"), 610
union all
select DATE("2016-11-20"), 480
union all
select DATE("2016-11-27"), 660
union all
select DATE("2016-12-04"), 690
union all
select DATE("2016-12-11"), 810
union all
select DATE("2016-12-18"), 950
union all
select DATE("2016-12-25"), 1020
union all
select DATE("2017-01-01"), 680
) ,
temp2 as (
select * , DATE("2017-01-01") as current_date_x
from temp
)
select SUM(salary)
from temp2
WHERE date_field BETWEEN DATE_SUB(current_date_x, INTERVAL 30 DAY) AND DATE_SUB(current_date_x, INTERVAL 1 DAY)
which returns 3470.
Note that I wasn't able to use current_date as a variable name, as it gets replaced by the actual current date.
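As another hedged option (my assumption, using BigQuery scripting), the anchor date can live in a DECLAREd variable instead of a column, which sidesteps the current_date naming problem. The CTE below holds only a subset of the question's sample rows to keep the sketch short:
DECLARE anchor_date DATE DEFAULT DATE '2017-01-01';
-- subset of the sample rows from the question
WITH temp AS (
  SELECT DATE("2016-12-04") AS date_field, 690 AS salary UNION ALL
  SELECT DATE("2016-12-11"), 810 UNION ALL
  SELECT DATE("2016-12-18"), 950 UNION ALL
  SELECT DATE("2016-12-25"), 1020 UNION ALL
  SELECT DATE("2017-01-01"), 680
)
SELECT SUM(salary) AS last_30_days_sum
FROM temp
WHERE date_field BETWEEN DATE_SUB(anchor_date, INTERVAL 30 DAY)
                     AND DATE_SUB(anchor_date, INTERVAL 1 DAY);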

How to calculate MTD and QTD by YTD value in Oracle

There is some data in my table t1 that looks like below:
date dealer YTD_Value
2018-01 A 1100
2018-02 A 2000
2018-03 A 3000
2018-04 A 4200
2018-05 A 5000
2018-06 A 5500
2017-01 B 100
2017-02 B 200
2017-03 B 550
... ... ...
Then I want to write SQL to query this table and get the result below:
date dealer YTD_Value MTD_Value QTD_Value
2018-01 A 1100 1100 1100
2018-02 A 2000 900 2000
2018-03 A 3000 1000 3000
2018-04 A 4200 1200 1200
2018-05 A 5000 800 2000
2018-06 A 5500 500 2500
2017-01 B 100 100 100
2017-02 B 200 100 200
2017-03 B 550 350 550
... ... ... ... ...
'YTD' means Year to date
'MTD' means Month to date
'QTD' means Quarter to date
So if I want to calculate the MTD and QTD values for dealer 'A' in '2018-01', they should be the same as the YTD value.
If I want to calculate the MTD value for dealer 'A' in '2018-06', it should equal the YTD value in '2018-06' minus the YTD value in '2018-05'. And the QTD value in '2018-06' should equal the YTD value in '2018-06' minus the YTD value in '2018-03', or equivalently the sum of the MTD values in (2018-04, 2018-05, 2018-06).
The same rule applies for other dealers such as B.
How can I write the SQL to achieve this?
The QTD calculation is tricky, but you can do this query without subqueries. The basic idea is to do a lag() for the monthly value. Then use a max() analytic function to get the YTD value at the beginning of the quarter.
Of course, the first quarter of the year has no such value, so a coalesce() is needed.
Try this:
with t(dte, dealer, YTD_Value) as (
select '2018-01', 'A', 1100 from dual union all
select '2018-02', 'A', 2000 from dual union all
select '2018-03', 'A', 3000 from dual union all
select '2018-04', 'A', 4200 from dual union all
select '2018-05', 'A', 5000 from dual union all
select '2018-06', 'A', 5500 from dual union all
select '2017-01', 'B', 100 from dual union all
select '2017-02', 'B', 200 from dual union all
select '2017-03', 'B', 550 from dual
)
select t.*,
(YTD_Value - lag(YTD_Value, 1, 0) over (partition by substr(dte, 1, 4) order by dte)) as MTD_Value,
(YTD_Value -
coalesce(max(case when substr(dte, -2) in ('03', '06', '09') then YTD_VALUE end) over
(partition by substr(dte, 1, 4) order by dte rows between unbounded preceding and 1 preceding
), 0
)
) as QTD_Value
from t
order by 1
Here is a db<>fiddle.
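Since the question says the same rule should apply to other dealers, here is a sketch of the same lag()/max() approach generalized to also partition by dealer, run against the real table rather than the inline sample. Assumptions (not from the original answer): the table is t1 and its reserved-word "date" column has been renamed date_col, with values stored as 'YYYY-MM' text as shown in the question:
-- sketch only: t1(date_col, dealer, YTD_Value), date_col formatted 'YYYY-MM'
SELECT t1.*,
       YTD_Value
         - LAG(YTD_Value, 1, 0) OVER (PARTITION BY dealer, SUBSTR(date_col, 1, 4)
                                      ORDER BY date_col)            AS MTD_Value,
       YTD_Value
         - COALESCE(
             MAX(CASE WHEN SUBSTR(date_col, -2) IN ('03', '06', '09') THEN YTD_Value END)
               OVER (PARTITION BY dealer, SUBSTR(date_col, 1, 4)
                     ORDER BY date_col
                     ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING),
             0)                                                      AS QTD_Value
FROM t1
ORDER BY dealer, date_col;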
The following query should do the job. It uses a CTE that translates the varchar date column to dates, and then a few joins to recover the value to compare.
I tested it in this db fiddle and the output matches your expected results.
WITH cte AS (
SELECT TO_DATE(my_date, 'YYYY-MM') my_date, dealer, ytd_value FROM my_table
)
SELECT
TO_CHAR(ytd.my_date, 'YYYY-MM') my_date,
ytd.ytd_value,
ytd.dealer,
ytd.ytd_value - NVL(mtd.ytd_value, 0) mtd_value,
ytd.ytd_value - NVL(qtd.ytd_value, 0) qtd_value
FROM
cte ytd
LEFT JOIN cte mtd ON mtd.my_date = ADD_MONTHS(ytd.my_date, -1) AND mtd.dealer = ytd.dealer
LEFT JOIN cte qtd ON qtd.my_date = ADD_MONTHS(TRUNC(ytd.my_date, 'Q'), -1) AND qtd.dealer = ytd.dealer
ORDER BY dealer, my_date
PS: date is a reserved word in most RDBMSs (including Oracle), so I renamed that column to my_date in the query.
You can use the lag() window analytic function and sum() over (...) aggregation as:
select "date",dealer,YTD_Value,MTD_Value,
sum(MTD_Value) over (partition by qt order by "date")
as QTD_Value
from
(
with t("date",dealer,YTD_Value) as
(
select '2018-01','A',1100 from dual union all
select '2018-02','A',2000 from dual union all
select '2018-03','A',3000 from dual union all
select '2018-04','A',4200 from dual union all
select '2018-05','A',5000 from dual union all
select '2018-06','A',5500 from dual union all
select '2017-01','B', 100 from dual union all
select '2017-02','B', 200 from dual union all
select '2017-03','B', 550 from dual
)
select t.*,
t.YTD_Value - nvl(lag(t.YTD_Value)
over (partition by substr("date",1,4) order by substr("date",1,4) desc, "date"),0)
as MTD_Value,
substr("date",1,4)||to_char(to_date("date",'YYYY-MM'),'Q')
as qt,
substr("date",1,4) as year
from t
order by year desc, "date"
)
order by year desc, "date";
Rextester Demo

How to identify positive minimum or negative maximum in a column for a key?

I have the following columns: Person_ID, Days. For one Person_ID, multiple Days values are possible. Something like this:
Person_Id Days
1000 100
1000 200
1000 -50
1000 -10
1001 100
1001 200
1001 50
1001 10
1002 -50
1002 -10
I need to address the following scenarios:
If all values in the Days column are positive, I need the minimum of Days for a Person_ID. If the Days column has both positive and negative values, I need the minimum of the positives. If all are negative, I need the maximum of the negatives.
The output like:
Person_id Days
1000 100
1001 10
1002 -10
I tried using a CASE statement, but I am unable to use the same column in the condition as well as in the grouping.
Try this (Postgres 9.4+):
select person_id, coalesce(min(days) filter (where days > 0), max(days))
from a_table
group by 1
order by 1;
Oracle Setup:
CREATE TABLE table_name ( Person_Id, Days ) AS
SELECT 1000, 100 FROM DUAL UNION ALL
SELECT 1000, 200 FROM DUAL UNION ALL
SELECT 1000, -50 FROM DUAL UNION ALL
SELECT 1000, -10 FROM DUAL UNION ALL
SELECT 1001, 100 FROM DUAL UNION ALL
SELECT 1001, 200 FROM DUAL UNION ALL
SELECT 1001, 50 FROM DUAL UNION ALL
SELECT 1001, 10 FROM DUAL UNION ALL
SELECT 1002, -50 FROM DUAL UNION ALL
SELECT 1002, -10 FROM DUAL;
Query:
SELECT person_id, days
FROM (
SELECT t.*,
       ROW_NUMBER() OVER ( PARTITION BY person_id
                           ORDER BY SIGN( ABS( days ) ),  -- any zero values first
                                    SIGN( days ) DESC,    -- then positives before negatives
                                    ABS( days )           -- then smallest magnitude first
                         ) AS rn
FROM table_name t
)
WHERE rn = 1;
Output:
PERSON_ID DAYS
---------- ----------
1000 100
1001 10
1002 -10
Oracle solution:
with
input_data ( person_id, days) as (
select 1000, 100 from dual union all
select 1000, 200 from dual union all
select 1000, -50 from dual union all
select 1000, -10 from dual union all
select 1001, 100 from dual union all
select 1001, 200 from dual union all
select 1001, 50 from dual union all
select 1001, 10 from dual union all
select 1002, -50 from dual union all
select 1002, -10 from dual
)
select person_id,
NVL(min(case when days > 0 then days end), max(days)) as days
from input_data
group by person_id;
PERSON_ID DAYS
---------- ----------
1000 100
1001 10
1002 -10
For each person_id, if there is at least one days value that is strictly positive, then the min will be taken over positive days only and will be returned by NVL(). Otherwise the min() will return null, and NVL() will return max() over all days (all of which are, in this case, negative or 0).
select Person_id, min(abs(days)) * days/abs(days) from table_name
group by Person_id
-- + handle division by zero. Sorry, the above works only in MySQL.
Something like this will work anywhere and is equivalent to the above query:
select t.Person_id, min(t.days) from table_name t,
(select Person_id, min(abs(days)) as days from table_name group by Person_id) v
where t.Person_id = v.Person_id
and abs(t.days) = v.days
group by t.Person_id;
OR
select Person_id, min(Days) from (
select Person_id, min(abs(Days)) as Days from temp group by Person_id
union
select Person_id, max(Days) as Days from temp group by Person_id
) temp
group by Person_id;
You can do this by using the GROUP BY clause in SQL Server. Take a look at the query below:
CREATE TABLE #test(Person_Id INT, [Days] INT)
DECLARE #LargestNumberFromTable INT;
INSERT INTO #test
SELECT 1000 , 100 UNION
SELECT 1000 , 200 UNION
SELECT 1000 , -50 UNION
SELECT 1000 , -10 UNION
SELECT 1001 , 100 UNION
SELECT 1001 , 200 UNION
SELECT 1001 , 50 UNION
SELECT 1001 , 10 UNION
SELECT 1002 , -50 UNION
SELECT 1002 , -10
SELECT #LargestNumberFromTable = ISNULL(MAX([Days]), 0)
FROM #test
SELECT Person_Id
,CASE WHEN SUM(IIF([Days] > 0,[Days] , 0)) = 0 THEN MAX([Days]) -- All Negative
WHEN SUM([Days]) = SUM(IIF([Days] > 0, [Days], 0)) THEN MIN ([Days]) -- ALL Positive
WHEN SUM([Days]) <> SUM(IIF([Days] > 0, [Days], 0)) THEN MIN(IIF([Days] > 0, [Days], #LargestNumberFromTable)) --Mix (Negative And positive)
END AS [Days]
FROM #test
GROUP BY Person_Id
DROP TABLE #test