create a calculated column based on two columns in SQL

I have the table below, and I need to create a calculated column (RA) based on the category and month columns.
Oa  Sa  Ai  month  MDY
 5  10   2  Jan    J302022
16  32  38  Jan    J302022
15  14   4  Feb    J302022
46  32  81  Jan    J302022
 3  90   0  Mar    J302022
51  10  21  Jan    J302021
19  32   3  Jan    J302021
45  16  41  Feb    J302021
46   7  81  Jan    J302022
30  67  14  Mar    J302021
45  16  41  Apr    J302021
46   7  81  Apr    J302021
30  67   0  Jan    J302021
56  17   0  Mar    J302022
First, it should consider a category, for example J302022; then it needs to calculate the RA column based on the month within that category. For example, for J302022, Jan: ((5+16+46+46) + (10+32+32+7)) / (2+38+81+81) = 0.96.
Below is what the expected output looks like:
Oa  Sa  Ai  month  category  RA
 5  10   2  Jan    J302022   0.96
16  32  38  Jan    J302022   0.96
15  14   4  Feb    J302022   7.25
46  32  81  Jan    J302022   0.96
 3  90   0  Mar    J302022   0
51  10  21  Jan    J302021   8.70
19  32   3  Jan    J302021   8.70
45  16  41  Feb    J302021   1.48
46   7  81  Jan    J302022   0.96
30  67  14  Mar    J302021   6.92
45  16  41  Apr    J302021   1.48
46   7  81  Apr    J302022   0.65
30  67   0  Jan    J302021   8.70
56  17   0  Mar    J302022   0
Is it possible to do it in SQL?
Thanks in advance!
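For reference, a minimal setup sketch to reproduce the sample data (the table name and column types are assumptions; the answer below uses the same placeholder name):

-- hypothetical table name and types, just to load the sample rows
create table WhateverYourTableNameIs (
    Oa int, Sa int, Ai int, month varchar(3), mdy varchar(10)
);
insert into WhateverYourTableNameIs values
(5, 10, 2, 'Jan', 'J302022'), (16, 32, 38, 'Jan', 'J302022'),
(15, 14, 4, 'Feb', 'J302022'), (46, 32, 81, 'Jan', 'J302022'),
(3, 90, 0, 'Mar', 'J302022'), (51, 10, 21, 'Jan', 'J302021'),
(19, 32, 3, 'Jan', 'J302021'), (45, 16, 41, 'Feb', 'J302021'),
(46, 7, 81, 'Jan', 'J302022'), (30, 67, 14, 'Mar', 'J302021'),
(45, 16, 41, 'Apr', 'J302021'), (46, 7, 81, 'Apr', 'J302021'),
(30, 67, 0, 'Jan', 'J302021'), (56, 17, 0, 'Mar', 'J302022');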

select Oa, Sa, Ai, month, category,
       -- MySQL yields NULL on division by zero; coalesce maps that to 0
       coalesce((ra1 + ra2) / ra3, 0) as RA
from (
    select Oa, Sa, Ai, month, mdy as category,
           sum(oa) over (partition by month, mdy) as ra1,
           sum(sa) over (partition by month, mdy) as ra2,
           sum(ai) over (partition by month, mdy) as ra3
    from WhateverYourTableNameIs
) as t;
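The window functions attach each (month, mdy) group's totals to every row without collapsing the result, which is why all Jan/J302022 rows carry the same RA; a GROUP BY with a join back to the base table would be the usual alternative, at the cost of an extra pass.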
Output on MySQL 8.0.29:
+------+------+------+-------+----------+--------+
| Oa | Sa | Ai | month | category | RA |
+------+------+------+-------+----------+--------+
| 45 | 16 | 41 | Apr | J302021 | 0.9344 |
| 46 | 7 | 81 | Apr | J302021 | 0.9344 |
| 45 | 16 | 41 | Feb | J302021 | 1.4878 |
| 15 | 14 | 4 | Feb | J302022 | 7.2500 |
| 51 | 10 | 21 | Jan | J302021 | 8.7083 |
| 19 | 32 | 3 | Jan | J302021 | 8.7083 |
| 30 | 67 | 0 | Jan | J302021 | 8.7083 |
| 5 | 10 | 2 | Jan | J302022 | 0.9604 |
| 16 | 32 | 38 | Jan | J302022 | 0.9604 |
| 46 | 32 | 81 | Jan | J302022 | 0.9604 |
| 46 | 7 | 81 | Jan | J302022 | 0.9604 |
| 30 | 67 | 14 | Mar | J302021 | 6.9286 |
| 3 | 90 | 0 | Mar | J302022 | 0.0000 |
| 56 | 17 | 0 | Mar | J302022 | 0.0000 |
+------+------+------+-------+----------+--------+

For SQL Server:
select Oa, Sa, Ai, [Month], Category,
    case when (sum(Ai) over (partition by [Month], Category) * 1.0) = 0 then 0
         else (sum(Oa) over (partition by [Month], Category) +
               sum(Sa) over (partition by [Month], Category)) /
              (sum(Ai) over (partition by [Month], Category) * 1.0)
    end as Ra
from #temp
order by Category desc
The denominator is multiplied by 1.0 to force floating-point division; otherwise integer division would truncate the result.
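A hedged variant of the same query: nullif turns a zero denominator into NULL (so no CASE is needed), and isnull maps the NULL result back to 0:

select Oa, Sa, Ai, [Month], Category,
    isnull((sum(Oa) over (partition by [Month], Category) +
            sum(Sa) over (partition by [Month], Category)) /
           nullif(sum(Ai) over (partition by [Month], Category) * 1.0, 0), 0) as Ra
from #temp
order by Category desc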

Related

Pandas: keep first row of duplicated indices of second level of multi index

I found lots of drop_duplicates answers for when both levels of a MultiIndex are duplicated, but I would like to keep the first row of a MultiIndex when only the second level has duplicates. So here:
| (date, ID)                     |   col_0 |   col_1 |   col_2 |   col_3 |   col_4 |
|:-------------------------------|--------:|--------:|--------:|--------:|--------:|
| ('2022-01-01', 'identifier_0') |      26 |      46 |      44 |      21 |      10 |
| ('2022-01-01', 'identifier_1') |      25 |      45 |      83 |      23 |      45 |
| ('2022-01-01', 'identifier_2') |      42 |      79 |      55 |       5 |      78 |
| ('2022-01-01', 'identifier_3') |      32 |       4 |      57 |      19 |      61 |
| ('2022-01-01', 'identifier_4') |      30 |      25 |       5 |      93 |      72 |
| ('2022-01-02', 'identifier_0') |      42 |      14 |      56 |      43 |      42 |
| ('2022-01-02', 'identifier_1') |      90 |      27 |      46 |      58 |       5 |
| ('2022-01-02', 'identifier_2') |      33 |      39 |      53 |      94 |      86 |
| ('2022-01-02', 'identifier_3') |      32 |      65 |      98 |      81 |      64 |
| ('2022-01-02', 'identifier_4') |      48 |      31 |      25 |      58 |      15 |
| ('2022-01-03', 'identifier_0') |       5 |      80 |      33 |      96 |      80 |
| ('2022-01-03', 'identifier_1') |      15 |      86 |      45 |      39 |      62 |
| ('2022-01-03', 'identifier_2') |      98 |       3 |      42 |      50 |      83 |
I'd like to keep the first row for each unique ID.
If your index is a MultiIndex:
>>> df.loc[~df.index.get_level_values('ID').duplicated()]
col_0 col_1 col_2 col_3 col_4
date ID
2022-01-01 identifier_0 26 46 44 21 10
identifier_1 25 45 83 23 45
identifier_2 42 79 55 5 78
identifier_3 32 4 57 19 61
identifier_4 30 25 5 93 72
# Or
>>> df.groupby(level='ID').first()
col_0 col_1 col_2 col_3 col_4
ID
identifier_0 26 46 44 21 10
identifier_1 25 45 83 23 45
identifier_2 42 79 55 5 78
identifier_3 32 4 57 19 61
identifier_4 30 25 5 93 72
If your index is an Index:
>>> df.loc[~df.index.str[1].duplicated()]
col_0 col_1 col_2 col_3 col_4
(2022-01-01, identifier_0) 26 46 44 21 10
(2022-01-01, identifier_1) 25 45 83 23 45
(2022-01-01, identifier_2) 42 79 55 5 78
(2022-01-01, identifier_3) 32 4 57 19 61
(2022-01-01, identifier_4) 30 25 5 93 72
>>> df.groupby(df.index.str[1]).first()
col_0 col_1 col_2 col_3 col_4
identifier_0 26 46 44 21 10
identifier_1 25 45 83 23 45
identifier_2 42 79 55 5 78
identifier_3 32 4 57 19 61
identifier_4 30 25 5 93 72
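Note the design difference visible in the two outputs: the duplicated() approach keeps the original index intact (including the date level), while groupby(...).first() collapses the result down to one row per ID.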

How do I edit the code that calculates the value for the four weeks of the month for all months with PL/SQL?

I divided the month into four weeks and printed the amount for each week. How do I set this up with a loop for 12 months?
declare
    cursor c is
        select varis_tar, tutar
        from muhasebe.doviz_takip
        where trunc(varis_tar) between to_date('01/10/2021', 'DD/MM/YYYY')
                                   and to_date('31/10/2021', 'DD/MM/YYYY')
        group by varis_tar, tutar;
    tutar1 number(13,2) := 0;
    tutar2 number(13,2) := 0;
    tutar3 number(13,2) := 0;
    tutar4 number(13,2) := 0;
begin
    for r in c loop
        if r.varis_tar between to_date('01/10/2021', 'DD/MM/YYYY')
                           and to_date('07/10/2021', 'DD/MM/YYYY') then
            tutar1 := r.tutar + tutar1;
            --message(r.tutar);
        elsif r.varis_tar between to_date('07/10/2021', 'DD/MM/YYYY')
                              and to_date('14/10/2021', 'DD/MM/YYYY') then
            tutar2 := r.tutar + tutar2;
            --message(r.tutar);
        elsif r.varis_tar between to_date('14/10/2021', 'DD/MM/YYYY')
                              and to_date('21/10/2021', 'DD/MM/YYYY') then
            tutar3 := r.tutar + tutar3;
            --message(r.tutar);
        elsif r.varis_tar between to_date('21/10/2021', 'DD/MM/YYYY')
                              and to_date('31/10/2021', 'DD/MM/YYYY') then
            tutar4 := r.tutar + tutar4;
            --message(r.tutar);
        end if;
    end loop;
end;
I tried to build the dates the same way for all months. I tried the following, but it worked incorrectly.
where trunc(varis_tar) between to_date('1', 'DD') and to_date('31', 'DD')

if r.varis_tar between to_date('1', 'DD') and to_date('07', 'DD') then
elsif r.varis_tar between to_date('7', 'DD') and to_date('14', 'DD') then
elsif r.varis_tar between to_date('14', 'DD') and to_date('21', 'DD') then
elsif r.varis_tar between to_date('21', 'DD') and to_date('31', 'DD') then
I don't know if I'm understanding it correctly, but try:
if extract(day from varis_tar) between 1 and 7
or, more complex (note that to_char returns a string):
l_week := to_char(varis_tar, 'W'); -- week number within the month
if l_week = '1' then -- first week
elsif l_week = '2' then -- etc...
Your code has several issues:
date in Oracle is actually a datetime, so between will not catch any time after midnight of the upper boundary.
you count the midnight at each week's end twice: once in the current week and once in the next week (between includes both boundaries).
you do not need any PL/SQL, and especially not a cursor loop, because it occupies resources during a calculation that can be done entirely in SQL.
Use a datetime format model to calculate weeks, because it is easy to read and understand, then group by the corresponding components.
with a as (
select
date '2021-01-01' - 1 + level as dt
, level as val
from dual
connect by level < 400
)
, b as (
select
dt
, val
/*Map 29, 30 and 31 to 28*/
, to_char(
least(dt, trunc(dt, 'mm') + 27)
, 'yyyymmw'
) as w
from a
)
select
substr(w, 1, 4) as y
, substr(w, 5, 2) as m
, substr(w, -1) as w
, sum(val) as val
, min(dt) as dt_from
, max(dt) as dt_to
from b
group by
w
Y | M | W | VAL | DT_FROM | DT_TO
:--- | :- | :- | ---: | :--------- | :---------
2021 | 01 | 1 | 28 | 2021-01-01 | 2021-01-07
2021 | 01 | 2 | 77 | 2021-01-08 | 2021-01-14
2021 | 01 | 3 | 126 | 2021-01-15 | 2021-01-21
2021 | 01 | 4 | 265 | 2021-01-22 | 2021-01-31
2021 | 02 | 1 | 245 | 2021-02-01 | 2021-02-07
2021 | 02 | 2 | 294 | 2021-02-08 | 2021-02-14
2021 | 02 | 3 | 343 | 2021-02-15 | 2021-02-21
2021 | 02 | 4 | 392 | 2021-02-22 | 2021-02-28
2021 | 03 | 1 | 441 | 2021-03-01 | 2021-03-07
2021 | 03 | 2 | 490 | 2021-03-08 | 2021-03-14
2021 | 03 | 3 | 539 | 2021-03-15 | 2021-03-21
2021 | 03 | 4 | 855 | 2021-03-22 | 2021-03-31
2021 | 04 | 1 | 658 | 2021-04-01 | 2021-04-07
2021 | 04 | 2 | 707 | 2021-04-08 | 2021-04-14
2021 | 04 | 3 | 756 | 2021-04-15 | 2021-04-21
2021 | 04 | 4 | 1044 | 2021-04-22 | 2021-04-30
2021 | 05 | 1 | 868 | 2021-05-01 | 2021-05-07
2021 | 05 | 2 | 917 | 2021-05-08 | 2021-05-14
2021 | 05 | 3 | 966 | 2021-05-15 | 2021-05-21
2021 | 05 | 4 | 1465 | 2021-05-22 | 2021-05-31
2021 | 06 | 1 | 1085 | 2021-06-01 | 2021-06-07
2021 | 06 | 2 | 1134 | 2021-06-08 | 2021-06-14
2021 | 06 | 3 | 1183 | 2021-06-15 | 2021-06-21
2021 | 06 | 4 | 1593 | 2021-06-22 | 2021-06-30
2021 | 07 | 1 | 1295 | 2021-07-01 | 2021-07-07
2021 | 07 | 2 | 1344 | 2021-07-08 | 2021-07-14
2021 | 07 | 3 | 1393 | 2021-07-15 | 2021-07-21
2021 | 07 | 4 | 2075 | 2021-07-22 | 2021-07-31
2021 | 08 | 1 | 1512 | 2021-08-01 | 2021-08-07
2021 | 08 | 2 | 1561 | 2021-08-08 | 2021-08-14
2021 | 08 | 3 | 1610 | 2021-08-15 | 2021-08-21
2021 | 08 | 4 | 2385 | 2021-08-22 | 2021-08-31
2021 | 09 | 1 | 1729 | 2021-09-01 | 2021-09-07
2021 | 09 | 2 | 1778 | 2021-09-08 | 2021-09-14
2021 | 09 | 3 | 1827 | 2021-09-15 | 2021-09-21
2021 | 09 | 4 | 2421 | 2021-09-22 | 2021-09-30
2021 | 10 | 1 | 1939 | 2021-10-01 | 2021-10-07
2021 | 10 | 2 | 1988 | 2021-10-08 | 2021-10-14
2021 | 10 | 3 | 2037 | 2021-10-15 | 2021-10-21
2021 | 10 | 4 | 2995 | 2021-10-22 | 2021-10-31
2021 | 11 | 1 | 2156 | 2021-11-01 | 2021-11-07
2021 | 11 | 2 | 2205 | 2021-11-08 | 2021-11-14
2021 | 11 | 3 | 2254 | 2021-11-15 | 2021-11-21
2021 | 11 | 4 | 2970 | 2021-11-22 | 2021-11-30
2021 | 12 | 1 | 2366 | 2021-12-01 | 2021-12-07
2021 | 12 | 2 | 2415 | 2021-12-08 | 2021-12-14
2021 | 12 | 3 | 2464 | 2021-12-15 | 2021-12-21
2021 | 12 | 4 | 3605 | 2021-12-22 | 2021-12-31
2022 | 01 | 1 | 2583 | 2022-01-01 | 2022-01-07
2022 | 01 | 2 | 2632 | 2022-01-08 | 2022-01-14
2022 | 01 | 3 | 2681 | 2022-01-15 | 2022-01-21
2022 | 01 | 4 | 3915 | 2022-01-22 | 2022-01-31
2022 | 02 | 1 | 1194 | 2022-02-01 | 2022-02-03
db<>fiddle here
Or the same in columns:
with a as (
select
date '2021-01-01' - 1 + level as dt
, level as val
from dual
connect by level < 400
)
, b as (
select
val
/*Map 29, 30 and 31 to 28*/
, to_char(dt, 'yyyymm') as m
, to_char(
least(dt, trunc(dt, 'mm') + 27)
, 'w'
) as w
from a
)
select
substr(m, 1, 4) as y
, substr(m, 5, 2) as m
, tutar1
, tutar2
, tutar3
, tutar4
from b
pivot(
sum(val)
for w in (
1 as tutar1, 2 as tutar2
, 3 as tutar3, 4 as tutar4
)
)
Y | M | TUTAR1 | TUTAR2 | TUTAR3 | TUTAR4
:--- | :- | -----: | -----: | -----: | -----:
2021 | 01 | 28 | 77 | 126 | 265
2021 | 02 | 245 | 294 | 343 | 392
2021 | 03 | 441 | 490 | 539 | 855
2021 | 04 | 658 | 707 | 756 | 1044
2021 | 05 | 868 | 917 | 966 | 1465
2021 | 06 | 1085 | 1134 | 1183 | 1593
2021 | 07 | 1295 | 1344 | 1393 | 2075
2021 | 08 | 1512 | 1561 | 1610 | 2385
2021 | 09 | 1729 | 1778 | 1827 | 2421
2021 | 10 | 1939 | 1988 | 2037 | 2995
2021 | 11 | 2156 | 2205 | 2254 | 2970
2021 | 12 | 2366 | 2415 | 2464 | 3605
2022 | 01 | 2583 | 2632 | 2681 | 3915
2022 | 02 | 1194 | null | null | null
db<>fiddle here
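Applied to the original table, a sketch along the same lines (assuming muhasebe.doviz_takip with columns varis_tar and tutar, as in the question):

select
    substr(w, 1, 4) as y
  , substr(w, 5, 2) as m
  , substr(w, -1) as w
  , sum(tutar) as tutar
from (
    select
        tutar
        /*Map days 29, 30 and 31 to 28, as above*/
      , to_char(
            least(trunc(varis_tar), trunc(varis_tar, 'mm') + 27)
          , 'yyyymmw'
        ) as w
    from muhasebe.doviz_takip
)
group by w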

How to get weekly data but starting from the first date of the month and do SUM calculation accordingly in BQ?

I have an issue pulling this kind of data. I need to pull weekly data with these specifications:
The data pull will be scheduled, hence it will involve multiple months.
The very first week always starts from the first date of the month (the 1st of every month) -- green in the pic.
The last week doesn't involve dates from the next month -- red in the pic.
The raw data and the desired output will look more or less like this:
Is there any workaround to do this in BigQuery? Thanks (the data is attached below).
+-------------+-------+
| date        | sales |
+-------------+-------+
| 1 Oct 2021  |     5 |
| 2 Oct 2021  |    13 |
| 3 Oct 2021  |    75 |
| 4 Oct 2021  |     3 |
| 5 Oct 2021  |    70 |
| 6 Oct 2021  |    85 |
| 7 Oct 2021  |    99 |
| 8 Oct 2021  |    90 |
| 9 Oct 2021  |    68 |
| 10 Oct 2021 |    97 |
| 11 Oct 2021 |    87 |
| 12 Oct 2021 |    56 |
| 13 Oct 2021 |    99 |
| 14 Oct 2021 |    38 |
| 15 Oct 2021 |     6 |
| 16 Oct 2021 |    43 |
| 17 Oct 2021 |    45 |
| 18 Oct 2021 |    90 |
| 19 Oct 2021 |    64 |
| 20 Oct 2021 |    26 |
| 21 Oct 2021 |    24 |
| 22 Oct 2021 |     4 |
| 23 Oct 2021 |    36 |
| 24 Oct 2021 |    68 |
| 25 Oct 2021 |     4 |
| 26 Oct 2021 |    16 |
| 27 Oct 2021 |    30 |
| 28 Oct 2021 |    89 |
| 29 Oct 2021 |    46 |
| 30 Oct 2021 |    28 |
| 31 Oct 2021 |    28 |
| 1 Nov 2021  |    47 |
| 2 Nov 2021  |    75 |
| 3 Nov 2021  |     1 |
| 4 Nov 2021  |    26 |
| 5 Nov 2021  |    26 |
| 6 Nov 2021  |    38 |
| 7 Nov 2021  |    79 |
| 8 Nov 2021  |    37 |
| 9 Nov 2021  |    83 |
| 10 Nov 2021 |    97 |
| 11 Nov 2021 |    56 |
| 12 Nov 2021 |    83 |
| 13 Nov 2021 |    14 |
| 14 Nov 2021 |    25 |
| 15 Nov 2021 |    55 |
| 16 Nov 2021 |    16 |
| 17 Nov 2021 |    80 |
| 18 Nov 2021 |    66 |
| 19 Nov 2021 |    25 |
| 20 Nov 2021 |    62 |
| 21 Nov 2021 |    36 |
| 22 Nov 2021 |    33 |
| 23 Nov 2021 |    19 |
| 24 Nov 2021 |    47 |
| 25 Nov 2021 |    14 |
| 26 Nov 2021 |    22 |
| 27 Nov 2021 |    66 |
| 28 Nov 2021 |    15 |
| 29 Nov 2021 |    96 |
| 30 Nov 2021 |     4 |
+-------------+-------+
Consider the below approach:
with temp as (
    -- parse the text dates, e.g. '1 Oct 2021'
    select parse_date('%d %B %Y', date) date, sales
    from your_table
)
select
    format_date('%d %B %Y', weeks[ordinal(num)]) start_week,
    sum(sales) total_sales
from (
    -- weeks holds the month's week-start dates; range_bucket assigns
    -- each date to the week it falls into
    select sales, weeks, range_bucket(date, weeks) num
    from temp, unnest([struct(
        generate_date_array(date_trunc(date, month), last_day(date, month), interval 7 day) as weeks
    )])
)
group by start_week
Applied to the sample data (as is) in your question, it produces the expected weekly output.
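As a quick sanity check, a sketch of the week-start boundaries generate_date_array produces for a single month (October 2021 here):

select generate_date_array(date '2021-10-01',
                           last_day(date '2021-10-01', month),
                           interval 7 day) as weeks
-- returns ['2021-10-01', '2021-10-08', '2021-10-15', '2021-10-22', '2021-10-29']

Each month gets its own boundary array, so the last bucket never crosses into the next month.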

How to create a SQL query for the below scenario

I am using Snowflake SQL, but I guess this can be solved in any SQL dialect. So I have data like this:
RA_MEMBER_ID  YEAR  QUARTER  MONTH  Monthly_TOTAL_PURCHASE  CATEGORY
1000          2020  1        1      105                     CAT10
1000          2020  1        1      57                      CAT13
1000          2020  1        2      107                     CAT10
1000          2020  1        2      59                      CAT13
1000          2020  1        3      109                     CAT11
1000          2020  1        3      61                      CAT14
1000          2020  2        4      111                     CAT11
1000          2020  2        4      63                      CAT14
1000          2020  2        5      113                     CAT12
1000          2020  2        5      65                      CAT15
1000          2020  2        6      115                     CAT12
1000          2020  2        6      67                      CAT15
And I need data like this:
RA_MEMBER_ID  YEAR  QUARTER  MONTH  Monthly_TOTAL_PURCHASE  CATEGORY  Monthly_rank  Quarterly_Total_purchase  Quarter_category  Quarter_rank  Yearly_Total_purchase  Yearly_category  Yearly_rank
1000          2020  1        1      105                     CAT10     1             105                       CAT10             1             105                    CAT10            1
1000          2020  1        1      57                      CAT13     2             57                        CAT13             2             57                     CAT13            2
1000          2020  1        2      107                     CAT10     1             212                       CAT10             1             212                    CAT10            1
1000          2020  1        2      59                      CAT13     2             116                       CAT13             2             116                    CAT13            2
1000          2020  1        3      109                     CAT11     1             212                       CAT10             1             212                    CAT10            1
1000          2020  1        3      61                      CAT14     2             116                       CAT13             2             116                    CAT13            2
1000          2020  2        4      111                     CAT11     1             111                       CAT11             1             212                    CAT10            1
1000          2020  2        4      63                      CAT14     2             63                        CAT14             2             124                    CAT14            2
1000          2020  2        5      113                     CAT12     1             113                       CAT12             1             212                    CAT10            1
1000          2020  2        5      65                      CAT15     2             65                        CAT15             2             124                    CAT14            2
1000          2020  2        6      115                     CAT12     1             228                       CAT12             1             228                    CAT12            1
1000          2020  2        6      67                      CAT15     2             132                       CAT15             2             132                    CAT15            2
So basically, I have the top two categories by purchase amount for each of the first 6 months. I need the same quarterly, based on which month of the quarter it is: if it is February, the top 2 categories and amounts should be calculated across both January and February; for March, across all three months of the quarter. For April it is the same as the monthly rank again; for May, it is calculated across April and May. Similarly for yearly.
I have tried a lot of things but nothing seems to give me what I want.
The solution should be generic enough because there can be many other months and years.
I really need help in this.
Not sure if the below is what you are after. I assume that everything is category-based:
create or replace table test (
ra_member_id int,
year int,
quarter int,
month int,
monthly_purchase int,
category varchar
);
insert into test values
(1000, 2020, 1,1, 105, 'cat10'),
(1000, 2020, 1,1, 57, 'cat13'),
(1000, 2020, 1,2, 107, 'cat10'),
(1000, 2020, 1,2, 59, 'cat13'),
(1000, 2020, 1,3, 109, 'cat11'),
(1000, 2020, 1,3, 61, 'cat14'),
(1000, 2020, 2,4, 111, 'cat11'),
(1000, 2020, 2,4, 63, 'cat14'),
(1000, 2020, 2,5, 113, 'cat12'),
(1000, 2020, 2,5, 65, 'cat15'),
(1000, 2020, 2,6, 115, 'cat12'),
(1000, 2020, 2,6, 67, 'cat15');
WITH BASE as (
select
RA_MEMBER_ID,
YEAR,
QUARTER,
MONTH,
CATEGORY,
MONTHLY_PURCHASE,
LAG(MONTHLY_PURCHASE) OVER (PARTITION BY QUARTER, CATEGORY ORDER BY MONTH) AS QUARTERLY_PURCHASE_LAG,
IFNULL(QUARTERLY_PURCHASE_LAG, 0) + MONTHLY_PURCHASE AS QUARTERLY_PURCHASE,
LAG(MONTHLY_PURCHASE) OVER (PARTITION BY YEAR, CATEGORY ORDER BY MONTH) AS YEARLY_PURCHASE_LAG,
IFNULL(YEARLY_PURCHASE_LAG, 0) + MONTHLY_PURCHASE AS YEARLY_PURCHASE
FROM
TEST
),
BASE_RANK AS (
SELECT
RA_MEMBER_ID,
YEAR,
QUARTER,
MONTH,
CATEGORY,
MONTHLY_PURCHASE,
RANK() OVER (PARTITION BY MONTH ORDER BY MONTHLY_PURCHASE DESC) as MONTHLY_RANK,
QUARTERLY_PURCHASE,
RANK() OVER (PARTITION BY QUARTER ORDER BY QUARTERLY_PURCHASE DESC) as QUARTERLY_RANK,
YEARLY_PURCHASE,
RANK() OVER (PARTITION BY YEAR ORDER BY YEARLY_PURCHASE DESC) as YEARLY_RANK
FROM BASE
),
MAIN AS (
SELECT
RA_MEMBER_ID,
YEAR,
QUARTER,
MONTH,
CATEGORY,
MONTHLY_PURCHASE,
MONTHLY_RANK,
QUARTERLY_PURCHASE,
QUARTERLY_RANK,
YEARLY_PURCHASE,
YEARLY_RANK
FROM BASE_RANK
)
SELECT * FROM MAIN
ORDER BY YEAR, QUARTER, MONTH
;
Result:
+--------------+------+---------+-------+----------+------------------+--------------+--------------------+----------------+-----------------+-------------+
| RA_MEMBER_ID | YEAR | QUARTER | MONTH | CATEGORY | MONTHLY_PURCHASE | MONTHLY_RANK | QUARTERLY_PURCHASE | QUARTERLY_RANK | YEARLY_PURCHASE | YEARLY_RANK |
|--------------+------+---------+-------+----------+------------------+--------------+--------------------+----------------+-----------------+-------------|
| 1000 | 2020 | 1 | 1 | cat10 | 105 | 1 | 105 | 4 | 105 | 9 |
| 1000 | 2020 | 1 | 1 | cat13 | 57 | 2 | 57 | 6 | 57 | 12 |
| 1000 | 2020 | 1 | 2 | cat10 | 107 | 1 | 212 | 1 | 212 | 3 |
| 1000 | 2020 | 1 | 2 | cat13 | 59 | 2 | 116 | 2 | 116 | 6 |
| 1000 | 2020 | 1 | 3 | cat11 | 109 | 1 | 109 | 3 | 109 | 8 |
| 1000 | 2020 | 1 | 3 | cat14 | 61 | 2 | 61 | 5 | 61 | 11 |
| 1000 | 2020 | 2 | 4 | cat11 | 111 | 1 | 111 | 4 | 220 | 2 |
| 1000 | 2020 | 2 | 4 | cat14 | 63 | 2 | 63 | 6 | 124 | 5 |
| 1000 | 2020 | 2 | 5 | cat12 | 113 | 1 | 113 | 3 | 113 | 7 |
| 1000 | 2020 | 2 | 5 | cat15 | 65 | 2 | 65 | 5 | 65 | 10 |
| 1000 | 2020 | 2 | 6 | cat12 | 115 | 1 | 228 | 1 | 228 | 1 |
| 1000 | 2020 | 2 | 6 | cat15 | 67 | 2 | 132 | 2 | 132 | 4 |
+--------------+------+---------+-------+----------+------------------+--------------+--------------------+----------------+-----------------+-------------+
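One caveat with the LAG-based running totals above: LAG only adds the immediately preceding row, so a category appearing in three or more months of the same quarter or year would have its earlier months dropped from the running total. A hedged alternative sketch using cumulative windowed SUMs instead (same test table; partitioning by ra_member_id is an assumption):

with base as (
    select
        ra_member_id, year, quarter, month, category, monthly_purchase,
        -- running totals over all months seen so far in the quarter / year
        sum(monthly_purchase) over (
            partition by ra_member_id, year, quarter, category order by month
        ) as quarterly_purchase,
        sum(monthly_purchase) over (
            partition by ra_member_id, year, category order by month
        ) as yearly_purchase
    from test
)
select
    base.*,
    rank() over (partition by ra_member_id, year, month
                 order by monthly_purchase desc) as monthly_rank,
    rank() over (partition by ra_member_id, year, month
                 order by quarterly_purchase desc) as quarterly_rank,
    rank() over (partition by ra_member_id, year, month
                 order by yearly_purchase desc) as yearly_rank
from base
order by year, quarter, month;

Note this still ranks only the categories present in each month; carrying the overall top category forward into months where it has no row (as the expected Quarter_category column implies) would additionally require joining each month against all categories seen so far.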

How can I obtain the sum of each row in PostgreSQL?

I have got some rows of results like the ones below; if the sum of each row is the same, I can assume the results are correct. How can I write a SQL clause to calculate the sum of each row? Thanks.
27 | 29 | 27 | 36 | 33 | 29 | 16 | 17 | 35 | 28 | 34 | 15
27 | 29 | 27 | 29 | 33 | 29 | 16 | 17 | 35 | 28 | 34 | 15
27 | 29 | 27 | 14 | 33 | 29 | 16 | 17 | 35 | 28 | 34 | 15
27 | 29 | 16 | 37 | 33 | 29 | 16 | 17 | 35 | 28 | 34 | 15
27 | 29 | 16 | 36 | 33 | 29 | 16 | 17 | 35 | 28 | 34 | 15
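A minimal sketch, assuming the twelve values are integer columns named c1 through c12 (hypothetical names; substitute your own):

-- hypothetical column names c1..c12; appends the per-row total as row_sum
select t.*,
       (c1 + c2 + c3 + c4 + c5 + c6 + c7 + c8 + c9 + c10 + c11 + c12) as row_sum
from my_table t;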