Group By and get top N in simple SQL

I have the following table in SQLite:
BANK:
user-id  sender-name  receiver-name  amount
-------------------------------------------
1        A            B              200
2        A            C              250
3        A            B              400
4        A            B              520
4        A            D              120
4        A            D              130
4        A            B              110
4        A            B              300
4        A            B              190
4        A            C              230
4        A            B              110
4        A            C              40
4        A            C              80
I need to find the top 3 transactions for each receiver. There are solutions for several other databases, but they are not compatible with SQLite because they rely on window functions such as PARTITION BY and RANK, or on user-defined variables.
I need a solution in plain SQL that works in SQLite.
Expected result:
receiver-name  amount
---------------------
B              1220
C              560
D              250

I managed to do it using only simple constructs and a correlated subquery.
Just update the LIMIT with your preferred N; for my case (top 3) it is LIMIT 3.
SELECT "receiver-name", (
    SELECT SUM(amount)
    FROM (
        SELECT amount
        FROM bank AS b2
        WHERE b2."receiver-name" = b."receiver-name"
        ORDER BY b2.amount DESC
        LIMIT 3
    )
) AS sum_amount
FROM bank AS b
GROUP BY "receiver-name"
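Since SQLite would parse an unquoted receiver-name as receiver minus name, the hyphenated identifiers need double quotes. The query can be sanity-checked from Python's sqlite3 module; the table definition below is an assumption based on the sample data:

```python
import sqlite3

# Build the sample BANK table in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE bank ("user-id" INT, "sender-name" TEXT, '
             '"receiver-name" TEXT, amount INT)')
rows = [(1, 'A', 'B', 200), (2, 'A', 'C', 250), (3, 'A', 'B', 400),
        (4, 'A', 'B', 520), (4, 'A', 'D', 120), (4, 'A', 'D', 130),
        (4, 'A', 'B', 110), (4, 'A', 'B', 300), (4, 'A', 'B', 190),
        (4, 'A', 'C', 230), (4, 'A', 'B', 110), (4, 'A', 'C', 40),
        (4, 'A', 'C', 80)]
conn.executemany("INSERT INTO bank VALUES (?,?,?,?)", rows)

# Correlated subquery: for each receiver, sum its 3 largest amounts.
query = '''
SELECT "receiver-name",
       (SELECT SUM(amount)
          FROM (SELECT amount
                  FROM bank AS b2
                 WHERE b2."receiver-name" = b."receiver-name"
                 ORDER BY b2.amount DESC
                 LIMIT 3)) AS sum_amount
FROM bank AS b
GROUP BY "receiver-name"
'''
print(dict(conn.execute(query).fetchall()))  # {'B': 1220, 'C': 560, 'D': 250}
```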


Create multiple rows based on a column containing a list of numbers

I currently have a table which looks like this:
A  Category  Code
1  A         10,30
2  B         30
3  C         20,30,40
Is there any way to write a SQL statement that would get me:
ID  Category  Code
1   A         10
1   A         30
2   B         30
3   C         20
3   C         30
3   C         40
Thanks
You can use UNNEST with the SPLIT function (BigQuery syntax):
select a, category, s_code
from my_data, unnest(split(code, ',')) as s_code
a  category  s_code
1  A         10
1  A         30
2  B         30
3  C         20
3  C         30
3  C         40
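unnest(split(...)) is not available in engines such as SQLite; if the data ends up client-side anyway, the same expansion is a two-level loop. A minimal Python sketch with the question's rows hard-coded:

```python
# The question's rows, hard-coded: (A, Category, comma-separated Code).
rows = [(1, 'A', '10,30'), (2, 'B', '30'), (3, 'C', '20,30,40')]

# Expand each comma-separated Code value into one row per code.
expanded = [(a, cat, code)
            for a, cat, codes in rows
            for code in codes.split(',')]
print(expanded)
# [(1, 'A', '10'), (1, 'A', '30'), (2, 'B', '30'),
#  (3, 'C', '20'), (3, 'C', '30'), (3, 'C', '40')]
```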

SQL JOIN for Duplicate Values

I have the following two tables:
A.
A_ID Amount GL_ID
------------------
1 100 10
2 200 11
3 150 10
4 20 10
5 369 12
6 369 11
7 254 12
B.
B_ID Name GL_ID
-----------------
1 A 10
2 B 10
3 C 11
4 D 11
5 E 12
6 F 12
I want to join these tables. They have the GL_ID column in common (the ID of another table). Table A stores transactions along with GL_ID, while table B defines a document type (A, B, C, D, etc.) with reference to GL_ID.
A and B have no common column except GL_ID. I want the following result: the relevant document type (A, B, C, D, etc.) for each transaction in table A.
A.A_ID A.Amount B.Name
-----------------------
1 100 A
2 200 B
3 150 B
4 20 B
5 369 A
6 369 D
7 254 D
But when I apply a join (LEFT, RIGHT, FULL JOIN), the query shows repeated values. I only want the relevant document type for each line in table A.
Try this:
select distinct A.A_ID, A.Amount, B.Name
from A inner join B on A.GL_ID=B.GL_ID
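Note that GL_ID alone cannot reproduce the expected output exactly: GL_ID 10 matches both 'A' and 'B' in table B, and DISTINCT only removes rows that are identical in every column, so each transaction still gets one row per distinct Name. Any one-row-per-transaction result therefore needs a tie-break rule. A sketch in Python/SQLite that assumes "lowest B_ID wins" (the rule is an assumption, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript('''
CREATE TABLE A (A_ID INT, Amount INT, GL_ID INT);
CREATE TABLE B (B_ID INT, Name TEXT, GL_ID INT);
INSERT INTO A VALUES (1,100,10),(2,200,11),(3,150,10),(4,20,10),
                     (5,369,12),(6,369,11),(7,254,12);
INSERT INTO B VALUES (1,'A',10),(2,'B',10),(3,'C',11),
                     (4,'D',11),(5,'E',12),(6,'F',12);
''')

# Tie-break rule (an assumption): when several B rows share a GL_ID,
# keep the Name of the one with the lowest B_ID.
query = '''
SELECT A.A_ID, A.Amount, B.Name
FROM A
JOIN B ON B.B_ID = (SELECT MIN(b2.B_ID) FROM B b2 WHERE b2.GL_ID = A.GL_ID)
ORDER BY A.A_ID
'''
picked = conn.execute(query).fetchall()
print(picked)  # one row per A_ID, each with exactly one Name
```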

Table returning cumulative sum

I have this table. It contains a column with points (a), a column with a player id (b), and a column with games (c). I would like to translate this table using SQL into a format in which the values in column a are summed cumulatively. This should result in the table below: column d contains the running total, column e the player id, and column f the game number.
So I would like this:
a    b      c
385  11255  1
378  11178  1
370  11551  1
264  11255  2
100  11178  2
405  11551  2
200  11255  3
412  11178  3
50   11551  3
Into this:
d    e      f
385  11255  1
649  11255  2
849  11255  3
378  11178  1
478  11178  2
890  11178  3
370  11551  1
775  11551  2
825  11551  3
You can use the SUM() OVER() window function (if your version of SQL Server supports it)
select b,c,sum(a) over(partition by b order by c) as running_sum
from tbl
On versions that don't support it, you can do this with cross apply.
select t.b,t.c,t1.total
from tbl t
cross apply (select sum(a) as total from tbl t1 where t1.b=t.b and t1.c<=t.c) t1
I think you want something like this:
select sum(a) over (partition by b order by c) as d, b as e, c as f
from t
order by e, f;
Cumulative sums with this syntax are supported since SQL Server 2012.
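For comparison, the same per-player running sum is a one-liner in pandas (a sketch with the question's data hard-coded):

```python
import pandas as pd

# The question's data: a = points, b = player id, c = game number.
df = pd.DataFrame({'a': [385, 378, 370, 264, 100, 405, 200, 412, 50],
                   'b': [11255, 11178, 11551] * 3,
                   'c': [1, 1, 1, 2, 2, 2, 3, 3, 3]})

# Per-player running total: sort by player and game, then cumsum within player.
out = (df.sort_values(['b', 'c'])
         .assign(d=lambda x: x.groupby('b')['a'].cumsum())
         .rename(columns={'b': 'e', 'c': 'f'})[['d', 'e', 'f']])
print(out)
```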

How to implement multiple aggregations using pandas groupby, referencing a specific column

I have data in a pandas data frame, and need to aggregate it. I need to do different aggregations across different columns similar to the below.
group  min(rank)  min(rank)  min   sum
title  t_no       t_descr    rank  stores
A      1          a          1     1000
B      1          a          1     1000
B      2          b          2     800
C      2          b          2     800
D      1          a          1     1000
D      2          b          2     800
F      4          d          4     500
E      3          c          3     700
to:
title t_no t_descr rank stores
A 1 a 1 1000
B 1 a 1 1800
C 2 b 2 800
D 1 a 1 1800
E 3 c 3 700
F 4 d 4 500
You'll notice that title B and D have been aggregated, keeping the t_no & t_descr that corresponded to the minimum of the rank for the respective title group, while stores are summed. t_no & t_descr are just arbitrary text. I need the top rank by title, sum the stores, and keep the corresponding t_no & t_descr.
How can I do this within a single pandas groupby? This is dummy data; the real problem that I'm working on has many more aggregations, and I'd prefer not to have to do each aggregation individually, which I know how to do.
I started with the below, but realized that I really need the mins & maxs for t_no & t_descr to be based on rank col of the subgroup, not the columns themselves.
aggs = {
'rank': 'min',
't_no': 'min', # need t_no for row that is min(rank) by title.
't_descr': 'min' # need t_descr for row that is min(rank) by title.
}
df2.groupby('title').agg(aggs).reset_index()
Perhaps there's a way to do this with a lambda? I'm sure there's a straightforward way to do this. And if groupby isn't the right method I'm obviously open to suggestions.
Thanks!
Two step process...
aggregate for sum of stores and idxmin for rank...
then use idxmin to slice original dataframe and join it with the aggregate
agged = df.groupby('title').agg(dict(rank='idxmin', stores='sum'))
df.loc[agged['rank'], ['title', 't_no', 't_descr', 'rank']].join(agged.stores, on='title')
title t_no t_descr rank stores
0 A 1 a 1 1000
1 B 1 a 1 1800
3 C 2 b 2 800
4 D 1 a 1 1800
7 E 3 c 3 700
6 F 4 d 4 500
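For completeness, the two lines above can be run end-to-end on the sample data (hard-coded here):

```python
import pandas as pd

# The question's frame, hard-coded.
df = pd.DataFrame({'title':   list('ABBCDDFE'),
                   't_no':    [1, 1, 2, 2, 1, 2, 4, 3],
                   't_descr': list('aabbabdc'),
                   'rank':    [1, 1, 2, 2, 1, 2, 4, 3],
                   'stores':  [1000, 1000, 800, 800, 1000, 800, 500, 700]})

# idxmin returns the row label of the minimum rank per title; those labels
# then pull the matching t_no/t_descr rows out of the original frame.
agged = df.groupby('title').agg({'rank': 'idxmin', 'stores': 'sum'})
result = (df.loc[agged['rank'], ['title', 't_no', 't_descr', 'rank']]
            .join(agged['stores'], on='title'))
print(result)
```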
This is a slightly different approach from @piRSquared's, but gets you to the same spot:
Code:
# Set the min and sum functions per column and generate a new dataframe
f = {'rank': min, 'stores': sum}
grouped = df.groupby('title').agg(f).reset_index()
# Then merge with original dataframe (keeping only the merged and new columns)
pd.merge(grouped, df[['title','rank','t_no','t_descr']], on=['title','rank'])
Output:
title stores rank t_no t_descr
0 A 1000 1 1 a
1 B 1800 1 1 a
2 C 800 2 2 b
3 D 1800 1 1 a
4 E 700 3 3 c
5 F 500 4 4 d
Of course you can organize the columns as you see fit.

Grouping query into group and subgroup

I want to group my data using SQL or R so that I can get top or bottom 10 Subarea_codes for each Company and Area_code. In essence: the Subarea_codes within the Area_codes where each Company has its largest or smallest result.
data.csv
Area_code Subarea_code Company Result
10 101 A 15
10 101 P 10
10 101 C 4
10 102 A 10
10 102 P 8
10 102 C 5
11 111 A 15
11 111 P 20
11 111 C 5
11 112 A 10
11 112 P 5
11 112 C 10
result.csv should be like this
Company Area_code Largest_subarea_code Result Smallest_subarea_code Result
A 10 101 15 102 10
P 10 101 10 102 8
C 10 102 5 101 4
A 11 111 15 112 10
P 11 111 20 112 5
C 11 112 10 111 5
Within each Area_code there can be hundreds of Subarea_codes but I only want the top and bottom 10 for each Company.
Also this doesn't have to be resolved in one query, but can be divided into two queries, meaning smallest is presented in results_10_smallest and largest in result_10_largest. But I'm hoping I can accomplish this with one query for each result.
What I've tried:
SELECT Company, Area_code, Subarea_code, MAX(Result)
AS Max_result
FROM data
GROUP BY Subarea_code
ORDER BY Company
;
This gives me all the Companies with the highest results within each Subarea_code. Which would mean: A, A, P, A-C for the data above.
Using sqldf package:
df <- read.table(text="Area_code Subarea_code Company Result
10 101 A 15
10 101 P 10
10 101 C 4
10 102 A 10
10 102 P 8
10 102 C 5
11 111 A 15
11 111 P 20
11 111 C 5
11 112 A 10
11 112 P 5
11 112 C 10", header=TRUE)
library(sqldf)
mymax <- sqldf("select Company,
Area_code,
max(Subarea_code) Largest_subarea_code
from df
group by Company,Area_code")
mymaxres <- sqldf("select d.Company,
d.Area_code,
m.Largest_subarea_code,
d.Result
from df d, mymax m
where d.Company=m.Company and
d.Subarea_code=m.Largest_subarea_code")
mymin <- sqldf("select Company,
Area_code,
min(Subarea_code) Smallest_subarea_code
from df
group by Company,Area_code")
myminres <- sqldf("select d.Company,
d.Area_code,
m.Smallest_subarea_code,
d.Result
from df d, mymin m
where d.Company=m.Company and
d.Subarea_code=m.Smallest_subarea_code")
result <- sqldf("select a.*, b.Smallest_subarea_code,b.Result
from mymaxres a, myminres b
where a.Company=b.Company and
a.Area_code=b.Area_code")
If you're already doing it in R, why not use the much more efficient data.table instead of sqldf with SQL syntax? Assuming data is your data set, simply:
library(data.table)
setDT(data)[, list(Largest_subarea_code = Subarea_code[which.max(Result)],
Resultmax = max(Result),
Smallest_subarea_code = Subarea_code[which.min(Result)],
Resultmin = min(Result)), by = list(Company, Area_code)]
# Company Area_code Largest_subarea_code Resultmax Smallest_subarea_code Resultmin
# 1: A 10 101 15 102 10
# 2: P 10 101 10 102 8
# 3: C 10 102 5 101 4
# 4: A 11 111 15 112 10
# 5: P 11 111 20 112 5
# 6: C 11 112 10 111 5
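A pandas analogue of the data.table line, for readers not using data.table (a sketch; idxmax/idxmin play the role of which.max/which.min):

```python
import pandas as pd

# The question's data, hard-coded.
df = pd.DataFrame({'Area_code':    [10] * 6 + [11] * 6,
                   'Subarea_code': [101, 101, 101, 102, 102, 102,
                                    111, 111, 111, 112, 112, 112],
                   'Company':      list('APC') * 4,
                   'Result':       [15, 10, 4, 10, 8, 5, 15, 20, 5, 10, 5, 10]})

# idxmax/idxmin give the row label of the extreme Result per group,
# which then looks up the corresponding Subarea_code.
g = df.groupby(['Company', 'Area_code'])['Result']
summary = pd.DataFrame({
    'Largest_subarea_code':  df.loc[g.idxmax(), 'Subarea_code'].to_numpy(),
    'Resultmax':             g.max().to_numpy(),
    'Smallest_subarea_code': df.loc[g.idxmin(), 'Subarea_code'].to_numpy(),
    'Resultmin':             g.min().to_numpy(),
}, index=g.max().index)
print(summary)
```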
There seems to be a discrepancy between the output shown and the description. The description asks for the top 10 and bottom 10 results for each Area code/Company, but the sample output shows only the top 1 and the bottom 1. For example, for area code 10 and company A, subarea 101 is top with a result of 15 and subarea 102 is 2nd largest with a result of 10, so according to the description there should be two rows for that company/area code combination. (If there were more data there would be up to 10 rows per combination.)
We give two answers. The first assumes the top 10 and bottom 10 are wanted for each company and area code as in the question's description and the second assumes only the top and bottom for each company and area code as in the question's sample output.
1) Top/Bottom 10
Here we assume that the top 10 and bottom 10 results for each Company/Area code are wanted. If it's just the top and bottom one, then see (2) later on (or replace 10 with 1 in the code here). Bottom10 is all rows for which there are 10 or fewer subareas for the same area code and company with equal or smaller results. Top10 is similar.
library(sqldf)
Bottom10 <- sqldf("select a.Company,
a.Area_code,
a.Subarea_code Bottom_Subarea,
a.Result Bottom_Result,
count(*) Bottom_Rank
from df a join df b
on a.Company = b.Company and
a.Area_code = B.Area_code and
b.Result <= a.Result
group by a.Company, a.Area_code, a.Subarea_code
having count(*) <= 10")
Top10 <- sqldf("select a.Company,
a.Area_code,
a.Subarea_code Top_Subarea,
a.Result Top_Result,
count(*) Top_Rank
from df a join df b
on a.Company = b.Company and
a.Area_code = B.Area_code and
b.Result >= a.Result
group by a.Company, a.Area_code, a.Subarea_code
having count(*) <= 10")
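The ranking self-join can be exercised directly from Python's sqlite3 module. A sketch on the question's data (each Company/Area_code group has only two subareas here, so every row passes the <= 10 filter and gets rank 1 or 2):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE df (Area_code INT, Subarea_code INT, "
             "Company TEXT, Result INT)")
rows = [(10, 101, 'A', 15), (10, 101, 'P', 10), (10, 101, 'C', 4),
        (10, 102, 'A', 10), (10, 102, 'P', 8),  (10, 102, 'C', 5),
        (11, 111, 'A', 15), (11, 111, 'P', 20), (11, 111, 'C', 5),
        (11, 112, 'A', 10), (11, 112, 'P', 5),  (11, 112, 'C', 10)]
conn.executemany("INSERT INTO df VALUES (?,?,?,?)", rows)

# For each row, count how many rows in the same Company/Area_code have a
# Result no larger than its own: that count is its rank from the bottom.
bottom = conn.execute('''
    SELECT a.Company, a.Area_code, a.Subarea_code, a.Result, count(*) AS rnk
    FROM df a JOIN df b
      ON a.Company = b.Company AND
         a.Area_code = b.Area_code AND
         b.Result <= a.Result
    GROUP BY a.Company, a.Area_code, a.Subarea_code
    HAVING count(*) <= 10
    ORDER BY a.Company, a.Area_code, rnk''').fetchall()
for row in bottom:
    print(row)
```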
The description indicated you wanted the top 10 OR the bottom 10 for each company/area code in which case just use one of the results above. If you want to combine them we show a merge below. We have added a Rank column to indicate the smallest/largest (Rank is 1), second smallest/largest (Rank is 2), etc.
sqldf("select t.Area_code,
t.Company,
t.Top_Rank Rank,
t.Top_Subarea,
t.Top_Result,
b.Bottom_Subarea,
b.Bottom_Result
from Bottom10 b join Top10 t
on t.Area_code = b.Area_code and
t.Company = b.Company and
t.Top_Rank = b.Bottom_Rank
order by t.Area_code, t.Company, t.Top_Rank")
giving:
Area_code Company Rank Top_Subarea Top_Result Bottom_Subarea Bottom_Result
1 10 A 1 101 15 102 10
2 10 A 2 102 10 101 15
3 10 C 1 102 5 101 4
4 10 C 2 101 4 102 5
5 10 P 1 101 10 102 8
6 10 P 2 102 8 101 10
7 11 A 1 111 15 112 10
8 11 A 2 112 10 111 15
9 11 C 1 112 10 111 5
10 11 C 2 111 5 112 10
11 11 P 1 111 20 112 5
12 11 P 2 112 5 111 20
Note that this format makes less sense if there are ties and, in fact, could generate more than 10 rows for a Company/Area code, so you might just want to use the individual Top10 and Bottom10 in that case. You could also consider jittering df$Result if this is a problem:
df$Result <- jitter(df$Result)
# now perform SQL statements
2) Top/Bottom Only
Here we give only the top and bottom results and the corresponding subareas for each company/area code. Note that this uses an extension to SQL supported by SQLite (bare columns alongside min/max in an aggregate), and the SQL code is substantially simpler:
Bottom1 <- sqldf("select Company,
Area_code,
Subarea_code Bottom_Subarea,
min(Result) Bottom_Result
from df
group by Company, Area_code")
Top1 <- sqldf("select Company,
Area_code,
Subarea_code Top_Subarea,
max(Result) Top_Result
from df
group by Company, Area_code")
sqldf("select a.Company,
       a.Area_code,
       Top_Subarea,
       Top_Result,
       Bottom_Subarea,
       Bottom_Result
from Top1 a join Bottom1 b
on a.Company = b.Company and
   a.Area_code = b.Area_code
order by a.Area_code, a.Company")
This gives:
  Company Area_code Top_Subarea Top_Result Bottom_Subarea Bottom_Result
1       A        10         101         15            102            10
2       C        10         102          5            101             4
3       P        10         101         10            102             8
4       A        11         111         15            112            10
5       C        11         112         10            111             5
6       P        11         111         20            112             5
Update: correction, and added (2).
The answers above are fine for fetching the max result.
This solves the top-10 issue:
data.top <- data[ave(-data$Result, data$Company, data$Area_code, FUN = rank) <= 10, ]
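A pandas analogue of the ave()/rank filter (a sketch; ascending=False plays the role of negating Result):

```python
import pandas as pd

# The question's data, hard-coded.
df = pd.DataFrame({'Area_code':    [10] * 6 + [11] * 6,
                   'Subarea_code': [101, 101, 101, 102, 102, 102,
                                    111, 111, 111, 112, 112, 112],
                   'Company':      list('APC') * 4,
                   'Result':       [15, 10, 4, 10, 8, 5, 15, 20, 5, 10, 5, 10]})

# Rank Results within each Company/Area_code group, largest first,
# and keep the rows ranked 10 or better.
ranks = df.groupby(['Company', 'Area_code'])['Result'].rank(ascending=False)
top10 = df[ranks <= 10]  # every row survives here: each group has two rows
top1 = df[ranks <= 1]    # just the per-group maximum
print(top1)
```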
In this script the user declares the company. The script then returns that company's top 10 results (the same pattern with min() gives the bottom 10).
Result <- NULL
A <- read.table("/your-file.txt", header=TRUE, sep="\t", na.strings="NA")
Company <- A$Company == "A"  # can be A, C, P or another value
Subarea <- unique(A$Subarea_code)
for (i in 1:length(Subarea)) {
  Result[i] <- max(A$Result[Company & A$Subarea_code == Subarea[i]])
}
Res1 <- t(rbind(Subarea, Result))
Res2 <- Res1[order(-Res1[, 2]), ]
Res2[1:10, ]