SQL select top rows based on limit

Please help me to make the select query below.
Source table:
name  Amount
------------
A     2
B     3
C     2
D     7
If the limit is 5, then the result table should be:
name  Amount
------------
A     2
B     3
If the limit is 8, then the result table should be:
name  Amount
------------
A     2
B     3
C     2

You can use a window function to achieve this:
select name,
       amount
from (
    select t.*,
           sum(amount) over (order by name) s
    from your_table t
) t
where s <= 8;
The analytic function sum() is accumulated row by row in the given order (order by name).
Once you have the running sum up to each row, you can filter the result with a simple where clause to keep the rows whose running total of amount is less than or equal to the given limit.
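As a minimal sketch, the same query with the limit passed as a bind parameter (here the placeholder :limit_amount; the exact parameter syntax depends on your client or driver):
select name,
       amount
from (
    select t.*,
           sum(amount) over (order by name) as running_total
    from your_table t
) t
where running_total <= :limit_amount;  -- 5 keeps A and B; 8 keeps A, B and C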
More on this topic:
The SQL OVER() clause - when and why is it useful?
https://explainextended.com/2009/03/08/analytic-functions-sum-avg-row_number/

Related

How to get rows with minimum ID on a multiple columns query

I have a table like this:
Id   Type   multiple columns (a lot)...
1    50
2    50
3    50
4    75
5    75
6    75
I need to get only the rows with the oldest (minimum) Id as part of my query. The result should include all the columns of the table, but given that these multiple columns hold varying values, it's not possible to use MIN() and then GROUP BY.
I need something like this:
Id   Type   multiple columns (a lot)...
1    50
4    75
I've tried using the MIN() function and grouping, but that's not an option because the rest of the columns have different values, and if I use a GROUP BY I get all the rows and not only the ones with the lowest Ids.
Any ideas?
Thanks!
You can use the WITH TIES option in concert with the window function lag() over().
To be clear, this flags the rows where the value changes (illustrated after the results below).
Example
Select top 1 with ties *
From YourTable
Order by case when lag([type],1) over (order by id) = [Type] then 0 else 1 end desc
Results
Id Type
1 50
4 75
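To see how that sort key behaves row by row, you can expose the CASE expression as a column. This is only an illustrative sketch against the sample data above; NewTypeFlag is a made-up alias, and the table name follows the answer:
Select *,
       case when lag([Type],1) over (order by Id) = [Type] then 0 else 1 end as NewTypeFlag
From YourTable
Order By Id
NewTypeFlag is 1 for Ids 1 and 4 (the first row, where lag() returns NULL, and the row where Type changes) and 0 elsewhere, so TOP 1 WITH TIES ordered by the flag descending returns exactly those two rows.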
Based on Rodrigo's solution, you may have wanted the first [Type] regardless of sequence.
Select top 1 with ties *
From YourTable
Order by row_number() over (partition by [Type] order by ID)
You can add a column that numbers the duplicate rows within each Type.
That result is then used to keep only the unique (first) row of each group.
You can use a common table expression (CTE) to split the steps:
WITH rows_with_index AS (
    SELECT
        ROW_NUMBER() OVER (PARTITION BY Type ORDER BY Id) AS row_number,
        Id,
        Type
    FROM
        <TABLE>
)
SELECT *
FROM rows_with_index
WHERE rows_with_index.row_number = 1;

Use window functions to select the value from a column based on the sum of another column, in an aggregate query

Consider this data (View on DB Fiddle):
id   dept   value
1    A      5
1    A      5
1    B      7
1    C      5
2    A      5
2    A      5
2    B      15
2    A      2
The base query I am running is pretty simple. Just get the total value by id and the most frequent dept.
SELECT
    id,
    MODE() WITHIN GROUP (ORDER BY dept) AS dept_freq,
    SUM(value) AS value
FROM test
GROUP BY id
;
id   dept_freq   value
1    A           22
2    A           27
But I also need to get, for each id, the dept that concentrates the greatest value (so the greatest sum of value by id and dept, not the highest individual value in the original table).
Is there any way to use window functions to achieve that and do it directly in the base query above?
The expected output for this particular example would be:
id   dept_freq   dept_value   value
1    A           A            22
2    A           B            27
I could achieve that with the query below and then joining its result with the results of the base query above:
SELECT *
FROM (
    SELECT
        *,
        ROW_NUMBER() OVER (PARTITION BY id ORDER BY value DESC) AS row
    FROM (
        SELECT id, dept, SUM(value) AS value
        FROM test
        GROUP BY id, dept
    ) AS alias1
) AS alias2
WHERE alias2.row = 1
;
id   dept   value   row
1    A      10      1
2    B      15      1
But it is not easy to read or maintain, and it also seems pretty inefficient. So I thought it should be possible to achieve this using window functions directly in the base query, which might also help Postgres come up with a better query plan that does fewer passes over the data. But none of my attempts using OVER (PARTITION BY ...) and FILTER worked.
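For reference, here is a sketch of the two-step join described above; it is not part of the original post, and the table and column names simply follow the question (with rn as a made-up alias):
SELECT b.id, b.dept_freq, d.dept AS dept_value, b.value
FROM (
    SELECT id,
           MODE() WITHIN GROUP (ORDER BY dept) AS dept_freq,
           SUM(value) AS value
    FROM test
    GROUP BY id
) b
JOIN (
    SELECT id, dept,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY SUM(value) DESC) AS rn
    FROM test
    GROUP BY id, dept
) d ON d.id = b.id AND d.rn = 1;
On the sample data this returns (1, A, A, 22) and (2, A, B, 27), matching the expected output above.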
step-by-step demo: db<>fiddle
You can fetch the dept for the highest value using the first_value() window function. Adding this before your mode() grouping should do it:
SELECT
    id,
    highest_value_dept,
    MODE() WITHIN GROUP (ORDER BY dept) AS dept_freq,
    SUM(value) AS value
FROM (
    SELECT
        id,
        dept,
        value,
        FIRST_VALUE(dept) OVER (PARTITION BY id ORDER BY value DESC) AS highest_value_dept
    FROM test
) s
GROUP BY 1, 2

How to return the category with max value for every user in postgresql?

This is the table
id   category   value
1    A          40
1    B          20
1    C          10
2    A          4
2    B          7
2    C          7
3    A          32
3    B          21
3    C          2
I want the result like this
id   category
1    A
2    B
2    C
3    A
For small tables or for only very few rows per user, a subquery with the window function rank() (as demonstrated by The Impaler) is just fine. The resulting sequential scan over the whole table, followed by a sort, will be the most efficient query plan.
For more than a few rows per user, this gets increasingly inefficient though.
Typically, you also have a users table holding one distinct row per user. If you don't have it, create it! See:
Is there a way to SELECT n ON (like DISTINCT ON, but more than one of each)
Select first row in each GROUP BY group?
We can leverage that for an alternative query that scales much better - using WITH TIES in a LATERAL JOIN. Requires Postgres 13 or later.
SELECT u.id, t.*
FROM   users u
CROSS  JOIN LATERAL (
   SELECT t.category
   FROM   tbl t
   WHERE  t.id = u.id
   ORDER  BY t.value DESC
   FETCH  FIRST 1 ROWS WITH TIES  -- !
   ) t;
db<>fiddle here
See:
Get top row(s) with highest value, with ties
Fetching a minimum of N rows, plus all peers of the last row
This can use a multicolumn index to great effect - which must exist, of course:
CREATE INDEX ON tbl (id, value);
Or:
CREATE INDEX ON tbl (id, value DESC);
Even faster index-only scans become possible with:
CREATE INDEX ON tbl (id, value DESC, category);
Or (the optimum for the query at hand):
CREATE INDEX ON tbl (id, value DESC) INCLUDE (category);
This assumes value is defined NOT NULL; otherwise we have to use DESC NULLS LAST. See:
Sort by column ASC, but NULL values first?
To keep users in the result that don't have any rows in table tbl, use LEFT JOIN LATERAL (...) ON true (sketched below). See:
What is the difference between LATERAL JOIN and a subquery in PostgreSQL?
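A minimal sketch of that LEFT JOIN LATERAL variant, assuming the same table names as above:
SELECT u.id, t.category
FROM   users u
LEFT   JOIN LATERAL (
   SELECT t.category
   FROM   tbl t
   WHERE  t.id = u.id
   ORDER  BY t.value DESC
   FETCH  FIRST 1 ROWS WITH TIES
   ) t ON true;
Users without any rows in tbl then show up with category NULL instead of being dropped.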
You can use RANK() to identify the rows you want. Then, filtering is easy. For example:
select *
from (
    select *,
           rank() over (partition by id order by value desc) as rk
    from t
) x
where rk = 1
Result:
id category value rk
--- --------- ------ --
1 A 40 1
2 B 7 1
2 C 7 1
3 A 32 1
See running example at DB Fiddle.

Get Count Based on Combinations of Values from Second Column

I have a table format like below:
Id Code
1 A
1 B
2 A
3 A
3 C
4 A
4 B
I am trying to get count of code combinations like below:
Code Count
A,B 2 -- Row 1,2 and Row 6,7
A 1 -- Row 3
A,C 1 -- Row 4
I am unable to get the combination result. All I can do is group by, but I am not getting the count of Ids based on combinations.
You need to aggregate the rows, somehow, and do that twice. The code looks something like this:
select codes, count(*) as num_ids
from (select id, group_concat(code order by code) as codes
      from t
      group by id
     ) id
group by codes;
group_concat() might be spelled listagg() or string_agg() depending on the database.
In SQL Server, use string_agg():
select codes, count(*) as num_ids
from (select id, string_agg(code, ',') within group (order by code) as codes
      from t
      group by id
     ) id
group by codes;
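For clarity, here is what the inner aggregation alone produces on the sample data, shown with the group_concat() spelling and the table name t from the answer (purely to illustrate the two-step aggregation):
select id, group_concat(code order by code) as codes
from t
group by id;
-- sample data yields: 1 -> A,B   2 -> A   3 -> A,C   4 -> A,B
-- the outer query then counts ids per distinct codes value: A,B = 2, A = 1, A,C = 1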

query for rows returning the first element of a group in db2

Suppose I have a table filled with the data below. What SQL function or query should I use in DB2 to retrieve the FIRST row where field FLD_A has value A, the FIRST row where FLD_A has value B, and so on?
ID FLD_A FLD_B
1 A 10
2 A 20
3 A 30
4 B 10
5 A 20
6 C 30
I am expecting a table like the one below; I am aware of the grouping done by GROUP BY, but how can I limit the query to return only the very first row of each group?
Essentially, I would like the information from the very first row where each new value of FLD_A appears.
ID FLD_A FLD_B
1 A 10
4 B 10
6 C 30
Try this; it works in plain SQL:
SELECT * FROM Table1
WHERE ID IN (SELECT MIN(ID) FROM Table1 GROUP BY FLD_A)
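For reference, the subquery on its own, run against the sample data, returns the three minimum IDs:
SELECT MIN(ID) FROM Table1 GROUP BY FLD_A
-- returns 1 (for A), 4 (for B) and 6 (for C), so the outer query keeps exactly those rows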
A good way to approach this problem is with window functions and row_number() in particular:
select t.*
from (select t.*,
             row_number() over (partition by fld_a order by id) as seqnum
      from table1 t
     ) t
where seqnum = 1;
(This is assuming that "first" means "minimum id".)
If you use t.*, this adds one extra column (seqnum) to the output. You can just list the columns you want in order to avoid this, as sketched below.
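A minimal sketch of that, using the column names from the question's sample table:
select id, fld_a, fld_b
from (select t.*,
             row_number() over (partition by fld_a order by id) as seqnum
      from table1 t
     ) t
where seqnum = 1;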