Producing n rows per group - sql

It is known that GROUP BY produces one row per group. I want to produce multiple rows per group. The particular use case is, for example, selecting the two cheapest offerings for each item.
It is trivial for two or three elements in the group:
select type, variety, price
from fruits
where price = (select min(price) from fruits as f where f.type = fruits.type)
   or price = (select min(price) from fruits as f
               where f.type = fruits.type
                 and price > (select min(price) from fruits as f2
                              where f2.type = fruits.type));
(Select n rows per group in mysql)
But I am looking for a query that can show n rows per group, where n is arbitrarily large. In other words, a query that displays 5 rows per group should be convertible to a query that displays 7 rows per group by just replacing some constants in it.
I am not constrained to any DBMS, so I am interested in any solution that runs on any DBMS. It is fine if it uses some non-standard syntax.

For any database that supports analytic functions / window functions, this is relatively easy:
select *
from (select type,
             variety,
             price,
             rank() over ([partition by something]
                          order by price) rnk
      from fruits) rank_subquery
where rnk <= 3
If you omit the [partition by something], you'll get the top three overall rows. If you want the top three for each type, you'd partition by type in your rank() function.
Depending on how you want to handle ties, you may want to use dense_rank() or row_number() rather than rank(). If two rows tie for first, using rank, the next row would have a rnk of 3 while it would have a rnk of 2 with dense_rank. In both cases, both tied rows would have a rnk of 1. row_number would arbitrarily give one of the two tied rows a rnk of 1 and the other a rnk of 2.
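As an illustrative sketch (reusing the fruits table from the question), running all three functions over the same window makes the tie behavior visible:
select type, variety, price,
       rank()       over (partition by type order by price) as price_rank,
       dense_rank() over (partition by type order by price) as price_dense_rank,
       row_number() over (partition by type order by price) as price_row_number
from fruits;
-- If two rows tie for the cheapest price within a type, the columns read:
--   price_rank:       1, 1, 3, ...
--   price_dense_rank: 1, 1, 2, ...
--   price_row_number: 1, 2, 3, ...  (the tie is broken arbitrarily)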

To save anyone looking some time: at the time of this writing, this apparently won't work in MySQL because of the subquery restrictions documented at https://dev.mysql.com/doc/refman/5.7/en/subquery-restrictions.html (LIMIT is not supported inside IN/ALL/ANY/SOME subqueries).
I've never been a fan of correlated subqueries, as most uses I saw for them could usually be written more simply, but I think this has changed my mind... a little. (This is for MySQL.)
SELECT `type`, `variety`, `price`
FROM `fruits` AS f2
WHERE `price` IN (
    SELECT DISTINCT `price`
    FROM `fruits` AS f1
    WHERE f1.type = f2.type
    ORDER BY `price` ASC
    LIMIT X
);
Where X is the "arbitrary" value you wanted.
If you know how you want to limit further in cases of duplicate prices, and the data permits such limiting ...
SELECT `type`, `variety`, `price`
FROM `fruits` AS f2
WHERE (`price`, `other_identifying_criteria`) IN (
    SELECT DISTINCT `price`, `other_identifying_criteria`
    FROM `fruits` AS f1
    WHERE f1.type = f2.type
    ORDER BY `price` ASC, `other_identifying_criteria` [ASC|DESC]
    LIMIT X
);
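As an aside: since MySQL rejects LIMIT inside an IN subquery (per the note above), one workaround that does run on MySQL is to count the distinct cheaper prices per type instead. This is a sketch, not part of the original answer:
SELECT f2.`type`, f2.`variety`, f2.`price`
FROM `fruits` AS f2
WHERE (
    SELECT COUNT(DISTINCT f1.`price`)
    FROM `fruits` AS f1
    WHERE f1.type = f2.type
      AND f1.price < f2.price
) < X;
-- a row qualifies when fewer than X distinct prices are cheaper within its type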

"greatest N per group problems" can easily be solved using window functions:
select type, variety, price
from (
    select type, variety, price,
           dense_rank() over (partition by type order by price) as rnk
    from fruits
) t
where rnk <= 5;

Some window functions only work on SQL Server 2012 and above. Try this out:
SQL Server 2005 and Above Solution
DECLARE @yourTable TABLE(Category VARCHAR(50), SubCategory VARCHAR(50), price INT)
INSERT INTO @yourTable
VALUES ('Meat','Steak',1),
       ('Meat','Chicken Wings',3),
       ('Meat','Lamb Chops',5);

DECLARE @n INT = 2;

SELECT DISTINCT Category, CA.SubCategory, CA.price
FROM @yourTable A
CROSS APPLY
(
    SELECT TOP (@n) SubCategory, price
    FROM @yourTable B
    WHERE A.Category = B.Category
    ORDER BY price DESC
) CA
Results in two highest priced subCategories per Category:
Category                  SubCategory               price
------------------------- ------------------------- -----------
Meat                      Chicken Wings             3
Meat                      Lamb Chops                5
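For comparison, ROW_NUMBER() is also available from SQL Server 2005 onward, so the same result can be had without DISTINCT; a sketch against the same table variable:
SELECT Category, SubCategory, price
FROM (
    SELECT Category, SubCategory, price,
           ROW_NUMBER() OVER (PARTITION BY Category ORDER BY price DESC) AS rn
    FROM @yourTable
) ranked
WHERE rn <= @n; -- top @n per category; ties are broken arbitrarily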

Using MAX to compute MAX value in a subquery column

What I am trying to do: I have a table, "band_style" with schema (band_id, style).
One band_id may occur multiple times, listed with different styles.
I want ALL rows of band_id, NUM (where NUM is the number of different styles a band has) for the band ids with the SECOND MOST number of styles.
I have spent hours on this query- almost nothing seems to be working.
This is how far I got. The table (data) successfully computes all bands with a style count less than the maximum number of band styles. Now, I need ALL rows that have the max NUM from the resulting table. This will give me the bands with the second most number of styles.
However, this final result seems to be ignoring the MAX function and just returning the table (data) as is. Can someone please provide some insight/working method? I have over 20 attempts of this query with this being the closest.
Using SQL*PLUS on Oracle
WITH data AS (
    SELECT band_id, COUNT(*) AS NUM
    FROM band_style
    GROUP BY band_id
    HAVING COUNT(*) < (SELECT MAX(c)
                       FROM (SELECT COUNT(band_id) AS c
                             FROM band_style
                             GROUP BY band_id)))
SELECT data.band_id, data.NUM
FROM data
INNER JOIN (SELECT band_id m, MAX(NUM) n
            FROM data
            GROUP BY band_id) t
    ON t.m = data.band_id
    AND t.n = data.NUM;
Something like this... based on a comment under your post, you are looking for DENSE_RANK():
select band_id
from (select band_id, dense_rank() over (order by count(style) desc) as drk
      from band_style
      group by band_id)
where drk = 2;
I would use a windowing function (RANK() in this case), which is great for finding the nth-ranked thing in a set.
SELECT DISTINCT bs.band_id
FROM band_style bs
WHERE EXISTS (
    SELECT NULL
    FROM (SELECT bs2.band_id,
                 bs2.num,
                 RANK() OVER (ORDER BY bs2.num DESC) AS numrank
          FROM (SELECT bs1.band_id, COUNT(*) AS num
                FROM band_style bs1
                GROUP BY bs1.band_id) bs2) bs3
    WHERE bs.band_id = bs3.band_id
      AND bs3.numrank = 2)

Get two most frequent data from SQL tbl?

I have a table, call it tbl_test, into which data is continuously being inserted; it has approximately 10^6 records at any time. It has the columns
Acquire_Id (value between 1 and 20),
Status_Msg (value between 'A' and 'Z'),
Status_Code (value between 1 and 26).
There is a one-to-one mapping between Status_Msg and Status_Code.
Now I want to get the two most frequent Status_Msg and Status_Code counts for each acquirer, if they are present in the table.
The query should be cost-efficient.
Most databases support the ANSI standard window functions. You can get what you want using row_number() (or rank() or dense_rank(), depending on how ties are returned) after aggregating the values.
The following returns exactly two rows for each acquirer (even if there are ties).
select t.*
from (select t.acquire_id, t.status_msg, t.status_code, count(*) as cnt,
             row_number() over (partition by t.acquire_id order by count(*) desc) as seqnum
      from tbl_test t
      group by t.acquire_id, t.status_msg, t.status_code
     ) t
where seqnum <= 2;
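If ties should be included instead (two statuses sharing second place, for example), a sketch of the dense_rank() variant mentioned above:
select t.*
from (select t.acquire_id, t.status_msg, t.status_code, count(*) as cnt,
             dense_rank() over (partition by t.acquire_id order by count(*) desc) as seqnum
      from tbl_test t
      group by t.acquire_id, t.status_msg, t.status_code
     ) t
where seqnum <= 2; -- may return more than two rows per acquirer when counts tie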

SQL random sample with groups

I have a university graduate database and would like to extract a random sample of data of around 1000 records.
I want to ensure the sample is representative of the population, so I would like to include the same proportions of each course code. I could do this using the following:
select top 500 id from degree where coursecode = 1 order by newid()
union
select top 300 id from degree where coursecode = 2 order by newid()
union
select top 200 id from degree where coursecode = 3 order by newid()
but we have hundreds of course codes, so this would be time-consuming. I would like to be able to reuse this code for different sample sizes and don't particularly want to go through the query and hard-code the sample sizes.
Any help would be greatly appreciated
You want a stratified sample. I would recommend doing this by sorting the data by course code and doing an nth sample. Here is one method that works best if you have a large population size:
select d.*
from (select d.*,
             row_number() over (order by coursecode, newid()) as seqnum,
             count(*) over () as cnt
      from degree d
     ) d
where seqnum % (cnt / 500) = 1;
EDIT:
You can also calculate the population size for each group "on the fly":
select d.*
from (select d.*,
             row_number() over (partition by coursecode order by newid()) as seqnum,
             count(*) over () as cnt,
             count(*) over (partition by coursecode) as cc_cnt
      from degree d
     ) d
where seqnum < 500 * (cc_cnt * 1.0 / cnt)
Add a table for storing population.
I think it should be like this:
SELECT *
FROM (SELECT id, coursecode,
             ROW_NUMBER() OVER (PARTITION BY coursecode ORDER BY NEWID()) AS rn
      FROM degree) t
LEFT OUTER JOIN population p ON t.coursecode = p.coursecode
WHERE rn <= p.SampleSize
It is not necessary to partition the population at all.
If you are taking a sample of 1000 from a population among hundreds of course codes, it stands to reason that many of those course codes will not be selected at all in any one sampling.
If the population is uniform (say, a continuous sequence of student IDs), a uniformly-distributed sample will automatically be representative of population weighting by course code. Since newid() is a uniform random sampler, you're good to go out of the box.
The only wrinkle that you might encounter is if a student ID is associated with multiple course codes. In this case, make a unique list (temporary table or subquery) containing a sequential id, student id, and course code, sample the sequential id from it, and group by student id to remove duplicates (one way to realize this is sketched below).
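A sketch of that idea in T-SQL, sampling directly with NEWID() rather than via a sequential id (student_id here is a hypothetical name for whatever identifies a student in degree):
SELECT TOP 1000 student_id, coursecode
FROM (
    SELECT student_id, MIN(coursecode) AS coursecode -- collapse duplicates to one row per student
    FROM degree
    GROUP BY student_id
) AS uniq
ORDER BY NEWID(); -- uniform random sample over the deduplicated list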
I've done similar queries (but not on MS SQL) using a ROW_NUMBER approach:
select ...
from (select ...,
             row_number() over (partition by coursecode order by newid()) as rn
      from degree
     ) as d
join sample_size as s
  on d.coursecode = s.coursecode
 and d.rn <= s.samplesize

SQL. Is there any efficient way to find second lowest value?

I have the following table:
ItemID  Price
1       10
2       20
3       12
4       10
5       11
I need to find the second lowest price. So far, I have a query that works, but I am not sure it is the most efficient query:
select min(price)
from table
where itemid not in (select itemid
                     from table
                     where price = (select min(price)
                                    from table));
What if I have to find third OR fourth minimum price? I am not even mentioning other attributes and conditions... Is there any more efficient way to do this?
PS: note that minimum is not a unique value. For example, items 1 and 4 are both minimums. Simple ordering won't do.
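To see why, a quick sketch using LIMIT/OFFSET syntax: with the prices above (10, 10, 11, 12, 20 once sorted), skipping one row merely skips the duplicate:
select price from table order by price limit 1 offset 1;
-- returns 10 (the second row), not the second lowest value, which is 11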
SELECT MIN(price)
FROM table
WHERE price > (SELECT MIN(price)
               FROM table)
select price from table where price in (
    select distinct price
    from (select t.price,
                 dense_rank() over (order by t.price) as rownum -- dense_rank, so duplicate prices don't consume ranks
          from table t) as x
    where x.rownum = 2 -- or 3, 4, 5, etc.
)
Not sure if this would be the fastest, but it would make it easier to select the second, third, etc... Just change the TOP value.
UPDATED
SELECT MIN(price)
FROM table
WHERE price NOT IN (SELECT DISTINCT TOP 1 price FROM table ORDER BY price)
To find the second minimum salary of an employee, you can use the following:
select min(salary)
from table
where salary > (select min(salary) from table);
This is a good answer:
SELECT MIN( price )
FROM table
WHERE price > ( SELECT MIN( price )
FROM table )
Make sure when you do this that there is only 1 row in the subquery! (the part in brackets at the end).
For example, if you want to use GROUP BY, you will have to qualify it further:
SELECT MIN(price)
FROM table te1
WHERE price > (SELECT MIN(price)
               FROM table te2
               WHERE te1.brand = te2.brand)
GROUP BY brand
Because GROUP BY will give you multiple rows, otherwise you will get the error:
SQL Error [21000]: ERROR: more than one row returned by a subquery used as an expression
I guess the simplest way to do this is with the OFFSET ... FETCH filter from standard SQL; DISTINCT is not necessary if there are no repeated values in your column.
select distinct(price) from table
order by price
offset 1 row fetch first 1 row only;
No need to write complex subqueries....
In Amazon Redshift, use LIMIT and OFFSET instead, for example:
Select distinct(price) from table
order by price
limit 1
offset 1;
You can use one of the following:
select min(your_field)
from your_table
where your_field not in (select distinct top 1 your_field
                         from your_table
                         order by your_field asc)
OR
select top 1 ColumnName
from TableName
where ColumnName not in (select top 1 ColumnName
                         from TableName
                         order by ColumnName asc)
order by ColumnName asc
I think you can find the second minimum using LIMIT and ORDER BY
select max(price) as minimum
from (select distinct price
      from tableName
      order by price asc
      limit 2) t -- or 3, 4, 5, etc.
If you want to find the third or fourth minimum and so on, just change the number in the LIMIT clause.
You can use the ranking window functions.
It may look like a more complex query, but it achieves the same result as the other answers:
WITH Temp_table AS (SELECT ITEM_ID, PRICE,
                           DENSE_RANK() OVER (ORDER BY PRICE) AS Rnk -- DENSE_RANK, so tied minimums don't leave rank 2 empty
                    FROM YOUR_TABLE_NAME)
SELECT ITEM_ID FROM Temp_table
WHERE Rnk = 2;
Maybe you can check the min value first and then apply a not-equal or greater-than operator. This eliminates the subquery but requires a two-step process:
select min(price)
from table
where price <> -- "the min price you previously got"
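Spelled out, the two-step version might look like this, with the value from the first query substituted by hand or by the calling code:
-- Step 1: get the minimum.
select min(price) from table; -- suppose it returns 10
-- Step 2: get the minimum among the remaining values.
select min(price) from table
where price <> 10; -- the value obtained in step 1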

How can I get a random cartesian product in PostgreSQL?

I have two tables, custassets and tags. To generate some test data I'd like to do an INSERT INTO a many-to-many table with a SELECT that gets random rows from each (so that a random primary key from one table is paired with a random primary key from the second). To my surprise this isn't as easy as I first thought, so I'm persisting with this to teach myself.
Here's my first attempt. I select 10 custassets and 3 tags, but both are the same in each case. I'd be fine with the first table being fixed, but I'd like to randomise the tags assigned.
SELECT
    custassets_rand.id custassets_id,
    tags_rand.id tags_rand_id
FROM
    (SELECT id FROM custassets WHERE defunct = false ORDER BY RANDOM() LIMIT 10) AS custassets_rand,
    (SELECT id FROM tags WHERE defunct = false ORDER BY RANDOM() LIMIT 3) AS tags_rand
This produces:
 custassets_id | tags_rand_id
---------------+--------------
          9849 |         3322  }
          9849 |         4871  } this pattern of tag PKs is repeated
          9849 |         5188  }
         12145 |         3322
         12145 |         4871
         12145 |         5188
         17837 |         3322
         17837 |         4871
         17837 |         5188
....
I then tried the following approach: doing the second RANDOM() call in the SELECT column list. However this one was worse, as it chooses a single tag PK and sticks with it.
SELECT
    custassets_rand.id custassets_id,
    (SELECT id FROM tags WHERE defunct = false ORDER BY RANDOM() LIMIT 1) tags_rand_id
FROM
    (SELECT id FROM custassets WHERE defunct = false ORDER BY RANDOM() LIMIT 30) AS custassets_rand
Result:
 custassets_id | tags_rand_id
---------------+--------------
         16694 |         1537
         14204 |         1537
         23823 |         1537
         34799 |         1537
         36388 |         1537
....
This would be easy in a scripting language, and I'm sure it can be done quite easily with a stored procedure or temporary table. But can I do it with just an INSERT INTO ... SELECT?
I did think of choosing integer primary keys using a random function, but unfortunately the primary keys for both tables have gaps in the increment sequences (and so an empty row might be chosen in each table). That would have been fine otherwise!
Note that what you are looking for is not a Cartesian product, which would produce n*m rows; rather a random 1:1 association, which a plain join on rn produces for LEAST(n,m) rows (the modulo variant below keeps all rows of the bigger table).
To produce truly random combinations, it's enough to randomize rn for the bigger set:
SELECT c_id, t_id
FROM (SELECT id AS c_id, row_number() OVER (ORDER BY random()) AS rn
      FROM custassets) x
JOIN (SELECT id AS t_id, row_number() OVER () AS rn FROM tags) y USING (rn);
If arbitrary combinations are good enough, this is faster (especially for big tables):
SELECT c_id, t_id
FROM (SELECT id AS c_id, row_number() OVER () AS rn FROM custassets) x
JOIN (SELECT id AS t_id, row_number() OVER () AS rn FROM tags) y USING (rn);
If the numbers of rows in the two tables do not match and you do not want to lose rows from the bigger table, use the modulo operator % to join rows from the smaller table multiple times:
SELECT c_id, t_id
FROM (
    SELECT id AS c_id, row_number() OVER () AS rn
    FROM custassets -- table with fewer rows
) x
JOIN (
    SELECT id AS t_id, (row_number() OVER () % small.ct) + 1 AS rn
    FROM tags,
         (SELECT count(*) AS ct FROM custassets) AS small
) y USING (rn);
Window functions were added with PostgreSQL 8.4.
WITH a_ttl AS (
    SELECT count(*) AS ttl FROM custassets c),
b_ttl AS (
    SELECT count(*) AS ttl FROM tags),
rows AS (
    SELECT gs.*
    FROM generate_series(1,
            (SELECT max(ttl) AS ttl
             FROM (SELECT ttl FROM a_ttl
                   UNION
                   SELECT ttl FROM b_ttl) AS m)) AS gs(row)),
tab_a_rand AS (
    SELECT custassets_id, row_number() OVER (ORDER BY random()) AS row
    FROM custassets),
tab_b_rand AS (
    SELECT id, row_number() OVER (ORDER BY random()) AS row
    FROM tags)
SELECT a.custassets_id, b.id
FROM rows r
JOIN a_ttl ON 1=1
JOIN b_ttl ON 1=1
LEFT JOIN tab_a_rand a ON a.row = (r.row % a_ttl.ttl) + 1
LEFT JOIN tab_b_rand b ON b.row = (r.row % b_ttl.ttl) + 1
ORDER BY 1, 2;
You can test this query on SQL Fiddle.
Here is a different approach to picking a single combination from two tables at random, assuming two tables a and b, both with primary key id. The tables needn't be the same size, and the second row is chosen independently of the first, which might not matter much for test data.
SELECT *
FROM a, b
WHERE a.id = (SELECT id
              FROM a
              OFFSET (SELECT random() * (SELECT count(*) FROM a))
              LIMIT 1)
  AND b.id = (SELECT id
              FROM b
              OFFSET (SELECT random() * (SELECT count(*) FROM b))
              LIMIT 1);
Tested with two tables, one of 7000 rows and one of 100k rows: the result came back immediately. For more than one result, you have to call the query repeatedly; increasing the LIMIT and changing x.id = to x.id IN would produce (aA, aB, bA, bB) result patterns.
It bugs me that after all these years of relational databases, there don't seem to be very good cross-database ways of doing things like this. The MSDN article http://msdn.microsoft.com/en-us/library/cc441928.aspx seems to have some interesting ideas, but of course that's not PostgreSQL. And even then, their solution requires a single pass, when I'd think it ought to be possible without the scan.
I can imagine a few ways that might work without a pass (in selection), but it would involve creating another table that maps your table's primary keys to random numbers (or to linear sequences that you later randomly select, which in some ways may actually be better), and of course, that may have issues as well.
I realize this is probably a non-useful comment, I just felt I needed to rant a bit.
If you just want to get a random set of rows from each side, use a pseudo-random number generator. I would use something like:
select *
from (select a.*, row_number() over (order by NULL) as rownum -- NULL may not work; "(SELECT NULL)" works in MSSQL
      from a
     ) a cross join
     (select b.*, row_number() over (order by NULL) as rownum
      from b
     ) b
where a.rownum <= 30 and b.rownum <= 30
This is doing a Cartesian product, which returns 900 rows assuming a and b each have at least 30 rows.
However, I interpreted your question as getting random combinations. Once again, I'd go for the pseudo-random approach.
select *
from (select a.*, row_number() over (order by NULL) as rownum -- NULL may not work; "(SELECT NULL)" works in MSSQL
      from a
     ) a cross join
     (select b.*, row_number() over (order by NULL) as rownum
      from b
     ) b
where mod(a.rownum*107 + b.rownum*257 + 17, 101) < <some value>
This lets you get combinations among arbitrary rows.
Just a plain Cartesian product ON random() appears to work reasonably well. Easy as pie...
-- Cartesian product
-- EXPLAIN ANALYZE
INSERT INTO dirgraph(point_from,point_to,costs)
SELECT p1.the_point , p2.the_point, (1000*random() ) +1
FROM allpoints p1
JOIN allpoints p2 ON random() < 0.002
;