How to find most-correlated X for each Y? - sql

I have a query I can run, which produces rows like this:
ID | category | property_A | property_B
----+----------+------------+------------
1 | X | tall | old
2 | X | short | old
3 | X | tall | old
4 | X | short | young
5 | Y | short | old
6 | Y | short | old
7 | Y | tall | old
I'd like to find, for each category and property_B, what is the most common property_A, and put that into another table somewhere for later use. So here I'd like to know that in category X, old people tend to be tall and young people short, while in category Y, old people tend to be short.
The domain of each column is finite and not too large - there are something like 200 categories and a dozen or so values each for property_A and property_B. So I could write a dumb script on my client that queries the database 200*12*12 times, running a narrow query each time, but that seems like it must be the wrong approach, as well as wasteful given that it's expensive to produce this table only to throw most of it away.
But I don't even know what words to look up to find the right approach: "sql find correlated rows" shows how to find integer correlations, but I'm not interested in integers. So what do I do instead?

You can readily do this with aggregation and window/analytic functions. You want the row ranked first by count. The following returns the most popular A:
select category, property_b, property_a as MostPopularA
from (select category, property_b, property_a, count(*) as cnt,
             row_number() over (partition by category, property_b order by count(*) desc) as seqnum
      from table t
      group by category, property_b, property_a
     ) t
where seqnum = 1;
If you want to get all values when there is a tie, then use dense_rank() instead of row_number().
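For instance, a sketch of the tie-friendly variant (tbl stands in for whatever your table or query is called):
select category, property_b, property_a as MostPopularA
from (select category, property_b, property_a, count(*) as cnt,
             dense_rank() over (partition by category, property_b order by count(*) desc) as seqnum
      from tbl
      group by category, property_b, property_a
     ) t
where seqnum = 1;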

I suggest a combination of GROUP BY and DISTINCT ON, which is faster / simpler / more elegant in Postgres:
SELECT DISTINCT ON (category, property_b)
category, property_b, property_a, count(*) AS ct
FROM tbl
GROUP BY category, property_b, property_a
ORDER BY category, property_b, ct DESC;
Returns:
category | property_b | property_a | ct
---------+------------+------------+----
X | old | tall | 2
X | young | short | 1
Y | old | short | 2
If multiple peers tie for the most common value, only one arbitrary pick is returned.
This works in a single query level without subquery, since aggregation (GROUP BY) is applied before the DISTINCT step. Detailed explanation for DISTINCT ON:
Select first row in each GROUP BY group?
SQL Fiddle.
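Since the goal is to put the result into another table for later use, the DISTINCT ON query can feed CREATE TABLE AS directly; a minimal sketch, with most_common_a as a made-up name for the target table:
CREATE TABLE most_common_a AS
SELECT DISTINCT ON (category, property_b)
       category, property_b, property_a, count(*) AS ct
FROM tbl
GROUP BY category, property_b, property_a
ORDER BY category, property_b, ct DESC;
To refresh it later, truncate it and re-run the SELECT as an INSERT, or simply drop and recreate the table.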

Related

How would you generate fake account numbers for a company project with SQL?

I have a big data set in Redshift which my company will share with university students to analyze. I need to mask the real customer account numbers.
I've looked at the random function but there's one catch: some customers are repeated, so I need to retain that for the analysis to be useful. Also, with a random number there's a small possibility you would repeat account numbers, right?
How would you achieve this? Generate a new_random_id. It must be unique from all others in the table (there are over 4 million in the table), but must be the same for those rows where the actual account ID is the same.
+-------------------+---------------+---------+
| actual_account_id | new_random_id | status  |
+-------------------+---------------+---------+
| 100               | 123           | new     |
| 100               | 123           | upgrade |
| 200               | 249           | new     |
| 300               | 401           | upgrade |
+-------------------+---------------+---------+
I realize I could first generate a mapping table like this below, and then join to the main table, but it still doesn't solve the problem of possibly repeating new random IDs.
select distinct actual_account_id, cast(random()*1000000 as int) as new_random_id
into mapping_table
from t1;
I would create a mapping table using window functions:
select actual_account_id,
row_number() over (order by random()) as fake_account_id
from t1
group by actual_account_id;
This should be a meaningless sequential number.
Redshift might be a bit slow on the ROW_NUMBER() with no PARTITION BY. If performance is an issue, you can use something like this:
select actual_account_id,
       tmp * 100 + row_number() over (partition by tmp order by random()) as fake_account_number
from (select actual_account_id,
             cast(random()*1000000 as int) as tmp
      from t1
      group by actual_account_id
     ) t;
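However you generate it, once the mapping table is in place you would join it back to the original table and expose only the masked id; a minimal sketch, assuming the mapping was materialized as mapping_table(actual_account_id, fake_account_id):
select m.fake_account_id, t1.status
from t1
join mapping_table m
  on m.actual_account_id = t1.actual_account_id;
Because the join key is the real account id, repeated customers keep the same fake id, and uniqueness comes from how the mapping was built rather than from chance.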

Novice seeking help, Max Aggregate not returning expected results

I'm still very new to MS-SQL. I have a simple table and query that is getting the best of me. I know it will be something fundamental I'm overlooking.
I've changed the field names but the idea is the same.
So the idea is that every time someone signs up they get a RegID, Name, and Team. The names are unique, so for the data below, yes, John changed teams. And that's my trouble.
Football Table
+------------+----------+---------+
| Max_RegID | Name | Team |
+------------+----------+---------+
| 100 | John | Red |
| 101 | Bill | Blue |
| 102 | Tom | Green |
| 103 | John | Green |
+------------+----------+---------+
With the query at the bottom using the Max_RegID, I was expecting to get back only one record.
+------------+----------+---------+
| Max_RegID | Name | Team |
+------------+----------+---------+
| 103 | John | Green |
+------------+----------+---------+
Instead I get back the rows below, which seem to include the max RegID for each team as well. What am I doing wrong?
+------------+----------+---------+
| Max_RegID | Name | Team |
+------------+----------+---------+
| 100 | John | Red |
| 103 | John | Green |
+------------+----------+---------+
My Query
SELECT
Max(Football.RegID) AS Max_RegID,
Football.Name,
Football.Team
FROM
Football
GROUP BY
Football.RegID,
Football.Name,
Football.Team
EDIT: Removed the WHERE clause.
The reason you're getting the results that you are is because of the way you have your GROUP BY clause structured.
When you're using any aggregate function, MAX(X), SUM(X), COUNT(X), or what have you, you're telling the SQL engine that you want the aggregate value of column X for each unique combination of the columns listed in the GROUP BY clause.
In your query as written, you're grouping by all three of the columns in the table, telling the SQL engine that each tuple is unique. Therefore the query is returning ALL of the values, and you aren't actually getting the MAX of anything at all.
What you actually want in your results is the maximum RegID for each distinct value in the Name column and also the Team that goes along with that (RegID,Name) combination.
To accomplish that you need to find the MAX(RegID) for each Name in an initial data set, and then use that list of RegIDs to add the values for Name and Team in a secondary data set.
Caveat (per comments from @HABO): This is premised on the assumption that RegID is a unique number (an IDENTITY column, value from a SEQUENCE, or something of that sort). If there are duplicate values, this will fail.
The most straightforward way to accomplish that is with a sub-query. The sub-query below gets your unique RegIDs, then joins to the original table to add the other values.
SELECT
f.RegID
,f.Name
,f.Team
FROM
Football AS f
JOIN
(--The sub-query, sq, gets the list of IDs
SELECT
MAX(f2.RegID) AS Max_RegID
FROM
Football AS f2
GROUP BY
f2.Name
) AS sq
ON
sq.Max_RegID = f.RegID;
EDIT: Sorry. I just re-read the question. To get just the single record for the MAX(RegID), just take the GROUP BY out of the sub-query, and you'll just get the current maximum value, which you can use to find the values in the rest of the columns.
SELECT
f.RegID
,f.Name
,f.Team
FROM
Football AS f
JOIN
(--The sub-query, sq, now gets the MAX ID
SELECT
MAX(f2.RegID) AS Max_RegID
FROM
Football AS f2
) AS sq
ON
sq.Max_RegID = f.RegID;
Use row_number()
select *
from (
    SELECT
        Football.RegID AS Max_RegID,
        Football.Name,
        Football.Team,
        row_number() over (partition by Name order by Football.RegID desc) as rn
    FROM Football
    WHERE Football.Name = 'John'
) a
where rn = 1
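If you later need the latest row for every name, not just John, the same pattern works without the name filter; a sketch:
select *
from (
    SELECT
        Football.RegID AS Max_RegID,
        Football.Name,
        Football.Team,
        row_number() over (partition by Name order by Football.RegID desc) as rn
    FROM Football
) a
where rn = 1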
You can simply edit your query this way:
SELECT *
FROM Football f
WHERE f.Name = 'John'
  AND f.Max_RegID = (SELECT Max(Max_RegID)
                     FROM Football
                     WHERE Name = 'John')
or
If SQL Server, simply use this:
select top 1 * from Football f
where f.Name = 'John'
order by Max_RegID desc
or
If MySQL, then:
select * from Football f
where f.Name = 'John'
order by Max_RegID desc
Limit 1
You need a self join:
select f1.*
from Football f inner join
Football f1
on f1.name = f.name
where f.Max_RegID = 103;
After revisiting the question, the sample data suggests a subquery:
select f.*
from Football f
where name = (select top (1) f1.name
from Football f1
order by f1.Max_RegID desc
);

Use something like TOP with GROUP BY

With table table1 like below
+--------+-------+-------+------------+-------+
| flight | orig  | dest  | passenger  | bags  |
+--------+-------+-------+------------+-------+
| 1111   | sfo   | chi   | david      | 3     |
| 1112   | sfo   | dal   | david      | 7     |
| 1112   | sfo   | dal   | kim        | 10    |
| 1113   | lax   | san   | ameera     | 5     |
| 1114   | lax   | lfr   | tim        | 6     |
| 1114   | lax   | lfr   | jake       | 8     |
+--------+-------+-------+------------+-------+
I'm aggregating the table by orig like below
select
orig
, count(*) flight_cnt
, count(distinct passenger) as pass_cnt
, percentile_cont(0.5) within group ( order by bags ASC) as bag_cnt_med
from table1
group by orig
I need to add the passenger with the longest name ( length(passenger) ) for each orig group - how do I go about it?
Output expected
+------+-------------+-----------+---------------+-------------------+
| orig | flight_cnt | pass_cnt | bags_cnt_med | pass_max_len_name |
+------+-------------+-----------+---------------+-------------------+
| sfo | 3 | 2 | 7 | david |
| lax | 3 | 3 | 6 | ameera |
+------+-------------+-----------+---------------+-------------------+
You can conveniently retrieve the passenger with the longest name per group with DISTINCT ON.
Select first row in each GROUP BY group?
But I see no way to combine that (or any other simple way) with your original query in a single SELECT. I suggest joining two separate subqueries:
SELECT *
FROM ( -- your original query
SELECT orig
, count(*) AS flight_cnt
, count(distinct passenger) AS pass_cnt
, percentile_cont(0.5) WITHIN GROUP (ORDER BY bags) AS bag_cnt_med
FROM table1
GROUP BY orig
) org_query
JOIN ( -- my addition
SELECT DISTINCT ON (orig) orig, passenger AS pass_max_len_name
FROM table1
ORDER BY orig, length(passenger) DESC NULLS LAST
) pas USING (orig);
USING in the join clause conveniently only outputs one instance of orig, so you can simply use SELECT * in the outer SELECT.
If passenger can be NULL, it is important to add NULLS LAST:
PostgreSQL sort by datetime asc, null first?
From multiple passenger names with the same maximum length in the same group, you get an arbitrary pick - unless you add more expressions to ORDER BY as tiebreaker. Detailed explanation in the answer linked above.
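If you want the pick to be deterministic, add one more expression as tiebreaker, e.g. breaking ties alphabetically in the second subquery; a sketch of the adjusted ORDER BY:
ORDER BY orig, length(passenger) DESC NULLS LAST, passenger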
Performance?
Typically, a single scan is superior, especially with sequential scans.
The above query uses two scans (maybe index / index-only scans). But the second scan is comparatively cheap unless the table is too huge to fit in cache (mostly). Lukas suggested an alternative query with only a single SELECT adding:
, (ARRAY_AGG (passenger ORDER BY LENGTH (passenger) DESC))[1] -- I'd add NULLS LAST
The idea is smart, but last time I tested, array_agg with ORDER BY did not perform so well. (The overhead of per-group ORDER BY is substantial, and array handling is expensive, too.)
The same approach can be cheaper with a custom aggregate function first() as instructed in the Postgres Wiki. Or, faster yet, with a version written in C, available on PGXN. That eliminates the extra cost for array handling, but we still need the per-group ORDER BY. It may be faster when there are only a few groups. You would then add:
, first(passenger ORDER BY length(passenger) DESC NULLS LAST)
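For reference, a minimal sketch of such a first() aggregate, modeled on the plain-SQL version from the Postgres Wiki (the PGXN version swaps the transition function for a faster C implementation):
CREATE OR REPLACE FUNCTION first_agg(anyelement, anyelement)
  RETURNS anyelement
  LANGUAGE sql IMMUTABLE STRICT AS
'SELECT $1';

CREATE AGGREGATE first (anyelement) (
  SFUNC = first_agg,
  STYPE = anyelement
);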
Gordon and Lukas also mention the window function first_value(). Window functions are applied after aggregate functions. To use it in the same SELECT, we would need to aggregate passenger somehow first - catch 22. Gordon solves this with a subquery - another candidate for good performance with standard Postgres.
first() does the same without subquery and should be simpler and a bit faster. But it still won't be faster than a separate DISTINCT ON for most cases with few rows per group. For lots of rows per group, a recursive CTE technique is typically faster. There are yet faster techniques if you have a separate table holding all relevant, unique orig values. Details:
Optimize GROUP BY query to retrieve latest record per user
The best solution depends on various factors. The proof of the pudding is in the eating. To optimize performance you have to test with your setup. The above query should be among the fastest.
One method uses the window function first_value(). Unfortunately, this is not available as an aggregation function:
select orig,
count(*) flight_cnt,
count(distinct passenger) as pass_cnt,
percentile_cont(0.5) within group ( order by bags ASC) as bag_cnt_med,
max(longest_name) as longest_name
from (select t1.*,
             first_value(passenger) over (partition by orig order by length(passenger) desc) as longest_name
      from table1 t1
     ) t1
group by orig;
You are looking for something like Oracle's KEEP FIRST/LAST, where you get a value (the passenger name) according to an aggregate (the name length). PostgreSQL doesn't have such a function as far as I know.
One way to go about this is a trick: Combine length and name, get the maximum, then extract the name: '0005david' > '0003kim' etc.
select
orig
, count(*) flight_cnt
, count(distinct passenger) as pass_cnt
, percentile_cont(0.5) within group ( order by bags ASC) as bag_cnt_med
, substr(max(to_char(char_length(passenger), 'FM0000') || passenger), 5) as name
from table1
group by orig
order by orig;
For small group sizes, you could use array_agg()
SELECT
orig
, COUNT (*) AS flight_cnt
, COUNT (DISTINCT passenger) AS pass_cnt
, PERCENTILE_CONT (0.5) WITHIN GROUP (ORDER BY bags ASC) AS bag_cnt_med
, (ARRAY_AGG (passenger ORDER BY LENGTH (passenger) DESC))[1] AS pass_max_len_name
FROM table1
GROUP BY orig
Having said that, while this is shorter syntax, a first_value() window function based approach might be faster for larger data sets, as array accumulation can become expensive.
But it does not solve the problem if you have several names with the same length:
t=# with p as (select distinct orig,passenger,length(trim(passenger)),max(length(trim(passenger))) over (partition by orig) from s127)
, o as ( select
orig
, count(*) flight_cnt
, count(distinct passenger) as pass_cnt
, percentile_cont(0.5) within group ( order by bags ASC) as bag_cnt_med
from s127
group by orig)
select distinct o.*,p.passenger from o join p on p.orig = o.orig where max=length;
orig | flight_cnt | pass_cnt | bag_cnt_med | passenger
---------+------------+----------+-------------+--------------
lax | 3 | 3 | 6 | ameera
sfo | 3 | 2 | 7 | david
(2 rows)
populate:
t=# create table s127(flight int,orig text,dest text, passenger text, bags int);
CREATE TABLE
Time: 52.678 ms
t=# copy s127 from stdin delimiter '|';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 1111 | sfo | chi | david | 3
>> 1112 | sfo | dal | david | 7
>> 1112 | sfo | dal | kim | 10
>> 1113 | lax | san | ameera | 5
>> 1114 | lax | lfr | tim | 6
>> 1114 | lax | lfr | jake | 8
>> \.
COPY 6

Find spectators that have seen the same shows (match multiple rows for each)

For an assignment I have to write several SQL queries for a database stored in a PostgreSQL server running PostgreSQL 9.3.0. However, I find myself blocked on the last query. The database models a reservation system for an opera house. The query is about associating with a given spectator the other spectators who attend the same events every time.
The model looks like this:
Reservations table
id_res | create_date | tickets_presented | id_show | id_spectator | price | category
-------+---------------------+---------------------+---------+--------------+-------+----------
1 | 2015-08-05 17:45:03 | | 1 | 1 | 195 | 1
2 | 2014-03-15 14:51:08 | 2014-11-30 14:17:00 | 11 | 1 | 150 | 2
Spectators table
id_spectator | last_name | first_name | email | create_time | age
---------------+------------+------------+----------------------------------------+---------------------+-----
1 | gonzalez | colin | colin.gonzalez@gmail.com | 2014-03-15 14:21:30 | 22
2 | bequet | camille | bequet.camille@gmail.com | 2014-12-10 15:22:31 | 22
Shows table
id_show | name | kind | presentation_date | start_time | end_time | id_season | capacity_cat1 | capacity_cat2 | capacity_cat3 | price_cat1 | price_cat2 | price_cat3
---------+------------------------+--------+-------------------+------------+----------+-----------+---------------+---------------+---------------+------------+------------+------------
1 | madama butterfly | opera | 2015-09-05 | 19:30:00 | 21:30:00 | 2 | 315 | 630 | 945 | 195 | 150 | 100
2 | don giovanni | opera | 2015-09-12 | 19:30:00 | 21:45:00 | 2 | 315 | 630 | 945 | 195 | 150 | 100
So far I've started by writing a query to get the id of the spectator and the date of the show he's attending; the query looks like this.
SELECT Reservations.id_spectator, Shows.presentation_date
FROM Reservations
LEFT JOIN Shows ON Reservations.id_show = Shows.id_show;
Could someone help me understand the problem better and hint me towards a solution? Thanks in advance.
So the result I'm expecting should be something like this
id_spectator | other_id_spectators
-------------+--------------------
           1 | 2,3
Meaning that every time spectator with id 1 went to a show, spectators 2 and 3 did too.
Note based on comments: Wanted to make clear that this answer may be of limited use as it was answered in the context of SQL-Server (tag was present at the time)
There is probably a better way to do it, but you could do it with the STUFF function. The only drawback here is that, since your ids are ints, placing a comma between values involves a workaround (they need to be cast to strings). Below is the method I can think of using that workaround.
SELECT [id_spectator], [id_show]
, STUFF((SELECT ',' + CAST(A.[id_spectator] as NVARCHAR(10))
FROM reservations A
Where A.[id_show]=B.[id_show] AND a.[id_spectator] != b.[id_spectator] FOR XML PATH('')),1,1,'') As [other_id_spectators]
From reservations B
Group By [id_spectator], [id_show]
This will show you all other spectators that attended the same shows.
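If you happen to be on SQL Server 2017 or later, STRING_AGG spares you the FOR XML workaround; a sketch of the same idea under that assumption:
SELECT B.[id_spectator], B.[id_show]
     , (SELECT STRING_AGG(CAST(A.[id_spectator] AS NVARCHAR(10)), ',')
        FROM reservations A
        WHERE A.[id_show] = B.[id_show] AND A.[id_spectator] != B.[id_spectator]) AS [other_id_spectators]
FROM reservations B
GROUP BY [id_spectator], [id_show]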
Meaning that every time spectator with id 1 went to a show, spectators 2 and 3 did too.
In other words, you want a list of ...
all spectators that have seen all the shows that a given spectator has seen (and possibly more than the given one)
This is a special case of relational division. We have assembled an arsenal of basic techniques here:
How to filter SQL results in a has-many-through relation
It is special because the list of shows each spectator has to have attended is dynamically determined by the given prime spectator.
Assuming that (id_spectator, id_show) is unique in reservations, which has not been clarified.
A UNIQUE constraint on those two columns (in that order) also provides the most important index.
For best performance in query 2 and 3 below also create an index with leading id_show.
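In DDL terms that might look like this (the constraint and index names are just placeholders):
ALTER TABLE reservations
  ADD CONSTRAINT reservations_spectator_show_uni UNIQUE (id_spectator, id_show);

CREATE INDEX reservations_show_idx ON reservations (id_show, id_spectator);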
1. Brute force
The primitive approach would be to form a sorted array of shows the given user has seen and compare it to the same array for all other spectators:
SELECT 1 AS id_spectator, array_agg(sub.id_spectator) AS id_other_spectators
FROM (
SELECT id_spectator
FROM reservations r
WHERE id_spectator <> 1
GROUP BY 1
HAVING array_agg(id_show ORDER BY id_show)
#> (SELECT array_agg(id_show ORDER BY id_show)
FROM reservations
WHERE id_spectator = 1)
) sub;
But this is potentially very expensive for big tables. The whole table has to be processed, and in a rather expensive way, too.
2. Smarter
Use a CTE to determine relevant shows, then only consider those:
WITH shows AS ( -- all shows of id 1; 1 row per show
SELECT id_spectator, id_show
FROM reservations
WHERE id_spectator = 1 -- your prime spectator here
)
SELECT sub.id_spectator, array_agg(sub.other) AS id_other_spectators
FROM (
SELECT s.id_spectator, r.id_spectator AS other
FROM shows s
JOIN reservations r USING (id_show)
WHERE r.id_spectator <> s.id_spectator
GROUP BY 1,2
HAVING count(*) = (SELECT count(*) FROM shows)
) sub
GROUP BY 1;
#> is the "contains" operator for arrays - so we get all spectators that have seen at least the same shows.
Faster than 1. because only relevant shows are considered.
3. Real smart
To also exclude, early in the query, spectators that are not going to qualify, use a recursive CTE:
WITH RECURSIVE shows AS ( -- produces exactly 1 row
SELECT id_spectator, array_agg(id_show) AS shows, count(*) AS ct
FROM reservations
WHERE id_spectator = 1 -- your prime spectator here
GROUP BY 1
)
, cte AS (
SELECT r.id_spectator, 1 AS idx
FROM shows s
JOIN reservations r ON r.id_show = s.shows[1]
WHERE r.id_spectator <> s.id_spectator
UNION ALL
SELECT r.id_spectator, idx + 1
FROM cte c
JOIN reservations r USING (id_spectator)
JOIN shows s ON s.shows[c.idx + 1] = r.id_show
)
SELECT s.id_spectator, array_agg(c.id_spectator) AS id_other_spectators
FROM shows s
JOIN cte c ON c.idx = s.ct -- has an entry for every show
GROUP BY 1;
Note that the first CTE is non-recursive. Only the second part is recursive (iterative really).
This should be fastest for small selections from big tables. Rows that don't qualify are excluded early. The two indexes I mentioned are essential.
SQL Fiddle demonstrating all three.
It sounds like you have one half of the total question--determining which id_shows a particular id_spectator attended.
What you want to ask yourself is how you can determine which id_spectators attended an id_show, given an id_show. Once you have that, combine the two answers to get the full result.
So the final answer I got looks like this:
SELECT id_spectator, id_show,(
SELECT string_agg(to_char(A.id_spectator, '999'), ',')
FROM Reservations A
WHERE A.id_show=B.id_show
) AS other_id_spectators
FROM Reservations B
GROUP By id_spectator, id_show
ORDER BY id_spectator ASC;
Which prints something like this:
id_spectator | id_show | other_id_spectators
-------------+---------+---------------------
1 | 1 | 1, 2, 9
1 | 14 | 1, 2
Which suits my needs; however, if you have any improvements to offer, please share :) Thanks again everybody!
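One small improvement, if you don't want each spectator's own id to appear in other_id_spectators (as it does above), would be an extra condition in the correlated subquery; a sketch:
SELECT id_spectator, id_show, (
    SELECT string_agg(to_char(A.id_spectator, '999'), ',')
    FROM Reservations A
    WHERE A.id_show = B.id_show
      AND A.id_spectator <> B.id_spectator
) AS other_id_spectators
FROM Reservations B
GROUP BY id_spectator, id_show
ORDER BY id_spectator ASC;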

Confusing SELECT statement

First I will show you example tables that my issue pertains to, then I will ask the question.
[my_fruits]
fruit_name | fruit_id | fruit_owner | fruit_timestamp
----------------------------------------------------------------
Banana | 3 | Timmy | 3/4/11
Banana | 3 | Timmy | 4/1/11
Banana | 8 | Timmy | 5/2/11
Apple | 4 | Timmy | 2/1/11
Apple | 4 | Roger | 3/4/11
Now I want to run a query that only selects fruit_name, fruit_id, and fruit_owner values. I only want to get one row per fruit, and the way I want it to be decided is by the latest timestamp. For example the perfect query on this table would return:
[my_fruits]
fruit_name | fruit_id | fruit_owner |
----------------------------------------------
Banana | 8 | Timmy |
Apple | 4 | Roger |
I tried the query:
select max(my_fruits.fruit_name) keep
(dense_rank last order by my_fruits.fruit_timestamp) fruit_name,
my_fruits.fruit_id, my_fruits.fruit_owner
from my_fruits
group by my_fruits.fruit_id, my_fruits.fruit_owner
Now the issue with that is that it basically returns distinct fruit names, fruit ids, and fruit owners.
For Oracle 9i+, use:
SELECT x.fruit_name,
x.fruit_id,
x.fruit_owner
FROM (SELECT mf.fruit_name,
mf.fruit_id,
mf.fruit_owner,
ROW_NUMBER() OVER (PARTITION BY mf.fruit_name
ORDER BY mf.fruit_timestamp DESC) AS rank
FROM MY_FRUIT mf) x
WHERE x.rank = 1
Most databases will support using a self join on a derived table/inline view:
SELECT x.fruit_name,
x.fruit_id,
x.fruit_owner
FROM MY_FRUIT x
JOIN (SELECT t.fruit_name,
MAX(t.fruit_timestamp) AS max_ts
FROM MY_FRUIT t
GROUP BY t.fruit_name) y ON y.fruit_name = x.fruit_name
AND y.max_ts = x.fruit_timestamp
However, this will return duplicates if there are 2+ fruit_name records with the same timestamp value.
If you want one row per fruit name, you have to group by fruit_name.
select fruit_name,
max(my_fruits.fruit_id) keep
(dense_rank last order by my_fruits.fruit_timestamp) fruit_id,
max(my_fruits.fruit_owner) keep
(dense_rank last order by my_fruits.fruit_timestamp) fruit_owner
from my_fruits
group by my_fruits.fruit_name
How you want to deal with tie-breaks is a separate issue.
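For instance, to make the tie-break deterministic, you could append a second column to the KEEP ordering so both picks come from the same row even when timestamps collide; a sketch:
select fruit_name,
       max(my_fruits.fruit_id) keep
       (dense_rank last order by my_fruits.fruit_timestamp, my_fruits.fruit_id) fruit_id,
       max(my_fruits.fruit_owner) keep
       (dense_rank last order by my_fruits.fruit_timestamp, my_fruits.fruit_id) fruit_owner
from my_fruits
group by my_fruits.fruit_name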
Try a subquery:
select a.fruit_name, a.fruit_id, a.fruit_owner
from my_fruits a
where a.fruit_timestamp =
(select max(b.fruit_timestamp)
from my_fruits b
where b.fruit_name = a.fruit_name)
I would do it by finding out the list of (fruit_name, fruit_timestamp) pairs which are of interest to you, and then joining that "table" with the actual fruit table and retrieving the other values.
SELECT fruit_and_max_t.fruit_name,
my_fruits.fruit_id,
my_fruits.fruit_owner
FROM my_fruits,
( SELECT fruit_name, MAX(fruit_timestamp) AS max_timestamp
FROM my_fruits
GROUP BY fruit_name) AS fruit_and_max_t
WHERE fruit_and_max_t.max_timestamp = my_fruits.fruit_timestamp
AND fruit_and_max_t.fruit_name = my_fruits.fruit_name
This assumes that there are not multiple entries in the table with the same value of (fruit_name, fruit_timestamp), i.e. that tuple (pair) act like a unique identifier.