Postgres foreign data wrapper issue with 'IS NULL' where clause

I'm trying to build a report with a query that accesses a different Postgres DB via FDW, and I'm wondering why it behaves the way it does.
The first query, without a WHERE clause, is fine:
SELECT s.student_id, p.surname
FROM rep_student s inner JOIN rep_person p ON p.id = s.person_id
But adding a WHERE clause makes the query hundreds of times slower (40 s vs. 0.1 s):
SELECT s.student_id, p.surname
FROM rep_student s inner JOIN rep_person p ON p.id = s.person_id
WHERE s.learning_end_date IS NULL
Result for EXPLAIN VERBOSE:
Nested Loop (cost=200.00..226.39 rows=1 width=734)
Output: s.student_id, p.surname
Join Filter: ((s.person_id)::text = (p.id)::text)
-> Foreign Scan on public.rep_student s (cost=100.00..111.80 rows=1 width=436)
Output: s.student_id, s.version, s.person_id, s.curriculum_flow_id, s.learning_start_date, s.learning_end_date, s.learning_end_reason, s.last_update_timestamp, s.aud_created_ts, s.aud_created_by, s.aud_last_updated_ts, s.aud_last_updated_by
Remote SQL: SELECT student_id, person_id FROM public.rep_student WHERE ((learning_end_date IS NULL))
-> Foreign Scan on public.rep_person p (cost=100.00..113.24 rows=108 width=734)
Output: p.id, p.version, p.surname, p.name, p.middle_name, p.birthdate, p.info, p.photo, p.last_update_timestamp, p.is_archived, p.gender, p.aud_created_ts, p.aud_created_by, p.aud_last_updated_ts, p.aud_last_updated_by, p.full_name
Remote SQL: SELECT id, surname FROM public.rep_person
Result for EXPLAIN ANALYZE:
Nested Loop (cost=200.00..226.39 rows=1 width=734) (actual time=27.138..38996.303 rows=939 loops=1)
Join Filter: ((s.person_id)::text = (p.id)::text)
Rows Removed by Join Filter: 15194898
-> Foreign Scan on rep_student s (cost=100.00..111.80 rows=1 width=436) (actual time=0.685..4.259 rows=939 loops=1)
-> Foreign Scan on rep_person p (cost=100.00..113.24 rows=108 width=734) (actual time=1.380..39.094 rows=16183 loops=939)
Planning time: 0.251 ms
Execution time: 38997.914 ms
The row counts are relatively small: the student table has ~1,000 rows and the person table ~15,000. Almost all rows in the student table have NULL in the learning_end_date column.
It seems that Postgres has trouble filtering on NULL through the FDW, because this query is fast again:
SELECT s.student_id, p.surname
FROM rep_student s inner JOIN rep_person p ON p.id = s.person_id
WHERE s.learning_start_date < current_date
Result for EXPLAIN VERBOSE:
Hash Join (cost=214.59..231.83 rows=36 width=734)
Output: s.student_id, p.surname
Hash Cond: ((s.person_id)::text = (p.id)::text)
-> Foreign Scan on public.rep_student s (cost=100.00..116.65 rows=59 width=436)
Output: s.student_id, s.version, s.person_id, s.curriculum_flow_id, s.learning_start_date, s.learning_end_date, s.learning_end_reason, s.last_update_timestamp, s.aud_created_ts, s.aud_created_by, s.aud_last_updated_ts, s.aud_last_updated_by
Filter: (s.learning_start_date < ('now'::cstring)::date)
Remote SQL: SELECT student_id, person_id, learning_start_date FROM public.rep_student
-> Hash (cost=113.24..113.24 rows=108 width=734)
Output: p.surname, p.id
-> Foreign Scan on public.rep_person p (cost=100.00..113.24 rows=108 width=734)
Output: p.surname, p.id
Remote SQL: SELECT id, surname FROM public.rep_person
Result for EXPLAIN ANALYZE:
Hash Join (cost=214.59..231.83 rows=36 width=734) (actual time=41.614..46.347 rows=940 loops=1)
Hash Cond: ((s.person_id)::text = (p.id)::text)
-> Foreign Scan on rep_student s (cost=100.00..116.65 rows=59 width=436) (actual time=0.718..3.829 rows=940 loops=1)
Filter: (learning_start_date < ('now'::cstring)::date)
-> Hash (cost=113.24..113.24 rows=108 width=734) (actual time=40.812..40.812 rows=16183 loops=1)
Buckets: 16384 (originally 1024) Batches: 2 (originally 1) Memory Usage: 921kB
-> Foreign Scan on rep_person p (cost=100.00..113.24 rows=108 width=734) (actual time=2.252..35.079 rows=16183 loops=1)
Planning time: 0.208 ms
Execution time: 47.176 ms
I tried adding an index on learning_end_date, but it had no effect.
What do I need to change to make the query execute faster with the IS NULL where clause? Any ideas are appreciated!

Your problem is that you do not have good table statistics on those foreign tables, so the row count estimates of the PostgreSQL optimizer are pretty arbitrary.
That causes the optimizer to choose a nested loop join in the case you report as slow, which is an inappropriate plan.
It is just by coincidence that this happens for a certain IS NULL condition.
Collect statistics on the foreign tables with
ANALYZE rep_student;
ANALYZE rep_person;
Then the performance will be much better.
Note that while autovacuum automatically gathers statistics for local tables, it does not do that for remote tables because it does not know how many rows have changed, so you should regularly ANALYZE foreign tables whose data change.
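If the pg_cron extension happens to be available, one way to keep those statistics current is to schedule the ANALYZE statements; this is only a sketch, and the job name and schedule are made up:
-- hypothetical nightly job via pg_cron; any external scheduler that runs
-- the same ANALYZE statements works just as well
SELECT cron.schedule(
    'analyze-foreign-tables',                  -- arbitrary job name
    '0 3 * * *',                               -- every night at 03:00
    'ANALYZE rep_student; ANALYZE rep_person;'
);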

Related

Why does this GROUP BY cause a full table scan?

I have a table of projects and a table of tasks, with each task referencing a single project. I want to get a list of projects sorted by their due dates along with the number of tasks in each project. I can pretty easily write this query two ways. First, using JOIN and GROUP BY:
SELECT p.name, p.due_date, COUNT(t.id) as num_tasks
FROM projects p
LEFT OUTER JOIN tasks t ON t.project_id = p.id
GROUP BY p.id
ORDER BY p.due_date ASC LIMIT 20;
Second, using a subquery:
SELECT p.name, p.due_date, (SELECT
COUNT(*) FROM tasks t WHERE t.project_id = p.id) as num_tasks
FROM projects p
ORDER BY p.due_date ASC LIMIT 20;
I'm using PostgreSQL 10, and I've got indices on projects.id, projects.due_date and tasks.project_id. Why does the first query using the GROUP BY clause do a full table scan while the second query makes proper use of the indices? It seems like these should compile down to the same thing.
Note that if I remove the GROUP BY and the COUNT(t.id) from the first query it will run quickly, just with lots of duplicate rows. So the problem is with the GROUP BY clause, not the JOIN. This seems like it's about the simplest GROUP BY one could do, so I'd like to understand if/how to make it more efficient before moving on to more complicated queries.
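For reference, that quick-but-duplicated variant is roughly the first query with the aggregation stripped out:
-- runs fast, but returns one row per matching task instead of one per project
SELECT p.name, p.due_date
FROM projects p
LEFT OUTER JOIN tasks t ON t.project_id = p.id
ORDER BY p.due_date ASC LIMIT 20;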
Edit — here's the result of EXPLAIN ANALYZE. First query:
Limit (cost=41919.58..41919.63 rows=20 width=53) (actual time=1046.762..1046.771 rows=20 loops=1)
-> Sort (cost=41919.58..42169.58 rows=100000 width=53) (actual time=1046.760..1046.765 rows=20 loops=1)
Sort Key: p.due_date
Sort Method: top-N heapsort Memory: 29kB
-> GroupAggregate (cost=0.71..39258.62 rows=100000 width=53) (actual time=0.109..1002.890 rows=100000 loops=1)
Group Key: p.id
-> Merge Left Join (cost=0.71..35758.62 rows=500000 width=49) (actual time=0.072..807.603 rows=500702 loops=1)
Merge Cond: (p.id = t.project_id)
-> Index Scan using projects_pkey on projects p (cost=0.29..3542.29 rows=100000 width=45) (actual time=0.025..38.363 rows=100000 loops=1)
-> Index Scan using project_id_idx on tasks t (cost=0.42..25716.33 rows=500000 width=8) (actual time=0.038..531.097 rows=500000 loops=1)
Planning Time: 0.573 ms
Execution Time: 1046.934 ms
Second query:
Limit (cost=0.29..92.61 rows=20 width=49) (actual time=0.079..0.443 rows=20 loops=1)
-> Index Scan using project_date_idx on projects p (cost=0.29..461594.09 rows=100000 width=49) (actual time=0.076..0.432 rows=20 loops=1)
SubPlan 1
-> Aggregate (cost=4.54..4.55 rows=1 width=8) (actual time=0.015..0.016 rows=1 loops=20)
-> Index Only Scan using project_id_idx on tasks t (cost=0.42..4.53 rows=6 width=0) (actual time=0.009..0.011 rows=5 loops=20)
Index Cond: (project_id = p.id)
Heap Fetches: 0
Planning Time: 0.284 ms
Execution Time: 0.551 ms
And if anyone wants to try to exactly reproduce this, here's my setup:
CREATE TABLE projects (
id serial NOT NULL PRIMARY KEY,
name varchar(100) NOT NULL,
due_date timestamp NOT NULL
);
CREATE TABLE tasks (
id serial NOT NULL PRIMARY KEY,
project_id integer NOT NULL,
data real NOT NULL
);
INSERT INTO projects (name, due_date) SELECT
md5(random()::text),
timestamp '2020-01-01 00:00:00' +
random() * (timestamp '2030-01-01 20:00:00' - timestamp '2020-01-01 10:00:00')
FROM generate_series(1, 100000);
INSERT INTO tasks (project_id, data)
SELECT CAST(1 + random()*99999 AS integer), random()
FROM generate_series(1, 500000);
CREATE INDEX project_date_idx ON projects ("due_date");
CREATE INDEX project_id_idx ON tasks ("project_id");
ALTER TABLE tasks ADD CONSTRAINT task_foreignkey FOREIGN KEY ("project_id") REFERENCES "projects" ("id") DEFERRABLE INITIALLY DEFERRED;
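After loading the data it is probably worth refreshing statistics, so the planner sees representative row counts:
ANALYZE projects;
ANALYZE tasks;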

SQL: ON vs. WHERE in sub JOIN

What is the difference between using ON and WHERE in a sub join when using an outer reference?
Consider these two SQL statements as an example (looking for 10 persons with non-closed tasks, using person_task as the many-to-many link table):
select p.name
from person p
where exists (
select 1
from person_task pt
join task t on pt.task_id = t.id
and t.state <> 'closed'
and pt.person_id = p.id -- ON
)
limit 10
select p.name
from person p
where exists (
select 1
from person_task pt
join task t on pt.task_id = t.id and t.state <> 'closed'
where pt.person_id = p.id -- WHERE
)
limit 10
They produce the same result but the statement with ON is considerably faster.
Here are the corresponding EXPLAIN (ANALYZE) outputs:
-- USING ON
Limit (cost=0.00..270.98 rows=10 width=8) (actual time=10.412..60.876 rows=10 loops=1)
-> Seq Scan on person p (cost=0.00..28947484.16 rows=1068266 width=8) (actual time=10.411..60.869 rows=10 loops=1)
Filter: (SubPlan 1)
Rows Removed by Filter: 68
SubPlan 1
-> Nested Loop (cost=1.00..20257.91 rows=1632 width=0) (actual time=0.778..0.778 rows=0 loops=78)
-> Index Scan using person_taskx1 on person_task pt (cost=0.56..6551.27 rows=1632 width=8) (actual time=0.633..0.633 rows=0 loops=78)
Index Cond: (id = p.id)
-> Index Scan using taskxpk on task t (cost=0.44..8.40 rows=1 width=8) (actual time=1.121..1.121 rows=1 loops=10)
Index Cond: (id = pt.task_id)
Filter: (state <> 'open')
Planning Time: 0.466 ms
Execution Time: 60.920 ms
-- USING WHERE
Limit (cost=2818814.57..2841563.37 rows=10 width=8) (actual time=29.075..6884.259 rows=10 loops=1)
-> Merge Semi Join (cost=2818814.57..59308642.64 rows=24832 width=8) (actual time=29.075..6884.251 rows=10 loops=1)
Merge Cond: (p.id = pt.person_id)
-> Index Scan using personxpk on person p (cost=0.43..1440340.27 rows=2136533 width=16) (actual time=0.003..0.168 rows=18 loops=1)
-> Gather Merge (cost=1001.03..57357094.42 rows=40517669 width=8) (actual time=9.441..6881.180 rows=23747 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Nested Loop (cost=1.00..52679350.05 rows=16882362 width=8) (actual time=1.862..4207.577 rows=7938 loops=3)
-> Parallel Index Scan using person_taskx1 on person_task pt (cost=0.56..25848782.35 rows=16882362 width=16) (actual time=1.344..1807.664 rows=7938 loops=3)
-> Index Scan using taskxpk on task t (cost=0.44..1.59 rows=1 width=8) (actual time=0.301..0.301 rows=1 loops=23814)
Index Cond: (id = pt.task_id)
Filter: (state <> 'open')
Planning Time: 0.430 ms
Execution Time: 6884.349 ms
Should the ON clause therefore always be used for filtering values in a sub JOIN? Or what is going on?
I have used Postgres for this example.
The condition and pt.person_id = p.id doesn't refer to any column of the joined table t. In an inner join this doesn't make much sense semantically, and we can move the condition from ON to WHERE to make the query more readable.
You are therefore right that the two queries are equivalent and should result in the same execution plan. Since that is not the case, PostgreSQL seems to have a problem in its optimizer here.
In an outer join such a condition in ON can make sense and would be different from WHERE. I assume this is why the optimizer finds a different plan for ON in general: once it detects the condition in ON, it takes another route, oblivious of the join type (that is my assumption). I am surprised, though, that this leads to a better plan; I would rather have expected a worse one.
This may indicate that the tables' statistics are not up to date. Please analyze the tables to make sure. Or it may be a sore spot in the optimizer code that the PostgreSQL developers might want to work on.
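A minimal sketch of what that would look like for the tables involved here:
ANALYZE person;
ANALYZE person_task;
ANALYZE task;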

Query planner differences in similar queries

I have two tables, one for profiles and one for the profile's employment status. The two tables have a one-to-one relationship. A profile might or might not have an employment status. The table schemas are as below (irrelevant columns removed for clarity):
create type employment_status as enum ('claimed', 'approved', 'denied');
create table if not exists profiles
(
id bigserial not null
constraint profiles_pkey
primary key
);
create table if not exists employments
(
id bigserial not null
constraint employments_pkey
primary key,
status employment_status not null,
profile_id bigint not null
constraint fk_rails_d95865cd58
references profiles
on delete cascade
);
create unique index if not exists index_employments_on_profile_id
on employments (profile_id);
With these tables, I was asked to list all unemployed profiles. An unemployed profile is defined as a profile not having an employment record or having an employment with a status other than "approved".
My first tentative was the following query:
SELECT * FROM "profiles"
LEFT JOIN employments ON employments.profile_id = profiles.id
WHERE employments.status != 'approved'
The assumption here was that all profiles will be listed with their respective employments, then I could filter them with the where condition. Any profile without an employment record would have an employment status of null and therefore be filtered by the condition. However, this query did not return profiles without an employment.
After some research I found this post, explaining why it doesn't work and transformed my query:
SELECT *
FROM profiles
LEFT JOIN employments ON profiles.id = employments.profile_id and employments.status != 'approved';
Which actually did work. But, my ORM produced a slightly different query, which didn't work.
SELECT profiles.* FROM "profiles"
LEFT JOIN employments ON employments.profile_id = profiles.id AND employments.status != 'approved'
The only difference is the SELECT clause. To understand why this slight difference produced such different behavior, I ran EXPLAIN ANALYZE on all three queries:
EXPLAIN ANALYZE SELECT * FROM "profiles"
LEFT JOIN employments ON employments.profile_id = profiles.id
WHERE employments.status != 'approved'
Hash Join (cost=14.28..37.13 rows=846 width=452) (actual time=0.025..0.027 rows=2 loops=1)
Hash Cond: (e.profile_id = profiles.id)
-> Seq Scan on employments e (cost=0.00..20.62 rows=846 width=68) (actual time=0.008..0.009 rows=2 loops=1)
Filter: (status <> 'approved'::employment_status)
Rows Removed by Filter: 1
-> Hash (cost=11.90..11.90 rows=190 width=384) (actual time=0.007..0.007 rows=8 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 12kB
-> Seq Scan on profiles (cost=0.00..11.90 rows=190 width=384) (actual time=0.003..0.004 rows=8 loops=1)
Planning Time: 0.111 ms
Execution Time: 0.053 ms
EXPLAIN ANALYZE SELECT *
FROM profiles
LEFT JOIN employments ON profiles.id = employments.profile_id and employments.status != 'approved';
Hash Right Join (cost=14.28..37.13 rows=846 width=452) (actual time=0.036..0.042 rows=8 loops=1)
Hash Cond: (employments.profile_id = profiles.id)
-> Seq Scan on employments (cost=0.00..20.62 rows=846 width=68) (actual time=0.005..0.005 rows=2 loops=1)
Filter: (status <> 'approved'::employment_status)
Rows Removed by Filter: 1
-> Hash (cost=11.90..11.90 rows=190 width=384) (actual time=0.015..0.015 rows=8 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 12kB
-> Seq Scan on profiles (cost=0.00..11.90 rows=190 width=384) (actual time=0.010..0.011 rows=8 loops=1)
Planning Time: 0.106 ms
Execution Time: 0.108 ms
EXPLAIN ANALYZE SELECT profiles.* FROM "profiles"
LEFT JOIN employments ON employments.profile_id = profiles.id AND employments.status != 'approved'
Seq Scan on profiles (cost=0.00..11.90 rows=190 width=384) (actual time=0.006..0.007 rows=8 loops=1)
Planning Time: 0.063 ms
Execution Time: 0.016 ms
The first and second query plans are almost the same except for the hash join in one and the right hash join in the other, while the last query doesn't even perform the join or apply the condition.
I came up with a fourth query that did work:
EXPLAIN ANALYZE SELECT profiles.* FROM profiles
LEFT JOIN employments ON employments.profile_id = profiles.id
WHERE (employments.id IS NULL OR employments.status != 'approved')
Hash Right Join (cost=14.28..35.02 rows=846 width=384) (actual time=0.021..0.026 rows=7 loops=1)
Hash Cond: (employments.profile_id = profiles.id)
Filter: ((employments.id IS NULL) OR (employments.status <> 'approved'::employment_status))
Rows Removed by Filter: 1
-> Seq Scan on employments (cost=0.00..18.50 rows=850 width=20) (actual time=0.002..0.003 rows=3 loops=1)
-> Hash (cost=11.90..11.90 rows=190 width=384) (actual time=0.011..0.011 rows=8 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 12kB
-> Seq Scan on profiles (cost=0.00..11.90 rows=190 width=384) (actual time=0.007..0.008 rows=8 loops=1)
Planning Time: 0.104 ms
Execution Time: 0.049 ms
My questions about this are:
Why are the query plans for the second and third queries different even though they have the same structure?
Why are the query plans for the first and fourth queries different even though they have the same structure?
Why is Postgres totally ignoring my join and where condition for the third query?
EDIT:
With the following sample data, the expected query should return 2 and 3.
insert into profiles values (1);
insert into profiles values (2);
insert into profiles values (3);
insert into employments (profile_id, status) values (1, 'approved');
insert into employments (profile_id, status) values (2, 'denied');
There must be a unique or primary key constraint on employments.profile_id (or it is a view with an appropriate DISTINCT clause) so that the optimizer knows that there can be at most one row in employments that is related to a given row in profiles.
If that is the case and you don't use any of employments' columns in the SELECT list, the optimizer deduces that the join is redundant and need not be calculated, which makes for a simpler and faster execution plan.
See the comment for join_is_removable in src/backend/optimizer/plan/analyzejoins.c:
/*
* join_is_removable
* Check whether we need not perform this special join at all, because
* it will just duplicate its left input.
*
* This is true for a left join for which the join condition cannot match
* more than one inner-side row. (There are other possibly interesting
* cases, but we don't have the infrastructure to prove them.) We also
* have to check that the inner side doesn't generate any variables needed
* above the join.
*/
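A quick way to see this in action (sketched against the schema from the question): run the third query's EXPLAIN with and without the unique index. Without the uniqueness guarantee, the planner can no longer prove that the join is removable.
BEGIN;
-- without the unique index the left join reappears in the plan
DROP INDEX index_employments_on_profile_id;
EXPLAIN SELECT profiles.* FROM profiles
LEFT JOIN employments ON employments.profile_id = profiles.id
    AND employments.status != 'approved';
ROLLBACK;  -- DDL is transactional, so the unique index is restored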

PostgreSQL 9.5.15 - Left join with huge table taking time

I need your suggestions on the query performance issue below.
CREATE TEMPORARY TABLE tableA( id bigint NOT NULL, second_id bigint );
CREATE INDEX idx_tableA_id ON tableA USING btree (id);
CREATE INDEX idx_tableA_second_id ON tableA USING btree (second_id);
Here tableA has about 100K records.
CREATE TABLE tableB( id bigint NOT NULL);
CREATE INDEX idx_tableB_id ON tableB USING btree (id);
But tableB holds about 145 GB of data.
If I execute a query with a single LEFT JOIN, like below,
select a.id from tableA A left join tableB B on B.id = A.id
or
select a.id from tableA A left join tableB B on B.id = A.second_id
the data comes back quickly. But when I combine both LEFT JOINs, the query takes 30 minutes to return the records:
SELECT a.id
FROM tableA A LEFT JOIN tableB B on B.id = A.id
LEFT JOIN tableB B1 on B1.id = A.second_id;
I have indexes on the respective columns. Any other performance suggestions to reduce the execution time?
VERSION: "PostgreSQL 9.5.15 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit"
Execution plan
Hash Right Join (cost=18744968.20..108460708.81 rows=298384424 width=8) (actual time=469896.453..1290666.446 rows=26520 loops=1)
Hash Cond: (tableB.id = tableA.id)
-> Seq Scan on tableB ubp1 (cost=0.00..63944581.96 rows=264200740 width=8) (actual time=127.504..1182339.167 rows=268297289 loops=1)
Filter: (company_type_id = 2)
Rows Removed by Filter: 1409407086
-> Hash (cost=18722912.16..18722912.16 rows=1764483 width=8) (actual time=16564.303..16564.303 rows=26520 loops=1)
Buckets: 2097152 Batches: 1 Memory Usage: 17420kB
-> Merge Join (cost=6035.58..18722912.16 rows=1764483 width=8) (actual time=37.964..16503.057 rows=26520 loops=1)
-> Nested Loop Left Join (cost=0.86..18686031.22 rows=1752390 width=8) (actual time=0.019..16412.634 rows=26520 loops=1)
-> Index Scan using idx_tableA_id on tableA A (cost=0.29..94059.62 rows=26520 width=16) (actual time=0.013..69.016 rows=26520 loops=1)
-> Index Scan using idx_tableB_id on tableB B (cost=0.58..699.36 rows=169 width=8) (actual time=0.458..0.615 rows=0 loops=26520)
Index Cond: (tableA.id = tableB.second_id)
Filter: (company_type_id = 2)
Rows Removed by Filter: 2
-> Sort (cost=6034.21..6100.97 rows=26703 width=8) (actual time=37.941..54.444 rows=26520 loops=1)
Rows Removed by Filter: 105741
Planning time: 0.878 ms
Execution time: 1290674.154 ms
Thanks and Regards,
Thiru.M
I suspect one of the following:
B1.second_id does not have an index,
B1.second_id is not unique (or a primary key), or
B1.second_id is part of a multi-column index where it is not the first column.
If you have an index on that column, you could also try moving the indexes to a different volume so that it's not in contention with the main data retrieval.
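If you go that route, moving an index is a one-liner per index once a tablespace exists on the separate volume (the path and tablespace name below are just examples):
CREATE TABLESPACE fast_disk LOCATION '/mnt/fast_disk/pgdata';
ALTER INDEX idx_tableB_id SET TABLESPACE fast_disk;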
Run EXPLAIN on your query to verify that indexes are being used instead of falling back to a sequential scan on a 145GB volume.
You also didn't mention how much RAM your database has available. If your settings combined with your query is making the system swap, you could see precipitous drops in performance as well.
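You can check the relevant memory settings directly, and EXPLAIN (ANALYZE, BUFFERS) will show whether the query is hitting disk heavily; both are quick diagnostics rather than fixes:
SHOW shared_buffers;
SHOW work_mem;
EXPLAIN (ANALYZE, BUFFERS)
SELECT a.id
FROM tableA A LEFT JOIN tableB B ON B.id = A.id
LEFT JOIN tableB B1 ON B1.id = A.second_id;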

Improving Subquery performance in Postgres

I have these two tables in my database
Student table:
| Column     | Type     |
|------------|----------|
| student_id | integer  |
| satquan    | smallint |
| actcomp    | smallint |
| entryyear  | smallint |
Student semester table:
| Column     | Type     |
|------------|----------|
| student_id | integer  |
| semester   | integer  |
| enrolled   | boolean  |
| major      | text     |
| college    | text     |
Where student_id is a unique key in the student table, and a foreign key in the student semester table. The semester integer is just a 1 for the first semester, 2 for the second, and so on.
I'm doing queries where I want to get the students by their entryyear (and sometimes by their SAT and/or ACT scores), then get all of those students' associated data from the student semester table.
Currently, my queries look something like this:
SELECT * FROM student_semester
WHERE student_id IN(
SELECT student_id FROM student_semester
WHERE student_id IN(
SELECT student_id FROM student WHERE entryyear = 2006
) AND college = 'AS' AND ...
)
ORDER BY student_id, semester;
But this results in relatively long-running queries (400 ms) when I am selecting ~1k students. According to the execution plan, most of the time is spent doing a hash join. To ameliorate this, I have added the satquan, actcomp, and entryyear columns to the student_semester table. This reduces the time to run the query by ~90%, but results in a lot of redundant data. Is there a better way to do this?
These are the indexes that I currently have (Along with the implicit indexes on student_id):
CREATE INDEX act_sat_entryyear ON student USING btree (entryyear, actcomp, sattotal)
CREATE INDEX student_id_major_college ON student_semester USING btree (student_id, major, college)
Query Plan
QUERY PLAN
Hash Join (cost=17311.74..35895.38 rows=81896 width=65) (actual time=121.097..326.934 rows=25680 loops=1)
Hash Cond: (public.student_semester.student_id = public.student_semester.student_id)
-> Seq Scan on student_semester (cost=0.00..14307.20 rows=698820 width=65) (actual time=0.015..154.582 rows=698820 loops=1)
-> Hash (cost=17284.89..17284.89 rows=2148 width=8) (actual time=121.062..121.062 rows=1284 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 51kB
-> HashAggregate (cost=17263.41..17284.89 rows=2148 width=8) (actual time=120.708..120.871 rows=1284 loops=1)
-> Hash Semi Join (cost=1026.68..17254.10 rows=3724 width=8) (actual time=4.828..119.619 rows=6184 loops=1)
Hash Cond: (public.student_semester.student_id = student.student_id)
-> Seq Scan on student_semester (cost=0.00..16054.25 rows=42908 width=4) (actual time=0.013..109.873 rows=42331 loops=1)
Filter: ((college)::text = 'AS'::text)
-> Hash (cost=988.73..988.73 rows=3036 width=4) (actual time=4.801..4.801 rows=3026 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 107kB
-> Bitmap Heap Scan on student (cost=71.78..988.73 rows=3036 width=4) (actual time=0.406..3.223 rows=3026 loops=1)
Recheck Cond: (entryyear = 2006)
-> Bitmap Index Scan on student_act_sat_entryyear_index (cost=0.00..71.03 rows=3036 width=0) (actual time=0.377..0.377 rows=3026 loops=1)
Index Cond: (entryyear = 2006)
Total runtime: 327.708 ms
I was mistaken about there not being a Seq Scan in the query. I think the Seq Scan is being done because of the number of rows that match the college condition; when I change it to one that has fewer students, an index is used. Source: https://stackoverflow.com/a/5203827/880928
Query with entryyear column included student semester table
SELECT * FROM student_semester
WHERE student_id IN(
SELECT student_id FROM student_semester
WHERE entryyear = 2006 AND collgs = 'AS'
) ORDER BY student_id, semester;
Query Plan
Sort (cost=18597.13..18800.49 rows=81343 width=65) (actual time=72.946..74.003 rows=25680 loops=1)
Sort Key: public.student_semester.student_id, public.student_semester.semester
Sort Method: quicksort Memory: 3546kB
-> Nested Loop (cost=9843.87..11962.91 rows=81343 width=65) (actual time=24.617..40.751 rows=25680 loops=1)
-> HashAggregate (cost=9843.87..9845.73 rows=186 width=4) (actual time=24.590..24.836 rows=1284 loops=1)
-> Bitmap Heap Scan on student_semester (cost=1612.75..9834.63 rows=3696 width=4) (actual time=10.401..23.637 rows=6184 loops=1)
Recheck Cond: (entryyear = 2006)
Filter: ((collgs)::text = 'AS'::text)
-> Bitmap Index Scan on entryyear_act_sat_semester_enrolled_cumdeg_index (cost=0.00..1611.82 rows=60192 width=0) (actual time=10.259..10.259 rows=60520 loops=1)
Index Cond: (entryyear = 2006)
-> Index Scan using student_id_index on student_semester (cost=0.00..11.13 rows=20 width=65) (actual time=0.003..0.010 rows=20 loops=1284)
Index Cond: (student_id = public.student_semester.student_id)
Total runtime: 74.938 ms
An alternative approach to doing the query is to use window functions.
select t.* -- Has the extra NumMatches column. To eliminate it, list the columns you want
from (select ss.*,
sum(case when ss.college = 'AS' and s.entryyear = 2006 then 1 else 0 end) over
(partition by student_id) as NumMatches
from student_semester ss join
student s
on ss.student_id = s.student_id
) t
where NumMatches > 0;
Window functions are usually faster than joining in an aggregation, so I suspect that this might perform well.
The clean version of your query is:
select ss.*
from
student s
inner join
student_semester ss using(student_id)
where
s.entryyear = 2006
and exists (
select 1
from student_semester
where
college = 'AS'
and student_id = s.student_id
)
order by ss.student_id, semester
You want, it appears, students who entered in 2006 and who have ever been in AS college.
Version One.
SELECT sem.*
FROM student s JOIN student_semester sem USING (student_id)
WHERE s.entryyear=2006
AND student_id IN (SELECT student_id
FROM student_semester s2 WHERE s2.college='AS')
AND /* other criteria */
ORDER BY sem.student_id, semester;
Version Two
SELECT sem.*
FROM student s JOIN student_semester sem USING (student_id)
WHERE s.entryyear=2006
AND EXISTS
(SELECT 1 FROM student_semester s2
WHERE s2.student_id = s.student_id AND s2.college='AS')
-- CREATE INDEX foo on student_semester(student_id, college);
AND /* other criteria */
ORDER BY sem.student_id, semester;
I expect both to be fast, but whether one performs better than the other (or they produce the exact same plan) is a PG mystery.
[EDIT] Here's a version with no semi-joins. I wouldn't expect it to work well because it will give multiple hits for each time a student was in AS.
SELECT DISTINCT ON ( /* PK of sem */ ) sem.*
FROM student s
JOIN student_semester sem USING (student_id)
JOIN student_semester s2 USING (student_id)
WHERE s.entry_year=2006
AND s2.college='AS'
ORDER BY sem.student_id, semester;