PostgreSQL exclude records crossing other table values - sql

Consider two PostgreSQL tables:
Table #1
id INT
secret_id INT
title VARCHAR
Table #2
id INT
secret_id INT
I need to select all records from Table #1, excluding those whose secret_id appears in Table #2.
The following query is very slow with 1,000,000 records in Table #1 and 500,000 in Table #2:
select * from table_1 where secret_id not in (select secret_id from table_2);
What is the best way to achieve this?

FWIW, I tested the suggestions Daniel Lyons and Craig Ringer made in the comments above. Here are the results for my particular case (~500k rows per table), ordered by efficiency (most efficient first).
ANTI-JOIN:
> EXPLAIN ANALYZE SELECT * FROM table1 t1 LEFT JOIN table2 t2 ON t1.secret_id=t2.secret_id WHERE t2.secret_id IS NULL;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
Hash Anti Join (cost=19720.19..56129.91 rows=21 width=28) (actual time=139.868..459.991 rows=142993 loops=1)
Hash Cond: (t1.secret_id = t2.secret_id)
-> Seq Scan on table1 t1 (cost=0.00..13049.06 rows=622606 width=14) (actual time=0.005..61.913 rows=622338 loops=1)
-> Hash (cost=10849.75..10849.75 rows=510275 width=14) (actual time=138.176..138.176 rows=510275 loops=1)
Buckets: 4096 Batches: 32 Memory Usage: 777kB
-> Seq Scan on table2 t2 (cost=0.00..10849.75 rows=510275 width=14) (actual time=0.018..47.005 rows=510275 loops=1)
Total runtime: 466.748 ms
(7 rows)
NOT EXISTS:
> EXPLAIN ANALYZE SELECT * FROM table1 t1 WHERE NOT EXISTS (SELECT secret_id FROM table2 t2 WHERE t2.secret_id=t1.secret_id);
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------
Hash Anti Join (cost=19222.19..55133.91 rows=21 width=14) (actual time=181.881..517.632 rows=142993 loops=1)
Hash Cond: (t1.secret_id = t2.secret_id)
-> Seq Scan on table1 t1 (cost=0.00..13049.06 rows=622606 width=14) (actual time=0.005..70.478 rows=622338 loops=1)
-> Hash (cost=10849.75..10849.75 rows=510275 width=4) (actual time=179.665..179.665 rows=510275 loops=1)
Buckets: 4096 Batches: 32 Memory Usage: 592kB
-> Seq Scan on table2 t2 (cost=0.00..10849.75 rows=510275 width=4) (actual time=0.019..78.074 rows=510275 loops=1)
Total runtime: 524.300 ms
(7 rows)
EXCEPT:
> EXPLAIN ANALYZE SELECT * FROM table1 EXCEPT (SELECT t1.* FROM table1 t1 join table2 t2 ON t1.secret_id=t2.secret_id);
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
SetOp Except (cost=1524985.53..1619119.03 rows=62261 width=14) (actual time=16926.056..19850.915 rows=142993 loops=1)
-> Sort (cost=1524985.53..1543812.23 rows=7530680 width=14) (actual time=16925.010..18596.860 rows=6491408 loops=1)
Sort Key: "*SELECT* 1".secret_id, "*SELECT* 1".jeu, "*SELECT* 1".combinaison, "*SELECT* 1".gains
Sort Method: external merge Disk: 185232kB
-> Append (cost=0.00..278722.63 rows=7530680 width=14) (actual time=0.007..2951.920 rows=6491408 loops=1)
-> Subquery Scan on "*SELECT* 1" (cost=0.00..19275.12 rows=622606 width=14) (actual time=0.007..176.892 rows=622338 loops=1)
-> Seq Scan on table1 (cost=0.00..13049.06 rows=622606 width=14) (actual time=0.005..69.842 rows=622338 loops=1)
-> Subquery Scan on "*SELECT* 2" (cost=19222.19..259447.51 rows=6908074 width=14) (actual time=168.529..2228.335 rows=5869070 loops=1)
-> Hash Join (cost=19222.19..190366.77 rows=6908074 width=14) (actual time=168.528..1450.663 rows=5869070 loops=1)
Hash Cond: (t1.secret_id = t2.secret_id)
-> Seq Scan on table1 t1 (cost=0.00..13049.06 rows=622606 width=14) (actual time=0.002..64.554 rows=622338 loops=1)
-> Hash (cost=10849.75..10849.75 rows=510275 width=4) (actual time=168.329..168.329 rows=510275 loops=1)
Buckets: 4096 Batches: 32 Memory Usage: 592kB
-> Seq Scan on table2 t2 (cost=0.00..10849.75 rows=510275 width=4) (actual time=0.017..72.702 rows=510275 loops=1)
Total runtime: 19896.445 ms
(15 rows)
NOT IN:
> EXPLAIN SELECT * FROM table1 WHERE secret_id NOT IN (SELECT secret_id FROM table2);
QUERY PLAN
-----------------------------------------------------------------------------------------
Seq Scan on table1 (cost=0.00..5189688549.26 rows=311303 width=14)
Filter: (NOT (SubPlan 1))
SubPlan 1
-> Materialize (cost=0.00..15395.12 rows=510275 width=4)
-> Seq Scan on table2 (cost=0.00..10849.75 rows=510275 width=4)
(5 rows)
I did not run EXPLAIN ANALYZE on the last one because it took ages.
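As a side note (not part of the original test), the plans above show the hash being split into 32 batches while using well under 1 MB of memory, which points at a small work_mem. A hedged sketch, assuming you can spare the memory for the session (the 64MB value is purely illustrative):
-- Illustrative only: let the hash build side fit in a single batch,
-- then re-run the fastest variant (the anti-join).
SET work_mem = '64MB';
EXPLAIN ANALYZE
SELECT t1.*
FROM table1 t1
LEFT JOIN table2 t2 ON t1.secret_id = t2.secret_id
WHERE t2.secret_id IS NULL;
RESET work_mem;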

Related

Postgres: Slow join with WHERE clause

I have already created indexes on both tables, but it is still pretty slow. And it is fast without the WHERE clause.
EXPLAIN ANALYZE SELECT date, a, b, c FROM t1 JOIN t2 using (date, a) where date = current_date;
Nested Loop (cost=0.71..12.75 rows=1 width=22) (actual time=0.343..50925.262 rows=87956 loops=1)
Join Filter: (t1.a = t2.a)
Rows Removed by Join Filter: 262988440
-> Index Scan using t1_date_idx1 on t1 t1 (cost=0.42..8.44 rows=1 width=15) (actual time=0.022..20.240 rows=87956 loops=1)
Index Cond: (date = CURRENT_DATE)
-> Index Scan using t2_date_idx on t2 t2 (cost=0.29..4.30 rows=1 width=15) (actual time=0.005..0.353 rows=2991 loops=87956)
Index Cond: (date = CURRENT_DATE)
Planning time: 0.151 ms
Execution time: 50930.327 ms
Without the WHERE clause:
EXPLAIN ANALYZE SELECT date, a , b, c FROM t1 JOIN t2 using (date, a)
;
Hash Join (cost=349.55..11993.24 rows=182123 width=22) (actual time=4.741..61.881 rows=182123 loops=1)
Hash Cond: ((t1.date = t2.date) AND (t1.a = t2.a))
-> Seq Scan on t1 t1 (cost=0.00..8001.23 rows=182123 width=15) (actual time=2.921..17.651 rows=182123 loops=1)
-> Hash (cost=259.82..259.82 rows=5982 width=15) (actual time=1.765..1.765 rows=5982 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 350kB
-> Seq Scan on t2 t2 (cost=0.00..259.82 rows=5982 width=15) (actual time=0.115..0.908 rows=5982 loops=1)
Planning time: 0.280 ms
Execution time: 66.400 ms
Thanks rickyen. After running ANALYZE t1; and ANALYZE t2;, it is much better.
Hash Join (cost=233.44..7052.01 rows=87936 width=15) (actual time=0.936..31.488 rows=87956 loops=1)
Hash Cond: (t1.a = t2.a)
-> Index Scan using t1_date_idx1 on t1 (cost=0.42..5499.81 rows=87965 width=15) (actual time=0.026..12.625 rows=87956 loops=1)
Index Cond: (date = CURRENT_DATE)
-> Hash (cost=195.63..195.63 rows=2991 width=8) (actual time=0.900..0.900 rows=2991 loops=1)
Buckets: 4096 Batches: 1 Memory Usage: 152kB
-> Index Scan using t2_date_idx on t2 (cost=0.29..195.63 rows=2991 width=8) (actual time=0.015..0.556 rows=2991 loops=1)
Index Cond: (date = CURRENT_DATE)
Planning time: 0.171 ms
Execution time: 33.799 ms
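For reference, a minimal sketch of the fix, plus one optional extra step that is my assumption rather than part of the accepted answer (the composite index simply mirrors the USING (date, a) join keys; its name is made up):
-- Refresh planner statistics; this is what fixed the bad row estimates.
ANALYZE t1;
ANALYZE t2;
-- Optional and assumed: a composite index matching both join keys.
CREATE INDEX t1_date_a_idx ON t1 (date, a);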

Postgres: Slow query when using OR statement in a join query

We run a join query between 2 tables.
The query has an OR condition that compares one column from the left table with one column from the right table. Query performance is very poor, and we fixed it by changing the OR to a UNION.
Why is this happening? I'm looking for a detailed explanation or a reference to the documentation that might shed a light on the issue.
Query with the OR condition:
db1=# explain analyze select count(*)
from conversations
join agents on conversations.agent_id=agents.id
where conversations.id=1 or agents.id = '123';
Query plan:
----------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=11017.95..11017.96 rows=1 width=8) (actual time=54.088..54.088 rows=1 loops=1)
-> Gather (cost=11017.73..11017.94 rows=2 width=8) (actual time=53.945..57.181 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial Aggregate (cost=10017.73..10017.74 rows=1 width=8) (actual time=48.303..48.303 rows=1 loops=3)
-> Hash Join (cost=219.26..10016.69 rows=415 width=0) (actual time=5.292..48.287 rows=130 loops=3)
Hash Cond: (conversations.agent_id = agents.id)
Join Filter: ((conversations.id = 1) OR ((agents.id)::text = '123'::text))
Rows Removed by Join Filter: 80035
-> Parallel Seq Scan on conversations (cost=0.00..9366.95 rows=163995 width=8) (actual time=0.017..14.972 rows=131196 loops=3)
-> Hash (cost=143.56..143.56 rows=6056 width=16) (actual time=2.686..2.686 rows=6057 loops=3)
Buckets: 8192 Batches: 1 Memory Usage: 353kB
-> Seq Scan on agents (cost=0.00..143.56 rows=6056 width=16) (actual time=0.011..1.305 rows=6057 loops=3)
Planning time: 0.710 ms
Execution time: 57.276 ms
(15 rows)
Changing the OR to UNION:
db1=# explain analyze select count(*) from (
select *
from conversations
join agents on conversations.agent_id=agents.id
where conversations.installation_id=1
union
select *
from conversations
join agents on conversations.agent_id=agents.id
where agents.source_id = '123') as subquery;
Query plan:
----------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=1114.31..1114.32 rows=1 width=8) (actual time=8.038..8.038 rows=1 loops=1)
-> HashAggregate (cost=1091.90..1101.86 rows=996 width=1437) (actual time=7.783..8.009 rows=390 loops=1)
Group Key: conversations.id, conversations.created, conversations.modified, conversations.source_created, conversations.source_id, conversations.installation_id, brain_conversation.resolution_reason, conversations.solve_time, conversations.agent_id, conversations.submission_reason, conversations.is_marked_as_duplicate, conversations.num_back_and_forths, conversations.is_closed, conversations.is_solved, conversations.conversation_type, conversations.related_ticket_source_id, conversations.channel, brain_conversation.last_updated_from_platform, conversations.csat, agents.id, agents.created, agents.modified, agents.name, agents.source_id, organization_agent.installation_id, agents.settings
-> Append (cost=219.68..1027.16 rows=996 width=1437) (actual time=5.517..6.307 rows=390 loops=1)
-> Hash Join (cost=219.68..649.69 rows=931 width=224) (actual time=5.516..6.063 rows=390 loops=1)
Hash Cond: (conversations.agent_id = agents.id)
-> Index Scan using conversations_installation_id_b3ff5c00 on conversations (cost=0.42..427.98 rows=931 width=154) (actual time=0.039..0.344 rows=879 loops=1)
Index Cond: (installation_id = 1)
-> Hash (cost=143.56..143.56 rows=6056 width=70) (actual time=5.394..5.394 rows=6057 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 710kB
-> Seq Scan on agents (cost=0.00..143.56 rows=6056 width=70) (actual time=0.014..1.938 rows=6057 loops=1)
-> Nested Loop (cost=0.70..367.52 rows=65 width=224) (actual time=0.210..0.211 rows=0 loops=1)
-> Index Scan using agents_source_id_106c8103_like on agents agents_1 (cost=0.28..8.30 rows=1 width=70) (actual time=0.210..0.210 rows=0 loops=1)
Index Cond: ((source_id)::text = '123'::text)
-> Index Scan using conversations_agent_id_de76554b on conversations conversations_1 (cost=0.42..358.12 rows=110 width=154) (never executed)
Index Cond: (agent_id = agents_1.id)
Planning time: 2.024 ms
Execution time: 9.367 ms
(18 rows)
Yes. OR has a way of killing query performance. For this query:
select count(*)
from conversations c join
agents a
on c.agent_id = a.id
where c.id = 1 or a.id = 123;
Note that I removed the quotes around 123; it looks like a number, so I assume it is. For this query, you want an index on conversations(agent_id).
Probably the most effective way to write the query is:
select count(*)
from ((select 1
from conversations c join
agents a
on c.agent_id = a.id
where c.id = 1
) union all
(select 1
from conversations c join
agents a
on c.agent_id = a.id
where a.id = 123 and c.id <> 1
)
) ac;
Note the use of UNION ALL rather than UNION; the additional WHERE condition (c.id <> 1) is what eliminates duplicates.
This can take advantage of the following indexes:
conversations(id, agent_id)
agents(id)
conversations(agent_id, id)
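If any of those are missing, creating them is a one-liner each; a sketch with made-up index names (agents(id) is normally already covered by the primary key):
CREATE INDEX conversations_id_agent_id_idx ON conversations (id, agent_id);
CREATE INDEX conversations_agent_id_id_idx ON conversations (agent_id, id);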

Delete orphaned records in postgres. Delete using join. Performance

I have a case where I need to clean orphaned rows out of a table regularly, so I'm looking for a high-performance solution. I tried using an IN clause, but it's not really fast. The columns have all the required indexes in both tables (id - primary key, component_id - index, component_type - index).
DELETE FROM component_apportionment
WHERE id in (
SELECT a.id
FROM component_apportionment a
LEFT JOIN component_live c
ON (c.component_id = a.component_id
AND
c.component_type = a.component_type)
WHERE c.id is null);
Basically, the goal is to remove records from the component_apportionment table that do not exist in the component_live table.
The query plan for the query above is horrible as well:
Delete on component_apportionment_copy1 (cost=3860927.55..3860929.09 rows=1 width=18) (actual time=183479.848..183479.848 rows=0 loops=1)
-> Nested Loop (cost=3860927.55..3860929.09 rows=1 width=18) (actual time=183479.811..183479.813 rows=1 loops=1)
-> HashAggregate (cost=3860927.12..3860927.13 rows=1 width=20) (actual time=183479.793..183479.793 rows=1 loops=1)
Group Key: a.id
-> Merge Right Join (cost=3753552.72..3860927.12 rows=1 width=20) (actual time=172941.125..183479.787 rows=1 loops=1)
Merge Cond: ((c.component_id = a.component_id) AND ((c.component_type)::text = (a.component_type)::text))
Filter: (c.id IS NULL)
Rows Removed by Filter: 5968195
-> Sort (cost=3390767.32..3413658.29 rows=9156391 width=21) (actual time=169852.438..172642.897 rows=8043013 loops=1)
Sort Key: c.component_id, c.component_type
Sort Method: external merge Disk: 310232kB
-> Seq Scan on component_live c (cost=0.00..2117393.91 rows=9156391 width=21) (actual time=0.004..155656.568 rows=9333382 loops=1)
-> Materialize (cost=362785.40..375049.75 rows=2452871 width=21) (actual time=3088.653..5343.013 rows=5968195 loops=1)
-> Sort (cost=362785.40..368917.58 rows=2452871 width=21) (actual time=3088.648..3989.163 rows=2452871 loops=1)
Sort Key: a.component_id, a.component_type
Sort Method: external merge Disk: 81504kB
-> Seq Scan on component_apportionment_copy1 a (cost=0.00..44969.71 rows=2452871 width=21) (actual time=0.920..882.040 rows=2452871 loops=1)
-> Index Scan using component_apportionment_copy1_pkey on component_apportionment_copy1 (cost=0.43..1.95 rows=1 width=14) (actual time=0.012..0.012 rows=1 loops=1)
Index Cond: (id = a.id)
Planning time: 5.573 ms
Execution time: 183554.675 ms
Would appreciate any help.
Thanks
Note
The tables have approximately 80 million records each in the worst case. Both tables have indexes on the columns used.
UPDATE
Query plan for 'not exists'
Query:
EXPLAIN (analyze, verbose, buffers) DELETE FROM component_apportionment_copy1
WHERE not exists (select 1
from component_live c
where c.component_id = component_apportionment_copy1.component_id);
Delete on vector.component_apportionment_copy1 (cost=2276557.80..2446287.39 rows=2104532 width=12) (actual time=203643.560..203643.560 rows=0 loops=1)
Buffers: shared hit=20875 read=2025400, temp read=46067 written=45813
-> Hash Anti Join (cost=2276557.80..2446287.39 rows=2104532 width=12) (actual time=202212.975..203643.486 rows=1 loops=1)
Output: component_apportionment_copy1.ctid, c.ctid
Hash Cond: (component_apportionment_copy1.component_id = c.component_id)
Buffers: shared hit=20874 read=2025400, temp read=46067 written=45813
-> Seq Scan on vector.component_apportionment_copy1 (cost=0.00..44969.71 rows=2452871 width=10) (actual time=0.003..659.668 rows=2452871 loops=1)
Output: component_apportionment_copy1.ctid, component_apportionment_copy1.component_id
Buffers: shared hit=20441
-> Hash (cost=2117393.91..2117393.91 rows=9156391 width=10) (actual time=198536.786..198536.786 rows=9333382 loops=1)
Output: c.ctid, c.component_id
Buckets: 16384 Batches: 128 Memory Usage: 3195kB
Buffers: shared hit=430 read=2025400, temp written=36115
-> Seq Scan on vector.component_live c (cost=0.00..2117393.91 rows=9156391 width=10) (actual time=0.039..194415.641 rows=9333382 loops=1)
Output: c.ctid, c.component_id
Buffers: shared hit=430 read=2025400
Planning time: 6.639 ms
Execution time: 203643.594 ms
It is doing a seq scan on both tables, and the more data there is, the slower it will be.
You have way too many joins in there:
set enable_seqscan = false; -- forcing to use indexes
DELETE FROM component_apportionment
WHERE not exists (select 1
from component_live c
where c.component_id = component_apportionment.component_id);
This will do the same thing and should be much faster, especially if you have indexes on the component_id columns.
The EXISTS way:
delete
from component_apportionment ca
where not exists
(select 1
from component_live cl
where cl.component_id = ca.component_id
);
Or the IN way:
delete
from component_apportionment
where component_id not in
(select component_id
from component_live
);
Also, create indexes on the component_id columns of both tables.
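A sketch of those indexes, with made-up names (the question states they already exist, so this only applies if they were dropped or never created):
CREATE INDEX component_apportionment_component_id_idx ON component_apportionment (component_id);
CREATE INDEX component_live_component_id_idx ON component_live (component_id);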
UPDATE
I made a script for testing:
-- table creating and populating (1,000,000 records each)
drop table if exists component_apportionment;
drop table if exists component_live;
create table component_live (component_id numeric primary key);
create table component_apportionment (id serial primary key, component_id numeric);
create index component_apportionment_idx on component_apportionment (component_id);
insert into component_live select g from generate_series(1,1000000) g;
insert into component_apportionment (component_id) select trunc(random()*1000000) from generate_series(1,1000000) g;
analyze verbose component_live;
analyze verbose component_apportionment;
EXPLAIN (analyze, verbose, buffers)
select component_id
from component_apportionment ca
where not exists
(select 1
from component_live cl
where cl.component_id = ca.component_id
);
Merge Anti Join (cost=0.85..61185.85 rows=1 width=6) (actual time=0.013..1060.014 rows=2 loops=1)
Output: ca.component_id
Merge Cond: (ca.component_id = cl.component_id)
Buffers: shared hit=1010548
-> Index Only Scan using component_apportionment_idx on admin.component_apportionment ca (cost=0.42..24015.42 rows=1000000 width=6) (actual time=0.006..460.318 rows=1000000 loops=1)
Output: ca.component_id
Heap Fetches: 1000000
Buffers: shared hit=1003388
-> Index Only Scan using component_live_pkey on admin.component_live cl (cost=0.42..22170.42 rows=1000000 width=6) (actual time=0.005..172.502 rows=999998 loops=1)
Output: cl.component_id
Heap Fetches: 999998
Buffers: shared hit=7160
Total runtime: 1060.035 ms
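One more hedged observation on that test plan: both index-only scans report heap fetches for every row, which usually means the visibility map is not up to date on freshly loaded tables. A plain VACUUM (not just ANALYZE) would typically let the index-only scans skip the heap:
-- Set the all-visible bits so index-only scans can avoid heap fetches.
VACUUM component_apportionment;
VACUUM component_live;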

SQL query running very slow - postgres

This query currently take 4 minutes to run:
with name1 as (
select col1 as a1, col2 as a2, sum(FEE) as a3
from s1, date
where return_date = datesk and year = 2000
group by col1, col2
)
select c_id
from name1 ala1, ss, cc
where ala1.a3 > (
select avg(a3) * 1.2 from name1 ctr2
where ala1.a2 = ctr2.a2
)
and s_sk = ala1.a2
and s_state = 'TN'
and ala1.a1 = c_sk
order by c_id
limit 100;
I have set work_mem = '1000MB' and enable_nestloop = off.
EXPLAIN ANALYZE of this query is: http://explain.depesz.com/s/DUa
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------
--------------------
Limit (cost=59141.02..59141.09 rows=28 width=17) (actual time=253707.928..253707.940 rows=100 loops=1)
CTE name1
-> HashAggregate (cost=11091.33..11108.70 rows=1390 width=14) (actual time=105.223..120.358 rows=50441 loops=1)
Group Key: s1.col1, s1.col2
-> Hash Join (cost=2322.69..11080.90 rows=1390 width=14) (actual time=10.390..79.897 rows=55820 loops=1)
Hash Cond: (s1.return_date = date.datesk)
-> Seq Scan on s1 (cost=0.00..7666.14 rows=287514 width=18) (actual time=0.005..33.801 rows=287514 loops=1)
-> Hash (cost=2318.11..2318.11 rows=366 width=4) (actual time=10.375..10.375 rows=366 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 13kB
-> Seq Scan on date (cost=0.00..2318.11 rows=366 width=4) (actual time=5.224..10.329 rows=366 loops=1)
Filter: (year = 2000)
Rows Removed by Filter: 72683
-> Sort (cost=48032.32..48032.39 rows=28 width=17) (actual time=253707.923..253707.930 rows=100 loops=1)
Sort Key: cc.c_id
Sort Method: top-N heapsort Memory: 32kB
-> Hash Join (cost=43552.37..48031.65 rows=28 width=17) (actual time=253634.511..253696.291 rows=18976 loops=1)
Hash Cond: (cc.c_sk = ala1.a1)
-> Seq Scan on cc (cost=0.00..3854.00 rows=100000 width=21) (actual time=0.009..18.527 rows=100000 loops=1)
-> Hash (cost=43552.02..43552.02 rows=28 width=4) (actual time=253634.420..253634.420 rows=18976 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 668kB
-> Hash Join (cost=1.30..43552.02 rows=28 width=4) (actual time=136.819..253624.375 rows=18982 loops=1)
Hash Cond: (ala1.a2 = ss.s_sk)
-> CTE Scan on name1 ala1 (cost=0.00..43548.70 rows=463 width=8) (actual time=136.756..253610.817 rows=18982 loops=1)
Filter: (a3 > (SubPlan 2))
Rows Removed by Filter: 31459
SubPlan 2
-> Aggregate (cost=31.29..31.31 rows=1 width=32) (actual time=5.025..5.025 rows=1 loops=50441)
-> CTE Scan on name1 ctr2 (cost=0.00..31.27 rows=7 width=32) (actual time=0.032..3.860 rows=8241 loops=50441)
Filter: (ala1.a2 = a2)
Rows Removed by Filter: 42200
-> Hash (cost=1.15..1.15 rows=12 width=4) (actual time=0.036..0.036 rows=12 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 1kB
-> Seq Scan on ss (cost=0.00..1.15 rows=12 width=4) (actual time=0.025..0.033 rows=12 loops=1)
Filter: (s_state = 'TN'::bpchar)
Planning time: 0.316 ms
Execution time: 253708.351 ms
(36 rows)
With enable_nestloop = on:
EXPLAIN ANALYZE result is: http://explain.depesz.com/s/NPo
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------
--------------
Limit (cost=54916.36..54916.43 rows=28 width=17) (actual time=257869.004..257869.015 rows=100 loops=1)
CTE name1
-> HashAggregate (cost=11091.33..11108.70 rows=1390 width=14) (actual time=92.354..104.103 rows=50441 loops=1)
Group Key: s1.col1, s1.col2
-> Hash Join (cost=2322.69..11080.90 rows=1390 width=14) (actual time=9.371..68.156 rows=55820 loops=1)
Hash Cond: (s1.return_date = date.datesk)
-> Seq Scan on s1 (cost=0.00..7666.14 rows=287514 width=18) (actual time=0.011..25.637 rows=287514 loops=1)
-> Hash (cost=2318.11..2318.11 rows=366 width=4) (actual time=9.343..9.343 rows=366 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 13kB
-> Seq Scan on date (cost=0.00..2318.11 rows=366 width=4) (actual time=4.796..9.288 rows=366 loops=1)
Filter: (year = 2000)
Rows Removed by Filter: 72683
-> Sort (cost=43807.66..43807.73 rows=28 width=17) (actual time=257868.994..257868.998 rows=100 loops=1)
Sort Key: cc.c_id
Sort Method: top-N heapsort Memory: 32kB
-> Nested Loop (cost=0.29..43806.98 rows=28 width=17) (actual time=120.358..257845.941 rows=18976 loops=1)
-> Nested Loop (cost=0.00..43633.22 rows=28 width=4) (actual time=120.331..257692.654 rows=18982 loops=1)
Join Filter: (ala1.a2 = ss.s_sk)
Rows Removed by Join Filter: 208802
-> CTE Scan on name1 ala1 (cost=0.00..43548.70 rows=463 width=8) (actual time=120.316..257652.636 rows=18982 loops=1)
Filter: (a3 > (SubPlan 2))
Rows Removed by Filter: 31459
SubPlan 2
-> Aggregate (cost=31.29..31.31 rows=1 width=32) (actual time=5.105..5.105 rows=1 loops=50441)
-> CTE Scan on name1 ctr2 (cost=0.00..31.27 rows=7 width=32) (actual time=0.032..3.952 rows=8241 loops=50441)
Filter: (ala1.a2 = a2)
Rows Removed by Filter: 42200
-> Materialize (cost=0.00..1.21 rows=12 width=4) (actual time=0.000..0.001 rows=12 loops=18982)
-> Seq Scan on ss (cost=0.00..1.15 rows=12 width=4) (actual time=0.007..0.012 rows=12 loops=1)
Filter: (s_state = 'TN'::bpchar)
-> Index Scan using cc_pkey on cc (cost=0.29..6.20 rows=1 width=21) (actual time=0.007..0.007 rows=1 loops=18982)
Index Cond: (c_sk = ala1.a1)
Planning time: 0.453 ms
Execution time: 257869.554 ms
(34 rows)
Many other queries run quickly with enable_nestloop=off, but it makes no big difference for this query. The raw data is not really big, so 4 minutes is far too long; I was expecting around 4-5 seconds.
Why is it taking so long?
I tried this on both Postgres 9.4 and 9.5; the result is the same. Maybe I could create BRIN indexes, but I am not sure which columns to create them on.
Configuration settings:
effective_cache_size | 89GB
shared_buffers | 18GB
work_mem | 1000MB
maintenance_work_mem | 500MB
checkpoint_segments | 32
constraint_exclusion | on
checkpoint_completion_target | 0.5
As John Bollinger commented, your sub-query gets evaluated for each row of the main query. But since you are averaging over a simple column, you can easily move the sub-query out to a CTE and calculate the average once, which should speed things up tremendously:
with name1 as (
select col1 as a1, col2 as a2, sum(FEE) as a3
from s1, date
where return_date = datesk and year = 2000
group by col1, col2
), avg_a3_by_a2 as (
select a2, avg(a3) * 1.2 as avg12
from name1
group by a2
)
select c_id
from name1, avg_a3_by_a2, ss, cc
where name1.a3 > avg_a3_by_a2.avg12
and name1.a2 = avg_a3_by_a2.a2
and s_sk = name1.a2
and s_state = 'TN'
and name1.a1 = c_sk
order by c_id
limit 100;
The new CTE calculates the average + 20% for every distinct value of a2.
Please also use explicit JOIN syntax instead of comma-separated FROM items, as it makes your code far more readable. And if you start using aliases in your query, use them consistently on all tables and columns. I could not correct either of these two issues because of a lack of information.
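For illustration only, here is roughly what the explicit-JOIN form might look like. The table qualifiers are inferred from the plan (return_date and FEE from s1, year and datesk from date, s_sk and s_state from ss, c_sk and c_id from cc); the actual table definitions were never posted:
-- Sketch only: join conditions are inferred from the plan, not from the schema.
with name1 as (
  select s1.col1 as a1, s1.col2 as a2, sum(s1.FEE) as a3
  from s1
  join date on s1.return_date = date.datesk
  where date.year = 2000
  group by s1.col1, s1.col2
), avg_a3_by_a2 as (
  select a2, avg(a3) * 1.2 as avg12
  from name1
  group by a2
)
select cc.c_id
from name1
join avg_a3_by_a2 on name1.a2 = avg_a3_by_a2.a2
join ss on ss.s_sk = name1.a2
join cc on cc.c_sk = name1.a1
where name1.a3 > avg_a3_by_a2.avg12
  and ss.s_state = 'TN'
order by cc.c_id
limit 100;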

How reliable is the cost measurement in PostgreSQL Explain Plan?

The queries are performed on a large table with 11 million rows. I have already performed an ANALYZE on the table prior to the query executions.
Query 1:
SELECT *
FROM accounts t1
LEFT OUTER JOIN accounts t2
ON (t1.account_no = t2.account_no
AND t1.effective_date < t2.effective_date)
WHERE t2.account_no IS NULL;
Explain Analyze:
Hash Anti Join (cost=480795.57..1201111.40 rows=7369854 width=292) (actual time=29619.499..115662.111 rows=1977871 loops=1)
Hash Cond: ((t1.account_no)::text = (t2.account_no)::text)
Join Filter: ((t1.effective_date)::text < (t2.effective_date)::text)
-> Seq Scan on accounts t1 (cost=0.00..342610.81 rows=11054781 width=146) (actual time=0.025..25693.921 rows=11034070 loops=1)
-> Hash (cost=342610.81..342610.81 rows=11054781 width=146) (actual time=29612.925..29612.925 rows=11034070 loops=1)
Buckets: 2097152 Batches: 1 Memory Usage: 1834187kB
-> Seq Scan on accounts t2 (cost=0.00..342610.81 rows=11054781 width=146) (actual time=0.006..22929.635 rows=11034070 loops=1)
Total runtime: 115870.788 ms
The estimated cost is ~1.2 million and the actual time taken is ~1.9 minutes.
Query 2:
SELECT t1.*
FROM accounts t1
LEFT OUTER JOIN accounts t2
ON (t1.account_no = t2.account_no
AND t1.effective_date < t2.effective_date)
WHERE t2.account_no IS NULL;
Explain Analyze:
Hash Anti Join (cost=480795.57..1201111.40 rows=7369854 width=146) (actual time=13365.808..65519.402 rows=1977871 loops=1)
Hash Cond: ((t1.account_no)::text = (t2.account_no)::text)
Join Filter: ((t1.effective_date)::text < (t2.effective_date)::text)
-> Seq Scan on accounts t1 (cost=0.00..342610.81 rows=11054781 width=146) (actual time=0.007..5032.778 rows=11034070 loops=1)
-> Hash (cost=342610.81..342610.81 rows=11054781 width=18) (actual time=13354.219..13354.219 rows=11034070 loops=1)
Buckets: 2097152 Batches: 1 Memory Usage: 545369kB
-> Seq Scan on accounts t2 (cost=0.00..342610.81 rows=11054781 width=18) (actual time=0.011..8964.571 rows=11034070 loops=1)
Total runtime: 65705.707 ms
The estimated cost is ~1.2 million (again) but the actual time taken is <1.1 minutes.
Query 3:
SELECT *
FROM accounts
WHERE (account_no,
effective_date) IN
(SELECT account_no,
max(effective_date)
FROM accounts
GROUP BY account_no);
Explain Analyze:
Nested Loop (cost=406416.19..502216.84 rows=2763695 width=146) (actual time=31779.457..917543.228 rows=1977871 loops=1)
-> HashAggregate (cost=406416.19..406757.45 rows=34126 width=43) (actual time=31774.877..33378.968 rows=1977425 loops=1)
-> Subquery Scan on "ANY_subquery" (cost=397884.72..404709.90 rows=341259 width=43) (actual time=27979.226..29841.217 rows=1977425 loops=1)
-> HashAggregate (cost=397884.72..401297.31 rows=341259 width=18) (actual time=27979.224..29315.346 rows=1977425 loops=1)
-> Seq Scan on accounts (cost=0.00..342610.81 rows=11054781 width=18) (actual time=0.851..16092.755 rows=11034070 loops=1)
-> Index Scan using accounts_idx2 on accounts (cost=0.00..2.78 rows=1 width=146) (actual time=0.443..0.445 rows=1 loops=1977425)
Index Cond: (((account_no)::text = ("ANY_subquery".account_no)::text) AND ((effective_date)::text = "ANY_subquery".max))
Total runtime: 918039.614 ms
The estimated cost is ~502,000 but the actual time taken is ~15.3 minutes!
How reliable is the EXPLAIN output?
Do we always have to EXPLAIN ANALYZE to see how our query is going to perform on real data, and not place trust in how much the query planner thinks it will cost?
They are reliable, except for when they are not. You can't really generalize.
It looks like the planner is dramatically underestimating the number of distinct account_no values it will find (it thinks it will find 34,126 but actually found 1,977,425). Your default_statistics_target might not be high enough to get a good estimate for this column.
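A hedged sketch of how one might act on that: either raise default_statistics_target globally, or raise the per-column target for account_no and re-analyze (the 1000 value below is arbitrary):
-- Sample more rows for account_no so n_distinct is estimated better.
ALTER TABLE accounts ALTER COLUMN account_no SET STATISTICS 1000;
ANALYZE accounts;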