When I run a query with a lateral join and a LIMIT inside, it uses a Nested Loop join. But when I remove the LIMIT, it uses a Hash Right Join. Why?
EXPLAIN ANALYSE
SELECT proxy.*
FROM jobs
LEFT OUTER JOIN LATERAL (
SELECT proxy.*
FROM proxy
WHERE jobs.id = proxy.job_id
) proxy ON true
Hash Right Join (cost=2075.47..3029.05 rows=34688 width=12) (actual time=9.951..24.758 rows=35212 loops=1)
Hash Cond: (proxy.job_id = jobs.id)
-> Seq Scan on proxy (cost=0.00..524.15 rows=34015 width=12) (actual time=0.011..2.502 rows=34028 loops=1)
-> Hash (cost=1641.87..1641.87 rows=34688 width=4) (actual time=9.842..9.842 rows=34689 loops=1)
Buckets: 65536 Batches: 1 Memory Usage: 1732kB
-> Index Only Scan using jobs_pkey on jobs (cost=0.29..1641.87 rows=34688 width=4) (actual time=0.010..4.904 rows=34689 loops=1)
Heap Fetches: 921
But when I add a LIMIT inside the lateral subquery, the actual time jumps from about 25 ms to about 156 ms:
EXPLAIN ANALYSE
SELECT proxy.*
FROM jobs
LEFT OUTER JOIN LATERAL (
SELECT proxy.*
FROM proxy
WHERE jobs.id = proxy.job_id
LIMIT 1
) proxy ON true
Nested Loop Left Join (cost=0.58..290506.19 rows=34688 width=12) (actual time=0.024..155.753 rows=34689 loops=1)
-> Index Only Scan using jobs_pkey on jobs (cost=0.29..1641.87 rows=34688 width=4) (actual time=0.014..3.984 rows=34689 loops=1)
Heap Fetches: 921
-> Limit (cost=0.29..8.31 rows=1 width=12) (actual time=0.001..0.001 rows=1 loops=34689)
-> Index Scan using index_job_proxy_on_job_id on loc_job_source_materials (cost=0.29..8.31 rows=1 width=12) (actual time=0.001..0.001 rows=1 loops=34689)
Index Cond: (jobs.id = job_id)
The optimizer is smart enough to rewrite your first query to
SELECT proxy.*
FROM proxy
RIGHT OUTER JOIN jobs
ON jobs.id = proxy.job_id;
But this rewrite is not possible once the subquery contains a LIMIT clause: the LIMIT has to be applied separately for each row of jobs, so the subquery must be evaluated once per row, and only a nested loop join can do that.
Following on from @LaurenzAlbe's answer, I think we can help more if you show the complete query, so that we know why you need a LATERAL join. For the (simplified) requirements you have mentioned so far, I think an equivalent is
SELECT DISTINCT ON (jobs.id) proxy.*
FROM proxy
RIGHT OUTER JOIN jobs
ON jobs.id = proxy.job_id;
Also, since you are only outputting columns from proxy, every job without a matching proxy row contributes nothing but a row of NULLs; if you don't need those rows, an INNER JOIN gives you the same useful output with less computing effort.
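If you do need the LIMIT 1 behaviour (at most one proxy row per job) with a deterministic choice, a sketch along these lines should work; the tie-breaker proxy.id in the ORDER BY is an assumption, so pick whichever column defines "first" for you:

SELECT DISTINCT ON (jobs.id) proxy.*
FROM jobs
LEFT OUTER JOIN proxy ON jobs.id = proxy.job_id
-- DISTINCT ON keeps the first row per jobs.id in this sort order
ORDER BY jobs.id, proxy.id;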
This is my SQL script; I have to join 7 tables:
SELECT concat_ws('-', it.item_id, it.model_id) AS product_id,
       concat_ws('-', aip.partner_item_id, aip.partner_model_id) AS product_reseller_id,
       i.name AS item_name,
       im.name AS model_name,
       p.partner_code,
       sum(it.quantity) AS transfer_total,
       sum(isb.remaining_item) AS remaining_stock,
       sum(isb.sold_item) AS partner_sold
FROM transfer t
INNER JOIN partner p ON p.reseller_store_id = t.reseller_store_id
INNER JOIN item_transfer it ON t.id = it.transfer_id
INNER JOIN item i ON i.id = it.item_id
INNER JOIN item_model im ON it.model_id = im.id
INNER JOIN affiliate_item_mapping aip ON it.item_id = aip.seller_item_id AND it.model_id = aip.seller_model_id
    AND t.reseller_store_id = aip.reseller_store_id
LEFT JOIN inventory_summary_branch isb ON isb.inventory_summary_id = concat_ws('-', aip.partner_item_id, aip.partner_model_id)
WHERE p.store_id = 9805
GROUP BY it.item_id, it.model_id, p.partner_code, i.id, im.id, aip.id, isb.inventory_summary_id
This is the EXPLAIN ANALYZE output:
GroupAggregate (cost=13861.57..13861.62 rows=1 width=885) (actual time=1890.392..1890.525 rows=15 loops=1)
Group Key: it.item_id, it.model_id, p.partner_code, i.id, im.id, aip.id, isb.inventory_summary_id
Buffers: shared hit=118610
-> Sort (cost=13861.57..13861.58 rows=1 width=765) (actual time=1890.310..1890.338 rows=21 loops=1)
Sort Key: it.item_id, it.model_id, p.partner_code, aip.id, isb.inventory_summary_id
Sort Method: quicksort Memory: 28kB
Buffers: shared hit=118610
-> Nested Loop (cost=1.27..13861.56 rows=1 width=765) (actual time=73.156..1890.057 rows=21 loops=1)
Buffers: shared hit=118610
-> Nested Loop (cost=0.85..13853.14 rows=1 width=753) (actual time=73.134..1889.495 rows=21 loops=1)
Buffers: shared hit=118526
-> Nested Loop (cost=0.43..13845.32 rows=1 width=609) (actual time=73.099..1888.733 rows=21 loops=1)
Join Filter: ((p.reseller_store_id = t.reseller_store_id) AND (it.transfer_id = t.id))
Rows Removed by Join Filter: 2142
Buffers: shared hit=118442
-> Nested Loop (cost=0.43..13840.24 rows=1 width=633) (actual time=72.793..1879.961 rows=21 loops=1)
Join Filter: ((aip.seller_item_id = it.item_id) AND (aip.seller_model_id = it.model_id))
Rows Removed by Join Filter: 6003
Buffers: shared hit=118379
-> Nested Loop Left Join (cost=0.43..13831.47 rows=1 width=601) (actual time=72.093..1861.415 rows=24 loops=1)
Buffers: shared hit=118307
-> Nested Loop (cost=0.00..11.44 rows=1 width=572) (actual time=0.042..0.696 rows=24 loops=1)
Join Filter: (p.reseller_store_id = aip.reseller_store_id)
Rows Removed by Join Filter: 150
Buffers: shared hit=7
-> Seq Scan on partner p (cost=0.00..10.38 rows=1 width=524) (actual time=0.026..0.039 rows=6 loops=1)
Filter: (store_id = 9805)
Buffers: shared hit=1
-> Seq Scan on affiliate_item_mapping aip (cost=0.00..1.03 rows=3 width=48) (actual time=0.006..0.043 rows=29 loops=6)
Buffers: shared hit=6
-> Index Scan using branch_id_inventory_summary_id_inventory_summary_branch on inventory_summary_branch isb (cost=0.43..13820.01 rows=1 width=29) (actual time=77.498..77.498 rows=0 loops=24)
Index Cond: ((inventory_summary_id)::text = concat_ws('-'::text, aip.partner_item_id, aip.partner_model_id))
Buffers: shared hit=118300
-> Seq Scan on item_transfer it (cost=0.00..5.31 rows=231 width=32) (actual time=0.024..0.391 rows=251 loops=24)
Buffers: shared hit=72
-> Seq Scan on transfer t (cost=0.00..3.83 rows=83 width=16) (actual time=0.011..0.256 rows=103 loops=21)
Buffers: shared hit=63
-> Index Scan using pk_item on item i (cost=0.42..7.81 rows=1 width=152) (actual time=0.022..0.023 rows=1 loops=21)
Index Cond: (id = it.item_id)
Buffers: shared hit=84
-> Index Scan using pk_item_model on item_model im (cost=0.43..8.41 rows=1 width=20) (actual time=0.016..0.018 rows=1 loops=21)
Index Cond: (id = it.model_id)
Buffers: shared hit=84
Planning time: 10.051 ms
Execution time: 1890.943 ms
The statement works, but it is slow. Is there a better way to write this query?
How can I improve the performance? Is a join or a subquery better in this case? Please give me a hand.
Two things can help you:
run VACUUM ANALYZE on all the tables involved.
create an index on item_transfer (item_id, model_id).
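A minimal sketch of both suggestions, assuming the table names from the question (the index name is a placeholder):

-- Refresh statistics and reclaim dead tuples on the tables in the query
VACUUM ANALYZE transfer;
VACUUM ANALYZE item_transfer;
VACUUM ANALYZE item;
VACUUM ANALYZE item_model;
VACUUM ANALYZE affiliate_item_mapping;
VACUUM ANALYZE inventory_summary_branch;
VACUUM ANALYZE partner;

-- Composite index on the two columns joined against affiliate_item_mapping
CREATE INDEX item_transfer_item_model_idx ON item_transfer (item_id, model_id);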
Essentially all of your time (77.498 ms * 24 loops) is spent on the index scan of branch_id_inventory_summary_id_inventory_summary_branch.
About the only explanation I can see for this is that the index isn't suited to the query, and it is being full-index scanned (in lieu of a full table scan) rather than being efficiently scanned. This probably means the index includes the column inventory_summary_id, but not as the leading column. (It would be nice if EXPLAIN made this inefficient type of usage clearer than it currently does.)
You would probably benefit from an index on inventory_summary_branch (inventory_summary_id), which has a better chance of being used efficiently.
I don't know why it wouldn't just do a hash join of that table. Maybe your work_mem is too low.
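A sketch of both ideas; the index name and the work_mem value are assumptions, not figures from the question:

-- Btree index with inventory_summary_id as the leading (and only) column,
-- so the comparison against concat_ws(...) can use it directly
CREATE INDEX inventory_summary_branch_summary_id_idx
    ON inventory_summary_branch (inventory_summary_id);

-- Raise work_mem for the session and re-run EXPLAIN ANALYZE
-- to see whether the planner switches to a hash join
SET work_mem = '64MB';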
Inner joins on whole tables can be slow, especially with this many tables.
You could change from an inner join on the whole table to a join on just the columns you need, and see if that improves things:
From:
INNER JOIN partner p ON p.reseller_store_id = t.reseller_store_id
To:
inner join (select reseller_store_id, store_id, partner_code from partner) as p ON p.reseller_store_id = t.reseller_store_id
See if that speeds things up at all.
If not, I would recommend indexes on the join keys.
We run a join query between 2 tables.
The query has an OR condition that compares a column from the left table with a column from the right table. Query performance was very poor, and we fixed it by changing the OR to a UNION.
Why does this happen? I'm looking for a detailed explanation, or a reference to the documentation that might shed light on the issue.
Query with the OR condition:
db1=# explain analyze select count(*)
from conversations
join agents on conversations.agent_id=agents.id
where conversations.id=1 or agents.id = '123';
Query plan:
----------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=11017.95..11017.96 rows=1 width=8) (actual time=54.088..54.088 rows=1 loops=1)
-> Gather (cost=11017.73..11017.94 rows=2 width=8) (actual time=53.945..57.181 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial Aggregate (cost=10017.73..10017.74 rows=1 width=8) (actual time=48.303..48.303 rows=1 loops=3)
-> Hash Join (cost=219.26..10016.69 rows=415 width=0) (actual time=5.292..48.287 rows=130 loops=3)
Hash Cond: (conversations.agent_id = agents.id)
Join Filter: ((conversations.id = 1) OR ((agents.id)::text = '123'::text))
Rows Removed by Join Filter: 80035
-> Parallel Seq Scan on conversations (cost=0.00..9366.95 rows=163995 width=8) (actual time=0.017..14.972 rows=131196 loops=3)
-> Hash (cost=143.56..143.56 rows=6056 width=16) (actual time=2.686..2.686 rows=6057 loops=3)
Buckets: 8192 Batches: 1 Memory Usage: 353kB
-> Seq Scan on agents (cost=0.00..143.56 rows=6056 width=16) (actual time=0.011..1.305 rows=6057 loops=3)
Planning time: 0.710 ms
Execution time: 57.276 ms
(15 rows)
Changing the OR to UNION:
db1=# explain analyze select count(*) from (
select *
from conversations
join agents on conversations.agent_id=agents.id
where conversations.installation_id=1
union
select *
from conversations
join agents on conversations.agent_id=agents.id
where agents.source_id = '123') as subquery;
Query plan:
----------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=1114.31..1114.32 rows=1 width=8) (actual time=8.038..8.038 rows=1 loops=1)
-> HashAggregate (cost=1091.90..1101.86 rows=996 width=1437) (actual time=7.783..8.009 rows=390 loops=1)
Group Key: conversations.id, conversations.created, conversations.modified, conversations.source_created, conversations.source_id, conversations.installation_id, brain_conversation.resolution_reason, conversations.solve_time, conversations.agent_id, conversations.submission_reason, conversations.is_marked_as_duplicate, conversations.num_back_and_forths, conversations.is_closed, conversations.is_solved, conversations.conversation_type, conversations.related_ticket_source_id, conversations.channel, brain_conversation.last_updated_from_platform, conversations.csat, agents.id, agents.created, agents.modified, agents.name, agents.source_id, organization_agent.installation_id, agents.settings
-> Append (cost=219.68..1027.16 rows=996 width=1437) (actual time=5.517..6.307 rows=390 loops=1)
-> Hash Join (cost=219.68..649.69 rows=931 width=224) (actual time=5.516..6.063 rows=390 loops=1)
Hash Cond: (conversations.agent_id = agents.id)
-> Index Scan using conversations_installation_id_b3ff5c00 on conversations (cost=0.42..427.98 rows=931 width=154) (actual time=0.039..0.344 rows=879 loops=1)
Index Cond: (installation_id = 1)
-> Hash (cost=143.56..143.56 rows=6056 width=70) (actual time=5.394..5.394 rows=6057 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 710kB
-> Seq Scan on agents (cost=0.00..143.56 rows=6056 width=70) (actual time=0.014..1.938 rows=6057 loops=1)
-> Nested Loop (cost=0.70..367.52 rows=65 width=224) (actual time=0.210..0.211 rows=0 loops=1)
-> Index Scan using agents_source_id_106c8103_like on agents agents_1 (cost=0.28..8.30 rows=1 width=70) (actual time=0.210..0.210 rows=0 loops=1)
Index Cond: ((source_id)::text = '123'::text)
-> Index Scan using conversations_agent_id_de76554b on conversations conversations_1 (cost=0.42..358.12 rows=110 width=154) (never executed)
Index Cond: (agent_id = agents_1.id)
Planning time: 2.024 ms
Execution time: 9.367 ms
(18 rows)
Yes. OR has a way of killing the performance of queries. For this query:
select count(*)
from conversations c join
agents a
on c.agent_id = a.id
where c.id = 1 or a.id = 123;
Note that I removed the quotes around 123; it looks like a number, so I assume it is. For this query, you want an index on conversations(agent_id).
Probably the most effective way to write the query is:
select count(*)
from ((select 1
from conversations c join
agents a
on c.agent_id = a.id
where c.id = 1
) union all
(select 1
from conversations c join
agents a
on c.agent_id = a.id
where a.id = 123 and c.id <> 1
)
) ac;
Note the use of UNION ALL rather than UNION. The additional WHERE condition c.id <> 1 in the second branch eliminates the duplicates that UNION would otherwise have to remove.
This can take advantage of the following indexes:
conversations(id, agent_id)
agents(id)
conversations(agent_id, id)
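A sketch of those indexes; the names are placeholders, and agents(id) is normally covered by the primary key already:

CREATE INDEX conversations_id_agent_id_idx ON conversations (id, agent_id);
CREATE INDEX conversations_agent_id_id_idx ON conversations (agent_id, id);
-- Only needed if agents.id is not already the primary key
CREATE INDEX agents_id_idx ON agents (id);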
What is the difference between using ON and WHERE in a join inside a subquery when using an outer reference?
Consider these two SQL statements as an example (looking for 10 persons with tasks that are not closed, using person_task as a many-to-many relationship):
select p.name
from person p
where exists (
select 1
from person_task pt
join task t on pt.task_id = t.id
and t.state <> 'closed'
and pt.person_id = p.id -- ON
)
limit 10
select p.name
from person p
where exists (
select 1
from person_task pt
join task t on pt.task_id = t.id and t.state <> 'closed'
where pt.person_id = p.id -- WHERE
)
limit 10
They produce the same result, but the statement with ON is considerably faster.
Here are the corresponding EXPLAIN (ANALYZE) outputs:
-- USING ON
Limit (cost=0.00..270.98 rows=10 width=8) (actual time=10.412..60.876 rows=10 loops=1)
-> Seq Scan on person p (cost=0.00..28947484.16 rows=1068266 width=8) (actual time=10.411..60.869 rows=10 loops=1)
Filter: (SubPlan 1)
Rows Removed by Filter: 68
SubPlan 1
-> Nested Loop (cost=1.00..20257.91 rows=1632 width=0) (actual time=0.778..0.778 rows=0 loops=78)
-> Index Scan using person_taskx1 on person_task pt (cost=0.56..6551.27 rows=1632 width=8) (actual time=0.633..0.633 rows=0 loops=78)
Index Cond: (id = p.id)
-> Index Scan using taskxpk on task t (cost=0.44..8.40 rows=1 width=8) (actual time=1.121..1.121 rows=1 loops=10)
Index Cond: (id = pt.task_id)
Filter: (state <> 'open')
Planning Time: 0.466 ms
Execution Time: 60.920 ms
-- USING WHERE
Limit (cost=2818814.57..2841563.37 rows=10 width=8) (actual time=29.075..6884.259 rows=10 loops=1)
-> Merge Semi Join (cost=2818814.57..59308642.64 rows=24832 width=8) (actual time=29.075..6884.251 rows=10 loops=1)
Merge Cond: (p.id = pt.person_id)
-> Index Scan using personxpk on person p (cost=0.43..1440340.27 rows=2136533 width=16) (actual time=0.003..0.168 rows=18 loops=1)
-> Gather Merge (cost=1001.03..57357094.42 rows=40517669 width=8) (actual time=9.441..6881.180 rows=23747 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Nested Loop (cost=1.00..52679350.05 rows=16882362 width=8) (actual time=1.862..4207.577 rows=7938 loops=3)
-> Parallel Index Scan using person_taskx1 on person_task pt (cost=0.56..25848782.35 rows=16882362 width=16) (actual time=1.344..1807.664 rows=7938 loops=3)
-> Index Scan using taskxpk on task t (cost=0.44..1.59 rows=1 width=8) (actual time=0.301..0.301 rows=1 loops=23814)
Index Cond: (id = pt.task_id)
Filter: (state <> 'open')
Planning Time: 0.430 ms
Execution Time: 6884.349 ms
Should the ON clause therefore always be used for filtering values in a subquery join? Or what is going on?
I have used Postgres for this example.
The condition and pt.person_id = p.id doesn't refer to any column of the joined table t. In an inner join this doesn't make much sense semantically, and we can move this condition from ON to WHERE to make the query more readable.
You are right, hence, that the two queries are equivalent and should result in the same execution plan. As this is not the case, PostgreSQL seems to have a problem here with their optimizer.
In an outer join such a condition in ON can make sense and would be different from WHERE. I assume that this is the reason the optimizer finds a different plan for ON in general: once it detects the condition in ON, it goes another route, oblivious of the join type (so my assumption). I am surprised, though, that this leads to a better plan; I'd rather have expected a worse one.
This may indicate that the tables' statistics are not up to date. Please analyze the tables to make sure. Or it may be a sore spot in the optimizer code that the PostgreSQL developers might want to work on.
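A minimal sketch of that statistics refresh, assuming the table names from the question:

-- Recompute planner statistics for the three tables in the subquery
ANALYZE person;
ANALYZE person_task;
ANALYZE task;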
Here's my SQL, followed by the EXPLAIN ANALYZE output. I need to improve the performance. Any ideas?
PostgreSQL 9.3.12 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.1) 4.8.4, 64-bit
explain analyze
SELECT DISTINCT "apts"."id", practices.name AS alias_0
FROM "apts"
LEFT OUTER JOIN "patients" ON "patients"."id" = "apts"."patient_id"
LEFT OUTER JOIN "practices" ON "practices"."id" = "apts"."practice_id"
LEFT OUTER JOIN "eligibility_messages" ON "eligibility_messages"."apt_id" = "apts"."id"
WHERE (apts.eligibility_status_id != 1)
AND (eligibility_messages.current = 't')
AND (practices.id = '104')
ORDER BY practices.name desc
LIMIT 25 OFFSET 0
Limit (cost=881321.34..881321.41 rows=25 width=20) (actual time=2928.225..2928.227 rows=25 loops=1)
-> Sort (cost=881321.34..881391.94 rows=28240 width=20) (actual time=2928.223..2928.224 rows=25 loops=1)
Sort Key: practices.name
Sort Method: top-N heapsort Memory: 26kB
-> HashAggregate (cost=880242.03..880524.43 rows=28240 width=20) (actual time=2927.213..2927.319 rows=520 loops=1)
-> Nested Loop (cost=286614.55..880100.83 rows=28240 width=20) (actual time=206.180..2926.791 rows=520 loops=1)
-> Seq Scan on practices (cost=0.00..6.36 rows=1 width=20) (actual time=0.018..0.031 rows=1 loops=1)
Filter: (id = 104)
Rows Removed by Filter: 108
-> Hash Join (cost=286614.55..879812.07 rows=28240 width=8) (actual time=206.159..2926.643 rows=520 loops=1)
Hash Cond: (eligibility_messages.apt_id = apts.id)
-> Seq Scan on eligibility_messages (cost=0.00..561275.63 rows=2029532 width=4) (actual time=0.691..2766.867 rows=67559 loops=1)
Filter: current
Rows Removed by Filter: 3924633
-> Hash (cost=284614.02..284614.02 rows=115082 width=12) (actual time=121.957..121.957 rows=91660 loops=1)
Buckets: 16384 Batches: 2 Memory Usage: 1974kB
-> Bitmap Heap Scan on apts (cost=8296.88..284614.02 rows=115082 width=12) (actual time=19.927..91.038 rows=91660 loops=1)
Recheck Cond: (practice_id = 104)
Filter: (eligibility_status_id <> 1)
Rows Removed by Filter: 80169
-> Bitmap Index Scan on index_apts_on_practice_id (cost=0.00..8268.11 rows=177540 width=0) (actual time=16.856..16.856 rows=179506 loops=1)
Index Cond: (practice_id = 104)
Total runtime: 2928.361 ms
First, rewrite the query to a more manageable form:
SELECT DISTINCT a."id", pr.name AS alias_0
FROM "apts" a JOIN
"practices" pr
ON pr."id" = a."practice_id" JOIN
"eligibility_messages" em
ON em."apt_id" = a."id"
WHERE (a.eligibility_status_id <> 1) AND
(em.current = 't') AND
(a.practice_id = 104)
ORDER BY pr.name desc ;
Notes:
The WHERE clause turns the outer joins into inner joins anyway, so you might as well express them correctly.
I doubt pr.id is actually a string, so I dropped the quotes around 104.
The patients table isn't used, so I just removed it.
Perhaps you don't even need the select distinct any more.
Switched the condition in the WHERE clause to apts rather than practices.
If this isn't fast enough, you want indexes, probably on apts(practice_id, eligibility_status_id, id), practices(id), and eligibility_messages(apt_id, current), as sketched below.
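A sketch of those indexes; the names are placeholders, and practices(id) is normally covered by the primary key already:

CREATE INDEX apts_practice_status_id_idx
    ON apts (practice_id, eligibility_status_id, id);
CREATE INDEX eligibility_messages_apt_current_idx
    ON eligibility_messages (apt_id, current);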
The queries are run against a large table with 11 million rows. I had already run ANALYZE on the table before executing the queries.
Query 1:
SELECT *
FROM accounts t1
LEFT OUTER JOIN accounts t2
ON (t1.account_no = t2.account_no
AND t1.effective_date < t2.effective_date)
WHERE t2.account_no IS NULL;
Explain Analyze:
Hash Anti Join (cost=480795.57..1201111.40 rows=7369854 width=292) (actual time=29619.499..115662.111 rows=1977871 loops=1)
Hash Cond: ((t1.account_no)::text = (t2.account_no)::text)
Join Filter: ((t1.effective_date)::text < (t2.effective_date)::text)
-> Seq Scan on accounts t1 (cost=0.00..342610.81 rows=11054781 width=146) (actual time=0.025..25693.921 rows=11034070 loops=1)
-> Hash (cost=342610.81..342610.81 rows=11054781 width=146) (actual time=29612.925..29612.925 rows=11034070 loops=1)
Buckets: 2097152 Batches: 1 Memory Usage: 1834187kB
-> Seq Scan on accounts t2 (cost=0.00..342610.81 rows=11054781 width=146) (actual time=0.006..22929.635 rows=11034070 loops=1)
Total runtime: 115870.788 ms
The estimated cost is ~1.2 million and the actual time taken is ~1.9 minutes.
Query 2:
SELECT t1.*
FROM accounts t1
LEFT OUTER JOIN accounts t2
ON (t1.account_no = t2.account_no
AND t1.effective_date < t2.effective_date)
WHERE t2.account_no IS NULL;
Explain Analyze:
Hash Anti Join (cost=480795.57..1201111.40 rows=7369854 width=146) (actual time=13365.808..65519.402 rows=1977871 loops=1)
Hash Cond: ((t1.account_no)::text = (t2.account_no)::text)
Join Filter: ((t1.effective_date)::text < (t2.effective_date)::text)
-> Seq Scan on accounts t1 (cost=0.00..342610.81 rows=11054781 width=146) (actual time=0.007..5032.778 rows=11034070 loops=1)
-> Hash (cost=342610.81..342610.81 rows=11054781 width=18) (actual time=13354.219..13354.219 rows=11034070 loops=1)
Buckets: 2097152 Batches: 1 Memory Usage: 545369kB
-> Seq Scan on accounts t2 (cost=0.00..342610.81 rows=11054781 width=18) (actual time=0.011..8964.571 rows=11034070 loops=1)
Total runtime: 65705.707 ms
The estimated cost is ~1.2 million (again) but the actual time taken is <1.1 minutes.
Query 3:
SELECT *
FROM accounts
WHERE (account_no,
effective_date) IN
(SELECT account_no,
max(effective_date)
FROM accounts
GROUP BY account_no);
Explain Analyze:
Nested Loop (cost=406416.19..502216.84 rows=2763695 width=146) (actual time=31779.457..917543.228 rows=1977871 loops=1)
-> HashAggregate (cost=406416.19..406757.45 rows=34126 width=43) (actual time=31774.877..33378.968 rows=1977425 loops=1)
-> Subquery Scan on "ANY_subquery" (cost=397884.72..404709.90 rows=341259 width=43) (actual time=27979.226..29841.217 rows=1977425 loops=1)
-> HashAggregate (cost=397884.72..401297.31 rows=341259 width=18) (actual time=27979.224..29315.346 rows=1977425 loops=1)
-> Seq Scan on accounts (cost=0.00..342610.81 rows=11054781 width=18) (actual time=0.851..16092.755 rows=11034070 loops=1)
-> Index Scan using accounts_idx2 on accounts (cost=0.00..2.78 rows=1 width=146) (actual time=0.443..0.445 rows=1 loops=1977425)
Index Cond: (((account_no)::text = ("ANY_subquery".account_no)::text) AND ((effective_date)::text = "ANY_subquery".max))
Total runtime: 918039.614 ms
The estimated cost is ~502,000 but the actual time taken is ~15.3 minutes!
How reliable is the EXPLAIN output?
Do we always have to run EXPLAIN ANALYZE to see how a query will perform on real data, rather than trusting the cost the query planner estimates?
They are reliable, except for when they are not. You can't really generalize.
It looks like it is dramatically underestimating the number of distinct account_no values it will find (it expects 34126 but actually finds 1977425). Your default_statistics_target might not be high enough to get a good estimate for this column.
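A sketch of how you might raise the statistics target for just that column and refresh the estimate; the value 1000 is an arbitrary assumption, not a figure from the answer:

-- Collect a larger sample for account_no so n_distinct is estimated better
ALTER TABLE accounts ALTER COLUMN account_no SET STATISTICS 1000;
ANALYZE accounts;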