COUNT(*) is too slow in PostgreSQL

I am observing that COUNT(*) is not an optimised query when it comes to deeply nested SQL.
Here's the SQL I am working with:
SELECT COUNT(*) FROM "items"
INNER JOIN (
    SELECT c.* FROM companies c
    LEFT OUTER JOIN company_groups ON c.id = company_groups.company_id
    WHERE company_groups.has_restriction IS NULL
       OR company_groups.has_restriction = 'f'
       OR company_groups.company_id = 1999
       OR company_groups.group_id IN ('3', '2')
    GROUP BY c.id
) AS companies ON companies.id = stock_items.vendor_id
LEFT OUTER JOIN favs ON items.id = favs.item_id
    AND favs.user_id = 999
    AND favs.is_visible = TRUE
WHERE "items"."type" IN ('Fashion')
  AND "items"."visibility" = 't'
  AND "items"."is_hidden" = 'f'
  AND (items.depth IS NULL OR (items.depth >= '0' AND items.depth <= '100'))
  AND (items.table IS NULL OR (items.table >= '0' AND items.table <= '100'))
  AND (items.company_id NOT IN (199, 200, 201))
This query takes 4084.8 ms to count about 0.35 million records in the database.
I am using Rails as the framework, so whenever I call results.count, the SQL I compose fires a COUNT version of the original query.
Since I am using LIMIT and OFFSET, the basic results load in less than 32.0 ms (which is very fast).
Here's the output of EXPLAIN ANALYZE:
Merge Join (cost=70743.22..184962.02 rows=7540499 width=4) (actual time=4018.351..4296.963 rows=360323 loops=1)
  Merge Cond: (c.id = items.company_id)
  ->  Group (cost=0.56..216.21 rows=4515 width=4) (actual time=0.357..5.165 rows=4501 loops=1)
        Group Key: c.id
        ->  Merge Left Join (cost=0.56..204.92 rows=4515 width=4) (actual time=0.303..2.590 rows=4504 loops=1)
              Merge Cond: (c.id = company_groups.company_id)
              Filter: ((company_groups.has_restriction IS NULL) OR (NOT company_groups.has_restriction) OR (company_groups.company_id = 1999) OR (company_groups.group_id = ANY ('{3,2}'::integer[])))
              Rows Removed by Filter: 10
              ->  Index Only Scan using companies_pkey on companies c (cost=0.28..128.10 rows=4521 width=4) (actual time=0.155..0.941 rows=4508 loops=1)
                    Heap Fetches: 3
              ->  Index Scan using index_company_groups_on_company_id on company_groups (cost=0.28..50.14 rows=879 width=9) (actual time=0.141..0.480 rows=878 loops=1)
  ->  Materialize (cost=70742.66..72421.11 rows=335690 width=8) (actual time=4017.964..4216.381 rows=362180 loops=1)
        ->  Sort (cost=70742.66..71581.89 rows=335690 width=8) (actual time=4017.955..4140.168 rows=362180 loops=1)
              Sort Key: items.company_id
              Sort Method: external merge  Disk: 6352kB
              ->  Hash Left Join (cost=1.05..35339.74 rows=335690 width=8) (actual time=0.617..3588.634 rows=362180 loops=1)
                    Hash Cond: (items.id = favs.item_id)
                    ->  Seq Scan on items (cost=0.00..34079.84 rows=335690 width=8) (actual time=0.504..3447.355 rows=362180 loops=1)
                          Filter: (visibility AND (NOT is_hidden) AND ((type)::text = 'Fashion'::text) AND (company_id <> ALL ('{199,200,201}'::integer[])) AND ((depth IS NULL) OR ((depth >= '0'::numeric) AND (depth <= '100'::nume (...)
                          Rows Removed by Filter: 5814
                    ->  Hash (cost=1.04..1.04 rows=1 width=4) (actual time=0.009..0.009 rows=0 loops=1)
                          Buckets: 1024  Batches: 1  Memory Usage: 8kB
                          ->  Seq Scan on favs (cost=0.00..1.04 rows=1 width=4) (actual time=0.008..0.008 rows=0 loops=1)
                                Filter: (is_visible AND (user_id = 999))
                                Rows Removed by Filter: 3
Planning time: 3.526 ms
Execution time: 4397.849 ms
Please advise on how I can make this faster!
P.S.: All the relevant columns (type, visibility, is_hidden, table, depth, etc.) are indexed.
Thanks in advance!

Well, you have two parts that select everything (SELECT *) in your query. Maybe you could limit that and see if it helps. For example:
SELECT COUNT(OneSpecificColumn)
FROM "items"
INNER JOIN
    (SELECT c.AnotherSpecificColumn
     FROM companies c
     LEFT OUTER JOIN company_groups ON c.id = company_groups.company_id
     WHERE company_groups.has_restriction IS NULL
        OR company_groups.has_restriction = 'f'
        OR company_groups.company_id = 1999
        OR company_groups.group_id IN ('3', '2')
     GROUP BY c.id) AS companies ON companies.id = stock_items.vendor_id
LEFT OUTER JOIN favs ON items.id = favs.item_id
    AND favs.user_id = 999
    AND favs.is_visible = TRUE
WHERE "items"."type" IN ('Fashion')
  AND "items"."visibility" = 't'
  AND "items"."is_hidden" = 'f'
  AND (items.depth IS NULL OR (items.depth >= '0' AND items.depth <= '100'))
  AND (items.table IS NULL OR (items.table >= '0' AND items.table <= '100'))
  AND (items.company_id NOT IN (199, 200, 201))
You could also check whether those left joins are all necessary; inner joins are less costly and may speed up your search.

The lion's share of the time is spent in the sequential scan of items, and that cannot be improved, because you need almost all of the rows in the table.
So the only ways to improve the query are to:
- see that items is cached in memory (a sketch follows below)
- get faster storage
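If you want to warm the cache explicitly rather than wait for regular traffic to do it, the pg_prewarm contrib module can load a table into shared buffers on demand. A minimal sketch, assuming the contrib modules are installed on your server:
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('items');  -- reads the whole items table into shared buffers
Whether this helps depends on shared_buffers being large enough to keep the table resident alongside your other hot data.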

Related

SQL: JOIN inside correlated subquery (SEMI JOIN)

I need to JOIN a table inside a correlated subquery. However, the query plan Postgres chooses is very slow. How can I optimize the following query?
SELECT c.id
FROM customer c
WHERE EXISTS (
    SELECT 1
    FROM customer_communication cc
    JOIN communication co ON co.id = cc.communication_id AND co.channel <> 'mobile'
    WHERE cc.user_id = c.id
)
This is the EXPLAIN (ANALYZE) result:
Nested Loop (cost=3451561.57..3539012.42 rows=24509 width=8) (actual time=60913.294..64056.970 rows=1036309 loops=1)
  ->  HashAggregate (cost=3451561.14..3451806.23 rows=24509 width=8) (actual time=60913.264..61187.702 rows=1036310 loops=1)
        Group Key: cc.customer_id
        ->  Hash Join (cost=2070834.75..3358538.60 rows=37209016 width=8) (actual time=32758.325..52752.383 rows=37209019 loops=1)
              Hash Cond: (cc.communication_id = co.id)
              ->  Seq Scan on customer_communication cc (cost=0.00..755689.16 rows=37209016 width=16) (actual time=0.011..4949.315 rows=37209019 loops=1)
              ->  Hash (cost=1772758.38..1772758.38 rows=18168430 width=8) (actual time=32756.662..32756.663 rows=18108924 loops=1)
                    Buckets: 262144  Batches: 128  Memory Usage: 7557kB
                    ->  Seq Scan on communication co (cost=0.00..1772758.38 rows=18168430 width=8) (actual time=0.007..30024.494 rows=18108924 loops=1)
                          Filter: (channel <> 'mobile')
  ->  Index Only Scan using customerxpk on customer c (cost=0.43..3.60 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=1036310)
        Index Cond: (id = cc.customer_id)
        Heap Fetches: 525050
Planning Time: 0.391 ms
Execution Time: 64094.584 ms
I think you have mis-specified the query, because you have conflicting aliases. This might be better:
SELECT c.id
FROM customer c
WHERE EXISTS (SELECT 1
              FROM customer_communication cc JOIN
                   communication co
                   ON co.id = cc.communication_id AND
                      co.channel <> 'mobile'
              WHERE cc.user_id = c.id
             );
Note that in the subquery c refers to the outer query's customer and co refers to communication.
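If the column is actually named customer_id, as the plan's Group Key: cc.customer_id suggests, the correlated condition would read as follows (a guess based on the plan, not on your schema):
SELECT c.id
FROM customer c
WHERE EXISTS (SELECT 1
              FROM customer_communication cc
              JOIN communication co
                ON co.id = cc.communication_id
               AND co.channel <> 'mobile'
              WHERE cc.customer_id = c.id
             );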

SQL: ON vs. WHERE in sub JOIN

What is the difference between using ON and WHERE in a sub join when using an outer reference?
Consider these two SQL statements as an example (looking for 10 persons with non-closed tasks; person_task is a many-to-many link table):
select p.name
from person p
where exists (
    select 1
    from person_task pt
    join task t on pt.task_id = t.id
               and t.state <> 'closed'
               and pt.person_id = p.id -- ON
)
limit 10

select p.name
from person p
where exists (
    select 1
    from person_task pt
    join task t on pt.task_id = t.id and t.state <> 'closed'
    where pt.person_id = p.id -- WHERE
)
limit 10
They produce the same result but the statement with ON is considerably faster.
Here are the corresponding EXPLAIN (ANALYZE) outputs:
-- USING ON
Limit (cost=0.00..270.98 rows=10 width=8) (actual time=10.412..60.876 rows=10 loops=1)
  ->  Seq Scan on person p (cost=0.00..28947484.16 rows=1068266 width=8) (actual time=10.411..60.869 rows=10 loops=1)
        Filter: (SubPlan 1)
        Rows Removed by Filter: 68
        SubPlan 1
          ->  Nested Loop (cost=1.00..20257.91 rows=1632 width=0) (actual time=0.778..0.778 rows=0 loops=78)
                ->  Index Scan using person_taskx1 on person_task pt (cost=0.56..6551.27 rows=1632 width=8) (actual time=0.633..0.633 rows=0 loops=78)
                      Index Cond: (id = p.id)
                ->  Index Scan using taskxpk on task t (cost=0.44..8.40 rows=1 width=8) (actual time=1.121..1.121 rows=1 loops=10)
                      Index Cond: (id = pt.task_id)
                      Filter: (state <> 'open')
Planning Time: 0.466 ms
Execution Time: 60.920 ms

-- USING WHERE
Limit (cost=2818814.57..2841563.37 rows=10 width=8) (actual time=29.075..6884.259 rows=10 loops=1)
  ->  Merge Semi Join (cost=2818814.57..59308642.64 rows=24832 width=8) (actual time=29.075..6884.251 rows=10 loops=1)
        Merge Cond: (p.id = pt.person_id)
        ->  Index Scan using personxpk on person p (cost=0.43..1440340.27 rows=2136533 width=16) (actual time=0.003..0.168 rows=18 loops=1)
        ->  Gather Merge (cost=1001.03..57357094.42 rows=40517669 width=8) (actual time=9.441..6881.180 rows=23747 loops=1)
              Workers Planned: 2
              Workers Launched: 2
              ->  Nested Loop (cost=1.00..52679350.05 rows=16882362 width=8) (actual time=1.862..4207.577 rows=7938 loops=3)
                    ->  Parallel Index Scan using person_taskx1 on person_task pt (cost=0.56..25848782.35 rows=16882362 width=16) (actual time=1.344..1807.664 rows=7938 loops=3)
                    ->  Index Scan using taskxpk on task t (cost=0.44..1.59 rows=1 width=8) (actual time=0.301..0.301 rows=1 loops=23814)
                          Index Cond: (id = pt.task_id)
                          Filter: (state <> 'open')
Planning Time: 0.430 ms
Execution Time: 6884.349 ms
Should the ON clause therefore always be used for filtering values in a sub JOIN? Or what is going on?
I have used Postgres for this example.
The condition and pt.person_id = p.id doesn't refer to any column of the joined table t. In an inner join this doesn't make much sense semantically, and we can move the condition from ON to WHERE to make the query more readable.
You are right, then, that the two queries are equivalent and should result in the same execution plan. As this is not the case, PostgreSQL seems to have a problem here in its optimizer.
In an outer join such a condition in ON can make sense and would behave differently from one in WHERE. I assume that this is the reason the optimizer takes a different route once it detects a condition in ON, oblivious of the join type (or so I assume). I am surprised, though, that this leads to a better plan; I'd rather have expected a worse one.
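To illustrate the semantic difference in an outer join, here is a generic sketch using the question's tables (the filter on task_id is made up for the example):
-- Condition in ON: persons without a qualifying row are kept, with NULLs for pt
SELECT p.name, pt.task_id
FROM person p
LEFT JOIN person_task pt ON pt.person_id = p.id AND pt.task_id > 100;

-- Condition in WHERE: the NULL-extended rows are filtered out again,
-- effectively turning the outer join into an inner join
SELECT p.name, pt.task_id
FROM person p
LEFT JOIN person_task pt ON pt.person_id = p.id
WHERE pt.task_id > 100;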
This may indicate that the tables' statistics are not up to date; please analyze the tables to make sure. Or it may be a sore spot in the optimizer code that the PostgreSQL developers might want to work on.
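Refreshing the statistics is cheap to try; a minimal sketch for the tables involved:
ANALYZE person;
ANALYZE person_task;
ANALYZE task;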

possible to speed up this query?

I have the following query which takes a little too long to execute. I have posted the EXPLAIN ANALYZE for the query. Anything I can do to improve its speed?
EXPLAIN ANALYZE
SELECT c.*, match.user_json
FROM match
INNER JOIN conversation c ON match.match_id = c.match_id
WHERE c.from_id <> 142822281
  AND c.to_id = 142822281
  AND c.unix_timestamp = (SELECT max(unix_timestamp)
                          FROM conversation
                          WHERE match_id = c.match_id
                          GROUP BY match_id)
EXPLAIN ANALYZE results
Nested Loop (cost=0.00..16183710.79 rows=2 width=805) (actual time=2455.133..2502.781 rows=34 loops=1)
  Join Filter: (match.match_id = c.match_id)
  Rows Removed by Join Filter: 71502
  ->  Seq Scan on match (cost=0.00..268.51 rows=2151 width=723) (actual time=0.006..4.973 rows=2104 loops=1)
  ->  Materialize (cost=0.00..16183377.75 rows=2 width=90) (actual time=0.034..1.168 rows=34 loops=2104)
        ->  Seq Scan on conversation c (cost=0.00..16183377.74 rows=2 width=90) (actual time=70.972..2421.949 rows=34 loops=1)
              Filter: ((from_id <> 142822281) AND (to_id = 142822281) AND (unix_timestamp = (SubPlan 1)))
              Rows Removed by Filter: 22010
              SubPlan 1
                ->  GroupAggregate (cost=0.00..739.64 rows=10 width=16) (actual time=5.358..5.358 rows=1 loops=450)
                      Group Key: conversation.match_id
                      ->  Seq Scan on conversation (cost=0.00..739.49 rows=10 width=16) (actual time=3.355..5.320 rows=17 loops=450)
                            Filter: (match_id = c.match_id)
                            Rows Removed by Filter: 22027
Planning Time: 1.132 ms
Execution Time: 2502.926 ms
This is your query:
SELECT c.*, m.user_json
FROM match m INNER JOIN
     conversation c
     ON m.match_id = c.match_id
WHERE c.from_id <> 142822281 AND
      c.to_id = 142822281 AND
      c.unix_timestamp = (SELECT max(c2.unix_timestamp)
                          FROM conversation c2
                          WHERE c2.match_id = c.match_id
                          GROUP BY c2.match_id
                         );
I would suggest writing it as:
SELECT DISTINCT ON (c.match_id) c.*, m.user_json
FROM match m INNER JOIN
     conversation c
     ON m.match_id = c.match_id
WHERE c.from_id <> 142822281 AND
      c.to_id = 142822281
ORDER BY c.match_id, c.unix_timestamp DESC;
Then try an index on: conversation(to_id, from_id, match_id). I assume you have an index on match(match_id).
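A sketch of that suggested index (the index name is made up):
CREATE INDEX conversation_to_from_match_idx
    ON conversation (to_id, from_id, match_id);
With to_id leading, the equality predicate c.to_id = 142822281 can drive an index scan instead of the sequential scan on conversation.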

Improve performance on SQL query with Nested Loop - PostgreSQL

I am using PostgreSQL and I have a weird problem with my SQL query. Depending on which date parameter I use, my query doesn't perform the same operations.
This is my working query:
SELECT DISTINCT app.id_application
FROM stat sj
LEFT OUTER JOIN groupe gp ON gp.id_groupe = sj.id_groupe
LEFT OUTER JOIN application app ON app.id_application = gp.id_application
WHERE date_stat >= '2016/3/01'
AND date_stat <= '2016/3/31'
AND ( date_stat = date_gen-1 or (date_gen = '2016/04/01' AND date_stat = '2016/3/31'))
AND app.id_application IS NOT NULL
This query takes around 2 seconds (which is OK for me, because I have lots of rows). When I run EXPLAIN ANALYZE on this query I get this:
HashAggregate (cost=375486.95..375493.62 rows=667 width=4) (actual time=2320.541..2320.656 rows=442 loops=1)
  ->  Hash Join (cost=254.02..375478.99 rows=3186 width=4) (actual time=6.144..2271.984 rows=263274 loops=1)
        Hash Cond: (gp.id_application = app.id_application)
        ->  Hash Join (cost=234.01..375415.17 rows=3186 width=4) (actual time=5.926..2200.671 rows=263274 loops=1)
              Hash Cond: (sj.id_groupe = gp.id_groupe)
              ->  Seq Scan on stat sj (cost=0.00..375109.47 rows=3186 width=8) (actual time=3.196..2068.357 rows=263274 loops=1)
                    Filter: ((date_stat >= '2016-03-01'::date) AND (date_stat <= '2016-03-31'::date) AND ((date_stat = (date_gen - 1)) OR ((date_gen = '2016-04-01'::date) AND (date_stat = '2016-03-31'::date))))
                    Rows Removed by Filter: 7199514
              ->  Hash (cost=133.45..133.45 rows=8045 width=12) (actual time=2.677..2.677 rows=8019 loops=1)
                    Buckets: 1024  Batches: 1  Memory Usage: 345kB
                    ->  Seq Scan on groupe gp (cost=0.00..133.45 rows=8045 width=12) (actual time=0.007..1.284 rows=8019 loops=1)
        ->  Hash (cost=11.67..11.67 rows=667 width=4) (actual time=0.206..0.206 rows=692 loops=1)
              Buckets: 1024  Batches: 1  Memory Usage: 25kB
              ->  Seq Scan on application app (cost=0.00..11.67 rows=667 width=4) (actual time=0.007..0.101 rows=692 loops=1)
                    Filter: (id_application IS NOT NULL)
Total runtime: 2320.855 ms
Now, when I try the same query for the current month (we are the 6th of April, so I'm trying to get all the application_ids of April):
SELECT DISTINCT app.id_application
FROM stat sj
LEFT OUTER JOIN groupe gp ON gp.id_groupe = sj.id_groupe
LEFT OUTER JOIN application app ON app.id_application = gp.id_application
WHERE date_stat >= '2016/04/01'
AND date_stat <= '2016/04/30'
AND ( date_stat = date_gen-1 or ( date_gen = '2016/05/01' AND date_stat = '2016/04/30'))
AND app.id_application IS NOT NULL
This query now takes 120 seconds. So I also ran EXPLAIN ANALYZE on it, and now it doesn't use the same operations:
HashAggregate (cost=375363.50..375363.51 rows=1 width=4) (actual time=186716.468..186716.532 rows=490 loops=1)
  ->  Nested Loop (cost=0.00..375363.49 rows=1 width=4) (actual time=1.945..186619.404 rows=118990 loops=1)
        Join Filter: (gp.id_application = app.id_application)
        Rows Removed by Join Filter: 82222090
        ->  Nested Loop (cost=0.00..375343.49 rows=1 width=4) (actual time=1.821..171458.237 rows=118990 loops=1)
              Join Filter: (sj.id_groupe = gp.id_groupe)
              Rows Removed by Join Filter: 954061820
              ->  Seq Scan on stat sj (cost=0.00..375109.47 rows=1 width=8) (actual time=0.235..1964.423 rows=118990 loops=1)
                    Filter: ((date_stat >= '2016-04-01'::date) AND (date_stat <= '2016-04-30'::date) AND ((date_stat = (date_gen - 1)) OR ((date_gen = '2016-05-01'::date) AND (date_stat = '2016-04-30'::date))))
                    Rows Removed by Filter: 7343798
              ->  Seq Scan on groupe gp (cost=0.00..133.45 rows=8045 width=12) (actual time=0.002..0.736 rows=8019 loops=118990)
        ->  Seq Scan on application app (cost=0.00..11.67 rows=667 width=4) (actual time=0.003..0.073 rows=692 loops=118990)
              Filter: (id_application IS NOT NULL)
Total runtime: 186716.635 ms
So I decided to track down where the problem came from by reducing the number of conditions in my query until performance was acceptable again.
So with only this condition:
WHERE date_stat >= '2016/04/01'
it takes only 1.9 seconds (like the first working query),
and it also works with 2 conditions:
WHERE date_stat >= '2016/04/01'
AND app.id_application IS NOT NULL
BUT when I try to add either of these lines, I get the nested loop in the EXPLAIN output:
AND date_stat <= '2016/04/30'
AND ( date_stat = date_gen-1 or ( date_gen = '2016/05/01' AND date_stat = '2016/04/30'))
Does anyone have an idea where this could come from?
OK, it looks like there's a problem with the optimizer's estimates. It thinks that for April there will be only 1 row, so it chooses a NESTED LOOP, which is very inefficient for a big number of rows (118,990 in this case). To fix it:
- Perform VACUUM ANALYZE on every table. This will clean up dead tuples and refresh the statistics.
- Consider adding an index based on the dates, like CREATE INDEX date_stat_idx ON <table with date_stat> USING btree (date_stat);
- Rerun the query.
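Judging by the plan (Seq Scan on stat sj), the table holding date_stat is stat, so concretely that would be something like:
VACUUM ANALYZE stat;
CREATE INDEX date_stat_idx ON stat USING btree (date_stat);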

Optimizing postgres query

QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
Unique (cost=32164.87..32164.89 rows=1 width=44) (actual time=221552.831..221552.831 rows=0 loops=1)
  ->  Sort (cost=32164.87..32164.87 rows=1 width=44) (actual time=221552.827..221552.827 rows=0 loops=1)
        Sort Key: t.date_effective, t.acct_account_transaction_id, p.method, t.amount, c.business_name, t.amount
        ->  Nested Loop (cost=22871.67..32164.86 rows=1 width=44) (actual time=221552.808..221552.808 rows=0 loops=1)
              ->  Nested Loop (cost=22871.67..32160.37 rows=1 width=52) (actual time=221431.071..221546.619 rows=670 loops=1)
                    ->  Nested Loop (cost=22871.67..32157.33 rows=1 width=43) (actual time=221421.218..221525.056 rows=2571 loops=1)
                          ->  Hash Join (cost=22871.67..32152.80 rows=1 width=16) (actual time=221307.382..221491.019 rows=2593 loops=1)
                                Hash Cond: ("outer".acct_account_id = "inner".acct_account_fk)
                                ->  Seq Scan on acct_account a (cost=0.00..7456.08 rows=365008 width=8) (actual time=0.032..118.369 rows=61295 loops=1)
                                ->  Hash (cost=22871.67..22871.67 rows=1 width=16) (actual time=221286.733..221286.733 rows=2593 loops=1)
                                      ->  Nested Loop Left Join (cost=0.00..22871.67 rows=1 width=16) (actual time=1025.396..221266.357 rows=2593 loops=1)
                                            Join Filter: ("inner".orig_acct_payment_fk = "outer".acct_account_transaction_id)
                                            Filter: ("inner".link_type IS NULL)
                                            ->  Seq Scan on acct_account_transaction t (cost=0.00..18222.98 rows=1 width=16) (actual time=949.081..976.432 rows=2596 loops=1)
                                                  Filter: ((("type")::text = 'debit'::text) AND ((transaction_status)::text = 'active'::text) AND (date_effective >= '2012-03-01'::date) AND (date_effective < '2012-04-01 00:00:00'::timestamp without time zone))
                                            ->  Seq Scan on acct_payment_link l (cost=0.00..4648.68 rows=1 width=15) (actual time=1.073..84.610 rows=169 loops=2596)
                                                  Filter: ((link_type)::text ~~ 'return_%'::text)
                          ->  Index Scan using contact_pk on contact c (cost=0.00..4.52 rows=1 width=27) (actual time=0.007..0.008 rows=1 loops=2593)
                                Index Cond: (c.contact_id = "outer".contact_fk)
                    ->  Index Scan using acct_payment_transaction_fk on acct_payment p (cost=0.00..3.02 rows=1 width=13) (actual time=0.005..0.005 rows=0 loops=2571)
                          Index Cond: (p.acct_account_transaction_fk = "outer".acct_account_transaction_id)
                          Filter: ((method)::text <> 'trade'::text)
              ->  Index Scan using contact_role_pk on contact_role (cost=0.00..4.48 rows=1 width=4) (actual time=0.007..0.007 rows=0 loops=670)
                    Index Cond: ("outer".contact_id = contact_role.contact_fk)
                    Filter: (exchange_fk = 74)
Total runtime: 221553.019 ms
Your problem is here:
->  Nested Loop Left Join (cost=0.00..22871.67 rows=1 width=16) (actual time=1025.396..221266.357 rows=2593 loops=1)
      Join Filter: ("inner".orig_acct_payment_fk = "outer".acct_account_transaction_id)
      Filter: ("inner".link_type IS NULL)
      ->  Seq Scan on acct_account_transaction t (cost=0.00..18222.98 rows=1 width=16) (actual time=949.081..976.432 rows=2596 loops=1)
            Filter: ((("type")::text = 'debit'::text) AND ((transaction_status)::text = 'active'::text) AND (date_effective >= '2012-03-01'::date) AND (date_effective
      ->  Seq Scan on acct_payment_link l (cost=0.00..4648.68 rows=1 width=15) (actual time=1.073..84.610 rows=169 loops=2596)
            Filter: ((link_type)::text ~~ 'return_%'::text)
It expects to find 1 row in acct_account_transaction, while it finds 2596, and similarly for the other table.
You did not mention your Postgres version (could you?), but this should do the trick:
SELECT DISTINCT
t.date_effective,
t.acct_account_transaction_id,
p.method,
t.amount,
c.business_name,
t.amount
FROM
contact c inner join contact_role on (c.contact_id=contact_role.contact_fk and contact_role.exchange_fk=74),
acct_account a, acct_payment p,
acct_account_transaction t
WHERE
p.acct_account_transaction_fk=t.acct_account_transaction_id
and t.type = 'debit'
and transaction_status = 'active'
and p.method != 'trade'
and t.date_effective >= '2012-03-01'
and t.date_effective < (date '2012-03-01' + interval '1 month')
and c.contact_id=a.contact_fk and a.acct_account_id = t.acct_account_fk
and not exists(
select * from acct_payment_link l
where l.orig_acct_payment_fk = t.acct_account_transaction_id
and link_type like 'return_%'
)
ORDER BY
t.date_effective DESC
Also, try setting an appropriate statistics target for the relevant columns. Link to the friendly manual: http://www.postgresql.org/docs/current/static/sql-altertable.html
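A sketch of what that could look like for the columns with the worst estimates (the target value 1000 is just an example):
ALTER TABLE acct_account_transaction ALTER COLUMN type SET STATISTICS 1000;
ALTER TABLE acct_account_transaction ALTER COLUMN date_effective SET STATISTICS 1000;
ANALYZE acct_account_transaction;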
What are your indexes, and have you analyzed lately? It's doing a table scan on acct_account_transaction even though there are several criteria on that table:
- type
- date_effective
If there are no indexes on those columns, then a compound one on (type, date_effective) could help, assuming there are lots of rows that don't meet the criteria on those columns (see the sketch below).
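For example (the index name is made up):
CREATE INDEX acct_account_transaction_type_date_idx
    ON acct_account_transaction (type, date_effective);
The equality condition on type plus the range condition on date_effective fits a btree index with exactly this column order.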
I've removed my first suggestion, as it changed the nature of the query.
I see that there's too much time spent in the LEFT JOIN.
The first thing to try is to make only a single scan of the acct_payment_link table. Could you try rewriting your query to:
... LEFT JOIN (SELECT * FROM acct_payment_link
WHERE link_type LIKE 'return_%') AS l ...
You should check your statistics, as there's a difference between the planned and returned numbers of rows.
You haven't included the tables' and indexes' definitions; it'd be good to take a look at those.
You might also want to use the contrib/pg_trgm extension to build an index on acct_payment_link.link_type, but I would make this the last option to try out (a sketch follows below).
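A sketch of such a trigram index, assuming the pg_trgm extension can be installed:
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX acct_payment_link_link_type_trgm_idx
    ON acct_payment_link USING gin (link_type gin_trgm_ops);
Note that for a left-anchored pattern like 'return_%', a plain btree index with text_pattern_ops would work as well.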
BTW, what is the PostgreSQL version you're using?
Your statement rewritten and formatted:
SELECT DISTINCT
t.date_effective,
t.acct_account_transaction_id,
p.method,
t.amount,
c.business_name,
t.amount
FROM contact c
JOIN contact_role cr ON cr.contact_fk = c.contact_id
JOIN acct_account a ON a.contact_fk = c.contact_id
JOIN acct_account_transaction t ON t.acct_account_fk = a.acct_account_id
JOIN acct_payment p ON p.acct_account_transaction_fk = t.acct_account_transaction_id
LEFT JOIN acct_payment_link l
       ON orig_acct_payment_fk = acct_account_transaction_id -- missing table-qualification!
      AND link_type like 'return_%'                          -- missing table-qualification!
WHERE transaction_status = 'active'                          -- missing table-qualification!
AND cr.exchange_fk = 74
AND t.type = 'debit'
AND t.date_effective >= '2012-03-01'
AND t.date_effective < (date '2012-03-01' + interval '1 month')
AND p.method != 'trade'
AND l.link_type IS NULL
ORDER BY t.date_effective DESC;
Explicit JOIN statements are preferable. I reordered your tables according to your JOIN logic.
Why (date '2012-03-01' + interval '1 month') instead of date '2012-04-01'?
Some table qualifications are missing. In a complex statement like this, that's bad style; it may be hiding a mistake.
The key to performance are indexes where appropriate, proper configuration of PostgreSQL and accurate statistics.
General advice on performance tuning in the PostgreSQL wiki.