Why does this GROUP BY cause a full table scan?

I have a table of projects and a table of tasks, with each task referencing a single project. I want to get a list of projects sorted by their due dates along with the number of tasks in each project. I can pretty easily write this query two ways. First, using JOIN and GROUP BY:
SELECT p.name, p.due_date, COUNT(t.id) as num_tasks
FROM projects p
LEFT OUTER JOIN tasks t ON t.project_id = p.id
GROUP BY p.id
ORDER BY p.due_date ASC LIMIT 20;
Second, using a subquery:
SELECT p.name, p.due_date,
(SELECT COUNT(*) FROM tasks t WHERE t.project_id = p.id) AS num_tasks
FROM projects p
ORDER BY p.due_date ASC LIMIT 20;
I'm using PostgreSQL 10, and I've got indices on projects.id, projects.due_date and tasks.project_id. Why does the first query using the GROUP BY clause do a full table scan while the second query makes proper use of the indices? It seems like these should compile down to the same thing.
Note that if I remove the GROUP BY and the COUNT(t.id) from the first query it will run quickly, just with lots of duplicate rows. So the problem is with the GROUP BY clause, not the JOIN. This seems like it's about the simplest GROUP BY one could do, so I'd like to understand if/how to make it more efficient before moving on to more complicated queries.
Edit — here's the result of EXPLAIN ANALYZE. First query:
Limit (cost=41919.58..41919.63 rows=20 width=53) (actual time=1046.762..1046.771 rows=20 loops=1)
-> Sort (cost=41919.58..42169.58 rows=100000 width=53) (actual time=1046.760..1046.765 rows=20 loops=1)
Sort Key: p.due_date
Sort Method: top-N heapsort Memory: 29kB
-> GroupAggregate (cost=0.71..39258.62 rows=100000 width=53) (actual time=0.109..1002.890 rows=100000 loops=1)
Group Key: p.id
-> Merge Left Join (cost=0.71..35758.62 rows=500000 width=49) (actual time=0.072..807.603 rows=500702 loops=1)
Merge Cond: (p.id = t.project_id)
-> Index Scan using projects_pkey on projects p (cost=0.29..3542.29 rows=100000 width=45) (actual time=0.025..38.363 rows=100000 loops=1)
-> Index Scan using project_id_idx on tasks t (cost=0.42..25716.33 rows=500000 width=8) (actual time=0.038..531.097 rows=500000 loops=1)
Planning Time: 0.573 ms
Execution Time: 1046.934 ms
Second query:
Limit (cost=0.29..92.61 rows=20 width=49) (actual time=0.079..0.443 rows=20 loops=1)
-> Index Scan using project_date_idx on projects p (cost=0.29..461594.09 rows=100000 width=49) (actual time=0.076..0.432 rows=20 loops=1)
SubPlan 1
-> Aggregate (cost=4.54..4.55 rows=1 width=8) (actual time=0.015..0.016 rows=1 loops=20)
-> Index Only Scan using project_id_idx on tasks t (cost=0.42..4.53 rows=6 width=0) (actual time=0.009..0.011 rows=5 loops=20)
Index Cond: (project_id = p.id)
Heap Fetches: 0
Planning Time: 0.284 ms
Execution Time: 0.551 ms
And if anyone wants to try to exactly reproduce this, here's my setup:
CREATE TABLE projects (
id serial NOT NULL PRIMARY KEY,
name varchar(100) NOT NULL,
due_date timestamp NOT NULL
);
CREATE TABLE tasks (
id serial NOT NULL PRIMARY KEY,
project_id integer NOT NULL,
data real NOT NULL
);
INSERT INTO projects (name, due_date) SELECT
md5(random()::text),
timestamp '2020-01-01 00:00:00' +
random() * (timestamp '2030-01-01 20:00:00' - timestamp '2020-01-01 10:00:00')
FROM generate_series(1, 100000);
INSERT INTO tasks (project_id, data)
SELECT CAST(1 + random()*99999 AS integer), random()
FROM generate_series(1, 500000);
CREATE INDEX project_date_idx ON projects ("due_date");
CREATE INDEX project_id_idx ON tasks ("project_id");
ALTER TABLE tasks ADD CONSTRAINT task_foreignkey FOREIGN KEY ("project_id") REFERENCES "projects" ("id") DEFERRABLE INITIALLY DEFERRED;
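Incidentally, the plans above show where the time goes: the GROUP BY version has to aggregate all 100,000 groups before the ORDER BY ... LIMIT 20 can pick the first twenty, whereas the subquery version walks project_date_idx, stops after 20 rows, and runs 20 cheap index-only counts. One way to express that shape explicitly is to apply the LIMIT before aggregating, for example with a LATERAL subquery. A minimal sketch against the schema above (the planner may of course still choose differently on your data):
SELECT p.name, p.due_date, t.num_tasks
FROM (
    -- Pick the 20 projects first, straight off project_date_idx...
    SELECT id, name, due_date
    FROM projects
    ORDER BY due_date ASC
    LIMIT 20
) p
LEFT JOIN LATERAL (
    -- ...then count tasks only for those 20 projects.
    SELECT COUNT(*) AS num_tasks
    FROM tasks t
    WHERE t.project_id = p.id
) t ON true
ORDER BY p.due_date ASC;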

Related

SQL: ON vs. WHERE in sub JOIN

What is the difference between using ON and WHERE in a sub join when using an outer reference?
Consider these two SQL statements as an example (looking for 10 persons with non-closed tasks, linked through the many-to-many table person_task):
select p.name
from person p
where exists (
select 1
from person_task pt
join task t on pt.task_id = t.id
and t.state <> 'closed'
and pt.person_id = p.id -- ON
)
limit 10
select p.name
from person p
where exists (
select 1
from person_task pt
join task t on pt.task_id = t.id and t.state <> 'closed'
where pt.person_id = p.id -- WHERE
)
limit 10
They produce the same result but the statement with ON is considerably faster.
Here are the corresponding EXPLAIN (ANALYZE) outputs:
-- USING ON
Limit (cost=0.00..270.98 rows=10 width=8) (actual time=10.412..60.876 rows=10 loops=1)
-> Seq Scan on person p (cost=0.00..28947484.16 rows=1068266 width=8) (actual time=10.411..60.869 rows=10 loops=1)
Filter: (SubPlan 1)
Rows Removed by Filter: 68
SubPlan 1
-> Nested Loop (cost=1.00..20257.91 rows=1632 width=0) (actual time=0.778..0.778 rows=0 loops=78)
-> Index Scan using person_taskx1 on person_task pt (cost=0.56..6551.27 rows=1632 width=8) (actual time=0.633..0.633 rows=0 loops=78)
Index Cond: (id = p.id)
-> Index Scan using taskxpk on task t (cost=0.44..8.40 rows=1 width=8) (actual time=1.121..1.121 rows=1 loops=10)
Index Cond: (id = pt.task_id)
Filter: (state <> 'open')
Planning Time: 0.466 ms
Execution Time: 60.920 ms
-- USING WHERE
Limit (cost=2818814.57..2841563.37 rows=10 width=8) (actual time=29.075..6884.259 rows=10 loops=1)
-> Merge Semi Join (cost=2818814.57..59308642.64 rows=24832 width=8) (actual time=29.075..6884.251 rows=10 loops=1)
Merge Cond: (p.id = pt.person_id)
-> Index Scan using personxpk on person p (cost=0.43..1440340.27 rows=2136533 width=16) (actual time=0.003..0.168 rows=18 loops=1)
-> Gather Merge (cost=1001.03..57357094.42 rows=40517669 width=8) (actual time=9.441..6881.180 rows=23747 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Nested Loop (cost=1.00..52679350.05 rows=16882362 width=8) (actual time=1.862..4207.577 rows=7938 loops=3)
-> Parallel Index Scan using person_taskx1 on person_task pt (cost=0.56..25848782.35 rows=16882362 width=16) (actual time=1.344..1807.664 rows=7938 loops=3)
-> Index Scan using taskxpk on task t (cost=0.44..1.59 rows=1 width=8) (actual time=0.301..0.301 rows=1 loops=23814)
Index Cond: (id = pt.task_id)
Filter: (state <> 'open')
Planning Time: 0.430 ms
Execution Time: 6884.349 ms
Should the ON clause therefore always be used for filtering values in a sub JOIN? Or what is going on?
I have used Postgres for this example.
The condition and pt.person_id = p.id doesn't refer to any column of the joined table t. In an inner join this doesn't make much sense semantically, and we can move the condition from ON to WHERE to make the query more readable.
You are right that the two queries are therefore equivalent and should result in the same execution plan. As this is not the case, PostgreSQL's optimizer seems to have a problem here.
In an outer join such a condition in ON can make sense and would differ from WHERE. I assume that this is the reason the optimizer takes a different route once it detects the condition in ON, regardless of the join type (my assumption). I am surprised, though, that this leads to a better plan; I'd rather expect a worse one.
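To illustrate the outer-join case, suppose (hypothetically; this is not the schema above) that task had a person_id column and were joined directly:
-- Condition in ON: it is part of the join, so persons with no matching
-- non-closed task are still returned, with NULLs in the task columns.
select p.name, t.state
from person p
left join task t on t.person_id = p.id and t.state <> 'closed';

-- Condition in WHERE: it runs after the join and filters out the
-- NULL-extended rows, so those persons disappear from the result.
select p.name, t.state
from person p
left join task t on t.person_id = p.id
where t.state <> 'closed';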
This may indicate that the tables' statistics are not up to date; please ANALYZE the tables to make sure. Or it may be a sore spot in the optimizer code that the PostgreSQL developers might want to work on.

What is the fastest way, in a growing table, to know whether at least one row exists matching two search (WHERE) criteria?

In PostgreSQL, what's the fastest query to check whether table BLOG_POST contains at least one row with company_id = 5 and status_id = 3, considering that the table keeps growing?
Many companies use that table and each can have many entries; my end goal is to create a method named hasCompanyAlreadyPublishedABlogPost(companyId).
An EXISTS condition would do:
select exists (select *
from blog_post
where company_id = 5
and status_id = 3);
Obviously you want an index on blog_post(company_id, status_id).
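For example (the index name is mine):
CREATE INDEX blog_post_company_status_idx
    ON blog_post (company_id, status_id);
With that index the EXISTS check becomes a single index probe, so it stays fast however large the table grows.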
In theory we can tweak it further:
select exists (select id
from blog_post
where company_id = 5
and status_id = 3 limit 1);
That is, add a LIMIT 1 in the nested query and select a single column instead of select *. In practice neither change should matter: EXISTS stops at the first matching row anyway and ignores the subquery's select list, as the identical plans below show.
I have tried this with a sample database:
With the single column and LIMIT:
explain analyze (select exists (select user_id from sample_table where sample_ids=4 and user_id=5 limit 1));
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------
Result (cost=8.17..8.18 rows=1 width=1) (actual time=0.014..0.014 rows=1 loops=1)
InitPlan 1 (returns $0)
-> Index Scan using sample_table_pkey on sample_table (cost=0.15..8.17 rows=1 width=0) (actual time=0.012..0.012 rows=1 loops=1)
Index Cond: (user_id = 5)
Filter: (sample_ids = 4)
Planning Time: 0.091 ms
Execution Time: 0.032 ms
Without the LIMIT, using select *:
explain analyze (select exists (select * from sample_table where sample_ids=4 and user_id=5));
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------
Result (cost=8.17..8.18 rows=1 width=1) (actual time=0.014..0.014 rows=1 loops=1)
InitPlan 1 (returns $0)
-> Index Scan using sample_table_pkey on sample_table (cost=0.15..8.17 rows=1 width=0) (actual time=0.012..0.012 rows=1 loops=1)
Index Cond: (user_id = 5)
Filter: (sample_ids = 4)
Planning Time: 0.084 ms
Execution Time: 0.034 ms

Speed up search for records whose sub-records have no recent date (PostgreSQL)

I want to find old jobs that are still open and have no recent activity.
The tables are as following:
CREATE TABLE job
(jobid int4, jobname text, jobdate date);
INSERT INTO job
(jobid, jobname, jobdate)
VALUES
(1,'X','2016-12-31'),
(2,'Y','2016-12-31'),
(3,'Z','2016-12-31');
CREATE TABLE hr
(hrid int4, hrjob int4, hrdate date);
INSERT INTO hr
(hrid, hrjob, hrdate)
VALUES
(1,1,'2017-05-30'),
(2,1,'2016-12-31'),
(3,2,'2016-12-31'),
(4,3,'2016-12-31'),
(5,4,'2017-12-31');
CREATE TABLE po
(poid int, pojob int4, podate date);
INSERT INTO po
(poid, pojob, podate)
VALUES
(1,1,'2016-05-30'),
(2,1,'2016-12-31'),
(3,2,'2016-12-31'),
(4,3,'2016-12-31'),
(5,4,'2017-12-31');
I have found a solution that works with a few records, but takes a very long time with several thousand records:
SELECT jobid
FROM job
LEFT JOIN hr ON hrjob=jobid
LEFT JOIN po ON poid=jobid
WHERE jobdate <'2017-12-31'
GROUP BY jobid
HAVING greatest(max(hrdate),max(podate))<'2017-12-31'
ORDER BY jobid
Is there any way to simplify and speed up this query?
In this case all jobs but job 4 could be closed, i.e. they have no recent activity.
SQLFiddle: http://sqlfiddle.com/#!15/098c3/1
Execution plan:
GroupAggregate (cost=311.82..1199.60 rows=67 width=12)
Filter: (GREATEST(max(hr.hrdate), max(po.podate)) < '2017-12-31'::date)
-> Merge Left Join (cost=311.82..925.66 rows=36414 width=12)
Merge Cond: (job.jobid = po.poid)
-> Merge Left Join (cost=176.48..234.72 rows=3754 width=8)
Merge Cond: (job.jobid = hr.hrjob)
-> Sort (cost=41.13..42.10 rows=387 width=4)
Sort Key: job.jobid
-> Seq Scan on job (cost=0.00..24.50 rows=387 width=4)
Filter: (jobdate < '2017-12-31'::date)
-> Sort (cost=135.34..140.19 rows=1940 width=8)
Sort Key: hr.hrjob
-> Seq Scan on hr (cost=0.00..29.40 rows=1940 width=8)
-> Sort (cost=135.34..140.19 rows=1940 width=8)
Sort Key: po.poid
-> Seq Scan on po (cost=0.00..29.40 rows=1940 width=8)
EXPLAIN ANALYZE:
Output: job.jobid
Filter: (GREATEST(max(hr.hrdate), max(po.podate)) < '2017-12-31'::date)
-> Merge Left Join (cost=311.82..925.66 rows=36414 width=12) (actual time=0.032..0.039 rows=4 loops=1)
Output: job.jobid, hr.hrdate, po.podate
Merge Cond: (job.jobid = po.poid)
-> Merge Left Join (cost=176.48..234.72 rows=3754 width=8) (actual time=0.024..0.028 rows=4 loops=1)
Output: job.jobid, hr.hrdate
Merge Cond: (job.jobid = hr.hrjob)
-> Sort (cost=41.13..42.10 rows=387 width=4) (actual time=0.014..0.015 rows=3 loops=1)
Output: job.jobid
Sort Key: job.jobid
Sort Method: quicksort Memory: 25kB
-> Seq Scan on public.job (cost=0.00..24.50 rows=387 width=4) (actual time=0.006..0.007 rows=3 loops=1)
Output: job.jobid
Filter: (job.jobdate < '2017-12-31'::date)
-> Sort (cost=135.34..140.19 rows=1940 width=8) (actual time=0.008..0.009 rows=5 loops=1)
Output: hr.hrdate, hr.hrjob
Sort Key: hr.hrjob
Sort Method: quicksort Memory: 25kB
-> Seq Scan on public.hr (cost=0.00..29.40 rows=1940 width=8) (actual time=0.001..0.002 rows=5 loops=1)
Output: hr.hrdate, hr.hrjob
-> Sort (cost=135.34..140.19 rows=1940 width=8) (actual time=0.007..0.007 rows=5 loops=1)
Output: po.podate, po.poid
Sort Key: po.poid
Sort Method: quicksort Memory: 25kB
-> Seq Scan on public.po (cost=0.00..29.40 rows=1940 width=8) (actual time=0.001..0.003 rows=5 loops=1)
Output: po.podate, po.poid
Total runtime: 0.148 ms
Thank you in advance
Instead of using JOIN and GROUP BY, you could find old jobs like this:
SELECT jobid
FROM job
WHERE jobdate < '2017-12-31'
AND NOT EXISTS (SELECT 1
FROM hr
WHERE hr.hrjob = job.jobid
AND hrdate >= '2017-12-31')
AND NOT EXISTS (SELECT 1
FROM po
WHERE po.poid = job.jobid
AND podate >= '2017-12-31')
ORDER BY jobid
I think it could speed up your query.
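For the two NOT EXISTS probes to stay fast as the tables grow, indexes on the probed columns would help; a sketch with the join columns from the query above (the index names are mine):
CREATE INDEX hr_hrjob_hrdate_idx ON hr (hrjob, hrdate);
CREATE INDEX po_poid_podate_idx ON po (poid, podate);
Each probe then becomes a small index range scan instead of a sequential scan.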
Something that could save you a lot of processing is to add a field to the job table indicating that the job is closed. That could spare you a lot of query work!

DISTINCT with ORDER BY very slow

So I am using Postgres for the first time and finding it rather slow to run DISTINCT and GROUP BY queries. Currently I am trying to find the latest record per device and whether or not it is working.
This is the first query I came up with:
SELECT DISTINCT ON (device_id) c.device_id, c.timestamp, c.working
FROM call_logs c
ORDER BY c.device_id, c.timestamp desc
And it works, but it is taking a long time to run.
Unique (cost=94840.24..97370.54 rows=11 width=17) (actual time=424.424..556.253 rows=13 loops=1)
-> Sort (cost=94840.24..96105.39 rows=506061 width=17) (actual time=424.423..531.905 rows=506061 loops=1)
Sort Key: device_id, "timestamp" DESC
Sort Method: external merge Disk: 13272kB
-> Seq Scan on call_logs c (cost=0.00..36512.61 rows=506061 width=17) (actual time=0.059..162.932 rows=506061 loops=1)
Planning time: 0.152 ms
Execution time: 557.957 ms
(7 rows)
I have updated the query to the following, which is faster but very ugly:
SELECT c.device_id, c.timestamp, c.working
FROM call_logs c
INNER JOIN (SELECT c.device_id, MAX(c.timestamp) AS timestamp
            FROM call_logs c
            GROUP BY c.device_id) newest
        ON newest.timestamp = c.timestamp
and the analysis:
Nested Loop (cost=39043.34..39136.08 rows=12 width=17) (actual time=216.406..216.580 rows=15 loops=1)
-> HashAggregate (cost=39042.91..39043.02 rows=11 width=16) (actual time=216.347..216.351 rows=13 loops=1)
Group Key: c_1.device_id
-> Seq Scan on call_logs c_1 (cost=0.00..36512.61 rows=506061 width=16) (actual time=0.026..125.482 rows=506061 loops=1)
-> Index Scan using call_logs_timestamp on call_logs c (cost=0.42..8.44 rows=1 width=17) (actual time=0.016..0.016 rows=1 loops=13)
Index Cond: ("timestamp" = (max(c_1."timestamp")))
Planning time: 0.318 ms
Execution time: 216.631 ms
(8 rows)
Even 200 ms seems a little slow to me, as all I want is the top record per device (from an indexed table).
And this is the index it is using:
CREATE INDEX call_logs_timestamp
ON public.call_logs USING btree
(timestamp)
TABLESPACE pg_default;
I have tried the index below, but it does not help at all:
CREATE INDEX dev_ts_1
ON public.call_logs USING btree
(device_id, timestamp DESC, working)
TABLESPACE pg_default;
Any ideas? Am I missing something obvious?
200 ms really isn't that bad for going through 500k rows. But for this query:
SELECT DISTINCT ON (device_id) c.device_id, c.timestamp, c.working
FROM call_logs c
ORDER BY c.device_id, c.timestamp desc
Then your index on call_logs(device_id, timestamp desc, working) should be an optimal index.
Two other ways to write the query for the same index are:
select c.device_id, c.timestamp, c.working
from (select c.device_id, c.timestamp, c.working,
             row_number() over (partition by device_id order by timestamp desc) as seqnum
      from call_logs c
     ) c
where seqnum = 1;
and:
select c.device_id, c.timestamp, c.working
from call_logs c
where not exists (select 1
from call_logs c2
where c2.device_id = c.device_id and
c2.timestamp > c.timestamp
);
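If the planner still sorts all 500k rows, one more option when there are only a handful of devices is to emulate a loose index scan with a recursive CTE, hopping from device to device through the index. A sketch, assuming the dev_ts_1 index above:
with recursive latest as (
    -- seed: the newest row of the smallest device_id
    (select device_id, timestamp, working
     from call_logs
     order by device_id, timestamp desc
     limit 1)
  union all
    -- step: the newest row of the next-larger device_id
    select n.device_id, n.timestamp, n.working
    from latest l
    cross join lateral (
        select device_id, timestamp, working
        from call_logs
        where device_id > l.device_id
        order by device_id, timestamp desc
        limit 1
    ) n
)
select * from latest;
This does one index descent per device (13 here) rather than touching every row.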

Optimizing postgres query

QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=32164.87..32164.89 rows=1 width=44) (actual time=221552.831..221552.831 rows=0 loops=1)
-> Sort (cost=32164.87..32164.87 rows=1 width=44) (actual time=221552.827..221552.827 rows=0 loops=1)
Sort Key: t.date_effective, t.acct_account_transaction_id, p.method, t.amount, c.business_name, t.amount
-> Nested Loop (cost=22871.67..32164.86 rows=1 width=44) (actual time=221552.808..221552.808 rows=0 loops=1)
-> Nested Loop (cost=22871.67..32160.37 rows=1 width=52) (actual time=221431.071..221546.619 rows=670 loops=1)
-> Nested Loop (cost=22871.67..32157.33 rows=1 width=43) (actual time=221421.218..221525.056 rows=2571 loops=1)
-> Hash Join (cost=22871.67..32152.80 rows=1 width=16) (actual time=221307.382..221491.019 rows=2593 loops=1)
Hash Cond: ("outer".acct_account_id = "inner".acct_account_fk)
-> Seq Scan on acct_account a (cost=0.00..7456.08 rows=365008 width=8) (actual time=0.032..118.369 rows=61295 loops=1)
-> Hash (cost=22871.67..22871.67 rows=1 width=16) (actual time=221286.733..221286.733 rows=2593 loops=1)
-> Nested Loop Left Join (cost=0.00..22871.67 rows=1 width=16) (actual time=1025.396..221266.357 rows=2593 loops=1)
Join Filter: ("inner".orig_acct_payment_fk = "outer".acct_account_transaction_id)
Filter: ("inner".link_type IS NULL)
-> Seq Scan on acct_account_transaction t (cost=0.00..18222.98 rows=1 width=16) (actual time=949.081..976.432 rows=2596 loops=1)
Filter: ((("type")::text = 'debit'::text) AND ((transaction_status)::text = 'active'::text) AND (date_effective >= '2012-03-01'::date) AND (date_effective < '2012-04-01 00:00:00'::timestamp without time zone))
-> Seq Scan on acct_payment_link l (cost=0.00..4648.68 rows=1 width=15) (actual time=1.073..84.610 rows=169 loops=2596)
Filter: ((link_type)::text ~~ 'return_%'::text)
-> Index Scan using contact_pk on contact c (cost=0.00..4.52 rows=1 width=27) (actual time=0.007..0.008 rows=1 loops=2593)
Index Cond: (c.contact_id = "outer".contact_fk)
-> Index Scan using acct_payment_transaction_fk on acct_payment p (cost=0.00..3.02 rows=1 width=13) (actual time=0.005..0.005 rows=0 loops=2571)
Index Cond: (p.acct_account_transaction_fk = "outer".acct_account_transaction_id)
Filter: ((method)::text <> 'trade'::text)
-> Index Scan using contact_role_pk on contact_role (cost=0.00..4.48 rows=1 width=4) (actual time=0.007..0.007 rows=0 loops=670)
Index Cond: ("outer".contact_id = contact_role.contact_fk)
Filter: (exchange_fk = 74)
Total runtime: 221553.019 ms
Your problem is here:
-> Nested Loop Left Join (cost=0.00..22871.67 rows=1 width=16) (actual time=1025.396..221266.357 rows=2593 loops=1)
Join Filter: ("inner".orig_acct_payment_fk = "outer".acct_account_transaction_id)
Filter: ("inner".link_type IS NULL)
-> Seq Scan on acct_account_transaction t (cost=0.00..18222.98 rows=1 width=16) (actual time=949.081..976.432 rows=2596 loops=1)
Filter: ((("type")::text = 'debit'::text) AND ((transaction_status)::text = 'active'::text) AND (date_effective >= '2012-03-01'::date) AND (date_effective
Seq Scan on acct_payment_link l (cost=0.00..4648.68 rows=1 width=15) (actual time=1.073..84.610 rows=169 loops=2596)
Filter: ((link_type)::text ~~ 'return_%'::text)
It expects to find 1 row in acct_account_transaction, while it finds 2596, and similarly for the other table.
You did not mention your Postgres version (could you?), but this should do the trick:
SELECT DISTINCT
t.date_effective,
t.acct_account_transaction_id,
p.method,
t.amount,
c.business_name,
t.amount
FROM
contact c inner join contact_role on (c.contact_id=contact_role.contact_fk and contact_role.exchange_fk=74),
acct_account a, acct_payment p,
acct_account_transaction t
WHERE
p.acct_account_transaction_fk=t.acct_account_transaction_id
and t.type = 'debit'
and transaction_status = 'active'
and p.method != 'trade'
and t.date_effective >= '2012-03-01'
and t.date_effective < (date '2012-03-01' + interval '1 month')
and c.contact_id=a.contact_fk and a.acct_account_id = t.acct_account_fk
and not exists(
select * from acct_payment_link l
where l.orig_acct_payment_fk = t.acct_account_transaction_id
and l.link_type like 'return_%'
)
ORDER BY
t.date_effective DESC
Also, try setting an appropriate statistics target for the relevant columns. Link to the friendly manual: http://www.postgresql.org/docs/current/static/sql-altertable.html
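For example (the column is a guess based on the plan above; adjust to whichever column has the skew):
ALTER TABLE acct_account_transaction
    ALTER COLUMN date_effective SET STATISTICS 1000;
ANALYZE acct_account_transaction;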
What are your indexes, and have you analysed lately? It's doing a table scan on acct_account_transaction even though there are several criteria on that table:
type
date_effective
If there are no indexes on those columns, then a compound one on (type, date_effective) could help (assuming there are lots of rows that don't meet the criteria on those columns).
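Something along these lines (the index name is mine):
CREATE INDEX acct_txn_type_date_idx
    ON acct_account_transaction (type, date_effective);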
I remove my first suggestion, as it changes the nature of the query.
I see that there's too much time spent in the LEFT JOIN.
The first thing to try is making only a single scan of the acct_payment_link table. Could you try rewriting your query to:
... LEFT JOIN (SELECT * FROM acct_payment_link
WHERE link_type LIKE 'return_%') AS l ...
You should check your statistics, as there's a difference between the planned and actual numbers of rows.
You haven't included the table and index definitions; it would be good to take a look at those.
You might also want to use the contrib/pg_trgm extension to build an index on acct_payment_link.link_type, but I would make this the last option to try.
BTW, what is the PostgreSQL version you're using?
Your statement rewritten and formatted:
SELECT DISTINCT
t.date_effective,
t.acct_account_transaction_id,
p.method,
t.amount,
c.business_name,
t.amount
FROM contact c
JOIN contact_role cr ON cr.contact_fk = c.contact_id
JOIN acct_account a ON a.contact_fk = c.contact_id
JOIN acct_account_transaction t ON t.acct_account_fk = a.acct_account_id
JOIN acct_payment p ON p.acct_account_transaction_fk
= t.acct_account_transaction_id
LEFT JOIN acct_payment_link l ON orig_acct_payment_fk
= acct_account_transaction_id
-- missing table-qualification!
AND link_type like 'return_%'
-- missing table-qualification!
WHERE transaction_status = 'active' -- missing table-qualification!
AND cr.exchange_fk = 74
AND t.type = 'debit'
AND t.date_effective >= '2012-03-01'
AND t.date_effective < (date '2012-03-01' + interval '1 month')
AND p.method != 'trade'
AND l.link_type IS NULL
ORDER BY t.date_effective DESC;
Explicit JOIN statements are preferable. I reordered your tables according to your JOIN logic.
Why (date '2012-03-01' + interval '1 month') instead of date '2012-04-01'?
Some table qualifications are missing. In a complex statement like this, that's bad style; it may be hiding a mistake.
The keys to performance are appropriate indexes, proper configuration of PostgreSQL, and accurate statistics.
General advice on performance tuning in the PostgreSQL wiki.