Why does PostgreSQL sort on a boolean WHERE condition? - sql

I am testing some queries over a bunch of materialized views. All of them have the same structure, like this one:
EXPLAIN ANALYZE SELECT mr.foo, ..., CAST(SUM(mr.bar) AS INTEGER) AS stuff
FROM foo.bar mr
WHERE
mr.a = 'TRUE' AND
mr.b = 'something' AND
mr.c = '12'
GROUP BY
mr.a,
mr.b,
mr.c;
Obviously the system is giving me a different query plan for each one of them, but if (and only if) the WHERE clause involves a boolean column (as in the example above), the planner always sorts the result set before finishing. Example:
Finalize GroupAggregate (cost=16305.92..16317.98 rows=85 width=21) (actual time=108.301..108.301 rows=1 loops=1)
Group Key: festivo, nome_strada, ora
-> Gather Merge (cost=16305.92..16315.05 rows=70 width=77) (actual time=108.279..109.015 rows=2 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial GroupAggregate (cost=15305.90..15306.95 rows=35 width=77) (actual time=101.422..101.422 rows=1 loops=3)
Group Key: festivo, nome_strada, ora
-> Sort (cost=15305.90..15305.99 rows=35 width=21) (actual time=101.390..101.395 rows=28 loops=3)
Sort Key: festivo
Sort Method: quicksort Memory: 25kB
-> Parallel Seq Scan on sft_vmv3_g3 mr (cost=0.00..15305.00 rows=35 width=21) (actual time=75.307..101.329 rows=28 loops=3)
Filter: (festivo AND ((nome_strada)::text = '16th St'::text) AND (ora = '12'::smallint))
Rows Removed by Filter: 277892
I am really curious about this behavior, but I still haven't found an explanation for it.

I'm curious why you wouldn't phrase the logic as:
SELECT true as a, 'something' as b, '12' as c, CAST(SUM(mr.bar) as INTEGER)
FROM foo.bar as mr
WHERE mr.a AND
mr.b = 'something' AND
mr.c = '12';
This is an aggregation query (because of the SUM() in the SELECT) and it has no explicit GROUP BY. I think it should produce a better execution plan. In addition, it will always return one row, even if no rows match the condition.

Related

Why is Postgres execution plan changing vastly based on where condition

I am trying to execute the same SQL but with different values in the WHERE clause. One query is taking significantly longer to process than the other, and I have observed that the execution plans for the two queries are different too.
Query1 and Execution Plan:
explain analyze
select t."postal_code"
from dev."postal_master" t
left join dev."premise_master" f
on t."primary_code" = f."primary_code"
and t."name" = f."name"
and t."final_code" = f."final_code"
where 1 = 1 and t."region" = 'US'
and t."name" = 'UBQ'
and t."accountModCode" = 'LTI'
and t."modularity_code" = 'PHA'
group by t."postal_code", t."modularity_code", t."region",
t."feature", t."granularity"
Group (cost=4.19..4.19 rows=1 width=38) (actual time=76411.456..76414.348 rows=11871 loops=1)
Group Key: t."postal_code", t."modularity_code", t."region", t."feature", t.granularity
-> Sort (cost=4.19..4.19 rows=1 width=38) (actual time=76411.452..76412.045 rows=11879 loops=1)
Sort Key: t."postal_code", t."feature", t.granularity
Sort Method: quicksort Memory: 2055kB
-> Nested Loop Left Join (cost=0.17..4.19 rows=1 width=38) (actual time=45.373..76362.219 rows=11879 loops=1)
Join Filter: (((t."name")::text = (f."name")::text) AND ((t."primary_code")::text = (f."primary_code")::text) AND ((t."final_code")::text = (f."final_code")::text))
Rows Removed by Join Filter: 150642887
-> Index Scan using idx_postal_code_source on postal_master t (cost=0.09..2.09 rows=1 width=72) (actual time=36.652..154.339 rows=11871 loops=1)
Index Cond: (("name")::text = 'UBQ'::text)
Filter: ((("region")::text = 'US'::text) AND (("accountModCode")::text = 'LTI'::text) AND (("modularity_code")::text = 'PHA'::text))
Rows Removed by Filter: 550164
-> Index Scan using idx_postal_master_source on premise_master f (cost=0.08..2.09 rows=1 width=35) (actual time=0.016..3.720 rows=12690 loops=11871)
Index Cond: (("name")::text = 'UBQ'::text)
Planning Time: 1.196 ms
Execution Time: 76415.004 ms
Query2 and Execution plan:
explain analyze
select t."postal_code"
from dev."postal_master" t
left join dev."premise_master" f
on t."primary_code" = f."primary_code"
and t."name" = f."name"
and t."final_code" = f."final_code"
where 1 = 1 and t."region" = 'DE'
and t."name" = 'EME'
and t."accountModCode" = 'QEW'
and t."modularity_code" = 'NFX'
group by t."postal_code", t."modularity_code", t."region",
t."feature", t."granularity"
Group (cost=50302.96..50426.04 rows=1330 width=38) (actual time=170.687..184.772 rows=8230 loops=1)
Group Key: t."postal_code", t."modularity_code", t."region", t."feature", t.granularity
-> Gather Merge (cost=50302.96..50423.27 rows=1108 width=38) (actual time=170.684..182.965 rows=8230 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Group (cost=49302.95..49304.62 rows=554 width=38) (actual time=164.446..165.613 rows=2743 loops=3)
Group Key: t."postal_code", t."modularity_code", t."region", t."feature", t.granularity
-> Sort (cost=49302.95..49303.23 rows=554 width=38) (actual time=164.444..164.645 rows=3432 loops=3)
Sort Key: t."postal_code", t."feature", t.granularity
Sort Method: quicksort Memory: 550kB
Worker 0: Sort Method: quicksort Memory: 318kB
Worker 1: Sort Method: quicksort Memory: 322kB
-> Nested Loop Left Join (cost=1036.17..49297.90 rows=554 width=38) (actual time=2.143..148.372 rows=3432 loops=3)
-> Parallel Bitmap Heap Scan on territory_postal_mapping t (cost=1018.37..38323.78 rows=554 width=72) (actual time=1.898..11.849 rows=2743 loops=3)
Recheck Cond: ((("accountModCode")::text = 'QEW'::text) AND (("region")::text = 'DE'::text) AND (("name")::text = 'EME'::text))
Filter: (("modularity_code")::text = 'NFX'::text)
Rows Removed by Filter: 5914
Heap Blocks: exact=2346
-> Bitmap Index Scan on territorypostal__source_region_mod (cost=0.00..1018.31 rows=48088 width=0) (actual time=4.783..4.783 rows=25973 loops=1)
Index Cond: ((("accountModCode")::text = 'QEW'::text) AND (("region")::text = 'DE'::text) AND (("name")::text = 'EME'::text))
-> Bitmap Heap Scan on premise_master f (cost=17.80..19.81 rows=1 width=35) (actual time=0.047..0.048 rows=1 loops=8230)
Recheck Cond: (((t."primary_code")::text = ("primary_code")::text) AND ((t."final_code")::text = ("final_code")::text))
Filter: ((("name")::text = 'EME'::text) AND ((t."name")::text = ("name")::text))
Heap Blocks: exact=1955
-> BitmapAnd (cost=17.80..17.80 rows=1 width=0) (actual time=0.046..0.046 rows=0 loops=8230)
-> Bitmap Index Scan on premise_master__accountprimarypostal (cost=0.00..1.95 rows=105 width=0) (actual time=0.008..0.008 rows=24 loops=8230)
Index Cond: ((t."primary_code")::text = ("primary_code")::text)
-> Bitmap Index Scan on premise_master__accountfinalterritorycode (cost=0.00..15.80 rows=1403 width=0) (actual time=0.065..0.065 rows=559 loops=4568)
Index Cond: ((t."final_code")::text = ("final_code")::text)
Planning Time: 1.198 ms
Execution Time: 185.197 ms
I am aware that there will be a different number of rows depending on the WHERE condition, but is that the only reason for the different execution plans? Also, how can I improve the performance of the first query?
The estimates are totally wrong for the first query, so it is no surprise that PostgreSQL picks a bad plan. Try these measures one after the other and see if they help:
Collect statistics:
ANALYZE premise_master, postal_master;
Calculate more precise statistics:
ALTER TABLE premise_master ALTER name SET statistics 1000;
ALTER TABLE postal_master ALTER name SET statistics 1000;
ANALYZE premise_master, postal_master;
The estimates in the first query are off in such a bad way that I suspect that there is an exceptional problem, like an upgrade with pg_upgrade where you forgot to run ANALYZE afterwards, or you are wiping the database statistics with pg_stat_reset().
If that is not the case, and a simple ANALYZE of the tables did the trick, the cause of the problem must be that autoanalyze does not run often enough on these tables. You can tune autovacuum to do that more often with a statement like this:
ALTER TABLE premise_master SET (autovacuum_analyze_scale_factor = 0.01);
That would make PostgreSQL collect statistics whenever 1% of the table has changed.
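To check whether autoanalyze has been keeping up on these tables, you can query the standard pg_stat_user_tables view, for example:
SELECT relname, last_analyze, last_autoanalyze, n_mod_since_analyze
FROM pg_stat_user_tables
WHERE relname IN ('premise_master', 'postal_master');
A NULL last_autoanalyze, or a large n_mod_since_analyze, would support the theory that the statistics are stale.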
The first line of each EXPLAIN ANALYZE output suggests that the planner expected only 1 row from the first query, while it expected 1330 from the second, so that's probably why it chose a less efficient query plan. That usually means table statistics aren't up to date, and when they were last gathered there weren't many rows that would have matched the first query (maybe the data was being loaded in alphabetical order?). In this case the fix is to execute an ANALYZE dev."postal_master" query to refresh the statistics.
You could also try removing the GROUP BY clause entirely (if your tooling allows); I could be misreading, but it doesn't look like it's affecting the output much. If that results in unwanted duplicates, you can use select distinct t."postal_code" instead of the group by, as sketched below.
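A sketch of that variant, with the FROM and WHERE clauses of the first query unchanged:
select distinct t."postal_code"
from dev."postal_master" t
left join dev."premise_master" f
on t."primary_code" = f."primary_code"
and t."name" = f."name"
and t."final_code" = f."final_code"
where t."region" = 'US'
and t."name" = 'UBQ'
and t."accountModCode" = 'LTI'
and t."modularity_code" = 'PHA';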

How to optimize filter for big data volume? PostgreSQL

A few weeks ago our team faced difficulties with our SQL query because the data volume has increased a lot.
We would appreciate any advice on how we can update schema or optimize the query in order to keep status filtering logic the same.
In a nutshell:
We have two tables, a and b, where b has a many-to-one FK to a.
a
id | processed
---+----------
 1 | TRUE
 2 | TRUE
b
a_id | status | type_id | l_id
-----+--------+---------+------
   1 | '1'    |       5 |  105
   1 | '3'    |       6 |  105
   2 | '2'    |       7 |  105
We can have only one status for a unique combination of (l_id, type_id, a_id).
We need to count the rows of a, filtered by the statuses from b, grouped by a_id.
Table a has 5,300,000 rows.
Table b has 750,000,000 rows.
So we need to calculate the status for each a row by the following rules:
For a given a_id there are x rows in b:
1) If at least one status of x equals '3', then the status for that a_id is '3'.
2) If all statuses of x equal '1', then the status is '1'.
And so on.
In the current approach we use the array_agg() function to filter the subselect, so our query looks like:
SELECT COUNT(*)
FROM (
SELECT
FROM (
SELECT at.id as id,
BOOL_AND(bt.processed) AS not_pending,
ARRAY_AGG(DISTINCT bt.status) AS status
FROM a AS at
LEFT OUTER JOIN b AS bt
ON (at.id = bt.a_id AND bt.l_id = 105 AND
bt.type_id IN (2,10,18,1,4,5,6))
WHERE at.processed = True
GROUP BY at.id) sub
WHERE not_pending = True
AND status <# ARRAY ['1']::"char"[]
) counter
;
Our plan looks like:
Aggregate (cost=14665999.33..14665999.34 rows=1 width=8) (actual time=1875987.846..1875987.846 rows=1 loops=1)
-> GroupAggregate (cost=14166691.70..14599096.58 rows=5352220 width=37) (actual time=1875987.844..1875987.844 rows=0 loops=1)
Group Key: at.id
Filter: (bool_and(bt.processed) AND (array_agg(DISTINCT bt.status) <# '{1}'::"char"[]))
Rows Removed by Filter: 5353930
-> Sort (cost=14166691.70..14258067.23 rows=36550213 width=6) (actual time=1860315.593..1864175.762 rows=37430745 loops=1)
Sort Key: at.id
Sort Method: external merge Disk: 586000kB
-> Hash Right Join (cost=1135654.48..8076230.39 rows=36550213 width=6) (actual time=55665.584..1846965.271 rows=37430745 loops=1)
Hash Cond: (bt.a_id = at.id)
-> Bitmap Heap Scan on b bt (cost=882095.79..7418660.65 rows=36704370 width=6) (actual time=51871.658..1826058.186 rows=37430378 loops=1)
Recheck Cond: ((l_id = 105) AND (type_id = ANY ('{2,10,18,1,4,5,6}'::integer[])))
Rows Removed by Index Recheck: 574462752
Heap Blocks: exact=28898 lossy=5726508
-> Bitmap Index Scan on db_page_index_atableobjects (cost=0.00..872919.69 rows=36704370 width=0) (actual time=51861.815..51861.815 rows=37586483 loops=1)
Index Cond: ((l_id = 105) AND (type_id = ANY ('{2,10,18,1,4,5,6}'::integer[])))
-> Hash (cost=165747.94..165747.94 rows=5352220 width=4) (actual time=3791.710..3791.710 rows=5353930 loops=1)
Buckets: 131072 Batches: 128 Memory Usage: 2507kB
-> Seq Scan on a at (cost=0.00..165747.94 rows=5352220 width=4) (actual time=0.528..2958.004 rows=5353930 loops=1)
Filter: processed
Rows Removed by Filter: 18659
Planning time: 0.328 ms
Execution time: 1876066.242 ms
As you can see, the query execution time is immense, and we would like to get it under 30 seconds at least.
We have already tried some approaches, like using bit_or() instead of array_agg(), and a LATERAL JOIN. They didn't give us the desired performance, so we decided to use materialized views for now (sketched below). But we are still searching for other solutions and would really appreciate any suggestions!
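For reference, a sketch of that materialized-view stopgap (the view name is hypothetical; it precomputes the per-a_id aggregation so the expensive part runs only at refresh time):
CREATE MATERIALIZED VIEW b_status_by_a AS
SELECT bt.a_id,
       BOOL_AND(bt.processed) AS not_pending,
       ARRAY_AGG(DISTINCT bt.status) AS status
FROM b AS bt
WHERE bt.l_id = 105
  AND bt.type_id IN (2,10,18,1,4,5,6)
GROUP BY bt.a_id;

-- refreshed periodically, e.g. from a cron job:
REFRESH MATERIALIZED VIEW b_status_by_a;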
Plan with track_io_timing enabled:
Aggregate (cost=14665999.33..14665999.34 rows=1 width=8) (actual time=2820945.285..2820945.285 rows=1 loops=1)
Buffers: shared hit=23 read=5998844, temp read=414465 written=414880
I/O Timings: read=2655805.505
-> GroupAggregate (cost=14166691.70..14599096.58 rows=5352220 width=930) (actual time=2820945.283..2820945.283 rows=0 loops=1)
Group Key: at.id
Filter: (bool_and(bt.processed) AND (array_agg(DISTINCT bt.status) <# '{1}'::"char"[]))
Rows Removed by Filter: 5353930
Buffers: shared hit=23 read=5998844, temp read=414465 written=414880
I/O Timings: read=2655805.505
-> Sort (cost=14166691.70..14258067.23 rows=36550213 width=6) (actual time=2804900.123..2808826.358 rows=37430745 loops=1)
Sort Key: at.id
Sort Method: external merge Disk: 586000kB
Buffers: shared hit=18 read=5998840, temp read=414465 written=414880
I/O Timings: read=2655805.491
-> Hash Right Join (cost=1135654.48..8076230.39 rows=36550213 width=6) (actual time=55370.788..2791441.542 rows=37430745 loops=1)
Hash Cond: (bt.a_id = at.id)
Buffers: shared hit=15 read=5998840, temp read=142879 written=142625
I/O Timings: read=2655805.491
-> Bitmap Heap Scan on b bt (cost=882095.79..7418660.65 rows=36704370 width=6) (actual time=51059.047..2769127.810 rows=37430378 loops=1)
Recheck Cond: ((l_id = 105) AND (type_id = ANY ('{2,10,18,1,4,5,6}'::integer[])))
Rows Removed by Index Recheck: 574462752
Heap Blocks: exact=28898 lossy=5726508
Buffers: shared hit=13 read=5886842
I/O Timings: read=2653254.939
-> Bitmap Index Scan on db_page_index_atableobjects (cost=0.00..872919.69 rows=36704370 width=0) (actual time=51049.365..51049.365 rows=37586483 loops=1)
Index Cond: ((l_id = 105) AND (type_id = ANY ('{2,10,18,1,4,5,6}'::integer[])))
Buffers: shared hit=12 read=131437
I/O Timings: read=49031.671
-> Hash (cost=165747.94..165747.94 rows=5352220 width=4) (actual time=4309.761..4309.761 rows=5353930 loops=1)
Buckets: 131072 Batches: 128 Memory Usage: 2507kB
Buffers: shared hit=2 read=111998, temp written=15500
I/O Timings: read=2550.551
-> Seq Scan on a at (cost=0.00..165747.94 rows=5352220 width=4) (actual time=0.515..3457.040 rows=5353930 loops=1)
Filter: processed
Rows Removed by Filter: 18659
Buffers: shared hit=2 read=111998
I/O Timings: read=2550.551
Planning time: 0.347 ms
Execution time: 2821022.622 ms
In the current plan, substantially all of the time is going to reading the table pages for the Bitmap Heap Scan. You must already have an index on something like (l_id, type_id). If you change it (create a new one, then optionally drop the old one) to be on (l_id, type_id, processed, a_id, status) instead, or perhaps on (l_id, type_id, a_id, status) WHERE processed, then it can probably switch to an index-only scan, which can avoid reading the table since all the data is present in the index. You will need to make sure the table is well-vacuumed for this strategy to be effective. I would just manually vacuum the table once before building the index; then, if it works, you can worry about how to keep it well-vacuumed. A sketch follows below.
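A sketch of the covering index (the index name is made up; CONCURRENTLY avoids blocking writes while it builds):
CREATE INDEX CONCURRENTLY b_lid_type_covering_idx
    ON b (l_id, type_id, processed, a_id, status);

-- make sure the visibility map is set so the scan can stay index-only
VACUUM ANALYZE b;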
Another option would be to jack up effective_io_concurrency (I'd just set it to 20; if it works, you can play with it more to find the optimal setting), so that more than one I/O read request on the table can be outstanding at once. How effective this will be depends on your I/O system, and I don't know the answer to that for db.r5.xlarge. The index-only scan is better, though, as it uses fewer resources, while this method just uses the same resources faster. (If you have multiple similar queries running simultaneously, that is important. Also, if you are paying per I/O, you want fewer of them, not the same number faster.)
Another option is to try to change the shape of the plan completely by having a nested loop from a into b. For this to have a hope, you will need an index on b which contains a_id and l_id as the leading columns (in either order). If you already have such an index and it doesn't naturally choose such a plan, you might be able to force it with set enable_hashjoin=off. My gut feeling is that a nested loop which needs to kick the other side 5,353,930 times is not going to be better than what you currently have, even if that other side has an efficient index. Both experiments are sketched below.
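As a sketch, both can be tried per session (the settings revert when the session ends):
-- allow more concurrent read requests during the bitmap heap scan
SET effective_io_concurrency = 20;

-- discourage the hash join so the planner considers a nested loop
SET enable_hashjoin = off;

-- then re-run EXPLAIN (ANALYZE, BUFFERS) on the query above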
You can filter and group table b before joining it with a, and order both tables by id, because that increases the speed of the table scan when the join is processed. Please check this code:
with at as (
    select distinct at.id
    from a as at
    where at.processed = true
    order by at.id
),
bt as (
    select bt.a_id,
           --BOOL_AND(bt.processed) AS not_pending,
           array_agg(distinct bt.status) as status
    from b as bt
    -- plain column filters belong in WHERE, not HAVING
    where bt.l_id = 105
      and bt.type_id in (2,10,18,1,4,5,6)
    -- group by a_id alone, so statuses are aggregated per a row
    group by bt.a_id
    order by bt.a_id
),
counter as (
    select at.id,
           case
               when '1' = all(status) then '1'
               when '3' = any(status) then '3'
               else null -- the remaining rules from the question go here
           end as status
    from at
    inner join bt on at.id = bt.a_id
)
select count(*) from counter where status = '1';

PL/pgSQL Query Plan Worse Inside Function Than Outside

I have a function that is running too slow. I've isolated the piece of the function that is slow: a small SELECT statement:
SELECT image_group_id
FROM programs.image_family fam
JOIN programs.provider_file pf
ON (fam.provider_data_id = pf.provider_data_id
AND fam.family_id = $1 AND pf.image_group_id IS NOT NULL)
LIMIT 1
When I run the function this piece of SQL generates the following query plan:
Query Text: SELECT image_group_id FROM programs.image_family fam JOIN programs.provider_file pf ON (fam.provider_data_id = pf.provider_data_id AND fam.family_id = $1 AND pf.image_group_id IS NOT NULL) LIMIT 1
Limit (cost=0.56..6.75 rows=1 width=6) (actual time=3471.004..3471.004 rows=0 loops=1)
-> Nested Loop (cost=0.56..594054.42 rows=96017 width=6) (actual time=3471.002..3471.002 rows=0 loops=1)
-> Seq Scan on image_family fam (cost=0.00..391880.08 rows=96023 width=6) (actual time=3471.001..3471.001 rows=0 loops=1)
Filter: ((family_id)::numeric = '8419853'::numeric)
Rows Removed by Filter: 19204671
-> Index Scan using "IX_DBO_PROVIDER_FILE_1" on provider_file pf (cost=0.56..2.11 rows=1 width=12) (never executed)
Index Cond: (provider_data_id = fam.provider_data_id)
Filter: (image_group_id IS NOT NULL)
When I run the selected query in a query tool (outside of the function) the query plan looks like this:
Limit (cost=1.12..3.81 rows=1 width=6) (actual time=0.043..0.043 rows=1 loops=1)
Output: pf.image_group_id
Buffers: shared hit=11
-> Nested Loop (cost=1.12..14.55 rows=5 width=6) (actual time=0.041..0.041 rows=1 loops=1)
Output: pf.image_group_id
Inner Unique: true
Buffers: shared hit=11
-> Index Only Scan using image_family_family_id_provider_data_id_idx on programs.image_family fam (cost=0.56..1.65 rows=5 width=6) (actual time=0.024..0.024 rows=1 loops=1)
Output: fam.family_id, fam.provider_data_id
Index Cond: (fam.family_id = 8419853)
Heap Fetches: 2
Buffers: shared hit=6
-> Index Scan using "IX_DBO_PROVIDER_FILE_1" on programs.provider_file pf (cost=0.56..2.58 rows=1 width=12) (actual time=0.013..0.013 rows=1 loops=1)
Output: pf.provider_data_id, pf.provider_file_path, pf.posted_dt, pf.file_repository_id, pf.restricted_size, pf.image_group_id, pf.is_master, pf.is_biggest
Index Cond: (pf.provider_data_id = fam.provider_data_id)
Filter: (pf.image_group_id IS NOT NULL)
Buffers: shared hit=5
Planning time: 0.809 ms
Execution time: 0.100 ms
If I disable sequential scans in the function I can get a similar query plan:
Query Text: SELECT image_group_id FROM programs.image_family fam JOIN programs.provider_file pf ON (fam.provider_data_id = pf.provider_data_id AND fam.family_id = $1 AND pf.image_group_id IS NOT NULL) LIMIT 1
Limit (cost=1.12..8.00 rows=1 width=6) (actual time=3855.722..3855.722 rows=0 loops=1)
-> Nested Loop (cost=1.12..660217.34 rows=96017 width=6) (actual time=3855.721..3855.721 rows=0 loops=1)
-> Index Only Scan using image_family_family_id_provider_data_id_idx on image_family fam (cost=0.56..458043.00 rows=96023 width=6) (actual time=3855.720..3855.720 rows=0 loops=1)
Filter: ((family_id)::numeric = '8419853'::numeric)
Rows Removed by Filter: 19204671
Heap Fetches: 368
-> Index Scan using "IX_DBO_PROVIDER_FILE_1" on provider_file pf (cost=0.56..2.11 rows=1 width=12) (never executed)
Index Cond: (provider_data_id = fam.provider_data_id)
Filter: (image_group_id IS NOT NULL)
The query plans differ in where the filters are applied for the Index Only Scan. The function's plan has more Heap Fetches and seems to treat the argument as a string cast to numeric.
Things I've tried:
Increasing statistics (and running VACUUM/ANALYZE)
Calling the problematic piece of SQL from another function with LANGUAGE SQL
Adding another index (the one it is now using to perform an INDEX ONLY scan)
Creating a CTE for the image_family table (this did help performance, but it would still do a sequential scan on image_family instead of using the index, so still too slow)
Changing from executing raw SQL to using EXECUTE ... INTO ... USING in the function
Makeup of the two tables:
image_family:
provider_data_id: numeric(16)
family_id: int4
(rest omitted for brevity)
unique index on provider_data_id
index on family_id
I recently added a unique index on (family_id, provider_data_id) as well
Approximately 20 million rows here. Families have many provider_data_ids but not all provider_data_ids are part of families and thus aren't all in this table.
provider_file:
provider_data_id numeric(16)
image_group_id numeric(16)
(rest omitted for brevity)
unique index on provider_data_id
Approximately 32 million rows in this table. Most rows (> 95%) have a Non-Null image_group_id.
Postgres Version 10
How can I get the query performance to match whether I call it from a function or as raw SQL in a query tool?
The problem is exhibited in this line:
Filter: ((family_id)::numeric = '8419853'::numeric)
The index on family_id cannot be used because family_id is compared to a numeric value. This requires a cast to numeric, and there is no index on family_id::numeric.
Even though integer and numeric both are types representing numbers, their internal representation is quite different, and so the indexes are incompatible. In other words, the cast to numeric is like a function for PostgreSQL, and since it has no index on that functional expression, it has to resort to a scan of the whole table (or index).
The solution is simple, however: use an integer instead of a numeric parameter for the query. If in doubt, use a cast like
fam.family_id = $1::integer
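A minimal sketch with a hypothetical wrapper function (the name and signature are made up; the point is that the parameter is declared integer, so the comparison matches the integer index on family_id):
CREATE OR REPLACE FUNCTION find_image_group(p_family_id integer)
RETURNS numeric
LANGUAGE sql STABLE AS $$
    SELECT pf.image_group_id
    FROM programs.image_family fam
    JOIN programs.provider_file pf
        ON fam.provider_data_id = pf.provider_data_id
    WHERE fam.family_id = p_family_id  -- integer = integer: no cast, index usable
      AND pf.image_group_id IS NOT NULL
    LIMIT 1;
$$;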

How can I optimize this really slow query generated by Django?

Here's my Django ORM query:
Group.objects.filter(public = True)\
.annotate(num_members = Count('members', distinct = True))\
.annotate(num_images = Count('images', distinct = True))\
.order_by(sort)
Unfortunately this is taking over 30 seconds even with just a few dozen Groups. Removing the annotate statements makes the query significantly faster at only 3 ms...
My database backend is Postgres and here's the SQL and explain:
Executed SQL
SELECT ••• FROM "astrobin_apps_groups_group"
LEFT OUTER JOIN "astrobin_apps_groups_group_members" ON (
"astrobin_apps_groups_group"."id" = "astrobin_apps_groups_group_members"."group_id"
)
LEFT OUTER JOIN "astrobin_apps_groups_group_images" ON (
"astrobin_apps_groups_group"."id" = "astrobin_apps_groups_group_images"."group_id")
WHERE "astrobin_apps_groups_group"."public" = true
GROUP BY
"astrobin_apps_groups_group"."id",
"astrobin_apps_groups_group"."date_created",
"astrobin_apps_groups_group"."date_updated",
"astrobin_apps_groups_group"."creator_id",
"astrobin_apps_groups_group"."owner_id",
"astrobin_apps_groups_group"."name",
"astrobin_apps_groups_group"."description",
"astrobin_apps_groups_group"."category",
"astrobin_apps_groups_group"."public",
"astrobin_apps_groups_group"."moderated",
"astrobin_apps_groups_group"."autosubmission",
"astrobin_apps_groups_group"."forum_id"
ORDER BY "astrobin_apps_groups_group"."date_updated" ASC
Time
30455.9268951 ms
QUERY PLAN
GroupAggregate (cost=5910.49..8288.54 rows=216 width=242) (actual time=29255.329..30269.284 rows=27 loops=1)
-> Sort (cost=5910.49..6068.88 rows=63357 width=242) (actual time=29253.278..29788.601 rows=201888 loops=1)
Sort Key: astrobin_apps_groups_group.date_updated, astrobin_apps_groups_group.id, astrobin_apps_groups_group.date_created, astrobin_apps_groups_group.creator_id, astrobin_apps_groups_group.owner_id, astrobin_apps_groups_group.name, astrobin_apps_groups_group.description, astrobin_apps_groups_group.category, astrobin_apps_groups_group.public, astrobin_apps_groups_group.moderated, astrobin_apps_groups_group.autosubmission, astrobin_apps_groups_group.forum_id
Sort Method: external merge Disk: 70176kB
-> Hash Right Join (cost=15.69..857.39 rows=63357 width=242) (actual time=1.903..397.613 rows=201888 loops=1)
Hash Cond: (astrobin_apps_groups_group_images.group_id = astrobin_apps_groups_group.id)
-> Seq Scan on astrobin_apps_groups_group_images (cost=0.00..106.05 rows=6805 width=8) (actual time=0.024..12.510 rows=6837 loops=1)
-> Hash (cost=12.31..12.31 rows=270 width=238) (actual time=1.853..1.853 rows=323 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 85kB
-> Hash Right Join (cost=3.63..12.31 rows=270 width=238) (actual time=0.133..1.252 rows=323 loops=1)
Hash Cond: (astrobin_apps_groups_group_members.group_id = astrobin_apps_groups_group.id)
-> Seq Scan on astrobin_apps_groups_group_members (cost=0.00..4.90 rows=290 width=8) (actual time=0.004..0.348 rows=333 loops=1)
-> Hash (cost=3.29..3.29 rows=27 width=234) (actual time=0.103..0.103 rows=27 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 7kB
-> Seq Scan on astrobin_apps_groups_group (cost=0.00..3.29 rows=27 width=234) (actual time=0.004..0.049 rows=27 loops=1)
Filter: public
Total runtime: 30300.606 ms
It would be great if somebody could suggest a way to optimize this. I feel like I'm missing some really low-hanging fruit.
Thanks!
What indexes are present on the astrobin_apps_groups_group, astrobin_apps_groups_group_members, and astrobin_apps_groups_group_images tables?
Are any aggregate functions like SUM or COUNT used in your SELECT? If not, you can remove all the columns from the GROUP BY.
The plan shows that most of the time is taken by sorting. If you create an index on the date_updated field with NULLS LAST, with the latest values first in the index, the planner may be able to use it for sorting.
The sort is spilling to disk, which is the most costly part; this happens because the data collected for sorting does not fit in memory. Try increasing work_mem: set work_mem='10MB'; then run the SELECT. A sketch of both suggestions follows below.
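As a sketch (the index name is made up; the query's ORDER BY date_updated ASC puts NULLs last by default, so an ascending NULLS LAST index matches that ordering):
CREATE INDEX groups_date_updated_idx
    ON astrobin_apps_groups_group (date_updated ASC NULLS LAST);

-- per session; the plan shows a 70176kB on-disk merge, so allow more memory
SET work_mem = '100MB';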

Need help understanding the SQL explanation of a JOIN query versus a query with subselects

I posted a previous question here asking about what was better, JOIN queries or queries using subselects. Link: Queries within queries: Is there a better way?
This is an extension to that question. Can somebody explain to me why I'm seeing what I'm seeing here?
Query (Subselects):
SELECT article_seq, title, synopsis, body, lastmodified_date, (SELECT type_id FROM types WHERE kbarticles.type = type_seq), status, scope, images, archived, author, owner, (SELECT owner_description FROM owners WHERE kbarticles.owner = owner_seq), (SELECT review_date FROM kbreview WHERE kbarticles.article_seq = article_seq) FROM kbarticles WHERE article_seq = $1
Explain Analyze (Subselects)
QUERY PLAN
Index Scan using article_seq_pkey on kbarticles (cost=0.00..32.24 rows=1 width=1241) (actual time=1.421..1.426 rows=1 loops=1)
Index Cond: (article_seq = 1511)
SubPlan
-> Seq Scan on kbreview (cost=0.00..14.54 rows=1 width=8) (actual time=0.243..1.158 rows=1 loops=1)
Filter: ($2 = article_seq)
-> Seq Scan on owners (cost=0.00..1.16 rows=1 width=24) (actual time=0.073..0.078 rows=1 loops=1)
Filter: ($1 = owner_seq)
-> Index Scan using types_type_seq_key on types (cost=0.00..8.27 rows=1 width=24) (actual time=0.044..0.050 rows=1 loops=1)
Index Cond: ($0 = type_seq)
Total runtime: 2.051 ms
Query (JOINs)
SELECT k.article_seq, k.title, k.synopsis, k.body, k.lastmodified_date, t.type_id, k.status, k.scope, k.images, k.archived, k.author, k.owner, o.owner_description, r.review_date FROM kbarticles k JOIN types t ON k.type = t.type_seq JOIN owners o ON k.owner = o.owner_seq JOIN kbreview r ON k.article_seq = r.article_seq WHERE k.article_seq = $1
Explain Analyze (JOINs)
QUERY PLAN
Nested Loop (cost=0.00..32.39 rows=1 width=1293) (actual time=0.532..1.467 rows=1 loops=1)
Join Filter: (k.owner = o.owner_seq)
-> Nested Loop (cost=0.00..31.10 rows=1 width=1269) (actual time=0.419..1.345 rows=1 loops=1)
-> Nested Loop (cost=0.00..22.82 rows=1 width=1249) (actual time=0.361..1.277 rows=1 loops=1)
-> Index Scan using article_seq_pkey on kbarticles k (cost=0.00..8.27 rows=1 width=1241) (actual time=0.065..0.071 rows=1 loops=1)
Index Cond: (article_seq = 1511)
-> Seq Scan on kbreview r (cost=0.00..14.54 rows=1 width=12) (actual time=0.267..1.175 rows=1 loops=1)
Filter: (r.article_seq = 1511)
-> Index Scan using types_type_seq_key on types t (cost=0.00..8.27 rows=1 width=28) (actual time=0.048..0.055 rows=1 loops=1)
Index Cond: (t.type_seq = k.type)
-> Seq Scan on owners o (cost=0.00..1.13 rows=13 width=28) (actual time=0.022..0.038 rows=13 loops=1)
Total runtime: 2.256 ms
Based on the answers given (and accepted) in my previous question, JOINs should give better results. However, in all my tests, I'm seeing the JOINs produce slightly worse results, by a fraction of a millisecond. It also seems like the JOIN plan is riddled with nested loops. All the tables I'm JOINing are indexed.
Am I doing something that I should be doing differently? Is there something I'm missing?
These queries are logically different.
The first one:
SELECT article_seq, title, synopsis, body, lastmodified_date,
(
SELECT type_id
FROM types
WHERE kbarticles.type = type_seq
),
status, scope, images, archived, author, owner,
(
SELECT owner_description
FROM owners
WHERE kbarticles.owner = owner_seq
),
(
SELECT review_date
FROM kbreview
WHERE kbarticles.article_seq = article_seq
)
FROM kbarticles
WHERE article_seq = $1
The second one:
SELECT k.article_seq, k.title, k.synopsis, k.body, k.lastmodified_date, t.type_id, k.status,
k.scope, k.images, k.archived, k.author, k.owner, o.owner_description, r.review_date
FROM kbarticles k
JOIN types t
ON k.type = t.type_seq
JOIN owners o
ON k.owner = o.owner_seq
JOIN kbreview r
ON k.article_seq = r.article_seq
WHERE k.article_seq = $1
If there is more than one matching record in types, owners, or kbreview, the first query will fail, while the second one will return duplicate rows from kbarticles.
If there are no types, owners, or kbreview rows for a kbarticle, the first query will return a NULL in the appropriate field, while the second one will omit that record entirely.
Since the *_seq fields seem to be PRIMARY KEY fields, there will never be duplicates and the query will never fail; in the same way, if kbarticles is constrained with FOREIGN KEY references to types, owners, and kbreview, there can be no missing rows.
However, JOIN operators give the optimizer more latitude: it can make any table the leading one and use more advanced JOIN techniques like HASH JOIN or MERGE JOIN, which are not available if you are using subqueries.
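The first difference is easy to demonstrate: a scalar subquery that returns more than one row raises a runtime error, for example:
SELECT (SELECT x FROM (VALUES (1), (2)) AS t(x));
-- ERROR: more than one row returned by a subquery used as an expression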
Is this table column indexed: r.article_seq?
-> Seq Scan on kbreview r (cost=0.00..14.54 rows=1 width=12) (actual time=0.267..1.175 rows=1 loops=1)
This is where most of the time is spent.
Given that both plans are doing the same table scans, just arranged in a different way, I'd say there's no significant difference between the two. A "nested loop" where the lower arm produces a single row is pretty much the same as a single-row subselect.
Joins are more general, since using scalar subselects won't extend to getting two columns from any of those auxiliary tables, for example.