Postgres SQL slow query aggregation

I have an aggregation query that ends up being slow, and I am looking for any improvements in the query or the indexes.
I indexed all the fields I use, but maybe I missed something, or maybe you can suggest a better way to execute this query.
query:
EXPLAIN ANALYZE
SELECT HE.fs_perm_sec_id,
HE.TICKER_EXCHANGE,
HE.proper_name,
OP.shares_outstanding,
(SELECT factset_industry_desc
FROM factset_industry_map AS fim
WHERE fim.factset_industry_code = HES.industry_code) AS industry,
-- slow aggregation
(SELECT SUM(OIH.current_holdings)
FROM own_inst_holdings OIH
WHERE OIH.fs_perm_sec_id = HE.fs_perm_sec_id) AS inst_holdings
FROM own_prices OP
JOIN h_security_ticker_exchange HE ON OP.fs_perm_sec_id = HE.fs_perm_sec_id
JOIN h_entity_sector HES ON HES.factset_entity_id = HE.factset_entity_id
WHERE HE.ticker_exchange = 'BUD-NYS'
ORDER BY OP.price_date DESC LIMIT 1
This is the piece that slows down the query:
(SELECT SUM(OIH.current_holdings)
FROM own_inst_holdings OIH
WHERE OIH.fs_perm_sec_id = HE.fs_perm_sec_id) AS inst_holdings
EXPLAIN ANALYZE
Limit (cost=360.41..360.41 rows=1 width=100) (actual time=920.592..920.592 rows=1 loops=1)
-> Sort (cost=360.41..360.41 rows=1 width=100) (actual time=920.592..920.592 rows=1 loops=1)
Sort Key: op.price_date
Sort Method: top-N heapsort Memory: 25kB
-> Nested Loop (cost=0.26..360.41 rows=1 width=100) (actual time=867.898..920.493 rows=35 loops=1)
-> Nested Loop (cost=0.17..6.43 rows=1 width=104) (actual time=4.882..4.940 rows=35 loops=1)
-> Index Scan using h_sec_exch_factset_entity_id_idx on h_security_ticker_exchange he (cost=0.09..4.09 rows=1 width=92) (actual time=3.611..3.612 rows=1 loops=1)
Index Cond: ((ticker_exchange)::text = 'BUD-NYS'::text)
-> Index Only Scan using own_prices_multiple_idx_1 on own_prices op (cost=0.09..2.25 rows=32 width=23) (actual time=1.258..1.301 rows=35 loops=1)
Index Cond: (fs_perm_sec_id = (he.fs_perm_sec_id)::text)
Heap Fetches: 0
-> Index Scan using h_entity_sector_multiple_idx_3 on h_entity_sector hes (cost=0.09..4.09 rows=1 width=14) (actual time=0.083..0.085 rows=1 loops=35)
Index Cond: (factset_entity_id = he.factset_entity_id)
SubPlan 1
-> Seq Scan on factset_industry_map fim (cost=0.00..2.48 rows=1 width=20) (actual time=0.014..0.031 rows=1 loops=35)
Filter: (factset_industry_code = hes.industry_code)
Rows Removed by Filter: 137
SubPlan 2
-> Aggregate (cost=347.40..347.40 rows=1 width=6) (actual time=26.035..26.035 rows=1 loops=35)
-> Bitmap Heap Scan on own_inst_holdings oih (cost=4.36..347.31 rows=177 width=6) (actual time=0.326..25.658 rows=622 loops=35)
Recheck Cond: ((fs_perm_sec_id)::text = (he.fs_perm_sec_id)::text)
Heap Blocks: exact=22750
-> Bitmap Index Scan on own_inst_holdings_fs_perm_sec_id_idx (cost=0.00..4.35 rows=177 width=0) (actual time=0.232..0.232 rows=662 loops=35)
Index Cond: ((fs_perm_sec_id)::text = (he.fs_perm_sec_id)::text)
Planning time: 5.806 ms
Execution time: 920.778 ms
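One thing worth noting before touching the indexes: the plan shows SubPlan 2 executing once per joined row (loops=35), even though only a single row survives the LIMIT. Below is a sketch of a rewrite that pushes the join and LIMIT into a derived table so the expensive subqueries run against just the surviving row (same tables and columns as above; untested):
SELECT x.fs_perm_sec_id, x.ticker_exchange, x.proper_name, x.shares_outstanding,
       (SELECT fim.factset_industry_desc
        FROM factset_industry_map AS fim
        WHERE fim.factset_industry_code = x.industry_code) AS industry,
       (SELECT SUM(OIH.current_holdings)
        FROM own_inst_holdings OIH
        WHERE OIH.fs_perm_sec_id = x.fs_perm_sec_id) AS inst_holdings
FROM (SELECT HE.fs_perm_sec_id, HE.ticker_exchange, HE.proper_name,
             OP.shares_outstanding, HES.industry_code
      FROM own_prices OP
      JOIN h_security_ticker_exchange HE ON OP.fs_perm_sec_id = HE.fs_perm_sec_id
      JOIN h_entity_sector HES ON HES.factset_entity_id = HE.factset_entity_id
      WHERE HE.ticker_exchange = 'BUD-NYS'
      ORDER BY OP.price_date DESC
      LIMIT 1) x;
This should drop the aggregation subplan from 35 executions to 1; the per-execution cost still needs the index help discussed below.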

For this query:
SELECT HE.fs_perm_sec_id, HE.TICKER_EXCHANGE, HE.proper_name, OP.shares_outstanding,
(SELECT factset_industry_desc
FROM factset_industry_map AS fim
WHERE fim.factset_industry_code = HES.industry_code
) AS industry,
(SELECT SUM(OIH.current_holdings)
FROM own_inst_holdings OIH
WHERE OIH.fs_perm_sec_id = HE.fs_perm_sec_id
) AS inst_holdings
FROM own_prices OP JOIN
h_security_ticker_exchange HE
ON OP.fs_perm_sec_id = HE.fs_perm_sec_id JOIN
h_entity_sector HES
ON HES.factset_entity_id = HE.factset_entity_id
WHERE HE.ticker_exchange = 'BUD-NYS'
ORDER BY OP.price_date DESC
LIMIT 1;
You want the following indexes:
h_security_ticker_exchange(ticker_exchange, factset_entity_id, fs_perm_sec_id)
own_prices(fs_perm_sec_id)
h_entity_sector(factset_entity_id)
factset_industry_map(factset_industry_code, factset_industry_desc)
own_inst_holdings(fs_perm_sec_id, current_holdings)
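Spelled out as DDL (names are auto-generated here; this assumes none of these indexes already exist):
CREATE INDEX ON h_security_ticker_exchange (ticker_exchange, factset_entity_id, fs_perm_sec_id);
CREATE INDEX ON own_prices (fs_perm_sec_id);
CREATE INDEX ON h_entity_sector (factset_entity_id);
CREATE INDEX ON factset_industry_map (factset_industry_code, factset_industry_desc);
CREATE INDEX ON own_inst_holdings (fs_perm_sec_id, current_holdings);
The last one matters most for the slow part: with current_holdings included, the SUM subquery can be answered by an index-only scan instead of the bitmap heap scan that touched 22,750 heap blocks in the plan above (provided the visibility map is reasonably up to date).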

Related

Why does Postgres EXPLAIN ANALYSE report a huge performance difference compared to real query execution

I have been tasked with rewriting some low-performance SQL in our system, for which I have this query:
select
    "aggtable".id as t_id,
    count(joined.packages)::integer as t_package_count,
    sum(coalesce((joined.packages ->> 'weight'::text)::double precision, 0::double precision)) as t_total_weight
from "aggtable"
join (
    select
        "unnested".myid,
        json_array_elements("jsontable".jsondata) as packages
    from (
        select distinct unnest("tounnest".arrayofid) as myid
        from "aggtable" "tounnest"
    ) "unnested"
    join "jsontable" on "jsontable".id = "unnested".myid
) joined on joined.myid = any("aggtable".arrayofid)
group by "aggtable".id
The EXPLAIN ANALYSE result is:
Sort Method: quicksort Memory: 611kB
-> Nested Loop (cost=30917.16..31333627.69 rows=27270 width=69) (actual time=4.028..2054.470 rows=3658 loops=1)
Join Filter: ((unnest(tounnest.arrayofid)) = ANY (aggtable.arrayofid))
Rows Removed by Join Filter: 9055436
-> ProjectSet (cost=30917.16..36645.61 rows=459000 width=48) (actual time=3.258..13.846 rows=3322 loops=1)
-> Hash Join (cost=30917.16..34316.18 rows=4590 width=55) (actual time=3.246..7.079 rows=1661 loops=1)
Hash Cond: ((unnest(tounnest.arrayofid)) = jsontable.id)
-> Unique (cost=30726.88..32090.38 rows=144700 width=16) (actual time=1.901..3.720 rows=1664 loops=1)
-> Sort (cost=30726.88..31408.63 rows=272700 width=16) (actual time=1.900..2.711 rows=1845 loops=1)
Sort Key: (unnest(tounnest.arrayofid))
Sort Method: quicksort Memory: 135kB
-> ProjectSet (cost=0.00..1444.22 rows=272700 width=16) (actual time=0.011..1.110 rows=1845 loops=1)
-> Seq Scan on aggtable tounnest (cost=0.00..60.27 rows=2727 width=30) (actual time=0.007..0.311 rows=2727 loops=1)
-> Hash (cost=132.90..132.90 rows=4590 width=55) (actual time=1.328..1.329 rows=4590 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 454kB
-> Seq Scan on jsontable (cost=0.00..132.90 rows=4590 width=55) (actual time=0.006..0.497 rows=4590 loops=1)
-> Materialize (cost=0.00..73.91 rows=2727 width=67) (actual time=0.000..0.189 rows=2727 loops=3322)
-> Seq Scan on aggtable (cost=0.00..60.27 rows=2727 width=67) (actual time=0.012..0.317 rows=2727 loops=1)
Planning Time: 0.160 ms
Execution Time: 2065.268 ms
I tried to rewrite this query from scratch to profile performance and to understand the original intention:
select
    joined.joinid,
    count(joined.packages)::integer as t_package_count,
    sum(coalesce((joined.packages ->> 'weight'::text)::double precision, 0::double precision)) as t_total_weight
from (
    select
        joinid,
        json_array_elements(jsondata) as packages
    from (
        select distinct unnest(at2.arrayofid) as joinid, at2.id as rootid
        from aggtable at2
    ) unnested
    join jsontable jt on jt.id = unnested.joinid
) joined
group by joined.joinid
For this, EXPLAIN ANALYSE returns:
HashAggregate (cost=873570.28..873572.78 rows=200 width=28) (actual time=18.379..18.741 rows=1661 loops=1)
Group Key: (unnest(at2.arrayofid))
-> ProjectSet (cost=44903.16..191820.28 rows=27270000 width=48) (actual time=3.019..14.684 rows=3658 loops=1)
-> Hash Join (cost=44903.16..53425.03 rows=272700 width=55) (actual time=3.010..4.999 rows=1829 loops=1)
Hash Cond: ((unnest(at2.arrayofid)) = jt.id)
-> Unique (cost=44712.88..46758.13 rows=272700 width=53) (actual time=1.825..2.781 rows=1845 loops=1)
-> Sort (cost=44712.88..45394.63 rows=272700 width=53) (actual time=1.824..2.135 rows=1845 loops=1)
Sort Key: (unnest(at2.arrayofid)), at2.id
Sort Method: quicksort Memory: 308kB
-> ProjectSet (cost=0.00..1444.22 rows=272700 width=53) (actual time=0.009..1.164 rows=1845 loops=1)
-> Seq Scan on aggtable at2 (cost=0.00..60.27 rows=2727 width=67) (actual time=0.005..0.311 rows=2727 loops=1)
-> Hash (cost=132.90..132.90 rows=4590 width=55) (actual time=1.169..1.169 rows=4590 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 454kB
-> Seq Scan on jsontable jt (cost=0.00..132.90 rows=4590 width=55) (actual time=0.007..0.462 rows=4590 loops=1)
Planning Time: 0.144 ms
Execution Time: 18.889 ms
I see a huge difference in the query performance (20 ms vs 2000 ms) as estimated by Postgres. However, the real query performance is nowhere near that different (the fast one is about 500 ms and the slow one is about 1 s).
My questions:
1. Is it normal that EXPLAIN shows a drastic difference in performance that does not show up in real life?
2. Is the second, optimized query correct? What did the first query do wrong?
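On question 1: the cost figures in EXPLAIN are the planner's estimates in arbitrary units (roughly, page fetches), not predicted milliseconds, so a large estimated gap need not translate into the same wall-clock gap; caching further blurs repeated runs. One way to see where the time really goes is to add buffer statistics, e.g. (placeholder statement; substitute either query above):
EXPLAIN (ANALYZE, BUFFERS)
select count(*) from "aggtable";
Comparing "shared hit" against "shared read" between a first and a second run usually shows how much of the difference is just cache warm-up.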
I also supply credentials to a sample database so that everyone can try the queries out:
postgres://birylwwg:X6EM3Al9Jhqzz0w6EaSSx79pa4aXRBZq@arjuna.db.elephantsql.com:5432/birylwwg
The password is
X6EM3Al9Jhqzz0w6EaSSx79pa4aXRBZq

Query Tuning in PostgreSQL [closed]

I have a query that runs in 17s, and I cannot think of a way to optimize it. Some help is much needed.
EXPLAIN ANALYSE
CREATE materialized VIEW professores_fizeram_planejamentoTEST as
SELECT unities.id as id_escola,
unities.name as nome_escola,
teachers.id as id_professor,
teachers.name as nome_professor,
datas.dia,
COALESCE((SELECT true
FROM lesson_plans
WHERE lesson_plans.teacher_id = teachers.id and
datas.dia between lesson_plans.start_at and lesson_plans.end_at
LIMIT 1), false) as criou_plano_aula,
COALESCE((select true
from content_records
where content_records.teacher_id = teachers.id and
content_records.record_date = datas.dia
limit 1), false) as criou_registro_conteudo
FROM (SELECT i::date as dia,
EXTRACT(year FROM i::date) as ano
FROM generate_series(date_trunc('year', now()), now(), '1 day'::INTERVAL) i
WHERE EXTRACT(dow from i::timestamp) in (1,2,3,4,5)) datas
JOIN (SELECT distinct teacher_id, classroom_id, YEAR
FROM teacher_discipline_classrooms) teacher_discipline_classrooms ON (teacher_discipline_classrooms.year = datas.ano)
JOIN classrooms on (classrooms.id = teacher_discipline_classrooms.classroom_id)
JOIN unities on (unities.id = classrooms.unity_id)
JOIN teachers on (teachers.id = teacher_discipline_classrooms.teacher_id)
WHERE NOT EXISTS(SELECT 1
FROM school_calendars
JOIN school_calendar_events on (school_calendar_events.school_calendar_id = school_calendars.id and
school_calendar_events.event_type = 'no_school' and
datas.dia between school_calendar_events.start_date and school_calendar_events.end_date)
WHERE school_calendars.unity_id = unities.id)
This query returns the following analysis
Nested Loop (cost=143.840..3721.540 rows=38 width=66) (actual time=1.923..17270.125 rows=171231 loops=1)
-> Nested Loop (cost=143.690..1523.510 rows=38 width=41) (actual time=1.744..5996.571 rows=171231 loops=1)
Join Filter: (NOT (delta 3))
Rows Removed by Join Filter: 15249
-> Nested Loop (cost=143.550..203.530 rows=76 width=16) (actual time=1.661..568.049 rows=186480 loops=1)
-> Hash Join (cost=143.270..165.450 rows=76 width=16) (actual time=1.651..183.740 rows=186660 loops=1)
Hash Cond: ((victor.juliet_seven)::double precision = echo_tango('quebec_four'::text, ((alpha_quebec_whiskey.alpha_quebec_whiskey)::date)::timestamp without time zone))
-> HashAggregate (cost=121.700..127.820 rows=612 width=12) (actual time=1.384..3.336 rows=2388 loops=1)
Group Key: victor.foxtrot_six, victor.oscar_kilo, victor.juliet_seven
-> Seq Scan on victor (cost=0.000..94.400 rows=3640 width=12) (actual time=0.004..0.563 rows=3640 loops=1)
-> Hash (cost=21.260..21.260 rows=25 width=8) (actual time=0.256..0.256 rows=180 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 16kB
-> Function Scan on xray_yankee alpha_quebec_whiskey (cost=0.010..21.260 rows=25 width=8) (actual time=0.081..0.195 rows=180 loops=1)
Filter: (echo_tango('papa'::text, (alpha_quebec_whiskey)::timestamp without time zone) = ANY ('oscar_seven_charlie'::double precision[]))
Rows Removed by Filter: 72
-> Index Scan using echo_victor on uniform (cost=0.280..0.490 rows=1 width=8) (actual time=0.001..0.002 rows=1 loops=186660)
Index Cond: (quebec_seven = victor.oscar_kilo)
-> Index Scan using golf on four (cost=0.140..0.160 rows=1 width=29) (actual time=0.001..0.001 rows=1 loops=186480)
Index Cond: (quebec_seven = uniform.xray_victor)
SubPlan
-> Nested Loop (cost=0.280..34.110 rows=2 width=0) (actual time=0.027..0.027 rows=0 loops=186480)
-> Seq Scan on seven (cost=0.000..1.990 rows=2 width=4) (actual time=0.003..0.008 rows=2 loops=186480)
Filter: (xray_victor = four.quebec_seven)
Rows Removed by Filter: 75
-> Index Scan using alpha_quebec_papa on two (cost=0.280..16.050 rows=1 width=4) (actual time=0.008..0.008 rows=0 loops=372960)
Index Cond: (zulu = seven.quebec_seven)
Filter: (((xray_delta)::text = 'oscar_seven_golf'::text) AND ((alpha_quebec_whiskey.alpha_quebec_whiskey)::date >= foxtrot_three) AND ((alpha_quebec_whiskey.alpha_quebec_whiskey)::date <= lima))
Rows Removed by Filter: 14
-> Index Scan using tango on romeo (cost=0.150..0.200 rows=1 width=29) (actual time=0.001..0.001 rows=1 loops=171231)
Index Cond: (quebec_seven = victor.foxtrot_six)
SubPlan
-> Limit (cost=0.000..20.600 rows=1 width=0) (actual time=0.048..0.048 rows=0 loops=171231)
-> Seq Scan on five (cost=0.000..20.600 rows=1 width=0) (actual time=0.045..0.045 rows=0 loops=171231)
Filter: ((foxtrot_six = romeo.quebec_seven) AND ((alpha_quebec_whiskey.alpha_quebec_whiskey)::date >= oscar_echo) AND ((alpha_quebec_whiskey.alpha_quebec_whiskey)::date <= xray_three))
Rows Removed by Filter: 246
SubPlan
-> Limit (cost=4.810..37.030 rows=1 width=0) (actual time=0.015..0.015 rows=0 loops=171231)
-> Bitmap Heap Scan on whiskey (cost=4.810..37.030 rows=1 width=0) (actual time=0.011..0.011 rows=0 loops=171231)
Recheck Cond: (foxtrot_six = romeo.quebec_seven)
Filter: (foxtrot_tango = (alpha_quebec_whiskey.alpha_quebec_whiskey)::date)
Rows Removed by Filter: 28
Heap Blocks: exact=258248
-> Bitmap Index Scan on juliet_bravo (cost=0.000..4.810 rows=70 width=0) (actual time=0.003..0.003 rows=37 loops=171231)
Index Cond: (foxtrot_six = romeo.quebec_seven)
Thank you.
no, we won't!
sanitize your query (add aliases, and use them, for instance)
COALESCE((SELECT true FROM lesson_plans WHERE lesson_plans.teacher_id = teachers.id and datas.dia between lesson_plans.start_at and lesson_plans.end_at LIMIT 1), false) as criou_plano_aula
... can be replaced by a simple EXISTS (subquery); see the sketch after this list.
your outer query only refers to {unities,teachers,datas}, the rest of the tables are merely junction tables.
if there is a difference in the query plan between expected and observed row counts, your statistics are wrong.
the function scan on generate_series() spoils the query plan. Better to use a materialized calendar table, which can be indexed and analyzed (also sketched below).
always add the tuning parameters and an estimate of the cardinalities to the question. These are not details.
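A sketch of the two concrete suggestions above, assuming the column types implied by the query (the calendar date range below is illustrative):
-- EXISTS instead of COALESCE((SELECT true ... LIMIT 1), false):
EXISTS (SELECT 1
        FROM lesson_plans lp
        WHERE lp.teacher_id = teachers.id
          AND datas.dia BETWEEN lp.start_at AND lp.end_at) AS criou_plano_aula
-- A materialized calendar table in place of generate_series(),
-- so the planner gets real row counts and an index:
CREATE TABLE calendar AS
SELECT i::date AS dia, EXTRACT(year FROM i)::int AS ano
FROM generate_series('2015-01-01'::timestamp, '2030-12-31'::timestamp, '1 day') i
WHERE EXTRACT(dow FROM i) IN (1, 2, 3, 4, 5);
CREATE INDEX ON calendar (dia);
ANALYZE calendar;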
This is not an answer, but a comment that doesn't fit in the comments section. If you want to speed up your query, please add some information:
First, please include the execution plan. This will quickly tell us what's going on.
Also, please post:
Existing indexes on lesson_plans.
Existing indexes on content_records.
Rows on teacher_discipline_classrooms.
Existing indexes on teacher_discipline_classrooms.
Existing indexes on classrooms.
Existing indexes on unities.
Existing indexes on teachers.
Existing indexes on school_calendars.

Postgres running a slower nested loop join instead of a hash join

I've been optimizing some SQL queries against a production database clone. Here is an example query where I've created two indexes so we can run index-only scans really fast using a hash join.
explain analyse
select activity.id from activity, notification
where notification.user_id = '9a51f675-e1e2-46e5-8bcd-6bc535c7e7cb'
and notification.received = false
and notification.invalid = false
and activity.id = notification.activity_id
and activity.space_id = 'e12b42ac-4e54-476f-a4f5-7d6bdb1e61e2'
order by activity.end_time desc
limit 21;
Limit (cost=985.58..985.58 rows=1 width=24) (actual time=0.017..0.017 rows=0 loops=1)
-> Sort (cost=985.58..985.58 rows=1 width=24) (actual time=0.016..0.016 rows=0 loops=1)
Sort Key: activity.end_time DESC
Sort Method: quicksort Memory: 25kB
-> Hash Join (cost=649.76..985.57 rows=1 width=24) (actual time=0.010..0.010 rows=0 loops=1)
Hash Cond: (notification.activity_id = activity.id)
-> Index Only Scan using unreceived_notifications_index on notification (cost=0.42..334.62 rows=127 width=16) (actual time=0.009..0.009 rows=0 loops=1)
Index Cond: (user_id = '9a51f675-e1e2-46e5-8bcd-6bc535c7e7cb'::uuid)
Heap Fetches: 0
-> Hash (cost=634.00..634.00 rows=1227 width=24) (never executed)
-> Index Only Scan using space_activity_index on activity (cost=0.56..634.00 rows=1227 width=24) (never executed)
Index Cond: (space_id = 'e12b42ac-4e54-476f-a4f5-7d6bdb1e61e2'::uuid)
Heap Fetches: 0
Planning time: 0.299 ms
Execution time: 0.046 ms
And here are the indexes.
create index unreceived_notifications_index on notification using btree (
user_id,
activity_id, -- index-only scan
id -- index-only scan
) where (
invalid = false
and received = false
);
And space_activity_index:
create index space_activity_index on activity using btree (
space_id,
end_time desc,
id -- index-only scan
);
However, I'm noticing that these indexes are making our development database a LOT slower. Here's the same query against a user in our development database; you'll notice it's using a nested loop join this time, and the order of the loops is really inefficient.
explain analyse
select notification.id from notification, activity
where notification.user_id = '7c74a801-7cb5-4914-bbbe-2b18cd1ced76'
and notification.received = false
and notification.invalid = false
and activity.id = notification.activity_id
and activity.space_id = '415fc269-e68f-4da0-b3e3-b1273b741a7f'
order by activity.end_time desc
limit 20;
Limit (cost=0.69..272.04 rows=20 width=24) (actual time=277.255..277.255 rows=0 loops=1)
-> Nested Loop (cost=0.69..71487.55 rows=5269 width=24) (actual time=277.253..277.253 rows=0 loops=1)
-> Index Only Scan using space_activity_index on activity (cost=0.42..15600.36 rows=155594 width=24) (actual time=0.016..59.433 rows=155666 loops=1)
Index Cond: (space_id = '415fc269-e68f-4da0-b3e3-b1273b741a7f'::uuid)
Heap Fetches: 38361
-> Index Only Scan using unreceived_notifications_index on notification (cost=0.27..0.35 rows=1 width=32) (actual time=0.001..0.001 rows=0 loops=155666)
Index Cond: ((user_id = '7c74a801-7cb5-4914-bbbe-2b18cd1ced76'::uuid) AND (activity_id = activity.id))
Heap Fetches: 0
Planning time: 0.351 ms
Execution time: 277.286 ms
One thing to note here is that there are only 2 space_ids in our development database. I suspect this is causing Postgres to try to be clever, but it's actually making performance worse!
My questions are:
Is there some way I can force Postgres to run the hash join instead of the nested loop join?
Is there some way, in general, to make Postgres's query planner more deterministic? Ideally, the query performance characteristics would be exactly the same between these environments.
Thanks.
Edit: Note that when I leave out the space_id condition when querying my dev database, the result is faster.
explain analyse
select notification.id from notification, activity
where notification.user_id = '7c74a801-7cb5-4914-bbbe-2b18cd1ced76'
and notification.received = false
and notification.invalid = false
and activity.id = notification.activity_id
--and activity.space_id = '415fc269-e68f-4da0-b3e3-b1273b741a7f'
order by activity.end_time desc
limit 20;
Limit (cost=17628.13..17630.43 rows=20 width=24) (actual time=2.730..2.730 rows=0 loops=1)
-> Gather Merge (cost=17628.13..17996.01 rows=3199 width=24) (actual time=2.729..2.729 rows=0 loops=1)
Workers Planned: 1
Workers Launched: 1
-> Sort (cost=16628.12..16636.12 rows=3199 width=24) (actual time=0.126..0.126 rows=0 loops=2)
Sort Key: activity.end_time DESC
Sort Method: quicksort Memory: 25kB
-> Nested Loop (cost=20.59..16441.88 rows=3199 width=24) (actual time=0.093..0.093 rows=0 loops=2)
-> Parallel Bitmap Heap Scan on notification (cost=20.17..2512.17 rows=3199 width=32) (actual time=0.092..0.092 rows=0 loops=2)
Recheck Cond: ((user_id = '7c74a801-7cb5-4914-bbbe-2b18cd1ced76'::uuid) AND (NOT invalid) AND (NOT received))
-> Bitmap Index Scan on unreceived_notifications_index (cost=0.00..18.82 rows=5439 width=0) (actual time=0.006..0.006 rows=0 loops=1)
Index Cond: (user_id = '7c74a801-7cb5-4914-bbbe-2b18cd1ced76'::uuid)
-> Index Scan using activity_pkey on activity (cost=0.42..4.35 rows=1 width=24) (never executed)
Index Cond: (id = notification.activity_id)
Planning time: 0.344 ms
Execution time: 3.433 ms
Edit: After reading about index hinting, I tried turning nested_loop off using set enable_nestloop=false; and the query is way faster!
Limit (cost=20617.76..20620.09 rows=20 width=24) (actual time=2.872..2.872 rows=0 loops=1)
-> Gather Merge (cost=20617.76..21130.20 rows=4392 width=24) (actual time=2.871..2.871 rows=0 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Sort (cost=19617.74..19623.23 rows=2196 width=24) (actual time=0.086..0.086 rows=0 loops=3)
Sort Key: activity.end_time DESC
Sort Method: quicksort Memory: 25kB
-> Hash Join (cost=2609.20..19495.85 rows=2196 width=24) (actual time=0.062..0.062 rows=0 loops=3)
Hash Cond: (activity.id = notification.activity_id)
-> Parallel Seq Scan on activity (cost=0.00..14514.57 rows=64831 width=24) (actual time=0.006..0.006 rows=1 loops=3)
Filter: (space_id = '415fc269-e68f-4da0-b3e3-b1273b741a7f'::uuid)
-> Hash (cost=2541.19..2541.19 rows=5441 width=32) (actual time=0.007..0.007 rows=0 loops=3)
Buckets: 8192 Batches: 1 Memory Usage: 64kB
-> Bitmap Heap Scan on notification (cost=20.18..2541.19 rows=5441 width=32) (actual time=0.006..0.006 rows=0 loops=3)
Recheck Cond: ((user_id = '7c74a801-7cb5-4914-bbbe-2b18cd1ced76'::uuid) AND (NOT invalid) AND (NOT received))
-> Bitmap Index Scan on unreceived_notifications_index (cost=0.00..18.82 rows=5441 width=0) (actual time=0.004..0.004 rows=0 loops=3)
Index Cond: (user_id = '7c74a801-7cb5-4914-bbbe-2b18cd1ced76'::uuid)
Planning time: 0.375 ms
Execution time: 3.630 ms
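For what it's worth, here is a sketch that scopes the setting to a single transaction instead of the whole session, so other queries keep their normal plans; SET LOCAL reverts automatically at COMMIT or ROLLBACK:
BEGIN;
SET LOCAL enable_nestloop = off;  -- only affects this transaction
select notification.id from notification, activity
where notification.user_id = '7c74a801-7cb5-4914-bbbe-2b18cd1ced76'
  and notification.received = false
  and notification.invalid = false
  and activity.id = notification.activity_id
  and activity.space_id = '415fc269-e68f-4da0-b3e3-b1273b741a7f'
order by activity.end_time desc
limit 20;
COMMIT;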
It depends on how specialized you want to get. There are plan guides in PostgreSQL that you can use to force queries to use specific indexes. But query optimizers are strongly influenced by record counts in the choices they make. Maybe you add the extra indexes in the non-dev environment and move on?
https://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/chap-sql-server-aurora-pg.tuning.queryplanning.html

1 minute difference in almost identical PostgreSQL queries?

I have a Rails application with the ability to filter records by state_code. I noticed that when I pass 'CA' as the search term I get my results almost instantly, but if I pass 'AZ', for example, it takes more than a minute.
I have no idea why.
Below is query explains from psql:
Fast one:
EXPLAIN ANALYZE SELECT
accounts.id
FROM "accounts"
LEFT OUTER JOIN "addresses"
ON "addresses"."addressable_id" = "accounts"."id"
AND "addresses"."address_type" = 'mailing'
AND "addresses"."addressable_type" = 'Account'
WHERE "accounts"."organization_id" = 16
AND (addresses.state_code IN ('CA'))
ORDER BY accounts.name DESC;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=4941.94..4941.94 rows=1 width=18) (actual time=74.810..74.969 rows=821 loops=1)
Sort Key: accounts.name
Sort Method: quicksort Memory: 75kB
-> Hash Join (cost=4.46..4941.93 rows=1 width=18) (actual time=70.044..73.148 rows=821 loops=1)
Hash Cond: (addresses.addressable_id = accounts.id)
-> Seq Scan on addresses (cost=0.00..4911.93 rows=6806 width=4) (actual time=0.027..65.547 rows=15244 loops=1)
Filter: (((address_type)::text = 'mailing'::text) AND ((addressable_type)::text = 'Account'::text) AND ((state_code)::text = 'CA'::text))
Rows Removed by Filter: 129688
-> Hash (cost=4.45..4.45 rows=1 width=18) (actual time=2.037..2.037 rows=1775 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 87kB
-> Index Scan using organization_id_index on accounts (cost=0.29..4.45 rows=1 width=18) (actual time=0.018..1.318 rows=1775 loops=1)
Index Cond: (organization_id = 16)
Planning time: 0.565 ms
Execution time: 75.224 ms
(14 rows)
Slow one:
EXPLAIN ANALYZE SELECT
accounts.id
FROM "accounts"
LEFT OUTER JOIN "addresses"
ON "addresses"."addressable_id" = "accounts"."id"
AND "addresses"."address_type" = 'mailing'
AND "addresses"."addressable_type" = 'Account'
WHERE "accounts"."organization_id" = 16
AND (addresses.state_code IN ('NV'))
ORDER BY accounts.name DESC;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=4917.27..4917.27 rows=1 width=18) (actual time=97091.270..97091.277 rows=25 loops=1)
Sort Key: accounts.name
Sort Method: quicksort Memory: 26kB
-> Nested Loop (cost=0.29..4917.26 rows=1 width=18) (actual time=844.250..97091.083 rows=25 loops=1)
Join Filter: (accounts.id = addresses.addressable_id)
Rows Removed by Join Filter: 915875
-> Index Scan using organization_id_index on accounts (cost=0.29..4.45 rows=1 width=18) (actual time=0.017..10.315 rows=1775 loops=1)
Index Cond: (organization_id = 16)
-> Seq Scan on addresses (cost=0.00..4911.93 rows=70 width=4) (actual time=0.110..54.521 rows=516 loops=1775)
Filter: (((address_type)::text = 'mailing'::text) AND ((addressable_type)::text = 'Account'::text) AND ((state_code)::text = 'NV'::text))
Rows Removed by Filter: 144416
Planning time: 0.308 ms
Execution time: 97091.325 ms
(13 rows)
The slow one returns 25 rows and the fast one 821 rows, which makes it even more confusing.
I solved it by running the VACUUM ANALYZE command from the psql command line.
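For reference, the per-table form, which refreshes the row-count and value-distribution statistics the planner had apparently gotten wrong (table names taken from the queries above):
VACUUM ANALYZE accounts;
VACUUM ANALYZE addresses;
ANALYZE alone is enough to refresh the statistics; VACUUM additionally reclaims dead tuples and updates the visibility map.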

Extremely slow query on 1st run, even with indexes

I have an extremely slow query (on the order of 1-3 minutes) despite indexes being used. Similar queries will be run 4-6 times by the user, so speed is critical.
QUERY:
SELECT SUM(bh.count) AS count,b.time AS batchtime
FROM
batchtimes AS b
INNER JOIN batchtimes_headlines AS bh ON b.hashed_id = bh.batchtime_hashed_id
INNER JOIN headlines_ngrams AS hn ON bh.headline_hashed_id = hn.headline_hashed_id
INNER JOIN ngrams AS n ON hn.ngram_hashed_id = n.hashed_id
INNER JOIN homepages_headlines AS hh ON bh.headline_hashed_id = hh.headline_hashed_id
INNER JOIN homepages AS hp ON hh.homepage_hashed_id = hp.hashed_id
WHERE
b.time IN (SELECT * FROM generate_series('2013-10-10 20:00:00.000000'::timestamp,'2014-02-16 20:00:00.000000'::timestamp,'1 hours'))
AND ( n.gram = 'a' )
AND hp.url = 'www.abcdefg.com'
GROUP BY
b.time
ORDER BY
b.time ASC;
EXPLAIN ANALYZE after very first run:
GroupAggregate (cost=6863.26..6863.79 rows=30 width=12) (actual time=90905.858..90908.889 rows=3039 loops=1)
-> Sort (cost=6863.26..6863.34 rows=30 width=12) (actual time=90905.853..90906.971 rows=19780 loops=1)
Sort Key: b."time"
Sort Method: quicksort Memory: 1696kB
-> Hash Join (cost=90.16..6862.52 rows=30 width=12) (actual time=378.784..90890.636 rows=19780 loops=1)
Hash Cond: (b."time" = generate_series.generate_series)
-> Nested Loop (cost=73.16..6845.27 rows=60 width=12) (actual time=375.644..90859.059 rows=22910 loops=1)
-> Nested Loop (cost=72.88..6740.51 rows=60 width=37) (actual time=375.624..90618.828 rows=22910 loops=1)
-> Nested Loop (cost=42.37..4391.06 rows=1 width=66) (actual time=368.993..54607.402 rows=1213 loops=1)
-> Nested Loop (cost=42.23..4390.18 rows=5 width=99) (actual time=223.681..53051.774 rows=294787 loops=1)
-> Nested Loop (cost=41.68..4379.19 rows=5 width=33) (actual time=223.643..49403.746 rows=294787 loops=1)
-> Index Scan using by_gram_ngrams on ngrams n (cost=0.56..8.58 rows=1 width=33) (actual time=17.001..17.002 rows=1 loops=1)
Index Cond: ((gram)::text = 'a'::text)
-> Bitmap Heap Scan on headlines_ngrams hn (cost=41.12..4359.59 rows=1103 width=66) (actual time=206.634..49273.363 rows=294787 loops=1)
Recheck Cond: ((ngram_hashed_id)::text = (n.hashed_id)::text)
-> Bitmap Index Scan on by_ngramhashedid_headlinesngrams (cost=0.00..40.84 rows=1103 width=0) (actual time=143.430..143.430 rows=294787 loops=1)
Index Cond: ((ngram_hashed_id)::text = (n.hashed_id)::text)
-> Index Scan using by_headlinehashedid_homepagesheadlines on homepages_headlines hh (cost=0.56..2.19 rows=1 width=66) (actual time=0.011..0.011 rows=1 loops=294787)
Index Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
-> Index Scan using by_hashedid_homepages on homepages hp (cost=0.14..0.17 rows=1 width=33) (actual time=0.005..0.005 rows=0 loops=294787)
Index Cond: ((hashed_id)::text = (hh.homepage_hashed_id)::text)
Filter: ((url)::text = 'www.abcdefg.com'::text)
Rows Removed by Filter: 1
-> Bitmap Heap Scan on batchtimes_headlines bh (cost=30.51..2333.86 rows=1560 width=70) (actual time=7.977..29.674 rows=19 loops=1213)
Recheck Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
-> Bitmap Index Scan on by_headlinehashedid_batchtimesheadlines (cost=0.00..30.12 rows=1560 width=0) (actual time=6.595..6.595 rows=19 loops=1213)
Index Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
-> Index Scan using by_hashedid_batchtimes on batchtimes b (cost=0.28..1.74 rows=1 width=41) (actual time=0.009..0.009 rows=1 loops=22910)
Index Cond: ((hashed_id)::text = (bh.batchtime_hashed_id)::text)
-> Hash (cost=14.50..14.50 rows=200 width=8) (actual time=3.130..3.130 rows=3097 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 121kB
-> HashAggregate (cost=12.50..14.50 rows=200 width=8) (actual time=1.819..2.342 rows=3097 loops=1)
-> Function Scan on generate_series (cost=0.00..10.00 rows=1000 width=8) (actual time=0.441..0.714 rows=3097 loops=1)
Total runtime: 90911.001 ms
EXPLAIN ANALYZE after 2nd run:
GroupAggregate (cost=6863.26..6863.79 rows=30 width=12) (actual time=3122.861..3125.796 rows=3039 loops=1)
-> Sort (cost=6863.26..6863.34 rows=30 width=12) (actual time=3122.857..3123.882 rows=19780 loops=1)
Sort Key: b."time"
Sort Method: quicksort Memory: 1696kB
-> Hash Join (cost=90.16..6862.52 rows=30 width=12) (actual time=145.396..3116.467 rows=19780 loops=1)
Hash Cond: (b."time" = generate_series.generate_series)
-> Nested Loop (cost=73.16..6845.27 rows=60 width=12) (actual time=142.406..3102.864 rows=22910 loops=1)
-> Nested Loop (cost=72.88..6740.51 rows=60 width=37) (actual time=142.395..3011.768 rows=22910 loops=1)
-> Nested Loop (cost=42.37..4391.06 rows=1 width=66) (actual time=142.229..2969.144 rows=1213 loops=1)
-> Nested Loop (cost=42.23..4390.18 rows=5 width=99) (actual time=135.799..2142.666 rows=294787 loops=1)
-> Nested Loop (cost=41.68..4379.19 rows=5 width=33) (actual time=135.768..437.824 rows=294787 loops=1)
-> Index Scan using by_gram_ngrams on ngrams n (cost=0.56..8.58 rows=1 width=33) (actual time=0.030..0.031 rows=1 loops=1)
Index Cond: ((gram)::text = 'a'::text)
-> Bitmap Heap Scan on headlines_ngrams hn (cost=41.12..4359.59 rows=1103 width=66) (actual time=135.732..405.943 rows=294787 loops=1)
Recheck Cond: ((ngram_hashed_id)::text = (n.hashed_id)::text)
-> Bitmap Index Scan on by_ngramhashedid_headlinesngrams (cost=0.00..40.84 rows=1103 width=0) (actual time=72.570..72.570 rows=294787 loops=1)
Index Cond: ((ngram_hashed_id)::text = (n.hashed_id)::text)
-> Index Scan using by_headlinehashedid_homepagesheadlines on homepages_headlines hh (cost=0.56..2.19 rows=1 width=66) (actual time=0.005..0.005 rows=1 loops=294787)
Index Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
-> Index Scan using by_hashedid_homepages on homepages hp (cost=0.14..0.17 rows=1 width=33) (actual time=0.003..0.003 rows=0 loops=294787)
Index Cond: ((hashed_id)::text = (hh.homepage_hashed_id)::text)
Filter: ((url)::text = 'www.abcdefg.com'::text)
Rows Removed by Filter: 1
-> Bitmap Heap Scan on batchtimes_headlines bh (cost=30.51..2333.86 rows=1560 width=70) (actual time=0.015..0.031 rows=19 loops=1213)
Recheck Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
-> Bitmap Index Scan on by_headlinehashedid_batchtimesheadlines (cost=0.00..30.12 rows=1560 width=0) (actual time=0.013..0.013 rows=19 loops=1213)
Index Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
-> Index Scan using by_hashedid_batchtimes on batchtimes b (cost=0.28..1.74 rows=1 width=41) (actual time=0.003..0.004 rows=1 loops=22910)
Index Cond: ((hashed_id)::text = (bh.batchtime_hashed_id)::text)
-> Hash (cost=14.50..14.50 rows=200 width=8) (actual time=2.982..2.982 rows=3097 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 121kB
-> HashAggregate (cost=12.50..14.50 rows=200 width=8) (actual time=1.771..2.311 rows=3097 loops=1)
-> Function Scan on generate_series (cost=0.00..10.00 rows=1000 width=8) (actual time=0.439..0.701 rows=3097 loops=1)
Total runtime: 3125.985 ms
I have a 32GB server. Here are the modifications to postgresql.conf:
default_statistics_target = 100
maintenance_work_mem = 1920MB
checkpoint_completion_target = 0.9
effective_cache_size = 16GB
work_mem = 160MB
wal_buffers = 16MB
checkpoint_segments = 32
shared_buffers = 7680MB
The DB has recently been vacuumed, re-indexed, and analyzed.
Any suggestions for how to tune this query?
This may or may not answer your question; I cannot comment above, since I don't have 50 rep on Stack Overflow. :/
My first question is: why INNER JOIN? It will return unwanted columns in your join result. For example, in your query, when you inner join
INNER JOIN headlines_ngrams AS hn ON bh.headline_hashed_id = hn.headline_hashed_id
the result will have two columns with the same information, which is redundant. So if you have 100,000,000 rows, you will have 100,000,000 entries in each of bh.headline_hashed_id and hn.headline_hashed_id. In your query above you are joining 5 tables, yet you are interested in only
SELECT SUM(bh.count) AS count,b.time AS batchtime
so I believe you should use a NATURAL JOIN.
http://en.wikipedia.org/wiki/Inner_join#Inner_join
The reason I can think of for the improved performance on the second attempt is caching. People have mentioned above using a temporary table for generate_series, which could be a good option. Also, if you are thinking of using WITH in your query, you should read up on it first.
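Two sketches of those suggestions. First, materializing the series into an indexed temp table (grain assumed hourly, matching the query); second, warming the cache explicitly with the pg_prewarm extension (ships with the contrib modules in PostgreSQL 9.4+), which attacks the first-run penalty directly:
-- Indexed temp table replacing IN (SELECT * FROM generate_series(...)):
CREATE TEMP TABLE times AS
SELECT ts
FROM generate_series('2013-10-10 20:00:00'::timestamp,
                     '2014-02-16 20:00:00'::timestamp,
                     '1 hour') AS g(ts);
CREATE INDEX ON times (ts);
ANALYZE times;
-- ...then JOIN batchtimes AS b ON b.time = times.ts in place of the IN clause.
-- Warm the buffer cache before the first user-facing run:
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('headlines_ngrams');
SELECT pg_prewarm('batchtimes_headlines');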