1 minute difference in almost identical PostgreSQL queries?

I have a Rails application with the ability to filter records by state_code. I noticed that when I pass 'CA' as the search term I get my results almost instantly, but if I pass 'NV', for example, it takes more than a minute.
I have no idea why.
Below are the query plans from psql:
Fast one:
EXPLAIN ANALYZE SELECT
accounts.id
FROM "accounts"
LEFT OUTER JOIN "addresses"
ON "addresses"."addressable_id" = "accounts"."id"
AND "addresses"."address_type" = 'mailing'
AND "addresses"."addressable_type" = 'Account'
WHERE "accounts"."organization_id" = 16
AND (addresses.state_code IN ('CA'))
ORDER BY accounts.name DESC;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=4941.94..4941.94 rows=1 width=18) (actual time=74.810..74.969 rows=821 loops=1)
Sort Key: accounts.name
Sort Method: quicksort Memory: 75kB
-> Hash Join (cost=4.46..4941.93 rows=1 width=18) (actual time=70.044..73.148 rows=821 loops=1)
Hash Cond: (addresses.addressable_id = accounts.id)
-> Seq Scan on addresses (cost=0.00..4911.93 rows=6806 width=4) (actual time=0.027..65.547 rows=15244 loops=1)
Filter: (((address_type)::text = 'mailing'::text) AND ((addressable_type)::text = 'Account'::text) AND ((state_code)::text = 'CA'::text))
Rows Removed by Filter: 129688
-> Hash (cost=4.45..4.45 rows=1 width=18) (actual time=2.037..2.037 rows=1775 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 87kB
-> Index Scan using organization_id_index on accounts (cost=0.29..4.45 rows=1 width=18) (actual time=0.018..1.318 rows=1775 loops=1)
Index Cond: (organization_id = 16)
Planning time: 0.565 ms
Execution time: 75.224 ms
(14 rows)
Slow one:
EXPLAIN ANALYZE SELECT
accounts.id
FROM "accounts"
LEFT OUTER JOIN "addresses"
ON "addresses"."addressable_id" = "accounts"."id"
AND "addresses"."address_type" = 'mailing'
AND "addresses"."addressable_type" = 'Account'
WHERE "accounts"."organization_id" = 16
AND (addresses.state_code IN ('NV'))
ORDER BY accounts.name DESC;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=4917.27..4917.27 rows=1 width=18) (actual time=97091.270..97091.277 rows=25 loops=1)
Sort Key: accounts.name
Sort Method: quicksort Memory: 26kB
-> Nested Loop (cost=0.29..4917.26 rows=1 width=18) (actual time=844.250..97091.083 rows=25 loops=1)
Join Filter: (accounts.id = addresses.addressable_id)
Rows Removed by Join Filter: 915875
-> Index Scan using organization_id_index on accounts (cost=0.29..4.45 rows=1 width=18) (actual time=0.017..10.315 rows=1775 loops=1)
Index Cond: (organization_id = 16)
-> Seq Scan on addresses (cost=0.00..4911.93 rows=70 width=4) (actual time=0.110..54.521 rows=516 loops=1775)
Filter: (((address_type)::text = 'mailing'::text) AND ((addressable_type)::text = 'Account'::text) AND ((state_code)::text = 'NV'::text))
Rows Removed by Filter: 144416
Planning time: 0.308 ms
Execution time: 97091.325 ms
(13 rows)
The slow query returns 25 rows and the fast one 821, which is even more confusing.

I solved it by running VACUUM ANALYZE from the psql command line. The plans show why it helped: the planner estimated a single account for organization_id = 16 when 1775 actually matched, so for the rarer state it chose a nested loop that sequentially scanned addresses once per account (loops=1775). Fresh statistics correct the estimate.
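A minimal sketch of the fix, using the table names from the query (run once; autovacuum normally keeps statistics fresh on its own):

-- Reclaim dead tuples and refresh planner statistics so row
-- estimates match reality (the bad plan estimated rows=1 where
-- 1775 rows actually matched).
VACUUM ANALYZE accounts;
VACUUM ANALYZE addresses;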


Query Tuning in PostgreSQL [closed]

I have a query that runs in 17 s, and I cannot think of a way to optimize it. Some help is much needed.
EXPLAIN ANALYSE
CREATE materialized VIEW professores_fizeram_planejamentoTEST as
SELECT unities.id as id_escola,
unities.name as nome_escola,
teachers.id as id_professor,
teachers.name as nome_professor,
datas.dia,
COALESCE((SELECT true
FROM lesson_plans
WHERE lesson_plans.teacher_id = teachers.id and
datas.dia between lesson_plans.start_at and lesson_plans.end_at
LIMIT 1), false) as criou_plano_aula,
COALESCE((select true
from content_records
where content_records.teacher_id = teachers.id and
content_records.record_date = datas.dia
limit 1), false) as criou_registro_conteudo
FROM (SELECT i::date as dia,
EXTRACT(year FROM i::date) as ano
FROM generate_series(date_trunc('year', now()), now(), '1 day'::INTERVAL) i
WHERE EXTRACT(dow from i::timestamp) in (1,2,3,4,5)) datas
JOIN (SELECT distinct teacher_id, classroom_id, YEAR
FROM teacher_discipline_classrooms) teacher_discipline_classrooms ON (teacher_discipline_classrooms.year = datas.ano)
JOIN classrooms on (classrooms.id = teacher_discipline_classrooms.classroom_id)
JOIN unities on (unities.id = classrooms.unity_id)
JOIN teachers on (teachers.id = teacher_discipline_classrooms.teacher_id)
WHERE NOT EXISTS(SELECT 1
FROM school_calendars
JOIN school_calendar_events on (school_calendar_events.school_calendar_id = school_calendars.id and
school_calendar_events.event_type = 'no_school' and
datas.dia between school_calendar_events.start_date and school_calendar_events.end_date)
WHERE school_calendars.unity_id = unities.id)
This query returns the following (anonymized) execution plan:
Nested Loop (cost=143.840..3721.540 rows=38 width=66) (actual time=1.923..17270.125 rows=171231 loops=1)
-> Nested Loop (cost=143.690..1523.510 rows=38 width=41) (actual time=1.744..5996.571 rows=171231 loops=1)
Join Filter: (NOT (delta 3))
Rows Removed by Join Filter: 15249
-> Nested Loop (cost=143.550..203.530 rows=76 width=16) (actual time=1.661..568.049 rows=186480 loops=1)
-> Hash Join (cost=143.270..165.450 rows=76 width=16) (actual time=1.651..183.740 rows=186660 loops=1)
Hash Cond: ((victor.juliet_seven)::double precision = echo_tango('quebec_four'::text, ((alpha_quebec_whiskey.alpha_quebec_whiskey)::date)::timestamp without time zone))
-> HashAggregate (cost=121.700..127.820 rows=612 width=12) (actual time=1.384..3.336 rows=2388 loops=1)
Group Key: victor.foxtrot_six, victor.oscar_kilo, victor.juliet_seven
-> Seq Scan on victor (cost=0.000..94.400 rows=3640 width=12) (actual time=0.004..0.563 rows=3640 loops=1)
-> Hash (cost=21.260..21.260 rows=25 width=8) (actual time=0.256..0.256 rows=180 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 16kB
-> Function Scan on xray_yankee alpha_quebec_whiskey (cost=0.010..21.260 rows=25 width=8) (actual time=0.081..0.195 rows=180 loops=1)
Filter: (echo_tango('papa'::text, (alpha_quebec_whiskey)::timestamp without time zone) = ANY ('oscar_seven_charlie'::double precision[]))
Rows Removed by Filter: 72
-> Index Scan using echo_victor on uniform (cost=0.280..0.490 rows=1 width=8) (actual time=0.001..0.002 rows=1 loops=186660)
Index Cond: (quebec_seven = victor.oscar_kilo)
-> Index Scan using golf on four (cost=0.140..0.160 rows=1 width=29) (actual time=0.001..0.001 rows=1 loops=186480)
Index Cond: (quebec_seven = uniform.xray_victor)
SubPlan
-> Nested Loop (cost=0.280..34.110 rows=2 width=0) (actual time=0.027..0.027 rows=0 loops=186480)
-> Seq Scan on seven (cost=0.000..1.990 rows=2 width=4) (actual time=0.003..0.008 rows=2 loops=186480)
Filter: (xray_victor = four.quebec_seven)
Rows Removed by Filter: 75
-> Index Scan using alpha_quebec_papa on two (cost=0.280..16.050 rows=1 width=4) (actual time=0.008..0.008 rows=0 loops=372960)
Index Cond: (zulu = seven.quebec_seven)
Filter: (((xray_delta)::text = 'oscar_seven_golf'::text) AND ((alpha_quebec_whiskey.alpha_quebec_whiskey)::date >= foxtrot_three) AND ((alpha_quebec_whiskey.alpha_quebec_whiskey)::date <= lima))
Rows Removed by Filter: 14
-> Index Scan using tango on romeo (cost=0.150..0.200 rows=1 width=29) (actual time=0.001..0.001 rows=1 loops=171231)
Index Cond: (quebec_seven = victor.foxtrot_six)
SubPlan
-> Limit (cost=0.000..20.600 rows=1 width=0) (actual time=0.048..0.048 rows=0 loops=171231)
-> Seq Scan on five (cost=0.000..20.600 rows=1 width=0) (actual time=0.045..0.045 rows=0 loops=171231)
Filter: ((foxtrot_six = romeo.quebec_seven) AND ((alpha_quebec_whiskey.alpha_quebec_whiskey)::date >= oscar_echo) AND ((alpha_quebec_whiskey.alpha_quebec_whiskey)::date <= xray_three))
Rows Removed by Filter: 246
SubPlan
-> Limit (cost=4.810..37.030 rows=1 width=0) (actual time=0.015..0.015 rows=0 loops=171231)
-> Bitmap Heap Scan on whiskey (cost=4.810..37.030 rows=1 width=0) (actual time=0.011..0.011 rows=0 loops=171231)
Recheck Cond: (foxtrot_six = romeo.quebec_seven)
Filter: (foxtrot_tango = (alpha_quebec_whiskey.alpha_quebec_whiskey)::date)
Rows Removed by Filter: 28
Heap Blocks: exact=258248
-> Bitmap Index Scan on juliet_bravo (cost=0.000..4.810 rows=70 width=0) (actual time=0.003..0.003 rows=37 loops=171231)
Index Cond: (foxtrot_six = romeo.quebec_seven)
Thank you.
No, we won't!
Sanitize your query (add aliases, and use them, for instance).
COALESCE((SELECT true FROM lesson_plans WHERE lesson_plans.teacher_id = teachers.id and datas.dia between lesson_plans.start_at and lesson_plans.end_at LIMIT 1), false) as criou_plano_aula
... can be replaced by a simple EXISTS(subquery); see the sketch after this list.
Your outer query only refers to {unities, teachers, datas}; the rest of the tables are merely junction tables.
If there is a difference between the expected and the observed row counts in the query plan, your statistics are wrong.
The function scan on generate_series() spoils the query plan. Better to use a materialized calendar table, which can be indexed and is countable (also sketched below).
Always add the tuning parameters and an estimate of the cardinalities to the question. These are not details.
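A hedged sketch of both suggestions, reusing names from the question (untested; the table name calendario is illustrative):

-- EXISTS instead of COALESCE((SELECT true ... LIMIT 1), false)
-- for the two flag columns in the SELECT list:
EXISTS (SELECT 1
        FROM lesson_plans lp
        WHERE lp.teacher_id = teachers.id
          AND datas.dia BETWEEN lp.start_at AND lp.end_at) AS criou_plano_aula,
EXISTS (SELECT 1
        FROM content_records cr
        WHERE cr.teacher_id = teachers.id
          AND cr.record_date = datas.dia) AS criou_registro_conteudo

-- A materialized calendar table to replace the generate_series() scan:
CREATE TABLE calendario AS
SELECT i::date AS dia, EXTRACT(year FROM i)::int AS ano
FROM generate_series(date_trunc('year', now()), now(), '1 day'::interval) i
WHERE EXTRACT(dow FROM i) IN (1, 2, 3, 4, 5);
CREATE INDEX ON calendario (ano, dia);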
This is not an answer, but a comment that doesn't fit in the comments section. If you want to speed up your query, please add some information:
First, please include the execution plan. This will quickly tell us what's going on.
Also, please post:
Existing indexes on lesson_plans.
Existing indexes on content_records.
Rows on teacher_discipline_classrooms.
Existing indexes on teacher_discipline_classrooms.
Existing indexes on classrooms.
Existing indexes on unities.
Existing indexes on teachers.
Existing indexes on school_calendars.

How can I optimize this JOIN query?

From pg_stat_statements I have this query that's taking 900 ms on average. What is the recommended way forward in optimizing this query? I do have indexes, but I'm not sure where the bottleneck could be. Here's the EXPLAIN ANALYZE.
EXPLAIN ANALYZE
SELECT "listing_variants".*
FROM "listing_variants"
INNER JOIN "links" ON "links"."listing_variant_id" = "listing_variants"."id"
INNER JOIN "product_variants" ON "product_variants"."id" = "links"."product_variant_id"
INNER JOIN "listings" ON "listing_variants"."listing_id" = "listings"."id"
WHERE "listings"."sales_channel_id" = 31
AND "listing_variants"."is_linked" = 'f'
AND (listing_variants.available_quantity != product_variants.available_quantity);
gives
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=5283.71..6960.01 rows=524 width=232) (actual time=54.138..54.138 rows=0 loops=1)
Join Filter: (listing_variants.available_quantity <> product_variants.available_quantity)
-> Hash Join (cost=5283.42..6648.69 rows=720 width=236) (actual time=54.137..54.137 rows=0 loops=1)
Hash Cond: (links.listing_variant_id = listing_variants.id)
-> Index Only Scan using index_on_product_listing_variant_id on links (cost=0.29..1205.14 rows=30643 width=8) (actual time=0.026..6.112 rows=30863 loops=1)
Heap Fetches: 6799
-> Hash (cost=5261.53..5261.53 rows=1728 width=232) (actual time=45.407..45.407 rows=368 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 65kB
-> Hash Join (cost=1671.82..5261.53 rows=1728 width=232) (actual time=11.147..45.075 rows=368 loops=1)
Hash Cond: (listing_variants.listing_id = listings.id)
-> Seq Scan on listing_variants (cost=0.00..3412.77 rows=42577 width=232) (actual time=0.018..29.882 rows=42713 loops=1)
Filter: (NOT is_linked)
Rows Removed by Filter: 30863
-> Hash (cost=1661.68..1661.68 rows=811 width=4) (actual time=10.585..10.585 rows=811 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 29kB
-> Bitmap Heap Scan on listings (cost=30.57..1661.68 rows=811 width=4) (actual time=0.362..10.224 rows=811 loops=1)
Recheck Cond: (sales_channel_id = 31)
Heap Blocks: exact=668
-> Bitmap Index Scan on index_listings_on_sales_channel_ext_svc_updated (cost=0.00..30.37 rows=811 width=0) (actual time=0.242..0.242 rows=821 loops=1)
Index Cond: (sales_channel_id = 31)
-> Index Scan using product_variants_pkey on product_variants (cost=0.29..0.42 rows=1 width=12) (never executed)
Index Cond: (id = links.product_variant_id)
Planning time: 1.437 ms
Execution time: 54.366 ms
Thanks!
Use a JOIN instead of EXISTS only when you need to select data from multiple tables, which you are not doing here. That's a first step of optimization. In your case, the joins can pollute the result set by returning the same row multiple times whenever several matching rows exist in the joined secondary tables.
SELECT "listing_variants".*
FROM "listing_variants"
WHERE "listing_variants"."is_linked" = 'f'
AND EXISTS(SELECT 1 FROM "links" ON "links"."listing_variant_id" = "listing_variants"."id"
JOIN "product_variants" ON "product_variants"."id" = "links"."product_variant_id"
AND "listing_variants"."available_quantity" != "product_variants"."available_quantity"
JOIN "listings" ON "listing_variants"."listing_id" = "listings"."id"
AND "listings"."sales_channel_id" = 31);
Other than that your query is pretty straightforward; good indexing and data partitioning can contribute to further optimization.
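As a hedged sketch, two indexes that line up with the filters in the plan above (the index names are illustrative, not from the original post):

-- Matches the Seq Scan filter (NOT is_linked) with a partial index:
CREATE INDEX idx_listing_variants_unlinked
    ON listing_variants (listing_id)
    WHERE NOT is_linked;

-- Covers the join from links to listing_variants and product_variants:
CREATE INDEX idx_links_lv_pv
    ON links (listing_variant_id, product_variant_id);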

Improve running time of SQL query

I have the following table structure:
AdPerformance
  id
  ad_id
  impressions
AdActions
  app_starts
Ad
  id
  name
  parent_id
AdTargeting
  id
  targeting_
  ad_id
Targeting
  id
  name
  value
AdProduct
  id
  ad_id
  name
I need to aggregate the data by targeting, restricted to a product name, so I wrote the following query:
SELECT ad_performance.ad_id, targeting.value AS targeting_value,
sum(impressions) AS impressions,
sum(app_starts) AS app_starts
FROM ad_performance
LEFT JOIN ad on ad.id = ad_performance.ad_id
LEFT JOIN ad_actions ON ad_performance.id = ad_actions.ad_performance_id
RIGHT JOIN (
SELECT ad_id, value from targeting, ad_targeting
WHERE targeting.id = ad_targeting.id and targeting.name = 'gender'
) targeting ON targeting.ad_id = ad.parent_id
WHERE ad_performance.ad_id IN
(SELECT ad_id FROM ad_product WHERE product = 'iphone')
GROUP BY ad_performance.ad_id, targeting_value
However, the above query under EXPLAIN ANALYZE takes about 5 s for ~1M records.
Is there a way to improve it?
I do have indexes on foreign keys
UPDATED
See output of ANALYZE
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
HashAggregate (cost=5787.28..5789.87 rows=259 width=254) (actual time=3283.763..3286.015 rows=5998 loops=1)
Group Key: adobject_performance.ad_id, targeting.value
Buffers: shared hit=3400223
-> Nested Loop Left Join (cost=241.63..5603.63 rows=8162 width=254) (actual time=10.438..2774.664 rows=839720 loops=1)
Buffers: shared hit=3400223
-> Nested Loop (cost=241.21..1590.52 rows=8162 width=250) (actual time=10.412..703.818 rows=839720 loops=1)
Join Filter: (adobject.id = adobject_performance.ad_id)
Buffers: shared hit=36755
-> Hash Join (cost=240.78..323.35 rows=9 width=226) (actual time=10.380..20.332 rows=5998 loops=1)
Hash Cond: (ad_product.ad_id = ad.id)
Buffers: shared hit=190
-> HashAggregate (cost=128.98..188.96 rows=5998 width=4) (actual time=3.788..6.821 rows=5998 loops=1)
Group Key: ad_product.ad_id
Buffers: shared hit=39
-> Seq Scan on ad_product (cost=0.00..113.99 rows=5998 width=4) (actual time=0.011..1.726 rows=5998 loops=1)
Filter: ((product)::text = 'ft2_iPhone'::text)
Rows Removed by Filter: 1
Buffers: shared hit=39
-> Hash (cost=111.69..111.69 rows=9 width=222) (actual time=6.578..6.578 rows=5998 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 241kB
Buffers: shared hit=151
-> Hash Join (cost=30.26..111.69 rows=9 width=222) (actual time=0.154..4.660 rows=5998 loops=1)
Hash Cond: (adobject.parent_id = adobject_targeting.ad_id)
Buffers: shared hit=151
-> Seq Scan on adobject (cost=0.00..77.97 rows=897 width=8) (actual time=0.009..1.449 rows=6001 loops=1)
Buffers: shared hit=69
-> Hash (cost=30.24..30.24 rows=2 width=222) (actual time=0.132..0.132 rows=2 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 1kB
Buffers: shared hit=82
-> Nested Loop (cost=0.15..30.24 rows=2 width=222) (actual time=0.101..0.129 rows=2 loops=1)
Buffers: shared hit=82
-> Seq Scan on targeting (cost=0.00..13.88 rows=2 width=222) (actual time=0.015..0.042 rows=79 loops=1)
Filter: (name = 'age group'::targeting_name)
Rows Removed by Filter: 82
Buffers: shared hit=1
-> Index Scan using advertising_targeting_pkey on adobject_targeting (cost=0.15..8.17 rows=1 width=8) (actual time=0.001..0.001 rows=0 loops=79)
Index Cond: (id = targeting.id)
Buffers: shared hit=81
-> Index Scan using "fki_advertising_peformance_advertising_entity_id -> advertising" on adobject_performance (cost=0.42..89.78 rows=4081 width=32) (actual time=0.007..0.046 rows=140 loops=5998)
Index Cond: (ad_id = ad_product.ad_id)
Buffers: shared hit=36565
-> Index Scan using facebook_advertising_actions_pkey on facebook_adobject_actions (cost=0.42..0.48 rows=1 width=12) (actual time=0.001..0.002 rows=1 loops=839720)
Index Cond: (ad_performance.id = ad_performance_id)
Buffers: shared hit=3363468
Planning time: 1.525 ms
Execution time: 3287.324 ms
(46 rows)
Blindly shooting here, as we have not been provided with the result of EXPLAIN, but still, Postgres should treat this query better if you pull your targeting subquery out into a CTE:
WITH targeting AS
(
SELECT ad_id, value from targeting, ad_targeting
WHERE targeting.id = ad_targeting.id and targeting.name = 'gender'
)
SELECT ad_performance.ad_id, targeting.value AS targeting_value,
sum(impressions) AS impressions,
sum(app_starts) AS app_starts
FROM ad_performance
LEFT JOIN ad on ad.id = ad_performance.ad_id
LEFT JOIN ad_actions ON ad_performance.id = ad_actions.ad_performance_id
RIGHT JOIN targeting ON targeting.ad_id = ad.parent_id
WHERE ad_performance.ad_id IN
(SELECT ad_id FROM ad_product WHERE product = 'iphone')
GROUP BY ad_performance.ad_id, targeting_value
Taken from the Documentation:
A useful property of WITH queries is that they are evaluated only once
per execution of the parent query, even if they are referred to more
than once by the parent query or sibling WITH queries. Thus, expensive
calculations that are needed in multiple places can be placed within a
WITH query to avoid redundant work. Another possible application is to
prevent unwanted multiple evaluations of functions with side-effects.
The execution plan does not seem to match the query any more (maybe you can update the query).
However, the problem now is here:
-> Hash Join (cost=30.26..111.69 rows=9 width=222)
(actual time=0.154..4.660 rows=5998 loops=1)
Hash Cond: (adobject.parent_id = adobject_targeting.ad_id)
Buffers: shared hit=151
-> Seq Scan on adobject (cost=0.00..77.97 rows=897 width=8)
(actual time=0.009..1.449 rows=6001 loops=1)
Buffers: shared hit=69
-> Hash (cost=30.24..30.24 rows=2 width=222)
(actual time=0.132..0.132 rows=2 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 1kB
Buffers: shared hit=82
-> Nested Loop (cost=0.15..30.24 rows=2 width=222)
(actual time=0.101..0.129 rows=2 loops=1)
Buffers: shared hit=82
-> Seq Scan on targeting (cost=0.00..13.88 rows=2 width=222)
(actual time=0.015..0.042 rows=79 loops=1)
Filter: (name = 'age group'::targeting_name)
Rows Removed by Filter: 82
Buffers: shared hit=1
-> Index Scan using advertising_targeting_pkey on adobject_targeting
(cost=0.15..8.17 rows=1 width=8)
(actual time=0.001..0.001 rows=0 loops=79)
Index Cond: (id = targeting.id)
Buffers: shared hit=81
This is a join between adobject and the result of
targeting JOIN adobject_targeting
USING (id)
WHERE targeting.name = 'age group'
The latter subquery is correctly estimated to 2 rows, but PostgreSQL fails to notice that almost all rows found in adobject will match one of those two rows, so that the result of the join will be 6000 rather than the 9 it estimates.
This causes the optimizer to wrongly choose a nested loop join later on, where more than half of the query time is spent.
Unfortunately, since PostgreSQL doesn't have cross-table statistics, there is no way for PostgreSQL to know better.
One coarse measure is to SET enable_nestloop=off, but that will deteriorate the performance of the other (correctly chosen) nested loop join, so I don't know if it will be a net win.
If that helps, you could consider changing the parameter only for the duration of the query (with a transaction and SET LOCAL).
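A minimal sketch of that scoped setting (with the aggregation query from the question standing in for the placeholder comment):

BEGIN;
-- Applies only until COMMIT/ROLLBACK, so other sessions and later
-- statements keep the default planner behavior.
SET LOCAL enable_nestloop = off;
-- ... run the aggregation query here ...
COMMIT;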
Maybe there is a way to rewrite the query so that a better plan can be found, but that is hard to say without knowing the exact query.
I don't know if this query will solve your problem, but try it:
SELECT ad_performance.ad_id, targeting.value AS targeting_value,
sum(impressions) AS impressions,
sum(app_starts) AS app_starts
FROM ad_performance
LEFT JOIN ad on ad.id = ad_performance.ad_id
LEFT JOIN ad_actions ON ad_performance.id = ad_actions.ad_performance_id
RIGHT JOIN ad_targeting on ad_targeting.ad_id = ad.parent_id
INNER JOIN targeting on targeting.id = ad_targeting.id and targeting.name = 'gender'
INNER JOIN ad_product on ad_product.ad_id = ad_performance.ad_id
WHERE ad_product.product = 'iphone'
GROUP BY ad_performance.ad_id, targeting_value
Perhaps you could also create indexes on the columns that appear in your ON or WHERE conditions.
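For instance, a hedged sketch of such indexes (assuming the join and filter columns used above; Postgres auto-generates the index names):

CREATE INDEX ON ad_targeting (ad_id);
CREATE INDEX ON targeting (name);
CREATE INDEX ON ad_product (product, ad_id);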

Improve query performance - do not use index scan

I've got 2 tables:
event - list of events which should be processed (new files and dirs). Size: ~2M rows.
dir_current - list of directories currently visible on the filesystem. Size: ~1M rows, but up to 100M in the future.
I use a stored procedure to process events and turn them into dir_current rows. The first step of processing events is to find all rows that do not have a parent in the dir_current table. Unfortunately this gets a little more complicated, as the parent might be present in the event table, so we don't want to include those rows in the result. I came up with this query:
SELECT DISTINCT event.parent_path, event.depth FROM sf.event as event
LEFT OUTER JOIN sf.dir_current as dir ON
event.parent_path = dir.path
AND dir.volume_id = 1
LEFT OUTER JOIN sf.event as event2 ON
event.parent_path = event2.path
AND event2.volume_id = 1
AND event2.type = 'DIR'
AND event2.id <= MAX_ID_VARIABLE
WHERE
event.volume_id = 1
AND event.id <= MAX_ID_VARIABLE
AND dir.volume_id IS NULL
AND event2.id IS NULL
ORDER BY depth, parent_path;
MAX_ID_VARIABLE is a variable limiting the number of events processed at once.
Below is the EXPLAIN ANALYZE result (explain.depesz.com):
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=395165.81..395165.82 rows=1 width=83) (actual time=32009.439..32049.675 rows=2462 loops=1)
-> Sort (cost=395165.81..395165.81 rows=1 width=83) (actual time=32009.432..32021.733 rows=184975 loops=1)
Sort Key: event.depth, event.parent_path
Sort Method: quicksort Memory: 38705kB
-> Nested Loop Anti Join (cost=133385.93..395165.80 rows=1 width=83) (actual time=235.581..30916.912 rows=184975 loops=1)
-> Hash Anti Join (cost=133385.38..395165.14 rows=1 width=83) (actual time=83.073..1703.618 rows=768278 loops=1)
Hash Cond: (event.parent_path = event2.path)
-> Seq Scan on event (cost=0.00..252872.92 rows=2375157 width=83) (actual time=0.014..756.014 rows=2000000 loops=1)
Filter: ((id <= 13000000) AND (volume_id = 1))
-> Hash (cost=132700.54..132700.54 rows=54787 width=103) (actual time=82.754..82.754 rows=48029 loops=1)
Buckets: 65536 Batches: 1 Memory Usage: 6696kB
-> Bitmap Heap Scan on event event2 (cost=6196.07..132700.54 rows=54787 width=103) (actual time=12.979..63.803 rows=48029 loops=1)
Recheck Cond: (type = '16384'::text)
Filter: ((id <= 13000000) AND (volume_id = 1))
Heap Blocks: exact=16465
-> Bitmap Index Scan on event_dir_depth_idx (cost=0.00..6182.38 rows=54792 width=0) (actual time=8.759..8.759 rows=48029 loops=1)
-> Index Only Scan using dircurrent_volumeid_path_unq on dir_current dir (cost=0.55..0.65 rows=1 width=115) (actual time=0.038..0.038 rows=1 loops=768278)
Index Cond: ((volume_id = 1) AND (path = event.parent_path))
Heap Fetches: 583027
Planning time: 2.114 ms
Execution time: 32054.498 ms
The slowest part is the Index Only Scan on the dir_current table (29 s of the 32 s total).
I wonder why Postgres is using an index scan instead of a sequential scan, which would take 2-3 seconds.
After setting:
SET enable_indexscan TO false;
SET enable_bitmapscan TO false;
I get a query that runs in 3 s (explain.depesz.com):
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=569654.93..569654.94 rows=1 width=83) (actual time=3943.487..3979.613 rows=2462 loops=1)
-> Sort (cost=569654.93..569654.93 rows=1 width=83) (actual time=3943.481..3954.169 rows=184975 loops=1)
Sort Key: event.depth, event.parent_path
Sort Method: quicksort Memory: 38705kB
-> Hash Anti Join (cost=307875.14..569654.92 rows=1 width=83) (actual time=1393.185..2970.626 rows=184975 loops=1)
Hash Cond: ((event.parent_path = dir.path) AND ((event.depth - 1) = dir.depth))
-> Hash Anti Join (cost=259496.25..521276.01 rows=1 width=83) (actual time=786.617..2111.297 rows=768278 loops=1)
Hash Cond: (event.parent_path = event2.path)
-> Seq Scan on event (cost=0.00..252872.92 rows=2375157 width=83) (actual time=0.016..616.598 rows=2000000 loops=1)
Filter: ((id <= 13000000) AND (volume_id = 1))
-> Hash (cost=258811.41..258811.41 rows=54787 width=103) (actual time=786.214..786.214 rows=48029 loops=1)
Buckets: 65536 Batches: 1 Memory Usage: 6696kB
-> Seq Scan on event event2 (cost=0.00..258811.41 rows=54787 width=103) (actual time=0.068..766.563 rows=48029 loops=1)
Filter: ((id <= 13000000) AND (volume_id = 1) AND (type = '16384'::text))
Rows Removed by Filter: 1951971
-> Hash (cost=36960.95..36960.95 rows=761196 width=119) (actual time=582.430..582.430 rows=761196 loops=1)
Buckets: 1048576 Batches: 1 Memory Usage: 121605kB
-> Seq Scan on dir_current dir (cost=0.00..36960.95 rows=761196 width=119) (actual time=0.010..267.484 rows=761196 loops=1)
Filter: (volume_id = 1)
Planning time: 2.242 ms
Execution time: 3999.213 ms
Both tables were analyzed before running the queries.
Any idea why Postgres is using a far-from-optimal query plan?
Is there a better way to improve query performance than disabling index/bitmap scans? Maybe a different query with the same result?
I am using Postgres 9.5.2.
I would be grateful for any help.
You are only fetching columns from one table. I would recommend rewriting the query as:
SELECT e.parent_path, e.depth
FROM sf.event e
WHERE e.volume_id = 1 AND e.id <= MAX_ID_VARIABLE AND
NOT EXISTS (SELECT 1
            FROM sf.dir_current dc
            WHERE e.parent_path = dc.path AND dc.volume_id = 1
           ) AND
NOT EXISTS (SELECT 1
            FROM sf.event e2
            WHERE e.parent_path = e2.path AND
                  e2.volume_id = 1 AND
                  e2.type = 'DIR' AND
                  e2.id <= MAX_ID_VARIABLE
           )
ORDER BY e.depth, e.parent_path;
Then the following indexes:
event(volume_id, id)
dir_current(path, volume_id)
event(path, volume_id, type, id)
I'm not sure why there is a comparison to MAX_ID_VARIABLE. Without this comparison, the first index can include the sort keys: event(volume_id, depth, parent_path).
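A hedged sketch of the suggested indexes as DDL (schema-qualified to match the query; untested):

CREATE INDEX ON sf.event (volume_id, id);
CREATE INDEX ON sf.dir_current (path, volume_id);
CREATE INDEX ON sf.event (path, volume_id, type, id);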

Postgres SQL slow query aggregation

I have an aggregation query which ends up being slow; I am looking for any improvements in the query or the indexes.
I indexed all the fields I use, but maybe I missed something, or maybe you can suggest a better way to execute this query.
query:
EXPLAIN ANALYZE
SELECT HE.fs_perm_sec_id,
HE.TICKER_EXCHANGE,
HE.proper_name,
OP.shares_outstanding,
(SELECT factset_industry_desc
FROM factset_industry_map AS fim
WHERE fim.factset_industry_code = HES.industry_code) AS industry,
-- slow aggregation
(SELECT SUM(OIH.current_holdings)
FROM own_inst_holdings OIH
WHERE OIH.fs_perm_sec_id = HE.fs_perm_sec_id) AS inst_holdings
FROM own_prices OP
JOIN h_security_ticker_exchange HE ON OP.fs_perm_sec_id = HE.fs_perm_sec_id
JOIN h_entity_sector HES ON HES.factset_entity_id = HE.factset_entity_id
WHERE HE.ticker_exchange = 'BUD-NYS'
ORDER BY OP.price_date DESC LIMIT 1
Where this piece slows down the query:
(SELECT SUM(OIH.current_holdings)
FROM own_inst_holdings OIH
WHERE OIH.fs_perm_sec_id = HE.fs_perm_sec_id) AS inst_holdings
EXPLAIN ANALYZE
Limit (cost=360.41..360.41 rows=1 width=100) (actual time=920.592..920.592 rows=1 loops=1)
-> Sort (cost=360.41..360.41 rows=1 width=100) (actual time=920.592..920.592 rows=1 loops=1)
Sort Key: op.price_date
Sort Method: top-N heapsort Memory: 25kB
-> Nested Loop (cost=0.26..360.41 rows=1 width=100) (actual time=867.898..920.493 rows=35 loops=1)
-> Nested Loop (cost=0.17..6.43 rows=1 width=104) (actual time=4.882..4.940 rows=35 loops=1)
-> Index Scan using h_sec_exch_factset_entity_id_idx on h_security_ticker_exchange he (cost=0.09..4.09 rows=1 width=92) (actual time=3.611..3.612 rows=1 loops=1)
Index Cond: ((ticker_exchange)::text = 'BUD-NYS'::text)
-> Index Only Scan using own_prices_multiple_idx_1 on own_prices op (cost=0.09..2.25 rows=32 width=23) (actual time=1.258..1.301 rows=35 loops=1)
Index Cond: (fs_perm_sec_id = (he.fs_perm_sec_id)::text)
Heap Fetches: 0
-> Index Scan using h_entity_sector_multiple_idx_3 on h_entity_sector hes (cost=0.09..4.09 rows=1 width=14) (actual time=0.083..0.085 rows=1 loops=35)
Index Cond: (factset_entity_id = he.factset_entity_id)
SubPlan 1
-> Seq Scan on factset_industry_map fim (cost=0.00..2.48 rows=1 width=20) (actual time=0.014..0.031 rows=1 loops=35)
Filter: (factset_industry_code = hes.industry_code)
Rows Removed by Filter: 137
SubPlan 2
-> Aggregate (cost=347.40..347.40 rows=1 width=6) (actual time=26.035..26.035 rows=1 loops=35)
-> Bitmap Heap Scan on own_inst_holdings oih (cost=4.36..347.31 rows=177 width=6) (actual time=0.326..25.658 rows=622 loops=35)
Recheck Cond: ((fs_perm_sec_id)::text = (he.fs_perm_sec_id)::text)
Heap Blocks: exact=22750
-> Bitmap Index Scan on own_inst_holdings_fs_perm_sec_id_idx (cost=0.00..4.35 rows=177 width=0) (actual time=0.232..0.232 rows=662 loops=35)
Index Cond: ((fs_perm_sec_id)::text = (he.fs_perm_sec_id)::text)
Planning time: 5.806 ms
Execution time: 920.778 ms
For this query:
SELECT HE.fs_perm_sec_id, HE.TICKER_EXCHANGE, HE.proper_name, OP.shares_outstanding,
(SELECT factset_industry_desc
FROM factset_industry_map AS fim
WHERE fim.factset_industry_code = HES.industry_code
) AS industry,
(SELECT SUM(OIH.current_holdings)
FROM own_inst_holdings OIH
WHERE OIH.fs_perm_sec_id = HE.fs_perm_sec_id
) AS inst_holdings
FROM own_prices OP JOIN
h_security_ticker_exchange HE
ON OP.fs_perm_sec_id = HE.fs_perm_sec_id JOIN
h_entity_sector HES
ON HES.factset_entity_id = HE.factset_entity_id
WHERE HE.ticker_exchange = 'BUD-NYS'
ORDER BY OP.price_date DESC
LIMIT 1;
You want the following indexes:
h_security_ticker_exchange(ticker_exchange, factset_entity_id, fs_perm_sec_id)
own_prices(fs_perm_sec_id)
h_entity_sector(factset_entity_id)
factset_industry_map(factset_industry_code, factset_industry_desc)
own_inst_holdings(fs_perm_sec_id, current_holdings)
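A hedged sketch of those indexes as DDL (untested; Postgres auto-generates the index names):

CREATE INDEX ON h_security_ticker_exchange (ticker_exchange, factset_entity_id, fs_perm_sec_id);
CREATE INDEX ON own_prices (fs_perm_sec_id);
CREATE INDEX ON h_entity_sector (factset_entity_id);
CREATE INDEX ON factset_industry_map (factset_industry_code, factset_industry_desc);
CREATE INDEX ON own_inst_holdings (fs_perm_sec_id, current_holdings);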