Why does this INSERT query run slower when using indices?

I'm currently running a query that inserts the values from table insert_values(a,b) into table insert_base(a,b):
INSERT INTO insert_base
SELECT DISTINCT *
FROM insert_values IV
WHERE NOT EXISTS (SELECT *
                  FROM insert_base IB
                  WHERE IV.a = IB.a AND IV.b = IB.b);
However, when I put an index on insert_values on (a,b) (all the attributes in insert_values), the query actually runs slightly slower than with no index (by around 2 seconds). I'm quite confused as to why this is the case, since I thought that at worst an index wouldn't hurt performance. Any help would be much appreciated. I am using PostgreSQL, by the way.
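For reference, the index in question would have been created along these lines (the index name here is a guess, not from the original post):
CREATE INDEX idx_insert_values_a_b ON insert_values (a, b);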
These are the access plans (indexed query first):
"Insert on insert_base (cost=34712.68..36712.68 rows=0 width=0) (actual time=3517.311..3517.346 rows=0 loops=1)"
" -> HashAggregate (cost=34712.68..35712.68 rows=100000 width=8) (actual time=897.690..1540.008 rows=949662 loops=1)"
" Group Key: iv.a, iv.b"
" Batches: 57 Memory Usage: 11065kB Disk Usage: 24288kB"
" -> Hash Anti Join (cost=100.08..29737.15 rows=995106 width=8) (actual time=12.398..444.112 rows=998050 loops=1)"
" Hash Cond: ((iv.a = ib.a) AND (iv.b = ib.b))"
" -> Seq Scan on insert_values iv (cost=0.00..14425.00 rows=1000000 width=8) (actual time=1.367..149.392 rows=1000000 loops=1)"
" -> Hash (cost=93.43..93.43 rows=443 width=8) (actual time=10.920..10.922 rows=20000 loops=1)"
" Buckets: 32768 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1038kB"
" -> Seq Scan on insert_base ib (cost=0.00..93.43 rows=443 width=8) (actual time=0.020..2.565 rows=20000 loops=1)"
"Planning Time: 12.250 ms"
"Execution Time: 3548.398 ms"
"Insert on insert_base (cost=34682.06..36682.06 rows=0 width=0) (actual time=2821.450..2821.453 rows=0 loops=1)"
" -> HashAggregate (cost=34682.06..35682.06 rows=100000 width=8) (actual time=735.926..1348.614 rows=949662 loops=1)"
" Group Key: iv.a, iv.b"
" Batches: 57 Memory Usage: 11065kB Disk Usage: 24288kB"
" -> Hash Anti Join (cost=102.75..29719.59 rows=992495 width=8) (actual time=4.566..404.311 rows=998050 loops=1)"
" Hash Cond: ((iv.a = ib.a) AND (iv.b = ib.b))"
" -> Seq Scan on insert_values iv (cost=0.00..14425.00 rows=1000000 width=8) (actual time=0.050..96.755 rows=1000000 loops=1)"
" -> Hash (cost=94.50..94.50 rows=550 width=8) (actual time=4.491..4.492 rows=20000 loops=1)"
" Buckets: 32768 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1038kB"
" -> Seq Scan on insert_base ib (cost=0.00..94.50 rows=550 width=8) (actual time=0.009..1.558 rows=20000 loops=1)"
"Planning Time: 0.280 ms"
"Execution Time: 2828.308 ms"

Related

Why does Postgres EXPLAIN ANALYSE report a huge performance difference compared to real query execution?

I have been tasked with rewriting some low-performance SQL in our system, for which I have this query:
select
"aggtable".id as t_id,
count(joined.packages)::integer as t_package_count,
sum(coalesce((joined.packages ->> 'weight'::text)::double precision, 0::double precision)) as t_total_weight
from
"aggtable"
join (
select
"unnested".myid, json_array_elements("jsontable".jsondata) as packages
from
(
select
distinct unnest("tounnest".arrayofid) as myid
from
"aggtable" "tounnest") "unnested"
join "jsontable" on
"jsontable".id = "unnested".myid) joined on
joined.myid = any("aggtable".arrayofid)
group by
"aggtable".id
The EXPLAIN ANALYSE result is:
Sort Method: quicksort Memory: 611kB
-> Nested Loop (cost=30917.16..31333627.69 rows=27270 width=69) (actual time=4.028..2054.470 rows=3658 loops=1)
Join Filter: ((unnest(tounnest.arrayofid)) = ANY (aggtable.arrayofid))
Rows Removed by Join Filter: 9055436
-> ProjectSet (cost=30917.16..36645.61 rows=459000 width=48) (actual time=3.258..13.846 rows=3322 loops=1)
-> Hash Join (cost=30917.16..34316.18 rows=4590 width=55) (actual time=3.246..7.079 rows=1661 loops=1)
Hash Cond: ((unnest(tounnest.arrayofid)) = jsontable.id)
-> Unique (cost=30726.88..32090.38 rows=144700 width=16) (actual time=1.901..3.720 rows=1664 loops=1)
-> Sort (cost=30726.88..31408.63 rows=272700 width=16) (actual time=1.900..2.711 rows=1845 loops=1)
Sort Key: (unnest(tounnest.arrayofid))
Sort Method: quicksort Memory: 135kB
-> ProjectSet (cost=0.00..1444.22 rows=272700 width=16) (actual time=0.011..1.110 rows=1845 loops=1)
-> Seq Scan on aggtable tounnest (cost=0.00..60.27 rows=2727 width=30) (actual time=0.007..0.311 rows=2727 loops=1)
-> Hash (cost=132.90..132.90 rows=4590 width=55) (actual time=1.328..1.329 rows=4590 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 454kB
-> Seq Scan on jsontable (cost=0.00..132.90 rows=4590 width=55) (actual time=0.006..0.497 rows=4590 loops=1)
-> Materialize (cost=0.00..73.91 rows=2727 width=67) (actual time=0.000..0.189 rows=2727 loops=3322)
-> Seq Scan on aggtable (cost=0.00..60.27 rows=2727 width=67) (actual time=0.012..0.317 rows=2727 loops=1)
Planning Time: 0.160 ms
Execution Time: 2065.268 ms
I tried to rewrite this query from scratch to profile performance and to understand the original intention
select
joined.joinid,
count(joined.packages)::integer as t_package_count,
sum(coalesce((joined.packages ->> 'weight'::text)::double precision, 0::double precision)) as t_total_weight
from
(
select
joinid ,
json_array_elements(jsondata) as packages
from
( (
select
distinct unnest(at2.arrayofid) as joinid, at2.id as rootid
from
aggtable at2) unnested
join jsontable jt on
jt.id = unnested.joinid)) joined
group by joined.joinid
For this, EXPLAIN ANALYSE returns:
HashAggregate (cost=873570.28..873572.78 rows=200 width=28) (actual time=18.379..18.741 rows=1661 loops=1)
Group Key: (unnest(at2.arrayofid))
-> ProjectSet (cost=44903.16..191820.28 rows=27270000 width=48) (actual time=3.019..14.684 rows=3658 loops=1)
-> Hash Join (cost=44903.16..53425.03 rows=272700 width=55) (actual time=3.010..4.999 rows=1829 loops=1)
Hash Cond: ((unnest(at2.arrayofid)) = jt.id)
-> Unique (cost=44712.88..46758.13 rows=272700 width=53) (actual time=1.825..2.781 rows=1845 loops=1)
-> Sort (cost=44712.88..45394.63 rows=272700 width=53) (actual time=1.824..2.135 rows=1845 loops=1)
Sort Key: (unnest(at2.arrayofid)), at2.id
Sort Method: quicksort Memory: 308kB
-> ProjectSet (cost=0.00..1444.22 rows=272700 width=53) (actual time=0.009..1.164 rows=1845 loops=1)
-> Seq Scan on aggtable at2 (cost=0.00..60.27 rows=2727 width=67) (actual time=0.005..0.311 rows=2727 loops=1)
-> Hash (cost=132.90..132.90 rows=4590 width=55) (actual time=1.169..1.169 rows=4590 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 454kB
-> Seq Scan on jsontable jt (cost=0.00..132.90 rows=4590 width=55) (actual time=0.007..0.462 rows=4590 loops=1)
Planning Time: 0.144 ms
Execution Time: 18.889 ms
I see a huge difference in query performance (20 ms vs. 2000 ms) as evaluated by Postgres. However, the real query performance is nowhere near that different (the fast one is about 500 ms and the slow one is about 1 s).
My questions:
1/ Is it normal that EXPLAIN shows a drastic difference in performance that does not appear in real life?
2/ Is the second, optimized query correct? What did the first query do wrong?
I also supply the credentials to a sample database so that everyone can try the queries out:
postgres://birylwwg:X6EM3Al9Jhqzz0w6EaSSx79pa4aXRBZq#arjuna.db.elephantsql.com:5432/birylwwg
The password is:
X6EM3Al9Jhqzz0w6EaSSx79pa4aXRBZq
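One way to see how much of that gap is client-side is to compare the server-side time reported by EXPLAIN ANALYZE with psql's client-side timer, which also counts network transfer and result rendering. A minimal sketch, using a trivial query against one of the tables above:
\timing on
-- server-side planning + execution time only:
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM jsontable;
-- end-to-end wall-clock time, as the client sees it:
SELECT count(*) FROM jsontable;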

Speeding up the query with multiple joins, group by and order by

I have a SQL query as:
SELECT
title,
(COUNT(DISTINCT A.id)) AS "count_title"
FROM
B
INNER JOIN D ON B.app = D.app
INNER JOIN A ON D.number = A.number
INNER JOIN C ON A.id = C.id
GROUP BY C.title
ORDER BY count_title DESC
LIMIT 10
;
Table D contains 50M records, A contains 30M records, and B & C contain 30k records each. Indexes are defined on all columns used in joins, GROUP BY, and ORDER BY.
The query works fine without the ORDER BY clause and returns results in around 2-3 seconds.
But with the sorting operation (ORDER BY), the query time increases to 10-12 seconds.
I understand the reason behind this: the executor has to traverse all the records for the sort, and an index will hardly help here.
Are there some other ways to speed up this query?
Here is the explain analyze of this query:
"QUERY PLAN"
"Limit (cost=974652.20..974652.22 rows=10 width=54) (actual time=2817.579..2825.071 rows=10 loops=1)"
" Buffers: shared hit=120299 read=573195"
" -> Sort (cost=974652.20..974666.79 rows=5839 width=54) (actual time=2817.578..2817.578 rows=10 loops=1)"
" Sort Key: (count(DISTINCT A.id)) DESC"
" Sort Method: top-N heapsort Memory: 26kB"
" Buffers: shared hit=120299 read=573195"
" -> GroupAggregate (cost=974325.65..974526.02 rows=5839 width=54) (actual time=2792.465..2817.097 rows=3618 loops=1)"
" Group Key: C.title"
" Buffers: shared hit=120299 read=573195"
" -> Sort (cost=974325.65..974372.97 rows=18931 width=32) (actual time=2792.451..2795.161 rows=45175 loops=1)"
" Sort Key: C.title"
" Sort Method: quicksort Memory: 5055kB"
" Buffers: shared hit=120299 read=573195"
" -> Gather (cost=968845.30..972980.74 rows=18931 width=32) (actual time=2753.402..2778.648 rows=45175 loops=1)"
" Workers Planned: 1"
" Workers Launched: 1"
" Buffers: shared hit=120299 read=573195"
" -> Parallel Hash Join (cost=967845.30..970087.64 rows=11136 width=32) (actual time=2751.725..2764.832 rows=22588 loops=2)"
" Hash Cond: ((C.id)::text = (A.id)::text)"
" Buffers: shared hit=120299 read=573195"
" -> Parallel Seq Scan on C (cost=0.00..1945.87 rows=66687 width=32) (actual time=0.017..4.316 rows=56684 loops=2)"
" Buffers: shared read=1279"
" -> Parallel Hash (cost=966604.55..966604.55 rows=99260 width=9) (actual time=2750.987..2750.987 rows=20950 loops=2)"
" Buckets: 262144 Batches: 1 Memory Usage: 4032kB"
" Buffers: shared hit=120266 read=571904"
" -> Nested Loop (cost=219572.23..966604.55 rows=99260 width=9) (actual time=665.832..2744.270 rows=20950 loops=2)"
" Buffers: shared hit=120266 read=571904"
" -> Parallel Hash Join (cost=219571.79..917516.91 rows=99260 width=4) (actual time=665.804..2583.675 rows=20950 loops=2)"
" Hash Cond: ((D.app)::text = (B.app)::text)"
" Buffers: shared hit=8 read=524214"
" -> Parallel Bitmap Heap Scan on D (cost=217542.51..895848.77 rows=5126741 width=13) (actual time=661.254..1861.862 rows=6160441 loops=2)"
" Recheck Cond: ((action_type)::text = ANY ('{10,11}'::text[]))"
" Heap Blocks: exact=242152"
" Buffers: shared hit=3 read=523925"
" -> Bitmap Index Scan on D_index_action_type (cost=0.00..214466.46 rows=12304178 width=0) (actual time=546.470..546.471 rows=12320882 loops=1)"
" Index Cond: ((action_type)::text = ANY ('{10,11}'::text[]))"
" Buffers: shared hit=3 read=33669"
" -> Parallel Hash (cost=1859.36..1859.36 rows=13594 width=12) (actual time=4.337..4.337 rows=16313 loops=2)"
" Buckets: 32768 Batches: 1 Memory Usage: 1152kB"
" Buffers: shared hit=5 read=289"
" -> Parallel Index Only Scan using B_index_app on B (cost=0.29..1859.36 rows=13594 width=12) (actual time=0.015..2.218 rows=16313 loops=2)"
" Heap Fetches: 0"
" Buffers: shared hit=5 read=289"
" -> Index Scan using A_index_number on A (cost=0.43..0.48 rows=1 width=24) (actual time=0.007..0.007 rows=1 loops=41900)"
" Index Cond: ((number)::text = (D.number)::text)"
" Buffers: shared hit=120258 read=47690"
"Planning Time: 0.747 ms"
"Execution Time: 2825.118 ms"
You could try to aim for a nested loop join between b and d because b is so much smaller:
CREATE INDEX ON d (app);
If d is vacuumed frequently enough, you could see if an index-only scan is even faster. For that, include number in the index (in v11, use the INCLUDE clause for that!). The EXPLAIN output suggests that you have an extra condition on action_type; you'd have to include that column too for an index-only scan.
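A covering index along those lines might look like this (a sketch; the INCLUDE column list is an assumption based on the plan above, which joins d on app and number and filters on action_type):
-- PostgreSQL 11+: non-key columns ride along in the index leaf pages,
-- allowing an index-only scan as long as the visibility map is up to date:
CREATE INDEX ON d (app) INCLUDE (number, action_type);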

PostgreSQL 10 - inexplicable IN and ANY performance behaviour

I am selecting from a big table where the id is in an array/list.
I checked several variants, and the results surprised me.
1. Use ANY and ARRAY
EXPLAIN (ANALYZE,BUFFERS)
SELECT * FROM cca_data_hours
WHERE
datetime = '2018-01-07 19:00:00'::timestamp without time zone AND
id_web_page = ANY (ARRAY[1, 2, 8, 3 /* ~50k ids */])
Result
"Index Scan using cca_data_hours_pri on cca_data_hours (cost=0.28..576.79 rows=15 width=188) (actual time=0.035..0.998 rows=6 loops=1)"
" Index Cond: (datetime = '2018-01-07 19:00:00'::timestamp without time zone)"
" Filter: (id_web_page = ANY ('{1,2,8,3, (...)"
" Rows Removed by Filter: 5"
" Buffers: shared hit=3"
"Planning time: 57.625 ms"
"Execution time: 1.065 ms"
2. Use IN and VALUES
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM cca_data_hours
WHERE
datetime = '2018-01-07 19:00:00'::timestamp without time zone AND
id_web_page IN (VALUES (1),(2),(8),(3) /* ~50k ids */)
Result
"Hash Join (cost=439.77..472.66 rows=8 width=188) (actual time=90.806..90.858 rows=6 loops=1)"
" Hash Cond: (cca_data_hours.id_web_page = "*VALUES*".column1)"
" Buffers: shared hit=3"
" -> Index Scan using cca_data_hours_pri on cca_data_hours (cost=0.28..33.06 rows=15 width=188) (actual time=0.035..0.060 rows=11 loops=1)"
" Index Cond: (datetime = '2018-01-07 19:00:00'::timestamp without time zone)"
" Buffers: shared hit=3"
" -> Hash (cost=436.99..436.99 rows=200 width=4) (actual time=90.742..90.742 rows=4 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 9kB"
" -> HashAggregate (cost=434.99..436.99 rows=200 width=4) (actual time=90.709..90.717 rows=4 loops=1)"
" Group Key: "*VALUES*".column1"
" -> Values Scan on "*VALUES*" (cost=0.00..362.49 rows=28999 width=4) (actual time=0.008..47.056 rows=28999 loops=1)"
"Planning time: 53.607 ms"
"Execution time: 91.681 ms"
I expected case #2 to be faster, but it is not.
Why is IN with VALUES so slow?
Comparing the EXPLAIN ANALYZE results, it looks like the old version wasn't using the available index in the given examples. The reason ANY (ARRAY[]) became faster lies in version 9.2: https://www.postgresql.org/docs/current/static/release-9-2.html
Allow indexed_col op ANY(ARRAY[...]) conditions to be used in plain index scans and index-only scans (Tom Lane)
The site where you got the suggestion from was about version 9.0
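On a version with that optimization, the ANY (ARRAY[...]) form from case #1 is the one the planner can fold into an index scan. For very large ID lists, an explicit join against the VALUES list is another variant worth trying (a sketch, not a guaranteed win):
SELECT h.*
FROM cca_data_hours h
JOIN (VALUES (1), (2), (8), (3) /* ~50k ids */) AS ids(id_web_page)
  ON h.id_web_page = ids.id_web_page
WHERE h.datetime = '2018-01-07 19:00:00'::timestamp without time zone;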

Why is my query cost so high?

When I execute EXPLAIN ANALYZE on a query, I get a normal cost, ranging from some low value to some higher value. But when I try to force the use of an index on the table by setting enable_seqscan to false, the query cost jumps to insane values like:
Merge Join (cost=10064648609.460..10088218360.810 rows=564249 width=21) (actual time=341699.323..370702.969 rows=3875328 loops=1)
Merge Cond: ((foxtrot.two = ((five_hotel.two)::numeric)) AND (foxtrot.alpha_two07 = ((five_hotel.alpha_two07)::numeric)))
-> Merge Append (cost=10000000000.580..10023064799.260 rows=23522481 width=24) (actual time=0.049..19455.320 rows=23522755 loops=1)
Sort Key: foxtrot.two, foxtrot.alpha_two07
-> Sort (cost=10000000000.010..10000000000.010 rows=1 width=76) (actual time=0.005..0.005 rows=0 loops=1)
Sort Key: foxtrot.two, foxtrot.alpha_two07
Sort Method: quicksort Memory: 25kB
-> Seq Scan on foxtrot (cost=10000000000.000..10000000000.000 rows=1 width=76) (actual time=0.001..0.001 rows=0 loops=1)
Filter: (kilo_sierra_oscar = 'oscar'::date)
-> Index Scan using alpha_five on five_uniform (cost=0.560..22770768.220 rows=23522480 width=24) (actual time=0.043..17454.619 rows=23522755 loops=1)
Filter: (kilo_sierra_oscar = 'oscar'::date)
As you can see, I'm trying to retrieve values by index so they don't need to be sorted once they're loaded.
It is a simple query:
select *
from foxtrot
where foxtrot.kilo_sierra_oscar = date'2015-01-01'
order by foxtrot.two, foxtrot.alpha_two07
Index scan: "Execution time: 19009.569 ms"
Sequential scan: "Execution time: 127062.802 ms"
Setting enable_seqscan to false improves the execution time of the query, but I would like the optimizer to figure that out itself.
EDIT:
Seq plan with buffers:
Sort (cost=4607555.110..4666361.310 rows=23522481 width=24) (actual time=101094.754..120740.190 rows=23522756 loops=1)
Sort Key: foxtrot.two, foxtrot.alpha07
Sort Method: external merge Disk: 805304kB
Buffers: shared hit=468690, temp read=100684 written=100684
-> Append (cost=0.000..762721.000 rows=23522481 width=24) (actual time=0.006..12018.725 rows=23522756 loops=1)
Buffers: shared hit=468690
-> Seq Scan on foxtrot (cost=0.000..0.000 rows=1 width=76) (actual time=0.001..0.001 rows=0 loops=1)
Filter: (kilo = 'oscar'::date)
-> Seq Scan on foxtrot (cost=0.000..762721.000 rows=23522480 width=24) (actual time=0.005..9503.851 rows=23522756 loops=1)
Filter: (kilo = 'oscar'::date)
Buffers: shared hit=468690
Index plan with buffers:
Merge Append (cost=10000000000.580..10023064799.260 rows=23522481 width=24) (actual time=0.046..19302.855 rows=23522756 loops=1)
Sort Key: foxtrot.two, foxtrot.alpha_two07
Buffers: shared hit=17855133
 -> Sort (cost=10000000000.010..10000000000.010 rows=1 width=76) (actual time=0.009..0.009 rows=0 loops=1)
Sort Key: foxtrot.two, foxtrot.alpha_two07
Sort Method: quicksort Memory: 25kB
-> Seq Scan on foxtrot (cost=10000000000.000..10000000000.000 rows=1 width=76) (actual time=0.000..0.000 rows=0 loops=1)
Filter: (kilo = 'oscar'::date)
-> Index Scan using alpha_five on five (cost=0.560..22770768.220 rows=23522480 width=24) (actual time=0.036..17035.903 rows=23522756 loops=1)
Filter: (kilo = 'oscar'::date)
Buffers: shared hit=17855133
Why does the cost of the query jump so high? How can I avoid it?
The high cost is a direct consequence of set enable_seqscan=false.
The planner implements this "hint" by setting an arbitrary super-high cost (10 000 000 000) to the sequential scan technique. Then it computes the different potential execution strategies with their associated costs.
If the best result still has a super-high cost, it means that the planner found no strategy to avoid the sequential scan, even when trying at all costs.
In the plan shown in the question under "Index plan with buffers" this happens at the Seq Scan on foxtrot node.
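If the planner's default choice of a sequential scan really is slower on your hardware, the usual remedy is to adjust the cost parameters rather than forbid sequential scans outright. A sketch (the values are illustrative, not tuned for your system):
-- Make random I/O look cheaper, e.g. on SSDs or a well-cached database;
-- this nudges the planner toward index scans without banning seq scans:
SET random_page_cost = 1.1;
-- Tell the planner how much OS cache is realistically available:
SET effective_cache_size = '16GB';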

Extremely slow query on 1st run, even with indexes

I have an extremely slow query (on the order of 1-3 minutes) that is slow despite indexes being used. Similar queries will be run 4-6 times by the user, so speed is critical.
QUERY:
SELECT SUM(bh.count) AS count,b.time AS batchtime
FROM
batchtimes AS b
INNER JOIN batchtimes_headlines AS bh ON b.hashed_id = bh.batchtime_hashed_id
INNER JOIN headlines_ngrams AS hn ON bh.headline_hashed_id = hn.headline_hashed_id
INNER JOIN ngrams AS n ON hn.ngram_hashed_id = n.hashed_id
INNER JOIN homepages_headlines AS hh ON bh.headline_hashed_id = hh.headline_hashed_id
INNER JOIN homepages AS hp ON hh.homepage_hashed_id = hp.hashed_id
WHERE
b.time IN (SELECT * FROM generate_series('2013-10-10 20:00:00.000000'::timestamp,'2014-02-16 20:00:00.000000'::timestamp,'1 hours'))
AND ( n.gram = 'a' )
AND hp.url = 'www.abcdefg.com'
GROUP BY
b.time
ORDER BY
b.time ASC;
EXPLAIN ANALYZE after very first run:
GroupAggregate (cost=6863.26..6863.79 rows=30 width=12) (actual time=90905.858..90908.889 rows=3039 loops=1)
-> Sort (cost=6863.26..6863.34 rows=30 width=12) (actual time=90905.853..90906.971 rows=19780 loops=1)
Sort Key: b."time"
Sort Method: quicksort Memory: 1696kB
-> Hash Join (cost=90.16..6862.52 rows=30 width=12) (actual time=378.784..90890.636 rows=19780 loops=1)
Hash Cond: (b."time" = generate_series.generate_series)
-> Nested Loop (cost=73.16..6845.27 rows=60 width=12) (actual time=375.644..90859.059 rows=22910 loops=1)
-> Nested Loop (cost=72.88..6740.51 rows=60 width=37) (actual time=375.624..90618.828 rows=22910 loops=1)
-> Nested Loop (cost=42.37..4391.06 rows=1 width=66) (actual time=368.993..54607.402 rows=1213 loops=1)
-> Nested Loop (cost=42.23..4390.18 rows=5 width=99) (actual time=223.681..53051.774 rows=294787 loops=1)
-> Nested Loop (cost=41.68..4379.19 rows=5 width=33) (actual time=223.643..49403.746 rows=294787 loops=1)
-> Index Scan using by_gram_ngrams on ngrams n (cost=0.56..8.58 rows=1 width=33) (actual time=17.001..17.002 rows=1 loops=1)
Index Cond: ((gram)::text = 'a'::text)
-> Bitmap Heap Scan on headlines_ngrams hn (cost=41.12..4359.59 rows=1103 width=66) (actual time=206.634..49273.363 rows=294787 loops=1)
Recheck Cond: ((ngram_hashed_id)::text = (n.hashed_id)::text)
-> Bitmap Index Scan on by_ngramhashedid_headlinesngrams (cost=0.00..40.84 rows=1103 width=0) (actual time=143.430..143.430 rows=294787 loops=1)
Index Cond: ((ngram_hashed_id)::text = (n.hashed_id)::text)
-> Index Scan using by_headlinehashedid_homepagesheadlines on homepages_headlines hh (cost=0.56..2.19 rows=1 width=66) (actual time=0.011..0.011 rows=1 loops=294787)
Index Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
-> Index Scan using by_hashedid_homepages on homepages hp (cost=0.14..0.17 rows=1 width=33) (actual time=0.005..0.005 rows=0 loops=294787)
Index Cond: ((hashed_id)::text = (hh.homepage_hashed_id)::text)
Filter: ((url)::text = 'www.abcdefg.com'::text)
Rows Removed by Filter: 1
-> Bitmap Heap Scan on batchtimes_headlines bh (cost=30.51..2333.86 rows=1560 width=70) (actual time=7.977..29.674 rows=19 loops=1213)
Recheck Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
-> Bitmap Index Scan on by_headlinehashedid_batchtimesheadlines (cost=0.00..30.12 rows=1560 width=0) (actual time=6.595..6.595 rows=19 loops=1213)
Index Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
-> Index Scan using by_hashedid_batchtimes on batchtimes b (cost=0.28..1.74 rows=1 width=41) (actual time=0.009..0.009 rows=1 loops=22910)
Index Cond: ((hashed_id)::text = (bh.batchtime_hashed_id)::text)
-> Hash (cost=14.50..14.50 rows=200 width=8) (actual time=3.130..3.130 rows=3097 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 121kB
-> HashAggregate (cost=12.50..14.50 rows=200 width=8) (actual time=1.819..2.342 rows=3097 loops=1)
-> Function Scan on generate_series (cost=0.00..10.00 rows=1000 width=8) (actual time=0.441..0.714 rows=3097 loops=1)
Total runtime: 90911.001 ms
EXPLAIN ANALYZE after 2nd run:
GroupAggregate (cost=6863.26..6863.79 rows=30 width=12) (actual time=3122.861..3125.796 rows=3039 loops=1)
-> Sort (cost=6863.26..6863.34 rows=30 width=12) (actual time=3122.857..3123.882 rows=19780 loops=1)
Sort Key: b."time"
Sort Method: quicksort Memory: 1696kB
-> Hash Join (cost=90.16..6862.52 rows=30 width=12) (actual time=145.396..3116.467 rows=19780 loops=1)
Hash Cond: (b."time" = generate_series.generate_series)
-> Nested Loop (cost=73.16..6845.27 rows=60 width=12) (actual time=142.406..3102.864 rows=22910 loops=1)
-> Nested Loop (cost=72.88..6740.51 rows=60 width=37) (actual time=142.395..3011.768 rows=22910 loops=1)
-> Nested Loop (cost=42.37..4391.06 rows=1 width=66) (actual time=142.229..2969.144 rows=1213 loops=1)
-> Nested Loop (cost=42.23..4390.18 rows=5 width=99) (actual time=135.799..2142.666 rows=294787 loops=1)
-> Nested Loop (cost=41.68..4379.19 rows=5 width=33) (actual time=135.768..437.824 rows=294787 loops=1)
-> Index Scan using by_gram_ngrams on ngrams n (cost=0.56..8.58 rows=1 width=33) (actual time=0.030..0.031 rows=1 loops=1)
Index Cond: ((gram)::text = 'a'::text)
-> Bitmap Heap Scan on headlines_ngrams hn (cost=41.12..4359.59 rows=1103 width=66) (actual time=135.732..405.943 rows=294787 loops=1)
Recheck Cond: ((ngram_hashed_id)::text = (n.hashed_id)::text)
-> Bitmap Index Scan on by_ngramhashedid_headlinesngrams (cost=0.00..40.84 rows=1103 width=0) (actual time=72.570..72.570 rows=294787 loops=1)
Index Cond: ((ngram_hashed_id)::text = (n.hashed_id)::text)
-> Index Scan using by_headlinehashedid_homepagesheadlines on homepages_headlines hh (cost=0.56..2.19 rows=1 width=66) (actual time=0.005..0.005 rows=1 loops=294787)
Index Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
-> Index Scan using by_hashedid_homepages on homepages hp (cost=0.14..0.17 rows=1 width=33) (actual time=0.003..0.003 rows=0 loops=294787)
Index Cond: ((hashed_id)::text = (hh.homepage_hashed_id)::text)
Filter: ((url)::text = 'www.abcdefg.com'::text)
Rows Removed by Filter: 1
-> Bitmap Heap Scan on batchtimes_headlines bh (cost=30.51..2333.86 rows=1560 width=70) (actual time=0.015..0.031 rows=19 loops=1213)
Recheck Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
-> Bitmap Index Scan on by_headlinehashedid_batchtimesheadlines (cost=0.00..30.12 rows=1560 width=0) (actual time=0.013..0.013 rows=19 loops=1213)
Index Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
-> Index Scan using by_hashedid_batchtimes on batchtimes b (cost=0.28..1.74 rows=1 width=41) (actual time=0.003..0.004 rows=1 loops=22910)
Index Cond: ((hashed_id)::text = (bh.batchtime_hashed_id)::text)
-> Hash (cost=14.50..14.50 rows=200 width=8) (actual time=2.982..2.982 rows=3097 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 121kB
-> HashAggregate (cost=12.50..14.50 rows=200 width=8) (actual time=1.771..2.311 rows=3097 loops=1)
-> Function Scan on generate_series (cost=0.00..10.00 rows=1000 width=8) (actual time=0.439..0.701 rows=3097 loops=1)
Total runtime: 3125.985 ms
I have a 32GB server. Here are the modifications to postgresql.conf:
default_statistics_target = 100
maintenance_work_mem = 1920MB
checkpoint_completion_target = 0.9
effective_cache_size = 16GB
work_mem = 160MB
wal_buffers = 16MB
checkpoint_segments = 32
shared_buffers = 7680MB
The DB has recently been vacuumed, reindexed, and analyzed.
Any suggestions for how to tune this query?
This may or may not answer your question; I cannot comment above, since I don't have 50 rep on Stack Overflow. :/
My first question is: why INNER JOIN? It will return unwanted columns in your join result. For example, in your query, when you inner join
INNER JOIN headlines_ngrams AS hn ON bh.headline_hashed_id = hn.headline_hashed_id
the result will have two columns with the same information, which is redundant. So if you have 100,000,000 rows, both bh.headline_hashed_id and hn.headline_hashed_id will carry 100,000,000 entries each. In your query above you are joining 5 tables, yet you are only interested in
SELECT SUM(bh.count) AS count, b.time AS batchtime
so I believe you could use a natural join instead: http://en.wikipedia.org/wiki/Inner_join#Inner_join
The reason I can think of for the improved performance on the second attempt is the cache. People have mentioned above using a temporary table for generate_series, which could be a good option. Also, if you are thinking of using WITH in your query, you should read this article: link
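A temporary table along those lines might look like this (a sketch; the table name is mine):
-- Materialize the series once so the planner sees real row counts
-- instead of the default estimate for a function scan:
CREATE TEMP TABLE batch_hours AS
SELECT generate_series('2013-10-10 20:00:00'::timestamp,
                       '2014-02-16 20:00:00'::timestamp,
                       '1 hours') AS time;
ANALYZE batch_hours;
-- Then replace "b.time IN (SELECT * FROM generate_series(...))"
-- with: INNER JOIN batch_hours s ON b.time = s.time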