PostgreSQL query is not using an index

Environment
My PostgreSQL (9.2) schema looks like this:
CREATE TABLE first
(
    id_first bigint NOT NULL,
    first_date timestamp without time zone NOT NULL,
    CONSTRAINT first_pkey PRIMARY KEY (id_first)
)
WITH (
    OIDS=FALSE
);
CREATE INDEX first_first_date_idx
    ON first
    USING btree
    (first_date);
CREATE TABLE second
(
    id_second bigint NOT NULL,
    id_first bigint NOT NULL,
    CONSTRAINT second_pkey PRIMARY KEY (id_second),
    CONSTRAINT fk_first FOREIGN KEY (id_first)
        REFERENCES first (id_first) MATCH SIMPLE
        ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
    OIDS=FALSE
);
CREATE INDEX second_id_first_idx
    ON second
    USING btree
    (id_first);
CREATE TABLE third
(
    id_third bigint NOT NULL,
    id_second bigint NOT NULL,
    CONSTRAINT third_pkey PRIMARY KEY (id_third),
    CONSTRAINT fk_second FOREIGN KEY (id_second)
        REFERENCES second (id_second) MATCH SIMPLE
        ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
    OIDS=FALSE
);
CREATE INDEX third_id_second_idx
    ON third
    USING btree
    (id_second);
So, I have 3 tables, each with its own PK. First has an index on first_date, Second has a FK to First with an index on it, and Third has a FK to Second with an index on it as well:
First (0 --> n) Second (0 --> n) Third
First table contains about 10 000 000 records.
Second table contains about 20 000 000 records.
Third table contains about 18 000 000 records.
Date range in column first_date is from 2016-01-01 till today.
random_page_cost is set to 2.0.
default_statistics_target is set to 100.
STATISTICS on all FK, PK and first_date columns are set to 5000.
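For reference, a sketch of how such settings are typically applied (column names from the schema above; a new per-column statistics target only takes effect after a fresh ANALYZE):
ALTER TABLE first ALTER COLUMN first_date SET STATISTICS 5000;  -- likewise for the PK/FK columns
SET random_page_cost = 2.0;  -- per session; set it in postgresql.conf to persist
ANALYZE first;
ANALYZE second;
ANALYZE third;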
Task to do
I want to count all Third rows connected (via Second) to First rows where first_date < X.
My query:
SELECT count(t.id_third) AS count
FROM first f
JOIN second s ON s.id_first = f.id_first
JOIN third t ON t.id_second = s.id_second
WHERE first_date < _my_date
Problem description
Asking for 2 days - _my_date = '2016-01-03'
Everything works pretty well; the query takes 1-2 seconds.
EXPLAIN ANALYZE:
"Aggregate (cost=8585512.55..8585512.56 rows=1 width=8) (actual time=67.310..67.310 rows=1 loops=1)"
" -> Merge Join (cost=4208477.49..8583088.04 rows=969805 width=8) (actual time=44.277..65.948 rows=17631 loops=1)"
" Merge Cond: (s.id_second = t.id_second)"
" -> Sort (cost=4208477.48..4211121.75 rows=1057709 width=8) (actual time=44.263..46.035 rows=19230 loops=1)"
" Sort Key: s.id_second"
" Sort Method: quicksort Memory: 1670kB"
" -> Nested Loop (cost=0.01..4092310.41 rows=1057709 width=8) (actual time=6.169..39.183 rows=19230 loops=1)"
" -> Index Scan using first_first_date_idx on first f (cost=0.01..483786.81 rows=492376 width=8) (actual time=6.159..12.223 rows=10346 loops=1)"
" Index Cond: (first_date < '2016-01-03 00:00:00'::timestamp without time zone)"
" -> Index Scan using second_id_first_idx on second s (cost=0.00..7.26 rows=7 width=16) (actual time=0.002..0.002 rows=2 loops=10346)"
" Index Cond: (id_first = f.id_first)"
" -> Index Scan using third_id_second_idx on third t (cost=0.00..4316649.89 rows=17193788 width=16) (actual time=0.008..7.293 rows=17632 loops=1)"
"Total runtime: 67.369 ms"
Asking for 10 days or more - _my_date = '2016-01-11' or more
The query is not using an index scan anymore - it is replaced by a seq scan and takes 3-4 minutes...
Query plan:
"Aggregate (cost=8731468.75..8731468.76 rows=1 width=8) (actual time=234411.229..234411.229 rows=1 loops=1)"
" -> Hash Join (cost=4352424.81..8728697.88 rows=1108348 width=8) (actual time=189670.068..234400.540 rows=138246 loops=1)"
" Hash Cond: (t.id_second = o.id_second)"
" -> Seq Scan on third t (cost=0.00..4128080.88 rows=17193788 width=16) (actual time=0.016..124111.453 rows=17570724 loops=1)"
" -> Hash (cost=4332592.69..4332592.69 rows=1208810 width=8) (actual time=98566.740..98566.740 rows=151263 loops=1)"
" Buckets: 16384 Batches: 16 Memory Usage: 378kB"
" -> Hash Join (cost=561918.25..4332592.69 rows=1208810 width=8) (actual time=6535.801..98535.915 rows=151263 loops=1)"
" Hash Cond: (s.id_first = f.id_first)"
" -> Seq Scan on second s (cost=0.00..3432617.48 rows=18752248 width=16) (actual time=6090.771..88891.691 rows=19132869 loops=1)"
" -> Hash (cost=552685.31..552685.31 rows=562715 width=8) (actual time=444.630..444.630 rows=81650 loops=1)"
" -> Index Scan using first_first_date_idx on first f (cost=0.01..552685.31 rows=562715 width=8) (actual time=7.987..421.087 rows=81650 loops=1)"
" Index Cond: (first_date < '2016-01-13 00:00:00'::timestamp without time zone)"
"Total runtime: 234411.303 ms"
For test purposes, I have set:
SET enable_seqscan = OFF;
My queries start using index scans again and take 1-10 s (depending on the range).
Question
Why does it work like that? How can I convince the query planner to use an index scan?
EDIT
After reducing random_page_cost to 1.1, I can now select about 30 days while still getting an index scan. The query plan changed a little bit:
"Aggregate (cost=8071389.47..8071389.48 rows=1 width=8) (actual time=4915.196..4915.196 rows=1 loops=1)"
" -> Nested Loop (cost=0.01..8067832.28 rows=1422878 width=8) (actual time=14.402..4866.937 rows=399184 loops=1)"
" -> Nested Loop (cost=0.01..3492321.55 rows=1551849 width=8) (actual time=14.393..3012.617 rows=436794 loops=1)"
" -> Index Scan using first_first_date_idx on first f (cost=0.01..432541.99 rows=722404 width=8) (actual time=14.372..729.233 rows=236007 loops=1)"
" Index Cond: (first_date < '2016-02-01 00:00:00'::timestamp without time zone)"
" -> Index Scan using second_id_first_idx on second s (cost=0.00..4.17 rows=7 width=16) (actual time=0.008..0.009 rows=2 loops=236007)"
" Index Cond: (second = f.id_second)"
" -> Index Scan using third_id_second_idx on third t (cost=0.00..2.94 rows=1 width=16) (actual time=0.004..0.004 rows=1 loops=436794)"
" Index Cond: (id_second = s.id_second)"
"Total runtime: 4915.254 ms"
However, I still don't get why asking for more causes a seq scan...
Interestingly, when I ask for a range just above some kind of limit, I get a query plan like this (here a select for 40 days - asking for more will produce a full seq scan again):
"Aggregate (cost=8403399.27..8403399.28 rows=1 width=8) (actual time=138303.216..138303.217 rows=1 loops=1)"
" -> Hash Join (cost=3887619.07..8399467.63 rows=1572656 width=8) (actual time=44056.443..138261.203 rows=512062 loops=1)"
" Hash Cond: (t.id_second = s.id_second)"
" -> Seq Scan on third t (cost=0.00..4128080.88 rows=17193788 width=16) (actual time=0.004..119497.056 rows=17570724 loops=1)"
" -> Hash (cost=3859478.04..3859478.04 rows=1715203 width=8) (actual time=5695.077..5695.077 rows=560503 loops=1)"
" Buckets: 16384 Batches: 16 Memory Usage: 1390kB"
" -> Nested Loop (cost=0.01..3859478.04 rows=1715203 width=8) (actual time=65.250..5533.413 rows=560503 loops=1)"
" -> Index Scan using first_first_date_idx on first f (cost=0.01..477985.28 rows=798447 width=8) (actual time=64.927..1688.341 rows=302663 loops=1)"
" Index Cond: (first_date < '2016-02-11 00:00:00'::timestamp without time zone)"
" -> Index Scan using second_id_first_idx on second s (cost=0.00..4.17 rows=7 width=16) (actual time=0.010..0.012 rows=2 loops=302663)"
" Index Cond: (id_first = f.id_first)"
"Total runtime: 138303.306 ms"
UPDATE after Laurenz Albe's suggestions
After rewriting the query as Laurenz Albe suggested:
"Aggregate (cost=9102321.05..9102321.06 rows=1 width=8) (actual time=15237.830..15237.830 rows=1 loops=1)"
" -> Merge Join (cost=4578171.25..9097528.19 rows=1917143 width=8) (actual time=9111.694..15156.092 rows=803657 loops=1)"
" Merge Cond: (third.id_second = s.id_second)"
" -> Index Scan using third_id_second_idx on third (cost=0.00..4270478.19 rows=17193788 width=16) (actual time=23.650..5425.137 rows=803658 loops=1)"
" -> Materialize (cost=4577722.81..4588177.38 rows=2090914 width=8) (actual time=9088.030..9354.326 rows=879283 loops=1)"
" -> Sort (cost=4577722.81..4582950.09 rows=2090914 width=8) (actual time=9088.023..9238.426 rows=879283 loops=1)"
" Sort Key: s.id_second"
" Sort Method: external sort Disk: 15480kB"
" -> Merge Join (cost=673389.38..4341477.37 rows=2090914 width=8) (actual time=3662.239..8485.768 rows=879283 loops=1)"
" Merge Cond: (s.id_first = f.id_first)"
" -> Index Scan using second_id_first_idx on second s (cost=0.00..3587838.88 rows=18752248 width=16) (actual time=0.015..4204.308 rows=879284 loops=1)"
" -> Materialize (cost=672960.82..677827.55 rows=973345 width=8) (actual time=3662.216..3855.667 rows=892988 loops=1)"
" -> Sort (cost=672960.82..675394.19 rows=973345 width=8) (actual time=3662.213..3745.975 rows=476519 loops=1)"
" Sort Key: f.id_first"
" Sort Method: external sort Disk: 8400kB"
" -> Index Scan using first_first_date_idx on first f (cost=0.01..568352.90 rows=973345 width=8) (actual time=126.386..3233.134 rows=476519 loops=1)"
" Index Cond: (first_date < '2016-03-03 00:00:00'::timestamp without time zone)"
"Total runtime: 15244.404 ms"

First, it looks like some of the estimates are off.
Try to ANALYZE the tables and see if that changes the query plan chosen.
What might also help is to lower random_page_cost to a value just over 1 and see if that improves the plan.
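A sketch of those two experiments (both are safe to try per session; nothing needs to be persisted just to test):
ANALYZE first;
ANALYZE second;
ANALYZE third;
SET random_page_cost = 1.1;  -- a value just over 1, for this session only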
It is interesting to note that the index scan on third_id_second_idx in the fast query produces only 17632 rows instead of over 17 million, which I can only explain by assuming that from that row on, the values of id_second no longer match any row in the join of first and second, i.e. the merge join is completed after that.
You can try to exploit that with a rewritten query. Try
JOIN (SELECT id_second, id_third FROM third ORDER BY id_second) t
instead of
JOIN third t
That may result in a better plan since PostgreSQL won't optimize the ORDER BY away, and the planner may decide that since it has to sort third anyway, it may be cheaper to use a merge join. That way you trick the planner into choosing a plan that it wouldn't recognize as ideal. With a different value distribution the planner's original choice would probably be better.
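Putting that fragment into the full query, the rewrite might look like this (a sketch, keeping the aliases from the original query):
SELECT count(t.id_third) AS count
FROM first f
JOIN second s ON s.id_first = f.id_first
JOIN (SELECT id_second, id_third
      FROM third
      ORDER BY id_second) t ON t.id_second = s.id_second
WHERE f.first_date < _my_date;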

Related

Multiple ORDER BY DESC will not use index in Postgres

I'm trying to create some queries in order to implement cursor pagination (something like this: https://shopify.engineering/pagination-relative-cursors) on Postgres. In my implementation I'm trying to achieve efficient pagination even when ordering by NON-unique columns.
I'm struggling to do that efficiently, in particular with the query that retrieves the previous page given a specific cursor.
The table I'm using to test these queries (>3M records) is very simple, and it has this structure:
CREATE TABLE "placemarks" (
"id" serial NOT NULL DEFAULT,
"assetId" text,
"createdAt" timestamptz,
PRIMARY KEY ("id")
);
I have an index on the id field clearly and also an index on the assetId column.
This is the query I'm using for retrieving the next page given a cursor composed by the latest ID and the latest assetId:
SELECT *
FROM "placemarks"
WHERE "assetId" > 'CURSOR_ASSETID'
   OR ("assetId" = 'CURSOR_ASSETID'
       AND id > CURSOR_INT_ID)
ORDER BY "assetId", id
LIMIT 5;
This query is actually pretty fast: it uses the indexes, and it handles duplicated values of assetId by using the unique id field as a tie-breaker, so duplicated rows with the same CURSOR_ASSETID value are not skipped.
-> Sort (cost=25709.62..25726.63 rows=6803 width=2324) (actual time=0.128..0.138 rows=5 loops=1)
" Sort Key: ""assetId"", id"
Sort Method: top-N heapsort Memory: 45kB
-> Bitmap Heap Scan on placemarks (cost=271.29..25596.63 rows=6803 width=2324) (actual time=0.039..0.088 rows=11 loops=1)
" Recheck Cond: (((""assetId"")::text > 'CURSOR_ASSETID'::text) OR ((""assetId"")::text = 'CURSOR_ASSETID'::text))"
" Filter: (((""assetId"")::text > 'CURSOR_ASSETID'::text) OR (((""assetId"")::text = 'CURSOR_ASSETID'::text) AND (id > CURSOR_INT_ID)))"
Rows Removed by Filter: 1
Heap Blocks: exact=10
-> BitmapOr (cost=271.29..271.29 rows=6803 width=0) (actual time=0.030..0.034 rows=0 loops=1)
" -> Bitmap Index Scan on ""placemarks_assetId_key"" (cost=0.00..263.45 rows=6802 width=0) (actual time=0.023..0.023 rows=11 loops=1)"
" Index Cond: ((""assetId"")::text > 'CURSOR_ASSETID'::text)"
" -> Bitmap Index Scan on ""placemarks_assetId_key"" (cost=0.00..4.44 rows=1 width=0) (actual time=0.005..0.005 rows=1 loops=1)"
" Index Cond: ((""assetId"")::text = 'CURSOR_ASSETID'::text)"
Planning time: 0.201 ms
Execution time: 0.194 ms
The issue is when I try to get the same page but with the query that should return me the previous page:
SELECT *
FROM placemarks
WHERE "assetId" < 'CURSOR_ASSETID'
   OR ("assetId" = 'CURSOR_ASSETID'
       AND id < CURSOR_INT_ID)
ORDER BY "assetId" DESC, id DESC
LIMIT 5;
With this query no indexes are used, even though using them would be much faster:
Limit (cost=933644.62..933644.63 rows=5 width=2324)
-> Sort (cost=933644.62..944647.42 rows=4401120 width=2324)
" Sort Key: ""assetId"" DESC, id DESC"
-> Seq Scan on placemarks (cost=0.00..860543.60 rows=4401120 width=2324)
" Filter: (((""assetId"")::text < 'CURSOR_ASSETID'::text) OR (((""assetId"")::text = 'CURSOR_ASSETID'::text) AND (id < CURSOR_INT_ID)))"
I've noticed that by forcing index usage with SET enable_seqscan = OFF; the query does use the indexes and performs much better. The resulting query plan:
Limit (cost=12.53..12.54 rows=5 width=108) (actual time=0.532..0.555 rows=5 loops=1)
-> Sort (cost=12.53..12.55 rows=6 width=108) (actual time=0.524..0.537 rows=5 loops=1)
Sort Key: assetid DESC, id DESC
Sort Method: top-N heapsort Memory: 25kB
" -> Bitmap Heap Scan on ""placemarks"" (cost=8.33..12.45 rows=6 width=108) (actual time=0.274..0.340 rows=14 loops=1)"
" Recheck Cond: ((assetid < 'CURSOR_ASSETID'::text) OR (assetid = 'CURSOR_ASSETID'::text))"
" Filter: ((assetid < 'CURSOR_ASSETID'::text) OR ((assetid = 'CURSOR_ASSETID'::text) AND (id < 14)))"
Rows Removed by Filter: 1
Heap Blocks: exact=1
-> BitmapOr (cost=8.33..8.33 rows=7 width=0) (actual time=0.152..0.159 rows=0 loops=1)
" -> Bitmap Index Scan on ""placemarks_assetid_idx"" (cost=0.00..4.18 rows=6 width=0) (actual time=0.108..0.110 rows=12 loops=1)"
" Index Cond: (assetid < 'CURSOR_ASSETID'::text)"
" -> Bitmap Index Scan on ""placemarks_assetid_idx"" (cost=0.00..4.15 rows=1 width=0) (actual time=0.036..0.036 rows=3 loops=1)"
" Index Cond: (assetid = 'CURSOR_ASSETID'::text)"
Planning time: 1.319 ms
Execution time: 0.918 ms
Any clue how to optimize the second query so that it always uses the indexes?
Postgres DB version: 10.20
The fast performance of your first query seems to be down to luck of where your constant 'CURSOR_ASSETID' falls in the distribution of that column. Or maybe this luck is not luck but is how it will always be?
For good performance more generally, including for reverse sorting, you need to write your query with a tuple comparator, not an OR comparator.
WHERE
("assetId",id) < ('something',500000)
If you are using a version before incremental sorting was introduced in v13, or if "assetId" can have a large number of ties, then you will need a multicolumn index on ("assetId",id) to get optimal performance.
And there is no reason to decorate the index with DESC, as PostgreSQL knows how to read the index backwards. Decorating the index is needed when the two columns have different ordering than each other, as then you would need to read the undecorated index "spirally" rather than either completely forward or completely backwards. (But that wouldn't work well here anyway, as tuple comparators can't have different orderings between the columns.)
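Put together, the previous-page query might look like this (a sketch using the placeholders from the question; the index name is made up):
CREATE INDEX placemarks_assetid_id_idx ON placemarks ("assetId", id);

SELECT *
FROM placemarks
WHERE ("assetId", id) < ('CURSOR_ASSETID', CURSOR_INT_ID)
ORDER BY "assetId" DESC, id DESC
LIMIT 5;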

Why is postgis not applying an index when there is a "WHERE IN" in the query?

We have a table shops with a column named location of type GEOMETRY(POINT,4326) and a column version of type tinyint, and a table version with a single row containing an integer.
Why is the query below not using the GIST(location) index?
SELECT * FROM shops
WHERE ("version" IN (SELECT "version" FROM "version"))
ORDER BY (location <-> '0020000001000010e64029d460d2a8aee0404bc8bb0955ea17'::geometry) LIMIT 10;
Whereas the same query without the IN does use the index:
SELECT * FROM shops
WHERE ("version" = (SELECT "version" FROM "version" LIMIT 1))
ORDER BY (location <-> '0020000001000010e64029d460d2a8aee0404bc8bb0955ea17'::geometry) LIMIT 10;
This has been affecting us since we updated from Postgres 9 to 11. I was able to trace the issue back to the selection above.
EDIT: added EXPLAIN ANALYZE output
First query (without index application):
"Limit (cost=25260.30..25260.32 rows=10 width=1275) (actual time=254.809..254.814 rows=10 loops=1)"
" -> Sort (cost=25260.30..25260.39 rows=36 width=1275) (actual time=254.807..254.809 rows=10 loops=1)"
" Sort Key: ((shops.location <-> '0101000020E6100000E0AEA8D260D4294017EA5509BBC84B40'::geometry))"
" Sort Method: top-N heapsort Memory: 54kB"
" -> Nested Loop (cost=41.88..25259.52 rows=36 width=1275) (actual time=0.099..215.201 rows=58179 loops=1)"
" Join Filter: (shops.version = version.version)"
" -> HashAggregate (cost=41.88..43.88 rows=200 width=4) (actual time=0.014..0.016 rows=1 loops=1)"
" Group Key: version.version"
" -> Seq Scan on version (cost=0.00..35.50 rows=2550 width=4) (actual time=0.009..0.010 rows=1 loops=1)"
Second query:
"Limit (cost=0.28..440.04 rows=10 width=1275) (actual time=0.194..0.233 rows=10 loops=1)"
" -> Nested Loop Semi Join (cost=0.28..1574995.44 rows=35815 width=1275) (actual time=0.193..0.230 rows=10 loops=1)"
" Join Filter: (shop.version = version.version)"
" -> Index Scan using shop_location_idx on shops (cost=0.28..101549.81 rows=71630 width=1267) (actual time=0.182..0.213 rows=10 loops=1)"
" Order By: (location <-> '0101000020E6100000E0AEA8D260D4294017EA5509BBC84B40'::geometry)"
" -> Materialize (cost=0.00..48.25 rows=2550 width=4) (actual time=0.001..0.001 rows=1 loops=10)"
" -> Seq Scan on version (cost=0.00..35.50 rows=2550 width=4) (actual time=0.006..0.006 rows=1 loops=1)"
Solved
See the answer below, with thanks to @JimJones and @JimMacaulay.
Adding an index on the version table somehow helps Postgres use the index on the location column as well.
So adding this index is the fix for the 1st query:
CREATE INDEX version_idx
ON public.version USING btree
(version)
TABLESPACE pg_default;
Correctly indexing all columns involved in the lookup helps Postgres build a performant query plan.

PostgreSQL 10 - IN and ANY performance inexplicable behaviour

I select from a big table WHERE id is in an array/list.
I checked several variants, and the results surprised me.
1. Use ANY and ARRAY
EXPLAIN (ANALYZE,BUFFERS)
SELECT * FROM cca_data_hours
WHERE
datetime = '2018-01-07 19:00:00'::timestamp without time zone AND
id_web_page = ANY (ARRAY[1, 2, 8, 3 /* ~50k ids */])
Result
"Index Scan using cca_data_hours_pri on cca_data_hours (cost=0.28..576.79 rows=15 width=188) (actual time=0.035..0.998 rows=6 loops=1)"
" Index Cond: (datetime = '2018-01-07 19:00:00'::timestamp without time zone)"
" Filter: (id_web_page = ANY ('{1,2,8,3, (...)"
" Rows Removed by Filter: 5"
" Buffers: shared hit=3"
"Planning time: 57.625 ms"
"Execution time: 1.065 ms"
2. Use IN and VALUES
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM cca_data_hours
WHERE
datetime = '2018-01-07 19:00:00'::timestamp without time zone AND
id_web_page IN (VALUES (1),(2),(8),(3) /* ~50k ids */)
Result
"Hash Join (cost=439.77..472.66 rows=8 width=188) (actual time=90.806..90.858 rows=6 loops=1)"
" Hash Cond: (cca_data_hours.id_web_page = "*VALUES*".column1)"
" Buffers: shared hit=3"
" -> Index Scan using cca_data_hours_pri on cca_data_hours (cost=0.28..33.06 rows=15 width=188) (actual time=0.035..0.060 rows=11 loops=1)"
" Index Cond: (datetime = '2018-01-07 19:00:00'::timestamp without time zone)"
" Buffers: shared hit=3"
" -> Hash (cost=436.99..436.99 rows=200 width=4) (actual time=90.742..90.742 rows=4 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 9kB"
" -> HashAggregate (cost=434.99..436.99 rows=200 width=4) (actual time=90.709..90.717 rows=4 loops=1)"
" Group Key: "*VALUES*".column1"
" -> Values Scan on "*VALUES*" (cost=0.00..362.49 rows=28999 width=4) (actual time=0.008..47.056 rows=28999 loops=1)"
"Planning time: 53.607 ms"
"Execution time: 91.681 ms"
I expected case #2 to be faster, but it is not.
Why is IN with VALUES slower?
Comparing the EXPLAIN ANALYZE results, it looks like the old version wasn't able to use the available index in the given examples. The reason ANY (ARRAY[]) became faster is in the 9.2 release notes: https://www.postgresql.org/docs/current/static/release-9-2.html
Allow indexed_col op ANY(ARRAY[...]) conditions to be used in plain index scans and index-only scans (Tom Lane)
The site where you got the suggestion from was about version 9.0.
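As an aside (not part of the answer above), for very long lists another common pattern is to join against unnest, which keeps the ids as a relation like the VALUES form but often still allows an index scan on the joined column:
EXPLAIN (ANALYZE, BUFFERS)
SELECT c.*
FROM cca_data_hours c
JOIN unnest(ARRAY[1, 2, 8, 3 /* ~50k ids */]) AS ids(id)
  ON c.id_web_page = ids.id
WHERE c.datetime = '2018-01-07 19:00:00'::timestamp without time zone;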

Why does this query not use the index?

I encountered a strange behaviour of the Postgres optimizer on the following query:
select count(product0_.id) as col_0_0_ from Product product0_
where product0_.active=true
and (product0_.aggregatorId is null
or product0_.aggregatorId in ($1 , $2 , $3))
Product has about 54 columns, active is a boolean with a btree index, and aggregatorId is varchar(15) and has a btree index.
For the query above, the index on aggregatorId is not used:
Aggregate (cost=169995.75..169995.76 rows=1 width=32) (actual time=3904.726..3904.727 rows=1 loops=1)
-> Seq Scan on product product0_ (cost=0.00..165510.39 rows=1794146 width=32) (actual time=0.055..2407.195 rows=1851827 loops=1)
Filter: (active AND ((aggregatorid IS NULL) OR ((aggregatorid)::text = ANY ('{5109037,5001015,70601}'::text[]))))
Rows Removed by Filter: 542146
Total runtime: 3904.925 ms
But if we reduce the query by leaving out the null check for this column, the index gets used:
Aggregate (cost=17600.93..17600.94 rows=1 width=32) (actual time=614.933..614.935 rows=1 loops=1)
-> Index Scan using idx_prod_aggr on product product0_ (cost=0.43..17487.56 rows=45347 width=32) (actual time=19.284..594.509 rows=12099 loops=1)
Index Cond: ((aggregatorid)::text = ANY ('{5109037,5001015,70601}'::text[]))
Filter: active
Rows Removed by Filter: 49130
Total runtime: 150.255 ms
As far as I know a btree index can handle NULL checks, so I don't understand why the index is not used for the complete query. The product table contains about 2.3 million entries, so the seq scan is not very fast.
EDIT:
The index is very standard:
CREATE INDEX idx_prod_aggr
ON product
USING btree
(aggregatorid COLLATE pg_catalog."default");
Since there are many identical values in the column you use in the WHERE clause (78% of all the table rows according to your numbers), the database concludes that it is cheaper to use a full table scan than to waste additional time reading the index.
The rule of thumb with most database vendors is that an index will probably not be used if it can't narrow the search down to about 5% of all the table records.
Your problem looked interesting, so I reproduced your scenario: Postgres 9.1, a table with 1M rows, one boolean column, one varchar column, both indexed, half of the table with NULL names.
I had the same explain analyze output when the varchar column was not indexed. However, with the index Postgres uses a bitmap scan on the NULL condition and on the IN condition, and then merges them with an OR condition.
It then filters on the boolean condition (because the indexes are separate):
explain analyze
select * from A where active is true and ((name is null) OR (name in ('1','2','3') ));
See output:
"Bitmap Heap Scan on a (cost=17.34..21.35 rows=1 width=18) (actual time=0.048..0.048 rows=0 loops=1)"
" Recheck Cond: ((name IS NULL) OR ((name)::text = ANY ('{1,2,3}'::text[])))"
" Filter: (active IS TRUE)"
" -> BitmapOr (cost=17.34..17.34 rows=1 width=0) (actual time=0.047..0.047 rows=0 loops=1)"
" -> Bitmap Index Scan on idx_prod_aggr (cost=0.00..4.41 rows=1 width=0) (actual time=0.010..0.010 rows=0 loops=1)"
" Index Cond: (name IS NULL)"
" -> Bitmap Index Scan on idx_prod_aggr (cost=0.00..12.93 rows=1 width=0) (actual time=0.036..0.036 rows=0 loops=1)"
" Index Cond: ((name)::text = ANY ('{1,2,3}'::text[]))"
"Total runtime: 0.077 ms"
This makes me think that you missed some details, if so, add them to your question.
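For cases where the combined predicate really is selective, one common rewrite (a sketch of my own, not from the answers above) is to split the OR into two counts that can each use an index on its own:
SELECT (SELECT count(*) FROM product
        WHERE active AND aggregatorId IS NULL)
     + (SELECT count(*) FROM product
        WHERE active AND aggregatorId IN ('5109037', '5001015', '70601'))
       AS col_0_0_;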

Postgresql Explain Plan Differences

this is my first post....
I have a query that is taking longer than I would like (don't we all!)
Depending on what I put in the WHERE clause...it MAY run faster.
I am trying to understand why the query plan is different
AND
what I can do to speed the query up overall.
Here's Query #1:
SELECT date_observed, base_value
FROM device_read_data
WHERE fk_device_rw_id IN
(SELECT fk_device_rw_id FROM equipment_set_rw
WHERE fk_equipment_set_id = CAST('ed151028-1fc0-11e3-b79f-47c0fd87d2b4' AS uuid))
AND date_observed
BETWEEN '2013-12-01 07:45:00+00'::timestamptz
AND '2014-01-01 07:59:59+00'::timestamptz
AND base_value ~ '[0-9]+(\.[0-9]+)?'
;
Here's Query Plan #1:
"Hash Semi Join (cost=11.65..5640243.59 rows=92194 width=16) (actual time=34.947..132522.023 rows=43609 loops=1)"
" Hash Cond: (device_read_data.fk_device_rw_id = equipment_set_rw.fk_device_rw_id)"
" -> Seq Scan on device_read_data (cost=0.00..5449563.56 rows=72157042 width=32) (actual time=0.844..123760.331 rows=71764376 loops=1)"
" Filter: ((date_observed >= '2013-12-01 07:45:00+00'::timestamp with time zone) AND (date_observed <= '2014-01-01 07:59:59+00'::timestamp with time zone) AND ((base_value)::text ~ '[0-9]+(\.[0-9]+)?'::text))"
" Rows Removed by Filter: 82135660"
" -> Hash (cost=11.61..11.61 rows=3 width=16) (actual time=0.018..0.018 rows=1 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 1kB"
" -> Bitmap Heap Scan on equipment_set_rw (cost=4.27..11.61 rows=3 width=16) (actual time=0.016..0.016 rows=1 loops=1)"
" Recheck Cond: (fk_equipment_set_id = 'ed151028-1fc0-11e3-b79f-47c0fd87d2b4'::uuid)"
" -> Bitmap Index Scan on uc_fk_equipment_set_id_fk_device_rw_id (cost=0.00..4.27 rows=3 width=0) (actual time=0.011..0.011 rows=1 loops=1)"
" Index Cond: (fk_equipment_set_id = 'ed151028-1fc0-11e3-b79f-47c0fd87d2b4'::uuid)"
"Total runtime: 132530.290 ms"
Here's Query #2:
SELECT date_observed, base_value
FROM device_read_data
WHERE fk_device_rw_id IN
(SELECT fk_device_rw_id FROM equipment_set_rw
WHERE fk_equipment_set_id = CAST('ed151028-1fc0-11e3-b79f-47c0fd87d2b4' AS uuid))
AND date_observed
BETWEEN '2014-01-01 07:45:00+00'::timestamptz
AND '2014-02-01 07:59:59+00'::timestamptz
AND base_value ~ '[0-9]+(\.[0-9]+)?'
;
Here's Query Plan #2:
"Nested Loop (cost=4.27..1869543.46 rows=20391 width=16) (actual time=0.041..2053.656 rows=12997 loops=1)"
" -> Bitmap Heap Scan on equipment_set_rw (cost=4.27..9.73 rows=2 width=16) (actual time=0.015..0.017 rows=1 loops=1)"
" Recheck Cond: (fk_equipment_set_id = 'ed151028-1fc0-11e3-b79f-47c0fd87d2b4'::uuid)"
" -> Bitmap Index Scan on uc_fk_equipment_set_id_fk_device_rw_id (cost=0.00..4.27 rows=2 width=0) (actual time=0.010..0.010 rows=1 loops=1)"
" Index Cond: (fk_equipment_set_id = 'ed151028-1fc0-11e3-b79f-47c0fd87d2b4'::uuid)"
" -> Index Scan using idx_device_read_data_date_observed_fk_device_rw_id on device_read_data (cost=0.00..934664.91 rows=10195 width=32) (actual time=0.024..2050.656 rows=12997 loops=1)"
" Index Cond: ((date_observed >= '2014-01-01 07:45:00+00'::timestamp with time zone) AND (date_observed <= '2014-02-01 07:59:59+00'::timestamp with time zone) AND (fk_device_rw_id = equipment_set_rw.fk_device_rw_id))"
" Filter: ((base_value)::text ~ '[0-9]+(\.[0-9]+)?'::text)"
"Total runtime: 2055.068 ms"
I've only changed the Date Range in the Where clause.
You can see that in Query #1 there is a Seq Scan on the table VS an Index Scan in Query #2.
I'm trying to determine what is causing this, but I can't seem to find the answer.
Additional Information
There is a composite index on (date_observed, fk_device_rw_id)
There are never any deletes on this table. Autovacuum is not needed.
I vacuumed the table anyway....but this had no effect.
I've rebuilt the index on this table.
I've analyzed this table.
This system is a copy of Prod and is currently idle.
System Information
Running Postgres 9.2 on Linux
16GB system RAM
shared_buffers set to 4GB
What other information can I provide? I am sure there are things I have left out.
Thanks for your help.
Edit 1
I tried: set enable_seqscan = false
Here are the Explain Plan Results:
"Hash Semi Join (cost=2566484.50..7008502.81 rows=92194 width=16) (actual time=18587.453..182228.966 rows=43609 loops=1)"
" Hash Cond: (device_read_data.fk_device_rw_id = equipment_set_rw.fk_device_rw_id)"
" -> Bitmap Heap Scan on device_read_data (cost=2566472.85..6817822.78 rows=72157042 width=32) (actual time=18562.247..172074.048 rows=71764376 loops=1)"
" Recheck Cond: ((date_observed >= '2013-12-01 07:45:00+00'::timestamp with time zone) AND (date_observed <= '2014-01-01 07:59:59+00'::timestamp with time zone))"
" Rows Removed by Index Recheck: 2102"
" Filter: ((base_value)::text ~ '[0-9]+(\.[0-9]+)?'::text)"
" Rows Removed by Filter: 12265137"
" -> Bitmap Index Scan on idx_device_read_data_date_observed_fk_device_rw_id (cost=0.00..2548433.59 rows=85430682 width=0) (actual time=18556.228..18556.228 rows=84029513 loops=1)"
" Index Cond: ((date_observed >= '2013-12-01 07:45:00+00'::timestamp with time zone) AND (date_observed <= '2014-01-01 07:59:59+00'::timestamp with time zone))"
" -> Hash (cost=11.61..11.61 rows=3 width=16) (actual time=16.134..16.134 rows=1 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 1kB"
" -> Bitmap Heap Scan on equipment_set_rw (cost=4.27..11.61 rows=3 width=16) (actual time=16.128..16.129 rows=1 loops=1)"
" Recheck Cond: (fk_equipment_set_id = 'ed151028-1fc0-11e3-b79f-47c0fd87d2b4'::uuid)"
" -> Bitmap Index Scan on uc_fk_equipment_set_id_fk_device_rw_id (cost=0.00..4.27 rows=3 width=0) (actual time=16.116..16.116 rows=1 loops=1)"
" Index Cond: (fk_equipment_set_id = 'ed151028-1fc0-11e3-b79f-47c0fd87d2b4'::uuid)"
"Total runtime: 182244.181 ms"
As predicted, the query took longer.
Are there just too many records to make this faster?
What are my choices?
Thanks.
Edit 2
I tried the re-write approach. I'm afraid the results were similar to the original.
Here's the query Plan:
"Hash Join (cost=11.65..6013386.19 rows=90835 width=16) (actual time=35.272..127965.785 rows=43609 loops=1)"
" Hash Cond: (a.fk_device_rw_id = b.fk_device_rw_id)"
" -> Seq Scan on device_read_data a (cost=0.00..5565898.74 rows=71450793 width=32) (actual time=13.050..119667.814 rows=71764376 loops=1)"
" Filter: ((date_observed >= '2013-12-01 07:45:00+00'::timestamp with time zone) AND (date_observed <= '2014-01-01 07:59:59+00'::timestamp with time zone) AND ((base_value)::text ~ '[0-9]+(\.[0-9]+)?'::text))"
" Rows Removed by Filter: 85426425"
" -> Hash (cost=11.61..11.61 rows=3 width=16) (actual time=0.018..0.018 rows=1 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 1kB"
" -> Bitmap Heap Scan on equipment_set_rw b (cost=4.27..11.61 rows=3 width=16) (actual time=0.015..0.016 rows=1 loops=1)"
" Recheck Cond: (fk_equipment_set_id = 'ed151028-1fc0-11e3-b79f-47c0fd87d2b4'::uuid)"
" -> Bitmap Index Scan on uc_fk_equipment_set_id_fk_device_rw_id (cost=0.00..4.27 rows=3 width=0) (actual time=0.011..0.011 rows=1 loops=1)"
" Index Cond: (fk_equipment_set_id = 'ed151028-1fc0-11e3-b79f-47c0fd87d2b4'::uuid)"
"Total runtime: 127992.849 ms"
It seems like a simple problem: return records from a table that fall in a particular date range. Given my existing system architecture, perhaps there's a threshold of how many records can exist in the table before performance is adversely affected.
Unless there are other suggestions, I may need to pursue the partitioning approach.
Thanks for the help thus far!
Although both date ranges span about a month, the range in your first query matches 72M rows out of about 154M rows in device_read_data - nearly half of the rows in that table - while the range in the second query matches far fewer.
Index scans are generally slower than full table scans for that many rows (because an index scan has to read index pages and data pages, the total number of disk reads required to get that many rows is likely larger than just reading every data page).
You can set enable_seqscan = false before running the first query to see the difference, and if you're feeling adventurous run your explain as explain (analyze, buffers) <query> to see how many block reads you get when doing a table scan versus an index scan.
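For example, with the first query (the SET only affects the current session):
SET enable_seqscan = false;
EXPLAIN (ANALYZE, BUFFERS)
SELECT date_observed, base_value
FROM device_read_data
WHERE fk_device_rw_id IN
      (SELECT fk_device_rw_id FROM equipment_set_rw
       WHERE fk_equipment_set_id = CAST('ed151028-1fc0-11e3-b79f-47c0fd87d2b4' AS uuid))
  AND date_observed BETWEEN '2013-12-01 07:45:00+00'::timestamptz
                        AND '2014-01-01 07:59:59+00'::timestamptz
  AND base_value ~ '[0-9]+(\.[0-9]+)?';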
Edit: For your specific problem you might have some luck using partial indexes. You'll have to figure out how to build these so that they cast as wide a net as possible (it's tempting but wasteful to write a partial index per problem) but you might start with something like this:
create index idx_device_read_data_date_observed_base_value
on device_read_data (date_observed)
where base_value ~ '[0-9]+(\.[0-9]+)?'
;
That index will only be built for those rows matching that base_value pattern. You'd know better than we would if that's a fairly restrictive condition or not (it'd be good for you if it did reduce the number of rows to consider).
You might also flip that idea: index on base_value matching that pattern and make your where conditions something like date_observed between '2013-12-01' and '2013-12-31', adding one such index for each month (this way is likely to get out of hand with just indexes - I'd switch to partitioning).
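A sketch of one such per-month partial index (the index name is made up; needing one of these per month is exactly what gets out of hand):
create index idx_device_read_data_2013_12
on device_read_data (fk_device_rw_id, date_observed)
where base_value ~ '[0-9]+(\.[0-9]+)?'
  and date_observed between '2013-12-01' and '2013-12-31';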
Another potential improvement could come from re-writing your query. Here's an approach that eliminates the IN condition, which provides the same results if there are no repeats of fk_device_rw_id in equipment_set_rw for the given fk_equipment_set_id.
SELECT a.date_observed, a.base_value
FROM device_read_data a
join equipment_set_rw b
on a.fk_device_rw_id = b.fk_device_rw_id
WHERE b.fk_equipment_set_id = CAST('ed151028-1fc0-11e3-b79f-47c0fd87d2b4' AS uuid)
AND a.date_observed BETWEEN '2014-01-01 07:45:00+00'::timestamptz
AND '2014-02-01 07:59:59+00'::timestamptz
AND a.base_value ~ '[0-9]+(\.[0-9]+)?'
;
I've tried a few things and I'm satisfied for now with the performance.
I changed the index on the device_read_data table to the reverse order of what it was.
Original Index:
CREATE UNIQUE INDEX idx_device_read_data_date_observed_fk_device_rw_id
ON device_read_data
USING btree (date_observed, fk_device_rw_id);
New Index:
CREATE UNIQUE INDEX idx_device_read_data_date_observed_fk_device_rw_id
ON device_read_data
USING btree (fk_device_rw_id, date_observed);
The fk_device_rw_id column has a much lower cardinality. Placing this column first in the index has helped to filter the records much faster.
Also, make sure the columns in the where clause are in the same order as the composite index. (Which is the case....now.)
I altered the statistics on the date_observed column. Thus giving the query planner more information to work with.
Originally it was using the postgres default of 100. I set it to this:
ALTER TABLE device_read_data ALTER COLUMN date_observed SET STATISTICS 1000;
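Note that a new statistics target only takes effect once the table's statistics are re-collected:
ANALYZE device_read_data;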
Below are the results of the query. Much...much faster.
I may be able to tweak this further with additional statistics...however, this works for now. I may be able to hold off on partitioning for a bit.
Thanks for all your help.
Query:
explain Analyze
SELECT date_observed, base_value
FROM device_read_data
WHERE fk_device_rw_id IN
(SELECT fk_device_rw_id FROM equipment_set_rw
WHERE fk_equipment_set_id = CAST('ed151028-1fc0-11e3-b79f-47c0fd87d2b4' AS uuid))
AND (date_observed >= '2013-12-01 07:45:00+00'::timestamptz AND date_observed <= '2014-01-01 07:59:59+00'::timestamptz)
AND base_value ~ '[0-9]+(\.[0-9]+)?'
;
New Query Plan:
"Nested Loop (cost=1197.25..264699.54 rows=59694 width=16) (actual time=25.876..493.073 rows=43609 loops=1)"
" -> Bitmap Heap Scan on equipment_set_rw (cost=4.27..9.73 rows=2 width=16) (actual time=0.018..0.019 rows=1 loops=1)"
" Recheck Cond: (fk_equipment_set_id = 'ed151028-1fc0-11e3-b79f-47c0fd87d2b4'::uuid)"
" -> Bitmap Index Scan on uc_fk_equipment_set_id_fk_device_rw_id (cost=0.00..4.27 rows=2 width=0) (actual time=0.012..0.012 rows=1 loops=1)"
" Index Cond: (fk_equipment_set_id = 'ed151028-1fc0-11e3-b79f-47c0fd87d2b4'::uuid)"
" -> Bitmap Heap Scan on device_read_data (cost=1192.99..132046.43 rows=29847 width=32) (actual time=25.849..486.701 rows=43609 loops=1)"
" Recheck Cond: ((fk_device_rw_id = equipment_set_rw.fk_device_rw_id) AND (date_observed >= '2013-12-01 07:45:00+00'::timestamp with time zone) AND (date_observed <= '2014-01-01 07:59:59+00'::timestamp with time zone))"
" Rows Removed by Index Recheck: 2076173"
" Filter: ((base_value)::text ~ '[0-9]+(\.[0-9]+)?'::text)"
" -> Bitmap Index Scan on idx_device_read_data_date_observed_fk_device_rw_id (cost=0.00..1185.53 rows=35640 width=0) (actual time=24.000..24.000 rows=43609 loops=1)"
" Index Cond: ((fk_device_rw_id = equipment_set_rw.fk_device_rw_id) AND (date_observed >= '2013-12-01 07:45:00+00'::timestamp with time zone) AND (date_observed <= '2014-01-01 07:59:59+00'::timestamp with time zone))"
"Total runtime: 495.506 ms"