Why isn't Postgres using the index with DISTINCT?

I have this table:
CREATE TABLE public.prodhistory (
    curve_id int4 NOT NULL,
    start_prod_date date NOT NULL,
    prod_date date NOT NULL,
    monthly_prod_rate float4 NOT NULL,
    eff_date timestamp NOT NULL,
    /* Keys */
    CONSTRAINT prodhistorypk
        PRIMARY KEY (curve_id, prod_date, start_prod_date, eff_date),
    /* Foreign keys */
    CONSTRAINT prodhistory2typecurves_fk
        FOREIGN KEY (curve_id)
        REFERENCES public.typecurves(curve_id)
) WITH (
    OIDS = FALSE
);

CREATE INDEX prodhistory_idx_curve_id01
    ON public.prodhistory
    (curve_id);
with ~42M rows.
And I execute this query:
SELECT DISTINCT curve_id FROM prodhistory
Which I expected to be very quick, given the index. But no: 270 seconds. So I run EXPLAIN ANALYZE and get:
HashAggregate  (cost=824870.03..824873.08 rows=305 width=4) (actual time=211834.018..211834.097 rows=315 loops=1)
  Output: curve_id
  Group Key: prodhistory.curve_id
  ->  Seq Scan on public.prodhistory  (cost=0.00..718003.22 rows=42746722 width=4) (actual time=12.751..200826.299 rows=43218808 loops=1)
        Output: curve_id
Planning time: 0.115 ms
Execution time: 211848.137 ms
I'm not too experienced in reading these plans, but a Seq Scan over the whole table seems bad.
Any thoughts? I'm sort of stumped.

This plan is chosen because PostgreSQL thinks it is cheaper.
You can compare by setting
SET enable_seqscan=off;
and then re-running your EXPLAIN (ANALYZE) statement. Compare cost and actual time in both cases and check if PostgreSQL estimated correctly or not.
If you find that using an Index Scan or Index Only Scan is actually cheaper, you could consider twiddling the cost parameters to match your machine better, e.g. lower random_page_cost or cpu_index_tuple_cost or raise cpu_tuple_cost.
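For example, a minimal sketch of that comparison and of the kind of tuning it might lead to (the values below are illustrative, not recommendations):

SET enable_seqscan = off;
EXPLAIN (ANALYZE) SELECT DISTINCT curve_id FROM prodhistory;
RESET enable_seqscan;

-- If the index scan really is cheaper, nudge the cost model, e.g.:
SET random_page_cost = 1.5;        -- default 4.0; lower suits SSDs or well-cached data
SET cpu_index_tuple_cost = 0.003;  -- default 0.005
SET cpu_tuple_cost = 0.02;         -- default 0.01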

PostgreSQL "index only scans" aren't always as cheap as you might think.
The reason is that each row needs to be checked as to whether it is visible to the MVCC snapshot or not.
Whether this is cheap or not depends on the table's visibility map.
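A quick way to gauge that (my own addition, not from the answer): pg_class records how many of a table's pages are currently marked all-visible, which is what an index only scan relies on to skip heap fetches.

SELECT relname, relpages, relallvisible
FROM pg_class
WHERE relname = 'my_table';   -- 'my_table' is the placeholder table from the plan below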
If you force an index only scan (as per laurenz-albe's answer):
SET enable_seqscan=off;
Then run your query with:
EXPLAIN (ANALYZE ON, BUFFERS ON)
If the query plan output shows "Heap Fetches", as below, it means the table's actual row data is being accessed, not just the index:
Index Only Scan using my_index on my_table (cost=0.42..17792.01 rows=595195 width=20) (actual time=37.942..2330.737 rows=539105 loops=1)
Heap Fetches: 234180
The official documentation describes this here:
https://www.postgresql.org/docs/current/indexes-index-only-scans.html
You may be able to resolve this by altering the way the table is updated, or by adjusting your autovacuum settings.
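For instance, a minimal sketch of the vacuum side of that (my_table is the placeholder table from the plan above; the scale factor is illustrative):

-- A manual VACUUM refreshes the visibility map, so later index-only scans
-- can skip heap fetches for all-visible pages
VACUUM (VERBOSE) my_table;

-- Or make autovacuum visit this table more aggressively
ALTER TABLE my_table SET (autovacuum_vacuum_scale_factor = 0.02);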

Related

SQL - can search performance depend on the number of columns?

I have something like the following table
CREATE TABLE mytable
(
    id serial NOT NULL,
    search_col int4 NULL,
    a1 varchar NULL,
    a2 varchar NULL,
    ...
    a50 varchar NULL,
    CONSTRAINT mytable_pkey PRIMARY KEY (id)
);
CREATE INDEX search_col_idx ON mytable USING btree (search_col);
This table has approximately 5 million rows and it takes about 15 seconds to perform a search operation like
select *
from mytable
where search_col = 83310
It is crucial for me to increase performance, but even clustering the table on search_col did not bring a major benefit.
However, I tried the following:
create table test as (select id, search_col, a1 from mytable);
A search on this table, which has the same number of rows as the original one, takes approximately 0.2 seconds. Why is that, and how can I use this for what I need?
Index Scan using search_col_idx on mytable (cost=0.43..2713.83 rows=10994 width=32802) (actual time=0.021..13.015 rows=12018 loops=1)
Seq Scan on test (cost=0.00..95729.46 rows=12347 width=19) (actual time=0.246..519.501 rows=12018 loops=1)
The result of DBeaver's Execution Plan
|Node Type|Entity|Cost|Rows|Time|Condition|
|Index Scan|mytable|0.43 - 3712.86|12018|13.141|(search_col = 83310)|
Execution Plan from psql:
Index Scan using mytable_search_col_idx on mytable (cost=0.43..3712.86 rows=15053 width=32032) (actual time=0.015..13.889 rows=12018 loops=1)
Index Cond: (search_col = 83310)
Planning time: 0.640 ms
Execution time: 23.910 ms
(4 rows)
One way that the columns would impact the timing would be if the columns were large. Really large.
In most cases, a row resides on a single data page. The index points to the page and the size of the row has little impact on the timing, because the timing is dominated by searching the index and fetching the row.
However, if the columns are really large, then that can require reading many more bytes from disk, which takes more time.
That said, another possibility is that the statistics are out-of-date and the index isn't being used on the first query.
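Two quick checks along those lines (a sketch, not part of the original answer; mytable as above):

-- Refresh planner statistics in case they are stale
ANALYZE mytable;

-- Compare heap size with total size (TOAST included) to see how wide the rows really are
SELECT pg_size_pretty(pg_relation_size('mytable'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('mytable')) AS total_size;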

Why is PostgreSQL count so slow even with an Index Only Scan?

I have a simple count query that can use an Index Only Scan, but it still takes so long in PostgreSQL!
I have a cars table with two relevant columns, type bigint and active boolean, and a multi-column index on those columns:
CREATE TABLE cars
(
    id BIGSERIAL NOT NULL
        CONSTRAINT cars_pkey PRIMARY KEY,
    type BIGINT NOT NULL,
    name VARCHAR(500) NOT NULL,
    active BOOLEAN DEFAULT TRUE NOT NULL,
    created_at TIMESTAMP(0) WITH TIME ZONE DEFAULT NOW(),
    updated_at TIMESTAMP(0) WITH TIME ZONE DEFAULT NOW(),
    deleted_at TIMESTAMP(0) WITH TIME ZONE
);

CREATE INDEX cars_type_active_index ON cars (type, active);
I inserted some test data: 950k records in total, of which type=1 accounts for 600k records.
INSERT INTO cars (type, name) (SELECT 1, 'car-name' FROM generate_series(1,600000));
INSERT INTO cars (type, name) (SELECT 2, 'car-name' FROM generate_series(1,200000));
INSERT INTO cars (type, name) (SELECT 3, 'car-name' FROM generate_series(1,100000));
INSERT INTO cars (type, name) (SELECT 4, 'car-name' FROM generate_series(1,50000));
Let's run VACUUM ANALYZE and force PostgreSQL to use an Index Only Scan:
VACUUM ANALYSE;
SET enable_seqscan = OFF;
SET enable_bitmapscan = OFF;
OK, here is a simple query on type and active:
EXPLAIN (VERBOSE, BUFFERS, ANALYSE)
SELECT count(*)
FROM cars
WHERE type = 1 AND active = true;
Result:
Aggregate  (cost=24805.70..24805.71 rows=1 width=0) (actual time=4460.915..4460.918 rows=1 loops=1)
  Output: count(*)
  Buffers: shared hit=2806
  ->  Index Only Scan using cars_type_active_index on public.cars  (cost=0.42..23304.23 rows=600590 width=0) (actual time=0.051..2257.832 rows=600000 loops=1)
        Output: type, active
        Index Cond: ((cars.type = 1) AND (cars.active = true))
        Filter: cars.active
        Heap Fetches: 0
        Buffers: shared hit=2806
Planning time: 0.213 ms
Execution time: 4461.002 ms
(11 rows)
Look at the EXPLAIN result:
It used an Index Only Scan. With an index only scan, depending on the visibility map, PostgreSQL sometimes needs to fetch the table heap to check the visibility of a tuple. But I already ran VACUUM ANALYZE, and you can see Heap Fetches: 0, so reading the index alone is enough to answer this query.
The index is quite small and fits entirely in the buffer cache (Buffers: shared hit=2806), so PostgreSQL does not need to fetch pages from disk.
Given all that, I can't understand why PostgreSQL takes that long (4.5s) to answer the query. 1M records is not a big number of records, everything is already cached in memory, and the data in the index is visible, so it does not need to fetch the heap.
PostgreSQL 9.5.10 on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit
I tested it on Docker 17.09.1-ce on a MacBook Pro 2015.
I am still new to PostgreSQL and trying to map my knowledge to real cases.
Thanks so much.
It seems like I found the reason: it's not a PostgreSQL problem, it's because I was running it in Docker. When I run it directly on my Mac, the time is around 100ms, which is fast enough.
Another thing I figured out is why PostgreSQL still prefers a seq scan over an index only scan (which is why I had to disable seq_scan and bitmapscan in my test):
The table is not much bigger than the index. If I add more columns to the table, or the columns get longer, the table grows relative to the index and there is a better chance the index will be used.
random_page_cost defaults to 4; my disk is quite fast, so I can set it to 1-2, which helps the planner estimate costs more accurately.
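A minimal sketch of trying that setting and persisting it if it helps (the value is illustrative):

-- Session-level experiment first
SET random_page_cost = 1.5;
EXPLAIN (ANALYZE) SELECT count(*) FROM cars WHERE type = 1 AND active = true;

-- Persist it once it proves out (requires a configuration reload)
ALTER SYSTEM SET random_page_cost = 1.5;
SELECT pg_reload_conf();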

Behavior of WHERE clause on a primary key field

select * from table where username = 'johndoe'
In Postgres, if username is not a primary key, I know it will iterate through all the records.
But if it is a primary key field, will the above SQL statement iterate through the entire table, or terminate as soon as username is matched? In other words, does WHERE act differently depending on whether it is running on a primary key column?
Primary keys (and all indexed columns for that matter) take advantage of indexes when those column(s) are used as filter predicates, like WHERE and JOIN...ON clauses.
As a real world example, my application has a table called Log_Games, which is a table with millions of rows, ID as the primary key, and a number of other non-indexed columns such as ParsedAt. Compare the below:
INDEXED QUERY
EXPLAIN ANALYZE
SELECT *
FROM "Log_Games"
WHERE "ID" = 792046
INDEXED QUERY PLAN
Index Scan using "Log_Games_pkey" on "Log_Games" (cost=0.43..8.45 rows=1 width=4190) (actual time=0.024..0.024 rows=1 loops=1)
Index Cond: ("ID" = 792046)
Planning time: 1.059 ms
Execution time: 0.066 ms
NON-INDEXED QUERY
EXPLAIN ANALYZE
SELECT *
FROM "Log_Games"
WHERE "ParsedAt" = '2015-05-07 07:31:24+00'
NON-INDEXED QUERY PLAN
Seq Scan on "Log_Games" (cost=0.00..141377.34 rows=18 width=4190) (actual time=0.013..793.094 rows=1 loops=1)
Filter: ("ParsedAt" = '2015-05-07 07:31:24+00'::timestamp with time zone)
Rows Removed by Filter: 1924676
Planning time: 0.794 ms
Execution time: 793.135 ms
The query with the indexed clause uses the index Log_Games_pkey, resulting in a query that executes in 0.066ms. The query with the non-indexed clause reverts to a sequential scan, which means it goes from the start to the finish of the table to see which rows match, an operation that causes the execution time to blow out to 793.135ms.
There are plenty of good resources around the web that can help you read execution plans and decide when they may need supporting indexes. A good place to start is the PostgreSQL documentation:
https://www.postgresql.org/docs/9.6/static/using-explain.html
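If queries filtering on "ParsedAt" were common, the obvious fix would be a supporting index along these lines (hypothetical, not part of the original answer):

-- Lets the planner use an index scan instead of a sequential scan for ParsedAt lookups
CREATE INDEX "Log_Games_ParsedAt_idx" ON "Log_Games" ("ParsedAt");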

Why is PostgreSQL not using *just* the covering index in this query depending on the contents of its IN() clause?

I have a table with a covering index that should respond to a query using just the index, without checking the table at all. Postgres does, in fact, do that, if the IN() clause has 1 or a few elements in it. However, if the IN clause has lots of elements, it seems like it's doing the search on the index, and then going to the table and re-checking the conditions...
I can't figure out why Postgres would do that. It can either serve the query straight from the index or it can't, why would it go to the table if it (in theory) doesn't have anything else to add?
The table:
CREATE TABLE phone_numbers
(
    id serial NOT NULL,
    phone_number character varying,
    hashed_phone_number character varying,
    user_id integer,
    created_at timestamp without time zone,
    updated_at timestamp without time zone,
    ghost boolean DEFAULT false,
    CONSTRAINT phone_numbers_pkey PRIMARY KEY (id)
)
WITH (
    OIDS=FALSE
);

CREATE INDEX index_phone_numbers_covering_hashed_ghost_and_user
    ON phone_numbers
    USING btree
    (hashed_phone_number COLLATE pg_catalog."default", ghost, user_id);
The query I'm running is:
SELECT "phone_numbers"."user_id"
FROM "phone_numbers"
WHERE "phone_numbers"."hashed_phone_number" IN (*several numbers*)
AND "phone_numbers"."ghost" = 'f'
As you can see, the index has all the fields it needs to reply to that query.
And if I have only one or a few numbers in the IN clause, it does:
1 number:
Index Scan using index_phone_numbers_on_hashed_phone_number on phone_numbers (cost=0.41..8.43 rows=1 width=4)
Index Cond: ((hashed_phone_number)::text = 'bebd43a6eb29b2fda3bcb63dcc7ffaf5433e78660ccd1a495c1180a3eaaf6b6a'::text)
Filter: (NOT ghost)"
3 numbers:
Index Only Scan using index_phone_numbers_covering_hashed_ghost_and_user on phone_numbers (cost=0.42..17.29 rows=1 width=4)
Index Cond: ((hashed_phone_number = ANY ('{8228a8116f1fdb12e243102cb85ecd859ebf7873d9332dce5f1343a481ec72e8,43ddeebdca2ea829d468d5debc84d475c8322cf4bf6edca286c918b04216387e,1578bf773eb6eb8a9b57a130922a28c9c91f1bda67202ef5936b39630ca4cfe4}'::text[])) AND (...)
Filter: (NOT ghost)"
However, when I have a lot of numbers in the IN clause, Postgres is using the Index, but then hitting the table, and I don't know why:
Bitmap Heap Scan on phone_numbers (cost=926.59..1255.81 rows=106 width=4)
Recheck Cond: ((hashed_phone_number)::text = ANY ('{b6459ce58f21d99c462b132cce7adc9ea947fa522a3849321e9fb65893006a5e,8228a8116f1fdb12e243102cb85ecd859ebf7873d9332dce5f1343a481ec72e8,ab3554acc1f287bb2e22ff20bb855e19a4177ef552676689d217dbb2a1a6177b,7ec9f58 (...)
Filter: (NOT ghost)
-> Bitmap Index Scan on index_phone_numbers_covering_hashed_ghost_and_user (cost=0.00..926.56 rows=106 width=0)
Index Cond: (((hashed_phone_number)::text = ANY ('{b6459ce58f21d99c462b132cce7adc9ea947fa522a3849321e9fb65893006a5e,8228a8116f1fdb12e243102cb85ecd859ebf7873d9332dce5f1343a481ec72e8,ab3554acc1f287bb2e22ff20bb855e19a4177ef552676689d217dbb2a1a6177b,7e (...)
This is currently making this query, which is looking for 250 records in a table with 50k total rows, about twice as slow as a similar query on another table, which looks for 250 records in a table with 5 million rows, which doesn't make much sense.
Any ideas what could be happening, and whether I can do anything to improve this?
UPDATE: Changing the order of the columns in the covering index to have ghost first and then hashed_phone_number also doesn't solve it:
Bitmap Heap Scan on phone_numbers (cost=926.59..1255.81 rows=106 width=4)
Recheck Cond: ((hashed_phone_number)::text = ANY ('{b6459ce58f21d99c462b132cce7adc9ea947fa522a3849321e9fb65893006a5e,8228a8116f1fdb12e243102cb85ecd859ebf7873d9332dce5f1343a481ec72e8,ab3554acc1f287bb2e22ff20bb855e19a4177ef552676689d217dbb2a1a6177b,7ec9f58 (...)
Filter: (NOT ghost)
-> Bitmap Index Scan on index_phone_numbers_covering_ghost_hashed_and_user (cost=0.00..926.56 rows=106 width=0)
Index Cond: ((ghost = false) AND ((hashed_phone_number)::text = ANY ('{b6459ce58f21d99c462b132cce7adc9ea947fa522a3849321e9fb65893006a5e,8228a8116f1fdb12e243102cb85ecd859ebf7873d9332dce5f1343a481ec72e8,ab3554acc1f287bb2e22ff20bb855e19a4177ef55267668 (...)
The choice of indexes is based on what the optimizer says is the best solution for the query. Postgres is trying really hard with your index, but it is not the best index for the query.
The best index has ghost first:
CREATE INDEX index_phone_numbers_covering_hashed_ghost_and_user
    ON phone_numbers
    USING btree
    (ghost, hashed_phone_number COLLATE pg_catalog."default", user_id);
I happen to think that MySQL documentation does a good job of explaining how composite indexes are used.
Essentially, what is happening is that Postgres needs to do an index seek for every element of the IN list. This may be compounded by the use of strings -- because collations/encodings affect the comparisons. Eventually, Postgres decides that other approaches are more efficient. If you put ghost first, then it will just jump to the right part of the index and find the rows it needs there.
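A further option worth testing (my own suggestion, not part of the answer above): since the query always filters on ghost = 'f', a partial index drops ghost from the key entirely and keeps the index smaller.

-- Hypothetical partial index: only non-ghost rows are indexed
CREATE INDEX index_phone_numbers_hashed_user_not_ghost
    ON phone_numbers (hashed_phone_number COLLATE pg_catalog."default", user_id)
    WHERE NOT ghost;

Whether the planner can then answer the query with an index only scan still depends on the table's visibility map, as discussed in the earlier answers.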

Postgres: huge table with (delayed) read and write access

I have a huge table (currently ~3mil rows, expected to increase by a factor of 1000) with lots of inserts every second. The table is never updated.
Now I have to run queries on that table, which are pretty slow (as expected). These queries do not have to be 100% accurate; it is OK if the result is a day old (but not older).
There are currently two indexes on two single integer columns, and I would have to add two more indexes (integer and timestamp columns) to speed up my queries.
The ideas I had so far:
Add the two missing indexes to the table
Keep no indexes on the huge table at all, copy the content (just the important rows) to a second table as a daily task, then create the indexes on the second table and run the queries on that table
Partitioning the huge table
Master/Slave setup (writing to the master and reading from the slaves).
What option is the best in terms of performance? Do you have any other suggestions?
EDIT:
Here is the table (I have marked the foreign keys and prettified the query a bit):
CREATE TABLE client_log
(
    id serial NOT NULL,
    logid integer NOT NULL,
    client_id integer NOT NULL,          -- FOREIGN KEY
    client_version varchar(16),
    sessionid varchar(100) NOT NULL,
    created timestamptz NOT NULL,
    filename varchar(256),
    funcname varchar(256),
    linenum integer,
    comment text,
    domain varchar(128),
    code integer,
    latitude float8,
    longitude float8,
    created_on_server timestamptz NOT NULL,
    message_id integer,                  -- FOREIGN KEY
    app_id integer NOT NULL,             -- FOREIGN KEY
    result integer
);
CREATE INDEX client_log_code_idx ON client_log USING btree (code);
CREATE INDEX client_log_created_idx ON client_log USING btree (created);
CREATE INDEX clients_clientlog_app_id ON client_log USING btree (app_id);
CREATE INDEX clients_clientlog_client_id ON client_log USING btree (client_id);
CREATE UNIQUE INDEX clients_clientlog_logid_client_id_key ON client_log USING btree (logid, client_id);
CREATE INDEX clients_clientlog_message_id ON client_log USING btree (message_id);
And an example query:
SELECT
    client_log.comment,
    COUNT(client_log.comment) AS count
FROM
    client_log
WHERE
    client_log.app_id = 33 AND
    client_log.code = 3 AND
    client_log.client_id IN (SELECT client.id
                             FROM client
                             WHERE client.app_id = 33 AND
                                   client."replaced_id" IS NULL)
GROUP BY client_log.comment
ORDER BY count DESC;
client_log_code_idx is the index needed for the query above. There are other queries that need the client_log_created_idx index.
And the query plan:
Sort  (cost=2844.72..2844.75 rows=11 width=242) (actual time=4684.113..4684.180 rows=70 loops=1)
  Sort Key: (count(client_log.comment))
  Sort Method: quicksort  Memory: 32kB
  ->  HashAggregate  (cost=2844.42..2844.53 rows=11 width=242) (actual time=4683.830..4683.907 rows=70 loops=1)
        ->  Hash Semi Join  (cost=1358.52..2844.32 rows=20 width=242) (actual time=303.515..4681.211 rows=1202 loops=1)
              Hash Cond: (client_log.client_id = client.id)
              ->  Bitmap Heap Scan on client_log  (cost=1108.02..2592.57 rows=387 width=246) (actual time=113.599..4607.568 rows=6962 loops=1)
                    Recheck Cond: ((app_id = 33) AND (code = 3))
                    ->  BitmapAnd  (cost=1108.02..1108.02 rows=387 width=0) (actual time=104.955..104.955 rows=0 loops=1)
                          ->  Bitmap Index Scan on clients_clientlog_app_id  (cost=0.00..469.96 rows=25271 width=0) (actual time=58.315..58.315 rows=40662 loops=1)
                                Index Cond: (app_id = 33)
                          ->  Bitmap Index Scan on client_log_code_idx  (cost=0.00..637.61 rows=34291 width=0) (actual time=45.093..45.093 rows=36310 loops=1)
                                Index Cond: (code = 3)
              ->  Hash  (cost=248.06..248.06 rows=196 width=4) (actual time=61.069..61.069 rows=105 loops=1)
                    Buckets: 1024  Batches: 1  Memory Usage: 4kB
                    ->  Bitmap Heap Scan on client  (cost=10.95..248.06 rows=196 width=4) (actual time=27.843..60.867 rows=105 loops=1)
                          Recheck Cond: (app_id = 33)
                          Filter: (replaced_id IS NULL)
                          Rows Removed by Filter: 271
                          ->  Bitmap Index Scan on clients_client_app_id  (cost=0.00..10.90 rows=349 width=0) (actual time=15.144..15.144 rows=380 loops=1)
                                Index Cond: (app_id = 33)
Total runtime: 4684.843 ms
In general, in a system where time related data is constantly being inserted into the database, I'd recommend partitioning according to time.
This is not just because it might improve query times, but because otherwise it makes managing the data difficult. However big your hardware is, it will have a limit to its capacity, so you will eventually have to start removing rows that are older than a certain date. The rate at which you remove the rows will have to be equal to the rate they are going in.
If you just have one big table, and you remove old rows using DELETE, you will leave a lot of dead tuples that need to be vacuumed out. The autovacuum will be running constantly, using up valuable disk IO.
On the other hand, if you partition according to time, then removing out of date data is as easy as dropping the relevant child table.
In terms of indexes - the indexes are not inherited, so you can save on creating the indexes until after the partition is loaded. You could have a partition size of 1 day in your use case. This means the indexes do not need to be constantly updated as data is being inserted. It will be more practical to have additional indexes as needed to make your queries perform well.
Your sample query does not filter on the 'created' time field, but you say other queries do. If you partition by time, and are careful about how you construct your queries, constraint exclusion will kick in and it will only include the specific partitions that are relevant to the query.
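A minimal sketch of what that could look like with inheritance-style child tables, matching the answer's approach (the names, the example date, and the one-day partition size are illustrative):

-- One child table per day; the CHECK constraint enables constraint exclusion
CREATE TABLE client_log_2015_05_07 (
    CHECK (created >= '2015-05-07' AND created < '2015-05-08')
) INHERITS (client_log);

-- Index the child only after it has been loaded, as the answer suggests
CREATE INDEX client_log_2015_05_07_code_idx ON client_log_2015_05_07 (code);

-- Out-of-date data is then removed by dropping the whole child
DROP TABLE client_log_2015_05_07;

-- Make sure constraint exclusion is enabled so queries filtering on "created"
-- skip irrelevant children (routing inserts to the right child still needs a
-- trigger, omitted here)
SET constraint_exclusion = partition;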
Besides partitioning, I would consider splitting the table into many tables, aka sharding.
I don't have the full picture of your domain but these are some suggestions:
Each client gets their own table in their own schema (or a set of clients shares a schema, depending on how many clients you have and how many new clients you expect to get).
create table client1.log(id, logid,.., code, app_id);
create table client2.log(id, logid,.., code, app_id);
Splitting the table like this should also reduce the contention on inserts.
The table can be split even more. Within each client schema you can also split the table per "code" or "app_id" or something else that makes sense for you. This might be overdoing it, but it is easy to implement if the number of "code" and/or "app_id" values does not change often.
Do keep the code/app_id columns even in the new smaller tables but do put a constraint on the column so that no other type of log record can be inserted. The constraint will also help the optimiser when searching, see this example:
create schema client1;
set search_path = 'client1';
create table error_log(id serial, code text check(code ='error'));
create table warning_log(id serial, code text check(code ='warning'));
create table message_log(id serial, code text check(code ='message'));
To get the full picture (all rows) of a client you can use a view on top of all tables:
create view client_log as
select * from error_log
union all
select * from warning_log
union all
select * from message_log;
The check constraints should allow the optimiser to only search the table where the "code" can exist.
explain
select * from client_log where code = 'error';
-- Output
Append  (cost=0.00..25.38 rows=6 width=36)
  ->  Seq Scan on error_log  (cost=0.00..25.38 rows=6 width=36)
        Filter: (code = 'error'::text)