I'm trying to fetch the most recent row in a table. I have a simple timestamp column created_at which is indexed. When I query ORDER BY created_at DESC LIMIT 1, it takes far longer than I think it should (about 50 ms on my machine, on 36k rows).
EXPLAIN shows that it uses a backward index scan, but I confirmed that changing the index to (created_at DESC) does not change the cost reported by the query planner for a simple index scan.
How can I optimize this use case?
Running PostgreSQL 9.2.4.
Edit:
# EXPLAIN SELECT * FROM articles ORDER BY created_at DESC LIMIT 1;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.58 rows=1 width=1752)
-> Index Scan Backward using index_articles_on_created_at on articles (cost=0.00..20667.37 rows=35696 width=1752)
(2 rows)
Assuming we are dealing with a big table, a partial index might help:
CREATE INDEX tbl_created_recently_idx ON tbl (created_at DESC)
WHERE created_at > '2013-09-15 0:0'::timestamp;
As you already found out: descending or ascending hardly matters here. Postgres can scan backwards at almost the same speed (exceptions apply with multi-column indices).
Query to use this index:
SELECT * FROM tbl
WHERE created_at > '2013-09-15 0:0'::timestamp -- matches index
ORDER BY created_at DESC
LIMIT 1;
The point here is to make the index much smaller, so it should be easier to cache and maintain.
You need to pick a timestamp that is guaranteed to be smaller than the most recent one.
You should recreate the index from time to time to cut off old data.
The condition needs to be IMMUTABLE.
So the one-time effect deteriorates over time. The specific problem is the hard coded condition:
WHERE created_at > '2013-09-15 0:0'::timestamp
Automate
You could update the index and your queries manually from time to time. Or you automate it with the help of a function like this one:
CREATE OR REPLACE FUNCTION f_min_ts()
RETURNS timestamp LANGUAGE sql IMMUTABLE AS
$$SELECT '2013-09-15 0:0'::timestamp$$;
Index:
CREATE INDEX tbl_created_recently_idx ON tbl (created_at DESC)
WHERE created_at > f_min_ts();
Query:
SELECT * FROM tbl
WHERE created_at > f_min_ts()
ORDER BY created_at DESC
LIMIT 1;
Automate recreation with a cron job or some trigger-based event. Your queries can stay the same now. But after changing the function, you need to recreate all indexes that use it. Just drop and create each one.
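For example, a periodic maintenance step run by cron might look like this (the new cutoff value is illustrative):
CREATE OR REPLACE FUNCTION f_min_ts()
RETURNS timestamp LANGUAGE sql IMMUTABLE AS
$$SELECT '2014-09-15 0:0'::timestamp$$;  -- new, more recent cutoff
DROP INDEX IF EXISTS tbl_created_recently_idx;
CREATE INDEX tbl_created_recently_idx ON tbl (created_at DESC)
WHERE created_at > f_min_ts();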
First ..
... test whether you are actually hitting the bottleneck with this.
Try whether a simple DROP INDEX ...; CREATE INDEX ... does the job. If so, your index may have been bloated, and your autovacuum settings may be off.
Or try VACUUM FULL ANALYZE to get your whole table plus indices in pristine condition and check again.
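For example, using the index name from the EXPLAIN output above (assuming a plain index on created_at):
DROP INDEX index_articles_on_created_at;
CREATE INDEX index_articles_on_created_at ON articles (created_at);
-- or rebuild the whole table plus its indexes in one go (takes an exclusive lock):
VACUUM FULL ANALYZE articles;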
Other options include the usual general performance tuning and covering indexes, depending on what you actually retrieve from the table.
I am trying to speed up a delete query that appears to be very slow when compared to an identical select query:
Slow delete query:
https://explain.depesz.com/s/kkWJ
delete from processed.token_utxo
where token_utxo.output_tx_time >= (select '2022-03-01T00:00:00+00:00'::timestamp with time zone)
and token_utxo.output_tx_time < (select '2022-03-02T00:00:00+00:00'::timestamp with time zone)
and not exists (
select 1
from public.ma_tx_out
where ma_tx_out.id = token_utxo.id
)
Fast select query: https://explain.depesz.com/s/Bp8q
select * from processed.token_utxo
where token_utxo.output_tx_time >= (select '2022-03-01T00:00:00+00:00'::timestamp with time zone)
and token_utxo.output_tx_time < (select '2022-03-02T00:00:00+00:00'::timestamp with time zone)
and not exists (
select 1
from public.ma_tx_out
where ma_tx_out.id = token_utxo.id
)
Table reference:
create table processed.token_utxo (
id bigint,
tx_out_id bigint,
token_id bigint,
output_tx_id bigint,
output_tx_index int,
output_tx_time timestamp,
input_tx_id bigint,
input_tx_time timestamp,
address varchar,
address_has_script boolean,
payment_cred bytea,
redeemer_id bigint,
stake_address_id bigint,
quantity numeric,
primary key (id)
);
create index token_utxo_output_tx_id on processed.token_utxo using btree (output_tx_id);
create index token_utxo_input_tx_id on processed.token_utxo using btree (input_tx_id);
create index token_utxo_output_tx_time on processed.token_utxo using btree (output_tx_time);
create index token_utxo_input_tx_time on processed.token_utxo using btree (input_tx_time);
create index token_utxo_address on processed.token_utxo using btree (address);
create index token_utxo_token_id on processed.token_utxo using btree (token_id);
Version: PostgreSQL 13.6 on x86_64-pc-linux-gnu, compiled by Debian clang version 12.0.1, 64-bit
Postgres chooses different query plans, which results in drastically different performance. I'm not familiar enough with Postgres to understand why it makes this decision. Hoping there is a simple way to guide it towards a better plan here.
Why it comes up with different plans is relatively easy to explain. First, a DELETE cannot use parallel queries, so the plan that is believed to be more parallel-friendly is favored more by the SELECT than by the DELETE. Maybe that restriction will be eased in some future version. Second, the DELETE cannot use an index-only scan on ma_tx_out_pkey like the pure SELECT can; it would use an index scan instead. This too makes the faster plan appear less fast for the DELETE than it does for the SELECT. These two factors combined are apparently enough to make it switch plans. We have already seen evidence of the first factor. You can probably verify the second factor by setting enable_seqscan to off, seeing what plan the DELETE chooses then, and, if it is the nested loop, verifying that the last index scan is not index-only.
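A quick way to check that second factor, using the query from the question (a plain EXPLAIN does not execute the DELETE):
SET enable_seqscan = off;
EXPLAIN
delete from processed.token_utxo
where token_utxo.output_tx_time >= (select '2022-03-01T00:00:00+00:00'::timestamp with time zone)
and token_utxo.output_tx_time < (select '2022-03-02T00:00:00+00:00'::timestamp with time zone)
and not exists (
select 1
from public.ma_tx_out
where ma_tx_out.id = token_utxo.id
);
RESET enable_seqscan;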
But of course the only reason those factors can make the decision between plans differ is because the plan estimates were so close together in the first place, despite being so different in actual performance. So what explains that false closeness? That is harder to determine with the info we have (it would be better if you had done EXPLAIN (ANALYZE, BUFFERS) with track_io_timing turned on).
One possibility is that the difference in actual performance is illusory. Maybe the nested loop is so fast only because all the data it needs is in memory, and the only reason for that is that you executed the same query repeatedly with the same parameters as part of your testing. Is it still so fast if you change the timestamp parameters, or clear both the PostgreSQL buffers and the file cache between runs?
Another possibility is that your system is just poorly tuned. For example, if your data is on SSD, then the default setting of random_page_cost is probably much too high. 1.1 might be a more reasonable setting than 4.
Finally, your setting of work_mem is probably way too low. That results in the hash using an extravagant number of batches: 8192. How much this affects performance is hard to predict, as it depends on your hardware, your kernel, your filesystem, etc. (which is maybe why the planner does not try to take it into account). It is pretty easy to test: you can increase the setting of work_mem locally (in your session) and see if it changes the speed.
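For example (the value is illustrative; note that EXPLAIN ANALYZE actually executes the DELETE, so wrap it in a transaction and roll back):
BEGIN;
SET LOCAL work_mem = '256MB';
EXPLAIN (ANALYZE, BUFFERS)
delete from processed.token_utxo
where token_utxo.output_tx_time >= (select '2022-03-01T00:00:00+00:00'::timestamp with time zone)
and token_utxo.output_tx_time < (select '2022-03-02T00:00:00+00:00'::timestamp with time zone)
and not exists (select 1 from public.ma_tx_out where ma_tx_out.id = token_utxo.id);
ROLLBACK;  -- undo the rows deleted by EXPLAIN ANALYZE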
Much of this analysis is possible only based on the fact that your delete doesn't actually find any rows to delete. If it were deleting rows, that would make the situation far more complex.
I am not exactly sure what triggers the switch of query plan between SELECT and DELETE, but I do know this: the subqueries returning a constant value are actively unhelpful. Use instead:
SELECT *
FROM processed.token_utxo t
WHERE t.output_tx_time >= '2022-03-01T00:00:00+00:00'::timestamptz -- no subquery
AND t.output_tx_time < '2022-03-02T00:00:00+00:00'::timestamptz -- no subquery
AND NOT EXISTS (SELECT FROM public.ma_tx_out m WHERE m.id = t.id);
DELETE FROM processed.token_utxo t
WHERE t.output_tx_time >= '2022-03-01T00:00:00+00:00'::timestamptz
AND t.output_tx_time < '2022-03-02T00:00:00+00:00'::timestamptz
AND NOT EXISTS (SELECT FROM public.ma_tx_out m WHERE m.id = t.id);
As you can see in the query plan, Postgres comes up with a generic plan for yet unknown timestamps:
Index Cond: ((output_tx_time >= $0) AND (output_tx_time < $1))
My fixed query allows Postgres to devise a plan for the actual given constant values. If your column statistics are up to date, this allows for more optimization according to the number of rows expected to qualify for that time interval. The query plan will change to:
Index Cond: ((output_tx_time >= '2022-03-01T00:00:00+00:00'::timestamp with time zone) AND (output_tx_time < '2022-03-02T00:00:00+00:00'::timestamp with time zone))
And you will see different row estimates, that may result in a different query plan.
Of course, DELETE cannot have the exact same plan. Besides the obvious difference that DELETE takes write locks and writes to the rows it removes, it also cannot (currently, up to at least Postgres 15) use parallelism, and it cannot use index-only scans. See:
Delete using another table with index
So you'll see an index scan where SELECT might use an index-only scan.
I'm using will_paginate to get the top 10-20 rows from a table, but I've found that the simple query it produces is scanning the entire table.
sqlite> explain query plan
SELECT "deals".* FROM "deals" ORDER BY created_at DESC LIMIT 10 OFFSET 0;
0|0|0|SCAN TABLE deals (~1000000 rows)
0|0|0|USE TEMP B-TREE FOR ORDER BY
If I was using a WITH clause and indexes, I'm sure it would be different, but this is just displaying the newest posts on the top page of the site. I did find a post or two on here that suggested adding indexes anyway, but I don't see how it can help with the table scan.
sqlite> explain query plan
SELECT deals.id FROM deals ORDER BY id DESC LIMIT 10 OFFSET 0;
0|0|0|SCAN TABLE deals USING INTEGER PRIMARY KEY (~1000000 rows)
It seems like a common use case, so how is it typically done efficiently?
The ORDER BY created_at DESC requires the database to search for the largest values in the entire table.
To speed up this search, you would need an index on the created_at column.
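For example (the index name is illustrative):
CREATE INDEX idx_deals_created_at ON deals(created_at);
With that index in place, EXPLAIN QUERY PLAN should show the deals table being scanned via the index rather than a full scan followed by a temporary B-tree sort, and the LIMIT can stop after the first 10 rows.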
I have a big report table. The Bitmap Heap Scan step takes more than 5 seconds.
Is there something I can do? I added columns to the table; will reindexing the index it uses help?
I do a union and sum on the data, so I don't return 500K records to the client.
I am using Postgres 9.1.
Here the explain:
Bitmap Heap Scan on foo_table (cost=24747.45..1339408.81 rows=473986 width=116) (actual time=422.210..5918.037 rows=495747 loops=1)
Recheck Cond: ((foo_id = 72) AND (date >= '2013-04-04 00:00:00'::timestamp without time zone) AND (date <= '2013-05-05 00:00:00'::timestamp without time zone))
Filter: ((foo)::text = 'foooooo'::text)
-> Bitmap Index Scan on foo_table_idx (cost=0.00..24628.96 rows=573023 width=0) (actual time=341.269..341.269 rows=723918 loops=1)
Query:
explain analyze
SELECT CAST(date as date) AS date, foo_id, ....
from foo_table
where foo_id = 72
and date >= '2013-04-04'
and date <= '2013-05-05'
and foo = 'foooooo'
Index def:
Index "public.foo_table_idx"
Column | Type
-------------+-----------------------------
foo_id | bigint
date | timestamp without time zone
btree, for table "public.external_channel_report"
Table:
foo is a text field with 4 different values.
foo_id is a bigint with currently 10K distinct values.
Create a composite index on (foo_id, foo, date) (in this order).
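A sketch of that index (the name is illustrative):
CREATE INDEX foo_table_foo_id_foo_date_idx ON foo_table (foo_id, foo, date);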
Note that if you select 500k records (and return them all to the client), this may take long.
Are you sure you need all 500k records on the client (rather than some kind of an aggregate or a LIMIT)?
Answer to comment
Do I need the WHERE columns in the same order as the index?
The order of expressions in the WHERE clause is completely irrelevant; SQL is not a procedural language.
Fix mistakes
The timestamp column should not be named "date" for several reasons. Obviously, it's a timestamp, not a date. But more importantly, date is a reserved word in all SQL standards and a type and function name in Postgres, and it shouldn't be used as an identifier.
You should provide proper information with your question, including a complete table definition and conclusive information about existing indexes. It might be a good idea to start by reading the chapter about indexes in the manual.
The WHERE conditions on the timestamp are most probably incorrect:
and date >= '2013-04-04'
and date <= '2013-05-05'
The upper border for a timestamp column should probably be excluded:
and date >= '2013-04-04'
and date < '2013-05-05'
Index
With the multicolumn index @Quassnoi provided, your query will be much faster, since all qualifying rows can be read from one continuous data block of the index. No row is read in vain (and later disqualified), as is the case now.
But 500k rows will still take some time. Normally you have to verify visibility and fetch additional columns from the table. An index-only scan might be an option in Postgres 9.2+.
The order of columns is best this way, because the rule of thumb is: columns for equality first — then for ranges. More explanation and links in this related answer on dba.SE.
CLUSTER / pg_repack
You could further speed things up by streamlining the physical order of rows in the table according to this index, so that a minimum of blocks have to be read from the table (if you don't have other requirements that stand against it).
If you can afford to lock your table exclusively for a few seconds (at off hours, for instance) to rewrite the table and order its rows according to the index:
CLUSTER foo_table USING idx_myindex_idx;
If concurrent use is a problem, consider pg_repack, which can do the same without an exclusive lock.
The effect: fewer blocks need to be read from the table and everything is pre-sorted. It's a one-time effect deteriorating over time, if you have writes on the table. So you would rerun it from time to time.
I copied and adapted the last chapter from this related answer on dba.SE.
I need help regarding the performance of a query in PostgreSQL. It seems to be related to the indexes.
This query:
Filters according to type
Orders by timestamp, ascending:
SELECT * FROM the_table WHERE type = 'some_type' ORDER BY timestamp LIMIT 20
The Indexes:
CREATE INDEX the_table_timestamp_index ON the_table(timestamp);
CREATE INDEX the_table_type_index ON the_table(type);
The values of the type field are only ever one of about 11 different strings.
The problem is that the query seems to execute in O(log n) time, taking only a few milliseconds most of the time, except for some values of type, for which it takes on the order of several minutes to run.
In these example queries, the first takes only a few milliseconds to run while the second takes over 30 minutes:
SELECT * FROM the_table WHERE type = 'goq' ORDER BY timestamp LIMIT 20
SELECT * FROM the_table WHERE type = 'csp' ORDER BY timestamp LIMIT 20
I suspect, with about 90% certainty, that the indexes we have are not the right ones. I think, after reading this similar question about index performance, that most likely what we need is a composite index, over type and timestamp.
The query plans that I have run are here:
Expected performance, type-specific index (i.e. new index with the type = 'csq' in the WHERE clause).
Slowest, problematic case, indexes as described above.
Fast case, same indexes as above.
Thanks very much for your help! Any pointers will be really appreciated!
The indexes can be used either for the WHERE clause or for the ORDER BY clause. With an index on the_table(type, timestamp), the same index can be used for both.
My guess is that Postgres is deciding which index to use based on statistics it gathers. When it uses the index for the WHERE clause and then attempts a sort, you get really bad performance.
This is just a guess, but it is worth creating the above index to see if that fixes the performance problems.
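The suggested index would look something like this (the name is illustrative):
CREATE INDEX the_table_type_timestamp_index ON the_table(type, timestamp);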
The EXPLAIN outputs all use the timestamp index. That is probably because the cardinality of the type column is too low, so a scan of an index on that column is about as expensive as a table scan.
The composite index to be created should be:
create index comp_index on the_table ("timestamp", type)
In that order.
I've just restructured my database to use partitioning in Postgres 8.2. Now I have a problem with query performance:
SELECT *
FROM my_table
WHERE time_stamp >= '2010-02-10' and time_stamp < '2010-02-11'
ORDER BY id DESC
LIMIT 100;
There are 45 million rows in the table. Prior to partitioning, this would use a reverse index scan and stop as soon as it hit the limit.
After partitioning (on time_stamp ranges), Postgres does a full index scan of the master table and the relevant partition and merges the results, sorts them, then applies the limit. This takes way too long.
I can fix it with:
SELECT * FROM (
SELECT *
FROM my_table_part_a
WHERE time_stamp >= '2010-02-10' and time_stamp < '2010-02-11'
ORDER BY id DESC
LIMIT 100) t
UNION ALL
SELECT * FROM (
SELECT *
FROM my_table_part_b
WHERE time_stamp >= '2010-02-10' and time_stamp < '2010-02-11'
ORDER BY id DESC
LIMIT 100) t
UNION ALL
... and so on ...
ORDER BY id DESC
LIMIT 100
This runs quickly. The partitions where the timestamps are out of range aren't even included in the query plan.
My question is: Is there some hint or syntax I can use in Postgres 8.2 to prevent the query-planner from scanning the full table but still using simple syntax that only refers to the master table?
Basically, can I avoid the pain of dynamically building the big UNION query over each partition that happens to be currently defined?
EDIT: I have constraint_exclusion enabled (thanks @Vinko Vrsalovic)
Have you tried constraint exclusion (section 5.9.4 in the document you've linked to)?
Constraint exclusion is a query optimization technique that improves performance for partitioned tables defined in the fashion described above. As an example:
SET constraint_exclusion = on;
SELECT count(*) FROM measurement WHERE logdate >= DATE '2006-01-01';
Without constraint exclusion, the above query would scan each of the partitions of the measurement table. With constraint exclusion enabled, the planner will examine the constraints of each partition and try to prove that the partition need not be scanned because it could not contain any rows meeting the query's WHERE clause. When the planner can prove this, it excludes the partition from the query plan. You can use the EXPLAIN command to show the difference between a plan with constraint_exclusion on and a plan with it off.
I had a similar problem that I was able to fix by casting the conditions in the WHERE clause.
E.g. (assuming the time_stamp column is of type timestamptz):
WHERE time_stamp >= '2010-02-10'::timestamptz and time_stamp < '2010-02-11'::timestamptz
Also, make sure the CHECK condition on the table is defined the same way...
E.g.:
CHECK (time_stamp < '2010-02-10'::timestamptz)
I had the same problem, and it boiled down to two reasons in my case:
I had an indexed column of type timestamp WITH time zone, but the partition constraint on this column used type timestamp WITHOUT time zone.
After fixing the constraints, an ANALYZE of all child tables was needed.
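A sketch of that kind of fix, with illustrative partition and constraint names (the point is that the constraint must use the same type, timestamptz, as the indexed column):
ALTER TABLE my_table_part_a DROP CONSTRAINT my_table_part_a_time_stamp_check;
ALTER TABLE my_table_part_a ADD CONSTRAINT my_table_part_a_time_stamp_check
CHECK (time_stamp >= '2010-02-10'::timestamptz AND time_stamp < '2010-02-11'::timestamptz);
ANALYZE my_table_part_a;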
Edit: another bit of knowledge: it's important to remember that constraint exclusion (which allows PG to skip scanning some tables based on your partitioning criteria) doesn't work with, to quote the manual, a "non-immutable function such as CURRENT_TIMESTAMP".
I had queries with CURRENT_DATE, and it was part of my problem.