Indexing a column used to ORDER BY with a constraint in PostgreSQL

I've got a modest table of about 10k rows that is often sorted by a column called 'name'. So, I added an index on this column. Now selects on it are fast:
EXPLAIN ANALYZE SELECT * FROM crm_venue ORDER BY name ASC LIMIT 10;
...query plan...
Limit (cost=0.00..1.22 rows=10 width=154) (actual time=0.029..0.065 rows=10 loops=1)
-> Index Scan using crm_venue_name on crm_venue (cost=0.00..1317.73 rows=10768 width=154) (actual time=0.026..0.050 rows=10 loops=1)
Total runtime: 0.130 ms
If I increase the LIMIT to 60 (which is roughly what I use in the application) the total runtime doesn't increase much further.
Since I'm using a "logical delete pattern" on this table, I only consider entries where delete_date is NULL. So this is a common select I make:
SELECT * FROM crm_venue WHERE delete_date IS NULL ORDER BY name ASC LIMIT 10;
To make that query snappy as well I put an index on the name column with a constraint like this:
CREATE INDEX name_delete_date_null ON crm_venue (name) WHERE delete_date IS NULL;
Now it's fast to do the ordering with the logical delete constraint:
EXPLAIN ANALYZE SELECT * FROM crm_venue WHERE delete_date IS NULL ORDER BY name ASC LIMIT 10;
Limit (cost=0.00..84.93 rows=10 width=154) (actual time=0.020..0.039 rows=10 loops=1)
-> Index Scan using name_delete_date_null on crm_venue (cost=0.00..458.62 rows=54 width=154) (actual time=0.018..0.033 rows=10 loops=1)
Total runtime: 0.076 ms
Awesome! But this is where I get myself into trouble. The application rarely calls for the first 10 rows. So, let's select some more rows:
EXPLAIN ANALYZE SELECT * FROM crm_venue WHERE delete_date IS NULL ORDER BY name ASC LIMIT 20;
Limit (cost=135.81..135.86 rows=20 width=154) (actual time=18.171..18.189 rows=20 loops=1)
-> Sort (cost=135.81..135.94 rows=54 width=154) (actual time=18.168..18.173 rows=20 loops=1)
Sort Key: name
Sort Method: top-N heapsort Memory: 21kB
-> Bitmap Heap Scan on crm_venue (cost=4.67..134.37 rows=54 width=154) (actual time=2.355..8.126 rows=10768 loops=1)
Recheck Cond: (delete_date IS NULL)
-> Bitmap Index Scan on crm_venue_delete_date_null_idx (cost=0.00..4.66 rows=54 width=0) (actual time=2.270..2.270 rows=10768 loops=1)
Index Cond: (delete_date IS NULL)
Total runtime: 18.278 ms
As you can see, the runtime goes from 0.1 ms to 18 ms!
Clearly there's a point where the ordering can no longer use the index to run the sort. I noticed that as I increase the LIMIT from 20 to higher numbers, it always takes around 20-25 ms.
Am I doing it wrong or is this a limitation of PostgreSQL? What is the best way to set up indexes for this type of query?

My guess would be that since, logically, the index is comprised of pointers to a set of rows on a set of data pages, if you fetch a page that is known to ONLY have "deleted" records on it, Postgres doesn't have to recheck the page once it is fetched in order to pick out only the matching records.
Therefore, it may be that when you do LIMIT 10 and order by the name, the first 10 that come back from the index are all on a data page (or pages) that are comprised only of deleted records. Since it knows that these pages are homogeneous, then it doesn't have to recheck them once it's fetched them from disk. Once you increase to LIMIT 20, at least one of the first 20 is on a mixed page with non-deleted records. This would then force the executor to recheck each record since it can't fetch data pages in less than 1 page increments from either the disk or the cache.
As an experiment, create an index on (delete_date, name) and issue CLUSTER crm_venue USING <your new index>. This should rebuild the table in the sort order of delete_date, then name. Just to be super-sure, you should then issue a REINDEX TABLE crm_venue. Now try the query again. Since all the NOT NULLs will be clustered together on disk, it may work faster with the larger LIMIT values.
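A minimal sketch of that experiment, with an illustrative index name; on reasonably recent Postgres versions the syntax is CLUSTER table USING index:
CREATE INDEX crm_venue_delete_date_name_idx ON crm_venue (delete_date, name);
CLUSTER crm_venue USING crm_venue_delete_date_name_idx;  -- rewrites the table in index order
REINDEX TABLE crm_venue;                                 -- rebuild all indexes on the table
ANALYZE crm_venue;                                       -- refresh planner statistics
EXPLAIN ANALYZE
SELECT * FROM crm_venue WHERE delete_date IS NULL ORDER BY name ASC LIMIT 20;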
Of course, this is all just off-the-cuff theory, so YMMV...

As you increase the number of rows requested, the index cardinality estimates change. I am not sure, but it could be that, because a greater number of rows is being fetched from the table, enough table blocks have to be read that those plus the index blocks make the index no longer worth using. This may be a miscalculation by the planner. Also, name (the indexed field) is not the field limiting the scope of the index, which may be wreaking havoc with the planner's math.
Things to try:
Increase the percentage of the table considered when building your statistics; your data may be skewed in such a way that the statistics are not picking up a truly representative sample.
Index all rows, not just the NULL rows, and see which is better. You could even try indexing WHERE delete_date IS NOT NULL.
Cluster based on an index on that field to reduce the data blocks required and turn it into a range scan (a sketch of these follows this list).
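A hedged sketch of those suggestions; the statistics target and the index name are illustrative, not recommendations:
ALTER TABLE crm_venue ALTER COLUMN delete_date SET STATISTICS 1000;  -- larger sample for this column
ANALYZE crm_venue;                                                   -- rebuild the statistics
CREATE INDEX crm_venue_name_full_idx ON crm_venue (name);            -- full (non-partial) index, for comparison
CLUSTER crm_venue USING crm_venue_name_full_idx;                     -- optional: cluster on it for range-scan-like access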
Nulls and indexes do not always play nice. Try another way:
alter table crm_venue add column delete_flag char(1);
update crm_venue set delete_flag = 'Y' where delete_date is not null;
update crm_venue set delete_flag = 'N' where delete_date is null;
create index deleted_venue on crm_venue (delete_flag) where delete_flag = 'N';
SELECT * FROM crm_venue WHERE delete_flag = 'N' ORDER BY name ASC LIMIT 20;


Prevent usage of index for a particular query in Postgres

I have a slow query in a Postgres DB. Using explain analyze, I can see that Postgres makes bitmap index scan on two different indexes followed by bitmap AND on the two resulting sets.
Deleting one of the indexes makes the evaluation ten times faster (bitmap index scan is still used on the first index). However, that deleted index is useful in other queries.
Query:
select
booking_id
from
booking
where
substitute_confirmation_token is null
and date_trunc('day', from_time) >= cast('01/25/2016 14:23:00.004' as date)
and from_time >= '01/25/2016 14:23:00.004'
and type = 'LESSON_SUBSTITUTE'
and valid
order by
booking_id;
Indexes:
"idx_booking_lesson_substitute_day" btree (date_trunc('day'::text, from_time)) WHERE valid AND type::text = 'LESSON_SUBSTITUTE'::text
"booking_substitute_confirmation_token_key" UNIQUE CONSTRAINT, btree (substitute_confirmation_token)
Query plan:
Sort (cost=287.26..287.26 rows=1 width=8) (actual time=711.371..711.377 rows=44 loops=1)
Sort Key: booking_id
Sort Method: quicksort Memory: 27kB
Buffers: shared hit=8 read=7437 written=1
-> Bitmap Heap Scan on booking (cost=275.25..287.25 rows=1 width=8) (actual time=711.255..711.294 rows=44 loops=1)
Recheck Cond: ((date_trunc('day'::text, from_time) >= '2016-01-25'::date) AND valid AND ((type)::text = 'LESSON_SUBSTITUTE'::text) AND (substitute_confirmation_token IS NULL))
Filter: (from_time >= '2016-01-25 14:23:00.004'::timestamp without time zone)
Buffers: shared hit=5 read=7437 written=1
-> BitmapAnd (cost=275.25..275.25 rows=3 width=0) (actual time=711.224..711.224 rows=0 loops=1)
Buffers: shared hit=5 read=7433 written=1
-> Bitmap Index Scan on idx_booking_lesson_substitute_day (cost=0.00..20.50 rows=594 width=0) (actual time=0.080..0.080 rows=72 loops=1)
Index Cond: (date_trunc('day'::text, from_time) >= '2016-01-25'::date)
Buffers: shared hit=5 read=1
-> Bitmap Index Scan on booking_substitute_confirmation_token_key (cost=0.00..254.50 rows=13594 width=0) (actual time=711.102..711.102 rows=2718734 loops=1)
Index Cond: (substitute_confirmation_token IS NULL)
Buffers: shared read=7432 written=1
Total runtime: 711.436 ms
Can I prevent using a particular index for a particular query in Postgres?
Your clever solution
You already found a clever solution for your particular case: A partial unique index that only covers rare values, so Postgres won't (can't) use the index for the common NULL value.
CREATE UNIQUE INDEX booking_substitute_confirmation_uni
ON booking (substitute_confirmation_token)
WHERE substitute_confirmation_token IS NOT NULL;
It's a textbook use-case for a partial index. Literally! The manual has a similar example and this perfectly matching advice to go with it:
Finally, a partial index can also be used to override the system's
query plan choices. Also, data sets with peculiar distributions might
cause the system to use an index when it really should not. In that
case the index can be set up so that it is not available for the
offending query. Normally, PostgreSQL makes reasonable choices about
index usage (e.g., it avoids them when retrieving common values, so
the earlier example really only saves index size, it is not required
to avoid index usage), and grossly incorrect plan choices are cause
for a bug report.
Keep in mind that setting up a partial index indicates that you know
at least as much as the query planner knows, in particular you know
when an index might be profitable. Forming this knowledge requires
experience and understanding of how indexes in PostgreSQL work. In
most cases, the advantage of a partial index over a regular index will
be minimal.
You commented: The table has a few million rows and just a few thousand rows with not-null values, so this is a perfect use case. It will even speed up queries on non-null values for substitute_confirmation_token because the index is much smaller now.
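For example, a lookup on a concrete (hypothetical) token value can use the new, much smaller partial index, because a non-null equality condition implies substitute_confirmation_token IS NOT NULL:
SELECT booking_id
FROM   booking
WHERE  substitute_confirmation_token = 'some-token';  -- hypothetical value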
Answer to question
To answer your original question: it's not possible to "disable" an existing index for a particular query. You would have to drop it, but that's way too expensive.
Fake drop index
You could drop an index inside a transaction, run your SELECT and then, instead of committing, use ROLLBACK. That's fast, but be aware that (per documentation):
A normal DROP INDEX acquires exclusive lock on the table, blocking
other accesses until the index drop can be completed.
So this is no good for multi-user environments.
BEGIN;
DROP INDEX big_user_id_created_at_idx;
SELECT ...;
ROLLBACK; -- so the index is preserved after all
More detailed statistics
Normally, though, it should be enough to raise the STATISTICS target for the column, so Postgres can more reliably identify common values and avoid the index for those. Try:
ALTER TABLE booking ALTER COLUMN substitute_confirmation_token SET STATISTICS 2000;
Then: ANALYZE booking; before you try your query again. 2000 is an example value. Related:
Keep PostgreSQL from sometimes choosing a bad query plan

Postgres not using index when index scan is much better option

I have a simple query to join two tables that's being really slow. I found out that the query plan does a seq scan on the large table email_activities (~10m rows), while I think using indexes with nested loops would actually be faster.
I rewrote the query using a subquery in an attempt to force the use of an index, and then noticed something interesting. If you look at the two query plans below, you will see that when I limit the result set of the subquery to 43k, the plan does use the index on email_activities, while setting the limit in the subquery to even 44k causes the plan to use a seq scan on email_activities. One is clearly more efficient than the other, but Postgres doesn't seem to care.
What could cause this? Is there a config somewhere that forces the use of a hash join if one of the sets is larger than a certain size?
explain analyze SELECT COUNT(DISTINCT "email_activities"."email_recipient_id") FROM "email_activities" where email_recipient_id in (select "email_recipients"."id" from email_recipients WHERE "email_recipients"."email_campaign_id" = 1607 limit 43000);
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=118261.50..118261.50 rows=1 width=4) (actual time=224.556..224.556 rows=1 loops=1)
-> Nested Loop (cost=3699.03..118147.99 rows=227007 width=4) (actual time=32.586..209.076 rows=40789 loops=1)
-> HashAggregate (cost=3698.94..3827.94 rows=43000 width=4) (actual time=32.572..47.276 rows=43000 loops=1)
-> Limit (cost=0.09..3548.44 rows=43000 width=4) (actual time=0.017..22.547 rows=43000 loops=1)
-> Index Scan using index_email_recipients_on_email_campaign_id on email_recipients (cost=0.09..5422.47 rows=65710 width=4) (actual time=0.017..19.168 rows=43000 loops=1)
Index Cond: (email_campaign_id = 1607)
-> Index Only Scan using index_email_activities_on_email_recipient_id on email_activities (cost=0.09..2.64 rows=5 width=4) (actual time=0.003..0.003 rows=1 loops=43000)
Index Cond: (email_recipient_id = email_recipients.id)
Heap Fetches: 40789
Total runtime: 224.675 ms
And:
explain analyze SELECT COUNT(DISTINCT "email_activities"."email_recipient_id") FROM "email_activities" where email_recipient_id in (select "email_recipients"."id" from email_recipients WHERE "email_recipients"."email_campaign_id" = 1607 limit 50000);
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=119306.25..119306.25 rows=1 width=4) (actual time=3050.612..3050.613 rows=1 loops=1)
-> Hash Semi Join (cost=4451.08..119174.27 rows=263962 width=4) (actual time=1831.673..3038.683 rows=47935 loops=1)
Hash Cond: (email_activities.email_recipient_id = email_recipients.id)
-> Seq Scan on email_activities (cost=0.00..107490.96 rows=9359988 width=4) (actual time=0.003..751.988 rows=9360039 loops=1)
-> Hash (cost=4276.08..4276.08 rows=50000 width=4) (actual time=34.058..34.058 rows=50000 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 1758kB
-> Limit (cost=0.09..4126.08 rows=50000 width=4) (actual time=0.016..27.302 rows=50000 loops=1)
-> Index Scan using index_email_recipients_on_email_campaign_id on email_recipients (cost=0.09..5422.47 rows=65710 width=4) (actual time=0.016..22.244 rows=50000 loops=1)
Index Cond: (email_campaign_id = 1607)
Total runtime: 3050.660 ms
Version: PostgreSQL 9.3.10 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
email_activities: ~10m rows
email_recipients: ~11m rows
Index (Only) Scan --> Bitmap Index Scan --> Sequential Scan
For a few rows it pays to run an index scan. If enough data pages are visible to all (= vacuumed enough, and not too much concurrent write load) and the index can provide all column values needed, then a faster index only scan is used. With more rows expected to be returned (a higher percentage of the table, and depending on data distribution, value frequencies and row width) it becomes more likely that several rows are found on one data page. Then it pays to switch to a bitmap index scan. (Or to combine multiple distinct indexes.) Once a large percentage of data pages has to be visited anyway, it's cheaper to run a sequential scan, filter surplus rows and skip the overhead for indexes altogether.
Index usage becomes (much) cheaper and more likely when accessing data pages in random order is not (much) more expensive than accessing them in sequential order. That's the case when using SSD instead of spinning disks, or even more so the more is cached in RAM - and the respective configuration parameters random_page_cost and effective_cache_size are set accordingly.
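A minimal sketch of adjusting those two parameters for the current session only, assuming SSD storage and a generous cache; the values are illustrative and should be verified with EXPLAIN ANALYZE before touching postgresql.conf:
SET random_page_cost = 1.1;        -- closer to seq_page_cost when random I/O is cheap
SET effective_cache_size = '8GB';  -- roughly the RAM available for caching data
Then re-run the EXPLAIN ANALYZE from the question and see whether the plan switches back to the index.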
In your case, Postgres switches to a sequential scan, expecting to find rows=263962, that's already 3 % of the whole table. (While only rows=47935 are actually found, see below.)
More in this related answer:
Efficient PostgreSQL query on timestamp using index or bitmap index scan?
Beware of forcing query plans
You cannot force a certain planner method directly in Postgres, but you can make other methods seem extremely expensive for debugging purposes. See Planner Method Configuration in the manual.
SET enable_seqscan = off (as suggested in another answer) does that for sequential scans. But that's intended for debugging purposes in your session only. Do not use this as a general setting in production unless you know exactly what you are doing. It can force ridiculous query plans. The manual:
These configuration parameters provide a crude method of influencing
the query plans chosen by the query optimizer. If the default plan
chosen by the optimizer for a particular query is not optimal, a
temporary solution is to use one of these configuration parameters to force the optimizer to choose a different plan. Better ways to
improve the quality of the plans chosen by the optimizer include
adjusting the planner cost constants (see Section 19.7.2),
running ANALYZE manually, increasing the value of the
default_statistics_target configuration parameter, and
increasing the amount of statistics collected for specific columns
using ALTER TABLE SET STATISTICS.
That's already most of the advice you need.
Keep PostgreSQL from sometimes choosing a bad query plan
In this particular case, Postgres expects 5-6 times more hits on email_activities.email_recipient_id than are actually found:
estimated rows=227007 vs. actual ... rows=40789
estimated rows=263962 vs. actual ... rows=47935
If you run this query often it will pay to have ANALYZE look at a bigger sample for more accurate statistics on the particular column. Your table is big (~ 10M rows), so make that:
ALTER TABLE email_activities ALTER COLUMN email_recipient_id
SET STATISTICS 3000; -- max 10000, default 100
Then ANALYZE email_activities;
Measure of last resort
In very rare cases you might resort to forcing an index with SET LOCAL enable_seqscan = off in a separate transaction or in a function with its own environment. Like:
CREATE OR REPLACE FUNCTION f_count_dist_recipients(_email_campaign_id int, _limit int)
RETURNS bigint AS
$func$
SELECT COUNT(DISTINCT a.email_recipient_id)
FROM email_activities a
WHERE a.email_recipient_id IN (
SELECT id
FROM email_recipients
WHERE email_campaign_id = $1
LIMIT $2) -- or consider query below
$func$ LANGUAGE sql VOLATILE COST 100000 SET enable_seqscan = off;
The setting only applies to the local scope of the function.
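A possible call, using the values from the question:
SELECT f_count_dist_recipients(1607, 43000);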
Warning: This is just a proof of concept. Even this much less radical manual intervention might bite you in the long run. Cardinalities, value frequencies, your schema, global Postgres settings, everything changes over time. You are going to upgrade to a new Postgres version. The query plan you force now, may become a very bad idea later.
And typically this is just a workaround for a problem with your setup. Better find and fix it.
Alternative query
Essential information is missing in the question, but this equivalent query is probably faster and more likely to use an index on (email_recipient_id) - increasingly so for a bigger LIMIT.
SELECT COUNT(*) AS ct
FROM (
SELECT id
FROM email_recipients
WHERE email_campaign_id = 1607
LIMIT 43000
) r
WHERE EXISTS (
SELECT 1 FROM email_activities
WHERE email_recipient_id = r.id);
A sequential scan can be more efficient, even when an index exists. In this case, Postgres seems to get the estimates rather wrong.
An ANALYZE <TABLE> on all related tables can help in such cases. If it doesn't, you can set the variable enable_seqscan to OFF to force Postgres to use an index whenever technically possible, at the expense that sometimes an index scan will be used when a sequential scan would perform better.
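A minimal sketch of testing that in a single session (for debugging only, not as a production setting), using the query from the question:
SET enable_seqscan = off;  -- discourage sequential scans for this session
EXPLAIN ANALYZE
SELECT COUNT(DISTINCT "email_activities"."email_recipient_id")
FROM   "email_activities"
WHERE  email_recipient_id IN (
         SELECT "email_recipients"."id"
         FROM   email_recipients
         WHERE  "email_recipients"."email_campaign_id" = 1607
         LIMIT  50000);
RESET enable_seqscan;      -- restore the default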

Postgres inconsistent use of Index vs Seq Scan

I'm having difficulty understanding what I perceive as an inconsistency in how Postgres chooses to use indexes. We have a query based on NOT IN against an indexed column that Postgres executes sequentially, but when we perform the same query as IN, it uses the index.
I've created a simplistic example that I believe demonstrates the issue; notice this first query uses a sequential scan:
CREATE TABLE node
(
id SERIAL PRIMARY KEY,
vid INTEGER
);
CREATE INDEX x ON node(vid);
INSERT INTO node(vid) VALUES (1),(2);
EXPLAIN ANALYZE
SELECT *
FROM node
WHERE NOT vid IN (1);
Seq Scan on node (cost=0.00..36.75 rows=2129 width=8) (actual time=0.009..0.010 rows=1 loops=1)
Filter: (vid <> 1)
Rows Removed by Filter: 1
Total runtime: 0.025 ms
But if we invert the query to IN, you'll notice that it now decides to use the index:
EXPLAIN ANALYZE
SELECT *
FROM node
WHERE vid IN (2);
Bitmap Heap Scan on node (cost=4.34..15.01 rows=11 width=8) (actual time=0.017..0.017 rows=1 loops=1)
Recheck Cond: (vid = 1)
-> Bitmap Index Scan on x (cost=0.00..4.33 rows=11 width=0) (actual time=0.012..0.012 rows=1 loops=1)
Index Cond: (vid = 1)
Total runtime: 0.039 ms
Can anyone shed any light on this? Specifically, is there a way to rewrite our NOT IN to work with the index (when obviously the result set is not as simplistic as just 1 or 2)?
We are using Postgres 9.2 on CentOS 6.6
PostgreSQL is going to use an index when it makes sense. It is likely that the statistics indicate that your NOT IN returns too many tuples for an index to be effective.
You can test this by doing the following:
set enable_seqscan to false;
explain analyze .... NOT IN
set enable_seqscan to true;
explain analyze .... NOT IN
The results will tell you if PostgreSQL is making the correct decision. If it isn't, you can adjust the statistics of the column and/or the costs (random_page_cost) to get the desired behavior.
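A hedged sketch of both adjustments (the numbers are just examples):
ALTER TABLE node ALTER COLUMN vid SET STATISTICS 1000;  -- finer-grained column statistics
ANALYZE node;                                           -- refresh them
SET random_page_cost = 1.1;                             -- make random I/O look cheaper, e.g. on SSDs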

Postgres: huge table with (delayed) read and write access

I have a huge table (currently ~3mil rows, expected to increase by a factor of 1000) with lots of inserts every second. The table is never updated.
Now I have to run queries on that table which are pretty slow (as expected). These queries do not have to be 100% accurate; it is OK if the result is a day old (but not older).
There are currently two indexes on two single integer columns, and I would have to add two more indexes (integer and timestamp columns) to speed up my queries.
The ideas I had so far:
Add the two missing indexes to the table
No indexes on the huge table at all; instead, copy the content (as a daily task, just the important rows) to a second table, create the indexes on that second table, and run the queries there.
Partitioning the huge table
Master/Slave setup (writing to the master and reading from the slaves).
What option is the best in terms of performance? Do you have any other suggestions?
EDIT:
Here is the table (I have marked the foreign keys and prettified the query a bit):
CREATE TABLE client_log
(
id serial NOT NULL,
logid integer NOT NULL,
client_id integer NOT NULL, -- FOREIGN KEY
client_version varchar(16),
sessionid varchar(100) NOT NULL,
created timestamptz NOT NULL,
filename varchar(256),
funcname varchar(256),
linenum integer,
comment text,
domain varchar(128),
code integer,
latitude float8,
longitude float8,
created_on_server timestamptz NOT NULL,
message_id integer, -- FOREIGN KEY
app_id integer NOT NULL, -- FOREIGN KEY
result integer
);
CREATE INDEX client_log_code_idx ON client_log USING btree (code);
CREATE INDEX client_log_created_idx ON client_log USING btree (created);
CREATE INDEX clients_clientlog_app_id ON client_log USING btree (app_id);
CREATE INDEX clients_clientlog_client_id ON client_log USING btree (client_id);
CREATE UNIQUE INDEX clients_clientlog_logid_client_id_key ON client_log USING btree (logid, client_id);
CREATE INDEX clients_clientlog_message_id ON client_log USING btree (message_id);
And an example query:
SELECT
client_log.comment,
COUNT(client_log.comment) AS count
FROM
client_log
WHERE
client_log.app_id = 33 AND
client_log.code = 3 AND
client_log.client_id IN (SELECT client.id FROM client WHERE
client.app_id = 33 AND
client."replaced_id" IS NULL)
GROUP BY client_log.comment ORDER BY count DESC;
client_log_code_idx is the index needed for the query above. There are other queries that need the client_log_created_idx index.
And the query plan:
Sort (cost=2844.72..2844.75 rows=11 width=242) (actual time=4684.113..4684.180 rows=70 loops=1)
Sort Key: (count(client_log.comment))
Sort Method: quicksort Memory: 32kB
-> HashAggregate (cost=2844.42..2844.53 rows=11 width=242) (actual time=4683.830..4683.907 rows=70 loops=1)
-> Hash Semi Join (cost=1358.52..2844.32 rows=20 width=242) (actual time=303.515..4681.211 rows=1202 loops=1)
Hash Cond: (client_log.client_id = client.id)
-> Bitmap Heap Scan on client_log (cost=1108.02..2592.57 rows=387 width=246) (actual time=113.599..4607.568 rows=6962 loops=1)
Recheck Cond: ((app_id = 33) AND (code = 3))
-> BitmapAnd (cost=1108.02..1108.02 rows=387 width=0) (actual time=104.955..104.955 rows=0 loops=1)
-> Bitmap Index Scan on clients_clientlog_app_id (cost=0.00..469.96 rows=25271 width=0) (actual time=58.315..58.315 rows=40662 loops=1)
Index Cond: (app_id = 33)
-> Bitmap Index Scan on client_log_code_idx (cost=0.00..637.61 rows=34291 width=0) (actual time=45.093..45.093 rows=36310 loops=1)
Index Cond: (code = 3)
-> Hash (cost=248.06..248.06 rows=196 width=4) (actual time=61.069..61.069 rows=105 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 4kB
-> Bitmap Heap Scan on client (cost=10.95..248.06 rows=196 width=4) (actual time=27.843..60.867 rows=105 loops=1)
Recheck Cond: (app_id = 33)
Filter: (replaced_id IS NULL)
Rows Removed by Filter: 271
-> Bitmap Index Scan on clients_client_app_id (cost=0.00..10.90 rows=349 width=0) (actual time=15.144..15.144 rows=380 loops=1)
Index Cond: (app_id = 33)
Total runtime: 4684.843 ms
In general, in a system where time related data is constantly being inserted into the database, I'd recommend partitioning according to time.
This is not just because it might improve query times, but because otherwise it makes managing the data difficult. However big your hardware is, it will have a limit to its capacity, so you will eventually have to start removing rows that are older than a certain date. The rate at which you remove the rows will have to be equal to the rate they are going in.
If you just have one big table, and you remove old rows using DELETE, you will leave a lot of dead tuples that need to be vacuumed out. The autovacuum will be running constantly, using up valuable disk IO.
On the other hand, if you partition according to time, then removing out of date data is as easy as dropping the relevant child table.
In terms of indexes - the indexes are not inherited, so you can save on creating the indexes until after the partition is loaded. You could have a partition size of 1 day in your use case. This means the indexes do not need to be constantly updated as data is being inserted. It will be more practical to have additional indexes as needed to make your queries perform.
Your sample query does not filter on the 'created' time field, but you say other queries do. If you partition by time, and are careful about how you construct your queries, constraint exclusion will kick in and it will only include the specific partitions that are relevant to the query.
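A minimal sketch of inheritance-based time partitioning as it worked on the Postgres versions of that era; the child-table name and date range are illustrative, and routing inserts to the right child (via a trigger or application logic) is omitted:
-- One child table per day; the CHECK constraint is what constraint exclusion uses
CREATE TABLE client_log_2015_01_01 (
    CHECK (created >= '2015-01-01' AND created < '2015-01-02')
) INHERITS (client_log);
-- Indexes are per child, so they can be created after that day's data is loaded
CREATE INDEX client_log_2015_01_01_code_idx ON client_log_2015_01_01 (code);
-- Removing out-of-date data is then just dropping the child table
DROP TABLE client_log_2015_01_01;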
Apart from partitioning, I would consider splitting the table into many tables, aka sharding.
I don't have the full picture of your domain but these are some suggestions:
Each client gets their own table in their own schema (or a set of clients shares a schema, depending on how many clients you have and how many new clients you expect to get).
create table client1.log(id, logid,.., code, app_id);
create table client2.log(id, logid,.., code, app_id);
Splitting the table like this should also reduce the contention on inserts.
The table can be split even more. Within each client schema you can also split the table per "code" or "app_id" or something else that makes sense for you. This might be overdoing it, but it is easy to implement if the set of "code" and/or "app_id" values does not change often.
Do keep the code/app_id columns even in the new smaller tables but do put a constraint on the column so that no other type of log record can be inserted. The constraint will also help the optimiser when searching, see this example:
create schema client1;
set search_path = 'client1';
create table error_log(id serial, code text check(code ='error'));
create table warning_log(id serial, code text check(code ='warning'));
create table message_log(id serial, code text check(code ='message'));
To get the full picture (all rows) of a client you can use a view on top of all tables:
create view client_log as
select * from error_log
union all
select * from warning_log
union all
select * from message_log;
The check constraints should allow the optimiser to only search the table where the "code" can exist.
explain
select * from client_log where code = 'error';
-- Output
Append (cost=0.00..25.38 rows=6 width=36)
-> Seq Scan on error_log (cost=0.00..25.38 rows=6 width=36)
Filter: (code = 'error'::text)

Why does this SUM() function take so long in PostgreSQL?

This is my query:
SELECT SUM(amount) FROM bill WHERE name = 'peter'
There are 800K+ rows in the table. EXPLAIN ANALYZE says:
Aggregate (cost=288570.06..288570.07 rows=1 width=4) (actual time=537213.327..537213.328 rows=1 loops=1)
-> Seq Scan on bill (cost=0.00..288320.94 rows=498251 width=4) (actual time=48385.201..535941.041 rows=800947 loops=1)
Filter: ((name)::text = 'peter'::text)
Rows Removed by Filter: 8
Total runtime: 537213.381 ms
All rows are affected, and this is correct. But why so long? A similar query without WHERE runs way faster:
EXPLAIN ANALYZE SELECT SUM(amount) FROM bill
Aggregate (cost=137523.31..137523.31 rows=1 width=4) (actual time=2198.663..2198.664 rows=1 loops=1)
-> Index Only Scan using idx_amount on bill (cost=0.00..137274.17 rows=498268 width=4) (actual time=0.032..1223.512 rows=800955 loops=1)
Heap Fetches: 533399
Total runtime: 2198.717 ms
I have an index on amount and an index on name. Have I missed any indexes?
P.S. I managed to solve the problem just by adding a new index ON bill(name, amount). I didn't get why it helped, so let's leave the question open for some time...
Since you are searching for a specific name, you should have an index that has name as the first column, e.g. CREATE INDEX IX_bill_name ON bill( name ).
But Postgres can still opt to do a full table scan if it estimates your index to not be specific enough, i.e. if it thinks it is faster to just scan all rows and pick the matching ones instead of consulting an index and jumping around in the table to gather the matching rows. Postgres uses a cost-based estimation technique that weights random disk reads as more expensive than sequential reads.
For an index to actually be used in your situation, there should be no more than 10% of the rows matching what you are searching for. Since most of your rows have name=peter it is actually faster to do a full table scan.
As to why the SUM without filtering runs faster: that has to do with the overall width of the table. With a WHERE clause, Postgres has to sequentially read all rows in the table so it can disregard those that do not match the filter. Without a WHERE clause, Postgres can instead read all the amounts from the index. Because the index on amount contains the amounts and pointers to the corresponding rows, but no other data from the table, it is simply less data to wade through. Based on the big difference in performance, I guess you have quite a lot of other fields in your table.
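A hedged sketch of the composite index the asker ended up adding; presumably it helps because both the name filter and the summed amount can then be satisfied from the index alone (an index-only scan), provided the table is vacuumed often enough:
CREATE INDEX idx_bill_name_amount ON bill (name, amount);  -- filter column first, summed column second
VACUUM ANALYZE bill;  -- keep the visibility map fresh so an index-only scan stays cheap
EXPLAIN ANALYZE SELECT SUM(amount) FROM bill WHERE name = 'peter';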