time-dependent postgres query speed - sql

I have a situation where the same select query can finish in 3 seconds or run for more than an hour without finishing (I could not wait that long and killed it). I believe it may have something to do with the automatic statistics collection behavior of the Postgres server. It is a 3-table join, and one of the tables has over 70 million rows.
-- tmp_variant_filtered has about 4000 rows
-- variant_quick > 70 million rows
-- filtered_variant_quick has about 70 k rows
select count(*)
from "tmp_variant_filtered" t join "variant_quick" v on getchrnum(t.seqname)=v.chrom
and t.pos_start=v.pos and t.ref=v.ref
and t.alt=v.alt
join "filtered_variant_quick" f on f.variantid=v.id
where v.samplerun=165
;
-- running the query immediately after tmp_variant_filtered was loaded
-- Query plan that will take > 1 hour and not finish
Aggregate (cost=332.05..332.06 rows=1 width=8)
-> Nested Loop (cost=0.86..332.05 rows=1 width=0)
-> Nested Loop (cost=0.57..323.74 rows=1 width=8)
Join Filter: ((t.pos_start = v.pos) AND ((t.ref)::text = (v.ref)::text) AND ((t.alt)::text = (v.alt)::text) AND (getchrnum(t.seqname) = v.chrom))
-> Seq Scan on tmp_variant_filtered t (cost=0.00..315.00 rows=1 width=1126)
-> Index Scan using variant_quick_samplerun_chrom_pos_ref_alt_key on variant_quick v (cost=0.57..8.47 rows=1 width=20)
Index Cond: (samplerun = 165)
-> Index Only Scan using filtered_variant_quick_pkey on filtered_variant_quick f (cost=0.29..8.31 rows=1 width=8)
Index Cond: (variantid = v.id)
-- running the query a few minutes after tmp_variant_filtered was loaded with copy command
-- query plan that will take less than 5 seconds to finish
Aggregate (cost=425.69..425.70 rows=1 width=8)
-> Nested Loop (cost=8.78..425.68 rows=1 width=0)
-> Hash Join (cost=8.48..417.37 rows=1 width=8)
Hash Cond: ((t.pos_start = v.pos) AND ((t.ref)::text = (v.ref)::text) AND ((t.alt)::text = (v.alt)::text))
Join Filter: (getchrnum(t.seqname) = v.chrom)
-> Seq Scan on tmp_variant_filtered t (cost=0.00..359.06 rows=4406 width=13)
-> Hash (cost=8.47..8.47 rows=1 width=20)
-> Index Scan using variant_quick_samplerun_chrom_pos_ref_alt_key on variant_quick v (cost=0.57..8.47 rows=1 width=20)
Index Cond: (samplerun = 165)
-> Index Only Scan using filtered_variant_quick_pkey on filtered_variant_quick f (cost=0.29..8.31 rows=1 width=8)
Index Cond: (variantid = v.id)
If you run the query immediately after the tmp table is populated, you get the plan shown on top, and the query takes a very long time. If you wait a few minutes, you get the lower plan with the hash join. Note that the cost estimate for the upper plan is less than for the lower one.
Since the query is embedded in a script, the top plan is used, and it usually finishes in a couple of hours. If I run the same query in a terminal after terminating the script, the lower plan is used, and it usually takes a couple of seconds to finish.
I even did an experiment: I copied the tmp_variant_filtered table into another table, say 'test'. If I ran the query immediately after the copy (manually, so with a couple of seconds of delay), it got stuck. After killing the job and waiting a few minutes, the same query became blazing fast.
It has been a long time since I last did query tuning; I am just starting to pick it up again. I am reading and trying to understand why Postgres behaves this way. I would appreciate a hint from the experts.

Immediately after inserting the rows into the table, there are no statistics available for column values and their distribution. Thus the optimizer assumes the table is empty. The only sensible strategy to retrieve all rows from a (supposedly) empty table is a Seq Scan. You can see this assumption in the execution plan:
Seq Scan on tmp_variant_filtered t (cost=0.00..315.00 rows=1 width=1126)
The rows=1 means that the optimizer expects only one row to be returned by the Seq Scan. Because it's only one row, the planner chooses a nested loop to do the join - which means the Seq Scan is done once for each row in the other table (you could see that more clearly if you use explain (analyze, verbose) to generate the execution plan).
The statistics are updated in the background by the "autovacuum daemon" if you don't do it manually. That's why, after waiting a while, you see a better plan: the optimizer now knows the table isn't empty.
Once the optimizer has better knowledge of the size of the table, it chooses the much more efficient Hash Join to bring the two tables together - which means the Seq Scan is only executed once, rather than multiple times.
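One way to confirm that the autovacuum daemon has (or hasn't) analyzed the table yet is to check the statistics timestamps, for example:
-- last_analyze: manual ANALYZE; last_autoanalyze: the autovacuum daemon
SELECT relname, last_analyze, last_autoanalyze
FROM   pg_stat_user_tables
WHERE  relname = 'tmp_variant_filtered';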
It is always recommended to run analyze (or vacuum analyze) on tables where you changed the number of rows significantly if you need a good execution plan immediately after populating the table.
Quote from the manual
Whenever you have significantly altered the distribution of data within a table, running ANALYZE is strongly recommended. This includes bulk loading large amounts of data into the table. Running ANALYZE (or VACUUM ANALYZE) ensures that the planner has up-to-date statistics about the table. With no statistics or obsolete statistics, the planner might make poor decisions during query planning, leading to poor performance on any tables with inaccurate or nonexistent statistics
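A minimal sketch of that workflow after a bulk load (the COPY source path here is illustrative):
COPY tmp_variant_filtered FROM '/path/to/variants.tsv';  -- bulk load
ANALYZE tmp_variant_filtered;  -- planner now sees ~4000 rows instead of an "empty" table
-- the join query should now get the hash-join plan immediately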

Regardless of the mechanism behind this time-dependent behavior, I figured out a solution: VACUUM ANALYZE my_table. I am not sure whether it is the real cure or just introduces a bit of delay. I was using psycopg2 to execute the query and had to avoid the 'cannot vacuum inside a transaction' exception. Here is the code block you need:
self.conn.commit()                      # close any open transaction first
self.conn.set_session(autocommit=True)  # VACUUM cannot run inside a transaction block
self.cursor.execute("vacuum analyze {}".format(one_of_my_tables))
# here you should probably use psycopg2's sql.SQL("...").format(sql.Identifier(...))
# instead of plain text composition, to be more secure
self.conn.set_session(autocommit=False)
I applied it to two of the three tables involved in the join from my question. Maybe applying vacuum analyze to just one of them would be sufficient. As mentioned by Basil, I should have asked this question in the DBA group.

Related

Optimize aggregate query on massive table to refresh materialized view

Let's say I have the following PostgreSQL database schema:
Group:
    id: int

Task:
    id: int
    created_at: datetime
    group: FK Group
I have the following Materialized View to calculate the number of tasks and the most recent Task.created_at value per group:
CREATE MATERIALIZED VIEW group_statistics AS (
    SELECT
        "group".id AS group_id,
        MAX(task.created_at) AS latest_task_created_at,
        COUNT(task.id) AS task_count
    FROM "group"
    LEFT OUTER JOIN task ON ("group".id = task.group_id)
    GROUP BY "group".id
);
The Task table currently has 20 million records, so refreshing this materialized view takes a long time (20-30 seconds). We've also been experiencing some short but major DB performance issues ever since we started refreshing the materialized view every 10 min, even with CONCURRENTLY:
REFRESH MATERIALIZED VIEW CONCURRENTLY group_statistics;
Is there a more performant way to calculate these values? Note, they do NOT need to be exact. Approximate values are totally fine, e.g. latest_task_created_at can be 10-20 min delayed.
I'm thinking of caching these values on every write to the Task table. Either in Redis or in PostgreSQL itself.
Update
People are requesting the execution plan. EXPLAIN doesn't work on REFRESH, but I ran EXPLAIN on the actual query. Note, it's different from my theoretical data model above. In this case, Database is Group and Record is Task. Also note, I'm on PostgreSQL 12.10.
EXPLAIN (analyze, buffers, verbose)
SELECT
store_database.id as database_id,
MAX(store_record.updated_at) AS latest_record_updated_at,
COUNT(store_record.id) AS record_count
FROM store_database
LEFT JOIN store_record ON (store_database.id = store_record.database_id)
GROUP BY store_database.id;
Output:
HashAggregate (cost=1903868.71..1903869.22 rows=169 width=32) (actual time=18227.016..18227.042 rows=169 loops=1)
" Output: store_database.id, max(store_record.updated_at), count(store_record.id)"
Group Key: store_database.id
Buffers: shared hit=609211 read=1190704
I/O Timings: read=3385.027
-> Hash Right Join (cost=41.28..1872948.10 rows=20613744 width=40) (actual time=169.766..14572.558 rows=20928339 loops=1)
" Output: store_database.id, store_record.updated_at, store_record.id"
Inner Unique: true
Hash Cond: (store_record.database_id = store_database.id)
Buffers: shared hit=609211 read=1190704
I/O Timings: read=3385.027
-> Seq Scan on public.store_record (cost=0.00..1861691.23 rows=20613744 width=40) (actual time=0.007..8607.425 rows=20928316 loops=1)
" Output: store_record.id, store_record.key, store_record.data, store_record.created_at, store_record.updated_at, store_record.database_id, store_record.organization_id, store_record.user_id"
Buffers: shared hit=609146 read=1190704
I/O Timings: read=3385.027
-> Hash (cost=40.69..40.69 rows=169 width=16) (actual time=169.748..169.748 rows=169 loops=1)
Output: store_database.id
Buckets: 1024 Batches: 1 Memory Usage: 16kB
Buffers: shared hit=65
-> Index Only Scan using store_database_pkey on public.store_database (cost=0.05..40.69 rows=169 width=16) (actual time=0.012..0.124 rows=169 loops=1)
Output: store_database.id
Heap Fetches: 78
Buffers: shared hit=65
Planning Time: 0.418 ms
JIT:
Functions: 14
" Options: Inlining true, Optimization true, Expressions true, Deforming true"
" Timing: Generation 2.465 ms, Inlining 15.728 ms, Optimization 92.852 ms, Emission 60.694 ms, Total 171.738 ms"
Execution Time: 18229.600 ms
Note the large execution time. It sometimes takes 5-10 minutes to run. I would love to bring this down to consistently a few seconds max.
Update #2
People are requesting the execution plan when the query takes minutes. Here it is:
HashAggregate (cost=1905790.10..1905790.61 rows=169 width=32) (actual time=128442.799..128442.825 rows=169 loops=1)
" Output: store_database.id, max(store_record.updated_at), count(store_record.id)"
Group Key: store_database.id
Buffers: shared hit=114011 read=1685876 dirtied=367
I/O Timings: read=112953.619
-> Hash Right Join (cost=15.32..1874290.39 rows=20999810 width=40) (actual time=323.497..124809.521 rows=21448762 loops=1)
" Output: store_database.id, store_record.updated_at, store_record.id"
Inner Unique: true
Hash Cond: (store_record.database_id = store_database.id)
Buffers: shared hit=114011 read=1685876 dirtied=367
I/O Timings: read=112953.619
-> Seq Scan on public.store_record (cost=0.00..1862849.43 rows=20999810 width=40) (actual time=0.649..119522.406 rows=21448739 loops=1)
" Output: store_record.id, store_record.key, store_record.data, store_record.created_at, store_record.updated_at, store_record.database_id, store_record.organization_id, store_record.user_id"
Buffers: shared hit=113974 read=1685876 dirtied=367
I/O Timings: read=112953.619
-> Hash (cost=14.73..14.73 rows=169 width=16) (actual time=322.823..322.824 rows=169 loops=1)
Output: store_database.id
Buckets: 1024 Batches: 1 Memory Usage: 16kB
Buffers: shared hit=37
-> Index Only Scan using store_database_pkey on public.store_database (cost=0.05..14.73 rows=169 width=16) (actual time=0.032..0.220 rows=169 loops=1)
Output: store_database.id
Heap Fetches: 41
Buffers: shared hit=37
Planning Time: 5.390 ms
JIT:
Functions: 14
" Options: Inlining true, Optimization true, Expressions true, Deforming true"
" Timing: Generation 1.306 ms, Inlining 82.966 ms, Optimization 176.787 ms, Emission 62.561 ms, Total 323.620 ms"
Execution Time: 128474.490 ms
Your MV currently has 169 rows, so write costs are negligible (unless you have locking issues). It's all about the expensive sequential scan over the big table.
Full counts are slow
Getting exact counts per group ("database") is expensive. There is no magic bullet for that in Postgres. Postgres has to count all rows. If the table is all-visible (visibility map is up to date), Postgres can shorten the procedure somewhat by only traversing a covering index. (You did not provide indexes ...)
There are possible shortcuts with an estimate for the total row count in the whole table. But the same is not easily available per group. See:
Fast way to discover the row count of a table in PostgreSQL
But not that slow
That said, your query can still be substantially faster. Aggregate before the join:
SELECT id AS database_id
, r.latest_record_updated_at
, COALESCE(r.record_count, 0) AS record_count
FROM store_database d
LEFT JOIN (
SELECT r.database_id AS id
, max(r.updated_at) AS latest_record_updated_at
, count(*) AS record_count
FROM store_record r
GROUP BY 1
) r USING (id);
See:
Query with LEFT JOIN not returning rows for count of 0
And use the slightly faster (and equivalent in this case) count(*). Related:
PostgreSQL: running count of rows for a query 'by minute'
Also - visibility provided - count(*) can use any non-partial index, preferably the smallest, while count(store_record.id) is limited to an index on that column (and has to inspect values, too).
I/O is your bottleneck
You added the EXPLAIN plan for an expensive execution, and the skyrocketing I/O cost stands out. It dominates the cost of your query.
Fast plan:
Buffers: shared hit=609146 read=1190704
I/O Timings: read=3385.027
Slow plan:
Buffers: shared hit=113974 read=1685876 dirtied=367
I/O Timings: read=112953.619
Your Seq Scan on public.store_record spent 112953.619 ms on reading data file blocks. 367 dirtied buffers represent under 3MB and are only a tiny fraction of total I/O. Either way, I/O dominates the cost.
Either your storage system is abysmally slow or, more likely (since the fast execution's I/O costs about 30x less), there is too much contention for I/O from the concurrent workload (on an inappropriately configured system). One or more of these can help:
faster storage
better (more appropriate) server configuration
more RAM (and server config that allows more cache memory)
less concurrent workload
more efficient table design with smaller disk footprint
smarter query that needs to read fewer data blocks
upgrade to a current version of Postgres
Hugely faster without count
If there was no count, just latest_record_updated_at, this query would deliver that in close to no time:
SELECT d.id
, (SELECT r.updated_at
FROM store_record r
WHERE r.database_id = d.id
ORDER BY r.updated_at DESC NULLS LAST
LIMIT 1) AS latest_record_updated_at
FROM store_database d;
In combination with a matching index! Ideally:
CREATE INDEX store_record_database_id_idx ON store_record (database_id, updated_at DESC NULLS LAST);
See:
Optimize GROUP BY query to retrieve latest row per user
The same index can also help the complete query above, even if not as dramatically. If the table is vacuumed enough (visibility map up to date), Postgres can scan the much smaller index without involving the bigger table. That obviously matters more for wider table rows - and especially eases your I/O problem.
(Of course, index maintenance adds costs, too ...)
Upgrade to use parallelism
Upgrade to the latest version of Postgres if at all possible. Postgres 14 or 15 have received various performance improvements over Postgres 12. Most importantly, quoting the release notes for Postgres 14:
Allow REFRESH MATERIALIZED VIEW to use parallelism (Bharath Rupireddy)
Could be massive for your use case. Related:
Materialized view refresh in parallel
Estimates?
Warning: experimental stuff.
You stated:
Approximate values are totally fine
I see only 169 groups ("databases") in the query plan. Postgres maintains column statistics. While the distinct count of groups is tiny and stays below the "statistics target" for the column store_record.database_id (which you have to make sure of!), we can work with this. See:
How to check statistics targets used by ANALYZE?
Unless you have very aggressive autovacuum settings, run ANALYZE on database_id to update column statistics and get better estimates before running the query below. (It also updates reltuples and relpages in pg_class.):
ANALYZE public.store_record(database_id);
Or even (to also update the visibility map for above query):
VACUUM ANALYZE public.store_record(database_id);
This was the most expensive part (with collateral benefits). And it's optional.
WITH ct(total_est) AS (
SELECT reltuples / relpages * (pg_relation_size(oid) / 8192)
FROM pg_class
WHERE oid = 'public.store_record'::regclass -- your table here
)
SELECT v.database_id, (ct.total_est * v.freq)::bigint AS estimate
FROM pg_stats s
, ct
, unnest(most_common_vals::text::int[], most_common_freqs) v(database_id, freq)
WHERE s.schemaname = 'public'
AND s.tablename = 'store_record'
AND s.attname = 'database_id';
The query relies on various Postgres internals and may break in future major versions (though that's unlikely). Tested with Postgres 14, but it works with Postgres 12, too. It's basically black magic. You need to know what you are doing. You have been warned.
But the query costs close to nothing.
Take exact values for latest_record_updated_at from the fast query above, and join to these estimates for the count.
Basic explanation: Postgres maintains column statistics in the system catalog pg_statistic. pg_stats is a view on it, easier to access. Among other things, "most common values" and their relative frequency are gathered. Represented in most_common_vals and most_common_freqs. Multiplied with the current (estimated) total count, we get estimates per group. You could do all of it manually, but Postgres is probably much faster and better at this.
For the computation of the total estimate ct.total_est see:
Fast way to discover the row count of a table in PostgreSQL
(Note the "Safe and explicit" form for this query.)
Given the explain plan, the sequential scan seems to be causing the slowness. An index can definitely help there.
You can also utilize index-only scans, as there are few columns in the query. So you can use something like this for the store_record table:
CREATE INDEX idx_store_record_db_id ON store_record USING btree (database_id) INCLUDE (id, updated_at);
An index on the id column of the store_database table is also needed:
CREATE INDEX idx_db_id ON store_database USING btree (id);
Sometimes in such cases it is necessary to think of completely different solutions at the business-logic level.
For example, the count operation is very slow. It cannot be accelerated by any means in the DB. What can be done in such cases? Since I do not know your business logic in full detail, I will describe several options. However, these options also have disadvantages. For example:
group_id   id
---------------
1          12
1          145
1          100
3          652
3          102
We group it once and insert the counts into a separate table:
group_id   count_id
--------------------
1          3
3          2
Then, when each record is inserted into the main table, we update the group table using triggers. Like this:
update group_table set count_id = count_id + 1 where group_id = new.group_id
Or like that:
update group_table set count_id = (select count(id) from main_table where group_id = new.group_id)
I am not going into small details here. To update the row properly, we can use SELECT ... FOR UPDATE, which locks the row against other transactions.
So, the main idea is this: functions like count need to be executed separately on grouped data, not on the entire table at once. Similar solutions can be applied. I explained it for general understanding.
The disadvantage of this solution is that, if you have many insert operations on this main table, insert performance will decrease.
Parallel plan
If you first collect the store_record statistics and then join that with the store_database, you'll get a better, parallelisable plan.
EXPLAIN (analyze, buffers, verbose)
SELECT
store_database.id as database_id,
s.latest_record_updated_at as latest_record_updated_at,
coalesce(s.record_count,0) as record_count
FROM store_database
LEFT JOIN
( SELECT
store_record.database_id as database_id,
MAX(store_record.updated_at) as latest_record_updated_at,
COUNT(store_record.id) as record_count
FROM store_record
GROUP BY store_record.database_id)
AS s ON (store_database.id = s.database_id);
Here's a demo - at the end you can see both queries return the exact same results, but the one I suggest runs faster and has a more flexible plan. The number of workers dispatched depends on your max_worker_processes, max_parallel_workers, max_parallel_workers_per_gather settings as well as some additional logic inside the planner.
With more rows in store_record the difference will be more pronounced. On my system with 40 million test rows it went down from 14 seconds to 3 seconds with one worker, 1.4 seconds when it caps out dispatching six workers out of 16 available.
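For reference, these are the settings mentioned above that cap how many workers can be dispatched; a quick way to inspect them:
SHOW max_worker_processes;             -- total background workers the cluster may start
SHOW max_parallel_workers;             -- of those, the cap for parallel queries
SHOW max_parallel_workers_per_gather;  -- cap per Gather node in a single plan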
Caching
I'm thinking of caching these values on every write to the Task table. Either in Redis or in PostgreSQL itself.
If it's an option, it's worth a try - you can maintain proper accuracy and instantly available statistics at the cost of some (deferrable) table throughput overhead. You can replace your materialized view with a regular table or add the statistics columns to store_database
create table store_record_statistics(
database_id smallint unique references store_database(id)
on update cascade,
latest_record_updated_at timestamptz,
record_count integer default 0);
insert into store_record_statistics --initializes table with view definition
SELECT g.id, MAX(s.updated_at), COUNT(*)
FROM store_database g LEFT JOIN store_record s ON g.id = s.database_id
GROUP BY g.id;
create index store_record_statistics_idx
on store_record_statistics (database_id)
include (latest_record_updated_at,record_count);
cluster verbose store_record_statistics using store_record_statistics_idx;
And leave keeping the table up to date to a trigger that fires each time store_record changes.
CREATE FUNCTION maintain_store_record_statistics_trigger()
RETURNS TRIGGER LANGUAGE plpgsql AS
$$ BEGIN
IF TG_OP IN ('UPDATE', 'DELETE') THEN --decrement and find second most recent updated_at
UPDATE store_record_statistics srs
SET (record_count,
latest_record_updated_at)
= (record_count - 1,
(SELECT s.updated_at
FROM store_record s
WHERE s.database_id = srs.database_id
ORDER BY s.updated_at DESC NULLS LAST
LIMIT 1))
WHERE database_id = old.database_id;
END IF;
IF TG_OP in ('INSERT','UPDATE') THEN --increment and pick most recent updated_at
UPDATE store_record_statistics
SET (record_count,
latest_record_updated_at)
= (record_count + 1,
greatest(
latest_record_updated_at,
new.updated_at))
WHERE database_id=new.database_id;
END IF;
RETURN NULL;
END $$;
Making the trigger deferrable decouples its execution time from the main operation, but it will still incur its costs at the end of the transaction.
CREATE CONSTRAINT TRIGGER maintain_store_record_statistics
AFTER INSERT OR UPDATE OF database_id OR DELETE ON store_record
INITIALLY DEFERRED FOR EACH ROW
EXECUTE PROCEDURE maintain_store_record_statistics_trigger();
A TRUNCATE trigger cannot be declared FOR EACH ROW together with the other events, so it has to be defined separately:
CREATE FUNCTION maintain_store_record_statistics_truncate_trigger()
RETURNS TRIGGER LANGUAGE plpgsql AS
$$ BEGIN
update store_record_statistics
set (record_count, latest_record_updated_at)
= (0 , null);--wipes/resets all stats
RETURN NULL;
END $$;
CREATE TRIGGER maintain_store_record_statistics_truncate
AFTER TRUNCATE ON store_record
EXECUTE PROCEDURE maintain_store_record_statistics_truncate_trigger();
In my test, an update or delete of 10000 random rows in a 100-million-row table ran in seconds. A single insert of 1000 new, randomly generated rows took 25 ms without the trigger and 200 ms with it. A million rows took 30 s and 3 minutes, respectively.
A demo.
Partitioning-backed parallel plan
store_record might be a good fit for partitioning:
create table store_record(
id serial not null,
updated_at timestamptz default now(),
database_id smallint references store_database(id)
) partition by range (database_id);
DO $$
declare
vi_database_max_id smallint:=0;
vi_database_id smallint:=0;
vi_database_id_per_partition smallint:=40;--tweak for lower/higher granularity
begin
select max(id) from store_database into vi_database_max_id;
for vi_database_id in 1 .. vi_database_max_id by vi_database_id_per_partition loop
execute format ('
drop table if exists store_record_p_%1$s;
create table store_record_p_%1$s
partition of store_record
for VALUES from (%1$s) to (%1$s + %2$s)
with (parallel_workers=16);
', vi_database_id, vi_database_id_per_partition);
end loop;
end $$ ;
Splitting objects in this manner lets the planner split their scans accordingly, which works best with parallel workers, but doesn't require them. Even your initial, unaltered query behind the view is able to take advantage of this structure:
HashAggregate (cost=60014.27..60041.47 rows=2720 width=18) (actual time=910.657..910.698 rows=169 loops=1)
  Output: store_database.id, max(store_record_p_1.updated_at), count(store_record_p_1.id)
  Group Key: store_database.id
  Buffers: shared hit=827 read=9367 dirtied=5099 written=4145
  -> Hash Right Join (cost=71.20..45168.91 rows=1979382 width=14) (actual time=0.064..663.603 rows=1600020 loops=1)
        Output: store_database.id, store_record_p_1.updated_at, store_record_p_1.id
        Inner Unique: true
        Hash Cond: (store_record_p_1.database_id = store_database.id)
        Buffers: shared hit=827 read=9367 dirtied=5099 written=4145
        -> Append (cost=0.00..39893.73 rows=1979382 width=14) (actual time=0.014..390.152 rows=1600000 loops=1)
              Buffers: shared hit=826 read=9367 dirtied=5099 written=4145
              -> Seq Scan on public.store_record_p_1 (cost=0.00..8035.02 rows=530202 width=14) (actual time=0.014..77.130 rows=429068 loops=1)
                    Output: store_record_p_1.updated_at, store_record_p_1.id, store_record_p_1.database_id
                    Buffers: shared read=2733 dirtied=1367 written=1335
              -> Seq Scan on public.store_record_p_41 (cost=0.00..8067.36 rows=532336 width=14) (actual time=0.017..75.193 rows=430684 loops=1)
                    Output: store_record_p_41.updated_at, store_record_p_41.id, store_record_p_41.database_id
                    Buffers: shared read=2744 dirtied=1373 written=1341
              -> Seq Scan on public.store_record_p_81 (cost=0.00..8029.14 rows=529814 width=14) (actual time=0.017..74.583 rows=428682 loops=1)
                    Output: store_record_p_81.updated_at, store_record_p_81.id, store_record_p_81.database_id
                    Buffers: shared read=2731 dirtied=1366 written=1334
              -> Seq Scan on public.store_record_p_121 (cost=0.00..5835.90 rows=385090 width=14) (actual time=0.016..45.407 rows=311566 loops=1)
                    Output: store_record_p_121.updated_at, store_record_p_121.id, store_record_p_121.database_id
                    Buffers: shared hit=826 read=1159 dirtied=993 written=135
              -> Seq Scan on public.store_record_p_161 (cost=0.00..29.40 rows=1940 width=14) (actual time=0.008..0.008 rows=0 loops=1)
                    Output: store_record_p_161.updated_at, store_record_p_161.id, store_record_p_161.database_id
        -> Hash (cost=37.20..37.20 rows=2720 width=2) (actual time=0.041..0.042 rows=169 loops=1)
              Output: store_database.id
              Buckets: 4096 Batches: 1 Memory Usage: 38kB
              Buffers: shared hit=1
              -> Seq Scan on public.store_database (cost=0.00..37.20 rows=2720 width=2) (actual time=0.012..0.021 rows=169 loops=1)
                    Output: store_database.id
                    Buffers: shared hit=1
Planning Time: 0.292 ms
Execution Time: 910.811 ms
Demo. It's best to test which granularity gives the best performance on your setup. You can even test sub-partitioning: give each store_record.database_id its own partition, then sub-partition by date ranges, simplifying access to the most recent entries.
MATERIALIZED VIEW is not a good idea for that ...
If you just want to "calculate the number of tasks and the most recent Task.created_at value per group", then I suggest you simply:
Add two columns to the group table:
ALTER TABLE IF EXISTS "group" ADD COLUMN IF NOT EXISTS task_count integer DEFAULT 0 ;
ALTER TABLE IF EXISTS "group" ADD COLUMN IF NOT EXISTS last_created_date timestamp ; -- instead of datetime, which does not really exist in postgres ...
Update these 2 columns from trigger functions defined on table task:
CREATE OR REPLACE FUNCTION task_insert() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
UPDATE "group" AS g
SET task_count = task_count + 1
, last_created_date = NEW.created_at -- assuming the last task inserted has the latest created_at of the group; if not, reuse the solution proposed in task_delete()
WHERE g.id = NEW."group" ;
RETURN NEW ;
END ; $$ ;
CREATE OR REPLACE TRIGGER task_insert AFTER INSERT ON task
FOR EACH ROW EXECUTE FUNCTION task_insert() ;
CREATE OR REPLACE FUNCTION task_delete() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
UPDATE "group" AS g
SET task_count = task_count - 1
, last_created_date = u.last_created_at
FROM
( SELECT max(created_at) AS last_created_at
FROM task t
WHERE t."group" = OLD."group"
) AS u
WHERE g.id = OLD."group" ;
RETURN OLD ;
END ; $$ ;
CREATE OR REPLACE TRIGGER task_delete AFTER DELETE ON task
FOR EACH ROW EXECUTE FUNCTION task_delete() ;
You will need to perform a setup action at the beginning ...
UPDATE "group" AS g
SET task_count = ref.count
, last_created_date = ref.last_created_at
FROM
( SELECT group
, max(created_at) AS last_created_at
, count(*) AS count
FROM task
GROUP BY group
) AS ref
WHERE g.id= ref.group ;
... but then you will have no more performance issues with the queries!
SELECT * FROM "group"
and you will optimize the size of your database ...

Improve PostgreSQL query performance

I'm running this query in our database:
select
(
select least(2147483647, sum(pb.nr_size))
from tb_pr_dc pd
inner join tb_pr_dc_bn pb on 1=1
and pb.id_pr_dc_bn = pd.id_pr_dc_bn
where 1=1
and pd.id_pr = pt.id_pr -- outer query column
)
from
(
select regexp_split_to_table('[list of 500 ids]', ',')::integer id_pr
) pt
;
Which outputs 500 rows having a single result column and takes around 1 min and 43 secs to run. The explain (analyze, verbose, buffers) outputs the following plan:
Subquery Scan on pt (cost=0.00..805828.19 rows=1000 width=8) (actual time=96.791..103205.872 rows=500 loops=1)
Output: (SubPlan 1)
Buffers: shared hit=373771 read=153484
-> Result (cost=0.00..22.52 rows=1000 width=4) (actual time=0.434..3.729 rows=500 loops=1)
Output: ((regexp_split_to_table('[list of 500 ids]', ','))::integer)
-> ProjectSet (cost=0.00..5.02 rows=1000 width=32) (actual time=0.429..2.288 rows=500 loops=1)
Output: regexp_split_to_table('[list of 500 ids]', ',')
-> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.001 rows=1 loops=1)
SubPlan 1
-> Aggregate (cost=805.78..805.80 rows=1 width=8) (actual time=206.399..206.400 rows=1 loops=500)
Output: LEAST('2147483647'::bigint, sum((pb.nr_size)::integer))
Buffers: shared hit=373771 read=153484
-> Nested Loop (cost=0.87..805.58 rows=83 width=4) (actual time=1.468..206.247 rows=219 loops=500)
Output: pb.nr_size
Inner Unique: true
Buffers: shared hit=373771 read=153484
-> Index Scan using tb_pr_dc_in05 on db.tb_pr_dc pd (cost=0.43..104.02 rows=83 width=4) (actual time=0.233..49.289 rows=219 loops=500)
Output: pd.id_pr_dc, pd.ds_pr_dc, pd.id_pr, pd.id_user_in, pd.id_user_ex, pd.dt_in, pd.dt_ex, pd.ds_mt_ex, pd.in_at, pd.id_tp_pr_dc, pd.id_pr_xz (...)
Index Cond: ((pd.id_pr)::integer = pt.id_pr)
Buffers: shared hit=24859 read=64222
-> Index Scan using tb_pr_dc_bn_pk on db.tb_pr_dc_bn pb (cost=0.43..8.45 rows=1 width=8) (actual time=0.715..0.715 rows=1 loops=109468)
Output: pb.id_pr_dc_bn, pb.ds_ex, pb.ds_md_dc, pb.ds_m5_dc, pb.nm_aq, pb.id_user, pb.dt_in, pb.ob_pr_dc, pb.nr_size, pb.ds_sg, pb.ds_cr_ch, pb.id_user_ (...)
Index Cond: ((pb.id_pr_dc_bn)::integer = (pd.id_pr_dc_bn)::integer)
Buffers: shared hit=348912 read=89262
Planning Time: 1.151 ms
Execution Time: 103206.243 ms
The logic is: for each id_pr chosen (in the list of 500 ids) calculate the sum of the integer column pb.nr_size associated with them, returning the lesser value between this amount and the number 2,147,483,647. The result must contain 500 rows, one for each id, and we already know that they'll match at least one row in the subquery, so will not produce null values.
The index tb_pr_dc_in05 is a b-tree on id_pr only, which is of integer type. The index tb_pr_dc_bn_pk is a b-tree on the primary key id_pr_dc_bn only, which is of integer type also. Table tb_pr_dc has many rows for each id_pr. Actually, we have 209,217 unique id_prs in tb_pr_dc for a total of 13,910,855 rows. Table tb_pr_dc_bn has the same amount of rows.
As can be seen, we defined 500 ids to query tb_pr_dc, finding 109,468 rows (less than 1% of the table size) and then finding the same amount looking in tb_pr_dc_bn. Imo, the indexes look fine and the amount of rows to evaluate is minimal, so I can't understand why it's taking so much time to run this query. A lot of other queries reading a lot more of data on other tables and doing more calculations are running fine. The DBA just ran a reindex and vacuum analyze, but still it's running the same slow way. We are running PostgreSQL 11 on Linux. I'm running this query in a replica without concurrent access.
What could I be missing that could improve this query performance?
Thanks for your attention.
The time is spent jumping all over the table to find 109468 randomly scattered rows, issuing random IO requests to do so. You can verify that by turning track_io_timing on and redoing the plans (probably just leave it turned on globally by default; the overhead is low and the value it produces is high), but I'm sure enough that I don't need to see that output before reaching this conclusion. The other queries that are faster are probably accessing fewer disk pages because they access data that is more tightly packed, or is organized so that it can be read more sequentially. In fact, I would say your query is quite fast given how many pages it had to read.
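For illustration, enabling that at the session level looks like this (a sketch; changing track_io_timing requires superuser privilege, or a grant on newer versions):
SET track_io_timing = on;
-- then re-run EXPLAIN (ANALYZE, VERBOSE, BUFFERS) on the query from the question
-- and look for the "I/O Timings:" lines in the plan nodes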
You ask about why so many columns are output in the internal nodes of the plan. The reason for that is that PostgreSQL often just passes around pointers to where the tuple lives in the shared_buffers, and the tuple being pointed to has the columns that the table itself has. It could allocate memory in which to store a reformatted version of the tuple with the unnecessary columns stripped out, but that would generally be more work, not less. If there is a reason to copy and re-form the tuple anyway, it will remove the extraneous columns while doing so. But it won't do it without a reason.
One way to speed this up is to create indexes that enable index-only scans. Those would be on tb_pr_dc (id_pr, id_pr_dc_bn) and on tb_pr_dc_bn (id_pr_dc_bn, nr_size).
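A sketch of those two covering indexes (left unnamed, so Postgres picks default names):
CREATE INDEX ON tb_pr_dc (id_pr, id_pr_dc_bn);
CREATE INDEX ON tb_pr_dc_bn (id_pr_dc_bn, nr_size);
-- With both in place, the nested loop can run as two index-only scans,
-- provided the visibility maps are reasonably current (VACUUM).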
If this isn't enough, there might be other ways to improve this too; but I can't think through them if I keep getting distracted by the long strings of unmemorable unpronounceable gibberish you have for table and column names.

PostgreSQL query against millions of rows takes long on UUIDs

I have a reference table for UUIDs that is roughly 200M rows. I have ~5000 UUIDs that I want to look up in the reference table. Reference table looks like:
CREATE TABLE object_store (
    project_id UUID,
    object_id UUID,
    object_name VARCHAR(20),
    description VARCHAR(80)
);
CREATE INDEX object_store_project_idx ON object_store(project_id);
CREATE INDEX object_store_id_idx ON object_store(object_id);
* Edit #2 *
Request for the temp_objects table definition.
CREATE TEMPORARY TABLE temp_objects (
    object_id UUID
)
ON COMMIT DELETE ROWS;
The reason for the separate index is that object_id is not unique and can belong to many different projects. The lookup table is just a temp table of UUIDs (temp_objects) that I want to check (5000 object_ids).
If I query the above reference table with 1 object_id literal value, it's almost instantaneous (2ms). If the temp table only has 1 row, again, instantaneous (2ms). But with 5000 rows it takes 25 minutes to even return. Granted it pulls back >3M rows of matches.
* Edited *
This is for 1 row comparison (4.198 ms):
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)SELECT O.project_id
FROM temp_objects T JOIN object_store O ON T.object_id = O.object_id;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------
Nested Loop (cost=0.57..475780.22 rows=494005 width=65) (actual time=0.038..2.631 rows=1194 loops=1)
Buffers: shared hit=1202, local hit=1
-> Seq Scan on temp_objects t (cost=0.00..13.60 rows=360 width=16) (actual time=0.007..0.009 rows=1 loops=1)
Buffers: local hit=1
-> Index Scan using object_store_id_idx on object_store l (cost=0.57..1307.85 rows=1372 width=81) (actual time=0.027..1.707 rows=1194 loops=1)
Index Cond: (object_id = t.object_id)
Buffers: shared hit=1202
Planning time: 0.173 ms
Execution time: 3.096 ms
(9 rows)
Time: 4.198 ms
This is for 4911 row comparison (1579082.974 ms (26:19.083)):
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)SELECT O.project_id
FROM temp_objects T JOIN object_store O ON T.object_id = O.object_id;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------
Nested Loop (cost=0.57..3217316.86 rows=3507438 width=65) (actual time=0.041..1576913.100 rows=8043500 loops=1)
Buffers: shared hit=5185078 read=2887548, local hit=71
-> Seq Scan on temp_objects d (cost=0.00..96.56 rows=2556 width=16) (actual time=0.009..3.945 rows=4911 loops=1)
Buffers: local hit=71
-> Index Scan using object_store_id_idx on object_store l (cost=0.57..1244.97 rows=1372 width=81) (actual time=1.492..320.081 rows=1638 loops=4911)
Index Cond: (object_id = t.object_id)
Buffers: shared hit=5185078 read=2887548
Planning time: 0.169 ms
Execution time: 1579078.811 ms
(9 rows)
Time: 1579082.974 ms (26:19.083)
Eventually I want to group and get a count of the matching object_ids by project_id, using standard grouping. The aggregate is at the upper end (of course) of the cost. It took just about 25 minutes again to complete the below query. Yet, when I limit the temp table to only 1 row, it comes back in 21ms. Something is not adding up...
EXPLAIN SELECT O.project_id, count(*)
FROM temp_objects T JOIN object_store O ON T.object_id = O.object_id GROUP BY O.project_id;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------
HashAggregate (cost=6189484.10..6189682.84 rows=19874 width=73)
Group Key: o.project_id
-> Nested Loop (cost=0.57..6155795.69 rows=6737683 width=65)
-> Seq Scan on temp_objects t (cost=0.00..120.10 rows=4910 width=16)
-> Index Scan using object_store_id_idx on object_store o (cost=0.57..1239.98 rows=1372 width=81)
Index Cond: (object_id = t.object_id)
(6 rows)
I'm on PostgreSQL 10.6, running 2 CPUs and 8GB of RAM on an SSD. I have ANALYZEd the tables, I have set the work_mem to 50MB, shared_buffers to 2GB, and have set the random_page_cost to 1. All helped the queries actually to come back in several minutes, but still not as fast as I feel it should be.
I have the option to go to cloud computing if CPUs/RAM/parallelization make a big difference. Just looking for suggestions on how to get this simple query to return in < few seconds (if possible).
* UPDATE *
Taking the hint from Jürgen Zornig, I changed both object_id fields to be bigint, using just the top half of the UUID and reducing my data size by half. Doing the aggregate query above, the query now performs at ~16 min.
Next, taking jjane's suggestion of set enable_nestloop to off, my aggregate query jumped to 6 min! Unfortunately, none of the other suggestions have sped it up past 6 min, although it's interesting that while changing my "TEMPORARY" table to a permanent one allowed 2 workers to work on it, it didn't change the time. I think jjane is accurate in saying the IO is the binding factor here. Here is the latest explain plan from the 6 min run (wish it were faster, still, but it's better!):
explain (analyze, buffers, format text) select project_id, count(*) from object_store natural join temp_object group by project_id;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize GroupAggregate (cost=3966899.86..3967396.69 rows=19873 width=73) (actual time=368124.126..368744.157 rows=153633 loops=1)
Group Key: object_store.project_id
Buffers: shared hit=243022 read=2423215, temp read=3674 written=3687
I/O Timings: read=870720.440
-> Sort (cost=3966899.86..3966999.23 rows=39746 width=73) (actual time=368124.116..368586.497 rows=333427 loops=1)
Sort Key: object_store.project_id
Sort Method: external merge Disk: 29392kB
Buffers: shared hit=243022 read=2423215, temp read=3674 written=3687
I/O Timings: read=870720.440
-> Gather (cost=3959690.23..3963863.56 rows=39746 width=73) (actual time=366476.369..366827.313 rows=333427 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=243022 read=2423215
I/O Timings: read=870720.440
-> Partial HashAggregate (cost=3958690.23..3958888.96 rows=19873 width=73) (actual time=366472.712..366568.313 rows=111142 loops=3)
Group Key: object_store.project_id
Buffers: shared hit=243022 read=2423215
I/O Timings: read=870720.440
-> Hash Join (cost=132.50..3944473.09 rows=2843429 width=65) (actual time=7.880..363848.830 rows=2681167 loops=3)
Hash Cond: (object_store.object_id = temp_object.object_id)
Buffers: shared hit=243022 read=2423215
I/O Timings: read=870720.440
-> Parallel Seq Scan on object_store (cost=0.00..3499320.53 rows=83317153 width=73) (actual time=0.467..324932.880 rows=66653718 loops=3)
Buffers: shared hit=242934 read=2423215
I/O Timings: read=870720.440
-> Hash (cost=71.11..71.11 rows=4911 width=8) (actual time=7.349..7.349 rows=4911 loops=3)
Buckets: 8192 Batches: 1 Memory Usage: 256kB
Buffers: shared hit=66
-> Seq Scan on temp_object (cost=0.00..71.11 rows=4911 width=8) (actual time=0.014..2.101 rows=4911 loops=3)
Buffers: shared hit=66
Planning time: 0.247 ms
Execution time: 368779.757 ms
(32 rows)
Time: 368780.532 ms (06:08.781)
So I'm at 6 min per query now. Given the I/O costs, I may try an in-memory store for this table, if possible, to see if getting it off the SSD makes it even better.
UUIDs work against adaptive cache management: because of their random nature, they effectively drop the cache hit ratio once the index space is larger than memory. The ids cover a numerically wide range with an equal distribution, so in fact every id lands pretty much on its own leaf of the index tree. As the index leaf determines in which data page the row is saved on disk, pretty much every row gets its own page, resulting in a whole lot of extremely expensive I/O operations to read all these rows in.
That's the reason why it's generally not recommended to use UUIDs. If you really need UUIDs, then at least generate timestamp/MAC-prefixed ones (have a look at uuid_generate_v1() - https://www.postgresql.org/docs/9.4/uuid-ossp.html). These are numerically close to each other, so chances are higher that data rows are clustered together on fewer data pages, resulting in fewer I/O operations to read more data in.
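For illustration, generating such time-ordered UUIDs with the uuid-ossp extension mentioned above:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- v1 UUIDs embed a timestamp (plus MAC), so consecutive values sort near
-- each other and land on neighbouring index pages:
SELECT uuid_generate_v1() FROM generate_series(1, 3);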
Long Story Short: Randomness over a large range kills your index (well actually not the index, it results in a lot of expensive I/O to get data on reading and to maintain the index on writing) and therefore slows queries down to a point where it is as good as having no index at all.
Here is also an article for reference
It looks like the centerpiece of your question is why it doesn't scale up from one input row to 5000 input rows linearly. But I think that this is a red herring. How are you choosing the one row? If you choose the same one row each time, then the data will stay in cache and will be very fast. I bet this is what you are doing. If you choose a different random one row each time you do a one-row plan, you will probably find the scaling to be more linear.
You should turn on track_io_timing. I have little doubt that IO is actually the bottleneck, but it is always nice to see it actually measured and reported, I have been surprised before.
The use of temporary table will inhibit parallel query. You might want to test with a permanent table, to see if you do get use of parallel workers, and if so, whether that actually helps. If you do this test, you should use your aggregation version of the query. They parallelize more efficiently than non-aggregation queries do, and if that is your ultimate goal that is what you should initially test with.
Another thing you could try is a large setting of effective_io_concurrency. But, that will only help if your plan uses bitmap scans to start with, which the plans you show do not. Setting random_page_cost from 1 to a slightly higher value might encourage it to use bitmap scans. (effective_io_concurrency is weird because bitmap plans can get a substantial realistic benefit from a higher setting, but the planner doesn't give bitmap plans any credit for that benefit they receive. So you must be "accidentally" using that plan already in order to get the benefit)
At some point (as you increase the number of rows in temp_objects) it is going to be faster to hash that table, and hashjoin it to a seq-scan of the object_store table. Is 5000 already past the point at which that would be faster? The planner clearly doesn't think so, but the planner never gets the cut-over point exactly right, and is often off by quite a bit. What happens if you set enable_nestloop TO off; before running your query?
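A session-level sketch of that experiment, using the join from the question:
SET enable_nestloop TO off;  -- discourage (not forbid) nested loops
EXPLAIN (ANALYZE, BUFFERS)
SELECT O.project_id
FROM   temp_objects T
JOIN   object_store O ON T.object_id = O.object_id;
RESET enable_nestloop;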
Have you done low-level benchmarking of your SSD (outside of the database)? Assuming substantially all of your time is spent on IO reads and nearly none of those are fulfilled by the filesystem cache, you are getting 1576913/2887548 = 0.55ms per read. That seems pretty long. That is about what I get on a bottom-dollar laptop where the SSD is being exposed through a VM layer. I'd expect better than that from server-grade hardware.
Be sure you also have a proper index on the temp_objects table:
CREATE INDEX temp_object_id_idx ON temp_objects(object_id);
SELECT O.project_id
FROM temp_objects T
JOIN object_store O ON T.object_id = O.object_id;
Firstly: I would try to get the index into memory. What is shared_buffers set to? If it is small, let's make that bigger first. See if we can reduce the index scan IO.
Next: Are parallel queries enabled? I'm not sure that will help here very much because you have only 2 cpus, but it wouldn't hurt.
Even though the object_id column is completely random, I'd also bump up the statistics target on that table from the default (100) to a few thousand. Then run ANALYZE again (or, for thoroughness, VACUUM ANALYZE).
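A sketch of what that could look like (the target value 2000 is an arbitrary example):
ALTER TABLE object_store ALTER COLUMN object_id SET STATISTICS 2000;
ANALYZE object_store;  -- or VACUUM ANALYZE object_store;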
work_mem at 50MB may be low too. It could potentially be larger if you don't have a lot of concurrent users and you have GBs of RAM to work with. Too large and it can be counterproductive, but you could go up a bit more to see if it helps.
You could try CTAS on the big table into a new table to sort object id so that it isn't completely random.
There might be a crazy partitioning scheme you could come up with if you were using PostgreSQL 12 that would group the object ids into some even partition distribution.

Efficient PostgreSQL query on timestamp using index or bitmap index scan?

In PostgreSQL, I have an index on a date field on my tickets table.
When I compare the field against now(), the query is pretty efficient:
# explain analyze select count(1) as count from tickets where updated_at > now();
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=90.64..90.66 rows=1 width=0) (actual time=33.238..33.238 rows=1 loops=1)
-> Index Scan using tickets_updated_at_idx on tickets (cost=0.01..90.27 rows=74 width=0) (actual time=0.016..29.318 rows=40250 loops=1)
Index Cond: (updated_at > now())
Total runtime: 33.271 ms
It goes downhill and uses a Bitmap Heap Scan if I try to compare it against now() minus an interval.
# explain analyze select count(1) as count from tickets where updated_at > (now() - '24 hours'::interval);
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=180450.15..180450.17 rows=1 width=0) (actual time=543.898..543.898 rows=1 loops=1)
-> Bitmap Heap Scan on tickets (cost=21296.43..175963.31 rows=897368 width=0) (actual time=251.700..457.916 rows=924373 loops=1)
Recheck Cond: (updated_at > (now() - '24:00:00'::interval))
-> Bitmap Index Scan on tickets_updated_at_idx (cost=0.00..20847.74 rows=897368 width=0) (actual time=238.799..238.799 rows=924699 loops=1)
Index Cond: (updated_at > (now() - '24:00:00'::interval))
Total runtime: 543.952 ms
Is there a more efficient way to query using date arithmetic?
The 1st query expects to find rows=74, but actually finds rows=40250.
The 2nd query expects to find rows=897368 and actually finds rows=924699.
Of course, processing 23 x as many rows takes considerably more time. So your actual times are not surprising.
Statistics for data with updated_at > now() are outdated. Run:
ANALYZE tickets;
and repeat your queries. And you seriously have data with updated_at > now()? That sounds wrong.
It's not surprising, however, that statistics are outdated for data most recently changed. That's in the logic of things. If your query depends on current statistics, you have to run ANALYZE before you run your query.
Also test with (in your session only):
SET enable_bitmapscan = off;
and repeat your second query to see times without bitmap index scan.
Why bitmap index scan for more rows?
A plain index scan fetches rows from the heap sequentially as found in the index. That's simple, dumb and without overhead. Fast for few rows, but may end up more expensive than a bitmap index scan with a growing number of rows.
A bitmap index scan collects rows from the index before looking up the table. If multiple rows reside on the same data page, that saves repeated visits and can make things considerably faster. The more rows, the greater the chance that a bitmap index scan will save time.
For even more rows (around 5% of the table, heavily depends on actual data), the planner switches to a sequential scan of the table and doesn't use the index at all.
The optimum would be an index only scan, introduced with Postgres 9.2. That's only possible if some preconditions are met. If all relevant columns are included in the index, the index type supports it, and the visibility map indicates that all rows on a data page are visible to all transactions, that page doesn't have to be fetched from the heap (the table), and the information in the index is enough.
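For this count, updated_at is the only referenced column, so the existing tickets_updated_at_idx qualifies; a sketch of checking for it (assuming Postgres 9.2+):
VACUUM tickets;  -- brings the visibility map up to date
EXPLAIN ANALYZE
SELECT count(1) AS count FROM tickets
WHERE updated_at > (now() - '24 hours'::interval);
-- look for "Index Only Scan using tickets_updated_at_idx" in the plan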
The decision depends on your statistics (how many rows Postgres expects to find and their distribution) and on cost settings, most importantly random_page_cost, cpu_index_tuple_cost and effective_cache_size.

Postgres slow limit query

I have this simple query in pg
EXPLAIN ANALYZE
select * from email_events
where act_owner_id = 500
order by date desc
limit 500
The first execution of the query takes a very long time, about 7 seconds.
"Limit (cost=0.43..8792.83 rows=500 width=2311) (actual time=3.064..7282.497 rows=500 loops=1)"
" -> Index Scan Backward using email_events_idx_date on email_events (cost=0.43..233667.36 rows=13288 width=2311) (actual time=3.059..7282.094 rows=500 loops=1)"
" Filter: (act_owner_id = 500)"
" Rows Removed by Filter: 1053020"
"Total runtime: 7282.818 ms"
After the first execution the query is cached, I guess, and runs in 20-30 ms.
Why is the LIMIT query so slow when nothing is cached? How can I fix this?
CLUSTER table USING index seems to fix the problem. It seems that after bulk data loading, the data is scattered all over the hard drive. CLUSTER re-orders the data on the hard drive according to the index.
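A sketch, using the index name from the plan above:
-- CLUSTER takes an ACCESS EXCLUSIVE lock and is a one-time reorder;
-- the ordering degrades again as new rows arrive, so repeat after bulk loads.
CLUSTER email_events USING email_events_idx_date;
ANALYZE email_events;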
PostgreSQL thinks it will be faster to scan the date-ordered index backwards (i.e. in DESC order), reading every row and throwing away the rows that don't have the right act_owner_id. It's having to do 1053020 random reads to do this, and backward index scans aren't very fast either.
Try creating an index on email_events(date DESC, act_owner_id). I think Pg will be able to do a forward index scan on that and then use the second index term to filter rows, so it shouldn't have to do a heap lookup. Test with EXPLAIN and see.
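A sketch of that suggestion (the index name is illustrative):
CREATE INDEX email_events_date_owner_idx
    ON email_events (date DESC, act_owner_id);
EXPLAIN ANALYZE
SELECT * FROM email_events
WHERE  act_owner_id = 500
ORDER  BY date DESC
LIMIT  500;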