I have a large table with a BRIN index. If I run a query with LIMIT, it ignores the index and goes for a sequential scan; without the LIMIT it uses the index (I tried it several times with the same results).
explain (analyze,verbose,buffers,timing,costs)
select *
from testj.cdc_s5_gpps_ind
where id_transformace = 1293
limit 100
Limit (cost=0.00..349.26 rows=100 width=207) (actual time=28927.179..28927.214 rows=100 loops=1)
Output: id, date_key_trainjr...
Buffers: shared hit=225 read=1680241
-> Seq Scan on testj.cdc_s5_gpps_ind (cost=0.00..3894204.10 rows=1114998 width=207) (actual time=28927.175..28927.202 rows=100 loops=1)
Output: id, date_key_trainjr...
Filter: (cdc_s5_gpps_ind.id_transformace = 1293)
Rows Removed by Filter: 59204140
Buffers: shared hit=225 read=1680241
Planning Time: 0.149 ms
Execution Time: 28927.255 ms
explain (analyze,verbose,buffers,timing,costs)
select *
from testj.cdc_s5_gpps_ind
where id_transformace = 1293
Bitmap Heap Scan on testj.cdc_s5_gpps_ind (cost=324.36..979783.34 rows=1114998 width=207) (actual time=110.103..467.008 rows=1073725 loops=1)
Output: id, date_key_trainjr...
Recheck Cond: (cdc_s5_gpps_ind.id_transformace = 1293)
Rows Removed by Index Recheck: 11663
Heap Blocks: lossy=32000
Buffers: shared hit=32056
-> Bitmap Index Scan on gpps_brin_index (cost=0.00..45.61 rows=1120373 width=0) (actual time=2.326..2.326 rows=320000 loops=1)
Index Cond: (cdc_s5_gpps_ind.id_transformace = 1293)
Buffers: shared hit=56
Planning Time: 1.343 ms
JIT:
Functions: 2
Options: Inlining true, Optimization true, Expressions true, Deforming true
Timing: Generation 0.540 ms, Inlining 32.246 ms, Optimization 44.423 ms, Emission 22.524 ms, Total 99.732 ms
Execution Time: 537.627 ms
Is there a reason for this behavior?
PostgreSQL 12.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5), 64-bit
There is a very simple (which is not to say good) reason for this. The planner assumes rows with id_transformace = 1293 are evenly distributed throughout the table, and so it will be able to collect 100 of them very quickly with a seq scan and then stop early. But this assumption is very wrong, and the scan has to wade through a big chunk of the table to find 100 qualifying rows.
This assumption is not based on any statistics gathered on the table, so increasing the statistics target will not help. Extended statistics will not help either, as they only capture relationships between columns, not between a column and the physical ordering of rows.
There are no good clean ways to solve this purely on the stock server side. One work-around is to set enable_seqscan=off before running the query, then reset it afterwards. Another would be to add ORDER BY random() to your query; that way the planner knows it can't stop early. Or maybe the extension pg_hint_plan could help, though I've never used it.
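For example, a minimal sketch of the session-level work-around, using the table and predicate from the question:
SET enable_seqscan = off;
SELECT *
FROM testj.cdc_s5_gpps_ind
WHERE id_transformace = 1293
LIMIT 100;
RESET enable_seqscan;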
You might get it to change the plan by tweaking some of your *_cost parameters, but that would likely make other things worse. Seeing the output of EXPLAIN (ANALYZE, BUFFERS) for the LIMITed query run with enable_seqscan=off could inform that decision.
Since the column appears to be sparse/skewed, you could try increasing the statistics target for it:
ALTER TABLE testj.cdc_s5_gpps_ind
ALTER COLUMN id_transformace SET STATISTICS 1000;
ANALYZE testj.cdc_s5_gpps_ind;
Postgres 11 and above also have extended statistics, allowing multi-column correlations to be recognised and exploited. You need some understanding of the actual structure of the data in the table to use them effectively.
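As a rough sketch of the syntax only (dt_load is a hypothetical second column; extended statistics only pay off when there is a real correlated column pair):
-- dt_load is made up here purely to illustrate the syntax
CREATE STATISTICS cdc_s5_gpps_ind_stx (dependencies)
    ON id_transformace, dt_load
    FROM testj.cdc_s5_gpps_ind;
ANALYZE testj.cdc_s5_gpps_ind;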
Related
I am trying to debug a query that runs faster the more records it returns, but whose performance severely degrades (>10x slower) with smaller result sets (i.e. <10 rows) when using a small LIMIT (e.g. 10).
Example:
Fast query with 5 results out of 1M rows - no LIMIT
SELECT *
FROM transaction_internal_by_addresses
WHERE address = 'foo'
ORDER BY block_number desc;
Explain:
Sort (cost=7733.14..7749.31 rows=6468 width=126) (actual time=0.030..0.031 rows=5 loops=1)
" Output: address, block_number, log_index, transaction_hash"
Sort Key: transaction_internal_by_addresses.block_number
Sort Method: quicksort Memory: 26kB
Buffers: shared hit=10
-> Index Scan using transaction_internal_by_addresses_pkey on public.transaction_internal_by_addresses (cost=0.69..7323.75 rows=6468 width=126) (actual time=0.018..0.021 rows=5 loops=1)
" Output: address, block_number, log_index, transaction_hash"
Index Cond: (transaction_internal_by_addresses.address = 'foo'::text)
Buffers: shared hit=10
Query Identifier: -8912211611755432198
Planning Time: 0.051 ms
Execution Time: 0.041 ms
Fast query with 5 results out of 1M rows - high LIMIT
SELECT *
FROM transaction_internal_by_addresses
WHERE address = 'foo'
ORDER BY block_number desc
LIMIT 100;
Limit (cost=7570.95..7571.20 rows=100 width=126) (actual time=0.024..0.025 rows=5 loops=1)
" Output: address, block_number, log_index, transaction_hash"
Buffers: shared hit=10
-> Sort (cost=7570.95..7587.12 rows=6468 width=126) (actual time=0.023..0.024 rows=5 loops=1)
" Output: address, block_number, log_index, transaction_hash"
Sort Key: transaction_internal_by_addresses.block_number DESC
Sort Method: quicksort Memory: 26kB
Buffers: shared hit=10
-> Index Scan using transaction_internal_by_addresses_pkey on public.transaction_internal_by_addresses (cost=0.69..7323.75 rows=6468 width=126) (actual time=0.016..0.020 rows=5 loops=1)
" Output: address, block_number, log_index, transaction_hash"
Index Cond: (transaction_internal_by_addresses.address = 'foo'::text)
Buffers: shared hit=10
Query Identifier: 3421253327669991203
Planning Time: 0.042 ms
Execution Time: 0.034 ms
Slow query - low LIMIT
SELECT *
FROM transaction_internal_by_addresses
WHERE address = 'foo'
ORDER BY block_number desc
LIMIT 10;
Explain result:
Limit (cost=1000.63..6133.94 rows=10 width=126) (actual time=10277.845..11861.269 rows=0 loops=1)
" Output: address, block_number, log_index, transaction_hash"
Buffers: shared hit=56313576
-> Gather Merge (cost=1000.63..3333036.90 rows=6491 width=126) (actual time=10277.844..11861.266 rows=0 loops=1)
" Output: address, block_number, log_index, transaction_hash"
Workers Planned: 4
Workers Launched: 4
Buffers: shared hit=56313576
-> Parallel Index Scan Backward using transaction_internal_by_address_idx_block_number on public.transaction_internal_by_addresses (cost=0.57..3331263.70 rows=1623 width=126) (actual time=10256.995..10256.995 rows=0 loops=5)
" Output: address, block_number, log_index, transaction_hash"
Filter: (transaction_internal_by_addresses.address = 'foo'::text)
Rows Removed by Filter: 18485480
Buffers: shared hit=56313576
Worker 0: actual time=10251.822..10251.823 rows=0 loops=1
Buffers: shared hit=11387166
Worker 1: actual time=10250.971..10250.972 rows=0 loops=1
Buffers: shared hit=10215941
Worker 2: actual time=10252.269..10252.269 rows=0 loops=1
Buffers: shared hit=10191990
Worker 3: actual time=10252.513..10252.514 rows=0 loops=1
Buffers: shared hit=10238279
Query Identifier: 2050754902087402293
Planning Time: 0.081 ms
Execution Time: 11861.297 ms
DDL
create table transaction_internal_by_addresses
(
address text not null,
block_number bigint,
log_index bigint not null,
transaction_hash text not null,
primary key (address, log_index, transaction_hash)
);
alter table transaction_internal_by_addresses
owner to "icon-worker";
create index transaction_internal_by_address_idx_block_number
on transaction_internal_by_addresses (block_number);
So my questions
Should I just be looking at ways to force the query planner to apply the WHERE on the address (primary key)?
As you can see in the explain, the block_number index is scanned in the slow query, but I am not sure why. Can anyone explain?
Is this normal? It seems like more data should mean a harder query, not the other way around, as in this case.
Update
Apologies for (a) the delay in responding and (b) some of the inconsistencies in this question.
I have updated the EXPLAIN output, which clearly shows the 1000x performance degradation.
A multicolumn BTREE index on (address, block_number DESC) is exactly what the query planner needs to generate the result sets you mentioned. It will random-access the index to the first eligible row, then read the rows out in sequential order until it hits the LIMIT. You can also omit the DESC with no ill effects.
create index address_block_number
on transaction_internal_by_addresses
(address, block_number DESC);
As for asking "why" about query planner results, that's often an enduring mystery.
Sub-millisecond differences are hardly predictable, so you're pretty much staring at noise: random minuscule differences caused by other things happening on the system. Your fastest query runs in tens of microseconds, the slowest in a single millisecond - all of these are below typical network, mouse-click, and screen-refresh latencies.
The planner already applies a where on your address: Index Cond: (a_table.address = 'foo'::text)
You're ordering by block_number, so it makes sense to scan it. It's also taking place in all three of your queries because they all do that.
It is normal - here's an online demo with similar differences. If what you're after is some reliable time estimation, use pgbench to run your queries multiple times and average out the timings.
Your third query plan seems to be coming from a different query, against a different table: a_table, compared to the initial two: transaction_internal_by_addresses.
If you were just wondering why these timings look like that, it's pretty much random and/or irrelevant at this level. If you're facing some kind of problem because of how these queries behave, it'd be better to focus on describing that problem - the queries themselves all do the same thing and the difference in their execution times is negligible.
Should I just be looking at ways to force the query planner to apply the WHERE on the address (primary key)?
Yes, it can improve performance.
As you can see in the explain, the block_number index is scanned in the slow query, but I am not sure why. Can anyone explain?
Because Sort keys are different. Look carefully:
Sort Key: transaction_internal_by_addresses.block_number DESC
Sort Key: a_table.a_indexed_row DESC
It seems a_table.a_indexed_row involves something less performant (e.g. more columns, a more complex structure, etc.).
Is this normal? It seems like more data should mean a harder query, not the other way around, as in this case.
Normally, more data means more time. But as I mentioned above, maybe a_table.a_indexed_row returns more values, has more columns, etc.
I have a table notification with 2 columns: id (primary key) and entity (jsonb). This query:
SELECT *
FROM notification
WHERE entity ->> 'name' = 'Hello World'
ORDER by id DESC
executes much faster than the same query without the ORDER BY:
SELECT *
FROM notification
WHERE entity ->> 'name' = 'Hello World'
There is no entity -> 'name' index.
I've noticed that the first query uses an index scan, while the second one uses a sequential scan. The difference in execution time: 60 sec vs 0.5 sec.
Number of rows in the table: 16696
Returned result: 95 rows
How can this be explained?
UPD. EXPLAIN (ANALYZE, BUFFERS) for the first query:
Index Scan using notification_pkey on notification (cost=0.41..233277.12 rows=1 width=264) (actual time=480.582..583.623 rows=95 loops=1)
Filter: ((entity ->> 'name'::text) = 'Hello World'::text)
Rows Removed by Filter: 16606
Buffers: shared hit=96807
Planning Time: 0.211 ms
Execution Time: 583.826 ms
For the second query:
Seq Scan on notification (cost=0.00..502145.78 rows=1 width=264) (actual time=49675.453..60160.280 rows=95 loops=1)
Filter: ((entity ->> 'name'::text) = 'Hello World'::text)
Rows Removed by Filter: 16606
Buffers: shared hit=91608 read=497908
I/O Timings: read=55311.842
Planning Time: 0.112 ms
Execution Time: 60160.309 ms
UPD 2. I executed VACUUM ANALYZE on the table, but it didn't improve the performance.
It seems that your table is bloated to the extent that it contains almost 500000 empty 8kB blocks. Since those are not read during an index scan, the index scan is actually faster than the sequential scan.
You should find and fix the problem that causes that bloat, then take the downtime to reorganize the table with
VACUUM (FULL) notification;
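To confirm the bloat before scheduling that downtime, one option (a sketch, assuming the pgstattuple contrib extension can be installed) is:
CREATE EXTENSION IF NOT EXISTS pgstattuple;
-- reports live/dead tuple counts and free space for the table
SELECT table_len, tuple_count, dead_tuple_count, free_space, free_percent
FROM pgstattuple('notification');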
I'm running this query in our database:
select
(
select least(2147483647, sum(pb.nr_size))
from tb_pr_dc pd
inner join tb_pr_dc_bn pb on 1=1
and pb.id_pr_dc_bn = pd.id_pr_dc_bn
where 1=1
and pd.id_pr = pt.id_pr -- outer query column
)
from
(
select regexp_split_to_table('[list of 500 ids]', ',')::integer id_pr
) pt
;
Which outputs 500 rows having a single result column and takes around 1 min and 43 secs to run. The explain (analyze, verbose, buffers) outputs the following plan:
Subquery Scan on pt (cost=0.00..805828.19 rows=1000 width=8) (actual time=96.791..103205.872 rows=500 loops=1)
Output: (SubPlan 1)
Buffers: shared hit=373771 read=153484
-> Result (cost=0.00..22.52 rows=1000 width=4) (actual time=0.434..3.729 rows=500 loops=1)
Output: ((regexp_split_to_table('[list of 500 ids]', ',')::integer id_pr)
-> ProjectSet (cost=0.00..5.02 rows=1000 width=32) (actual time=0.429..2.288 rows=500 loops=1)
Output: (regexp_split_to_table('[list of 500 ids]', ',')::integer id_pr
-> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.001 rows=1 loops=1)
SubPlan 1
-> Aggregate (cost=805.78..805.80 rows=1 width=8) (actual time=206.399..206.400 rows=1 loops=500)
Output: LEAST('2147483647'::bigint, sum((pb.nr_size)::integer))
Buffers: shared hit=373771 read=153484
-> Nested Loop (cost=0.87..805.58 rows=83 width=4) (actual time=1.468..206.247 rows=219 loops=500)
Output: pb.nr_size
Inner Unique: true
Buffers: shared hit=373771 read=153484
-> Index Scan using tb_pr_dc_in05 on db.tb_pr_dc pd (cost=0.43..104.02 rows=83 width=4) (actual time=0.233..49.289 rows=219 loops=500)
Output: pd.id_pr_dc, pd.ds_pr_dc, pd.id_pr, pd.id_user_in, pd.id_user_ex, pd.dt_in, pd.dt_ex, pd.ds_mt_ex, pd.in_at, pd.id_tp_pr_dc, pd.id_pr_xz (...)
Index Cond: ((pd.id_pr)::integer = pt.id_pr)
Buffers: shared hit=24859 read=64222
-> Index Scan using tb_pr_dc_bn_pk on db.tb_pr_dc_bn pb (cost=0.43..8.45 rows=1 width=8) (actual time=0.715..0.715 rows=1 loops=109468)
Output: pb.id_pr_dc_bn, pb.ds_ex, pb.ds_md_dc, pb.ds_m5_dc, pb.nm_aq, pb.id_user, pb.dt_in, pb.ob_pr_dc, pb.nr_size, pb.ds_sg, pb.ds_cr_ch, pb.id_user_ (...)
Index Cond: ((pb.id_pr_dc_bn)::integer = (pd.id_pr_dc_bn)::integer)
Buffers: shared hit=348912 read=89262
Planning Time: 1.151 ms
Execution Time: 103206.243 ms
The logic is: for each id_pr chosen (in the list of 500 ids) calculate the sum of the integer column pb.nr_size associated with them, returning the lesser value between this amount and the number 2,147,483,647. The result must contain 500 rows, one for each id, and we already know that they'll match at least one row in the subquery, so will not produce null values.
The index tb_pr_dc_in05 is a b-tree on id_pr only, which is of integer type. The index tb_pr_dc_bn_pk is a b-tree on the primary key id_pr_dc_bn only, which is of integer type also. Table tb_pr_dc has many rows for each id_pr. Actually, we have 209,217 unique id_prs in tb_pr_dc for a total of 13,910,855 rows. Table tb_pr_dc_bn has the same amount of rows.
As can be seen, we defined 500 ids to query tb_pr_dc, finding 109,468 rows (less than 1% of the table size) and then finding the same amount looking in tb_pr_dc_bn. Imo, the indexes look fine and the amount of rows to evaluate is minimal, so I can't understand why it's taking so much time to run this query. A lot of other queries reading a lot more of data on other tables and doing more calculations are running fine. The DBA just ran a reindex and vacuum analyze, but still it's running the same slow way. We are running PostgreSQL 11 on Linux. I'm running this query in a replica without concurrent access.
What could I be missing that could improve this query performance?
Thanks for your attention.
The time is spent jumping all over the table to find 109468 randomly scattered rows, issuing random IO requests to do so. You can verify that by turning track_io_timing on and redoing the plans (probably just leave it turned on globally by default; the overhead is low and the value it produces is high), but I'm sure enough that I don't need to see that output before reaching this conclusion. The other queries that are faster are probably accessing fewer disk pages because they access data that is more tightly packed, or is organized so that it can be read more sequentially. In fact, I would say your query is quite fast given how many pages it had to read.
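A sketch of turning it on, either for the current session or globally:
-- for the current session only
SET track_io_timing = on;
-- or globally; picked up on the next configuration reload
ALTER SYSTEM SET track_io_timing = on;
SELECT pg_reload_conf();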
You ask about why so many columns are output in the internal nodes of the plan. The reason is that PostgreSQL often just passes around pointers to where the tuple lives in shared_buffers, and the tuple being pointed to has the columns that the table itself has. It could allocate memory in which to store a reformatted version of the tuple with the unnecessary columns stripped out, but that would generally be more work, not less. If there is a reason to copy and re-form the tuple anyway, it will remove the extraneous columns while it does so, but it won't do that without a reason.
One way to speed this up is to create indexes that enable index-only scans. Those would be on tb_pr_dc (id_pr, id_pr_dc_bn) and on tb_pr_dc_bn (id_pr_dc_bn, nr_size).
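A sketch of those two indexes (names left to PostgreSQL's defaults):
CREATE INDEX ON tb_pr_dc (id_pr, id_pr_dc_bn);
CREATE INDEX ON tb_pr_dc_bn (id_pr_dc_bn, nr_size);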
If this isn't enough, there might be other ways to improve this too; but I can't think through them if I keep getting distracted by the long strings of unmemorable unpronounceable gibberish you have for table and column names.
I have a reference table for UUIDs that is roughly 200M rows. I have ~5000 UUIDs that I want to look up in the reference table. Reference table looks like:
CREATE TABLE object_store (
project_id UUID,
object_id UUID,
object_name VARCHAR(20),
description VARCHAR(80)
);
CREATE INDEX object_store_project_idx ON object_store(project_id);
CREATE INDEX object_store_id_idx ON object_store(object_id);
* Edit #2 *
Request for the temp_objects table definition.
CREATE TEMPORARY TABLE temp_objects (
object_id UUID
)
ON COMMIT DELETE ROWS;
The reason for the separate index is that object_id is not unique and can belong to many different projects. The lookup list is just a temp table of UUIDs (temp_objects) that I want to check (5000 object_ids).
If I query the above reference table with 1 object_id literal value, it's almost instantaneous (2ms). If the temp table only has 1 row, again, instantaneous (2ms). But with 5000 rows it takes 25 minutes to even return. Granted it pulls back >3M rows of matches.
* Edited *
This is for 1 row comparison (4.198 ms):
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)SELECT O.project_id
FROM temp_objects T JOIN object_store O ON T.object_id = O.object_id;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------
Nested Loop (cost=0.57..475780.22 rows=494005 width=65) (actual time=0.038..2.631 rows=1194 loops=1)
Buffers: shared hit=1202, local hit=1
-> Seq Scan on temp_objects t (cost=0.00..13.60 rows=360 width=16) (actual time=0.007..0.009 rows=1 loops=1)
Buffers: local hit=1
-> Index Scan using object_store_id_idx on object_store l (cost=0.57..1307.85 rows=1372 width=81) (actual time=0.027..1.707 rows=1194 loops=1)
Index Cond: (object_id = t.object_id)
Buffers: shared hit=1202
Planning time: 0.173 ms
Execution time: 3.096 ms
(9 rows)
Time: 4.198 ms
This is for 4911 row comparison (1579082.974 ms (26:19.083)):
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)SELECT O.project_id
FROM temp_objects T JOIN object_store O ON T.object_id = O.object_id;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------
Nested Loop (cost=0.57..3217316.86 rows=3507438 width=65) (actual time=0.041..1576913.100 rows=8043500 loops=1)
Buffers: shared hit=5185078 read=2887548, local hit=71
-> Seq Scan on temp_objects d (cost=0.00..96.56 rows=2556 width=16) (actual time=0.009..3.945 rows=4911 loops=1)
Buffers: local hit=71
-> Index Scan using object_store_id_idx on object_store l (cost=0.57..1244.97 rows=1372 width=81) (actual time=1.492..320.081 rows=1638 loops=4911)
Index Cond: (object_id = t.object_id)
Buffers: shared hit=5185078 read=2887548
Planning time: 0.169 ms
Execution time: 1579078.811 ms
(9 rows)
Time: 1579082.974 ms (26:19.083)
Eventually I want to group and get a count of the matching object_ids by project_id, using standard grouping. The aggregate is at the upper end (of course) of the cost. It took just about 25 minutes again to complete the below query. Yet, when I limit the temp table to only 1 row, it comes back in 21ms. Something is not adding up...
EXPLAIN SELECT O.project_id, count(*)
FROM temp_objects T JOIN object_store O ON T.object_id = O.object_id GROUP BY O.project_id;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------
HashAggregate (cost=6189484.10..6189682.84 rows=19874 width=73)
Group Key: o.project_id
-> Nested Loop (cost=0.57..6155795.69 rows=6737683 width=65)
-> Seq Scan on temp_objects t (cost=0.00..120.10 rows=4910 width=16)
-> Index Scan using object_store_id_idx on object_store o (cost=0.57..1239.98 rows=1372 width=81)
Index Cond: (object_id = t.object_id)
(6 rows)
I'm on PostgreSQL 10.6, running 2 CPUs and 8GB of RAM on an SSD. I have ANALYZEd the tables, set work_mem to 50MB, shared_buffers to 2GB, and random_page_cost to 1. All of this helped the queries come back in several minutes, but still not as fast as I feel they should.
I have the option to go to cloud computing if CPUs/RAM/parallelization make a big difference. Just looking for suggestions on how to get this simple query to return in < few seconds (if possible).
* UPDATE *
Taking the hint from Jürgen Zornig, I changed both object_id fields to bigint, using just the top half of the UUID and reducing my data size by half. The aggregate query above now performs in ~16 min.
Next, taking jjane's suggestion of setting enable_nestloop to off, my aggregate query dropped to 6 min! Unfortunately, none of the other suggestions have sped it up past 6 min, although it's interesting that changing my "TEMPORARY" table to a permanent one allowed 2 workers to work on it; it didn't change the time, though. I think jjane is right that the IO is the binding factor here. Here is the latest explain plan from the 6-min run (wish it were faster, still, but it's better!):
explain (analyze, buffers, format text) select project_id, count(*) from object_store natural join temp_object group by project_id;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize GroupAggregate (cost=3966899.86..3967396.69 rows=19873 width=73) (actual time=368124.126..368744.157 rows=153633 loops=1)
Group Key: object_store.project_id
Buffers: shared hit=243022 read=2423215, temp read=3674 written=3687
I/O Timings: read=870720.440
-> Sort (cost=3966899.86..3966999.23 rows=39746 width=73) (actual time=368124.116..368586.497 rows=333427 loops=1)
Sort Key: object_store.project_id
Sort Method: external merge Disk: 29392kB
Buffers: shared hit=243022 read=2423215, temp read=3674 written=3687
I/O Timings: read=870720.440
-> Gather (cost=3959690.23..3963863.56 rows=39746 width=73) (actual time=366476.369..366827.313 rows=333427 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=243022 read=2423215
I/O Timings: read=870720.440
-> Partial HashAggregate (cost=3958690.23..3958888.96 rows=19873 width=73) (actual time=366472.712..366568.313 rows=111142 loops=3)
Group Key: object_store.project_id
Buffers: shared hit=243022 read=2423215
I/O Timings: read=870720.440
-> Hash Join (cost=132.50..3944473.09 rows=2843429 width=65) (actual time=7.880..363848.830 rows=2681167 loops=3)
Hash Cond: (object_store.object_id = temp_object.object_id)
Buffers: shared hit=243022 read=2423215
I/O Timings: read=870720.440
-> Parallel Seq Scan on object_store (cost=0.00..3499320.53 rows=83317153 width=73) (actual time=0.467..324932.880 rows=66653718 loops=3)
Buffers: shared hit=242934 read=2423215
I/O Timings: read=870720.440
-> Hash (cost=71.11..71.11 rows=4911 width=8) (actual time=7.349..7.349 rows=4911 loops=3)
Buckets: 8192 Batches: 1 Memory Usage: 256kB
Buffers: shared hit=66
-> Seq Scan on temp_object (cost=0.00..71.11 rows=4911 width=8) (actual time=0.014..2.101 rows=4911 loops=3)
Buffers: shared hit=66
Planning time: 0.247 ms
Execution time: 368779.757 ms
(32 rows)
Time: 368780.532 ms (06:08.781)
So I'm at 6 min per query now. Given the I/O costs, I may try an in-memory store for this table, if possible, to see whether getting it off the SSD makes it even better.
UUIDs work against adaptive cache management and, because of their random nature, effectively drop the cache hit ratio, since the index space is larger than memory. The ids cover a numerically wide range and are equally distributed, so in effect every id lands pretty much on its own leaf of the index tree. Since the index leaf determines in which data page the row is saved on disk, pretty much every row gets its own page, resulting in a whole lot of extremely expensive I/O operations to read all these rows in.
That's the reason why it's generally not recommended to use UUIDs, and if you really need UUIDs, then at least generate timestamp/MAC-prefixed UUIDs (have a look at uuid_generate_v1() - https://www.postgresql.org/docs/9.4/uuid-ossp.html) that are numerically close to each other; the chances are then higher that data rows are clustered together on fewer data pages, resulting in fewer I/O operations to get more data in.
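A minimal sketch, assuming the uuid-ossp extension is available:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
SELECT uuid_generate_v1();  -- time/MAC-based, values are roughly ascending
SELECT uuid_generate_v4();  -- fully random, scatters across the whole index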
Long story short: randomness over a large range kills your index (well, actually not the index itself; it results in a lot of expensive I/O to get data on reads and to maintain the index on writes) and therefore slows queries down to the point where it is as good as having no index at all.
Here is also an article for reference
It looks like the centerpiece of your question is why it doesn't scale up from one input row to 5000 input rows linearly. But I think that this is a red herring. How are you choosing the one row? If you choose the same one row each time, then the data will stay in cache and will be very fast. I bet this is what you are doing. If you choose a different random one row each time you do a one-row plan, you will probably find the scaling to be more linear.
You should turn on track_io_timing. I have little doubt that IO is actually the bottleneck, but it is always nice to see it actually measured and reported; I have been surprised before.
The use of a temporary table will inhibit parallel query. You might want to test with a permanent table, to see if you do get use of parallel workers, and if so, whether that actually helps. If you do this test, you should use the aggregation version of the query; aggregations parallelize more efficiently than non-aggregation queries do, and if that is your ultimate goal, that is what you should initially test with.
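A sketch of that test, copying the lookup list into a permanent table (temp_objects_perm is a made-up name):
CREATE TABLE temp_objects_perm AS TABLE temp_objects;
ANALYZE temp_objects_perm;
SELECT O.project_id, count(*)
FROM temp_objects_perm T
JOIN object_store O ON T.object_id = O.object_id
GROUP BY O.project_id;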
Another thing you could try is a large setting of effective_io_concurrency. But, that will only help if your plan uses bitmap scans to start with, which the plans you show do not. Setting random_page_cost from 1 to a slightly higher value might encourage it to use bitmap scans. (effective_io_concurrency is weird because bitmap plans can get a substantial realistic benefit from a higher setting, but the planner doesn't give bitmap plans any credit for that benefit they receive. So you must be "accidentally" using that plan already in order to get the benefit)
At some point (as you increase the number of rows in temp_objects) it is going to be faster to hash that table, and hashjoin it to a seq-scan of the object_store table. Is 5000 already past the point at which that would be faster? The planner clearly doesn't think so, but the planner never gets the cut-over point exactly right, and is often off by quite a bit. What happens if you set enable_nestloop TO off; before running your query?
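For example, a quick session-level experiment along these lines:
SET enable_nestloop TO off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT O.project_id, count(*)
FROM temp_objects T
JOIN object_store O ON T.object_id = O.object_id
GROUP BY O.project_id;
RESET enable_nestloop;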
Have you done low-level benchmarking of your SSD (outside of the database)? Assuming substantially all of your time is spent on IO reads and nearly none of those are fulfilled by the filesystem cache, you are getting 1576913/2887548 = 0.55ms per read. That seems pretty long. That is about what I get on a bottom-dollar laptop where the SSD is being exposed through a VM layer. I'd expect better than that from server-grade hardware.
Also be sure you have a proper index on the temp_objects table:
CREATE INDEX temp_object_id_idx ON temp_objects(object_id);
SELECT O.project_id
FROM temp_objects T
JOIN object_store O ON T.object_id = O.object_id;
Firstly: I would try to get the index into memory. What is shared_buffers set to? If it is small, let's make that bigger first and see if we can reduce the index scan IO.
Next: are parallel queries enabled? I'm not sure that will help very much here because you have only 2 CPUs, but it wouldn't hurt.
Even though the object_id column is completely random, I'd also bump up the statistics target on that column from the default (100) to a few thousand, then run ANALYZE again (or, for thoroughness, VACUUM ANALYZE).
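For example (a sketch; 2000 is an arbitrary target, not a tuned value):
ALTER TABLE object_store ALTER COLUMN object_id SET STATISTICS 2000;
ANALYZE object_store;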
work_mem at 50MB may be low too. It could potentially be larger if you don't have a lot of concurrent users and you have GBs of RAM to work with. Too large can be counterproductive, but you could go up a bit more to see if it helps.
You could try a CREATE TABLE ... AS (CTAS) of the big table into a new table, ordered by object_id, so that the physical layout isn't completely random.
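Roughly like this (object_store_sorted is a hypothetical name; the indexes need to be recreated on the new table):
CREATE TABLE object_store_sorted AS
SELECT * FROM object_store ORDER BY object_id;
CREATE INDEX ON object_store_sorted (object_id);
CREATE INDEX ON object_store_sorted (project_id);
ANALYZE object_store_sorted;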
There might be a crazy partitioning scheme you could come up with if you were using PostgreSQL 12 that would group the object ids into some even partition distribution.
On Heroku Postgres my GROUP BY query is taking more than 2 seconds; is that normal? How can I optimize this further? Both columns are indexed, so I was assuming it should run faster.
The query is,
EXPLAIN ANALYZE(SELECT COUNT(*), context, call_type FROM call_tasks GROUP BY call_tasks.context, call_tasks.call_type );
query plan and analyze:
GroupAggregate (cost=0.08..11500.84 rows=12 width=11) (actual time=35.395..2545.426 rows=7 loops=1)
Group Key: context, call_type
-> Index Only Scan using index_call_tasks_on_context_and_call_type on call_tasks (cost=0.08..10338.79 rows=774677 width=11) (actual time=0.022..1480.729 rows=781076 loops=1)
Heap Fetches: 43682
Planning time: 0.085 ms
Execution time: 2545.464 ms
(6 rows)
I am using the hobby-basic database: https://elements.heroku.com/addons/heroku-postgresql.
That's the best you can get; further improvements can only be made by improving the hardware:
Faster CPU for faster aggregation.
More memory to keep everything in RAM.
Other than that, if you are happy with an approximate result, you could use a materialized view that gets updated regularly.
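A minimal sketch of that approach (the view name is made up; refresh it on whatever schedule fits the data):
CREATE MATERIALIZED VIEW call_task_counts AS
SELECT context, call_type, count(*) AS cnt
FROM call_tasks
GROUP BY context, call_type;
-- re-run periodically, e.g. from a scheduled job
REFRESH MATERIALIZED VIEW call_task_counts;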