I have a table notification with two columns: id (primary key) and entity (jsonb). This query:
SELECT *
FROM notification
WHERE entity ->> 'name' = 'Hello World'
ORDER by id DESC
executes much faster than the same query without the ORDER BY:
SELECT *
FROM notification
WHERE entity ->> 'name' = 'Hello World'
There is no index on entity ->> 'name'.
I've noticed that the first query uses an index scan, while the second one uses a sequential scan. The difference in execution time: 0.5 sec vs 60 sec.
Number of rows in the table: 16696
Returned result: 95 rows
How to explain it?
UPD. EXPLAIN (ANALYZE, BUFFERS) for the first query:
Index Scan using notification_pkey on notification (cost=0.41..233277.12 rows=1 width=264) (actual time=480.582..583.623 rows=95 loops=1)
Filter: ((entity ->> 'name'::text) = 'Hello World'::text)
Rows Removed by Filter: 16606
Buffers: shared hit=96807
Planning Time: 0.211 ms
Execution Time: 583.826 ms
For the second query:
Seq Scan on notification (cost=0.00..502145.78 rows=1 width=264) (actual time=49675.453..60160.280 rows=95 loops=1)
Filter: ((entity ->> 'name'::text) = 'Hello World'::text)
Rows Removed by Filter: 16606
Buffers: shared hit=91608 read=497908
I/O Timings: read=55311.842
Planning Time: 0.112 ms
Execution Time: 60160.309 ms
UPD 2. I executed VACUUM ANALYZE on the table, but it didn't improve the performance.
It seems that your table is bloated to the extent that it contains almost 500000 empty 8kB blocks. Since those are not read during an index scan, the index scan is actually faster than the sequential scan.
You should find and fix the problem that causes that bloat, then take the down time to reorganize the table with
VACUUM (FULL) notification;
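You can gauge the bloat before taking the downtime; this sketch compares the table's physical size with its live/dead tuple counts from the statistics views:

```sql
-- How big is the table on disk, and how many live/dead rows does it hold?
SELECT pg_size_pretty(pg_table_size('notification')) AS on_disk,
       n_live_tup,
       n_dead_tup
FROM   pg_stat_user_tables
WHERE  relname = 'notification';
```

For ~16,000 rows of a few hundred bytes each, an on-disk size far beyond a few megabytes points to bloat.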
I am trying to debug a query that runs faster the more records it returns, but whose performance severely degrades (>10x slower) when it returns few rows (i.e. <10) with a small LIMIT (e.g. LIMIT 10).
Example:
Fast query with 5 results out of 1M rows - no LIMIT
SELECT *
FROM transaction_internal_by_addresses
WHERE address = 'foo'
ORDER BY block_number desc;
Explain:
Sort (cost=7733.14..7749.31 rows=6468 width=126) (actual time=0.030..0.031 rows=5 loops=1)
Output: address, block_number, log_index, transaction_hash
Sort Key: transaction_internal_by_addresses.block_number
Sort Method: quicksort Memory: 26kB
Buffers: shared hit=10
-> Index Scan using transaction_internal_by_addresses_pkey on public.transaction_internal_by_addresses (cost=0.69..7323.75 rows=6468 width=126) (actual time=0.018..0.021 rows=5 loops=1)
Output: address, block_number, log_index, transaction_hash
Index Cond: (transaction_internal_by_addresses.address = 'foo'::text)
Buffers: shared hit=10
Query Identifier: -8912211611755432198
Planning Time: 0.051 ms
Execution Time: 0.041 ms
Fast query with 5 results out of 1M rows - high LIMIT
SELECT *
FROM transaction_internal_by_addresses
WHERE address = 'foo'
ORDER BY block_number desc
LIMIT 100;
Limit (cost=7570.95..7571.20 rows=100 width=126) (actual time=0.024..0.025 rows=5 loops=1)
Output: address, block_number, log_index, transaction_hash
Buffers: shared hit=10
-> Sort (cost=7570.95..7587.12 rows=6468 width=126) (actual time=0.023..0.024 rows=5 loops=1)
Output: address, block_number, log_index, transaction_hash
Sort Key: transaction_internal_by_addresses.block_number DESC
Sort Method: quicksort Memory: 26kB
Buffers: shared hit=10
-> Index Scan using transaction_internal_by_addresses_pkey on public.transaction_internal_by_addresses (cost=0.69..7323.75 rows=6468 width=126) (actual time=0.016..0.020 rows=5 loops=1)
Output: address, block_number, log_index, transaction_hash
Index Cond: (transaction_internal_by_addresses.address = 'foo'::text)
Buffers: shared hit=10
Query Identifier: 3421253327669991203
Planning Time: 0.042 ms
Execution Time: 0.034 ms
Slow query - low LIMIT
SELECT *
FROM transaction_internal_by_addresses
WHERE address = 'foo'
ORDER BY block_number desc
LIMIT 10;
Explain result:
Limit (cost=1000.63..6133.94 rows=10 width=126) (actual time=10277.845..11861.269 rows=0 loops=1)
Output: address, block_number, log_index, transaction_hash
Buffers: shared hit=56313576
-> Gather Merge (cost=1000.63..3333036.90 rows=6491 width=126) (actual time=10277.844..11861.266 rows=0 loops=1)
Output: address, block_number, log_index, transaction_hash
Workers Planned: 4
Workers Launched: 4
Buffers: shared hit=56313576
-> Parallel Index Scan Backward using transaction_internal_by_address_idx_block_number on public.transaction_internal_by_addresses (cost=0.57..3331263.70 rows=1623 width=126) (actual time=10256.995..10256.995 rows=0 loops=5)
Output: address, block_number, log_index, transaction_hash
Filter: (transaction_internal_by_addresses.address = 'foo'::text)
Rows Removed by Filter: 18485480
Buffers: shared hit=56313576
Worker 0: actual time=10251.822..10251.823 rows=0 loops=1
Buffers: shared hit=11387166
Worker 1: actual time=10250.971..10250.972 rows=0 loops=1
Buffers: shared hit=10215941
Worker 2: actual time=10252.269..10252.269 rows=0 loops=1
Buffers: shared hit=10191990
Worker 3: actual time=10252.513..10252.514 rows=0 loops=1
Buffers: shared hit=10238279
Query Identifier: 2050754902087402293
Planning Time: 0.081 ms
Execution Time: 11861.297 ms
DDL
create table transaction_internal_by_addresses
(
address text not null,
block_number bigint,
log_index bigint not null,
transaction_hash text not null,
primary key (address, log_index, transaction_hash)
);
alter table transaction_internal_by_addresses
owner to "icon-worker";
create index transaction_internal_by_address_idx_block_number
on transaction_internal_by_addresses (block_number);
So my questions:
Should I just be looking at ways to force the query planner to apply the WHERE on the address (primary key)?
As you can see in the explain, the index on block_number is scanned in the slow query, but I am not sure why. Can anyone explain?
Is this normal? Normally the more data a query touches, the harder it is, but here it is the other way around.
Update
Apologies for (a) the delay in responding and (b) some of the inconsistencies in this question.
I have updated the EXPLAIN output, clearly showing the 1000x performance degradation.
A multicolumn BTREE index on (address, block_number DESC) is exactly what the query planner needs to generate the result sets you mentioned. It will random-access the index to the first eligible row, then read the rows out in sequential order until it hits the LIMIT. You can also omit the DESC with no ill effects.
create index address_block_number
on transaction_internal_by_addresses
(address, block_number DESC);
As for asking "why" about query planner results, that's often an enduring mystery.
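To confirm the new index is picked up, you can re-run the slow query under EXPLAIN (the index name matches the definition above):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM transaction_internal_by_addresses
WHERE address = 'foo'
ORDER BY block_number DESC
LIMIT 10;
-- the plan should now show an index scan using address_block_number
-- instead of the Parallel Index Scan Backward on the block_number index
```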
Sub-millisecond differences are hardly predictable, so you're pretty much staring at noise: random minuscule differences caused by other things happening on the system. Your fastest query runs in tens of microseconds, the slowest in a single millisecond - all of these are below typical network, mouse click, and screen refresh latencies.
The planner already applies a where on your address: Index Cond: (a_table.address = 'foo'::text)
You're ordering by block_number, so it makes sense to scan its index. That also happens in all three of your queries, because they all do that ordering.
It is normal - here's an online demo with similar differences. If what you're after is some reliable time estimation, use pgbench to run your queries multiple times and average out the timings.
Your third query plan seems to come from a different query, against a different table (a_table), compared to the initial two (transaction_internal_by_addresses).
If you were just wondering why these timings look like that, it's pretty much random and/or irrelevant at this level. If you're facing some kind of problem because of how these queries behave, it'd be better to focus on describing that problem - the queries themselves all do the same thing and the difference in their execution times is negligible.
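The pgbench suggestion can be sketched like this: put the statement in a file and replay it (the file name and database name are placeholders):

```sql
-- bench.sql: the statement to time
SELECT *
FROM transaction_internal_by_addresses
WHERE address = 'foo'
ORDER BY block_number DESC
LIMIT 100;

-- then, from a shell:
--   pgbench -n -f bench.sql -T 10 yourdb
-- and read the reported average latency over the 10-second run
```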
Should I just be looking at ways to force the query planner to apply the WHERE on the address (primary key)?
Yes, it can improve performance.
As you can see in the explain, the row block_number is scanned in the slow query but I am not sure why. Can anyone explain?
Because the sort keys are different. Look carefully:
Sort Key: transaction_internal_by_addresses.block_number DESC
Sort Key: a_table.a_indexed_row DESC
It seems a_table.a_indexed_row is less performant (e.g. more columns, a more complex structure, etc.).
Is this normal? Seems like the more data, the harder the query, not the other way around as in this case.
Normally more data means more time. But as I mentioned above, maybe a_table.a_indexed_row returns more values, has more columns, etc.
I'm making a site that stores a large amount of data (8 data points for 313 item_ids every 10 seconds over 24 hr) and I serve that data to users on demand. The request is supplied with an item ID with which I query the database with something along the lines of SELECT * FROM current_day_data WHERE item_id = <supplied ID> (assuming the id is valid).
CREATE TABLE current_day_data (
"time" bigint,
item_id text NOT NULL,
-- some data,
id integer NOT NULL
);
CREATE INDEX item_id_lookup ON public.current_day_data USING btree (item_id);
This works fine, but the request takes about a third of a second, so I'm looking into either other database options to help optimize this, or some way to optimize the query itself.
My current setup is a PostgreSQL database with an index on the item ID column, but I feel like there are options in the realm of NoSQL (an area I'm unfamiliar with) due to its similarity to a hash table.
My ideal solution would be a hash table with the item IDs as the key and the data as a JSON-like object but I don't know what options could achieve that.
tl;dr: how do I optimize SELECT * FROM current_day_data WHERE item_id = <supplied ID>, through better querying or a new database solution?
edit: here's the EXPLAIN (ANALYZE, BUFFERS) output for SELECT * FROM current_day_data
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
Seq Scan on current_day_data (cost=0.00..46811.09 rows=2584364 width=75) (actual time=0.013..291.667 rows=2700251 loops=1)
Buffers: shared hit=39058
Planning:
Buffers: shared hit=112
Planning Time: 0.584 ms
Execution Time: 446.622 ms
(6 rows)
EXPLAIN with a specified item id EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM current_day_data WHERE item_id = 'SUGAR_CANE';
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on current_day_info (cost=33.40..12099.27 rows=8592 width=75) (actual time=2.949..12.236 rows=8627 loops=1)
Recheck Cond: (product_id = 'SUGAR_CANE'::text)
Heap Blocks: exact=8570
Buffers: shared hit=8619
-> Bitmap Index Scan on prod_id_lookup (cost=0.00..32.97 rows=8592 width=0) (actual time=1.751..1.751 rows=8665 loops=1)
Index Cond: (product_id = 'SUGAR_CANE'::text)
Buffers: shared hit=12
Planning:
Buffers: shared hit=68
Planning Time: 0.339 ms
Execution Time: 12.686 ms
(11 rows)
Now this says 12.7 ms, which makes me think the 300 ms has something to do with the library I'm using (SQLAlchemy), but that wouldn't really make sense since it's a popular library. More specifically, the line I'm using is:
results = CurrentDayData.query.filter(CurrentDayData.item_id == item_id).all()
That’s a very simple query that uses an index, therefore the only way to possibly speed it up would be to improve the specification of your hardware.
Moving to a different form of database, on the same hardware, is not going to make a significant difference in performance to this type of query.
I have a large table with a BRIN index. If I run a query with a LIMIT, it ignores the index and goes for a sequential scan; without the LIMIT it uses the index (I tried it several times with the same results).
explain (analyze,verbose,buffers,timing,costs)
select *
from testj.cdc_s5_gpps_ind
where id_transformace = 1293
limit 100
Limit (cost=0.00..349.26 rows=100 width=207) (actual time=28927.179..28927.214 rows=100 loops=1)
Output: id, date_key_trainjr...
Buffers: shared hit=225 read=1680241
-> Seq Scan on testj.cdc_s5_gpps_ind (cost=0.00..3894204.10 rows=1114998 width=207) (actual time=28927.175..28927.202 rows=100 loops=1)
Output: id, date_key_trainjr...
Filter: (cdc_s5_gpps_ind.id_transformace = 1293)
Rows Removed by Filter: 59204140
Buffers: shared hit=225 read=1680241
Planning Time: 0.149 ms
Execution Time: 28927.255 ms
explain (analyze,verbose,buffers,timing,costs)
select *
from testj.cdc_s5_gpps_ind
where id_transformace = 1293
Bitmap Heap Scan on testj.cdc_s5_gpps_ind (cost=324.36..979783.34 rows=1114998 width=207) (actual time=110.103..467.008 rows=1073725 loops=1)
Output: id, date_key_trainjr...
Recheck Cond: (cdc_s5_gpps_ind.id_transformace = 1293)
Rows Removed by Index Recheck: 11663
Heap Blocks: lossy=32000
Buffers: shared hit=32056
-> Bitmap Index Scan on gpps_brin_index (cost=0.00..45.61 rows=1120373 width=0) (actual time=2.326..2.326 rows=320000 loops=1)
Index Cond: (cdc_s5_gpps_ind.id_transformace = 1293)
Buffers: shared hit=56
Planning Time: 1.343 ms
JIT:
Functions: 2
Options: Inlining true, Optimization true, Expressions true, Deforming true
Timing: Generation 0.540 ms, Inlining 32.246 ms, Optimization 44.423 ms, Emission 22.524 ms, Total 99.732 ms
Execution Time: 537.627 ms
Is there a reason for this behavior?
PostgreSQL 12.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1
20191121 (Red Hat 8.3.1-5), 64-bit
There is a very simple (which is not to say good) reason for this. The planner assumes rows with id_transformace = 1293 are evenly distributed throughout the table, and so it will be able to collect 100 of them very quickly with a seq scan and then stop early. But this assumption is very wrong, and the scan needs to go through a big chunk of the table to find 100 qualifying rows.
This assumption is not based on any statistics gathered on the table, so increasing the statistics target will not help. Extended statistics will not help either, as they only capture correlations between columns, not between a column and the physical ordering.
There are no good clean ways to solve this purely on the stock server side. One work-around is to set enable_seqscan=off before running the query, then reset it afterwards. Another would be to add ORDER BY random() to your query; that way the planner knows it can't stop early. Or maybe the extension pg_hint_plan could help - I've never used it.
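The enable_seqscan work-around can be scoped so it cannot leak into other statements; SET LOCAL reverts automatically at transaction end:

```sql
BEGIN;
SET LOCAL enable_seqscan = off;   -- affects only this transaction

SELECT *
FROM testj.cdc_s5_gpps_ind
WHERE id_transformace = 1293
LIMIT 100;

COMMIT;  -- enable_seqscan is restored here
```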
You might get it to change the plan by tweaking some of your *_cost parameters, but that would likely make other things worse. Seeing the output of EXPLAIN (ANALYZE, BUFFERS) for the LIMITed query run with enable_seqscan=off could inform that decision.
Since the column appears to be sparse/skewed, you could try to increase the statistics target:
ALTER TABLE testj.cdc_s5_gpps_ind
ALTER COLUMN id_transformace SET STATISTICS 1000;
ANALYZE testj.cdc_s5_gpps_ind;
Postgres 10 and above also have extended statistics, allowing multi-column correlations to be recognised and exploited. You must have some understanding of the actual structure of the data in the table to use them effectively.
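For reference, creating extended statistics looks like this (the second column here is an assumption taken from the plan output; pick columns you know are correlated):

```sql
-- Hypothetical: declare a dependency between two columns so the planner
-- can estimate their combined selectivity better.
CREATE STATISTICS cdc_s5_dep (dependencies)
    ON id_transformace, date_key_trainjr
    FROM testj.cdc_s5_gpps_ind;

ANALYZE testj.cdc_s5_gpps_ind;  -- populate the new statistics
```

As noted above, though, these capture inter-column correlation only, not the correlation with physical row order.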
I am aggregating data from a Postgres table; the query takes approx. 2 seconds, which I want to reduce to less than a second.
Please find below the execution details:
Query
select
a.search_keyword,
hll_cardinality( hll_union_agg(a.users) ):: int as user_count,
hll_cardinality( hll_union_agg(a.sessions) ):: int as session_count,
sum(a.total) as keyword_count
from
rollup_day a
where
a.created_date between '2018-09-01' and '2019-09-30'
and a.tenant_id = '62850a62-19ac-477d-9cd7-837f3d716885'
group by
a.search_keyword
order by
session_count desc
limit 100;
Table metadata
Total number of rows - 506527
Composite Index on columns : tenant_id and created_date
Query plan
Custom Scan (cost=0.00..0.00 rows=0 width=0) (actual time=1722.685..1722.694 rows=100 loops=1)
Task Count: 1
Tasks Shown: All
-> Task
Node: host=localhost port=5454 dbname=postgres
-> Limit (cost=64250.24..64250.49 rows=100 width=42) (actual time=1783.087..1783.106 rows=100 loops=1)
-> Sort (cost=64250.24..64558.81 rows=123430 width=42) (actual time=1783.085..1783.093 rows=100 loops=1)
Sort Key: ((hll_cardinality(hll_union_agg(sessions)))::integer) DESC
Sort Method: top-N heapsort Memory: 33kB
-> GroupAggregate (cost=52933.89..59532.83 rows=123430 width=42) (actual time=905.502..1724.363 rows=212633 loops=1)
Group Key: search_keyword
-> Sort (cost=52933.89..53636.53 rows=281055 width=54) (actual time=905.483..1351.212 rows=280981 loops=1)
Sort Key: search_keyword
Sort Method: external merge Disk: 18496kB
-> Seq Scan on rollup_day a (cost=0.00..17890.22 rows=281055 width=54) (actual time=29.720..112.161 rows=280981 loops=1)
Filter: ((created_date >= '2018-09-01'::date) AND (created_date <= '2019-09-30'::date) AND (tenant_id = '62850a62-19ac-477d-9cd7-837f3d716885'::uuid))
Rows Removed by Filter: 225546
Planning Time: 0.129 ms
Execution Time: 1786.222 ms
Planning Time: 0.103 ms
Execution Time: 1722.718 ms
What I've tried
I've tried indexes on tenant_id and created_date, but as the data is huge it always does a sequential scan rather than an index scan for these filters. I've read about it and found that the Postgres query planner switches to a sequential scan if the data returned is more than 5-10% of the total rows. Please follow the link for more reference.
I've increased the work_mem to 100MB but it only improved the performance a little bit.
Any help would be really appreciated.
Update
Query plan after setting work_mem to 100MB
Custom Scan (cost=0.00..0.00 rows=0 width=0) (actual time=1375.926..1375.935 rows=100 loops=1)
Task Count: 1
Tasks Shown: All
-> Task
Node: host=localhost port=5454 dbname=postgres
-> Limit (cost=48348.85..48349.10 rows=100 width=42) (actual time=1307.072..1307.093 rows=100 loops=1)
-> Sort (cost=48348.85..48633.55 rows=113880 width=42) (actual time=1307.071..1307.080 rows=100 loops=1)
Sort Key: (sum(total)) DESC
Sort Method: top-N heapsort Memory: 35kB
-> GroupAggregate (cost=38285.79..43996.44 rows=113880 width=42) (actual time=941.504..1261.177 rows=172945 loops=1)
Group Key: search_keyword
-> Sort (cost=38285.79..38858.52 rows=229092 width=54) (actual time=941.484..963.061 rows=227261 loops=1)
Sort Key: search_keyword
Sort Method: quicksort Memory: 32982kB
-> Seq Scan on rollup_day_104290 a (cost=0.00..17890.22 rows=229092 width=54) (actual time=38.803..104.350 rows=227261 loops=1)
Filter: ((created_date >= '2019-01-01'::date) AND (created_date <= '2019-12-30'::date) AND (tenant_id = '62850a62-19ac-477d-9cd7-837f3d716885'::uuid))
Rows Removed by Filter: 279266
Planning Time: 0.131 ms
Execution Time: 1308.814 ms
Planning Time: 0.112 ms
Execution Time: 1375.961 ms
Update 2
After creating an index on created_date and increasing work_mem to 120MB:
create index date_idx on rollup_day(created_date);
The total number of rows is: 12,124,608
Query Plan is:
Custom Scan (cost=0.00..0.00 rows=0 width=0) (actual time=2635.530..2635.540 rows=100 loops=1)
Task Count: 1
Tasks Shown: All
-> Task
Node: host=localhost port=9702 dbname=postgres
-> Limit (cost=73545.19..73545.44 rows=100 width=51) (actual time=2755.849..2755.873 rows=100 loops=1)
-> Sort (cost=73545.19..73911.25 rows=146424 width=51) (actual time=2755.847..2755.858 rows=100 loops=1)
Sort Key: (sum(total)) DESC
Sort Method: top-N heapsort Memory: 35kB
-> GroupAggregate (cost=59173.97..67948.97 rows=146424 width=51) (actual time=2014.260..2670.732 rows=296537 loops=1)
Group Key: search_keyword
-> Sort (cost=59173.97..60196.85 rows=409152 width=55) (actual time=2013.885..2064.775 rows=410618 loops=1)
Sort Key: search_keyword
Sort Method: quicksort Memory: 61381kB
-> Index Scan using date_idx_102913 on rollup_day_102913 a (cost=0.42..21036.35 rows=409152 width=55) (actual time=0.026..183.370 rows=410618 loops=1)
Index Cond: ((created_date >= '2018-01-01'::date) AND (created_date <= '2018-12-31'::date))
Filter: (tenant_id = '12850a62-19ac-477d-9cd7-837f3d716885'::uuid)
Planning Time: 0.135 ms
Execution Time: 2760.667 ms
Planning Time: 0.090 ms
Execution Time: 2635.568 ms
You should experiment with higher settings of work_mem until you get an in-memory sort. Of course you can only be generous with memory if your machine has enough of it.
What would make your query way faster is if you store pre-aggregated data, either using a materialized view or a second table and a trigger on your original table that keeps the sums in the other table updated. I don't know if that is possible with your data, as I don't know what hll_cardinality and hll_union_agg are.
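If rollup_day is finer-grained than (tenant, day, keyword), a pre-aggregated materialized view could be sketched like this (column types and the daily grain are assumptions; hll_union_agg unions the HyperLogLog sketches so they can still be combined at query time):

```sql
-- Hypothetical pre-aggregation: one row per tenant, day and keyword.
CREATE MATERIALIZED VIEW rollup_day_agg AS
SELECT tenant_id,
       created_date,
       search_keyword,
       hll_union_agg(users)    AS users,
       hll_union_agg(sessions) AS sessions,
       sum(total)              AS total
FROM   rollup_day
GROUP  BY tenant_id, created_date, search_keyword;

-- refresh on whatever schedule matches your freshness needs
REFRESH MATERIALIZED VIEW rollup_day_agg;
```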
Have you tried a covering index, so the optimizer will use the index and not do a sequential scan?
create index covering on rollup_day(tenant_id, created_date, search_keyword, users, sessions, total);
If on Postgres 11:
create index covering on rollup_day(tenant_id, created_date) INCLUDE (search_keyword, users, sessions, total);
But since you also do a sort/group by on search_keyword, maybe:
create index covering on rollup_day(tenant_id, created_date, search_keyword);
create index covering on rollup_day(tenant_id, search_keyword, created_date);
Or :
create index covering on rollup_day(tenant_id, created_date, search_keyword) INCLUDE (users, sessions, total);
create index covering on rollup_day(tenant_id, search_keyword, created_date) INCLUDE (users, sessions, total);
One of these indexes should make the query faster; you should add only one of them.
Even if it makes this query faster, having big indexes will make your write operations slower (especially since HOT updates are not available on indexed columns), and you will use more storage.
The idea came from here; there is also a hint about sizing work_mem.
Another example where the index was not used
Use table partitions and create a composite index; together they will bring down the total cost because:
they will save you huge scan costs;
partitions segregate the data and will be very helpful in future purge operations as well.
I have personally tried and tested table partitions in such cases, and the throughput is amazing with the combination of partitions and composite indexes.
Partitioning can be done on the range of created_date, with composite indexes on created_date and tenant_id.
Remember, you can always have a composite index with a condition in it if there is a very specific requirement for that condition in your query. This way the data will already be sorted in the index, which will save huge costs for sort operations as well.
Hope this helps.
PS: Also, is it possible to share any test sample data for the same?
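As a sketch of the partitioning idea (column types are assumptions inferred from the query):

```sql
-- Hypothetical declarative range partitioning on created_date (Postgres 10+).
CREATE TABLE rollup_day_part (
    tenant_id      uuid,
    created_date   date,
    search_keyword text,
    users          hll,
    sessions       hll,
    total          bigint
) PARTITION BY RANGE (created_date);

CREATE TABLE rollup_day_2019 PARTITION OF rollup_day_part
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');

-- composite index on date & tenant; on Postgres 11+ an index created on
-- the partitioned parent is propagated to each partition
CREATE INDEX ON rollup_day_part (created_date, tenant_id);
```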
My suggestion would be to break up the SELECT.
What I would also try, in combination with this, is to set up two indexes on the table: one on the dates, the other on the ID. One problem with unwieldy IDs is that comparing them takes time, and they can be treated as string comparisons in the background. That's why I'd break the query up: to prefilter the data before the BETWEEN condition is executed, since a BETWEEN can make a SELECT slow. So I would split it into two SELECTs with an inner join (I know the memory consumption is a problem).
Here is an example of what I mean. I hope the optimizer is smart enough to restructure your query.
SELECT
    t1.search_keyword,
    hll_cardinality( hll_union_agg(t1.users) )::int as user_count,
    hll_cardinality( hll_union_agg(t1.sessions) )::int as session_count,
    sum(t1.total) as keyword_count
FROM
    (SELECT
        *
    FROM
        rollup_day a
    WHERE
        a.tenant_id = '62850a62-19ac-477d-9cd7-837f3d716885') t1
WHERE
    t1.created_date between '2018-09-01' and '2019-09-30'
group by
    t1.search_keyword
order by
    session_count desc
Now, if this does not work, you need more specific optimizations. For example: can total be equal to 0? Then you need a filtered (partial) index on the rows where total > 0. Are there any other criteria that make it easy to exclude rows from the select?
The next consideration would be to add a column with a short ID (instead of 62850a62-19ac-477d-9cd7-837f3d716885, e.g. 62850). That can be a number, which would make preselection very easy and memory consumption lower.
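The short-ID idea could be realised with a small lookup table (all names here are hypothetical):

```sql
-- Map each uuid tenant to a compact integer surrogate key.
CREATE TABLE tenant_ids (
    tenant_sid serial PRIMARY KEY,
    tenant_id  uuid UNIQUE NOT NULL
);

-- rollup_day would then carry tenant_sid (4 bytes) instead of the
-- 16-byte uuid, making index entries smaller and comparisons cheaper.
```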
I have two tables which links to each other like this:
Table answered_questions with the following columns and indexes:
id: primary key
taken_test_id: integer (foreign key)
question_id: integer (foreign key, links to another table called questions)
indexes: (taken_test_id, question_id)
Table taken_tests
id: primary key
user_id: (foreign key, links to table Users)
indexes: user_id column
First query (with EXPLAIN ANALYZE output):
EXPLAIN ANALYZE
SELECT
"answered_questions".*
FROM
"answered_questions"
INNER JOIN "taken_tests" ON "answered_questions"."taken_test_id" = "taken_tests"."id"
WHERE
"taken_tests"."user_id" = 1;
Output:
Nested Loop (cost=0.99..116504.61 rows=1472 width=61) (actual time=0.025..2.208 rows=653 loops=1)
-> Index Scan using index_taken_tests_on_user_id on taken_tests (cost=0.43..274.18 rows=91 width=4) (actual time=0.014..0.483 rows=371 loops=1)
Index Cond: (user_id = 1)
-> Index Scan using index_answered_questions_on_taken_test_id_and_question_id on answered_questions (cost=0.56..1273.61 rows=365 width=61) (actual time=0.002..0.003 rows=2 loops=371)
Index Cond: (taken_test_id = taken_tests.id)
Planning time: 0.276 ms
Execution time: 2.365 ms
(7 rows)
Another query (this is generated automatically by Rails when using joins method in ActiveRecord)
EXPLAIN ANALYZE
SELECT
"answered_questions".*
FROM
"answered_questions"
INNER JOIN "taken_tests" ON "taken_tests"."id" = "answered_questions"."taken_test_id"
WHERE
"taken_tests"."user_id" = 1;
And here is the output
Nested Loop (cost=0.99..116504.61 rows=1472 width=61) (actual time=23.611..1257.807 rows=653 loops=1)
-> Index Scan using index_taken_tests_on_user_id on taken_tests (cost=0.43..274.18 rows=91 width=4) (actual time=10.451..71.474 rows=371 loops=1)
Index Cond: (user_id = 1)
-> Index Scan using index_answered_questions_on_taken_test_id_and_question_id on answered_questions (cost=0.56..1273.61 rows=365 width=61) (actual time=2.071..3.195 rows=2 loops=371)
Index Cond: (taken_test_id = taken_tests.id)
Planning time: 0.302 ms
Execution time: 1258.035 ms
(7 rows)
The only difference is the order of columns in the INNER JOIN condition. In the first query, it is ON "answered_questions"."taken_test_id" = "taken_tests"."id" while in the second query, it is ON "taken_tests"."id" = "answered_questions"."taken_test_id". But the query time is hugely different.
Do you have any idea why this happens? I read some articles and it says that the order of columns in JOIN condition should not affect the execution time (ex: Best practices for the order of joined columns in a sql join?)
I am using Postgres 9.6. There are more than 40 million rows in answered_questions table and more than 3 million rows in taken_tests table
Update 1:
When I ran the EXPLAIN with (analyze true, verbose true, buffers true), I got a much better result for the second query (quite similar to the first query)
EXPLAIN (ANALYZE TRUE, VERBOSE TRUE, BUFFERS TRUE)
SELECT
"answered_questions".*
FROM
"answered_questions"
INNER JOIN "taken_tests" ON "taken_tests"."id" = "answered_questions"."taken_test_id"
WHERE
"taken_tests"."user_id" = 1;
Output
Nested Loop (cost=0.99..116504.61 rows=1472 width=61) (actual time=0.030..2.192 rows=653 loops=1)
Output: answered_questions.id, answered_questions.question_id, answered_questions.answer_text, answered_questions.created_at, answered_questions.updated_at, answered_questions.taken_test_id, answered_questions.correct, answered_questions.answer
Buffers: shared hit=1986
-> Index Scan using index_taken_tests_on_user_id on public.taken_tests (cost=0.43..274.18 rows=91 width=4) (actual time=0.014..0.441 rows=371 loops=1)
Output: taken_tests.id
Index Cond: (taken_tests.user_id = 1)
Buffers: shared hit=269
-> Index Scan using index_answered_questions_on_taken_test_id_and_question_id on public.answered_questions (cost=0.56..1273.61 rows=365 width=61) (actual time=0.002..0.003 rows=2 loops=371)
Output: answered_questions.id, answered_questions.question_id, answered_questions.answer_text, answered_questions.created_at, answered_questions.updated_at, answered_questions.taken_test_id, answered_questions.correct, answered_questions.answer
Index Cond: (answered_questions.taken_test_id = taken_tests.id)
Buffers: shared hit=1717
Planning time: 0.238 ms
Execution time: 2.335 ms
As you can see from the initial EXPLAIN ANALYZE results, the queries produce equivalent query plans and are executed exactly the same way.
The difference comes from the very same unit's execution time:
-> Index Scan using index_taken_tests_on_user_id on taken_tests (cost=0.43..274.18 rows=91 width=4) (actual time=0.014..0.483 rows=371 loops=1)
and
-> Index Scan using index_taken_tests_on_user_id on taken_tests (cost=0.43..274.18 rows=91 width=4) (actual time=10.451..71.474 rows=371 loops=1)
As the commenters already pointed out (see the documentation links in the question comments), the query plan for an inner join is expected to be the same regardless of the table order; it is chosen based on the query planner's decisions. This means that you should really look at other parts of query execution for performance optimisation. One of those is the memory used for caching (shared buffers). It looks like the query timings depend a lot on whether the data has already been loaded into memory. Just as you have noticed, the query execution time grows after you have waited some time. This clearly indicates a cache expiry issue rather than a plan problem.
Increasing the size of the shared buffers may help, but the initial execution of the query will always take longer - that is just your disk access speed.
For more hints on memory configuration of Pg database see here: https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
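A quick way to see the current setting and change it (the 4GB value is just an example; shared_buffers requires a server restart to take effect):

```sql
SHOW shared_buffers;

-- writes the new value to postgresql.auto.conf; restart the server afterwards
ALTER SYSTEM SET shared_buffers = '4GB';
```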
Note: VACUUM or ANALYZE commands will be unlikely to help here. Both queries are using the same plan already. Keep in mind, though, that due to PostgreSQL transaction isolation mechanism (MVCC) it may have to read the underlying table rows to validate that they are still visible to the current transaction after getting the results from the index. This could be improved by updating the visibility map (see https://www.postgresql.org/docs/10/storage-vm.html), which is done during vacuuming.
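You can check how much of a table is marked all-visible in the visibility map directly from pg_class:

```sql
SELECT relname, relpages, relallvisible
FROM   pg_class
WHERE  relname IN ('answered_questions', 'taken_tests');
-- relallvisible close to relpages means index-only scans rarely need
-- to visit the heap for visibility checks
```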