Inconsistent cold start query performance in Postgres

We're having issues with queries against a Postgres-hosted datamart. Our query is simple and touches a modest amount of data, but we've seen vast differences in its execution time between runs, sometimes around 20 seconds, other times just 3 seconds, and we cannot see what causes the difference. We're aiming for consistent results. Only two tables are involved in the query: one holds order rows (OrderItemTransactionFact, 2,937,264 rows) and the other records the current stock level for each item (stocklevels, 62,353 rows). There are no foreign keys because this is a datamart that we load with ETL processes, so we need fast loading.
The query is:
select
oitf."SKUId",
sum(oitf."ConvertedLineTotal") as "totalrevenue",
sum(oitf."Quantity") as "quantitysold",
coalesce (sl."Available",0) as "availablestock"
from "OrderItemTransactionFact" oitf
left join stocklevels sl on sl."SKUId" = oitf."SKUId"
where
oitf."transactionTypeId" = 2
and oitf."hasComposite" = false
and oitf."ReceivedDate" >= extract(epoch from timestamp '2020-07-01 00:00:00')
and oitf."ReceivedDate" <= extract(epoch from timestamp '2021-10-01 00:00:00')
group by
oitf."SKUId", sl."Available"
order by oitf."SKUId";
The OrderItemTransactionFact table has a couple of indexes:
create index IX_OrderItemTransactionFact_ReceivedDate on public."OrderItemTransactionFact" ("ReceivedDate" DESC);
create index IX_OrderItemTransactionFact_ReceivedDate_transactionTypeId on public."OrderItemTransactionFact" ("ReceivedDate" desc, "transactionTypeId");
Execution plan output for a 26-second run:
GroupAggregate (cost=175096.24..195424.66 rows=813137 width=52) (actual time=24100.268..24874.065 rows=26591 loops=1)
Group Key: oitf."SKUId", sl."Available"
Buffers: shared hit=659 read=43311 written=1042
-> Sort (cost=175096.24..177129.08 rows=813137 width=19) (actual time=24100.249..24275.594 rows=916772 loops=1)
Sort Key: oitf."SKUId", sl."Available"
Sort Method: quicksort Memory: 95471kB
Buffers: shared hit=659 read=43311 written=1042
-> Hash Left Join (cost=20671.85..95274.08 rows=813137 width=19) (actual time=239.392..23127.993 rows=916772 loops=1)
Hash Cond: (oitf."SKUId" = sl."SKUId")
Buffers: shared hit=659 read=43311 written=1042
-> Bitmap Heap Scan on "OrderItemTransactionFact" oitf (cost=18091.90..73485.91 rows=738457 width=15) (actual time=200.178..22413.601 rows=701397 loops=1)
Recheck Cond: (("ReceivedDate" >= '1585699200'::double precision) AND ("ReceivedDate" <= '1625097600'::double precision))
Filter: ((NOT "hasComposite") AND ("transactionTypeId" = 2))
Rows Removed by Filter: 166349
Heap Blocks: exact=40419
Buffers: shared hit=55 read=42738 written=1023
-> Bitmap Index Scan on ix_orderitemtransactionfact_receiveddate (cost=0.00..17907.29 rows=853486 width=0) (actual time=191.274..191.274 rows=867746 loops=1)
Index Cond: (("ReceivedDate" >= '1585699200'::double precision) AND ("ReceivedDate" <= '1625097600'::double precision))
Buffers: shared hit=9 read=2365 written=181
-> Hash (cost=1800.53..1800.53 rows=62353 width=8) (actual time=38.978..38.978 rows=62353 loops=1)
Buckets: 65536 Batches: 1 Memory Usage: 2948kB
Buffers: shared hit=604 read=573 written=19
-> Seq Scan on stocklevels sl (cost=0.00..1800.53 rows=62353 width=8) (actual time=0.031..24.301 rows=62353 loops=1)
Buffers: shared hit=604 read=573 written=19
Planning Time: 0.543 ms
Execution Time: 24889.522 ms
Here is the execution plan for the same query when it took just 3 seconds:
GroupAggregate (cost=173586.52..193692.59 rows=804243 width=52) (actual time=2616.588..3220.394 rows=26848 loops=1)
Group Key: oitf."SKUId", sl."Available"
Buffers: shared hit=2 read=43929
-> Sort (cost=173586.52..175597.13 rows=804243 width=19) (actual time=2616.570..2813.571 rows=889937 loops=1)
Sort Key: oitf."SKUId", sl."Available"
Sort Method: quicksort Memory: 93001kB
Buffers: shared hit=2 read=43929
-> Hash Left Join (cost=20472.48..94701.25 rows=804243 width=19) (actual time=185.018..1512.626 rows=889937 loops=1)
Hash Cond: (oitf."SKUId" = sl."SKUId")
Buffers: shared hit=2 read=43929
-> Bitmap Heap Scan on "OrderItemTransactionFact" oitf (cost=17892.54..73123.18 rows=730380 width=15) (actual time=144.000..960.232 rows=689090 loops=1)
Recheck Cond: (("ReceivedDate" >= '1593561600'::double precision) AND ("ReceivedDate" <= '1633046400'::double precision))
Filter: ((NOT "hasComposite") AND ("transactionTypeId" = 2))
Rows Removed by Filter: 159949
Heap Blocks: exact=40431
Buffers: shared read=42754
-> Bitmap Index Scan on ix_orderitemtransactionfact_receiveddate (cost=0.00..17709.94 rows=844151 width=0) (actual time=134.806..134.806 rows=849039 loops=1)
Index Cond: (("ReceivedDate" >= '1593561600'::double precision) AND ("ReceivedDate" <= '1633046400'::double precision))
Buffers: shared read=2323
-> Hash (cost=1800.53..1800.53 rows=62353 width=8) (actual time=40.500..40.500 rows=62353 loops=1)
Buckets: 65536 Batches: 1 Memory Usage: 2948kB
Buffers: shared hit=2 read=1175
-> Seq Scan on stocklevels sl (cost=0.00..1800.53 rows=62353 width=8) (actual time=0.025..24.620 rows=62353 loops=1)
Buffers: shared hit=2 read=1175
Planning Time: 0.565 ms
Execution Time: 3235.300 ms
The server config is:
version: PostgreSQL 12.1, compiled by Visual C++ build 1914, 64-bit
work_mem: 1048576 kB (1 GB)
shared_buffers: 16384 (× 8 kB pages = 128 MB)
Thanks in advance!

It is the filesystem cache. The slow run had to read the data off disk; the fast run just had to fetch the data from memory, probably because the slow run had already read it and left it there. You can make this show up explicitly in the plans by turning on track_io_timing.
It should help a little to add an index on ("transactionTypeId", "hasComposite", "ReceivedDate"), and perhaps a lot to crank up effective_io_concurrency (depending on your storage system).
But mostly, get faster disks.
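A rough sketch of both suggestions (the index name here is made up, the effective_io_concurrency value is only a placeholder to tune for your storage, and track_io_timing may need superuser rights or a server-level setting):

-- show per-node read/write time in future EXPLAIN (ANALYZE, BUFFERS) output
set track_io_timing = on;

-- index matching the filter columns, so the scan visits fewer heap pages
create index IX_OrderItemTransactionFact_Type_Composite_ReceivedDate on public."OrderItemTransactionFact" ("transactionTypeId", "hasComposite", "ReceivedDate");

-- allow more prefetching during bitmap heap scans; the right value depends on your disks
set effective_io_concurrency = 200;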

Related

PostgreSQL query slow if empty table in IN clause

I have the following SQL
WITH filtered_users_pre as (
SELECT value as username, row_number() OVER (partition by value) AS rk
FROM "user-stats".tag_table
WHERE _at_timestamp = 1626955200
AND tag in ('commercial','marketing')
),
filtered_users as (
SELECT username
FROM filtered_users_pre
WHERE rk = 2
),
valid_users as (
SELECT aa.username, aa.rank, aa.points, aa.version
FROM "users-results".ai_algo aa
WHERE aa._at_timestamp = 1626955200
AND aa.rank_timeframe = '7d'
AND aa.username IN (SELECT * FROM filtered_users)
ORDER BY aa.rank ASC
LIMIT 15
OFFSET 0
)
select * from valid_users;
"user-stats".tag_table is a table with around 60 million rows, with proper indexes.
"users-results".ai_algo is a table with around 10 million rows, with proper indexes.
By proper indexes I mean indexes on all the fields that appear in the WHERE clauses above.
If filtered_users is empty, the query takes 4 seconds to run. If filtered_users has at least one row, it takes 400ms.
Can anyone explain why? Is there a way I can get the same performance (400ms) when filtered_users is empty? I was expecting performance to improve as the number of rows in filtered_users shrinks, and that is what happens down to 1 row; with 0 rows it takes 10 times longer.
Of course, the same happens if, instead of the IN clause in the WHERE, I use an INNER JOIN between ai_algo and filtered_users.
Update
This is the EXPLAIN (ANALYZE, BUFFERS) output when filtered_users has 0 rows (4 seconds of execution):
Limit (cost=14592.13..15870.39 rows=15 width=35) (actual time=3953.945..3953.949 rows=0 loops=1)
Buffers: shared hit=7456641
-> Nested Loop Semi Join (cost=14592.13..1795382.62 rows=20897 width=35) (actual time=3953.944..3953.947 rows=0 loops=1)
Join Filter: (aa.username = filtered_users_pre.username)
Buffers: shared hit=7456641
-> Index Scan using ai_algo_202107_rank_timeframe_rank_idx on ai_algo_202107 aa (cost=0.56..1718018.61 rows=321495 width=35) (actual time=0.085..3885.547 rows=313611 loops=1)
Index Cond: (rank_timeframe = '7d'::"valid-users-timeframe")
Filter: (_at_timestamp = 1626955200)
Rows Removed by Filter: 7793096
Buffers: shared hit=7456533
-> Materialize (cost=14591.56..14672.51 rows=13 width=21) (actual time=0.000..0.000 rows=0 loops=313611)
Buffers: shared hit=108
-> Subquery Scan on filtered_users_pre (cost=14591.56..14672.44 rows=13 width=21) (actual time=3.543..3.545 rows=0 loops=1)
Filter: (filtered_users_pre.rk = 2)
Rows Removed by Filter: 2415
Buffers: shared hit=108
-> WindowAgg (cost=14591.56..14638.74 rows=2696 width=29) (actual time=1.996..3.356 rows=2415 loops=1)
Buffers: shared hit=108
-> Sort (cost=14591.56..14598.30 rows=2696 width=21) (actual time=1.990..2.189 rows=2415 loops=1)
Sort Key: tag_table_20210722.value
Sort Method: quicksort Memory: 285kB
Buffers: shared hit=108
-> Bitmap Heap Scan on tag_table_20210722 (cost=146.24..14437.94 rows=2696 width=21) (actual time=0.612..1.080 rows=2415 loops=1)
Recheck Cond: ((tag)::text = ANY ('{commercial,marketing}'::text[]))
Filter: (_at_timestamp = 1626955200)
Rows Removed by Filter: 2415
Heap Blocks: exact=72
Buffers: shared hit=105
-> Bitmap Index Scan on tag_table_20210722_tag_idx (cost=0.00..145.57 rows=5428 width=0) (actual time=0.292..0.292 rows=4830 loops=1)
Index Cond: ((tag)::text = ANY ('{commercial,marketing}'::text[]))
Buffers: shared hit=33
Planning Time: 0.914 ms
Execution Time: 3954.035 ms
This is the output when filtered_users has at least 1 row (300 ms):
Limit (cost=14592.13..15870.39 rows=15 width=35) (actual time=15.958..300.759 rows=15 loops=1)
Buffers: shared hit=11042
-> Nested Loop Semi Join (cost=14592.13..1795382.62 rows=20897 width=35) (actual time=15.957..300.752 rows=15 loops=1)
Join Filter: (aa.username = filtered_users_pre.username)
Rows Removed by Join Filter: 1544611
Buffers: shared hit=11042
-> Index Scan using ai_algo_202107_rank_timeframe_rank_idx on ai_algo_202107 aa (cost=0.56..1718018.61 rows=321495 width=35) (actual time=0.075..10.455 rows=645 loops=1)
Index Cond: (rank_timeframe = '7d'::"valid-users-timeframe")
Filter: (_at_timestamp = 1626955200)
Rows Removed by Filter: 16124
Buffers: shared hit=10937
-> Materialize (cost=14591.56..14672.51 rows=13 width=21) (actual time=0.003..0.174 rows=2395 loops=645)
Buffers: shared hit=105
-> Subquery Scan on filtered_users_pre (cost=14591.56..14672.44 rows=13 width=21) (actual time=1.895..3.680 rows=2415 loops=1)
Filter: (filtered_users_pre.rk = 1)
Buffers: shared hit=105
-> WindowAgg (cost=14591.56..14638.74 rows=2696 width=29) (actual time=1.894..3.334 rows=2415 loops=1)
Buffers: shared hit=105
-> Sort (cost=14591.56..14598.30 rows=2696 width=21) (actual time=1.889..2.102 rows=2415 loops=1)
Sort Key: tag_table_20210722.value
Sort Method: quicksort Memory: 285kB
Buffers: shared hit=105
-> Bitmap Heap Scan on tag_table_20210722 (cost=146.24..14437.94 rows=2696 width=21) (actual time=0.604..1.046 rows=2415 loops=1)
Recheck Cond: ((tag)::text = ANY ('{commercial,marketing}'::text[]))
Filter: (_at_timestamp = 1626955200)
Rows Removed by Filter: 2415
Heap Blocks: exact=72
Buffers: shared hit=105
-> Bitmap Index Scan on tag_table_20210722_tag_idx (cost=0.00..145.57 rows=5428 width=0) (actual time=0.287..0.287 rows=4830 loops=1)
Index Cond: ((tag)::text = ANY ('{commercial,marketing}'::text[]))
Buffers: shared hit=33
Planning Time: 0.310 ms
Execution Time: 300.954 ms
The problem is that if there are no matching filtered_users, PostgreSQL has to go through all "users-results".ai_algo without finding 15 result rows. If the subquery contains elements, it quickly finds 15 matching "users-results".ai_algo rows and can terminate processing.
There is nothing you can do about that, but you can speed up the scan of "users-results".ai_algo. Currently, you have
-> Index Scan using ai_algo_202107_rank_timeframe_rank_idx on ai_algo_202107 aa
... (actual time=0.085..3885.547 rows=313611 loops=1)
Index Cond: (rank_timeframe = '7d'::"valid-users-timeframe")
Filter: (_at_timestamp = 1626955200)
Rows Removed by Filter: 7793096
Buffers: shared hit=7456533
You see that the index scan is not as effective as it could be: it reads 313611 + 7793096 = 8106707 rows from the table and discards all but the 313611 that match the filter condition.
You can do better by creating an index that can find only the result rows directly:
CREATE INDEX ON "users-results".ai_algo (rank_timeframe, _at_timestamp);
Then you can drop the index ai_algo_rank_timeframe_rank_idx, because the new index can do everything that the old one could do (and more).
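A minimal sketch of that cleanup step (the index name is the one mentioned above; if ai_algo is partitioned, as the _202107 suffix in the plan suggests, check the actual name in pg_indexes before dropping anything):

-- the new (rank_timeframe, _at_timestamp) index makes this one redundant
drop index "users-results".ai_algo_rank_timeframe_rank_idx;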

EXPLAIN ANALYZE slower than actual query in Postgres

I have the following query
select * from activity_feed where user_id in (select following_id from user_follow where follower_id=:user_id)
union
select * from activity_feed where project_id in (select project_id from user_project_follow where user_id=:user_id)
order by id desc limit 30
This runs in approximately 14 ms according to Postico.
But when I do EXPLAIN ANALYZE on this query, the planning time is 0.5 ms and the execution time is around 800 ms (which is what I would actually expect). Is this because the query without EXPLAIN ANALYZE is returning cached results? I still get results in under 20 ms even when I use other values.
Which one is more indicative of the performance I'll get in production? I also realize that this is a rather inefficient query, and I can't seem to figure out an index that would make it more efficient. It's possible that I will have to drop the UNION.
Edit: the execution plan
Limit (cost=1380.94..1380.96 rows=10 width=148) (actual time=771.111..771.405 rows=10 loops=1)
-> Sort (cost=1380.94..1385.64 rows=1881 width=148) (actual time=771.097..771.160 rows=10 loops=1)
Sort Key: activity_feed."timestamp" DESC
Sort Method: top-N heapsort Memory: 27kB
-> HashAggregate (cost=1321.48..1340.29 rows=1881 width=148) (actual time=714.888..743.273 rows=4462 loops=1)
Group Key: activity_feed.id, activity_feed."timestamp", activity_feed.user_id, activity_feed.verb, activity_feed.object_type, activity_feed.object_id, activity_feed.project_id, activity_feed.privacy_level, activity_feed.local_time, activity_feed.local_date
-> Append (cost=5.12..1274.46 rows=1881 width=148) (actual time=0.998..682.466 rows=4487 loops=1)
-> Hash Join (cost=5.12..610.43 rows=1350 width=70) (actual time=0.982..326.089 rows=3013 loops=1)
Hash Cond: (activity_feed.user_id = user_follow.following_id)
-> Seq Scan on activity_feed (cost=0.00..541.15 rows=24215 width=70) (actual time=0.016..150.535 rows=24215 loops=1)
-> Hash (cost=4.78..4.78 rows=28 width=8) (actual time=0.911..0.922 rows=29 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 10kB
-> Index Only Scan using unique_user_follow_pair on user_follow (cost=0.29..4.78 rows=28 width=8) (actual time=0.022..0.334 rows=29 loops=1)
Index Cond: (follower_id = '17420532762804570'::bigint)
Heap Fetches: 0
-> Hash Join (cost=30.50..635.81 rows=531 width=70) (actual time=0.351..301.945 rows=1474 loops=1)
Hash Cond: (activity_feed_1.project_id = user_project_follow.project_id)
-> Seq Scan on activity_feed activity_feed_1 (cost=0.00..541.15 rows=24215 width=70) (actual time=0.027..143.896 rows=24215 loops=1)
-> Hash (cost=30.36..30.36 rows=11 width=8) (actual time=0.171..0.182 rows=11 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Index Only Scan using idx_user_project_follow_temp on user_project_follow (cost=0.28..30.36 rows=11 width=8) (actual time=0.020..0.102 rows=11 loops=1)
Index Cond: (user_id = '17420532762804570'::bigint)
Heap Fetches: 11
Planning Time: 0.571 ms
Execution Time: 771.774 ms
Thanks for the help in advance!
Very slow clock access like you show here (nearly 100-fold slower when TIMING defaults to ON!) usually indicates either old hardware or an old kernel, in my experience. Not being able to trust EXPLAIN (ANALYZE) to give you good data can be very frustrating if you are particular about performance, so you should consider upgrading your hardware or your OS.
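If the timing overhead itself is the problem, one hedged workaround (not part of the answer above) is to disable per-node timing so the clock is read far less often; the pg_test_timing utility that ships with PostgreSQL can also measure the clock overhead directly.

-- per-node timings are skipped, but row counts and total execution time are still reported
-- (:user_id is the bind parameter from the question)
explain (analyze, timing off, buffers)
select * from activity_feed where user_id in (select following_id from user_follow where follower_id=:user_id)
union
select * from activity_feed where project_id in (select project_id from user_project_follow where user_id=:user_id)
order by id desc limit 30;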

Why is my query slower when sorting in RAM vs on disk in Postgres?

I use AWS RDS PG 12.5 (db.t3.xlarge / 4vCPUs / 16GB RAM / SSD storage).
I was trying to optimize a query by tuning the work_mem parameter to avoid spilling data to disk when sorting.
As expected, when increasing the work_mem from 4MB to 100MB, a quicksort is used instead of an external merge disk.
However, the total execution time is longer (2293ms vs 2541ms).
Why does the quicksort show no significant gain? I thought sorting in RAM outperforms sorting on disk (540ms external merge disk vs 527ms quicksort).
Why are the seq scans, hash and merge operations slower? (Why does work_mem impact those operations at all?)
I have found this similar SO post but their issue was that their sorting was only a small fraction of the overall execution time.
Any insights would be welcomed.
The query:
select
dd.date
, jf.job_id
, od.id
, od.company_size_category
, od.invoicing_entity
, os.sector_category_id
, os.sector_id
, jf.pageviews
, jf.apply_clicks
, concat(sector_category_id, '_', company_size_category, '_', invoicing_entity) as bench_id
from organization_dimensions od
left join job_facts jf
on od.id = jf.organization_id
left join date_dimensions dd
on jf.date = dd.date
left join organizations_sectors os
on od.id = os.organization_id
where dd.date >= '2021-01-01' and dd.date < '2021-02-01'
order by 1, 2, 3
;
The query plan with work_mem=4MB (link to depesz):
Gather Merge (cost=182185.20..197262.15 rows=129222 width=76) (actual time=1988.652..2293.219 rows=981409 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=105939, temp read=10557 written=10595
-> Sort (cost=181185.18..181346.71 rows=64611 width=76) (actual time=1975.907..2076.591 rows=327136 loops=3)
Sort Key: dd.date, jf.job_id, od.id
Sort Method: external merge Disk: 32088kB
Worker 0: Sort Method: external merge Disk: 22672kB
Worker 1: Sort Method: external merge Disk: 22048kB
Buffers: shared hit=105939, temp read=10557 written=10595
-> Hash Join (cost=1001.68..173149.42 rows=64611 width=76) (actual time=14.719..1536.513 rows=327136 loops=3)
Hash Cond: (jf.organization_id = od.id)
Buffers: shared hit=105821
-> Hash Join (cost=177.27..171332.76 rows=36922 width=21) (actual time=0.797..1269.917 rows=148781 loops=3)
Hash Cond: (jf.date = dd.date)
Buffers: shared hit=104722
-> Parallel Seq Scan on job_facts jf (cost=0.00..152657.47 rows=4834347 width=21) (actual time=0.004..432.145 rows=3867527 loops=3)
Buffers: shared hit=104314
-> Hash (cost=176.88..176.88 rows=31 width=4) (actual time=0.554..0.555 rows=31 loops=3)
Buckets: 1024 Batches: 1 Memory Usage: 10kB
Buffers: shared hit=348
-> Seq Scan on date_dimensions dd (cost=0.00..176.88 rows=31 width=4) (actual time=0.011..0.543 rows=31 loops=3)
Filter: ((date >= '2021-01-01'::date) AND (date < '2021-02-01'::date))
Rows Removed by Filter: 4028
Buffers: shared hit=348
-> Hash (cost=705.43..705.43 rows=9518 width=27) (actual time=13.813..13.815 rows=9828 loops=3)
Buckets: 16384 Batches: 1 Memory Usage: 709kB
Buffers: shared hit=1071
-> Hash Right Join (cost=367.38..705.43 rows=9518 width=27) (actual time=5.035..10.702 rows=9828 loops=3)
Hash Cond: (os.organization_id = od.id)
Buffers: shared hit=1071
-> Seq Scan on organizations_sectors os (cost=0.00..207.18 rows=9518 width=12) (actual time=0.015..0.995 rows=9518 loops=3)
Buffers: shared hit=336
-> Hash (cost=299.39..299.39 rows=5439 width=19) (actual time=4.961..4.962 rows=5439 loops=3)
Buckets: 8192 Batches: 1 Memory Usage: 339kB
Buffers: shared hit=735
-> Seq Scan on organization_dimensions od (cost=0.00..299.39 rows=5439 width=19) (actual time=0.011..3.536 rows=5439 loops=3)
Buffers: shared hit=735
Planning Time: 0.220 ms
Execution Time: 2343.474 ms
The query plan with work_mem=100MB (link to depesz):
Gather Merge (cost=179311.70..194388.65 rows=129222 width=76) (actual time=2205.016..2541.827 rows=981409 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=105939
-> Sort (cost=178311.68..178473.21 rows=64611 width=76) (actual time=2173.869..2241.519 rows=327136 loops=3)
Sort Key: dd.date, jf.job_id, od.id
Sort Method: quicksort Memory: 66835kB
Worker 0: Sort Method: quicksort Memory: 56623kB
Worker 1: Sort Method: quicksort Memory: 51417kB
Buffers: shared hit=105939
-> Hash Join (cost=1001.68..173149.42 rows=64611 width=76) (actual time=36.991..1714.073 rows=327136 loops=3)
Hash Cond: (jf.organization_id = od.id)
Buffers: shared hit=105821
-> Hash Join (cost=177.27..171332.76 rows=36922 width=21) (actual time=2.232..1412.442 rows=148781 loops=3)
Hash Cond: (jf.date = dd.date)
Buffers: shared hit=104722
-> Parallel Seq Scan on job_facts jf (cost=0.00..152657.47 rows=4834347 width=21) (actual time=0.005..486.592 rows=3867527 loops=3)
Buffers: shared hit=104314
-> Hash (cost=176.88..176.88 rows=31 width=4) (actual time=1.904..1.906 rows=31 loops=3)
Buckets: 1024 Batches: 1 Memory Usage: 10kB
Buffers: shared hit=348
-> Seq Scan on date_dimensions dd (cost=0.00..176.88 rows=31 width=4) (actual time=0.013..1.892 rows=31 loops=3)
Filter: ((date >= '2021-01-01'::date) AND (date < '2021-02-01'::date))
Rows Removed by Filter: 4028
Buffers: shared hit=348
-> Hash (cost=705.43..705.43 rows=9518 width=27) (actual time=34.586..34.589 rows=9828 loops=3)
Buckets: 16384 Batches: 1 Memory Usage: 709kB
Buffers: shared hit=1071
-> Hash Right Join (cost=367.38..705.43 rows=9518 width=27) (actual time=13.367..27.326 rows=9828 loops=3)
Hash Cond: (os.organization_id = od.id)
Buffers: shared hit=1071
-> Seq Scan on organizations_sectors os (cost=0.00..207.18 rows=9518 width=12) (actual time=0.019..1.443 rows=9518 loops=3)
Buffers: shared hit=336
-> Hash (cost=299.39..299.39 rows=5439 width=19) (actual time=13.314..13.315 rows=5439 loops=3)
Buckets: 8192 Batches: 1 Memory Usage: 339kB
Buffers: shared hit=735
-> Seq Scan on organization_dimensions od (cost=0.00..299.39 rows=5439 width=19) (actual time=0.016..6.407 rows=5439 loops=3)
Buffers: shared hit=735
Planning Time: 0.221 ms
Execution Time: 2601.698 ms
I'd say that there are two factors contributing:
1. Your writes didn't really hit the disk, but the kernel cache. PostgreSQL uses buffered I/O! To see more, set track_io_timing = on.
2. Random noise. For example, there is no real reason why the sequential scan is 50 ms slower with more work_mem. That parameter has no influence here.
Repeat the experiment a couple of times, and you'll see that the times will vary. I doubt that the query with more work_mem will execute significantly slower.
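A minimal sketch of that experiment, assuming the settings can be changed at the session level (on RDS, track_io_timing may have to be enabled through the parameter group instead):

-- make "I/O Timings" lines appear under the plan nodes in EXPLAIN (ANALYZE, BUFFERS)
set track_io_timing = on;

-- compare both settings in the same session, running the query several times each
set work_mem = '4MB';
-- explain (analyze, buffers) <query from the question>;
set work_mem = '100MB';
-- explain (analyze, buffers) <query from the question>;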

How do I reduce the cost of this query while keeping the results the same?

I have the query below running on a Postgres and a SQL Server DB (use TOP for SQL Server). The sort on the "_change_sequence" value is causing a high cost in my query; is there any way to reduce the cost but maintain the same results?
Query:
SELECT tablename,
CAST(primary_key_values AS VARCHAR),
primary_key_fields,
CAST(min_sequence AS NUMERIC),
_changed_fieldlist,
_operation,
min_sequence
FROM (
SELECT 'memdep' AS tablename,
CONCAT_WS(',',dependant,mem_num) AS primary_key_values,
'dependant,mem_num,' AS primary_key_fields,
_change_sequence AS min_sequence,
ROW_NUMBER() OVER(partition by dependant,mem_num order by _change_sequence) AS rn,
_changed_fieldlist,
_operation
FROM mipbi_ods.memdep
WHERE mipbi_status = 'NEW'
) main
WHERE rn = 1
LIMIT 100
In essence, what I'm looking for is, per key, the record from "memdep" with a "mipbi_status" of 'NEW' and the lowest "_change_sequence". I've tried using a MIN() function instead of the ROW_NUMBER(); the speed is about the same and the cost is about 5 more.
Is there a way to reduce the cost/runtime of the query? I have around 400 million records in this table, if that helps.
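As an aside (not from the question), on the Postgres side the same "lowest _change_sequence per key among NEW rows" requirement can also be written with DISTINCT ON, which makes the intent explicit; whether it is cheaper depends on indexing, and it is Postgres-specific, so the SQL Server variant would still need ROW_NUMBER()/TOP. A simplified sketch with the outer casts omitted:

-- one row per (dependant, mem_num): the one with the lowest _change_sequence
select distinct on (dependant, mem_num)
       'memdep' as tablename,
       concat_ws(',', dependant, mem_num) as primary_key_values,
       'dependant,mem_num,' as primary_key_fields,
       _change_sequence as min_sequence,
       _changed_fieldlist,
       _operation
from mipbi_ods.memdep
where mipbi_status = 'NEW'
order by dependant, mem_num, _change_sequence
limit 100;

The plans below are for the original ROW_NUMBER() version of the query.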
Here is the query explained:
Limit (cost=3080.03..3080.53 rows=100 width=109) (actual time=17.633..17.648 rows=35 loops=1)
-> Unique (cost=3080.03..3089.04 rows=1793 width=109) (actual time=17.632..17.644 rows=35 loops=1)
-> Sort (cost=3080.03..3084.53 rows=1803 width=109) (actual time=17.631..17.634 rows=36 loops=1)
Sort Key: (concat_ws(','::text, memdet.mem_num))
Sort Method: quicksort Memory: 29kB
-> Bitmap Heap Scan on memdet (cost=54.39..2982.52 rows=1803 width=109) (actual time=16.853..17.542 rows=36 loops=1)
Recheck Cond: ((mipbi_status)::text = 'NEW'::text)
Heap Blocks: exact=8
-> Bitmap Index Scan on idx_mipbi_status_memdet (cost=0.00..53.94 rows=1803 width=0) (actual time=10.396..10.396 rows=38 loops=1)
Index Cond: ((mipbi_status)::text = 'NEW'::text)
Planning time: 0.201 ms
Execution time: 17.700 ms
I'm using a smaller table here; this isn't the 400-million-record table, but the indexes and everything else are the same.
Here is the query plan for the large table:
Limit (cost=47148422.27..47149122.27 rows=100 width=113) (actual time=2407976.293..2407977.112 rows=100 loops=1)
Output: main.tablename, ((main.primary_key_values)::character varying), main.primary_key_fields, main.min_sequence, main._changed_fieldlist, main._operation, main.min_sequence
Buffers: shared hit=6269554 read=12205028 dirtied=1893 written=4566983, temp read=443831 written=1016025
-> Subquery Scan on main (cost=47148422.27..52102269.25 rows=707692 width=113) (actual time=2407976.292..2407977.100 rows=100 loops=1)
Output: main.tablename, (main.primary_key_values)::character varying, main.primary_key_fields, main.min_sequence, main._changed_fieldlist, main._operation, main.min_sequence
Filter: (main.rn = 1)
Buffers: shared hit=6269554 read=12205028 dirtied=1893 written=4566983, temp read=443831 written=1016025
-> WindowAgg (cost=47148422.27..50333038.19 rows=141538485 width=143) (actual time=2407976.288..2407977.080 rows=100 loops=1)
Output: 'claim', concat_ws(','::text, claim.gen_claimnum), 'gen_claimnum,', claim._change_sequence, row_number() OVER (?), claim._changed_fieldlist, claim._operation, claim.gen_claimnum
Buffers: shared hit=6269554 read=12205028 dirtied=1893 written=4566983, temp read=443831 written=1016025
-> Sort (cost=47148422.27..47502268.49 rows=141538485 width=39) (actual time=2407976.236..2407976.905 rows=100 loops=1)
Output: claim._change_sequence, claim.gen_claimnum, claim._changed_fieldlist, claim._operation
Sort Key: claim.gen_claimnum, claim._change_sequence
Sort Method: external merge Disk: 4588144kB
Buffers: shared hit=6269554 read=12205028 dirtied=1893 written=4566983, temp read=443831 written=1016025
-> Seq Scan on mipbi_ods.claim (cost=0.00..20246114.01 rows=141538485 width=39) (actual time=0.028..843181.418 rows=88042077 loops=1)
Output: claim._change_sequence, claim.gen_claimnum, claim._changed_fieldlist, claim._operation
Filter: ((claim.mipbi_status)::text = 'NEW'::text)
Rows Removed by Filter: 356194
Buffers: shared hit=6269554 read=12205028 dirtied=1893 written=4566983
Planning time: 8.796 ms
Execution time: 2408702.464 ms

Full text search query is slow on first query

I have an airports table which contains nearly 4k airports. The table has a searchable column, which is a tsvector column with a GIN index airports_searchable_index:
searchable tsvector NULL
CREATE INDEX airports_searchable_index ON airports USING gin (searchable)
Given that I have an indexed document in the searchable column, when I run a query against that column I get very quick responses on my dev machine (around 3ms for the query) but around 650ms on production (using the exact same data!). The weird part is that my production machine is much stronger than my local dev machine. An example query:
select * from "airports" where searchable ## to_tsquery('public.hebrew', 'ltn:*') order by "popularity" desc limit 100
I've opened pgAdmin and tried some tests. What I saw is that the first time I run the query above in a new "Query Tool" panel, it takes anywhere between 650-800ms to execute. However, on the second run it takes 30-60ms, even if I change the query term. I concluded from that that Postgres is possibly opening the document in memory for each connection and running the query against that. Since I'm using PHP to talk to my backend, every request opens its own connection to the DB, hence Postgres constantly re-opens the document.
Could it be a misconfiguration on my production server?
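One way to test the cold-cache theory (a hedged suggestion, not something established above) is to warm the table and index explicitly with the pg_prewarm contrib extension and then check whether the first run on a fresh connection is still slow:

-- pg_prewarm ships with PostgreSQL as a contrib module
create extension if not exists pg_prewarm;

-- load the heap and the GIN index into shared buffers
select pg_prewarm('airports');
select pg_prewarm('airports_searchable_index');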
Here is an explain query (for production server):
Limit (cost=24.03..24.04 rows=1 width=8) (actual time=0.048..0.048 rows=1 loops=1)
Output: id, popularity
Buffers: shared hit=4
-> Sort (cost=24.03..24.04 rows=1 width=8) (actual time=0.047..0.047 rows=1 loops=1)
Output: id, popularity
Sort Key: airports.popularity DESC
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=4
-> Bitmap Heap Scan on lametayel.airports (cost=20.01..24.02 rows=1 width=8) (actual time=0.040..0.040 rows=1 loops=1)
Output: id, popularity
Recheck Cond: (airports.searchable @@ '''ltn'':*'::tsquery)
Heap Blocks: exact=1
Buffers: shared hit=4
-> Bitmap Index Scan on airports_searchable_index (cost=0.00..20.01 rows=1 width=0) (actual time=0.036..0.036 rows=1 loops=1)
Index Cond: (airports.searchable @@ '''ltn'':*'::tsquery)
Buffers: shared hit=3
Planning time: 0.304 ms
Execution time: 0.078 ms
Here is an explain query (for development server):
Limit (cost=28.03..28.04 rows=1 width=8) (actual time=0.065..0.067 rows=1 loops=1)
Output: id, popularity
Buffers: shared hit=5
-> Sort (cost=28.03..28.04 rows=1 width=8) (actual time=0.064..0.065 rows=1 loops=1)
Output: id, popularity
Sort Key: airports.popularity DESC
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=5
-> Bitmap Heap Scan on lametayel.airports (cost=24.01..28.02 rows=1 width=8) (actual time=0.046..0.047 rows=1 loops=1)
Output: id, popularity
Recheck Cond: (airports.searchable @@ '''ltn'':*'::tsquery)
Heap Blocks: exact=1
Buffers: shared hit=5
-> Bitmap Index Scan on airports_searchable_index (cost=0.00..24.01 rows=1 width=0) (actual time=0.038..0.038 rows=1 loops=1)
Index Cond: (airports.searchable @@ '''ltn'':*'::tsquery)
Buffers: shared hit=4
Planning time: 0.534 ms
Execution time: 0.122 ms