Postgres: Optimizing an average operation - SQL

This issue has been resolved.
The solution was to create an index on (listings.id, listings.lat, listings.lng). At first the planner refused to use this index in favor of a seq scan, but vacuuming the table (VACUUM calendar) resolved that. Whew.
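For reference, a minimal sketch of that fix (the index name is made up; the statements just mirror the description above):
CREATE INDEX listings_id_lat_lng_idx ON listings (id, lat, lng);
VACUUM calendar;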
I'm having issues with query speed: it takes approx. 20 s to run, and I really need it to return in a few seconds. Am I using inefficient SQL here?
select c.date, avg(c.price)
from calendar c
join listings l on l.id = c.listing_id
where l.lat >= 51.45 and l.lat <= 51.5
and l.lng >= -0.2 and l.lng <= -0.1
group by c.date;
The lat/lng are hard-coded here, in reality they are dynamic tableau parameters.
I know the lat/lng test and the join onto listings are NOT the bottleneck; that part runs fast. The query runs in roughly the same time without the GROUP BY as well, so I don't believe that is the bottleneck either.
To indicate size, that particular query is averaging ~230,000 rows. I have tried multi-column indexes of (c.date, c.listing_id) and (c.date, c.listing_id, c.price) without any speed improvement.
Naturally I can't pre-compute the result as the lat/lng will be different each time.
Can I improve the speed here? Do I need to upgrade my Amazon RDS instance?
Any advice appreciated, thanks!
(Using Postgres 9.5)
Update
From what I can tell from testing, the query is very slow because it is using a seq scan on 'calendar'. I added the index (l.id, l.lat, l.lng) and forced index usage with SET enable_seqscan = off;, and the query ran in <1 s. I have no idea why the optimiser is using a seq scan instead of an index scan.
EXPLAIN ANALYZE: with seq scan, without seq scan
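Roughly the session-level toggle used for that test (a sketch; enable_seqscan is a diagnostic switch and shouldn't be left off in production):
SET enable_seqscan = off;   -- discourage seq scans for this session only
EXPLAIN ANALYZE
select c.date, avg(c.price)
from calendar c
join listings l on l.id = c.listing_id
where l.lat >= 51.45 and l.lat <= 51.5
  and l.lng >= -0.2 and l.lng <= -0.1
group by c.date;
RESET enable_seqscan;       -- restore the default planner behaviour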

Related

Optimizing SQL query on large table

Is there any optimization I can do to speed up this query? It is currently taking 30 minutes to run.
SELECT *
FROM service s
JOIN bucket b ON s.procedure = b.hcpc
WHERE month >= '201904'
  AND bucket = 'Respirator'
Explain execution plan:
Gather  (cost=1002.24..81397944.91 rows=9782404 width=212)
  Workers Planned: 2
  ->  Hash Join  (cost=2.24..80418704.51 rows=4076002 width=212)
        Hash Cond: ((s.procedure)::text = (b.hcpc)::text)
        ->  Parallel Seq Scan on service s  (cost=0.00..77753288.33 rows=699907712 width=154)
              Filter: ((month)::text >= '201904'::text)
        ->  Hash  (cost=2.06..2.06 rows=14 width=58)
              ->  Seq Scan on buckets b  (cost=0.00..2.06 rows=14 width=58)
                    Filter: ((bucket)::text = 'Respirator'::text)
SELECT *
FROM service s
JOIN bucket b ON s.procedure = b.hcpc
WHERE s.month >= '201904'
  AND b.bucket = 'Respirator';
I would suggest indexes on:
bucket(bucket, hcpc)
service(procedure, month)
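Spelled out as DDL, those suggestions would look something like this (the index names are placeholders):
CREATE INDEX bucket_bucket_hcpc_idx ON bucket (bucket, hcpc);
CREATE INDEX service_procedure_month_idx ON service (procedure, month);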
Query optimization doesn't have hard and fast rules; it's more of a trial-and-error process. Sometimes a technique will work really well, and then the same technique will have little to no effect on another query. That being said, here are a couple of things I would try to get you started.
Instead of SELECT *, list out the column names that you need. If you need all of both tables, still list them out.
Are there any numeric columns that you can use in your WHERE clause to do some preliminary filtering? Comparing only string data types is almost always a pain point in query optimization.
Look at the existing indexes on the table and see if any changes need to be made. Indexes can have a huge impact on query performance, both positive and negative depending on setup.
Again, it's all trial and error, these are just a couple of places to start.

A PostgreSQL query won't finish

On PostgreSQL 9.0 we have an SQL query:
SELECT count(*) FROM lane
WHERE not exists
(SELECT 1 FROM id_map
WHERE id_map.new_id=lane.lane_id
and id_map.column_name='lane_id'
and id_map.table_name='lane')
and lane.lane_id is not null;
that usually takes around 1.5 seconds to finish.
Here's the explain plan: http://explain.depesz.com/s/axNN
Sometimes, though, this query hangs and will not finish. It may run for 11 hours without success.
It then takes up 100% of the CPU.
The only locks this query takes are "AccessShareLock"s and they are all granted.
SELECT a.datname,
c.relname,
l.transactionid,
l.mode,
l.granted,
a.usename,
a.current_query,
a.query_start,
age(now(), a.query_start) AS "age",
a.procpid
FROM pg_stat_activity a
JOIN pg_locks l ON l.pid = a.procpid
JOIN pg_class c ON c.oid = l.relation
ORDER BY a.query_start;
The query is run as part of a Java process that connects to the database using a connection pool and sequentially performs similar SELECT queries of this format:
SELECT count(*) FROM {} WHERE not exists (SELECT 1 FROM id_map WHERE id_map.new_id={}.{} and id_map.column_name='{}' and id_map.table_name='{}') and {}.{} is not null
No updates or deletes are happening in parallel with this process, so I don't think vacuuming can be the issue here.
Prior to running the entire process (so before the 6 queries of this sort are run), an ANALYZE was run on all the tables.
The Postgres logs don't show any entry for the long-running queries because they never finish and thus never get logged.
Any idea what may cause this kind of behavior and how to prevent it from happening?
The explain plan (without ANALYZE):
Aggregate  (cost=874337.91..874337.92 rows=1 width=0)
  ->  Nested Loop Anti Join  (cost=0.00..870424.70 rows=1565283 width=0)
        Join Filter: (id_map.new_id = lane.lane_id)
        ->  Seq Scan on lane  (cost=0.00..30281.84 rows=1565284 width=8)
              Filter: (lane_id IS NOT NULL)
        ->  Materialize  (cost=0.00..816663.60 rows=1 width=8)
              ->  Seq Scan on id_map  (cost=0.00..816663.60 rows=1 width=8)
                    Filter: (((column_name)::text = 'lane_id'::text) AND ((table_name)::text = 'lane'::text))
VACUUM ANALYZE VERBOSE;
Refreshing the statistics should help the DB choose an optimal plan instead of the nested loops, which I believe are what take 100% CPU.
From what I understand, this problem might be caused by one of the following:
Postgres has run out of available transaction IDs (when all of the two billion available transaction IDs have been used, the transaction IDs start over again at one, which results in wraparound issues that can cause severe data loss or a DB shutdown).
The database is too bloated, i.e. DELETE and UPDATE commands (an UPDATE is converted into an INSERT plus a DELETE by Postgres) mark tuples as deleted but do not physically remove them.
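A quick way to check the first possibility (a standard catalog query, not from the original answer) is to look at how old each database's frozen transaction ID is:
SELECT datname,
       age(datfrozenxid) AS xid_age  -- wraparound protection kicks in as this approaches ~2 billion
FROM pg_database
ORDER BY xid_age DESC;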
If you are on a cloud server such as GCloud, you can set some database flags to make VACUUM run automatically and clean up the tuples that are marked as dead but still present in your database, and to make ANALYZE gather up-to-date statistics about frequently updated tables for use in the execution plan. Example:
autovacuum: on
autovacuum_analyze_scale_factor: 0.05
autovacuum_analyze_threshold: 10
autovacuum_naptime: 15
autovacuum_vacuum_cost_delay: 10
autovacuum_vacuum_cost_limit: 1000
autovacuum_vacuum_scale_factor: 0.1
autovacuum_vacuum_threshold: 25
log_autovacuum_min_duration: 0
track_counts: on
Source:
https://www.postgresql.org/docs/9.5/runtime-config-autovacuum.html
https://www.techonthenet.com/postgresql/autovacuum.php
https://aws.amazon.com/premiumsupport/knowledge-center/transaction-id-wraparound-effects/
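On a self-managed server (PostgreSQL 9.4 or later) the same flags can be applied without a cloud console, for example with ALTER SYSTEM (a sketch, not taken from the linked sources):
ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.1;
ALTER SYSTEM SET autovacuum_analyze_scale_factor = 0.05;
ALTER SYSTEM SET log_autovacuum_min_duration = 0;
SELECT pg_reload_conf();  -- reload the configuration so the new settings take effect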

PostgreSQL SELECT query performance issue

I have a simple select query:
SELECT * FROM entities WHERE entity_type_id = 1 ORDER BY entity_id
Then I want to get the first 100 results, so I use this:
SELECT * FROM entities WHERE entity_type_id = 1 ORDER BY entity_id LIMIT 100
The problem is that the second query runs much slower than the first one. It takes less than a second to execute the first query and more than a minute to execute the second one.
These are execution plans for the queries:
without limit:
Sort  (cost=26201.43..26231.42 rows=11994 width=72)
  Sort Key: entity_id
  ->  Index Scan using entity_type_id_idx on entities  (cost=0.00..24895.34 rows=11994 width=72)
        Index Cond: (entity_type_id = 1)
with limit:
Limit  (cost=0.00..8134.39 rows=100 width=72)
  ->  Index Scan using xpkentities on entities  (cost=0.00..975638.85 rows=11994 width=72)
        Filter: (entity_type_id = 1)
I don't understand why these two plans are so different and why the performance decreases so much. How should I tweak the second query to make it work faster?
I use PostgreSQL 9.2.
You want the 100 smallest entity_id's matching your condition. Now - if those were numbers 1..100 then clearly using the entity_id index is the best way to handle this - everything is pre-sorted. In fact, if the 100 you wanted were in the range 1..200 then it still makes sense. Probably 1..1000 would.
So - PostgreSQL thinks it will find lots of entity_type_id=1 values at the "start" of the table. It estimates a cost of 8134 vs 26231 to filter by type then sort. In your case it is wrong.
Now - either there is some correlation which isn't obvious (that's bad - we can't tell the planner about that at present), or we don't have up-to-date or sufficient stats.
Does an ANALYZE entities make any difference? You can see what values the planner knows about by reading the planner-stats page in the manuals.
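A short sketch of the above, plus one common follow-up, assuming the table and column names from the question (the composite index is my addition, not part of the original answer):
ANALYZE entities;  -- refresh the planner's statistics for this table
-- a composite index matches this exact filter + ORDER BY, letting the planner
-- read the first 100 qualifying rows directly in order:
CREATE INDEX entities_type_ordered_idx ON entities (entity_type_id, entity_id);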

Postgres min function performance

I need the lowest value for runnerId.
This query:
SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ;
takes 80 ms (1968 result rows).
This:
SELECT min("runnerId") FROM betlog WHERE "marketId" = '107416794' ;
takes 1600 ms.
Is there a faster way to find the minimum, or should I calc the min in my java program?
"Result (cost=100.88..100.89 rows=1 width=0)"
" InitPlan 1 (returns $0)"
" -> Limit (cost=0.00..100.88 rows=1 width=9)"
" -> Index Scan using runneridindex on betlog (cost=0.00..410066.33 rows=4065 width=9)"
" Index Cond: ("runnerId" IS NOT NULL)"
" Filter: ("marketId" = 107416794::bigint)"
CREATE INDEX marketidindex
ON betlog
USING btree
("marketId" COLLATE pg_catalog."default");
Another idea:
SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId" LIMIT 1 >1600ms
SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId" >>100ms
How can a LIMIT slow the query down?
What you need is a multi-column index:
CREATE INDEX betlog_mult_idx ON betlog ("marketId", "runnerId");
If interested, you'll find in-depth information about multi-column indexes in PostgreSQL, links and benchmarks under this related question on dba.SE.
How did I figure?
In a multi-column index, rows are ordered (and thereby clustered) by the first column of the index ("marketId"), and each cluster is in turn ordered by the second column of the index - so the first row matches the condition min("runnerId"). This makes the index scan extremely fast.
Concerning the paradox effect of LIMIT slowing down a query - the Postgres query planner has a weakness there. The common workaround is to use a CTE (not necessary in this case). Find more information under this recent, closely related question:
PostgreSQL query taking too long
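As a quick sanity check once betlog_mult_idx exists, both slow forms from the question should now resolve via the new index (a sketch; exact timings will vary):
SELECT min("runnerId") FROM betlog WHERE "marketId" = '107416794';
SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId" LIMIT 1;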
The min statement will be executed by PostgreSQL using a sequential scan of the entire table. You could optimize the query using the following approach:
SELECT col FROM sometable ORDER BY col ASC LIMIT 1;
When you had an index on ("runnerId") (or at least with "runnerId" as the high order column) but did not have the index on ("marketId", "runnerId") it compared the cost of passing all rows with a matching "marketId" using the index on that column and picking out the minimum "runnerId" from that set to the cost of scanning using the index on "runnerId" and stopping when it found the first row with a matching "marketId". Based on available statistics and the assumption that "marketId" values would be randomly distributed within the index entries for the index on "runnerId" it estimated a lower cost for the latter approach.
It also estimated the cost of scanning the whole table and picking the minimum from matching rows as well as probably a number of other alternatives. It does not always use a certain type of plan, but compares costs of all the alternatives.
The problem is that the assumption that values will be randomly distributed in the range is not necessarily true (as in this example), leading to a scan of a high percentage of the range to find the rows lurking at the end. For some values of "marketId", where the chosen value is available near the beginning of the "runnerId" index, this plan should be very fast.
There has been discussion in the PostgreSQL developer community of how we might bias against plans which are "risky" in terms of running long if the data distribution is not what was assumed, and there has been work on tracking multi-column statistics so that correlated values don't run into such problems. Expect improvements in this area in the next few releases. Until then, Erwin's suggestions are on target for how to work around the issue.
Basically it comes down to making a more attractive plan available or introducing an optimization barrier. In this case you can provide a more attractive option by adding the index on ("marketId", "runnerId") -- which allows a very direct way to get straight to the answer. The planner assigns a very low cost to that alternative, causing it to be chosen. If you preferred not to add the index, you could force an optimization barrier by doing something like this:
SELECT min("runnerId")
FROM (SELECT "runnerId" FROM betlog
WHERE "marketId" = '107416794'
OFFSET 0) x;
When there is an OFFSET clause (even for an offset of zero) it forces the subquery to be planned separately and its results fed to the outer query. I would expect this to run in 80 ms rather than the 1600 ms you get without the optimization barrier. Of course, if you can add the index, the speed of the query when data is cached should be less than 1 ms.

Poor DB Performance when using ORDER BY

I'm working with a non-profit that is mapping out solar potential in the US. Needless to say, we have a ridiculously large PostgreSQL 9 database. Running a query like the one shown below is speedy until the order by line is uncommented, in which case the same query takes forever to run (185 ms without sorting compared to 25 minutes with). What steps should be taken to ensure this and other queries run in a more manageable and reasonable amount of time?
select A.s_oid, A.s_id, A.area_acre, A.power_peak, A.nearby_city, A.solar_total
from global_site A cross join na_utility_line B
where (A.power_peak between 1.0 AND 100.0)
and A.area_acre >= 500
and A.solar_avg >= 5.0
AND A.pc_num <= 1000
and (A.fips_level1 = '06' AND A.fips_country = 'US' AND A.fips_level2 = '025')
and B.volt_mn_kv >= 69
and B.fips_code like '%US06%'
and B.status = 'active'
and ST_within(ST_Centroid(A.wkb_geometry), ST_Buffer((B.wkb_geometry), 1000))
--order by A.area_acre
offset 0 limit 11;
The sort is not the problem - in fact, the CPU and memory cost of the sort is close to zero, since Postgres has a Top-N sort in which the result set is scanned while maintaining a small sort buffer holding only the top N rows.
select count(*) from (1 million row table) -- 0.17 s
select * from (1 million row table) order by x limit 10; -- 0.18 s
select * from (1 million row table) order by x; -- 1.80 s
So you see, the Top-10 sort only adds 10 ms to a dumb fast count(*), versus a lot longer for a real sort. That's a very neat feature; I use it a lot.
OK, now without EXPLAIN ANALYZE it's impossible to be sure, but my feeling is that the real problem is the cross join. Basically you're filtering the rows in both tables using:
where (A.power_peak between 1.0 AND 100.0)
and A.area_acre >= 500
and A.solar_avg >= 5.0
AND A.pc_num <= 1000
and (A.fips_level1 = '06' AND A.fips_country = 'US' AND A.fips_level2 = '025')
and B.volt_mn_kv >= 69
and B.fips_code like '%US06%'
and B.status = 'active'
OK. I don't know how many rows are selected in both tables (only EXPLAIN ANALYZE would tell), but it's probably significant. Knowing those numbers would help.
Then we get the worst-case CROSS JOIN condition ever:
and ST_within(ST_Centroid(A.wkb_geometry), ST_Buffer((B.wkb_geometry), 1000))
This means every row of A is matched against every row of B (so this expression is going to be evaluated a large number of times), using a bunch of pretty complex, slow, CPU-intensive functions.
Of course it's horribly slow!
When you remove the ORDER BY, Postgres just comes up (by chance?) with a bunch of matching rows right at the start, outputs those, and stops once the LIMIT is reached.
Here's a little example :
Tables a and b are identical; each contains 1000 rows and a column of type BOX.
select * from a cross join b where (a.b && b.b) --- 0.28 s
Here 1000000 box overlap (operator &&) tests are completed in 0.28s. The test data set is generated so that the result set contains only 1000 rows.
create index a_b on a using gist(b);
create index b_b on b using gist(b);
select * from a cross join b where (a.b && b.b) --- 0.01 s
Here the index is used to optimize the cross join, and speed is ridiculous.
You need to optimize that geometry matching.
add columns which will cache:
ST_Centroid(A.wkb_geometry)
ST_Buffer((B.wkb_geometry), 1000)
There is NO POINT in recomputing those slow functions a million times during your CROSS JOIN, so store the results in a column. Use a trigger to keep them up to date.
add columns of type BOX which will cache:
Bounding Box of ST_Centroid(A.wkb_geometry)
Bounding Box of ST_Buffer((B.wkb_geometry), 1000)
add GiST indexes on the BOXes
add a box overlap test (using the && operator), which will use the index
keep your ST_Within, which will act as a final filter on the rows that pass
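Putting those steps together, a rough sketch (column and index names are made up, and the triggers that keep the cached columns fresh are left out):
-- cache the expensive geometry computations in regular columns
ALTER TABLE global_site     ADD COLUMN centroid_geom geometry;
ALTER TABLE na_utility_line ADD COLUMN buffer_geom   geometry;
UPDATE global_site     SET centroid_geom = ST_Centroid(wkb_geometry);
UPDATE na_utility_line SET buffer_geom   = ST_Buffer(wkb_geometry, 1000);
-- GiST indexes so the overlap test can use an index
CREATE INDEX global_site_centroid_gist   ON global_site     USING gist (centroid_geom);
CREATE INDEX na_utility_line_buffer_gist ON na_utility_line USING gist (buffer_geom);
-- in the query: cheap indexed overlap first, exact test as the final filter
--   and A.centroid_geom && B.buffer_geom
--   and ST_Within(A.centroid_geom, B.buffer_geom)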
Maybe you can just index the ST_Centroid and ST_Buffer columns... and use an (indexed) "contains" operator; see here:
http://www.postgresql.org/docs/8.2/static/functions-geometry.html
I would suggest creating an index on area_acre. You may want to take a look at the following: http://www.postgresql.org/docs/9.0/static/sql-createindex.html
I would recommend doing this sort of thing outside of peak hours, though, because it can be somewhat intensive with a large amount of data. One thing you will also have to look at with indexes is rebuilding them on a schedule to ensure performance over time. Again, this schedule should be outside of peak hours.
You may want to take a look at this article from a fellow SO'er and his experience with database slowdowns over time with indexes: Why does PostgresQL query performance drop over time, but restored when rebuilding index
If the A.area_acre field is not indexed that may slow it down. You can run the query with EXPLAIN to see what it is doing during execution.
First off, I would look at creating indexes, ensuring your DB is being vacuumed, increasing shared_buffers for your install, and tuning the work_mem settings.
The first thing to look at is whether you have an index on the field you're ordering by. If not, adding one will dramatically improve performance. I don't know PostgreSQL that well, but something similar to:
CREATE INDEX area_acre ON global_site(area_acre)
As noted in other replies, the indexing process is intensive when working with a large data set, so do this during off-peak.
I am not familiar with the PostgreSQL optimizations, but it sounds like what is happening when the query is run with the ORDER BY clause is that the entire result set is created, then it is sorted, and then the top 11 rows are taken from that sorted result. Without the ORDER BY, the query engine can just generate the first 11 rows in whatever order it pleases and then it's done.
Having an index on the area_acre field very possibly may not help for the sorting (ORDER BY) depending on how the result set is built. It could, in theory, be used to generate the result set by traversing the global_site table using an index on area_acre; in that case, the results would be generated in the desired order (and it could stop after generating 11 rows in the result). If it does not generate the results in that order (and it seems like it may not be), then that index will not help in sorting the results.
One thing you might try is to remove the "CROSS JOIN" from the query. I doubt that this will make a difference, but it's worth a test. Because a WHERE clause is involved joining the two tables (via ST_WITHIN), I believe the result is the same as an inner join. It is possible that the use of the CROSS JOIN syntax is causing the optimizer to make an undesirable choice.
Otherwise (aside from making sure indexes exist for fields that are being filtered), you could play a bit of a guessing game with the query. One condition that stands out is the area_acre >= 500. This means that the query engine is considering all rows that meet that condition. But then only the first 11 rows are taken. You could try changing it to area_acre >= 500 and area_acre <= somevalue. The somevalue is the guessing part that would need adjustment to make sure you get at least 11 rows. This, however, seems like a pretty cheesy thing to do, so I mention it with some reticence.
Have you considered creating expression-based indexes for the benefit of the hairier joins and WHERE conditions?
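For example, something along these lines (a sketch; it assumes the PostGIS versions of ST_Centroid and ST_Buffer, which are immutable and therefore indexable, and the index names are made up):
CREATE INDEX global_site_centroid_expr_gist
    ON global_site USING gist (ST_Centroid(wkb_geometry));
CREATE INDEX na_utility_line_buffer_expr_gist
    ON na_utility_line USING gist (ST_Buffer(wkb_geometry, 1000));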