Limit query time on the client side in PostgreSQL - sql

I'm trying to query a PostgreSQL database, but it is a public server and I really don't want to waste a lot of its CPU for a long time.
So I wonder if there is some way to limit my query time to a specific duration, for example 3/5/10 minutes.
I assume there is syntax similar to LIMIT, but for query duration rather than the number of results.
Thanks for any kind of help.

Set statement_timeout, then your query will be terminated with an error if it exceeds that limit.
According to this documentation:
statement_timeout (integer)
Abort any statement that takes more than the specified number of milliseconds, starting from the time the command arrives at the server from the client. If log_min_error_statement is set to ERROR or lower, the statement that timed out will also be logged. A value of zero (the default) turns this off.
Setting statement_timeout in postgresql.conf is not recommended
because it would affect all sessions.
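For example, a rough sketch of setting it per session or per transaction instead (the 5-minute value is just an example):
SET statement_timeout = '5min';   -- current session only; a bare integer is interpreted as milliseconds
-- or, scoped to a single transaction:
BEGIN;
SET LOCAL statement_timeout = '5min';
-- run the long query here
COMMIT;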

Here is a potential catch with using LIMIT to control how long a query might run. Rightfully, you ought to also be using ORDER BY with your query, to tell Postgres exactly which rows should make it into the limited result set. The caveat is that, unless an index already provides the requested order, Postgres typically has to read and sort the entire result set before it can return the first LIMIT rows, which might take longer than just reading the entire result set.
One workaround to this might be to just use LIMIT without ORDER BY. If the execution plan does not include reading the entire table and sorting, it might be one way to do what you want. However, keep in mind that if you go this route, Postgres would have free license to return any records from the table it wishes, in any order. This is probably not what you want from a business or reporting point of view.
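To illustrate the difference between the two forms (the table and column names here are only placeholders):
-- deterministic result, but Postgres may have to read and sort the whole result set before returning the first 100 rows
SELECT * FROM big_table ORDER BY id LIMIT 100;
-- stops as soon as 100 rows are produced, but which 100 rows you get is arbitrary
SELECT * FROM big_table LIMIT 100;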
But a much better approach here would be to just tune your query using things like indexes, making it fast enough that you don't need to resort to a LIMIT trick.

Related

deleting 500 records from table with 1 million records shouldn't take this long

I hope someone can help me. I have a simple sql statement
delete from sometable
where tableidcolumn in (...)
I have 500 records I want to delete and recreate. The table recently grew to over 1 million records. The problem is that the statement above is taking over 5 minutes without completing. I have a primary key and 2 non-clustered, non-unique indexes. My delete statement is using the primary key.
Can anyone help me understand why this statement is taking so long and how I can speed it up?
There are two areas I would look at first: locking and a bad plan.
Locking - run your query and, while it is running, check whether it is being blocked by anything else: select * from sys.dm_exec_requests where blocking_session_id <> 0. If you see anything blocking your request, then I would start by looking at:
https://www.simple-talk.com/sql/database-administration/the-dba-as-detective-troubleshooting-locking-and-blocking/
If there is no locking, then get the execution plan for the delete: what is it doing? Is the estimated cost exceptionally high?
Other than that, how long do you expect it to take? Is it a little bit longer than that or a lot longer? Did it only get so slow after it grew significantly or has it been getting slower over a long period of time?
What is the I/O performance like - what are your average read/write times, and so on?
TL;DR: Don't do that (instead of a big 'in' clause: preload and use a temporary table).
With that number of parameters, an unknown backend configuration (even though it should be fine by today's standards), and no way to guess what your in-memory size may be during processing, you may be hitting (in order) a stack, batch or memory size limit, starting with this answer. It is also possible to hit an instruction size limit.
The troubleshooting comments may lead you to another answer. My focus is the 'in' clause and statement size, and the fact that all of these links include advice to preload a temporary table and use that with your query.
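A rough sketch of that temporary-table approach in SQL Server syntax (#ids is an illustrative name; sometable and tableidcolumn are taken from the question):
CREATE TABLE #ids (tableidcolumn int PRIMARY KEY);
INSERT INTO #ids (tableidcolumn) VALUES (1), (2), (3);  -- load all 500 ids here (multi-row VALUES needs SQL Server 2008 or later)
DELETE t
FROM sometable AS t
INNER JOIN #ids AS i ON i.tableidcolumn = t.tableidcolumn;
DROP TABLE #ids;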

What is the best way to ensure consistent ordering in an Oracle query?

I have a program that needs to run queries on a number of very large Oracle tables (the largest with tens of millions of rows). The output of these queries is fed into another process which (as a side effect) can record the progress of the query (i.e., the last row fetched).
It would be nice if, in the event that the task stopped half way through for some reason, it could be restarted. For this to happen, the query has to return rows in a consistent order, so it has to be sorted. The obvious thing to do is to sort on the primary key; however, there is probably going to be a penalty for this in terms of performance (an index access) versus a non-sorted solution. Given that a restart may never happen this is not desirable.
Is there some trick to ensure consistent ordering in another way? Any other suggestions for maintaining performance in this case?
EDIT: I have been looking around and seen "order by rowid" mentioned. Is this useful or even possible?
EDIT2: I am adding some benchmarks:
With no order by: 17 seconds.
With order by PK: 46 seconds.
With order by rowid: 43 seconds.
So any order by has a savage effect on performance, and using rowid makes little difference. The accepted answer is that there is no easy way to do it.
The best advice I can think of is to reduce the chance of a problem occurring that might stop the process, and that means keeping the code simple. No cursors, no commits, no trying to move part of the data, just straight SQL statements.
Unless a complete restart would be a completely unacceptable disaster, I'd go for simplicity without any part-way restart code at all.
If you want rows in some order and the queried data is not already sorted, then it has to be sorted anyway, and that costs resources.
So there are at least two variants for optimization:
Minimize the resources spent on sorting;
Query data that is already sorted.
For the first variant, Oracle already calculates on its own the plan that minimizes data access and overall query time. It may be possible to pick a sort order that matches a unique index the optimizer already uses, but that is a very questionable tactic.
The second variant is about index-organized tables and about forcing Oracle, with hints, to use a specific index. That seems OK if you need to process nearly all records in a table, but if the query selects only a small fraction of the rows it significantly slows the process, even on a single table.
Think about a table with a surrogate primary key that holds 10 years of transaction history. If you need data only for the previous year and you force ordering by the primary key, then Oracle has to process records from all 10 years one by one to find the records that belong to that single year.
But if you need 9 years of data from this table, then a full table scan may be faster than the index-based approach.
So the selectivity of your query is the key to choosing between a full table scan and result sorting.
For storing results and restarting the query, a good solution is to use Oracle Streams Advanced Queuing to feed the other process.
All unprocessed messages in the queue are redirected to an exception queue, where they can be processed separately.
Because you don't specify an exact ordering for the selected records, I suppose you need ordering only to keep track of the unprocessed part of the records. If that is true, then with AQ you don't need ordering at all and may even process records in parallel.
So, finally, from my point of view a buffered queue is what you really need.
You could skip ordering and just update the records you processed with something like SET is_processed = 'Y' or SET date_processed = sysdate. Complete restartability and no ordering.
For performance you can partition by is_processed. Yes, partition key changes might be slow, but it is all about trade-offs.
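A minimal sketch of that idea (table and column names are illustrative):
UPDATE big_source_table
SET is_processed = 'Y', date_processed = SYSDATE
WHERE row_id BETWEEN 1 AND 1000
  AND is_processed = 'N';
COMMIT;
-- after a restart, only the unprocessed rows are picked up again
SELECT * FROM big_source_table WHERE is_processed = 'N';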

libpq very slow for large (20 million record) database

I am new to SQL/RDBMS.
I have an application which adds rows with 10 columns to a PostgreSQL server using the libpq library. Right now, my server is running on the same machine as my Visual C++ application.
I have added around 15-20 million records. The simple query to get the total count, select count(*) from <tableName>;, is taking 4-5 minutes.
I have indexed my table on the time at which I insert the data (timecode). Most of the time I need the count with different WHERE / AND clauses added.
Is there any way to make things fast? I need to make it as fast as possible because once the server moves to the network, things will become much slower.
Thanks
I don't think network latency will be a large factor in how long your query takes. All the processing is being done on the PostgreSQL server.
The PostgreSQL MVCC design means each row in the table - not just the index(es) - must be walked to calculate the count(*) which is an expensive operation. In your case there are a lot of rows involved.
There is a good wiki page on this topic here http://wiki.postgresql.org/wiki/Slow_Counting with suggestions.
Two suggestions from this link. One is to use an index column:
select count(index-col) from ...;
... though this only works under some circumstances.
If you have more than one index, see which one has the least cost by using:
EXPLAIN ANALYZE select count(index-col) from ...;
If you can live with an approximate value, the other is to query Postgres-specific statistics for an estimate:
select reltuples from pg_class where relname='mytable';
How good this approximation is depends on how often autovacuum is set to run and many other factors; see the comments.
Consider pg_relation_size('tablename') and divide it by the seconds spent in
select count(*) from tablename
That will give the throughput of your disk(s) when doing a full scan of this table. If it's too low, you want to focus on improving that in the first place.
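A sketch of that check in psql (the table name is illustrative):
\timing on
SELECT pg_relation_size('mytable');   -- size of the table in bytes
SELECT count(*) FROM mytable;         -- note the elapsed time psql reports
-- bytes divided by seconds gives the effective sequential-scan throughput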
Having a good I/O subsystem and well performing operating system disk cache is crucial for databases.
The default postgres configuration is meant to not consume too many resources, to play nice with other applications. Depending on your hardware and the overall utilization of the machine, you may want to adjust several performance parameters way up, like shared_buffers, effective_cache_size or work_mem. See the docs for your specific version and the wiki's performance optimization page.
Also note that the speed of select count(*)-style queries has nothing to do with libpq or the network, since only one resulting row is retrieved. It happens entirely server-side.
You don't state what your data is, but normally the way to handle tables with a very large amount of data is to partition the table. http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html
This will not speed up your select count(*) from <tableName>; query, and might even slow it down, but if you are normally only interested in a portion of the data in the table this can be helpful.
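On recent PostgreSQL releases (10 and later), a declarative range-partitioned layout on the timecode column might look roughly like the sketch below; on 9.x you would use the inheritance scheme described in the linked documentation. The table name and the columns other than timecode are illustrative:
CREATE TABLE measurements (
    timecode  timestamptz NOT NULL,
    value     double precision
    -- ... the remaining columns ...
) PARTITION BY RANGE (timecode);
CREATE TABLE measurements_2012_q1 PARTITION OF measurements
    FOR VALUES FROM ('2012-01-01') TO ('2012-04-01');
CREATE TABLE measurements_2012_q2 PARTITION OF measurements
    FOR VALUES FROM ('2012-04-01') TO ('2012-07-01');
-- a count restricted to one time range only touches the matching partition
SELECT count(*) FROM measurements
WHERE timecode >= '2012-01-01' AND timecode < '2012-04-01';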

SQL: How many records can be selected at a time?

I have a question: in a SELECT query, how many records can be selected at a time? In other words, what is the maximum number of records a SELECT can return in SQL Server 2000, 2005 and 2008?
Thanks in advance.
There's no hard limit that I'm aware of on SQL Server's side on how many rows you can SELECT. If you could INSERT them all, you can read them all back out at the same time.
However, if you select millions of rows at the time, you may experience issues like your client running out of memory or your connection timing out before being able to transmit all the data you SELECTed.
I don't believe there is a built-in 'limit' for selecting rows; it'll be down to the architecture that SQL Server is running on (i.e. 32-bit/64-bit, memory available, etc.). Certainly you can select hundreds of thousands of rows without issue.
But... if you ever find yourself asking for that many rows from a database you should optimise your code / stored procedures so that you only retrieve a subset at a time.
As @paolo says, there is no SQL-defined hard limit; you can specify a limit in your query with the LIMIT keyword (although that is database-dependent).
However, there is one important point: performing a SELECT query with actual database servers typically does not transfer all the data from the server to the client, or load everything into memory, at once. A query usually yields a cursor into a result set, and as the client iterates through the result set, more rows are fetched from the server, usually in chunks. So unless a client explicitly copies all data from the result set into memory, at no point will this happen implicitly.
This is all completely database-dependent, and in many cases drivers allow tweaking of chunk size etc.
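For example, with an explicit cursor (PostgreSQL syntax shown here; the table name is illustrative, and most databases and client drivers offer an equivalent mechanism):
BEGIN;
DECLARE big_cursor CURSOR FOR SELECT * FROM big_table;
FETCH 1000 FROM big_cursor;   -- first chunk
FETCH 1000 FROM big_cursor;   -- next chunk, and so on
CLOSE big_cursor;
COMMIT;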

How do I force Postgres to use a particular index?

How do I force Postgres to use an index when it would otherwise insist on doing a sequential scan?
Assuming you're asking about the common "index hinting" feature found in many databases, PostgreSQL doesn't provide such a feature. This was a conscious decision made by the PostgreSQL team. A good overview of why and what you can do instead can be found here. The reasons are basically that it's a performance hack that tends to cause more problems later down the line as your data changes, whereas PostgreSQL's optimizer can re-evaluate the plan based on the statistics. In other words, what might be a good query plan today probably won't be a good query plan for all time, and index hints force a particular query plan for all time.
As a very blunt hammer, useful for testing, you can use the enable_seqscan and enable_indexscan parameters. See:
Examining index usage
enable_ parameters
These are not suitable for ongoing production use. If you have issues with query plan choice, you should see the documentation for tracking down query performance issues. Don't just set enable_ params and walk away.
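For a quick test in a single session it might look like this (the table and query are only placeholders):
SET enable_seqscan = off;   -- discourage sequential scans for this session only
EXPLAIN ANALYZE SELECT * FROM mytable WHERE some_column = 42;
RESET enable_seqscan;       -- put the planner back to normal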
Unless you have a very good reason for using the index, Postgres may be making the correct choice. Why?
For small tables, it's faster to do sequential scans.
Postgres doesn't use indexes when datatypes don't match properly; you may need to include appropriate casts.
Your planner settings might be causing problems.
See also this old newsgroup post.
Probably the only valid reason for using
set enable_seqscan=false
is when you're writing queries and want to quickly see what the query plan would actually be were there large amounts of data in the table(s). Or of course if you need to quickly confirm that your query is not using an index simply because the dataset is too small.
TL;DR
Run the following three commands and check whether the problem is fixed:
ANALYZE;
SET random_page_cost = 1.0;
SET effective_cache_size = 'X GB'; -- replace X with total RAM size minus 2 GB
Read on for further details and background information about this.
Step 1: Analyze tables
As a simple first attempt to fix the issue, run the ANALYZE; command as the database superuser in order to update all table statistics. From the documentation:
The query planner uses these statistics to help determine the most efficient execution plans for queries.
Step 2: Set the correct random page cost
Index scans require non-sequential disk page fetches. PostgreSQL uses the random_page_cost configuration parameter to estimate the cost of such non-sequential fetches in relation to sequential fetches. From the documentation:
Reducing this value [...] will cause the system to prefer index scans; raising it will make index scans look relatively more expensive.
The default value is 4.0, thus assuming an average cost factor of 4 compared to sequential fetches, taking caching effects into account. However, if your database is stored on an SSD drive, then you should actually set random_page_cost to 1.1 according to the documentation:
Storage that has a low random read cost relative to sequential, e.g., solid-state drives, might also be better modeled with a lower value for random_page_cost, e.g., 1.1.
Also, if an index is mostly (or even entirely) cached in RAM, then an index scan will always be significantly faster than a disk-served sequential scan. The query planner however doesn't know which parts of the index are already cached, and thus might make an incorrect decision.
If your database indices are frequently used, and if the system has sufficient RAM, then the indices are likely to be cached eventually. In such a case, random_page_cost can be set to 1.0, or even to a value below 1.0 to aggressively prefer using index scans (although the documentation advises against doing that). You'll have to experiment with different values and see what works for you.
As a side note, you could also consider using the pg_prewarm extension to explicitly cache your indices into RAM.
You can set the random_page_cost like this:
SET random_page_cost = 1.0;
Step 3: Set the correct cache size
On a system with 8 or more GB RAM, you should set the effective_cache_size configuration parameter to the amount of memory which is typically available to PostgreSQL for data caching. From the documentation:
A higher value makes it more likely index scans will be used, a lower value makes it more likely sequential scans will be used.
Note that this parameter doesn't change the amount of memory which PostgreSQL will actually allocate, but is only used to compute cost estimates. A reasonable value (on a dedicated database server, at least) is the total RAM size minus 2 GB. The default value is 4 GB.
You can set the effective_cache_size like this:
SET effective_cache_size = '14 GB'; -- e.g. on a dedicated server with 16 GB RAM
Step 4: Fix the problem permanently
You probably want to use ALTER SYSTEM SET ... or ALTER DATABASE db_name SET ... to set the new configuration parameter values permanently (either globally or per-database). See the documentation for details about setting parameters.
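For example (ALTER SYSTEM is available on PostgreSQL 9.4 and later; the database name and values are illustrative):
ALTER SYSTEM SET random_page_cost = 1.0;
ALTER SYSTEM SET effective_cache_size = '14GB';
SELECT pg_reload_conf();   -- apply the new settings without a restart
-- or per database:
ALTER DATABASE mydb SET random_page_cost = 1.0;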
Step 5: Additional resources
If it still doesn't work, then you might also want to take a look at this PostgreSQL Wiki page about server tuning.
Sometimes PostgreSQL fails to make the best choice of indexes for a particular condition. As an example, suppose there is a transactions table with several million rows, of which there are several hundred for any given day, and the table has four indexes: transaction_id, client_id, date, and description. You want to run the following query:
SELECT client_id, SUM(amount)
FROM transactions
WHERE date >= 'yesterday'::timestamp AND date < 'today'::timestamp AND
description = 'Refund'
GROUP BY client_id
PostgreSQL may choose to use the index transactions_description_idx instead of transactions_date_idx, which may lead to the query taking several minutes instead of less than one second. If this is the case, you can force using the index on date by fudging the condition like this:
SELECT client_id, SUM(amount)
FROM transactions
WHERE date >= 'yesterday'::timestamp AND date < 'today'::timestamp AND
description||'' = 'Refund'
GROUP BY client_id
The question in itself is very much invalid. Forcing (by doing enable_seqscan=off, for example) is a very bad idea. It might be useful to check whether it will be faster, but production code should never use such tricks.
Instead - do an explain analyze of your query, read it, and find out why PostgreSQL chooses a bad (in your opinion) plan.
There are tools on the web that help with reading explain analyze output - one of them is explain.depesz.com - written by me.
Another option is to join the #postgresql channel on the freenode IRC network and talk to the people there to help you out - optimizing a query is not a matter of "ask a question, get an answer, be happy". It's more like a conversation, with many things to check and many things to learn.
One thing to note with PostgreSQL, when you are expecting an index to be used and it is not being used, is to VACUUM ANALYZE the table.
VACUUM ANALYZE schema.table;
This updates the statistics used by the planner to determine the most efficient way to execute a query, which may result in the index being used.
Another thing to check is the types.
Is the index on an int8 column and you are querying with numeric? The query will work but the index will not be used.
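A sketch of the kind of mismatch meant here (table and column names are illustrative, and the exact planner behaviour varies by version):
-- id is bigint (int8) with a btree index on it
SELECT * FROM orders WHERE id = 12345::numeric;   -- compared as numeric, the index may be ignored
SELECT * FROM orders WHERE id = 12345;            -- types match, the index can be used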
There is a trick to push Postgres to prefer a seq scan: add an OFFSET 0 in the subquery.
This is handy for optimizing queries joining big/huge tables when all you need is only the first/last n elements.
Let's say you are looking for the first/last 20 elements across multiple tables with 100k (or more) entries each; there is no point building/joining the whole query over all the data when what you are looking for is in the first 100 or 1000 entries. In this scenario, for example, it turns out to be over 10x faster to do a sequential scan.
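A rough sketch of how that can look (all table and column names here are made up):
SELECT *
FROM (
    SELECT id, title
    FROM big_table
    WHERE category = 'books'
    OFFSET 0   -- keeps the subquery from being flattened into the outer query
) AS sub
JOIN other_table USING (id)
LIMIT 20;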
see How can I prevent Postgres from inlining a subquery?
Indexes can only be used under certain circumstances.
For example, the type of the value has to fit the type of the column.
And you must not apply an operation to the column before comparing it to the value.
Given a customer table with 3 columns and an index on each of the columns:
create table customer(id numeric(10), age int, phone varchar(200))
It might happen that the database tries to use, for example, the index idx_age instead of the index on the phone number.
You can sabotage the use of the index on age by applying an operation to age:
select * from customer where phone = '1235' and age+1 = 24
(although you are looking for the age 23)
This is of course a very simple example, and the intelligence of Postgres is probably good enough to make the right choice. But sometimes there is no other way than tricking the system.
Another example is to
select * from customer where phone = '1235' and age::varchar = '23'
But this is probably more costly than the option above.
Unfortunately you CANNOT specify the name of the index in the query like you can in MSSQL or Sybase:
select * from customer (index idx_phone) where phone = '1235' and age = 23
This would help a lot to avoid problems like this.
Apparently there are cases where Postgres can be nudged into using an index by repeating a similar condition twice.
The specific case I observed was using PostGIS gin index and the ST_Within predicate like this:
select *
from address
natural join city
natural join restaurant
where st_within(address.location, restaurant.delivery_area)
and restaurant.delivery_area ~ address.location
Note that the first predicate st_within(address.location, restaurant.delivery_area) is automatically decomposed by PostGIS into (restaurant.delivery_area ~ address.location) AND _st_contains(restaurant.delivery_area, address.location) so adding the second predicate restaurant.delivery_area ~ address.location is completely redundant. Nevertheless, the second predicate convinced the planner to use spatial index on address.location and in the specific case I needed, improved the running time 8 times.