Specify SELECT timeout for SQLITE - sql

Is it possible to specify the maximum amount of time a SELECT query may take with SQLITE?
This would be useful in situations where you have big tables and users have the possibility to enter free-form search terms. If the searched-for term is not found quickly, the entire table is scanned, which can take a very long time since indices generally cannot be used.
So having SQLite give up after a few seconds would be useful.
I am using SQLite via System.Data.SQLite, and it seemed that SqliteCommand.CommandTimeout would be what I want, but setting it seems to have no effect for some reason. Perhaps I'm missing something.

For a simple select query, no, there doesn't appear to be a way to set a timeout, or maximum execution time, on SQLite itself. The only mention of a timeout in the documentation is the busy timeout. So, if you need to limit the maximum amount of time a select query can take, you'll need to wrap your connection with a timeout at the application level, and cancel/close your connection if that timeout is exceeded. How to do that is obviously application/language specific.
A busy timeout (the time the connection waits for locks to clear before giving up) can be set through the provided C interface, or through the SQLite driver provided to your application.
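As an illustration, the busy timeout can also be set from SQL with a PRAGMA (the 2000 ms value is just an example); note that this only limits how long the connection waits on locks, not how long a SELECT may run:
-- wait up to 2000 ms for other connections' locks before returning SQLITE_BUSY
PRAGMA busy_timeout = 2000;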

Related

Performance issue with update in postgres

I've got a pretty simple update statement in my application in one of the APIs.
UPDATE "table" SET "column" = "column" + 1 WHERE "id" = $1
The problem is that when this API incurs a sudden load, it starts to fail. There is no increase in DB CPU; only the number of active sessions shoots up. When checking the RDS Performance Insights, this is what I see.
The number of updates to this table doesn't go above 300. I don't think Postgres should be behaving this way (so much time spent in waits) with such a low number of updates. It would be helpful to get suggestions on what might be going wrong here. I understand what the tuple and transactionid waits mean, but how can I reduce these waits?
Thanks
The problem is obvious: all these sessions are waiting for a row lock. If all these sessions are running the statement you quote with the same id (or only a low number of different ids), then they are blocking each other.
Since the sessions don't seem to do any work (if I read the graph right, they are all waiting), I'd say that the problem is that your transactions are taking too long. PostgreSQL holds the row lock, like all other user-facing locks, until the transaction ends, so having that UPDATE in your workload effectively serializes processing.
You could improve the situation by performing the UPDATE at the very end of the transaction, so that the lock is not held for a long time. In addition, you should reduce your connection pool size considerably. I don't think that your database can deal with 1000 active connections.
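A minimal sketch of the reordering suggested above (the INSERT and the "audit_log" table are hypothetical stand-ins for whatever else the transaction does):
BEGIN;
-- do the rest of the transaction's work first (hypothetical example)
INSERT INTO "audit_log" ("id", "event") VALUES ($1, 'increment');
-- take the contended row lock as late as possible, so it is held only briefly
UPDATE "table" SET "column" = "column" + 1 WHERE "id" = $1;
COMMIT;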
Reduce the maximum number of parallel HTTP requests your web server accepts (assuming HTTP(S)) so that you don't overload the database. This will cause API requests to your service to queue while waiting for a TCP connection, and it will result in better, not worse, performance at times of peak load.

How to forcefully stop long postgres query under heavy load?

I am working on a Rails app with Postgres on Ubuntu. Unfortunately for me, this legacy app uses some heavyweight stored procedures in the db. What's more, the db is quite large (5GB) and my computer is not particularly fast. Every now and then, if I pass some bad parameters from my code to the db, my computer becomes so slow that I cannot even get to the console to kill the postgres process. I assume this is due to some very expensive db query. My only solution is to hard reset my laptop. So my question is: is there a way to forcefully kill a long-running query? Or perhaps, is there a way to limit the CPU and RAM the db is allowed to use, so that I still have some resources left to go and manually restart postgres?
You can set a maximum time for statements to take with the statement_timeout configuration option:
Abort any statement that takes more than the specified number of milliseconds, starting from the time the command arrives at the server from the client. If log_min_error_statement is set to ERROR or lower, the statement that timed out will also be logged. A value of zero (the default) turns this off.
You can set this option in a variety of ways, such as in postgresql.conf for everyone, per session with the SET command, or even per database or per role. More information on setting options is in the documentation.
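For illustration, a few of those forms as SQL (the database and role names are hypothetical; a value of 0 turns the timeout off):
-- current session only
SET statement_timeout = '30s';
-- every future session on one database (hypothetical name)
ALTER DATABASE mydb SET statement_timeout = '30s';
-- every future session of one role (hypothetical name)
ALTER ROLE reporting_user SET statement_timeout = '2min';
-- server-wide default, written to postgresql.auto.conf (PostgreSQL 9.4+)
ALTER SYSTEM SET statement_timeout = '30s';
SELECT pg_reload_conf();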

Detect and handle when a database query goes wrong

My problem is that I want all my queries to return results within a limited time. AFAIK, Postgres has two options for this: connect_timeout when opening a connection to the database, and statement_timeout for a query.
This leads to two problems:
1. I must estimate how long the query will run. My approach is to set up a worst-case scenario (a preset bandwidth to the DB server, a query returning a lot of records...) to determine it, but I don't think this is a smart way. Are there any better ideas/patterns to handle this?
2. The network problem. Assume the network is bad, with heavy packet loss and extremely high ping... The query from the client and the result from the server get stuck. Of course we can set a timeout in the code, but I think that gets complicated because of resource handling and other things, and it duplicates the database's timeout mechanism. Is there any way to handle this?
Another version of the story: when a query takes a long time, I want to distinguish between "this query is fine, it just has too many records, so wait for it" and "no, the query is broken, don't wait for it"...
PS: I found this link, but it is for SQL Server 2005 :(
http://www.mssqltips.com/sqlservertip/1338/finding-a-sql-server-process-percentage-complete-with-dmvs/
As you already mentioned, it is hard to predict how long a query runs (due to the query itself and its parameters, due to network, due to server load).
Anyway, you should move the SQL queries into QThreads. This allows your application to keep serving the GUI while the queries run.
Also, I would not try to solve this with timeouts. You will get into a lot of trouble because you will fail to choose the right timeout for each query and each situation. Instead, provide a way to cancel queries via a button or a dialog, so the user can decide whether it is sensible to continue waiting or not.
What you want to do:
when a query takes a long time, I want to distinguish between "this query is fine, it just has too many records, so wait for it" and "no, the query is broken, don't wait for it".
is just not going to work out. You appear to require a solution to the halting problem, which is provably unsolvable in the general case.
You must decide how long is acceptable for a query to run, and set a timeout. There is no reliable way to predict how long it should run, except by looking at how long other similar queries took to run before. Nor is there any way to tell the difference between a correct (but slow) query and one that is going to run forever. That's particularly true when things like WITH RECURSIVE or PL/PgSQL functions are involved.
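For example, if 30 seconds is an acceptable limit for a particular piece of work, the statement_timeout mentioned in the question can be scoped to just that transaction; the value and the table name here are assumptions:
BEGIN;
SET LOCAL statement_timeout = '30s';   -- applies only inside this transaction
SELECT count(*) FROM big_table;        -- hypothetical long-running query
COMMIT;
If the limit is exceeded, the statement is aborted with an error that the client can catch and report.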
You can run the queries in a class whose object lives in a separate thread, and then wait with a timeout for that thread to quit:
// Assumes databaseObject lives in a worker QThread (moved there with
// moveToThread) and that performQuery() starts the query asynchronously in
// that thread (e.g. it is invoked through a queued connection); a direct
// blocking call here would make the timeout below pointless.
databaseObject->performQuery();

QThread *th = databaseObject->thread();
th->quit();                // ask the worker thread's event loop to exit
th->wait(2000);            // give it up to 2 seconds to finish
if (th->isRunning())       // still running: the query exceeded the timeout
{
    th->terminate();       // last resort: forcibly stop the thread
    return false;
}
else
    return true;

TADOStoredProc/TADOQuery CommandTimeout...?

On a client machine, a "Timeout" error is being raised when running some commands against the database.
My first attempted fix is to increase the CommandTimeout to 99999... but I am afraid this approach will create further problems.
Has anyone experienced this?
I wonder whether my approach is reasonable, and/or whether there is a more robust and elegant fix.
You are correct to assume that upping the timeout is not the right approach. Typically, I look for long-running queries that are running close to the timeout. They will typically stand out in terms of duration and reads.
Then I'll work to reduce the query run time using this method:
https://www.simple-talk.com/sql/performance/simple-query-tuning-with-statistics-io-and-execution-plans/
If it's a report causing issues and you can't get it running faster, you may need to start thinking about setting up a reporting database.
CommandTimeout is the time that the client waits for a response from the server. If the query runs in the main VCL thread, then the whole application is "frozen" and might be marked "not responding" by Windows. So, would you expect your users to wait at a frozen app for 99999 seconds?
Generally, leave the timeout values at their defaults and concentrate on tuning the queries, as Sam suggests. If you do have long-running queries (i.e. some background data movement, calculations etc. in stored procedures), set the CommandTimeout to 0 (= INFINITE) but run them in a separate thread.

mysql slow on first query, then fast for related queries

I have been struggling with a problem that only happens when the database has been idle for a period of time with respect to the data being queried. The first query will be extremely slow, on the order of 30 seconds, and then related queries will be fast, like 0.1 seconds. I am assuming this is related to caching, but I have been unable to find the cause of it.
Changing the mysql variables tmp_table_size, max_heap_table_size to a larger size had no effect except to create the temp tables in memory.
I do not think this is related to the query itself as it is well indexed and after the first slow query, variants of the same query do not show up in the slow query log. I am most interested in trying to determine the cause of this or a way to reset the offending cache so I can troubleshoot the issue.
Pages of the InnoDB data files get cached in the InnoDB buffer pool. This is what you'd expect. Reading files is slow, even on good hard drives, especially random reads, which are mostly what databases see.
It may be that your first query is doing some kind of table scan which pulls a lot of pages into the buffer pool, then accessing them is fast. Or something similar.
This is what I'd expect.
Ideally, use the same engine for all tables (exceptions: system tables, temporary tables (perhaps) and very small or short-lived tables). If you don't do this, the engines have to fight for RAM.
Assuming all your tables are InnoDB, make the buffer pool use up to 75% of the server's physical RAM (assuming you don't run too many other tasks on the machine).
Then you will be able to fit around 12G of your database into RAM, so once it's "warmed up", the "most used" 12G of your database will be in RAM, where accessing it is nice and fast.
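A sketch of that sizing step, assuming MySQL 5.7 or later where the buffer pool can be resized online (on older versions, innodb_buffer_pool_size has to be set in my.cnf and the server restarted); the 12G figure simply follows the example above:
SELECT @@innodb_buffer_pool_size;                              -- current size in bytes
SET GLOBAL innodb_buffer_pool_size = 12 * 1024 * 1024 * 1024;  -- roughly 75% of a 16 GB server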
Some users of mysql tend to "warm up" production servers following a restart by sending them queries copied from another machine for a while (these will be replication slaves) until they add them into their production pool. This avoids the extreme slowness seen while the cache is cold. For example, Youtube does this (or at least it used to; Google bought them and they may now use Google-fu)
MySQL Workbench:
The below isn't 100% related to this SO question, but the symptoms are very related and this is the first SO result when searching for "mysql workbench slow" or related terms, so hopefully it's useful for others.
Clear the query history! Following the process at "MySQL Workbench query history (last executed query / queries), i.e. create / alter table, select, insert, update queries" to clear MySQL Workbench's query history really sped up the program for me.
In summary: change the Output pane to History Output, right click on a Date and select Delete All Logs.
The issue I was experiencing was "slow first query" in that it would take a few seconds to load the results even though the duration/fetch were well under 1 second. After clearing my query history, the duration/fetch times stayed the same (well under 1 second, so no DB behavior actually changed), but now the results loaded instantly rather than after a few second delay.
Is anything else running on your mysql server? My thought is that maybe after the first query, your table is still cached in memory. Once it's idle, another process is causing it to be de-cached. Just a guess though.
How much memory do you have, and what else is running?
I had an SSIS package that was timing out. The query was very simple, from a single MySQL table, but it sometimes returned a lot of records and would sometimes take a few minutes initially to execute, then only a few milliseconds afterwards if I wanted to query it again. We were stuck with the ADO connection, which meant it would time out after 30 seconds, so about half the databases we were trying to load were failing.
After beating my head against the wall, I tried performing an initial query first; very simple and only returning a few rows. Since it was so simple, it executed fast and put the table in the cache for faster querying. In the next step of the package I would run the more complex query which returned the large data set that kept timing out. Problem solved - all tables loaded. I may start doing this on a regular basis; the complex queries execute much faster after doing a simple query first.
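A rough sketch of that warm-up pattern (the table and column names are hypothetical, and how much of the cache the cheap query actually warms will depend on the workload):
-- cheap warm-up query: touches the table and brings some pages/metadata into the cache
SELECT id FROM big_table LIMIT 10;
-- the expensive query that previously timed out then runs against a warmer cache
SELECT customer_id, SUM(amount) FROM big_table GROUP BY customer_id;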
Try comparing the output of "vmstat 1" on the Linux command line when running the query after a period of idle time versus when you re-run it and get results fast. Specifically, check the "bi" column (that's the KB read from disk per second).
You may find the operating system is caching the disk blocks in the fast case (and thus a low "bi" figure), but not in the slow case (and hence a large "bi" figure).
You might also find that vmstat shows high/low CPU usage in either case. If it's low when fast, and disk throughput is also low, then your system may still be returning a cached query, even though you've indicated the relevant config value is set to zero. Perhaps check the output of SHOW ENGINE INNODB STATUS and SHOW VARIABLES and confirm.
innodb_buffer_pool_size may also be set high (it should be...), which would cache the blocks even before the OS can return them.
You might also find that "key_buffer" is set high - this would cache the keys in the indexes, which could make your select blindingly fast.
Try checking the MySQL Performance Blog site for lots of useful info.
I had an issue where MySQL 5.6 was slow on the first query after an idle period. This was a connection problem, not a MySQL instance problem: e.g. if you run MySQL Query Browser, execute "select * from some_queue", leave it alone for a couple of hours and then execute any query, it runs slowly, while at the same time processes on the server or a new instance of the Browser will select from the same tables instantly.
Adding skip-host-cache and skip-name-resolve to the my.ini file solved this problem.
I don't know why that is. Why I tried this: MySQL 5.1 without those options was slow at establishing connections from other networks (e.g. the server is 192.168.1.100; 192.168.1.101 connects fast, 192.168.2.100 connects slowly). MySQL 5.6 didn't have that problem to start with, so we didn't add these options to my.ini initially.
UPD: That actually solved only half of the cases. Setting wait_timeout to the maximum integer fixed the other half. Maybe I can even remove skip-host-cache and skip-name-resolve now and it won't slow down in 100% of the cases.
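A sketch of that wait_timeout change (the value shown, roughly one year in seconds, is an assumption; the permitted maximum differs between MySQL versions and platforms, and the setting can also go in my.ini):
SET GLOBAL wait_timeout = 31536000;   -- seconds an idle connection is kept before the server drops it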