A client application is raising a "Timeout" error when it runs certain commands against the database.
My first idea for a fix is to increase the CommandTimeout to 99999 ... but I am afraid that this approach will generate further problems.
Has anyone experienced this?
I wonder whether my concern is justified, and/or whether there is a more robust and elegant fix.
You are correct to assume that upping the timeout is not the correct approach. Typically, I look for long-running queries that are running up against the timeouts. They will typically stand out in the areas of duration and reads.
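As a hedged starting point, a sketch like this can surface those candidates from the plan cache (assuming SQL Server 2005 or later; the TOP count is only illustrative, and total_elapsed_time is reported in microseconds, hence the division by 1000):

-- Top cached statements by average duration and logical reads.
SELECT TOP (20)
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count / 1000 AS avg_duration_ms,
    qs.total_logical_reads / qs.execution_count       AS avg_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1)  AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_duration_ms DESC;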
Then I'll work to reduce the query run time using this method:
https://www.simple-talk.com/sql/performance/simple-query-tuning-with-statistics-io-and-execution-plans/
If it's a report causing issues and you can't get it running faster, you may need to start thinking about setting up a reporting database.
CommandTimeout is the time the client waits for a response from the server. If the query runs in the main VCL thread, the whole application is "frozen" and might be marked "not responding" by Windows. So, would you expect your users to wait at a frozen app for 99999 seconds?
Generally, leave the timeout values at their defaults and concentrate instead on tuning the queries, as Sam suggests. If you do have legitimately long-running queries (i.e. some background data movement, calculations etc. in stored procedures), set the CommandTimeout to 0 (= INFINITE) but run them in a separate thread.
Related
I have a SQL Server database in production and it has been live for 2 months. A while ago, the web application associated with it started taking too long to load, and sometimes it reports that a timeout occurred.
I found a quick fix: running 'exec sp_updatestats' fixes the problem. But I need to run it continually (every 5 minutes).
So I created a Windows service with a timer and started it on the server. My question is: what are the root causes and possible permanent solutions? Anyone?
Here is the most expensive query from Activity Monitor:
WAITFOR(RECEIVE TOP (1) message_type_name, conversation_handle, cast(message_body AS XML) as message_body from [SqlQueryNotificationService-2eea594b-f994-43be-a5ed-d9a47837a391]), TIMEOUT #p2;
To diagnose a poorly performing query you need to:
Identify the poorly performing query, e.g. via application logging, or a SQL Profiler trace filtered to show only queries with a duration longer than a certain threshold, etc.
Get an execution plan for the query
At that point you can start to try to figure out what the performance issue is.
Given that exec sp_updatestats fixes your issue, it could be that statistics on certain tables are out of date (this is a pretty common cause of performance issues). If that's the case then you might be able to tweak your statistics, or at least rebuild only those statistics that are causing issues, as sketched below.
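A minimal sketch of that narrower approach (dbo.Orders is a hypothetical table name):

-- Check when each statistic on the table was last updated ...
SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.Orders');

-- ... then refresh just that table, instead of sp_updatestats
-- across the whole database.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;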
It's worth noting that updating statistics will also cause cached execution plans to become invalid, so it's plausible that your issue is unrelated to statistics - you need to collect more information about the poorly performing queries before looking at solutions.
Your most expensive query looks like it's waiting for a message, i.e. it's in your list of long-running queries because it's designed to run for a long time, not because it's expensive.
Thanks everyone, I found a solution to my issue. It's quite different: I had enabled the SQL dependency module on my SQL Server by turning ENABLE_BROKER on. That is what was causing the timeout query, so after setting it back to false everything is working fine now.
I am working on a Rails app with Postgres on Ubuntu. Unfortunately for me, this legacy app uses some heavyweight stored procedures in the db. What's more, the db is quite large (5GB) and my computer is not particularly fast. Every now and then, if I pass some bad parameters from my code to the db, my computer becomes super slow to the degree that I cannot get to the console and kill the postgres process. I assume, this is due to some very expensive db query. My only solution is to hard reset my laptop. So my question is - is there a way to forcefully kill a long-taking query? Or perhaps, is there a way to limit the CPU and RAM the db is allowed to use, so that I still have some resources left to go and manually restart postgres?
You can set a maximum time for statements to take with the statement_timeout configuration option:
Abort any statement that takes more than the specified number of milliseconds, starting from the time the command arrives at the server from the client. If log_min_error_statement is set to ERROR or lower, the statement that timed out will also be logged. A value of zero (the default) turns this off.
You can set this option a variety of ways, such as in postgresql.conf for everyone, per session with the SET command, or even per database or per role. More information on setting options is in the documentation.
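A minimal sketch of those variants (mydb and app_user are placeholder names; plain numeric values are milliseconds, and the pg_stat_activity column names assume PostgreSQL 9.2 or later):

-- Per session: abort any statement that runs longer than 30 seconds.
SET statement_timeout = '30s';

-- Per database or per role:
ALTER DATABASE mydb SET statement_timeout = 30000;
ALTER ROLE app_user SET statement_timeout = 60000;

-- A runaway statement that is already executing can also be cancelled
-- from another session, without restarting Postgres:
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '5 minutes';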
My problem is that I want all my queries to return results within a limited time. AFAIK, Postgres has 2 options for this: connect_timeout when opening a connection to the database, and statement_timeout for a query.
This leads to 2 problems:
I must estimate how long the query will run. My approach is to set up a worst-case scenario (a preset bandwidth to the db server, a query with a lot of records...) to determine it, but I don't think this is a smart way. Are there any better ideas/patterns to handle this?
The network problem. Assume the network is bad, with heavy packet loss and ping as high as hell... The query from the client, and the result from the server, get stuck. Of course we can set a timeout in the code, but I think that will be complicated because of resource handling and other things, and it duplicates the database's timeout mechanism. Is there any way to handle this?
Another version of the story: when a query takes a long time, I want to distinguish between "this query is all good, it just has too many records, wait for it" and "no, the query is 'broken', don't wait for it"...
PS: I found this link, but it is for SQL Server 2005 :(
http://www.mssqltips.com/sqlservertip/1338/finding-a-sql-server-process-percentage-complete-with-dmvs/
As you already mentioned, it is hard to predict how long a query runs (due to the query itself and its parameters, due to network, due to server load).
Anyway, you should move the SQL queries into QThreads. This keeps your application's GUI responsive while the queries run.
Also, I would not try to solve this with timeouts. You will get into a lot of trouble because you will fail to choose the right timeout for each query and each situation. Instead, provide a way of cancelling queries via a button or a dialog, so the user can decide whether it is sensible to continue waiting.
What you want to do:
when a query takes a long time, I want to distinguish between "this query is all good, it just has too many records, wait for it" and "no, the query is 'broken', don't wait for it".
is just not going to work out. You appear to require a solution to the halting problem, a fundamentally hard problem in computer science.
You must decide how long is acceptable for a query to run, and set a timeout. There is no reliable way to predict how long it should run, except by looking at how long other similar queries took to run before. Nor is there any way to tell the difference between a correct (but slow) query and one that is going to run forever. That's particularly true when things like WITH RECURSIVE or PL/PgSQL functions are involved.
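If you want that historical view on the Postgres side, one option is the pg_stat_statements extension; this sketch assumes it is installed and enabled, and uses the total_time column (milliseconds), which applies to PostgreSQL 12 and earlier:

-- Average historical runtime of each normalized statement.
SELECT query, calls, total_time / calls AS avg_ms
FROM pg_stat_statements
ORDER BY avg_ms DESC
LIMIT 20;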
You can run the queries in a dedicated class whose object lives in a separate thread, and wait with a timeout for the thread to quit:
databaseObject->performQuery();

// The worker object lives in its own QThread; ask its event loop to
// exit, then give it up to 2 seconds to finish cleanly.
QThread *th = databaseObject->thread();
th->quit();
if (!th->wait(2000))
{
    // Still running after the timeout: forcibly kill it as a last
    // resort (terminate() is unsafe in general and may leak resources).
    th->terminate();
    return false;
}
return true;
On our production server, at a specific time of day, the thread count climbs to the point where, even though CPU utilization is normal (30-50%), queries start to run slowly and we see a lot more blocking statements.
I am not sure where to look. When our site runs normally, the thread count is around 150 threads, but during a specific window each day (between 1:30 and 2:30) it climbs to 270 threads. There are no extra SQL transactions going on; everything is as normal as before, but the thread count grows and SQL starts behaving very, very slowly.
After restarting the SQL Server service, the thread count immediately returns to normal, and our site functions fine for another 24 hours.
We are using SQL Server 2005 on a 24-core machine.
Any ideas?
Blocking statements steal workers (sys.dm_os_workers), so the server will spawn more workers to handle the incoming tasks. At 24 cores you'll have some 700 max worker threads out of the box by default, so seeing 270 'threads' is not an issue; it is well within normal operating parameters.
Your real problem must be the blocking, and you have to investigate it accordingly: who is blocking whom, and why. My bet is that you have a job running between 1:30 and 2:30 that is locking large portions of the database (a delete job, perhaps?) and your queries block on locked rows.
You'll have to investigate, find the root cause, and act accordingly. A reboot is not a solution, nor is blaming unrelated components (thread count). Use Activity Monitor, use Who Is Active, and follow the methodical approach of the Waits and Queues methodology. There are plenty of ways to identify the real problem. SQL Server will never appear slow due to thread count; it simply doesn't work like that.
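As one hedged starting point for the who-is-blocking-whom question, the request-level DMVs (SQL Server 2005 and later) can be queried directly:

-- Currently executing requests that are blocked, and their blockers.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;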
You can control the degree of parallelism using the MAXDOP query hint. For more details please check this article:
http://blog.sqlauthority.com/2010/03/15/sql-server-maxdop-settings-to-limit-query-to-run-on-specific-cpu/
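For example (dbo.BigTable is a hypothetical table; 4 is an arbitrary cap on the number of schedulers this one query may use):

-- Limit a single query's parallelism without changing the server-wide
-- max degree of parallelism setting.
SELECT COUNT(*) FROM dbo.BigTable OPTION (MAXDOP 4);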
Thanks for your valuable feedback. It turned out that SQL Server was not behaving strangely at all; our site, which is based on Ektron CMS, was responsible. One of Ektron's features (PageBuilder) was holding locks on a table badly while operating on a piece of content. We have around 10 million users on our site, and since this was blocking the tables, SQL Server appeared unresponsive.
We have finally eliminated the issue.
Since a few days ago, the SQL server (Microsoft SQL Server 2005) backing our site has started occasionally timing out. It happens at seemingly random times, approximately every hour or two, and usually lasts about 10 minutes, during which we see hundreds of timed-out requests. Under normal circumstances, most of our queries take less than 50ms; a query that takes a significant fraction of a second is an exception.
I have effectively killed a day trying to figure out at least something, without any real progress. Normally the server load is about 10-20%, and when the timeouts happen we don’t see any increased CPU load. Also, there is nothing special happening during the timeouts: no overzealous web crawler, no heavy background tasks, no increased network traffic, no increased number of connections etc. Simply, everything looks as usual.
Not making any progress, we decided to restart it (and install the latest SP while we were at it), which seems to have fixed the problem. It has now been over six hours without any incident. Also, the CPU load has gone down under 10%.
It almost seems as if the SQL server "deteriorated" over time. Perhaps some internal structure (a cache or statistics) got out of shape and caused the occasional problems. I don’t have any other explanation.
The only thing I noticed when I was monitoring the server (I got lucky once and was present when the timeouts were happening) was several long-running queries waiting on CXPACKET. But I have learned that this is most likely just a consequence of some other problem. I wrote a script monitoring SQL requests, so hopefully next time it happens I will have more information.
Has anybody had a similar experience? I’m not an SQL Server guru. Any suggestions are welcome.
Since everything looked normal (CPU, nothing special happening, no overzealous web crawler, no heavy background tasks, no increased network traffic, no increased number of connections, etc.), I'd look into locking/blocking/race conditions. Use this to see what (if anything) is holding locks when the timeouts are happening:
How to find out what SQL queries are being blocked and what's blocking them?
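A quick, hedged sketch of the same idea at the task level, using sys.dm_os_waiting_tasks (SQL Server 2005 and later):

-- Tasks that are currently waiting on another session.
SELECT wt.session_id,
       wt.blocking_session_id,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.blocking_session_id IS NOT NULL;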