The timeout period elapsed prior to completion of the operation or the server is not responding - sql

We have a production server database and a Windows client project. Suddenly we are getting this error while querying the database:
"The timeout period elapsed prior to completion of the operation or the server is not responding"
How can we resolve this from the database side or from the C# Windows client?
Thanks,
Velusamy

The quick-and-dirty answer: set SqlCommand.CommandTimeout to a higher value.
The long answer:
Run the query in SSMS with "Show actual execution plan" turned on, and check the resulting plan for missing-index hints.
Check your database server logs for suspicious events.
Verify that statistics are not out of date.
This query shows the age of statistics:
SELECT OBJECT_NAME(ind.object_id) AS table_name,
       ind.name AS index_name,
       STATS_DATE(ind.object_id, ind.index_id) AS stats_last_updated
FROM sys.indexes ind
The query optimizer uses statistics when it chooses how to execute a query. If the statistics are old, it can make (really) bad choices.
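If the statistics do turn out to be stale, refreshing them is straightforward; a minimal sketch (dbo.MyTable is a placeholder name):

```sql
-- Refresh statistics for a single table with a full scan
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

-- Or refresh every statistic in the database that needs it
EXEC sp_updatestats;
```

FULLSCAN is slower than the default sampling, but it gives the optimizer the most accurate picture of the data distribution.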

Related

Max out Azure SQL DTU with SQL inside code, but not from SQL Server Management Studio

I have a bit a funny situation. Our Azure SQL instance maxes out at 100 DTU for a certain query and the query returns a timeout:
SqlException (0x80131904): Timeout expired. The timeout period
elapsed prior to completion of the operation or the server is not
responding. This failure occurred while attempting to connect to the
routing destination.
If I run exactly the same query (with the parameters hardcoded) in SQL Server Management Studio it still takes the DTU up to 25%, but that's still far away from 100%. Nothing else runs on that server. There are a few other queries that run before/after. But if we just run them, nothing spikes out.
Any ideas?
My analysis of the issue goes like this:
First, when DTUs are maxed out and a query fails because of that, you will not get a timeout. Instead you will get an error message like this:
Resource ID: %d. The %s limit for the database is %d and has been reached. For more information
You can test that by running several resource-intensive queries at once.
Secondly, timeouts like the one in your question are mostly due to a query waiting on resources such as database IO or memory.
We faced similar timeouts; most of them were fixed by updating statistics and rebuilding indexes, and the rest we optimized.
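For reference, the statistics update and index rebuild mentioned above look like this (dbo.MyTable is a placeholder name):

```sql
-- Bring the table's statistics up to date
UPDATE STATISTICS dbo.MyTable;

-- Rebuild all indexes on the table (heavier, but removes fragmentation)
ALTER INDEX ALL ON dbo.MyTable REBUILD;
```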

How to benchmark a SQL Server query using SQL Server

I would like to benchmark a SQL query within SQL Server. What is the best approach to accomplish this in the most accurate way?
My idea was as follows:
record start-time
execute query
record end-time
perform a date diff between start-time and the end-time and output to milliseconds or microseconds.
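That record/execute/diff approach can be written directly in T-SQL; a sketch, assuming SQL Server 2008 or later for SYSDATETIME() and DATETIME2 (the sample query is a placeholder):

```sql
DECLARE @start DATETIME2 = SYSDATETIME();   -- record start-time

-- execute the query under test here, e.g.:
SELECT COUNT(*) FROM sys.objects;

-- record end-time and diff, output in microseconds
SELECT DATEDIFF(MICROSECOND, @start, SYSDATETIME()) AS elapsed_us;
```

Note that this measures total elapsed time including client round-trips, whereas SET STATISTICS TIME separates CPU time from elapsed time per statement.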
You are looking for:
SET STATISTICS TIME ON
Check this out.
When SET STATISTICS TIME is ON, the time statistics for a statement
are displayed. When OFF, the time statistics are not displayed. The
setting of SET STATISTICS TIME is set at execute or run time and not
at parse time. Microsoft SQL Server is unable to provide accurate
statistics in fiber mode, which is activated when you enable the
lightweight pooling configuration option. The cpu column in the
sysprocesses table is only updated when a query executes with SET
STATISTICS TIME ON. When SET STATISTICS TIME is OFF, 0 is returned. ON
and OFF settings also affect the CPU column in the Process Info View
for Current Activity in SQL Server Management Studio.
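In practice, the usage looks like this (the sample query is just a placeholder):

```sql
SET STATISTICS TIME ON;

SELECT COUNT(*) FROM sys.objects;   -- the query to benchmark

SET STATISTICS TIME OFF;
```

The Messages tab then reports a line such as "CPU time = ... ms, elapsed time = ... ms" for each statement.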

high 'total_worker_time' for stored proc using OPENQUERY in SQL Server 2005

[Cross posted from the Database Administrators site, in the hope that it may gain better traction here. I'll update either site as appropriate.]
I have a stored procedure in SQL Server 2005 (SP2) which contains a single query like the following (simplified for clarity)
SELECT * FROM OPENQUERY(MYODBC, 'SELECT * FROM MyTable WHERE Option = ''Y'' ')
OPTION (MAXDOP 1)
When this proc is run (and run only once) I can see the plan appear in sys.dm_exec_query_stats with a high 'total_worker_time' value (e.g. 34762.196 ms). This is close to the elapsed time. However, in SQL Server Management Studio the statistics show a much lower CPU time, as I'd expect (e.g. 6828 ms). The query takes some time to return, because of the slowness of the server it is talking to, but it doesn't return many rows.
I'm aware of the issue that parallel queries in SQL Server 2005 can present odd CPU times, which is why I've tried to turn off any parallism with the query hint (though I really don't think that there was any in any case).
I don't know how to account for the fact that the two ways of looking at CPU usage can differ, nor which of the two might be accurate (I have other reasons for thinking that the CPU usage may be the higher number, but it's tricky to measure). Can anyone give me a steer?
UPDATE: I was assuming that the problem was with the OPENQUERY so I tried looking at times for a long-running query which doesn't use OPENQUERY. In this case the statistics (gained by setting STATISTICS TIME ON) reported the CPU time at 3315ms, whereas the DMV gave it at 0.511ms. The total elapsed times in each case agreed.
total_worker_time in sys.dm_exec_query_stats is cumulative - it is the total execution time for all the executions of the currently compiled version of the query - see execution_count for the number of executions this represents.
See last_worker_time, min_worker_time or max_worker_time for timing of individual executions.
reference: http://msdn.microsoft.com/en-us/library/ms189741.aspx
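A sketch of how to read per-execution timings out of the DMV (the TOP count and ordering are arbitrary choices):

```sql
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time / qs.execution_count AS avg_worker_time_us,
       qs.last_worker_time,
       qs.min_worker_time,
       qs.max_worker_time,
       st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```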

Suspended status in SQL Activity Monitor

What would cause a query being done in Management Studio to get suspended?
I perform a simple select top 60000 from a table (which has 11 million rows) and the results come back within a sec or two.
I change the query to top 70000 and the results take up to 40 min.
From doing a bit of searching on another but related issue I came across someone using DBCC FREEPROCCACHE to fix it.
I run DBCC FREEPROCCACHE and then redo the query for 70000 and it seemed to work.
However, the issue still occurs with a different query.
I increase to say 90000 or if I try to open the table using [Right->Open Table], it pulls about 8000 records and stops.
Checking the activity log for when I do the Open Table shows the session has been suspended with a wait type of "Async_Network_IO". For the session running the select of 90000 the status is "Sleeping"; this is the same status for the above select 70000 query, which did return, but in 45 min. It is strange to me that the status shows "Sleeping" and it does not appear to be changing to "Runnable" (I have the activity monitor refreshing every 30 sec).
Additional notes:
I am not running both the Open Table and select 90000 at the same time. All queries are done one at a time.
I am running 32bit SQL Server 2005 SP2 CU9. I tried upgrading to SP3 but ran into install failures. The issue was occurring prior to me trying this upgrade.
Server setup is an Active/Active cluster; the issue occurs on either node, and the other instance does not have this issue.
I have ~20 other databases on this same server instance but only this one DB is seeing the issue.
This database gets fairly large. It is currently at 76756.19MB. Data file is 11,513MB.
I am logged in locally on the Server box using Remote Desktop.
The wait type "Async_Network_IO" means that it is waiting for the client to retrieve the result set, as SQL Server's network buffer is full. Why your client isn't picking up the data in a timely manner I can't say.
The other case it can happen is with linked servers when SQL Server is querying a remote table, in this case SQL Server is waiting for the remote server to respond.
Something worth looking at is virus scanners: if they are monitoring network connections they can sometimes lag, and it's often apparent from them hogging all the CPU.
Suspended means it is waiting on a resource and will resume when it gets that resource. Judging from the sizes you are pulling back, it seems you are running an OLAP-type query.
Try the following things:
Use NOLOCK or set the TRANSACTION ISOLATION LEVEL at the top of the query
Check your execution plan and tune the query to be more efficient
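The two suggestions above look like this (dbo.MyTable is a placeholder name); note that both permit dirty reads, which is usually acceptable for OLAP-style reporting but not for transactional work:

```sql
-- Per-table hint
SELECT * FROM dbo.MyTable WITH (NOLOCK);

-- Or set the level for the whole session at the top of the query
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.MyTable;
```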

sql 2005 profiler analysis

We have a performance issue with a program. I traced the T-SQL using the SQL Server 2005 Profiler and found the following result for an INSERT statement:
cpu: 0
read: 28
write: 0
duration: 32804
I guess it's because the table being inserted into has several indexes and is getting big. Are there any other possibilities that I should check?
thanks.
Duration is in microseconds. 32804 microseconds = 0.032804 seconds. I don't think you have a problem.
Actually, that depends on whether you're looking at the results in a table or in the Profiler UI. If it's the UI, then it is milliseconds.
"Beginning with SQL Server 2005, the server reports the duration of an event in microseconds (one millionth, or 10-6, of a second) and the amount of CPU time used by the event in milliseconds (one thousandth, or 10-3, of a second). In SQL Server 2000, the server reported both duration and CPU time in milliseconds. In SQL Server 2005 and later, the SQL Server Profiler graphical user interface displays the Duration column in milliseconds by default, but when a trace is saved to either a file or a database table, the Duration column value is written in microseconds." http://msdn.microsoft.com/en-us/library/ms175848.aspx
How many indexes are there, and how many columns are in the associated indexes? What is the execution plan for this query?
28 reads and 0 writes that take 32 seconds? That has nothing to do with indexes; you can't have so many indexes as to justify such a long duration. That is concurrency: the insert is blocked by other operations. Use sys.dm_exec_requests and watch the wait_time and wait_resource of the insert request.
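A sketch of that check, run while the INSERT is executing:

```sql
SELECT session_id,
       status,
       command,
       wait_type,
       wait_time,            -- milliseconds spent in the current wait
       wait_resource,
       blocking_session_id   -- the session holding the contended resource, if any
FROM sys.dm_exec_requests
WHERE session_id > 50;       -- filter out system sessions
```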