How safe is it to clear wait stats? - sql-server-2012

I have been facing performance issues on the production server, and while reading about it on the internet I came across a Brent Ozar article about wait stats.
I want to try that, but I am not sure how safe it is to run. My production environment is constantly occupied with SSIS jobs and I don't want to kill any job or the server. So, I have a few questions:
Is it safe to run the following when queries or SQL jobs are running on the server?
DBCC SQLPERF("sys.dm_os_wait_stats",CLEAR);
DBCC SQLPERF("sys.dm_os_latch_stats",CLEAR);
What is the difference between update stats and clearing wait stats?

Clearing wait stats has no effect on the performance of SQL Server; it just removes the accumulated wait statistics. You should still have a valid reason to do it, and believe me, a lot of DBAs and SQL Server users do it quite often when troubleshooting a performance issue. The only downside is that you lose valuable information about the waits accumulated so far. But there is a way around that: before clearing the wait stats, query sys.dm_os_wait_stats and save the current output, then clear and start your monitoring. At least you will have the statistics from before the reset.
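A minimal sketch of that capture-then-clear workflow (dbo.WaitStatsSnapshot is a hypothetical table name):
-- Snapshot the current wait stats before resetting the counters
SELECT GETDATE() AS capture_time, *
INTO dbo.WaitStatsSnapshot        -- hypothetical name; pick your own
FROM sys.dm_os_wait_stats;
-- Reset the counters and start a fresh monitoring window
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);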
What is the difference between update stats and clearing wait stats?
The two are not related to each other in any way. Statistics (the kind you refresh via update stats) describe the distribution of data: how the data in SQL Server tables is distributed. They are used by SQL Server for cardinality estimation and help the optimizer prepare a good cost-based plan for a query. Clearing wait stats (statistics about which resources queries were waiting on) does not affect SQL Server's data-distribution statistics.
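To make the contrast concrete, a minimal sketch (dbo.Orders is a hypothetical table name):
-- Refreshes the optimizer's data-distribution statistics for one table;
-- this can change query plans because cardinality estimates change
UPDATE STATISTICS dbo.Orders;
-- Resets only the server-wide wait counters; data-distribution
-- statistics and existing query plans are untouched
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);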

Related

Query execution slow when scaling DTUs in Azure SQL Database

I am doing a POC with real-time scenarios for a SaaS product that handles a high volume of messages; volume reaches its peak within a few seconds (send/process). On the listener side, messages are processed and the computed data is stored in an Azure SQL Database (separate elastic pool, 100 eDTU on the Standard tier). To mimic this, I am sending and processing messages in parallel with a few nodes and threads. I am seeing some slowness in the first few seconds of database operations; once DTU usage reaches its maximum level, query execution is normal.
Is this expected behavior?
What happens if a query executes while the DTUs are being scaled?
How to avoid this?
When you scale the service tier of an Azure SQL Database up or down, open transactions are rolled back, server logins may be disconnected, query plans may vary because the number of threads available for queries changes, and the data cache and query plan cache are cleared.
Since the data cache is empty, the first time you run a query it has to do a lot of physical IO, memory allocation rises, and it is slow. If you look at the slow queries, they will likely show PAGEIOLATCH_SH and MEMORY_ALLOCATION_EXT waits, which correspond to pages being pulled from disk into the buffer pool. The second time you run the query the data is already in the data cache and it runs faster.
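A quick way to confirm that pattern, sketched against the database-scoped wait-stats DMV available in Azure SQL Database:
-- Top waits since the last reset; right after a scale operation you would
-- expect PAGEIOLATCH_SH near the top while the buffer pool warms up
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_db_wait_stats
ORDER BY wait_time_ms DESC;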
If the database runs at high DTU usage for an extended period, throttling may kick in, and you may see connection timeouts and poor query performance.

How to troubleshoot suspended queries in Azure Synapse?

I am currently encountering an issue with suspended queries in Azure Synapse when executing stored procedure calls from ADF.
Also, I followed the suggestion in the link below for troubleshooting the issue:
(Link removed due to sensitive information)
The troubleshooting queries returned the results below:
I checked whether a transaction lock was the issue: I killed a few suspended or running queries that had been running for more than 15 hours, and I also checked the rest of the running queries, but nothing there would cause a transaction lock. I tried to run the stored procedure manually from Azure Data Studio (the one that is blocked as mentioned above) and it took 40 seconds to complete.
When the same query runs from ADF and gets suspended, it takes nearly an hour to finish.
Any suggestion to troubleshoot this issue is much appreciated.
Thanks
There are a number of factors you should always consider when tuning queries in Azure Synapse Analytics dedicated SQL pools:
DWU - what DWU is your pool at? Lower DWUs mean fewer concurrent users and lower performance, and low settings should not be used for any kind of performance tuning. Crank it up temporarily to rule this out as a problem, bearing in mind that changing it disconnects any active queries. Also bear in mind that not all queries respond to a higher DWU.
Resource class - what resource class is associated with the user executing these queries? Remember the default is smallrc, and the admin user always has smallrc. Understand static and dynamic resource classes. The DMV sys.dm_pdw_exec_requests will give you useful information on this. Experiment with your workload to find the sweet spot between performance and concurrency for each resource class. Encourage your dev team to use labels in their queries: OPTION ( LABEL = 'some informative label' ) - there is a sketch of querying by label after this list.
Table geometry - this is the distribution (ROUND_ROBIN | HASH | REPLICATE) of your table and the indexing choice (CLUSTERED COLUMNSTORE | CLUSTERED INDEX | HEAP). Clustered columnstore and round robin are the defaults, but they are not always appropriate. Consider what is appropriate for your tables; an example definition is sketched just below.
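A minimal sketch of spelling the geometry out explicitly instead of relying on the defaults (dbo.FactSale and its columns are hypothetical):
-- Hash-distribute a large fact table on a high-cardinality join key
-- and keep a clustered columnstore index for scan-heavy queries
CREATE TABLE dbo.FactSale
(
    SaleKey     BIGINT NOT NULL,
    CustomerKey INT NOT NULL,
    Amount      DECIMAL(18, 2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(CustomerKey),
    CLUSTERED COLUMNSTORE INDEX
);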
If you work through those and still have an issue you can start to look at statistics and workload classification, but gathering information on the points above should give you a good idea.
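As mentioned above, a sketch of using labels to find your queries in the DMV (the label text is only an example):
-- Tag the query so it is easy to find later
SELECT COUNT(*) FROM dbo.FactSale
OPTION ( LABEL = 'nightly-load: fact count' );
-- Look up the request's status, resource class, and elapsed time by label
SELECT request_id, status, resource_class, total_elapsed_time, command
FROM sys.dm_pdw_exec_requests
WHERE [label] = 'nightly-load: fact count';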
If you are just doing single-value INSERTs, then don't. Dedicated SQL pools are terrible at these. Convert them into a load from a file using a single INSERT ... SELECT or COPY INTO, as sketched below.
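A minimal COPY INTO sketch, assuming data staged as CSV in blob storage (the storage URL and table are hypothetical, and you may need a CREDENTIAL clause depending on your setup):
-- Bulk-load staged files in one statement instead of many single-row INSERTs
COPY INTO dbo.FactSale
FROM 'https://mystorageaccount.blob.core.windows.net/staging/factsale/'
WITH
(
    FILE_TYPE = 'CSV',
    FIRSTROW = 2   -- skip the header row
);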

Azure Sql Database Log I/O Seems High

I've been optimizing our Azure SQL Database and have started getting really good performance. The main concern now is the logging it does. When running an insert/update load test, everything is low except the CPU, which peaks around 15%, and the log IO, which peaks around 25%. Since the log IO is hitting 25%, this pushes the DTU usage to 25%. I turned off auditing in the settings for the database, but that did nothing. Is there a way to reduce the logging being done? I'm not even sure where the logs are being saved.
Any insight on this would help, as I've googled and couldn't locate anything worth mentioning about the logging that is happening.
Here is a screen shot of the metrics...
Workflow Details:
I don't have byte sizes on me as I'm not in the office at the moment. Every task is a SELECT plus either an INSERT or an UPDATE - a typical add-or-update flow using Entity Framework. These tasks fire off and finish at a rate of 63 tasks per second to produce those metrics.
The metric represents writes to the transaction log of the database. The transaction log tracks changes to the data, so this is for the most part a function of the amount of data you insert or update. As you said, auditing has no impact on the log rate in SQL Database.
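To see the log-write pressure directly rather than inferring it from the portal chart, a sketch against the resource-stats DMV in Azure SQL Database:
-- One row per ~15 seconds covering roughly the last hour;
-- avg_log_write_percent is the signal behind the portal's Log IO chart
SELECT end_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;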
There is no impact from auditing based on your description of the workload. Inserts and updates produce log in proportion to the amount of data you write. A couple of options you can try to improve workload efficiency in SQL DB V12 are disabling SI / RCSI; if your record size is small this saves some data IO, log, and tempdb usage. More details at http://www.sqlindepth.com/row-versioning-in-sql-database-version-v12/ You can also compress the tables so that your data IO / log is a little lower, although your CPU consumption will go up a little.
Auditing is orthogonal to transaction log IO, which exists to maintain the consistency and durability of your database. As Sirisha said, you can disable SI / RCSI and compress tables (V12 databases). You may also want to drop unwanted indexes on the tables so that the log IO is smaller.
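A sketch of those two knobs (dbo.Orders is a hypothetical table; check the concurrency impact of turning row versioning off before trying this in production):
-- Disable snapshot isolation and read committed snapshot isolation
-- (reduces row-versioning overhead in data IO / log / tempdb)
ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION OFF;
ALTER DATABASE CURRENT SET READ_COMMITTED_SNAPSHOT OFF;
-- Page-compress a write-heavy table: less IO and log, slightly more CPU
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);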
Try to change the audit retention days to the minimum that works for your case, unless you need to keep retention at the maximum.
The audit logs are stored in your storage account.
Hope this helps.

Does DBCC SHRINKDATABASE slow down SQL Server

I have a service that is scheduled to run several times a day. The service executes a select query each time it fires.
It runs perfectly for a while, but eventually it hits the "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." exception. This happens very randomly.
I ran a trace in SQL Profiler and saw that a DBCC SHRINKDATABASE started around that time and took quite a lot of time, but it was for another database on the same server.
I wonder if the shrink database operation slows down the SQL Server, and whether it is possible for it to cause my query timeout?
The short answer is yes. There are a lot of different things happening on the machine when you run the shrink database command, all of which can contribute to slow queries:
There will be high disk I/O while the data in the files is being rearranged.
Since the data has been rearranged, you will most likely have fragmentation on the indexes, which can slow down your queries.
The operation is also very CPU intensive.
You can find a lot of information on the shrink process on this page: http://sqlmag.com/sql-server/shrinking-data-files
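If a shrink has run against a database, it is worth checking the fragmentation it leaves behind; a sketch (dbo.Orders and IX_Orders_Customer are hypothetical names):
-- List badly fragmented indexes in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;
-- Rebuild one that turns up
ALTER INDEX IX_Orders_Customer ON dbo.Orders REBUILD;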

SQL Server running slow

I have a SQL job that runs every night and does various inserts/updates/deletes. The job contains 40 steps, which mainly execute stored procedures.
It had been running fine up until a week ago, when suddenly the run time went up from 2.5 hours to over 5 hours, sometimes even 8, 9, 10!
Could one of you please give me some pointers?
First of all, let me recommend a valuable resource on the Simple-Talk site. It is a detailed methodology for troubleshooting performance issues on SQL Server.
Was the insert you mention a huge bulk insert that could affect performance? If it was a huge load, the query execution plans could be different and you may need to re-tune your table structure, indexes, etc.
If the run time suddenly changed and no changes were made to the queries or your database structure, then I would ask myself several questions:
First, is the process still taking that long, or did it run slowly only once? Maybe it is running smoothly now and the issue only arose once. Nevertheless, try to find out what triggered the bad performance; it can happen again and take down your server.
Is the server a dedicated SQL Server? If not, check whether new tasks unrelated to the SQL engine have been configured; maybe a new task is doing some heavy I/O work and therefore your CRUD operations take longer.
If it is a dedicated server, then check that no new job has been added that could slow down your existing jobs. Check this SO link for details on jobs set up from the SQL Agent; a sketch of a job-duration history query follows this answer.
Maybe there is low memory due to another process on the same server?
And there is a lot more to check, but before going deeper I would verify that nothing external (not SQL Server related) was the reason for the delay in the process execution.
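To spot when the nightly job's duration started creeping up, a sketch against msdb's job history (NightlyLoad is a hypothetical job name):
-- Per-run duration for one job; run_duration is an HHMMSS-encoded integer,
-- e.g. 23000 means 2 hours 30 minutes 00 seconds
SELECT j.name,
       h.run_date,
       h.run_duration
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j
  ON j.job_id = h.job_id
WHERE j.name = 'NightlyLoad'
  AND h.step_id = 0   -- step 0 is the job outcome row
ORDER BY h.run_date DESC, h.run_time DESC;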