I'm encountering an issue with my local instance of SSMS 2012, where it has suddenly started running extremely slowly.
I have queries that would usually take 20-30 seconds now taking 6+ minutes. These aren't necessarily complex queries, just simple joins.
I should note that working on a single database performs as normal (as far as my limited tests show).
The only change I've made recently is connecting to R via ODBC, using the ODBC Driver 11 for SQL Server straight from Microsoft's website.
I've tried deleting my ODBC connections, closing RStudio, closing SSMS, and even restarting my laptop to sever any remaining connections - all to no avail.
Any help at this point would be appreciated, thanks.
Perhaps your statistics are out of date, as that can cause a query that used to run okay to go downhill if there has been a lot of change in the database. You can update them using:
EXEC sp_updatestats;
Hopefully this will help. Another thing to check is whether your indexes are fragmented:
SELECT dbschemas.[name] as 'Schema',
dbtables.[name] as 'Table',
dbindexes.[name] as 'Index',
indexstats.avg_fragmentation_in_percent,
indexstats.page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS indexstats
INNER JOIN sys.tables dbtables on dbtables.[object_id] = indexstats.[object_id]
INNER JOIN sys.schemas dbschemas on dbtables.[schema_id] = dbschemas.[schema_id]
INNER JOIN sys.indexes AS dbindexes ON dbindexes.[object_id] = indexstats.[object_id]
AND indexstats.index_id = dbindexes.index_id
WHERE indexstats.database_id = DB_ID()
ORDER BY indexstats.avg_fragmentation_in_percent desc
This will show a list of the indexes in the current database, sorted by fragmentation. (Run a USE statement first to choose the database:)
USE Northwind
GO
For anything with fragmentation above 5% (but below 30%), reorganize the index:
ALTER INDEX IndexName ON TableName REORGANIZE
For anything above 30%, rebuild it:
ALTER INDEX IndexName ON TableName REBUILD
This could take some time!
Hope it helps.
So essentially my query is currently grabbing data from two different linked servers, which causes it to run unbelievably slowly (both are slow servers and need to be replaced). Normally, if a query grabs all of its information from the same database, I'd do something like this:
EXEC (@SQL) AT [SERVER]
Executing at that server makes the query run blazingly fast - I'm talking a 43-minute query running in 14 seconds. I'm not sure exactly why, but I was told it may have better indexing (I'm not quite sure how indexing works).
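For reference, here is a minimal sketch of that pass-through pattern (the server, database, and table names are placeholders):
-- Build the statement as a string and execute it entirely on the linked
-- server, so its local indexes and optimizer do the work.
-- (Requires RPC Out to be enabled on the linked server.)
DECLARE @SQL NVARCHAR(MAX) =
    N'SELECT COL1, COL2 FROM DATABASE1.dbo.TABLE1 WHERE COL1 = 42;';
EXEC (@SQL) AT [SERVER1];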
But I can't do this anymore, since one of the databases doesn't exist on this server. And no, I can't copy the database over to the other server.
Can anyone give me any advice on what to replace the server prefix with or what's a good way to approach this?
Example:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
-- Several Columns.
FROM
dbo.mem m
INNER JOIN [SERVER1].DATABASE1.dbo.TABLE1 c on c.COL1 = m.COL1
INNER JOIN [SERVER2].DATABASE2.dbo.TABLE2 BM ON BM.COL1= c.COL1
WHERE
-- CONDITIONS
GO
I have a SQL query whose exact code is generated in C# and passed through ADO.NET as a text-based SqlCommand.
The query looks something like this:
SELECT TOP (@n)
a.ID,
a.Event_Type_ID as EventType,
a.Date_Created,
a.Meta_Data
FROM net.Activity a
LEFT JOIN net.vu_Network_Activity na WITH (NOEXPAND)
ON na.Member_ID = @memberId AND na.Activity_ID = a.ID
LEFT JOIN net.Member_Activity_Xref ma
ON ma.Member_ID = @memberId AND ma.Activity_ID = a.ID
WHERE
a.ID < @LatestId
AND (
(Event_Type_ID IN (1,2,3))
OR
(
(na.Activity_ID IS NOT NULL OR ma.Activity_ID IS NOT NULL)
AND
Event_Type_ID IN(4,5,6)
)
)
ORDER BY a.ID DESC
This query has been working well for quite some time. It takes advantage of some indexes we have on these tables.
In any event, all of a sudden this query started running really slowly when sent from the application, yet ran almost instantaneously in SSMS.
Eventually, after reading several resources, I was able to verify that the slowdown we were getting was from poor parameter sniffing.
By copying all of the parameters to local variables, I was able to successfully mitigate the problem. The thing is, this just feels all kinds of wrong to me.
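For context, the local-variable workaround looks something like this (a sketch only; the local names are made up):
-- Copy each sniffed parameter into a local variable; the optimizer
-- then builds the plan for "unknown" values rather than for the
-- specific values the batch happened to be compiled with.
DECLARE @localMemberId INT = @memberId;
DECLARE @localLatestId INT = @LatestId;
-- ...and the query below references @localMemberId / @localLatestId
-- instead of @memberId / @LatestId.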
I'm assuming that what happened was that the statistics on one of these tables were updated, and then, by some crappy luck, the very first time this query was recompiled it was called with parameter values that caused a different execution plan to be cached?
I was able to track down the query in Activity Monitor, and the execution plan that resulted in the query running in ~13 seconds was: [execution plan screenshot]
Running in SSMS results in the following execution plan (and only takes ~100ms): [execution plan screenshot]
So what is the question?
I guess my question is this: How can I fix this problem, without copying the parameters to local variables, which could lead to a large number of cached execution plans?
Quote from the linked comment / Jes Borland:
You can use local variables in stored procedures to “avoid” parameter sniffing. Understand, though, that this can lead to many plans stored in the cache. That can have its own performance implications. There isn’t a one-size-fits-all solution to the problem!
My thinking is that if there were some way for me to manually remove the current execution plan from the plan cache, that might just be good enough... but everything I have found online only shows how to do this for an actual named stored procedure.
This is a text-based SqlCommand coming from C#, so how do I find the cached execution plan, with the sniffed parameter values, and remove it?
Note: the somewhat obvious solution of "just create a proper stored procedure" is difficult to do because this query can get generated in a number of different ways... and would require a somewhat unpleasant refactor.
If you want to remove a specific plan from the cache, it is really a two-step process: first obtain the plan handle for that specific plan; then use DBCC FREEPROCCACHE to remove that plan from the cache.
To get the plan handle, you need to look in the execution plan cache. The T-SQL below is an example of how you could search for the plan and get the handle (you may need to play with the filter clause a bit to home in on your particular plan):
SELECT top (10)
qs.last_execution_time,
qs.creation_time,
cp.objtype,
SUBSTRING(qt.[text], qs.statement_start_offset/2 + 1, (
CASE
WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.[text])) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2 + 1
) AS query_text,
qt.text as full_query_text,
tp.query_plan,
qs.sql_handle,
qs.plan_handle
FROM
sys.dm_exec_query_stats qs
LEFT JOIN sys.dm_exec_cached_plans cp ON cp.plan_handle=qs.plan_handle
CROSS APPLY sys.dm_exec_sql_text (qs.[sql_handle]) AS qt
OUTER APPLY sys.dm_exec_query_plan(qs.plan_handle) tp
WHERE qt.text like '%vu_Network_Activity%'
Once you have the plan handle, call DBCC FREEPROCCACHE as below:
DBCC FREEPROCCACHE(<plan_handle>)
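For example, with a plan handle copied from the plan_handle column of the query above (the value below is just a placeholder):
-- Removes only this one plan from the cache; paste in the actual
-- varbinary plan_handle value returned by the search query.
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);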
There are many ways to delete/invalidate a query plan:
DBCC FREEPROCCACHE(plan_handle)
or
EXEC sp_recompile 'net.Activity'
or
adding an OPTION (RECOMPILE) query hint at the end of your query
or
using the 'optimize for ad hoc workloads' server setting
or
updating statistics
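For the last two options, rough sketches (illustrative only):
-- Enable the 'optimize for ad hoc workloads' setting; it is an
-- advanced option, so advanced options must be visible first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- Refresh statistics, which also invalidates dependent plans.
EXEC sp_updatestats;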
If you have a crappy product from a crappy vendor, the best way to handle parameter sniffing is to create your own plan guide using EXEC sp_create_plan_guide.
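A minimal plan guide sketch (the guide name and statement text here are placeholders; @stmt must match the submitted batch text exactly):
EXEC sp_create_plan_guide
    @name = N'Guide_NetActivity',                  -- any unique guide name
    @stmt = N'SELECT ... FROM net.Activity a ...', -- exact statement text
    @type = N'SQL',                                -- stand-alone SQL statement
    @module_or_batch = NULL,                       -- NULL: @stmt is its own batch
    @params = NULL,                                -- parameter list, if parameterized
    @hints = N'OPTION (OPTIMIZE FOR UNKNOWN)';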
Possible Duplicate:
Help with deadlock in Sql Server 2008
SQL Server automatically logs all deadlocks. Can anyone help me with a SQL query that will capture the deadlock data being collected for a recent event?
I am using SQL Server 2008 R2 for my development activities.
You can use a deadlock graph and gather the information you require from the log file.
The only other way I could suggest is digging through the information using EXEC sp_lock (soon to be deprecated), EXEC sp_who2, or the sys.dm_tran_locks DMV.
SELECT L.request_session_id AS SPID,
DB_NAME(L.resource_database_id) AS DatabaseName,
O.Name AS LockedObjectName,
P.object_id AS LockedObjectId,
L.resource_type AS LockedResource,
L.request_mode AS LockType,
ST.text AS SqlStatementText,
ES.login_name AS LoginName,
ES.host_name AS HostName,
TST.is_user_transaction as IsUserTransaction,
AT.name as TransactionName,
CN.auth_scheme as AuthenticationMethod
FROM sys.dm_tran_locks L
JOIN sys.partitions P ON P.hobt_id = L.resource_associated_entity_id
JOIN sys.objects O ON O.object_id = P.object_id
JOIN sys.dm_exec_sessions ES ON ES.session_id = L.request_session_id
JOIN sys.dm_tran_session_transactions TST ON ES.session_id = TST.session_id
JOIN sys.dm_tran_active_transactions AT ON TST.transaction_id = AT.transaction_id
JOIN sys.dm_exec_connections CN ON CN.session_id = ES.session_id
CROSS APPLY sys.dm_exec_sql_text(CN.most_recent_sql_handle) AS ST
WHERE resource_database_id = db_id()
ORDER BY L.request_session_id
http://www.sqlmag.com/article/sql-server-profiler/gathering-deadlock-information-with-deadlock-graph
http://weblogs.sqlteam.com/mladenp/archive/2008/04/29/SQL-Server-2005-Get-full-information-about-transaction-locks.aspx
In order to capture deadlock graphs without using a trace (you don't need profiler necessarily), you can enable trace flag 1222. This will write deadlock information to the error log. However, the error log is textual, so you won't get nice deadlock graph pictures - you'll have to read the text of the deadlocks to figure it out.
I would set this as a startup trace flag (in which case you'll need to restart the service). However, you can turn it on for the currently running instance of the service (which won't require a restart, but won't persist across restarts) using the following global trace flag command:
DBCC TRACEON(1222, -1);
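You can then confirm the flag is active globally with:
-- Shows the status of trace flag 1222; -1 checks the global scope.
DBCC TRACESTATUS(1222, -1);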
A quick search yielded this tutorial:
Finding SQL Server Deadlocks Using Trace Flag 1222
Also note that if your system experiences a lot of deadlocks, this can really hammer your error log, and can become quite a lot of noise, drowning out other, important errors.
Have you considered third party monitoring tools? SQL Sentry and Plan Explorer, for example, have a much nicer deadlock graph, showing you object / index names, as well as the order in which the locks were taken. As a bonus, these are captured for you automatically on monitored servers without having to configure trace flags, run your own traces, etc.:
New Deadlock Visualizations in SQL Sentry and Plan Explorer
Analyzing Deadlocks in SQL Sentry
Disclaimer: I used to work for SQL Sentry.
We have a function in our database that searches two large tables to see if a value exists. It is a pretty large query, but it is optimized to use indexes and generally runs pretty fast.
Three times over the past two weeks, this function has gone haywire and run extremely slowly, which causes deadlocking and bad performance all around. This happens even at times of less-than-peak usage.
Rebuilding the function using "Alter Function" in SQL Server seems to take care of the issue. Once we do that, the server usage goes back to normal and everything is OK.
This leads us to think that the function's query plan has been rebuilt and is taking the correct indexes into account, but we have no idea why SQL Server decided to switch to a worse plan all of a sudden.
Does anyone have any ideas what might cause this behavior, or how to test for it, or prevent it? We are running SQL Server 2008 Enterprise.
The behaviour you are describing is often due to an incorrectly cached query plan and/or out of date statistics.
It commonly occurs when you have a large number of parameters in a WHERE clause, especially a long list of those that are of the form:
(@parameter1 IS NULL OR TableColumn1 = @parameter1)
Say the cached query plan expires and the proc is called with an unrepresentative set of parameters. The plan is then cached for this data profile. BUT, if the proc is more commonly called with a very different set of parameters, the plan might not be appropriate. This is often known as 'parameter sniffing'.
There are ways to mitigate and eliminate this problem, but they may involve trade-offs and depend on your SQL Server version. Look at OPTIMIZE FOR and OPTIMIZE FOR UNKNOWN. IF (and it's a big if) the proc is called infrequently but must run as fast as possible, you can add OPTION (RECOMPILE) to force a recompile each time it is called, BUT don't do this for frequently called procs OR without investigation.
[NOTE: be aware of which Service pack and Cumulative Update (CU) your SQL Server 2008 box has, as the recompile and parameter sniffing logic works differently in some versions]
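As an illustration only (the table name is made up; @parameter1 is assumed to be a stored procedure parameter), the hints mentioned above attach to the query like this:
-- Inside a stored procedure with parameter @parameter1:
-- OPTIMIZE FOR UNKNOWN makes the optimizer plan for average
-- statistics instead of the first sniffed parameter value.
SELECT *
FROM dbo.SomeTable t
WHERE (@parameter1 IS NULL OR t.TableColumn1 = @parameter1)
OPTION (OPTIMIZE FOR UNKNOWN);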
Run this query (from Glenn Berry) to determine the state of statistics:
-- When were Statistics last updated on all indexes?
SELECT o.name, i.name AS [Index Name],
STATS_DATE(i.[object_id], i.index_id) AS [Statistics Date],
s.auto_created, s.no_recompute, s.user_created, st.row_count
FROM sys.objects AS o WITH (NOLOCK)
INNER JOIN sys.indexes AS i WITH (NOLOCK)
ON o.[object_id] = i.[object_id]
INNER JOIN sys.stats AS s WITH (NOLOCK)
ON i.[object_id] = s.[object_id]
AND i.index_id = s.stats_id
INNER JOIN sys.dm_db_partition_stats AS st WITH (NOLOCK)
ON o.[object_id] = st.[object_id]
AND i.[index_id] = st.[index_id]
WHERE o.[type] = 'U'
ORDER BY STATS_DATE(i.[object_id], i.index_id) ASC OPTION (RECOMPILE);
I just started a new job and was given a bug to track down and fix. The basic issue is that a field in a DB record is being cleared out and no one knows why. So far I have tried:
- Checking the table for triggers; there are none.
- Monitoring the table using SQL Server Profiler for the last couple of days in the hope that the error would happen again, but unfortunately it hasn't.
- Reviewing all the code that does inserts/updates; I didn't see anything that would cause this problem.
Does anyone have any other suggestions for finding what could have updated this record? Am I not checking something that I should be? Are there any other sources of information I should look at?
Create a trigger that will write to a history table. Include columns for the date of the write as well as the user.
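A minimal sketch of such an audit trigger (the table, column, and key names are made up for illustration):
-- Hypothetical history table for the field that keeps getting cleared.
CREATE TABLE dbo.FieldHistory (
    RecordId  INT           NOT NULL,
    OldValue  NVARCHAR(100) NULL,
    NewValue  NVARCHAR(100) NULL,
    ChangedAt DATETIME      NOT NULL DEFAULT GETDATE(),
    ChangedBy SYSNAME       NOT NULL DEFAULT SUSER_SNAME()
);
GO
-- Records a row every time the field changes, including who changed it.
CREATE TRIGGER trg_Audit_FieldName ON dbo.TheTable
AFTER UPDATE
AS
BEGIN
    INSERT INTO dbo.FieldHistory (RecordId, OldValue, NewValue)
    SELECT d.Id, d.FieldName, i.FieldName
    FROM deleted d
    JOIN inserted i ON i.Id = d.Id
    WHERE ISNULL(d.FieldName, '') <> ISNULL(i.FieldName, '');
END;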
::fn_dblog() will show you, at the very least, when the update occurred (as a sequence, not as a time) and what other operations were done by that transaction. From what else that transaction did, along with what other transactions were doing at that moment, you should be able to narrow down at least the context under which the update occurred, at which point code inspection becomes a viable option.
Reading the log requires... the log, so your database should be in the FULL recovery model.
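A starting point for digging through the log (fn_dblog is undocumented, so column names may vary between versions):
-- Recent row-modification log records, newest first; AllocUnitName
-- helps match a record to the affected table and index.
SELECT [Current LSN], Operation, Context,
       [Transaction ID], AllocUnitName
FROM ::fn_dblog(NULL, NULL)   -- NULL, NULL = the entire available log
WHERE Operation IN ('LOP_MODIFY_ROW', 'LOP_MODIFY_COLUMNS')
ORDER BY [Current LSN] DESC;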
select
schema_name = s.name,
object_name = o.name
from sys.sql_modules m join sys.objects o on m.object_id = o.object_id
join sys.schemas s on o.schema_id = s.schema_id
where definition like '%FieldName%'
This query looks through all the objects in the database (stored procedures, functions, views) and finds every place where 'FieldName' is referenced. I would go through all the objects returned by this query to see if anything unusual is being done with the field. This might be extremely tedious, as it may return more results than you care about, but it's a sure-shot way of catching all the references to the field.
If you are on SQL Server 2008 you can use Extended Events to get the full stack trace for the offending statement. Example code here: Create Trigger to log SQL that affected table?