So essentially my query is currently grabbing data from two different linked servers, which makes it unbelievably slow (both are slow servers and need to be replaced). Normally, for queries that grab all their information from the same server, I'd do something like this:
EXEC (@SQL) AT [SERVER]
Executing it at that server makes the query run blazing fast. I'm talking a 43-minute query running in 14 seconds. Not sure exactly why, but I was told it may have better indexing (I'm not quite sure how indexing works).
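For context, this is roughly the pattern I mean (the server, database and table names here are just placeholders):
DECLARE @SQL nvarchar(max);
SET @SQL = N'
    SELECT c.COL1, c.COL2
    FROM DATABASE1.dbo.TABLE1 c
    WHERE c.COL1 = 12345';
-- The whole batch is compiled and executed on the remote instance,
-- so it can use that server's own indexes and statistics.
EXEC (@SQL) AT [SERVER1];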
But I can't do this anymore, since one of the databases doesn't exist on that server. And no, I can't copy the database over to the other server.
Can anyone give me any advice on what to replace the server prefix with or what's a good way to approach this?
Example:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
-- Several Columns.
FROM
dbo.mem m
INNER JOIN [SERVER1].DATABASE1.dbo.TABLE1 c on c.COL1 = m.COL1
INNER JOIN [SERVER2].DATABASE2.dbo.TABLE2 BM ON BM.COL1 = c.COL1
WHERE
-- CONDITIONS
GO
Occasionally, SQL Azure will cause a simple query inside a stored procedure to take a long time to run. After much research, I've tracked the issue down to last_optimize_duration in Query Store showing 20+ seconds.
I've re-written the query in a number of ways, simplified it, used a WITH (INDEX) table hint, and tried calling other stored procs from the main one. The query itself runs fast 100% of the time EXCEPT sometimes when the system recompiles it. Of course, if I run it in SSMS it compiles and runs quickly.
CREATE PROC [dbo].[spLogHatesMe] (
    @CID [VARCHAR](200),
    @LogType VARCHAR(50) = NULL
)
AS
SELECT *
FROM dbo.Log
WHERE CID = @CID
  AND LogType = @LogType;
GO
Also note that the Log table has an index on CID and LogType.
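Roughly of this shape (the real index name differs, but the key columns are CID and LogType):
CREATE NONCLUSTERED INDEX IX_Log_CID_LogType   -- illustrative name
    ON dbo.Log (CID, LogType);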
I would expect the optimization time to be similar to all other compilations, in the 1,000-5,000 microsecond range, not 25,913,462 microseconds, which was the last duration I saw. No other queries are having the same type of issue.
The Log table is mostly inserts. For one specific task we need to look back and read one of the values, roughly 20-25 inserts per read.
I'm using this query out of the query store to get the compile times:
SELECT TOP 100 *
FROM sys.query_store_plan AS Pl
INNER JOIN sys.query_store_query AS Qry ON Pl.query_id = Qry.query_id
INNER JOIN sys.query_store_query_text AS Txt ON Qry.query_text_id = Txt.query_text_id
WHERE Qry.is_internal_query=0
ORDER BY Pl.last_compile_start_time desc
After much research, it turns out Azure was indeed updating statistics before executing the stored procedure.
I used SET AUTO_UPDATE_STATISTICS_ASYNC ON to make that into an async operation instead.
Once I did that, the errors I was getting completely stopped. Haven't had the same issue since, and that was months ago.
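For anyone else hitting this, the database option is set like this (ALTER DATABASE CURRENT works on Azure SQL; on a regular instance substitute your database name):
ALTER DATABASE CURRENT
    SET AUTO_UPDATE_STATISTICS_ASYNC ON;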
I ran a query to delete around 4 million rows from my database. It ran for about 12 hours before my laptop lost the network connection. At that point, I decided to take a look at the status of the query in the database. I found that it was in the suspended state. Specifically:
Start Time:      2018/08/15 11:28:39.490
SPID:            115
Database:        RingClone
Executing SQL:   *see below
Status:          suspended
command:         DELETE
wait_type:       PAGEIOLATCH_EX
wait_time:       41
wait_resource:   5:1:1116111
last_wait_type:  PAGEIOLATCH_EX
*Here is the SQL query in question:
DELETE FROM T_INDEXRAWDATA WHERE INDEXRAWDATAID IN (SELECT INDEXRAWDATAID FROM T_INDEX WHERE OWNERID='1486836020')
After reading this:
https://dba.stackexchange.com/questions/87066/sql-query-in-suspended-state-causing-high-cpu-usage
I realize I probably should have broken this up into smaller pieces to delete them (or even delete them one-by-one). But now I just want to know if it is "safe" for me to KILL this query, as the answer in that post suggests. One thing the selected answer states is that "you may run into data consistency problems" if you KILL a query while it's executing. If it causes some issues with the data I am trying to delete, I'm not that concerned. However, I'm more concerned about this causing some issues with other data, or with the table structure itself.
Is it safe to KILL this query?
If you ran the delete from your laptop over the network and it lost connection to the server, you can either kill the SPID or wait for it to disappear by itself. Depending on the @@VERSION of your SQL Server instance, in particular how well it's patched, the latter might require an instance restart.
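With the output you posted, killing it would simply be:
KILL 115;   -- 115 is the SPID from your output above; double-check it before running this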
Regarding the consistency issues, you seem to misunderstand them. They are possible only if you run multiple statements in a single batch without wrapping them in a transaction. As I understand it, you had a single statement; if that's the case, don't worry about consistency. SQL Server wouldn't have become what it is now if it were that easy to corrupt its data.
I would rewrite the query, however: if the T_INDEX.INDEXRAWDATAID column contains NULLs, the IN (SELECT ...) form can run into issues. It's better to rewrite it as a join, and to split it into batches:
while 1=1 begin
    DELETE top (10000) t
    FROM T_INDEXRAWDATA t
    inner join T_INDEX i on t.INDEXRAWDATAID = i.INDEXRAWDATAID
    WHERE i.OWNERID = '1486836020';

    if @@rowcount = 0
        break;

    checkpoint;
end;
It definitely will not be any slower, but it can boost performance, depending on your schema, data and the state of any indices the tables have.
I have a SQL query whose exact text is generated in C# and passed through ADO.NET as a text-based SqlCommand.
The query looks something like this:
SELECT TOP (@n)
    a.ID,
    a.Event_Type_ID as EventType,
    a.Date_Created,
    a.Meta_Data
FROM net.Activity a
LEFT JOIN net.vu_Network_Activity na WITH (NOEXPAND)
    ON na.Member_ID = @memberId AND na.Activity_ID = a.ID
LEFT JOIN net.Member_Activity_Xref ma
    ON ma.Member_ID = @memberId AND ma.Activity_ID = a.ID
WHERE
    a.ID < @LatestId
    AND (
        (Event_Type_ID IN (1,2,3))
        OR
        (
            (na.Activity_ID IS NOT NULL OR ma.Activity_ID IS NOT NULL)
            AND
            Event_Type_ID IN (4,5,6)
        )
    )
ORDER BY a.ID DESC
This query has been working well for quite some time. It takes advantage of some indexes we have on these tables.
In any event, all of a sudden this query started running really slow, but ran almost instantaneously in SSMS.
Eventually, after reading several resources, I was able to verify that the slowdown we were getting was from poor parameter sniffing.
By copying all of the parameters to local variables, I was able to work around the problem. The thing is, this just feels all kinds of wrong to me.
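For clarity, the workaround just declares local copies at the top of the batch and has the query use those instead (the data types below are assumed):
DECLARE @memberId_local int,
        @LatestId_local int,
        @n_local        int;

SELECT @memberId_local = @memberId,
       @LatestId_local = @LatestId,
       @n_local        = @n;

-- The query then references the _local variables instead of the parameters,
-- so the optimizer can no longer sniff the original parameter values and
-- compiles against the column statistics instead.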
I'm assuming that what happened is that the statistics on one of these tables were updated, and then, by some crappy luck, the very first time this query was recompiled it was called with parameter values that caused the execution plan to differ?
I was able to track down the query in the Activity Monitor, and the execution plan that caused the query to run in ~13 seconds was:
Running in SSMS results in the following execution plan (and only takes ~100ms):
So what is the question?
I guess my question is this: How can I fix this problem, without copying the parameters to local variables, which could lead to a large number of cached execution plans?
Quote from the linked comment / Jes Borland:
You can use local variables in stored procedures to “avoid” parameter sniffing. Understand, though, that this can lead to many plans stored in the cache. That can have its own performance implications. There isn’t a one-size-fits-all solution to the problem!
My thinking is that if there is some way for me to manually remove the current execution plan from the plan cache, that might just be good enough... but everything I have found online only shows me how to do this for an actual named stored procedure.
Since this is a text-based SqlCommand coming from C#, how do I find the cached execution plan, with the sniffed parameter values, and remove it?
Note: the somewhat obvious solution of "just create a proper stored procedure" is difficult to do because this query can get generated in a number of different ways... and would require a somewhat unpleasant refactor.
If you want to remove a specific plan from the cache then it is really a two-step process: first obtain the plan handle for that specific plan, and then use DBCC FREEPROCCACHE to remove that plan from the cache.
To get the plan handle, you need to look in the execution plan cache. The T-SQL below is an example of how you could search for the plan and get the handle (you may need to play with the filter clause a bit to home in on your particular plan):
SELECT top (10)
qs.last_execution_time,
qs.creation_time,
cp.objtype,
SUBSTRING(qt.[text], qs.statement_start_offset/2, (
CASE
WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.[text])) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2 + 1
) AS query_text,
qt.text as full_query_text,
tp.query_plan,
qs.sql_handle,
qs.plan_handle
FROM
sys.dm_exec_query_stats qs
LEFT JOIN sys.dm_exec_cached_plans cp ON cp.plan_handle=qs.plan_handle
CROSS APPLY sys.dm_exec_sql_text (qs.[sql_handle]) AS qt
OUTER APPLY sys.dm_exec_query_plan(qs.plan_handle) tp
WHERE qt.text like '%vu_Network_Activity%'
Once you have the plan handle, call DBCC FREEPROCCACHE as below:
DBCC FREEPROCCACHE(<plan_handle>)
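For example, with a handle pasted in from the query above (the value below is just a placeholder, not a real handle):
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);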
There are many ways to delete/invalidate a query plan:
DBCC FREEPROCCACHE(plan_handle)
or
EXEC sp_recompile 'net.Activity'
or
adding the OPTION (RECOMPILE) query hint at the end of your query (see the sketch after this list)
or
enabling the optimize for ad hoc workloads server setting
or
updating statistics
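For the query in the question, the OPTION (RECOMPILE) route just means appending the hint to the generated text:
-- ...rest of the generated query as before
ORDER BY a.ID DESC
OPTION (RECOMPILE);   -- compiles a fresh plan for the actual parameter values on every execution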
If you have a crappy product from a crappy vendor, the best way to handle parameter sniffing is to create your own plan guide using EXEC sp_create_plan_guide.
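A rough sketch of what that looks like; the guide name, statement text and hint below are illustrative only, and the @stmt text must match what the application actually submits, character for character:
EXEC sp_create_plan_guide
    @name = N'Guide_NetActivity',          -- hypothetical guide name
    @stmt = N'SELECT TOP (@n) a.ID FROM net.Activity a WHERE a.ID < @LatestId ORDER BY a.ID DESC',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = N'@n int, @LatestId int',    -- must match the sp_executesql parameter list
    @hints = N'OPTION (OPTIMIZE FOR UNKNOWN)';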
I ran a query on about 1,300,000 records in one table. It takes certain records that meet some WHERE conditions and inserts them into another table. It first clears out the target table entirely.
The time taken to complete the query gets drastically better with each Execute:
1st: 5 minutes, 3 seconds
2nd: 2 minutes, 43 seconds
3rd: 12 seconds
4th: 3 seconds
I'm not doing anything other than just hitting Execute. My query looks like this (somewhat abbreviated for length purposes):
DELETE FROM dbo.ConsolidatedLogs -- clear target table

DECLARE @ClientID int

DECLARE c CURSOR FOR SELECT ClientID FROM dbo.Clients
OPEN c

FETCH NEXT FROM c INTO @ClientID -- for each ClientID
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO dbo.ConsolidatedLogs
        (col1, col2)
    SELECT col1, col2
    FROM dbo.CompleteLogsRaw
    WHERE col3 = 1 AND
          ClientID = @ClientID

    FETCH NEXT FROM c INTO @ClientID
END

CLOSE c
DEALLOCATE c
How/why does this happen? What is SQL Server doing exactly to make this possible?
This query is going to be run as an SQL Server Agent job, once every 3 hours. Will it take the full 5 minutes every time, or will it be shorter because the job is only running this one query, even though it's got a 3 hour delay?
If identical queries get faster with each run, there is an outstanding chance that things are being cached. So, where can things be cached?
SQL Server Query Plan Cache
SQL Server's Data (Buffer) Cache
Operating System IO Buffers
Hard Disk Controller Cache
Hard Disk On-Disk Cache
You can clear the SQL Server Query Cache between runs to see what the impact of that cache is
How can I clear the SQL Server query cache?
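On a test instance, that typically boils down to the following (don't run these on production; they clear server-wide caches):
CHECKPOINT;               -- write dirty pages to disk first
DBCC DROPCLEANBUFFERS;    -- empty the data (buffer) cache
DBCC FREEPROCCACHE;       -- empty the plan cache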
SQL Server will use whatever RAM is dedicated to it to keep things that it accesses frequently in RAM rather than on disk. The more you run the same query, the more the data that would normally be found on disk is likely to reside in RAM instead.
The OS level and hardware-level caches are easiest to reset by performing a reboot if you wish to see whether they are contributing to the improving results.
If you publish the query plan that SQL Server is using for each execution of your query, more detailed diagnostics would be possible.
When SQL Server executes a query in Enterprise Manager, it creates an "execution plan" (or "query plan") after the first execution, and caches that plan. A query plan, in a nutshell, describes how SQL Server will attack the tables, fields, indexes, and data necessary to satisfy the result. Each time you re-run the query, that plan is fetched from the plan cache, and the "heavy lifting" that the query preprocessor would ordinarily have to do is omitted. That allows the query to be performed more rapidly on second and subsequent executions.
Mind you, that's an oversimplification of a much more detailed (and, thus, inherently cooler) process, but that's Query Plan 101 :)
Sometimes when diagnosing issues with our SQL Server 2000 database it might be helpful to know that a stored procedure is using a bad plan or is having trouble coming up with a good plan at the time I'm running into problems. I'm wondering if there is a query or command I can run to tell me how many execution plans are cached currently for a particular stored procedure.
You can query the cache in a number of different ways, either looking at its contents, or looking at some related statistics.
A couple of commands to help you along your way:
SELECT * FROM syscacheobjects -- shows the contents of the procedure
-- cache for all databases
DBCC PROCCACHE -- shows some general cache statistics
DBCC CACHESTATS -- shows the usage statistics for the cache, things like hit ratio
If you need to clear the cache for just one database, you can use:
DBCC FLUSHPROCINDB (@dbid) -- that's an int, not the name of the database.
                           -- You'd get the int from sysdatabases or the DB_ID() function
Edit: the above is for SQL Server 2000, which is what the question asked about. However, for anyone visiting who's using SQL Server 2005, it's a slightly different arrangement:
select * from sys.dm_exec_cached_plans -- shows the basic cache stuff
A useful query for showing plans in 2005:
SELECT p.cacheobjtype, p.objtype, p.usecounts, p.refcounts, t.[text]
from sys.dm_exec_cached_plans p
join sys.dm_exec_query_stats s on p.plan_handle = s.plan_handle
cross apply sys.dm_exec_sql_text(s.sql_handle) t
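Building on the 2005 DMVs above, a quick way to count how many plans are cached for one specific procedure (the procedure name is illustrative):
SELECT COUNT(*)
FROM sys.dm_exec_cached_plans p
CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) t
WHERE p.objtype = 'Proc'
  AND t.dbid = DB_ID()
  AND t.objectid = OBJECT_ID('dbo.MyProc');   -- substitute your procedure name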