SQL Server LCK_M_S only happens in production

I have a stored procedure, called by a SQL Server 2012 report, that takes an age to run in production compared to development because of a blocking session (an LCK_M_S wait).
The stored procedure runs instantaneously when executed in SQL Server Management Studio and also works well when called as part of the report from a dev laptop via Visual Studio.
When the report is uploaded to the production server this blocking issue appears.
How can I find out what is causing the lck_m_s issue when in production?

Execute this query when the problem happens again:
select *
from sys.dm_os_waiting_tasks t
inner join sys.dm_exec_connections c on c.session_id = t.blocking_session_id
cross apply sys.dm_exec_sql_text(c.most_recent_sql_handle) as h1
It will give you the SPID of the session that caused the blocking, the resource that was blocked on, and the text of the most recent query for that session. This should give you a solid starting point.

You have a couple of options.
Set up the blocked process report. Essentially, you set the 'blocked process threshold (s)' system configuration to a non-zero number of seconds and set up an event notification on the BLOCKED_PROCESS_REPORT event. You'll get an XML report each time any process is blocked for more than the threshold you've set. The downside is that it fires for anything getting blocked, not just your procedure, but the report will tell you whoever is holding the incompatible lock.
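As a sketch, enabling the threshold looks like this (the 5-second value is only an example; choose whatever threshold suits your environment):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold (s)', 5;
RECONFIGURE;
The report itself is then delivered via the event notification (or can be captured with a trace or Extended Events session on the blocked_process_report event).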
Set up an extended events session for your procedure to capture the lock_released event where the duration is longer than you care to wait. The upside is that this is extremely targeted and you can define the session so that you get very little noise. The downside is that you won't know what process is holding the incompatible lock (though you will get a pretty detailed description of what the locked resource is to further your investigation).
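For illustration, such a session might look like the following (the session name, duration threshold, and file target are assumptions; verify the event's available columns on your build):
CREATE EVENT SESSION long_lock_waits ON SERVER
ADD EVENT sqlserver.lock_released (
    WHERE duration > 5000000  -- duration here is in microseconds; confirm on your version
)
ADD TARGET package0.event_file (SET filename = N'long_lock_waits');
GO
ALTER EVENT SESSION long_lock_waits ON SERVER STATE = START;
Reading the event_file target afterwards gives you the locked resource description mentioned above.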

Related

Any way to cancel a request in SSMS?

I say cancel a "request" rather than a query because typically I am able to continue using Management Studio while I am running queries. However, I have remote access to an Azure database that shuts off after an hour of no activity. When I send it a query and it's shut down, it takes a really long time to fire up, and I'm not able to continue working during this time because Management Studio completely freezes.
I am literally still waiting on the request that prompted me to write this post to complete and it has been several minutes now. In this case, I actually ran my query on the wrong connection and did not even mean to hit the Azure database so it's especially annoying that I have to wait this long, lol.
If I didn't know better I would think it was permanently locked, but this has happened enough times now that I know it will eventually return control.
By your description, you have a SERVERLESS version of Azure SQL Database.
When this shuts off the compute due to lack of activity, it completely removes the compute portion of the service - simply leaving your database on storage. When you then query it again for the first time, it needs to allocate some compute to the database, start that up (with the redundant copies that Azure SQL provides), connect to your database, ensure the gateway is up to date to direct your connection and only THEN will it accept a connection.
So, with Management Studio, it is waiting on the response and I believe that it also has some degree of retry so that it keeps checking until the connection is established.
You could change the tier to a PROVISIONED service tier where it is available all the time (your billing will change so be sure it is what you need) and this will stop, or you could have a PowerShell script or similar that you run to ensure the database is available before connecting from SSMS.
When it is waiting for the response back from the service that it has started OK, there isn't a session available to KILL - so your only scope there would be to kill the client - i.e. use task manager to shut down SSMS.

When I start my SAP MMC EC6 server, one service does not go into wait mode

Can someone help me get the service selected in the image into wait mode after starting the server?
Please let me know if developer trace is required to be posted for resolving this issue.
That particular process is a BATCH process, i.e. a process that runs scheduled background tasks (maintained by transactions SM36/SM37). If the process is busy right after starting the server, that means there were scheduled tasks in status "released" waiting for execution, and as soon as the server was up, it started those tasks.
If you want to make sure the system doesn't immediately start released background tasks, you'll have to set the status back to scheduled (which, thanks to a bit of weird translation, means they won't be executed because they are not released).
If you want to start the server without having a chance to first change the job status in SM37, you would either have to reset the status at the database level (likely not officially supported by SAP), or first start the server without any BATCH processes (which would give you a number of great big warning messages upon login), change the job status, and then restart the server with the BATCH processes. You can set the number of processes for each type in the profile of your instance (parameter rdisp/wp_no_btc).
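As a sketch, the relevant instance profile entry might look like this (the value 0 disables all background work processes on that instance, so use it only temporarily):
# Instance profile entry (illustrative): number of background (BATCH) work processes
rdisp/wp_no_btc = 0
Restore the original value and restart once the job statuses have been corrected.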

How to check if SQL Server crashed

I'm building a fully automated process for my company that consists of two parts. First, a 3rd-party application kicks off a stored procedure at certain times per day. Second, the stored procedure controls kicking off other processes. The procedure is driven by a table listing the jobs to run for the day: if a job item's status is set to Queue, the procedure starts running that item and sets the status to Running. My problem is that if SQL Server crashes for some reason (a power outage, say), then when the 3rd-party application kicks off the stored procedure another day, there might be a job still marked Running that should have failed or been set back to Queue when the server crashed.
Is there a way in SQL where I can check if the server crashed during the time a process is being ran?
You could put your script and the 3rd-party application in a single SQL Server Agent job on the server; that may resolve the issue.
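Another hedged sketch, using table and column names that are pure assumptions: compare the server's last start time with when each job row was set to Running, and reset any rows that predate the restart before kicking off new work:
-- Sketch: reset jobs left in 'Running' from before the last SQL Server restart
-- (JobList, Status and StartedAt are hypothetical names)
DECLARE @server_start datetime =
    (SELECT sqlserver_start_time FROM sys.dm_os_sys_info);

UPDATE JobList
SET Status = 'Queue'
WHERE Status = 'Running'
  AND StartedAt < @server_start;
Running this at the top of the stored procedure would clean up any jobs orphaned by a crash.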

Run a SQL command in the event my connection is broken? (SQL Server)

Here's the sequence of events my hypothetical program makes...
Open a connection to server.
Run an UPDATE command.
Go off and do something that might take a significant amount of time.
Run another UPDATE that reverses the change in step 2.
Close connection.
But oh-no! During step 3, the machine running this program literally exploded. Other machines querying the same database will now think that the exploded machine is still working and doing something.
What I'd like to do, is just as the connection is opened, but before any changes have been made, tell the server that should this connection close for whatever reason, to run some SQL. That way, I can be sure that if something goes wrong, the closing update will run.
(To pre-empt the answer, I'm not looking for table/record locks or transactions. I'm not doing resource claims here.)
Many thanks, billpg.
I'm not sure there's anything built in, so I think you'll have to do some bespoke stuff...
This is totally hypothetical and straight off the top of my head, but:
Take the SPID of the connection you opened and store it in some table, along with the text of the reversal update.
Use a background process (either SSIS or something else) to monitor that table and check that the SPID is still present as an open connection.
If the connection dies, the background process can execute the stored revert command.
If the connection completes properly, the SPID can be removed from the table so that the background process no longer reverts it when the connection closes.
Comments or improvements welcome!
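A minimal T-SQL sketch of that idea, with the table and the revert statement invented purely for illustration:
-- Sketch: register this connection's revert statement (names are hypothetical)
INSERT INTO dbo.PendingReverts (spid, revert_sql)
VALUES (@@SPID, N'UPDATE dbo.Widgets SET locked_by = NULL WHERE locked_by = ''me''');

-- The background monitor would periodically run something like:
DECLARE @sql nvarchar(max);
SELECT @sql = r.revert_sql
FROM dbo.PendingReverts AS r
WHERE NOT EXISTS (SELECT 1 FROM sys.dm_exec_sessions AS s
                  WHERE s.session_id = r.spid);
EXEC sp_executesql @sql;  -- then delete the handled row
A real implementation would loop over all orphaned rows and guard against the monitor racing a normal completion.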
I'll expand on my comment. In general, I think you should reconsider your approach. All database access code should open a connection, execute a query then close the connection where you rely on connection pooling to mitigate the expense of opening lots of database connections.
If it is the case that we are talking about a single SQL command whose rows on which it operates should not change, that is a problem that should be handled by the transaction isolation level. For that you might investigate the Snapshot isolation level in SQL Server 2005+.
If we are talking about a series of queries that are part of a long running transaction, that is more complicated and can be handled via storage of a transaction state which other connections read in order to determine whether they can proceed. Going down this road, you need to provide users with tools where they can cancel a long running transaction that might no longer be applicable.
Assuming it's even possible... this will only help you if the client machine explodes during the transaction. Also, there's a risk of false positives - the connection might get dropped for a few seconds due to network noise.
The approach that I'd take is to start a process on another machine that periodically pings the first one to check if it's still on-line, then takes action if it becomes unreachable.

SSIS - Connection Management Within a Loop

I have the following SSIS package:
(screenshot of the SSIS package: http://www.freeimagehosting.net/uploads/5161bb571d.jpg)
The problem is that within the Foreach loop a connection is opened and closed for each iteration.
On running SQL Profiler I see a series of:
Audit Login
RPC:Completed
Audit Logout
The duration for the login and the RPC that actually does the work is minimal. However, the duration for the logout is significant, running into several seconds each. This causes the JOB to run very slowly - taking many hours. I get the same problem when running either on a test server or stand-alone laptop.
Could anyone please suggest how I may change the package to improve performance?
Also, I have noticed that when running the package from Visual Studio, it looks as though it continues to run (the component blocks go amber then green), but actually all the processing has completed and SQL Profiler has gone silent.
Thanks,
Rob.
Have you tried running your data flow task in parallel vs serial? You can most likely break up your for loops to enable you to run each 'set' in parallel, so while it might still be expensive to login/out, you will be doing it N times simultaneously.
SQL Server is most performant when running a batch of operations in a single query. Is it possible to redesign your package so that it batches updates in a single call, rather than having a procedural workflow with for-loops, as you have it here?
If the design of your application and the RPC permits (or can be refactored to permit it), this might be the best solution for performance.
For example, instead of something like:
for each Facility
    for each Stock
        update Qty
See if you can create a structure (using SQL, or a bulk update RPC with a single connection) like:
update Qty
from Qty join Stock join Facility
...
If you control the implementation of the RPC, the RPC could maintain the same API (if needed) by delegating to another which does the batch operation, but specifies a single-record restriction (where record=someRecord).
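A hedged sketch of such a set-based statement, with table and column names invented for illustration:
-- Sketch: one set-based update instead of per-row RPC calls
-- (StockQty, Stock, Facility and their columns are hypothetical)
UPDATE q
SET q.Qty = s.NewQty
FROM StockQty AS q
JOIN Stock AS s ON s.StockId = q.StockId
JOIN Facility AS f ON f.FacilityId = s.FacilityId
WHERE f.IsActive = 1;
One statement like this replaces the entire nested loop, and the login/logout overhead is paid once rather than per iteration.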
Have you tried doing the following?
In your connection managers for the connection that is used within the loop, right click and choose properties. In the properties for the connection, find "RetainSameConnection" and change it to True from the default of False. This will let your package maintain the connection throughout your package run. Your profiler would then probably look like:
Audit Login
RPC:Completed
RPC:Completed
RPC:Completed
RPC:Completed
RPC:Completed
RPC:Completed
...
Audit Logout
With the final Audit Logout happening at the end of package execution.