Setting a deadlock victim - SQL

We're using Sitecore 6.5, and each time we start publishing items, users browsing the website get server 500 errors that turn out to be:
Transaction (Process ID ##) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
How can we set up SQL Server to give priority to a specific application? We cannot modify any queries or code, so it has to be done via SQL Server (or the connection string).
I've seen "deadlock victim" in transaction, how to change the priority? and looked at http://msdn.microsoft.com/en-us/library/ms186736(v=SQL.105).aspx, but these seem to be per-session settings, not global ones.
I don't care whether it's a fix/change to Sitecore or a SQL solution.

I don't think you can set the deadlock priority globally; it's a session-only setting. There aren't any connection-string settings for it that I know of. The list of possible SqlConnection string settings can be found here.
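For reference, the session-level syntax looks like this (a minimal sketch; on SQL Server 2008 and later you can also use a numeric priority from -10 to 10):

-- Must be issued on each session; there is no server-wide or connection-string equivalent.
SET DEADLOCK_PRIORITY LOW;    -- this session volunteers to be the deadlock victim
-- ... run the publishing/maintenance work on this connection ...
SET DEADLOCK_PRIORITY NORMAL; -- back to the default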

It sounds to me like you're actually having a problem with the cache: every time you publish, the cache is cleared, and all the calls made at the same time to rebuild it end up deadlocking. I haven't seen this sort of thing happen with 6.5, so you might also want to look into your caching. It would help a lot to check your Sitecore logs and see whether this is happening when caches are being created. Either way, check the caching guide on the SDN and see if that helps.

Related

RavenDB taking forever to show updates

I'm starting to assess using RavenDB at our company for storing some things that don't really belong in a relational database (we're traditionally a SQL Server shop). I installed RavenDB locally on my machine, created a database, and added a document. Nice!
Being a DBA, I decided to see how backups/restores work. I backed up my database, deleted it, then restored it from the backup. After refreshing my admin screen, I saw my database. I clicked on it, and got a message that the database doesn't exist.
After a couple of hours, I tried again. It still didn't exist. A full day later, I walked into work and tried again. This time the database worked. I've had similar situations with updating documents: an update seems to take anywhere from one second to several hours to show up...
Is this normal for RavenDB?? Am I completely misconfigured?? I run SQL Server on my local machine and it's lightning fast, so I can't imagine updating a single document could take that long. As it stands, I can't imagine recommending we use RavenDB for anything.
Are you querying using indexes or getting documents by ID? Documents should be updated immediately (ACID). If indexes are slow to update (check their status using RavenDB Studio), it could be a configuration problem, or something external like anti-virus software could be causing them to update slowly.
Apparently, at least for the document-update latency, query caching is enabled by default, so I was getting cached results.
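If the query cache is what's biting you, here is a hedged sketch of how to bypass it with the RavenDB 2.x/3.x .NET client (LogEntry is a made-up document type, store is your initialized DocumentStore, and the exact customization names can differ between client versions):

// using System.Linq; using Raven.Client;
// Skip the client-side HTTP cache for one query:
using (var session = store.OpenSession())
{
    var fresh = session.Query<LogEntry>()
        .Customize(x => x.NoCaching())
        .ToList();
}

// Or turn the client cache off entirely:
store.MaxNumberOfCachedRequests = 0;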
Jeffery,
No, that isn't normal by a long shot. You should be able to see what was changed immediately.
Note that certain AV products will interfere with the HTTP pipeline and can affect RavenDB's usage. The Studio also auto-updates only every 5 seconds (to reduce UI jitter), but that is about it.
Restoring a database (from the same machine) should take only as long as it takes to copy the files (a purely I/O-bound operation).
If this is from another machine using a different version of Windows, we might need to run a check on the file, which can take a bit of time, but that doesn't sound like your scenario.

Disable DTC tracking in RavenDB with Voron storage

We are experimenting with using RavenDB as a log target within our current NLog usage. We created a new RavenDB server and a database using Voron as the storage engine. Then it was just a matter of updating our NLog configurations, which we did.
For a while, everything was great, but then we ran into a situation where we are calling Trace() within a database transaction. With RavenDB set up as the log target, this means the SaveChanges() call is made within the transaction as well.
Everything is great until we call Transaction.Commit(). RavenDB throws a 500 server error at that point, because DTC is not supported with the Voron engine (and is slated to be removed everywhere, from what I understand). I'm fine with this; I don't particularly care. Writing to the log should not be part of the transaction anyway, because if something does go wrong, we don't want the related log entries rolled back with it.
I searched the documentation hoping to find a configuration option I could set that would just tell RavenDB to ignore transactions, or at least ignore DTC, but I can't find anything. Is there any way I can get it to stop throwing these 500 server errors?
Set EnlistInDistributedTransactions = false on the document store, and it will work for you.
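A minimal sketch of that setting, assuming the RavenDB 3.x client (the URL and database name are placeholders):

// using Raven.Client.Document;
var store = new DocumentStore
{
    Url = "http://localhost:8080",            // placeholder server URL
    DefaultDatabase = "Logs",                 // placeholder database name
    EnlistInDistributedTransactions = false   // don't enlist in the ambient DTC transaction
};
store.Initialize();

With this off, SaveChanges() commits on its own, so the log writes survive even if the surrounding transaction is rolled back.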

Single user mode "missing behavior" on SQL Server

I ran a script provided by a Microsoft employee to find out which indexes need a rebuild/reorganize based on their average fragmentation. I got back a reasonable list, but while trying to rebuild some of them on a specific database I kept receiving errors.
My first idea was to set the database to single-user mode, rebuild the indexes, and then bring it back to life. Well, that did not help, because the database is being populated by a Windows service that, ironically, uses the same account I am connected with, which is also the only account available to me with enough permissions for the job. I work in a corporate environment, so the moon is a bit closer than getting another user's credentials. I also cannot stop the service while executing my tasks, because it is used for many other things.
My question is simple: how can I force single-user mode to allow connections from a single source only? In other words, how can I hide the database, or even the whole SQL Server, from the service? The service will treat the absence as a network issue, so I don't have to worry about that part.
I found a good solution that might help others. I start by getting the list of sessions holding locks on the current table using:
USE [Your DB Name]
SELECT REQUEST_MODE, REQUEST_TYPE, REQUEST_SESSION_ID
FROM sys.dm_tran_locks
WHERE RESOURCE_TYPE = 'OBJECT'
AND RESOURCE_ASSOCIATED_ENTITY_ID = OBJECT_ID('YourTableName')
The REQUEST_SESSION_ID is the ID of the session holding the lock on the table. Then I run EXEC sp_who2 to make sure that the SPID belongs to the expected service. All I needed to do at the end was KILL <SPID> and rebuild the index. You might need to do this multiple times if you are rebuilding more than one index, as the lock can be taken again.
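The tail end of that workflow looks roughly like this (53 is a made-up example SPID, and the index name is a placeholder):

EXEC sp_who2;    -- confirm the SPID really belongs to the service
KILL 53;         -- terminate that session; its open transaction is rolled back
ALTER INDEX IX_YourIndex ON dbo.YourTableName REBUILD;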
There is an ONLINE = ON/OFF option available when rebuilding indexes in SQL Server 2005 and above, which controls whether users can access the underlying table during the rebuild; it may solve your problem.
http://msdn.microsoft.com/en-us/library/ms188388(v=sql.110).aspx
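For example (a sketch with placeholder names; note that ONLINE = ON requires Enterprise Edition):

ALTER INDEX ALL ON dbo.YourTableName
REBUILD WITH (ONLINE = ON);  -- users can keep reading and writing the table during the rebuild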
Your problem is that the interface will only wait a certain amount of time before deciding to fail. I run into this all the time.
You can try scripting the change and then running it manually; this will allow you to just wait until all of the locks are released by the users currently using the index. You will have to be careful, though: an index rebuild locks the index for the time it is running (unless, of course, you have Enterprise Edition, where rebuilds are online and everything is made of money).

SQL Server - Timed Out Exception

We are facing a SQL timeout issue, and I found that the error event ID is either 5586 or 3355 (unable to connect / network issue). I could also see a few other DB-related error event IDs (3351 & 3760, permission issues) reported at different times.
What could be the reason? Any help would be appreciated.
Can you elaborate a little? When is this happening? Can you reproduce the behavior or is it sporadic?
It appears SharePoint is involved. Is it possible there is high demand for a large file?
You should check for blocking/locking that might be preventing your query from completing. Also, if you have lots of computed/calculated columns (or just LOTS of data), your query may take a long time to compute.
Finally, if you can't find anything blocking your result and can't optimize your query, it's possible to increase the timeout duration (set it to "0" for no timeout). Do this in Enterprise Manager under the server or database settings.
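If you go the server-settings route, I believe the T-SQL behind that Enterprise Manager option is the following (an assumption on my part; also note that the "timeout expired" most clients see is the application's own command timeout, set in the application rather than on the server):

EXEC sp_configure 'remote query timeout', 0;  -- 0 = no timeout
RECONFIGURE;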
Troubleshooting Kerberos Errors. It never fails.
Are some of your web apps running under either the Local Service or Network Service account? If so, and if your databases are not on the same machine (i.e. SharePoint is on machine A and SQL on machine B), authentication will fail for some tasks (e.g. timer-job-related actions) but not all. For instance, it seems content databases are still accessible (weird, I know, but I've seen it happen...).

Stored Procedure failing on a specific user

I have a Stored Procedure that is constantly failing with the error message "Timeout expired" for a specific user.
All other users are able to invoke the SP just fine, and even I am able to invoke it normally using Query Analyzer; it finishes in just 10 seconds. However, with the user in question, the logs show that the ASP page always hangs for about 5 minutes and then aborts with a timeout.
I invoke it from the ASP page like so: EXEC SP_TV_GET_CLOSED_BANKS_BY_USERS '006111'
Does anybody know how to diagnose the problem? I have already looked for deadlocks in the DB, but didn't find any.
Thanks,
Some thoughts...
Reading the comments suggests that parameter sniffing is causing the issue.
For the other users, the cached plan is good enough for the parameter that they send
For this user, the cached plan is probably wrong
This could happen if this user has far more rows than other users, or has rows in another table (so a different table/index seek/scan would be better)
To test for parameter sniffing:
Use RECOMPILE (temporarily) on the call or in the procedure definition; the call-site form is sketched after this list. This could be slow for a complex query.
Rebuild the indexes (or just the statistics) after the timeout and try again; this invalidates all cached plans.
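For the first test, the call-site form looks like this (using the procedure and parameter value from the question; @UserId in the definition sketch is a guessed parameter name):

EXEC SP_TV_GET_CLOSED_BANKS_BY_USERS '006111' WITH RECOMPILE;

-- or, in the definition:
-- ALTER PROCEDURE SP_TV_GET_CLOSED_BANKS_BY_USERS @UserId varchar(10)
-- WITH RECOMPILE
-- AS ...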
To fix:
Mask the parameter:
DECLARE @MaskedParam varchar(10)
SELECT @MaskedParam = @SignatureParam
SELECT ... WHERE column = @MaskedParam
Just google "parameter sniffing" and "parameter masking".
I think, to answer your question, we may need a bit more information.
For example, are you using Active Directory to authenticate your users? Have you used SQL Profiler to investigate? It sounds like it could be an auth issue, where SQL Server is having problems authenticating this particular user.
Sounds to me like a deadlock issue.
Also, make sure this user has execute and read rights in SQL Server.
But if information is being written at the same time it's being read, you will deadlock, as the transaction has not yet been committed.
Jeff wrote a great post about his experience with deadlocks on Stack Overflow:
http://www.codinghorror.com/blog/archives/001166.html
Couple of things to check:
Does this happen only on that specific user's machine? Can he try it from another machine? It might be a client configuration problem.
Can you capture the actual string that this specific user runs and run it from an ASP page? It might be that this user executes the SP in a way that generates either a loop or a massive load of data.
Finally, if you're using an intra-organization application, it might be that your particular user's permissions differ from the others'. You can compare them at the Active Directory level.
Now, I could recommend a commercial product that will definitely solve your issue: it records end-to-end transactions and analyzes particular failures. But I do not want to advertise in this forum. If you'd like, drop me a note and I'll explain more.
Well, I could suggest that you use SQL Server Profiler and open a new session. Invoke your stored procedure from your ASP page and see what is happening. While this may not solve your problem, it can surely provide a starting point for you to carry out some 'investigation' of your own.