Default SQL Server IsolationLevel Changes - sql-server-2005

We have a customer that has been experiencing some blocking issues with our database application. We asked them to run a Blocked Process Report trace, and the trace they gave us shows blocking occurring between a SELECT and an UPDATE operation. The trace files show the following:
The same SELECT query is being executed at different isolation levels. One trace shows a Serializable IsolationLevel while a later trace shows a RepeatableRead IsolationLevel. We do not use an explicit transaction while executing the query.
The UPDATE query is being executed with a RepeatableRead isolation level but is being blocked by the SELECT query. This is expected as our updates are wrapped in an explicit transaction with IsolationLevel of RepeatableRead.
So we're at a loss as to why the IsolationLevel of the SELECT query would not be the default ReadCommitted and, even more confusingly, why the IsolationLevel of the query would change over time. Only one customer is seeing this behaviour, so we suspect it may be a database configuration issue.
Any ideas?
Thanks in advance,
Graham

In your scenario, I would recommend explicitly setting the isolation level to snapshot - that will prevent reads from getting in the way of writes (inserts and updates) by preventing locks, yet those reads would still be "good" reads (i.e. not dirty data - it is not the same as NOLOCK).
Generally I find that where I have locking issues with my queries, I manually control the locks applied. E.g. I would do updates with row-level locks to avoid page/table-level locking, and set my reads to READPAST (accepting that I may miss some data; in some scenarios that might be OK).
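To illustrate, a minimal T-SQL sketch of those hints (the table and column names are made up for the example):

UPDATE dbo.Orders WITH (ROWLOCK)
SET Status = 'Shipped'
WHERE OrderId = 42;   -- ask for row-level locks rather than page/table locks

SELECT OrderId, Status
FROM dbo.Orders WITH (READPAST)   -- skip rows that are currently locked
WHERE Status = 'Pending';

Note that ROWLOCK is only a request - SQL Server may still escalate if the statement touches enough rows - and READPAST is what makes it possible to miss data, as mentioned above.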
EDIT-- Combining all the comments into the answer
As part of the optimisation process, SQL Server avoids taking committed reads on a page that it knows hasn't changed, and automatically falls back to a lesser locking strategy. In your case, SQL Server drops from a serializable read to a repeatable read.
Q: Thanks for that useful info regarding dropping Isolation Levels. Can you think of any reason that it would use the Serializable IsolationLevel in the first place, given that we don't use an explicit transaction for the SELECT? It was our understanding that the implicit transaction would use ReadCommitted.
A: By default, SQL Server will use Read Committed if that is your default isolation level, BUT if you do not additionally specify a locking strategy in your query, you are basically saying to SQL Server "do what you think is best, but my preference is Read Committed". Since SQL Server is free to choose, it does so in order to optimise the query. (The optimisation algorithm in SQL Server is very complex and I do not fully understand it myself.) Not explicitly executing within a transaction does not, as far as I know, affect the isolation level that SQL Server uses.
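As an aside, one way to see which isolation level each session is actually running under is the sessions DMV (a small diagnostic sketch; available on SQL Server 2005 and later):

SELECT session_id, transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;
-- transaction_isolation_level: 0 = Unspecified, 1 = ReadUncommitted,
-- 2 = ReadCommitted, 3 = RepeatableRead, 4 = Serializable, 5 = Snapshot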
Q: One last thing, does it seem reasonable that SQL Server would increase the Isolation Level (and presumably the number of locks required) to optimise the query? I'm also wondering whether the reuse of a pooled connection would affect this if it inherited the last used Isolation Level?
A: SQL Server will do that as part of a process called "Lock Escalation". From http://support.microsoft.com/kb/323630, I quote: "Microsoft SQL Server dynamically determines when to perform lock escalation. When making this decision, SQL Server takes into account the number of locks that are held on a particular scan, the number of locks that are held by the whole transaction, and the memory that is being used for locks in the system as a whole. Typically, SQL Server's default behavior results in lock escalation occurring only at those points where it would improve performance or when you must reduce excessive system lock memory to a more reasonable level. However, some application or query designs may trigger lock escalation at a time when it is not desirable, and the escalated table lock may block other users".
Although lock escalation is not exactly the same thing as changing the isolation level a query runs under, this surprises me because I would not have expected SQL Server to take more locks than the default isolation level permits.

More info regarding why SQL Server would take more locks by escalating: this is incorrect; escalating reduces (not increases) the number of locks required. A table lock is a single lock vs. all the page or row locks required to do the same at a lower level. Lock escalation is always done for one reason: it's more efficient to take a higher-level lock than to lock all the lower-level objects.
For example, perhaps there is no index available to lock efficiently against. I.e. if you take a count with UPDLOCK of all records with a year of 2010 in a field, and there is no index on that date field, this will require a row lock on each record in 2010, which is not efficient if many records are hit; a page lock will not help either, since the records are presumably distributed randomly across pages, so SQL Server takes a table lock. Moreover, SQL Server MUST also prevent other records from changing to being in the year 2010 while the UPDLOCK is held, and with no index on this field to take a range lock on, SQL Server has NO CHOICE but to take a table lock to prevent this from happening. This latter point is one often missed by those new to optimization: the realization that SQL Server must also "protect" the integrity of the queries already executed in the transaction.
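A hedged sketch of that scenario (the Orders table, OrderYear column and the surrounding work are hypothetical; HOLDLOCK is added to make the predicate protection explicit):

BEGIN TRANSACTION;
-- With no index on OrderYear there is no key range to lock, so holding an
-- update lock on every qualifying row (and protecting the predicate from
-- new rows entering 2010) tends to escalate to a single table lock.
SELECT COUNT(*)
FROM dbo.Orders WITH (UPDLOCK, HOLDLOCK)
WHERE OrderYear = 2010;
-- ...further work that relies on that count staying true...
COMMIT TRANSACTION;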

Related

Deadlock in SQL Server 2008 | INSERT (from application, EF) & SELECT (from stored procedure) statement working simultaneously

The program that inserts into my 2 tables is written using Entity Framework, and the data is SELECTed through a stored proc at the SQL Server level. There is a point when the SELECT and the INSERT run at the same time, and when hitting that point I get the below error:
Transaction (Process ID) was deadlocked on resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
How can I get rid of this DEADLOCK problem here? I need the best way to solve it.
Option 1: Implementing NOLOCK? What would be the PROS and CONS of it here?
Option 2: Is there any way to extend the DEADLOCK wait time so that it can wait for the resource longer than it usually does? If yes, then how?
Option 3: Any other suggestions?
Thanks,
Rahuul Dutta
A deadlock cannot be cured by increasing the lock timeout. The resources are locked in such a way that the situation cannot resolve itself, regardless of how much time you give it. A special background process in SQL Server, the deadlock monitor, runs periodically (rather often, actually), and if it identifies a deadlock it kills the 'lighter' transaction immediately.
Deadlocks are usually dealt with in one of several ways: by providing an alternative data access path for the SELECT query (i.e. adding a nonclustered index), by minimizing the transaction duration (by better indexing, again), or by using one of the snapshot isolation levels.
The least-effort solution here will be setting the read committed snapshot isolation level. This way the SELECT query will not issue any shared locks on data, but will still read only committed data, which is a huge plus over using the NOLOCK hint (or the read uncommitted isolation level).
You can change your transaction isolation level. The best option for deadlocks would be snapshot isolation, I think. If you cannot turn this option on in your server, or if you run into I/O issues, read committed snapshot should still prevent deadlocks arising from read/write dependencies. Make sure that you don't run into anomalies: read committed will allow non-repeatable reads and phantom reads.
First of all, thanks a lot for your precious answers!
With the help of your answers, some research and a call with Microsoft DBA team, I have got the following solution.
Solution: To implement this solution we have to change the database property to Read Committed Snapshot. This will help the SELECT statements avoid being blocked when other sessions hold locks on the same table.
- To support this solution the database will create a snapshot of the data in tempdb. Therefore we must have sufficient space in tempdb. Also, if possible, we should move tempdb to a separate disk to split the I/O; this will improve performance.
The following kb article helps in enabling Read Committed Snapshot property of the database:
http://technet.microsoft.com/en-us/library/ms175095(v=SQL.105).aspx
Alternatively, we can change this property through SSMS by right-clicking the database > Options > Miscellaneous > Is Read Committed Snapshot On. We have to change the value of this property to TRUE.
We do not have to restart the server to enable this property; however, we must note that 'When setting the READ_COMMITTED_SNAPSHOT option, only the connection executing the ALTER DATABASE command is allowed in the database. There must be no other open connection in the database until ALTER DATABASE is complete. The database does not have to be in single-user mode.'
This means we need a small amount of downtime from the application side.
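For reference, a minimal T-SQL sketch of the same change (the database name is a placeholder):

ALTER DATABASE YourDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;   -- optionally roll back and disconnect other sessions instead of waiting

-- Verify the setting afterwards:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'YourDatabase';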
Hope the MOM above would help you all too. :)
Thanks,
Rahuul Dutta

Could a query with READ UNCOMMITTED isolation level cause locks on the tables it accesses?

My app needs to batch process 10M rows, the result of a complex SQL query that joins tables.
I plan to iterate over the result set, reading a hundred rows per iteration.
To run this on a busy OLTP production DB and avoid locks, I figured I'd query with a READ UNCOMMITTED isolation level.
Would that keep the query out of the way of any DB writes, avoiding any row/table locks?
My main concern is my query blocking any other DB activity, I'm far less concerned with the other way around.
Side Notes:
1. I'll be reading historical data, so I'm unlikely to encounter uncommitted data. It's OK if I do.
2. The iteration process could take hours. The DB connection would remain open through this process.
3. I'll have two such concurrent batch instances at most.
4. I can tolerate duplicate rows (a by-product of read uncommitted).
5. DB2 is the target DB, but I want a solution that fits other DBs vendors as well.
6. Will snapshot isolation level help me clear out server memory?
Have you actually encountered any real locks on read?
As far as I'm concerned, the only reason READ UNCOMMITTED exists in the SQL standard is to allow non-locking reads. I don't know DB2, but I would blindly bet that it does not lock data during reads in READ UNCOMMITTED mode. Most modern RDBMS systems, however, don't lock data at all during reads (*). So READ UNCOMMITTED is either not available (in Oracle, for example) or is silently promoted to READ COMMITTED (PostgreSQL).
If you can freely choose the engine, either check DB2 transaction isolation level handling or go for Oracle/PostgreSQL/other.
(*) More precisely, they don't exclusively lock the data. Some shared locks can be placed on queried tables so no DDL alters them during read.
My answer applies to SQL Server.
Read committed releases locks after every row read (approximately). Locking is probably not your problem.
I recommend you use the safer READ COMMITTED. Better yet, use snapshot isolation. That removes many locking problems. There are disadvantages as well, so you'd better read a little about it.
My main concern is my query blocking any other DB activity
Snapshot isolation makes all locking concerns go away for read-only transactions. No blocking either way, full data consistency. Be aware that long-running transactions can cause TempDB to fill with snapshot versions.
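For SQL Server, a minimal sketch of what a snapshot-isolated batch read might look like (the database, table and column names are placeholders):

ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
-- Reads a consistent, committed snapshot without taking shared locks;
-- row versions are kept in tempdb for as long as the transaction runs.
SELECT Id, Payload
FROM dbo.HistoryTable
WHERE EventDate < '2010-01-01';
COMMIT TRANSACTION;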
The DB connection would remain open through this process.
That's a problem because a network hiccup, app deployment or mirroring failover would kill your batch process.
Be aware that read uncommitted can cause queries to sporadically fail outright. You need retry logic, or you have to tolerate failed jobs.
In SQL Server, the READ UNCOMMITTED transaction isolation level causes no shared locks to be taken on the table.

How to prevent locks/delays when querying the same tables from different processes?

I have an issue where people can't work with an intranet application that queries the same tables I'm querying from SQL Server Management Studio. They must wait until my query finishes, which can take ~10 minutes.
I've already reduced the deadlock-priority of SSMS but that seems to have no effect on the delays now.
I would think it should be possible for SQL Server to handle both queries in parallel, or at least to have an option to reduce SSMS's priority. So is there any way/option to ensure that some processes, like w3wp.exe, get high priority and other "internal" (SSMS-related) queries get low priority in processor time? The server wasn't busy at all, only 6% CPU was used, so why does this happen? Also, why does SQL Server seem to lock tables that are queried without any changes (updates/deletes)? Can I avoid that?
Thank you in advance.
It's possible that you could reduce the transaction isolation level for some/all of the read-only operations being performed by either or both of the intranet application and your SSMS queries. But if you do this, you have to be wary of such things as dirty reads (where you read one or more rows from a table and these rows later turn out not to have been committed by their owning transaction).
There are no priority level settings for connections within SQL Server (other than, as you've noted, volunteering to be the deadlock victim). OS level settings (e.g. process priorities) will have no effect on SQL Server - all it cares about are locks, and whether these locks are compatible between different connections.
You can use the SET TRANSACTION ISOLATION LEVEL statement to change your isolation at the connection level, or you can use locking hints (such as WITH NOLOCK) on individual tables within statements to have more granular control over what locks are being taken.
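For example (the table name is a placeholder), the two approaches look like this:

-- Connection level: affects all subsequent statements on this connection.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.Orders;

-- Statement level: a hint on one table in one query.
SELECT * FROM dbo.Orders WITH (NOLOCK);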
Note, though, that if you're running DML statements (e.g. INSERT or DELETE), these will still need to take exclusive locks, so if your intranet application wishes to query the same tables, it has to wait for the DML statement to complete, or it has to be modified to relax its isolation. There's no way to specify the behaviour of other connections from your own queries - they have to choose their own isolation settings.

Serializable isolation level atomicity

I have several threads executing some SQL select queries with the serializable isolation level. I am not sure which implementation to choose. This:
_repository.Select(...)
or this
lock (_lockObject)
{
    _repository.Select(...);
}
In other words, is it possible that several transactions will start executing at the same time and partially block records within the Select operation's range?
P. S. I am using MySQL but I guess it is a more general question.
Transactions performing SELECT queries under serializable isolation place a shared lock on the rows they read, permitting other transactions to read those rows but preventing them from making changes to the rows (including inserting new records into the gaps).
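As a rough MySQL/InnoDB sketch (the table and column are hypothetical), under SERIALIZABLE InnoDB implicitly turns plain SELECTs into locking reads:

SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
-- Executed as if it were SELECT ... LOCK IN SHARE MODE: the matching rows and
-- the index gaps around them stay share-locked until COMMIT, so other
-- transactions can read them but cannot modify or insert into that range.
SELECT * FROM orders WHERE customer_id = 42;
COMMIT;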
Locking in the application does something else: it will not allow other threads to enter the code block that fetches the data from the repository. This approach can lead to very bad performance for a few reasons:
If any of the rows are locked by another transaction (outside the application) via an exclusive lock, the lock in the application will not help.
Multiple transactions will not be able to perform reads even on rows that are not locked in exclusive mode (not being updated).
The lock will not be released until all the data is fetched and returned to the client. This includes the network latency and any other overhead of converting the MySQL result set into an object in code.
Most importantly, enforcing data integrity and atomicity is the database's job; it knows how to handle it very well: how to detect potential deadlocks, when to take record locks, and when to add index gap locks. This is what databases are for, and MySQL is ACID compliant and proven to handle these situations.
I suggest you read through Section 13.2.8, The InnoDB Transaction Model and Locking, of the MySQL docs; it will give you great insight into how locking in InnoDB is performed.

Regarding SQL Server Locking Mechanism

I would like to ask a couple of questions regarding the SQL Server locking mechanism:
1. If I am not using a lock hint with a SQL statement, does SQL Server use page-level locking by default? If yes, then why? Maybe it is because of the overhead of managing too many locks; that is the only drawback I can think of, but please let me know if there are others. Also, can we change this default behaviour, and is it reasonable to do so?
2. I am writing a server-side application, a sync server (not using the Sync Framework), and I have written the database queries in a C# code file, using an ODBC connection to execute them. Now the question is: what is the best way to change the default locking from page to row, keeping the drawbacks in mind (e.g. adding lock hints to the queries, which is what I am planning to do)?
3. If a SQL query (SELECT/DML) is executed outside the scope of a transaction and the statement contains a lock hint, what kind of lock will be acquired (e.g. shared, update, exclusive)? And within a transaction scope, does the transaction's isolation level have any impact on the lock type if the ROWLOCK hint is being used?
4. Lastly, it would help if someone could give me a sample so I could test and experience all the above scenarios myself (e.g. .NET code or a SQL script).
Thanks
Mubashar
1. No. It locks as it sees fit and escalates locks as needed.
2. Let the DB engine manage it.
3. See point 2.
4. See point 2.
I'd only use lock hints if you want specific and certain behaviours, e.g. queues or non-blocking (dirty) reads.
More generally, why do you think the DB engine can't do what you want by default?
The default locking is row locks, not page locks, although the way the locking mechanism works means you will be placing locks on all the objects within the hierarchy, e.g. reading a single row will place an intent shared lock on the table, an intent shared lock on the page and then a shared lock on the row.
This enables an action requesting an exclusive lock on the table to know it may not take it yet, since an intent lock is present (otherwise it would have to check every page / row for locks).
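If you want to observe that hierarchy yourself, a small diagnostic sketch (the table name and key value are hypothetical):

BEGIN TRANSACTION;
-- The REPEATABLEREAD hint keeps the shared locks until commit so they stay visible.
SELECT * FROM dbo.Orders WITH (REPEATABLEREAD) WHERE OrderId = 1;

-- Expect an intent shared (IS) lock on the OBJECT and PAGE resources and a
-- shared (S) lock on the KEY (or RID for a heap).
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;

COMMIT TRANSACTION;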
If you issue too many locks for an individual query, however, it performs lock escalation, which reduces the granularity of the lock, so that it is managing fewer locks.
This can be turned off using a trace flag but I wouldn't consider it.
Until you know you actually have a locking / lock escalation issue, you risk prematurely optimizing a non-existent problem.