What should I expect from changing the default transaction isolation level from READ_COMMITTED_SNAPSHOT to READ_COMMITTED?

In SQL Server the default isolation level is READ_COMMITTED, but in SQL Azure the default level is READ_COMMITTED_SNAPSHOT.
Suppose I change the default level in my SQL Azure database to READ_COMMITTED (using ALTER DATABASE ... SET READ_COMMITTED_SNAPSHOT OFF) so that it behaves like SQL Server.
What negative consequences should I expect?

Your application logic may break. Really, it depends a lot on what you're doing. Overall, some pointers:
True SNAPSHOT has far fewer 'surprises' than RCSI. Because the row versions a true SNAPSHOT transaction sees are clearly defined as those current at the moment the transaction started, it does not suffer from the RCSI issue of seeing different row versions inside the same transaction (which leads to very subtle and difficult-to-understand bugs).
You will get update conflicts instead of deadlocks, but not an exact 'instead of'. There are some differences, and the app definitely may not expect the new error code 3960.
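For example, a caller that only retries on deadlock error 1205 will not recognize 3960. A minimal retry sketch (the table dbo.Accounts and its columns are made up for illustration; assumes SQL Server 2012+ or SQL Azure for THROW):

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

DECLARE @retries int = 0;
WHILE @retries < 3
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
        COMMIT TRANSACTION;
        BREAK;  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 3960  -- snapshot update conflict
            SET @retries = @retries + 1;  -- retry the whole transaction
        ELSE
            THROW;  -- anything else is a real failure
    END CATCH;
END;
-- (a real application would report failure once the retries are exhausted)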
I would recommend going over Implementing Snapshot or Read Committed Snapshot Isolation in SQL Server: A Guide.

Related

SQL Server 2005 Isolation level changes on service restart, how to stop it?

Whenever I change the isolation level from read committed to read uncommitted and then restart the SQL Server (2005) service, it resets the isolation level back to read committed.
Is there a way to stop this from happening (the value changing back on restart of SQL Server)?
Isolation level can only be set within a connection; simply reconnecting will change it back to the default. An application should always explicitly set the desired isolation level if it is not happy with the default. The default cannot be changed.
That being said, read uncommitted is never a good isolation level, because it produces inconsistent results. An application abusing the read uncommitted isolation level is usually an indication of a data-access problem (like missing indexes leading to table scans).
Isolation level is set per session, not per server or database.
You therefore must declare your isolation level every time you make a connection to the server.
More reading: Customizing Transaction Isolation Level.
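For example, an application that really does want a non-default level would issue something like this on every connection it opens (a sketch; the verification query is optional):

-- Re-issue on every new connection; the setting is per session and
-- reverts to the default (READ COMMITTED) after a reconnect.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Check what the current session is actually using:
-- 1 = ReadUncommitted, 2 = ReadCommitted, 3 = RepeatableRead, 4 = Serializable, 5 = Snapshot
SELECT transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;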

Isolation level and dirty reads in SQL Server 2005 (advanced question)

I will describe my problem for an easier explanation:
I have a table, and my software accesses it (update, insert) using transactions.
The problem is that I want to enable dirty reads on this table, but I can't use WITH (NOLOCK) in my SQL statements because I can't change the software's source. So I was thinking of enabling dirty reads in the SQL process that begins the transaction.
It seems that SET TRANSACTION ISOLATION LEVEL ... and WITH (NOLOCK) have to be executed in the statements that access the locked table... that's what I'm trying to avoid. I want to enable dirty reads in the statement that begins the transaction...
Thanks in advance!
There is no point in changing the isolation level of your writes (inserts or updates): writes always take exclusive locks on anything they modify, period. What you can change is the isolation level of your reads, your SELECT statements.
Dirty reads are never necessary. 99% of the time they are an indication of bad schema and query design that results in end-to-end scans, which are guaranteed to block on locked rows. The solution is to fix the schema and add the indexes needed to avoid the scans, as sketched below. This does not require source changes.
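As a hypothetical illustration (table and column names invented), a narrow covering index can turn a blocking table scan into a seek with no application changes:

-- Readers filter on Status and fetch only OrderDate; without an index on
-- Status they scan the whole table and stumble over rows locked by writers.
CREATE NONCLUSTERED INDEX IX_Orders_Status
    ON dbo.Orders (Status)
    INCLUDE (OrderDate);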
For the rare cases when contention is indeed unavoidable and the schema is correctly designed, the answer is never to enable dirty reads, but to turn to snapshot isolation:
ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE ... SET READ_COMMITTED_SNAPSHOT ON;
For the few deployments where the overhead of the row versioning introduced by snapshot isolation is actually visible, the teams running them have professionals at their disposal to alleviate the problem.

SQL Server 2005 becomes blocked with no locked or locking processes

We have a database (let's call it database A) which becomes unusable every few days, and we have to restart it. By unusable I mean that all applications using it just block, waiting for the database to respond, but it never does.
By luck it was noticed that executing a SELECT statement against a specific table in SQL Server Management Studio seems to bring back some records, but at some point it blocks.
The odd thing is that there are no LOCKED or LOCKING processes on the specific database. I found out that the application uses the following transaction isolation setting:
ALLOW_SNAPSHOT_ISOLATION ON
which explains why we can't see locked or locking processes, right?
We have another database (let's call it database B) which has the same schema, and we have never had this issue with it. The only difference between the two databases is the isolation setting I mentioned earlier: database B uses the default transaction isolation, and we never see this odd blocking. Database A, however, opens far more transactions per day. So what I can think of is that SNAPSHOT ISOLATION should perhaps be avoided with this many concurrent transactions.
Can someone confirm that it is most probably the SNAPSHOT ISOLATION causing the problems?
I mean, we have no locks, and we just have a database blocking with no actual exceptions or anything else that would help us find the root cause of the problem.
Are my assumptions right? I surely hope so.
Have you tried monitoring your tempdb usage? (AFAIK, ALLOW_SNAPSHOT_ISOLATION ON relies heavily on tempdb, which isn't the case for the standard locking strategies.)
This MS TechNet page gives some tips on how to do this (see the section 'Monitoring space').
You can also use this quick query to check that your tempdb isn't full:
USE tempdb;
GO
EXEC sp_spaceused;
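To see specifically whether the row version store is what is eating tempdb, a query along these lines against the standard DMV should help (a sketch; counts are 8 KB pages, hence the * 8 to get KB):

-- Version store pages vs. total tempdb file space, reported in KB
SELECT
    SUM(version_store_reserved_page_count) * 8 AS version_store_kb,
    SUM(total_page_count) * 8                  AS total_tempdb_kb
FROM tempdb.sys.dm_db_file_space_usage;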

Default SQL Server IsolationLevel Changes

We have a customer that's been experiencing some blocking issues with our database application. We asked them to run a Blocked Process Report trace, and the trace they gave us shows blocking occurring between a SELECT and an UPDATE operation. The trace files show the following:
The same SELECT query is being executed at different isolation levels. One trace shows a Serializable IsolationLevel while a later trace shows a RepeatableRead IsolationLevel. We do not use an explicit transaction while executing the query.
The UPDATE query is being executed with a RepeatableRead isolation level but is being blocked by the SELECT query. This is expected as our updates are wrapped in an explicit transaction with IsolationLevel of RepeatableRead.
So basically we're at a loss as to why the isolation level of the SELECT query would not be the default ReadCommitted and, even more confusingly, why the isolation level of the query would change over time. Only one customer is seeing this behaviour, so we suspect it may be a database configuration issue.
Any ideas?
Thanks in advance,
Graham
In your scenario, I would recommend explicitly setting the isolation level to snapshot - that will prevent reads from getting in the way of writes (inserts and updates) by avoiding locks, yet those reads would still be "good" reads (i.e. not dirty data - it is not the same as NOLOCK).
Generally I find that where I have locking issues with my queries, I manually control the locks applied; e.g. I would do updates with row-level locks to avoid page/table-level locking, and set my reads to READPAST (accepting that I may miss some data; in some scenarios that might be OK). See the sketch below.
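A sketch of both hints (dbo.Orders and its columns are made up; ROWLOCK and READPAST are documented table hints):

-- Ask for row-granularity locks on the write instead of page/table locks
UPDATE dbo.Orders WITH (ROWLOCK)
SET Status = 'Shipped'
WHERE OrderId = 42;

-- Skip rows that are currently locked instead of waiting on them
-- (only acceptable when missing in-flight rows is OK)
SELECT OrderId, Status
FROM dbo.Orders WITH (READPAST);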
EDIT: Combining all the comments into the answer.
As part of the optimisation process, SQL Server avoids taking read-committed locks on a page that it knows hasn't changed, and automatically falls back to a lesser locking strategy. In your case, SQL Server drops from a serializable read to a repeatable read.
Q: Thanks for that useful info regarding dropping isolation levels. Can you think of any reason why it would use the Serializable isolation level in the first place, given that we don't use an explicit transaction for the SELECT? It was our understanding that the implicit transaction would use ReadCommitted.
A: By default, SQL Server will use Read Committed if that is your default isolation level, BUT if you do not additionally specify a locking strategy in your query, you are basically saying to SQL Server "do what you think is best, but my preference is Read Committed". Since SQL Server is free to choose, it does so in order to optimise the query. (The optimisation algorithm in SQL Server is very complex, and I do not fully understand it myself.) Not explicitly executing within a transaction does not, AFAIK, affect the isolation level that SQL Server uses.
Q: One last thing, does it seem reasonable that SQL Server would increase the Isolation Level (and presumably the number of locks required) to optimise the query? I'm also wondering whether the reuse of a pooled connection would affect this if it inherited the last used Isolation Level?
A: SQL Server will do that as part of a process called "lock escalation". From http://support.microsoft.com/kb/323630, I quote: "Microsoft SQL Server dynamically determines when to perform lock escalation. When making this decision, SQL Server takes into account the number of locks that are held on a particular scan, the number of locks that are held by the whole transaction, and the memory that is being used for locks in the system as a whole. Typically, SQL Server's default behavior results in lock escalation occurring only at those points where it would improve performance or when you must reduce excessive system lock memory to a more reasonable level. However, some application or query designs may trigger lock escalation at a time when it is not desirable, and the escalated table lock may block other users".
Although lock escalation is not exactly the same thing as changing the isolation level a query runs under, this surprises me, because I would not have expected SQL Server to take more locks than the default isolation level permits.
More info regarding why SQL Server would take more locks by escalating: this is incorrect; escalating reduces (not increases) the number of locks required. A table lock is a single lock versus all the page or row locks required to do the same from a lower level. Lock escalation is always done for one reason: it's more efficient to take a higher-level lock than to lock all the lower-level objects.
For example, perhaps there is no index available to lock efficiently against. I.e., if you take a count with UPDLOCK on all records with a year of 2010 in a field, and there is no index on that date field, this requires a row lock on each record in 2010, which is not efficient if many records are hit; a page lock will not help either, since the matching rows are presumably distributed randomly across pages, so SQL Server takes a table lock. Moreover, SQL Server MUST also prevent other records from changing to the year 2010 while the UPDLOCK is held, and with no index on this field to take a range lock on, SQL Server has NO CHOICE but to take a table lock to prevent this from happening. This latter point is one often missed by those new to optimization: the realization that SQL Server must also "protect" the integrity of the queries already executed in the transaction.
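To make that concrete, here is a sketch of the kind of query described (dbo.Orders and OrderYear are hypothetical):

-- With no index on OrderYear, this UPDLOCK count must visit every row, and
-- there is no key range to lock that would keep new 2010 rows out, so SQL
-- Server's only way to protect the result is a single table lock.
SELECT COUNT(*)
FROM dbo.Orders WITH (UPDLOCK)
WHERE OrderYear = 2010;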

Using IsolationLevel.Snapshot but DB is still locking

I'm part of a team building an ADO.NET-based web site. We sometimes have several developers and an automated testing tool working simultaneously on a development copy of the database.
We use the snapshot isolation level, which, to the best of my knowledge, uses optimistic concurrency: rather than locking, it hopes for the best and throws an exception at commit time if the affected rows have been altered by another party during the transaction.
To use snapshot isolation level we use:
ALTER DATABASE <database name>
SET ALLOW_SNAPSHOT_ISOLATION ON;
and in C#:
var transaction = sqlConnection.BeginTransaction(IsolationLevel.Snapshot);
Note that IsolationLevel Snapshot isn't the same as ReadCommitted Snapshot, which we've also tried, but are not currently using.
When one of the developers enters debug mode and pauses the .NET app, they will hold a connection with an active transaction while debugging. Now, I'd expect this not to be a problem - after all, all transactions are using snapshot isolation level, so while one transaction is paused, other transactions should be able to proceed normally since the paused transaction isn't holding any locks. Of course, when the paused transaction completes, it is likely to detect a conflict; but that's acceptable so long as other developers and the automated tests can proceed unhindered.
However, in practice, when one person halts a transaction while debugging, all other DB users attempting to access the same rows are blocked despite using snapshot isolation level.
Does anybody know why this occurs, and/or how I can achieve true optimistic (non-blocking) concurrency?
The resolution (an unfortunate one for me): Remus Rusanu noted that writers always block other writers; this is backed up by MSDN - it doesn't quite come out and say so, but only ever mentions avoiding reader-writer locks. In short, the behavior I want isn't implemented in SQL Server.
The SNAPSHOT isolation level affects, like all isolation levels, only reads. Writes still block each other. If you believe that what you are seeing are read blocks, you should investigate further and check the resource types and resource names on which the blocking occurs (wait_type and wait_resource in sys.dm_exec_requests), as in the query below.
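A quick way to run that check while the blocking is happening:

-- Who is blocked, by which session, and on what resource
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_resource
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;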
I wouldn't advise making code changes to support a scenario that involves developers staring at the debugger for minutes on end. If you believe this scenario can repeat in production (i.e. a client hangs), then it is a different story. To achieve what you want, you must minimize writes and perform them all at the end of the transaction, in one single call that commits before returning. That way no client can hold X locks for a long time (it cannot hang while holding X locks). In practice this is pretty hard to pull off and requires a lot of discipline from the developers in how they write the data-access code.
Have you looked at the locks while one developer has a transaction paused? Also, just requesting the snapshot isolation level does not have much effect on its own. Have you set ALLOW_SNAPSHOT_ISOLATION ON?
Here are the steps:
ALTER DATABASE <database name>
SET READ_COMMITTED_SNAPSHOT ON;
GO
ALTER DATABASE <database name>
SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
After the database has been enabled for snapshot isolation, developers and users must then request that their transactions be run in this snapshot mode. This must be done before starting a transaction, either by a client-side directive on the ADO.NET transaction object or within their Transact-SQL query by using the following statement:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
Raj