How to set TransactionScope to Read Uncommitted by default - sql

My project uses a dbml file to access the database with LINQ to SQL. It uses transactions for several operations where they are required.
As the database grows, I am seeing the following errors:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Transaction (Process ID 82) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
There are thousands of LINQ queries spread across the project, so I cannot add TransactionScope code to every select query; that would be far too time consuming.
Is there any way to set a default transaction IsolationLevel in the dbml so that reads are uncommitted and do not run into deadlocks?
Please let me know if you have any questions on the issue.

The following call before query execution should work:
dataContext.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");
You can execute this once after initialising the DataContext.
OR
Implement an extension method that calls the DataContext's ExecuteCommand once to set the transaction isolation level as above, and then once for the actual command execution - see the sketch below.
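A rough sketch of that extension-method idea, assuming LINQ to SQL's DataContext; the method name and the use of ExecuteQuery<TResult> here are my own illustration, not part of the original answer:

using System.Collections.Generic;
using System.Data.Linq;

public static class DataContextExtensions
{
    // Hypothetical helper: set READ UNCOMMITTED on the context's connection,
    // then run the given query text on that same connection.
    public static IEnumerable<TResult> QueryReadUncommitted<TResult>(
        this DataContext dataContext, string query, params object[] parameters)
    {
        dataContext.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");
        return dataContext.ExecuteQuery<TResult>(query, parameters);
    }
}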

Related

When will a commit actually affect tables during a procedure call?

I am working with MS SQL within the Struts framework.
When calling a procedure, I set autocommit to false in the program.
While the procedure runs, I have to commit one separate transaction, and it must affect the table so the change is visible externally.
But nothing is saved until the conn.commit() statement executes in the program.
Is there any other way to commit the transaction within the procedure itself, so that the table is affected at the end of that single transaction inside the procedure?
Please tell me if you know.
T.Saravanan
You should start and commit/rollback a transaction at the same level, otherwise you are introducing a lot of unpredictable paths - and frankly some bad design. So: if you need to commit at the server, use BEGIN TRAN / COMMIT TRAN in the TSQL to handle the transaction locally.
Note, though, that TSQL exception/error handling is not as rich as handling errors at a caller such as java/C#. If the problem is that you want to disassociate this work from another unrelated transaction, then it depends on how your calling code works:
if it is using connection-level transactions, then you will need to use a separate connection; just run the transaction on a different connection using the java/C#/whatever transaction API (i.e. the same as your existing code, by the sound of it, but on a different connection)
if it is using things like scope-based transactions (TransactionScope in C#; not sure about java etc - but this is an LTM or DTC transaction), then you can explicitly create a new scope that is bound to either a new (isolated) transaction, or the nil-transaction (i.e. the inner scope is not enlisted) - see the sketch after this list
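A minimal sketch of both TransactionScope options, assuming C# and System.Transactions; the work inside the scopes is placeholder:

using System.Transactions;

// Option 1: a new, isolated transaction, independent of any ambient one.
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    // ... do the work that must commit on its own ...
    scope.Complete();
}

// Option 2: the nil-transaction - the inner scope is not enlisted at all.
using (var scope = new TransactionScope(TransactionScopeOption.Suppress))
{
    // ... this work runs outside any ambient transaction ...
    scope.Complete();
}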
As for affecting the tables... SQL Server generally does optimistic changes, i.e. yes the changes are applied immediately (so that commit is cheap, and rollback is more expensive) - however, the isolation level will generally prevent other SPIDs from seeing the data. A competing SPID with a low isolation level (or using the NOLOCK hint) will see the uncommitted data, but this may be a phantom/non-repeatable read if the data eventually gets rolled back.

Handle NHibernate Transaction Errors

Our application (which uses NHibernate and ASP.NET MVC), when put under stress testing, throws a lot of NHibernate transaction errors. The major types are:
Transaction not connected, or was disconnected
Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)
Transaction (Process ID 177) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Can someone help me in identifying the reason for Exception 1?
I know I have to handle the other exceptions in my code. Can someone point me to resources which can help me handle these errors in an efficient manner?
Q. How do we manage Sessions and Transactions?
A. We are using Autofac. For every server request, we create a new request container which has the session in the container lifetime scope. On activating the session we begin the transaction. When the request completes, we commit the transaction. In some cases, the transaction can be huge. To simplify, every server request is contained in a transaction.
Have a look at this thread:
http://n2cms.codeplex.com/Thread/View.aspx?ThreadId=85016
Basically what it says as a possible cause of this exception:
2010-02-17 21:01:41,204 1 WARN NHibernate.Util.ADOExceptionReporter -
System.Data.SqlClient.SqlException: The transaction log for database 'databasename' is full.
To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
As the transaction log's size is proportional to the amount of work done during the transaction, perhaps you ought to look into putting your transactional boundaries around the command handlers' handling of commands on the write side. You would then, with a session #X, load the state you wish to mutate, mutate it and commit it, all as one unit of work in #X.
With regard to the read side of things, you might then have another ISession #Y that reads data; this ISession could be used to batch reads within e.g. RepeatableRead or something similar with the Futures feature, and could simply be reading from a cache (albeit that being a crutch indeed). Doing it this way might help you recover from "errors" that aren't really errors: livelocks, deadlocks and victim transactions.
The problem with using a transaction per request is that your ISession acquires a lot of bookkeeping data while you are working, all of which is part of the transaction. Hence the database marks the data (rows, columns, tables, etc.) as partaking in the transaction, causing the wait-graph to span 'entities' (in the database sense, not the DDD sense) which are not actually part of the transactional boundary of the command your application took.
For the record (for other people googling this), Fabio had a post dealing with exceptions from the data layer. Quoting some of his code:
public class MsSqlExceptionConverterExample : ISQLExceptionConverter
{
    public Exception Convert(AdoExceptionContextInfo exInfo)
    {
        var sqle = ADOExceptionHelper.ExtractDbException(exInfo.SqlException) as SqlException;
        if (sqle != null)
        {
            switch (sqle.Number)
            {
                case 547: // constraint conflict
                    return new ConstraintViolationException(exInfo.Message,
                        sqle.InnerException, exInfo.Sql, null);
                case 208: // invalid object name in the SQL
                    return new SQLGrammarException(exInfo.Message,
                        sqle.InnerException, exInfo.Sql);
                case 3960: // snapshot isolation update conflict
                    return new StaleObjectStateException(exInfo.EntityName, exInfo.EntityId);
            }
        }
        return SQLStateConverter.HandledNonSpecificException(exInfo.SqlException,
            exInfo.Message, exInfo.Sql);
    }
}
547 is the exception number for constraint conflict.
208 is the exception number for an invalid object name in the SQL.
3960 is the exception number for Snapshot isolation transaction aborted due to update conflict.
So if you are running into concurrency issues like the ones you describe, remember that they will invalidate your ISession and that you'd have to handle them like the above.
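To actually plug such a converter into NHibernate, you register it through the sql_exception_converter configuration property. A hedged sketch, assuming the converter class from the quoted code above:

using NHibernate.Cfg;

var cfg = new Configuration();
// Point NHibernate at the custom converter; the assembly-qualified name
// must resolve to the MsSqlExceptionConverterExample type shown above.
cfg.SetProperty(NHibernate.Cfg.Environment.SqlExceptionConverter,
    typeof(MsSqlExceptionConverterExample).AssemblyQualifiedName);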
Part of what you might be looking for is CQRS, where you have separate read and write sides. This might help: http://abdullin.com/cqrs/, http://cqrsinfo.com.
So, to summarize: your problems might be related to the way you handle your transactions. Also, try running select log_reuse_wait_desc from sys.databases where name='MyDBName' and see what it gives you.
This thread has an explanation:
http://groups.google.com/group/nhusers/browse_thread/thread/7f5fb68a00829d13
In short, the database probably rolls back the transaction by itself due to some error, so that when you try to rollback the transaction later it is already rolled back and in a zombie state. This tends to hide the actual reason for the rollback since all you see is a TransactionException instead of the exception that actually triggered the rollback in the first place.
I don't think there is much you can do about it beyond logging it and trying to figure out what is causing the underlying error.
I know this post was a while back and assume you fixed it, but it seems like you have thread-sharing issues with the NHibernate ISession, which is not thread-safe. Basically, one thread is starting a transaction and another is attempting to close it, causing all sorts of chaos.

SQLite with TransactionScope and Increment Identity Generator

When trying to use SQLite with the System.Transactions TransactionScope and the identity generator set to Increment, I noticed that I was getting an exception (given below along with the code) when NHibernate tried to retrieve the next identity number.
This seems to be because the new SQLite connection auto-enlists in the current transaction. From what I have heard, SQLite only supports a single write transaction but should support multiple reads, so I am surprised that I am getting a "database is locked" exception for a read operation. Has anybody used SQLite with TransactionScope in this manner?
The same code works fine if I use an NHibernate transaction instead of TransactionScope.
Code Block:
using (var scope = new TransactionScope())
{
    var userRepository = container.GetInstance<IUserRepository>();
    var user = new User();
    userRepository.SaveOrUpdate(user);
    scope.Complete();
}
Exception:
19:34:19,126 ERROR [ 7] IncrementGenerator [(null)]- could not get increment value
System.Data.SQLite.SQLiteException: The database file is locked
database is locked
   at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt)
   at System.Data.SQLite.SQLiteCommand.ExecuteNonQuery()
   at System.Data.SQLite.SQLiteTransaction..ctor(SQLiteConnection connection, Boolean deferredLock)
   at System.Data.SQLite.SQLiteConnection.BeginTransaction(Boolean deferredLock)
   at System.Data.SQLite.SQLiteConnection.BeginTransaction()
   at System.Data.SQLite.SQLiteEnlistment..ctor(SQLiteConnection cnn, Transaction scope)
   at System.Data.SQLite.SQLiteConnection.EnlistTransaction(Transaction transaction)
   at System.Data.SQLite.SQLiteConnection.Open()
   at NHibernate.Connection.DriverConnectionProvider.GetConnection()
   at NHibernate.Id.IncrementGenerator.GetNext(ISessionImplementor session)
19:34:20,063 ERROR [ 7] ADOExceptionReporter [(null)]- The database file is locked
database is locked
There are two things at play here.
As you mentioned, System.Data.SQLite will auto-enlist in distributed transactions. This is on by default; you can turn it off by adding Enlist=no to the connection string.
The second is that System.Data.SQLite by default creates transactions with an immediate write lock. This is done based on the assumption that if a transaction is started, writes will be done. It can be overridden by starting the transactions with IsolationLevel.ReadCommitted.
The default can also be specified in the connection string using the DefaultIsolationLevel key. Valid values are ReadCommitted and Serializable only; other isolation levels are not supported by SQLite. ReadCommitted defers the write lock, whereas Serializable obtains the write lock immediately.
Unspecified will use the default isolation level specified in the connection string. If no isolation level is specified in the connection string, Serializable is used. Serializable transactions are the default. In this mode, the engine gets an immediate lock on the database, and no other threads may begin a transaction. Other threads may read from the database, but not write.
With a ReadCommitted isolation level, locks are deferred and elevated as needed. It is possible for multiple threads to start a transaction in ReadCommitted mode, but if a thread attempts to commit a transaction while another thread holds a ReadCommitted lock, it may time out or cause a deadlock on both threads until both threads' CommandTimeouts are reached. See the sketch below.
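A hedged sketch of both knobs together, assuming System.Data.SQLite; the database file name is illustrative:

using System.Data;
using System.Data.SQLite;

// Enlist=no keeps the connection out of ambient System.Transactions scopes.
using (var connection = new SQLiteConnection("Data Source=app.db;Enlist=no"))
{
    connection.Open();
    // ReadCommitted defers the write lock; Serializable (the default) takes it immediately.
    using (var transaction = connection.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        // ... read first, write only when necessary ...
        transaction.Commit();
    }
}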

SQL Transactions (b)locking Selects - is my understanding correct

We are using ADO.NET to connect to a SQL Server 2005 instance, doing a number of inserts/updates and selects against it. We changed one of the updates to run inside a transaction; however, it appears to (b)lock the entire table when we do, regardless of the IsolationLevel we set on the transaction.
The behavior that I seem to see is that:
If you have no transactions, then it's an all-out fight (losers getting deadlocked)
If you have a few transactions, then they win all the time and block all the others out - unless:
If you have a few transactions and you set something like NOLOCK on the rest, then you get transactions and nothing blocked. This is because every statement (select/insert/delete/update) has an isolation level regardless of transactions.
Is this correct?
The answer to your question is: It depends.
If you are updating a table, SQL Server uses several strategies to decide how many rows to lock: row-level locks, page locks or full table locks.
If you are updating more than a certain percentage of the table (configurable, as I remember), then SQL Server gives you a table-level lock, which may block selects.
The best references are:
Understanding Locking in SQL Server (for SQL Server 2000): http://msdn.microsoft.com/en-us/library/aa213039(SQL.80).aspx
Introduction to Locking in SQL Server: http://www.sqlteam.com/article/introduction-to-locking-in-sql-server
Isolation Levels in the Database Engine: http://msdn.microsoft.com/en-us/library/ms189122.aspx (for SQL Server 2008, but a 2005 version is available)
Good luck.
Your update statement (i.e. one that changes data) will hold locks regardless of the isolation level and whether you have explicitly defined a transaction or not.
What you can control is the granularity of the locks by using query hints. So if the update is locking the entire table, you can specify a query hint to lock only the affected rows (the ROWLOCK hint) - unless, of course, your query is updating the whole table. See the sketch below.
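For illustration, a hedged ADO.NET sketch of an update carrying a ROWLOCK hint; the table, columns and connection string are made up:

using System.Data.SqlClient;

var connectionString = "Server=.;Database=MyDb;Integrated Security=true"; // hypothetical
using (var connection = new SqlConnection(connectionString))
using (var command = connection.CreateCommand())
{
    // WITH (ROWLOCK) asks the engine to prefer row-level locks for this update.
    command.CommandText =
        "UPDATE SomeTable WITH (ROWLOCK) SET Status = @status WHERE Id = @id";
    command.Parameters.AddWithValue("@status", 1);
    command.Parameters.AddWithValue("@id", 42);
    connection.Open();
    command.ExecuteNonQuery();
}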
So, to answer your question: the first connection to request locks on a resource will hold those locks for the duration of the transaction. You can specify that a select does not hold locks by using the read uncommitted isolation level; statements that change data (insert/update/delete) always hold locks regardless. The next connection to request locks on the same resource will wait until the first has finished, and will then hold its locks. Deadlocking is a specific scenario where two connections each hold locks and each is waiting for a resource held by the other; to avoid the engine waiting forever, one connection is chosen as the deadlock victim.

Should I commit or rollback a read transaction?

I have a read query that I execute within a transaction so that I can specify the isolation level. Once the query is complete, what should I do?
Commit the transaction
Rollback the transaction
Do nothing (which will cause the transaction to be rolled back at the end of the using block)
What are the implications of doing each?
using (IDbConnection connection = ConnectionFactory.CreateConnection())
{
    using (IDbTransaction transaction = connection.BeginTransaction(IsolationLevel.ReadUncommitted))
    {
        using (IDbCommand command = connection.CreateCommand())
        {
            command.Transaction = transaction;
            command.CommandText = "SELECT * FROM SomeTable";
            using (IDataReader reader = command.ExecuteReader())
            {
                // Read the results
            }
        }
        // To commit, or not to commit?
    }
}
EDIT: The question is not whether a transaction should be used, or whether there are other ways to set the transaction level. The question is whether it makes any difference that a transaction that does not modify anything is committed or rolled back. Is there a performance difference? Does it affect other connections? Any other differences?
You commit. Period. There's no other sensible alternative. If you started a transaction, you should close it. Committing releases any locks you may have had, and is equally sensible with ReadUncommitted or Serializable isolation levels. Relying on implicit rollback - while perhaps technically equivalent - is just poor form.
If that hasn't convinced you, just imagine the next guy who inserts an update statement in the middle of your code, and has to track down the implicit rollback that occurs and removes his data.
If you haven't changed anything, then you can use either a COMMIT or a ROLLBACK. Either one will release any read locks you have acquired and since you haven't made any other changes, they will be equivalent.
If you begin a transaction, then best practice is always to commit it. If an exception is thrown inside your using(transaction) block, the transaction will be automatically rolled back.
Consider nested transactions.
Most RDBMSes do not support nested transactions, or try to emulate them in a very limited way.
For example, in MS SQL Server, a rollback in an inner transaction (which is not a real transaction; MS SQL Server just counts transaction levels!) will roll back everything that has happened in the outermost transaction (which is the real transaction).
Some database wrappers might treat a rollback in an inner transaction as a sign that an error has occurred, and roll back everything in the outermost transaction, regardless of whether the outermost transaction committed or rolled back.
So a COMMIT is the safe way when you cannot rule out that your component is used by some other software module.
Please note that this is a general answer to the question. The code example cleverly works around the issue with an outer transaction by opening a new database connection.
Regarding performance: depending on the isolation level, SELECTs may require a varying degree of locks and temporary data (snapshots). This is cleaned up when the transaction is closed; it does not matter whether that happens via COMMIT or ROLLBACK. There might be an insignificant difference in CPU time spent - a COMMIT is probably faster to parse than a ROLLBACK (two characters less), and there are other minor differences. Obviously, this is only true for read-only operations!
Totally not asked for: another programmer who might get to read the code might assume that a ROLLBACK implies an error condition.
IMHO it can make sense to wrap read-only queries in transactions, as (especially in Java) you can mark the transaction as "read-only", which the JDBC driver may use to optimize the query (though it does not have to, so nobody will prevent you from issuing an INSERT nevertheless). E.g. the Oracle driver will completely avoid table locks on queries in a transaction marked read-only, which gains a lot of performance in heavily read-driven applications.
ROLLBACK is mostly used in case of an error or exceptional circumstances, and COMMIT in the case of successful completion.
We should close transactions with COMMIT (for success) and ROLLBACK (for failure), even in the case of read-only transactions where it doesn't seem to matter. In fact it does matter, for consistency and future-proofing.
A read-only transaction can logically "fail" in many ways, for example:
a query does not return exactly one row as expected
a stored procedure raises an exception
data fetched is found to be inconsistent
user aborts the transaction because it's taking too long
deadlock or timeout
If COMMIT and ROLLBACK are used properly for a read-only transaction, it will continue to work as expected if DB write code is added at some point, e.g. for caching, auditing or statistics.
Implicit ROLLBACK should only be used for "fatal error" situations, when the application crashes or exits with an unrecoverable error, network failure, power failure, etc.
Just a side note, but you can also write that code like this:
using (IDbConnection connection = ConnectionFactory.CreateConnection())
using (IDbTransaction transaction = connection.BeginTransaction(IsolationLevel.ReadUncommitted))
using (IDbCommand command = connection.CreateCommand())
{
    command.Transaction = transaction;
    command.CommandText = "SELECT * FROM SomeTable";
    using (IDataReader reader = command.ExecuteReader())
    {
        // Do something useful
    }
    // To commit, or not to commit?
}
And if you re-structure things just a little bit you might be able to move the using block for the IDataReader up to the top as well.
If you put the SQL into a stored procedure and add this above the query:
set transaction isolation level read uncommitted
then you don't have to jump through any hoops in the C# code. Setting the transaction isolation level in a stored procedure does not cause the setting to apply to all future uses of that connection (which is something you have to worry about with other settings, since connections are pooled). At the end of the stored procedure it simply reverts to whatever the connection was initialized with. A sketch of the calling side follows.
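A hedged sketch of the calling side, assuming SQL Server and a hypothetical procedure name; the isolation-level statement lives inside the procedure itself:

using System.Data;
using System.Data.SqlClient;

var connectionString = "Server=.;Database=MyDb;Integrated Security=true"; // hypothetical
using (var connection = new SqlConnection(connectionString))
using (var command = connection.CreateCommand())
{
    // The stored procedure body would begin with:
    //   SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    // followed by the SELECT itself.
    command.CommandType = CommandType.StoredProcedure;
    command.CommandText = "dbo.GetSomeTableDirty"; // hypothetical procedure name
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        // Read the results
    }
}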
Given that a READ does not change state, I would do nothing. Performing a commit will accomplish nothing except wasting a round trip to the database; you haven't performed an operation that changed state. Likewise for the rollback.
You should however, be sure to clean up your objects and close your connections to the database. Not closing your connections can lead to issues if this code gets called repeatedly.
If you set AutoCommit to false, then YES.
In an experiment with JDBC (PostgreSQL driver), I found that if a select query breaks (because of a timeout), then you cannot initiate a new select query unless you roll back.
Do you need to block others from reading the same data? Why use a transaction?
@Joel - My question would be better phrased as "Why use a transaction on a read query?"
@Stefan - If you are going to use ad hoc SQL and not a stored proc, then just add WITH (NOLOCK) after the tables in the query. This way you don't incur the overhead (albeit minimal) in the application and the database for a transaction.
SELECT * FROM SomeTable WITH (NOLOCK)
EDIT (re comment 3): Since you had "sqlserver" in the question tags, I had assumed MS SQL Server was the target product. Now that that point has been clarified, I have edited the tags to remove the specific product reference.
I am still not sure why you want to make a transaction on a read op in the first place.
In your code sample, where you have
// Do something useful
Are you executing a SQL statement that changes data?
If not, there's no such thing as a "read" transaction... Only changes from INSERT, UPDATE and DELETE statements (statements that can change data) are in a transaction... What you are talking about are the locks that SQL Server puts on the data you are reading, because of OTHER transactions that affect that data. The level of these locks depends on the SQL Server isolation level.
But you cannot commit or roll back anything if your SQL statement has not changed anything.
If you are changing data, then you can change the isolation level without explicitly starting a transaction... Every individual SQL statement is implicitly in a transaction; explicitly starting a transaction is only necessary to ensure that two or more statements are within the same transaction.
If all you want to do is set the transaction isolation level, then just set a command's CommandText to "SET TRANSACTION ISOLATION LEVEL REPEATABLE READ" (or whatever level you want), set the CommandType to CommandType.Text, and execute the command (you can use Command.ExecuteNonQuery()). See the sketch at the end of this answer.
NOTE: If you are doing MULTIPLE read statements and want them all to "see" the same state of the database as the first one, then you need to set the isolation level to Repeatable Read or Serializable...
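A minimal sketch of that ExecuteNonQuery approach, assuming SQL Server and ADO.NET; the connection string is made up:

using System.Data;
using System.Data.SqlClient;

var connectionString = "Server=.;Database=MyDb;Integrated Security=true"; // hypothetical
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var command = connection.CreateCommand())
    {
        // Applies to subsequent statements executed on this same connection.
        command.CommandText = "SET TRANSACTION ISOLATION LEVEL REPEATABLE READ";
        command.CommandType = CommandType.Text;
        command.ExecuteNonQuery();
    }
    // ... run the reads on the same connection ...
}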