NHibernate: exclusive locking

In NHibernate, I want to retrieve an instance, and put an exclusive lock on the record that represents the retrieved entity on the database.
Right now, I have this code:
With.Transaction (session, IsolationLevel.Serializable, delegate
{
    ICriteria crit = session.CreateCriteria (typeof (TarificationProfile));
    crit.SetLockMode (LockMode.Upgrade);
    crit.Add (Expression.Eq ("Id", tarificationProfileId));

    TarificationProfile profile = crit.UniqueResult<TarificationProfile> ();

    nextNumber = profile.AttestCounter;
    profile.AttestCounter++;

    session.SaveOrUpdate (profile);
});
As you can see, I set the LockMode for this Criteria to 'Upgrade'.
This issues an SQL statement for SQL Server which uses the updlock and rowlock locking hints:
SELECT ... FROM MyTable with (updlock, rowlock)
However, I want to be able to use a real exclusive lock: that is, prevent others from reading this very same record until I have released the lock.
In other words, I want to be able to use an xlock locking hint, instead of an updlock.
I don't know how (or even if) I can achieve that... Maybe somebody can give me some hints about this :)
If it is really necessary, I can use the SQLQuery functionality of NHibernate, and write my own SQL Query, but, I'd like to avoid that as much as possible.

An HQL DML query will accomplish your update without needing a lock.
This is available in NHibernate 2.1, but is not yet in the reference documentation. The Java Hibernate documentation is very close to the NHibernate implementation.
Assuming you are using ReadCommitted Isolation, you can then safely read your value back inside the transaction.
With.Transaction (session, IsolationLevel.Serializable, delegate
{
    session.CreateQuery ("update TarificationProfile t set t.AttestCounter = 1 + t.AttestCounter where t.Id = :id")
        .SetInt32 ("id", tarificationProfileId)
        .ExecuteUpdate ();

    nextNumber = session.CreateQuery ("select t.AttestCounter from TarificationProfile t where t.Id = :id")
        .SetInt32 ("id", tarificationProfileId)
        .UniqueResult<int> ();
});
Depending on your table and column names, the generated SQL will be:
update TarificationProfile
set AttestCounter = 1 + AttestCounter
where Id = 1 /* #p0 */
select tarificati0_.AttestCounter as col_0_0_
from TarificationProfile tarificati0_
where tarificati0_.Id = 1 /* #p0 */

I doubt it can be done from NHibernate. Personally, I would use a stored procedure to do what you're trying to accomplish.
Update: Given the continued downvotes I'll expand on this. Frederick is asking how to use locking hints, which are syntax- and implementation-specific details of his underlying database engine, from his ORM layer. This is the wrong level to attempt to perform such an operation - even if it was possible (it isn't), the likelihood it would ever work consistently across all NHibernate-supported databases is vanishingly low.
It's great Frederick's eventual solution didn't require pre-emptive exclusive locks (which kill performance and are generally a bad idea unless you know what you're doing), but my answer is valid. Anyone who stumbles across this question and wants to do exclusive-lock-on-read from NHibernate - firstly: don't, secondly: if you have to, use a stored procedure or a SQLQuery.
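For completeness, a minimal sketch of the SQLQuery route - the xlock hint and table name assume SQL Server and the mapping from the question, so verify both before relying on it:
// Sketch only: a native SQL query with exclusive row-lock hints. Run it inside
// the transaction that should hold the lock; the lock is released on commit/rollback.
TarificationProfile profile = session.CreateSQLQuery (
        "select {tp.*} from TarificationProfile tp with (xlock, rowlock) where tp.Id = :id")
    .AddEntity ("tp", typeof (TarificationProfile))
    .SetInt32 ("id", tarificationProfileId)
    .UniqueResult<TarificationProfile> ();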

If all your reads are done with an IsolationLevel of Serializable, and all your writes are also done with an IsolationLevel of Serializable, you don't need to do any locking of database rows yourself.
So serializable isolation keeps the data safe; now we still have the problem of possible deadlocks...
If deadlocks are not common, just putting the [start transaction, read, update, save] sequence in a retry loop that runs again when you hit a deadlock may be good enough (see the sketch after this answer).
Otherwise a simple "select for update" statement generated directly (i.e. not with NHibernate) could be used to stop another transaction reading the row before it is changed.
However, I keep thinking that if the update rate is fast enough to produce lots of deadlocks, an ORM may not be the correct tool for the update, or the database schema may need redesigning to avoid the value that has to be read/written (e.g. calculating it when reading the data).
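A sketch of that retry loop, reusing the With.Transaction helper and names from the question. Deadlock detection here assumes SQL Server, where the victim's error surfaces as SqlException number 1205; note that NHibernate may wrap it in its own ADOException, in which case you would inspect InnerException:
// Retries the whole read/update unit of work when chosen as a deadlock victim.
// Requires System.Data.SqlClient and System.Threading.
const int maxAttempts = 3;
for (int attempt = 1; ; attempt++)
{
    try
    {
        With.Transaction (session, IsolationLevel.Serializable, delegate
        {
            TarificationProfile profile = session.Get<TarificationProfile> (tarificationProfileId);
            nextNumber = profile.AttestCounter;
            profile.AttestCounter++;
            session.SaveOrUpdate (profile);
        });
        break; // success
    }
    catch (SqlException ex)
    {
        if (ex.Number != 1205 || attempt == maxAttempts)
            throw;                    // not a deadlock, or out of retries
        Thread.Sleep (100 * attempt); // brief back-off before retrying
    }
}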

You could use isolation level "repeatable read" if you want to make sure that values you read from the database don't change during the transaction. But you have to do this in all critical transactions. Alternatively, you can lock the row in the critical reading transaction with an upgrade lock.

Related

SQL database locking difference between SELECT and UPDATE

I am re-writing an old stored procedure which is called by BizTalk. Now this has the potential to have 50-60 messages pushed through at once.
I occasionally have an issue with database locking when they are all trying to update at once.
I can only make changes in SQL (not BizTalk) and I am trying to find the best way to run my SP.
With this in mind, what I have done is to make the majority of the statement determine if an UPDATE is needed by using a SELECT statement.
What my question is - What is the difference regarding locking between an UPDATE statement and a SELECT with a NOLOCK against it?
I hope this makes sense - Thank you.
You use NOLOCK when you want to read uncommitted data and want to avoid taking any shared lock on the data, so that other transactions can take exclusive locks for updating/deleting.
You should not use NOLOCK with an UPDATE statement; it is a really bad idea, and Microsoft says that NOLOCK hints are ignored for the target of an UPDATE/INSERT statement:
Support for use of the READUNCOMMITTED and NOLOCK hints in the FROM clause that apply to the target table of an UPDATE or DELETE statement will be removed in a future version of SQL Server. Avoid using these hints in this context in new development work, and plan to modify applications that currently use them.
Source
Regarding your locking problem during multiple simultaneous updates: this normally happens when you read data with the intention of updating it later while taking only a shared lock. The subsequent UPDATE statement then can't acquire the necessary update locks, because they are blocked by the shared locks acquired in the other session, causing a deadlock.
To resolve this you can select the records using UPDLOCK, like the following:
DECLARE @IdToUpdate INT

SELECT @IdToUpdate = ID FROM [Your_Table] WITH (UPDLOCK) WHERE A=B

UPDATE [Your_Table]
SET X=Y
WHERE ID=@IdToUpdate
This takes the necessary update lock on the record in advance, stops other sessions from acquiring any lock (shared/exclusive) on the record, and prevents deadlocks.
NOLOCK: Specifies that dirty reads are allowed. No shared locks are issued to prevent other transactions from modifying data read by the current transaction, and exclusive locks set by other transactions do not block the current transaction from reading the locked data. NOLOCK is equivalent to READUNCOMMITTED.
Thus, while using NOLOCK you get all rows back, but you risk reading uncommitted (dirty) data. While using READPAST you get only committed data, so you may miss records that are currently being processed and not yet committed.
For more background, see the links below.
https://www.mssqltips.com/sqlservertip/2470/understanding-the-sql-server-nolock-hint/
https://www.mssqltips.com/sqlservertip/4468/compare-sql-server-nolock-and-readpast-table-hints/
https://social.technet.microsoft.com/wiki/contents/articles/19112.understanding-nolock-query-hint.aspx

What is wrong with the below T-SQL code

BEGIN TRAN
SELECT * FROM AnySchema.AnyTable
WHERE AnyColumn = SomeCondition
COMMIT
I know the transaction is not required here because it is just a select, but I want to know how bad a practice it is and whether it adds any overhead on the DB engine.
You may use a transaction on SELECT statements to ensure nobody else can update/delete records of the table while your batch of select queries is executing.
Using WITH(NOLOCK):
Anyway, you may also use WITH(NOLOCK) in T-SQL:
SELECT * FROM AnySchema.AnyTable WITH(NOLOCK)
WHERE AnyColumn = SomeCondition
WITH (NOLOCK) is the equivalent of using READ UNCOMMITTED as a transaction isolation level. Here you run the risk of reading an uncommitted row that is subsequently rolled back, i.e. data that never made it into the database.
So, while it can prevent reads being deadlocked by other operations, it comes with a risk.
TRANSACTION Block :
Using a TRANSACTION block will not cause much extra DB overhead, but if you make a habit of it and at some point forget (you or your developers may forget, right?) to close the transaction, then other processes can't work on the same table.
Anyway, it depends on what type of application you have. If updates and selects are very frequent, it is advised not to use such transaction blocks. If there is a medium level of updates and selects occurring next to each other, you may use transaction blocks for selects (but ensure you close the transaction).
That's actually a good question. To understand what's going on, you need to know about SET IMPLICIT_TRANSACTIONS.
When ON, the system is in implicit transaction mode. This means that if @@TRANCOUNT = 0, any of the following Transact-SQL statements begins a new transaction. It is equivalent to an unseen BEGIN TRANSACTION being executed first:
When OFF, each of the preceding T-SQL statements is bounded by an unseen BEGIN TRANSACTION and an unseen COMMIT TRANSACTION statement. When OFF, we say the transaction mode is autocommit. [this is the default]
If your T-SQL code visibly issues a BEGIN TRANSACTION, we say the transaction mode is explicit.
https://msdn.microsoft.com/en-us/library/ms187807.aspx
Since SQL Server would have created a transaction for you anyway, manually doing so doesn't actually change anything; the exact same thing would have happened either way.
Summary: what you are doing isn't 'wrong', because it has no effect, but it is unnecessary and confusing to the reader.
I think you would be taking a HOLDLOCK and most likely a TABLOCK:
table hints
That is not always a good thing, as you would block any updates or deletes (maybe even inserts).
It would be better to let SQL Server decide what level of locks to take - most likely page locks. I would stay away from NOLOCK, as bad stuff can happen.
On a select against a single table, just let the optimizer do its thing.

ADO.NET Deadlock

I'm experiencing an intermittent deadlock situation with following (simplified) code.
DataSet my_dataset = new DataSet();
SqlCommand sql_command = new SqlCommand();
sql_command.Connection = <valid connection>;
sql_command.CommandType = CommandType.Text;
sql_command.CommandText = "SELECT * FROM MyView ORDER BY 1";
SqlDataAdapter data_adapter = new SqlDataAdapter(sql_command);
sql_command.Connection.Open();
data_adapter.Fill(my_dataset);
sql_command.Connection.Close();
The error I get is:
Transaction (Process ID 269) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
As I understand it, simply filling a DataSet via the ADO.NET .Fill() method shouldn't create a lock on the database.
And, it would appear from the error message that the lock is owned by another process.
The view I'm querying against has select statements only, but it does join a few tables together.
Can a view that is only doing a select statement be affected by locked records?
Can/does ADO.NET .Fill() lock records?
Assuming I need to fill a DataSet, is there a way to do so that would avoid potential data locks?
SQL Server 2005 (9.0.4035)
A select query with joins can indeed cause a deadlock. One way to deal with this is to do the query in a SqlTransaction using Snapshot Isolation.
using (SqlTransaction sqlTran = connection.BeginTransaction(IsolationLevel.Snapshot))
{
    // Query goes here.
}
A deadlock can occur because the query locks each table being joined, one after another, before performing the join. If another query has a lock on a table that the other query needs to lock, and vice versa, there is a deadlock. With snapshot isolation, queries that just read from tables do not lock them. Integrity is maintained because the read is actually done from a snapshot of the data at the time the transaction started.
This can have a negative impact on performance, though, because of the overhead of having to produce the snapshots. Depending on the application, it may be better not to use snapshot isolation and instead, if a query fails due to a deadlock, wait a little while and try again.
It might also be better to try to find out why the deadlocks are occurring and change the structure of the database and/or modify the application to prevent deadlocks. This article has more information.
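One caveat worth noting: snapshot isolation is off by default in SQL Server 2005 and must be enabled once per database before BeginTransaction(IsolationLevel.Snapshot) will succeed. A sketch, assuming an open SqlConnection named connection and a made-up database name:
// One-time, per-database setup (normally a DBA task, shown only for completeness).
using (SqlCommand enable = connection.CreateCommand())
{
    enable.CommandText = "ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON";
    enable.ExecuteNonQuery();
}

using (SqlTransaction sqlTran = connection.BeginTransaction(IsolationLevel.Snapshot))
using (SqlCommand query = connection.CreateCommand())
{
    query.Transaction = sqlTran; // the command must be enlisted in the transaction
    query.CommandText = "SELECT * FROM MyView ORDER BY 1";

    using (SqlDataReader reader = query.ExecuteReader())
    {
        // Rows come from a consistent snapshot taken when the transaction started.
    }

    sqlTran.Commit();
}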
You may try this:
Lower the transaction isolation level for that query (for instance, IsolationLevel.ReadUncommitted).
Use the NOLOCK hint on your query.
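Sketched against the question's code (sql_command, data_adapter, my_dataset and MyView come from the original snippet; treat this as illustrative only):
// Option 1: run the Fill inside an explicit read-uncommitted transaction.
using (SqlTransaction tran = sql_command.Connection.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    sql_command.Transaction = tran;
    data_adapter.Fill(my_dataset);
    tran.Commit();
}

// Option 2: keep autocommit and hint the view directly.
sql_command.CommandText = "SELECT * FROM MyView WITH (NOLOCK) ORDER BY 1";
data_adapter.Fill(my_dataset);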
It might be far off and not the solution to your problem - check other solutions first - but we had a similar problem (a select that locks records!) that after much effort we tracked down to the file/SMB layer. It seemed that under heavy load, reading files from the networked drive (SAN) got held up, creating a waiting read lock on the actual database files. This manifested as a lock on the records contained in them.
But this was a race condition and not reproducible without load on the drives. Oh, and it was SQL Server 2005, too.
You should be able to determine, using the tools included with SQL Server, which transactions are deadlocking each other.

Firebird 2.0 transaction SELECT performance

In Firebird 2.0, is using an explicit transaction faster on a SELECT command than executing the command with an implicit one?
All SQL commands (SELECT, INSERT, UPDATE, etc.) can be executed ONLY within some transaction. You cannot run a command without a transaction being started prior to it.
Explicit and implicit transactions are a feature of the component set you're using to access the database, not a feature of Firebird itself. As mentioned before, Firebird always does everything within a transaction. This has a couple of implications for you:
Using an "implicit" transaction can't be faster than using an "explicit" transaction, because from Firebird's point of view a transaction is a transaction, no matter who started it.
Getting the best performance sometimes requires fine control over commits. While the "implicit" transaction can't be faster than the "explicit" one, the explicit transaction might be faster because you can control your StartTransactions and Commits. While you usually want to do all updates to a database within one transaction (so they all succeed or fail as a set), you sometimes want to split operations into multiple groups: if you need to bulk-insert many, many records, you probably want to commit once every 1000 records or so.
Firebird cannot execute SQL commands without a transaction.
PS: You get the best performance results if you commit transactions rather than rolling them back, even if you only called SELECT and changed nothing.
Besides what was already said, take into account that the transaction can be:
Read-Write
Read-Only
For a SELECT it would be best to use a read-only transaction (a .NET sketch follows below).
PS: There are other transaction characteristics, but these two are the important ones for this topic.
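As a rough .NET sketch of a read-only SELECT transaction - this assumes an open FbConnection named connection, and the FbTransactionOptions/FbTransactionBehavior API of recent FirebirdSql.Data.FirebirdClient versions (older provider versions expose a plain options enum instead, so verify against your version):
using FirebirdSql.Data.FirebirdClient;

// Start an explicit read-only, read-committed transaction for the SELECT.
var options = new FbTransactionOptions
{
    TransactionBehavior = FbTransactionBehavior.Read
                        | FbTransactionBehavior.ReadCommitted
                        | FbTransactionBehavior.RecVersion
};

using (FbTransaction tran = connection.BeginTransaction(options))
{
    using (FbCommand cmd = new FbCommand("SELECT * FROM MyTable", connection, tran))
    using (FbDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process the row
        }
    }

    tran.Commit(); // commit rather than roll back, even for read-only work
}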
Usually a transaction adds some overhead. However, you should be careful if you do not have some default transaction started when you connect to Firebird.
In my experience, implicit transactions tend to default to auto-commit retaining, so they should be slower. You can always change the default behaviour.
But I would recommend using explicit transactions, as commit retaining may cause you grief further down the line if it blocks too many transactions. If it does, access to Firebird can slow down dramatically as it traverses all the held-up/blocked transactions to determine the correct value of the data.
Here are some discussions on it:
http://forums.devshed.com/firebird-sql-development-61/difference-active-transaction-863103.html
http://www.slideshare.net/ibsurgeon/3-how-transactionswork

Should I commit or rollback a read transaction?

I have a read query that I execute within a transaction so that I can specify the isolation level. Once the query is complete, what should I do?
Commit the transaction
Rollback the transaction
Do nothing (which will cause the transaction to be rolled back at the end of the using block)
What are the implications of doing each?
using (IDbConnection connection = ConnectionFactory.CreateConnection())
{
    using (IDbTransaction transaction = connection.BeginTransaction(IsolationLevel.ReadUncommitted))
    {
        using (IDbCommand command = connection.CreateCommand())
        {
            command.Transaction = transaction;
            command.CommandText = "SELECT * FROM SomeTable";

            using (IDataReader reader = command.ExecuteReader())
            {
                // Read the results
            }
        }

        // To commit, or not to commit?
    }
}
EDIT: The question is not whether a transaction should be used, or whether there are other ways to set the transaction level. The question is whether it makes any difference that a transaction that does not modify anything is committed or rolled back. Is there a performance difference? Does it affect other connections? Any other differences?
You commit. Period. There's no other sensible alternative. If you started a transaction, you should close it. Committing releases any locks you may have had, and is equally sensible with ReadUncommitted or Serializable isolation levels. Relying on implicit rollback - while perhaps technically equivalent - is just poor form.
If that hasn't convinced you, just imagine the next guy who inserts an update statement in the middle of your code, and has to track down the implicit rollback that occurs and removes his data.
If you haven't changed anything, then you can use either a COMMIT or a ROLLBACK. Either one will release any read locks you have acquired and since you haven't made any other changes, they will be equivalent.
If you begin a transaction, then best practice is always to commit it. If an exception is thrown inside your using(transaction) block, the transaction will be automatically rolled back.
Consider nested transactions.
Most RDBMSes do not support nested transactions, or try to emulate them in a very limited way.
For example, in MS SQL Server, a rollback in an inner transaction (which is not a real transaction; MS SQL Server just counts transaction levels!) will roll back everything that has happened in the outermost transaction (which is the real transaction).
Some database wrappers might consider a rollback in an inner transaction as a sign that an error has occurred and roll back everything in the outermost transaction, regardless of whether the outermost transaction is committed or rolled back.
So a COMMIT is the safe way when you cannot rule out that your component is used by some other software module.
Please note that this is a general answer to the question. The code example cleverly works around the issue with an outer transaction by opening a new database connection.
Regarding performance: depending on the isolation level, SELECTs may require a varying degree of locks and temporary data (snapshots). This is cleaned up when the transaction is closed; it does not matter whether this is done via COMMIT or ROLLBACK. There might be an insignificant difference in CPU time spent - a COMMIT is probably faster to parse than a ROLLBACK (two characters less) - and other minor differences. Obviously, this is only true for read-only operations!
Totally not asked for: another programmer who might get to read the code might assume that a ROLLBACK implies an error condition.
IMHO it can make sense to wrap read-only queries in transactions because (especially in Java) you can mark the transaction as "read-only", which the JDBC driver can in turn use to optimize the query (though it does not have to, so nobody will prevent you from issuing an INSERT nevertheless). E.g. the Oracle driver will completely avoid table locks on queries in a transaction marked read-only, which gains a lot of performance on heavily read-driven applications.
ROLLBACK is mostly used in case of an error or exceptional circumstances, and COMMIT in the case of successful completion.
We should close transactions with COMMIT (for success) and ROLLBACK (for failure), even in the case of read-only transactions where it doesn't seem to matter. In fact it does matter, for consistency and future-proofing.
A read-only transaction can logically "fail" in many ways, for example:
a query does not return exactly one row as expected
a stored procedure raises an exception
data fetched is found to be inconsistent
user aborts the transaction because it's taking too long
deadlock or timeout
If COMMIT and ROLLBACK are used properly for a read-only transaction, it will continue to work as expected if DB write code is added at some point, e.g. for caching, auditing or statistics.
Implicit ROLLBACK should only be used for "fatal error" situations, when the application crashes or exits with an unrecoverable error, network failure, power failure, etc.
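In code, the commit-on-success / rollback-on-logical-failure pattern looks roughly like this (a sketch using the connection and IDb* types from the question):
using (IDbTransaction transaction = connection.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    try
    {
        // ... run the read-only queries and validate the results ...
        transaction.Commit();   // success path
    }
    catch
    {
        transaction.Rollback(); // logical failure: bad data, timeout, deadlock, ...
        throw;
    }
}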
Just a side note, but you can also write that code like this:
using (IDbConnection connection = ConnectionFactory.CreateConnection())
using (IDbTransaction transaction = connection.BeginTransaction(IsolationLevel.ReadUncommitted))
using (IDbCommand command = connection.CreateCommand())
{
    command.Transaction = transaction;
    command.CommandText = "SELECT * FROM SomeTable";

    using (IDataReader reader = command.ExecuteReader())
    {
        // Do something useful
    }

    // To commit, or not to commit?
}
And if you re-structure things just a little bit, you might be able to move the using block for the IDataReader up to the top as well, as shown below.
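For example, with a small (made-up) helper that packages the command setup, all four usings can stack at the top:
static IDbCommand CreateSelectCommand(IDbConnection connection, IDbTransaction transaction)
{
    IDbCommand command = connection.CreateCommand();
    command.Transaction = transaction;
    command.CommandText = "SELECT * FROM SomeTable";
    return command;
}

using (IDbConnection connection = ConnectionFactory.CreateConnection())
using (IDbTransaction transaction = connection.BeginTransaction(IsolationLevel.ReadUncommitted))
using (IDbCommand command = CreateSelectCommand(connection, transaction))
using (IDataReader reader = command.ExecuteReader())
{
    // Read the results
}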
If you put the SQL into a stored procedure and add this above the query:
set transaction isolation level read uncommitted
then you don't have to jump through any hoops in the C# code. Setting the transaction isolation level in a stored procedure does not cause the setting to apply to all future uses of that connection (which is something you have to worry about with other settings since the connections are pooled). At the end of the stored procedure it just goes back to whatever the connection was initialized with.
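A sketch of that approach - the procedure name is made up, and the SET reverts to the connection's previous level as soon as the procedure returns:
/* T-SQL side, created once:

CREATE PROCEDURE dbo.GetSomeTableDirty
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT * FROM SomeTable;
END
*/

// C# side: no transaction or isolation plumbing required.
using (IDbCommand command = connection.CreateCommand())
{
    command.CommandType = CommandType.StoredProcedure;
    command.CommandText = "dbo.GetSomeTableDirty";

    using (IDataReader reader = command.ExecuteReader())
    {
        // Read the results
    }
}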
Given that a READ does not change state, I would do nothing. Performing a commit will do nothing except waste a round trip to the database; you haven't performed an operation that has changed state. Likewise for the rollback.
You should, however, be sure to clean up your objects and close your connections to the database. Not closing your connections can lead to issues if this code gets called repeatedly.
If you set AutoCommit to false, then YES, you should commit.
In an experiment with JDBC (PostgreSQL driver), I found that if a select query breaks (because of a timeout), then you cannot initiate a new select query unless you roll back.
Do you need to block others from reading the same data? Why use a transaction?
@Joel - My question would be better phrased as "Why use a transaction on a read query?"
@Stefan - If you are going to use ad hoc SQL and not a stored proc, then just add the WITH (NOLOCK) after the tables in the query. This way you don't incur the overhead (albeit minimal) in the application and the database for a transaction.
SELECT * FROM SomeTable WITH (NOLOCK)
EDIT @ Comment 3: Since you had "sqlserver" in the question tags, I had assumed MSSQLServer was the target product. Now that that point has been clarified, I have edited the tags to remove the specific product reference.
I am still not sure why you want to use a transaction on a read operation in the first place.
In your code sample, where you have
// Do something useful
Are you executing a SQL statement that changes data?
If not, there's no such thing as a "read" transaction... Only changes from INSERT, UPDATE, and DELETE statements (statements that can change data) are in a transaction. What you are talking about are the locks that SQL Server puts on the data you are reading, because of OTHER transactions that affect that data. The level of these locks is dependent on the SQL Server isolation level.
But you cannot COMMIT or ROLL BACK anything if your SQL statement has not changed anything.
If you are changing data, then you can change the isolation level without explicitly starting a transaction... Every individual SQL statement is implicitly in a transaction. Explicitly starting a transaction is only necessary to ensure that two or more statements are within the same transaction.
If all you want to do is set the transaction isolation level, then just set a command's CommandText to "SET TRANSACTION ISOLATION LEVEL REPEATABLE READ" (or whatever level you want), set the CommandType to CommandType.Text, and execute the command (you can use Command.ExecuteNonQuery()).
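A sketch of that, assuming the open connection from the question's code. Note that, unlike the stored-procedure trick mentioned earlier, this setting remains on the connection afterwards, which is the pooling concern mentioned above:
using (IDbCommand setIsolation = connection.CreateCommand())
{
    setIsolation.CommandType = CommandType.Text;
    setIsolation.CommandText = "SET TRANSACTION ISOLATION LEVEL REPEATABLE READ";
    setIsolation.ExecuteNonQuery();
}
// Subsequent reads on this connection now run at repeatable read until the
// level is changed again or the connection is reset.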
NOTE: If you are doing MULTIPLE read statements and want them all to "see" the same state of the database as the first one, then you need to set the isolation level to Repeatable Read or Serializable...