NHibernate concurrency

I am new to NHibernate. I am curious what happens if two processes on different machines pull up the same record at the same time, both modify it, and one submits its change before the other. Will the second process roll back its transaction and throw an error saying the record has already been updated?

No, not by default. However, adding a <version> mapping to the entity enables NHibernate's optimistic concurrency control, which gives you exactly that behaviour. See the NHibernate documentation on version mapping and optimistic concurrency control.
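To illustrate, here is a rough sketch of what that looks like from calling code, assuming a hypothetical Order entity whose mapping contains a <version> element (sessionFactory, Order, orderId and the Status property are illustrative names, not from the question). The losing session's commit fails with a StaleObjectStateException instead of silently overwriting the other update.

    // Both processes load and modify the same row; Order is assumed to be mapped with <version name="Version" />.
    using (var session = sessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        var order = session.Get<Order>(orderId);
        order.Status = "Shipped";
        try
        {
            // The flush issues "UPDATE ... WHERE Id = @id AND Version = @loadedVersion";
            // zero affected rows means another process updated the record first.
            tx.Commit();
        }
        catch (StaleObjectStateException)
        {
            // Conflict detected: reload the entity and retry, or report the error to the user.
        }
    }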

Related

Can Oracle return a timeout when another connection is already using the same table?

If I need to run a DML statement (insert, update, delete) against a table, Oracle first checks whether there is an active DML operation using that table. If there is, my connection waits until it has finished.
Is there a way to get a timeout in these cases? Not globally, only for specific cases.
Edit: more details about the problem.
I am not sure whether any kind of lock is actually used. In my case there is an old application written in Oracle Forms and a new application written by me.
The problem is that when a user opens a specific record to update a field in the old application, and I then try to edit the same record in my app, the row is blocked.
So my app waits for the row to be unlocked. The problem is that the user thinks the application is frozen and kills it, losing the changes.
This does not happen when another Oracle Forms application tries to edit the record: Oracle Forms displays the message "Could not reserve record (2). Keep trying?". Maybe that is because the old app uses some kind of lock, but I need to handle this check in my own code.
Note: the number 2 is the number of attempts to update.
If you do a LOCK TABLE ... WAIT, it will wait until any in-flight DML on that table commits and then give you the lock. Anyone coming after you will in turn wait until you release the lock. Look at the documentation to see how to use it.
Then there is the possibility of locking a single row (SELECT ... FOR UPDATE), which is more granular.
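For the specific case described in the edit (failing or timing out instead of hanging while Oracle Forms holds the row), SELECT ... FOR UPDATE NOWAIT, or FOR UPDATE WAIT n for a timeout of n seconds, is the usual tool. Below is a rough sketch from .NET using the Oracle managed provider; the table, column and variable names are illustrative, and ORA-00054 is the "resource busy" error raised when the row is already locked.

    // Try to lock the row without waiting; show a message on conflict instead of freezing.
    using (var conn = new OracleConnection(connectionString))
    {
        conn.Open();
        var tx = conn.BeginTransaction();
        var cmd = new OracleCommand(
            "SELECT id FROM orders WHERE id = :id FOR UPDATE NOWAIT", conn);   // or FOR UPDATE WAIT 5
        cmd.Transaction = tx;
        cmd.Parameters.Add(new OracleParameter("id", orderId));
        try
        {
            cmd.ExecuteScalar();        // acquires the row lock, or throws immediately
            // ... perform the update here, then tx.Commit();
        }
        catch (OracleException ex) when (ex.Number == 54)   // ORA-00054: resource busy, NOWAIT specified
        {
            tx.Rollback();
            // Tell the user the record is being edited elsewhere, like Oracle Forms does.
        }
    }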
That being said, can you explain what exactly you are trying to do? You may not need any of this.

What is the reason for an SQLite rollback?

I am using serialized access to an SQLite database. All of the threads use the same database handle.
sqlite3_config(SQLITE_CONFIG_SERIALIZED);
During a transaction involving many insert statements, I am deleting rows in another thread. Both are trying to modify the same table.
The transaction is getting rolled back. I want to know whether this could be the reason for the rollback.
Can you please help me find the issue? Thanks in advance.
Regards,
Rajeev
One connection has one transaction.
Therefore, when using multiple threads, you should use one connection for each thread.
SQLite's threading modes can prevent the database structures themselves from becoming corrupted, but when multiple threads try to do anything with the database at the same time, they will still interfere with each other's data.
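A rough sketch of the one-connection-per-thread advice, shown here with the Microsoft.Data.Sqlite .NET provider rather than the C API (the table, item tuples and file name are made up): each thread opens its own connection and transaction, so the bulk insert and the concurrent delete no longer interleave on a single shared handle.

    // Runs on the inserting thread; the deleting thread opens its own SqliteConnection the same way.
    void InsertBatch(IEnumerable<(long Id, string Payload)> items)
    {
        using var conn = new SqliteConnection("Data Source=app.db");
        conn.Open();
        using var tx = conn.BeginTransaction();
        foreach (var (id, payload) in items)
        {
            var cmd = conn.CreateCommand();
            cmd.Transaction = tx;
            cmd.CommandText = "INSERT INTO items(id, payload) VALUES ($id, $payload)";
            cmd.Parameters.AddWithValue("$id", id);
            cmd.Parameters.AddWithValue("$payload", payload);
            cmd.ExecuteNonQuery();
        }
        tx.Commit();    // a failure here rolls back only this thread's work
    }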

Firebird lock table / lock record

Suppose you have one table for a desktop application and several users.
When a user opens a record, I want to lock this record. I have tried the WITH LOCK statement, and it works fine.
But when a second user wants to update the same record, I want to show a message: "Sorry, you cannot work on this order because it is locked. Somebody else opened this record before you." Instead, Firebird waits for the first user to commit or roll back. I don't want to wait; I want to show an error message. Is there a simple way to ask Firebird for a record's lock status?
Is there a way to lock a whole table? Or to use a semaphore/mutex (like GET_LOCK in MySQL)?
I have tried RESERVING on the SET TRANSACTION statement but it does not work.
What I want is to display a message to the user, not to wait.
Thanks
If you don't want to wait, then configure your transaction to use NO WAIT, or a wait timeout. However, controlling business rules like this through database transactions is not advisable, as it requires long-running transactions, which inhibit garbage collection, lengthen the chain of interesting transactions, and increase the chance of update conflicts.
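For illustration, with the FirebirdSql.Data.FirebirdClient ADO.NET provider the transaction can be started with the NoWait behaviour so that a conflicting update fails immediately. Treat this as a sketch: the option names (FbTransactionOptions, FbTransactionBehavior) may differ between provider versions, and the connection, table and variable names are made up.

    // Ask for a NO WAIT transaction: an update against a row locked by another user
    // raises a lock-conflict error at once instead of blocking until that user finishes.
    var options = new FbTransactionOptions
    {
        TransactionBehavior = FbTransactionBehavior.Concurrency
                            | FbTransactionBehavior.RecVersion
                            | FbTransactionBehavior.NoWait
    };
    using (var tx = connection.BeginTransaction(options))
    using (var cmd = new FbCommand("UPDATE orders SET status = @s WHERE id = @id", connection, tx))
    {
        cmd.Parameters.AddWithValue("@s", "in progress");
        cmd.Parameters.AddWithValue("@id", orderId);
        try
        {
            cmd.ExecuteNonQuery();
            tx.Commit();
        }
        catch (FbException)
        {
            tx.Rollback();
            // "Sorry, you cannot work on this order because it is locked..."
        }
    }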
I'd advise using different options instead, such as:
First to update wins
Change detection (e.g. a timestamp or record version counter that is also used as a condition in the update statement), allowing the user to overwrite, abandon or maybe merge his update
Explicit reservation by updating the record (setting the username) in a separate transaction; a sketch of this follows at the end of this answer. This might require cleanup, or the ability for a user to break the reservation (e.g. if someone has had it open for too long).
Note that Firebird uses multi-version concurrency control (MVCC), so explicit locking is not really natural to it. See also this answer to Locking tables firebird, delphi.
Locking tables using RESERVING should be possible, but I have never used it, so I am not entirely sure how; you probably also need to specify FOR PROTECTED READ (see the InterBase 6.0 Embedded SQL Guide, pages 70-71).
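Here is also a rough sketch of the explicit-reservation option mentioned above, again using the .NET provider with made-up table and column names. The reservation is an ordinary committed UPDATE in its own short transaction, so no row is left locked while the user has the form open.

    // Try to claim the record for the current user; zero affected rows means someone else holds it.
    using (var tx = connection.BeginTransaction())
    using (var cmd = new FbCommand(
        "UPDATE orders SET locked_by = @user, locked_at = CURRENT_TIMESTAMP " +
        "WHERE id = @id AND locked_by IS NULL", connection, tx))
    {
        cmd.Parameters.AddWithValue("@user", currentUser);
        cmd.Parameters.AddWithValue("@id", orderId);
        bool reserved = cmd.ExecuteNonQuery() == 1;
        tx.Commit();
        if (!reserved)
        {
            // Somebody else has opened this record; show the message instead of waiting.
        }
    }
    // Clear locked_by when the user saves or cancels, and consider a cleanup job
    // (or an age check on locked_at) so abandoned reservations can be broken.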

Can I use NHibernate's AdoNetTransactionFactory with distributed transactions?

I am dealing with a strange issue related to NHibernate and distributed transactions in a WCF service. See Deadlocks causing 'Server failed to resume the transaction' with NHibernate and distributed transactions for more details.
One thing that seems to solve my problem is using NHibernate's AdoNetTransactionFactory, instead of AdoNetWithDistributedTransactionsFactory.
I believe that the AdoNetWithDistributedTransactionsFactory is involved with making NHibernate's second-level caching mechanism work right, but we're not using that. What (if any) other problems exist with using AdoNetTransactionFactory with distributed transactions?
Thanks for your time!
I notice that you mentioned in your other question/answer:
SqlConnection class is not thread-safe, and that includes closing the connection
on a separate thread. Based on this response we have filed a
bug report for NHibernate.
However, from NHibernate's documentation:
11.2. Threads and connections
You should observe the following practices when creating NHibernate Sessions:
Never create more than one concurrent ISession or ITransaction instance per database connection.
Be extremely careful when creating more than one ISession per database per transaction. The ISession itself keeps track of updates made to loaded objects, so a different ISession might see stale data.
The ISession is not threadsafe! Never access the same ISession in two concurrent threads. An ISession is usually only a single unit-of-work!
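In code, those guidelines translate into the session-per-unit-of-work pattern sketched below: the shared ISessionFactory is thread-safe, while each thread (or request, or unit of work) opens and disposes its own ISession and ITransaction. The Customer entity and the method itself are illustrative names, not from the question.

    // One short-lived ISession and ITransaction per unit of work; never share them across threads.
    public void RenameCustomer(ISessionFactory sessionFactory, int customerId, string newName)
    {
        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            var customer = session.Get<Customer>(customerId);
            customer.Name = newName;
            tx.Commit();
        }
    }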
If you are trying to multi-thread the connection with NHibernate, perhaps it is just not going to work. Have you considered a different ORM such as Entity Framework?
No matter what ORM you choose though, the database connection will not be thread safe. This is universal.
"many DB drivers are not thread safe. Using a singleton means that if you have many threads, they will all share the same connection. The singleton pattern does not give you thread saftey. It merely allows many threads to easily share a "global" instance." - https://stackoverflow.com/a/6507820/1026459
Using AdoNetTransactionFactory with distributed system transactions will cause those transactions to be ignored by NHibernate, which has the following consequences:
ConnectionReleaseMode.AfterTransaction will not be honored. Instead, NHibernate will release the connection after each statement, and so will re-acquire a connection from the pool for the next one. Depending on your data provider, this may trigger escalation of the transaction to distributed.
FlushMode.Commit will not be honored. Explicit flushes will be required instead. (Auto flushes before queries may still occur.)
Work that needs to be isolated from the current system transaction will still be included inside it (unless the connection string's Enlist property is false). Such work may include id generator queries, such as retrieving the next high value for a table hilo generator. If the transaction gets rolled back, NHibernate may then use conflicting ids.
The NHibernate session will not be able to correctly track the locks it holds on entities. Considering itself outside of a transaction, it will consider that it holds no lock on them. So it may try (on user code request, for example) to re-lock them with a lower lock level than the one the transaction already holds on them in the database. Not sure what outcome could result from that. (At best, ignored, at worst...)
The second-level cache will be disabled as soon as you start modifying data. NHibernate sort of "invalidates" cache entries in that situation and re-enables them, updated, only on transaction completion. But since it will not be aware of the transaction...
Some extensions (maybe Envers) may rely on NHibernate transaction events, and will no longer work as expected.
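As a concrete consequence of the FlushMode.Commit point above, if you keep AdoNetTransactionFactory and rely only on an ambient System.Transactions scope, you have to flush the session yourself before completing the scope. A hedged sketch (the entity and variable names are illustrative):

    // With AdoNetTransactionFactory, NHibernate ignores the ambient transaction,
    // so nothing triggers an automatic flush on commit; flush explicitly instead.
    using (var scope = new TransactionScope())
    using (var session = sessionFactory.OpenSession())
    {
        var order = session.Get<Order>(orderId);
        order.Status = "Shipped";
        session.Flush();        // otherwise the UPDATE may never be issued
        scope.Complete();
    }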
I strongly recommend upgrading to NHibernate 3.2 (or a version close to it). Why? Since 2.1 there have been significant improvements (read: a rewrite) to the AdoNetWithDistributedTransactionFactory. As a matter of fact, it now handles TransactionScopes/ambient transactions and the like correctly. When we ran 2.1 in production we encountered many issues related to distributed transactions; we pretty much had to fix a ton of stuff ourselves and recompile NHibernate. 3.2 seems to have fixed many issues around the subject.
I don't have the source near me but, if memory doesn't fail me, the AdoNetTransactionFactory doesn't check/handle ambient transactions. So you are down to NHibernate only starting transactions when you begin one explicitly on the session (by means of ISession.BeginTransaction()).

What locks are enforced by SQL Server 2005 Express?

Consider a web page with a GridView connected to a SqlDataSource that has permission to insert, update and delete.
Publish the web page.
This is all on one local computer.
Now
Opening the website in browser A and pressing Edit on the grid-view.
Opening the website in browser B and pressing Edit on the grid-view.
Now I edit in both browsers and press Update one after the other: fine, no problem.
The last update is the one retained.
But consider a hypothetical situation:
what if there were two computers, or
what if I had two mouse pointers controlled by two independent mice?
The computer is capable of running two apps at the same time.
Both users get ready and press Update in their browsers at the same time.
Even with two different computers this is not really possible, but for this question
consider it possible:
an update from two different sources to the same database, same table, same row,
at the same time, same second, same microsecond, no delay; both hit the database server at the same instant.
What will happen?
In theory I have studied that database management systems implement locks (while writing: no reading, no other writing, etc.), but does SQL Server 2005 Express implement locks in practice, or is it assumed that a situation like the above will never occur?
If there are locks, please provide an explanation or a resource that explains them, covering the different access scenarios.
Thank you
Edit: I am not using a control like SqlDataSource, so please show, in terms of plain SQL statements, how to avoid a blind update.
It goes roughly like this:

    SqlConnection conn = new SqlConnection(connectionString);  // connection string elided in the original
    SqlCommand cmd = conn.CreateCommand();
    cmd.CommandText = "sql statement for updating values of a particular row";
    conn.Open();
    cmd.ExecuteNonQuery();
    conn.Close();

So, as seen above, how can I add a check before ExecuteNonQuery so that, if the data has recently been changed, the user is asked "are you sure you want to proceed?" or something similar?
I am kind of confused here, I think.
This is solved by most applications using optimistic concurrency control. Applications simply add more conditions to the UPDATE's WHERE clause in order to detect changes that occurred between the time the data was read and the moment the update is applied. It is called optimistic concurrency because the application assumes no concurrent changes will occur, and if they do occur they are detected and the application has to restart the operation. The alternative to optimistic concurrency is pessimistic concurrency, where the application explicitly locks the data it plans to update. In practice, operations involving user interaction are never done under the pessimistic concurrency model.
Another concurrency model, especially in distributed applications, is the one implied by the Fiefdoms and Emissaries model.
So while database locks and transaction concurrency models are always present in any database operation, when user interaction is involved no application should rely on database locks. User interactions are simply far too long in terms of database transactions. Holding locks while forgetful Fred is out to lunch with a data screen open on his desktop simply doesn't work.
SQL Server 2005 will enforce locks. Before a row can be updated, the transaction must acquire an exclusive lock on it. Only one transaction can be granted this lock at a time, so the other one will have to wait for that transaction to commit (two-phase locking) before being granted the lock it needs for the update.
The second write will "win" in that it will overwrite the first one. You can implement optimistic concurrency controls in the SqlDataSource to detect that the row has changed and abort the second update rather than blindly overwriting the first edit.
Edit
Following clarification of the question: if you want to roll your own, you could add a timestamp column to the table (in SQL Server 2005 this is updated automatically whenever a row is updated), bring it back as a hidden data item in the GridView, and then add it to the WHERE clause of your UPDATE statement: UPDATE ... WHERE PrimaryKeyColumn = @PKValue AND TimeStampCol = @OriginalTimestampValue. If no rows were affected (generally retrievable from ExecuteNonQuery), then another transaction modified the row. This might be a bit more lightweight than the alternative used by the data source control, which passes back the original values of all columns and adds them into the WHERE clause with similar logic.
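A rough sketch of that roll-your-own check in plain ADO.NET follows; the table, column and variable names are illustrative, RowVer is the timestamp/rowversion column, and originalRowVer is the value read when the row was first loaded.

    // Update only if nobody has touched the row since we read it.
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "UPDATE Orders SET Status = @Status " +
        "WHERE OrderId = @OrderId AND RowVer = @OriginalRowVer", conn))
    {
        cmd.Parameters.AddWithValue("@Status", newStatus);
        cmd.Parameters.AddWithValue("@OrderId", orderId);
        cmd.Parameters.AddWithValue("@OriginalRowVer", originalRowVer);
        conn.Open();
        int rowsAffected = cmd.ExecuteNonQuery();
        if (rowsAffected == 0)
        {
            // The row was changed (or deleted) by someone else after it was read:
            // ask the user whether to overwrite, or re-read and show the new values.
        }
    }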