NHibernate transaction isolation level during persist - nhibernate

I have a question. Imagine an object you want to save in a transaction, the object having collections of other objects and so on, so it's a more "complex" object.
We sometimes save objects like that, but in the meantime another thread occasionally reads that data and synchronizes it up to our central server. However, we've noticed that on some occasions objects get synced over without all of their collection objects.
Since this only happens every once in a while, we figure it could be the transaction isolation level. Maybe the synchronization thread reads the data before the transaction has finished persisting all the objects, thus reading only half the data needed and sending it over.
We know that the client's data is always saved in full; it just sometimes doesn't all come along when it's sent to us.
So we'd want some kind of lock, I suppose, but I don't know anything about these locks. Which one should we use?
There are no outside processes working against the database in this case, since it's a WPF application running at a client's customer.
Any help would be appreciated!
Best regards,
E.

Every database supports a set of standard isolation levels. These are all meant to prevent, to varying degrees, reading data that is being modified inside another transaction. I suggest you first read up on what these isolation levels mean.
In your specific situation, I'd suggest that for the transaction that is reading the data, you use at least an isolation level of ReadCommitted. In code, this would look like this:
using (var transactionScope = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted }))
{
    // Read the data you want from the database.
    ...
    transactionScope.Complete();
    // Return the data.
}
Using a TransactionScope with IsolationLevel.ReadCommitted prevents you from reading data that has not yet been committed by another transaction.
The code that writes data to the database should also be put inside one transaction. As long as you only write data inside that transaction, the isolation level for that transaction doesn't matter. This guarantees the atomicity of your updates: either all updates succeed or none of them do. It also prevents another transaction (reading at ReadCommitted or higher) from seeing a partial update.
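For the write side, a minimal sketch could look like the following; the session variable (an open NHibernate ISession) and the order entity with cascading collection mappings are assumptions for illustration, not code from the question:
using (var tx = session.BeginTransaction())
{
    // Save the parent; mapped collections cascade with it if cascade="all" (or similar) is configured.
    session.SaveOrUpdate(order);
    // Nothing becomes visible to ReadCommitted readers until this commit.
    tx.Commit();
}
Keeping the whole object graph inside one transaction is what guarantees that the synchronization thread sees either none of the new data or all of it.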

Related

Nested transactions for testing

I was wondering whether it's good practice to nest two transactions, for example wrapping my NHibernate transaction in a TransactionScope for the benefit of the tests (making sure that the DB rolls back all the changes that were made in the test).
The other option is to keep the entities that I insert into the Db in memory and delete them at the end of the test.
Which one is better?
First of all, NHibernate doesn't support nested transactions!
TransactionScope, on the other hand, will not create a new transaction if there is already one open. If you only use a TransactionScope, it will create a new transaction for the connection.
If you then open an NHibernate transaction within the scope, this will still work.
Back to your question: it pretty much depends on the number of objects you create within the TransactionScope. If there are too many, you will simply flood the transaction log of your database. Apart from that, I would say the concept is perfectly fine.
One important thing to mention: if you use TransactionScope and create multiple sessions/transactions with NHibernate, the scope might escalate to a distributed transaction, which requires MSDTC to be running on the target server; otherwise it will simply fail.
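As a minimal sketch of that test pattern (assuming an NUnit-style test, a hypothetical CreateSession() helper returning an ISession, and a hypothetical Customer entity), the ambient TransactionScope is simply disposed without calling Complete(), so everything the NHibernate transaction flushed is rolled back:
[Test]
public void Saved_entities_are_rolled_back_after_the_test()
{
    using (var scope = new System.Transactions.TransactionScope())
    using (var session = CreateSession())          // hypothetical session factory helper
    using (var tx = session.BeginTransaction())    // NHibernate transaction nested inside the scope
    {
        session.Save(new Customer { Name = "test" });
        tx.Commit();                               // flushes to the database inside the ambient scope
        // scope.Complete() is intentionally not called,
        // so the ambient transaction rolls back on dispose.
    }
}
Note that opening more than one session inside the scope can trigger the MSDTC escalation mentioned above.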

Mixing eventual consistency systems and legacy ACID systems

Are there any patterns for mixing eventual-consistency systems with legacy ACID systems?
I want to store data in some (at least two) legacy systems on the mainframe that need ACID-like transactions. Those mainframe databases (let us call them OldWorld) run under the same transaction manager in the same process, so the consistency of the mainframe systems is no problem.
I have a transaction manager that can handle XA transactions with the mainframe TM and with the ACID-capable relational database in the non-mainframe environment (let us call this NewWorld).
But I do not want to use XA transactions because they often cause trouble with long-running locks on the mainframe side, and in many cases I do not need all ACID features across both worlds. I always want a consistent mainframe (all data in the OldWorld is consistent inside the OldWorld). The NewWorld system can handle inconsistent data (inconsistency between New and Old) when it reads data from the mainframe side. The operations used to store data in the OldWorld are simple and safe "add-only operations" which cannot fail functionally (they can fail technically, but that should always be a temporary failure).
My idea for working around the need for a distributed transaction is to update the data in the OldWorld asynchronously and use an event-sourcing data layer (in the NewWorld) to store the information about what needs to be done in the OldWorld, using "soft-transaction-ids" to prevent double-submitting to the OldWorld. These "soft-transaction-ids" are generated while storing the data in the event-sourcing data layer for a transaction that needs to be done in the OldWorld.
I don't have the chance to add my "soft-transaction-ids" to the OldWorld databases, but I can add a new database that can store a "Done" state beside the "soft-transaction-id" and make the update of this database part of the OldWorld transactions. Then another async process can read the state information without any locking and update the NewWorld (e.g. update the relational model with data from the event-sourcing store and mark the soft-transaction-id as done, i.e. "globally consistent"). The update of the OldWorld will always check first whether the soft-transaction-id has already been committed.
As I read through my writing I get the feeling that this is like a global transaction, just with less locking. The knowledge that my update to the OldWorld will functionally succeed is essential; without that you need a manual merge process which can handle the functional conflicts. The NewWorld systems need the ability to handle an inconsistent global state. That can be done by reading the relational database and mimicking the OldSystem data requests by analysing the event-store entries that have not yet been committed into the OldWorld database. For all other transactions I need to use distributed transactions with their locking behaviour.
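As a rough illustration of the dedup step only, here is a minimal sketch; every type in it (IEventStore, IDoneStateStore, IOldWorldGateway, PendingUpdate) is a hypothetical placeholder for the stores described above, not an existing API:
public interface IEventStore { IEnumerable<PendingUpdate> GetPendingOldWorldUpdates(); }
public interface IDoneStateStore { bool IsDone(Guid softTransactionId); }
public interface IOldWorldGateway { void Apply(PendingUpdate update, Guid softTransactionId); }
public class PendingUpdate { public Guid SoftTransactionId { get; set; } /* payload fields omitted */ }

public class OldWorldSynchronizer
{
    private readonly IEventStore eventStore;     // NewWorld event-sourcing layer
    private readonly IDoneStateStore doneStore;  // the extra database holding the "Done" state
    private readonly IOldWorldGateway oldWorld;  // the add-only mainframe operations

    public OldWorldSynchronizer(IEventStore eventStore, IDoneStateStore doneStore, IOldWorldGateway oldWorld)
    {
        this.eventStore = eventStore;
        this.doneStore = doneStore;
        this.oldWorld = oldWorld;
    }

    // Replays pending OldWorld updates at most once per soft-transaction-id.
    public void ProcessPending()
    {
        foreach (PendingUpdate update in eventStore.GetPendingOldWorldUpdates())
        {
            // Skip anything the OldWorld transaction already marked as done,
            // so a retry after a technical failure cannot double-submit.
            if (doneStore.IsDone(update.SoftTransactionId))
                continue;

            // The add-only operation and the "Done" marker are written in the
            // same OldWorld transaction, as described above.
            oldWorld.Apply(update, update.SoftTransactionId);
        }
    }
}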

Core Data: how to undo operations once managed objects are saved with context

I am trying to implement downloading of bulk data from several tables on the server.
In my case there are 16 tables. For all these tables I will be firing 10 requests to the server. This means I have done a bit of logical grouping of related tables, but it is as if all tables are inter-related with each other through one relationship or another.
I need to consider three cases while doing downloading:
Saving data to each table at local.
Managing relationships between inserted objects.
Handling the situation when one of the requests fails during download, say the 8th request.
I will be following this approach for each response:
Inserting data in managed object context.
Managing relationships by firing NSPredicate and associating the related objects.
Saving the context.
In case of a response failure, I have two options:
Next time continue from the failed response.
Revert all saved data to its previous state.
The 1st approach may lead to some data inconsistency, so I am going with the 2nd approach.
I know that if a managed object context is not saved, we can revert the changes, but is it possible to revert the changes if the managed object context has been saved?
I require some useful answers from the community.
Please suggest.
Is it possible to revert the changes, if the managed object context is saved?
After saving? Maybe, but it could be tricky. If you set up a separate managed object context for your network operations and give it an NSUndoManager, you could later tell the undo manager to roll everything back to the previous state.
It would be simpler to just not save changes until you're finished, though. Using an undo manager doesn't really help much: the memory needed to store all the undo actions will at least match the memory used by keeping all of the unsaved changes around until you're finished. If you're working in a separate managed object context (whether a child context or a completely separate context), handling the error case is as simple as letting the MOC be deallocated without saving changes first.

NHibernate transaction problem - IsolationLevel.Serializable

I have a task that takes quite a long time. So I would like to let several programs/threads/computers execute the same task to speed things up. Each task requires unique ids which are stored in a db – so I thought these ids could be obtained like this:
NHibernateSession.Current.BeginTransaction(IsolationLevel.Serializable);
list = NHibernateSession.Current.CreateCriteria<RelevantId>().SetFirstResult(0).SetMaxResults(500).List<RelevantId>();
foreach (RelevantId x in list)
{
    RelevantIdsRepository.Delete(x);
}
NHibernateSession.Current.Transaction.Commit();
Unfortunately, this throws an exception after a while if several processes access the database (the number of deleted objects is not the same as the batch size). Why is this? The isolation level of the db should be OK, shouldn't it? Thanks.
Best wishes,
Christian
I'm not sure that I understand what you are doing here. It looks like each process should take some ids and process them, but no two processes should take the same ones.
It doesn't work the way you implemented it. All processes read the same ids. Only after the transaction commits do they disappear from the database; until then they are visible to everyone. The isolation level only makes sure that other transactions can't read them after they have been deleted, but until then they can all read them.
It's not so easy to distribute the load. You could:
maintain the ids in a table where each process registers itself as the executor and commits that before starting (handling conflicts, e.g. StaleObjectStateException); make sure to clean it up even when a process crashes, or
write a central service which distributes the ids.
The problem that it runs slowly is possibly due to the fact that you perform multiple SQL statements in a loop.
You should see whether it is possible to delete all the entities in one batch statement, as sketched below.
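For the batch delete, a minimal sketch using an HQL bulk delete (assuming RelevantId exposes an Id identifier property; the session access mirrors the question's NHibernateSession.Current):
var session = NHibernateSession.Current;
using (var tx = session.BeginTransaction(IsolationLevel.Serializable))
{
    // Read the ids to claim (same paging as before).
    var ids = session.CreateCriteria<RelevantId>()
                     .SetFirstResult(0)
                     .SetMaxResults(500)
                     .List<RelevantId>()
                     .Select(x => x.Id)   // Id is assumed to be the identifier property
                     .ToList();

    // Delete them with one bulk statement instead of one DELETE per entity.
    session.CreateQuery("delete from RelevantId where Id in (:ids)")
           .SetParameterList("ids", ids)
           .ExecuteUpdate();

    tx.Commit();
}
This only addresses the performance of the deletes; it does not by itself stop two processes from reading and claiming the same ids, which still needs one of the coordination approaches above.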

NHibernate, ActiveRecord, Transaction database locks and when Commits are flushed

This is a common question, but the explanations found so far and observed behaviour are some way apart.
We want the following NHibernate strategy in our MVC website:
A SessionScope for the request (to track changes)
An ActiveRecord.TransactionScope to wrap our inserts only (to enable rollback/commit of the batch)
Selects to be outside a Transaction (to reduce extent of locks)
Delayed Flush of inserts (so that our insert/updates occur as a UoW at end of session)
Now currently we:
Don't get the implied transaction from the SessionScope (with FlushAction Auto or Never)
If we use ActiveRecord.TransactionScope there is no delayed flush, and any contained SELECTs are also caught up in a long-running transaction.
I'm wondering if it's because we have an old version of NHibernate (it was from trunk very near 2.0).
We just can't get the expected NHibernate behaviour, and performance sucks (using NHProf and SqlProfiler to monitor db locks).
Here's what we have tried since:
Written our own TransactionScope (implementing ITransactionScope) that:
Opens an ActiveRecord.TransactionScope on Commit, not in the ctor (delays the transaction until needed)
Opens a 'SessionScope' in the ctor if none are available (as a guard)
Converted our ids to Guid from identity
This stopped the auto flush of insert/update outside of the Transaction (!)
Now we have the following application behaviour:
Request from MVC
SELECTs needed by services are fired, all outside a transaction
Repository.Add calls do not hit the db until scope.Commit is called in our Controllers
All INSERTs / UPDATEs occur wrapped inside a transaction as an atomic unit, with no SELECTs contained.
... But for some reason NHProf now != SqlProfiler (SELECTs seem to happen in the db before NHProf reports them).
NOTE
Before I get flamed I realise the issues here, and know that the SELECTs aren't in the Transaction. That's the design. Some of our operations will contain the SELECTs (we now have a couple of our own TransactionScope implementations) in serialised transactions. The vast majority of our code does not need up-to-the-minute live data, and we have serialised workloads with individual operators.
ALSO
If anyone knows how to get an identity column (non-PK) refreshed post-insert without needing to manually refresh the entity, in particular by using ActiveRecord markup (I think it's possible in NHibernate mapping files using a 'generated' attribute), please let me know!
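In plain NHibernate mapping files the relevant attribute is generated on the property. A minimal, hypothetical fragment (the Order entity and LegacyNumber column are made up, and I can't confirm the exact ActiveRecord attribute equivalent) would look roughly like this:
<class name="Order" table="Orders">
  <id name="Id">
    <generator class="guid" />
  </id>
  <!-- generated="insert" makes NHibernate select this column back right after the INSERT,
       so the entity doesn't need a manual Refresh(); the property stays read-only from the app's side. -->
  <property name="LegacyNumber" column="LegacyNumber"
            generated="insert" insert="false" update="false" />
</class>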