NHibernate SaveOrUpdate sometimes skips random values during bulk insert - sql

I'm using NHibernate's SaveOrUpdate, and during a bulk insert it skips random values each time.
How can I fix this?

SaveOrUpdateWithExceptionHandling on an ISession raises some red flags.
With NHibernate, if your session throws an exception, it is now in an inconsistent state and should be disposed of immediately.
You cannot do "... with exception handling" inside your transaction, and this may be the source of your errors. You will certainly need to revisit your approach to this problem.
In the case of an error (perhaps due to concurrency; it's a bit unclear what you are trying to do), you need to roll back the entire transaction, dispose of the session, and try again.
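The rollback-dispose-retry pattern described above can be sketched roughly like this (a minimal illustration, not the asker's actual code; `sessionFactory`, `applyChanges`, and `maxAttempts` are assumptions):

```csharp
// Sketch: on any failure, roll back the whole transaction, let the
// using blocks dispose the broken session, and retry with a fresh one.
public void SaveWithRetry(ISessionFactory sessionFactory,
                          Action<ISession> applyChanges,
                          int maxAttempts = 3)
{
    for (var attempt = 1; ; attempt++)
    {
        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            try
            {
                applyChanges(session);   // all writes for this unit of work
                tx.Commit();
                return;
            }
            catch (HibernateException)
            {
                tx.Rollback();           // roll back the *entire* transaction
                if (attempt == maxAttempts) throw;
                // the using blocks dispose the broken session; the loop
                // then opens a brand-new one for the next attempt
            }
        }
    }
}
```

The key point is that the session is never reused after the exception: each attempt gets its own session and transaction.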

Related

nhibernate RollbackTransaction and why to dispose of session after rollback?

I am working on a system using nhibernate, and I see a lot of the following two lines in exception catch blocks:
session.Flush();
session.RollbackTransaction();
I am very confused by this logic: flushing changes only to then roll back the transaction looks like unnecessary work.
I wanted to set up an argument for removing these Flush calls and relying on just the RollbackTransaction method, but I came across this question. Then I read more of the linked documentation and found the following nugget of information:
If you rollback the transaction you should immediately close and discard the current session to ensure that NHibernate's internal state is consistent.
What does this mean? We currently pair our session lifetime with our web request's begin and end operations, so I am worried that the reason we call Flush THEN Rollback is to keep the session in a valid state.
Any ideas?
NHibernate does object tracking via the session, and all the changes you make to entities are stored there; when you flush, those changes are written to the database. If you get an exception while doing so, the session state is no longer consistent with the database state, so a rollback at this stage will roll back the database transaction, but the session's values will not be rolled back.
By design, once that happens the session should not be used any further in a reliable manner (even Session.Clear() will not help).
If you use session-per-request and you get an error, the best approach is to display an error to the user and ask them to retry the operation. Another option is to create a brand-new session and use it for data-fetching purposes to display the errors.
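That second option, a fresh session used only to fetch data for the error page, could look roughly like this (a sketch; `Order`, `orderId`, `ApplyChanges`, and `ShowRetryPage` are illustrative assumptions):

```csharp
// Sketch: session-per-request error handling. The failed session is
// disposed and a brand-new one is opened purely for reading.
void HandleRequest(ISessionFactory sessionFactory, int orderId)
{
    try
    {
        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            ApplyChanges(session, orderId);  // the user's edits for this request
            tx.Commit();
        }
    }
    catch (HibernateException)
    {
        // the broken session was disposed by the using blocks: never reuse it
        using (var readSession = sessionFactory.OpenSession())
        {
            var order = readSession.Get<Order>(orderId);  // fresh data for the error page
            ShowRetryPage(order);  // ask the user to retry the operation
        }
    }
}
```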
This Flush before Rollback is very likely a trick to work around an application bug caused by reusing the session after the rollback.
As you have found out yourself, the session must not be used after a rollback. The application is making that mistake, as per your comment.
Without the Flush before the Rollback, the session still considers the changes as pending and will commit them at the next Flush, defeating the purpose of the rollback. Doing that Flush before the rollback causes the pending changes to be flushed, then rolled back, preventing the session from flushing them again later.
But the session is still not in a consistent state, so by continuing to use it, the application stays at risk. The session cache still holds the changes that were attempted and then rolled back; the session just no longer considers them pending changes awaiting a flush. If later usages of the session access those entities, their state will still be the modified one from the rolled-back transaction, although they will not be considered dirty.

Can a NHibernate transaction be continued after an exception?

I am using NHibernate to save objects that require that one of the properties on these objects must be unique. The design of the system is such that it is possible that an attempt may be made occasionally to save the same object twice. This of course causes a violation of the uniqueness constraint and NHibernate throws an exception. The exception happens at the time I am attempting to save the object, rather than at Transaction.Commit() time. When this happens I want to simply catch the exception, discard the object and continue on saving other similar objects. However, I have not found a way to allow this to happen. Once that exception has happened I cannot carry on to save other objects and commit the transaction.
The only work-around I have found for this is to always check if the object exists first by running a query on the unique property. That works, but it seems unnecessarily expensive. I would like to avoid that extra hit to the db. Is there a way to do this?
Thanks
The issue you've described must be solved at a higher level than the NHibernate session. Take a look at 9.8. Exception handling, extract:
If the ISession throws an exception you should immediately rollback the transaction, call ISession.Close() and discard the ISession instance. Certain methods of ISession will not leave the session in a consistent state.
So what I would suggest is to wrap the call to your data layer (DL) with some validation; place the if/try logic outside of the session.
Because even when we are using versioning (see 5.1.7. version), a very powerful way to survive concurrency, we are still handed stale exceptions and have to handle them outside of the DL.
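One way to keep that if/try logic outside the session is to give each candidate object its own short-lived session and transaction, so a uniqueness violation only discards that one unit of work instead of the whole batch (a sketch; `sessionFactory` and `itemsToSave` are illustrative assumptions):

```csharp
// Sketch: save each item in its own session/transaction so that one
// constraint violation does not poison the saves of the other items.
foreach (var item in itemsToSave)
{
    using (var session = sessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        try
        {
            session.Save(item);
            tx.Commit();
        }
        catch (HibernateException)
        {
            tx.Rollback();   // duplicate (or other failure): discard and move on
        }
    }
}
```

This trades one transaction per batch for one per object, which is slower but avoids both the extra existence query and the broken-session problem.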

Usage of NHibernate session after exception on query

We are trying to implement retry logic to recover from transient errors in Azure environment.
We are using long-running sessions to keep track of and commit the whole bunch of changes at the end of an application transaction (which may span several web requests). Along the way we need to get additional data from the database. Our main problem is that we can't easily recover from a db error, because we can't "replay" all the user's actions.
So far we have used a straightforward recovery algorithm:
Try to perform operation in long-running session
In case of error, close the session, open a new one and merge entities into it
Retry the operation
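The steps above can be sketched roughly like this (`sessionFactory`, `DoOperation`, and `trackedEntities` are illustrative assumptions):

```csharp
// Rough sketch of the recovery algorithm described above.
ISession longSession = sessionFactory.OpenSession();
try
{
    DoOperation(longSession);                 // 1. try in the long-running session
}
catch (HibernateException)
{
    longSession.Dispose();                    // 2. close the broken session,
    longSession = sessionFactory.OpenSession();  //    open a new one
    foreach (var entity in trackedEntities)
        longSession.Merge(entity);            //    and merge entities into it
    DoOperation(longSession);                 // 3. retry the operation
}
```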
It's a very expensive approach in terms of time (the merge takes really long for big entity hierarchies), so we'd like to optimize things a little.
We'd like to perform query operations in a separate session (to keep the long-running one untouched and safe) and, on success, merge the results back into the long-running session. Retry is relatively simple here - we just need to open a new session and run the query once more. However, with this approach we have an issue with initializing lazy properties/collections:
If we do this in separate session, we need to merge results back (a lot of entities) but merge could fail and break the long-running session
We tried different ways of "moving" original entity to different session, loading details and returning it back, but without success (evict, replicate, etc.)
There is known statement that session should be discarded in case of exception. However, the example shows write operation. Is it still true for read ones? I mean if I guarantee that no data is written back to the database, can I reuse the same session to run query again?
Do you have any other suggestions about retry logic with long-running sessions?
IMO there's no way to solve your issue. It's going to take a lot of time to commit everything, or you will have to do a lot of work to break it up into smaller sessions and handle every error that can occur while merging.
To answer your question about using the session after an exception: you cannot trust ANYTHING anymore inside this session, not even loaded entities.
Read this paragraph from Ayende's article about building a simple todo app with a recovery plan in case of an exception in the session:
Then there is the problem of error handling. If you get an exception (such as StaleObjectStateException, because of concurrency conflict), your session and its loaded entities are toast, because with NHibernate, an exception thrown from a session moves that session into an undefined state. You can no longer use that session or any loaded entities. If you have only a single global session, it means that you probably need to restart the application, which is probably not a good idea.

SQL FK and Pre-Checking for Existence

I was wondering what everyone's opinion was with regard to pre-checking foreign key look ups before INSERTS and UPDATES versus letting the database handle it. As you know the server will throw an exception if the corresponding row does not exist.
Within .NET we always try to avoid Exception coding in the sense of not using raised exceptions to drive code flow. This means we attempt to detect potential errors before the run-time does.
With SQL I see two opposite points
1) Whether you check or not, the database always will. This means you could be wasting (how much is subjective) CPU cycles doing the same check twice. This makes one lean towards letting the database do it alone.
2) Pre-checking allows the developer to raise more informative exceptions back to the calling application. Instead of receiving the generic "foreign key violation", one could return different error codes for each check that needs to be done.
What are your thoughts?
Don't test before:
the DB engine will check anyway on INSERT (you have 2 reads of the index, not one)
it won't scale without lock hints or semaphores, which reduce concurrency and performance (a 2nd overlapping concurrent call can pass the EXISTS check before the first call does its INSERT)
What you can do is wrap the INSERT in its own TRY/CATCH and ignore error xxxx (foreign key violation, sorry, don't know the number). I've mentioned this before (for unique keys, error 2627):
Only inserting a row if it's not already there
Select / Insert version of an Upsert: is there a design pattern for high concurrency?
SQL Server 2008: INSERT if not exits, maintain unique column
This scales very well to high volumes.
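The answer describes the TRY/CATCH in T-SQL; the equivalent at the application level looks roughly like this in ADO.NET (a sketch; the connection string, table, and parameters are assumptions, and SQL Server reports constraint conflicts, including FK violations, as error 547 and unique-key violations as 2627):

```csharp
// Sketch: just attempt the INSERT and swallow the specific FK error,
// instead of pre-checking the parent row's existence.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "INSERT INTO OrderLines (OrderId, Qty) VALUES (@orderId, @qty)", conn))
{
    cmd.Parameters.AddWithValue("@orderId", orderId);
    cmd.Parameters.AddWithValue("@qty", qty);
    conn.Open();
    try
    {
        cmd.ExecuteNonQuery();
    }
    catch (SqlException ex) when (ex.Number == 547)   // constraint/FK conflict
    {
        // parent row missing: handle it here, with a message of your choosing
    }
}
```

Filtering on `ex.Number` also answers the "more informative exceptions" point: you can map each error number to a custom message without running any extra queries on the success path.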
Data integrity maintenance is the database's job, so I would say you let the DB handle it. A raised exception is valid in this case: even though it could be avoided, it is a correctly raised exception, because it means something in the code didn't work right and is sending an orphaned record for insert (or something failed in the first insert, however you are inserting it). Besides, you should have try/catch anyway, so you can implement a meaningful way to handle this...
I don't see the benefit of pre-checking for FK violations.
If you want more informative error statements, you can simply wrap your insert in a try-catch block and return custom error messages at that point. That way you're only running the extra queries on failure rather than every time.

NHibernate, Transaction rollback and Entity version

At the moment I'm trying to implement code that nicely handles stale state exceptions (i.e., another user has changed this row, etc.) when committing a transaction using NHibernate. The idea is, when the exception occurs during flushing, to roll back the transaction, "fix" the entities through different means, then rerun the whole transaction code again.
My problem is that when the transaction rolls back, the entities' version property has still been incremented for those entities that successfully updated the database, even though the database transaction has been rolled back (this is actually also true for the entity that failed the transaction). This means the second run will never succeed, because the versions are out of sync with the database.
How do I solve this problem?
When an NHibernate exception is thrown, you MUST throw away that session, as its state is no longer considered valid.
That implies re-getting the entities too.
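A minimal retry sketch along those lines, re-getting the entities in a fresh session so their version properties match the database again (`Invoice`, `invoiceId`, `ApplyUserChanges`, and `maxAttempts` are illustrative assumptions):

```csharp
// Sketch: after a StaleObjectStateException, throw the session away,
// re-get the entity (fresh version number), and rerun the transaction.
for (var attempt = 0; attempt < maxAttempts; attempt++)
{
    using (var session = sessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        try
        {
            var entity = session.Get<Invoice>(invoiceId);  // current version from the db
            ApplyUserChanges(entity);                      // "fix"/reapply the edits
            tx.Commit();
            break;
        }
        catch (StaleObjectStateException)
        {
            tx.Rollback();  // someone else won the race: loop with a new session
        }
    }
}
```

Because the entity is re-fetched inside each attempt, the in-memory version never drifts out of sync with the database the way it does when entities from a rolled-back session are reused.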