Multiple subroutines that need multiple SQL rollbacks - sql

What’s your method when you have a scenario like this:
I have 8 procedures in C# that call stored procedures to insert data into different tables. In each procedure I have a transaction rollback (tran.Rollback();) in case of failure, and currently a commit on success.
However, I need to roll back all of the inserts, even the ones that succeeded (and maybe, in the future, updates), when any one of the procedures fails.

You can do this using a TransactionScope, eg:
void RootMethod()
{
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions() { IsolationLevel = IsolationLevel.ReadCommitted }))
    {
        /* Perform transactional work here */
        SomeMethod();
        scope.Complete();
    }
}

void SomeMethod()
{
    // TransactionScopeOption.Required joins the ambient transaction created by RootMethod
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions() { IsolationLevel = IsolationLevel.ReadCommitted }))
    {
        /* Perform transactional work here */
        scope.Complete();
    }
}
See Managing transaction flow using TransactionScopeOption.
Alternatively, your transactions might be nestable. SQL Server, for instance, supports nested transactions: a COMMIT in a nested transaction isn't a final commit, but a ROLLBACK always rolls back all of the work.
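To sketch how this maps onto the original question: wrap all eight procedure calls in one ambient TransactionScope, so a failure in any of them rolls everything back. The class, connection string, and stored procedure names below are placeholders, not from the original code:
static class InsertBatch   // hypothetical wrapper, not from the original code
{
    public static void InsertAll(string connectionString)
    {
        // One ambient transaction covers every stored procedure call below.
        using (var scope = new TransactionScope(TransactionScopeOption.Required,
            new TransactionOptions { IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted }))
        {
            // Hypothetical procedure names standing in for the 8 real ones.
            CallProc(connectionString, "dbo.InsertCustomer");
            CallProc(connectionString, "dbo.InsertOrder");
            // ...the remaining procedures...

            // Reached only if nothing above threw; otherwise Dispose rolls everything back.
            scope.Complete();
        }
    }

    static void CallProc(string connectionString, string procName)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(procName, conn) { CommandType = CommandType.StoredProcedure })
        {
            conn.Open();   // the connection enlists in the ambient transaction
            cmd.ExecuteNonQuery();
        }
    }
}
Depending on the SQL Server version and whether the connections are open simultaneously, using several connections inside one scope can escalate to a distributed (MSDTC) transaction; sharing a single open connection across all the calls avoids that.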

Related

Entity Framework CORE batch processing taking long time

I am passing a few records with jQuery ajax() to a .NET Core MVC controller to batch-update a SQL table. The controller then calls TransactionalDeleteAndInsert() in a repository to delete and then insert the records, as shown in the following code.
When done, _repoContext.SaveChangesAsync() is executed. The delete/insert actions take a few seconds to complete, but when I refresh the screen or navigate to another page, the GET method that fetches the updated list takes more than 2 hours. What am I doing wrong?
public int BatchInsert(IList<T> entityList)
{
    int inserted = 0;
    foreach (T entity in entityList)
    {
        this.RepositoryContext.Set<T>().Add(entity);
        inserted++;
    }
    return inserted;
}

public int BatchDelete(IList<T> entityList)
{
    int deleted = 0;
    foreach (T entity in entityList)
    {
        this.RepositoryContext.Set<T>().Remove(entity);
        deleted++;
    }
    return deleted;
}

public List<int> TransactionalDeleteAndInsert(IList<T> deleteEntityList, IList<T> insertEntityList)
{
    using (var transaction = this.RepositoryContext.Database.BeginTransaction())
    {
        int totalDeleted = this.BatchDelete(deleteEntityList);
        int totalInserted = this.BatchInsert(insertEntityList);
        transaction.Commit();

        List<int> result = new List<int>();
        result.Add(totalDeleted);
        result.Add(totalInserted);
        return result;
    }
}
In the snippet above there is no SaveChangesAsync() or SaveChanges() call (I take it that you execute it later on).
That means the whole process happens only locally, in your context/memory.
Transactions
The fact that the BatchDelete() and BatchInsert() methods are wrapped in a transaction does not make a difference, because these operations only happen in your context, which will probably be recreated on your next request (given that its lifetime is scoped).
The transaction would make more sense if your code looked like this:
using (var transaction = this.RepositoryContext.Database.BeginTransaction())
{
    int totalDeleted = this.BatchDelete(deleteEntityList);
    this.SaveChanges();
    int totalInserted = this.BatchInsert(insertEntityList);
    this.SaveChanges();
    transaction.Commit();

    List<int> result = new List<int>();
    result.Add(totalDeleted);
    result.Add(totalInserted);
    return result;
}
So if, for any reason, the second db operation failed, the first one would be rolled back too. (I am aware that this example does not quite fit your case; you could simply execute SaveChanges() once at the end of TransactionalDeleteAndInsert() and still avoid any unwanted data being saved in your db if the insert fails.)
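A minimal sketch of that simpler alternative (assuming SaveChanges is available on the same repository context; EF Core wraps a single SaveChanges call in its own transaction, so the explicit BeginTransaction is no longer needed):
public List<int> TransactionalDeleteAndInsert(IList<T> deleteEntityList, IList<T> insertEntityList)
{
    int totalDeleted = this.BatchDelete(deleteEntityList);   // only tracked as Removed so far
    int totalInserted = this.BatchInsert(insertEntityList);  // only tracked as Added so far

    // One SaveChanges persists the deletes and inserts together;
    // if any statement fails, EF Core rolls the whole batch back.
    this.RepositoryContext.SaveChanges();

    return new List<int> { totalDeleted, totalInserted };
}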
Slow db operations
That could be due to many reasons: a slow SQL server, very big tables, or long add/remove lists. That is also why the refresh takes so long: by refreshing you query the database again while it is already under heavy pressure.

OracleConnection does not support parallel transactions

I'm working with an Oracle 11g database and ASP.NET MVC 4 using the Enterprise Library. I use transactions on my commands just to be safe in case of any exceptions. I have a main method with BeginTransaction() which calls other methods (let's call them child methods) that contain their own Begin and Commit transaction calls.
I'm getting an "OracleConnection does not support parallel transactions." exception when I execute the BeginTransaction() method in a child method.
Any help on this?
Try using TransactionScope instead of the BeginTransaction method. It supports nested (and distributed) transactions, so the child method can join the ambient transaction instead of trying to start a second one on the same OracleConnection.
void RootMethod()
{
    using (TransactionScope scope = new TransactionScope())
    {
        /* Perform transactional work here */
        SomeMethod();
        scope.Complete();
    }
}

void SomeMethod()
{
    // Joins the ambient transaction started by RootMethod (Required is the default option)
    using (TransactionScope scope = new TransactionScope())
    {
        /* Perform transactional work here */
        scope.Complete();
    }
}

NHibernate new session with transaction in existing transaction

Can this code cause problems? I found it in a project and do not know whether it can be the cause of some crazy bugs (deadlocks, timeouts in the DB, ...). Code like this is executed concurrently many times in the program, even on multiple threads.
Thanks a lot
class first {
    void doSomething() {
        using (ITransaction transaction = session.BeginTransaction()) {
            var foo = new second();
            foo.doInNewTransaction(); //inner transaction in new session
            transaction.Commit();
        }
    }
}

class second {
    void doInNewTransaction() {
        using (ISession session = sessionFactory.OpenSession()) {
            using (ITransaction transaction = session.BeginTransaction()) {
                //do something in database
                transaction.Commit();
            }
        }
    }
}
This should be fine. I'm sure I have done stuff like this in the past. The only thing that you need to be aware of is that if you modify an object in the inner session then these changes will not automatically be reflected in the outer session if the same object has already been loaded.
Having said that, if you do not need to do this then I would avoid it. Normally I would recommend AOP based transaction management when using NHibernate. This would allow your inner component to easily join in with the transaction from the outer component. However, in order to do this you need to be using a DI container that supports this, for example Spring.NET or Castle.
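If you do need the outer session to see changes made in the inner one, one option is to refresh the already-loaded instance explicitly. A minimal sketch, assuming hypothetical names (outerSession, Foo, id, RunInnerTransaction) rather than anything from the original code:
// The outer session has already loaded the entity...
var foo = outerSession.Get<Foo>(id);

// ...and an inner session/transaction modifies the same row in the meantime.
RunInnerTransaction(id);

// Re-read the current database state into the instance the outer session is tracking.
outerSession.Refresh(foo);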

Is it a better practice to explicitly call transaction rollback or let an exception trigger an implicit rollback?

In the code below, if any exception is thrown while executing the SQL statements, we should expect an implicit rollback of the transaction: since the transaction was never committed, it goes out of scope and gets disposed:
using (DbTransaction tran = conn.BeginTransaction())
{
    //
    // Execute SQL statements here...
    //
    tran.Commit();
}
Is the above an acceptable practice, or should one catch the exception and explicitly make a call to tran.Rollback() as shown below:
using (DbTransaction tran = conn.BeginTransaction())
{
    try
    {
        //
        // Execute SQL statements here...
        //
        tran.Commit();
    }
    catch
    {
        tran.Rollback();
        throw;
    }
}
The former. If you look up MSDN samples on similar topics, like TransactionScope, they all favor the implicit rollback. There are various reasons for that, but I'll just give you a very simple one: by the time you catch the exception, the transaction may have already been rolled back. Many errors roll back the pending transaction and only then return control to the client, where ADO.NET raises the CLR SqlException after the transaction was already rolled back on the server (a 1205 deadlock is the typical example of such an error). So the explicit Rollback() call is, at best, a no-op and, at worst, an error. The provider of the DbTransaction (e.g. SqlTransaction) should know how to handle this case, e.g. because there is explicit chatter between the server and the client notifying it that the transaction has already rolled back, and the Dispose() method does the right thing.
A second reason is that transactions can be nested, but the semantics of ROLLBACK are that one rollback rolls back all the nested transactions, so you only need to call it once (unlike Commit(), which commits only the innermost transaction and has to be paired up with each begin). Again, Dispose() does the right thing.
Update
The MSDN sample for SqlConnection.BeginTransaction() actually favors the second form and does an explicit Rollback() in the catch block. I suspect the technical writer simply intended to show both Rollback() and Commit() in a single sample; notice how he needed to add a second try/catch block around the Rollback to work around exactly some of the problems I mentioned originally.
You can go either way: the former is more concise, the latter more intent-revealing.
A caveat with the first approach is that calling Rollback on disposal of the transaction depends on the driver-specific implementation. Hopefully almost all of the .NET connectors do that. SqlTransaction does:
private void Dispose(bool disposing)
{
    Bid.PoolerTrace("<sc.SqlInteralTransaction.Dispose|RES|CPOOL> %d#, Disposing\n", this.ObjectID);
    if (disposing && (this._innerConnection != null))
    {
        this._disposing = true;
        this.Rollback();
    }
}
MySQL's:
protected override void Dispose(bool disposing)
{
    if ((conn != null && conn.State == ConnectionState.Open || conn.SoftClosed) && open)
        Rollback();
    base.Dispose(disposing);
}
A caveat with the second approach is that it's not safe to call Rollback without wrapping it in another try-catch. This is explicitly stated in the documentation.
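A minimal sketch of what that looks like, mirroring the second try/catch the SqlConnection.BeginTransaction() documentation sample adds around the rollback (assuming conn is an open SqlConnection, as in the question):
using (SqlTransaction tran = conn.BeginTransaction())
{
    try
    {
        // Execute SQL statements here...
        tran.Commit();
    }
    catch
    {
        try
        {
            tran.Rollback();
        }
        catch (Exception rollbackEx)
        {
            // The rollback itself can fail, e.g. if the server has already
            // rolled the transaction back or the connection is broken.
            Console.WriteLine(rollbackEx.Message);
        }
        throw;
    }
}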
In short, as to which is better: it depends on the driver, but it's typically better to go with the first, for the reasons mentioned by Remus.
Additionally, see What happens to an uncommitted transaction when the connection is closed? for how connection disposal treats commits and rollbacks.
I tend to agree with the "implicit" rollback based on exception pathways. But, obviously, that depends on where you are in the stack and what you're trying to get done (i.e. is the DbTransaction class catching the exception and performing cleanup, or is it passively not "committing"?).
Here's a case where implicit handling makes sense (maybe):
static T WithTransaction<T>(this SqlConnection con, Func<T> work) {
    using (var txn = con.BeginTransaction()) {
        return work();
    }
}
But if the API is different, the handling of commit might be, too:
static T WithTransaction<T>(this SqlConnection con, Func<T> work,
    Action<SqlTransaction> success = null, Action<SqlTransaction> failure = null)
{
    using (var txn = con.BeginTransaction()) {
        try {
            T t = work();
            success(txn); // does it matter if the callback commits?
            return t;
        } catch (Exception e) {
            failure(txn); // does it matter if the callback rolls back or commits?
            // throw new Exception("Doh!", e); // kills the transaction for certain
            return default(T); // if not throwing, we need to return something (bogus)
        }
    }
}
I can't think of too many cases where explicitly rolling back is the right approach, except where there is a strict change-control policy to be enforced. But then again, I'm a bit slow on the uptake.

NHibernate - Is ITransaction.Commit really necessary?

I just started studying NHibernate 2 days ago, and I'm looking at a CRUD method that I've written based on a tutorial.
My insert method is:
using (ISession session = Contexto.OpenSession())
using (ITransaction transaction = session.BeginTransaction())
{
    session.Save(noticia);
    transaction.Commit();
    session.Close();
}
The complete code of "Contexto" is here: http://codepaste.net/mrnoo5
My question is: do I really need to use ITransaction transaction = session.BeginTransaction() and transaction.Commit()?
I'm asking this because I've tested running the web app without those two lines, and I've still successfully inserted new records.
If possible, can someone also explain the purpose of ITransaction and the Commit method?
Thanks
This is the proper generic NHibernate usage pattern:
using (ISession session = sessionFactory.OpenSession())
using (ITransaction transaction = session.BeginTransaction())
{
    // Do the work here
    transaction.Commit();
}
All of that is required to ensure everything works as expected (unless you use additional infrastructure).
Closing the session or doing anything with the transaction besides committing is redundant, as the Dispose methods of the session and the transaction take care of cleanup, including rollback if there are errors.
It's important to note that doing anything with the session after an exception can result in unexpected behavior, which is another reason to limit explicit exception handling inside the block.
ITransaction transaction = session.BeginTransaction() is not strictly necessary, as you found out by testing.
But imagine this: what happens when your session throws an exception? How would you undo your changes to the database?
The following is a quote from the NHibernate documentation:
A typical transaction should use the following idiom:
ISession sess = factory.OpenSession();
ITransaction tx = null;
try
{
    tx = sess.BeginTransaction();
    // do some work ...
    tx.Commit();
}
catch (Exception e)
{
    if (tx != null)
        tx.Rollback();
    throw;
}
finally
{
    sess.Close();
}
If the ISession throws an exception, the transaction must be rolled back
and the session discarded. The internal state of the ISession might not be
consistent with the database after the exception occurs.
NHibernate.ISessionFactory
Well, when you call Commit() on your transaction, it saves all the changes to the database.
Do i really need to use ITransaction transaction = session.BeginTransaction() and transaction.Commit();?
Yes, it is considered good practice to use transactions for everything, even simple reads.
tested run the web app without those two lines, and I've successfully inserted new records.
That's because the session will commit the changes when it is disposed at the end of the using statement.
Anyway, this is how I would write the save:
using (ISession session = Contexto.OpenSession())
{
    using (ITransaction transaction = session.BeginTransaction())
    {
        try
        {
            session.Save(noticia);
            transaction.Commit();
        }
        catch (HibernateException)
        {
            transaction.Rollback();
            throw;
        }
    }
}