I am passing a few records with jQuery ajax() to a .NET Core MVC controller to batch-update a SQL table. The controller then calls TransactionalDeleteAndInsert() in a repository to delete and then insert the records, as shown in the following code.
When done, _repoContext.SaveChangesAsync() is executed. The delete/insert actions take a few seconds to complete, but when I refresh the screen or navigate to another page, the GET method that fetches the updated list takes more than 2 hours. What am I doing wrong?
public int BatchInsert(IList<T> entityList)
{
    int inserted = 0;
    foreach (T entity in entityList)
    {
        this.RepositoryContext.Set<T>().Add(entity);
        inserted++;
    }
    return inserted;
}
public int BatchDelete(IList<T> entityList)
{
    int deleted = 0;
    foreach (T entity in entityList)
    {
        this.RepositoryContext.Set<T>().Remove(entity);
        deleted++;
    }
    return deleted;
}
public List<int> TransactionalDeleteAndInsert(IList<T> deleteEntityList, IList<T> insertEntityList)
{
    using (var transaction = this.RepositoryContext.Database.BeginTransaction())
    {
        int totalDeleted = this.BatchDelete(deleteEntityList);
        int totalInserted = this.BatchInsert(insertEntityList);
        transaction.Commit();
        List<int> result = new List<int>();
        result.Add(totalDeleted);
        result.Add(totalInserted);
        return result;
    }
}
In the snippet above there is no SaveChangesAsync() or SaveChanges() call (I take it that you execute it later on).
That means the whole process occurs locally, in your context/memory only.
Transactions
The fact that the BatchDelete() and BatchInsert() methods are wrapped in a transaction makes no difference, because these operations occur only in your context, which will probably be recreated on your next request (given that its lifetime is scoped).
The transaction would make more sense if your code looked like this:
using (var transaction = this.RepositoryContext.Database.BeginTransaction())
{
    int totalDeleted = this.BatchDelete(deleteEntityList);
    this.SaveChanges();
    int totalInserted = this.BatchInsert(insertEntityList);
    this.SaveChanges();
    transaction.Commit();
    List<int> result = new List<int>();
    result.Add(totalDeleted);
    result.Add(totalInserted);
    return result;
}
So if the second db operation failed for any reason, the first one would roll back too. (I am aware that this example does not quite fit your case; you could simply execute SaveChanges() at the end of your TransactionalDeleteAndInsert() method, and you would still avoid any unwanted data being saved to your db in case the insert fails.)
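For completeness, here is a minimal sketch of that simpler variant (assuming the repository exposes the context as in your snippet; a single SaveChanges() call already wraps all tracked changes in one implicit transaction):

public List<int> TransactionalDeleteAndInsert(IList<T> deleteEntityList, IList<T> insertEntityList)
{
    int totalDeleted = this.BatchDelete(deleteEntityList);
    int totalInserted = this.BatchInsert(insertEntityList);
    // One SaveChanges() persists the deletes and inserts atomically,
    // so no explicit BeginTransaction() is needed here.
    this.RepositoryContext.SaveChanges();
    return new List<int> { totalDeleted, totalInserted };
}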
Slow db operations
That could be due to many reasons: a slow SQL Server, very big tables, or long add/remove lists. This is also why the refresh takes so long: by refreshing, you query the database again while it is already under heavy pressure.
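If long lists are part of the problem, one thing worth trying (an assumption on my part, not something your code shows) is AddRange/RemoveRange, which are cheaper than calling Add/Remove in a loop because change detection is deferred:

public int BatchInsert(IList<T> entityList)
{
    // AddRange defers change detection until the whole list has been added.
    this.RepositoryContext.Set<T>().AddRange(entityList);
    return entityList.Count;
}

public int BatchDelete(IList<T> entityList)
{
    // RemoveRange marks all entities as Deleted in one pass.
    this.RepositoryContext.Set<T>().RemoveRange(entityList);
    return entityList.Count;
}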
Related
What's your method when you have a scenario like this:
I have 8 procedures in C# that call stored procedures to insert data into different tables. In each procedure I have a transaction rollback (tran.Rollback();) in case of failure, and currently a commit on success.
However, I need to roll back all the inserts (and maybe, in the future, updates), even the ones that succeeded, when any one of the procedures fails.
You can do this using a TransactionScope, e.g.:
void RootMethod()
{
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions() { IsolationLevel = IsolationLevel.ReadCommitted }))
    {
        /* Perform transactional work here */
        SomeMethod();
        scope.Complete();
    }
}

void SomeMethod()
{
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions() { IsolationLevel = IsolationLevel.ReadCommitted }))
    {
        /* Perform transactional work here */
        scope.Complete();
    }
}
See "Managing transaction flow using TransactionScopeOption" in the docs for the details.
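The key point is that if the inner scope fails (throws, or returns without calling Complete()), the ambient transaction is doomed and nothing in the root scope commits either. A minimal sketch of that failure path, with hypothetical helper names:

void RootMethod()
{
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
    {
        InsertIntoTableA(); // hypothetical helper; enlists in the ambient transaction
        InsertIntoTableB(); // hypothetical helper; suppose this one throws
        // Never reached on failure: without Complete(), disposing the scope
        // rolls back everything done under the ambient transaction.
        scope.Complete();
    }
}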
Otherwise your transactions might be nestable. SQL Server, for instance, supports nested transactions. A COMMIT in a nested transaction isn't a final commit, but a ROLLBACK always rolls back all work.
Can this code cause problems? I found it in one project and do not know whether it could be the cause of some crazy bugs (deadlocks, timeouts in the DB, ...). Code like this is executed concurrently many times in the program, even across threads.
Thanks a lot.
class first {
    void doSomething() {
        using (ITransaction transaction = session.BeginTransaction()) {
            var foo = new second();
            foo.doInNewTransaction(); // inner transaction in new session
            transaction.Commit();
        }
    }
}

class second {
    void doInNewTransaction() {
        using (Session session = new Session()) {
            using (ITransaction transaction = session.BeginTransaction()) {
                // do something in database
                transaction.Commit();
            }
        }
    }
}
This should be fine; I'm sure I have done stuff like this in the past. The only thing you need to be aware of is that if you modify an object in the inner session, those changes will not automatically be reflected in the outer session if the same object has already been loaded there.
Having said that, if you do not need to do this, I would avoid it. Normally I would recommend AOP-based transaction management when using NHibernate, which would allow your inner component to simply join the transaction of the outer component. However, to do this you need to be using a DI container that supports it, for example Spring.NET or Castle.
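If you do hit the stale-object case described above, one option (my suggestion, not something your code shows; the entity variable is hypothetical) is to re-read the row in the outer session after the inner transaction commits:

using (ITransaction transaction = session.BeginTransaction()) {
    var foo = new second();
    foo.doInNewTransaction(); // inner transaction in new session
    // Re-read an entity this session had already loaded, so its
    // in-memory state reflects what the inner session just wrote.
    session.Refresh(alreadyLoadedEntity);
    transaction.Commit();
}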
I just tested PetaPoco transactions in a multithreaded way...
I have a simple test case:
-- A simple value object, call it MediaDevice
-- Insert a record and update it 1000 times
void TransactionThread(Object state)
{
    Database db = (Database)state;
    for (int i = 0; i < 1000; i++)
    {
        Transaction transaction = db.GetTransaction();
        MediaDevice device = new MediaDevice();
        device.Name = "Name";
        device.Brand = "Brand";
        db.Insert(device);
        device.Name = "Name_Updated";
        device.Brand = "Brand_Updated";
        db.Update(device);
        transaction.Complete();
    }
    long count = db.ExecuteScalar<long>("SELECT Count(*) FROM MediaDevices");
    Console.WriteLine("Number of all records:" + count);
}
And I call this from two threads like this (a single Database object for both threads):
void TransactionTest()
{
    Database db = GetDatabase();
    Thread tThread1 = new Thread(TransactionThread);
    Thread tThread2 = new Thread(TransactionThread);
    tThread1.Start(db); // pass the Database to TransactionThread()
    tThread2.Start(db); // pass the same Database to TransactionThread()
}
I get a null reference error, or sometimes an "object disposed" error, for the Database object.
But when I supply two Database instances,
void TransactionTest()
{
    Database db = GetDatabase();
    Database db2 = GetDatabase();
    Thread tThread1 = new Thread(TransactionThread);
    Thread tThread2 = new Thread(TransactionThread);
    tThread1.Start(db);  // pass Database instance db to TransactionThread()
    tThread2.Start(db2); // pass Database instance db2 to TransactionThread()
}
everything is OK...
When I check the PetaPoco source code, I see this in Transaction.Complete:
public virtual void Complete()
{
    _db.CompleteTransaction();
    _db = null;
}
My question is: to be able to use transactions from multiple threads, do I have to use a new copy of the Database object? Or what am I doing wrong?
And to make it thread-safe, do I have to open and close a NEW database for every update or query?
Yes, you need a separate PetaPoco Database instance per-thread. See this quote from the PetaPoco documentation:
Note: for transactions to work, all operations need to use the same
instance of the PetaPoco database object. So you'll probably want to
use a per-http request, or per-thread IOC container to serve up a
shared instance of this object. Personally StructureMap is my
favourite for this.
I bolded the phrase that gives the clue: one instance of the PetaPoco database object should be used per thread.
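A minimal sketch of the per-thread pattern (the connection string name "DefaultConnection" is an assumption; use whatever your app.config defines):

void TransactionThread()
{
    // Each thread builds and disposes its own Database; the instance is not thread-safe.
    using (Database db = new Database("DefaultConnection"))
    using (Transaction transaction = db.GetTransaction())
    {
        MediaDevice device = new MediaDevice { Name = "Name", Brand = "Brand" };
        db.Insert(device);
        transaction.Complete();
    }
}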
Use WITH (NOLOCK) in the select query, because the table may be locked: long count = db.ExecuteScalar<long>("SELECT COUNT(*) FROM MediaDevices WITH (NOLOCK)");
Sorry dude, yes, you are right: they set the object to null, so you cannot use the same object across threads. You have to use two instances, as described: db = GetDatabase(); db2 = GetDatabase();
Otherwise you can change the source code to fit your requirements. I think their license allows it, but I am not sure.
I have a few methods - a couple of calls to SQL Server and some business logic to generate a unique value. These methods are all contained inside a parent method:
GenerateUniqueValue()
{
    //1. Call to db for last value
    //2. Business logic to create new value
    //3. Update db with new value created
}
I want the call to GenerateUniqueValue to be isolated, i.e., when two clients call it simultaneously, the second client must wait for the first one to finish.
Originally, I made my service a singleton; however, I have to anticipate future changes that may include load balancing, so I believe a singleton approach is out. Next I decided to try the transaction approach by decorating my service:
[ServiceBehavior(TransactionIsolationLevel = IsolationLevel.Serializable, TransactionTimeout = "00:00:30")]
And my GenerateUniqueValue with:
[OperationBehavior(TransactionScopeRequired = true)]
The problem is that a test of simultaneous hits to the service method results in an error:
"System.ServiceModel.ProtocolException: The transaction under which this method call was executing was asynchronously aborted."
Here is my client test code:
private static void Main(string[] args)
{
    List<Client> clients = new List<Client>();
    for (int i = 1; i < 20; i++)
    {
        clients.Add(new Client());
    }
    foreach (var client in clients)
    {
        Thread thread = new Thread(new ThreadStart(client.GenerateUniqueValue));
        thread.Start();
    }
    Console.ReadLine();
}
If the transaction is supposed to be isolated, why are multiple threads calling the method clashing?
A transaction is for treating multiple actions as a single atomic action. So if you want to make the second thread wait for the first thread's completion, you have to deal with concurrency, not transactions.
Try using the System.ServiceModel.ServiceBehaviorAttribute.ConcurrencyMode attribute with the Single or Reentrant concurrency mode. I guess that's what you are expecting.
[ServiceBehavior(ConcurrencyMode=ConcurrencyMode.Reentrant)]
I guess you got the exception because IsolationLevel.Serializable lets the second thread read the volatile data but does not let it change the data. You are perhaps doing some change operation, which is not permitted at this isolation level.
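One caveat: ConcurrencyMode only serializes calls within a single service instance, so it has to be combined with single-instancing to serialize all callers. A sketch of that combination (the service and contract names are hypothetical):

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class UniqueValueService : IUniqueValueService
{
    private int _lastValue; // stand-in for the value normally read from the database

    // With a single instance and ConcurrencyMode.Single, WCF dispatches one
    // call at a time; concurrent callers queue until the current call returns.
    public int GenerateUniqueValue()
    {
        _lastValue++;      // business logic placeholder
        return _lastValue; // would be written back to the database here
    }
}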
All right, I've seen some posts asking almost the same thing, but the points were a little different.
This is a classic case: I'm saving/updating an entity and, within the SAME SESSION, I'm trying to get it back from the database (using criteria/Find/Enumerable/etc.) with FlushMode = Auto. The problem is: NHibernate isn't flushing the updates before querying, so I'm getting inconsistent data from the database.
"Fair enough", some people will say, as the documentation states:
This process, flush, occurs by default at the following points:
from some invocations of Find() or Enumerable()
from NHibernate.ITransaction.Commit()
from ISession.Flush()
The bold "some invocations" clearly says that NH has no responsibility at all. IMO, though, we have a consistency problem here because the doc also states that:
Except when you explicity Flush(), there are absolutely no guarantees about when the Session executes the ADO.NET calls, only the order in which they are executed. However, NHibernate does guarantee that the ISession.Find(..) methods will never return stale data; nor will they return the wrong data.
So, if I'm using CreateQuery (the Find replacement) and filtering for entities with property Value = 20, NH may NOT return entities with Value = 30, right? But that's what happens in practice, because the flush does not happen automatically when it should.
public void FlushModeAutoTest()
{
    ISession session = _sessionFactory.OpenSession();
    session.FlushMode = FlushMode.Auto;

    MappedEntity entity = new MappedEntity() { Name = "Entity", Value = 20 };
    session.Save(entity);

    entity.Value = 30;
    session.SaveOrUpdate(entity);

    // RETURNS ONE ENTITY, WHEN IT SHOULD RETURN ZERO
    var list = session.CreateQuery("from MappedEntity where Value = 20").List<MappedEntity>();

    session.Flush();
    session.Close();
}
After all: am I getting it wrong, is it a bug, or is it simply an unpredictable feature, so everybody has to call Flush() to make sure it works?
Thank you.
Filipe
I'm not very familiar with the NHibernate source code but this method from the ISession implementation in the 2.1.2.GA release may answer the question:
/// <summary>
/// detect in-memory changes, determine if the changes are to tables
/// named in the query and, if so, complete execution the flush
/// </summary>
/// <param name="querySpaces"></param>
/// <returns></returns>
private bool AutoFlushIfRequired(ISet<string> querySpaces)
{
    using (new SessionIdLoggingContext(SessionId))
    {
        CheckAndUpdateSessionStatus();
        if (!TransactionInProgress)
        {
            // do not auto-flush while outside a transaction
            return false;
        }
        AutoFlushEvent autoFlushEvent = new AutoFlushEvent(querySpaces, this);
        IAutoFlushEventListener[] autoFlushEventListener = listeners.AutoFlushEventListeners;
        for (int i = 0; i < autoFlushEventListener.Length; i++)
        {
            autoFlushEventListener[i].OnAutoFlush(autoFlushEvent);
        }
        return autoFlushEvent.FlushRequired;
    }
}
I take this to mean that auto-flush will only guarantee consistency inside a transaction, which makes some sense. Try rewriting your test using a transaction; I'm very curious whether that will fix the problem.
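Something like this (an untested sketch of your test, just wrapped in a transaction):

public void FlushModeAutoTest()
{
    using (ISession session = _sessionFactory.OpenSession())
    using (ITransaction tx = session.BeginTransaction())
    {
        session.FlushMode = FlushMode.Auto;
        MappedEntity entity = new MappedEntity() { Name = "Entity", Value = 20 };
        session.Save(entity);
        entity.Value = 30;
        session.SaveOrUpdate(entity);
        // With a transaction in progress, auto-flush should run before this
        // query, so no entity with Value = 20 should come back.
        var list = session.CreateQuery("from MappedEntity where Value = 20").List<MappedEntity>();
        tx.Commit();
    }
}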
If you think about it, the query in your example must always go to the db. The session is not a complete cache of all records in the db, so there could be other entities with a Value of 20 on disk. And since you didn't Commit() a transaction or Flush() the session, NH has no way of knowing which "view" you want to query (DB or session).
It seems like the "best practice" is to do everything (gets & sets) inside explicit transactions:
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // execute code that uses the session
    tx.Commit();
}
See here for a bunch of details.
Managing and tuning NHibernate is an art form.
Why do you set an initial value of 20, save, and then change it to 30?
As a matter of practice, if you are going to modify the session and then query it, you might want to explicitly flush between those operations, as shown below. You may take a slight performance hit (after all, you then don't let NHibernate optimize session flushing), but you can revisit that if it becomes a problem.
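In code, that explicit flush is just one extra call between the writes and the query (an excerpt based on the test posted above):

session.Save(entity);
entity.Value = 30;
session.SaveOrUpdate(entity);
session.Flush(); // push the pending insert/update to the database before querying
var list = session.CreateQuery("from MappedEntity where Value = 20").List<MappedEntity>();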
You quoted that "session.Find methods will never return stale data". I would modify your code to use Find instead of CreateQuery to see if that works.