All right, I've seen some posts asking almost the same thing, but the points were a little bit different.
This is a classic case: I'm saving/updating an entity and, within the SAME SESSION, I'm trying to get it back from the database (using Criteria/Find/Enumerable/etc.) with FlushMode = Auto. The problem is: NHibernate isn't flushing the updates before querying, so I'm getting inconsistent data from the database.
"Fair enough", some people will say, as the documentation states:
This process, flush, occurs by default at the following points:
from some invocations of Find() or Enumerable()
from NHibernate.ITransaction.Commit()
from ISession.Flush()
The bold "some invocations" clearly says that NH has no responsibility at all. IMO, though, we have a consistency problem here because the doc also states that:
Except when you explicity Flush(), there are absolutely no guarantees about when the Session executes the ADO.NET calls, only the order in which they are executed. However, NHibernate does guarantee that the ISession.Find(..) methods will never return stale data; nor will they return the wrong data.
So, if I'm using CreateQuery (Find replacement) and filtering for entities with property Value = 20, NH may NOT return entities with Value = 30, right? But that's what happens in fact, because the Flush is not happening automatically when it should.
public void FlushModeAutoTest()
{
    ISession session = _sessionFactory.OpenSession();
    session.FlushMode = FlushMode.Auto;

    MappedEntity entity = new MappedEntity() { Name = "Entity", Value = 20 };
    session.Save(entity);

    entity.Value = 30;
    session.SaveOrUpdate(entity);

    // RETURNS ONE ENTITY, WHEN IT SHOULD RETURN ZERO
    var list = session.CreateQuery("from MappedEntity where Value = 20").List<MappedEntity>();

    session.Flush();
    session.Close();
}
After all: am I getting it wrong, is it a bug, or is it simply unpredictable behavior, so everybody has to call Flush() to make sure it works?
Thank you.
Filipe
I'm not very familiar with the NHibernate source code, but this method from the ISession implementation in the 2.1.2.GA release may answer the question:
/// <summary>
/// detect in-memory changes, determine if the changes are to tables
/// named in the query and, if so, complete execution of the flush
/// </summary>
/// <param name="querySpaces"></param>
/// <returns></returns>
private bool AutoFlushIfRequired(ISet<string> querySpaces)
{
    using (new SessionIdLoggingContext(SessionId))
    {
        CheckAndUpdateSessionStatus();
        if (!TransactionInProgress)
        {
            // do not auto-flush while outside a transaction
            return false;
        }
        AutoFlushEvent autoFlushEvent = new AutoFlushEvent(querySpaces, this);
        IAutoFlushEventListener[] autoFlushEventListener = listeners.AutoFlushEventListeners;
        for (int i = 0; i < autoFlushEventListener.Length; i++)
        {
            autoFlushEventListener[i].OnAutoFlush(autoFlushEvent);
        }
        return autoFlushEvent.FlushRequired;
    }
}
I take this to mean that auto-flush will only guarantee consistency inside a transaction, which makes some sense. Try rewriting your test to use a transaction; I'm very curious whether that fixes the problem.
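For example, something along these lines (a sketch reusing the question's _sessionFactory and MappedEntity; I haven't run it):

using (ISession session = _sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    session.FlushMode = FlushMode.Auto;

    MappedEntity entity = new MappedEntity() { Name = "Entity", Value = 20 };
    session.Save(entity);

    entity.Value = 30;
    session.SaveOrUpdate(entity);

    // With a transaction in progress, AutoFlushIfRequired should flush the
    // pending changes first, so this should now return zero entities.
    var list = session.CreateQuery("from MappedEntity where Value = 20")
                      .List<MappedEntity>();

    tx.Commit();
}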
If you think about it, the query in your example must always go to the db. The session is not a complete cache of all records in the db, so there could be other entities with a value of 20 on disk. And since you didn't Commit() a transaction or Flush() the session, NH has no way to know which "view" you want to query (DB | Session).
It seems like the "Best Practice" is to do everything (gets & sets) inside of explicit transactions:
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // execute code that uses the session
    tx.Commit();
}
See here for a bunch of details.
Managing and tuning NHibernate is an art form.
Why do you set an initial value of 20, save, then change it to 30?
As a matter of practice, if you are going to modify the session and then query the session, you might want to explicitly flush between those operations. You may take a slight performance hit (after all, you then don't let NHibernate optimize session flushing), but you can revisit that if it becomes a problem.
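For example, a minimal sketch of that pattern, reusing the entity and session from the question:

session.SaveOrUpdate(entity);
session.Flush(); // push the pending INSERT/UPDATE to the database now

// the query can no longer see stale values for this entity
var matches = session
    .CreateQuery("from MappedEntity where Value = 20")
    .List<MappedEntity>();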
You quoted that "session.Find methods will never return stale data". I would modify your code to use Find instead of CreateQuery to see if that works.
Can this code cause problems? I found it in one project and don't know if it could be the cause of some crazy bugs (deadlocks, timeouts in the DB, ...). Code like this is executed concurrently many times in the program, even from multiple threads.
Thanks a lot
class first
{
    void doSomething()
    {
        using (ITransaction transaction = session.BeginTransaction())
        {
            var foo = new second();
            foo.doInNewTransaction(); // inner transaction in a new session
            transaction.Commit();
        }
    }
}

class second
{
    void doInNewTransaction()
    {
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction transaction = session.BeginTransaction())
        {
            // do something in the database
            transaction.Commit();
        }
    }
}
This should be fine; I'm sure I have done stuff like this in the past. The only thing you need to be aware of is that if you modify an object in the inner session, those changes will not automatically be reflected in the outer session if the same object has already been loaded there.
Having said that, if you do not need to do this, then I would avoid it. Normally I would recommend AOP-based transaction management when using NHibernate. This would allow your inner component to easily join in with the transaction from the outer component. However, in order to do this you need to be using a DI container that supports it, for example Spring.NET or Castle.
This is sample code where I am doing some tests:
Get entities
Delete an entity
Rollback the transaction
Change the entity
Refresh the entity
Get entities
I am getting this exception while executing the code below: "instance was not in a valid state".
ISession session = sessionFactory.OpenSession();

// get entities
var list1 = session.Query<Asset>().ToList();

// delete an entity, then roll the transaction back
ITransaction transaction = session.BeginTransaction();
session.Delete(list1[0]);
transaction.Rollback();
transaction.Dispose();

// change the entity and refresh it (this is where the exception is thrown)
list1[0].Name = "Test";
session.Refresh(list1[0]);

// get entities again
var list2 = session.Query<Asset>().ToList();
If I call Refresh twice, it does not give any issue; it works fine:
try
{
session.Refresh(list1[0]);
}
catch (Exception)
{
session.Refresh(list1[0]);
}
Could you please share your view and suggest what is wrong here?
I think the problem is with your handling of rollback and exceptions. After a rollback or an exception, the in-memory state of the objects is likely no longer consistent with their persisted state, so the session is not safe to use any more without cleanup. The suggestion is that after an exception you should roll back any transaction and then either discard the session or clear it using session.Clear(). The same applies to rollback: either start a new session, or clear it and discard all loaded objects, otherwise the inconsistencies will cause a lot of trouble.
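For example, a sketch matching your flow (same session and Asset entity as above):

using (ITransaction transaction = session.BeginTransaction())
{
    session.Delete(list1[0]);
    transaction.Rollback();
}

// After the rollback, the loaded objects are stale: detach everything
// and reload, instead of calling Refresh() on the old instances.
session.Clear();
var fresh = session.Query<Asset>().ToList();
fresh[0].Name = "Test";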
I just tested PetaPoco transactions in a multithreaded scenario...
I have a simple test case:
-- a simple value object, call it MediaDevice
-- insert a record and update it, 1000 times
void TransactionThread(object state)
{
    Database db = (Database)state; // the shared PetaPoco Database passed in
    for (int i = 0; i < 1000; i++)
    {
        Transaction transaction = db.GetTransaction();

        MediaDevice device = new MediaDevice();
        device.Name = "Name";
        device.Brand = "Brand";
        db.Insert(device);

        device.Name = "Name_Updated";
        device.Brand = "Brand_Updated";
        db.Update(device);

        transaction.Complete();
    }

    long count = db.ExecuteScalar<long>("SELECT COUNT(*) FROM MediaDevices");
    Console.WriteLine("Number of all records: " + count);
}
And I call this from two threads like this (a single Database object shared by both threads):
void TransactionTest()
{
    Database db = GetDatabase();
    Thread tThread1 = ... // thread running TransactionThread()
    Thread tThread2 = ... // thread running TransactionThread()
    tThread1.Start(db);   // pass the Database to TransactionThread()
    tThread2.Start(db);   // pass the same Database to TransactionThread()
}
I get a null reference error, or sometimes an "object disposed" error, for the Database.
But when I supply two Database instances:
void TransactionTest()
{
    Database db = GetDatabase();
    Database db2 = GetDatabase();
    Thread tThread1 = ... // thread running TransactionThread()
    Thread tThread2 = ... // thread running TransactionThread()
    tThread1.Start(db);   // pass Database instance db to TransactionThread()
    tThread2.Start(db2);  // pass Database instance db2 to TransactionThread()
}
Everything is OK...
Well, when I check the PetaPoco source code for transactions, I see this in Transaction.Complete:
public virtual void Complete()
{
    _db.CompleteTransaction();
    _db = null;
}
My question: to be able to use transactions from multiple threads, do I have to use a new copy of the Database object? Or what am I doing wrong?
And to make it thread-safe, do I have to open and close a NEW Database for every update/query?
Yes, you need a separate PetaPoco Database instance per-thread. See this quote from the PetaPoco documentation:
Note: for transactions to work, all operations need to use the same instance of the PetaPoco database object. So you'll probably want to use a per-http request, or per-thread IOC container to serve up a shared instance of this object. Personally StructureMap is my favourite for this.
The phrase that gives the clue is "per-thread": one instance of the PetaPoco database object should be used per thread.
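For example, one way to get a per-thread instance without a full IoC container is ThreadLocal<T> (a sketch only; GetDatabase() is the question's own factory method and must be callable from the initializer):

// requires System.Threading (ThreadLocal<T>, .NET 4 and later)
// each worker thread lazily gets its own PetaPoco Database, so a transaction
// completing on one thread can't null out another thread's _db
static readonly ThreadLocal<Database> PerThreadDb =
    new ThreadLocal<Database>(() => GetDatabase());

void TransactionThread()
{
    Database db = PerThreadDb.Value;   // this thread's private instance
    using (var transaction = db.GetTransaction())
    {
        // ... inserts / updates ...
        transaction.Complete();
    }
}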
Hi, use WITH (NOLOCK) in the select query, because the table may be locked: long count = db.ExecuteScalar<long>("SELECT COUNT(*) FROM MediaDevices WITH (NOLOCK)");
Sorry dude, yes, you are right. They set the object to null, so you cannot share the same object across threads. You have to do it the way they describe, like db = GetDatabase(); db2 = GetDatabase();
Otherwise you can change the source code for your requirements. I think their license allows it, but I am not sure.
I have a few methods: a couple of calls to SQL Server and some business logic to generate a unique value. These methods are all contained inside a parent method:
GenerateUniqueValue()
{
    // 1. Call the db for the last value
    // 2. Business logic to create the new value
    // 3. Update the db with the newly created value
}
I want the call to GenerateUniqueValue to be isolated, i.e., when two clients call it simultaneously, the second client must wait for the first one to finish.
Originally, I made my service a singleton; however, I have to anticipate future changes that may include load balancing, so I believe a singleton approach is out. Next I decided to try the transaction approach by decorating my service:
[ServiceBehavior(TransactionIsolationLevel = IsolationLevel.Serializable, TransactionTimeout = "00:00:30")]
And my GenerateUniqueValue with:
[OperationBehavior(TransactionScopeRequired = true)]
The problem is that a test of simultaneous hits to the service method results in an error:
"System.ServiceModel.ProtocolException: The transaction under which this method call was executing was asynchronously aborted."
Here is my client test code:
private static void Main(string[] args)
{
List<Client> clients = new List<Client>();
for (int i = 1; i < 20; i++)
{
clients.Add(new Client());
}
foreach (var client in clients)
{
Thread thread = new Thread(new ThreadStart(client.GenerateUniqueValue));
thread.Start();
}
Console.ReadLine();
}
If the transaction is supposed to be isolated, why do multiple threads calling the method clash?
A transaction is for treating multiple actions as a single atomic action. So if you want the second thread to wait for the first thread's completion, you have to deal with concurrency, not transactions.
Try using the System.ServiceModel.ServiceBehaviorAttribute.ConcurrencyMode property with the Single or Reentrant concurrency mode. I guess that's what you are expecting:
[ServiceBehavior(ConcurrencyMode=ConcurrencyMode.Reentrant)]
I guess you got the exception because IsolationLevel.Serializable lets the second thread read the volatile data but won't let it change it. You are perhaps doing some change operation which is not permitted with this isolation level.
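For illustration, a sketch of the concurrency-based approach (the service and interface names are made up; note that serializing calls for all clients also requires a single service instance, which brings back the load-balancing caveat you mentioned):

// only one call executes at a time across all clients on this host
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class UniqueValueService : IUniqueValueService
{
    public string GenerateUniqueValue()
    {
        // 1. call the db for the last value
        // 2. business logic to create the new value
        // 3. update the db with the new value
        // The steps of concurrent calls never interleave here.
        throw new NotImplementedException();
    }
}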
This is how we implement a generic Save() service in WCF for our EF entities. A T4 template (.tt) generates the code for us. Even though we don't have any problems with it, I hate to assume this is the best approach (even if it might be). You guys seem pretty darn bright and helpful, so I thought I would pose the question:
Is there a better way?
[OperationContract]
public User SaveUser(User entity)
{
    bool _IsDeleted = false;

    using (DatabaseEntities _Context = new DatabaseEntities())
    {
        switch (entity.ChangeTracker.State)
        {
            case ObjectState.Deleted:
                // delete
                _IsDeleted = true;
                _Context.Users.Attach(entity);
                _Context.DeleteObject(entity);
                break;
            default:
                // everything else
                _Context.Users.ApplyChanges(entity);
                break;
        }

        // now, to the database
        try
        {
            // try to save changes, which may cause a conflict.
            _Context.SaveChanges(System.Data.Objects.SaveOptions.None);
        }
        catch (System.Data.OptimisticConcurrencyException)
        {
            // resolve the concurrency conflict by refreshing
            _Context.Refresh(System.Data.Objects.RefreshMode.ClientWins, entity);

            // Save changes.
            _Context.SaveChanges();
        }
    }

    // return
    if (_IsDeleted)
        return null;

    entity.AcceptChanges();
    return entity;
}
Why are you doing this with self-tracking entities? What was wrong with this:
[OperationContract]
public User SaveUser(User entity)
{
    bool isDeleted = false;

    using (DatabaseEntities context = new DatabaseEntities())
    {
        isDeleted = entity.ChangeTracker.State == ObjectState.Deleted;
        context.Users.ApplyChanges(entity); // It deletes entities marked for deletion as well

        try
        {
            // no need to postpone accepting changes, they will not be accepted if exception happens
            context.SaveChanges();
        }
        catch (System.Data.OptimisticConcurrencyException)
        {
            context.Refresh(System.Data.Objects.RefreshMode.ClientWins, entity);
            context.SaveChanges();
        }
    }

    return isDeleted ? null : entity;
}
If I'm not mistaken, people typically don't expose their Entity Framework objects directly in a WCF service. Entity Framework is typically thought of as a data-access layer, and WCF is more of a front-end layer, so they are put on different tiers.
A Data-Transfer Object (DTO) is used in the WCF methods. This is typically a POCO which doesn't have any state-tracking on it whatsoever. The DTO is then mapped to an Entity either by hand or via a framework like AutoMapper.
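For illustration, a hypothetical UserDto mapped by hand (AutoMapper would do the same job); the property names and the dto variable are made up:

// a plain POCO carried over the wire, with no change tracking
public class UserDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// inside the service operation (requires System.Linq):
using (var context = new DatabaseEntities())
{
    User entity = context.Users.Single(u => u.Id == dto.Id); // load the tracked entity
    entity.Name = dto.Name;                                  // copy fields from the DTO
    context.SaveChanges();
}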
Typically clients should know whether they are "adding" or "updating" an object, and I would personally prefer these to be two separate operations on the service interface. Also, I would definitely require them to use a separate method for deleting an object. However, if you absolutely need a generic "Save", you should be able to tell whether the object you've been given is "new" or not based on the presence (or absence) of a primary key value.
A lot of the code can be put into a generic utility. For example, supposing your T4 template produces attributes on the key values of your entities, you could automatically determine whether the key values are present and perform an insert or update accordingly. Also, the try/catch retry block around SaveChanges that you're using (while probably unnecessary) could easily be moved into a simple utility method to be more DRY.
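For example, the retry logic could live in a small helper along these lines (a sketch reusing the question's context type; the method name is made up):

// applies the same "refresh with ClientWins, then retry once" strategy
// as the original SaveUser, for any attached entity
private static void SaveWithClientWins(DatabaseEntities context, object entity)
{
    try
    {
        // may throw if another client changed the same row in the meantime
        context.SaveChanges(System.Data.Objects.SaveOptions.None);
    }
    catch (System.Data.OptimisticConcurrencyException)
    {
        // refresh store values but keep the client's changes, then retry once
        context.Refresh(System.Data.Objects.RefreshMode.ClientWins, entity);
        context.SaveChanges();
    }
}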