I'm sure I've seen this discussed, but I must not be using the right keywords because I can't find anything on it now:
I have a desktop application using NHibernate persistence. I'd like to use the session's IsDirty() method to tell the user whether any persistent data has been changed, showing him either a Close button or an Apply and a Cancel button, depending on whether there are changes.
The problem is that calling IsDirty() causes at least some data to get flushed to the database, even though my FlushMode is Never. I'm interested in knowing why IsDirty() feels it has to flush those changes, but more importantly I want to know how I can get this information without flushing.
As an aside: Since I don't have a transaction wrapping the whole time the user edits the information on the form, I assume that any changes flushed are there to stay, whether I end up committing everything else or not.
So, can someone tell me how to get the functionality I want without the functionality I don't?
I haven't done this myself, but after some googling it seems possible. How about just looping over all the loaded entities in the session and checking whether they're dirty? You can even manually compare each entity's loaded state with its current state.
Try this code that I found here on StackOverflow: (Source)
var dirtyObjects = new List<object>();
var sessionImpl = session.GetSessionImplementation();
foreach (NHibernate.Engine.EntityEntry entityEntry in sessionImpl.PersistenceContext.EntityEntries.Values)
{
    // State of the entity as it was originally loaded from the database
    var loadedState = entityEntry.LoadedState;
    // The entity instance itself and its current property values
    var o = sessionImpl.PersistenceContext.GetEntity(entityEntry.EntityKey);
    var currentState = entityEntry.Persister.GetPropertyValues(o, sessionImpl.EntityMode);
    // FindDirty returns the indexes of the changed properties, or null if nothing changed
    if (entityEntry.Persister.FindDirty(currentState, loadedState, o, sessionImpl) != null)
    {
        dirtyObjects.Add(o);
    }
}
If you're worried about performance you could check the type of the entity before comparing the properties:
Type t = Type.GetType(entityEntry.EntityName);
if (t == typeof(Employee))
{
    // do something
}
You can also try looking into making your own dirty-check interceptor.
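For reference, a bare-bones dirty-check interceptor might look roughly like the sketch below. This is only an illustration of the general shape; note that OnFlushDirty and friends are only invoked when a flush actually happens, so on its own this suits scenarios where you do flush at save time and just want a "something changed" flag.
using NHibernate;
using NHibernate.Type;

public class ChangeTrackingInterceptor : EmptyInterceptor
{
    public bool HasChanges { get; private set; }

    public override bool OnFlushDirty(object entity, object id, object[] currentState,
        object[] previousState, string[] propertyNames, IType[] types)
    {
        HasChanges = true;
        return false; // false = we did not modify the entity state ourselves
    }

    public override bool OnSave(object entity, object id, object[] state,
        string[] propertyNames, IType[] types)
    {
        HasChanges = true;
        return false;
    }

    public override void OnDelete(object entity, object id, object[] state,
        string[] propertyNames, IType[] types)
    {
        HasChanges = true;
    }
}

// Usage: pass an instance when opening the session.
// var interceptor = new ChangeTrackingInterceptor();
// var session = sessionFactory.OpenSession(interceptor);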
I was finally able to turn my attention back to this and work out a solution, as follows:
/// <summary>
/// Check whether a session is "dirty" without triggering a flush
/// </summary>
/// <param name="session"></param>
/// <returns>true if the session is "dirty", meaning that it will update the database when flushed</returns>
/// <remarks>
/// The rationale behind this is the need to determine if there's anything to flush to the database without actually
/// running through the Flush process. The problem with a premature Flush is that one may want to collect changes
/// to persistent objects and only start a transaction later on to flush them. I have this in a Winforms application
/// and this method allows me to notify the user whether he has made changes that need saving while not leaving a
/// transaction open while he works, which can cause locking issues.
/// <para>
/// Note that the check for dirty collections may give false positives, which is good enough for my purposes but
could be improved upon using calls to GetOrphans and other persistent-collection methods.</para>
/// </remarks>
public static bool IsDirtyNoFlush(this ISession session)
{
    var pc = session.GetSessionImplementation().PersistenceContext;
    if (pc.EntitiesByKey.Values.Any(o => IsDirtyEntity(session, o)))
        return true;
    return pc.CollectionEntries.Keys.Cast<IPersistentCollection>()
        .Any(coll => coll.WasInitialized && coll.IsDirty);
}
The first part is basically as Shahin recommended above. The last line is what I had been missing. As I mention in the comment, it's still not as good as the built-in IsDirty(), because that one will recognize, for instance, that removing an item from a persistent collection and then putting it back in makes it not dirty anymore, whereas my method will give a false-positive. Still, it's completely adequate for my purposes (and most others, I would imagine) and avoids flushing.
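The IsDirtyEntity helper used above isn't shown; a minimal sketch along the lines of Shahin's code might look like the following (the null checks and the skipping of entities without a loaded state are my own assumptions):
private static bool IsDirtyEntity(ISession session, object entity)
{
    var sessionImpl = session.GetSessionImplementation();
    var entry = sessionImpl.PersistenceContext.GetEntry(entity);

    // Entities without an entry or a loaded state (e.g. read-only ones) are skipped here
    if (entry == null || entry.LoadedState == null)
        return false;

    var currentState = entry.Persister.GetPropertyValues(entity, sessionImpl.EntityMode);
    // FindDirty returns the indexes of changed properties, or null when nothing changed
    return entry.Persister.FindDirty(currentState, entry.LoadedState, entity, sessionImpl) != null;
}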
I did dirty-state tracking within my viewmodel properties. On each PropertyChanged I call MarkAsDirty() on my ViewModelBase class, which sets a property that is then bound to my Save button command.
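The answer doesn't include code, but a rough sketch of that kind of base class might look like this (MarkAsDirty and ViewModelBase are named in the answer; everything else is my assumption):
using System.ComponentModel;

public abstract class ViewModelBase : INotifyPropertyChanged
{
    private bool _isDirty;

    public event PropertyChangedEventHandler PropertyChanged;

    // Bound (directly or via the Save command's CanExecute) to enable the Save button.
    public bool IsDirty
    {
        get { return _isDirty; }
        private set
        {
            if (_isDirty == value) return;
            _isDirty = value;
            RaisePropertyChanged("IsDirty");
        }
    }

    protected void MarkAsDirty()
    {
        IsDirty = true;
    }

    protected void RaisePropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));

        if (propertyName != "IsDirty")
            MarkAsDirty(); // any other property change marks the viewmodel dirty
    }
}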
But if you want to use the NH Tracking, this SessionExtension could fit your requirements.
I have been assigned a task to verify the count of changes done using SaveChanges().
It is expected that the developer knows beforehand how many records will be changed when SaveChanges() is called.
To implement this, I have created an extension method for DbContext called SaveChangesAndVerify(int expectedChangeCount), where I use a transaction and compare this parameter with the return value of SaveChanges().
If the values match, the transaction is committed; if they don't, the transaction is rolled back.
Please check the code below and let me know if it would work and if there are any considerations that I need to make. Also, is there a better way to do this?
public static class DbContextExtensions
{
    public static int SaveChangesAndVerify(this DbContext context, int expectedChangeCount)
    {
        context.Database.BeginTransaction();
        var actualChangeCount = context.SaveChanges();
        if (actualChangeCount == expectedChangeCount)
        {
            context.Database.CommitTransaction();
            return actualChangeCount;
        }
        else
        {
            context.Database.RollbackTransaction();
            throw new DbUpdateException($"Expected count {expectedChangeCount} did not match actual count {actualChangeCount} while saving the changes.");
        }
    }

    public static async Task<int> SaveChangesAndVerifyAsync(this DbContext context, int expectedChangeCount, CancellationToken cancellationToken = default)
    {
        await context.Database.BeginTransactionAsync();
        var actualChangeCount = await context.SaveChangesAsync();
        if (actualChangeCount == expectedChangeCount)
        {
            context.Database.CommitTransaction();
            return actualChangeCount;
        }
        else
        {
            context.Database.RollbackTransaction();
            throw new DbUpdateException($"Expected count {expectedChangeCount} did not match actual count {actualChangeCount} while saving the changes.");
        }
    }
}
A sample usage would be context.SaveChangesAndVerify(1), where a developer expects only one record to be updated.
Ok so some points.
Unless you've disabled it, SaveChanges works as a transaction: nothing will be changed if anything fails.
Furthermore, use context.ChangeTracker.Entries(); from there you can get the count of changed entities, so you won't need to handle transactions yourself. Also, SaveChanges() simply returns the number of rows affected, so it may not tell the full story.
Generally, from a project architecture standpoint, I dislike the idea of having this kind of check: it increases the complexity of the code for dynamic changes and simply adds complexity without bringing any kind of security or safety. Data integrity and proper behavior should be validated using unit tests, not these kinds of methods. For example, you could add unit tests that validate that the rows that got changed are the ones you expected. But that should be test code, not code that will be shipped to production.
But if you need to do it, don't use a transaction; count the entities before changing anything, as it is much cheaper. You can even use a "cheap" for loop so you can log which entities failed, and so on. Furthermore, since we are policing the developers: you are using extension methods, which means a developer can still freely call SaveChanges() as far as I can tell. You should create a custom DbContext class and expose only those two methods for saving changes.
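For illustration, a ChangeTracker-based version might look roughly like this (the method names are mine, not from the answer, and the entry count can still differ from the row count SaveChanges reports when cascades or extra rows are involved):
using System.Linq;
using Microsoft.EntityFrameworkCore;

public static class DbContextVerifyExtensions
{
    // Counts pending inserts/updates/deletes without touching the database.
    public static int PendingChangeCount(this DbContext context)
    {
        context.ChangeTracker.DetectChanges(); // make sure property changes have been detected
        return context.ChangeTracker.Entries()
            .Count(e => e.State == EntityState.Added
                     || e.State == EntityState.Modified
                     || e.State == EntityState.Deleted);
    }

    public static int SaveChangesAndVerify(this DbContext context, int expectedChangeCount)
    {
        var pending = context.PendingChangeCount();
        if (pending != expectedChangeCount)
            throw new DbUpdateException($"Expected {expectedChangeCount} pending changes but found {pending}.");
        return context.SaveChanges();
    }
}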
So I need some advice and insight here. Thanks in advance for your thoughts.
I have developed static functions that return a single record from a LINQ entity. Like so:
FooRecord GetRecord(Guid id)
{
    using (var dc = new FooDataContext())
        return dc.FooRecords.Where(a => a.Id == id).First();
}
This throws an exception because the DataContext is already disposed, which creates problems with deferred execution. This works:
FooRecord GetRecord(Guid id)
{
    var dc = new FooDataContext();
    return dc.FooRecords.Where(a => a.Id == id).First();
}
I am worried. How quickly will the DataContext be disposed? Obviously if I grab the record immediately this won't cause an issue. However, say I need to grab a record through association:
var record = Data.FooRecord.GetRecord(id);
//Do a bunch of stuff...
//Now we grab the related record from another entity
var barRecord = record.BarRecord;
Is there a risk that the DataContext will be gone by this point? Any advice?
You basically do not need to Dispose() your DataContext for the reasons discussed here:
When should I dispose of a data context
http://csharpindepth.com/ViewNote.aspx?NoteID=89
The main reason for implementing IDisposable on a type is to dispose of any unmanaged resources. The only unmanaged resource allocated by the DataContext is the underlying database connection, but the DataContext already takes care of opening and closing the connection as needed.
The main thing you want to avoid is returning an IEnumerable collection and then never enumerating it, as this will cause the connection to remain open indefinitely. However, since you are only returning a single object, you shouldn't have to worry about this.
Also note that if you access any relationship property on the returned object, it may cause the connection to be momentarily reopened so that the property can be lazy-loaded. You can avoid this by using DataLoadOptions.LoadWith() with your DataContext to eager-load any properties you intend to access. See http://msdn.microsoft.com/en-us/library/system.data.linq.dataloadoptions.aspx
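For instance, adapting the question's types (and assuming BarRecord is a direct association property on FooRecord), eager loading would look roughly like this:
using System;
using System.Data.Linq;
using System.Linq;

FooRecord GetRecordWithBar(Guid id)
{
    var dc = new FooDataContext();
    var loadOptions = new DataLoadOptions();
    loadOptions.LoadWith<FooRecord>(f => f.BarRecord); // fetch BarRecord in the same query
    dc.LoadOptions = loadOptions;                      // must be set before the first query runs
    return dc.FooRecords.First(a => a.Id == id);
}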
As to the last part of the question: if the returned entities contain properties that can be lazy-loaded, then they will contain internal references back to the DataContext that will keep it in memory. Once you have no more references to these entities, the DataContext will of course be garbage-collected just like any other object.
I am looking for ideas to synchronize updates to a single instance of a persisted object.
A simple domain object:
public class Employee {
    long id;
    String phone;
    String address;
}
Suppose two UI instances pull up Employee(1) where id=1. The first client edits the phone property of Employee(1); the second client edits the address property of Employee(1). When they submit their changes, both need to be persisted.
A possible solution would be to create an update function for each property:
public void updatePhone(Employee employee) {
    // right now I am synchronizing on _employeeUpdateLock;
    // synchronizing on the Employee instance won't work
    synchronized (something) {
        // update phone
    }
}
// a similar function for address
This approach unfortunately doesn't scale well: the API constantly needs to be kept aligned with the properties. Note that
public void update(Employee employee) { ... }
won't work because the function can't tell which property the client intends to change, unless a copy of the original object can be pulled up within the update function.
Hibernate provides a mechanism to lock a row in the database. This doesn't scale well either.
Perhaps the solution depends on how frequently a row is expected to be modified. For low-frequency modifications, synchronized blocks and locks are fine. For high-frequency modifications, a copy of the row at the time of retrieval can be used to figure out which properties were updated.
I am hoping to find a better paradigm to solve this problem. Thanks.
I don't really understand your intended architecture. Several clients are to share one instance of the same data object? How is that even possible - short of using some kind of remote object model (which is considered a bad thing nowadays)?
I have a singleton class AppSetting in an ASP.NET app where I need to check a value and optionally update it. I know I need to use a locking mechanism to prevent multi-threading issues, but can someone verify that the following is a valid approach?
private static void ValidateEncryptionKey()
{
    if (AppSetting.Instance.EncryptionKey.Equals(Constants.ENCRYPTION_KEY, StringComparison.Ordinal))
    {
        lock (AppSetting.Instance)
        {
            if (AppSetting.Instance.EncryptionKey.Equals(Constants.ENCRYPTION_KEY, StringComparison.Ordinal))
            {
                AppSetting.Instance.EncryptionKey = GenerateNewEncryptionKey();
                AppSetting.Instance.Save();
            }
        }
    }
}
I have also seen examples where you lock on a private field in the current class, but I think the above approach is more intuitive.
Thanks!
Intuitive, maybe, but the reason those examples lock on a private field is to ensure that no other piece of code in the application can take the same lock in such a way as to deadlock the application, which is always good defensive practice.
If it's a small application and you're the only programmer working on it, you can probably get away with locking on a public field/property (which I presume AppSetting.Instance is?), but in any other circumstances, I'd strongly recommend that you go the private field route. It will save you a whole lot of debugging time in the future when someone else, or you in the future having forgotten the implementation details of this bit, take a lock on AppSetting.Instance somewhere distant in the code and everything starts crashing.
I'd also suggest you lose the outermost if. Taking a lock isn't free, sure, but it's a lot faster than doing a string comparison, especially since you need to do it a second time inside the lock anyway.
So, something like:
private object _instanceLock = new object();

private static void ValidateEncryptionKey()
{
    lock (AppSetting.Instance._instanceLock)
    {
        if (AppSetting.Instance.EncryptionKey.Equals(Constants.ENCRYPTION_KEY, StringComparison.Ordinal))
        {
            AppSetting.Instance.EncryptionKey = GenerateNewEncryptionKey();
            AppSetting.Instance.Save();
        }
    }
}
An additional refinement, depending on what your requirements are to keep the EncryptionKey consistent with the rest of the state in AppSetting.Instance, would be to use a separate private lock object for the EncryptionKey and any related fields, rather than locking the entire instance every time.
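That refinement might look something like this (assuming only key-related code ever takes this lock, with other AppSetting state guarded separately):
// Inside AppSetting: a lock dedicated to the encryption key and anything that must stay consistent with it.
private readonly object _encryptionKeyLock = new object();

private static void ValidateEncryptionKey()
{
    lock (AppSetting.Instance._encryptionKeyLock)
    {
        if (AppSetting.Instance.EncryptionKey.Equals(Constants.ENCRYPTION_KEY, StringComparison.Ordinal))
        {
            AppSetting.Instance.EncryptionKey = GenerateNewEncryptionKey();
            AppSetting.Instance.Save();
        }
    }
}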
I have a call that needs to determine if a field has changed. But calling Get using that entity's id returns the same entity, not the prior version.
Entity e = Dao.Get(id);
// At this point e.Field is X
e.Field = y;
Dao.Save(e);

Entity Dao.Get(Guid id)
{
    return Session.Get(id);
}

Entity Dao.Save(Entity e)
{
    Entity olde = Session.Get(e.Id);
    if (e.Field != olde.Field) // <--- e.Field == olde.Field here, so this does not run
        DoBigMethod(e);
    return e;
}
How do I handle this situation without adding an onChange method to the Entity class?
You only know one "version" of the entity: the current one. There is actually only one version of the entity. You have it in memory and you already changed it and forgot the previous state.
Call get to see the previous database state is dangerous. If changes are already flushed (NHibernate flushes before queries for instance), you get your changes. If you open another session, you see changes from other transactions.
Are you only interested in one single field? Then you can cache the old value somewhere.
If that won't work, you need to tell me more about why you need to know the previous value of this field.
EDIT:
Some more ideas:
Cache the previous state of the field when you get the object, in Dao.Get.
Implement the property so that it sets a flag when it changes.
Consider making the change an explicit operation called by the client, instead of an implicit operation that runs when the flag changes. For instance, if the flag is called "Activated", implement "Activate" and "Deactivate" methods. These methods change the flag and perform the "large set of code"; the flag is read-only for the rest of the world (see the sketch below).
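A rough sketch of that last idea, using the hypothetical "Activated" flag from above (where DoBigMethod lives and what it does are placeholders):
using System;

public class Entity
{
    public virtual Guid Id { get; set; }

    // Read-only for the rest of the world; only Activate/Deactivate change it.
    public virtual bool Activated { get; protected set; }

    public virtual void Activate()
    {
        if (Activated) return;   // no change, nothing to do
        Activated = true;
        DoBigMethod();           // the "large set of code" runs exactly when the flag flips
    }

    public virtual void Deactivate()
    {
        if (!Activated) return;
        Activated = false;
        DoBigMethod();
    }

    protected virtual void DoBigMethod()
    {
        // placeholder for the expensive work triggered by the change
    }
}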