I have been assigned a task to verify the count of changes made by SaveChanges().
It is expected that the developer knows beforehand how many records will be changed when SaveChanges() is called.
To implement it, I have created an extension method for DbContext called SaveChangesAndVerify(int expectedChangeCount), where I use a transaction and compare this parameter with the return value of SaveChanges().
If the values match, the transaction is committed; if they don't, the transaction is rolled back.
Please check the code below and let me know if it would work and whether there are any considerations I need to make. Also, is there a better way to do this?
using System.Threading;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public static class DbContextExtensions
{
    public static int SaveChangesAndVerify(this DbContext context, int expectedChangeCount)
    {
        // Dispose the transaction even if SaveChanges() throws.
        using var transaction = context.Database.BeginTransaction();
        var actualChangeCount = context.SaveChanges();

        if (actualChangeCount == expectedChangeCount)
        {
            transaction.Commit();
            return actualChangeCount;
        }

        transaction.Rollback();
        throw new DbUpdateException($"Expected count {expectedChangeCount} did not match actual count {actualChangeCount} while saving the changes.");
    }

    public static async Task<int> SaveChangesAndVerifyAsync(this DbContext context, int expectedChangeCount, CancellationToken cancellationToken = default)
    {
        await using var transaction = await context.Database.BeginTransactionAsync(cancellationToken);
        var actualChangeCount = await context.SaveChangesAsync(cancellationToken);

        if (actualChangeCount == expectedChangeCount)
        {
            await transaction.CommitAsync(cancellationToken);
            return actualChangeCount;
        }

        await transaction.RollbackAsync(cancellationToken);
        throw new DbUpdateException($"Expected count {expectedChangeCount} did not match actual count {actualChangeCount} while saving the changes.");
    }
}
A sample usage would be context.SaveChangesAndVerify(1), where a developer expects only one record to be updated.
OK, so, some points.
Unless you've disabled it, SaveChanges() already runs in a transaction: nothing will be changed if anything fails.
Furthermore, you can use context.ChangeTracker.Entries() to count the changed entities before saving, so this does not require you to handle transactions at all. Also, SaveChanges() simply returns the number of rows affected, so it may not tell the full story.
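For example, something like this (a minimal sketch against the EF Core API; the helper name and the choice of which entity states to count are my assumptions):
using System.Linq;
using Microsoft.EntityFrameworkCore;

public static class ChangeCountExtensions // hypothetical helper
{
    // Counts the entries the change tracker will write on the next SaveChanges().
    public static int PendingChangeCount(this DbContext context)
    {
        context.ChangeTracker.DetectChanges();
        return context.ChangeTracker.Entries()
            .Count(e => e.State == EntityState.Added
                     || e.State == EntityState.Modified
                     || e.State == EntityState.Deleted);
    }
}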
Generally I dislike the idea of having this kind of check, from a project-architecture standpoint: it increases the complexity of the code for dynamic changes and simply adds complexity without bringing any kind of security or safety. Data integrity and proper behavior should be validated with unit tests, not with methods like these. For example, you could add unit tests that validate that the rows that got changed are the ones you expected. But that should be test code, not code that ships to production.
But if you need to do it, don't use a transaction; count the entities before changing anything, as that is much cheaper. You can even use a "cheap" for loop so you can log which entities failed, and so on. Furthermore, since we are policing the developers: you use extension methods, which means a developer can still freely call SaveChanges() as far as I can tell. You should create a custom DbContext class and expose only those two methods for saving changes, along the lines of the sketch below.
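A rough sketch of that custom-context idea (VerifiedDbContext is my name, not a prescribed API; it counts the tracked entries up front instead of opening a transaction, and uses the "cheap" loop to log what diverged):
using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public abstract class VerifiedDbContext : DbContext // hypothetical base class
{
    protected VerifiedDbContext(DbContextOptions options) : base(options) { }

    public int SaveChangesAndVerify(int expectedChangeCount)
    {
        ChangeTracker.DetectChanges();
        var pending = ChangeTracker.Entries()
            .Where(e => e.State == EntityState.Added
                     || e.State == EntityState.Modified
                     || e.State == EntityState.Deleted)
            .ToList();

        if (pending.Count != expectedChangeCount)
        {
            // The cheap loop: log exactly which entities were pending.
            foreach (var entry in pending)
                Console.WriteLine($"{entry.Entity.GetType().Name}: {entry.State}");

            throw new DbUpdateException(
                $"Expected {expectedChangeCount} changes but found {pending.Count}.");
        }

        return base.SaveChanges();
    }
}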
Related
I have a route that needs to make sure that only one user can make that request at a time. How would I enforce that?
Using a static boolean as a field on the controller, you can prevent execution with a guard clause. Static objects are shared across threads within a single application, so this affects all users. The volatile keyword ensures that changes are visible to all threads as soon as they're made. You may have to actually lock, though, which isn't done here.
// Controller class field declaration.
static volatile bool executing = false;
...
// Inside the function that should only run once regardless of the caller.
if (!executing) {
    executing = true;
    execute(); // this is the code you want only a single execution of, regardless of the number of requests
    executing = false;
} else {
    throw new Exception("currently executing");
}
This is not the best way, depending on your actual needs, but it should cover the minimum requirement.
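If you need it to be more robust, here is a sketch of a variation using Interlocked (my own, assuming an ASP.NET Core controller; the controller and method names are hypothetical). The compare-and-swap closes the race between the check and the assignment, and try/finally resets the flag even if the work throws:
using System.Threading;
using Microsoft.AspNetCore.Mvc;

public class SingleRunController : ControllerBase // hypothetical controller
{
    private static int executing; // 0 = idle, 1 = running

    [HttpPost]
    public IActionResult RunOnce()
    {
        // Atomically flip 0 -> 1; any concurrent caller sees 1 and bails out.
        if (Interlocked.CompareExchange(ref executing, 1, 0) != 0)
            return Conflict("currently executing");

        try
        {
            Execute(); // the code you want a single execution of
            return Ok();
        }
        finally
        {
            Interlocked.Exchange(ref executing, 0); // always release the flag
        }
    }

    private void Execute() { /* ... */ }
}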
I've got a WCF service that makes calls to my Entity Framework repository classes to access data. I'm using the Entity Framework 4 CTP, and am using my own POCO objects rather than the auto-generated entity objects.
The context lifetime is limited to the method call. For Select/Insert and Update methods I create the context and dispose of it in the same method returning disconnected entity objects.
I'm now trying to work out the best way to handle concurrency issues. For example, this is what my update method looks like:
public static Sale Update(Sale sale)
{
    using (var ctx = new DBContext())
    {
        var SaleToUpdate =
            (from t in ctx.Sales where t.ID == sale.ID select t).FirstOrDefault();
        if (SaleToUpdate == null) throw new EntityNotFoundException();

        ctx.Sales.ApplyCurrentValues(sale);
        ctx.SaveChanges();
        return sale;
    }
}
This works fine, but because I'm working in a disconnected way, no exception is thrown if the record has been modified since it was retrieved. This is going to cause concurrency issues.
What is the best way to solve this when you're using the Entity Framework over WCF and are not keeping a global context?
The only method I can think of is to give my objects a version number and increment it each time a save is called. This would allow me to check that the version hasn't changed before I save. Not the neatest solution, I know, and it would still allow the client to change their version number, which I really don't want them to be able to do.
EDIT:
Using Ladislav Mrnka's suggestion of RowVersion fields in my entities, each of my entities now has a field called Version of type RowVersion. I then changed my Update method to look like this:
public static Sale Update(Sale sale)
{
    using (var ctx = new DBContext())
    {
        var SaleToUpdate =
            (from t in ctx.Sales where t.ID == sale.ID select t).FirstOrDefault();
        if (SaleToUpdate == null) throw new EntityNotFoundException();

        if (!sale.Version.SequenceEqual(SaleToUpdate.Version))
            throw new OptimisticConcurrencyException("Record is out of date");

        ctx.Sales.ApplyCurrentValues(sale);
        ctx.SaveChanges();
        return sale;
    }
}
It seems to work, but if I should be doing it differently, please let me know. I tried to use Entity Framework's built-in concurrency control by setting the Version field's concurrency mode to Fixed; unfortunately that didn't work, because when I did the query to get the unchanged SaleToUpdate, it picked up its version and used that for its concurrency check, which is obviously current. It feels like the Entity Framework might be missing something here.
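For what it's worth, in later EF versions that expose the DbContext.Entry API, the usual way around exactly this is to overwrite the tracked entity's original concurrency value with the client's copy, so SaveChanges performs the check itself. A sketch, not verified against the EF 4 CTP:
var entry = ctx.Entry(SaleToUpdate);            // requires the later DbContext.Entry API
entry.CurrentValues.SetValues(sale);            // copy the client's values across
entry.Property(s => s.Version).OriginalValue = sale.Version; // compare against the client's Version
ctx.SaveChanges();                              // throws DbUpdateConcurrencyException on a mismatch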
As mentioned, the best practice is to use a column of row version type in your DB table for concurrency checking; here is how it is implemented with Code First:
When using Code First in CTP3, you would need to use the fluent API to describe which properties need concurrency checking, but in CTP4 this can be done declaratively as part of the class definition, using data annotation attributes:
ConcurrencyCheckAttribute:
ConcurrencyCheckAttribute is used to specify that a property has a concurrency mode of “fixed” in the model. A fixed concurrency mode means that the property is part of the concurrency check of the entity during save operations. It applies to scalar properties only:
public class Sale
{
    public int SaleId { get; set; }

    [ConcurrencyCheck]
    public string SalesPersonName { get; set; }
}
Here, ConcurrencyCheck will be turned on for the SalesPersonName property. However, if you decide to include a dedicated Timestamp property of type byte[] in your class, then TimestampAttribute will definitely be the better choice:
TimestampAttribute:
TimestampAttribute is used to specify that a byte[] property has a concurrency mode of “fixed” in the model and that it should be treated as a timestamp column in the store model (a non-nullable byte[] in the CLR type). This attribute applies to scalar properties of type byte[] only, and only one TimestampAttribute can be present on an entity.
public class Sale
{
    public int SaleId { get; set; }

    [Timestamp]
    public byte[] Timestamp { get; set; }
}
Here, not only will the Timestamp property be taken as the concurrency token, but EF Code First also learns that this property has a store type of timestamp and that it is a computed column: we will not be inserting values into this property; rather, the value will be computed on SQL Server itself.
Don't use a custom version number. Use the built-in row version data type of your DB. A row version data type is automatically modified each time you change the record. For example, MSSQL has the timestamp data type. You can use the timestamp column in EF and set it as a Fixed concurrency handler (not sure how to do it with EF Code First, but I believe the fluent API has this possibility). The timestamp column has to be mapped to the POCO entity as a byte array (8 bytes). When you call your update method, you can check the timestamp of the loaded object against the timestamp of the incoming object yourself, to avoid an unnecessary call to the DB. If you do not make the check yourself, it will be handled in EF by setting a where condition in the update statement.
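For reference, in the fluent API of later Code First releases the mapping looks something like this (a sketch; I have not checked the exact CTP syntax):
// Inside your DbContext subclass: map Version as a SQL Server rowversion
// column acting as the fixed-mode concurrency token.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Sale>()
        .Property(s => s.Version)
        .IsRowVersion();
}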
Take a look at Saving Changes and Managing Concurrency
from the article:
try
{
    // Try to save changes, which may cause a conflict.
    int num = context.SaveChanges();
    Console.WriteLine("No conflicts. " +
        num.ToString() + " updates saved.");
}
catch (OptimisticConcurrencyException)
{
    // Resolve the concurrency conflict by refreshing the
    // object context before re-saving changes.
    context.Refresh(RefreshMode.ClientWins, orders);

    // Save changes.
    context.SaveChanges();
    Console.WriteLine("OptimisticConcurrencyException "
        + "handled and changes saved");
}
I am looking for ideas to synchronize updates to a single instance of a persisted object.
A simple domain object:
public class Employee {
    long id;
    String phone;
    String address;
}
Suppose two UI instances pull up Employee(1) where id=1. The first client edits the phone property of Employee(1); the second client edits the address property of Employee(1). When they submit their changes, both need to be persisted.
A possible solution would be to create an update function for each property:
public void updatePhone(Employee employee) {
    // right now I am synchronizing _employeeUpdateLock
    // synchronize instance of Employee won't work
    synchronized (something) {
        // update phone
    }
}

// a similar function for address
This approach unfortunately doesn't scale well; the API needs to constantly align itself to the properties. Note that
public void update(Employee employee) { ... }
won't work because the function can't tell which property the client intends to change, unless a copy of the original object can be pulled up within the update function.
Hibernate provides a mechanism to lock a row in the database. This doesn't scale well either.
Perhaps the solution depends on how frequently a row is expected to be modified. For low-frequency modifications, synchronization and locks are fine. For high-frequency modifications, a copy of the row taken at retrieval time can be used to figure out which properties were updated.
I am hoping to find a better paradigm to solve this problem. Thanks.
I don't really understand your intended architecture. Several clients are to share one instance of the same data object? How is that even possible - short of using some kind of remote object model (which is considered a bad thing nowadays)?
I have a singleton class AppSetting in an ASP.NET app where I need to check a value and optionally update it. I know I need to use a locking mechanism to prevent multi-threading issues, but can someone verify that the following is a valid approach?
private static void ValidateEncryptionKey()
{
    if (AppSetting.Instance.EncryptionKey.Equals(Constants.ENCRYPTION_KEY, StringComparison.Ordinal))
    {
        lock (AppSetting.Instance)
        {
            if (AppSetting.Instance.EncryptionKey.Equals(Constants.ENCRYPTION_KEY, StringComparison.Ordinal))
            {
                AppSetting.Instance.EncryptionKey = GenerateNewEncryptionKey();
                AppSetting.Instance.Save();
            }
        }
    }
}
I have also seen examples where you lock on a private field in the current class, but I think the above approach is more intuitive.
Thanks!
Intuitive, maybe, but the reason those examples lock on a private field is to ensure that no other piece of code in the application can take the same lock in a way that deadlocks the application, which is always good defensive practice.
If it's a small application and you're the only programmer working on it, you can probably get away with locking on a public field/property (which I presume AppSetting.Instance is?), but in any other circumstances I'd strongly recommend you go the private-field route. It will save you a whole lot of debugging time in the future when someone else (or you, having forgotten the implementation details of this bit) takes a lock on AppSetting.Instance somewhere distant in the code and everything starts crashing.
I'd also suggest you lose the outermost if. Taking a lock isn't free, sure, but it's a lot faster than doing a string comparison, especially since you need to do it a second time inside the lock anyway.
So, something like:
private readonly object _instanceLock = new object();

private static void ValidateEncryptionKey()
{
    lock (AppSetting.Instance._instanceLock)
    {
        if (AppSetting.Instance.EncryptionKey.Equals(Constants.ENCRYPTION_KEY, StringComparison.Ordinal))
        {
            AppSetting.Instance.EncryptionKey = GenerateNewEncryptionKey();
            AppSetting.Instance.Save();
        }
    }
}
An additional refinement, depending on your requirements for keeping the EncryptionKey consistent with the rest of the state in AppSetting.Instance, would be to use a separate private lock object for the EncryptionKey and any related fields, rather than locking the entire instance every time.
The setup
Some of the "old old old" tables of our database use an exotic primary key generation scheme [1] and I'm trying to overlay this part of the database with NHibernate. This generation scheme is mostly hidden away in a stored procedure called, say, 'ShootMeInTheFace.GetNextSeededId'.
I have written an IIdentifierGenerator that calls this stored proc:
public class LegacyIdentityGenerator : IIdentifierGenerator, IConfigurable
{
    // ... snip ...

    public object Generate(ISessionImplementor session, object obj)
    {
        var connection = session.Connection;
        using (var command = connection.CreateCommand())
        {
            SqlParameter param;

            session.ConnectionManager.Transaction.Enlist(command);

            command.CommandText = "ShootMeInTheFace.GetNextSeededId";
            command.CommandType = CommandType.StoredProcedure;

            param = command.CreateParameter() as SqlParameter;
            param.Direction = ParameterDirection.Input;
            param.ParameterName = "@sTableName";
            param.SqlDbType = SqlDbType.VarChar;
            param.Value = this.table;
            command.Parameters.Add(param);

            // ... snip ...

            command.ExecuteNonQuery();

            // ... snip ...

            return ((IDataParameter)command
                .Parameters["@sTrimmedNewId"]).Value as string;
        }
    }
}
The problem
I can map this in the XML mapping files and it works great, BUT....
It doesn't work when NHibernate tries to batch inserts, such as in a cascade, or when the session is not Flush()ed after every call to Save() on a transient entity that depends on this generator.
That's because NHibernate seems to be doing something like
for (each thing that I need to save)
{
[generate its id]
[add it to the batch]
}
[execute the sql in one big batch]
This doesn't work: because the generator asks the database every time, NHibernate just ends up getting the same ID generated multiple times, since it hasn't actually saved anything yet.
The other NHibernate generators, like IncrementGenerator, seem to get around this by asking the database for the seed value once and then incrementing the value in memory during subsequent calls in the same session. I would rather not do this in my implementation if I can avoid it, since all of the code that I need is sitting in the database already, just waiting for me to call it correctly.
Is there a way to make NHibernate actually issue the INSERT after each call to generate an ID for entities of a certain type? Fiddling with the batch-size settings doesn't seem to help.
Do you have any suggestions/other workarounds besides re-implementing the generation code in memory or bolting on some triggers to the legacy database? I guess I could always treat these as "assigned" generators and try to hide that fact somehow within the guts of the domain model....
Thanks for any advice.
The update: 2 months later
It was suggested in the answers below that I use an IPreInsertEventListener to implement this functionality. While this sounds reasonable, there were a few problems with this.
The first problem was that setting the id of an entity to the AssignedGenerator and then not actually assigning anything in code (since I was expecting my new IPreInsertEventListener implementation to do the work) resulted in an exception being thrown by the AssignedGenerator, since its Generate() method essentially does nothing but check to make sure that the id is not null, throwing an exception otherwise. This is worked around easily enough by creating my own IIdentifierGenerator that is like AssignedGenerator without the exception.
The second problem was that returning null from my new IIdentifierGenerator (the one I wrote to overcome the problems with the AssignedGenerator) resulted in the innards of NHibernate throwing an exception, complaining that a null id was generated. Okay, fine: I changed my IIdentifierGenerator to return a sentinel string value, say, "NOT-REALLY-THE-REAL-ID", knowing that my IPreInsertEventListener would replace it with the correct value.
The third problem, and the ultimate deal-breaker, was that IPreInsertEventListener runs so late in the process that you need to update both the actual entity object as well as an array of state values that NHibernate uses. Typically this is not a problem and you can just follow Ayende's example. But there are three issues with the id field relating to the IPreInsertEventListeners:
The property is not in the @event.State array but instead in its own Id property.
The Id property does not have a public set accessor.
Updating only the entity but not the Id property results in the "NOT-REALLY-THE-REAL-ID" sentinel value being passed through to the database since the IPreInsertEventListener was unable to insert in the right places.
So my choice at this point was to use reflection to get at that NHibernate property, or to really sit down and say "look, the tool just wasn't meant to be used this way."
So I went back to my original IIdentifierGenerator and made it work for lazy flushes: it gets the high value from the database on the first call, and then I re-implemented the ID generation function in C# for subsequent calls, modeling this after the Increment generator:
private string lastGenerated;

public object Generate(ISessionImplementor session, object obj)
{
    string identity;

    if (this.lastGenerated == null)
    {
        identity = GetTheValueFromTheDatabase();
    }
    else
    {
        identity = GenerateTheNextValueInCode();
    }

    this.lastGenerated = identity;
    return identity;
}
This seems to work fine for a while, but like the increment generator, we might as well call it the TimeBombGenerator. If there are multiple worker processes executing this code in non-serializable transactions, or if there are multiple entities mapped to the same database table (it's an old database, it happened), then we will get multiple instances of this generator with the same lastGenerated seed value, resulting in duplicate identities.
##$##$#.
My solution at this point was to make the generator cache a dictionary of WeakReferences to ISessions and their lastGenerated values. This way, the lastGenerated is effectively local to the lifetime of a particular ISession, not the lifetime of the IIdentifierGenerator, and because I'm holding WeakReferences and culling them out at the beginning of each Generate() call, this won't explode in memory consumption. And since each ISession is going to hit the database table on its first call, we'll get the necessary row locks (assuming we're in a transaction) we need to prevent duplicate identities from happening (and if they do, such as from a phantom row, only the ISession needs to be thrown away, not the entire process).
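A sketch of that per-session cache, with ConditionalWeakTable swapped in for the hand-culled WeakReference dictionary (same lifetime semantics: entries die with their ISession); GetTheValueFromTheDatabase and GenerateTheNextValueInCode are the placeholders from the snippet above:
using System.Runtime.CompilerServices;
using NHibernate.Engine;

public class SessionScopedLegacyGenerator // illustrative name
{
    // lastGenerated, keyed by session: entries vanish when the ISession
    // is garbage collected, so no manual WeakReference culling is needed.
    private static readonly ConditionalWeakTable<ISessionImplementor, StrongBox<string>> cache =
        new ConditionalWeakTable<ISessionImplementor, StrongBox<string>>();

    public object Generate(ISessionImplementor session, object obj)
    {
        var box = cache.GetValue(session, s => new StrongBox<string>(null));
        box.Value = box.Value == null
            ? GetTheValueFromTheDatabase()  // first call in this session hits the DB (and takes the row locks)
            : GenerateTheNextValueInCode(); // later calls increment in memory
        return box.Value;
    }

    private string GetTheValueFromTheDatabase() { /* stored procedure call */ return null; }
    private string GenerateTheNextValueInCode() { /* the re-implemented increment */ return null; }
}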
It is ugly, but more feasible than changing the primary key scheme of a 10-year-old database. FWIW.
[1] If you care to know about the ID generation: you take a substring(len - 2) of all of the values currently in the PK column, cast them to integers and find the max, add one to that number, add up all of that number's digits, and append the sum of those digits as a checksum. (If the database has one row containing "1000001", then we would get max 10000, +1 equals 10001, the checksum is 02, and the resulting new PK is "1000102".) Don't ask me why.
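In C#, the scheme itself looks something like this (an illustrative helper, not the actual stored procedure):
using System.Collections.Generic;
using System.Linq;

static class LegacyIdMath // illustrative
{
    // Strip the 2-digit checksum, find the max seed, add one, then append
    // the digit sum of the new seed: "1000001" -> seed 10000 -> "1000102".
    public static string Next(IEnumerable<string> existingIds)
    {
        int next = existingIds
            .Select(id => int.Parse(id.Substring(0, id.Length - 2)))
            .Max() + 1;
        int checksum = next.ToString().Sum(c => c - '0');
        return $"{next}{checksum:D2}";
    }
}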
A potential workaround is to generate and assign the ID in an event listener rather than using an IIdentifierGenerator implementation. The listener should implement IPreInsertEventListener and assign the ID in OnPreInsert.
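A minimal sketch of that listener (the entity type and helper are hypothetical; see the question's update above for why the Id property specifically resists this approach):
using NHibernate.Event;

public class LegacyIdAssignmentListener : IPreInsertEventListener // sketch
{
    public bool OnPreInsert(PreInsertEvent @event)
    {
        if (@event.Entity is LegacySale sale) // hypothetical entity type
        {
            sale.Id = GetNextSeededId(@event); // call the stored procedure here
            // Ordinary properties would also need mirroring into @event.State
            // (via @event.Persister.PropertyNames); the id is not in that array.
        }
        return false; // false = do not veto the insert
    }

    private static string GetNextSeededId(PreInsertEvent @event)
    {
        // execute ShootMeInTheFace.GetNextSeededId over @event.Session.Connection
        return null;
    }
}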
Why don't you just make the private string lastGenerated field static?