Atomic Read and Write with Entity Framework - sql

I have two different processes (on different machines) that read and update the same database record.
The rule I need to enforce is that the record must only be updated if its value is, let's say, "Initial". Also, after the commit I want to know whether it was actually updated by the current process or not (in case the value was something other than "Initial").
The code below does something like this:
var record = context.Records
    .Where(r => r.Id == id && r.State == "Initial")
    .FirstOrDefault();

if (record != null)
{
    record.State = "Second";
    context.SaveChanges();
}
Now, a couple of questions:
1) From looking at the code it appears that after the record is fetched with state "Initial", some other process could have updated it to "Second" before this process calls SaveChanges.
In that case we are unnecessarily overwriting the state with the same value. Is that what happens here?
2) If case 1 is not what happens, then Entity Framework may be translating the above into something like
update Record set State = 'Second' where Id = someid and State = 'Initial'
and running it as a single transaction, so that only one process writes the value. Is that what happens with EF's default TransactionScope?
In either case, how do I know for sure that the update was made by my process as opposed to some other process?
If these were in-memory objects, then (assuming multiple threads accessing the same data structure) the code would translate to something like:
Record rec = FindRecordById(id);
lock (someobject)
{
    if (rec.State == "Initial")
    {
        rec.State = "Second";
        // Now that I know I updated it, I can do some processing
    }
}
Thanks

In general there are 2 main concurrency patterns that can be used:
Pessimistic concurrency: You lock a row to prevent others from unexpectedly changing the data you are currently attempting to update. EF does not provide any native support for this type of concurrency pattern.
Optimistic concurrency: Citing from EF's documentation: "Optimistic concurrency involves optimistically attempting to save your entity to the database in the hope that the data there has not changed since the entity was loaded. If it turns out that the data has changed then an exception is thrown and you must resolve the conflict before attempting to save again." This pattern is supported by EF, and can be used rather simply.
Focusing on the optimistic concurrency option, which EF does support, let's compare how your example behaves with and without EF's optimistic concurrency control handling. I'll assume you are using SQL Server.
No concurrency control
Let's start with the following script in the database:
create table Record (
Id int identity not null primary key,
State varchar(50) not null
)
insert into Record (State) values ('Initial')
And here is the code with the DbContext and Record entity:
public class MyDbContext : DbContext
{
    static MyDbContext()
    {
        Database.SetInitializer<MyDbContext>(null);
    }

    public MyDbContext() : base(@"Server=localhost;Database=eftest;Trusted_Connection=True;") { }

    public DbSet<Record> Records { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
        modelBuilder.Configurations.Add(new Record.Configuration());
    }
}
public class Record
{
    public int Id { get; set; }
    public string State { get; set; }

    public class Configuration : EntityTypeConfiguration<Record>
    {
        public Configuration()
        {
            this.HasKey(t => t.Id);
            this.Property(t => t.State)
                .HasMaxLength(50)
                .IsRequired();
        }
    }
}
Now, let's test your concurrent update scenario with the following code:
static void Main(string[] args)
{
    using (var context = new MyDbContext())
    {
        var record = context.Records
            .Where(r => r.Id == 1 && r.State == "Initial")
            .Single();

        // Insert sneaky update from a different context.
        using (var sneakyContext = new MyDbContext())
        {
            var sneakyRecord = sneakyContext.Records
                .Where(r => r.Id == 1 && r.State == "Initial")
                .Single();

            sneakyRecord.State = "Sneaky Update";
            sneakyContext.SaveChanges();
        }

        // Attempt to update the row that has just been updated and committed by the sneaky context.
        record.State = "Second";
        context.SaveChanges();
    }
}
If you trace the SQL, you will see that the update statement looks like this:
UPDATE [dbo].[Record]
SET [State] = 'Second'
WHERE ([Id] = 1)
So, in effect, it doesn't care that another transaction sneaked in an update. It just blindly writes over whatever the other update did. And so, the final value of State in the database for that row is 'Second'.
Optimistic concurrency control
Let's adjust our initial SQL script to add a concurrency control column to our table:
create table Record (
Id int identity not null primary key,
State varchar(50) not null,
Concurrency timestamp not null -- add this row versioning column
)
insert into Record (State) values ('Initial')
Let's also adjust our Record entity class (the DbContext class stays the same):
public class Record
{
    public int Id { get; set; }
    public string State { get; set; }

    // Add this property.
    public byte[] Concurrency { get; set; }

    public class Configuration : EntityTypeConfiguration<Record>
    {
        public Configuration()
        {
            this.HasKey(t => t.Id);
            this.Property(t => t.State)
                .HasMaxLength(50)
                .IsRequired();

            // Add this config to tell EF that this
            // property/column should be used for
            // concurrency checking.
            this.Property(t => t.Concurrency)
                .IsRowVersion();
        }
    }
}
Now, if we try to re-run the same Main() method we used for the previous scenario, you will notice a change in how the update statement is generated and executed:
UPDATE [dbo].[Record]
SET [State] = 'Second'
WHERE (([Id] = 1) AND ([Concurrency] = <byte[]>))
SELECT [Concurrency]
FROM [dbo].[Record]
WHERE @@ROWCOUNT > 0 AND [Id] = 1
In particular, notice how EF automatically includes the column defined for concurrency control in the where clause of the update statement.
In this case, because there was in fact a concurrent update, EF detects it, and throws a DbUpdateConcurrencyException exception on this line:
context.SaveChanges();
And so, in this case, if you check the database, you'll see that the State value for the row in question will be 'Sneaky Update', because our 2nd update failed to pass the concurrency check.
Final thoughts
As you can see, there isn't much that needs to be done to activate automatic optimistic concurrency control in EF.
Where it gets tricky, though, is in how you handle the DbUpdateConcurrencyException when it is thrown. It will largely be up to you to decide what you want to do in that case, but for further guidance on the topic you'll find more information here: EF - Optimistic Concurrency Patterns.
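To tie this back to the original question ("how do I know for sure that the update was made from my process?"): with the row-version column in place, a SaveChanges() that returns without throwing means your process won the update. Below is a minimal sketch of one way to handle the exception, reusing the Record example from above; the exact recovery policy (retry, report, give up) is up to you.
try
{
    record.State = "Second";
    context.SaveChanges();
    // No exception: this process performed the update.
}
catch (DbUpdateConcurrencyException ex)
{
    // Another process changed the row after we read it; our update was rejected.
    var entry = ex.Entries.Single();
    entry.Reload(); // refresh the entity with the current database values
    // The entity now reflects what the other process wrote (e.g. State == "Sneaky Update"),
    // and we know our process did not perform the update.
}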

Related

Problem with simultaneous read and save to database

I have a problem with joining users to a game room.
Controller
[HttpPut("join/")]
public async Task<ActionResult<string>> JoinRoom([FromQuery] int leagueId, [FromQuery] int userId)
{
var data = await classicGameService.JoinRoom(leagueId, userId);
if (data == "")
{
return NotFound();
}
else
{
return Ok(data);
}
}
Service
public async Task<string> JoinRoom(int leaguePosition, int userId)
{
    var gameRoom = await context.ClassicGames
        .Include(x => x.League)
        .FirstOrDefaultAsync(x => x.League.Position == leaguePosition
                                  && x.User2 == 0
                                  && x.User2State == (int)EClassicGameUserState.OBSERVER);
    if (gameRoom is null)
    {
        return "";
    }
    else
    {
        gameRoom.User2 = userId;
        gameRoom.User2State = (int)EClassicGameUserState.STAGNATION;
        await context.SaveChangesAsync();
        return $"{gameRoom.Id},{gameRoom.User1}";
    }
}
When users send join requests simultaneously, they both get a successful join response.
This is a big problem for my game.
How can I make the server handle the first user's request first and only then the second user's?
I tried changing the methods to synchronous ones, but the problem was the same.
If your database supports row versioning, such as Timestamps within SQL Server, you can configure your entities to observe these and reject concurrent changes.
For example, to reproduce this kind of issue with an update statement, I have an entity called Game with Player1 and Player2 values, where Player2 is only meant to be set once. Concurrent access is a problem in web applications because two requests can come in simultaneously and both "capture" the data in the same effective state, making it look perfectly valid for both to attempt the update. To simulate this you can use the following code:
using (var context = new TestDbContext())
{
    var gameA = context.Games.SingleOrDefault(x => x.GameId == 1 && x.Player2 == null);

    using (var context2 = new TestDbContext())
    {
        var gameB = context2.Games.SingleOrDefault(x => x.GameId == 1 && x.Player2 == null);

        if (gameA != null)
            gameA.Player2 = "Roy";
        if (gameB != null)
            gameB.Player2 = "George";

        context.SaveChanges();
        context2.SaveChanges();
    }
}
In this example we use two separate DbContext instances representing our two simultaneous requests. Each loads the desired game, satisfied that Player2 is empty. We now have two object references, one tracked by each DbContext, and we tell both instances to set Player2's name. We then tell both contexts to SaveChanges(). The resulting value will be "George"; if we reverse the SaveChanges() call order, it would be "Roy". We don't want to allow the second call to update at all.
We cannot change the fact that both concurrent reads will get the game and be satisfied that Player2 has not been set, unless we do something drastic like locking the table or row when reading the Games and only unlocking it after saving/aborting (pessimistic locking). That would potentially lead to big issues with timeouts or deadlocks.
The alternative is optimistic locking. We update our table to include a Timestamp column (in this example named RowVersion), then configure that column in our EF entity:
public class Game
{
    [Key]
    public int GameId { get; set; }
    public string Player1 { get; set; }
    public string Player2 { get; set; }

    [Timestamp]
    public byte[] RowVersion { get; set; }
}
Now if you run the above code, without any changes at all, the first SaveChanges() call will succeed, while the second SaveChanges() will fail with a DbUpdateConcurrencyException which you will need to handle. Basically in your case you'd likely want to return to the client that their game selection failed, refresh the list, and they'd see that the game was no longer available.
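For the JoinRoom service from the question, that handling could look roughly like this. This is only a sketch, and it assumes a [Timestamp] row-version column has been added to the ClassicGames entity in the same way as for Game above:
public async Task<string> JoinRoom(int leaguePosition, int userId)
{
    var gameRoom = await context.ClassicGames
        .Include(x => x.League)
        .FirstOrDefaultAsync(x => x.League.Position == leaguePosition
                                  && x.User2 == 0
                                  && x.User2State == (int)EClassicGameUserState.OBSERVER);
    if (gameRoom is null)
        return "";

    gameRoom.User2 = userId;
    gameRoom.User2State = (int)EClassicGameUserState.STAGNATION;
    try
    {
        await context.SaveChangesAsync();
        return $"{gameRoom.Id},{gameRoom.User1}";
    }
    catch (DbUpdateConcurrencyException)
    {
        // Another request joined this room between our read and our save;
        // report it as unavailable so the controller returns NotFound.
        return "";
    }
}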
If your storage doesn't support optimistic concurrency then things get a bit trickier. You would need to build something like a marshal, where join requests are queued and performed by a single process responsible for updating player state. The initial call would return a status of something like "Joining" along with a queue ID; the user would see a spinner while their client kept polling with that queue ID for an update from the marshal. The marshal processes the requests on a first-come, first-served basis and evaluates the rules. When a game is empty and allows Player 2 to join, that queued job gets a "Join Successful" status, which comes back to the client on the next poll. The duplicate request is processed later, finds Player 2 already filled, and so that queued job gets a "Join Failed" response for its client on its next poll. (This serializes the join operation.)
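If you do go the marshal route, a very rough in-process sketch of the idea might look like the following. JoinMarshal and JoinRequest are invented names for illustration; TestDbContext and Games follow the earlier example. A single consumer drains the queue, so join requests are evaluated strictly one at a time:
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

public record JoinRequest(Guid QueueId, int GameId, string PlayerName);

public class JoinMarshal
{
    private readonly Channel<JoinRequest> _queue = Channel.CreateUnbounded<JoinRequest>();
    private readonly ConcurrentDictionary<Guid, string> _results = new();

    // Called by the join endpoint: enqueue the request and hand back an id to poll with.
    public Guid Enqueue(int gameId, string playerName)
    {
        var request = new JoinRequest(Guid.NewGuid(), gameId, playerName);
        _queue.Writer.TryWrite(request);
        return request.QueueId;
    }

    // Called by the polling endpoint.
    public string GetStatus(Guid queueId) =>
        _results.TryGetValue(queueId, out var status) ? status : "Joining";

    // Single consumer loop, e.g. hosted in a BackgroundService, so joins are serialized.
    public async Task ProcessAsync(CancellationToken ct)
    {
        await foreach (var request in _queue.Reader.ReadAllAsync(ct))
        {
            using var context = new TestDbContext();
            var game = context.Games.SingleOrDefault(
                x => x.GameId == request.GameId && x.Player2 == null);

            if (game == null)
            {
                _results[request.QueueId] = "Join Failed";   // slot already taken
            }
            else
            {
                game.Player2 = request.PlayerName;
                context.SaveChanges();
                _results[request.QueueId] = "Join Successful";
            }
        }
    }
}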

Exclude columns from INSERT [duplicate]

We have a field in our SQL Server database table which is autogenerated by SQL Server, the field is called CreatedTime.
We have mapped the whole database table to our datamodel in Entity Framework, thus also the field CreatedTime.
When we insert a new row in the database, via Entity Framework, we thus do not provide any value for CreatedTime.
This causes the insert to fail with the error:
SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM
So the question is: is there a way to exclude a particular field of the Entity data model from the Entity Framework insert statement, so that we don't get the above error?
We would like to keep the field CreatedTime in the Entity model, because we might want to access it later.
If using Fluent API:
using System.ComponentModel.DataAnnotations.Schema;
this.Property(t => t.CreatedTime)
.HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);
If using Annotations
[DatabaseGenerated(DatabaseGeneratedOption.Computed)]
public System.DateTime CreatedTime { get; set; }
I found a simple solution to the problem on this thread:
http://social.msdn.microsoft.com/Forums/en-US/adodotnetentityframework/thread/7db14342-b259-4973-ac09-93e183ae48bb
There Fernando Soto writes:
"If you go to the EDM designer click on the field in the table that is auto-generated by the database, right click on it and select Properties and
look at the properties windows click on StoreGeneratedPattern and set its value to Computed, I believe it will give you what you are looking for."
The above solution was super quick and easy and it seems to work.
Also thank you for your contributions guys, but the above solution seems to do the job.
Try using the NotMapped attribute on this property:
http://msdn.microsoft.com/en-us/library/system.componentmodel.dataannotations.schema.notmappedattribute.aspx
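For reference, a minimal sketch of that approach (MyTable is just a placeholder class name). Note that, unlike marking the column as Computed, [NotMapped] makes EF ignore the property entirely, so CreatedTime would no longer be read back from the database either, which may not be what you want here:
using System;
using System.ComponentModel.DataAnnotations.Schema;

public class MyTable
{
    public int Id { get; set; }

    [NotMapped] // EF excludes this property from INSERT, UPDATE and SELECT
    public DateTime CreatedTime { get; set; }
}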
There are two things you can do:
If you have access to the database, check whether the field has a default value. If it doesn't, you can set it to GETDATE(); the field will then be set correctly, and you don't have to add/update it through Entity Framework.
If you don't have access to the database, or don't want to make any changes there, you can alter the behavior of the Entity Data Model to automatically set the date. Simply extend your ObjectContext model.
public partial class MyEntities
{
    public override int SaveChanges()
    {
        var entityChangeSet = ChangeTracker.Entries<SomeEntity>();
        if (entityChangeSet != null)
        {
            foreach (DbEntityEntry<SomeEntity> entry in entityChangeSet)
            {
                switch (entry.State)
                {
                    case EntityState.Modified:
                        entry.Entity.LastModifiedDate = DateTime.UtcNow;
                        break;
                    case EntityState.Added:
                        entry.Entity.CreatedDate = DateTime.UtcNow;
                        break;
                }
            }
        }
        return base.SaveChanges();
    }
}
This way you don't have to add any information for those fields when you add or update an item, the model will do it for you. If you have multiple entities which need this behavior, you can create an interface and make the Entity classes inherit that:
public interface IHaveCreatedDate {
DateTime CreatedDate { get; set; }
}
public partial class MyEntity : IHaveCreatedDate {
//MyEntity already implements this!
}
Then all you need to do is change the call to the ChangeTracker:
var entityChangeSet = ChangeTracker.Entries<IHaveCreatedDate>();
Is CreatedTime nullable?
One possible workaround - if CreatedTime is NOT nullable:
DateTime sqlServerMinDateTime = new DateTime(1753, 1, 1); // SQL Server's minimum datetime (1753-01-01 00:00:00)
if(myEntity.CreatedTime < sqlServerMinDateTime)
{
myEntity.CreatedTime = sqlServerMinDateTime;
}
// do insert here
// ....
One possible workaround - if CreatedTime is nullable:
DateTime sqlServerMinDateTime = new DateTime(1753, 1, 1); // SQL Server's minimum datetime (1753-01-01 00:00:00)
if(myEntity.CreatedTime < sqlServerMinDateTime)
{
myEntity.CreatedTime = null;
}
// do insert here
// ....

edit only changed or mentioned values with entity framework core

I need to update only the fields mentioned in the PUT request body. The current issue is that all values not mentioned in the entity to update are set to null.
Below is my current update implementation in the generic repository.
public virtual void Update(T entity)
{
Context.Attach(entity);
Context.Entry(entity).State = EntityState.Modified;
}
You need two different steps. First you have to perform a patch operation. Description here
public IActionResult PatchEntity(int id, [FromBody] JsonPatchDocument<Entity> patchdoc)
{
    // Find expects the key value(s), not a predicate.
    var entity = dbContext.Entities.Find(id);
    patchdoc.ApplyTo(entity);
    dbContext.Update(entity);
    dbContext.SaveChanges();
    return Ok(entity);
}
Here is a method to perform partial update on DB (take a look at this question too):
public virtual void Update(T entity, params object[] keys)
{
    // Load the currently stored row, then copy the incoming values onto it;
    // EF Core will only mark the properties that actually differ as modified.
    var current = Context.Set<T>().Find(keys);
    Context.Entry(current).CurrentValues.SetValues(entity);
    Context.SaveChanges();
}
If you don't need to partially update the database record, you are fine with:
public virtual void Update(T entity)
{
Context.Update(entity); // entity is attached by default after select of entity
Context.SaveChanges();
}
What you could do is to get the entity before updating it:
Get your entity from your Context
Update the fields of your entity with the data from your model. You can use tools like Automapper to achieve this goal in a clean way.
Then call your Update method on the entity
Another way would be to check the state of each field such as in this answer.
EDIT Update point 2.
Hope it helps.
I finally figured it out without even changing the repository.
I just added a config within the AutoMapper config file to ignore any null values:
CreateMap<TeamDto, Team>().ForAllMembers(opts => opts.Condition((src, dest, srcMember) => srcMember != null));
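For context, a small hypothetical usage sketch of how that config plays out in an update (context.Teams, teamDto and mapper are placeholder names): the DTO is mapped onto the tracked entity, null DTO members are skipped by the condition above, and EF Core's change tracker then only updates the columns that actually changed.
var team = await context.Teams.FindAsync(id); // tracked entity with current values
mapper.Map(teamDto, team);                    // only non-null DTO members overwrite
await context.SaveChangesAsync();             // EF updates just the modified columns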

Nhibernate mapping at run time

I am developing a site that uses NHibernate. It works fine with static mapping, but the problem is that I need to apply this application to an existing database. So is there any way to map the classes at run time? I mean, the user provides the table and column names for the mapping. Thanks
From your question I understand that the POCO classes exist, but you don't know the table or column names at build time.
So, if you already had this class:
public class MyGenericClass
{
public virtual long Id { get; set; }
public virtual string Title { get; set; }
}
You could bind it to a table and columns at runtime:
string tableName;       // Set somewhere else by user input
string idColumnName;    // Set somewhere else by user input
string titleColumnName; // Set somewhere else by user input

var configuration = new NHibernate.Cfg.Configuration();
configuration.Configure();

var mapper = new NHibernate.Mapping.ByCode.ModelMapper();
mapper.Class<MyGenericClass>(
    classMapper =>
    {
        classMapper.Table(tableName);
        classMapper.Id(
            myGenericClass => myGenericClass.Id,
            idMapper =>
            {
                idMapper.Column(idColumnName);
                idMapper.Generator(Generators.Identity);
            }
        );
        classMapper.Property(c => c.Title,
            propertyMapper =>
            {
                propertyMapper.Column(titleColumnName);
            }
        );
    }
);

// Add the runtime-built mapping to the configuration before building the session factory.
configuration.AddMapping(mapper.CompileMappingForAllExplicitlyAddedEntities());

ISessionFactory sessionFactory = configuration.BuildSessionFactory();
ISession session = sessionFactory.OpenSession();

////////////////////////////////////////////////////////////////////
// Now we can run an SQL query over this newly specified table
//
IList<MyGenericClass> items = session.QueryOver<MyGenericClass>().List();
I don't think that is possible with NHibernate, but you could use a workaround.
You could use a view instead of a table for the NHibernate mapping.
And at runtime, you could create that view, or update it with the specific user mapping you need.
For example, you define a mapping in NHibernate to a view named ViewMapped with two columns, Name and Mail.
On the other hand, the user has a table with three columns: Name, SecondName, EMail.
You can create a view at runtime with the following select:
(SELECT Name + ' ' + SecondName as Name, EMail as Mail FROM tableName) AS ViewMapped
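A rough sketch of what creating that view at runtime could look like through NHibernate (tableName and sessionFactory are assumed to come from the user input and configuration shown in the other answer):
string createView =
    "CREATE VIEW ViewMapped AS " +
    "SELECT Name + ' ' + SecondName AS Name, EMail AS Mail " +
    "FROM " + tableName;

using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    session.CreateSQLQuery(createView).ExecuteUpdate();
    tx.Commit();
}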
I hope that helps you, or at least leads you to a solution.

NHibernate database versioning: object level schema and data upgrades

I would like to approach database versioning and automated upgrades in NHibernate from a different direction than most of the strategies proposed out there.
As each object is defined by an XML mapping, I would like to take size and checksum for each mapping file/ configuration and store that in a document database (raven or something) along with a potential custom update script. If no script is found, use the NHibernate DDL generator to update the object schema. This way I can detect changes, and if I need to make DML changes in addition to DDL, or perform a carefully ordered transformation, I can theoretically do so in a controlled, testable manner. This should also maintain a certain level of persistence-layer agnosticism, although I'd imagine the scripts would still necessarily be database system-specific.
The trick would be, generating the "old" mapping files from the database and comparing them to the current mapping files. I don't know if this is possible. I also don't know if I'm missing anything else that would make this strategy prohibitively impractical.
My question, then: how practical is this strategy, and why?
What I did to solve just that problem:
1. version the database in a table called SchemaVersion
2. query the table to see if the schema is up to date (the required version is stored in the DAL); if yes, go to step 6
3. get the update script with version == versionFromDb from resources/web services/...
4. run the script, which also updates SchemaVersion to the new version
5. go to step 2
6. run the app
To generate the scripts I have used two options:
- support one RDBMS: run SchemaUpdate to export into a file and add the DML statements manually
- support multiple RDBMS: use the NHibernate Table class to generate DDL at runtime to add/alter/delete tables, plus code which uses a session for the DML
Update:
"what method did you use to store the current version"
small example
something like this
public static class Constants
{
    public static readonly Version DatabaseSchemaVersion = new Version(1, 2, 3, 4);
}

public class DBMigration
{
    private IDictionary<Version, Action> _updates = new Dictionary<Version, Action>();
    private Configuration _config;
    private Dialect _dialect;
    private IList<Action<ISession>> _actions = new List<Action<ISession>>(16);
    private string _defaultCatalog;
    private string _defaultSchema;

    private void CreateTable(string name, Action<Table> configuretable)
    {
        var table = new Table(name);
        configuretable(table);
        string createTable = table.SqlCreateString(_dialect, _config.BuildMapping(), _defaultCatalog, _defaultSchema);
        _actions.Add(session => session.CreateSQLQuery(createTable).ExecuteUpdate());
    }

    private void UpdateVersionTo(Version version)
    {
        _actions.Add(session => { session.Get<SchemaVersion>(1).Value = version; session.Flush(); });
    }

    private void WithSession(Action<ISession> action)
    {
        _actions.Add(action);
    }

    public void Execute(Configuration config)
    {
        _actions.Clear();
        _defaultCatalog = config.Properties[NH.Environment.DefaultCatalog];
        _defaultSchema = config.Properties[NH.Environment.DefaultSchema];
        _config = config;
        _dialect = Dialect.GetDialect(config.Properties);

        using (var sf = _config.BuildSessionFactory())
        using (var session = sf.OpenSession())
        {
            Version dbVersion = session.Get<SchemaVersion>(1).Value;
            while (dbVersion < Constants.DatabaseSchemaVersion)
            {
                _actions.Clear();
                _updates[dbVersion].Invoke(); // collect the actions for this migration step, TODO: error handling

                // One transaction per migration step.
                using (var tx = session.BeginTransaction())
                {
                    foreach (var action in _actions)
                    {
                        action.Invoke(session);
                    }
                    tx.Commit();
                }

                session.Clear();
                dbVersion = session.Get<SchemaVersion>(1).Value;
            }
        }
    }

    public DBMigration()
    {
        _updates.Add(new Version(1, 0, 0, 0), UpdateFromVersion1);
        _updates.Add(new Version(1, 0, 1, 0), UpdateFromVersion2);
        ...
    }

    private void UpdateFromVersion1()
    {
        CreateTable("Users", table => table.AddColumn(...));
        WithSession(session => session.CreateSQLQuery("INSERT INTO ...").ExecuteUpdate());
        UpdateVersionTo(new Version(1, 0, 1, 0));
    }

    ...
}
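For completeness, the code above assumes a SchemaVersion entity along these lines (a hypothetical sketch; the answer does not show it). Row 1 holds the current schema version; since NHibernate cannot map System.Version directly, it would typically be stored as a string column and converted, e.g. via a custom user type:
public class SchemaVersion
{
    public virtual int Id { get; set; }        // always 1 in the example above
    public virtual Version Value { get; set; } // mapped via a custom IUserType or a backing string column
}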