Multiple DbSet/entity modifications with a single call to SaveChanges()

I am working on a .NET Core Web API that uses EF Core 5.0.2 to interact with an Azure SQL database.
I have different repository methods where I interact with the DbContext to add/edit/delete records for different DbSets.
For example:
UserRepository.AddUser(userdata);
Implementation of AddUser is like this,
await ourDbContext.UserTable.AddAsync(userdata);
So in the user service method, I am calling different repository methods sequentially, and none of those methods calls ourDbContext.SaveChangesAsync() individually. A single call to SaveChangesAsync() is made after all the repository method calls, acting as a unit-of-work pattern that groups all the changes into a single transaction.
Example:
UserRepository.AddUser(userdata);
ActivityRepository.AddActivity("New User got added");
await ourDbContext.SaveChangesAsync();
So my question is: if saving changes to any of the tables/entities fails, will the changes to the previously successful tables be rolled back?
For example, suppose this operation
UserRepository.AddUser(userdata);
was successful and the new user record was added to the User table.
But this was not successful:
ActivityRepository.AddActivity("New User got added");
So no activity record was added to the Activity table.
Will SaveChangesAsync() handle this situation automatically and roll back the new User table changes as well?
If not, are we supposed to wrap the above code in a transaction scope? Or what is the recommended way to do it?
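For illustration, this is the kind of wrapping I have in mind; it is only a sketch, and whether it is needed at all is exactly my question (TransactionScope is System.Transactions.TransactionScope, and async flow must be enabled explicitly):

using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    UserRepository.AddUser(userdata);
    ActivityRepository.AddActivity("New User got added");
    await ourDbContext.SaveChangesAsync();
    scope.Complete(); // without this, everything is rolled back on dispose
}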

Briefly how DbContext's Change Tracker works:
You load entities: the ChangeTracker remembers the current values of all loaded entities (unless you use AsNoTracking()).
You modify the loaded entities, delete some, add new ones.
You call SaveChanges: the ChangeTracker works out which objects have changed since they were loaded by comparing them with the previous values.
The DML SQL is generated, and everything is saved in one SQL statement, or in several statements inside a transaction.
So, if your repositories share a single DbContext instance, you do not need to worry about rolling back: nothing is written until SaveChanges() is called, so to abort you simply do not call it. To restart the process, however, you have to recreate the DbContext, because it still contains tracked state that is no longer needed.
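A minimal sketch of the point above, reusing the question's names (ActivityTable and activityData are assumed here for illustration):

ourDbContext.UserTable.Add(userdata);         // tracked as Added, nothing sent to SQL yet
ourDbContext.ActivityTable.Add(activityData); // tracked as Added, nothing sent to SQL yet

try
{
    // EF Core wraps all statements generated by one SaveChangesAsync call
    // in a single implicit transaction: if the activity INSERT fails,
    // the user INSERT is rolled back as well.
    await ourDbContext.SaveChangesAsync();
}
catch (DbUpdateException)
{
    // Nothing was written; the context still tracks the pending changes.
}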

Related

Prevent entry of GemFire cache being accessed by more than one request

I have an application using Spring Boot, GemFire and MySQL. The Spring Boot application serves as a REST API. I want to "lock" a cache entry so that only one request sent to the REST API can access a certain entry in GemFire at a time; others cannot do CRUD on that entry until the entry's owner releases possession. I have two approaches as of now.
Approach 1 - Create a GemFire function which performs a lock/unlock on the entry when invoked by the REST API (at different times), using org.apache.geode.cache.Region.getDistributedLock.
Approach 2 - Create a region (e.g. Lock) where an entry is created when an entry of the target region (e.g. Customer) is accessed for the first time. When a second request wants to access the same entry, the REST API checks the region Lock first: it retrieves and returns the entry from region Customer only if the key does not exist in region Lock; otherwise, no entry is returned. Once the first requester finishes, the REST API removes the entry from region Lock.
I am wondering if there are any alternatives besides these two options.
If you want a more space-efficient solution, you could add a boolean field to the value to indicate whether it is locked. You can then use region.replace(K,V,V) to efficiently set the "lock" on the entry as well. Note, though, that this leaks your locking concerns into your business objects.

NHibernate: Is it possible to throw an exception if Save() is called without beginning a transaction?

I was wondering if we can somehow extend NHibernate to throw an exception if a piece of code attempts to save an object to the database without beginning a transaction. Since beginning a transaction is a requirement for Save() to work properly, I can't see a reason for a programmer to call Save() without beginning a transaction in the first place.
The solution is not in throwing an exception. It is about keeping us (the developers on the project) aware of what we are doing and where.
1) Project-shared approach. Firstly, explain to the team how the architecture of the application works. All team members should know that the Unit of Work or Session-per-Request patterns are in place.
2) The FlushMode. Secondly, explain the concept of the session object. ISession.Save() or Update() is far away from executing a SQL INSERT or UPDATE; what counts is ISession.Flush(). And (based on the first step) we can decide when that Flush() happens.
I would suggest setting the FlushMode to Commit, or to Never (with an explicit Flush() call). Then, if a team member wants to execute any write command, there is one standard place and one common way (used across the project) to guide him/her.
The call Session.Flush() should/will/must be wrapped in a transaction.
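A minimal sketch of that setup (sessionFactory and entity are assumed names; the FlushMode values are NHibernate's):

using (var session = sessionFactory.OpenSession())
{
    session.FlushMode = FlushMode.Commit; // or FlushMode.Never plus an explicit Flush()

    using (var tx = session.BeginTransaction())
    {
        session.Save(entity); // only registers the INSERT with the session
        tx.Commit();          // the actual Flush() happens here, inside the transaction
    }
}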

Why is there a need for detaching and merging entities in an ORM?

The question is about Doctrine, but I think it can be extended to many ORMs.
Detach:
An entity is detached from an EntityManager and thus no longer managed
by invoking the EntityManager#detach($entity) method on it or by
cascading the detach operation to it. Changes made to the detached
entity, if any (including removal of the entity), will not be
synchronized to the database after the entity has been detached.
Merge:
Merging entities refers to the merging of (usually detached) entities
into the context of an EntityManager so that they become managed
again. To merge the state of an entity into an EntityManager use the
EntityManager#merge($entity) method. The state of the passed entity
will be merged into a managed copy of this entity and this copy will
subsequently be returned.
I understand (almost) how this works, but the question is: why would one need detaching/merging entities? Can you give me an example/scenario where these two operations are used/needed?
When should I detach an entity?
Detaching an entity from an EM (EntityManager) is widely used when you deal with more than one EM and want to avoid concurrency conflicts, for example:
$user= $em->find('models\User', 1);
$user->setName('Foo');
// You can not remove this user,
// because it still attached to the first Entity Manager
$em2->remove($user);
$em2->flush();
You cannot take control of the $user object with $em2, because it still belongs to the $em that initially loaded $user from the database. So how do we solve the problem above? You need to detach the object from the original $em first:
$user= $em->find('models\User', 1);
$user->setName('Foo');
$em->detach($user);
$em2->remove($user);
$em2->flush();
When should I use the merge function?
Basically when you want to update an entity:
$user= $em->find('models\User', 1);
$user->setName('Foo');
$em->merge($user);
$em->flush();
The EM will compare the $user in the database with the $user in memory. Once the EM recognizes the changed fields, it updates only those and keeps the others.
The flush method triggers a commit, and the user name will be updated in the database.
You would need to detach an entity when dealing with issues of concurrency.
Suppose you are using an asynchronous API that makes callbacks to your project. When you issue the API call along with callback instruction, you might still be managing the entity that is affected by the callback, and therefore overwrite the changes made by the callback.
You can also detach an entity when you have permanent data in your database, but your code modifies those entities depending on the user account.
For example, in a browser game that has characters and attacks to fight with: AttackOne used by "UserFoo" (lvl 90) is modified by better bonuses than when used by "UserBarr" (lvl 20), but in our database AttackOne is the same attack the whole time.

Get a number of resources asynchronously and "asynchronously" save them to a database. Which pattern is good to use? (AFNetworking, Core Data)

I need to populate my map with annotations. Each annotation has corresponding Place resource that is being fetched from remote server. Each Place has associated Category - it is fetched from the server too as a separate resource.
Let's assume that to populate a given region I need to fetch 100 places, each belonging to one of 20 categories (actually there are many more of them).
I use AFNetworking to fetch both kinds of resources. I try to cache both places and categories for offline use, so before the annotations are displayed on the map, I write the fetched resources to the Core Data tables.
Each place retrieves its associated category resource on demand, and I need to write both the place to the 'places' table and the category to the 'categories' table.
Because the fetching is done asynchronously, when writing a particular category to its table I can't know whether another place "thread" is attempting to write the same associated category to the 'categories' table.
So, the question is: what is the pattern for working with Core Data tables when they need to be populated with information retrieved asynchronously? Specifically, how can any given thread that is going to write a category know that another one is already trying to do that?
UPDATE 1: My current problem is that I am getting duplicate categories. My guess is that each category being written is not aware of the parallel write of the same category.
UPDATE 2: The most simple description of my case is the following:
I create a new Category entity with some fields in one thread, and in the meantime, in another thread, I create exactly the same Category entity with the same fields, intended to be the same Category object as in the first thread.
One thread wins by calling [managedObjectContext save:&error], but before the actual record appears in the persistent store, the second thread calls save too. The question is: how do I prevent the duplication of records in the 'categories' table?
UPDATE 3: I am considering both variants of using managed object contexts: 1) reusing one shared MOC instance across all threads, 2) instantiating a new MOC on each thread.
Thanks!
The "official" answer is going to be something about using an NSOperationQueue and/or taking manual steps to ensure that all your accesses to the NSManagedObjectContext occur on the same thread that created the context. There are a number of references and tutorials that you can follow to implement this approach.
As an alternative, there's a thread-safe Core Data extension on github that will do this for you. If you use it, it will automatically synchronize your database operations so that you don't have to worry about whether or not another thread is doing something with the context. You can just insert things as they come in, and the framework will ensure that your operations are translated into something that won't make Core Data explode.
Full disclosure: I built the github project.

How does one gracefully merge object graphs after NHibernate StaleObjectStateException?

We are trying to combine objects after a StaleObjectStateException has been thrown to save a merged copy.
Here's our environmental situation:
Multi-user system
WPF Desktop application, SQL Server 2008 database
NHibernate 3.1.0.4000, FluentNHibernate 1.2.0.712
Global, long-running NHibernate sessions [for the moment. We understand session-per-presenter is the recommended pattern, but do not have time in our project schedule to convert at present.]
Top-down saves and property navigation (that is to say we save the top-level object (herein called Parent) in our domain graph)
.Cascade.AllDeleteOrphan() used in most cases.
Users exclusively own some objects in the domain graph, but share ownership of the Parent.
Navigation properties on Children objects do not exist.
All classes have numeric ID and numeric Version fields.
Use case:
User 1 starts application and opens Parent.
User 2 starts application and opens Parent.
User 2 adds a child (herein C2).
User 2 saves Parent.
User 1 adds a child (herein C1).
User 1 saves Parent.
User 1 receives a StaleObjectStateException (and rightly so)
We want to gracefully handle the exception.
Because the users share ownership of the parent, User 1 should be able to save successfully, and save the Parent with both his new child, and User 2's child.
When SOSE is thrown, according to Ayende (http://msdn.microsoft.com/en-us/magazine/ee819139.aspx):
your session and its loaded entities are toast, because with NHibernate, an exception thrown
from a session moves that session into an undefined state. You can no longer use that session
or any loaded entities
C1 has already been assigned an ID and Version # by the now-not-useful session. (I wish it had not been.)
How do we combine the use of ISession.Merge() and ISession.Refresh() to get a newly saved Parent that has both C1 and C2 ?
We have tried a number of arcane permutations, none of which fully works.
Usually we get either a "row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)" error or an actual ID collision at the ODBC level.
Our theory, at the moment:
Reset version numbers on C1 (to prevent "unsaved-value mapping was incorrect")
Get a new session
newSession.Refresh(C1);
newParent = newSession.QueryOver[...]
newParent.Add(C1);
newSession.SaveOrUpdate(newParent)
However, all the documentation suggests that newSession.Merge is supposed to be sufficient.
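For reference, the bare Merge-based attempt looks like this (a sketch only; sessionFactory and staleParent stand in for our actual objects):

using (var newSession = sessionFactory.OpenSession())
using (var tx = newSession.BeginTransaction())
{
    // Merge returns a managed copy attached to newSession;
    // the original staleParent instance stays detached.
    var managedParent = newSession.Merge(staleParent);
    tx.Commit();
}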
Other posts used as research:
Fluent NHibernate Newbie: Row was updated or deleted by another transaction
Is there an alternative to ISession.Merge() that doesn't throw when using optimistic locking?
StaleObjectstateException row was updated or deleted by
How I can tell NHibernate to save only changed properties
Hibernate (JPA): how to handle StaleObjectStateException when several object has been modified and commited (Java, but relevant, I think)
Because the users share ownership of the parent, User 1 should be able to save successfully, and save the Parent with both his new child, and User 2's child.
Why don't you just disable optimistic locking on the child collection? Then anyone can add children, and it won't increase the version of the parent.
Otherwise, here is the solution my current project uses for all recoverable exceptions a session could throw (e.g. connection to DB lost, foreign key violated, ...):
Before calling session.Flush() the session is serialized to a MemoryStream.
If session.Flush() or transaction.Commit() throws an exception that is recoverable, the original session is disposed and the saved one is deserialized.
The calling screen then gets the information that the session was recovered after an exception, and calls the same queries again that were called when the screen was opened the first time. And because all the modified entities are still in the recovered session, the user now has the state from just before he pressed save.
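A rough sketch of that save/restore cycle (all names are assumptions; BinaryFormatter is System.Runtime.Serialization.Formatters.Binary.BinaryFormatter, and note that an NHibernate session can only be serialized while it is disconnected from the database):

byte[] snapshot;
var formatter = new BinaryFormatter();

using (var ms = new MemoryStream())
{
    formatter.Serialize(ms, session); // snapshot taken just before Flush()
    snapshot = ms.ToArray();
}

try
{
    using (var tx = session.BeginTransaction())
    {
        session.Flush();
        tx.Commit();
    }
}
catch (StaleObjectStateException)
{
    session.Dispose(); // the faulted session is in an undefined state

    using (var ms = new MemoryStream(snapshot))
        session = (ISession)formatter.Deserialize(ms);

    // now re-run the queries the screen issued when it was first opened
}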