NHibernate new session with transaction in existing transaction - nhibernate

Can this code cause problems? I found it in a project and don't know whether it could be the source of some crazy bugs (deadlocks, timeouts in the DB, ...). Code like this is executed concurrently many times in the program, even across threads.
Thanks a lot
class first {
    // "session" is an already-open outer ISession (injected or opened elsewhere)
    void doSomething() {
        using (ITransaction transaction = session.BeginTransaction()) {
            var foo = new second();
            foo.doInNewTransaction(); // inner transaction in new session
            transaction.Commit();
        }
    }
}

class second {
    void doInNewTransaction() {
        using (ISession session = sessionFactory.OpenSession()) {
            using (ITransaction transaction = session.BeginTransaction()) {
                // do something in the database
                transaction.Commit();
            }
        }
    }
}

This should be fine. I'm sure I have done stuff like this in the past. The only thing that you need to be aware of is that if you modify an object in the inner session then these changes will not automatically be reflected in the outer session if the same object has already been loaded.
Having said that, if you do not need to do this, I would avoid it. Normally I would recommend AOP-based transaction management when using NHibernate. That would allow your inner component to easily join the transaction from the outer component. However, in order to do this you need to be using a DI container that supports it, for example Spring.NET or Castle.
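As a minimal sketch of what "joining the outer transaction" means in practice (this is not the AOP wiring itself, which the container would provide), the inner component simply receives and reuses the caller's session instead of opening its own:

class second {
    private readonly ISession session;

    // the same ISession the outer component is using, handed in by the
    // caller (an AOP facility or DI container would do this wiring for you)
    public second(ISession session) {
        this.session = session;
    }

    public void doInExistingTransaction() {
        // do something in the database with the shared session; the work is
        // committed (or rolled back) by the transaction the caller started
    }
}

With something like Spring.NET's declarative transactions or Castle's transaction facility, the idea is that an interceptor opens the session and transaction once per operation and every component resolved within that scope participates in it.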

Related

_context.SaveChanges() works but await _context.SaveChangesAsync() doesn't

I'm struggling to understand something. I have a .NET Core 2.2 Web API with a MySQL 8 database, and I'm using the Pomelo library to connect to MySQL Server.
I have a PUT action method that looks like this:
// PUT: api/Persons/5
[HttpPut("{id}")]
public async Task<IActionResult> PutPerson([FromRoute] int id, Person person)
{
    if (id != person.Id)
    {
        return BadRequest();
    }

    _context.Entry(person).State = EntityState.Modified;

    try
    {
        _context.SaveChanges(); // Works
        // await _context.SaveChangesAsync(); // Doesn't work
    }
    catch (DbUpdateConcurrencyException)
    {
        if (!PersonExists(id))
        {
            return NotFound();
        }
        else
        {
            throw;
        }
    }

    return NoContent();
}
As per my comments in the code snippet above, when I call _context.SaveChanges(), it works (i.e. it updates the relevant record in the MySQL database, and returns a 1) but when I call await _context.SaveChangesAsync(), it doesn't work (it does not update the record, and it returns a 0). It's not throwing an exception or anything - it just doesn't update the record.
Any ideas?
As I said in my comment above, EF Core has no true sync methods. The sync methods (e.g. SaveChanges) merely block on the async methods (e.g. SaveChangesAsync). As such, it's impossible for SaveChanges to work if SaveChangesAsync doesn't, since the former just proxies to the latter. There's some other issue at play here, which is not evident from the code you've provided.
However, the reason I'm writing this as an answer is that the way you're doing this is, in general, wrong, and I believe that by doing it right the problem may disappear. You should never, and I mean never, save an instance created from the request body directly into your database. This provides an attack vector that would allow a malicious user to alter your database in undesirable ways. You've covered that partially by checking that the id has not been modified, but a user could still alter things they should not be allowed to.
That security vulnerability aside, there's a practical reason not to do it this way. An API serves as an anti-corruption layer, but only if you decouple your entity from the object the client interacts with. When you use your entity directly, you're tightly coupling your database to your API layer, such that any change at the database level necessitates a new version of your API, and worse, provides no opportunity for deprecating the previous version. All clients must immediately update or their implementations will break. By exposing a DTO class instead to your client, the database can evolve independently of the API, as you can add any anti-corruption logic necessary to bridge the gap between the two.
Long and short, this is how your method should be structured:
// PUT: api/Persons/5
[HttpPut("{id}")]
public async Task<IActionResult> PutPerson([FromRoute] int id, PersonModel model)
{
    // not necessary if using `[ApiController]`
    if (!ModelState.IsValid)
        return BadRequest();

    var person = await _context.People.FindAsync(id);
    if (person == null)
        return NotFound();

    // map `model` onto `person`

    try
    {
        await _context.SaveChangesAsync();
    }
    catch (DbUpdateConcurrencyException)
    {
        // use an optimistic concurrency strategy from:
        // https://learn.microsoft.com/en-us/ef/core/saving/concurrency#resolving-concurrency-conflicts
    }

    return NoContent();
}
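The PersonModel DTO and the "map `model` onto `person`" step are left abstract above. A rough sketch of what they could look like (the property names here are invented for illustration, and a mapper such as AutoMapper could replace the hand-written method):

// DTO exposed to API clients: only the fields a client is allowed to change
public class PersonModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}

// hand-written mapping used at the "map `model` onto `person`" step
private static void MapToEntity(PersonModel model, Person person)
{
    person.FirstName = model.FirstName;
    person.LastName = model.LastName;
    person.Email = model.Email;
    // deliberately no person.Id assignment: the client cannot re-point the entity
}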
I wanted to keep the code straightforward, but for handling optimistic concurrency I'd actually recommend using the Polly exception-handling library. With it you can set up retry policies that keep trying to apply the update after the conflict has been resolved. Otherwise you'd need try/catch within try/catch within try/catch, and so on. Also, DbUpdateConcurrencyException is something you should always handle in some way, so re-throwing it makes no sense.
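For illustration only, a retry policy around the save might look roughly like this (requires the Polly package; MapToEntity is the hypothetical mapping helper from the sketch above, and the conflict handling is deliberately simplified):

// using Polly;
// retry the save up to 3 times when a concurrency conflict occurs
var retryPolicy = Policy
    .Handle<DbUpdateConcurrencyException>()
    .RetryAsync(3);

await retryPolicy.ExecuteAsync(async () =>
{
    // re-read the current database values so the next attempt starts from them
    await _context.Entry(person).ReloadAsync();

    // re-apply the client's changes on top of the fresh values, then try again
    MapToEntity(model, person);
    await _context.SaveChangesAsync();
});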
I'm truly sorry to anyone whose time I have wasted with this question. I figured out the problem, and it was a stupid mistake I made in my DbContext. I have an audit trail set up, so I am overriding SaveChangesAsync, OnBeforeSaveChanges and OnAfterSaveChanges. There was a bug in that code. However, I am not overriding SaveChanges, which is why that still worked. Sorry!

Why isn't this transaction isolated?

I have a few methods - a couple of calls to SQL Server and some business logic to generate a unique value. These methods are all contained inside a parent method:
GenerateUniqueValue()
{
    // 1. Call to db for last value
    // 2. Business logic to create new value
    // 3. Update db with new value created
}
I want the call to GenerateUniqueValue to be isolated, i.e. when two clients call it simultaneously, the second client must wait for the first one to finish.
Originally, I made my service a singleton; however, I have to anticipate future changes that may include load balancing, so I believe a singleton approach is out. Next I decided to try the transaction approach by decorating my service:
[ServiceBehavior(TransactionIsolationLevel = IsolationLevel.Serializable, TransactionTimeout = "00:00:30")]
And my GenerateUniqueValue with:
[OperationBehavior(TransactionScopeRequired = true)]
The problem is that a test of simultaneous hits to the service method results in an error:
"System.ServiceModel.ProtocolException: The transaction under which this method call was executing was asynchronously aborted."
Here is my client test code:
private static void Main(string[] args)
{
    List<Client> clients = new List<Client>();

    for (int i = 1; i < 20; i++)
    {
        clients.Add(new Client());
    }

    foreach (var client in clients)
    {
        Thread thread = new Thread(new ThreadStart(client.GenerateUniqueValue));
        thread.Start();
    }

    Console.ReadLine();
}
If the transaction is supposed to be isolated, why are multiple threads calling the method clashing with each other?
A transaction is for treating multiple actions as a single atomic action. So if you want to make the second thread wait for the first thread's completion, you have to deal with concurrency, not transactions.
Try using the ConcurrencyMode property of System.ServiceModel.ServiceBehaviorAttribute with the Single or Reentrant concurrency mode. I guess that's what you are expecting.
[ServiceBehavior(ConcurrencyMode=ConcurrencyMode.Reentrant)]
I guess you got the exception because IsolationLevel.Serializable would allow the second thread to read the data but wouldn't let it change it. You are perhaps doing some change operation which is not permitted at this isolation level.
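A rough sketch of what that could look like on the service class (the service and contract names are made up; note that ConcurrencyMode only serializes calls within one service instance, which is why InstanceContextMode.Single is set here as well):

// using System.ServiceModel;
[ServiceBehavior(
    InstanceContextMode = InstanceContextMode.Single, // one service instance...
    ConcurrencyMode = ConcurrencyMode.Single)]        // ...handling one call at a time
public class UniqueValueService : IUniqueValueService
{
    [OperationBehavior(TransactionScopeRequired = true)]
    public string GenerateUniqueValue()
    {
        // 1. read last value, 2. apply business logic, 3. write the new value;
        // WCF serializes the calls, so two clients never run this concurrently
        return "new-value";
    }
}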

how to use explicit transactions without nested transactions

OK, so Ayende recommends always using a transaction, even for read operations.
But suppose I have the following scenario:
public Employee GetEmployeeByName(string name)
{
    using (ITransaction tx = CurrentSession.BeginTransaction())
    {
        return dao.GetEmployeeByName(name);
    }
}

public void SaveNewEmployee(Employee employee)
{
    using (ITransaction tx = CurrentSession.BeginTransaction())
    {
        if (GetEmployeeByName(employee.Name) != null)
        {
            throw new ArgumentException("employee with same name found");
        }

        CurrentSession.Save(employee);
    }
}
This would actually throw an exception, since NHibernate doesn't support nested transactions.
How can I get around this?
EDIT
This is an even better solution than the one I accepted...
Typically you would get around it by using a unit of work pattern, in which you start your transaction at the same time you open your session (that is, at the beginning of the unit of work) and commit it at the end of the unit of work.
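A minimal hand-rolled sketch of that idea (the UnitOfWork class here is hypothetical, not from any particular framework):

public sealed class UnitOfWork : IDisposable
{
    private readonly ISession session;
    private readonly ITransaction transaction;

    public UnitOfWork(ISessionFactory sessionFactory)
    {
        // one session and one transaction per unit of work
        session = sessionFactory.OpenSession();
        transaction = session.BeginTransaction();
    }

    public ISession Session
    {
        get { return session; }
    }

    public void Commit()
    {
        transaction.Commit();
    }

    public void Dispose()
    {
        // disposing an uncommitted transaction rolls it back
        transaction.Dispose();
        session.Dispose();
    }
}

GetEmployeeByName and SaveNewEmployee would then both use the session of the one active unit of work instead of starting their own transactions, and the caller decides where the unit of work begins and where it is committed.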

NHibernate FlushMode Auto Not Flushing Before Find

All right, I've seen some posts asking almost the same thing but the points were a little bit different.
This is a classic case: I'm saving/updating an entity and, within the SAME SESSION, I'm trying to get it back from the database (using criteria/find/enumerable/etc.) with FlushMode = Auto. The problem is: NHibernate isn't flushing the updates before querying, so I'm getting inconsistent data from the database.
"Fair enough", some people will say, as the documentation states:
This process, flush, occurs by default at the following points:
from some invocations of Find() or Enumerable()
from NHibernate.ITransaction.Commit()
from ISession.Flush()
The wording "some invocations" clearly says that NH takes no responsibility at all. IMO, though, we have a consistency problem here, because the doc also states that:
Except when you explicitly Flush(), there are absolutely no guarantees about when the Session executes the ADO.NET calls, only the order in which they are executed. However, NHibernate does guarantee that the ISession.Find(..) methods will never return stale data; nor will they return the wrong data.
So, if I'm using CreateQuery (Find replacement) and filtering for entities with property Value = 20, NH may NOT return entities with Value = 30, right? But that's what happens in fact, because the Flush is not happening automatically when it should.
public void FlushModeAutoTest()
{
    ISession session = _sessionFactory.OpenSession();
    session.FlushMode = FlushMode.Auto;

    MappedEntity entity = new MappedEntity() { Name = "Entity", Value = 20 };
    session.Save(entity);

    entity.Value = 30;
    session.SaveOrUpdate(entity);

    // RETURNS ONE ENTITY, WHEN IT SHOULD RETURN ZERO
    var list = session.CreateQuery("from MappedEntity where Value = 20").List<MappedEntity>();

    session.Flush();
    session.Close();
}
After all: am I getting it wrong, is it a bug, or is it simply an unpredictable feature, so everybody has to call Flush to be sure it works?
Thank you.
Filipe
I'm not very familiar with the NHibernate source code but this method from the ISession implementation in the 2.1.2.GA release may answer the question:
/// <summary>
/// detect in-memory changes, determine if the changes are to tables
/// named in the query and, if so, complete execution the flush
/// </summary>
/// <param name="querySpaces"></param>
/// <returns></returns>
private bool AutoFlushIfRequired(ISet<string> querySpaces)
{
    using (new SessionIdLoggingContext(SessionId))
    {
        CheckAndUpdateSessionStatus();

        if (!TransactionInProgress)
        {
            // do not auto-flush while outside a transaction
            return false;
        }

        AutoFlushEvent autoFlushEvent = new AutoFlushEvent(querySpaces, this);
        IAutoFlushEventListener[] autoFlushEventListener = listeners.AutoFlushEventListeners;
        for (int i = 0; i < autoFlushEventListener.Length; i++)
        {
            autoFlushEventListener[i].OnAutoFlush(autoFlushEvent);
        }

        return autoFlushEvent.FlushRequired;
    }
}
I take this to mean that auto flush will only guarantee consistency inside a transaction, which makes some sense. Try rewriting your test using a transaction; I'm very curious whether that will fix the problem.
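For example, a sketch of the test wrapped in a transaction (everything else left as in the original):

public void FlushModeAutoInsideTransactionTest()
{
    using (ISession session = _sessionFactory.OpenSession())
    using (ITransaction tx = session.BeginTransaction())
    {
        session.FlushMode = FlushMode.Auto;

        MappedEntity entity = new MappedEntity() { Name = "Entity", Value = 20 };
        session.Save(entity);

        entity.Value = 30;
        session.SaveOrUpdate(entity);

        // with a transaction in progress the auto-flush should run before the
        // query, so this is expected to return zero entities
        var list = session.CreateQuery("from MappedEntity where Value = 20").List<MappedEntity>();

        tx.Commit();
    }
}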
If you think about it, the query in your example must always go to the db. The session is not a complete cache of all records in the db, so there could be other entities with a value of 20 on disk. And since you didn't Commit() a transaction or Flush() the session, NH has no way to know which "view" you want to query (the DB or the session).
It seems like the "Best Practice" is to do everything (gets & sets) inside of explicit transactions:
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // execute code that uses the session
    tx.Commit();
}
See here for a bunch of details.
Managing and tuning Hibernate is an art form.
Why do you set an initial value of 20, save, and then change it to 30?
As a matter of practice, if you are going to modify the session and then query it, you might want to flush explicitly between those operations. You might take a slight performance hit (after all, you then don't let Hibernate optimize session flushing), but you can revisit that if it becomes a problem.
You quoted that "session.find methods will never return stale data". I would modify your code to use a find instead of CreateQuery to see if it works.

NHibernate - Is ITransaction.Commit really necessary?

I just started studying NHibernate 2 days ago, and I'm looking at a CRUD method that I've written based on a tutorial.
My insert method is:
using (ISession session = Contexto.OpenSession())
using (ITransaction transaction = session.BeginTransaction())
{
    session.Save(noticia);
    transaction.Commit();
    session.Close();
}
The complete code of "Contexto" is here: http://codepaste.net/mrnoo5
My question is: do I really need to use ITransaction transaction = session.BeginTransaction() and transaction.Commit()?
I'm asking because I've tried running the web app without those two lines and successfully inserted new records.
If possible, can someone also explain to me the purpose of ITransaction and the Commit method?
Thanks
This is the proper generic NHibernate usage pattern:
using (ISession session = sessionFactory.OpenSession())
using (ITransaction transaction = session.BeginTransaction())
{
    // Do the work here
    transaction.Commit();
}
All of that is required to ensure everything works as expected (unless you use additional infrastructure).
Closing the session or doing anything with the transaction besides committing is redundant, as the Dispose methods of the session and the transaction take care of cleanup, including rollback if there are errors.
It's important to note that doing anything with the session after an exception can result in unexpected behavior, which is another reason to limit explicit exception handling inside the block.
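To illustrate the rollback-on-Dispose point (a sketch; DoSomethingThatMightThrow is just a placeholder):

using (ISession session = sessionFactory.OpenSession())
using (ITransaction transaction = session.BeginTransaction())
{
    session.Save(noticia);

    DoSomethingThatMightThrow(); // if this throws, Commit is never reached...

    transaction.Commit();
}
// ...and disposing the uncommitted transaction rolls it back,
// so no explicit try/catch with Rollback is needed here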
ITransaction transaction = session.BeginTransaction() is not necessary, as you found out by testing.
But imagine this: what happens when your session throws an exception? How would you roll back your changes to the database?
The following is a quote from the NHibernate documentation:
A typical transaction should use the following idiom:
ISession sess = factory.OpenSession();
ITransaction tx;
try
{
    tx = sess.BeginTransaction();

    // do some work ...

    tx.Commit();
}
catch (Exception e)
{
    if (tx != null)
        tx.Rollback();
    throw;
}
finally
{
    sess.Close();
}
If the ISession throws an exception, the transaction must be rolled back
and the session discarded. The internal state of the ISession might not be
consistent with the database after the exception occurs.
NHibernate.ISessionFactory
Well, when you call Commit() on your transaction, it saves all the changes to the database.
Do I really need to use ITransaction transaction = session.BeginTransaction() and transaction.Commit()?
Yes, it is considered good practice to use transactions for everything, even simple reads.
I've tried running the web app without those two lines and successfully inserted new records.
That's because the session will commit the changes when it is disposed at the end of the using statement.
Anyway, this is how I would write the save:
using (ISession session = Contexto.OpenSession())
{
    using (ITransaction transaction = session.BeginTransaction())
    {
        try
        {
            session.Save(noticia);
            transaction.Commit();
        }
        catch (HibernateException)
        {
            transaction.Rollback();
            throw;
        }
    }
}