UnitOfWork exception triggered by flush between deletes - EclipseLink

I am using EclipseLink with container managed entity managers. I have been attempting to determine the cause of a QueryException I sometimes see during delete. The exception description is:
The object MySubEntity is not from this UnitOfWork object space, but the parent session's.
Here's a simplified depiction of the model:
@Entity
public class MyEntity
{
    @OneToMany(targetEntity = MySubEntity.class,
               cascade = { CascadeType.ALL },
               orphanRemoval = true)
    @JoinColumn(name = "PARENT_ID")
    private List<MySubEntity> subEntities;
}
What I have discovered is that the following succeeds:
MyEntity firstEntity = entityManager.find(MyEntity.class, firstKey);
MyEntity secondEntity = entityManager.find(MyEntity.class, secondKey);
entityManager.remove(firstEntity);
//entityManager.flush();
entityManager.remove(secondEntity);
but if I uncomment the flush, this fails with the QueryException.
The code shown was only written to verify the problem; in practice the deletes don't occur so close together, and the flush is triggered implicitly by a read query.
I verified this behavior with both EclipseLink 2.5.1 and 2.6.2. Is this a known EclipseLink bug? I couldn't find one documented. Is this expected behavior?
I have tried calling UnitOfWork.validateObjectSpace() both before and after the first delete but validateObjectSpace() succeeds in both cases.

I encountered this issue in my own work and while looking for help I stumbled upon this question. I can't say for certain if my issue is the same but it is quite similar. My goal is to possibly help others who find this as well.
Let me first explain the slight differences in my model compared to the question. I do not think these differences should matter much; however, I am including them in case I am wrong (or it helps someone in the future). The relationship between MyEntity and MySubEntity is not a list but a one-to-one, and it is named accordingly.
@Entity
public class MyEntity
{
    @OneToOne(cascade = { CascadeType.REMOVE, CascadeType.PERSIST },
              orphanRemoval = true, fetch = FetchType.EAGER)
    @JoinColumn(name = "SUB_ENTITY_KEY")
    private MySubEntity subEntity;
}
Execution:
My execution is essentially the same as in the question: use the entity manager to remove multiple entities and flush. The flush is not required for my issue to occur (since the entity manager will eventually flush automatically); however, it does make the failure appear sooner.
Expected Results:
I expect both firstEntity and secondEntity to be removed, and their respective subEntity objects to be removed as well, since they are now orphans.
Actual Results:
The UnitOfWork exception described in the question. What I found is that EclipseLink is unable to properly handle the relationship on remove when multiple MyEntity objects are being removed. This doesn't seem to happen in every situation, but once you hit a case that confuses EclipseLink, it produces this exception consistently.
Solution:
Manually disassociate MyEntity and subEntity prior to removal. This can be done trivially with myEntity1.setSubEntity(null);
Even better is to do it in a @PreRemove callback on MyEntity so it is disassociated automatically.
Example:
@PreRemove
public void onRemove() {
    this.subEntity = null;
}
Platform Info:
EclipseLink v2.4.3
Oracle JRE 1.7.0_79
Weblogic 12.1.2

Related

How to save and then update same class instance during one request with NHibernate?

I'm relatively new to NHibernate and I've got a question about it.
I use this code snippet in my MVC project in Controller's method:
MyClass entity = new MyClass
{
    Foo = "bar"
};
_myRepository.Save(entity);
....
entity.Foo = "bar2";
_myRepository.Save(entity);
The first time, the entity is saved to the database successfully. But the second time, no request goes to the database at all. My Save method in the repository just does:
public void Save(T entity)
{
    _session.SaveOrUpdate(entity);
}
What should I do to be able to save and then update this entity during one request? If I add _session.Flush(); after saving the entity it works, but I'm not sure if it's the right thing to do.
Thanks
This is the expected behavior.
Changes are only saved on Flush
Flush may be called explicitly or implicitly (see 9.6. Flush)
When using an identity generator (not recommended), inserts are sent immediately, because that's the only way to return the ID.
You should be using transactions.
A couple of good sources: here and here.
Also, Summer of NHibernate is how I first started with NHibernate; it's a very good resource for learning the basics.
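As a rough sketch of the transaction advice above (MyClass and Foo come from the question; the _session variable and its setup are assumed), wrapping both operations in one transaction lets the later change be flushed at commit:

using (var tx = _session.BeginTransaction())
{
    var entity = new MyClass { Foo = "bar" };
    _session.SaveOrUpdate(entity);   // entity becomes persistent in this session

    entity.Foo = "bar2";             // later change to the same persistent instance

    tx.Commit();                     // flushes the session; the pending statements are sent here
}

Note that, as mentioned above, an identity-style generator still sends the INSERT immediately on Save; the later change to Foo is picked up at commit either way.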

NHibernate ISession.Save() - Why is this persisting my entities immediately?

I am creating a large number of entities with NHibernate, attaching them to my ISession, and then using a transaction to commit my changes to the database. Code sample is below:
ISession _context = SessionProvider.OpenSession();

//Create new entities
for (int i = 0; i < 100; i++)
{
    MyEntity entity = new MyEntity(i);
    //Attach new entity to the context
    _context.Save(entity);
}

//Persist all changes to the database
using (var tx = _context.BeginTransaction())
{
    //Flush the session
    tx.Commit();
}
I was under the impression that the line _context.Save() simply makes the ISession aware of the new entity, but that no changes are persisted to the database until I Flush the session via the line tx.Commit().
What I've observed though, is that the database gets a new entity every time I call _context.Save(). I end up with too many individual calls to the database as a result.
Does anyone know why ISession.Save() is automatically persisting changes? Have I misunderstood something about how NHibernate behaves? Thanks.
***EDIT - Just to clarify (in light of the two suggested answers) - my problem here is that the database IS getting updated as soon as I call _context.Save(). I don't expect this to happen. I expect nothing to be inserted into the database until I call tx.Commit(). Neither of the two suggested answers so far helps with this unfortunately.
Some good information on identity generators can be found here
Try:
using (ISession _context = SessionProvider.OpenSession())
using (var tx = _context.BeginTransaction())
{
    //Create new entities
    for (int i = 0; i < 100; i++)
    {
        MyEntity entity = new MyEntity(i);
        //Attach new entity to the context
        _context.Save(entity);
    }
    //Flush the session
    tx.Commit();
}
Which identity generator are you using? If you are using post-insert generators like MSSQL/MySQL's Identity or Oracle's sequence to generate the value of your Id fields, that is your problem.
From NHibernate POID Generators Revealed:
Post insert generators, as the name suggest, assigns the id's after the entity is stored in the database. A select statement is executed against database. They have many drawbacks, and in my opinion they must be used only on brownfield projects. Those generators are what WE DO NOT SUGGEST as NH Team.
Some of the drawbacks are the following:
Unit Of Work is broken with the use of those strategies. It doesn't matter if you're using FlushMode.Commit, each Save results in an insert statement against DB. As a best practice, we should defer insertions to the commit, but using a post insert generator makes it commit on save (which is what UoW doesn't do).
Those strategies nullify batcher, you can't take the advantage of sending multiple queries at once (as it must go to database at the time of Save).
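For illustration only (not from the original answer), this is roughly what the difference looks like in an hbm.xml mapping; the entity and column names are made up:

<!-- Post-insert generator: every Save must hit the database immediately -->
<id name="Id" column="ID">
    <generator class="identity" />
</id>

<!-- A client-side generator such as hilo lets NHibernate assign the id itself
     and defer the INSERT until the session is flushed -->
<id name="Id" column="ID">
    <generator class="hilo" />
</id>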
You can set your batch size in your configuration:
<add key="hibernate.batch_size" value="10" />
Or you can set it in code. And make sure you do your saves within a transaction scope.
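A minimal sketch of setting it in code (assumes you build the session factory yourself from an NHibernate Configuration object):

var cfg = new NHibernate.Cfg.Configuration();
cfg.Configure();                                              // reads hibernate.cfg.xml / app.config
cfg.SetProperty(NHibernate.Cfg.Environment.BatchSize, "10");  // the adonet.batch_size setting
var sessionFactory = cfg.BuildSessionFactory();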
Try setting the FlushMode to Commit:
ISession _context = SessionProvider.OpenSession();
_context.FlushMode = FlushMode.Commit;
peer's suggestion to set the batch size is good also.
My understanding is that when using database identity columns, NHibernate will defer inserts until the session is flushed unless it needs to perform the insert in order to retrieve a foreign key or ensure that a query returns the expected results.
Well:
rebelliard's answer is a possibility, depending on your mapping.
You are not using explicit transactions (StuffHappens' answer).
The default flush mode is Auto, and that complicates things (Jamie Ide's answer).
If by any chance you make any queries using the NHibernate API, the default behaviour is to flush the cache to the database first so that the results of those queries match the session's entity representation.
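To illustrate that last point with a rough, hypothetical sketch (entity and property names are made up, and the session factory is assumed): with the default FlushMode.Auto, running a query can force pending changes out early.

using (var session = sessionFactory.OpenSession())   // FlushMode.Auto by default
using (var tx = session.BeginTransaction())
{
    var entity = session.Get<MyEntity>(1);
    entity.Name = "changed";

    // This query may trigger an implicit flush so its results reflect the change above.
    var all = session.CreateCriteria(typeof(MyEntity)).List<MyEntity>();

    tx.Commit();
}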
What about:
ISession _context = SessionProvider.OpenSession();

//Persist all changes to the database
using (var tx = _context.BeginTransaction())
{
    //Create new entities
    for (int i = 0; i < 100; i++)
    {
        MyEntity entity = new MyEntity(i);
        //Attach new entity to the context
        _context.Save(entity);
    }
    //Flush the session
    tx.Commit();
}

NHibernate Get() followed by Flush or Commit?

My ISession object's FlushMode is FlushMode.Commit.
I use the unit of work and repository pattern as defined here:
http://nhforge.org/wikis/patternsandpractices/nhibernate-and-the-unit-of-work-pattern.aspx
I recall seeing some examples where some people call a Get() immediately followed by a Flush or a transaction commit. Were they just off their rocker, or is there a reason to do this?
From my test:
[TestMethod]
public void TestMethod1()
{
    Employee e;
    IRepository<Employee> empRepo;
    using (UnitOfWork.Start(Enums.Databases.MyDatabase))
    {
        empRepo = new Repository<Employee>();
        e = empRepo.GetByID(21);
    }
    Debug.WriteLine(e.UserName);
}
My GetByID repository function just calls Session.Get(id), and I can view the username in the output window (after the session is killed)... so what's the point of any sort of Flush or transaction commit after a Get()? I would understand if there was a save in there somewhere.
NHibernate assumes that all database operations are done within transactions, so people use them explicitly instead of having the RDBMS use them implicitly.
Ayende explains this in more detail in his post NH Prof Alerts: Use of implicit transactions is discouraged.
Edit: Learned something new today. It's not NHibernate using implicit transactions but the DB.
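In practice that just means wrapping even read-only work in an explicit transaction; a minimal sketch, assuming a session factory rather than the question's UnitOfWork helper:

using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var e = session.Get<Employee>(21);
    tx.Commit();                  // commits the read-only transaction; nothing is written
    Debug.WriteLine(e.UserName);
}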

NHibernate does not delete entity

In the TestFixtureTearDown part of an NUnit test I try to delete some test entities created in the TestFixtureSetUp part. I use the following code:
sessionFactory = NHibernateHelper.CreateSessionFactory(cssc["DefaultTestConnectionString"].ConnectionString);
uow = new NHibernateUnitOfWork(sessionFactory);
var g = reposGebruiker.GetByName(gebruiker.GebruikerNaam);
reposGebruiker.Delete(g);
var k = reposKlant.GetByName(klant.Naam);
reposKlant.Delete(k);
// Commit changes to persistent storage
uow.Commit();
However, after the commit, the two entities were still in the database. After searching, I came across this page on SO and so I added:
uow.Session.Flush();
However, the entities still remain in the DB. Does anyone have an idea as to why this is?
I've never used the UoW class you're using, but my projects are implemented using ISession.BeginTransaction and ISession.Transaction.Commit in a helper like this:
public void CreateContext(Action logic)
{
    Session.BeginTransaction();
    logic();
    Session.Transaction.Commit();
}
And then:
CreateContext(() =>
Session.Delete(someObject));
This should work.
I want to mention that this is an example, and you'd want to make some abstractions.
How are the repositories created? In order for the delete to succeed, the objects must be loaded in the same UoW (ISession) in which the Delete command is issued. The Delete method makes the objects non-persistent and marks them for deletion.
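As a hypothetical sketch of that point (the variable names and lookup by id are assumed, not taken from the question): load and delete within the same session, with a transaction committed around both.

using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var g = session.Get<Gebruiker>(gebruikerId);   // loaded by this session, so it is persistent here
    session.Delete(g);                             // marks it for deletion
    tx.Commit();                                   // the DELETE is issued on commit
}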

Flushing in NHibernate

This question is a bit of a dupe, but I still don't understand the best way to handle flushing.
I am migrating an existing code base, which contains a lot of code like the following:
private void btnSave_Click()
{
    SaveForm();
    ReloadList();
}

private void SaveForm()
{
    var foo = FooRepository.Get(_editingFooId);
    foo.Name = txtName.Text;
    FooRepository.Save(foo);
}

private void ReloadList()
{
    fooRepeater.DataSource = FooRepository.LoadAll();
    fooRepeater.DataBind();
}
Now that I am changing the FooRepository to NHibernate, what should I use for the FooRepository.Save method? Should the FooRepository always flush the session when the entity is saved?
I'm not sure if I understand your question, but here is what I think:
Think in "putting objects to the session" instead of "getting and storing data". NH will store all new and changed objects in the session without any special call to it.
Consider this scenarios:
Data change:
Get data from the database with any query. The entities are now in the NH session
Change entities by just changing property values
Commit the transaction. Changes are flushed and stored to the database.
Create a new object:
Call a constructor to create a new object
Store it to the database by calling "Save". It is in the session now.
You still can change the object after Save
Commit the changes. The latest state will be stored to the database.
If you work with detached entities, you also need Update or SaveOrUpdate to put detached entities to the session.
Of course you can configure NH to behave differently. But it works best if you follow this default behaviour.
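A rough sketch of both scenarios (the Foo entity and its Name property are borrowed from the question; the session setup is assumed), relying on commit-time flushing rather than explicit Flush calls:

using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // Data change: the loaded entity is already in the session; just change it.
    var existing = session.Get<Foo>(1);
    existing.Name = "new name";              // no Save call needed for a persistent object

    // New object: Save puts it in the session; you can still change it afterwards.
    var created = new Foo { Name = "brand new" };
    session.Save(created);
    created.Name = "changed after Save";     // the latest state is what gets stored

    tx.Commit();                             // changes are flushed and stored here
}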
It doesn't matter whether or not you explicitly flush the session between modifying a Foo entity and loading all Foos from the repository. NHibernate is smart enough to auto-flush itself if you have made changes in the session that may affect the results of the query you are trying to run.
Ideally I try to use one session per "unit of work". This means one cohesive piece of work which may involve several smaller steps. If you feel that you do not have a seam in your architecture where you can achieve this, then managing the session inside the repository will also work. Just be aware that you are missing out on some of the power that NHibernate provides you.
I'd vote up Stefan Moser's answer if I could - I'm still getting to grips with Nh myself but I think it's nice to be able to write code like this:
private void SaveForm()
{
    using (var unitofwork = UnitOfWork.Start())
    {
        var foo = FooRepository.Get(_editingFooId);
        var bar = BarRepository.Get(_barId);
        foo.Name = txtName.Text;
        bar.SomeOtherProperty = txtBlah.Text;
        FooRepository.Save(foo);
        BarRepository.Save(bar);
        UnitOfWork.CommitChanges();
    }
}
So this way either the whole action succeeds or it fails and rolls back, keeping flushing/transaction management outside of the repositories.