EclipseLink Cache is not refreshed correctly in Cache Coordination - eclipselink

I'm trying to set up cache coordination with EclipseLink and here is the issue. There is a table in the database with NAME and CHECK_TIME columns. My first application uses a FULL cache and, with getAll(), loads all rows from the database into the cache at startup. It then prints the objects with get("name_" + i) in a loop, as [name_1, 2014-07-04 00:00:00], [name_2, 2014-07-04 00:00:00], and so on.
A second application uses the same cache, thanks to cache coordination, and updates the CHECK_TIME values with merge(); I can see the new values in the database. This second app checks one row every second and updates its CHECK_TIME column.
After the second application starts, the first application still prints the old CHECK_TIME values, even though the RMI log says the objects are merged. So at first I thought cache coordination was not working; however, if I create a new object with persist(), like [new_name_1, 2014-07-04 16:43:32], in the second app, the first application prints it immediately, which means the cache is refreshed correctly for new objects.
Do you know why get("name_" + i) always returns the old value even though the cache is refreshed? It is as if there were a second cache in EclipseLink.
persistence.xml
<property name="eclipselink.cache.coordination.protocol" value="rmi" />
<property name="eclipselink.cache.coordination.naming-service" value="rmi" />
<property name="eclipselink.cache.coordination.rmi.url" value="rmi://$HOST:1090" />
Subscriber.java
@NamedQueries({
    @NamedQuery(name = "get", query = "select b from Subscriber b where b.name = :name", hints = {
        @QueryHint(name = QueryHints.QUERY_TYPE, value = QueryType.ReadObject),
        @QueryHint(name = QueryHints.CACHE_USAGE, value = CacheUsage.CheckCacheOnly)}),
    @NamedQuery(name = "getAll", query = "select b from Subscriber b", hints = {
        @QueryHint(name = QueryHints.QUERY_TYPE, value = QueryType.ReadAll),
        @QueryHint(name = QueryHints.CACHE_USAGE, value = CacheUsage.DoNotCheckCache)})})
@Cache(type = CacheType.FULL, coordinationType = CacheCoordinationType.SEND_NEW_OBJECTS_WITH_CHANGES)
Edit:
private SubscriberIdentity get(String name) {
    SubscriberIdentity result = null;
    try {
        // entityManager is global
        Query query = entityManager.createNamedQuery("get");
        query.setParameter("name", name);
        result = (SubscriberIdentity) query.getSingleResult();
    } catch (Exception e) {
        if (e.getCause() instanceof NoResultException || e instanceof NoResultException) {
            // TODO
        }
    }
    return result;
}

With @Chris's big help in the comments, I understood that the problem was the static entityManager object: on a second call to get("name_1") it serves its own first-level cache instead of checking the shared cache. So I changed the
Query query = entityManager.createNamedQuery("get");
part to
EntityManager em = entityManagerFactory.createEntityManager();
Query query = em.createNamedQuery("get");
in the getter (EntityManager is an interface, so the fresh instance has to come from the EntityManagerFactory rather than from new), and cache coordination now works correctly for the UPDATE case too.
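For completeness, a minimal sketch of the revised getter, assuming an EntityManagerFactory field named entityManagerFactory (the names here are illustrative, not from the original code):
private SubscriberIdentity get(String name) {
    // A short-lived EntityManager per call consults the shared cache
    // instead of serving a stale copy from a long-lived first-level cache.
    EntityManager em = entityManagerFactory.createEntityManager();
    try {
        return (SubscriberIdentity) em.createNamedQuery("get")
                .setParameter("name", name)
                .getSingleResult();
    } catch (NoResultException e) {
        return null;
    } finally {
        em.close();
    }
}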

As @Chris said, you can also check the EL Cache documentation:
By default EclipseLink uses a shared object cache, that caches a
subset of all objects read and persisted for the persistence unit. The
EclipseLink shared cache differs from the local EntityManager cache.
The shared cache exists for the duration of the persistence unit
(EntityManagerFactory, or server) and is shared by all EntityManagers
and users of the persistence unit. The local EntityManager cache is
not shared, and only exists for the duration of the EntityManager or
transaction.
The limitation of the shared cache, is that if the database is changed directly through JDBC, or by another application or server, the objects in the shared cache will be stale.
EclipseLink offers several mechanisms to deal with stale data, including:
Refreshing
Invalidation
Optimistic locking
Cache coordination
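If you do keep a single long-lived EntityManager, the first of these mechanisms (refreshing) can be requested per query. A rough sketch, assuming the named query and entityManager from the question (QueryHints and HintValues come from org.eclipse.persistence.config); note that the CheckCacheOnly hint on the named query would have to be dropped so the refresh can actually hit the database:
Query query = entityManager.createNamedQuery("get");
query.setParameter("name", name);
// eclipselink.refresh: re-read the row and update the instance held in the shared cache
query.setHint(QueryHints.REFRESH, HintValues.TRUE);
SubscriberIdentity refreshed = (SubscriberIdentity) query.getSingleResult();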

Related

The NHibernate transaction has been successfully committed, but the result of the query is still the unmodified value

Execute the following server code, then check the promotion table and task table in the database. The related fields have been updated correctly, which indicates that the transaction was committed successfully.
using (ITransaction tx = session.BeginTransaction())
{
    try
    {
        Promotion p = session.Get<Promotion>(request.PromotionId);
        p.Status = PromotionStatus.Canceled;
        foreach (Task task in p.Tasks)
        {
            if (task.AnnounceStatus == TaskAnnounceStatus.New)
            {
                task.AnnounceStatus = TaskAnnounceStatus.PromotionCanceled;
                task.CancelTime = DateTime.Now;
                //session.Update(task);
            }
        }
        tx.Commit();
    }
    catch
    {
        tx.Rollback();
        throw;
    }
}
Then execute the following query (Query A); the data obtained is also the updated value. It looks like everything is working.
tasks = session.Query<Task>().Where(p => p.AnnounceStatus == Model.TaskAnnounceStatus.New && p.ProcessStatus == Model.TaskProcessStatus.New).ToList();
However, if I execute a query on the task with the following code before committing the transaction, the above query (Query A) returns the old, unmodified value, while the database still shows the correctly updated value.
Task task = session.Get<Task>(taskId);
So I modified the first piece of code and explicitly called the Update method (see the commented-out line in the code), and this time everything worked fine.
My guess is that NHibernate's cache is causing the problem. I use SysCache2 to manage the second-level cache, the cache is set to ReadWrite, and I use sessionFactory.GetCurrentSession to manage NHibernate's sessions.
I hope someone can explain how this works.
You execute session.Get<Task>(taskId); first. This loads the entity into the first-level (session) cache.
Then, in your transaction, you Get the Promotion entity. Task is an enumerable property of it. Lazy-loaded or not, your foreach loop iterates over the Task entity with ID taskId, modifies it, updates it, and the transaction commits successfully. Because all of this happens inside the transaction, the initial entity returned by session.Get<Task>(taskId); is not updated; it still holds the old values.
Then you run session.Query<Task>() again, outside the transaction. This time NHibernate sees that an entity with the same identifier is already loaded in the session cache (from the session.Get<Task>(taskId); call), so it does not load that entity again; it simply returns the instance already in the session cache. Since that instance holds the old values, you see the problem.
To confirm this, put all of these queries inside the transaction block and check the result.
Alternatively, manage the scope of the session properly.
Understand that your ISession is your unit of work; scope it carefully.
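As an illustration (this is not from the original answers; it reuses the question's session and taskId and needs using NHibernate.Linq for Query<T>), two ways to make the later query see the committed changes within the same session:
// Option 1: re-read the already-cached instance from the database.
Task task = session.Get<Task>(taskId);
// ... the transaction elsewhere commits its changes ...
session.Refresh(task);   // overwrites the in-memory state with the current DB values

// Option 2: drop the instance from the first-level cache so the next
// query materializes a fresh copy from the database.
session.Evict(task);
var tasks = session.Query<Task>()
                   .Where(t => t.AnnounceStatus == TaskAnnounceStatus.New
                            && t.ProcessStatus == TaskProcessStatus.New)
                   .ToList();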

Session Flush not Showing SQL when persisting unsaved entities

The scenario is a (more complex) version of the following:
IList<T> ts = Session.QueryOver<T>().List();
// modify data of multiple objects
ts[0].Foo = "foo0";
ts[1].Foo = "foo1";
using (ITransaction trx = Session.BeginTransaction())
{
    // save only one object
    Session.Save(ts[0]);
    trx.Commit();
}
As NH goes, this will also save ts[1] by default, to prevent stale state (side note: we love control over our SQL, so we turn that off by setting Session.FlushMode = FlushMode.Never).
What really vexes me is that, even though show_sql is activated, no SQL is shown for the ts[1] update that is definitely sent to the database by the flush.
Is there any way I can get those to show up?
As stated in https://stackoverflow.com/a/9403516/1236044, you just need to add the adonet.batch_size setting with a value of 0 to your config:
<property name="adonet.batch_size">0</property>

Why is a session.Clear() needed to reflect the changes in the db in this example?

I have the following code:
public class A
{
    private ISessionFactory _sf;

    public A(ISessionFactory sf)
    {
        _sf = sf;
    }

    public void SomeFunc()
    {
        using (var session = _sf.OpenSession())
        using (var transaction = session.BeginTransaction())
        {
            // query for an object
            // change its properties
            // save the object
            transaction.Commit();
        }
    }
}
It's used as follows in a unit test:
_session.CreateCriteria ... // some setting up of values for this test
var objectA = new A(_sessionFactory);
objectA.SomeFunc();
// _session.Clear();
var someVal = _session.CreateCriteria ... // retrieve the value from the db to check whether it
                                          // was set to the proper value; it uses a restriction
                                          // on a property and a UniqueResult to get the object,
                                          // it doesn't use Get or Load
Assert.That(someVal, Is.EqualTo(someOtherValue)); // this is false as long as the _session.Clear()
                                                  // is commented out; if uncommented, the test passes
I am testing against a SQLite file database. In my tests I make some changes to the db to set it up properly. I then call SomeFunc(), which makes the required modifications. Once I am back in my test, however, the session does not see the updated values; it still returns the values as they were before calling SomeFunc(). I have to execute _session.Clear() for the changes to be reflected in my assertion in the test.
Why is this needed?
Edit: cache.use_second_level_cache and cache.use_query_cache are both set to false
Edit2: Read the following statements in the NH Documentation.
From time to time the ISession will execute the SQL statements needed to synchronize the ADO.NET connection's state with the state of objects held in memory. This process, flush, occurs by default at the following points:
* from some invocations of Find() or Enumerable()
* from NHibernate.ITransaction.Commit()
* from ISession.Flush()
And in section 10.1 it says,
Ensure you understand the semantics of Flush(). Flushing synchronizes the persistent store with in-memory changes but not vice-versa.
So, how do I get the in-memory objects to be updated? I understand that objects are cached per session, but shouldn't executing a UniqueResult() or a List() sync with the db and invalidate the cache?
What I cannot understand is why the session is reporting stale data.
It depends on what kind of operations you perform. NHibernate has a first-level cache by default. It uses this cache to get entities by ID, and so on.
The in memory view of objects (the level 1 cache) is per session.
A takes an ISessionFactory and opens its own session with its own transaction scope.
Even if the contents of the ISession used in SomeFunc are flushed to the database, _session will not see those changes until its level 1 cache is cleared.
You have two sessions: one in A.SomeFunc, and the other in your unit test. Each session has its own instance of the entities in its session cache (1st-level cache). The sessions do not communicate or coordinate with each other. When one session writes its changes, the other session is not notified; it still has its own outdated instance in its session cache.
When you call _session.Clear(), you make the session "forget" everything by clearing the session cache. When you re-query, you are reading fresh data from the database, which includes the changes from the other session.
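As an illustrative sketch (not from the answers above), three ways to make the test's session see the committed data. _session and _sessionFactory are the fields already used in the test; cachedEntity and SomeEntity are hypothetical placeholders.
// 1. Forget everything held in the test session's first-level cache:
_session.Clear();

// 2. Or re-read just one already-loaded instance (cachedEntity is hypothetical):
_session.Refresh(cachedEntity);

// 3. Or run the verification query in a fresh session:
using (var verifySession = _sessionFactory.OpenSession())
{
    var someVal = verifySession.CreateCriteria<SomeEntity>()  // SomeEntity is hypothetical
                               // ... same restriction as in the test ...
                               .UniqueResult<SomeEntity>();
}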

Flushing in NHibernate

This question is a bit of a dupe, but I still don't understand the best way to handle flushing.
I am migrating an existing code base, which contains a lot of code like the following:
private void btnSave_Click()
{
    SaveForm();
    ReloadList();
}

private void SaveForm()
{
    var foo = FooRepository.Get(_editingFooId);
    foo.Name = txtName.Text;
    FooRepository.Save(foo);
}

private void ReloadList()
{
    fooRepeater.DataSource = FooRepository.LoadAll();
    fooRepeater.DataBind();
}
Now that I am changing the FooRepository to NHibernate, what should I use for the FooRepository.Save method? Should the FooRepository always flush the session when the entity is saved?
I'm not sure if I understand your question, but here is what I think:
Think in "putting objects to the session" instead of "getting and storing data". NH will store all new and changed objects in the session without any special call to it.
Consider this scenarios:
Data change:
Get data from the database with any query. The entities are now in the NH session
Change entities by just changing property values
Commit the transaction. Changes are flushed and stored to the database.
Create a new object:
Call a constructor to create a new object
Store it to the database by calling "Save". It is in the session now.
You still can change the object after Save
Commit the changes. The latest state will be stored to the database.
If you work with detached entities, you also need Update or SaveOrUpdate to put detached entities to the session.
Of course you can configure NH to behave differently. But it works best if you follow this default behaviour.
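For instance, a minimal sketch of the "data change" scenario above (the type and variable names are illustrative, not from the question): no explicit Save or Flush call is needed, because with the default FlushMode the commit flushes the dirty entity.
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var foo = session.Get<Foo>(fooId);   // the entity is now in the session
    foo.Name = "a new name";             // just change a property value
    tx.Commit();                         // the change is flushed and stored here
}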
It doesn't matter whether or not you explicitly flush the session between modifying a Foo entity and loading all Foos from the repository. NHibernate is smart enough to auto-flush itself if you have made changes in the session that may affect the results of the query you are trying to run.
Ideally I try to use one session per "unit of work". This means one cohesive piece of work which may involve several smaller steps. If you feel that you do not have a seam in your architecture where you can achieve this, then managing the session inside the repository will also work. Just be aware that you are missing out on some of the power that NHibernate provides you.
I'd vote up Stefan Moser's answer if I could - I'm still getting to grips with NH myself, but I think it's nice to be able to write code like this:
private void SaveForm()
{
    using (var unitofwork = UnitOfWork.Start())
    {
        var foo = FooRepository.Get(_editingFooId);
        var bar = BarRepository.Get(_barId);
        foo.Name = txtName.Text;
        bar.SomeOtherProperty = txtBlah.Text;
        FooRepository.Save(foo);
        BarRepository.Save(bar);
        UnitOfWork.CommitChanges();
    }
}
This way either the whole action succeeds or it fails and rolls back, keeping flushing/transaction management outside of the repositories.

NHibernate update on single property updates all properties in SQL

I am performing a standard update of a single property in NHibernate. However, on commit of the transaction, the SQL update seems to set all the fields I have mapped on the table, even though they have not changed. Surely this can't be normal behaviour in NHibernate? Am I doing something wrong? Thanks
using (var session = sessionFactory.OpenSession())
{
    using (var transaction = session.BeginTransaction())
    {
        var singleMeeting = session.Load<Meeting>(10193);
        singleMeeting.Subject = "This is a test 2";
        transaction.Commit();
    }
}
This is the normal behavior. You can try adding dynamic-update="true" to your class definition to override this behavior.
Well, yes, this is normal behaviour for NHibernate. You can use the generated attribute on your properties to change the behaviour. Details are on Ayende's blog.
The reason this is the default is that with dynamic updates you don't get your query plan cached, and usually you don't mind sending a few more bytes over the high-speed network connection between your application server and the database. If you are saving long strings, though, this setting is perfectly appropriate.
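For reference, a minimal hbm.xml sketch of the first answer's suggestion (the Meeting mapping with Id and Subject properties is assumed here; adjust to your actual mapping):
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <!-- dynamic-update="true": generated UPDATE statements contain only the changed columns -->
  <class name="Meeting" table="Meeting" dynamic-update="true">
    <id name="Id">
      <generator class="native" />
    </id>
    <property name="Subject" />
  </class>
</hibernate-mapping>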