Object graph and Infinispan

We have a large object graph in a Java application, and we would like to share it between several nodes. We are now looking into Infinispan.
The issue we are running into is that when we update an object that is referenced by several other objects, Infinispan creates a new object, and we lose the reference to it from all the other objects (which still point to the old one).
Is there a way to overcome this?

No, when the cache is clustered, Infinispan has no way of preserving the references between objects in different cache entries.
You could try Hibernate OGM on top of Infinispan, or implement something similar yourself: store the shared objects as separate entries (either in the same cache or in a different one), replace references to them with their cache keys on store, and replace the cache keys with object references on load.
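For illustration, here is a minimal Java sketch of that key-indirection approach; the Employee and Department classes and the key scheme are hypothetical (many employees sharing one department):

import org.infinispan.Cache;
import java.io.Serializable;

// Hypothetical domain classes: many Employees reference one shared Department.
class Department implements Serializable {
    String id;
    String name;
}

class Employee implements Serializable {
    String id;
    String departmentId; // the shared Department's cache key, not a reference
}

class SharedObjectStore {
    private final Cache<String, Employee> employees;
    private final Cache<String, Department> departments;

    SharedObjectStore(Cache<String, Employee> employees,
                      Cache<String, Department> departments) {
        this.employees = employees;
        this.departments = departments;
    }

    // On store: the shared Department becomes its own entry, and the
    // Employee keeps only its key.
    void save(Employee e, Department d) {
        departments.put(d.id, d); // one authoritative copy per key
        e.departmentId = d.id;
        employees.put(e.id, e);
    }

    // On load: resolve the key back to the current Department, so an update
    // made on any node is seen by every Employee that references it.
    Department departmentOf(Employee e) {
        return departments.get(e.departmentId);
    }
}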

Related

Remove deleted items from first level cache in NHibernate? Or: how to check if cached items have been deleted?

We have a brownfield multi-user application (99% Delphi, 1% .NET) which uses NHibernate for the persistence of the .NET modules. In my application I can add categories to some entity. If I select a category and then decide not to use it (thus removing it again), it has already been loaded by NHibernate and will stay in the session's first-level cache. Now, if some other user deletes this category and I try to save my entity, my application throws an exception because the loaded object doesn't exist anymore.
My question: is there a way to check whether my cache holds items that no longer exist? And if so, is there a way to remove those non-existent entities from my cache?
So what happens:
I load an entity (added to the session cache).
I add a category (added to the session cache).
Someone else deletes the category from the database.
I save my entity, and the exception occurs because the category doesn't exist anymore.
It's still in the session cache. It would be nice if I could (automatically) remove it from my session's cache. Is there a way to clean up the cache and remove objects that no longer exist?
Regards, Ted
There's no option in NHibernate to do it automatically, at least not with ISession. You could use IStatelessSession for loading, since it doesn't have first-level cache, but you'll lose many other features that ISession provides.
You could also call ISession.Clear() to clear the session (first-level) cache, or ISession.Evict() to evict certain entities from session, but that's not automatic.
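For illustration only: NHibernate's ISession closely mirrors Hibernate's Java Session API, so a sketch of the evict-and-reload check might look like this in Java (Category stands in for your category entity; the helper class is hypothetical):

import org.hibernate.Session;

// Hypothetical stand-in for the category entity from the question.
class Category {
    Long id;
    Long getId() { return id; }
}

class StaleEntityCheck {
    // Evict a possibly-stale entity from the first-level cache and re-read
    // it, so a concurrent delete shows up as a null result here instead of
    // an exception at flush time.
    static Category reload(Session session, Category category) {
        Long id = category.getId();
        session.evict(category);                  // drop the cached instance
        Category fresh = (Category) session.get(Category.class, id); // hits the database
        if (fresh == null) {
            // Deleted by another user: remove it from the entity being saved.
        }
        return fresh;
    }
}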
How long do you keep your session object? Maybe you need a different session management context.
If the lifespan of your session is shorter, you can still get entity caching, but with the second-level cache. SysCache2 is one of the second-level cache providers, and it supports SqlCacheDependency. This means you can have cache entries expire when particular objects in the database change.

NSManagedObjectContext confusion

I am learning about Core Data. Obviously, one of the main classes you encounter is NSManagedObjectContext. I am unclear about its exact role. From the articles I've read, it seems that you can have multiple NSManagedObjectContexts. Does this mean that NSManagedObjectContext is basically a copy of the backend?
How would this resolve into a consistent backend when there are multiple different copies lying around?
So, 2 questions basically:
Is NSManagedObjectContext a copy of the backend database?
and...
For example, say I make a change in context A and make some other change in context B. Then I call save on A first, then on B. Will B prevail?
Thanks
The NSManagedObjectContext is not a copy of the backend database. The documentation describes it as a scratch pad:
An instance of NSManagedObjectContext represents a single “object space” or scratch pad in an application. Its primary responsibility is to manage a collection of managed objects. These objects form a group of related model objects that represent an internally consistent view of one or more persistent stores. A single managed object instance exists in one and only one context, but multiple copies of an object can exist in different contexts. Thus object uniquing is scoped to a particular context.
The NSManagedObjectContext is just a temporary place to make changes to your managed objects in a transactional way. When you make changes to objects in a context, nothing happens to the backend database unless and until you save the context; and, as you know, you can have multiple contexts to make changes in, which is really important for concurrency.
For question number 2, the answer to which change prevails depends on the merge policy you set on your context and on which save is called last, which here would be B's. These are the merge policies that can be set; they affect the second context to be saved:
NSErrorMergePolicyType: Specifies a policy that causes a save to fail if there are any merge conflicts.
NSMergeByPropertyStoreTrumpMergePolicyType: Specifies a policy that merges conflicts between the persistent store’s version of the object and the current in-memory version, giving priority to external changes.
NSMergeByPropertyObjectTrumpMergePolicyType: Specifies a policy that merges conflicts between the persistent store’s version of the object and the current in-memory version, giving priority to in-memory changes.
NSOverwriteMergePolicyType: Specifies a policy that overwrites state in the persistent store for the changed objects in conflict.
NSRollbackMergePolicyType: Specifies a policy that discards in-memory state changes for objects in conflict.
An NSManagedObjectContext is a specific representation of your data model. Each context maintains its own state, so changes in one context will not directly affect other contexts. When you work with multiple contexts, it is your responsibility to keep them consistent by merging changes whenever a context saves its changes to the store.
Your question is about this process, and it may also involve merge conflicts. Whenever you save a context, its changes are committed to the store, and a merge policy is used to resolve conflicts.
When you save a context, it will post various notifications regarding progress. In your case, if [contextA save:&error] succeeds, the context will post the notification NSManagedObjectContextDidSaveNotification. When you have multiple contexts, you typically observe this notification and call:
[contextB mergeChangesFromContextDidSaveNotification:notification];
This will merge the changes saved on contextA into contextB.
EDIT: removed the 'thread-safe' comment. NSManagedObjectContext is not thread safe.

Two persistent stores for one managed object context - possible?

I have a fairly complex data model with approximately 10 entities. Some need to be stored to disk and others just need to be available in memory when the application is running. Is it possible to achieve this using two persistent stores for the same managed object context, or should I separate my data models accordingly?
Yes. Your NSManagedObjectContext uses an NSPersistentStoreCoordinator to determine which store a particular model should use. By setting the persistent store coordinator of your managed object context, you can define a custom mapping that uses multiple persistent stores of different types.
http://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/CoreData/Articles/cdBasics.html#//apple_ref/doc/uid/TP40001650-SW4
You may use configurations, as TechZen mentioned:
Create configurations in the managed object model editor (the .xcdatamodel file).
In code, add several persistent stores to the persistent store coordinator, providing the appropriate configuration name for each.
For details, check my other answer here.

NHibernate in disconnected scenarios

What are your experiences with the latest version of NHibernate (2.0.1 GA) in disconnected scenarios?
A disconnected scenario is one where I fetch some object graph from NHibernate, disconnect from the session (and the database connection), make some changes in the object graph (deleting from collections, adding entities, updating entities), and then reconnect and save.
We tried this in a client-server architecture. Now we are moving to DTOs (data transfer objects). This means the detached entities are no longer sent directly to the client; specialized objects are sent instead.
The main reason to move in this direction is not NHibernate; it is the serialization needed to send entities to the client. While you can use lazy loading (and you will!) while you are attached to the session, you need to fetch all references from the database in order to serialize the graph.
We had lots of Guids instead of references and lots of properties that were mapped but not serialized ... and it became a pain. So it's much easier to copy the stuff you really want to serialize into a structure of its own.
Apart from that, working detached can work well.
Be careful with lazy loading, which will cause exceptions to be thrown when accessing non-loaded properties on a detached instance.
Be careful with concurrency; the chance that entities changed while they were detached is high.
Be careful if you need some sort of security, or if you want only your server to be allowed to make certain data changes. The detached objects could potentially come back in any state.
You may want to take a look at the session methods SaveOrUpdateCopy and Merge.
Here is an article which gives you more details:
NHibernate feature: SaveOrUpdateCopy & Merge
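As a rough sketch of the Merge pattern, written against Hibernate's Java API (which NHibernate's Merge mirrors); the Order entity and helper class are hypothetical:

import org.hibernate.Session;
import org.hibernate.Transaction;

// Hypothetical detached entity.
class Order implements java.io.Serializable {
    Long id;
}

class Reattach {
    // Reattach-by-merge: copies the detached instance's state onto a
    // persistent instance in this session and returns the managed copy,
    // which is the instance to keep using afterwards.
    static Order merge(Session session, Order detachedOrder) {
        Transaction tx = session.beginTransaction();
        Order managed = (Order) session.merge(detachedOrder);
        tx.commit();
        return managed;
    }
}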

NHibernate NonUniqueObjectException when reattaching objects to the session (with Lock)

Basic order of execution:
A collection of PersistentObjects is queried and then cached separately from the session.
The collection is passed to a module that needs to reattach them to the session in order to lazily load some of the properties (using session.Lock(obj, LockMode.None)).
After the module has completed processing, another module attempts to SaveOrUpdate a UserSetting object with some usage statistics for the user who initialized the action.
On session.Flush() NHibernate throws a NonUniqueObjectException.
I've found that one way of working around this issue is to get new copies of the objects with:
obj = session.Get(obj.GetType(), (obj as PersistentObject).Id);
instead of reattaching with session.Lock. However, this is suboptimal: some of the record sets are potentially quite large, and re-getting each object individually could become a performance drag.
The non-unique object is a referenced object that exists only on the PersistentObject class, not on the UserSetting class, so I cannot understand why a flush would cause this exception.
I've tried evicting the cached objects after the module is done with them, but this does not help.
Does anyone know of a better way to attach objects to the session that could avoid this problem?
Can you use a fresh session (or transaction) for processing each item and for updating the UserSetting? This would probably prevent the NonUniqueObjectException.
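A rough Java sketch of that idea, written against Hibernate's API (which NHibernate mirrors); the per-item helper is hypothetical, and lock(item, LockMode.NONE) is the Java counterpart of your session.Lock(obj, LockMode.None):

import org.hibernate.LockMode;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

class PerItemProcessing {
    // Give each unit of work its own short-lived session, so a reattached
    // object cannot collide with an instance already tracked by another,
    // longer-lived session.
    static void processItem(SessionFactory factory, Object item) {
        Session session = factory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            session.lock(item, LockMode.NONE); // reattach without a database hit
            // ... access lazy properties of the item and do the work here ...
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}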
Cheers,
-Maarten