NHibernate: Setting up references with IResultTransformer

I have a custom class that implements IResultTransformer.
It is easy to make this work for single values, but what is the correct way to set up references when only the id is in the tuple? What if they are supposed to be lazy loaded? Should I just load them from the Session using the Get or Load methods?
For example:
public class FoobarResultTransformer : IResultTransformer
{
    public object TransformTuple(object[] tuple, string[] aliases)
    {
        var foobar = new Foobar();
        for (int i = 0; i < aliases.Length; i++)
        {
            switch (aliases[i])
            {
                case "IntProperty":
                    // This one is easy
                    foobar.IntProperty = Convert.ToInt32(tuple[i]);
                    break;
                case "ReferencedEntityId":
                    // Assuming the tuple contains a GUID identifier, what should I do here?
                    foobar.ReferencedEntity =
                    break;
            }
        }
        return foobar;
    }
}

You should load the referenced entity by its primary key using Get or Load. If it should be lazy loaded, use Load.
Get will return null if the object does not exist. It usually results in a select against the database, but it checks the session cache and the second-level cache first.
Load will never return null; it will return a proxy instead.
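For illustration, here is a minimal sketch of the Load approach, assuming the transformer is handed an ISession (for example through its constructor) and that ReferencedEntity is a mapped entity with a Guid identifier:
public class FoobarResultTransformer : IResultTransformer
{
    private readonly ISession _session;

    public FoobarResultTransformer(ISession session)
    {
        _session = session;
    }

    public object TransformTuple(object[] tuple, string[] aliases)
    {
        var foobar = new Foobar();
        for (int i = 0; i < aliases.Length; i++)
        {
            switch (aliases[i])
            {
                case "IntProperty":
                    foobar.IntProperty = Convert.ToInt32(tuple[i]);
                    break;
                case "ReferencedEntityId":
                    // Load hands back a lazy proxy without hitting the database;
                    // use Get instead if you want null when the row doesn't exist.
                    foobar.ReferencedEntity = _session.Load<ReferencedEntity>((Guid)tuple[i]);
                    break;
            }
        }
        return foobar;
    }

    public IList TransformList(IList collection)
    {
        return collection;
    }
}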


Java 8 map with Map.get nullPointer Optimization

public class StartObject {
    private Something something;
    private Set<ObjectThatMatters> objectThatMattersSet;
}

public class Something {
    private Set<SomeObject> someObjectSet;
}

public class SomeObject {
    private AnotherObject anotherObject;
}

public class AnotherObject {
    private Set<ObjectThatMatters> objectThatMattersSet;
}

public class ObjectThatMatters {
    private Long id;
}

private void someMethod(StartObject startObject) {
    Map<Long, ObjectThatMatters> objectThatMattersMap = startObject.getSomething()
            .getSomeObject().stream()
            .map(SomeObject::getAnotherObject)
            .flatMap(anotherObject -> anotherObject.getObjectThatMattersSet().stream())
            .collect(Collectors.toMap(ObjectThatMatters::getId, Function.identity()));

    Set<ObjectThatMatters> dbObjectThatMatters = new HashSet<>();
    try {
        dbObjectThatMatters.addAll(startObject.getObjectThatMatters().stream()
                .map(objectThatMatters -> objectThatMattersMap.get(objectThatMatters.getId()))
                .collect(Collectors.toSet()));
    } catch (NullPointerException e) {
        throw new SomeCustomException();
    }
    startObject.setObjectThatMattersSet(dbObjectThatMatters);
}
Given a StartObject that contains a set of ObjectThatMatters,
And a Something that contains the already-fetched database structure, filled with all valid ObjectThatMatters,
When I want to swap the StartObject's set of ObjectThatMatters for the corresponding valid db objects that only exist within the scope of the Something,
Then I compare the set of ObjectThatMatters on the StartObject,
And replace every one of them with the valid ObjectThatMatters inside the Something object,
And if some ObjectThatMatters doesn't have a valid counterpart, I throw a SomeCustomException.
This someMethod seems pretty horrible; how can I make it more readable?
I already tried changing the try/catch to an Optional, but that doesn't actually help.
I used a Map instead of a List with List.contains because of performance; was this a good idea? The total number of ObjectThatMatters will usually be around 500.
I'm not allowed to change the other classes' structure, and I'm only showing the fields that affect this method, not every field, since these are extremely rich objects.
You don’t need a mapping step at all. The first operation, which produces a Map, can be used to produce the desired Set in the first place. Since there might be more objects than you are interested in, you may perform a filter operation.
So first, collect the IDs of the desired objects into a set, then collect the corresponding db objects, filtering by the Set of IDs. You can verify whether all IDs have been found, by comparing the resulting Set’s size with the ID Set’s size.
private void someMethod(StartObject startObject) {
    Set<Long> id = startObject.getObjectThatMatters().stream()
            .map(ObjectThatMatters::getId).collect(Collectors.toSet());
    HashSet<ObjectThatMatters> objectThatMattersSet =
        startObject.getSomething().getSomeObject().stream()
            .flatMap(so -> so.getAnotherObject().getObjectThatMattersSet().stream())
            .filter(obj -> id.contains(obj.getId()))
            .collect(Collectors.toCollection(HashSet::new));
    if (objectThatMattersSet.size() != id.size())
        throw new SomeCustomException();
    startObject.setObjectThatMattersSet(objectThatMattersSet);
}
This code produces a HashSet; if this is not a requirement, you can just use Collectors.toSet() to get an arbitrary Set implementation.
It’s even easy to find out which IDs were missing:
private void someMethod(StartObject startObject) {
    Set<Long> id = startObject.getObjectThatMatters().stream()
            .map(ObjectThatMatters::getId)
            .collect(Collectors.toCollection(HashSet::new)); // ensure mutable Set
    HashSet<ObjectThatMatters> objectThatMattersSet =
        startObject.getSomething().getSomeObject().stream()
            .flatMap(so -> so.getAnotherObject().getObjectThatMattersSet().stream())
            .filter(obj -> id.contains(obj.getId()))
            .collect(Collectors.toCollection(HashSet::new));
    if (objectThatMattersSet.size() != id.size()) {
        objectThatMattersSet.stream().map(ObjectThatMatters::getId).forEach(id::remove);
        throw new SomeCustomException("The following IDs were not found: " + id);
    }
    startObject.setObjectThatMattersSet(objectThatMattersSet);
}

Pros & cons bean vs SSJS?

I was trying to build a bean that always retrieves the same document (a counter document), gets the current value, increments it and saves the document with the new value. Finally, it should return the value to the calling method, which would give me a new sequential number in my XPage.
Since Domino objects cannot be serialized or kept in a singleton, what is the benefit of creating a bean for this over creating an SSJS function that does the exact same thing?
My bean needs to get hold of the session, database, view and document, and those calls will be made every time it is invoked.
The same goes for the SSJS function, except for the session and database.
Bean:
public double getTransNo() {
    double transNo = 0;
    try {
        Session session = ExtLibUtil.getCurrentSession();
        Database db = session.getCurrentDatabase();
        View view = db.getView("vCount");
        view.refresh();
        Document doc = view.getFirstDocument();
        transNo = doc.getItemValueDouble("count");
        doc.replaceItemValue("count", ++transNo);
        doc.save();
        doc.recycle();
        view.recycle();
    } catch (NotesException e) {
        e.printStackTrace();
    }
    return transNo;
}
SSJS:
function getTransNo() {
    var view:NotesView = database.getView("vCount");
    var doc:NotesDocument = view.getFirstDocument();
    var transNo = doc.getItemValueDouble("count");
    doc.replaceItemValue("count", ++transNo);
    doc.save();
    doc.recycle();
    view.recycle();
    return transNo;
}
Thank you
Both pieces of code are not good (sorry to be blunt).
If you have only one document in your view, you don't need a view refresh, which might get queued behind a refresh of another view and be very slow. Presumably you are talking about a single-server solution (since replication of the counter document would surely lead to conflicts).
What you do in XPages is create a Java class and declare it as an application bean:
public class SequenceGenerator {
    // Error handling is missing in this class
    private double sequence = 0;
    private String docID;

    public SequenceGenerator() {
        // Here you load from the document
        Session session = ExtLibUtil.getCurrentSession();
        Database db = session.getCurrentDatabase();
        View view = db.getView("vCount");
        Document doc = view.getFirstDocument();
        this.sequence = doc.getItemValueDouble("count");
        this.docID = doc.getUniversalID();
        Utils.shred(doc, view); // Shredding currentDatabase isn't a good idea
    }

    public synchronized double getNextSequence() {
        return this.updateSequence();
    }

    private double updateSequence() {
        this.sequence++;
        // If speed is of the essence I would spin out a new thread here
        Session session = ExtLibUtil.getCurrentSession();
        Database db = session.getCurrentDatabase();
        Document doc = db.getDocumentByUNID(this.docID);
        doc.replaceItemValue("count", this.sequence);
        doc.save(true, true);
        Utils.shred(doc);
        // End of the candidate for a thread
        return this.sequence;
    }
}
The problem with the SSJS code: what happens if 2 users hit it at the same time? At the very least you need to use synchronized there too. Using a bean makes it accessible in EL too (you need to watch out not to call it too often). Also, in Java you can defer the write-back to a different thread - or not write it back at all and, in your class initialization code, read the view with the actual documents and pick the value from there.
Update: Utils is a class with static methods:
/**
 * Get rid of all Notes objects
 *
 * @param morituri = the ones designated to die, read your Caesar!
 */
public static void shred(Base... morituri) {
    for (Base obsoleteObject : morituri) {
        if (obsoleteObject != null) {
            try {
                obsoleteObject.recycle();
            } catch (NotesException e) {
                // We don't care, we want to get
                // rid of it anyway
            } finally {
                obsoleteObject = null;
            }
        }
    }
}

Does NHibernate really deliver transparent persistence

Having started to use NHibernate for persistence, seduced by the promise that it respects your domain model, I tried to implement a relation manager for my domain objects. Basically, to DRY up my code with respect to managing bidirectional one-to-many and many-to-many relations, I decided to have those relations managed by a separate class. When a one-to-many or many-to-one property is set, an entry for the two objects is made in a dictionary; the key is either the one side, with a collection value to hold the many sides, or a many side, with the one side as its value.
A one-to-many relation for a specific combination of types looks as follows:
public class OneToManyRelation<TOnePart, TManyPart> : IRelation<IRelationPart, IRelationPart>
    where TOnePart : class, IRelationPart
    where TManyPart : class, IRelationPart
{
    private readonly IDictionary<TOnePart, Iesi.Collections.Generic.ISet<TManyPart>> _oneToMany;
    private readonly IDictionary<TManyPart, TOnePart> _manyToOne;

    public OneToManyRelation()
    {
        _manyToOne = new ConcurrentDictionary<TManyPart, TOnePart>();
        _oneToMany = new ConcurrentDictionary<TOnePart, Iesi.Collections.Generic.ISet<TManyPart>>();
    }

    public void Set(TOnePart onePart, TManyPart manyPart)
    {
        if (onePart == null || manyPart == null) return;
        if (!_manyToOne.ContainsKey(manyPart)) _manyToOne.Add(manyPart, onePart);
        else _manyToOne[manyPart] = onePart;
    }

    public void Add(TOnePart onePart, TManyPart manyPart)
    {
        if (onePart == null || manyPart == null) return;
        if (!_manyToOne.ContainsKey(manyPart)) _manyToOne.Add(manyPart, onePart);
        else _manyToOne[manyPart] = onePart;
        if (!_oneToMany.ContainsKey(onePart)) _oneToMany.Add(onePart, new HashedSet<TManyPart>());
        _oneToMany[onePart].Add(manyPart);
    }

    public Iesi.Collections.Generic.ISet<TManyPart> GetManyPart(TOnePart onePart)
    {
        if (!_oneToMany.ContainsKey(onePart)) _oneToMany[onePart] = new HashedSet<TManyPart>();
        return _oneToMany[onePart];
    }

    public TOnePart GetOnePart(TManyPart manyPart)
    {
        if (!_manyToOne.ContainsKey(manyPart)) _manyToOne[manyPart] = default(TOnePart);
        return _manyToOne[manyPart];
    }

    public void Remove(TOnePart onePart, TManyPart manyPart)
    {
        _manyToOne.Remove(manyPart);
        _oneToMany[onePart].Remove(manyPart);
    }

    public void Set(TOnePart onePart, Iesi.Collections.Generic.ISet<TManyPart> manyPart)
    {
        if (onePart == null) return;
        if (!_oneToMany.ContainsKey(onePart)) _oneToMany.Add(onePart, manyPart);
        else _oneToMany[onePart] = manyPart;
    }

    public void Clear(TOnePart onePart)
    {
        var list = new HashedSet<TManyPart>(_oneToMany[onePart]);
        foreach (var manyPart in list)
        {
            _manyToOne.Remove(manyPart);
        }
        _oneToMany.Remove(onePart);
    }

    public void Clear(TManyPart manyPart)
    {
        if (!_manyToOne.ContainsKey(manyPart)) return;
        if (_manyToOne[manyPart] == null) return;
        _oneToMany[_manyToOne[manyPart]].Remove(manyPart);
        _manyToOne.Remove(manyPart);
    }
}
On the many side a code snippet looks like:
public virtual SubstanceGroup SubstanceGroup
{
    get { return RelationProvider.SubstanceGroupSubstance.GetOnePart(this); }
    protected set { RelationProvider.SubstanceGroupSubstance.Set(value, this); }
}
On the one side, in this case the SubstanceGroup, the snippet looks like:
public virtual ISet<Substance> Substances
{
    get { return RelationProvider.SubstanceGroupSubstance.GetManyPart(this); }
    protected set { RelationProvider.SubstanceGroupSubstance.Set(this, value); }
}
Just using my domain objects, this works excellently. In the domain object I just have to reference an abstract factory that retrieves the appropriate relation, and I can set the relation from one side, which thus automatically becomes bidirectional.
However, when NH kicks in, the problem is that I get duplicate keys in my dictionaries. Somehow NH sets a relation property with a null value(!) using a new copy(?) of a domain object. So when the domain object gets saved, I have two entries for that domain object in, for example, the many side of the relation, i.e. the _manyToOne dictionary.
This problem is making me lose my hair; I do not get what is happening.
To answer your first, very general question, "Does NHibernate really deliver transparent persistence?", I can only say: nothing is perfect. NH tries its best to be as transparent as possible, while also trying to keep its complexity as low as possible.
There are some assumptions, particularly regarding collections: collections and their implementations are not considered to be part of your domain model. NH provides its own collection implementations. You are not only expected to use interfaces like ISet and IList; you should also keep the instances given by NH when the object is read from the database and never replace them with your own. (I don't know what your relation class is actually used for, so I don't know whether this is the problem here.)
Domain objects are unique within the same session instance. If you get new instances of domain objects each time, you have probably implemented the "session-per-call" anti-pattern, which creates a new session for each database interaction.
I don't have a clue what you are actually doing. What is this OneToManyRelation actually used for? What are you doing when NH doesn't behave as expected? This is a very specific problem with your specific implementation.
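To illustrate the collection assumption only (the names here are made up, and this is not your relation class): the usual pattern is to expose the collection instance NH gives you and mutate it, never to assign a fresh set over it:
public class SubstanceGroup
{
    // NH replaces this instance with its own persistent collection when the
    // entity is loaded; after that, only add/remove items, don't overwrite it.
    private ISet<Substance> _substances = new HashedSet<Substance>();

    public virtual ISet<Substance> Substances
    {
        get { return _substances; }
        protected set { _substances = value; } // only NH should call this
    }

    public virtual void AddSubstance(Substance substance)
    {
        _substances.Add(substance); // mutate the existing instance
    }
}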
Besides the comments on 'convoluted code' and 'what the heck are you doing', the problem was that I was replacing the persistent collections of NH, as in the code snippet below:
public void Add(TOnePart onePart, TManyPart manyPart)
{
    if (onePart == null || manyPart == null) return;
    if (!_manyToOne.ContainsKey(manyPart)) _manyToOne.Add(manyPart, onePart);
    else _manyToOne[manyPart] = onePart;
    if (!_oneToMany.ContainsKey(onePart)) _oneToMany.Add(onePart, new HashedSet<TManyPart>());
    _oneToMany[onePart].Add(manyPart);
}
I create a new HashedSet for the many part, and that was the problem. If it had just set the many part with the collection coming in (in the case of NH's persistent collection implementation), it would have worked.
As an NH newbie, this replacing of collections with NH's special implementation has been an important source of errors for me. Just a warning to other NH newbies.
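A rough illustration of that idea (this is illustrative only, not necessarily the exact code): the collection-taking Set can simply hold on to whatever instance comes in, instead of wrapping it in a new HashedSet:
public void Set(TOnePart onePart, Iesi.Collections.Generic.ISet<TManyPart> manyPart)
{
    if (onePart == null || manyPart == null) return;
    // Keep the incoming instance; when NH hydrates the entity this is its own
    // persistent collection and must not be copied into a new HashedSet.
    _oneToMany[onePart] = manyPart;
    foreach (var part in manyPart)
    {
        _manyToOne[part] = onePart;
    }
}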

NHibernate Validator: Using Attributes vs. Using ValidationDefs

I've been using NH Validator for some time, mostly through ValidationDefs, but I'm still not sure about two things:
Is there any special benefit of using ValidationDef for simple/standard validations (like NotNull, MaxLength etc)?
I'm worried about the fact that those two methods throw different kinds of exceptions on validation, for example:
ValidationDef's Define.NotNullable() throws PropertyValueException
When using [NotNull] attribute, an InvalidStateException is thrown.
This makes me think mixing these two approaches isn't a good idea - it will be very difficult to handle validation exceptions consistently. Any suggestions/recommendations?
ValidationDef is probably more suitable for business-rules validation, although, having said that, I have used it even for simple validation. There's more here.
What I like about ValidationDef is that it has a fluent interface.
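For example, one of the external definitions registered further down (Data.NH.Validation.User) might look roughly like this; the property names are invented and the exact fluent methods depend on your NHV version:
public class User : ValidationDef<Domain.User>
{
    public User()
    {
        // Fluent, strongly typed rules instead of attributes on the entity
        Define(x => x.UserName)
            .MaxLength(50)
            .And.NotNullableAndNotEmpty();

        Define(x => x.Email)
            .NotNullable();
    }
}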
I've been playing around with this engine for quite a while and I've put together something that works quite well for me.
I've defined an interface:
public interface IValidationEngine
{
    bool IsValid(Entity entity);
    IList<Validation.IBrokenRule> Validate(Entity entity);
}
Which is implemented in my validation engine:
public class ValidationEngine : Validation.IValidationEngine
{
    private NHibernate.Validator.Engine.ValidatorEngine _Validator;

    public ValidationEngine()
    {
        var vtor = new NHibernate.Validator.Engine.ValidatorEngine();
        var configuration = new FluentConfiguration();
        configuration
            .SetDefaultValidatorMode(ValidatorMode.UseExternal)
            .Register<Data.NH.Validation.User, Domain.User>()
            .Register<Data.NH.Validation.Company, Domain.Company>()
            .Register<Data.NH.Validation.PlanType, Domain.PlanType>();
        vtor.Configure(configuration);
        this._Validator = vtor;
    }

    public bool IsValid(DomainModel.Entity entity)
    {
        return (this._Validator.IsValid(entity));
    }

    public IList<Validation.IBrokenRule> Validate(DomainModel.Entity entity)
    {
        var Values = new List<Validation.IBrokenRule>();
        NHibernate.Validator.Engine.InvalidValue[] values = this._Validator.Validate(entity);
        if (values.Length > 0)
        {
            foreach (var value in values)
            {
                Values.Add(
                    new Validation.BrokenRule()
                    {
                        // Entity = value.Entity as BpReminders.Data.DomainModel.Entity,
                        // EntityType = value.EntityType,
                        EntityTypeName = value.EntityType.Name,
                        Message = value.Message,
                        PropertyName = value.PropertyName,
                        PropertyPath = value.PropertyPath,
                        // RootEntity = value.RootEntity as DomainModel.Entity,
                        Value = value.Value
                    });
            }
        }
        return (Values);
    }
}
I plug all my domain rules in there.
I bootstrap the engine at the app startup:
For<Validation.IValidationEngine>()
    .Singleton()
    .Use<Validation.ValidationEngine>();
Now, when I need to validate my entities before save, I just use the engine:
if (!this._ValidationEngine.IsValid(User))
{
    BrokenRules = this._ValidationEngine.Validate(User);
}
and return, eventually, the collection of broken rules.

An NHibernate audit trail that doesn't cause "collection was not processed by flush" errors

Ayende has an article about how to implement a simple audit trail for NHibernate (here) using event handlers.
Unfortunately, as can be seen in the comments, his implementation causes the following exception to be thrown: collection xxx was not processed by flush()
The problem appears to be the implicit call to ToString on the dirty properties, which can cause trouble if the dirty property is also a mapped entity.
I have tried my hardest to build a working implementation but with no luck.
Does anyone know of a working solution?
I was able to solve the same problem using the following workaround: set the processed flag to true on all collections in the current persistence context within the listener:
public void OnPostUpdate(PostUpdateEvent postEvent)
{
    if (IsAuditable(postEvent.Entity))
    {
        // application-specific code skipped
        foreach (var collection in postEvent.Session.PersistenceContext.CollectionEntries.Values)
        {
            var collectionEntry = collection as CollectionEntry;
            collectionEntry.IsProcessed = true;
        }
        //var session = postEvent.Session.GetSession(EntityMode.Poco);
        //session.Save(auditTrailEntry);
        //session.Flush();
    }
}
Hope this helps.
The fix should be the following. Create a new event listener class and derive it from NHibernate.Event.Default.DefaultFlushEventListener:
[Serializable]
public class FixedDefaultFlushEventListener : DefaultFlushEventListener
{
    private static readonly log4net.ILog log = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

    protected override void PerformExecutions(IEventSource session)
    {
        if (log.IsDebugEnabled)
        {
            log.Debug("executing flush");
        }
        try
        {
            session.ConnectionManager.FlushBeginning();
            session.PersistenceContext.Flushing = true;
            session.ActionQueue.PrepareActions();
            session.ActionQueue.ExecuteActions();
        }
        catch (HibernateException exception)
        {
            if (log.IsErrorEnabled)
            {
                log.Error("Could not synchronize database state with session", exception);
            }
            throw;
        }
        finally
        {
            session.PersistenceContext.Flushing = false;
            session.ConnectionManager.FlushEnding();
        }
    }
}
Register it during NHibernate configuration:
cfg.EventListeners.FlushEventListeners = new IFlushEventListener[] { new FixedDefaultFlushEventListener() };
You can read more about this bug in the Hibernate JIRA:
https://hibernate.onjira.com/browse/HHH-2763
The next release of NHibernate should include that fix as well.
This is not easy at all. I wrote something like this, but it is very specific to our needs and not trivial.
Some additional hints:
You can test if references are loaded using
NHibernateUtil.IsInitialized(entity)
or
NHibernateUtil.IsPropertyInitialized(entity, propertyName)
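For example, in a post-update listener you could walk the entity state and skip anything that is not initialized yet (the names here are illustrative; the point is only the IsInitialized guard):
public void OnPostUpdate(PostUpdateEvent postEvent)
{
    var propertyNames = postEvent.Persister.PropertyNames;
    for (int i = 0; i < propertyNames.Length; i++)
    {
        object value = postEvent.State[i];
        // Touching an uninitialized proxy or collection here would trigger
        // lazy loading in the middle of the flush, so leave it alone.
        if (!NHibernateUtil.IsInitialized(value))
            continue;
        // ... record propertyNames[i] and value in the audit entry
    }
}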
You can cast collections to IPersistentCollection. I implemented an IInterceptor where I get the NHibernate type of each property; I don't know where you can get this when using events:
if (nhtype.IsCollectionType)
{
    var collection = previousValue as NHibernate.Collection.IPersistentCollection;
    if (collection != null)
    {
        // just skip uninitialized collections
        if (!collection.WasInitialized)
        {
            // skip
        }
        else
        {
            // read the collection's previous values
            previousValue = collection.StoredSnapshot;
        }
    }
}
When you get the update event from NHibernate, the instance is initialized. You can safely access properties of primitive types. When you want to use ToString, make sure that your ToString implementation doesn't access any referenced entities or collections.
You may use NHibernate metadata to find out whether a type is mapped as an entity or not. This can be useful for navigating your object model. When you reference another entity, you will get additional update events on it when it changes.
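For instance, a small helper along these lines (assuming you have the ISessionFactory available) can tell you whether a given type is mapped as an entity:
private static bool IsMappedEntity(ISessionFactory sessionFactory, System.Type type)
{
    // GetClassMetadata returns null for types that are not mapped as entities.
    return sessionFactory.GetClassMetadata(type) != null;
}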
I was able to determine that this error is thrown when application code loads a lazy property where the entity has a collection.
My first attempt involved watching for new CollectionEntries (which I never want to process, as there shouldn't actually be any changes) and then marking them as IsProcessed = true so they wouldn't cause problems.
var collections = args.Session.PersistenceContext.CollectionEntries;
var collectionKeys = args.Session.PersistenceContext.CollectionEntries.Keys;
var roundCollectionKeys = collectionKeys.Cast<object>().ToList();
var collectionValuesCount = collectionKeys.Count;

// Application code that loads a lazy property where the entity has a collection

var postCollectionKeys = collectionKeys.Cast<object>().ToList();
var newLength = postCollectionKeys.Count;
if (newLength != collectionValuesCount)
{
    foreach (var newKey in postCollectionKeys.Except(roundCollectionKeys))
    {
        var collectionEntry = (CollectionEntry)collections[newKey];
        collectionEntry.IsProcessed = true;
    }
}
However, this didn't entirely solve the issue. In some cases I'd still get the exception.
When OnPostUpdate is called, the values in the CollectionEntries dictionary should all already be set to IsProcessed = true. So I decided to do an extra check to see whether the collections not yet processed matched what I expected.
var valuesNotProcessed = collections.Values.Cast<CollectionEntry>().Where(x => !x.IsProcessed).ToList();
if (valuesNotProcessed.Any())
{
    // Assert: valuesNotProcessed.Count() == (newLength - collectionValuesCount)
}
In the cases my first attempt fixed, these numbers would match exactly. However, in the cases where it didn't work, there were extra items already in the dictionary. In my case I could be sure these extra items also wouldn't result in updates, so I could just set IsProcessed = true for all the valuesNotProcessed.
In the cases that my first attempt fixed these numbers would match exactly. However in the cases where it didn't work there were extra items alreay in the dictionary. In my I could be sure these extra items also wouldn't result in updates so I could just set IsProcessed = true for all the valuesNotProcessed.