How to duplicate an entity with all of its properties and collections - jspresso

The standard Jspresso action cloneEntityCollectionFrontAction allows duplicating the selected rows in a table.
The duplication is limited to the current model and does not take collections into account (i.e. the collections are not automatically duplicated).
How can I deeply duplicate an entity with all of its collections?
Second, related question: I tried to write an action myself to duplicate the collections. Below is part of the action I wrote:
// bc: the backend controller; curOf: the offer being duplicated
Offer newOffer = bc.getEntityFactory().createEntityInstance(Offer.class);
Offer clonedNewOffer = bc.cloneInUnitOfWork(newOffer);
clonedNewOffer.setCustomer(curOf.getCustomer());
clonedNewOffer.setEndApplicationDate(curOf.getEndApplicationDate());
clonedNewOffer.setName(curOf.getName());
clonedNewOffer.setStartApplicationDate(curOf.getStartApplicationDate());
I called the getter and setter for each property, which is not satisfactory: if I add a new property or collection to the model, the method has to be updated manually.
Is there a smarter, more flexible way to write this?
Hi Vincent,
Following your answer and your latest proposal, I changed my backend code to the following:
Offer newOffer = bc.getEntityFactory().createEntityInstance(Offer.class);
Offer clonedNewOffer = bc.cloneInUnitOfWork(newOffer);
CarbonEntityCloneFactory.carbonCopyComponent(curOf, clonedNewOffer, bc.getEntityFactory());
bc.registerForUpdate(clonedNewOffer);
But registerForUpdate fails with a "Data constraints are not satisfied" error.
I checked the Id property of clonedNewOffer, and it is the same as the Id property of curOf.
I understand that a "carbon copy" is a strict copy of all the properties, so, from the backend,
how can I duplicate an entity in order to create a new one?

Both CloneComponentCollectionAction and CloneComponentAction perform the actual component and entity cloning using a configurable strategy that implements IEntityCloneFactory. Jspresso provides 3 implementations of this interface:
CarbonEntityCloneFactory deals with scalar cloneable properties but ignores all relationships. It is almost never used directly by application code.
SmartEntityCloneFactory inherits from CarbonEntityCloneFactory and deals with relationships in the following way:
it clones the references if they are compositions, or assigns the same references to the clone;
it adds the cloned component to the same collections the original belongs to.
HibernateAwareSmartEntityCloneFactory inherits from SmartEntityCloneFactory and deals with lazily-initialized properties. This is the implementation used by default if you use a Hibernate backend.
As a rule of thumb, you can expect the SmartEntityCloneFactory to do what you expect with references but to ignore dependent collections, in order to avoid overly deep recursive cloning; so what you've experienced is by design. If you feel there is room for improvement, feel free to open a feature request on the Jspresso GitHub. Thinking about it, we could maybe do better with composition dependent collections.
When you want deeper cloning than what the SmartEntityCloneFactory (or HibernateAwareSmartEntityCloneFactory) provides, the way to go is to create your own cloning strategy. Of course, you can inherit from the default strategy and complete the cloning by overriding the cloneEntity method: call the super implementation, then deal specifically with the collections you want to clone.
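For illustration, here is a minimal sketch of such a custom strategy. The Offer and OfferLine entities, their getLines()/addToLines() accessors and the exact cloneEntity signature are assumptions made for the example; check the IEntityCloneFactory contract in your Jspresso version.
public class CustomEntityCloneFactory extends HibernateAwareSmartEntityCloneFactory {

  @Override
  public <E extends IEntity> E cloneEntity(E entityToClone, IEntityFactory entityFactory) {
    // Let the default smart strategy clone scalar properties and references.
    E clone = super.cloneEntity(entityToClone, entityFactory);
    if (entityToClone instanceof Offer) {
      Offer original = (Offer) entityToClone;
      Offer clonedOffer = (Offer) clone;
      // Deep-clone the dependent collection that the default strategy skips on purpose.
      for (OfferLine line : original.getLines()) {
        clonedOffer.addToLines(cloneEntity(line, entityFactory));
      }
    }
    return clone;
  }
}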
Once your strategy is implemented, just inject it either globally in the application by replacing the default one, i.e.:
bean('smartEntityCloneFactory', class: 'your.CustomEntityCloneFactory',
parent: 'smartEntityCloneFactoryBase')
or specifically on one of the clone actions of your application by injecting your custom strategy on the action, e.g.:
bean('myCustomEntityCloneFactory', class: 'your.CustomEntityCloneFactory',
parent: 'smartEntityCloneFactoryBase')
action('customCloneAction', parent: 'cloneEntityCollectionFrontAction',
custom:[entityCloneFactory_ref: 'myCustomEntityCloneFactory']
)
Regarding your second related question, if you are inside your entity clone factory implementation (or have access to an instance of it) and want to clone an entity or a component using the strategy, just call the cloneComponent or cloneEntity method.
If you just want to copy all the scalar properties of an entity or component onto a clone and don't have access to a clone factory, you can use the following static utility method:
CarbonEntityCloneFactory.carbonCopyComponent(IComponent, IComponent, IEntityFactory)
Using the above method will address the robustness concern in your implementation.

Related

Neo4j - Wrapping/Inheriting a Node object

I'm developing a kind of social network with neo4j, and I wanted to make my Node object a bit more specific to my own needs. Is it considered good practice to wrap a neo4j Node object or to inherit from it?
My problem with the wrapping approach arises when indexing the node objects with the built-in Lucene engine. For example, what benefit do I gain if I wrap my Node object with a "Profile" class (with methods such as "addFriend", "setFirstName", etc.), when, on the other hand, whenever I run a query against my index I get back raw Node objects and not my wrapped objects? I could make a dirty solution for this case by saving a reference to the wrapped object inside my node properties, but doing that seems very strange to me.
What would you recommend doing in such a case, in order to get clean and well designed code?
Thanks.
I have found that wrapping a Node does not lead to very maintainable code/design. As you mentioned, one thing you need to take care of is not returning a Node but translating it to your domain object.
If your object has mostly getX methods, then you can just execute Cypher queries, compose your domain object(s) and return those. You don't even need to wrap the Node in this case; all you need is some property that you can use to look up the Node.
If you have setX methods, then you can update the Node via Cypher statements, either via a save that updates all properties or on each setX (not great, as you'd be updating too often, and the setX method now implies persistence). Neither of the two approaches requires the Node to be wrapped.
I tried in earlier projects to wrap the Node but found that it leads to much more trouble and a generally smelly design. Now I work with pure domain POJOs and keep Neo4j code in the persistence layer only, and this works much better for me. You haven't mentioned which language you're using; if Java, then I believe Spring Data can take care of a lot of boilerplate code.
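To illustrate the pure-POJO approach in Java (the Person class and the firstName property are invented for the example; only the org.neo4j.graphdb.Node accessors are actual Neo4j API), the translation can live in a small factory method in the persistence layer:
import org.neo4j.graphdb.Node;

// Plain domain object: no Neo4j types leak out of the persistence layer.
public class Person {
  private final long nodeId;      // kept only so the Node can be looked up again
  private final String firstName;

  public Person(long nodeId, String firstName) {
    this.nodeId = nodeId;
    this.firstName = firstName;
  }

  // The Node-to-domain translation happens here, not in the rest of the app.
  public static Person fromNode(Node node) {
    return new Person(node.getId(), (String) node.getProperty("firstName"));
  }

  public String getFirstName() {
    return firstName;
  }
}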
Put your search code into the class it belongs to.
If you need something like getFriends from a Post class, create a fromPosts method in the Person class and a getFriends method in Post.
From Post, call the query on the Person class, execute it, and return an Array / List of the nodes mapped into Person instances.
So your getFriends method into the Post class will be something like:
Person.fromPosts(self).results.map { |node| Person.new(node) }
It is simple to do: just map the result with Person.new (or new Person, depending on which language you are using) and pass the node to the Person. This means you must have a constructor or factory method that populates the object from a node.
Spring Data Neo4j is the definitive solution to your need: it maps annotated entity classes to Neo4j with advanced mapping functionality and provides access to nodes and relationships at different levels of abstraction.

How to raise Play's max_fetch_depth?

I am getting NullPointerException when accessing member fields only 3 levels deep in my view template:
#tfz.modelTfzTyp.simulierteTfzTyp.typ
If I use getter functions instead, it works. But it is cumbersome.
I am using Ebean, and I read that Hibernate has a max_fetch_depth. I suspect that something similar is causing my problems. How do I make Play load more objects eagerly?
This has nothing to do with the max_fetch_depth property.
Dynamic fetching is enabled by bytecode enhancement of the models, and it only works through the getters.
See the official documentation:
Enhancement of direct Ebean field access (enabling lazy loading) is only applied to Java classes, not to Scala. Thus, direct field access from Scala source files (including standard Play 2 templates) does not invoke lazy loading, often resulting in empty (unpopulated) entity fields. To ensure the fields get populated, either (a) manually create getter/setters and call them instead, or (b) ensure the entity is fully populated before accessing the fields.
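For option (b), one way (sketched here assuming a Tfz entity and the association names from the question; the exact finder code depends on your model classes) is to fetch the association path eagerly before handing the entity to the template:
// Eagerly fetch the nested associations used by the template so the fields
// are already populated when the template accesses them directly.
Tfz tfz = Ebean.find(Tfz.class)
    .fetch("modelTfzTyp")
    .fetch("modelTfzTyp.simulierteTfzTyp")
    .where().idEq(id)
    .findUnique();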

Repository design for complex objects?

What is the best way to design repositories for complex objects, assuming use of an ORM such as NHibernate or Entity Framework?
I am creating an app using Entity Framework 4. The app uses complex objects--a Foo object contains a collection of Bar objects in a Foo.Bars property, and so on. In the past, I would have created a FooRepository, and then a BarRepository, and I would inject a reference to the BarRepository into the FooRepository constructor.
When a query is passed to the FooRepository, it would call on the BarRepository as needed to construct the Foo.Bars property for each Foo object. And when a Foo object is passed to the FooRepository for persistence, the repository would call the BarRepository to persist the objects in the Foo.Bars property.
My question is pretty simple: Is that a generally accepted way to set up the repositories? Is there a better approach? Thanks for your help.
In domain-driven design, there is the concept of an "aggregate root" object. The accepted answer to a related question has good information on what it is and how you would use it in your design. I don't know about Entity Framework, but NHibernate does not require the usage pattern you are describing. As long as all the nested objects and their relationships are properly mapped to tables, saving the aggregate root will also save all of its child objects. The exception is when a nested object has specific business logic that needs to be performed as part of its access or persistence. In that case, you would need to pass in the "child" repositories so you are not duplicating that business logic.
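As a rough, JPA-flavoured Java sketch of that idea (class names and mappings are invented for the example): once Bar is mapped as a cascading child of Foo, a single FooRepository saves the whole aggregate and no BarRepository has to be injected.
import java.util.ArrayList;
import java.util.List;
import javax.persistence.*;

@Entity
public class Foo {
  @Id @GeneratedValue
  private Long id;

  // Cascading means the Bars are persisted/updated whenever the owning Foo is saved.
  @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
  private List<Bar> bars = new ArrayList<>();

  public List<Bar> getBars() {
    return bars;
  }
}

public class FooRepository {
  private final EntityManager em;

  public FooRepository(EntityManager em) {
    this.em = em;
  }

  // Saving the aggregate root also saves its child Bars.
  public Foo save(Foo foo) {
    return em.merge(foo);
  }

  public Foo findById(Long id) {
    return em.find(Foo.class, id);
  }
}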
The repository pattern helps group business transactions among related entities. Meaning, if you have two domain objects Foo and Bar that share common transactions like GetList() and Update(), then a common repository like FooBarRepository can be created. You can even abstract that into an interface called IFooBarRepository to keep the application loosely coupled.

DDD: Repositories are in-memory collections of objects?

I've noticed Repository is usually implemented in either of the following ways:
Method 1
void Add(object obj);
void Remove(object obj);
object GetBy(int id);
Method 2
void Save(object obj); // Used both for Insert and Update scenarios
void Remove(object obj);
object GetBy(int id);
Method 1 has collection semantics (which is how repositories are defined). We can get an object from a repository and modify it. But we don't tell the collection to update it. Implementing a repository this way requires another mechanism for persisting the changes made to an in-memory object. As far as I know, this is done using Unit of Work. However, some argue that UoW is only required when you need transaction control in your system.
Method 2 eliminates the need for a UoW. You can call the Save() method and it determines whether the object is new and should be inserted, or modified and should be updated. It then uses the data mappers to persist the changes to the database. Whilst this makes life much easier, a repository modeled this way doesn't have collection semantics. This model has DAO semantics.
I'm really confused about this. If repositories mimic in-memory collection of objects, then we should model them according to Method 1.
What are your thoughts on this?
Mosh
I personally have no issue with the Unit of Work pattern being a part of the solution. Obviously, you only need it for the CUD in CRUD. The fact that you are implementing a UoW pattern, though, does nothing more than dictate that you have a set of operations that need to go as a batch. That is slightly different than saying it needs to be a part of a transaction. If you abstract your repositories well enough, your UoW implementation can be agnostic to the backing mechanism that you are using - whether it is database, XML, etc.
As to the specific question, I think the differences between method one and method two are trivial, if for no other reason than that most instances of method two contain a check to see whether the identifier is set: if set, treat it as an update, otherwise treat it as an insert. This logic is often built into the repository and is more for simplification of the exposed interface, in my opinion. The repository's purpose is to broker objects between a consumer and a data source and to remove the need for direct knowledge of the data source. I go with method two, because I trust the simple logic of detecting an identifier more than having to rely on tracking object states all over the application.
The fact that the terminology for repository usage is so similar to both data access and object collections lends to the confusion. I just treat them as their own first-class citizen and do what is best for the domain. ;-)
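A minimal Java sketch of that identifier check (Customer, DataMapper and the null-id convention are invented for the example):
// Hypothetical entity: a null id means it has never been persisted.
class Customer {
  private Long id;
  public Long getId() { return id; }
}

// Hypothetical persistence abstraction sitting behind the repository.
interface DataMapper<T> {
  void insert(T entity);
  void update(T entity);
}

public class CustomerRepository {
  private final DataMapper<Customer> mapper;

  public CustomerRepository(DataMapper<Customer> mapper) {
    this.mapper = mapper;
  }

  // Method 2 style: one Save() hides the insert-vs-update decision from the caller.
  public void save(Customer customer) {
    if (customer.getId() == null) {
      mapper.insert(customer);  // no identifier yet: treat as a new object
    } else {
      mapper.update(customer);  // identifier present: treat as an update
    }
  }
}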
Maybe you want to have:
T Persist(T entityToPersist);
void Remove(T entityToRemove);
"Persist" being the same as "Save Or Update" or "Add Or Update" - ie. the Repo encapsulates creating new identities (the db may do this) but always returns the new instance with the identity reference.

Pattern for Ownership and References Between Multiple Controllers and Semi-Shared Objects?

For example, I have window (non-document model) - it has a controller associated with it. Within this window, I have a list and an add button. Clicking the add button brings up another "detail" window / dialog (with an associated controller) that allows the user to enter the detail information, click ok, and then have the item propagated back to the original window's list. Obviously, I would have an underlying model object that holds a collection of these entities (let's call the singular entity an Entity for reference).
Conceivably, I have just one main window, so I would likely have only one collection of entities. I could stash it in the main window's controller, but then how do I pass it to the detail window? I probably don't want to be passing this collection around: it's difficult to read / maintain / multithread. I could pass a reference to the parent controller and use it to access the collection, but that seems to smell as well.
I could stash it in the appDelegate and then access it as a "global" variable via [[NSApplication sharedApplication] delegate], but that seems a little excessive, considering an app delegate doesn't really have anything to do with the model. Another global-variable style could be an option: I could make the Entity class have a singleton factory for the collection and class methods to access the collection. That seems like a bigger abuse than the appDelegate, especially considering the Entity object and the collection of said entities are two separate concerns.
I could create an EntityCollection class that has a singleton factory method and then object methods for interaction with the collection (or split it into a true factory class and a collection class for a little more OO goodness and easy replacement with test objects). If I was using the NSDocument model, I guess I could stash it there, but that's not much different from stashing it in the application delegate (although the NSDocument itself does seemingly represent the model in some fashion).
I've spent quite a bit of time lately on the server side, so I haven't had to deal with the client-side much, and when I have, I just brute forced a solution. In the end, there are a billion ways to skin this cat, and it just seems like none of them are terribly clean or pretty. What is the generally accepted Cocoa programmer's way of doing this? Or, better yet, what is the optimum way to do this?
I think your conceptual problem is that you're thinking of the interface as the core of the application and the data model as something you have to find a place to cram somewhere.
This is backwards. The data model is the core of the program and everything else is grafted onto the data model. The model should encapsulate all the logical operations that can be performed on the data. An interface, GUI or otherwise, merely sends messages to the data model requesting certain actions.
Starting from this concept, it's easy to see that having the data model universally accessible is not sloppy design. Since the model contains all the logic for altering the data, you can have an arbitrarily large number of interfaces accessing it without the data becoming muddled or the code complicated, because the model changes the data only according to its own internal rules.
The best way to accomplish universal access is to create a singleton producing class and then put the header for the class in the application prefix headers. That way, any object in the app can access the data model.
Edit01:
Let me clarify the important difference between a naked global variable and a globally accessible class encapsulated data model.
Historically, we viewed global variables as bad design because they were just raw variables. Any part of the code could alter them at will. This nakedness led to obvious problems, as you had to continuously guard against some stray fragment of code altering the global and bringing the app down.
However, with a class-based global, the global variable is encapsulated and protected by the logic implemented by the encapsulating class. This encapsulation means that while any stray fragment of code may attempt to alter the global variable inside the class, it can only do so if the encapsulating class permits the alteration. The automatic validation reduces the complexity of the code because all the validation logic resides in one single class instead of being spread out all over the app in any random place where the data might be manipulated.
Instead of creating a weak point as in the case of a naked global variable, you create strong and universal validation and management of the data. If you find a problem with the data management, you only have to fix it in one place. Once you have a properly configured data model, the rest of the app becomes ridiculously easy to write.
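The thread is about Cocoa, but the encapsulated-global idea is language-agnostic; here is a minimal sketch in Java rather than Objective-C (the EntityStore name and the validation rule are invented for the example):
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public final class EntityStore {
  private static final EntityStore INSTANCE = new EntityStore();
  private final List<String> entityNames = new ArrayList<>();

  private EntityStore() {
  }

  public static EntityStore sharedStore() {
    return INSTANCE;
  }

  // Every mutation passes through the model's own validation, so a stray
  // caller anywhere in the app cannot corrupt the collection.
  public void addEntity(String name) {
    if (name == null || name.trim().isEmpty()) {
      throw new IllegalArgumentException("entity name must not be empty");
    }
    entityNames.add(name);
  }

  public List<String> entityNames() {
    return Collections.unmodifiableList(entityNames);
  }
}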
My initial reaction would be to use a "modal delegate," a lot like NSAlerts do. You'd create your detail window by passing a reference to a delegate, which the detail window would message when it is done creating the object. The delegate—which would probably be the controller for the main window—could then handle the "done editing" message and add the object to the collection. I'd tend to not want to pass the collection around directly.
I support the EntityCollection class. If you have a list of objects, that list should be managed outside a specific controller, in my opinion.
I use the singleton method where the class itself manages its own collections, setup and teardown. I find this separates the database/storage functionality from the controllers and keeps things clean. It's nice and easy to just call [Object objects] and have it return a reference to my list of objects.