Android ORMLite and create if not exists logic - crud

Create if not exists, IMHO, means that the ORM will create a record in the table only if it doesn't already exist.
I thought ORMLite used this logic by default, but the DAO's create method just creates duplicate records. I have tried the createOrUpdate and createIfNotExists methods, but neither works the way I'd like.
I've found another SO question which explains that createOrUpdate performs its check on IDs, which have default (null or 0) values in my DTOs.
So do I need to create my own DAOs, inherit them from ORMLite's BaseDaoImpl class and override the createIfNotExists method? This is the only idea that has come to mind so far. Is there a better way to implement the desired logic?
EDIT
I've tried overriding the createIfNotExists method in a custom DAO, but it seems the override only takes effect when called directly; that is, persisting the top-level table does not call createIfNotExists for its foreign-key objects.
It seems I need to define a CRUD interface with my own methods and implement them on top of BaseDaoImpl's methods.

You may want to use my tiny library, which adds such a feature:
DuplicatelessORMLite

Related

Class with a list of materials: best practice

I've created a custom class ZMaterial that can be instantiated by passing an ID to the constructor, which sets the properties for a single material using SELECTs and BAPIs. This class is basically used to READ and UPDATE a single material.
Now I need to create a service to return a list of materials. I already have the procedural code for it in a static method (for now actually a function module), but I would like to keep using a full OOP approach and instantiate a list of my custom material objects. The first approach I found is to enhance the static method to instantiate a list of my single-material objects after the SELECTs are executed and I have the data in internal tables, but it does not seem like the most OOP way.
The second option in my mind is to create a new class ZMaterialList with one property being a list of objects ZMaterial and then a constructor with the necessary input parameters for the database select. The problem I see with this option is that I create a full class just for the constructor.
What do you think is the best way to proceed?
Create a separate class to produce the list of materials. The single responsibility principle says each class should do exactly one thing. In all but the most simple cases, using a thing is a different responsibility than producing it.
Don’t make a ZMaterialList class. A list’s focus would be managing the list items, i.e. adding, removing, iterating, sorting etc. But you should be fine with a regular STANDARD TABLE OF REF TO ZMaterial.
Make a ZMaterialReader, -Repository, -Query or -Factory class or the like, depending on the precise way you want to produce the ZMaterials. Readers read by keys, repositories read and write, queries use varying sets of selection criteria, factories instantiate with possibly different sets of inputs.
You can well let that class use the original FUNCTION underneath. It’s good style to exploit what’s already there. Just make sure you trust that code, put it in a test harness, and keep it away from the rest of your OO code.
Extract all public interaction of ZMaterial to an interface and use only that interface. That allows you to offer alternative implementations of ZMaterial, ones that differ in the way they are produced or how they store their data.
Split single production from mass production. Reading MARA to retrieve a single material is okay. But you don’t want thousands of ZMaterials reading MARA individually - that wrecks performance.
Now that you’ve got the interface, you could offer a second implementation of ZMaterial whose constructor receives all relevant data and relies on it already having been validated, to avoid additional SELECTs.
You could also offer an implementation that doesn’t store its data at all but only stores pointers to rows in internal tables somewhere else. See the flyweight pattern for ideas.
If you expect mass updates on the materials, such as “reclassify all of these as B”, consider extracting these list-oriented operations to separate classes as well.

How to duplicate an entity with all of its properties and collections

The standard Jspresso action cloneEntityCollectionFrontAction allows duplicating the selected rows in a table.
The duplication is limited to the current model and does not take collections into account if they exist (i.e., the collections are not automatically duplicated).
How can I deeply duplicate an entity with all of its collections?
Second related question: I tried to write an action myself in order to duplicate the collections. Below is part of the action I wrote:
Offer newOffer = bc.getEntityFactory().createEntityInstance(Offer.class);
Offer clonedNewOffer = bc.cloneInUnitOfWork(newOffer);
clonedNewOffer.setCustomer(curOf.getCustomer());
clonedNewOffer.setEndApplicationDate(curOf.getEndApplicationDate());
clonedNewOffer.setName(curOf.getName());
clonedNewOffer.setStartApplicationDate(curOf.getStartApplicationDate());
I called the getter and setter for each property, which is not satisfying because if I add a new property or collection to the model, the method must be updated manually.
Is there a way to write a smarter / more flexible method?
Hi Vincent,
Regarding your answer and your latest proposal, I changed my backend to the following:
Offer newOffer = bc.getEntityFactory().createEntityInstance(Offer.class);
Offer clonedNewOffer = bc.cloneInUnitOfWork(newOffer);
CarbonEntityCloneFactory.carbonCopyComponent(curOf, clonedNewOffer, bc.getEntityFactory());
bc.registerForUpdate(clonedNewOffer);
But registerForUpdate failed with a "Data constraints are not satisfied" error.
I checked the Id property of clonedNewOffer, and the Id is already the same as curOf's Id property.
I understand the meaning of a "carbon copy", which is a strict copy of all the properties. So, from the backend, how can I duplicate an entity in order to create a new one?
Both CloneComponentCollectionAction and CloneComponentAction perform the actual component and entity cloning using a configurable strategy that implements IEntityCloneFactory. Jspresso provides 3 implementations of this interface:
CarbonEntityCloneFactory, which deals with scalar cloneable properties but ignores all relationships. It's almost never used directly by application code.
SmartEntityCloneFactory, which inherits from CarbonEntityCloneFactory and deals with relationships in the following way:
it clones references if they are compositions, or assigns the same references to the clone;
it adds the cloned component to the same collections the original belongs to.
HibernateAwareSmartEntityCloneFactory, which inherits from SmartEntityCloneFactory and deals with lazily initialized properties. This is the implementation used by default if you use a Hibernate backend.
As a rule of thumb, you can expect the SmartEntityCloneFactory to do what you expect with references but to ignore dependent collections in order to avoid overly deep recursive cloning; so what you've experienced is by design. If you feel there is room for improvement, feel free to open a feature request on the Jspresso GitHub. Thinking about it, we could maybe do better with composition dependent collections.
When you want deeper cloning than what's provided by the SmartEntityCloneFactory (or HibernateAwareSmartEntityCloneFactory), the way to go is to create your own cloning strategy. Of course, you can inherit from the default strategy and complete the cloning by overriding the cloneEntity method, calling the super implementation, and dealing specifically with the collections you want to clone.
Once your strategy is implemented, just inject it either globally in the application by replacing the default one, i.e.:
bean('smartEntityCloneFactory', class: 'your.CustomEntityCloneFactory',
parent: 'smartEntityCloneFactoryBase')
or specifically on one of the clone actions of your application by injecting your custom strategy on the action, e.g.:
bean('myCustomEntityCloneFactory', class: 'your.CustomEntityCloneFactory',
parent: 'smartEntityCloneFactoryBase')
action('customCloneAction', parent: 'cloneEntityCollectionFrontAction',
custom:[entityCloneFactory_ref: 'myCustomEntityCloneFactory']
)
Regarding your second related question, if you are inside your entity clone factory implementation (or have access to an instance of it) and want to clone an entity or a component using the strategy, just call the cloneComponent or cloneEntity method.
If you just want to copy all the scalar properties of an entity or component on a clone and don't have access to a clone factory, you can use the following static utility method :
CarbonEntityCloneFactory.carbonCopyComponent(IComponent, IComponent, IEntityFactory)
Using the above method will address the robustness concern in your implementation.

Iterate over non null entities in model using linq

How do you iterate over the entities within a model in MVC 4 using Entity Framework 5.0? I'm looking for a more elegant process using LINQ.
Example: AnimalModel may have Cat, Dog, Pig entities. How would I detect just the entities and ignore other properties in the AnimalModel such as isHarry, Name, isWalking, isJumping? Is there a way to do this without using reflection, something within EF5 that allows looking only at non-null entity values?
The main reason I am interested in this technique is to reduce code bloat and perform generic CRUD operations on the data across all entities and sub entities.
Possible Reference: link
I can't see how you can achieve this without using reflection at all.
You could try the following: get all the EF types in the assembly that hosts them, e.g.:
// requires: using System.Linq; using System.Reflection;
var types = from t in Assembly.GetExecutingAssembly().GetTypes()
            where t.IsClass && t.Namespace == "NamespaceWhereEFEntitiesLive"
            select t;
You may need to play around a bit with the above query, but you get the idea.
You can then iterate through the properties of AnimalModel and check whether each property's type is among the types returned above, e.g.:
foreach (var prop in typeof(AnimalModel).GetProperties()) {
    if (types.Contains(prop.PropertyType)) {
        // prop is an entity-typed property of AnimalModel
    }
}
Note that the above loop is a bit of a guess, but the pseudo-code should clarify what I'm trying to explain.
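Putting the two snippets together, here is a minimal, self-contained sketch; the AnimalModel, Cat and Dog classes and the MyApp.Entities namespace are made-up placeholders you would swap for your own types:

using System;
using System.Linq;
using System.Reflection;

namespace MyApp.Entities
{
    // Hypothetical entity types used only for illustration.
    public class Cat { public string Name { get; set; } }
    public class Dog { public string Name { get; set; } }
}

namespace MyApp
{
    using MyApp.Entities;

    // Hypothetical view model mixing entity references and scalar properties.
    public class AnimalModel
    {
        public bool isHarry { get; set; }
        public string Name { get; set; }
        public Cat Cat { get; set; }
        public Dog Dog { get; set; }
    }

    public static class EntityScanner
    {
        // Returns the values of all properties whose type lives in the given
        // entity namespace and which are currently non-null on the model.
        public static object[] GetNonNullEntities(object model, string entityNamespace)
        {
            var entityTypes = Assembly.GetExecutingAssembly().GetTypes()
                                      .Where(t => t.IsClass && t.Namespace == entityNamespace)
                                      .ToArray();

            return model.GetType().GetProperties()
                        .Where(p => entityTypes.Contains(p.PropertyType))
                        .Select(p => p.GetValue(model, null))
                        .Where(v => v != null)
                        .ToArray();
        }
    }
}

Calling EntityScanner.GetNonNullEntities(model, "MyApp.Entities") then returns only the entity references that are actually set, ignoring the scalar properties.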
When you use EF to insert/update, it automatically ignores all irrelevant properties. If you want an implementation that takes properties from existing objects and then applies them to the database, you could use the relatively new upsert.
If you want a custom way to upsert a graph of objects...
If you are using either database-first or model-first (if you have an EDMX), you could use T4 templates to generate code that does this.
If you want this technique to support navigation properties, you will need some sort of assumption to prevent loops e.g. update from one to many, not the other way around and not many-to-many properties, or use the EDMX's optional description to place a hint on which navigation properties to visit.
Using reflection is a simpler solution, although even with reflection you'll need to decide which way to go (e.g. using attributes, which you can get the T4s to add via the above assumptions/tricks).
Alternatively, you could convert this technique (that I wrote) to work with EF, thus explicitly specifying where to visit in the graph in the calling code (using dbset.SaveNavigation(graph, listOfPropertyPaths)) instead of writing complex code that assumes what you want it to do when you write dbset.Save(graph). (I have successfully done so in the past, but haven't uploaded it yet.)
Also see this related article that I have recently found (I haven't tried it yet).
By the way, null properties do have significance when updating the database; often you won't want to ignore them.
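To illustrate the attribute-based route mentioned above, here is a hedged sketch; the SaveNavigationAttribute, Order and OrderLine names are invented for the example and are not part of EF:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Marker attribute for navigation properties the custom save logic should visit.
[AttributeUsage(AttributeTargets.Property)]
public class SaveNavigationAttribute : Attribute { }

public class OrderLine { public int Id { get; set; } }

public class Order
{
    public int Id { get; set; }

    [SaveNavigation] // visit this collection when saving the graph
    public ICollection<OrderLine> Lines { get; set; }
}

public static class GraphSaver
{
    // Discover which navigation properties to walk, without hard-coding names.
    public static IEnumerable<PropertyInfo> NavigationsToVisit(Type entityType)
    {
        return entityType.GetProperties()
                         .Where(p => p.IsDefined(typeof(SaveNavigationAttribute), false));
    }
}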

nHibernate: determine property value before save

We are currently evaluating whether NHibernate supports the requirements of our project. We share the database with another application, so we are not completely free with regard to changes to the schema.
Some columns are filled with unique and consecutive numbers (e.g. for invoices). The next number is determined by a stored procedure that also implements a locking algorithm so that the numbers are guaranteed to be consecutive.
On the one hand, we could define a trigger on the respective tables that sets the value for the column when an empty or special value is provided. This would require changing the existing database definition - though it might be the most reliable way to implement this.
In order to avoid changing the database definition, we are trying to solve this in the NHibernate ORM. We first tried to implement a user type that calls the stored procedure in NullSafeSet if an empty value is provided. Unfortunately, the connection and transaction of the provided command are not yet set when NullSafeSet is called.
How can we solve this with NHibernate?
Thanks in advance,
Markus
If you decide to go with the trigger route, then you'll need to add the generated attribute to your property mapping.
Generated properties are properties which have their values generated
by the database. Typically, NHibernate applications needed to Refresh
objects which contain any properties for which the database was
generating values. Marking properties as generated, however, lets the
application delegate this responsibility to NHibernate. Essentially,
whenever NHibernate issues an SQL INSERT or UPDATE for an entity which
has defined generated properties, it immediately issues a select
afterwards to retrieve the generated values.
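For concreteness, here is a hedged mapping sketch using NHibernate's mapping-by-code API (NHibernate 3.2+; with XML mappings the equivalent is the generated="insert" attribute on the property element). The Invoice entity and InvoiceNumber property are assumptions standing in for your invoice table, so verify the exact mapper calls against your NHibernate version:

using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

// Hypothetical entity whose InvoiceNumber column is filled by the trigger.
public class Invoice
{
    public virtual int Id { get; set; }
    public virtual long InvoiceNumber { get; set; }
}

public class InvoiceMap : ClassMapping<Invoice>
{
    public InvoiceMap()
    {
        Id(x => x.Id, m => m.Generator(Generators.Identity));
        Property(x => x.InvoiceNumber, m =>
        {
            m.Generated(PropertyGeneration.Insert); // re-read the value after INSERT
            m.Insert(false);                        // let the trigger supply it
            m.Update(false);
        });
    }
}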
Aside from that, I'm not quite sure how you would call a stored procedure from an NHibernate-issued INSERT without adding a trigger or a default constraint on the column.
Edit
It looks like NHibernate has a notion of class persisters, through the IEntityPersister interface. Maybe you could hack something out of that.
The persister attribute lets you customize the persistence strategy
used for the class. You may, for example, specify your own subclass of
NHibernate.Persister.EntityPersister or you might even provide a
completely new implementation of the interface
NHibernate.Persister.IClassPersister that implements persistence via,
for example, stored procedure calls, serialization to flat files or
LDAP. See NHibernate.DomainModel.CustomPersister for a simple example
(of "persistence" to a Hashtable).
You could start from NHibernate's source.
If you have the ability to add triggers to the database, that would probably be the best and most straightforward way, without investing too much time fighting with NHibernate's internals.

How do I get the entity framework to work with archive flags?

I'm trying to create a set of tables where we don't actually delete rows; instead, we set archive flags. When we delete an entity, it shouldn't be deleted, it should be marked as archived instead.
What are the programming patterns to support this?
I would also prefer not to have to roll out my own stored procs for every table that has these archive flags, if there is another solution.
This is an old question and it doesn't specify the Entity Framework version. There are a few good solutions for newer versions:
Entity Framework: Soft Deletes Are Easy
Soft Delete pattern for Entity Framework Code First
Entity Framework 5 Soft Delete
There are also resources for EF 6.1.1+:
Highlights of Rowan Miller’s EF6/EF7 Talk at TechEd 2014
Entity Framework: Building Applications with Entity Framework 6
myEntity.IsArchived = true;
context.SaveChanges();
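In EF6 terms, the common pattern behind those links is to intercept deletes in SaveChanges and rewrite them as updates; here is a minimal sketch, assuming a hypothetical IArchivable marker interface with an IsArchived flag that maps to your archive column:

using System.Data.Entity;
using System.Linq;

// Assumed marker interface; IsArchived maps to the archive-flag column.
public interface IArchivable
{
    bool IsArchived { get; set; }
}

public class MyContext : DbContext
{
    public override int SaveChanges()
    {
        // Turn every pending DELETE on an IArchivable entity into an UPDATE
        // that just flips the flag.
        foreach (var entry in ChangeTracker.Entries<IArchivable>()
                                           .Where(e => e.State == EntityState.Deleted))
        {
            entry.State = EntityState.Modified;
            entry.Entity.IsArchived = true;
        }
        return base.SaveChanges();
    }
}

With that in place, a normal Remove followed by SaveChanges leaves the row in the table with IsArchived set to true instead of issuing a DELETE.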
if your requirements are to not delete, then don't delete ;-)
You'll have to write your own logic to do this, and steer clear of the "MarkForDeletion" method on those entities.
Your logic will need to take a provided entity, alter it in some way to signify it is now "archived", and then Save the changes on the context.
You'll then need to make sure any code pulling from the DB honors these values that signify an archived record.
To make it simpler, you can create partial classes to match your entity classes so that they implement, say, a custom interface. That way you can code against the interface and not have to use reflection to set the entity values.
If you can use .NET 4.0, EF supports POCOs and you can mark the entities natively with the appropriate interfaces, which will cut down the number of files you have to work with.
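A tiny sketch of that partial-class trick; Order, IArchivable and IsArchived are made-up names, and the first partial stands in for the generated half of the entity:

// Assumed marker interface; IsArchived maps to the archive-flag column.
public interface IArchivable { bool IsArchived { get; set; } }

// Stand-in for the generated half of the entity (normally in the designer file).
public partial class Order { public bool IsArchived { get; set; } }

// Hand-written half: attaches the interface without touching generated code.
public partial class Order : IArchivable { }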
I'm not sure about best practices, but you might try writing your own DeleteObject method and putting it in a class of some sort (EFHelper is the name of the class I use for these sorts of things). Then, instead of calling ObjectContext.DeleteObject, you call EFHelper.DeleteObject and do any custom logic you care to in that method. If you're consistent with the way you name these archive-flag properties, you can use .NET's reflection API to find the archive_flag property of each EntityObject you're "deleting" and set it appropriately.
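A sketch of that helper for the ObjectContext-era API; the EFHelper class name and the "ArchiveFlag" property-name convention are assumptions you would adapt to your own naming:

using System.Data.Objects;
using System.Data.Objects.DataClasses;

public static class EFHelper
{
    public static void DeleteObject(ObjectContext context, EntityObject entity)
    {
        // Look for the conventionally named archive flag on the entity.
        var flag = entity.GetType().GetProperty("ArchiveFlag");
        if (flag != null && flag.PropertyType == typeof(bool))
        {
            flag.SetValue(entity, true, null);   // archive instead of deleting
        }
        else
        {
            context.DeleteObject(entity);        // no flag: fall back to a real delete
        }
        context.SaveChanges();
    }
}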