How do I get Entity Framework to work with archive flags?

I'm trying to create a set of tables where we don't actually delete rows; instead, we set archive flags. When we delete an entity, it shouldn't be removed from the database; it should be marked as archived instead.
What are the programming patterns to support this?
I would also prefer not to have to roll my own stored procs for every table that has these archive flags, if there is another solution.

This is an old question, and it doesn't specify the Entity Framework version. There are a few good solutions for newer versions (a sketch of the common pattern follows the list):
Entity Framework: Soft Deletes Are Easy
Soft Delete pattern for Entity Framework Code First
Entity Framework 5 Soft Delete
There are also resources for EF 6.1.1+:
Highlights of Rowan Miller’s EF6/EF7 Talk at TechEd 2014
Entity Framework: Building Applications with Entity Framework 6
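
The common thread in those links is to intercept deletes in SaveChanges and turn them into updates. A minimal sketch for EF6 Code First, assuming an IArchivable interface with a bool IsArchived property (both names are mine, not from the linked posts):

public override int SaveChanges()
{
    foreach (var entry in ChangeTracker.Entries()
        .Where(e => e.State == EntityState.Deleted && e.Entity is IArchivable))
    {
        // turn the pending delete into an update that just sets the flag
        entry.State = EntityState.Modified;
        ((IArchivable)entry.Entity).IsArchived = true;
    }
    return base.SaveChanges();
}

Pair this with queries that filter on the flag (e.g. Where(e => !e.IsArchived)) so reads honor archived records.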

myEntity.IsArchived = true;
context.SaveChanges();
If your requirements are to not delete, then don't delete ;-)

You'll have to write your own logic to do this, and steer clear of the "MarkForDeletion" method on those entities.
Your logic will need to take a provided entity, alter it in some way to signify it is now "archived", and then Save the changes on the context.
You'll then need to make sure any code pulling from the DB honors these values that signify an archived record.
To make it simpler, you can create partial classes to match your entity classes so that they implement, say, a custom interface. That way you can code against the interface and don't have to use reflection to set the entity values.
If you can use .NET 4.0, EF supports POCOs and you can mark the entities natively with the appropriate interfaces, which will cut down the number of files you have to work with.
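As a rough sketch of that interface approach (IArchivable and the Order entity are illustrative names, not from the question):

public interface IArchivable
{
    bool IsArchived { get; set; }
}

// The generated Order entity already exposes the IsArchived column,
// so the partial class only needs to declare the interface.
public partial class Order : IArchivable { }

Your archiving logic can then accept any IArchivable and set the flag without reflection.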

I'm not sure about best practices, but you might try writing your own DeleteObject method and putting it in a class of some sort (EFHelper is the name of the class I use for these sorts of things). Then instead of calling ObjectContext.DeleteObject, you call EFHelper.DeleteObject, and do any custom logic you care to do in that method. If you're consistent with the way you name these archive flag properties, you can use .NET's reflection API to find the archive_flag property of each EntityObject you're "deleting" and set it appropriately.
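
A minimal sketch of such a helper against EF4's ObjectContext (the "IsArchived" property name is an assumed convention; substitute your own):

using System.Data.Objects;
using System.Data.Objects.DataClasses;

public static class EFHelper
{
    public static void DeleteObject(ObjectContext context, EntityObject entity)
    {
        // look for the archive flag by convention instead of really deleting
        var flag = entity.GetType().GetProperty("IsArchived");
        if (flag != null && flag.PropertyType == typeof(bool))
            flag.SetValue(entity, true, null);
        else
            context.DeleteObject(entity); // no flag: fall back to a real delete
        context.SaveChanges();
    }
}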


How to do Object–Relational mapping in ABAP?

I'm currently trying to port an application that uses Hibernate to ABAP.
The short version is: I have (at least) two tables, let's say Entity(entity_id, ...) and SubEntity(sub_entity_id, entity_id).
Now in ABAP OO I'm representing these entities as classes, like zcl_app_entity. Now I'm wondering how I could use ABAP to persist these entities and relationships.
I have use-cases like:
Lock an entity, and then add a subentity
Get all subEntities and send them via HTTP as JSON
In Java with JPA I'd do something like
Entity entity = userRepository.findById(entityId);
entity.lock(); // granted, this would work at the DB level here, while ABAP needs ABAP locks
entity.getSubEntities().add(address);
There's a session with a UnitOfWork automatically with the repository call. But as far as I'm aware, ABAP doesn't offer a Repository pattern that automatically turns classes into managed entities.
I could of course add INSERTs etc. directly into the add calls, or create a load/persist method on every class. But then I lose testability.
I could create repositories myself, passing in the objects. But then my objects are repository-aware themselves (addAddress would call the repository).
Another way would be to call the repository in the service class, and then pass the object to the add method after it's persisted. Quite error-prone.
Also, lazy loading of e.g. xstrings (like 50 MB) would be great; this won't work when the object has no access to the repository / SQL interface to load on demand, though.
I'd be super surprised if there isn't something like this (JPA/Hibernate), since these are common patterns:
IEntityRepository, ISubEntityRepository, IMyService (calls the repo interfaces)
All calls make objects managed, maybe with OneToMany etc. relationships, rollbacks, lazy loading.
Weirdly, the most ABAP-like approach I found is to keep some logic in classes (e.g. entity->lock( ), entity->add_subentity( xyz )), but then just use an SQL persistence interface to fetch all data and return plain structures. There wouldn't really be OO relationships; at most, a class would be used as a short-lived driver of a struct. When I want to update all sub_entities, it looks more like data_provider->get_sub_entity( entity_id ), which returns an internal table, and then the caller has to persist it again if required: data_provider->update_sub_entity_status( entity_id, 'R' ).
So how do I do Object–Relational mapping in ABAP, e.g. when I want to update the status of all sub_entities of entity X to 'DONE', while keeping it testable?
You can take a look at the basic ABAP Persistent Object Services.
In SE24, if you create a class, you can mark it as "persistent".
E.g. for SFLIGHT.
https://blogs.sap.com/2012/04/18/abap-persistent-object-services-demystified/
There is a generated example on every system: CL_SPFLI_PERSISTENT.
If you have used ORMs in other languages like C#, this will be a very disappointing experience.
This toolset began around 20 years ago, but offers a questionable return on investment if you ask me, quite apart from the fact that the approach doesn't conform to traditional OO principles.
When this first came out, SAP already had 3GL code updating traditional tables. Most developers, even SAP internal, had no clear idea how to implement an ABAP OO model.
90% of the code SAP delivered didn't use this type of model, so there were no good examples to base your own work on.
Unfortunately it never took off and was never extended to have the functionality expected in a true ORM persistence tool.
I don't recall seeing anything inside the toolset that manages things like cascade delete, nor a proper class relationship model. Please correct me if I missed that.
If you ask me, SAP generated a class with GET and SET methods and a PERSIST method, and that is where it stopped.
Actually using classes rather than DDIC structures as the model, and implementing things like cascade delete, remain outstanding.
If you google SAP and ORM, you will see a JavaScript tool using Hibernate and HANA as the DB, not an ABAP-based tool. Or you will see ORM meaning Operational Risk Management.
That pretty much says it all. The ABAP layer has a toy class generator, but no true ORM tool like Entity Framework on .NET.

Iterate over non-null entities in a model using LINQ

How do you iterate over the entities within a model in MVC 4 using Entity Framework 5.0? I'm looking for a more elegant process using LINQ.
Example: an AnimalModel may have Cat, Dog and Pig entities. How would I detect just the entities and ignore other properties of the AnimalModel, such as isHarry, Name, isWalking, isJumping? Is there a way to do this without using reflection, something within EF5 that allows looking at just the non-null entity values?
The main reason I am interested in this technique is to reduce code bloat and perform generic CRUD operations on the data across all entities and sub-entities.
Possible Reference: link
I can't see how you can achieve this without using reflection at all.
You could try the following: get all the EF types in the assembly which hosts them, e.g.
var types = from t in Assembly.GetExecutingAssembly().GetTypes()
            where t.IsClass && t.Namespace == "NamespaceWhereEFEntitiesLive"
            select t;
You may need to play around a bit with the above query, but you get the idea.
You can then iterate through the properties of AnimalModel and check whether each property's type appears in types, e.g.
foreach (var prop in typeof(AnimalModel).GetProperties())
{
    if (types.Contains(prop.PropertyType))
    {
        // this property holds an EF entity
    }
}
Note that the above loop is a bit of a guess, but it should clarify what I'm looking to explain.
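Putting the two pieces together, a sketch that collects just the non-null entity-typed values from a model instance (AnimalModel and the namespace are placeholders from the question):

using System.Linq;
using System.Reflection;

var entityTypes = Assembly.GetExecutingAssembly().GetTypes()
    .Where(t => t.IsClass && t.Namespace == "NamespaceWhereEFEntitiesLive")
    .ToList();

var model = new AnimalModel();
// read every property whose declared type is an EF entity, skipping nulls
var entities = typeof(AnimalModel).GetProperties()
    .Where(p => entityTypes.Contains(p.PropertyType))
    .Select(p => p.GetValue(model, null))
    .Where(v => v != null)
    .ToList();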
When you use EF to insert/update, it automatically ignores all irrelevant properties. If you want an implementation that takes properties from existing objects and then applies them to the database, you could use the relatively new upsert.
If you want a custom way to upsert a graph of objects...
If you are using either database-first or model-first (if you have an EDMX), you could use T4 templates to generate code that does this.
If you want this technique to support navigation properties, you will need some sort of assumption to prevent loops, e.g. update from the one side to the many side, not the other way around, and skip many-to-many properties; or use the EDMX's optional description to place a hint on which navigation properties to visit.
Using reflection is a simpler solution, although even with reflection you'll need to decide which way to go (e.g. using attributes, which you can get the T4s to add via the above assumptions/tricks).
Alternatively, you could convert this technique (that I wrote) to work with EF, thus explicitly specifying where to visit in the graph in the calling code, using dbset.SaveNavigation(graph, listOfPropertyPaths) instead of writing complex code that assumes what you want when you write dbset.Save(graph) (I have successfully done so in the past, but haven't uploaded it yet).
Also see this related article that I have recently found (I haven't tried it yet).
By the way, null properties do have significance in updating the database, often, you won't want to ignore them.

nHibernate: determine property value before save

We are currently evaluating whether NHibernate supports the requirements of our project. We share the database with another application, so we are not completely free as regards changes to the schema.
Some columns are filled with unique and consecutive numbers (e.g. for invoices). The next number is determined by a stored procedure that also implements a locking algorithm so that the numbers are guaranteed to be consecutive.
On the one hand we could define a trigger on the respective tables that sets the value for the column when an empty or special value is provided. This would require changing the existing database definition - though it might be the most reliable way to implement this.
To avoid changing the database definition, we are trying to solve this in the NHibernate ORM. We first tried to implement a user type that calls the stored procedure in NullSafeSet when an empty value is provided. Unfortunately, the connection and transaction of the provided command are not yet set when NullSafeSet is called.
How can we solve this with nHibernate?
Thanks in advance,
Markus
If you decide to go the trigger route, then you'll need to add the generated attribute to your property mapping.
Generated properties are properties which have their values generated by the database. Typically, NHibernate applications needed to Refresh objects which contain any properties for which the database was generating values. Marking properties as generated, however, lets the application delegate this responsibility to NHibernate. Essentially, whenever NHibernate issues an SQL INSERT or UPDATE for an entity which has defined generated properties, it immediately issues a select afterwards to retrieve the generated values.
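In mapping terms that is a single attribute on the property element. A minimal sketch, assuming the invoice number lives in a trigger-filled column (the property and column names are made up):

<property name="InvoiceNumber" column="INVOICE_NO"
          generated="always" insert="false" update="false" />

With generated="always", NHibernate re-reads the column after both inserts and updates; use generated="insert" if your trigger only fires on insert.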
Aside from that, I'm not quite sure how you would call a stored procedure from an NHibernate-issued INSERT without adding a trigger or a default constraint on the column.
Edit
It looks like NHibernate has a notion of class persisters, through the interface IEntityPersister. Maybe you could hack something out of that.
The persister attribute lets you customize the persistence strategy used for the class. You may, for example, specify your own subclass of NHibernate.Persister.EntityPersister, or you might even provide a completely new implementation of the interface NHibernate.Persister.IClassPersister that implements persistence via, for example, stored procedure calls, serialization to flat files or LDAP. See NHibernate.DomainModel.CustomPersister for a simple example (of "persistence" to a Hashtable).
You could start from NHibernate's source.
If you have the ability to add triggers to the database, that is probably the best, most straightforward way, without investing too much time fighting NHibernate's internals.

Does adding PetaPoco attributes to POCO's have any negative side effects?

Our current application uses a smart-object style for working with the database. We are looking at the feasibility of moving to PetaPoco instead. Looking over the features, I notice you can add attributes to make it easier to CRUD objects. Does adding these attributes have any negative side effects that I should be aware of?
Has anyone found a reason NOT to use these decorators?
Directly to the use of the POCO object instance itself? None.
At least not that I would be aware of. Jon Skeet should be able to provide more info because he knows compiler inner workings through and through, so he knows exactly what happens with this metadata after it's been compiled.
Other implications indirectly related to these
There are of course implications when accessing these declarative attributes, because they're read using reflection, which is normally a slow process.
But there's nothing to worry about here, because PetaPoco is a smart library: it reads these only once, then compiles and caches them, so you only get penalized once and get blazing performance afterwards, because it uses compiled code.
Non-performance related implications
By putting attributes (any) on your classes/properties/methods, you bind your code to the particular engine that will use the class, because they're directives for that engine to understand your code.
In the case of PetaPoco attributes, this means that your class can be used with PetaPoco but not with some other DAL (i.e. EF) unless you add that one's attributes as well (EF Code First uses the very same approach with attributes).
The second implication is related to the back-end database. If you rename a table, column or any other part that is provided in your PetaPoco attribute as a constant magic string, you will subsequently have to change this string as well. This just means that you have to be thorough when doing database changes...
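For reference, the decorators in question look roughly like this (a minimal sketch using PetaPoco's documented attributes; the class and table are made up):

using PetaPoco;

[TableName("Customers")]
[PrimaryKey("CustomerId")]
public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }

    [Ignore] // computed in code, never read from or written to the DB
    public string DisplayName { get { return "Customer: " + Name; } }
}

The "Customers" and "CustomerId" strings are exactly the kind of magic strings to keep in sync with the schema.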
One downside is that it breaks the separation between the "domain" layer and the "data" layer, since it introduces the PetaPoco file (which contains data logic) to domain classes that should really not have any knowledge of, or dependency on, the data layer.
If you're doing a single-project MVC app or something, then it's okay to just use the Models directory for both; but for non-trivial, separated apps you'll have to have two PetaPoco files, or play around with abstracting portions of the file in order to annotate your models without making them "know too much" about the underlying data, or else specify the table and/or primary-key name all over the place.

Mapping Entity Framework auto generated entities to Data Transfer Objects?

Using Entity Framework 4.1 what is the best way to map an auto generated entity framework entity to an object suitable for Data Transfer?
What I'm working with looks like this:
WPF Application -> WCF Service -> Entity Framework (DAL) -> Database
At some point an ASP.NET website could be used in addition to, or instead of, the WPF application. Hence the use of a WCF service.
The WCF Service, Database and Entity Framework code will all sit on the same physical tier.
In previous versions of Entity Framework (before 4.0), I believe you had to write your own mapping code for your classes. Is there a better way to do this now?
An additional question: would it be bad practice to include methods on the DTOs that perform business logic? Where would be the best place to apply business logic in this case?
The best way to map from EF entities to DTOs is to project. My example linked there uses view models rather than DTOs, but the idea is the same.
With most ORMs, EF included, there is a cost to materializing entities which you don't need to pay if you just need a DTO. The cost includes:
Fixup -- when two objects reference the same related object, make sure they point to the same instance.
Tracking -- overhead for tracking the instance in a context.
Unnecessary columns -- you might not need all properties for your DTOs.
Aggregates -- functions like .Count() are far more efficient in SQL than in object space.
If you use a LINQ to Entities projection, you don't incur any of this cost. If you follow the common advice to use the AutoMapper hammer for every problem which looks like a nail, you pay all of it.
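A minimal sketch of such a projection (OrderDto and the property names are illustrative, not from the question):

// the Select is translated to SQL; no Order entities are materialized or tracked
var dtos = context.Orders
    .Where(o => o.CustomerId == customerId)
    .Select(o => new OrderDto
    {
        Id = o.Id,
        LineCount = o.Lines.Count() // computed in SQL, not in object space
    })
    .ToList();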
I do agree with @sternr about not putting methods on entities / DTOs. For a more detailed examination of this idea, read "At the Boundaries, Applications are Not Object-Oriented."
Use AutoMapper to map between simple DTOs and EF entities.
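A minimal sketch using AutoMapper's classic static API (Customer and CustomerDto are placeholder types; newer AutoMapper versions configure an instance instead):

// configure once at application startup
Mapper.CreateMap<Customer, CustomerDto>();

// then wherever you return data from the service
CustomerDto dto = Mapper.Map<Customer, CustomerDto>(entity);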
[EDIT:] for your other questions:
For the mapping: you could either use the built-in EDMX designer, which allows you to use an existing DB schema to generate your model entities, or go the other way around (define your entities and let EF create your DDL).
In newer versions (as of 4.1), you can simply code your entities and add DataAnnotations on the associated properties, and EF will do the magic mapping (here's a good sample).
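A minimal EF 4.1 Code First sketch of that annotation style (the Invoice type is made up):

using System.ComponentModel.DataAnnotations;

public class Invoice
{
    [Key]
    public int InvoiceId { get; set; }

    [Required]
    [StringLength(50)]
    public string Number { get; set; }
}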
As for adding logic to DTOs: DTOs are by definition a data contract; the consumer will probably create its own implementation of them (be it via the automatic proxy or other manual wrappers), so placing logic in them makes little sense.