Restricting deletion with NHibernate - sql

I'm using NHibernate (Fluent) to access an old third-party database with a bunch of tables that are not related in any explicit way. That is, child tables do have parentID columns containing the primary key of the parent table, but there are no foreign key constraints enforcing these relationships. Ideally I would like to add some foreign keys, but I cannot touch the database schema.
My application works fine, but I would really like to impose a referential integrity rule that prohibits deletion of parent objects if they have children, i.e. something similar to 'ON DELETE RESTRICT', but maintained by NHibernate.
Any ideas on how to approach this would be appreciated. Should I look into the OnDelete() method on the IInterceptor interface, or are there other ways to solve this?
Of course any solution will come with a performance penalty, but I can live with that.

I can't think of a way to do this in NHibernate because it would require NHibernate to have some knowledge of the relationships. I would handle this in code using the specification pattern. For example (using a Company object with links to Employee objects):
public class CanDeleteCompanySpecification
{
    public bool IsSatisfiedBy(Company candidate)
    {
        // Check for related Employee records by loading the collection
        // (or by issuing a COUNT(*) query instead).
        // Return true if there are no related records and the Company can be deleted.
        // Hope that no linked Employee records are created before the delete commits.
        // (Assumes Company exposes an Employees collection.)
        return candidate.Employees.Count == 0;
    }
}
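A minimal usage sketch, assuming an open ISession; the method and variable names are illustrative, not from the original answer:

using System;
using NHibernate;

public static void DeleteCompany(ISession session, Company company)
{
    var spec = new CanDeleteCompanySpecification();
    if (!spec.IsSatisfiedBy(company))
        throw new InvalidOperationException("Company still has Employees and cannot be deleted.");

    using (var tx = session.BeginTransaction())
    {
        session.Delete(company);
        tx.Commit();
    }
}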

Related

Fluent NHibernate One-To-Many Cascade Delete if Many-Side is not referenced anywhere else

I have a working solution, but I would be interested to know if there is a way to achieve this through fluent mapping.
For simplicity, I will use an illustrative example:
class Tag {
    string name;
    IList<Book> books;
}

class Book {
    string title;
    Tag primaryTag;
}
There is a business case where Books are deleted. Right now, I query the database to check whether any other book references the current tag as its primary tag. If not, I delete the book and then delete the tag, because it is not used anywhere else. If the tag is still used, I only delete the book.
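For reference, a minimal sketch of that check-then-delete approach, assuming LINQ-to-NHibernate is available and that the fields in the example classes above are exposed as public members; the query shape and method name are illustrative, not the poster's actual code:

using System.Linq;
using NHibernate;
using NHibernate.Linq;

public static void DeleteBook(ISession session, Book book)
{
    using (var tx = session.BeginTransaction())
    {
        var tag = book.primaryTag;
        session.Delete(book);

        // Does any other book still use this tag as its primary tag?
        bool tagStillUsed = session.Query<Book>()
            .Any(b => b.primaryTag == tag && b != book);

        if (!tagStillUsed)
            session.Delete(tag); // the tag is orphaned, remove it as well

        tx.Commit();
    }
}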
Now it's your turn... do you know a way to achieve this using mappings? I tried the following:
BookMap : ClassMap<Book> {
    ...
    References(x => x.primaryTag)
        .Cascade.All(); // the collection in TagMap is set to "inverse"
}
But not surprisingly, it throws a foreign key constraint error when the tag is used in other books.
Regards,
Martin
There's no way to do that. NHibernate mimics what you can configure in SQL Server with cascade deletes. There's no way to go up to a parent and delete "orphans" without using triggers in SQL Server.
There is a way to mimic triggers in NHibernate using "Interceptors": a way to listen for CRUD operations on specific entities and then perform actions. But really it's an anti-pattern, since you may as well add the same code to the method that removes the Tag rather than hide it in some obscure place (interceptors are more useful for cross-cutting concerns such as auditing).
This is a really nice article on how to do it (but there's loads out there just google "NHibernate Interceptors").
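For what it's worth, a minimal sketch of the interceptor mechanics; here it vetoes the delete rather than cascading it, which matches the restriction asked about at the top of this thread. EmptyInterceptor and OnDelete are real NHibernate types; the Tag check itself is a hypothetical helper:

using System;
using NHibernate;
using NHibernate.Type;

public class RestrictDeleteInterceptor : EmptyInterceptor
{
    // Called by NHibernate just before it deletes an entity.
    public override void OnDelete(object entity, object id, object[] state,
                                  string[] propertyNames, IType[] types)
    {
        if (entity is Tag tag && IsTagStillReferenced(tag))
            throw new InvalidOperationException(
                "Tag is still referenced by other books and cannot be deleted.");
    }

    // Hypothetical helper: would query for Books whose primaryTag is this tag.
    private static bool IsTagStillReferenced(Tag tag)
    {
        // ... query the database here ...
        return false;
    }
}

The interceptor would then be supplied when opening the session, e.g. sessionFactory.OpenSession(new RestrictDeleteInterceptor()).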
I'd make sure to use Session.Delete(entity) to ensure deleted entities are removed from the session (for sanity).

Should I let JPA or the database cascade deletions?

Let's say we have two entities, A and B. B has a many-to-one relationship to A, as follows:
@Entity
public class A {
    @OneToMany(mappedBy="a")
    private List<B> children;
}

@Entity
public class B {
    @ManyToOne
    private A a;

    private String data;
}
Now, I want to delete the A object and cascade the deletions to all its children B. There are two ways to do this:
Add cascade=CascadeType.ALL, orphanRemoval=true to the OneToMany annotation, letting JPA remove all children before removing the A-object from the database.
Leave the classes as they are and simply let the database cascade the deletion.
Is there any problem with using the latter option? Will it cause the Entity Manager to keep references to already deleted objects? My reason for choosing option two over option one is that option one generates n+1 SQL queries for a removal, which can take a long time when object A contains a lot of children, while option two issues only a single SQL query and then moves on happily. Is there any "best practice" regarding this?
I'd prefer the database. Why?
The database is probably a lot faster doing this
The database should be the primary place to hold integrity and relationship information. JPA is just reflecting that information
If you're connecting with a different application / platform (i.e. without JPA), you can still have your records deleted in cascade, which helps maintain data integrity
In EclipseLink you can use both if you use the @CascadeOnDelete annotation. EclipseLink will also generate the cascade DDL for you.
See,
http://wiki.eclipse.org/EclipseLink/Examples/JPA/DeleteCascade
This optimizes the deletion by letting the database do it, but also maintains the cache and the persistence unit by removing the objects.
Note that orphanRemoval=true will also delete objects removed from the collection, which the database cascade constraint will not do for you, so having the rules in JPA is still necessary. There are also some relationships whose deletion the database cannot handle, as the database can only cascade in the inverse direction of the constraint: a OneToOne with a foreign key, or a OneToMany with a join table, cannot be cascaded on the database.
This answer raises some really strong arguments about why it should be JPA that handles the cascade, not the database.
Here's the relevant quote:
...if you would make cascades on the database, and not declare them in Hibernate (for performance issues), you could in some circumstances get errors. This is because Hibernate stores entities in its session cache, so it would not know about the database deleting something in cascade.
When you use the second-level cache, your situation is even worse, because this cache lives longer than the session, and such changes on the DB side will be invisible to other sessions as long as old values are stored in this cache.

Why doesn't NHibernate delete orphans first?

I'm trying to figure out why NHibernate handles one-to-many cascading (using cascade=all-delete-orphan) the way it does. I ran into the same issue as this guy:
Forcing NHibernate to cascade delete before inserts
As far as I can tell NHibernate always performs inserts first, then updates, then deletes. There may be a very good reason for this, but I can't for the life of me figure out what that reason is. I'm hoping that a better understanding of this will help me come up with a solution that I don't hate :)
Are there any good theories on this behavior? In what scenario would deleting orphans first not work? Do all ORMs work this way?
EDIT: After saying there is no reason, here is a reason.
Let's say you have the following scenario:
public class Dog {
    public DogLeg StrongestLeg { get; set; }
    public IList<DogLeg> Legs { get; set; }
}
If you were to delete first, and let's say you delete all of Dog.Legs, then you may delete the StrongestLeg, which would cause a reference violation. Hence you cannot DELETE before you UPDATE.
Let's say you add a new leg, and that new leg is also the StrongestLeg. Then you must INSERT before you UPDATE so that the new Leg has an Id that can be written to Dog.StrongestLegId.
So you must INSERT, then UPDATE, then DELETE.
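A minimal sketch of an update that exercises both rules above, using the Dog/DogLeg classes from this example and assuming an integer Id; the session usage is illustrative, not code from the original post:

using NHibernate;

public static void ReplaceStrongestLeg(ISession session, int dogId, DogLeg newLeg)
{
    using (var tx = session.BeginTransaction())
    {
        var dog = session.Get<Dog>(dogId);
        var oldLeg = dog.StrongestLeg;

        dog.Legs.Add(newLeg);      // transient: needs an INSERT to obtain an Id
        dog.StrongestLeg = newLeg; // UPDATE of Dog.StrongestLegId must follow that INSERT

        dog.Legs.Remove(oldLeg);   // orphan: its DELETE must wait until Dog no longer
                                   // references it through StrongestLegId

        tx.Commit();               // flush order on commit: INSERT, then UPDATE, then DELETE
    }
}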
Also, as NHibernate is based on Hibernate, I had a look at Hibernate and found several people discussing the same issue:
Support one-to-many list associations with constraints on both (owner_id, position) and (child_id)
Non lazy loaded List updates done in wrong order, cause exception
wrong insert/delete order when updating record-set
Why does Hibernate perform Inserts before Deletes?
Unidirection OneToMany causes duplicate key entry violation when removing from list
And here is the best answer from them:
Gail Badner added a comment - 21/Feb/08 2:30 PM: The problem arises when a new association entity with a generated ID is added to the collection. The first step, when merging an entity containing this collection, is to cascade save the new association entity. The cascade must occur before other changes to the collection. Because the unique key for this new association entity is the same as an entity that is already persisted, a ConstraintViolationException is thrown. This is expected behavior.

How to find if a referenced object can be deleted?

I have an object called "Customer" which is used in other tables as a foreign key.
The problem is that I want to know whether a "Customer" can be deleted (i.e., it is not being referenced in any other tables).
Is this possible with NHibernate?
What you are asking is how to detect the existence of the Customer PK value in the FK columns of the referencing tables.
There are many ways you can go about this:
1. As kgiannakakis noted, try to do the delete and, if an exception is thrown, roll back. Effective but ugly and not very useful. It also requires that you have set CASCADE="RESTRICT" in your database, and it has the drawback that you have to attempt the delete to find out that you can't.
2. Map the entities that reference Customer as collections, and then, for each collection, if its Count > 0, do not allow the delete. Good because it is safe against schema changes as long as the mapping is complete; bad because additional selects have to be made.
3. Have a method that performs a query, e.g. bool IsReferenced(Customer cust). Good because you have a single query that you use when you want; not so good because it may be susceptible to errors from schema and/or domain changes (depending on the type of query you use: SQL/HQL/Criteria).
4. A computed property on the class itself, with a mapping element like <property name="IsReferenced" type="long" formula="SQL query that sums the Customer id usage in the referencing tables" />. Good because it is a fast solution (at least as fast as your DB is) with no additional queries; not so good because it is susceptible to schema changes, so when you change your DB you mustn't forget to update this formula. (A fluent sketch of this mapping appears below.)
5. Crazy solution: create a schema-bound view that does the calculation and query it when you want. Good because it is schema-bound and less susceptible to schema changes, and the query is quick; not so good because you still have to issue an additional query (unless you map this view's result as in solution 4).
Options 2, 3 and 4 are also good because you can project this behavior into your UI (disable the delete).
Personally I would go for 4, then 3, then 5, in that order of preference.
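For option 4, a rough Fluent NHibernate sketch of such a computed property. The referencing table, column names, and the IsReferenced property on Customer are made up for illustration, and a real formula would sum usage across every referencing table; the hbm.xml form is the formula attribute shown in the list above:

using FluentNHibernate.Mapping;

public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap()
    {
        Table("Customer");
        Id(x => x.Id);

        // Read-only computed column; here it only checks a single
        // hypothetical Orders table for brevity.
        Map(x => x.IsReferenced)
            .Formula("(select count(*) from Orders o where o.CustomerId = Id)")
            .ReadOnly();
    }
}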
I want to know if a "Customer" can be deleted (i.e., it is not being referenced in any other tables).
It is not really the database's responsibility to determine whether a Customer can be deleted; that is part of your business logic.
You are asking to check referential integrity at the database level. That is fine in a non-OOP world, but when dealing with objects (as you do), you are better off adding that logic to your objects (objects have state and behavior; the DB has only the state).
So, I would add a method to the Customer class to determine if it can be deleted or not. This way you can properly (unit) test the functionality.
For example, let's say we have a rule: a Customer can only be deleted if he has no orders and has not participated in the forum.
Then you will have Customer object similar to this (simplest possible case):
public class Customer
{
    public virtual ISet<Order> Orders { get; protected set; }
    public virtual ISet<ForumPost> ForumPosts { get; protected set; }

    public virtual bool CanBeDeleted
    {
        get { return Orders.Count == 0 && ForumPosts.Count == 0; }
    }
}
This is a very clean and simple design that is easy to use and test, and it does not rely heavily on NHibernate or the underlying database.
You can use it like this:
if (myCustomer.CanBeDeleted)
    session.Delete(myCustomer);
In addition to that you can fine-tune NHibernate to delete related orders and other associations if required.
Note: of course, the example above is just the simplest possible illustrative solution. You might want to make such a rule part of the validation that is enforced when deleting the object.
Thinking in entities and relations instead of tables and foreign keys, there are these different situations:
Customer has a one-to-many relation that forms part of the customer, for instance his phone numbers. These should also be deleted, by means of cascading.
Customer has a one-to-many or many-to-many relation that is not part of the customer, but is known/reachable from the customer.
Some other entity has a relation to the Customer. It could also be an any-type (which is not a foreign key in the database), for instance the customer's orders. The orders are not known by the customer. This is the hardest case.
As far as I know, there is no direct solution from NHibernate. There is the metadata API, which allows you to explore the mapping definitions at runtime, but IMHO that is the wrong way to do it.
In my opinion, it is the responsibility of the business logic to validate whether an entity can be deleted or not. (Even if there are foreign keys and constraints ensuring the integrity of the database, it is still business logic.)
We implemented a service that is called before the deletion of an entity. Other parts of the software register for certain types and can veto the deletion (e.g. by throwing an exception).
For instance, the order system registers for deletion of customers. If a customer is about to be deleted, the order system searches for orders by this customer and throws if it finds one.
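A minimal sketch of such a veto service, assuming interested subsystems register a callback per entity type; all names here are illustrative, since the original answer shows no code:

using System;
using System.Collections.Generic;

public class DeletionGuard
{
    // For each entity type, callbacks that may veto a delete by throwing.
    private readonly Dictionary<Type, List<Action<object>>> _vetoHandlers =
        new Dictionary<Type, List<Action<object>>>();

    public void RegisterVeto<T>(Action<T> handler)
    {
        if (!_vetoHandlers.TryGetValue(typeof(T), out var handlers))
            _vetoHandlers[typeof(T)] = handlers = new List<Action<object>>();
        handlers.Add(o => handler((T)o));
    }

    // Called by the application right before session.Delete(entity).
    public void EnsureCanDelete(object entity)
    {
        if (_vetoHandlers.TryGetValue(entity.GetType(), out var handlers))
            foreach (var handler in handlers)
                handler(entity); // a handler throws to veto the deletion
    }
}

// Example registration by a hypothetical order subsystem:
// guard.RegisterVeto<Customer>(c =>
// {
//     if (orderRepository.HasOrdersForCustomer(c)) // hypothetical repository call
//         throw new InvalidOperationException("Customer has orders and cannot be deleted.");
// });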
It's not possible directly. Presumably your domain model includes Customer's related objects, such as Addresses, Orders, etc. You should use the specification pattern for this.
public class CustomerCanBeDeleted
{
    public bool IsSatisfiedBy(Customer customer)
    {
        // Check that related objects are null and related collections are empty,
        // plus any business logic that determines whether a Customer can be deleted.
        // (Assumes Orders and Addresses collections on Customer, as mentioned above.)
        return customer.Orders.Count == 0 && customer.Addresses.Count == 0;
    }
}
Edited to add:
Perhaps the most straightforward method would be to create a stored procedure that performs this check and call it before deleting. You can access an IDbCommand from NHibernate (ISession.Connection.CreateCommand()) so that the call is database agnostic.
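A minimal sketch of that stored-procedure call, assuming a procedure named CanDeleteCustomer that returns 1 or 0; the procedure and parameter names are invented for illustration, and the command would also need to be enlisted in the session's transaction if one is open:

using System;
using System.Data;
using NHibernate;

public static bool CanDeleteCustomer(ISession session, int customerId)
{
    // Uses the session's ADO.NET connection directly, so the call itself
    // is not tied to a specific database driver.
    using (IDbCommand cmd = session.Connection.CreateCommand())
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.CommandText = "CanDeleteCustomer"; // hypothetical stored procedure

        var parameter = cmd.CreateParameter();
        parameter.ParameterName = "@CustomerId"; // parameter syntax varies by provider
        parameter.Value = customerId;
        cmd.Parameters.Add(parameter);

        return Convert.ToInt32(cmd.ExecuteScalar()) == 1;
    }
}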
See also the responses to this question.
It might be worth looking at the cascade property, in particular all-delete-orphan in your hbm.xml files and this may take care of it for you.
See here, 16.3 - Cascading Lifecycle
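Since the rest of this thread uses Fluent NHibernate, the fluent equivalent of that hbm.xml cascade setting would look roughly like this (Orders is a hypothetical collection on Customer):

using FluentNHibernate.Mapping;

public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap()
    {
        Id(x => x.Id);

        // cascade="all-delete-orphan": deleting the Customer deletes its Orders,
        // and Orders removed from the collection are deleted as orphans.
        HasMany(x => x.Orders)
            .Cascade.AllDeleteOrphan()
            .Inverse();
    }
}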
A naive solution would be to use a transaction: start a transaction and delete the object. An exception will inform you that the object can't be deleted. In either case, do a rollback.
Map the entities that reference Customer as collections, and name each collection in your Customer class with a particular suffix. For example, if your Customer entity has some Orders, name the Orders collection as below:
public virtual ISet<Order> Orders_NHBSet { get; set; } // add "_NHBSet" at the end
Now, by using reflection, you can get all properties of Customer at run time, pick those whose names end with your defined suffix (in this case "_NHBSet"), and check each such collection: if any of them contains an element, avoid deleting the customer.
public static void DeleteCustomer(Customer customer)
{
    using (var session = sessions.OpenSession())
    using (var transaction = session.BeginTransaction())
    {
        var listOfProperties = typeof(Customer).GetProperties();
        foreach (var classProperty in listOfProperties)
        {
            if (classProperty.Name.EndsWith("_NHBSet"))
            {
                PropertyInfo myPropInfo = typeof(Customer).GetProperty(classProperty.Name);
                dynamic collection = myPropInfo.GetValue(customer, null);
                if (Enumerable.FirstOrDefault(collection) != null) // check if the collection contains any element
                {
                    MessageBox.Show("Customer cannot be deleted");
                    return;
                }
            }
        }
        session.Delete(customer);
        transaction.Commit();
    }
}
The advantage of this approach is that you don't have to change your code later if you add new collections to your Customer class, and you don't need to change your SQL query as Jaguar suggested.
The only thing you must take care of is adding the particular suffix to newly added collections.

NHibernate Legacy Database Mappings Impossible?

I'm hoping someone can help me with mapping a legacy database. The problem I'm describing here has plagued others, yet I was unable to find a really good solution around the web.
DISCLAIMER: this is a legacy DB. I have no control over the composite keys. They suck and can't be changed no matter how much you tell me they suck. I can't add surrogate keys either. Please don't suggest either of these, as they are not options.
I have two tables, both with composite keys. One of the keys from one table is used as part of the key to get a collection from the other table. In short, the keys don't fully match between the tables.
ClassB is used everywhere, so I would like to avoid adding properties for the sake of this mapping if possible.
public class ClassA
{
    //[PK]
    public string SsoUid;
    //[PK]
    public string PolicyNumber;

    public IList<ClassB> Others;
    // more properties...
}

public class ClassB
{
    //[PK]
    public string PolicyNumber;
    //[PK]
    public string PolicyDateTime;
    // more properties
}
I want to get an instance of ClassA and get all ClassB rows that match PolicyNumber. I am trying to get something going with a one-to-many, but I realize that this may technically be a many-to-many that I am just treating as one-to-many.
I've tried using an association class but didn't get far enough to see if it works. I'm new to these more complex mappings and am looking for advice. I'm open to pretty much any ideas.
Thanks,
Corey
The easiest way to handle mapping legacy database schemas is to add a surrogate generated primary key (i.e. identity in SQL Server) to each database table and change your existing composite primary keys to unique constraints. This allows you to keep your existing foreign keys and makes the NHibernate mapping easy.
If that's not possible then you may be able to use property-ref in your mappings to accomplish this.
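A rough Fluent NHibernate sketch of the property-ref idea for the classes above, assuming the fields are exposed as mapped properties; the composite-id and column names are illustrative, not a verified mapping for this schema, and joining on part of a composite key is exactly the part this answer hedges on:

using FluentNHibernate.Mapping;

public class ClassAMap : ClassMap<ClassA>
{
    public ClassAMap()
    {
        CompositeId()
            .KeyProperty(x => x.SsoUid)
            .KeyProperty(x => x.PolicyNumber);

        // Join the collection on PolicyNumber only, not on the full composite key.
        HasMany(x => x.Others)
            .KeyColumn("PolicyNumber")
            .PropertyRef("PolicyNumber")
            .Cascade.None();
    }
}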
Edit: You can always fall back to an anemic domain model. That is, map each class but exclude the relationships. You would have one data access method to get ClassA by key, and one to get a collection of ClassB by PolicyNumber.
I was eventually able to get the DB team to concede to adding some surrogate keys to the tables I was dealing with. This is the document I used to plead my case.