I would like to know the best way to design deletion of an object that triggers the deletion of many dependent objects.
Here is an example. There is an Employer class. When an employer is deleted, all of its jobs and invoices are deleted. When a job is deleted, its category selection is deleted as well. And so on. As you can see, deleting an Employer triggers deletion of many more objects. The problem is that I have to pass many arguments, required only for deleting the dependent objects, to the delete method of the Employer class.
Here is a simplified example. Imagine a class Main. When a Main object is deleted, objects Dep1 and Dep2 have to be deleted as well. When Dep1 is deleted, Dep11 has to be deleted too. If the delete methods look like Dep1.delete(arg1), Dep2.delete(arg2), and Dep11.delete(arg3), then the delete method on Main has to look like Main.delete(arg1, arg2, arg3). The more objects that depend on Main, the more arguments are needed for deletion.
I must also point out that I am interested in deletion from the database, i.e. deletion in its "business logic" sense. I do not even unset "deleted" objects in the delete method.
The options I have considered:
Grouping the arguments required for deletion into a separate object. I just do not see how all of these arguments can be grouped; they simply do not belong together. For example, if an Invoice_searcher and a Job_searcher are needed, why would they go together in one object? And what object would that be?
Moving deletion of dependent objects out of the delete method in the Employer class. In this case, forgetting to call the delete methods on the children explicitly would leave the system in an inconsistent state. I would like to avoid that.
If you are passing parameters to deletion functions, you are making a mistake.
Each object on which you call your deletion function should be able to identify the other objects which it is the parent of.
I emphasise object because it sounds like you are coming at this from a relational perspective.
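To illustrate the "each object knows its own dependents" idea, here is a minimal Python sketch (all class and method names are hypothetical, mirroring the Main/Dep1/Dep11 example from the question, not any real codebase):

```python
class Dep11:
    def delete(self):
        # Stand-in for deleting this object's own database row.
        return ["Dep11"]

class Dep1:
    def __init__(self):
        self.dep11 = Dep11()  # Dep1 owns its dependent

    def delete(self):
        # Cascade to the child first, then delete self.
        return self.dep11.delete() + ["Dep1"]

class Main:
    def __init__(self):
        self.dep1 = Dep1()

    def delete(self):
        # Main only knows its direct children; each child handles its
        # own cascade, so Main.delete() needs no growing argument list.
        return self.dep1.delete() + ["Main"]

deleted = Main().delete()
```

Because each object holds references to its own dependents, the argument lists never grow: adding a new dependent type changes only its parent, not every caller up the chain.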
Try to use composition in such a way that once you make the Employer unreferenced, the others automatically become unreferenced as well.
Another way is to use a nested class: if the topmost encapsulating class becomes unreferenced, the others will automatically become unreferenced too (but make sure you are not pulling a reference to the nested class, i.e. your inner class, out into some other class).
In my line of work I don't have to write much code to delete, so take my advice with a grain of salt. It might help to look at how these objects/records are created, and handle the deletion in the same way. For example, if the create logic is like this:
Create Employer
Create Invoice
Associate Invoice with Employer
then maybe the delete logic should look like this:
Remove Associations of Invoice(s) with Employer
Delete Invoice(s)
Delete Employer
As long as the entities are created in a consistent way, deleting them in the reverse order should also prove to be consistent.
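Sketched against an in-memory SQLite database (table and column names are made up for the example), the create steps and their mirrored deletes look like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employer (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE invoice (id INTEGER PRIMARY KEY,"
             " employer_id INTEGER REFERENCES employer(id))")

# Create: Employer, then Invoice, then the association.
conn.execute("INSERT INTO employer (id) VALUES (1)")
conn.execute("INSERT INTO invoice (id) VALUES (10)")
conn.execute("UPDATE invoice SET employer_id = 1 WHERE id = 10")

# Delete: reverse order - association, then Invoice, then Employer.
conn.execute("UPDATE invoice SET employer_id = NULL WHERE employer_id = 1")
conn.execute("DELETE FROM invoice WHERE id = 10")
conn.execute("DELETE FROM employer WHERE id = 1")

rows_left = conn.execute(
    "SELECT (SELECT COUNT(*) FROM invoice) + (SELECT COUNT(*) FROM employer)"
).fetchone()[0]
```

Deleting in reverse creation order means no foreign key ever dangles at any intermediate step.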
I have a 1-to-many relationship in my example app, taken from the Core Data documentation, where one Manager has multiple employees. I get the part on how to set the Manager-to-Employee relationship delete rule, but what about the Employee-to-Manager relationship? If I want a case where, if ALL the employees have been deleted, I want the Manager to also be deleted, what kind of delete rule should I apply? Cascade doesn't make sense, because then if one employee is deleted, the manager will get deleted even though he/she has other employees still linked. Nullify will delete the relationships correctly, but it won't delete the Manager when the last employee has been deleted.
Am I missing something, or do I have to do something custom in this case?
The delete rules don't have enough specificity to say, "delete self if relationship 'bobs' contains fewer than 'x' objects."
Instead, you should put such business logic in a custom NSManagedObject subclass. You can put a check in the Manager class's removeEmployeeObject: and removeEmployeeObjects: methods that tells the Manager instance to delete itself if the employees relationship is empty.
You can also use validation methods or the willSave method for this.
I have an application in mind which dictates that database tables be append-only; that is, I can only insert data into the database, never update or delete it. I would like to use LINQ to SQL to build this.
Since tables are append-only but I still need to be able to "delete" data, my thought is that each table Foo needs to have a corresponding FooDeletion table. The FooDeletion table contains a foreign key which references a Foo that has been deleted. For example, the following tables describe the state "Foos 1, 2, and 3 exist, but Foo 2 and Foo 3 have been deleted".
Foo        FooDeletion
id         id   fooid
----       ----------
1          1    2
2          2    3
3
Although I could build an abstraction on top of the data access layer which (a) prevents direct access to LINQ to SQL entities and (b) manages deletions in this manner, one of my goals is to keep my data access layer as thin as possible, so I'd prefer to make the DataContext or entity classes do the work behind the scenes. So, I'd like to let callers use Table<Foo>.DeleteOnSubmit() like normal, and the DAL knows to add a row to FooDeletion instead of deleting a row from Foo.
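Concretely, the behavior I'm after is just this (a SQLite sketch with made-up table names, mirroring the Foo/FooDeletion tables above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE foo_deletion ("
             "id INTEGER PRIMARY KEY AUTOINCREMENT, "
             "fooid INTEGER REFERENCES foo(id))")
conn.executemany("INSERT INTO foo (id) VALUES (?)", [(1,), (2,), (3,)])

def delete_foo(conn, foo_id):
    # "Deleting" only appends a tombstone row; foo itself is never touched.
    conn.execute("INSERT INTO foo_deletion (fooid) VALUES (?)", (foo_id,))

delete_foo(conn, 2)
delete_foo(conn, 3)

# The live rows are exactly those without a tombstone.
live = [r[0] for r in conn.execute(
    "SELECT id FROM foo WHERE id NOT IN (SELECT fooid FROM foo_deletion)")]
```

The question is how to hide that INSERT-instead-of-DELETE behind the normal DeleteOnSubmit() call.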
I've read through "Implementing Business Logic" and "Customizing the Insert, Update, and Delete Behavior of Entity Classes", but I can't find a concrete way to implement what I want. I thought I could use the partial method DataContext.DeleteFoo() to instead call ExecuteDynamicInsert(FooDeletion), but according to this article, "If an inapplicable method is called (for example, ExecuteDynamicDelete for an object to be updated), the results are undefined".
Is this a fool's errand? Am I making this far harder on myself than I need to?
You have more than one option - you can either:
a) Override SubmitChanges, take the change set (GetChangeSet()) and translate updates and deletes into inserts.
b) Use instead-of triggers db-side to change the updates/delete behavior.
c) Add a new Delete extension method to Table that implements the behavior you want.
d) ...or combine a+b+c as needed...
if you want a big-boy enterprise quality solution, you'd put it in the database - either b) from above or CRUD procedures <- my preference... triggers are evil.
If this is a small shop, without a lot of other developers or teams, or the data is of minimal value such that a second or third app trying to access it isn't likely, then stick with whatever floats your boat.
I have two domain classes with a many-to-many relationship in Grails: decks and cards.
The setup looks like this:
class Deck {
    static hasMany = [cards: Card]
}

class Card {
    static hasMany = [decks: Deck]
    static belongsTo = Deck
}
After I delete a deck, I want to also delete all cards which no longer belong to any deck. The easiest way to accomplish this is to write something like the following SQL:
delete from card where card.id not in (select card_id from deck_cards);
However, I can't figure out how to write an HQL query which will resolve to this SQL, because the join table, deck_cards, does not have a corresponding Grails domain class. I can't write the statement using normal joins because HQL doesn't allow joins in delete statements, and if I use a subquery to get around this restriction, MySQL complains because you're not allowed to refer to the table you're deleting from in the subquery's FROM clause.
I also tried using the Hibernate "delete-orphan" cascade option, but that results in all cards being deleted when a deck is deleted, even if those cards also belong to other decks. I'm going crazy - this seems like it should be a simple task.
Edit
There seems to be some confusion about this specific use of "decks" and "cards". In this application, the "cards" are flashcards, and there can be tens of thousands of them in a deck. Also, it is sometimes necessary to make a copy of a deck so that users can edit it as they see fit. In this scenario, rather than copying all the cards over, the new deck will just reference the same cards as the old deck; only if a card is changed will a new card be created. Also, while I can do this delete in a loop in Groovy, it will be very slow and resource-intensive, since it will generate tens of thousands of SQL delete statements rather than just one (using the above SQL). Is there no way to access a property of the join table in HQL?
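To be concrete, here is the behavior I'm after, runnable against plain SQLite (the join-table name is assumed to match GORM's default):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE card (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE deck (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE deck_cards (deck_id INTEGER, card_id INTEGER)")

# Two decks share card 1; card 2 belongs only to deck 1.
conn.executemany("INSERT INTO card (id) VALUES (?)", [(1,), (2,)])
conn.executemany("INSERT INTO deck (id) VALUES (?)", [(1,), (2,)])
conn.executemany("INSERT INTO deck_cards VALUES (?, ?)",
                 [(1, 1), (1, 2), (2, 1)])

# Delete deck 1 and its join rows...
conn.execute("DELETE FROM deck_cards WHERE deck_id = 1")
conn.execute("DELETE FROM deck WHERE id = 1")

# ...then remove every card that no longer belongs to any deck,
# in a single statement instead of one DELETE per orphaned card.
conn.execute("DELETE FROM card WHERE card.id NOT IN"
             " (SELECT card_id FROM deck_cards)")

surviving = [r[0] for r in conn.execute("SELECT id FROM card ORDER BY id")]
```

Only the shared card survives; the card orphaned by the deck's deletion is cleaned up by the one set-based statement.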
First, I don't see the point of your entities.
It is illogical for a card to belong to more than one deck, and it is illogical to have both belongsTo and hasMany.
Anyway, don't use HQL for deletes.
If you actually need a OneToMany, use session.remove(deck) and set the cascade of cards to REMOVE or ALL.
If you really want ManyToMany, do the checks manually on the entities. In pseudocode (since I don't know Grails):
for (Card card : deck.cards) {
    if (card.decks.size == 0) {
        session.remove(card);
    }
}
I won't be answering the technical side, but challenging the model. I hope this will also be valuable to you :-)
Functionally, it seems to me that your two objects don't have the same lifecycle:
Decks are changing: they are created, filled with Cards, modified, and deleted. They certainly need to be persisted to your database, because you wouldn't be able to recreate them using code otherwise.
Cards are constant: the set of all cards is known from the beginning, and they keep existing. If you delete a Card once in the database, then you will need to recreate the same Card later when someone needs to put it in a Deck, so in all cases you will have a data structure that is responsible for providing the list of possible Cards. If they are not saved in your database, you could recreate them...
In the model you give, the cards hold the set of Decks that contain them. But that information has the same lifecycle as the Decks' (changing), so I suggest holding the association only on the Deck's side (a unidirectional many-to-many relationship).
Once you've done that, your Cards are really constant information, so they don't even need to be persisted into the database. You would still have a second table (in addition to the Deck), but that Card table would only contain the identifying information for the Card (it could be a simple integer 1 to 52, or two values, depending on what you need to "select" in your queries), and not the other fields (an image, the strength, some points, etc...).
In Hibernate, these choices turn the many-to-many relationship into a collection of values (see the Hibernate reference documentation).
With a collection of values, Card is not an Entity but a Component, and you don't have to delete them; everything is automatically taken care of by Hibernate.
I am wondering how one can delete an entity having just its ID and type (as in the mapping), using NHibernate 2.1.
If you are using lazy loading, Load only creates a proxy.
session.Delete(session.Load(type, id));
With NH 2.1 you can use HQL. I'm not sure exactly what it looks like, but something like this (note that this is subject to SQL injection; if possible, use parameterized queries with SetParameter() instead):
session.Delete(string.Format("from {0} where id = {1}", type, id));
Edit:
For Load, you don't need to know the name of the Id column.
If you need to know it, you can get it by the NH metadata:
sessionFactory.GetClassMetadata(type).IdentifierPropertyName
Another edit.
session.Delete() instantiates the entity
When using session.Delete(), NH loads the entity anyway. At the beginning I didn't like it. Then I realized the advantages. If the entity is part of a complex structure using inheritance, collections or "any"-references, it is actually more efficient.
For instance, if class A and B both inherit from Base, it doesn't try to delete data in table B when the actual entity is of type A. This wouldn't be possible without loading the actual object. This is particularly important when there are many inherited types which also consist of many additional tables each.
The same situation is given when you have a collection of Bases, which happen to be all instances of A. When loading the collection in memory, NH knows that it doesn't need to remove any B-stuff.
If the entity A has a collection of Bs, which contains Cs (and so on), it doesn't try to delete any Cs when the collection of Bs is empty. This is only possible when reading the collection. This is particularly important when C is complex of its own, aggregating even more tables and so on.
The more complex and dynamic the structure is, the more efficient it is to load the actual data instead of "blindly" deleting it.
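The inheritance case can be illustrated outside of NHibernate. With joined-subclass style tables, loading the row first reveals the concrete type, so only the relevant subtype table needs a DELETE (a SQLite sketch with a hypothetical schema; a real mapper tracks types via its mapping metadata rather than a type column read by hand):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE base (id INTEGER PRIMARY KEY, type TEXT)")
conn.execute("CREATE TABLE a (id INTEGER PRIMARY KEY REFERENCES base(id))")
conn.execute("CREATE TABLE b (id INTEGER PRIMARY KEY REFERENCES base(id))")

conn.execute("INSERT INTO base VALUES (1, 'A')")
conn.execute("INSERT INTO a VALUES (1)")

def delete_entity(conn, entity_id):
    # Load first (as NH does) to learn the concrete type...
    (entity_type,) = conn.execute(
        "SELECT type FROM base WHERE id = ?", (entity_id,)).fetchone()
    # ...then delete only from that subtype's table plus the base table,
    # instead of blindly issuing DELETEs against every subclass table.
    conn.execute("DELETE FROM %s WHERE id = ?" % entity_type.lower(),
                 (entity_id,))
    conn.execute("DELETE FROM base WHERE id = ?", (entity_id,))
    return entity_type

deleted_type = delete_entity(conn, 1)
```

Here table b is never touched, which is the saving the answer describes when many subclasses each span additional tables.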
HQL Deletes have pitfalls
HQL deletes do not load data into memory, but they aren't that smart either. They basically translate the entity name to the corresponding table name and delete the rows from that table. Additionally, they delete some aggregated collection data.
In simple structures, this may work well and efficiently. In complex structures, not everything is deleted, leading to constraint violations or "database memory leaks".
Conclusion
I also tried to optimize deletion with NH. I gave up in most cases, because NH is smarter: it "just works" and is usually fast enough. One of the most complex deletion algorithms I wrote analyzed NH mapping definitions and built delete statements from them. And - no surprise - it was not possible without reading data from the database before deleting. (I just reduced it to loading only primary keys.)
I've got a parent and a child object. A value in the parent object determines the table for the child object. So, for example, if the parent object has the reference "01", it will look in the table "Child01", whereas if the reference is "02", it will look in the table "Child02". All the child tables are identical in their number of columns, names, etc.
My question is: how can I tell Fluent NHibernate or NHibernate which table to look at, given that each parent object is unique and can reference a number of different child tables?
I've looked at the IClassConvention in Fluent but this seems to only be called when the session is created rather than each time an object is created.
I found only two methods to do this.
Close and recreate the NHibernate session every time another dynamic table needs to be looked at, using IClassConvention on session creation to calculate the table name dynamically based on user data. I found this very intensive: it's a large database, and creating the session every time is a costly operation.
Use POCO objects for these tables with custom data access.
As statichippo suggested, I could use a BaseChild object and have multiple child classes, but due to the database size and the number of dynamic tables this wasn't really a valid option.
I wasn't particularly happy with either of my two solutions, but the POCOs seemed the best way to handle my problem.
NHibernate is intended to be an object-relational mapper. It sounds like you're working in more of a scripting style and hoping to map your data, instead of working in an OOP manner.
It sounds like you have the makings of a class hierarchy, though. What it sounds like you're trying to create in your code (and then map accordingly) is a hierarchy of different kinds of children:
BaseChild
--> SmartChild
--> DumbChild
Each child is either smart or dumb, but since they all have a FirstName, LastName, Age, etc., they are all instances of the BaseChild class, which defines these. The only differences might be that SmartChild has an IQ and DumbChild has a FavoriteFootballTeam (this is just an example - no offense to anyone, of course ;).
NHibernate will let you map this sort of relationship in many ways. There could be 1 table that encompasses all classes or (what it sounds like you want in your case), one table per class.
Did I understand the issue/what you're looking for?