Detect model deletions - activejdbc

I have a working SQLite database with ActiveJDBC. I would like to perform some operation when I detect that a model is being deleted (either through direct deletion, or as a result of a cascade delete). I can detect direct deletions by overriding the delete() method in my models. Is it possible to detect the other model deletions?
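Roughly, my override looks like this (simplified, with a made-up model name):
import org.javalite.activejdbc.Model;

public class Item extends Model { // "Item" is just an example model name
    @Override
    public boolean delete() {
        // react to a direct deletion of this record here
        System.out.println("Deleting record with id " + getId());
        return super.delete();
    }
}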
In addition, I found somewhat strange behavior. If a model is deleted as a result of a cascade operation, it is not frozen in the process, so I can still work with it even when the database no longer stores it. Is this the expected behavior?
Regards.
PS: I cannot use the javalite tag, as it does not exist and I cannot create new tags.

@alberto-anguita, please see Lifecycle Callbacks: http://javalite.io/lifecycle_callbacks, specifically these callbacks:
void beforeDelete();
void afterDelete();
They will allow you to get notified if your model is deleted.
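For example, a model can simply override them; a minimal sketch (the model name is hypothetical, see the linked page for the exact signatures):
import org.javalite.activejdbc.Model;

public class Item extends Model {
    @Override
    protected void beforeDelete() {
        // fires just before this record is deleted
    }

    @Override
    protected void afterDelete() {
        // fires just after this record is deleted
    }
}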
If a model is deleted as a result of a cascade and not marked frozen, this might be an issue. Please submit and document it here:
https://github.com/javalite/activejdbc/issues. Specify exactly which cascade method you are using. Defects like that usually take less than 24 hours to fix.
You cannot create a javalite tag because your reputation on SO is 1 :)

Related

Should I ignore _NSCoreDataConstraintViolationException?

For some reason I only recently found out about unique constraints for Core Data. It looks way cleaner than the alternative (doing a fetch first, then inserting the missing entities in the designated context) so I decided to refactor all my existing persistence code.
If I got it right, the gist of it is to always insert a new entity and, as long as I have a proper merge policy, saving the context will take care of the uniqueness in a more efficient way. The problem is that every time I save a context with the inserted entity I get an NSCoreDataConstraintViolationException (though no error). When I do the fetch to make sure that
there is indeed only one instance with a unique field
other changes to this entity were applied
everything seems to be okay, but I’m still concerned about this exception, since I do saves and therefore get it quite often, a few times per second in some cases.
My project is in Objective-C, and I know exceptions are expensive there, so I'm wondering whether I'm missing something.
Here is a sample project with this issue (just a few lines of code, be sure to add an exception breakpoint)
NSMergeByPropertyObjectTrumpMergePolicy and constraints are not useful tools and should never be used. The correct way to manage uniqueness is with a fetch before the insert, as it appears you have already been doing.
Let's start with why the only correct merge policy is NSErrorMergePolicy. You should only be writing to Core Data in one synchronized way (performBackgroundTask is not enough; you also need an operation queue). If you have two performBackgroundTask blocks running at the same time and they contradict each other, you will lose data. A merge policy answers the question "which data would you like to lose?" The correct answer is "Don't lose my data!", which is NSErrorMergePolicy.
The same issue happens when you have a constraint. Let's say you have an entity with a unique constraint on the phone number, and you try to insert another entity with the same phone number. What would you like to happen? It depends on what exactly the data is. It might be two different people, and the phone numbers should be made different (perhaps they were lacking an area code), or it might be one person and the data should be merged. Or you might have a constraint on a uniqueID and the number should just be incremented. But at the database level it doesn't know. It always just does a merge, and it will silently lose data.
You can create a custom NSMergePolicy and inspect NSConstraintConflict to decide what to do. But in practice you'd have to think about every place you edit the database and what each change means, which can be very hard outside the context of writing a change to the database. In other words, the problem with constraints and merge policies is that they run at the wrong level of your application to deal with the problem effectively.
Using constraints with a merge policy of error is OK, as it is a way to find problems with your app (as long as you are monitoring crashes and fixing them). But you still need to do the fetch before the insert to make sure the error doesn't happen.
If you want to clean up code, then just have one place where you create your objects, something like objectWithId:createIfNeed:inContext: which does the fetch and create.

Remove a database field when deleting property from a class using datamapper

I am using datamapper in a Sinatra application. I currently use the command
DataMapper.finalize.auto_upgrade!
to handle the migrations. I had two classes (Artists and Events) with a 'has_n' and 'belongs_to' association. An Event 'belonged_to' one Artist, and an Artist could have many Events associated with it.
I changed the association to be a many_to_many relationship by deleting the previous parts of the class definition which governed the original one_to_many association in the models and adding
has n, :artists, :through => Resource
to the Event class and the corresponding code to the Artist class. When I make a new Event, an error is kicked off.
#<DataObjects::IntegrityError: events.artist_id may not be NULL
The :artist_id field is a relic of the original association between the two classes. The new many_to_many association is accessed via event.artists[i] (where 'i' is just an integer index from 0 to the number of associated artists - 1). Apparently the original association between the Artist and Event classes is still there? My guess is that the solution is not to just use the auto_upgrade method built into DataMapper, but rather to write an explicit migration. If there is a way to handle this type of change to the database and still have the auto_upgrade method work, that would be great!
If you need more details about my models or anything please ask and I'll gladly add them.
In my experience, DataMapper's auto_upgrade does not work very well -- or, to say the least, it doesn't work the way I expect it to. If you want to add a new column to your model, it will do what it should; try to do anything more sophisticated to a column and it probably won't behave as you expect.
For example, if you create a property of type String, it will initially have a length of 50 characters. If you notice that 50 characters is not enough to hold your string, adding :length => 100 to the model won't be enough to make auto_upgrade change the column's width.
It seems you have stumbled upon another shortcoming, although one may argue that, in your case, maybe DataMapper's behavior isn't that bad (think of legacy databases). But the fact is that, when you changed the association, the Event's artist_id column wasn't removed, and then when you try to save an Event, you'll get an error because the database says it is a required field.
Notice that the error you are getting is not a validation error: DataMapper thinks everything looks ok, but gets an error from the database when trying to save the object.
Hope this helps!
Auto-upgrade is not a shortcoming at all! I think of auto-upgrade as a convenience feature of DataMapper. Its only intended purpose is to add columns for you, as far as I know. So it is great for getting a project started quickly and for managing test and dev environments without having to write migrations, but it is not the best tool for making modifications to a mature, live project.
For that, DataMapper does have migrations! Use the dm-migrations gem. The shortcoming is that they are not very well documented... at all. I'm actually working on changing a current project of mine over to using migrations, and I hope to contribute some instructions to the dm-migrations github wiki. But if you aren't ready to switch to migrations, you can also just update columns manually using an SQL client, and then continue to use auto-upgrade for new columns. That's what I have been doing for 2 years on my project :)

(Fluent) NHibernate progress events for lengthy transactions?

We've hooked up the ISaveOrUpdateEventListener event and hoped we could tie it to a progress bar update for each node visited during the save traversal of a pretty big model, BUT the event only fires once when the save operation starts (only on the node on which the Save() was initiated, and not on any subnodes).
Are there any other events that are more appropriate to listen to for this?
We've also tried breaking up the save operation (of a hierarchical model) by doing the traversal ourselves, but that seems to degrade the performance even further.
Perhaps we're trying to solve a problem FNH wasn't intended for. We're new to it.
We've also set up an alternative solution using SqlBulkCopy, as recommended elsewhere.
We've seen the comments that FNH is primarily intended for smaller transactions (OLTP) and not the type of exhaustive model we're bound to by our problem (signal processing of huge data volumes).
Background:
We're trying to use Fluent NHibernate on a larger database project with data gathered from fairly complex real time analysis (high frequency, multiple input signals, long experiment times etc). In a prototype we've built we see pretty scary wait times for the moment, and need to hook in some sort of reliable progress indicator.
Yes, now confirmed - as mentioned in my comment above. One (possible) solution to this is to simply turn off cascades, do the model traversal manually, and make explicit Save() calls.
This works, although it's not as neat as just handling an event. Still, given the design of NHibernate, I bet there's an event somewhere that could be intercepted - the question is just under what name. ... I bet someone on here knows more.
Also, to improve performance we used a stateless session, experimented with different batch sizes, and periodically/explicitly called Flush() and Clear(). See the articles below for further details, plus a rough sketch of the idea after the links:
http://davybrion.com/blog/2008/10/bulk-data-operations-with-nhibernates-stateless-sessions/
http://ideas-net.blogspot.com/2009/03/nhibernate-update-performance-issue.html
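The same flush/clear pattern, written out in (Java) Hibernate terms purely as an illustration - the batch size and entity handling are assumptions, and NHibernate's API is analogous:
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class BatchSave {
    // Save a large number of entities, flushing and clearing in batches.
    static void saveAll(SessionFactory factory, Iterable<?> entities) {
        Session session = factory.openSession();
        Transaction tx = session.beginTransaction();
        int count = 0;
        for (Object entity : entities) {
            session.save(entity);
            if (++count % 50 == 0) {   // batch size of 50 is an assumption; tune it
                session.flush();       // push pending SQL to the database
                session.clear();       // detach saved objects so the session stays small
            }
        }
        tx.commit();
        session.close();
    }
}
Clearing the session periodically keeps the first-level cache from growing over the whole run, which is what hurt us most.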
Hope this helps.

Is it bad to rely on foreign key cascading?

The lead developer on a project I'm involved in says it's bad practice to rely on cascades to delete related rows.
I don't see how this is bad, but I would like to know your thoughts on if/why it is.
I'll preface this by saying that I rarely delete rows, period. Generally, you want to keep most data. You simply mark it as deleted so it won't be shown to users (i.e. to them it appears deleted). Of course it depends on the data, and for some things (e.g. shopping cart contents) actually deleting the records when the user empties his or her cart is fine.
I can only assume that the issue here is that you may unintentionally delete records you don't actually want to delete. Referential integrity should prevent this, however. So I can't really see a reason against it other than the case for being explicit.
I would say you should follow the principle of least surprise.
Cascading deletes should not cause unexpected loss of data. If a delete requires related records to be deleted, and the user needs to know that those records are going to go away, then cascading deletes should not be used. Instead, the user should be required to explicitly delete the related records, or be provided a notification.
On the other hand, if the table relates to another table that is temporary in nature, or that contains records that will never be needed once the parent entity is gone, then cascading deletes may be OK.
That said, I prefer to state my intentions explicitly by deleting the related records in code, rather than relying on cascading deletes. In fact, I've never actually used a cascading delete to implicitly delete related records. Also, I use soft deletion a lot, as described by cletus.
I never use cascading deletes. Why? Because it is too easy to make a mistake. It is much safer to require client applications to delete explicitly (and to meet the conditions for deletion, such as deleting FK-referred records first).
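For illustration, the explicit client-side approach could look like this rough sketch (plain JDBC, with made-up table and column names):
import java.sql.Connection;
import java.sql.PreparedStatement;

public class ExplicitDelete {
    // Delete dependent rows first, then the parent, inside one transaction.
    static void deleteCustomer(Connection conn, long customerId) throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement delOrders =
                 conn.prepareStatement("DELETE FROM orders WHERE customer_id = ?");
             PreparedStatement delCustomer =
                 conn.prepareStatement("DELETE FROM customers WHERE id = ?")) {
            delOrders.setLong(1, customerId);
            delOrders.executeUpdate();
            delCustomer.setLong(1, customerId);
            delCustomer.executeUpdate();
            conn.commit();
        } catch (Exception e) {
            conn.rollback(); // nothing is removed if any step fails
            throw e;
        }
    }
}
Every delete is spelled out, so nothing disappears that the code did not explicitly ask for.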
In fact, deletions per se can be avoided by marking records as deleted or by moving them into archival/history tables.
In the case of marking records as deleted, it depends on the relative proportion of data marked as deleted: since SELECTs will have to filter on 'isDeleted = false', an index will only be used if less than roughly 10% of records (depending on the RDBMS) are marked as deleted.
Which of these 2 scenarios would you prefer:
Developer comes to you and says, "Hey, this delete won't work." You both look into it and find that he was accidentally trying to delete the entire table contents. You both have a laugh and go back to what you were doing.
Developer comes to you, and sheepishly asks "Do we have backups?"
There's another great reason to not use cascading UPDATES or DELETES: they hold a serializable lock. Holding a serializable lock can kill performance.
Another huge reason to avoid cascading deletes is performance. They seem like a good idea until you need to delete 10,000 records from the main table, which in turn have millions of records in child tables. Given the size of this delete, it is likely to completely lock down all of the tables for hours, maybe even days. Why would you ever risk this? For the convenience of spending ten minutes less writing the extra delete statements for single-record deletes?
Further, the error you get when you try to delete a record that has child records is often a good thing. It tells you that you don't want to delete this record because there is data you need that you would lose if you did so. A cascade delete would just go ahead and delete the child records, resulting in a loss of information about orders, for instance, if you deleted a customer who had orders in the past. This sort of thing can thoroughly mess up your financial records.
I was likewise told that cascading deletes were bad practice... and thus never used them, until I came across a client who used them. I really didn't know why I was not supposed to use them, but thought they were very convenient because I didn't have to write code to delete all the FK records as well.
Thus I decided to research why they were so "bad", and from what I've found so far there doesn't appear to be anything problematic about them. In fact, the only good argument I've seen so far is what HLGLEM stated above about performance. But as I am usually not deleting that number of records, I think in most cases using them should be fine. I would like to hear any other arguments others may have against using them, to make sure I've considered all options.
I'd add that ON DELETE CASCADE makes it difficult to maintain a copy of the data in a data warehouse using binlog replication, which is how most commercial ETL tools work. Explicit deletion from each table maintains a full log record and is much easier on the data team :)
I actually agree with most of the answers here, yet not all scenarios are the same; it depends on the situation at hand and on the entropy of that decision. For example:
If you have a deletion command for an entity that has many-to-many/belongs-to relationships with a large number of other entities, then each time you call that deletion process you also need to remember to delete all the corresponding FKs from each relational pivot the entity has relationships with.
Whereas with a cascade on delete, you write that once as part of your schema, and it will only delete those corresponding FKs and clean up the pivots for relations that are no longer necessary. Imagine 24 relations for an entity, plus other entities that also have a large number of relations on top of that. Again, it really depends on your setup and what you feel comfortable with. In any case, just FYI, in an Illuminate migration schema file you would write it like this:
$table->dropForeign(['permission_id']);
$table->foreign('permission_id')
->references('id')
->on('permission')
->onDelete('cascade');

Mark "deleted" instead of physical deletion with Castle ActiveRecord

In my current project, we got a rather unusual request (to me, at least). The client wants every deletion procedure to mark a flag instead of physically deleting the record from the database table. It looked pretty easy at first glance. I just have to change
public void DeleteRecord(Record record)
{
record.DeleteAndFlush();
}
public IList GetAllRecords()
{
return Record.FindAll().ToList();
}
To
public void DeleteRecord(Record record)
{
record.Deleted = true;
record.UpdateAndFlush();
}
public IList GetAllRecords()
{
return Record.FindAll().Where(x => x.Deleted == false).ToList();
}
But after I got a bit of time to think it through again, I found that this little change would cause a huge problem with my cascade settings. As I am pretty new to the ActiveRecord business, I wouldn't trust myself to simply change all the CascadeEnum.Delete settings to CascadeEnum.SaveUpdate. So I am looking for some input here.
1) Is the requirement to mark a flag instead of physically deleting a common one?
2) If the answer to question 1 is yes, then I believe there is something built into NHibernate to handle this. Can someone tell me the right approach for this kind of problem?
Thanks for your input.
This is known as Soft Deletes and it is very common. There is some debate about the best practice - check out this recent blog post: http://ayende.com/Blog/archive/2009/08/30/avoid-soft-deletes.aspx
This is quite common and called "soft delete". Here's an implementation of this for NHibernate.
This is a relatively common request and it's sometimes implemented for (at least) two reasons:
Auditing and history - Marking a row as deleted rather than physically deleting it means the information is still available if needed (including recovery of information if, for example, you accidentally delete the wrong customer).
Performance - I've seen systems that batch up deletes with this method so they can be performed physically at a quiet time. I'm doubtful this is needed with modern DBMSs, but I can see how it might have been in the past if you wanted to avoid cascaded deletes on severely overloaded systems (of course, you shouldn't be running on such a system in the first place). Oracle 8 introduced a feature like this where you could drop columns in this manner and it would only physically remove them when you asked - you couldn't use the column at all, even though the information had not yet been fully removed. Granted, removal of a column is more intensive than removal of a row, but it may still help.