Trigger for updated date column vs explicitly setting it - sql

I'm wondering whether using a trigger to set the updated-date column of a table (or all tables) is considered better practice than having the application set it explicitly. I realize this could devolve into a debate over preferences and design patterns, so in an effort to narrow the question a bit, I'd like to get a take from a best-practices standpoint. Taking separation of concerns into consideration (keeping business logic out of the database), as well as making sure database columns actually contain what is intended (i.e., the "last modified date" column really gets updated), I'm more inclined to let the database handle this via a trigger. That said, I also tend to shy away from triggers, since they tend to hide the consequences of an action in the database. I'm hoping that the many smarter people here on SO have a more concrete take than mine.

It rather depends on what you want to do. If you want to know when any column has changed, then a trigger is the most suitable approach; it would be a bit of a drag to have to change application code every time you added a new column.
However, if you were only interested in certain columns, you might simply wish to handle this in the update SQL.
There are of course shades in between - you could choose to handle only certain columns in the trigger.
Another consideration is how many bits of code can update a given table. You might have some all-singing, all-dancing update routine, but it is perfectly possible that updates are spread across many pieces of code.
One word of caution - triggers tend to be the last place you look when tracking down an issue. For this reason (and this is a personal preference), I tend to avoid them unless they're absolutely necessary.
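For illustration, here is a minimal sketch of such a trigger in SQL Server, assuming a table named Items with an integer primary key Id and a ModifiedDate column (all names are illustrative):

    CREATE TRIGGER trg_Items_SetModifiedDate
    ON Items
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Stamp every updated row; "inserted" holds the new row versions
        UPDATE i
        SET ModifiedDate = GETDATE()
        FROM Items AS i
        JOIN inserted AS ins ON ins.Id = i.Id;
        -- This does not fire itself recursively unless the database's
        -- RECURSIVE_TRIGGERS option has been turned on.
    END;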

Apart from performance tuning, the issue with triggers is that if they are disabled, you will never know it, and transactions will still be committed. This can cause a major problem if the update date is important to your business logic (for instance, for sync processes or for finance). I prefer to handle all business-related data explicitly within procedures, and to use triggers mainly for auditing.
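In SQL Server, at least, you can monitor for that condition, e.g.:

    -- List any disabled triggers so the problem doesn't go unnoticed
    SELECT name FROM sys.triggers WHERE is_disabled = 1;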

Related

Model validation logic in the database via constraints. Good idea, bad idea, or not worth it?

It's always rubbed me the wrong way to write code in my model's clean method to validate various constraints on the data when these same constraints aren't also present in the database.
After all, the database already has constraints for some of my data, like NOT NULL.
So, I've been writing RawSQL migrations that ADD CONSTRAINT some_logic in my most recent project that matches whatever logic I have in my clean() method.
It works OK, but it isn't an insignificant task to remember to add these constraints, add tests for these migrations, and update them when my model changes. Also, of course, I'm violating DRY by writing code in two places to do the same thing.
Should I give up this quixotic quest?
This is by no means a comprehensive answer, but at least I wanted to give my opinion.
There have been many frameworks that push the idea of removing constraints from the database in order to check them at the application level. The idea seemed nice to me at first (in the early 2000s), but after some years I came to the (very personal) conclusion that it is a bad idea.
To me it boils down to two things:
Data survives much longer than the applications. Whole systems go obsolete, but the data survives many more years. Sometimes the application is replaced, but the database is still the same one.
The application is not as reliable when it comes to validating data. I'm talking about programming defects here. One version of the app may work well, and then the next one has a bug. It may be that one developer leaves the company, and the new replacement -- who doesn't know as much -- changes the app with disastrous consequences. All that time, a simple database constraint (usually very cheap to implement) could have enforced data quality.
Yep, I'm a fan of strict database constraints. Nevertheless, this doesn't mean I'm against application validations; they can show much nicer error messages.
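As a hypothetical illustration of how cheap such a constraint can be (table and rule invented for the example):

    -- Enforce at the database level what the app also validates
    ALTER TABLE rental_item
    ADD CONSTRAINT ck_rental_item_price_positive CHECK (price >= 0);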
If writing too much logic in clean() feels dirty, an in-between solution would be to use Django's built-in validators directly on your model fields.
The validation logic isn't saved in the database, but it is tracked in migrations. Like clean() logic, validators require you to call Model.clean_fields(), but a ModelForm does this automatically.
You can also dig into django-db-constraints. The library might help do what you're looking to do, and the source code might help you roll a solution that fits your needs.

Change detection in complex system

This might seem like a fairly specific question, but I'm wondering if there is any technology or pattern out there that might help me in a current project. I have a hugely complex database which is updated by multiple systems. I now need to do change tracking on various bits of data that are spread across multiple tables, so that I can send them to a third-party system.
I've considered a number of options, but unfortunately I can't seem to come to any conclusion other than using database triggers. I'm thinking of storing a flag in a table (or queue) to identify the rows that have changed, and then building an XML diff containing the changed data to send to a web service. This feels a little dirty, so I was wondering if anyone could think of a better alternative.
Depending on the database platform you're using, you might look into Change Data Capture. Since you mention .NET, here's some info about it: http://technet.microsoft.com/en-us/library/bb522489(v=sql.105).aspx
Other database systems may offer something similar.
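For reference, enabling CDC in SQL Server is just two system procedure calls (schema and table names assumed for the example):

    -- Enable CDC at the database level (requires sysadmin)
    EXEC sys.sp_cdc_enable_db;

    -- Enable CDC for one table; NULL role means no gating role
    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'Orders',
        @role_name     = NULL;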
Another option would be insert/update/delete triggers on the tables, however triggers should be approached carefully as they can cause some significant performance problems if not done right.
Yet another option would be what you describe - some sort of flag to monitor for changes. Simple CREATED and MODIFIED timestamp fields can go a long way here: rather than just a bit indicator suggesting that the row may need attention, you'll know when the update happened, and your export process can be programmed accordingly (e.g., select * from table where modified > getdate()-1).
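A sketch of that approach (table and constraint names invented):

    -- Add audit columns; existing rows get the default values
    ALTER TABLE Orders ADD
        Created  DATETIME NOT NULL CONSTRAINT DF_Orders_Created  DEFAULT GETDATE(),
        Modified DATETIME NOT NULL CONSTRAINT DF_Orders_Modified DEFAULT GETDATE();

    -- Export everything touched in the last day
    SELECT * FROM Orders WHERE Modified > DATEADD(DAY, -1, GETDATE());

Note that something still has to keep Modified current on each UPDATE - either the application or a trigger, as discussed above.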

Reasons for objection to SQL triggers that insert data into other tables?

I'm being told by a person with some authority in our company that it's a "database no-no" to create triggers in a database that change rows in another table.
I've used this technique to create default initial configuration, auto-maintained audit logs, and various other things that would have been a nightmare to maintain consistently inside the heterogeneous applications that connect to that database. For over a decade, I've read that this is an appropriate way to centralize relationship-constraint maintenance and get the responsibility out of the applications interacting with the data.
As such, my BS meter is pegging with this. Am I missing something fundamentally wrong with that technique that makes it a bad practice in general?
If you are careful with your trigger code, there is nothing inherently bad about it. Some people get bitten by bad trigger code and then decide that triggers are bad (even though it was the bad trigger code that was the problem). They then generalize this as "never use triggers".
The other problem is duplication. Using the audit tables as an example: suppose you have a stored procedure that updates a table and puts data into an audit table. Now suppose you write trigger code to put data into the audit table as well. You could end up with duplicate audit data.
It's personal preference. In my opinion it's bad practice, though. It can get unruly to manage a database that has triggers on tables updating other tables, which fire further triggers updating yet more tables, and so on.
To me, it makes more sense to wrap all of the functionality into a stored procedure, so all the logic is in one place.
To each their own, though.
It may be company policy, but it is not a definitive no-no. The problem with doing it, unless you know and control the database, is that the amendments to other tables may be inefficient (and this can often be difficult to identify as an issue), and there is a danger of them cascading - that is, the amendment of this table fires another trigger, which may update another table...
So I would not call it a no-no as such, but something to be done with care.
I think it's similar to the admonition to avoid goto statements in structured programming. You should look pretty hard to find a "better" answer than putting a bunch of DML into triggers, because it's easy to shoot your foot off by mishandling triggers, but sometimes that's the best answer for the problem at hand.
My experience is mostly in SQL Server. Older versions didn't have change tracking or cascading referential integrity, so you might have had to write triggers to handle logging and referential integrity management yourself in the bad old days. Now there are superior tools for both these functions that are built right into the platform. They're "cleaner" than DML-laden triggers, but still allow you to keep those functions centralized.
Even so, there are still cases where writing code in a trigger gets the job done best. Don't let dogma stop you from doing your job, if you conclude through analysis that triggers are the tool you need.
Not all triggers that insert rows into another table are bad - for example, a trigger that keeps a previous version of the row in an AUDIT table, as you say. But without knowing the particulars, I do suspect that your "default initial configuration" created via trigger might be better done in a stored procedure, because the SP is more self-documenting.
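A sketch of such an audit trigger (table and column names assumed):

    CREATE TRIGGER trg_Items_Audit
    ON Items
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- "deleted" holds the pre-update version of each affected row
        INSERT INTO ItemsAudit (ItemId, OldState, ChangedAt)
        SELECT Id, State, GETDATE()
        FROM deleted;
    END;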
Idiosyncratic side effects are to be avoided -- and by 'idiosyncratic' I mean one person's idea of a shortcut that circumvents the tried-and-true standard way of doing something. They make ongoing maintenance difficult and can be booby traps for the unwary programmer who happens along later.
I'm still looking for that book in which "best practice" is defined - I think I'll find it next to "The big book of database no-nos".
Personally, I've encouraged teams to avoid triggers where possible.
I've had bad experiences with them - I agree that hitting oneself with a hammer doesn't make all hammers evil, but triggers are more like nail guns than hammers - the opportunity for harm is definitely there.
Triggers are "side effects" - your explicit intention is to insert a record in a table; as a side effect of this intention, a whole bunch of other stuff happens. Most teams I work with don't have "the database guy" - developers work across multiple layers, and keeping all the side effects in their brain is taxing. This creates the opportunity for bugs and performance problems - not because someone is stupid or lazy, but because the technology has been made more complex.
There are problems which are best (or even only) solved through triggers - enforcing referential integrity is a classic. There are problems which are often solved through triggers but make me nervous - especially when they reflect business rules, or the use of audit tables. And there are problems which - in my view - are too close to the nail gun's trigger to be solved by triggers, e.g. calculations in the business domain (such as calculating the tax amount for a sale).

Is adding a bit mask to all tables in a database useful?

A colleague is adding a bit mask to all our database tables. In theory this is so we can track certain properties of each row across the entire system. For example...
Is the row shipped with the system or added by the client once they've started using the system
Has the row been deleted from the table (soft deletes)
Is the row a default value within a set of rows
Is this a good idea? Are there other uses where this approach would be beneficial?
My preference: these properties are obviously important, and having a dedicated column for each property is justified, since it makes what is happening clearer to fellow developers.
Not really, no.
You can only store bits in it, and only so many. So it seems to me like it's asking for a lot of application-level headaches keeping track of what each one means, and potential abuse later on because "hey, they're everywhere". Is every bitmask on every table going to use the same definition for each bit? Will it be different on each table? What happens when you run out of bits - add another?
There are lots of potential things you could do with it, but it raises the question "why do it that way instead of identifying what we will use those bits for right now and just making them proper columns?" You don't really circumvent the possibility of schema changes this way anyway, so it seems like it's trying to solve a problem that you can't really "solve" - and especially not with bitmasks.
Each of the things you mentioned can be (and should be) solved with real columns on the database, and those are far more self-documenting than "bit 5 of the BitMaskOptions field".
A dedicated column is better, because it's undoubtedly more obvious and less error-prone. SQL Server already stores BIT columns efficiently, so performance is not an issue.
The only argument I could see for a bitmask is not having to change the DB schema every time you add a new flag, but really, if you're adding new flags that often then something is not right.
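For comparison, the same three properties as dedicated, self-documenting columns (names invented):

    ALTER TABLE Items ADD
        ShippedWithSystem BIT NOT NULL DEFAULT 0,
        IsDeleted         BIT NOT NULL DEFAULT 0,
        IsDefaultRow      BIT NOT NULL DEFAULT 0;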
No, it is not even remotely a good idea, IMO. Each column should represent a single concept and value. Bit masks have all kinds of performance and maintenance problems. How do new developers understand what each of the bits means? How do you prevent someone from accidentally mixing up the order of the bits?
It would be better to have a many-to-many relationship or separate columns rather than a bit mask. You will be able to index on it, enable referential integrity (depending on approach), easily add new items and change the order of the results to fit different reports and so on.

Getting rid of hard coded values when dealing with lookup tables and related business logic

Example case:
We're building a renting service, using SQL Server. Information about items that can be rented is stored in a table. Each item has a state that can be either "Available", "Rented" or "Broken". The different states reside in a lookup table.
ItemState table:
    id  name
    --  -----------
    1   'Available'
    2   'Rented'
    3   'Broken'
Adding to this we have a business rule which states that whenever an item is returned, its state is changed from "Rented" to "Available".
This could be done with an update statement like "update Items set state=1 where id=#itemid". In application code we might have an enum that maps to the ItemState ids. However, these contain hard-coded values that could lead to maintenance issues later on - say, if a developer were to change the set of states but forgot to fix the related business-logic layer...
What good methods or alternate designs are there for dealing with this type of design issues?
Links to related articles are also appreciated in addition to direct answers.
In my experience this is a case where you actually have to hard-code, preferably by using an enum whose integer values match the ids in your lookup table. I can't see anything wrong with saying that "1" is always "Available", and so forth.
Most systems that I've seen hard code the lookup table values and live with it. That's because, in practice, code tables rarely change as much as you think they might. And if they ever do change, you generally need to re-compile any programs that rely on that DDL anyway.
That said, if you want to make the code maintainable (a laudable goal), the best approach would be to externalize the values into a properties file. Then you can edit this file later without having to re-code your entire app.
The limiting factor here is that your app depends for its own internal state on the value you get from the lookup table, so that implies a certain amount of coupling.
For lookups where the app doesn't rely on that code, (for instance, if your code table stores a list of two-letter state codes for use in an address drop-down), then you can lazily load the codes into an object and access them only when needed. But that won't work for what you're doing.
When you have your lookup tables as well as enums defined in the code, then you always have an issue with keeping them in sync. There is not much that can be done here. Both live effectively in two different worlds and are generally unaware of each other.
You may wish to reject using lookup tables and only let your business logic operate on these values. In that case you lose the option of relying on referential integrity to back you up on data integrity.
The other option is to build up your application in such a way that you never need these values in your code. That means moving part of your business logic to the database layer - that is, putting it in stored procedures and triggers. This also has the benefit of being agnostic to the client: anyone can invoke the SPs and be assured the data will be kept in a consistent state, consistent with your business-logic rules as well.
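A sketch of such a procedure: the state is looked up by name inside the database, so the numeric id never leaks into application code.

    CREATE PROCEDURE ReturnItem @ItemId INT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Business rule: a returned item becomes Available again
        UPDATE Items
        SET state = (SELECT id FROM ItemState WHERE name = 'Available')
        WHERE id = @ItemId;
    END;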
You'll need to have some predefined value that never changes, be it an integer, a string or something else.
In your case, the numerical value of the state is the state's surrogate PRIMARY KEY which should never change in a well-designed database.
If you're concerned about the consistency, use a CHAR code: A, R or B.
However, the same rule applies as with a numeric code: stick to it, so that 'A' always means Available, etc.
Your database structure should be documented as well as the code is.
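For instance, a natural-key version of the lookup table might look like this:

    CREATE TABLE ItemState (
        code CHAR(1) PRIMARY KEY,          -- 'A', 'R', 'B'
        name VARCHAR(20) NOT NULL UNIQUE   -- 'Available', 'Rented', 'Broken'
    );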
The answer depends entirely on the language you're using: solutions for this are not the same in Java, PHP, Smalltalk or even Assembler...
But let me tell you something: while it's true that hard-coded values are not a great thing, there are times when you do need them. And this is pretty much one of them: you need to declare in your code your current knowledge of the business logic, which includes these hard-coded states.
So, in this particular case, I would hard code those values.
Don't overdesign it. Before trying to come up with a solution to this problem, you need to figure out whether it's even a problem. Can you think of any legitimate hypothetical scenario where you would change the values in the ItemState table? Not just "What if someone changes this table?" but "Someone wants to change this table in X way for Y reason; what effect would that have?". You need to stay realistic.
A new state? You add a row, but it doesn't affect the existing ones.
Removing a state? You have to remove the references to it in code anyway.
Changing the id of a state? There is no legit reason to do that.
Changing the name of a state? There is no legit reason to do that.
So there really should be no reason to worry about this. But if you must have this cleanly maintainable in the case of irrational people who randomly decide to change Available to 2 because it just fits their Feng Shui better, make sure all tables are generated via a script which reads these values from a configuration file, and then make sure all code reads constants from that same configuration file. Then you have one definition location and any time you want to change the value you modify that configuration file instead of the DB/code.
I think this is a common problem and a valid concern, that's why I googled and found this article in the first place.
What about creating a public static class to hold all the lookup values, but instead of hard-coding them, initializing these values when the application is loaded and referring to them by name?
In my application we tried this, and it worked. You can also do some checking, e.g. verify that the number of possible values of a lookup in code matches the number in the db, and if not, log/email/etc. But I don't want to hand-code this for the statuses of 40+ business entities.
Moreover, this can be seen as part of the bigger problem of O/R mapping. We're exposed to too many details of the persistence layer, and thus we have to take care of them. With technologies like Entity Framework, we don't need to worry about the "sync" part because it's automated - am I right?
I've used a similar method to what you're describing - a table in the database with values and descriptions (useful for reporting, etc.) and an enum in code. I've handled the synchronization with a comment in code saying something like "these values are taken from table X in database ABC" so that the programmer knows the database needs to be updated. To prevent changes from the database side without the corresponding changes in code I set permissions on the table so that only certain people (who hopefully remember they need to change the code as well) have access.
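In SQL Server, for example, that lock-down can be as simple as granting modify rights only to a dedicated role (role name invented):

    -- Everyone can read the lookup table; only LookupAdmins may change it
    GRANT SELECT ON dbo.ItemState TO public;
    GRANT INSERT, UPDATE, DELETE ON dbo.ItemState TO LookupAdmins;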
The values have to be hard-coded, which effectively means that they can't be changed in the database, which means that storing them in the database is redundant.
Therefore, hard-code them and don't have a lookup table in the database. Instead, store the item's state directly in the Items table.
You can structure your database so that your application doesn't actually have to care about the codes themselves, but rather the business rules behind them.
I have done both of the following:
Do one or more of your codes have a certain characteristic, such as IsAvailable, that the application cares about? If so, add it as a flag column to the code table, where those that match are set to true (or your DB's equivalent), and those that don't are set to false.
Do you need to use a specific, single code under a certain condition? You can create a singleton table, named something like EnvironmentSettings, with a column such as ItemStateIdOnReturn that's a foreign key to the ItemState table.
If I wanted to avoid declaring an enum in the application, I would use #2 to address the example in the question.
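Sketches of both approaches (all names hypothetical):

    -- #1: a characteristic flag on the code table
    ALTER TABLE ItemState ADD IsAvailable BIT NOT NULL DEFAULT 0;

    -- #2: a singleton settings table pointing at a specific code
    CREATE TABLE EnvironmentSettings (
        Id INT PRIMARY KEY CHECK (Id = 1),  -- enforces a single row
        ItemStateIdOnReturn INT NOT NULL REFERENCES ItemState (id)
    );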
Whether you take this approach depends on your application's priorities. This type of structure comes at the cost of additional development and lookup overhead. Plus, if every individual code comes with its own business rules, then it's not practical to create one new column per required code.
But, it may be worthwhile if you don't want to worry about synchronizing your application with the contents of a code table.