Is Entity Inheritance better done through a single table or multiple? - sql

I have a table that holds agreement information. It works well for 95% of the agreements we record.
But there is a certain type of agreement that would require another 6 or so fields to capture info specific to that type of agreement.
My question is whether it's better to just add those 6 fields to the existing agreement table, knowing that the info is meaningless for many of the agreement records, or whether it's better to create another table with a 1:1 relationship to the original agreement table, extending it for these special types of agreements.
Neither option is all that attractive to me, but I wanted to know if one was considered a better practice than the other when you have a choice.
Thanks for any help.

Multiple tables are probably your best bet from an extensibility perspective. Although you currently have a single agreement type that requires extra fields, the very existence of such a thing suggests that similar variations in the future are worth accommodating in your design. When other variants arise, a multiple-table approach will allow you to include those agreement types and their characteristic attributes gracefully.
Also, it would be worth considering an agreement_type attribute (or something similar) in your superclass table, for cases in which you may want to perform agreement type analysis without incurring the hit of a join to the subclass table(s). The intended usage of the data will be your guide as to whether or not such an attribute makes sense.
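For illustration, a minimal sketch of that layout (all names here are hypothetical, not taken from your schema): the shared columns and the type discriminator live in the superclass table, and the six extra fields live in a 1:1 subclass table.

CREATE TABLE Agreement (
    AgreementID   int PRIMARY KEY,
    AgreementType varchar(20) NOT NULL,  -- discriminator, e.g. 'STANDARD', 'SPECIAL'
    EffectiveDate date
    -- ...plus the columns shared by all agreements...
);

CREATE TABLE SpecialAgreementDetail (
    -- shares its primary key with Agreement, so the relationship is 1:1
    AgreementID int PRIMARY KEY REFERENCES Agreement (AgreementID),
    ExtraField1 varchar(100),
    ExtraField2 varchar(100)
    -- ...the remaining type-specific fields...
);

Queries that only do agreement-type analysis can then stay on Agreement and skip the join entirely.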

My choice would be a separate table, because for 95% of agreements it's pointless information.

It depends.
A single table is easier to query, and 6 additional fields aren't that much in the way of additional storage. If the number of rows in your existing table is small, or the number of existing columns is already pretty big, then on balance I would add the additional fields to the existing table.
On the other hand, if this change would massively inflate the size of an already-large existing table, it would definitely be worth considering setting up a new table.


Related

Is there a database design pattern name for reducing duplicate join table data?

I have two tables with a join table to allow a many-to-many relationship.
It's a very familiar design pattern. It indicates which Branches each Member has access to.
As the number of members and branches increases I end up with a lot of data in the join table that is duplicated across members. Members tend to have access to the same groups of Branches as other Members.
So I'm looking at normalizing my data by creating a MemberProfile table that is effectively immutable. Rather than creating MemberBranch records for every Member, I check for a matching MemberProfile, use it if it already exists, or create one if it doesn't.
The idea being if I have a million Members with only a hundred access profiles this will save me a lot of space in my database.
I'm happy that it all works and that the development effort is worth it.
My question is "Is this a standard database design pattern, and if so, what is it called?"
EDIT: It's been pointed out that this is compressing the data not normalizing it. Which is the intent behind the design.
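For reference, a minimal sketch of the shape described above (hypothetical names; the Branch table is assumed to exist as in the original design):

CREATE TABLE MemberProfile (
    ProfileID int PRIMARY KEY
);

-- The former Member-to-Branch join rows become Profile-to-Branch rows
CREATE TABLE MemberProfileBranch (
    ProfileID int NOT NULL REFERENCES MemberProfile (ProfileID),
    BranchID  int NOT NULL REFERENCES Branch (BranchID),
    PRIMARY KEY (ProfileID, BranchID)
);

-- A million Members can share a hundred profiles
CREATE TABLE Member (
    MemberID  int PRIMARY KEY,
    ProfileID int NOT NULL REFERENCES MemberProfile (ProfileID)
);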
Unless your many:many table is always the join of particular other base tables, this is not normalization. You aren't normalizing here. Normalization does not introduce new column names; it just rearranges the current ones among different base tables.
You are just compressing/encoding your data. There is not necessarily any benefit in this, since now some queries and updates will be slower although your database is smaller. (You have reported that it is worth it in your case.)
I understand you'd like to put a label on that precise transformation, but unfortunately there aren't many books that discuss database design or refactoring patterns. One of the few is Refactoring Databases by Scott Ambler and Pramod Sadalage, published in Martin Fowler's signature series (you may know Fowler for his work on analysis patterns; he also has a great blog, worth following!). That book presents a bunch of refactoring patterns that can be applied to databases and puts a name on common database transformations, including the one you have presented, which it calls Split Table.
Split Table. Vertically split (e.g. by columns) an existing table into one or more tables.
A catalog of the database refactorings presented in that book is available here.
Hi, I don't know a name for the pattern, but I've used the same principle before.
To keep this performing well, introduce a checksum on MemberProfile based upon the branches for the profile; this way a lookup for an existing profile is easy and fast.
But do remember that the checksum is not necessarily unique, in case of collisions you will still have to check the branches, but only for the profiles sharing the same checksum.
Cleanup can be a scheduled task; it is nothing more than deleting the profiles without users.
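A sketch of that lookup (SQL Server syntax; the temp table of candidate branches is made up for illustration):

-- An index on the checksum keeps the bucket lookup cheap
ALTER TABLE MemberProfile ADD BranchChecksum int NOT NULL DEFAULT 0;
CREATE INDEX IX_MemberProfile_BranchChecksum ON MemberProfile (BranchChecksum);

-- Checksum of the candidate branch set
DECLARE @candidate int;
SELECT @candidate = CHECKSUM_AGG(CHECKSUM(BranchID)) FROM #CandidateBranches;

-- Only profiles in the matching bucket need the full branch-set comparison
SELECT ProfileID FROM MemberProfile WHERE BranchChecksum = @candidate;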

Implementing Review flags in Databases; best practices

I need to store some review flags that relate to some entities. Each review flag can only relate to a single entity property group. For example, table Parents has a ParentsStatus flag and table Children has a set of ChildrenStatus flags.
In the current design proposal I have three tables:
ReviewTypes: stores the flags and the properties they relate to.
ReviewPositions: stores the values the flags can have.
Reviews: stores the transaction data, the actual reviews. It is like UsersToFlags: Flags in a database rows, best practices.
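A rough sketch of that proposal (the column names are my guesses for illustration; the actual design may differ):

CREATE TABLE ReviewTypes (
    ReviewTypeID int PRIMARY KEY,
    Name         varchar(50) NOT NULL,  -- e.g. 'ParentsStatus', 'ChildrenStatus'
    EntityName   varchar(50) NOT NULL   -- the entity/property group it applies to
);

CREATE TABLE ReviewPositions (
    ReviewPositionID int PRIMARY KEY,
    ReviewTypeID     int NOT NULL REFERENCES ReviewTypes (ReviewTypeID),
    Value            varchar(50) NOT NULL  -- a value the flag may take
);

-- The transaction data: one row per actual review
CREATE TABLE Reviews (
    ReviewID         int PRIMARY KEY,
    ReviewTypeID     int NOT NULL REFERENCES ReviewTypes (ReviewTypeID),
    ReviewPositionID int NOT NULL REFERENCES ReviewPositions (ReviewPositionID),
    EntityID         int NOT NULL,  -- the reviewed row in the target table
    ReviewedAt       datetime NOT NULL
);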
The problem is I am getting pushback that there is no need to have the Reviews table and it would be better to just store this actual review data on each entity. For example, add an extra column to Parents to hold ParentsStatus. They feel it is a simpler solution and separating the data out is just “overkill” for our scenario.
I don’t like this idea as this means that every time we want to add a new review flag we need to update the core entity table to hold that flag.
Space is not a problem.
Do people have any strong opinions?
Edit:
This comment applies to the three answers. The consensus is the relational approach is best, but I think I need to read up a little more on the EAV model: from some very basic reading ("Best beginner resources for understanding the EAV database model?" and its related links) it does not appear to be super straightforward, and I don't want to dig myself into a hole. Thanks to wildplasser. I'll loop back once I read up a bit more.
Oh yes. Their idea is simpler, until you want to enhance it. Given the scheme they are proposing, what if two reviews were needed per entity? What if you wanted to attach other things, such as notes/annotations? Once they find out how much of an inflatable dartboard their idea is, what do you have to help you move to a more useful one? Not to mention you need some way of identifying status fields, with fragile rubbish like column names ending with "_Status", or you have to hard-code them somewhere.
Doing it properly is not that much more work, and it's not more complex; in fact, in many ways it's simpler, and it will cope with the inevitable changes at far less cost.
Normalization is always preferable to premature optimization.
One reason why I like the reviews table separate is that you can hold changes you may not want to display yet (as they haven't been reviewed and approved) and still maintain the old data until the new is approved. I don't know if your situation requires that.
To make future programming simpler for when you want to display the changes, you can write a view that shows the old and new data.

What are common pitfalls when developing a new SQL database?

I know, I quite dislike the catch-all survey type questions, but I couldn't think of a better way to find out what I think I need to know. I'm very green in the world of database development, having only worked on a small number of projects that merely interacted with the database rather than having to actually create a new one from scratch. However, things change and now I am faced with creating my own database.
So far, I have created the tables I need and added the columns that I think I need, including any link tables for many-many relationships and columns for one-to-many relationships. I have some specific questions on this, but I felt that rather than get just these answered, it would make more sense to ask about things I may not even know, which I should address now rather than 6 months from now when we have a populated database and client tools using it.
First the questions on my database which have led me to realise I don't know enough:
How do I ensure my many-to-many link tables and my one-to-many columns are up-to-date when changes are made to the referenced tables? What problems may I encounter?
I am using nvarchar(n) and nvarchar(MAX) for various text fields. Should I use varchar equivalents instead (I had read there may be performance risks in using nvarchar)? Are there any other gotchas regarding the selection of datatypes besides being wary of using fixed length char arrays to hold variable length information? Any rules on how to select the appropriate datatype?
I use int for the ID column of each table, which is my primary key in all but the link tables (where I have two primary keys, the IDs of the referenced table rows). This ID is set as the identity. Are there pitfalls to this approach?
I have created metadata tables for things like unit types and statuses, but I don't know if this was the correct thing to do or not. Should you create new tables for things like enumerated lists or is there a better way?
I understand that databases are complex and the subject of many many worthy tomes, but I suspect many of you have some tips and tricks to augment such reading material (though tips for essential reading would also be welcome).
Community wiki'd due to the rather subjective nature of these kinds of posts. Apologies if this is a duplicate; I've conducted a number of searches for something like this but couldn't find any, though this one is certainly related. Thanks.
Update
I just found this question, which is very similar in a roundabout way.
Not normalising
Not using normalisation
Trying to implement a denormalised schema from the start
Seriously:
Foreign keys will disallow deletes or updates from the parent tables. Or they can be cascaded.
Small as possible: see two recent SO questions on datatypes and (n)varchar.
May not be portable and your "natural key" (say "product name") still needs a unique constraint. Otherwise no, but remember that an IDENTITY column is a "surrogate key"
Edit: Say you expect to store fruit with columns FruitID and FruitName. You have no way to restrict the data to one occurrence of "Apple" or "Orange" because, although FruitName is your "natural key", you are using a surrogate key (FruitID). So, to maintain integrity, you need a unique constraint on FruitName (see the sketch after this list).
Not sure of your meaning, sorry. Edit: Don't do it. Ye olde "One True Lookup Table" idea.
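To make the fruit example from the edit above concrete, a minimal sketch (SQL Server syntax for the IDENTITY column):

CREATE TABLE Fruit (
    FruitID   int IDENTITY(1,1) PRIMARY KEY,          -- surrogate key
    FruitName varchar(50) NOT NULL,
    CONSTRAINT UQ_Fruit_FruitName UNIQUE (FruitName)  -- natural key stays unique
);

INSERT INTO Fruit (FruitName) VALUES ('Apple');
INSERT INTO Fruit (FruitName) VALUES ('Apple');  -- fails, as it should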
I'll reply to your subjective query with some vague generalities. :)
The most common pitfall of designing a database is the same pitfall of any programming solution, not fully understanding the problem being solved. In the case of a database, it is understanding the nature of the data. How big it is, how it comes and goes, what business rules must it adhere to.
Here are some questions to ponder.
What is updated the most frequently? Is keeping that table write-locked going to lock up queries? Will it become a hot spot? Even a seemingly well normalized schema can be a poor performer if you don't understand your read versus write ratios.
What are your external interface needs? I've been on projects where the dotted line to "that other system" nearly scuttled the whole project because implementing it was delayed until everything else was in place, that is to say, everything else was inflexible.
Any other unspoken requirements? My favorite is date sensitivity. All the data is there, your reports are beautiful, the boss looks them over and asks, when did that datum change? Who did it and when? Is the database supposed to track itself and its users, or just the data? Will your front end do it for you?
Just some things to think about.
It does sound like you've got a good grasp on what you're meant to be doing, and indeed there isn't "one true path" to doing databases.
Have you set up cascades for your hierarchical objects (i.e., a single delete at the 'head' of your object in the database will delete all entries in tables relating to that entry)?
Your link tables and 1:n columns should be foreign keys, so there isn't much to worry about if the data changes. By "two primary keys" here, did you mean indexes?
As for metadata tables, I've done them in the past, and I've not done them. A single char status with SQL comment can suffice for a limited set of statuses, but beyond a certain amount, or where you can think of adding more in the future, you might want to have a reference to another table of metadata, or maybe a char(8ish). E.g., I've seen user tables have "NORMAL", "ADMIN", "SUPER", "GUEST", etc for user type, which could have been 1,2,3,4,5 fkeys to a "UserType" table, but with such a restricted enumeration does it matter? Other people have a table of permissions (booleans of what a user can do) instead - many ways to skin a cat.
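For instance, the lookup-table version of that user-type enumeration might look like this (names invented for illustration):

CREATE TABLE UserType (
    UserTypeID int PRIMARY KEY,
    Name       varchar(10) NOT NULL UNIQUE  -- 'NORMAL', 'ADMIN', 'SUPER', 'GUEST'
);

CREATE TABLE AppUser (
    UserID     int PRIMARY KEY,
    UserTypeID int NOT NULL REFERENCES UserType (UserTypeID)
);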
You might find some usable stuff in these slides:
http://www.slideshare.net/billkarwin/sql-antipatterns-strike-back
I also am a beginner to database design, but I found this online tutorial very, very helpful:
Database design with UML and SQL, 3rd edition
The author explains all the fundamental design aspects of database, and in a very clear manner. Before I found this online guide I did a lot of wikipedia reading about normalization. While that helped, this author explains the exact same stuff (through 3rd normal form, at least) but in a much, much easier to read way. It pretty much addresses all your questions as well.
I'd suggest a good book. The best IMO is this:
http://www.amazon.com/Server-2005-Database-Design-Optimization/dp/1590595297/ref=ntt_at_ep_dpt_1
In addition to not normalizing, a common problem I see is overindexing, done before there are performance measurements that take into account your in-production mix of reads vs. writes.
It's really, really easy to add an index to speed up a query, and harder to figure out which one to remove when you have several that are getting updated during an INSERT or UPDATE.
The middle ground is to go after obvious secondary indexes (e.g., for common, frequent lookups by name on large tables), deferring other candidate indexes until you have reasonable performance tests in place.
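For instance, one of those "obvious" secondary indexes might be no more than this (hypothetical table):

-- Frequent lookups by name on a large table
CREATE INDEX IX_Customer_LastName ON Customer (LastName);
-- Every INSERT and UPDATE on Customer now maintains this index too,
-- which is why more speculative indexes should wait for real measurements.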
Among other things: not using primary keys; not thinking ahead about whether you'll be using indexed views (and designing tables accordingly; I once had to drop and recreate a large table at my site to change its ANSI_NULLS setting to ON so that I could then use it with an indexed view); and not using indexes.
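To illustrate the indexed-view point: in SQL Server, both the session that creates the objects and the base tables need specific SET options (ANSI_NULLS ON among them), which is why the table had to be rebuilt. A sketch with made-up names:

-- Required SET options (several others must also be set correctly)
SET ANSI_NULLS ON;
SET QUOTED_IDENTIFIER ON;
GO

CREATE VIEW dbo.SalesSummary
WITH SCHEMABINDING  -- required for an indexed view
AS
SELECT ProductID,
       SUM(Quantity) AS TotalQty,
       COUNT_BIG(*)  AS RowCnt  -- COUNT_BIG(*) is mandatory with GROUP BY
FROM dbo.SalesOrderLine
GROUP BY ProductID;
GO

-- The unique clustered index is what actually materializes the view
CREATE UNIQUE CLUSTERED INDEX IX_SalesSummary ON dbo.SalesSummary (ProductID);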

How to handle column growth of wide, flat tables

How would you DBA's handle this? I have taken ownership of an existing app (VB6) and database that was written in 1999. The database design is fairly 'flat', meaning the main tables are fairly wide (100+ columns) and developers have continued to tack on additional columns to the end of the tables. This has resulted in columns that have a lot of Nulls since they don't directly relate to the primary key.
I am considering splitting the main table out as a way to abstract myself from the years and years of 'column explosion'. I am certain that new fields will continue to be added as new requirements come up.
So the question is, as new fields are needed, do you continue to grow the width of the existing table? Or do you STOP extending an existing table and split it out into a separate supporting table that will house new fields, thereby creating a 1-to-1 relationship? If you were to split the main table, what would your naming scheme be?
Let's assume for this example I have a table called 'Foreclosure' with 150 fields.
What is a good name for the new 1-to-1 table? 'ForeclosureExtended'? 'ForeclosureOtherInfo'?
By the way, there are Views and Stored Procs that will need to be modified to support any new tables, but that is inevitable anyway when columns are added.
Thanks in advance for any thoughts.
80% of the time, your nulls have definite patterns.
These patterns define subclasses of your table. In your case, they will be subclasses of Foreclosure.
Your splitting should be based on these subclass relationships.
Say, for example, some Foreclosure instances have a bunch of fields related to legal proceedings that are nearly all filled in, and other Foreclosure instances have the legal-proceeding fields entirely filled with nulls.
You have two classes. You need to work out the relationship between them -- are they superclass-subclass or are they peer subclasses of some other superclass?
This tells you how to partition your table to make useful stuff happen.
You may have proper superclass-subclass relationships.
You may have found a thing (a LegalProceeding) which should have been a separate table all along. It should not have been permanently joined into Foreclosure. This is remarkably common.
You now have some relational implementation choices.
One common choice is to put all subclasses into a single, massive table with a lot of nulls. This is what you have today, and it isn't working.
One choice is to split the two subclass relationship tables into peers, duplicating the common information.
One choice is to have a superclass table with an optional FK reference to the additional information in the subclass.
One choice is to have a subclass table with a mandatory FK reference to the superclass information.
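As a sketch of that last choice (column names invented), the subclass can simply share the superclass key, which makes the FK mandatory and the relationship at most 1:1:

CREATE TABLE Foreclosure (
    ForeclosureID int PRIMARY KEY
    -- ...the columns common to every foreclosure...
);

CREATE TABLE ForeclosureLegalProceeding (
    -- same key as the superclass row: mandatory FK, at most one subclass row
    ForeclosureID int PRIMARY KEY REFERENCES Foreclosure (ForeclosureID),
    CourtName     varchar(100),
    FilingDate    date
    -- ...the legal-proceeding columns that were mostly NULL before...
);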
Unless you are really brave, the app is very small/simple, or there are major performance issues, do not fix the schema. If it ain't broke, don't fix it.
Just create a new table ForeclosureExtended, as you suggest with the same key and start adding columns. Or, you could make proper tables with grouped columns as new columns appear. Either way, if the schema is this bad, I'll bet the code is very fragile.
Why do you feel that you have a problem? To my mind it's easier to deal with one table that has a lot of columns than it is to deal with a ton of narrower tables and all the associated views you have to maintain.

Deciding on a database structure for pricing wizard

Option A
We are working on a small project that requires a pricing wizard for custom tables. (Yes, actual custom tables, the kind you eat at. From here on out I'll call them kitchen tables so we don't get confused.) I came up with a model where each kitchen table part was a database table. So the database looked like this:
TableLineItem
-------------
ID
TableSizeID
TableEdgeWoodID
TableBaseID
Quantity
TableEdgeWood
-------------
ID
Name
MaterialUnitCost
LaborSetupHours
LaborWorkHours
Each part has to be able to calculate its price. Most of the calculations are very similar. I liked this structure because I can drag it right into the linq-to-sql designer, and have all of my classes generated. (Less code writing means less to maintain...) I then implement a calculate-cost interface which just takes in the size of the table. I have written some tests and this functions pretty well. I also added a table to filter parts in the UI based on previous selections. (You can't have a particular wood with a particular finish.) There are some other one-off exceptions in the model, and I have them hard-coded. This model is very rigid, and changing requirements would change the data model. (For example, if all the tables suddenly need umbrellas.)
Option B
After various meetings with my colleagues (which probably took more time than it should considering the size of this project), my colleagues decided they would prefer a more generic approach. Something like this:
Spec
----
SpecID
SpecTypeID
TableType_LookupID
Name
MaterialUnitCost
LaborSetupHours
LaborWorkHours
SpecType
--------
SpecTypeID
ParentSpecType_SpecTypeID
IsCustomerOption
IsRequiredCustomerOption
etc...
This is a much more generic approach that could be used to construct any product. (Like, if they started selling chairs...) I think this would take longer to implement, but would be more flexible in the future. (Although I doubt we will revisit this.) Also, you lose some referential integrity; you would need triggers to enforce that a table base cannot be set for a table wood.
Questions:
Which database structure do you prefer? Feel free to suggest your own.
What would be considered a best practice? If you have several similar database tables, do you create 1 database table with a type column, or several distinct tables? I suspect the answer begins with "It depends..."
What would an estimated time difference be in the two approaches (1 week, 1 day, 150% longer, etc)
Thanks in advance. Let me know if you have any questions so I can update this.
Having been caught out much more often than I should have by designing db structures that met my clients' original specs but which turned out to be too rigid, I would always go for the more flexible approach, even though it takes more time to set up.
I don't have time for a complete answer right now, but I'll throw this out:
It's usually a bad idea to design a database based on the development tool that you're using to code against it.
You want to be generic to a point. Tables in a database should represent something and it is possible to make it too generic. For example, a table called "Things" is probably too generic.
It may be possible to make constraints that go beyond what you expect. Your example of a "table base" with a "table wood" didn't make sense to me, but if you can expand on a specific example someone might be able to help with that.
Finally, if this is a small application for a single store then your design is going to have much less impact on the project outcome than it would if you were designing for an application that would be heavily used and constantly changed. This goes back to the "too generic" comment above. It is possible to overdesign a system when its use will be minimal and well-defined. I hope that makes sense.
Given your comment below about the table bases and woods, you could set up a table called TableAttributes (or something similar) and each possible option would be of a particular table attribute type. You could then enforce that any given option is only used for the attribute to which it applies all through foreign keys.
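One way to get that enforcement without triggers is a composite foreign key (a sketch; all names invented): the option row carries its attribute type, and any referencing row has to match both columns at once.

CREATE TABLE TableAttributeType (
    AttributeTypeID int PRIMARY KEY,
    Name            varchar(50) NOT NULL  -- 'Base', 'EdgeWood', 'Size', ...
);

CREATE TABLE AttributeOption (
    OptionID        int PRIMARY KEY,
    AttributeTypeID int NOT NULL REFERENCES TableAttributeType (AttributeTypeID),
    Name            varchar(50) NOT NULL,
    -- alternate key so that children can reference the pair
    UNIQUE (AttributeTypeID, OptionID)
);

CREATE TABLE LineItemChoice (
    LineItemID      int NOT NULL,
    AttributeTypeID int NOT NULL,
    OptionID        int NOT NULL,
    -- the pair must exist together, so an option belonging to 'Base'
    -- can never be stored in the slot reserved for 'EdgeWood'
    FOREIGN KEY (AttributeTypeID, OptionID)
        REFERENCES AttributeOption (AttributeTypeID, OptionID)
);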
There is a tendency to over-abstract with database schema design, because the cost of change can be high. Myself, I like table names that are fairly descriptive. I often equate schema design with OO design. E.g., you wouldn't normally create a class named Thing, you would probably call it Product, Furniture, Item, something that relates to your business.
In the schema you have provided there is a mix of the abstract (spec) and the specific (TableType_LookupID). I would tend to equalize the level of abstraction, so use entities like:
ProductGroup (for the case where you have a product that is a collection of other products)
Product
ProductType
ProductDetail
ProductDetailType
etc.
Here's what my experience would tell me:
Which database structure do you prefer? Without a doubt, I'd go for approach one. Go for the simplest setup that might work. If you add complexity, always ask yourself, what value will it have to the customer?
What would be considered a best practice? That does indeed depend, among others on the size of the project and the expected rate of change. As a general rule, generic tables are worth it when you expect the customer to be adding new types. For example, if your customer wants to be able to add a new "color" entity to the table, you'd need generic tables. You can't predict beforehand what they will add.
What would an estimated time difference be in the two approaches? Not knowing your business, skill, and environment, it's impossible to give a valid estimate. The approach that you are confident in coding will take the least time. Here, my guess would be approach #1 could be 5x-50x as fast. Generic tables are hard, both on the database and the client side.
Option B.
Generic is generally better than specific. Software is already doomed to fail or to reach its capacity because it is designed for a certain set of tasks only. If you build something generic, it will break less, provided the abstraction rests on a realistic analysis of where it might head. As long as you stay away from over-abstraction and under-abstraction, you're probably in the sweet spot.
In this case the adage "less code is more" probably applies, in that you wouldn't have to come back and rewrite it again.