Combine multiple outrigger tables into one? - sql

I have a dimension table and several outriggers.
create table dimFoo (
FooKey int primary key,
......
)
create table triggerA (
FooKey int references dimFoo (FooKey),
Value varchar(255),
primary key nonclustered (FooKey, Value)
)
create table triggerB (
FooKey int references dimFoo (FooKey),
Value varchar(255),
primary key nonclustered (FooKey, Value)
)
create table triggerC (
FooKey int references dimFoo (FooKey),
Value varchar(255),
primary key nonclustered (FooKey, Value)
)
Should these outrigger tables be merged into one table?
create table Triggers (
FooKey int references dimFoo (FooKey),
TriggerType varchar(20), -- triggerA, triggerB, triggerC, etc....
Value varchar(255),
primary key nonclustered (FooKey, TriggerType, Value)
)

To handle this kind of scenario (for example, dimCustomer where a customer can have multiple hobbies), the typical Kimball approach is to use a bridge table between dimensions (dimCustomer and dimHobby).
This link provides a summary of how bridge tables can solve this problem, along with alternatives that may work better for you.
Without knowing more about your specific scenario, including the business requirements, how many of these value types you have, how uniform the various value types and values are, and the BI technology you'll be using to access the data, it's hard to give a definitive answer on whether you should combine the bridges into one uber-bridge that caters for the various many-to-many relationships. All of the above influence the answer to some extent.
Typically the 'generic tables' approach is more useful behind the scenes for administration than it is for presenting data for analytics. My default approach would be to keep specific bridge tables until/unless that became unmanageable from an ETL perspective, or was perceived as much more complex from a user query perspective. I wouldn't 'optimise' into a combined table from the get-go.
If your situation is outside the usual norms (do you have three of these, as per your example, or ten?), combining could well be a good idea. The result would be more like a factless fact table, with dimensions of dimCustomer, dimValueType and dimValue, and would be a perfectly reasonable solution.
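For illustration, a minimal sketch of that bridge-table pattern for the customer/hobby example (all table and column names here are assumptions for illustration, not from the question; the weighting factor is an optional Kimball convention):
create table dimCustomer (
CustomerKey int primary key,
CustomerName varchar(100)
)
create table dimHobby (
HobbyKey int primary key,
HobbyName varchar(100)
)
create table bridgeCustomerHobby (
CustomerKey int references dimCustomer (CustomerKey),
HobbyKey int references dimHobby (HobbyKey),
WeightingFactor decimal(5,4), -- optional: allocates each customer's weight across hobbies
primary key nonclustered (CustomerKey, HobbyKey)
)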

Related

Problems on having a field that will be null very often on a table in SQL Server

I have a column that will sometimes be null. This column is also a foreign key, so I want to know whether I'll have problems with performance or with data consistency if this column is null very often.
I know it's a foolish question, but I want to be sure.
There is no problem with this per se, other than that it is likely an indication of a poorly normalized design. There might be performance implications due to the way indexes are structured and the sparseness of the column with nulls, but without knowing your structure or intended querying scenarios, any conclusions one might draw would be pure speculation.
A better solution might be a shared primary key, where table A has a primary key and there is zero or one record in B with the same primary key.
If table A can have one or zero B, but more than one A can refer to the same B, then what you have is a one-to-many relationship. This can be represented as Pieter laid out in his answer. It allows multiple A records to refer to the same B, and in turn each B may optionally refer to an A.
So there are two alternative structures to address this problem, and choosing between them is not guesswork. There is a distinct rationale for why you would choose one or the other, depending on the nature of the relationships you are modelling.
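A minimal sketch of the shared-primary-key structure described above (table names A and B are illustrative): B reuses A's primary key, so each A has zero or one matching B and no foreign key is ever null.
create table A (
ID int identity not null primary key
)
go
create table B (
ID int not null primary key references A(ID) -- same key value as its A row
)
go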
Instead of this design (Detail shown first, since Master's foreign key requires Detail to exist before Master is created):
create table Detail (
ID int identity not null primary key
)
go
create table Master (
ID int identity not null primary key,
DetailID int null references Detail(ID)
)
go
consider this instead:
create table Master (
ID int identity not null primary key
)
go
create table Detail (
ID int identity not null primary key,
MasterID int not null references Master(ID)
)
go
Now the foreign key is never null; instead, the existence (or not) of the Detail record indicates whether a detail exists.
If a Detail can exist for multiple Master records, create a mapping table to manage the relationship, as sketched below.
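A sketch of such a mapping table, reusing the Master and Detail tables from above (this assumes Detail's MasterID column is dropped in favour of the mapping table); the composite primary key prevents duplicate pairings:
create table MasterDetail (
MasterID int not null references Master(ID),
DetailID int not null references Detail(ID),
primary key (MasterID, DetailID)
)
go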

SQL - Denormalization

I am trying to familiarize myself with a new database that is structured like this:
CREATE TABLE [TableA] (ID int not null, Primary Key (ID))
CREATE TABLE [TableB] (ID int not null, Primary Key (ID))
CREATE TABLE [TableC] (ID int not null, ID2 int, ID3 int, ID4 int, primary key (ID),
FOREIGN KEY (ID2) REFERENCES TableA(ID), FOREIGN KEY (ID3) REFERENCES TableB(ID))
Table C is a many-to-many junction table between TableA and TableB. TableC.ID is unique (as it is a primary key). TableC.ID4 is also unique and does not seem to refer to anything. I contacted the developer, who described it as a "denormalization of the M1 (many to 1) entity". I fully understand the purpose of denormalization (normalizing a database and then intentionally introducing anomalies for performance reasons), but I still do not understand the reasoning behind this one. Is there a pattern or concept that I am unaware of? The application is written in C++ with a bit of VB.NET.
It's fair denormalization if TableC.ID4 contains values that you'd ordinarily have to perform an additional join or lookup to obtain. So have you checked the application code to see what that column is being populated with? If it doesn't refer to anything and doesn't provide any enrichment to the row data as a whole, you may safely move on with your development.
This is not an answer per se, just a related thought. Please don't start downvoting it.
Is there any sort of link between TableC.ID and TableC.ID4? In one of my projects I had a similar case: I had userid and username in the user table, both unique, with userid as the primary key. One option was to remove username from that table and create a separate table mapping userid to username. I am not a great fan of normalization, so I decided it was an overhead to fire a join query every time I needed data from the user table containing username, and kept my design as it was.

Variable amount of sets as SQL database tables

More of a question concerning the database model for a specific problem. The problem is as follows:
I have a number of objects that make up the rows in a fixed table; they are all distinct (of course). I would like to create sets that contain a variable number of these stored objects. These sets are user-defined, so nothing is hard-coded, and each set is identified by a number.
My question is: what advice can you experienced SQL programmers give me for implementing such a feature? My most direct approach would be to create a table for each such set, using table variables or temporary tables, along with an existing table that contains the names of the sets (as a way to let the user know which sets are currently present in the database).
If that is not efficient, what direction should I be looking in to solve this?
Thanks.
Table variables and temporary tables are short-lived and narrow in scope, and probably not what you want to use for this. One table for each Set is also not a solution I would choose.
By the sound of it you need three tables: one for Objects, one for Sets, and one for the relationship between Objects and Sets.
Something like this (using SQL Server syntax to describe the tables).
create table [Object]
(
ObjectID int identity primary key,
Name varchar(50)
-- more columns here necessary for your object.
)
go
create table [Set]
(
SetID int identity primary key,
Name varchar(50)
)
go
create table [SetObject]
(
SetID int references [Set](SetID),
ObjectID int references [Object](ObjectID),
primary key (SetID, ObjectID)
)
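Membership queries then become simple joins. For example (using a hypothetical set name):
-- list all objects belonging to the set named 'MySet'
select o.ObjectID, o.Name
from [SetObject] so
join [Set] s on s.SetID = so.SetID
join [Object] o on o.ObjectID = so.ObjectID
where s.Name = 'MySet'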

How to design my ingredient database tables?

I'm working on a recipe module (ASP.NET MVC, Entity Framework, SQL Server), and one of the entities I have to set up in the database is ingredients, with their characteristics and translations into a number of languages.
I was thinking of creating two tables as follows:
Table Ingredient
Id, nvarchar(20), primary key
EnergyInKCal, float
... other characteristics
Source, nvarchar(50)
Table IngredientTranslation
Id, nvarchar(20), primary key
LanguageCode, nvarchar(2)
Name, nvarchar(200)
So each ingredient will be defined once in the Ingredient table, with a unique code as their primary key, for example:
'N32-004669', 64, 368, 'NUBEL'
and translated in the IngredientTranslation table, for example
'N32-004669', 'NL', 'Aardappel, zoete'
'N32-004669', 'FR', 'Pomme de terre, douce'
'N32-004669', 'EN', 'Potato, sweet'
I think querying ingredients becomes easy like this... do you think it's a good idea to use the code (which is nvarchar(20)) as a primary key? Or is a simple bigint better, even though I'd then have to use JOINs in my queries? Maybe there are other approaches that are better performance-wise?
EDIT: after reading the answers, I redesigned the tables as follows:
Table Ingredient
Id, bigint, primary key
ExternalId, nvarchar(20)
EnergyInKCal, float
... other characteristics
Source, nvarchar(50)
Table IngredientTranslation
Id, bigint, primary key
IngredientId, bigint (relation with Id of Ingredient table)
LanguageCode, nvarchar(2)
Name, nvarchar(200)
Thanks,
L
Since (in SQL Server) the clustered primary key is included in every nonclustered index, it's best to keep the primary key small. So an int identity is an excellent choice.
One side note: storing translations in a database has a rather hefty performance impact, both on the database and on the rendering engine that has to build the web page. Since translations are fairly constant, most websites store them outside the database; in ASP.NET, the typical choice would be resource files.
In a purely relational schema, using a natural key (such as ID, above) as the primary key should be fine, although in OO design surrogate keys are likely to be preferred.
A couple of additional notes:
The primary key on IngredientTranslation would need to be a compound key on Id and LanguageCode, not just Id.
Normalization also says to remove derived values, so you only need one field for energy: pick one of the energy units (kJ or kcal) and use the appropriate conversion factor (1 kcal = 4.184 kJ) to convert where appropriate.
EDIT: in the alternative scenario, I suggest adding a unique index to each of the Ingredient (on code) and IngredientTranslation (on Code and Language Code) tables. I also suggest renaming the code field as IngredientCode on the IngredientTranslation table.
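Putting the redesign and these suggestions together, one possible sketch (assuming SQL Server syntax; the identity columns and constraint placement are illustrative, not prescribed by the answer):
create table Ingredient (
Id bigint identity primary key,
ExternalId nvarchar(20) not null unique, -- the 'N32-004669' style code
EnergyInKCal float,
-- other characteristics
Source nvarchar(50)
)
go
create table IngredientTranslation (
Id bigint identity primary key,
IngredientId bigint not null references Ingredient(Id),
LanguageCode nvarchar(2) not null,
Name nvarchar(200) not null,
unique (IngredientId, LanguageCode) -- one translation per language per ingredient
)
go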

Generic Database table design

Just trying to figure out the best way to design my table for the following scenario:
I have several areas in my system (documents, projects, groups and clients) and each of these can have comments logged against them.
My question is should I have one table like this:
CommentID
DocumentID
ProjectID
GroupID
ClientID
etc
Where only one of the IDs will have data and the rest will be NULL, or should I have a separate CommentType table and have my comments table like this:
CommentID
CommentTypeID
ResourceID (this being the id of the project/doc/client)
etc
My thoughts are that option 2 would be more efficient from an indexing point of view. Is this correct?
Option 2 is not a good solution for a relational database. It's called a polymorphic association (as mentioned by @Daniel Vassallo) and it breaks the fundamental definition of a relation.
For example, suppose you have a ResourceId of 1234 on two different rows. Do these represent the same resource? It depends on whether the CommentTypeId is the same on these two rows. This violates the concept of a type in a relation. See SQL and Relational Theory by C. J. Date for more details.
Another clue that it's a broken design is that you can't declare a foreign key constraint for ResourceId, because it could point to any of several tables. If you try to enforce referential integrity using triggers or something, you find yourself rewriting the trigger every time you add a new type of commentable resource.
I would solve this with the solution that @mdma briefly mentions (but then ignores):
CREATE TABLE Commentable (
ResourceId INT NOT NULL IDENTITY,
ResourceType INT NOT NULL,
PRIMARY KEY (ResourceId, ResourceType)
);
CREATE TABLE Documents (
ResourceId INT NOT NULL,
ResourceType INT NOT NULL CHECK (ResourceType = 1),
FOREIGN KEY (ResourceId, ResourceType) REFERENCES Commentable
);
CREATE TABLE Projects (
ResourceId INT NOT NULL,
ResourceType INT NOT NULL CHECK (ResourceType = 2),
FOREIGN KEY (ResourceId, ResourceType) REFERENCES Commentable
);
Now each resource type has its own table, but the serial primary key is allocated uniquely by Commentable. A given primary key value can be used only by one resource type.
CREATE TABLE Comments (
CommentId INT IDENTITY PRIMARY KEY,
ResourceId INT NOT NULL,
ResourceType INT NOT NULL,
FOREIGN KEY (ResourceId, ResourceType) REFERENCES Commentable
);
Now Comments reference Commentable resources, with referential integrity enforced. A given comment can reference only one resource type. There's no possibility of anomalies or conflicting resource ids.
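A hypothetical usage sketch (T-SQL) of how the key allocation flows: the Commentable row is created first, and its identity value is reused in the subtype and comment tables.
-- allocate a resource id for a new document (ResourceType 1 = document)
INSERT INTO Commentable (ResourceType) VALUES (1);
DECLARE @rid INT = SCOPE_IDENTITY();
-- the document reuses the allocated (ResourceId, ResourceType) pair
INSERT INTO Documents (ResourceId, ResourceType) VALUES (@rid, 1);
-- the comment references the same pair, with referential integrity enforced
INSERT INTO Comments (ResourceId, ResourceType) VALUES (@rid, 1);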
I cover more about polymorphic associations in my presentation Practical Object-Oriented Models in SQL and my book SQL Antipatterns.
Read up on database normalization.
Nulls used in the way you describe would be a big indication that the database isn't designed properly.
You need to split up your tables so that the data held in them is fully normalized. This will save you a lot of time further down the line, guaranteed, and it's a much better practice to get into the habit of.
From a foreign key perspective, the first example is better because you can have multiple foreign key constraints on a column but the data has to exist in all those references. It's also more flexible if the business rules change.
To continue from @OMG Ponies' answer, what you describe in the second example is called a Polymorphic Association, where the foreign key ResourceID may reference rows in more than one table. However, in SQL databases a foreign key constraint can reference exactly one table, so the database cannot enforce the foreign key according to the value in CommentTypeID.
You may be interested in checking out the following Stack Overflow post for one solution to tackle this problem:
MySQL - Conditional Foreign Key Constraints
The first approach is not great, since it is quite denormalized: each time you add a new entity type, you need to alter the table. You may be better off making the comment an attribute of the document, i.e. storing the comment inline in the document table.
For the ResourceID approach to work with referential integrity, you will need to have a Resource table, and a ResourceID foreign key in all of your Document, Project etc.. entities (or use a mapping table.) Making "ResourceID" a jack-of-all-trades, that can be a documentID, projectID etc.. is not a good solution since it cannot be used for sensible indexing or foreign key constraint.
To normalize, you need to split the comment table into one table per resource type.
Comment
-------
CommentID
CommentText
...etc
DocumentComment
---------------
DocumentID
CommentID
ProjectComment
--------------
ProjectID
CommentID
If only one comment is allowed, then you add a unique constraint on the foreign key for the entity (DocumentID, ProjectID etc.) This ensures that there can only be one row for the given item and so only one comment. You can also ensure that comments are not shared by using a unique constraint on CommentID.
EDIT: Interestingly, this is almost parallel to the normalized implementation of ResourceID: replace "Comment" in the table names with "Resource", change "CommentID" to "ResourceID", and you have the structure needed to associate a ResourceID with each resource. You can then use a single table, "ResourceComment".
If there are going to be other entities associated with any type of resource (e.g. audit details, access rights, etc.), then using the resource mapping tables is the way to go, since it will allow you to add normalized comments and any other resource-related entities.
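A sketch of that parallel structure, extrapolated from the description above (all names are illustrative, not from the answer):
CREATE TABLE Resource (
ResourceID INT IDENTITY PRIMARY KEY
);
CREATE TABLE Comment (
CommentID INT IDENTITY PRIMARY KEY,
CommentText NVARCHAR(MAX)
);
CREATE TABLE DocumentResource (
DocumentID INT NOT NULL UNIQUE, -- each document has at most one resource row
ResourceID INT NOT NULL UNIQUE REFERENCES Resource(ResourceID)
);
CREATE TABLE ResourceComment (
ResourceID INT NOT NULL REFERENCES Resource(ResourceID),
CommentID INT NOT NULL REFERENCES Comment(CommentID),
PRIMARY KEY (ResourceID, CommentID)
);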
I wouldn't go with either of those solutions. Depending on some of the specifics of your requirements you could go with a super-type table:
CREATE TABLE Commentable_Items (
commentable_item_id INT NOT NULL,
CONSTRAINT PK_Commentable_Items PRIMARY KEY CLUSTERED (commentable_item_id))
GO
CREATE TABLE Projects (
commentable_item_id INT NOT NULL,
... (other project columns)
CONSTRAINT PK_Projects PRIMARY KEY CLUSTERED (commentable_item_id))
GO
CREATE TABLE Documents (
commentable_item_id INT NOT NULL,
... (other document columns)
CONSTRAINT PK_Documents PRIMARY KEY CLUSTERED (commentable_item_id))
GO
If each item can only have one comment and comments are not shared (i.e. a comment can only belong to one entity), then you could just put the comments in the Commentable_Items table. Otherwise you could link the comments off of that table with a foreign key.
I don't like this approach very much in your specific case though, because "having comments" isn't enough to put items together like that in my mind.
I would probably go with separate Comments tables (assuming that you can have multiple comments per item - otherwise just put them in your base tables). If a comment can be shared between multiple entity types (i.e., a document and a project can share the same comment) then have a central Comments table and multiple entity-comment relationship tables:
CREATE TABLE Comments (
comment_id INT NOT NULL,
comment_text NVARCHAR(MAX) NOT NULL,
CONSTRAINT PK_Comments PRIMARY KEY CLUSTERED (comment_id))
GO
CREATE TABLE Document_Comments (
document_id INT NOT NULL,
comment_id INT NOT NULL,
CONSTRAINT PK_Document_Comments PRIMARY KEY CLUSTERED (document_id, comment_id))
GO
CREATE TABLE Project_Comments (
project_id INT NOT NULL,
comment_id INT NOT NULL,
CONSTRAINT PK_Project_Comments PRIMARY KEY CLUSTERED (project_id, comment_id))
GO
If you want to constrain comments to a single document (for example) then you could add a unique index (or change the primary key) on the comment_id within that linking table.
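For example, constraining a comment to a single document could be done with a unique index on the linking table from the previous block (the index name is illustrative):
CREATE UNIQUE INDEX UQ_Document_Comments_CommentId
ON Document_Comments (comment_id)
GO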
It's all of these "little" decisions that will affect the specific PKs and FKs. I like this approach because each table is clear about what it is. In databases that's usually better than having "generic" tables/solutions.
Of the options you give, I would go for number 2.
Option 2 is a good way to go. The issue I see with it is that you are putting the resource key on that table. The IDs from the different resources could be duplicated, so when you join resources to the comments you will more than likely come up with comments that do not belong to that particular resource; this is effectively a many-to-many join. A better option would be to have your resource tables, the comments table, and then tables that cross-reference each resource type and the comments table.
If you carry the same sort of data about all comments regardless of what they are comments about, I'd vote against creating multiple comment tables. Maybe a comment is just the thing it's about plus some text today, but if you don't have other data now, it's likely you will later: the date the comment was entered, the user id of the person who made it, etc. With multiple tables, you have to repeat all these column definitions for each table.
As noted, using a single reference field means that you cannot put a foreign key constraint on it. This is too bad, but it doesn't break anything; it just means you have to do the validation with a trigger or in code. More seriously, joins get difficult: you can't just say "from comment join document using (documentid)"; you need a complex join based on the value of the type field.
So while the multiple pointer fields are ugly, I tend to think that's the right way to go. I know some DB people say there should never be a nullable field in a table, and that you should always break it off into another table to prevent that from happening, but I fail to see any real advantage to following this rule.
Personally I'd be open to hearing further discussion on pros and cons.
Pawnshop Application:
I have separate tables for Loan, Purchase, Inventory & Sales transactions.
Each table's rows are joined to their respective customer rows by:
customer.pk [serial] = loan.fk [integer];
= purchase.fk [integer];
= inventory.fk [integer];
= sale.fk [integer];
I have consolidated the four tables into one table called "transaction", where a column:
transaction.trx_type char(1) {L=Loan, P=Purchase, I=Inventory, S=Sale}
Scenario:
A customer initially pawns merchandise, makes a couple of interest payments, then decides he wants to sell the merchandise to the pawnshop, which then places the merchandise in Inventory and eventually sells it to another customer.
I designed a generic transaction table where for example:
transaction.main_amount DECIMAL(7,2)
in a loan transaction holds the pawn amount,
in a purchase holds the purchase price,
in inventory and sale holds the sale price.
This is clearly a denormalized design, but it has made programming a lot easier and improved performance. Any type of transaction can now be performed from within one screen, without the need to switch between different tables.
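For illustration, a sketch of what that consolidated table might look like, guessed from the description above (the serial/integer types follow the customer.pk [serial] notation earlier; column names beyond trx_type and main_amount are assumptions):
create table customer (
pk serial primary key
-- other customer columns
);
create table "transaction" ( -- quoted: TRANSACTION is a keyword in some databases
trx_pk serial primary key,
customer_fk integer not null references customer (pk),
trx_type char(1) not null check (trx_type in ('L','P','I','S')), -- Loan, Purchase, Inventory, Sale
main_amount decimal(7,2) -- pawn amount, purchase price, or sale price, depending on trx_type
);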