SQL Server table design advice for linking 3 Entities

I have three entities Plan, Feature, Sensor represented by tables using the same structure as follows:
[Entity]
--------
Id (int)
Name (varchar)
The entities are linked to each other by 1-many relationships:
Each Plan can have multiple Features
Each Feature can have multiple Sensors (note that different features may have common sensors, but have different requirement levels - optional/mandatory)
This data will be used inside an application where the user will select a plan and ultimately will be shown a list of required sensors.
Design Issue #1:
First, I want a table that describes the relationship between Plan and Feature, and my thought is to have a table:
PlanFeatures
--------------
PlanId (int)
FeatureId (int)
Required (bit)
If I have a Required column then I will effectively need a record for each combination of PlanId and FeatureId. The alternative is to only add a record when the PlanId and FeatureId combination actually applies. Which is better?
Design Issue #2:
Similarly to #1, I want to have a table that describes the relationship between Features and Sensors with the only difference being that a Sensor may be either not required/optional/mandatory. So the idea is to have a table as follows:
FeatureSensors
--------------
FeatureId(int)
SensorId(int)
RequirementLevel(int)
As in #1, I am questioning whether I need a record for each combination of FeatureId and SensorId (using 0 for the RequirementLevel where the sensor is not needed), or whether I should only have a record where the sensor is optional (1) or mandatory (2).
Am I going down the right path here or is there a much better way to structure this data?

First of all, this is definitely one way to do it, in principle. Connecting tables via "relational tables" (usually called junction or associative tables) is a well-known practice.
Even before I finished reading, I asked myself the same question you did regarding issue #1: why would you need a record for each combination of plan and feature? Similarly, why would you need a record for each combination of feature and sensor? I'd simply store the combinations that are actually applied in practice, and not store the combinations that aren't used at all.
The requirement level (optional/mandatory) can then be stored as a boolean flag; e.g. maybe rename it to "required_flag" or something like that. If the flag is true, the feature/sensor is required for the plan/feature, respectively; if the flag is false, the combination is optional. This way, you also have a uniform representation of requirement levels in both tables.
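As a rough T-SQL sketch of that approach: only combinations that actually apply get a row, and a single bit flag covers optional vs. mandatory in both junction tables. The parent tables Plans, Features and Sensors with an Id primary key are assumed here, and @PlanId stands in for an application parameter.

CREATE TABLE PlanFeatures (
    PlanId    int NOT NULL REFERENCES Plans (Id),
    FeatureId int NOT NULL REFERENCES Features (Id),
    Required  bit NOT NULL DEFAULT 0,   -- 1 = mandatory, 0 = optional; no row = not part of the plan
    PRIMARY KEY (PlanId, FeatureId)
);

CREATE TABLE FeatureSensors (
    FeatureId int NOT NULL REFERENCES Features (Id),
    SensorId  int NOT NULL REFERENCES Sensors (Id),
    Required  bit NOT NULL DEFAULT 0,   -- same convention as above
    PRIMARY KEY (FeatureId, SensorId)
);

-- Example query: list the mandatory sensors for a given plan
SELECT DISTINCT s.Id, s.Name
FROM PlanFeatures pf
JOIN FeatureSensors fs ON fs.FeatureId = pf.FeatureId
JOIN Sensors s         ON s.Id = fs.SensorId
WHERE pf.PlanId = @PlanId
  AND fs.Required = 1;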

Related

Modelling many-to-many relation between more than two tables

I'm modelling a tier-list database using PostgreSQL. This is how it works:
A user can create a new Tier List;
A user can add as many tiers as he wants to the list;
A user can add as many items as he wants. Initially, the items are added to an "unranked" section (not assigned to any tier); the user can then rank them as he wants.
Modeling details:
A tier necessarily belongs to a tier_list;
An item can be in multiple tier_lists and in multiple tiers as well;
An item added to a tier_list has not necessarily been added to one of the tiers.
For modelling the relations between item-tier and item-tier_list, I thought about two scenarios:
Creating a junction table with a composite PFK (primary/foreign key) of item and tier_list and a nullable tier FK. The records with no tier value would be the unranked ones, while the ones with an assigned tier would be the ranked ones;
Creating two M-N relations: one between item and tier, storing ranked items, and another between item and tier_list, storing unranked items.
I feel like the first option would be easier to deal with when persisting things like moving an item between tiers (or even unranking it), while the second looks more compliant with SQL standards. Am I missing something?
(Diagrams of the first and second proposed solution models omitted.)
You can create a composite key using three different fields.
First of all, why use smallint and not int? I'm not fluent in Postgres, but it's usually better to use the biggest integer type available as the primary key (things can grow faster than you expect).
Second, I strongly suggest putting ID_ before, not after, the name of the field used for lookup. It makes it easier to read.
As how to build your tables:
Item
ID PK
Title
Descriptions
I see no problems here. I'd just change the name to tblProducts, for easier reading.
Tier_List
ID PK
Description
Works fine too. Again, I'd look for a better name; I'd call this one tblTiers or tblLeagues instead. Using similar names can cause trouble in 2-3 years when you have to add things and you're not sure which is which. Better to use distinctive names for the tables.
Tier (suggesting tblTiers or tblRankings)
ID PK
Tier_List_ID PK FK
Title
Description
Here I see a HUGE problem. From experience, I don't really understand why you create a combination key here with ID and Tier_List_ID. Do you need to reuse the same ID for different tiers? If that ID has a meaning, absolutely pull it out of the PK! PKs should be simple counters that will NEVER be changed. I have seen people use an ID with a meaning for the end user; it was a total disaster, and I can't even begin to describe the amount of garbage data that DB ended up containing.
I suppose, because you were talking about ranking, that the ID there is a Rank, a level or something like that.
The table should become
ID PK uuid
Tier_List_ID FK
Rank smallint
Title
Description
There's another reason why I had you do this: when you have a combined PK, certain DBMSs require you to use the same combined key in the lookup tables, and that can get messy fast!
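A rough PostgreSQL version of that revised table might look like the following (column types are assumptions, as is the type of the referenced tier_list id; gen_random_uuid() is built in from PostgreSQL 13, older versions need the pgcrypto extension):

CREATE TABLE tier (
    id           uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    tier_list_id int NOT NULL REFERENCES tier_list (id),
    rank         smallint,
    title        text NOT NULL,
    description  text
);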
Now, the lookup table:
tier_list_item (tblRankingLookup?)
ID_Product FK PK
ID_Tier_List FK PK
ID_Tier FK PK
You don't need anything else to make it work smoothly! At least, that's how I'd envision it.
That said, I'd add an ID_User (because I'm not sure whether all users can see all tiers and rankings, or only their own).
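A literal DDL translation of that lookup table could look like this; the referenced table names item, tier_list, tier and users, and the column types, are assumptions, and id_user is the optional addition just mentioned:

CREATE TABLE tier_list_item (
    id_product   int  NOT NULL REFERENCES item (id),
    id_tier_list int  NOT NULL REFERENCES tier_list (id),
    id_tier      uuid NOT NULL REFERENCES tier (id),
    id_user      int  REFERENCES users (id),   -- optional: restrict visibility per user
    PRIMARY KEY (id_product, id_tier_list, id_tier)
);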
Addendum: if you need to have unique combinations of different elements, I'm pretty sure you can create a combined index and mark it as "unique" (I don't remember the exact syntax, and I'm not sure it is the same in Postgres).
For example, if you don't want a rank repeated within the same tier_list_ID in the Tier table, you can create an index on tier_list_ID and Rank and mark it unique. This way, two tiers in the same tier_list cannot have the same value for the Rank field (Rank can still be null).
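In PostgreSQL that combined unique index would look roughly like this (names assumed to match the tier table above; by default two NULL ranks do not conflict with each other):

CREATE UNIQUE INDEX uq_tier_rank_per_list
    ON tier (tier_list_id, rank);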

Two identical tables: Keep them separate or merge them with an extra key column

Let's assume that we have two status tables which are structurally identical. Each table has its own values; they don't share them.
Now, what is best practice?
Keep them separate, or
Merge them with an additional key column
Separate:
Table: offer_statuses
- id
- name //For example: calculating, sent
Table: project_statuses
- id
- name //For example: preparation, in progress
Merged:
Table: statuses
- id
- status //For example: offer, project
- name //For example: calculating, sent, preparation, in progress
I'd keep them separate. A project status is not an offer status. You gain nothing by combining them, and now every foreign key to the combined status table would need to be two columns instead of one. You can also introduce errors, as your foreign keys will not prevent you from using an offer status where only a project status is valid.
You can go either way. Normally, there would be a separate reference table for each type of status, because this would allow both:
foreign key relations from offers and projects to the appropriate statuses; and
no accidental mixing of statuses between the two entities.
There are some situations where having all statuses in the same table is useful. For instance, if the statuses really do overlap, then putting them in the same table makes sense. Another use-case is internationalization. If the application needs to be easily translatable, then having all language strings (such as status descriptions) in a single table (or small number thereof) is helpful.
In other words, I would usually go for separate tables. However, there might be good reasons for combining them.
I'd recommend keeping them separate. It sounds like they're different from a logical point of view. Having the same attributes is an indicator that two tables might model the same entity, but it is not proof.
Keeping them separate makes it easier to enforce the correctness of the data, as you can use foreign key constraints that point to the right status table. If you just have one table, it becomes more difficult to ensure that, for example, offers only use statuses meant for offers.
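A minimal sketch of the separate-table approach, with the foreign keys doing that enforcement (the offers and projects tables and their columns are assumptions):

CREATE TABLE offer_statuses (
    id   int PRIMARY KEY,
    name varchar(50) NOT NULL          -- e.g. 'calculating', 'sent'
);

CREATE TABLE project_statuses (
    id   int PRIMARY KEY,
    name varchar(50) NOT NULL          -- e.g. 'preparation', 'in progress'
);

CREATE TABLE offers (
    id        int PRIMARY KEY,
    status_id int NOT NULL REFERENCES offer_statuses (id)    -- only offer statuses allowed
);

CREATE TABLE projects (
    id        int PRIMARY KEY,
    status_id int NOT NULL REFERENCES project_statuses (id)  -- only project statuses allowed
);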

Normalization of SQL tables

I am creating some tables for a project and just realized that many of the tables have the same structure (Id, Name), but are used for different things. How far should I go with normalization? Should I build them all into one table or keep them apart for better understanding? How does it affect performance?
Example 1:
TableObjectType (used for types of objects in the log)
Id Name
1 User
2 MobileDevice
3 SIMcard
TableAction (used for types of actions in a log)
Id Name
1 Create
2 Edit
3 Delete
TableStatus (used for a status a device can have)
Id Name
1 Stock
2 Lost
3 Repair
4 Locked
Example 2:
TableConstants
Id Name
1 User
2 MobileDevice
3 SIMcard
4 Create
5 Edit
6 Delete
7 Stock
8 Lost
9 Repair
10 Locked
Ignore the naming, as my tables have other names, but I am using these for clarification.
The downside of using one table for all constants is that if I want to add more later on, they don't really come in "groups"; on the other hand, in SQL I should never rely on a specific order when I use the data.
Just because a table has a similar structure to another doesn't mean it stores the data describing identical entities.
There are some obvious reasons not to go with example 2.
Firstly, you may want to limit the values in your ObjectTypeID column to values that are valid object types. The obvious way to do this is to create a foreign key relationship to the ObjectType table. Creating a similar check on TableConstants would be much harder (in most database engines, you can't use a foreign key constraint in this way).
Secondly, it makes the database self-describing - someone inspecting the schema will understand that "object type" is a meaningful concept in your business domain. This is important for long-lived applications, or applications with large development teams.
Thirdly, you often get specific business logic with those references - for instance, "status" often requires some logic to say "you can't modify a record in status LOCKED". This business logic often requires storing additional data attributes - that's not really possible with a "Constants" table.
Fourthly - "constants" have to be managed. If you have a large schema, very quickly people start to re-use constants to reflect slightly different concepts. Your "create" constant might get applied to a table storing business requests as well as your log events. This becomes almost unintelligible - and if the business decides log events don't refer to "create" but "write", your business transactions all start to look wrong.
What you could do is to use an ENUM (many database engines support this) to model attributes that don't have much logic beyond storing a name. This removes risks 1, 2 and 4, but does mean your logic is encoded in the database schema - adding a new object type is a schema change, not a data insertion.
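To illustrate the foreign-key point and the ENUM alternative, a hedged sketch; the LogEntry table and its columns are assumptions, and the commented ENUM line uses PostgreSQL syntax as one example of an engine that supports it:

CREATE TABLE TableObjectType (
    Id   int PRIMARY KEY,
    Name varchar(50) NOT NULL
);

CREATE TABLE TableAction (
    Id   int PRIMARY KEY,
    Name varchar(50) NOT NULL
);

CREATE TABLE LogEntry (                      -- assumed log table using the lookups
    Id           int PRIMARY KEY,
    ObjectTypeId int NOT NULL REFERENCES TableObjectType (Id),   -- only valid object types
    ActionId     int NOT NULL REFERENCES TableAction (Id)
);

-- ENUM alternative (PostgreSQL syntax); adding a new value later is a schema change:
-- CREATE TYPE object_type AS ENUM ('User', 'MobileDevice', 'SIMcard');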
I think that generally it is better to keep the tables apart (it helps documentation too). In some particular cases (the choice is yours...) you could "merge" all similar tables into one (of course adding another column, such as TAB_TYPE, to distinguish them): this could give you some advantage when developing apps and reduce the overall number of tables (if that is a problem for you).
If they are all relatively small tables (with not many records), you should not have performance problems.

SQL one to one relationship vs. single table

Consider a data structure such as the below where the user has a small number of fixed settings.
User
[Id] INT IDENTITY NOT NULL,
[Name] NVARCHAR(MAX) NOT NULL,
[Email] NVARCHAR(2034) NOT NULL
UserSettings
[SettingA],
[SettingB],
[SettingC]
Is it considered correct to move the user's settings into a separate table, thereby creating a one-to-one relationship with the users table? Does this offer any real advantage over storing them in the same row as the user (the obvious disadvantage being performance)?
You would normally split tables into two or more 1:1 related tables when the table gets very wide (i.e. has many columns). It is hard for programmers to have to deal with tables with too many columns. For big companies such tables can easily have more than 100 columns.
So imagine a product table. There is a selling price and maybe another price which was used for calculation and estimation only. Wouldn't it be good to have two tables, one for the real values and one for the planning phase? So a programmer would never confuse the two prices. Or take logistic settings for the product. You want to insert into the products table, but with all these logistic attributes in it, do you need to set some of these? If it were two tables, you would insert into the product table, and another programmer responsible for logistics data would care about the logistic table. No more confusion.
Another thing with many-column tables is that a full table scan is of course slower for a table with 150 columns than for a table with just half of this or less.
A last point is access rights. With separate tables you can grant different rights on the product's main table and the product's logistic table.
So all in all, it is rather rare to see 1:1 relations, but they can give a clearer view on data and even help with performance issues and data access.
EDIT: I'm taking Mike Sherrill's advice and (hopefully) clarify the thing about normalization.
Normalization is mainly about avoiding redundancy and the related lack of consistency. The decision whether to hold data in one table or in several 1:1 related tables has nothing to do with this. You can decide to split a user table into one table for personal information like first and last name and another for school, graduation and job. Both tables would remain in the same normal form as the original table, because no data is more or less redundant than before. The only column used twice would be the user id, but that is not redundant, because it is needed in both tables to identify a record.
So asking "Is it considered correct to normalize the settings into a separate table?" is not a valid question, because you don't normalize anything by putting data into a 1:1 related separate table.
Creating a new table with a 1-1 relationship is not a reasonable solution. You might need to do it sometimes, but there would typically be no reason to have two tables where the user id is the primary key.
On the other hand, splitting the settings into a separate table with one row per user/setting combination might be a very good idea. This would be a three-table solution. One for users, one for all possible settings, and one for the junction table between them.
The junction table can be quite useful. For instance, it might contain the effective and end dates of the setting.
However, this assumes that the settings are "similar" to each other, in a SQL sense. If the settings are different such as:
Preferred location as latitude/longitude
Preferred time of day to receive an email
Flag to be excluded from certain contacts
Then you have a data-type problem when storing them in a table. So, the answer is "it depends". A lot of the answer depends on what the settings look like, how they will be used, and the type of constraints on them.
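For concreteness, a rough sketch of the three-table layout described above, including the effective/end dates; the Settings names, the Value column and its text type are assumptions made to sidestep the data-type problem just mentioned:

CREATE TABLE Settings (
    Id   int PRIMARY KEY,
    Name nvarchar(100) NOT NULL                -- e.g. 'PreferredEmailTime' (illustrative)
);

CREATE TABLE UserSettings (
    UserId        int NOT NULL REFERENCES [User] (Id),
    SettingId     int NOT NULL REFERENCES Settings (Id),
    Value         nvarchar(400) NULL,          -- stored as text: the data-type compromise
    EffectiveDate date NOT NULL,
    EndDate       date NULL,
    PRIMARY KEY (UserId, SettingId, EffectiveDate)
);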
You're all wrong :) Just kidding.
On a very high-load, high-volume, heavily updated system, splitting a table 1:1 helps optimize I/O.
For example, this way you can place heavily read columns onto separate physical hard drives to speed up parallel reads (the 1:1 tables have to be in different "filegroups" for this). Or you can optimize table-level locking. Etc.
But this type of optimization usually does not happen until you have millions of rows and huge read/write concurrency.
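In SQL Server terms, a hedged sketch of that physical split; the database name, file path, filegroup name and setting column types are made up for illustration:

-- Create a second filegroup whose file lives on a different physical drive
ALTER DATABASE MyDb ADD FILEGROUP FG_Settings;
ALTER DATABASE MyDb ADD FILE
    (NAME = 'MyDb_Settings', FILENAME = 'D:\Data\MyDb_Settings.ndf')
    TO FILEGROUP FG_Settings;

-- Place the 1:1 companion table on that filegroup, away from the main [User] table
CREATE TABLE UserSettings (
    UserId   int PRIMARY KEY REFERENCES [User] (Id),
    SettingA int NULL,
    SettingB int NULL,
    SettingC int NULL
) ON FG_Settings;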
Splitting tables into distinct tables with 1:1 relationships between them is usually not practiced, because:
If the relationship is really 1:1, then integrity enforcement boils down to "inserts being done in all concerned tables, or none at all". Achieving this on the server side requires systems that support deferred constraint checking, and AFAIK that's a feature of the rather high-end systems. So in many cases the 1:1 enforcement is pushed over to the application side, and that approach has its own obvious downsides.
A case where splitting tables is nonetheless advisable is when there are security considerations, i.e. when not all columns can be updated by the same user. But note that, by definition, in such cases the relationship between the tables can never be strictly 1:1.
(I also suggest you read carefully the discussion between Thorsten/Mike. You used the word 'normalization' but normalization has very little to do with your scenario - except if you were considering 6NF, which I think is rather unlikely.)
It makes more sense for your settings to be not only in a separate table, but also to use a one-to-many relationship between the user ID and the settings. This way, you could potentially have as many (or as few) settings as required.
UserSettings
[Settings_ID]
[User_ID]
[Settings]
In fact, one could make the same argument for the [Email] field.

How to model a mutually exclusive relationship in SQL Server

I have to add functionality to an existing application and I've run into a data situation that I'm not sure how to model. I am restricted to creating new tables and code. If I need to alter the existing structure, I think my client may reject the proposal, although if it's the only way to get it right, that is what I will have to do.
I have an Item table that can be linked to any number of tables, and these tables may increase over time. An Item can only be linked to one other table, but a record in the other table may have many items linked to it.
Examples of the tables/entities being linked to are Person, Vehicle, Building, Office. These are all separate tables.
Examples of Items are Pen, Stapler, Cushion, Tyre, A4 Paper, Plastic Bag, Poster, Decoration.
For instance, a Poster may be allocated to a Person, Office, or Building. In the future, if they add a Conference Room table, items may also be allocated to that.
My initial thoughts are:
Item
{
ID,
Name
}
LinkedItem
{
ItemID,
LinkedToTableName,
LinkedToID
}
The LinkedToTableName field will then allow me to identify the correct table to link to in my code.
I'm not overly happy with this solution, but I can't quite think of anything else. Please help! :)
Thanks!
It is not a good practice to store table names as column values. This is a bad hack.
There are two standard ways of doing what you are trying to do. The first is called single-table inheritance. This is easily understood by ORM tools but trades off some normalization. The idea is that all of these entities - Person, Vehicle, whatever - are stored in the same table, often with several unused columns per entry, along with a discriminator field that identifies what type the entity is.
The discriminator field is usually an integer type that is mapped to some enumeration in your code. It may also be a foreign key to some lookup table in your database, identifying which numbers correspond to which types (not table names, just descriptions).
The other way to do this is multiple-table inheritance, which is better for your database but not as easy to map in code. You do this by having a base table which defines some common properties of all the objects - perhaps just an ID and a name - and all of your "specific" tables (Person etc.) use the base ID as a unique foreign key (usually also the primary key).
In the first case, the exclusivity is implicit, since all entities are in one table. In the second case, the relationship is between the Item and the base entity ID, which also guarantees uniqueness.
Note that with multiple-table inheritance, you have a different problem - you can't guarantee that a base ID is used by exactly one inheritance table. It could be used by several, or not used at all. That is why multiple-table inheritance schemes usually also have a discriminator column, to identify which table is "expected." Again, this discriminator doesn't hold a table name, it holds a lookup value which the consumer may (or may not) use to determine which other table to join to.
Multiple-table inheritance is a closer match to your current schema, so I would recommend going with that unless you need to use this with Linq to SQL or a similar ORM.
See here for a good detailed tutorial: Implementing Table Inheritance in SQL Server.
Find something common to Person, Vehicle, Building, Office. For lack of a better term, I have used Entity. Then implement a super-type/sub-type relationship between the Entity and its sub-types. Note that the EntityID is a PK and an FK in all sub-type tables. Now you can link the Item table to the Entity (owner).
In this model, one item can belong to only one Entity; one Entity can have (own) many items.
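A minimal T-SQL sketch of that super-type/sub-type layout (only two sub-types shown; column names beyond the keys are assumptions):

CREATE TABLE Entity (
    EntityId int PRIMARY KEY
);

CREATE TABLE Person (
    EntityId int PRIMARY KEY REFERENCES Entity (EntityId),  -- PK and FK, as described
    Name     varchar(100) NOT NULL
);

CREATE TABLE Office (
    EntityId int PRIMARY KEY REFERENCES Entity (EntityId),  -- PK and FK
    Name     varchar(100) NOT NULL
);

CREATE TABLE Item (
    Id       int PRIMARY KEY,
    Name     varchar(100) NOT NULL,
    EntityId int NOT NULL REFERENCES Entity (EntityId)      -- the single owner of this item
);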
Your link table is OK.
The trouble you will have is that you will need to generate dynamic SQL at runtime; parameterized SQL does not typically allow the objects in the FROM list to be parameters.
If you want to avoid this, you may be able to denormalize a little - say, by creating a table to hold the id (assuming the ids are unique across the other tables), the type_id representing which table is the source, and a generated description - e.g. the name value from the initial record.
You would trigger the creation of this denormalized list when the base info is modified, and you could use it for generalized queries - and then resort to your dynamic queries when needed at runtime.
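A hedged T-SQL sketch of that denormalized list kept up to date by triggers; the table and column names are invented for illustration, the Person columns Id and Name are assumed, the answer's assumption that ids are unique across the source tables carries over, and only the insert trigger for Person is shown:

CREATE TABLE LinkTarget (
    Id          int PRIMARY KEY,          -- assumes ids are unique across the source tables
    TypeId      int NOT NULL,             -- which table the row came from (e.g. 1 = Person)
    Description varchar(200) NOT NULL     -- generated description, e.g. the source record's Name
);
GO

CREATE TRIGGER trg_Person_LinkTarget
ON Person AFTER INSERT
AS
BEGIN
    -- Mirror newly inserted Person rows into the denormalized list
    INSERT INTO LinkTarget (Id, TypeId, Description)
    SELECT i.Id, 1, i.Name
    FROM inserted AS i;
END;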