Is it efficient to have 2 SQL tables with the same structure?

In inventory / production systems I usually implement a table structure similar to the following description...
--- Raw Item ---
CREATE TABLE raw_item (
id INT(10) UNSIGNED AUTO_INCREMENT,
name VARCHAR(32) NOT NULL,
description VARCHAR(128) NOT NULL,
ideal INT(10) UNSIGNED,
PRIMARY KEY (id)
);
And another table with the same fields for the processed items...
Next tables for clients and providers with similar structures.
Then tables for income orders and outcome orders with similar structures.
And finally, a table which establishes relationships between a certain processed item and the types of raw items (with quantities) required to produce a batch...
It performs well... But I wonder if it would be better to merge the similar tables and add a field such as 'type'. I would appreciate some advice.

In my mind adding an artificial type to combine similar data is not 3NF. Go with separate tables unless you need the data in the same table.
A client and a provider have similar fields but they are two different things.
If an order migrates from PO to Processed, then it is the same thing, and a status flag would be appropriate. If you are moving data from one table to the other, then combining the tables with a flag is preferred.
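For illustration, a minimal sketch of that status-flag approach (table and column names are hypothetical, and the two-value ENUM assumes only those two stages exist):

CREATE TABLE orders (
id INT(10) UNSIGNED AUTO_INCREMENT,
status ENUM('PO', 'PROCESSED') NOT NULL DEFAULT 'PO', -- hypothetical stages
-- ...columns shared by both stages...
PRIMARY KEY (id)
);

-- An order "migrates" by flipping the flag instead of moving rows:
UPDATE orders SET status = 'PROCESSED' WHERE id = 42;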

That's a good question, and as with many good SQL questions the answer is "it depends".
IMHO it's OK to create structurally similar tables for different artifacts (with similar properties). For example, you can have:
Owner
Id, Name
Pet
Id, Name
Tables with same columns but different meaning.
Sure, you can put Items and RawItems in the same table with just a flag column to differentiate between the two. You can even use a self-referencing FK to relate Items to RawItems (see the sketch below). But how does that affect performance?
Well, as your table grows, the engine needs more time (resources, memory, CPU) to retrieve rows. If you double the rows... for most DBMSs, doubling the number of tables affects performance a lot less than doubling a table's rows.
It also affects future maintenance. If you later need to add a column for your RawItems but not for your Items, you end up wasting space.
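For reference, a hedged sketch of the merged single-table alternative discussed above, reusing the question's column names; the ENUM flag and the parent_item_id column are illustrative:

CREATE TABLE item (
id INT(10) UNSIGNED AUTO_INCREMENT,
item_type ENUM('RAW', 'PROCESSED') NOT NULL, -- the flag column
name VARCHAR(32) NOT NULL,
description VARCHAR(128) NOT NULL,
ideal INT(10) UNSIGNED,
parent_item_id INT(10) UNSIGNED, -- self-referencing FK, e.g. the raw item a processed item is made from
PRIMARY KEY (id),
FOREIGN KEY (parent_item_id) REFERENCES item (id)
);

Note that a single parent_item_id only supports one raw item per processed item; the recipe table from the question (raw item, processed item, quantity) remains the more general relationship.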
"Merging" similar tables can increase dificult to understand your schema, not simplify it.

I would imagine this would depend slightly on the storage engine you choose. See this link for a little more information: https://dev.mysql.com/doc/refman/5.1/en/storage-engines.html
Each one offers varying degrees of performance improvement and lacks features in certain areas. I guess the real question is: do you want to have to deal with multiple tables, or would it be easier to deal with a single table?
A simple solution would be to add something like a status column to the table, then change your queries to check against the status. A very simple and space-efficient way to merge both tables into one.
--- Items ---
CREATE TABLE items (
id INT(10) UNSIGNED AUTO_INCREMENT,
name VARCHAR(32) NOT NULL,
description VARCHAR(128) NOT NULL,
ideal INT(10) UNSIGNED,
status VARCHAR(9) NOT NULL,
PRIMARY KEY (id)
);
SELECT * FROM items WHERE status = 'PROCESSED';
Personally I would prefer this approach. It leaves you with only a single table of inventory, as opposed to having multiple tables cluttering up the database schema. Not to mention it accommodates further expansion (say, you eventually have new, processed, and archived).

How are you going to use the data over time? If you need to combine all these tables for reporting, one table is likely preferable, with partitioning if it is very large. If you need to move records from one table to the other as they move through a process, it is easier to check on the status of a process if it is all in one table.
Also, if the things are very different entities, they may have different related tables, and combining them just muddies the waters, makes the database more difficult to understand, and reduces the effectiveness of your PK/FK relationships. In this case, separate tables make the most sense. Separate tables also make the most sense if you think the data will diverge over time as currently planned features are added.
Take, for instance, a customer and a sales rep. They might have many of the same fields, but what they are related to will be very different, and you would not want a customer to be able to be put into a child table meant for a rep. So now you have to enforce the relationships with more than an FK. Further, if you have enough child tables, deleting records becomes difficult: the DB has to check all the tables, even if only half of them might apply to a particular record.
In one database I have seen, the original designer combined two things that were dissimilar but had the same fields at the parent level and ended up with well over 100 FKs on that table. It was a nightmare to delete from that table, and never fast. And there were occasional data integrity problems where one type of record ended up in the wrong child tables.

Related

SQL Database Design: Bad practice to have parent table with only primary key?

My database holds two types of orders - internal and external. Since they are both order types, I want them to share a primary key, which comes from the superentity 'RentOrder'; ExternalRentOrder and InternalRentOrder are its subentities.
My questions:
Would it be considered bad practice that my RentOrder table contains only one column, which is the primary key - 'id'?
ExternalRentOrder and InternalRentOrder have a number of fields in common (e.g. orderDate, rentStartDate, rentEndDate etc.). Obviously these columns could be in the parent RentOrder table. However that would mean I would need to do a parent-child JOIN to get all InternalRentOrder or ExternalRentOrder data. This seems less efficient and performance is my priority. Is there a right way to do this and is my current solution ok?
Thank you for your time.
It is not unreasonable. That said, I usually have other columns in tables, such as:
createdAt -- datetime row was inserted
createdBy -- who created the row
In addition, common columns might also be helpful. In your case:
orderId
supplierId
orderDate
and so on.
In fact, there might be a fair amount of commonality, so you might find a single table is sufficient. Separate tables are helpful if you want foreign key relationships to InternalRentOrder and/or to ExternalRentOrder.
Finally, a type column might also be helpful. Depending on the database you are using, this can make it easier to ensure no duplication between the two tables.
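A minimal sketch of that supertype/subtype pattern with a type column, assuming a DBMS that enforces CHECK constraints (MySQL only does so from 8.0.16 on); column names beyond 'id' are hypothetical:

CREATE TABLE RentOrder (
id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
orderType CHAR(1) NOT NULL, -- 'I' = internal, 'E' = external
UNIQUE (id, orderType)
);

CREATE TABLE InternalRentOrder (
id INT UNSIGNED PRIMARY KEY,
orderType CHAR(1) NOT NULL CHECK (orderType = 'I'),
orderDate DATE NOT NULL, -- common columns could live here or in RentOrder
FOREIGN KEY (id, orderType) REFERENCES RentOrder (id, orderType)
);

-- ExternalRentOrder mirrors this with CHECK (orderType = 'E').

Because the composite FK carries the type, the same RentOrder id cannot end up in both subtype tables.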

SQL one to one relationship vs. single table

Consider a data structure such as the one below, where the user has a small number of fixed settings.
CREATE TABLE [User] (
[Id] INT IDENTITY NOT NULL PRIMARY KEY,
[Name] NVARCHAR(MAX) NOT NULL,
[Email] NVARCHAR(2034) NOT NULL
);

CREATE TABLE [UserSettings] (
[UserId] INT NOT NULL PRIMARY KEY REFERENCES [User] ([Id]), -- 1:1 link to User
[SettingA] BIT NOT NULL, -- setting types illustrative; not given in the original
[SettingB] BIT NOT NULL,
[SettingC] BIT NOT NULL
);
Is it considered correct to move the user's settings into a separate table, thereby creating a one-to-one relationship with the users table? Does this offer any real advantage over storing them in the same row as the user (the obvious disadvantage being performance)?
You would normally split tables into two or more 1:1 related tables when the table gets very wide (i.e. has many columns). It is hard for programmers to have to deal with tables with too many columns. For big companies such tables can easily have more than 100 columns.
So imagine a product table. There is a selling price and maybe another price which was used for calculation and estimation only. Wouldn't it be good to have two tables, one for the real values and one for the planning phase? So a programmer would never confuse the two prices. Or take logistic settings for the product. You want to insert into the products table, but with all these logistic attributes in it, do you need to set some of these? If it were two tables, you would insert into the product table, and another programmer responsible for logistics data would care about the logistic table. No more confusion.
Another thing with many-column tables is that a full table scan is of course slower for a table with 150 columns than for a table with just half of this or less.
A last point is access rights. With separate tables you can grant different rights on the product's main table and the product's logistic table.
So all in all, it is rather rare to see 1:1 relations, but they can give a clearer view on data and even help with performance issues and data access.
EDIT: I'm taking Mike Sherrill's advice and will (hopefully) clarify the point about normalization.
Normalization is mainly about avoiding redundancy and the related risk of inconsistency. The decision whether to hold data in one table or in two or more 1:1 related tables has nothing to do with this. You can decide to split a user table into one table for personal information like first and last name and another for school, graduation and job. Both tables would remain in the same normal form as the original table, because no data is more or less redundant than before. The only column used twice would be the user id, but this is not redundant, because it is needed in both tables to identify a record.
So asking "Is it considered correct to normalize the settings into a separate table?" is not a valid question, because you don't normalize anything by putting data into a 1:1 related separate table.
Creating a new table with 1-1 relationships is not a reasonable solution. You might need to do it sometimes, but there would typically be no reason to have two tables where the user id is the primary key.
On the other hand, splitting the settings into a separate table with one row per user/setting combination might be a very good idea. This would be a three-table solution. One for users, one for all possible settings, and one for the junction table between them.
The junction table can be quite useful. For instance, it might contain the effective and end dates of the setting.
However, this assumes that the settings are "similar" to each other, in a SQL sense. If the settings are different such as:
Preferred location as latitude/longitude
Preferred time of day to receive an email
Flag to be excluded from certain contacts
Then you have a data-type problem when storing them in a table. So, the answer is "it depends". A lot of the answer depends on what the settings look like, how they will be used, and the type of constraints on them.
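For concreteness, a hypothetical version of the three-table design described above, including the effective/end dates mentioned for the junction table; the single VARCHAR value column is exactly where the data-type problem appears if the settings are heterogeneous:

CREATE TABLE users (
user_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(64) NOT NULL
);

CREATE TABLE settings (
setting_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(64) NOT NULL
);

CREATE TABLE user_settings (
user_id INT UNSIGNED NOT NULL,
setting_id INT UNSIGNED NOT NULL,
value VARCHAR(255), -- assumes all settings share a representable type
effective_date DATE NOT NULL,
end_date DATE, -- NULL = still in effect
PRIMARY KEY (user_id, setting_id, effective_date),
FOREIGN KEY (user_id) REFERENCES users (user_id),
FOREIGN KEY (setting_id) REFERENCES settings (setting_id)
);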
You're all wrong :) Just kidding.
On a very high-load, high-volume, heavily updated system, splitting a table by 1:1 relationships helps optimize I/O.
For example, this way you can place heavily read columns onto separate physical hard drives to speed up parallel reads (the 1:1 tables have to be in different "filegroups" for this). Or you can optimize table-level locks. Etc., etc.
But this type of optimization usually does not happen until you have millions of rows and huge read/write concurrency.
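A SQL Server flavored sketch of that I/O split, since "filegroups" is SQL Server terminology; the FG_HOT and FG_COLD filegroups are hypothetical and must already exist in the database:

CREATE TABLE dbo.UserCore (
UserId INT NOT NULL PRIMARY KEY,
Name NVARCHAR(100) NOT NULL
) ON FG_HOT; -- heavily read columns on fast storage

CREATE TABLE dbo.UserExtra (
UserId INT NOT NULL PRIMARY KEY REFERENCES dbo.UserCore (UserId), -- 1:1 link to the hot table
Bio NVARCHAR(MAX)
) ON FG_COLD; -- rarely read columns elsewhere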
Splitting tables into distinct tables with 1:1 relationships between them is usually not practiced, because:
If the relationship is really 1:1, then integrity enforcement boils down to "inserts are done in all concerned tables, or none at all". Achieving this on the server side requires systems that support deferred constraint checking, and AFAIK that's a feature of rather high-end systems. So in many cases the 1:1 enforcement is pushed over to the application side, and that approach has its own obvious downsides.
A case where splitting tables is nonetheless advisable is when there are security considerations, i.e. when not all columns can be updated by one user. But note that by definition, in such cases the relationship between the tables can never be strictly 1:1.
(I also suggest you read carefully the discussion between Thorsten/Mike. You used the word 'normalization' but normalization has very little to do with your scenario - except if you were considering 6NF, which I think is rather unlikely.)
It makes more sense that your settings are not only in a separate table, but also use a one-to-many relationship between the ID and Settings. This way, you could potentially have as many (or as few) settings as required.
CREATE TABLE UserSettings (
[Settings_ID] INT IDENTITY NOT NULL PRIMARY KEY,
[User_ID] INT NOT NULL REFERENCES [User] ([Id]),
[Settings] NVARCHAR(MAX) NOT NULL -- one row per setting; the value type is illustrative
);
In fact, one could make the same argument for the [Email] field.

Table referenced by other tables having different PKs

I would like to create a table called "NOTES". I was thinking this table would contain a "table_name" VARCHAR(100) indicating which table the note belongs to, one or more "key" columns representing the primary key values of the row that the note applies to, and a "note" field VARCHAR(MAX). When other tables use this table, they would supply THEIR primary key(s) and their "table_name" and get all the notes associated with the primary key(s) they supplied. The problem is that other tables might have 1, 2 or more PKs, so I am looking for ideas on how I can design this...
What you're suggesting sounds a little convoluted to me. I would suggest something like this.
Notes
------
Id - PK
NoteTypeId - FK to NoteTypes.Id
NoteContent
NoteTypes
----------
Id - PK
Description - This could replace the "table_name" column you suggested
SomeOtherTable
--------------
Id - PK
...
Other Columns
...
NoteId - FK to Notes.Id
This would allow you to keep your data better normalized, but still get the relationships between data that you want. Note that this assumes a 1:1 relationship between rows in your other tables and Notes. If that relationship will be many to one, you'll need a cross table.
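If you do end up needing many notes per row, a cross table along these lines (names follow the sketch above, otherwise hypothetical) would replace the NoteId column in SomeOtherTable:

CREATE TABLE SomeOtherTableNotes (
SomeOtherTableId INT NOT NULL,
NoteId INT NOT NULL,
PRIMARY KEY (SomeOtherTableId, NoteId),
FOREIGN KEY (SomeOtherTableId) REFERENCES SomeOtherTable (Id),
FOREIGN KEY (NoteId) REFERENCES Notes (Id)
);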
Have a look at this thread about database normalization
What is Normalisation (or Normalization)?
Additionally, you can check this resource to learn more about foreign keys
http://www.w3schools.com/sql/sql_foreignkey.asp
Instead of putting the other tables' names and primary keys in this table, have the primary key of the NOTES table be NoteId. Create an FK in each other table that will make a note, and store the corresponding NoteIds in those tables. Then you can simply join on NoteId from all of these other tables to the NOTES table.
As I understand your problem, you're attempting to "abstract" the auditing of multiple tables in a way that you might abstract a class in OOP.
While it's a great OOP design principle, it falls flat in databases for multiple reasons. Perhaps the largest single reason is that if you cannot envision it, neither will someone (even you) looking at it later have an easy time reassembling the data. Smaller than that, though: while you tend to think of a table as a container and thus similar to an object, in reality tables are implemented instances of this hypothetical container you are trying to put together, and they behave better if you treat them as such. By creating an audit table specific to a table, or to a subset of tables that share structural and data similarity, you increase the performance of your database and you won't run into strange trigger- or SELECT-related issues later.
And you can't envision it, not because you're not good at what you're doing, but rather because the structure is not conducive to database logging.
Instead, I would recommend that you create separate logging tables that manage the auditing of each table you want to audit or log. In fact, some quick Google searches show many scripts already written to do much of this task for you: Example of one such search
You should create these individual tables and then, if you want to report on multiple tables or even all tables at once, create a stored procedure (if you want to make queries based on criteria) or a view with a SELECT statement that JOINs and/or UNIONs the tables you are interested in, in a fashion that makes sense for the report type. You'll still have to write new objects into the view, but even with your original table design, you'd have to account for that.

Hierarchical Database, multiple tables or column with parent id?

I need to store info about county, municipality and city in Norway in a mysql database. They are related in a hierarchical manner (a city belongs to a municipality which again belongs to a county).
Is it best to store this as three different tables and reference by foreign key, or should I store them in one table and relate them with a parent_id field?
What are the pros and cons of either solution? (both structurally and efficiency-wise)
If you've really got a limit of these three levels (county, municipality, city), I think you'll be happiest with three separate tables with foreign keys reaching up one level each. This will make queries almost trivial to write.
Using a single table with a parent_id field referencing the same table allows you to represent arbitrary tree structures, but makes querying to extract the full path from node to root an iterative process best handled in your application code.
The separate table solution will be much easier to use.
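As a sketch of what that looks like (column names illustrative), note that the full path for a city is a fixed two-join query rather than an iterative walk:

CREATE TABLE county (
id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(50) NOT NULL
);

CREATE TABLE municipality (
id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(50) NOT NULL,
county_id INT UNSIGNED NOT NULL,
FOREIGN KEY (county_id) REFERENCES county (id)
);

CREATE TABLE city (
id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(50) NOT NULL,
municipality_id INT UNSIGNED NOT NULL,
FOREIGN KEY (municipality_id) REFERENCES municipality (id)
);

SELECT ci.name AS city, m.name AS municipality, co.name AS county
FROM city ci
JOIN municipality m ON m.id = ci.municipality_id
JOIN county co ON co.id = m.county_id;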
three different tables:
more efficient, if your application mostly accesses information about only one entity (county, municipality, city)
owner-member-relationship is a clear and elegant model ;)
County, Municipality, and City don't sound like they are the same kind of data; so I would use three different tables: one per data type.
And then I would indeed use foreign keys between those.
Efficiency-wise, I'm not sure it'll change much:
you'll do joins on 3 tables instead of joining 3 times on the same table; I suppose it's quite the same.
it might make a little difference when you need to work on only one of those three types of data; but with the right indexes, the differences should be minimal.
But, structurally speaking, if those are three different kinds of entities, it makes sense to use three different tables.
I would recommend using three different tables, as they are three different entities.
I would use only one table in cases where you don't know the depth of the hierarchy, but that is not the case here.
I would put them in three different tables, just on the grounds that they are 3 different concepts. This will hamper speed and complicate your queries; however, given that MySQL does not have any special support for hierarchical queries (like Oracle's CONNECT BY statement), those queries would be complicated anyway.
Different tables: it's just "right". I doubt you'll see any performance gains/losses either way but this is one where modelling it properly up-front will probably save you lots of headaches later on. For one thing it'll make SQL SELECTs easier to write and read.
You'll get different opinions coming back to you on this but my personal preference would be to have separate tables because they are separate entities.
In reality you need to think about the queries you will be doing on this data, and usually your answer will come from that. With separate tables your queries will look much cleaner, and in the end you're not saving yourself anything, because you'll still be joining tables together, even if they are the same table.
I would use three separate tables, since you know exactly what categories of information you are working with, and won't need to dynamically alter the 'depth' of your hierarchy.
It'll also make the data simpler to manage, as you'll be able to tell if the data is for a city, municipality or a county just by knowing the table (and without having to discern the 'depth' of a record in the hierarchy first!).
Since you'll probably be doing self joins anyway to get the hierarchy to work, I'd doubt there would be any benefits from having all the data in a single table.
In data warehousing applications, adherents of the Kimball methodology might place these fields in the same attribute table:
create table city (
id int not null,
county varchar(50) not null,
municipality varchar(50),
city varchar(50),
primary key(id)
);
The idea being that attributes should never be more than one join away from the fact table.
I just state this as an alternative view. I would go with the 3 table design personally.
This is a case of ‘Database Normalization’, which is the process of organizing the fields and tables of a relational database to minimize redundancy and dependency. The purpose is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships.
Multiple tables help when the work has been distributed among different developers, when users at different levels require different rights to view and change the data, or when the smaller tables are needed for other purposes as well.
My vote would be for multiple tables - with data appropriately distributed.

Decision between storing lookup table id's or pure data

I find this comes up a lot, and I'm not sure the best way to approach it.
The question I have is how to make the decision between using foreign keys to lookup tables, or using lookup table values directly in the tables requesting it, avoiding the lookup table relationship completely.
Points to keep in mind:
With the second method you would need to do mass updates to all records referencing the data if it is changed in the lookup table.
This is focused more towards tables that have a lot of columns referencing many lookup tables; lots of foreign keys means a lot of joins every time you query the table.
This data would be coming from drop-down lists which would be pulled from the lookup tables. In order to match up data when reloading, the values need to be in the existing list (related to the first point).
Is there a best practice here, or any key points to consider?
You can use a lookup table with a VARCHAR primary key, and your main data table uses a FOREIGN KEY on its column, with cascading updates.
CREATE TABLE ColorLookup (
color VARCHAR(20) PRIMARY KEY
);
CREATE TABLE ItemsWithColors (
...other columns...,
color VARCHAR(20),
FOREIGN KEY (color) REFERENCES ColorLookup(color)
ON UPDATE CASCADE ON DELETE SET NULL
);
This solution has the following advantages:
You can query the color names in the main data table without requiring a join to the lookup table.
Nevertheless, color names are constrained to the set of colors in the lookup table.
You can get a list of unique colors names (even if none are currently in use in the main data) by querying the lookup table.
If you change a color in the lookup table, the change automatically cascades to all referencing rows in the main data table.
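As a quick usage illustration of that cascading update (hypothetical data, using the tables defined above):

UPDATE ColorLookup SET color = 'Crimson' WHERE color = 'Red';
-- every referencing row in ItemsWithColors now reads 'Crimson'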
It's surprising to me that so many other people on this thread seem to have mistaken ideas of what "normalization" is. Using a surrogate key (the ubiquitous "id") has nothing to do with normalization!
Re comment from #MacGruber:
Yes, the size is a factor. In InnoDB for example, every secondary index stores the primary key value of the row(s) where a given index value occurs. So the more secondary indexes you have, the greater the overhead for using a "bulky" data type for the primary key.
Also this affects foreign keys; the foreign key column must be the same data type as the primary key it references. You might have a small lookup table so you think the primary key size in a 50-row table doesn't matter. But that lookup table might be referenced by millions or billions of rows in other tables!
There's no right answer for all cases. Any answer can be correct for different cases. You just learn about the tradeoffs, and try to make an informed decision on a case by case basis.
In cases of simple atomic values, I tend to disagree with the common wisdom on this one, mainly on the complexity front. Consider a table containing hats. You can do the "denormalized" way:
CREATE TABLE Hat (
hat_id INT NOT NULL PRIMARY KEY,
brand VARCHAR(255) NOT NULL,
size INT NOT NULL,
color VARCHAR(30) NOT NULL /* color is a string, like "Red", "Blue" */
)
Or you can normalize it more by making a "color" table:
CREATE TABLE Color (
color_id INT NOT NULL PRIMARY KEY,
color_name VARCHAR(30) NOT NULL
)
CREATE TABLE Hat (
hat_id INT NOT NULL PRIMARY KEY,
brand VARCHAR(255) NOT NULL,
size INT NOT NULL,
color_id INT NOT NULL REFERENCES Color(color_id)
)
The end result of the latter is that you've added some complexity - instead of:
SELECT * FROM Hat
You now have to say:
SELECT * FROM Hat H INNER JOIN Color C ON H.color_id = C.color_id
Is that extra join a huge deal? No - in fact, that's the foundation of the relational design model - normalizing allows you to prevent possible inconsistencies in the data. But every situation like this adds a little bit of complexity, and unless there's a good reason, it's worth asking why you're doing it. I consider possible "good reasons" to include:
Are there other attributes that "hang off of" this attribute? Are you capturing, say, both "color name" and "hex value", such that hex value is always dependent on color name? If so, then you definitely want a separate color table, to prevent situations where one row has ("Red", "#FF0000") and another has ("Red", "#FF3333"). Multiple correlated attributes are the #1 signal that an entity should be normalized.
Will the set of possible values change frequently? Using a normalized lookup table will make future changes to the elements of the set easier, because you're just updating a single row. If it's infrequent, though, don't balk at statements that have to update lots of rows in the main table instead; databases are quite good at that. Do some speed tests if you're not sure.
Will the set of possible values be directly administered by the users? I.e. is there a screen where they can add / remove / reorder the elements in the list? If so, a separate table is a must, obviously.
Will the list of distinct values power some UI element? E.g. is "color" a droplist in the UI? Then you'll be better off having it in its own table, rather than doing a SELECT DISTINCT on the table every time you need to show the droplist.
If none of those apply, I'd be hard pressed to find another (good) reason to normalize. If you just want to make sure that the value is one of a certain (small) set of legal values, you're better off using a CONSTRAINT that says the value must be in a specific list; keeps things simple, and you can always "upgrade" to a separate table later if the need arises.
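For example, a sketch of that CHECK-constraint version of the Hat table (MySQL enforces CHECK constraints only from 8.0.16 on; the color list is illustrative):

CREATE TABLE Hat (
hat_id INT NOT NULL PRIMARY KEY,
brand VARCHAR(255) NOT NULL,
size INT NOT NULL,
color VARCHAR(30) NOT NULL CHECK (color IN ('Red', 'Blue', 'Green'))
);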
One thing no one has considered is that you would not join to the lookup table if the data in it can change over time and the records joined to are historical. The example is a parts table and an orders table. Vendors may drop parts or change part numbers, but the orders table should always have exactly what was ordered at the time it was ordered. Therefore, it should look up the data to do the record insert, but should never join to the lookup table to get information about an existing order. Instead, the part number and description and price, etc. should be stored in the orders table. This is especially critical so that price changes do not propagate through historical data and make your financial records inaccurate. In this case, you would also want to avoid using any kind of cascading update.
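A hedged sketch of that snapshotting approach (table and column names illustrative); the lookup happens once at insert time, and the copied values are never updated afterwards:

CREATE TABLE order_line (
order_id INT UNSIGNED NOT NULL,
line_no INT UNSIGNED NOT NULL,
part_id INT UNSIGNED NOT NULL, -- kept for reference, but not joined for historical data
part_number VARCHAR(32) NOT NULL, -- copied from the parts table at insert time
description VARCHAR(128) NOT NULL, -- copied, never updated
price DECIMAL(10,2) NOT NULL, -- the price as of the order date
PRIMARY KEY (order_id, line_no)
);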
rauhr.myopenid.com wrote:
The way we decided to solve this problem is with 4th normal form.
...
That is not 4th normal form. That is a common mistake called One True Lookup:
http://www.dbazine.com/ofinterest/oi-articles/celko22
4th normal form is:
http://en.wikipedia.org/wiki/Fourth_normal_form
Normalization is pretty universally regarded as part of best practices in databases, and normalization says yeah, you push the data out and refer to it by key.
Since no one else has addressed your second point: When queries become long and difficult to read and write due to all those joins, a view will usually resolve that.
You can even make it a rule to always program against the views, having the view get the lookups.
This makes it possible to optimize the view and make your code resistant to changes in the tables.
In Oracle, you could even convert the view into a materialized view if you ever need to.
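For instance, a hedged sketch reusing the Hat/Color tables from the earlier answer; application code queries the view and never writes the join itself:

CREATE VIEW hat_details AS
SELECT h.hat_id, h.brand, h.size, c.color_name
FROM Hat h
JOIN Color c ON c.color_id = h.color_id;

SELECT * FROM hat_details WHERE color_name = 'Red';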