I'd really appreciate some advice on how to set up my table design and what type of indexing I should use.
I figured this type of requirement has come up a lot before, so hopefully I can benefit from your advice!
The requirement, and my initial plan is as follows:
I have one table which identifies a limited number of forms:
FormID FormName Desc etc..
I will have a second table that populates information for those forms.
(EquipmentIDs are unique, so a piece of equipment may require one of the forms from the previous table.)
ID   FormID   EquipmentID   Element       Value
---------------------------------------------------------------
1    25       3432          lightswitch   GE Lightbulb
2    25       3432          lamp          nice lamp
3    25       3432          rug           really ties the room together
4    25       3432          shelf         good shelf
5    25       3432          ...           ...
6    23       2314          ...           ...
So essentially all of the information for the forms would be in the second table. To populate a form, I would then select from the second table on the FormID and EquipmentID.
Is there a better way of doing this? It makes sense to me, but I can see the table growing very quickly, and I am wondering what the best way to index this second table would be.
Thank you very much for your time and help
You are implementing a variation on the Entity-Attribute-Value design. I have written several times that this is not a valid design for a relational database.
You lose the ability to use SQL data types appropriately.
You lose the ability to use SQL constraints like UNIQUE, FOREIGN KEY, and even NOT NULL.
Queries against EAV data don't scale well.
The storage is bulky and inefficient. Not only does it create many rows, but the first three columns have a lot of duplication in values, making indexes less efficient.
To use EAV data, you have to write lots more application code to check data types, simulate constraints, and reformat result sets. This is extra work that the RDBMS already does for you more efficiently.
The correct design for a relational database is to design a table for each of your forms, with form fields in distinct columns, so you can choose the column names, data types, constraints appropriately. Form fields that allow multiple values require separate child tables too.
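As a rough sketch of what that could look like for the living-room form in the question (table and column names here are invented for illustration, and an Equipment table is assumed to exist):

-- One table per form, with real types and constraints.
-- Names are hypothetical; adjust to your actual form fields.
CREATE TABLE FormLivingRoom (
    EquipmentID int NOT NULL PRIMARY KEY,
    Lightswitch varchar(50)  NOT NULL,
    Lamp        varchar(50)  NULL,
    Rug         varchar(100) NULL,
    Shelf       varchar(50)  NULL,
    CONSTRAINT FK_FormLivingRoom_Equipment
        FOREIGN KEY (EquipmentID) REFERENCES Equipment (EquipmentID)
);

Each field now has its own name, type, and nullability, and the FOREIGN KEY guarantees a form row cannot exist for unknown equipment - none of which the EAV table can enforce.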
I apologize if this may seem like somewhat of a novice question (which it probably is), but I'm just introducing myself to the idea of relational databases and I'm struggling with this concept.
I have a database with roughly 75 fields which represent different characteristics of a 'user'. One of those fields represents the locations that user has been, and I'm wondering what the best way is to store the data so that it is easily retrievable and can be used later on (i.e. tracking a route on Google Maps, identifying if two users shared the same location, etc.).
The problem is that some users may have 5 locations in total while others may be well over 100.
Is it best to store these locations in a text file named using the unique id of each user (one location on each line, or in a CSV)?
Or to create a separate table for each individual user connected to their unique id (that seems like overkill to me)?
Or, is there a way to store all of the locations directly in the single field in the original table?
I'm hoping that I'm missing a concept, or there is a link to a tutorial that will help my understanding.
If it helps, you can assume that the locations will be stored in order and will not be changed once stored. Also, these locations are static (I don't need to add any more locations later, and they can't be updated).
Thank you for your time in helping me. I appreciate it!
Store the location data for the user in a separate table. The location table would link back to the user table by a common user_id.
Keeping multiple locations for a particular user in the user table itself is not a good idea - you'll end up with denormalized data.
You may want to read up on:
Referential Integrity
Relational denormalization
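A minimal sketch of that layout (names and types are illustrative, not prescriptive, and an existing users table with a user_id primary key is assumed):

CREATE TABLE location (
    location_id int PRIMARY KEY,
    latitude    decimal(9,6) NOT NULL,
    longitude   decimal(9,6) NOT NULL
);

CREATE TABLE user_location (
    user_id     int NOT NULL REFERENCES users (user_id),
    location_id int NOT NULL REFERENCES location (location_id),
    position    int NOT NULL,  -- ordinal: the order the user visited it
    PRIMARY KEY (user_id, position)
);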
The most common way would be to have a separate table, something like
USER_LOCATION
+------------+------------------+
| USER_ID | LOCATION_ID |
+------------+------------------+
| | |
If user 3 has 5 locations, there will be five rows containing user_id 3.
However, if you say the order of locations matters, then an additional field specifying the ordinal position of the location within a user can be used.
The separate table approach is what we call normalized.
If you store a location list as a comma-separated string of location ids, for example, it is trivial to maintain the order, but you lose the ability for the database to quickly answer the question "which users have been at location x?". Your data would be what we call denormalized.
You do have options, of course, but relational databases are pretty good with joining tables, and they are not overkill. They do look a little funny when you have ordering requirements, like the one you mention. But people use them all the time.
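For example, with a user_location (user_id, location_id, position) table like the one sketched above, both the route and the shared-location questions become simple queries:

-- One user's route, in visit order:
SELECT location_id
FROM user_location
WHERE user_id = 3
ORDER BY position;

-- Which users have been at location x (say, 42)?
SELECT DISTINCT user_id
FROM user_location
WHERE location_id = 42;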
In a relational database you would use a mapping table. So you would have user, location and userlocation tables (user is a reserved word so you may wish to use a different name). This allows you to have a many-to-many relationship, i.e. many users can visit many locations. If you want to model a route as an ordered collection of locations then you will need to do more work. This site gives an example
I have this phonebook table in SQL Server 2005:
username (PK)   Serial (PK)   contact_name   contact_adr     contact_email   contact_phone
-------------------------------------------------------------------------------------------
bob             1             Steve          12 abc street   steve@bb.com    1234
bob             2             John           34 xyz street   john@bb.com     5345
bob             3             Mark           98 ggs street   mark@bb.com     1234
patrick         4             lily           77 fgs street   lily@bb.com     1234
patrick         5             mily           76 fgs street   mily@bb.com     1234
von             8             jim            6767 jsd way    jim@bb.com      4564
Now you can see the phonebook stores all contacts of the same user together.
Storing it this way has advantages which I can't do without.
My question is:
If I have 100 million entries in the table across all users, will my future insertions into the above table be very expensive? (Since the SQL engine needs to find the actual location where to insert the data - I mean, under which username.)
I tested with 1 million rows, and I don't see noticeable issues.
I am asking whether anyone has experience with this or suggestions for me.
Thanks
The approach that is optimal for an address book is a NoSQL hashed table. There's no need for an index on the PK: the hashing algorithm returns the "page" where the row identified by the PK can be found. The address book of the user is also stored with the user, as a denormalized relation. Insert overhead is negligible. A hashed PK is optimized for insert/retrieval when the PK is known - excellent for OLTP systems.
Now, if you want to do something like figure out who knows whom, so that a given user's contacts need to be related to the contacts of all other users, then you have a different can of worms. But for a straightforward address-book application, where the contacts of a given user remain "private" to that user, a hashed primary key system is superb.
One of the first principles in DB design is data non-redundancy: your table design doesn't comply with that principle, as you have the same data repeated many times. A reasonable solution would be to create a separate table for users, a separate table for contacts, and a table for the relationship between users and contacts.
It depends on the underlying database. Every implementation has something different up its sleeve.
But! Performance will almost definitely suffer if you use indexes on that table and you have many, many, many, many rows inside of it.
First of all, username doesn't seem to be a primary key for your table by itself; you will probably have to use it in combination with another field if you want it to work. I would rather use your Serial column as the primary key and have an index on username to answer the query "get bob's contacts" efficiently.
Your inserts will certainly become slower as your table grows, but I don't think they will become slow enough to make this approach worth avoiding.
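A sketch of that suggestion in SQL Server terms (column types are guesses):

-- Serial as the primary key; a nonclustered index answers
-- "get bob's contacts" without scanning the whole table.
CREATE TABLE phonebook (
    serial        int IDENTITY(1,1) PRIMARY KEY,
    username      varchar(50)  NOT NULL,
    contact_name  varchar(100) NOT NULL,
    contact_adr   varchar(200) NULL,
    contact_email varchar(100) NULL,
    contact_phone varchar(20)  NULL
);

CREATE NONCLUSTERED INDEX ix_phonebook_username
    ON phonebook (username);

If you really do want each user's contacts physically stored together, a clustered index on (username, serial) is the SQL Server mechanism for that, at the cost of somewhat more expensive inserts into the middle of the key range.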
You can't force the data to be stored together. Are you re-sequencing the Serial upon an insert? How are you ensuring the data is "stored together"?
If you mean putting all this data in one table, then it really depends on your index structure. The more indexes on the table, the more processing takes place on every insert. Since user tables are usually heavily queried and rarely inserted into (relatively), they are usually indexed heavily, in which case inserts can be slow. The answer, as with almost every DB question, is: "It depends".
I understand the concept of database normalization, but always have a hard time explaining it in plain English - especially for a job interview. I have read the wikipedia post, but still find it hard to explain the concept to non-developers. "Design a database in a way not to get duplicated data" is the first thing that comes to mind.
Does anyone have a nice way to explain the concept of database normalization in plain English? And what are some nice examples to show the differences between first, second and third normal forms?
Say you go to a job interview and the person asks: explain the concept of normalization and how you would go about designing a normalized database.
What key points are the interviewers looking for?
Well, if I had to explain it to my wife, it would be something like this:
The main idea is to avoid duplication of large data.
Let's take a look at a list of people and the country they came from. Instead of holding the name of the country, which can be as long as "Bosnia & Herzegovina", for every person, we simply hold a number that references a table of countries. So instead of holding 100 "Bosnia & Herzegovina"s, we hold 100 #45s. Now if in the future, as often happens with Balkan countries, it splits into two countries, Bosnia and Herzegovina, I will have to change it in only one place. Well, sort of.
Now, to explain 2NF, I would change the example: let's assume that we hold the list of countries every person visited.
Instead of holding a table like:
Person   CountryVisited   AnotherInformation   D.O.B.
Faruz    USA              Blah Blah            1/1/2000
Faruz    Canada           Blah Blah            1/1/2000
I would have created three tables: one table with the list of countries, one table with the list of persons, and another table to connect them both. That gives me the most freedom I can get for changing a person's information or country information. This enables me to "remove duplicate rows" as normalization expects.
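Sketched as tables (names invented for the example):

CREATE TABLE country (
    country_id int PRIMARY KEY,
    name       varchar(100) NOT NULL  -- "Bosnia & Herzegovina" stored once
);

CREATE TABLE person (
    person_id int PRIMARY KEY,
    name      varchar(100) NOT NULL,
    dob       date NULL
);

CREATE TABLE person_country_visited (
    person_id  int NOT NULL REFERENCES person (person_id),
    country_id int NOT NULL REFERENCES country (country_id),
    PRIMARY KEY (person_id, country_id)
);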
One-to-many relationships should be represented as two separate tables connected by a foreign key. If you try to shove a logical one-to-many relationship into a single table, then you are violating normalization which leads to dangerous problems.
Say you have a database of your friends and their cats. Since a person may have more than one cat, we have a one-to-many relationship between persons and cats. This calls for two tables:
Friends
Id | Name | Address
-------------------------
1 | John | The Road 1
2 | Bob | The Belltower
Cats
Id | Name | OwnerId
---------------------
1 | Kitty | 1
2 | Edgar | 2
3 | Howard | 2
(Cats.OwnerId is a foreign key to Friends.Id)
The above design is fully normalized and conforms to all known normalization levels.
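Expressed as DDL, the database itself can enforce the relationship (a sketch; column types are guesses):

CREATE TABLE friends (
    id      int PRIMARY KEY,
    name    varchar(100) NOT NULL,
    address varchar(200) NOT NULL  -- each friend's address stored exactly once
);

CREATE TABLE cats (
    id       int PRIMARY KEY,
    name     varchar(100) NOT NULL,
    owner_id int NOT NULL REFERENCES friends (id)
);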
But say I had tried to represent the above information in a single table like this:
Friends and cats
Id | Name | Address | CatName
-----------------------------------
1 | John | The Road 1 | Kitty
2 | Bob | The Belltower | Edgar
3 | Bob | The Belltower | Howard
(This is the kind of design I might have made if I was used to Excel-sheets but not relational databases.)
A single-table approach forces me to repeat some information if I want the data to be consistent. The problem with this design is that some facts, like the fact that Bob's address is "The Belltower", are repeated twice; this is redundant, makes it difficult to query and change data, and (worst of all) makes it possible to introduce logical inconsistencies.
E.g. if Bob moves, I have to make sure I change the address in both rows. If Bob gets another cat, I have to be sure to repeat the name and address exactly as typed in the other two rows. And if I make a typo in Bob's address in one of the rows, suddenly the database has inconsistent information about where Bob lives. The un-normalized database cannot prevent the introduction of inconsistent and self-contradictory data, and hence the database is not reliable. This is clearly not acceptable.
Normalization cannot prevent you from entering wrong data. What normalization prevents is the possibility of inconsistent data.
It is important to note that normalization depends on business decisions. If you have a customer database, and you decide to only record a single address per customer, then the table design (#CustomerID, CustomerName, CustomerAddress) is fine. If however you decide that you allow each customer to register more than one address, then the same table design is not normalized, because you now have a one-to-many relationship between customer and address. Therefore you cannot just look at a database to determine if it is normalized, you have to understand the business model behind the database.
This is what I ask interviewees:
Why don't we use a single table for an application instead of using multiple tables?
The answer is of course normalization. As already said, it's to avoid redundancy and thereby update anomalies.
This is not a thorough explanation, but one goal of normalization is to allow for growth without awkwardness.
For example, if you've got a user table, and every user is going to have one and only one phone number, it's fine to have a phonenumber column in that table.
However, if each user is going to have a variable number of phone numbers, it would be awkward to have columns like phonenumber1, phonenumber2, etc. This is for two reasons:
If your columns go up to phonenumber3 and someone needs to add a fourth number, you have to add a column to the table.
For all the users with fewer than 3 phone numbers, there are empty columns on their rows.
Instead, you'd want to have a phonenumber table, where each row contains a phone number and a foreign key reference to which row in the user table it belongs to. No blank columns are needed, and each user can have as few or many phone numbers as necessary.
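A sketch of that phonenumber table (hypothetical names; assumes an existing users table):

CREATE TABLE phone_number (
    user_id      int NOT NULL REFERENCES users (id),
    phone_number varchar(20) NOT NULL,
    PRIMARY KEY (user_id, phone_number)
);

-- A user with no phone numbers has no rows here;
-- a user with ten phone numbers has ten.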
One side point to note about normalization: A fully normalized database is space efficient, but is not necessarily the most time efficient arrangement of data depending on use patterns.
Skipping around to multiple tables to look up all the pieces of info from their normalized locations takes time. In high-load situations (millions of rows per second flying around, thousands of concurrent clients, like, say, credit card transaction processing) where time is more valuable than storage space, appropriately denormalized tables can give better response times than fully normalized tables.
For more info on this, look for SQL books written by Ken Henderson.
I would say that normalization is like keeping notes to do things efficiently, so to speak:
If you had a note that said you had to go shopping for ice cream, then without normalization you would have another note saying you have to go shopping for ice cream - just one in each pocket.
Now, in real life, you would never do this, so why do it in a database?
For the designing and implementing part, that's when you can move back to "the lingo" and keep it away from layman's terms, but I suppose you could simplify. You would say what you needed to at first, and then when normalization comes into it, you say you'll make sure of the following:
There must be no repeating groups of information within a table
No table should contain data that is not functionally dependent on that table's primary key
For 3NF I like Bill Kent's take on it: Every non-key attribute must provide a fact about the key, the whole key, and nothing but the key.
I think it may be more impressive if you speak of denormalization as well, and the fact that you cannot always have the best structure AND be in normal forms.
Normalization is a set of rules used to design tables that are connected through relationships.
It helps in avoiding repetitive entries, reducing required storage space, preventing the need to restructure existing tables to accommodate new data, and increasing the speed of queries.
First Normal Form: Data should be broken up into the smallest units. Tables should not contain repetitive groups of columns. Each row is identified by one or more primary key columns.
For example, a column named 'Name' in a 'Customer' table should be broken into 'First Name' and 'Last Name'. Also, 'Customer' should have a column named 'CustomerID' to identify a particular customer.
Second Normal Form: Each non-key column should be directly related to the entire primary key.
For example, if a 'Customer' table has a column named 'City', the cities should have a separate table with a primary key and city name defined. In the 'Customer' table, replace the 'City' column with 'CityID' and make 'CityID' a foreign key in the table.
Third normal form: Each non-key column should not depend on other non-key columns.
For example, in an order table, the column 'Total' is dependent on 'Unit price' and 'Quantity', so the 'Total' column should be removed.
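For instance, instead of storing Total, you compute it at query time (a sketch with hypothetical column names):

SELECT order_id,
       unit_price,
       quantity,
       unit_price * quantity AS total  -- derived, never stored
FROM order_line;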
I teach normalization in my Access courses and break it down a few ways.
After discussing the precursors to storyboarding or planning out the database, I then delve into normalization. I explain the rules like this:
Each field should contain the smallest meaningful value:
I write a name field on the board and then place a first name and last name in it, like Bill Lumbergh. We then ask the students what problems we will have when the first name and last name are all in one field. I use my name as an example, which is Jim Richards. If the students do not lead me down the road, then I yank their hand and take them with me. :) I tell them that my name is a tough one for some, because I have what some people would consider 2 first names, and some people call me Richard. If you were trying to search for my last name, it is going to be harder for a normal person (without wildcards), because my last name is buried at the end of the field. I also tell them that they will have problems easily sorting the field by last name, because again my last name is buried at the end.
I then let them know that "meaningful" is based upon the audience who is going to be using the database as well. We, at our job, will not need a separate field for apartment or suite number if we are storing people's addresses, but shipping companies like UPS or FedEx might need it separated out to easily pull up the apartment or suite they need to go to when they are on the road, running from delivery to delivery. So it is not meaningful to us, but it is definitely meaningful to them.
Avoiding Blanks:
I use an analogy to explain why they should avoid blanks. I tell them that Access and most databases do not store blanks the way Excel does. Excel does not care if you have nothing typed into a cell and will not increase the file size, but Access will reserve that space until the point in time when you actually use the field. So even if it is blank, it will still be using up space, and I explain that it slows their searches down as well.
The analogy I use is empty shoe boxes in the closet. If you have shoe boxes in the closet and you are looking for a pair of shoes, you will need to open up and look in each of the boxes for a pair of shoes. If there are empty shoe boxes, then you are just wasting space in the closet and also wasting time when you need to look through them for that certain pair of shoes.
Avoiding redundancy in data:
I show them a table that has lots of repeated values for customer information and then tell them that we want to avoid duplicates, because I have sausage fingers and will mistype values if I have to type the same thing over and over again. This "fat-fingering" of data will lead to my queries not finding the correct data. Instead, we will break the data out into a separate table and create a relationship using primary and foreign key fields. This way we are saving space, because we are not typing the customer's name, address, etc. multiple times; instead we just use the customer's ID number in a field for the customer. We then will discuss drop-down lists/combo boxes/lookup lists or whatever else Microsoft wants to name them later on. :) You as a user will not want to look up and type out the customer's number each time in that customer field, so we will set up a drop-down list that gives you a list of customers, where you can select their name and it will fill in the customer's ID for you. This will be a 1-to-many relationship, where 1 customer can have many different orders.
Avoiding repeated groups of fields:
I demonstrate this when talking about many-to-many relationships. First, I draw 2 tables, 1 that will hold employee information and 1 that will hold project information. The tables are laid out similar to this.
(Table1)
tblEmployees
* EmployeeID
First
Last
(Other Fields)….
Project1
Project2
Project3
Etc.
**********************************
(Table2)
tblProjects
* ProjectNum
ProjectName
StartDate
EndDate
…..
I explain to them that this would not be a good way of establishing a relationship between an employee and all of the projects they work on. First, if we have a new employee, they will not have any projects yet, so we will be wasting all of those fields. Second, if an employee has been here a long time, they might have worked on 300 projects, so we would have to include 300 project fields; those people who are new and have only 1 project will have 299 wasted project fields. This design is also flawed because I would have to search each of the project fields to find all of the people who have worked on a certain project, since that project number could be in any of the project fields.
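The repeated Project1, Project2, Project3... fields are instead replaced by a third, junction table, sketched below with names matching the example above:

CREATE TABLE tblEmployeeProjects (
    EmployeeID int NOT NULL REFERENCES tblEmployees (EmployeeID),
    ProjectNum int NOT NULL REFERENCES tblProjects (ProjectNum),
    PRIMARY KEY (EmployeeID, ProjectNum)
);

-- Everyone who worked on project 300, no matter "which slot" it was in:
SELECT EmployeeID FROM tblEmployeeProjects WHERE ProjectNum = 300;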
I covered a fair amount of the basic concepts. Let me know if you have other questions or need help with clarification or breaking it down in plain English. The wiki page does not read as plain English and might be daunting for some.
I've read the wiki links on normalization many times, but I have found a better overview of normalization in this article. It is a simple, easy-to-understand explanation of normalization up to fourth normal form. Give it a read!
Preview:
What is Normalization?
Normalization is the process of efficiently organizing data in a database. There are two goals of the normalization process: eliminating redundant data (for example, storing the same data in more than one table) and ensuring data dependencies make sense (only storing related data in a table). Both of these are worthy goals as they reduce the amount of space a database consumes and ensure that data is logically stored.
http://databases.about.com/od/specificproducts/a/normalization.htm
Database normalization is a formal process of designing your database to eliminate redundant data. The design consists of:
planning what information the database will store
outlining what information users will request from it
documenting the assumptions for review
Use a data-dictionary or some other metadata representation to verify the design.
The biggest problem with normalization is that you end up with multiple tables representing what is conceptually a single item, such as a user profile. Don't worry about normalizing data in tables that will have records inserted but never updated, such as history logs or financial transactions.
References
When not to Normalize your SQL Database
Database Design Basics
+1 for the analogy of talking to your wife. I find that anyone without a tech mind needs to be eased into this type of conversation.
but...
To add to this conversation, there is the other side of the coin (which can be important in an interview).
When normalizing, you have to watch how the databases are indexed and how the queries are written.
In a truly normalized database, I have found situations where it was easy to write slow queries because of bad join operations, bad indexing on the tables, and plain bad design of the tables themselves.
Bluntly, it's easier to write bad queries against highly normalized tables.
I think for every application there is a middle ground. At some point you want the ease of getting everything out of a few tables, without having to join to a ton of tables to get one data set.
I am working on an application which accepts any uploaded CSV data, stores it alongside other datasets which have been uploaded previously, and then produces output (CSV or HTML) based on the user selecting which columns/values they want returned. The database will be automatically expanded to handle new/different columns and datatypes as required. This is in preference to an entity-attribute-value model.
Example - uploading these 2 sets to a blank database:
dataset A:
name | dept | age
------+-------+------
Bob | Sales | 24
Tim | IT | 32
dataset B:
name | dept | age | salary
------+-------+------+--------
Bob | Sales | 24 | £20,000
Tim | IT | 32 | £20,000
Uploading these will programmatically change the 'data' table: importing dataset A results in 3 newly created columns (name, dept, age); importing dataset B results in 1 newly created column (salary). For the moment, forget about whether the recordsets should be combined and that there's no normalisation.
The issue I have is that some columns will also have lookup values - let's say that the Dept column will at some point in the future have associated values which give the address and phone numbers of that department. The same could be true for the Salary column, looking up tax groupings etc.
The number of columns in this big table should not become too high (a few hundred), but it will be high enough that I want the user to administer the lookup table structure and values through an admin panel, rather than having to involve developers each time.
The question is whether to use individual lookup tables for each column (value, description), or a combined lookup table which references the column (column, value, description). Normally I would opt for individual lookup tables, but here the application will need to create them automatically (e.g. lookup_dept, lookup_salary) and then add a new join into the master SQL statement. This would be done at the request of the user rather than when the column's added (to avoid hundreds of empty tables).
The combined lookup table on the other hand would need to be joined multiple times onto the data table, selecting on the column name each time.
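To make the comparison concrete, here are the two join shapes (all names hypothetical):

-- Individual lookup tables: one clean join per column.
SELECT d.name, ld.description
FROM data d
JOIN lookup_dept ld ON ld.value = d.dept;

-- Combined lookup table: the same table joined once per column,
-- filtered on the column name every time.
SELECT d.name, l1.description AS dept_info, l2.description AS salary_info
FROM data d
JOIN lookup_combined l1 ON l1.column_name = 'dept'   AND l1.value = d.dept
JOIN lookup_combined l2 ON l2.column_name = 'salary' AND l2.value = d.salary;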
Individual lookups seem to make sense to me, but I may be barking up completely the wrong tree.
I would agree that individual tables is preferable. It is more scalable and better for query optimisation. Also, if in future the users want more columns on a particular lookup then you can add them.
Yes, the application will have to create tables and constraints automatically: I wouldn't normally do this, but then this application is already altering existing tables and adding columns to them, which I wouldn't normally do either!
Ah, the "One true lookup table" idea. One of the rare times I agree with Mr Celko.
A Google search for that phrase turns up plenty of discussion, too.
Individual tables every time. It's "correct" in the database sense.
My reason (no normalisation pedants please): each row in a table stores one entity only.
E.g. fruit names, car makes, phone brands. To mix them is nonsense. I could have a phone brand called "Apple". Er... wait a minute...
You said,
This is in preference to a entity-attribute-value model.
But it looks to me like that is exactly what you need.
Consider using an RDF triplestore, and query it with SPARQL.
Forget SQL, this is a job for RDF.