Is this a bad approach to database design? - sql

I have to build an application for my university that will count as course credit for a class that lasts 1 month. In this application I need a way for users to save a Teacher Class Follow-up Evaluation, where an evaluator goes to the classroom, observes the teacher, and ticks certain columns.
An example would be:
Pedagogical Aspects:
Show order and follows class sequence: YES NO Observations
Gives clear examples: YES NO Observations
Involves students in discussion: YES NO Observations
If the user (the one evaluating) chooses YES, then nothing is written in Observations, but if he chooses NO, he has to write observations without fail.
How could I handle this in my database? I'm having doubts about over normalizing. :x Any suggestion would be welcome at this point before I move on with the project.
My plan as of now is to just have a big table called Followup that has all these 'aspects' as BIT columns in Microsoft SQL, plus a nullable observation field (e.g. ShowOrderSequenceObservation) for every aspect. O_O I feel dirty just thinking about it, so I turn to you, fellow developers. Thank you!

I would do something like this:
Table for the actual record - note that this is an anonymous recording from the student perspective
| record_id | question_id | YESNO | observation | teacher_id |
Table of questions.
| question_id | question_string |
Table of teachers:
| teacher_id | teacher_string |
In the general flow of things, I would also update the student table to note "has recorded" and insert the answers all in one transaction. This would preserve student anonymity yet also get the data in.
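A minimal sketch of that layout in SQL (table and column names are illustrative, not prescribed by the answer; the BIT type and constraint syntax assume SQL Server since the question mentions it):
-- Hypothetical DDL for the three tables described above.
CREATE TABLE teacher (
    teacher_id     INT PRIMARY KEY,
    teacher_string VARCHAR(100) NOT NULL
);
CREATE TABLE question (
    question_id     INT PRIMARY KEY,
    question_string VARCHAR(400) NOT NULL
);
CREATE TABLE followup_record (
    record_id   INT NOT NULL,
    question_id INT NOT NULL REFERENCES question (question_id),
    teacher_id  INT NOT NULL REFERENCES teacher (teacher_id),
    yes_no      BIT NOT NULL,              -- 1 = YES, 0 = NO
    observation VARCHAR(1000) NULL,
    PRIMARY KEY (record_id, question_id),
    -- The "NO must have an observation" rule can live in the database itself.
    CHECK (yes_no = 1 OR observation IS NOT NULL)
);
The "has recorded" flag on the student table and the answer rows can then be written in one transaction, as described above.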
edit - I have no idea how I would ORM this thing. If I was developing it, I'd hack it out in 10-30 hours with perl and direct sql access. Most of the time would be spent beating on HTML formatting.

Sounds like the age-old question of time vs quality. A denormalised table would certainly be fast and easy, but a normalised one with category and question tables would allow flexibility. Your uni could use it for other things, allow new question types to be set up on the fly, etc., and it could get you a better grade.
If you think you can get what you want with a denormalised table, I'd go that way. It's not a production system and business needs aren't going to change in its lifetime. But if you want to push for the blue ribbon solution, I'd normalise it.
BTW, adding a < br > at the end of each option makes it more readable.

You know normalization isn't just for large enterprise-level databases (I know you know :). History has shown that if you don't normalize you will get anomalies. Start with 5NF and 'optimize' from there, though I suspect you will find that optimization is not required.
I suspect the proposed design will not suit its intended purpose, e.g. data analysis. Try writing some typical SQL queries against it (e.g. average length of Observations across all Pedagogical questions, then across all questions) and you will find it a pain: huge CASE statements, tables UNIONed many times over, ... it's likely you will end up writing VIEWs to normalise the data!
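For contrast, against a normalised layout like the one sketched earlier, that analysis stays a short query (this assumes a category column on the question table, which is my addition; LEN is SQL Server, LENGTH elsewhere):
-- Average observation length per category of question.
SELECT q.category,
       AVG(LEN(r.observation)) AS avg_observation_length
FROM   followup_record r
JOIN   question q ON q.question_id = r.question_id
WHERE  r.observation IS NOT NULL
GROUP  BY q.category;
Against the single wide Followup table, the same question needs a CASE branch or a UNION arm per aspect column.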

Related

What is the difference between a hard coded table and a look up table in SQL?

Someone told me to avoid hard-coding values in tables and to rather use lookup tables. I'm not sure what the difference is. Can someone please explain? Thanks in advance.
Hi, a lookup table is a normal table - in general you put information in there that is used repeatedly. You may or may not use ids for the values (often you do - integers are ideal for fast lookups, but short character codes can be more user-friendly).
Examples are:
status_codes:
| id | status |
| 1 | pending |
| 2 | approved |
| 3 | denied |
us_states:
| abbreviation | name |
| AL | Alabama |
| AK | Alaska |
| AR | Arkansas |
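A sketch of how such a lookup table is typically created and used (the applications table and its status_id column are made up for the example):
CREATE TABLE status_codes (
    id     INT PRIMARY KEY,
    status VARCHAR(20) NOT NULL UNIQUE
);
INSERT INTO status_codes (id, status) VALUES
    (1, 'pending'),
    (2, 'approved'),
    (3, 'denied');
-- Rows elsewhere store only the small id and join back for the label.
SELECT a.id, s.status
FROM   applications a
JOIN   status_codes s ON s.id = a.status_id;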
The advantages of a lookup table are two-fold:
Without one, it is easy to end up with the same value "hard-coded" two different ways - a situation to be avoided at all costs!
It lets you "maintain" the table in a consistent way through regular updates and even through administration pages in your application. If you hard-code data into your queries and stored procedures, then if anything changes you have to find all the places where values are hard-coded and change them in place.
In your question, the particular phrase hard coded table is somewhat ambiguous - perhaps it means creating a temp table in a query or stored procedure. The reasoning about it, with the pros and cons, is pretty much the same though - ease of use, consistency, and maintainability.
As with many things, it's probably not something that's always true all of the time. If you have a small "lookup table" that is really only used in one query and is highly unlikely to ever need to be updated or used in other queries, it's not the end of the world to have it hard-coded in. But if the coding standards where you are dictate otherwise, it's best to just go with the flow.

SQL table design: one or multiple line per entity? [duplicate]

I was wondering: if you have a website with a dozen different types of listings (Shops, Restaurants, Clubs, Hotels, Events) that require different fields, is there a benefit to creating a table with columns defined like so
Example Shop:
shop_id | name | X | Y | city | district | area | metro | station | address | phone | email | website | opening_hours
Or a more abstract approach similar to this:
object_id | name
---------------
1 | Messy Joe's
2 | Bate's Motel
type_id | name
---------------
1 | hotel
2 | restaurant
object_id | type_id
---------------
1 | 2
2 | 1
field_id | name | field_type
---------------
1 | address | text
2 | opening_hours | date
3 | speciality | text
type_id | field_id
---------------
1 | 1
1 | 2
2 | 1
2 | 3
object_id | field_id | value
1 | 1 | 1st street....
1 | 3 | English Cuisine
Of course it can be more abstract if values are predefined (Example: specialities could have their own list)
If I take the abstract approach it can be very flexible, but queries will be more complex with a lot of joins.
But I don't know if this affects the performance, executing these 'more complex' queries.
I would be interested to know what are the up and downsides of both methods. I can just imagine for myself, but I don't have the experience to confirm this.
Certain issues need to be clarified and resolved before we can enter into a reasonable discussion.
Pre-requisite Resolution
Labels
In a profession that demands precision, it is important that we use precise labels, to avoid confusion, and so that we can communicate without having to use long-winded descriptions and qualifiers.
What you have posted as FixedTables, is Unnormalised. Fair enough, it may be an attempt at Third Normal form, but in fact it is a flat file, Unnormalised (not "denormalised"). What you have posted as AbstractTables is, to be precise, Entity-Attribute-Value, which is almost, but not quite, Sixth Normal form, and is therefore more Normalised than 3NF. Assuming it is done correctly, of course.
The Unnormalised flat file is not "denormalised". It is chock full of duplication (nothing has been done to remove repeating groups and duplicate columns or to resolve dependencies) and Nulls, it is a performance hog in many ways, and prevents concurrency.
In order to be Denormalised, it has to first be Normalised, and then the Normalisation backed off a little for some good reason. Since it is not Normalised in the first place, it cannot be Denormalised. It is simply Unnormalised.
It cannot be said to be denormalised "for performance", because being a performance hog, it is the very antithesis of performance. Well, they need a justification for the lack of formalised design, and "for performance" is it. Even the smallest formal scrutiny exposes the misrepresentation (but very few people can provide that scrutiny, so it remains hidden, until they get an outsider to address, you guessed it, the massive performance problem).
Normalised structures perform far better than Unnormalised structures. More normalised structures (EAV/6NF) perform better than less normalised structures (3NF/5NF).
I am agreeing with the thrust of OMG Ponies, but not their labels and definitions: rather than saying 'don't "denormalise" unless you have to', I am saying, 'Normalise faithfully, period' and 'if there is a performance problem, you have not Normalised correctly'.
Wikipedia
The entries for Normal Forms and Normalisation offer definitions that are incorrect; they confuse the Normal Forms; they are lacking regarding the process of Normalisation; and they give equal weight to absurd or questionable NFs which have been debunked long ago. The result is, Wikipedia adds to an already confused and rarely understood subject. So don't waste your time.
However, in order to progress, without that reference posing a hindrance, let me say this.
The definition of 3NF is stable, and has not changed.
There is a lot of confusion of the NFs between 3NF and 5NF. The truth is that this is an area that progressed over the last 15 years; and many orgs, academics as well as vendors with their products with limitations, jumped to create a new "Normal Form" to validate their offerings. All serving commercial interests and academically unsound. 3NF in its original untampered state intended and guaranteed certain attributes.
The sum total is, 5NF is today, what 3NF was intended to be 15 years ago, and you can skip the commercial banter and the twelve or so "special" (commercial and pseudo-academic) NFs in-between, some of which are identified in Wikipedia, and even that in confusing terms.
Fifth Normal Form
Since you have been able to understand and implement the EAV in your post, you will have no problem understanding the following. Of course a true Relational Model is pre-requisite, strong keys, etc. Fifth Normal Form is, since we are skipping the Fourth:
Third Normal Form
which in simple definitive terms is, every non-key column in every table has a 1::1 relationship to the Primary Key of the table,
and to no other non-key columns
Zero data duplication (the result, if Normalisation is progressed diligently; not achieved by intelligence or experience alone, or by working toward it as a goal without the formal process)
no Update Anomalies (when you update a column somewhere, you do not have to update the same column located somewhere else; the column exists in one and only one place).
If you understand the above, 4NF, BCNF, and all the silly "NFs" can be dismissed, they are required for physicalised Record Filing Systems, as promoted by academics, quite foreign to the Relational Model (Codd).
Sixth Normal Form
The purpose is elimination of missing data (attribute columns), aka elimination of Nulls
This is the one true solution to the Null Problem (also called Handling Missing Values), and the result is a database without Nulls. (It can be done at 5NF with standards and Null substitutes but that is not optimal.) How you interpret and display the missing values is another story.
Technically, it is not a true Normal Form, because it does not have 5NF as a pre-requisite, but it has value.
EAV vs Sixth Normal Form
All the databases I have written, except one, are pure 5NF. I have worked with (administered, fixed up, enhanced) a couple of EAV databases, and I have implemented many true 6NF databases. EAV is a loose implementation of 6NF, often done by people who do not have a good grasp on Normalisation and the NFs, but who can see the value in, and need the flexibility of, EAV. You are a perfect example.
The difference is this: because it is loose, and because implementers do not have a reference (6NF) to be faithful to, they only implement what they need, and they write it all in code; that ends up being an inconsistent model.
Whereas, a pure 6NF implementation does have a pure academic reference point, and thus it is usually tighter, and consistent. Typically this shows up in two visible elements:
6NF has a catalogue to contain metadata, and everything is defined in metadata, not code. EAV does not have one, everything is in code (implementers keep track of the objects and attributes). Obviously a catalogue eases the addition of columns, navigation, and allows utilities to be formed.
6NF when understood, provides the true solution to The Null Problem. EAV implementers, since they are absent the 6NF context, handle missing data in code, inconsistently, or worse, allow Nulls in the database. 6NF implementers disallow Nulls, and handle missing Data consistently and elegantly, without requiring code constructs (for Null handling; you still have to code for missing data of course).
Eg. For 6NF databases with a catalogue, I have a set of procs that will [re]generate the SQL required to perform all SELECTs, and I provide Views in 5NF for all users, so they do not need to know or understand the underlying 6NF structure. They are driven off the catalogue. Thus changes are easy and automated. EAV types do that manually, due to the absence of the catalogue.
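To make that concrete, here is a deliberately tiny sketch of the idea (table and view names are mine, not from the answer): each attribute lives in its own table with no nullable columns, and a view re-assembles the familiar row shape for users.
-- 6NF-style storage: one table per attribute; no Nulls anywhere in the base tables.
CREATE TABLE product       (product_id INT PRIMARY KEY);
CREATE TABLE product_name  (product_id INT PRIMARY KEY REFERENCES product (product_id),
                            name  VARCHAR(100)  NOT NULL);
CREATE TABLE product_price (product_id INT PRIMARY KEY REFERENCES product (product_id),
                            price DECIMAL(10,2) NOT NULL);
-- 5NF-shaped view for users; a missing attribute is simply an absent row in its
-- table (how it is presented to users is a separate choice).
CREATE VIEW product_5nf AS
SELECT p.product_id, n.name, pr.price
FROM   product p
LEFT JOIN product_name  n  ON n.product_id  = p.product_id
LEFT JOIN product_price pr ON pr.product_id = p.product_id;
In a full implementation these views would be generated from the catalogue rather than written by hand, which is the point the answer is making.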
Discussion
Now, we can start the discussion.
"Of course it can be more abstract if
value's are predefined (Example:
specialities could have their own
list)"
Sure. But do not get too "abstract". Maintain consistency and implement such lists in the same EAV (or 6NF) manner as other lists.
"If I take the abstract approach it
can be very flexible, but queries will
be more complex with a lot of joins.
But I don't know if this affects the
performance, executing these 'more
complex' queries."
Joins are pedestrian in relational databases. The problem is not the database, the problem is that SQL is cumbersome when handling joins, especially compound keys.
EAV and 6NF databases have more Joins, which are just as pedestrian, no more, no less. If you have to code each SELECT manually, sure, the cumbersome gets really cumbersome.
The entire problem can be eliminated by (a) going with 6NF over EAV and (b) implementing a catalogue, from which you can (c) generate all the basic SQL. Eliminates an entire class of errors as well.
It is a common myth that Joins somehow have a cost. Totally false.
The join is implemented at compile time, there is nothing of substance to 'cost' CPU cycles.
The issue is the size of tables being joined, not the cost of the Join between those same tables.
Joining two tables with millions of rows each, on a correct PK⇢FK relation, each of which has the appropriate indices (Unique on the parent [PK] side; Unique on the Child side [PK = parent FK + something]), is instantaneous.
Where the Child index is not unique, but at least the leading columns are valid, it is slower; where there is no useful index, of course it is very slow.
None of it has to do with Join cost.
Where many rows are returned, the bottleneck will be the network and the disk layout; not the join processing.
Therefore you can get as "complex" as you like, there is no cost, SQL can handle it.
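As a small illustration of what "appropriate indices" means for such a parent-child join (hypothetical tables, not from the answer):
-- Parent: the PK supplies the unique index on the parent side.
CREATE TABLE customer (
    customer_id INT PRIMARY KEY
);
-- Child: PK = parent FK + a discriminator supplies the index on the child side.
CREATE TABLE customer_order (
    customer_id INT NOT NULL REFERENCES customer (customer_id),
    order_no    INT NOT NULL,
    PRIMARY KEY (customer_id, order_no)
);
-- The join below is resolved on those two indices; what you pay for is the number
-- of rows touched and returned, not the join itself.
SELECT c.customer_id, o.order_no
FROM   customer c
JOIN   customer_order o ON o.customer_id = c.customer_id;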
I would be interested to know what are the up and downsides of both methods. I can just imagine for myself, but I don't have the experience to confirm this.
5NF (or 3NF for those who have not made the progression) is the easiest and best, in terms of implementation; ease of use (developers as well as users); and maintenance.
The drawback is, every time you add a column, you have to change the database structure (table DDL). That is fine in some cases, but not in most cases: due to the change control in place, it is quite onerous.
Second, you have to change existing code (code handling the new column does not count, because that is an imperative): where good standards are implemented, that is minimised; where they are absent, the scope is unpredictable.
EAV (which is what you have posted), allows columns to be added without DDL changes. That is the single reason people choose it. (code handling the new column does not count, because that is an imperative). If implemented well, it will not affect existing code; if not, it will.
But you need EAV-capable developers.
When EAV is implemented badly, it is abominable, a worse mess than 5NF done badly, but not any worse than Unnormalised which is what most databases out there are (misrepresented as "denormalised for performance").
Of course, it is even more important (than in 5NF/3NF) to hold a strong Transaction context, because the columns are far more distributed.
Likewise, it is essential to retain Declarative Referential Integrity: the messes I have seen were due in large part to the developers removing DRI because it became "too hard to maintain", the result was, as you can imagine, one mother of a data heap with duplicate 3NF/5NF rows and columns all over the place. And inconsistent Null handling.
There is no difference in performance, assuming that the server has been reasonably configured for the intended purpose. (Ok, there are specific optimisations that are possible only in 6NF, which are not possible in other NFs, but I think that is outside the scope of this thread.) And again, EAV done badly can cause unnecessary bottlenecks, no more so than Unnormalised.
Of course, if you go with EAV, I am recommending more formality; buy the full quid; go with 6NF; implement a catalogue; utilities to produce SQL; Views; handle Missing Data consistently; eliminate Nulls altogether. This reduces your vulnerability to the quality of your developers; they can forget about the EAV/6NF esoteric issues, use Views, and concentrate on the app logic.
In your question, you have presented at least two major issues at the same time. Those two issues are E-A-V and gen-spec.
First, let's talk about E-A-V. Your last table (object_id, field_id, value) is essentially an E-A-V. There is an upside to E-A-V and a downside to E-A-V. The upside is that the structure is so generic that it can accommodate almost any body of data describing almost any subject matter. That means that you can proceed to design and implementation with no data analysis and no understanding of the subject matter, and not worry about wrong assumptions. The downside is that at retrieval time, you have to do the data analysis that you skipped over before building the database, in order to come up with queries that mean anything. This is much more serious than just retrieval efficiency. But you are also going to have terrible problems with retrieval efficiency. There are only two ways to learn about this pitfall: live through it or read about it from those who have. I recommend the reading.
Second, you have a gen-spec case. Your table (object_id, type_id) captures a gen-spec (generalization-specialization) pattern, along with the related tables. If I had to generalize between hotels and restaurants, I might call it something like "public accommodations" or "venues". But I'm not sure I understand your case, and you may be driving for something even more general than those two names suggest. After all, you've included "events" in your list, and an event is not a type of venue in my mind.
I've referred other people to readings on gen-spec and the relational model in previous responses.
When two tables are very similar, when should they be combined?
But I hesitate to send you off in the same direction, because it's not clear to me that you want to come up with a relational model of the data before building your database. A relational model of a body of data and an E-A-V model of the same data are almost totally at odds with each other. It seems to me you have to make that choice before you even explore how to express gen-spec in the relational model of data.
When you start to require a large number of different entities (or even before...), a nosql solution would be vastly simpler than either choice.
Just store each entity/record with the exact fields you require.
{
"id": 1,
"type":"Restaurant",
"name":"Messy Joe",
"address":"1 Main St.",
"tags":["asian","fusion","casual"]
}
The "abstract" approach is better known as "Normalization", looks like 3rd Normal Form (3NF).
The other one is called "Denormalized", and can be a valid performance option... when you've encountered speed issues using the Normalized approach, not before.
How do you have the listings represented in code? I'd guess Listing as a supertype, with Shop, Restaurant, etc. as subtypes?
Assuming so, this is a case of how to map subtypes to a relational database. There are generally three choices:
Option 1: single table per subtype, with common attributes repeated in each table (name, id, etc).
Option 2: single table for all objects (your single table approach)
Option 3: table for the supertype and one for each subtype
There's no universally correct solution. My preference is generally to start with option 3; it provides an intuitive structure to work with, is pretty well normalised and can easily be extended. It means a single join for retrieving each instance - but RDBMSs are well optimised for doing joins so it doesn't really cause performance problems in practice.
Option 2 can be more performant for queries (no joins) but causes problems if other tables need to refer to all supertype instances (proliferation of foreign keys).
Option 1 appears at first sight to be the most performant, although 2 caveats: (1) It's not resilient to change. If you add a new subtype (and so different attributes) you'll need to change the table structure and migrate it. (2) It can be less efficient than it seems. Because the table population is sparse, some DBs don't store it particularly efficiently. As a consequence it can be less efficient than option 3 - since the query engine can do joins faster than it can search bloated sparse table spaces.
Which to choose really comes down to knowing details of your problem. I'd suggest reading up a bit on the options: this article is a good place to start.
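A rough sketch of option 3 for the listings in the question (the column choices are illustrative):
-- Supertype: columns shared by every listing.
CREATE TABLE listing (
    listing_id INT PRIMARY KEY,
    name       VARCHAR(100) NOT NULL,
    city       VARCHAR(100),
    address    VARCHAR(200)
);
-- Subtypes: one table each, holding only the type-specific columns.
CREATE TABLE shop (
    listing_id    INT PRIMARY KEY REFERENCES listing (listing_id),
    opening_hours VARCHAR(100)
);
CREATE TABLE restaurant (
    listing_id  INT PRIMARY KEY REFERENCES listing (listing_id),
    speciality  VARCHAR(100)
);
-- Retrieving a full restaurant row is the single join mentioned above.
SELECT l.name, l.address, r.speciality
FROM   listing l
JOIN   restaurant r ON r.listing_id = l.listing_id;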
hth

Need database design advice for a school grading system

I am working for a K-12 school and I am developing a gradebook system. The existing gradebook system that we use which I also developed is based on Excel with VBA. It's always a nightmare for me to consolidate 400+ Excel workbooks every end of term. During the summer break, I'm planning to put all data in a database for easy management and stuff.
My problem is this:
For a computation-intensive application like a gradebook, is it good to store the computations in a table field or is it better to ONLY store the raw data and do the computations only on the frontend?
The way the Excel gradebook system works is like this...
Teacher records each score for each assessment of each student in the form of score / highest possible score. (e.g. Quiz 1 = 5/10, Homework 1 = 20/25, etc.)
The scores will be calculated as percentages and summarized per component. "Component" means Quiz, Homework, etc. So there will be something like "Quizzes Average = 90%, Homework Average = 80%, etc."
Different subjects will have different final grade breakdown like "Science = 50% Quizzes + 50% Homeworks, Math = 60% Quizzes + 40% Homeworks".
Then, the general average grade of each student is computed by getting the average of all subjects.
Everything above is very easy to make in a spreadsheet but I don't know how to implement it in a database.
As of the moment, I'm thinking something like having a table where all assessments are recorded like this:
tbl_scores
id
student_id
term_id
subject_id
component_id
assessment_id
raw_score
highest_possible_score #not sure about this cause this can be implied from assessment_id
Would it be useful to store the computations (percentage for each score entry, component average, subject average, general average, etc. in the database) and use stored procedures and triggers to update them when a new score comes in?
Or, is it better to just store the raw scores and calculate everything ONLY on the frontend? Will this option be faster than the first one knowing that the SELECT queries here will really be more complex due to subqueries?
By the way, I'm planning to use PostgreSQL for this.
Thanks in advance for any advice.
400 students, that is peanuts for an RDBMS that is designed to handle millions of records. You need not worry about performance.
I have seen nothing in your description that PostgreSQL could not do in simple queries. Store only the necessary information and calculate the rest. Also, do not add a redundant column highest_possible_score to your table tbl_scores if that information is in the linked table assessment already. That would do more harm than good.
If (and only if) you should later find that a query is too slow, you can always use materialized views (write the output of a query to a table, recalculate that table after changes to the base data.)
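For example (PostgreSQL syntax, since that is the planned database; the assessments table and its max_score column are assumed names for the linked table mentioned above):
-- Per-student, per-subject, per-component percentage average, computed from raw scores only.
CREATE MATERIALIZED VIEW component_averages AS
SELECT s.student_id,
       s.subject_id,
       s.component_id,
       AVG(s.raw_score::numeric / a.max_score) * 100 AS pct_average
FROM   tbl_scores  s
JOIN   assessments a ON a.id = s.assessment_id   -- the highest possible score lives here
GROUP  BY s.student_id, s.subject_id, s.component_id;
-- Recalculate after the base data changes (or on a schedule).
REFRESH MATERIALIZED VIEW component_averages;
An ordinary VIEW over the same query is usually enough at this data volume; only switch to the materialized form if it proves too slow.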
Whether to denormalize depends on the total number of records. I think that for a school it will not be big enough to justify it; the better decision is to consolidate data with queries (yes, they will be more complex and some joins will appear - that is normal).

Implementing a Flexible Relationship in a RDBMS -- What really are the tradeoffs?

I have a bunch of products with a bunch of different possible attributes for each product. E.g. Product A has a name, size, color, shape. Product B has a name, calories, sugar, etc. One way to solve this is like:
1) Create tables
Products (id, name)
Attributes (id, name)
Product_Attributes (product_id, attribute_id, value as string)
This allows for maximum flexibility, but I have heard a lot of people recommend against this although I am not sure why. I mean, if those tables were called Teams, Players, Team_Players we would all agree that this is proper relational design.
Everyone who explains to me why this is bad does so in the context of a completely flexible relational design where you don't ever create real tables past a few basic initial tables (e.g. object, attribute, object_attribute) - which I think we all can agree is bad. But this is a much more limited and contained version of that (only Products, not every object in the system), so I don't think it is fair to group these two architectures together.
What issues have you encountered (experience or theoretical) that makes this design so bad?
2) Another way to solve this is to create a Product table with a bunch of columns like Size, Color, Shape, Weight, Sugar, etc and then include some extra columns at the end to give us some flexibility. This will create generally sparse rows filled mostly with NULLs. People tend to like this approach, but my question is how many columns can you have before this approach loses its performance benefits? If you have 200 columns, I imagine this is no longer a smart move, but what about 100 columns? 50 columns? 25 columns?
3) The final approach I know about is to store all of the attributes as a blob (JSON perhaps) in a single column of the Products table. I like this approach but it doesn't feel right. Queries are hard. And if you want to be able to easily change the name of an attribute later, you either have to parse every record individually or have them keyed in your blob by some id. If you go the id path then you will need another table Attributes and things start to look like approach #1 from above except you won't be able to join the attribute_id with your blob, so I hope you didn't want to query anything by attribute name.
What I like about this approach though is you can query one product and in your code you can easily access all the properties it has -- fast. And if you delete a product, you won't have to cleanup other tables -- easy to stay consistent.
4) I have read some things about being able to index strongly typed xml formats in some RDBMSs, but I honestly don't know much about this approach.
I am stuck. I feel like approach #1 is the best bet, but everything I read says that way stinks. What is the right way to think about this problem to be able to decide what is the best method for a given situation? More ideas than what I have listed are obviously welcomed!
You can probably find a great deal about this topic by doing a Google search on "entity attribute value antipattern".
One of the issues with this approach is that you end up mixing meta-data with actual data. Your "attribute" has to now tell the database what exactly is held in the "value" column. This can make it very difficult to handle this data in front-ends, reporting software, etc.
Second, you're going to have a very hard time actually enforcing any data integrity in the database. When your product has an attribute of "weight" what's to stop someone from putting "22 inches" in the value? Or a non-numeric value completely. You might say, "Well, my application will handle that." Then you need to change your application every time that you want to add a new attribute because the application needs to know how to handle it. If you're going to go through all of that work, just add a new column.
Third, how do you enforce that a given product has all of the attributes that it needs? In a row you can make a column NOT NULL and it is then required to get that row into the database. You can't enforce that in the EAV model.
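To illustrate the contrast, with a real column both of those integrity rules are a single line of DDL, which the generic value-as-string column cannot express (hypothetical table):
CREATE TABLE products (
    id        INT PRIMARY KEY,
    name      VARCHAR(100) NOT NULL,
    -- Typed, required, and range-checked in the schema itself:
    weight_kg DECIMAL(8,2) NOT NULL CHECK (weight_kg > 0)
);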
Fourth, this kind of a model usually leads to a lot of confusion. People aren't sure what "attributes" are supported, or they duplicate an attribute, or they forget to handle an attribute when creating a report. As an example, if I have an attribute for "Weight(kg)" and another attribute for "Weight(lbs)" and someone asks me, "What's the heaviest product in your database?" I'd better remember that I need to check both attributes.
Fifth, this model usually also leads to laziness. Hey, there's no reason to actually do any analysis of the products that our system can handle, because whatever comes along we'll just add some attributes. In my experience, companies are much better off doing the analysis required to create a good database design rather than fall back on an antipattern like this. You'll learn things about the database, the application, and likely the business as well.
Sixth, it might take a LOT of joins to get a single row of data for a given product. You can return the attributes as separate rows, but now you have to come up with customized list boxes to list those products, etc. Similarly, writing search queries against this model can be very difficult and in both of these situations you're likely to have performance issues.
These are just a few of the problems which I've encountered over the years. I'm sure that there are others.
What the correct solution is for your system depends a lot on the specifics of your business and application. Rather than a sparse row, you might consider using subtype tables if your products fall into a few categories that share common attributes.
There are many problems with flexible data models but the first one that is likely to bite you is the fact that queries get unwieldy very quickly. For example, if you wanted to get the Size attribute for every product, the query is relatively easy.
SELECT p.name product_name,
pa.value product_size
FROM product p
left outer join product_attribute pa on (p.product_id = pa.product_id)
left outer join attribute a on (pa.attribute_id = a.attribute_id and
a.name = 'size')
If you want to get the size and some other attribute like color, things get trickier
SELECT p.name product_name,
pa_size.value product_size,
pa_color.value product_color
FROM product p
left outer join product_attribute pa_size on (p.product_id = pa_size.product_id)
left outer join product_attribute pa_color on (p.product_id = pa_color.product_id)
left outer join attribute a_size on (pa_size.attribute_id = a_size.attribute_id and
a_size.name = 'size')
left outer join attribute a_color on (pa_color.attribute_id = a_color.attribute_id and
a_color.name = 'color')
Very quickly, when you start wanting to grab 10 attributes or write complex searches (show me products where the color is blue and the size is medium), the queries start to get very complicated both for developers to write and maintain and for the database optimizer to generate the query plan for. If you're joining 30 tables together, the optimizer would have to prune the tree of plans it considers very, very quickly to be able to generate a query plan in a reasonable time frame. That tends to lead the optimizer to discard promising paths too early and to generate less than optimal paths for many of your queries.
This, in turn, means that you very quickly get to a point where new development is bottlenecked because developers can't get their queries right or developers can't get their queries to return quickly enough. Whatever time you saved up front by not gathering the requirements to determine what the valid attributes are quickly gets used up with the 47th iteration of "Why can't I get the data I want out of this putrid data model?"
Beyond this cost to developers, you end up creating a lot of costs for the organization as a whole.
No query tool is going to handle this sort of data model well. So all the users that can currently fire up their favorite query tool and run some reports out of your database are now stuck waiting for developers to write their reports and do their extracts for them.
Data quality becomes very hard to enforce. It becomes very hard to check conditions that involve multiple attributes (i.e. if a product's size Medium then the weight must be between 1 and 10 pounds, if a product's height is specified then a width is required as well) so people don't make those checks. They don't write the reports to identify where these sorts of rules are violated. So the data ends up being a bit bucket of data that downstream processes decide they can't use because it isn't sufficiently complete.
You're moving too much of the initial requirements discussion off into the future when understanding the core entities will likely lead to a much better design overall. If you can't agree on a set of attributes that the first version of the product needs to support, you don't really understand what that version is supposed to do. Even if you successfully code a very generic application, that means that it is going to require a lot of time to configure once you've built it (because someone will have to figure out what attributes it supports at that point). And then you'll discover when the application is being configured that you missed a ton of requirements that only became clear when the attributes were defined-- you can't know that width is required if height is specified if you don't know whether they're going to store height or width in the first place.
In the worst case, the response to this problem during configuration is to immediately determine that you need to provide a flexible way to specify business rules and to specify workflows so that the people configuring the application can quickly code their business rules when they add new attributes and so that they can control the flow of the application by grouping attributes together or skipping certain pages (i.e. have a page where make & model are required if the product type is car, skip that page if not). But in order to do that, you're going to end up building an entire development environment. And you're going to push the job of actually coding the application to the folks that are configuring the product. Unless you happen to be really good at building development environments, and unless the people configuring the product are really developers, this doesn't end well.
I mean, if those tables were called Teams, Players, Team_Players we would all agree that this is proper relational design.
No, we wouldn't. Here's why.
You started with this.
Products (id, name)
Attributes (id, name)
Product_Attributes (product_id, attribute_id, value as string)
Let's drop the id numbers, so we can see what's really going on. (Longer column names for clarity.)
Products (product_name)
Attributes (attribute_name)
Product_Attributes (product_name, attribute_name, value as string)
And translating that to teams and players . . .
Teams (team_name)
Players (player_name)
Team_Players (team_name, player_name, value as string)
So for sample data we might have
Team Player Value
--
St. Louis Cardinals Boggs, Mitchell ?
St. Louis Cardinals Carpenter, Chris ?
St. Louis Cardinals Franklin, Ryan ?
St. Louis Cardinals Garcia, Jaime ?
What on earth belongs in place of the question marks? Let's say we want to record number of games played. Now the sample data looks like this.
Team Player Value
--
St. Louis Cardinals Boggs, Mitchell 23
St. Louis Cardinals Carpenter, Chris 15
St. Louis Cardinals Franklin, Ryan 19
St. Louis Cardinals Garcia, Jaime 14
Want to store batting average, too? You can't. Not only can you not store batting average along with games played, you can't tell by looking at the database whether Mitch Boggs played in 23 games, had 23 hits, scored 23 runs, had 23 "at bats", had 23 singles, or struck out 23 times.
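The conventional fix is to give each fact its own typed column, so the meaning lives in the schema rather than in the reader's head (a sketch, not part of the original answer):
CREATE TABLE team_players (
    team_name       VARCHAR(50)  NOT NULL,
    player_name     VARCHAR(50)  NOT NULL,
    games_played    INT          NOT NULL,
    batting_average DECIMAL(4,3) NOT NULL,   -- a second fact is simply a second column
    PRIMARY KEY (team_name, player_name)
);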
The reason why this approach is so bad is that you don't know how many times you have to join to the table to get all the attributes. Plus joining to the same table 20 times tends to create a performance block of massive proportions. I am assuming that Products will be at the heart of your system and thus be a critical place for performance.
Now you say that the product attributes will be drastically different. I disagree. There will be many attributes that are common to a large number of your products: things like price, units, size, color, dimensions, weight. Those should be in the product table as common properties. These are also the ones that the user is most likely to be searching for when picking a product.
Other properties are useful as a description but not for most anything else (they won't be searched on or put into the order details). Put those in a description or notes field.
Finally you are left with the few attributes which might be different. But how different are they? If they are common to a particular type of product (books have these attributes, cameras have those), then a related table for that type of product might work well.
Once you have done your job and figured all this out, then add the flexibility of an EAV table if you still need one. The steps above should cover 98+% of the real requirements.
(Also it's kind of hard to design the order details table if you don't know the attribute fields you need to record for the order - you can't rely on the products table for that)
(oh and I agree wholeheartedly with what @Tom H is saying as well.)

Database design for a survey [closed]

I need to create a survey where answers are stored in a database. I'm just wondering what would be the best way to implement this in the database, specifically the tables required. The survey contains different types of questions. For example: text fields for comments, multiple choice questions, and possibly questions that could contain more than one answer (i.e. check all that apply).
I've come up with two possible solutions:
1. Create a giant table which contains the answers for each survey submission. Each column would correspond to an answer from the survey, i.e. SurveyID, Answer1, Answer2, Answer3. I don't think this is the best way since there are a lot of questions in this survey and it doesn't seem very flexible if the survey is to change.
2. The other thing I thought of was creating a Question table and Answer table. The question table would contain all the questions for the survey. The answer table would contain individual answers from the survey, each row linked to a question.
A simple example:
tblSurvey: SurveyID
tblQuestion: QuestionID, SurveyID, QuestionType, Question
tblAnswer: AnswerID, UserID, QuestionID, Answer
tblUser: UserID, UserName
My problem with this is that there could be tons of answers which would make the Answer table pretty huge. I'm not sure that's so great when it comes to performance.
I'd appreciate any ideas and suggestions.
I think that your model #2 is fine, however you can take a look at the more complex model which stores questions and pre-made answers (offered answers) and allows them to be re-used in different surveys.
- One survey can have many questions; one question can be (re)used in many surveys.
- One (pre-made) answer can be offered for many questions. One question can have many answers offered. A question can have different answers offered in different surveys. An answer can be offered to different questions in different surveys. There is a default "Other" answer, if a person chooses other, her answer is recorded into Answer.OtherText.
- One person can participate in many surveys, one person can answer specific question in a survey only once.
My design is captured in the create script: the latest version is at https://gist.github.com/durrantm/1e618164fd4acf91e372
The script and the MySQL Workbench .mwb file are also available at https://github.com/durrantm/survey
Definitely option #2. Also, I think you might have an oversight in the current schema - you might want another table:
+-----------+
| tblSurvey |
|-----------|
| SurveyId |
+-----------+
+--------------+
| tblQuestion |
|--------------|
| QuestionID |
| SurveyID |
| QuestionType |
| Question |
+--------------+
+--------------+
| tblAnswer |
|--------------|
| AnswerID |
| QuestionID |
| Answer |
+--------------+
+------------------+
| tblUsersAnswer |
|------------------|
| UserAnswerID |
| AnswerID |
| UserID |
| Response |
+------------------+
+-----------+
| tblUser |
|-----------|
| UserID |
| UserName |
+-----------+
Each question is probably going to have a set number of answers which the user can select from; the actual responses are then tracked in another table.
Databases are designed to store a lot of data, and most scale very well. There is no real need to use a lesser normal form simply to save space anymore.
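As an illustration of the kind of reporting this shape supports, a count of responses per offered answer (names taken from the table sketch above):
-- How many users picked each offered answer, per question.
SELECT q.Question,
       a.Answer,
       COUNT(ua.UserAnswerID) AS responses
FROM   tblQuestion q
JOIN   tblAnswer a          ON a.QuestionID = q.QuestionID
LEFT JOIN tblUsersAnswer ua ON ua.AnswerID = a.AnswerID
GROUP  BY q.Question, a.Answer;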
As a general rule, modifying schema based on something that a user could change (such as adding a question to a survey) should be considered fairly smelly. There are cases where it can be appropriate, particularly when dealing with large amounts of data, but know what you're getting into before you dive in. Having just a "responses" table for each survey means that adding or removing questions is potentially very costly, and it's very difficult to do analytics in a question-agnostic way.
I think your second approach is best, but if you're certain you're going to have a lot of scale concerns, one thing that has worked for me in the past is a hybrid approach:
Create detailed response tables to store per-question responses as you've described in 2. This data would generally not be directly queried from your application, but would be used for generating summary data for reporting tables. You'd probably also want to implement some form of archiving or expunging for this data.
Also create the responses table from 1 if necessary. This can be used whenever users want to see a simple table for results.
For any analytics that need to be done for reporting purposes, schedule jobs to create additional summary data based on the data from 1.
This is absolutely a lot more work to implement, so I really wouldn't advise this unless you know for certain that this table is going to run into massive scale concerns.
The second approach is best.
If you want to normalize it further you could create a table for question types
The simple things to do are:
Place the database and log on their own disk, not all on C as default
Create the database as large as needed so you do not have pauses while the database grows
We have had log tables in SQL Server with tens of millions of rows.
No 2 looks fine.
For a table with only 4 columns it shouldn't be a problem, even with a good few million rows. Of course this can depend on what database you are using. If it's something like SQL Server then it would be no problem.
You'd probably want to create an index on the QuestionID field, on the tblAnswer table.
Of course, you need to specify what Database you are using as well as estimated volumes.
You may choose to store the whole form as a JSON string.
Not sure about your requirement, but this approach would work in some circumstances.
Looks pretty complete for a simple survey. Don't forget to add a table for 'open values', where a customer can provide his opinion via a textbox. Link that table with a foreign key to your answer and place indexes on all your relational columns for performance.
Number 2 is correct. Use the correct design until and unless you detect a performance problem. Most RDBMS will not have a problem with a narrow but very long table.
Having a large Answer table, in and of itself, is not a problem. As long as the indexes and constraints are well defined you should be fine. Your second schema looks good to me.
Given the proper index your second solution is normalized and good for a traditional relational database system.
I don't know how huge is huge, but it should hold a couple million answers without problem.