Let's say you have a large number of fields, in various tables, that hold integer codes which must be cross-referenced against another table to get their textual representation - i.e. essentially an enumeration. Each of these code columns - which appear in a number of disparate tables - would then have a foreign key to wherever the enumeration values are stored.
There are two main options:
Store all of the enumerations in one big table which defines all enumerations, and then has some column which specifies the enumeration type.
Store each enumeration definition in an isolated, separate table.
Which is the better way to go, especially with regards to performance? The database in question receives a large number of INSERTs and DELETEs and relatively fewer reads.
It depends. Separate tables have a big advantage: you can define foreign key relationships that enforce the type of the column in the referencing tables.
A second advantage is that there might be different data columns for different types. For instance, a countries table might have ISO2 and ISO3 codes and currency. A cities table might have a timezone.
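As a rough sketch of that design (all table and column names here are hypothetical, and the types are only indicative), each reference table carries its own columns, and the foreign keys guarantee a code column can only hold values of the intended kind:

    CREATE TABLE countries (
        country_id integer PRIMARY KEY,
        name       varchar(100) NOT NULL,
        iso2       char(2) NOT NULL,
        iso3       char(3) NOT NULL,
        currency   varchar(3)
    );

    CREATE TABLE cities (
        city_id  integer PRIMARY KEY,
        name     varchar(100) NOT NULL,
        timezone varchar(50)
    );

    -- The referencing table cannot accidentally store a city code in the country column:
    CREATE TABLE customers (
        customer_id integer PRIMARY KEY,
        country_id  integer REFERENCES countries (country_id),
        city_id     integer REFERENCES cities (city_id)
    );

With a single shared enumeration table, both code columns would reference the same table, and nothing short of extra constraints would stop a row of the wrong enumeration type from being referenced.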
One occasion where a single table can be handy is for internationalization. For translating values into separate languages, I find it convenient to have them all in one place.
There is also a space advantage for a single table. Tables in SQL are stored on pages -- and many reference tables will be smaller than one page. That leaves a lot of unused space. Storing them in one table "compacts" them, eliminating that space. However, that is rarely a real consideration in the modern world.
In general, though, you would use separate tables unless you had a compelling reason to use a single table.
Related
The data I want to store has these characteristics:
There are a finite number of fields (I don't expect to add new fields);
There are some columns that are common to all sets of data (a category field, for instance);
There are some columns that are specific to individual sets of data (each category needs its own fields);
Here's how it would look in a regular table:
I'm having trouble figuring out which would be the better way to store this data in a database for this situation.
Below are the ideas I already had:
Do exactly as in the table above (I would have many NULL values);
Divide the categories into tables (I would use joins when needed);
Use JSON type for storing the values (no NULL values and having it all in same table).
So my questions are:
Is there one of these solutions (or one that I have not thought about) that is better for this case?
Are there other factors, other than the ones presented here, that I should consider to make this decision?
Unless you have very many columns (~ 100), it is usually better to use normal columns. NULL values take almost no storage space in PostgreSQL (just a bit in the row's NULL bitmap).
On the other hand, if you have queries that can use any of these columns in the WHERE condition, and you compare with =, a single GIN index on a jsonb column might be better than having many B-tree indexes, because maintaining all of those B-tree indexes would cost more.
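As a minimal sketch of that trade-off (the table and key names are made up), one GIN index on the jsonb column can serve equality searches on any key via containment:

    CREATE TABLE items (
        item_id    bigint PRIMARY KEY,
        category   text NOT NULL,
        attributes jsonb
    );

    -- One GIN index instead of one B-tree index per attribute column:
    CREATE INDEX items_attributes_gin ON items USING gin (attributes);

    -- Equality searches are expressed as containment and can use the GIN index:
    SELECT item_id
    FROM items
    WHERE attributes @> '{"color": "red"}';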
The definitive answer depends on the SQL statements that you plan to run on that table.
You have laid out the three options pretty well. Things to consider are:
Performance
Data size
Ease of maintenance
Flexibility
Security
Note that you don't even allude to security considerations. But security at the table level is usually a tad simpler than at the column level and might be important for regulated data such as PII (personally identifiable information).
The primary strength of the JSON solution is flexibility. It is easy to add new columns. But you don't need that. JSON has a cost in data size and data type flexibility (notably JSON doesn't support date/times explicitly).
A multiple table solution requires duplicating the primary key but may result in much less storage overall if the columns really are sparse. The "may" may also depend on the data type. A NULL string for instance occupies less space than a NULL float in a table record.
The joins on multiple tables will be 1-1 on primary keys. These should be pretty fast.
What would I do? Unless the answer is obvious, I would dump the data into a single table with a bunch of columns. If that table starts to get unwieldy, then I would think about splitting it into separate tables -- but still have one table for the common columns. The details of one or multiple tables can be hidden behind a view.
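A small sketch of that last idea, with invented names: the common columns live in one table, one category's specific columns live in another, and a view hides the split:

    CREATE TABLE products (
        product_id bigint PRIMARY KEY,
        category   text NOT NULL,
        name       text NOT NULL
    );

    CREATE TABLE product_book_details (
        product_id bigint PRIMARY KEY REFERENCES products (product_id),
        author     text,
        page_count integer
    );

    -- Callers see the same wide shape a single table would have given them:
    CREATE VIEW products_wide AS
    SELECT p.product_id, p.category, p.name, b.author, b.page_count
    FROM products p
    LEFT JOIN product_book_details b ON b.product_id = p.product_id;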
It depends on how much data you want to store, but as long as it is finite it shouldn't make a big difference whether it contains a lot of NULLs or not.
I need to save about 500 values in a structured database (SQL, PostgreSQL) or whatever. What is the best way to store the data: 500 fields, or a single field with comma-separated values (CSV)?
What would be the pros and cons?
What would be easier to maintain?
Which would be better for retrieving data?
A comma-separated value is just about never the right way to store values.
The traditional SQL method would be a junction or association table, with one row per field and per entity. This multiplies the number of rows, but that is okay, because databases can handle big tables. This has several advantages, though:
Foreign key relationships can be properly defined.
The correct type can be implemented for the object.
Check constraints are more naturally written.
Indexes can be built on the column, improving performance.
Queries do not need to depend on string functions (which might be slow).
Postgres also supports two other methods for such data, arrays and JSON-encoding. Under some circumstances one or the other might be appropriate as well. A comma-separated string would almost never be the right choice.
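A minimal sketch of the junction-table approach described above, with hypothetical names (PostgreSQL syntax, but the idea is portable):

    CREATE TABLE things (
        thing_id bigint PRIMARY KEY
    );

    CREATE TABLE allowed_values (
        value_id integer PRIMARY KEY,
        label    varchar(100) NOT NULL
    );

    -- One row per (thing, value) pair instead of one CSV string per thing:
    CREATE TABLE thing_values (
        thing_id bigint  NOT NULL REFERENCES things (thing_id),
        value_id integer NOT NULL REFERENCES allowed_values (value_id),
        PRIMARY KEY (thing_id, value_id)
    );

    -- A secondary index supports lookups from the value side:
    CREATE INDEX thing_values_value_idx ON thing_values (value_id);

    -- Typical queries need no string parsing:
    SELECT value_id FROM thing_values WHERE thing_id = 42;
    SELECT thing_id FROM thing_values WHERE value_id = 7;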
This is for SQL Server.
I have a table that will contain a lot of rows and that table will be queried multiple times so I need to make sure my design is optimized.
Just for the question, let's say that the table contains 2 columns: Name and Type.
Name is a varchar and it will be unique.
Type can be one of 5 different values (type1... type5). (It could possibly contain more values in the future.)
Should I make type a varchar (and create an index), or would it be better to create a table of types that will contain 5 rows with only a column for the name, and make type a foreign key?
Is there a performance difference between both approaches? The queries will not always have the same condition. Sometimes they will filter on the name, the type, or both, with different values.
EDIT: Consider that in my application, if type were a table, the IDs would be cached so I wouldn't have to query the Type table every time.
Strictly speaking, you'll probably get better query performance if you keep all the data in one table. However doing this is known as "denormalization" and comes with a number of pretty significant drawbacks.
If your table has "a lot of rows", storing an extra varchar field for every row, as opposed to, say, a smallint or even a tinyint, can add a non-trivial amount of size to your table.
If any of that data needs to change, you'll have to perform lots of updates against that table. This means transaction log growth and potential blocking on the table during modification locks. If you store it as a separate table with 5-ish rows and you need to update the data associated with a type, you just update the one row you need.
Denormalizing data means that the definition of that data is no longer stored in one place, but in multiple places (actually it's stored across every single row that contains those values).
For all the reasons listed above, managing that data (inserts, updates, deletes, and simply defining the data) can quickly become far more overhead than simply normalizing the data correctly in the first place, and for little to no benefit beyond what can be done with proper indexing.
If you find the need to return both the "big" table and some other information from the type table and you're worried about join performance, truthfully, I wouldn't be. That's a generalization, but if your big table has, say, 500M rows in it, I can't see many use cases where you'd want all those rows returned; you're probably going to get a subset. In which case, that join might be more manageable. Provided you index type, the join should be pretty snappy.
If you do go the route of denormalizing your data, I'd recommend still having the lookup table as the "master definition" of what a "type" is, so it's not a conglomeration of millions of rows of data.
If you STILL want to denormalize the data WITHOUT a lookup table, at least put a CHECK constraint on the column to limit which values are allowable or not.
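A rough sketch of both variants in SQL Server syntax (all names are invented for illustration):

    -- Normalized: a small lookup table plus a foreign key on the big table.
    CREATE TABLE dbo.ItemType (
        TypeId   tinyint     NOT NULL PRIMARY KEY,
        TypeName varchar(50) NOT NULL UNIQUE
    );

    CREATE TABLE dbo.Item (
        Name   varchar(100) NOT NULL PRIMARY KEY,
        TypeId tinyint      NOT NULL
            CONSTRAINT FK_Item_ItemType REFERENCES dbo.ItemType (TypeId)
    );

    -- Index the type column so filters and joins on it stay cheap.
    CREATE INDEX IX_Item_TypeId ON dbo.Item (TypeId);

    -- Denormalized alternative: if Type stayed a varchar column on dbo.Item,
    -- at least constrain its allowed values:
    -- ALTER TABLE dbo.Item ADD CONSTRAINT CK_Item_Type
    --     CHECK (Type IN ('type1', 'type2', 'type3', 'type4', 'type5'));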
How much is "a lot of rows"?
If it is hundreds of thousands or more, then a Columnstore Index may be a good fit.
It depends on your needs, but usually you would want the type column to be a numeric type (in your case, tinyint).
I am building a MySQL-driven website that will analyze customer surveys distributed by a variety of clients. Generally, these surveys are structured fairly consistently, and most of our clients' data can be reduced to the same normalized database structure.
However, every client inevitably ends up including highly specific demographic questions for their customers that are irrelevant to every other one of our clients. For instance, although all of our clients will ask about customer satisfaction, only our auto clients will ask whether the customers know how to drive manual transmissions.
Up to now, I have been adding columns to a respondents table for all general demographic information, with a lot of default NULLs mixed in. However, as we add more clients, it's clear that this will end up with a massive number of columns which are almost always NULL.
Is there a way to do this consistently? I would rather keep as much of the standardized data as possible in the respondents table since our import script is already written for that table. One thought of mine is to build a respondent_supplemental_demographic_info table that has the columns response_id, demographic_field, demographic_value (so the manual transmissions example might become: 'ID999', 'can_drive_manual_indicator', true). This could hold an infinite number of demographic_fields, but would be incredibly painful to work with from both a processing and programming perspective. Any ideas?
Your solution to this problem is called entity-attribute-value (EAV). This "unpivots" columns so they are rows in a table and then you tie them together into a single view.
EAV structures are a bit tricky to learn how to deal with. They require many more joins or aggregations to get a single view out. Also, the types of the values become challenging. Generally there is one value column, so everything is stored as a string. You can, of course, have a type column with different types.
They also take up more space, because the entity id is repeated on each row (I think that is the response_id in your case).
Although not ideal in all situations, they are appropriate in a situation such as you describe. You are adding attributes indefinitely. You would quickly run up against the maximum number of columns allowed in a single table (typically between 1,000 and 4,000 depending on the database). You can also keep track of each value in each column separately -- if they are added at different times, for instance, you can keep a time stamp on when they go in.
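A minimal sketch of such an EAV table in MySQL syntax, using the column names from the question (everything else is invented):

    CREATE TABLE respondent_supplemental_demographic_info (
        response_id       VARCHAR(32)  NOT NULL,
        demographic_field VARCHAR(64)  NOT NULL,
        demographic_value VARCHAR(255) NULL,   -- one string column, so all types are stored as text
        PRIMARY KEY (response_id, demographic_field),
        KEY idx_field_value (demographic_field, demographic_value)
    );

    -- The manual-transmission example from the question:
    INSERT INTO respondent_supplemental_demographic_info
    VALUES ('ID999', 'can_drive_manual_indicator', 'true');

    -- Pivoting back into a wide shape for reporting:
    SELECT response_id,
           MAX(CASE WHEN demographic_field = 'can_drive_manual_indicator'
                    THEN demographic_value END) AS can_drive_manual_indicator
    FROM respondent_supplemental_demographic_info
    GROUP BY response_id;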
Another alternative is to maintain a separate table for each client, and then use some other process to combine the data into a common data structure.
Do not fall for a table with key-value pairs (field id, field value) as that is inefficient.
In your case I would create a table per customer, and metadata tables (in a separate DB) describing those tables. With the metadata you can generate SQL and so on. That is definitely superior to having many NULL columns, or to copied, adapted scripts. It requires a bit of programming, where an application uses the metadata to generate SQL, collect the data (without customer-specific semantic knowledge) and generate reports.
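As a sketch of what those metadata tables might look like (every name here is made up), the application would read these rows and generate the per-client DDL and queries from them:

    CREATE TABLE client_tables (
        client_id  INT         NOT NULL PRIMARY KEY,
        table_name VARCHAR(64) NOT NULL
    );

    CREATE TABLE client_columns (
        client_id   INT          NOT NULL,
        column_name VARCHAR(64)  NOT NULL,
        column_type VARCHAR(32)  NOT NULL,   -- e.g. 'TINYINT(1)' or 'VARCHAR(255)'
        description VARCHAR(255) NULL,
        PRIMARY KEY (client_id, column_name),
        FOREIGN KEY (client_id) REFERENCES client_tables (client_id)
    );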
Would you introduce table inheritance for the TemplateTeststep and TestplanTeststep tables?
The TestplanTeststep is (and always will be) a read-only copy of the TemplateTeststep (red) PLUS some editable fields (purple).
The TemplateTeststep will have individual fields which will never appear in the TestplanTeststep table.
The TestplanTeststep will have individual fields which will never appear in the TemplateTeststep table.
Moreover the TestplanTeststep table has some fields from the TemplateTeststep table which are read-only. (nobody should change them because they need to be safe for reports/investigation etc...)
The TemplateTeststep still has the fields ModifiedBy and ModifiedAt, for historical tracking of who changed what and when; these do not appear on a TestplanTeststep.
This really depends on the context of use.
Is speed a big concern? If there are a large number of reads of these data, splitting the values for a single logical entity across two tables will increase latency on reads. In a low-throughput system this will be negligible, but if you've got hundreds of millions of rows then this would add up.
Is constraining the size of the dataset more important than read-access performance? It's no secret that denormalising data often makes them quicker to read (they're only stored in one contiguous space, instead of on different sectors of the disk) and reduces the amount of locking required, but will add to the amount of space to store the entire dataset.
Are you using an ORM solution, or will you be joining across the tables 'by hand'? The former may make splitting the data trivial, and the latter may be more error-prone.
I don't think there's a right or wrong answer here; I'd be inclined to use whichever approach is used in the rest of the codebase/the rest of your organisation.
For what it's worth, I'd probably use multi-table inheritance, as I tend to use Java's Hibernate for ORM.
You should have a foreign key in the TestplanTeststep table.
The primary/foreign key relationship between TemplateTeststep and TestplanTeststep should follow second normal form.
The design as described is denormalized, which makes sense if somebody needs reports. If not, please remove the duplicate columns to avoid redundancy in the database.
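A rough sketch of that relationship (column names beyond the ones mentioned in the question are invented, and the types are only indicative):

    CREATE TABLE TemplateTeststep (
        TemplateTeststepId int PRIMARY KEY,
        Name               varchar(200) NOT NULL,
        ModifiedBy         varchar(100),           -- who changed the template ...
        ModifiedAt         datetime                -- ... and when
    );

    CREATE TABLE TestplanTeststep (
        TestplanTeststepId int PRIMARY KEY,
        TemplateTeststepId int NOT NULL
            REFERENCES TemplateTeststep (TemplateTeststepId),
        Name               varchar(200) NOT NULL,  -- frozen copy, kept for reports
        Result             varchar(50)             -- editable, plan-specific field
    );

The foreign key ties each plan step back to the template it was copied from, while the copied columns stay untouched even if the template later changes.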