I am trying to set up the right indices on a table I have just created which contains 4 "polymorphic associations" and a PK ID. The 4 associations keep me from having to quadruple the number of tables for the addition I am making to the database, and they should not be modified in this discussion. My question is: how should I set up the indices so that I get optimal performance (speed; space not so much)? None of the 4 keys is a candidate for the PK on its own; more precisely, each of them could be, but only one is populated at a time. I have added a PK "ID" because I had read that adding a PK, even if not used, is better than not adding one. However, I am questioning this assertion more and more.
More about the table: the rule that only 1 of the 4 FKs may be used at a time is enforced by an Access form, and no non-developer has direct access to the tables. I expect there will be no more than a couple hundred entries every month for as long as this database is in use. Assuming we use it 10 more years and average 500 entries a month (which is probably a bit more than what it will actually be), we should have no more than 60k entries in 10 years. Basically, this is not a hugely populated table.
The db and forms run on Access 2003 (yeah I know...).
I hope that is enough information for you to help me. In the image below you can see the table structure as it is right now. The 4 FKs are NoDemandeAmendementTransit, NoDemandeAmendementRubrique, NoAmendementTransit, NoAmendementRubrique.
Many thanks.
A more practical design is to create a single supertype table for all of the four subtypes you are referencing. Then reference the supertype table with a single foreign key instead of having four separate FKs. It's a design pattern you can find in most good books on database design and it is simpler and more efficient than having multiple "optional" foreign keys. It will also provide you with a more useful primary key.
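As an illustration only, here is a rough sketch of that supertype/subtype pattern in generic SQL. Access 2003 DDL differs, and every table and column name below is hypothetical, loosely based on the four FK names in the question:

```sql
-- Supertype table: one row per amendment/demande, whatever its concrete subtype.
CREATE TABLE Amendement (
    NoAmendement   INTEGER PRIMARY KEY,
    TypeAmendement VARCHAR(30) NOT NULL   -- e.g. 'DemandeTransit', 'Rubrique', ...
);

-- One of the four subtype tables; the other three follow the same pattern.
CREATE TABLE DemandeAmendementTransit (
    NoAmendement INTEGER PRIMARY KEY,     -- same value as its supertype row
    FOREIGN KEY (NoAmendement) REFERENCES Amendement (NoAmendement)
);

-- The new table then needs a single FK column (and a single index on it)
-- instead of four mostly-NULL FK columns.
CREATE TABLE SuiviAmendement (
    ID           INTEGER PRIMARY KEY,
    NoAmendement INTEGER NOT NULL,
    FOREIGN KEY (NoAmendement) REFERENCES Amendement (NoAmendement)
);
```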
Related
I've really been scratching my head over this and don't know how to ask the question well enough to find an answer on Google or StackOverflow etc.
There is a very old system used at work - I don't have access to the server side so can't view its tables, but I do know it's an SQL database, and I have done enough experimenting with the API to see what adding to each table does, and I'm questioning how it allocates primary keys.
It has a lot of tables, each with a primary key as expected, but the primary key on any/all of its tables seems to be allocated so that there is absolutely no duplication of primary keys anywhere in the system.
e.g.
add row to table 1 get pk = 1
add row to table 2 get pk = 2
add row to table 1 again, get pk = 3
add row to table 10 and get pk = 4
Is this method some sort of old database technique?
What could be the purpose of doing this?
There are more funny nuances that I won't go into detail on (e.g. a certain range of PKs being allocated to certain tables), but I just wanted to see if anyone recognises the main principle here and whether there's a point to it, or if it's just bad/weird design.
A primary key only needs to be unique within a single table. There is no such thing as a primary key across multiple tables.
This might be useful under some circumstances. For instance, this would allow all entities to be represented in a single table. This can be handy for "generic" information, such as adding comments to the entities.
More prosaically, I have seen this in older Oracle databases. Oracle did not have any automated mechanism for generating ids, so this required using a sequence. As a matter of convenience, laziness, or design, multiple tables might use the same sequence -- resulting in the behavior that you see.
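As a minimal, Oracle-style sketch of that shared-sequence arrangement (all names invented), every table draws its id from the same counter, which produces exactly the pattern described in the question:

```sql
-- One sequence shared by every table, so ids never repeat across tables.
CREATE SEQUENCE global_id_seq START WITH 1 INCREMENT BY 1;

CREATE TABLE table1 (id NUMBER PRIMARY KEY, payload VARCHAR2(100));
CREATE TABLE table2 (id NUMBER PRIMARY KEY, payload VARCHAR2(100));

INSERT INTO table1 (id, payload) VALUES (global_id_seq.NEXTVAL, 'row in table 1'); -- id = 1
INSERT INTO table2 (id, payload) VALUES (global_id_seq.NEXTVAL, 'row in table 2'); -- id = 2
INSERT INTO table1 (id, payload) VALUES (global_id_seq.NEXTVAL, 'row in table 1'); -- id = 3
```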
I have the following partial database design:
All the tables are dependent on each other: the table bvd_docflow_subdocuments is dependent on the table bvd_docflow_subsets,
and the table bvd_docflow_documents is dependent on bvd_docflow_subdocuments. So I thought I could be smart and use foreign keys on every table (with ON DELETE CASCADE). However, the FKs keep getting carried down the further I go into the tables.
The problem is that there is no point in the table bvd_docflow_documents having a reference to the `docflow_documentset_id` PK/FK. Is there a way (and maybe my design is crappy) for a table to have an FK relationship only with the table directly above it, and not with all the tables above it?
Edit:
More explanation:
In the bvd_docflow_subsets table, information is stored about the objects used to create documents. There is a relation between that table and the bvd_docflow_subdocuments table, which stores master data about all the documents for a subset; docflow_subset_id is in both tables and is the link between those two tables.
Going further down we also have the table bvd_docflow_documents; this table contains the actual document data. The link between bvd_docflow_documents and bvd_docflow_subdocuments is bvd_docflow_subdocument_id.
On every table I have a foreign key defined, so when data is removed from a table all the data linked to it is also removed.
However, when we look at the bvd_docflow_documents table, it has all the foreign keys from the other tables (docflow_subset_id and docflow_documentset_id), and there is the problem. The only foreign key needed for the bvd_docflow_documents table is docflow_subdocument_id and no other.
Edit 2
I have changed my design further and removed information that I don't need after initial import of the data.
See the following link for the (total) database design:
https://sqldbm.com/Project/SQLServer/Share/_AUedvNutCEV2DGLJleUWA
The tables subsets, subdocuments and documents have a many-to-many relationship, so I thought a table in between those 3, documents_subdocuments, is the way to go, where I define all the different keys for those tables.
I am not used to designing the database first and then building it. But for everything there is a first time, and I am trying to make a database that follows standards and uses the power of SQL Server the correct way.
I'll address the bottom-most table and ignore the rest for the most part.
But first some comments. Your schema is simply a model of a system. To provide feedback, one must understand this "system" and how it actually works to evaluate your model. In addition, it is important to understand your entities and your reasons for choosing them and modelling them in the specified manner. Without that understanding, all of this is guessing based on experience.
And another comment. Slapping an identity column into every table is just lazy modelling IMO. Others will disagree, but you need to also enforce all natural keys. Do you have natural keys? It is rare not to have any. Enforce those that do exist.
And one last comment. Stop the ridiculous pattern of prepending the column names with the table names. And you should really think long and hard about using very long table names. Given what you have, I sense you need a schema for your docflow stuff.
For the documents table, your current PK makes no sense. Again, you've slapped an identity column into the table. By itself, this column is a key for the table. The inclusion of any other columns does not make the key any more "unique" - that inclusion is logical nonsense. Following your pattern, you would designate the identity column as the primary key. But ...
According to your image, the documents table is related to one and only one subdocument. You added a foreign key to that table - which matches the image. You also added additional columns and foreign keys to the "higher" tables. So now a document "points" to a specific subdocument. It also points to a specific subset - which may have no relationship to the subdocument. The same thought applies to the other FK. I have a doubt that this is logically correct. So why do these columns (and related FKs) exist? Perhaps this is the result of premature optimization - which everyone knows is the root of all evil coding. Again, it is impossible to know if this is "right" or even "useful" for your model.
To answer your question "... is there a way", the answer is obviously yes. You remove the columns of which you complain. You added them - Why? Is this perhaps a problem with the tool you are using?
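As a sketch only, using the table names from the question and guessing at the column definitions, the documents table would then carry nothing but the FK to its direct parent, and deletes would still cascade down the whole chain subsets -> subdocuments -> documents:

```sql
-- SQL Server sketch; column names follow the question, everything else is assumed.
CREATE TABLE bvd_docflow_documents (
    docflow_document_id    INT IDENTITY(1,1) PRIMARY KEY,
    docflow_subdocument_id INT NOT NULL,
    -- document-specific columns ...
    FOREIGN KEY (docflow_subdocument_id)
        REFERENCES bvd_docflow_subdocuments (docflow_subdocument_id)
        ON DELETE CASCADE
);
```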
And some last comments. There is nothing special about "varchar(50)". Perhaps this is a placeholder that will be updated later. It may also be another sign of laziness. And generally speaking, columns with names like "type" and "code" tend to be foreign keys to "lookup" tables - because people like to add, modify, or remove these sorts of categorization values over time. I'm also concerned about the column name overlap among the tables. "Location" exists in multiple tables, as do action_code and action_id. And a column named "id" (action_id) suggests a lookup to another table - is it? Should it be? Is there a relationship between action_id and action_code? From a distance it is impossible to answer any of these questions.
But designing a database is more art than science. Sometimes you just need to create something, populate it with some sample data, and then determine if it works for your needs. Everyone will get something wrong in the first try. That is expected; that is how you learn. The most difficult part is actually completing your first attempt.
I have two tables in a SQLite DBMS:
Shop(PK, A1, A2, A3) where PK is the primary key and `A1..An` are nullable attributes;
Product(PK, FK) where FK references Shop(PK) and the pair (PK, FK) is the primary key.
Shop typically has 5 or 6 entries in a database instance.
The problem is that when a new product is inserted, it is very often present in ~all the shops, so the user effectively has to insert 5 or 6 rows at a time (where PK is repeated in each row; PK consists of long attributes in the real case).
I wonder if there's a way to make life easier for the user by associating a new product with all the shops by default, either by refactoring the schema (e.g. maybe using flags?), by triggering all the insertions when a new product appears (is that tricky?), or both. Note that one product must be present in at least one shop. I want the solution to be as clear as possible and easy to maintain.
It's an N-to-N relation between your two tables. In my opinion the FK shouldn't be in the product table; you should have another table, "PRODUCT_SHOP", where you'd have PKSHOP and PKPRODUCT.
To answer your question, there are a lot of tricky ways to do this, but none of them is good, since you'd have to use code to interpret your tricks.
The best way is to respect the standards and insert your product as many times as needed, with the corresponding shop, into the PRODUCT_SHOP table (unless you have a lot, like really a lot, of shops; the table PRODUCT_SHOP is basically PRODUCTS x SHOPS).
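A minimal SQLite sketch of that junction table (table and column names are illustrative, and the INSERT assumes the new product's key is already known):

```sql
CREATE TABLE PRODUCT_SHOP (
    product_pk INTEGER NOT NULL REFERENCES Product (pk),
    shop_pk    INTEGER NOT NULL REFERENCES Shop (pk),
    PRIMARY KEY (product_pk, shop_pk)
);

-- Adding a new product to every existing shop is then a single insert-select:
INSERT INTO PRODUCT_SHOP (product_pk, shop_pk)
SELECT 42, pk FROM Shop;   -- 42 = the new product's key (illustrative)
```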
It really depends on the size of your database; if it's not huge, stick with basic relational data.
Hope I helped.
I have one database with users and one with questions. What I want is to ensure that every user can answer every question only once.
I thought of a table that has all the question IDs as columns and all the user IDs as rows, but this gets very big (and slow, I guess) as the question and user counts grow.
Is there another way to do this with better performance?
You probably want a setup like this.
Questions table (QuestionID Primary Key, QuestionText)
Users table (UserID Primary Key, Username)
Answers table (QuestionID, UserID, Date) -- plus AnswerText/Score/Etc as needed.
In the Answers table the first two columns together form a compound primary key (QuestionID, UserID), and they are also foreign keys to Questions(QuestionID) and Users(UserID) respectively.
The compound primary key ensures that each combination of QuestionID/UserID is only allowed once. If you want to allow users to answer the same question multiple times you could extend the compound primary key to include the date (it would then be a composite key).
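A minimal sketch of those three tables in generic SQL, with the compound primary key and the two foreign keys described above (column types are assumptions):

```sql
CREATE TABLE Questions (
    QuestionID   INTEGER PRIMARY KEY,
    QuestionText VARCHAR(500) NOT NULL
);

CREATE TABLE Users (
    UserID   INTEGER PRIMARY KEY,
    Username VARCHAR(100) NOT NULL
);

CREATE TABLE Answers (
    QuestionID INTEGER NOT NULL,
    UserID     INTEGER NOT NULL,
    AnswerDate DATE    NOT NULL,
    AnswerText VARCHAR(1000),
    PRIMARY KEY (QuestionID, UserID),   -- one answer per user per question
    FOREIGN KEY (QuestionID) REFERENCES Questions (QuestionID),
    FOREIGN KEY (UserID)     REFERENCES Users (UserID)
);
```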
This is a normalized design and should be efficient enough. It's common to use a surrogate primary key (like AnswerID) instead of the compound key and use a unique constraint instead to ensure uniqueness - the use of a surrogate key is often motivated by ease of use, but it's by no means necessary.
Diagram
Below is a diagram of my own table design, quite similar to the correct Answer by jpw. I made up a few column names to give more of a flavor of the nature of the table. I used Postgres data types.
As the last paragraph of that Answer discusses, I would go with a simple single primary key on the response_ ("Answers") table rather than a compound primary key combining fkey_user_ & fkey_question_.
Unrealistic
This diagram fits the problem description in the Question. However this design is not practicable. This scenario is for a single set of questions to be put to the user, only a single survey or quiz ever. In real life in a situation like a school, opinion survey, or focus group, I expect we would put more than one questionnaire to a user. But I will ignore that to directly address the Question as worded.
Also in some scenarios we might have versions of a question, as it is tweaked and revised over time when given on successive quizzes/questionnaires.
Performance
Your Question correctly identifies this problem as a Many-To-Many relationship between a user and a question, where each user can answer many questions and each question may be answered by many users. In relational database design there is only one proper way to represent a many-to-many. That way is to add a third child table, sometimes called a "bridge table", with a foreign key linking to each of the two parent tables.
In a diagram where you draw parent tables vertically higher up the page than child tables, I personally see such a many-to-many diagram as a butterfly or bird pattern where the child bridge table is the body/thorax and the two parents are wings.
Performance is irrelevant in a sense, as this is the only correct design. Fortunately, modern relational databases are optimized for such situations. You should see good performance for many millions of records, especially if you use a sequential number as your primary key values. I tend to use the UUID data type instead; their arbitrary bit values may have less efficient index performance when table size reaches the millions (but I don't know the details).
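As a sketch only, assuming parent tables user_ and question_ with uuid primary keys named pkey_ (the other column names are invented), the response_ table could look like this in Postgres; gen_random_uuid() requires Postgres 13+ or the pgcrypto extension:

```sql
CREATE TABLE user_ (
    pkey_ uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    name_ text NOT NULL
);

CREATE TABLE question_ (
    pkey_ uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    text_ text NOT NULL
);

CREATE TABLE response_ (
    pkey_          uuid PRIMARY KEY DEFAULT gen_random_uuid(),  -- simple surrogate PK
    fkey_user_     uuid NOT NULL REFERENCES user_ (pkey_),
    fkey_question_ uuid NOT NULL REFERENCES question_ (pkey_),
    answered_when_ timestamptz NOT NULL DEFAULT now(),
    UNIQUE (fkey_user_, fkey_question_)   -- still only one answer per user per question
);
```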
Good Morning,
In the design of a database, I have a table (let's call it TabA) that could have relationships with four other tables. That is, this table can be connected with the first of the four, with the second, with the third, or with the fourth, but it could also have no links with them at all; or it could have one link (with any of the tables), or two links (with any two of them), and so on.
To the table TabA I added four fields that refer to the four tables, which can be "null" when there is no connection.
Is this an optimal design for this kind of situation (say, the four fields in TabA), or can a better design be made?
Many thanks for your reply.
dave
In answer to the question and clarification in your comment, the answer is that your design can't be improved in terms of the number of foreign key columns. Having a specific foreign key column for every potential foreign key relationship is a best practice design.
However, the schema design itself seems questionable. I don't have enough information to tell whether the "Distributori_[N]_Livello" tables form a truly hierarchical structure or not. If they do, it is often possible to use a single self-referential table for hierarchical structures rather than the set of N tables the diagram you linked seems to use. If you are able to refactor your design in such a way, it might be possible to reduce the number of foreign key columns required.
Whether this is possible or not is not for me to say given the data provided.
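Purely as an illustration of that self-referential approach (all names hypothetical, since I have not seen the actual tables):

```sql
-- One table holds every level of the hierarchy; each row points to its parent
-- level, with NULL at the top level.
CREATE TABLE Distributori (
    DistributoreID INTEGER PRIMARY KEY,
    ParentID       INTEGER NULL,          -- NULL for the top level
    Nome           VARCHAR(100) NOT NULL,
    FOREIGN KEY (ParentID) REFERENCES Distributori (DistributoreID)
);

-- TabA could then reference Distributori (with one FK, or a junction table if it
-- needs several links at once) instead of four level-specific FK columns.
```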