I am planning to create one huge table, for personal experimentation, that stores every word that could possibly exist (whether it appears in an official dictionary, in slang, or anywhere else).
Does it make sense to use the word itself as a primary key?
It is 100% certain that words MUST be unique; moreover, they will not change.
The end goal is also to use this PK as an FK in related tables that hold more information about these words.
I am not too familiar with table scaling, so I wonder if I can get into trouble:
Performance-wise
If the table becomes too large and has to be partitioned (?)
If I want to move the database to SQLite to use as an embedded data store
Tagging this question with postgres (my current DB), but I may migrate to SQLite.
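Concretely, the design I have in mind looks something like this (just a sketch; table and column names are placeholders):

CREATE TABLE words (
    word text PRIMARY KEY  -- the word itself as the key; unique and immutable by assumption
);

CREATE TABLE word_details (
    word text NOT NULL REFERENCES words (word),  -- the natural key reused as the FK
    note text
);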
I would be surprised if there were enough words that you needed to partition the table. Of course if your "words" are really genetic sequences or something, I might be off there.
In any case, one of the primary purposes of a primary key is to support foreign key relationships. So, if there is any possibility that another table might refer to this table, then you want to take that into account.
Integer foreign keys are generally preferable, because they are a fixed length -- and that is a little more efficient for indexes. In addition, four-byte integers are probably smaller than the average word length, so they save on storage of the foreign key as well.
That would be balanced against an additional 4 bytes in the words table itself. On balance, I usually add synthetic primary keys.
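As a sketch of what that looks like (Postgres 10+ identity syntax; names are illustrative):

CREATE TABLE words (
    word_id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,  -- synthetic key
    word text NOT NULL UNIQUE  -- the natural key is still enforced
);

CREATE TABLE word_details (
    word_id integer NOT NULL REFERENCES words (word_id),  -- fixed-width, 4-byte FK
    note text
);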
Another idea:
Make 2 columns:
Column 1: the initial letter
Column 2: the word
(For example, if the word is APPLE, then column 1 is 'A' and column 2 is 'APPLE'.)
Benefits:
You can answer queries like "number of words starting with a given letter" faster (e.g., the count of words beginning with A); see the sketch below.
It could give you simple rules for making shards (e.g., all words whose column 1 is 'A' can be assigned to a particular dedicated shard).
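As a sketch (standard SQL; the column name initial_letter is just illustrative, matching the two-column idea above):

SELECT initial_letter, COUNT(*) AS word_count
FROM words
GROUP BY initial_letter;  -- e.g. how many words begin with 'A', 'B', ...

Note that Postgres could compute the same aggregate without the extra column (GROUP BY left(word, 1)), so the separate column mainly pays off as a sharding rule.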
I am new to databases/Hibernate and found this code:
@SequenceGenerator(name = "entSeq", allocationSize = 5, sequenceName = "CODE_SEQ")
...
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "entSeq")
which sets up a sequence for the primary key.
Why are sequences used for primary key values? Which of these goals is being addressed:
increase performance
add constraints, some checks
limit the possible value range of integer ID values (and why do so?)
why start counting from 1?
I read about syntax and usage in:
http://msdn.microsoft.com/en-us/library/ff878091.aspx
http://www.techonthenet.com/oracle/sequences.php
but didn't find an answer to my question.
UPDATE:
I enjoyed reading:
http://www.oracle.com/technetwork/products/rdb/0307-sequences-130053.pdf
Guide to Using SQL: Sequence Number Generator
which shows that there is a real problem in database practice: how to get unique IDs for primary keys. That means I can insert into a table without providing a value for the primary key myself:
INSERT INTO suppliers
(supplier_id, supplier_name)
VALUES
(supplier_seq.nextval, 'Kraft Foods');
But I would expect this feature to be present in every DB without forcing me to supply primary key values...
Am I thinking about this correctly?
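From what I have found so far, most databases do offer this, though the syntax varies. A sketch (Postgres 10+ identity syntax; other engines have SERIAL, AUTO_INCREMENT, or IDENTITY equivalents):

CREATE TABLE suppliers (
    supplier_id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,  -- generated by the DB
    supplier_name varchar(100) NOT NULL
);

INSERT INTO suppliers (supplier_name)
VALUES ('Kraft Foods');  -- no key supplied; the database assigns one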
UPDATE2:
The answer to why START WITH is used:
This clause can be useful when adding sequences to existing databases. When an older scheme was in use by the application and has already consumed some values from the legal range, this clause can be used to skip those consumed values. MINVALUE and MAXVALUE are used to specify the legal range, but START WITH would initiate the sequence usage within that range so that previously generated values would not reappear.
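For example (my own sketch of the syntax; names and values are made up):

CREATE SEQUENCE order_seq
    START WITH 5000      -- skip IDs 1..4999 already consumed by the old scheme
    MINVALUE 1
    MAXVALUE 999999999;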
UPDATE3: sequences provide surrogate keys: http://en.wikipedia.org/wiki/Surrogate_key
Historically, there were two main reasons.
Avoid performance problems with ON UPDATE CASCADE in big tables.
Avoid performance problems with joins on wide, natural keys.
Oracle doesn't even support ON UPDATE CASCADE, so updating a value that's used in foreign key references is more troublesome than on other platforms.
Those performance "problems" are much less severe nowadays than they were 20 years ago, given tables of the same size. (Hardware's a lot faster now.) But we seem to deal with much bigger tables now than we did 20 years ago.
There are some undesirable side-effects of this kind of performance tuning.
You typically need many more joins than you might with carefully chosen natural keys and ON UPDATE CASCADE. You might need so many that the joins are more costly than the disk read.
It's easier to get lost in the joins when you have 20 or 30 of them.
Rows are harder to quickly understand. (A row that reads {1, 7, 13, 255, 438} is harder to understand than a row that reads {1, library, checkout, 255, 'A book is your friend'}.)
Often, a database designer assigns an ID number as the primary key, but doesn't set any other UNIQUE constraints. That makes the ID number a row identifier, not an identifier for the real-world thing the row represents. That can be a big problem.
You cannot rely on auto-generated keys in all databases. Unlike most other databases, Oracle does not provide an auto-incrementing datatype that can be used to generate a sequential primary key value.
However, the same effect can be achieved with a sequence and a trigger.
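The usual pattern looks roughly like this (a sketch; the table, column, and sequence names are illustrative):

CREATE SEQUENCE suppliers_seq START WITH 1;

CREATE OR REPLACE TRIGGER suppliers_bi
BEFORE INSERT ON suppliers
FOR EACH ROW
BEGIN
    -- populate the primary key from the sequence on every insert
    SELECT suppliers_seq.NEXTVAL INTO :NEW.supplier_id FROM dual;
END;

(Oracle 11g lets you assign :NEW.supplier_id := suppliers_seq.NEXTVAL directly, and 12c finally added identity columns.)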
When I create a relational table, there is a temptation to choose as the primary key the column whose values are unique. But for optimization and uniformity purposes I create an artificial Id column every time. If there is a column (or combination of columns) that should be unique, I create a unique index for it instead of marking the column(s) as a (composite) primary key.
Is it really good practice to always prefer an artificial "Id" column plus indexes over natural columns for a primary key?
This is a bit of a religious debate. My personal preference is to have synthetic primary keys rather than natural primary keys but there are good arguments on both sides. Realistically, so long as you are consistent and reasonable, either approach can work well.
If you use natural keys, the two major downsides are the presence of composite keys and mutating primary key values. If you have composite primary keys, you'd obviously have to have multiple columns in each child table. That can get unwieldy from a data model perspective when there are many relationships among entities. But it can also cause grief for people developing queries-- it's awfully easy to create queries that use N-1 of N join conditions and get almost the right result. If you have natural keys, you'll also inevitably encounter a situation where the natural key value changes and you then have to ripple that change through many different entities-- that's vastly more complicated than changing a unique value in the table.
On the other hand, if you use synthetic keys, you're wasting space by adding additional columns, adding additional overhead to maintain an additional index, and you're increasing the risk that you'll get functionally duplicated results. It's awfully easy to either forget to create a unique constraint on the business key or to see that there is a non-unique index on the combination and just assume that it was a unique index. I actually just got bitten by this particular failing a couple days ago-- I had indexed the composite natural key (with a non-unique index) rather than creating a unique constraint. Dumb mistake but one that's relatively easy to make.
From a query writing and naming convention standpoint, I would also tend to prefer synthetic keys because it's nice to know when you're joining tables that the primary key of A is going to be A_ID and the primary key of B is going to be B_ID. That's far more self-documenting than trying to remember that the primary key of A is the combination of A_NAME and A_REVISION_NUMBER and that the primary key of B is B_CODE.
There is little or no difference between a key enforced through a PRIMARY KEY constraint and a key enforced through a UNIQUE constraint. What's important is that you enforce ALL the keys necessary from a data integrity perspective. Usually that means at least one "natural" key (a key exposed to the users/consumers of the data and used to identify the facts about the universe of discourse) per table.
Optionally you might also want to create "technical" keys to support the application and database features rather than the end user (usually called surrogate keys). That should be very much a secondary consideration however. In the interests of simplicity (and very often performance as well) it usually makes sense only to create surrogate keys where you have identified a particular need for them and not before.
It depends on your natural columns. If they are small and steadily increasing, then they are good candidates for the primary key.
Small - the smaller the key, the more entries fit on a single index page, and the faster your index scans will be
Steadily increasing - produces fewer index reshuffles as the table grows, improving performance.
My preference is to always use an artificial key.
First it is consistent. Anyone working on your application knows that there is a key and they can make assumptions on it. This makes it easier to understand and maintain.
I've also seen scenarios where the natural key (e.g., a string from an HR system that identifies an employee) has to change during the life of the application. If you have an artificial key that links the natural id to your employee record, then you only have to change that natural id in the one table. However, if that natural id is a primary key and is duplicated across a number of other tables as a foreign key, then you have a mess on your hands.
In my humble opinion, if I understand your meaning properly, it is always better to have an artificial Id.
Some people use business-significant unique values as their table Id, but I have read on MSDN, and even in the official NHibernate documentation, that a unique, business-meaningless value (an artificial Id) is preferred, though you may want to create an index on the business value for future reference. That way, the day the company changes its nomenclature, the system will still run flawlessly.
Yes, it is. If nothing else, one of the most important properties of an artificial primary key is opacity: the artificial key doesn't reflect any information beyond itself. If you use natural row contents for keys, you wind up exposing that information to things like Web interfaces, which is a terrible idea on all manner of principle.
Almost every table in every database I've seen in my 7 years of development experience has an auto-incrementing primary key. Why is this? If I have a table of U.S. states where each state must have a unique name, what's the use of an auto-incrementing primary key? Why not just use the state name as the primary key? Seems to me like an excuse to allow duplicates disguised as unique rows.
This seems plainly obvious to me, but then again, no one else seems to be arriving at and acting on the same logical conclusion as me, so I must assume there's a good chance I'm wrong.
Is there any real, practical reason we need to use auto-incrementing keys?
This question has been asked numerous times on SO and has been the subject of much debate over the years amongst (and between) developers and DBAs.
Let me start by saying that the premise of your question implies that one approach is universally superior to the other ... this is rarely the case in real life. Surrogate keys and natural keys both have their uses and challenges - and it's important to understand what they are. Whichever choice you make in your system, keep in mind there is benefit to consistency - it makes the data model easier to understand and easier to develop queries and applications for. I also want to say that I tend to prefer surrogate keys over natural keys for PKs ... but that doesn't mean that natural keys can't sometimes be useful in that role.
It is important to realize that surrogate and natural keys are NOT mutually exclusive - and in many cases they can complement each other. Keep in mind that a "key" for a database table is simply something that uniquely identifies a record (row). It's entirely possible for a single row to have multiple keys representing the different categories of constraints that make a record unique.
A primary key, on the other hand, is a particular unique key that the database will use to enforce referential integrity and to represent a foreign key in other tables. There can only be a single primary key for any table. The essential quality of a primary key is that it be 100% unique and non-NULL. A desirable quality of a primary key is that it be stable (unchanging). While mutable primary keys are possible, they cause many problems for databases that are better avoided (cascading updates, RI failures, etc.). If you do choose to use a surrogate primary key for your table(s), you should also consider creating unique constraints to reflect the existence of any natural keys.
Surrogate keys are beneficial in cases where:
Natural keys are not stable (values may change over time)
Natural keys are large or unwieldy (multiple columns or long values)
Natural key definitions can change over time (columns added/removed over time)
By providing a short, stable, unique value for every row, we can reduce the size of the database, improve its performance, and reduce the volatility of dependent tables which store foreign keys. There's also the benefit of key polymorphism, which I'll get to later.
In some instances, using natural keys to express relationships between tables can be problematic. For instance, imagine you had a PERSON table whose natural key was {LAST_NAME, FIRST_NAME, SSN}. What happens if you have some other table GRANT_PROPOSAL in which you need to store a reference to a Proposer, Reviewer, Approver, and Authorizer? You now need 12 columns to express this information. You also need to come up with a naming convention of some kind to identify which columns belong to which kind of individual. But what if your PERSON table required 6, or 8, or 24 columns for a natural key? This rapidly becomes unmanageable. Surrogate keys resolve such problems by divorcing the semantics (meaning) of a key from its use as an identifier.
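To make that concrete, a sketch of the surrogate-key version (illustrative names and sizes):

CREATE TABLE person (
    person_id integer PRIMARY KEY,          -- surrogate key
    last_name varchar(50) NOT NULL,
    first_name varchar(50) NOT NULL,
    ssn char(11) NOT NULL,
    UNIQUE (last_name, first_name, ssn)     -- the natural key survives as a constraint
);

CREATE TABLE grant_proposal (
    proposal_id integer PRIMARY KEY,
    proposer_id integer NOT NULL REFERENCES person (person_id),
    reviewer_id integer NOT NULL REFERENCES person (person_id),
    approver_id integer NOT NULL REFERENCES person (person_id),
    authorizer_id integer NOT NULL REFERENCES person (person_id)
);

With the natural key instead, each of those four references would need all three columns, which is where the 12 columns come from.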
Let's also take a look at the example you described in your question.
Should the 2-character abbreviation of a state be used as the primary key of that table?
On the surface, it looks like the abbreviation field meets the requirements of a good primary key. It's relatively short, it is easy to propagate as a foreign key, it looks stable. Unfortunately, you don't control the set of abbreviations ... the postal service does. And here's an interesting fact: in 1969 the USPS changed the abbreviation of Nebraska from NB to NE to minimize confusion with New Brunswick, Canada. The moral of the story is that natural keys are often outside the control of the database ... and they can change over time. Even when you think they cannot. This problem is even more pronounced for more complicated data like people, or products, etc. As businesses evolve, the definitions of what makes such entities unique can change. And this can create significant problems for data modelers and application developers.
Earlier I mentioned that primary keys can support key polymorphism. What does that mean? Well, polymorphism is the ability of one type, A, to appear as and be used like another type, B. In databases, this concept refers to the ability to combine keys from different classes of entities into a single table. Let's look at an example. Imagine for a moment that you want to have an audit trail in your system that identifies which entities were modified by which user on what date. It would be nice to create a table with the fields {ENTITY_ID, USER_ID, EDIT_DATE}. Unfortunately, using natural keys, different entities have different keys. So now we need to create a separate linking table for each kind of entity ... and build our application in a manner where it understands the different kinds of entities and how their keys are shaped.
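A sketch of that audit table as it looks under surrogate keys (illustrative; a real design would likely also carry an entity_type column to say which table the id refers to):

CREATE TABLE audit_trail (
    entity_id integer NOT NULL,  -- works for any entity, since all surrogate keys share one shape
    user_id integer NOT NULL,
    edit_date date NOT NULL
);

With heterogeneous natural keys there is no single ENTITY_ID column that could hold them all, hence the per-entity linking tables.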
Don't get me wrong. I'm not advocating that surrogate keys should ALWAYS be used. In the real world, "never", "ever", and "always" are dangerous positions to adopt. One of the biggest drawbacks of surrogate keys is that they can result in tables that have foreign keys consisting of lots of "meaningless" numbers. This can make it cumbersome to interpret the meaning of a record, since you have to join or look up records from other tables to get a complete picture. It can also make a distributed database deployment more complicated, as assigning unique incrementing numbers across servers isn't always possible (although most modern databases like Oracle and SQL Server mitigate this via sequence replication).
No.
In most cases, having a surrogate INT IDENTITY key is an easy option: it can be guaranteed to be NOT NULL and 100% unique, something a lot of "natural" keys don't offer - names can change, and so can SSNs and other items of information.
In the case of state abbreviations and names - if anything, I'd use the two-letter state abbreviation as a key.
A primary key must be:
unique (100% guaranteed! Not just "almost" unique)
NON NULL
A primary key should be:
stable if ever possible (not change - or at least not too frequently)
State two-letter codes definitely would offer this - that might be a candidate for a natural key. A key should also be small - an INT of 4 bytes is perfect, a two-letter CHAR(2) column just the same. I would not ever use a VARCHAR(100) field or something like that as a key - it's just too clunky, most likely will change all the time - not a good key candidate.
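For instance, a sketch (assuming the usual two-letter postal codes; sizes are illustrative):

CREATE TABLE states (
    code char(2) PRIMARY KEY,           -- e.g. 'NE'
    name varchar(50) NOT NULL UNIQUE    -- e.g. 'Nebraska'
);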
So while you don't have to have an auto-incrementing "artificial" (surrogate) primary key, it's often quite a good choice, since no naturally occurring data is really up to the task of being a primary key, and you want to avoid having huge primary keys with several columns - those are just too clunky and inefficient.
I think the use of the word "Primary", in the phrase "Primary" Key is in a real sense, misleading.
First, use the definition that a "key" is an attribute or set of attributes that must be unique within the table.
Then note that having any key serves several, often mutually inconsistent, purposes.
Purpose 1. To use as join conditions to one or many records in child tables which have a relationship to this parent table (explicitly or implicitly defining a foreign key in those child tables).
Purpose 2. (related) Ensuring that child records must have a parent record in the parent table (The child table FK must exist as Key in the parent table)
Purpose 3. To increase performance of queries that need to rapidly locate a specific record/row in the table.
Purpose 4. (Most important from a data consistency perspective!) To ensure data consistency by preventing duplicate rows which represent the same logical entity from being inserted into the table. (This is often called a "natural" key, and it should consist of table (entity) attributes which are relatively invariant.)
Clearly, any non-meaningful, non-natural key (like a GUID or an auto-generated integer) is totally incapable of satisfying Purpose 4.
But often, with many (most) tables, a totally natural key which can provide Purpose 4 will consist of multiple attributes and be excessively wide, or so wide that using it for Purposes 1, 2, or 3 will cause unacceptable performance consequences.
The answer is simple. Use both. Use a simple auto-generated integer key for all joins and FKs in other child tables, but ensure that every table that requires data consistency (very few tables don't) has an alternate natural unique key that will prevent inserts of inconsistent data rows... Plus, if you always have both, then all the objections against using a natural key (what if it changes? I have to change every place it is referenced as an FK) become moot, as you are not using it for that... You are only using it in the one table where it is a PK, to avoid inconsistent duplicate data...
The only time you can get away without both is for a completely standalone table that participates in no relationships with other tables and has an obvious and reliable natural key.
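A sketch of the use-both pattern (standard-SQL identity syntax; names are illustrative):

CREATE TABLE employee (
    employee_id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,  -- used for all joins and FKs
    badge_no varchar(20) NOT NULL UNIQUE  -- natural key, enforced only here
);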
In general, a numeric primary key will perform better than a string. You can additionally create unique keys to prevent duplicates from creeping in. That way you get the assurance of no duplicates, but you also get the performance of numbers (vs. strings in your scenario).
In all likelihood, the major databases have some performance optimizations for integer-based primary keys that are not present for string-based primary keys. But that is only a reasonable guess.
Yes, in my opinion every table needs an auto incrementing integer key because it makes both JOINs and (especially) front-end programming much, much, much easier. Others feel differently, but this is over 20 years of experience speaking.
The single exception is small "code" or "lookup" tables in which I'm willing to substitute a short (4 or 5 character) TEXT code value. I do this because I often use a lot of these in my databases, and it allows me to present a meaningful display to the user without having to look up the description in the lookup table or JOIN it into a result set. Your example of a States table would fit in this category.
No, absolutely not.
Having a primary key which can't change is a good idea (UPDATE is legal for primary key columns, but in general potentially confusing and can create problems for child rows). But if your application has some other candidate which is more suitable than an auto-incrementing value, then you should probably use that instead.
Performance-wise, in general fewer columns are better, and particularly fewer indexes. If you have another column which has a unique index on it AND can never be changed by any business process, then it may be a suitable primary key.
Speaking from a MySQL (Innodb) perspective, it's also a good idea to use a "real" column as a primary key rather than an "artificial" one, as InnoDB always clusters the primary key and includes it in secondary indexes (that is how it finds the rows in them). This gives it potential to do useful optimisation with a primary key which it can't with any other unique index. MSSQL users often choose to cluster the primary key, but it can also cluster a different unique index.
EDIT:
But if it's a small database and you don't really care about performance or size too much, adding an unnecessary auto-increment column isn't that bad.
A non auto-incrementing value (e.g. UUID, or some other string generated according to your own algorithm) may be useful for distributed, sharded, or diverse systems where maintaining a consistent auto-incrementing ID is difficult (or impossible - think of a distributed system which continues to insert rows on both sides of a network partition).
I think there are two things that may explain the reason why auto-incrementing keys are sometimes used:
Space consideration: OK, your state name doesn't amount to much, but the space a key takes may add up. If you really want to store the state with its name as a primary key, then go ahead, but it will take more space. That may not be a problem in certain cases, and it sounds like a problem of olden days, but the habit is perhaps ingrained. And we programmers and DBAs do love habits :D
Defensive consideration: I recently had the following problem: we have users in the database where the email is the key to all identification. Why not make the email the primary key? Except suddenly border cases creep in where one guy must be there twice to have two different addresses, and nobody talked about it in the specs so the address is not normalized, and there's this situation where two different emails must point to the same person and... After a while, you stop pulling your hair out and add the damn integer id column.
I'm not saying it's a bad habit, nor a good one; I'm sure good systems can be designed around reasonable primary keys, but these two points lead me to believe fear and habit are two among the culprits.
It's a key component of relational databases. Having an integer relate to a state instead of having the whole state name saves a bunch of space in your database! Imagine you have a million records referencing your state table. Do you want to use 4 bytes for a number on each of those records or do you want to use a whole crapload of bytes for each state name?
Here are some practical considerations.
Most modern ORMs (rails, django, hibernate, etc.) work best when there is a single integer column as the primary key.
Additionally, having a standard naming convention (e.g. id as primary key and table_name_id for foreign keys) makes identifying keys easier.
While developing a database design, it has become habitual in most scenarios to set the primary key as an integer type to serve as the unique identifier in the table. Why not use a string or a float for primary keys? Does this affect the accessibility of values, or in plain words, retrieval speed? Are there any specific reasons?
An integer will use less disk space than a string, thus giving you a smaller index file to search through. This is important for large tables where you want to have as much of the index as possible cached in RAM.
Also, they can be autoincremented so you don't need to write your own routines to generate keys.
You often want to have a technical key (also called a surrogate key), a key that is only used to identify the row and not used for anything else. Most data may change sooner or later for reasons you can't control and you don't want to update it everywhere. Even such seemingly static data as a nation-assigned personal id number can change (if you get a new identity) or there may be laws prohibiting their use. A key generated by you, however, is in your own control. For such surrogate keys it's useful to have a small key that is easily generated.
As for "floats as primary keys": Don't do this. A primary key should uniquely identify a row. Floats have no equality relation, which means you cannot safely compare two float values for equality. This is an inherent shortcoming of floating-point values. If you need decimals, use a fixed-point number type instead.
The primary key is supposed to be an index that provides a unique way to access a specific row in a table. Primary keys can be of most data types (in practical applications, float/double won't work too well), and primary keys can also be compound keys (composed of several columns).
If you carefully examine the data in the table, you might be able to find a data item that will be unique for every row in the table, thereby eliminating the requirement that you fabricate a key like the autoincrement integer that you find in some schemas.
If you're in a manufacturing environment it might be an alphanumeric field like part number or assembly identifier. Retail or warehousing applications might have a stock number or combination of stock number/shipment/manufacturer.
Generally, If some data in your table is supposed to be a unique identifier it probably will serve well as a primary key for your table.
Using data that exists in the table already completely eliminates the requirement to "make up" a value (such as the autoincrement column) and use it as the primary key. This saves space since it's one less column in the table and one less index on the table.
Yes, in my experience integer keys are almost always faster, since it's more efficient for the database engine to compare integers than strings. That said, depending on the "uniqueness" of the data (technically called cardinality: http://en.wikipedia.org/wiki/Cardinality_(SQL_statements)), the effect of character vs. integer keys may be nominal.
Character keys may degrade performance depending on the number of characters the database needs to compare to determine whether keys are equal or not. In the pathological case, imagine hundred-character fields which differ only on the right-hand side: one key is 100 A's, and we need to compare it to a key with 99 A's and a B as the last character. Conceptually, databases compare character fields just like strcmp() (strncmp() if you prefer), from left to right.
good luck!
The only reason is for performance.
A logical database design should specify which "real" columns are unique, but when the logical design is transformed into a physical design, it is traditional to not use any of these "natural" keys as the primary key; instead, a meaningless integer column is added for this purpose - called a "surrogate key".
Normally the designer will add further unique constraints for the "real" uniqueness business rules as specified in the logical design.
This is because most DBMSs have trouble updating a primary key (e.g. due to performance issues when cascading the update to child tables). Some DBMSs might not be able to support non-integer primary keys at all.
Some side notes:
There's no theoretical reason why primary keys should be immutable.
This is nothing to do with normalization, which happens in the logical model (which should never have surrogate keys).
Also, note that the idea of a "primary" key is not a relational concept - it is simply a way of denoting the "preferred" uniqueness constraint, perhaps for relational integrity - but there's nothing in the RM that says that you must use the same key for each child table.
I've created natural keys as "Primary Keys" in Oracle databases before, albeit rarely. I've even had them used for foreign key constraints. Admittedly, they were either immutable, or I hand-wrote the update-cascade code; and I had trouble with one front-end application where the PK included a date column.
Bottom line: there is no theoretical requirement for surrogate keys, but they're much more practical than the alternative.
I suspect that it is because we can auto-increment integer values so it's easy to generate a new unique key for every insert.
Many common ORM (Object-Relational Mapping) tools either force you to use, or at least recommend using, an integer as the primary key.
An integer primary key also saves space compared to a string, and an integer primary key is in some cases faster. Sequences or auto-increment fields make integer primary key generation easy, at least if you do not work with distributed databases.
These are some of the main reasons why I think we have integers/numbers as primary keys.
1. Primary keys should be able to uniquely identify your row and should be immutable. One of the problems with using real attributes (name, etc.) is that they can change over time. Maintaining relational integrity in such a case would be very difficult, as the change needs to cascade to all the child records.
2. The size of the table, and thereby the index, is smaller if we use a number as the key for the table.
3. Since these are automatically generated using a sequence, we can be sure that the values will be unique under all circumstances.
Check this.
http://forums.oracle.com/forums/thread.jspa?messageID=3916511