I have a table which is:
CREATE SEQUENCE id_seq;
CREATE TABLE public."UserInfo"
(
id bigint NOT NULL DEFAULT nextval('id_seq'),
phone text,
password text,
name text,
surname text,
middle_name text,
email text,
company text,
title text,
image_id text,
CONSTRAINT "UserInfo_pkey" PRIMARY KEY (id),
CONSTRAINT "UserInfo_image_id_key" UNIQUE (image_id),
CONSTRAINT "UserInfo_phone_key" UNIQUE (phone)
)
WITH (
OIDS=FALSE
);
ALTER SEQUENCE id_seq OWNED BY public."UserInfo".id;
ALTER TABLE public."UserInfo"
OWNER TO postgres;
When I make a bad insert request, for example with a duplicate value for a unique column, "id" keeps increasing... Here is the output from a bad request:
ERROR: null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, 9921455867, mg123209, name, surname, , namesurname#xxx.com, Company Name, Title Of Person, 123asd).
********** Error **********
Here is my table's current content:
1;"1234477867";"qweff";"Name";"Surname";"''";"namesurname#qwe.com";"Company";"Title";"qwer1234"
4;"5466477868";"1235dsf";"Name";"Surname";"''";"banesyrna#pwqe.com";"Company";"Title";"qwer1235"
6;"5051377828";"asd123";"Name";"Surname";"''";"qweg#sdcv.com";"Company";"Title";"qwesr1235"
Please help me solve this issue; I want the ids to be sequential, like 1, 2, 3, ...
This is the way sequences work.
Important: To avoid blocking concurrent transactions that obtain
numbers from the same sequence, a nextval operation is never rolled
back; that is, once a value has been fetched it is considered used and
will not be returned again. This is true even if the surrounding
transaction later aborts, or if the calling query ends up not using
the value.
As pointed out in the comments there's no harm in having gaps in sequences. If you delete some rows in your table for whatever reason, you are creating gaps in your primary key values, and you wouldn't normally bother with resetting them to make them sequential.
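To make that concrete, here is a minimal sketch (using a throwaway demo table, not your schema) of how a failed insert still consumes a sequence value:
CREATE TABLE demo (
    id    bigserial PRIMARY KEY,  -- backed by an implicit sequence, like your id_seq
    phone text UNIQUE
);
INSERT INTO demo (phone) VALUES ('111');  -- succeeds, gets id = 1
INSERT INTO demo (phone) VALUES ('111');  -- fails (duplicate phone), but id = 2 is already used up
INSERT INTO demo (phone) VALUES ('222');  -- succeeds, gets id = 3, leaving a gap at 2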
If you insist on creating a gapless sequence, read this article: http://www.varlena.com/GeneralBits/130.php and be prepared for slow inserts.
When generating an artificial primary key of this type it's important to clearly understand that the value of the primary key has no meaning. I'll emphasize that point again - THE VALUE OF THE KEY HAS NO MEANING, and any "meaning" which is ascribed to it by developers, managers, or users is incorrect. It's a value which is unique, non-null, and unchanging - that's it. It does not matter what the number is. It does not matter if it's in some "order" relative to other keys in the table. Do not fall into the trap of believing that these values need to be ordered, need to be increasing, or must in some other manner conform to some external expectations of "how they should look".
Unique. Non-null. Unchanging. That's all you know, and all you need know.
Best of luck.
I want to design a primary key for my table with row versioning. My table contains two main fields, ID and Timestamp, plus a bunch of other fields. For a unique "ID", I want to store previous versions of the record, so I am making the primary key a combination of the ID and Timestamp fields.
So, to see all the versions of a particular ID, I can run:
Select * from table_name where ID=<ID_value>
To return the most recent version of an ID, I can use
Select * from table_name where ID=<ID_value> ORDER BY timestamp desc
and get the first element.
My question here is: will this query be efficient and run in O(1), rather than scanning the entire table to find all entries matching the same ID, given that the ID field is only part of the primary key? Ideally, to get a result in O(1), I would have to provide the entire primary key. If it does need a full table scan, how else can I design my primary key so that this request runs in O(1)?
Thanks,
Sriram
The canonical reference on this subject is Effective Timestamping in Databases:
https://www.cs.arizona.edu/~rts/pubs/VLDBJ99.pdf
I usually design with a subset of this paper's recommendations, using a table containing a primary key only, with another referencing table that has that key as well as change_user, valid_from and valid_until columns with appropriate defaults. This makes referential integrity easy, as well as future value insertion and history retention. Index as appropriate, and consider check constraints or triggers to prevent overlaps and gaps if you expose these fields to the application for direct modification. These have an obvious performance overhead.
We then make a "current values view" which is exposed to developers, and is also insertable via an "instead of" trigger.
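As a rough sketch of that layout (all names here are illustrative, not taken from the paper), in Postgres it could look something like this:
CREATE TABLE item (
    item_id bigint PRIMARY KEY
);
CREATE TABLE item_version (
    item_id     bigint      NOT NULL REFERENCES item (item_id),
    change_user text        NOT NULL DEFAULT current_user,
    valid_from  timestamptz NOT NULL DEFAULT now(),
    valid_until timestamptz NOT NULL DEFAULT 'infinity',
    price       numeric,  -- whatever the versioned payload columns are
    PRIMARY KEY (item_id, valid_from)
);
-- "Current values" view exposed to developers; made insertable
-- with an INSTEAD OF trigger (omitted here).
CREATE VIEW item_current AS
SELECT item_id, price
FROM   item_version
WHERE  now() >= valid_from
AND    now() <  valid_until;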
It's far easier and better to use the History Table pattern for this.
create table foo (
foo_id int primary key,
name text
);
create table foo_history (
foo_id int,
version int,
name text,
operation char(1) check ( operation in ('u','d') ),
modified_at timestamp,
modified_by text,
primary key (foo_id, version)
);
Create a trigger to copy a foo row to foo_history on update or delete.
https://wiki.postgresql.org/wiki/Audit_trigger_91plus for a full example with postgres
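A minimal sketch of such a trigger might look like this (the wiki example above is more complete; the per-row MAX(version) lookup here is the simplest approach, not the fastest):
CREATE OR REPLACE FUNCTION foo_history_trigger() RETURNS trigger AS $$
BEGIN
    INSERT INTO foo_history (foo_id, version, name, operation, modified_at, modified_by)
    SELECT OLD.foo_id,
           coalesce(max(h.version), 0) + 1,  -- next version number for this foo_id
           OLD.name,
           CASE TG_OP WHEN 'UPDATE' THEN 'u' ELSE 'd' END,
           now(),
           current_user
    FROM   foo_history h
    WHERE  h.foo_id = OLD.foo_id;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER foo_history_trg
AFTER UPDATE OR DELETE ON foo
FOR EACH ROW EXECUTE PROCEDURE foo_history_trigger();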
I have a table with only one primary key column (nvarchar); in the designer it's marked as the primary key and does not allow nulls. But somehow there is a row in that table whose key value is empty. It is of course not null, it doesn't duplicate any other row in the primary key column, and there is no conflict or violation occurring.
However, as far as I know, that kind of value (empty) is not allowed for a primary key column in SQL Server. I wonder if there is any option to turn on to make it work properly, or whether I have to check the value myself through a CHECK constraint or directly in C# code (before updating).
Your help would be highly appreciated. Thanks!
"Empty-string" is a string with length of zero. It is NOT NULL, so it doesn't violate a null check. It most certainly IS an allowed value for a character-based primary key column in SQL Server. If the empty string value is not allowed by the business, a check constraint would be the best way to implement this as a business rule. That way, clients which might not know about the rule can't violate it.
This code runs without violations in SQL Server, I just tested it just to be sure.
create table TestTable (
myKey varchar(10) primary key,
myData int
)
GO
insert TestTable
select '', 1
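If the business rule is that an empty key is not allowed, a CHECK constraint along these lines (the constraint name is just illustrative) would reject it:
alter table TestTable
add constraint CK_TestTable_myKey_NotEmpty
check (len(myKey) > 0)  -- LEN ignores trailing spaces, so this also rejects all-space keys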
In Microsoft SQL Server, when creating tables, are there any downsides to using a unique constraint on a column even though you don't really need it to be unique?
An example would be descriptions for say a role in a user management system:
CREATE TABLE Role
(
ID TINYINT PRIMARY KEY NOT NULL IDENTITY(0, 1),
Title CHARACTER VARYING(32) NOT NULL UNIQUE,
Description CHARACTER VARYING(MAX) NOT NULL UNIQUE
)
My fear is that validating this constraint when doing frequent insertions in other tables will be a very time-consuming process. I am unsure how this constraint is validated, but I imagine it could be done either in a very efficient way or as a linear comparison.
Your fear is justified: UNIQUE constraints are implemented as indexes, and this is time and space consuming.
So, whenever you insert a new row, the database has to update the table, and also one index for each unique constraint.
So, according to you:
using a unique constraint on a column even though you don't really need it to be unique
the answer is no, don't use it. There are time and space downsides.
Your sample table would need a clustered index for the Id, and 2 extra indices, one for each unique constraint. This takes up space, and time to update the 3 indices on the inserts.
This would only be justified if you made queries filtering by those fields.
BY THE WAY:
The original post's sample table has several flaws:
that syntax is not SQL Server syntax (and you tagged this as SQL Server)
you cannot create an index on a varchar(max) column
If you correct the syntax and create this table:
CREATE TABLE Role
(
ID tinyint PRIMARY KEY NOT NULL IDENTITY(0, 1),
Title varchar(32) NOT NULL UNIQUE,
Description varchar(32) NOT NULL UNIQUE
)
You can then execute sp_help Role and you'll find the 3 indices.
The database creates an index which backs up the UNIQUE constraint, so it should be very low-cost to do the uniqueness check.
http://msdn.microsoft.com/en-us/library/ms177420.aspx
The Database Engine automatically creates a UNIQUE index to enforce the uniqueness requirement of the UNIQUE constraint. Therefore, if an attempt to insert a duplicate row is made, the Database Engine returns an error message that states the UNIQUE constraint has been violated and does not add the row to the table. Unless a clustered index is explicitly specified, a unique, nonclustered index is created by default to enforce the UNIQUE constraint.
Is it typically a good practice to constrain it if you know the data
will always be unique but it doesn't necessarily need to be unique for
the application to function correctly?
My question to you: would it make sense for two roles to have different titles but the same description? e.g.
INSERT INTO Role ( Title , Description )
VALUES ( 'CEO' , 'Senior manager' ),
( 'CTO' , 'Senior manager' );
To me it would seem to devalue the use of the description; if there were many duplications then it might make more sense to do something more like this:
INSERT INTO Role ( Title )
VALUES ( 'CEO' ),
( 'CTO' );
INSERT INTO SeniorManagers ( Title )
VALUES ( 'CEO' ),
( 'CTO' );
But then again you are not expecting duplicates.
I assume this is a low activity table. You say you fear validating this constraint when doing frequent insertions in other tables. Well, that will not happen (unless there is a trigger we cannot see that might update this table when another table is updated).
Personally, I would ask the designer (business analyst, whatever) to justify not applying a unique constraint. If they cannot, then I would impose the unique constraint based on common sense. As is usual for such a text column, I would also apply CHECK constraints, e.g. to disallow leading/trailing/double spaces, zero-length strings, etc.
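For example, such CHECK constraints might look like this (names and exact rules are illustrative, not prescriptive):
ALTER TABLE Role ADD CONSTRAINT CK_Role_Title_NotBlank
    CHECK (LEN(Title) > 0);                                    -- no empty or all-space titles (LEN ignores trailing spaces)
ALTER TABLE Role ADD CONSTRAINT CK_Role_Title_NoOuterSpaces
    CHECK (Title = LTRIM(Title)                                -- no leading spaces
           AND DATALENGTH(Title) = DATALENGTH(RTRIM(Title)));  -- no trailing spaces
ALTER TABLE Role ADD CONSTRAINT CK_Role_Title_NoDoubleSpaces
    CHECK (CHARINDEX('  ', Title) = 0);                        -- no double spaces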
On SQL Server, the data type tinyint only gives you 256 distinct values. No matter what you do outside of the id column, you're not going to end up with a very big table. It will surely perform quickly even with a dozen indexed columns.
You usually need at least one unique constraint besides the surrogate key, though. If you don't have one, you're liable to end up with data like this.
1 First title First description
2 First title First description
3 First title First description
...
17 Third title Third description
18 First title First description
Tables that permit data like that are usually wrong. Any table that uses foreign key references to this table won't be able to report correctly, say, the number of "First title" used.
I'd argue that allowing multiple, identical titles for roles in a user management system is a design error. I'd probably argue that "title" is a really bad name for that column, too.
I have a table with this layout:
CREATE TABLE Favorites (
FavoriteId uuid NOT NULL PRIMARY KEY,
UserId uuid NOT NULL,
RecipeId uuid NOT NULL,
MenuId uuid
);
I want to create a unique constraint similar to this:
ALTER TABLE Favorites
ADD CONSTRAINT Favorites_UniqueFavorite UNIQUE(UserId, MenuId, RecipeId);
However, this will allow multiple rows with the same (UserId, RecipeId), if MenuId IS NULL. I want to allow NULL in MenuId to store a favorite that has no associated menu, but I only want at most one of these rows per user/recipe pair.
The ideas I have so far are:
Use some hard-coded UUID (such as all zeros) instead of null.
However, MenuId has a FK constraint on each user's menus, so I'd then have to create a special "null" menu for every user which is a hassle.
Check for existence of a null entry using a trigger instead.
I think this is a hassle and I like avoiding triggers wherever possible. Plus, I don't trust them to guarantee my data is never in a bad state.
Just forget about it and check for the previous existence of a null entry in the middleware or in an insert function, and don't have this constraint.
I'm using Postgres 9.0. Is there any method I'm overlooking?
Postgres 15 or newer
Postgres 15 adds the clause NULLS NOT DISTINCT. The release notes:
Allow unique constraints and indexes to treat NULL values as not distinct (Peter Eisentraut)
Previously NULL values were always indexed as distinct values, but
this can now be changed by creating constraints and indexes using
UNIQUE NULLS NOT DISTINCT.
With this clause null is treated like just another value, and a UNIQUE constraint does not allow more than one row with the same null value. The task is simple now:
ALTER TABLE favorites
ADD CONSTRAINT favo_uni UNIQUE NULLS NOT DISTINCT (user_id, menu_id, recipe_id);
There are examples in the manual chapter "Unique Constraints".
The clause switches behavior for all keys of the same index. You can't treat null as equal for one key, but not for another.
NULLS DISTINCT remains the default (in line with standard SQL) and does not have to be spelled out.
The same clause works for a UNIQUE index, too:
CREATE UNIQUE INDEX favo_uni_idx
ON favorites (user_id, menu_id, recipe_id) NULLS NOT DISTINCT;
Note the position of the new clause after the key fields.
Postgres 14 or older
Create two partial indexes:
CREATE UNIQUE INDEX favo_3col_uni_idx ON favorites (user_id, menu_id, recipe_id)
WHERE menu_id IS NOT NULL;
CREATE UNIQUE INDEX favo_2col_uni_idx ON favorites (user_id, recipe_id)
WHERE menu_id IS NULL;
This way, there can only be one combination of (user_id, recipe_id) where menu_id IS NULL, effectively implementing the desired constraint.
Possible drawbacks:
You cannot have a foreign key referencing (user_id, menu_id, recipe_id). (It seems unlikely you'd want a FK reference three columns wide - use the PK column instead!)
You cannot base CLUSTER on a partial index.
Queries without a matching WHERE condition cannot use the partial index.
If you need a complete index, you can alternatively drop the WHERE condition from favo_3col_uni_idx and your requirements are still enforced.
The index, now comprising the whole table, overlaps with the other one and gets bigger. Depending on typical queries and the percentage of null values, this may or may not be useful. In extreme situations it may even help to maintain all three indexes (the two partial ones and a total on top).
This is a good solution for a single nullable column, maybe for two. But it gets out of hand quickly for more, as you need a separate partial index for every combination of nullable columns, so the number grows binomially. For multiple nullable columns, see instead:
Why doesn't my UNIQUE constraint trigger?
Aside: I advise not to use mixed case identifiers in PostgreSQL.
You could create a unique index with a coalesce on the MenuId:
CREATE UNIQUE INDEX
Favorites_UniqueFavorite ON Favorites
(UserId, COALESCE(MenuId, '00000000-0000-0000-0000-000000000000'), RecipeId);
You'd just need to pick a UUID for the COALESCE that will never occur in "real life". You'd probably never see a zero UUID in real life but you could add a CHECK constraint if you are paranoid (and since they really are out to get you...):
alter table Favorites
add check (MenuId <> '00000000-0000-0000-0000-000000000000');
You can store favourites with no associated menu in a separate table:
CREATE TABLE FavoriteWithoutMenu
(
FavoriteWithoutMenuId uuid NOT NULL, --Primary key
UserId uuid NOT NULL,
RecipeId uuid NOT NULL,
UNIQUE (UserId, RecipeId)
)
I believe there is an option that combines the previous answers into a more optimal solution.
create table unique_with_nulls (
id serial not null,
name varchar not null,
age int2 not null,
email varchar,
email_discriminator varchar not null generated always as ( coalesce(email::varchar, 0::varchar) ) stored,
constraint uwn_pkey primary key (id)
);
create unique index uwn_name_age_email_uidx on unique_with_nulls(name, age, email_discriminator);
What happens here is that the column email_discriminator will be generated at "insert-or-update-time", as either an actual email, or "0" if the former one is null. Then, your unique index must target the discriminator column.
This way we don't have to create two partial indexes, and we don't lose the ability to use index scans when selecting on name and age only.
Also, you can keep the type of the email column and we don't have any problems with the coalesce function, because email_discriminator is not a foreign key. And you don't have to worry about this column receiving unexpected values because generated columns cannot be written to.
I can see three opinionated drawbacks in this solution, but they are all fine for my needs:
the duplication of data between the email and email_discriminator.
the fact that I must write to a column and read from another.
the need to find a value that is outside the set of acceptable values of email to be the fallback one (and sometimes this could be hard to find or even subjective).
I think there is a semantic problem here. In my view, a user can have a (but only one) favourite recipe to prepare a specific menu. (The OP has menu and recipe mixed up; if I am wrong: please interchange MenuId and RecipeId below)
That implies that {user,menu} should be a unique key in this table. And it should point to exactly one recipe. If the user has no favourite recipe for this specific menu, no row should exist for this {user,menu} key pair. Also: the surrogate key (FavoriteId) is superfluous: composite primary keys are perfectly valid for relational-mapping tables.
That would lead to the reduced table definition:
CREATE TABLE Favorites
( UserId uuid NOT NULL REFERENCES users(id)
, MenuId uuid NOT NULL REFERENCES menus(id)
, RecipeId uuid NOT NULL REFERENCES recipes(id)
, PRIMARY KEY (UserId, MenuId)
);
Please clarify two things for me:
Can a Foreign key be NULL?
Can a Foreign key be duplicate?
As far as I know, NULL shouldn't be used in foreign keys, but in some applications of mine I'm able to input NULL in both Oracle and SQL Server, and I don't know why.
Short answer: Yes, it can be NULL or duplicate.
I want to explain why a foreign key might need to be null or might need to be unique or not unique. First remember a Foreign key simply requires that the value in that field must exist first in a different table (the parent table). That is all an FK is by definition. Null by definition is not a value. Null means that we do not yet know what the value is.
Let me give you a real life example. Suppose you have a database that stores sales proposals. Suppose further that each proposal only has one sales person assigned and one client. So your proposal table would have two foreign keys, one with the client ID and one with the sales rep ID. However, at the time the record is created, a sales rep is not always assigned (because no one is free to work on it yet), so the client ID is filled in but the sales rep ID might be null. In other words, usually you need the ability to have a null FK when you may not know its value at the time the data is entered, but you do know other values in the table that need to be entered. To allow nulls in an FK generally all you have to do is allow nulls on the field that has the FK. The null value is separate from the idea of it being an FK.
Whether it is unique or not unique relates to whether the table has a one-one or a one-many relationship to the parent table. Now if you have a one-one relationship, it is possible that you could have the data all in one table, but if the table is getting too wide or if the data is on a different topic (the employee - insurance example #tbone gave for instance), then you want separate tables with a FK. You would then want to make this FK either also the PK (which guarantees uniqueness) or put a unique constraint on it.
Most FKs are for a one to many relationship and that is what you get from a FK without adding a further constraint on the field. So you have an order table and the order details table for instance. If the customer orders ten items at one time, he has one order and ten order detail records that contain the same orderID as the FK.
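A minimal sketch of both situations (table and column names are made up): a nullable FK for the not-yet-assigned sales rep, and a duplicated FK in the order detail rows:
CREATE TABLE Clients   (ClientId int PRIMARY KEY);
CREATE TABLE SalesReps (RepId    int PRIMARY KEY);

CREATE TABLE Proposals (
    ProposalId int PRIMARY KEY,
    ClientId   int NOT NULL REFERENCES Clients (ClientId),  -- always known when the proposal is entered
    RepId      int NULL     REFERENCES SalesReps (RepId)    -- nullable FK: rep may not be assigned yet
);

CREATE TABLE Orders (OrderId int PRIMARY KEY);

CREATE TABLE OrderDetails (
    OrderDetailId int PRIMARY KEY,
    OrderId       int NOT NULL REFERENCES Orders (OrderId), -- duplicated FK: many detail rows per order
    Item          varchar(100)
);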
1 - Yes, since at least SQL Server 2000.
2 - Yes, as long as it's not a UNIQUE constraint or linked to a unique index.
Yes, a foreign key can be null, as the senior programmers explained above... I would add another scenario where a foreign key needs to be null.
Suppose we have tables Comments, Pictures and Videos in an application that allows comments on pictures and videos. In the Comments table we can have two foreign keys, PictureId and VideoId, along with the primary key CommentId. So when you comment on a video, only VideoId would be required and PictureId would be null... and if you comment on a picture, only PictureId would be required and VideoId would be null...
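A sketch of that scenario (names are illustrative):
CREATE TABLE Pictures (PictureId int PRIMARY KEY);
CREATE TABLE Videos   (VideoId   int PRIMARY KEY);

CREATE TABLE Comments (
    CommentId   int PRIMARY KEY,
    PictureId   int NULL REFERENCES Pictures (PictureId),
    VideoId     int NULL REFERENCES Videos (VideoId),
    CommentText varchar(1000)
);

INSERT INTO Pictures (PictureId) VALUES (10);
INSERT INTO Videos   (VideoId)   VALUES (20);

-- Comment on a picture: VideoId stays NULL
INSERT INTO Comments (CommentId, PictureId, VideoId, CommentText) VALUES (1, 10, NULL, 'Nice photo');
-- Comment on a video: PictureId stays NULL
INSERT INTO Comments (CommentId, PictureId, VideoId, CommentText) VALUES (2, NULL, 20, 'Great video');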
It depends on what role this foreign key plays in your relation.
If this foreign key is also a key attribute in your relation, then it can't be NULL.
If this foreign key is a normal attribute in your relation, then it can be NULL.
Here's an example using Oracle syntax:
First let's create a table COUNTRY
CREATE TABLE TBL_COUNTRY ( COUNTRY_ID VARCHAR2 (50) NOT NULL ) ;
ALTER TABLE TBL_COUNTRY ADD CONSTRAINT COUNTRY_PK PRIMARY KEY ( COUNTRY_ID ) ;
Create the table PROVINCE
CREATE TABLE TBL_PROVINCE(
PROVINCE_ID VARCHAR2 (50) NOT NULL ,
COUNTRY_ID VARCHAR2 (50)
);
ALTER TABLE TBL_PROVINCE ADD CONSTRAINT PROVINCE_PK PRIMARY KEY ( PROVINCE_ID ) ;
ALTER TABLE TBL_PROVINCE ADD CONSTRAINT PROVINCE_COUNTRY_FK FOREIGN KEY ( COUNTRY_ID ) REFERENCES TBL_COUNTRY ( COUNTRY_ID ) ;
This runs perfectly fine in Oracle. Notice the COUNTRY_ID foreign key in the second table doesn't have "NOT NULL".
Now to insert a row into the PROVINCE table, it's sufficient to only specify the PROVINCE_ID. However, if you chose to specify a COUNTRY_ID as well, it must exist already in the COUNTRY table.
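For illustration (the values are made up):
INSERT INTO TBL_COUNTRY (COUNTRY_ID) VALUES ('FR');

-- COUNTRY_ID left NULL: allowed, the foreign key is simply not checked for NULL
INSERT INTO TBL_PROVINCE (PROVINCE_ID, COUNTRY_ID) VALUES ('Normandy', NULL);

-- COUNTRY_ID supplied: it must already exist in TBL_COUNTRY
INSERT INTO TBL_PROVINCE (PROVINCE_ID, COUNTRY_ID) VALUES ('Alsace', 'FR');

-- This one fails with ORA-02291 (integrity constraint violated - parent key not found):
-- INSERT INTO TBL_PROVINCE (PROVINCE_ID, COUNTRY_ID) VALUES ('Bavaria', 'DE');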
By default there are no such constraints on a foreign key: it can be null and it can be duplicated.
Only if you add a uniqueness or NOT NULL constraint while creating or altering the table will null or duplicate values be disallowed.
Simply put, "non-identifying" relationships between entities are part of the ER model and are available in Microsoft Visio when designing an ER diagram. They are required to enforce cardinality between entities of the kind "zero or more" or "zero or one". Note the "zero" in the cardinality, as opposed to the "one" in "one to many".
An example of a non-identifying relationship, where the cardinality may be "zero", is when we say a record (object) in one entity, A, "may" or "may not" have a value referencing record(s) in another entity, B.
Since a record of entity A may identify records of entity B, there should be a column in entity B that holds the identity value of the corresponding record of entity A. This column may be null if no record in entity A identifies the record(s) (or objects) in entity B.
In the object-oriented (real-world) paradigm, there are situations where an object of class B does not necessarily depend on (is not strongly coupled to) an object of class A for its existence. In other words, class B is loosely coupled to class A: class A may "contain" (containment) an object of class B, as opposed to composition, where an object of class B must have an object of class A for its creation.
From a SQL query point of view, you can query all records in entity B whose foreign key to entity A is "not null"; this returns all records that have a corresponding row in entity A. Conversely, all records with a null value in that column are the records in entity B that do not reference any record in entity A.
Can a Foreign key be NULL?
Existing answers focused on the single-column scenario. If we consider a multi-column foreign key, we have more options using the MATCH [SIMPLE | PARTIAL | FULL] clause defined in the SQL standard:
PostgreSQL-CREATE TABLE
A value inserted into the referencing column(s) is matched against the values of the referenced table and referenced columns using the given match type. There are three match types: MATCH FULL, MATCH PARTIAL, and MATCH SIMPLE (which is the default). MATCH FULL will not allow one column of a multicolumn foreign key to be null unless all foreign key columns are null; if they are all null, the row is not required to have a match in the referenced table. MATCH SIMPLE allows any of the foreign key columns to be null; if any of them are null, the row is not required to have a match in the referenced table. MATCH PARTIAL is not yet implemented.
(Of course, NOT NULL constraints can be applied to the referencing column(s) to prevent these cases from arising.)
Example:
CREATE TABLE A(a VARCHAR(10), b VARCHAR(10), d DATE , UNIQUE(a,b));
INSERT INTO A(a, b, d)
VALUES (NULL, NULL, NOW()),('a', NULL, NOW()),(NULL, 'b', NOW()),('c', 'b', NOW());
CREATE TABLE B(id INT PRIMARY KEY, ref_a VARCHAR(10), ref_b VARCHAR(10));
-- MATCH SIMPLE - default behaviour nulls are allowed
ALTER TABLE B ADD CONSTRAINT B_Fk FOREIGN KEY (ref_a, ref_b)
REFERENCES A(a,b) MATCH SIMPLE;
INSERT INTO B(id, ref_a, ref_b) VALUES (1, NULL, 'b');
-- (NULL/'x') the 'x' value does not exist in table A, but the insert is valid
INSERT INTO B(id, ref_a, ref_b) VALUES (2, NULL, 'x');
ALTER TABLE B DROP CONSTRAINT IF EXISTS B_Fk; -- cleanup
-- MATCH PARTIAL - not implemented
ALTER TABLE B ADD CONSTRAINT B_Fk FOREIGN KEY (ref_a, ref_b)
REFERENCES A(a,b) MATCH PARTIAL;
-- ERROR: MATCH PARTIAL not yet implemented
DELETE FROM B; ALTER TABLE B DROP CONSTRAINT IF EXISTS B_Fk; -- cleanup
-- MATCH FULL nulls are not allowed
ALTER TABLE B ADD CONSTRAINT B_Fk FOREIGN KEY (ref_a, ref_b)
REFERENCES A(a,b) MATCH FULL;
-- FK is defined, inserting NULL as part of FK
INSERT INTO B(id, ref_a, ref_b) VALUES (1, NULL, 'b');
-- ERROR: MATCH FULL does not allow mixing of null and nonnull key values.
-- FK is defined, inserting all NULLs - valid
INSERT INTO B(id, ref_a, ref_b) VALUES (1, NULL, NULL);
db<>fiddle demo
I think it is better to consider the possible cardinality we have between the tables.
The minimum cardinality can be zero. When the relationship is optional, the minimum participation of tuples from the related table can be zero, and then you need the foreign key values to be allowed to be null.
But the answer is that it all depends on the business.
The idea of a foreign key is based on the concept of referencing a value that already exists in the main table. That is why it is called a foreign key in the other table. This concept is called referential integrity. If a foreign key is declared as a null field it will violate the very logic of referential integrity. What will it refer to? It can only refer to something that is present in the main table. Hence, I think it would be wrong to declare a foreign key field as null.
I think a foreign key of one table is also a primary key of some other table, so it won't allow nulls. So there is no question of having a null value in a foreign key.