How to ignore duplicate Primary Key in SQL?

I have an Excel sheet with several values which I imported into SQL Server (Book1$), and I want to transfer the values into ProcessList. Several rows share the same primary key, ProcessId, because the rows contain both original and modified values, and I want to keep both. How do I make SQL Server ignore the duplicate primary keys?
I tried IGNORE_DUP_KEY = ON, but for rows with a duplicated primary key, only one row (the latest) shows up.
CREATE TABLE dbo.ProcessList
(
    Edited varchar(1),
    ProcessId int NOT NULL PRIMARY KEY WITH (IGNORE_DUP_KEY = ON),
    Name varchar(30) NOT NULL,
    Amount smallmoney NOT NULL,
    CreationDate datetime NOT NULL,
    ModificationDate datetime
)
INSERT INTO ProcessList SELECT Edited, ProcessId, Name, Amount, CreationDate, ModificationDate FROM Book1$
SELECT * FROM ProcessList
Also, if I have a row and I update the values of that row, is there any way to keep the original values of the row and insert a clone of that row below, with the updated values and creation/modification date updated automatically?

How do I make SQL ignore the duplicate primary keys?
Under no circumstances can a transaction be committed that results in a table containing two distinct rows with the same primary key. That is fundamental to the nature of a primary key. SQL Server's IGNORE_DUP_KEY option does not change that -- it merely affects how SQL Server handles the problem. (With the option turned on it silently refuses to insert rows having the same primary key as any existing row; otherwise, such an insertion attempt causes an error.)
You can address the situation either by dropping the primary key constraint or by adding one or more columns to the primary key to yield a composite key whose collective value is not duplicated. I don't see any good candidate columns for an expanded PK among those you described, though. If you drop the PK then it might make sense to add a synthetic, autogenerated PK column.
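For illustration, a minimal sketch of the surrogate-key approach (the RowId name is an assumption):
CREATE TABLE dbo.ProcessList
(
    RowId int IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- hypothetical surrogate key
    Edited varchar(1),
    ProcessId int NOT NULL,                        -- no longer the PK, so duplicates are allowed
    Name varchar(30) NOT NULL,
    Amount smallmoney NOT NULL,
    CreationDate datetime NOT NULL,
    ModificationDate datetime
)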
Also, if I have a row and I update the values of that row, is there any way to keep the original values of the row and insert a clone of that row below, with the updated values and creation/modification date updated automatically?
If you want to ensure that this happens automatically, however a row happens to be updated, then look into triggers. If you want a way to automate it, but you're willing to make the user ask for the behavior, then consider a stored procedure.
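As a minimal sketch of the trigger route, assuming the hypothetical RowId surrogate key from above: an INSTEAD OF UPDATE trigger can leave the original row untouched and insert the updated values as a new row, stamping the dates automatically (using 'Y' as the Edited flag is also an assumption).
CREATE TRIGGER trg_ProcessList_KeepHistory ON dbo.ProcessList
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- The original rows are left untouched; the updated values become new rows.
    INSERT INTO dbo.ProcessList (Edited, ProcessId, Name, Amount, CreationDate, ModificationDate)
    SELECT 'Y', i.ProcessId, i.Name, i.Amount, d.CreationDate, GETDATE()
    FROM inserted i
    JOIN deleted d ON d.RowId = i.RowId;  -- RowId is the hypothetical surrogate key
END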

Try this (note: INSERT IGNORE is MySQL syntax and will not run on SQL Server):
INSERT IGNORE INTO ProcessList SELECT Edited, ProcessId, Name, Amount, CreationDate, ModificationDate FROM Book1$
SELECT * FROM ProcessList

You drop the constraint. Something like this:
alter table dbo.ProcessList drop constraint PK_ProcessId;
You need to know the constraint name.
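If you don't know it, you can look it up in the system catalog, for example:
SELECT name
FROM sys.key_constraints
WHERE [type] = 'PK'
  AND parent_object_id = OBJECT_ID('dbo.ProcessList');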
In other words, you can't ignore a primary key. It is defined as unique and not-null. If you want the table to have duplicates, then that is not the primary key.

Related

Violation of UNIQUE KEY constraint '...'. Cannot insert duplicate key in object 'dbo.Cliente'. The duplicate key value is (<NULL>) [duplicate]

I want to have a unique constraint on a column which I am going to populate with GUIDs. However, my data contains NULL values for this column. How do I create a constraint that allows multiple NULL values?
Here's an example scenario. Consider this schema:
CREATE TABLE People (
    Id INT CONSTRAINT PK_MyTable PRIMARY KEY IDENTITY,
    Name NVARCHAR(250) NOT NULL,
    LibraryCardId UNIQUEIDENTIFIER NULL,
    CONSTRAINT UQ_People_LibraryCardId UNIQUE (LibraryCardId)
)
Then see this code for what I'm trying to achieve:
-- This works fine:
INSERT INTO People (Name, LibraryCardId)
VALUES ('John Doe', 'AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA');
-- This also works fine, obviously:
INSERT INTO People (Name, LibraryCardId)
VALUES ('Marie Doe', 'BBBBBBBB-BBBB-BBBB-BBBB-BBBBBBBBBBBB');
-- This would *correctly* fail:
--INSERT INTO People (Name, LibraryCardId)
--VALUES ('John Doe the Second', 'AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA');
-- This works fine this one first time:
INSERT INTO People (Name, LibraryCardId)
VALUES ('Richard Roe', NULL);
-- THE PROBLEM: This fails even though I'd like to be able to do this:
INSERT INTO People (Name, LibraryCardId)
VALUES ('Marcus Roe', NULL);
The final statement fails with a message:
Violation of UNIQUE KEY constraint 'UQ_People_LibraryCardId'. Cannot insert duplicate key in object 'dbo.People'.
How can I change my schema and/or uniqueness constraint so that it allows multiple NULL values, while still checking for uniqueness on actual data?
What you're looking for is indeed part of the ANSI standards SQL:92, SQL:1999 and SQL:2003, i.e. a UNIQUE constraint must disallow duplicate non-NULL values but accept multiple NULL values.
In the Microsoft world of SQL Server, however, a single NULL is allowed but multiple NULLs are not...
In SQL Server 2008, you can define a unique filtered index based on a predicate that excludes NULLs:
CREATE UNIQUE NONCLUSTERED INDEX idx_yourcolumn_notnull
ON YourTable(yourcolumn)
WHERE yourcolumn IS NOT NULL;
In earlier versions, you can resort to VIEWS with a NOT NULL predicate to enforce the constraint.
SQL Server 2008 +
You can create a unique index that accepts multiple NULLs by adding a WHERE clause. See the answer below.
Prior to SQL Server 2008
You cannot create a UNIQUE constraint and allow NULLs. You need to set a default value of NEWID().
Update the existing values to NEWID() where NULL before creating the UNIQUE constraint.
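A sketch of that workaround against the People table from the question (the DF_ constraint name is hypothetical):
UPDATE People SET LibraryCardId = NEWID() WHERE LibraryCardId IS NULL;
ALTER TABLE People ADD CONSTRAINT DF_People_LibraryCardId DEFAULT NEWID() FOR LibraryCardId;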
SQL Server 2008 And Up
Just filter a unique index:
CREATE UNIQUE NONCLUSTERED INDEX UQ_Party_SamAccountName
ON dbo.Party(SamAccountName)
WHERE SamAccountName IS NOT NULL;
In Lower Versions, A Materialized View Is Still Not Required
For SQL Server 2005 and earlier, you can do it without a view. I just added a unique constraint like you're asking for to one of my tables. Given that I want uniqueness in column SamAccountName, but I want to allow multiple NULLs, I used a materialized column rather than a materialized view:
ALTER TABLE dbo.Party ADD SamAccountNameUnique
AS (Coalesce(SamAccountName, Convert(varchar(11), PartyID)))
ALTER TABLE dbo.Party ADD CONSTRAINT UQ_Party_SamAccountName
UNIQUE (SamAccountNameUnique)
You simply have to put something in the computed column that will be guaranteed unique across the whole table when the actual desired unique column is NULL. In this case, PartyID is an identity column and being numeric will never match any SamAccountName, so it worked for me. You can try your own method—be sure you understand the domain of your data so that there is no possibility of intersection with real data. That could be as simple as prepending a differentiator character like this:
Coalesce('n' + SamAccountName, 'p' + Convert(varchar(11), PartyID))
Even if PartyID became non-numeric someday and could coincide with a SamAccountName, now it won't matter.
Note that the presence of an index including the computed column implicitly causes each expression result to be saved to disk with the other data in the table, which DOES take additional disk space.
Note that if you don't want an index, you can still save CPU by making the expression be precalculated to disk by adding the keyword PERSISTED to the end of the column expression definition.
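For example, a sketch of the same computed column persisted without an index:
ALTER TABLE dbo.Party ADD SamAccountNameUnique
    AS (Coalesce(SamAccountName, Convert(varchar(11), PartyID))) PERSISTED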
In SQL Server 2008 and up, definitely use the filtered solution instead if you possibly can!
Controversy
Please note that some database professionals will see this as a case of "surrogate NULLs", which definitely have problems (mostly due to issues around trying to determine when something is a real value or a surrogate value for missing data; there can also be issues with the number of non-NULL surrogate values multiplying like crazy).
However, I believe this case is different. The computed column I'm adding will never be used to determine anything. It has no meaning of itself, and encodes no information that isn't already found separately in other, properly defined columns. It should never be selected or used.
So, my story is that this is not a surrogate NULL, and I'm sticking to it! Since we don't actually want the non-NULL value for any purpose other than to trick the UNIQUE index to ignore NULLs, our use case has none of the problems that arise with normal surrogate NULL creation.
All that said, I have no problem with using an indexed view instead—but it brings some issues with it such as the requirement of using SCHEMABINDING. Have fun adding a new column to your base table (you'll at minimum have to drop the index, and then drop the view or alter the view to not be schema bound). See the full (long) list of requirements for creating an indexed view in SQL Server (2005) (also later versions), (2000).
Update
If your column is numeric, there may be the challenge of ensuring that the unique constraint using Coalesce does not result in collisions. In that case, there are some options. One might be to use a negative number, to put the "surrogate NULLs" only in the negative range, and the "real values" only in the positive range. Alternately, the following pattern could be used. In table Issue (where IssueID is the PRIMARY KEY), there may or may not be a TicketID, but if there is one, it must be unique.
ALTER TABLE dbo.Issue ADD TicketUnique
AS (CASE WHEN TicketID IS NULL THEN IssueID END);
ALTER TABLE dbo.Issue ADD CONSTRAINT UQ_Issue_Ticket_AllowNull
UNIQUE (TicketID, TicketUnique);
If IssueID 1 has ticket 123, the UNIQUE constraint will be on values (123, NULL). If IssueID 2 has no ticket, it will be on (NULL, 2). Some thought will show that this constraint cannot be duplicated for any row in the table, and still allows multiple NULLs.
For people who are using Microsoft SQL Server Management Studio and want to create a unique but nullable index: create your unique index as you normally would, then in the Index Properties for your new index, select "Filter" from the left hand panel and enter your filter (which is your WHERE clause). It should read something like this:
([YourColumnName] IS NOT NULL)
This works with MSSQL 2012
When I applied the unique index below:
CREATE UNIQUE NONCLUSTERED INDEX idx_badgeid_notnull
ON employee(badgeid)
WHERE badgeid IS NOT NULL;
every non null update and insert failed with the error below:
UPDATE failed because the following SET options have incorrect settings: 'ARITHABORT'.
I found this on MSDN
SET ARITHABORT must be ON when you are creating or changing indexes on computed columns or indexed views. If SET ARITHABORT is OFF, CREATE, UPDATE, INSERT, and DELETE statements on tables with indexes on computed columns or indexed views will fail.
So to get this to work correctly I did this
Right click [Database] --> Properties --> Options --> Other Options --> Miscellaneous --> Arithmetic Abort Enabled --> true
I believe it is possible to set this option in code using
ALTER DATABASE "DBNAME" SET ARITHABORT ON
but I have not tested this.
It can be done in the designer as well: right click on the index > Properties, select the Filter page, and enter the filter expression there.
Create a view that selects only non-NULL columns and create the UNIQUE INDEX on the view:
CREATE VIEW myview
AS
SELECT *
FROM mytable
WHERE mycolumn IS NOT NULL
CREATE UNIQUE INDEX ux_myview_mycolumn ON myview (mycolumn)
Note that you'll need to perform INSERTs and UPDATEs on the view instead of the table.
You may do it with an INSTEAD OF trigger:
CREATE TRIGGER trg_mytable_insert ON mytable
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO myview
    SELECT *
    FROM inserted
END
It is possible to create a unique constraint on a Clustered Indexed View
You can create the View like this:
CREATE VIEW dbo.VIEW_OfYourTable WITH SCHEMABINDING AS
SELECT YourUniqueColumnWithNullValues FROM dbo.YourTable
WHERE YourUniqueColumnWithNullValues IS NOT NULL;
and the unique constraint like this:
CREATE UNIQUE CLUSTERED INDEX UIX_VIEW_OFYOURTABLE
ON dbo.VIEW_OfYourTable(YourUniqueColumnWithNullValues)
In my experience - if you're thinking a column needs to allow NULLs but also needs to be UNIQUE for values where they exist, you may be modelling the data incorrectly. This often suggests you're creating a separate sub-entity within the same table as a different entity. It probably makes more sense to have this entity in a second table.
In the provided example, I would put LibraryCardId in a separate LibraryCards table with a unique not-null foreign key to the People table:
CREATE TABLE People (
    Id INT CONSTRAINT PK_MyTable PRIMARY KEY IDENTITY,
    Name NVARCHAR(250) NOT NULL
)
CREATE TABLE LibraryCards (
    LibraryCardId UNIQUEIDENTIFIER CONSTRAINT PK_LibraryCards PRIMARY KEY,
    PersonId INT NOT NULL,
    CONSTRAINT UQ_LibraryCardId_PersonId UNIQUE (PersonId),
    FOREIGN KEY (PersonId) REFERENCES People(Id)
)
This way you don't need to bother with a column being both unique and nullable. If a person doesn't have a library card, they just won't have a record in the library cards table. Also, if there are additional attributes about the library card (perhaps Expiration Date or something), you now have a logical place to put those fields.
Maybe consider an "INSTEAD OF" trigger and do the check yourself, with a non-clustered (non-unique) index on the column to enable the lookup?
As stated before, SQL Server doesn't implement the ANSI standard when it comes to the UNIQUE constraint. There has been a ticket on Microsoft Connect for this since 2007. As suggested there and here, the best options as of today are to use a filtered index (as stated in another answer) or a computed column, e.g.:
CREATE TABLE [Orders] (
    [OrderId] INT IDENTITY(1,1) NOT NULL,
    [TrackingId] varchar(11) NULL,
    ...
    [ComputedUniqueTrackingId] AS (
        CASE WHEN [TrackingId] IS NULL
             THEN '#' + cast([OrderId] as varchar(12))
             ELSE [TrackingId] END
    ),
    CONSTRAINT [UQ_TrackingId] UNIQUE ([ComputedUniqueTrackingId])
)
You can create an INSTEAD OF trigger to check for specific conditions and error if they are met. Creating an index can be costly on larger tables.
Here's an example:
-- Note: as written, an UPDATE that keeps the same pony_name will collide with
-- the row being updated and be rejected; the ELSE branch also re-inserts rows
-- rather than updating them.
CREATE TRIGGER PONY.trg_pony_unique_name ON PONY.tbl_pony
INSTEAD OF INSERT, UPDATE
AS
BEGIN
    IF EXISTS (
            -- duplicate names within the incoming batch itself
            SELECT TOP (1) 1
            FROM inserted i
            GROUP BY i.pony_name
            HAVING COUNT(1) > 1
        )
        OR EXISTS (
            -- names that collide with rows already in the table
            SELECT TOP (1) 1
            FROM PONY.tbl_pony t
            INNER JOIN inserted i
                ON i.pony_name = t.pony_name
        )
        THROW 911911, 'A pony must have a name as unique as s/he is. --PAS', 16;
    ELSE
        INSERT INTO PONY.tbl_pony (pony_name, stable_id, pet_human_id)
        SELECT pony_name, stable_id, pet_human_id
        FROM inserted
END
You can't do this with a UNIQUE constraint, but you can do this in a trigger.
CREATE TRIGGER [dbo].[OnInsertMyTableTrigger]
ON [dbo].[MyTable]
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @Column1 INT;
    DECLARE @Column2 INT; -- allow NULLs on this column
    -- Note: this handles single-row inserts only.
    SELECT @Column1 = Column1, @Column2 = Column2 FROM inserted;
    -- Check if a matching record already exists; if not, allow the insert.
    IF NOT EXISTS (SELECT * FROM dbo.MyTable
                   WHERE Column1 = @Column1 AND Column2 = @Column2 AND @Column2 IS NOT NULL)
    BEGIN
        INSERT INTO dbo.MyTable (Column1, Column2)
        SELECT @Column1, @Column2;
    END
    ELSE
    BEGIN
        RAISERROR('The unique constraint applies on Column1 %d, AND Column2 %d, unless Column2 is NULL.', 16, 1, @Column1, @Column2);
        ROLLBACK TRANSACTION;
    END
END
CREATE UNIQUE NONCLUSTERED INDEX [UIX_COLUMN_NAME]
ON [dbo].[Employee]([Username] ASC) WHERE ([Username] IS NOT NULL)
WITH (ALLOW_PAGE_LOCKS = ON, ALLOW_ROW_LOCKS = ON, PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF,
DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, STATISTICS_NORECOMPUTE = OFF, ONLINE = OFF,
MAXDOP = 0) ON [PRIMARY];
Use this if, for example, a registration form inserts an empty string when the user leaves a textbox blank and clicks the submit button:
CREATE UNIQUE NONCLUSTERED INDEX [IX_tableName_Column]
ON [dbo].[tableName]([columnName] ASC) WHERE [columnName] != '';

How to force duplicate key insert mssql

I know this sounds crazy (And if I designed the database I would have done it differently) but I actually want to force a duplicate key on an insert. I'm working with a database that was designed to have columns as 'not null' pk's that have the same value in them for every row. The records keeping software I'm working with is somehow able to insert dups into these columns for every one of its records. I need to copy data from a column in another table into one column on this one. Normally I just would try to insert into that column only, but the pk's are set to 'not null' so I have to put something in them, and the way the table is set up that something has to be the same thing for every record. This should be impossible but the company that made the records keeping software made it work some how. I was wondering if anyone knows how this could be done?
P.S. I know this is normally not a good idea at all. So please just include suggestions for how this could be done regardless of how crazy it is. Thank you.
A SQL Server primary key has to be unique and NOT NULL. So, the column you're seeing duplicate data in cannot be the primary key on its own. As urlreader suggests, it must be part of a composite primary key with one or more other columns.
How to tell what columns make up the primary key on a table: In Enterprise Manager, expand the table and then expand Columns. The primary key columns will have a "key" symbol next to them. You'll also see "PK" in the column description after, like this:
MyFirstIDColumn (PK, int, not null)
MySecondIDColumn (PK, int, not null)
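You can also list the primary key columns with a catalog query, for example:
SELECT c.name
FROM sys.indexes i
JOIN sys.index_columns ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.is_primary_key = 1
  AND i.object_id = OBJECT_ID('dbo.MyTable');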
Once you know which columns make up the primary key, simply ensure that you are inserting a combination of unique data into the columns. So, for my sample table above, that would be:
INSERT INTO MyTable (MyFirstIDColumn, MySecondIDColumn) VALUES (1,1) --SUCCEED
INSERT INTO MyTable (MyFirstIDColumn, MySecondIDColumn) VALUES (1,2) --SUCCEED
INSERT INTO MyTable (MyFirstIDColumn, MySecondIDColumn) VALUES (1,1) --FAIL because of duplicate (1,1)
INSERT INTO MyTable (MyFirstIDColumn, MySecondIDColumn) VALUES (1,3) --SUCCEED
More on primary keys:
http://msdn.microsoft.com/en-us/library/ms191236%28v=sql.105%29.aspx

SQL Server Constraint (Limit bit field based on a foreign key)

I need help with constraints in SQL Server. The situation is: for each OrderID=1 (a foreign key, not a primary key, so there are multiple rows with the same ID) in the table, the bit field can only be 1 for one of those rows; likewise for each row with OrderID=2, and so on. It should be 0 for all other rows with the same OrderID. Any new record coming in with 1 in the bit field should be rejected if there is already a row with that OrderID which has the bit field set to 1. Any ideas?
CREATE UNIQUE INDEX UnnamedIndex ON UnnamedTable (OrderID) WHERE UnnamedBitField=1
It's called a Filtered Index. If you're on a pre-2008 version of SQL Server, you can implement a poor-mans equivalent of a filtered index using an indexed view:
CREATE VIEW UnnamedView
WITH SCHEMABINDING
AS
SELECT OrderID From UnnamedSchema.UnnamedTable WHERE UnnamedBitField=1
GO
CREATE UNIQUE CLUSTERED INDEX UnnamedViewIndex ON UnnamedView (OrderID)
You can't really do it as a constraint, since SQL Server only supports column constraints and row constraints. There's no (non-fudging) way to write a constraint that deals with all values in the table.
You could more fully normalize the schema, which spares you hunting for the already-set bit and lets you use a join instead. You remove the bit field and create a new table, say X, containing OrderID and the primary key of your table, with OrderID unique in X (so each order can have at most one such row).
This means that when you insert, you insert into your original table, and into X if and only if you would have set the bit to 1 on your table. The insert will fail if there is already a row in X, just as if there were already an original row with the bit set to 1.
The downside is that this takes up more space than your schema, but it is easier to maintain, as you can't get to the equivalent of having two rows with the bit set to 1.
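A sketch of that design (the names X and RowID are assumptions):
CREATE TABLE X
(
    OrderID int NOT NULL PRIMARY KEY,  -- at most one "active" row per order
    RowID int NOT NULL UNIQUE          -- the original table's primary key
);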
The only way to do that is to subclass the parent table. You didn't mention it, but a common reason for this pattern is to represent one unique active row from the set of all rows with the same common key value. Let's assume your bit field represents the active orders...
Then I would create a separate table called ActiveOrders, which will only contain the one row with the bit field set to 1
Create Table ActiveOrders(OrderId Integer Primary Key Not Null)
and the other table with all the rows in it, with its own unique primary key OrderId:
Create Table AllOrders
(
    OrderId Integer Primary Key Not Null,
    ActiveOrderId Integer Not Null,
    [All other data fields]
    Constraint FK_AllOrders2ActiveOrder
        Foreign Key(ActiveOrderId) references ActiveOrders(OrderId)
)
You now no longer even need the bit field, as the presence of the row in the ActiveOrders table identifies it as the Active Order... To get only the active Orders (the ones that in your scheme would have bit field set to 1), just join the two tables.
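For example, to list only the active orders (a sketch using the tables above):
SELECT o.*
FROM AllOrders o
INNER JOIN ActiveOrders a ON a.OrderId = o.OrderId;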
I agree with the other answers, and if you can change the schema then do that; but if not, then I think something like this will do.
CREATE FUNCTION dbo.fnMyCheck
(@id INT)
RETURNS INT
AS
BEGIN
    DECLARE @i INT
    SELECT @i = COUNT(*)
    FROM YourTable
    WHERE FkCol = @id
    AND BitCol = 1
    RETURN @i
END

ALTER TABLE YourTable
ADD CONSTRAINT ckMyCheck CHECK (dbo.fnMyCheck(FkCol)<=1)
but there are problems that can come from using a UDF in a check constraint, such as this.
Edit to add comment regarding problems with this 'solution':
There are more straightforward issues than what you've linked to.
INSERT INTO YourTable(FkCol,BitCol) VALUES (1,1),(1,0)
followed by
UPDATE YourTable SET BitCol=1
succeeds and leaves two rows with FkCol=1 and BitCol=1

Is there a smart way to append a number to a PK identity column in a relational database w/o total catastrophe?

It's far from the ideal situation, but I need to fix a database by appending the number "1" to the PK identity column, which has FK relations to four other tables. I'm basically making a four digit number a five digit number. I need to maintain the relations. I could store the number in a var, do a SET query and append the 1, and do that for each table...
Is there a better way of doing this?
You say you are using an identity data type for your primary key, so before you update the numbers you will have to SET IDENTITY_INSERT ON and then turn it off again after the update.
As long as you have cascading updates set for your relations the other tables should be updated automatically.
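"Cascading updates" here means the foreign keys were declared with ON UPDATE CASCADE, e.g. (a sketch with hypothetical names):
ALTER TABLE dbo.ChildTable
    ADD CONSTRAINT FK_ChildTable_ParentTable
    FOREIGN KEY (ParentId) REFERENCES dbo.ParentTable(Id)
    ON UPDATE CASCADE;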
EDIT: As it's not possible to change an identity value I guess you have to export the data, set the new identity values (+10000) and then import your data again.
Anyone have a better suggestion...
Consider adding another field to the PK instead of extending the length of the PK field. Your new field will have to cascade to the related tables, like a field length increase would, but you get to retain your original PK values.
My suggestion is:
Stop writing to the tables.
Copy the tables to new tables with the new PK.
Rename the old tables to backup names.
Rename the new tables to the original table name.
Count the rows in all the tables and double check your work.
Continue using the tables.
Changing a PK after the fact is not fun.
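A sketch of steps 2 through 4 for one table, assuming "appending 1" to a four digit key means adding 10000 (so 4567 becomes 14567; Orders and Name are hypothetical names):
SELECT Id + 10000 AS Id, Name   -- Name stands in for the remaining columns
INTO dbo.Orders_New
FROM dbo.Orders;
EXEC sp_rename 'dbo.Orders', 'Orders_Backup';
EXEC sp_rename 'dbo.Orders_New', 'Orders';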
If the column in question has an identity property on it, it gets complicated. This is more-or-less how I'd do it:
Back up your database.
Put it in single user mode. You don't need anybody mucking around whilst you do the surgery.
Execute the ALTER TABLE statements necessary to
disable the primary key constraint on the table in question
disable all triggers on the table in question
disable all foreign key constraints referencing the table in question.
Clone your table, giving it a new name and a column-for-column identical definitions. Don't bother with any triggers, indices, foreign keys or other constraints. Omit the identity property from the table's definition.
Create a new 'map' table that will map your old id values to the new value:
create table dbo.pk_map
(
    old_id int not null primary key clustered,
    new_id int not null unique nonclustered
)
Populate the map table:
insert dbo.pk_map
select old_id = old.id ,
       new_id = f( old.id )   -- f(x) is the desired transform, e.g. old.id + 10000
from dbo.tableInQuestion old
Populate your new table, giving the primary key column the new value:
insert dbo.tableInQuestion_NEW
select id = map.new_id ,
       ...
from dbo.tableInQuestion old
join dbo.pk_map map on map.old_id = old.id
Empty the original table. Note that TRUNCATE TABLE is disallowed on a table referenced by foreign key constraints (even disabled ones), so use DELETE FROM dbo.tableInQuestion instead; that is safe here since you've disabled all the triggers.
Execute SET IDENTITY_INSERT dbo.tableInQuestion ON.
Reload the original table, noting that with IDENTITY_INSERT ON you must supply an explicit column list:
insert dbo.tableInQuestion ( id , ... )
select id , ...
from dbo.tableInQuestion_NEW
Execute SET IDENTITY_INSERT dbo.tableInQuestion OFF
Execute drop table dbo.tableInQuestion_NEW. We're all done with it.
Execute DBCC CHECKIDENT( 'dbo.tableInQuestion' , RESEED ) to get the identity counter back in sync with the data in the table.
Now, use the map table to propagate the changed primary key column down the line. Depending on your E-R model, this can get complicated as foreign keys referencing the updated column may themselves be part of a composite primary key.
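A sketch of that propagation for one child table (childTable and fk_id are hypothetical names):
update c
set c.fk_id = map.new_id
from dbo.childTable c
join dbo.pk_map map on map.old_id = c.fk_id;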
When you're all done, start re-enabling the constraints and triggers you disabled. Make sure you do this using the WITH CHECK option. Fix any problems thus uncovered.
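For example (a sketch; child tables would be handled the same way):
alter table dbo.tableInQuestion with check check constraint all;
enable trigger all on dbo.tableInQuestion;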
Finally, drop the map table, and clear the single user flag and bring your system(s) back online.
Piece of cake! (or something.)
Consider this approach:
Reset the identity seed to 10000 + the current seed.
Set identity insert on
Insert into the table from the values in the table and add 10000 to the identity column on the way.
EX (pseudocode; "identity" stands for your identity column):
Set identity_insert Table on
Insert Table (identity, column1, column2)
select identity + 10000, column1, column2
From Table
Where identity < original max identity value
After the insert you know the identity is exactly 10000 more than the original.
Update the foreign keys by adding 10000.

How to create a unique index on a NULL column?

I am using SQL Server 2005. I want to constrain the values in a column to be unique, while allowing NULLS.
My current solution involves a unique index on a view like so:
CREATE VIEW vw_unq WITH SCHEMABINDING AS
SELECT Column1
FROM MyTable
WHERE Column1 IS NOT NULL
CREATE UNIQUE CLUSTERED INDEX unq_idx ON vw_unq (Column1)
Any better ideas?
Using SQL Server 2008, you can create a filtered index.
CREATE UNIQUE INDEX AK_MyTable_Column1 ON MyTable (Column1) WHERE Column1 IS NOT NULL
Another option is a trigger to check uniqueness, but this could affect performance.
The calculated column trick is widely known as a "nullbuster"; my notes credit Steve Kass:
CREATE TABLE dupNulls (
pk int identity(1,1) primary key,
X int NULL,
nullbuster as (case when X is null then pk else 0 end),
CONSTRAINT dupNulls_uqX UNIQUE (X,nullbuster)
)
Pretty sure you can't do that, as it violates the purpose of uniques.
However, this person seems to have a decent work around:
http://sqlservercodebook.blogspot.com/2008/04/multiple-null-values-in-unique-index-in.html
It is possible to use filter predicates to specify which rows to include in the index.
From the documentation:
WHERE <filter_predicate> Creates a filtered index by specifying which rows to include in the index. The filtered index must be a nonclustered index on a table. Creates filtered statistics for the data rows in the filtered index.
Example:
CREATE TABLE Table1 (
NullableCol int NULL
)
CREATE UNIQUE INDEX IX_Table1 ON Table1 (NullableCol) WHERE NullableCol IS NOT NULL;
Strictly speaking, a unique nullable column (or set of columns) can be NULL (or a record of NULLs) only once, since having the same value (and this includes NULL) more than once obviously violates the unique constraint.
However, that doesn't mean the concept of "unique nullable columns" is invalid; to actually implement it in any relational database, we just have to bear in mind that these kinds of databases are meant to be normalized to work properly, and normalization usually involves the addition of several (non-entity) extra tables to establish relationships between the entities.
Let's work through a basic example considering only one "unique nullable column"; it's easy to expand it to more such columns.
Suppose we have the information represented by a table like this:
create table the_entity_incorrect
(
id integer,
uniqnull integer null, /* we want this to be "unique and nullable" */
primary key (id)
);
We can do it by putting uniqnull apart and adding a second table to establish a relationship between uniqnull values and the_entity (rather than having uniqnull "inside" the_entity):
create table the_entity
(
id integer,
primary key(id)
);
create table the_relation
(
the_entity_id integer not null,
uniqnull integer not null,
unique(the_entity_id),
unique(uniqnull),
/* primary key can be both or either of the_entity_id or uniqnull */
primary key (the_entity_id, uniqnull),
foreign key (the_entity_id) references the_entity(id)
);
To associate a value of uniqnull to a row in the_entity we need to also add a row in the_relation.
For rows in the_entity where no uniqnull value is associated (i.e. the ones for which we would put NULL in the_entity_incorrect), we simply do not add a row to the_relation.
Note that values for uniqnull will be unique for all the_relation, and also notice that for each value in the_entity there can be at most one value in the_relation, since the primary and foreign keys on it enforce this.
Then, if a value of 5 for uniqnull is to be associated with the_entity id 3, we need to:
start transaction;
insert into the_entity (id) values (3);
insert into the_relation (the_entity_id, uniqnull) values (3, 5);
commit;
And, if an id value of 10 for the_entity has no uniqnull counterpart, we only do:
start transaction;
insert into the_entity (id) values (10);
commit;
To denormalize this information and obtain the data a table like the_entity_incorrect would hold, we need to:
select id, uniqnull
from the_entity
left outer join the_relation
    on the_entity.id = the_relation.the_entity_id
;
The "left outer join" operator ensures all rows from the_entity will appear in the result, putting NULL in the uniqnull column when no matching columns are present in the_relation.
Remember, any effort spent for some days (or weeks or months) in designing a well normalized database (and the corresponding denormalizing views and procedures) will save you years (or decades) of pain and wasted resources.