SQLite 'constraint failed' error while selecting and inserting at the same time

I'm working on a migration function. It reads data from the old table and inserts it into the new one. All of this runs in a background thread with low priority.
My steps in pseudocode:
sqlite3_prepare_stmt (select statement)
sqlite3_prepare_stmt (insert statement)
while (sqlite3_step (select statement) == SQLITE_ROW)
{
    get data from select row results
    sqlite3_bind select results to insert statement
    sqlite3_step (insert statement)
    sqlite3_reset (insert statement)
}
sqlite3_reset (select statement)
I always get a 'constraint failed' error on sqlite3_step (insert statement). Why does this happen, and how can I fix it?
UPD: As I understand it, this happens because the background thread uses a db handle that was opened in the main thread. Checking that guess now.
UPD2:
sqlite> select sql from sqlite_master where tbl_name = 'tiles';
CREATE TABLE tiles('pk' INTEGER PRIMARY KEY, 'data' BLOB, 'x' INTEGER, 'y' INTEGER, 'z' INTEGER, 'importKey' INTEGER)
sqlite> select sql from sqlite_master where tbl_name = 'tiles_v2';
CREATE TABLE tiles_v2 (pk int primary key, x int, y int, z int, layer int, data blob, timestamp real)

It probably means your insert statement is violating a constraint in the new table. Could be a primary key constraint, a unique constraint, a foreign key constraint (if you're using PRAGMA foreign_keys = ON;), and so on.
You fix that either by dropping the constraint, correcting the data, or dropping the data. Dropping the constraint is usually a Bad Thing, but that depends on the application.
Is there a compelling reason to copy data one row at a time instead of as a set?
INSERT INTO new_table
SELECT column_list FROM old_table;
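For the tables in the question's update, that set-based copy might look like this (a sketch: the importKey-to-layer mapping and the timestamp value are my assumptions, not something the schemas guarantee):
INSERT INTO tiles_v2 (pk, x, y, z, layer, data, timestamp)
SELECT pk, x, y, z, importKey, data, strftime('%s', 'now')  -- assumed mapping; the old table has no timestamp
FROM tiles;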
If you need help identifying the constraint, edit your original question, and post the output of these two SQLite queries.
select sql from sqlite_master where tbl_name = 'old_table_name';
select sql from sqlite_master where tbl_name = 'new_table_name';
Update: Based on the output of those two queries, I see only one constraint: the primary key constraint in each table. If you haven't built any triggers on these tables, the only constraint that can fail is the primary key constraint. And the only way that constraint can fail is if you try to insert two rows that have the same value for 'pk'.
I suppose that could happen in a few different ways.
The old table has duplicate values in the 'pk' column.
The code that does your migration alters or injects a duplicate value before inserting data into your new table.
Another process, possibly running on a different computer, is inserting or updating data without your knowledge.
Other reasons I haven't thought of yet. :-)
You can determine whether there are duplicate values of 'pk' in the old table by running this query.
select pk
from old_table_name
group by pk
having count(*) > 1;
You might consider trying to migrate the data manually using INSERT INTO ... SELECT ... If that fails, add a WHERE clause to reduce the size of the set until you isolate the bad data.
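For example, a sketch of that bisection against the tables in the question (the pk bound is arbitrary; halve or widen it until the failing range is isolated):
INSERT INTO tiles_v2 (pk, x, y, z, layer, data, timestamp)
SELECT pk, x, y, z, importKey, data, NULL
FROM tiles
WHERE pk < 5000;  -- adjust this bound to bisect toward the bad rows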

I have found troubleshooting foreign key constraint errors with SQLite to be troublesome, particularly on large data sets. However, the following approach helps identify the offending relation.
Disable foreign key checking: PRAGMA foreign_keys = 0;
Execute the statement that caused the error; in my case it was an INSERT of 70,000 rows with 3 different foreign key relations.
Re-enable foreign key checking: PRAGMA foreign_keys = 1;
Identify the foreign key errors: PRAGMA foreign_key_check(table-name);
In my case, it showed the 13 rows with invalid references.
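Put together, the whole sequence looks like this (a sketch; child_table and staging_table are placeholder names):
PRAGMA foreign_keys = 0;                              -- disable enforcement
INSERT INTO child_table SELECT * FROM staging_table;  -- the statement that failed
PRAGMA foreign_keys = 1;                              -- re-enable enforcement
PRAGMA foreign_key_check(child_table);                -- lists each row holding an invalid reference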

Just in case anyone lands here looking for the "Constraint failed" error message: make sure your Id column's type is INTEGER, not INTEGER (0, 15) or something similar.
Background
If your table has a column named Id, with type INTEGER, set as the primary key, SQLite treats it as an alias for the built-in rowid column. This column works like an auto-increment column. In my case, this column was working fine until some table designer (probably the one created by the SQLite guys for Visual Studio) changed the column type from INTEGER to INTEGER (0, 15), and all of a sudden my application started throwing a 'Constraint failed' exception.

Just to make this clearer: make sure you use the correct data type. I was using int instead of integer when creating my tables, like this:
id int primary key not null
and I was stuck on the constraint problem for hours. Just make sure to enter the data type correctly when creating the database.
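The difference is easy to demonstrate, since only the exact spelling INTEGER PRIMARY KEY makes the column an alias for the rowid (a sketch; the table names are hypothetical):
CREATE TABLE good (id INTEGER PRIMARY KEY NOT NULL, name TEXT);
INSERT INTO good (name) VALUES ('works');  -- id is auto-assigned via the rowid
CREATE TABLE bad (id INT PRIMARY KEY NOT NULL, name TEXT);
INSERT INTO bad (name) VALUES ('fails');   -- id stays NULL, so the NOT NULL constraint fails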

Another reason this error occurs is inserting duplicate data into a column that must be unique. Check all your unique columns and keys for duplicated data.

Today I had a similar error: CHECK constraint failed: profile
First I read up on what a constraint is:
Constraints are integrity rules that define conditions restricting a column to contain only valid data on insert, update, or delete. There are two kinds of constraints: column-level and table-level. Column-level constraints can be applied only to a specific column, whereas table-level constraints apply to the whole table.
Then I figured out that I needed to check how the database was created. Because I used SQLAlchemy with the sqlite3 dialect, I had to check the table schema; the schema shows the actual database structure:
sqlite3 >>> .schema
CREATE TABLE profile (
id INTEGER NOT NULL,
...
forum_copy_exist BOOLEAN DEFAULT 'false',
forum_deleted BOOLEAN DEFAULT 'true',
PRIMARY KEY (id),
UNIQUE (id),
UNIQUE (login_name),
UNIQUE (forum_name),
CHECK (email_verified IN (0, 1)),
UNIQUE (uid),
CHECK (forum_copy_exist IN (0, 1)),
CHECK (forum_deleted IN (0, 1))
);
Here I found that the boolean CHECK allows only 0 or 1, while I had set the column defaults to 'false'. So whenever I supplied no value for forum_copy_exist or forum_deleted, 'false' was inserted, and because 'false' is not a valid value for the CHECK, the error was thrown and the row was not inserted.
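A minimal reproduction of that behavior (table and column names are hypothetical):
CREATE TABLE t (flag BOOLEAN DEFAULT 'false' CHECK (flag IN (0, 1)));
INSERT INTO t DEFAULT VALUES;  -- CHECK constraint failed: 'false' is neither 0 nor 1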
So changing database defaults to:
forum_copy_exist BOOLEAN DEFAULT '0',
forum_deleted BOOLEAN DEFAULT '0',
solved the issue.
In PostgreSQL, I think false is a valid boolean value, so it depends on how the database schema is created.
Hope this helps others in the future.

Related

Violation of UNIQUE KEY constraint '...'. Cannot insert duplicate key in object 'dbo.Cliente'. The duplicate key value is (<NULL>) [duplicate]

I want to have a unique constraint on a column which I am going to populate with GUIDs. However, my data contains NULL values for this column. How do I create a constraint that allows multiple NULL values?
Here's an example scenario. Consider this schema:
CREATE TABLE People (
Id INT CONSTRAINT PK_MyTable PRIMARY KEY IDENTITY,
Name NVARCHAR(250) NOT NULL,
LibraryCardId UNIQUEIDENTIFIER NULL,
CONSTRAINT UQ_People_LibraryCardId UNIQUE (LibraryCardId)
)
Then see this code for what I'm trying to achieve:
-- This works fine:
INSERT INTO People (Name, LibraryCardId)
VALUES ('John Doe', 'AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA');
-- This also works fine, obviously:
INSERT INTO People (Name, LibraryCardId)
VALUES ('Marie Doe', 'BBBBBBBB-BBBB-BBBB-BBBB-BBBBBBBBBBBB');
-- This would *correctly* fail:
--INSERT INTO People (Name, LibraryCardId)
--VALUES ('John Doe the Second', 'AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA');
-- This works fine this one first time:
INSERT INTO People (Name, LibraryCardId)
VALUES ('Richard Roe', NULL);
-- THE PROBLEM: This fails even though I'd like to be able to do this:
INSERT INTO People (Name, LibraryCardId)
VALUES ('Marcus Roe', NULL);
The final statement fails with a message:
Violation of UNIQUE KEY constraint 'UQ_People_LibraryCardId'. Cannot insert duplicate key in object 'dbo.People'.
How can I change my schema and/or uniqueness constraint so that it allows multiple NULL values, while still checking for uniqueness on actual data?
What you're looking for is indeed part of the ANSI standards SQL:92, SQL:1999 and SQL:2003: a UNIQUE constraint must disallow duplicate non-NULL values but accept multiple NULL values.
In the Microsoft world of SQL Server however, a single NULL is allowed but multiple NULLs are not...
In SQL Server 2008, you can define a unique filtered index based on a predicate that excludes NULLs:
CREATE UNIQUE NONCLUSTERED INDEX idx_yourcolumn_notnull
ON YourTable(yourcolumn)
WHERE yourcolumn IS NOT NULL;
In earlier versions, you can resort to VIEWS with a NOT NULL predicate to enforce the constraint.
SQL Server 2008 +
You can create a unique index that accepts multiple NULLs with a WHERE clause. See the answer below.
Prior to SQL Server 2008
You cannot create a UNIQUE constraint and allow NULLs. You need to set a default value of NEWID().
Update the existing values to NEWID() where NULL before creating the UNIQUE constraint.
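A sketch of that workaround against the question's People table (the constraint names are mine):
UPDATE People SET LibraryCardId = NEWID() WHERE LibraryCardId IS NULL;
ALTER TABLE People ADD CONSTRAINT DF_People_LibraryCardId DEFAULT NEWID() FOR LibraryCardId;
ALTER TABLE People ADD CONSTRAINT UQ_People_LibraryCardId UNIQUE (LibraryCardId);  -- now safe to create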
SQL Server 2008 And Up
Just filter a unique index:
CREATE UNIQUE NONCLUSTERED INDEX UQ_Party_SamAccountName
ON dbo.Party(SamAccountName)
WHERE SamAccountName IS NOT NULL;
In Lower Versions, A Materialized View Is Still Not Required
For SQL Server 2005 and earlier, you can do it without a view. I just added a unique constraint like you're asking for to one of my tables. Given that I want uniqueness in column SamAccountName, but I want to allow multiple NULLs, I used a materialized column rather than a materialized view:
ALTER TABLE dbo.Party ADD SamAccountNameUnique
AS (Coalesce(SamAccountName, Convert(varchar(11), PartyID)))
ALTER TABLE dbo.Party ADD CONSTRAINT UQ_Party_SamAccountName
UNIQUE (SamAccountNameUnique)
You simply have to put something in the computed column that will be guaranteed unique across the whole table when the actual desired unique column is NULL. In this case, PartyID is an identity column and being numeric will never match any SamAccountName, so it worked for me. You can try your own method—be sure you understand the domain of your data so that there is no possibility of intersection with real data. That could be as simple as prepending a differentiator character like this:
Coalesce('n' + SamAccountName, 'p' + Convert(varchar(11), PartyID))
Even if PartyID became non-numeric someday and could coincide with a SamAccountName, now it won't matter.
Note that the presence of an index including the computed column implicitly causes each expression result to be saved to disk with the other data in the table, which DOES take additional disk space.
Note that if you don't want an index, you can still save CPU by making the expression be precalculated to disk by adding the keyword PERSISTED to the end of the column expression definition.
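For example, the computed column above could be created with PERSISTED like this (a sketch repeating the earlier definition):
ALTER TABLE dbo.Party ADD SamAccountNameUnique
    AS (COALESCE(SamAccountName, CONVERT(varchar(11), PartyID))) PERSISTED;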
In SQL Server 2008 and up, definitely use the filtered solution instead if you possibly can!
Controversy
Please note that some database professionals will see this as a case of "surrogate NULLs", which definitely have problems (mostly due to issues around trying to determine when something is a real value or a surrogate value for missing data; there can also be issues with the number of non-NULL surrogate values multiplying like crazy).
However, I believe this case is different. The computed column I'm adding will never be used to determine anything. It has no meaning of itself, and encodes no information that isn't already found separately in other, properly defined columns. It should never be selected or used.
So, my story is that this is not a surrogate NULL, and I'm sticking to it! Since we don't actually want the non-NULL value for any purpose other than to trick the UNIQUE index to ignore NULLs, our use case has none of the problems that arise with normal surrogate NULL creation.
All that said, I have no problem with using an indexed view instead—but it brings some issues with it such as the requirement of using SCHEMABINDING. Have fun adding a new column to your base table (you'll at minimum have to drop the index, and then drop the view or alter the view to not be schema bound). See the full (long) list of requirements for creating an indexed view in SQL Server (2005) (also later versions), (2000).
Update
If your column is numeric, there may be the challenge of ensuring that the unique constraint using Coalesce does not result in collisions. In that case, there are some options. One might be to use a negative number, to put the "surrogate NULLs" only in the negative range, and the "real values" only in the positive range. Alternately, the following pattern could be used. In table Issue (where IssueID is the PRIMARY KEY), there may or may not be a TicketID, but if there is one, it must be unique.
ALTER TABLE dbo.Issue ADD TicketUnique
AS (CASE WHEN TicketID IS NULL THEN IssueID END);
ALTER TABLE dbo.Issue ADD CONSTRAINT UQ_Issue_Ticket_AllowNull
UNIQUE (TicketID, TicketUnique);
If IssueID 1 has ticket 123, the UNIQUE constraint will be on values (123, NULL). If IssueID 2 has no ticket, it will be on (NULL, 2). Some thought will show that this constraint cannot be duplicated for any row in the table, and still allows multiple NULLs.
For people who are using Microsoft SQL Server Management Studio and want to create a unique but nullable index: create your unique index as you normally would, then in the Index Properties for your new index, select "Filter" from the left-hand panel and enter your filter (which is your WHERE clause). It should read something like this:
([YourColumnName] IS NOT NULL)
This works with MSSQL 2012
When I applied the unique index below:
CREATE UNIQUE NONCLUSTERED INDEX idx_badgeid_notnull
ON employee(badgeid)
WHERE badgeid IS NOT NULL;
every non null update and insert failed with the error below:
UPDATE failed because the following SET options have incorrect settings: 'ARITHABORT'.
I found this on MSDN
SET ARITHABORT must be ON when you are creating or changing indexes on computed columns or indexed views. If SET ARITHABORT is OFF, CREATE, UPDATE, INSERT, and DELETE statements on tables with indexes on computed columns or indexed views will fail.
So to get this to work correctly I did this
Right click [Database] --> Properties --> Options --> Other Options --> Miscellaneous --> Arithmetic Abort Enabled --> true
I believe it is possible to set this option in code using
ALTER DATABASE "DBNAME" SET ARITHABORT ON
but I have not tested this.
It can be done in the designer as well: right click on the index and choose Properties.
Create a view that selects only the non-NULL rows and create the UNIQUE INDEX on the view (the view must be schema-bound, list its columns explicitly, and take a unique clustered index):
CREATE VIEW myview WITH SCHEMABINDING
AS
SELECT mycolumn   -- indexed views cannot use SELECT *; list every column you need
FROM dbo.mytable
WHERE mycolumn IS NOT NULL
GO
CREATE UNIQUE CLUSTERED INDEX ux_myview_mycolumn ON dbo.myview (mycolumn)
Note that you'll need to perform INSERTs and UPDATEs on the view instead of the table.
You may do it with an INSTEAD OF trigger:
CREATE TRIGGER trg_mytable_insert ON mytable
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO myview (mycolumn)   -- list the same explicit columns as the view
    SELECT mycolumn
    FROM inserted
END
It is possible to create a unique constraint on a Clustered Indexed View
You can create the View like this:
CREATE VIEW dbo.VIEW_OfYourTable WITH SCHEMABINDING AS
SELECT YourUniqueColumnWithNullValues FROM dbo.YourTable
WHERE YourUniqueColumnWithNullValues IS NOT NULL;
and the unique constraint like this:
CREATE UNIQUE CLUSTERED INDEX UIX_VIEW_OFYOURTABLE
ON dbo.VIEW_OfYourTable(YourUniqueColumnWithNullValues)
In my experience, if you're thinking a column needs to allow NULLs but also needs to be UNIQUE where values exist, you may be modelling the data incorrectly. This often suggests you're creating a separate sub-entity within the same table as a different entity. It probably makes more sense to have this entity in a second table.
In the provided example, I would put LibraryCardId in a separate LibraryCards table with a unique not-null foreign key to the People table:
CREATE TABLE People (
Id INT CONSTRAINT PK_MyTable PRIMARY KEY IDENTITY,
Name NVARCHAR(250) NOT NULL
)
CREATE TABLE LibraryCards (
LibraryCardId UNIQUEIDENTIFIER CONSTRAINT PK_LibraryCards PRIMARY KEY,
PersonId INT NOT NULL,
CONSTRAINT UQ_LibraryCardId_PersonId UNIQUE (PersonId),
FOREIGN KEY (PersonId) REFERENCES People(id)
)
This way you don't need to bother with a column being both unique and nullable. If a person doesn't have a library card, they just won't have a record in the library cards table. Also, if there are additional attributes about the library card (perhaps Expiration Date or something), you now have a logical place to put those fields.
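Querying under this design stays simple; a LEFT JOIN yields NULL where no card exists (a sketch using the tables above):
SELECT p.Id, p.Name, lc.LibraryCardId
FROM People p
LEFT JOIN LibraryCards lc ON lc.PersonId = p.Id;  -- NULL LibraryCardId means no card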
Maybe consider an "INSTEAD OF" trigger and do the check yourself, with a non-clustered (non-unique) index on the column to enable the lookup.
As stated before, SQL Server doesn't implement the ANSI standard when it comes to UNIQUE CONSTRAINT. There is a ticket on Microsoft Connect for this since 2007. As suggested there and here the best options as of today are to use a filtered index as stated in another answer or a computed column, e.g.:
CREATE TABLE [Orders] (
[OrderId] INT IDENTITY(1,1) NOT NULL,
[TrackingId] varchar(11) NULL,
...
[ComputedUniqueTrackingId] AS (
CASE WHEN [TrackingId] IS NULL
THEN '#' + cast([OrderId] as varchar(12))
ELSE [TrackingId] END
),
CONSTRAINT [UQ_TrackingId] UNIQUE ([ComputedUniqueTrackingId])
)
You can create an INSTEAD OF trigger to check for specific conditions and error if they are met. Creating an index can be costly on larger tables.
Here's an example:
CREATE TRIGGER PONY.trg_pony_unique_name ON PONY.tbl_pony
INSTEAD OF INSERT, UPDATE
AS
BEGIN
IF EXISTS(
SELECT TOP (1) 1
FROM inserted i
GROUP BY i.pony_name
HAVING COUNT(1) > 1
)
OR EXISTS(
SELECT TOP (1) 1
FROM PONY.tbl_pony t
INNER JOIN inserted i
ON i.pony_name = t.pony_name
)
THROW 911911, 'A pony must have a name as unique as s/he is. --PAS', 16;
ELSE
INSERT INTO PONY.tbl_pony (pony_name, stable_id, pet_human_id)
SELECT pony_name, stable_id, pet_human_id
FROM inserted
END
You can't do this with a UNIQUE constraint, but you can do this in a trigger.
CREATE TRIGGER [dbo].[OnInsertMyTableTrigger]
ON [dbo].[MyTable]
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
DECLARE @Column1 INT;
DECLARE @Column2 INT; -- allow nulls on this column
SELECT @Column1=Column1, @Column2=Column2 FROM inserted; -- note: as written this handles single-row inserts only
-- Check if an existing record already exists; if not, allow the insert.
IF NOT EXISTS(SELECT * FROM dbo.MyTable WHERE Column1=@Column1 AND Column2=@Column2 AND @Column2 IS NOT NULL)
BEGIN
INSERT INTO dbo.MyTable (Column1, Column2)
SELECT @Column1, @Column2;
END
ELSE
BEGIN
RAISERROR('The unique constraint applies on Column1 %d, AND Column2 %d, unless Column2 is NULL.', 16, 1, @Column1, @Column2);
ROLLBACK TRANSACTION;
END
END
CREATE UNIQUE NONCLUSTERED INDEX [UIX_COLUMN_NAME]
ON [dbo].[Employee]([Username] ASC) WHERE ([Username] IS NOT NULL)
WITH (ALLOW_PAGE_LOCKS = ON, ALLOW_ROW_LOCKS = ON, PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF,
DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, STATISTICS_NORECOMPUTE = OFF, ONLINE = OFF,
MAXDOP = 0) ON [PRIMARY];
This is useful if you have a registration form with a textbox, the user clicks the submit button while the textbox is empty, and the INSERT runs with an empty string:
CREATE UNIQUE NONCLUSTERED INDEX [IX_tableName_Column]
ON [dbo].[tableName]([columnName] ASC) WHERE [columnName] != '';

Postgres BIGSERIAL does not share sequence when inserts are made with multiple remote fdw sources

I am trying to insert into a table in a Postgres database from two other Postgres databases using Foreign Data Wrappers. The objective is to have an autogenerated primary key, independent of the source, as there will eventually be more than two sources.
I first defined the tables like so:
Target database:
create table dummy (
dummy_pk bigserial primary key
-- other fields
);
Sources databases:
create foreign table dummy (
dummy_pk bigserial
-- other fields
) server ... ;
This solution worked fine as long as I inserted from only one source. When I tried to insert from the other one, without specifying dummy_pk, I got this message:
Duplicate key (dummy_pk)=(1)
Because Postgres tries to insert an id of 1, I believe the sequence used for each source foreign table is different. I changed the source tables a bit in an attempt to let the target table's sequence do the job for the id:
create foreign table dummy (
dummy_pk bigint
-- other fields
) server ... ;
This time I got a different error:
NULL value violates NOT NULL constraint on column « dummy_pk »
Therefore I believe the source server sends a query to the target in which dummy_pk is null, and the target does not replace it with the default value.
So, is there a way I can force the use of the target's sequence in a query executed on the source? Maybe I have to share that sequence; can I create a foreign sequence? I cannot remove the column from the foreign tables, as I need read access to them.
Thanks!
Remove dummy_pk from the foreign tables so that the destination table receives neither NULL nor a value, and so falls back to the DEFAULT (or to NULL if no DEFAULT is specified). If you attempt to pass DEFAULT to a foreign table, it will try to use the DEFAULT value of the foreign table instead.
create foreign table dummy (
/*dummy_pk bigserial,*/
column1 text,
column2 int2,
-- other fields
) server ... ;
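With the key column omitted from the foreign table, an insert from a source now lets the target's sequence assign it (a sketch; the column names are the placeholders above):
-- executed on a source database: dummy_pk is absent from the foreign table,
-- so the target's bigserial DEFAULT fills it in
INSERT INTO dummy (column1, column2) VALUES ('some text', 42);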
Another way would be to grab sequence values from the destination server using dblink, but I think this approach is better (if you can afford to have this column removed from the foreign tables).

Why can primary key column be empty in SQLite?

I have a table with only one primary column (nvarchar); in the designer it's marked as the primary key and does not allow nulls. But somehow there is a row in that table whose key value is an empty string. It is of course not null and doesn't duplicate any other row in the primary key column, and no conflict or violation occurs.
However, as far as I know, that kind of value (empty) is not allowed for a primary column in SQL Server. I wonder if there is an option to turn on to make it work properly, or whether I have to check the value myself through a CHECK constraint or in C# code (before updating).
Your help would be highly appreciated. Thanks!
"Empty-string" is a string with length of zero. It is NOT NULL, so it doesn't violate a null check. It most certainly IS an allowed value for a character-based primary key column in SQL Server. If the empty string value is not allowed by the business, a check constraint would be the best way to implement this as a business rule. That way, clients which might not know about the rule can't violate it.
This code runs without violations in SQL Server, I just tested it just to be sure.
create table TestTable (
myKey varchar(10) primary key,
myData int
)
GO
insert TestTable
select '', 1
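A sketch of the check constraint suggested above, using the same test table (note the empty-string row just inserted would have to be cleaned up first, or the constraint added WITH NOCHECK):
ALTER TABLE TestTable
ADD CONSTRAINT CK_TestTable_myKey_NotEmpty CHECK (myKey <> '');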

postgresql: error duplicate key value violates unique constraint

This question has been asked by several people, but my problem seems to be different.
I have to merge identically structured tables from different PostgreSQL databases into a new DB. I connect to the remote db using dblink, read the table there, and insert the rows into the table in the current DB, like below:
INSERT INTO t_types_of_dementia SELECT * FROM dblink('host=localhost port=5432 dbname=snap-cadence password=xxxxxx', 'SELECT dementia_type, snapid FROM t_types_of_dementia') as x(dementia_type VARCHAR(256),snapid integer);
The first time, this query runs fine, but when I run it again, or try to run it with some other remote database's table, it gives me this error:
ERROR: duplicate key value violates unique constraint
"t_types_of_dementia_pkey"
I want this new table to be populated with the entries of the corresponding tables from the other DBs.
Some of the proposed solutions talk about sequences, but I am not using any.
The structure of the table in current db is
CREATE TABLE t_types_of_dementia(
dementia_type VARCHAR(256),
snapid integer NOT NULL,
PRIMARY KEY (dementia_type,snapid)
);
P.S. There is a specific reason why both columns are used as the primary key; it may not be relevant to this discussion, because the same issue happens in other tables where this is not the case.
As the error message tells you, you cannot have two rows with the same values in the columns (dementia_type, snapid), since they need to be unique.
You have to make sure that the two databases don't contain the same (dementia_type, snapid) pairs.
A workaround would be to add an auto-generated column to your table, e.g. alter table t_types_of_dementia add column id bigint generated always as identity, and use that as the primary key instead of your current one.
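Alternatively, if you just want later merges to skip rows that already exist, PostgreSQL 9.5+ can do that with ON CONFLICT (a sketch reusing the dblink call from the question):
INSERT INTO t_types_of_dementia (dementia_type, snapid)
SELECT x.dementia_type, x.snapid
FROM dblink('host=localhost port=5432 dbname=snap-cadence password=xxxxxx',
            'SELECT dementia_type, snapid FROM t_types_of_dementia')
     AS x(dementia_type VARCHAR(256), snapid integer)
ON CONFLICT (dementia_type, snapid) DO NOTHING;  -- duplicates are skipped instead of failing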

Is there a smart way to append a number to a PK identity column in a relational database w/o total catastrophe?

It's far from the ideal situation, but I need to fix a database by appending the number "1" to the PK identity column, which has FK relations to four other tables. I'm basically making a four-digit number a five-digit number. I need to maintain the relations. I could store the number in a var, do a SET query and append the 1, and do that for each table...
Is there a better way of doing this?
You say you are using an identity data type for your primary key, so before you update the numbers you will have to SET IDENTITY_INSERT <table> ON and then turn it off again after the update.
As long as you have cascading updates set for your relations the other tables should be updated automatically.
EDIT: As it's not possible to change an identity value, I guess you have to export the data, set the new identity values (+10000), and then import your data again.
Anyone have a better suggestion...
Consider adding another field to the PK instead of extending the length of the PK field. Your new field will have to cascade to the related tables, like a field length increase would, but you get to retain your original PK values.
My suggestion is:
Stop writing to the tables.
Copy the tables to new tables with the new PK.
Rename the old tables to backup names.
Rename the new tables to the original table name.
Count the rows in all the tables and double check your work.
Continue using the tables.
Changing a PK after the fact is not fun.
If the column in question has an identity property on it, it gets complicated. This is more-or-less how I'd do it:
Back up your database.
Put it in single user mode. You don't need anybody mucking around whilst you do the surgery.
Execute the ALTER TABLE statements necessary to
disable the primary key constraint on the table in question
disable all triggers on the table in question
disable all foreign key constraints referencing the table in question.
Clone your table, giving it a new name and a column-for-column identical definition. Don't bother with any triggers, indices, foreign keys or other constraints. Omit the identity property from the table's definition.
Create a new 'map' table that will map your old id values to the new value:
create table dbo.pk_map
(
old_id int not null primary key clustered,
new_id int not null unique nonclustered
)
Populate the map table:
insert dbo.pk_map
select old_id = old.id,
       new_id = f( old.id )  -- f(x) is the desired transform
from dbo.tableInQuestion old
Populate your new table, giving the primary key column the new value:
insert dbo.tableInQuestion_NEW
select id = map.new_id,
       ...
from dbo.tableInQuestion old
join dbo.pk_map map on map.old_id = old.id
Truncate the original table: TRUNCATE TABLE dbo.tableInQuestion. This should work safely, since you've disabled all the triggers and foreign key constraints.
Execute SET IDENTITY_INSERT dbo.tableInQuestion ON.
Reload the original table:
insert dbo.tableInQuestion
select *
from dbo.tableInQuestion_NEW
Execute SET IDENTITY_INSERT dbo.tableInQuestion OFF
Execute drop table dbo.tableInQuestion_NEW. We're all done with it.
Execute DBCC CHECKIDENT( 'dbo.tableInQuestion' , RESEED ) to get the identity counter back in sync with the data in the table.
Now, use the map table to propagate the changed primary key column down the line. Depending on your E-R model, this can get complicated as foreign keys referencing the updated column may themselves be part of a composite primary key.
When you're all done, start re-enabling the constraints and triggers you disabled. Make sure you do this using the WITH CHECK option. Fix any problems thus uncovered.
Finally, drop the map table, and clear the single user flag and bring your system(s) back online.
Piece of cake! (or something.)
Consider this approach:
Reset the identity seed to 10000 + the current seed.
Set identity insert on.
Insert into the table from the values in the table, adding 10000 to the identity column on the way.
EX:
Set identity insert on
Insert Table(identity, column1, column2)
Select identity + 10000, column1, column2
From Table
Where identity < original max identity value
After the insert you know the identity is exactly 10000 more than the original.
Update the foreign keys by adding 10000, for example as sketched below.
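That foreign key update follows the same pattern for each child table (a sketch; the table and column names are hypothetical, and original_max_identity_value is the value captured before the insert above):
UPDATE ChildTable
SET ParentId = ParentId + 10000
WHERE ParentId < original_max_identity_value;  -- repoint children at the renumbered parent rows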