How to reorder a primary key? - primary-key

I have deleted one row (row 20) in my category table. Please let me know how I can reorder the catid (primary key)? At the moment it jumps from 19 to 21.
Thanks

You cannot. The closest you can get is TRUNCATE TABLE, which removes all rows (in MySQL it effectively drops and recreates the table), meaning you lose all data in it, and the auto-increment counter is reset so the next insert gets 1. Other than that, the ID will always increment by one from the last inserted record, whether or not that record still exists. You can write a query to fix all your current IDs, of course, but the next insert will still create a new gap. More to the point: if sequential ordering without gaps is what you want, an auto-increment ID is not the proper way to achieve it. Add another int field in which you manually keep track of this ordering, as in the sketch below.
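A minimal MySQL sketch of that approach, assuming the table is named category (names are illustrative):
-- add a separate ordering column
ALTER TABLE category ADD COLUMN sort_order INT NOT NULL DEFAULT 0;
-- (re)number it 1..N, e.g. after a delete, ordering by the existing key
SET @n := 0;
UPDATE category SET sort_order = (@n := @n + 1) ORDER BY catid;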

If you care enough about your primary key values that such a gap is unwanted, you shouldn't be using auto-number primary keys in the first place.
The whole point of an auto-number key is that you are saying: "As long as the key is unique, I don't really care about its value."

Don't mess with the primary keys. They should never change and you should not use them in your app for anything but joining tables.
Add a new column if you need a gapless index and update this column accordingly when you do inserts/removes. This might sound like useless work for you right now, but it will save you a lot of pain later.

Try this:
UPDATE tbl SET catid = (SELECT COUNT(*) FROM tbl t WHERE t.catid <= tbl.catid);
(Note that MySQL will reject a subquery that reads from the table being updated; the user-variable approach further down avoids that.) You might also want to rethink / redesign: renumbering the entire table every time you delete a row is unlikely to be either practical or necessary.

Actually you can, if your rows have unique enough data and you are using phpMyAdmin:
Delete the column with the primary ID.
Re-add the column with Primary Key and Auto Increment enabled.

What do you mean by reordering the primary key? If you want the primary key to take 20 instead of 21, then I'm afraid you can't do that straight away.
All you can do is drop the primary key constraint, change the 21 to 20, and then reapply the primary key constraint, as sketched below.
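A hedged MySQL sketch of those steps (assuming catid is not AUTO_INCREMENT; if it is, you would have to drop that attribute first, since MySQL requires an auto-increment column to be a key):
ALTER TABLE category DROP PRIMARY KEY;
UPDATE category SET catid = 20 WHERE catid = 21;
ALTER TABLE category ADD PRIMARY KEY (catid);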

David is right about not using the primary key for indexing and such.
If you just have to change a particular primary key value once (I've done it sometimes during migration), you can of course SET IDENTITY_INSERT ON, copy the row with an INSERT...SELECT, and then delete the original one.
For recreating a sort order, or a column used as an index in your application, you could use the following stored procedure:
CREATE PROCEDURE [dbo].[OrganizeOrderConfirmationMessages]
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @sortOrder INT;
    SET @sortOrder = 0;
    -- Create temporary table
    CREATE TABLE #IDs (ID INT, SortOrder INT)
    -- Insert IDs in order according to the current SortOrder
    INSERT INTO #IDs SELECT ocm.ID, 0 FROM OrderConfirmationMessages ocm ORDER BY ocm.SortOrder ASC
    -- Update SortOrders (increment first, so the sequence starts at 10)
    UPDATE #IDs SET @sortOrder = @sortOrder + 10, SortOrder = @sortOrder
    -- Update the "real" values with data from #IDs
    UPDATE OrderConfirmationMessages SET SortOrder = x2.SortOrder
    FROM #IDs x2 WHERE OrderConfirmationMessages.ID = x2.ID
END
Results:
An example: SortOrder values will go from 1, 2, 5, 7, 10, 24, 36 to 10, 20, 30, 40, 50, 60, 70.
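Running it is then just:
EXEC dbo.OrganizeOrderConfirmationMessages;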

You could drop the catid field and then create it again, set it as primary key, and check the Auto Increment checkbox; it will add the new field and fill in the numbers.

First drop the primary key column from your table, then run this in the SQL section of phpMyAdmin:
ALTER TABLE `your_tablename` ADD `column_name` BIGINT NOT NULL AUTO_INCREMENT FIRST, ADD PRIMARY KEY (`column_name`);
This will automatically renumber the column from 1 upwards.

try this:
SET @var := 0;
UPDATE `table` SET `id` = (@var := @var + 1);
ALTER TABLE `table` AUTO_INCREMENT = 1;

In Postgres, you can do this where the number of records is < 300 (note that the id column and its sequence must match; here they are tbl1_id and tbl1_id_seq):
update schema.tbl1
set tbl1_id = tbl1_id + 300;
alter sequence schema.tbl1_id_seq
restart with 1;
insert into schema.tbl1
select nextval('schema.tbl1_id_seq'),
column2,
column3
from schema.tbl1;
delete from schema.tbl1
where tbl1_id > 300;

Related

Violation of UNIQUE KEY constraint '...'. Cannot insert duplicate key in object 'dbo.Cliente'. The duplicate key value is (<NULL>) [duplicate]

I want to have a unique constraint on a column which I am going to populate with GUIDs. However, my data contains NULL values for this column. How do I create a constraint that allows multiple NULL values?
Here's an example scenario. Consider this schema:
CREATE TABLE People (
Id INT CONSTRAINT PK_MyTable PRIMARY KEY IDENTITY,
Name NVARCHAR(250) NOT NULL,
LibraryCardId UNIQUEIDENTIFIER NULL,
CONSTRAINT UQ_People_LibraryCardId UNIQUE (LibraryCardId)
)
Then see this code for what I'm trying to achieve:
-- This works fine:
INSERT INTO People (Name, LibraryCardId)
VALUES ('John Doe', 'AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA');
-- This also works fine, obviously:
INSERT INTO People (Name, LibraryCardId)
VALUES ('Marie Doe', 'BBBBBBBB-BBBB-BBBB-BBBB-BBBBBBBBBBBB');
-- This would *correctly* fail:
--INSERT INTO People (Name, LibraryCardId)
--VALUES ('John Doe the Second', 'AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA');
-- This works fine this one first time:
INSERT INTO People (Name, LibraryCardId)
VALUES ('Richard Roe', NULL);
-- THE PROBLEM: This fails even though I'd like to be able to do this:
INSERT INTO People (Name, LibraryCardId)
VALUES ('Marcus Roe', NULL);
The final statement fails with a message:
Violation of UNIQUE KEY constraint 'UQ_People_LibraryCardId'. Cannot insert duplicate key in object 'dbo.People'.
How can I change my schema and/or uniqueness constraint so that it allows multiple NULL values, while still checking for uniqueness on actual data?
What you're looking for is indeed part of the ANSI standards SQL:92, SQL:1999 and SQL:2003: a UNIQUE constraint must disallow duplicate non-NULL values but accept multiple NULL values.
In the Microsoft world of SQL Server, however, a single NULL is allowed but multiple NULLs are not...
In SQL Server 2008, you can define a unique filtered index based on a predicate that excludes NULLs:
CREATE UNIQUE NONCLUSTERED INDEX idx_yourcolumn_notnull
ON YourTable(yourcolumn)
WHERE yourcolumn IS NOT NULL;
In earlier versions, you can resort to an indexed view with a NOT NULL predicate to enforce the constraint (see the examples further down).
SQL Server 2008 +
You can create a unique index that accept multiple NULLs with a WHERE clause. See the answer below.
Prior to SQL Server 2008
You cannot create a UNIQUE constraint and allow NULLs. You need to set a default value of NEWID().
Update the existing values to NEWID() where NULL before creating the UNIQUE constraint, as sketched below.
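A hedged sketch against the question's People table, assuming it was created without the UNIQUE constraint (note that a DEFAULT only covers inserts that omit the column; explicit NULLs would still sneak through):
-- constraint names here are illustrative
ALTER TABLE People ADD CONSTRAINT DF_People_LibraryCardId DEFAULT NEWID() FOR LibraryCardId;
UPDATE People SET LibraryCardId = NEWID() WHERE LibraryCardId IS NULL;
ALTER TABLE People ADD CONSTRAINT UQ_People_LibraryCardId UNIQUE (LibraryCardId);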
SQL Server 2008 And Up
Just filter a unique index:
CREATE UNIQUE NONCLUSTERED INDEX UQ_Party_SamAccountName
ON dbo.Party(SamAccountName)
WHERE SamAccountName IS NOT NULL;
In Lower Versions, A Materialized View Is Still Not Required
For SQL Server 2005 and earlier, you can do it without a view. I just added a unique constraint like you're asking for to one of my tables. Given that I want uniqueness in column SamAccountName, but I want to allow multiple NULLs, I used a materialized column rather than a materialized view:
ALTER TABLE dbo.Party ADD SamAccountNameUnique
AS (Coalesce(SamAccountName, Convert(varchar(11), PartyID)))
ALTER TABLE dbo.Party ADD CONSTRAINT UQ_Party_SamAccountName
UNIQUE (SamAccountNameUnique)
You simply have to put something in the computed column that will be guaranteed unique across the whole table when the actual desired unique column is NULL. In this case, PartyID is an identity column and being numeric will never match any SamAccountName, so it worked for me. You can try your own method—be sure you understand the domain of your data so that there is no possibility of intersection with real data. That could be as simple as prepending a differentiator character like this:
Coalesce('n' + SamAccountName, 'p' + Convert(varchar(11), PartyID))
Even if PartyID became non-numeric someday and could coincide with a SamAccountName, now it won't matter.
Note that the presence of an index including the computed column implicitly causes each expression result to be saved to disk with the other data in the table, which DOES take additional disk space.
Note that if you don't want an index, you can still save CPU by making the expression be precalculated to disk by adding the keyword PERSISTED to the end of the column expression definition.
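For instance, a hedged variant of the earlier computed column (shown with a different column name so it doesn't clash with the one above):
ALTER TABLE dbo.Party ADD SamAccountNamePersisted
AS (Coalesce(SamAccountName, Convert(varchar(11), PartyID))) PERSISTED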
In SQL Server 2008 and up, definitely use the filtered solution instead if you possibly can!
Controversy
Please note that some database professionals will see this as a case of "surrogate NULLs", which definitely have problems (mostly due to issues around trying to determine when something is a real value or a surrogate value for missing data; there can also be issues with the number of non-NULL surrogate values multiplying like crazy).
However, I believe this case is different. The computed column I'm adding will never be used to determine anything. It has no meaning of itself, and encodes no information that isn't already found separately in other, properly defined columns. It should never be selected or used.
So, my story is that this is not a surrogate NULL, and I'm sticking to it! Since we don't actually want the non-NULL value for any purpose other than to trick the UNIQUE index to ignore NULLs, our use case has none of the problems that arise with normal surrogate NULL creation.
All that said, I have no problem with using an indexed view instead—but it brings some issues with it such as the requirement of using SCHEMABINDING. Have fun adding a new column to your base table (you'll at minimum have to drop the index, and then drop the view or alter the view to not be schema bound). See the full (long) list of requirements for creating an indexed view in SQL Server (2005) (also later versions), (2000).
Update
If your column is numeric, there may be the challenge of ensuring that the unique constraint using Coalesce does not result in collisions. In that case, there are some options. One might be to use a negative number, to put the "surrogate NULLs" only in the negative range, and the "real values" only in the positive range. Alternately, the following pattern could be used. In table Issue (where IssueID is the PRIMARY KEY), there may or may not be a TicketID, but if there is one, it must be unique.
ALTER TABLE dbo.Issue ADD TicketUnique
AS (CASE WHEN TicketID IS NULL THEN IssueID END);
ALTER TABLE dbo.Issue ADD CONSTRAINT UQ_Issue_Ticket_AllowNull
UNIQUE (TicketID, TicketUnique);
If IssueID 1 has ticket 123, the UNIQUE constraint will be on values (123, NULL). If IssueID 2 has no ticket, it will be on (NULL, 2). Some thought will show that this constraint cannot be duplicated for any row in the table, and still allows multiple NULLs.
For people who are using Microsoft SQL Server Management Studio and want to create a unique but nullable index: create your unique index as you normally would, then in the Index Properties for your new index select "Filter" from the left-hand panel and enter your filter (which is your WHERE clause). It should read something like this:
([YourColumnName] IS NOT NULL)
This works with MSSQL 2012
When I applied the unique index below:
CREATE UNIQUE NONCLUSTERED INDEX idx_badgeid_notnull
ON employee(badgeid)
WHERE badgeid IS NOT NULL;
every non null update and insert failed with the error below:
UPDATE failed because the following SET options have incorrect settings: 'ARITHABORT'.
I found this on MSDN
SET ARITHABORT must be ON when you are creating or changing indexes on computed columns or indexed views. If SET ARITHABORT is OFF, CREATE, UPDATE, INSERT, and DELETE statements on tables with indexes on computed columns or indexed views will fail.
So to get this to work correctly I did this:
Right click [Database] --> Properties --> Options --> Other Options --> Miscellaneous --> Arithmetic Abort Enabled --> true
I believe it is possible to set this option in code using
ALTER DATABASE [DBNAME] SET ARITHABORT ON
but I have not tested this.
It can be done in the designer as well: right-click on the index > Properties to open the properties window.
Create a view that selects only the non-NULL rows and create a UNIQUE CLUSTERED INDEX on the view. Note that the view must be created WITH SCHEMABINDING before it can be indexed, and SELECT * is not allowed in it, so list the columns explicitly:
CREATE VIEW myview WITH SCHEMABINDING
AS
SELECT id, mycolumn -- list all of the table's columns
FROM dbo.mytable
WHERE mycolumn IS NOT NULL
GO
CREATE UNIQUE CLUSTERED INDEX ux_myview_mycolumn ON myview (mycolumn)
Because the view is indexed, SQL Server maintains it automatically, so an INSERT or UPDATE on the base table that would produce a duplicate non-NULL value in mycolumn will fail. If you would rather funnel the DML through the view yourself, you may do it with an INSTEAD OF trigger (INSTEAD OF triggers are not called recursively, so the INSERT inside the trigger performs the real insert):
CREATE TRIGGER trg_mytable_insert ON mytable
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO myview
    SELECT *
    FROM inserted
END
It is possible to create a unique constraint on a Clustered Indexed View
You can create the View like this:
CREATE VIEW dbo.VIEW_OfYourTable WITH SCHEMABINDING AS
SELECT YourUniqueColumnWithNullValues FROM dbo.YourTable
WHERE YourUniqueColumnWithNullValues IS NOT NULL;
and the unique constraint like this:
CREATE UNIQUE CLUSTERED INDEX UIX_VIEW_OFYOURTABLE
ON dbo.VIEW_OfYourTable(YourUniqueColumnWithNullValues)
In my experience, if you're thinking a column needs to allow NULLs but also needs to be UNIQUE for the values that exist, you may be modelling the data incorrectly. This often suggests you're creating a separate sub-entity within the same table as a different entity. It probably makes more sense to have this entity in a second table.
In the provided example, I would put LibraryCardId in a separate LibraryCards table with a unique not-null foreign key to the People table:
CREATE TABLE People (
Id INT CONSTRAINT PK_MyTable PRIMARY KEY IDENTITY,
Name NVARCHAR(250) NOT NULL
)
CREATE TABLE LibraryCards (
LibraryCardId UNIQUEIDENTIFIER CONSTRAINT PK_LibraryCards PRIMARY KEY,
PersonId INT NOT NULL,
CONSTRAINT UQ_LibraryCardId_PersonId UNIQUE (PersonId),
FOREIGN KEY (PersonId) REFERENCES People(Id)
)
This way you don't need to bother with a column being both unique and nullable. If a person doesn't have a library card, they just won't have a record in the library cards table. Also, if there are additional attributes about the library card (perhaps Expiration Date or something), you now have a logical place to put those fields.
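For example, a query against this schema to list each person with their card, if any:
SELECT p.Name, c.LibraryCardId
FROM People p
LEFT JOIN LibraryCards c ON c.PersonId = p.Id;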
Maybe consider an "INSTEAD OF" trigger and do the check yourself? With a non-clustered (non-unique) index on the column to enable the lookup.
As stated before, SQL Server doesn't implement the ANSI standard when it comes to UNIQUE constraints. A ticket has been open on Microsoft Connect for this since 2007. As suggested there, the best options as of today are to use a filtered index (as stated in another answer) or a computed column, e.g.:
CREATE TABLE [Orders] (
[OrderId] INT IDENTITY(1,1) NOT NULL,
[TrackingId] varchar(11) NULL,
...
[ComputedUniqueTrackingId] AS (
CASE WHEN [TrackingId] IS NULL
THEN '#' + cast([OrderId] as varchar(12))
ELSE [TrackingId] END
),
CONSTRAINT [UQ_TrackingId] UNIQUE ([ComputedUniqueTrackingId])
)
You can create an INSTEAD OF trigger to check for specific conditions and raise an error if they are met. Creating an index can be costly on larger tables.
Here's an example:
CREATE TRIGGER PONY.trg_pony_unique_name ON PONY.tbl_pony
INSTEAD OF INSERT, UPDATE
AS
BEGIN
IF EXISTS(
SELECT TOP (1) 1
FROM inserted i
GROUP BY i.pony_name
HAVING COUNT(1) > 1
)
OR EXISTS(
SELECT TOP (1) 1
FROM PONY.tbl_pony t
INNER JOIN inserted i
ON i.pony_name = t.pony_name
)
THROW 911911, 'A pony must have a name as unique as s/he is. --PAS', 16;
ELSE
INSERT INTO PONY.tbl_pony (pony_name, stable_id, pet_human_id)
SELECT pony_name, stable_id, pet_human_id
FROM inserted
END
You can't do this with a UNIQUE constraint, but you can do this in a trigger.
CREATE TRIGGER [dbo].[OnInsertMyTableTrigger]
ON [dbo].[MyTable]
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
DECLARE @Column1 INT;
DECLARE @Column2 INT; -- allow nulls on this column
SELECT @Column1 = Column1, @Column2 = Column2 FROM inserted;
-- Check if an existing record already exists; if not, allow the insert.
IF NOT EXISTS(SELECT * FROM dbo.MyTable WHERE Column1 = @Column1 AND Column2 = @Column2 AND @Column2 IS NOT NULL)
BEGIN
INSERT INTO dbo.MyTable (Column1, Column2)
SELECT @Column1, @Column2;
END
ELSE
BEGIN
RAISERROR('The unique constraint applies on Column1 %d, AND Column2 %d, unless Column2 is NULL.', 16, 1, @Column1, @Column2);
ROLLBACK TRANSACTION;
END
END
CREATE UNIQUE NONCLUSTERED INDEX [UIX_COLUMN_NAME]
ON [dbo].[Employee]([Username] ASC) WHERE ([Username] IS NOT NULL)
WITH (ALLOW_PAGE_LOCKS = ON, ALLOW_ROW_LOCKS = ON, PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF,
DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, STATISTICS_NORECOMPUTE = OFF, ONLINE = OFF,
MAXDOP = 0) ON [PRIMARY];
Use this if you have a registration form with a textbox whose value is inserted as an empty string (rather than NULL) when the user clicks submit; it enforces uniqueness only on non-empty values:
CREATE UNIQUE NONCLUSTERED INDEX [IX_tableName_Column]
ON [dbo].[tableName]([columnName] ASC) WHERE [columnName] != '';

How to change the primary key value in sql server?

I am using SQL Server 2014.
In my table, I set a primary key for the column. The primary key value starts from 1. I want to change it to start from 0. How can I achieve this?
If the table is empty, the easiest thing to do would be to drop the table and recreate it with an identity seed of 0, like GuidoG did. If the table has data, changing the primary key to 0 (as Intern87 mentions) would be a bad idea: after inserting a row with a key of 0, the next key would be 1, which is probably already in the table, so further inserts would fail with a primary key duplication error.
So if you have existing data but you MUST have a row with a key of 0, I would probably just do an identity insert for row 0. Do this with the following:
SET IDENTITY_INSERT mytable ON;
INSERT INTO mytable (id, col1, col2, etc..)
VALUES (0, 'col1data','col2data', etc..);
SET IDENTITY_INSERT mytable OFF;
Just make sure to run all of that at once, because once you turn identity insert on, other inserts may fail until you turn it off.
This creates a table where the auto-increment field starts with 0 instead of 1. It is also best to name your primary key:
CREATE TABLE myTable
(
id int IDENTITY(0,1),
othercolumn int,
and so on...
constraint PK_myTableID primary key (id)
)
If, however, you want to do this with an existing table, then your best option is to use Element Zero's answer.
You may be able to reseed it;
DBCC CHECKIDENT ('TableName', RESEED, 0);
GO
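One documented subtlety worth knowing: if the table has never contained rows (or was truncated), the first insert after the reseed uses the reseed value itself; if the table has held rows, the next value is the reseed value plus the increment. So on a table that has contained data, you would reseed to -1 to make the next identity 0:
DBCC CHECKIDENT ('TableName', RESEED, -1);
GO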

SQL Server Constraint (Limit bit field based on a foreign key)

I need help with constraints in SQL Server. The situation: OrderID is a foreign key, not a primary key, so there are multiple rows with the same OrderID. For each OrderID, the bit field may be 1 for only one of those rows and must be 0 for all other rows with the same OrderID. Any new record coming in with 1 in the bit field should be rejected if there is already a row with that OrderID whose bit field is set to 1. Any ideas?
CREATE UNIQUE INDEX UQ_UnnamedTable_ActiveOrder ON UnnamedTable (OrderID) WHERE UnnamedBitField=1
It's called a filtered index (note that an index must be given a name, as above). If you're on a pre-2008 version of SQL Server, you can implement a poor man's equivalent of a filtered index using an indexed view:
CREATE VIEW UnnamedView
WITH SCHEMABINDING
AS
SELECT OrderID From UnnamedSchema.UnnamedTable WHERE UnnamedBitField=1
GO
CREATE UNIQUE CLUSTERED INDEX UQ_UnnamedView_OrderID ON UnnamedView (OrderID)
You can't really do it as a constraint, since SQL Server only supports column constraints and row constraints. There's no (non-fudging) way to write a constraint that deals with all values in the table.
You could more fully normalize the schema, which lets you avoid hunting for the already-set bit and use a join instead. Remove the bit field and create a new table, say X, containing the OrderID and the primary key of your table, with OrderID alone as the key of X so that at most one row per order can exist (a sketch follows).
This means that when you insert, you insert into your original table, and into X if and only if you would have set the bit to 1 in your table. The insert will fail if there is already a row in X with that OrderID, which is as if there were already an original row with the bit set to 1.
The downside is that this takes up more space than your schema, but it is easier to maintain, as you can't end up with the equivalent of two rows having the bit set to 1.
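A hedged sketch of table X (all names assumed):
CREATE TABLE X (
    OrderID int NOT NULL PRIMARY KEY, -- at most one "active" row per order
    RowID   int NOT NULL              -- primary key of the flagged row in the original table
);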
The only way to do that is to subclass the parent table. You didn't mention it, but a common reason for this pattern is to represent the one unique active row from the set of all rows with the same common key value. Let's assume your bit field represents the active orders.
Then I would create a separate table called ActiveOrders, which will only contain the one row per order with the bit field set to 1:
Create Table ActiveOrders (OrderId int Not Null Primary Key)
and the other table with all the rows in it, with its own unique primary key OrderId:
Create Table AllOrders
(OrderId Integer Primary Key Not Null, ActiveOrderId Integer Not Null,
[All other data fields]
Constraint FK_AllOrders2ActiveOrder
Foreign Key(ActiveOrderId) references ActiveOrders(OrderId))
You now no longer even need the bit field, as the presence of a row in the ActiveOrders table identifies the active order. To get only the active orders (the ones that in your scheme would have the bit field set to 1), just join the two tables.
I agree with the other answers, and if you can change the schema then do that; but if not, then I think something like this will do:
CREATE FUNCTION fnMyCheck
(#id INT)
RETURNS INT
AS
BEGIN
DECLARE #i INT
SELECT #i = COUNT(*)
FROM MyTable
WHERE FkCol = #id
AND BitCol = 1
RETURN #i
END
ALTER TABLE YourTable
ADD CONSTRAINT ckMyCheck CHECK (fnMyCheck(FkCol)<=1)
but there are known problems that can come from using a UDF in a check constraint.
Edit, regarding problems with this 'solution': there are more straightforward issues as well. For example,
INSERT INTO YourTable(FkCol,BitCol) VALUES (1,1),(1,0)
followed by
UPDATE YourTable SET BitCol=1
succeeds and leaves two rows with FkCol=1 and BitCol=1

Is there a smart way to append a number to a PK identity column in a relational database w/o total catastrophe?

It's far from the ideal situation, but I need to fix a database by appending the number "1" to the PK identity column, which has FK relations to four other tables. I'm basically making a four-digit number a five-digit number. I need to maintain the relations. I could store the number in a var, do a SET query and append the 1, and do that for each table...
Is there a better way of doing this?
You say you are using an identity data type for your primary key, so before you update the numbers you will have to SET IDENTITY_INSERT ON (documentation here) and then turn it off again after the update.
As long as you have cascading updates set for your relations, the other tables should be updated automatically.
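A hedged sketch of one such relation (table and column names assumed):
ALTER TABLE dbo.OrderLine
    ADD CONSTRAINT FK_OrderLine_Orders
    FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID)
    ON UPDATE CASCADE;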
EDIT: As it's not possible to change an identity value, I guess you have to export the data, set the new identity values (+10000), and then import your data again.
Anyone have a better suggestion?
Consider adding another field to the PK instead of extending the length of the PK field. Your new field will have to cascade to the related tables, like a field length increase would, but you get to retain your original PK values.
My suggestion is:
Stop writing to the tables.
Copy the tables to new tables with the new PK.
Rename the old tables to backup names.
Rename the new tables to the original table name.
Count the rows in all the tables and double check your work.
Continue using the tables.
Changing a PK after the fact is not fun.
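A hedged T-SQL sketch of steps 2 through 4 for a single table (names and the +10000 shift are assumed; note that SELECT INTO will not carry the identity property when the key is produced by an expression, so re-add it if needed):
SELECT id + 10000 AS id, col1, col2
INTO dbo.MyTable_new
FROM dbo.MyTable;
EXEC sp_rename 'dbo.MyTable', 'MyTable_backup';
EXEC sp_rename 'dbo.MyTable_new', 'MyTable';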
If the column in question has an identity property on it, it gets complicated. This is more-or-less how I'd do it:
Back up your database.
Put it in single user mode. You don't need anybody mucking around whilst you do the surgery.
Execute the ALTER TABLE statements necessary to
disable the primary key constraint on the table in question
disable all triggers on the table in question
disable all foreign key constraints referencing the table in question.
Clone your table, giving it a new name and a column-for-column identical definition. Don't bother with any triggers, indexes, foreign keys or other constraints. Omit the identity property from the table's definition.
Create a new 'map' table that will map your old id values to the new value:
create table dbo.pk_map
(
  old_id int not null primary key clustered ,
  new_id int not null unique nonclustered
)
Populate the map table:
insert dbo.pk_map
select old_id = old.id ,
       new_id = f( old.id ) -- f(x) is the desired transform
from dbo.tableInQuestion old
Populate your new table, giving the primary key column the new value:
insert dbo.tableInQuestion_NEW
select id = map.new_id ,
       ...
from dbo.tableInQuestion old
join dbo.pk_map map on map.old_id = old.id
Truncate the original table: TRUNCATE TABLE dbo.tableInQuestion. Note that TRUNCATE TABLE is refused on a table referenced by a foreign key constraint, even a disabled one, so if this fails, either drop the referencing foreign keys (and recreate them later) or use DELETE dbo.tableInQuestion instead.
Execute SET IDENTITY_INSERT dbo.tableInQuestion ON.
Reload the original table:
insert dbo.tableInQuestion
select *
from dbo.tableInQuestion_NEW
Execute SET IDENTITY_INSERT dbo.tableInQuestion OFF
Execute drop table dbo.tableInQuestion_NEW. We're all done with it.
Execute DBCC CHECKIDENT ('dbo.tableInQuestion', RESEED) to get the identity counter back in sync with the data in the table.
Now, use the map table to propagate the changed primary key column down the line. Depending on your E-R model, this can get complicated as foreign keys referencing the updated column may themselves be part of a composite primary key.
When you're all done, start re-enabling the constraints and triggers you disabled. Make sure you do this using the WITH CHECK option. Fix any problems thus uncovered.
Finally, drop the map table, and clear the single user flag and bring your system(s) back online.
Piece of cake! (or something.)
Consider this approach:
Reset the identity seed to 10000 + the current seed.
Set identity insert on.
Insert into the table from the values in the table, adding 10000 to the identity column on the way.
EX:
Set identity insert on
Insert Table (identity, column1, column2)
select identity + 10000, column1, column2
From Table
Where identity < original max identity value
After the insert you know the identity is exactly 10000 more than the original.
Update the foreign keys by adding 10000.
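A hedged sketch of that final step for one child table (names assumed; disable or drop the FK check first if the relation doesn't cascade):
UPDATE ChildTable SET ParentID = ParentID + 10000;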

renumber primary key

How would I reset the primary key counter on a sql table and update each row with a new primary key?
I would add another column to the table first and populate it with the new PK.
Then I'd use UPDATE statements to update the new FK fields in all related tables (see the sketch below).
Then you can drop the old PK and old FK fields.
EDIT: Yes, as Ian says, you will have to drop and then recreate all foreign key constraints.
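A hedged sketch of one such update (SQL Server UPDATE...FROM syntax; all names assumed):
UPDATE c
SET c.new_fk = p.new_pk
FROM ChildTable c
JOIN ParentTable p ON p.old_pk = c.old_fk;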
Not sure which DBMS you're using but if it happens to be SQL Server:
SET IDENTITY_INSERT [MyTable] ON
allows you to update/insert the primary key column. Then when you are done updating the keys (you could use a CURSOR for this if the logic is complicated)
SET IDENTITY_INSERT [MyTable] OFF
Hope that helps!
This may or may not be MS SQL specific, but:
TRUNCATE TABLE resets the identity counter, so one way to do this quick and dirty would be to:
1) Do a backup.
2) Copy the table contents to a temp table.
3) Truncate the table and copy the temp table contents back into it (the table with the identity column):
SELECT Field1, Field2 INTO #MyTable FROM MyTable
TRUNCATE TABLE MyTable
INSERT INTO MyTable
(Field1, Field2)
SELECT Field1, Field2 FROM #MyTable
SELECT * FROM MyTable
-----------------------------------
ID Field1 Field2
1 Value1 Value2
Why would you even bother? The whole point of counter-based "identity" primary keys is that the numbers are arbitrary and meaningless.
you could do it in the following steps:
create copy of yourTable with extra column new_key
populate copyOfYourTable with the affected rows from yourTable along with desired values of new_key
temporarily disable constraints
update all related tables to point to the value of new_key instead of the old_key
delete affected rows from yourTable
SET IDENTITY_INSERT [yourTable] ON
insert affected rows again with the new proper value of the key (from copy table)
SET IDENTITY_INSERT [yourTable] OFF
reseed identity
re-enable constraints
drop the copyOfYourTable
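For the "temporarily disable constraints" step above, a hedged T-SQL sketch (table name assumed):
ALTER TABLE dbo.RelatedTable NOCHECK CONSTRAINT ALL;  -- disable FK/check constraints
-- ... renumber the keys ...
ALTER TABLE dbo.RelatedTable WITH CHECK CHECK CONSTRAINT ALL;  -- re-enable and re-validate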
But as others said all that work is not needed.
I tend to look at identity-type primary keys as if they were the equivalent of pointers in C: I use them to reference other objects, but never modify or access them explicitly.
If this is Microsoft's SQL Server, one thing you could do is use [dbcc checkident](http://msdn.microsoft.com/en-us/library/ms176057(SQL.90).aspx).
Assume you have a single table that you want to move data around in while renumbering the primary keys. For the example, the name of the table is ErrorCode. It has two fields: ErrorCodeID (which is the primary key) and a Description.
Example Code Using dbcc checkident
-- Reset the primary key counter
dbcc checkident ('ErrorCode', reseed, 7000)
-- Move all rows at or above 8000 into the 7000 range
insert into ErrorCode (Description)
select Description from ErrorCode where ErrorCodeID >= 8000
-- Delete the old rows
delete ErrorCode where ErrorCodeID >= 8000
-- Reset the primary key counter
dbcc checkident ('ErrorCode', reseed, 8000)
With this example, you'll effectively be moving all rows to a different primary key and then resetting so the next insert takes on an 8000 ID.
Hope this helps a bit!