Convert multiple int primary keys from different tables into Identity - sql

I have a dozen or so databases with a similar structure, each with around 50 tables, and some of these tables use a sequential int [Id] as Primary Key and Identity.
At some point, these databases were migrated to a different remote infrastructure, namely from Azure to AWS, and somewhere in the process the Identity property was lost. As a result, new automated inserts fail, because no auto-incremented Id is generated to serve as a valid primary key.
I've tried multiple solutions, but am struggling to get any of them to work; SQL Server seems extremely finicky about letting you alter or mess with the values of Identity columns in any way, and it's driving me insane.
I need to re-enable Identity on multiple tables in multiple databases, but the solutions I've found so far are either extremely convoluted or impractical for what seems to be a relatively simple problem.
tl;dr - How can I enable Identity for all my int primary keys in multiple different tables at the same time?
My approach so far:
CREATE PROC Fix_Identity @tableName varchar(50)
AS
BEGIN
    IF NOT EXISTS(SELECT * FROM sys.identity_columns WHERE OBJECT_NAME(object_id) = @tableName)
    BEGIN
        DECLARE @keyName varchar(100) = 'PK_dbo.' + @tableName;
        DECLARE @reName varchar(100) = @tableName + '.Id_new';
        EXEC ('ALTER TABLE ' + @tableName + ' DROP CONSTRAINT [' + @keyName + ']');
        EXEC ('ALTER TABLE ' + @tableName + ' ADD Id_new INT IDENTITY(1, 1) PRIMARY KEY');
        EXEC ('SET IDENTITY_INSERT [dbo].[' + @tableName + '] ON');
        EXEC ('UPDATE ' + @tableName + ' SET [Id_new] = [Id]');
        EXEC ('SET IDENTITY_INSERT [dbo].[' + @tableName + '] OFF');
        EXEC ('ALTER TABLE ' + @tableName + ' DROP COLUMN Id');
        EXEC sp_rename @reName, 'Id', 'COLUMN';
    END
END;
I tried creating this procedure, to be executed once per table, but I'm having problems with the UPDATE statement, which I need in order to guarantee that the new Identity column gets the same values as the old Id column. This approach currently doesn't work because:
Cannot update identity column 'Id_new'.
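One way around this error: SET IDENTITY_INSERT only permits explicit values on INSERT, never on UPDATE, so instead of updating the new identity column in place you can rebuild the table and insert the rows with their old Ids. A minimal sketch, using hypothetical names (dbo.MyTable with columns Id and Name) rather than your actual schema:

```sql
-- Sketch: rebuild the table with IDENTITY instead of UPDATE-ing an identity column.
-- dbo.MyTable, Id, and Name are placeholder names for illustration.
BEGIN TRANSACTION;

SELECT Id, Name
INTO dbo.MyTable_tmp          -- park the data in a temp copy
FROM dbo.MyTable;

DROP TABLE dbo.MyTable;

CREATE TABLE dbo.MyTable (
    Id   INT IDENTITY(1, 1) NOT NULL CONSTRAINT [PK_dbo.MyTable] PRIMARY KEY,
    Name NVARCHAR(100) NULL
);

-- IDENTITY_INSERT allows explicit Id values on INSERT (but never on UPDATE)
SET IDENTITY_INSERT dbo.MyTable ON;
INSERT INTO dbo.MyTable (Id, Name)
SELECT Id, Name FROM dbo.MyTable_tmp;
SET IDENTITY_INSERT dbo.MyTable OFF;

DROP TABLE dbo.MyTable_tmp;
COMMIT;
```

Note that this loses foreign keys, indexes, and other constraints on the original table, which would need to be scripted and re-created as well.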

There are assumptions made in that script that you might want to look out for, specifically the assumed PK constraint name; double-check it on all of your tables first. The rest of your script seemed to make sense to me, except that you will need to reseed the identity after updating the data in the new column.
See if this helps:
SELECT t.name AS [Table], c.name AS [Non-Identity PK], i.name AS [PK Constraint]
FROM sys.columns c
INNER JOIN sys.tables t ON c.object_id = t.object_id
INNER JOIN sys.indexes i ON i.object_id = c.object_id
    AND i.is_primary_key = 1
INNER JOIN sys.index_columns ic ON i.object_id = ic.object_id
    AND i.index_id = ic.index_id
    AND ic.column_id = c.column_id
WHERE c.is_identity = 0
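The reseed mentioned above can be done with DBCC CHECKIDENT once the data is copied; a sketch, again against a hypothetical dbo.MyTable:

```sql
-- Reseed so the next generated Id is MAX(Id) + 1.
-- RESEED sets the current identity value; the next insert uses that value + 1.
DECLARE @max INT = (SELECT MAX(Id) FROM dbo.MyTable);
DBCC CHECKIDENT ('dbo.MyTable', RESEED, @max);
```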

Instead of adding an identity, create a sequence and default constraint
declare @table nvarchar(50) = N'dbo.T'
declare @sql nvarchar(max) = (N'select @maxID = max(Id) FROM ' + @table);
declare @maxID int
exec sp_executesql @sql, N'@maxID int output', @maxID = @maxID OUTPUT;
set @sql = concat('create sequence ', @table, '_sequence start with ', @maxID + 1, ' increment by 1')
exec (@sql)
set @sql = concat('alter table ', @table, ' add default (next value for ', @table, '_sequence) for Id')
exec (@sql)
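Once the sequence and default constraint are in place, inserts that omit Id pick up the next value automatically. A hedged example against the hypothetical dbo.T above (SomeColumn is a placeholder):

```sql
-- The default constraint supplies the Id, so ordinary inserts just work:
INSERT INTO dbo.T (SomeColumn) VALUES (N'example');

-- You can also draw a value from the sequence explicitly:
SELECT NEXT VALUE FOR dbo.T_sequence;
```

Unlike the identity-rebuild approach, this keeps the existing column and constraints intact; the trade-off is that the column is no longer a true IDENTITY, so tooling that inspects sys.identity_columns will not see it.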

Related

Pass table and column names as parameters and retrieve value

I am trying to build a generic query where I pass the column name I want to count and the table name I want to select from.
So far this is my code:
ALTER PROCEDURE [dbo].[GenericCountAll]
    @TableName VARCHAR(100),
    @ColunName VARCHAR(100)
AS
BEGIN
    DECLARE @table VARCHAR(30);
    DECLARE @Rowcount INT;
    SET @table = N'SELECT COUNT(' + @ColunName + ') FROM ' + @TableName + '';
    EXEC(@table)
    SET @Rowcount = @@ROWCOUNT
    SELECT @Rowcount
END
Trying to execute like this:
EXEC GenericCountAll 'T_User', 'Id';
but it looks like I get two result sets: the first always returns a value of 1, and the second returns the real count. Can anyone take a look?
Don't create dynamic SQL like that! Imagine if I ran:
EXEC GenericCountAll '*/DROP PROCEDURE dbo.GenericCountAll;--', '1);/*';
The resulting executed SQL would be:
SELECT COUNT(1);/*) FROM */ DROP PROCEDURE dbo.GenericCountAll;--
That would, quite simply, DROP your procedure. And that's just a simple example. If I knew I could keep doing malicious things, I might even be able to create a new login or user, and make them a db_owner or sysadmin (depending on the permissions of whatever is being used to run that procedure).
I don't know what the point of the @@ROWCOUNT is either; I doubt it's needed. Thus, to make this SAFE you would need to do something like this:
ALTER PROCEDURE [dbo].[GenericCountAll]
    @TableName sysname, --Note the datatype change
    @ColumnName sysname
AS
BEGIN
    DECLARE @SQL nvarchar(MAX);
    SELECT @SQL = N'SELECT COUNT(' + QUOTENAME(c.[name]) + N') AS [RowCount]' + NCHAR(10) +
                  N'FROM ' + QUOTENAME(s.[name]) + N'.' + QUOTENAME(t.[name]) + N';'
    FROM sys.tables t
    JOIN sys.schemas s ON t.schema_id = s.schema_id
    JOIN sys.columns c ON t.object_id = c.object_id
    WHERE t.[name] = @TableName
      AND c.[name] = @ColumnName;
    /*
       If either the column or the table doesn't exist, then @SQL
       will have a value of NULL. This is a good thing, as it
       is a great way to further avoid injection, if a bogus
       table or column name is passed
    */
    IF @SQL IS NOT NULL BEGIN;
        PRINT @SQL; --Your best debugging friend
        EXEC sp_executesql @SQL;
    END ELSE BEGIN;
        RAISERROR(N'Table does not exist, or the Column does not exist for the Table provided.',11,1);
    END;
END

SQL Server : change PK type from uniqueidentifier to int

I have designed a database that uses uniqueidentifier as the primary key type for all tables. It has 50 tables and existing data. Then I realized it was a bad idea. I want to change the PK type from uniqueidentifier to int.
How can I do this? How do I move the foreign keys?
The general steps to take are (links are to MSDN information on performing these steps with SQL Server):
Delete the current Primary Key constraint - Delete Primary Key
Alter the table to drop your Unique Identifier field and create a new Integer field - Alter Table
Create a new Primary Key on the new Integer Field Create Primary Key
I've just written a script, tested on a couple of tables, and it works fine; test it yourself before you execute it in your production environment.
The script does the following:
Find all the columns where the primary key has data type uniqueidentifier.
Drop the primary key constraint.
Drop the uniqueidentifier column.
Add an INT column with Identity, starting with a seed value of 1 and an increment of 1.
Make that column the primary key column of the table.
declare @table SYSNAME, @Schema SYSNAME
      , @PkColumn SYSNAME, @ContName SYSNAME
      , @Sql nvarchar(max)
DECLARE db_cursor CURSOR LOCAL FORWARD_ONLY FOR
SELECT OBJECT_NAME(o.object_id) AS ConstraintName
      ,SCHEMA_NAME(o.schema_id) AS SchemaName
      ,OBJECT_NAME(o.parent_object_id) AS TableName
      ,c.name AS ColumnName
FROM sys.objects o
INNER JOIN sys.columns c ON o.parent_object_id = c.object_id
INNER JOIN sys.types t ON c.user_type_id = t.user_type_id
WHERE o.type_desc = 'PRIMARY_KEY_CONSTRAINT'
  AND t.name = 'uniqueidentifier'

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @ContName, @Schema, @table, @PkColumn
WHILE (@@FETCH_STATUS = 0)
BEGIN
    SET @Sql = 'ALTER TABLE ' + QUOTENAME(@Schema) + '.' + QUOTENAME(@table)
             + ' DROP CONSTRAINT ' + QUOTENAME(@ContName)
    EXEC sp_executesql @Sql

    SET @Sql = 'ALTER TABLE ' + QUOTENAME(@Schema) + '.' + QUOTENAME(@table)
             + ' DROP COLUMN ' + QUOTENAME(@PkColumn)
    EXEC sp_executesql @Sql

    SET @Sql = 'ALTER TABLE ' + QUOTENAME(@Schema) + '.' + QUOTENAME(@table)
             + ' ADD ' + QUOTENAME(@PkColumn)
             + ' INT NOT NULL IDENTITY(1,1) '
    EXEC sp_executesql @Sql

    SET @Sql = 'ALTER TABLE ' + QUOTENAME(@Schema) + '.' + QUOTENAME(@table)
             + ' ADD CONSTRAINT ' + QUOTENAME(@table + '_' + @PkColumn)
             + ' PRIMARY KEY (' + QUOTENAME(@PkColumn) + ')'
    EXEC sp_executesql @Sql

    FETCH NEXT FROM db_cursor INTO @ContName, @Schema, @table, @PkColumn
END
CLOSE db_cursor
DEALLOCATE db_cursor

Diff / Delta script: ideas on streamlining it?

I'm pulling data from a remote DB into a local MS SQL Server DB, the criteria being whether the PK is higher than I have in my data warehouse or whether the edit date is within a range that I provide with an argument. That works super fast so I am happy with it. However, when I attempt to sync this delta table into my data warehouse it takes quite a long time.
Here's my SPROC:
ALTER PROCEDURE [dbo].[sp_Sync_Delta_Table]
    @tableName varchar(max)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @sql AS varchar(4000)
    -- Delete rows in MIRROR database where ID exists in the DELTA database
    SET @sql = 'DELETE FROM [SERVER-WHS].[MIRROR].[dbo].[' + @tableName
             + '] WHERE [ID] IN (SELECT [ID] FROM [SERVER-DELTA].[DELTAS].[dbo].[' + @tableName + '])'
    EXEC(@sql)
    -- Insert all deltas
    SET @sql = 'INSERT INTO [SERVER-WHS].[MIRROR].[dbo].[' + @tableName
             + '] SELECT * FROM [SERVER-DELTA].[DELTAS].[dbo].[' + @tableName + ']'
    EXEC(@sql)
END
It works, but I think it takes way too long. For example: inserting 3,590 records from the DELTA table into the MIRROR table containing 3,600,761 rows took over 25 minutes.
Can anyone give me a hint on how I can make this job easier on SSMS? I'm using 2008 R2, btw.
Thanks again!
Nate
The issue is likely the time required to do a table scan on the 3,600,761-row table to see if the new records are unique.
First of all, let's confirm that the primary key (ID) on the target table is the clustered index and increasing.
SELECT s.name, o.name, i.name, i.type_desc, ic.key_ordinal, c.name
FROM sys.objects o
JOIN sys.columns c ON (c.object_id = o.object_id)
JOIN sys.schemas s ON (s.schema_id = o.schema_id)
JOIN sys.indexes i ON (i.object_id = o.object_id)
JOIN sys.index_columns ic ON (ic.object_id = i.object_id AND ic.index_id = i.index_id AND ic.column_id = c.column_id)
WHERE o.name = 'table_name' -- the raw table name, without brackets
If the index is not an ascending integer, it is possible that the inserts are causing lots of page splits.
Second, what other objects does that insert affect? Are there triggers, materialized views, or non-clustered indexes?
Third, do you have
My suggestion would be to stage the data on the mirror server in a local table. It can be as simple as:
SET @sql = 'SELECT * INTO [MIRROR].[dbo].[' + @tableName + '_Staging] FROM [SERVER-DELTA].[DELTAS].[dbo].[' + @tableName + ']'
EXEC(@sql)
After that, add a clustered primary key to the staging table (named distinctly so it cannot collide with the base table's PK constraint name).
SET @sql = 'ALTER TABLE [MIRROR].[dbo].[' + @tableName + '_Staging] ADD CONSTRAINT [PK_' + @tableName + '_Staging] PRIMARY KEY CLUSTERED (Id ASC)'
EXEC(@sql)
At this point, try inserting the data into the real table. The optimizer should be much more helpful this time.
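That final insert from the staging table can be written so that only genuinely new rows move over. A sketch in the same dynamic-SQL style as the snippets above (the table names remain the asker's placeholders):

```sql
-- Insert only staged rows whose ID is not already in the target table.
SET @sql = 'INSERT INTO [MIRROR].[dbo].[' + @tableName + ']
            SELECT s.*
            FROM [MIRROR].[dbo].[' + @tableName + '_Staging] s
            WHERE NOT EXISTS (SELECT 1
                              FROM [MIRROR].[dbo].[' + @tableName + '] t
                              WHERE t.[ID] = s.[ID])'
EXEC(@sql)
```

With matching clustered keys on both sides, the NOT EXISTS probe becomes an index seek per staged row instead of a scan of the 3.6-million-row table.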
Change the delete portion to:
SET @sql = 'DELETE tbl1 FROM [SERVER-WHS].[MIRROR].[dbo].[' + @tableName
         + '] tbl1 INNER JOIN [SERVER-DELTA].[DELTAS].[dbo].[' + @tableName + '] tbl2 ON tbl1.[ID] = tbl2.[ID]'
In future, use an INNER JOIN instead of IN with a subquery.

SQL Server : query to insert data into table from another table with a different structure

I have two tables in two different databases.
My first table is an older version and has fewer columns than the second table.
I want to copy the contents of my old table to my new table.
Several tables in each database are in this situation.
How can I quickly copy the data from the old tables to the new ones without having to write out the column names manually for each table?
Thanks!
You can "avoid writing the column names manually" in SSMS by dragging and dropping the "Columns" folder under the table in the Object Explorer over to a query window (just hold the dragged item over whitespace or the character position where you want the names to appear). All the column names will be displayed separated by commas.
You could also try something like this to get just the list of columns that are common between two tables (then writing the INSERT statement is trivial).
SELECT
    Substring((
        SELECT
            ', ' + S.COLUMN_NAME
        FROM
            INFORMATION_SCHEMA.COLUMNS S
            INNER JOIN INFORMATION_SCHEMA.COLUMNS D
                ON S.COLUMN_NAME = D.COLUMN_NAME
        WHERE
            S.TABLE_SCHEMA = 'dbo'
            AND S.TABLE_NAME = 'Source Table'
            AND D.TABLE_SCHEMA = 'dbo'
            AND D.TABLE_NAME = 'Destination Table'
        FOR XML PATH(''), TYPE
    ).value('.', 'nvarchar(max)'), 3, 2147483647)
;
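With the comma-separated list that query returns, the "trivial" INSERT statement would look like the following sketch ('Source Table' and 'Destination Table' are the placeholder names from above; paste the generated list in both positions):

```sql
-- Suppose the query above returned: Col1, Col2, Col3
INSERT INTO dbo.[Destination Table] (Col1, Col2, Col3)
SELECT Col1, Col2, Col3
FROM dbo.[Source Table];
```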
You could also create an SSIS package that simply moves all the data from one table to the other. Column names that match would automatically be linked up. Depending on your familiarity with SSIS, this could take you 2 minutes, or it could take you 2 hours.
The following code should do the work.
Basically, what it does is:
1. Collect the column names from both tables.
2. Intersect the column names, to filter out columns that exist in only one table.
3. Build a string of the column names, delimited by commas.
4. Use the string from stage 3 to create the insert command.
5. Execute the command from stage 4.
--BEGIN TRAN
DECLARE @oldName NVARCHAR(50) = 'OldTableName', @newName NVARCHAR(50) = 'newTableName'
DECLARE @oldDBName NVARCHAR(50) = '[OldDBName].[dbo].[' + @oldName + ']', @newDBName NVARCHAR(50) = '[newDBName].[dbo].[' + @newName + ']'
/*This table variable will hold the columns that exist in both tables*/
DECLARE @tCommonColumns TABLE(
    ColumnsName NVARCHAR(max) NOT NULL
);
INSERT INTO @tCommonColumns
SELECT column_name --,*
FROM information_schema.columns
WHERE table_name = @oldName
  AND COLUMNPROPERTY(object_id(@oldName), column_name, 'IsIdentity') = 0 --omit Identity columns
INTERSECT
SELECT column_name --, *
FROM information_schema.columns
WHERE table_name = @newName
  AND COLUMNPROPERTY(object_id(@newName), column_name, 'IsIdentity') = 0 --omit Identity columns
--SELECT * FROM @tCommonColumns
/*Get the columns as a comma-separated string*/
DECLARE @columns NVARCHAR(max)
SELECT DISTINCT
    @columns = STUFF((SELECT ', ' + cols.ColumnsName
                      FROM @tCommonColumns cols
                      FOR XML PATH('')),1,1,'')
FROM @tCommonColumns
PRINT @columns
/*Create the insert command*/
DECLARE @InsertCmd NVARCHAR(max)
SET @InsertCmd =
    'INSERT INTO ' + @newDBName + ' (' + @columns + ')
     SELECT ' + @columns + ' FROM ' + @oldDBName
PRINT @InsertCmd
/*Execute the command*/
EXECUTE sp_executesql @InsertCmd
--ROLLBACK
Please note that this script might fail if you have FOREIGN KEY constraints that are fulfilled in the old table but not in the new table.
Edit:
The query was updated to omit Identity columns.
Edit 2:
The query was updated to support tables living in different databases (make sure you set the @oldName, @newName, @oldDBName, and @newDBName variables to match your actual names).
Thanks all!
Here is a proposal that is more generic :)
--BEGIN TRAN
DECLARE @Tablename NVARCHAR(50)
SET @Tablename = 'tableName'
DECLARE @Schemaname NVARCHAR(50)
SET @Schemaname = 'schemaName'
DECLARE @Datasource NVARCHAR(50)
SET @Datasource = 'dataSource'
DECLARE @Datadest NVARCHAR(50)
SET @Datadest = 'dataDestination'
/*This table variable will hold the columns that exist in both tables*/
DECLARE @tCommonColumns TABLE(
    ColumnsName NVARCHAR(max) NOT NULL
);
--INSERT INTO @tCommonColumns
DECLARE @sql NVARCHAR(max)
SET @sql = 'SELECT column_name
    FROM ' + @Datasource + '.information_schema.columns
    WHERE table_name = ''' + @Tablename + '''
      AND COLUMNPROPERTY(object_id(''' + @Datasource + '.' + @Schemaname + '.' + @Tablename + '''), column_name, ''IsIdentity'') = 0' --omit Identity columns
SET @sql = @sql + ' INTERSECT
    SELECT column_name
    FROM ' + @Datadest + '.information_schema.columns
    WHERE table_name = ''' + @Tablename + '''
      AND COLUMNPROPERTY(object_id(''' + @Datadest + '.' + @Schemaname + '.' + @Tablename + '''), column_name, ''IsIdentity'') = 0' --omit Identity columns
INSERT INTO @tCommonColumns EXECUTE sp_executesql @sql
-- SELECT * FROM @tCommonColumns
/*Get the columns as a comma-separated string*/
DECLARE @columns NVARCHAR(max)
SELECT DISTINCT
    @columns = STUFF((SELECT ', ' + cols.ColumnsName
                      FROM @tCommonColumns cols
                      FOR XML PATH('')),1,1,'')
FROM @tCommonColumns
--PRINT @columns
/*Create the insert command*/
DECLARE @InsertCmd NVARCHAR(max)
SET @InsertCmd =
    'INSERT INTO ' + @Datadest + '.' + @Schemaname + '.' + @Tablename + ' (' + @columns + ')
     SELECT ' + @columns + ' FROM ' + @Datasource + '.' + @Schemaname + '.' + @Tablename
PRINT @InsertCmd
/*Execute the command*/
--EXECUTE sp_executesql @InsertCmd
--ROLLBACK
Something like this:
Insert into dbo.Newtbl
SELECT * FROM dbo.OldTbl

Finding a Primary Key Constraint on the fly in SQL Server 2005

I have the following SQL:
ALTER TABLE dbo.PS_userVariables DROP CONSTRAINT PK_PS_userVariables;
ALTER TABLE dbo.PS_userVariables ADD PRIMARY KEY (varnumber, subjectID, userID, datasetID, listid, userVarTitle);
Since I have multiple environments, that PK_PS_userVariables constraint name is different on my different databases. How do I write a script that gets that name then adds it into my script?
While the typical best practice is to always explicitly name your constraints, you can get them dynamically from the catalog views:
DECLARE @table NVARCHAR(512), @sql NVARCHAR(MAX);
SELECT @table = N'dbo.PS_userVariables';
SELECT @sql = 'ALTER TABLE ' + @table
    + ' DROP CONSTRAINT ' + name + ';'
FROM sys.key_constraints
WHERE [type] = 'PK'
  AND [parent_object_id] = OBJECT_ID(@table);
EXEC sp_executesql @sql;
ALTER TABLE dbo.PS_userVariables ADD CONSTRAINT ...
SELECT
A.TABLE_NAME,
A.CONSTRAINT_NAME,
B.COLUMN_NAME
FROM
INFORMATION_SCHEMA.TABLE_CONSTRAINTS A,
INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE B
WHERE
CONSTRAINT_TYPE = 'PRIMARY KEY'
AND A.CONSTRAINT_NAME = B.CONSTRAINT_NAME
ORDER BY
A.TABLE_NAME
Ref: Pinal Dave @ http://blog.sqlauthority.com/2008/09/06/sql-server-find-primary-key-using-sql-server-management-studio/
DECLARE @TableName varchar(128)
DECLARE @IndexName varchar(128)
DECLARE @Command varchar(1000)

SET @TableName = 'PS_userVariables'

SELECT @IndexName = si.name
FROM sys.tables st
JOIN sys.indexes si ON st.object_id = si.object_id
WHERE st.name = @TableName
  AND si.is_primary_key = 1

SET @Command = 'ALTER TABLE dbo.' + QUOTENAME(@TableName) + ' DROP CONSTRAINT ' + QUOTENAME(@IndexName) + ';
ALTER TABLE dbo.' + QUOTENAME(@TableName) + ' ADD PRIMARY KEY (varnumber, subjectID, userID, datasetID, listid, userVarTitle);'

EXEC (@Command)
My use case was updating primary key constraint names generated by Entity Framework 6 to match the primary key naming convention of Entity Framework Core.
As EF uses migrations, I created both an Up and Down migration script, and created a temporary long-lived table to store the old and new constraint names so that I could roll back if needed.
This is the SQL I used to update the primary key constraint names:
-- create a temporary long-lived table
-- it can be deleted when rollback is no longer needed
CREATE TABLE dbo.__OldPrimaryKeyConstraintNames (
    SchemaName NVARCHAR(128) NOT NULL DEFAULT 'dbo',
    TableName NVARCHAR(128) NOT NULL,
    OldPrimaryKeyConstraintName NVARCHAR(128) NOT NULL,
    NewPrimaryKeyConstraintName NVARCHAR(128) NOT NULL
);
-- create a temporary table to hold the data for the script
DECLARE @tbl TABLE (SchemaName NVARCHAR(128), TableName NVARCHAR(128), PrimaryKeyConstraintName NVARCHAR(128));
-- get all primary key constraint names as well as their schema and table
INSERT INTO @tbl
SELECT SCHEMA_NAME(pk.schema_id), t.name, pk.name
FROM sys.key_constraints pk
INNER JOIN sys.objects t ON t.object_id = pk.parent_object_id
WHERE pk.type = 'PK'
-- row count used for iterating through @tbl
DECLARE @RowCount INT = (SELECT COUNT(*) FROM @tbl);
-- variables used when iterating through @tbl
DECLARE @SchemaName NVARCHAR(128)
DECLARE @TableName NVARCHAR(128)
DECLARE @OldPrimaryKeyConstraintName NVARCHAR(128)
DECLARE @NewPrimaryKeyConstraintName NVARCHAR(128)
DECLARE @RenameSql NVARCHAR(MAX)
WHILE @RowCount > 0 BEGIN
    -- get the primary key constraint name, schema, and table name for this iteration
    SELECT @SchemaName = SchemaName, @TableName = TableName, @OldPrimaryKeyConstraintName = PrimaryKeyConstraintName, @NewPrimaryKeyConstraintName = CONCAT('PK_', TableName)
    FROM @tbl
    ORDER BY PrimaryKeyConstraintName DESC OFFSET @RowCount - 1 ROWS FETCH NEXT 1 ROWS ONLY;
    -- store the old and new primary key constraint names
    INSERT __OldPrimaryKeyConstraintNames (SchemaName, TableName, OldPrimaryKeyConstraintName, NewPrimaryKeyConstraintName)
    VALUES (@SchemaName, @TableName, @OldPrimaryKeyConstraintName, @NewPrimaryKeyConstraintName)
    -- perform the rename
    SET @RenameSql = 'sp_rename ' + '''' + @SchemaName + '.' + QUOTENAME(@OldPrimaryKeyConstraintName) + '''' + ', ' + '''' + @NewPrimaryKeyConstraintName + ''''
    EXEC sp_executesql @RenameSql
    -- move to the next row
    SET @RowCount -= 1;
END
After running this script, dbo.__OldPrimaryKeyConstraintNames should be populated with the old and new constraint names.
This allows us to revert the renaming if required for whatever reason.
This is the SQL I used to revert the primary key constraint names:
-- create a temporary table to hold the data for the script
DECLARE @tbl TABLE (SchemaName NVARCHAR(128), OldPrimaryKeyConstraintName NVARCHAR(128), NewPrimaryKeyConstraintName NVARCHAR(128));
-- get the old and new constraint names as well as their schema
INSERT INTO @tbl
SELECT SchemaName, OldPrimaryKeyConstraintName, NewPrimaryKeyConstraintName
FROM dbo.__OldPrimaryKeyConstraintNames
-- row count used for iterating through @tbl
DECLARE @RowCount INT = (SELECT COUNT(*) FROM @tbl);
-- variables used when iterating through @tbl
DECLARE @SchemaName NVARCHAR(128)
DECLARE @OldPrimaryKeyConstraintName NVARCHAR(128)
DECLARE @NewPrimaryKeyConstraintName NVARCHAR(128)
DECLARE @RenameSql NVARCHAR(MAX)
WHILE @RowCount > 0 BEGIN
    -- get the old and new constraint name and schema for this iteration
    SELECT @SchemaName = SchemaName, @OldPrimaryKeyConstraintName = OldPrimaryKeyConstraintName, @NewPrimaryKeyConstraintName = NewPrimaryKeyConstraintName
    FROM @tbl
    ORDER BY OldPrimaryKeyConstraintName DESC OFFSET @RowCount - 1 ROWS FETCH NEXT 1 ROWS ONLY;
    -- revert the rename
    SET @RenameSql = 'sp_rename ' + '''' + @SchemaName + '.' + QUOTENAME(@NewPrimaryKeyConstraintName) + '''' + ', ' + '''' + @OldPrimaryKeyConstraintName + ''''
    SELECT @RenameSql
    EXEC sp_executesql @RenameSql
    -- move to the next row
    SET @RowCount -= 1;
END
-- drop the temporary long-lived table once it is no longer required
DROP TABLE IF EXISTS dbo.__OldPrimaryKeyConstraintNames