I took over an application a few months ago that used Guids for primary keys on the main tables. We've been having some index-related database problems lately, and having just read up on the use of Guids as primary keys I've learnt how much of a bad idea they can be, so I thought it might pay to look into changing them before the database gets too large.
I'm wondering if there's an easy way to change them to Ints? Is there some amazing software that will do this for me? Or is it all up to me?
I was thinking of just adding an extra int column to all appropriate tables, writing some code to populate this column with 1 to n based on the CreationDate column, writing some more code to populate the columns in all related tables, then switching the relationships over to the new int columns. Doesn't sound TOO difficult... Would this be the best way to do it?
After combining pieces from all the above links, I came up with this script, simplified for the sake of the answer.
Tables Before Changes
JOB
Id Guid PK
Name nvarchar
CreationDate datetime
REPORT
Id Guid PK
JobId Guid FK
Name nvarchar
CreationDate datetime
SCRIPT
-- Create new Job table with new Id column
select JobId = IDENTITY(INT, 1, 1), Job.*
into Job2
from Job
order by CreationDate
-- Add new JobId column to Report
alter table Report add JobId2 int
GO
-- Populate new JobId column
update Report
set Report.JobId2 = Job2.JobId
from Job2
where Report.JobId = Job2.Id
-- Delete Old Id
ALTER TABLE Job2 DROP COLUMN Id
-- Delete Relationships
ALTER TABLE Report DROP CONSTRAINT [FK_Report_Job]
ALTER TABLE Job DROP CONSTRAINT PK_Job
-- Create Relationships
ALTER TABLE [dbo].[Job2] ADD CONSTRAINT [PK_Job] PRIMARY KEY CLUSTERED
([JobId] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
ALTER TABLE [dbo].[Report] WITH CHECK ADD CONSTRAINT [FK_Report_Job] FOREIGN KEY([JobId2])
REFERENCES [dbo].[Job2] ([JobId])
ON DELETE CASCADE
ALTER TABLE [dbo].[Report] CHECK CONSTRAINT [FK_Report_Job]
-- Rename Columns
EXEC sp_rename 'Report.JobId', 'OldJobId', 'COLUMN'
EXEC sp_rename 'Report.JobId2', 'JobId', 'COLUMN'
-- Rename Tables
EXEC sp_rename 'Job', 'Job_Old'
EXEC sp_rename 'Job2', 'Job'
I created the Job2 table because it meant I didn't have to touch the original Job table (apart from deleting the relationships), so that everything could easily be put back to its original state if something went wrong.
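For completeness, here is a rough rollback sketch of my own (not part of the original migration, run one statement at a time) that should put things back if the change has to be abandoned, assuming nothing has yet been written against the new int keys and the names match the script above:
-- Undo the table renames first so the original Guid-keyed table gets its name back
EXEC sp_rename 'Job', 'Job2'
EXEC sp_rename 'Job_Old', 'Job'
-- Remove the FK pointing at the int-keyed copy and undo the column renames
ALTER TABLE Report DROP CONSTRAINT FK_Report_Job
EXEC sp_rename 'Report.JobId', 'JobId2', 'COLUMN'
EXEC sp_rename 'Report.OldJobId', 'JobId', 'COLUMN'
-- Dropping the copy also drops its PK_Job constraint, freeing the name for reuse
DROP TABLE Job2
ALTER TABLE Report DROP COLUMN JobId2
-- Restore the original key and relationship on the Guid columns
-- (add ON DELETE CASCADE only if the original FK had it)
ALTER TABLE Job ADD CONSTRAINT PK_Job PRIMARY KEY CLUSTERED (Id)
ALTER TABLE Report ADD CONSTRAINT FK_Report_Job FOREIGN KEY (JobId) REFERENCES Job (Id)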
Related
I have been experiencing some strange behaviour with one of my SQL commands taken from one of our stored procedures.
This command follows the below order of execution:
1) Drop table
2) Select * into table name from live server
3) Alter table to apply PK - this step fails once out of 4 daily executions
My SQL statement:
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[inf].[tblBase_MyTable]') AND type in (N'U'))
DROP TABLE [inf].[tblBase_MyTable]
SELECT * INTO [inf].[tblBase_MyTable]
FROM LiveServer.KMS_ALLOCATION WITH (NOLOCK)
ALTER TABLE [inf].[tblBase_MyTable] ADD
CONSTRAINT [PK_KMS_ALLOCATION] PRIMARY KEY NONCLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
GRANT SELECT ON [inf].[tblBase_MyTable] TO ourGroup
This is very strange, considering the table is dropped and I thought the indexes/keys would be dropped with it. However, I get this error at the same time every day. Any advice would be very much appreciated.
Error:
The CREATE UNIQUE INDEX statement terminated because a duplicate key was found for the object name 'inf.tblBase_MyTable' and the index name 'PK_KMS_ALLOCATION'.
Duplicate keys in the [inf].[tblBase_MyTable] table are actually possible thanks to the WITH (NOLOCK) hint, which allows "dirty reads". Have a look at the blog post which describes this in detail, SQL Server NOLOCK Hint & other poor ideas:
What many people think NOLOCK is doing
Most people think the NOLOCK hint just reads rows and doesn't have to wait till others have committed their updates or selects. If someone is updating, that is OK. If they've changed a value then 99.999% of the time they will commit, so it's OK to read it before they commit. If they haven't changed the record yet then it saves me waiting, it's like my transaction happened before theirs did.
The Problem
The issue is that transactions do more than just update the row. Often they require an index to be updated, or they run out of space on the data page. This may require new pages to be allocated and existing rows on that page to be moved, called a PageSplit. It is possible for your select to completely miss a number of rows and/or count other rows twice.
Well... you might have to repeat creating the new table and filling it until the check-query from #DarkoMartinovic does not return duplicates. Only then can you continue to add the PK. But this solution might cause heavy load on your live system, and you have no guarantee that you have a 1:1 copy of the data either.
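The check-query itself isn't reproduced here; a minimal version along those lines (my own sketch, assuming ID is the intended key column) would be:
-- Re-copy the table and re-run this until it returns no rows, then add the PK
SELECT ID, COUNT(*) AS Copies
FROM [inf].[tblBase_MyTable]
GROUP BY ID
HAVING COUNT(*) > 1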
Having reviewed various helpful comments here, I have decided against (for now) implementing SNAPSHOT isolation as this interface does not make use of a proper staging environment.
To move to this would mean either creating a staging area and setting that database to READ COMMITTED SNAPSHOT isolation, or rebuilding the entire interface.
To that end, and to save development time, we have opted to ensure that any ghost reads that could bring duplicates across from the source are handled before applying the PK.
This is by no means an ideal solution in terms of performance on the target server but will provide some headroom for now and certainly remove the previous error.
SQL approach below:
DECLARE @ALLOCTABLE TABLE
(SEQ INT, ID NVARCHAR(1000), CLASSID NVARCHAR(1000), [VERSION] NVARCHAR(25),
 [TYPE] NVARCHAR(100), VERSIONSEQUENCE NVARCHAR(100), VERSIONSEQUENCE_TO NVARCHAR(100),
 BRANCHID NVARCHAR(100), ISDELETED INT, RESOURCE_CLASS NVARCHAR(25),
 RESOURCE_ID NVARCHAR(100), WARD_ID NVARCHAR(100), ISCOMPLETE INT, TASK_ID NVARCHAR(100));
------- ALLOCATION
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[inf].[tblBase_MyTable]') AND type in (N'U'))
DROP TABLE [inf].[tblBase_MyTable]
SELECT * INTO [inf].[tblBase_MyTable]
FROM LiveServer.KMS_ALLOCATION WITH (NOLOCK)
INSERT INTO @ALLOCTABLE
SELECT *
FROM
(SELECT
ROW_NUMBER() OVER (PARTITION BY ID ORDER BY ISCOMPLETE DESC) SEQ, AL.*
FROM [inf].[tblBase_MyTable] AL
)DUPS
WHERE SEQ >1
DELETE FROM [inf].[tblBase_MyTable]
WHERE ID IN (SELECT ID FROM @ALLOCTABLE)
AND ISCOMPLETE = 0
ALTER TABLE [inf].[tblBase_MyTable] ADD CONSTRAINT
[PK_KMS_ALLOCATION] PRIMARY KEY NONCLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
GRANT SELECT ON [inf].[tblBase_MyTable] TO OurGroup
I have a table with about 10000 records and an identity column that doesn't auto-increment, and I want to change this. Normally I would drop the column and re-add it with the identity on, but the problem is that this column is used as a foreign key in a lot of my other tables, and there are some numbers missing from the primary column. I know that I can't change the column to start auto-incrementing, but is there a way to have the new auto-incremented column copy the same numbers as the original and start from the end of that?
You should be able to do something like the script below. Table_1 is the table I wish to update to now include an identity column, whereas the column was just a simple int before. Notice that when it creates Tmp_Table_1, it creates an identity column with the seed set to 889, since the largest int id I had before was 888. The script then takes all the data in the existing table, inserts it into the tmp table, drops the old table and renames the tmp table back to Table_1. By seeding the identity at 889 (one past the largest existing id), the next row of data you insert will automatically continue the sequence. Does this make sense?
BEGIN TRANSACTION
GO
CREATE TABLE dbo.Tmp_Table_1
(
id int NOT NULL IDENTITY (889, 1),
name nchar(10) NULL
) ON [PRIMARY]
GO
ALTER TABLE dbo.Tmp_Table_1 SET (LOCK_ESCALATION = TABLE)
GO
SET IDENTITY_INSERT dbo.Tmp_Table_1 ON
GO
IF EXISTS(SELECT * FROM dbo.Table_1)
EXEC('INSERT INTO dbo.Tmp_Table_1 (id, name)
SELECT id, name FROM dbo.Table_1 WITH (HOLDLOCK TABLOCKX)')
GO
SET IDENTITY_INSERT dbo.Tmp_Table_1 OFF
GO
DROP TABLE dbo.Table_1
GO
EXECUTE sp_rename N'dbo.Tmp_Table_1', N'Table_1', 'OBJECT'
GO
ALTER TABLE dbo.Table_1 ADD CONSTRAINT
PK_Table_1 PRIMARY KEY CLUSTERED
(
id
) WITH( STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
COMMIT
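As a possible variation (my own suggestion, not part of the answer above): rather than working out the seed by hand, you could create the identity with any seed and then run DBCC CHECKIDENT after the copy; with RESEED and no explicit value it brings the identity counter up to the highest value in the column if it is lagging behind, so new rows continue from there.
-- After copying the rows with IDENTITY_INSERT ON, realign the identity counter
-- with the highest id actually present in the column.
DBCC CHECKIDENT ('dbo.Table_1', RESEED)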
I have a simple table with tax rates
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[TaxRates](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](50) NULL,
CONSTRAINT [PK_TaxRates] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
If a user deletes a record, I don't want the auto-increment value to skip ahead on the next insert.
To be clearer:
Right now I have 3 records with ids 0, 1 and 2. When I delete the row with Id 2 and some time later add the next tax rate, I want the records in this table to be numbered like before: 0, 1, 2.
There shouldn't be any chance of a gap like 0, 1, 2, 4, 6. Presumably it has to be a trigger?
Could you help with that?
You need to accept gaps, or don't use IDENTITY:
id should have no external meaning
You can't update IDENTITY values
IDENTITY columns will always have gaps
In this case you'd update the clustered PK, which will be expensive
What about foreign keys? You'd need a CASCADE
Contiguous numbers can be generated with ROW_NUMBER() at read time (see the sketch after this list)
Generating ids yourself without IDENTITY (whether you load this table or another) won't be concurrency-safe under load
Trying to INSERT into a gap (via an INSTEAD OF trigger) won't be concurrency-safe under load
(Edit) History tables may have the deleted values anyway
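To illustrate the ROW_NUMBER() point, here is a minimal read-time sketch against the TaxRates table above (DisplayId is just a presentation value I've made up; nothing is stored or updated):
-- Gap-free numbering computed at query time; deletes never leave holes here
SELECT ROW_NUMBER() OVER (ORDER BY Id) - 1 AS DisplayId,  -- minus 1 to match the 0-based example
       Id,
       Name
FROM dbo.TaxRates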
An option, if the identity column has become something passed around in your organization, is to duplicate that column into a non-identity column on the same table; you can then modify those new id values at will while retaining the actual identity field.
Turning IDENTITY_INSERT on and off also allows you to insert explicit identity values.
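A rough sketch of that duplicate-column idea against the TaxRates table (DisplayId is a name I've made up; the real identity stays untouched):
-- Add an editable copy of the identity values
ALTER TABLE dbo.TaxRates ADD DisplayId bigint NULL
GO
-- Seed it from the real key; from here on DisplayId can be renumbered at will
UPDATE dbo.TaxRates SET DisplayId = Id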
I am going to change the primary key on SQL Azure, but it throws an error when using Microsoft SQL Server Management Studio to generate the scripts. Because every table on SQL Azure must have a clustered index, I can't drop the primary key before creating the new one. What can I do if I must change it?
Script generated
IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[mytable]') AND name = N'PK_mytable')
ALTER TABLE [dbo].[mytable] DROP CONSTRAINT [PK_mytable]
GO
ALTER TABLE [dbo].[mytable] ADD CONSTRAINT [PK_mytable] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF)
GO
Error message
Msg 40054, Level 16, State 2, Line 3
Tables without a clustered index are not supported in this version of SQL Server. Please create a clustered index and try again.
Msg 3727, Level 16, State 0, Line 3
Could not drop constraint. See previous errors.
The statement has been terminated.
Msg 1779, Level 16, State 0, Line 3
Table 't_event_admin' already has a primary key defined on it.
Msg 1750, Level 16, State 0, Line 3
Could not create constraint. See previous errors.
I ran into this exact problem and contacted the Azure team on the forums. Basically it isn't possible. You'll need to create a new table and transfer the data to it.
What I did was create a transaction and within it do the following:
Rename the old table to OLD_MyTable.
Create the new table with the correct primary key and call it MyTable.
Copy the contents of OLD_MyTable into MyTable.
Drop OLD_MyTable.
You may also need to call sp_rename on any constraints so they don't conflict.
See also: http://social.msdn.microsoft.com/Forums/en/ssdsgetstarted/thread/5cc4b302-fa42-4c62-956a-bbf79dbbd040
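A minimal sketch of those steps (my own, not from the linked thread; the column names and the new composite key are placeholders, the real table needs all of its columns, and any foreign keys pointing at it would have to be re-created as well):
BEGIN TRANSACTION
-- 1. Move the old table and its PK name out of the way
EXEC sp_rename 'dbo.MyTable', 'OLD_MyTable'
EXEC sp_rename 'PK_MyTable', 'PK_OLD_MyTable', 'OBJECT'
-- 2. Recreate the table with the primary key you actually want
CREATE TABLE dbo.MyTable
(
    id int NOT NULL,
    name nvarchar(100) NOT NULL,
    CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED (id, name)
)
-- 3. Copy the data across (INSERT ... SELECT rather than SELECT INTO, which would create a heap)
INSERT INTO dbo.MyTable (id, name)
SELECT id, name FROM dbo.OLD_MyTable
-- 4. Drop the renamed original
DROP TABLE dbo.OLD_MyTable
COMMIT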
Upgrade to SQL Azure V12; heaps are supported on it, so you can drop the primary key and recreate it.
I appreciate that this may be late in the day for you, but it may help others.
I recently came across this issue and found the least painful solution was to download the database from Azure, restore it locally, update the primary key locally (as the key constraint is a SQL Azure specific issue), and then restore the database back into Azure.
This saved any issues in regards to renaming databases or transferring data between them.
You can try the following script. Change it to suit your table definition.
EXECUTE sp_rename N'PK_MyTable', N'PK_MyTable_old', 'OBJECT'
CREATE TABLE [dbo].[Temp_MyTable](
[id] [int] NOT NULL,
[text] [text] NOT NULL,
CONSTRAINT [PK_MyTable] PRIMARY KEY CLUSTERED ([id] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON))
INSERT INTO dbo.[Temp_MyTable] (Id, Text)
SELECT Id, Text FROM dbo.MyTable
drop table dbo.MyTable
EXECUTE sp_rename N'Temp_MyTable', N'MyTable', 'OBJECT'
This question is outdated because changing the PK is now supported in the latest version of SQL Azure, and you don't have to create a temporary table.
I'm trying to programmatically add an identity column to a table Employees. Not sure what I'm doing wrong with my syntax.
ALTER TABLE Employees
ADD COLUMN EmployeeID int NOT NULL IDENTITY (1, 1)
ALTER TABLE Employees ADD CONSTRAINT
PK_Employees PRIMARY KEY CLUSTERED
(
EmployeeID
) WITH( STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
What am I doing wrong? I tried to export the script, but SQL Mgmt Studio does a whole Temp Table rename thing.
UPDATE:
I think it is choking on the first statement with "Incorrect syntax near the keyword 'COLUMN'."
Just remove COLUMN from ADD COLUMN
ALTER TABLE Employees
ADD EmployeeID numeric NOT NULL IDENTITY (1, 1)
ALTER TABLE Employees ADD CONSTRAINT
PK_Employees PRIMARY KEY CLUSTERED
(
EmployeeID
) WITH( STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
This is how to add a new column to a table:
ALTER TABLE [tableName]
ADD ColumnName Datatype
E.g.
ALTER TABLE [Emp]
ADD Sr_No Int
And if you want to make it auto-incremented:
ALTER TABLE [Emp]
ADD Sr_No Int IDENTITY(1,1) NOT NULL
The correct syntax for adding a column to a table is:
ALTER TABLE table_name
ADD column_name column-definition;
In your case it will be:
ALTER TABLE Employees
ADD EmployeeID int NOT NULL IDENTITY (1, 1)
To add multiple columns, list them in a single ADD separated by commas (SQL Server does not use brackets here):
ALTER TABLE table_name
ADD column_1 column-definition,
    column_2 column-definition,
    ...
    column_n column-definition;
The COLUMN keyword in SQL Server is used only when altering an existing column:
ALTER TABLE table_name
ALTER COLUMN column_name column_type;
SSMS could be doing the temp-table rename because you are trying to add the column at the beginning of the table (as this is easier than altering the column order). Also, if there is data in the Employees table, it has to do an INSERT ... SELECT * so it can calculate the EmployeeID values.