Is it possible to change a column type in a SQL Server 2008 database from varchar(255) to varchar(MAX) without having to drop the table and recreate?
SQL Server Management Studio throws me an error every time I try to do it using that - but to save myself a headache would be nice to know if I can change the type without having to DROP and CREATE.
Thanks
You should be able to do it using TSQL.
Something like
ALTER TABLE [table] ALTER COLUMN [column] VARCHAR(MAX)
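For instance, assuming a hypothetical dbo.Customers table whose Notes column is currently varchar(255) NOT NULL, the full statement might look like this (restate the NOT NULL, otherwise the column silently becomes nullable):
ALTER TABLE dbo.Customers ALTER COLUMN Notes VARCHAR(MAX) NOT NULL;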
'Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require table to be re-created.'
The option 'Prevent saving changes' is not enabled.
That's a new "feature" in SQL Server Management Studio 2008, and the option that blocks it is turned on by default. Whenever you make a larger change, SSMS can only apply it by creating a new table and then moving the data over from the old one, all in the background (such changes include re-ordering your columns, among other things).
This drop-and-recreate behaviour is disallowed by default, since if your table has FK constraints and the like, this way of re-doing the table might fail. But you can definitely turn the feature on!
It's under Tools > Options > Designers, and once you uncheck "Prevent saving changes that require table re-creation" you can make these kinds of changes to the table structure in the table designer again.
Be aware, with something like
ALTER TABLE [table] ALTER COLUMN [column] VARCHAR(MAX)
https://dba.stackexchange.com/questions/15007/change-length-of-varchar-on-live-prod-table
Martin Smith's answer:
If you are increasing it to varchar(100 - 8000) (i.e. anything other than varchar(max)), and you are doing this through TSQL rather than the SSMS GUI, with ALTER TABLE YourTable ALTER COLUMN YourCol varchar(200) [NOT] NULL, and you are not altering column nullability from NULL to NOT NULL (which would lock the table while all rows are validated and potentially written to), or from NOT NULL to NULL in some circumstances, then this is a quick metadata-only change. It might need to wait for a SCH-M lock on the table, but once it acquires that the change will be pretty much instant.
One caveat to be aware of is that during the wait for a SCH-M lock other queries will be blocked rather than jump the queue ahead of it so you might want to consider adding a SET LOCK_TIMEOUT first.
Also make sure in the ALTER TABLE statement you explicitly specify NOT NULL if that is the original column state as otherwise the column will be changed to allow NULL.
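A minimal sketch of that advice, assuming a hypothetical dbo.YourTable whose YourCol is currently varchar(100) NOT NULL:
SET LOCK_TIMEOUT 5000;  -- give up after 5 seconds instead of leaving other queries blocked behind the SCH-M wait
ALTER TABLE dbo.YourTable ALTER COLUMN YourCol varchar(200) NOT NULL;  -- restate NOT NULL to keep the original nullability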
Related
I have around 10 tables which have data in them. I need to change the fields which have data type bigint to numeric(18,0).
We have verified the data in our DB; there would not be any data loss. In our lower environments, what we have done is:
Take a backup of the existing table and rename it temporarily
Create a new table with the numeric data type
Populate it with the data from the backup table
If everything is okay, delete the backup table
That is the process we followed in the lower environments.
But we cannot follow the above procedure when it comes to prod; we would like to make the change using an ALTER statement. Since it is a PROD environment, we have to be careful with changes. As I said earlier, there would not be any data loss.
But still wanted to know - what internally happens when we execute the ALTER statement?
Will it drop the table and recreate it with new definitions and populate the data back? If so, are there any risk associated with this?
Any thoughts on how this could be properly handled in PROD would be appreciated.
I might suggest an approach that doesn't rebuild the data. Use a computed column instead. Something like this:
exec sp_rename 'dbo.[table].col', '_col', 'COLUMN';
alter table [table] add col as (cast(_col as numeric(18, 0)));
You can then access col as the type that you want. You will not have to rewrite any data, so there will not be any locks or other issues with performance. Of course, select * will be a bit redundant, but you probably shouldn't be doing that anyway.
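As a self-contained illustration of the same idea (dbo.Orders and OrderRef are made-up names; your real table and column will differ):
-- rename the physical bigint column out of the way
EXEC sp_rename 'dbo.Orders.OrderRef', '_OrderRef', 'COLUMN';
-- expose the old name as a computed column with the desired type
ALTER TABLE dbo.Orders ADD OrderRef AS (CAST(_OrderRef AS numeric(18, 0)));
-- queries now see numeric(18, 0) without any rows having been rewritten
SELECT OrderRef FROM dbo.Orders;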
I'm trying to create and increment by one some values to put into an already existing (but empty) column. I'm currently using the identity function, but I wouldn't mind using a custom made function. Right now, SSMS is saying there's incorrect syntax near IDENTITY. Could anybody help me fix this syntax?
ALTER Table anthemID IDENTITY(1,1)
First, you can't make a column identity after the fact: it has to be set that way at creation time.
Second, I'm not quite sure what you mean by "increment the value of an already existing column by one." You can only increment the value of rows within a column, i.e. perform a DML (Data Manipulation Language) query. The script you suggested above is a DDL (Data Definition Language) query that actually modifies the structure of the table, affecting the entire column (all rows).
If you just want to increment all the rows by 1, you'd do this:
UPDATE dbo.YourTable SET anthemID = anthemID + 1;
On the other hand, if you want the anthemID column to acquire the identity property so that new inserts to the table receive unique, autoincrementing values, you can do that with some juggling:
Back up your database and confirm it is a good backup.
Script out your table including all constraints.
Drop all constraints on your table or other tables that involve anthemID.
ALTER TABLE dbo.YourTable DROP CONSTRAINT PK_YourTable -- if part of PK
ALTER TABLE dbo.AnotherTable DROP CONSTRAINT FK_AnotherTable_anthemID -- FKs
Rename your table
EXEC sp_rename 'dbo.YourTable', 'YourTableTemp';
Modify the script you generated above to make anthemID identity (add in identity(1,1) after int);
Run the modified script to create a new table with the same name as the original.
Insert the data from the old table to the new one:
SET IDENTITY_INSERT dbo.YourTable ON;
INSERT dbo.YourTable (anthemID, AnotherColumn, RestOfColumns)
SELECT anthemID, AnotherColumn, RestOfColumns
FROM dbo.YourTableTemp;
SET IDENTITY_INSERT dbo.YourTable OFF;
Re-add all constraints that were dropped.
Drop the original, renamed table after confirming you don't need the data any more.
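Pulled together, the sequence above might look roughly like this for a hypothetical dbo.YourTable with only the columns shown (real constraint names, column lists and foreign keys will differ):
-- drop constraints that involve anthemID
ALTER TABLE dbo.AnotherTable DROP CONSTRAINT FK_AnotherTable_anthemID;
ALTER TABLE dbo.YourTable DROP CONSTRAINT PK_YourTable;
-- rename the original table out of the way
EXEC sp_rename 'dbo.YourTable', 'YourTableTemp';
-- recreate the table from the generated script, now with the identity property
CREATE TABLE dbo.YourTable
(
    anthemID int IDENTITY(1,1) NOT NULL CONSTRAINT PK_YourTable PRIMARY KEY,
    AnotherColumn varchar(50) NOT NULL
);
-- copy the data across, preserving the existing anthemID values
SET IDENTITY_INSERT dbo.YourTable ON;
INSERT dbo.YourTable (anthemID, AnotherColumn)
SELECT anthemID, AnotherColumn
FROM dbo.YourTableTemp;
SET IDENTITY_INSERT dbo.YourTable OFF;
-- re-add the dropped foreign keys, then drop dbo.YourTableTemp once you are sure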
You may be able to do this from SSMS's GUI table designer, and it will take care of moving the data over for you. However, this has bitten some people in the past and if you don't have a good database backup, well, don't do it because you might encounter some regret in the process.
UPDATE
Now that I know the column is blank, it's even easier.
ALTER TABLE dbo.YourTable DROP COLUMN anthemID;
ALTER TABLE dbo.YourTable ADD anthemID int identity(1,1) NOT NULL;
This does have the drawback of moving the column to the end of the table. If that's a problem, you can follow much the same procedure as I outlined above (to fix things yourself, or alternately use the designer in SQL Server Management Studio).
I recommend in the strongest terms possible that you use an identity column and do not try to create your own means of making new rows get an incremented value.
For emphasis, I'll quote @marc_s's comment above:
The SELECT MAX(ID)+1 approach is highly unsafe in a concurrent environment - in a system under some load, you will get duplicates. Don't do this yourself - don't try to reinvent the wheel - use the proper mechanisms (here: IDENTITY) that your database gives you and let the database handle all the nitty-gritty details!
I wholeheartedly agree with him.
I am trying to alter a table field, with some rows in it, from DateTime to DateTime2(3).
But SQL Server Management Studio complains that I have to drop and re-create the table.
But why?
Doesn't DateTime2(3) have more precision than the DateTime type? It should be fine, shouldn't it?
There is a setting in SSMS that will allow you to do what you want: Tools > Options > Designers > Prevent saving changes that require table re-creation.
SSMS has a habit of recreating almost any changes you do. It should be just fine to only alter the column data type with something like this.
alter table TableName alter column ColName datetime2(3)
You can also do this without rebuilding the table (which is what Management Studio does behind the scenes).
ALTER TABLE T ALTER COLUMN D DateTime2(3) [NOT NULL]
This will be less resource intensive up front but leave the "old" column behind in the data pages so will have an effect ongoing until you rebuild the table.
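If that leftover space matters, the table can be rebuilt afterwards to rewrite the data pages; a minimal sketch, assuming T is a regular, non-partitioned table:
ALTER TABLE T REBUILD;  -- rewrites the rows, reclaiming the space left behind by the old datetime column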
I've got a SQL Server database table which has 35 existing records. One of the fields in this table is Name, nvarchar(100), not null.
However, due to a recent change, I need to make this column nullable.
When I change the column to allow nulls in SQL Server Management Studio, and go to save my changes, I get the following error:
Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created
How can I allow this to automatically be dropped and re-created?
I've found the solution. Go to Tools > Options > Designers > Table and Database Designers and uncheck "Prevent saving changes that require table re-creation".
It's a setting in SSMS:
Tools > Options > Designers > Prevent saving changes that require table re-creation
I had the same problem: wanting to allow NULLs on a column that previously did not allow them. Consider MS's warning NOT to turn off this option:
http://support.microsoft.com/kb/956176
And their recommendation to use Transact-SQL to solve the problem, e.g.
alter table MyTable alter column MyDate7 datetime NULL
This solved it for me.
I'm trying to change the size of a column in SQL Server using:
ALTER TABLE [dbo].[Address]
ALTER COLUMN [Addr1] [nvarchar](80) NULL
where the length of Addr1 was originally 40.
It failed, raising this error:
The object 'Address_e' is dependent on column 'Addr1'.
ALTER TABLE ALTER COLUMN Addr1 failed because one or more objects access this column.
I've tried to read up on it, and it seems that because some views are referencing this column, SQL Server is actually trying to drop the column, which is what raised the error.
Address_e is a view created by the previous DB Administrator.
Is there any other way I can change the size of the column?
ALTER TABLE [table_name] ALTER COLUMN [column_name] varchar(150)
The views were probably created using the WITH SCHEMABINDING option, which means they are explicitly wired up to prevent such changes. Looks like the schema binding worked and prevented you from breaking those views; lucky day, heh? Contact your database administrator and ask them to make the change after they assess the impact on the database.
From MSDN:
SCHEMABINDING
Binds the view to the schema of the underlying table or tables. When SCHEMABINDING is specified, the base table or tables cannot be modified in a way that would affect the view definition. The view definition itself must first be modified or dropped to remove dependencies on the table that is to be modified.
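Before asking for the change, it can help to see exactly which objects are bound to the table. A quick sketch using a system function available since SQL Server 2008 (dbo.Address is taken from the question above):
-- list objects (such as schema-bound views) that reference dbo.Address
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities('dbo.Address', 'OBJECT');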
If anyone wants to increase the column width of a replicated table in SQL Server 2008, there is no need to change the "replicate_ddl=1" property. Simply follow the steps below:
Open SSMS
Connect to the publisher database
Run the command: ALTER TABLE [Table_Name] ALTER COLUMN [Column_Name] varchar(22)
This will increase the column width from varchar(x) to varchar(22), and the same change shows up on the subscriber (the transaction gets replicated), so there is no need to re-initialize the replication.
Hope this helps everyone who is looking for it.
See this link
Resize or Modify a MS SQL Server Table Column with Default Constraint using T-SQL Commands
The solution for such a SQL Server problem is:
Dropping or disabling the DEFAULT constraint on the table column.
Modifying the table column data type and/or data size.
Re-creating or enabling the default constraint back on the SQL table column.
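A minimal sketch of those three steps, assuming a hypothetical dbo.Orders table whose Status column carries a default constraint named DF_Orders_Status:
ALTER TABLE dbo.Orders DROP CONSTRAINT DF_Orders_Status;            -- 1. drop the default constraint
ALTER TABLE dbo.Orders ALTER COLUMN Status varchar(50) NOT NULL;    -- 2. change the type and/or size
ALTER TABLE dbo.Orders ADD CONSTRAINT DF_Orders_Status DEFAULT ('New') FOR Status;  -- 3. re-create the default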
Bye
Here is what works with the version of the program I'm using; it may work for you too.
I will just show the instructions and the commands that do it. class is the name of the table. With this method you change the table itself, not just what the query returns.
view the table class
select * from class
change the length of the columns FacID (seen as "faci") and classnumber (seen as "classnu") to fit the whole labels.
alter table class modify facid varchar (5);
alter table class modify classnumber varchar(11);
view table again to see the difference
select * from class;
(run the command again to see the difference)
This changes the actual table for good, and for the better.
P.S. I made these instructions up as a note for the commands. This is not a test, but can help on one :)
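Note that the MODIFY keyword above is MySQL/Oracle syntax; on SQL Server the same changes would presumably be written with ALTER COLUMN instead:
alter table class alter column facid varchar(5);
alter table class alter column classnumber varchar(11);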
Check the column collation. This script might change the collation to the table default. Add the current collation to the script.
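For example, one way is to check the current collation first and then repeat it in the ALTER (the collation name shown is only a placeholder; dbo.Address and Addr1 are taken from the question above):
SELECT name, collation_name FROM sys.columns WHERE object_id = OBJECT_ID('dbo.Address');
ALTER TABLE dbo.Address ALTER COLUMN Addr1 nvarchar(80) COLLATE SQL_Latin1_General_CP1_CI_AS NULL;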
You can change the size of the column in 3 steps:
Alter the view Address_e and comment out the column (/*Addr1*/)
Run your script
ALTER TABLE [dbo].[Address]
ALTER COLUMN [Addr1] [nvarchar](80) NULL
Then alter the view Address_e again to uncomment the column Addr1, as sketched below.
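A rough sketch of those three steps (Address_e's real definition will differ; Addr2 stands in here for whatever other columns the view selects):
-- step 1: alter the view so it no longer references Addr1
ALTER VIEW dbo.Address_e AS
SELECT /*Addr1,*/ Addr2
FROM dbo.Address;
GO
-- step 2: widen the column
ALTER TABLE [dbo].[Address] ALTER COLUMN [Addr1] [nvarchar](80) NULL;
GO
-- step 3: restore the original view definition with Addr1 back in the select list
ALTER VIEW dbo.Address_e AS
SELECT Addr1, Addr2
FROM dbo.Address;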