Change column type from bigint to numeric(18,0) in SQL Server

I have around 10 tables that contain data. I need to change the fields that have the data type bigint to numeric(18,0).
We have verified the data in our DB; there would not be any data loss. In our lower environment, what we have done is:
Take a backup of the existing table and rename it temporarily
Create a new table with the numeric data type
Populate the data from the backup table
If everything is okay, delete the backup table
The above is the process we have followed in lower environments.
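For illustration, here is a minimal sketch of that procedure on a hypothetical table with one bigint column being converted (the table and column names are placeholders, not our actual schema):
exec sp_rename 'dbo.MyTable', 'MyTable_backup'
go
create table dbo.MyTable (
    Id   numeric(18, 0) not null,  -- was bigint
    Name nvarchar(100)  null
)
go
insert into dbo.MyTable (Id, Name)
select Id, Name from dbo.MyTable_backup
go
-- once everything checks out:
-- drop table dbo.MyTable_backup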
But we cannot follow the above procedure when it comes to prod. We would like to make the change using an ALTER statement. Since it is a PROD environment, we have to be careful with changes. As I said earlier, there would not be any data loss.
But I still wanted to know: what happens internally when we execute the ALTER statement?
Will it drop the table, recreate it with the new definition, and populate the data back? If so, are there any risks associated with this?
Any thoughts on how this could be properly handled in PROD would be appreciated.

I might suggest an approach that doesn't rebuild the data. Use a computed column instead. Something like this:
sp_rename 'dbo.table.col', '_col', 'COLUMN';
alter table dbo.[table] add col as (cast(_col as numeric(18, 0)));
You can then access col as the type that you want. You will not have to rewrite any data, so there will not be any locks or other issues with performance. Of course, select * will be a bit redundant, but you probably shouldn't be doing that anyway.
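As a quick sanity check (a sketch reusing the placeholder names above), you can confirm from the catalog views that the computed column reports the new type:
select name, type_name(system_type_id) as type_name, [precision], scale
from sys.columns
where object_id = object_id('dbo.table');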

Related

Change attribute's data type in a database table when it is already filled with records

Can we change an attribute's data type when the database table already has records, in SQL?
I am using Microsoft SQL Server Management Studio 2008. The error that I am getting is:
Error converting data type nvarchar to float.
In short: it is possible with the ALTER COLUMN command ONLY if the existing data is compatible with the new data type. In addition, it is recommended to do it within a transaction.
For example, you may change a column from varchar(50) to nvarchar(200) with the script below.
alter table TableName
alter column ColumnName nvarchar(200)
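Since a transaction is recommended, a minimal sketch of wrapping the change so it can be rolled back if anything looks wrong (same placeholder names):
begin transaction
alter table TableName alter column ColumnName nvarchar(200)
-- verify the result, then:
commit  -- or rollback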
Edit: regarding your posted error while altering the column type:
Error converting data type nvarchar to float.
One way would be to create a new column and convert all the good (convertible and compatible) records into it. After that, you may want to clean up the bad records that do not convert, delete the old column, and rename your newly added and populated column back to the original name. Important: use a testing environment for all of these manipulations first. Playing with production tables usually turns out to be a good way to screw things up.
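A sketch of that clean-up approach for the nvarchar-to-float case, with hypothetical table and column names; note that TRY_CONVERT requires SQL Server 2012 or later (on 2008 you would approximate it with ISNUMERIC or a CASE expression):
alter table dbo.MyTable add ValueFloat float null
go
update dbo.MyTable
set ValueFloat = try_convert(float, ValueText)  -- rows that don't convert stay NULL
go
select * from dbo.MyTable
where ValueFloat is null and ValueText is not null  -- review or clean up the bad records
go
alter table dbo.MyTable drop column ValueText
go
exec sp_rename 'dbo.MyTable.ValueFloat', 'ValueText', 'COLUMN'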
References for more discussion in similar SE posts:
Change column types in a huge table
How to change column datatype in SQL Server database without losing data
Obviously, there is no default conversion to your new datatype. One solution could be to create a second column with the requested type and write your own conversion function. Once this is done, delete the first column and rename the second one to the original name.
Things to consider: how big your table is. You then use the ALTER TABLE ... ALTER COLUMN syntax. We do not know what data type you want to change to, so just as an example:
Alter Table [yourTable] Alter column [yourColumn] varchar(15)
You could also try to add a new column, then update that column using your old column, drop the old column, and rename the new column. This is a safer way, because at times the data that you hold might not react well to the new data type...
Posts to look into for ideas: Change column types in a huge table, How to change column datatype in SQL database without losing data
Alter the data type of that column. But in general SQL won't allow the change; it will prompt you to drop that column. There is a setting to achieve this:
Go to Tools > Options > Designers > Table and Database Designers and uncheck the 'Prevent saving' option. I am talking about SQL Server 2008 R2. Now you can easily alter the data type.

A special case when modifying the database

Sometimes I face the following case in my database design, and I want to know the best practice for handling it:
For example, I have a specific table, and after a while, when the database is in operation and some real data has already been entered, I need to add some required fields (that are not supposed to accept null).
What is the best practice in this situation?
Make the field accept null (as some data has already been entered in the table, sacrificing the important constraint) and try to force the user to enter this field through some validation in the code?
Truncate all the entered data and re-enter it again (tedious work)?
Any other suggestions about this issue?
It depends on requirements. If the data to populate existing rows for the new column isn't available immediately then I would generally prefer to create a new table and just populate new rows when the data exists. If and when you have all the data for every row then put the new column into the original table.
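A sketch of that side-table idea, assuming a hypothetical dbo.Customer parent table; the new required field lives in its own table, so it can be NOT NULL there without touching existing rows:
create table dbo.CustomerExtra (
    CustomerId       int          not null primary key
                     references dbo.Customer (CustomerId),
    NewRequiredField nvarchar(50) not null
)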
If possible, I would set a default value for the new column.
e.g. for varchar:
alter table table_name
add column_name varchar(10) not null
constraint column_name_default default ('Test')
After you have updated the data, you could then drop the default:
alter table table_name
drop constraint column_name_default
A lot will come down to your requirements.
It depends on your application, your database schema, your entities.
The best way to go about it is to truncate the data and re-enter it, but that need not be too tedious a task. Temporary tables and table variables can assist a great deal here. A simple procedure comes to mind:
In SQL Server Management Studio, right-click on the table you wish to modify and select Script Table As > CREATE To > New Query Editor Window.
Add a # in front of the table name in the CREATE statement and run the script to create the temporary table.
Move all records into the temporary table, using something to the effect of:
INSERT INTO #temp SELECT * FROM original
Truncate your original table, and make any changes necessary.
Right-click on the table and select Script Table As > INSERT To > Clipboard, paste it into your query editor window, and modify it to read records from the temporary table, using INSERT .. SELECT.
That's it. Admittedly not quite straightforward, but a well-kept database is almost always worth a slight hassle.
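A condensed sketch of that shuffle, using SELECT ... INTO in place of the scripted CREATE (table and column names assumed; this also assumes no foreign keys reference the table, since TRUNCATE would fail otherwise):
select * into #temp from dbo.Original
go
truncate table dbo.Original
go
alter table dbo.Original alter column SomeCol int not null  -- example change
go
insert into dbo.Original (Id, SomeCol)
select Id, SomeCol from #temp
go
drop table #temp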

Adding an autonumber to a SQL column which has more than 15 million records

I need to add an autonumber column to an existing table which has about 15 million records, in SQL 2005.
How much time do you think it will take? What's the best way to do it?
To minimize impact, I would create a new table with the identity column added, insert into the new table by selecting from the old table, then drop the old table and rename the new. I'll give a basic outline below. Extra steps may be needed to handle foreign keys, etc.
create table NewTable (
NewID int identity(1,1),
Column1 ...
)
go
insert into NewTable
(Column1, ...)
select Column1, ...
from OldTable
go
drop table OldTable
go
exec sp_rename 'NewTable', 'OldTable'
go
It's really difficult to say how long it will take.
In my opinion, your best bet would be to bring back a copy of the production database, restore it in a development environment, and apply your changes there to see how long it takes.
From there, you can coordinate site downtime, or schedule the update to run when users aren't connected.
Unless it's an emergency:
Don't make changes to a live database.
Don't make changes to a live database.
To find out how much downtime you'll need, do a restore to a new DB and make the change there.
It shouldn't be very long: it depends not only on how many rows, but even more on how much data there is in each row. (SQL Server is going to copy the entire table over.)
Do you have the option of backing up the production database, applying the changes on another server and changing connection strings? You could even restore it on the same server as the original, change connection strings and get rid of the old database.
May not be feasible if disk space is limited.
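A minimal sketch of that rehearsal restore (the database name, logical file names, and paths are all assumptions):
backup database ProdDb to disk = N'D:\backup\ProdDb.bak'
go
restore database ProdDb_Rehearsal
from disk = N'D:\backup\ProdDb.bak'
with move 'ProdDb'     to N'D:\data\ProdDb_Rehearsal.mdf',
     move 'ProdDb_log' to N'D:\data\ProdDb_Rehearsal_log.ldf'
go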

SQL converting columns times out

I need a query that will alter a single column from nvarchar(max) to nvarchar(32). The real problem is that this table has 800,000 rows, and my alter table myTable alter column mycolumn statement times out. Any suggestions or tips?
Maybe adding a new column, copying the data into the new column, and then removing the old column and renaming the new column to the original name will help.
Another, simpler approach would be to create a new table with the specification needed and then do select .. into.. After this is completed, the old table can be dropped.
If you run a SQL script in SSMS, it has no timeout set. You can only get a timeout from client code (C# etc.), where the default CommandTimeout is 30 seconds.
I would suggest changing the timeout to 3600 for example, or running it in SSMS.
The other thing to think of: this change will be logged so it can roll back. Make sure you resize the log file up front to a respectable size so it doesn't have to grow by 10% each time (whenever the changes you are making use up the current log space).
Or combine this with codymanix's answer
Two things I can think of to try:
first do an UPDATE truncating the data to 32 characters; this might help the ALTER run more quickly, since it won't have to do any truncation itself. The UPDATE could be batched if necessary (a batched version is sketched after the list below)
Or
Create a new nvarchar(32) column with a temporary name
Populate it from the nvarchar(max) column
DROP the nvarchar(max) column
Rename the (32) column to the original name of the (max) column
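A sketch of the batched UPDATE from the first option (table and column names taken from the question; the batch size is arbitrary):
while 1 = 1
begin
    update top (10000) dbo.myTable
    set mycolumn = left(mycolumn, 32)
    where len(mycolumn) > 32
    if @@rowcount = 0 break
end
go
alter table dbo.myTable alter column mycolumn nvarchar(32)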
See this.
You can also specify the timeout counter or just disable it via GUI.
When you execute the statement, open another copy of SSMS, and run the statement
sp_who2
That will show you, among other things, a column called "BlkBy". That's the SPID of a process which may be blocking your query from completing. You may have an open transaction somewhere else in the system. If you know what that process is, and you know it won't blow up your universe, kill it.
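If you do decide to kill the blocker, it is a one-liner (SPID 53 here is purely hypothetical):
kill 53  -- the SPID from the BlkBy column; only after confirming it is safe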

Varchar(255) to Varchar(MAX)

Is it possible to change a column type in a SQL Server 2008 database from varchar(255) to varchar(MAX) without having to drop the table and recreate?
SQL Server Management Studio throws me an error every time I try to do it that way, but to save myself a headache it would be nice to know if I can change the type without having to DROP and CREATE.
Thanks
You should be able to do it using TSQL.
Something like
ALTER TABLE [table] ALTER COLUMN [column] VARCHAR(MAX)
'Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require table to be re-created.' Option 'Prevent saving changes' is not enabled..
That's a new "feature" in SQL Server Management Studio 2008, and the safeguard is turned on by default. Whenever you make a larger change, SSMS can only apply it by creating a new table and then moving the data over from the old one, all in the background (such changes include re-ordering of your columns, among other things).
This table rebuild is blocked by default, since if your table has FK constraints and the like, this way of re-doing the table might fail. But you can definitely turn that behavior on!
It's under Tools > Options, and once you uncheck the 'Prevent saving changes that require table re-creation' option you can make these kinds of changes to the table structure in the table designer again.
Be aware with something like
ALTER TABLE [table] ALTER COLUMN [column] VARCHAR(MAX)
See https://dba.stackexchange.com/questions/15007/change-length-of-varchar-on-live-prod-table
Martin Smith's answer:
If you are increasing it to varchar(100 - 8000) (i.e. anything other than varchar(max)), you are doing this through TSQL rather than the SSMS GUI, ALTER TABLE YourTable ALTER COLUMN YourCol varchar(200) [NOT] NULL, and you are not altering column nullability from NULL to NOT NULL (which would lock the table while all rows are validated and potentially written to) or from NOT NULL to NULL (in some circumstances), then this is a quick metadata-only change. It might need to wait for a SCH-M lock on the table, but once it acquires that, the change will be pretty much instant.
One caveat to be aware of is that during the wait for a SCH-M lock other queries will be blocked rather than jump the queue ahead of it so you might want to consider adding a SET LOCK_TIMEOUT first.
Also make sure in the ALTER TABLE statement you explicitly specify NOT NULL if that is the original column state as otherwise the column will be changed to allow NULL.
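Putting that advice together, a sketch with the placeholder names from the answer:
set lock_timeout 5000  -- give up after 5s instead of making other queries wait behind the SCH-M lock
go
alter table dbo.YourTable alter column YourCol varchar(200) not null  -- restate the original nullability explicitly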