SQL converting columns times out

I need a query that will alter a single column from nvarchar(max) to nvarchar(32). The real problem is that this table has 800,000 rows, and my ALTER TABLE myTable ALTER COLUMN mycolumn statement times out. Any suggestions or tips?

Maybe adding a new column, copying the data into the new column, and then dropping the old column and renaming the new one to the original name will help.
Another, simpler approach is to create a new table with the specification you need and then do a SELECT ... INTO. Once that completes, the old table can be dropped.
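If you go the new-table route, a minimal sketch might look like this (myTable and mycolumn come from the question; othercolumn1/othercolumn2 are placeholders for the remaining columns):

-- Build the new table with the narrower column, truncating long values.
SELECT othercolumn1,
       othercolumn2,
       CAST(LEFT(mycolumn, 32) AS nvarchar(32)) AS mycolumn
INTO   dbo.myTable_new
FROM   dbo.myTable;

-- After verifying the data (and recreating indexes, constraints and triggers
-- on the new table), swap the tables.
DROP TABLE dbo.myTable;
EXEC sp_rename 'dbo.myTable_new', 'myTable';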

If you run a SQL script in SSMS, there is no timeout. You only get a timeout from a client such as C#, where the default CommandTimeout is 30 seconds.
I would suggest raising the timeout (to 3600, for example), or running the statement in SSMS.
The other thing to think about: this change will be logged so it can roll back. Make sure you resize the log file up front to a respectable size, so it doesn't have to grow by 10% every time the changes you are making use up the current log space.
Or combine this with codymanix's answer.
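A minimal sketch of pre-sizing the log before the ALTER; the database and logical log-file names here (myDatabase, myDatabase_log) are placeholders, so check sys.database_files for the real ones:

-- Find the logical name of the log file.
SELECT name, type_desc, size FROM sys.database_files;

-- Grow the log up front so the ALTER doesn't pay for repeated autogrow events.
ALTER DATABASE myDatabase
MODIFY FILE (NAME = myDatabase_log, SIZE = 4096MB);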

Two things I can think of to try:
First, do an UPDATE truncating the data to 32 characters; this might help the ALTER run more quickly, since it won't have to do any truncation itself. The UPDATE could be batched if necessary.
Or
Create a new nvarchar(32) column with a temporary name
Populate it from the nvarchar(max) column
DROP the nvarchar(max) column
Rename the (32) column to the original name of the (max) column
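A rough sketch of both ideas, reusing the myTable/mycolumn names from the question:

-- 1) Pre-truncate the data in batches so the ALTER has less work to do.
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.myTable
    SET    mycolumn = LEFT(mycolumn, 32)
    WHERE  LEN(mycolumn) > 32;

    IF @@ROWCOUNT = 0 BREAK;
END;

ALTER TABLE dbo.myTable ALTER COLUMN mycolumn nvarchar(32);

-- 2) Or: add a temporary nvarchar(32) column, copy, drop, rename.
ALTER TABLE dbo.myTable ADD mycolumn_tmp nvarchar(32) NULL;

UPDATE dbo.myTable
SET    mycolumn_tmp = LEFT(mycolumn, 32);   -- could also be batched

ALTER TABLE dbo.myTable DROP COLUMN mycolumn;
EXEC sp_rename 'dbo.myTable.mycolumn_tmp', 'mycolumn', 'COLUMN';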

See this.
You can also increase the query timeout, or disable it entirely, via the options GUI in SSMS.

While your statement is executing, open another copy of SSMS and run the statement
sp_who2
That will show you, among other things, a column called "BlkBy". That's the SPID of a process which may be blocking your query from completing. You may have an open transaction somewhere else in the system. If you know what that process is, and you know it won't blow up your universe, kill it.
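For example (the SPID here is purely illustrative):

EXEC sp_who2;   -- look at the BlkBy column for the blocking SPID

-- Only if you are sure the blocker is safe to terminate:
-- KILL 53;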

Related

SQL Server 2012 takes long time on simple alter to add NULL columns

I have a problem altering a table in SQL Server 2012: it takes a very long time to add 4 nullable columns to a large table with 340 columns, approximately 166M rows, and one non-clustered index.
This problem happens only with this specific table, and only after restoring the database.
I waited 10 hours for the execution to finish, but it didn't, so I had to cancel it for more investigation. It's strange because the script is really simple, as shown below, and we have run it successfully before:
alter table sample_database.sample_schema.sample_table
add column_001 int null
,column_002 numeric(18,4) null
,column_003 nvarchar(500) null
,column_004 int null;
My questions are:
Why does this happen?
How can we solve it? It affects our deployment package. We tried the workaround of creating a new table with the new columns and loading the data into it, but that doesn't work for us.
How can we prevent this problem in the future?
Many thanks all,
If it is 2012 (according to tag), this may happen:
http://rusanu.com/2012/02/16/adding-a-nullable-column-can-update-the-entire-table/
If adding a nullable column in SQL Server 2012 has the potential of
increasing the row size over the 8060 size then the ALTER performs an
offline size-of-data update to every row of the table to ensure it
fits in the page. This behavior is new in SQL Server 2012.
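If you want a rough idea of whether the table is already close to that limit, something like the following (assuming the table is sample_schema.sample_table) sums the defined in-row column sizes:

-- Approximate defined in-row size; max_length = -1 marks
-- (n)varchar(max)/varbinary(max) columns, which can be pushed off-row,
-- so they are excluded from this rough estimate.
SELECT SUM(CASE WHEN max_length = -1 THEN 0 ELSE max_length END) AS defined_in_row_bytes
FROM   sys.columns
WHERE  object_id = OBJECT_ID('sample_schema.sample_table');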
The reason behind this is explained by @Anton: page splits.
And as a workaround you could follow these steps:
1. Create a clone of the table with no rows (e.g. SELECT * INTO <new table> FROM sample_table WHERE 0 = 1).
2. Add the new columns to <new table>.
3. Copy the data from sample_table to <new table>.
4. Once step 3 completes, rename sample_table (e.g. to sample_table_old) and rename <new table> to sample_table.
You could also try adding the columns as NOT NULL (with defaults) and then altering them afterwards as needed.
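A hedged sketch of that workaround, using the table and column names from the question (the <existing columns> placeholder stands in for the real 340-column list):

-- 1) Clone the structure with no rows.
SELECT *
INTO   sample_schema.sample_table_new
FROM   sample_schema.sample_table
WHERE  0 = 1;

-- 2) Add the new columns to the empty clone (cheap, since there is no data).
ALTER TABLE sample_schema.sample_table_new
ADD column_001 int NULL,
    column_002 numeric(18,4) NULL,
    column_003 nvarchar(500) NULL,
    column_004 int NULL;

-- 3) Copy the data across (ideally in batches for 166M rows; an IDENTITY
-- column would also need SET IDENTITY_INSERT ... ON).
INSERT INTO sample_schema.sample_table_new (<existing columns>)
SELECT <existing columns>
FROM   sample_schema.sample_table;

-- 4) Swap the names once the copy is verified (recreate indexes, constraints
-- and triggers on the new table first).
EXEC sp_rename 'sample_schema.sample_table', 'sample_table_old';
EXEC sp_rename 'sample_schema.sample_table_new', 'sample_table';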
This case happened in our UAT environment. After we restored the database again from the latest backup, the problem did not happen again; the ALTER completed within milliseconds.
In my opinion, there was some issue with that earlier restore.
Many thanks all,
I had this same issue and a simple server restart worked.

Change column type from bigint to numeric(18,0) in sql server

I have around 10 tables which have data in them. I need to change the fields which have data type bigint to numeric(18,0).
We have verified the data in our DB; there will not be any data loss. In our lower environment, what we have done is:
Took backup for existing table, renamed it temporarily
Create a new table with numeric data type
Populate data from backup table
If everything is okay, then delete backup table
The above is the process we have followed in lower environments.
But we cannot follow the above procedure when it comes to prod. We would like to make the change using an ALTER statement. Since it is the PROD environment, we have to be careful with changes. As I said earlier, there will not be any data loss.
But I still wanted to know: what happens internally when we execute the ALTER statement?
Will it drop the table and recreate it with the new definition and populate the data back? If so, are there any risks associated with this?
Any thoughts on how this could be properly handled in PROD would be appreciated.
I might suggest an approach that doesn't rebuild the data. Use a computed column instead. Something like this:
sp_rename 'dbo.table.col', '_col', 'COLUMN';
alter table table add col as (cast(_col as numeric(18, 0)));
You can then access col as the type that you want. You will not have to rewrite any data, so there will not be any locks or other issues with performance. Of course, select * will be a bit redundant, but you probably shouldn't be doing that anyway.

How is the ALTER TABLE command handled by SQL Server?

We are using SQL Server 2008. We have an existing database, and we needed to add a new column to one of the tables, which has only 2,700 rows, but one of its columns is of type VARCHAR(8000). When I try to add the new column (CHAR(1) NULL) using the ALTER TABLE command, it takes too much time: it had already run for 5 minutes and was still going, so I stopped it.
Below is the command I was using to add the new column:
ALTER TABLE myTable Add ColumnName CHAR(1) NULL
Can someone help me understand how SQL Server handles the ALTER TABLE command? What happens exactly?
Why does it take so much time to add a new column?
EDIT:
What is the effect of table size on the ALTER command?
Altering a table requires a schema lock. Many other operations require the same lock too; after all, it wouldn't make sense to add a column halfway through a SELECT statement.
So a likely explanation is that another process had the table locked for 5 minutes, and the ALTER had to wait until it could get the lock itself.
You can see blocked processes, and the blocking process, from the Activity Monitor in SQL Server Management Studio.
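You can get the same information from T-SQL; a minimal sketch:

-- Sessions that are currently blocked, and who is blocking them.
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM   sys.dm_exec_requests
WHERE  blocking_session_id <> 0;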
Well, one thing to bear in mind is that you were adding a new fixed length column to the table. The way that rows are structured in storage, all fixed length columns are placed before all of the variable length columns, for each row. So every row would have had to be updated in storage to make this change.
If, in turn, this caused the number of rows which could be stored on each page to change, a great many new allocations may have been required.
That being said, for the number of rows indicated, I wouldn't have thought it should take 5 minutes - unless, as Andomar indicated, there was some lock contention also involved.

How to delete a large record from SQL Server?

In a database for a forum I mistakenly set the body to nvarchar(MAX). Well, someone posted the Encyclopedia Britannica, of course. So now there is a forum topic that won't load because of this one post. I have identified the post and run a delete query on it, but for some reason the query just sits and spins. I have let it go for a couple of hours and it just sits there. Eventually it will time out.
I have tried editing the body of the post as well, but that also sits and hangs. When I sit and let my query run, the entire database hangs, so I shut down the site in the meantime to prevent further requests while it does its thinking. If I cancel my query, the site resumes as normal, and all queries for records that don't involve the one in question work fantastically.
Has anyone else had this issue? Is there an easy way to smash this evil record to bits?
Update: Sorry, the version of SQL Server is 2008.
Here is the query I am running to delete the record:
DELETE FROM [u413].[replies] WHERE replyID=13461
I have also tried deleting the topic itself which has a relationship to replies and deletes on topics cascade to the related replies. This hangs as well.
Option 1. It depends on how big the table itself is and how big the rows are.
Copy data to a new table:
SELECT *
INTO tempTable
FROM replies WITH (NOLOCK)
WHERE replyID != 13461
Although it will take time, the table should not be locked during the copy process.
Drop old table
DROP TABLE replies
Before you drop:
- script current indexes and triggers so you are able to recreate them later
- script and drop all the foreign keys to the table
Rename the new table
sp_rename 'tempTable', 'replies'
Recreate all the foreign keys, indexes and triggers.
Option 2. Partitioning.
Add a new bit column, called let's say 'Partition', set to 0 for all rows except the bad one. Set it to 1 for the bad one.
Create partitioning function so there would be two partitions 0 and 1.
Create a temp table with the same structure as the original table.
Switch partition 1 from original table to the new temp table.
Drop temp table.
Remove partitioning from the source table and remove new column.
Partitioning is not a simple topic. There are some examples on the internet, e.g. Partition switching in SQL Server 2005.
Start by checking if your transaction is being blocked by another process. To do this, you can run this command..
SELECT * FROM sys.dm_os_waiting_tasks WHERE session_id = {spid}
Replace {spid} with the correct SPID of the connection running your DELETE command. To get that value, run SELECT @@SPID before the DELETE command.
If the column sys.dm_os_waiting_tasks.blocking_session_id has a value, you can use activity monitor to see what that process is doing.
To open activity monitor, right-click on the server name in SSMS' Object Explorer and choose Activity Monitor. The Processes and Resource Waits sections are the ones you want.
Since you're having issues deleting the record and recreating the table, have you tried updating the record?
Something like (changing "body" field name to whatever it is in the table):
update [u413].[replies] set body='' WHERE replyID=13461
Once you clear out the text from that single reply record you should be able to alter the data type of the column to set an upper bound. Something like:
alter table [u413].[replies] alter column body nvarchar(100)

Issue After Converting Table Column From "Text" to Varchar(max)

I had a table called Person with a column PersonDescription, which was of type text.
I had issues updating this column, so I ran the script
ALTER TABLE dbo.Person ALTER COLUMN
PersonDescription VARCHAR(max)
to change the column to varchar(max). This was all fine and ran instantly. However, I have now noticed that any time I try to update this column it takes up to 3-4 minutes to execute. The query is
Update Person set PersonDescription
='persons description' where personid=18
After this update has run once, it executes instantly. That would be fine, but when this change goes to production the table will have a million records, so every person that logs in is going to time out when this runs. Can anyone tell me how I can prevent this? Is there another script that I need to run? While the update was running I saw that STATMAN (a statistics update) was running on SQL Server, which is what was taking the time.
thanks
Niall
It is my opinion that the issue is due to SQL Server deciding whether it should store the data in a blob or in the row.
See: Using varchar(MAX) vs TEXT on SQL Server
"The VARCHAR(MAX) type is a replacement for TEXT. The basic difference is that a TEXT type will always store the data in a blob whereas the VARCHAR(MAX) type will attempt to store the data directly in the row unless it exceeds the 8k limitation and at that point it stores it in a blob."
Are you also using a full-text index? If so, try disabling it, running the update, and then rebuilding it.
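If there is a full-text index involved, a hedged sketch of that idea (assuming the index is on dbo.Person):

-- Assumes dbo.Person has a full-text index; skip if it doesn't.
ALTER FULLTEXT INDEX ON dbo.Person DISABLE;

UPDATE dbo.Person
SET    PersonDescription = 'persons description'
WHERE  personid = 18;

ALTER FULLTEXT INDEX ON dbo.Person ENABLE;
-- Optionally repopulate afterwards:
-- ALTER FULLTEXT INDEX ON dbo.Person START FULL POPULATION;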
Have you tried creating a new table and copying records to it? Do you still have the same performance issues?