Issue After Converting Table Column From "Text" to Varchar(max) - sql

I had a table called Person with a column PersonDescription, which was of type "text".
I had issues updating this column, so I ran the script
ALTER TABLE dbo.Person ALTER COLUMN PersonDescription VARCHAR(MAX)
to change the column to varchar(max). This was all fine and ran instantly. However, I have now noticed that any time I try to update this column, it takes 3-4 minutes to execute. The query is
Update Person set PersonDescription = 'persons description' where personid = 18
After this update has run once, it executes instantly from then on. That is all fine, but when this change goes to production the table will have a million records, so every person who logs in is going to time out when this runs. Can anyone tell me how I can prevent this? Is there another script I need to run? After running the update I saw that StatMan was running on SQL Server, which is what was taking the time.
thanks
Niall

It is my opinion that the issue is due to SQL Server calculating whether it should store the data in a blob or in the row.
See: Using varchar(MAX) vs TEXT on SQL Server
"The VARCHAR(MAX) type is a replacement for TEXT. The basic difference is that a TEXT type will always store the data in a blob whereas the VARCHAR(MAX) type will attempt to store the data directly in the row unless it exceeds the 8k limitation and at that point it stores it in a blob."
Are you also using a full-text index? If so: disable the indexes, run the update, then rebuild the indexes.
Have you tried creating a new table and copying the records into it? Do you still see the same performance issue?
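If you want to test the copy-to-a-new-table idea, a minimal sketch might look like this (Person/PersonDescription/personid come from the question; Person_Test is an assumed scratch table):
SELECT PersonID, PersonDescription
INTO dbo.Person_Test
FROM dbo.Person;
-- then time the same update against the copy
UPDATE dbo.Person_Test
SET PersonDescription = 'persons description'
WHERE PersonID = 18;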

Related

Best way to do a long running schema change (or data update) in MS Sql Server?

I need to alter the size of a column on a large table (millions of rows). It will be changed to an nvarchar(n) rather than nvarchar(max), so from what I understand it should not be a long-running change. But since I will be doing this in production, I wanted to understand the ramifications in case it does take long.
Should I just hit F5 in SSMS like I execute normal queries? What happens if my machine crashes? Or goes to sleep? What's the general best practice for long-running updates? Should it be scheduled as a job on the server, maybe?
Thanks
Please DO NOT just hit F5. I did this once and lost all the data in the table. Depending on the change, the update script that is generated for you actually stores the data in memory, drops the table, creates a new one with the change you want, and populates it from memory. However, in my case one of the changes was adding a unique constraint, so the population failed, and because the statement had finished, the data held in memory was discarded. That left me with the new, empty table.
I would create the table you are changing, with the change(s) you want, as a new table. Then SELECT * INTO the new table, and finally rename the two tables in a single statement. If there is a chance that data will be entered into the table while this is running, and that is an issue, you may want to lock the table.
Depending on the size of the table and the duration of the statement, you may want to save the locking and renaming for later: after the initial population of the new table, do a differential population of the newly arrived data and then rename the tables.
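A minimal sketch of that copy-and-rename approach, using hypothetical names (MyTable, the Id/Description columns, and the new nvarchar(100) size are assumptions for illustration):
-- 1. Create the new table up front with the changed column definition
CREATE TABLE dbo.MyTable_New (
    Id          INT            NOT NULL PRIMARY KEY,
    Description NVARCHAR(100)  NULL   -- the new, smaller type
);
-- 2. Copy the existing rows
INSERT INTO dbo.MyTable_New (Id, Description)
SELECT Id, LEFT(Description, 100)
FROM dbo.MyTable;
-- 3. Swap the tables (lock or quiesce writers first if data can still arrive)
BEGIN TRANSACTION;
EXEC sp_rename 'dbo.MyTable', 'MyTable_Old';
EXEC sp_rename 'dbo.MyTable_New', 'MyTable';
COMMIT;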
Sorry for the long post.
Edit:
Also, if the connection times out because of the duration, run the insert statement locally on the DB server. You could also create a job and run that, but it is essentially the same thing.

Change column type from bigint to numeric(18,0) in sql server

I have around 10 tables which have data in them. I need to change the fields that have the data type bigint to numeric(18,0).
We have verified the data in our DB; there will not be any data loss. In our lower environment, what we have done is:
Took backup for existing table, renamed it temporarily
Create a new table with numeric data type
Populate data from backup table
If everything is okay, then delete backup table
The above is the process we have followed in lower environments.
But we cannot follow the above procedure when it comes to prod. We would like to make the change using an ALTER statement. Since it is a PROD environment, we have to be careful with changes. As I said earlier, there will not be any data loss.
But we still wanted to know: what happens internally when we execute the ALTER statement?
Will it drop the table, recreate it with the new definition, and populate the data back? If so, are there any risks associated with this?
Any thoughts on how this could be properly handled in PROD would be appreciated.
I might suggest an approach that doesn't rebuild the data: use a computed column instead. Something like this (dbo.mytable, col, and _col stand in for your own names):
EXEC sp_rename 'dbo.mytable.col', '_col', 'COLUMN';
ALTER TABLE dbo.mytable ADD col AS (CAST(_col AS numeric(18, 0)));
You can then access col as the type that you want. You will not have to rewrite any data, so there will be no locking or other performance issues. Of course, SELECT * will now return both columns, but you probably shouldn't be doing that anyway.
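One wrinkle worth noting (same hypothetical names as above, plus an assumed id key column): reads can use the computed column, but writes still have to target the renamed original column, because computed columns cannot be assigned directly.
SELECT col FROM dbo.mytable WHERE col = 42;      -- reads see numeric(18, 0)
UPDATE dbo.mytable SET _col = 42 WHERE id = 1;   -- writes still go to the bigint column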

How is the ALTER TABLE command handled by SQL Server?

We are using SQL Server 2008. We have an existing database, and I needed to ADD a new COLUMN to one of the tables, which has only 2700 rows, but one of its columns is of type VARCHAR(8000). When I try to add the new column (CHAR(1) NULL) using an ALTER TABLE command, it takes far too long: it had been running for 5 minutes and was still going, so I stopped it.
Below is the command I was using to add the new column:
ALTER TABLE myTable Add ColumnName CHAR(1) NULL
Can someone help me understand how SQL Server handles
the ALTER TABLE command? What happens exactly?
Why does it take so much time to add a new column?
EDIT:
What is the effect of table size on the ALTER command?
Altering a table requires a schema lock. Many other operations require the same lock too; after all, it wouldn't make sense to add a column halfway through a select statement.
So a likely explanation is that another process had the table locked for 5 minutes. The ALTER then has to wait until it can acquire the lock itself.
You can see blocked processes, and the blocking process, from the Activity Monitor in SQL Server Management Studio.
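If you prefer a query over Activity Monitor, something like this (standard DMVs in SQL Server 2005 and later) shows blocked sessions and who is blocking them:
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;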
Well, one thing to bear in mind is that you were adding a new fixed-length column to the table. The way rows are structured in storage, all fixed-length columns are placed before all of the variable-length columns in each row. So every row would have had to be updated in storage to make this change.
If, in turn, this caused the number of rows which could be stored on each page to change, a great many new allocations may have been required.
That being said, for the number of rows indicated I wouldn't have thought it should take 5 minutes - unless, as Andomar indicated, there was also some lock contention involved.

ALTER SQL statement takes very much time

We are facing a problem with an ALTER TABLE SQL statement. Sometimes we update our database at the client side and the ALTER TABLE SQL takes a very long time. I would like to know how ALTER works. Is ALTER statement performance correlated with the table's data? That is, if the table has a lot of data, will the ALTER take much longer?
There is also a problem with Oracle 11g R2. Are there any changes that need to be incorporated into our code? Our code is very old and has been working fine until now.
There could be several reasons for this:
If the table is locked by another query/resource, the ALTER will wait for the lock to be released and then execute the update...
If the table contains many rows and you have added a new column with a default value, an update of the whole table is effectively executed after the ALTER, to populate all the existing records with the default value...
So if, for example, you add a new column with a default value to a large table, it will take time depending on the size of the table.
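As a rough illustration (T-SQL-flavoured syntax; the orders table and column names are assumptions), the two cases above look like this; on older engines the second one can rewrite every existing row:
-- usually fast: nullable column, no default to backfill
ALTER TABLE orders ADD notes VARCHAR(200) NULL;
-- can be slow on a large table: existing rows may need the default written
ALTER TABLE orders ADD status CHAR(1) NOT NULL DEFAULT 'N';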

SQL converting columns times out

I need a query that will alter a single column from nvarchar(max) to nvarchar(32). The real problem is that this table has 800,000 rows, and my alter table myTable alter column mycolumn statement times out. Any suggestions or tips?
Maybe adding a new column, copying the data into the new column, and then removing the old column and renaming the new column to the original name will help.
Another, simpler approach would be to create a new table with the specifications you need and then do a SELECT ... INTO. After this is completed, the old table can be dropped.
If you run a SQL script in SSMS it has no timeout set. You only get a timeout from a client such as C#, where the default CommandTimeout is 30 seconds.
I would suggest changing the timeout to 3600, for example, or running the statement in SSMS.
The other thing to think of: this change will be logged so it can roll back. Make sure you resize the log file upfront to a respectable size, so it doesn't have to grow by 10% each time while the changes you are making use up the current log space.
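If you do pre-grow the log, a minimal sketch (the database and logical log file names are assumptions; check sys.database_files for yours):
-- pre-grow the transaction log so the ALTER doesn't pay repeated autogrow costs
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 8GB);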
Or combine this with codymanix's answer
Two things I can think of to try:
First do an UPDATE truncating the data to 32 characters; this might help the ALTER run more quickly, since it won't have to do any truncation itself. The UPDATE could be batched if necessary (see the sketch after this list).
Or
Create a new nvarchar(32) column with a temporary name
Populate it from the nvarchar(max) column
DROP the nvarchar(max) column
Rename the (32) column to the original name of the (max) column
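A minimal sketch of both ideas, using the myTable/mycolumn names from the question and an assumed batch size (pick one option, not both):
-- Option 1: pre-truncate in batches, then alter in place
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.myTable
    SET mycolumn = LEFT(mycolumn, 32)
    WHERE LEN(mycolumn) > 32;
    IF @@ROWCOUNT = 0 BREAK;
END;
ALTER TABLE dbo.myTable ALTER COLUMN mycolumn NVARCHAR(32) NULL;
GO
-- Option 2: add a new column, copy, drop the old one, rename
ALTER TABLE dbo.myTable ADD mycolumn_tmp NVARCHAR(32) NULL;
GO
UPDATE dbo.myTable SET mycolumn_tmp = LEFT(mycolumn, 32);
ALTER TABLE dbo.myTable DROP COLUMN mycolumn;
EXEC sp_rename 'dbo.myTable.mycolumn_tmp', 'mycolumn', 'COLUMN';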
See this.
You can also increase the timeout value or just disable it via the GUI.
While your statement is executing, open another copy of SSMS and run the statement
sp_who2
That will show you, among other things, a column called "BlkBy". That's the SPID of a process which may be blocking your query from completing. You may have an open transaction somewhere else in the system. If you know what that process is, and you know it won't blow up your universe, kill it.
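Once you've identified the offending SPID from the BlkBy column (and confirmed it really is safe to terminate), you can kill it; 57 here is a hypothetical SPID:
KILL 57;  -- replace 57 with the SPID shown in the BlkBy column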