SQL Server Data Type Change

In SQL Server, we have a table with 80 million records, and three of its columns currently have the float data type. We now need to change those float columns to decimal. How do we proceed with minimum downtime?
We executed the usual ALTER statement to change the data type, but the log file filled up and the system ran out of memory. So kindly suggest a better way to solve this issue.
We can't use the technique of creating 3 new temp columns, updating the existing data batch-wise, dropping the existing columns, and renaming the temp columns to the live column names.

I have done exactly the same thing in one of my projects. Here are the steps you can follow to get minimal logging and minimum downtime with the least complexity.
Create a new table with the new data types and the same column names, but a different table name (leave the indexes off; if the table needs any, create them once the data has been loaded). For example, if the existing table is EMPLOYEE, name the new table EMPLOYEE_1. Constraints such as foreign keys can be created before or after the load without hurting performance, but I recommend not creating them up front: the existing table already uses those constraint names, so you would have to rename the constraints after renaming the table.
Keep in mind that the precision of the new decimal type must cover the maximum precision that exists in your current float data.
Load the data from the existing table into the new table using SSIS with the fast load option, so that the load is minimally logged.
During the downtime window, rename the old table to EMPLOYEE_2 and rename EMPLOYEE_1 to EMPLOYEE.
Alter the table definition for foreign keys, defaults, and any other constraints.
Go live, and create the indexes on the table during a low-load period.
Using this approach we changed column data types on a table with more than a billion records. It gives you minimum downtime and keeps logging to a minimum, because the SSIS fast load option bulk-loads the data instead of writing row by row.
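A minimal T-SQL sketch of the swap, using the EMPLOYEE / EMPLOYEE_1 / EMPLOYEE_2 names from the steps above; the column names and the decimal(19, 6) precision and scale are assumptions, so pick values that cover your existing float data:
CREATE TABLE dbo.EMPLOYEE_1
(
    EmployeeID int NOT NULL,          -- hypothetical key column
    Salary     decimal(19, 6) NULL,   -- was float
    Bonus      decimal(19, 6) NULL,   -- was float
    Allowance  decimal(19, 6) NULL    -- was float
);
-- ...load dbo.EMPLOYEE into dbo.EMPLOYEE_1 with SSIS fast load, then during the downtime window:
EXEC sp_rename 'dbo.EMPLOYEE', 'EMPLOYEE_2';
EXEC sp_rename 'dbo.EMPLOYEE_1', 'EMPLOYEE';
-- ...finally fix up constraints and create the indexes on dbo.EMPLOYEE at a low-load time.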

This is too long for a comment.
One of these approaches might work.
The simplest approach is to give up. Well almost, but do the following:
Rename the existing column to something else (say, _col)
Add a computed column that does the conversion.
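A sketch of those two steps, assuming a hypothetical table dbo.Measurements with a float column named Reading; decimal(19, 6) is an assumed precision:
EXEC sp_rename 'dbo.Measurements.Reading', '_col', 'COLUMN';   -- rename the existing float column
ALTER TABLE dbo.Measurements
    ADD Reading AS CAST(_col AS decimal(19, 6));               -- computed column presents the converted value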
If the data is infrequently modified (so you only need read access), then create a new table. That is, copy the data into a new table with the correct type, then drop the original table and rename the new table to the old name.
You can do something similar by swapping table spaces rather than renaming the table.

Another way could be:
Add a new column to the table with the correct datatype.
Copy the values from the float column to the decimal column, but do it in batches of x records.
When everything is done, remove the float column and rename the decimal column to what the float column name was.
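A sketch of that sequence, with hypothetical names (table dbo.Readings, float column ValueFloat, batch size 50,000) and an assumed decimal(19, 6) precision:
ALTER TABLE dbo.Readings ADD ValueDec decimal(19, 6) NULL;

WHILE 1 = 1
BEGIN
    -- convert the next batch of rows that have not been copied yet
    UPDATE TOP (50000) dbo.Readings
    SET ValueDec = CAST(ValueFloat AS decimal(19, 6))
    WHERE ValueDec IS NULL AND ValueFloat IS NOT NULL;

    IF @@ROWCOUNT = 0 BREAK;
END;

ALTER TABLE dbo.Readings DROP COLUMN ValueFloat;
EXEC sp_rename 'dbo.Readings.ValueDec', 'ValueFloat', 'COLUMN';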

Related

Add column to a huge table

I have a table that has around 13 billion records. The size of this table is around 800 GB. I want to add a column of type tinyint to the table, but running the ADD COLUMN command takes a lot of time. Another option would be to create another table with the additional column and copy the data from the source table to the new table using BCP (data export and import), or to copy the data directly into the new table.
Is there a better way to achieve this?
My preference for tables of this size is to create a new table and then batch the records into it (BCP, Bulk Insert, SSIS, whatever you like). This may take longer, but it keeps your log from blowing out. You can also do the most relevant data (say the last 30 days) first, swap out the table, then batch in the remaining history, so that you can take advantage of the new column immediately...if your application lines up with that strategy.
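A sketch of that "most relevant data first" variant, with hypothetical names (dbo.BigTable with Id, CreatedDate and Payload columns, and dbo.BigTable_New carrying the extra NewFlag tinyint column):
-- Copy the last 30 days first; WITH (TABLOCK) allows minimal logging under the SIMPLE or BULK_LOGGED recovery model.
INSERT INTO dbo.BigTable_New WITH (TABLOCK) (Id, CreatedDate, Payload, NewFlag)
SELECT Id, CreatedDate, Payload, 0
FROM dbo.BigTable
WHERE CreatedDate >= DATEADD(DAY, -30, SYSUTCDATETIME());

-- Swap the tables during a short window, then batch in the remaining history afterwards.
EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
EXEC sp_rename 'dbo.BigTable_New', 'BigTable';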

Alter Column Size in Large SQL Server database causes issues

I have a table with more than 600 million records and a NVARCHAR(MAX) column.
I need to change the column to a fixed size, but with a database this large I usually have two options:
Change it using the designer in SSMS and this takes a very long time and perhaps never finishes
Change it using Alter Column, but then the database doesn't work after that, perhaps because we have several indexes
My questions would be: what are possible solutions to achieve the desired results without errors?
Note: I use SQL Server on Azure and this database is in production
Thank you
Clarification: all the current data actually fits within the new length that I want to set.
Never did this on Azure, but I have done it with a hundred million rows on SQL Server.
I would add a new column of the desired size, allowing NULL values of course, and create the desired index(es).
After that, I would update the new column in chunks, trimming/converting the old column's values into the new one. Once that is done, remove the indexes on the old column and drop it.
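A sketch of that approach, with hypothetical names (dbo.Docs, old column Body nvarchar(max), target size nvarchar(400), batch size 20,000); since the clarification above says the data already fits the new length, the LEFT() here is only a safety net:
ALTER TABLE dbo.Docs ADD Body_New nvarchar(400) NULL;

WHILE 1 = 1
BEGIN
    UPDATE TOP (20000) dbo.Docs
    SET Body_New = LEFT(Body, 400)
    WHERE Body_New IS NULL AND Body IS NOT NULL;

    IF @@ROWCOUNT = 0 BREAK;
END;

-- drop any index that references the old column, then:
ALTER TABLE dbo.Docs DROP COLUMN Body;
EXEC sp_rename 'dbo.Docs.Body_New', 'Body', 'COLUMN';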
Change it using the designer in SSMS and this takes a very long time and perhaps never finishes
What SSMS will do behind the scenes is:
create a new table with the new data type for your column
copy all the data from the original table to the new table
rename the new table with the old name
drop the old table
recreate all the indexes on the new table
Of course it will take time to copy all of your "more than 600 million records".
Besides, the whole table will be locked for the duration of the data load with (HOLDLOCK, TABLOCKX).
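For illustration, a simplified version of the kind of script the SSMS designer generates behind the scenes (the dbo.Docs table with Id and Body columns is hypothetical):
CREATE TABLE dbo.Tmp_Docs
(
    Id   int NOT NULL,
    Body nvarchar(400) NULL   -- the new, fixed size
);

INSERT INTO dbo.Tmp_Docs WITH (HOLDLOCK, TABLOCKX) (Id, Body)
SELECT Id, CAST(Body AS nvarchar(400))
FROM dbo.Docs;

DROP TABLE dbo.Docs;
EXEC sp_rename 'dbo.Tmp_Docs', 'Docs';
-- ...followed by recreating every index, constraint and trigger on dbo.Docs.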
Change it using Alter Column, but then the database doesn't work after that, perhaps because we have several indexes
That is not true.
If any index is involved (and because of the nvarchar(max) data type it can only be an index that has that field as an included column), the server will give you an error and do nothing until you drop that index. And if no index is affected, it simply cannot be that the database "doesn't work after that, perhaps because we have several indexes".
Please think one more time about what you want to achieve by changing the type of that column. If you think you will gain some space by doing so, that is wrong.
Going from nvarchar(max) to a fixed-length nchar type (from a variable-length type to a fixed-length type) will make your data occupy more space than it does now, because you will store a fixed number of bytes even when the column holds NULL or only 1-3 characters.
And if you just want to change max to something else, for example 8000, you gain nothing, because that is not the real data size but only the maximum size the data can have. And if your strings are small enough, your nvarchar(max) values are already stored as in-row data, not as LOB data the way they were with ntext.

Dependency between database size and ALTER TABLE

Is the time it takes to complete a command like the one below dependent on the size of the database and on the data in the column? Does it have to check, entry by entry, whether the data type change is possible? For the 12.5 million records it takes about 15 minutes.
Code:
USE RDW_DB
GO
ALTER TABLE dbo.RDWTabel
ALTER COLUMN Aantalcilinders SmallInt
In general, ALTER COLUMN operations are implemented as an add of a new column and a drop of the old column:
add a new column of the new type
run an internal UPDATE <table> SET <newcolumn> = CAST(<oldcolumn> as <newtype>);
mark the old column as dropped
You can inspect a table structure after an ALTER COLUMN and see all the dropped, hidden columns. See SQL Server table columns under the hood.
As you can see, this results in a size-of-data update that must touch every row to populate the values of the new column. This takes time on its own on a big table, but the usual problem is from the log growth. As the operation must be accomplished in a single transaction, the log must grow to accommodate this change. Often newbs run out of disk space when doing such changes.
Certain operations may be accomplished 'inline'. If the new type fits in the space reserved for the old type and the on-disk layout is compatible, then the update is not required: the new column is literally overlaid on top of the old column's data. The article linked above gives examples. Also, operations on variable-length types that only change the declared length often do not need to change the data on disk and are much faster.
Additionally, any ALTER operation requires an exclusive schema modification (SCH-M) lock on the table. This will block, waiting for any current activity (queries) to drain out. Often the perceived duration is due to this lock wait, not the execution itself. Read How to analyse SQL Server performance for more details.
Finally, some ALTER COLUMN operations do not need to modify the existing data, but they do need to validate it, to ensure it matches the requirements of the new type. Instead of running an UPDATE they run a SELECT (they scan the data), which is still size-of-data, but at least it won't generate any log.
In your case it is impossible to tell whether ALTER COLUMN Aantalcilinders SmallInt was size-of-data or not. It depends on the previous type of Aantalcilinders. If the new type grew the size, then a size-of-data update is required. E.g. if the previous type was tinyint, then the ALTER COLUMN has to update every row.
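If you want to see which case you hit, a rough check (run it on a test copy, not in production; assumes SQL Server 2012 or later, where sys.dm_db_log_space_usage is available) is to compare log usage around the change:
SELECT used_log_space_in_bytes FROM sys.dm_db_log_space_usage;   -- before

ALTER TABLE dbo.RDWTabel ALTER COLUMN Aantalcilinders smallint;

SELECT used_log_space_in_bytes FROM sys.dm_db_log_space_usage;   -- after: a large jump means a size-of-data update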
When you change the type of a column, the data generally needs to be rewritten. For instance, if the data were originally an integer, then each record will be two bytes shorter after this operation. This affects the layout of records on a page as well as the allocation of pages. Every page needs to be touched.
I believe there are some operations in SQL Server that you can do on a column that do not affect the layout. One of them is changing a column from nullable to not-nullable, because the null flags are stored for every column regardless of nullability.

Altering 2 columns in table - risky operation?

I have a table with ~100 columns, about ~30M rows, on MSSQL server 2005.
I need to alter 2 columns - change their types from VARCHAR(1024) to VARCHAR(max). These columns do not have indexes on them.
I'm worried that doing so will fill up the log, and cause the operation to fail. How can I estimate the needed free disk space, both of the data and the log, needed for such operation to ensure it will not fail?
You are right, increasing the column size (including to MAX) will generate a huge log for a large table, because every row will be updated (behind the scenes the old column gets dropped, a new column gets added, and the data is copied).
Add a new column of type VARCHAR(MAX) NULL. As a nullable column, it will be added as a metadata-only change (no data update).
Copy the data from the old column to new column. This can be done in batches to alleviate the log pressure.
Drop the old column. This will be a metadata only operation.
Use sp_rename to rename the new column to the old column name.
Later, at your convenience, rebuild the clustered index (online if needed) to get rid of the space occupied by the old column
This way you get control over the log by controlling the batches at step 2). You also minimize the disruption on permissions, constraints and relations by not copying the entire table into a new one (as SSMS so poorly does...).
You can do this sequence for both columns at once.
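A sketch of the whole sequence, assuming a hypothetical table dbo.T with the two columns Col1 and Col2 and a batch size of 10,000:
ALTER TABLE dbo.T ADD Col1_new varchar(max) NULL, Col2_new varchar(max) NULL;

WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.T
    SET Col1_new = Col1, Col2_new = Col2
    WHERE Col1_new IS NULL AND Col2_new IS NULL
      AND (Col1 IS NOT NULL OR Col2 IS NOT NULL);

    IF @@ROWCOUNT = 0 BREAK;
END;

ALTER TABLE dbo.T DROP COLUMN Col1, Col2;
EXEC sp_rename 'dbo.T.Col1_new', 'Col1', 'COLUMN';
EXEC sp_rename 'dbo.T.Col2_new', 'Col2', 'COLUMN';

-- later, at a quiet time, reclaim the space left by the dropped columns
ALTER INDEX ALL ON dbo.T REBUILD;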
I would recommend that you consider, instead:
Create a new table with the new schema
Copy data from old table to new table
Drop old table
Rename new table to name of old table
This might be a far less costly operation and could possibly be done with minimal logging using INSERT/SELECT (if this were SQL Server 2008 or higher).
Why would increasing the VARCHAR limit fill up the log?
Try to do some tests on smaller pieces. I mean, you could create the same structure locally with a few thousand rows and see the difference before and after; I think the change will scale roughly linearly. The real question is the transaction log: will it fit if you do everything at once? Must you do it online, or can you stop production for a while? If you can stop, maybe there is a way to minimize logging in MSSQL, like disabling the redo log in Oracle; that could make it a lot faster. If you need to do it online, you could try to add a new column and copy the values into it in a loop, for example 100,000 rows at a time, committing after each batch. After completing that, dropping the original column and renaming the new one is probably faster than altering.

How do I add a column to large sql server table

I have a SQL Server table in production that has millions of rows, and it turns out that I need to add a column to it. Or, to be more accurate, I need to add a field to the entity that the table represents.
Syntactically this isn't a problem, and if the table didn't have so many rows and wasn't in production, this would be easy.
Really what I'm after is the course of action. There are plenty of websites out there with extremely large tables, and they must add fields from time to time. How do they do it without substantial downtime?
One thing I should add, I did not want the column to allow nulls, which would mean that I'd need to have a default value.
So I either need to figure out how to add a column with a default value in a timely manner, or I need to figure out a way to update the column at a later time and then set the column to not allow nulls.
ALTER TABLE table1 ADD
newcolumn int NULL
GO
should not take that long... What takes a long time is inserting columns in the middle of other columns... because then the engine needs to create a new table and copy the data into the new table.
I did not want the column to allow nulls, which would mean that I'd need to have a default value.
Adding a NOT NULL column with a DEFAULT Constraint to a table of any number of rows (even billions) became a lot easier starting in SQL Server 2012 (but only for Enterprise Edition) as they allowed it to be an Online operation (in most cases) where, for existing rows, the value will be read from meta-data and not actually stored in the row until the row is updated, or clustered index is rebuilt. Rather than paraphrase any more, here is the relevant section from the MSDN page for ALTER TABLE:
Adding NOT NULL Columns as an Online Operation
Starting with SQL Server 2012 Enterprise Edition, adding a NOT NULL column with a default value is an online operation when the default value is a runtime constant. This means that the operation is completed almost instantaneously regardless of the number of rows in the table. This is because the existing rows in the table are not updated during the operation; instead, the default value is stored only in the metadata of the table and the value is looked up as needed in queries that access these rows. This behavior is automatic; no additional syntax is required to implement the online operation beyond the ADD COLUMN syntax. A runtime constant is an expression that produces the same value at runtime for each row in the table regardless of its determinism. For example, the constant expression "My temporary data", or the system function GETUTCDATETIME() are runtime constants. In contrast, the functions NEWID() or NEWSEQUENTIALID() are not runtime constants because a unique value is produced for each row in the table. Adding a NOT NULL column with a default value that is not a runtime constant is always performed offline and an exclusive (SCH-M) lock is acquired for the duration of the operation.
While the existing rows reference the value stored in metadata, the default value is stored on the row for any new rows that are inserted and do not specify another value for the column. The default value stored in metadata is moved to an existing row when the row is updated (even if the actual column is not specified in the UPDATE statement), or if the table or clustered index is rebuilt.
Columns of type varchar(max), nvarchar(max), varbinary(max), xml, text, ntext, image, hierarchyid, geometry, geography, or CLR UDTS, cannot be added in an online operation. A column cannot be added online if doing so causes the maximum possible row size to exceed the 8,060 byte limit. The column is added as an offline operation in this case.
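For reference, a minimal example of the pattern described above; the table and column names are hypothetical, and the online, metadata-only behaviour applies on SQL Server 2012+ Enterprise Edition:
ALTER TABLE dbo.Orders
    ADD IsArchived bit NOT NULL
        CONSTRAINT DF_Orders_IsArchived DEFAULT (0);   -- 0 is a runtime constant, so existing rows are not rewritten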
The only real solution for continuous uptime is redundancy.
I acknowledge @Nestor's answer that adding a new column shouldn't take long in SQL Server, but nevertheless, it could still be an outage that is not acceptable on a production system. An alternative is to make the change in a parallel system, and then once the operation is complete, swap the new for the old.
For example, if you need to add a column, you may create a copy of the table, then add the column to that copy, and then use sp_rename() to move the old table aside and the new table into place.
If you have referential integrity constraints pointing to this table, this can make the swap even more tricky. You probably have to drop the constraints briefly as you swap the tables.
For some kinds of complex upgrades, you could completely duplicate the database on a separate server host. Once that's ready, just swap the DNS entries for the two servers and voilà!
I supported a stock exchange company in the 1990's who ran three duplicate database servers at all times. That way they could implement upgrades on one server, while retaining one production server and one failover server. Their operations had a standard procedure of rotating the three machines through production, failover, and maintenance roles every day. When they needed to upgrade hardware, software, or alter the database schema, it took three days to propagate the change through their servers, but they could do it with no interruption in service. All thanks to redundancy.
"Add the column and then perform relatively small UPDATE batches to populate the column with a default value. That should prevent any noticeable slowdowns"
And after that you have to set the column to NOT NULL, which will fire off in one big transaction. So everything runs really fast until you do that, and then you have probably gained very little. I only know this from first-hand experience.
You might want to rename the current table from X to Y. You can do this with the command sp_rename '[OldTableName]', '[NewTableName]'.
Then recreate the new table as X with the new column set to NOT NULL, and batch insert from Y to X, supplying a default value either in your INSERT for the new column or as a DEFAULT constraint on the new column when you recreate table X.
I have done this type of change on a table with hundreds of millions of rows. It still took over an hour, but it didn't blow out our trans log. When I tried to just change the column to NOT NULL with all the data in the table it took over 20 hours before I killed the process.
Have you tested just adding a column, filling it with data, and then setting the column to NOT NULL?
So in the end I don't think there's a magic bullet.
Select into a new table and rename. For example, adding column i to table A:
select *, 1 as i
into A_tmp
from A_tbl
-- Add any indexes here
exec sp_rename 'A_tbl', 'A_old'
exec sp_rename 'A_tmp', 'A_tbl'
Should be fast and won't touch your transaction log like inserting in batches might.
(I just did this today w/ a 70 million row table in < 2 min).
You can wrap it in a transaction if you need it to be an atomic operation (something might change in the table between the select into and the renames).
Another technique is to add the column to a new related table (assume a one-to-one relationship, which you can enforce by giving the FK a unique index). You can then populate it in batches and add a join to this table wherever you want the data to appear, as in the sketch below. Note that I would only consider this for a column I would not need in every query on the original table, or if the record width of my original table was getting too large, or if I was adding several columns.
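A sketch of that side-table idea, with hypothetical names (dbo.Orders with an int key Id, new attribute NewFlag); here the primary key on the FK column plays the role of the unique index that enforces the one-to-one relationship:
CREATE TABLE dbo.OrderExtra
(
    OrderId int NOT NULL
        CONSTRAINT PK_OrderExtra PRIMARY KEY
        CONSTRAINT FK_OrderExtra_Orders FOREIGN KEY REFERENCES dbo.Orders (Id),
    NewFlag tinyint NOT NULL
        CONSTRAINT DF_OrderExtra_NewFlag DEFAULT (0)
);

-- join it in only where the new column is needed
SELECT o.Id, ISNULL(x.NewFlag, 0) AS NewFlag
FROM dbo.Orders AS o
LEFT JOIN dbo.OrderExtra AS x ON x.OrderId = o.Id;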