I have a table with ~100 columns, about ~30M rows, on MSSQL server 2005.
I need to alter 2 columns, changing their types from VARCHAR(1024) to VARCHAR(MAX). These columns do not have indexes on them.
I'm worried that doing so will fill up the log and cause the operation to fail. How can I estimate the free disk space, for both the data and the log, needed for such an operation to ensure it will not fail?
You are right: increasing the column size (including to MAX) will generate a huge amount of log for a large table, because every row will be updated (behind the scenes the old column is dropped, a new column is added, and the data is copied). Instead, do the following:
Add a new column of type VARCHAR(MAX) NULL. As a nullable column, it will be added as a metadata-only change (no data update).
Copy the data from the old column to new column. This can be done in batches to alleviate the log pressure.
Drop the old column. This will be a metadata only operation.
Use sp_rename to rename the new column to the old column name.
Later, at your convenience, rebuild the clustered index (online if needed) to get rid of the space occupied by the old column
This way you keep control over the log by controlling the batch size at step 2). You also minimize the disruption to permissions, constraints, and relationships by not copying the entire table into a new one (as SSMS so poorly does...).
You can do this sequence for both columns at once.
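A minimal sketch of steps 1) through 4), assuming a hypothetical table dbo.BigTable with a key column Id and one of the columns to widen named Col1 (all names and the batch size are placeholders):

-- 1) New nullable column: a metadata-only change
ALTER TABLE dbo.BigTable ADD Col1_new VARCHAR(MAX) NULL;
GO

-- 2) Copy the data in batches so each transaction (and the log it generates) stays small;
--    frequent log backups (or the simple recovery model) let log space be reused between batches
DECLARE @rows INT;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (50000) dbo.BigTable
    SET Col1_new = Col1
    WHERE Col1_new IS NULL
      AND Col1 IS NOT NULL;

    SET @rows = @@ROWCOUNT;
END
GO

-- 3) Drop the old column (metadata-only), then 4) rename the new one into its place
ALTER TABLE dbo.BigTable DROP COLUMN Col1;
EXEC sp_rename 'dbo.BigTable.Col1_new', 'Col1', 'COLUMN';
GO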
I would recommend that you consider, instead:
Create a new table with the new schema
Copy data from old table to new table
Drop old table
Rename new table to name of old table
This might be a far less costly operation and could possibly be done with minimal logging using INSERT/SELECT (if this were SQL Server 2008 or higher).
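A hedged sketch of that alternative, with hypothetical names (dbo.BigTable, Id, Col1, Col2); minimal logging for INSERT...SELECT generally requires SQL Server 2008+, a TABLOCK hint on an empty target, and the simple or bulk-logged recovery model:

-- Replacement table with the widened columns (the remaining ~100 columns stay unchanged)
CREATE TABLE dbo.BigTable_new
(
    Id   INT NOT NULL PRIMARY KEY,
    Col1 VARCHAR(MAX) NULL,
    Col2 VARCHAR(MAX) NULL
    -- ... other columns ...
);
GO

-- Loading an empty table under TABLOCK is eligible for minimal logging on 2008+
INSERT INTO dbo.BigTable_new WITH (TABLOCK) (Id, Col1, Col2 /* , ... */)
SELECT Id, Col1, Col2 /* , ... */
FROM dbo.BigTable;
GO

DROP TABLE dbo.BigTable;
GO
EXEC sp_rename 'dbo.BigTable_new', 'BigTable';
GO

Remember that indexes, constraints, permissions, and triggers on the old table have to be scripted out and recreated on the new one.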
Why would increasing the VARCHAR limit fill up the log?
Try to do some tests on smaller pieces first. You could create the same structure locally with a few thousand rows and compare the size before and after; I think the change will scale roughly linearly. The real question is the transaction log: whether it can absorb the change if you do it all at once. Must you do it online, or can you stop production for a while? If you can stop, maybe there is a way to reduce logging in MSSQL the way you can stop the redo log in Oracle, which could make it a lot faster. If you need to do it online, you could add a new column and copy the values into it in a loop, for example 100,000 rows at a time, committing after each batch. After that completes, dropping the original column and renaming the new one may be faster than altering in place.
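If you do test on a smaller copy first, one simple way to compare is to look at transaction log usage before and after the change; a minimal sketch, with dbo.TestCopy and Col1 as placeholder names (DBCC SQLPERF works on SQL Server 2005 as well):

-- Log size and percentage used for every database, before the test
DBCC SQLPERF(LOGSPACE);
GO

-- Run the change (all at once, or batched) against the test copy here
ALTER TABLE dbo.TestCopy ALTER COLUMN Col1 VARCHAR(MAX);
GO

-- Same report afterwards; the delta gives a rough per-row log cost
-- that can be scaled up to the full ~30M row table
DBCC SQLPERF(LOGSPACE);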
Our app has a few very large tables in SQL Server. 500 million rows and 1 billion rows in two tables that we'd like to clean up to reclaim some disk space.
In our testing environment, I tried running chunked deletes in a loop but I don't think this is a feasible solution in prod.
So the other alternative is to select/insert the data we want to keep into a temp table, truncate/drop the old table, and then:
recreate indexes
recreate foreign key constraints
restore table permissions
rename the temp table back to the original table name
My question is, am I missing anything from my list? Are there any other objects / structures that we will lose which we need to re-create or restore? It would be a disastrous situation if something went wrong. So I am playing this extremely safe.
Resizing the db/adding more space is not a possible solution. Our SQL Server app is near end of life and is being decom'd, so we are just keeping the lights on until then.
While you are doing this operation, will new records be added to the original table? I mean, will the app that writes to this table still be live? If that is the case, maybe it would be better to change the order of the steps, like this:
First, rename the original table to a temporary name.
Create a new table with the original name so that new records can be added by the writing app.
In parallel, you can move the data you want to keep from the temp table into the new original table.
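A rough sketch of that reordering, with hypothetical names (dbo.Orders as the original table, OrderId/OrderDate as stand-in columns, and an arbitrary date as the "keep" criterion):

-- 1) + 2) in one transaction so the writing app never sees the table missing
BEGIN TRANSACTION;
EXEC sp_rename 'dbo.Orders', 'Orders_old';

CREATE TABLE dbo.Orders
(
    OrderId   INT NOT NULL PRIMARY KEY,
    OrderDate DATETIME NOT NULL
    -- ... remaining columns, indexes, permissions, FKs ...
);
COMMIT TRANSACTION;
GO

-- 3) Backfill the rows worth keeping, in batches, while the app keeps writing
DECLARE @rows INT;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.Orders (OrderId, OrderDate /* , ... */)
    SELECT TOP (50000) o.OrderId, o.OrderDate /* , ... */
    FROM dbo.Orders_old AS o
    WHERE o.OrderDate >= '20230101'   -- placeholder "keep" criterion
      AND NOT EXISTS (SELECT 1 FROM dbo.Orders AS k WHERE k.OrderId = o.OrderId);

    SET @rows = @@ROWCOUNT;
END
GO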
I have a table with more than 600 million records and an NVARCHAR(MAX) column.
I need to change the column to a fixed size, but with a database this large I usually have two options:
Change it using the designer in SSMS and this takes a very long time and perhaps never finishes
Change it using Alter Column, but then the database doesn't work after that, perhaps because we have several indexes
My questions would be: what are possible solutions to achieve the desired results without errors?
Note: I use SQL Server on Azure and this database is in production
Thank you
Clarification: all the current data is already within the range of the new length that I want to set.
I've never done this on Azure, but I have done it with a hundred million rows on SQL Server.
I'd add a new column of the desired size (allowing null values, of course) and create the desired index(es).
After that, I'd update this new column in chunks, trimming/converting the old column's values into the new one. Once that's done, remove the indexes on the old column and drop it.
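A minimal sketch of that approach, assuming a hypothetical dbo.Docs table with an old NVARCHAR(MAX) column named Body and a new target size of NVARCHAR(500) (names, size, and batch size are placeholders):

-- New column at the target size; nullable, so the ADD is metadata-only
ALTER TABLE dbo.Docs ADD Body_new NVARCHAR(500) NULL;
GO
-- (create any desired index(es) on Body_new here)

-- Convert in chunks; each batch commits on its own so log space can be reused
DECLARE @rows INT;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (100000) dbo.Docs
    SET Body_new = LEFT(Body, 500)   -- defensive trim; per the clarification the data already fits
    WHERE Body_new IS NULL
      AND Body IS NOT NULL;

    SET @rows = @@ROWCOUNT;
END
GO

-- Finally: drop any indexes referencing the old column, drop it, and rename the new one
ALTER TABLE dbo.Docs DROP COLUMN Body;
EXEC sp_rename 'dbo.Docs.Body_new', 'Body', 'COLUMN';
GO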
Change it using the designer in SSMS and this takes a very long time and perhaps never finishes
What SSMS will do behind the scenes is
create a new table with the new data type for your column
copy all the data from the original table to the new table
rename new table with the old name
drop old table
recreate all the indexes on new table
Of course it will take time to copy all of your "more than 600 million records".
Besides, the whole table will be locked for the duration of the data load with (HOLDLOCK, TABLOCKX).
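The generated script looks roughly like this, simplified and with hypothetical names (dbo.Docs with key Id and the column being changed, Body):

CREATE TABLE dbo.Tmp_Docs
(
    Id   INT NOT NULL,
    Body NVARCHAR(500) NULL   -- the new data type
    -- ... remaining columns ...
);
GO

-- Every row is copied while the source table is held under (HOLDLOCK TABLOCKX)
INSERT INTO dbo.Tmp_Docs (Id, Body /* , ... */)
SELECT Id, Body /* , ... */
FROM dbo.Docs WITH (HOLDLOCK TABLOCKX);
GO

DROP TABLE dbo.Docs;
GO
EXEC sp_rename 'dbo.Tmp_Docs', 'Docs';
GO
-- ...followed by recreating every index, constraint, and trigger on dbo.Docs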
Change it using Alter Column, but then the database doesn't work after that, perhaps because we have several indexes
That is not true.
If any index is involved (and given the nvarchar(max) data type, it can only be an index that has that field as an included column), the server will give you an error and do nothing until you drop that index. And if no index is affected, it simply cannot be that the database "doesn't work after that, perhaps because we have several indexes".
Please think once more about what you want to achieve by changing the type of that column. If you think you'll gain some space by doing so, you're wrong.
Going from nvarchar(max) to a fixed-length nchar(n) (from a variable-length type to a fixed-length type), your data will occupy more space than it does now, because a fixed number of bytes is stored even when the column holds NULL or only 1-3 characters.
And if you just want to change max to something else, for example 4000, you gain nothing, because that is not the real data size but only the maximum size the data can have. And if your strings are small enough, your nvarchar(max) values are already stored as in-row data, not as LOB data the way they would have been with ntext.
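If you want to verify where those nvarchar(max) values actually live, you can look at the table's allocation units; a small sketch, with dbo.Docs as a placeholder table name:

-- Page counts per allocation unit type for the table; small nvarchar(max)
-- values that fit in the row are counted under IN_ROW_DATA, not LOB_DATA
SELECT au.type_desc,
       SUM(au.total_pages) AS total_pages,
       SUM(au.used_pages)  AS used_pages
FROM sys.partitions AS p
JOIN sys.allocation_units AS au
    ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)      -- in-row / row-overflow
    OR (au.type = 2       AND au.container_id = p.partition_id) -- LOB
WHERE p.object_id = OBJECT_ID('dbo.Docs')
GROUP BY au.type_desc;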
Is the connection between the size of a database and the speed at which a command like the one below completes dependent on the data in the column? Does SQL Server have to check, depending on the entries, whether it is possible to change the data type? For the 12.5 million records it takes about 15 minutes.
Code:
USE RDW_DB
GO
ALTER TABLE dbo.RDWTabel
ALTER COLUMN Aantalcilinders SmallInt
All ALTER COLUMN operations are implemented as an add of a new column and drop of the old column:
add a new column of the new type
run an internal UPDATE <table> SET <newcolumn> = CAST(<oldcolumn> as <newtype>);
mark the old column as dropped
You can inspect a table structure after an ALTER COLUMN and see all the dropped, hidden columns. See SQL Server table columns under the hood.
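A hedged sketch of such an inspection, using the undocumented sys.system_internals_partition_columns view that the linked article relies on (the exact column list may vary between versions):

-- Dropped columns stay physically present (is_dropped = 1) until the
-- clustered index or heap is rebuilt; they no longer appear in sys.columns
SELECT p.partition_number,
       pc.partition_column_id,
       pc.is_dropped,
       pc.max_inrow_length,
       pc.leaf_offset
FROM sys.partitions AS p
JOIN sys.system_internals_partition_columns AS pc
    ON pc.partition_id = p.partition_id
WHERE p.object_id = OBJECT_ID('dbo.RDWTabel');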
As you can see, this results in a size-of-data update that must touch every row to populate the values of the new column. This takes time on its own on a big table, but the usual problem is the log growth. As the operation must be accomplished in a single transaction, the log must grow to accommodate the whole change. Newbies often run out of disk space when doing such changes.
Certain operations may be accomplished 'inline'. If the new type fits in the space reserved for the old type and the on-disk layout is compatible, then the update is not required: the new column is literally overlaid on top of the old column data. The article linked above shows examples of this. Also, operations on variable-length types that only change the declared length often do not need to change the on-disk data and are much faster.
Additionally, any ALTER operation requires an exclusive schema modification lock on the table. This will block, waiting for any current activity (queries) to drain out. Often the perceived duration is due to this lock wait, not to the execution itself. Read How to analyse SQL Server performance for more details.
Finally some ALTER COLUMN operations do not need to modify the existing data but they do need to validate the existing data, to ensure it matches the requirements of the new type. Instead of running an UPDATE they will run a SELECT (they will scan the data) which will still be size-of-data, but at least won't generate any log.
In your case, with ALTER COLUMN Aantalcilinders SmallInt it is impossible to tell whether the operation was size-of-data or not. It depends on what the previous type of Aantalcilinders was. If the new type grew the size, then it requires a size-of-data update. E.g. if the previous type was tinyint, then the ALTER COLUMN will have to update every row.
When you change the type of a column, the data generally needs to be rewritten. For instance, if the data were originally a four-byte int, then each record will be two bytes shorter after converting it to smallint. This affects the layout of records on a page as well as the allocation of pages. Every page needs to be touched.
I believe there are some operations in SQL Server that you can do on a column that do not affect the layout. One of them is changing a column from nullable to not-nullable, because the null flags are stored for every column regardless of nullability.
I have a very large table (more than 300 million records) that will need to be cleaned up. Roughly 80% of it will need to be deleted. The database software is MS SQL 2005. There are several indexes and statistics on the table but no external relationships.
The best solution I came up with, so far, is to put the database into "simple" recovery mode, copy all the records I want to keep to a temporary table, truncate the original table, set identity insert to on and copy back the data from the temp table.
It works but it's still taking several hours to complete. Is there a faster way to do this?
As per the comments, my suggestion would be to simply dispense with the copy-back step and promote the table containing the records to be kept to become the new main table by renaming it.
It should be quite straightforward to script out the index/statistics creation to be applied to the new table before it gets swapped in.
The clustered index should be created before the non clustered indexes.
A couple of points I'm not sure about though.
Whether it would be quicker to insert into a heap and then create the clustered index afterwards. (I would guess not, if the insert can be done in clustered index order.)
Whether the original table should be truncated before being dropped (I guess yes)
@uriDium -- chunking using batches of 50,000 will escalate to a table lock, unless you have disabled lock escalation via ALTER TABLE (SQL 2008) or other various locking tricks.
I am not sure what the structure of your data is. When does a row become eligible for deletion? If it is purely an ID-based or date-based thing, then you can create a new table for each day, insert your new data into the new tables, and when it comes to cleaning, simply drop the required tables. Then for any selects, construct a view over all the tables. Just an idea.
EDIT: (In response to comments)
If you are maintaining a view over all the tables, then no, it won't be complicated at all. The complex part is coding the dropping and recreating of the view.
I am assuming that you don't want your data to be locked down too much during deletes. Why not chunk the delete operations? Create a stored procedure that will delete the data in chunks, 50,000 rows at a time. This should make sure that SQL Server keeps a row lock instead of a table lock. Use the
WAITFOR DELAY 'x'
in your while loop so that you can give other queries a bit of breathing room. Your problem is the age-old computer science trade-off: space vs. time.
I have a SQL Server table in production that has millions of rows, and it turns out that I need to add a column to it. Or, to be more accurate, I need to add a field to the entity that the table represents.
Syntactically this isn't a problem, and if the table didn't have so many rows and wasn't in production, this would be easy.
Really what I'm after is the course of action. There are plenty of websites out there with extremely large tables, and they must add fields from time to time. How do they do it without substantial downtime?
One thing I should add, I did not want the column to allow nulls, which would mean that I'd need to have a default value.
So I either need to figure out how to add a column with a default value in a timely manner, or I need to figure out a way to update the column at a later time and then set the column to not allow nulls.
ALTER TABLE table1 ADD
newcolumn int NULL
GO
should not take that long... What takes a long time is inserting a column in the middle of other columns, because then the engine (or rather, the SSMS designer) needs to create a new table and copy the data into the new table.
I did not want the column to allow nulls, which would mean that I'd need to have a default value.
Adding a NOT NULL column with a DEFAULT constraint to a table of any number of rows (even billions) became a lot easier starting in SQL Server 2012 (but only for Enterprise Edition), as they allowed it to be an online operation (in most cases) where, for existing rows, the value is read from metadata and not actually stored in the row until the row is updated or the clustered index is rebuilt. Rather than paraphrase any more, here is the relevant section from the MSDN page for ALTER TABLE:
Adding NOT NULL Columns as an Online Operation
Starting with SQL Server 2012 Enterprise Edition, adding a NOT NULL column with a default value is an online operation when the default value is a runtime constant. This means that the operation is completed almost instantaneously regardless of the number of rows in the table. This is because the existing rows in the table are not updated during the operation; instead, the default value is stored only in the metadata of the table and the value is looked up as needed in queries that access these rows. This behavior is automatic; no additional syntax is required to implement the online operation beyond the ADD COLUMN syntax. A runtime constant is an expression that produces the same value at runtime for each row in the table regardless of its determinism. For example, the constant expression "My temporary data", or the system function GETUTCDATETIME() are runtime constants. In contrast, the functions NEWID() or NEWSEQUENTIALID() are not runtime constants because a unique value is produced for each row in the table. Adding a NOT NULL column with a default value that is not a runtime constant is always performed offline and an exclusive (SCH-M) lock is acquired for the duration of the operation.
While the existing rows reference the value stored in metadata, the default value is stored on the row for any new rows that are inserted and do not specify another value for the column. The default value stored in metadata is moved to an existing row when the row is updated (even if the actual column is not specified in the UPDATE statement), or if the table or clustered index is rebuilt.
Columns of type varchar(max), nvarchar(max), varbinary(max), xml, text, ntext, image, hierarchyid, geometry, geography, or CLR UDTS, cannot be added in an online operation. A column cannot be added online if doing so causes the maximum possible row size to exceed the 8,060 byte limit. The column is added as an offline operation in this case.
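So on SQL Server 2012+ Enterprise Edition, a statement along these lines (hypothetical table and column names, with SYSUTCDATETIME() standing in as a runtime constant) completes almost instantly regardless of row count:

-- Metadata-only on Enterprise Edition 2012+: existing rows are not touched;
-- the default is served from metadata until a row is updated or the index is rebuilt
ALTER TABLE dbo.Orders
ADD CreatedUtc DATETIME2 NOT NULL
    CONSTRAINT DF_Orders_CreatedUtc DEFAULT (SYSUTCDATETIME());

By contrast, a default of NEWID() or NEWSEQUENTIALID() would force the offline, fully logged behavior, per the quoted documentation.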
The only real solution for continuous uptime is redundancy.
I acknowledge #Nestor's answer that adding a new column shouldn't take long in SQL Server, but nevertheless, it could still be an outage that is not acceptable on a production system. An alternative is to make the change in a parallel system, and then once the operation is complete, swap the new for the old.
For example, if you need to add a column, you may create a copy of the table, then add the column to that copy, and then use sp_rename() to move the old table aside and the new table into place.
If you have referential integrity constraints pointing to this table, this can make the swap even more tricky. You probably have to drop the constraints briefly as you swap the tables.
For some kinds of complex upgrades, you could completely duplicate the database on a separate server host. Once that's ready, just swap the DNS entries for the two servers and voilà!
I supported a stock exchange company in the 1990's who ran three duplicate database servers at all times. That way they could implement upgrades on one server, while retaining one production server and one failover server. Their operations had a standard procedure of rotating the three machines through production, failover, and maintenance roles every day. When they needed to upgrade hardware, software, or alter the database schema, it took three days to propagate the change through their servers, but they could do it with no interruption in service. All thanks to redundancy.
"Add the column and then perform relatively small UPDATE batches to populate the column with a default value. That should prevent any noticeable slowdowns"
And after that you have to set the column to NOT NULL, which will fire off as one big transaction. So everything will run really fast until you do that step, meaning you have probably gained very little. I only know this from first-hand experience.
You might want to rename the current table from X to Y. You can do this with the command sp_rename '[OldTableName]', '[NewTableName]'.
Recreate the new table as X with the new column set to NOT NULL, then batch insert from Y to X, and include a default value either in your insert for the new column or by placing a default constraint on the new column when you recreate table X.
I have done this type of change on a table with hundreds of millions of rows. It still took over an hour, but it didn't blow out our trans log. When I tried to just change the column to NOT NULL with all the data in the table it took over 20 hours before I killed the process.
Have you tested just adding a column, filling it with data, and then setting the column to NOT NULL?
So in the end I don't think there's a magic bullet.
Select into a new table and rename. Example, adding column i to table A:
select *, 1 as i
into A_tmp
from A_tbl
-- add any indexes here
exec sp_rename 'A_tbl', 'A_old'
exec sp_rename 'A_tmp', 'A_tbl'
Should be fast and won't touch your transaction log like inserting in batches might.
(I just did this today w/ a 70 million row table in < 2 min).
You can wrap it in a transaction if you need it to be an online operation (something might change in the table between the select into and the renames).
Another technique is to add the column to a new related table (assume a one-to-one relationship, which you can enforce by giving the FK a unique index). You can then populate this table in batches, and then add the join to it wherever you want the data to appear. Note that I would only consider this for a column that I would not want to use in every query on the original table, or if the record width of my original table was getting too large, or if I was adding several columns.
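A small sketch of that pattern, with hypothetical names (dbo.Customer as the original table and LoyaltyScore as the new column); here the side table's primary key doubles as the foreign key, which enforces the one-to-one relationship just as a unique index on the FK would:

-- Side table holding only the new column
CREATE TABLE dbo.CustomerExtra
(
    CustomerId INT NOT NULL
        CONSTRAINT PK_CustomerExtra PRIMARY KEY
        CONSTRAINT FK_CustomerExtra_Customer REFERENCES dbo.Customer (CustomerId),
    LoyaltyScore INT NOT NULL
        CONSTRAINT DF_CustomerExtra_LoyaltyScore DEFAULT (0)
);
GO

-- Populate in batches at leisure, then join it in only where the new data is needed
SELECT c.CustomerId, c.Name, x.LoyaltyScore
FROM dbo.Customer AS c
LEFT JOIN dbo.CustomerExtra AS x
    ON x.CustomerId = c.CustomerId;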