Reclaim space in SQL Server 2005 database when dropping tables permanently - sql-server-2005

I'm dropping a massive number of tables from a SQL Server 2005 database. How do I shrink the database - assuming I'm not replacing the data or the tables? I'm archiving the data to another database.

DBCC Shrinkdatabase(0) -- Currently selected database
or
DBCC Shrinkdatabase(<databasename>) -- Named database
However, shrinking files will likely fragment your tables, particularly larger ones, as the contents of tables get moved about within the file, so once the database is shrunk it's a good idea to defragment your tables. This, of course, will make your files grow again, but probably not as large as they were before you dropped your old tables. (Err, that assumes that the dropped tables contained large quantities of data...)

You can use the DBCC SHRINKDATABASE command, or you can right-click the database, Tasks, Shrink, Database.
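A minimal sketch of that shrink-then-defragment sequence, using placeholder names for the database and a large table:

DBCC SHRINKDATABASE (MyDatabase, 10)          -- leave roughly 10% free space after the shrink
ALTER INDEX ALL ON dbo.MyLargeTable REBUILD   -- rebuild indexes to remove the fragmentation the shrink introduces

Repeat the rebuild for each table whose indexes matter; on SQL Server 2005, ALTER INDEX ... REBUILD replaces the older DBCC DBREINDEX.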

Related

SQL: How to alter all varchar columns in all tables to nvarchar?

I need to convert all varchar columns in about 40 tables (filled with data) to nvarchar columns. This is planned to happen on a dedicated MS SQL Server used only for that purpose. The result should then be moved to Azure SQL.
Where should the conversion be done: on the old SQL Server, or after moving the database to Azure SQL?
According to Remus Rusanu's answer https://stackoverflow.com/a/8157951/1346705, new nvarchar columns are created in the process, and the old varchar columns are dropped. The space can be reclaimed by DBCC CLEANTABLE or by using ALTER TABLE ... REBUILD. Are the dropped varchar columns carried along into the backup, or does the backup/restore also remove the dropped columns?
Can the process be somehow automated using a universal SQL script, or is it necessary to write a script for each individual table?
Context: We are a 3rd party with respect to the enterprise information system (IS). Our product reads from the IS SQL database and presents the data in a way that would otherwise be expensive to implement in the IS. The enterprise information system is now being migrated to a new version and is to run on Azure SQL. The database of the IS has been changed heavily, and one of the changes was to abandon the old 8-bit text encoding (varchar) and to use Unicode (nvarchar) instead. Our system was also used for collecting manually typed data -- using the same encoding that the old IS used.
Migration is to be done by taking an old-style backup (SqlCmd producing xxx.bak files) and restoring it on another good old SQL Server. Then we run a script that removes all the tables, views, and stored procedures that can be reconstructed from the IS. One of the main reasons is that the SQL code uses features that are not accepted by the new backup tool SqlPackage.exe, which produces the xxx.bacpac file. The bacpac file is then restored into Azure SQL.
Where should the conversion be done: on the old SQL Server, or after moving the database to Azure SQL?
I would do it on the local SQL Server first. Running this on the Azure database might cause you to run into issues such as hitting your DTU limits or disk IO throttling.
Are the dropped varchar columns carried along into the backup, or does the backup/restore also remove the dropped columns?
The space won't be released back to the filesystem, and a backup doesn't process free space, so you will not see much change there. You might want to read more on DBCC CLEANTABLE before proceeding, though.
Can the process be somehow automated using a universal SQL script, or is it necessary to write a script for each individual table?
It can be automated; you could use dynamic SQL to look up the column types and generate the changes from there. You will also have to check whether any of those columns are part of indexes; if so, you have to drop those indexes first.
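As a rough illustration of that dynamic SQL idea (a sketch only: it generates the ALTER statements for review rather than executing them, and it ignores indexes, constraints, and defaults that reference the columns, which would have to be dropped and recreated around it):

SELECT 'ALTER TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
     + ' ALTER COLUMN ' + QUOTENAME(c.name)
     + ' nvarchar(' + CASE WHEN c.max_length = -1 THEN 'max' ELSE CAST(c.max_length AS varchar(10)) END + ')'
     + CASE WHEN c.is_nullable = 1 THEN ' NULL;' ELSE ' NOT NULL;' END
FROM sys.columns c
JOIN sys.tables t ON t.object_id = c.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE c.user_type_id = TYPE_ID('varchar')   -- only plain varchar columns
ORDER BY t.name, c.column_id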
I suggest making the schema changes beforehand on the old instance. Even if you don't bother cleaning up space with DBCC CLEANTABLE or ALTER...REBUILD, the resultant bacpac size will be the same because, unlike a physical backup/restore, a bacpac file is just a compressed package of schema and data.
Consider using SQL Server Data Tools (SSDT) to facilitate the schema changes. It will take care of all the dependencies (constraints, indexes, etc.) that are a challenge for a "universal" T-SQL solution. SSDT will generally generate a migration script that employs temp tables for such schema changes, so the end result won't have wasted space in your old database. However, you will need sufficient unused space in the database to hold the old and new objects side by side.
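For reference, reclaiming the space left by the dropped varchar columns on the old instance would look roughly like this (database and table names are placeholders):

DBCC CLEANTABLE ('MyDatabase', 'dbo.MyTable')   -- reclaims space from dropped variable-length columns
ALTER TABLE dbo.MyTable REBUILD                 -- alternative, on SQL Server 2008 and later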

How to safely release unallocated space for 1 table in a database?

Our production database is on SQL Server 2008 R2. One of our tables, Document_Details, stores documents that users upload via our application (VB). They are stored in varbinary(max) format. There are over 20k files in PDF format, and many of them are large (some are 50 MB each), so overall this table is 90 GB. We then ran an exe that compressed these PDF files down to 10 GB.
However, here lies the problem - the table is still 90 GB in size. The unallocated space hasn't been released. How do I deallocate this space so that the table is 10 GB?
I tried moving the table to a new filegroup and then back to the original filegroup, but in either case it didn't release any space.
I also tried rebuilding the index on the table, but that didn't work either.
What did work (but I heard it isn't recommended) was: change the recovery model to Simple, shrink the filegroup, then set recovery back to Full.
Could I move this table to a new filegroup and then shrink that filegroup (i.e. just the Document_Details table)? I know the shrink command affects performance, but if it's just one table, would it still be a problem? Or is there anything else I can try?
Thanks.
Moving a table to a filegroup has one problem: by default the TEXTIMAGE data (the blobs) is not moved! A table's rows can reside on one filegroup and its blobs on another. This is a crazy defect in SQL Server. Maybe when you rebuilt the table, the blobs were simply not touched.
Use one of the well-known methods to move the LOB data as well. That would rebuild the LOBs and shrink them.
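One common workaround, sketched here with placeholder column and filegroup names, is to rebuild the table into a copy whose TEXTIMAGE_ON points at the desired filegroup, since the LOB allocation cannot be changed in place:

CREATE TABLE dbo.Document_Details_new
(
    DocumentId   int            NOT NULL PRIMARY KEY,   -- placeholder columns
    DocumentBlob varbinary(max) NULL
)
ON [PRIMARY] TEXTIMAGE_ON [FG_Documents]                 -- FG_Documents is an assumed target filegroup

INSERT INTO dbo.Document_Details_new (DocumentId, DocumentBlob)
SELECT DocumentId, DocumentBlob FROM dbo.Document_Details

-- After verifying the copy, swap the tables:
DROP TABLE dbo.Document_Details
EXEC sp_rename 'dbo.Document_Details_new', 'Document_Details'

Once the old table is gone, the freed pages in the original filegroup can be released with a file shrink.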

LDF file continues to grow very large during transaction phase - SQL Server 2005

We have a 6-step process where we copy tables from one database to another. Each step executes a stored procedure:
1. Remove tables from the destination database
2. Create tables in the destination database
3. Shrink the database log before the copy
4. Copy tables from source to destination
5. Shrink the database log
6. Back up the destination database
During step 4, our transaction log (LDF file) grows very large, to the point where we now have to consistently increase its max size on the SQL Server, and soon enough (in the far future) we believe it may eat up all the resources on our server. It was suggested that in our script we commit each transaction instead of waiting until the end to commit all of them.
Any suggestions?
I'll make the assumption that you are moving large amounts of data. The typical solution to this problem is to break the copy up into batches of a smaller number of rows; this keeps the hit on the transaction log smaller. I think this will be the preferred answer.
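A rough sketch of that batching idea, with placeholder database, table, and key names (the right batch key and size depend on your schema):

DECLARE @rows int
SET @rows = 1
WHILE @rows > 0
BEGIN
    -- Copy at most 10,000 new rows per iteration so each batch commits separately.
    INSERT INTO DestinationDb.dbo.MyTable (Id, Col1, Col2)
    SELECT TOP (10000) s.Id, s.Col1, s.Col2
    FROM SourceDb.dbo.MyTable s
    WHERE NOT EXISTS (SELECT 1 FROM DestinationDb.dbo.MyTable d WHERE d.Id = s.Id)
    SET @rows = @@ROWCOUNT
END

With SIMPLE recovery (or regular log backups otherwise), the log space used by one batch can be reused by the next instead of accumulating in a single huge transaction.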
The other answer that I have seen is using Bulk Copy, which writes the data out to a text file and imports it into your target db using Bulk Copy. I've seen a lot of posts that recommend this. I haven't tried it.
If the schema of the target tables isn't changing could you not just truncate the data in the target tables instead of dropping and recreating?
Can you change the database recovery model to Bulk Logged for this process?
Then, instead of creating empty tables at the destination, do a SELECT INTO to create them. Once they are built, alter the tables to add indices and constraints. Doing bulk copies like this will greatly reduce your logging requirements.
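A sketch of that approach, with placeholder database and table names:

ALTER DATABASE DestinationDb SET RECOVERY BULK_LOGGED   -- SELECT INTO is minimally logged under this model

SELECT *
INTO DestinationDb.dbo.MyTable    -- creates the table and copies the data in one statement
FROM SourceDb.dbo.MyTable

-- add indexes and constraints here, then restore the usual recovery model
ALTER DATABASE DestinationDb SET RECOVERY FULL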

SQL Server DB size - why is it so large?

I am building a database which contains about 30 tables:
The largest number of columns in a table is about 15.
For datatypes I am mostly using VarChar(50) for text and Int or SmallInt for numbers.
Identity columns are Uniqueidentifiers.
I have been testing a bit, filling in data and deleting it again. I have now deleted all data, so every table is empty.
But if I look at the properties of the database in Management Studio, the size says 221.38 MB!
How come? Please help, I am getting notifications from my hosting company that I am exceeding my limits.
Best regards,
:-)
I would suggest that you look first at the recovery model for the database. By default, the recovery model is FULL. This fills the log file with every transaction that you perform, never clearing the records until you back up the log.
To change the recovery mode, right click on the database and choose Properties. In the properties list, choose the Options (on the right hand pane). Then change the "Recovery model" to Simple.
You probably also want to shrink your files. To do this, right click on the database and choose Tasks --> Shrink --> Files. You can shrink both the data file and the log file, by changing the "File Type" option in the middle.
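If you prefer T-SQL over the dialogs, the equivalent is roughly this (the database name and the logical file names are placeholders; look yours up in sys.database_files):

USE MyDatabase
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE
DBCC SHRINKFILE (MyDatabase_log, 10)     -- shrink the log file to about 10 MB
DBCC SHRINKFILE (MyDatabase_data, 50)    -- optionally shrink the data file as well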
Martin's comment is quite interesting. Even if the log file is in auto-truncate mode, you still have the issue of the deletes being logged. If you created large-ish tables, the log file will still expand, and the space will not be recovered until you shrink the file. You can get around this by using TRUNCATE rather than DELETE:
truncate table <table>
does not log every record being deleted (http://msdn.microsoft.com/en-us/library/ms177570.aspx).
delete from <table>
logs every record.
As you do inserts, updates, deletes, and design changes, a log file is written containing every transaction plus a whole bunch of other data. This transaction log is a required component of a SQL Server database and cannot be disabled by any available setting.
Below is an article from Microsoft on doing backups to shrink the transaction logs generated by SQL Server.
http://msdn.microsoft.com/en-us/library/ms178037(v=sql.105).aspx
Also, are you indexing your columns? Indexes that consist of several columns on tables with a high row count can become unnecessarily large, especially if you are just doing tests. Try just having a single clustered index on only one column per table.
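To see which tables and indexes are actually consuming the space, sp_spaceused is a quick check (the table name is a placeholder):

EXEC sp_spaceused                     -- database-level totals, including unallocated space
EXEC sp_spaceused 'dbo.MyTable'       -- per-table data size, index size, and unused space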
You may also want to learn about table statistics. They help your indexes out and also help you perform queries like SELECT DISTINCT, or SELECT COUNT(*), etc.
http://msdn.microsoft.com/en-us/library/ms190397.aspx
Finally, you will need to upgrade your storage allocation for the SQL Server database. The more you use it, the faster it will want to grow.

Table size not reducing after purging a table

I recently performed a purge on my application table. It held a total of 1.1 million records and used 11.12 GB of disk space.
I deleted 860k records, leaving 290k records, but why did the space used only drop to 11.09 GB?
I monitor the details in the report Disk Usage - Disk Space Used by Data Files - Space Used.
Do I need to perform a data file shrink? This has puzzled me for a long time.
For MS SQL Server, rebuild the clustered indexes.
You have only deleted rows: not reclaimed space.
DBCC DBREINDEX or ALTER INDEX ... REBUILD, depending on version
(It's MS SQL because the disk space report is in SSMS)
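For example, with a placeholder table name:

ALTER INDEX ALL ON dbo.MyApplicationTable REBUILD   -- SQL Server 2005 and later
DBCC DBREINDEX ('dbo.MyApplicationTable')           -- older versions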
You need to explicitly call some operation (specific to your database management system) that will shrink the data file. The database engine doesn't shrink the file when you delete records, that's for optimization purposes - shrinking is time-consuming.
I think this is like with mail folders in Thunderbird: If you delete something, it's just marked as deleted, but to get higher performance, the space isn't freed. So most of your 11.09 GB will now contain either your old data or 0's. Shrink data file will "compress" (or "clean") this by creating a new file that'll only contain the actual data that is left.
You probably need to shrink the table. I know that SQL Server doesn't do it by default for you; I would guess this is for performance reasons. Maybe other DBs are the same.