SQL Server not able to get storage space back when deleting a table - sql

I have a problem when it comes to deleting a large table (170 GB) from my database.
When I delete this large table via right click > Delete, I do not get the storage space back. The table is indeed gone from the database, but the space the database occupies does not shrink.
Can anyone tell me what is wrong?

Tables are stored in table spaces. These are allocated to the database, regardless of whether the space is actually used to store tables (or indexes or anything else).
When you delete the table, you have freed space in the table space. The space is available to the database for your next table (or whatever). You need to either drop or shrink the table space to release the space back to the operating system.
A place to start is with DBCC SHRINKFILE, documented here.
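For example (a minimal sketch; the database and logical file names MyDatabase and MyDatabase_Data are placeholders for your own, and 10000 is an arbitrary target size in MB):
USE MyDatabase;
-- List the logical file names and current sizes (size is reported in 8 KB pages)
SELECT name, size / 128 AS SizeMB
FROM sys.database_files;

-- Shrink the data file toward a target size, given in MB
DBCC SHRINKFILE (MyDatabase_Data, 10000);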

Short answer:
Run sp_clean_db_free_space before shrinking. My assumption is that you've already tried shrinking the files; if not, that question has been answered elsewhere.
Parenthetical statement:
You shouldn't shrink databases if you can avoid it.
Long answer: the behavior you see is the result of ghost records. To understand more about this at a system level, read this article: Inside the Storage Engine: Ghost cleanup in depth.
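In practice that could look like the following sketch (the database and file names are again placeholders, not your actual names):
-- Force ghost/residual data on deallocated pages to be cleaned up
EXEC sp_clean_db_free_space @dbname = N'MyDatabase';

-- Then release the now genuinely free space back to the OS
DBCC SHRINKFILE (MyDatabase_Data, 10000);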

Related

Truncation of large table in SQL Server database

I would like to completely clear one table in my SQL Server database.
Unfortunately, the table is large (> 90GB). I am going to use the TRUNCATE statement.
Is there anything I should pay attention to beforehand?
I am also wondering whether it will somehow affect the server's disk space (currently about 110 GB free).
After it is all done, a SHRINK DATABASE will probably be necessary.
TRUNCATE TABLE is faster and uses fewer system and transaction log resources than DELETE with no WHERE clause, but if you need an even faster solution, you can create a new version of the table (table1), drop the old table, and rename table1 to table.
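A rough sketch of that drop-and-rename approach, assuming SQL Server and a table named dbo.[table] (note that SELECT ... INTO copies only the columns, not indexes, constraints, or triggers, so those must be re-created by hand):
-- Create an empty copy of the table's columns (WHERE 1 = 0 copies no rows)
SELECT * INTO dbo.table1 FROM dbo.[table] WHERE 1 = 0;

-- Re-create indexes, constraints, and permissions on dbo.table1 here

DROP TABLE dbo.[table];
EXEC sp_rename 'dbo.table1', 'table';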

Teradata DROP COLUMN fails with "no more room"

I am trying to drop a varchar(100) column of a 150 GB table (4.6 billion records). All the data in this column is null. I have 30 GB more space in the database.
When I attempt to drop the column, it says "no more room in database XY". Why does such an action need so much space?
The ALTER TABLE statement needs temporary storage for the altered version before overwriting the original table. I guess the table that you are trying to alter occupies at least 1/3 of your total storage size.
This could happen for a variety of reasons. It's possible that one of the AMPs in your database is full; this would cause that error even with a minor table alteration.
Try running the following SQL to check space per AMP:
select VProc, CurrentPerm, MaxPerm
from dbc.DiskSpace
where DatabaseName='XY';
Also, you should check which column the primary index of this very large table is on. If the table is badly skewed across AMPs, you can run into space issues when altering the table or even when just querying it.
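One way to check skew is a query like this sketch against the dbc.TableSizeV view (older releases expose it as dbc.TableSize; 'XY' is the database from the question):
-- CurrentPerm is reported per AMP; a big gap between AVG and MAX means skew
SELECT TableName,
       SUM(CurrentPerm) AS TotalPerm,
       CAST(100 * (1 - AVG(CurrentPerm) / NULLIF(MAX(CurrentPerm), 0)) AS DECIMAL(5,2)) AS SkewPct
FROM dbc.TableSizeV
WHERE DatabaseName = 'XY'
GROUP BY TableName
ORDER BY SkewPct DESC;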
For additional suggestions I found a decent article on the kind of things you may want to investigate when the "no more room in database" error occurs - Teradata SQL Tutorial. Some of the suggestions include:
dropping any intermediary work or "sandbox" tables
implementing single-value or multi-value compression (see the sketch after this list)
dropping unwanted/unnecessary secondary indexes
removing data in dbc tables such as the access log or DBQL tables
removing and archiving old tables that are no longer used
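As a sketch of the compression suggestion: in Teradata, multi-value compression is declared per column, so a rebuilt table might look like this (all names and values here are invented for illustration; VARCHAR compression requires Teradata 13.10 or later):
CREATE TABLE sandbox.big_table_compressed (
    id     BIGINT NOT NULL,
    status VARCHAR(20) COMPRESS ('ACTIVE', 'CLOSED', 'PENDING'), -- compress frequent values
    note   VARCHAR(100) COMPRESS                                 -- bare COMPRESS compresses NULLs
)
PRIMARY INDEX (id);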

Improve update performance when setting column to null

Bit of a long shot here, but I have a simple query below:
begin transaction
update s
set s.SomeField = null
from someTable s (NOLOCK)
rollback transaction
This runs in ~30 seconds sitting close to the SQL Server box. Are there any tricks I can use to improve the speed? The table has 144,306 rows in it.
Thanks.
The single largest component of the performance of a large UPDATE command like this is going to be the speed of your DB log.
For best performance:
Make sure the DB log (LDF file) is on a separate physical spindle from the DB data (MDF file)
Avoid parity RAID for the log volume, such as RAID-5; RAID-1 or RAID-10 are better
Make sure that the DB log file is pre-grown, and that it's physically contiguous on disk
Make sure your server has enough RAM -- ideally, at least enough to hold all of the DB pages containing the modified rows
Using SSDs for your data drive may also help, because the command will create a large number of dirty buffers, which will be flushed to disk later by the lazy writer; this can make other operations on the DB slow while it's happening.
If there's no constraint on it, and you really need to set all values of that column to NULL, then I would test dropping the column and re-adding it.
Not sure if that would be faster or not, but I'd investigate it.
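A sketch of that test, assuming the column is an INT (the real data type has to match your schema), and remembering that any index or constraint on the column must be dropped first:
ALTER TABLE someTable DROP COLUMN SomeField;
-- Re-adding the column as nullable is a metadata-only change;
-- every row then reads back NULL for it
ALTER TABLE someTable ADD SomeField INT NULL;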
Try disabling the index temporarily.
You could change the syntax of your query slightly, but I saw no difference in my testing when doing that. I was measuring with STATISTICS IO and STATISTICS TIME.
You mention the column is indexed. You could disable and re-enable the index as part of your transaction. The T-SQL for that is simple; see this: http://blog.sqlauthority.com/2007/05/17/sql-server-disable-index-enable-index-alter-index/
I've had to do that in the past for similar jobs and it has worked out well for me.
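A sketch of that pattern (the index name IX_someTable_SomeField is hypothetical):
ALTER INDEX IX_someTable_SomeField ON someTable DISABLE;

UPDATE someTable SET SomeField = NULL;

-- Rebuilding re-enables the index
ALTER INDEX IX_someTable_SomeField ON someTable REBUILD;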
Try implementing it like this:
Disable Index
Drop the column
Create the column
Rebuild index
My guess is that it will improve performance.

Bulk delete (truncate vs delete)

We have a table with 150+ million records. We need to clear/delete all rows. A DELETE operation would take forever because of the writes to the transaction logs, and we cannot change our recovery model for the whole DB. We have tested the TRUNCATE TABLE option.
What we realized is that TRUNCATE deallocates the pages from the table and, if I am not wrong, makes them available for reuse, but it doesn't shrink the DB automatically. So, if we want to reduce the DB size, we would really need to run the shrink DB command after truncating the table.
Is this normal procedure? Is there anything we need to be careful or aware of, or are there better alternatives?
TRUNCATE is what you're looking for. If you need to slim down the DB afterwards, run a shrink.
This MSDN reference (if you're talking T-SQL) compares the behind-the-scenes behavior of deleting rows versus truncating.
"Delete all rows"... wouldn't DROP TABLE (and re-recreate an empty one with same schema / indices) be preferable ? (I personally like "fresh starts" ;-) )
This said TRUNCATE TABLE is quite OK too, and yes, DBCC SHRINKFILE may be required afterwards if you wish to recover the space.
Depending on the size of the full database, the shrink may take a while; I've found it goes faster if the file is shrunk in smaller chunks rather than trying to get it all back at once.
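A sketch of that chunked approach (the logical file name, sizes, and step are all assumptions to adapt to your files):
DECLARE @size INT = 150000;  -- current data file size in MB (assumed)
DECLARE @stop INT = 50000;   -- desired final size in MB (assumed)

WHILE @size > @stop
BEGIN
    SET @size = @size - 5000;  -- release ~5 GB per pass
    DBCC SHRINKFILE (N'MyDatabase_Data', @size);
END;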
One thing to remember with TRUNCATE TABLE (as well as DROP TABLE): going forward, this will not work if you ever have foreign keys referencing the table.
As pointed out, if you can't use TRUNCATE or DROP:
-- SELECT 1 primes @@ROWCOUNT so the loop body runs at least once
SELECT 1;
WHILE @@ROWCOUNT <> 0
    DELETE TOP (100000) FROM MyTable;
You have a normal solution (truncate + shrink db) to remove all the records from a table.
As Irwin pointed out, the TRUNCATE command won't work on a table that is referenced by a foreign key constraint. So first drop the constraints, truncate the table, and recreate the constraints.
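A sketch of that drop/truncate/recreate sequence (all table, column, and constraint names here are invented for illustration):
ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_ChildTable_MyTable;

TRUNCATE TABLE dbo.MyTable;

-- WITH CHECK revalidates any rows already in the child table
ALTER TABLE dbo.ChildTable WITH CHECK
    ADD CONSTRAINT FK_ChildTable_MyTable
    FOREIGN KEY (MyTableId) REFERENCES dbo.MyTable (Id);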
If you're concerned about performance and this is a regular routine for your system, you might want to look into moving this table to its own data file, then run the shrink only against that target data file!

SQL Server 2005 Shrink and Rebuild indexes

We have a weekly maintenance plan to shrink all user databases and rebuild their indexes. This had been working fine until we created a read-only database; now each time the plan runs, it fails when it starts processing this database due to its read-only state.
As far as I can see, we have two options: remove the read-only flag from the database, which is possible, but as the database is only updated once a quarter it makes sense from a performance point of view to make use of the read-only feature; or manually select the databases the plan should run for, i.e. all the user databases apart from the read-only one, which then requires people to remember to add any new databases to the plan.
Does anyone have any suggestions of a better way of doing this?
Thanks
Neil
Why are you shrinking the database in the first place?
Also, there's no need to maintain read-only DBs like that.
I'd remove the read-only flag if you don't want to customise the maintenance plan.
Why are you shrinking DBs, too? If the database grows to a given size, then that is probably its natural current size.
Also remember that an index rebuild (rule of thumb) requires free space of 120% of the target table size, e.g. a 500 MB table needs 600 MB free space.
It's pointless to shrink then rebuild... and you'll have horrendous file fragmentation too
I suppose you could modify the maintenance plan to start with an 'Execute T-SQL Statement' step that removes the read-only flag (ALTER DATABASE database-name SET READ_WRITE) and add a final step to reset it:
ALTER DATABASE database-name SET READ_ONLY
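So the plan would be bracketed roughly like this (database-name is the placeholder from above; WITH ROLLBACK IMMEDIATE forcibly closes open connections, which I'm assuming is acceptable in a maintenance window):
-- First step of the plan: make the database writable
ALTER DATABASE [database-name] SET READ_WRITE WITH ROLLBACK IMMEDIATE;

-- ... index rebuilds and other maintenance tasks run here ...

-- Final step: put it back to read-only
ALTER DATABASE [database-name] SET READ_ONLY;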