Table size not reducing after purging table - SQL

I recently performed a purge on my application table. It held 1.1 million records in total, using 11.12 GB of disk space.
I deleted 860k records, leaving 290k, but the space used only dropped to 11.09 GB. Why?
I am monitoring this in the report: Disk Usage -> Disk Space Used by Data Files -> Space Used.
Do I need to perform a shrink on the data file? This has puzzled me for a long time.

For MS SQL Server, rebuild the clustered indexes.
You have only deleted rows; you have not reclaimed the space.
Use DBCC DBREINDEX or ALTER INDEX ... WITH REBUILD, depending on your version.
(It's MS SQL because the disk space report is in SSMS.)
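A minimal sketch, assuming the purged table is called dbo.MyAppTable (substitute your own table name):
-- Rebuild every index on the table, including the clustered index,
-- so pages left half-empty by the mass delete are compacted.
ALTER INDEX ALL ON dbo.MyAppTable REBUILD;
-- Pre-2005 equivalent:
-- DBCC DBREINDEX ('dbo.MyAppTable');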

You need to explicitly call an operation (specific to your database management system) that shrinks the data file. The database engine does not shrink the file when you delete records; that is a deliberate optimization, because shrinking is time-consuming.
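In SQL Server that operation is DBCC SHRINKFILE. A hedged sketch, assuming your data file's logical name is MyAppDb_Data and that leaving roughly 4 GB allocated is acceptable:
-- Find the logical file names and current sizes first.
SELECT name, size / 128 AS size_mb FROM sys.database_files;
-- Shrink the data file down to the target size (in MB).
DBCC SHRINKFILE (MyAppDb_Data, 4096);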

I think this is like mail folders in Thunderbird: if you delete something, it is just marked as deleted, but for performance the space isn't freed. So most of your 11.09 GB now holds either your old data or zeros. Shrinking the data file will "compact" (or "clean") this, leaving a file that only contains the data that is actually left.

You probably need to shrink the data file. I know SQL Server doesn't do it for you by default, presumably for performance reasons; other databases may behave the same way.

Related

SQL Server: hard disk free space dropped to 0 while rebuilding an index on a table

I've got a major problem.
I tried to rebuild an index on a table with 300M records.
I had 100 GB of free storage.
While the process was running, free space dropped to 0 and the rebuild got stuck.
Now I can't access any data in this specific table.
(The SQL log file is about the size the table was before.)
Does anyone have a suggestion on how to fix this?
Update: I shrank the database log file and it went from almost 700 GB down to almost nothing.
Somehow the table had its data again; it probably resumed the reindexing after I did that.
Back to normal.
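For reference, a minimal sketch of checking and shrinking a bloated transaction log like this one (assuming its logical name is MyDb_Log):
-- See how full each database's log currently is.
DBCC SQLPERF (LOGSPACE);
-- Shrink the log file back down, targeting about 1 GB.
DBCC SHRINKFILE (MyDb_Log, 1024);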

How to safely release unallocated space for 1 table in a database?

Our Production database is on SQL Server 2008 R2. One of our tables, Document_Details, stores documents that users upload via our application (VB). They are stored in varbinary(max) format. There are over 20k files in pdf format and many of these are large in size (some are 50mb each). So overall this table is 90GB. We then ran an exe that compressed these pdf files down to 10GB.
However, here lies the problem: the table is still 90GB in size. The unallocated space hasn't been released. How do I deallocate this space so that the table is 10GB?
I tried moving the table to a new filegroup and then back to original filegroup but in either case it didn't release any space.
I also tried rebuilding the index on the table but that didn't work either.
What did work (but I heard it isn't recommended) was: change the recovery model to Simple, shrink the filegroup, then set recovery back to Full.
Could I move this table to a new filegroup and then shrink that filegroup (i.e. just the Document_Details table)? I know the shrink command affects performance but if it's just 1 table would it still be a problem? Or is there anything else I can try?
Thanks.
Moving a table to another filegroup has one gotcha: by default the TEXTIMAGE data (the blobs) is not moved! A table's rows can reside on one filegroup and its blobs on another. This is a crazy quirk of SQL Server, and it probably means your rebuilds simply never touched the blobs.
Use one of the well-known methods to move or rebuild the LOB data as well. That will rewrite the blobs and shrink them.
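One such method, sketched here on the assumption that the dead space sits in the varbinary(max) pages of Document_Details: reorganize the indexes with LOB compaction, which rewrites the LOB pages in place.
-- Compact the LOB pages left behind after the PDFs were re-compressed.
ALTER INDEX ALL ON dbo.Document_Details REORGANIZE WITH (LOB_COMPACTION = ON);
-- Then, if needed, release the freed pages at the file level:
-- DBCC SHRINKFILE (<data_file_logical_name>, <target_size_in_MB>);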

SQL Server - How to shrink the allocated space for a staging table used by SSIS? [duplicate]

This question already has answers here:
How do I shrink my SQL Server Database?
(16 answers)
Closed 8 years ago.
I'm facing a strange problem with a staging database used by my ETL (to update rows).
Only the rows to update are stored in the database; then a script is executed to update the destination database. At the end of the process, the staging database is truncated.
That removes all the data, yet the space allocated to my database grows with every execution of my SSIS package. So, is there a way to reduce the allocated size and to limit the maximum allocated size? In SQL Server Management Studio there is a wizard to shrink the data files and the database.
Is there an equivalent command in T-SQL?
Thanks!
Don't.
If your staging needs a database of size X, then size the database at X and leave it alone. Attempting to shrink it is misguided at best. By shrinking it, all you achieve is to invite an opportunity for your ETL to fail tomorrow because it runs out of the disk space it requires. Do not fool yourself with 'I only need space X during the ETL'. You need space X, period.
I'm not even going to go into all the performance problems related to shrink and re-growth.
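If you follow that advice, a minimal sketch of pre-sizing the files once up front (assuming the database is named Staging, the logical file names are Staging_Data and Staging_Log, and ~20 GB / 4 GB are the peaks your ETL really hits; all of these are assumptions):
-- Grow the files to their working size up front so the ETL never pays for auto-growth.
ALTER DATABASE Staging MODIFY FILE (NAME = Staging_Data, SIZE = 20480MB);
ALTER DATABASE Staging MODIFY FILE (NAME = Staging_Log, SIZE = 4096MB);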
There is a command in T-SQL.
Look here: http://msdn.microsoft.com/de-de/library/ms189493.aspx
DBCC SHRINKFILE (Transact-SQL): Shrinks the size of the specified data or log file for the current database, or empties a file by moving the data from the specified file to other files in the same filegroup, allowing the file to be removed from the database. You can shrink a file to a size that is less than the size specified when it was created. This resets the minimum file size to the new value.
But take the answer from Remus into consideration.
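A minimal sketch covering both parts of the question (reducing the allocated size and capping future growth), assuming the staging database is named Staging and its data file's logical name is Staging_Data:
-- Shrink the data file down to roughly 1 GB (target is in MB).
DBCC SHRINKFILE (Staging_Data, 1024);
-- Cap how far the file can auto-grow in the future (20 GB here).
ALTER DATABASE Staging MODIFY FILE (NAME = Staging_Data, MAXSIZE = 20480MB);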

SQL Server DB size - why is it so large?

I am building a database which contains about 30 tables:
The largest number of columns in a table is about 15.
For data types I am mostly using varchar(50) for text and int or smallint for numbers.
The identity columns are uniqueidentifiers.
I have been testing a bit, filling in data and deleting it again. I have now deleted all the data, so every table is empty.
But if I look at the properties of the database in Management Studio, the size says 221.38 MB!
How come? Please help; I am getting notifications from my hosting company that I am exceeding my limits.
Best regards,
:-)
I would suggest that you look first at the recovery model for the database. By default the recovery model is FULL. This fills the log file with every transaction you perform and never removes them until you take a log backup.
To change the recovery model, right-click the database and choose Properties. In the Properties dialog, select the Options page. Then change the "Recovery model" to Simple.
You probably also want to shrink your files. To do this, right-click the database and choose Tasks --> Shrink --> Files. You can shrink both the data file and the log file by changing the "File type" option in the middle.
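The same steps can be sketched in T-SQL (assuming the database is named MyDb and its files kept the default logical names MyDb and MyDb_log):
USE MyDb;
-- Switch to simple recovery so the log stops accumulating transactions.
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
-- Shrink the log file and the data file (targets in MB).
DBCC SHRINKFILE (MyDb_log, 64);
DBCC SHRINKFILE (MyDb, 128);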
Martin's comment is quite interesting. Even if the log file is in auto-truncate mode, you still have the issue of deletes being logged. If you created largish tables, the log file will still expand and the space will not be recovered until you truncate the file. You can get around this by using TRUNCATE rather than DELETE:
truncate table <table>
does not log each individual row being deleted (http://msdn.microsoft.com/en-us/library/ms177570.aspx), whereas
delete from <table>
logs every row.
As you do inserts, updates, deletes, and design changes, a log file recording every transaction (plus a whole bunch of other data) is written. This transaction log is a required component of a SQL Server database and cannot be disabled by any available setting.
Below is a Microsoft article on using backups to manage the transaction logs generated by SQL Server:
http://msdn.microsoft.com/en-us/library/ms178037(v=sql.105).aspx
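A minimal sketch of that approach under the full recovery model (the database name MyDb and the backup folder C:\Backups are both assumptions):
-- Back up the transaction log; this lets SQL Server reuse the log space internally.
BACKUP LOG MyDb TO DISK = 'C:\Backups\MyDb_log.trn';
-- If the log file itself must then be made smaller:
-- DBCC SHRINKFILE (MyDb_log, 64);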
Also, are you indexing your columns? Indexes spanning several columns on tables with a high row count can become unnecessarily large, especially if you are just doing tests. Try having a single clustered index on only one column per table.
You may also want to learn about table statistics. They help the optimizer use your indexes and also help with queries like SELECT DISTINCT or SELECT COUNT(*); a small example of refreshing them follows the link below.
http://msdn.microsoft.com/en-us/library/ms190397.aspx
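A hedged sketch of refreshing statistics on one table (dbo.Orders is just an example name):
-- Rebuild the statistics on the table with a full scan of its rows.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;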
Finally, you may simply need to upgrade the storage allocation for your SQL Server database. The more you use it, the more it will want to grow.

Reclaim space in SQL Server 2005 database when dropping tables permanently

I'm dropping massive numbers of tables from a SQL Server 2005 database. How do I shrink the database, assuming I'm not replacing the data or the tables? I'm archiving the stuff to another DB.
DBCC Shrinkdatabase(0) -- Currently selected database
or
DBCC Shrinkdatabase(<databasename>) -- Named database
However, shrinking files will likely fragment your tables, particularly larger ones, as the contents of the tables get moved around within the file. So once it's shrunk, it's a good idea to defragment your tables (i.e. rebuild their indexes). This, of course, will make your files grow again, but probably not as large as they were before you dropped your old tables. (That assumes the dropped tables contained large quantities of data...)
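A minimal sketch of that defragmentation step after the shrink, using dbo.Orders as a stand-in for one of the surviving tables:
-- Rebuilding the indexes removes the fragmentation the shrink introduced.
ALTER INDEX ALL ON dbo.Orders REBUILD;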
You can use the DBCC SHRINKDATABASE command, or you can right-click the database, Tasks, Shrink, Database.