SQL Server - How to shrink the allocated space for a staging table used by SSIS? [duplicate]

This question already has answers here:
How do I shrink my SQL Server Database?
(16 answers)
Closed 8 years ago.
I'm facing a strange problem with a staging database used by my ETL (to update rows).
Only the rows to update are stored in the staging database, then a script is executed to update the destination database. At the end of the process, the staging tables are truncated.
That removes all the data, yet the space allocated to the database grows with every execution of my SSIS package. So, is there a way to reduce the allocated size and to limit the maximum allocated size? In SQL Server Management Studio there is a wizard to shrink the data and the database.
Is there an equivalent command in T-SQL?
Thanks!

Don't.
If your staging process needs a database of size X, then size the database at X and leave it alone. Attempting to shrink it is misguided at best. By shrinking it, all you achieve is an invitation for your ETL to fail tomorrow because it runs out of the disk space it requires. Don't fool yourself with "I only need space X during the ETL". You need space X, period.
I'm not even going to go into all the performance problems caused by shrink and re-growth cycles.
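If you do want to pin the size up front, a minimal sketch, assuming placeholder names StagingDB, StagingDB_Data and StagingDB_Log (check sys.database_files for your real logical file names and pick sizes that fit your workload):
-- Pre-size the staging database once so it never has to grow or shrink during the ETL run.
ALTER DATABASE StagingDB MODIFY FILE (NAME = StagingDB_Data, SIZE = 10GB);
ALTER DATABASE StagingDB MODIFY FILE (NAME = StagingDB_Log, SIZE = 2GB);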

There is a command in T-SQL.
Look at DBCC SHRINKFILE (Transact-SQL): http://msdn.microsoft.com/de-de/library/ms189493.aspx
It shrinks the size of the specified data or log file for the current database, or empties a file by moving its data to other files in the same filegroup so the file can be removed from the database. You can shrink a file to a size smaller than the size specified when it was created; this resets the minimum file size to the new value.
But take the answer from Remus into consideration.
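A minimal sketch of the command (the logical file name StagingDB_Data is a placeholder; the target size is given in MB):
USE StagingDB;
-- Shrink the staging data file to roughly 1 GB.
DBCC SHRINKFILE (StagingDB_Data, 1024);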

Related

Why will my SQL Transaction log file not auto-grow?

The Issue
I've been running a particularly large query that generates millions of records to be inserted into a table. Each time I run it I get an error reporting that the transaction log file is full.
I've managed to get a test query to run with a reduced set of results by using SELECT INTO instead of INSERT INTO a pre-built table. This reduced result set produced a 20 GB table of 838,978,560 rows.
When trying to INSERT into the pre-built table I've also tried it with and without a clustered index. Both failed.
Server Settings
The server is running SQL Server 2005 (full edition, not Express).
The database being used is set to the SIMPLE recovery model, and there is space available (around 100 GB) on the drive the file sits on.
The transaction log file is set to grow in 250 MB increments, up to a maximum of 2,097,152 MB.
The log file appears to grow as expected until it reaches 4,729 MB.
When the issue first appeared, the file stopped growing at a lower value; I have since reduced the size of other log files on the same server, and this appears to let this transaction log grow further by the same amount as the reduction in the other files.
I've now run out of ideas for how to solve this. If anyone has any suggestions or insight into what to do, it would be much appreciated.
First, you want to avoid auto-growth whenever possible; auto-growth events are HUGE performance killers. If you have 100 GB available, why not change the log file size to something like 20 GB (just temporarily while you troubleshoot this)? My policy has always been to use 90%+ of the disk space allocated for a specific MDF/NDF/LDF file. There's no reason not to.
If you are using SIMPLE recovery, SQL Server is supposed to manage the task of reusing log space, but sometimes it does not do a great job. Before running your query, check the available free log space. You can do this in SSMS:
right-click the database > Tasks > Shrink > Files
change the file type to "Log"
This will show you how much unused space you have. You can set "Reorganize pages before releasing unused space, Shrink file to" to 0. Moving forward, you can also let log space be reused by issuing CHECKPOINT; that may be something to include as a first step before your query runs.
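A sketch of checking this from T-SQL instead of the wizard (run it in the problem database):
-- Log size and percentage in use for every database on the instance.
DBCC SQLPERF (LOGSPACE);
-- Under the SIMPLE recovery model, a checkpoint marks the inactive part of the log as reusable.
CHECKPOINT;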

Moving data from one table to another in Sql Server 2005

I am moving around 10 million rows from one table to another in SQL Server 2005. The purpose of the transfer is to take the old data offline.
After some time it throws an error: "The log file for database 'tempdb' is full.".
My tempdb and templog are placed on a drive (other than C:) which has around 200 GB free. My tempdb size is set to 25 GB.
As per my understanding, I will have to increase the size of tempdb from 25 GB to 50 GB and set the log file auto-growth option to "unrestricted file growth (MB)".
Please let me know of other factors to consider. I cannot experiment much, as I am working on a production database, so please let me know whether these changes will have any other impact.
Thanks in advance.
You know the solution. It seems you are just moving part of the data to make your queries faster.
I agree with your solution:
As per my understanding, I will have to increase the size of tempdb from 25 GB to 50 GB and set the log file auto-growth option to "unrestricted file growth (MB)".
Go ahead.
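A sketch of that change, assuming the default tempdb logical file names tempdev and templog (verify yours in sys.master_files):
-- Grow the tempdb data file and leave the log free to keep growing as needed.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 50GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILEGROWTH = 250MB, MAXSIZE = UNLIMITED);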
My guess is that you're trying to move all of the data in a single batch. Can you break it up into smaller batches and commit fewer rows per insert? Also, as noted in the comments, you may be able to set your destination database to the SIMPLE or BULK_LOGGED recovery model.
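A minimal batching sketch (the table, key, and column names are assumptions):
DECLARE @BatchSize int;
SET @BatchSize = 50000;
WHILE 1 = 1
BEGIN
    -- Copy rows that have not been moved yet, one batch at a time.
    INSERT INTO dbo.ArchiveTable (Id, Col1, Col2)
    SELECT TOP (@BatchSize) s.Id, s.Col1, s.Col2
    FROM dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.ArchiveTable AS a WHERE a.Id = s.Id);

    IF @@ROWCOUNT = 0 BREAK;

    CHECKPOINT; -- with SIMPLE recovery, this lets log space be reused between batches
END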
Why involve the transaction log at all? Back up your database (data and log file), then set the recovery model to SIMPLE and run the transfer again.

LDF file continues to grow very large during transaction phase - SQL Server 2005

We have a 6-step process in which we copy tables from one database to another. Each step executes a stored procedure:
Remove tables from the destination database
Create tables in the destination database
Shrink the database log before the copy
Copy tables from source to destination
Shrink the database log
Back up the destination database
During step 4, our transaction log (ldf file) grows very large, to the point where we have to keep increasing the maximum size on the SQL Server, and we believe that eventually it may eat up all the resources on our server. It was suggested that in our script we commit each transaction instead of waiting until the end to commit them all.
Any suggestions?
I'll assume you are moving large amounts of data. The typical solution to this problem is to break the copy up into smaller batches of rows, which keeps the hit on the transaction log smaller. I think this will be the preferred answer.
The other answer I have seen is using bulk copy, which writes the data out to a text file and imports it into your target database with bulk copy. I've seen a lot of posts that recommend this, but I haven't tried it.
If the schema of the target tables isn't changing could you not just truncate the data in the target tables instead of dropping and recreating?
Can you change the database recovery model to Bulk Logged for this process?
Then, instead of creating empty tables at the destination, do a SELECT INTO to create them. Once they are built, alter the tables to add indices and constraints. Doing bulk copies like this will greatly reduce your logging requirements.
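A hedged sketch of that approach (the database, table, and column names are assumptions):
USE DestinationDb;
ALTER DATABASE DestinationDb SET RECOVERY BULK_LOGGED;

-- SELECT INTO is minimally logged under BULK_LOGGED, so the ldf grows far less.
SELECT *
INTO dbo.MyTable
FROM SourceDb.dbo.MyTable;

-- Add indexes and constraints after the copy, then put the recovery model back.
CREATE CLUSTERED INDEX IX_MyTable_Id ON dbo.MyTable (Id);
ALTER DATABASE DestinationDb SET RECOVERY FULL;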

empty sql server 2008 db backup file is very big

I'm deploying my database. I more or less emptied the database (data) and then created a backup.
The .bak file is over 100 MB.
Why is this?
How do I get it down?
I'm using SQL Server 2008.
When you back up, note that a SQL Server backup file can contain multiple backups. It does not overwrite by default: if you choose the same backup file and do not choose the overwrite option, it simply appends another backup to the same file, so your file just keeps getting larger.
Run this and all will be revealed:
-- approximate size of each table's data (dpages is a count of 8 KB pages), largest first
select name, dpages * 8 as [size in kb]
from sysindexes
where indid <= 1
order by dpages desc
You can also do two backups in a row so that the second one contains minimal log data: the first backup captures the logged activity needed for recovery, and the second one no longer contains it.
There is also an issue with leaked Service Broker conversation handles if you use Service Broker in your database with improper code, but if that is the case, the query above will reveal it.
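And if the .bak file has been accumulating appended backup sets, a sketch of overwriting it instead of appending (the path is just an example):
-- WITH INIT overwrites the existing backup sets in the file instead of adding a new one.
backup database mydb to disk = 'c:\mydb.bak' with init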
To get the size down, you can use WITH COMPRESSION, eg.
backup database mydb to disk = 'c:\tempdb.bak' with compression
It will normally bring the backup down to about 20% of the size. As Martin commented above, also run
exec sp_spaceused
to view the distribution of data and log space. From what you are saying (1.5 MB for the first table, down to 8 kB by the 45th row), that accounts for maybe tens of MB, so the rest could be in the log file.

table size not reducing after purged table

I recently performed a purge on one of my application tables. It had a total of 1.1 million records and used 11.12 GB of disk space.
I deleted 860k records, leaving 290k, but why did the space used only drop to 11.09 GB?
I monitor the detail in the report: disk usage > disk space used by data files > space used.
Do I need to perform a shrink of the data file? This has puzzled me for a long time.
For MS SQL Server, rebuild the clustered indexes.
You have only deleted rows; you have not reclaimed the space.
Use DBCC DBREINDEX or ALTER INDEX ... REBUILD, depending on the version.
(It's MS SQL because the disk space report is in SSMS.)
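A minimal sketch, assuming a hypothetical table dbo.ApplicationTable; substitute your own object names:
-- SQL Server 2005 and later:
ALTER INDEX ALL ON dbo.ApplicationTable REBUILD;
-- Older syntax that still works:
DBCC DBREINDEX ('dbo.ApplicationTable');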
You need to explicitly run an operation (specific to your database management system) that shrinks the data file. The database engine doesn't shrink the file when you delete records; that's intentional, because shrinking is time-consuming.
I think this is like with mail folders in Thunderbird: If you delete something, it's just marked as deleted, but to get higher performance, the space isn't freed. So most of your 11.09 GB will now contain either your old data or 0's. Shrink data file will "compress" (or "clean") this by creating a new file that'll only contain the actual data that is left.
You probably need to shrink the data file. I know SQL Server doesn't do it for you by default; I would guess this is for performance reasons, and other databases may be the same.