I have a database that takes up 20GB of space. I have taken a backup and want to restore it to 3 other databases, but due to the large size of the database, I get a low-space message on restore. I don't want all the data. Is there any way to reduce the size of the database, or what are the possible ways to manage all this work?
Previously, I took a backup and shrank it. The size was reduced to 1.11GB, but on restoring it with a new database name, it takes up 20GB of disk space again.
Use Generate Scripts, set the script type to schema only, then create a new database and run the script...
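A minimal sketch of that workflow, assuming hypothetical names (MyDb as the source and MyDbCopy as the target):

-- In SSMS: right-click MyDb > Tasks > Generate Scripts...,
-- then under Advanced set "Types of data to script" to "Schema only".
-- Create the empty target database and run the generated script against it:
CREATE DATABASE MyDbCopy;
GO
-- (connect to MyDbCopy, open the generated .sql file, and execute it)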
One of my tempdb data files is 60GB. I shrunk the file down to 2GB, then set the initial size to 2GB. The data file shrink is successful, but when I go back into the database properties for tempdb, it shows an initial size of 60000MB again. I've tried setting it to 4GB too, and that still resets to 60000MB. This is very frustrating, since every time the service restarts, that tempdb data file is set to 60GB, using up a lot of space.
Any ideas?
How did you "shrink" the file size? If there are 60GB worth of entries, the table should "auto-size" to allow room for all entries.
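For reference, a typical shrink-and-persist sequence is sketched below, assuming the default logical file name tempdev (check sys.master_files for yours):

USE tempdb;
GO
DBCC SHRINKFILE ('tempdev', 2048);  -- shrink the data file to 2048 MB
GO
-- Persist the new size so it survives service restarts:
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2048MB);
GO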
If I look in the database settings of my database (with SQLite Manager), I have a freelist_count of 16. According to this source, the freelist count is
the number of unused pages in the database file
What is meant by unused pages, and why are there unused pages? Is it bad to have a freelist count greater than zero? What can I do to reduce the number to zero?
From SQLite FAQ:
(12) I deleted a lot of data but the database file did not get any smaller. Is this a bug?
No. When you delete information from an SQLite database, the unused
disk space is added to an internal "free-list" and is reused the next
time you insert data. The disk space is not lost. But neither is it
returned to the operating system.
If you delete a lot of data and want to shrink the database file, run
the VACUUM command. VACUUM will reconstruct the database from scratch.
This will leave the database with an empty free-list and a file that
is minimal in size. Note, however, that the VACUUM can take some time
to run (around a half second per megabyte on the Linux box where
SQLite is developed) and it can use up to twice as much temporary disk
space as the original file while it is running.
As of SQLite version 3.1, an alternative to using the VACUUM command
is auto-vacuum mode, enabled using the auto_vacuum pragma.
There's nothing bad about having some free pages, unless they take up a significant amount of space. It is up to you to decide where the "nothing bad" ends and the "significant" begins, depending on the needs of your application.
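A quick sketch of both options (standard SQLite commands; as quoted above, VACUUM may need up to twice the file size in temporary space):

PRAGMA freelist_count;   -- reports the number of unused pages
VACUUM;                  -- rebuilds the file, leaving an empty free-list
-- Alternatively, enable auto-vacuum; note that on an existing database it
-- only takes effect after a VACUUM:
PRAGMA auto_vacuum = FULL;
VACUUM;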
We have one database, and we do a great many transactions per day, so the log file is growing too much. I have tried shrinking it, but it's not reducing.
What should I do to reduce the log file size? (We do a lot of inserts.)
Thanks
Srinivas
You have a database with "Full" recovery model and no log backups running.
Either set it to the simple recovery model or set up log backups. There are no other correct choices (such as shrinking or truncating). The link above has links to other articles too.
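A sketch of both options, with YourDb and the backup path as placeholders:

-- Option A: switch to the SIMPLE recovery model (gives up point-in-time restore):
ALTER DATABASE YourDb SET RECOVERY SIMPLE;
GO
-- Option B: stay in FULL and schedule regular log backups:
BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_log.trn';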
After you set your database recovery mode to simple run the following query:
USE databasename;
CHECKPOINT;                            -- in SIMPLE recovery, a checkpoint lets the inactive log be truncated
DBCC SHRINKFILE ('databasename_log');  -- then shrink the physical log file
where databasename_log is the logical name of the transaction log file (as listed in sys.database_files)
I went in and checked my Transaction log the other day and it was something crazy like 15GB. I ran the following code:
USE mydb
GO
BACKUP LOG mydb WITH TRUNCATE_ONLY
GO
DBCC SHRINKFILE(mydb_log,8)
GO
Which worked fine, shrank it down to 8MB...but the DB in question is a Log Shipping Publisher, and the log is already back up to some 500MB and growing quick.
Is there any way to automate this log shrinking, outside of creating a custom "Execute T-SQL Statement Task" Maintenance Plan Task, and hooking it on to my log backup task? If that's the best way then fine...but I was just thinking that SQL Server would have a better way of dealing with this. I thought it was supposed to shrink automatically whenever you took a log backup, but that's not happening (perhaps because of my log shipping, I don't know).
Here's my current backup plan:
Full backups every night
Transaction log backups once a day, late morning (maybe hook the log shrinking onto this... it doesn't need to be shrunk every day though)
Or maybe I just run it once a week, after I run a full backup task? What do you all think?
If your file grows to 500MB every night, there is only one correct action: pre-grow the file to 500MB and leave it there (see the sketch after this list). Shrinking the log file is damaging. Having the log file auto-grow is also damaging:
you hit file-growth zero-fill initialization during normal operations, reducing performance
your log grows in small increments, creating many virtual log files and resulting in poorer operational performance
your log gets fragmented during shrinkage; while not as bad as data file fragmentation, log file fragmentation still impacts performance
one day the daily growth of 500MB will run out of disk space, and you'll wish the file had been pre-grown
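The pre-grow itself is a one-time statement; a sketch, assuming mydb_log is the logical log file name (check sys.database_files):

ALTER DATABASE mydb
MODIFY FILE (NAME = mydb_log, SIZE = 500MB, FILEGROWTH = 256MB);  -- fixed growth increment as a safety net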
You don't have to take my word for it, you can read on some of the MVP blogs what they have to say about the practice of log and file shrinkage on a regular basis:
Auto-shrink – turn it OFF!
Oh, the horror! Please stop telling people they should shrink their log files!
Why you want to be restrictive with shrink of database files
Don't Touch that Shrink Button!
Do not truncate your ldf files!
There are more, I just got tired of linking them.
Every time you shrink a log file, a fairy loses her wings.
I'd think more frequent transaction log backups.
I think what you suggest in your question is the right approach. That is, "hook the Log shrinking onto" your nightly backup/maintenance task process. The main thing is that you are regularly doing transaction log backups, which will allow the database to be shrunk when you do the shrink task. The key thing to keep in mind is that this is a two-step process: 1) backup your transaction log, which automatically "truncates" your log file; 2) run a shrink against your log file. "truncate" doesn't necessarily (or ever?) mean that the file will shrink...shrinking it is a separate step you must do.
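That two-step process, sketched with placeholder names:

BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_log.trn';  -- step 1: marks inactive log as reusable ("truncates")
GO
DBCC SHRINKFILE ('YourDb_log');                            -- step 2: actually shrinks the physical file
GO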
For SQL Server 2005:
DBCC SHRINKFILE ( Database_log_file_name , NOTRUNCATE)
This statement doesn't break log shipping, but you may need to run it more than once. After each run, let the log shipping backup, copy, and restore jobs complete, then run the statement again.
Shrink and truncate are different.
My experiences:
AA db, 6.8GB transaction log
first run: 6.8 GB
log shipping backup, copy, restore after second run: 1.9 GB
log shipping backup, copy, restore after third run: 1.7 GB
log shipping backup, copy, restore after fourth run: 1 GB
BB db, 50GB transaction log
first run: 39 GB
log shipping backup, copy, restore after second run: 1 GB
Creating a transaction log backup doesn't mean that the online transaction log file size will be reduced. The file size remains the same. When a transaction is backed up, it is marked for overwriting in the online transaction log. It's not automatically removed, and no space is freed; therefore, the size remains the same.
Once you set the LDF file size, maintain its size by setting the right transaction log backup frequency.
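If the log keeps growing despite regular backups, you can check what is preventing log reuse via the standard log_reuse_wait_desc column (YourDb is a placeholder):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDb';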
Paul Randal provides details here:
Understanding Logging and Recovery in SQL Server
Understanding SQL Server Backups
Based on Microsoft's recommendations, before you shrink the log file you should first try the following alternatives:
Freeing disk space so that the log can automatically grow.
Moving the log file to a disk drive with sufficient space.
Increasing the size of a log file.
Adding a log file on a different disk.
Turn on auto growth by using the ALTER DATABASE statement to set a non-zero growth increment for the FILEGROWTH option.
ALTER DATABASE EmployeeDB MODIFY FILE ( NAME = SharePoint_Config_log, SIZE = 2MB, MAXSIZE = 200MB, FILEGROWTH = 10MB );
Also, be aware that a shrink operation run via a maintenance plan affects both the *.mdf and *.ldf files. So, to shrink only the *.ldf file to your target size, create a maintenance plan with a SQL job task and use the following command:
USE sharepoint_config;
GO
-- Switch to SIMPLE so the log can be truncated:
ALTER DATABASE sharepoint_config SET RECOVERY SIMPLE;
GO
DBCC SHRINKFILE ('SharePoint_Config_log', 100);
GO
-- Switch back to FULL:
ALTER DATABASE sharepoint_config SET RECOVERY FULL;
GO
Note: 100 is the target_size for the file, in megabytes, expressed as an integer. If not specified, DBCC SHRINKFILE reduces the file to its default size, which is the size specified when the file was created. Also note that switching to SIMPLE and back to FULL breaks the log backup chain, so take a full or differential backup afterwards to restart log backups.
In my humble opinion, it's not recommended to perform the shrink operation periodically! Do it only in the particular circumstances where you genuinely need to reduce the physical size.
You can also check this useful guide: Shrink a transaction log file Maintenance Plan in SQL Server.