Reducing Size Of SQL Backup?

I am using SQL Express 2005 and back up all databases every night. I noticed one DB getting larger and larger. I looked at the DB and cannot see why it's getting so big! I was wondering if it's something to do with the log file?
Looking for tips on how to find out why it's getting so big when it doesn't hold that much data - and also how to optimise / reduce the size.

Several things to check:
is your database in "Simple" recovery mode? If so, it'll produce far fewer transaction log entries, and the backup will be smaller. Recommended for development - but not for production
if it's in "Full" recovery mode - do you do regular transaction log backups? That should limit the growth of the transaction log and thus reduce the overall backup size
have you run a DBCC SHRINKDATABASE(yourdatabasename) on it lately? That may help
do you have any log / logging tables in your database that are just filling up over time? Can you remove some of those entries?
You can find the database's recovery model by going to the Object Explorer, right click on your database, select "Properties", and then select the "Options" tab on the dialog:
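If you prefer a query to the dialog, the recovery model can also be read from `sys.databases`; a quick sketch, where `YourDatabase` is a placeholder name:

```sql
-- Check the recovery model of every database on the instance
SELECT name, recovery_model_desc
FROM sys.databases;

-- Or just the one you care about ('YourDatabase' is a placeholder)
SELECT recovery_model_desc
FROM sys.databases
WHERE name = 'YourDatabase';
```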
Marc

If it is the backup that keeps growing and growing, I had the same problem. It is not a 'problem' of course, this is happening by design - you are just making a backup 'set' that will simply expand until all available space is taken.
To avoid this, you've got to change the overwrite options. In the SQL management studio, right-click your DB, TASKS - BACKUP, then in the window for the backup you'll see it defaults to the 'General' page. Change this to 'Options' and you'll get a different set of choices.
The default option at the top is 'Append to the existing media set'. This is what makes your backup increase in size indefinitely. Change this to 'Overwrite all existing backup sets' and the backup will always be only as big as one entire backup, the latest one.
(If you have a SQL script doing this, turn 'NOINIT' to 'INIT')
CAUTION: This means the backup will only be the latest changes - if you made a mistake three days ago but you only have last night's backup, you're stuffed. Only use this method if you have a backup regime that copies your .bak file daily to another location, so you can go back to any one of those files from previous days.
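In T-SQL the same choice is made with the NOINIT/INIT options on the BACKUP statement; a sketch, with placeholder database and file names:

```sql
-- NOINIT appends to the media set: the .bak file grows every night
BACKUP DATABASE YourDatabase
TO DISK = 'D:\Backups\YourDatabase.bak'
WITH NOINIT;

-- INIT overwrites the existing backup sets: the file stays the size of one backup
BACKUP DATABASE YourDatabase
TO DISK = 'D:\Backups\YourDatabase.bak'
WITH INIT;
```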

It sounds like you are running with the FULL recovery model and the Transaction Log is growing continuously as the result of no Transaction Log backups being taken.
In order to rectify this you need to:
Take a transaction log backup. (See: BACKUP (Transact-SQL))
Shrink the transaction log file down to an appropriate size for your needs. (See: How to use DBCC SHRINKFILE)
Schedule regular transaction log backups according to your data recovery requirements.
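A sketch of the first two steps, assuming a database named YourDatabase whose log has the logical file name YourDatabase_Log (both are placeholders - check sys.database_files for the real names):

```sql
-- 1. Back up the transaction log so the inactive portion can be reused
BACKUP LOG YourDatabase
TO DISK = 'D:\Backups\YourDatabase_Log.trn';

-- Find the logical name of the log file
SELECT name, type_desc FROM YourDatabase.sys.database_files;

-- 2. Shrink the log file down to a target size (here 512 MB)
USE YourDatabase;
DBCC SHRINKFILE (YourDatabase_Log, 512);
```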
I suggest reading the following Microsoft reference in order to ensure that you are managing your database environment appropriately.
Recovery Models and Transaction Log Management
Further Reading: How to stop the transaction log of a SQL Server database from growing unexpectedly

One tip for keeping databases small is to use the smallest data type you can at design time.
For example, you may have a status table - does its key really need to be an int, when a smallint or tinyint will do?
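As an illustration (hypothetical table names), a status lookup table rarely needs more than 255 values, so tinyint is plenty:

```sql
-- int key: 4 bytes per row, in this table and in every referencing row and index
CREATE TABLE StatusWide (StatusId int     NOT NULL PRIMARY KEY, Name varchar(50));

-- tinyint key: 1 byte per row - same data, a quarter of the key storage
CREATE TABLE StatusSlim (StatusId tinyint NOT NULL PRIMARY KEY, Name varchar(50));
```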
Darknight

Since you take a daily FULL backup of your database, of course it will grow over time.
So you should put a rotation plan in place, for example:
1st day: FULL
2nd day: DIFFERENTIAL
3rd day: DIFFERENTIAL
4th day: DIFFERENTIAL
5th day: DIFFERENTIAL
and then start over.
When you restore, restoring the FULL backup is straightforward; to restore a DIFFERENTIAL version, first restore the FULL backup that precedes it WITH NORECOVERY, then restore the differential you need, and you will have your data back safely.
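The restore sequence described above looks roughly like this (database and file names are placeholders):

```sql
-- Restore the preceding FULL backup, leaving the database ready for more restores
RESTORE DATABASE YourDatabase
FROM DISK = 'D:\Backups\YourDatabase_Full.bak'
WITH NORECOVERY;

-- Then restore the DIFFERENTIAL you need and bring the database online
RESTORE DATABASE YourDatabase
FROM DISK = 'D:\Backups\YourDatabase_Diff.bak'
WITH RECOVERY;
```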

7zip your backup file for archiving. I recently backed up a database to a 178MB .bak file. After archiving it to a .7z file it was only 16MB.
http://www.7-zip.org/
If you need an archive tool that handles larger file sizes more efficiently and faster than 7zip, I'd recommend taking a look at LZ4 archiving. I have used it for archiving file backups for years with no issues:
http://lz4.github.io/lz4/


CockroachDB how to restore a dropped column?

I accidentally dropped a column. I have no backup set up for this single node setup. Does cockroach have any auto backup mechanism or am I screwed?
We can use time-travel queries to restore deleted data within a garbage collection window, before the data is deleted forever.
The garbage collection window is determined by the gc.ttlseconds field in the replication zone configuration.
Examples are:
SELECT name, balance
FROM accounts
AS OF SYSTEM TIME '2016-10-03 12:45:00'
WHERE name = 'Edna Barath';
SELECT * FROM accounts AS OF SYSTEM TIME '-4h';
SELECT * FROM accounts AS OF SYSTEM TIME '-20m';
I noticed that managed CockroachDB runs database backups (incremental or full) hourly, retained for up to 30 days. You may be able to restore the whole database from one of them.
Please note that the restore will make your cluster unavailable for its duration, and all current data is deleted.
You can also manage your own backups, including incremental, database-level, and table-level backups. You need to configure a userfile location or a cloud storage location; this requires billing information.
CockroachDB stores old versions of data at least through its configured gc.ttlseconds window (default one day). There's no simple way that I know of to instantly restore, but you can do
SELECT * FROM <tablename> AS OF SYSTEM TIME <timestamp before dropping the column>
And then manually reinsert the data from there.
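As a sketch of that manual reinsert (table and column names are placeholders, and the timestamp must fall inside the GC window; whether time-travel reads can feed writes directly depends on the CockroachDB version, so the safest approach is a plain read whose results you re-apply with ordinary statements):

```sql
-- Read the rows as they looked before the column was dropped
SELECT id, dropped_col
FROM mytable
AS OF SYSTEM TIME '2021-06-01 10:00:00'
ORDER BY id;

-- Then re-add the column and apply the saved values, e.g.:
-- ALTER TABLE mytable ADD COLUMN dropped_col STRING;
-- UPDATE mytable SET dropped_col = '...' WHERE id = ...;
```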

SQL Server DB size - why is it so large?

I am building a database which contains about 30 tables:
The largest number of columns in a table is about 15.
For data types I am mostly using VarChar(50) for text
and Int or SmallInt for numbers.
Identity columns are Uniqueidentifiers.
I have been testing a bit, filling in data and deleting it
again. I have now deleted all data, so every table is empty.
But if I look at the properties of the database in
Management Studio, the size says 221.38 MB!
How come? Please help, I am getting notifications
from my hosting company that I am exceeding my limits.
Best regards,
:-)
I would suggest that you look first at the recovery mode for the database. By default, the recovery mode is FULL. This fills the log file with all transactions that you perform, never deleting them until you do a backup.
To change the recovery mode, right click on the database and choose Properties. In the properties list, choose the Options (on the right hand pane). Then change the "Recovery model" to Simple.
You probably also want to shrink your files. To do this, right click on the database and choose Tasks --> Shrink --> Files. You can shrink both the data file and the log file, by changing the "File Type" option in the middle.
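The same two actions can be done in T-SQL; a sketch, where the database and logical file names are placeholders (sys.database_files shows the real ones):

```sql
-- Switch to the SIMPLE recovery model so the log truncates on checkpoint
ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;

-- Shrink the log file back down (logical file name is a placeholder)
USE YourDatabase;
DBCC SHRINKFILE (YourDatabase_Log, 1);
```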
Martin's comment is quite interesting. Even if the log file is in auto-truncate mode, you still have the issue of deletes being logged. If you created large-ish tables, the log file will still expand and the space will not be recovered until you truncate the file. You can get around this by using TRUNCATE rather than DELETE:
truncate table <table>
does not log every record being deleted (http://msdn.microsoft.com/en-us/library/ms177570.aspx).
delete from table
logs every record.
As you do inserts, updates, deletes, and design changes, a log file records every transaction, along with a whole bunch of other data. This transaction log is a required component of a SQL Server database, and thus cannot be disabled by any available setting.
Below is an article from Microsoft on doing backups to shrink the transaction logs generated by SQL Server.
http://msdn.microsoft.com/en-us/library/ms178037(v=sql.105).aspx
Also, are you indexing your columns? Indexes that consist of several columns on tables with a high row count can become unnecessarily large, especially if you are just doing tests. Try just having a single clustered index on only one column per table.
You may also want to learn about table statistics. They help your indexes out and also help you perform queries like SELECT DISTINCT, or SELECT COUNT(*), etc.
http://msdn.microsoft.com/en-us/library/ms190397.aspx
Finally, you will need to upgrade your storage allocation for the SQL Server database. The more you use it, the faster it will want to grow.

empty sql server 2008 db backup file is very big

I'm deploying my db. I more or less emptied the db (data) and then created a backup.
The .bak file is over 100MB.
Why is this?
How do I get it down?
I'm using SQL Server 2008.
When you back up, please note that SQL Server backup files can contain multiple backups. It does not overwrite by default. If you choose the same backup file and do not choose the overwrite option, it simply adds another backup to the same file. So your file just keeps getting larger.
Run this and all will be revealed:
select dpages * 8 as [size in KB]
from sysindexes
where indid <= 1
order by 1 desc
You can also..
Do two backups in a row to have the 2nd backup contain minimal log data. The first backup will contain logged activity so as to be able to recover. The 2nd one would no longer contain them.
There is also an issue with leaked Service Broker handles if you use SSSB in your database with improper code, but if this is the case, the query above will reveal it.
To get the size down, you can use WITH COMPRESSION, eg.
backup database mydb to disk = 'c:\tempdb.bak' with compression
It will normally bring it down to about 20% the size. As Martin has commented above, run also
exec sp_spaceused
to view the distribution of data and log space. From what you are saying - 1.5 MB for the first table... down to 8kB on the 45th - that accounts for maybe tens of MB, so the rest could be in the log file.

How to undo a SQL Server UPDATE query?

In SQL Server Management Studio, I did the query below.
Unfortunately, I forgot to uncomment the WHERE clause.
1647 rows were updated instead of 4.
How can I undo the last statement?
Unfortunately, I've only just finished translating those 1647 rows and was doing final corrections, and thus don't have a backup.
UPDATE [dbo].[T_Language]
SET
[LANG_DE] = 'Mietvertrag' --<LANG_DE, varchar(255),>
,[LANG_FR] = 'Contrat de bail' -- <LANG_FR, varchar(255),>
,[LANG_IT] = 'Contratto di locazione' -- <LANG_IT, varchar(255),>
,[LANG_EN] = 'Tenancy agreement' -- <LANG_EN, varchar(255),>
--WHERE [LANG_DE] like 'Mietvertrag'
There is a transaction log, at least I hope so.
A non-committed transaction can be reverted by issuing the command ROLLBACK.
But if you are running in auto-commit mode, there is nothing you can do...
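For next time, running risky updates inside an explicit transaction gives you that ROLLBACK option; a sketch against the table from the question:

```sql
BEGIN TRANSACTION;

UPDATE [dbo].[T_Language]
SET [LANG_DE] = 'Mietvertrag'
WHERE [LANG_DE] LIKE 'Mietvertrag';

-- Check how many rows were hit before making the change permanent
SELECT @@ROWCOUNT AS rows_updated;

-- Wrong row count? Undo it:
ROLLBACK TRANSACTION;
-- Looks right? Make it permanent instead:
-- COMMIT TRANSACTION;
```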
If you already have a full backup from your database, fortunately, you have an option in SQL Management Studio. In this case, you can use the following steps:
Right-click on the database -> Tasks -> Restore -> Database.
In the General tab, click on Timeline -> select the Specific date and time option.
Move the timeline slider to before the update command's time -> click OK.
In the destination database name, type a new name.
In the Files tab, check Relocate all files to folder and then select a new path to save your recovered database.
In the Options tab, check the "Overwrite ..." option and uncheck the "Take tail-log..." option.
Finally, click OK and wait until the recovery process is over.
I have used this method myself in an operational database and it was very useful.
Considering that you already have a full backup, I'd just restore that backup into a separate database and migrate the data from there.
If your data has changed since the latest backup, you cannot recover everything that way, but you can try to recover it by reading the transaction log.
If your database was in full recovery mode, then the transaction log has enough detail to recover updates made to your data after the latest backup.
You might want to try DBCC LOG, the fn_dblog function, or a third-party log reader such as ApexSQL Log.
Unfortunately there is no easy way to read the transaction log, because MS doesn't provide documentation for it and stores the data in a proprietary format.
Since you have a FULL backup, you can restore the backup to a different server as a database of the same name or to the same server with a different name.
Then you can just review the contents pre-update and write a SQL script to do the update.
If you can catch this in time and you don't have the ability to ROLLBACK or use the transaction log, you can take a backup immediately and use a tool like Redgate's SQL Data Compare to generate a script to "restore" the affected data. This worked like a charm for me. :)
I have a good way to undo or recover databases using SQL Server Management Studio, by following these steps:
1 - Go to the database on the left side -> Right-click -> Tasks -> Restore -> Database.
2 - Go to Timeline, then select your time.

SQL Server 2005 Transaction Log too big

I am running SQL Server 2005.
My db backup scheme is:
Recovery model: FULL
Backup Type: Full
Backup component: Database
Backup set will expire: after 0 days
Overwrite media: Back up to the existing media set, Append to the existing backup set
The db is writing to 250GB drive (232GB actual).
My _Data.mdf file is over 55GB and my _Log.ldf is over 148GB.
We ran into a situation where our drive was filled today. I moved our ab_Full.bak and ab_Log.bak files to another drive to make space - about 45GB. Five hours later, free space is at 37GB.
I'm new to managing SQL server; so, I have some basic questions about my backups.
I know I need to update the db to start managing the transaction log size to help prevent this problem in the future. So, assuming I have enough free space, I:
1. right click the db and choose Backup
2. set 'Backup Type' to 'Transaction Log'
3. change 'Backup set will expire' after to 30 days
4. click 'ok'
My understanding is this will move 'closed' transactions from the transaction log to a backup and truncate the transaction log.
Is this plan sound? Will I need to manually resize the log file afterwards?
Thanks for your time.
Are you backing up the transaction log at any time at all?
If you are using the FULL recovery model, then you need to back up the transaction log in addition to backing up the main database; if you don't want to back up the log (but then why use the FULL recovery model?), at least truncate the log at some regular interval.
You should back up the transaction log before every full backup (and keep it as long as you keep the previous full backup) so you can restore to any point in time since the first full backup you've kept. Also, it might be worth backing up the transaction log more often (the total size is the same) in case something bad happens between two full backups.
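Once log backups are scheduled (e.g. via a SQL Server Agent job), each run is just a BACKUP LOG statement; a sketch with placeholder names:

```sql
-- Back up the transaction log; run this on a schedule between full backups.
-- Each backup frees the inactive portion of the log for reuse,
-- keeping the .ldf file from growing without bound.
BACKUP LOG YourDatabase
TO DISK = 'D:\Backups\YourDatabase_Log.trn'
WITH NOINIT;  -- append, so the media set keeps the chain of log backups
```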
The best procedure is to back up your log file regularly. In the meantime, for 'catastrophic' scenarios like the one you described, you may use this snippet to reduce the size of your log:
http://www.snip2code.com/Snippet/12913/How-to-correctly-Shrink-Log-File-for-SQL