Version: pgBackRest 2.19 (this is old, so I might just need to update).
We have pgBackRest set to retain the last 14 full backups. When I run pgbackrest --stanza=main info | grep 'backup:' | sort, I see the last 14 full backups, but also many incremental backups that are no longer attached to a retained full backup. For example, the full backups listed are:
full backup: 20210321-070002F
full backup: 20210328-070002F
full backup: 20210404-070001F
full backup: 20210411-070001F
full backup: 20210418-070001F
full backup: 20210425-070001F
full backup: 20210502-070001F
full backup: 20210509-070001F
full backup: 20210516-070001F
full backup: 20210523-070001F
full backup: 20210530-070001F
full backup: 20210606-070001F
full backup: 20210613-070001F
full backup: 20210620-070001F
But I also see many incremental backups (and they are indeed taking up space in AWS S3) like:
incr backup: 20200405-070001F_20200406-020001I
incr backup: 20200405-070001F_20200407-020001I
incr backup: 20200405-070001F_20200408-020001I
etc etc...
and then ones like these, which ARE attached to full backups that have not expired:
incr backup: 20210620-070001F_20210624-030001I
incr backup: 20210620-070001F_20210625-030002I
Full backups are weekly and incremental backups are daily.
I also note that the WAL segments in the pgBackRest archive folder start from 20200405 (the date of the oldest orphaned incremental backup).
While I could manually delete the backup folders and WAL segment folders older than a certain date, I would like to do so with the pgBackRest CLI instead, or at least understand what went wrong.
How can I safely delete these incremental backups that are older than the oldest retained full backup, and how can I delete the WAL files from the pgBackRest archive folder as well? If I have to do it manually in S3, would there be consequences to deleting the incremental backup folders and WAL archive folders that are older than the oldest full backup?
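For reference, expiration is normally driven by the retention settings and the expire command; a minimal sketch, assuming a pgBackRest 2.x configuration (option names should be checked against the 2.19 documentation):

[global]
repo1-retention-full=14

pgbackrest --stanza=main expire
pgbackrest --stanza=main info

expire runs automatically at the end of each backup, and archive retention defaults to full-backup retention, so WAL older than the oldest retained full backup should normally be expired along with it. Orphaned incrementals that expire never touches suggest they are no longer referenced in backup.info (for example, after a stanza was recreated); folders that pgbackrest info does not list should be safe to remove manually in S3, but verify a restore afterwards.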
I am confused about the size of the backup files I create with SSMS versus with a query.
If I create a backup from SSMS in its default folder, something like "C:\Program Files\Microsoft SQL Server\MSSQL14.NAMEDINSTANCE\MSSQL\Backup", the output file, say Db1.bak, is about 198292 KB.
If I back up the same database with the query BACKUP DATABASE Db1 TO DISK = 'D:\Db1.bak', the file size is just 6256 KB.
Sometimes another database, say Db2, gives the same file size, i.e. 6256 KB (Db1 and Db2 have identical schemas; only the data in them differs).
Backing up Db2 with SSMS gives 33608 KB, which seems satisfactory.
I also tried verifying all the databases in SSMS like this:
RESTORE VERIFYONLY FROM DISK = 'D:\BACKUP\Db1.bak'
GO
and the result is valid for every database checked.
I also tried deleting Db1 from SSMS, restoring the smaller file, and checking the data of a few tables (not all); it seems to show all the data in the tables properly, but the file size still bothers me.
Thank You.
I suspect that, as initially mentioned, you have compression on by default, and that the GUI with its default settings is not making use of it (and that if you selected Compress in the GUI, you'd get a similar size).
If the server option backup compression default is on, compression is applied even if you don't mention it in your backup command, so in both cases the backup would be compressed. It's easy to check; just run this command for both backups:
RESTORE HEADERONLY
FROM DISK = 'here_the_full_path_with_filename';
The fifth column (Compressed) is a flag showing whether the backup is compressed.
But the actual cause of the difference is something else, and you'll see it when you run RESTORE HEADERONLY: you made multiple backups to the same file.
You used the BACKUP command with NOINIT from SSMS with the same file name, so this file now contains more than one backup set, and RESTORE HEADERONLY will show them all.
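If the appended sets are unwanted, a minimal T-SQL sketch (using the D:\Db1.bak path from the question) that overwrites the media set instead of appending, so the file holds a single backup:

BACKUP DATABASE Db1
TO DISK = 'D:\Db1.bak'
WITH FORMAT, INIT, COMPRESSION, STATS = 10;

After this, RESTORE HEADERONLY should show only one row for the file.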
We use IBM DB2 10.1 on Windows Server 2008 R2.
The majority of the DB2 space is occupied by fields of type BLOB and CLOB.
Currently, a full backup of the entire database takes a very long time and exceeds our time limits.
We tried using incremental backups to speed things up, but the problem did not go away: BLOB and CLOB fields are still pulled into the backup regardless of whether they were changed, so it is almost the same as doing a full backup.
We decided to proceed with the following approach:
Create a new tablespace and move the tables with BLOBs and CLOBs into it. The old data will be archived once a year, and this tablespace will be backed up every day.
Once the size of the tablespace exceeds a certain limit, we will create a new tablespace and start writing new data to it, and so on.
The problem occurs during restoration when using the above approach.
Offline backup steps:
A full backup of all data is taken:
DB2 -svl%LOG% BACKUP DATABASE %DB_NAME% TO %DB_PATH_BACKUP% COMPRESS EXCLUDE LOGS WITHOUT PROMPTING
Logs are copied.
A backup of the separate tablespace is taken:
DB2 -svl%LOG% BACKUP DATABASE %DB_NAME% TABLESPACE (TSPACEGEN1) TO %DB_PATH_BACKUP% COMPRESS EXCLUDE LOGS WITHOUT PROMPTING
Logs are copied.
Restoration approach:
Full backup is restored.
RESTORE DB EAPOBLOB FROM "..." TAKEN AT ... REPLACE HISTORY FILE WITHOUT PROMPTING
Logs are copied and ROLLFORWARD performed. So far everything is OK.
ROLLFORWARD DATABASE COMMDB TO END OF LOGS
The backup of the separate tablespace is restored.
RESTORE DATABASE EAPOBLOB TABLESPACE (TSPACEGEN1) FROM "..." TAKEN AT ... WITHOUT PROMPTING
Logs are copied and ROLLFORWARD performed. So far everything is OK.
ROLLFORWARD DATABASE COMMDB TO END OF LOGS
But when we try to connect to the DB, the following errors are received:
A connection attempt was unsuccessful.
Summary
SQL1117N A connection to or activation of the database cannot be made because of ROLLFORWARD PENDING.
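For reference, SQL1117N after a tablespace-level restore usually means the rollforward was never completed. A minimal sketch of taking the database out of the rollforward-pending state, assuming recovery to end of logs is wanted (database name as in the ROLLFORWARD commands above):

ROLLFORWARD DATABASE COMMDB TO END OF LOGS AND COMPLETE

AND COMPLETE (or the equivalent AND STOP) finishes the rollforward and allows connections again.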
Is this a feasible approach for backup, and are the steps we take to back up and restore adequate?
Is there a better way to speed up the backup while still maintaining the same level of resilience?
I have a backup file of a SQL Server database, for example MyDB.bak. I want to check whether MyDB.bak is corrupted. Is there any way to check whether my database backup is corrupted or in good condition?
Note: I don't want to restore the .bak file.
Thanks
Exactly as stakx said. See the link for how to use the command:
how to use RESTORE VERIFYONLY
Check a backup file on disk
RESTORE VERIFYONLY FROM DISK = 'C:\AdventureWorks.BAK'
GO
Check a backup file on disk for a particular backup
RESTORE VERIFYONLY FROM DISK = 'C:\AdventureWorks.BAK' WITH FILE = 2
GO
This command will check the second backup in this backup file. To check the contents in a backup you can use RESTORE HEADERONLY and use the Position column to specify the FILE number.
I suppose that's what RESTORE VERIFYONLY is for.
"Verifies the backup but does not restore it, and checks to see that the backup set is complete and the entire backup is readable. However, RESTORE VERIFYONLY does not attempt to verify the structure of the data contained in the backup volumes. […] If the backup is valid, the SQL Server Database Engine returns a success message. "
I have a database on SQL Server 2008 R2 SP2, with this backup plan:
every Friday morning I take a full backup of the db, at noon I take a differential backup, and on the other days of the week I take differential backups twice per day (morning and noon).
The full backup size is about 50 GB. My problem is that the first differential backup size is about 42 GB.
No jobs run between the full and differential backups; there is no rebuild index, reorganize index, or update stats, and there are not many transactions on this db.
To test, I took a full backup and, immediately after it finished, a differential backup; but the differential backup size was still about 42 GB.
I even checked the DCM page contents, and the pages are reset after the full backup.
I don't know what the problem is.
Here are my backup commands:
Full backup:
BACKUP DATABASE [test]
TO DISK = N'filePath\test.bak'
WITH NOFORMAT, NOINIT, NAME = 'test', SKIP, REWIND,
NOUNLOAD, COMPRESSION, STATS = 10
DIFF backup:
BACKUP DATABASE [test]
TO DISK = N'filePath\test.bak'
WITH DIFFERENTIAL, NOFORMAT, NAME = 'testdiff', NOINIT, SKIP, REWIND,
NOUNLOAD, STATS = 10
You are specifying the NOINIT clause:
Indicates that the backup set is appended to the specified media set, preserving existing backup sets. If a media password is defined for the media set, the password must be supplied. NOINIT is the default.
Your files will keep growing as new backups are being appended.
Also, your post does not mention when and how you back up the log. I hope this is only an omission, as the log needs to be backed up too.
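For completeness, a minimal sketch of a log backup (the file name is hypothetical; this assumes the database uses the FULL recovery model):

BACKUP LOG [test]
TO DISK = N'filePath\test_log.trn'
WITH NOFORMAT, NOINIT, STATS = 10

Regular log backups are what allow the transaction log to be truncated and reused.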
BACKUP DATABASE [test] TO DISK = N'filePath\test.bak' WITH DIFFERENTIAL, NOFORMAT, NAME = 'testdiff', NOINIT, SKIP, REWIND, NOUNLOAD, STATS = 10
In the statement above I used NOINIT, so naturally the new backup set would be appended to the previous file; but because I use a new name for each backup file, a new file is created and nothing is appended to the previous one.
But my problem has been solved. I previously had replication on this DB, and after removing it the publication remained in the SQL instance, so there was an active transaction (replication) on the db that kept many VLFs active; they were waiting to be sent to the subscriber server.
After removing the publication from the SQL instance, the active VLF count dropped to 0 and the transaction log was shrunk, so the differential backup size decreased.
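A minimal sketch of how to spot leftover replication holding the log (database name from the question; sp_removedbreplication is the documented cleanup procedure, to be used only once the publication is confirmed gone):

SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'test';
-- log_reuse_wait_desc = REPLICATION with no live publication points at leftover metadata
EXEC sp_removedbreplication 'test';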
I've created a Back Up Database Task for a few selected databases on my server. What I want is to have only one backup file per database: the new one could overwrite the old one, or create a new one and delete the old one, it doesn't matter. I've checked "Backup set will expire: 2 days", but evidently this doesn't do what I thought it would; it keeps creating new backup files every day. How can I accomplish this?
I wouldn't set the backup to expire in 2 days, as this means you can only restore the backup for two days; once the backup expires you can no longer rebuild the database using it.
In the same way you built a maintenance plan to back up the database, you can create a maintenance plan to clean up the system and delete backups over x days old. Then just run it after your backup plan.
Use a Maintenance Cleanup Task and then pick "remove everything older than 1 day", or whatever your desired time frame is.
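If one always-current file per database is the goal, a minimal T-SQL sketch (database name and path are hypothetical) that overwrites the file on every run instead of appending:

BACKUP DATABASE [MyDb]
TO DISK = N'D:\Backups\MyDb.bak'
WITH FORMAT, INIT, STATS = 10

Scheduled as a job or maintenance-plan step, this keeps exactly one backup file per database; the trade-off is that a failed backup has already destroyed the previous one in that file.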