How can VxFS backup superblock data be recovered in Red Hat?

While the Veritas CFS solution was installed on a mail server, a disk I/O error occurred.
To solve this problem, the disk setup command was used, but the first 31 MB of head data on the disk were cleared.
My question is: how many VxFS (Veritas File System) backup superblocks exist on Red Hat?
If the backup superblocks as well as the primary superblock are corrupted and can't be recovered, how can a new superblock be created?

Related

WD Backup Software: How to access older backups after replacement of the laptop hard drive?

The defective laptop HDD was replaced with an SSD. For many years up to this point, backups were stored on an external HDD (WD My Passport Ultra, 4 TB) using the "WD Backup" software. After numerous failed attempts, the most recent backup could finally be successfully restored with "WD Backup" according to the following instructions:
https://support-en.wd.com/app/answers/detailweb/a_id/5207/kw/restore
Now, however, I need to restore additional backups of files that were backed up PRIOR to the last backup. As with the first failed attempts, before I knew about the instructions, I get the message that there is no backup plan, i.e. I can't restore anything.
In plain language, this means I would have to delete and reinstall the "WD Backup" software yet again, and repeat this in the future for every further access to older backups.
But that can't really be the case, can it? What did I misunderstand?
Does anyone know a reasonable way?
In addition, since the disk replacement I no longer dare to save new backups to the external HDD: I fear that I would then no longer be able to access the previous backups at all.
Does anyone have advice?
I would be grateful for any tips!
PS: I am aware that the Western Digital software "WD Backup" has been "out of support" since 2019. However, it is still used in many places. The same problem exists with the replacement software "Acronis".

WAL log files fill up quickly - how to prevent this?

Currently the WAL logs in the folder “/engine-rocksdb/journals” are filling up.
When does ArangoDB do a cleaning run of these logs and delete them automatically, and how can I trigger this cleaning run earlier? My ArangoDB 3.10 runs in single-server mode and in a virtual environment (a cloud with network storage).
The log files are growing very fast for me because there are many writes to the DB. What is the best way to handle this, any ideas?
What I have done so far:
If I set the value “rocksdb.wal-archive-size-limit” it does delete the logs when the set limit is reached, but it shows errors in the logfile:
2022-09-27T17:53:04Z [898948] WARNING [d9793] {engines} forcing removal of RocksDB WAL file '/archive/813371.log' with start sequence 5387062892 because of overflowing archive. configured maximum archive size is 1073741824, actual archive size is: 75401520
However, I still don't understand the meaning of the logfile output: "configured maximum archive size is 1073741824, actual archive size is: 75401520". The "actual archive size" is smaller?
But what are the consequences of lowering the "wal-archive-size-limit" value? Is it possible to switch off the WAL archive completely? What exactly is it for? As I understand it, ArangoDB needs it for transaction security (i.e. in case of power loss), right?
In general, yes, this is a good thing, but how can I get ArangoDB to a) limit this WAL archive (without error messages) and b) do a cleaning run faster?
thx :-)
When does ArangoDB do a cleaning run of these logs and delete them automatically and how to trigger this cleaning run earlier?
ArangoDB uses RocksDB underneath, and RocksDB will move WAL files (.log files) into its archive as soon as possible. In order to do so, all data from the WAL file needs to be safely stored in the column families' .sst files and flushed to disk.
ArangoDB will delete files from the WAL archive (and only from there) once it can ensure that an archived WAL file is not used anymore. It will not remove files from the archive that are, or may be, in current use.
There are a few reasons why ArangoDB may keep archived WAL files for some time:
when server-to-server replication is used: while a follower replicates data, it may read from the leader's WAL. Deleting the WAL file on the leader may make the replication fail.
when arangodump is used to create a database dump, it will create a snapshot of data on the server, and the WAL files for that snapshot will be kept around until the snapshot isn't needed anymore (i.e. arangodump finishes).
for the first 180 seconds after server start, all WAL files are intentionally kept, for forensic reasons and to allow followers to replay events from a leader's WAL when it is restarted. The value of 180 seconds can be changed by adjusting the startup option --rocksdb.wal-file-timeout-initial.
there can be some background processing of changes that may refer to data from WAL files. For example, each insert into a collection will need to increase the collection's count() value by 1. To save an extra write into RocksDB on each insert, the count() value is only written to the storage engine by a background thread, ideally only once every X insert operations. However, this may lead to WAL files being around for a bit longer, especially if the background thread cannot keep up with the insert workload.
There is the startup option --rocksdb.wal-archive-size-limit to put a hard limit on the cumulative size of the WAL files in the archive. From your question, it appears that you are currently using ArangoDB version 3.10.
From the warning message you posted, it seems that the WAL archive cleanup somehow applies the wrong limit values.
It turns out that there has been a recent bugfix, released in ArangoDB version 3.10.1, 3.9.4, and 3.8.8, that should rectify this behavior. So upgrading to one of these or later versions may actually help when using the WAL archive size limit.
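Until you can upgrade, one way to check whether the archive is actually being pruned is to watch its on-disk size directly. Below is a minimal sketch in Python; the archive path is an assumption based on the journals folder mentioned in the question, and the 1073741824 value mirrors the configured limit from the warning message, so adjust both to your setup.

import os
import sys

def wal_archive_size(path):
    """Sum the sizes of all RocksDB WAL (.log) files under the given directory."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            if name.endswith(".log"):
                total += os.path.getsize(os.path.join(root, name))
    return total

if __name__ == "__main__":
    # Path and limit are assumptions; point this at your actual WAL archive directory.
    archive_dir = sys.argv[1] if len(sys.argv) > 1 else "/engine-rocksdb/journals/archive"
    limit = 1073741824  # the configured --rocksdb.wal-archive-size-limit (1 GiB)
    size = wal_archive_size(archive_dir)
    print("WAL archive size: %d bytes (limit: %d, %.1f%% of limit)"
          % (size, limit, 100.0 * size / limit))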
I shared your question in the Speedb hive on Discord, and here is what we got for you:
"By default, ArangoDB sets max_wal_size to 1 GB; the value of rocksdb.wal-archive-size-limit must be set to at least twice this number (otherwise you may end up with a single WAL file and the delete will fail)."
Hope this helps. If it doesn't, or if you have follow-up questions, please join the Speedb Discord and we will be happy to help.

RethinkDB backup data

I read the article about backing up data, but some issues are not clear to me:
What happens with data that is changed after the backup process has started?
Does the backup operation work only on the current machine, or will it collect data from all the shards in the cluster? If only the current one, should I start the backup process on all servers?
Is it a slow operation, so that I should forbid all operations on the DB while the backup is in progress?
If a row changes while the backup is going on, the new value may or may not be in the backup. This is generally OK because RethinkDB only offers single-row atomicity anyway, but if you have a workload where that isn't OK then your other options are to use a filesystem that lets you snapshot the data on disk, or to add a new server to your cluster and set it as a replica of the table you want to back up.
It collects data from all shards.
It can take a very long time.
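For the replica-based approach mentioned in the first point (adding a server and replicating the table you want to back up), a rough sketch with the official RethinkDB Python driver might look like the following. The host, database, and table names are placeholders, and RethinkDB chooses which servers host the replicas unless you use server tags.

from rethinkdb import RethinkDB  # pip install rethinkdb

r = RethinkDB()
conn = r.connect(host="localhost", port=28015)

# Preview the change first: dry_run=True reports the planned configuration
# without applying it.
preview = r.db("mydb").table("mytable").reconfigure(
    shards=1, replicas=2, dry_run=True
).run(conn)
print(preview)

# Apply the reconfiguration, then block until the new replica has a full copy.
r.db("mydb").table("mytable").reconfigure(shards=1, replicas=2).run(conn)
r.db("mydb").table("mytable").wait(wait_for="all_replicas_ready").run(conn)

conn.close()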

What happens when a transaction is being carried out during backing up of LDF files?

My DB admin advised that I should regularly take backups of .ldf files. Fine, this SQL post here explains it beautifully.
Consider that a transaction is being carried out in SQL Server, and at the same time a scheduled process tries to access the .ldf file to back it up.
What happens? How does this work?
You should read the article Understanding SQL Server Backups by Paul Randal. It is the best resource I know of and explains the various aspects in detail.
Coming to your question: a transaction log backup includes all information since the previous transaction log backup, or since the full backup that started the log chain. A backup simply means reading information from a file (data or log) and writing it to the destination disk. A transaction will run independently of a log backup that is in progress. Transactions follow the WAL (write-ahead logging) protocol: for practical purposes, all transaction information is first written to the log file, and the changes are made to the data file later. So a running transaction is not affected by a transaction log backup job running at the same time; they are doing different tasks and are mutually independent events. The current backup will back up all log records that are marked as committed and will truncate the log if no transaction still requires it. If any portion of the log is committed after the log backup has already read that portion, it will not be included in the current log backup, but it will be included in a subsequent log backup.
The transaction log plays an important role in crash recovery: it helps determine which operations have to be rolled forward and which have to be rolled back. Without the transaction log, or transaction log backups when restoring, crash recovery is not possible.
You should also read Logging and recovery in SQL Server to learn about the life cycle of a transaction.
The exact answer as to which steps actually happen internally is beyond the scope of this discussion, as nobody can tell you exactly what will happen, but reading the articles will give you a good idea.
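To make the interplay concrete, here is a rough sketch of taking a transaction log backup from Python with pyodbc while other sessions keep writing. The driver name, server, database name, and backup path are placeholders; BACKUP cannot run inside a user transaction, so autocommit is enabled, and the extra result sets are drained so the backup runs to completion.

import pyodbc

# Connection details are placeholders for illustration.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=master;Trusted_Connection=yes",
    autocommit=True,  # BACKUP is not allowed inside a user transaction
)
cursor = conn.cursor()
cursor.execute(
    "BACKUP LOG [MyDatabase] TO DISK = N'D:\\Backups\\MyDatabase_log.trn' WITH NOFORMAT, NOINIT"
)
# BACKUP reports progress as informational messages / extra result sets;
# consume them all so the statement finishes before the connection is closed.
while cursor.nextset():
    pass
conn.close()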
Please let me know if you have any further questions.

SQL Log File Not Shrinking in SQL Server 2012

I am dealing with someone else's backup maintenance plan and have an issue with the log file. I have a database that sits on one drive with a size of 31 GB and a log file that sits on another server with a size of 20 GB; the database is in the Full recovery model. There is a maintenance plan that runs once a day to do a complete backup, and a second plan that backs up the log file every 15 minutes. I have checked the drive that the log file gets backed up to and there is still plenty of room, but the log file never gets smaller after the backup. Is there something missing from the maintenance plan?
Thanks in advance
The situation as you describe it seems fine.
A transaction log backup does not shrink the log file. However, it does truncate the log, which means that space inside the file can be reused:
From Books Online (Transaction Log Truncation):
Log truncation automatically frees space in the logical log for reuse by the transaction log.
Also, from Managing the Transaction Log:
Log truncation, which is automatic under the simple recovery model, is essential to keep the log from filling. The truncation process reduces the size of the logical log file by marking as inactive the virtual log files that do not hold any part of the logical log.
This means that each time the transaction log backup occurs in your scenario, it's creating free space in the file which can be used by subsequent transactions.
Leading on from this, should you shrink the file as well? Generally speaking, the answer is no. Assuming your database does not suddenly have massive one-off spikes in usage, the transaction log will have grown to a size to accommodate the typical workload.
This means if you start shrinking the log, SQL Server will just need to grow it again... This is a resource intensive operation, affecting server performance, and no transactions can complete while the log is growing.
The current plan and file sizes all seem reasonable to me.
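If you want to verify that the log backups really are freeing space inside the file, you can check what (if anything) is currently preventing truncation and how much of the log is in use. Here is a small sketch using pyodbc; the connection string and database name are placeholders.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes",
    autocommit=True,
)
cursor = conn.cursor()

# log_reuse_wait_desc = LOG_BACKUP simply means the log is waiting for the
# next log backup, which is expected in the Full recovery model.
cursor.execute(
    "SELECT name, recovery_model_desc, log_reuse_wait_desc FROM sys.databases WHERE name = ?",
    ("MyDatabase",),
)
print(cursor.fetchone())

# Size of each log file and the percentage currently in use, per database.
cursor.execute("DBCC SQLPERF(LOGSPACE)")
for row in cursor.fetchall():
    print(row)

conn.close()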
I don't know if this applies to your situation, but earlier versions of SQL Server 2012 have a bug that crops up when the model system database is set to the Simple recovery model. For any database created while model is set to Simple, log files will continue to grow in an attempt to reach the 2,097,152 MB limit. This still applies if you alter the database to Full afterwards. KB article 2830400 states that altering to Full, then altering back to Simple is a workaround -- that was not my experience. Running CU 7 for SP1 was the only trick that worked for me.
The article provides links for the first updates that resolved this bug: "Cumulative Update 4 for SQL Server 2012 SP1", as well as, "Cumulative Update 7 for SQL Server 2012" (if you haven't installed SP1).
If you change the recovery model to Full and then back to Simple, the shrink will work successfully.
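For completeness, a sketch of that switch-and-shrink sequence via pyodbc is below; the database name, target size, and connection string are placeholders. Keep in mind that leaving the Full recovery model breaks the log backup chain, so take a new full backup before resuming log backups.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes",
    autocommit=True,  # ALTER DATABASE is not allowed inside a multi-statement transaction
)
cursor = conn.cursor()

# Switch to Simple so the log truncates at the next checkpoint, shrink the
# log file, then switch back to Full.
cursor.execute("ALTER DATABASE [MyDatabase] SET RECOVERY SIMPLE")
cursor.execute("USE [MyDatabase]; CHECKPOINT;")
cursor.execute(
    "DECLARE @log sysname = "
    "(SELECT TOP (1) name FROM sys.database_files WHERE type_desc = 'LOG'); "
    "DBCC SHRINKFILE (@log, 1024);"  # 1024 = target size in MB
)
# A fresh full backup must follow to restart the log backup chain.
cursor.execute("ALTER DATABASE [MyDatabase] SET RECOVERY FULL")
conn.close()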