I'm looking for a backup tool for ext4 that can take a copy of a running filesystem like /var without inconsistencies when that filesystem is later restored. I know BSD dump has an '-L' option, which tells it to work on a snapshot, but neither dump nor dumpe2fs from the repository has such an option. I've read about a patchset for ext4 providing snapshot support, but opinions on it vary widely, so I'm here to ask about your experience with these patches.
It's not a dump tool, but I use rsync, which allows incremental backups between two filesystems on a running system.
For example:
rsync -aSXvH /srcdir /target_dir
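If you want snapshot-style increments rather than a single mirrored copy, rsync's --link-dest option can do that; a minimal sketch, where /backups/yesterday and /backups/today are placeholder paths:
# keep dated snapshot directories; unchanged files are hard-linked to yesterday's copy
rsync -aSXvH --link-dest=/backups/yesterday /srcdir/ /backups/today/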
I have a 20.04 ext4 installation (a successful upgrade from 19.10!) and am wondering about moving it to a ZFS root.
One caveat I can think of is that /etc/fstab and some other things may be somewhat different for a ZFS root and so should probably not be transferred over.
Is there any way to automagically avoid/resolve such conflicts, or should I just do a clean ZFS root installation and setup from scratch?
The diet version is that to switch to a ZFS root on a separate disk you will need to do the following (a rough command sketch follows the list):
1) Remove the / mount from /etc/fstab on the ZFS side after copying the rootfs across
2) Make sure that you rebuild the initramfs to include the zfs kernel module and userspace zpool and zfs binaries.
3) Change your kernel boot parameters to specify root=ZFS=poolname/rootfsname
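A minimal sketch of those three steps, assuming a new root dataset named rpool/ROOT/ubuntu mounted at /mnt; the pool and dataset names are placeholders and this is an untested outline, not a complete procedure:
# 1) copy the existing rootfs across, then remove the old / entry from the new fstab
sudo rsync -aAXH --one-file-system / /mnt/
sudoedit /mnt/etc/fstab              # delete or comment out the line mounting /
# 2) rebuild the initramfs inside the new root so it includes the zfs module and tools
for d in proc sys dev; do sudo mount --rbind "/$d" "/mnt/$d"; done
sudo chroot /mnt apt install --yes zfsutils-linux zfs-initramfs
sudo chroot /mnt update-initramfs -u -k all
# 3) point the kernel at the ZFS root, e.g. in /etc/default/grub inside the chroot:
#      GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/ubuntu"
sudo chroot /mnt update-grub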
There is an excellent howto available here that covers this topic in full detail:
https://github.com/openzfs/zfs/wiki/Ubuntu-18.04-Root-on-ZFS
I am trying to set up testing of the repository using travis-ci.org and Docker. However, I couldn't find any documentation about the policy on memory usage.
To perform a set of tests (test.sh) I need a set of input files to run on, which are very big (up to 1 GB, about 500 MB on average).
One idea is to wget the files directly in the test.sh script, but downloading the input files again for every test run would not be efficient.
The other idea is to create a separate Docker image containing the test files and mount it as a volume, but pushing such a big image to the public registry would not be nice.
Is there a recommended approach for such tests?
Have you considered using Travis File Cache?
You can write your test.sh script in such a way that it only downloads a test file if it is not already present on the local file system.
In your .travis.yml file, you specify which directories should be cached after a successful build. Travis will automatically restore that directory and its files at the beginning of the next build. Since your test.sh script will then notice that the file already exists, it will simply skip the download, and your build should be a little faster.
Note that the Travis cache works by creating an archive file and putting it on some cloud storage, from which it has to be downloaded again later on. However, the assumption is that this traffic will likely stay inside that "cloud", potentially within the same data center. This should still give you some benefit in build time and lower use of resources in your own infrastructure.
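A minimal sketch of both pieces, where the directory name test-data, the file name input.dat, and the URL are placeholders. In .travis.yml:
cache:
  directories:
    - test-data
And in test.sh:
# only download the big input file if the cached copy is missing
mkdir -p test-data
if [ ! -f test-data/input.dat ]; then
    wget -O test-data/input.dat "https://example.org/big-inputs/input.dat"
fi
# ... run the actual tests against test-data/input.dat ...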
With the Creators Update out, I'd like to upgrade my Ubuntu instance to 16.04.
The recommended approach to upgrade (and I agree) is to remove and replace the instance with a clean installation. However, I have some files and configurations I would like to keep and transfer to the new install. They suggest copying the files over to a Windows folder to back them up and restoring them afterward. However, putting the files there messes up all of their permissions.
I had already done the remove/replace on one of my machines and found that trying to restore all the permissions on all the files was just not worth it, so I did another clean install and will be copying the contents of the files over instead. That is an equally tedious way to restore these files, but it has to be done.
Is there an easier way to backup and restore my files and their permissions when doing this upgrade?
I have two more machines I would like to upgrade but do not want to go through this process again if it can be helped.
Just use the Linux way to back up your files with their permissions, such as getfacl/setfacl or tar -p.
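For example, a minimal sketch with tar, assuming the archive is stashed under /mnt/c and that "you" is a placeholder Windows user name:
# create the archive from inside WSL; tar records ownership and permissions
tar -czf /mnt/c/Users/you/home-backup.tar.gz -C ~ .
# after reinstalling the instance, restore with permissions preserved
tar -xpzf /mnt/c/Users/you/home-backup.tar.gz -C ~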
I'm using rsync (version 3.0.9) to do snapshot-style incremental backups from a local disk to a LAN-attached NAS mounted using cifs. The functionality is ideal, but it's unreasonably slow for the most common scenario: daily backup of a file hierarchy (~100 GB, ~2000 directories) in which only a very few files have changed. The slowdown does not happen when doing the simple:
rsync -a /home/stuff/ /mnt/nas/backup/yesterday
(when only a few files have changed since yesterday) because in this case rsync uses only its quick timestamp+size check to compare files. But when I do my snapshot backup:
rsync -a --link-dest=/mnt/nas/backup/yesterday /home/stuff/ /mnt/nas/backup/today
there is heavy network traffic to/from the NAS and things go very slowly even though almost no data is actually transferred from source to target. I suspect this is caused by rsync checksumming the target files in the link-dest directory. Adding --no-checksum doesn't alter things. Is there any way to get rsync to do its file compare as quickly when doing link-dest as it does when doing a simple overwrite?
All sources are on Windows, and the backup destination is on a Unix system (we are using Samba).
My source repository looks like this:
-Repository
--Folder1
---Files
--Folder2
---Files
etc...
I would like the destination to look like this:
-Repository
--Folder1.zip
--Folder2.zip
etc...
After the first backup, I only want to back up files that have changed since the last backup (or newly created folders/files).
Does anyone know a tool or a script for my backup needs? Can this be done with Robocopy?
You could install Cygwin on your Windows machine and use a simple shell script like the following one:
#!/bin/bash
# archive each folder in the current directory, copy the archives across, then clean up
for dir in */; do
    name=${dir%/}
    tar czf "$name.tar.gz" "$name"
done
# specify your destination here:
rsync -a ./*.tar.gz /tmp/
rm -f ./*.tar.gz
BTW, that's not the most straightforward way, I suppose.
If you are open to forgetting about the zipping part, I would advise BackupPC on the Unix system and rsync (Win32 port) or a Samba share on the Windows system.
See https://help.ubuntu.com/community/BackupPC and http://backuppc.sourceforge.net/ for more info.