borg backup is processing all files instead of only modified ones - backup

please advise how to correctly make borgbackup process and copy only modified source files and their deltas to the destination?
every week we make a snapshot of our data in CephFS:
/data/.snap/snap1/
/data/.snap/snap2/
and then a borg backup of the snapshot to an external machine is started.
we expected that only the first borg create run would take a long time and that every later backup would be incremental, but the backup time is not shrinking, and the logs show that borg processes all files. what are we doing wrong?
we use:
cd /data/.snap/snap1
borg create --progress --stats --list --files-cache mtime,size --compression lzma,3 ${user}@${host}:${BORGHOME}${namespace}::${backup_name} ./*
could the problem be the different absolute paths to the source folders (/data/.snap/snap1, /data/.snap/snap2, and so on)? unfortunately we can't change them.

the problem was indeed the different absolute paths of the snapshot folders (borg's files cache looks entries up by their full path, so every run looked new to it). creating a separate folder for borg and symlinking the current snapshot into it solved the problem.
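A minimal sketch of that workaround, shown here with throwaway paths created under mktemp so it can run anywhere; in production the snapshot paths would be /data/.snap/snapN and the stable path a fixed directory of your choosing (both names below are stand-ins, not the original setup):

```shell
#!/bin/sh
# Demo of the stable-path trick: borg always reads through the same
# absolute path, so its files cache can match entries between runs.
base="$(mktemp -d)"        # stands in for the real filesystem root
stable="$base/borg-src"    # the one fixed path borg will ever see

mkdir -p "$base/snap1" "$base/snap2"   # stand-ins for /data/.snap/snapN

ln -sfn "$base/snap1" "$stable"        # week 1: point at snap1
echo "week 1 source: $(readlink "$stable")"

ln -sfn "$base/snap2" "$stable"        # week 2: atomically re-point
echo "week 2 source: $(readlink "$stable")"

# In production you would now run, from inside "$stable":
#   cd "$stable" && borg create --files-cache mtime,size REPO::ARCHIVE ./*
```

Because ln -sfn replaces the symlink target in place, every weekly run backs up through the identical absolute path and only changed files are re-read.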

Related

How can I backup a Memgraph database?

I'm running Memgraph within Ubuntu WSL. I want to make a backup of my database, but I'm having trouble locating the database files.
I've found a question that addresses the Memgraph platform, but I need a solution for WSL.
While running, Memgraph generates several different files in its data directory. This is the location where Memgraph saves all permanent data. The default data directory is /var/lib/memgraph.
If you want to trigger creating a snapshot of the current database state, run the following query in mgconsole or Memgraph Lab:
CREATE SNAPSHOT;
In principle, backing up a Memgraph instance is just a matter of copying the data directory. However, this can't be done safely without extra steps, because durability files can be deleted while you are copying them (for example, when the number of snapshots exceeds the maximum allowed number).
To disable this behavior, you can use the following query in mgconsole or Memgraph Lab:
LOCK DATA DIRECTORY;
If you are using Linux to run Memgraph, here are the steps for copying files:
Start your Memgraph instance.
Open a new Linux terminal and check the location of the permanent data directory:
grep -A 1 'permanent data' /etc/memgraph/memgraph.conf
Copy a file from the snapshot directory to the backup folder, e.g.:
cp /var/lib/memgraph/snapshots/20220325125308366007_timestamp_3380 ~/backup/
To allow the deletion of the files again, run the following query in mgconsole or Memgraph Lab:
UNLOCK DATA DIRECTORY;
Memgraph will delete the files which should have been deleted before and allow any future deletion of the files contained in the data directory.
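Putting the steps above together, here is a hedged sketch of the whole sequence as one script. The paths follow the Memgraph defaults quoted above, and the mgconsole calls are guarded so the script degrades gracefully on a machine where Memgraph is not installed:

```shell
#!/bin/sh
# Lock -> copy -> unlock, so no durability file disappears mid-copy.
SNAPDIR=/var/lib/memgraph/snapshots   # default data directory, per above
DEST="$HOME/backup"

run_query() {                         # send one query via mgconsole, if present
  command -v mgconsole >/dev/null 2>&1 && echo "$1" | mgconsole
}

run_query 'LOCK DATA DIRECTORY;'      # stop Memgraph deleting old snapshots

mkdir -p "$DEST"
[ -d "$SNAPDIR" ] && cp "$SNAPDIR"/* "$DEST"/ 2>/dev/null

run_query 'UNLOCK DATA DIRECTORY;'    # allow deletions again
echo "snapshot copy finished"
```

Locking before the copy is the important part: without it, a snapshot-rotation event in the middle of the cp could remove the very file you are reading.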

bzr could not complete pull, now files are missing

I bzr pulled from a repo. Some of the new files in the repo (related to TeX documentation) apparently could not be placed in the corresponding local directory because of some kind of lock. I had TeXStudio open; I am not sure whether it locked a directory.
The pull operation reported an error, which I missed since the shell window was closed later.
Now the status of my local dirs is:
bzr pull says everything is up to date:
$ bzr pull
Using saved parent location: XXXXX
No revisions or tags to pull.
But the local dir is empty, and there should be some files (I actually have them in the local dir on another computer).
I guess .bzr contains the required info.
Is there any way to fix the local copy?
You probably need to run:
bzr co
(without any arguments) to create a working tree for the current branch.

How do I backup and restore my files/permissions when preparing for a remove/replace of WSL?

With the creators update out, I'd like to upgrade my Ubuntu instance to 16.04.
The recommended approach to upgrading (and I agree) is to remove the instance and replace it with a clean installation. However, I have some files and configurations I would like to keep and transfer to the new install. The suggestion is to copy the files over to a Windows folder to back them up and restore them afterward. However, putting the files there messes up all their permissions.
I had already done the remove/replace on one of my machines and found that trying to restore all the permissions by hand was just not worth it, so I did another clean install and will copy the contents of the files over instead. This is an equally tedious way to restore the files, but it has to be done.
Is there an easier way to backup and restore my files and their permissions when doing this upgrade?
I have two more machines I would like to upgrade but do not want to go through this process again if it can be helped.
Just use the Linux way to back up your files with their permissions, such as getfacl/setfacl or tar -p.
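The tar -p route can be sketched end to end. This demo uses a throwaway directory from mktemp so it is safe to run anywhere; GNU tar and stat are assumed, which WSL Ubuntu provides:

```shell
#!/bin/sh
# Back up files together with their permission bits and restore them intact.
work="$(mktemp -d)"
mkdir -p "$work/src"
printf 'private key material\n' > "$work/src/id_demo"
chmod 600 "$work/src/id_demo"                    # owner-only, like an SSH key

tar -C "$work" -cpf "$work/files.tar" src        # archive (modes are stored)
mkdir -p "$work/restore"
tar -C "$work/restore" -xpf "$work/files.tar"    # -p: restore recorded modes

stat -c '%a' "$work/restore/src/id_demo"         # prints 600
```

The archive file itself can sit in a Windows folder without harm; the permissions are preserved inside the tarball rather than on the NTFS copy, which is exactly what the Windows-folder approach loses.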

excluding a directory from accurev using pop command

I have 10 directories in an AccuRev depot and don't want to populate one of them when using the "accurev pop" command. Is there any way? .acignore does not suit my requirements because another Jenkins build needs that folder. I just want to save time by avoiding unnecessary populates.
Any ideas?
Thanks,
Sanjiv
I would create a stream off this stream and exclude the directories you don't want. Then you can pop this stream and get only the directories you want.
When you run the AccuRev populate command you can specify which directories to populate by specifying the directory name:
accurev pop -O -R thisDirectory
will cause the contents of thisDirectory to match the backing stream from the time of the last AccuRev update in that workspace.
The -O is for overwrite and the -R is for recurse. If you have active work in progress, the -O will cause that work to be overwritten/destroyed.
The .acignore is only for (external) files and not those that are being managed by AccuRev.
David Howland

Backing up source files managed by source control software: TortoiseSVN

I am new to source control and I am confused by something I read on a webpage yesterday (I don't have the link). I have followed these instructions: "create folder structure", then "start Repo-browser", then copy source files into the trunk folder. Please see the screenshot below:
However, when I navigate to the folder using Windows Explorer I do not see this folder structure. I see this:
Therefore I am wondering: where are the files physically stored? The reason I ask is because I want to ensure that NetBackup (our corporate backup tool) backs up the correct directories.
To make sense of the repository structure you need to read the SVN documentation, but the preferred way to back up an SVN repository is with the command
svnadmin dump your_svn_repository_path > destination_filename_backup.svn
You could put this command in a scheduled task that runs some time before your corporate tool performs its full backup, and include destination_filename_backup.svn in your backup job.
If you ever need to restore the backup (after recreating the repository), use the command
svnadmin load your_svn_repository_path < destination_filename_backup.svn
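As a concrete example of such a scheduled task, if the repository lives on a Linux host the dump could be driven from cron. The 01:30 run time, repository path, and backup destination below are placeholder assumptions, not part of the original answer:

```shell
# crontab entry: dump the repository nightly at 01:30,
# before the corporate NetBackup job sweeps /backups
30 1 * * * svnadmin dump /var/svn/your_svn_repository > /backups/destination_filename_backup.svn
```

On a Windows server the equivalent would be a Task Scheduler job running the same svnadmin dump command in a .bat file.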