Tar incremental restore : Cannot rename - backup

I created a Python script that implements a seven-day incremental backup strategy, with a full backup on Sunday, using the tar command.
I have no problem generating the different backups.
However, I get an error when trying to restore an incremental backup:
tar: Cannot rename `./path1' to `./path2': No such file or directory
tar: Exiting with failure status due to previous errors
My backup strategy runs for a Jenkins service.
Do you know why I get this error message, which stops my restore, and how to fix it?
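For context, a weekly-full/daily-incremental setup with GNU tar usually looks roughly like this (a minimal sketch; the paths, archive names and snapshot file below are assumptions, not the OP's actual script):
SNAR=/backup/jenkins.snar
DATA=/var/lib/jenkins
# Sunday: full backup, starting from a fresh snapshot file
rm -f "$SNAR"
tar --listed-incremental="$SNAR" -czpf /backup/full-sun.tar.gz "$DATA"
# Monday .. Saturday: incremental backups against the same snapshot file
tar --listed-incremental="$SNAR" -czpf /backup/incr-mon.tar.gz "$DATA"
# Restore: extract the full archive first, then each incremental in order;
# --listed-incremental=/dev/null tells tar to apply the incremental metadata
tar --listed-incremental=/dev/null -xzpf /backup/full-sun.tar.gz -C /
tar --listed-incremental=/dev/null -xzpf /backup/incr-mon.tar.gz -C /
It is typically during the last step, when tar replays directory renames recorded in the incremental archive, that a "Cannot rename" error like the one above appears.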

The short answer is: DO NOT use GNU tar for incremental backups.
The long answer is that there is a fairly old bug that prevents incremental archives from being restored reliably. The bug still exists and has been reported multiple times since 2004.
References:
stackexchange 01, stackexchange 02,
Ubuntu Launchpad,
GNU 01, GNU 02, GNU 03,
Debian


redis.exceptions.ResponseError: MISCONF Redis is configured to save RDB snapshots

I'm having this problem when I try to save to Redis. It shows the message below.
MISCONF Redis is configured to save RDB snapshots, but it's currently unable to persist to disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Red
The redis log file displays this:
Background saving started by pid 73
Write error saving DB on disk: Function not implemented
Has anyone ever experienced this?
I found the answer. You need WSL 2. To find out your current version, run the command below in PowerShell:
wsl -l -v
If it is version 1, run the command below (using your distribution name as listed by wsl -l -v; here it is assumed to be Ubuntu) and open Ubuntu again:
wsl --set-version Ubuntu 2
More information: https://learn.microsoft.com/en-us/windows/wsl/install
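Once the distribution is on WSL 2, you can confirm that Redis can persist again with a quick manual snapshot (a small check, assuming redis-server is already running inside the distribution):
redis-cli bgsave        # should reply "Background saving started"
redis-cli lastsave      # Unix timestamp of the most recent successful save
redis-cli set probe 1   # writes are accepted again once the save succeeds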

GitLab Backup | Different archive checksum for identical information

Summary
When I create a backup of GitLab, I always get different checksums.
Steps to reproduce
sudo gitlab-rake gitlab:backup:create STRATEGY=copy
What is the current bug behavior?
I created a backup script; it backs up GitLab hourly and sends the archive to cloud storage.
Two archives with identical contents always end up with different checksums.
Why do I have two folders with the same files and the same per-file checksums, but when these folders are archived I get different archive checksums? The content has not changed, but the archive checksum changes every time. Why?
What is the expected correct behavior?
When the content has not been edited, the archive should have the same checksum.
Relevant logs and/or screenshots
94e779cbe595eda6f79f15437d6059ec50c40de9efe01c7c8227b2c799556aac artifacts.tar.gz (first)
a15da160a4bc6d308f47bd0ebbbeaa09c549f07136d6f13203f05cf0374c77d2 569.log
709b40d737572628d282d5c5f97a62ea4681560d3300f5c126d34436a375618d 570.log
caf0c823c22213c63a86299c4100aec8e8913d3ef6209d36e893982d6fdf3510 571.log
dc77e18335dde4e2ba3ac38d4b2c8b9f59785057e871cceaea172596d3932a0c 572.log
709b40d737572628d282d5c5f97a62ea4681560d3300f5c126d34436a375618d 573.log
14ec475a0cbfc50408a010e14c7f5ab91ae4f675046b53b0e4a65d5dec7e2b79 574.log
67fbe4206bc4b2e5298472e155b81643fb8a30ab41b3c7971e2c9c9c0af1d9a7 artifacts.tar.gz (second)
a15da160a4bc6d308f47bd0ebbbeaa09c549f07136d6f13203f05cf0374c77d2 569.log
709b40d737572628d282d5c5f97a62ea4681560d3300f5c126d34436a375618d 570.log
caf0c823c22213c63a86299c4100aec8e8913d3ef6209d36e893982d6fdf3510 571.log
dc77e18335dde4e2ba3ac38d4b2c8b9f59785057e871cceaea172596d3932a0c 572.log
709b40d737572628d282d5c5f97a62ea4681560d3300f5c126d34436a375618d 573.log
14ec475a0cbfc50408a010e14c7f5ab91ae4f675046b53b0e4a65d5dec7e2b79 574.log
Output of checks
This bug happens on GitLab-CE Omnibus
Results of GitLab environment info
(For installations with omnibus-gitlab package run and paste the output of:
sudo gitlab-rake gitlab:env:info)
System information
System: Ubuntu 16.04
Current User: git
Using RVM: no
Ruby Version: 2.3.5p376
Gem Version: 2.6.13
Bundler Version: 1.13.7
Rake Version: 12.0.0
Redis Version: 3.2.5
Git Version: 2.13.5
Sidekiq Version: 5.0.4
Go Version: unknown
GitLab information
Version: 10.0.1
Revision: 2417795
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: postgresql
URL: https://git.site
HTTP Clone URL: https://git.site
SSH Clone URL: git@git.git.site
Using LDAP: no
Using Omniauth: no
GitLab Shell
Version: 5.9.0
Repository storage paths:
- default: /var/opt/gitlab/git-data/repositories
Hooks: /opt/gitlab/embedded/service/gitlab-shell/hooks
Git: /opt/gitlab/embedded/bin/git
This seems expected, considering that the copy strategy will first copy the files before tar/gzipping them.
As explained in "Backup strategy option":
The default backup strategy is to essentially stream data from the respective data locations to the backup using the Linux command tar and gzip.
This works fine in most cases, but can cause problems when data is rapidly changing.
When data changes while tar is reading it, the error 'file changed as we read it' may occur, and will cause the backup process to fail.
To combat this, 8.17 introduces a new backup strategy called copy. The strategy copies data files to a temporary location before calling tar and gzip, avoiding the error.
A side-effect is that the backup process will take up to an additional 1X disk space. The process does its best to clean up the temporary files at each stage so the problem doesn't compound, but it could be a considerable change for large installations. This is why the copy strategy is not the default in 8.17.
See lib/backup/files.rb:
# Copy files from public/files to backup/files
def dump
  FileUtils.mkdir_p(Gitlab.config.backup.path)
  FileUtils.rm_f(backup_tarball)
  if ENV['STRATEGY'] == 'copy'
    cmd = %W(cp -a #{app_files_dir} #{Gitlab.config.backup.path})
    output, status = Gitlab::Popen.popen(cmd)
So the creation timestamps of those copied files change, and because tar records file metadata (including timestamps) in the archive, the resulting archive checksum differs even though the file contents are identical.
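The effect can be reproduced outside GitLab with plain tar and gzip (a minimal illustration with made-up file names): identical contents with different metadata produce different archive checksums, while the extracted contents still match.
mkdir -p a && echo hello > a/file.log
cp -r a b                               # plain copy: b/file.log gets a new mtime
tar czf first.tar.gz  -C a .
tar czf second.tar.gz -C b .
sha256sum first.tar.gz second.tar.gz    # the archive checksums differ
tar xzf first.tar.gz  -O | sha256sum    # ...but the extracted contents
tar xzf second.tar.gz -O | sha256sum    # are byte-for-byte identical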

Xcode Build server. Git operation failed

I've got an error on my Build server. It looks like this:
Bot Issue for My OSX Project (build service error)
Integration #1300 of My OSX Project
Open in Xcode: xcbot://xwserver/botID/d127cd23bd4cee1081dfcc192904a85b/integrationID/699d47fa9105419469cca90c6a2a7286
Assertion: Could not open '/Library/Developer/XcodeServer/Integrations/Caches/d127cd23bd4cee1081dfcc192904a85b/Source/xwrtrunk/.git/logs/refs/remotes/origin/AnotherProjectFolderName'
for writing: Is a directory (-1) File: (null):(null)
Introduced 5 integrations ago
Full logs for this integration are attached.
When I changed the git repo, everything was fine, but with this git repo it always fails. I don't know what to do and have no ideas left.
What we did:
Cut (removed) the checked-out repo on the build server.
Checked the file system using disk utilities.
P.S. Anyway, thanks for your attention.
Whenever a repo is problematic with Xcode, the first workaround is to:
clone it again, and
make Xcode reference the newly cloned repo.
The OP ZevsVU (after doing just that) adds in the comments:
We got this problem when I created a branch whose name was equal to a folder name in the repo.
We just deleted this branch and everything is fine at the moment.
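The underlying mechanism appears to be git's loose-ref storage: each ref is a file under .git/refs (and .git/logs/refs), so a name cannot be both a file and a directory there. A small sketch with made-up branch names:
git init demo && cd demo
git commit --allow-empty -m init
git branch feature/login   # creates the directory .git/refs/heads/feature/
git branch feature         # fails: cannot lock ref 'refs/heads/feature'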
Another instance of a similar issue is now (Q4 2021) better presented:
See commit 66e905b, commit a7439d0 (25 Aug 2021) by René Scharfe (rscharfe).
(Merged by Junio C Hamano -- gitster -- in commit 7b06222, 08 Sep 2021)
xopen: explicitly report creation failures
Signed-off-by: René Scharfe
If the flags O_CREAT and O_EXCL are both given then open(2) is supposed to create the file and error out if it already exists.
The error message in that case looks like this:
fatal: could not open 'foo' for writing: File exists
Without further context this is confusing: Why should the existence of the file pose a problem? Isn't that a requirement for writing to it?
Add a more specific error message for that case to tell the user that we actually don't expect the file to preexist, so the example becomes:
fatal: unable to create 'foo': File exists
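The shell has a rough analogue of this flag combination that may make the semantics clearer (a sketch using bash's noclobber option, not git's actual code path):
set -o noclobber
echo first > foo     # succeeds: foo is created
echo again > foo     # fails: "cannot overwrite existing file", like O_CREAT|O_EXCL
echo forced >| foo   # '>|' overrides noclobber, like dropping O_EXCL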

How can I recover after a checksum mismatch with 'git svn clone'?

I'm cloning an SVN repository to git as part of our migration plan. I've hit various snags along the way, forcing me to continue the clone with a git svn fetch command. The most recent failure I can't figure out how to solve:
$ git svn fetch
Checksum mismatch: dc/trunk-4632-jh/dc-smtpd/lib/Qpsmtpd/Address.pm.t 8ce3aea3f47dc115e8fe53bd62d0f074cfe93ec6
expected: 59de969022e46135fa6dc7599fc2f3b4
got: 4334926a01c905cdb7fce71265e370c1
I found this related answer; however, that solution doesn't work because git svn log is not yet functional, as the repo is not fully in place:
$ git svn log dc/trunk-4632-jh/dc-smtpd/lib/Qpsmtpd/Address.pm.t
fatal: ambiguous argument 'HEAD': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions
log --no-color --first-parent --pretty=medium HEAD: command returned error: 128
How can I proceed?
Another answer to an old question, but straightforward solutions are tough to find for this problem, so hopefully this helps others.
I think this issue occurs due to a file getting corrupted during transfer. I'm not sure how or why it happens, but in my case I get the error at different revisions every time I do a new clone, and sometimes not at all.
Using the questioner's error message:
$ git svn fetch
Checksum mismatch: dc/trunk-4632-jh/dc-smtpd/lib/Qpsmtpd/Address.pm.t
8ce3aea3f47dc115e8fe53bd62d0f074cfe93ec6
expected: 59de969022e46135fa6dc7599fc2f3b4
got: 4334926a01c905cdb7fce71265e370c1
The following steps allowed me to resume and make progress:
1. View all branches (these will all be remote branches): git branch -a
2. Check out the affected branch: git checkout remotes/origin/trunk-4632-jh
   (This will take some time to complete.)
3. Find the last revision in which the problematic file was changed: git svn log dc-smtpd/lib/Qpsmtpd/Address.pm.t
   Note the highest revision number.
4. Reset back to that revision: git svn reset -r (rev #) -p
5. Carry on: git svn fetch
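Put together as one sequence (the branch and file names are from the example above; the revision number is only a placeholder, substitute your own values):
git branch -a                                   # list the remote branches
git checkout remotes/origin/trunk-4632-jh       # check out the affected branch
git svn log dc-smtpd/lib/Qpsmtpd/Address.pm.t   # find the last revision that touched the file
git svn reset -r 4632 -p                        # reset back to that revision (4632 is a placeholder)
git svn fetch                                   # resume fetching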
Good luck.
I know this is old, but maybe it will be helpful for future reference, as the search results on this are not very helpful.
I've hit a similar issue on our huge repository, which takes days to clone, and unfortunately at one point I had to restart my machine. I am currently working out how to resolve the problem, so please keep in mind this is more a suggestion than a tested solution.
I think you need to try creating a branch and checking out the commits you currently have from the previous fetch:
git checkout -b master git-svn
After that is done, you should have a working tree up to that commit. Further fetches will probably fail due to an object mismatch, but at that point it should at least be possible to use "git svn reset" to revert the faulty SVN fetches (see the OP's related answer link). If so, find the offending commit, reset to before it, and then continue fetching.
You might want to rebase and revert to the state before the broken commit on your master branch, or convert back to a bare repository, if that's what you're after (in my case it is).
Hope this works. I'll post an update when my checkout is done (it will take at least a few hours... sigh).
Edit: That seemed to work. I successfully discarded some git-svn commits and am able to re-fetch them again. :)
Edit 2: Make sure to reset until you don't get any object-mismatch warnings on git svn fetch (otherwise you will run into the same issue soon).
Cheers,
Henryk
See also: Git svn rebase : checksum mismatch
In our case, additional processing of the files (server-side includes in Apache) caused the checksum problem.
Disabling SSI in Apache's /etc/httpd.conf for the duration of the migration, by commenting out the
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml
directives, solved the problem. The mismatch was caused by the front-end Apache server interpreting the .shtml files, which produced new content (and thus a new hash) different from the hash of the original file itself.
That means some files in the repository got corrupted. This can have various causes, such as software bugs, bit rot on drives, etc. I was recently migrating a very old ~10 GB SVN repository to git, so some corruption was expected.
To fix the corruption, you basically need to dump the entire repository and import it while filtering the errors out. Note that the goal is to complete the import process, no matter why or how the repository got corrupted; you cannot simply fix the corruption without having a backup and diffing through the revision files.
The first basic one-off command you could use is:
svnadmin create repo2
svnadmin dump repo | sed '/^Text-content-md5/d' | svnadmin load repo2
This strips the stored checksums from the dump, so the new repository will have recalculated checksums.
If you encounter more errors during the dump and load (which is expected), try an incremental approach so you can continue from the point where you left off. The command below dumps revisions 101 through 150 (inclusive).
svnadmin dump --incremental -r101:150 repo | sed '/^Text-content-md5/d' | svnadmin load repo2
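If many ranges are needed, the same idea can be scripted (a rough sketch; the repository paths and the 50-revision chunk size are arbitrary choices):
LAST=$(svnlook youngest repo)            # youngest revision in the source repository
for START in $(seq 1 50 "$LAST"); do
  END=$((START + 49)); [ "$END" -gt "$LAST" ] && END="$LAST"
  svnadmin dump --incremental -r"$START:$END" repo \
    | sed '/^Text-content-md5/d' \
    | svnadmin load repo2
done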
Some common errors and solutions:
'Premature end of content data in dumpstream': this means the Content-length of some file does not match what is in the repository, so some data in that file is lost and we must skip it. Add an | svndumpfilter exclude path/to/file.jar stage, like this:
svnadmin dump --incremental -r101:150 repo | svndumpfilter exclude path/to/file.jar | sed '/^Text-content-md5/d' | svnadmin load repo2
Property errors: add --bypass-prop-validation to the svnadmin load command.
After populating your second repo, you would simply run svnserve -d -r repo2 and try git svn fetch again.
Good luck!

Update redis server from 1.2.6 to latest

I need to update the Redis server.
I found a way to save the DB to disk and restore it afterwards, but my question is: will the new Redis server have problems reading the old DB structure?
The version of the dump file is encoded in the first 9 characters. So the following command can be used to check it:
$ head -1 dump.rdb | cut -c1-9
REDIS0002
Redis 1.2.6 used version 1 of the dump file format (it can read and write only version 1).
Redis 2.4.6 uses version 2; however, it is able to read both version 1 and version 2 files. Version 2 happens to be backward compatible with version 1 anyway.
To upgrade, you can just read the version 1 dump file with a recent Redis release and then dump the file again (it will be written in the version 2 format). The new file may be smaller due to some optimizations available in recent Redis versions and with the version 2 format.
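In practice the upgrade can be as simple as pointing the newer redis-server at the old dump directory and saving once (a sketch; the paths and config file are assumptions):
head -c 9 dump.rdb            # REDIS0001: written by the old 1.2.6 instance
redis-server ./redis.conf &   # start the newer Redis against the same data directory
redis-cli bgsave              # rewrites dump.rdb in the newer (version 2) format
redis-cli lastsave            # confirm the background save completed
head -c 9 dump.rdb            # now reports REDIS0002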
Optionally, you can check the integrity of the dump file before starting the 2.4 Redis instance by using the redis-check-dump command:
$ ../redis-2.4.4/src/redis-check-dump dump.rdb
==== Processed 19033 valid opcodes (in 639641 bytes) ===========================
This is a purely read-only utility; it cannot harm the dump file.