I am using rsnapshot for backups and noticed a problem.
I defined some rsync_long_args in the rsnapshot.conf file
rsync_long_args --delete --numeric-ids --relative
Further down in the file, when declaring the BACKUP POINTS / SCRIPTS, I need to add some specific rsync_long_args that take the initial rsync_long_args and add to or override them. For example:
backup backup@xxx.xxx.xxx.xxx:/usr/local/nagios/ myserver/ rsync_long_args=--compress-level=5
And there I have a problem: when testing rsnapshot with the -t option, I am getting:
/usr/bin/rsync -a --delete --numeric-ids --relative --link-dest=/data/backups/rsnapshot/daily.1/myserver/ backup@xxx.xxx.xxx.xxx:/usr/local/nagios/ /data/backups/rsnapshot/daily.0/myserver/
You can see here that the address of the source directory is
backup@xxx.xxx.xxx.xxx:/usr/local/nagios/
The trailing slash means rsync will copy only the contents of the /usr/local/nagios directory, but I need the full path preserved, so the slash shouldn't be there.
If I remove the rsync_long_args= from the BACKUP POINTS / SCRIPTS line, the slash is no longer there.
Any idea why?
I found the solution: it is enough to add + in front of rsync_long_args:
backup backup@xxx.xxx.xxx.xxx:/usr/local/nagios/ myserver/ +rsync_long_args=--compress-level=5
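For reference, the relevant rsnapshot.conf excerpt would then look roughly like this (fields in rsnapshot.conf must be tab-separated; spaces are used here only for readability, and the IP is redacted as in the question):
rsync_long_args --delete --numeric-ids --relative
backup backup@xxx.xxx.xxx.xxx:/usr/local/nagios/ myserver/ +rsync_long_args=--compress-level=5
With the leading +, rsnapshot appends the per-backup-point arguments to the global rsync_long_args instead of replacing them, so --delete --numeric-ids --relative are kept.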
With Bacula/Bareos, the documentation stresses the importance of saving the Catalog bootstrap file somewhere safe. I know the Catalog consists of a MySQL DB dump and, optionally, the included Bacula/Bareos config files, but how exactly does anyone recover from scratch in case the whole backup infrastructure is gone?
Is it just a matter of installing all the Bacula/Bareos software, importing the MySQL dump and the config, and then firing up the Director?
A bit of an old question, but I'll provide some feedback.
If you've done a mysqldump of the database (or pg_dump, depending on the backend), you essentially have the catalog in its full state. I believe you can simply restore this database to a new server and restore the old config files (these are not stored in the dump but rather in /etc/bareos). Also, make sure the same user/password is used for the database user as specified in the bareos-dir.conf file, or else you will not be able to connect to the database. Depending on how your storage devices are set up, you may need to mess around with the bareos-sd.conf file.
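As an illustration only, a minimal sketch of that recovery for a MySQL backend (the database name, user, dump filename, and the location of the saved configs are all assumptions):
# recreate the catalog database and load the saved dump (names are assumptions)
mysql -u root -p -e "CREATE DATABASE bareos;"
mysql -u root -p bareos < bareos-catalog.sql
# put the saved configuration back and restart the Director (service name may vary by version)
cp -a /backup/etc/bareos/. /etc/bareos/
systemctl restart bareos-dir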
To answer the other question of the OP: you can use a volume without a catalog. It's a bit cumbersome, but it is possible with the following tools:
http://www.bacula.org/5.0.x-manuals/en/utility/utility/Volume_Utility_Tools.html
For example:
List jobs on a volume: bls -j -V Full_1-1886 FileStorage1
List files on a volume: bls -V Full_1-1886 FileStorage1
Once you have found the file or directory (note that wildcard characters are supported), you can extract it:
bextract -i restoreFiles -V Full_2-1277 FileStorage2 /tmp/
Where:
restoreFiles is a newline-separated file listing the files/directories to restore
/tmp/ is the destination of the restore
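For illustration, a hypothetical restoreFiles list could look like the following (these paths are made up for the example):
/etc/
/home/user/documents/report.pdf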
For instance, I have tried this (notice the source is remote):
scp root@$node:/sourcepath/sourcefile.log /destinationpath/destinationfile.log
The other option is to rename the file afterwards, but it would be more convenient to do it on the fly while the data is downloaded via scp, hence my question. Thanks.
Maybe without scp:
ssh yourserver "cat >tmpfile && mv tmpfile datafile" <datafile
This command copies the file "datafile" to the remote server under the name "tmpfile".
Only after a successful copy does it rename the temporary file "tmpfile" to the proper name "datafile" on the remote host.
If the copy was not successful, only the temporary file will be left on the remote host.
Thus, you are protected from ending up with an incomplete "datafile".
Sorry for my English.
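The same protection can be applied in the download direction shown in the question; a rough sketch (the paths are taken from the question, and the .part suffix is just an example):
scp root@$node:/sourcepath/sourcefile.log /destinationpath/destinationfile.log.part && mv /destinationpath/destinationfile.log.part /destinationpath/destinationfile.log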
I have a Thecus home server, and I'd like to edit the index.php file located at /img/www/htdocs/index.php; however, every time I open it in vi it tells me the file is 'Read-only'.
I checked its file permissions using ls -l index.php:
-rw-r--r-- 1 root root 7619 Mar 29 2013 /img/www/htdocs/index.php
From my understanding, the rw at the start of the permissions is the owner's permissions, and the file is owned by user root in group root.
I have ssh'd into my server using:
ssh root@server.com
Once I log in, the prompt says
root@127.0.0.1:~#
I have tried changing its ownership, chmodding it, and using vi to change the permissions; forcing the write doesn't work either. How can I edit this damned file! :(
When I try to use sudo it says the command is not found, so I'm assuming that's because Thecus has stripped down the available commands.
Looking at the output of mount without any arguments, I noticed that the directory I'm currently working in is actually mounted ro. Is there a way I can change this?
/dev/cloop2 on /img/www type ext2 (ro,relatime,errors=continue,user_xattr,acl)
Any help would be great! :)
Try mount -o remount,rw /img/www/. If that is not possible, you can copy the contents to a place where you can modify them, unmount the original /img/www/, and then symlink or "bind mount" the new location there.
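A rough sketch of that second approach, assuming /raid/data/www is a writable location on the box (that path is an assumption):
# copy the read-only contents to a writable location
mkdir -p /raid/data/www
cp -a /img/www/. /raid/data/www/
# unmount the original read-only image and bind mount the writable copy in its place
umount /img/www
mount --bind /raid/data/www /img/www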
I want to take a backup of my website, which is hosted on GoDaddy.
I used the pscp command from the Windows command prompt and tried to download the whole public_html folder.
My command is:
pscp -r user@host:public_html/ d:\sites\;
The files and folders are downloading properly, but the issue is that public_html and the other subfolders contain the two entries "./" and "../". Because of these two entries my copy fails and I get the error
"security violation: remote host attempted to write to a '.' or '..' path!"
I hope someone can help with this.
Note: I only have SSH access and have to download everything using SSH-based commands.
Appending a star to the source should fix it, e.g.
pscp -r user@host:public_html/* d:\sites\;
You can also do the same thing by not adding a '/' at the end of your source path.
For example:
pscp -r user@host:public_html d:\sites
The above command will create the public_html directory at your destination (i.e. d:\sites) if it does not already exist.
Simply put, the above command gives you an as-is clone of public_html at d:\sites.
One important thing: you need to specify the port number here with "-P 22".
pscp -r -P 22 user@host:public_html/* D:\sites
In my case, it only worked when I specified port 22 in the above command.
I'm trying to download an S3 directory to my local machine using s3cmd. I'm using the command:
s3cmd sync --skip-existing s3://bucket_name/remote_dir ~/local_dir
But if I restart the download after an interruption, s3cmd doesn't skip the existing local files downloaded earlier and overwrites them. What is wrong with the command?
I had the same problem and found the solution in comment #38 from William Denniss at http://s3tools.org/s3cmd-sync
If you have:
$ s3cmd sync --verbose s3://mybucket myfolder
Change it to:
$ s3cmd sync --verbose s3://mybucket/ myfolder/ # note the trailing slash
Then the MD5 hashes are compared and everything works correctly! --skip-existing works as well.
To recap, neither --skip-existing nor the MD5 checks happen if you use the first command, and both work if you use the second (I made a mistake in my previous post, as I was testing with two different directories).
Use boto-rsync instead. https://github.com/seedifferently/boto_rsync
It correctly syncs only new/changed files from s3 to the local directory.
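If it helps, a hedged usage sketch mirroring the original command (the bucket and paths are the ones from the question; boto-rsync takes a source and a destination much like rsync):
boto-rsync s3://bucket_name/remote_dir/ ~/local_dir/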