Rsync doesn't run from cron, but works manually over ssh

I have a simple script for backing up files from my server. It does the following:
Connects to the server via SSH
Creates a MySQL dump file
Tars some folders
Exits
Starts rsnapshot to download the folder containing the tar.gz and SQL files
SSHes back to the server just to clean up the files
Exits
At the top of my crontab I've set the following:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
SHELL=/bin/bash
However, the script sometimes starts and sometimes doesn't. Also, rsnapshot says the following for a few of my servers when running from cron:
/usr/bin/rsnapshot -c /backup/configs/myserver.com.conf daily: ERROR: /usr/bin/rsync returned 255 while processing user@myserver.com:/home/user/serverdump/
Do you have any ideas about either of these issues?
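
For context on the second error: rsync reports 255 when the underlying ssh connection itself fails, and cron's environment (no ssh-agent, sometimes a different HOME and hence different keys) is the usual cause. A minimal debugging wrapper along these lines (the log path and key path are assumptions, not from the question) shows whether ssh can authenticate at all under cron:

#!/bin/bash
# Log what cron actually sees, then probe ssh before running rsnapshot.
LOG=/var/log/rsnapshot-debug.log
{
    date
    echo "HOME=$HOME USER=$USER PATH=$PATH"
    # BatchMode makes ssh fail immediately instead of waiting for a passphrase.
    if ssh -o BatchMode=yes -i /root/.ssh/backup_key user@myserver.com true; then
        echo "ssh OK"
    else
        echo "ssh FAILED with exit code $?"
    fi
    /usr/bin/rsnapshot -c /backup/configs/myserver.com.conf daily
    echo "rsnapshot exited with $?"
} >> "$LOG" 2>&1

If the ssh probe fails here but works from an interactive shell, the key or agent setup under cron is the problem, not rsnapshot itself.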

Related

Code deployment via zipped file in Jenkins

I am new to Jenkins and still taking baby steps to learn it. What I have could be very simple for some people, but I couldn't find a straightforward way to do it. I simply want to take source code in a zipped file and do the following:
copy it to a certain directory on a remote server
delete the old code
unzip the new code
delete the zipped file
finally, start the Apache web server
I have installed plugins like ssh2, ssh-copy, remote commands, etc., but still cannot achieve what I am looking to do. Any help would be greatly appreciated.
I have a Spring project and use Jenkins to build it into a .war file.
The following shell commands show how to copy the .war to a remote server and run it on Tomcat. Note that the old code is removed before the new .war is copied over; otherwise the cleanup would delete the file that was just uploaded.
remote_host=192.168.1.2
tomcat_home=/x/y
# stop web server
ssh root@${remote_host} "sh /root/stop.sh" || echo "something wrong, ignored!"
# delete the old code
ssh root@${remote_host} "rm -rf $tomcat_home/webapps/*"
# copy to remote server in a certain directory
scp $WORKSPACE/build/libs/myapp-test.war root@${remote_host}:$tomcat_home/webapps/myapp.war
# unzip the new code
ssh root@${remote_host} "unzip -o $tomcat_home/webapps/myapp.war -d $tomcat_home/webapps/myapp"
# delete the zipped file
ssh root@${remote_host} "rm -rf $tomcat_home/webapps/myapp.war"
# finally start the web server (Tomcat in this case)
ssh root@${remote_host} "sh $tomcat_home/bin/startup.sh"
In my case, I put these commands in a Jenkins job under Build -- Execute shell -- Command.
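
Adapted back to the zip-based workflow asked about above, a minimal sketch might look like this (the host, paths, archive name, and apache2 service name are placeholders, not from the question):

#!/bin/bash
# Hypothetical values -- adjust to your environment.
remote_host=192.168.1.2
deploy_dir=/var/www/myapp
zip_file=$WORKSPACE/build/myapp.zip

# copy to remote server in a certain directory
scp "$zip_file" root@${remote_host}:/tmp/myapp.zip
# delete the old code
ssh root@${remote_host} "rm -rf $deploy_dir && mkdir -p $deploy_dir"
# unzip the new code
ssh root@${remote_host} "unzip -o /tmp/myapp.zip -d $deploy_dir"
# delete the zipped file
ssh root@${remote_host} "rm -f /tmp/myapp.zip"
# finally start apache web server (assuming a systemd-based distro)
ssh root@${remote_host} "systemctl start apache2"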

Ubuntu Server Backup and Restore via tar

I'm trying to learn how to back up and restore my Ubuntu Server via tar so I know that I have a safe system. After I untar and reboot, I have several issues, but they seem to be caused by a read-only file system. The source and destination servers are both Ubuntu Server on the same version, 18.04.05 LTS. The source server is a VPS with 6 GB RAM and 4 vCPUs. The destination server is a VM on my FreeNAS machine with 6 GB RAM and 2 vCPUs.
The primary applications that need to work are my Graylog server and Nagios server. I've mostly followed the instructions at Ubuntu.
First, my tar command is:
sudo tar -c --use-compress-program=pigz -f backup.tar.gz --exclude=/backup.tar.gz --exclude=/dev --exclude=/usr --exclude=/sbin --exclude=/proc --exclude=/sys --exclude=/tmp --exclude=/run --exclude=/mnt --exclude=/media --exclude=/lost+found --exclude=/home/*/.cache --exclude=/home/*/.gvfs --exclude=/home/*/.local/share/Trash --exclude=/var/log --exclude=/var/cache/apt/archives --exclude=/usr/src/linux-headers* --one-file-system /
I use pigz to take advantage of the VPS's 4 vCPUs and reduce the time it takes. I transfer this to my VM, which has a fresh copy of Ubuntu Server 18.04.05, and untar with:
sudo tar -xvpzf backup.tar.gz -C / --numeric-owner
After I reboot, I get the following messages as soon as the system comes up:
Unable to setup logging. [Errno 30] Read-only file system: '/var/log/landscape/sysinfo.log'
run-parts: /etc/update-motd.d/50-landscape-sysinfo exited with return code 1
mktemp: failed to create file via template '/var/lib/update-notifier/tmp.XXXXXXXXXX': Read-only file system
run-parts: /etc/update-motd.d/95-hwe-eol exited with return code 1
/usr/lib/update-notifier/update-motd-fsck-at-reboot: 33: /usr/lib/update-notifier/update-motd-fsck-at-reboot: cannot create /var/lib/update-notifier/fsck-at-reboot: Read-only file system
I do see that some parts of the system work like the original source: my SSH port changes, the hostname changes, etc. But I get the errors above, and my Graylog and Nagios servers do not work.
So I'm wondering where I went wrong in my process; any help would be appreciated. The source is a live server with backups, so I'm safe there. I'm just making sure I have my ducks in a row for the future.
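
One thing worth checking after a restore like this (an assumption, since the question doesn't show it) is /etc/fstab: a tar of / carries over the source VPS's disk UUIDs, and if they don't match the destination VM's disks, the root filesystem can end up mounted read-only at boot. For example:

# Compare the UUIDs the destination VM actually has...
blkid
# ...with the ones the restored fstab expects.
cat /etc/fstab
# To get a writable root long enough to correct fstab:
sudo mount -o remount,rw /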

Barman PostgreSQL incoming WALs directory

I have a problem with the incoming WALs directory in Barman, a backup tool for PostgreSQL databases.
On my database server I have this in postgresql.conf:
wal_level = 'archive'
archive_mode = on
archive_command = 'rsync -a %p barman@mybarmanserverip:INCOMING_WALS_DIRECTORY/%f'
On my Barman server, when I run the command barman show-server myservername, I see that my incoming_wals_directory is
/var/lib/barman/myservername/incoming
The command barman check myservername returns "OK" on all points, but when I make a backup with barman backup myservername, the first three points complete correctly and then the step "Asking PostgreSQL server to finalize the backup" never ends.
Where is my mistake?
I had this issue, and it was a problem due to rsync.
To check if that's the case for you, try to rsync a random file:
rsync -zvh random_file user@remote_host:/tmp/test
If the output is something like:
protocol version mismatch -- is your shell clean?
then there are two possible reasons:
the rsync versions are not the same on the two servers
some text is output when you ssh to the remote server, and rsync does not like it
To fix the first issue, here is what I did:
make sure that rsync --version is the same on both machines:
on your local env run rsync --version
from your local machine (to the remote) run ssh login@remote_host "rsync --version"
(Install the correct version if they don't match.)
To fix the second issue, you must add something to your .bashrc file that prevents text output after an ssh connection in a non-interactive session (e.g. "Last login: Thu Sep..." makes rsync fail).
I put this at the top of my .bashrc file:
case $- in
*i*) ;;
*) return;;
esac
Then rsync works fine, and the initial barman backup command finishes well.
Replace INCOMING_WALS_DIRECTORY with your incoming folder path, which you can find with the command barman show-server main:
archive_command = 'rsync -a %p barman@mybarmanserverip:/var/lib/barman/main/incoming/%f'
Also make sure that the postgres user can ssh to the barman server correctly.
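
A quick way to confirm that last point is to run the same kind of rsync the archive_command performs, as the postgres user (a sketch; the test file and target path are just scratch values):

# Run on the database server, as the user PostgreSQL runs as.
# If this prompts for a password or prints a banner, WAL archiving will hang.
sudo -u postgres rsync -a /etc/hostname barman@mybarmanserverip:/tmp/rsync-test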

Random error in Unix shell script

I have a shell script which in turn calls a SQL file. It's a bash script running on UNIX. The following are the main steps taken in the script:
1) Generate the Term file.
2) Remove the previous day's Term and Rpt files from the utility directory.
3) Copy the Term file from the Run directory to the utility directory.
4) Run the SQL file.
5) Copy the output RPT file from the utility directory to the Run directory.
Here is the code snippet:
> "$RUN_DIR"/nj.terms
if [[ -s "$RUN_DIR"/nj.terms ]]; then
    rm -f /utl/nj.terms
    rm -f /utl/nj.rpt
    cp "$RUN_DIR"/nj.terms /utl
    /bin/sqlplus USER/PSWD @sql
    cp /utl/nj.RPT "$RUN_DIR"
fi
I get the following error in the SQL output:
ORA-29283: invalid file operation
Mostly this error is due to the absence of the Term file at the time the SQL runs. Because of this error the RPT file is not generated, which causes a failure in the following copy command (cp /utl/nj.RPT "$RUN_DIR"). After the failure, when we checked the Term file, it was present in the /utl directory.
This error occurs randomly. Is there any chance the system takes more time to copy the Term file to the utility directory, so that the SQL runs before the copy has completed? It would be great if someone could help me with this situation.
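
Within a single shell script, cp does not return until the copy is complete, so a pure ordering race is unlikely on a local filesystem; if /utl is an NFS mount shared with the database host, though, caching can delay the file becoming visible there. A small guard like this (a sketch, reusing the paths from the snippet above) turns the random failure into an explicit one:

cp "$RUN_DIR"/nj.terms /utl
# Wait up to 30 seconds for the file to be visible and non-empty
# before starting sqlplus, rather than assuming it already is.
tries=0
until [[ -s /utl/nj.terms ]] || (( tries++ >= 30 )); do
    sleep 1
done
if [[ ! -s /utl/nj.terms ]]; then
    echo "nj.terms never appeared in /utl" >&2
    exit 1
fi
/bin/sqlplus USER/PSWD @sql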

Setup Amazon S3 backup on QNAP using s3cmd

I own a QNAP TS-219P and I want to set this up manually using s3cmd.
I did quite a bit of research on this, and here are the references I found:
http://web.archive.org/web/20091120211330/http://codemonkeybrown.com/qnaps3.html
http://wiki.qnap.com/wiki/Running_Your_Own_Application_at_Startup
http://wiki.qnap.com/wiki/Add_items_to_crontab
http://blog.wingateuk.com/2013/03/cloud-backup-on-qnap-nas.html?showComment=1413660445187#c8935766892046800936
I'm trying to get the s3cmd to work on my TS-219P.
I got everything to work on the command line, even running the script file (s3-backup.sh) itself:
#!/bin/bash
/share/maintenance/s3cmd-1.5.0-rc1/s3cmd --rr sync -rv /share/all-shared-folders/emilie/ s3://kingjim-backup/kingjim-nas/emilie/ >> /share/maintenance/log/s3cmd/backup_`date "+%Y%m%d-%H-%M"`.log
(I also tried #!/bin/sh instead of #!/bin/bash, and running s3cmd via Python by putting /usr/bin/python in front of the command.)
If I run it from the SSH command prompt, it seems to work perfectly.
The problem, though, is the cron job. I can confirm the cron job triggered and ran, because my log file (the one above) was generated, but the log is always empty, even though I'm sure some new files were created or modified.
This is my cronjob task:
14 3 * * * /share/maintenance/s3-backup.sh 2>&1 | logger
I've tried a number of different variations on the above, but couldn't find out what was missing.
I feel like some dependency is missing when crontab runs the script, compared to when I run it at the command prompt. But I don't know how to debug crontab.
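
A standard technique for exactly this situation (not from the original thread) is to capture the environment cron actually provides, then replay the script under it from an interactive shell:

# 1) Add a temporary cron entry that dumps cron's environment:
#    * * * * * env > /tmp/cron-env.txt
# 2) Reproduce the cron run interactively with exactly that environment
#    (works as long as the captured values contain no spaces):
env - $(cat /tmp/cron-env.txt) /bin/bash /share/maintenance/s3-backup.sh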
It turned out that the problem was that the s3cmd configuration file was not found when s3cmd ran from cron.
So the fix was simply to copy the config file (.s3cfg) to a safe shared folder, and then call s3cmd with the --config parameter followed by that file's path.
Like this:
/share/maintenance/s3-backup/s3cmd/s3cmd --config /share/maintenance/s3-backup/s3cmd.config --rr sync -rv /share/MD0_DATA/ s3://xxx-backup/xxx-nas/ >> /share/maintenance/s3-backup/logs/backup_`date "+%Y%m%d-%H-%M"`.log 2>&1
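This also explains the symptom: QNAP's cron most likely runs the job with a different HOME than the interactive SSH session, so s3cmd never finds its default ~/.s3cfg. Pinning the path with --config sidesteps that, and the cron entry itself can stay unchanged.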