I'm using Terraform to create some self-signed TLS certs for HashiCorp Vault. The problematic Terraform bits in my module are below; I have tried two ways to get this to work.
First way, which in theory I think should work:
provisioner "local-exec" {
command = "echo '${self.cert_pem}' > ../tls/ca.pem && chmod 0600 ../tls/ca.pem"
}
}
provisioner "local-exec" {
command = "echo '${self.cert_pem}' > ../tls/vault.pem && echo '${tls_self_signed_cert.vault-ca.cert_pem}' >> ../tls/vault.pem && chmod 0600 ../tls/vault.pem"
}
Which throws this error:
│ ' > ../tls/ca.pem && chmod 0600 ../tls/ca.pem': exit status 2. Output:
│ /bin/sh: 1: cannot create ../tls/ca.pem: Directory nonexistent
And if I replace the .. with a hardcoded path, i.e. this:
provisioner "local-exec" {
command = "echo '${self.cert_pem}' > /etc/vault/tls/ca.pem && chmod 0600 /etc/vault/tls/ca.pem"
}
}
provisioner "local-exec" {
command = "echo '${self.cert_pem}' > /etc/vault/tls/vault.pem && echo '${tls_self_signed_cert.vault-ca.cert_pem}' >> /etc/vault/tls/vault.pem && chmod 0600 /etc/vault/tls/vault.pem"
}
I get the same error but obviously showing the path instead:
> /etc/vault/tls/ca.pem && chmod 0600 /etc/vault/tls/ca.pem': exit status
│ 2. Output: /bin/sh: 1: cannot create /etc/vault/tls/ca.pem: Directory
│ nonexistent
If I go and look inside the container myself, the path /etc/vault/tls is there...
You have to ensure that /etc/vault/tls/ exists before you can write a file into it. Note also that with sudo echo '...' > file, the redirection is performed by the unprivileged calling shell, not by sudo, so sudo tee is the safer pattern:
provisioner "local-exec" {
  command = "sudo mkdir -p /etc/vault/tls && echo '${self.cert_pem}' | sudo tee /etc/vault/tls/ca.pem > /dev/null && sudo chmod 0600 /etc/vault/tls/ca.pem"
}
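Two background notes. First, local-exec provisioners run on the machine where terraform apply runs, not inside the target container, so the directory has to exist on the Terraform host; that is why the path can exist in the container while the provisioner still fails. Second, if the goal is just writing cert material to local files, the local_file resource from the hashicorp/local provider avoids shell quoting and permission juggling entirely. A minimal sketch, reusing the tls_self_signed_cert resource from the question (the resource name ca_pem is made up here):
# Hypothetical alternative: write the PEM with the local_file resource
# (hashicorp/local provider) instead of shelling out.
resource "local_file" "ca_pem" {
  content         = tls_self_signed_cert.vault-ca.cert_pem
  filename        = "${path.module}/../tls/ca.pem"
  file_permission = "0600"
}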
I'm using this shell script in my Jenkinsfile for running my test scripts. I implemented rerun and rebot for merging of reports.
${Run_Type} = pabot
${REPORT_FOLDER} = the output directory
steps {
    script {
        dir("${NODE_WORKSPACE}") {
            try {
                sh "rm -rf ../${BATCH_TARGET_DIR_NAME}"
                def exitCode
                if (params.Tags == null || params.Tags.isEmpty()) {
                    exitCode = sh script: "${Run_Type} -d ../${REPORT_FOLDER} -v browser:headlesschrome -v webdriver.chrome.driver:chromedriver -i ${TestGroup} . || \
                        ${Run_Type} -d ../${REPORT_FOLDER} --rerunfailed ../${REPORT_FOLDER}/output.xml -o rerun.xml -i ${TestGroup} . || \
                        rebot -o ../${REPORT_FOLDER}/final.xml -r ../${REPORT_FOLDER}/report.html -l ../${REPORT_FOLDER}/log.html -R ../${REPORT_FOLDER}/output.xml ../${REPORT_FOLDER}/rerun.xml", returnStatus: true
                }
                else {
                    exitCode = sh script: "${Run_Type} -d ../${REPORT_FOLDER} -v browser:headlesschrome -v webdriver.chrome.driver:chromedriver -i ${Tags} . || \
                        ${Run_Type} -d ../${REPORT_FOLDER} --rerunfailed ../${REPORT_FOLDER}/output.xml -o rerun.xml -i ${Tags} . || \
                        rebot -o ../${REPORT_FOLDER}/final.xml -r ../${REPORT_FOLDER}/report.html -l ../${REPORT_FOLDER}/log.html -R ../${REPORT_FOLDER}/output.xml ../${REPORT_FOLDER}/rerun.xml", returnStatus: true
                }
                if (exitCode != 0) {
                    TEST_RESULT = "FAILURE"
                    throw new Exception("exitCode " + exitCode)
                }
            } catch (Throwable e) {
                throw e
            }
        }
    }
}
Here's the deal:
rebot -R (--merge) of the XML reports works when test cases still fail after the rerun.
(screenshot: merged report produced successfully because the rerun still failed)
But when I rerun and the test case(s) pass, rebot --merge/-R doesn't even run on Jenkins.
The report displays only the rerun report, and I noticed that rebot --merge didn't even run in my Jenkins console output.
I'm thinking maybe there's a problem with the logic of my shell script, since it works when test cases still fail after the rerun.
Here is one line of my shell script, isolated:
sh script: "${Run_Type} -d ../${REPORT_FOLDER} -v browser:headlesschrome -v webdriver.chrome.driver:chromedriver -i ${TestGroup} . || ${Run_Type} -d ../${REPORT_FOLDER} --rerunfailed ../${REPORT_FOLDER}/output.xml -o rerun.xml -i ${TestGroup} . || rebot -o ../${REPORT_FOLDER}/final.xml -r ../${REPORT_FOLDER}/report.html -l ../${REPORT_FOLDER}/log.html -R ../${REPORT_FOLDER}/output.xml ../${REPORT_FOLDER}/rerun.xml", returnStatus: true
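This behaviour is consistent with how the shell short-circuits a || b || c: each command after || runs only when the previous one failed. So when the rerun passes (exits 0), the chain stops there and rebot is never reached, which matches the console output. A minimal sketch of one way around it, grouping the rerun and the merge so rebot always runs whenever a rerun happened (../reports and the smoke tag are placeholders here, not the pipeline's real ${REPORT_FOLDER} and ${TestGroup} values):
# a || b || c: c runs only if BOTH a and b fail,
# so a passing rerun (b) silently skips the rebot merge (c).
# Sketch of a fix: group the rerun and the merge together.
pabot -d ../reports -i smoke . || {
    pabot -d ../reports --rerunfailed ../reports/output.xml -o rerun.xml -i smoke .
    rebot -o ../reports/final.xml -r ../reports/report.html \
          -l ../reports/log.html -R ../reports/output.xml ../reports/rerun.xml
}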
I am installing Android Studio and following these instructions: https://docs.expo.dev/workflow/android-studio-emulator/
I need to add
[ -d "$HOME/Library/Android/sdk" ] && ANDROID_SDK=$HOME/Library/Android/sdk || ANDROID_SDK=$HOME/Android/Sdk
echo "export ANDROID_SDK=$ANDROID_SDK" >> ~/`[[ $SHELL == *"zsh" ]] && echo '.zshenv' || echo '.bash_profile'
and
echo "export PATH=$HOME/Library/Android/sdk/platform-tools:\$PATH" >> ~/`[[ $SHELL == *"zsh" ]] && echo '.zshenv' || echo '.bash_profile'`
to ~/.zshrc
What I did is:
From Android Studio, I see my path is Users/myname/Library/Android/sdk, therefore:
nano ~/.zshrc
opens the ~/.zshrc window,
and I added these two lines like this...
[ -d "Users/<myname>/Library/Android/sdk" ] && ANDROID_SDK=Users/<myname>/Library/Android/sdk || ANDROID_SDK=Users/<myname>/Android/Sdk
echo "export ANDROID_SDK=$ANDROID_SDK" >> ~/`[[ $SHELL == *"zsh" ]] && echo '.zshenv' || echo '.bash_profile'
and on the next line, I added...
echo "export PATH=Users/<myname>/Library/Android/sdk/platform-tools:\$PATH" >> ~/`[[ $SHELL == *"zsh" ]] && echo '.zshenv' || echo '.bash_profile'`
The instructions then say I need to make sure adb works in the terminal. So I typed adb in the terminal, and it returned this error message:
zsh: command not found: adb
I am very new to shell things.. please help me!
You don't need to open the file with an editor: just run the two shell lines themselves in your terminal. They will append the export command to .zshenv with the proper ANDROID_SDK, to be sourced the next time you open a terminal, and at the same time ANDROID_SDK will be populated in your current session so you can keep going with the setup.
The lines are doing the two following things:
Checking if $HOME/Library/Android/sdk exists: if so assign $HOME/Library/Android/sdk to ANDROID_SDK, else assign it $HOME/Android/Sdk
Checking if you're using zsh by running $SHELL == *"zsh": if so, append export ANDROID_SDK=$ANDROID_SDK to .zshenv, if not, append it to .bash_profile
If you wanted to do it manually, open .zshenv and append export ANDROID_SDK=$ANDROID_SDK to it (without the echo part), where $ANDROID_SDK is the directory mentioned above ($HOME/Library/Android/sdk or $HOME/Android/Sdk, depending on whether the $HOME/Library/Android/sdk directory exists).
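Also note that the paths typed into the question are missing the leading slash (Users/<myname>/... instead of /Users/<myname>/...), so the -d test and the PATH entry would both miss even after the lines run. For completeness, a sketch of what should end up in the startup file, assuming the macOS default SDK location, plus a quick check (adb only resolves in a new shell or after sourcing the file):
# ~/.zshenv -- what the two lines are meant to append on macOS
export ANDROID_SDK=$HOME/Library/Android/sdk
export PATH=$HOME/Library/Android/sdk/platform-tools:$PATH

# then, in a new terminal (or after: source ~/.zshenv)
adb version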
I have
celery==3.1.23
Django==1.9.1
redis==2.10.5
ii redis-server 2:2.8.19-3 amd64 Persistent key-value database with networ
ii redis-tools 2:2.8.19-3 amd64 Persistent key-value database with networ
My configuration settings have the lines
# Celery
BROKER_URL = 'redis://127.0.0.1:6379/0'
BROKER_TRANSPORT = 'redis'
# start worker with '$ celery -A intro worker -l debug'
and my configuration file celery.py (standard practice is to name it this way, but confusing in my opinion) is
from __future__ import absolute_import
import os
import django
from celery import Celery
from django.conf import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'intro.settings')
django.setup()
app = Celery('intro')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
The config /etc/default/celeryd (also confusing naming) is
# copy this file to /etc/default/celeryd
CELERYD_NODES="w1 w2 w3"
VIRTUAL_ENV_PATH="/srv/intro/bin"
# JRT
CELERY_BIN="${VIRTUAL_ENV_PATH}/celery"
# Where to chdir at start.
CELERYD_CHDIR="/srv/intro/intro"
# Python interpreter from environment.
ENV_PYTHON="$VIRTUAL_ENV_PATH/python"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"
# How to call "manage.py celeryctl"
CELERYCTL="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryctl"
# Extra arguments to celeryd NOTE --beat is vital, otherwise scheduler
# will not run
CELERYD_OPTS="--concurrency=1 --beat"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="jimmy"
CELERYD_GROUP="jimmy"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="intro.settings"
#CELERY_BROKER_URL = 'redis://127.0.0.1:6379/0'
#export DJANGO_SETTINGS_MODULE="settings"
#CELERYD_MULTI="/home/webapps/.virtualenvs/crowdstaff/bin/django-admin.py celeryd_detach"
My /etc/init.d/celeryd file is
#!/bin/sh -e
VERSION=10.1
echo "celery init v${VERSION}."
if [ $(id -u) -ne 0 ]; then
echo "Error: This program can only be used by the root user."
echo " Unprivileged users must use the 'celery multi' utility, "
echo " or 'celery worker --detach'."
exit 1
fi
# Can be a runlevel symlink (e.g. S02celeryd)
if [ -L "$0" ]; then
SCRIPT_FILE=$(readlink "$0")
else
SCRIPT_FILE="$0"
fi
SCRIPT_NAME="$(basename "$SCRIPT_FILE")"
DEFAULT_USER="celery"
DEFAULT_PID_FILE="/var/run/celery/%n.pid"
DEFAULT_LOG_FILE="/var/log/celery/%n.log"
DEFAULT_LOG_LEVEL="INFO"
DEFAULT_NODES="celery"
DEFAULT_CELERYD="-m celery worker --detach"
CELERY_DEFAULTS=${CELERY_DEFAULTS:-"/etc/default/${SCRIPT_NAME}"}
# Make sure executable configuration script is owned by root
_config_sanity() {
local path="$1"
local owner=$(ls -ld "$path" | awk '{print $3}')
local iwgrp=$(ls -ld "$path" | cut -b 6)
local iwoth=$(ls -ld "$path" | cut -b 9)
if [ "$(id -u $owner)" != "0" ]; then
echo "Error: Config script '$path' must be owned by root!"
echo
echo "Resolution:"
echo "Review the file carefully and make sure it has not been "
echo "modified with mailicious intent. When sure the "
echo "script is safe to execute with superuser privileges "
echo "you can change ownership of the script:"
echo " $ sudo chown root '$path'"
exit 1
fi
if [ "$iwoth" != "-" ]; then # S_IWOTH
echo "Error: Config script '$path' cannot be writable by others!"
echo
echo "Resolution:"
echo "Review the file carefully and make sure it has not been "
echo "modified with malicious intent. When sure the "
echo "script is safe to execute with superuser privileges "
echo "you can change the scripts permissions:"
echo " $ sudo chmod 640 '$path'"
exit 1
fi
if [ "$iwgrp" != "-" ]; then # S_IWGRP
echo "Error: Config script '$path' cannot be writable by group!"
echo
echo "Resolution:"
echo "Review the file carefully and make sure it has not been "
echo "modified with malicious intent. When sure the "
echo "script is safe to execute with superuser privileges "
echo "you can change the scripts permissions:"
echo " $ sudo chmod 640 '$path'"
exit 1
fi
}
if [ -f "$CELERY_DEFAULTS" ]; then
_config_sanity "$CELERY_DEFAULTS"
echo "Using config script: $CELERY_DEFAULTS"
. "$CELERY_DEFAULTS"
fi
# Sets --app argument for CELERY_BIN
CELERY_APP_ARG=""
if [ ! -z "$CELERY_APP" ]; then
CELERY_APP_ARG="--app=$CELERY_APP"
fi
CELERYD_USER=${CELERYD_USER:-$DEFAULT_USER}
# Set CELERY_CREATE_DIRS to always create log/pid dirs.
CELERY_CREATE_DIRS=${CELERY_CREATE_DIRS:-0}
CELERY_CREATE_RUNDIR=$CELERY_CREATE_DIRS
CELERY_CREATE_LOGDIR=$CELERY_CREATE_DIRS
if [ -z "$CELERYD_PID_FILE" ]; then
CELERYD_PID_FILE="$DEFAULT_PID_FILE"
CELERY_CREATE_RUNDIR=1
fi
if [ -z "$CELERYD_LOG_FILE" ]; then
CELERYD_LOG_FILE="$DEFAULT_LOG_FILE"
CELERY_CREATE_LOGDIR=1
fi
CELERYD_LOG_LEVEL=${CELERYD_LOG_LEVEL:-${CELERYD_LOGLEVEL:-$DEFAULT_LOG_LEVEL}}
CELERY_BIN=${CELERY_BIN:-"celery"}
CELERYD_MULTI=${CELERYD_MULTI:-"$CELERY_BIN multi"}
CELERYD_NODES=${CELERYD_NODES:-$DEFAULT_NODES}
export CELERY_LOADER
if [ -n "$2" ]; then
CELERYD_OPTS="$CELERYD_OPTS $2"
fi
CELERYD_LOG_DIR=`dirname $CELERYD_LOG_FILE`
CELERYD_PID_DIR=`dirname $CELERYD_PID_FILE`
# Extra start-stop-daemon options, like user/group.
if [ -n "$CELERYD_CHDIR" ]; then
DAEMON_OPTS="$DAEMON_OPTS --workdir=$CELERYD_CHDIR"
fi
check_dev_null() {
if [ ! -c /dev/null ]; then
echo "/dev/null is not a character device!"
exit 75 # EX_TEMPFAIL
fi
}
maybe_die() {
if [ $? -ne 0 ]; then
echo "Exiting: $* (errno $?)"
exit 77 # EX_NOPERM
fi
}
create_default_dir() {
if [ ! -d "$1" ]; then
echo "- Creating default directory: '$1'"
mkdir -p "$1"
maybe_die "Couldn't create directory $1"
echo "- Changing permissions of '$1' to 02755"
chmod 02755 "$1"
maybe_die "Couldn't change permissions for $1"
if [ -n "$CELERYD_USER" ]; then
echo "- Changing owner of '$1' to '$CELERYD_USER'"
chown "$CELERYD_USER" "$1"
maybe_die "Couldn't change owner of $1"
fi
if [ -n "$CELERYD_GROUP" ]; then
echo "- Changing group of '$1' to '$CELERYD_GROUP'"
chgrp "$CELERYD_GROUP" "$1"
maybe_die "Couldn't change group of $1"
fi
fi
}
check_paths() {
if [ $CELERY_CREATE_LOGDIR -eq 1 ]; then
create_default_dir "$CELERYD_LOG_DIR"
fi
if [ $CELERY_CREATE_RUNDIR -eq 1 ]; then
create_default_dir "$CELERYD_PID_DIR"
fi
}
create_paths() {
create_default_dir "$CELERYD_LOG_DIR"
create_default_dir "$CELERYD_PID_DIR"
}
export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"
_get_pidfiles () {
# note: multi < 3.1.14 output to stderr, not stdout, hence the redirect.
${CELERYD_MULTI} expand "${CELERYD_PID_FILE}" ${CELERYD_NODES} 2>&1
}
_get_pids() {
found_pids=0
my_exitcode=0
for pidfile in $(_get_pidfiles); do
local pid=`cat "$pidfile"`
local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
echo "bad pid file ($pidfile)"
one_failed=true
my_exitcode=1
else
found_pids=1
echo "$pid"
fi
done
if [ $found_pids -eq 0 ]; then
echo "${SCRIPT_NAME}: All nodes down"
exit $my_exitcode
fi
}
_chuid () {
su "$CELERYD_USER" -c "$CELERYD_MULTI $*"
}
start_workers () {
if [ ! -z "$CELERYD_ULIMIT" ]; then
ulimit $CELERYD_ULIMIT
fi
_chuid $* start $CELERYD_NODES $DAEMON_OPTS \
--pidfile="$CELERYD_PID_FILE" \
--logfile="$CELERYD_LOG_FILE" \
--loglevel="$CELERYD_LOG_LEVEL" \
$CELERY_APP_ARG \
$CELERYD_OPTS
}
dryrun () {
(C_FAKEFORK=1 start_workers --verbose)
}
stop_workers () {
_chuid stopwait $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}
restart_workers () {
_chuid restart $CELERYD_NODES $DAEMON_OPTS \
--pidfile="$CELERYD_PID_FILE" \
--logfile="$CELERYD_LOG_FILE" \
--loglevel="$CELERYD_LOG_LEVEL" \
$CELERY_APP_ARG \
$CELERYD_OPTS
}
kill_workers() {
_chuid kill $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}
restart_workers_graceful () {
echo "WARNING: Use with caution in production"
echo "The workers will attempt to restart, but they may not be able to."
local worker_pids=
worker_pids=`_get_pids`
[ "$one_failed" ] && exit 1
for worker_pid in $worker_pids; do
local failed=
kill -HUP $worker_pid 2> /dev/null || failed=true
if [ "$failed" ]; then
echo "${SCRIPT_NAME} worker (pid $worker_pid) could not be restarted"
one_failed=true
else
echo "${SCRIPT_NAME} worker (pid $worker_pid) received SIGHUP"
fi
done
[ "$one_failed" ] && exit 1 || exit 0
}
check_status () {
my_exitcode=0
found_pids=0
local one_failed=
for pidfile in $(_get_pidfiles); do
if [ ! -r $pidfile ]; then
echo "${SCRIPT_NAME} down: no pidfiles found"
one_failed=true
break
fi
local node=`basename "$pidfile" .pid`
local pid=`cat "$pidfile"`
local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
echo "bad pid file ($pidfile)"
one_failed=true
else
local failed=
kill -0 $pid 2> /dev/null || failed=true
if [ "$failed" ]; then
echo "${SCRIPT_NAME} (node $node) (pid $pid) is down, but pidfile exists!"
one_failed=true
else
echo "${SCRIPT_NAME} (node $node) (pid $pid) is up..."
fi
fi
done
[ "$one_failed" ] && exit 1 || exit 0
}
case "$1" in
start)
check_dev_null
check_paths
start_workers
;;
stop)
check_dev_null
check_paths
stop_workers
;;
reload|force-reload)
echo "Use restart"
;;
status)
check_status
;;
restart)
check_dev_null
check_paths
restart_workers
;;
graceful)
check_dev_null
restart_workers_graceful
;;
kill)
check_dev_null
kill_workers
;;
dryrun)
check_dev_null
dryrun
;;
try-restart)
check_dev_null
check_paths
restart_workers
;;
create-paths)
check_dev_null
create_paths
;;
check-paths)
check_dev_null
check_paths
;;
*)
echo "Usage: /etc/init.d/${SCRIPT_NAME} {start|stop|restart|graceful|kill|dryrun|create-paths}"
exit 64 # EX_USAGE
;;
esac
exit 0
Which is old, very long, and seems to contain nothing I can change to affect the broker used, except the location of the default-values script, CELERY_DEFAULTS=/etc/default/celeryd (confusing name again). I admit I pretty much copied and pasted this script without a full understanding, though I do know how init.d scripts work.
When I run /etc/init.d/celeryd start, the workers start up but ignore the BROKER Django settings pointing to my Redis server, and try to connect to RabbitMQ instead. The log file /var/log/celery/w1.log shows:
[2016-11-30 23:44:51,873: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
So Celery is trying to use RabbitMQ, not Redis. There are other posts complaining of the same problem on Stack Overflow, but none are resolved (as far as I can tell). I put djcelery in INSTALLED_APPS, as it seemed to make the celeryd_multi management command available, but I don't want to use celery beat, and the documentation says this is no longer necessary. I have my own queue set up to run management commands, and I have had too many problems setting up celerybeat in the past.
I have got the thing working by running sudo -u jimmy /srv/intro/bin/celery -A intro worker &, and this works and uses the correct Redis queue (does anyone know why it is called a broker?), but it won't restart on a server power cycle, does not write to the log files, and I just don't feel this is a clean way to run celery workers.
I don't really want to use /etc/init.d scripts, as this is the old way of doing things; upstart came and went as its replacement, and now systemd is the supported way of doing this (please correct me if I am wrong). There is no mention of these methods in the official documentation:
http://docs.celeryproject.org/en/v4.0.0/userguide/daemonizing.html#init-script-celeryd
which makes me think that celery is no longer being supported, and perhaps there is a better-maintained way of doing this. It is a wonder it has not been built into the core.
I did find this
https://github.com/celery/celery/blob/3.1/extra/supervisord/supervisord.conf
but there is no mention of the broker in the config files, and I doubt that this will help me with Redis.
How do I get Celery running as a daemon to start automatically on reboot, and use Redis as a message queue, or is my only way of using Celery for asynchronous running of functions in Django to use the RabbitMQ message queue?
To ensure Celery loads the correct broker, add the broker parameter to the Celery class:
app = Celery('intro', broker=settings.BROKER_URL)
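A quick way to confirm the change took effect, reusing the virtualenv path and user from the question: the worker's startup banner prints the transport it is actually using.
sudo -u jimmy /srv/intro/bin/celery -A intro worker -l info
# the banner's "transport:" line should now read
#   redis://127.0.0.1:6379/0
# instead of amqp://guest:**@127.0.0.1:5672//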
Reference:
http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html#application
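On the daemonization half of the question: a systemd unit can replace the init.d script. The sketch below is only an illustration assembled from the paths, user, and node name in the question, not Celery's official unit file; Celery's repository ships reference systemd files under extra/systemd that are worth comparing against.
# /etc/systemd/system/celery.service -- sketch only
[Unit]
Description=Celery worker for intro
# After= assumes Debian's redis-server unit name
After=network.target redis-server.service

[Service]
Type=forking
User=jimmy
Group=jimmy
RuntimeDirectory=celery
WorkingDirectory=/srv/intro/intro
ExecStart=/srv/intro/bin/celery multi start w1 -A intro --pidfile=/run/celery/w1.pid --logfile=/var/log/celery/w1.log --loglevel=INFO
ExecStop=/srv/intro/bin/celery multi stopwait w1 --pidfile=/run/celery/w1.pid

[Install]
WantedBy=multi-user.target
After systemctl daemon-reload, systemctl enable celery makes the workers start on boot.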
I wrote the command below, which copies the id_dsa.pub file to the other server as part of my auto-login feature. But every time, the message below comes up on the console:
spawn scp -o StrictHostKeyChecking=no /opt/mgtservices/.ssh/id_dsa.pub root@12.43.22.47:/root/.ssh/id_dsa.pub
Password:
Password:
Below is the script I wrote for this:
function sshkeygenerate()
{
    if ! [ -f $HOME/.ssh/id_dsa.pub ]; then
        expect -c"
            spawn ssh-keygen -t dsa -f $HOME/.ssh/id_dsa
            expect y/n { send y\r ; exp_continue }
            expect passphrase): { send \r ; exp_continue }
            expect again: { send \r ; exp_continue }
            spawn chmod 700 $HOME/.ssh && chmod 700 $HOME/.ssh/*
            exit"
    fi
    expect -c"
        spawn scp -o StrictHostKeyChecking=no $HOME/.ssh/id_dsa.pub root@12.43.22.47:/root/.ssh/id_dsa.pub
        expect *assword: { send $ROOTPWD\r }
        expect yes/no { send yes\r ; exp_continue }
        spawn ssh -o StrictHostKeyChecking=no root@12.43.22.47 \"chmod 755 /root/.ssh/authorized_keys\"
        expect *assword: { send $ROOTPWD\r }
        expect yes/no { send yes\r ; exp_continue }
        spawn ssh -o StrictHostKeyChecking=no root@12.43.22.47 \"cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys\"
        expect *assword: { send $ROOTPWD\r }
        expect yes/no { send yes\r ; exp_continue }
        sleep 1
        exit"
}
You should first set up passwordless SSH to the destination server; then you won't need to enter the password when you do the scp.
Assuming 192.168.0.11 is the destination machine:
1) ssh-keygen -t rsa
2) ssh sheena@192.168.0.11 mkdir -p .ssh
3) cat .ssh/id_rsa.pub | ssh sheena@192.168.0.11 'cat >> .ssh/authorized_keys'
4) ssh sheena@192.168.0.11 "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"
Link for reference:
http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
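Where it is available, the same steps can usually be collapsed into a single command; a sketch, assuming the same user and host as above:
# copies the existing public key into the remote authorized_keys
# and creates ~/.ssh on the remote side if needed
ssh-copy-id sheena@192.168.0.11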
This is the code; the idea is to back up a site on the days established in the array (in this test I have put all 7 days):
#!/bin/bash
### Setup Environment ###
DIRS="INFO"
BACKUP=/tmp/backup.$$
NOW=$(date +"%d-%m-%Y")
INCFILE="/root/tar-inc-backup.dat"
DAY=$(date +"%a")
FULLBACKUP="Sun"
DAYSOFBACKUP=( Mon Tue Wed Thu Fri Sat Sun)
### MySQL Config ###
MUSER="INFO"
MPASS="INFO"
MHOST="INFO"
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
GZIP="$(which gzip)"
### FTP Config ###
FTPD="/"
FTPU="INFO"
FTPP="INFO"
FTPS="INFO"
### Email Config ###
EMAILID="INFO"
### FS Backup ###
[ ! -d $BACKUP ] && mkdir -p $BACKUP || :
### Determine which backup to run ###
for day in ${DAYSOFBACKUP[@]}
do
    if [ $day == "$FULLBACKUP" ]; then
        i=$(date +"%Hh%Mm%Ss")
        FTPD="/full"
        FILE="fs-full-$NOW.tar.gz"
        tar -zcvf $BACKUP/$FILE $DIRS
    else
        i=$(date +"%Hh%Mm%Ss")
        FILE="fs-i-$NOW-$i.tar.gz"
        sudo tar -g $INCFILE -zcvf $BACKUP/$FILE $DIRS
    fi
done
### Start MySQL Backup ###
# Get all database names
DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
for db in $DBS
do
    if [ "$db" == "bricalia_tienda" ]; then
        FILE=$BACKUP/mysql-$db.$NOW-$i.gz
        $MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
    fi
done
### FTP Backup to Remote Server ###
#Start FTP backup using ncftp
ftp $FTPS $MUSER $MPASS
bin
sudo mkdir $FTPD
sudo mkdir $FTPD/$NOW
cd $FTPD/$NOW
lcd $BACKUP
mput *
quit
### Backup Fail Check ###
if [ "$?" == "0" ]; then
rm -f $BACKUP/*
else
T=/tmp/backup.fail
echo "Date: $(date)">$T
echo "Hostname: $(hostname)" >>$T
echo "Backup failed" >>$T
mail -s "BACKUP FAILED" "$EMAILID" <$T
rm -f $T
fi
I have an error in the "Determine which backup to run" if clause. It also has a problem at the last line: unexpected end of file.
The script is explained here: http://piratecove.org/website-backup-script/
Looks like, immediately below that comment you reference, there is a duplicated line. The if check:
### Determine which backup to run ###
if [ "$DAY" == "$FULLBACKUP" ]; then
if [ $day == "$FULLBACKUP" ]; then
I would guess removing the second if check would help.
After the "fix", there's another issue:
### Determine which backup to run ###
for day in ${DAYSOFBACKUP[@]} do
    if [ $day == "$FULLBACKUP" ]; then
        i=$(date +"%Hh%Mm%Ss")
        FTPD="/full"
        FILE="fs-full-$NOW.tar.gz"
        tar -zcvf $BACKUP/$FILE $DIRS
    else
        i=$(date +"%Hh%Mm%Ss")
        FILE="fs-i-$NOW-$i.tar.gz"
        tar -g $INCFILE -zcvf $BACKUP/$FILE $DIRS
    fi
This for loop isn't right, try this:
### Determine which backup to run ###
for day in ${DAYSOFBACKUP[@]}
do
    if [ $day == "$FULLBACKUP" ]; then
        i=$(date +"%Hh%Mm%Ss")
        FTPD="/full"
        FILE="fs-full-$NOW.tar.gz"
        tar -zcvf $BACKUP/$FILE $DIRS
    else
        i=$(date +"%Hh%Mm%Ss")
        FILE="fs-i-$NOW-$i.tar.gz"
        tar -g $INCFILE -zcvf $BACKUP/$FILE $DIRS
    fi
done
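One more observation, offered as a guess at the intent: as written, the loop runs a backup once per entry in DAYSOFBACKUP on every execution, rather than checking whether today is one of the configured days. If the goal is "full backup on $FULLBACKUP, incremental on the other configured days", a membership test on $DAY is probably closer; a sketch using the variables already defined in the script (RUN_TODAY is a new helper variable):
### Determine which backup to run ###
RUN_TODAY=no
for day in "${DAYSOFBACKUP[@]}"; do
    # $DAY is today's abbreviated weekday name, set earlier in the script
    [ "$day" == "$DAY" ] && RUN_TODAY=yes
done

if [ "$RUN_TODAY" == "yes" ]; then
    i=$(date +"%Hh%Mm%Ss")
    if [ "$DAY" == "$FULLBACKUP" ]; then
        FTPD="/full"
        FILE="fs-full-$NOW.tar.gz"
        tar -zcvf $BACKUP/$FILE $DIRS
    else
        FILE="fs-i-$NOW-$i.tar.gz"
        tar -g $INCFILE -zcvf $BACKUP/$FILE $DIRS
    fi
fi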