Modify Backup Script to Run 4x/week Instead of Daily

I'm looking at modifying a backup script that has been set up for me on my server. The script currently runs each morning to back up all of my domains under the /var/www/vhosts/ directory, and I'd like to have it run only four times per week (Sun, Tue, Thu, Sat) instead of daily, if possible. I'm relatively new to the scripting language/commands and was wondering if someone might be able to help me with this? Here is the current script:
umask 0077
BPATH="/disk2/backups/vhosts_backups/`date +%w`"
LOG="backup.log"
/bin/rm -rf $BPATH/*
for i in `ls /var/www/vhosts`
do
tar czf $BPATH/$i.tgz -C /var/www/vhosts $i 2>>$BPATH/backup.log
done
Thank you,
Jason

To answer my own question (in case it could benefit anyone else), it turns out that the backup script was scheduled through crontab, and that's what needed the adjustment. I did crontab -e and modified the 5th field below from an * to "0,2,4,6" (for Sun, Tue, Thu, Sat).
5 1 * * 0,2,4,6 /root/scripts/vhosts_backup.sh
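For anyone else new to crontab, the field layout of an entry like that is shown below; the day-of-week list is the fifth field, which is why that was the one to change:
# min  hour  day-of-month  month  day-of-week (0 = Sunday)  command
  5    1     *             *      0,2,4,6                   /root/scripts/vhosts_backup.sh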

Related

What is the difference between the permissions tags -rwxr-xr-x and -rwxrwxrwx?

Before trying to assemble sequence data, I get a file size estimate for my raw READ1/READ2 files by running the command ls -l -h from the directory the files are in. The output looks something like this:
-rwxrwxrwx@ 1 catharus2021 staff 86M Jun 11 15:03 pluvialis-dominica_JJW362-READ1.fastq.gz
-rwxrwxrwx@ 1 catharus2021 staff 84M Jun 11 15:03 pluvialis-dominica_JJW362-READ2.fastq.gz
For a previous run using the identical command, but a different batch of data, the output was as such:
-rwxr-xr-x 1 catharus2021 staff 44M Mar 16 2018 lagopus_lagopus_alascensis_JJW1970_READ1.fastq.gz
-rwxr-xr-x 1 catharus2021 staff 52M Mar 16 2018 lagopus_lagopus_alascensis_JJW1970_READ2.fastq.gz
It doesn't seem to be affecting any downstream commands, but does anyone know why the strings at the very beginning (-rwxrwxrwx@ vs. -rwxr-xr-x) are different? I assume that they're permission flags, but Google has been less than informative when I try to type those in and search.
Thanks in advance for your time.
The string describes who can access a file and in which way. It is ordered:
owner - group - world
rwxr-xr-x
owner can read, write, and execute
group can only read and execute
world can only read and execute
This prevents other people from overwriting your data. If you change it to
rwxrwxrwx
everybody can overwrite your data.
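If you wanted to put the newer files back to that stricter pattern, chmod's numeric modes map directly onto the three triplets (7 = rwx, 5 = r-x); the filename here is just the one from the question:
chmod 755 pluvialis-dominica_JJW362-READ1.fastq.gz   # results in -rwxr-xr-x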

BigQuery: make a quick backup with many tables

Currently I'm copying tables with something like this:
#!/bin/sh
export SOURCE_DATASET="BQPROJECTID:BQSOURCEDATASET"
export DEST_PREFIX="TARGETBQPROJECTID:TARGETBQDATASET._YOUR_PREFIX"
for f in `bq ls -n TOTAL_NUMBER_OF_TABLES $SOURCE_DATASET | grep TABLE | awk '{print $1}'`
do
export CLONE_CMD="bq --nosync cp $SOURCE_DATASET.$f $DEST_PREFIX$f"
echo $CLONE_CMD
echo `$CLONE_CMD`
done
(script from here), but it takes ~20 min (because of ~600 tables). Maybe there is another way (preferably faster) to make a backup?
As a suggestion, you may use Scheduling queries to schedule recurring queries in BigQuery. With this option you will be able to schedule your backups with a daily, weekly, monthly, or custom periodicity, leaving the backups of your tables for nights or weekends. You can find more information about it in the following link.
But remember, the time the backup takes will depend on the size of your tables.
Well, since you mentioned that Scheduling queries is not an option for you, another option you can try is running your cp command in the background. Because you are working with a for loop, you currently wait for each process to finish; instead, you can run multiple processes in the background to get better performance. I made a simple script to test it and it works! First I tested without background processes:
#!/bin/bash
start_global=$(date +'%s')
for ((i=0;i<100;i++))
do
  start=$(date +'%s')
  bq --location=US cp -a -f -n [SOURCE_PROJECT_ID]:[DATASET].[TABLE] \
    [TARGET_PROJECT_ID]:[DATASET].[TABLE]
  echo "It took $(($(date +'%s') - $start)) seconds for iteration number $i"
done
echo "It took $(($(date +'%s') - $start_global)) seconds for the entire process"
It took me around 5 seconds per table copied (about 160 MB each), so I spent roughly 10 minutes on that process. I then modified the script to use background processes:
#!/bin/bash
start_global=$(date +'%s')
for ((i=0;i<100;i++))
do
  bq --location=US cp -a -f -n [SOURCE_PROJECT_ID]:[DATASET].[TABLE] \
    [TARGET_PROJECT_ID]:[DATASET].[TABLE] &
  pid_1=$! # Get background process id
done
if wait $pid_1
then
  echo -e "Processes termination successful"
else
  echo -e "Error"
fi
echo "It took $(($(date +'%s') - $start_global)) seconds for the entire process"
This way it only took me 3 minutes to finish the execution.
You may adapt this idea to your implementation; just keep in mind the quotas for copy jobs, which you can check here.
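A rough sketch of how the background idea could be folded back into the listing loop from the question (the dataset names and prefix are the question's placeholders; a bare wait blocks until every background copy has returned):
#!/bin/bash
export SOURCE_DATASET="BQPROJECTID:BQSOURCEDATASET"
export DEST_PREFIX="TARGETBQPROJECTID:TARGETBQDATASET._YOUR_PREFIX"
# Start one copy job per table in the background
for f in $(bq ls -n TOTAL_NUMBER_OF_TABLES $SOURCE_DATASET | grep TABLE | awk '{print $1}')
do
  bq cp -f "$SOURCE_DATASET.$f" "$DEST_PREFIX$f" &   # -f overwrites an existing destination table
done
wait   # block until all background copies have finished
echo "All copy jobs finished"
Just keep the copy-job quota in mind if you launch hundreds of these at once.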

Set a variable to the output of a command in a shell script

I'm running a system on UTC time, but I'd like to write to log files in local time so that reading them later and identifying the influence of external factors on issues takes less brain work.
The command to get the local date (e.g. Sydney, Australia) when running on UTC is
TZ=Australia/Sydney date
This would return
Sat 15 Sep 17:19:28 AEST 2018
A sample of my script is below. For now, please ignore the fact it is not the best script for the job it appears to be trying to do. My issue is the time being recorded in the log file. What this script does each loop is write the same date/time into the log file - the local time when the script was started.
#!/bin/sh
localdatetime=$(TZ=Australia/Sydney date)
while true
do
nc -zw5 192.168.0.199 # IP of router
if [[ $? -eq 0 ]]; then
status="up"
else
status="down"
fi
echo "$localdatetime The router is now $status" >> /home/pi/userX/routerStatus.log
sleep 10
done
What I want is for the current local time to be stored into the log file each loop so that I know from reading the log file when the router was up or down. Is there a way to do this using a variable?
Thanks in advance for any good advice.
What I did in the end was the obvious solution: update the local time variable immediately before trying to write it to the log file.
For example:
#!/bin/sh
while true
do
nc -zw5 192.168.0.199 # IP of router
if [[ $? -eq 0 ]]; then
status="up"
else
status="down"
fi
localdatetime=$(TZ=Australia/Sydney date)
echo "$localdatetime The router is now $status" >> /home/pi/userX/routerStatus.log
sleep 10
done
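An equivalent sketch, if you'd rather not keep a separate variable at all, is to expand the command substitution directly inside the echo; the timestamp is then taken at the moment the line is written:
echo "$(TZ=Australia/Sydney date) The router is now $status" >> /home/pi/userX/routerStatus.log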

Automating SQL script

I am new to the programming world. I have a SQL script which needs to be automated. The automation required is as follows:
1) The script should run every Sunday
2) Automatically dump the results into DUMP_YYYYMMDDHH24MISS.txt
3) The result set is tar-gzipped
4) Upload to an SFTP URL with the provided username and password
I am using:
UNIX,
Vertica DB
Can the gurus here please help?
This is really 4 questions and should probably be asked as such. To answer in the current format, though:
1) Schedule a Task Automatically - Crontab
In the terminal, type crontab -e.
If you want something every Sunday at 1am, add the following line:
0 1 * * 0 /path/to/script/script.sh
This will execute the script at 1am every Sunday.
2) Setting the output of the command
I'm only familiar with Oracle; the format is probably similar. In order to get the filename as you want it, you'd use the date command as follows (this is how I would do it with Oracle):
d=$(date +%Y%m%d%H%M%S)
var=$(sqlplus -s / as blahblahblah <<EOF
select * from stuff;
exit
EOF
)
file_name=DUMP_${d}.txt
echo "${var}" >> ${file_name}
Note that your date command may be different; the man page for date will tell you which format specifiers you need to get the date formatted as you like.
3) Tarring and gzipping the output
tar -czvf ${file_name}.tar.gz ${file_name}
4) Send over SFTP
You'd have to set up authentication for the SFTP; that is beyond the scope of what anyone can answer without more details. Once you have the machines set up to authenticate, you would do:
sftp username@server <<EOF
put ${file_name}.tar.gz
EOF
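Since the question mentions Vertica rather than Oracle, here is a rough end-to-end sketch using Vertica's vsql client; the connection flags mirror psql's, and the host, user, database, and query are placeholders to adapt to your setup (password handling, e.g. vsql's -w option or an environment variable, is left out):
#!/bin/sh
d=$(date +%Y%m%d%H%M%S)
file_name=DUMP_${d}.txt
# Run the query and write the result set to the dump file
vsql -h your.vertica.host -U your_user -d your_database \
     -c "SELECT * FROM your_table;" -o "${file_name}"
# Compress the result set
tar -czvf "${file_name}.tar.gz" "${file_name}"
# Upload over SFTP (assumes key-based authentication is already configured)
sftp username@sftp.example.com <<EOF
put ${file_name}.tar.gz
EOF
Scheduling the whole script for Sunday mornings is then just the crontab entry from step 1.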

How do I create a cron job to run a Postgres SQL function?

I assume that all I need to do is to:
Create an SQL file, e.g. nameofsqlfile.sql, with the contents:
perform proc_my_sql_function();
Execute this as a cron job.
However, I don't know the commands I'd need to write to get this cron job to execute the Postgres function for a specified host, port, database, user, and password...?
You just need to think of a cron job as running a shell command at a specified time or day.
So your first job is to work out how to run your shell command.
psql --host host.example.com --port 12345 --dbname nameofdatabase --username postgres < my.sql
You can then just add this to your crontab (I recommend you use crontab -e to avoid breaking things):
# runs your command at 00:00 every day
#
# min hour wday month mday command-to-run
0 0 * * * psql --host host.example.com --port 12345 --dbname nameofdatabase < my.sql
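The question also asks about the password; one common approach (a sketch, not the only option) is a ~/.pgpass file, which psql reads automatically, so nothing secret has to appear in the crontab line itself:
# ~/.pgpass (must be chmod 600), one line per connection:
# hostname:port:database:username:password
host.example.com:12345:nameofdatabase:postgres:yourpassword
Alternatively you can set PGPASSWORD inline in the crontab entry, but then anyone who can read your crontab can read the password.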
In most cases you can put all of the SQL source in a shell 'here document'. The nice thing about here documents is that the shell's ${MY_VAR} variables are expanded even within single quotes, e.g.:
#!/bin/sh
THE_DATABASE=personnel
MY_TABLE=employee
THE_DATE_VARIABLE_NAME=hire_date
THE_MONTH=10
THE_DAY=01
psql ${THE_DATABASE} <<THE_END
SELECT COUNT(*) FROM ${MY_TABLE}
WHERE ${THE_DATE_VARIABLE_NAME} >= '2011-${THE_MONTH}-${THE_DAY}'::DATE;
THE_END
YMMV
Check these:
http://archives.postgresql.org/pgsql-admin/2000-10/msg00026.php
and
http://www.dbforums.com/postgresql/340741-cron-jobs-postgresql.html
or you can just create a bash script containing your code and call it from crontab.
For PostgreSQL 10 and above you can use pg_cron. As stated in its README.md,
pg_cron is a simple cron-based job scheduler for PostgreSQL (10 or higher) that runs inside the database as an extension. It uses the same syntax as regular cron, but it allows you to schedule PostgreSQL commands directly from the database:
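For example, a minimal sketch (cron.schedule is pg_cron's scheduling function; the function name is the placeholder from the question) that calls the function every Sunday at 01:00:
-- once the extension is installed: CREATE EXTENSION pg_cron;
SELECT cron.schedule('0 1 * * 0', $$SELECT proc_my_sql_function()$$);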