How do I create a cron job to run a Postgres SQL function?

I assume that all I need to do is to:
Create an SQL file, e.g. nameofsqlfile.sql, with the contents:
perform proc_my_sql_funtion();
Execute this as a cron job.
However, I don't know the commands I'd need to write to get this cron job to execute the Postgres function for a specified host, port, database, user and password...?

You just need to think of cronjob as running a shell command at a specified time or day.
So your first job is to work out how to run your shell command.
psql --host host.example.com --port 12345 --dbname nameofdatabase --username postgres < my.sql
You can then just add this to your crontab (I recommend you use crontab -e to avoid breaking things)
# runs your command at 00:00 every day
#
# min hour wday month mday command-to-run
0 0 * * * psql --host host.example.com --port 12345 --dbname nameofdatabase < my.sql
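One thing the question also asks about is the password: psql won't prompt for one under cron, so a common approach is to put the credentials in a ~/.pgpass file for the user the cron job runs as (the values below are placeholders, and the file must be chmod 600):
# ~/.pgpass format: hostname:port:database:username:password
host.example.com:12345:nameofdatabase:postgres:secret
Alternatively you can set the PGPASSWORD environment variable on the crontab line, though that exposes the password to anyone who can read your crontab.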

In most cases you can put all of the SQL source in a shell 'here document'. The nice thing about here documents is that shell variables like ${MY_VAR} are expanded even inside single quotes, e.g.:
#!/bin/sh
THE_DATABASE=personnel
MY_TABLE=employee
THE_DATE_VARIABLE_NAME=hire_date
THE_MONTH=10
THE_DAY=01
psql ${THE_DATABASE} <<THE_END
SELECT COUNT(*) FROM ${MY_TABLE}
WHERE ${THE_DATE_VARIABLE_NAME} >= '2011-${THE_MONTH}-${THE_DAY}'::DATE;
THE_END
YMMV

Check these:
http://archives.postgresql.org/pgsql-admin/2000-10/msg00026.php
and
http://www.dbforums.com/postgresql/340741-cron-jobs-postgresql.html
Or you can just create a bash script containing your code and call it from crontab.

For PostgreSQL 10 and above you can use pg_cron. As stated in its README.md,
pg_cron is a simple cron-based job scheduler for PostgreSQL (10 or higher) that runs inside the database as an extension. It uses the same syntax as regular cron, but it allows you to schedule PostgreSQL commands directly from the database:
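As a rough sketch of how that looks (assuming pg_cron is already installed and listed in shared_preload_libraries; the function name is just the one from the question above):
psql -d nameofdatabase -c "CREATE EXTENSION IF NOT EXISTS pg_cron;"
psql -d nameofdatabase -c "SELECT cron.schedule('0 0 * * *', 'SELECT proc_my_sql_funtion()');"
By default pg_cron runs its jobs in the database named by the cron.database_name setting, so check that it matches the database where your function lives.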

Related

Automating SQL script

I am new to the programming world. I have a SQL script which needs to be automated. The automation required is as follows:
1) Script should run every Sunday
2) Automatically dump the results into DUMP_YYYYMMDDHH24MISS.txt
3) Result set is tar-gzipped
4) Upload to SFTP URL with provided username and password.
I am using :
UNIX,
Vertica DB
Can the Gurus here please help ?
This is really 4 questions and should probably be asked as such. To answer in the current format though:
1) Schedule a Task Automatically - Crontab
In the terminal, type crontab -e.
If you want something every Sunday at 1am, add the following line:
0 1 * * 0 /path/to/script/script.sh
This will execute the script at 1am every Sunday.
2) Setting the output of the command
I'm only familiar with Oracle, but the format is probably similar. In order to get the filename as you want it, you'd use the date command as follows (this is how I would do it with Oracle):
d=$(date +%Y%m%d%H%M%S)
var=$(sqlplus -s / as blahblahblah <<EOF
select * from stuff;
exit
EOF
)
file_name=DUMP_${d}.txt
echo "${var}" >> ${file_name}
Note that your date command is probably different, if you do a man page on date it will tell you which parameters you'd need to get the date formatted as you like.
3) Tarring the output
tar -czvf ${file_name}.tar.gz ${file_name}
4) Send over SFTP
You'd have to authenticate the sftp, that is beyond the scope of what anyone can answer without more details. Once you have the machines setup to authenticate, you would do:
sftp username@server <<EOF
put ${file_name}
EOF
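Putting the four pieces together, a rough end-to-end sketch might look like this. It assumes Vertica's vsql command-line client and key-based SFTP authentication; the flags, paths and credentials are placeholders to adjust for your environment.
#!/bin/sh
# Rough sketch only -- adjust client flags, paths and credentials for your setup
d=$(date +%Y%m%d%H%M%S)
file_name=DUMP_${d}.txt
# 1+2) run the SQL and capture the result set (vsql flags are an assumption; check your client)
vsql -h dbhost -U myuser -w mypassword -f /path/to/report.sql -o ${file_name}
# 3) compress the result
tar -czf ${file_name}.tar.gz ${file_name}
# 4) upload over SFTP (assumes key-based authentication is already set up)
sftp sftpuser@sftp.example.com <<EOF
put ${file_name}.tar.gz
EOF
You would then schedule this one script from cron, e.g. with the 0 1 * * 0 entry shown above.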

How to create a daily dump in MySQL?

I want to make a daily dump of all the databases in MySQL using the Event Scheduler, and so far I have this query to create the event:
DELIMITER $$
CREATE EVENT `DailyBackup`
ON SCHEDULE EVERY 1 DAY STARTS '2015-11-09 00:00:01'
ON COMPLETION NOT PRESERVE ENABLE
DO
BEGIN
mysqldump -user=MYUSER -password=MYPASS all-databases > CONCAT('C:\Users\User\Documents\dumps\Dump',DATE_FORMAT(NOW(),%Y %m %d)).sql
END $$
DELIMITER ;
The problem is that MySQL seems to not recognize the command 'mysqldump' and shows me an error like this: Syntax error: missing 'colon'.
I am not an expert in SQL and I've tried to find the solution, but I couldn't, hope someone can help me with this.
Edit:
Help to make this statement a cron task
For Windows, create a .bat file with the needed command, and then create a scheduled task that runs that .bat file according to a schedule.
Create a .bat file in this fashion, replacing your username, password, and database name as appropriate:
mysqldump --opt --host=localhost --user=root --password=yourpassword dbname > C:\some_folder\some_file.sql
Then go to the start menu, control panel, administrative tools, task scheduler. Hit action > create task. Go to the actions tab, hit new, browse to the .bat file and add it to the task. Then go to the triggers tab, hit new, and define your daily schedule. Refer to http://windows.microsoft.com/en-US/windows/schedule-task
You might want to use a tool like 7zip to compress your backups all in the same command (7zip can be invoked from the command line). An example with 7zip installed would look like:
mysqldump --opt --host=localhost --user=root --password=yourpassword dbname | 7z a -si C:\some_folder\some_file.7z
I use this to include the date and time in the filename:
set _my_datetime=%date:~-4%_%date:~4,2%_%date:~7,2%_%time:~0,2%_%time:~3,2%_%time:~6,2%_%time:~9,2%_
set _my_datetime=%_my_datetime: =_%
set _my_datetime=%_my_datetime::=%
set _my_datetime=%_my_datetime:/=_%
set _my_datetime=%_my_datetime:.=_%
echo %_my_datetime%
mysqldump --opt --host=localhost --user=root --password=yourpassword dbname | 7z a -si C:\some_folder\backup_with_datetime_%_my_datetime%_dbname.7z
@Drew means to use a cron job. To add a cron job, just start the crontab editor using this command:
crontab -e
then add a new entry at the end like this:
0 0 * * * mysqldump -u username -ppassword databasename > /path/to/file.sql
This will perform a database dump every day at 00:00.
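If you want the date in the filename, keep in mind that % is a special character in crontab entries and has to be escaped with a backslash; a minimal sketch:
0 0 * * * mysqldump -u username -ppassword databasename > /path/to/dump_$(date +\%Y\%m\%d).sql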
Yes, program the scheduler to run something like this:
C:/path/to/mysqldump.exe -u username -ppassword databasename > /path/to/file.sql

Expect script does not work under crontab

I have an expect script which I need to run every 3 mins on my management node to collect tx/rx values for each port attached to a DCX Brocade SAN switch, using the command portperfshow.
Each time I try to use crontab to execute the script every 3 mins, the script does not work!
My expect script starts with #!/usr/bin/expect -f and I am calling the script using the following syntax under cron:
3 * * * * /usr/bin/expect -f /root/portsperfDCX1/collect-all.exp sanswitchhostname
However, when I execute the script (not under cron) it works as expected:
root# ./collect-all.exp sanswitchhostname
works just fine.
Please Please can someone help! Thanks.
The script collect-all.exp is:
#!/usr/bin/expect -f
#Time and Date
set day [timestamp -format %d%m%y]
set time [timestamp -format %H%M]
#logging
set LogDir1 "/FPerf/PortsLogs"
set timeout 5
set ipaddr [lrange $argv 0 0]
set passw "XXXXXXX"
if { $ipaddr == "" } {
puts "Usage: <script.exp> <ip address>\n"
exit 1
}
spawn ssh admin@$ipaddr
expect -re "password"
send "$passw\r"
expect -re "admin"
log_file "$LogDir1/$day-portsperfshow-$time"
send "portperfshow -tx -rx -t 10\r"
expect timeout "\n"
send \003
log_file
send -- "exit\r"
close
I had the same issue, except that my script was ending with
interact
Finally I got it working by replacing it with these two lines:
expect eof
exit
Changing interact to expect eof worked for me!
Needed to remove the exit part, because I had more statements in the bash script after the expect line (calling expect inside a bash script).
There are two key differences between a program that is run normally from a shell and a program that is run from cron:
Cron does not populate (many) environment variables. Notably, TERM is not set and PATH is cut down to a bare minimum, and that's just a small part of the long list of variables from your login environment that will not be defined.
Cron does not set up a current terminal, so /dev/tty doesn't resolve to anything. (Note, programs spawned by Expect will have a current terminal.)
With high probability, any difficulties will come from these, especially the first. To fix, you need to save all your environment variables in an interactive session and use these in your expect script to repopulate the environment. The easiest way is to use this little expect script:
unset -nocomplain ::env(SSH_AUTH_SOCK) ;# This one is session-bound anyway
puts [list array set ::env [array get ::env]]
That will write out a single very long line which you want to put near the top of your script (or at least before the first spawn). Then see if that works.
Jobs run by cron are not considered login shells, and thus don't source your .bashrc, .bash_profile, etc.
If you want that behavior, you need to add it explicitly to the crontab entry like so:
$ crontab -l
0 13 * * * bash -c '. .bash_profile; etc ...'
$
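Another option, since most cron implementations let you set environment variables at the top of the crontab, is to give the job a sane environment there. Note also that 3 * * * * runs once an hour at minute 3; to run every 3 minutes you want */3. A sketch (the paths are the ones from the question):
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
HOME=/root
*/3 * * * * /usr/bin/expect -f /root/portsperfDCX1/collect-all.exp sanswitchhostname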

Execute SQL from file in bash

I'm trying to load SQL from a file in bash and execute it. The SQL file needs to stay generic, meaning it cannot be altered just to make it easy to run from bash (e.g. escaping special characters like *).
So I have run into some problems:
If I read my sample.sql
SELECT * FROM SAMPLETABLE
to a variable with
ab=`cat sample.sql`
and execute it
db2 `echo $ab`
I receive an SQL error, because the unquoted * has been expanded by the shell into all the filenames in the current directory.
The easy solution would be to replace "*" with "\*", but I cannot do this because the file needs to stay usable in programs like DB Visualizer etc.
Could someone give me hint in the right direction?
The DB2 command line processor has options that accept a filename as input, so you shouldn't need to load statements from a text file into a shell variable.
This command will execute all SQL statements in the file, with newline treated as the statement terminator:
db2 -f sample.sql
This command will execute all SQL statements in the file, with semicolon treated as the statement terminator:
db2 -t -f sample.sql
Other useful CLP flags are:
-x : Suppress the column headings
-v : Echo the statement text immediately before execution
-z : Tee a copy of all CLP output to the filename immediately following this flag
Redirect stdin from the file.
db2 < sample.sql
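If you do want to keep the statement in a shell variable, double-quoting the variable stops the shell from expanding the *, so the original approach works with a small change:
ab=$(cat sample.sql)
db2 "$ab"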
In case you have a variable used in your script and want it replaced by the shell before the SQL is executed in DB2, use this approach:
Contents of File.sql:
cat <<xEOF
insert into ${MY_SCHEMA}.${MY_TABLE} values (1,2);
select * from ${MY_SCHEMA}.${MY_TABLE};
xEOF
In command prompt do:
export MY_SCHEMA='STAR'
export MY_TABLE='DIMENSION'
Then you are all good to get it executed in DB2:
sh File.sql | db2 +p -t
The shell will expand the variables and then DB2 will execute the resulting statements.
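Another option, if GNU gettext's envsubst is installed, is to keep File.sql as plain SQL with ${VAR} placeholders (no cat <<xEOF wrapper) and let envsubst do the substitution:
export MY_SCHEMA='STAR'
export MY_TABLE='DIMENSION'
envsubst < File.sql | db2 +p -t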
Hope it helps.

cron script to act as a queue OR a queue for cron?

I'm betting that someone has already solved this and maybe I'm using the wrong search terms for google to tell me the answer, but here is my situation.
I have a script that I want to run, but I want it to run only when scheduled and only one at a time. (can't run the script simultaneously)
Now the sticky part is that say I have a table called "myhappyschedule" which has the data I need and the scheduled time. This table can have multiple scheduled times even at the same time, each one would run this script. So essentially I need a queue of each time the script fires and they all need to wait for each one before it to finish. (sometimes this can take just a minute for the script to execute sometimes its many many minutes)
What I'm thinking about doing is making a script that checks myhappyschedule every 5 min and gathers up those that are scheduled, puts them into a queue where another script can execute each 'job' or occurrence in the queue in order. All of this sounds messy.
To make this longer - I should say that I'm allowing users to schedule things in myhappyschedule and not edit crontab.
What can be done about this? File locks and scripts calling scripts?
add a column exec_status to myhappytable (maybe also time_started and time_finished, see pseudocode)
run the following cron script every x minutes
pseudocode of cron script:
[create/check pid lock (optional, but see "A potential pitfall" below)]
get number of rows from myhappytable where (exec_status == executing_now)
if it is > 0, exit
begin loop
get one row from myhappytable
where (exec_status == not_yet_run) and (scheduled_time <= now)
order by scheduled_time asc
if no such row, exit
set row exec_status to executing_now (maybe set time_started to now)
execute whatever command the row contains
set row exec_status to completed
(maybe also store the command output/return as well, set time_finished to now)
end loop
[delete pid lock file (complementary to the starting pid lock check)]
This way, the script first checks if none of the commands is running, then runs first not-yet run command, until there are no more commands to be run at the given moment. Also, you can see what command is executing by querying the database.
A potential pitfall: if the cron script is killed, a scheduled task will remain in the "executing_now" state. That's what the pid lock at the beginning and end is for: to see if the cron script terminated properly. Pseudocode of the create/check pidlock:
if exists pidlockfile then
check if process id given in file exists
if not exists then
update myhappytable set exec_status = error_cronscript_died_while_executing_this
where exec_status == executing_now
delete pidlockfile
else (previous instance still running)
exit
endif
endif
create pidlockfile containing cron script process id
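As an aside, on Linux the pid lock bookkeeping can be delegated to flock(1) from util-linux, which guarantees only one copy of the cron script runs at a time (the script name below is a placeholder):
*/5 * * * * flock -n /tmp/myhappyschedule.lock /path/to/process_schedule.sh
With -n, a run that finds the lock already held simply exits instead of piling up behind the one that is still running.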
You can use the at(1) command inside your script to schedule its next run. Before it exits, it can check myhappyschedule for the next run time. You don't need cron at all, really.
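For example, near the end of the script you could look up the next scheduled time and hand the script back to at; the table, column names and client below are placeholders (shown with the mysql client, so adapt to whatever database holds myhappyschedule):
# at -t takes a timestamp in [[CC]YY]MMDDhhmm form
next_run=$(mysql -N -e "select date_format(min(scheduled_time), '%Y%m%d%H%i') from myhappyschedule where exec_status = 'not_yet_run'")
echo "/path/to/myscript.sh" | at -t "$next_run"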
I came across this question while researching for a solution to the queuing problem. For the benefit of anyone else searching here is my solution.
Combine this with a cron that starts jobs as they are scheduled (even if they are scheduled to run at the same time) and that solves the problem you described as well.
Problem
At most one instance of the script should be running.
We want to queue up requests to process them as fast as possible.
ie. We need a pipeline to the script.
Solution:
Create a pipeline to any script. Done using a small bash script (further down).
The script can be called as
./pipeline "<any command and arguments go here>"
Example:
./pipeline sleep 10 &
./pipeline shabugabu &
./pipeline single_instance_script some arguments &
./pipeline single_instance_script some other_arguments &
./pipeline "single_instance_script some yet_other_arguments > output.txt" &
..etc
The script creates a new named pipe for each command. So the above will create named pipes: sleep.pipe, shabugabu.pipe, and single_instance_script.pipe
In this case the initial call will start a reader and run single_instance_script with some arguments as arguments. Once the call completes, the reader will grab the next request off the pipe and execute with some other_arguments, complete, grab the next etc...
This script will block requesting processes so call it as a background job (& at the end) or as a detached process with at (at now <<< "./pipeline some_script")
#!/bin/bash -Eue

# Using command name as the pipeline name
pipeline=$(basename $(expr "$1" : '\(^[^[:space:]]*\)')).pipe
is_reader=false

function _pipeline_cleanup {
    if $is_reader; then
        rm -f $pipeline
    fi
    rm -f $pipeline.lock
    exit
}
trap _pipeline_cleanup INT TERM EXIT

# Dispatch/initialization section, critical
lockfile $pipeline.lock
if [[ -p $pipeline ]]
then
    # A reader already exists: hand the command over and exit
    echo "$*" > $pipeline
    exit
fi

# No reader yet: become the reader
is_reader=true
mkfifo $pipeline
echo "$*" > $pipeline &
rm -f $pipeline.lock

# Reader section: execute queued commands one at a time, in order
while read command < $pipeline
do
    echo "$(date) - Executing $command"
    ($command) &> /dev/null
done