How to create a daily dump in MySQL?

I want to make a daily dump of all the databases in MySQL using the Event Scheduler. So far I have this statement to create the event:
DELIMITER $$
CREATE EVENT `DailyBackup`
ON SCHEDULE EVERY 1 DAY STARTS '2015-11-09 00:00:01'
ON COMPLETION NOT PRESERVE ENABLE
DO
BEGIN
mysqldump -user=MYUSER -password=MYPASS all-databases > CONCAT('C:\Users\User\Documents\dumps\Dump',DATE_FORMAT(NOW(),%Y %m %d)).sql
END $$
DELIMITER ;
The problem is that MySQL does not seem to recognize the mysqldump command and shows me an error like this: Syntax error: missing 'colon'.
I am not an expert in SQL and I've tried to find a solution, but I couldn't. I hope someone can help me with this.
Edit:
I would also appreciate help turning this into a cron task.

mysqldump is a command-line utility, not an SQL statement, so it cannot run inside an Event Scheduler event. For Windows, create a .bat file with the needed command instead, and then create a scheduled task that runs that .bat file according to a schedule.
Create a .bat file in this fashion, replacing your username, password, and database name as appropriate:
mysqldump --opt --host=localhost --user=root --password=yourpassword dbname > C:\some_folder\some_file.sql
Then go to the Start menu > Control Panel > Administrative Tools > Task Scheduler. Hit Action > Create Task. Go to the Actions tab, hit New, browse to the .bat file, and add it to the task. Then go to the Triggers tab, hit New, and define your daily schedule. Refer to http://windows.microsoft.com/en-US/windows/schedule-task
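If you prefer the command line to the GUI, the same task can be created with schtasks; this is a sketch assuming the batch file lives at C:\some_folder\backup.bat (a hypothetical path):
schtasks /create /tn "DailyMySQLDump" /tr "C:\some_folder\backup.bat" /sc daily /st 00:00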
You might want to use a tool like 7zip to compress your backups all in the same command (7zip can be invoked from the command line). An example with 7zip installed would look like:
mysqldump --opt --host=localhost --user=root --password=yourpassword dbname | 7z a -si C:\some_folder\some_file.7z
I use this to include the date and time in the filename (note that slicing %date% and %time% like this depends on the Windows locale's date format):
set _my_datetime=%date:~-4%_%date:~4,2%_%date:~7,2%_%time:~0,2%_%time:~3,2%_%time:~6,2%_%time:~9,2%_
set _my_datetime=%_my_datetime: =_%
set _my_datetime=%_my_datetime::=%
set _my_datetime=%_my_datetime:/=_%
set _my_datetime=%_my_datetime:.=_%
echo %_my_datetime%
mysqldump --opt --host=localhost --user=root --password=yourpassword dbname | 7z a -si C:\some_folder\backup_with_datetime_%_my_datetime%_dbname.7z

@Drew means using a cron job. To add a cron job, start editing the crontab with this command:
crontab -e
then add a new entry at the end like this:
0 0 * * * mysqldump -u username -ppassword databasename > /path/to/file.sql
This will perform a database dump every day at 00:00.
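If you also want compression and a dated filename, an entry along these lines should work (a sketch; note that % characters must be escaped as \% inside a crontab):
0 0 * * * mysqldump -u username -ppassword --all-databases | gzip > /path/to/backup_$(date +\%F).sql.gz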

Yes, program the Task Scheduler to run something like this:
C:/path/to/mysqldump.exe -u username -ppassword databasename > /path/to/file.sql

Related

Automating SQL script

I am new to the programming world. I have a SQL script which needs to be automated. The automation required is as follows:
1) The script should run every Sunday
2) Automatically dump the results into DUMP_YYYYMMDDHH24MISS.txt
3) The result set is tar-gzipped
4) Uploaded to an SFTP URL with a provided username and password.
I am using:
UNIX,
Vertica DB
Can the gurus here please help?
This is really 4 questions and should probably be asked as such. To answer in the current format, though:
1) Schedule a Task Automatically - Crontab
In the terminal, type crontab -e.
If you want something every Sunday at 1am, add the following line:
0 1 * * 0 /path/to/script/script.sh
This will execute the script every Sunday at 1am (the fifth field, 0, is the day of the week, with 0 = Sunday).
2) Setting the output of the command
I'm only familiar with Oracle. The format is probably similar. In order to get the filename as you want it, you'd use the date command as follows. (This is how I would do it with Oracle):
d=$(date +%Y%m%d%H%M%S)
var=$(sqlplus -s / as blahblahblah <<EOF
select * from stuff;
exit
EOF
)
file_name=DUMP_${d}.txt
echo "${var}" >> ${file_name}
Note that your date command may behave differently; the man page for date will tell you which format specifiers you need to get the timestamp formatted as you like (here %Y%m%d%H%M%S produces the YYYYMMDDHH24MISS pattern).
3) Tarring the output
tar -czvf ${file_name}.tar.gz ${file_name}
4) Send over SFTP
You'd have to authenticate the sftp, that is beyond the scope of what anyone can answer without more details. Once you have the machines setup to authenticate, you would do:
sftp username@server <<EOF
put ${file_name}
EOF
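Putting the four pieces together for Vertica specifically, a rough sketch might look like this (vsql is Vertica's command-line client; the host, credentials, and query are placeholders, and key-based sftp authentication is assumed):
#!/bin/sh
# 1) run from cron every Sunday; 2) dump with a timestamped name;
# 3) tar-gzip it; 4) push it over sftp
d=$(date +%Y%m%d%H%M%S)
file_name=DUMP_${d}.txt
vsql -h dbhost -U dbuser -w dbpassword -c "select * from stuff;" -o "${file_name}"
tar -czvf "${file_name}.tar.gz" "${file_name}"
sftp sftpuser@sftphost <<EOF
put ${file_name}.tar.gz
EOF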

How do I get a user input and apply it in a sql statement in bash?

I have two scripts. One is named sqlscript.sql and the other is named script.sh. I have all of the queries needed written in my SQL script. They are just a bunch of update statements. For example:
UPDATE xxDev.SYS_PARAMS SET val = 'serverName' WHERE lower(name) = 'enginebaseurl';
I'm running the .sql script IN the .sh script. When the .sh script runs, I want it to prompt the user for a server name, take that input, and substitute it for serverName in the SQL statements.
I'm brand new to both bash scripting and this website, so I hope I'm making sense asking this question. I'm using PuTTY if that makes a difference at all.
Assuming you use MySQL, try something like:
# TODO: prompt user for server name and store it into variable serverName
serverName="get from user"
# the heredoc delimiter must be unquoted so that $serverName is expanded
cat <<EOF | mysql -u user1 -ppasswd -h server1 -P 3306 -D db1
UPDATE xxDev.SYS_PARAMS SET val = '$serverName' WHERE lower(name) = 'enginebaseurl';
EOF
So in this example, you embed the sql script into the .sh so that you don't have to maintain two files.
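A minimal end-to-end sketch of that idea, with the prompt filled in (user1/passwd/server1/db1 are placeholder credentials):
#!/bin/sh
# prompt for the server name, then feed the statement to mysql;
# the unquoted EOF lets the shell expand $serverName inside the heredoc
printf "Enter server name: "
read serverName
mysql -u user1 -ppasswd -h server1 -P 3306 -D db1 <<EOF
UPDATE xxDev.SYS_PARAMS SET val = '$serverName' WHERE lower(name) = 'enginebaseurl';
EOF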
I would probably use a user variable:
SET @val = 'serverName';
UPDATE xxDev.SYS_PARAMS SET val = @val WHERE lower(name) = 'enginebaseurl';
You can split the sqlscript.sql into
set-val.sql
SET @val = 'serverName';
and the actual update statements. Then you can recreate the set-val.sql from your user input:
echo -n "enter server: "
read server
echo "set #val '$server' > set-val.sql
and then you forward both files to mysql:
cat set-val.sql sqlscript.sql | mysql
You should probably use this only for internal things, it seems a little fragile.
I'm going to let you figure out how to pass a shell parameter into your sql command, but here's an incredibly cool way to query the user for the server name. It might even be POSIX compliant.
#!/bin/sh
echo -n "Hit me with that server name: "; read serverName
echo "${serverName}! Outstanding! Pick up \$200 when you pass Go!"

connect to sqlplus only once without writing to a file in a loop

I have a requirement for which I need to write a ksh script that reads command line parameters into arrays and creates DML statements to insert records into an oracle database. I've created a script as below to achieve this. However, the user invoking the script doesn't have permission to write into the directory where the script has to run. So, is there a way we can fire multiple inserts on the database without connecting to sqlplus multiple times within the loop and at the same time, NOT create temp sql file as below? Any ideas are highly appreciated. Thanks in advance!
i=0
while (( i<$src_tbl_cnt ))
do
echo "insert into temp_table values ('${src_tbl_arr[$i]}', ${ins_row_arr[$i]}, ${rej_row_arr[$i]});" >> temp_scrpt.sql
(( i+=1 ))
done
echo "commit; disc; quit" >> temp_scrpt.sql
sqlplus user/pass@db @temp_scrpt.sql
Just use the /tmp directory.
The /tmp directory is guaranteed to be present on any unix-family server. It is there precisely for needs like this. Definitely include something like the current process ID in the file name so that multiple users don't step on each other. So the total name is something like /tmp/temp_$$_scrpt.sql or the like ($$ expands to the shell's process ID).
When done, be sure to also delete that file--say, in a line right after the sqlplus call. Thus be sure to store the file name in a variable and delete what's in that variable.
It should go without saying, but in a well run shop: 1) The admins should have put more than enough space in /tmp, 2) All the users in the community should not be deleting other's files in /tmp or overloading it so it runs out of space. 3) The admins should setup a job that deletes files from /tmp after a certain age so that if your script fails before it deletes the temporary file, it won't be there forever.
So really, this answer is more about /tmp and managing it effectively--but that really is what you need. Using temporary files is a powerful technique, so your design is good. And the reality that users often won't have rights in a directory is common, so /tmp is your answer.
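A sketch of that approach under the question's own variable names (src_tbl_cnt and the arrays are assumed to be populated already):
#!/bin/ksh
# build the script under a unique /tmp name and remove it on exit
tmpfile=/tmp/temp_$$_scrpt.sql
trap 'rm -f "$tmpfile"' EXIT
i=0
while (( i<$src_tbl_cnt ))
do
echo "insert into temp_table values ('${src_tbl_arr[$i]}', ${ins_row_arr[$i]}, ${rej_row_arr[$i]});" >> "$tmpfile"
(( i+=1 ))
done
echo "commit;" >> "$tmpfile"
sqlplus -S user/pass@db @"$tmpfile"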
Instead of creating a temporary file you can directly pipe the output of an input generating block into sqlplus, in your shell script.
Example:
{
echo 'set auto off;'
for ((i=0; i<100; i++)); do
echo "insert into itest(i) values ($i);"
done
# echo 'rollback;' # for testing
echo 'commit;'
} | sqlplus -S juser/secret@db > /dev/null
This works with ksh93 and Bash (perhaps even with ksh88, modulo the (( )) expression syntax).
The corresponding DDL statement for the test table:
create table itest ( i number(36) ) ;
PS: Even when creating a temporary file is preferred, redirecting the output of the whole block once is far more efficient than doing an append-style redirect for each line, e.g.:
{ for ((i=0; i<100; i++)); do echo "line $i"; done; echo end; } > foo.tmp
Will the below piece of code keep connecting to sqlplus multiple times, or will it connect only once?
{
echo 'set auto off;'
for ((i=0; i<100; i++)); do
echo "insert into itest(i) values ($i);"
done
echo 'rollback;' # for testing
echo 'commit;'
} | sqlplus -S juser/secret@db > /dev/null

How do I create a cron job to run an postgres SQL function?

I assume that all I need to do is to:
Create an SQL file, e.g. nameofsqlfile.sql, with the contents:
perform proc_my_sql_funtion();
Execute this as a cron job.
However, I don't know the commands that I'd need to write to get this cron job to execute it as a Postgres function for a specified host, port, database, user and password...?
You just need to think of a cron job as running a shell command at a specified time or day.
So your first job is to work out how to run your shell command.
psql --host host.example.com --port 12345 --dbname nameofdatabase --username postgres < my.sql
You can then just add this to your crontab (I recommend you use crontab -e to avoid breaking things)
# runs your command at 00:00 every day
#
# min hour wday month mday command-to-run
0 0 * * * psql --host host.example.com --port 12345 --dbname nameofdatabase < my.sql
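Since cron runs non-interactively, don't put the password on the command line; the usual approach is a ~/.pgpass file (permissions 0600), which psql reads automatically. One line per connection, in this format:
# ~/.pgpass: hostname:port:database:username:password
host.example.com:12345:nameofdatabase:postgres:yourpassword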
In most cases you can put all of the sql source in a shell 'here document'. The nice thing about here documents is that ${MY_VAR} references are expanded by the shell even when they appear inside single quotes in the document body, e.g.:
#!/bin/sh
THE_DATABASE=personnel
MY_TABLE=employee
THE_DATE_VARIABLE_NAME=hire_date
THE_MONTH=10
THE_DAY=01
psql ${THE_DATABASE} <<THE_END
SELECT COUNT(*) FROM ${MY_TABLE}
WHERE ${THE_DATE_VARIABLE_NAME} >= '2011-${THE_MONTH}-${THE_DAY}'::DATE;
THE_END
YMMV
Check this
http://archives.postgresql.org/pgsql-admin/2000-10/msg00026.php
and
http://www.dbforums.com/postgresql/340741-cron-jobs-postgresql.html
Or you can just create a bash script that contains your command and call it from crontab.
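For example, a minimal wrapper script (the path and connection details are placeholders; note that PERFORM only works inside PL/pgSQL, so a plain SQL call uses SELECT):
#!/bin/sh
# call the function directly with -c instead of a separate .sql file
psql --host host.example.com --port 12345 --dbname nameofdatabase --username postgres -c 'SELECT proc_my_sql_funtion();'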
For Postgresql 10 and above you can use pg_cron. As stated in its README.md,
pg_cron is a simple cron-based job scheduler for PostgreSQL (10 or higher) that runs inside the database as an extension. It uses the same syntax as regular cron, but it allows you to schedule PostgreSQL commands directly from the database:
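For instance, scheduling the question's function nightly would look something like this (a sketch; the pg_cron extension must be installed and preloaded first):
SELECT cron.schedule('0 0 * * *', 'SELECT proc_my_sql_funtion()');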

Execute SQL from file in bash

I'm trying to load SQL from a file in bash and execute it. The SQL file needs to stay versatile, meaning it cannot be altered just to make things easy when run from bash (escaping special characters like * ).
So I have run into some problems:
If I read my sample.sql
SELECT * FROM SAMPLETABLE
to a variable with
ab=`cat sample.sql`
and execute it
db2 `echo $ab`
I receive an SQL error because, once the variable is expanded unquoted, the * is replaced by all the file names in the current directory.
An easy solution would be to replace * with \*. But I cannot do this, because the file needs to stay executable in programs like DB Visualizer etc.
Could someone give me hint in the right direction?
The DB2 command line processor has options that accept a filename as input, so you shouldn't need to load statements from a text file into a shell variable.
This command will execute all SQL statements in the file, with newline treated as the statement terminator:
db2 -f sample.sql
This command will execute all SQL statements in the file, with semicolon treated as the statement terminator:
db2 -t -f sample.sql
Other useful CLP flags are:
-x : Suppress the column headings
-v : Echo the statement text immediately before execution
-z : Tee a copy of all CLP output to the filename immediately following this flag
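The flags can also be combined; for example, to execute a semicolon-terminated script while echoing each statement and logging all output (output.log is a hypothetical filename):
db2 -tvf sample.sql -z output.log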
Redirect stdin from the file.
db2 < sample.sql
In case you have a variable in your script and want it replaced by the shell before the script is executed in DB2, use this approach:
Contents of File.sql:
cat <<xEOF
insert into ${MY_SCHEMA}.${MY_TABLE} values (1,2);
select * from ${MY_SCHEMA}.${MY_TABLE};
xEOF
In command prompt do:
export MY_SCHEMA='STAR'
export MY_TABLE='DIMENSION'
Then you are all good to get it executed in DB2:
sh File.sql | db2 +p -t
The shell will replace the exported variables and then DB2 will execute the result.
Hope it helps.