I have created a Perl script in which I connect to vsql and run queries. When I run the script manually, it creates output files as expected, but when I schedule the script in crontab the output file is not generated. The Perl script is given below:
#!/usr/bin/perl
$timenow = `date "+%H_%M"`;
chomp($timenow);
$cmd = "/opt/vertica/bin/vsql -d xxxx -U xxxxx -w xxxxx -F \$'--FSEP--' -At -o dumpfile_" . $timenow . ".txt -c \"SELECT CURRENT_TIMESTAMP(1) AS time;\"";
print "$cmd\n";
system($cmd);
And below is the crontab entry:
*/2 * * * * /usr/bin/perl /tmp/test.pl
Can somebody please help me see what I am doing wrong?
Your output is written to a file called dumpfile_[timestamp].txt. But where is that file?
Your command contains no directory path for that file. So it will be written to the current directory. The current directory for a cronjob is the home directory for the user that owns the cronjob. Have you tried looking there?
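A quick way to check, run as the user that owns the cronjob, is to list matching files in that home directory:
ls -l ~/dumpfile_*.txt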
It's always better to be more specific about which directory you want the files written to. Two ways to do that are:
Change to the directory before running your command
*/2 * * * * cd /some_directory_path && /usr/bin/perl /tmp/test.pl
Include the full path in your Perl program
-o /some_directory_path/dumpfile_" . $timenow . ".txt
Update: Ok, take two.
Who owns this cronjob? Any output from the cronjob will be emailed to the owner. And there's definitely output as you have a print() statement. Any errors will be included in the same email. Do you get that email? What's in it?
If you don't get the email, you can change the address that the email is sent to by adding a MAILTO parameter to the crontab. It will look like this:
MAILTO=someone@example.com
*/2 * * * * /usr/bin/perl /tmp/test.pl
Bear in mind that the server might not be set up to send external email, so you might need to use the local mail program on the server.
If you can't get the email to work out, you could look for cron errors in /var/log/syslog (or, on a systemd system, try journalctl _COMM=cron).
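For example, cron's own log lines can usually be found like this (log locations vary by distribution):
grep CRON /var/log/syslog
journalctl _COMM=cron --since today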
The print() output is going somewhere. You need to track it down.
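One way to track it down is to temporarily redirect the job's output to a known file; a sketch based on the crontab entry from the question (the log path is just an example):
*/2 * * * * /usr/bin/perl /tmp/test.pl >> /tmp/test_pl_cron.log 2>&1
Once that log shows where the dump file path resolves to, you can switch back to the original entry.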
Files are being written to a directory using the COPY query:
Copy (SELECT * FROM animals) To '/var/lib/postgresql/data/backups/2020-01-01/animals.sql' With CSV DELIMITER ',';
However if the directory 2020-01-01 does not exist, we get the error
could not open file "/var/lib/postgresql/data/backups/2020-01-01/animals.sql" for writing: No such file or directory
The PostgreSQL server is running inside a Docker container with the volume mapping /mnt/backups:/var/lib/postgresql/data/backups
The Copy query is being sent from a Node.js app outside of the Docker container.
The mapped host directory /mnt/backups was created by Docker Compose and is owned by root, so the Node.js app sending the COPY query is unable to create the missing directories due to insufficient permissions.
The backup file is meant to be transferred out of the Docker container to the Docker host.
Question: Is it possible to use an SQL query to ask PostgreSQL 11.2 to create a directory if it does not exist? If not, how would you recommend the directory creation be done?
Using Node.js 12.14.1 on an Ubuntu 18.04 host. Using PostgreSQL 11.2 inside the container, Docker 19.03.5.
An easy way to solve it is to create the file directly on the client machine. Using STDOUT with COPY, you can have the query output redirected to the client's standard output, which you can catch and save in a file. For instance, using psql on the client machine:
$ psql -U your_user -d your_db -c "COPY (SELECT * FROM animals) TO STDOUT WITH CSV DELIMITER ','" > file.csv
Creating the output directory in case it does not exist:
$ mkdir -p /mnt/backups/2020-01/ && psql -U your_user -d your_db -c "COPY (SELECT * FROM animals) TO STDOUT WITH CSV DELIMITER ','" > /mnt/backups/2020-01/file.csv
On a side note: try to avoid exporting files onto the database server. Although it is possible, I consider it a bad practice. Doing so, you will either write a file into the postgres system directories or give the postgres user permission to write somewhere else, neither of which you should be comfortable with. Export data directly to the client, either using COPY as I mentioned or following the advice from @Schwern. Good luck!
Postgres has its own backup and restore utilities which are likely to be a better choice than rolling your own.
When used with one of the archive file formats and combined with pg_restore, pg_dump provides a flexible archival and transfer mechanism. pg_dump can be used to backup an entire database, then pg_restore can be used to examine the archive and/or select which parts of the database are to be restored. The most flexible output file formats are the “custom” format (-Fc) and the “directory” format (-Fd). They allow for selection and reordering of all archived items, support parallel restoration, and are compressed by default. The “directory” format is the only format that supports parallel dumps.
A simple backup rotation script might look like this:
#!/bin/sh
table='animals'
url='postgres://username@host:port/database_name'
date=`date -Idate`
file="/path/to/your/backups/$date/$table.sql"
mkdir -p `dirname $file`
pg_dump $url -w -Fc --table=$table -f $file
To avoid hard-coding the database password, -w means pg_dump will not prompt for a password and will instead look for a password file. Or you can use any of the many Postgres authentication options.
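If you go the password file route, ~/.pgpass holds one line per connection in the form hostname:port:database:username:password, and libpq ignores it unless its permissions are restricted. A minimal sketch (values are placeholders):
echo 'host:5432:database_name:username:secret' >> ~/.pgpass
chmod 600 ~/.pgpass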
I am writing a BASH deployment script on RH 5. The script runs great and sends out an email at the end of the run. However, at the end of the script, if I detect any failure, I need to copy log files back to the local server to attach them to the email.
The script can detect the failure fine; the question is how to copy the log files back. I don't want to just cat the log files into the email as they can be huge.
Any suggestions?
Thanks
S
If I understand your problem correctly, you should use scp:
http://linux.die.net/man/1/scp
and here you can find how to automate the login so you can use it in a script
http://linuxproblem.org/art_9.html
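Once key-based login is set up, the copy step in your deployment script could look something like this sketch (the host, key, and paths are examples):
scp -i ~/.ssh/deploy_key user@remote-host:/var/log/deploy.log /tmp/deploy.log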
I can't see any easy way of avoiding a second login with scp/sftp. If you're sure that it's only the log file that will be returned you could do something like the following:
ssh -e none REMOTE SCRIPT | gzip -dc > LOGFILE
Inside SCRIPT you have something like gzip -c LOGFILE when it fails.
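A sketch of that remote side, assuming a hypothetical do_deploy function and an example log path:
#!/bin/sh
# remote SCRIPT (sketch): on failure, stream the compressed log back over stdout
if ! do_deploy; then
    gzip -c /var/log/deploy.log
    exit 1
fi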
I have set up a PHP file to run that just echoes hello.
<?php
echo "hello";
?>
My cron job looks like this:
/usr/local/bin/php -f “/home/username/public_html/mls/test.php”
When my script runs, I get a confirmation email that says:
Could not open input file: /home/username/public_html/mls/test.php
I don't know what is causing this. I am using GoDaddy's virtual private server with cPanel X installed. I have used SSH to set permissions 777 on the folder and file and still cannot get it to run.
Any advice would be helpful. Thanks.
For some reason PHP cannot open the file. Try replacing /usr/local/bin/php -f with "ls -la" to try to glean some more information. Remember NOT to quote the file name in the crontab: php -f filename.php, not php -f "filename.php", unless it contains spaces -- and then it's better to use single quotes.
Possibly, try "ls -la /home", "ls -la /home/username", "ls -la ~/public_html" and so on.
Also try appending
2>&1
to the command line, in case only stdout is mailed to you (I don't really think so, but being sure costs little).
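In full, the crontab line with stderr redirected might look like this (the schedule is just an example):
*/5 * * * * /usr/local/bin/php -f /home/username/public_html/mls/test.php 2>&1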
One other possibility
The crontab as it stands refers to /home/username/public_html/mls/test.php - that is, a public HTML directory inside what is usually the user's home directory.
It is possible that the cron job is either not running with the appropriate user and privileges, or that the user it "sees" is actually a virtual user - there is no /home/username at all - and the "home directory" is elsewhere, possibly existing only for as long as the cron job runs. In this case the solution might be to refer to
~/public_html/mls/test.php
or, as described above, to first run a command such as pwd or ls -la to determine exactly where the cron job's current working directory is.
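A throwaway crontab entry along these lines would show the working directory, user, and environment the job actually gets (the output path is an example):
* * * * * { pwd; id; env; } > /tmp/cron_debug.txt 2>&1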
If this, too, fails, then another workaround could be to invoke the PHP HTTP handler via curl or lynx:
/usr/bin/curl http://www.thishostname.com/mls/test.php
You could use an environment variable, a curl header, or a _GET parameter to authenticate the request to the script as coming from the cron job, so that the script is not accessible from the outside.
I have a script which runs fine manually but does not produce the desired output when run through a cron job. Please let me know if anything is wrong with the script.
#!/usr/bin/ksh
file1=$(find *-* -mtime 1)
file2=$(find *-* -mtime 2)
basefile1=$(basename $file1)
basefile2=$(basename $file2)
cd /gtxappl/Release/SCMAudit
./cmp.sh $basefile1 $basefile2 > dailyAuditChecks.txt
mailx -s "Daily Checks Report" ****@homeretailgroup.com < dailyAuditChecks.txt
From Admin's Choice:
5. Crontab Environment
cron invokes the command from the user’s HOME directory with the shell, (/usr/bin/sh).
cron supplies a default environment for every shell, defining:
HOME=user’s-home-directory
LOGNAME=user’s-login-id
PATH=/usr/bin:/usr/sbin:.
SHELL=/usr/bin/sh
Users who desire to have their .profile executed must explicitly do so in the crontab entry or in a script called by the entry.
I recommend using absolute paths wherever possible and don't forget about executing your .profile if you need environment variables.
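For instance, a crontab entry for the script above might source the profile first and use absolute paths throughout (the schedule and script name are placeholders):
0 6 * * * . $HOME/.profile; cd /gtxappl/Release/SCMAudit && ./dailyAudit.sh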
Can someone link me to a tutorial, or explain whether there is a way to create some sort of batch file of MySQL scripts / stored procs and run them all at the same time? I cannot seem to find any documentation on this online, but I feel that I might be searching using the wrong terms.
You can chain mysql scripts by calling them from within a script using the source command (see the MySQL reference manual for details of the command-line options):
# my_textfile.sql
# ---------------
USE my_database;
\. subscript1.sql
\. subdir/subscript2.sql
\. /full/path/to/subscript3.sql
Command Line:
mysql < my_textfile.sql
Don't forget the command-line options; if you are going to script the files you might need the user account and password:
mysql -uyouraccount -pyourpassword YourDatabase < mytextfile.sql
This isn't the most secure way to do it because it puts your username/password on the command line, but it works. If you are doing much scripting, I suggest you look into .my.cnf and the various options for saving your account/password in there (and securing that file).
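If you do go the .my.cnf route, it is a small INI-style file in your home directory with a [client] section; a sketch (credentials are placeholders):
cat > ~/.my.cnf <<'EOF'
[client]
user=youraccount
password=yourpassword
EOF
chmod 600 ~/.my.cnf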
You can simply create a text file with SQL statements separated with ; and then execute all statements with the MySQL command line client:
# my_textfile.sql
# ---------------
USE my_database;
SELECT * FROM table1;
UPDATE table2 SET foo='bar';
Command Line:
mysql < my_textfile.sql
For those running MAMP PRO on OS X Yosemite, I was able to get all my *.sql scripts executed (imported) by running this from the terminal:
/Applications/MAMP/Library/bin/mysql -h localhost -u root -p < /Applications/MAMP/myDBRestore.sql
myDBRestore.sql contained a reference to all the MySQL DB scripts, like so:
\. /full/path/to/sql/file1.sql
\. /full/path/to/sql/file2.sql
\. /full/path/to/sql/file3.sql
...
\. /full/path/to/sql/file(n).sql
where n is the last .sql file in the directory.
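If you have a whole directory of dumps, you could generate that restore file instead of writing it by hand; a sketch assuming all the dumps live in one directory:
for f in /full/path/to/sql/*.sql; do printf '\\. %s\n' "$f"; done > /Applications/MAMP/myDBRestore.sql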