Cron job script fails but runs fine manually - AIX

I have a script which runs fine manually but does not produce the desired output when run through a cron job. Please let me know if anything is wrong with the script.
#!/usr/bin/ksh
file1=$(find *-* -mtime 1)
file2=$(find *-* -mtime 2)
basefile1=$(basename $file1)
basefile2=$(basename $file2)
cd /gtxappl/Release/SCMAudit
./cmp.sh $basefile1 $basefile2 > dailyAuditChecks.txt
mailx -s "Daily Checks Report" ****@homeretailgroup.com < dailyAuditChecks.txt

From Admin's Choice:
5. Crontab Environment
cron invokes the command from the user's HOME directory with the shell (/usr/bin/sh).
cron supplies a default environment for every shell, defining:
HOME=user’s-home-directory
LOGNAME=user’s-login-id
PATH=/usr/bin:/usr/sbin:.
SHELL=/usr/bin/sh
Users who desire to have their .profile executed must explicitly do so in the crontab entry or in a script called by the entry.
I recommend using absolute paths wherever possible, and don't forget to source your .profile if you need environment variables.
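For example, here is a sketch of the question's script rewritten along those lines. The directory and file names are taken from the original script; treat the rest as an illustration of the idea rather than a drop-in fix:

#!/usr/bin/ksh
. $HOME/.profile                          # pick up the environment cron does not provide
cd /gtxappl/Release/SCMAudit || exit 1    # change directory first, so the find patterns match here
file1=$(find . -name '*-*' -mtime 1)
file2=$(find . -name '*-*' -mtime 2)
basefile1=$(basename "$file1")
basefile2=$(basename "$file2")
./cmp.sh "$basefile1" "$basefile2" > dailyAuditChecks.txt
mailx -s "Daily Checks Report" ****@homeretailgroup.com < dailyAuditChecks.txt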

What is the difference between calling a command via "wsl [command]" and opening a wsl shell and calling "[command]"?

I am using Ubuntu via WSL 2.0 on Windows 10 and would like to run TeX Live from the Windows command line. To do so I prepended the TeX Live folder to the PATH in /etc/environment (I also tried a number of other locations, e.g. $HOME/.bashrc):
C:\Users\scott\Documents>wsl echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Windows/system32:...
C:\Users\scott\Documents>wsl
scott@SCOTT-PC:/mnt/c/Users/scott/Documents$ echo $PATH
/usr/local/texlive/2020/bin/x86_64-linux:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Windows/system32:...
Why is there a difference between these two paths? Is it possible to change the first PATH variable?
To be honest, when I first looked at this question, I thought it would be an easy answer. Oh how wrong I was. There are a lot of nuances to how this works.
Let's start with the fairly "easy" part, though. The main difference between the first method and the second:
wsl by itself launches into a login (and interactive) shell
the shell launched with wsl echo $PATH is neither a login shell nor an interactive shell
So the first will source both login scripts (e.g. ~/.profile) and interactive startup scripts (e.g. ~/.bashrc). The second form does not get to source either of these.
You can see this a different way (and get to the solution) with the following commands:
wsl -e bash -c 'echo $PATH'
wsl -e bash -li -c 'echo $PATH'
The -li forces bash to run as a login and interactive shell, thus sourcing all of the applicable startup scripts. And, as @bovquier points out in the comments, single quotes are needed here to prevent PowerShell from interpolating the $ before it gets to Bash. That, or escape it.
You should be able to run TeX Live the same way, just replacing the "echo $PATH" with the startup command you need for TeX Live.
A second option would be to create a script that both adds the path and runs the command, and just launch that script through wsl /path/to/script.sh
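A minimal sketch of such a wrapper, using the TeX Live path shown in the output above (the script name and location are hypothetical):

#!/bin/bash
# run-texlive.sh -- hypothetical wrapper: prepend the TeX Live directory, then run the given command
export PATH="/usr/local/texlive/2020/bin/x86_64-linux:$PATH"
exec "$@"

You could then run, for example, wsl /path/to/run-texlive.sh pdflatex document.tex.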
That said, I honestly don't think that your current login/interactive PATH is coming from /etc/environment. In my testing, at least, /etc/environment has no use in WSL, and that's to be expected. /etc/environment is only sourced by PAM modules, and with no login check performed by WSL, there's no reason to invoke PAM for either the wsl or the wsl echo $PATH command.
I'd expect that you still have the PATH setting in ~/.bashrc (or somewhere similar), and that's where the shell is picking it up at the moment.
While this isn't necessarily critical to understanding the answer, you might also wonder, if /etc/environment isn't used for setting the default (non-login, non-interactive) path in WSL, what is? The answer seems to be that it is hard-coded into the init that starts up WSL. That init is also what appends the Windows path (assuming you don't have that feature disabled in /etc/wsl.conf).
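For reference, that feature is controlled by the interop section of /etc/wsl.conf; to my knowledge, disabling the Windows path appending looks like this:

[interop]
appendWindowsPath = false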

Running a crontab job from locally stored script

Having trouble running a crontab psql backup job from a locally stored script. I added the job via crontab -e, and when I run crontab -l it shows up in the list of jobs. The script that it is supposed to run works fine; I have checked that it runs as it should and dumps its output to the designated S3 bucket when invoked as ./backup.sh.
This is what I set the job as:
59 23 * * 7 /Users/myusername/backup.sh
The job should run at 11:59 PM every Sunday, but it doesn't. I can't figure out what the issue is. (Do I need to leave line breaks/spaces in between each job, or just after the very last job in my crontab list?)
Any help would be very much appreciated. Thanks.
Depending on your distribution, you might want to check logs for Cron service.
Non-exhaustive list of possible problem reasons:
Cron service is not running at all and hence is not starting any of the tasks;
Usually cron passes your script a very limited set of environment variables, so your script might fail because of some missing environment; that will probably be reflected in the cron daemon logs.
What can you do
Cron service: if your distro uses systemd then try running systemctl status cron (or systemctl status crond?) to check if it is running.
Your script is started but fails: here are several things to try.
Try checking the cron service logs, maybe with something like journalctl --unit cron, or run journalctl -f just before the script is due to start;
Check if there is a dead.letter file in your home directory containing output of the failed script. When Cron starts your script and the script outputs something (which is considered a problem), that output is mailed to you. If mailing is not properly configured then it usually goes to that file.
Put something like this in the beginning of your script:
(
  date    # record when cron ran the script
  id -a   # record which user and groups it ran as
  set     # dump all shell and environment variables cron provided
  echo
) >> /tmp/myscript.log
Then wait until cron runs your script and check whether the file /tmp/myscript.log was created. Then try to run your script manually, replicating the environment created by cron, which you now know: unset all but the variables cron leaves, and make sure the user id is correct.
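One way to replicate it, sketched here with the default values cron typically supplies (substitute whatever your /tmp/myscript.log actually shows):

env -i HOME="$HOME" LOGNAME="$LOGNAME" SHELL=/bin/sh PATH=/usr/bin:/bin /Users/myusername/backup.sh

env -i starts from an empty environment, so the script sees only the variables listed on that line.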

Reading profile script in non-interactive mode with AIX implementation of ksh

Please note that this is an AIX related question.
I have a Jenkins server running on Red Hat which runs a node via SSH on an AIX server.
The commands are run non-interactively using SSH to a user on the AIX machine who has ksh as its standard shell.
The problem is that this build needs a number of environment variables, and I can't seem to get it to work.
I have tried:
Jenkins allows me to set some environment variables for the session. So I tried:
ENV="$HOME/.profile"
I tried creating a .kshrc file containing
. .profile
But none of these approaches seems to make ksh run the .profile script.
The .profile script contains the environment setup for the user I need.
How do I get an AIX implementation of ksh to run my .profile script before executing commands?
You need to specifically tell Jenkins that you want to execute them in ksh shell.
By default, Jenkins runs as sh <commands>.
Add a shebang as the first line of your shell command:
#!/bin/ksh
Most shells don't source their .profile files on non-interactive sessions. A simple solution is to source the .profile yourself as part of the command you are sending.
So instead of
yourcommand1; yourcommand2
you should send
. ~/.profile; yourcommand1; yourcommand2
over ssh
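For example (the host name and build command here are hypothetical):

ssh builduser@aixhost '. ~/.profile; /usr/bin/ant -f build.xml'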
UPDATE after reading the comment about Jenkins controlling the ssh command
In the case your ssh command is performed by Jenkins you should have a look at https://wiki.jenkins-ci.org/display/JENKINS/SSH+Slaves+plugin, especially the 'Login profile files' paragraph.
I'd say one of these solutions is best
Set all environment variables from Jenkins using the node's configure page. Install the EnvInject plugin to do this.
Write a wrapper around the java command on the slave that sources your profile script and adjust the JavaPath (also on the node's configure page) to point to that wrapper.
The only way I know of for setting environment variables that will apply for non-interactive shells on AIX is via /etc/environment. I believe this is the correct place, but it will of course then apply to all users and all shells.
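To my knowledge, entries in AIX's /etc/environment are plain NAME=value lines with no shell syntax; a hypothetical example:

# /etc/environment -- plain NAME=value lines, read at login for every user
BUILD_HOME=/opt/build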

Cron Job - Could not open input file:

I have set up a PHP file to run that just echoes hello.
<?php
echo "hello";
?>
My cron job looks like this:
/usr/local/bin/php -f “/home/username/public_html/mls/test.php”
When my script runs I get a confirmation email that says:
Could not open input file: /home/username/public_html/mls/test.php
I don't know what is causing this. I am using GoDaddy's virtual private server with cPanel X installed. I have used SSH to set permissions 777 on the folder and file and still cannot get it to run.
Any advice would be helpful. Thanks.
For some reason PHP cannot open the file. Try replacing /usr/local/bin/php -f with ls -la to glean some more information. Remember NOT to quote the file name in the crontab: php -f filename.php, not php -f "filename.php", unless it contains spaces -- and then it's better to use single quotes.
Possibly, try "ls -la /home", "ls -la /home/username", "ls -la ~/public_html" and so on.
Also try appending
2>&1
to the command line, in case only stdout is mailed to you (I don't really think so, but being sure costs little).
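Put together, the crontab entry might look like this (the schedule shown is just an example):

0 * * * * /usr/local/bin/php -f /home/username/public_html/mls/test.php >> /tmp/mls-test.log 2>&1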
One other possibility
The crontab as it is refers to /home/username/public_html/mls/test.php - that is, a public_html directory inside the commonest location for a user's home directory.
It is possible that the cron job is either not running with the appropriate user and privileges, or that the user it "sees" is actually a virtual user - there is no /home/username at all - and the "home directory" is elsewhere, possibly even existing only for as long as the cron job runs. In this case the solution might be to refer to
~/public_html/mls/test.php
or, as described above, to first run a command such as pwd or ls -la to determine exactly where the cron job's current working directory is.
If this, too, fails, then another workaround could be to invoke the PHP HTTP handler via curl or lynx:
/usr/bin/curl http://www.thishostname.com/mls/test.php
Possibly using either an environment variable, a curl header, or a $_GET parameter to authenticate to the script as the cron job, and to avoid it being accessible from the outside.

Default c-shell, change to bash but allow for scp

Hi, so I am trying to modify my .cshrc file to make bash my default shell. It is on a school account, so I cannot change the main settings but can change my profile. The problem is that when I use the command:
bash
in my .cshrc, it works fine when I am logging in. But any time I try to scp files it does not work, because scp sources the .cshrc and gets confused when the session switches to bash.
Does anyone know how to get around this? Possibly launch bash in quiet mode...
In general, you shouldn't do anything that invokes an interactive application or produces visible output in your .cshrc. The problem is that .cshrc is sourced for non-interactive shells. And since your default shell is csh, you're going to have csh invoked non-interactively in a lot of cases -- as you've seen with scp.
Instead, I'd just invoke bash -- or, better, bash -l -- manually from the csh prompt. You can set up an alias like, say, alias b bash -l.
If you're going to invoke a new shell automatically on login (which is still not a good idea), put it in your .login, not your .cshrc.
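If the exec does have to live in a startup file, a common guard (sketched here for csh, untested) is to test $?prompt, which is set only in interactive shells, so scp and other non-interactive sessions are left alone:

# only switch to bash when the shell is interactive
if ( $?prompt ) then
    exec bash -l
endif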
This is assuming chsh doesn't work, but it should -- try it.