Web2py - Cron task debug - module

I recently updated to the newest version of web2py (v2.1.1), but I was experiencing this issue in the previous version as well.
I want to run a cron job. In my efforts just to get cron working at all, I have followed this tutorial: something really simple, just to see it working. But I must have done something wrong, as I am not sure even this is working.
Below is my terminal output once I start web2py. There are 3 modules I would like to run every minute; the last module is from the aforementioned tutorial. I have also followed the tutorial's logging.conf, so that output is reflected below:
2012-10-18 16:50:00,060 - web2py.cron - DEBUG - hard cron invocation
2012-10-18 16:50:00,060 - web2py.cron - DEBUG - WEB2PY CRON: Acquiring lock
2012-10-18 16:50:00,061 - web2py.cron - INFO - WEB2PY CRON (hard): ircmessage executing *applications/ircmessage/modules/get_messages.py in /home/web2py/src/web2py at 2012-10-18 16:50:00.061575
2012-10-18 16:50:00,062 - web2py.cron - INFO - WEB2PY CRON (hard): ircmessage executing **applications/ircmessage/modules/addline.py in /home/web2py/src/web2py at 2012-10-18 16:50:00.062092
2012-10-18 16:50:00,065 - web2py.cron - INFO - WEB2PY CRON (hard): ircmessage executing **applications/ircmessage/modules/background_updater.py in /home/web2py/src/web2py at 2012-10-18 16:50:00.065794
2012-10-18 16:50:00,069 - web2py.cron - DEBUG - WEB2PY CRON: Releasing cron lock
2012-10-18 16:50:00,934 - web2py.cron - DEBUG - WEB2PY CRON Call returned success:
>>>
2012-10-18 16:50:00,938 - web2py.cron - DEBUG - WEB2PY CRON Call returned success:
>>>
2012-10-18 16:50:00,963 - web2py.cron - DEBUG - WEB2PY CRON Call returned success:
Here is my crontab:
#crontab
0-59/1 * * * * root *applications/ircmessage/modules/get_messages.py
0-59/1 * * * * root **applications/ircmessage/modules/addline.py
0-59/1 * * * * root **applications/ircmessage/modules/background_updater.py
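For context on the asterisk prefixes (as described in the web2py book; worth verifying against your version): a path prefixed with a single * is executed inside the web2py environment, while ** executes it in the web2py environment without running the models. So a line such as:

```shell
# web2py crontab fragment (config, not a shell command):
# ** = execute in the web2py environment, without loading models
0-59/1 * * * * root **applications/ircmessage/modules/addline.py
```

runs the script inside web2py every minute, models excluded.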
A quick look at the simple addline.py module:
#!/usr/bin/env python
# coding: utf8
from gluon import *
from gluon.debug import dbg

with open("text.txt", "a") as myfile:
    myfile.write("appended text")
dbg.set_trace()  # stop here! **
** Notice that I used dbg.set_trace(); this trace does not appear in the debug section of the admin.
Since updating to the new version of web2py, I noticed that cron tasks are not run on startup automatically (I may have misinterpreted this, however). I start web2py like this:
./web2py.py -i xxx.xxx.xx.xx -p 8000 -c /etc/ssl/certs/my_cert_file.crt -k /etc/ssl/certs/my_cert_key.key -a apassword --run-cron
Any advice on how to get the simplest of cron tasks working would be greatly appreciated. Also advice on how I might go about debugging whether the cron is actually being invoked would be welcomed. Actually, advice in general would be very beneficial.
Thank you for your time and suggestions in advance.

Since web2py 2.1.1, cron is disabled by default (because we want to encourage use of the scheduler). You need the -Y option to enable it.
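A minimal sketch of the startup command with cron enabled, carried over from the question (in recent releases -Y is the short form of --run-cron, so verify against ./web2py.py --help for your release):

```shell
# Start web2py with cron enabled via -Y (addresses and key paths as in the question)
./web2py.py -i xxx.xxx.xx.xx -p 8000 -c /etc/ssl/certs/my_cert_file.crt -k /etc/ssl/certs/my_cert_key.key -a apassword -Y
```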

Related

Multiple server builds with the same image - one will not run a specific crontab line

We have a watchdog script that runs on a number of servers and restarts some services if they fail. It runs successfully on a number of sites; however, we have one site that will not run the crontab entry that triggers the watchdog. If we run the entry from the command line, it works fine.
When the watchdog is installed, it puts the following line into crontab. You just remove the '#' to enable it:
#*/5 * * * * root /usr/local/fusion/scripts/watch_fusion_services 60
Other entries in crontab do run; it's just this one line.
I have done the following to try to resolve the issue:
Removed crontab entries for the watchdog and reinstalled watchdog
Checked syslog getting this error:
Error: bad hour; while reading /etc/crontab
Changed the crontab line to be 6 minutes instead of 5 (as there was another cron job running every 5 minutes as well at this site)
Syslog error no longer occurring, however watchdog still does not work via crontab. No error messages in syslog
tested running the crontab line from the command prompt - this works okay
Attempted same process on a test VM - worked okay
Attempted same process in live environment - tested okay
checked versions of the ICA - both the same - GNU/Linux 3.13.0-117-generic x86_64
ran ntpq -p on the server that is having issues - time is 'LOCAL'.
typed entry by hand - same issues occurring
I could try rebooting the server - but it seems a bit extreme for one crontab entry not working
Does anyone have any ideas about this one?
There was a line of junk in crontab that we all missed. One of the other technicians I work with pointed it out as he walked past our department.
Sometimes you just can't see the wood for the trees when you have been looking at a problem too long.
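Incidentally, a junk line like this can often be spotted quickly by making non-printing characters visible. The commands below are generic (not from the original post) and are demonstrated on a sample file; point them at /etc/crontab on the affected server:

```shell
# Make tabs (^I) and line endings ($) visible so stray bytes or junk lines stand out.
# /tmp/crontab.sample stands in for /etc/crontab here.
printf '*/6 * * * * root /usr/local/fusion/scripts/watch_fusion_services 60\njunk\tline\n' > /tmp/crontab.sample
cat -A /tmp/crontab.sample | nl -ba
```

Any line whose visible characters don't match a valid crontab entry is a candidate for the junk that cron is choking on.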

CRON job not setting up on Linux

I have set up the cron entry using the crontab -e command:
MAILTO=""
* * * * * /usr/bin/php7.2 /var/www/vhosts/hostname/httpdocs/bin/magento cron:run --group="test"
I have created a module to run the cron job automatically, but it is not working automatically.
When I run the command php bin/magento cron:run manually, it works.
I am surprised that the cron task runs manually but not automatically.
My bad; it is working fine now.
The actual issue was with the user setting up the cron job. I was setting up the cron job as the root user. Later I set up the same cron entry as the developer user (which has FTP access) and it worked perfectly.
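As a general note (the commands are generic and the user name is just an example): each user has a separate crontab, so root can inspect another user's entries with crontab's -u flag, or edit them with -e instead of -l:

```shell
# List the 'developer' user's crontab (run as root); use -e instead of -l to edit it.
crontab -u developer -l
```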

Flink job started from another program on YARN fails with "JobClientActor seems to have died"

I'm a new Flink user and I have the following problem.
I use Flink on a YARN cluster to transfer related data extracted from an RDBMS to HBase.
I write a Flink batch application in Java with multiple ExecutionEnvironments (one per RDB table, to transfer table rows in parallel) and transfer table by table sequentially (because the call to env.execute() is blocking).
I start the YARN session like this:
export YARN_CONF_DIR=/etc/hadoop/conf
export FLINK_HOME=/opt/flink-1.3.1
export FLINK_CONF_DIR=$FLINK_HOME/conf
$FLINK_HOME/bin/yarn-session.sh -n 1 -s 4 -d -jm 2048 -tm 8096
Then I run my application on the started YARN session via a shell script, transfer.sh. Its content is:
#!/bin/bash
export YARN_CONF_DIR=/etc/hadoop/conf
export FLINK_HOME=/opt/flink-1.3.1
export FLINK_CONF_DIR=$FLINK_HOME/conf
$FLINK_HOME/bin/flink run -p 4 transfer.jar
When I start this script from the command line manually, it works fine: jobs are submitted to the YARN session one by one without errors.
Now I need to run this script from another Java program.
For this aim I use
Runtime.getRuntime().exec("transfer.sh");
(Maybe there are better ways to do this? I have looked at the REST API, but there are some difficulties because the job manager is proxied by YARN.)
At the beginning it works as usual: the first several jobs are submitted to the session and finish successfully. But the following jobs are not submitted to the YARN session.
In /opt/flink-1.3.1/log/flink-tsvetkoff-client-hadoop-dev1.log I see this error (and no other errors are found at DEBUG level):
The program execution failed: JobClientActor seems to have died before the JobExecutionResult could be retrieved.
I have tried to analyse this problem myself and found out that this error occurs in the JobClient class while sending a ping request with a timeout to the JobClientActor (i.e. the YARN cluster).
I tried increasing multiple heartbeat and timeout options, such as akka.*.timeout, akka.watch.heartbeat.* and yarn.heartbeat-delay, but this doesn't solve the problem: new jobs are not submitted to the YARN session from CliFrontend.
The environment for both cases (manual call and call from another program) is the same. When I call
$ ps axu | grep transfer
it gives me this output:
/usr/lib/jvm/java-8-oracle/bin/java -Dlog.file=/opt/flink-1.3.1/log/flink-tsvetkoff-client-hadoop-dev1.log -Dlog4j.configuration=file:/opt/flink-1.3.1/conf/log4j-cli.properties -Dlogback.configurationFile=file:/opt/flink-1.3.1/conf/logback.xml -classpath /opt/flink-1.3.1/lib/flink-metrics-graphite-1.3.1.jar:/opt/flink-1.3.1/lib/flink-python_2.11-1.3.1.jar:/opt/flink-1.3.1/lib/flink-shaded-hadoop2-uber-1.3.1.jar:/opt/flink-1.3.1/lib/log4j-1.2.17.jar:/opt/flink-1.3.1/lib/slf4j-log4j12-1.7.7.jar:/opt/flink-1.3.1/lib/flink-dist_2.11-1.3.1.jar:::/etc/hadoop/conf org.apache.flink.client.CliFrontend run -p 4 transfer.jar
I also tried updating Flink to the 1.4.0 release and changing the parallelism of the job (even to -p 1), but the error still occurred.
I have no idea what could be different. Is there any workaround, by the way?
Thank you for any help.
Finally I found out how to resolve the error.
Just replace Runtime.exec(...) with new ProcessBuilder(...).inheritIO().start().
I really don't know why the call to inheritIO() helps here, because as I understand it, it just redirects the IO streams from the child process to the parent process. (A likely explanation: if a child's stdout/stderr pipes are never read, their buffers eventually fill up and the child blocks; inheritIO() avoids this by writing directly to the parent's streams.)
But I have checked that if I comment out this call, the program starts to fail again.

Running a crontab job from locally stored script

I'm having trouble running a crontab psql backup job from a locally stored script. I added the job via crontab -e, and when I use crontab -l it shows up in the list of jobs. The script it is supposed to run works fine; I've checked that it runs as it should and dumps its output to the designated S3 bucket when invoked as ./backup.sh.
This is what I set the job as:
59 23 * * 7 /Users/myusername/backup.sh
The job should run at 11:59 PM every Sunday, but it doesn't. I can't figure out what the issue is. (Do I need to leave line breaks/spaces between each job, or just after the very last job in my crontab list?)
Any help would be very much appreciated. Thanks.
Depending on your distribution, you might want to check the logs of the cron service.
Non-exhaustive list of possible problem reasons:
Cron service is not running at all and hence is not starting any of the tasks;
Usually cron passes your script a very limited set of environment variables, so your script might fail because of some missing environment. That will probably be reflected in the cron daemon logs.
What can you do
Cron service: if your distro uses systemd then try running systemctl status cron (or systemctl status crond?) to check if it is running.
Your script is started but fails: here are several things to try.
Try checking the cron service logs, maybe with something like journalctl --unit cron, or run journalctl -f just before the script is due to start;
Check if there is a dead.letter file in your home directory containing the output of the failed script. When cron starts your script and the script outputs something (which is considered a problem), that output is mailed to you. If mail is not properly configured, it usually goes to that file.
Put something like this at the beginning of your script:
(
date
id -a
set
echo
) >> /tmp/myscript.log
Then wait until cron runs your script and check whether /tmp/myscript.log was created. Then try to run your script manually, replicating the environment created by cron, which you now know: i.e., unset all but the variables cron leaves, and make sure the id is correct.
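That last step can be sketched like this: once you know which variables cron sets (from the log above), re-run the command under env -i with only those values. The HOME/SHELL/PATH values below are illustrative placeholders:

```shell
#!/bin/sh
# Run a command with a minimal, cron-like environment instead of your login shell's.
# Substitute the variable values your cron actually sets (taken from /tmp/myscript.log),
# and replace the trailing command with the real job, e.g. /Users/myusername/backup.sh.
env -i HOME="$HOME" SHELL=/bin/sh PATH=/usr/bin:/bin \
    sh -c 'echo "PATH=$PATH"'
```

If the script fails under this stripped-down environment but works from your shell, a missing variable (most often PATH) is the culprit.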

catalina.sh jdpa start doesn't start the server

I was trying to debug a webapp, so I wanted to attach a remote debugger to Apache Tomcat. But when I run the command "catalina.sh jdpa start" it doesn't start the server; instead it shows me this:
./catalina.sh jdpa run
Using CATALINA_BASE: /Users/rsingh/work/apache-tomcat-7.0.27
Using CATALINA_HOME: /Users/rsingh/work/apache-tomcat-7.0.27
Using CATALINA_TMPDIR: /Users/rsingh/work/apache-tomcat-7.0.27/temp
Using JRE_HOME: /System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home
Using CLASSPATH: /Users/rsingh/work/apache-tomcat-7.0.27/bin/bootstrap.jar:/Users/rsingh/work/apache-tomcat-7.0.27/bin/tomcat-juli.jar
Usage: catalina.sh ( commands ... )
commands:
debug Start Catalina in a debugger
debug -security Debug Catalina with a security manager
jpda start Start Catalina under JPDA debugger
run Start Catalina in the current window
run -security Start in the current window with security manager
start Start Catalina in a separate window
start -security Start in a separate window with security manager
stop Stop Catalina, waiting up to 5 seconds for the process to end
stop n Stop Catalina, waiting up to n seconds for the process to end
stop -force Stop Catalina, wait up to 5 seconds and then use kill -KILL if still running
stop n -force Stop Catalina, wait up to n seconds and then use kill -KILL if still running
configtest Run a basic syntax check on server.xml - check exit code for result
version What version of tomcat are you running?
Note: Waiting for the process to end and use of the -force option require that $CATALINA_PID is defined
I do not see the log files being created for Tomcat, and I do not see any error in the syntax I am starting the server with. Has anyone faced this before?
You have a typo ("p" and "d" reversed). The command should be:
catalina.sh jpda start
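If helpful, the JPDA transport and listen address can be overridden through environment variables that catalina.sh reads before startup (defaults vary by Tomcat version, so check the header comments of your catalina.sh):

```shell
# Sketch: start Tomcat with the JPDA debugger listening on port 8000.
# JPDA_TRANSPORT and JPDA_ADDRESS are read by catalina.sh itself.
export JPDA_TRANSPORT=dt_socket
export JPDA_ADDRESS=8000
./catalina.sh jpda start
```

Your IDE's remote debugger can then attach to the configured address.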