Cron job permission denied

I'm running a Python script called TGubuntu.py.
I used ls -l, and the permissions of the script are -rwxrwxrwx 1 ubuntu ubuntu 503 Jan 13 19:07 TGubuntu.py, which should mean that anyone can execute the file, right?
But the log still shows /bin/sh: 1: /home/ubuntu/TestTG/TGubuntu.py: Permission denied for some reason.
When I run the script manually it works perfectly.
Any ideas?
I put it in the root crontab (sudo crontab -e) like this:
* * * * * /home/ubuntu/TestTG/TGubuntu.py
But even in the root (cron) mail log it says Permission Denied!
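A common workaround for this class of failure (not confirmed as the fix in this thread) is to sidestep the exec bit and shebang entirely by invoking the interpreter in the crontab line itself; the python3 path and log location here are assumptions:
* * * * * /usr/bin/python3 /home/ubuntu/TestTG/TGubuntu.py >> /tmp/TGubuntu.log 2>&1
Redirecting stdout and stderr to a file also makes the real error visible without digging through cron's mail.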

I couldn't figure out what the problem was, so I accomplished my goal using a different method.
I ran a Python script that uses the schedule module to call my script, then just let that "timer" run in a screen session indefinitely.
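A minimal sketch of that workaround, assuming the runner script is named run_scheduler.py (the original post doesn't name it):
# start the scheduler in a detached screen session named "tg-timer"
screen -dmS tg-timer python3 /home/ubuntu/TestTG/run_scheduler.py
# reattach later to check on it
screen -r tg-timer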


Can't save to crontab via SSH, but can when logged in locally

I have a remote headless server (macOS Big Sur 11.3.1). When I log in via ssh (as either the root user or a regular user), I am unable to save to the crontab.
When I use the following command:
% crontab -e
I can see a cronjob that I saved when I was logged in locally (not via ssh). After editing and exiting the crontab, I get the following error:
crontab: installing new crontab
crontab: tmp/tmp.1028: Operation not permitted
crontab: edits left in /tmp/crontab.kKYx3tt4c1
While logged in via ssh, I instead tried to edit the crontab with this command:
% sudo crontab -e
To my surprise, the cronjob that I saved when logged in locally is not listed. It is as if it were a different crontab for a different user. In any case, I can't save to the crontab when using sudo either. It gives the exact same error as above.
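Crontabs are in fact per-user, and sudo crontab edits root's table, so this part is expected; a quick way to compare the separate tables (alice is a placeholder username):
crontab -l                 # the invoking user's crontab
sudo crontab -l            # root's crontab, a separate file
sudo crontab -u alice -l   # another user's crontab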
I have followed the advice of a few internet posts suggesting granting the cron and sshd executables "Full Disk Access" through macOS System Preferences. However, the same error persists.
I'm not sure what to try next.
So the issue was solved by giving sshd-keygen-wrapper full disk access. Don't ask me why that needs it, but it is working now. I hope this helps anyone with the same issue.
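A quick check that the fix took, run over ssh:
crontab -e   # make an edit and save
crontab -l   # the entry should be listed, with no 'Operation not permitted'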

Getting "chmod(): Operation not permitted" on "composer update"

When I run 'composer update' I get this error:
Writing lock file
Generating autoload files
[ErrorException]
chmod(): Operation not permitted
*It works just fine with sudo, but then I have to reset the owner & permissions, which is really annoying...
**I also tried to reset the owner of ~/.composer to www-data with 777, no effect.
***I'm using Ubuntu 16.04 LTS + Apache/2.4.18 & php7.0.26
Any idea?
chmod will only work without sudo if the owner of the file is the same as the one running the composer update command.
The problem is that the error message doesn't tell you which file it's trying to chmod.
This depends on the project.
Running the command in verbose mode will give you more details:
composer update -v
In my case, it gave me a stack trace, showing which file called chmod(), and the line number.
However, it didn't give me the path of the file passed to chmod().
I had to add a simple echo right before the call to chmod() (without forgetting to remove it afterwards).
Once you know which file/folder is responsible for the error message, change its owner with chown.
In my case (Magento 2.3), the culprit was the bin/magento file, which needs to be owned by the user running the composer commands.
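A minimal sketch of that repair for the Magento case (assuming composer runs as your login user):
ls -l bin/magento                    # see who owns the file now
sudo chown "$(id -un)" bin/magento   # hand ownership to the current user
composer update -v                   # chmod() should now succeed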

at command in Ubuntu/Apache: error 'You do not have permission to use at'

I am pretty new at PHP and Ubuntu. I have 2 servers set up, one for development and one for staging. On the dev machine I can use the at command without a problem, but on staging I get a permissions error. The at.deny (and at.allow) files are identical on both servers, so it must be another permissions issue.
Any clues?
I see that on the staging server I can only use the at command as root. How can I fix this to be able to use the at command as www-data? Again... I checked the at.allow and at.deny files... they are not the problem here.
1) Check whether the file /etc/at.allow exists.
If it does, add your user on a new line.
If it doesn't, look for your user in /etc/at.deny and remove or comment out that line.
2) Restart the at daemon:
sudo service atd restart
3) Check:
at -l
or
sudo -u myuser at -l
The permission error should no longer appear; a consolidated sketch for the www-data case follows.
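A consolidated sketch for the www-data case (assuming /etc/at.allow exists; if not, remove www-data from /etc/at.deny instead):
echo 'www-data' | sudo tee -a /etc/at.allow   # whitelist the Apache user
sudo service atd restart
sudo -u www-data at -l   # should list jobs instead of a permission error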

Setup Amazon S3 backup on QNAP using s3cmd

I own a QNAP TS-219P and I want to set this up manually using s3cmd.
I did quite a bit of research on this, and here are the references I got:
http://web.archive.org/web/20091120211330/http://codemonkeybrown.com/qnaps3.html
http://wiki.qnap.com/wiki/Running_Your_Own_Application_at_Startup
http://wiki.qnap.com/wiki/Add_items_to_crontab
http://blog.wingateuk.com/2013/03/cloud-backup-on-qnap-nas.html?showComment=1413660445187#c8935766892046800936
I'm trying to get the s3cmd to work on my TS-219P.
I got everything to work on the command line, including running the script file (s3-backup.sh):
#!/bin/bash
/share/maintenance/s3cmd-1.5.0-rc1/s3cmd --rr sync -rv /share/all-shared-folders/emilie/ s3://kingjim-backup/kingjim-nas/emilie/ >> /share/maintenance/log/s3cmd/backup_`date "+%Y%m%d-%H-%M"`.log
(I also tried #!/bin/sh as the shebang, and running s3cmd via Python by putting /usr/bin/python in front.)
If I run it from the SSH command prompt, it works perfectly.
The problem, though, is the cron job. I can confirm the cron job triggered and ran, because my log file (the one above) was generated, but the log is always empty, even though I'm sure some new files were created or modified.
This is my cronjob task:
14 3 * * * /share/maintenance/s3-backup.sh 2>&1 | logger
I've tried a number of variations on the above, but couldn't find out what was missing.
I feel like some dependency is missing when the job runs from cron, compared to when I run it from the command prompt, but I don't know how to debug crontab.
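A common way to debug exactly this (a sketch, not from the original thread; the dump path is arbitrary) is a temporary crontab entry that records cron's environment:
* * * * * env > /share/maintenance/log/cron-env.txt 2>&1
Diffing that file against the output of env in the SSH session usually exposes the missing piece, typically PATH or HOME; s3cmd in particular looks for its default config under $HOME.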
It turned out that the problem was that the s3cmd configuration file was not found when running from cron.
So the fix was simply to copy the .s3config file to a safe shared folder, and then call s3cmd with the --config parameter followed by the file path.
Like this:
/share/maintenance/s3-backup/s3cmd/s3cmd --config /share/maintenance/s3-backup/s3cmd.config --rr sync -rv /share/MD0_DATA/ s3://xxx-backup/xxx-nas/ >> /share/maintenance/s3-backup/logs/backup_`date "+%Y%m%d-%H-%M"`.log 2>&1

rake task in cron

I have a shell script (/home/user/send_report.sh) that runs my rake task:
cd /home/user/rails/app
/home/user/.rvm/gems/ruby-1.9.2-p136/bin/rake report:send
When I run it in the console (sh /home/user/send_report.sh) it works properly.
I am trying to run the script from cron: */10 * * * * sh /home/user/send_report.sh, but nothing happens! The rake task should send mail, but it never does.
Content of /var/log/cron.log:
Jun 2 21:40:01 ubuntu CRON[1253]: (user) CMD (sh /home/user/send_report.sh)
Jun 2 21:40:01 ubuntu CRON[1251]: (user) MAIL (mailed 240 bytes of output but got status 0x0001#012)
Please help me get the rake task working with crontab.
Apart from the fact that you should use /bin/sh, I don't see anything wrong with the cron job. When you run it manually, you get the email, right? It only fails when run from cron? It could be a misconfiguration of the mail server, or the mail server port might be blocked.
The problem was that an RVM single-user installation doesn't support cron tasks. I installed RVM as multi-user and the crontab worked properly.
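The MAIL line in the cron log above also suggests a general debugging step: cron mailed 240 bytes of output with a non-zero exit status, and that output can be captured in a file instead (the log path here is arbitrary):
*/10 * * * * /bin/sh /home/user/send_report.sh >> /tmp/send_report.log 2>&1
The captured stderr should show the underlying error, in this case from RVM, directly.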