I'm trying to set up a crontab to execute at set intervals. The crontab job is set up as part of my PHP-Slim application running on Apache. For some reason the job is never added to the crontab, so when I run the command:
crontab -u daemon -l
It says 'no crontab for daemon' (daemon is the default Apache account). I did manage to get the cron job added manually using another account (and it executes with no further issues), so it's most likely a permissions issue. What is the best way to troubleshoot this without resorting to things like chmod 777? It will be a production server, so I need to be careful with setting permissions and documenting them.
Managed to find the answer just after posting.
I looked in the log file for cron:
cat /var/log/cron
Lots of (daemon) AUTH (crontab command not allowed) error messages. Some further googling led me to look at /etc/cron.allow, which doesn't exist, but /etc/cron.deny does, and the daemon account was listed there. Problem solved.
By default we do not allow the user daemon to run crontab jobs. If you want that user to run crontab jobs, you would need to modify /etc/cron.deny and remove the daemon user from there.
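For example, assuming a standard cron setup, you could confirm the entry is there and remove it like this (run as root, and back up the file first; this is just a sketch):
grep daemon /etc/cron.deny
sed -i '/^daemon$/d' /etc/cron.deny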
Hope it helps.
I installed Docker Machine and then created a new docker machine on Windows 10.
Now I run docker-machine ls to see the list of docker machines.
Now I run the following command
docker-machine start hypervdockermachine
Now I am stuck at this
Waiting for SSH to be available...
Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded
I have seen the GitHub issue here, but it is not clear what to do.
Is there a way to solve this problem? I am not very familiar with SSH.
UPDATE
I just found a workaround.
You can run the above commands with Git Bash.
Most important, you must run Git Bash as admin, or you will end up scratching your head.
Even the basic
docker-machine ls
will not show anything unless you run it as admin.
Finally, if you are seeing the following error:
Unable to query docker version: Get https://192.168.0.105:2376/v1.15/version: x509: certificate signed by unknown authority
then you have to look at this issue:
docker-machine regenerate-certs yourdockermachinename
If needed, use the --force option.
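For example, with the machine name from earlier:
docker-machine regenerate-certs --force hypervdockermachine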
I ran into the same problem after I moved .docker to partition D: and created a symlink to it from C:\Users\username\.docker, following this SO answer. I removed the old machines and configured new ones, and tried to regenerate the certs as suggested in the OP's workaround, but that did not solve the problem.
After googling, I found this OpenSSH wiki page and suspected that the cause of the problem was related to permissions.
I was able to solve the problem by doing two things:
Delete .ssh (source)
Fix the permissions on D:\path\to\.docker, allowing only SYSTEM, Administrators, and my user to have full control (source). These are the same permissions .docker had when it was under C:\Users\username\, but moving the folder to another partition made it inherit different ones. To avoid dealing with it too much, I kept inheritance enabled and changed the permissions directly on D: rather than on the .docker folder.
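For reference, one possible way to apply such an ACL directly to the folder (instead of editing the inherited permissions on D:) is with icacls from an elevated prompt; the path below is a placeholder and this is only a sketch:
icacls "D:\path\to\.docker" /inheritance:r /grant:r "SYSTEM:(OI)(CI)F" "Administrators:(OI)(CI)F" "%USERNAME%:(OI)(CI)F" /t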
I am developing a simple Flask application (served by an Apache web server) which provides a web interface for Docker management. My Apache server runs as the 'www-data' user and uses the same account for all of its API operations.
But I get a 'Permission denied' error for commands like the following,
docker images
docker run, etc…
because the 'www-data' user is not allowed to run the above commands.
Can you please suggest how to let the 'www-data' user run Docker operations?
I don't want to add 'www-data' to the sudoers list.
Would adding the user to the docker group alone be a proper solution?
Or please suggest a best-practice solution for this.
Thanks
GuruPrasad
It would be easier, clearer, and no less dangerous to tell Apache to run your process as root.
Remember that, if you can run any Docker command at all, you can trivially get unrestricted root-level access to anything on the system. For example, if your tool decides it really does want www-data to be in the host's sudoers list, it can
docker run --rm -v /:/host busybox \
  sh -c 'echo "www-data ALL=(ALL) NOPASSWD: ALL" >> /host/etc/sudoers'
Depending on what your management tool does, it is potentially offering the same unprotected root-level access to the host to anyone who can reach the web page. Even if it isn't, you need to be extremely careful with how you invoke Docker (another SO answer I was looking at could have rooted the system if a user was able to create a directory with an arbitrary name and run the script from there, for instance).
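If, with those risks understood, you still want www-data to talk to the Docker daemon without sudo, the usual approach is membership in the docker group (the group name assumes a standard Docker package install, and the Apache service name varies by distribution); Apache needs a restart so the new group membership takes effect:
sudo usermod -aG docker www-data
sudo systemctl restart apache2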
I saw this as a way to resolve an authentication issue when a Jenkins build script uploads to an AWS instance, but what exactly does this mean? How should I execute it? I can't execute it in a terminal, as jenkins is not a command, but I do have Jenkins running locally.
This:
jenkins ALL=(ALL) NOPASSWD: ALL
means that a user named jenkins may run ALL commands, as any user, without entering a password.
It is configuration syntax for the program sudo, which allows users to run programs with the security privileges of another user, by default the superuser.
It definitely creates a security issue, but you need to find a way to deal with it, since it gives "root" privileges to the user.
You can read more about sudo (sudoers) here: https://www.sudo.ws/intro.html
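Note that this line is not something you execute; it belongs in the sudoers configuration, which you should edit with visudo so syntax errors are caught. A sketch, assuming a distribution that includes /etc/sudoers.d:
sudo visudo -f /etc/sudoers.d/jenkins
and inside that file you could narrow the rule to the one command Jenkins actually needs instead of ALL, for example:
jenkins ALL=(ALL) NOPASSWD: /usr/bin/rsync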
I'm trying to set up a cron task that gets a file via FTP; however, it seems to fail due to file permissions.
The code runs perfectly in the browser, i.e. when Apache is the owner, but fails when cron runs the same page.
I'm assuming this is a directory/file permission error; if so, who should I set the directory owner to for cron jobs?
Most likely Dan's thought is going to be your problem. However, if it works from a browser, you can also call the page like this:
wget -q "http://www.domain.com/path/to/script/script.whatever" >/dev/null 2>&1
If you still get errors, you can remove the >/dev/null 2>&1 part and, provided your email address is set correctly in the domain administrator account, the output, including errors, should get emailed to you.
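In that case the cron entry itself would look something like this (the schedule is only an example):
*/15 * * * * wget -q "http://www.domain.com/path/to/script/script.whatever" >/dev/null 2>&1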
As for the correct permissions, don't change the default Plesk ones or you will get issues with normal FTP.
Defaults are:
everything under httpdocs = ftpuser.psacln
anything written by PHP/Apache = apache.apache, unless you are running PHP as a CGI on that domain, in which case those files will belong to the FTP user as well.
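If ownership has already been changed and you need to put the FTP-owned content back to the defaults, something along these lines would do it (the vhost path and domain are placeholders, and it only applies to files that should belong to the FTP user):
chown -R ftpuser:psacln /var/www/vhosts/domain.com/httpdocs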
-sean
Cron jobs will run as the user that created them. More likely than a permissions error is a path error. If you're not specifying full absolute paths to the program/script to run, and to any files you reference, you'll likely have problems, as cron won't have the same PATH in its environment as Apache does, or as you do at your shell prompt.
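A sketch of what that looks like in a crontab; the paths and schedule here are assumptions, not values from the question. You can set PATH at the top of the crontab, use absolute paths in the command, or both:
PATH=/usr/local/bin:/usr/bin:/bin
*/30 * * * * /usr/bin/php /var/www/html/fetch_ftp_file.php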
I set up Jenkins CI to deploy my PHP app to our QA Apache server and ran into an issue. I successfully set up public-key authentication from the local jenkins account to the remote apache account, but when I use rsync, I get the following error:
[jenkins@build ~]# rsync -avz -e ssh test.txt apache@site.example.com:/path/to/site
protocol version mismatch -- is your shell clean?
(see the rsync man page for an explanation)
rsync error: protocol incompatibility (code 2) at compat.c(64) [sender=2.6.8]
[jenkins@build ~]#
One potential problem is that the remote apache account doesn't have a valid shell; should I create a remote account with shell access that is part of the "apache" group? It is not an SSH key problem, since ssh apache@site.example.com connects successfully but quickly kicks me out, because apache doesn't have a shell.
That would probably be the easiest thing to do. You will probably want to set it up with a limited shell like rssh or scponly, to allow only file transfers. You may also want to set up a chroot jail so that it can't see your whole filesystem.
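For example, if you create a dedicated transfer account in the apache group, you could give it a restricted login shell along these lines (the account name "deploy" and the scponly path are assumptions; the exact shell path depends on your distribution and which package you install):
sudo useradd -m -G apache -s /usr/bin/scponly deploy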
I agree that that would probably be the easiest thing to do. We do something similar, but use scp instead. Something like:
scp /path/to/test.txt apache@site.example.com:/path/to/site
I know this is a pretty old thread, but if somebody comes across this page in the future...
I had the same problem, and it went away when I fixed my .bashrc.
I removed the statement "echo setting DISPLAY=$DISPLAY" that was in my .bashrc. rsync has issues with that statement because anything your .bashrc prints to stdout gets mixed into rsync's data stream over SSH and breaks the protocol.
So, fixing .bashrc/.cshrc/.profile errors helped me.
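If you want to keep messages like that in your .bashrc, you can guard them so they only run in interactive shells, which keeps non-interactive sessions (like the one rsync opens over SSH) clean. A minimal sketch:
case $- in
*i*) echo "setting DISPLAY=$DISPLAY" ;;
esac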