at command on Ubuntu: Apache error 'You do not have permission to use at'

I am pretty new to PHP and Ubuntu. I have two servers set up, one for development and one for staging. On the dev machine I can use the at command without a problem, but on staging I get a permissions error. The at.deny (and at.allow) files are identical on both machines, so it must be another permissions issue.
Any clues?
I see that on the staging server I can only use the at command as root. How can I fix this so that I can use the at command as www-data? Again, I checked the at.allow and at.deny files; they are not the problem here.

1) Check whether the file /etc/at.allow exists.
If it exists, add your user on a new line.
If it does not exist, look for your user in /etc/at.deny and remove or comment out that line.
2) Restart "at" daemon:
sudo atd restart
3) Check:
at -l
or
sudo -u myuser at -l
The permission error should no longer appear.
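For the original question (running at as the Apache user), a minimal sketch of those same steps, assuming www-data is the web server user and a reasonably recent Ubuntu:
# Allow the Apache user to use at
echo "www-data" | sudo tee -a /etc/at.allow
# Make sure www-data is not listed in the deny file
sudo sed -i '/^www-data$/d' /etc/at.deny
# Restart the at daemon (sudo service atd restart on older releases)
sudo systemctl restart atd
# Verify: should list pending jobs (or nothing) without a permission error
sudo -u www-data at -l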


Apache Airflow command not found with SSHOperator

I am trying to use the SSHOperator to SSH into a remote machine and run an external application through the command line. I have set up the SSH connection via the admin page.
This section of code is used to define the commands and the SSH connection to the external machine.
sshHook = SSHHook(ssh_conn_id='remote_comp')
command_1 = """
cd /files/232-065/Rans
bash run.sh
"""
Where 'run.sh' runs the shell script:
#!/bin/sh
starccm+ -batch run_export.java Rans_Model.sim
Which simply runs the commercial software starccm+ with some options I have specified.
This section defines the task:
inlet_profile = SSHOperator(
    task_id='inlet_profile',
    ssh_hook=sshHook,
    command=command_1
)
I have confirmed the SSH connection works by giving a simple 'ls' command and checking the output.
The error that I get is:
bash run.sh, error: run.sh: line 2: starccm+: command not found
The command in 'run.sh' works when I am logged into the machine (it does not require a GUI). This makes me think that the SSH session Apache Airflow opens is not the same as the one I log into interactively, but I am not sure how to solve this problem.
Does anyone have any experience with this?
There is no issue with the SSH connection (at least not judging from the error message); the issue is with the starccm+ installation path. A command run over SSH this way uses a non-interactive shell, which typically does not source your login profile, so $PATH can differ from what you see when logged in interactively.
Please check the installation path of starccm+.
Check whether the installation path is part of the $PATH environment variable:
$ echo $PATH
If not, then install it in one of the standard locations like /bin or /usr/bin (provided they are included in $PATH), or export the installation directory into the PATH variable like this:
$ export PATH=$PATH:/<absolute_path>
It is not ideal, but if you struggle with setting the PATH variable you can also run starccm+ by specifying the full path, like:
/directory/where/star/is/installed/starccm+ -batch run_export.java Rans_Model.sim
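As a sketch, run.sh could either extend PATH itself or call the binary by its absolute path; the directory below is a placeholder for wherever starccm+ is actually installed:
#!/bin/sh
# Hypothetical install location -- replace with the real starccm+ directory
STARCCM_DIR=/opt/starccm/bin
# Option 1: add the install directory to PATH for this non-interactive session
export PATH="$PATH:$STARCCM_DIR"
starccm+ -batch run_export.java Rans_Model.sim
# Option 2 (equivalent): call the binary by its absolute path instead
# "$STARCCM_DIR/starccm+" -batch run_export.java Rans_Model.sim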

Can't save to crontab via SSH, but can when logged in locally

I have a remote headless server (macOS Big Sur 11.3.1). When I log in via SSH (as either the root user or a regular user), I am unable to save to the crontab.
When I use the following command:
% crontab -e
I can see a cronjob that I saved when I was logged in locally (not via ssh). After editing and exiting the crontab, I get the following error:
crontab: installing new crontab
crontab: tmp/tmp.1028: Operation not permitted
crontab: edits left in /tmp/crontab.kKYx3tt4c1
While logged into ssh, I have instead tried to edit the crontab with this command:
% sudo crontab -e
To my surprise, the cronjob that I saved when logged in locally is not listed. It is as if it is a different crontab for a different user. In any case, I can't save to the crontab when using sudo either. It gives the exact same error as above.
I have followed the advice of a few internet posts suggesting allowing the cron and sshd executables "Full Disk Access" through the Mac System Preferences. However, the same error persists.
I'm not sure what to try next.
The issue was solved by giving sshd-keygen-wrapper Full Disk Access as well. Don't ask me why it needs that, but it is working now. I hope this helps anyone with the same issue.
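As a quick check after granting access, and to explain the surprise above: each user has their own crontab, so sudo crontab -e edits root's crontab rather than the regular user's. From an SSH session:
# list the logged-in user's crontab
crontab -l
# root's crontab is a separate file, hence the different contents
sudo crontab -l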

Laravel 5.4 - 500 error after install

Yesterday I installed a fresh CentOS 7 VM with Apache, MySQL and PHP 7.0.17.
After that, I installed Composer and all the other required PHP packages.
Then I followed this guide to install Firefly-iii: https://firefly-iii.github.io/using-installing.html.
So far so good. The database was migrated and seeded by the php artisan migrate command.
Now the problem: when I try to access the application from the browser, a 500 error appears. No log lines, nothing.
Alright, this might be a permissions problem. I changed the owner to apache:apache, no result. Set the storage and bootstrap/cache folders to 777, no result.
Alright... what now? Ah, maybe the user or user group is incorrect. I edited my public/index.php and built in a try/catch statement (still no log).
When I open the application in the browser finally some result is returned.
This try/catch:
try {
    $response = $kernel->handle(
        $request = Illuminate\Http\Request::capture()
    );
} catch (Exception $e) {
    echo $e->getMessage();
    echo '<br/>';
    echo 'User: ' . exec('whoami');
    echo '<br/>';
    echo 'Group: ' . exec('groups');
    echo '<br/>';
}
returns the following result:
The stream or file "/var/www/html/application-folder/storage/logs/application-name-2017-04-06.log" could not be opened: failed to open stream: Permission denied
User: apache
Group: apache
After this message I created the /var/www/html/application-folder/storage/logs/application-name-2017-04-06.log file myself and changed its permissions to 777.
Here is a little piece of my bash history:
[user#16 logs]$ sudo chmod 777 firefly-iii-2017-04-06.log
[sudo] password for user:
[user#16 logs]$ ls -l
-rwxrwxrwx+ 1 apache apache 5 Apr 6 14:18 firefly-iii-2017-04-06.log
[user#16 logs]$ chmod 777 firefly-iii-2017-04-06.log
chmod: changing permissions of ‘firefly-iii-2017-04-06.log’: Operation not permitted
This error message keeps coming back, and at this moment I have no idea what else I can try to fix this problem.
Does anyone know a solution, or has anybody else experienced this strange behavior?
Please help me; I am completely stuck at this point and don't know what else to do or how to solve this problem.
After a bit of searching yesterday I found this answer: https://stackoverflow.com/a/37258323/1805919
I tried it on my own server, and the application is now accessible through the browser.
Prove this is the problem by turning off SELinux with the command
setenforce 0
This should allow writing, but you've turned off added security server-wide. That's bad. Turn SELinux back on:
setenforce 1
Then finally use SELinux to allow writing of the file by using this command:
chcon -R -t httpd_sys_rw_content_t storage
And you're off!
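A hedged follow-up to that answer: chcon changes can be lost when the filesystem is relabeled, so on CentOS 7 (with the policycoreutils-python package installed) the context can also be made persistent; the path below assumes the application folder from the question:
# Record a persistent SELinux context rule for the storage directory
sudo semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/application-folder/storage(/.*)?"
# Apply the recorded context to the existing files
sudo restorecon -Rv /var/www/html/application-folder/storage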
First of all, your web server user has to be in the owner group of the application path.
Second, you have to set permissions on the storage folder. Here is an example of how to set the permissions on *nix systems:
sudo chmod -R ug+rwx storage bootstrap/cache
Have you tried setting a new application key? Have you also run your migrations and created the database for the application?
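If the application key or migrations turn out to be the issue, the usual artisan commands (run from the application root) would be:
cd /var/www/html/application-folder
php artisan key:generate
php artisan migrate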

Apache fails to start with "bad user name" in stdout.log when an LDAP user is specified in httpd.conf

I am using Ubuntu 14.04.1 and Apache 2.2.31.
In httpd.conf I have
User build
Group build
Trying to start Apache:
apache/logs$ cat stdout.log
httpd: bad user name build
Before that I tried to run:
. bin/envvars
When I created a local user "test":
useradd -m test -G sudo -s /bin/bash
and specified it in httpd.conf, I was able to start Apache.
But I need to use the LDAP user "build".
Finally, I solved this issue:
sudo ltrace -f sh apachectl configtest 2>out.log
I investigated the output in out.log and found a call to getpwnam("build") that returned 0. From the documentation I understood this means "The given name or uid was not found.", yet when I ran 'id build' I could see that the user exists, along with the list of network groups it belongs to. Then I connected to another VM, where I was able to start Apache with the LDAP user, ran
ldd bin/httpd
and compared the output. One of the VMs was missing "libldap-2.3.so.0".
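A hedged sketch of those two checks, run from the Apache install directory (getent passwd goes through the same NSS lookup that getpwnam uses, so it is a quick way to see whether the LDAP user resolves):
# Does NSS (and therefore getpwnam) resolve the LDAP user?
getent passwd build
# Can the httpd binary find the LDAP library it is linked against?
ldd bin/httpd | grep -i ldap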

Unable to ssh into a newly created user on Centos on first attempt

This is what I am doing:
Creating a new server on Linode. The OS is CentOS 6.5.
Logging in as root.
Running the following script to add a user called shortfellow, which does not have a password.
The script is:
#!/bin/bash
yum -y update
adduser shortfellow
mkdir -p /home/shortfellow/.ssh
echo "ssh-rsa REALLYLONGSSHPUBLICKEY shortfellow#example.io" >> /home/shortfellow/.ssh/authorized_keys
chmod -R 700 /home/shortfellow/.ssh
chown -R shortfellow:shortfellow /home/shortfellow/.ssh
su - shortfellow
exit
The problem is that the first time I try to SSH into the system, it does not work at all; it simply asks for a password. If I hit Ctrl+C and try to SSH in again as the same user, it works.
This behaviour is really annoying because I am writing code to create the server programmatically, and it fails because of this silly issue.
Does anyone have any idea why this might not be working as expected?
I added /sbin/mkhomedir_helper shortfellow to the script before the exit, and it works correctly after that.
I guess the issue was that the home directory for a user is only created at login, and when I created the user programmatically this did not happen for some reason.
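For reference, a hedged revision of the provisioning script with that fix folded in (the public key comment is kept as a placeholder from the question; the interactive su/exit at the end is dropped, and the authorized_keys permissions are tightened to the usual 700/600):
#!/bin/bash
yum -y update
adduser shortfellow
# Create the home directory before populating .ssh
/sbin/mkhomedir_helper shortfellow
mkdir -p /home/shortfellow/.ssh
echo "ssh-rsa REALLYLONGSSHPUBLICKEY shortfellow@example.io" >> /home/shortfellow/.ssh/authorized_keys
chmod 700 /home/shortfellow/.ssh
chmod 600 /home/shortfellow/.ssh/authorized_keys
chown -R shortfellow:shortfellow /home/shortfellow/.ssh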