So I just recently downloaded the Apache HTTP Server with all of its files (httpd, apr, apr-util, pcre), following the instructions here: http://httpd.apache.org/docs/2.4/install.html
However, after set-up, when I tried to start my Apache server, which is located in /usr/local/bin/, I was prompted with this message:
[allen@allen-lnx ~]$ /usr/local/bin/apachectl start
(13)Permission denied: AH00091: httpd: could not open error log file /usr/local/logs/error_log.
AH00015: Unable to open logs
After some research, I found that I need to edit my httpd.conf file, which I did earlier to set the correct ServerName and Listen options. However, I am unsure how to edit my conf file to allow access to the "logs" directory.
Notably, the command will run when I use "sudo", but I would prefer not to always use that since it seems like a workaround.
Any help would be appreciated. Thanks!
Edit: I've actually noticed that I may have two httpd.conf files, which is proving to be a little troublesome. The other one is located in my root /etc/ directory (etc/httpd/conf/httpd.conf). I think my modified question now is... which one should I be keeping? Is the /etc/ version the one that is built in, as indicated by faff's comment below?
Current Solution: I figured I would just accept the fact that I need to use sudo when editing this file since I need to be root. I might change it later so that I'm always running as root, but for now, sudo will suffice.
This looks like an issue with the filesystem permissions. Make sure the /usr/local/logs/ directory exists and is writable by the user you're running Apache as.
If you don't want to have your logs directory writable by a normal user, you can create the log file:
sudo touch /usr/local/logs/error_log
And then change the owner of the file to the correct user:
sudo chown allen /usr/local/logs/error_log
Assuming you want to run Apache as the user allen.
If you want to change the location of the Apache log file, look for the ErrorLog directive in your httpd.conf file (you will have to add it if it's not there):
ErrorLog path/to/logfile
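For instance, if you would rather point the log somewhere your own user can already write to, a minimal sketch (the /home/allen path here is only an illustration, not from the original setup) would be to create the directory first:
mkdir -p /home/allen/apache/logs
and then set in httpd.conf:
ErrorLog /home/allen/apache/logs/error_log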
For everyone who is using SELinux: if you deleted the folder or come across similar problems, you may need to do several things.
Re-link the folder with ln -s /var/log/httpd /etc/httpd/logs
By default the logs are kept under the /var folder but are referenced through the /etc/httpd/logs folder.
Apply SELinux security permissions with chcon system_u:object_r:httpd_config_t:s0 /etc/httpd/logs
And of course run everything as admin
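If you want to verify the result, ls -Z shows the SELinux labels on the symlink and on the real log directory (same paths as above):
ls -Zd /etc/httpd/logs
ls -Z /var/log/httpd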
Changing the SELinux security policy to permissive fixed my problem.
Before the fix, my SELinux was running in enforcing mode:
$ sestatus -v
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 30
I changed the security policy in the SELinux configuration file and in the running system.
#/etc/selinux/config
SELINUX=permissive
# In terminal set SELinux to run in permissive mode.
$ setenforce 0
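If you prefer to make the config-file change non-interactively, a one-liner sketch (assuming the file currently contains a SELINUX=enforcing line) is:
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config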
After the fix, my SELinux was running in permissive mode:
$ sestatus -v
SELinux status: enabled
Current mode: permissive
Mode from config file: permissive
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 30
For those who are stuck with the SELinux policies, I was able to do it by creating a custom policy.
Basically, I wanted to move /var/log/httpd to my own directory under /r/
So I ran the following:
semanage fcontext -a -t httpd_sys_content_t "/r/www(/.*)?"
semanage fcontext -a -t httpd_log_t "/r/logs(/.*)?"
restorecon -Rv /r/logs/
restorecon -Rv /r/www/
service httpd restart
# worked
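To double-check that the new contexts were recorded and applied, something like this should work (same /r/ paths as above):
semanage fcontext -l | grep '^/r/'
ls -Zd /r/logs /r/www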
Related
I'm facing a weird behavior trying to run rsync as sudo through ssh with passwordless login.
This is something I do with dozens of servers, but I'm having this frustrating problem connecting to a couple of Ubuntu 18.04.4 servers.
PREMISE
The passwordless SSH from CLIENT to SERVER with account USER works nicely.
When I'm logged in to SERVER I can sudo everything with account USER.
On SERVER I've added the following to /etc/sudoers
user ALL=NOPASSWD:/usr/bin/rsync
Now, if I launch this simple test from machine CLIENT as user USER, I receive the following sudo error message:
$ ssh utente@192.168.200.135 -p 2310 sudo rsync
sudo: no tty present and no askpass program specified
Moreover, looking in the SERVER's /var/log/auth.log I found these errors:
sudo: pam_unix(sudo:auth): conversation failed
sudo: pam_unix(sudo:auth): auth could not identify password for [user]
I am not a PAM expert, but I tested the following solution working on Ubuntu 16.04.5 and 20.04.1.
NOTE: the configuration in /etc/ssh/sshd_config is left at its defaults.
$ sudo visudo -f /etc/sudoers.d/my_config_file
Add the line below:
my_username ALL=(ALL) NOPASSWD:ALL
And don't forget to restart sshd:
$ sudo systemctl restart sshd
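A quick way to check from the client that passwordless sudo now works (the host, port, and user here are just the ones mentioned in this thread; adjust as needed) is sudo -n, which fails instead of prompting for a password:
ssh -p 2310 my_username@192.168.200.135 'sudo -n true' && echo "passwordless sudo OK"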
I've found a solution thanks to CentOS. In fact, because of the more complex configuration of /etc/sudoers on CentOS (compared to Ubuntu or Debian), I've been forced to put my additional configuration into an external file in /etc/sudoers.d/ instead of putting it directly into /etc/sudoers.
SOLUTION:
Putting additional configurations directly into /etc/sudoers wouldn't work
Putting the needed additional settings in a file within the directory /etc/sudoers.d/ will work
E.g., these are the config lines to put in a file named /etc/sudoers.d/my_config_file:
Host_Alias MYSERVERHOST=192.168.1.135,localhost
# User that will execute Rsync with Sudo from a remote client
rsyncuser MYSERVERHOST=NOPASSWD:/usr/bin/rsync
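For reference, this is roughly how the rule above gets exercised from the client: --rsync-path makes the remote side run rsync under sudo, which is exactly what the NOPASSWD entry permits (the port is the one from the question; the directories are placeholders):
rsync -av -e "ssh -p 2310" --rsync-path="sudo rsync" /local/dir/ rsyncuser@192.168.1.135:/remote/dir/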
Why didn't /etc/sudoers work? It's unknown to me even after two days' worth of Internet searching. I find this very obscure and awful.
What follows is a quote from this useful article: https://askubuntu.com/a/931207
Unlike /etc/sudoers, the contents of /etc/sudoers.d survive system upgrades, so it's preferable to create a file there rather than to modify /etc/sudoers.
To edit any configuration file used by sudo, the visudo command is preferable, i.e.:
$ sudo visudo -f /etc/sudoers.d/my_config_file
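You can also ask visudo to only check the syntax of an existing file without editing it:
$ sudo visudo -c -f /etc/sudoers.d/my_config_file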
I had a similar problem on a custom Linux server, but the solution was similar to the answers above.
As soon as I removed the line your_user ALL=(ALL) NOPASSWD:ALL from /etc/sudoers, the errors were gone.
I'm getting the following error on an NGINX server, using Let's Encrypt's free SSL certificate.
2016/06/23 19:53:13 [warn] 5013#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
2016/06/23 19:53:13 [emerg] 5013#0: BIO_new_file("/etc/letsencrypt/live/abc/fullchain.pem") failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/etc/letsencrypt/live/abc/fullchain.pem','r') error:2006D002:BIO routines:BIO_new_file:system lib)
Both problems have one root cause.
This error usually happens when you try to start nginx as a non-root user. You could try to start it as root or under sudo.
It looks like the permissions on your pem file do not allow nginx to read it when you start it as a non-root user; you could try to change the file permissions, or start nginx as root or under sudo.
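As a quick sanity check (paths taken from the error above), you can confirm who can read the certificate and whether the configuration loads when nginx runs with root privileges:
sudo ls -l /etc/letsencrypt/live/abc/
sudo nginx -t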
Hello, I also had a similar issue. It was solved this way:
setenforce 0
If your chmod and chown are all correct on the file, this is probably because you copied the file into a folder, maybe the home folder (say), then mv'd the file into position for NGINX. SELinux remembers the original file creation location and applies those rights wherever the file is mv'd to. To reset the SELinux permissions to the current location/file permissions, use
restorecon filename
This usually sorts it
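For instance, with the certificate path from the error above, checking and resetting the label might look like:
ls -Z /etc/letsencrypt/live/abc/fullchain.pem
restorecon -v /etc/letsencrypt/live/abc/fullchain.pem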
If you run the nginx worker process as the www-data user, you just need to set the rights on the /etc/letsencrypt/ folder:
chown -R www-data:www-data /etc/letsencrypt/
chmod -R 755 /etc/letsencrypt/
It should work.
In my case (Oracle Linux 8), a similar issue was resolved after changing the contexts of the .pem files.
$ chcon -t httpd_sys_content_t *.pem
In my case, I solved it by copying the files to the /etc/ssl/certs/ folder and changing the path in nginx.conf
I'm distributing load between two web servers, which means all of the Apache settings and vhosts are pretty much identical, and I wanted to make sure they stay that way by using LSync (or if there's another solution that helps with the problem I'm having, let me know)
So obviously Apache runs as the apache user, and we can't enable root SSH logins, so I created an lsync user that can SSH between the two servers using RSA keys.
And now I'm running into some permissions errors, which is kinda what I expected to happen really. What I'm trying now is I added the lsync user to the apache group, and the apache user to the lsync group... and that seems to work OK, as long as the files are chmodded to give full permissions (7) to both the user and the group...
I thought about setting a cron job to chown apache.apache every so often, and maybe even chmod +rwx for the group and user, but I'm sure that would cause some other issues.
I thought about having lsync run as the apache user, but it looks like the apache home directory needs to actually be owned by root:root, so that would cause issues with the apache user trying to SSH in and read from the .ssh directory.
I couldn't find much about this when I looked on Google... Most people just used the root user for lsync, which is out of the question.
So if anyone has a fix, that would be great! Thanks!
P.S. I know that I can allow the lsync user to execute specific commands via sudo, if I properly configure the sudoers configuration... is there a way to have it sudo chown apache.apache /var/www && sudo chmod -R u+rwx /var/www or something?
rsync has an option for forcing the permissions of the files it creates on the destination: --chmod=<blah>. lsyncd does not have direct support for this, but it can pass through rsync flags.
Try adding this to your lsyncd configuration:
_extra = {"--chmod=Dug+rwx,Fug+rw"}
That should ensure that directories, D, have read/write/execute permissions for owner and group, and files, F, have read/write permissions for owner and group. Any other permissions should be set as they are on the source server.
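For context, here is a rough sketch of where that option sits in an lsyncd config that uses the rsync-over-ssh layer (the source path, host, and target directory are placeholders, not taken from the question):
sync {
    default.rsyncssh,
    source    = "/var/www",
    host      = "lsync@webserver2",
    targetdir = "/var/www",
    rsync     = {
        _extra = { "--chmod=Dug+rwx,Fug+rw" }
    }
}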
If you need the files to be owned by the apache user then you could set up a chown cron job, as you suggest, but you might find that a constantly running script that reads the output from inotifywatch will be more responsive (and mostly idle).
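A rough sketch of such a watcher, using inotifywait from the inotify-tools package (the watched path and ownership here are placeholders, and it needs to run as root to chown):
inotifywait -m -r -e create -e moved_to --format '%w%f' /var/www |
while read -r path; do
    chown apache:apache "$path"
    chmod ug+rw "$path"
done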
You might consider having the apache user run an rsync daemon. It's little used since tunnelling rsync through ssh is more convenient and more secure, but it might help you side-step this problem.
You need to set up a configuration file, and then simply launch it with rsync --daemon using whatever init system your distro has.
You can then configure your lsyncd with target = "rsync://server/path".
If the connection between the servers is local and the network is trusted then you're done, otherwise you should configure the rsync daemon to listen only on 127.0.0.1, and then use an ssh -L port mapping to route the traffic through an encrypted tunnel (the owner of the tunnel is not important).
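Putting that together, a minimal /etc/rsyncd.conf sketch might look like this (the module name and path are placeholders; use chroot is disabled because the daemon would not run as root), started with rsync --daemon --config=/etc/rsyncd.conf:
address = 127.0.0.1
[www]
    path = /var/www
    read only = false
    use chroot = false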
I'm trying to setup Apache (httpd) with mod_wsgi to run a single Django site.
(13)Permission denied: httpd: could not open error log file /var/mail/django-error-log.
Unable to open logs
I've already done: chown apache django-error-log to make sure the ownership is set to apache and verified it with ls -l
ls -l
total 0
-rw-r--r--. 1 apache root 0 Jan 10 01:40 django-error-log
Any idea what's causing the permission denied?
It's highly possible you have SELinux in Enforcing mode.
Just disable it (SELINUX=disabled) and try again.
1.) vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
2.) And then "reboot"
3.) Try again
You can use "getenforce" command to verify the current status, as follows:
[root@instance-1 selinux]# getenforce
Disabled
[root@instance-1 selinux]#
Regards
I am trying to install a Laravel project on Google Compute Engine with "Red Hat Enterprise Linux Server 7".
I followed this blog: http://tecadmin.net/install-laravel-framework-on-centos/
I completed the Laravel project download and set up permissions for user "apache" and group "apache". After all this, I am getting this error:
Error in exception handler: The stream or file "/var/www/html/project/app/storage/logs/laravel.log" could not be opened: failed to open stream: Permission denied in /var/www/html/project/bootstrap/compiled.php:9072
Whoever had this problem earlier mentions the solution as setting the proper permissions for the log files. I have verified that the app/storage folder has the correct permissions.
I know I am missing something very simple, but could not get this working.
Any help will be greatly appreciated.
UPDATE:
These are the permissions I have applied:
chown -R apache:apache project
chmod 775 project
chmod 775 project/app/storage
chmod -R 777 project/app/storage
And these are the permissions I can see for the folder:
drwxrwxr-x. 7 apache apache 4096 Dec 23 13:54
drwxrwxr-x. 7 apache apache 84 Dec 23 13:53 storage
-rwxrwxrwx. 1 apache apache 0 Dec 23 14:01 laravel.log
I was not able to figure out if this is a RHEL 7 issue. I gave up on this after a while and created a VM with CentOS 6, which is now working properly. Thanks a lot @ykbks for helping me with this.
You need to disable SELinux.
~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted
# SETLOCALDEFS= Check local definition changes
SETLOCALDEFS=0
Changing the value of SELINUX to disabled changes the state of SELinux and the name of the policy to be used the next time the system boots.
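If you don't want to wait for a reboot, setenforce 0 switches SELinux to permissive mode for the current session (the config-file change above then takes care of subsequent boots):
setenforce 0
getenforce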