I'm developing a website in a totally offline environment. I also use GitLab Runner for CI, and the host is CentOS 7.
The problem is that GitLab Runner uses the gitlab-runner user on CentOS to deploy the Laravel application, while Apache runs Laravel as the apache user.
I got a Permission denied error from Apache until I changed the ownership of the files. After that I get this error in the Apache log:
Uncaught UnexpectedValueException: The stream or file "storage/logs/laravel.log" could not be opened: failed to open stream: Permission denied
It seems that some vendor libraries like Monolog want to write error or debug logs to storage/logs/laravel.log, but they get Permission denied. :(
.gitlab-ci.yml
stages:
  - build
  - test
  - deploy

buildBash:
  stage: build
  script:
    - bash build.sh

testBash:
  stage: test
  script:
    - bash test.sh

deployBash:
  stage: deploy
  script:
    - sudo bash deploy.sh
build.sh
#!/bin/bash
set -xe
# creating env file from production file
cp .env.production .env
# initializing laravel
php artisan key:generate
php artisan config:cache
# database migration
php artisan migrate --force
deploy.sh
#!/bin/bash
# project public and storage paths (avoid reusing the shell's built-in PWD variable)
PUB=$(pwd)'/public'
STG=$(pwd)'/storage'
# expose the public directory through Apache and hand it to the apache user
ln -s "$PUB" /var/www/html/public
chown -R apache:apache /var/www/html/public
chmod -R 755 /var/www/html/public
chmod -R 775 "$STG"
Am I using GitLab Runner correctly? How can I fix the Permission denied error?
SELinux
I found the problem and it was SELinux. As always, it was SELinux, and I ignored it at the beginning.
What's the problem:
You can see the SELinux context on files with the ls -lZ command. By default, everything under /var/www has the httpd_sys_content_t context, and the problem is that SELinux only allows Apache to read files with that type. You should change the context of storage and bootstrap/cache so they are writable.
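For example, you can inspect the current label on the Laravel storage directory like this (the path follows the layout used below; the output line is only illustrative):
ls -lZ /var/www/html/laravel/storage
# drwxrwxr-x. apache apache unconfined_u:object_r:httpd_sys_content_t:s0 logs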
There are four Apache context types:
httpd_sys_content_t: read-only directories and files
httpd_sys_rw_content_t: readable and writable directories and files used by Apache
httpd_log_t: used by Apache for log files and directories
httpd_cache_t: used by Apache for cache files and directories
What to do:
First of all, install policycoreutils-python for the extra management commands:
yum install -y policycoreutils-python
After installing policycoreutils-python, the semanage command is available, so you can change the file context like this:
semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/laravel/storage(/.*)?"
semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/laravel/bootstrap/cache(/.*)?"
Don't forget to apply the changes with these commands:
restorecon -Rv /var/www/html/laravel/storage
restorecon -Rv /var/www/html/laravel/bootstrap/cache
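You can double-check that the rules were recorded and that the files were relabeled (both commands are standard; the grep filter is just an example):
semanage fcontext -l | grep laravel
ls -lZ /var/www/html/laravel/storage /var/www/html/laravel/bootstrap/cache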
The problem is solved :)
ref: http://www.serverlab.ca/tutorials/linux/web-servers-linux/configuring-selinux-policies-for-apache-web-servers/
Related
I am getting the following error when I run docker-compose up:
Thanks a lot for your help
I resolved this problem by adding the following to the Dockerfile, after it copies the scripts to /docker-entrypoint-initdb.d/:
RUN chown -R mysql:mysql /docker-entrypoint-initdb.d/
Example Dockerfile:
FROM mysql:latest
ENV MYSQL_DATABASE NAME_DATABASE
ENV MYSQL_ROOT_PASSWORD ***********
COPY ./sql-scripts/ /docker-entrypoint-initdb.d/
RUN chown -R mysql:mysql /docker-entrypoint-initdb.d/
EXPOSE 3306
CMD ["mysqld", "--character-set-server=utf8mb4", "--collation-server=utf8mb4_unicode_ci"]
The next step is to build the image:
docker build -t image-db:latest .
Then create the container:
docker run -d -p 3306:3306 --name container-db image-db:latest
You should not override the postgres image entrypoint. It is designed to look for .sql files in the /docker-entrypoint-initdb.d/ directory (see the relevant line in the entrypoint script).
You should just mount your .sql files into /docker-entrypoint-initdb.d/ and they will be processed on startup (only if the database does not already exist).
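A minimal sketch of that, assuming the init scripts live in ./sql-scripts on the host (the container name and password are placeholders):
docker run -d --name my-postgres \
  -e POSTGRES_PASSWORD=password \
  -v "$(pwd)/sql-scripts":/docker-entrypoint-initdb.d:ro \
  postgres:latest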
I had the same issue; however, my problem was caused by the Linux user. I am using root as a runner, and the problem happened because the mounted volume on the local machine did not have the right permissions. I used chmod -R 777 on the scripts directory and it worked fine. Technically, you need to set permissions both on the local machine and in your container.
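A rough sketch of both pieces, assuming the scripts sit in ./sql-scripts on the host (777 is what worked here; a tighter mode may be enough):
chmod -R 777 ./sql-scripts
docker run -d --name container-db \
  -e MYSQL_ROOT_PASSWORD=password \
  -v "$(pwd)/sql-scripts":/docker-entrypoint-initdb.d \
  mysql:latest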
I installed NFS using this command on Fedora 32:
sudo dnf install nfs-utils
Then I created a directory to export as storage:
[dolphin#MiWiFi-R4CM-srv infrastructure]$ cat /etc/exports
/home/dolphin/data/k8s/monitoring/infrastructure/jenkins *(rw,no_root_squash)
Now I can mount this directory as the root user like this:
sudo mount -t nfs -o v3 192.168.31.2:/home/dolphin/data/k8s/monitoring/infrastructure/jenkins /mnt
Now I want to go a step further and make it available to any user from any IP (so the client can mount the NFS share without using sudo), so I first tried to chown this folder:
chown 777 jenkins
Then I wanted to change the user and group of this jenkins folder to nfsnobody:
[dolphin#MiWiFi-R4CM-srv infrastructure]$ chown -R nfsnobody jenkins
chown: invalid user: ‘nfsnobody’
I cannot find any nfsnobody entry in /etc/passwd. What should I do to fix the invalid user: ‘nfsnobody’ problem? Should nfs-utils have added it automatically?
Right now nobody is used by default, probably since RedHat/CentOS 8.
You can simply use
chown -R nobody jenkins
Or
Change it in /etc/idmapd.conf:
[Mapping]
Nobody-User = nfsnobody
Nobody-Group = nfsnobody
To put the changes into effect, restart the rpcidmapd service and remount the NFSv4 filesystem:
service rpcidmapd restart
mount -o remount /nfs/mnt/point
On Red Hat Enterprise Linux 6, if the above settings have been applied, UIDs/GIDs match on the server and client, and users are still being mapped to nobody:nobody, then clearing the idmapd cache may be required:
# nfsidmap -c
I have hosted a Docker container with PHP on a shared server in our office environment. Previously it was working fine without any issue, and all users were able to access the site via port forwarding to 8080. Here are my Dockerfile details:
# Choose Repo from Docker Hub
FROM centos:latest
# Provide details of maintainer
MAINTAINER ritu
#Install necessary software
RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum -y install http://rpms.remirepo.net/enterprise/remi-release-7.rpm
RUN yum -y install yum-utils
RUN yum-config-manager --enable remi-php56
RUN yum -y install php php-mcrypt php-cli php-gd php-curl php-mysql php-ldap php-zip php-fileinfo php-devel php-pear make gcc systemtap-sdt-devel httpd unzip postfix
RUN export PHP_DTRACE=yes
RUN curl -sS https://getcomposer.org/installer | php
RUN mv -f composer.phar /usr/local/bin/composer
RUN chmod +x /usr/local/bin/composer
RUN composer require phpmailer/phpmailer
COPY phpinfo.php /var/www/html/
COPY php.ini /var/www/
COPY httpd.conf /var/www/
RUN cp -f /var/www/httpd.conf /etc/httpd/conf/
COPY *.rpm /var/www/
#Install & Configure OCI for PHP
COPY oci8-2.0.12.tgz /
RUN tar -xvf oci8-2.0.12.tgz
RUN yum -y localinstall /var/www/*.rpm --nogpgcheck
COPY client.sh /etc/profile.d/
RUN chmod +x /etc/profile.d/client.sh
RUN cp -f /var/www/php.ini /etc/
COPY php_oci8_int.h oci8-2.0.12/
COPY Log_Check.zip /
RUN unzip Log_Check.zip
RUN cp -a -R /Log_Check/* /var/www/html/
WORKDIR /oci8-2.0.12
RUN phpize
RUN ./configure --with-oci8=/usr/lib/oracle/12.2/client64
RUN cp -f /usr/include/oracle/12.2/client64/*.h /oci8-2.0.12/include/
RUN make
RUN make install
RUN ls /var/www/html/
RUN rm -rf /var/run/apache2/apache2.pid
#Expose necessary ports
EXPOSE 80
EXPOSE 1521
EXPOSE 25
#Provide Entrypoint
CMD ["-D", "FOREGROUND"]
ENTRYPOINT ["/usr/sbin/httpd"]
Suddenly one of my friends added another Docker container with the same port 8080 on the same server. After that my container stopped, with the error below:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.18.0.3. Set the 'ServerName' directive globally to suppress this message
httpd (pid 1) already running
After several hours of googling and trying lots of commands, I decided it was easiest to remove the entire container as well as the images from the server. So I removed all containers with docker rm, followed by image deletion with docker rmi. I then recreated the Docker image on my local system (it works there) and transferred it to the server. I tried to run the container again, but faced the same issue.
I am unable to find the cause or a solution. I need some help.
First, remove ENTRYPOINT from your Dockerfile and just use:
CMD [ "/usr/sbin/httpd", "-X" ]
The AH00558 warning comes from your configuration: it is complaining that you did not set a ServerName (e.g. www.test.com). You can ignore it for now and Apache will still work. If you want to read more, see this.
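If you do want to silence the AH00558 warning, setting a global ServerName inside the image is enough; a minimal sketch (the localhost value is only an example, and the line can also be added as a RUN step in the Dockerfile):
echo "ServerName localhost" >> /etc/httpd/conf/httpd.conf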
I'm trying to deploy Moodle into Docker.
Here are the steps I followed:
First, create a new network for the application and the database:
$ docker network create moodle
Then, start a new database process in an isolated container:
$ docker run --name mysql --network moodle -e MYSQL_ROOT_PASSWORD=password -d mysql
Finally, you can run this moodle image and link it to your mysql container:
$ docker run --name my-moodle --network moodle --link mysql:database -p 8080:80 -d aesr/moodle
Access it via http://localhost:8080 or http://host-ip:8080 in a browser.
But while installing moodle I'm getting this error:
Data directory (/var/www/moodledata) cannot be created by the installer.
Maybe it's because Apache doesn't have the proper permissions. I'm running Docker on Windows.
My solution worked on CentOS 7.
Just move moodledata somewhere else, for example:
mkdir /moodledata
chown -R apache:apache /moodledata
This is because Moodle considers that folder to be exposed to the internet and will not accept it, so the installation cannot start.
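If SELinux is enforcing on that CentOS 7 host, the relocated directory will most likely also need a writable httpd context, along the lines of the SELinux answer earlier in this thread:
semanage fcontext -a -t httpd_sys_rw_content_t "/moodledata(/.*)?"
restorecon -Rv /moodledata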
I'm facing the following error on a CentOS 7 server.
I have looked at similar questions saying that it is because SELinux doesn't allow httpd to write to my /home folder. I've tried changing the owner of the folder without success; tried changing the context (chcon) of my /home to httpd_sys_rw_content_t, with the same error; tried disabling SELinux, and the error persists; and in httpd.conf I changed the User and Group from apache to test, which didn't work either. My server is:
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.4.1708 (Core)
Release: 7.4.1708
Codename: Core
and
Linux localhost 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
When I execute move_uploaded_file() from php -a as the user test, it works normally, so I see that the issue is with the apache user.
TLDR:
Do not run the setenforce 0 command; it switches SELinux to permissive mode, effectively disabling it. You should not disable SELinux, for security reasons.
The solution:
You should update the policy so SELinux allows read and write on specific directories.
To allow Apache to read and write:
chcon -R -t httpd_sys_rw_content_t /path/your_writable_dir
For read-only directories:
chcon -R -t httpd_sys_content_t /path/yourdir
For example, you can make your public (document root) directory read-only and only allow writes on the directories your app needs to write to:
# Make all read only
chcon -R -t httpd_sys_content_t /var/www/myapp
# Only allow write on uploads dir for example
chcon -R -t httpd_sys_rw_content_t /var/www/myapp/public/uploads
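Note that chcon changes live only in the file labels and can be reverted by a filesystem relabel; to make the rule persistent you can record it with semanage and reapply it with restorecon, as in the SELinux answer earlier in this thread. A sketch for the uploads directory above:
semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/myapp/public/uploads(/.*)?"
restorecon -Rv /var/www/myapp/public/uploads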