I have a container which is using a shared volume with the host. I want to give it full permissions. At present, it is:
ls -l
drwxr-xr-x 8 user user 4096 Aug 9 04:47 Data
But I want it to be:
ls -l
drwxrwxrwx 8 user user 4096 Aug 9 04:47 Data
I have the below deployment file:
----
----
spec:
  containers:
  - name: logger
    image: logger_image
    volumeMounts:
    - mountPath: /Data
      name: Data-files
    securityContext:
      privileged: true
  volumes:
  - name: Data-files
    hostPath:
      path: /home/user/Documents/Data
----
----
I have even set it as privileged, but the volume still does not have full permissions. What should I add to the deployment file to give the volume full permissions?
Thanks
The permissions on your /home/user/ or /home/user/Documents/ folders don't allow the process owner inside logger_image to access the folder and write to it.
Try creating /Data at the root of the host and setting the proper permissions on it.
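A minimal sketch of that suggestion, assuming world-writable permissions are acceptable for this data:
# on the host: create the directory at the root and open its permissions
sudo mkdir -p /Data
sudo chmod -R 777 /Data
The hostPath in the deployment would then point at /Data instead of /home/user/Documents/Data.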
I resolved this issue by adding the appropriate commands to the Dockerfile itself to give full permissions to that directory.
In the Dockerfile:
RUN mkdir -p /Data
RUN chmod -R 777 /Data
and then later used the same Kubernetes deployment file, and it worked fine with full permissions.
Thanks
I am trying to set up a local dev LEMP stack for a Slim-4 project using podman-compose. So far I have containers for PHP and Nginx. Nginx runs but gives a 500 error when trying to access the log directory: permission denied. This directory is outside of the public directory served by Nginx.
I have SELinux set to permissive to rule out its issues.
I have used podman unshare to set ownership to the container's Nginx UID:GID.
I tried the setup with only a simple index file - the file is served with no issues. So, nginx/podman has access to the nginx configuration file on the host. The issue must be with write permissions.
Here is my docker-compose file:
version: '3.7'
# Services
services:
  # Nginx Service
  nginx:
    image: nginx:1.17
    ports:
      - 8090:80
    volumes:
      - .:/var/www/php:z
      - ./.docker/nginx/conf.d:/etc/nginx/conf.d:ro
    depends_on:
      - php
  # PHP Service
  php:
    image: php:7.4-fpm
    working_dir: /var/www/php
    volumes:
      - .:/var/www/php
What am I missing?
The issue was that I incorrectly assumed I needed to set permissions to allow Nginx to have access.
Instead I needed to grant the group www-data access permissions.
How I did it:
log into the running Nginx container: podman exec -it [container ID] bash
find the www-data GID (Group ID) from the container command line: cat /etc/passwd | grep www-data
note the GID (in the result you will see something like ...x:33:33... where 33:33 is the user:group)
exit the container CLI with exit
in your development/host CLI, at the root of your project, run podman unshare chown -R 0:[the www-data GID you found above] . (don't miss the '.')
Explanation:
podman unshare puts you in a modified userspace that matches the container
chown changes ownership
-R means recursive
the number to the left of the ':' is the UID (User ID), the number to the right is the GID
the '.' is the current directory.
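Putting the steps together as a shell sketch (the GID 33 below is just the example value from the ...x:33:33... output above; substitute the GID you actually find):
podman exec -it [container ID] grep www-data /etc/passwd   # note the GID, e.g. 33
podman unshare chown -R 0:33 .                             # run from the project root on the host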
I hope this helps someone. I spent hours learning the above.
I am trying to add a Docker registry for Spinnaker using the below command:
hal config provider docker-registry account add docker-registry-test
--address docker.xyz.com --repositories dept-test/test-apps/testsvc/test-service,dept-test/test-apps/testsvc1/test-service1
--username user --password
I would like to add more repositories under the same account.
How can I add repositories?
Also, I want all of my repositories under dept-test to be available: both the repos that exist now and any new repos as they get added.
The following configuration will pick up all images from your registry, with a 5-minute cache refresh:
dockerRegistry:
  enabled: true
  accounts:
  - name: docker-registry
    providerVersion: V1
    address: https://docker.cluster.local
    cacheIntervalSeconds: 300
    clientTimeoutMillis: 60000
    cacheThreads: 1
    paginateSize: 20
    sortTagsByDate: true
    trackDigests: false
    username: docker
    passwordFile: /data/accounts/docker-registry-password
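If you would rather keep managing the account through Halyard than edit the config file directly, the account edit command takes the same repository flag as account add, so as a sketch (the newsvc/new-service entry is a made-up example) you can pass the full updated list again:
hal config provider docker-registry account edit docker-registry-test \
  --repositories dept-test/test-apps/testsvc/test-service,dept-test/test-apps/testsvc1/test-service1,dept-test/test-apps/newsvc/new-service
Leaving repositories unset, as in the YAML above, makes Spinnaker index every repository the account can see, which also covers repos added later.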
I'm new to Docker. I'm trying to switch from a traditional VM setup to a dockerized one for a bunch of websites I manage. I started with Docker Compose and WordPress; this is my docker-compose.yml file:
version: "3"
services:
blog2:
image: wordpress:4.9.6-apache
volumes:
- blog2:/var/www/html
environment:
WORDPRESS_DB_PASSWORD:
depends_on:
- mysql
mysql:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD:
volumes:
blog2:
It works and it creates a blog2 volume I can access on the main filesystem at /var/lib/docker/volumes/blog2. I can also connect with SFTP and edit files; everything works.
Files in the /var/www/html directory are owned by the www-data user. If I edit them it's fine, but if I add a new file it is owned by the user I'm using on the server (in my test case it's root, but it can be any other user). So new files cannot be modified by www-data if the webserver needs to edit or delete them.
How can I fix this problem? My idea is to add a user to every Docker container, add it to the www-data group, and chown the entire /var/www/html to this user, so that initial and future files can be read or written by both, no matter whether they were created by www-data or by this user.
Can it work? And can I write it in the docker-compose.yml file to have this set up when I do docker-compose up -d at container creation? :)
Thank you in advance.
One solution to your problem is to start the wordpress container using a different user. This is documented under Running as an arbitrary user on the dockerhub page for the wordpress image.
Inside the docker-compose file you can set the user that will be running inside the container. For instance, you can specify user 1000, which will map to user 1000 on the host machine.
Thus you can find the uid of user www-data and use that uid to start the container:
...
services:
  blog2:
    image: wordpress:4.9.6-apache
    user: 1000:1000
    volumes:
      - blog2:/var/www/html
    environment:
      WORDPRESS_DB_PASSWORD:
    depends_on:
      - mysql
...
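One way to check which uid www-data maps to in the image (a quick sketch using the same wordpress:4.9.6-apache tag; on Debian-based images it is typically 33):
docker run --rm wordpress:4.9.6-apache id www-data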
I have a Kubernetes Redis Pod whose data I need to back up and restore through dump.rdb. To restore, I put dump.rdb under /data and launch the Pod with this config:
containers:
- name: redis
  volumeMounts:
  - mountPath: /data/
    name: data-volume
volumes:
- name: data-volume
  hostPath:
    path: /data/
    type: Directory
It can see the dump.rdb from the host's /data dir, but when Redis saves any changes in the Pod, it only updates the /data dir within the Pod, not the host. My goal is to be able to back up the dump.rdb on the host, so I need the dump.rdb on the host to get updated too. What am I missing here?
Const's question helped Joe find the solution: Joe had missed where the file was actually being stored.
My suggestion: try using an NFS volume for storing and restoring backups; it may be easier than using hostPath.
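As a rough sketch of what that could look like in the Pod spec (the server and export path below are placeholders, not values from the question):
volumes:
- name: data-volume
  nfs:
    server: nfs.example.com        # placeholder NFS server
    path: /exports/redis-backup    # placeholder export path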
When running a Docker container, what is the correct way to allow Apache write access to a synced volume?
Important: I would like the folder synced so that local changes are immediately reflected inside the container, as is happening now with the current run command.
When running the container with:
docker run -v /Local/folder/toSync:/var/www/html -p 8080:80 --sig-proxy=false the-image
The Docker Apache process does not have write access to the folder because the owner and group of that folder are set to 1000 staff, where I believe 1000 stands in for my local username (which is absent from the container). Apache runs as www-data and therefore cannot write to the files.
Attempting to set the local folder's user/group to www-data results in: chown: www-data: illegal user name
What is the correct way to set permissions and/or mount the volume to permit Apache write access?
You need to put :z on the end of your volume:
-v /Local/folder/toSync:/var/www/html:z
Also, the syntax of your chown statement doesn't look right. Changing the owner of the volume should work fine, and you will still need to do this.
FROM php:7.4-apache
WORKDIR /var/www/html
EXPOSE 80
WORKDIR /var/www
RUN chown -R www-data html
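With the :z suffix applied, the run command from the question would become, for example:
docker run -v /Local/folder/toSync:/var/www/html:z -p 8080:80 --sig-proxy=false the-image
The :z option relabels the bind-mounted content for SELinux so the container can access it; it only has an effect on SELinux-enforcing hosts.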