Rancher - help on getting NFS working - VolumeDriver.Create: Failed - NFS

I've installed the NFS 0.30 stack (the latest version) from the catalog.
The options are:
MOUNT_DIR: /
MOUNT_OPTS: proto=tcp,port=2049,nfsvers=4
NFS_SERVER: xxx.xxx.xxx.xxx (a DigitalOcean droplet's public IP)
The container starts normally and seems to be working fine. Then I try to create a simple stack that uses the NFS driver, with this docker-compose:
version: '2'
services:
  web:
    image: nginx
    volumes:
      - bar:/var/bar
volumes:
  bar:
    driver: rancher-nfs
And I get this error:
(Expected state running but got error: Error response from daemon: create aaaa_bar_8fa9a: VolumeDriver.Create: Failed nsenter -t 11437 -n mount -o proto=tcp,port=2049,nfsvers=4 xx.xx.xx.xx:/ /tmp/5ht8d)

First of all, the latest version of NFS is 4.2, and the latest version of the catalog item is v0.4.0.
The MOUNT_DIR variable defines where the export point is on your server; are you sure you want to mount the root directory?
Have you created an NFS export?
After an NFS export is created, test it by upgrading your service with the new mount point variable (for example /mnt/<environment_name>/).
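For reference, a minimal sketch of what creating the export on the server could look like (the /mnt/nfs path and the <rancher-host-ip> client address are illustrative, not from the original setup). In /etc/exports on the droplet:

/mnt/nfs <rancher-host-ip>(rw,sync,no_subtree_check)

Then reload the exports and try the same mount the driver attempts, manually from a host:

$ sudo exportfs -ra
$ sudo mount -t nfs -o proto=tcp,port=2049,nfsvers=4 xxx.xxx.xxx.xxx:/mnt/nfs /mnt/test

If the manual mount fails the same way, the problem is on the server side rather than in Rancher.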

Related

docker volume, configuration files aren't generated

Same kind of issue as: what causes a docker volume to be populated?
I'm trying to share Apache's configuration files in /etc/apache2 with my host, but the files aren't generated automatically within the shared folder.
As a minimal example:
Dockerfile
FROM debian:9
RUN apt update
# Install Apache
RUN apt install -y apache2 apache2-dev
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]
docker-compose.yml
version: '2.2'
services:
  apache:
    container_name: apache-server
    volumes:
      - ./log/:/var/log/apache2
      - ./config:/etc/apache2/  # removing this volume lets the log files be generated
    image: httpd-perso2
    build: .
    ports:
      - "80:80"
With this configuration, neither ./config nor ./log is filled with the files/folders generated by the container, even though the logs should contain at least some errors (I get apache-server | The Apache error log may have more information.).
If I remove the ./config volume, the Apache log files are generated properly. Any clue why this happens? How can I share the Apache config files?
I'm having the same issue with a Django settings file; it seems to be related to config files generated by an application.
What I tried:
- using VOLUME in the Dockerfile
- running docker-compose as root, and chmod 777 on the folders
- creating files within the container in those directories to see if they appear on the host (they do)
- on the host, creating the shared folders owned by my user (they are owned by root when generated automatically)
- trying with docker run: exactly the same issue
For specs:
- Docker version 19.03.5
- using a VPS with Debian Buster
- docker-compose version 1.25.3
Thanks for helping.
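One likely cause, offered as a guess: a bind mount of a host directory always shadows whatever the image has at that path, and unlike named volumes, bind mounts are never pre-populated by Docker with the image's files. A minimal sketch of a common workaround, assuming the httpd-perso2 image built above: copy the defaults out of the image once, then bind-mount the populated folder.

$ docker create --name tmp-apache httpd-perso2
$ docker cp tmp-apache:/etc/apache2/. ./config/
$ docker rm tmp-apache

After ./config is seeded this way, the bind mount shows the expected files, and changes made on the host are visible in the container.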

docker run -v bindmount failing

I am rather new to Docker images and am trying to set up a selenium/standalone-firefox image linked to a local folder.
I'm running Docker version 19.03.2, build 6a30dfc on Windows 10 and have unsuccessfully tried to figure out the correct use of the docker run -v syntax, because the documentation I found is either unspecific (i.e. too little context for me to make sense of it) or for the wrong platform.
Running Docker as admin in cmd, I used docker run -d -v LOCAL_PATH:C:\Users\Public.
This throws docker: Error response from daemon: invalid mode: \Users\Public as an error message.
I want to bind the running container to the folder C:\Users\Public (or another folder on the host machine - this is for illustration purposes).
Can someone point me to the (I fear obvious) mistake I'm making? I essentially want the container's output data (for later scraping) to be stored in the host machine's folder C:\Users\Public. The container's output folder should be named myfolder.
** EDIT **
Digging around, I found this (see Volume Mapping).
I have thus tried the following code:
>docker run -d -p 4444:4444 --name selenium-hub selenium/hub
>docker run -d --link selenium-hub:hub -v C:/Users/Public:/home/seluser/Downloads selenium/node-chrome
While the former works fine (it only runs the container), the latter throws this error:
docker: Error response from daemon: Drive has not been shared.
Docker for Windows (and Mac) requires you to share drives to be able to volume mount - https://docs.docker.com/docker-for-windows/ (under Shared Drives).
You should be able to find it under Docker Settings > Shared Drives. Ensure your C:\ drive is selected and restart the daemon. After that, you can run:
docker run -d --link selenium-hub:hub -v C:/Users/Public:/home/seluser/Downloads selenium/node-chrome
Based on the documentation:
https://github.com/SeleniumHQ/docker-selenium
this path does not exist in the container, and it is a Linux container:
"C:\Users\Public\Documents\TMP_DOCKERS\firefox selenium/standalone-firefo"

Docker stack: Apache service will not start

Apache service in docker stack never starts (or, to be more accurate, keeps restarting). Any idea what's going on?
The containers are the ones in: https://github.com/adrianharabula/lampstack.git
My docker-compose.yml is:
version: '3'
services:
  db:
    image: mysql:5.7
    volumes:
      - ../db_files:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: toor
      #MYSQL_DATABASE: testdb
      #MYSQL_USER: docky
      #MYSQL_PASSWORD: docky
    ports:
      - 3306:3306
  p71:
    # depends_on:
    #   - db
    image: php:7.1
    build:
      context: .
      dockerfile: Dockerfile71
    links:
      - db
    volumes:
      - ../www:/var/www/html
      - ./php.ini:/usr/local/etc/php/conf.d/php.ini
      - ./virtualhost-php71.conf:/etc/apache2/sites-available/001-virtualhost-php71.conf
      - ../logs/71_error.log:/var/www/71_error.log
      - ../logs/71_access.log:/var/www/71_access.log
    environment:
      DB_HOST: db:3306
      DB_PASSWORD: toor
    ports:
      - "81:80"
  pma:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
And I start it with:
docker stack deploy -c docker-compose.yml webstack
db and pma services start correctly, but p71 service keeps restarting
docker service inspect webstack_p71
indicates:
"UpdateStatus": {
"State": "paused",
"StartedAt": "2018-01-19T16:28:17.090936496Z",
"CompletedAt": "1970-01-01T00:00:00Z",
"Message": "update paused due to failure or early termination of task 45ek431ssghuq2tnfpduk1jzp"
}
As you can see in the docker-compose.yml, I already commented out the service dependency to avoid failures if dependencies are not met at first run.
I have also checked:
$ docker service logs -f webstack_p71
$ docker service ps --no-trunc webstack_p71
What should I do to get that Apache/PHP (p71) service running?
All the containers work when run independently:
$ docker build -f Dockerfile71 -t php71 .
$ docker run -d -p 81:80 php71:latest
First of all, the depends_on option doesn't work in swarm mode with version 3 (refer to this issue).
In short...
depends_on is a no-op when used with docker stack deploy. Swarm mode
services are restarted when they fail, so there's no reason to delay
their startup. Even if they fail a few times, they will eventually
recover.
Of course, even though depends_on doesn't work, p71 should still work properly, since it would restart after the failure.
Thus, I think some error occurs while running the p71 service; that would be why it keeps restarting. However, I can't tell what is happening inside the service from the information you've provided.
You can trace it by checking the logs:
$ docker service logs -f webstack_p71
and the error message:
$ docker service ps --no-trunc webstack_p71 # check ERROR column
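If the task is failing on something transient (for example the database not accepting connections yet), you can also slow the restart loop down while debugging. A minimal sketch of a restart policy for the v3 compose file (the values are illustrative):

services:
  p71:
    deploy:
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 5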

AMC for the Aerospike server running inside Docker

I have run the Aerospike server inside a Docker container using the command below.
$ docker run -d -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 -p 8081:8081 --name aerospike aerospike/aerospike-server
89b29f48c6bce29045ea0d9b033cd152956af6d7d76a9f8ec650067350cbc906
It is running successfully. I verified it using the command below.
$ docker ps
CONTAINER ID   IMAGE                        COMMAND                CREATED              STATUS              PORTS                                                      NAMES
89b29f48c6bc   aerospike/aerospike-server   "/entrypoint.sh asd"   About a minute ago   Up About a minute   0.0.0.0:3000-3003->3000-3003/tcp, 0.0.0.0:8081->8081/tcp   aerospike
I'm able to connect to it successfully with aql.
$ aql
Aerospike Query Client
Version 3.13.0.1
C Client Version 4.1.6
Copyright 2012-2016 Aerospike. All rights reserved.
aql>
But when I launch AMC for the Aerospike server in Docker, it hangs and does not display any data. I've attached a screenshot.
Did I miss any configuration? Why is it not loading any data?
You can try the following:
version: "3.9"
services:
  aerospike:
    image: "aerospike:ce-6.0.0.1"
    environment:
      NAMESPACE: testns
    ports:
      - "3000:3000"
      - "3001:3001"
      - "3002:3002"
  amc:
    image: "aerospike/amc"
    links:
      - "aerospike:aerospike"
    ports:
      - "8081:8081"
Then go to http://localhost:8081 and enter "aerospike:3000" in the connect window.
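A brief usage note: bring the pair up with

$ docker-compose up -d

Entering "aerospike:3000" matters because AMC runs in its own container: the service name resolves to the server over the compose network, whereas localhost inside the AMC container would point at AMC itself.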

create a Docker Swarm v1.12.3 service and mount an NFS volume

I'm unable to get an NFS volume mounted for a Docker Swarm service, and the lack of proper official documentation on the --mount syntax (https://docs.docker.com/engine/reference/commandline/service_create/) doesn't help.
I have tried basically this command line to create a simple nginx service with a /kkk directory mounted on an NFS volume:
docker service create --mount type=volume,src=vol_name,volume-driver=local,dst=/kkk,volume-opt=type=nfs,volume-opt=device=192.168.1.1:/your/nfs/path --name test nginx
The command line is accepted and the service is scheduled by Swarm, but the container never reaches the "running" state, and Swarm tries to start a new instance every few seconds. I set the daemon to debug, but no error regarding the volume shows up...
What is the right syntax to create a service with an NFS volume?
Thanks a lot
I found an article that shows how to mount an NFS share, and it works for me: http://collabnix.com/docker-1-12-swarm-mode-persistent-storage-using-nfs/
sudo docker service create \
--mount type=volume,volume-opt=o=addr=192.168.x.x,volume-opt=device=:/data/nfs,volume-opt=type=nfs,source=vol_collab,target=/mount \
--replicas 3 --name testnfs \
alpine /bin/sh -c "while true; do echo 'OK'; sleep 2; done"
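To confirm the replicas actually stay in the running state (the original symptom), you can inspect the tasks; note these command names are from Docker 1.13+, where the 1.12-era docker service tasks was renamed to docker service ps:

$ docker service ps testnfs
$ docker service inspect --pretty testnfs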
Update:
In case you want to use it with docker-compose, you can do the following:
version: '3'
services:
  alpine:
    image: alpine
    volumes:
      - vol_collab:/mount
    deploy:
      mode: replicated
      replicas: 2
    command: /bin/sh -c "while true; do echo 'OK'; sleep 2; done"
volumes:
  vol_collab:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.xx.xx
      device: ":/data/nfs"
and then run it with
docker stack deploy -c docker-compose.yml test
You could also use this in docker-compose to create an NFS volume:
data:
  driver: local
  driver_opts:
    type: "nfs"
    o: addr=<nfs-Host-domain-name>,rw,sync,nfsvers=4.1
    device: ":<path to directory in nfs server>"
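A volume defined this way is consumed like any other named volume. A minimal sketch with a hypothetical nginx service (the mount path is illustrative), assuming the data definition above sits under the top-level volumes: key of the same file:

services:
  web:
    image: nginx
    volumes:
      - data:/usr/share/nginx/html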