Using docker volume with an NFS partition - nfs

I have an NFS partition on the host. If I add it to a container with
docker run -i -t -v /srv/nfs4/dir:/mnt ubuntu
/mnt will contain the shared data. But doesn't that cause conflicts, since the directory hasn't been mounted with an NFS client inside the container?

Docker uses bind mounts to share host directories with containers, and it handles the namespace permissions so that the container can access the mount. Otherwise, from the host's perspective, the bind-mounted NFS share is just being accessed by another process. It's safe to bind mount an NFS share elsewhere on the filesystem, and using it from within a Docker container is no different.
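As a sketch, this is equivalent to bind mounting the share by hand on the host (the target directory here is made up):
mount --bind /srv/nfs4/dir /mnt/nfs-view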

As of Docker 1.7+ you can use a volume plugin; see the Docker Volume Plugin section for details.
As far as NFS goes, you can use the Docker Netshare plugin, which handles mounting NFS, CIFS, and AWS EFS file systems.
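Usage looks roughly like this (the daemon invocation and the nfshost name are placeholders; check the plugin's README):
sudo docker-volume-netshare nfs
docker run -i -t --volume-driver=nfs -v nfshost/srv/nfs4/dir:/mnt ubuntu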

You have to share /srv/nfs4/ into your default docker machine. In VirtualBox, select the default (or boot2docker) VM, then Settings > Shared Folders.
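The same share can be added from the command line (the share name here is made up; the VM must be powered off, or add --transient):
VBoxManage sharedfolder add default --name srv-nfs4 --hostpath /srv/nfs4 --automount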

Related

How can I set permissions for mounted azure fileshare?

I want to deploy NextCloud on Azure Container Instances. I was able to set up the container group using Azure CLI like this:
az container create \
--resource-group NextCloud \
--name nextcloudcontainer \
--image nextcloud \
--dns-name-label somelabel \
--ports 80 443 \
--azure-file-volume-account-name myaccountname \
--azure-file-volume-account-key myaccountkey \
--azure-file-volume-share-name nextcloudfs \
--azure-file-volume-mount-path /var/lib/nextcloud/ \
--os-type Linux \
--cpu 1 \
--memory 2 \
--location germanywestcentral \
--restart-policy OnFailure
The problem is that the share at /var/lib/nextcloud/ is mounted with permissions 777, but Nextcloud requires 770. This cannot be changed with chmod afterwards, only at deployment time. How can this be achieved?
I saw this post, but I don't see how it would help: the change wouldn't survive a container restart, so I would have to redo it manually every time.
There is a way to change permissions in Azure Files at mount time with the mount parameters filemode and dirmode. However, ACI doesn't give you the flexibility to change those parameters. We are aware of this request and are working on it.
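For reference, on a plain Linux VM (outside ACI) the mount would look like this; the mount point is made up, and the actual cifs option names are dir_mode and file_mode:
sudo mount -t cifs //myaccountname.file.core.windows.net/nextcloudfs /mnt/nextcloud \
    -o vers=3.0,username=myaccountname,password=myaccountkey,dir_mode=0770,file_mode=0770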

How to get IP Address of Docker Desktop VM?

I'm on a team where some of us use Docker Toolbox and some use Docker Desktop. We're writing an application that needs to communicate with a Docker container during development.
On Docker Toolbox, I know the docker-machine env command sets the Docker host environment variables, and I can use those to get the IP of the virtual machine that's running the Docker engine. From there I just access the exposed ports.
What's the equivalent way to get that information on Docker Desktop? (I don't have a machine with Docker Desktop, only Docker Toolbox, but I'm writing code that should be able to access the Docker container on both.)
On Windows, after installing Docker, there is an entry added by Docker Desktop inside your hosts file (C:\Windows\System32\drivers\etc\hosts) which states the IP:
# Added by Docker Desktop
10.xx.xx.xx host.docker.internal
The below section got added to my /etc/hosts:
# Added by Docker Desktop
192.168.99.1 host.docker.internal
192.168.99.1 gateway.docker.internal
Then I was able to access the app by adding the port it was bound to.
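For example, with an app published on port 8080 (port assumed), that amounts to:
curl http://host.docker.internal:8080/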
This command should display the IP:
ping -q -c 1 docker.local | sed -En "s/^.*\((.+)\).*$/\1/p"
ipconfig can get you this information as well.

migrating lxc to lxd

I've looked all over but can't see if there is a way. I have a couple of LXC containers running Ubuntu 14.04 on top of an Ubuntu 14.04 host. They've become pretty important to me, so I want to be able to easily back up / migrate the LXC containers to another server if the host's hardware should fail.
I've now built a new Ubuntu 15.1 server with LXD, have logged out and back in, and can see the new group. For testing, I tarred one of my existing LXC containers with the --numeric-owner switch on my 14.04 host:
tar --numeric-owner -czvf ContToBeMoved.tgz /var/lib/lxc/my_container
--- then on the new server ---
tar --numeric-owner -xzvf ContToBeMoved.tgz -C /var/lib/lxc/
...and successfully restored the LXC container on the new 15.1 server.
When I run the LXD commands, though, LXD doesn't see the container. I tried moving the container to the /var/lib/lxd/containers directory and LXD still doesn't see it. Is there a way to edit / clone / migrate the LXC container so that I can use LXD moving forward?
Thanks in advance.
LXD uses a SQLite database for container configuration, so just dumping the container's rootfs in place won't be quite enough.
The easiest way to do what you want is to create a new container with LXD, then remove its rootfs from /var/lib/lxd/containers/NAME/rootfs and substitute the one from your original host.
Note that LXD runs unprivileged containers by default; if your source container was privileged (/var/lib/lxc/NAME/rootfs is owned by root:root instead of 100000:100000), then you'll want to run the following too:
lxc config set NAME security.privileged true
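Put together, a rough sketch of the whole swap (names taken from the question above; the ubuntu:14.04 image alias is an assumption):
lxc init ubuntu:14.04 my_container                        # create a container so LXD has a database entry
sudo rm -rf /var/lib/lxd/containers/my_container/rootfs   # drop the fresh rootfs
sudo tar --numeric-owner -xzf ContToBeMoved.tgz -C /tmp   # unpack the old container's archive
sudo mv /tmp/var/lib/lxc/my_container/rootfs /var/lib/lxd/containers/my_container/rootfs
lxc config set my_container security.privileged true      # only if the source container was privileged
lxc start my_container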

How to setup a small website using docker

I have a question regarding Docker. The container concept is totally new to me, and I am sure I haven't yet grasped how things work (containers, Dockerfiles, ...) or how they could work for me.
Let's say that I would like to host small websites on the same VM, each consisting of Apache, PHP-FPM, MySQL, and possibly Memcached.
This is what I had in mind:
1) One image that contains Apache, PHP, MySQL and Memcached
2) One or more images that contain my website files
I would need a way to tell Apache, in the first image, where the folders of the hosted websites are stored. Yet I don't know whether one container can read files inside another container.
Has anyone here done something similar?
Thank you
Your container setup should be:
MySQL Container
Memcached Container
Apache, PHP etc
Data Container (Optional)
Run MySQL and expose its port using the -p flag:
docker run -d --name mysql -p 3306:3306 dockerfile/mysql
Run Memcached:
docker run -d --name memcached -p 11211:11211 borja/docker-memcached
Run your web container and mount the web files from the host file system into the container. They will be available at /container_fs/web_files/ inside the container. Link to the other containers to be able to communicate with them over TCP:
docker run -d --name web -p 80:80 \
-v /host_fs/web_files:/container_fs/web_files/ \
--link mysql:mysql \
--link memcached:memcached \
your/docker-web-container
Inside your web container, look for the environment variables MYSQL_PORT_3306_TCP_ADDR and MYSQL_PORT_3306_TCP_PORT to tell you where to connect to the MySQL instance, and similarly MEMCACHED_PORT_11211_TCP_ADDR and MEMCACHED_PORT_11211_TCP_PORT for memcached.
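For example, a script inside the web container could connect with (assuming the mysql client is installed in the image):
mysql -h "$MYSQL_PORT_3306_TCP_ADDR" -P "$MYSQL_PORT_3306_TCP_PORT" -u root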
The idiomatic way of using Docker is to keep to one process per container. So Apache, MySQL, etc. should be in separate containers.
You can then create a data container to hold your website files and simply mount the volume in the web server container using --volumes-from. For more information see https://docs.docker.com/userguide/dockervolumes/, specifically "Creating and mounting a Data Volume Container".
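A sketch of that pattern, reusing the names from the answer above (the busybox base for the data container is an assumption):
docker create -v /container_fs/web_files --name webdata busybox /bin/true
docker run -d --name web -p 80:80 --volumes-from webdata \
    --link mysql:mysql --link memcached:memcached \
    your/docker-web-container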

Access Web Files on VirtualBox Guest Shared Folder

Okay, so my setup:
Windows 8.1 host, CentOS 6.5 guest, VirtualBox 4.3.12.
I have a folder in My Documents (Windows) that I use as a shared folder in my guest (CentOS), mounted at /var/www/htdocs/shared.
The purpose of this is to host my web project in the VM but access and edit the files from Windows, and this works pretty well. The files in the shared folder can be accessed on both host and guest and edited as needed, and I can access the web service in a browser from Windows just fine.
BUT, when I try to open the files in the shared folder from a browser, I get a 403 Forbidden error. The permissions on the guest show as rwxrwxrwx, so I don't know why I don't have permission to access them in a browser, and I can't change them in CentOS.
The ways I tried mounting the share are:
mount -t vboxsf shared shared
mount -t vboxsf -o rw,exec shared shared
mount -t vboxsf -o rw,exec,uid=1000,gid=1000 shared shared
I got the same result with each.
So, that's my issue. How can I access files in a VirtualBox shared folder from my browser on the host?
To change the permissions on the directory, you can use the dmode and fmode parameters in the mount command:
mount -t vboxsf -o rw,dmode=775,fmode=775 shared shared
You don't need to specify the uid and gid, but you do need to add the apache user to the vboxsf group:
usermod -a -G vboxsf apache
And finally, what actually made it work: you need to disable SELinux. Now I can view and edit my files in Windows and let the VM serve them in a browser. The goal was to develop on Windows but run the web app in an environment identical to the production server. Hopefully this helps someone.
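For reference, to turn SELinux off immediately and keep it off across reboots (permissive mode is the less drastic option):
setenforce 0
# and set SELINUX=permissive (or disabled) in /etc/selinux/config to persist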