The Docker best practices guide states:
You are strongly encouraged to use VOLUME for any mutable and/or user-serviceable parts of your image.
And looking at the source code of e.g. the cpuguy83/nagios image, you can clearly see this done: everything from the nagios to the apache config directories is made available as a volume.
However, looking at the same image, the apache service (and the cgi-scripts for nagios) run as the nagios user by default. So now I'm in a pickle, as I can't seem to figure out how to add my own config files in order to e.g. define more hosts for nagios monitoring. I've tried:
FROM cpuguy83/nagios
ADD my_custom_config.cfg /opt/nagios/etc/conf.d/
RUN chown nagios: /opt/nagios/etc/conf.d/my_custom_config.cfg
CMD ["/opt/local/bin/start_nagios"]
I build as normal, and try to run it with docker run -d -p 8000:80 <image_hash>, however I get the following error:
Error: Cannot open config file '/opt/nagios/etc/conf.d/my_custom_config.cfg' for reading: Permission denied
And sure enough, the permissions in the folder look like this (whilst the apache process runs as nagios):
# ls -l /opt/nagios/etc/conf.d/
-rw-rw---- 1 root root 861 Jan 5 13:43 my_custom_config.cfg
Now, this has been answered before (why doesn't chown work in Dockerfile), but no proper solution other than "change the original Dockerfile" has been proposed.
To be honest, I think there's some core concept here I haven't grasped (as I can't see the point of declaring config directories as VOLUME, nor of running services as anything other than root). So, given a Dockerfile like the one above (which follows Docker best practices by declaring multiple volumes), is the solution:
To change NAGIOS_USER/APACHE_RUN_USER to 'root' and run everything as root?
To remove the VOLUME declarations in the Dockerfile for nagios?
Other approaches?
How would you extend the nagios dockerfile above with your own config file?
Since you are adding your own my_custom_config.cfg file directly into the container at build time, just change the permissions of the my_custom_config.cfg file on your host machine and then build your image using docker build. The host machine's permissions are copied into the container image.
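As a minimal sketch of that workflow (my-nagios is just an illustrative tag; ADD preserves the file's mode bits, though ownership inside the image still ends up as root:root):

# On the host, before building: the ls output above shows mode 0660,
# so make the file world-readable; the nagios user only needs read access.
chmod 644 my_custom_config.cfg
docker build -t my-nagios .
docker run -d -p 8000:80 my-nagios

Newer Docker versions also support COPY --chown=... to set ownership at build time, which avoids the host-side chmod entirely.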
Related
I am running SRS from a Docker container. On Manjaro I can issue a docker command in terminal, and stream video from OBS to the SRS instance I just launched. I can then view live video in a web page, from that server.
BUT!
The configuration is a bit opaque. The config files refer to a directory ("./objs") that is likely defined in a config file, but I can't find it.
AND!
I'm still a bit confused about how Docker works- isn't there a straightforward way to edit an image or container?
I can see all the config files in /etc/srs (like srs.conf, rtmp.conf and so on). The config files refer to the directory ./objs for the location of (for instance) the nginx root, index.html. But I cannot find that directory.
I also have installed SRS from official repos, on the same machine. I launched srs from /usr/bin/srs (not from the Docker image), but it immediately quits. So I gave up for now. Running it from Docker seems to work pretty well.
When you start SRS with Docker, it is actually the same as starting it in a shell:
cd /usr/local/srs && ./objs/srs -c conf/srs.conf
You can also use docker inspect ossrs/srs:5 to get the information:
"Cmd": ["./objs/srs", "-c", "conf/srs.conf"],
"WorkingDir": "/usr/local/srs",
The current working directory is /usr/local/srs, so ./objs is /usr/local/srs/objs. I'll describe below why SRS uses relative directories.
The configuration file is conf/srs.conf, which is actually /usr/local/srs/conf/srs.conf, and there are lots of configuration examples in ./conf.
The examples on ossrs.io are in the same format: each uses a configuration file and depends on the working directory.
SRS uses the relative directories ./objs and ./conf because some features depend on them. You can see full.conf for all the supported configuration options.
We use relative directories because you can easily change to another directory, or start more than one SRS server. Of course, you could change the relative directories to absolute ones; search for ./objs in full.conf, for example:
pid ./objs/srs.pid;
ff_log_dir ./objs;
srs_log_file ./objs/srs.log;
http_server {
    dir ./objs/nginx/html;
}
vhost __defaultVhost__ {
    hls {
        hls_path ./objs/nginx/html;
    }
}
Please note that you can use a Docker volume to mount a host directory into the container.
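For example, a hedged sketch (the host-side path and the published ports are illustrative; 1935 and 8080 are SRS's usual RTMP and HTTP ports):

# Mount a host directory containing your own srs.conf over the
# container's conf directory (the image's working directory is /usr/local/srs):
docker run -d -p 1935:1935 -p 8080:8080 \
    -v $(pwd)/myconf:/usr/local/srs/conf \
    ossrs/srs:5 ./objs/srs -c conf/srs.conf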
I'd really appreciate any help in tracking down and diagnosing an umask issue on Ubuntu:
I'm running php5-fpm with Apache via proxy_fcgi. The process is running with a umask of 0022 (confirmed by having PHP write the result of umask() into a file; the result is 18 decimal == 0022 octal). I'd like to change this to 0002, but can't track down where the umask is coming from.
Apache is set with umask 0002, and as a test, if I disable proxy_fcgi and run my test above, I get a file with u+g having rw access (and the file contents confirm the umask as '2' == 0002).
If I sudo -iu fpmuser and run umask, the result is 0002.
System info:
PHP: 5.5.3-1ubuntu2.1
Apache: 2.4.6
Ubuntu: 13.10
PHP-FPM is listening on TCP ports (as Unix sockets aren't yet working/supported)
So far I've tried the following (each followed by a system restart and a retest):
adding umask 0002 to the start of /etc/init.d/php5-fpm
adding --umask 0002 into the start-stop-daemon calls in /etc/init.d/php5-fpm
adding umask 0002 to .profile in the home of the fpm user
Something is clearly adjusting the umask of the php-fpm process - so, how can I begin tracing what is forcing the umask 0022 onto the php-fpm process?
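For reference, one low-level check (an assumption on my part that your kernel supports it: the Umask field in /proc/<pid>/status was only added in Linux 4.7, so it won't exist on a stock 13.10 kernel, but it is handy on newer systems):

# Read the umask straight from the running master process:
pid=$(pgrep -o php5-fpm)        # -o picks the oldest match, i.e. the master
grep '^Umask' /proc/$pid/status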
EDIT (1):
adjusting the system-wide umask via /etc/login.defs (see How to set system wide umask?) affects the umask elsewhere (e.g. commands via sudo now have a umask of 0002), but php5-fpm still creates files with a umask of 0022. Note that I verified that session optional pam_umask.so was also present in /etc/pam.d/common-session-noninteractive, and I tested umasks of 002 and 0002.
EDIT (2):
I have been able to replicate the issue using nginx and php5-fpm (using unix sockets set to listen mode '0666').
I would love to trace where the umask is coming from but I'd settle for some way to force it to what I want.
I should add that the first test was done on an Amazon Ubuntu 13.10 image. My tests in 'edit 2' were completed using a copy of the Ubuntu 13.10 server ISO, set up from scratch in a virtual machine. All installations were done via apt-get rather than by downloading the source and building.
EDIT (3):
I have confirmed I can manipulate the umask manually by either of the following (verified by checking the permissions on the test file created):
a. In a shell, set a umask then run /usr/sbin/php-fpm from the shell
b. In a shell, run the following with whatever umask value I like:
start-stop-daemon --start --quiet --umask 0002 --pidfile /var/run/php5-fpm.pid --exec /usr/sbin/php5-fpm -- --daemonize --fpm-config /etc/php5/fpm/php-fpm.conf
However, this exact same command in the /etc/init.d/php5-fpm file fails to adjust the umask when running sudo service php5-fpm stop; sudo service php5-fpm start, or at reboot.
This is not a solution for generically tracing where umask settings come from on Ubuntu (the only way I've found so far is the good old hard-work approach: replicate the issue, attempt to isolate it to a script or a function, then step back through each script/function that is called, recursively), but it is a solution to the php5-fpm umask issue. I've found a lot of hits on Google, Stack Overflow, and elsewhere for the problem, but so far no solution. Hopefully this is useful for people.
Edit /etc/init/php5-fpm.conf to include the line umask 0002 (or whatever umask you wish). My version of the file now looks like this:
# php5-fpm - The PHP FastCGI Process Manager
description "The PHP FastCGI Process Manager"
author "Ondřej Surý <ondrej#debian.org>"
start on runlevel [2345]
stop on runlevel [016]
### my edit - change umask setting
umask 0002
pre-start exec /usr/lib/php5/php5-fpm-checkconf
respawn
exec /usr/sbin/php5-fpm --nodaemonize --fpm-config /etc/php5/fpm/php-fpm.conf
Explanation
Having traced through the service command which launches php5-fpm at startup: it runs some checks (line 118 in my copy) for /etc/init/${SERVICE}.conf, along with verifying that initctl is present and can report its version. If these tests pass, then upstart is used, which in the case of php5-fpm means the /etc/init/php5-fpm.conf file.
The ubuntu upstart site gives pretty clear instructions. In particular you can check out the upstart cookbook for the specifics you need.
As best I can work out, this means the service command was never actually running the start-stop-daemon … commands found in /etc/init.d/php5-fpm, which is why my previous edits had no effect. Instead it hands off to upstart (actually initctl) when you use something like service php5-fpm start, etc.
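You can confirm which system is managing the service; a quick sketch (upstart's status output looks roughly like the comment below):

# If upstart owns the job, this reports its state:
initctl status php5-fpm
# e.g.: php5-fpm start/running, process 1234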
If you use systemd, in the /etc/systemd/system directory, create a new directory called php7.2-fpm.service.d. The name of this directory will vary depending on your distro and PHP version. Run systemctl list-units --type=service | grep --ignore-case php to find out what to call it. Inside of this directory, place a file called umask.conf with the contents:
# /etc/systemd/system/php7.2-fpm.service.d/umask.conf
[Service]
UMask=0002
For the changes to take effect, run:
systemctl daemon-reload && systemctl restart php7.2-fpm
The benefit of this solution is that your customizations are not lost when packages get updated.
Explanation of how this works from the systemd manual:
Along with a unit file foo.service, a "drop-in" directory foo.service.d/ may exist. All files with the suffix ".conf" from this directory will be parsed after the file itself is parsed. This is useful to alter or add configuration settings for a unit, without having to modify unit files. Each drop-in file must have appropriate section headers. Note that for instantiated units, this logic will first look for the instance ".d/" subdirectory and read its ".conf" files, followed by the template ".d/" subdirectory and the ".conf" files there.
In addition to /etc/systemd/system, the drop-in ".d" directories for system services can be placed in /usr/lib/systemd/system or /run/systemd/system directories. Drop-in files in /etc take precedence over those in /run which in turn take precedence over those in /usr/lib. Drop-in files under any of these directories take precedence over unit files wherever located. Multiple drop-in files with different names are applied in lexicographic order, regardless of which of the directories they reside in.
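On reasonably recent systemd versions you can also let systemctl create the drop-in for you instead of making the directory by hand; systemctl edit opens an editor on an override file in the right place and reloads units when you save:

systemctl edit php7.2-fpm       # add the [Service] / UMask=0002 lines here
systemctl restart php7.2-fpm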
Better to copy the systemd unit file before editing php5-fpm.service, or it will be overwritten on the next update:
cp /lib/systemd/system/php5-fpm.service /etc/systemd/system/
vi /etc/systemd/system/php5-fpm.service
Add: UMask=0002 in [Service] section.
systemctl daemon-reload
systemctl restart php5-fpm
Source: https://ispire.me/running-php-fpm-with-different-user-group-using-umask/
Okay, but this applies to all the pools.
It would be handy to be able to set it with something like
env[umask] = 0002
(no chance for this to work)
I've been googling, but there doesn't seem to be a way to do this on a per-host basis.
I have set up an NFS file share between two CentOS 6 64-bit machines. On the server, the folder being shared was originally owned by the root user. On the client it turned up as being owned by nfsnobody. When I tried to write to the folder from the client I got a permissions error. So I changed the folder ownership on the server to nfsnobody and chmod'd it to 777. However, still no joy; I continue to get a permissions error. Clearly there is more to this. I would be much obliged to any Linux gurus out there (I personally wouldn't merit being called anything more than a newbie) who might be able to help fix this issue.
Edit - I should have mentioned that trying to write to the shared folder from the client actually manages to create a file entry. However, the file size is 0 and the permissions error is reported.
The issue here is to do with the entry in /etc/exports. It should read
folder ip(rw,all_squash,sync,no_subtree_check)
I had missed the all_squash bit. That apart, make sure that the folder on the server is owned by nfsnobody. On my setup, both my client and server nfsnobodies ended up with a user id of 65534. However, it is well worth checking this (in /etc/passwd and /etc/group) or else…
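After editing /etc/exports, re-export and double-check what the server is actually offering (serverIP is a placeholder):

exportfs -ra            # on the server: re-read /etc/exports, no restart needed
showmount -e serverIP   # on the client: list the server's exports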
Here are a couple of useful references
How to set up an NFS Server
NFS on CentOS
For the benefit of anyone looking to set up an NFS server, I give below what worked for me on my CentOS 6 64-bit machines.
SERVER
yum install nfs-utils nfs-utils-lib - install NFS
rpm -q nfs-utils - check the install
/etc/init.d/rpcbind start
chkconfig --level 235 nfs on
/etc/init.d/nfs start
chkconfig --level 35 rpcbind on
With this done you should create the folder you want to share
mkdir folder
chown 65534:65534 folder
chmod 755 folder
Now define the folder to be shared/exported. Use your favorite text editor (vi or whatever) to open/create /etc/exports and add:
folder clientIP(rw,all_squash,sync,no_subtree_check)
Note that there must be no space between clientIP and the opening parenthesis; with a space, the options would apply to all hosts rather than to that client.
CLIENT
Install, check, bind and start as above
mount -t nfs serverIP:folder clientFolderLocation
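Before moving on to the PHP test below, you can sanity-check the mount from a shell (probe.txt is just an illustrative name):

mount | grep nfs                        # the share should show up as type nfs
touch clientFolderLocation/probe.txt    # should succeed
ls -l clientFolderLocation/probe.txt    # with all_squash, owner is nfsnobody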
If all goes well you should now be able to write a little script on your client
<?php
$file = $_SERVER['DOCUMENT_ROOT']."/../nfsfolder/test.txt";
file_put_contents($file,'Hello world of NFS!');
?>
browse to it, and find that test.txt now exists on the server with the content "Hello world of NFS!". In the example I have placed my mounted folder one level above the document root.
I am working on a website powered by Node. So I have made a simple Dockerfile that adds my site's files to the container's FS, installs Node and runs the app when I run the container, exposing the private port 80.
But if I want to change a file for that app, I have to rebuild the container image and re-run it. That takes some seconds.
Is there an easy way to have some sort of "live sync", NFS-like, so that my host system's app files stay in sync with the ones in the running container?
This way I would only have to relaunch the app for changes to apply; or even better, if I use something like supervisor, that will happen automatically.
You can use volumes in order to do this. You have two options:
Docker managed volumes:
docker run -v /src/path nodejsapp
docker run -i -t --volumes-from <container id> nodejsapp bash
The file you edit in the second container will update the first one.
Host directory volume:
docker run -v `pwd`/host/src/path:/container/src/path nodejsapp
The changes you make on the host will update the container.
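Combined with the supervisor idea from the question, here's a hedged sketch using nodemon as the file-watcher (the node:lts image, /app path, and server.js entry point are all illustrative):

# Edits on the host restart the Node process inside the container:
docker run -it -v `pwd`/src:/app -w /app node:lts npx nodemon server.js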
If you are on OSX, this kind of volume share can become very slow, especially with Node-based apps (a lot of files). For this issue, http://docker-sync.io can help by providing volume-share-like synchronisation without using volume shares; this usually speeds up the container's read/write speed on the code directory by 50-80 times, depending on which docker-machine you use.
For performance details see https://github.com/EugenMayer/docker-sync/wiki/4.-Performance, and for easy examples of how to use it, see the boilerplates at https://github.com/EugenMayer/docker-sync-boilerplate. For your case, the unison example https://github.com/EugenMayer/docker-sync-boilerplate/tree/master/unison is the one you would need for NFS-like sync.
docker run -dit -v ~/my/local/path:/container/path/ myimageId
For /container/path/ you could use for instance /usr/src/app.
The flags:
-d = detached mode,
-it = interactive,
-v + paths = specifies the volume.
(If you just care about the volume, you can drop the -dit flag.)
Docker run reference
I use Skaffold's File Sync functionality for this. It gets the job done without needing overly complex configuration.
Setting up Skaffold in my project was as simple as installing Skaffold (through Chocolatey, since I'm on Windows), running skaffold init --generate-manifests in my project folder, and answering a couple of questions it asked.
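For reference, the day-to-day loop then looks like this (skaffold dev watches your files and re-syncs/redeploys; sync rules live in the generated skaffold.yaml):

skaffold init --generate-manifests   # one-time setup, as described above
skaffold dev                         # watch, sync, and redeploy on change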
I would like to have a VM to look at how applications appear and to develop OS-specific applications; however, I want to keep all my code on my Windows machine so that if I decide to nuke a VM or anything like that, it's all still there.
If it matters, I'm using VirtualBox.
This is usually handled with network shares. Share your code folder from your host machine and access it from the VMs.
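For example, assuming the code folder is shared from the Windows host over SMB (the IP, share name, and mountpoint below are illustrative), a Linux guest can mount it with CIFS:

# Mount the Windows share "code" and make the files owned by your user:
sudo mkdir -p /mnt/code
sudo mount -t cifs //192.168.1.10/code /mnt/code \
    -o username=youruser,uid=$(id -u),gid=$(id -g)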
Aside from network shares, another tool to use for this is a version-control system.
You should always be able to make a normal network connection between the VM and the hosting OS, as though it were another computer on the same network. Which, in some sense, it is.
I do this all the time.
I have a directory in a Windows drive that I mount in my host ubuntu 12.04.
I run virtualbox ubuntu 13.04 as a guest.
I want the guest to mount the Windows directory with full non-root permissions.
I do almost all my work from a bash shell, so this method is natural for me.
When searching for methods to automatically mount VirtualBox shared folders, reliable and correct methods are hard to distinguish from those that fail. Failures include getting and setting permissions, as well as other problems. Methods that fail include:
modifying /etc/fstab
modifying /etc/rc.local
I am fairly certain that rc.local can be used, but no method I have tried worked. I welcome improvements on these guidelines.
I welcome improvements on these guidelines.
On VirtualBox 4.2.14, running nautilus (bash terminal) on an Ubuntu 13.04 guest, below is a working method to mount Common (sharename) on /home/$USER/Desktop/Common (mountpoint) with full permissions. (Note the '\' command continuation character in the find command.)
First time only: create your mountpoint, modify your .bashrc file, and run it.
Respond with password when requested.
These are the four command-lines needed:
mkdir $HOME/Desktop/Common
echo "$USER ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers  # sudo can't redirect into /etc/sudoers directly, hence tee -a
find $HOME/Desktop/Common -maxdepth 0 -type d -empty -exec sudo \
mount -t vboxsf -o \
uid=`id -u $USER`,gid=`id -g $USER` Common $HOME/Desktop/Common \;
source ~/.bashrc # Needed if you want to mount Common in this bash.
All other times: simply launch a bash shell.
The find command mounts the shared directory if the mountpoint directory is empty.
If the mountpoint directory is not empty, it does not run the mount command.
I hope this is error-free and sufficiently general.
Please let me know of corrections and improvements.