Run mounts on CentOS startup in a WSL container

I have a bunch of bind mounts that I want created in a CentOS instance running inside WSL; they can be created by running mount --bind commands.
More generally, I would like to run arbitrary commands on CentOS init.
How do I set up a script to be run when the container is initialized, to manage mounts and other static config?
Clarification: I do not want to mount Windows local paths inside the container, but container-internal paths (i.e. mount --bind /root/x /root/y).
Update: for the mount part, this worked (https://learn.microsoft.com/en-us/windows/wsl/wsl-config): create /etc/wsl.conf with
[automount]
mountFsTab = true
And created /etc/fstab with
/source1 /target1 none bind
...
Still looking for an answer to the second part, running a script when the container is initialized.

For adding internal bind mounts to the WSL container:
Documentation: https://learn.microsoft.com/en-us/windows/wsl/wsl-config
Create /etc/wsl.conf inside the WSL container with:
[automount]
mountFsTab = true
Create /etc/fstab with:
/source1 /target1 none bind
...
For running command(s) at WSL startup (since build 21286):
Documentation: https://learn.microsoft.com/en-us/windows/wsl/release-notes#build-21286
Create or add to /etc/wsl.conf:
[boot]
command=...
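Putting the two pieces together, a complete /etc/wsl.conf that both applies the fstab bind mounts and runs a startup script might look like this (the script path is an illustrative assumption):

```ini
[automount]
mountFsTab = true

[boot]
# Runs once, as root, when the distribution starts
command = /usr/local/bin/wsl-init.sh
```

Since the boot command runs as root before any user session, it is also a workable place for plain mount --bind commands or other static setup.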

Related

Can't create extension pg_cron in bitnami:postgres docker container?

I am running a Docker container with a database, using the bitnami:postgres image. It is all working fine, but now I want to install pg_cron to schedule automatic jobs.
I installed it and it is available as a possible extension in Dbeaver. But when I select and install it I get the message:
ERROR: extension "pg_cron" must be installed in schema "pg_catalog"
When I run the command
Create Extension pg_cron;
I get:
ERROR: pg_cron can only be loaded via shared_preload_libraries
Hint: Add pg_cron to the shared_preload_libraries configuration variable in postgresql.conf.
I tried to change the postgresql.conf file, but when I restart my Docker container to apply the changes, shared_preload_libraries is always reset to pgaudit.
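The Bitnami image regenerates postgresql.conf on container startup, which is why manual edits keep getting overwritten. The image instead exposes configuration through environment variables; assuming its documented variable for preload libraries, a docker-compose sketch could look like this (the service name and password are illustrative):

```yaml
services:
  postgres:
    image: bitnami/postgresql:latest
    environment:
      - POSTGRESQL_PASSWORD=change-me            # illustrative
      # keep pgaudit (the image default) and add pg_cron
      - POSTGRESQL_SHARED_PRELOAD_LIBRARIES=pgaudit,pg_cron
```

After recreating the container, CREATE EXTENSION pg_cron; should get past the shared_preload_libraries error.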

Airflow variables and connections

I have Airflow 1.10.11 running on an EC2 instance, set up by my predecessors with docker-compose.
I wish to learn how to set it up. I have the docker-compose file, but I need all the configurations.
I know the config file can be found in the scheduler container under /opt/bitnami/airflow/airflow.cfg.
There are many connections, variables, and XComs in the UI. In which container can I find them?
Or how could I export them? Some variables are encrypted and I can only see ***, so I could not recreate them one by one in the UI. Thanks.
I saw the documentation on exporting connections using the command: airflow connections export connections.json
Where do I execute this command in the CLI, in which container?
You can run it in each running airflow-* container:
docker exec -it airflow-worker /bin/bash
Or you can run it with docker-compose from your machine; for more details see here
docker-compose run airflow-worker airflow [command]
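Putting the two approaches together, a session sketch for getting the connections and variables out (the container name is illustrative, and the exact export subcommands depend on your Airflow version):

```shell
# Open a shell in a running Airflow container
docker exec -it airflow-worker /bin/bash

# Inside the container: write the exports to files
airflow connections export /tmp/connections.json
airflow variables export /tmp/variables.json
exit

# Back on the host: copy the exports out of the container
docker cp airflow-worker:/tmp/connections.json .
docker cp airflow-worker:/tmp/variables.json .
```

The exported files contain the decrypted values (Airflow decrypts them with the fernet_key from airflow.cfg), so they can be re-imported into a new setup instead of being recreated one by one in the UI.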

Where is the pidfile located? | Docker for Windows | docker.pid

I would like to have Docker inside Docker in order to use a CI agent. For that I need to share the docker.pid file with the Docker container, but I can't find that file at the path C:\ProgramData\docker.pid. I even tried to add this to the Docker daemon config:
{
...
"pidfile": "C:\\docker.pid",
...
}
And after a restart, the file didn't appear.
Could you please help me?
I also tried different variants in the config file, like "C:\docker.pid" and "C:/docker.pid"; same behavior.
The Docker logs say nothing about creating or removing a docker.pid file.
Software info
Windows Version: 10 1809 build 17763
Docker for Windows Version: 2.0.0.2 31259
Expected behavior
Create pid file in path C:\docker.pid
Actual behavior
The file is absent
I also created an issue on GitHub:
https://github.com/docker/for-win/issues/3741
I found a way to run Docker inside Docker.
These two topics helped me:
https://forums.docker.com/t/solved-using-docker-inside-another-container/12222/3
Bind to docker socket on Windows
I needed the docker.sock file, which is located at //var/run/docker.sock, so
-v //var/run/docker.sock:/var/run/docker.sock
resolved my problem.
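For completeness, a full docker run sketch (the image name is illustrative): mounting the host's socket means the docker CLI inside the container talks to the host daemon rather than to a nested one, so no pidfile needs to be shared at all:

```shell
docker run -it \
  -v //var/run/docker.sock:/var/run/docker.sock \
  my-ci-agent-image \
  docker ps   # lists the HOST's containers, via the mounted socket
```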

Changing permissions of added file to a Docker volume

In the Docker best practices guide it states:
You are strongly encouraged to use VOLUME for any mutable and/or user-serviceable parts of your image.
And looking at the source code for e.g. the cpuguy83/nagios image, this is clearly done: everything from the nagios to the apache config directories is made available as volumes.
However, looking at the same image, the apache service (and the cgi-scripts for nagios) run as the nagios user by default. So now I'm in a pickle, as I can't seem to figure out how to add my own config files in order to e.g. define more hosts for nagios monitoring. I've tried:
FROM cpuguy83/nagios
ADD my_custom_config.cfg /opt/nagios/etc/conf.d/
RUN chown nagios: /opt/nagios/etc/conf.d/my_custom_config.cfg
CMD ["/opt/local/bin/start_nagios"]
I build as normal, and try to run it with docker run -d -p 8000:80 <image_hash>, however I get the following error:
Error: Cannot open config file '/opt/nagios/etc/conf.d/my_custom_config.cfg' for reading: Permission denied
And sure enough, the permissions in the folder look like this (while the apache process runs as nagios):
# ls -l /opt/nagios/etc/conf.d/
-rw-rw---- 1 root root 861 Jan 5 13:43 my_custom_config.cfg
Now, this has been answered before (why doesn't chown work in Dockerfile), but no proper solution other than "change the original Dockerfile" has been proposed.
To be honest, I think there's some core concept here I haven't grasped (as I can't see the point of declaring config directories as VOLUME, nor of running services as anything other than root). So, given a Dockerfile like the one above (which follows Docker best practices by adding multiple volumes), is the solution/problem to:
Change NAGIOS_USER/APACHE_RUN_USER to 'root' and run everything as root?
Remove the VOLUME declarations in the Dockerfile for nagios?
Take some other approach?
How would you extend the nagios dockerfile above with your own config file?
Since you are adding your own my_custom_config.cfg file directly into the image at build time, just change the permissions of the my_custom_config.cfg file on your host machine and then build your image using docker build. The host machine's permissions are copied into the container image.
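A minimal sketch of that fix, run on the host before building (a scratch file stands in for the real config; the docker build step is left as a comment):

```shell
# Make the file world-readable on the host; ADD preserves these permission
# bits in the image, so the nagios user can then read the file.
touch my_custom_config.cfg          # stand-in for the real config file
chmod 0644 my_custom_config.cfg
stat -c '%a' my_custom_config.cfg   # prints 644
# docker build -t my-nagios .       # then build as usual
```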

How do I run the puppet agent inside a Docker container to build it out?

If I run a Docker container with CMD ["/usr/sbin/sshd", "-D"], I can have it running daemonized, which is good.
Then I want to run the puppet agent too, to build out said container as, say, an Apache server.
Is it possible to do this and then expose the apache server?
Here is another solution. We use the ENTRYPOINT Dockerfile instruction, as described here: https://docs.docker.com/articles/dockerfile_best-practices/#entrypoint. Using it, you can run the puppet agent and other services in the background before the instruction from CMD, or the command passed via docker run, is executed.
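A minimal sketch of that pattern (the puppet path and the CMD are illustrative assumptions, not taken from a specific image):

```shell
#!/bin/sh
# entrypoint.sh -- start background services, then hand control to CMD
/usr/bin/puppet agent --daemonize   # assumed install path; forks to background
exec "$@"                           # run CMD (or the command from docker run)
```

With ENTRYPOINT ["/entrypoint.sh"] and e.g. CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"] in the Dockerfile, the puppet agent starts first and the Apache server can then be exposed with docker run -p 80:80.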