lxc create unprivileged containers

I've installed LXC to create containers, and I've run the commands to set up unprivileged containers, but I get these errors when I run:
[andrea@andrea lxc]$ lxc-create -t download -n prova0
lxc-create: conf.c: chown_mapped_root: 3406 No mapping for container root
lxc-create: lxccontainer.c: do_bdev_create: 943 Error chowning /home/andrea/.local/share/lxc/prova0/rootfs to container root
lxc-create: conf.c: suggest_default_idmap: 4444 Your system is not configured with subuids
lxc-create: lxccontainer.c: do_lxcapi_create: 1408 Error creating backing store type (none) for prova0
lxc-create: lxc_create.c: main: 274 Error creating container prova0

lxc-create: ... Your system is not configured with subuids
As per the above error message, it sounds like you're trying to create an unprivileged container without subuids configured. These steps are for Ubuntu 14.04, but I suspect they will work on Fedora as well.
$ mkdir -p ~/.config/lxc
$ echo "lxc.id_map = u 0 100000 65536" > ~/.config/lxc/default.conf
$ echo "lxc.id_map = g 0 100000 65536" >> ~/.config/lxc/default.conf
$ echo "lxc.network.type = veth" >> ~/.config/lxc/default.conf
$ echo "lxc.network.link = lxcbr0" >> ~/.config/lxc/default.conf
$ echo "$USER veth lxcbr0 2" | sudo tee -a /etc/lxc/lxc-usernet
Once these are configured, you should be able to create an Ubuntu container as follows:
$ lxc-create -t download -n u1 -- -d ubuntu -r trusty -a amd64
Taken from the Ubuntu Server LXC guide:
https://help.ubuntu.com/lts/serverguide/lxc.html#lxc-unpriv
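If creation then succeeds, a quick sanity check (assuming the container name u1 from the command above) is to start it and list it:
$ lxc-start -n u1 -d
$ lxc-ls --fancy   # u1 should be listed as RUNNING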

Why Molecule is not able to start a docker container (Failed to create temporary directory)

I found a similar case elsewhere. I am using Molecule to test my Ansible roles, but for some reason it skips the "create" step and gives an error like:
fatal: [rabbitmq]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" && echo ansible-tmp-1638541586.6239848-828-250053975102429=\"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" ), exited with result 1", "unreachable": true}
It is skipping the create process (Skipping, instances already created.). However, nothing is running:
name@EEW00438:~/.cache$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
name@EEW00438:~/.cache$
What I tried:
molecule destroy
molecule reset
restart
rm -rf ~/.cache/
changed remote_tmp to /tmp/.ansible/ in /etc/ansible/ansible.cfg
reinstall molecule
This issue is only with one role.
UPDATE:
It is failing on this step:
mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623 `\" && echo ansible-tmp-1638782939.31706-2913-12516475286623=
mkdir: cannot create directory ‘"/home/user/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623"’: No such file or directory
I stumbled upon this issue as well.
When you create the role, you need to create it as molecule init role --driver-name docker ns.myrole to enable Docker. Be sure to install the Docker driver too if you haven't: pip install --upgrade molecule-docker
So if you need to tweak the container that runs, edit molecule.yml. It defaults to CentOS. I switched to Ubuntu in there, and created a Dockerfile to provision the container with things that need to exist.
molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: ubuntu:22.04 # this is required but ignored since I specify a `dockerfile`
    pre_build_image: false
    dockerfile: Dockerfile
provisioner:
  name: ansible
verifier:
  name: ansible
For example, Ubuntu 22.04 no longer ships a python binary, so I added an alias at the end of what Molecule renders so that Ansible can call python and have it redirect to python3:
FROM ubuntu:22.04
RUN if [ $(command -v apt-get) ]; then export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y python3 sudo bash ca-certificates iproute2 python3-apt aptitude && apt-get clean && rm -rf /var/lib/apt/lists/*; \
elif [ $(command -v dnf) ]; then dnf makecache && dnf --assumeyes install /usr/bin/python3 /usr/bin/python3-config /usr/bin/dnf-3 sudo bash iproute && dnf clean all; \
elif [ $(command -v yum) ]; then yum makecache fast && yum install -y /usr/bin/python /usr/bin/python2-config sudo yum-plugin-ovl bash iproute && sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf && yum clean all; \
elif [ $(command -v zypper) ]; then zypper refresh && zypper install -y python3 sudo bash iproute2 && zypper clean -a; \
elif [ $(command -v apk) ]; then apk update && apk add --no-cache python3 sudo bash ca-certificates; \
elif [ $(command -v xbps-install) ]; then xbps-install -Syu && xbps-install -y python3 sudo bash ca-certificates iproute2 && xbps-remove -O; fi
RUN echo 'alias python=python3' >> ~/.bashrc
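With molecule.yml and the Dockerfile in place, a typical iteration loop looks like this (a sketch, assuming the default scenario layout that molecule init creates):
$ molecule destroy    # clear any half-created state
$ molecule create     # build the image from the Dockerfile and start the container
$ molecule converge   # apply the role to the running container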
It's been years since I last used Molecule, and I must say... it's gone downhill. It used to be easy/clear/direct to get things working. Sigh. I guess I should stick to containers and force the migration off VMs sooner!
The problem may be caused by a Docker context change performed at the start of Docker Desktop. Despite this, Molecule does create a container, but in an inactive context.
At startup, Docker Desktop automatically switches the context from default to desktop-linux [1]. The active context determines which containers are available from CLI.
The context cannot be set in Molecule, i.e., the default context is always used to create containers [2].
$ molecule create --scenario-name test
... # The output with the error is skipped because it duplicates the output from the question
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
desktop-linux * moby unix:///home/bkarpov/.docker/desktop/docker.sock
$ docker context use default
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a71bfd28992f geerlingguy/docker-ubuntu2004-ansible "bash -c 'while true…" 5 minutes ago Up 5 minutes some-instance
$ molecule login --scenario-name test
INFO Running test > login
root@some-instance:/#
Solutions
Switch the context back to default manually
docker context use default
This solution is suitable for one-off use, since the context will need to be switched every time Docker Desktop is started. The Docker Desktop service will continue to work using the desktop-linux context.
Issue with the request to add context switching to Docker Desktop - https://github.com/docker/roadmap/issues/47
Stop Docker Desktop
systemctl --user stop docker-desktop
Stopping the Docker Desktop service will automatically switch to the default context.
Set DOCKER_CONTEXT so that Docker Desktop does not change the context in the current shell
export DOCKER_CONTEXT=default
systemctl --user restart docker-desktop
When stopping, the context returns to default, and when starting, it does not switch to desktop-linux.
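Either way, you can confirm which context is active before running Molecule (docker context show prints the name of the current context):
$ docker context show          # e.g. desktop-linux
$ docker context use default   # switch back if needed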
References
https://docs.docker.com/desktop/install/ubuntu/#launch-docker-desktop
https://github.com/ansible-community/molecule-docker#faq

git clone fail in gitlab runner docker

No idea why the git clone fails every time. I have added the correct host key and private key, but it still fails. Someone said GitLab pipelines do not support pulling over HTTP, so I changed to SSH, but it still failed.
$ echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
$ chmod 600 ~/.ssh/known_hosts
$ echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
$ id
uid=0(root) gid=0(root) groups=0(root)
$ ssh-agent bash -c 'ssh-add /mytest/private; git clone git@gitlab.home.kd:root/ansible-home.git --recursive -vvvvv'
Identity added: /mytest/private (/mytest/private)
Cloning into 'ansible-home'...
Warning: Permanently added 'gitlab.home.kd' (ECDSA) to the list of known hosts.
Server supports multi_ack_detailed
Server supports side-band-64k
Server supports ofs-delta
Server version is git/2.18.1
want e959694c7a5c95f27572ae6f2aa6e1aa6fa23a99 (HEAD)
want 989fd778545ca1ae507cad35ae224d8bb92f2db4 (refs/heads/dev)
want e959694c7a5c95f27572ae6f2aa6e1aa6fa23a99 (refs/heads/master)
done
$ ls /ansible-home
ls: cannot access '/ansible-home': No such file or directory
ERROR: Job failed: exit code 1

I would like to set up rfc5766-turn-server on Ubuntu 14.04; can anyone give me all the steps listed together? I am doing it on AWS EC2

I have tried to install and set up rfc5766-turn-server on AWS EC2 but have been unable to do it, as I cannot find a proper workflow or set of commands for it. Can someone help me with this? I need to set it up on Ubuntu 14.04.
Do an SSH login to your EC2 instance, then run the commands below to install and start the TURN server.
Commands for installing turnserver:
sudo apt-get update
sudo apt-get install make gcc libssl-dev libevent-dev wget -y # for installing modules required by turn server
mkdir ~/turn && cd ~/turn # creating temp directory
wget turnserver.open-sys.org/downloads/v3.2.5.9/turnserver-3.2.5.9.tar.gz # downloading the TURN source code
tar -zxvf *.gz # extract
cd turn*
make
sudo make install # installing the rfc5766
cd ../.. && rm -rf turn # cleaning up
Command for starting the TURN server:
turnserver -a -o -v -n -u user:root -p 3478 -L INT_IP -r someRealm -X EXT_IP/INT_IP
Assumptions:
your external and internal IPs = EXT_IP, INT_IP
desired port for listening: 3478
single credential username:password = user:root
realm: someRealm
In your WebRTC app, you can use the TURN server like this:
{
  url: 'turn:user@EXT_IP:3478',
  credential: 'root'
}
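To smoke-test the server once it is running, the build also installs client utilities; a hedged example, assuming the client binary is named turnutils_uclient in this release and reusing the credentials from above:
turnutils_uclient -u user -w root -p 3478 EXT_IP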

Docker HTTPS access - ONLYOFFICE

I'm following the ONLYOFFICE Docker documentation (GitHub: ONLYOFFICE docker HTTPS access) to get the ONLYOFFICE documentserver and communityserver running with HTTPS.
What I've tried:
1.
I've created the cert files (.crt, .key, .pem) as mentioned in the documentation. After that I created a file named env.list in my home directory /home/jw/data/ with the following content:
SSL_CERTIFICATE_PATH=/opt/onlyoffice/Data/certs/onlyoffice.crt
SSL_KEY_PATH=/opt/onlyoffice/Data/certs/onlyoffice.key
SSL_DHPARAM_PATH=/opt/onlyoffice/Data/certs/dhparam.pem
SSL_VERIFY_CLIENT=true
2.
After that I added the directory /home/jw/data/ to my $PATH environment variable:
PATH=$PATH:/home/jw/data/; export PATH
3.
On the same shell I started the docker container like this:
sudo docker run -i -t -d --name onlyoffice-document-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/jw/data/env.list onlyoffice/documentserver
4.
The documentserver is running fine. After that I've started the
communityserver with:
sudo docker run -i -t -d --link onlyoffice-document-server:document_server --env-file /home/jw/data/env.list onlyoffice/communityserver
5.
With the command docker ps -a I see both Docker containers running fine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f573111f2e5 onlyoffice/communityserver "/bin/sh -c 'bash -C " 29 seconds ago Up 28 seconds 80/tcp, 443/tcp, 5222/tcp lonely_mcnulty
23543300fa51 onlyoffice/documentserver "/bin/sh -c 'bash -C " 42 seconds ago Up 41 seconds 80/tcp, 0.0.0.0:443->443/tcp onlyoffice-document-server
But when I'm trying to access https://localhost there is an error "Secure Connection Failed" in Firefox.
Did I miss something?
Okay got it:
I've changed the environment variables in env.list to:
SSL_CERTIFICATE_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.crt
SSL_KEY_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.key
SSL_DHPARAM_PATH=/var/www/onlyoffice/Data/certs/dhparam.pem
After that used the following command to run ONLY the documentserver:
sudo docker run -i -t -d --name onlyoffice-document-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/jw/data/env.list onlyoffice/documentserver
The ONLYOFFICE OnlineEditor API is now available over HTTPS:
https://localhost/OfficeWeb/apps/api/documents/api.js
If you want to use CommunityServer with HTTPS just change the run command above to:
sudo docker run -i -t -d --name onlyoffice-community-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/<username>/env.list onlyoffice/communityserver
Thank you anyway!
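A quick way to verify that the editor API really is served over HTTPS (the -k flag skips certificate verification, since the certificate here is self-signed):
curl -kI https://localhost/OfficeWeb/apps/api/documents/api.js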

Running a Docker container that accepts traffic from the host

I have the following config:
Dockerfile
FROM centos
MAINTAINER Eduar Tua <eduartua@gmail.com>
RUN yum -y update && yum clean all
RUN yum -y install httpd && yum clean all
RUN echo "Apache works" >> /var/www/html/index.html
EXPOSE 80
ADD run-apache.sh /run-apache.sh
RUN chmod -v +x /run-apache.sh
CMD ["/run-apache.sh"]
The run-apache.sh script:
#!/bin/bash
rm -rf /run/httpd/* /tmp/httpd*
exec /usr/sbin/apachectl -D FOREGROUND
Then I build the image with:
sudo docker build --rm -t platzi/httpd .
then
sudo docker run -d -p 80:80 platzi/httpd
After that, when I try to run the container accepting connections from the host on port 80, I get this:
67ed31b50133adc7c745308058af3a6586a34ca9ac53299d721449dfa4996657
FATA[0002] Error response from daemon: Cannot start container 67ed31b50133adc7c745308058af3a6586a34ca9ac53299d721449dfa4996657: Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Any help?
It is saying port 80 is busy ... run this to see who is using port 80
sudo netstat -tlnp | grep 80 # sudo apt-get install net-tools # to install netstat
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1380/nginx -g daemo
tcp6 0 0 :::80 :::* LISTEN 1380/nginx -g daemo
Scroll to the far right to see the offending PID of the process holding port 80 ... it's PID 1380, so let's do a process listing to see that PID:
ps -eaf | grep 1380
root 1380 1 0 11:33 ? 00:00:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
So tear down that offending process to free up port 80:
sudo kill 1380 # if you know the pid ( 1380 for example )
or
sudo fuser -k 80/tcp # just kill whatever pid is using port 80 tcp
If, after doing the above, it still says the port is busy, then the process you killed probably got auto-relaunched, in which case you need to kill off its watcher. You can walk up the process tree from the netstat output to identify this parent process and kill that too.
Here is how to identify the parent PID of a given process PID:
ps -eafww
eve 2720 2718 0 07:56 ? 00:00:00 /usr/share/skypeforlinux/skypeforlinux --type=zygote
In the above, the PID is 2720 and its parent is the next column to the right, PID 2718 ... there are commands to show a process tree to visualize these relationships:
ps -x --forest
or
pstree -p
with sample output of
systemd(1)─┬─ModemManager(887)─┬─{ModemManager}(902)
│ └─{ModemManager}(906)
├─NetworkManager(790)─┬─{NetworkManager}(872)
│ └─{NetworkManager}(877)
├─accounts-daemon(781)─┬─{accounts-daemon}(792)
│ └─{accounts-daemon}(878)
├─acpid(782)
├─avahi-daemon(785)───avahi-daemon(841)
├─colord(1471)─┬─{colord}(1472)
│ └─{colord}(1475)
├─containerd(891)─┬─containerd-shim(1836)─┬─registry(1867)─┬─{registry}(1968)
│ │ │ ├─{registry}(1969)
│ │ │ ├─{registry}(1970)
The error seems pretty clear:
FATA[0002] Error response from daemon: Cannot start container 67ed31b50133adc7c745308058af3a6586a34ca9ac53299d721449dfa4996657: Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
It says, "address already in use". This means that something on your system -- probably a web server like Apache -- is already listening on port 80. You will either need to:
(a) stop the web server,
(b) select a different host port in the -p argument to docker run, or
(c) just drop the -p argument.
Because Docker can't set up the requested port forwarding, it does not start the container.
Options (a) and (b) will both allow the container to bind to port 80 on your host. This is only necessary if you want to access the container from somewhere other than your host.
Option (c) is useful if you only want to access the container from the docker host but do not want to otherwise expose the container on your local network. In this case, you would use the container ip address as assigned by docker, which you can get by running docker inspect and perusing the output, or just running:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' container_id
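For example, to fetch the test page directly from the container's internal address (a hypothetical check; this only works from the Docker host itself):
curl http://$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' container_id)/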
If you are running Ubuntu, just run
sudo /etc/init.d/apache2 stop
Then run your container again:
sudo docker run -d -p 80:80 platzi/httpd
I found this solution:
$ docker stop container_name
$ docker commit container_name image_name
$ docker rm container_name
Then you can create a new container from the image:
$ docker run -d -P --name container_name_the_same_or_new image_name
and now it works.
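Since -P publishes exposed ports on random host ports, you can look up the mapping and verify the container answers (a sketch; the mapped port shown is only an example):
$ docker port container_name_the_same_or_new 80   # prints something like 0.0.0.0:32768
$ curl http://localhost:32768/                    # should return "Apache works"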