Redis server fails to start in docker

I have a Docker image 'redis_image' that has Redis installed in it. After I run a container as:
docker run --name test_redis -it redis_image bash
the Redis server starts normally inside the container using '/etc/init.d/redis start'.
But if I run the container with the --net=host option, the Redis server fails to start in the container and reports "Starting redis-server: could not open session [Failed]". Is the problem related to the --net=host configuration when I run the container? Thanks.
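Presumably the failing invocation is the same run with --net=host added (my assumption; only the working command is shown above):
docker run --name test_redis --net=host -it redis_image bash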


docker executor vs docker dind image

I am a newbie in GitLab CI. I want to understand why we need the docker:dind image in order to build a Docker image in GitLab CI jobs. Why can't we use the docker executor and run docker commands under script:?
When we register a docker-executor GitLab runner, we choose one image.
Again, inside .gitlab-ci.yml we choose an image under the image: or services: fields. So does that mean this GitLab CI job container runs inside the docker executor container?
why do we need the docker:dind image in order to build a Docker image in GitLab CI jobs. Why can't we use the docker executor and run docker commands under script:?
This partly depends on how you have configured your GitLab runner.
Why docker doesn't work inside containers
When you invoke docker commands, they are really talking to a docker daemon which is needed to perform builds and carry out other docker commands. Typically, jobs running under the docker executor do not have access to any docker daemon by default. It's the same kind of problem you would face if you tried to run docker inside of a docker container you started locally.
Even if I can run docker successfully on my host:
$ docker run --rm docker /bin/sh -c 'echo "hello from container $HOSTNAME"'
hello from container 2b51479b11b1
I cannot run docker inside the container
$ docker run --rm docker /bin/sh -c 'docker info'
errors pretty printing info
Client:
Context: default
Debug Mode: false
Server:
ERROR: error during connect: Get "http://docker:2375/v1.24/info": dial tcp: lookup docker on 192.168.65.5:53: no such host
The same error would happen trying to run any other significant docker command like build, run, etc.
An exception to this would be if you configured your GitLab runner to mount /var/run/docker.sock into all your jobs (this would not be advisable), in which case all your jobs could talk directly to the docker daemon on the host. Another exception would be if you use the shell executor instead and docker is installed on the host where the runner runs.
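For illustration, that socket mount would live in the runner's config.toml, roughly like this (a sketch, not a recommendation; required keys such as url and token are omitted, and the image value is just the runner's default job image):

[[runners]]
  executor = "docker"
  [runners.docker]
    image = "alpine:3.19"
    volumes = ["/var/run/docker.sock:/var/run/docker.sock"]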
How the dind service fixes this
The docker:dind service is a daemon that is created just for your job. This is incredibly important because it can prevent concurrent jobs from stepping on one another or being able to escalate access where they might not otherwise have it.
When the build starts, the GitLab runner will create two containers: your job container and the docker:dind container; they are linked together. When your job invokes docker commands, your job connects to the docker:dind container, which then carries out the requested commands.
Any containers created by your job (say, by invoking docker run or docker build as part of your job) are managed by the daemon running on the docker:dind container, not the host daemon. If you run docker ps inside the job, you'll notice that none of the containers run on the host daemon are listed, despite the fact that if you ran docker ps on the host, you would see the job container, the dind container, and any other running containers.
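Putting it together, a job that builds an image against the dind service typically looks something like the sketch below (the job name, image tags, and the myapp image name are illustrative; the runner also has to allow privileged containers for dind to start):

build-image:
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker build -t myapp:latest .
    - docker image ls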
To clarify your other questions:
When we register a docker-executor GitLab runner, we choose one image
The image specified in your runner configuration (the image key under [runners.docker], as in the config.toml sketch above) is simply the default Docker image used when a job doesn't declare any image: key. It does not affect how the runner itself runs.
inside .gitlab-ci.yml, we choose an image under the image: or services: fields
When the docker executor runs your job, it uses docker run to do so. The image: key determines which image is used to run your job. Similarly, services: defines the images used for service containers; service containers are siblings of the job container and are connected to it with links.
So does that mean this GitLab CI job container runs inside the docker executor container?
No. I'd also like to clear something up: the runner/executor doesn't necessarily run in a container. Runners might be installed as a Windows service, or simply run as a process directly on a system. You can use runners that happen to live inside containers, but that doesn't materially affect how jobs are run.
In any case, the containers where your jobs run are generally started directly by the host docker daemon.

Gitlab CI job failed: ERROR the input device is not a TTY

I've registered a GitLab Runner with the shell executor on Ubuntu 18.04, and also set up a docker container with the command below:
docker run -it --gpus '"device=0"' --net=host -v /home/autotest/Desktop/ai_platform:/app --name=ai_platform_system nvcr.io/nvidia/pytorch:20.10-py3 "bash"
Then I tried to execute the following command from the .gitlab-ci.yml in GitLab CI, but I got the error "the input device is not a TTY".
docker attach ai_platform_system
Any clues for this issue, other than using docker exec? I know docker exec works in the GitLab CI environment, but it creates a new session in the container, which is not desirable for me. Thanks!
According to this answer (it is for Jenkins, but the problem is the same), you need to remove the -it flag so that no TTY is allocated:
docker run --gpus '"device=0"' --net=host -v /home/autotest/Desktop/ai_platform:/app --name=ai_platform_system nvcr.io/nvidia/pytorch:20.10-py3 "bash"

docker container error "Failed to get D-Bus connection: Operation not permitted"

I am getting the error "Failed to get D-Bus connection: Operation not permitted" while using the systemctl command to restart a service in a CentOS container.
Use the command below to start the docker container:
docker run -d -it --privileged ImageId /usr/sbin/init
Use the Dockerfile from the link below to fix this issue. When you run the docker image, use --privileged.
https://hub.docker.com/_/centos
Add these lines to the Dockerfile (sketched below).
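The systemd-enabled CentOS 7 Dockerfile from the linked page looks roughly like this (reproduced as a sketch; check the linked page for the authoritative version):

FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fd.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]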

Httpd docker stops after a number of days

I'm trying to run a small personal web server in docker using the httpd image in the docker store (https://store.docker.com/images/httpd).
However, it works OK in the beginning, but tends to simply stop a number of days later and needs to be restarted using "docker start", and I've not found what is wrong. There is some advice on the net for a CentOS build, but I've not found any for the Docker Store image as-is.
The Dockerfile is:
FROM httpd:2.4
LABEL maintainer "mats.ohrman@gmail.com"
COPY ./content/ /usr/local/apache2/htdocs/
COPY ./config/httpd.conf /usr/local/apache2/conf/httpd.conf
COPY ./config/httpd-vhosts.conf /usr/local/apache2/conf/extra/httpd-vhosts.conf
Docker "build" cmd I used:
docker build -t matsohrman/web .
Docker "run" cmd I used:
docker run -dit --name web -p 80:80 matsohrman/web

How to run a Redis server AND another application inside Docker?

I created a Django application which runs inside a Docker container. I needed to create a thread inside the Django application, so I used Celery with Redis as the Celery database.
If I install redis in the docker image (Ubuntu 14.04):
RUN apt-get update && apt-get -y install redis-server
RUN pip install redis
The Redis server is not launched: the Django application throws an exception because the connection is refused on port 6379. If I manually start Redis, it works.
If I start the Redis server with the following command, it hangs:
RUN redis-server
If I try to tweak the previous line, it does not work either:
RUN nohup redis-server &
So my question is: is there a way to start Redis in the background and to make it restart when the Docker container is restarted?
The Docker "last command" (CMD) is already in use:
CMD uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
RUN commands only add new image layers. They are not executed at runtime, only at build time of the image.
Use CMD instead. You can combine multiple commands by externalizing them into a shell script which is invoked by CMD:
CMD start.sh
In the start.sh script you write the following:
#!/bin/bash
nohup redis-server &
uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
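For this to work, the script has to be copied into the image and be executable; a minimal Dockerfile addition might look like this (the path /start.sh is an illustrative assumption):

COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]

Here uwsgi stays in the foreground as the container's main process while redis-server runs in the background, so the container stops if uwsgi exits.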
When you run a Docker container, there is always a single top level process. When you fire up your laptop, that top level process is an "init" script, systemd or the like. A docker image has an ENTRYPOINT directive. This is the top level process that runs in your docker container, with anything else you want to run being a child of that. In order to run Django, a Celery Worker, and Redis all inside a single Docker container, you would have to run a process that starts all three of them as child processes. As explained by Milan, you could set up a Supervisor configuration to do it, and launch supervisor as your parent process.
Another option is to actually boot the init system. This will get you very close to what you want since it will basically run things as though you had a full scale virtual machine. However, you lose many of the benefits of containerization by doing that :)
The simplest way altogether is to run several containers using Docker-compose. A container for Django, one for your Celery worker, and another for Redis (and one for your data store as well?) is pretty easy to set up that way. For example...
# docker-compose.yml
web:
  image: myapp
  command: uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
  links:
    - redis
    - mysql
celeryd:
  image: myapp
  command: celery worker -A myapp.celery
  links:
    - redis
    - mysql
redis:
  image: redis
mysql:
  image: mysql
This would give you four containers for your four top-level processes. redis and mysql would be exposed to your app containers under the DNS names "redis" and "mysql", so instead of pointing at "localhost" you'd point at "redis".
There is a lot of good info in the Docker Compose docs.
Use supervisord, which would control both processes. The conf file might look like this:
...
[program:redis]
command= /usr/bin/redis-server /srv/redis/redis.conf
stdout_logfile=/var/log/supervisor/redis-server.log
stderr_logfile=/var/log/supervisor/redis-server_err.log
autorestart=true
[program:nginx]
command=/usr/sbin/nginx
stdout_events_enabled=true
stderr_events_enabled=true
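To make supervisord the container's top-level process, the Dockerfile would end with something like this (a sketch; the package name and paths assume a Debian/Ubuntu base image):

RUN apt-get update && apt-get -y install supervisor
COPY supervisord.conf /etc/supervisor/conf.d/app.conf
CMD ["/usr/bin/supervisord", "-n"]

The -n (nodaemon) flag keeps supervisord in the foreground, so the container keeps running as long as supervisord and its child processes do.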