Setting up redis with docker - authentication

I have set up a basic Redis image based on the following instructions: http://docs.docker.io/en/latest/examples/running_redis_service/
In my snapshot I have also edited the redis.conf file to set requirepass.
My server runs fine and I am able to access it remotely using redis-cli; however, the authentication isn't working. I suspect the config file isn't being used, but when I try starting the container with:
docker run -d -p 6379:6379 jwarzech/redis /usr/bin/redis-server /etc/redis/redis.conf
the container immediately crashes.

The default Redis config is set to run as a daemon. You can't run a daemon as the main process of a Docker container; otherwise, lxc will lose track of it and will destroy the namespace.
I just tried doing this within the container:
$ redis-server - << EOF
requirepass foobared
EOF
Now, when I connect to it without the password I get 'ERR operation not permitted'. When I connect with redis-cli -a foobared, it works fine.
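Putting it together, a minimal sketch (assuming the image keeps its config at /etc/redis/redis.conf) is to force foreground mode and set the password in the config file:
daemonize no
requirepass foobared
and then start the container with that config:
docker run -d -p 6379:6379 jwarzech/redis /usr/bin/redis-server /etc/redis/redis.conf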

Related

Run Redis service as non-root user

I have installed it on RHEL 7 and configured it a bit.
It is up and running as root.
I am trying to run the Redis service as a non-root user.
Any pointers would be appreciated.
If the user and group "redis" have not been created, create them:
useradd redis
Then change the owner of the files "redis-server" and "redis-cli" (actually, I advise changing ownership of all Redis files, substituting the path where you installed them):
chown redis: "your path"
Then create the systemd unit file:
vim /usr/lib/systemd/system/redis.service
with the following contents:
[Unit]
Description=Redis In-Memory Data Store
After=network.target
[Service]
User=redis
Group=redis
Type=forking
ExecStart="the absolute path of redis-server" "the absolute path of redis.conf"
ExecStop="the absolute path of redis-cli" shutdown
[Install]
WantedBy=multi-user.target
Then you can use the following commands:
systemctl status redis   # check the service status
systemctl start redis    # start the service
systemctl stop redis     # stop the service
systemctl enable redis   # enable the service at system boot
I also pasted this config on my machine and it works well for me.
Hope this helps!
For those who use Docker, you can build your own Redis image with a non-root user as follows:
FROM redis:6.0.10-alpine
# Create the home directory for the new non-root user.
RUN mkdir -p /home/nonroot
# Create a non-root user so our program doesn't run as root.
RUN adduser -S -h /home/nonroot nonroot
VOLUME /home/nonroot/tmp
HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
CMD redis-cli ping
USER nonroot
EXPOSE 6379
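Building and running it might then look like this (the image tag redis-nonroot is just an example):
docker build -t redis-nonroot .
docker run -d -p 6379:6379 redis-nonroot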
Probably also add the working directory to the service, since Redis does not seem to change to it on its own (at least in my configuration):
WorkingDirectory=/var/lib/redis
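That is, inside the [Service] section of the unit file:
[Service]
User=redis
Group=redis
WorkingDirectory=/var/lib/redis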

CentOS7: Are you trying to connect to a TLS-enabled daemon without TLS?

I've installed Docker on CentOS7, now I try to launch the server in a Docker container.
$ docker run -d --name "openshift-origin" --net=host --privileged \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /tmp/openshift:/tmp/openshift \
openshift/origin start
This is the output:
Post http:///var/run/docker.sock/v1.19/containers/create?name=openshift-origin: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?
I have tried the same command with sudo and that works fine (I can also run images in OpenShift bash, etc.). But it feels wrong to use it, am I right? What is a solution to let this work as a normal user?
Docker is running (sudo service docker start). Restarting CentOS did not help.
The error is:
/var/run/docker.sock: permission denied.
That seems pretty clear: the permissions on the Docker socket at /var/run/docker.sock do not permit you to access it. This is reasonably common, because handing someone access to the Docker API is effectively the same as giving them sudo privileges, but without any sort of auditing.
If you are the only person using your system, you can:
Create a docker group or similar if one does not already exist.
Make yourself a member of the docker group.
Modify the startup configuration of the docker daemon to make the socket owned by that group by adding -G docker to the options. You'll probably want to edit /etc/sysconfig/docker to make this change, unless it's already configured that way.
With these changes in place, you should be able to access Docker from your user account without requiring sudo.
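As a sketch, those steps translate to the following commands (group name and config path follow the conventions above):
sudo groupadd docker            # create the group if it does not exist
sudo usermod -aG docker $USER   # add your user to it
# add -G docker to the daemon OPTIONS in /etc/sysconfig/docker, then:
sudo service docker restart
# log out and back in so the new group membership takes effect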

SSH directly into a docker container

I've got some Docker containers and now I want to access one of them with SSH. That part is working; I got a connection via SSH to the Docker container.
But now I have the problem that I don't know which user to use to access this container.
I've tried both users I have on the host machine (web & root), but they don't work.
What should I do now?
You can drop directly into a running container with:
$ docker exec -it myContainer /bin/bash
You can get a shell on a container that is not running with:
$ docker run -it myContainer /bin/bash
This is the preferred method of getting a shell on a container. Running an SSH server inside a container is considered bad practice and, although there are some use cases out there, should be avoided when possible.
If you want to connect directly into a Docker Container, without connecting to the docker host, your Dockerfile should include the following:
# Assuming a Debian/Ubuntu base image: install and prepare the SSH server
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:pass' | chpasswd
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Then use docker run with -p and -d flags. Example:
docker run -p 8022:22 -d your-docker-image
You can connect with:
ssh root@your-host -p 8022
1. Issue the command docker inspect (containerId or name).
You will get a result like this:
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"my_bridge": {
"IPAMConfig": {
"IPv4Address": "172.17.0.20"
},
"Links": null,
"Aliases": [
"3784372432",
"xxx",
"xxx2"
],
"NetworkID": "ff7ea463ae3e6e6a099e0e044610cdcdc45b21f7e8c77a814aebfd3b2becd306",
"EndpointID": "6be4ea138f546b030bb08cf2c8af0f637e8e4ba81959c33fb5125ea0d93af967",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.20",
"IPPrefixLen": 24,
...
2. Read out and copy the IP address from there, then connect to it via ssh existingUser@IpAddress, e.g.
ssh someExistingUser@172.17.0.20. If the user doesn't exist, create one in the guest image, preferably with sudo privileges. Probably don't use the root user directly: as far as I know, that user is preset for connecting to the image via SSH keys, or has a preset password, and changing it would probably leave you unable to SSH into the image terminal the regular way, i.e. docker exec -it containerName /bin/bash or docker-compose exec containerName /bin/bash.
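If you do need to create such a user, a rough sketch inside a Debian-based container (the username is just an example):
docker exec -it containerName /bin/bash
# inside the container:
useradd -m -s /bin/bash someExistingUser
usermod -aG sudo someExistingUser   # grant sudo via the sudo group
passwd someExistingUser             # set a password for the SSH login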
For some cases, enabling SSH in a Docker container is useful, especially when we want to test some scripts.
The link below gives a good example of how to create an image with SSH enabled, how to get its IP, and how to connect to it.
Here
If a true SSH connection into the container is needed (i.e. to allow isolated access over the internet), this image from the linuxserver.io guys could be a great solution: https://hub.docker.com/r/linuxserver/openssh-server
A much more robust solution is pulling down nsenter to your server, then SSHing in and running docker-enter from there. That way you don't need to run multiple processes in the container (SSH server plus whatever the container is for), or worry about all the extra overhead of SSH users and such (not to mention security concerns).
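With jpetazzo's nsenter installed on the server, entering a container from an SSH session looks roughly like this (the container name is a placeholder):
docker-enter my_container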
The idea behind containers is that a container runs a single process so that it can be monitored by the daemon. If this process stops or fails for some reason, it can be restarted, depending on your config. An SSH server is a running process; therefore, if you need SSH access to your setup, make an SSH server service, which can share volumes with the other containers running alongside it in the setup. A sketch of that pattern follows.
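Here, a shared named volume ties the SSH service to the app container (image and volume names are examples):
docker volume create shared-data
docker run -d --name app -v shared-data:/data my-app-image
docker run -d --name sshd -p 2222:22 -v shared-data:/data my-sshd-image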
To open a shell on a container on a remote host directly:
Imagine you are on your PC at home, you have a remote machine that runs Docker and has running containers, and you want to open a shell on a container directly without "stopping by" on the remote host:
(The -t flag allocates a tty.)
ssh -t user@remote.host 'docker exec -it running_container_name /bin/bash'
If you are already on the host, as in the accepted answer:
(-i for interactive, -t to allocate a tty)
docker exec -it running_container_name /bin/bash

How do I "dockerize" a redis service using phusion/baseimage-docker

I am getting started with Docker, and I am trying to "dockerize" a simple Redis service using Phusion's baseimage. On its website, baseimage says:
You can add additional daemons (e.g. your own app) to the image by
creating runit entries.
Great, so I first started this image interactively with a cmd of /bin/bash. I installed redis-server via apt-get. I created a "redis-server" directory in /etc/service, and made a run file that reads as follows:
#!/bin/sh
exec /usr/bin/redis-server /etc/redis/redis.conf >> /var/log/redis.log 2>&1
I ensured that daemonize was set to "no" in the redis.conf file.
I committed my changes, and then with my newly created image, I started it with the following:
docker run -p 6379:6379 <MY_IMAGE>
I see this output:
*** Running /etc/rc.local...
*** Booting runit daemon...
*** Runit started as PID 98
I then run:
boot2docker ip
It gives me back an IP address. But when I run, from my Mac,
redis-cli -h <IP>
It cannot connect. Same with
telnet <IP> 6379
I ran docker ps and see the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c7bd2dXXXXXX myuser/redis:latest "/sbin/my_init" 11 hours ago Up 2 minutes 0.0.0.0:6379->6379/tcp random_name
Can anyone suggest what I have done wrong when attempting to dockerize a simple Redis service using Phusion's baseimage?
It was because I did not comment out the
bind 127.0.0.1
parameter in the redis.conf file.
Now, it works!
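For reference, the relevant edit in redis.conf is to comment out the bind line so Redis listens on all interfaces instead of only the container's loopback:
# bind 127.0.0.1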

Connect from one Docker container to another

I want to run rabbitmq-server in one Docker container and connect to it from another container using Celery (http://celeryproject.org/).
I have rabbitmq running using the following command:
sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
and I am running Celery via:
sudo docker run -i -t markellul/celery /bin/bash
When I try the very basic tutorial to validate the connection (http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html),
I am getting a connection refused error:
consumer: Cannot connect to amqp://guest@127.0.0.1:5672//: [Errno 111]
Connection refused.
When I install rabbitmq in the same container as Celery, it works fine.
What do I need to do to have the containers interact with each other?
[edit 2016]
Direct links are deprecated now. The new way to link containers is docker network connect. It works similarly to virtual networks and has a wider feature set than the old way of linking.
First you create your named containers:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
docker run --name celery -it markellul/celery /bin/bash
Then you create a network (last parameter is your network name):
docker network create -d bridge --subnet 172.25.0.0/16 mynetwork
Connect the containers to your newly created network:
docker network connect mynetwork rabbitmq
docker network connect mynetwork celery
Now, both containers are in the same network and can communicate with each other.
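Because user-defined networks provide built-in DNS, the containers can now reach each other by container name. For example, the celery container could use a broker URL like this (a sketch, reusing the names above):
amqp://guest@rabbitmq:5672//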
A very detailed user guide can be found at Work with networks: Connect containers.
[old answer]
There is a new feature in Docker 0.6.5 called linking, which is meant to help the communication between docker containers.
First, create your rabbitmq container as usual. Note that I also used the new "name" feature, which makes life a little bit easier:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
You can use the link parameter to map a container (we use the name here, the id would be ok too):
docker run --link rabbitmq:amq -i -t markellul/celery /bin/bash
Now you have access to the IP and port of the rabbitmq container, because Docker automatically added some environment variables:
$AMQ_PORT_5672_TCP_ADDR
$AMQ_PORT_5672_TCP_PORT
In addition Docker adds a host entry for the source container to the /etc/hosts file. In this example amq will be a defined host in the container.
From Docker documentation:
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
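With the amq alias from this example, the celery consumer from the question could therefore connect with a broker URL like:
amqp://guest@amq:5672//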
Just get your container IP, and connect to it from another container:
CONTAINER_IP=$(sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' $CONTAINER_ID)
echo $CONTAINER_IP
When you specify -p 5672, what Docker does is open up a new port on the host, such as 49xxx, and forward it to port 5672 of the container.
You should be able to see which port is being forwarded to the container by running:
sudo docker ps -a
From there, you can connect directly to the host IP address like so:
amqp://guest@HOST_IP:49xxx
You can't use localhost, because each container is basically its own localhost.
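You can also query the mapping for a single port directly (the container ID is a placeholder):
sudo docker port CONTAINER_ID 5672
which prints something like 0.0.0.0:49154.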
Create Image:
docker build -t "imagename1" .
docker build -t "imagename2" .
Run Docker image:
docker run -it -p 8000:8000 --name=imagename1 imagename1
docker run -it -p 8080:8080 --name=imagename2 imagename2
Create Network:
docker network create -d bridge "networkname"
Connect the containers (named after the images above) to the network:
docker network connect "networkname" "imagename1"
docker network connect "networkname" "imagename2"
We can add any number of containers to the network. To inspect it:
docker network inspect "networkname"
I think you can't connect to another container directly by design - that would be the responsibility of the host. An example of sharing data between containers using volumes is given here: http://docs.docker.io/en/latest/examples/couchdb_data_volumes/, but I don't think that is what you're looking for.
I recently found out about https://github.com/toscanini/maestro - that might suit your needs. Let us know if it does :). I haven't tried it myself yet.
Edit: note that you can read here that native "Container wiring and service discovery" is on the roadmap. I guess 0.7.0 or 0.8.0 at the latest.
You can get the docker instance IP with...
CID=$(sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server); sudo docker inspect $CID | grep IPAddress
But that's not very useful.
You can use pipework to create a private network between docker containers.
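Following pipework's documented usage, a sketch (bridge name, container names, and addresses are examples):
pipework br1 container1 192.168.1.1/24
pipework br1 container2 192.168.1.2/24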
This is currently on the 0.8 roadmap:
https://github.com/dotcloud/docker/issues/1143