Redis Sentinel Docker image / Dockerfile

I'm looking to deploy high-availability Redis on a CoreOS cluster, and I need a Redis Sentinel Docker image (i.e. a Dockerfile) that works. I've gathered enough information/expertise to create one (I think)... but my limited knowledge/experience with advanced networking is the only thing keeping me from building and sharing it.
Can someone with expertise here help me develop a Redis Sentinel Dockerfile (none exist right now)? The Redis/Docker community would really benefit from this.
Here's the broader issue and context:
https://github.com/antirez/redis/pull/1908
I think the solution is right here specifically:
https://github.com/antirez/redis/pull/1908#issuecomment-54380876
Here's the Dockerfile I've been using... but if you read the thread above, you'll see my comments (joshula); it lacks the networking fixes that mattsta is talking about. Note that because I'm using this on CoreOS, any config settings in sentinel.conf are set at run time via the command line (hence the ENTRYPOINT).
# Pull base image.
FROM dockerfile/ubuntu:latest
# Install Redis.
RUN \
  cd /tmp && \
  wget http://download.redis.io/redis-stable.tar.gz && \
  tar xvzf redis-stable.tar.gz && \
  cd redis-stable && \
  make && \
  make install && \
  cp -f src/redis-sentinel /usr/local/bin && \
  mkdir -p /etc/redis && \
  cp -f *.conf /etc/redis && \
  rm -rf /tmp/redis-stable* && \
  sed -i 's/^\(bind .*\)$/# \1/' /etc/redis/redis.conf && \
  sed -i 's/^\(daemonize .*\)$/# \1/' /etc/redis/redis.conf && \
  sed -i 's/^\(dir .*\)$/# \1\ndir \/data/' /etc/redis/redis.conf && \
  sed -i 's/^\(logfile .*\)$/# \1/' /etc/redis/redis.conf
# Define mountable directories.
VOLUME ["/data"]
# Define working directory.
WORKDIR /data
# Expose ports.
EXPOSE 26379
# Define default command (exec form, so arguments passed to "docker run" are
# appended to redis-sentinel at run time).
ENTRYPOINT ["redis-sentinel", "/etc/redis/sentinel.conf"]
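With the exec-form ENTRYPOINT, anything after the image name in docker run is appended as extra arguments, and Redis folds --option arguments into config directives. A hedged sketch of running it (the image name, master address, and quorum are made up; note Sentinel rewrites its config file, so /etc/redis/sentinel.conf must stay writable inside the container):
docker build -t redis-sentinel .
docker run -d -p 26379:26379 redis-sentinel \
    --sentinel monitor mymaster 10.0.0.5 6379 2 \
    --sentinel down-after-milliseconds mymaster 5000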

After a ton of work, I ended up figuring this out. Here's to making it simple for anyone else who wants to deploy a highly available Redis instance via Docker:
https://registry.hub.docker.com/u/joshula/redis-sentinel/

There is no need for a custom Sentinel image or messing with the network. See my redis-ha-learning project, which uses Spring Data Redis and the bitnami/redis and bitnami/redis-sentinel images. The Docker Compose file is here.
My code auto-detects the sentinels based on the Docker Compose container names.

Related

How do I use SSH on Laradock?

I'd like to set up SSH with my Laradock workspace container so I can deploy to git.
Inside the workspace folder, there are keypairs insecure_id_rsa and insecure_id_rsa.pub.
Obviously I don't want to use what comes in the repository, but there are no instructions beyond connecting inside the workspace.
Am I supposed to generate my own keys and have them in my workspace folder?
In the Dockerfile:
###########################################################################
# ssh:
###########################################################################
ARG INSTALL_WORKSPACE_SSH=false
COPY insecure_id_rsa /tmp/id_rsa
COPY insecure_id_rsa.pub /tmp/id_rsa.pub
RUN if [ ${INSTALL_WORKSPACE_SSH} = true ]; then \
  rm -f /etc/service/sshd/down && \
  cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys && \
  cat /tmp/id_rsa.pub >> /root/.ssh/id_rsa.pub && \
  cat /tmp/id_rsa >> /root/.ssh/id_rsa && \
  rm -f /tmp/id_rsa* && \
  chmod 644 /root/.ssh/authorized_keys /root/.ssh/id_rsa.pub && \
  chmod 400 /root/.ssh/id_rsa && \
  cp -rf /root/.ssh /home/laradock && \
  chown -R laradock:laradock /home/laradock/.ssh \
;fi
How do I set up a secure SSH connection to Laradock to use on GitHub?
I can connect in my workspace by doing ssh root@localhost. I just don't understand what's required to have my own keys inside the container and how to do it securely.
Should I make a new key with putty and replace the insecure keys?
Edit: So I can run ssh-keygen in my container and get new keypairs, and I can deploy to GitHub fine with them.
But of course, when my container restarts, those files are no longer there and get overwritten by the config above.
So is it safe to put my actual keys in the laradock folder?
Edit: I ended up copying what ssh-keygen generated into the .ssh folder and pasting it into the files that get copied over when my containers run. This of course works fine, but I'm just not sure it's the best way to go about things, especially if you want to keep the repo up to date with what Laradock is doing.
Can you copy the generated SSH keys to the laradock folder (renaming them to insecure_id_rsa and insecure_id_rsa.pub)? Then Laradock will copy them across each time; see the sketch below.
Or set INSTALL_WORKSPACE_SSH=false after it is set up, so it doesn't overwrite the keys?
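A minimal sketch of the first option, assuming the standard Laradock layout (the ARG name comes from the Dockerfile above; "workspace" is the usual Compose service name):
# Generate your own pair, reusing the filenames the Dockerfile COPYs
ssh-keygen -t rsa -b 4096 -N '' -f ./insecure_id_rsa
# Rebuild the workspace so the new keys are baked in
docker-compose build --build-arg INSTALL_WORKSPACE_SSH=true workspace
docker-compose up -d workspace
Either way, make sure the real keys are gitignored so they never land in the repository.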

RuntimeWarning: You're running the worker with superuser privileges: this is absolutely not recommended

I am using Django + Celery + Redis (celery==4.4.0). Locally it works fine, but when I dockerize it I get the above error.
I am using the following commands to run it locally as well as inside the container:
Commands:
celery -A nrn worker -l info
docker run -d -p 6379:6379 redis
flower -A nrn --port=5555
Any help is highly appreciated.
settings.py:
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_BROKER_URL = os.environ.get('redis', 'redis://127.0.0.1:6379/')
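As an aside, that settings line reads the broker URL from an environment variable literally named redis, so inside a container you can point it at the Redis service instead of 127.0.0.1. A hedged sketch (the image and host names are made up):
docker run -d --name worker -e redis=redis://my-redis-host:6379/ my-django-image \
    celery -A nrn worker -l info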
Take a look at the documentation. It's a warning, though, not an error (see the code). Running Celery under root is an error only when you allow pickle serialization, which is not enabled by default (see here).
However, it's still best practice to run Celery with lower privileges. In Docker (with a Debian-based image), I chose to run Celery under nobody:nogroup. I use this Dockerfile:
FROM python:3.6
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1
WORKDIR /srv/celery
COPY ./app app
COPY ./requirements.txt /tmp/requirements.txt
COPY ./celery.sh celery.sh
RUN pip install --no-cache-dir -r /tmp/requirements.txt
VOLUME ["/var/log/celery", "/var/run/celery"]
CMD ["./celery.sh"]
where celery.sh looks as follows:
#!/usr/bin/env bash
# Create writable state/log dirs and hand them to the unprivileged user
mkdir -p /var/run/celery /var/log/celery
chown -R nobody:nogroup /var/run/celery /var/log/celery
exec celery --app=app worker \
    --loglevel=INFO --logfile=/var/log/celery/worker-example.log \
    --statedb=/var/run/celery/worker-example@%h.state \
    --hostname=worker-example@%h \
    --queues=celery.example -O fair \
    --uid=nobody --gid=nogroup
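A quick usage sketch (image and container names are made up; celery.sh has to be executable in the build context, since the Dockerfile doesn't chmod it):
chmod +x celery.sh
docker build -t celery-worker .
docker run -d --name worker celery-worker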

How to securely pass SSH keys to a Docker build?

I want to create a Docker image for devs that reproduces our production servers. Those servers are configured by Ansible.
My idea is to run ansible-pull to apply all the configuration inside the container. The problem is that I need an SSH key to pull the playbook, but I don't want that key baked into the Docker image.
So, is there a way to have the SSH keys at build time without having them at run time?
Nice question. The simple way to do it is by removing the SSH keys after the Ansible stuff in the build - but because Docker stores images as layers, someone could still find the old layer with the keys in it.
If you build this Dockerfile:
FROM ubuntu
COPY ansible-ssh-key.rsa /key.rsa
RUN [ansible stuff]
RUN rm /key.rsa
The final image will have all your Ansible state and the SSH key will be gone, but someone could easily run docker history to list the image layers, start a container from an intermediate layer from before the key was deleted, and grab the key.
The trick is to do something like this and then use Jason Wilder's docker-squash tool to squash the final image. In the squashed image the intermediate layers are gone and there's no way to get at the deleted key.
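A hedged sketch of that build-then-squash workflow (image names and tags are made up; docker-squash reads a saved image on stdin and emits the squashed image for docker load):
docker build -t myimage:unsquashed .
docker save myimage:unsquashed | docker-squash -t myimage:latest | docker load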
I'd set up some local file-serving facility available only in your build environment.
E.g. start lighttpd on your build host to serve your pem files only to local clients.
And in your Dockerfile do the add/pull/cleanup in a single RUN:
RUN curl -sO http://build-host:8888/key.pem && ansible-pull -U myrepo && rm -rf key.pem
This way everything happens in a single layer, so there should be no trace of key.pem left after the layer is committed.
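If lighttpd feels heavy for a one-off build, any throwaway static file server on the build host works the same way; for example (an illustrative alternative, not from the original answer):
# Serve the key directory only while the build runs, then stop it
cd /path/to/keys && python3 -m http.server 8888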
This is another solution, using the dockito/vault repo ("Secret store to be used on Docker image building").
I create the dockito/vault service and an Ubuntu image, attach my private key to the volume, and run it as a process using:
docker run -it -v ~/.ssh:/vault/.ssh ubuntu /bin/bash -c "echo mysupersecret > /vault/.ssh/key"
docker run -d -p 14242:3000 -v ~/.ssh:/vault/.ssh dockito/vault
And, here is my Dockerfile
FROM ubuntu:14.04
RUN apt-get update -y && \
    apt-get install -y curl && \
    curl -L $(ip route|awk '/default/{print $3}'):14242/ONVAULT > /usr/local/bin/ONVAULT && \
    chmod +x /usr/local/bin/ONVAULT
ENV REV_BREAK_CACHE=1
RUN ONVAULT echo ENV: && env && echo TOKEN ENV && echo $TOKEN
RUN ONVAULT ls -lha ~/.ssh/
RUN ONVAULT cat ~/.ssh/key
You can use Alpine Linux to reduce the final build size. Build the image as:
docker build -f Dockerfile -t mohan08p/VaultTest .
And you are done. You can inspect the image: the secrets are not stored inside it, as this is empty:
docker run -it mohan08p/VaultTest ls /root/.ssh
This is a good technique for passing the .ssh keys at build time. The only disadvantage is the additional Vault service that needs to stay running.
You could mount the SSH keys into the container at runtime.
docker run -v /path/to/ssh/key:/path/to/key/in/container image command

Add Ruby SDK from Docker container as a remote SDK on RubyMine

RubyMine has options to add remote SDKs using Vagrant and SSH; however, I decided to go with Docker. I already created a Ruby container, but I don't know how to enable SSH access to it so RubyMine can set it as the remote SDK.
Is it possible?
I tried to follow this article, but the Ruby image doesn't have yum, and the epel-release package is for Fedora/RedHat.
Hey, are you using the official Ruby Docker image?
If so, it's based on Debian and you'll have to use apt-get to install packages.
Here's a handy script for installing openssh-server and configuring a user in a Dockerfile:
FROM ruby:2.1.9
#======================
# Install OpenSSH server (sshd)
#======================
# NOTE: ${RUN_DIR} is not defined in this Dockerfile; either set it
# (e.g. ENV RUN_DIR=/var/run/sshd) or the PidFile path becomes /sshd.pid.
RUN apt-get update -qqy \
  && apt-get -qqy install openssh-server \
  && echo "PidFile ${RUN_DIR}/sshd.pid" >> /etc/ssh/sshd_config \
  && sed -i 's|session required pam_loginuid.so|session optional pam_loginuid.so|g' /etc/pam.d/sshd \
  && mkdir -p /var/run/sshd \
  && rm -rf /var/lib/apt/lists/*
# Add user rubymine with password rubymine and give ownership of rubymine home dir
RUN adduser --quiet rubymine \
  && echo "rubymine:rubymine" | chpasswd \
  && chown -R rubymine:rubymine /home/rubymine
EXPOSE 22
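With that image built, a quick smoke test could look like this (a sketch; the image name, container name, and host port are made up):
docker build -t ruby-sshd .
docker run -d -p 2222:22 --name ruby-sshd ruby-sshd /usr/sbin/sshd -D
ssh rubymine@localhost -p 2222    # password: rubymine
RubyMine's remote SDK dialog would then point at localhost, port 2222, user rubymine.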
I'm not sure exactly which configurations RubyMine supports, but it's possible to open a tty in the container without needing SSH:
#run it as a daemon (keep stdin open so the default irb process stays alive)
docker run -d -it --name=myruby ruby:2.1.9
#connect to it
docker exec -it myruby /bin/bash
UPDATE:
Try setting DOCKER_HOST environment variable to listen on a tcp port:
export DOCKER_HOST='tcp://localhost:2376'

Docker Container from php:5.6-apache as root

This is related to Docker php:5.6-Apache Development Environment missing permissions on volume mount.
I have tried pretty much everything to make the mounted volume readable by www-data. My current solution is moving, via scripts, the folders the application needs to /var and giving them the proper permissions to be writable by www-data, but that is becoming hard to maintain.
Given that it's a development environment, I don't mind it being a security hole, so I would like to run Apache as root, but I get:
Error: Apache has not been designed to serve pages while running as
root. There are known race conditions that will allow any local user
to read any file on the system. If you still desire to serve pages as
root then add -DBIG_SECURITY_HOLE to the CFLAGS line in your
src/Configuration file and rebuild the server. It is strongly
suggested that you instead modify the User directive in your
httpd.conf file to list a non-root user.
Is there any easy way I can accomplish this using the docker image php:5.6-apache?
This is my docker-compose.yml
version: '2'
services:
  api:
    container_name: api
    privileged: true
    build:
      context: .
      dockerfile: apigility/Dockerfile
    ports:
      - "2020:80"
    volumes:
      - /ft/code/api:/var/www:rw
And this is my Dockerfile:
FROM php:5.6-apache
USER root
RUN apt-get update \
  && apt-get install -y sudo openjdk-7-jdk \
  && echo "www-data ALL=NOPASSWD: ALL" >> /etc/sudoers
RUN apt-get install -y git zlib1g-dev libmcrypt-dev nano vim --no-install-recommends \
  && apt-get clean \
  && rm -r /var/lib/apt/lists/* \
  && docker-php-ext-install mcrypt zip \
  && curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer \
  && a2enmod rewrite \
  && sed -i 's!/var/www/html!/var/www/public!g' /etc/apache2/apache2.conf \
  && echo "AllowEncodedSlashes On" >> /etc/apache2/apache2.conf \
  && cp /usr/src/php/php.ini-production /usr/local/etc/php/php.ini \
  && printf '[Date]\ndate.timezone=UTC' > /usr/local/etc/php/conf.d/timezone.ini
WORKDIR /var/www
Why not do exactly what it says in the question you referred to?
RUN usermod -u 1000 www-data
RUN groupmod -g 1000 www-data
This is not a hack; it's a proper solution to the problem you have in a development environment.
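The 1000 assumes your host user's UID/GID. To confirm what the mounted code actually uses (the path comes from the Compose file above):
# On the host: your UID/GID, and the numeric owner of the volume contents
id -u && id -g
ls -ln /ft/code/api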
So, I managed to make the mounted data available to www-data by using part of the answer in the related post, but another step is required for it to work.
After you run docker-machine start default, you need to SSH into it and run the following:
sudo mkdir --parents /code [where /code is the shared folder in VirtualBox]
sudo mount -t vboxsf -o uid=999,gid=999 code /code [this makes sure the uid and gid are 999 so the next part works]
Then in your Dockerfile add:
RUN usermod -u 999 www-data \
&& groupmod -g 999 www-data
After it's mounted, /code will be owned by www-data, and problem solved!
Another, better solution: add this to your Dockerfile.
RUN cd ~ \
  && apt-get -y install dpkg-dev debhelper libaprutil1-dev libapr1-dev libpcre3-dev liblua5.1-0-dev autotools-dev \
  && apt-get source apache2.2-common \
  && cd apache2-2.4.10 \
  && export DEB_CFLAGS_SET="-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -DBIG_SECURITY_HOLE" \
  && dpkg-buildpackage -b \
  && cd .. \
  && dpkg -i apache2-bin_2.4.10-10+deb8u7_amd64.deb \
  && dpkg -i apache2.2-common_2.4.10-10+deb8u7_amd64.deb
After that, you should be able to run Apache as root.
PS: apache2-2.4.10, apache2-bin_2.4.10-10+deb8u7_amd64.deb, and apache2.2-common_2.4.10-10+deb8u7_amd64.deb may change according to your source.
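One hedged follow-up: on php:5.6-apache, the user Apache runs as comes from APACHE_RUN_USER/APACHE_RUN_GROUP in /etc/apache2/envvars, and the official php images rewrite that file so the values can be overridden from the environment. So after the rebuild you would still need something like:
# Assumes the rebuilt Apache was compiled with -DBIG_SECURITY_HOLE as above
ENV APACHE_RUN_USER=root \
    APACHE_RUN_GROUP=root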