Docker: What is the simplest way to secure a private registry? - authentication

Our Docker images contain closed-source software, so we need to store them somewhere safe, using our own private Docker registry.
We are looking for the simplest way to deploy a private Docker registry with a simple authentication layer.
I found:
this manual way http://www.activestate.com/blog/2014/01/deploying-your-own-private-docker-registry
and the shipyard/docker-private-registry Docker image, based on stackbrew/registry and adding basic auth via Nginx - https://github.com/shipyard/docker-private-registry
I'm leaning towards shipyard/docker-private-registry, but is there another, better way?

I'm still learning how to run and use Docker, so consider this just an idea:
# Run the registry on the server, allow only localhost connection
docker run -p 127.0.0.1:5000:5000 registry
# On the client, set up SSH tunneling
ssh -N -L 5000:localhost:5000 user@server
The registry is then accessible at localhost:5000; authentication is handled by SSH, which you probably already know and use.
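With the tunnel up, you push and pull through the forwarded port as if the registry were local; a minimal sketch (the image name myimage is just an illustration):
docker tag myimage localhost:5000/myimage
docker push localhost:5000/myimage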
Sources:
https://blog.codecentric.de/en/2014/02/docker-registry-run-private-docker-image-repository/
https://docs.docker.com/userguide/dockerlinks/

You can also use an Nginx front-end with Basic Auth and an SSL certificate.
Regarding the SSL certificate: I spent a couple of hours trying to get a working self-signed certificate, but Docker wasn't able to talk to the registry with it. To solve this I used a free signed certificate, which works perfectly. (I used StartSSL, but there are others.)
Also be careful when generating the certificate. If you want the registry running at the URL registry.damienroch.com, you must issue the certificate for that exact URL, including the sub-domain, otherwise it's not going to work.
You can perform all this setup using Docker and my nginx-proxy image (See the README on Github: https://github.com/zedtux/nginx-proxy).
This means that if you have installed nginx using the distribution's package manager, you will replace it with a containerised nginx.
Place your certificate (.crt and .key files) on your server in a folder (I'm using /etc/docker/nginx/ssl/ and the certificate names are private-registry.crt and private-registry.key)
Generate a .htpasswd file and upload it to your server (I'm using /etc/docker/nginx/htpasswd/ and the filename is accounts.htpasswd; a way to generate it is sketched at the end of this answer)
Create a folder where the images will be stored (I'm using /etc/docker/registry/)
Run my nginx-proxy image using docker run
Run the Docker registry with some environment variables that nginx-proxy will use to configure itself.
Here is an example of the commands to run for the previous steps:
sudo docker run -d --name nginx -p 80:80 -p 443:443 \
  -v /etc/docker/nginx/ssl/:/etc/nginx/ssl/ \
  -v /var/run/docker.sock:/tmp/docker.sock \
  -v /etc/docker/nginx/htpasswd/:/etc/nginx/htpasswd/ \
  zedtux/nginx-proxy:latest
sudo docker run -d --name registry \
  -e VIRTUAL_HOST=registry.damienroch.com \
  -e MAX_UPLOAD_SIZE=0 \
  -e SSL_FILENAME=private-registry \
  -e HTPASSWD_FILENAME=accounts \
  -e DOCKER_REGISTRY=true \
  -v /etc/docker/registry/data/:/tmp/registry \
  registry
The first command starts nginx and the second one starts the registry. It's important to run them in this order.
When both are up and running, you should be able to log in with:
docker login https://registry.damienroch.com
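As a side note, the accounts.htpasswd file from step 2 can be generated with the htpasswd utility (from the apache2-utils/httpd-tools package); a sketch with placeholder usernames:
# create the file with a first user (prompts for a password)
htpasswd -c /etc/docker/nginx/htpasswd/accounts.htpasswd myuser
# add further users without -c, so the file isn't overwritten
htpasswd /etc/docker/nginx/htpasswd/accounts.htpasswd anotheruser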

I have created an almost ready-to-use, and certainly ready-to-function, setup for running a docker-registry: https://github.com/kwk/docker-registry-setup .
Maybe it helps.
Everything (registry, auth server, and LDAP server) runs in containers, which makes the parts replaceable as soon as you're ready. The setup is fully configured to make it easy to get started. There are even demo certificates for HTTPS, but they should be replaced at some point.
If you don't want LDAP authentication but simple static authentication, you can disable it in auth/config/config.yml and put in your own combination of usernames and hashed passwords.
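Assuming the static list expects bcrypt-hashed passwords (check the repository's README; this is an assumption about the config format), one way to produce an entry is:
# print a user:bcrypt-hash pair without writing any file
htpasswd -nbB myuser mypassword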

Related

How to resolve peer unverified exception in a secure NiFi cluster?

I set up a secured NiFi cluster with TLS certificates provided by the organisation. On accessing the UI I get the error "javax.net.ssl.SSLPeerUnverifiedException: Hostname abc.com not verified: certificate: sha256/abc/abcabc= DN: CN=abc.com, OU=Abc Operations, O=Abc Corporation Limited, C=SG subjectAltNames: [abc.com]". I have referred to https://nifi.apache.org/docs/nifi-docs/html/walkthroughs.html#securing-nifi-with-provided-certificates.
Is there anything I missed to enable peer-to-peer communication while using SSL?
I had the same problem and found a solution in the NiFi TLS Toolkit.
Note: on my cluster, authentication worked correctly; the problem was only in Java's SSL verification.
In short: the problem was indeed in --subjectAlternativeNames.
Generating SSL keys with my own root CA did not work for me. A good instruction (but old): https://community.cloudera.com/t5/Community-Articles/How-to-create-user-generated-keys-for-securing-NiFi/ta-p/245551
CentOS Linux 8
NiFi 1.14.0
nifi-toolkit 1.15.2
My way with NiFi TLS-toolkit:
Download nifi-toolkit-*.tar.gz to a Linux machine (let's say the machine's IP is 0.0.0.1; we need it because this VM will act as the "certificateAuthorityHostname"); the download link is on the Apache NiFi download page.
sudo wget https://dlcdn.apache.org/nifi/1.15.2/nifi-toolkit-1.15.2-bin.tar.gz
Unarchive it
sudo tar -xvf nifi-toolkit-1.15.2-bin.tar.gz
Generate all the keys with one long command:
../security_output - this directory (or any other name) needs to be created before running the main command (it's useful to store all key files in one place)
sudo ./bin/tls-toolkit.sh standalone -h - this help command explains the arguments
OU - equals the VM names in my cluster
!!! --subjectAlternativeNames - this is the main reason the error javax.net.ssl.SSLPeerUnverifiedException: Hostname <ip / dns> not verified is raised
-O - this argument overwrites the keys in the folder, be careful
Generate command:
sudo ./bin/tls-toolkit.sh standalone \
  --hostnames '0.0.0.1,0.0.0.2,0.0.0.3' \
  -c '0.0.0.1' \
  -C 'CN=0.0.0.1,OU=nifi-prod-cluster-01' \
  -C 'CN=0.0.0.2,OU=nifi-prod-cluster-02' \
  -C 'CN=0.0.0.3,OU=nifi-prod-cluster-03' \
  -O -o ../security_output \
  --subjectAlternativeNames '0.0.0.1,0.0.0.2,0.0.0.3,nifi-prod-cluster-01,nifi-prod-cluster-02,nifi-prod-cluster-03'
After generating the keys, I archive the full security_output directory:
sudo tar -zcvf security_output.tar.gz security_output
And copy this archive to the other VMs of the cluster: to 0.0.0.2 and 0.0.0.3 in my example.
Then we need to move keystore.jks and truststore.jks to the nifi/conf/ directory, next to nifi.properties, on each node.
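A sketch of these two steps, assuming NiFi lives under /opt/nifi (adjust paths to your install):
# on node 0.0.0.2, after copying security_output.tar.gz there
sudo tar -xvf security_output.tar.gz
sudo cp security_output/0.0.0.2/keystore.jks /opt/nifi/conf/
sudo cp security_output/0.0.0.2/truststore.jks /opt/nifi/conf/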
Edit nifi.properties. The key passwords can be found in security_output/0.0.0.X/nifi.properties. I replaced only these parameters:
nifi.security.autoreload.enabled=false
nifi.security.autoreload.interval=10 secs
nifi.security.keystore=./conf/keystore.jks
nifi.security.keystoreType=jks
nifi.security.keystorePasswd=34dgsOBKdS+9DGHIm849ALK3JaNBdd738ddsgjfghb4J
nifi.security.keyPasswd=34dgsOBKdS+9DGHIm849ALK3Jaddsgjfghb4J
nifi.security.truststore=./conf/truststore.jks
nifi.security.truststoreType=jks
nifi.security.truststorePasswd=/n1xI9AjcwutNBdd738uOQeQL5O9ALK3i3KwylEYMW5
nifi.security.user.authorizer=single-user-authorizer
nifi.security.allow.anonymous.authentication=false
nifi.security.user.login.identity.provider=single-user-provider
nifi.security.user.jws.key.rotation.period=PT1H
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
Restart NiFi:
sudo service nifi restart && tail -f /opt/nifi/logs/nifi-app.log
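To confirm that the served certificate now carries the expected subjectAltNames, you can inspect it with openssl (9443 is an assumed HTTPS port; use whatever your nifi.web.https.port is set to):
openssl s_client -connect 0.0.0.1:9443 -servername 0.0.0.1 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'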
Update: you may want to set one password for the keys on all machines (it's easier to set up), or set the number of days the keys are valid: https://nifi.apache.org/docs/nifi-docs/html/toolkit-guide.html#standalone
Links:
A useful link for my guide (but old): https://pierrevillard.com/tag/tls-toolkit/
This helped me find the right idea: https://community.cloudera.com/t5/Community-Articles/Using-the-TLS-Toolkit-to-simplify-security/ta-p/247531

Can't log in as root user in native templates of Jelastic environments

When I create a new environment with some nodes (e.g. with Nginx), I can't access the node as the root user.
I logged in as a regular user, not as root.
Using username "251X-XXX".
Authenticating with public key "rsa-key-XXXXXXXX"
Last login: Thu Sep 28 09:11:56 2017
nginx@node251X-delete ~ $ sudo date
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo] password for nginx:
Sorry, try again.
Brief:
I didn't receive a root password by email (I'm the owner of this environment).
I can't change this node to a Docker image.
There's no Reset Password option on the dashboard.
sudo doesn't work.
This also happens with other non-Docker nodes (Tomcat, MySQL, ...).
Is there any alternative or configuration that lets me access this node as root?
Thanks
Jelastic doesn't provide root access to separate containers. At the same time, while accessing containers via SSH, a user receives all required permissions and can additionally manage the main services with sudo commands of the following kind (among others):
sudo /etc/init.d/jetty start
sudo /etc/init.d/mysql stop
sudo /etc/init.d/tomcat restart
sudo /etc/init.d/memcached status
sudo /etc/init.d/mongod reload
sudo /etc/init.d/nginx upgrade
sudo /etc/init.d/httpd help
For example, you can restart nginx with the following command:
sudo /etc/init.d/nginx restart
No password will be requested.
Note: if you deploy any application, change configurations, or add any extra functionality via SSH to your Jelastic environment, this will not be displayed in the Jelastic dashboard.
Using our documentation you’ll find out how to:
use SFTP and FISH protocols
manage containers via SSH with Capistrano
Root user is only provided for self-managed nodes (custom Docker / Elastic VPS).
You can execute specific whitelisted commands with sudo (e.g. sudo service nginx restart). Besides that, you shouldn't need root access.
If you feel otherwise, contact your hosting provider to discuss your needs; they can find a solution for you.

How do I access my TLS/HTTPS keys in order to start ListenAndServeTLS?

My server uses Let's Encrypt to get its TLS certificate to serve over HTTPS.
I'm electing to use the standard net/http package over Apache or nginx, so I used the webroot installation method, and it placed the cert files in /etc/letsencrypt/live/mysite.
The issue is that the live directory is only accessible by the root user. My golang program requires the certs in this directory to function and serve over HTTPS.
However for obvious reasons I'm not running my program as the root user.
So that leads me to wonder: how do I access these files without having to insecurely run my web server as root permanently?
You have a few options:
sudo chown -R your-user /etc/letsencrypt/live/mysite
Or
sudo cp -a /etc/letsencrypt/live/mysite ./ssl/ && sudo chown -R your-user ./ssl/
Or
Use a container for your app and copy your app and the certs into it; since it will be running as root inside the container, it won't matter.
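Keep in mind that Let's Encrypt certificates renew roughly every 90 days, so a one-off copy will go stale. If your client is certbot, a deploy hook can repeat the copy on every renewal; a sketch under the paths from the question (your-user and the target ssl directory are placeholders):
sudo certbot renew --deploy-hook \
  'cp -aL /etc/letsencrypt/live/mysite/fullchain.pem /etc/letsencrypt/live/mysite/privkey.pem /home/your-user/ssl/ && chown -R your-user /home/your-user/ssl/'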

Inject host's SSH keys into Docker Machine with Docker Compose

I am using Docker on Mac OS X with Docker Machine (with the default boot2docker machine), and I use docker-compose to set up my development environment.
Let's say that one of the containers is called "stack". Now what I want to do is call:
docker-compose run stack ssh user@stackoverflow.com
My public key (which has been added to stackoverflow.com and which will be used to authenticate me) is located on the host machine. I want this key to be available to the Docker Machine container so that I can authenticate myself against stackoverflow.com using that key from within the container. Preferably without physically copying my key to the Docker Machine.
Is there any way to do this? Also, if my key is password-protected, is there any way to unlock it once, so that I won't have to enter the password manually at every use?
You can add this to your docker-compose.yml (assuming your user inside container is root):
volumes:
  - ~/.ssh:/root/.ssh
You can also check a more advanced solution with ssh-agent (I have not tried it myself).
WARNING: This feature seems to have limited support in Docker Compose and is more designed for Docker Swarm.
(I haven't checked to make sure, but) My current impression is that:
In Docker Compose, secrets are just bind-mounted volumes, so there's no additional security compared to plain volumes
The ability to change a secret's permissions on the Linux host may be limited
See answer comments for more details.
Docker has a feature called secrets, which can be helpful here. To use it one could add the following code to docker-compose.yml:
---
version: '3.1' # Note the minimum file version for this feature to work
services:
  stack:
    ...
    secrets:
      - host_ssh_key
secrets:
  host_ssh_key:
    file: ~/.ssh/id_rsa
Then the new secret file can be accessed in the Dockerfile like this:
RUN mkdir ~/.ssh && ln -s /run/secrets/host_ssh_key ~/.ssh/id_rsa
Secret files won't be copied into the container:
When you grant a newly-created or running service access to a secret, the decrypted secret is mounted into the container in an in-memory filesystem
For more details please refer to:
https://docs.docker.com/engine/swarm/secrets/
https://docs.docker.com/compose/compose-file/compose-file-v3/#secrets
If you're using OS X and encrypted keys, this is going to be a PITA. Here are the steps I went through while figuring this out.
Straightforward approach
One might think that there’s no problem. Just mount your ssh folder:
...
volumes:
  - ~/.ssh:/root/.ssh:ro
...
This should be working, right?
User problem
The next thing we'll notice is that we're using the wrong user ID. Fine, we'll write a script that copies the SSH keys and changes their owner. We'll also set the SSH user in the config so that the SSH server knows who's connecting.
...
volumes:
  - ~/.ssh:/root/.ssh-keys:ro
command: sh -c './.ssh-keys.sh && ...'
environment:
  SSH_USER: $USER
...
# ssh-keys.sh
mkdir -p ~/.ssh
cp -r /root/.ssh-keys/* ~/.ssh/
chown -R $(id -u):$(id -g) ~/.ssh
cat <<EOF >> ~/.ssh/config
User $SSH_USER
EOF
SSH key passphrase problem
In our company we protect SSH keys using a passphrase. That wouldn’t work in docker since it’s impractical to enter a passphrase each time we start a container.
We could remove a passphrase (see example below), but there’s a security concern.
openssl rsa -in id_rsa -out id_rsa2
# enter passphrase
# replace passphrase-encrypted key with plaintext key:
mv id_rsa2 id_rsa
SSH agent solution
You may have noticed that locally you don’t need to enter a passphrase each time you need ssh access. Why is that?
That's what the SSH agent is for. The SSH agent is basically a server which listens on a special file, a Unix socket, called the "ssh auth sock". You can see its location on your system:
echo $SSH_AUTH_SOCK
# /run/user/1000/keyring-AvTfL3/ssh
The SSH client communicates with the SSH agent through this file, so that you enter the passphrase only once. Once the key is decrypted, the SSH agent stores it in memory and sends it to the SSH client on request.
Can we use that in Docker? Sure, just mount that special file and specify a corresponding environment variable:
environment:
  SSH_AUTH_SOCK: $SSH_AUTH_SOCK
...
volumes:
  - $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
We don’t even need to copy keys in this case.
To confirm that keys are available we can use ssh-add utility:
if [ -z "$SSH_AUTH_SOCK" ]; then
echo "No ssh agent detected"
else
echo $SSH_AUTH_SOCK
ssh-add -l
fi
The problem of unix socket mount support in Docker for Mac
Unfortunately for OS X users, Docker for Mac has a number of shortcomings, one of which is its inability to share Unix sockets between Mac and Linux. There’s an open issue in D4M Github. As of February 2019 it’s still open.
So, is that a dead end? No, there is a hacky workaround.
SSH agent forwarding solution
Luckily, this issue isn't new. Long before Docker there was a way to use local SSH keys within a remote SSH session. This is called SSH agent forwarding. The idea is simple: you connect to a remote server through SSH, and from there you can reach further servers with your local keys, without copying the keys anywhere.
With Docker for Mac we can use a smart trick: share the SSH agent with the Docker virtual machine over a TCP SSH connection, and mount the resulting socket file from the virtual machine into the container where we need the SSH connection. Step by step:
First, we create an SSH session to an SSH server running in a container inside the Linux VM, through a TCP port. We use the real ssh auth sock here.
Next, the SSH server forwards our SSH keys to the SSH agent in that container. The SSH agent has a Unix socket at a location mounted into the Linux VM. I.e. the Unix socket works in Linux; the non-working Unix socket file on the Mac has no effect.
After that, we create our useful container with an SSH client, and share the Unix socket file which our local SSH session uses.
There's a bunch of scripts that simplify the process:
https://github.com/avsm/docker-ssh-agent-forward
Conclusion
Getting SSH to work in Docker could've been easier, but it can be done, and it's likely to be improved in the future. At least the Docker developers are aware of this issue, and they have even solved it for Dockerfiles with build-time secrets. And there's a suggestion for how to support Unix domain sockets.
You can forward the SSH agent:
something:
  container_name: something
  volumes:
    - $SSH_AUTH_SOCK:/ssh-agent # Forward the local machine's SSH key to Docker
  environment:
    SSH_AUTH_SOCK: /ssh-agent
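To check that the agent is actually reachable inside the container, you can list the loaded keys (assuming the OpenSSH client is installed in the image; the service name is taken from the snippet above):
docker-compose run something ssh-add -l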
You can use a multi-stage build to build your containers. This is the approach you can take:
Stage 1: build an image with SSH
FROM ubuntu as sshImage
LABEL stage=sshImage
ARG SSH_PRIVATE_KEY
WORKDIR /root/temp
RUN apt-get update && \
apt-get install -y git npm
RUN mkdir /root/.ssh/ &&\
echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa &&\
chmod 600 /root/.ssh/id_rsa &&\
touch /root/.ssh/known_hosts &&\
ssh-keyscan github.com >> /root/.ssh/known_hosts
COPY package*.json ./
RUN npm install
RUN cp -R node_modules prod_node_modules
Stage 2: build your container
FROM node:10-alpine
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY ./ ./
COPY --from=sshImage /root/temp/prod_node_modules ./node_modules
EXPOSE 3006
CMD ["npm", "run", "dev"]
Add an environment attribute to your compose file:
environment:
  - SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
Then pass the build arg from your build script like this:
docker-compose build --build-arg SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)"
And remove the intermediate image afterwards, for security. Hope this helps, cheers.
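Since stage 1 is labelled stage=sshImage, the intermediate image (whose layers contain the private key) can be removed after the build, for example with:
docker image prune --force --filter label=stage=sshImage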
Docker for Mac now supports mounting the ssh agent socket on macOS.
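The host agent is exposed inside the Docker Desktop VM at /run/host-services/ssh-auth.sock, which you can bind-mount into a container; a sketch (your-image is a placeholder for an image with the OpenSSH client installed):
docker run --rm \
  -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock \
  -e SSH_AUTH_SOCK=/run/host-services/ssh-auth.sock \
  your-image ssh-add -l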

CentOS7: Are you trying to connect to a TLS-enabled daemon without TLS?

I've installed Docker on CentOS 7; now I'm trying to launch the server in a Docker container.
$ docker run -d --name "openshift-origin" --net=host --privileged \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /tmp/openshift:/tmp/openshift \
openshift/origin start
This is the output:
Post http:///var/run/docker.sock/v1.19/containers/create?name=openshift-origin: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?
I have tried the same command with sudo and that works fine (I can also run images, use the OpenShift bash, etc.), but it feels wrong to use it, am I right? What is the solution to make this work as a normal user?
Docker is running (sudo service docker start). Restarting the CentOS did not help.
The error is:
/var/run/docker.sock: permission denied.
That seems pretty clear: the permissions on the Docker socket at /var/run/docker.sock do not permit you to access it. This is reasonably common, because handing someone access to the Docker API is effectively the same as giving them sudo privileges, but without any sort of auditing.
If you are the only person using your system, you can:
Create a docker group or similar if one does not already exist.
Make yourself a member of the docker group.
Modify the startup configuration of the Docker daemon to make the socket owned by that group by adding -G docker to the options. You'll probably want to edit /etc/sysconfig/docker to make this change, unless it's already configured that way.
With these changes in place, you should be able to access Docker from your user account without requiring sudo.
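A sketch of those three steps on CentOS 7 (log out and back in afterwards so the new group membership takes effect):
sudo groupadd docker               # create the group if it doesn't exist
sudo usermod -aG docker $USER      # add your account to it
# then add -G docker to OPTIONS in /etc/sysconfig/docker and restart:
sudo service docker restart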