How to configure Let's Encrypt certificates for nginx inside a Docker image? - ssl

I know how to configure Let's Encrypt for nginx. I'm having a hard time configuring Let's Encrypt with nginx inside a Docker image. The Let's Encrypt certificates are symlinked in the /etc/letsencrypt/live folder and I don't have permission to view the real certificate files inside /etc/letsencrypt/archive.
Can someone suggest a way out?

I'll add my mistake here; maybe someone will find it useful.
I mounted only the /live directory of letsencrypt, not the whole letsencrypt directory tree.
The problem with this:
The /live folder just holds symlinks to the /archive folder, which was not mounted into the Docker container with my approach.
(In fact I even mounted a /certs folder that symlinked to the live folder, because I had that certs folder in the development environment. Same problem: the real, symlink-target files were not mounted.)
All problems went away when I mounted /etc/letsencrypt instead of /live.
A part of my docker-compose.yml:
services:
  ngx:
    image: nginx
    container_name: ngx
    ports:
      - 80:80
      - 443:443
    links:
      - php-fpm
    volumes:
      - ../../com/website:/var/www/html/website
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./nginx_conf.d/:/etc/nginx/conf.d/
      - ./nginx_logs/:/var/log/nginx/
      - ../whereever/you/haveit/etc/letsencrypt/:/etc/letsencrypt
The last line in that config is the important one. I changed it from
- ./certs/:/etc/nginx/certs/
where /certs was a symlink to /etc/letsencrypt/live in my case. That cannot work, as described above.
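For illustration, a minimal nginx server block that uses the mounted certificates could look like the following (the domain and web root here are hypothetical); the symlinks under live/ resolve inside the container because archive/ is mounted alongside them:
server {
    listen 443 ssl;
    server_name example.com;

    # these paths exist inside the container because /etc/letsencrypt is mounted whole
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    root /var/www/html/website;
}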

If anyone is having this problem, I've solved it by mounting the folders into the Docker container.
I've mounted both the /etc/letsencrypt and /etc/ssl folders into Docker.
Docker has a -v flag to mount volumes. Don't forget to open port 443 for the container.
Depending on how you mount it, it's possible to enable HTTPS in the Docker container without changing nginx paths:
docker run -d -p 80:80 -p 443:443 -v /etc/letsencrypt/:/etc/letsencrypt/ -v /etc/ssl/:/etc/ssl/ <image name>

If you are using nginx, Docker and Let's Encrypt you might like the following GitHub project: https-portal.
It automates a lot of manual actions and makes it easy to manage your configuration using docker-compose. From the README:
Features
Test Locally
Redirections
Automatic Container Discovery
Hybrid Setup with Non-Dockerized Apps
Multiple Domains
Serving Static Sites
Share Certificates with Other Apps
HTTP Basic Auth
How it works
obtains an SSL certificate for each of your subdomains from Let's Encrypt.
configures Nginx to use HTTPS (and force HTTPS by redirecting HTTP to HTTPS)
sets up a cron job that checks your certificates every week and renews them if they expire within 30 days.
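As a rough sketch of what using it looks like (the service and domain names here are hypothetical; check the project's README for the current syntax), a docker-compose service could be as simple as:
https-portal:
  image: steveltn/https-portal:1
  ports:
    - '80:80'
    - '443:443'
  environment:
    # route the public domain to an internal app container
    DOMAINS: 'example.com -> http://my-app:8080'
    STAGE: 'production'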
For some background: the project was also discussed on Hacker News: HTTPS-Portal: Automated HTTPS server powered by Nginx, Let's Encrypt and Docker
(Disclaimer: I have no affiliation with the project, I'm just a user.)

Related

kubectl unable to connect to server: x509: certificate signed by unknown authority

I'm getting an error when running kubectl on one machine (Windows).
The k8s cluster is running on CentOS 7, Kubernetes 1.7, with a master and a worker.
Here's my .kube\config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.10.12.7:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:localhost.localdomain
  name: system:node:localhost.localdomain#kubernetes
current-context: system:node:localhost.localdomain#kubernetes
kind: Config
preferences: {}
users:
- name: system:node:localhost.localdomain
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
The cluster was built using kubeadm with the default certificates in the pki directory.
kubectl unable to connect to server: x509: certificate signed by unknown authority
One more solution in case it helps anyone:
My scenario:
using Windows 10
Kubernetes installed via the Docker Desktop UI 2.1.0.1
the installer created config file at ~/.kube/config
the value in ~/.kube/config for server is https://kubernetes.docker.internal:6443
using proxy
Issue: kubectl commands to this endpoint were going through the proxy. I figured it out after running kubectl --insecure-skip-tls-verify cluster-info dump, which displayed the proxy's HTML error page.
Fix: just make sure that this URL doesn't go through the proxy; in my case, in bash, I used export no_proxy=$no_proxy,*.docker.internal
So kubectl doesn't trust the cluster, because for whatever reason the configuration has been messed up (mine included). To fix this, you can use openssl to extract the certificate from the cluster
openssl.exe s_client -showcerts -connect IP:PORT
IP:PORT should be whatever is written after server: in your config.
Copy and paste everything from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- (those lines included) into a new text file, say myCert.crt. If there are multiple entries, copy all of them.
Now go to .kube\config and instead of
certificate-authority-data: <wrongEncodedPublicKey>
put
certificate-authority: myCert.crt
(it assumes you put myCert.crt in the same folder as the config file)
If you made the cert correctly, it will trust the cluster (I tried renaming the file and it no longer trusted afterwards).
I wish I knew what encoding certificate-authority-data uses, but after a few hours of googling I resorted to this solution, and looking back I think it's more elegant anyway.
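For what it's worth, certificate-authority-data is normally just the base64-encoded PEM certificate, so on a Linux shell you can usually reproduce the same myCert.crt directly from the config:
# extract and decode the CA certificate embedded in the kubeconfig
grep certificate-authority-data ~/.kube/config | awk '{print $2}' | base64 -d > myCert.crt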
Run:
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project devops1-218400
Here devops1-218400 is my project name. Replace it with your project name.
I got the same error while running $ kubectl get nodes as the root user. I fixed it by exporting kubelet.conf to the KUBECONFIG environment variable.
$ export KUBECONFIG=/etc/kubernetes/kubelet.conf
$ kubectl get nodes
In my case, it worked by simply adding --insecure-skip-tls-verify at the end of kubectl commands, as a one-off.
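For example, as a one-off:
kubectl get nodes --insecure-skip-tls-verify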
Sorry I wasn't able to provide this earlier, I just realized the cause:
So on the master node we're running a kubectl proxy:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
I stopped this and, voila, the error was gone.
I'm now able to do
kubectl get nodes
NAME STATUS AGE VERSION
centos-k8s2 Ready 3d v1.7.5
localhost.localdomain Ready 3d v1.7.5
I hope this helps those who stumbled upon this scenario.
In my case I resolved this issue by copying the kubelet configuration to my home kube config:
cat /etc/kubernetes/kubelet.conf > ~/.kube/config
This was happening because my company's network does not allow self-signed certificates through the network. Try switching to a different network.
For those of you who were late to the thread like I was, and for whom none of these answers worked, I may have the solution:
When I copied my .kube/config file over to my Windows 10 machine (with kubectl installed), I didn't change the IP address from 127.0.0.1:6443 to the master's IP address, which was 192.168.x.x (a Windows 10 machine connecting to a Raspberry Pi cluster on the same network). Make sure that you do this, and it may fix your problem like it did mine.
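In other words, the only change needed in ~/.kube/config is the server line, roughly like this (addresses are examples):
# before
server: https://127.0.0.1:6443
# after: the master node's address on the LAN
server: https://192.168.x.x:6443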
On GCP:
Check your gcloud version:
localMacOS# gcloud version
Run:
localMacOS# gcloud container clusters get-credentials 'clusterName' --zone=us-'zoneName'
Get clusterName and zoneName from your console, here: https://console.cloud.google.com/kubernetes/list
ref: x509 / Marketplace deployments on GCP / Kubernetes
I got this because I was not connected to the office's VPN
In case of this error you should export the kubecfg, which contains the certs:
export KOPS_STATE_STORE=s3://<your S3 store>
kops export kubecfg <your cluster name>
Now you should be able to access and see the resources of your cluster.
This is an old question, but in case it helps someone else, here is another possible reason.
Let's assume that you have deployed Kubernetes as user x. If the .kube dir is under /home/x and you connect to the node as root or as user y, you will get this error.
You need to switch to that user's profile so Kubernetes can load the configuration from the .kube dir.
Update: when copying the ~/.kube/config file content to a local PC from a master node, make sure to replace the hostname of the load balancer with a valid IP. In my case the problem was related to the DNS lookup.
Hope this helps.

Issue with docker push on a local registry over HTTPS: access to resource denied

I have a problem with my Docker registry. My "server" VM runs Kali Linux. I created the Docker registry over HTTP and used a CentOS VM as a client. I declared the registry as insecure on the client VM and it worked perfectly.
Now I am trying to put it behind HTTPS. To do that, I use nginx as a proxy. I followed this tutorial: Step 5 — Setting Up SSL, except for Part 8 to make it a service (I don't know why, but I can't do it).
Because I don't have a domain name, I used a fake one. So that it is recognized, I added my IP (192.168.X.X) and the domain name I used (myregistryexemple) to the /etc/hosts file on both VMs.
As instructed by the tutorial, I generated the certificate on my "server" VM (the Kali one) and sent it via scp to my client VM. I made the CentOS VM trust the certificate with these commands:
yum install ca-certificates
update-ca-trust force-enable
cp cert.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
I restarted the Docker service on the client VM, then launched the Docker registry and the nginx proxy with "docker-compose up" on my Kali VM.
I tagged and tried to push an Ubuntu image to the registry:
docker tag ubuntu myregistryexemple/ubuntu
docker push myregistryexemple/ubuntu
But I get this error:
The push refers to a repository [docker.io/myregistryexemple/ubuntu]
56827159aa8b: Preparing
440e02c3dcde: Preparing
29660d0e5bb2: Preparing
85782553e37a: Preparing
745f5be9952c: Preparing
denied: requested access to the resource is denied
Then I tried to push to localhost directly:
docker tag ubuntu localhost:5000/ubuntu && docker push localhost:5000/ubuntu
Then I did docker login on the domain from the client VM, which worked, but when I tried to pull from my domain registry on the client VM, Docker could not find the images I had tried to push to the registry.
Does anyone have an idea why, and know how to help me?
OK, so I found a way to make it work.
It is quite simple: just follow the complete tutorial I quoted in the question ( https://www.digitalocean.com/community/tutorials/how-to-set-up-a-private-docker-registry-on-ubuntu-14-04#step-5-%E2%80%94-setting-up-ssl ).
After you have created the repository, and before you push/pull a Docker image:
You need to edit /etc/hosts on both the client and server VMs.
Add a line mapping the server VM's IP to the chosen domain (the hosts file format is IP first, then name): serverVmIp domainChosen
Save and quit.
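For example, with the placeholder values from this question, the added /etc/hosts line on both machines would look like this (IP address first, then the name):
192.168.X.X   myregistryexemple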
Now we need the client to trust the generated certificate. To do that, you can use this tutorial: http://kb.kerio.com/product/kerio-connect/server-configuration/ssl-certificates/adding-trusted-root-certificates-to-the-server-1605.html
Then restart your registry and your Docker daemon. You should now be able to use your domain name to push/pull to your registry over HTTPS.

Problems setting up artifactory as a docker registry

I'm currently trying to set up a private Docker registry in Artifactory (v4.7.4).
I've set up local, remote and virtual Docker repositories and added Apache as a reverse proxy, plus a DNS entry for the virtual "docker" repo.
The reverse proxy is working, but if I try something like:
docker pull docker.my.company.com/ubuntu:16.04
I'm getting:
https://docker.my.company.com/v1/_ping: x509: certificate is valid for
*.company.com, company.com, not docker.my.company.com
My Artifactory URL is "my.company.com/artifactory" and I want the repositories to be accessible at repo.my.company.com/artifactory.
I also have a wildcard certificate for company.com, so I don't understand what the problem is here.
Or is there a way to access Artifactory over plain HTTP, without SSL?
Any ideas?
According to RFC 2818, a wildcard certificate matches only one level of subdomains, not deeper:
E.g., *.a.com matches foo.a.com but not bar.foo.a.com. f*.com matches foo.com but not bar.com.
In this case what you should do is use ports for mapping repositories instead of subdomains, so the Docker repository will be accessible under, for example, my.company.com:5001/ instead of docker.my.company.com.
You can find the explanation of the change, and how to do it using the Artifactory proxy settings generator, in the User Guide.
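A rough sketch of the port-based approach (the repository name, port and certificate paths here are hypothetical; the proxy settings generator in the User Guide produces the exact rules for your setup) could look like this in nginx:
server {
    listen 5001 ssl;
    server_name my.company.com;

    # the wildcard *.company.com certificate is valid for this name
    ssl_certificate     /etc/nginx/ssl/company.com.crt;
    ssl_certificate_key /etc/nginx/ssl/company.com.key;

    client_max_body_size 0;   # allow large image layer uploads

    location / {
        # forward to a specific Docker repository in Artifactory
        proxy_pass http://localhost:8081/artifactory/api/docker/docker-local/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}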
If you are prepared to live with the certificate-name mismatch for now, and understand the security implications of ignoring the mismatch and accessing the repo insecurely, you can apply the following workaround:
Edit /etc/default/docker and add the option DOCKER_OPTS="--insecure-registry docker.my.company.com".
Restart docker: [sudo] service docker restart.
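On newer Docker installations that don't read /etc/default/docker, the same setting typically lives in /etc/docker/daemon.json instead (followed by the same Docker restart):
{
  "insecure-registries": ["docker.my.company.com"]
}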

artifactory pro registry docker image

I am trying out the 30-day trial version of the artifactory-registry Docker image to evaluate the Docker repository for our internal use. I am following the documentation: https://www.jfrog.com/confluence/display/RTF/Running+with+Docker
After I run the Docker image I am able to access the UI on port 8081; however, when I try to push an image I get the following error:
"The plain HTTP request was sent to HTTPS port"
Here's how I deploy the image:
sudo docker pull mysql
sudo docker tag mysql localhost:5002/mysql
sudo docker push localhost:5002/mysql
Also, the documentation says that Artifactory can be accessed on the following URLs:
http://localhost/artifactory
http://localhost:8081/artifactory
https://localhost:5000/v2 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-remote/v2)
https://localhost:5001/v1 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-prod-local/v1)
https://localhost:5002/v1 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-dev-local/v1)
https://localhost:5001/v2 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-prod-local/v2)
https://localhost:5002/v2 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-dev-local/v2)
But I get a 404 trying to access any of the HTTPS URLs.
What am I missing?
This appears to be an NGINX configuration issue (as described here) with not forwarding HTTPS requests to Artifactory.
Changing the configuration to forward your requests should fix your issue.

Setup secured Jenkins master with docker

I would like to set up a secured Jenkins master server on EC2 with Docker.
I'm using the standard Jenkins Docker image from here: https://registry.hub.docker.com/_/jenkins/
By default it opens an unsecured HTTP port 8080. However, I want it to use the standard port 443 with HTTPS (at first I want to use a self-signed SSL certificate).
I researched this topic a little and found several possible solutions. I'm not really experienced with Docker, so I still couldn't find a simple one I can use or implement. Here are some options I found:
Use the standard Jenkins Docker image on 8080, but configure a secured Apache or nginx server on my EC2 instance that redirects the traffic. I don't like this because the server would be outside Docker, so I cannot keep it in version control.
Somehow modify the Jenkins Docker image to start Jenkins with HTTPS configured according to https://wiki.jenkins-ci.org/display/JENKINS/Starting+and+Accessing+Jenkins. I'm not sure how to do that though. Do I need to create my own Docker container?
Use a Docker image with secured nginx like this one https://registry.hub.docker.com/u/marvambass/nginx-ssl-secure/ and somehow combine the two Docker containers or make them communicate? Not sure how to do that either.
Could someone experienced please recommend the best solution?
P.S. I'm not sure how much trouble EC2 is going to give me, but I assume it's just about opening 443 in a security group.
After going through a few tutorials on Docker I found that the easiest option to follow is number 2. The Jenkins Docker image declares its entry point in a way that lets you easily pass arguments to Jenkins.
Let's say you have your keystore (self-signed in this example) as jenkins_keystore.jks in the home folder of an Ubuntu EC2 instance. Here is an example of how to generate one:
keytool -genkey -keyalg RSA -alias selfsigned -keystore jenkins_keystore.jks -storepass mypassword -keysize 2048
Now you can easily configure Jenkins to run on HTTPS only, without creating your own Docker image:
docker run -v /home/ubuntu:/var/jenkins_home -p 443:8443 jenkins --httpPort=-1 --httpsPort=8443 --httpsKeyStore=/var/jenkins_home/jenkins_keystore.jks --httpsKeyStorePassword=mypassword
-v /home/ubuntu:/var/jenkins_home exposes the host home folder to the Jenkins Docker container
-p 443:8443 maps Jenkins port 8443 in the container to port 443 on the host
--httpPort=-1 --httpsPort=8443 disables Jenkins HTTP and exposes it via HTTPS on port 8443 inside the container
--httpsKeyStore=/var/jenkins_home/jenkins_keystore.jks --httpsKeyStorePassword=mypassword provides your keystore, which has been mapped from the host home folder into the container's /var/jenkins_home/ folder
Like otognan, I would also recommend doing #2, but it seems that his answer is outdated.
First of all, use the jenkins/jenkins:lts image, as the jenkins image is deprecated (see https://hub.docker.com/_/jenkins/ ).
Now, let's set it up. You'll need to stop your current Jenkins container to free up the ports.
First, you'll need a certificate keystore. If you don't have one, you can create a self-signed one with:
keytool -genkey -keyalg RSA -alias selfsigned -keystore jenkins_keystore.jks -storepass mypassword -keysize 4096
Next, let's pass the SSL arguments into the Jenkins container. This is the script I use to do so:
read -s -p "Keystore Password:" password
echo
sudo cp jenkins_keystore.jks /var/lib/docker/volumes/jenkins_home/_data
docker run -d -v jenkins_home:/var/jenkins_home -v $(which docker):/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock -p 443:8443 -p 50000:50000 jenkins/jenkins:lts --httpPort=-1 --httpsPort=8443 --httpsKeyStore=/var/jenkins_home/jenkins_keystore.jks --httpsKeyStorePassword=$password
this script prompts the user for the keystore password
-v jenkins_home:/var/jenkins_home creates a named volume called jenkins_home, which happens to exist at /var/lib/docker/volumes/jenkins_home/_data by convention.
if the directory at /var/lib/docker/volumes/jenkins_home/_data does not exist yet, you will need to create the named volume with docker volume before copying the keystore (see the sketch after this list).
-p 443:8443 maps Jenkins port 8443 in the container to port 443 on the host
--httpPort=-1 --httpsPort=8443 disables HTTP and exposes HTTPS on port 8443 inside the container (port 443 outside the container).
--httpsKeyStore=/var/jenkins_home/jenkins_keystore.jks --httpsKeyStorePassword=$password provides your keystore, which exists at /var/jenkins_home/jenkins_keystore.jks inside the container ( /var/lib/docker/volumes/jenkins_home/_data/jenkins_keystore.jks outside the container).
-v /var/run/docker.sock:/var/run/docker.sock is optional, but is the recommended way to allow your jenkins instance to spin up other docker containers.
WARNING: By giving the container access to /var/run/docker.sock, it is easy to break out of the containment provided by the container, and gain access to the host machine. This is obviously a potential security risk.
-v $(which docker):/usr/bin/docker is also optional, but allows your jenkins container to be able to run the docker binary.
Be aware that, because docker is now dynamically linked, it no longer comes with dependencies, so you may be required to install dependencies in the container.
The alternative is to omit -v $(which docker):/usr/bin/docker and install docker within the jenkins container. You'll need to ensure that the inner container docker and the outer host docker are the same version, so that communication over /var/run/docker.sock is supported.
In either case, you may want to use a Dockerfile to create a new Jenkins docker image.
Another alternative is to include -v $(which docker):/usr/bin/docker, but install a statically-linked binary of docker on the host machine.
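If you do need to create the named volume first, that step could look like this (same file names and paths as in the script above):
docker volume create jenkins_home
sudo cp jenkins_keystore.jks /var/lib/docker/volumes/jenkins_home/_data/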
You should now be able to access the Jenkins web portal via HTTPS with no port specifier (since port 443 is the default for HTTPS).
Thanks to otognan for getting me part of the way here.
I know this is a very old topic, but I wanted to share a blog post which covers the reverse proxy option in detail: https://itnext.io/setting-up-https-for-jenkins-with-nginx-everything-in-docker-4a118dc29127
Jenkins suggests setting up a reverse proxy in its documentation. It may seem like extra effort at first, but it is a general solution for other services related to the CI/CD environment as well (e.g. SonarQube).
I would use nginx together with Jenkins in the same container, and use supervisord to manage both processes. Securing different services with built-in tools is a pain; nginx works the same for all services and is easy to configure. It is possible, and nicer in some ways, to use docker-compose (formerly fig) to create two different containers and hook them up with the nice internal networking that Docker provides with links. The problem is that running pairs of jobs together is still not well supported in cluster managers like Marathon. It's far easier to tell most services to run a single container than to run two containers while making sure they are on the same host.
Install a Self-Signed SSL Certificate in the Jenkins Container
I have set up my Jenkins on an AWS EC2 instance with Docker, using the official Jenkins container. I used docker-compose to build and run the Jenkins container, and here is my docker-compose.yml file.
First, you will need a certificate keystore. If you already have one, there is no need to run the command below. To generate a certificate keystore, run:
keytool -genkey -keyalg RSA -alias selfsigned -keystore jenkins.jks -storepass password -keysize 4096
Please be careful with the volume mapping: I have placed my jenkins.jks file in the /opt/cert folder, and my Jenkins directory is /jenkins; inside that jenkins folder I have my docker-compose.yml file and the jenkins_home directory.
version: '3.7'
services:
  jenkins:
    image: jenkins/jenkins
    container_name: jenkins-docker
    restart: always
    privileged: true
    user: root
    ports:
      - 443:8443
      - 50000:50000
    volumes:
      - ./jenkins_home:/var/jenkins_home
      - ../opt/cert/jenkins.jks:/var/lib/jenkins/jenkins.jks
    environment:
      JAVA_OPTS: -Duser.timezone=CET
      JENKINS_OPTS: --httpPort=-1 --httpsPort=443 --httpsKeyStore=/var/lib/jenkins/jenkins.jks --httpsKeyStorePassword=password
After all these steps, check that your Jenkins container is up and running. If so, you can access Jenkins in a browser by simply typing https://public-ip:443.
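A quick way to do that check (assuming the compose file and container name above) is:
docker-compose up -d
docker ps --filter name=jenkins-docker
docker logs -f jenkins-docker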