Best practices for certificates in docker - ssl

If I have a Docker application (a J2EE web application) meeting the following conditions:
there are multiple containers to be deployed (from the same image) on separate hosts which will then communicate with each other over SSL/TLS - so the containers would need their own SSL certificates, and need to trust the certificates of the other containers
these containers will additionally make HTTPS calls to other external URLs - so the certificates of these servers also need to be trusted. These external URLs are not known at deployment time, so their certificates need to be imported separately
being a J2EE web application, the application uses a Java keystore and truststore for the certificates
Given this, how should the certificates be made available to the servers?

Update: 2018-02
Docker Swarm supports managing secrets.
https://docs.docker.com/engine/swarm/secrets/
This is, however, not supported in non-Swarm deployments. One hacky way around this is to deploy a single-node Swarm.
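A sketch of that single-node workaround (the secret, service, and image names are illustrative):
docker swarm init
echo "changeit" | docker secret create keystore_password -
docker service create --name myapp --secret keystore_password myimage
The secret then appears inside the service's containers at /run/secrets/keystore_password.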
Previous answer:
Docker doesn't currently have a way to handle secrets (it's on their roadmap). There's a long-running thread over at the Docker repository that lists many ways people use to import secrets into containers.
https://github.com/docker/docker/issues/13490
Some people use HashiCorp's Vault; others encrypt secrets on the host (env vars) or in a Docker volume (that's what my team does). Containers can decrypt them when they are started (ENTRYPOINT/COMMAND). To add secrets at runtime, you can create a custom container that does just that (accepts an HTTP request and stores the secret in a truststore). That's just one suggestion amongst many that you'll see in the link above.
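As a minimal sketch of the decrypt-on-start approach (file names, paths, and the encryption scheme are illustrative, not from the original answer):
#!/bin/sh
# entrypoint.sh - decrypt the keystore before starting the app;
# assumes the encrypted keystore is mounted at /secrets and the
# passphrase arrives via the SECRET_KEY environment variable
openssl enc -d -aes-256-cbc -pbkdf2 -pass env:SECRET_KEY \
  -in /secrets/keystore.jks.enc -out /opt/app/keystore.jks
exec java -jar /opt/app/app.jar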

The application should handle this internally:
If you have a Java app running inside a Docker container, you can use the Java keystore (just add the other containers' PEM certificates to the application's truststore in the Dockerfile):
# import my domaincert
COPY homelinux.org.pem /tmp/homelinux.org.pem
RUN /usr/bin/keytool -import -alias homelinux.org -keystore ${JAVA_HOME}/jre/lib/security/cacerts -trustcacerts -file /tmp/homelinux.org.pem -storepass changeit -noprompt
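If you want to confirm the certificate landed in the truststore, a quick check (same alias and default password as above) is:
RUN /usr/bin/keytool -list -alias homelinux.org -keystore ${JAVA_HOME}/jre/lib/security/cacerts -storepass changeit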
I have ~15 containers on *.homelinux.org via DynDNS and a self-signed *.homelinux.org wildcard certificate (dnsname=*.homelinux.org), so the containers can use SSL between themselves without problems.
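A self-signed wildcard certificate like that can be generated roughly as follows (key size and validity are illustrative; -addext requires OpenSSL 1.1.1+):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout homelinux.org.key -out homelinux.org.pem \
  -subj "/CN=*.homelinux.org" \
  -addext "subjectAltName=DNS:*.homelinux.org"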

Related

Allow kubernetes storageclass resturl HTTPS with self-signed certificate

I'm currently trying to set up GlusterFS integration for a Kubernetes cluster. Volume provisioning is done with Heketi.
The GlusterFS cluster has a pool of 3 VMs.
The 1st node has the Heketi server and client configured. The Heketi API is secured with a self-signed OpenSSL certificate and can be accessed, e.g.
curl https://heketinodeip:8080/hello -k
returns the expected response.
The StorageClass definition sets "resturl" to the Heketi API, https://heketinodeip:8080.
The StorageClass is created successfully, but when I try to create a PVC, it fails with:
"x509: certificate signed by unknown authority"
This is expected, as usually one has to either allow this insecure HTTPS connection or explicitly import the issuer CA (e.g. a file simply containing the PEM string).
But how is this done for Kubernetes? How do I allow this insecure connection to Heketi from Kubernetes (allowing insecure self-signed certificate HTTPS), or where/how do I import a CA?
It is not a DNS/IP problem; that was resolved with correct subjectAltName settings.
(It seems that everybody is using Heketi, and it still seems to be a standard use case for GlusterFS integration, but always without SSL when connected to Kubernetes.)
Thank you!
To skip verification of the server certificate, the caller just needs to specify InsecureSkipVerify: true. Refer to this GitHub issue for more information (https://github.com/heketi/heketi/issues/1467).
This page describes a way to use a self-signed certificate. It is not explained thoroughly, but it can still be useful (https://github.com/gluster/gluster-kubernetes/blob/master/docs/design/tls-security.md#self-signed-keys).
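One possible approach (not from the linked docs, and only applicable if the component making the call uses the host's CA store): add the issuer CA to the system trust store on the relevant nodes, RHEL-style, with a hypothetical file name:
sudo cp heketi-ca.pem /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract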

Charles Proxy for Mobile apps that use SSL Pinning

The Charles Proxy website notes that:
Note that some apps implement SSL certificate pinning which means they specifically validate the root certificate. Because the app is itself verifying the root certificate it will not accept Charles's certificate and will fail the connection. If you have successfully installed the Charles root SSL certificate and can browse SSL websites using SSL Proxying in Safari, but an app fails, then SSL Pinning is probably the issue.
Just to be certain, is it possible to use an HTTP monitor like Charles Proxy (or another monitor) even though a mobile app uses SSL certificate pinning?
As Steffen said, you might need to patch the app to disable certificate pinning.
Most mobile apps don't use it though :) In that case you just need to enable SSL connections with a self-signed certificate. To allow that with an Android application, do the following. First, download apktool. Then unpack the APK file (instructions as of apktool 2.4.1):
java -jar apktool.jar d app.apk
Modify AndroidManifest.xml by adding this attribute to application element:
android:networkSecurityConfig="@xml/network_security_config"
Create the file res/xml/network_security_config.xml with the following content:
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <base-config>
        <trust-anchors>
            <certificates src="system" />
            <certificates src="user" />
        </trust-anchors>
    </base-config>
</network-security-config>
Generate keys to sign APK:
keytool -genkey -alias keys -keystore keys -keyalg DSA
Build patched APK:
java -jar apktool.jar b app -o app_patched.apk --use-aapt2
Sign APK file:
jarsigner -verbose -keystore keys app_patched.apk keys
If necessary, convert the APK to a JAR for further analysis: d2j-dex2jar.sh app.apk.
More information: Network security configuration.
Certificate pinning means that the application explicitly wants to get the original certificate. If you do have the original certificate and the associated private key (which usually means that you control the server the application is using), then it is possible to be a man in the middle (i.e. an HTTP monitor) even for applications using certificate pinning.
Of course your HTTP monitoring application must support specifying a fixed certificate. It looks to me like Charles Proxy does not support this. But mitmproxy supports providing a fixed certificate for specific domains.
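With mitmproxy that looks roughly like this; the PEM file must contain the private key followed by the certificate chain (domain and path are illustrative):
mitmproxy --certs "*.example.com=/path/to/example.com.pem"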
If you don't have access to the expected certificate and the matching key, then you cannot give the expected certificate to the application. The only hope is then to disable the pinning in the application itself by somehow hacking the code. Use your favorite search engine and search for "bypass pinning android" or similar to get a variety of non-trivial ways one can try to make the application believe that it got the expected certificate.

Trusted Root Certificates in DotNet Core on Linux (RHEL 7.1)

I'm currently deploying a .NET Core web API to a Docker container on RHEL 7.1.
Everything works as expected, but from my application I need to call other services via HTTPS, and those hosts use certificates signed by self-maintained root certificates.
In this setup I get SSL errors when calling these services (certificate not valid), and therefore I need to install this root certificate in the Docker container, or somehow use the root certificate from the .NET Core application.
How can this be done? Is there a best practice for handling this situation? Will .NET Core access the right certificate store on the RHEL system?
Since .NET Core uses OpenSSL on Linux, you need to set up your Linux environment in the container so that OpenSSL will pick up the certificate.
This is done by the following steps (with Dockerfile examples):
Copying the certificate .crt file to a location that update-ca-certificates will scan for trusted certificates - e.g. /usr/local/share/ca-certificates/ or, on RHEL, /etc/pki/ca-trust/source/anchors/:
COPY myca.crt /usr/local/share/ca-certificates/
Invoking update-ca-certificates:
RUN update-ca-certificates
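For reference, a sketch of the RHEL equivalent, using the anchors path mentioned above and RHEL's update-ca-trust tooling:
COPY myca.crt /etc/pki/ca-trust/source/anchors/
RUN update-ca-trust extract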

Setup secured Jenkins master with docker

I would like to set up a secured Jenkins master server on EC2 with Docker.
I'm using the standard Jenkins Docker image from here: https://registry.hub.docker.com/_/jenkins/
By default it opens an unsecured HTTP port, 8080. However, I want it to use the standard port 443 with HTTPS (at first with a self-signed SSL certificate).
I researched this topic a little and found several possible solutions. I'm not really experienced with Docker, so I couldn't find a simple one I could use or implement. Here are some options I found:
Use the standard Jenkins container on 8080, but configure a secured Apache or nginx server on my EC2 instance that will redirect the traffic. I don't like this because the server will be outside of Docker, so I cannot keep it in version control.
Somehow modify the Jenkins Dockerfile to start Jenkins with HTTPS configured according to https://wiki.jenkins-ci.org/display/JENKINS/Starting+and+Accessing+Jenkins. I'm not sure how to do that, though. Do I need to create my own Docker image?
Use an image with secured nginx like this one https://registry.hub.docker.com/u/marvambass/nginx-ssl-secure/ and somehow combine the two Docker containers or make them communicate. I'm not sure how to do that either.
Could someone experienced please recommend the best solution?
P.S. I'm not sure how much trouble EC2 is going to give me, but I assume it's just about opening port 443 in a security group.
After going through a few Docker tutorials, I found that the easiest option to follow is number 2. The Jenkins Docker image declares its entrypoint in a way that lets you easily pass arguments to Jenkins.
Let's say you have your keystore (self-signed in this example) as jenkins_keystore.jks in the home folder of the Ubuntu EC2 instance. Here is how to generate one:
keytool -genkey -keyalg RSA -alias selfsigned -keystore jenkins_keystore.jks -storepass mypassword -keysize 2048
Now you can easily configure Jenkins to run on HTTPS only, without creating your own Docker image:
docker run -v /home/ubuntu:/var/jenkins_home -p 443:8443 jenkins --httpPort=-1 --httpsPort=8443 --httpsKeyStore=/var/jenkins_home/jenkins_keystore.jks --httpsKeyStorePassword=mypassword
-v /home/ubuntu:/var/jenkins_home mounts the host home folder into the Jenkins container
-p 443:8443 maps Jenkins port 8443 in the container to port 443 on the host
--httpPort=-1 --httpsPort=8443 disables Jenkins' HTTP listener and exposes HTTPS on port 8443 inside the container
--httpsKeyStore=/var/jenkins_home/jenkins_keystore.jks --httpsKeyStorePassword=mypassword provides your keystore, which has been mapped from the host home folder to the container's /var/jenkins_home/ folder.
Like otognan, I would also recommend doing #2, but it seems that his answer is outdated.
First of all, use the jenkins/jenkins:lts image, as the jenkins image is deprecated (see https://hub.docker.com/_/jenkins/ )
Now, let's set it up. You'll need to stop your current Jenkins container to free up the ports.
First, you'll need a certificate keystore. If you don't have one, you could create a self-signed one with
keytool -genkey -keyalg RSA -alias selfsigned -keystore jenkins_keystore.jks -storepass mypassword -keysize 4096
Next, let's pass the SSL arguments into the jenkins container. This is the script I use to do so:
read -s -p "Keystore Password:" password
echo
sudo cp jenkins_keystore.jks /var/lib/docker/volumes/jenkins_home/_data
docker run -d -v jenkins_home:/var/jenkins_home -v $(which docker):/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock -p 443:8443 -p 50000:50000 jenkins/jenkins:lts --httpPort=-1 --httpsPort=8443 --httpsKeyStore=/var/jenkins_home/jenkins_keystore.jks --httpsKeyStorePassword=$password
This script prompts the user for the keystore password.
-v jenkins_home:/var/jenkins_home creates a named volume called jenkins_home, which happens to exist at /var/lib/docker/volumes/jenkins_home/_data by convention.
If the directory at /var/lib/docker/volumes/jenkins_home/_data does not exist yet, you will need to create the named volume using docker volume create before copying the keystore.
-p 443:8443 maps Jenkins port 8443 in the container to port 443 on the host
--httpPort=-1 --httpsPort=8443 disables HTTP and exposes HTTPS on port 8443 inside the container (port 443 outside the container).
--httpsKeyStore=/var/jenkins_home/jenkins_keystore.jks --httpsKeyStorePassword=$password provides your keystore, which exists at /var/jenkins_home/jenkins_keystore.jks inside the container ( /var/lib/docker/volumes/jenkins_home/_data/jenkins_keystore.jks outside the container).
-v /var/run/docker.sock:/var/run/docker.sock is optional, but is the recommended way to allow your jenkins instance to spin up other docker containers.
WARNING: By giving the container access to /var/run/docker.sock, it is easy to break out of the containment provided by the container, and gain access to the host machine. This is obviously a potential security risk.
-v $(which docker):/usr/bin/docker is also optional, but allows your jenkins container to be able to run the docker binary.
Be aware that, because the docker binary is now dynamically linked, it no longer ships with its dependencies, so you may be required to install those dependencies in the container.
The alternative is to omit -v $(which docker):/usr/bin/docker and install docker within the jenkins container. You'll need to ensure that the inner container docker and the outer host docker are the same version, so that communication over /var/run/docker.sock is supported.
In either case, you may want to use a Dockerfile to create a new Jenkins Docker image (see the sketch after this list).
Another alternative is to include -v $(which docker):/usr/bin/docker, but install a statically-linked binary of docker on the host machine.
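A minimal sketch of such a Dockerfile, assuming the Debian-based jenkins/jenkins:lts image and the distribution's docker.io package (the package name may differ on other base images):
FROM jenkins/jenkins:lts
USER root
# install the Docker CLI in the image instead of bind-mounting the host binary
RUN apt-get update && \
    apt-get install -y docker.io && \
    rm -rf /var/lib/apt/lists/*
USER jenkins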
You should now be able to access the Jenkins web portal via HTTPS with no port specifier (since port 443 is the default for HTTPS).
Thanks to otognan for getting me part of the way here.
I know this is a very old topic, but I wanted to share a blog post which covers the reverse proxy option in detail: https://itnext.io/setting-up-https-for-jenkins-with-nginx-everything-in-docker-4a118dc29127
The Jenkins documentation suggests setting up a reverse proxy. It may seem like extra effort at first, but it is a general solution for other services in a CI/CD environment as well (e.g. SonarQube).
I would use nginx together with Jenkins in the same container, and use supervisord to manage both processes. Securing different services with their built-in tools is a pain; nginx works the same for all services and is easy to configure. It is possible, and nicer in some ways, to use docker-compose (formerly Fig) to create two different containers and hook them up with the nice internal networking that Docker provides with links. The problem is that running pairs of containers together is still not well supported in cluster managers like Marathon. It's far easier to tell most services to run a single container than to run two containers that must end up on the same host.
Install a Self-Signed SSL Certificate in a Jenkins Container
I have set up my Jenkins on an AWS EC2 instance with Docker, using the official Jenkins container. I used docker-compose to build and run the Jenkins container, and here is my docker-compose.yml file.
First, you will need a certificate keystore. If you already have one, there is no need to run the command below. To generate a certificate keystore, run:
keytool -genkey -keyalg RSA -alias selfsigned -keystore jenkins.jks -storepass password -keysize 4096
Please be careful with the volume mapping: I have placed my jenkins.jks file in the /opt/cert folder, my Jenkins directory is /jenkins, and inside that folder I have my docker-compose.yml file and the jenkins_home directory.
version: '3.7'
services:
  jenkins:
    image: jenkins/jenkins
    container_name: jenkins-docker
    restart: always
    privileged: true
    user: root
    ports:
      - 443:8443
      - 50000:50000
    volumes:
      - ./jenkins_home:/var/jenkins_home
      - ../opt/cert/jenkins.jks:/var/lib/jenkins/jenkins.jks
    environment:
      JAVA_OPTS: -Duser.timezone=CET
      JENKINS_OPTS: --httpPort=-1 --httpsPort=8443 --httpsKeyStore=/var/lib/jenkins/jenkins.jks --httpsKeyStorePassword=password
After all these steps, check that your Jenkins container is up and running. If so, you can access your Jenkins in a browser by simply typing https://public-ip:443.

Multiple SSL Certificates in One Heroku Application

Is it possible to have multiple SSL certificates in a single Heroku application?
We have multiple domain names of different types and TLDs pointing to our application and need to secure each domain name, preferably without redirecting to a different secure URL.
There is a way to have multiple SSL endpoints routing traffic to the same app.
An SSL endpoint works by terminating the SSL connection and injecting the unencrypted traffic back into the normal Heroku routing layer.
You can take advantage of this by creating a new app with a new SSL endpoint to terminate the SSL connection and route the traffic to your existing app:
Add your domain name to your app:
$ heroku domains:add ssl.example.com
Create a new app:
$ heroku create endpoint-for-example-com
Add the SSL endpoint add-on ($20/mo):
$ heroku addons:create ssl:endpoint --app endpoint-for-example-com
Add your certificate to your new app:
$ heroku certs:add server.crt bundle.pem server.key --app endpoint-for-example-com --type endpoint
Resolving trust chain... done
Adding SSL Endpoint to endpoint-for-example-com... done
endpoint-for-example-com now served by kagawa-1482.herokussl.example.com
Use the SSL endpoint assigned to your new app (e.g. kagawa-1482.herokussl.example.com) as the CNAME host for the domain name you wish to secure. This is normally done in your domain's DNS configuration.
The new app does not need any dynos, but there will be a charge of $20/month for the SSL endpoint add-on.
Notes:
This solution is not documented by Heroku, so it's possible that they would remove or change this behaviour in the future. Heroku have confirmed that this is safe for production use.
Be sure to create your endpoints in the same region as your primary app.
It might take a while for your DNS changes to take effect.
Recently, Heroku added automatic Let's Encrypt TLS certificates for paid dynos (hobby and up). This works across any number of domains and subdomains automatically, but only if you don't need wildcard subdomains.
Additionally, you can manage the Let's Encrypt certificate yourself across multiple domains and subdomains with certbot:
certbot certonly --standalone -d example.com -d www.example.com -d test.net
You can refer to this Heroku doc for uploading custom certificates.
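Assuming certbot's default output paths and a hypothetical app name, the upload would look roughly like:
heroku certs:add /etc/letsencrypt/live/example.com/fullchain.pem /etc/letsencrypt/live/example.com/privkey.pem --app my-app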
While not exactly what the OP asked, I was able to achieve this on Heroku with a single SAN (Subject Alternative Name) certificate for about $25/year.
I generated a CSR with multiple subject alternative names (subjectAltName) in OSX by:
Copying /System/Library/OpenSSL/openssl.cnf to the current directory, and amending the relevant sections ([req] and [v3_req]):
[req]
req_extensions = v3_req
[v3_req]
subjectAltName=DNS:www.example1.com,DNS:www.example2.com,DNS:www.example3.com
Then I used this new .cnf when generating the CSR:
openssl req -nodes -newkey rsa:2048 -keyout server.key -out server.csr -config openssl.cnf
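You can verify that the subject alternative names made it into the CSR with:
openssl req -in server.csr -noout -text | grep -A 1 "Subject Alternative Name"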
I purchased the cert from SSLs.com. Their Comodo "PositiveSSL Multi-Domain" is $25.99/yr as of this writing and supports 3-100 domains (each domain over 3 costs something like $12).
I concatenated the CA bundle and .crt that I was sent into a single .crt (in that order) and added it to Heroku. All 3 domains were added to the app and pointed to the same CNAME, and all resolve over https:// as expected.
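The concatenation step is a one-liner (file names are illustrative; note the order described above):
cat ca_bundle.crt server.crt > combined.crt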
Much cheaper than $240/yr for an additional endpoint, if this is a viable route for anyone interested.
Relevant links:
https://stackoverflow.com/a/8520510/630614
http://apetec.com/support/GenerateSAN-CSR.htm
I'm dealing with this myself. Heroku suggests getting a SAN/UCC certificate, which lets you list several domains. I just did it with GoDaddy and it's working fine so far.
https://devcenter.heroku.com/articles/ssl-endpoint#serving-multiple-domains
We have multiple domain names belonging to multiple companies. A SAN/UCC certificate is only available for domain names owned by the same entity/company/individual. We created an iFrame in the background as a quick fix, but we have since moved our platform to our own infrastructure.