For https access I need to add a CA cert file to /usr/local/share/ca-certificates on my Ubuntu host machine.
Currently the RUN wget https... step in my Dockerfile fails because certificate verification fails.
How can Docker use the host machine CA cert? Or is there an existing enhancement opened to allow this?
I've used CA and SSL certs via a passthrough mount, but this looks like you're trying to do it in the Dockerfile.
So my suggestion would be: copy the CA cert into the image as part of the Dockerfile, and then proceed as normal (a sketch follows below). Alternatively, drop to http, or run wget --no-check-certificate if you're happy with that.
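A minimal sketch of that Dockerfile approach, assuming a Debian/Ubuntu base image; my-ca.crt and the wget URL are placeholders:

FROM ubuntu:22.04
# Put the CA into the image's trust store and register it
COPY my-ca.crt /usr/local/share/ca-certificates/my-ca.crt
RUN apt-get update && apt-get install -y ca-certificates wget && update-ca-certificates
# wget can now verify certificates issued by that CA (placeholder URL)
RUN wget https://internal.example.com/artifact.tar.gz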
There are a few open bugs in this area:
https://github.com/docker/machine/issues/1799
https://github.com/docker/docker/issues/4372
https://github.com/docker/machine/issues/1435
https://github.com/deis/deis/issues/2230
I have a Microk8s cluster running gitea, harbor and droneci. Everything is hosted under *.dev.mydomain.com and there is a wildcard certificate for that. The certificate is signed using a private CA.
I'm trying to push the CA certificate to the Pods running the Drone CI builds so that they can push/pull from Gitea and Harbor while also being able to connect to external sources (to fetch other docker images from Docker Hub, for example).
DroneCI and the drone runner are installed using Helm. I have tried the following in the values.yaml file for the runner:
DRONE_RUNNER_VOLUMES: "/sslcerts:/etc/ssl/certs"
This overwrites the /etc/ssl/certs/ folder in the runner pod. Requests made from the pod to Harbor or Gitea work, but requests to anything else fail with the error x509: certificate signed by unknown authority.
I also tried
DRONE_RUNNER_VOLUMES: "/sslcerts/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt"
This returned the error: mounting "/sslcerts/ca-certificates.crt" to rootfs at "/etc/ssl/certs/ca-certificates.crt" caused: mount through procfd: not a directory: unknown
Any ideas on how to go about what I'm trying to do? Thanks!
As the runners are Alpine Linux based, all you should have to do is mount your certificates into the /usr/local/share/ca-certificates/ folder (not into a subfolder, but right into that folder).
Alpine should then add all certificates from there to /etc/ssl/certs for you.
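In values.yaml terms, a sketch reusing the /sslcerts host path from the question:

DRONE_RUNNER_VOLUMES: "/sslcerts:/usr/local/share/ca-certificates"

That way the public roots already in /etc/ssl/certs stay intact and your private CA is added alongside them.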
When I do a docker pull from inside a container that uses /var/run/docker.sock to run docker (docker inside docker), I get this error:
FATA[0000] Error response from daemon: v1 ping attempt failed with error: Get https://registry.com:5000/v1/_ping: x509: certificate has expired or is not yet valid. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry registry.com:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/registry.com:5000/ca.crt
So I followed the instructions and added the ca.crt inside that directory, and also added the insecure option to /etc/default/docker, but the error didn't go away.
I wonder where the daemon behind /var/run/docker.sock looks for the cert when I pull from inside the container, especially since pulling works from outside (on the host) with the same config (ca.crt in the right folder, and the insecure option also added).
/var/run/docker.sock is not the thing that is looking for a cert. That is simply the socket that you use to communicate with dockerd. When you do a pull, you are asking the docker daemon to go talk to a registry.
Where did you get the ca.crt file? Is that really the signing certificate for your registry.com:5000 server's certificate? Did you put it in /etc/docker/certs.d/registry.com:5000/ca.crt on the host where dockerd is running, or inside the container?
That ca.crt file belongs where the daemon is running. Double check that you have that correct file in the correct place on the host, and that should fix the issue.
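Following the daemon's own hint from the error message, the placement on the host looks like this (registry.com:5000 as in the error):

sudo mkdir -p /etc/docker/certs.d/registry.com:5000
sudo cp ca.crt /etc/docker/certs.d/registry.com:5000/ca.crt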
Got it to work now; the solution was to restart the docker daemon inside the container. I had actually tried that before, but the docker service kept going down after the restart, which made me think it was the docker service from the host.
The reason I could not restart the docker service was that /var/run/docker.pid still existed, which prevented docker from starting again. So I deleted that pid file and docker restarted successfully.
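For reference, a sketch of the steps just described (paths as in the post; the service command depends on how your container runs dockerd):

sudo rm -f /var/run/docker.pid
sudo service docker restart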
I've tried to follow a tutorial to set up our own private registry (v2) on an AWS CentOS machine.
I've self-signed a TLS certificate and placed it in /etc/docker/certs.d/MACHINE_STATIC_IP:5000/
When trying to log in to the registry (docker login MACHINE_STATIC_IP:5000) or push a tagged repository (MACHINE_STATIC_IP:5000/ubuntu:latest) I get the following error:
Error response from daemon: Get https://MACHINE_STATIC_IP:5000/v1/users/: x509: cannot validate certificate for MACHINE_STATIC_IP because it doesn't contain any IP SANs
I've been searching for an answer for 2 days, but I couldn't find any.
I've set the certificate CN (common name) to MACHINE_STATIC_IP:5000
When using a self-signed TLS certificate, the docker daemon requires you to add the certificate to its known certificates.
Use the keytool command to grab the certificate:
keytool -printcert -sslserver ${NEXUS_DOMAIN}:${SSL_PORT} -rfc > ${NEXUS_DOMAIN}.crt
And copy it to your client machine's SSL certificates directory (in my case - ubuntu):
sudo cp ${NEXUS_DOMAIN}.crt /usr/local/share/ca-certificates/${NEXUS_DOMAIN}.crt && sudo update-ca-certificates
Now restart the docker daemon and you're good to go:
sudo systemctl restart docker
You can also use the following command to temporarily trust the certificate without adding it to your system certificates:
docker --tlscert <the downloaded tls cert> pull <whatever you want to pull>
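One more note on the original error: it complains specifically about a missing IP SAN, and a matching CN alone is not enough when clients connect by IP. If you re-generate the self-signed certificate, include a subjectAltName entry, along these lines (a sketch; MACHINE_STATIC_IP is the question's placeholder, and -addext requires OpenSSL 1.1.1 or newer):

openssl req -x509 -newkey rsa:4096 -nodes -days 365 -keyout registry.key -out registry.crt -subj "/CN=MACHINE_STATIC_IP" -addext "subjectAltName = IP:MACHINE_STATIC_IP"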
I've already purchased an SSL certificate from DigiCert and installed it into my Nexus server (running in Tomcat, with a JKS keystore).
It works well in Firefox and Chrome (the green address bar indicates that a valid certificate was received), and builds can be downloaded from the Nexus WebUI too.
But wget could not get the result without --no-check-certificate; the output is something like:
ERROR: cannot verify mydomain.com's certificate, issued by `/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance CA-3':
Unable to locally verify the issuer's authority.
To connect to mydomain.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection.
I found something:
SSL connection fails with wget, curl, but succeed with firefox and lynx
linux wget not certified?
But neither of them gives a final solution. I want to know whether some (special) configuration is needed on Nexus, or whether this is a bug in the wget command.
Google returns many results for "digicert wget", but I cannot find a clue there either. Thank you!
You need to add the DigiCert root certificate to a store accessible by wget:
http://wiki.openwrt.org/doc/howto/wget-ssl-certs
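On a Debian/Ubuntu-style system, that boils down to something like this (a sketch; digicert-root.crt is a placeholder name for the exported DigiCert root certificate):

sudo cp digicert-root.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates

Note that the wget error names the intermediate ("DigiCert High Assurance CA-3") it cannot verify locally, so it is also worth checking that Nexus/Tomcat is serving the full certificate chain rather than just the leaf certificate.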
I am using wget in my program to get some files over the HTTP protocol. Now I need to add security, so we moved from HTTP to HTTPS.
After changing to HTTPS, how do I perform the wget? I mean, how do I make a trusted connection between the two machines and then perform the wget?
I want to make sure that the wget can only be performed from certain systems.
Step 1: SSL Certificates
First things first, if this machine is on the internet and the SSL certificate is signed by a trusted source, there is no need to specify a certificate.
However, if there is a self-signed certificate involved, things get a little more interesting.
For example:
if this machine uses a self-signed certificate, or
if you are on a network with a proxy that re-encrypts all https connections
Then you need to trust the public key of the self-signed certificate. You will need to export the public key as a .CER file; how you got the SSL certificate will determine how you get the public key as a .CER file (one common way is sketched below).
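For example, if the server is already reachable, you can export its certificate from the command line (a sketch; replace host with your server's name):

openssl s_client -connect host:443 </dev/null 2>/dev/null | openssl x509 -outform PEM > server.cer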
Once you have the .CER then...
Step 2: Trust the Certificate
I suggest two options:
option one
wget --ca-certificate={the_cert_file_path} https://www.google.com
option two
set the option in ~/.wgetrc:
ca_certificate={the_cert_file_path}
Additional resources
Blog post about this wget and ssl certificates
wget manual
macOS users can use the cert.pem file:
wget --ca-certificate=/etc/ssl/cert.pem
or set in your ~/.wgetrc:
ca_certificate = /etc/ssl/cert.pem
On Linux (at least on my Debian and Ubuntu distributions), you can do the following to install your cert to be trusted system-wide.
Assuming your certificate is ~/tmp/foo.pem, do the following:
Install the ca-certificates package, if it is not already present, then do the following to install foo.pem:
$ cd ~/tmp
$ chmod 444 foo.pem
$ sudo cp foo.pem /usr/local/share/ca-certificates/foo.crt
$ sudo update-ca-certificates
Once this is done, most apps (including wget, Python and others) should automatically use it when it is required by the remote site.
The only exception to this I've found has been the Firefox web browser. It has its own private store of certificates, so you need to manually install the cert via its Settings interface if you require it there.
At least this has always worked for me (to install a corporate certificate needed for Internet access into the Linux VMs I create).
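If you want to confirm the certificate really made it into the system store, a quick check that should report OK (a sketch, assuming foo.pem is self-signed as above):

$ openssl verify -CApath /etc/ssl/certs ~/tmp/foo.pem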