Use certificates from the host inside a ddev environment to connect to a remote system

I am trying to connect to a remote Elastic cluster that is reachable from the host (Windows 10 Enterprise) system.
I tested the host's connection via curl https://url.to.target:443 and got the expected 'You Know, for Search' response.
When I try the same from inside the webserver container (Debian GNU/Linux 10 (buster)) it fails with:
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it.
Is there a simple way to use the host's certificate store?

Copy yourcert.crt to the .ddev/web-build folder.
Create a custom .ddev/web-build/Dockerfile, for example:
ARG BASE_IMAGE
FROM $BASE_IMAGE
COPY ./yourcert.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates --fresh
When referencing the cert in your code use:
$myCert='/usr/local/share/ca-certificates/yourcert.crt';
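To sanity-check from inside the web container that the certificate is actually picked up after rebuilding, something along these lines should do (the URL is the placeholder from the question):
ddev ssh
# plain curl should now succeed, since update-ca-certificates added the cert to the system bundle
curl -v https://url.to.target:443
# or point curl at the copied file explicitly
curl --cacert /usr/local/share/ca-certificates/yourcert.crt https://url.to.target:443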

Have you tried adding the insecure option to the .curlrc file in your home directory?
echo insecure >> $HOME/.curlrc
Shouldn't be used in production!
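If you only need it for a single call rather than globally, curl's -k/--insecure flag does the same thing per command (URL again taken from the question); equally unsuitable for production:
curl -k https://url.to.target:443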

Related

Which ca.crt does docker.sock use for docker pull?

When I do docker pull from inside a container that uses /var/run/docker.sock to run docker (docker inside docker), I get this error:
FATA[0000] Error response from daemon: v1 ping attempt failed with error: Get https://registry.com:5000/v1/_ping: x509: certificate has expired or is not yet valid. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry registry.com:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/registry.com:5000/ca.crt
So I followed the instructions and added the ca.crt inside that directory, and also added the insecure option to /etc/default/docker, but the error didn't go away.
I wonder where the /var/run/docker.sock command looks for the cert when I pull from inside the container, especially since pulling works from outside (on the host) with the same config (ca.crt in the right folder and the insecure option also added).
/var/run/docker.sock is not the thing that is looking for a cert. That is simply the socket that you use to communicate with dockerd. When you do a pull, you are asking the docker daemon to go talk to a registry.
Where did you get the ca.crt file? Is that really the signing certificate for your registry.com:5000 server's certificate? Did you put it in /etc/docker/certs.d/registry.com:5000/ca.crt on the host where dockerd is running, or inside the container?
That ca.crt file belongs where the daemon is running. Double check that you have that correct file in the correct place on the host, and that should fix the issue.
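Concretely, on the host where dockerd runs, a sketch along these lines (registry.com:5000 is the registry name from the error message) should cover it:
sudo mkdir -p /etc/docker/certs.d/registry.com:5000
sudo cp ca.crt /etc/docker/certs.d/registry.com:5000/ca.crt
# restart the daemon so it picks up the CA
sudo systemctl restart docker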
Got it to work now; the solution was to restart the docker daemon inside the container. I actually tried that before, but the docker service kept going down after the restart, which made me think it was the docker service from the host.
The reason I could not restart the docker service was that /var/run/docker.pid still existed, which prevented docker from starting again. So I deleted that pid file and docker restarted successfully.
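Roughly, the recovery steps inside the container were (the exact service command depends on the image):
# remove the stale pid file left over from the previous daemon
rm /var/run/docker.pid
# then restart the daemon
service docker restart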

Docker private registry | TLS certificate issue

I've tried to follow the following tutorial to set up our own private registry (v2) on an AWS CentOS machine.
I've self-signed a TLS certificate and placed it in /etc/docker/certs.d/MACHINE_STATIC_IP:5000/
When trying to log in to the registry (docker login MACHINE_IP:5000) or push a tagged repository (MACHINE_IP:5000/ubuntu:latest) I get the following error:
Error response from daemon: Get https://MACHINE_IP:5000/v1/users/: x509: cannot validate certificate for MACHINE_IP because it doesn't contain any IP SANs
I searched for an answer for two days but couldn't find any.
I've set the certificate CN (common name) to MACHINE_STATIC_IP:5000
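For reference, the 'doesn't contain any IP SANs' part means the IP has to appear as a subjectAltName in the certificate itself (and the port is never part of the certificate's name). A minimal sketch for regenerating such a self-signed certificate, assuming OpenSSL 1.1.1+ and MACHINE_IP as a placeholder for the real address:
openssl req -x509 -newkey rsa:4096 -nodes -sha256 -days 365 \
  -keyout registry.key -out registry.crt \
  -subj "/CN=MACHINE_IP" \
  -addext "subjectAltName=IP:MACHINE_IP"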
When using a self-signed TLS certificate, the docker daemon requires you to add the certificate to its known certificates.
Use the keytool command to grab the certificate:
keytool -printcert -sslserver ${NEXUS_DOMAIN}:${SSL_PORT} -rfc > ${NEXUS_DOMAIN}.crt
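If keytool is not available on the client, a rough equivalent with openssl s_client (same NEXUS_DOMAIN/SSL_PORT placeholders) would be:
openssl s_client -connect ${NEXUS_DOMAIN}:${SSL_PORT} -showcerts </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > ${NEXUS_DOMAIN}.crt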
And copy it to your client machine's SSL certificates directory (in my case, Ubuntu):
sudo cp ${NEXUS_DOMAIN}.crt /usr/local/share/ca-certificates/${NEXUS_DOMAIN}.crt && sudo update-ca-certificates
Now restart the docker daemon and you're good to go:
sudo systemctl restart docker
You can also use the following command to temporarily trust the certificate without adding it to your system certificates:
docker --tlscert <the downloaded tls cert> pull <whatever you want to pull>

Tunnel Connection Failed error when logging into artifactory docker registry

We have created a private docker registry in artifactory.
Our Artifactory is a standalone installation with Nginx as a web server.
SSL certificates are trusted and works fine.
On the docker client, I have copied the ca.crt to /etc/docker/certs.d/artifactory.host:5001/
While trying to log in or push images from my docker client, I see the error below.
[root@cds-dev-test ~]# docker login artifactory.host:5001
Username: raj
Password:
Email: raj@gmail.com
Error response from daemon: invalid registry endpoint
https://artifactory.host:5001/v0/: unable to ping registry endpoint
v2 ping attempt failed with error: Get https://artifactory.host:5001/v2/: Tunnel Connection Failed
v1 ping attempt failed with error: Get artifactory.host:5001/v1/_ping: Tunnel Connection Failed. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add --insecure-registry artifactory.host:5001 to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/artifactory.host:5001/ca.crt
My docker version is 1.9.1 and the Artifactory version is 4.4.3.
It works when I use the --insecure-registry option but not the secure way. We have all trusted certs in place and still see the error.
I have tried using proxy settings on the docker client and also without a proxy; it is always the same error.
Any help, guys?
I figured it out.
I had proxy settings under my docker daemon; I added NO_PROXY and it works fine.
FYI: if you are using a trusted CA cert and your network is behind a proxy, make sure your docker service file doesn't apply those proxy settings to the registry host; if it does, add NO_PROXY=artifactory.host in
/etc/systemd/system/docker.service.d/http-proxy.conf
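For reference, a minimal sketch of such a drop-in written from the shell (the proxy URL is a placeholder; the NO_PROXY line is the actual fix):
# write the drop-in; replace the proxy address with your own
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=artifactory.host"
EOF
# reload systemd and restart docker so the new environment takes effect
sudo systemctl daemon-reload
sudo systemctl restart docker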
Thanks

unable to ssl connect to chef-server from chef-workstation

I have 2 different Ubuntu VPS instances, each with a different IP address.
One is assigned as the chef-server and the other acts as a workstation.
When I use the command
knife configure -i
I do get options to locate the admin.pem and chef-validator.pem files locally.
I am also able to create the knife.rb file locally.
While setting up knife, I am asked to enter the 'chef-server url', so I enter the https://ip_address/ of the VPS instance.
But in the end I get an error message
ERROR: SSL Validation failure connecting to host: "ip_address of my server host"- hostname "ip_address of my host" does not match the server certificate
ERROR: Could not establish a secure connection to the server.
Use knife ssl check to troubleshoot your SSL configuration.
If your Chef Server uses a self-signed certificate, you can use
knife ssl fetch to make knife trust the server's certificates.
I used knife ssl fetch to fetch the trusted certs from the chef-server, but it still doesn't work.
Chef experts, please help.
Your chef-server has a hostname, and the self-signed certificate is issued for that hostname.
The error you get comes from the fact that you are connecting to an IP address while the certificate was issued for a hostname.
There are two ways around it: disable SSL validation (you'll get a warning but it will work), or set up name resolution (using your hosts file, for example) so that you can use the chef-server hostname instead of the IP address.
This is an SSL configuration issue you may run into with other servers too.
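A rough sketch of the hostname route on the workstation (chef-server.example.com and the IP are placeholders; the name must match the one in the server's certificate):
# map the certificate's hostname to the server's IP
echo "203.0.113.10 chef-server.example.com" | sudo tee -a /etc/hosts
# point chef_server_url in knife.rb at https://chef-server.example.com/... instead of the IP,
# then re-fetch and re-check the certificate
knife ssl fetch
knife ssl check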

Why does Chef throw SSL error when using knife Command on Chef-Workstation?

An SSL error occurs when we use the knife command to verify successful setup of the Chef Workstation, or when we try to upload a Chef cookbook, using the following commands:
knife client list
knife node list
knife cookbook upload cookbookname
we get the following error on the Chef-Workstation:
OpenSSL::SSL::SSLError: SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol
To resolve this error we tried using rackfile software to create the following 3 files:
hostname.key
hostname.pem
hostname.crt
on the Chef-Server.
We placed hostname.pem inside the chef folder on the server itself, and inside the certs folder on the workstation. Finally we ran the commands once again but did not succeed. Any help to resolve the SSL error would be sincerely appreciated.
The Chef Server certificate has not yet been pulled into the workstation's trusted_certs directory.
Run the command
knife ssl fetch
from your Chef Workstation.
This will pull the certificate from the Chef Server and place it in the Workstation's trusted_certs directory. The default location of the trusted_certs is in your .chef/trusted_certs directory within your chef-repo directory.
Then run
knife ssl check
to verify the certificate.
Certificates that are in the trusted_certs directory will be trusted by any execution of the knife command.
https://docs.chef.io/workstation/getting_started/#get-ssl-certificates
You need to register that certificate on each workstation. Also, make sure the certificate matches the correct URL (i.e. the API endpoint, not the web interface).
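One way to eyeball which name the server certificate actually carries and compare it against the chef_server_url in knife.rb (placeholder hostname; the -ext option needs OpenSSL 1.1.1+):
openssl s_client -connect chef-server.example.com:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName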