Docker: TLS handshake timeout - ssl

I've created my own private registry (private-registry) but I'm unable to push images to it.
Then I get the following error:
The push refers to a repository [private-registry:5000/ubuntu] (len: 1)
unable to ping registry endpoint https://private-registry:5000/v0/
v2 ping attempt failed with error: Get https://private-registry:5000/v2/: net/http: TLS handshake timeout
v1 ping attempt failed with error: Get https://private-registry:5000/v1/_ping: net/http: TLS handshake timeout
The logs of the running registry are showing the following:
time="2015-12-14T07:59:21Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." go.version=go1.5.2 instance.id=a77e1955-3688-4fe3-a06e-0341787f8d0f version=v2.2.1
time="2015-12-14T07:59:21Z" level=info msg="redis not configured" go.version=go1.5.2 instance.id=a77e1955-3688-4fe3-a06e-0341787f8d0f version=v2.2.1
time="2015-12-14T07:59:21Z" level=info msg="using inmemory blob descriptor cache" go.version=go1.5.2 instance.id=a77e1955-3688-4fe3-a06e-0341787f8d0f version=v2.2.1
time="2015-12-14T07:59:21Z" level=info msg="listening on [::]:5000, tls" go.version=go1.5.2 instance.id=a77e1955-3688-4fe3-a06e-0341787f8d0f version=v2.2.1
time="2015-12-14T07:59:21Z" level=info msg="Starting upload purge in 47m0s" go.version=go1.5.2 instance.id=a77e1955-3688-4fe3-a06e-0341787f8d0f version=v2.2.1
I'm unable to curl my registry (timeout).
These are the steps I performed:
First I created self-signed certificates:
mkdir -p certs && openssl req \
-newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
-x509 -days 365 -out certs/domain.crt
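For the Docker daemon to accept this certificate later, its CN (or a SAN entry) has to match the hostname used to reach the registry, private-registry in this case. A hedged variant of the same command that sets the CN non-interactively (the -subj value is only an example, adjust it to your hostname):
mkdir -p certs && openssl req \
-newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
-x509 -days 365 -subj "/CN=private-registry" -out certs/domain.crt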
I've created my registry, which will use these certificates:
docker run -d -p 5000:5000 --restart=always --name private-registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=certs/domain.key \
registry:2
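To confirm the registry really loaded the certificate and is terminating TLS on port 5000, a quick sanity check from the host (not part of the original steps) could look like this:
docker logs private-registry
openssl s_client -connect private-registry:5000 -showcerts </dev/null
If s_client also hangs or times out here, the problem is in front of the container (firewall, SELinux, networking) rather than in the Docker client configuration.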
I gave the certs directory the right SELinux context:
chcon -Rt svirt_sandbox_file_t ~/certs/
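On SELinux hosts, an alternative to chcon is letting Docker relabel the volume itself by appending :z to the -v flag; a hedged rewrite of the run command above:
docker run -d -p 5000:5000 --restart=always --name private-registry \
-v `pwd`/certs:/certs:z \
-e REGISTRY_HTTP_TLS_CERTIFICATE=certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=certs/domain.key \
registry:2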
I've created: /etc/docker/etc.d/private-registry:5000/
And I copied my domain.crt in it.
I've edited my /etc/hosts and added:
10.0.0.X private-registry (my internal ip and the name of my registry)
I also restarted docker and my registry.
EDIT:
[centos# ~]$ curl -v private-registry:5000
* About to connect() to private-registry port 5000 (#0)
* Trying 10.0.0.xx...
* Connected to private-registry (10.0.0.xx) port 5000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: private-registry:5000
> Accept: */*
>
* Connection #0 to host private-registry left intact
[centos#~]$ curl -v https://private-registry:5000
* About to connect() to private-registry port 5000 (#0)
* Trying 10.0.0.xx...
* Connected to private-registry (10.0.0.xx) port 5000 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* NSS error -5990 (PR_IO_TIMEOUT_ERROR)
* I/O operation timed out
* Closing connection 0
curl: (35) I/O operation timed out

You may need to place the certificate into the certs.d directory that matches the registry host and port you push to (note: certs.d, not etc.d):
/etc/docker/certs.d/private-registry:5000/ca.crt
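On the Docker host that could look roughly like this (the directory name must match the hostname:port used in the push; paths assumed from the steps above):
sudo mkdir -p /etc/docker/certs.d/private-registry:5000
sudo cp certs/domain.crt /etc/docker/certs.d/private-registry:5000/ca.crt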

Related

How to run curl --retry command in GitLab-ci.yml

I am trying to run a curl --retry command in a GitLab CI pipeline; it fails in the pipeline, but when I run it somewhere else it works fine.
I am trying to check whether my URL is up. The command should keep retrying until the URL is up, and only then should the rest of my script run to configure my application.
curl -Is -k --retry 50 --retry-delay 0 --retry-connrefused https://{URL} -vvv
Trying 127.0.1.1:8443...
TCP_NODELAY set
Connected to {url} (127.0.1.1) port 8443 (#0)
ALPN, offering h2
ALPN, offering http/1.1
successfully set certificate verify locations:
CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
} [5 bytes data]
TLSv1.3 (OUT), TLS handshake, Client hello (1):
} [512 bytes data]
OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to {url}
Closing connection 0
Cleaning up file based variables
ERROR: Job failed: exit status 1
I think what is happening is that curl --retry closes the connection and starts over on every retry, and that's where my pipeline breaks; I am not able to figure out how to handle this.
Is there any workaround for this in the pipeline?
You can use a simple bash loop and check curl's return value:
while true; do if curl --location 'http://localhost:8080'; then break; else sleep 1; fi; done
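If the pipeline should fail cleanly instead of looping forever when the URL never comes up, a bounded variant of the same idea is sketched below (the attempt count and sleep interval are arbitrary examples, and {URL} is the placeholder from the question):
for i in $(seq 1 50); do
  if curl -Isk "https://{URL}" >/dev/null; then echo "URL is up"; break; fi
  if [ "$i" -eq 50 ]; then echo "URL never came up"; exit 1; fi
  echo "attempt $i failed, retrying"; sleep 5
done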

MQTT and SSL/TLS

I registered for an account on a mqtt server provider.
They provide 3 ports:
port: 1xxxx
ssl port: 2xxxx
websockets(TLS only): 3xxxx
I am publishing and receiving data from port 1xxx.
I would like to add encryption though. The mqtt server provider gives a "shared" subdomain but says:
If you want to use a custom domain for your instance you have to provide your own certificate to use with MQTT+TLS and Websockets. Certificates must be PEM encoded and the private key unencrypted. Certs are only stored on your dedicated instance. When certs are installed you can point your domain as a CNAME to hairdresser.cloudmqtt.com.
I added a CNAME in my domain panel, which I called mqtt.mydomain.com, and it resolves to the above subdomain.
In the domain panel I also added SSL from Let's Encrypt (free) for my subdomain mqtt.mydomain.com (which points to the mqtt server domain).
After adding the ssl I downloaded a zip file from the domain panel which contains 3 files:
mqtt.mydomain.com.ca
mqtt.mydomain.com.cert
mqtt.mydomain.com.key
I pasted the contents of the ca file into CA chain, the cert file into Certificate, and the key file into Private key,
then saved everything and restarted the instance (mqtt server).
Then I tried from my computer:
mosquitto_pub -h "mqtt.mydomain.com" -p 1xxxx -i test1 -u test1 -P pass1 -t mytopics/test1 -m "hi everyone" -d -c
This works, but since it's port 1xxxx it's not SSL.
Trying the SSL port:
mosquitto_pub -h "mqtt.mydomain.com" -p 2xxxx -i test1 -u test1 -P pass1 -t mytopics/test1 -m "hi everyone" -d -c --cafile C:\Users\CT\Downloads\certs\mqtt.mydomain.com.ca
gives me this error in cmd:
OpenSSL Error[0]: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed
Error: A TLS error occurred.
I tried many different commands, like passing the cert file in addition to the ca file, and even the key file (which is probably wrong, I guess), and I get different errors in the server logs, such as:
OpenSSL Error: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca
OpenSSL Error: error:1408F10B:SSL routines:ssl3_get_record:wrong version number
Client connection from xx.xx.xx.xx failed: error:1408F10B:SSL routines:ssl3_get_record:wrong version number.
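Before changing anything else, it can help to look at which certificate the broker actually presents on the SSL port and whether it matches mqtt.mydomain.com; a hedged probe (2xxxx is the same placeholder SSL port as above):
openssl s_client -connect mqtt.mydomain.com:2xxxx -servername mqtt.mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates
As a rule of thumb, "certificate verify failed" means the chain passed with --cafile does not validate the certificate the broker serves, while "wrong version number" usually means one side spoke plain MQTT on a TLS port (or TLS on a plain port).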

CentOS6 - curl NSS error -5961 resolved by updating packages. What did it actually do?

I was running into the same issue as described in this question: cURL SSL connect error 35 with NSS error -5961
$ curl --verbose https://api.hostname.com
* About to connect() to api.hostname.com port 443 (#0)
* Trying 1.2.3.4... connected
* Connected to api.hostname.com (1.2.3.4) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* NSS error -5961
* Closing connection #0
* SSL connect error
curl: (35) SSL connect error
I followed user qingbo's advice after noticing that the server (CentOS 6.5 VM) had not been updated in a while. I ran this command:
$ yum update -y nss curl libcurl
and then re-ran the curl command and this time I received an expected HTTP/1.1 200 OK.
My question is - what did updating the nss, curl and libcurl packages do that fixed the connection issue?
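One way to see what actually changed is to compare package versions and changelogs before and after the update, for example:
rpm -q nss curl libcurl
rpm -q --changelog nss | head -n 20
The usual explanation is that the updated NSS brings newer TLS protocol versions and cipher suites, so servers that require them no longer drop the connection; the changelog output shows which fixes were actually applied on your system.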

Docker container not connecting to https endpoints

From inside a docker container, I'm running
# openssl s_client -connect rubygems.org:443 -state -nbio 2>&1 | grep "^SSL"
SSL_connect:before/connect initialization
SSL_connect:SSLv2/v3 write client hello A
SSL_connect:error in SSLv2/v3 read server hello A
That's all I get.
I can't connect to any HTTPS site from within the Docker container. The container is running on an OpenStack VM. The VM itself can connect via HTTPS.
Any advice?
UPDATE
root@ce239554761d:/# curl -vv https://google.com
* Rebuilt URL to: https://google.com/
* Hostname was NOT found in DNS cache
* Trying 216.58.217.46...
* Connected to google.com (216.58.217.46) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
and then it hangs.
Also, I'm getting intermittent successes now.
Sanity Checks:
Changing the Docker IPs doesn't fix the problem
The Docker containers work on my local machine
The Docker containers work on other clouds
Docker 1.10.0 doesn't work in the VMs
Docker 1.9.1 works in the VMs
I was given a solution by the Docker community:
OpenStack networking tends to use a lower MTU, and since 1.10 Docker no longer infers the MTU settings from the host's network card.
To run the Docker daemon with custom MTU settings, you can follow this blog post, which says:
$ cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service
Edit a line in the new file to look like this:
ExecStart=/usr/bin/docker daemon -H fd:// --mtu=1454
Or (as suggested below by Dionysius), create and edit the file
/etc/systemd/system/docker.service.d/fixmtu.conf as follows:
[Service]
# Reset ExecStart & update mtu (see original command in /lib/systemd/system/docker.service)
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --mtu=1454
An MTU of 1454 seems to be the common value with OpenStack. You can look it up on your host using ifconfig.
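On newer Docker versions the same MTU can also be set in the daemon configuration file instead of a systemd drop-in (shown here only as an alternative, using the 1454 value from the post):
/etc/docker/daemon.json
{
  "mtu": 1454
}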
Finally restart Docker:
$ sudo systemctl daemon-reload
$ sudo service docker restart
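To verify the setting took effect, you can check the MTU inside a throwaway container (busybox is just a small example image):
docker run --rm busybox ip link show eth0
The eth0 line should report mtu 1454 if the daemon picked up the option.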

CHECK_NRPE: Error - Could not complete SSL handshake with nsclient++

I'm using NRPE (v2.15) on my Icinga server to check the memory usage on a
Windows host with NSClient++ (v0.4.3.143).
Unfortunately, I always get the same error message when I try to check it:
./check_nrpe -H host01 -p 5666 -c CheckMem -a MaxWarn=95% MaxCrit=98% ShowAll type=physical
CHECK_NRPE: Error - Could not complete SSL handshake.
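One frequently reported cause for this combination (check_nrpe 2.x against NSClient++ 0.4.x) is a cipher mismatch: the old check_nrpe only speaks anonymous DH, which recent NSClient++ builds reject by default. A hedged sketch of the nsclient.ini settings usually involved (section and key names taken from NSClient++ 0.4.x defaults, and the allowed hosts IP is a placeholder; verify against your installed version):
; nsclient.ini on the Windows host
[/settings/default]
; the Icinga server's IP, placeholder value
allowed hosts = 10.0.0.1

[/settings/NRPE/server]
; allow the legacy ADH ciphers that check_nrpe 2.x uses (NSClient++ labels this mode insecure)
insecure = true
After editing, restart the NSClient++ service on the Windows host.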