Troubleshooting - Setting up a private GitLab server and connecting GitLab Runners - SSL

I have a GitLab instance running in Docker on a dedicated private server (accessible only from within our VPC). We want to start doing CI with GitLab Runners, so I spun up another server to host our runners.
Now that GitLab Runner has been configured, I try to register a runner with the private IP of the GitLab server and the registration token:
Enter the GitLab instance URL (for example, https://gitlab.com/):
$GITLAB_PRIVATE_IP
Enter the registration token:
$TOKEN
Enter a description for the runner:
[BEG-GITLAB-RUNNER]: default
Enter tags for the runner (comma-separated):
default
ERROR: Registering runner... failed runner=m616FJy- status=couldn't execute POST against https://$GITLAB_PRIVATE_IP/api/v4/runners: Post "https://$GITLAB_PRIVATE_IP/api/v4/runners": x509: certificate has expired or is not yet valid: current time 2022-02-06T20:00:35Z is after 2021-12-24T04:54:28Z
It looks like our certs have expired. To verify:
echo | openssl s_client -showcerts -connect $GITLAB_PRIVATE_IP:443 2>&1 | openssl x509 -noout -dates
notBefore=Nov 24 04:54:28 2021 GMT
notAfter=Dec 24 04:54:28 2021 GMT
GitLab ships with Let's Encrypt support, so I decided to enable Let's Encrypt and certificate auto-renewal in the GitLab config; however, when I try to reconfigure I get the error message:
There was an error running gitlab-ctl reconfigure:
letsencrypt_certificate[$GITLAB_PRIVATE_IP] (letsencrypt::http_authorization line 6) had an error: Acme::Client::Error::RejectedIdentifier: acme_certificate[staging] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/letsencrypt/resources/certificate.rb line 41) had an error: Acme::Client::Error::RejectedIdentifier: Error creating new order :: Cannot issue for "$GITLAB_PRIVATE_IP": The ACME server can not issue a certificate for an IP address
So it looks like I can't use the Let's Encrypt option packaged with GitLab to renew the certs.
How can I create/renew SSL certs on a private Linux server without a domain?
If you've set up GitLab + Runners on private servers, what does your Rails configuration look like?
Is there a way to enable DNS on a private server for the sole purpose of a certificate authority granting certs?

I would suggest using a self-signed certificate. I have tested this before and it works fine, but it requires some work. I will try to summarize the steps needed (a command sketch follows the list):
1- Generate a self-signed certificate for the domain you choose and make sure to keep a copy in /etc/gitlab-runner/certs/
2- Add the domain and the certificate paths in /etc/gitlab/gitlab.rb
3- Reconfigure GitLab
4- When connecting the runner, make sure to manually copy the certs to the runner server and activate them.
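A minimal sketch of those steps, assuming a made-up internal hostname gitlab.example.internal (anything you can resolve via internal DNS or /etc/hosts works) and OpenSSL 1.1.1+ for the -addext flag; paths follow the Omnibus defaults, so double-check them against your install:

# 1. On the GitLab server: self-signed cert whose SAN matches the URL the runner will use
sudo mkdir -p /etc/gitlab/ssl
sudo openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout /etc/gitlab/ssl/gitlab.example.internal.key \
  -out /etc/gitlab/ssl/gitlab.example.internal.crt \
  -subj "/CN=gitlab.example.internal" \
  -addext "subjectAltName=DNS:gitlab.example.internal"

# 2. In /etc/gitlab/gitlab.rb
external_url 'https://gitlab.example.internal'
letsencrypt['enable'] = false
nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab.example.internal.crt"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab.example.internal.key"

# 3. Apply the config
sudo gitlab-ctl reconfigure

# 4. On the runner host: make gitlab.example.internal resolve (e.g. an /etc/hosts entry
#    pointing at the private IP), copy the cert over, and register against it
sudo cp gitlab.example.internal.crt /etc/gitlab-runner/certs/
sudo gitlab-runner register \
  --url https://gitlab.example.internal/ \
  --registration-token $TOKEN \
  --tls-ca-file /etc/gitlab-runner/certs/gitlab.example.internal.crt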

Related

What is the meaning of error=x509: certificate is valid for user A, not localhost in Docker?

I am using a Docker container to run a bunch of services, all of which use certificates to communicate with each other.
When starting up those services, there is one in particular that complains with the following error:
> discovery_1 | INFO ttn: Got public keys for token validation
> discovery_1 | DEBUG Connected to gRPC server Address=localhost:1900
> discovery_1 | FATAL Could not start client for gRPC proxy error=x509: certificate is valid for discovery, not localhost
> ttnbackbone_discovery_1 exited with code 1
I have created the certificate for the "discovery" user, but Docker still tries to reach it on localhost, in some way which I don't understand... I have also followed this Docker tutorial on certificate usage, but I still get the same error.
What can I do further?
Thanks in advance,
Regards!
I encountered this today. x509 certificates have a Common Name attribute that some software uses to match the DNS hostname of a server. Here was my error with a certificate whose CN is localhost and a DNS hostname of docker1-staging:
error during connect: Get https://docker1-staging:2376/v1.26/containers/json: x509: certificate is valid for localhost, not docker1-staging
I'll have to regenerate the certificate used by the Docker server and make sure it has a CN value of docker1-staging. You'll have to do the same with a CN value of localhost.
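A sketch of regenerating such a certificate, using the hostnames from this thread as placeholders (requires OpenSSL 1.1.1+ for -addext); note that newer Docker/Go versions ignore the CN entirely, so the name must also appear as a Subject Alternative Name:

openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout server-key.pem -out server-cert.pem \
  -subj "/CN=docker1-staging" \
  -addext "subjectAltName=DNS:docker1-staging,DNS:localhost"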

Docker private registry | TLS certificate issue

I've tried to follow the following tutorial to set up our own private registry (v2) on an AWS CentOS machine.
I've self-signed a TLS certificate and placed it in /etc/docker/certs.d/MACHINE_STATIC_IP:5000/
When trying to log in to the registry (docker login MACHINE_IP:5000) or push a tagged repository (MACHINE_IP:5000/ubuntu:latest) I get the following error:
Error response from daemon: Get https://MACHINE_IP:5000/v1/users/: x509: cannot validate certificate for MACHINE_IP because it doesn't contain any IP SANs
I've searched for an answer for 2 days, but couldn't find any.
I've set the certificate CN (common name) to MACHINE_STATIC_IP:5000
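Note that the error points at the missing IP SAN rather than at the CN, and the port never belongs in the certificate. A sketch of generating a self-signed cert with an IP SAN, keeping MACHINE_STATIC_IP as a placeholder (OpenSSL 1.1.1+ for -addext):

openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout domain.key -out domain.crt \
  -subj "/CN=MACHINE_STATIC_IP" \
  -addext "subjectAltName=IP:MACHINE_STATIC_IP"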
When using a self-signed TLS certificate, the Docker daemon requires you to add the certificate to its known certificates.
Use the keytool command to grab the certificate:
keytool -printcert -sslserver ${NEXUS_DOMAIN}:${SSL_PORT} -rfc > ${NEXUS_DOMAIN}.crt
And copy it to your client machine's SSL certificates directory (in my case, Ubuntu):
sudo cp ${NEXUS_DOMAIN}.crt /usr/local/share/ca-certificates/${NEXUS_DOMAIN}.crt && sudo update-ca-certificates
Now restart the Docker daemon and you're good to go:
sudo systemctl restart docker
You can also use the following command to temporarily trust the certificate without adding it to your system certificates.
docker --tlscert <the downloaded tls cert> pull <whatever you want to pull>

Tunnel Connection Failed error when logging into artifactory docker registry

We have created a private Docker registry in Artifactory.
Our Artifactory is a standalone installation and has Nginx as a web server.
SSL certificates are trusted and work fine.
On the Docker client, I have copied the ca.crt to /etc/docker/certs.d/artifactory.host:5001/
While trying to log in or push images from my Docker client, I see the error below.
[root@cds-dev-test ~]# docker login artifactory.host:5001
Username: raj
Password:
Email: raj@gmail.com
Error response from daemon: invalid registry endpoint
https://artifactory.host:5001/v0/: unable to ping registry endpoint
v2 ping attempt failed with error: Get https://artifactory.host:5001/v2/: Tunnel Connection Failed
v1 ping attempt failed with error: Get artifactory.host:5001/v1/_ping: Tunnel Connection Failed. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add --insecure-registry artifactory.host:5001 to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/artifactory.host:5001/ca.crt
My Docker version is 1.9.1 and Artifactory version is 4.4.3.
It works when I use the --insecure-registry option but not the secure way. We have all the trusted certs in place and still see the error.
I have tried using proxy settings on the Docker client and also without a proxy... always the same error.
Any help guys?
I figured it out.
I had proxy settings under my Docker daemon. I added NO_PROXY and it works fine.
FYI...
So, people: if you are using a trusted CA cert and your network is behind a proxy, make sure your Docker service file doesn't route the registry through the proxy; if it does, add NO_PROXY=artifactory.host (see the sketch below).
/etc/systemd/system/docker.service.d/http-proxy.conf
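A sketch of what that systemd drop-in might look like; the proxy URL is a placeholder for your own. After editing it, reload systemd and restart Docker:

[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,artifactory.host"

sudo systemctl daemon-reload
sudo systemctl restart docker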
Thanks

Chef SSL validation failure

I have one Chef server, version 12.0.1, and can connect Linux (RHEL/CentOS) systems to the Chef server with knife bootstrap, but I cannot with Windows, and locally on my RHEL client knife ssl check fails.
I have two problems but I think they are both related.
Problem 1 - knife ssl check fails:
Connecting to host chef-server:443
ERROR: The SSL certificate of chef-server could not be verified
Problem 2 - bootstrap windows server fails:
ERROR: SSL Validation failure connecting to host: chef-server - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
Chef encountered an error attempting to create the client "desktop"
I have tried a number of things:
1) knife ssl fetch - no changes
2) I have a signed DigiCert crt on the server which is accepted by the management console and the Chrome web browser
3) I have set this in chef-server.rb:
nginx['ssl_certificate'] = "/var/opt/opscode/nginx/ca/hostname.crt"
nginx['ssl_certificate_key'] = "/var/opt/opscode/nginx/ca/hostname.key"
which point to the signed certs.
Anything else I should be trying, or am I being a plank?
Try running these commands on your Chef server:
mkdir /root/.chef/trusted_certs
cp /var/opt/chef-server/nginx/ca/YOUR_SERVER'S_HOSTNAME.crt /root/.chef/trusted_certs/
I was having the same problem, and it was fixed after I looked through this article and tried the steps it gave: http://jtimberman.housepub.org/blog/2014/12/11/chef-12-fix-untrusted-self-sign-certs/
I was having the same issue with a valid wildcard certificate, although on Linux rather than Windows. It looks like the issue is that the Chef client uses OpenSSL and didn't have the CA and root certificates. I was getting errors when I ran the following from the Chef client server:
openssl s_client -connect chef_server_url:443 -showcerts
I solved my issue by browsing to the Chef server, inspecting the certs, and exporting each cert in the chain to a single file, ordered with the issued certificate at the top and the root at the bottom. I then used this bundled cert as the certificate file in the Chef server config file and reconfigured Chef (a sketch is below).
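A sketch of building that bundle on the Chef server, assuming hypothetical file names for the issued, intermediate, and root certificates:

# concatenate the chain: issued cert first, root last
cat hostname.crt intermediate.crt root.crt > /var/opt/opscode/nginx/ca/hostname-bundle.crt

# point chef-server.rb at the bundle
nginx['ssl_certificate'] = "/var/opt/opscode/nginx/ca/hostname-bundle.crt"
nginx['ssl_certificate_key'] = "/var/opt/opscode/nginx/ca/hostname.key"

# apply the change
chef-server-ctl reconfigure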

Chef SSL verification failed while setting up the workstation

I am setting up a Chef workstation by configuring knife.rb with the "knife configure -i" command. After properly answering all the questions, I get the following error:
ERROR: SSL Validation failure connecting to host: 172.xx.x.xx - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
ERROR: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
My goal is to disable this SSL certificate verification forever and use the knife utility to bootstrap all my nodes.
I had the same issue running chef-client after upgrading to version 12.x. Steps to solve:
Pull crt from server. Run on node:
knife ssl fetch -s https://yourchefserver01.com:443
Note: If fetch doesn't work, copy yourchefserver01.com:/var/opt/chef-server/nginx/ca/yourchefserver01.com.crt to client:/root/.chef/trusted_certs/yourchefserver01.com.crt
Verify it pulled:
knife ssl check -s https://yourchefserver01.com:443
export SSL_CERT_FILE="/root/.chef/trusted_certs/yourchefserver01.com.crt"
Run chef-client
Your problem is the validation of the Chef server certificate.
Install a proper certificate on the Chef server,
or add your Chef server certificate (located in /etc/chef-server/hostname.crt) to your workstation's cacert.pem (located by default in <install path>/opscode/chef/embedded/ssl/certs).
With Chef 12 you'll have to distribute it to your nodes too, to validate the Chef API server, or you'll get a warning about it at the start of each chef-client run.
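A sketch of appending the server certificate to the workstation's bundled cacert.pem, assuming a placeholder hostname chef-server.example.com and the default paths mentioned above (adjust both to your install):

# grab the server's certificate and append it to the embedded CA bundle
openssl s_client -connect chef-server.example.com:443 -showcerts </dev/null 2>/dev/null \
  | openssl x509 > /tmp/chef-server.crt
cat /tmp/chef-server.crt >> /opt/opscode/chef/embedded/ssl/certs/cacert.pem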
The issue seems to be with the .pem validator; your validation key is misconfigured. Try creating a new validation key on the Chef server and placing it on the node.
If you are running Chef Server on-premise, it will be easier in the long run to install a third-party SSL cert, e.g. Verisign, on the Chef server (or load balancer). chef-client and knife come with OpenSSL, which will trust a valid third-party cert automatically with no configuration required on each node.
Please don't turn off SSL cert validation. SSL validation is additional protection that the server you are trusting with root access to your Chef nodes is the real Chef server, not a man-in-the-middle.