I am currently running a Chef Server.
There are two ways to access the server:
<HOSTNAME_OF_SERVER_OR_FQDN>
OR
<ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>
When I try to run knife ssl check, I get:
root@host:/opt/chef-server/embedded/jre# knife ssl check
Connecting to host <ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>:443
ERROR: The SSL certificate of <HOSTNAME_OF_SERVER_OR_FQDN> could not be verified
Certificate issuer data: /C=US/ST=MA/L=Boston/O=YouCorp/OU=Operations/CN=<HOSTNAME_OF_SERVER_OR_FQDN>.com/emailAddress=you@example.com
Configuration Info:
OpenSSL Configuration:
* Version: OpenSSL 1.0.1p 9 Jul 2015
* Certificate file: /opt/chefdk/embedded/ssl/cert.pem
* Certificate directory: /opt/chefdk/embedded/ssl/certs
Chef SSL Configuration:
* ssl_ca_path: nil
* ssl_ca_file: nil
* trusted_certs_dir: "/root/.chef/trusted_certs"
I want the knife ssl check command to succeed. Basically, I want it to be able to connect successfully using <ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>
How can I add the CNAME to the current certificate which I believe is /opt/chefdk/embedded/ssl/cert.pem ?
One strange aspect of the certificate file is that when I read it and grep for the hostnames or CNAMEs, I find none:
# /opt/chef-server/embedded/jre/bin/keytool -printcert -file /opt/chefdk/embedded/ssl/cert.pem | grep <ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>
No result
# /opt/chef-server/embedded/jre/bin/keytool -printcert -file /opt/chefdk/embedded/ssl/cert.pem | grep <HOSTNAME_OF_SERVER_OR_FQDN>
No result
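Note that /opt/chefdk/embedded/ssl/cert.pem is the CA trust bundle knife verifies against, not your server's own certificate, which is why grepping it for your hostnames finds nothing. To see which names the server actually presents, inspect the live certificate instead (a sketch; substitute your real CNAME):
$ echo | openssl s_client -connect <ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>:443 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"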
This is how I did it in the past.
The Chef server can be configured to use SSL certificates by adding the following settings to the server configuration file (chef-server.rb).
For example:
nginx['ssl_certificate'] = "/etc/pki/tls/certs/your-host.crt"
nginx['ssl_certificate_key'] = "/etc/pki/tls/private/your-host.key"
Save the file, and then run the following command:
$ sudo chef-server-ctl reconfigure
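After the reconfigure, fetch the new certificate on your workstation so knife trusts it, then re-run the check (assuming the server is reachable under the CNAME):
$ knife ssl fetch https://<ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>
$ knife ssl check https://<ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>
Note that you cannot add a name to an existing certificate; if you need both the hostname and the CNAME to verify, reissue the certificate with both names in its Subject Alternative Names.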
I have a GitLab instance running in Docker on a dedicated private server (accessible only from within our VPC). We want to start doing CI using GitLab runners, so I spun up another server to host our runners.
Now that gitlab-runner has been configured, I tried to register a runner with the private IP of the GitLab server and the registration token:
Enter the GitLab instance URL (for example, https://gitlab.com/):
$GITLAB_PRIVATE_IP
Enter the registration token:
$TOKEN
Enter a description for the runner:
[BEG-GITLAB-RUNNER]: default
Enter tags for the runner (comma-separated):
default
ERROR: Registering runner... failed runner=m616FJy- status=couldn't execute POST against https://$GITLAB_PRIVATE_IP/api/v4/runners: Post "https://$GITLAB_PRIVATE_IP/api/v4/runners": x509: certificate has expired or is not yet valid: current time 2022-02-06T20:00:35Z is after 2021-12-24T04:54:28Z
It looks like our certs have expired. To verify:
echo | openssl s_client -showcerts -connect $GITLAB_PRIVATE_IP:443 2>&1 | openssl x509 -noout -dates
notBefore=Nov 24 04:54:28 2021 GMT
notAfter=Dec 24 04:54:28 2021 GMT
GitLab ships with Let's Encrypt support, so I decided to enable Let's Encrypt and certificate auto-renewal in the GitLab configuration. However, when I try to reconfigure, I get this error:
There was an error running gitlab-ctl reconfigure:
letsencrypt_certificate[$GITLAB_PRIVATE_IP] (letsencrypt::http_authorization line 6) had an error: Acme::Client::Error::RejectedIdentifier: acme_certificate[staging] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/letsencrypt/resources/certificate.rb line 41) had an error: Acme::Client::Error::RejectedIdentifier: Error creating new order :: Cannot issue for "$GITLAB_PRIVATE_IP": The ACME server can not issue a certificate for an IP address
So it looks like I can't use the Let's Encrypt option packaged with GitLab to renew the certs.
How can I create/renew SSL certs on a private Linux server without a domain?
If you've set up GitLab + runners on private servers, what does your Rails configuration look like?
Is there a way to enable DNS on a private server for the sole purpose of a certificate authority granting certs?
I would suggest using a self-signed certificate. I have tested this before and it works fine, but it requires some work. I will try to summarize the steps needed (a sketch of all four steps follows the list):
1- generate a self-signed certificate with the domain you choose and make sure to keep it in /etc/gitlab-runner/certs/
2- add the domain and the certificate paths to /etc/gitlab/gitlab.rb
3- reconfigure GitLab
4- when connecting the runner, make sure to manually copy the certificate to the runner server and activate it
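A minimal sketch of those steps, assuming the made-up internal domain gitlab.internal.example and OpenSSL 1.1.1+ (for -addext); adjust names and paths to your setup:
# 1- generate a self-signed certificate with a matching SAN
$ sudo openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -keyout /etc/gitlab/ssl/gitlab.internal.example.key \
    -out /etc/gitlab/ssl/gitlab.internal.example.crt \
    -subj "/CN=gitlab.internal.example" \
    -addext "subjectAltName=DNS:gitlab.internal.example"
# 2- in /etc/gitlab/gitlab.rb; omnibus GitLab picks up /etc/gitlab/ssl/<hostname>.crt and .key by default
external_url "https://gitlab.internal.example"
letsencrypt['enable'] = false
# 3- reconfigure
$ sudo gitlab-ctl reconfigure
# 4- on the runner host: copy the .crt into /etc/gitlab-runner/certs/ and register with it
$ sudo gitlab-runner register --tls-ca-file /etc/gitlab-runner/certs/gitlab.internal.example.crt
The runner host must also resolve gitlab.internal.example to the GitLab server's private IP, e.g. via an /etc/hosts entry.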
We would like to have our own custom brew repository to allow our developers to easily manage/update our company tools. We decided to keep all these files in an AWS S3 bucket and have the brew formulas point directly to the objects' URLs. The only restriction we have is that access to the S3 bucket must only be possible from behind our VPN network.
So what we did:
Created a new bucket, let's say with the following name: downloads.example.com
Created an S3 VPC endpoint. AWS created a DNS entry:
*.vpce-XXXXXXXXXXXXXXX-XXXXXX.s3.eu-west-1.vpce.amazonaws.com
In the bucket policy we limited access only to that AWS S3 endpoint:
"Condition": {
"StringEquals": {
"aws:SourceVpce": "vpce-XXXXXXXXXXXXXXX"
}
}
We created a Route 53 DNS entry:
an A record (alias) for downloads.example.com pointing to *.vpce-XXXXXXXXXXXXXXX-XXXXXX.s3.eu-west-1.vpce.amazonaws.com
After that simple configuration, we are able to get/push objects using AWS CLI commands, and only when we are connected to our VPN server.
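For example, over the VPN a listing like this works (a sketch, assuming the CLI is pointed at the interface endpoint; the endpoint ID placeholder is the one from above):
$ aws s3 ls s3://downloads.example.com/ --endpoint-url https://bucket.vpce-XXXXXXXXXXXXXXX-XXXXXX.s3.eu-west-1.vpce.amazonaws.com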
Unfortunately, the problem appears when we want to use curl, for example:
* Trying 10.X.X.X:443...
* Connected to downloads.example.com (10.X.X.X) port 443 (#0)
...
* Server certificate:
* subject: CN=s3.eu-west-1.amazonaws.com
* start date: Dec 16 00:00:00 2021 GMT
* expire date: Jan 14 23:59:59 2023 GMT
* subjectAltName does not match downloads.example.com
* SSL: no alternative certificate subject name matches target host name 'downloads.example.com'
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, close notify (256):
If I run the same command but skip certificate verification, it works:
20211217 16:56:52 kamil@thor ~$ curl -Ls https://downloads.example.com/getMe.txt -k
test file
Do you know if there is any way to make this work properly?
I know that we could do the following things, but we would like to see other options:
push the route to s3.eu-west-1.amazonaws.com via the VPN and, in the bucket policy, limit access to our VPN's public IP only
install the right certificates on ingress/nginx to do some redirect/proxy
we tried some combinations with load balancers and ACM, but they didn't work
Thank you in advance for help
Kamil
I'm afraid it is not possible to do what you want.
When you create an endpoint, AWS does not create certificates for your own domain. It creates a certificate for its own domains.
You can check it yourself:
First, download the certificate
$ echo | openssl s_client -connect 10.99.16.29:443 2>&1 | sed --quiet '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > vpce.pem
Then you can verify what names are in the certificate.
$ openssl x509 -noout -text -in vpce.pem | grep DNS | tr "," "\n" | sort -u
DNS:s3.eu-central-1.amazonaws.com
DNS:*.accesspoint.vpce-0f0d06a5091e70758-7mtj4kk7-eu-central-1a.s3.eu-central-1.vpce.amazonaws.com
DNS:*.accesspoint.vpce-0f0d06a5091e70758-7mtj4kk7.s3.eu-central-1.vpce.amazonaws.com
DNS:*.bucket.vpce-0f0d06a5091e70758-7mtj4kk7-eu-central-1a.s3.eu-central-1.vpce.amazonaws.com
DNS:*.bucket.vpce-0f0d06a5091e70758-7mtj4kk7.s3.eu-central-1.vpce.amazonaws.com
DNS:*.control.vpce-0f0d06a5091e70758-7mtj4kk7-eu-central-1a.s3.eu-central-1.vpce.amazonaws.com
DNS:*.control.vpce-0f0d06a5091e70758-7mtj4kk7.s3.eu-central-1.vpce.amazonaws.com
DNS:*.s3-accesspoint.eu-central-1.amazonaws.com
DNS:*.s3-control.eu-central-1.amazonaws.com
DNS:*.s3.eu-central-1.amazonaws.com
DNS:bucket.vpce-0f0d06a5091e70758-7mtj4kk7-eu-central-1a.s3.eu-central-1.vpce.amazonaws.com
DNS:bucket.vpce-0f0d06a5091e70758-7mtj4kk7.s3.eu-central-1.vpce.amazonaws.com
Note: for brevity, I've removed some names from the list.
So, to access your endpoint without certificate problems, you need to use one of the names provided in the certificate.
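For example, for a bucket whose name contains no dots (TLS wildcards match only a single label), the bucket-style endpoint name matches the *.bucket.vpce-... entry and curl verifies cleanly (a sketch with a hypothetical bucket downloads-example):
$ curl -L https://downloads-example.bucket.vpce-0f0d06a5091e70758-7mtj4kk7.s3.eu-central-1.vpce.amazonaws.com/getMe.txt
A bucket name with dots (like downloads.example.com) will not match the single-label wildcard, so it would need to be renamed or fronted by a proxy holding a certificate for the custom domain.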
I've tried to follow a tutorial to set up our own private registry (v2) on an AWS CentOS machine.
I've created a self-signed TLS certificate and placed it in /etc/docker/certs.d/MACHINE_STATIC_IP:5000/
When trying to log in to the registry (docker login MACHINE_IP:5000) or push a tagged repository (MACHINE_IP:5000/ubuntu:latest), I get the following error:
Error response from daemon: Get https://MACHINE_IP:5000/v1/users/: x509: cannot validate certificate for MACHINE_IP because it doesn't contain any IP SANs
I searched for an answer for two days but couldn't find one.
I've set the certificate CN (common name) to MACHINE_STATIC_IP:5000
When using a self-signed TLS certificate, the Docker daemon requires you to add the certificate to its known certificates.
Use the keytool command to grab the certificate:
keytool -printcert -sslserver ${NEXUS_DOMAIN}:${SSL_PORT} -rfc > ${NEXUS_DOMAIN}.crt
And copy it to your client machine's SSL certificates directory (in my case, Ubuntu):
sudo cp ${NEXUS_DOMAIN}.crt /usr/local/share/ca-certificates/${NEXUS_DOMAIN}.crt && sudo update-ca-certificates
Now restart the Docker daemon and you're good to go:
sudo systemctl restart docker
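Alternatively, you can scope the trust to just the registry: the Docker daemon looks for a file named ca.crt under /etc/docker/certs.d/<host>:<port>/. A sketch using the same variables as above:
$ sudo mkdir -p /etc/docker/certs.d/${NEXUS_DOMAIN}:${SSL_PORT}
$ sudo cp ${NEXUS_DOMAIN}.crt /etc/docker/certs.d/${NEXUS_DOMAIN}:${SSL_PORT}/ca.crt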
You can also use the following command to temporarily trust the certificate without adding it to your system certificates.
docker --tlscert <the downloaded tls cert> pull <whatever you want to pull>
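Note that trusting the certificate only fixes chain verification; the original "doesn't contain any IP SANs" error means the certificate itself has to be reissued with the IP address in a Subject Alternative Name, since a bare CN is not enough for modern clients (and the CN should be just the IP, without the port). A sketch with openssl, assuming OpenSSL 1.1.1+ for -addext; substitute the real address:
$ openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -keyout domain.key -out domain.crt \
    -subj "/CN=MACHINE_STATIC_IP" \
    -addext "subjectAltName=IP:MACHINE_STATIC_IP"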
I have a custom Chef server on premises with a TLS certificate that is signed by our own CA server. I added the CA certificate to .chef/trusted_certs and now knife ssl verify works fine.
But when I try to upload cookbooks using Berkshelf, I run into the following error:
$ berks upload
E, [2016-03-26T15:02:18.290419 #8629] ERROR -- : Ridley::Errors::ClientError: SSL_connect returned=1 errno=0 state=error: certificate verify failed
E, [2016-03-26T15:02:18.291025 #8629] ERROR -- : /Users/chbr/.rvm/gems/ruby-2.3-head@global/gems/celluloid-0.16.0/lib/celluloid/responses.rb:29:in `value'
I have tried appending the CA certificate to /opt/chefdk/embedded/ssl/certs/cabundle.pem, but it made no difference.
Create a custom CA bundle file and then set $SSL_CERT_FILE (or $SSL_CERT_DIR if you want to use that format) in your environment.
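For example (a sketch; ~/my-ca.pem is a hypothetical path to your CA certificate):
$ cat /opt/chefdk/embedded/ssl/certs/cacert.pem ~/my-ca.pem > ~/custom-ca-bundle.pem
$ export SSL_CERT_FILE=~/custom-ca-bundle.pem
$ berks upload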
Use --no-ssl-verify; Berkshelf does not respect Chef's trusted certs.
Alternatively, there is an option to specify this in the Berkshelf config file.
Don't ignore certificate validation. That is not the safest choice, especially with news about attackers having recently inserted malware into places like the Node Package Manager (npm) registry. You can easily configure Berkshelf to trust the same certificates you trust with Chef.
In your ~/chef-repo/.berkshelf/config.json file, make sure ca_path points at your Chef trusted certificates, like this (assuming your Chef repo is located at ~/chef-repo):
{
"ssl": {
"verify": true,
"ca_path": "~/chef-repo/.chef/trusted_certs"
}
}
Then, use knife to manage your Chef certificates (like this):
$ cd ~/chef-repo
$ knife ssl fetch https://supermarket.chef.io/
$ knife ssl fetch https://my.chef.server.example.org/
All the certificates you trust with Chef will also be trusted by Berks.
I have one Chef server, version 12.0.1. I can connect Linux (RHEL/CentOS) systems to the Chef server with knife bootstrap, but I cannot bootstrap Windows systems, and locally on my RHEL client knife ssl check fails.
I have two problems but I think they are both related.
Problem 1 - knife ssl check fails:
Connecting to host chef-server:443
ERROR: The SSL certificate of chef-server could not be verified
Problem 2 - bootstrap windows server fails:
ERROR: SSL Validation failure connecting to host: chef-server - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
Chef encountered an error attempting to create the client "desktop"
I have tried a number of things:
1) knife ssl fetch - no changes
2) I have a signed DigiCert certificate on the server which is accepted by the management console and the Chrome web browser
3) I have set this in chef-server.rb:
nginx['ssl_certificate'] = "/var/opt/opscode/nginx/ca/hostname.crt"
nginx['ssl_certificate_key'] = "/var/opt/opscode/nginx/ca/hostname.key"
which point to the signed certs.
Anything else I should be trying or am I being a plank?
Try running these commands on your Chef server:
mkdir /root/.chef/trusted_certs
cp /var/opt/chef-server/nginx/ca/YOUR_SERVER'S_HOSTNAME.crt /root/.chef/trusted_certs/
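Then re-run the check to confirm (using the hostname from the question):
$ knife ssl check https://chef-server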
I was having the same problem, and it was fixed after I looked through this article and tried out the steps it gave: http://jtimberman.housepub.org/blog/2014/12/11/chef-12-fix-untrusted-self-sign-certs/
I was having the same issue with a valid wildcard certificate, although on Linux rather than Windows. It looks like the issue is that the Chef client uses OpenSSL and didn't have the CA and root certificates. I was getting errors when I ran the following from the Chef client machine:
openssl s_client -connect <chef_server_url>:443 -showcerts
I solved my issue by browsing to the Chef server, inspecting the certs, and exporting each cert in the chain to a single file, ordered with the issued certificate at the top and the root at the bottom. I then used this bundled cert as the certificate file in the Chef server config file and reconfigured Chef.
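A sketch of that bundling, with hypothetical file names for the exported chain:
$ cat hostname.crt intermediate.crt root.crt > hostname-bundle.crt
# in chef-server.rb:
nginx['ssl_certificate'] = "/var/opt/opscode/nginx/ca/hostname-bundle.crt"
nginx['ssl_certificate_key'] = "/var/opt/opscode/nginx/ca/hostname.key"
$ sudo chef-server-ctl reconfigure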