AWS S3 endpoint and proper SSL verification - amazon-s3

We would like to have our own custom brew repository so that our developers can easily manage/update our company tools. We decided to keep all of these files in an AWS S3 bucket and have the brew formulas point directly to the objects' URLs. The only restriction we have is that access to that S3 bucket must only be possible from behind our VPN.
So what we did:
Created a new bucket, let's say with the following name: downloads.example.com
Created an S3 VPC endpoint. AWS created a DNS entry:
*.vpce-XXXXXXXXXXXXXXX-XXXXXX.s3.eu-west-1.vpce.amazonaws.com
In the bucket policy we limited access to that S3 endpoint only:
"Condition": {
"StringEquals": {
"aws:SourceVpce": "vpce-XXXXXXXXXXXXXXX"
}
}
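For reference, that condition would sit in a bucket policy statement roughly like the sketch below; the bucket name and VPCE ID are the ones from above, but the action, principal and resource are assumptions about the setup rather than the actual policy.

# Sketch only: allow object reads solely through the VPC endpoint.
aws s3api put-bucket-policy --bucket downloads.example.com --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowReadsViaVpcEndpointOnly",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::downloads.example.com/*",
    "Condition": {
      "StringEquals": {
        "aws:SourceVpce": "vpce-XXXXXXXXXXXXXXX"
      }
    }
  }]
}'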
We created a Route 53 DNS entry:
an A record for downloads.example.com as an alias to *.vpce-XXXXXXXXXXXXXXX-XXXXXX.s3.eu-west-1.vpce.amazonaws.com
After that simple configuration we are able to get/push objects with AWS CLI commands, but only when we are connected to our VPN.
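For example, a quick sanity check along these lines (a sketch; getMe.txt is the test object used further down, and the endpoint name is the placeholder from above) confirms both the DNS alias and the CLI path while on the VPN:

# Should return the endpoint's private IPs when connected to the VPN
dig +short downloads.example.com
# The CLI can also be pointed at the interface endpoint explicitly
aws s3 cp s3://downloads.example.com/getMe.txt - \
  --endpoint-url https://bucket.vpce-XXXXXXXXXXXXXXX-XXXXXX.s3.eu-west-1.vpce.amazonaws.com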
Unfortunately, the problem appears when we want to use curl, for example:
* Trying 10.X.X.X:443...
* Connected to downloads.example.com (10.X.X.X) port 443 (#0)
...
* Server certificate:
* subject: CN=s3.eu-west-1.amazonaws.com
* start date: Dec 16 00:00:00 2021 GMT
* expire date: Jan 14 23:59:59 2023 GMT
* subjectAltName does not match downloads.example.com
* SSL: no alternative certificate subject name matches target host name 'downloads.example.com'
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, close notify (256):
If I run the same command but skip certificate verification, it works:
20211217 16:56:52 kamil@thor ~$ curl -Ls https://downloads.example.com/getMe.txt -k
test file
Do you know if there is any way to make this work properly?
I know that we could do the following things, but we would like to see other options:
push a route to s3.eu-west-1.amazonaws.com via the VPN and, in the bucket policy, limit access to our VPN public IP only
install the right certificates on an ingress/nginx to do some redirect/proxy
we tried some combinations with load balancers and ACM, but that didn't work.
Thank you in advance for your help,
Kamil

I'm afraid it is not possible to do what you want.
When you create an endpoint, AWS does not create certificates for your own domain. It creates a certificate for its own domains.
You can check this yourself.
First, download the certificate:
$ echo | openssl s_client -connect 10.99.16.29:443 2>&1 | sed --quiet '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > vpce.pem
Then you can check which names are in the certificate:
$ openssl x509 -noout -text -in vpce.pem | grep DNS | tr "," "\n" | sort -u
DNS:s3.eu-central-1.amazonaws.com
DNS:*.accesspoint.vpce-0f0d06a5091e70758-7mtj4kk7-eu-central-1a.s3.eu-central-1.vpce.amazonaws.com
DNS:*.accesspoint.vpce-0f0d06a5091e70758-7mtj4kk7.s3.eu-central-1.vpce.amazonaws.com
DNS:*.bucket.vpce-0f0d06a5091e70758-7mtj4kk7-eu-central-1a.s3.eu-central-1.vpce.amazonaws.com
DNS:*.bucket.vpce-0f0d06a5091e70758-7mtj4kk7.s3.eu-central-1.vpce.amazonaws.com
DNS:*.control.vpce-0f0d06a5091e70758-7mtj4kk7-eu-central-1a.s3.eu-central-1.vpce.amazonaws.com
DNS:*.control.vpce-0f0d06a5091e70758-7mtj4kk7.s3.eu-central-1.vpce.amazonaws.com
DNS:*.s3-accesspoint.eu-central-1.amazonaws.com
DNS:*.s3-control.eu-central-1.amazonaws.com
DNS:*.s3.eu-central-1.amazonaws.com
DNS:bucket.vpce-0f0d06a5091e70758-7mtj4kk7-eu-central-1a.s3.eu-central-1.vpce.amazonaws.com
DNS:bucket.vpce-0f0d06a5091e70758-7mtj4kk7.s3.eu-central-1.vpce.amazonaws.com
Note: for brevity, I've removed some names from the list.
So, to access your endpoint without certificate problems, you need to use one of the names provided in the certificate.
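For example (a sketch only; the bucket name and placeholder endpoint ID are taken from the question), the bucket can be reached path-style through the endpoint's own bucket.vpce name, which is one of the names covered by the certificate:

# Path-style request through the interface endpoint; no -k needed
curl -Ls https://bucket.vpce-XXXXXXXXXXXXXXX-XXXXXX.s3.eu-west-1.vpce.amazonaws.com/downloads.example.com/getMe.txt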

Related

Prometheus Discovering Services with Consul: tls:Bad Certificate

I want to use Consul with Prometheus, but I receive a tls: bad certificate error.
See:
caller=consul.go:513 level=error component="discovery manager scrape" discovery=consul msg="Error refreshing service" service=NodeExporter tags= err="Get \"https://consul.service.dc1.consul:8500/v1/health/service/NodeExporter?dc=dc1&stale=&wait=120000ms\": remote error: tls: bad certificate"
At the same time, when running the same request manually with curl, I am able to get the expected output:
curl -v -s -X GET "https://consul.service.dc1.consul:8500/v1/health/service/NodeExporter?dc=dc1&stale=&wait=120000ms" --key /secrets/consul.key --cert /secrets/consul.pem --cacert /secrets/cachain.pem
[{"Node":{"ID":"e53188ef-16ec-xxxx-xxxx-xxxx","Node":"dc1-runner-dev-1.test.io","Address":"30.10.xx.xx","Datacenter":"dc1","TaggedAddresses":{"lan":"30.10.xx.xx","lan_ipv4":"30.10.xx.xx","wan":"30.10.xx.xx","wan_ipv4":"30.10.xx.xx"},"Meta":{"consul-network-segment":""},"CreateIndex":71388,"ModifyIndex":71391},"Service":{"ID":"dc1-runner-dev-1.test.io-NodeExporter","Service":"NodeExporter","Tags":["service=node_exporter","environment=dev","datacenter=dc1"]...
To see more details from curl debug output, please see here:
LINK
Prometheus is running in Docker; the Prometheus version is 2.31.1.
I also execute the curl command from the same Docker container.
Here is the Prometheus config:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  - job_name: "node_exporter"
    consul_sd_configs:
      - server: "consul.service.dc1.consul:8500"
        scheme: "https"
        datacenter: "dc1"
        services: ["NodeExporter"]
        tls_config:
          ca_file: "/secrets/cachain.pem"
          cert_file: "/secrets/consul.pem"
          key_file: "/secrets/consul.key"
Prometheus is able to access the specified certificates.
I have also tried adding the insecure_skip_verify property to the Prometheus config file; I receive the same error.
The steps by which the certificates are created:
I create an offline self-signed root CA using Ansible modules from the community.crypto collection.
I create a CSR and sign Intermediate CA1 with that root CA.
I upload Intermediate CA1 and the corresponding key into a PKI secrets engine in HashiCorp Vault.
After that, inside Vault PKI, I create a new CSR and use Intermediate CA1 to sign Intermediate CA2.
I create a PKI role.
The certificates used by Prometheus are leaf certificates of Intermediate CA2, issued against the mentioned PKI role.
See the output of the openssl x509 -text command for the used certificates here.
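One quick check worth running (a sketch, using the paths from the config above; it is not part of the original setup) is to confirm that the client certificate actually verifies against the CA bundle handed to Prometheus:

# Should print "/secrets/consul.pem: OK" if cachain.pem contains the full chain
openssl verify -CAfile /secrets/cachain.pem /secrets/consul.pem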
Any ideas what I am missing here?

Troubleshooting - Setting up private GitLab server and connecting Gitlab Runners

I have a GitLab instance running in Docker on a dedicated private server (accessible only from within our VPC). We want to start doing CI using GitLab Runners, so I spun up another server to host our runners.
Now that gitlab-runner has been configured, I try to register a runner with the private IP of the GitLab server and the registration token:
Enter the GitLab instance URL (for example, https://gitlab.com/):
$GITLAB_PRIVATE_IP
Enter the registration token:
$TOKEN
Enter a description for the runner:
[BEG-GITLAB-RUNNER]: default
Enter tags for the runner (comma-separated):
default
ERROR: Registering runner... failed runner=m616FJy- status=couldn't execute POST against https://$GITLAB_PRIVATE_IP/api/v4/runners: Post "https://$GITLAB_PRIVATE_IP/api/v4/runners": x509: certificate has expired or is not yet valid: current time 2022-02-06T20:00:35Z is after 2021-12-24T04:54:28Z
It looks like our certs have expired, and to verify:
echo | openssl s_client -showcerts -connect $GITLAB_PRIVATE_IP:443 2>&1 | openssl x509 -noout -dates
notBefore=Nov 24 04:54:28 2021 GMT
notAfter=Dec 24 04:54:28 2021 GMT
GitLab comes with Let's Encrypt, so I decided to enable Let's Encrypt and cert auto-renewal in the GitLab configuration; however, when I try to reconfigure I get the error message:
There was an error running gitlab-ctl reconfigure:
letsencrypt_certificate[$GITLAB_PRIVATE_IP] (letsencrypt::http_authorization line 6) had an error: Acme::Client::Error::RejectedIdentifier: acme_certificate[staging] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/letsencrypt/resources/certificate.rb line 41) had an error: Acme::Client::Error::RejectedIdentifier: Error creating new order :: Cannot issue for "$GITLAB_PRIVATE_IP": The ACME server can not issue a certificate for an IP address
So it looks like I can't use the Let's Encrypt option packaged with GitLab to enable the renewal of certs.
How can I create/renew SSL certs on a private Linux server without a domain?
If you've set up GitLab + Runners on private servers, what does your Rails configuration look like?
Is there a way to enable DNS on a private server for the sole purpose of a certificate authority granting certs?
I would suggest using a self-signed certificate. I have tested this before and it works fine, but it requires some work. I will try to summarize the steps needed (a rough sketch follows the list):
1- Generate a self-signed certificate with the domain you choose and make sure to keep it in /etc/gitlab-runner/certs/
2- Add the domain and the certificate paths in /etc/gitlab/gitlab.rb
3- Reconfigure GitLab
4- When connecting the runner, make sure to manually copy the certs to the runner server and activate them.
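A rough sketch of those steps, assuming gitlab.example.internal is the domain chosen for the instance; the domain, file names and exact gitlab.rb settings are illustrative, not taken from the original answer:

# 1. Generate a self-signed cert for the chosen domain (OpenSSL 1.1.1+ for -addext)
sudo mkdir -p /etc/gitlab/ssl
sudo openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout /etc/gitlab/ssl/gitlab.example.internal.key \
  -out /etc/gitlab/ssl/gitlab.example.internal.crt \
  -subj "/CN=gitlab.example.internal" \
  -addext "subjectAltName=DNS:gitlab.example.internal"
# 2. In /etc/gitlab/gitlab.rb, point external_url at https://gitlab.example.internal
#    and nginx['ssl_certificate'] / nginx['ssl_certificate_key'] at the files above
# 3. Reconfigure GitLab
sudo gitlab-ctl reconfigure
# 4. On the runner host, copy the cert so gitlab-runner trusts it during registration
sudo mkdir -p /etc/gitlab-runner/certs
sudo cp gitlab.example.internal.crt /etc/gitlab-runner/certs/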

Modify the SSL certificate for Chef Server

I am currently running a Chef Server.
There are 2 ways to access the server :
<HOSTNAME_OF_SERVER_OR_FQDN>
OR
<ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>
When I try to run knife ssl check, I get:
root@host:/opt/chef-server/embedded/jre# knife ssl check
Connecting to host <ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>:443
ERROR: The SSL certificate of <HOSTNAME_OF_SERVER_OR_FQDN> could not be verified
Certificate issuer data: /C=US/ST=MA/L=Boston/O=YouCorp/OU=Operations/CN=<HOSTNAME_OF_SERVER_OR_FQDN>.com/emailAddress=you@example.com
Configuration Info:
OpenSSL Configuration:
* Version: OpenSSL 1.0.1p 9 Jul 2015
* Certificate file: /opt/chefdk/embedded/ssl/cert.pem
* Certificate directory: /opt/chefdk/embedded/ssl/certs
Chef SSL Configuration:
* ssl_ca_path: nil
* ssl_ca_file: nil
* trusted_certs_dir: "/root/.chef/trusted_certs"
I want the knife ssl check command to be successful; basically, I want it to be able to successfully connect using <ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>.
How can I add the CNAME to the current certificate, which I believe is /opt/chefdk/embedded/ssl/cert.pem?
One strange aspect of the certificate file is that when I try to read it and grep for the hostnames or CNAMEs, I do not find any:
# /opt/chef-server/embedded/jre/bin/keytool -printcert -file /opt/chefdk/embedded/ssl/cert.pem | grep <ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>
No result
# /opt/chef-server/embedded/jre/bin/keytool -printcert -file /opt/chefdk/embedded/ssl/cert.pem | grep <HOSTNAME_OF_SERVER_OR_FQDN>
No result
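A sketch of a complementary check against the live server rather than the local bundle, reusing the openssl s_client approach from the first answer (the hostname is the same placeholder used throughout the question), to see which names the served certificate actually carries:

echo | openssl s_client -connect <HOSTNAME_OF_SERVER_OR_FQDN>:443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"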
This is how I did it in the past.
The Chef server can be configured to use SSL certificates by adding the following settings to the server configuration file.
For example:
nginx['ssl_certificate'] = "/etc/pki/tls/certs/your-host.crt"
nginx['ssl_certificate_key'] = "/etc/pki/tls/private/your-host.key"
Save the file, and then run the following command:
$ sudo chef-server-ctl reconfigure
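After the reconfigure, a sketch of how the workstation side can pick up and verify the new certificate (assuming the certificate referenced above includes the CNAME, and using knife's built-in ssl subcommands):

# Pull the server's new cert into the trusted_certs directory, then re-run the check
knife ssl fetch https://<ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>
knife ssl check https://<ACTUAL_URL_THAT_SHOULD_BE_OR_CNAME>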

Can an insecure docker registry be given a CA signed certificate so that clients automatically trust it?

Currently, I have set up a registry in the following manner:
docker run -d \
-p 10.0.1.4:443:5000 \
--name registry \
-v `pwd`/certs/:/certs \
-v `pwd`/registry:/var/lib/registry \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/certificate.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/private.key \
registry:latest
Using Docker version 17.06.2-ce, build cec0b72
I have obtained my certificate.crt, private.key, and ca_bundle.crt from Let's Encrypt, and I have been able to establish HTTPS connections when using these certs on an nginx server, without having to explicitly trust the certificates on the client machine/browser.
Is it possible to set up a user experience with a Docker registry similar to that of a CA-certified website being accessed via HTTPS, where the browser/machine trusts the root CA and those along the chain, including my certificates?
Note:
I can of course specify the certificate on the client machines as described in this tutorial: https://docs.docker.com/registry/insecure/#use-self-signed-certificates. However, this is not an adequate solution for my needs.
Output of curl -v https://docks.behar.cloud/v2/:
* Trying 10.0.1.4...
* TCP_NODELAY set
* Connected to docks.behar.cloud (10.0.1.4) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: docks.behar.cloud
* Server certificate: Let's Encrypt Authority X3
* Server certificate: DST Root CA X3
> GET /v2/ HTTP/1.1
> Host: docks.behar.cloud
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 2
< Content-Type: application/json; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0
< X-Content-Type-Options: nosniff
< Date: Sun, 10 Sep 2017 23:05:01 GMT
<
* Connection #0 to host docks.behar.cloud left intact
Short answer: Yes.
My issue was caused by my OS not having built-in trust of the root certificates by which my SSL certificate was signed. This is likely due to the age of my OS. See the answer from Matt for more information.
Docker will normally use the OS-provided CA bundle, so certificates signed by trusted roots should work without extra config.
Let's Encrypt certificates are cross-signed by an IdenTrust root certificate (DST Root CA X3), so most CA bundles should already trust them. The Let's Encrypt root cert (ISRG Root X1) is also distributed, but will not be as widespread due to it being more recent.
Docker 1.13+ will use the host system's CA bundle to verify certificates. Prior to 1.13 this may not happen if you have installed a custom root cert. So if curl works without any TLS warning, then docker commands should also work the same.
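For instance (using the registry URL from the question), a quick check along those lines might be:

# If this succeeds without -k/--insecure, the system CA bundle trusts the chain,
# and docker login/pull against the same host should verify as well.
curl -fsS https://docks.behar.cloud/v2/ && echo "chain trusted by the system CA bundle"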
To have DTR recognize the certificates, you need to edit the configuration file so that you specify your certs correctly. DTR accepts and has special parameters for Let's Encrypt certs, and also has specific requirements for them. You will need to make a configuration file and mount the appropriate directories, and then there should be no further issues with insecure-registry errors and unrecognized certs.
...
http:
  addr: localhost:5000
  prefix: /my/nested/registry/
  host: https://myregistryaddress.org:5000
  secret: asecretforlocaldevelopment
  relativeurls: false
  tls:
    certificate: /path/to/x509/public
    key: /path/to/x509/private
    clientcas:
      - /path/to/ca.pem
      - /path/to/another/ca.pem
    letsencrypt:
      cachefile: /path/to/cache-file
      email: emailused@letsencrypt.com
...
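A sketch of how such a config file might be mounted into the stock open-source registry image (assuming the image reads its config from /etc/docker/registry/config.yml; the config.yml file name is illustrative):

docker run -d \
  -p 443:5000 \
  --name registry \
  -v `pwd`/config.yml:/etc/docker/registry/config.yml \
  registry:latest
# Note: the cachefile path in the letsencrypt section should point somewhere
# persistent, e.g. a mounted volume, so renewals survive container restarts.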

Docker private registry | TLS certificate issue

I've tried to follow the following tutorial to set up our own private registry (v2) on an AWS CentOS machine.
I've self-signed a TLS certificate and placed it in /etc/docker/certs.d/MACHINE_STATIC_IP:5000/
When trying to log in to the registry (docker login MACHINE_IP:5000) or push a tagged repository (MACHINE_IP:5000/ubuntu:latest), I get the following error:
Error response from daemon: Get https://MACHINE_IP:5000/v1/users/: x509: cannot validate certificate for MACHINE_IP because it doesn't contain any IP SANs
I tried to search for an answer for 2 days; however, I couldn't find any.
I've set the certificate CN (common name) to MACHINE_STATIC_IP:5000.
When using a self-signed TLS certificate, the Docker daemon requires you to add the certificate to its known certificates.
Use the keytool command to grab the certificate:
keytool -printcert -sslserver ${NEXUS_DOMAIN}:${SSL_PORT} -rfc > ${NEXUS_DOMAIN}.crt
Then copy it to your client machine's SSL certificates directory (in my case, Ubuntu):
sudo cp ${NEXUS_DOMAIN}.crt /usr/local/share/ca-certificates/${NEXUS_DOMAIN}.crt && sudo update-ca-certificates
Now restart the Docker daemon and you're good to go:
sudo systemctl restart docker
You can also use the following command to temporarily trust the certificate without adding it to your system certificates.
docker --tlscert <the downloaded tls cert> pull <whatever you want to pull>
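Note that the daemon error in the question is specifically about the certificate not containing an IP SAN; when the registry is addressed by IP rather than a hostname, the self-signed certificate needs that IP in subjectAltName (and the port does not belong in the CN). A sketch of regenerating it that way, with MACHINE_STATIC_IP as the same placeholder used in the question:

# Regenerate the self-signed cert with the machine IP as a SAN
# (replace MACHINE_STATIC_IP with the real address; requires OpenSSL 1.1.1+ for -addext)
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout private.key -out certificate.crt \
  -subj "/CN=MACHINE_STATIC_IP" \
  -addext "subjectAltName=IP:MACHINE_STATIC_IP"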