Minio does not seem to recognize TLS/https certificates - ssl

I've been searching for hours now to make Minio work with self-signed TLS certs using Docker.
According to the documentation, the certs just need to be placed at /root/.minio/certs/CAs or /root/.minio/ inside the Minio container.
I tried both with no success.
This is how I start minio (using saltstack):
minio:
  docker_container.running:
    - order: 10
    - hostname: backup
    - container_name: backup
    - binds:
      - /root/backup:/data
      - /srv/salt/minio/certs:/root/.minio
    - image: minio/minio:latest
    - port_bindings:
      - 10.10.10.1:9000:443
    - environment:
      - MINIO_BROWSER=off
      - MINIO_ACCESS_KEY=BlaBlaBla
      - MINIO_SECRET_KEY=BlaBlaBla
    - privileged: false
    - entrypoint: sh
    - command: -c 'mkdir -p /data/backup && /usr/bin/minio server --address ":443" /data'
    - restart_policy: always
If I do "docker logs minio" I just get to see http instead of https:
Endpoint: http://172.17.0.3:443 http://127.0.0.1:443
Both the public and the private key are mounted at the correct location inside the container, but they don't seem to be recognized ...
Can somebody help? Do I need to add some extra parameter here?
Thanks in advance

Per the docs (https://docs.minio.io/docs/how-to-secure-access-to-minio-server-with-tls.html), your keys must be named public.crt and private.key, respectively, and mounted at ~/.minio/certs (e.g. /root/.minio/certs). The CAs directory is for public certs of other servers you want to trust, for example in a distributed setup.
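For the Salt state above, that means mounting the directory that holds public.crt and private.key onto /root/.minio/certs rather than /root/.minio. A minimal sketch, assuming the certs sit in /srv/salt/minio/certs on the host:
    - binds:
      - /root/backup:/data
      - /srv/salt/minio/certs:/root/.minio/certs
With the files named and placed like that, the startup log should show an https:// endpoint instead of http://.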

You don't need to set up certs in Minio itself. Use an nginx server and reverse-proxy the Minio port (e.g. 127.0.0.1:9000), then use the cert files in the nginx server block. That solves the whole problem.
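If you go the reverse-proxy route, a minimal nginx server block could look like the sketch below; the server name, certificate paths and upstream address are illustrative, not taken from the question:
server {
    listen 443 ssl;
    server_name backup.example.com;

    ssl_certificate     /etc/nginx/certs/public.crt;
    ssl_certificate_key /etc/nginx/certs/private.key;

    # allow large uploads to the object store
    client_max_body_size 0;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}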

Related

How to monitor ssl certificates with Datadog?

I have an nginx pod which redirects traffic into Kubernetes services and stores the related certificates inside its volume. I want to monitor these certificates, mainly their expiration.
I found out that there is a TLS integration in Datadog (we use Datadog in our cluster): https://docs.datadoghq.com/integrations/tls/?tab=host.
They provide a sample file, which can be found here: https://github.com/DataDog/integrations-core/blob/master/tls/datadog_checks/tls/data/conf.yaml.example
To be honest, I am completely lost and do not understand the comments in the sample file, such as:
## @param server - string - required
## The hostname or IP address with which to connect.
I want to monitor certificates that are stored in the pod. Does that mean this value should be localhost, or do I need to somehow iterate over all the stored certificates using this value (such as the server_names in nginx.conf)?
If anyone could help me set up a sample configuration, I would be really grateful. If there are any more details I should provide, that is not a problem at all.
TLS Setup on Host
You can use a host-type instance to track all your certificate expiration dates.
1- Install the TLS integration from the Datadog UI
2- Create an instance and install the Datadog agent on it
3- Create /etc/datadog-agent/conf.d/tls.d/conf.yaml
4- Edit the following template for your needs
init_config:
instances:
  ## @param server - string - required
  ## The hostname or IP address with which to connect.
  #
  - server: https://yourDNS1.com/
    tags:
      - dns:yourDNS.com
  - server: https://yourDNS2.com/
    tags:
      - dns:yourDNS2
  - server: yourDNS3.com
    tags:
      - dns:yourDNS3
  - server: https://yourDNS4.com/
    tags:
      - dns:yourDNS4.com
  - server: https://yourDNS5.com/
    tags:
      - dns:yourDNS5.com
  - server: https://yourDNS6.com/
    tags:
      - dns:yourDNS6.com
5- Restart datadog-agent
systemctl restart datadog-agent
6- Check the status to see whether the tls check is running successfully
watch systemctl status datadog-agent
7- Create a TLS Overview dashboard
8- Create a monitor to get alerts on expiration dates
TLS Setup on Kubernetes
1- Create a ConfigMap and attach it as a volume (a sketch follows below)
https://docs.datadoghq.com/agent/kubernetes/integrations/?tab=configmap
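A minimal sketch of that approach, assuming the usual agent layout where check configs live under /etc/datadog-agent/conf.d/tls.d/ inside the agent container (names and servers are illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: datadog-tls-check
data:
  conf.yaml: |
    init_config:
    instances:
      - server: https://yourDNS1.com/
        tags:
          - dns:yourDNS1.com
The ConfigMap is then mounted into the agent pod (volumeMounts under the agent container, volumes under the pod spec), e.g.:
    volumeMounts:
      - name: tls-check
        mountPath: /etc/datadog-agent/conf.d/tls.d
    volumes:
      - name: tls-check
        configMap:
          name: datadog-tls-check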

Traefik: "No ACME certificate generation required for domains" in the logs while using the default cert

I'm struggling with Let's Encrypt setup for my Docker Swarm.
Traefik is started this way in my stack's compose file:
image: traefik:v2.2
ports:
  - 80:80
  - 443:443
  - 8080:8080
command:
  - --api
  - --log.level=DEBUG
  - --providers.docker=true
  - --providers.docker.endpoint=unix:///var/run/docker.sock
  - --providers.docker.swarmMode=true
  - --providers.docker.exposedbydefault=false
  - --providers.docker.network=traefik-public
  - --entrypoints.http.address=:80
  - --entrypoints.https.address=:443
  - --certificatesResolvers.certbot=true
  - --certificatesResolvers.certbot.acme.httpChallenge=true
  - --certificatesResolvers.certbot.acme.httpChallenge.entrypoint=http
  - --certificatesResolvers.certbot.acme.email=${EMAIL?Variable EMAIL not set}
  - --certificatesResolvers.certbot.acme.storage=/certs/acme-v2.json
  - --certificatesResolvers.certbot.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
...networks, volumes...
deploy:
  mode: replicated
  replicas: 1 # to avoid concurrency issues
  ...
  labels:
    - "traefik.docker.network=traefik-public"
    - "traefik.enable=true"
    - "traefik.http.services.traefik.loadbalancer.server.port=8080"
    - "traefik.http.routers.traefik.rule=Host(`traefik.my-domain.com`)"
    - "traefik.http.routers.traefik.entrypoints=http,https"
    - "traefik.http.routers.traefik.tls.certresolver=certbot"
    - "traefik.http.routers.traefik.middlewares=traefik-auth"
    - "traefik.http.middlewares.traefik-auth.basicauth.users=admin:${HASHED_PASSWORD?Variable HASHED_PASSWORD not set}"
And I cannot get more than
level=debug msg="No ACME certificate generation required for domains [\"traefik.my-domain.com\"]." providerName=certbot.acme routerName=traefik#docker rule="Host(`traefik.my-domain.com`)"
I wonder why no ACME certificate generation is required while Firefox (and Chromium, for that matter) complains about getting the "TRAEFIK DEFAULT CERT".
I also tried:
- without the Let's Encrypt staging server
- with a DNS challenge, as I hope to make it work with the wildcard *.my-domain.com for dev purposes (which works manually with certbot)
- setting a traefik.my-domain.com DNS zone (to remove the wildcard case from the problem)
- changing the "replicated" deploy mode to global, as suggested in Traefik + Consul not generating SSL certificates in replicated mode, using TRAEFIK DEFAULT CERT
I'm presently looking for a way to handle certificate renewal with Certbot directly on my servers...
I've had the same issue, and it helped me to change the volume where acme.json is stored. I think it's because when Traefik sees that acme.json is not empty, it simply doesn't ask for a new cert.
So if you're using something like:
command:
  ...
  - --certificatesResolvers.certbot.acme.storage=/certs/acme-v2.json
volumes:
  - "certs:/certs"
Try using a different volume:
command:
  ...
  - --certificatesResolvers.certbot.acme.storage=/letsencrypt/acme-v2.json
volumes:
  - "letsencrypt:/letsencrypt"
For me it was the default (custom) cert I had set, which was valid for the full domain, so Traefik didn't request a specific ACME/Let's Encrypt one because it thought it already had one.
After disabling the custom default cert it worked instantly.

Setting up Traefik to require client side certificates with Let's Encrypt using CLI only

I am trying to set up Traefik to require SSL client certificates, much like how I used to do it with Apache, but I can't seem to get it working correctly. I'm using Docker as well; here are the command parameters:
command:
  - --defaultEntryPoints=http,https
  - --insecureSkipVerify
  - "--entryPoints=Name:http Address::80 Compress:true Redirect.entryPoint:https"
  # This one works with no authentication
  - "--entryPoints=Name:https Address::443 Compress:true TLS"
  # These don't seem to do anything
  - "--entryPoints=Name:https Address::443 Compress:true TLS CA.Optional:false CA:/run/secrets/CA"
  - --ping
  - --docker
  - --docker.endpoint=tcp://daemon:2375
  - --docker.exposedByDefault=false
  - --docker.swarmMode
  - --docker.watch
  - --acme
  - --acme.email=REDACTED#trajano.net
  - --acme.onhostrule
  - --acme.entrypoint=https
  - --acme.httpchallenge
  - --acme.httpchallenge.entrypoint=http
  - --zookeeper.endpoint=zookeeper:2181
  - --zookeeper.prefix=traefik
  - --acme.storage=traefik/acme/acme.json
Actually it was CA.Optional and CA after all. I was using Firefox, which auto-selected the certificate, and when I was using Chrome it was serving cached content, so when I cleared the browser cache things started working.
Note that this approach only validates that the client cert was signed by the CA; it does not perform any extra checks, such as which subject is being used. That's a limitation of Traefik 1.7 at the moment, from what I understand.
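A quick way to check the behaviour from the command line (a sketch; the host and file names are illustrative):
# should succeed: a client cert signed by the CA configured on the entrypoint
curl --cert client.crt --key client.key https://whoami.example.com/
# should fail during the TLS handshake, since no client certificate is presented
curl https://whoami.example.com/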

kubectl unable to connect to server: x509: certificate signed by unknown authority

I'm getting an error when running kubectl on one machine (Windows).
The k8s cluster is running on CentOS 7, Kubernetes 1.7 (master, worker).
Here's my .kube\config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.10.12.7:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:localhost.localdomain
  name: system:node:localhost.localdomain@kubernetes
current-context: system:node:localhost.localdomain@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:localhost.localdomain
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
The cluster is built using kubeadm with the default certificates in the pki directory. The error is:
kubectl unable to connect to server: x509: certificate signed by unknown authority
One more solution in case it helps anyone:
My scenario:
- using Windows 10
- Kubernetes installed via the Docker Desktop UI 2.1.0.1
- the installer created the config file at ~/.kube/config
- the value for server in ~/.kube/config is https://kubernetes.docker.internal:6443
- using a proxy
Issue: kubectl commands to this endpoint were going through the proxy. I figured it out after running kubectl --insecure-skip-tls-verify cluster-info dump, which displayed the proxy's HTML error page.
Fix: just make sure this URL doesn't go through the proxy; in my case, in bash, I used export no_proxy=$no_proxy,*.docker.internal
So kubectl doesn't trust the cluster, because for whatever reason the configuration has been messed up (mine included). To fix this, you can use openssl to extract the certificate from the cluster
openssl.exe s_client -showcerts -connect IP:PORT
IP:PORT should be whatever is written after server: in your config.
Copy and paste everything from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- (those lines included) into a new text file, say myCert.crt. If there are multiple entries, copy all of them.
Now go to .kube\config and instead of
certificate-authority-data: <wrongEncodedPublicKey>
put
certificate-authority: myCert.crt
(it assumes you put myCert.crt in the same folder as the config file)
If you made the cert file correctly, kubectl will trust the cluster (I tried renaming the file and it no longer trusted the cluster afterwards).
I wish I knew what encoding certificate-authority-data uses, but after a few hours of googling I resorted to this solution, and looking back I think it's more elegant anyway.
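For what it's worth, certificate-authority-data is simply the base64-encoded contents of the PEM certificate file, so the reverse of the file-based approach also works; a sketch for a Unix-like shell:
# encode the extracted cert and paste the output back into certificate-authority-data
base64 -w0 myCert.crt    # -w0 disables line wrapping (GNU base64; omit it on macOS)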
Run:
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project devops1-218400
here devops1-218400 is my project name. Replace it with your project name.
I got the same error while running $ kubectl get nodes as the root user. I fixed it by pointing the KUBECONFIG environment variable at kubelet.conf:
$ export KUBECONFIG=/etc/kubernetes/kubelet.conf
$ kubectl get nodes
In my case, it simply worked by adding --insecure-skip-tls-verify at the end of kubectl commands, as a one-off.
Sorry I wasn't able to provide this earlier, I just realized the cause:
So on the master node we're running a kubectl proxy
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
I stopped this and voila the error was gone.
I'm now able to do
kubectl get nodes
NAME STATUS AGE VERSION
centos-k8s2 Ready 3d v1.7.5
localhost.localdomain Ready 3d v1.7.5
I hope this helps those who stumbled upon this scenario.
In my case I resolved this issue by copying the kubelet configuration to my home kube config:
cat /etc/kubernetes/kubelet.conf > ~/.kube/config
This was happening because my company's network does not allow self-signed certificates through. Try switching to a different network.
For those of you who are late to the thread like I was, and for whom none of these answers worked, I may have the solution:
When I copied my .kube/config file over to my Windows 10 machine (with kubectl installed), I didn't change the IP address from 127.0.0.1:6443 to the master's IP address, which was 192.168.x.x (the Windows 10 machine was connecting to a Raspberry Pi cluster on the same network). Make sure you do this, and it may fix your problem like it did mine.
On GCP
Check your gcloud version:
gcloud version
Then run:
gcloud container clusters get-credentials 'clusterName' --zone=us-'zoneName'
Get clusterName and zoneName from your console here: https://console.cloud.google.com/kubernetes/list?
ref: .x509 #market place deployments on GCP #Kubernetes
I got this because I was not connected to the office's VPN
In case of this error, you should export the kubecfg, which contains the certs: kops export kubecfg "your cluster-name" and export KOPS_STATE_STORE=s3://"paste your S3 store".
Now you should be able to access and see the resources of your cluster.
This is an old question, but in case it helps someone else, here is another possible reason.
Let's assume you deployed Kubernetes as user x. If the .kube dir is under /home/x and you connect to the node as root or as user y, you will get this error.
You need to switch to that user's profile so Kubernetes can load the configuration from the .kube dir.
Update: when copying the ~/.kube/config file from a master node to a local PC, make sure to replace the hostname of the load balancer with a valid IP. In my case the problem was related to the DNS lookup.
Hope this helps.

How to configure Let's encrypt certificates for nginx inside a docker image?

I know how to configure Let's Encrypt for nginx, but I'm having a hard time configuring Let's Encrypt with nginx inside a Docker image. The Let's Encrypt certificates are symlinked in the /etc/letsencrypt/live folder, and I don't have permission to view the real certificate files inside /etc/letsencrypt/archive.
Can someone suggest a way out?
I'll add my mistake; maybe someone will find it useful.
I mounted the /live directory of letsencrypt and not the whole letsencrypt directory tree.
The problem with this:
The /live folder just holds symlinks to the /archive folder, which is not mounted into the Docker container with my approach.
(In fact I even mounted a /certs folder that symlinked to the live folder, because I had that certs folder in the development environment; same problem, the real (symlinked) files were not mounted.)
All problems went away when I mounted /etc/letsencrypt instead of /live.
A part of my docker-compose.yml:
services:
  ngx:
    image: nginx
    container_name: ngx
    ports:
      - 80:80
      - 443:443
    links:
      - php-fpm
    volumes:
      - ../../com/website:/var/www/html/website
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./nginx_conf.d/:/etc/nginx/conf.d/
      - ./nginx_logs/:/var/log/nginx/
      - ../whereever/you/haveit/etc/letsencrypt/:/etc/letsencrypt
The last line in that config is the important one. I changed it from
      - ./certs/:/etc/nginx/certs/
where ./certs was a symlink to /etc/letsencrypt/live in my case. That cannot work, as I described above.
If anyone else is having this problem: I solved it by mounting the folders into the Docker container.
I mounted both the /etc/letsencrypt and /etc/ssl folders into Docker.
Docker has the -v flag to mount volumes. Don't forget to open port 443 for the container.
Depending on how you mount it, it's possible to enable https in the Docker container without changing the nginx paths.
docker run -d -p 80:80 -p 443:443 -v /etc/letsencrypt/:/etc/letsencrypt/ -v /etc/ssl/:/etc/ssl/ <image name>
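With the host's /etc/letsencrypt mounted at the same path inside the container, the nginx config can point straight at the live symlinks; a sketch, with example.com standing in for your domain:
ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;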
If you are using nginx, Docker and Let's Encrypt, you might like the following GitHub project: https-portal.
It automates a lot of manual actions, and makes it easy to manage your configurations using docker-compose. From the README:
Features
Test Locally
Redirections
Automatic Container Discovery
Hybrid Setup with Non-Dockerized Apps
Multiple Domains
Serving Static Sites
Share Certificates with Other Apps
HTTP Basic Auth
How it works
obtains an SSL certificate for each of your subdomains from Let's Encrypt.
configures Nginx to use HTTPS (and force HTTPS by redirecting HTTP to HTTPS)
sets up a cron job that checks your certificates every week and renews them if they expire within 30 days.
For some background: the project was also discussed on Hacker News: HTTPS-Portal: Automated HTTPS server powered by Nginx, Let's Encrypt and Docker
(Disclaimer: I have no affiliation to the project, just a user)
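A minimal docker-compose sketch of how such a setup typically looks, based on the project's README (the image tag, domain and upstream service are illustrative, so check the README for the exact options):
services:
  https-portal:
    image: steveltn/https-portal:1
    ports:
      - 80:80
      - 443:443
    environment:
      DOMAINS: 'example.com -> http://web:80'
      STAGE: 'production'   # use 'local' or 'staging' while testing
  web:
    image: nginx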