How to remove a certificate from Traefik ACME storage when saved to Consul KV

I have Traefik running with a Consul KV store. How do I remove a record from the acme certificate storage in Consul, or force a renewal for just one domain/frontend?
Problem:
Somehow one of the frontend domains has been saved with the wrong certificate. It's referencing a certificate from a different domain (which is also a frontend in Traefik).
I was able to inspect the ACME JSON by getting the Consul value for the traefik/acme/account/object key, then decoding and gunzipping it. This is the record from the Certs array:
{
  "Domains": {
    "Main": "my.domain1.com",
    "SANs": null
  },
  "Certificate": {
    "Domain": "my.domain2.com",
    "CertURL": "https://acme-v02.api.letsencrypt.org/acme/cert/idfordomain2",
    "CertStableURL": "https://acme-v02.api.letsencrypt.org/acme/cert/idfordomain2",
    "PrivateKey": "...",
    "Certificate": "..."
  }
}
As you can see, somehow the cert for my.domain2.com has been saved against the record for my.domain1.com, which results in an invalid certificate warning in the browser. I want to clear out the whole record so Traefik will fetch a fresh cert. I'm using Consul and the value is stored as gzipped binary, so I can't just edit the JSON.

Here is how I solved this issue:
Your traefik network should be marked as attachable: true
Run on host:
docker run -it --rm --name consul-client --network traefik_traefik consul sh
Then run in created container:
export CONSUL_HTTP_ADDR=consul:8500
# get value from consul and store it to acme.json
consul kv get traefik/acme/account/object | gzip -dc > acme.json
# remove invalid domain and store it to acme-fixed.json
jq 'del(.DomainsCertificate.Certs[] | select(.Domains.Main=="yourdomain.com"))' acme.json > acme-fixed.json
# gzip it
gzip -c acme-fixed.json > acme-fixed.json.gz
# upload fixed and gzipped json back to consul
consul kv put traefik/acme/account/object @acme-fixed.json.gz

The simplest way is to use the consul CLI utility. The same binary is used to run the server, and ideally you should use the same version as your servers. Make sure you export the environment variables: CONSUL_HTTP_ADDR points to the Consul server (default is http://127.0.0.1:8500), and CONSUL_HTTP_TOKEN is the ACL token, if ACLs are enabled on your server, as they should be in production environments.
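For example (placeholder values; adjust to your environment):
export CONSUL_HTTP_ADDR=http://127.0.0.1:8500
export CONSUL_HTTP_TOKEN=<your-acl-token>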
Then you just run the following command:
consul kv put traefik/acme/account/object @traefik.json
Where traefik.json is the JSON file containing the updated values you wish to store in the Consul KV store.
Or you can use the HTTP API (Consul Create/Update Key):
curl -X PUT --data @traefik.json http://<your-server-url>:<port>/v1/kv/traefik/acme/account/object
If your server is ACL-enabled, you need to add the following header to the curl request, with the <your-acl-token> that was issued to you: -H "X-Consul-Token: <your-acl-token>"
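Putting that together, a sketch of the full request (host and port are placeholders; --data-binary is preferable when the stored value is gzipped, as Traefik's is, so the bytes are not mangled in transit):
curl -X PUT --data-binary @traefik.json -H "X-Consul-Token: <your-acl-token>" http://127.0.0.1:8500/v1/kv/traefik/acme/account/object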

Related

Why does this pod get a 403 Forbidden when calling the Kubernetes API despite a RoleBinding (same with ClusterRoleBinding)?

I created a pod (an Alpine "BusyBox" to run commands in) which then gets the default service account associated with it. I then created a RoleBinding (and later ClusterRoleBinding when the first didn't work) but it still won't let me call the K8s API.
What am I doing wrong?
First I created a container to run commands in:
# Create a namespace to install our pod
kubectl create namespace one
# Now create a pod that we can run stuff in
kubectl run runner -n one --image alpine -- sleep 3600
Then I created a role binding:
# My understanding of this command is that I'm doing the following:
# 1. Creating a binding for the "default" service account in the "one" namespace
# 2. Tying that to the cluster role for viewing things
# 3. Making this binding work in the "default" namespace, so that it can call
# the API there FROM its own namespace (one)
kubectl create rolebinding default-view --clusterrole=view --serviceaccount=one:default --namespace=default
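As an aside, you can test what a binding actually grants without entering the pod, by impersonating the service account:
# Should print "yes" if the binding allows the request
kubectl auth can-i list services --as=system:serviceaccount:one:default -n default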
Then I connected to the pod's terminal and tried to call the API to list all services in its own namespace:
kubectl exec --stdin --tty runner -n one -- /bin/ash
# Now I run all these inside that terminal:
# Point to the internal API server hostname
APISERVER=https://kubernetes.default.svc
# Path to ServiceAccount token
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICEACCOUNT}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt
# The wget installed with Alpine cannot do SSL
apk --no-cache add ca-certificates
apk add wget
# Namespace to query (the RoleBinding above was created in the "default" namespace)
NAMESPACE=default
wget --ca-certificate=${CACERT} --header="Authorization: Bearer ${TOKEN}" ${APISERVER}/api/v1/namespaces/${NAMESPACE}/services
The above gives the error:
--2021-04-20 01:04:54-- https://kubernetes.default.svc/api/v1/namespaces/default/services/
Resolving kubernetes.default.svc (kubernetes.default.svc)... 10.43.0.1
Connecting to kubernetes.default.svc (kubernetes.default.svc)|10.43.0.1|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2021-04-20 01:04:54 ERROR 403: Forbidden.
But that should be allowed! I get the same error when using a cluster role binding.
USING:
k3d version v4.4.1
k3s version v1.20.5-k3s1 (default)
Calico
A pod runs under exactly one ServiceAccount, and once you've assigned an account to the pod, the default account no longer applies. I was binding the role to the default account, but passing the token of another account I'd created for the pod.
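A minimal sketch of the fix (the account name api-reader is hypothetical):
# Create a dedicated service account and bind the view role to THAT account
kubectl create serviceaccount api-reader -n one
kubectl create rolebinding api-reader-view --clusterrole=view --serviceaccount=one:api-reader --namespace=default
# Run the pod under the same account so the mounted token matches the binding
kubectl run runner -n one --image=alpine --overrides='{"spec":{"serviceAccountName":"api-reader"}}' -- sleep 3600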

How can I setup kubeapi server to allow kubectl from outside the cluster

I have a single master, multinode kubernetes going. It works great. However I want to allow kubectl commands to be run from outside the master server. How do I run kubectl get node from my laptop for example?
If I install kubectl on my laptop I get the following error:
error: client-key-data or client-key must be specified for kubernetes-admin to use the clientCert authentication method
How do I go about this? I have read through the Kubernetes authorisation documentation, but I must say it's a bit Greek to me. I am running version 1.10.2.
Thank you.
To extend @sfgroups' answer:
Configurations of all the Kubernetes clusters you are managing are stored in the $HOME/.kube/config file. If you have that file on the master node, the easy way is to copy it to $HOME/.kube/config on your local machine.
You can keep it elsewhere and specify the location with the KUBECONFIG environment variable:
export KUBECONFIG=/etc/kubernetes/config
or use the --kubeconfig command-line parameter instead.
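For example, assuming SSH access to the master and that its config lives at /root/.kube/config:
scp root@master:/root/.kube/config ~/.kube/config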
Cloud providers often let you download the config to your local machine from the web interface or with their management CLI.
For GCP:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
For Azure:
az login -u yourazureaccount -p yourpassword
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
If the cluster was created using the Kops utility, you can get the config file with:
kops export kubeconfig ${CLUSTER_NAME}
From your master, copy the /root/.kube directory to C:\Users\<username>\.kube on your laptop.
kubectl will pick up the certificate from the config file automatically.
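With the config in place, the original goal should now work from the laptop:
kubectl get nodes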

How can I deploy a secure (HTTPS) Meteor app on Heroku?

I would like to deploy my Meteor app to Heroku and make it only accessible through HTTPS. Ideally, I want to do this as cheaply as possible.
Create the Certificate
Run these commands to get certbot-auto, which should work on most systems:
wget https://dl.eff.org/certbot-auto
chmod 755 certbot-auto
This command starts the process of getting your certificate. The -d flag lets you pass in the domain you would like to secure; without it, certbot will prompt you to enter the domain.
./certbot-auto certonly --manual -d app.yoursite.com
Then it will ask you the following. Do not hit enter.
Make sure your web server displays the following content at
http://app.yoursite.com/.well-known/acme-challenge/SOME-LENGTHY-KEY before continuing:
SOME-LONGER-KEY
Use Picker
I suggest this method because on renewal you will only need to update an environment variable. You can use public/ as described below, but that requires a rebuild of your entire app every time.
Run meteor add meteorhacks:picker
In a server-side file, add the following:
import { Picker } from 'meteor/meteorhacks:picker';

Picker.route('/.well-known/acme-challenge/:routeKey', (params, request, response) => {
  response.writeHead(200, { 'Content-Type': 'text/plain' });
  response.write(process.env.SSL_PAGE_KEY);
  response.end();
});
Then set an environment variable SSL_PAGE_KEY to SOME-LONGER-KEY with
heroku config:set SSL_PAGE_KEY=SOME-LONGER-KEY
Use public/
Create the directory path inside your public folder (create public/ if you don't have one):
mkdir -p public/.well-known/acme-challenge/
Then create the file SOME-LENGTHY-KEY and place SOME-LONGER-KEY inside it
echo SOME-LONGER-KEY > public/.well-known/acme-challenge/SOME-LENGTHY-KEY
Commit that change and push it to your Heroku app:
git add public/.well-known
git commit -m "Add ACME challenge file"
git push heroku master
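Before continuing, you can sanity-check that the challenge URL serves the expected content (the key names stand in for the ones certbot gave you):
curl http://app.yoursite.com/.well-known/acme-challenge/SOME-LENGTHY-KEY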
Now hit enter to continue the verification process. You should receive a message like this
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at
/etc/letsencrypt/live/app.yoursite.com/fullchain.pem. Your cert will
expire on 2016-04-11. To obtain a new version of the certificate in
the future, simply run Let's Encrypt again.
Upload the Certificate
To upload your certificates to Heroku, first enable the SSL Beta
heroku labs:enable http-sni -a your-app
heroku plugins:install heroku-certs
Then add your fullchain.pem and privkey.pem to Heroku.
sudo heroku _certs:add /etc/letsencrypt/live/app.yoursite.com/fullchain.pem /etc/letsencrypt/live/app.yoursite.com/privkey.pem
You can verify that the certificate was uploaded with
heroku _certs:info
Change your DNS Settings
Update your DNS so that app.yoursite.com points (via CNAME) to app.yoursite.com.herokudns.com
Verify SSL is working
To check that SSL is set up, run the following. -v gives you verbose output; -I fetches the headers only; -H passes a header with the request. The header we pass ensures that a cache is not used, so you get your new certificate and not an old one.
curl -vI https://app.yoursite.com -H "Cache-Control: no-cache"
Check that the output contains the following
* Server certificate:
* subject: C=US; ST=CA; L=SF; O=SFDC; OU=Heroku; CN=app.yoursite.com
If the subject line does not contain CN=app.yoursite.com, wait 5 to 10 minutes and try again. If it does, you're almost good to go.
Make Meteor Specific Changes
To finish up the process, you'll want to change your ROOT_URL environment variable to the new https version.
heroku config:set ROOT_URL=https://app.yoursite.com
Then you'll want to ensure that your users are always using SSL with the force-ssl package
meteor add force-ssl
Lastly, if you have any OAuth logins set up in your app (Facebook, Google, etc), you'll want to provide them with the new https version of your URL.
Renewal
Run certbot-auto again
./certbot-auto certonly --manual -d app.yoursite.com
It may prompt you for the same endpoint with the same content. If it does, just hit enter. If it does not, you will need to repeat the above steps.
It will then create new certificate files, which you will upload to Heroku with
heroku certs:update /etc/letsencrypt/live/app.yoursite.com/fullchain.pem /etc/letsencrypt/live/app.yoursite.com/privkey.pem
Then, to confirm, run the "Verify SSL is working" commands above.
Sources
https://certbot.eff.org/#ubuntutrusty-other
https://devcenter.heroku.com/articles/ssl-beta
https://themeteorchef.com/blog/securing-meteor-applications/

How to access Kubernetes UI via browser?

I have installed Kubernetes using contrib/ansible scripts.
When I run cluster-info:
[osboxes@kube-master-def ~]$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Elasticsearch is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubedash is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kubedash
Grafana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
The cluster is exposed on localhost on the insecure port, and on secure port 443 via SSL:
kube 18103 1 0 12:20 ? 00:02:57 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=https://10.57.50.161:443 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig --service-account-private-key-file=/etc/kubernetes/certs/server.key --root-ca-file=/etc/kubernetes/certs/ca.crt
kube 18217 1 0 12:20 ? 00:00:15 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=https://10.57.50.161:443 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
root 27094 1 0 12:21 ? 00:00:00 /bin/bash /usr/libexec/kubernetes/kube-addons.sh
kube 27300 1 1 12:21 ? 00:05:36 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://10.57.50.161:2379 --insecure-bind-address=127.0.0.1 --secure-port=443 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --tls-cert-file=/etc/kubernetes/certs/server.crt --tls-private-key-file=/etc/kubernetes/certs/server.key --client-ca-file=/etc/kubernetes/certs/ca.crt --token-auth-file=/etc/kubernetes/tokens/known_tokens.csv --service-account-key-file=/etc/kubernetes/certs/server.crt
I have copied the certificates from kube-master machine to my local machine, I have installed the ca root certificate. The chrome/safari browsers are accepting the ca root certificate.
When I try to access https://10.57.50.161/ui
I get 'Unauthorized'.
How can I access the Kubernetes UI?
You can use kubectl proxy.
Depending on whether you are using a config file, run
kubectl proxy
or
kubectl --kubeconfig=kubeconfig proxy
You should get a response similar to this:
Starting to serve on 127.0.0.1:8001
Now open your browser and navigate to
http://127.0.0.1:8001/ui/ (deprecated, see kubernetes/dashboard)
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
You need to make sure the ports match up.
This works for me to allow access from the network:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
Quick-n-dirty (and unsecure) way to access the Dashboard:
$ kubectl edit svc/kubernetes-dashboard --namespace=kube-system
This will load the Dashboard config (yaml) into an editor where you can edit it.
Change line type: ClusterIP to type: NodePort.
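Equivalently, without opening an editor (a one-liner sketch using kubectl patch):
kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'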
Get the TCP port:
$ kubectl get svc kubernetes-dashboard -o json --namespace=kube-system
The line with the TCP port will look like:
"nodePort": 31567
In newer releases of Kubernetes you can get the nodePort from get svc:
# This is Kubernetes 1.7:
donn@host37:~$ sudo kubectl get svc --namespace=kube-system
NAME                   CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   10.3.0.234   <nodes>       80:31567/TCP   2h
Do kubectl describe nodes to get a node IP address.
Browse to:
http://NODE_IP:31567
Good for testing. Not good for production due to lack of security.
Looking at your apiserver configuration, you will need to either present a bearer token (valid tokens will be listed in /etc/kubernetes/tokens/known_tokens.csv) or client certificate (signed by the CA cert in /etc/kubernetes/certs/ca.crt) to prove to the apiserver that you should be allowed to access the cluster.
https://github.com/kubernetes/kubernetes/issues/7307#issuecomment-96130676 describes how I was able to configure client certificates for a GKE cluster on my Mac.
To pass bearer tokens, you need to pass an HTTP header Authorization with a value Bearer ${KUBE_BEARER_TOKEN}. A sketch of how to do this with curl follows; in a browser, you will need to install an add-on/plugin to pass custom headers.
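For example (a sketch; assumes you've copied ca.crt locally and taken a token from /etc/kubernetes/tokens/known_tokens.csv):
curl --cacert ca.crt -H "Authorization: Bearer ${KUBE_BEARER_TOKEN}" https://10.57.50.161/ui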
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy &
Run the following command on your local laptop (or wherever you want to access the GUI):
ssh -L 8877:127.0.0.1:8001 -N -f -l root master_IPADDRESS
Get the secret token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
Open the dashboard
http://localhost:8877/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Perform role-binding if you get any errors.
You can use kubectl proxy --address=clusterIP --port 8001 --accept-hosts '.*'
The API server is already accessible on port 6443 on the node, but it does not authorize access to https://:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
I've generated client certificates signed by the Kubernetes CA cert, converted them to PKCS#12 and imported them into my browser... when I try to access this URL it says the user is not authorized to access the URI...

Setting up a Docker registry with Letsencrypt certificate

I'm setting up a Docker registry as described here:
https://docs.docker.com/registry/deploying/
I generated a certificate for docker.mydomain.com and started the docker using their command on my server:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
I've started the docker and pointed to certificates I obtained using letsencrypt (https://letsencrypt.org/).
Now, when I browse to https://docker.mydomain.com:5000/v2/ I get a page with just '{}' and a green lock (successful secure page request).
But when I try to do a docker login docker.mydomain.com:5000 from a different server, I see an error in the registry container:
TLS handshake error from xxx.xxx.xxx.xxx:51773: remote error: bad certificate
I've tried some different variations in setting up the certificates, and gotten errors like:
remote error: unknown certificate authority
and
tls: first record does not look like a TLS handshake
What am I missing?
Docker seems not to support SNI: https://github.com/docker/docker/issues/9969
Update: Docker should now support SNI.
This means that when connecting to your server during the TLS handshake, the Docker client does not specify the domain name, so your server presents its default certificate.
The solution could be to change the default certificate of your server to the one valid for the Docker domain.
To check whether your (sub-)domain works with clients that are not SNI-aware, you can use ssllabs.com/ssltest: if you DON'T see the message "This site works only in browsers with SNI support.", then it will work.
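You can observe the difference yourself with openssl (docker.mydomain.com stands in for your registry host): with -servername the client sends SNI; without it, you see the server's default certificate.
# With SNI (what browsers and newer Docker clients send)
openssl s_client -connect docker.mydomain.com:5000 -servername docker.mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject
# Without SNI (what older Docker clients sent)
openssl s_client -connect docker.mydomain.com:5000 </dev/null 2>/dev/null | openssl x509 -noout -subject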