I am attempting to create an Ansible play in which SSL certificates that were previously downloaded are then distributed to our F5s. I've updated to the latest version our license will allow, 12.1.4. I am able to place key and certificate files on the F5, and I can manually configure these certs and everything works fine, but I need this process to be automated via Ansible.
Our F5s are currently being used as fail-overs for a service called WaveForm, which monitors audio feeds. I am trying to use bigip_profile_client_ssl to update the client SSL profile for WaveForm to point to the new SSL certs that were uploaded in earlier steps. The problem is that I can't figure out how to update the existing profile; bigip_profile_client_ssl only seems to create a new profile with the same name. I've searched the docs, Google, etc., and can't find an answer for how to update an existing client SSL profile specifically.
So, naturally, I was thinking perhaps I could delete the old entry and create a new entry. Simple, right? But I run into problems there as well: I can't seem to figure out how to set the Application or Partition/Path with the bigip_profile_client_ssl module. The Application setting specifically, I think, is what will bind the SSL configuration to the actual network resource.
See Ansible docs: https://docs.ansible.com/ansible/latest/modules/bigip_profile_client_ssl_module.html
Note this is all run on an internal network and is not accessible from the outside world.
Here is the module that I'm having trouble with:
# fullpath from facts: "/Common/wf.site.com.app/wf.site.com_client-ssl"
- name: Update Client SSL Profile for WaveForm
  bigip_profile_client_ssl:
    provider:
      server: "{{ server }}"
      server_port: "{{ server_port }}"
      user: "{{ f5_ad_user }}"
      password: "{{ f5_ad_password }}"
      transport: rest
      validate_certs: no # temporary ignore SSL validation
    state: present # as opposed to absent
    name: "wf.site.com_client-ssl"
    parent: "/Common/clientssl"
    cert_key_chain:
      - cert: "/Common/default.crt" #test-cert-fullchain.crt"
        key: "/Common/default.key" #test-key.key"
        # chain:
  delegate_to: localhost
After running this I end up with two entries for wf.site.com_client-ssl. One is new and contains the correct certificates but is not bound to anything via the Application setting; the other entry, which is actively in use, remains unchanged.
The correct entry looks something like this on the Local Traffic ›› Profiles : SSL : Client page:
Name | Application | Parent Profile | Partition/Path
wf.site.com_client-ssl | wf.site.com | clientssl | Common/wf.site.com.app
Whereas the NEW entry looks something like this:
Name | Application | Parent Profile | Partition/Path
wf.site.com_client-ssl | (blank) | clientssl | Common
What are my options here? Shell commands?
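For what it's worth, one hedged option for the "shell commands" route is to modify the existing profile in place with tmsh (over SSH, or wrapped in the bigip_command module). This is only a sketch built from the fullpath in the facts above: the application-service name and the strict-updates step are assumptions, since iApp-owned objects normally reject direct edits.
# Hypothetical tmsh sketch; the application service path is guessed from the .app folder name
# iApp-owned objects are protected, so temporarily allow edits
tmsh modify sys application service /Common/wf.site.com.app/wf.site.com strict-updates disabled
# Point the existing client SSL profile at the newly uploaded cert and key
tmsh modify ltm profile client-ssl /Common/wf.site.com.app/wf.site.com_client-ssl \
    cert-key-chain replace-all-with { default { cert /Common/test-cert-fullchain.crt key /Common/test-key.key } }
# Re-enable protection and save
tmsh modify sys application service /Common/wf.site.com.app/wf.site.com strict-updates enabled
tmsh save sys config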
I found that running my Ansible playbook multiple times did not result in multiple new client SSL profiles. So, as a workaround, I disregard the existing configured client SSL profile: I used my Ansible playbook to create a new client SSL profile, modified the Application settings to point to this new profile, and the SSL for WaveForm still seems to be intact. I can update with Common/default.crt and default.key and see that WaveForm's SSL fails, then run the Ansible playbook again with the correct key and full-chain certificate and see WaveForm operate as expected over valid HTTPS. The Application configuration doesn't seem to be doing anything else; I just changed the configured client SSL profile and all seems to be working as desired.
I hope this helps someone else who might be running into similar issues.
I am trying to create an HA cluster with HAProxy in front of 3 master nodes.
On the proxy I am following the official documentation (High Availability Considerations / haproxy configuration). I am passing the SSL verification to the API server with the ssl-hello-chk option.
Having said that, I understand that in my ~/.kube/config file I am using the wrong certificate-authority-data, which I picked up from the primary master node, e.g.:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <something-something>
    server: https://ip:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <something-something>
    client-key-data: <something-something>
    token: <something-something>
I found a relevant ticket on GitHub (Unable to connect to the server: x509: certificate signed by unknown authority / onesolpark), which suggests that I should extract the certificate-authority-data of the proxy.
In this case I assume that I should extract the certificate-authority-data from one of the certs in /etc/kubernetes/pki/, most likely apiserver.*?
Any idea on this?
Thanks in advance for your time and effort.
Okay, I managed to figure it out.
When a k8s admin decides to create an HA cluster, they should have at minimum one LB, but ideally two LBs that are both able to load balance towards all master nodes (3, 5, etc.).
So when the user wants to send a request to the API server on one of the master nodes, the request will ideally go through a virtual IP and be forwarded to one of the LBs. As a second step, the LB forwards the request to one of the master nodes.
The problem I wanted to solve is that the API server's certificate had no record of the IP of the LB(s).
As a result, the user gets the error Unable to connect to the server: x509: certificate signed by unknown authority.
The solution can be found in this related question: How can I add an additional IP / hostname to my Kubernetes certificate?.
The straight answer is to simply add the LB(s) in the kubeadm config file before launching the primary master node, e.g.:
apiServer:
  certSANs:
  - "ip-of-LB1"
  - "domain-of-LB1"
  - "ip-of-LB2"
  - "domain-of-LB2" # etc etc
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
But, as also mentioned, the detailed documentation can be found here: Adding a Name to the Kubernetes API Server Certificate.
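If you want to confirm which names and IPs the API server certificate actually covers, a quick check (assuming the default kubeadm paths) is:
# Inspect the SANs baked into the current apiserver certificate
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text \
  | grep -A1 'Subject Alternative Name'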
Also, if the user decides to create their own certificates instead of the default self-signed certificates (generated by k8s by default), they can add the nodes manually as documented on the official site under Certificates.
Then, if you want to copy the ca.crt, it is under the default path /etc/kubernetes/pki/ca.crt (unless defined differently), or the user can choose to simply copy the ~/.kube/config file for kubectl communication.
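As a rough sketch of that last step (paths are the kubeadm defaults, and user@master is a placeholder), the certificate-authority-data field is just the base64-encoded CA certificate:
# Base64-encode the cluster CA so it can be pasted into certificate-authority-data
base64 -w0 /etc/kubernetes/pki/ca.crt
# Or simply reuse the admin kubeconfig on the client machine
scp user@master:/etc/kubernetes/admin.conf ~/.kube/config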
Hope this helps someone else spend less time on this in the future.
I'm not even sure I asked the question right...
I have three servers running MinIO in distributed mode. I need all three servers to run with TLS enabled. It's easy enough to run certbot, generate a cert for each node, drop said certs into /etc/minio/certs/, and go! But here's where I start running into issues.
The servers are thus:
node1.files.example.com
node2.files.example.com
node3.files.example.com
I'm launching minio using the following command:
MINIO_ACCESS_KEY=minio \
MINIO_SECRET_KEY=secret \
/usr/local/bin/minio server \
-C /etc/minio --address ":443" \
https://node{1...3}.files.example.com:443/volume/{1...4}/
This works, and I am able to connect to all three servers from a web browser using HTTPS with good certs. However, users will connect to the service using the parent domain "files.example.com" (using distributed DNS).
I already ran certbot and generated the certs for the parent domain, and I copied the certs into /etc/minio/certs/ as well as /etc/minio/certs/CAs/ (naming the files "files.example.com-public.crt" and "files.example.com-public.key" respectively)... this did not work. When I try to open the parent domain "files.example.com" I get a cert error (which I can bypass) indicating the certificate is for the node to which I have connected and not for the parent domain.
I'm pretty sure this is just a matter of putting the cert in the right place and naming it correctly... right? Does anyone know how to do that? I also have an idea that there might be a way to issue a cert that covers multiple domains... is that how I'm supposed to do this? How?
I already hit up MinIO's Slack channel and posted on their GitHub, but no one's replying to me. Not even a "this won't work."
Any ideas?
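On the "cert that covers multiple domains" idea: certbot can issue a single certificate with several names (a SAN certificate) by passing multiple -d flags. A hedged sketch, assuming each name resolves to a host where the HTTP-01 challenge can be answered (domain names taken from the question):
# One certificate covering the parent domain and all three nodes (SAN cert);
# --standalone needs port 80 free during issuance
certbot certonly --standalone \
  -d files.example.com \
  -d node1.files.example.com \
  -d node2.files.example.com \
  -d node3.files.example.com
# Then copy the resulting fullchain/privkey to each node's MinIO cert directory,
# e.g. /etc/minio/certs/public.crt and /etc/minio/certs/private.key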
I gave up and ran certbot in manual mode. It had to install Apache on one of the nodes, then certbot had me jump through a couple of minor hoops (namely, it had me create a new TXT record with my DNS provider and then create a file with a text string on the server for verification). I then copied the created certs into my MinIO config directory (/etc/minio/certs/) on all three nodes. That's it.
To be honest, I'd rather use the plugin, as it allows for automated cert renewal, but I'll live with this for now.
You could also run all of them behind a reverse proxy that handles TLS termination using a wildcard domain certificate (i.e. *.files.example.com). The reverse proxy would centralize the certificates, DNS, and certbot script (if you prefer) to a single node, essentially load balancing the TLS and DNS for the MinIO nodes. The performance hit of "load-balancing" TLS like this may be acceptable depending on your workload, considering the simplification of your current DNS and TLS cert setup.
[Digital Ocean example using nginx and certbot plugins] https://www.digitalocean.com/community/tutorials/how-to-create-let-s-encrypt-wildcard-certificates-with-certbot
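For the wildcard route specifically, Let's Encrypt only issues wildcards via the DNS-01 challenge, so the certbot call would look roughly like the sketch below (manual DNS shown; a DNS-provider plugin can automate the TXT record):
# Wildcard certificate for the reverse proxy; requires a DNS-01 challenge (TXT record)
certbot certonly --manual --preferred-challenges dns \
  -d 'files.example.com' -d '*.files.example.com'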
I have set up a Kubernetes cluster from scratch. This just means I did not use services provided by others, but used the k8s installer itself. Before, we used to have other clusters, but with providers, and they give you a TLS cert and key for auth, etc. Now that this cluster was set up by myself, I have access via kubectl:
$ kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h
$
I also tried this, and I can add a custom key, but then when I try to query via curl I get: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope.
I cannot figure out where I can get the cert and key for a user to authenticate against the API using TLS auth. I have tried to understand the official docs, but I have gotten nowhere. Can someone help me find where those files are, or how to add or get certificates that I can use for the REST API?
Edit 1: my ~/.kube/config file looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t(...)=
    server: https://private_IP:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS(...)Qo=
    client-key-data: LS0(...)tCg==
It works from the localhost just normally.
On the other hand, I noticed something. From localhost I can access the cluster by generating a token using this method.
Also, note that for now I am not worried about creating multiple roles for multiple users, etc. I just need remote access to the API, and it can use "default" authentication or roles.
Now when I try to do the same from remote I get the following:
I tried using that config to run kubectl get all from remote; it runs for a while and then ends in Unable to connect to the server: dial tcp private_IP:6443: i/o timeout.
This happens because the config has private_IP, so I changed the IP to Public_IP:6443 and now get the following: Unable to connect to the server: x509: certificate is valid for some_private_IP, My_private_IP, not Public_IP:6443
Keep in mind that this is an AWS EC2 instance with an Elastic IP (you can think of an Elastic IP as just a public IP in a traditional setup, but this public IP lives on your public router, and the router then routes requests to your actual server on the private network). For AWS fans: like I said, I cannot use the EKS service here.
So how do I get this to be able to use the Public IP?
It seems your main problem is the TLS server certificate validation.
One option is to tell kubectl to skip the validation of the server certificate:
kubectl --insecure-skip-tls-verify ...
This obviously has the potential to be "insecure", but that depends on your use case.
Another option is to recreate the cluster with the public IP address added to the server certificate. And it should also be possible to recreate only the certificate with kubeadm without recreating the cluster. Details about the latter two points can be found in this answer.
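For the "recreate only the certificate" option, a rough sketch with kubeadm (v1.13+ exposes this as kubeadm init phase certs; paths are the defaults and <Public_IP> is a placeholder) might look like:
# Back up and remove the current apiserver serving cert/key (kubeadm will not overwrite existing files)
sudo mkdir -p /root/pki-backup
sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /root/pki-backup/
# Re-issue the certificate with the public/Elastic IP added as an extra SAN
sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans <Public_IP>
# Restart the kube-apiserver static pod (e.g. by restarting its container) so it loads the new certificate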
You need to set up RBAC for the user: define roles and role bindings. Follow this link for reference: https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
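To make the "system:anonymous" error from the question concrete, here is a minimal hedged sketch (the names api-reader and api-reader-view are made up for illustration) that creates a service account, grants it read access through the built-in view ClusterRole, and uses its token against the REST API:
# Create a service account and bind it to the built-in "view" ClusterRole
kubectl create serviceaccount api-reader
kubectl create clusterrolebinding api-reader-view \
  --clusterrole=view --serviceaccount=default:api-reader
# Fetch its token (works on clusters that auto-create SA token secrets; newer versions use `kubectl create token`)
TOKEN=$(kubectl get secret \
  $(kubectl get serviceaccount api-reader -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d)
# Call the REST API with the bearer token instead of anonymous access
curl -k -H "Authorization: Bearer $TOKEN" https://<Public_IP>:6443/api/v1/pods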
I'm trying to configure an HTTPS/Layer 7 Load Balancer with GKE. I'm following SSL certificates overview and GKE Ingress for HTTP(S) Load Balancing.
My config has worked for some time. I wanted to test Google's managed service.
This is how I've set it up so far:
k8s/staging/staging-ssl.yml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-staging-lb-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-staging-global"
    ingress.gcp.kubernetes.io/pre-shared-cert: "staging-google-managed-ssl"
    kubernetes.io/ingress.allow-http: "false"
spec:
  rules:
  - host: staging.my-app.no
    http:
      paths:
      - path: /*
        backend:
          serviceName: my-svc
          servicePort: 3001
gcloud compute addresses list
#=>
NAME REGION ADDRESS STATUS
my-staging-global 35.244.160.NNN RESERVED
host staging.my-app.no
#=>
35.244.160.NNN
but it is stuck on FAILED_NOT_VISIBLE:
gcloud beta compute ssl-certificates describe staging-google-managed-ssl
#=>
creationTimestamp: '2018-12-20T04:59:39.450-08:00'
id: 'NNNN'
kind: compute#sslCertificate
managed:
domainStatus:
staging.my-app.no: FAILED_NOT_VISIBLE
domains:
- staging.my-app.no
status: PROVISIONING
name: staging-google-managed-ssl
selfLink: https://www.googleapis.com/compute/beta/projects/my-project/global/sslCertificates/staging-google-managed-ssl
type: MANAGED
Any idea on how I can fix or debug this further?
I found a section in the doc I linked to at the beginning of the post:
Associating SSL certificate resources with a target proxy:
Use the following gcloud command to associate SSL certificate resources with a target proxy, whether the SSL certificates are self-managed or Google-managed.
gcloud compute target-https-proxies create [NAME] \
--url-map=[URL_MAP] \
--ssl-certificates=[SSL_CERTIFICATE1][,[SSL_CERTIFICATE2], [SSL_CERTIFICATE3],...]
Is that necessary when I have this line in k8s/staging/staging-ssl.yml?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    . . .
    ingress.gcp.kubernetes.io/pre-shared-cert: "staging-google-managed-ssl"
    . . .
I have faced this issue recently. You need to check whether your A Record correctly points to the Ingress static IP.
If you are using a service like Cloudflare, then disable the Cloudflare proxy setting so that a ping to the domain returns the actual IP of the Ingress. This will let the Google-managed SSL certificate provision correctly within 10 to 15 minutes.
Once the certificate is up, you can again enable Cloudflare proxy setting.
I'm leaving this for anyone who might end up in the same situation as me. I needed to migrate from a self-managed certificate to a google-managed one.
I did create the google-managed certificate following the guide and was expecting to see it being activated before applying the certificate to my Kubernetes ingress (to avoid the possibility of a downtime)
Turns out, as stated by the docs,
the target proxy must reference the Google-managed certificate resource
So applying the configuration with kubectl apply -f ingress-conf.yaml made the load balancer use the newly created certificate, which became active shortly after (15 min or so).
What worked for me after checking the answers here (I worked with a load balancer but IMO this is correct for all cases):
If some time has passed, the certificate will not work for you (it may be permanently gone, and it will take time to show that). I created a new one and replaced it in the load balancer (just edit it).
Make sure that the certificate is being used within a few minutes of creating it.
Make sure that the DNS points to your service, and that your configuration works when using HTTP. This is the best and safest check (also, if you just moved a domain, make sure that when you check it you reach the correct IP).
After creating a new cert, or once the problem is fixed, your domain will turn green, but you still need to wait (it can take an hour or more).
As per the following documentation which you provided, this should help you out:
The status FAILED_NOT_VISIBLE indicates that certificate provisioning failed for a domain because of a problem with DNS or the load balancing configuration. Make sure that DNS is configured so that the certificate's domain resolves to the IP address of the load balancer.
What is the TTL (time to live) of the A Resource Record for staging.my-app.no?
Use, e.g.,
dig +nocmd +noall +answer staging.my-app.no
to figure it out.
In my case, increasing the TTL from 60 seconds to 7200 seconds let the domainStatus finally arrive at ACTIVE.
In addition to the other answers, when migrating from self-managed to google-managed certs I had to:
Enable http to my ingress service with kubernetes.io/ingress.allow-http: true
Leave the existing SSL cert running in the original ingress service until the new managed cert was Active
I also had an expired original SSL cert, though I'm not sure this mattered.
In my case, at work, we leverage managed certificates a lot in order to provide dynamic environments for developers & QA. As a result, we provision & remove managed certificates quite often, which means we also update the Ingress resource as certificates are generated & removed.
What we have found is that even if you delete the reference to the managed certificate from this annotation:
networking.gke.io/managed-certificates: <list>
it seems that, at random, the Ingress does not remove the associated ssl-certificates from the load balancer, i.e. from this annotation:
ingress.gcp.kubernetes.io/pre-shared-cert: <list>
As a result, when the managed certificate is deleted, the Ingress gets "stuck" in a state where no new managed certificate can be provisioned. Hence, a new managed certificate will, after some time, transition from the PROVISIONING state to the FAILED_NOT_VISIBLE state.
The only solution we have found so far is that, if a new certificate does not get provisioned after 30 minutes, we check whether the annotation ingress.gcp.kubernetes.io/pre-shared-cert contains an ssl-certificate that no longer exists.
You can check the existing ssl-certificates with the command below:
gcloud compute ssl-certificates list
If it happens that an ssl-certificate that no longer exists is still hanging around in the annotation, we then remove the unnecessary ssl-certificate from the ingress.gcp.kubernetes.io/pre-shared-cert annotation manually.
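As a hedged illustration of that manual cleanup (my-ingress and the certificate names are placeholders), it boils down to comparing the annotation against the certificates that actually exist and rewriting it:
# Certificates GCP actually knows about
gcloud compute ssl-certificates list
# What the Ingress currently references
kubectl get ingress my-ingress -o yaml | grep pre-shared-cert
# Rewrite the annotation with only the certificates that still exist
kubectl annotate ingress my-ingress \
  ingress.gcp.kubernetes.io/pre-shared-cert='cert-that-still-exists-1,cert-that-still-exists-2' \
  --overwrite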
After applying the updated configuration, in about 5 minutes the new managed certificate, which was in the FAILED_NOT_VISIBLE state, should be provisioned and move to the ACTIVE state.
As already pointed out by Mitzi (https://stackoverflow.com/a/66578266/7588668), this is what worked for me:
Create the cert with the subdomains/domains
Add it to the load balancer (I was waiting for it to become active, but it only becomes active once you add it!)
Add the static IP as an A record for the domains/subdomains
It worked in 5 minutes
In my case I needed to alter the health check and point it to the proper endpoint (/healthz on nginx-ingress), and after the health check returned true I had to make sure the managed certificate was created in the same namespace as the gce-ingress. After these two things were done it finally went through; otherwise I got the same error, FAILED_NOT_VISIBLE.
I met the same issue.
I fixed it by re-reading the documentation:
https://cloud.google.com/load-balancing/docs/ssl-certificates/troubleshooting?_ga=2.107191426.-1891616718.1598062234#domain-status
FAILED_NOT_VISIBLE
Certificate provisioning failed for the domain. Either of the following might be the issue:
The domain's DNS record doesn't resolve to the IP address of the Google Cloud load balancer. To resolve this issue, update the DNS records to point to your load balancer's IP address.
The SSL certificate isn't attached to the load balancer's target proxy. To resolve this issue, update your load balancer configuration.
Google Cloud continues to try to provision the certificate while the managed status is PROVISIONING.
My load balancer is behind Cloudflare. By default Cloudflare has its CDN proxy enabled, and I needed to disable it first; after the DNS was verified by Google, the cert state changed to ACTIVE.
I had this problem for days. Even though the FQDN in the Google Cloud public DNS zone correctly resolved to the IP of the HTTPS load balancer, the certificate kept failing with FAILED_NOT_VISIBLE. I eventually resolved the problem: my domain was set up in Google Domains with DNSSEC but had an incorrect DNSSEC record pointing to the Google Cloud public DNS zone. DNSSEC configuration can be verified using https://dnsviz.net/
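A quick hedged command-line check for this class of problem (besides dnsviz.net) is to ask for the DNSSEC records directly; when queried through a validating resolver, a broken DNSSEC chain will fail or lose the ad flag:
# Check DNSSEC signatures and the resolved address for the domain
dig +dnssec staging.my-app.no A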
I had the same problem. But my problem was in the deployment. I ran
kubectl describe ingress [INGRESS-NAME] -n [NAMESPACE]
The result shows an error in the resources.timeoutsec for the deployment. Allowed values must be less than 300 sec. My original value was above that. I reduced readinessProbe.timeoutSeconds to a lower number. After 30 mins the SSL cert was generated and the subdomain was verified.
It turns out that I had mistakenly done some changes to the production environment and others to staging. Everything worked as expected when I figured that out and followed the guide. :-)
I'm trying to set up Kafka in SSL [1-way] mode. I've gone through the official documentation and successfully generated the certificates. I'll note down the behavior for two different cases. This setup has only one broker and one zookeeper.
Case-1: Inter-broker communication - Plaintext
Relevant entries in my server.properties file are as follows:
listeners=PLAINTEXT://localhost:9092, SSL://localhost:9093
ssl.keystore.location=/Users/xyz/home/ssl/server.keystore.jks
ssl.keystore.password=****
ssl.key.password=****
I've added a client-ssl.properties in kafka config dir with following entries:
security.protocol=SSL
ssl.truststore.location=/Users/xyz/home/ssl/client.truststore.jks
ssl.truststore.password=****
If I put bootstrap.servers=localhost:9093 or bootstrap.servers=localhost:9092 in my config/producer.properties file, my console-producers/consumers work fine. Is that the intended behavior? If yes, then why? Because I'm specifically trying to connect to localhost:9093 from producer/consumer in SSL mode.
Case-2: Inter-broker communication - SSL
Relevant entries in my server.properties file are as follows:
security.inter.broker.protocol=SSL
listeners=SSL://localhost:9093
ssl.keystore.location=/Users/xyz/home/ssl/server.keystore.jks
ssl.keystore.password=****
ssl.key.password=****
My client-ssl.properties file remains the same. I put bootstrap.servers=localhost:9093 in producer.properties file. Now, none of my producer/consumer can connect to kafka. I get the following msg:
WARN Error while fetching metadata with correlation id 0 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
What am I doing wrong?
In all these cases I'm using the following commands to start producers/consumers:
./kafka-console-producer.sh --broker-list localhost:9093 --topic test --producer.config ../config/client-ssl.properties
./kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config ../config/client-ssl.properties
Make sure that the common names (CN) in your certificates match your hostname.
The SSL protocol verifies the CN against the hostname. I guess here you should have CN=localhost.
I had a similar issue and that's how I fixed it.
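To check what CN (and any SANs) the broker keystore actually contains, one hedged option is keytool; the keystore path is the one from the question's config, and the grep is just to trim the output:
# Inspect the broker certificate's alias, subject (CN) and SAN entries in the keystore
keytool -list -v -keystore /Users/xyz/home/ssl/server.keystore.jks \
  | grep -E 'Alias name:|Owner:|DNSName'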
One important piece of information regarding this: the behavior where the CN has to match the hostname can be deactivated by adding the following line to server.properties:
ssl.endpoint.identification.algorithm=
The default value for this setting is https, which ultimately activates the hostname-to-CN verification. This has been the default since Kafka 2.0.
I've successfully tested an SSL setup (just on the broker side, though) with the following properties:
############################ SSL Config #################################
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=TrustStorePassword
ssl.keystore.location=/path/to/kafka.server.keystore.jks
ssl.keystore.password=KeyStorePassword
ssl.key.password=PrivateKeyPassword
security.inter.broker.protocol=SSL
listeners=SSL://localhost:9093
advertised.listeners=SSL://127.0.0.1:9093
ssl.client.auth=required
ssl.endpoint.identification.algorithm=
You can also find a Shell script to generate SSL certificates (with key- and truststores) alongside some documentation in this github project: https://github.com/confluentinc/confluent-platform-security-tools
Well, both of the given answers point in the right direction, but some more details need to be added to end this confusion.
I generated the certs using this bash script from confluent, and when I looked inside the file, it made sense. I'm pasting the relevant section here:
echo " NOTE: currently in Kafka, the Common Name (CN) does not need to be the FQDN of"
echo " this host. However, at some point, this may change. As such, make the CN"
echo " the FQDN. Some operating systems call the CN prompt 'first / last name'"
There you go. When you're generating the certs, make sure to put localhost (or FQDN) when it asks for first / last name. Do remember that you need to use the same endpoint to expose the broker.
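As a hedged end-to-end check once the broker is up on the SSL listener, you can connect with openssl and confirm the subject of the presented certificate matches the endpoint you advertise:
# Verify the certificate presented on the SSL listener and print its subject (CN)
openssl s_client -connect localhost:9093 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject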