GCP managed SSL certificate stuck at FAILED_RATE_LIMITED?

I am trying to issue an SSL certificate for my load balancer, which serves traffic to an application hosted on a GKE cluster in the backend. I have reserved a static IP for the load balancer, and I am able to access the application over HTTP.
Earlier this week, I managed to generate SSL certificates and attach them to the load balancer using GCP managed SSL certificates, and my domain was served over HTTPS. However, I had to delete the load balancer and recreate it using a Kubernetes Ingress. Now I am not able to apply the certificates to the load balancer: they fail to provision with the error FAILED_RATE_LIMITED.
I have since deleted all the previously generated certificates on my account, so there is only a single certificate now. According to the error, I have exhausted the quota of GCP managed SSL certificates on my account, but I have deleted all the previously generated certs and waited roughly 10-11 hours, and the issue still persists. Is there a solution to this?
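For reference, letting a GKE ManagedCertificate resource own the certificate (instead of creating it by hand each time the load balancer is recreated) avoids repeatedly deleting and re-issuing certs for the same domain, which is what tends to trip the per-domain issuance rate limit. A minimal sketch of such a setup; all names and the domain below are placeholders:

```yaml
# Placeholder names/domain throughout; the Ingress references the
# ManagedCertificate and the reserved static IP by name.
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-managed-cert
spec:
  domains:
    - example.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
    networking.gke.io/managed-certificates: my-managed-cert
spec:
  defaultBackend:
    service:
      name: my-service
      port:
        number: 80
```

This way the cert survives Ingress edits, and deleting the Ingress later also cleans up the certificate through the same controller.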

Related

Enable https traffic to Kubernetes pod with internal MTLS auth on EKS Fargate

I'm building a service that requires PKI mTLS X509Certificate authentication.
So I have an AWS ACM Private CA that issues private client certificates to identify the client, and a regular ACM-issued certificate to identify the server.
For the mTLS authentication I use Spring Security (Java), which requires a trust store containing the private root CA certificate for authenticating clients, as well as a PKCS#12 key store to enforce SSL (for the client to authenticate the server).
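For context, the Spring Boot side of a setup like this is typically configured with the `server.ssl.*` properties; a sketch with placeholder paths and passwords (`client-auth=need` is what makes client certificates mandatory):

```properties
# Placeholders throughout; the key store identifies the server,
# the trust store holds the private root CA used to verify clients.
server.ssl.enabled=true
server.ssl.key-store=classpath:server-keystore.p12
server.ssl.key-store-type=PKCS12
server.ssl.key-store-password=changeit
server.ssl.trust-store=classpath:truststore.p12
server.ssl.trust-store-type=PKCS12
server.ssl.trust-store-password=changeit
# "need" rejects connections without a valid client certificate (mTLS)
server.ssl.client-auth=need
```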
Everything works fine when I run it locally using SSL.
Before I enabled SSL in the application, everything worked fine in the cluster as well.
However, when I added MTLS logic to the application the connection hangs when talking to the application in the cluster.
I'm guessing that I need to configure HTTPS for my service/ingress in the cluster, but everything I find specifies an ARN for the certificate to be used, while I already have it installed in the application.
All I want to do is to allow https traffic to pass through the load balancer into my application and let the application handle the SSL stuff.
Alternatively if it would be possible to configure X509Certificate authentication in Spring security without the SSL certificate for the client to verify the server.
In that case the SSL certificate would only be used in production and not locally.
Would that be possible and what's the pros and cons with each?
So in the end I went with the nginx ingress controller with SSL passthrough.
However, EKS Fargate pods do not support the nginx ingress controller, so I had to create a new managed cluster.
Technically I could have used a mixed cluster with both managed and Fargate nodes, but Fargate had given me enough headaches, and when I did some calculations I found that Fargate would probably cost more in our case.
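The ssl-passthrough setup mentioned above usually looks something like the sketch below (hypothetical names and hostname). Note that the annotation only works if the controller itself was started with the `--enable-ssl-passthrough` flag:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mtls-service
  annotations:
    # Forward the raw TLS stream to the pod so the application performs
    # the (m)TLS handshake itself; requires --enable-ssl-passthrough on
    # the ingress-nginx controller.
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mtls-service
                port:
                  number: 8443
```

With passthrough, the load balancer never terminates TLS, so no certificate ARN is needed at the ingress layer.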

Cloud Run, custom domain, and SSL termination

I'm using Cloud Run behind a load balancer. However, the load balancer is simply a passthrough that performs SSL termination with my own certificate and exposes the Cloud Run service(s).
I read this and am thinking of trying it out:
https://cloud.google.com/run/docs/mapping-custom-domains#run
In the Firebase docs, it says:
"After we verify domain ownership, we provision an SSL certificate for your domain and deploy it across our global CDN within 24 hours after you point your DNS A records to Firebase Hosting. Your domain will be listed as one of the Subject Alternative Names (SAN) in the FirebaseApp SSL certificate."
For Cloud Run, it says something similar: it will generate and manage my SSL certificates. Does anybody have experience with this?
Will this newly generated certificate invalidate my current cert? I assume so, and that's OK. I'm only using Cloud Run for subdomains like api-prod.example.io for my API and app-prod.example.io for my frontend nginx static webserver.
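One way to answer the "which certificate is live" question empirically is to check which DNS names the currently served certificate covers, before and after the migration. A small stdlib-only sketch (the hostname used in the comment is just the asker's placeholder domain):

```python
import socket
import ssl

def sans_from_cert(cert: dict) -> list:
    """Extract the DNS subjectAltName entries from the dict returned
    by SSLSocket.getpeercert()."""
    return [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]

def live_cert_sans(host: str, port: int = 443) -> list:
    """Connect to a host and report which DNS names its currently
    served certificate covers."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return sans_from_cert(tls.getpeercert())

# e.g. compare live_cert_sans("api-prod.example.io") before and after the move
```

If the SAN list changes from your own cert's names to a Google-issued one, the cutover has happened for that hostname.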
Are there any other considerations for why I should not move over? If I do move over, should I use Firebase instead? I suppose the options are:
Firebase + Cloud Run
vs
Cloud Run
vs
GCP LB + Cloud Run + Own Managed Certificate (Current)
Thanks in advance!

Google Managed Certificate with IP address

I am trying to create a Google-managed SSL certificate for my Compute Engine instance. However, I am required to enter a domain. The issue is that I do not have a domain associated with my instance; I only have its external IP address.
How can I use the IP address of my instance for the certificate, or how do I associate it with a domain?
You might be confusing what Google is asking for.
In order to create an SSL certificate, you must own/manage/control a domain name.
Next, in order to use the SSL certificate that Google created (or by other means such as Let's Encrypt), you map the Google Service, such as a load balancer, to a backend such as Google Compute Engine VM instances.
If your goal is to create an SSL certificate using an IP address: you cannot. SSL certificates require a domain name. There are exceptions to this, such as using a machine name to create a self-signed certificate, but this does not apply to your situation.
Another important item. Once you create a Google Managed SSL certificate, you cannot use it on your VM instance. You can only use it for Google managed services such as Load Balancer. The Load Balancer will then sit in front of your VM instance.
If your goal is to create an SSL certificate that you can install on your VM instance, look into Let's Encrypt. Let's Encrypt is simple to work with and their certificates are free. You will still need to own a domain name, but you will be able to control where it is installed.
To install a Let's Encrypt SSL certificate, you update the DNS records that your domain registrar set up so that your domain name points to your instance's IP address.

How to use Kubernetes SSL certificates

I am trying to build an HTTPS proxy server in front of another service in Kubernetes, using either an NginX proxy LoadBalancer server or an Ingress. Either way, I need a certificate and key so that my external requests get authenticated.
I'm looking at how to manage tls in a cluster, and I've noticed that the certificate used to connect to the container cluster is the same one as is mounted at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt on a running pod.
So I'm thinking that my node cluster already has a registered certificate, and all I need is the key; I could throw both into a Secret and mount that into my proxy server. But I can't find out how.
Is it this simple? How would I do that? Or do I need to create a new certificate, sign it etc etc? Would I then need to replace the current certificate?
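On the "throw it into a secret" step: a TLS Secret is just the PEM certificate and key, base64-encoded under the `tls.crt` and `tls.key` keys (the same thing `kubectl create secret tls` produces). A small sketch that builds such a manifest; the Secret name and PEM contents below are placeholders:

```python
import base64
import json

def tls_secret_manifest(name: str, cert_pem: bytes, key_pem: bytes) -> dict:
    """Build a kubernetes.io/tls Secret manifest; Kubernetes expects
    the PEM data base64-encoded under tls.crt and tls.key."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "type": "kubernetes.io/tls",
        "data": {
            "tls.crt": base64.b64encode(cert_pem).decode("ascii"),
            "tls.key": base64.b64encode(key_pem).decode("ascii"),
        },
    }

# Placeholder PEM bodies; apply the result, then mount the Secret
# into the proxy pod as files.
manifest = tls_secret_manifest(
    "proxy-tls",
    b"-----BEGIN CERTIFICATE-----...",
    b"-----BEGIN PRIVATE KEY-----...",
)
print(json.dumps(manifest, indent=2))
```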
If you want an external request to get into your K8s cluster, then that is the job of an ingress controller, or of configuring the service with a LoadBalancer, if your cloud provider supports it.
The certificate discussed in your reference is really meant to be used for intra-cluster communications, as it says:
Every Kubernetes cluster has a cluster root Certificate Authority (CA). The CA is generally used by cluster components to validate the API server’s certificate, by the API server to validate kubelet client certificates, etc.
If you go for an Ingress approach, then here is the doc for TLS. At the bottom is a list of alternatives, such as the load balancer approach.
I guess you could use the internal certificate externally if you are able to get all your external clients to trust it. Personally I'd probably use kube-lego, which automates getting certificates from Let's Encrypt, since most browsers trust this CA now.
Hope this helps

How to keep the SSL server certificate for verification in Cloud Foundry/Heroku?

I am developing an app to run in Cloud Foundry.
The app makes constant connections to a web service using https protocol.
The web service uses a self-signed certificate pair created by openssl.
As there is no DNS setup, I am using the IP address as the Common Name (CN) in the SSL certificate.
However, the web service's IP address changes from time to time, and the SSL certificate has to be regenerated each time.
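As an aside, when a certificate really must be bound to an IP, the address is usually put in the subjectAltName rather than (only) the CN, since modern clients validate the SAN. A sketch with a placeholder address, assuming OpenSSL 1.1.1+ for the `-addext` flag:

```shell
# 203.0.113.10 is a placeholder IP; the cert must still be
# regenerated whenever the service's address changes.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=203.0.113.10" \
  -addext "subjectAltName=IP:203.0.113.10"
```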
In order for the app to connect, it needs to trust the SSL certificate so I have been packaging the public key for the web service’s SSL cert as a file with my app.
The problem is that I have to re-upload the app to Cloud Foundry once the public key of the SSL cert changes.
Here are some possible solutions:
Register a host name in DNS. In that case, the certificate is only bound to the host name. (Might not be possible because of the budget.)
Create a private CA, issue certificates from the CA, then install the CA as the trusted CA on the client. This is feasible and a common approach for internal services. However, what happens when the app is pushed to CF? How can we configure the node for the certs?
Disable SSL server authentication. I am not sure whether skipping authentication would put the app at risk; for the time being, the app only pulls data from the web service.
I've also been thinking of keeping the public key in the database. In that case, I don't need to re-upload the app for a change to take effect. But I am not sure whether that is a safe approach.
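On the database (or environment variable) idea: the server's certificate/public key is not secret, so storing it outside the app is generally fine; what matters is that the app keeps verification enabled and loads the current CA at startup. A stdlib-only sketch, where `SERVICE_CA_PEM` is a hypothetical variable name:

```python
import os
import ssl
from typing import Optional

def make_client_context(ca_pem: Optional[str] = None) -> ssl.SSLContext:
    """Client-side TLS context that trusts a CA supplied at runtime
    (from an env var, a config service, or a database row), so the
    trusted cert can rotate without re-pushing the app."""
    ctx = ssl.create_default_context()
    if ca_pem:
        # Trust the out-of-band CA in addition to the system defaults.
        ctx.load_verify_locations(cadata=ca_pem)
    return ctx

# SERVICE_CA_PEM is a hypothetical name; on CF it could be set with
# `cf set-env` and updated without a full re-upload of the app.
context = make_client_context(os.environ.get("SERVICE_CA_PEM"))
```

This keeps server authentication on (option 3 above is avoided) while removing the need to package the PEM inside the droplet.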
Question
I am looking for a common and safe way to keep the SSL server cert in a Cloud Foundry environment. Are any of the above solutions viable? If not, are there any other CF-preferred ways?
Thank you
This is a bit old, but in case this helps...
Did you try generating your server SSL certificate with an arbitrary hostname (even "localhost")? Since you are uploading this certificate with your application (i.e. to "blindly" trust it), I think it could work, and it would avoid the dependency on your IP address.