Cloud Run, custom domain, and SSL termination

I'm running Cloud Run behind a load balancer, but the load balancer is simply a passthrough that performs SSL termination with my own certificate and exposes the Cloud Run service(s).
I read this and am thinking of trying it out:
https://cloud.google.com/run/docs/mapping-custom-domains#run
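If I understand the docs correctly, trying it out would be something like this for my API subdomain (the service name and region here are placeholders, not what I actually run):

    # Map a custom subdomain to a Cloud Run service:
    gcloud beta run domain-mappings create \
        --service api-backend \
        --domain api-prod.example.io \
        --region us-central1
    # The command prints the DNS records (e.g. a CNAME to ghs.googlehosted.com)
    # to add at the DNS provider.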
In the Firebase docs, it says:
"After we verify domain ownership, we provision an SSL certificate for your domain and deploy it across our global CDN within 24 hours after you point your DNS A records to Firebase Hosting. Your domain will be listed as one of the Subject Alternative Names (SAN) in the FirebaseApp SSL certificate."
The Cloud Run docs say something similar: Cloud Run will generate and manage my SSL certificates. Does anybody have experience with this?
Will this newly generated certificate invalidate my current cert? I assume so, and that's OK. I'm only using Cloud Run for subdomains like api-prod.example.io for my API and app-prod.example.io for my frontend nginx static web server.
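For checking which certificate actually gets served once the mapping is live, I figure something like this works (needs a reasonably recent OpenSSL for the -ext flag):

    # Inspect the certificate served for the mapped domain and its SANs:
    echo | openssl s_client -connect api-prod.example.io:443 \
        -servername api-prod.example.io 2>/dev/null \
      | openssl x509 -noout -subject -ext subjectAltName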
Are there any other considerations for why I should not move over? And if I do move over, should I use Firebase instead? I suppose the options are:
Firebase + Cloud Run
vs
Cloud Run
vs
GCP LB + Cloud Run + Own Managed Certificate (Current)
Thanks in advance!

Related

Google Cloud SSL Certificate still provisioning after 24 hours

My Google Cloud configuration (load balancer, certificate, etc.) has all of the required steps completed, but the certificate is still provisioning.
I fixed it. The solution is to create the domain mapping and DNS record using the gcloud tool instead of the Google Cloud Console.
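For anyone else hitting this, the gcloud equivalent of those two steps is roughly the following (zone, service, and domain names are placeholders):

    # Create the domain mapping from the CLI instead of the Console:
    gcloud beta run domain-mappings create \
        --service my-service --domain www.example.com --region us-central1
    # Add the DNS record it asks for (Cloud Run mappings use ghs.googlehosted.com):
    gcloud dns record-sets transaction start --zone=my-zone
    gcloud dns record-sets transaction add ghs.googlehosted.com. \
        --name=www.example.com. --type=CNAME --ttl=300 --zone=my-zone
    gcloud dns record-sets transaction execute --zone=my-zone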

Azure Application Gateway with Let's Encrypt certs for SaaS product

I'm running a Kubernetes cluster in AKS with Traefik as the Ingress controller. I have cert-manager to automatically generate and renew certificates from Let's Encrypt. It's a SaaS application, so users can choose to configure their own domain names. For example, the "generic" URL would be mysaas.com, but customers can choose to use something like customer.com and CNAME that to mysaas.com instead, and cert-manager will generate a cert for customer.com.
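For context, the per-customer object cert-manager maintains is essentially a Certificate like this (the issuer name is just an example):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: customer-com
    spec:
      secretName: customer-com-tls   # where the key/cert pair ends up
      issuerRef:
        name: letsencrypt-prod       # example ClusterIssuer
        kind: ClusterIssuer
      dnsNames:
        - customer.com
    EOF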
I'm currently looking at placing a WAF in front of all of this, and since I'm running in Azure I'm looking at Azure Application Gateway more specifically.
What I'm trying to figure out now, is how to automate configuration of the certs in the Application Gateway as well, so that it can decrypt/encrypt the traffic. These are the things I'm trying to figure out how to do (if they're even possible):
1. Automate upload of certs to Azure Key Vault and let Application Gateway read them from there (see the sketch after this list).
   Con: Not sure how to do this.
   Pro: Cert management is still handled by cert-manager, which I trust.
2. Automate upload of certs directly to Application Gateway.
   Con: Not sure how to do this.
   Pro: Cert management is still handled by cert-manager, which I trust.
3. Skip encryption between Application Gateway and the AKS cluster and let Application Gateway handle generation/renewal of certs. I would have to make the cluster private, but I guess that's a good thing.
   Pro: Nice not having two things depending on the certs (Traefik and Application Gateway).
   Con: The current setup using cert-manager works fine and I have monitoring in place for it. I'm not sure I can get a setup as nice as that using only Application Gateway, and I don't know if it's even possible.
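For option 1, the rough shape of what I imagine the automation would do with the az CLI (vault, gateway, and resource group names are made up, and the gateway would need a managed identity with read access to the vault):

    # Export the cert-manager secret and repackage it as a PFX for Key Vault:
    kubectl get secret customer-com-tls -o jsonpath='{.data.tls\.crt}' | base64 -d > tls.crt
    kubectl get secret customer-com-tls -o jsonpath='{.data.tls\.key}' | base64 -d > tls.key
    openssl pkcs12 -export -in tls.crt -inkey tls.key -out customer-com.pfx -passout pass:
    az keyvault certificate import --vault-name myvault --name customer-com --file customer-com.pfx

    # Point Application Gateway at the Key Vault secret:
    SID=$(az keyvault certificate show --vault-name myvault --name customer-com --query sid -o tsv)
    az network application-gateway ssl-cert create \
        --resource-group mygroup --gateway-name mygateway \
        --name customer-com --key-vault-secret-id "$SID"

Something along those lines would have to run on every renewal, e.g. from a small job watching the secrets.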
I know that the Azure Application Gateway Ingress Controller (AGIC) exists, but I really like the Traefik setup I have in place today, and frankly there are too many things about AGIC that scare me a bit.

Changes in Heroku. How to continue having free Cloudflare SSL on free Heroku dynos?

Heroku email from today:
When an app is migrated to the new infrastructure, its default appname.herokuapp.com, DNS records, and any haiku.herokudns.com custom domain records are modified to point to the IP addresses of the new routing infrastructure. For a period of 24-48 hours, the app is accessible via both the new and old routing infrastructure. When the migration completes, the app will no longer be accessible via the old routing infrastructure and all traffic must flow via the new infrastructure. Requests for an app sent to the old infrastructure will result in error code: H31 Misdirected Request.
To get correct and future-proof DNS targets for custom domains associated with your Heroku apps, you can run heroku domains and compare the DNS target in the output to the CNAME target that you’ve configured with your DNS provider. If the DNS targets don’t match, you need to update your DNS configuration to point to the DNS targets provided by Heroku.
I’ve done the above. This breaks the workaround for getting free SSL from Cloudflare to work with Heroku (because of the move away from *.herokuapp.com, which is what made the workaround possible). So now one has to upload a Cloudflare certificate using Heroku SSL (which can only be used on paid dynos).
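Concretely, "the above" was a comparison along these lines (app and domain names are examples):

    # What Heroku expects:
    heroku domains --app my-app
    # What's actually configured at the DNS provider:
    dig +short CNAME www.example.com
    # The dig output should match the DNS Target column from `heroku domains`.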
Rest of the email:
If you have any SSL Endpoints associated to your app, you can verify the DNS by following this step from the SSL Endpoint setup documentation. Please note that the SSL Endpoint add-on is deprecated and will be removed starting July 31, 2021. All existing and new Heroku applications should use Heroku SSL, which includes Automated Certificate Management (ACM).
Does anyone have a workaround to enable the use of Cloudflare SSL on a free Heroku dyno setup?

Getting 'SSL_ERROR_BAD_CERT_DOMAIN' error after deploying site using Surge to a custom domain

I'm using Surge.sh to deploy a simple React app to a custom domain I bought from GoDaddy.com.
I've followed the instructions regarding custom domains on their site and got a confirmation that my site was deployed successfully:
https://surge.sh/help/adding-a-custom-domain
On GoDaddy I've configured the CNAME and A records to point to Surge.
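For reference, here's how I'm checking the records against the values in Surge's custom-domain docs (the expected values below are taken from those docs, so treat them as approximate):

    dig +short A codatheory.dev          # Surge's documented A record: 45.55.110.124
    dig +short CNAME www.codatheory.dev  # Surge's documented CNAME: na-west1.surge.sh.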
However, when I open the domain at https://codatheory.dev/ I receive an error message with error code: SSL_ERROR_BAD_CERT_DOMAIN.
I'm quite new to hosting sites on custom domains, so I'm sure I've misunderstood something. The certificate registered on the site is provided by surge.sh.
What configuration steps can I take to resolve this issue? Do I need to create a new certificate to be signed by a CA in order to use this domain, or have I missed something in my deployment?
Thanks!
SSL with Surge comes out of the box for *.surge.sh domains. For these domains you can force a redirect from HTTP to HTTPS. However, for custom domains Surge does not offer SSL, as stated explicitly here; they mention it is a feature of Surge Plus. To answer your question: yes, you could generate a certificate using some provider (e.g. https://letsencrypt.org/) and add it to Surge, but that falls under Surge Plus (not the free tier anymore).
If I were you, I'd maybe try S3 with CloudFront? It doesn't cost that much if the traffic isn't too high.

Backend with self-signed certificate

I'm building a website with a separated backend/frontend. For now the website is hosted on my Kubernetes cluster at home: one pod for the frontend and another for the backend.
These pods are accessible via Traefik. I have internal DNS names (i.e. backend.home.local and frontend.home.local) to access them. I have generated a self-signed CA that handles SSL on my .home.local private domain, so I can reach both over my private network from my computer, which has my private CA registered.
My frontend also communicates with my backend via HTTPS, using the same URLs (in .home.local). My frontend knows the CA (I proceeded as described here).
I also have an external domain name pointing at my frontend. I can access it via HTTPS without trouble, both outside and inside my network.
OK, so far so good. My issue is that when I access my frontend via the external domain name, the frontend succeeds in communicating with the backend from a computer that has my private CA registered, but fails with err_cert_authority_invalid from a computer without the CA.
I understand from this that the end user has to have the CA for all of the website's resources, or else the browser throws an error, even if the frontend initiates another SSL connection to the backend with its own CA.
I also tried deactivating HTTPS between the frontend and the backend, but then I get a mixed-content error... not better.
Do I inevitably have to have a backend accessible from outside with a proper Let's Encrypt certificate? I would prefer a backend that is not accessible from outside, but I don't know if I can do that properly.
I hope this post is not too messy.
Have a good day.
Do I inevitably have to have a backend accessible from outside with a proper Let's Encrypt certificate?
Yes, that is the case: it's the user's browser (not your frontend pod) that calls the backend, so the browser must be able to validate the backend's certificate. This does not necessarily mean you have to change your backend service, though; you can do SSL termination for the backend through Traefik. Setting up Let's Encrypt through Traefik is a fully automated process, so it should be fairly easy.
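As a rough sketch (the resolver name and email are placeholders), the static configuration is just a few flags, e.g. for Traefik v2:

    # Static configuration: define an ACME certificate resolver.
    traefik \
        --entrypoints.websecure.address=:443 \
        --certificatesresolvers.le.acme.email=you@example.com \
        --certificatesresolvers.le.acme.storage=/data/acme.json \
        --certificatesresolvers.le.acme.tlschallenge=true
    # Then reference the resolver on the backend's router, e.g. via a label:
    #   traefik.http.routers.backend.tls.certresolver=le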
Think about it - HTTPS is not just a "nice to have" feature, it is essential for security. All communication over public networks should be done securely.