How to programmatically set the load balancer ciphers on Amazon Elastic Beanstalk? - load-balancing

I have been reading the Amazon Elastic Beanstalk documentation, and was able to add the following lines to a configuration file in .ebextensions to set the Elastic Load Balancer to use HTTPS:
option_settings:
  # Elastic Load Balancer Options
  - namespace: aws:elb:loadbalancer
    option_name: LoadBalancerHTTPPort
    value: 80
  - namespace: aws:elb:loadbalancer
    option_name: LoadBalancerPortProtocol
    value: HTTP
  - namespace: aws:elb:loadbalancer
    option_name: LoadBalancerHTTPSPort
    value: 443
  - namespace: aws:elb:loadbalancer
    option_name: LoadBalancerSSLPortProtocol
    value: HTTPS
  - namespace: aws:elb:loadbalancer
    option_name: SSLCertificateId
    value: arn:aws:iam:<my cert ARN>
That works perfectly. However, I was unable to find anything in the documentation about how to set the list of allowed ciphers for this load balancer without resorting to the Console or to CLI commands. Any ideas on how to do this?

Contacted AWS support and confirmed that this is not yet supported. Bummer.

I'm not sure when/if this changed, but you can do this on a "classic" load balancer using .ebextensions:
option_settings:
  aws:elb:policies:tlspolicy:
    SSLProtocols: "Protocol-TLSv1.2,Server-Defined-Cipher-Order,ECDHE-RSA-AES256-SHA384,ECDHE-RSA-AES256-GCM-SHA384,ECDHE-RSA-AES128-SHA256,ECDHE-RSA-AES128-GCM-SHA256"
    LoadBalancerPorts: "443"
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html

It seems this is possible now. According to this Elastic Load Balancer doc, you can create your own policy:
Custom Security Policy
You can create a custom negotiation configuration with the ciphers and protocols that you need. Note that some security compliance standards (such as PCI, SOX, and so on) might require a specific set of protocols and ciphers to ensure that the security standards are met. In such cases, you can create a custom security policy to meet those standards.
There are instructions at the AWS page, Update the SSL Negotiation Configuration of Your Load Balancer, for using the aws elb command to customize a policy.
I don't want to just quote what's there, but I wanted to note that it seems possible now in case somebody else comes across the question here. In brief, though, you'd use aws elb create-load-balancer-policy to create a policy where you define the set of ciphers and their order, then aws elb set-load-balancer-policies-of-listener to enable it on your load balancers.
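For example, a minimal sketch (the load balancer name, policy name, and cipher choices below are placeholders, not recommendations):

# Create a custom SSL negotiation policy on a classic ELB.
aws elb create-load-balancer-policy \
  --load-balancer-name my-load-balancer \
  --policy-name my-tls-policy \
  --policy-type-name SSLNegotiationPolicyType \
  --policy-attributes \
    AttributeName=Protocol-TLSv1.2,AttributeValue=true \
    AttributeName=Server-Defined-Cipher-Order,AttributeValue=true \
    AttributeName=ECDHE-RSA-AES128-GCM-SHA256,AttributeValue=true

# Attach the policy to the HTTPS listener on port 443.
aws elb set-load-balancer-policies-of-listener \
  --load-balancer-name my-load-balancer \
  --load-balancer-port 443 \
  --policy-names my-tls-policy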

Related

Istio Ingress with cert-manager

I have a Kubernetes cluster running Kafka (via Strimzi) with Istio also installed. Certificates are stored in cert-manager. I want to use TLS passthrough in my ingress, but I am a little bit confused about it.
When SIMPLE is used, there is credentialName, which must match the name of the secret.
tls:
  mode: SIMPLE
  credentialName: httpbin-credential
That is a nice and simple way. But what about mode: PASSTHROUGH when I have many hosts? I studied the demo on the Istio site (https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/#deploy-an-nginx-server), where the certificate details are stored in the server configuration file and a ConfigMap is created. The official Istio documentation notes that the credentialName parameter is only for MUTUAL and SIMPLE.
What is the correct and simple way to expose my hosts to external traffic through the Istio ingress using cert-manager?
The difference between SIMPLE and PASSTHROUGH is:
SIMPLE instructs the gateway to terminate TLS itself and forward the decrypted ingress traffic.
PASSTHROUGH instructs the gateway to pass the ingress traffic through AS IS, without terminating TLS.
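For PASSTHROUGH with several hosts, here is a sketch of what the setup can look like (the hostname, namespace, and backend port below are hypothetical): the Gateway forwards raw TLS based on SNI, and a VirtualService per host routes it to a backend that presents its own cert-manager-issued certificate.

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: tls-passthrough-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: PASSTHROUGH   # no credentialName; TLS is not terminated at the gateway
    hosts:
    - "kafka.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kafka-passthrough
spec:
  hosts:
  - "kafka.example.com"
  gateways:
  - tls-passthrough-gateway
  tls:
  - match:
    - port: 443
      sniHosts:
      - "kafka.example.com"
    route:
    - destination:
        host: my-kafka-bootstrap.kafka.svc.cluster.local
        port:
          number: 9093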

Google Managed SSL Certificate Stuck on FAILED_NOT_VISIBLE

I'm trying to configure an HTTPS/Layer 7 Load Balancer with GKE. I'm following SSL certificates overview and GKE Ingress for HTTP(S) Load Balancing.
My configuration has worked for some time; I wanted to test Google's managed service.
This is how I've set it up so far:
k8s/staging/staging-ssl.yml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-staging-lb-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-staging-global"
    ingress.gcp.kubernetes.io/pre-shared-cert: "staging-google-managed-ssl"
    kubernetes.io/ingress.allow-http: "false"
spec:
  rules:
  - host: staging.my-app.no
    http:
      paths:
      - path: /*
        backend:
          serviceName: my-svc
          servicePort: 3001
gcloud compute addresses list
#=>
NAME               REGION  ADDRESS         STATUS
my-staging-global          35.244.160.NNN  RESERVED

host staging.my-app.no
#=>
35.244.160.NNN
but it is stuck on FAILED_NOT_VISIBLE:
gcloud beta compute ssl-certificates describe staging-google-managed-ssl
#=>
creationTimestamp: '2018-12-20T04:59:39.450-08:00'
id: 'NNNN'
kind: compute#sslCertificate
managed:
  domainStatus:
    staging.my-app.no: FAILED_NOT_VISIBLE
  domains:
  - staging.my-app.no
  status: PROVISIONING
name: staging-google-managed-ssl
selfLink: https://www.googleapis.com/compute/beta/projects/my-project/global/sslCertificates/staging-google-managed-ssl
type: MANAGED
Any idea on how I can fix or debug this further?
I found a section, Associating SSL certificate resources with a target proxy, in the doc I linked to at the beginning of the post:
Use the following gcloud command to associate SSL certificate resources with a target proxy, whether the SSL certificates are self-managed or Google-managed.
gcloud compute target-https-proxies create [NAME] \
  --url-map=[URL_MAP] \
  --ssl-certificates=[SSL_CERTIFICATE1][,[SSL_CERTIFICATE2],[SSL_CERTIFICATE3],...]
Is that necessary when I have this line in k8s/staging/staging-ssl.yml?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    . . .
    ingress.gcp.kubernetes.io/pre-shared-cert: "staging-google-managed-ssl"
    . . .
I faced this issue recently. You need to check whether your A record correctly points to the Ingress static IP.
If you are using a service like Cloudflare, disable the Cloudflare proxy setting so that pinging the domain gives the actual IP of the Ingress. This will let the Google-managed SSL certificate provision correctly within 10 to 15 minutes.
Once the certificate is up, you can enable the Cloudflare proxy setting again.
I'm leaving this for anyone who might end up in the same situation as me. I needed to migrate from a self-managed certificate to a Google-managed one.
I created the Google-managed certificate following the guide and was expecting to see it become active before applying it to my Kubernetes Ingress (to avoid the possibility of downtime).
Turns out, as stated by the docs,
the target proxy must reference the Google-managed certificate resource
So applying the configuration with kubectl apply -f ingress-conf.yaml made the load balancer use the newly created certificate, which became active shortly after (15 minutes or so).
What worked for me after checking the answers here (I worked with a load balancer, but IMO this is correct for all cases):
If some time has passed, the certificate will not work for you (it may be permanently gone, and it takes time for that to show). I created a new one and replaced it in the load balancer (just edit it).
Make sure that the certificate is in use within a few minutes of creating it.
Make sure that the DNS points to your service, and that your configuration works when using plain HTTP! This is the best and safest way (also, if you just moved a domain, make sure that when you check it you reach the correct IP).
After creating a new cert, or once the problem is fixed, your domain will turn green, but you still need to wait (it can take an hour or more).
As per the following documentation which you provided, this should help you out:
The status FAILED_NOT_VISIBLE indicates that certificate provisioning failed for a domain because of a problem with DNS or the load balancing configuration. Make sure that DNS is configured so that the certificate's domain resolves to the IP address of the load balancer.
What is the TTL (time to live) of the A Resource Record for staging.my-app.no?
Use, e.g.,
dig +nocmd +noall +answer staging.my-app.no
to figure it out.
In my case, increasing the TTL from 60 seconds to 7200 let the domainStatus finally reach ACTIVE.
In addition to the other answers, when migrating from self-managed to Google-managed certs I had to:
Enable HTTP on my Ingress with kubernetes.io/ingress.allow-http: "true"
Leave the existing SSL cert running in the original ingress service until the new managed cert was Active
I also had an expired original SSL cert, though I'm not sure this mattered.
At work, we lean heavily on managed certificates to provide dynamic environments for developers and QA. As a result, we provision and remove managed certificates quite often, which means we also update the Ingress resource every time a managed certificate is created or removed.
What we found out is that even if you delete the reference to the managed certificate from this annotation:
networking.gke.io/managed-certificates: <list>
It seems that, at random, the Ingress does not remove the associated ssl-certificates from this load balancer annotation:
ingress.gcp.kubernetes.io/pre-shared-cert: <list>
As a result, when the managed certificate is deleted, the Ingress gets "stuck": no new managed certificate can be provisioned, so new managed certificates will, after some time, transition from the PROVISIONING state to the FAILED_NOT_VISIBLE state.
The only solution we have found so far: if a new certificate does not get provisioned after 30 minutes, we check whether the annotation ingress.gcp.kubernetes.io/pre-shared-cert contains an ssl-certificate that no longer exists.
You can check the existing ssl-certificates with the command below:
gcloud compute ssl-certificates list
If an ssl-certificate that no longer exists is still hanging around in the annotation, we remove the stale entry from the ingress.gcp.kubernetes.io/pre-shared-cert annotation manually.
After applying the updated configuration, the new managed certificate that was in the FAILED_NOT_VISIBLE state should be provisioned and reach the ACTIVE state within about 5 minutes.
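For instance, a sketch with hypothetical names (keep only the certificates that still exist):

# List the ssl-certificates GCP still knows about.
gcloud compute ssl-certificates list

# Overwrite the annotation so it references only surviving certificates.
kubectl annotate ingress my-ingress \
  ingress.gcp.kubernetes.io/pre-shared-cert="existing-cert-1,existing-cert-2" \
  --overwrite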
As already pointed out by Mitzi (https://stackoverflow.com/a/66578266/7588668), this is what worked for me:
Create the cert with its subdomains/domains.
Add it to the load balancer. (I was waiting for it to become active, but it only becomes active once you add it!)
Add the static IP as an A record for the domains/subdomains.
It worked within 5 minutes.
In my case I needed to alter the health check and point it to the proper endpoint (/healthz on nginx-ingress), and after the health check returned true I had to make sure the managed certificate was created in the same namespace as the gce-ingress. After these two things were done it finally went through; otherwise I got the same FAILED_NOT_VISIBLE error.
I met the same issue and fixed it by re-reading the documentation:
https://cloud.google.com/load-balancing/docs/ssl-certificates/troubleshooting?_ga=2.107191426.-1891616718.1598062234#domain-status
FAILED_NOT_VISIBLE
Certificate provisioning failed for the domain. Either of the following might be the issue:
The domain's DNS record doesn't resolve to the IP address of the Google Cloud load balancer. To resolve this issue, update the DNS records to point to your load balancer's IP address.
The SSL certificate isn't attached to the load balancer's target proxy. To resolve this issue, update your load balancer configuration.
Google Cloud continues to try to provision the certificate while the managed status is PROVISIONING.
My load balancer sits behind Cloudflare, which has its CDN proxy enabled by default. I needed to disable the proxy first; after Google verified the DNS, the cert state changed to ACTIVE.
I had this problem for days. Even though the FQDN in the Google Cloud public DNS zone correctly resolved to the IP of the HTTPS load balancer, certificate provisioning failed with FAILED_NOT_VISIBLE. I eventually resolved the problem: my domain was set up in Google Domains with DNSSEC but had an incorrect DNSSEC record pointing to the Google Cloud public DNS zone. DNSSEC configuration can be verified using https://dnsviz.net/
I had the same problem, but mine was in the deployment. I ran
kubectl describe ingress [INGRESS-NAME] -n [NAMESPACE]
The result showed an error in resources.timeoutsec for the deployment: allowed values must be less than 300 seconds, and my original value was above that. I reduced readinessProbe.timeoutSeconds to a lower number. After 30 minutes the SSL cert was generated and the subdomain was verified.
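For reference, a readiness probe with a timeout in the allowed range might look like this (path, port, and values are illustrative):

readinessProbe:
  httpGet:
    path: /healthz
    port: 3001
  timeoutSeconds: 10   # the GCLB-derived health check requires values below 300 seconds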
It turns out that I had mistakenly done some changes to the production environment and others to staging. Everything worked as expected when I figured that out and followed the guide. :-)

HTTPS endpoints for local kubernetes backend service addresses, after SSL termination

I have a k8s cluster that sits behind a load balancer. The request for myapisite.com passes through the LB and is routed by k8s to the proper deployment, getting the SSL cert from the k8s load balancer ingress, which then routes to the service ingress, like so:
spec:
  rules:
  - host: myapisite.com
    http:
      paths:
      - backend:
          serviceName: ingress-605582265bdcdcee247c11ee5801957d
          servicePort: 80
        path: /
  tls:
  - hosts:
    - myapisite.com
    secretName: myapisitecert
status:
  loadBalancer: {}
So my myapisite.com resolves on HTTPS correctly.
My problem is that, while maintaining the above setup (if possible), I need to be able to go to my local service endpoints within the same namespace on HTTPS, i.e. from another pod I should be able to curl or wget the following without a cert error:
https://myapisite.namespace.svc.cluster.local
Even if I were interested in not terminating SSL until the pod level, creating a SAN entry on the cert for a .local address is not an option, so that solution is not viable.
Is there some simple way I'm missing to make all local DNS trusted in k8s? Or some other solution here that's hopefully not a reinvention of the wheel? I am using kubernetes version 1.11 with CoreDNS.
Thanks, and sorry in advance if this is a dumb question.
If your application can listen on both HTTP and HTTPS, you can configure both, meaning you will be able to access it via either protocol, at your preference. Now, how you create and distribute the certificate is a different story, but you must solve it on your own (probably by using your own CA and storing the cert/key in a secret) - unless you want to use something like Istio and its mutual TLS support to secure traffic between services.
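A minimal sketch of that dual-port approach (all names are hypothetical; the TLS key pair would live in a Secret that the pods mount and use to terminate TLS themselves):

apiVersion: v1
kind: Service
metadata:
  name: myapisite
  namespace: namespace
spec:
  selector:
    app: myapisite
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https      # pods listen with TLS on this port using the mounted cert
    port: 443
    targetPort: 8443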
While you describe what you want to achieve, we don't really know why; the reason behind this need might actually help us suggest the best solution.

Enable HTTPS on GCE/GKE

I am running a web site with Kubernetes on Google Cloud. At the moment, everything is working well - through HTTP. But I need HTTPS. I have several services, and one of them is exposed to the outside world; let's call it web. As far as I know, this is the only service that needs to be modified. I tried creating a static IP and a TCP/SSL load balancer ssl-LB in the Networking section of GCP, and using that LB in web.yaml, which I create. Creating the service gets stuck with:
Error creating load balancer (will retry): Failed to create load
balancer for service default/web: requested ip <IP> is
neither static nor assigned to LB
aff3a4e1f487f11e787cc42010a84016(default/web): <nil>
According to GCP my IP is static, however. I cannot find the hashed LB anywhere, and it should be assigned to ssl-LB anyway. How do I assign this properly?
More details:
Here are the contents of web.yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    ...
spec:
  type: LoadBalancer
  loadBalancerIP: <RESERVED STATIC IP>
  ports:
  - port: 443
    targetPort: 7770
  selector:
    ...
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        ...
    spec:
      containers:
      - name: web
        image: gcr.io/<PROJECT>/<IMAGE NAME>
        ports:
        - containerPort: 7770
Since you have not mentioned this already, I'm just assuming you're using Google Container Engine (GKE) for your Kubernetes setup.
In the service resource manifest, if you set the type to LoadBalancer, Kubernetes on GKE automatically sets up network (L4) load balancing using GCE. You will have to terminate connections in your pod using your own custom server or something like nginx/apache.
If your goal is to set up a (HTTP/HTTPS) L7 load balancer (which looks to be the case), it will be simpler and easier to use the Ingress resource in Kubernetes (starting with v1.1). GKE automatically sets up a GCE HTTP/HTTPS L7 load balancing with this setup.
You will be able to add your TLS certificates which will get provisioned on the GCE load balancer automatically by GKE.
This setup has the following advantages:
Specify services per URL path and port (it uses URL Maps from GCE to configure this).
Set up and terminate SSL/TLS on the GCE load balancer (it uses Target proxies from GCE to configure this).
GKE will automatically also configure the GCE health checks for your services.
Your responsibility will be to handle the backend service logic to handle requests in your pods.
More info available on the GKE page about setting up HTTP load balancing.
Remember that when using GKE, it automatically uses the available GCE load balancer support for both the use cases described above and you will not need to manually set up GCE load balancing.
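As a minimal sketch of that Ingress approach (names are hypothetical; note that the backing Service would typically be of type NodePort rather than LoadBalancer, and the Secret holds your TLS key pair):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  tls:
  - secretName: web-tls   # e.g. kubectl create secret tls web-tls --cert=tls.crt --key=tls.key
  backend:
    serviceName: web
    servicePort: 443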

How to force SSL for Kubernetes Ingress on GKE

Is there a way to force an SSL upgrade for incoming connections on the ingress load balancer? Or, if that is not possible, can I disable port :80? I haven't found a documentation page that outlines such an option in the YAML file. Thanks a lot in advance!
https://github.com/kubernetes/ingress-gce#frontend-https
You can block HTTP through the annotation kubernetes.io/ingress.allow-http: "false" or redirect HTTP to HTTPS by specifying a custom backend. Unfortunately GCE doesn't handle redirection or rewriting at the L7 layer directly for you, yet. (see https://github.com/kubernetes/ingress-gce#ingress-cannot-redirect-http-to-https)
Update: GCP now handles redirection rules for load balancers, including HTTP to HTTPS. There doesn't appear to be a method to create these through Kubernetes YAML yet.
This was already correctly answered by a comment on the accepted answer. But since the comment is buried I missed it several times.
As of GKE version 1.18.10-gke.600 you can add a k8s frontend config to redirect from http to https.
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: ssl-redirect
spec:
  redirectToHttps:
    enabled: true
# add below to ingress
# metadata:
#   annotations:
#     networking.gke.io/v1beta1.FrontendConfig: ssl-redirect
The annotation has changed:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  ...
Here is the annotation change PR:
https://github.com/kubernetes/contrib/pull/1462/files
If you are not bound to the GCLB Ingress Controller, you could have a look at the Nginx Ingress Controller. This controller is different from the built-in one in multiple ways. First and foremost, you need to deploy and manage it yourself. But if you are willing to do so, you get the benefit of not depending on the GCE LB ($20/month) and getting support for IPv6 and websockets.
The documentation states:
By default the controller redirects (301) to HTTPS if TLS is enabled for that ingress. If you want to disable that behaviour globally, you can use ssl-redirect: "false" in the NGINX config map.
The recently released 0.9.0-beta.3 comes with an additional annotation for explicitly enforcing this redirect:
Force redirect to SSL using the annotation ingress.kubernetes.io/force-ssl-redirect
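Usage is a single annotation on the Ingress (a sketch; the name is hypothetical, and on newer nginx-ingress releases the annotation carries the nginx.ingress.kubernetes.io/ prefix instead):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ...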
Google has responded to our requests and is testing HTTP->HTTPS SSL redirection on their load balancers. Their latest answer said it should be in Alpha sometime before the end of January 2020.
Their comment:
Thank you for your patience on this issue. The feature is currently in testing and we expect to enter Alpha phase before the end of January. Our PM team will have an announcement with more details as we get closer to the Alpha launch.
My fingers are crossed that we'll have a straightforward solution to this very common feature in the near future.
UPDATE (April 2020):
HTTP(S) rewrites are now a Generally Available feature. It's still a bit rough around the edges and unfortunately does not work out of the box with the GCE Ingress Controller. But time will tell, and hopefully a native solution will appear.
A quick update here: a FrontendConfig can now be created to configure the Ingress. Hope it helps.
Example:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT  # the enum name for a 301; a bare "301" is not a valid value
You'll need to make sure that your load balancer supports both HTTP and HTTPS.
Worked on this for a long time. In case anyone isn't clear on the post above: you would rebuild your ingress with the annotation kubernetes.io/ingress.allow-http: "false",
then delete your ingress and redeploy. The annotation will have the ingress create a LB for 443 only, instead of both 443 and 80.
Then you create a compute HTTP LB yourself, not one managed by GKE.
GUI directions:
Create a load balancer and choose HTTP(S) Load Balancing -- Start configuration.
Choose "From Internet to my VMs" and continue.
Choose a name for the LB.
Leave the backend configuration blank.
Under Host and path rules, select "Advanced host and path rules" with the action set to "Redirect the client to different host/path".
Leave the Host redirect field blank.
Select "Prefix redirect" and leave the Path value blank.
Choose 308 as the redirect response code.
Tick the Enable box for HTTPS redirect.
For the frontend configuration, leave HTTP and port 80; for the IP address, select the static IP address being used for your GKE ingress.
Create this LB.
You will now have all HTTP traffic go to this LB and be 308-redirected to your HTTPS ingress for GKE. Super simple config setup, and it works well.
Note: if you just try to delete the port 80 LB that GKE makes (without doing the annotation change and rebuilding the ingress) and then add the new redirect compute LB, it does work, but you will start to see error messages on your Ingress saying: error 400: invalid value for field 'resource.ipAddress', " " is in use and would result in a conflict, invalid. It is trying to spin up the port 80 LB and can't, because you already have an LB on port 80 using the same IP. It works, but the error is annoying, and GKE keeps trying to build it (I think).
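If you prefer the CLI over the GUI, the same redirect-only frontend can be sketched with a URL map import (names are hypothetical; double-check the flags against the current gcloud docs):

# redirect-map.yaml
kind: compute#urlMap
name: http-to-https-redirect
defaultUrlRedirect:
  redirectResponseCode: PERMANENT_REDIRECT   # 308, matching the GUI steps above
  httpsRedirect: true

# Import it as a global URL map for the HTTP frontend.
gcloud compute url-maps import http-to-https-redirect \
  --source redirect-map.yaml --global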
Thanks to the comment of @Andrej Palicka and the page he provided (https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect), I now have an updated and working solution.
First we need to define a FrontendConfig resource and then we need to tell the Ingress resource to use this FrontendConfig.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: myapp-prd
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/v1beta1.FrontendConfig: myapp-frontend-config
spec:
  defaultBackend:
    service:
      name: myapp-app-service
      port:
        number: 80
---
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: myapp-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
You can disable HTTP on your cluster (note that you'll need to recreate your cluster for this change to be applied on the load balancer) and then set up an HTTP-to-HTTPS redirect by creating an additional load balancer on the same IP address.
I spent a couple of hours on the same question and ended up doing what I've just described. It works perfectly.
Redirecting to HTTPS in Kubernetes is somewhat complicated. In my experience, you'll probably want to use an ingress controller such as Ambassador or ingress-nginx to control routing to your services, as opposed to having your load balancer route directly to your services.
Assuming you're using an ingress controller, then:
If you're terminating TLS at the external load balancer and the LB is running in L7 mode (i.e., HTTP/HTTPS), then your ingress controller needs to use X-Forwarded-Proto, and issue a redirect accordingly.
If you're terminating TLS at the external load balancer and the LB is running in TCP/L4 mode, then your ingress controller needs to use the PROXY protocol to do the redirect.
You can also terminate TLS directly in your ingress controller, in which case it has all the necessary information to do the redirect.
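For the first case, with ingress-nginx the usual knobs live in its ConfigMap (a sketch; the key names are from the ingress-nginx documentation, while the ConfigMap name and namespace depend on your install):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"   # trust X-Forwarded-Proto set by the external LB
  ssl-redirect: "true"            # redirect requests that arrived as plain HTTP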
Here's a tutorial on how to do this in Ambassador.