I am running a web site with Kubernetes on Google Cloud. At the moment, everything is working well, but only over HTTP. I need HTTPS. I have several services and one of them is exposed to the outside world; let's call it web. As far as I know, this is the only service that needs to be modified. I tried creating a static IP and a TCP/SSL load balancer ssl-LB in the Networking section of GCP and using that LB in the web.yaml I created. Creating the service gets stuck with:
Error creating load balancer (will retry): Failed to create load
balancer for service default/web: requested ip <IP> is
neither static nor assigned to LB
aff3a4e1f487f11e787cc42010a84016(default/web): <nil>
According to GCP, however, my IP is static. The hashed LB from the error I cannot find anywhere, and it should be assigned to ssl-LB anyway. How do I assign this properly?
More details:
Here are the contents of web.yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    ...
spec:
  type: LoadBalancer
  loadBalancerIP: <RESERVED STATIC IP>
  ports:
  - port: 443
    targetPort: 7770
  selector:
    ...
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        ...
    spec:
      containers:
      - name: web
        image: gcr.io/<PROJECT>/<IMAGE NAME>
        ports:
        - containerPort: 7770
Since you have not mentioned this already, I'm just assuming you're using Google Container Engine (GKE) for your Kubernetes setup.
In the Service resource manifest, if you set the type to LoadBalancer, Kubernetes on GKE automatically sets up network load balancing (an L4 load balancer) using GCE. You will have to terminate the TLS connections in your pod, using your own custom server or something like nginx/apache.
If your goal is to set up an L7 (HTTP/HTTPS) load balancer (which looks to be the case), it will be simpler and easier to use the Ingress resource in Kubernetes (available starting with v1.1). GKE automatically sets up GCE HTTP/HTTPS L7 load balancing with this setup.
You will be able to add your TLS certificates which will get provisioned on the GCE load balancer automatically by GKE.
This setup has the following advantages:
Specify services per URL path and port (it uses URL Maps from GCE to configure this).
Set up and terminate SSL/TLS on the GCE load balancer (it uses Target proxies from GCE to configure this).
GKE will automatically also configure the GCE health checks for your services.
Your responsibility will be to handle the backend service logic to handle requests in your pods.
More info available on the GKE page about setting up HTTP load balancing.
Remember that when using GKE, it automatically uses the available GCE load balancer support for both the use cases described above and you will not need to manually set up GCE load balancing.
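For illustration, here is a minimal sketch of that Ingress approach, assuming the web Service is changed to type NodePort (which the GCE Ingress controller expects for its backends) and that a TLS Secret named web-tls has been created from your certificate and key; both names, and the reserved global IP name web-ip, are assumptions rather than values from the question:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    # assumed name of a reserved *global* static IP (the Ingress needs a global address, not a regional one)
    kubernetes.io/ingress.global-static-ip-name: web-ip
spec:
  tls:
  - secretName: web-tls   # e.g. kubectl create secret tls web-tls --cert=... --key=...
  backend:
    serviceName: web      # the web Service from above, switched to type: NodePort
    servicePort: 443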
Related
I am trying to understand how load balancing works in Istio.
An Istio DestinationRule defines rules for balancing traffic between pods.
A K8s Service similarly manages load balancing of traffic between pods.
A DestinationRule defines a host, and a k8s Service also defines a host.
But without the k8s Service, requests fail with HTTP code 503.
How does a k8s Service relate to a DestinationRule?
Kubernetes service
A Kubernetes Service of type ClusterIP uses kube-proxy's iptables rules to distribute the requests.
The documentation says:
By default, kube-proxy in userspace mode chooses a backend via a round-robin algorithm.
More about it here.
Destination rule
As mentioned here
You can think of virtual services as how you route your traffic to a given destination, and then you use destination rules to configure what happens to traffic for that destination. Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic’s “real” destination.
Every HTTP route must have a target: a route, or a redirect. A route is a forwarding target, and it can point to one of several versions of a service described in DestinationRules. Weights associated with the service version determine the proportion of traffic it receives.
DestinationRule defines policies that apply to traffic intended for a service after routing has occurred.
And here
While a virtual service matches on a rule and evaluates a destination to route the traffic to, destination rules define available subsets of the service to send the traffic.
For example, if you have a service that has multiple versions running at a time, you can create destination rules to define routes to those versions. Then use virtual services to map to a specific subset defined by the destination rules or split a percentage of the traffic to particular versions.
503 without kubernetes service
But without the k8s Service, requests fail with HTTP code 503.
It fails because the host specified in the virtual service and in the destination rule does not exist.
For example, take a look at this virtual service and destination rule.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - name: "reviews-v2-routes"
    match:
    - uri:
        prefix: "/wpcatalog"
    route:
    - destination:
        host: reviews.prod.svc.cluster.local   <---
        subset: v2
  - name: "reviews-v1-route"
    route:
    - destination:
        host: reviews.prod.svc.cluster.local   <---
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination
spec:
  host: reviews.prod.svc.cluster.local   <---
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
If you look at the host field, you will see that it is your Kubernetes Service that is specified there, and the configuration won't work without it.
Additionally, when setting route rules to direct traffic to specific versions (subsets) of a service, care must be taken to ensure that the subsets are available before they are used in the routes. Otherwise, calls to the service may return 503 errors during a reconfiguration period.
More about it here.
A DestinationRule defines a host, and a k8s Service also defines a host.
The destination rule's host is your Kubernetes Service, and the Kubernetes Service's endpoints are your pods. You may be wondering: why do I need a Service at all?
As mentioned here.
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
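To make that concrete, here is a hedged sketch of the kind of Kubernetes Service the reviews host above resolves to; the port 9080 and the app: reviews label are assumptions borrowed from the Bookinfo-style example, not values given in the question:
apiVersion: v1
kind: Service
metadata:
  name: reviews
  namespace: prod        # matches the host reviews.prod.svc.cluster.local
spec:
  selector:
    app: reviews         # selects the pods of all versions (v1, v2, ...)
  ports:
  - name: http
    port: 9080           # assumed port, as in Bookinfo
    targetPort: 9080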
How a Kubernetes Service relates to a DestinationRule
I couldn't find exact information on how this works, so I will explain how I understand it.
You need the Kubernetes Service so that the virtual service and the destination rule can actually work.
Since the Kubernetes Service uses kube-proxy's iptables rules to distribute requests, I assume that an Istio destination rule can override them with its own rules and apply them through the Envoy sidecar, because all traffic that your mesh services send and receive (data plane traffic) is proxied through Envoy, making it easy to direct and control traffic around your mesh without making any changes to your services.
More about it here.
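As a rough sketch of what such an override looks like, a DestinationRule can carry a trafficPolicy that Envoy applies in place of kube-proxy's default round-robin; choosing LEAST_CONN here is just an illustrative assumption:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination
spec:
  host: reviews.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN   # other options include ROUND_ROBIN, RANDOM, PASSTHROUGH
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2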
Additional resources:
https://istio.io/latest/docs/reference/config/networking/destination-rule/#Subset
https://istio.io/latest/docs/examples/bookinfo/#apply-default-destination-rules
https://istio.io/latest/docs/concepts/traffic-management/#load-balancing-options
Let me know if you have any more questions.
We are connecting Ambassador (API Gateway - https://www.getambassador.io/) to a Google VM instance group via a load balancer on which HTTP/2 is enabled. This requires SSL to be enabled. There is no good documentation on how to connect Ambassador to an SSL-enabled end system.
We tried connecting to the Google VM instance from an Ambassador pod running in Kubernetes via a plain HTTP service, as per the suggestion in https://github.com/datawire/ambassador/issues/585. But we could not find a way to connect to an SSL-enabled endpoint by providing an SSL certificate.
kind: Service
apiVersion: v1
metadata:
  name: a-b-service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: a-b-mapping
      grpc: True
      headers:
        lang: t
      prefix: /a.Listener/
      rewrite: /a.Listener/
      service: http://<ip>:<port>/
      timeout_ms: 60000
We want to connect to an SSL-enabled Google VM instance group via load balancing. Also, how do we provide an SSL certificate for this?
kind: Service
apiVersion: v1
metadata:
  name: a-b-service
  annotations:
    .....
      service: https://<ip>:443/    <---- https with ssl
      timeout_ms: 60000
Can someone suggest how to achieve this?
Ambassador has a fair amount of documentation around TLS, so your best bet is probably to check out https://www.getambassador.io/reference/core/tls/ and start from there.
The short version is that specifying a service that starts with https:// is enough for Ambassador to originate TLS, but if you want to control the originating certificate, you need to reference a TLSContext as well.
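A hedged sketch of what that could look like in the annotation-style config from the question; the TLSContext name a-b-upstream-tls and the Kubernetes secret a-b-origination-cert are hypothetical names used only for illustration:
kind: Service
apiVersion: v1
metadata:
  name: a-b-service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: TLSContext
      name: a-b-upstream-tls          # hypothetical name
      secret: a-b-origination-cert    # hypothetical k8s TLS secret holding the client cert/key
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: a-b-mapping
      grpc: True
      headers:
        lang: t
      prefix: /a.Listener/
      rewrite: /a.Listener/
      service: https://<ip>:443/      # https:// tells Ambassador to originate TLS upstream
      tls: a-b-upstream-tls           # use the TLSContext above for the originating certificate
      timeout_ms: 60000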
I'm running OKD 3.6 (an upgrade is a work in progress) with an F5 BIG-IP appliance running 11.8. We currently have two virtual servers for HTTP(S) doing NAT/PAT and talking to the cluster's HAProxy. The cluster is configured to use redhat/openshift-ovs-subnet.
I now have users asking for TLS passthrough. Can I add new virtual servers and an F5 router pod to the cluster and run this in conjunction with my existing virtual servers and HAProxy?
Thank you.
Personally I think... yes, you can. If TLS passthrough is just route configuration, then you define the route as follows, and HAProxy will pass the traffic through for your new virtual server.
apiVersion: v1
kind: Route
metadata:
  labels:
    name: myService
  name: myService-route-passthrough
  namespace: default
spec:
  host: mysite.example.com
  path: "/myApp"
  port:
    targetPort: 443
  tls:
    termination: passthrough
  to:
    kind: Service
    name: myService
Frankly, I'm not sure I have understood your needs correctly, so this may not answer your question appropriately; you may want to read the following for more suitable solutions:
Passthrough Termination
Simple SSL Passthrough (Non-Prod only)
I set up a cluster via gcloud container engine, where I have deployed my pods with a Node.js server running on them. I am using a LoadBalancer service and a static IP for routing the traffic across these instances. Everything works perfectly, but I forgot to specify write/read permissions for the Google Storage API, and my server cannot save files to bucket storage.
According to this answer there is no way to change permissions (scopes) for a cluster after it was created. So I created a new cluster with the correct permissions and re-deployed my containers. I would like to re-use the static IP I received from Google, tell the LoadBalancer to use the existing IP, and remove the old cluster. How do I do that? I really don't want to change DNS.
If you are using a type: LoadBalancer style service then you can use the loadBalancerIP field on the service.
apiVersion: v1
kind: Service
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.10.10
  ...
If you are using an Ingress you can use an annotation on Google Cloud to set the IP address. Here you use the IP address name in Google Cloud rather than the IP address itself.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    "kubernetes.io/ingress.global-static-ip-name": my-ip-name
spec:
  ...
Is there a way to force an SSL upgrade for incoming connections on the ingress load balancer? Or, if that is not possible, can I disable port :80? I haven't found a documentation page that outlines such an option in the YAML file. Thanks a lot in advance!
https://github.com/kubernetes/ingress-gce#frontend-https
You can block HTTP through the annotation kubernetes.io/ingress.allow-http: "false" or redirect HTTP to HTTPS by specifying a custom backend. Unfortunately GCE doesn't handle redirection or rewriting at the L7 layer directly for you, yet. (see https://github.com/kubernetes/ingress-gce#ingress-cannot-redirect-http-to-https)
Update: GCP now handles redirection rules for load balancers, including HTTP to HTTPS. There doesn't appear to be a method to create these through Kubernetes YAML yet.
This was already correctly answered by a comment on the accepted answer. But since the comment is buried I missed it several times.
As of GKE version 1.18.10-gke.600 you can add a k8s frontend config to redirect from http to https.
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: ssl-redirect
spec:
  redirectToHttps:
    enabled: true
# add below to ingress
# metadata:
#   annotations:
#     networking.gke.io/v1beta1.FrontendConfig: ssl-redirect
The annotation has changed:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  ...
Here is the annotation change PR:
https://github.com/kubernetes/contrib/pull/1462/files
If you are not bound to the GCLB Ingress Controller you could have a look at the NGINX Ingress Controller. This controller is different from the built-in one in multiple ways. First and foremost, you need to deploy and manage it yourself. But if you are willing to do so, you get the benefit of not depending on the GCE LB ($20/month) and of getting support for IPv6 and websockets.
The documentation states:
By default the controller redirects (301) to HTTPS if TLS is enabled for that ingress. If you want to disable that behaviour globally, you can use ssl-redirect: "false" in the NGINX config map.
The recently released 0.9.0-beta.3 comes with an additional annotation for explicitly enforcing this redirect:
Force redirect to SSL using the annotation ingress.kubernetes.io/force-ssl-redirect
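A minimal sketch of how that annotation could be attached, assuming the NGINX ingress class and a placeholder Ingress name; the rest of the spec is elided as in the other examples here:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-nginx-ingress            # placeholder name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ...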
Google has responded to our requests and is testing HTTP->HTTPS SSL redirection on their load balancers. Their latest answer said it should be in Alpha sometime before the end of January 2020.
Their comment:
Thank you for your patience on this issue. The feature is currently in testing and we expect to enter Alpha phase before the end of January. Our PM team will have an announcement with more details as we get closer to the Alpha launch.
My fingers are crossed that we'll have a straightforward solution to this very common feature in the near future.
UPDATE (April 2020):
HTTP(S) redirects are now a Generally Available feature on GCP load balancers. It's still a bit rough around the edges and unfortunately does not work out of the box with the GCE Ingress Controller. But time will tell, and hopefully a native solution will appear.
A quick update, see here: a FrontendConfig can now be created to configure the Ingress. Hope it helps.
Example:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT   # the 301 redirect; a bare "301" is not a valid value here
You'll need to make sure that your load balancer supports HTTP and HTTPS.
Worked on this for a long time. In case anyone isn't clear on the post above: you would rebuild your Ingress with the annotation kubernetes.io/ingress.allow-http: "false".
Then delete your Ingress and redeploy. The annotation will make the Ingress create an LB only for 443, instead of both 443 and 80.
Then you create a Compute Engine HTTP LB yourself, not one managed by GKE.
GUI directions:
Create a load balancer and choose HTTP(S) Load Balancing -- Start configuration.
Choose "From Internet to my VMs" and continue.
Choose a name for the LB.
Leave the backend configuration blank.
Under Host and path rules, select Advanced host and path rules with the action set to "Redirect the client to different host/path".
Leave the Host redirect field blank.
Select Prefix Redirect and leave the Path value blank.
Choose the redirect response code 308.
Tick the Enable box for HTTPS redirect.
For the Frontend configuration, leave HTTP and port 80; for the IP address, select the static IP address being used for your GKE ingress.
Create this LB.
You will now have all HTTP traffic go to this LB and get 308-redirected to your HTTPS ingress for GKE. Super simple config setup, and it works well.
Note: if you just try to delete the port 80 LB that GKE makes (without doing the annotation change and rebuilding the Ingress) and then add the new redirect compute LB, it does work, but you will start to see error messages on your Ingress saying: error 400: invalid value for field 'resource.ipAddress', " " is in use and would result in a conflict, invalid. It is trying to spin up the port 80 LB and can't, because you already have an LB on port 80 using the same IP. It does work, but the error is annoying and GKE keeps trying to build it (I think).
Thanks to the comment from Andrej Palicka and the page he provided (https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect), I now have an updated and working solution.
First we need to define a FrontendConfig resource and then we need to tell the Ingress resource to use this FrontendConfig.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: myapp-prd
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/v1beta1.FrontendConfig: myapp-frontend-config
spec:
  defaultBackend:
    service:
      name: myapp-app-service
      port:
        number: 80
---
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: myapp-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
You can disable HTTP on your cluster (note that you'll need to recreate your cluster for this change to be applied on the load balancer) and then set HTTP-to-HTTPS redirect by creating an additional load balancer on the same IP address.
I spent a couple of hours on the same question and ended up doing what I've just described. It works perfectly.
Redirecting to HTTPS in Kubernetes is somewhat complicated. In my experience, you'll probably want to use an ingress controller such as Ambassador or ingress-nginx to control routing to your services, as opposed to having your load balancer route directly to your services.
Assuming you're using an ingress controller, then:
If you're terminating TLS at the external load balancer and the LB is running in L7 mode (i.e., HTTP/HTTPS), then your ingress controller needs to use X-Forwarded-Proto, and issue a redirect accordingly.
If you're terminating TLS at the external load balancer and the LB is running in TCP/L4 mode, then your ingress controller needs to use the PROXY protocol to do the redirect.
You can also terminate TLS directly in your ingress controller, in which case it has all the necessary information to do the redirect.
Here's a tutorial on how to do this in Ambassador.
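If you went with ingress-nginx instead, the two external-LB cases above roughly map to ConfigMap options like the following; the ConfigMap name and namespace are assumptions based on a typical ingress-nginx install:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name from a standard ingress-nginx deployment
  namespace: ingress-nginx         # assumed namespace
data:
  use-forwarded-headers: "true"    # trust X-Forwarded-Proto from an external L7 (HTTP/HTTPS) load balancer
  use-proxy-protocol: "true"       # only if the external LB runs in TCP/L4 mode and speaks the PROXY protocol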