I have an EKS cluster running in one VPC and some EC2 instances in a legacy VPC; the two VPCs are peered.
An app on the EKS cluster needs to be reachable both from inside the cluster and from the EC2 instances in the legacy VPC.
Do I need to create 2 services for the app - one kind: ClusterIP for in-cluster communication and one kind: LoadBalancer for cross-VPC communication:
annotations:
  service.beta.kubernetes.io/aws-load-balancer-type: external
  service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
OR can I create only one service (kind: NodePort / ClusterIP / internal LB) for both in-cluster and cross-VPC communication?
What is the preferred way?
Thanks,
Do I need to create 2 services for the app... create only one service (kind: NodePort / ClusterIP / internal LB) for both in-cluster and cross-VPC communication?
You need only one Service in this case. A type: NodePort Service gives you a cluster IP (for connections within the k8s cluster network) plus a port reachable on each EC2 worker node. A type: LoadBalancer Service gets you a cluster IP too, plus the LB endpoint. As worker nodes come and go, the LoadBalancer gives you more flexibility, since you will only be dealing with a single known endpoint.
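For illustration, a minimal sketch of a single Service covering both paths, assuming the AWS Load Balancer Controller is installed; the name, selector, and ports are placeholders. An internal-scheme NLB keeps the endpoint private to the peered VPCs, while the Service's cluster IP still serves in-cluster traffic:
apiVersion: v1
kind: Service
metadata:
  name: my-app                       # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal   # keep the NLB reachable only from inside/between the VPCs
spec:
  type: LoadBalancer
  selector:
    app: my-app                      # placeholder selector
  ports:
  - port: 80
    targetPort: 8080                 # placeholder ports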
I am running a web site with Kubernetes on Google Cloud. At the moment everything is working well, but over http. I need https. I have several services and one of them is exposed to the outside world; let's call it web. As far as I know, this is the only service that needs to be modified. I tried creating a static IP and a TCP/SSL load balancer ssl-LB in the Networking section of GCP and using that LB in web.yaml, which I then created. Creating the service gets stuck with:
Error creating load balancer (will retry): Failed to create load
balancer for service default/web: requested ip <IP> is
neither static nor assigned to LB
aff3a4e1f487f11e787cc42010a84016(default/web): <nil>
However, according to GCP my IP is static. The hashed LB I cannot find anywhere, and it should be assigned to ssl-LB anyway. How do I assign this properly?
More details:
Here are the contents of web.yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    ...
spec:
  type: LoadBalancer
  loadBalancerIP: <RESERVED STATIC IP>
  ports:
  - port: 443
    targetPort: 7770
  selector:
    ...
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        ...
    spec:
      containers:
      - name: web
        image: gcr.io/<PROJECT>/<IMAGE NAME>
        ports:
        - containerPort: 7770
Since you have not mentioned this already, I'm just assuming you're using Google Container Engine (GKE) for your Kubernetes setup.
In the Service resource manifest, if you set the type to LoadBalancer, Kubernetes on GKE automatically sets up network (L4) load balancing using GCE. You will have to terminate TLS in your pod using your own custom server or something like nginx/apache.
If your goal is to set up an L7 (HTTP/HTTPS) load balancer (which looks to be the case), it is simpler and easier to use the Ingress resource in Kubernetes (available starting with v1.1). GKE automatically sets up GCE HTTP/HTTPS L7 load balancing with this setup.
You will be able to add your TLS certificates which will get provisioned on the GCE load balancer automatically by GKE.
This setup has the following advantages:
Specify services per URL path and port (it uses URL Maps from GCE to configure this).
Set up and terminate SSL/TLS on the GCE load balancer (it uses Target proxies from GCE to configure this).
GKE will automatically also configure the GCE health checks for your services.
Your responsibility will be to handle the backend service logic to handle requests in your pods.
More info available on the GKE page about setting up HTTP load balancing.
Remember that when using GKE, it automatically uses the available GCE load balancer support for both the use cases described above and you will not need to manually set up GCE load balancing.
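For illustration, a minimal Ingress sketch using the API version of that era; it assumes the web Service has been switched to type: NodePort (GKE's L7 Ingress targets NodePort services) and that a TLS secret named web-tls already holds your certificate and key. Both names are placeholders:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress        # hypothetical name
spec:
  tls:
  - secretName: web-tls    # hypothetical secret containing tls.crt/tls.key
  backend:
    serviceName: web
    servicePort: 443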
I set up a cluster via gcloud container engine, where I have deployed my pods with a nodejs server running on them. I am using a LoadBalancer service and a static IP for routing traffic across these instances. Everything works perfectly, but I forgot to specify write/read permission for the google storage api, and my server cannot save files to bucket storage.
According to this answer there is no way to change permissions (scopes) for a cluster after it was created. So I created a new cluster with the correct permissions and re-deployed my containers. I would like to re-use the static IP I received from google: tell the loadBalancer to use the existing IP and remove the old cluster. How do I do that? I really don't want to change DNS.
If you are using a type: LoadBalancer style service then you can use the loadBalancerIP field on the service.
apiVersion: v1
kind: Service
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.10.10
  ...
If you are using an Ingress you can use an annotation on Google Cloud to set the IP address. Here you use the IP address name in Google Cloud rather than the IP address itself.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    "kubernetes.io/ingress.global-static-ip-name": my-ip-name
spec:
  ...
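As a usage note, the named address referenced by that annotation can be reserved up front with gcloud (my-ip-name is a placeholder); note that an Ingress needs a global address, while a type: LoadBalancer Service uses a regional one:
# reserve a named global address for use with an Ingress
gcloud compute addresses create my-ip-name --global
# print the literal IP assigned to the reserved address
gcloud compute addresses describe my-ip-name --global --format='value(address)'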
I have a composite service S.c which consumes 2 atomic services S.a and S.b, where all three services are running in a Kubernetes cluster. Which would be the better pattern?
1) Create Sa, Sb as headless services and let Sc integrate with them via an external load balancer like NGINX+ (which uses a DNS resolver to maintain an updated list of backend pods)
2) Create Sa, Sb with a clusterIP and let Sc access/resolve them via the cluster DNS (skyDNS add-on), which internally leverages iptables-based load balancing to pods
Note: My k8s cluster is running on a custom solution (on-premise VMs).
We have many composite services which consume 1 to many atomic services (like the example above).
Edit: In a few scenarios I would also need to expose services to the external network,
e.g. Sb would need to be accessible both from Sc and from outside.
In such a case it would make more sense to create Sb as a headless service, since otherwise the DNS resolver would always return only the clusterIP address, and all external requests would also get routed to the clusterIP address.
My challenge is that the two scenarios (intra vs. inter) conflict with each other.
Example: nginx-service (which has a clusterIP) and nginx-headless-service (headless):
/ # nslookup nginx-service
Server: 172.16.48.11
Address 1: 172.16.48.11 kube-dns.kube-system.svc.cluster.local
Name: nginx-service
Address 1: 172.16.50.29 nginx-service.default.svc.cluster.local
/ # nslookup nginx-headless-service
Server: 172.16.48.11
Address 1: 172.16.48.11 kube-dns.kube-system.svc.cluster.local
Name: nginx-headless-service
Address 1: 11.11.1.13 wrkfai1asby2.my-company.com
Address 2: 11.11.1.15 imptpi1asby.my-company.com
Address 3: 11.11.1.4 osei623a-sby.my-company.com
Address 4: 11.11.1.6 osei511a-sby.my-company.com
Address 5: 11.11.1.7 erpdbi02a-sbyold.my-company.com
Using DNS + cluster IPs is the simpler approach, and doesn't require exposing your services to the public internet. Unless you want specific load-balancing features from nginx, I'd recommend going with #2.
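As a minimal sketch of option #2 (names, labels, and ports are placeholders): Sa gets a normal ClusterIP Service, and Sc simply resolves it by DNS name, e.g. sa.default.svc.cluster.local; the cluster's iptables rules then balance across Sa's pods:
apiVersion: v1
kind: Service
metadata:
  name: sa                 # hypothetical name for S.a
spec:
  # no "clusterIP: None" here, so the cluster DNS returns one stable virtual IP
  selector:
    app: sa                # placeholder selector
  ports:
  - port: 80
    targetPort: 8080       # placeholder ports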
Is there a way to force an SSL upgrade for incoming connections on the ingress load balancer? Or, if that is not possible, can I disable port :80? I haven't found a documentation page that outlines such an option in the YAML file. Thanks a lot in advance!
https://github.com/kubernetes/ingress-gce#frontend-https
You can block HTTP through the annotation kubernetes.io/ingress.allow-http: "false" or redirect HTTP to HTTPS by specifying a custom backend. Unfortunately, GCE doesn't yet handle redirection or rewriting at L7 directly for you (see https://github.com/kubernetes/ingress-gce#ingress-cannot-redirect-http-to-https).
Update: GCP now handles redirection rules for load balancers, including HTTP to HTTPS. There doesn't appear to be a way to create these through Kubernetes YAML yet.
This was already correctly answered by a comment on the accepted answer. But since the comment is buried I missed it several times.
As of GKE version 1.18.10-gke.600 you can add a k8s frontend config to redirect from http to https.
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: ssl-redirect
spec:
  redirectToHttps:
    enabled: true
# add below to the ingress:
# metadata:
#   annotations:
#     networking.gke.io/v1beta1.FrontendConfig: ssl-redirect
The annotation has changed:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  ...
Here is the annotation change PR:
https://github.com/kubernetes/contrib/pull/1462/files
If you are not bound to the GCLB Ingress Controller, you could have a look at the Nginx Ingress Controller. This controller is different from the built-in one in multiple ways. First and foremost, you need to deploy and manage it yourself. But if you are willing to do so, you get the benefit of not depending on the GCE LB ($20/month) and gain support for IPv6 and websockets.
The documentation states:
By default the controller redirects (301) to HTTPS if TLS is enabled for that ingress. If you want to disable that behaviour globally, you can use ssl-redirect: "false" in the NGINX config map.
The recently released 0.9.0-beta.3 comes with an additional annotation for explicitly enforcing this redirect:
Force redirect to SSL using the annotation ingress.kubernetes.io/force-ssl-redirect
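For illustration, a hedged sketch of that annotation on an Ingress of the same era (the name and the ingress.class value are assumptions for a typical nginx-ingress install):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress                                    # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx"              # route this Ingress to the nginx controller
    ingress.kubernetes.io/force-ssl-redirect: "true"  # force the HTTP-to-HTTPS redirect
spec:
  ...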
Google has responded to our requests and is testing HTTP->HTTPS SSL redirection on their load balancers. Their latest answer said it should be in Alpha sometime before the end of January 2020.
Their comment:
Thank you for your patience on this issue. The feature is currently in testing and we expect to enter Alpha phase before the end of January. Our PM team will have an announcement with more details as we get closer to the Alpha launch.
My fingers are crossed that we'll have a straightforward solution to this very common feature in the near future.
UPDATE (April 2020):
HTTP(S) rewrites are now a Generally Available feature. It's still a bit rough around the edges and unfortunately does not work out of the box with the GCE Ingress Controller. But time will tell, and hopefully a native solution will appear.
A quick update: a FrontendConfig can now be created to configure the ingress. Hope it helps.
Example:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT  # a 301; this field takes an enum name rather than a numeric code
You'll need to make sure that your load balancer supports HTTP and HTTPS.
Worked on this for a long time. In case anyone isn't clear on the posts above: you would rebuild your ingress with the annotation kubernetes.io/ingress.allow-http: "false", then delete your ingress and redeploy. The annotation will have the ingress only create an LB for 443, instead of both 443 and 80.
Then you create a compute HTTP LB by hand, not one managed by GKE.
GUI directions:
1) Create a load balancer, choose HTTP(S) Load Balancing, and click Start configuration.
2) Choose "From Internet to my VMs" and continue.
3) Choose a name for the LB.
4) Leave the backend configuration blank.
5) Under Host and path rules, select Advanced host and path rules with the action set to "Redirect the client to different host/path".
6) Leave the Host redirect field blank.
7) Select Prefix redirect and leave the Path value blank.
8) Choose 308 as the redirect response code.
9) Tick the Enable box for HTTPS redirect.
10) For the Frontend configuration, leave http and port 80; for the IP address, select the static IP address being used for your GKE ingress.
11) Create this LB.
You will now have all http traffic go to this LB and get 308-redirected to your https ingress for GKE. Super simple setup and it works well.
Note: If you just try to delete the port 80 LB that GKE makes (without doing the annotation change and rebuilding the ingress) and then add the new redirect compute LB, it does work, but you will start to see error messages on your Ingress saying: error 400: invalid value for field 'resource.ipAddress': " " is in use and would result in a conflict. GKE is trying to spin up the port 80 LB again and can't, because you already have an LB on port 80 using the same IP. It works, but the error is annoying and GKE keeps trying to build it (I think).
Thanks to the comment of @Andrej Palicka and the page he provided, https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect, I now have an updated and working solution.
First we need to define a FrontendConfig resource and then we need to tell the Ingress resource to use this FrontendConfig.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: myapp-prd
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/v1beta1.FrontendConfig: myapp-frontend-config
spec:
  defaultBackend:
    service:
      name: myapp-app-service
      port:
        number: 80
---
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: myapp-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
You can disable HTTP on your cluster (note that you'll need to recreate your cluster for this change to be applied on the load balancer) and then set up an HTTP-to-HTTPS redirect by creating an additional load balancer on the same IP address.
I spent a couple of hours on the same question and ended up doing what I've just described. It works perfectly.
Redirecting to HTTPS in Kubernetes is somewhat complicated. In my experience, you'll probably want to use an ingress controller such as Ambassador or ingress-nginx to control routing to your services, as opposed to having your load balancer route directly to your services.
Assuming you're using an ingress controller, then:
If you're terminating TLS at the external load balancer and the LB is running in L7 mode (i.e., HTTP/HTTPS), then your ingress controller needs to use X-Forwarded-Proto, and issue a redirect accordingly.
If you're terminating TLS at the external load balancer and the LB is running in TCP/L4 mode, then your ingress controller needs to use the PROXY protocol to do the redirect.
You can also terminate TLS directly in your ingress controller, in which case it has all the necessary information to do the redirect.
Here's a tutorial on how to do this in Ambassador.
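For the ingress-nginx case above, a hedged sketch of the relevant controller ConfigMap keys (the ConfigMap name and namespace depend on how the controller was installed):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace vary by install method
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"    # trust X-Forwarded-Proto set by an L7 load balancer in front
  # use-proxy-protocol: "true"     # use this instead when the LB runs in TCP/L4 mode with PROXY protocol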