How to Enable IAP with GKE - express

I created a basic React/Express app with IAP authentication, deployed it to Google App Engine, and everything works as expected. Now I'm moving from the App Engine deployment to Kubernetes; everything works except the user authentication with IAP on Kubernetes. How do I enable IAP user authentication with Kubernetes?
Do I have to create a Kubernetes Secret to get user authentication to work? https://cloud.google.com/iap/docs/enabling-kubernetes-howto
Authentication code in my server.js: https://cloud.google.com/nodejs/getting-started/authenticate-users#cloud-identity-aware-proxy

In order for Cloud IAP to work with Kubernetes, you will need a group of one or more GKE instances, served by an HTTPS load balancer. The load balancer is created automatically when you create an Ingress object in a GKE cluster.
Also required for enabling Cloud IAP in GKE: a domain name registered to the address of your load balancer, and app code to verify that all requests have an identity.
Once these requirements have been met, you can move forward with enabling Cloud IAP on Kubernetes Engine. This includes the steps to set up Cloud IAP access and to create the OAuth credentials.
You will need to create a Kubernetes Secret to configure the BackendConfig for Cloud IAP. The BackendConfig uses the Secret to wrap the OAuth client that you create when enabling Cloud IAP.
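For example, the Secret can be created along these lines (a sketch; the Secret name must match whatever your BackendConfig references, and the client ID and secret come from the OAuth client you created when enabling IAP):

kubectl create secret generic oauth-client-secret \
    --from-literal=client_id=<YOUR_OAUTH_CLIENT_ID> \
    --from-literal=client_secret=<YOUR_OAUTH_CLIENT_SECRET>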

You need to add an Ingress and enable IAP on the backend service.
Create a BackendConfig object:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: app-bc
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: oauth-client-secret
Attach it to the Service:
apiVersion: v1
kind: Service
metadata:
  name: app-service
  annotations:
    cloud.google.com/backend-config: '{"default": "app-bc"}'
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
Then create the Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  tls:
  - secretName: ingress-tls-secret
  rules:
  - host: ""
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: app-service
            port:
              number: 80
You can find the full tutorial here: hodo.dev/posts/post-26-gcp-iap/
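To verify the setup, you can check that the BackendConfig was accepted and that IAP is actually enabled on the backend service GKE generates for your Service (a sketch; the backend service name is auto-generated, so list them first to find it):

kubectl get backendconfig app-bc -o yaml
gcloud compute backend-services list
gcloud compute backend-services describe <generated-backend-service-name> \
    --global --format='value(iap.enabled)'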

Related

AKS Istio Ingress gateway Certificate is not valid

I have an AKS cluster with Istio installed, and I'm trying to deploy a containerised web API with TLS.
The API runs and is accessible but is showing as Not secure.
I have followed the directions on Istio's website to set this up, so I'm not sure what I've missed.
I have created the secret with the command
kubectl create secret tls mycredential -n istio-system --key mycert.key --cert mycert.crt
and set up a gateway as follows:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: mycredential # must be the same as secret
    hosts:
    - 'dev.api2.mydomain.com'
The following virtual service
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapi
  namespace: mynamespace
spec:
  hosts:
  - "dev.api2.mydomain.com"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        prefix: "/myendpoint"
    rewrite:
      uri: " "
    route:
    - destination:
        port:
          number: 8080
        host: myapi
and service
apiVersion: v1
kind: Service
metadata:
  name: myapi
  namespace: mynamespace
  labels:
    app: myapi
    service: myapi
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 80
  selector:
    app: myapi
The container exposes port 80.
Can someone please point me in the right direction, because I'm not sure what I've done wrong.
I managed to resolve the issue by setting up cert-manager and pointing it at Let's Encrypt to generate the certificate, rather than using the pre-purchased one I was trying to add manually.
Although it took some searching to find out how to configure this correctly, it is now working and actually saves having to purchase certificates, so win-win :)
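For anyone hitting the same issue, this is roughly the shape of the cert-manager setup that worked. It is a sketch assuming cert-manager is installed and the HTTP-01 challenge is routed through the Istio ingress; the email, secret names, and the istio solver class are placeholders, not the exact config:

# Sketch: ClusterIssuer for Let's Encrypt production (placeholder email).
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: me@mydomain.com              # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: istio                  # assumption: challenge routed via Istio
---
# Sketch: Certificate in istio-system so the Gateway can reference it
# through tls.credentialName.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mycredential
  namespace: istio-system
spec:
  secretName: mycredential              # matches the Gateway's credentialName
  dnsNames:
  - dev.api2.mydomain.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer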

How to configure https on deployment yaml file for asp.net core app locally in minikube?

I have an ASP.NET Core app that I want to configure with HTTPS in my local Kubernetes cluster using minikube.
The deployment yaml file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-volume
  labels:
    app: kube-volume-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
      - name: ckubevolume
        image: kubevolume
        imagePullPolicy: Never
        ports:
        - containerPort: 80
        - containerPort: 443
        env:
        - name: ASPNETCORE_ENVIRONMENT
          value: Development
        - name: ASPNETCORE_URLS
          value: https://+:443;http://+:80
        - name: ASPNETCORE_HTTPS_PORT
          value: '443'
        - name: ASPNETCORE_Kestrel__Certificates__Default__Password
          value: mypass123
        - name: ASPNETCORE_Kestrel__Certificates__Default__Path
          value: /app/https/aspnetapp.pfx
        volumeMounts:
        - name: ssl
          mountPath: "/app/https"
      volumes:
      - name: ssl
        configMap:
          name: game-config
You can see I have added environment variables for HTTPS in the YAML file.
I also created a service for this deployment. The YAML file of the service is:
apiVersion: v1
kind: Service
metadata:
  name: service-1
spec:
  type: NodePort
  selector:
    component: web
  ports:
  - name: http
    protocol: TCP
    port: 100
    targetPort: 80
  - name: https
    protocol: TCP
    port: 200
    targetPort: 443
But unfortunately the app does not open through the service when I run the minikube service service-1 command.
However, when I remove the HTTPS environment variables, the app opens through the service. These are the lines which, when removed, make the app open:
- name: ASPNETCORE_URLS
  value: https://+:443;http://+:80
- name: ASPNETCORE_HTTPS_PORT
  value: '443'
- name: ASPNETCORE_Kestrel__Certificates__Default__Password
  value: mypass123
- name: ASPNETCORE_Kestrel__Certificates__Default__Path
  value: /app/https/aspnetapp.pfx
I also confirmed with the shell that the certificate is present in the /app/https folder.
What am I doing wrong?
I think your approach does not fit well with the architecture of Kubernetes. A TLS certificate (for HTTPS) is coupled to a hostname.
I would recommend one of two different approaches:
Expose your app with a Service of type: LoadBalancer
Expose your app with an Ingress resource
Expose your app with a Service of type LoadBalancer
This is typically called a network load balancer, as it exposes your app for TCP or UDP directly.
See LoadBalancer access in the Minikube documentation. But beware that your app gets an external address from your LoadBalancer, and your TLS certificate probably has to match that.
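A minimal sketch of that variant, reusing the selector and HTTPS port from the Service above (with minikube, the external address is provided by minikube tunnel):

apiVersion: v1
kind: Service
metadata:
  name: service-1
spec:
  type: LoadBalancer        # external address comes from "minikube tunnel" locally
  selector:
    component: web
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443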
Expose your app with an Ingress resource
This is the most common approach for microservices in Kubernetes. In addition to your Service of type NodePort, you also need to create an Ingress resource for your app.
The cluster needs an Ingress controller, and the gateway will handle your TLS certificate instead of your app.
See How to use custom TLS certificate with ingress addon for how to configure both Ingress and TLS certificate in Minikube.
I would recommend going this route.
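As an illustration, a minimal Ingress of that kind could look like the following. This is a sketch with assumed names: it presumes the minikube ingress addon is enabled, that a TLS secret named tls-secret was created from your certificate (kubectl create secret tls tls-secret --key <key-file> --cert <cert-file>), and uses a placeholder hostname myapp.local:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kube-volume-ingress   # hypothetical name
spec:
  tls:
  - hosts:
    - myapp.local             # placeholder hostname; must match the certificate
    secretName: tls-secret
  rules:
  - host: myapp.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-1
            port:
              number: 100     # the Service's plain-HTTP port; TLS terminates at the Ingress

With this layout the app itself only serves plain HTTP on port 80, and the Ingress controller terminates TLS.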

How to set IAP (Identity Aware Proxy) authentication for back-end API service running on a GKE cluster

I have an application with React in the front-end and a Node service in the back-end. The app is deployed in a GKE cluster. Both apps are exposed as NodePort Services, and the fan-out ingress path is done as follows:
- host: example.com
  http:
    paths:
    - backend:
        serviceName: frontend-service
        servicePort: 3000
      path: /*
    - backend:
        serviceName: backend-service
        servicePort: 5000
      path: /api/*
I have enabled authentication using IAP for both services. When enabling IAP for the two Kubernetes services, a new client ID and client secret are created for each individually. But I need to authenticate calls to the back-end API from the front-end, and since they have two different OAuth clients this is not possible: when I call the back-end API service from the front-end, the authentication fails because the cookies provided by the FE do not match in the back-end service.
What is the best way to handle this scenario? Is there a way to use the same client credentials for both these services, and if so, is that the right way to do it? Or is there a way to authenticate the REST API using IAP directly?
If IAP is set up using BackendConfig, then you can have two separate BackendConfig objects for the frontend and backend applications, but both of them can use the same secret (secretName) for oauthclientCredentials.
For the frontend app:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: frontend-iap-config
  namespace: namespace-1
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: common-iap-oauth-credentials
For the backend app:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-iap-config
  namespace: namespace-1
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: common-iap-oauth-credentials
Then reference these BackendConfigs from the respective Kubernetes Service objects, as sketched below.
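For completeness, a hedged sketch of what that reference looks like on one of the Services (the annotation key is the standard GKE one; the service name, selector, and port here are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: namespace-1
  annotations:
    # ties this Service's generated backend to the IAP-enabled BackendConfig
    cloud.google.com/backend-config: '{"default": "backend-iap-config"}'
spec:
  type: NodePort
  selector:
    app: backend              # assumption
  ports:
  - port: 5000
    targetPort: 5000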

Unable to successfully setup TLS on a Multi-Tenant GKE+Istio with LetsEncrypt (via Cert Manager)

I'm trying to configure TLS (LetsEncrypt) on a multi-tenant GKE+Istio setup.
I mainly followed this guide -> Full Isolation in Multi-Tenant SAAS with Kubernetes & Istio for setting up the multi-tenancy in GKE+Istio, which I was able to successfully pull off. I'm able to deploy simple apps in their separate namespaces, which are accessible through their respective subdomains.
I then tried to move forward and set up TLS with LetsEncrypt. For this I mainly followed a different guide, which can be found here -> istio-gke. But unfortunately, following this guide didn't produce the result I wanted: when I was done with it, LetsEncrypt wasn't even issuing certificates to my deployment or domain.
Thus I tried to follow a different guide, which is as follows -> istio-gateway-tls-setup. Here I managed to get LetsEncrypt to issue a certificate for my domain, but when I tried to test it out with openssl or other online SSL checkers, it says that I still am not communicating securely.
Below are the results when I try to describe the configurations of my certificates, issuer & gateway:
Certificate: kubectl -n istio-system describe certificate istio-gateway
Issuer: kubectl -n istio-system describe issuer letsencrypt-prod
Gateway: kubectl -n istio-system describe gateway istio-gateway
And here are the dry-run results for my helm install <tenant>:
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/cjcabero/projects/aqt-ott-msging-dev/gke-setup/helmchart
NAME: tenanta
LAST DEPLOYED: Wed Feb 17 21:15:08 2021
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
frontend:
  image:
    pullPolicy: IfNotPresent
    repository: paulbouwer/hello-kubernetes
    tag: "1.8"
  ports:
    containerPort: 8080
  service:
    name: http
    port: 80
    type: ClusterIP
HOOKS:
MANIFEST:
---
# Source: helmchart/templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenanta
  labels:
    istio-injection: enabled
---
# Source: helmchart/templates/frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: tenanta
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: frontend
---
# Source: helmchart/templates/frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: tenanta
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Hello tenanta
---
# Source: helmchart/templates/virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tenanta-frontend-ingress
  namespace: istio-system
spec:
  hosts:
  - tenanta.cjcabero.dev
  gateways:
  - istio-gateway
  http:
  - route:
    - destination:
        host: frontend.tenanta.svc.cluster.local
        port:
          number: 80
I don't understand how, even though LetsEncrypt seems to be able to issue the certificate for my domain, it still isn't communicating securely.
Google Domains even managed to find that a certificate was issued for the domain in its Transparency Report.
Anyway, I'm not sure if this could help, but I also tried to check the domain with an online SSL checker and here are the results -> https://check-your-website.server-daten.de/?q=cjcabero.dev.
By the way, I did use Istio on GKE, which results in Istio v1.4.10 & Kubernetes v1.18.15-gke.1100.

Unable to access Kibana after successful authentication with IBM Cloud AppID

I have a k8s deployment of Kibana in IBM Cloud. It is exposed through a ClusterIP k8s Service and a k8s Ingress, and it is accessible for a single Cloud Directory user authenticated through IBM Cloud App ID.
Kubernetes correctly redirects to the App ID login screen. The issue is that the Kibana deployment is not accessible after successful App ID authentication. I get 301 Moved Permanently in a loop.
The same k8s deployment as above is exposed through a k8s NodePort and works fine.
The same setup as above works correctly for a simple hello-world app with authentication.
I followed this tutorial.
In App ID Authentication Settings, the redirect URL is:
https://our-domain/app/kibana/appid_callback
Here are the relevant portions of the k8s definitions:
---
kind: Service
apiVersion: v1
metadata:
  name: kibana-sec
  namespace: default
  labels:
    app: kibana-sec
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 5601
  selector:
    app: kibana-sec
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.bluemix.net/redirect-to-https: "True"
    ingress.bluemix.net/appid-auth: "bindSecret=<our-bindSecret> namespace=default requestType=web serviceName=kibana-sec"
  ...
spec:
  rules:
  - host: <our-domain>
    http:
      paths:
      ...
      - backend:
          serviceName: kibana-sec
          servicePort: 8080
        path: /app/kibana/
  tls:
  - hosts:
    - <our-domain>
    secretName: <our-secretName>
status:
  loadBalancer:
    ingress:
    - ip: <IPs>
    - ip: <IPs>
There is no "ingress.bluemix.net/rewrite-path" annotation for our service.