REST APIs for Google Kubernetes Engine (GKE)

What are the RESTful APIs available for GKE and how do I call them? Currently I want to integrate GKE with my on-premise tool to deploy containers on GKE. I have all the required images already built and want to trigger an API call in GKE to deploy my Docker image. Which API should I call? What do I provide for authentication, and how?

The list of available Google Kubernetes Engine REST resource APIs can be found in the Google Kubernetes Engine public documentation.
This is the RESTful API for interacting with the cluster itself, not with Kubernetes. To interact with Kubernetes and manage containers, you use kubectl.
Depending on your method of authentication, you can use Google OAuth 2.0 if you are authenticating via the browser, API credentials if you are authenticating from within your code, or kubectl.
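For example, listing the clusters in a project through the GKE REST API with an OAuth 2.0 access token might look like this (a minimal sketch; PROJECT_ID and LOCATION are placeholders, and the access token is taken from the gcloud CLI):
# List GKE clusters via the REST API (placeholders: PROJECT_ID, LOCATION)
PROJECT_ID=my-project
LOCATION=us-central1
curl -s \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://container.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/clusters"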

As @Sunny J. mentioned, in the GKE docs you will only find APIs for interacting with the cluster configuration. If you want to manage workloads, you need to talk to the Kubernetes API server; see the Kubernetes API reference. First you need to get the address and port on which the API server is listening:
kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " "
If you need root access to the cluster, you can create a service account and a cluster role binding to the cluster-admin role:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Now reveal its token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
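Note: on Kubernetes 1.24 and later a token Secret is no longer created automatically for a new ServiceAccount, so the grep above may find nothing; in that case you can request a short-lived token directly (a sketch):
# Issue a short-lived token for the admin-user service account
kubectl -n kube-system create token admin-user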
When making a request, set the Authorization header as follows, replacing <token> with the one you received from the previous command:
Authorization: "Bearer <token>"
Good luck with Kubernetes:)

Related

AWS EKS service account authentication

Is there a different way, other than asking for get-token, to authenticate on EKS?
Today my .kube/config looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: XX==
    server: https://XXX.gr7.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:XXX:cluster/XXX
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:XXX:cluster/XXX
    namespace: xx
    user: arn:aws:eks:us-east-1:XXX:cluster/XXX
  name: arn:aws:eks:us-east-1:XXX:cluster/XXX
users:
- name: arn:aws:eks:us00000-east-1:0:cluster/user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
However, the service I am trying to leverage supports only basic token or client-key-data authentication.
Therefore I want to have a user that I could connect with, for example:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: XX==
    server: https://XXX.gr7.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:XXX:cluster/XXX
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:XXX:cluster/XXX
    namespace: xx
    user: arn:aws:eks:us-east-1:XXX:cluster/XXX
  name: arn:aws:eks:us-east-1:XXX:cluster/XXX
users:
- name: test1
  user:
    client-certificate-data: LS0tLS1CXXXX==
    client-key-data: LS0tLS1CXXXX==
- name: test2
  user:
    token: bGsdfsdoarxxxxx
I tried creating a service account, but I am not able to create a .kube/config from it.
Unlike Azure Kubernetes Service, AWS EKS uses only one type of authentication: webhook token-based authentication, which is implemented by the AWS IAM Authenticator and uses native Kubernetes RBAC policies for authorization.
https://docs.aws.amazon.com/eks/latest/userguide/cluster-auth.html
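If the tool you are integrating only accepts a plain bearer token, one workaround is to fetch the IAM authenticator token yourself and paste it into a kubeconfig user with a token: field, keeping in mind that these tokens expire after roughly 15 minutes. A sketch (my-cluster is a placeholder):
# Print a short-lived bearer token for the cluster
aws eks get-token --cluster-name my-cluster --region us-east-1 \
  --query 'status.token' --output text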
A service account provides an identity for processes that run in a Pod and maps to a ServiceAccount object. When you authenticate to the API server, you identify yourself as a particular user. Kubernetes recognises the concept of a user; however, Kubernetes itself does not have a User API.
In AWS, if your pod needs to access any AWS services, you create a service account with the required IAM roles and attach it to the pod. In that case, you can't use the service account for client-side authentication.
https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
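For reference, associating an IAM role with a pod's service account (IRSA) is done with an annotation on the ServiceAccount, roughly like this (a sketch; the account ID and role name are placeholders):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  namespace: default
  annotations:
    # IAM role assumed by pods running under this service account (placeholder ARN)
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app-role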

How to set IAP (Identity Aware Proxy) authentication for back-end API service running on a GKE cluster

I have an application with React on the front end and a Node service on the back end. The app is deployed in a GKE cluster. Both apps are exposed as NodePort Services, and the fan-out ingress paths are configured as follows:
- host: example.com
  http:
    paths:
    - backend:
        serviceName: frontend-service
        servicePort: 3000
      path: /*
    - backend:
        serviceName: backend-service
        servicePort: 5000
      path: /api/*
I have enabled authentication using IAP for both services. When enabling IAP for each Kubernetes service, a new client ID and client secret are created individually. But I need to authenticate to the back-end API from the front end, and since the two services have different OAuth clients this does not work: when I call the back-end API from the front end, authentication fails because the cookie issued for the front end does not match the back-end service.
What is the best way to handle this scenario? Is there a way to use the same client credentials for both services, and if so, is that the right way to do it? Or is there a way to authenticate the REST API using IAP directly?
If IAP is set up using BackendConfig, then you can have two separate BackendConfig objects for the frontend and backend applications, but both of them can use the same secret (secretName) for oauthclientCredentials.
For frontend app
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: frontend-iap-config
  namespace: namespace-1
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: common-iap-oauth-credentials
For backend app
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-iap-config
  namespace: namespace-1
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: common-iap-oauth-credentials
Then reference these BackendConfig objects from the respective Kubernetes Service objects.
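For instance, the front-end Service might reference its BackendConfig like this (a sketch; frontend-service and port 3000 come from the question above, while the selector label is assumed):
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  annotations:
    # Attach the IAP-enabled BackendConfig to this Service's ports
    cloud.google.com/backend-config: '{"default": "frontend-iap-config"}'
spec:
  type: NodePort
  selector:
    app: frontend   # assumed label
  ports:
  - port: 3000
    targetPort: 3000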

How to Enable IAP with GKE

I created a basic React/Express app with IAP authentication and deployed it to Google App Engine, and everything works as expected. Now I'm moving from App Engine to Kubernetes; everything works except the user authentication with IAP on Kubernetes. How do I enable IAP user authentication with Kubernetes?
Do I have to create a Kubernetes Secret to get user authentication to work? https://cloud.google.com/iap/docs/enabling-kubernetes-howto
Authentication code in my server.js: https://cloud.google.com/nodejs/getting-started/authenticate-users#cloud-identity-aware-proxy
In order for Cloud IAP to work with Kubernetes, you will need a group of one or more GKE instances served by an HTTPS load balancer. The load balancer is created automatically when you create an Ingress object in a GKE cluster.
Also required for enabling Cloud IAP on GKE: a domain name registered to the address of your load balancer, and application code to verify that all requests have an identity.
Once these requirements have been met, you can move forward with enabling Cloud IAP on Kubernetes Engine. This includes the steps to set up Cloud IAP access and create OAuth credentials.
You will need to create a Kubernetes Secret to configure the BackendConfig for Cloud IAP. The BackendConfig uses a Kubernetes Secret to wrap the OAuth client that you create when enabling Cloud IAP.
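A minimal sketch of creating that Secret from your OAuth client credentials (the client ID and secret values are placeholders you replace with the ones from the Cloud Console):
# Wrap the OAuth client in a Kubernetes Secret referenced by the BackendConfig
kubectl create secret generic oauth-client-secret \
  --from-literal=client_id=<YOUR_OAUTH_CLIENT_ID> \
  --from-literal=client_secret=<YOUR_OAUTH_CLIENT_SECRET>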
You need to add an Ingress and enable IAP on the backend service
Create a BackendConfig object:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: app-bc
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: oauth-client-secret
attach it to the Service:
apiVersion: v1
kind: Service
metadata:
  name: app-service
  annotations:
    cloud.google.com/backend-config: '{"default": "app-bc"}'
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
and then create the ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  tls:
  - secretName: ingress-tls-secret
  rules:
  - host: ""
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: app-service
            port:
              number: 80
You can find the full tutorial here: hodo.dev/posts/post-26-gcp-iap/

redis cluster K8S connection

I'm running a redis-cluster in K8S:
kubectl get services -o wide
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)              AGE   SELECTOR
redis-cluster   ClusterIP   10.97.31.167   <none>        6379/TCP,16379/TCP   22h   app=redis-cluster
When connecting to the cluster IP from the node itself, the connection works fine:
redis-cli -h 10.97.31.167 -c
10.97.31.167:6379> set some_val 1
-> Redirected to slot [11662] located at 10.244.1.9:6379
OK
Is there some way I can access the Redis server from my local development VM without exposing every single pod as a service?
When deploying my application to run inside the cluster itself (later, in production), should I use the cluster IP too, or should I use the internal IPs of the pods as the master IPs of the redis-master servers?
Simple forwarding to the remote machine won't work:
devvm: ssh -L 6380:10.97.31.167:6379 -i user.pem admin@k8snode.com
On dev VM:
root#devvm:~# redis-cli -h 127.0.0.1 -p 6380 -c
127.0.0.1:6380> set jaheller 1
-> Redirected to slot [11662] located at 10.244.1.9:6379
The Redis connection times out at this point.
I believe that in all scenarios you just need to expose the service using a Kubernetes Service object of type:
ClusterIP (in case you are consuming it inside the cluster)
NodePort (for external access)
LoadBalancer (for public access, if you are on a cloud provider)
NodePort with an external load balancer (for public external access if you are on local infrastructure)
You don't need to worry about individual pods; the Service will take care of them.
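For example, a NodePort Service in front of the existing redis-cluster pods could look roughly like this (a sketch; the selector comes from the kubectl output above, the nodePort value is an arbitrary choice):
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-external
spec:
  type: NodePort
  selector:
    app: redis-cluster
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
    nodePort: 30079   # reachable on <node-IP>:30079 from outside the cluster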
Docs:
https://kubernetes.io/docs/concepts/services-networking/service/
I don't think you need any port redirection. You have to deploy an ingress controller on your cluster though, e.g. the NGINX ingress controller.
Then you just set up a single Ingress with exposed access, which will serve the cluster traffic.
Here is an example of an Ingress resource to access the cluster service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-cluster-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: redis-cluster
          servicePort: 6379
You may check a step-by-step instruction

How to re-assign static IP address from one cluster to another in the google container engine

I set up a cluster via gcloud container engine, where I have deployed my pods with a Node.js server running on them. I am using a LoadBalancer service and a static IP for routing the traffic across these instances. Everything works perfectly, but I forgot to specify write/read permission for the Google Storage API, and my server cannot save files to the bucket storage.
According to this answer there is no way I can change permissions (scopes) for a cluster after it was created. So I created a new cluster with the correct permissions and re-deployed my containers. I would like to re-use the static IP I received from Google, tell the LoadBalancer to use the existing IP, and remove the old cluster. How do I do that? I really don't want to change DNS.
If you are using a type: LoadBalancer style Service, you can use the loadBalancerIP field on the Service.
apiVersion: v1
kind: Service
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.10.10
  ...
If you are using an Ingress you can use an annotation on Google Cloud to set the IP address. Here you use the IP address name in Google Cloud rather than the IP address itself.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    "kubernetes.io/ingress.global-static-ip-name": my-ip-name
spec:
  ...
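To find the name of an already reserved address for this annotation, or to promote the load balancer's current ephemeral IP to a reserved static one, something like the following should work (a sketch; my-ip-name, the IP, and the region are placeholders, and an Ingress-managed HTTP(S) load balancer needs a global address created with --global instead of --region):
# List reserved static addresses and their names
gcloud compute addresses list

# Promote an existing ephemeral IP to a reserved static address (regional example)
gcloud compute addresses create my-ip-name \
  --addresses 10.10.10.10 \
  --region us-central1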