AWS EKS service account authentication - amazon-eks

Is there a different way other than asking for get-token to authenticate on EKS?
Today my .kube/config looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: X==
    server: https://XXX.gr7.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:XXX:cluster/XXX
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:XXX:cluster/XXX
    namespace: xx
    user: arn:aws:eks:us-east-1:XXX:cluster/XXX
  name: arn:aws:eks:us-east-1:XXX:cluster/XXX
users:
- name: arn:aws:eks:us00000-east-1:0:cluster/user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
However, the service I am trying to leverage supports only basic token or client-key-data authentication. Therefore I want to have a user that I could connect with, for example:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: X==
    server: https://XXX.gr7.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:XXX:cluster/XXX
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:XXX:cluster/XXX
    namespace: xx
    user: arn:aws:eks:us-east-1:XXX:cluster/XXX
  name: arn:aws:eks:us-east-1:XXX:cluster/XXX
users:
- name: test1
  user:
    client-certificate-data: LS0tLS1CXXXX==
    client-key-data: LS0tLS1CXXXX==
- name: test2
  user:
    token: bGsdfsdoarxxxxx
I tried creating a ServiceAccount, but I am not able to create a .kube/config from it.

Unlike Azure Kubernetes Service, AWS EKS uses only one type of authentication: webhook token-based authentication, which is implemented by the AWS IAM Authenticator and uses native Kubernetes RBAC policies for authorization.
https://docs.aws.amazon.com/eks/latest/userguide/cluster-auth.html
A Service Account provides an identity for processes that run in a Pod and maps to a ServiceAccount object. When you authenticate to the API server, you identify yourself as a particular user. Kubernetes recognises the concept of a user; however, Kubernetes itself does not have a User API.
In AWS, if your pod needs to access any AWS services, you create a service account with the required IAM roles and attach it to the pod. In that case, you can't use the service account for client-side authentication.
https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
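That said, a token-style user like the question asks for can be built from a ServiceAccount token. This is only a sketch, not an official recommendation: it authenticates as the ServiceAccount (not as an IAM user), so it must be granted RBAC permissions separately; the names my-sa, my-sa-user, and default are illustrative, and it assumes a cluster version that still auto-creates token Secrets for ServiceAccounts (pre-1.24):

```shell
# Find the ServiceAccount's token Secret and decode the token
SECRET=$(kubectl -n default get sa my-sa -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n default get secret "$SECRET" \
  -o jsonpath='{.data.token}' | base64 --decode)

# Add a token-based user and a context for it to the kubeconfig
kubectl config set-credentials my-sa-user --token="$TOKEN"
kubectl config set-context my-sa-context \
  --cluster=arn:aws:eks:us-east-1:XXX:cluster/XXX \
  --user=my-sa-user
```

The resulting user entry contains only token:, which is the shape the external service expects.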

Related

How to set IAP (Identity Aware Proxy) authentication for back-end API service running on a GKE cluster

I have an application that has React on the front-end and a Node service on the back-end. The app is deployed in the GKE cluster. Both apps are exposed as NodePort Services, and the fan-out ingress paths are defined as follows:
- host: example.com
  http:
    paths:
    - backend:
        serviceName: frontend-service
        servicePort: 3000
      path: /*
    - backend:
        serviceName: backend-service
        servicePort: 5000
      path: /api/*
I have enabled authentication using IAP for both services. When enabling IAP for the two Kubernetes services, a new client ID and client secret are created for each. But I need to authenticate to the back-end API from the front-end, and since they have two different OAuth clients this is not possible: when I call the back-end API service from the front-end, authentication fails because the cookies provided by the front-end do not match in the back-end service.
What is the best way to handle this scenario? Is there a way to use the same client credentials for both services, and if so, is that the right way to do it? Or is there a way to authenticate the REST API using IAP directly?
If IAP is set up using BackendConfig, then you can have two separate BackendConfig objects for the frontend and backend applications, but both of them can use the same secret (secretName) for oauthclientCredentials.
For the frontend app:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: frontend-iap-config
  namespace: namespace-1
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: common-iap-oauth-credentials
For the backend app:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-iap-config
  namespace: namespace-1
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: common-iap-oauth-credentials
Then refer to these BackendConfigs from the respective Kubernetes Service objects.
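The reference from the Service side is an annotation on each Service; a sketch for the frontend Service (the service name, port, and selector are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  annotations:
    # attach the shared-credentials BackendConfig to this Service
    cloud.google.com/backend-config: '{"default": "frontend-iap-config"}'
spec:
  type: NodePort
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    app: frontend
```

The backend Service gets the same annotation pointing at backend-iap-config; because both BackendConfigs reference the same secret, both load-balancer backends share one OAuth client.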

How to Enable IAP with GKE

I created a basic React/Express app with IAP authentication and deployed it to Google App Engine, and everything works as expected. Now I'm moving from App Engine to Kubernetes; all things work except user authentication with IAP on Kubernetes. How do I enable IAP user authentication with Kubernetes?
Do I have to create a kubernetes secret to get user authentication to work? https://cloud.google.com/iap/docs/enabling-kubernetes-howto
Authentication code in my server.js https://cloud.google.com/nodejs/getting-started/authenticate-users#cloud-identity-aware-proxy
In order for Cloud IAP to work with Kubernetes, you will need a group of one or more GKE instances, served by an HTTPS load balancer. The load balancer should be created automatically when you create an Ingress object in a GKE cluster.
Also required for enabling Cloud IAP in GKE: a domain name registered to the address of your load balancer, and app code to verify that all requests have an identity.
Once these requirements have been met, you can move forward with enabling Cloud IAP on Kubernetes Engine. This includes the steps to set up Cloud IAP access and to create OAuth credentials.
You will need to create a Kubernetes Secret to configure BackendConfig for Cloud IAP. The BackendConfig uses a Kubernetes Secret to wrap the OAuth client that you create when enabling Cloud IAP.
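Creating that Secret from the OAuth client could look like this (the client ID and secret values are placeholders for the ones shown in the Cloud Console):

```shell
# Wrap the OAuth client credentials in a Secret the BackendConfig can reference
kubectl create secret generic oauth-client-secret \
  --from-literal=client_id=YOUR_CLIENT_ID \
  --from-literal=client_secret=YOUR_CLIENT_SECRET
```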
You need to add an Ingress and enable IAP on the backend service
Create a BackendConfig object:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: app-bc
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: oauth-client-secret
then attach it to the Service:
apiVersion: v1
kind: Service
metadata:
  name: app-service
  annotations:
    cloud.google.com/backend-config: '{"default": "app-bc"}'
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
and then create the ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  tls:
  - secretName: ingress-tls-secret
  rules:
  - host: ""
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: app-service
            port:
              number: 80
You can find the full tutorial here: hodo.dev/posts/post-26-gcp-iap/

Unable to create a PodPreset on EKS cluster

Environment:
AWS managed Kubernetes cluster (EKS)
Action:
Create a PodPreset object by applying the following:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: sample
spec:
  selector:
    matchLabels:
      app: microservice
  env:
  - name: test_env
    value: "6379"
  volumeMounts:
  - name: shared
    mountPath: /usr/shared
  volumes:
  - name: shared
    emptyDir: {}
Observation:
unable to recognize "podpreset.yaml": no matches for kind "PodPreset" in version "settings.k8s.io/v1alpha1"
Looks like that the API version settings.k8s.io/v1alpha1 is not supported by default by EKS.
I'm using EKS as well; I ran these commands to check it out:
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
Then I run:
curl localhost:8001/apis
And clearly in my case settings.k8s.io/v1alpha1 was not supported. I recommend running the same checks.
Also, checking here, it's mentioned that
You should have enabled the API type settings.k8s.io/v1alpha1/podpreset
I don't know how settings.k8s.io/v1alpha1 can be enabled in EKS.
EKS does not enable any Kubernetes alpha features, and as of today PodPreset is an alpha feature. So if you want to achieve something like the above, you will have to create a mutating admission webhook, which is now supported by EKS. That is heavier than necessary for simple use cases, though; PodPreset can handle most of the simple ones, so hopefully it will enter the beta phase soon.
As of 03.11.2020 there is still an open GitHub request for this.
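For reference, a mutating admission webhook is registered with a MutatingWebhookConfiguration object. A minimal sketch (the service name, namespace, path, and CA bundle are placeholders, and the webhook server itself must be implemented and deployed separately):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-defaults-webhook
webhooks:
- name: pod-defaults.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: pod-defaults        # webhook server Service (placeholder)
      namespace: default
      path: /mutate
    caBundle: <base64-encoded CA>
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
```

The webhook server receives an AdmissionReview for each matching Pod create and returns a JSON patch, which is how it can inject env vars and volume mounts the way a PodPreset would.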

REST APIs for Google Kubernetes Engine (GKE)

What are the RESTful APIs available for GKE, and how do I call them? Currently I want to integrate GKE with my on-premise tool to deploy containers on GKE. I have all the required images already built and want to trigger an API call to GKE to deploy my Docker image. Which API should I call? What do I provide for authentication, and how?
The list of available Google Kubernetes Engine REST resource APIs can be found in the Google Kubernetes Engine public docs.
This is the RESTful API to interact with the cluster itself, not with Kubernetes. To interact with Kubernetes for container management, you use kubectl.
And depending on your method of authentication, you can use Google OAuth 2.0 if you are authenticating via the browser, the APIs if you are authenticating within your code, or kubectl.
As @Sunny J. mentioned, in the GKE docs you can only find APIs to interact with the cluster configuration. If you want to manage workloads, you need to interact with the Kubernetes API server. This is the API reference. First you need to get the address and port on which the API server is listening:
kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " "
If you need root access to the cluster, you can create a service account and a cluster role binding to cluster-admin:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Now reveal its token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
When making a request, set the Authorization header this way, replacing the token with the one you received using the previous command:
Authorization: "Bearer <token>"
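Putting it together, a request to the API server could look like this (the server address and token are placeholders; -k skips certificate verification — pass the cluster CA instead in real use):

```shell
# List pods in the default namespace, authenticating with the bearer token
curl -k -H "Authorization: Bearer <token>" \
  "https://<api-server-address>/api/v1/namespaces/default/pods"
```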
Good luck with Kubernetes:)

Docker Containers as Jenkins Slave

I want to know if the following scenarios are possible; please help me out:
Scenario 1:
My local system is the Jenkins master. Every time I need a slave to run my automation test script, a Docker container spins up as a Jenkins slave, my script is executed on the slave, and after the execution is completed the container is destroyed.
Is this possible? I want to keep my local system as the Jenkins master.
Scenario 2:
Can I spin up multiple containers as Jenkins slaves with my local system as the Jenkins master?
Thanks
Scenario 1 is at least covered by the JENKINS/Kubernetes Plugin: see its README
Based on the Scaling Docker with Kubernetes article, it automates the scaling of Jenkins agents running in Kubernetes.
But that requires a Kubernetes setup, which means, in your case (if you have only one machine), a minikube.
I have written a post on Scenario 1 (with Kubernetes) at this link:
Jenkins kubernetes plugin not working
Here is the post.
Instead of using certificates, I suggest you use credentials in Kubernetes by creating a ServiceAccount:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
and deploying Jenkins using that ServiceAccount:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: jenkins
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      serviceAccountName: jenkins
      ....
I show you my screenshots for the Kubernetes plugin (note the Jenkins tunnel for the JNLP port; 'jenkins' is the name of my Kubernetes service):
For credentials:
Then fill in the fields (the ID will be autogenerated, and the description will be shown in the credentials listbox), but be sure to have created the ServiceAccount in Kubernetes as I said before:
My instructions are for a Jenkins master inside Kubernetes. If you want it outside the cluster (but slaves inside), I think you have to use simple login/password credentials.
I hope it helps you.
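A footnote on the credentials step: the Jenkins credential needs the ServiceAccount's token as its secret text. Retrieving it could look like this (a sketch, assuming a cluster version that still auto-creates token Secrets for ServiceAccounts):

```shell
# Print the decoded token of the 'jenkins' ServiceAccount's Secret
kubectl get secret \
  $(kubectl get sa jenkins -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode
```

Paste the printed token into a secret-text credential in Jenkins and select it in the Kubernetes cloud configuration.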