Use assume-role to create an AWS EKS cluster

I read that we should use assume-role to create an AWS EKS cluster. In the documentation, I can only find the use of the EKS service role, and I don't see how to create a cluster with an assumed role. Am I missing anything?

Related

In Amazon EKS - how to view logs prior to EKS Fargate node creation and while the pod is starting up

I'm using Amazon EKS on Fargate. I can see container logs using a Fluent Bit sidecar etc. with no problem at all. But those logs ONLY show what is happening inside the container AFTER it has started up.
I have fully enabled AWS EKS cluster logging.
Now I would like to see logs in CloudWatch equivalent to the output of the
kubectl describe pod
command.
I have searched the ENTIRE CloudWatch cluster-name log group and am not able to find logs like
"pulling image into container"
"efs not mounted"
etc.
I want to see logs in CloudWatch from before the actual container creation stage.
Is it possible at all using EKS Fargate?
Thanks a bunch
You can use Container Insights, which collects metrics as performance log events using the embedded metric format. The logs are stored in CloudWatch Logs. CloudWatch generates several metrics automatically from the logs, which you can view in the CloudWatch console.
In Amazon EKS and Kubernetes, Container Insights uses a containerized version of the CloudWatch agent to discover all of the running containers in a cluster. It then collects performance data at every layer of the performance stack.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-view-metrics.html
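Once Container Insights is enabled, one way to inspect the pod-level performance log events from the CLI is with aws logs filter-log-events. This is a sketch: the log group follows the /aws/containerinsights/<cluster-name>/performance naming convention, and my-cluster is a placeholder for your cluster name.

```shell
# Placeholder cluster name; Container Insights writes performance log events
# to /aws/containerinsights/<cluster-name>/performance in embedded metric format.
aws logs filter-log-events \
    --log-group-name "/aws/containerinsights/my-cluster/performance" \
    --filter-pattern '{ $.Type = "Pod" }' \
    --max-items 10
```

Note that these are metric-oriented events; the kubectl describe pod lifecycle messages (image pulls, mount errors) come from the Kubernetes event stream, which Container Insights does not capture by itself.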

EMR serverless cannot connect to s3 in another region

I have an EMR Serverless app that cannot connect to an S3 bucket in another region. Is there a workaround for that? Maybe a parameter to set in the job parameters or Spark parameters when submitting a new job.
The error is this:
ExitCode: 1. Last few exceptions: Caused by: java.net.SocketTimeoutException: connect timed out Caused by: com.amazon.ws.emr.hadoop.fs.shaded.org.apache.http.conn.ConnectTimeoutException
In order to connect to an S3 bucket in another region or access external services, the EMR Serverless application needs to be created with a VPC.
This is mentioned on the considerations page:
Without VPC connectivity, a job can access some AWS service endpoints in the same AWS Region. These services include Amazon S3, AWS Glue, Amazon DynamoDB, Amazon CloudWatch, AWS KMS, and AWS Secrets Manager.
Here's an example AWS CLI command to create an application in a VPC - you need to provide a list of Subnet IDs and Security Group IDs. More details can be found in configuring VPC access.
aws emr-serverless create-application \
    --type SPARK \
    --name etl-jobs \
    --release-label "emr-6.6.0" \
    --network-configuration '{
        "subnetIds": ["subnet-01234567890abcdef","subnet-01234567890abcded"],
        "securityGroupIds": ["sg-01234566889aabbcc"]
    }'

EMR on EKS - Create a job execution role

How do I create an IAM role to run workloads on Amazon EMR on EKS?
The official documentation remains very vague on this particular point.
This is a multistep process. We have an EKS cluster on which we run EMR jobs, so we need to set up the IAM + EKS role mapping configuration. The AWS documentation is detailed, which sometimes creates confusion.
The following high-level steps might help you refer back to the AWS documentation and understand it.
Note: These steps assume you have already created the EKS cluster.
1. Create a namespace to run your Spark jobs.
2. Create an RBAC Role and RoleBinding for your cluster configuration. (This is the EKS-level role-based access mechanism.)
3. Edit aws-auth to update the role ARN for AWSServiceRoleForAmazonEMRContainers. (EKS authorization)
4. Create an EMR virtual cluster and assign it to the EKS namespace created above.
5. Create a trust policy allowing EMR containers to access the EKS cluster.
6. Create a job execution role and associate the above policy with it.
7. Submit your EMR on EKS job.
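The steps above can be sketched with eksctl and AWS CLI commands roughly like the following. Names such as spark-jobs, my-eks-cluster, my-virtual-cluster, and EMRContainers-JobExecutionRole are placeholders, and the job execution role's permission policy (S3/logs access) is omitted.

```shell
# Step 1: namespace for Spark jobs (placeholder name)
kubectl create namespace spark-jobs

# Steps 2-3: eksctl creates the RBAC Role/RoleBinding in the namespace
# and maps the EMR service-linked role in aws-auth
eksctl create iamidentitymapping \
    --cluster my-eks-cluster \
    --namespace spark-jobs \
    --service-name "emr-containers"

# Step 4: register the namespace as an EMR virtual cluster
aws emr-containers create-virtual-cluster \
    --name my-virtual-cluster \
    --container-provider '{
        "id": "my-eks-cluster",
        "type": "EKS",
        "info": { "eksInfo": { "namespace": "spark-jobs" } }
    }'

# Steps 5-6: assuming the job execution role already exists, update its
# trust policy so EMR pods in this namespace can assume it
aws emr-containers update-role-trust-policy \
    --cluster-name my-eks-cluster \
    --namespace spark-jobs \
    --role-name EMRContainers-JobExecutionRole
```

After that, the role's ARN is passed as --execution-role-arn when submitting the job with aws emr-containers start-job-run (step 7).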

How to handle S3 events inside a Kubernetes Cluster?

I have a Kubernetes cluster that runs on AWS EKS,
now I want to handle S3 object creation events in a pod,
like I would with AWS Lambda.
How can I handle S3 events from inside a Kubernetes cluster?
S3 events can only be sent to SQS, SNS, or Lambda functions, so any S3 -> K8s integration will have to use those event destinations. I can think of a couple of options to trigger K8s Jobs using S3 events (I'm assuming the K8s cluster is EKS).
Option 1:
S3 events are sent to an SQS queue.
A K8s Job polls the SQS queue; the Job has a ServiceAccount mapped to an IAM role that can read from SQS.
Option 2:
S3 events trigger a Lambda function.
This Lambda function uses a container image which has kubectl/helm installed.
The Lambda function's IAM role is mapped to a K8s Role.
When invoked, the Lambda function runs some code/script that creates the following (using kubectl/helm):
a. A ConfigMap with the event information
b. A Job that takes the ConfigMap name/label as a parameter or environment variable
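Option 1 can be sketched as a simple long-polling loop running inside the Job's container. The queue URL is a placeholder, and jq-based parsing of the S3 event notification body is an illustrative assumption.

```shell
#!/bin/sh
# Placeholder queue URL; the pod's ServiceAccount must map to an IAM role
# allowed to call sqs:ReceiveMessage and sqs:DeleteMessage on this queue.
QUEUE_URL="https://sqs.us-east-1.amazonaws.com/123456789012/s3-events"

while true; do
    # Long-poll for one message at a time
    MSG=$(aws sqs receive-message \
        --queue-url "$QUEUE_URL" \
        --wait-time-seconds 20 \
        --max-number-of-messages 1)

    # Nothing received within the wait window: poll again
    [ -z "$MSG" ] && continue

    # S3 event notifications carry the bucket/key in the message body
    BODY=$(echo "$MSG" | jq -r '.Messages[0].Body')
    KEY=$(echo "$BODY" | jq -r '.Records[0].s3.object.key')
    RECEIPT=$(echo "$MSG" | jq -r '.Messages[0].ReceiptHandle')

    echo "handling S3 object: $KEY"   # replace with real processing

    # Delete the message once handled so it is not redelivered
    aws sqs delete-message --queue-url "$QUEUE_URL" --receipt-handle "$RECEIPT"
done
```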

Spinnaker for CD

I was planning to use Jenkins(CI) ----> Spinnaker(CD) integration for AWS EKS.
Does Spinnaker support multi-cluster deployments?
For example:
I will have 4 clusters in different accounts,
and I want to have 1 Spinnaker deployed to one of the clusters that manages the other 3 as well.
Is that possible to do?
Yes, it is possible. The suggested way is to have Spinnaker running in a dedicated AWS account (called e.g. Spinnaker or CD) and in a specific namespace called spin.
A great guide to follow for Spinnaker in EKS is Continuous Delivery using Spinnaker on Amazon EKS
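With Halyard, each target EKS cluster is registered as a separate Kubernetes provider account, which is what enables the one-Spinnaker, many-clusters setup. A sketch; the account names, context names, and kubeconfig path are placeholders, and the kubeconfig must contain a context for each target cluster.

```shell
# Enable the Kubernetes provider once
hal config provider kubernetes enable

# Add one account per target cluster (placeholder names and contexts)
hal config provider kubernetes account add prod-cluster \
    --context prod-context \
    --kubeconfig-file /home/spinnaker/.kube/config

hal config provider kubernetes account add staging-cluster \
    --context staging-context \
    --kubeconfig-file /home/spinnaker/.kube/config

# Apply the configuration to the running Spinnaker installation
hal deploy apply
```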