How to automatically update a security group when EKS scales worker nodes? - amazon-eks

I have an EC2 instance in region A and an EKS cluster in region B. The EKS worker nodes need to access a port exposed by the EC2 instance, and I currently maintain the EC2 security group by hand, adding the public IPs of the EKS worker nodes that are allowed to reach it. The issue is that I have to update the EC2 security group manually every time I scale or upgrade the EKS node group. There should be a smarter way. I have some ideas; can anyone give some guidance or best practices?
Solution 1: use a Lambda cron job that monitors the EC2 Auto Scaling group and then updates the security group (sketched below).
Solution 2: in Kubernetes, watch for node changes and use OIDC to update the security group.
Note: the EC2 instance and the EKS cluster are in different regions.
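For solution 1, a minimal sketch of the Lambda handler could look like the following, assuming boto3 and a scheduled (or Auto Scaling event-driven) trigger via EventBridge. The ASG name, security group ID, port and regions are all placeholders, not values from the question:

import boto3

ASG_NAME = "eks-worker-nodegroup-asg"      # hypothetical node group ASG name (region B)
SG_ID = "sg-0123456789abcdef0"             # hypothetical security group on the EC2 side (region A)
PORT = 8080                                # port exposed by the EC2 instance
EKS_REGION = "us-west-2"                   # region B (EKS)
EC2_REGION = "us-east-1"                   # region A (EC2)

def handler(event, context):
    asg = boto3.client("autoscaling", region_name=EKS_REGION)
    ec2_eks = boto3.client("ec2", region_name=EKS_REGION)
    ec2_target = boto3.client("ec2", region_name=EC2_REGION)

    # 1. Find the instance IDs currently in the worker node ASG.
    groups = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
    instance_ids = [i["InstanceId"]
                    for g in groups["AutoScalingGroups"]
                    for i in g["Instances"]]

    # 2. Resolve their public IPs.
    ips = set()
    if instance_ids:
        reservations = ec2_eks.describe_instances(InstanceIds=instance_ids)["Reservations"]
        for r in reservations:
            for inst in r["Instances"]:
                ip = inst.get("PublicIpAddress")
                if ip:
                    ips.add(ip + "/32")

    # 3. Diff against the current ingress rules and reconcile the security group in region A.
    sg = ec2_target.describe_security_groups(GroupIds=[SG_ID])["SecurityGroups"][0]
    current = {r["CidrIp"]
               for perm in sg["IpPermissions"]
               if perm.get("FromPort") == PORT
               for r in perm.get("IpRanges", [])}

    to_add = ips - current
    to_remove = current - ips
    if to_add:
        ec2_target.authorize_security_group_ingress(
            GroupId=SG_ID,
            IpPermissions=[{"IpProtocol": "tcp", "FromPort": PORT, "ToPort": PORT,
                            "IpRanges": [{"CidrIp": ip} for ip in sorted(to_add)]}])
    if to_remove:
        ec2_target.revoke_security_group_ingress(
            GroupId=SG_ID,
            IpPermissions=[{"IpProtocol": "tcp", "FromPort": PORT, "ToPort": PORT,
                            "IpRanges": [{"CidrIp": ip} for ip in sorted(to_remove)]}])

The Lambda's execution role would need autoscaling:Describe* and ec2:Describe* in region B plus the security group ingress permissions in region A.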

Related

Does AWS EKS Fargate pricing charge for the cluster only if you don't have any pods running?

If I create an EKS Fargate cluster and just keep it without deploying anything else on top of it, do I still incur a charge? From what I have seen it does not seem to incur a charge, and when I went through the pricing here https://aws.amazon.com/eks/pricing/
I think you only get charged once you start to run your pods. I want to confirm this. I am not sure if AWS will charge for the control plane as mentioned here.
"I think you get charged only once you start to run your pods."
The AWS-managed control plane is a recurring charge, regardless of whether you have worker nodes (e.g. EC2, Fargate) or not. Supporting resources such as NAT gateways and Elastic IPs are also chargeable.

How can I maintain a list of constant masters and workers under conf/masters and conf/workers in a managed Scaling cluster?

I am using an AWS EMR cluster with Alluxio installed on every node. I now want to deploy Alluxio in High Availability mode.
https://docs.alluxio.io/os/user/stable/en/deploy/Running-Alluxio-On-a-HA-Cluster.html#start-an-alluxio-cluster-with-ha
I am following the above documentation, and see that "On all the Alluxio master nodes, list all the worker hostnames in the conf/workers file, and list all the masters in the conf/masters file".
My concern is that, since I have an AWS-managed scaling cluster, the worker nodes keep being added and removed based on cluster load. How can I maintain a list of constant masters and workers under conf/masters and conf/workers in a managed scaling cluster?
The conf/workers and conf/masters files are only used for the initial setup through the scripts. Once the cluster is running, you don't need to update them any more.
For example, in an EMR cluster you can add a new slave node as an Alluxio worker, and as long as you specify the correct Alluxio master address, the new Alluxio worker will register itself and serve in the fleet like the other workers.

EMR on EKS - Create a job execution role

How to create an IAM role to run workloads on Amazon EMR on EKS?
The official documentation remains very vague on this particular point.
This is a multi-step process. We have an EKS cluster on which we run EMR jobs, so we need to set up the IAM + EKS role mapping configuration. The AWS documentation is detailed, which sometimes creates confusion.
The following high-level steps might help you go back to the AWS documentation and understand it.
Note: these steps assume you have already created the EKS cluster.
1. Create a namespace to run your Spark jobs.
2. Create an RBAC Role and RoleBinding for your cluster configuration (this is the EKS-level role-based access mechanism).
3. Edit aws-auth to update the role ARN for AWSServiceRoleForAmazonEMRContainers (EKS authorisation).
4. Create an EMR virtual cluster and assign it to the EKS namespace created above.
5. Create a trust policy to access the EKS cluster for EMR containers.
6. Create the job execution role and attach the above policy to it.
7. Submit your EMR on EKS job (steps 4 and 7 are sketched after this list).
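To make steps 4 and 7 concrete, here is a minimal sketch using boto3's emr-containers API. The cluster name, namespace, role ARN, release label and S3 path are placeholders you would replace with your own values:

import boto3

emr = boto3.client("emr-containers", region_name="us-east-1")

# Step 4: create an EMR virtual cluster bound to the EKS namespace created earlier.
vc = emr.create_virtual_cluster(
    name="my-virtual-cluster",
    containerProvider={
        "id": "my-eks-cluster",          # name of the existing EKS cluster
        "type": "EKS",
        "info": {"eksInfo": {"namespace": "spark-jobs"}},
    },
)

# Step 7: submit a Spark job using the job execution role created above.
emr.start_job_run(
    virtualClusterId=vc["id"],
    name="sample-job",
    executionRoleArn="arn:aws:iam::111122223333:role/EMRContainers-JobExecutionRole",
    releaseLabel="emr-6.2.0-latest",
    jobDriver={
        "sparkSubmitJobDriver": {
            "entryPoint": "s3://my-bucket/scripts/pi.py",
            "sparkSubmitParameters": "--conf spark.executor.instances=2",
        }
    },
)

This will only work once the RBAC role, aws-auth mapping and trust policy from steps 2, 3 and 5 are in place; otherwise the job pods cannot be scheduled in the namespace.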

Creating a kubernetes cluster on GCP using Spinnaker

For end-to-end DevOps automation I want to have an environment on demand. For this I need to spin up an environment on Kubernetes, which is hosted on GCP.
My Use case
1. Developer Checks in the code in feature branch
2. Environment is spun up on Google Cloud with Kubernetes
3. Application gets deployed on Kubernetes
4. Gets tested and then the environment gets destroyed.
I am able to do everything with Spinnaker except #2, i.e. creating a Kubernetes cluster on GCP using Spinnaker.
Any help please
Thanks,
Amol
I'm not sure Spinnaker was meant for doing what the second point in your list describes. Spinnaker assumes a collection of resources (VMs or a Kubernetes cluster) and then works with that, so instead of spinning up a new GKE cluster, Spinnaker makes use of existing clusters. I think it'd be better (for your costs as well ;)) if you separate the environments using Kubernetes namespaces.
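If you go the namespace route, a minimal sketch of the per-branch environment lifecycle could look like this, using the official Kubernetes Python client against an existing GKE cluster; the branch name and label are just placeholders:

from kubernetes import client, config

def create_feature_environment(branch: str) -> None:
    config.load_kube_config()          # use the current kubeconfig context (the GKE cluster)
    v1 = client.CoreV1Api()
    ns = client.V1Namespace(
        metadata=client.V1ObjectMeta(
            name=f"feature-{branch}",
            labels={"purpose": "ephemeral-test-env"},
        )
    )
    v1.create_namespace(body=ns)       # the application then gets deployed into this namespace

def destroy_feature_environment(branch: str) -> None:
    config.load_kube_config()
    client.CoreV1Api().delete_namespace(name=f"feature-{branch}")

if __name__ == "__main__":
    create_feature_environment("login-page")   # spun up when the feature branch is pushed
    # ... deploy, run the tests against this namespace ...
    destroy_feature_environment("login-page")  # torn down once testing is done

Spinnaker's own Kubernetes provider can do the deploy/cleanup stages; the sketch just illustrates that an "environment" can be a namespace rather than a whole cluster.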

Restarting EC2 Apache instance on CloudWatchAlarm

I was wondering whether there can be a process to restart Apache if an alarm is triggered on an EC2 instance. The process could be triggered either by the alarm or by SNS. In the alarm actions I can see Auto Scaling, ECS services, or EC2 instance reboot type options. I am trying to see if Lambda + SNS can work, but it doesn't seem appropriate.
I am running ubuntu instances.
Yes, you can achieve this by using a combination of AWS Lambda and the EC2 Run Command service from AWS.
https://aws.amazon.com/blogs/aws/new-ec2-run-command-remote-instance-management-at-scale/
You can create a Lambda function that is triggered by the CloudWatch alarm and, when triggered, runs service apache2 restart on your Ubuntu EC2 instance.
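Run Command is now part of AWS Systems Manager, so the Lambda would use the ssm client. A minimal sketch, assuming the SSM agent is running on the Ubuntu instance and the instance ID below is a placeholder:

import boto3

INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical Ubuntu instance running Apache

def handler(event, context):
    ssm = boto3.client("ssm")
    # Send a Run Command to restart Apache on the target instance.
    response = ssm.send_command(
        InstanceIds=[INSTANCE_ID],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["sudo service apache2 restart"]},
        Comment="Restart Apache after CloudWatch alarm",
    )
    return response["Command"]["CommandId"]

The Lambda execution role needs permission to call ssm:SendCommand, and the instance needs an instance profile that allows it to be managed by Systems Manager.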