Connect to an EKS cluster using boto3 and drain a particular node - amazon-eks

I have a scenario:
1) I need to connect to an EKS cluster using boto3
2) then I need to drain a particular node using boto3
Does anyone have a solution for this?
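For context, boto3 only covers the EKS management API (e.g. `describe_cluster`); draining a node is a Kubernetes-API operation, so a workable approach pairs boto3 with the official `kubernetes` Python client. A minimal sketch, assuming hypothetical cluster/region/node names and that the AWS CLI is installed to mint the bearer token (boto3 cannot produce one itself):

```python
import json
import subprocess


def cordon_patch():
    # Marking a node unschedulable is the "cordon" half of a drain.
    return {"spec": {"unschedulable": True}}


def drain_node(cluster_name, region, node_name):
    import base64
    import tempfile
    import boto3                   # AWS API only; cannot drain by itself
    from kubernetes import client  # pip install kubernetes

    eks = boto3.client("eks", region_name=region)
    info = eks.describe_cluster(name=cluster_name)["cluster"]

    # boto3 cannot mint the Kubernetes bearer token; shell out to the AWS CLI.
    token = json.loads(subprocess.check_output(
        ["aws", "eks", "get-token", "--cluster-name", cluster_name,
         "--region", region]))["status"]["token"]

    # The cluster CA comes back base64-encoded from describe_cluster.
    ca = tempfile.NamedTemporaryFile(delete=False, suffix=".crt")
    ca.write(base64.b64decode(info["certificateAuthority"]["data"]))
    ca.close()

    conf = client.Configuration()
    conf.host = info["endpoint"]
    conf.ssl_ca_cert = ca.name
    conf.api_key = {"authorization": "Bearer " + token}
    v1 = client.CoreV1Api(client.ApiClient(conf))

    # Cordon the node ...
    v1.patch_node(node_name, cordon_patch())

    # ... then evict its pods (the other half of `kubectl drain`).
    pods = v1.list_pod_for_all_namespaces(
        field_selector=f"spec.nodeName={node_name}")
    for pod in pods.items:
        v1.create_namespaced_pod_eviction(
            name=pod.metadata.name,
            namespace=pod.metadata.namespace,
            body=client.V1Eviction(
                metadata=client.V1ObjectMeta(
                    name=pod.metadata.name,
                    namespace=pod.metadata.namespace)))
```

Usage would be `drain_node("my-cluster", "us-east-1", "ip-10-0-1-23.ec2.internal")` with your real names; evictions respect PodDisruptionBudgets, which is why they are preferred over plain deletes.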

Related

Can you connect Amazon ElastiCache Redis to Amazon EMR PySpark?

I have been trying several solutions with custom JARs from Redis Labs and also with --packages in spark-submit on EMR, and still no success. Is there any simple way in EMR to connect to ElastiCache?
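One JAR-free approach (a sketch, not a tested EMR recipe): ElastiCache only needs network reachability from the EMR nodes (same VPC, security group allowing port 6379), so you can skip Spark connectors entirely and write from the executors with plain redis-py inside `foreachPartition`. The endpoint, S3 path, and field names below are hypothetical, and `redis` must be installed on every node (e.g. via an EMR bootstrap action):

```python
def tweet_to_kv(row):
    # Hypothetical mapping: key by tweet id, store the text.
    return ("tweet:%s" % row["id"], row["text"])


def write_partition(rows,
                    host="my-cache.abc123.0001.use1.cache.amazonaws.com",
                    port=6379):
    # Runs on each executor; one connection per partition, batched writes.
    import redis  # must be installed on every node (EMR bootstrap action)
    r = redis.Redis(host=host, port=port)
    pipe = r.pipeline()
    for row in rows:
        key, value = tweet_to_kv(row)
        pipe.set(key, value)
    pipe.execute()


# Driver side (sketch; the path is a placeholder):
# df = spark.read.json("s3://my-example-bucket/tweets/")
# df.select("id", "text").rdd.foreachPartition(write_partition)
```

The design choice here is per-partition connections plus pipelining, which avoids both a connection per record and shipping an unpicklable client from the driver.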

How to automatically update a security group when EKS scales worker nodes?

I have EC2 in region A and EKS in region B. The EKS worker nodes need to access a port exposed by the EC2 instance, and I manually maintain the EC2 security group that controls which public IPs (the EKS worker IPs) can reach it. The issue is that I need to update the EC2 security group by hand whenever I scale or upgrade the EKS node group; there should be a smarter way. I have some ideas; can anyone give some guidance or best practice?
Solution 1: use a Lambda cron job to monitor the EC2 Auto Scaling group, then update the security group.
Solution 2: in Kubernetes, watch for node changes and use OIDC to update the security group.
Note: the EC2 instance and the EKS cluster are in different regions.
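Solution 1 can be sketched as a scheduled Lambda. Everything here (the port, security group ID, ASG name, and both regions) is a placeholder; the handler lists the node group's instances in the EKS region, then reconciles /32 ingress rules on the security group in the EC2 region:

```python
def diff_rules(current_cidrs, worker_ips):
    # Pure helper: which /32 rules to add and which to revoke.
    desired = {ip + "/32" for ip in worker_ips}
    current = set(current_cidrs)
    return sorted(desired - current), sorted(current - desired)


def sync_security_group(event=None, context=None):
    import boto3
    PORT = 8080                     # hypothetical port exposed by the EC2 box
    SG_ID = "sg-0123456789abcdef0"  # hypothetical SG in the EC2 region
    ASG = "eks-workers"             # hypothetical EKS node group ASG

    asg = boto3.client("autoscaling", region_name="us-west-2")  # EKS region
    ec2_eks = boto3.client("ec2", region_name="us-west-2")
    ec2_box = boto3.client("ec2", region_name="us-east-1")      # EC2 region

    ids = [i["InstanceId"]
           for g in asg.describe_auto_scaling_groups(
               AutoScalingGroupNames=[ASG])["AutoScalingGroups"]
           for i in g["Instances"]]
    resv = ec2_eks.describe_instances(InstanceIds=ids)["Reservations"]
    ips = [i["PublicIpAddress"] for r in resv for i in r["Instances"]
           if "PublicIpAddress" in i]

    sg = ec2_box.describe_security_groups(
        GroupIds=[SG_ID])["SecurityGroups"][0]
    current = [rng["CidrIp"] for perm in sg["IpPermissions"]
               if perm.get("FromPort") == PORT for rng in perm["IpRanges"]]

    to_add, to_remove = diff_rules(current, ips)
    for cidr in to_add:
        ec2_box.authorize_security_group_ingress(
            GroupId=SG_ID, IpProtocol="tcp",
            FromPort=PORT, ToPort=PORT, CidrIp=cidr)
    for cidr in to_remove:
        ec2_box.revoke_security_group_ingress(
            GroupId=SG_ID, IpProtocol="tcp",
            FromPort=PORT, ToPort=PORT, CidrIp=cidr)
```

Triggering it from an EventBridge schedule (or from ASG lifecycle-hook events) covers the cross-region case, since boto3 clients take an explicit region per call.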

How to send data to AWS S3 from Kafka using Kafka Connect without Confluent?

I have a local instance of Apache Kafka 2.0.0, and it is running very well. In my test I produce and consume data from Twitter and put it in a specific topic, twitter_tweets, and everything is OK. But now I want to consume the topic twitter_tweets with Kafka Connect, using the Kafka Connect S3 connector, and obviously store the data in AWS S3 without using the Confluent CLI.
Can I do this without Confluent? Does anyone have an example or something to help me?
without Confluent
The S3 sink is open source; so is Apache Kafka Connect.
The Connect framework is not specific to Confluent.
You may use the Kafka Connect Docker image, for example, or you may use confluent-hub to install the S3 connector on your own Kafka Connect installation.
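To illustrate: with plain Apache Kafka you run `bin/connect-standalone.sh worker.properties s3-sink.properties`. A sketch of the two properties files, assuming the S3 sink connector jars have been unpacked onto `plugin.path`, and with a hypothetical bucket and region (the topic name is from the question):

```properties
# worker.properties - plain Apache Kafka standalone worker, no Confluent Platform
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
offset.storage.file.filename=/tmp/connect.offsets
# directory where the kafka-connect-s3 jars were unpacked
plugin.path=/opt/connect-plugins
```

```properties
# s3-sink.properties - the connector itself (bucket and region are examples)
name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
topics=twitter_tweets
s3.bucket.name=my-example-bucket
s3.region=us-east-1
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
flush.size=1000
```

AWS credentials come from the default provider chain (environment variables, instance profile, etc.), so nothing Confluent-specific is required at runtime.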

Can redis-py reliably use AWS ElastiCache Redis cluster?

I am trying to move away from a single AWS ElastiCache (Redis) server as the Celery broker to a Redis cluster. Trouble is, nowhere in the Celery or redis-py documentation can I find a way to connect to an AWS Redis cluster.
redis-py, which Celery uses to communicate with the Redis server, can be configured to use Redis Sentinel, but AWS does not support it (at least I did not find Sentinel support in the AWS ElastiCache documentation).
So is there a way to communicate with the ElastiCache Redis cluster using redis-py, or is there a way to instruct Celery to use redis-py-cluster (a separate project)?
ElastiCache should give you a configuration endpoint address that you can use to connect Celery. Just use that endpoint in either the broker_url or the result_backend setting.
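To make that concrete: for Celery you just build a `redis://` URL from the endpoint. Note that Celery's stock Redis transport does not speak the cluster protocol, so for cluster-mode access from your own code, recent redis-py (4.1+) ships `redis.cluster.RedisCluster` as an alternative to the separate redis-py-cluster project. A small sketch (the endpoint hostname is hypothetical):

```python
def broker_url(host, port=6379, db=0):
    # Celery's Redis transport takes a plain redis:// URL.
    return f"redis://{host}:{port}/{db}"


# Hypothetical ElastiCache configuration endpoint:
ENDPOINT = "my-cluster.abc123.clustercfg.use1.cache.amazonaws.com"

# Celery side (sketch):
# from celery import Celery
# app = Celery("tasks", broker=broker_url(ENDPOINT))

# Direct cluster-aware access (redis-py >= 4.1):
# from redis.cluster import RedisCluster
# rc = RedisCluster(host=ENDPOINT, port=6379)
```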

Terraform - CloudWatch alarms for ElastiCache cluster metrics

Can someone suggest how we can create alarms for an ElastiCache cluster for "CPUUtilization" and "FreeableMemory" using Terraform?
ElastiCache seems to be an exception where we are unable to get cluster-level metrics. It seems the current workaround is to create alarms at the node level.
I haven't tried the module below, but it looks like a workaround -
https://github.com/azavea/terraform-aws-redis-elasticache/blob/develop/main.tf
It is possible, and documented here.
Here is my Terraform module to create CloudWatch alerts for RDS and cache clusters at the node level.
https://bitbucket.org/rkkrishnaa/terraform/src/master/
I have added a Jenkinsfile to deploy the alerts through CI.
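As a sketch of the node-level workaround in Terraform (the cluster ID, node ID, thresholds, and SNS topic ARN are all illustrative), one `aws_cloudwatch_metric_alarm` per metric per node, targeting a single node via the `CacheClusterId`/`CacheNodeId` dimensions:

```hcl
resource "aws_cloudwatch_metric_alarm" "cache_cpu" {
  alarm_name          = "elasticache-cpu-node-0001" # illustrative
  namespace           = "AWS/ElastiCache"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 2
  comparison_operator = "GreaterThanThreshold"
  threshold           = 80

  dimensions = {
    CacheClusterId = "my-redis-cluster" # illustrative
    CacheNodeId    = "0001"
  }

  alarm_actions = ["arn:aws:sns:us-east-1:123456789012:alerts"] # illustrative
}

resource "aws_cloudwatch_metric_alarm" "cache_memory" {
  alarm_name          = "elasticache-memory-node-0001"
  namespace           = "AWS/ElastiCache"
  metric_name         = "FreeableMemory"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 2
  comparison_operator = "LessThanThreshold"
  threshold           = 104857600 # 100 MiB, illustrative

  dimensions = {
    CacheClusterId = "my-redis-cluster"
    CacheNodeId    = "0001"
  }

  alarm_actions = ["arn:aws:sns:us-east-1:123456789012:alerts"]
}
```

For clusters with many nodes, wrapping these in `count` or `for_each` over the node IDs keeps the node-level workaround manageable.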