AWS EKS self-managed EC2 instance patching [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
We are running self-managed EC2 instances in an EKS cluster. While the control plane upgrade is managed by AWS, the worker nodes are self-managed. For security updates and patches, we use the latest EKS-optimized AMI and roll out new instances. Some of this is manual effort; what is the best automated approach you are following to update/patch those self-managed EC2 worker nodes?
Steps that we follow:
Look for the latest optimized AMI version released by AWS
Update the launch configuration with the new AMI
Scale up nodes with the new AMI
Seamlessly transfer pods from old to new nodes
Scale down and delete the old nodes.
The issue: even after we update to the new optimized AMI, vulnerability scans of those instances still list some pending security updates.
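At least the first step can be automated: AWS publishes the recommended EKS-optimized AMI ID under a well-known SSM Parameter Store path. A minimal boto3 sketch, assuming Kubernetes 1.21 on Amazon Linux 2 (adjust the version and region to your cluster):

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# The recommended EKS-optimized AMI is published under an SSM parameter
# path keyed by Kubernetes version.
param = ssm.get_parameter(
    Name="/aws/service/eks/optimized-ami/1.21/amazon-linux-2/recommended/image_id"
)
print(param["Parameter"]["Value"])  # e.g. ami-0abcdef1234567890
```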

Some of this is manual effort; what is the best automated approach
I suggest having a look at Managed Node Groups, since they are an automated approach to the lifecycle management you are doing by hand.
Nodes run using the latest Amazon EKS optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.
All managed nodes are provisioned as part of an Amazon EC2 Auto Scaling group that is managed for you by Amazon EKS. All resources including the instances and Auto Scaling groups run within your AWS account. Each node group uses the Amazon EKS optimized Amazon Linux 2 AMI and can run across multiple Availability Zones that you define.
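For illustration, once a node group is managed, rolling it onto the latest EKS-optimized AMI is a single API call and EKS handles the cordoning and draining. A minimal boto3 sketch with placeholder cluster and node group names:

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Roll the node group to the latest EKS-optimized AMI for the cluster's
# Kubernetes version; EKS replaces nodes gradually and drains the old ones.
update = eks.update_nodegroup_version(
    clusterName="my-cluster",      # placeholder
    nodegroupName="my-nodegroup",  # placeholder
)
print(update["update"]["id"], update["update"]["status"])
```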

Related

Does AWS EKS Fargate pricing charge for the cluster only if you don't have any pods running?

If I create an EKS Fargate cluster and just keep it without deploying anything else on top of it, do I still incur a charge? From what I have seen it does not seem to incur a charge, and when I went through the pricing here https://aws.amazon.com/eks/pricing/
I think you get charged only once you start to run your pods. I want to confirm this. I am not sure if AWS will charge for the control plane as mentioned here.
I think you get charged only once you start to run your pods.
The AWS-managed control plane is a recurring charge, whether or not you have worker nodes (e.g. EC2, Fargate). Also, supporting resources such as NAT gateways and Elastic IPs are chargeable.
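For a rough sense of scale, assuming the list price of $0.10 per cluster-hour at the time of writing (verify on the pricing page above):

```python
# Back-of-the-envelope cost of an idle EKS control plane, assuming the
# list price of $0.10 per cluster-hour (check https://aws.amazon.com/eks/pricing/).
hourly_rate = 0.10
hours_per_month = 730  # average hours in a month

print(f"~${hourly_rate * hours_per_month:.2f} per month")  # ~$73.00
```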

How to move data analytics into AWS?

I've installed Tiger and I have one problem that I hope you can help me solve. Suppose I install Tiger at a physical data center, either using Docker and the AIO or Kubernetes. I get it installed, I connect to data sources, I do the ETL, I create the LDM, metrics, insights, and dashboard KPIs. However, I realize that we need a cloud strategy and must move our data analytics (on-premise Tiger) to AWS. Can I then shut down the Docker image or Kubernetes deployment and SCP it to either (1) an AWS EC2 instance or (2) AWS EKS? Can someone walk me through these steps theoretically?
I assume that the data sources are not yet on AWS, and that there is a VPN connection between the on-premise data center and AWS, or even AWS Direct Connect between the on-premise data center and the customer's AWS Region.
If you are thinking about moving Tiger but not the data sources, it will definitely be challenging because of latency (and also security).
That said, if a customer has a good and secure link between the public cloud and on-premise, it should work.
In that case both deployments of Tiger can run fully in parallel on top of the same data sources, so the migration would be almost zero-downtime.

Can't deploy marketplace object on GKE

I have a running Kubernetes cluster on Google Cloud Platform.
I want to deploy a postgres image to my cluster.
When selecting the image and my cluster, I get the error:
insufficient OAuth scope
I have been reading about it for a few hours now and couldn't get it to work.
I managed to set the scope of the VM to allow APIs:
Cloud API access scopes
Allow full access to all Cloud APIs
But in the GKE cluster details, I see that everything is disabled except Stackdriver.
Why is it so difficult to deploy an image or to change the scope?
How can I modify the cluster permissions without deleting and recreating it?
The easiest way is to delete and recreate the cluster, because there is no direct way to modify the scopes of an existing cluster. However, there is a workaround: create a new node pool with the correct scopes and make sure to delete the old node pools. The cluster's scopes will change to reflect the new node pool.
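As an illustration, a minimal sketch of that workaround with the google-cloud-container Python client; the project, zone, cluster, pool name, and machine type below are placeholders:

```python
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

# New pool with broad API access; narrower scopes are usually preferable.
node_pool = container_v1.NodePool(
    name="scoped-pool",
    initial_node_count=3,
    config=container_v1.NodeConfig(
        machine_type="e2-medium",
        oauth_scopes=["https://www.googleapis.com/auth/cloud-platform"],
    ),
)

client.create_node_pool(
    parent="projects/my-project/locations/us-central1-a/clusters/my-cluster",
    node_pool=node_pool,
)
```

Once workloads have moved onto the new pool, delete the old pools so the cluster's effective scopes reflect only the new one.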
More details can be found in this post.

Apache Ignite vs Redis Cluster (with partitioning) vs other solutions [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 3 years ago.
Hi, I am looking for an in-memory data grid or something similar.
My use case:
data partitioned (gridded) in memory
scale-out available
backup nodes available
persistent backup available
(optional) a free or open-source solution
I googled and found the candidates below:
- Apache Ignite
- Redis Cluster
- Hazelcast (community edition)
I prefer Ignite to Hazelcast because Ignite supports using direct buffers. But I don't know whether Redis Cluster partitioning is stable, and I don't know whether Apache Ignite performs better than Redis Cluster.
Is Apache Ignite comparable to Redis Cluster, or is that an improper comparison?
Thanks for your answers.
But I don't know whether Redis Cluster partitioning is stable
The Redis Cluster feature has been stable since version 3.x and is used in production by many companies.
Is Apache Ignite comparable to Redis Cluster, or is that an improper comparison?
Comparing Apache Ignite with Redis alone is wrong, because these projects are of different grades. Redis is positioned as a storage engine, not as a data grid like Apache Ignite. For a proper comparison, Apache Ignite should be compared with Redisson, a Redis Java client with in-memory data grid features. It offers the same features as Apache Ignite.
Redisson supports fully managed Redis services such as AWS ElastiCache and Azure Cache for Redis, so you don't need to manage/deploy/maintain a Redis cluster yourself or hire DevOps to do it. Apache Ignite doesn't offer such an option, so you have to manage/deploy/maintain it yourself.
I used Redis in production for one of the largest US mobile network operators (in its IoT department). It has been stable since 2.8 (master/replica), though cluster mode has been stable since 3.2. We ran 2.8 for three years and a 3.2 cluster for two years in production at roughly 50k TPS, with no restarts for years and no issues (except BGSAVE and memory issues, but those were due to RAM limitations).
If we compare Redis and Apache Ignite:
Performance: Redis is faster; it is single-threaded and 100% in memory.
Data structure: Redis is a key-value store (and even that is not much of a limitation; you can map almost anything onto a key-value model). Ignite is a data grid, as mentioned above.
If you are looking for an in-memory data grid and performance is a secondary priority, then Ignite will be more appropriate for you.
Redis only provides key-value storage, while Ignite is much more functional. Here is a good feature comparison provided by GridGain: https://www.gridgain.com/resources/product-comparisons/redis-comparison
Which one to use depends on your requirements and expectations.
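To make the key-value vs data grid distinction concrete, here is a small sketch using the two Python thin clients (redis-py's cluster client and pyignite); hosts and ports are placeholders for locally running services:

```python
from redis.cluster import RedisCluster  # redis-py >= 4.1
from pyignite import Client

# Redis: plain key-value operations, partitioned across the cluster.
redis_client = RedisCluster(host="127.0.0.1", port=7000)
redis_client.set("user:1", "alice")
print(redis_client.get("user:1"))  # b'alice'

# Ignite: named caches behave like distributed maps on the data grid.
ignite = Client()
ignite.connect("127.0.0.1", 10800)  # default thin-client port
cache = ignite.get_or_create_cache("users")
cache.put(1, "alice")
print(cache.get(1))  # 'alice'
```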

DC/OS running a service on each agent

Is there any way of running a service (single instance) on each deployed agent node? I need that because each agent needs to mount storage from S3 using s3fs.
The name of the feature you're looking for is "daemon tasks", but unfortunately, it's still in the planning phase for Mesos itself.
Because schedulers don't know the entire state of the cluster, Mesos needs to add a feature to enable this functionality. Once it lands in Mesos, it can be integrated with DC/OS.
The primary workaround is to use Marathon to deploy an app with the UNIQUE constraint ("constraints": [["hostname", "UNIQUE"]]) and set the app's instance count to the number of agent nodes. Unfortunately this means you have to adjust the instance count whenever you add new nodes.
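A rough sketch of that workaround, posting an app definition to Marathon's REST API from Python; the endpoint, app id, command, and node count are all placeholders:

```python
import requests

# Hypothetical Marathon endpoint; replace with your own.
MARATHON_URL = "http://marathon.mesos:8080/v2/apps"
NUM_AGENT_NODES = 5  # must be kept in sync with the cluster size by hand

app = {
    "id": "/s3fs-mounter",
    "cmd": "s3fs my-bucket /mnt/s3 -f",      # placeholder mount command
    "cpus": 0.1,
    "mem": 64,
    "instances": NUM_AGENT_NODES,            # one instance per agent node
    "constraints": [["hostname", "UNIQUE"]], # at most one task per host
}

response = requests.post(MARATHON_URL, json=app)
response.raise_for_status()
```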