Does AWS EKS Fargate pricing charge for the cluster only if you don't have any pods running?

If I create an EKS Fargate cluster and just keep it without deploying anything else on top of it, do I still incur a charge? From what I have seen it does not seem to incur a charge, and when I went through the pricing here https://aws.amazon.com/eks/pricing/
I think you get charged only once you start to run your pods. I want to confirm this. I am not sure if AWS will charge for the control plane as mentioned here

I think you get charged only once you start to run your pods.
The AWS-managed control plane is a recurring charge, regardless of whether you have worker nodes (e.g. EC2 or Fargate) or not. Supporting resources such as NAT gateways and EIPs are also chargeable.

Related

How to move data analytics into AWS?

I've installed Tiger and I have one problem I hope you can help me solve. Suppose I install Tiger at a physical data center, either using Docker and the AIO or using Kubernetes. I get it installed, connect to the data sources, do the ETL, and create the LDM, metrics, insights, and dashboard KPIs. However, I realized that we need a cloud strategy and need to move our data analytics - the on-premise Tiger - to AWS. Can I then shut down the Docker image or Kubernetes deployment and SCP it to either 1. an AWS EC2 instance or 2. AWS EKS? Can someone walk me through these steps theoretically?
I suppose that the data sources are not yet on AWS and that there is a VPN connection between the on-premise data center and AWS, or even AWS Direct Connect between the on-premise data center and the customer's AWS Region.
If you are thinking about moving Tiger but not the data sources, it would definitely be challenging because of latency (and also security).
Well, if a customer has a good and secure link between the public cloud and on-premise, then it should work.
In such a case, both deployments of Tiger can work fully in parallel on top of the same data source, so the migration would involve almost zero downtime.

Pod on Fargate from EKS does not have access to AWS default credentials

I am trying to run a pod on Fargate from EKS that needs to access S3 via the boto3 Python client, and I can't figure out why this is happening. It works just fine when scheduled on an EKS EC2 node.
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I have a properly set up Fargate profile and followed this guide.
Does anyone know why AWS credentials are not available within the context of this pod? Does this have anything to do with the pod execution role?
I have a properly set up Fargate profile and followed this guide.
That's a great start and it will ensure your pods are scheduled on Fargate rather than EC2.
Does anyone know why AWS credentials are not available within the context of this pod? Does this have anything to do with the pod execution role?
Without knowing exactly what you defined it's impossible to troubleshoot, but yes, it's worth checking the pod execution role for starters.
However, given that you want to access an S3 bucket from your pod, you need to make sure the pod's service account uses the respective policy. Last year we introduced IRSA (IAM Roles for Service Accounts), allowing you to assign least-privilege permissions at the pod level, and given you're on Fargate this is the way to go. So, please read through and apply IRSA as per the docs and report back if anything is not working as expected.
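To make that concrete, here is a minimal sketch of what the IRSA wiring could look like. The role ARN, namespace, service account name, and image below are placeholders (not from the question), and the IAM role with an S3 policy and the cluster's OIDC provider trust relationship have to exist already (e.g. created with eksctl or the console):

```yaml
# ServiceAccount annotated with an IAM role that grants S3 access.
# The role ARN is a placeholder - the role must trust the cluster's
# OIDC provider and carry an S3 read policy.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader                 # assumed name, for illustration only
  namespace: my-fargate-ns        # must be matched by your Fargate profile selector
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-s3-read-role
---
# Pod running under that service account. EKS injects a web identity
# token, which boto3/botocore picks up automatically, so no static
# credentials are needed in the pod.
apiVersion: v1
kind: Pod
metadata:
  name: s3-client
  namespace: my-fargate-ns
spec:
  serviceAccountName: s3-reader
  containers:
    - name: app
      image: python:3.9           # placeholder image; it needs boto3 installed
      command: ["python", "-c", "import boto3; print(boto3.client('s3').list_buckets())"]
```

One thing to keep in mind is that the SDK in your image must be recent enough to support the web identity token file that IRSA relies on; very old boto3/botocore versions will still raise NoCredentialsError.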

How to increase availability of EKS

As per the EKS SLA, 99.9% availability is committed by Amazon.
How can I increase that to 99.99% (or even 99.999%)? Would it help if I add master/slave nodes?
Thanks.
I don't think there is a way to do this and still call your setup an AWS EKS cluster. The EKS SLA defines an EKS cluster as one whose control plane is run by AWS. Three masters in different AZs already provide pretty good HA.
A workaround may be to introduce a queue between the control plane (i.e. the Kubernetes API) and your requests. The queue can retain requests that failed due to availability issues and resend them based on some priority or time-based logic. This won't increase availability for real-time tasks, but it prevents requests from asynchronous use cases from going to waste.

Creating a kubernetes cluster on GCP using Spinnaker

For end-to-end DevOps automation I want to have an environment on demand. For this I need to spin up an environment on Kubernetes, which is eventually hosted on GCP.
My Use case
1. Developer checks in the code in a feature branch
2. Environment is spun up on Google Cloud with Kubernetes
3. Application gets deployed on Kubernetes
4. Gets tested and then the environment gets destroyed.
I am able to do everything with Spinnaker except #2, i.e. creating a Kubernetes cluster on GCP using Spinnaker.
Any help please
Thanks,
Amol
I'm not sure Spinnaker was meant for doing what the second point in your list describes. Spinnaker assumes a collection of resources (VMs or a Kubernetes cluster) and then works with that. So instead of spinning up a new GKE cluster, Spinnaker makes use of existing clusters. I think it'd be better (for your costs as well ;) if you separate the environments using Kubernetes namespaces.
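As a rough sketch of that approach (the names here are made up for illustration), each feature branch could get its own ephemeral namespace in the existing GKE cluster, which the pipeline creates before deploying and deletes once the tests are done:

```yaml
# Ephemeral namespace per feature branch; the name would typically be
# derived from the branch name and is only a placeholder here.
apiVersion: v1
kind: Namespace
metadata:
  name: feature-123
  labels:
    purpose: ephemeral-test-env
```

The pipeline can apply this manifest before the deploy stage and run `kubectl delete namespace feature-123` at the end, which also removes everything that was deployed into that namespace.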

Automatic cluster setup and app deployment on GCE Kubernetes

We are looking for a solid, declarative (YAML-based) procedure to automate the setup of our Kubernetes cluster and application deployments on Google Container Engine.
As our last resort in a serious failure we want to be able to:
Create a new GCE cluster
Execute all our deployments to their latest versions
Execute all the steps in the correct order
What solutions are people currently using? Doing this manually takes us about an hour and is error-prone. It could really take 15-20 minutes if automated.
You should take a look at Google Cloud Deployment Manager. It "automates the creation and management of your Google Cloud Platform resources for you", meaning that it can create a Google Container Engine cluster as well as create your deployments.
Looking through the GKE Deployment Manager example should help get you started.
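For a rough idea of what such a config could look like, here is a minimal Deployment Manager sketch that declares a Container Engine cluster; the resource name, zone, and node settings are placeholders, so check the exact property names against the GKE Deployment Manager example mentioned above:

```yaml
# config.yaml - declares a GKE (Container Engine) cluster as a
# Deployment Manager resource. Deployed with something like:
#   gcloud deployment-manager deployments create my-deployment --config config.yaml
resources:
- name: my-gke-cluster            # placeholder resource name
  type: container.v1.cluster
  properties:
    zone: us-central1-a           # placeholder zone
    cluster:
      initialNodeCount: 3
      nodeConfig:
        machineType: n1-standard-1
```

Your application deployments and services would then be applied on top of the newly created cluster (e.g. with kubectl apply) as a second step, keeping the whole procedure declarative.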