Pod on Fargate from EKS does not have access to AWS default credentials - aws-fargate

I am trying to run a pod on Fargate from EKS that needs to access S3 via the boto3 Python client, and I can't figure out why this is happening. It works just fine when scheduled on an EKS EC2 node.
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I have a properly set up Fargate profile and followed this guide.
Does anyone know why AWS credentials are not within the context of this pod? Does this have anything to do with the pod execution role?

I have a properly set up Fargate profile and followed this guide.
That's a great start and it will ensure your pods are scheduled on Fargate rather than EC2.
Does anyone know why AWS credentials are not within the context of this pod? Does this have anything to do with the pod execution role?
Without knowing what exactly you defined, it's impossible to troubleshoot, but yes, it's worth checking the pod execution role for starters.
However, given that you want to access an S3 bucket from your pod, you need to make sure the pod's service account uses the respective policy. Last year we introduced IRSA (IAM Roles for Service Accounts), which lets you assign least-privilege permissions at the pod level, and given you're on Fargate this is the way to go. So, please peruse and apply IRSA as per the docs and report back if anything is not working as expected.
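For illustration, here is a minimal IRSA sketch using eksctl; the cluster name, service account name, and policy ARN below are placeholders, not values taken from your setup:

# Placeholder names: "my-cluster" and "s3-reader" are assumptions.
# One-time setup: associate an IAM OIDC provider with the cluster.
eksctl utils associate-iam-oidc-provider --cluster my-cluster --approve

# Create a Kubernetes service account backed by an IAM role with S3 read access.
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace default \
  --name s3-reader \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve

# The pod spec then needs serviceAccountName: s3-reader; boto3 picks up
# the injected web identity token automatically.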

Related

In Amazon EKS - how to view logs which are prior to EKS Fargate node creation and logs while a pod is starting up

I'm using Amazon EKS Fargate. I can see container logs using a Fluent Bit sidecar etc. with no problem at all, but those logs ONLY show what is happening inside the container AFTER it has started up.
I enabled AWS EKS cluster logging fully.
Now I would like to see logs in CloudWatch which are the equivalent of the
kubectl describe pod
command
I have searched the ENTIRE CloudWatch cluster log group and am not able to find logs like
"pulling image into container"
"efs not mounted"
etc.
I want to see logs in CloudWatch prior to the actual container creation stage.
Is it possible at all using EKS Fargate?
Thanks a bunch
You can use Container Insights, which collects metrics via performance log events that use the embedded metric format. The logs are stored in CloudWatch Logs, and CloudWatch automatically generates several metrics from them which you can view in the CloudWatch console.
In Amazon EKS and Kubernetes, Container Insights uses a containerized version of the CloudWatch agent to discover all of the running containers in a cluster. It then collects performance data at every layer of the performance stack.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-view-metrics.html
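As a rough sketch, assuming Container Insights is enabled and your cluster is named my-cluster (a placeholder), the performance log events it stores can be pulled with the AWS CLI:

# The log group name follows the Container Insights naming convention;
# "my-cluster" is a placeholder for your cluster name.
aws logs filter-log-events \
  --log-group-name /aws/containerinsights/my-cluster/performance \
  --limit 20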

Does AWS EKS Fargate pricing charge for the cluster even if you don't have any pods running

If I create an EKS Fargate cluster and just keep it without deploying anything else on top of it, do I still incur a charge? From what I have seen it does not seem to, and when I went through the pricing here https://aws.amazon.com/eks/pricing/
I think you get charged only once you start to run your pods. I want to confirm this. I am not sure if AWS will charge for the control plane as mentioned here.
I think you get charged only once you start to run your pods.
The AWS-managed control plane is a recurring charge, regardless of whether you have worker nodes (e.g. EC2 or Fargate) or not. Supporting resources such as NAT gateways and EIPs are also chargeable.

How do we restore Kubeflow from backups if the installation is destroyed, or how can we bring Kubeflow back as it was if the EKS cluster is destroyed

How am I going to take a backup of my Kubeflow pipelines and restore them if the installation fails or the EKS cluster is destroyed? I have done some digging into the image of the vanilla database I am using to figure out how to back it up and restore it, but I haven't had any luck so far.
I have Kubeflow running on an AWS EKS cluster, with 15-16 Kubeflow pipelines running, and I used the vanilla (in-cluster) database.
So now I need your help to understand how to back up the pipelines and restore them if anything happens to Kubeflow or EKS.
If you inspect the Kubeflow manifest file, you'll see the list of dependencies it has. The largest one is the database. For those on AWS, you can use RDS as a target for running the database rather than a self-hosted one in Kubernetes.
You can see the instructions for that here.
I used to be the Product Manager for Open Source MLOps at AWS and my team wrote that post.
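If you stay on the vanilla in-cluster database, a minimal backup sketch (the namespace, deployment name, and bucket below are assumptions about a default install, not verified values from your setup) is to dump the MySQL database and copy it off-cluster:

# Placeholder names: adjust the namespace, deployment, and bucket to your install.
# Dump the pipelines metadata database from the in-cluster MySQL pod.
kubectl -n kubeflow exec deploy/mysql -- mysqldump -u root --all-databases > kfp-backup.sql

# Copy the dump somewhere that survives the cluster, e.g. S3.
aws s3 cp kfp-backup.sql s3://my-backup-bucket/kubeflow/kfp-backup.sql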

Spinnaker AWS Provider not allowing create cluster

I deployed Spinnaker in AWS to run a test in the same account. However, I am unable to configure server groups. If I click create, the task is queued with the account configured via hal on the CLI. Is there any way to troubleshoot this? The logs are looking light.
The storage backend needs to be configured correctly.
https://www.spinnaker.io/setup/install/storage/
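As a rough sketch (the bucket name and region are placeholders), configuring an S3 storage backend with Halyard looks like this:

# Placeholder bucket and region; pick the backend that matches your setup.
hal config storage s3 edit \
  --bucket my-spinnaker-bucket \
  --region us-east-1
hal config storage edit --type s3

# Redeploy so the change takes effect.
hal deploy apply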

Can't deploy marketplace object on GKE

I have a running Kubernetes cluster on Google Cloud Platform.
I want to deploy a postgres image to my cluster.
When selecting the image and my cluster, I get the error:
insufficient OAuth scope
I have been reading about it for a few hours now and couldn't get it to work.
I managed to set the scope of the VM to allow APIs:
Cloud API access scopes
Allow full access to all Cloud APIs
But from the GKE cluster details, I see that everything is disabled except Stackdriver.
Why is it so difficult to deploy an image or to change the scope?
How can I modify the cluster permissions without deleting and recreating it?
The easiest way is to delete and recreate the cluster, because there is no direct way to modify the scopes of an existing cluster. However, there is a workaround: create a new node pool with the correct scopes and make sure to delete the old node pools. The cluster scopes will change to reflect the new node pool.
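A minimal sketch of that workaround with gcloud (the cluster, zone, and pool names are placeholders):

# Placeholder names; adjust the cluster, zone, and pool names to your setup.
# Create a replacement node pool with broader OAuth scopes.
gcloud container node-pools create new-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --scopes cloud-platform

# Delete the old pool once workloads have moved to the new one.
gcloud container node-pools delete default-pool \
  --cluster my-cluster \
  --zone us-central1-a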
More details can be found in this post.