My application running in EKS (AWS Kubernetes) is failing to access an S3 bucket.
I'm getting 400 Bad Request errors in my app.
I suspect a permission is missing, so for testing I attached arn:aws:iam::aws:policy/AmazonS3FullAccess to every role I could find related to my EKS cluster. It is still failing.
Using an S3 client from my local computer, I can access the bucket, so I suspect I'm missing some configuration.
Any ideas?
OK... the issue was resolved. I'm leaving this here for future reference.
The problem was a mismatch between the bucket's region (us-west-2) and the endpoint I had configured in my application; it should have been s3.us-west-2.amazonaws.com.
The error returned by S3 was not clear.
I hope this helps others.
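For anyone hitting the same thing, a minimal sketch of what the fix looks like with the AWS SDK for JavaScript v3 (the bucket and key names here are made up):

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

// The client region must match the region the bucket actually lives in
// (us-west-2 here), so the SDK resolves the endpoint to s3.us-west-2.amazonaws.com.
const s3 = new S3Client({ region: "us-west-2" });

async function main() {
  // "my-bucket" and "config.json" are placeholders for illustration only.
  const response = await s3.send(
    new GetObjectCommand({ Bucket: "my-bucket", Key: "config.json" })
  );
  console.log(response.ContentLength);
}

main().catch(console.error);

Letting the SDK derive the endpoint from the region also means you never have to hard-code s3.us-west-2.amazonaws.com in the first place.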
I am trying to set up a simple on-demand backup of an S3 bucket in AWS, and whatever I try I always get an Access Denied error.
I have tried creating a new bucket that is completely public, I've tried setting the access policy on the backup vault, and I've tried different regions; all have the same result: Access Denied!
The messaging doesn't advise anything other than Access Denied, which is not exactly helpful!
Can anyone give me some insight into what this message is referring to and, more importantly, how I can resolve this issue?
For AWS Backup, you need to set up a service role.
Traditionally you need two policies attached:
[AWSBackupServiceRolePolicyForBackup]
[AWSBackupServiceRolePolicyForRestore]
For S3, it seems there are separate policies that you need to attach to your service role:
[AWSBackupServiceRolePolicyForS3Backup]
[AWSBackupServiceRolePolicyForS3Restore]
Just putting this here for those who will be looking for this answer.
To solve this problem with AWS CDK (JavaScript/TypeScript), you can use the following examples:
https://github.com/SimonJang/blog-aws-backup-s3/blob/68a05f8cb443411a23f02aa0c188adfe15bab0ff/infrastructure/lib/infrastructure-stack.ts#L63-L200
or this:
https://github.com/finnishtransportagency/hassu/blob/8adc0bea3193ff016a9aaa6abe0411292714bbb8/deployment/lib/hassu-database.ts#L230-L312
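Along the same lines, a rough CDK (TypeScript) sketch of a service role with the four policies mentioned above; the construct IDs are made up, and you should double-check the managed-policy paths in your account:

import * as cdk from 'aws-cdk-lib';
import * as iam from 'aws-cdk-lib/aws-iam';
import { Construct } from 'constructs';

export class BackupRoleStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Service role that AWS Backup assumes when backing up / restoring.
    new iam.Role(this, 'S3BackupServiceRole', {
      assumedBy: new iam.ServicePrincipal('backup.amazonaws.com'),
      managedPolicies: [
        // The traditional backup/restore policies sit under the service-role/ path.
        iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AWSBackupServiceRolePolicyForBackup'),
        iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AWSBackupServiceRolePolicyForRestore'),
        // The S3-specific policies are the ones that cleared the Access Denied above
        // (verify the exact path of these two in your account).
        iam.ManagedPolicy.fromAwsManagedPolicyName('AWSBackupServiceRolePolicyForS3Backup'),
        iam.ManagedPolicy.fromAwsManagedPolicyName('AWSBackupServiceRolePolicyForS3Restore'),
      ],
    });
  }
}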
I've confirmed that my S3 credentials are set correctly and have even tried granting full permissions on the bucket, but I still receive the same message. On the Azure side, I've added Storage Blob Data Owner permissions to my account and can list files I manually upload through the portal with the credentials I used (signing in through AAD rather than using a token).
Any help is appreciated here.
As it turned out, there were no issues with either the Azure or AWS configuration. Trying the same commands in a Linux VM worked perfectly. If you're experiencing this error, it may be an issue with the AWS CLI or some other local configuration.
We are facing an error while trying to load a huge zip file from an S3 bucket into Redshift, from an EC2 instance and even from Aginity. What is the real issue here?
As far as we have checked, this could be because of the VPC NACL rules, but we are not sure.
Error:
ERROR: Connection timed out after 50000 milliseconds
I also got this error when Enhanced VPC Routing was enabled; check the routing from your Redshift cluster to S3.
There are several ways to let the Redshift cluster reach S3; see the link below:
https://docs.aws.amazon.com/redshift/latest/mgmt/enhanced-vpc-routing.html
I solved this error by setting up NAT for the private subnet used by my Redshift cluster.
I think you are correct; it might be because of the bucket access rules or the secret/access keys.
Here are some pointers to debug it further if the above doesn't work.
Create a small zip file and try again, to check whether it is something caused by the size (though I don't think that is a likely cause).
Split your zip file into multiple zip files and create a manifest file for loading, rather than loading a single file (see the sketch after this answer).
I hope you will find this useful.
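To illustrate the manifest idea from the pointers above, here is a hedged sketch (bucket name, keys, and region are placeholders) that writes a Redshift COPY manifest listing the split files, using the AWS SDK for JavaScript v3:

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Hypothetical bucket and object keys, for illustration only.
const bucket = "my-data-bucket";
const parts = ["data/part-001.gz", "data/part-002.gz", "data/part-003.gz"];

// A COPY manifest is a JSON document with an "entries" array of { url, mandatory } objects.
const manifest = {
  entries: parts.map((key) => ({ url: `s3://${bucket}/${key}`, mandatory: true })),
};

const s3 = new S3Client({ region: "us-west-2" });

async function main() {
  await s3.send(
    new PutObjectCommand({
      Bucket: bucket,
      Key: "data/load.manifest",
      Body: JSON.stringify(manifest),
      ContentType: "application/json",
    })
  );
  // The COPY statement would then reference s3://my-data-bucket/data/load.manifest
  // with the MANIFEST option instead of pointing at a single huge file.
}

main().catch(console.error);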
You should create an IAM role that authorizes Amazon Redshift to access other AWS services, such as S3, on your behalf. You must associate that role with your Amazon Redshift cluster before you can use the role to load or unload data.
Check the link below for setting up the IAM role:
https://docs.aws.amazon.com/redshift/latest/mgmt/copy-unload-iam-role.html
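If you manage your infrastructure with the CDK (TypeScript), a minimal sketch of such a role could look like this; the construct IDs and the broad read-only policy are assumptions, not requirements:

import * as cdk from 'aws-cdk-lib';
import * as iam from 'aws-cdk-lib/aws-iam';
import { Construct } from 'constructs';

export class RedshiftCopyRoleStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Role that Redshift assumes when running COPY/UNLOAD against S3.
    new iam.Role(this, 'RedshiftCopyRole', {
      assumedBy: new iam.ServicePrincipal('redshift.amazonaws.com'),
      managedPolicies: [
        // Broad for brevity; scope this down to the specific buckets in practice.
        iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonS3ReadOnlyAccess'),
      ],
    });
    // The role still has to be associated with the cluster (as described in the
    // linked documentation) before a COPY statement can reference it.
  }
}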
I got this error when the Redshift cluster had Enhanced VPC Routing enabled, but no route in the route table for S3. Adding the S3 endpoint fixed the issue (see the Enhanced VPC Routing documentation linked above).
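For completeness, a small CDK (TypeScript) sketch of adding that S3 gateway endpoint; the VPC ID is a placeholder, and Vpc.fromLookup needs an explicit account/region on the stack:

import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import { Construct } from 'constructs';

export class RedshiftS3EndpointStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Look up the VPC the Redshift cluster runs in; the vpcId below is a placeholder.
    const vpc = ec2.Vpc.fromLookup(this, 'ClusterVpc', { vpcId: 'vpc-0123456789abcdef0' });

    // A gateway endpoint adds an S3 route to the subnet route tables, so a cluster
    // with Enhanced VPC Routing can reach S3 without a NAT gateway.
    vpc.addGatewayEndpoint('S3Endpoint', {
      service: ec2.GatewayVpcEndpointAwsService.S3,
    });
  }
}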
After making some changes to an AWS-hosted static website, I deleted an AWS S3 bucket through the AWS console. However, the bucket is now orphaned. Although it is not listed in the AWS console, I can still reach what is left of it through the CLI and through the URI.
When I try to recreate a www bucket with the same name, the AWS console returns the following error:
Bucket already exists
The bucket with issues has a www prefix, so now I have two different versions (www and non-www) of the same website.
The problem URI is:
www.michaelrieder.com and www.michaelrieder.com.s3-eu-west-1.amazonaws.com
I made many failed attempts to delete the bucket using the aws s3 CLI utility. I tried aws s3 rb --force, aws s3 rm, and any other command I remotely thought might work.
I need to delete and recreate the bucket with exactly the same name so I can have www website redirection working correctly, as AWS enforces static website naming conventions strictly.
When I execute the aws s3 CLI command, for example:
aws s3 rb s3://www.michaelrieder.com --force --debug
A typical CLI error message is:
An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied
I thought it might be a cache-related issue and that the bucket would flush itself after a period of time, but the issue has persisted for over 48 hours.
It seems to be a permissions issue, but I cannot find a way to change the phantom bucket's permissions or any method of deleting the bucket or even its individual objects, since I do not have access to the bucket via the AWS console or the aws s3 CLI.
Appreciate any ideas. Please help.
Creating folders and uploading files to my S3 bucket stopped working.
The remote server returned an error: (403) Forbidden.
Everything seemed to work previously, as I did not change anything recently.
After days of testing, I see that I am able to create folders in my bucket from localhost, but the same code doesn't work on the EC2 instance.
I must resolve the issue ASAP.
Thanks
diginotebooks
Does your EC2 instance have a role? If yes, what is this role? Is it possible that someone detached or modified a policy that was attached to it?
If your instance doesn't have a role, how do you upload files to S3? Using the AWS CLI tools? Same questions for the IAM profile used.
If you did not change anything, are you using the same IAM credentials on the server and on localhost? That may be related.
Just random thoughts...
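One quick way to see which identity the code is actually running as, both on localhost and on the EC2 instance, is an sts:GetCallerIdentity call; a minimal sketch with the AWS SDK for JavaScript v3:

import { STSClient, GetCallerIdentityCommand } from "@aws-sdk/client-sts";

async function main() {
  // Uses whatever credentials the default provider chain finds
  // (instance profile, environment variables, shared config, ...).
  const sts = new STSClient({});
  const identity = await sts.send(new GetCallerIdentityCommand({}));
  console.log(identity.Arn); // e.g. the assumed-role ARN of the instance profile
}

main().catch(console.error);

Comparing the ARN printed on localhost with the one printed on the instance should show whether the same identity (and therefore the same policies) is being used in both places.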