How can I read anonymously POSTed files on AWS S3?

I have a bucket (let's call it bucket-1) on AWS S3 from which I cannot read. I have another bucket (let's call it bucket-2) from which I can read.
I can list the contents of both buckets, but I cannot copy any of the contents of bucket-1.
% aws s3 ls s3://bucket-1/ | grep 0046
2016-03-09 15:39:50 4413909 0046f326-6e7d-4c16-80e4-491fa0b19dd7
% aws s3 cp s3://bucket-1/0046f326-6e7d-4c16-80e4-491fa0b19dd7 .
A client error (403) occurred when calling the HeadObject operation: Forbidden
In the course of trying to figure this out I switched back to using the Access Keys of the AWS Account rather than those of an IAM User (assuming that the AWS Account has essentially all privileges).
Start digging
Assuming the cause lies in the permissions, I ran
% aws s3api get-bucket-acl --bucket bucket-1
% aws s3api get-bucket-acl --bucket bucket-2
Common
It shows my AWS Account as the owner of both buckets.
Different
bucket-2 has one permission: FULL_CONTROL for my AWS Account.
bucket-1 lists several permissions, but FULL_CONTROL is not among them. It lists
READ
WRITE
READ_ACP
WRITE_ACP
for my AWS Account.
In the web console the objects in bucket-1 don't have any permission set. The objects in bucket-2 have the same permission as the bucket they are in.
It is likely that different methods were used to store the files in the two buckets. The objects in bucket-2 were likely created via the API, while the objects in bucket-1 originate from an anonymous POST. (Yes, bucket-1 has the permission WRITE for Everyone.)
Digging deeper
Even with the credentials of my AWS Account I don't have permission to query the ACL of the object.
% aws s3api get-object-acl --bucket bucket-1 --key 0046f326-6e7d-4c16-80e4-491fa0b19dd7
A client error (AccessDenied) occurred when calling the GetObjectAcl operation: Access Denied
% aws s3api get-object --bucket bucket-1 --key 0046f326-6e7d-4c16-80e4-491fa0b19dd7 local.file
A client error (AccessDenied) occurred when calling the GetObject operation: Access Denied
% aws s3api head-object --bucket bucket-1 --key 0046f326-6e7d-4c16-80e4-491fa0b19dd7
A client error (403) occurred when calling the HeadObject operation: Forbidden
Questions
On a helpful site on the internet I found that one can use put-object-acl to set the ACL to bucket-owner-full-control (see the sketch after these questions). I tried that. But you have to do this with the credentials of the owner of the file - and how can you do that if the file was posted anonymously?
What else can I try?
Do objects on S3, like buckets, have an owner?
If so, where in the web console can I find that information?
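For reference, the put-object-acl attempt mentioned above would look roughly like this in boto3 (the key is the object from the listing earlier; the call only succeeds when run with the credentials of the object's owner, which is exactly what an anonymous upload leaves you without):

import boto3

s3 = boto3.client('s3')
# Try to hand full control of the object to the bucket owner.
# Fails with AccessDenied unless the caller owns the object.
s3.put_object_acl(
    Bucket='bucket-1',
    Key='0046f326-6e7d-4c16-80e4-491fa0b19dd7',
    ACL='bucket-owner-full-control',
)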

Don't allow anonymous uploads to your bucket. If you do, and the uploader doesn't set the permissions correctly, the only action available to you is to delete the object.
It is possible to set the bucket policy so that an anonymous upload is denied unless the uploader sets the ACL to bucket-owner-full-control, but that only helps for future uploads.
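A sketch of that policy, applied with boto3 (the bucket name is a placeholder): any anonymous PUT that does not set the bucket-owner-full-control canned ACL is rejected.

import json
import boto3

bucket = 'bucket-1'  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUploadsWithoutOwnerFullControl",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
        }
    }]
}

boto3.client('s3').put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))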
In any event: is there a legitimate application for anonymous uploads? Highly dubious.

Related

AWS S3 Bucket created with force_delete=true fails to delete with Access Denied via terraform

I create an s3 bucket via terraform for the purpose of storing VPC Flow Logs:
resource "aws_s3_bucket" "bucket" {
bucket = local.bucket_name
force_destroy = true
tags = var.tags
}
After the bucket is created and the flow-log service is created, there are a few entries under "/AWSLogs/...".
After I remove the flow-log service I attempt the terraform destroy, but it fails with the following entry, one for each object:
deleting: S3 object (AWSLogs/.../...98d659c.log.gz) version (null): AccessDenied: Access Denied
There are no bucket policies, because they get deleted first.
The ACLs give the bucket owner and the S3 log delivery group full access, everything else is turned off, and the owner is set to data.aws_canonical_user_id.current.id.
ACL permissions are not quite enough. The IAM role you are using requires the s3:DeleteObject* permissions.
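A minimal sketch of granting the missing permissions as an inline policy via boto3 (the role and bucket names below are placeholders, not values from the question):

import json
import boto3

role_name = 'terraform-deployer'  # placeholder: the role terraform runs as
bucket = 'my-flow-logs-bucket'    # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # DeleteObjectVersion is also needed when force_destroy removes object versions.
        "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
        "Resource": f"arn:aws:s3:::{bucket}/*"
    }]
}

boto3.client('iam').put_role_policy(
    RoleName=role_name,
    PolicyName='allow-s3-object-deletion',
    PolicyDocument=json.dumps(policy),
)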

S3 Access Denied with boto for private bucket as root user

I am trying to access a private S3 bucket that I've created in the console with boto3. However, when I try any action, e.g. listing the bucket contents, I get
import boto3

boto3.setup_default_session()
s3Client = boto3.client('s3')
blist = s3Client.list_objects(Bucket=bucketName)['Contents']
ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
I am using my default profile (no need for IAM roles). The Access Control List on the browser states that the bucket owner has list/read/write permissions. The canonical id listed as the bucket owner is the same as the canonical id I get when I go to 'Your Security Credentials'.
In short, it feels like the account permissions are ok, but boto is not logging in with the right profile. In addition, running similar commands from the command line e.g.
aws s3api list-buckets
also gives Access Denied. I have no problem running these commands at work, where I have a work log-in and IAM roles. It's just running them on my personal 'default' profile.
Any suggestions?
It appears that your credentials have not been stored in a configuration file.
You can run this AWS CLI command:
aws configure
It will then prompt you for the Access Key and Secret Key, and will store them in the ~/.aws/credentials file. That file is automatically used by the AWS CLI and boto3.
It is a good idea to confirm that it works via the AWS CLI first, then you will know that it should work for boto3 also.
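If the working keys live under a particular profile, you can also point boto3 at that profile explicitly; a small sketch (the profile name is whatever you configured):

import boto3

# Use the credentials stored under a specific profile in ~/.aws/credentials.
session = boto3.Session(profile_name='default')
s3 = session.client('s3')
print([b['Name'] for b in s3.list_buckets()['Buckets']])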
I would highly recommend that you create IAM credentials and use them instead of root credentials. It is quite dangerous if the root credentials are compromised. A good practice is to create an IAM User for a specific application, then limit the permissions granted to that application. This avoids situations where a programming error (or a security compromise) could lead to unwanted behaviour (e.g. resources being used or data being deleted).

S3 access denied when trying to run aws cli

Using the AWS CLI, I'm trying to run
aws cloudformation create-stack --stack-name FullstackLambda --template-url https://s3-us-west-2.amazonaws.com/awsappsync/resources/lambda/LambdaCFTemplate.yam --capabilities CAPABILITY_NAMED_IAM --region us-west-2
but I get the error
An error occurred (ValidationError) when calling the CreateStack operation: S3 error: Access Denied
I have already set my credential with
aws configure
PS I got the create-stack command from the AppSync docs (https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-lambda-resolvers.html)
Looks like you accidentally dropped the letter l at the end of the template file name:
LambdaCFTemplate.yam -> LambdaCFTemplate.yaml
First, make sure the S3 URL is correct. But since this is a 403, I doubt that's the case.
Your error could result from a few different scenarios:
1. If both the APIs and the IAM user are MFA-protected, you have to generate temporary credentials using aws sts get-session-token and use them (see the sketch below, after point 2).
2. Use a role to give CloudFormation read access to the template object in S3. First create an IAM role with read access to S3, then create a parameter like the one below and reference it in the resource properties IamInstanceProfile block:
"InstanceProfile":{
"Description":"Instance Profile Name",
"Type":"String",
"Default":"iam-test-role"
}
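For point 1, a sketch of minting temporary credentials with boto3 (the MFA device ARN and token code are placeholders):

import boto3

sts = boto3.client('sts')
resp = sts.get_session_token(
    SerialNumber='arn:aws:iam::123456789012:mfa/my-user',  # your MFA device ARN
    TokenCode='123456',                                    # current code from the device
)
creds = resp['Credentials']

# Use the temporary credentials for the CloudFormation call.
cfn = boto3.client(
    'cloudformation',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
    region_name='us-west-2',
)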

Download from Requester Pays S3 bucket using EC2 identity

I'm trying to list and download files from a Requester Pays S3 bucket:
aws s3 ls --request-payer requester s3://requester-pays-bucket/
I'm running this command from an EC2 instance, but it fails:
Unable to locate credentials. You can configure credentials by running "aws configure".
The error is clear, however I'm still a little surprised. The goal of a Requester Pays bucket is to offload the cost of S3 data transfers to the requester. Since I'm initiating my request from EC2, my identity as requester should already be clear to S3, no?
Can S3 or the AWS CLI somehow automatically pick up my identity from the EC2 instance I'm running on? Or do I have to provide credentials in some explicit way?
You have to explicitly provide the credentials of an IAM user that has access to your S3 bucket. Go to the IAM dashboard of your AWS account and create a new user with programmatic access to S3. You will then be given an access key ID and a secret access key.
Then log in to your EC2 instance, run "aws configure" in your terminal, and you will be asked for the access key ID, the secret access key, and (optionally) a default region. Enter these details and you are good to go with your command.
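With credentials configured, the Requester Pays flag is also available in boto3; a small sketch (the bucket name and key below are placeholders):

import boto3

s3 = boto3.client('s3')
bucket = 'requester-pays-bucket'  # placeholder

# List objects while acknowledging that the requester pays the transfer cost.
resp = s3.list_objects_v2(Bucket=bucket, RequestPayer='requester')
for obj in resp.get('Contents', []):
    print(obj['Key'])

# Download a single object the same way (the key is a placeholder).
body = s3.get_object(Bucket=bucket, Key='some/key', RequestPayer='requester')['Body'].read()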

How to fix AWS S3 bucket error "Sorry! You do not have permissions to view this bucket."

After messing around with S3 bucket permissions, I can't access the S3 bucket from the AWS console or the CLI. I always get this error from the console:
Sorry! You do not have permissions to view this bucket.
Using the CLI, any s3api call gets Access Denied.
A client error (AccessDenied) occurred when calling the GetBucketVersioning operation: Access Denied
A client error (AccessDenied) occurred when calling the PutObjectAcl operation: Access Denied
Does anyone know how to fix this issue?
I solved the problem in the end: it was the bucket policy, where my IP was blocked instead of allowed access. I connected from a different IP and was able to update the bucket policy.
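For context, a bucket policy with this shape is a common way to lock yourself out: a Deny that matches every caller whose IP is outside the allowed range also denies the bucket owner once their IP changes. An illustrative sketch (the bucket name and CIDR are placeholders, not values from the question):

import json

# Illustrative only: a policy like this blocks console and CLI access
# for anyone, including the owner, connecting from outside the range.
lockout_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
        "Condition": {"NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}}
    }]
}
print(json.dumps(lockout_policy, indent=2))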