What is the difference between the two ACL settings in AWS S3? - amazon-s3

I'm practicing deployment using AWS S3 and CloudFront.
The issue I ran into was that account B needed to access an S3 bucket created by account A and upload objects to it using GitHub Actions.
I solved this issue by changing two things.
First, I added the external account as a grantee in the S3 bucket ACL, as shown in the picture.
Second, I added --acl bucket-owner-full-control to the .yml file for GitHub Actions, like below.
...
- name: S3 Deploy
  run: aws s3 sync ./dist s3://s3-bucket-name/ --acl bucket-owner-full-control
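For reference, the first setting (adding account B as a grantee) can also be scripted instead of done in the console. A rough sketch, where the bucket name and canonical IDs are placeholders; note that put-bucket-acl replaces the whole ACL rather than appending to it:
aws s3api put-bucket-acl --bucket s3-bucket-name \
  --grant-full-control id=ACCOUNT_A_CANONICAL_ID \
  --grant-write id=ACCOUNT_B_CANONICAL_ID \
  --grant-read id=ACCOUNT_B_CANONICAL_ID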
With both options set, the deployment works.
But I don't understand why the --acl bucket-owner-full-control option is needed.
I thought the first setting alone was enough for account B to access the S3 bucket owned by account A.
I found something that may be related to this topic, included below. Is it related to this issue?

Related

Generate AWS CloudFormation Template from Pre-Existing S3 bucket

I have an S3 bucket that already has all the correct policies, lifecycles, etc. that I like.
I am converting this pre-existing setup into Terraform infrastructure-as-code because we are going to be deploying in multiple regions.
I cannot seem to figure out how to export a CloudFormation template of a pre-existing S3 bucket. I can figure it out for generic services, but not for a specific S3 bucket.
Can this be done?
E.g., I would like to do something like what is described here, but for an S3 bucket.
Otherwise, I can try my best to look at the GUI and see if I'm capturing all the little details, but I'd rather not.
(I am not including details on the contents of the bucket, of course, but just the configuration of the bucket. I.e., how would I recreate the same bucket, but using Terraform -- or CloudFormation -- instead.)
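For what it's worth, the Terraform side can at least be bootstrapped with terraform import. A sketch, where the resource and bucket names are placeholders; the actual arguments (policy, lifecycle rules, etc.) still have to be written out by hand until terraform plan reports no drift:
resource "aws_s3_bucket" "existing" {
  bucket = "my-existing-bucket"
}

terraform import aws_s3_bucket.existing my-existing-bucket
terraform plan   # shows the differences between the stub above and the real bucket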

Orphaned AWS s3 Bucket Cannot Be Deleted

After making some changes for an AWS-hosted static website, I deleted an S3 bucket through the AWS console. However, the bucket is now orphaned. Although it is not listed in the AWS console, I can still reach what is left of it through the CLI and through the URI.
When I try to recreate a www bucket with the same name, the AWS console returns the following error:
Bucket already exists
The bucket with issues has a www prefix, so now I have two different versions (www and non-www) of the same website.
The problem URI is:
www.michaelrieder.com and www.michaelrieder.com.s3-eu-west-1.amazonaws.com
I made many failed attempts to delete the bucket using the aws s3 CLI utility. I tried aws s3 rb --force, aws s3 rm, and any other command I thought might remotely work.
I need to delete and recreate the bucket with exactly the same name so I can get www website redirection working correctly, as AWS enforces static website naming conventions strictly.
When I execute the aws s3 CLI command for example:
aws s3 rb s3://www.michaelrieder.com --force --debug
A typical CLI error message is:
An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied
I thought it might be a cache-related issue and that the bucket would flush itself after a period of time, but the issue has persisted for over 48 hours.
It seems to be a permissions issue, but I cannot find a way to change the phantom bucket's permissions, or any method of deleting the bucket or even its individual objects, since I do not have access to the bucket via the AWS console or the aws s3 CLI.
Appreciate any ideas. Please help.
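For anyone checking the same thing, two read-only calls show whether the credentials in use can still see the bucket at all (the bucket name below is the one from the question):
aws s3api get-bucket-location --bucket www.michaelrieder.com
aws s3api get-bucket-acl --bucket www.michaelrieder.com
If both return AccessDenied, the bucket is most likely owned by a different account than the one the CLI is using, which would also explain why rb and rm fail.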

gsutil cannot copy to s3 due to authentication

I need to copy many (1000+) files from GCS to S3 to leverage an AWS Lambda function. I have edited ~/.boto.cfg and commented out the two AWS authentication parameters, but a simple gsutil ls s3://mybucket fails from either a GCE or an EC2 VM.
The error is: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
I use gsutil version 4.28, and the locations of the GCS and S3 buckets are respectively US-CENTRAL1 and US East (Ohio), in case this is relevant.
I am clueless, as the AWS key is valid and I have enabled http/https. Downloading from GCS and uploading to S3 using Cyberduck on my laptop is impractical (>230 GB).
As per https://issuetracker.google.com/issues/62161892, gsutil v4.28 does support AWS v4 signatures by adding to ~/.boto a new [s3] section like
[s3]
# Note that we specify region as part of the host, as mentioned in the AWS docs:
# http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
host = s3.us-east-2.amazonaws.com
use-sigv4 = True
That section is inherited from boto but is not currently created by gsutil config, so it needs to be added explicitly for the target endpoint.
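With that section in place, a direct GCS-to-S3 copy should work; a sketch with placeholder bucket names:
gsutil -m cp -r gs://my-gcs-bucket/prefix s3://mybucket/prefix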
For S3-to-GCS transfers, I will consider the serverless Storage Transfer Service API instead.
I had a similar problem. Here is what I ended up doing on a GCE machine:
Step 1: Using gsutil, I copied files from GCS to my GCE hard drive
Step 2: Using aws cli (aws s3 cp ...), I copied files from GCE hard drive to s3 bucket
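In command form, the two steps were roughly the following (bucket names and the staging directory are placeholders):
# Step 1: GCS -> local disk on the GCE VM
gsutil -m cp -r gs://my-gcs-bucket/data /tmp/staging/
# Step 2: local disk -> S3
aws s3 cp /tmp/staging/ s3://my-s3-bucket/data/ --recursive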
The above methodology has worked reliably for me. I tried using gsutil rsync, but it failed unexpectedly.
Hope this helps

How to use S3 adapter cli for snowball

I'm using the S3 adapter to copy files from a Snowball device to a local machine.
Everything appears to be in order as I was able to run this command and see the bucket name:
aws s3 ls --endpoint http://snowballip:8080
But besides this, AWS doesn't offer any examples of calling the cp command. How do I provide the bucket name and the key with this --endpoint flag?
Further, when I ran this:
aws s3 ls --endpoint http://snowballip:8080/bucketname
It returned 'Bucket'... Not sure what that means, because I expected to see the files.
I can confirm the following is correct for Snowball and Snowball Edge, as #sqlbot says in the comment:
aws s3 ls --endpoint http://snowballip:8080 s3://bucketname/[optionalprefix]
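For cp I would expect the same pattern to apply (an assumption on my part, not something spelled out in the docs), i.e. the usual syntax with the endpoint flag added:
aws s3 cp ./localfile s3://bucketname/optionalprefix/localfile --endpoint http://snowballip:8080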
References:
http://docs.aws.amazon.com/cli/latest/reference/
http://docs.aws.amazon.com/snowball/latest/ug/using-adapter-cli.html
Just got one in the post

How to migrate S3 bucket to another account

I need to move data from an S3 bucket to another bucket, on a different account.
I was able to sync buckets by running:
aws s3 sync s3://my_old_bucket s3://my_new_bucket --profile myprofile
myprofile contents:
[profile myprofile]
aws_access_key_id = old_account_key_id
aws_secret_access_key = old_account_secret_access_key
I have also set policies on both the origin and the destination. The origin allows listing and getting, and the destination allows putting.
The command works perfectly, and I can log in to the other account and see the files. But I can't take ownership or make the new bucket public. I need to be able to make changes the same way I could in the old account. The new account is totally unrelated to the old account. It looks like the files are retaining their permissions and are still owned by the old account.
How can I set permissions in order to gain full access to files with the new account?
Add --acl bucket-owner-full-control to your CLI call, so your command should look something like this:
aws s3 sync s3://my_old_bucket s3://my_new_bucket --acl bucket-owner-full-control --profile myprofile
bucket-owner-full-control is a canned ACL (in short, a predefined set of grants); see the S3 Canned ACL documentation for what other options are available and what they do.
This will result in the destination bucket owner having full control of the objects.
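If you want to verify the effect on a given object, get-object-acl shows its owner and grants; a sketch, run with the destination account's credentials and with placeholder names:
aws s3api get-object-acl --bucket my_new_bucket --key path/to/object --profile new_account_profile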
"It looks like files are retaining permissions and they are still owned by the old account."
The uploader of the Object owns it.
As the new bucket is owned by the new account, you can grant the bucket owner full-control permission on the objects by passing the grants option:
--grants full=id=canonicalUserId-ofTheBucketOwner
You can view the canonical ID for the bucket owner in the AWS S3 console.
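Putting it together, a sketch (the canonical ID is a placeholder you look up first):
# run with the destination (new) account's credentials to print its canonical user ID
aws s3api list-buckets --query "Owner.ID" --output text
# then sync from the old account's profile, granting that ID full control on each object
aws s3 sync s3://my_old_bucket s3://my_new_bucket --grants full=id=CANONICAL_ID_OF_NEW_ACCOUNT --profile myprofile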