How do I modify object permissions using the aws s3 command?

It seems to me that aws s3 does not have a dedicated command to modify object permissions. I have some files that were uploaded via s3fuse, and afterwards I would like to make them public. Is there any way to make those files public using the aws s3 command?
Thanks.

I found out how to do this. There is another CLI, aws s3api, that mirrors the underlying S3 API. Using its put-object-acl command (http://docs.aws.amazon.com/cli/latest/reference/s3api/put-object-acl.html), I can change object permissions directly:
aws s3api put-object-acl --acl public-read --bucket mybucket --key targets/my_binary
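If you have many files to flip (for example everything under the targets/ prefix above), one option is to list the keys and feed each one to put-object-acl. A rough sketch, assuming the bucket and prefix from the question:

# List every key under the prefix; --output text emits them
# tab-separated, so break them onto separate lines before looping.
aws s3api list-objects-v2 --bucket mybucket --prefix targets/ \
    --query 'Contents[].Key' --output text | tr '\t' '\n' | \
while read -r key; do
    aws s3api put-object-acl --acl public-read --bucket mybucket --key "$key"
done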

Related

Query multiple files from an AWS S3 bucket

I have a list of file names that should be in the S3 bucket. I need to find out which of these files actually exist in the bucket. I was thinking about running a query with the AWS CLI, something like this:
aws s3api list-objects-v2 --bucket my-bucket --output json --query "Contents[?contains(Key, '23043')]"
Is there a way to pass a list of all keys i have instead of having to re-run this query for every key?
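JMESPath filters can be combined with ||, so a few fragments can be checked in a single call ('23044' below is just a made-up second fragment); note the filtering still happens client-side over the full listing:

aws s3api list-objects-v2 --bucket my-bucket --output json \
    --query "Contents[?contains(Key, '23043') || contains(Key, '23044')].Key"

For a long list of exact keys, an alternative sketch is to test each key directly with head-object, which exits non-zero for a missing key; keys.txt here is a hypothetical file holding one key per line:

while read -r key; do
    if aws s3api head-object --bucket my-bucket --key "$key" > /dev/null 2>&1; then
        echo "exists: $key"
    else
        echo "missing: $key"
    fi
done < keys.txt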

How to perform a multipart upload using AWS CLI s3api command?

From my understanding, an S3 object-store should be able to resume incomplete multipart uploads. I am trying to test this against a local S3 storage system.
I'm aware the AWS CLI will automatically perform a multipart upload on larger files via aws s3 cp, but how do I perform the same operation using aws s3api?
I've read the documentation and know that it's a three-step process:
1. Run aws s3api create-multipart-upload (docs)
2. Run aws s3api upload-part (docs)
3. Finish by running aws s3api complete-multipart-upload, which reconstructs the object (docs)
I attempted to perform these steps against a file that was 7 GB in size. After running the first command, I received the expected UploadId, which is required for all subsequent commands:
aws --profile foo --endpoint-url=https://endpoint:9003 s3api create-multipart-upload --bucket mybucket1 --key 'some7gfile.bin/01' --output json
{
    "Bucket": "mybucket1",
    "UploadId": "41a1462d-0d23-47f6-83aa-377e7aedbb8a",
    "Key": "some7gfile.bin/01"
}
I assumed the 01 portion of some7gfile.bin/01 denoted the first part, but that doesn't appear to be the case. What does the 01 mean? Is it arbitrary?
When I tried running the second upload-part command, I received an error:
aws --profile foo --endpoint-url=https://endpoint:9003 s3api upload-part --bucket mybucket1 --key 'some7gfile.bin/01' --part-number 1 --body part01 --upload-id "41a1462d-0d23-47f6-83aa-377e7aedbb8a" --output json
Error parsing parameter '--body': Blob values must be a path to a file.
Does the source file have to be split into different parts prior to running the upload-part step? If so, what's the most efficient way to do this?
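Yes: with the low-level s3api commands the splitting is on you, since --body expects a local file containing exactly one part's bytes. (As for the /01 suffix: --key is simply the final object name, so anything after the slash is arbitrary and unrelated to part numbers.) A sketch reusing the bucket, key, and upload-id from above; the 100 MB chunk size is an arbitrary choice within S3's 5 MB to 5 GB per-part limits:

# Split the source file into 100 MB chunks named partaa, partab, ...
# (GNU split syntax).
split -b 100M some7gfile.bin part

# Upload the chunks as consecutively numbered parts. Each call prints
# an ETag; collect these, since complete-multipart-upload needs every
# part's ETag and PartNumber to reassemble the object.
n=1
for f in part*; do
    aws --profile foo --endpoint-url=https://endpoint:9003 s3api upload-part \
        --bucket mybucket1 --key 'some7gfile.bin/01' \
        --part-number "$n" --body "$f" \
        --upload-id "41a1462d-0d23-47f6-83aa-377e7aedbb8a" --output json
    n=$((n + 1))
done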

Is there a way to point to a CloudFormation template in an S3 bucket when deploying with aws cloudformation deploy?

I know that I can use aws cloudformation create-stack or aws cloudformation update-stack with the --template-url switch to point to an existing template placed in an S3 bucket.
I would like to use aws cloudformation deploy, the same command for both creating and updating a CloudFormation stack, with a template I have already placed in my S3 bucket. Is this possible with any combination of the options?
The following syntax works, but it first uploads the template to the S3 bucket:
aws cloudformation deploy \
--stack-name my-stack \
--template-file my-stack-template.yaml \
--s3-bucket my-bucket \
--s3-prefix templates \
--profile my-profile \
--region us-east-1
It uploads the template my-stack-template.yaml as something like 1a381d4c65d9a3233450e92588a708b38.template in my-bucket/templates, which I do not want. I would like to deploy the stack using the template already in the S3 bucket, without needing a local copy on my computer.
Sadly, there is no such way. The only case in which the template is not re-uploaded is when there are no changes to deploy. You have to use create-stack (or update-stack) with --template-url if you want to use a pre-existing template in S3.
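For reference, a create-stack call against a template already sitting in S3 might look like the following; the object URL is an assumption about where the template was uploaded, and update-stack accepts the same switch:

aws cloudformation create-stack \
    --stack-name my-stack \
    --template-url https://my-bucket.s3.amazonaws.com/templates/my-stack-template.yaml \
    --profile my-profile \
    --region us-east-1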

Make sure that a PutBucketWebsite operation launched via an AWS CLI script is executed only once the target bucket has been created

I am trying to setup an AWS S3 bucket for static website hosting.
I want to automate the operations via a script that calls aws cli commands.
So far my script, simplified, looks like this:
aws s3api delete-bucket --bucket my-bucket --region eu-west-1
aws s3api create-bucket --bucket my-bucket --create-bucket-configuration LocationConstraint=eu-west-1
aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json
Sometimes this script works just fine. Sometimes, though, the following error occurs:
An error occurred (NoSuchBucket) when calling the PutBucketWebsite operation: The specified bucket does not exist
I guess this has to do with the fact that I first delete the bucket and then recreate it: by the time the PutBucketWebsite operation executes, the bucket has not yet been recreated.
Is there a way to make sure the PutBucketWebsite operation is executed only once my-bucket has been created?
You can use the wait command to ensure the bucket exists before you try to configure it:
aws s3api wait bucket-exists --bucket my-bucket
https://docs.aws.amazon.com/cli/latest/reference/s3api/wait/bucket-exists.html
This will poll every 5 seconds until the bucket exists, exiting with a non-zero return code after 20 failed checks.
It might also be a good idea to confirm that the bucket has been deleted properly before trying to recreate it:
aws s3api wait bucket-not-exists --bucket my-bucket
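Putting both waits into the original script gives:

aws s3api delete-bucket --bucket my-bucket --region eu-west-1
aws s3api wait bucket-not-exists --bucket my-bucket
aws s3api create-bucket --bucket my-bucket --create-bucket-configuration LocationConstraint=eu-west-1
aws s3api wait bucket-exists --bucket my-bucket
aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json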

Granting read access to the Authenticated Users group for a file

How do I grant read access to the Authenticated Users group for a file? I'm using s3cmd and would like to do it while uploading, but for now I'm focusing directly on changing the ACL. What should I put in for http://acs.amazonaws.com/groups/global/AuthenticatedUsers? I have tried every combination of AuthenticatedUsers possible:
./s3cmd setacl --acl-grant=read:http://acs.amazonaws.com/groups/global/AuthenticatedUsers s3://BUCKET/FILE
./s3cmd setacl --acl-grant=read:AuthenticatedUsers s3://BUCKET/FILE
This doesn't seem to be possible with s3cmd. Instead, I had to switch to the AWS CLI tools.
Here are the directions to install them:
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
It's possible to set the ACL to read for authenticated users during upload with the command:
aws s3 cp <file-to-upload> s3://<bucket>/ --acl authenticated-read
Plus a whole load of other combinations you can check out here:
http://docs.aws.amazon.com/cli/latest/reference/s3/index.html#cli-aws-s3
The following commands work for me with s3cmd version 1.6.0:
s3cmd setacl s3://<bucket>/<file-name> --acl-grant='read:http://acs.amazonaws.com/groups/global/AuthenticatedUsers'
for an individual file, and
s3cmd setacl s3://<bucket>/<dir-name> --acl-grant='read:http://acs.amazonaws.com/groups/global/AuthenticatedUsers' --recursive
for all files in a directory.
This is from http://s3tools.org/s3cmd:

Upload a file into the bucket:
~$ s3cmd put addressbook.xml s3://logix.cz-test/addrbook.xml
File 'addressbook.xml' stored as s3://logix.cz-test/addrbook.xml (123456 bytes)

Note about ACL (Access Control Lists): a file uploaded to an Amazon S3 bucket can either be private, that is, readable only by you, the possessor of the access and secret keys, or public, readable by anyone. Each file uploaded as public is not only accessible using s3cmd but also has an HTTP address, a URL, that can be used just like any other URL and accessed, for instance, by web browsers.

~$ s3cmd put --acl-public --guess-mime-type storage.jpg s3://logix.cz-test/storage.jpg
File 'storage.jpg' stored as s3://logix.cz-test/storage.jpg (33045 bytes)
Public URL of the object is: http://logix.cz-test.s3.amazonaws.com/storage.jpg

Now anyone can display the storage.jpg file in their browser. Cool, eh?
Try changing public to authenticated and that should work.
See http://docs.amazonwebservices.com/AmazonS3/latest/dev/ACLOverview.html#CannedACL, which explains on Amazon's side how their canned ACLs work: public in s3cmd translates to public-read in Amazon, so authenticated should translate to authenticated-read.
If you're willing to use Python, the boto library provides all the functionality to get and set an ACL; from the boto S3 documentation:
b.set_acl('public-read')
where b is a bucket object. In your case you would change 'public-read' to 'authenticated-read'. You can do something similar for keys (individual files).
If you want to do it at the bucket level, granting read access to the same group, you can do:
aws s3api put-bucket-acl --bucket bucketname --grant-read uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers
Docs - http://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-acl.html
Here is an example command that sets the ACL on an S3 object to authenticated-read:
aws s3api put-object-acl --acl authenticated-read --bucket mybucket --key myfile.txt