How to perform a multipart upload using AWS CLI s3api command?

From my understanding, an S3 object-store should be able to resume incomplete multipart uploads. I am trying to test this against a local S3 storage system.
I'm aware the AWS CLI will automatically perform a multipart upload on larger files via aws s3 cp, but how do I perform the same operation using aws s3api?
I've read the documentation and know that it's a three-step process:
Run aws s3api create-multipart-upload (docs)
Run aws s3api upload-part (docs)
Finish by running aws s3api complete-multipart-upload which reconstructs the object (docs)
I attempted to perform these steps against a file that was 7GB in size. After running the first command, I received the expected upload-id, which is required for all subsequent commands:
aws --profile foo --endpoint-url=https://endpoint:9003 s3api create-multipart-upload --bucket mybucket1 --key 'some7gfile.bin/01' --output json
{
    "Bucket": "mybucket1",
    "UploadId": "41a1462d-0d23-47f6-83aa-377e7aedbb8a",
    "Key": "some7gfile.bin/01"
}
I assumed the 01 portion of some7gfile.bin/01 denoted the first part, but that doesn't appear to be the case. What does the 01 mean? Is it arbitrary?
When I tried running the second upload-part command, I received an error:
aws --profile foo --endpoint-url=https://endpoint:9003 s3api upload-part --bucket mybucket1 --key 'some7gfile.bin/01' --part-number 1 --body part01 --upload-id "41a1462d-0d23-47f6-83aa-377e7aedbb8a" --output json
Error parsing parameter '--body': Blob values must be a path to a file.
Does the source file have to be split into different parts prior to running the upload-part step? If so, what's the most efficient way to do this?
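For reference, the remaining steps can be sketched as follows. This is an illustration rather than a verified answer: it assumes GNU coreutils split is available, that the chunks are named part01, part02, and so on, and that a parts.json file is assembled by hand from the ETag value returned by each upload-part call.
# Split the 7GB file into 1GB chunks named part01, part02, ... (every part except the last must be at least 5MB)
split --numeric-suffixes=1 -b 1G some7gfile.bin part
# Upload one chunk per part number; note the "ETag" each call returns
aws --profile foo --endpoint-url=https://endpoint:9003 s3api upload-part --bucket mybucket1 --key 'some7gfile.bin/01' --part-number 1 --body part01 --upload-id "41a1462d-0d23-47f6-83aa-377e7aedbb8a" --output json
# parts.json lists every part, e.g. {"Parts": [{"PartNumber": 1, "ETag": "\"<etag1>\""}, {"PartNumber": 2, "ETag": "\"<etag2>\""}]}
aws --profile foo --endpoint-url=https://endpoint:9003 s3api complete-multipart-upload --bucket mybucket1 --key 'some7gfile.bin/01' --upload-id "41a1462d-0d23-47f6-83aa-377e7aedbb8a" --multipart-upload file://parts.json --output json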

Related

Query multiple files from AWS S3 bucket

I have a list of file names. I need to find out which of these files actually exist in the S3 bucket. I was thinking about running a query with the AWS CLI, something like this:
aws s3api list-objects-v2 --bucket my-bucket --output json --query "Contents[?contains(Key, '23043')]"
Is there a way to pass the whole list of keys I have instead of having to re-run this query for every key?
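One possible approach, sketched under the assumption that the wanted keys live one per line in a local file keys.txt (which is not part of the original question), is to list the bucket once and compare the two lists locally:
# Dump every key in the bucket once (the CLI paginates automatically), one per line, sorted
aws s3api list-objects-v2 --bucket my-bucket --query 'Contents[].Key' --output text | tr '\t' '\n' | sort > bucket-keys.txt
sort keys.txt > wanted-keys.txt
# Keys from the list that exist in the bucket
comm -12 bucket-keys.txt wanted-keys.txt
# Keys from the list that are missing
comm -23 wanted-keys.txt bucket-keys.txt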

Is there a way to point to a CloudFormation template in an S3 Bucket while deploying using aws cloudformation deploy

I know that I can use aws cloudformation create-stack or aws cloudformation update-stack with --template-url switch to point to an existing template placed in S3 Bucket.
I would like to use aws cloudformation deploy, the single command that handles both creating and updating a CloudFormation stack, with a template I have already placed in my S3 Bucket. Is it possible with any combination of the options?
The following syntax works, but it first uploads the template to the S3 Bucket:
aws cloudformation deploy \
    --stack-name my-stack \
    --template-file my-stack-template.yaml \
    --s3-bucket my-bucket \
    --s3-prefix templates \
    --profile my-profile \
    --region us-east-1
The template my-stack-template.yaml gets uploaded as something like 1a381d4c65d9a3233450e92588a708b38.template under my-bucket/templates, which I do not want. I would like to deploy the stack through this method using the template already placed in the S3 Bucket, without needing it on my local computer.
Sadly, there is no such way. The only case in which the template is not re-uploaded is when there are no changes to deploy. You have to use create-stack if you want to use pre-existing templates in S3.
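For reference, the create-stack route looks roughly like this; it is only a sketch, and the template URL is an assumed location based on the bucket and prefix from the question:
# create-stack (and update-stack) accept a template that already lives in S3 via --template-url
aws cloudformation create-stack \
    --stack-name my-stack \
    --template-url https://my-bucket.s3.amazonaws.com/templates/my-stack-template.yaml \
    --region us-east-1 \
    --profile my-profile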

Make sure that a PutBucketWebsite operation launched via an aws cli script is executed only once the target bucket has been created

I am trying to setup an AWS S3 bucket for static website hosting.
I want to automate the operations via a script that calls aws cli commands.
So far my script, simplified, looks like this:
aws s3api delete-bucket --bucket my-bucket --region eu-west-1
aws s3api create-bucket --bucket my-bucket --create-bucket-configuration LocationConstraint=eu-west-1
aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json
Sometimes this script works just fine. Sometimes, though, the following error occurs:
An error occurred (NoSuchBucket) when calling the PutBucketWebsite operation: The specified bucket does not exist
I guess this has to do with the fact that I delete the bucket and then recreate it, and by the time the PutBucketWebsite operation executes, the bucket has not yet been recreated.
Is there a way to make sure the PutBucketWebsite operation is executed only once my-bucket has been created?
You can use the wait command to ensure the bucket exists before you run further operations against it:
aws s3api wait bucket-exists --bucket my-bucket
https://docs.aws.amazon.com/cli/latest/reference/s3api/wait/bucket-exists.html
This will poll every 5 seconds until the bucket is created, failing if it still does not exist after 20 attempts.
It might also be a good idea to confirm that the bucket has been deleted properly before trying to recreate it:
aws s3api wait bucket-not-exists --bucket my-bucket
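Putting both waiters into the script from the question might look like this (a sketch reusing the same bucket name and region):
aws s3api delete-bucket --bucket my-bucket --region eu-west-1
# Block until the deletion has propagated
aws s3api wait bucket-not-exists --bucket my-bucket
aws s3api create-bucket --bucket my-bucket --create-bucket-configuration LocationConstraint=eu-west-1
# Block until the new bucket is visible before configuring it
aws s3api wait bucket-exists --bucket my-bucket
aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json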

Unable to copy files from Amazon S3 even with region specified

First off: I'm new to using the AWS CLI.
I'm having problems copying files from Amazon S3 using the AWS CLI. While aws s3 ls works as expected and shows me all the buckets, the following command
aws s3 cp s3://mybucket/subdir/* /patch/to/local/dir/ --region us-east-2 --source-region us-east-2
keeps barking at me with
A client error (301) occurred when calling the HeadObject operation: Moved Permanently
When I log into S3 using the AWS website, I get "us-east-2" in the URLs, while it displays US West (Oregon) on the side. I've also tried the above with both regions set to us-west-2, but that didn't work either. What may be going on here, and how do I get the files copied correctly?
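As an aside, one way to check which region the bucket actually lives in (a sketch, not part of the original question) is:
# Prints the bucket's LocationConstraint; a null value means us-east-1
aws s3api get-bucket-location --bucket mybucket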
You are trying to download data from an S3 bucket. First, configure the AWS CLI using:
aws configure
Once configured, use the s3 sync command; this will download all subdirectories locally.
aws s3 sync s3://mybucket/subdir/ /patch/to/local/dir/
Since you are using the s3 cp command, use it with --recursive:
aws s3 cp s3://mybucket/subdir/ /patch/to/local/dir/ --recursive

How do I modify object permissions using aws s3 command?

It seems to me that aws s3 does not have a dedicated command to modify object permissions. I have some files that were uploaded via s3fuse, and afterwards I would like to make them public. Is there any way to make those files public using the aws s3 command?
Thanks.
I found out how to do this. It seems there is another CLI, aws s3api, that mirrors the underlying S3 API. Using the aws s3api put-object-acl command (http://docs.aws.amazon.com/cli/latest/reference/s3api/put-object-acl.html), I can change object permissions directly.
aws s3api put-object-acl --acl public-read --bucket mybucket --key targets/my_binary
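To make several objects public at once, the same call can be driven from a listing. The following is only a sketch; the targets/ prefix is taken from the example above and may need adjusting:
# Apply public-read to every object under the targets/ prefix
aws s3api list-objects-v2 --bucket mybucket --prefix targets/ --query 'Contents[].Key' --output text | tr '\t' '\n' | while read -r key; do
    aws s3api put-object-acl --acl public-read --bucket mybucket --key "$key"
done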