Query multiple files from an AWS S3 bucket

I have a list of file names in an S3 bucket. I need to find out which of these files actually exist in the bucket. I was thinking about running a query with the AWS CLI, something like this:
aws s3api list-objects-v2 --bucket my-bucket --output json --query "Contents[?contains(Key, '23043')]"
Is there a way to pass a list of all the keys I have instead of having to re-run this query for every key?
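One way to avoid re-running the query (a sketch, not from the original thread; it assumes the list contains full object keys, and keys.txt is a hypothetical file with one key per line): check each key directly with head-object, which exits non-zero when the object is missing:
# keys.txt holds one object key per line (hypothetical input file)
while read -r key; do
  if aws s3api head-object --bucket my-bucket --key "$key" > /dev/null 2>&1; then
    echo "EXISTS:  $key"
  else
    echo "MISSING: $key"
  fi
done < keys.txt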

Related

How to perform a multipart upload using AWS CLI s3api command?

From my understanding, an S3 object-store should be able to resume incomplete multipart uploads. I am trying to test this against a local S3 storage system.
I'm aware the AWS CLI will automatically perform a multipart upload on larger files via aws s3 cp, but how do I perform the same operation using aws s3api?
I've read the documentation and know that it's a three step process:
Run aws s3api create-multipart-upload (docs)
Run aws s3api upload-part (docs)
Finish by running aws s3api complete-multipart-upload which reconstructs the object (docs)
I attempted to perform these steps against a file that was 7GB in size. After running the first command, I received the expected upload-id which is required for all subsequent commands:
aws --profile foo --endpoint-url=https://endpoint:9003 s3api create-multipart-upload --bucket mybucket1 --key 'some7gfile.bin/01' --output json
{
"Bucket": "mybucket1",
"UploadId": "41a1462d-0d23-47f6-83aa-377e7aedbb8a",
"Key": "some7gfile.bin/01"
}
I assumed the 01 portion of some7gfile.bin/01 denoted the first part, but that doesn't appear to be the case. What does the 01 mean? Is it arbitrary?
When I tried running the second upload-part command, I received an error:
aws --profile foo --endpoint-url=https://endpoint:9003 s3api upload-part --bucket mybucket1 --key 'some7gfile.bin/01' --part-number 1 --body part01 --upload-id "41a1462d-0d23-47f6-83aa-377e7aedbb8a" --output json
Error parsing parameter '--body': Blob values must be a path to a file.
Does the source file have to be split into different parts prior to running the upload-part step? If so, what's the most efficient way to do this?
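For what it's worth, a sketch of the remaining steps (the part size, part file names, and ETag values below are illustrative, not from the thread). The /01 appears to be just part of the object key; S3 treats keys as opaque names. The --body error indicates that upload-part expects a path to an already-split piece, so the source file does need to be split locally first, e.g. with the standard split utility:
# split the 7GB file into 100MB pieces: part-aa, part-ab, ...
split -b 100M some7gfile.bin part-

# upload each piece as a numbered part; record the ETag each call returns
aws --profile foo --endpoint-url=https://endpoint:9003 s3api upload-part \
  --bucket mybucket1 --key 'some7gfile.bin/01' \
  --part-number 1 --body part-aa \
  --upload-id "41a1462d-0d23-47f6-83aa-377e7aedbb8a"
# ... repeat with --part-number 2 --body part-ab, and so on

# reassemble the object from the recorded part numbers and ETags
aws --profile foo --endpoint-url=https://endpoint:9003 s3api complete-multipart-upload \
  --bucket mybucket1 --key 'some7gfile.bin/01' \
  --upload-id "41a1462d-0d23-47f6-83aa-377e7aedbb8a" \
  --multipart-upload '{"Parts":[{"PartNumber":1,"ETag":"\"etag-of-part-1\""},{"PartNumber":2,"ETag":"\"etag-of-part-2\""}]}'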

Make sure that a PutBucketWebsite operation launched via an AWS CLI script is executed only once the target bucket has been created

I am trying to set up an AWS S3 bucket for static website hosting.
I want to automate the operations via a script that calls AWS CLI commands.
So far my script, simplified, looks like this:
aws s3api delete-bucket --bucket my-bucket --region eu-west-1
aws s3api create-bucket --bucket my-bucket --create-bucket-configuration LocationConstraint=eu-west-1
aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json
Sometimes this script works just fine. Sometimes, though, the following error occurs:
An error occurred (NoSuchBucket) when calling the PutBucketWebsite operation: The specified bucket does not exist
I guess this has to do with the fact that I delete the bucket and then recreate it, and when the PutBucketWebsite operation starts executing, the bucket has not yet been recreated.
Is there a way to make sure the PutBucketWebsite operation is executed only once my-bucket has been created?
You can use the wait command to ensure the bucket exists before you run the PutBucketWebsite operation:
aws s3api wait bucket-exists --bucket my-bucket
https://docs.aws.amazon.com/cli/latest/reference/s3api/wait/bucket-exists.html
This will poll every 5 seconds until the bucket is created.
It might also be a good idea to confirm that the bucket has been deleted properly before trying to recreate it:
aws s3api wait bucket-not-exists --bucket my-bucket
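Putting it together, the original script with both waits inserted might look like this (a sketch, using the same bucket and region as above):
aws s3api delete-bucket --bucket my-bucket --region eu-west-1
aws s3api wait bucket-not-exists --bucket my-bucket    # block until the delete has taken effect
aws s3api create-bucket --bucket my-bucket --create-bucket-configuration LocationConstraint=eu-west-1
aws s3api wait bucket-exists --bucket my-bucket        # block until the new bucket is visible
aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json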

Which S3 API should I use to see the list of buckets owned by me only

Can you please help me see the list of buckets owned by me only? I have tried the following API, but it shows all the buckets available in S3. I want to see only the buckets owned by me, not others.
aws s3api list-buckets
An Amazon S3 bucket is always owned by an AWS account. Individual IAM Users do not own buckets.
When you issue a command such as aws s3 ls or aws s3api list-buckets, you will only see a list of buckets owned by the account. (It will not list buckets owned by a different account.)
Therefore, that is the correct command.
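If you only want the bucket names rather than the full JSON response, a --query filter can trim the output (a small convenience, not part of the original answer):
aws s3api list-buckets --query "Buckets[].Name" --output text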

Move an S3 bucket's contents to another S3 bucket

I have created an AWS S3 bucket and uploaded many images to it, but now I want to move all the images to another S3 bucket.
Can I copy the contents of one bucket directly to another?
Please provide a suggestion.
You can use the AWS Command-Line Interface (CLI) aws s3 cp (copy) command to copy files from bucket to bucket:
aws s3 cp s3://mybucket/file.jpg s3://anotherbucket/file.jpg
See the cp command documentation.
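Since the goal is to move all of the images rather than a single file, a recursive copy of the whole bucket may be closer to what is wanted (a sketch using the same bucket names as above):
# copy every object in one pass
aws s3 sync s3://mybucket s3://anotherbucket
# or, equivalently for a one-off copy
aws s3 cp s3://mybucket s3://anotherbucket --recursive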

How do I modify object permissions using aws s3 command?

It seems to me that aws s3 does not have a dedicated command to modify object permissions. I have some files that were uploaded via s3fuse, and I would like to make them public afterwards. Is there any way to make those files public using the aws s3 command?
Thanks.
I found out how to do this. There is another CLI, aws s3api, that mirrors the underlying S3 API. Using the aws s3api put-object-acl command (http://docs.aws.amazon.com/cli/latest/reference/s3api/put-object-acl.html), I can change object permissions directly.
aws s3api put-object-acl --acl public-read --bucket mybucket --key targets/my_binary
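Since put-object-acl operates on one key at a time, making a whole set of uploaded files public takes a loop (a sketch; the targets/ prefix is carried over from the example above):
aws s3api list-objects-v2 --bucket mybucket --prefix targets/ \
    --query "Contents[].Key" --output text \
  | tr '\t' '\n' \
  | while read -r key; do
      aws s3api put-object-acl --acl public-read --bucket mybucket --key "$key"
    done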