How to properly copy an entire S3 bucket with exactly the same permissions - amazon-s3

I'm trying to make a copy of an S3 bucket on AWS and it is a real pain.
My reference S3 bucket is: s3://original
Duplicated version of this bucket: s3://original-copy
My goals are:
generate the kubernetes.tf file with kops create cluster ... => DONE
kops is kind enough to create --state=s3://original => DONE
now I want to create a new S3 bucket with exactly the same content as s3://original, just with a different name, s3://original-copy => PROBLEM
Command
aws s3 cp s3://original s3://original-copy --recursive --acl bucket-owner-full-control
Even though the bucket gets duplicated, it seems there is some problem with the S3 bucket permissions.
Then I adjust the values in the terraform/data folder with a new reference to s3://original-copy, as well as in the
s3://original-copy/cluster_name/config
s3://original-copy/cluster_name/cluster.spec
files.
But there is a problem with permissions all the time.
Error:
s3context.go:145] unable to get bucket location from region "us-east-1"; scanning all regions: AccessDenied: Access Denied
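One thing to note (a hedged aside, not verified against this setup): aws s3 cp --recursive copies the objects and, with --acl, sets object ACLs, but it does not copy the bucket policy of s3://original, so the permissions kops relies on may need to be replicated separately. A rough sketch, assuming s3://original actually has a bucket policy and that the bucket name is the only thing that needs rewriting in it:
# Copy the objects, granting the destination bucket owner full control
aws s3 sync s3://original s3://original-copy --acl bucket-owner-full-control
# Copy the bucket policy as well, rewriting the bucket name inside it
aws s3api get-bucket-policy --bucket original --query Policy --output text > policy.json
sed 's/original/original-copy/g' policy.json > policy-copy.json
aws s3api put-bucket-policy --bucket original-copy --policy file://policy-copy.json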
Idea
The main idea is that kops will generate:
the kubernetes.tf file and the data folder with the proper files (all within the terraform folder), just once
the --state=s3://original bucket, just once
Once we have some examples (patterns) of the S3 state and kubernetes.tf, we would stop using kops.
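For reference, a hedged sketch of the kops invocation that produces the kubernetes.tf file and the data folder (the cluster name, zone, and output directory below are placeholders, not values from this setup):
# Sketch only; the --name, --zones and --out values are made-up placeholders
kops create cluster \
  --name=cluster_name.example.com \
  --state=s3://original \
  --zones=us-east-1a \
  --target=terraform \
  --out=./terraform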

Related

Automating S3 bucket creation to store tf.state file prior to terraform execution

Is there a solution that generates an S3 bucket for storing the tf.state file prior to Terraform execution?
Meaning, I don't want to create an S3 bucket in the console and specify it in backend.tf; I want the S3 bucket to be created automatically and the tf.state file stored in this automatically created bucket.
Does anyone have any experience with this? Is this even possible?
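A hedged sketch of one bootstrap approach (the bucket name and region are placeholders, and it assumes the AWS CLI is available in the pipeline before terraform init runs):
# Create the state bucket only if it does not exist yet (name and region are placeholders)
BUCKET=my-tf-state-bucket
if aws s3api head-bucket --bucket "$BUCKET" 2>/dev/null; then
  echo "state bucket already exists"
else
  aws s3 mb "s3://$BUCKET" --region us-east-1
  # enable versioning so old tf.state revisions are kept
  aws s3api put-bucket-versioning --bucket "$BUCKET" \
    --versioning-configuration Status=Enabled
fi
terraform init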

AWS s3 event ObjectRemoved - get file

I am trying to access a file that has been deleted from an S3 bucket using AWS Lambda.
I have set up a trigger for s3:ObjectRemoved*; however, after extracting the bucket and file name of the deleted file, the file is already deleted from S3, so I do not have access to its contents.
What approach should be taken with AWS Lambda to get the contents of the file after it is deleted from an S3 bucket?
The comment proposed by @keithRozario was useful; however, with versioning, a plain GET request will result in a not-found error, as per the S3 documentation.
@Ersoy suggested creating a 'bin' bucket or directory/prefix with the same file name and working with that as per your requirements.
In my case, I copy the initial object to a bin directory when it is created, and then access that folder when the file is deleted from the main upload directory.
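For illustration, a minimal sketch of that flow (the bucket, prefixes and key are made up; in practice the first copy runs inside the ObjectCreated Lambda rather than from a shell):
# On upload (ObjectCreated): keep a shadow copy under a bin/ prefix
aws s3 cp s3://my-bucket/uploads/report.csv s3://my-bucket/bin/report.csv
# On delete (ObjectRemoved): the original is gone, but the bin/ copy is still readable
aws s3 cp s3://my-bucket/bin/report.csv ./report.csv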

move files to s3 in EC2

I have an S3 bucket in EC2. I want to remove multiple files between S3 folders; however, it shows the files as deleted, but the files are still there.
command:
aws s3 rm s3://mybucket/path1/publish/test/dummyfile_*.dat
I got the message below:
delete: s3://mybucket/path1/publish/test/dummyfile_*.dat
But the file is still present.
Can anyone please help?
"Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all Regions."
from https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#CoreConcepts
If you make a copy of an S3 object to an EC2 instance, you have simply made a copy of it.
You can use aws s3 sync to synchronize S3 objects (files) between S3 and your EC2 instance; see https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
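For example, a hedged sketch (the local path and prefix are placeholders, not values from the question):
# Mirror a local EC2 directory to the bucket prefix; --delete also removes
# objects in S3 that no longer exist locally
aws s3 sync /home/ec2-user/publish/test s3://mybucket/path1/publish/test --delete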

sync local folder structure to s3 root structure

In my pipeline I am trying to sync my local folder (or should I say repository folder) to the S3 bucket. Now I can do aws s3 sync . s3://, but this of course gives an error, since the bucket is not specified. But basically that is exactly what I want: exactly how my folder structure is locally is how I want it in S3.
so locally:
bucket1/file1.txt
bucket1/file2.txt
bucket1/subbucket1/file3.txt
needs to go exactly to the root of my S3 account... how do I fix this?
By the way, sync might be overkill, since I only want to copy (and overwrite!) to the S3 folders, starting from the root. I am not (yet) interested in deleting, etc.
What can I do?
The AWS Command-Line Interface (CLI) aws s3 sync command requires a bucket name.
Therefore, you will either need to write a script that extracts the bucket name and inserts it into the aws s3 sync command, or you'll need to write your own program to use in place of the AWS CLI.
If you have a limited number of buckets and they don't change that often, you could just write a script that repeatedly calls the AWS CLI, such as:
aws s3 sync bucket1/ s3://bucket1/
aws s3 sync bucket2/ s3://bucket2/
etc.
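Or, as a sketch of the "extracts the bucket name" idea mentioned above (assuming every top-level local folder corresponds to an existing bucket of the same name):
# Sync each top-level folder to a bucket of the same name (buckets assumed to exist)
for dir in */; do
  bucket="${dir%/}"
  aws s3 sync "$dir" "s3://$bucket/"
done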
In case somebody comes across the same question:
for file in $(find . -type f); do
    # strip the leading ./ from the path
    newFilename="${file#./}"
    dirName=$ENVIRONMENT-$(dirname "$newFilename")
    # get the first part of the dir (only the root)
    dirName="${dirName%%/*}"
    echo "bucket: $dirName"
    if aws s3api head-bucket --bucket "$dirName" 2>/dev/null; then
        echo "bucket already exists"
    else
        if [[ $dirName == *"/"* ]]; then
            echo "$dirName"
            echo "This bucket is a subfolder and will not be created"
        else
            aws s3 mb "s3://$dirName"
        fi
    fi
    aws s3 cp "$newFilename" "s3://$ENVIRONMENT-$newFilename"
done
The script retrieves all the files it can find;
then it takes the root directory (relative to the current folder)
and checks whether that directory exists as a bucket. If not, it will be created.
And then every file will be copied.
Since I do not know whether a root directory exists (as a bucket), we have to check it manually.
I couldn't use sync because I might not have an existing bucket.
If you do know that your root directory exists as a bucket, then I would use sync: a one-liner vs. a 10-liner.
Anyway, that was it for me!

Move the s3 bucket to other aws server

I have created an AWS S3 bucket and uploaded many images to it, but now I want to move all the images to another AWS S3 bucket.
So can we directly copy buckets, or link them to another AWS server?
Please provide a suggestion.
You can use the AWS Command-Line Interface (CLI) aws s3 cp (copy) command to copy files from bucket to bucket:
aws s3 cp s3://mybucket/file.jpg s3://anotherbucket/file.jpg
See cp command documentation.
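To move all the images rather than a single file, a hedged variant (same example bucket names as above) is to make the copy recursive, or use sync:
# Copy every object from one bucket to the other
aws s3 cp s3://mybucket s3://anotherbucket --recursive
# or, to the same effect here:
aws s3 sync s3://mybucket s3://anotherbucket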