Compare (not sync) the contents of a local folder and an AWS S3 bucket

I need to compare the contents of a local folder with an AWS S3 bucket so that, where there are differences, a script is executed on the local files.
The idea is that local files (pictures) get encrypted and uploaded to S3. Once the upload has occurred I delete the encrypted copy of the pictures to save space. The next day new files get added to the local folder. I need to check between the local folder and the S3 bucket which pictures have already been encrypted and uploaded, so that I only encrypt the newly added pictures rather than all of them all over again. I have a script that does exactly this between two local folders, but I'm struggling to adapt it so that the comparison is performed between a local folder and an S3 bucket.
Thank you to anyone who can help.
Here is the actual script I am currently using for my picture sorting, encryption and backup to S3:
#!/bin/bash
perl /volume1/Synology/scripts/Exiftool/exiftool '-createdate
perl /volume1/Synology/scripts/Exiftool/exiftool '-model=camera model missing' -r -if '(not $model)' -overwrite_original -r /volume1/photo/"input"/ --ext .DS_Store -i "#eaDir"
perl /volume1/Synology/scripts/Exiftool/exiftool '-Directory
cd /volume1/Synology/Pictures/"Pictures Glacier back up"/"Compressed encrypted pics for Glacier"/post_2016/ && (cd /volume1/Synology/Pictures/Pictures/post_2016/; find . -type d ! -name .) | xargs -i mkdir -p "{}"
while IFS= read -r file; do /usr/bin/gpg --encrypt -r xxx#yyy.com /volume1/Synology/Pictures/Pictures/post_2016/**///$(basename "$file" .gpg); done < <(comm -23 <(find /volume1/Synology/Pictures/Pictures/post_2016 -type f -printf '%f.gpg\n'|sort) <(find /volume1/Synology/Pictures/"Pictures Glacier back up"/"Compressed encrypted pics for Glacier"/post_2016 -type f -printf '%f\n'|sort))
rsync -zarv --exclude=#eaDir --include="*/" --include="*.gpg" --exclude="*" /volume1/Synology/Pictures/Pictures/post_2016/ /volume1/Synology/Pictures/"Pictures Glacier back up"/"Compressed encrypted pics for Glacier"/post_2016/
find /volume1/Synology/Pictures/Pictures/post_2016/ -name "*.gpg" -type f -delete
/usr/bin/aws s3 sync /volume1/Synology/Pictures/"Pictures Glacier back up"/"Compressed encrypted pics for Glacier"/post_2016/ s3://xyz/Pictures/post_2016/ --exclude "*" --include "*.gpg" --sse
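One way to adapt the comparison itself is to build the "already uploaded" list from the bucket instead of from the local encrypted folder. A rough sketch, assuming the bucket and paths from the script above and object keys without spaces:
# filenames of local pictures, with .gpg appended, as in the comm comparison above
find /volume1/Synology/Pictures/Pictures/post_2016 -type f -printf '%f.gpg\n' | sort > /tmp/local_list
# filenames of objects already uploaded to the bucket
/usr/bin/aws s3 ls s3://xyz/Pictures/post_2016/ --recursive | awk '{print $4}' | awk -F/ '{print $NF}' | sort > /tmp/remote_list
# pictures that still need to be encrypted and uploaded
comm -23 /tmp/local_list /tmp/remote_list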

It would be inefficient to continually compare the local and remote folders, especially as the quantity of objects increases.
A better flow would be:
- Unencrypted files are added to a local folder
- Each file is:
  - copied to another folder in an encrypted state
  - once that action is confirmed, the original file is then deleted
- Files in the encrypted local folder are copied to S3
- Once that action is confirmed, the source file is then deleted
The AWS Command-Line Interface (CLI) has an aws s3 sync command that makes it easy to copy new/modified files to an Amazon S3 bucket, but this could be slow if you have thousands of files.
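A minimal sketch of that flow, assuming hypothetical /incoming and /encrypted staging folders and reusing the bucket and GPG recipient from the question; aws s3 mv only deletes each local file after its upload has completed:
# encrypt each new picture, then remove the unencrypted original only if gpg succeeded
for f in /incoming/*; do
    gpg --encrypt -r xxx#yyy.com -o "/encrypted/$(basename "$f").gpg" "$f" && rm -- "$f"
done
# upload the encrypted copies; mv removes each local file once its upload is confirmed
aws s3 mv /encrypted/ s3://xyz/Pictures/post_2016/ --recursive --exclude "*" --include "*.gpg" --sse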

Scaleway GLACIER class object storage with restic

Scaleway recently launched GLACIER class storage, "C14 Cold Storage Class".
They have a great plan of 75GB free and I'd like to take advantage of this using the restic backup tool.
To get this working I have successfully followed the S3 instructions for repository creation and uploading, with one caveat: I cannot successfully pass the storage-class header as GLACIER.
Using awscliv2, I can successfully pass a header that looks very much like this from my local machine: aws s3 cp object s3://bucket/ --storage-class GLACIER
But with restic, having dug through some GitHub issues, I can see an option to pass a -o flag. The linked issue's resolution is not that clear to me, so I have tried the following restic commands without successfully seeing the "GLACIER" storage class label next to the objects in the Scaleway bucket console:
restic -r s3:s3.fr-par.scw.cloud/restic-testing -o GLACIER --verbose backup ~/test.txt
restic -r s3:s3.fr-par.scw.cloud/restic-testing -o storage-class=GLACIER --verbose backup ~/test.txt
Can someone suggest another option?
I'm starting to use C14's GLACIER storage class with restic, and so far it seems to be working very well.
I suggest creating the repository in the usual way with restic -r s3:s3.fr-par.scw.cloud/test-bucket init, which will create the config file and keys in the STANDARD storage class.
For backups, I'm using the command:
$ restic backup -r s3:s3.fr-par.scw.cloud/test-bucket -o s3.storage-class=GLACIER --host host /path
which is similar to what you did, except that the option is s3.storage-class and not storage-class.
This way, files in the data and snapshots directories are in the GLACIER storage class, and you can add backups with no problem.
I can also mount the repository while the data is in the GLACIER class (I suppose all the info is taken from the cache), so I can do restic mount /mnt/c14 and browse the files, although I cannot copy them or see their contents.
If I need to restore files, I restore the whole bucket to the STANDARD class with s3cmd restore --recursive s3://test-bucket/ (see s3cmd), then I check that all files are back in the standard class with:
$ aws s3 ls s3://test-bucket --recursive | tr -s ' ' | cut -d' ' -f 4 | xargs -n 1 -I {} sh -c "aws s3api head-object --bucket unitedhost --key '{}' | jq -r .StorageClass" | grep --quiet GLACIER
which returns true if at least one file is still in the GLACIER class, so you have to wait until this command returns false.
Obviously a restore will take more time, but I'm using C14 glacier as a second or third backup, alongside another restic repository in Backblaze B2, which is warm storage.
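If you want to script the wait, a small loop around that same check should do (a sketch, assuming the test-bucket name; it polls until no object reports GLACIER any more):
while aws s3 ls s3://test-bucket --recursive | tr -s ' ' | cut -d' ' -f 4 | xargs -n 1 -I {} sh -c "aws s3api head-object --bucket test-bucket --key '{}' | jq -r .StorageClass" | grep --quiet GLACIER; do
    sleep 300   # at least one object is still in GLACIER; check again in five minutes
done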
In addition to vstefanoxx's answer: here is my workflow.
I set up the restic repository just like vstefanoxx.
Now, if you want to prune the repository... you cannot, as the files are in GLACIER and restic needs read-write access to the bucket to prune.
What is interesting about Scaleway is that file transfers between the GLACIER and STANDARD classes are free. So let's move the data back to the standard class:
s3cmd restore --recursive s3://test-bucket
And wait until the end of the process using the command given by vstefanoxx. Once your data is in the standard class it costs you five times more, so we have to be efficient :-)
So we now prune the repository:
restic prune -r s3:s3.fr-par.scw.cloud/test-bucket
And once it is finished, move everything (in fact data, index and snapshots but not keys) back to glacier:
s3cmd cp s3://test-bucket/data/ s3://test-bucket/data/ --recursive --storage-class=GLACIER
s3cmd cp s3://test-bucket/index/ s3://test-bucket/index/ --recursive --storage-class=GLACIER
s3cmd cp s3://test-bucket/snapshots/ s3://test-bucket/snapshots/ --recursive --storage-class=GLACIER
So we are now at a point where we have pruned the repository while trying to pay the least amount of money!
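Put together, the maintenance cycle described above might look roughly like this (a sketch, reusing the test-bucket repository from the earlier answer):
#!/bin/sh
# 1. move the data back to the STANDARD class so restic has read-write access
s3cmd restore --recursive s3://test-bucket
# 2. wait until no object is left in GLACIER (see the check in the previous answer)
# 3. prune the repository
restic prune -r s3:s3.fr-par.scw.cloud/test-bucket
# 4. move data, index and snapshots back to GLACIER (keys stay in STANDARD)
for prefix in data index snapshots; do
    s3cmd cp "s3://test-bucket/$prefix/" "s3://test-bucket/$prefix/" --recursive --storage-class=GLACIER
done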
The chosen answer doesn't seem to work when doing incremental backups. I went with a different solution.
I set up a normal bucket, initialized with your usual restic init. Then I set up the following lifecycle rule:
<?xml version="1.0" ?>
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <ID>data-to-glacier</ID>
    <Filter>
      <Prefix>data/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <Days>0</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
Days is set to 0, which means that the rule will be applied to all files. Rules are not applied continuously though, they're applied once a day at midnight UTC.
This rule will only apply to the files in data/, which are the big files.
This rule description is supposed to be used with s3cmd but you can also do it from the dashboard if you prefer a GUI.
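If you go the s3cmd route, the rule above (saved as, say, lifecycle.xml) can be applied to the bucket with a single command, assuming your bucket is the test-bucket from the earlier answers:
s3cmd setlifecycle lifecycle.xml s3://test-bucket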

How to download S3-Bucket, compress on the fly and reupload to another s3 bucket without downloading locally?

I want to download the contents of an S3 bucket (hosted on Wasabi, which claims to be fully S3 compatible) to my VPS, tar, gzip and gpg it, and re-upload this archive to another S3 bucket on Wasabi!
My VPS only has 30GB of storage, and the whole bucket is about 1000GB in size, so I need to download, archive, encrypt and re-upload all of it on the fly without storing the data locally.
The secret seems to be in using the | pipe. But I am stuck even at the first step of downloading a bucket into an archive locally (I want to go step by step):
s3cmd sync s3://mybucket | tar cvz archive.tar.gz -
In my mind, at the end I expect something like this:
s3cmd sync s3://mybucket | tar cvz | gpg --passphrase secretpassword | s3cmd put s3://theotherbucket/archive.tar.gz.gpg
but it's not working so far!
What am I missing?
The aws s3 sync command copies multiple files to the destination. It does not copy to stdout.
You could use aws s3 cp s3://mybucket - (including the dash at the end) to copy the contents of the file to stdout.
From cp — AWS CLI Command Reference:
The following cp command downloads an S3 object locally as a stream to standard output. Downloading as a stream is not currently compatible with the --recursive parameter:
aws s3 cp s3://mybucket/stream.txt -
This will only work for a single file.
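If encrypting each object separately (rather than as one big archive) is acceptable, one workaround is to loop over the keys and stream each object through gpg, since aws s3 cp can also read from stdin for the upload. A sketch, assuming the bucket names from the question, keys without spaces, and a placeholder GPG recipient:
# stream every object through gzip and gpg into the destination bucket, one at a time, nothing stored locally
aws s3 ls s3://mybucket --recursive | awk '{print $4}' | while IFS= read -r key; do
    aws s3 cp "s3://mybucket/$key" - | gzip | gpg --encrypt -r you@example.com | aws s3 cp - "s3://theotherbucket/$key.gz.gpg"
done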
You may try https://github.com/kahing/goofys. I guess in your case it could be the following algorithm:
$ goofys source-s3-bucket-name /mnt/src
$ goofys destination-s3-bucket-name /mnt/dst
$ tar -cvzf - /mnt/src | gpg -e -o /mnt/dst/archive.tgz.gpg

Files will not move or copy from folder on file system to local bucket

I am using the command
aws s3 mv --recursive Folder s3://bucket/dsFiles/
The aws console is not giving me any feedback. I changed the permissions of the directory:
sudo chmod -R 666 ds000007_R2.0.1/
It looks like AWS is passing through those files and giving "File does not exist" for every directory.
I am confused about why AWS is not actually performing the copy. Is there some size limitation or recursion-depth limitation?
I believe you want to cp, not mv. Try the following:
aws s3 cp $local/folder s3://your/bucket --recursive --include "*".
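As a side note, chmod -R 666 also strips the execute bit from the directories themselves, which prevents them from being traversed and can produce exactly those "File does not exist" messages; a capital X restores execute only where it makes sense (here, the directories):
sudo chmod -R a+X ds000007_R2.0.1/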

Transporting with data from S3 amazon to local server

I am trying to import data from S3 using the script described below (which I sort of inherited). It's a bit long... The problem is that I keep receiving the following output:
The config profile (importer) could not be found
I am not a bash person, so be gentle, please. It seems there are some credentials missing or something else is wrong with the configuration of "importer" on the local machine.
In the S3 configuration (the console) there is a user with the same name which, according to its permissions, can access the bucket and download data.
I have tried changing the access keys for the user in the Amazon console and creating a file named "credentials" in ~/.aws (there was no .aws folder in the home dir by default, so I created it), putting the new keys in the file, and upgrading the AWS CLI with pip; nothing helped.
Then I modified the "credentials" file, placing [importer] as the profile name, so it looked like:
[importer]
aws_access_key = xxxxxxxxxxxxxxxxx
aws_secret+key = xxxxxxxxxxxxxxxxxxx
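For reference, the standard key names in ~/.aws/credentials are aws_access_key_id and aws_secret_access_key (note the difference from the snippet above); a profile the CLI can find would look like this, with placeholder values:
[importer]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx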
It appears that I have gotten past the misconfiguration:
A client error (InvalidAccessKeyId) occurred when calling the ListObjects operation: The AWS Access Key Id you provided does not exist in our records.
Completed 1 part(s) with ... file(s) remaining
And here's the part where I am stuck... I placed the keys I obtained from Amazon into that config file. Double checked... Any suggestions? I can't produce any more keys (AWS quota per user). Below is part of the script:
#!/bin/sh
echo "\n$0 started at: `date`"
incomming='/Database/incomming'
IFS='
';
mkdir -p ${incomming}
echo "syncing files from arrivals bucket to ${incomming} incomming folder"
echo aws --profile importer \
s3 --region eu-west-1 sync s3://path-to-s3-folder ${incomming}
aws --profile importer \
s3 --region eu-west-1 sync s3://path-to-s3-folder ${incomming}
count=0
echo ""
echo "Searching for zip files in ${incomming} folder"
for f in `find ${incomming} -name '*.zip'`;
do
echo "\n${count}: ${f} --------------------"
count=$((count+1))
name=`basename "$f" | cut -d'.' -f1`
dir=`dirname "$f"`
if [ -d "${dir}/${name}" ]; then
echo "\tWarning: directory "${dir}/${name}" already exist for file: ${f} ... - skipping - not imported"
continue
fi

s3cmd: backup folder to Amazon S3

I'm using s3cmd to back up my databases to Amazon S3, but I'd also like to back up a certain folder and archive it.
I have this part of the script that successfully backs up the databases to S3:
# Loop the databases
for db in $databases; do
# Define our filenames
filename="$stamp - $db.sql.gz"
tmpfile="/tmp/$filename"
object="$bucket/$stamp/$filename"
# Feedback
echo -e "\e[1;34m$db\e[00m"
# Dump and zip
echo -e " creating \e[0;35m$tmpfile\e[00m"
mysqldump -u root -p$mysqlpass --force --opt --databases "$db" | gzip -c > "$tmpfile"
# Upload
echo -e " uploading..."
s3cmd put "$tmpfile" "$object"
# Delete
rm -f "$tmpfile"
done;
How can I add another section to archive a certain folder, upload to S3 and then delete the local archive?
Untested and basic, but this should get the job done with some minor tweaks:
# change to tmp dir - creating archives with absolute paths can be dangerous
cd /tmp
# create archive with timestamp of dir /path/to/directory/to/archive
tar -czf "$stamp-archivename.tar.gz" /path/to/directory/to/archive
# upload archive to s3 bucket 'BucketName'
s3cmd put "/tmp/$stamp-archivename.tar.gz" s3://BucketName/
# remove local archive
rm -f "/tmp/$stamp-archivename.tar.gz"