I am trying to import data from S3 using the script described below (which I sort of inherited). It's a bit long... The problem is that I kept receiving the following output:
The config profile (importer) could not be found
I am not a bash person, so be gentle, please. It seemed some credentials were missing, or something else was wrong with the configuration of "importer" on the local machine.
In the AWS console there is a user with the same name which, according to its permissions, can access the bucket and download data.
I have tried changing the user's access keys in the Amazon console, creating a file named "credentials" in ~/.aws (there was no .aws folder in the home dir by default, so I created it) and putting the new keys in it, and upgrading the AWS CLI with pip. Nothing helped.
Then I modified the "credentials" file, using [importer] as the profile name, so it looked like:
[importer]
aws_access_key = xxxxxxxxxxxxxxxxx
aws_secret+key = xxxxxxxxxxxxxxxxxxx
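(For reference, the key names the AWS CLI actually reads from ~/.aws/credentials are aws_access_key_id and aws_secret_access_key; a minimal example with the same placeholder values:)
[importer]
aws_access_key_id = xxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxx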
It appears that got me past the misconfiguration:
A client error (InvalidAccessKeyId) occurred when calling the ListObjects operation: The AWS Access Key Id you provided does not exist in our records.
Completed 1 part(s) with ... file(s) remaining
And here's the part where I am stuck... I placed the keys I obtained from Amazon into that config file. Double checked... Any suggestions? I can't create any more keys (AWS limit on keys per user). Below is part of the script:
#!/bin/sh
echo "\n$0 started at: `date`"
incomming='/Database/incomming'
IFS='
';
mkdir -p ${incomming}
echo "syncing files from arrivals bucket to ${incomming} incomming folder"
echo aws --profile importer \
    s3 --region eu-west-1 sync s3://path-to-s3-folder ${incomming}
aws --profile importer \
    s3 --region eu-west-1 sync s3://path-to-s3-folder ${incomming}
count=0
echo ""
echo "Searching for zip files in ${incomming} folder"
for f in `find ${incomming} -name '*.zip'`;
do
    echo "\n${count}: ${f} --------------------"
    count=$((count+1))
    name=`basename "$f" | cut -d'.' -f1`
    dir=`dirname "$f"`
    if [ -d "${dir}/${name}" ]; then
        echo "\tWarning: directory ${dir}/${name} already exists for file: ${f} ... - skipping - not imported"
        continue
    fi
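(As an aside for debugging this kind of profile problem, two checks that show which profile and keys the CLI actually resolves on the machine running the script:)
aws configure list --profile importer      # shows where the profile was found and a masked view of its keys
aws sts get-caller-identity --profile importer   # confirms the keys map to a real IAM identity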
I'm trying to get an ML job to run on AWS Batch. The job runs in a docker container, using credentials generated for a Task IAM Role.
I use DVC to manage the large data files needed for the task, which are hosted in an S3 repository. However, when the task tries to pull the data files, it gets an access denied message.
I can verify that the role has permissions to the bucket, because I can access the exact same files if I run an aws s3 cp command (as shown in the example below). But, I need to do it through DVC so that it downloads the right version of each file and puts it in the expected place.
I've been able to trace the problem down to s3fs, which DVC uses to integrate with S3. As I demonstrate in the example below, it gets an access denied message even when I use s3fs by itself, passing in the credentials explicitly. It seems to fail on this line, where it tries to list the contents of the file after failing to find the object via a head_object call.
I suspect there may be a bug in s3fs, or in the particular combination of boto, http, and s3 libraries. Can anyone help me figure out how to fix this?
Here is a minimal reproducible example:
Shell script for the job:
#!/bin/bash
AWS_CREDENTIALS=$(curl http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=$(echo "$AWS_CREDENTIALS" | jq .AccessKeyId -r)
export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_CREDENTIALS" | jq .SecretAccessKey -r)
export AWS_SESSION_TOKEN=$(echo "$AWS_CREDENTIALS" | jq .Token -r)
echo "AWS_ACCESS_KEY_ID=<$AWS_ACCESS_KEY_ID>"
echo "AWS_SECRET_ACCESS_KEY=<$(cat <(echo "$AWS_SECRET_ACCESS_KEY" | head -c 6) <(echo -n "...") <(echo "$AWS_SECRET_ACCESS_KEY" | tail -c 6))>"
echo "AWS_SESSION_TOKEN=<$(cat <(echo "$AWS_SESSION_TOKEN" | head -c 6) <(echo -n "...") <(echo "$AWS_SESSION_TOKEN" | tail -c 6))>"
dvc doctor
# Succeeds!
aws s3 ls s3://company-dvc/repo/
# Succeeds!
aws s3 cp s3://company-dvc/repo/00/0e4343c163bd70df0a6f9d81e1b4d2 mycopy.txt
# Fails!
python3 download_via_s3fs.py
download_via_s3fs.py:
import os
import s3fs
# Just to make sure we're reading the credentials correctly.
print(os.environ["AWS_ACCESS_KEY_ID"])
print(os.environ["AWS_SECRET_ACCESS_KEY"])
print(os.environ["AWS_SESSION_TOKEN"])
print("running with credentials")
fs = s3fs.S3FileSystem(
key=os.environ["AWS_ACCESS_KEY_ID"],
secret=os.environ["AWS_SECRET_ACCESS_KEY"],
token=os.environ["AWS_SESSION_TOKEN"],
client_kwargs={"region_name": "us-east-1"}
)
# Fails with "access denied" on ListObjectsV2
print(fs.exists("company-dvc/repo/00/0e4343c163bd70df0a6f9d81e1b4d2"))
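(To narrow down whether this is an IAM problem or an s3fs problem, a hedged check is to issue the same two calls s3fs makes, HeadObject and then ListObjectsV2, directly with the CLI, using the bucket and key from the example above:)
# mirrors the head_object call s3fs tries first
aws s3api head-object --bucket company-dvc --key repo/00/0e4343c163bd70df0a6f9d81e1b4d2
# mirrors the ListObjectsV2 fallback that is reported as access denied
aws s3api list-objects-v2 --bucket company-dvc --prefix repo/00/0e4343c163bd70df0a6f9d81e1b4d2 --max-keys 1
If both succeed under the same environment variables, that points at the s3fs/boto stack rather than the role's permissions.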
Terraform for IAM role:
data "aws_iam_policy_document" "standard-batch-job-role" {
# S3 read access to related buckets
statement {
actions = [
"s3:Get*",
"s3:List*",
]
resources = [
data.aws_s3_bucket.company-dvc.arn,
"${data.aws_s3_bucket.company-dvc.arn}/*",
]
effect = "Allow"
}
}
Environment
OS: Ubuntu 20.04
Python: 3.10
s3fs: 2023.1.0
boto3: 1.24.59
I want to download the contents of an S3 bucket (hosted on Wasabi, which claims to be fully S3 compatible) to my VPS, tar, gzip and gpg it, and reupload this archive to another S3 bucket on Wasabi.
My VPS only has 30 GB of storage, and the whole bucket is about 1000 GB in size, so I need to download, archive, encrypt and reupload all of it on the fly without storing the data locally.
The trick seems to be using the | pipe. But I am stuck at the very first step: downloading the bucket into a local archive (I want to go step by step):
s3cmd sync s3://mybucket | tar cvz archive.tar.gz -
In my mind, at the end I expect some code like this:
s3cmd sync s3://mybucket | tar cvz | gpg --passphrase secretpassword | s3cmd put s3://theotherbucket/archive.tar.gz.gpg
but it's not working so far!
What am I missing?
The aws s3 sync command copies multiple files to the destination. It does not copy to stdout.
You could use aws s3 cp s3://mybucket - (including the dash at the end) to copy the contents of the file to stdout.
From cp — AWS CLI Command Reference:
The following cp command downloads an S3 object locally as a stream to standard output. Downloading as a stream is not currently compatible with the --recursive parameter:
aws s3 cp s3://mybucket/stream.txt -
This will only work for a single file.
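Putting those pieces together for a single object, a hedged sketch (the bucket and object names are the ones from the question and the documentation example; GnuPG 2 may additionally need --pinentry-mode loopback for a non-interactive passphrase):
aws s3 cp s3://mybucket/stream.txt - \
  | gzip \
  | gpg --batch --symmetric --passphrase secretpassword -o - \
  | aws s3 cp - s3://theotherbucket/stream.txt.gz.gpg
For very large streams, the final aws s3 cp may also need --expected-size so the multipart upload is sized correctly.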
You may try https://github.com/kahing/goofys. I guess in your case the approach could be the following:
$ goofys source-s3-bucket-name /mnt/src
$ goofys destination-s3-bucket-name /mnt/dst
$ tar -czvf - -C /mnt/src . | gpg -e -r <recipient> -o /mnt/dst/archive.tgz.gpg
I need to compare the contents of a local folder with an AWS S3 bucket so that, where there are differences, a script is executed on the local files.
The idea is that local files (pictures) get encrypted and uploaded to S3. Once the upload has occurred, I delete the encrypted copies of the pictures to save space. The next day, new files get added to the local folder. I need to check, between the local folder and the S3 bucket, which pictures have already been encrypted and uploaded, so that I only encrypt the newly added pictures rather than all of them all over again. I have a script that does exactly this between two local folders, but I'm struggling to adapt it so that the comparison is performed between a local folder and an S3 bucket.
Thank you to anyone who can help.
Here is the actual script I am currently using for my picture sorting, encryption and backup to S3:
#!/bin/bash
perl /volume1/Synology/scripts/Exiftool/exiftool '-createdate
perl /volume1/Synology/scripts/Exiftool/exiftool '-model=camera model missing' -r -if '(not $model)' -overwrite_original -r /volume1/photo/"input"/ --ext .DS_Store -i "#eaDir"
perl /volume1/Synology/scripts/Exiftool/exiftool '-Directory
cd /volume1/Synology/Pictures/"Pictures Glacier back up"/"Compressed encrypted pics for Glacier"/post_2016/ && (cd /volume1/Synology/Pictures/Pictures/post_2016/; find . -type d ! -name .) | xargs -i mkdir -p "{}"
while IFS= read -r file; do /usr/bin/gpg --encrypt -r xxx#yyy.com /volume1/Synology/Pictures/Pictures/post_2016/**///$(basename "$file" .gpg); done < <(comm -23 <(find /volume1/Synology/Pictures/Pictures/post_2016 -type f -printf '%f.gpg\n'|sort) <(find /volume1/Synology/Pictures/"Pictures Glacier back up"/"Compressed encrypted pics for Glacier"/post_2016 -type f -printf '%f\n'|sort))
rsync -zarv --exclude=#eaDir --include="*/" --include="*.gpg" --exclude="*" /volume1/Synology/Pictures/Pictures/post_2016/ /volume1/Synology/Pictures/"Pictures Glacier back up"/"Compressed encrypted pics for Glacier"/post_2016/
find /volume1/Synology/Pictures/Pictures/post_2016/ -name "*.gpg" -type f -delete
/usr/bin/aws s3 sync /volume1/Synology/Pictures/"Pictures Glacier back up"/"Compressed encrypted pics for Glacier"/post_2016/ s3://xyz/Pictures/post_2016/ --exclude "*" --include "*.gpg" --sse
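(For the comparison step itself, one hedged adaptation of the comm approach above is to build the remote list from aws s3 ls --recursive; the bucket path below mirrors the sync destination in the last line, and it assumes object keys contain no spaces:)
# list the basenames of the .gpg objects already in the bucket (key is the 4th field of aws s3 ls output)
aws s3 ls s3://xyz/Pictures/post_2016/ --recursive | awk '{print $4}' | sed 's|.*/||' | sort > /tmp/in_s3.txt
# list local pictures with the .gpg suffix they would get once encrypted
find /volume1/Synology/Pictures/Pictures/post_2016 -type f -printf '%f.gpg\n' | sort > /tmp/local_expected.txt
# names present locally but not in S3 still need to be encrypted and uploaded
comm -23 /tmp/local_expected.txt /tmp/in_s3.txt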
It would be inefficient to continually compare the local and remote folders, especially as the quantity of objects increases.
A better flow would be:
Unencrypted files are added to a local folder
Each file is:
  Copied to another folder in an encrypted state
  Once that action is confirmed, the original file is then deleted
Files in the encrypted local folder are copied to S3
Once that action is confirmed, the source file is then deleted
The AWS Command-Line Interface (CLI) has an aws s3 sync command that makes it easy to copy new/modified files to an Amazon S3 bucket, but this could be slow if you have thousands of files.
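A minimal sketch of that flow (the folder names, bucket and GPG recipient below are placeholders, not taken from the question):
#!/bin/bash
INCOMING=/path/to/incoming            # unencrypted pictures land here
STAGED=/path/to/encrypted             # encrypted copies wait here for upload
BUCKET=s3://my-backup-bucket/pictures

for f in "$INCOMING"/*; do
    [ -f "$f" ] || continue
    # encrypt into the staging folder; remove the original only if gpg succeeded
    gpg --encrypt -r someone@example.com -o "$STAGED/$(basename "$f").gpg" "$f" && rm -- "$f"
done

# copy new/modified encrypted files to S3; remove the local encrypted copies only if the sync succeeded
aws s3 sync "$STAGED" "$BUCKET" && find "$STAGED" -type f -name '*.gpg' -delete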
I was downloading a file using awscli:
$ aws s3 cp s3://mybucket/myfile myfile
But the download was interrupted (computer went to sleep). How can I continue the download? S3 supports the Range header, but awscli s3 cp doesn't let me specify it.
The file is not publicly accessible so I can't use curl to specify the header manually.
There is a "hidden" command in the awscli tool which allows lower level access to S3: s3api.† It is less user friendly (no s3:// URLs and no progress bar) but it does support the range specifier on get-object:
--range (string) Downloads the specified range bytes of an object. For
more information about the HTTP range header, go to
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.
Here's how to continue the download:
$ size=$(stat -f%z myfile) # assumes OS X. Change for your OS
$ aws s3api get-object \
--bucket mybucket \
--key myfile \
--range "bytes=$size-" \
/dev/fd/3 3>>myfile
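(On GNU/Linux the equivalent of the size check above would be:)
$ size=$(stat -c%s myfile)   # GNU coreutils stat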
You can use pv for a rudimentary progress bar:
$ aws s3api get-object \
--bucket mybucket \
--key myfile \
--range "bytes=$size-" \
/dev/fd/3 3>&1 >&2 | pv >> myfile
(The reason for this unnamed pipe rigmarole is that s3api writes a debug message to stdout at the end of the operation, which would pollute your file. This solution rebinds stdout to stderr and passes the regular file contents to the pipe through the /dev/fd/3 alias. The version without pv could technically have written to stderr instead (/dev/fd/2 and 2>>), but if an error occurs, s3api writes to stderr, which would then get appended to your file. So it is safer to use a dedicated file descriptor there as well.)
† In git speak, s3 is porcelain, and s3api is plumbing.
Use s3cmd; it has a --continue option built in. Example:
# Start a download
> s3cmd get s3://yourbucket/yourfile ./
download: 's3://yourbucket/yourfile' -> './yourfile' [1 of 1]
123456789 of 987654321 12.5% in 235s 0.5 MB/s
[ctrl-c] interrupt
# Pick up where you left off
> s3cmd --continue get s3://yourbucket/yourfile ./
Note that s3cmd is not multithreaded, whereas awscli is, so awscli is faster. A currently maintained fork of s3cmd, called s4cmd, appears to provide the multi-threaded capabilities while maintaining the usability features of s3cmd:
https://github.com/bloomreach/s4cmd
I am running the following command to remove files from a GCS bucket prior to loading new files there.
gsutil -m rm gs://mybucket/subbucket/*
If there are no files in the bucket, it throws "CommandException: One or more URLs matched no objects".
I would like it to delete the files if they exist, without throwing an error when there is nothing to delete.
I get the same error with gsutil ls gs://mybucket/subbucket/*
How can I rewrite this without having to handle the exception explicitly? Or, how best to handle these exceptions in a batch script?
Try this:
gsutil -m rm gs://mybucket/foo/* 2> /dev/null || true
Or:
gsutil -m ls gs://mybucket/foo/* 2> /dev/null || true
This has the effect of suppressing stderr (it is redirected to /dev/null) and returning a success exit code even on failure.
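In a batch script this matters mostly when the script runs under set -e; a minimal sketch:
set -e
# without || true, an empty prefix would abort the whole script here
gsutil -m rm gs://mybucket/subbucket/* 2> /dev/null || true
# ... continue loading the new files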
You might not want to ignore all errors, as they might indicate something other than "file not found". With the following script you ignore only the "One or more URLs matched no objects" error, but you are informed of any other error. And if there is no error, it just deletes the files:
gsutil -m rm gs://mybucket/subbucket/* 2> temp
if [ $? -eq 1 ]; then
    grep -q 'One or more URLs matched no objects' temp
    if [ $? -eq 0 ]; then
        echo "no such file"
    else
        cat temp
    fi
fi
rm temp
This redirects stderr to a temp file and checks the message to decide whether to ignore it or show it.
And it also works for single file deletions. I hope it helps.
Refs:
How to grep standard error stream
Bash Reference Manual - Redirections
You may like gsutil rsync to sync files and folders to a bucket. I used this for clearing a folder in a bucket and replacing it with new files from my build script.
gsutil rsync -d newdata gs://mybucket/data
This replaces the contents of the data folder in the bucket with the contents of the local newdata folder (-d deletes remote files that are no longer present in the source).
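Applied to the bucket in the question, a hedged equivalent of "clear the prefix and replace it" would be:
gsutil -m rsync -d -r newdata gs://mybucket/subbucket
Because rsync does not expand a wildcard, it avoids the CommandException that rm raises when the prefix is already empty.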