How to use the S3 adapter CLI for Snowball

I'm using the S3 adapter to copy files from a Snowball device to my local machine.
Everything appears to be in order, as I was able to run this command and see the bucket name:
aws s3 ls --endpoint http://snowballip:8080
But beyond this, AWS doesn't offer any examples for calling the cp command. How do I provide the bucket name and the key with this --endpoint flag?
Further, when I ran this:
aws s3 ls --endpoint http://snowballip:8080/bucketname
it returned 'Bucket'... Not sure what that means, because I expected to see the files.

I can confirm the following is correct for Snowball and Snowball Edge, as sqlbot says in the comments:
aws s3 ls --endpoint http://snowballip:8080 s3://bucketname/[optionalprefix]
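The same --endpoint flag applies to the other s3 subcommands, so a copy should look roughly like this (the bucket, key, and local path below are placeholders, not tested against a real Snowball):
aws s3 cp s3://bucketname/path/to/key /local/dir/ --endpoint http://snowballip:8080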
References:
http://docs.aws.amazon.com/cli/latest/reference/
http://docs.aws.amazon.com/snowball/latest/ug/using-adapter-cli.html
Just got one in the post

Related

Copy files from GCloud to S3 with the GCloud SDK

I am trying to copy a file between Google Cloud Storage and AWS S3 with the gcloud SDK console, and it shows me an error. I have found how to copy the gcloud file to a local directory (gsutil -D cp gs://mybucket/myfile C:\tmp\storage\file) and to upload this local file to S3 using the AWS CLI (aws s3 cp C:\tmp\storage\file s3://my_s3_dirctory/file), and it works perfectly, but I would like to do all of this directly, with no need to download the files, using only the gcloud SDK console.
When I try to do this, the system shows me an error:
gsutil -D cp gs://mybucket/myfile s3://my_s3_dirctory/file.csv
Failure: Host [...] returned an invalid certificate. (remote hostname
"....s3.amazonaws.com" does not match certificate)...
I have edited and uncommented those lines in the .boto file, but the error continues:
# To add HMAC aws credentials for "s3://" URIs, edit and uncomment the
# following two lines:
aws_access_key_id = [MY_AWS_ACCESS_KEY_ID]
aws_secret_access_key = [MY_AWS_SECRET_ACCESS_KEY]
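(For reference, in the .boto file that gsutil generates, those two lines sit under the [Credentials] section; with placeholder values filled in, the section looks roughly like this:)
[Credentials]
# placeholder values shown here; use your own AWS access key pair
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx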
I am a noob at this and I don't know what boto is, and I have no idea whether I am editing it correctly or not. I don't know if I can put the keys directly in those lines, because I don't know how the .boto file works...
Can somebody help me with that, please? And explain the whole process to me so this works? I really appreciate it... It would be very helpful for me!
Thank you so much.

Unable to copy files from Amazon S3 even with region specified

First off: I'm new to using the AWS CLI.
I've got problems copying files from Amazon S3 using the AWS CLI. While aws s3 ls works as expected and shows me all the buckets, $ aws s3 cp s3://mybucket/subdir/* /patch/to/local/dir/ --region us-east-2 --source-region us-east-2 keeps barking at me with:
A client error (301) occurred when calling the HeadObject operation: Moved Permanently
When I log into S3 using the AWS website, I get "us-east-2" in the URLs while it displays US West (Oregon) on the side. I've also tried the above with both regions set to us-west-2, but that didn't work either. What may be going on here, and how do I get the files copied correctly?
You are trying to download data from an S3 bucket. First, configure the AWS CLI using:
aws configure
Once configured, use the s3 sync command; this will download all subdirectories locally.
aws s3 sync s3://mybucket/subdir/ /patch/to/local/dir/
Since you are using the s3 cp command, use it as:
aws s3 cp s3://mybucket/subdir/ /patch/to/local/dir/ --recursive
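If the 301 error persists after that, the request is most likely going to the wrong regional endpoint; one way to check which region the bucket actually lives in (bucket name is a placeholder) is:
aws s3api get-bucket-location --bucket mybucket
The returned LocationConstraint can then be passed to --region.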

gsutil cannot copy to s3 due to authentication

I need to copy many (1000+) files to S3 from GCS to leverage an AWS Lambda function. I have edited ~/.boto.cfg and commented out the 2 AWS authentication parameters, but a simple gsutil ls s3://mybucket fails from either a GCE or EC2 VM.
The error is: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
I use gsutil version 4.28, and the locations of the GCS and S3 buckets are US-CENTRAL1 and US East (Ohio) respectively, in case this is relevant.
I am clueless, as the AWS key is valid and I have enabled http/https. Downloading from GCS and uploading to S3 using Cyberduck on my laptop is impracticable (>230 GB).
As per https://issuetracker.google.com/issues/62161892, gsutil v4.28 does support AWS v4 signatures; you enable it by adding a new [s3] section to ~/.boto like this:
[s3]
# Note that we specify region as part of the host, as mentioned in the AWS docs:
# http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
host = s3.us-east-2.amazonaws.com
use-sigv4 = True
The use of that section is inherited from boto3, but it is currently not created by gsutil config, so it needs to be added explicitly for the target endpoint.
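With that [s3] section in place, re-running the failing listing from the question is a quick way to confirm the v4 signature is being picked up (bucket name as in the question):
gsutil ls s3://mybucket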
For S3-to-GCS, I will consider the more serverless Storage Transfer Service API.
I had a similar problem. Here is what I ended up doing on a GCE machine:
Step 1: Using gsutil, I copied the files from GCS to the GCE machine's hard drive.
Step 2: Using the AWS CLI (aws s3 cp ...), I copied the files from the GCE hard drive to the S3 bucket.
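A sketch of those two steps with placeholder bucket names and a placeholder staging directory (adjust to your own layout):
gsutil -m cp -r gs://my-gcs-bucket/prefix /local/staging/
aws s3 cp /local/staging/ s3://my-s3-bucket/prefix/ --recursive
The -m flag parallelizes the gsutil download, which helps with 1000+ files.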
The above methodology has worked reliably for me. I tried using gsutil rsync, but it failed unexpectedly.
Hope this helps.

Copying files from an AWS S3 (SSE) bucket to Google Cloud

We were trying to copy some data from an S3 bucket to Google Cloud Storage. However, the gsutil copy command results in the following error:
gsutil cp s3://my_s3_bucket/datadir1 gs://my_google_bucket
Error:
Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4
Is there a way to get around this?
The latest version of gsutil supports AWS Signature Version 4 for calls to S3, but you'll need to enable it explicitly.
First, update to the latest version of gsutil (you'll need 4.28 or higher). Then, in the [s3] section of your ".boto" configuration file, set these parameters:
[s3]
use-sigv4 = True
host = s3.<some AWS region>.amazonaws.com
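With those settings in place, the original copy can be retried as in the question, adding -R if datadir1 is a prefix containing multiple objects rather than a single object:
gsutil cp -R s3://my_s3_bucket/datadir1 gs://my_google_bucket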

Granting read access to the Authenticated Users group for a file

How do I grant read access to the Authenticated Users group for a file? I'm using s3cmd and want to do it while uploading, but for now I'm just focusing directly on changing the ACL. What should I put in for http://acs.amazonaws.com/groups/global/AuthenticatedUsers? I have tried every combination of AuthenticatedUsers possible.
./s3cmd setacl --acl-grant=read:http://acs.amazonaws.com/groups/global/AuthenticatedUsers s3://BUCKET/FILE
./s3cmd setacl --acl-grant=read:AuthenticatedUsers s3://BUCKET/FILE
This doesn't seem to be possible with s3cmd. Instead, I had to switch to the AWS CLI tools.
Here are the directions to install them:
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
It's possible to set the ACL to read for authenticated users during upload with this command:
aws s3 cp <file-to-upload> s3://<bucket>/ --acl authenticated-read
Plus a whole load of other combinations you can check out here:
http://docs.aws.amazon.com/cli/latest/reference/s3/index.html#cli-aws-s3
The following commands work for me with s3cmd version 1.6.0:
s3cmd setacl s3://<bucket>/<file-name> --acl-grant='read:http://acs.amazonaws.com/groups/global/AuthenticatedUsers'
for an individual file, and
s3cmd setacl s3://<bucket>/<dir-name> --acl-grant='read:http://acs.amazonaws.com/groups/global/AuthenticatedUsers' --recursive
for all files in a directory.
This is from http://s3tools.org/s3cmd:
Upload a file into the bucket:
~$ s3cmd put addressbook.xml s3://logix.cz-test/addrbook.xml
File 'addressbook.xml' stored as s3://logix.cz-test/addrbook.xml (123456 bytes)
Note about ACL (Access control lists): a file uploaded to an Amazon S3 bucket can either be private, that is, readable only by you, possessor of the access and secret keys, or public, readable by anyone. Each file uploaded as public is not only accessible using s3cmd but also has an HTTP address, a URL, that can be used just like any other URL and accessed, for instance, by web browsers.
~$ s3cmd put --acl-public --guess-mime-type storage.jpg s3://logix.cz-test/storage.jpg
File 'storage.jpg' stored as s3://logix.cz-test/storage.jpg (33045 bytes)
Public URL of the object is: http://logix.cz-test.s3.amazonaws.com/storage.jpg
Now anyone can display the storage.jpg file in their browser. Cool, eh?
Try changing public to authenticated and that should work.
See http://docs.amazonwebservices.com/AmazonS3/latest/dev/ACLOverview.html#CannedACL
It explains, on the Amazon side, how to use their ACLs. Supposedly, public in s3cmd translates to public-read in Amazon, so authenticated should translate to authenticated-read.
If you're willing to use Python, the boto library provides all the functionality to get and set an ACL; from the boto S3 documentation:
b.set_acl('public-read')
where b is a bucket. Of course, in your case you should change 'public-read' to 'authenticated-read'. You can do something similar for keys (files).
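A slightly fuller sketch using the same legacy boto API, with placeholder bucket and key names:
import boto

conn = boto.connect_s3()                # picks up credentials from ~/.boto or the environment
bucket = conn.get_bucket('mybucket')    # placeholder bucket name
bucket.set_acl('authenticated-read')    # bucket-level canned ACL
key = bucket.get_key('myfile.txt')      # placeholder object key
key.set_acl('authenticated-read')       # object-level canned ACL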
If you want to do it at the bucket level, you can do:
aws s3api put-bucket-acl --bucket bucketname --grant-full-control uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers
Docs: http://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-acl.html
Here is an example command that sets the ACL on an S3 object to authenticated-read:
aws s3api put-object-acl --acl authenticated-read --bucket mybucket --key myfile.txt