I need to copy many (1000+) files from GCS to S3 to leverage an AWS Lambda function. I have edited ~/.boto.cfg and uncommented the two AWS authentication parameters, but a simple gsutil ls s3://mybucket fails from both a GCE and an EC2 VM.
The error is: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
I use gsutil version 4.28; the GCS bucket is in US-CENTRAL1 and the S3 bucket is in US East (Ohio), in case this is relevant.
I am clueless, as the AWS key is valid and I have enabled HTTP/HTTPS. Downloading from GCS and uploading to S3 with Cyberduck on my laptop is impracticable (>230 GB).
As per https://issuetracker.google.com/issues/62161892, gsutil v4.28 does support AWS v4 signatures; you enable them by adding a new [s3] section to ~/.boto, like this:
[s3]
# Note that we specify region as part of the host, as mentioned in the AWS docs:
# http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
host = s3.us-east-2.amazonaws.com
use-sigv4 = True
The use of that section is inherited from boto, but it is currently not created by gsutil config, so it needs to be added explicitly for the target endpoint.
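With that section in place, the listing from the question should now be signed with SigV4 and succeed, e.g.:
gsutil ls s3://mybucket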
For S3-to-GCS transfers, I will consider the more serverless Storage Transfer Service API.
I had a similar problem. Here is what I ended up doing on a GCE machine:
Step 1: Using gsutil, I copied files from GCS to my GCE hard drive
Step 2: Using aws cli (aws s3 cp ...), I copied files from GCE hard drive to s3 bucket
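A minimal sketch of those two steps, assuming hypothetical bucket names and a local staging directory (adjust paths, bucket names, and credentials to your setup):
# Stage the GCS objects on the GCE VM's disk, then push them to S3
mkdir -p /tmp/staging
gsutil -m cp -r gs://my-gcs-bucket/mydata /tmp/staging
aws s3 cp /tmp/staging/mydata s3://my-s3-bucket/mydata --recursive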
The above methodology has worked reliably for me. I tried using gsutil rsync, but it failed unexpectedly.
Hope this helps.
I am trying to capture all incoming traffic on a specific port using GoReplay and upload it directly to S3.
I am running a simple file server on port 8000 and a gor instance using the (simple) command
gor --input-raw :8000 --output-file s3://<MyBucket>/%Y_%m_%d_%H_%M_%S.log
It does create a temporary file at /tmp/, but other than that, it does not upload anything to S3.
Additional information:
The OS is Ubuntu 14.04.
The AWS CLI is installed.
The AWS credentials are defined in the environment.
It seems the information you are providing, or the scenario you have explained, is not complete; however, uploading a file from your EC2 machine to S3 is as simple as the command below.
aws s3 cp yourSourceFile s3://yourbucket
To see your file, you can use the command below:
aws s3 ls s3://yourbucket
However, S3 is object storage, so you can't use it as a destination for files that are continually being edited, appended to, or updated.
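If gor is writing rotated log chunks locally (the /tmp file mentioned in the question), one workaround, sketched under the assumption that finished chunks end up as *.log files in a hypothetical local directory, is to push them to S3 periodically with the AWS CLI:
# Hypothetical output directory; point this at wherever gor writes completed files
while true; do
  aws s3 sync /tmp/gor-output/ s3://<MyBucket>/ --exclude "*" --include "*.log"
  sleep 60
done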
We were trying to copy some data from an S3 bucket to Google Cloud Storage. However, the gsutil copy command results in the following error:
gsutil cp s3://my_s3_bucket/datadir1 gs://my_google_bucket
Error:
Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4
Is there a way to get around this?
The latest version of gsutil supports AWS Signature Version 4 for calls to S3, but you'll need to explicitly enable it.
First, update to the latest version of gsutil (you'll need 4.28 or higher). Then, in the [s3] section of your .boto configuration file, set these parameters:
[s3]
use-sigv4 = True
host = s3.<some AWS region>.amazonaws.com
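With those settings in place (the host should name the region your S3 bucket lives in), the copy from the question should be signed with Signature Version 4, e.g.:
gsutil cp s3://my_s3_bucket/datadir1 gs://my_google_bucket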
I'm trying to update a static website I'm hosting on Amazon AWS S3; I just need to put a new version of my resume up there. I've gone through the documentation, and it seems as though I need to 'invalidate' the file, but all the guides I'm finding only talk about using CloudFront, which is a service I don't use.
So for a static website where I need to update a single file, how do I do that without CloudFront?
You can upload the file directly to S3 through the AWS S3 Console, programmatically using a package for Python, Ruby, etc., or using the AWS Command Line Interface.
If you are using the AWS Command Line Interface, you can upload a file to S3 with these commands:
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json
$ aws s3 cp myvideo.mp4 s3://mybucket/
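For the resume case specifically, a hedged example (the file name, bucket, and key below are placeholders, not anything from the original post):
# Overwriting the object in place is enough; invalidation only matters when CloudFront is caching it
$ aws s3 cp resume.pdf s3://mybucket/resume.pdf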
I'm using the S3 adapter to copy files from a Snowball device to my local machine.
Everything appears to be in order as I was able to run this command and see the bucket name:
aws s3 ls --endpoint http://snowballip:8080
But beyond this, AWS doesn't offer any examples of calling the cp command. How do I provide the bucket name and the key with this --endpoint flag?
Further, when I ran this:
aws s3 ls --endpoint http://snowballip:8080/bucketname
It returned 'Bucket'... I'm not sure what that means, because I expected to see the files.
I can confirm the following is correct for Snowball and Snowball Edge, as @sqlbot says in the comment:
aws s3 ls --endpoint http://snowballip:8080 s3://bucketname/[optionalprefix]
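The cp subcommand takes the same --endpoint flag; a sketch for pulling an object down from the device, with a hypothetical key and local path:
aws s3 cp s3://bucketname/optionalprefix/myfile ./localdir/ --endpoint http://snowballip:8080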
References:
http://docs.aws.amazon.com/cli/latest/reference/
http://docs.aws.amazon.com/snowball/latest/ug/using-adapter-cli.html
Just got one in the post
I am trying to load data from AWS S3 into Google Cloud Storage:
- I am using gsutil
- I've made the file on S3 public
On the gsutil command line on a Windows machine I entered:
python C:\gsutil\gsutil cp https://s3.amazonaws.com/my_bucket/myfile.csv.gz gs://my_folder
The error I receive is:
InvalidUriError: Unrecognized scheme "https"
I have tried substituting http for https
I have successfully uploaded from my local computer to google cloud storage substituting in a local file.
Thanks.
The http and https schemes are not supported by the gsutil command. Try...
python C:\gsutil\gsutil cp s3://my_bucket/myfile.csv.gz gs://my_folder
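Note that when gsutil reads from an s3:// URI it authenticates through boto, so if the copy is then rejected with an authorization error you may also need AWS keys in the [Credentials] section of your ~/.boto file; a minimal sketch with placeholder values:
[Credentials]
aws_access_key_id = <your AWS access key ID>
aws_secret_access_key = <your AWS secret access key>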