Copy data from Scaleway/AWS S3 to Google Cloud

I am trying to copy files/folders from Scaleway (an Object Storage bucket) to a Google Cloud Storage bucket using gsutil:
gsutil cp -R s3://scaleway-bucket gs://cloud-storage-bucket
and I am getting this error:
AccessDeniedException: 403 InvalidAccessKeyId
InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
I have checked the .boto file for the access and secret keys, and the entries are correct.
I think I am either missing something or doing it incorrectly.
Thanks.

I have uninstalled/removed and reinstalled gcloud and gsutil. It worked for me.
Thanks.
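If this comes up again, it can help to confirm which boto config file gsutil is actually reading before re-checking the keys; a quick check, assuming a standard gsutil install:
gsutil version -l
Among other details, this prints the config path(s) gsutil loads, so you can confirm that the .boto you edited is the one being used.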

Related

aws s3 ls gives error botocore.utils.BadIMDSRequestError: <botocore.awsrequest.AWSRequest object at 0x7f3f6cb44d00>

Recently I installed the AWS CLI on a Linux machine following the documentation on the official AWS website. On the first go, I was able to run the s3 commands without any issue. As part of my development, I uninstalled aws-cli and re-installed it. Now I get the error botocore.utils.BadIMDSRequestError: <botocore.awsrequest.AWSRequest object at 0x7f3f6cb44d00>
when I execute aws s3 ls
I figured it out.
I just needed to add the region:
aws configure
AWS Access Key ID [******************RW]:
AWS Secret Access Key [******************7/]:
Default region name [None]: us-east-1
Then it works!
Thanks.
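The same fix can also be applied non-interactively; a minimal sketch, assuming the default profile and the us-east-1 region:
aws configure set region us-east-1
aws s3 ls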

Copy files from GCLOUD to S3 with SDK GCloud

I am trying to copy a file between Google Cloud Storage and AWS S3 with the gcloud SDK console, and it shows me an error. I have found a way to copy the Cloud Storage file to a local directory (gsutil -D cp gs://mybucket/myfile C:\tmp\storage\file) and to upload that local file to S3 using the AWS CLI (aws s3 cp C:\tmp\storage\file s3://my_s3_dirctory/file), and it works perfectly, but I would like to do all of this directly, with no need to download the files, using only the gcloud SDK console.
When I try to do this, the system shows me an error:
gsutil -D cp gs://mybucket/myfile s3://my_s3_dirctory/file.csv
Failure: Host [...] returned an invalid certificate. (remote hostname
"....s3.amazonaws.com" does not match certificate)...
I have edited and uncommented these lines in the .boto file, but the error continues:
# To add HMAC aws credentials for "s3://" URIs, edit and uncomment the
# following two lines:
aws_access_key_id = [MY_AWS_ACCESS_KEY_ID]
aws_secret_access_key = [MY_AWS_SECRET_ACCESS_KEY]
I am a noob at this: I don't know what boto is, and I have no idea whether I am editing it correctly. I don't know if I can put the keys directly in the command, because I don't know how the .boto file works...
Can somebody help me with that, please, and explain the whole process so this works? I would really appreciate it; it would be very helpful for me!
Thank you so much.
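For what it's worth, those two uncommented lines normally live under the [Credentials] section of the .boto file; a minimal sketch, with placeholder values standing in for real keys:
[Credentials]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
The keys go into this file rather than into the gsutil command itself; gsutil reads them automatically for s3:// URLs.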

gsutil cannot copy to s3 due to authentication

I need to copy many (1000+) files from GCS to S3 to leverage an AWS Lambda function. I have edited ~/.boto.cfg and uncommented the two AWS authentication parameters, but a simple gsutil ls s3://mybucket fails from either a GCE or EC2 VM.
The error is The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
I use gsutil version 4.28, and the locations of the GCS and S3 buckets are respectively US-CENTRAL1 and US East (Ohio), in case this is relevant.
I am clueless, as the AWS key is valid and I have enabled HTTP/HTTPS. Downloading from GCS and uploading to S3 using Cyberduck on my laptop is impracticable (>230 GB).
As per https://issuetracker.google.com/issues/62161892, gsutil v4.28 does support AWS v4 signatures by adding to ~/.boto a new [s3] section like
[s3]
# Note that we specify region as part of the host, as mentioned in the AWS docs:
# http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
host = s3.us-east-2.amazonaws.com
use-sigv4 = True
The use of that section is inherited from boto, but it is currently not created by gsutil config, so it needs to be added explicitly for the target endpoint.
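Once that [s3] section is saved, a quick sanity check could be re-running the listing that failed before (bucket name as in the question):
gsutil ls s3://mybucket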
For s3-to-GCS, I will consider the more server-less Storage Transfer Service API.
I had a similar problem. Here is what I ended up doing on a GCE machine:
Step 1: Using gsutil, I copied the files from GCS to my GCE hard drive
Step 2: Using the AWS CLI (aws s3 cp ...), I copied the files from the GCE hard drive to the S3 bucket
The above methodology has worked reliably for me. I tried using gsutil rsync, but it failed unexpectedly.
Hope this helps
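For reference, a sketch of that two-step approach on a GCE VM; the bucket names and the staging directory are assumptions:
Step 1: mkdir -p /tmp/staging && gsutil -m cp -r gs://my-gcs-bucket/* /tmp/staging/
Step 2: aws s3 cp /tmp/staging/ s3://my-s3-bucket/ --recursive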

gsutil - How to copy/download all files from Google private cloud?

Google Play Developer account reports are stored in a private Google Cloud Storage bucket.
Every Google Play Developer account has a Google Cloud Storage bucket ID.
So to access it, I have installed gsutil on my Windows machine.
Now I am using this command to copy all files from the bucket:
gsutil cp -r dir gs://[bucket_id]
It says:
CommandException: No URLs matched
When I list all directories in the bucket, this command works:
gsutil ls gs://[bucket_id]
Can anyone help me understand the gsutil exception?
This exception occurs because the destination URL is missing; gsutil cp needs both a source and a destination.
It should be like:
gsutil cp -r gs://[bucket_id] [destination_url]
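For example, a sketch of the download direction on Windows; the local folder C:\play_reports is an assumption and should already exist:
gsutil -m cp -r gs://[bucket_id]/* C:\play_reports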

How can I download a file from an S3 bucket with wget?

I can push some content to an S3 bucket with my credentials through the S3cmd tool with s3cmd put contentfile S3://test_bucket/test_file
I am required to download the content from this bucket on other computers that don't have s3cmd installed on them, BUT they do have wget installed.
When I try to download some content from my bucket with wget, I get this:
https://s3.amazonaws.com/test_bucket/test_file
--2013-08-14 18:17:40-- https://s3.amazonaws.com/test_bucket/test_file
Resolving s3.amazonaws.com (s3.amazonaws.com)... [ip_here]
Connecting to s3.amazonaws.com (s3.amazonaws.com)|ip_here|:port... connected.
HTTP request sent, awaiting response... 403 Forbidden
2013-08-14 18:17:40 ERROR 403: Forbidden.
I have manually made this bucket public through the Amazon AWS web console.
How can I download content from an S3 bucket with wget into a local txt file?
You should be able to access it from a url created as follows:
http://{bucket-name}.s3.amazonaws.com/<path-to-file>
Now, say your s3 file path is:
s3://test-bucket/test-folder/test-file.txt
You should be able to wget this file with following url:
http://test-bucket.s3.amazonaws.com/test-folder/test-file.txt
Go to S3 console
Select your object
Click 'Object Actions'
Choose 'Download As'
Use your mouse right-click to 'Copy Link Address'
Then use the command:
wget --no-check-certificate --no-proxy 'http://your_bucket.s3.amazonaws.com/your-copied-link-address.jpg'
The AWS CLI has a 'presign' command that one can use to get a temporary public URL to a private S3 resource.
aws s3 presign s3://private_resource
You can then use wget to download the resource using the presigned URL.
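A minimal sketch combining the two; the bucket and object names are just examples:
URL="$(aws s3 presign s3://my-bucket/private_resource --expires-in 3600)"
wget -O private_resource "$URL"
The --expires-in value is in seconds, so this link stays valid for one hour.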
Got it: if you upload a file to an S3 bucket with S3cmd using the --acl-public flag, then you can easily download the file from S3 with wget.
Conclusion: in order to download with wget, one first needs to upload the content to S3 with s3cmd put --acl-public --guess-mime-type <test_file> s3://test_bucket/test_file
Alternatively, you can try:
s3cmd setacl --acl-public --guess-mime-type s3://test_bucket/test_file
Notice the setacl command above. That will make the file in S3 publicly accessible.
Then you can execute: wget http://s3.amazonaws.com/test_bucket/test_file
I have been in the same situation a couple of times. The fastest and easiest way to download any file from AWS using the CLI is the following command:
aws s3 cp s3://bucket/dump.zip dump.zip
The file downloads much faster than via wget, at least if you are outside of the US.
I had the same error, and I solved it by adding a Security Group inbound rule:
HTTPS on port 443 from my IP address (as I'm the only one accessing it) for the subnet my instance was in.
Hope it helps anyone who forgot to include this.
Please make sure that the read permission has been granted correctly.
If you do not want to enter any account/password and just want to fetch with the wget command, make sure the permissions are set as follows:
Go to Amazon S3 -> Buckets -> Permissions -> Edit.
Check the Object permission for "Everyone (public access)" and save the changes.
Alternatively, choose the object and go to "Actions" -> "Make public", which does the same thing under the permission settings.
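If you prefer the CLI over the console, an equivalent sketch (using the example bucket/key from the question, and assuming the bucket's Block Public Access settings allow ACLs):
aws s3api put-object-acl --bucket test_bucket --key test_file --acl public-read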
In case you do not have access to install the AWS client on your Linux machine, try the method below.
Go to the bucket and click the Download As button, then copy the generated link.
Execute the command below:
wget --no-check-certificate --no-proxy --user=username --ask-password "<download_url>"
Thanks
You have made the bucket public, but you also need to make the object public.
Also, the wget command doesn't work with the s3:// address; you need to find the object's URL in the AWS web console.
I know I'm late to this post, but I thought I'd add something no one has mentioned here.
If you're creating a presigned S3 URL for wget, make sure you're running AWS CLI v2.
I ran into the same issue and realized S3 had this problem:
Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4
This gets resolved once you presign with AWS CLI v2.
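A quick way to check which CLI you are on before presigning (the object name is an example):
aws --version
aws s3 presign s3://my-bucket/kms-encrypted-object --expires-in 900
The first command should report aws-cli/2.x before you generate the presigned URL.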
The simplest way to do that is to disable Block all public access first.
Hit your bucket name >> go to Permissions >> Block public access (bucket settings)
If it is on >> hit Edit >> Uncheck the box, then click on Save changes
Now hit the object name >> Object action >> Make public using ACL >> then confirm Make public
After that, copy the Object URL, and proceed to download
I hope this helps future askers. Cheers
I had the same error.
I did the following:
created an IAM role > AWS Service type > with the AmazonS3FullAccess policy attached
applied this role to the EC2 instance
in the Security Groups, opened inbound HTTP and HTTPS to Anywhere-IPv4
made the S3 bucket public
profit! wget works! ✅