Creating a bucket in S3 without an Amazon account, only AWS keys - amazon-s3

I have an AWS_KEY_ID and AWS_SECRET_ACCESS_KEY for Amazon S3, but I don't know the account. How can I create a bucket in S3 from the browser without signing in to the Amazon account, using only the AWS keys?

You have two options: use the AWS CLI or s3cmd. In either case you'll first have to create a credentials file that contains the key ID and secret access key. Here is a blog post explaining that:
http://blogs.aws.amazon.com/security/post/Tx3D6U6WSFGOK2H/A-New-and-Standardized-Way-to-Manage-Credentials-in-the-AWS-SDKs
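For the AWS CLI, the credentials file lives at ~/.aws/credentials (running aws configure will create it for you) and looks like this, with placeholder values:
[default]
aws_access_key_id = YOUR_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY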
Then download your utility of choice (AWS CLI or s3cmd), install, and use the command line to create your bucket.
This is an example using the AWS CLI:
aws s3 mb s3://your-bucket-name --region us-east-1
Here are some instructions on the two options:
Use the AWS CLI:
http://docs.aws.amazon.com/cli/latest/userguide/using-s3-commands.html
Use s3cmd: http://s3tools.org/usage
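If you'd rather not install a command-line tool, a short Python script with boto3 can do the same thing. A minimal sketch, with placeholder key and bucket names:
import boto3

# Same operation in Python with boto3; the keys can be passed directly
# instead of using a credentials file (all values below are placeholders)
s3 = boto3.client(
    's3',
    aws_access_key_id='YOUR_KEY_ID',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',
    region_name='us-east-1',
)
s3.create_bucket(Bucket='your-bucket-name')  # us-east-1 needs no LocationConstraint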

Related

AWS secret key and Roles usage in the same aws s3 cp command

Assuming I'm on an EC2 instance that is configured with access to the destination bucket, is there a way to use keys for the source S3 bucket and do a copy, something like this?
aws s3 cp s3://<Access key>:<secret key>#<source bucket folder> <destination bucket folder>
The AWS CLI does not support specifying two different accounts to access buckets.
You do have options:
Use the credentials for the destination bucket. In the source bucket's account, add a bucket policy granting your destination account read access to the bucket.
If you cannot grant your destination account read access to the source bucket, create your own client using your favorite language and the AWS SDK. Initialize two client handles, one for each account, then do a read/write copy operation. This is very easy to do in Python with boto3, as in the sketch below.
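A minimal boto3 sketch of that second option, assuming placeholder key values and bucket names, and that the instance role already has write access to the destination bucket:
import boto3

# Client for the source bucket, authenticated with the explicit keys (placeholders)
src = boto3.client(
    's3',
    aws_access_key_id='SOURCE_KEY_ID',
    aws_secret_access_key='SOURCE_SECRET_KEY',
)
# Client for the destination bucket, using the EC2 instance role credentials
dst = boto3.client('s3')

# Read the object from the source account, then write it to the destination account
obj = src.get_object(Bucket='source-bucket', Key='folder/file.jpg')
dst.put_object(Bucket='destination-bucket', Key='folder/file.jpg', Body=obj['Body'].read())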

unable to copy files from Amazon S3 even with region specified

First off: I'm new to using the AWS CLI.
I'm having problems copying files from Amazon S3 using the AWS CLI. aws s3 ls works as expected and shows me all the buckets, but $ aws s3 cp s3://mybucket/subdir/* /patch/to/local/dir/ --region us-east-2 --source-region us-east-2 keeps barking at me with
A client error (301) occurred when calling the HeadObject operation: Moved Permanently. When I log into S3 using the AWS website, I see "us-east-2" in the URLs while it displays US West (Oregon) on the side. I've also tried the above with both regions set to us-west-2, but that didn't work either. What may be going on here, and how do I get the files copied correctly?
You are trying to download data from an S3 bucket. First, configure the AWS CLI using:
aws configure
Once configured, use the s3 sync command; this will download all subdirectories locally.
aws s3 sync s3://mybucket/subdir/ /patch/to/local/dir/
Since you are using the s3 cp command, use it like this:
aws s3 cp s3://mybucket/subdir/ /patch/to/local/dir/ --recursive
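The 301 Moved Permanently error usually means the request is signed for the wrong region. A quick way to confirm the bucket's actual region is to ask S3 for it; here is a minimal Python sketch with boto3, assuming a placeholder bucket name:
import boto3

# Ask S3 which region the bucket actually lives in ("mybucket" is a placeholder)
s3 = boto3.client('s3')
resp = s3.get_bucket_location(Bucket='mybucket')
# LocationConstraint is None for us-east-1, otherwise the region name
print(resp['LocationConstraint'] or 'us-east-1')
Pass whatever region this prints to --region and the HeadObject call should stop redirecting.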

gsutil cannot copy to s3 due to authentication

I need to copy many (1000+) files from GCS to S3 in order to leverage an AWS Lambda function. I have edited ~/.boto.cfg and commented out the 2 AWS authentication parameters, but a simple gsutil ls s3://mybucket fails from either a GCE or an EC2 VM.
The error is: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
I use gsutil version 4.28, and the locations of the GCS and S3 buckets are US-CENTRAL1 and US East (Ohio) respectively, in case this is relevant.
I am clueless, as the AWS key is valid and I have enabled HTTP/HTTPS. Downloading from GCS and uploading to S3 with Cyberduck on my laptop is impracticable (>230 GB).
As per https://issuetracker.google.com/issues/62161892, gsutil v4.28 does support AWS v4 signatures by adding to ~/.boto a new [s3] section like
[s3]
# Note that we specify region as part of the host, as mentioned in the AWS docs:
# http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
host = s3.us-east-2.amazonaws.com
use-sigv4 = True
The use of that section is inherited from boto, but it is currently not created by gsutil config, so it needs to be added explicitly for the target endpoint.
For S3-to-GCS, I will consider the more serverless Storage Transfer Service API.
I had a similar problem. Here is what I ended up doing on a GCE machine:
Step 1: Using gsutil, I copied files from GCS to my GCE hard drive
Step 2: Using aws cli (aws s3 cp ...), I copied files from GCE hard drive to s3 bucket
The above methodology has worked reliably for me. I tried using gsutil rsync, but it failed unexpectedly.
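For Step 2, if you prefer a script over the aws cli, a minimal boto3 sketch (bucket name and local path are placeholders) looks like this:
import os
import boto3

# Upload everything under a local directory to an S3 bucket
s3 = boto3.client('s3')
local_dir = '/path/to/local/dir'
for root, _, files in os.walk(local_dir):
    for name in files:
        path = os.path.join(root, name)
        key = os.path.relpath(path, local_dir)  # keep the directory structure as the key
        s3.upload_file(path, 'mybucket', key)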
Hope this helps

Updating files for my static website on S3

I'm trying to update a static website I'm hosting on Amazon AWS S3 - I just need to put a new version of my resume up there. I've gone through the documentation and it seems as though I need to 'invalidate' the file - but all the guides I'm finding only talk about using CloudFront, which is a service I don't use.
So for a static website where I need to update a single file, how do I do that without CloudFront?
Invalidation only applies to CloudFront; with plain S3 hosting, the new file is served as soon as you upload it over the old one. You can upload the file directly to S3 through the AWS S3 console, programmatically using an SDK for Python, Ruby, etc., or using the AWS Command Line Interface.
If you are using the AWS Command Line, you can upload a file to s3 using these commands:
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json
$ aws s3 cp myvideo.mp4 s3://mybucket/
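If you'd rather do the upload from a script, here is a minimal boto3 sketch, assuming a placeholder bucket name and that the resume is a PDF; setting ContentType matters for a static site so the browser renders the file instead of downloading it:
import boto3

# Upload the new resume over the old object; S3 serves the new version immediately,
# so no invalidation is needed unless CloudFront sits in front of the bucket
s3 = boto3.client('s3')
s3.upload_file(
    'resume.pdf',
    'my-website-bucket',
    'resume.pdf',
    ExtraArgs={'ContentType': 'application/pdf'},
)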

Move the S3 bucket to another AWS server

I have created an AWS S3 bucket and uploaded many images to it, but now I want to move all the images to another AWS S3 bucket.
Can we directly copy between buckets, or link to another AWS server?
Please provide a suggestion.
You can use the AWS Command-Line Interface (CLI) s3 module's cp (copy) command to copy files from bucket to bucket:
aws s3 cp s3://mybucket/file.jpg s3://anotherbucket/file.jpg
See the cp command documentation.
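To move many images at once, aws s3 sync s3://mybucket s3://anotherbucket copies everything in one command. If you'd rather script it, here is a minimal boto3 sketch with placeholder bucket names:
import boto3

# Copy every object from one bucket to another; S3 performs the copy
# server-side, so nothing is downloaded locally
s3 = boto3.resource('s3')
for obj in s3.Bucket('mybucket').objects.all():
    s3.Object('anotherbucket', obj.key).copy_from(
        CopySource={'Bucket': 'mybucket', 'Key': obj.key}
    )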