Move an AWS S3 bucket to another AWS account - amazon-s3

Does AWS provide a way to copy a bucket from one account to a different account? I am uploading several files to my own bucket for development purposes, but now I want to switch the bucket over to the client's account.
What are all the possible solutions for doing that?

You can copy the contents of one bucket to another owned by a different account, but you cannot transfer ownership of a bucket to a new account. The way to think about it is you're transferring ownership of the objects in the bucket, not the bucket itself.
Amazon has very detailed articles about this procedure.
In the source account, attach the following policy to the bucket you want to copy.
#Bucket policy in the source AWS account
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DelegateS3Access",
            "Effect": "Allow",
            "Principal": {"AWS": "222222222222"},
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::sourcebucket/*",
                "arn:aws:s3:::sourcebucket"
            ]
        }
    ]
}
Attach a policy to a user or group in the destination AWS account to delegate access to the bucket in the source AWS account. If you attach the policy to a group, make sure that the IAM user is a member of the group.
#User or group policy in the destination AWS account
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::sourcebucket",
            "arn:aws:s3:::sourcebucket/*",
            "arn:aws:s3:::destinationbucket",
            "arn:aws:s3:::destinationbucket/*"
        ]
    }
}
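If you prefer to attach that user policy from code rather than the console, a small boto3 sketch of the same step might look like this; the user name and policy name are placeholders, and the policy document is the JSON shown above:

import boto3
import json

iam = boto3.client("iam")  # run with credentials from the destination account

# The inline policy document is the same JSON shown above.
policy_document = {
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::sourcebucket",
            "arn:aws:s3:::sourcebucket/*",
            "arn:aws:s3:::destinationbucket",
            "arn:aws:s3:::destinationbucket/*",
        ],
    },
}

iam.put_user_policy(
    UserName="bucket-migration-user",        # placeholder user name
    PolicyName="DelegateSourceBucketAccess",  # placeholder policy name
    PolicyDocument=json.dumps(policy_document),
)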
When these steps are completed, you can copy objects by using the AWS Command Line Interface (CLI) commands cp or sync. For example, the following aws s3 sync command could be used to copy the contents from a bucket in the source AWS account to a bucket in the destination AWS account.
aws s3 sync s3://sourcebucket s3://destinationbucket
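If you would rather script the copy than use the CLI, here is a minimal boto3 sketch of the same idea; it assumes the bucket names from the policies above and must run with credentials from the destination account so the copied objects are owned by that account:

import boto3

# Run with credentials from the destination account (the one granted access
# by the source bucket policy) so the new objects are owned by that account.
s3 = boto3.resource("s3")

source_bucket = "sourcebucket"            # name used in the policies above
destination_bucket = "destinationbucket"  # name used in the policies above

for obj in s3.Bucket(source_bucket).objects.all():
    # Server-side copy: the data moves within S3, not through this machine.
    s3.Object(destination_bucket, obj.key).copy(
        {"Bucket": source_bucket, "Key": obj.key}
    )
    print("copied", obj.key)

For large buckets, aws s3 sync remains the simpler option since it parallelizes and can be re-run to resume; the loop above is just the same operation spelled out.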

You cannot move the whole bucket to another account. You would have to delete the bucket in one account and re-create a bucket with the same name in the other account; note that it can take up to 24 hours for a bucket name to become available again after you delete it.
Alternatively, you can create a new bucket in the target account, move all the data there, and then delete the old bucket.
There are different tools that can help with these steps, but I assume I shouldn't paste any links here.
Do you need to move the bucket to another region, or do you need to make these changes within the same one?

Related

Why is S3 cross-region replication not working for us when we upload a file with PHP?

S3 cross-region replication is not working for us when we upload a file with PHP.
When we upload a file from the AWS console it replicates to the other bucket and works great, but when we upload via the S3 API for PHP (putObject), the file is uploaded but doesn't replicate to the other bucket.
What are we missing here?
Thanks
As I commented, it would be great to see the bucket policy of the upload bucket, the bucket policy of the destination bucket, and the permissions granted to whatever IAM role / user the PHP is using.
My guess is that there's some difference in configuration or permissions between the source bucket's owning account (which is likely what you use when working from the AWS Console) and whatever account, role, or user your PHP code runs as. For example:
If the owner of the source bucket doesn't own the object in the bucket, the object owner must grant the bucket owner READ and READ_ACP permissions with the object access control list (ACL)
Pending more info from the OP, I'll add some potentially helpful trouble-shooting resources:
Can't get amazon S3 cross-region replication between two accounts to work
AWS Troubleshooting Cross-Region Replication
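On the object-ACL point quoted above: if the PHP uploads run under a different account than the bucket owner, one thing to try is granting the bucket owner full control at upload time. The question uses the PHP SDK, but a rough boto3 equivalent of that putObject call (bucket and key names are made up here) would be:

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key names, purely for illustration.
with open("example.jpg", "rb") as f:
    s3.put_object(
        Bucket="source-replication-bucket",
        Key="uploads/example.jpg",
        Body=f,
        # Grants the bucket owner full control of the new object, which
        # matters when the uploader is not the bucket-owning account.
        ACL="bucket-owner-full-control",
    )

The same ACL value can be passed in the PHP SDK's putObject parameters.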
I don't know if it is the same for replicating buckets between accounts, but I use this policy to replicate objects uploaded to a bucket in us-east-1 to a bucket in eu-west-1, and it works like a charm, both when uploading files manually and from a Python script.
{
    "Version": "2008-10-17",
    "Id": "S3-Console-Replication-Policy",
    "Statement": [
        {
            "Sid": "S3ReplicationPolicyStmt1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<AWS Account ID>:root"
            },
            "Action": [
                "s3:GetBucketVersioning",
                "s3:PutBucketVersioning",
                "s3:ReplicateObject",
                "s3:ReplicateDelete"
            ],
            "Resource": [
                "arn:aws:s3:::<replicated region ID>.<replicated bucket name>",
                "arn:aws:s3:::<replicated region ID>.<replicated bucket name>/*"
            ]
        }
    ]
}
Where:
- <AWS Account ID> is, of course, your AWS account ID
- <replicated region ID> is the ID of the AWS region (eu-west-1, us-east-1, ...) where the replicated bucket will be (in my case it is eu-west-1)
- <replicated bucket name> is the name of the bucket you want to replicate.
So say you want to replicate a bucket called "my.bucket.com" to eu-west-1: the Resource ARN to put in the policy will be arn:aws:s3:::eu-west-1.my.bucket.com. Same for the one with the trailing /*.
Also the replication rule is set as follows:
- Source: entire bucket
- Destination: the bucket I mentioned above
- Destination options: leave all unchecked
- IAM role: Create new role
- Rule name: give it a significant name
- Status: Enabled
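If you would rather configure the same rule programmatically instead of through the console, a hedged boto3 sketch might look like the following; the role ARN and bucket names are placeholders, and versioning must already be enabled on both buckets:

import boto3

s3 = boto3.client("s3")

# Placeholders throughout: substitute your own source bucket, IAM role ARN
# (the console's "Create new role" option makes one for you) and destination.
s3.put_bucket_replication(
    Bucket="my.bucket.com",  # source bucket, versioning enabled
    ReplicationConfiguration={
        "Role": "arn:aws:iam::<AWS Account ID>:role/s3-replication-role",
        "Rules": [
            {
                "ID": "ReplicateEntireBucket",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter = entire bucket
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::eu-west-1.my.bucket.com"
                },
            }
        ],
    },
)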

Read Only Bucket Policy Settings for Amazon S3 - For Streaming Audio Snippets

I want to move the audio snippets off my shop page and host them on Amazon S3.
My goal is: the public/everyone can read, but only the owner/me can write.
Here is the code I used.
Under Permissions - Bucket Policy I'm using the following code:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mybucket/*"
        }
    ]
}
But the permissions I get are confusing me (see screenshot).
And when I click on the relevant file I get this:
Do I have to click on "everyone" and add "read"?
Here is another window where I had to change the setting to false (on the right side), because otherwise I was getting "Access denied":
And then there is a third permissions window (kind of global, outside the bucket itself).
I guess what I'm asking is: is this how you do it if you want to set files to "read only" for the public and "read and write" for the owner?
Can someone confirm that this is set up and looking right?
Help is very much appreciated. Thanks.
I'm not 100% sure this is the best answer, but what comes to mind is having a private read/write S3 bucket that syncs with your public bucket. AWS is strict about public vs. private buckets, so I don't imagine they would allow owner-only write access. I could be wrong. Basically, keep a personal private S3 bucket that syncs to your public bucket for everyone else.
Along the lines of this:
Automatically sync two Amazon S3 buckets, besides s3cmd?
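If you do go the two-bucket route, the sync itself is easy to script. A minimal boto3 sketch under that assumption (both bucket names are hypothetical): copy anything in the private bucket that is not yet in the public one, and let the public bucket's read-only policy do the rest.

import boto3

s3 = boto3.resource("s3")

# Hypothetical names: a private bucket only you write to, and the public
# bucket (carrying the public-read policy) that serves the audio snippets.
private_bucket = s3.Bucket("my-shop-audio-private")
public_bucket = s3.Bucket("my-shop-audio-public")

# Keys already published, so only new snippets get copied.
published = {obj.key for obj in public_bucket.objects.all()}

for obj in private_bucket.objects.all():
    if obj.key in published:
        continue
    # Server-side copy; nothing is downloaded to this machine.
    s3.Object(public_bucket.name, obj.key).copy(
        {"Bucket": private_bucket.name, "Key": obj.key}
    )

For what it's worth, the policy in the question only grants public s3:GetObject and no public write access, so the two-bucket setup is extra separation rather than the only way to get "public read, owner write".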

Accessing an S3 bucket in a different region from an EC2 instance

I have assigned a role with the following policy to my EC2 instance running in the us-west-2 region -
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": "*"
        }
    ]
}
and I am trying to access a bucket in the ap-southeast-1 region. The problem is that every aws s3 operation times out. I have also tried specifying the region in the command with --region ap-southeast-1.
From the documentation, I found this pointer -
Endpoints are supported within the same region only. You cannot create
an endpoint between a VPC and a service in a different region.
So, what is the process for accessing a bucket in a different region from the instance using the aws-cli or the boto client?
Apparently, to access a bucket in a different region, the instance also needs access to the public internet. Therefore, the instance needs to have a public IP or it has to be behind a NAT.
I think it is not necessary to specify the region of the bucket in order to access it; you can check some boto3 examples here:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.get_object
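For reference, a plain get_object call in the spirit of that documentation looks like this; the bucket and key names are made-up examples, and region_name is only pinned here to show where it would go if you do want to target the bucket's region explicitly:

import boto3

# region_name is optional; it is set here only to show how you would pin
# the client to the bucket's region (ap-southeast-1 in the question).
s3 = boto3.client("s3", region_name="ap-southeast-1")

# Hypothetical bucket and key, purely to illustrate the call shape.
response = s3.get_object(Bucket="my-apsoutheast-bucket", Key="data/report.csv")
print(len(response["Body"].read()), "bytes downloaded")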
I would check to make sure you've given the above permission to the correct User or Role.
Run the command:
aws sts get-caller-identity
You may think the EC2 instance is using credentials you've set when it may be using an IAM role.
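The boto3 equivalent of that check, if it's easier to run from the instance, is a one-liner; it prints the account, ARN, and user ID of whatever credentials are actually being resolved:

import boto3

# Shows which identity (IAM role, user, etc.) the instance's calls run as.
print(boto3.client("sts").get_caller_identity())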

Google storage transfer job (from an Amazon S3 bucket) fails with error Unauthorized

I created a Google storage transfer job with the following configuration.
The source S3 bucket has the following policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::jaya/*",
                "arn:aws:s3:::jaya"
            ]
        }
    ]
}
The transfer job kicks off but fails; the reason I got from the transfer job is:
Object: s3_fetch:0001_part_00.gz
Details: Http error code: Unauthorized.
The strangest thing is that the source bucket does have the file 0001_part_00.gz, which means the transfer job is able to fetch the file name from the S3 bucket but isn't able to download it from S3. What could be the reason?
I was having the same issue all day today but now it's working.
I tried creating a bucket from within the job creation process and it worked. I then recreated one of my previously failing jobs (to a bucket created outside the transfer job) and it's also working.
So maybe something was broken at AWS or GCP today.

Amazon AWS - different access permissions for files in the same bucket

Does anyone know whether there is an option to set file permissions for a certain file on the AWS S3 service, to restrict access to that file only?
Here is the thing. I have a bucket with a public-read policy as below:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket-name/*"
        }
    ]
}
There are a bunch of files there which are related to data in my database. When I delete a record it is not actually deleted, so I want to make the file related to that deleted record publicly inaccessible while it stays in this bucket.
I have two not-very-pretty ideas for how to resolve this:
Copy all that data into another bucket with a different policy.
Rename the file and update the policy to disable access to files with a certain prefix or suffix (not sure if this is possible).
But both of those require write/delete actions, which I'd like to avoid. So the question is: is there a way to set some kind of permission on a single file to prevent access to it?
Thanks,
Ante
Check if AWS S3 ACLs provide what you are looking for: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
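One caveat worth adding: the bucket policy above allows s3:GetObject on my-bucket-name/*, and an Allow in the bucket policy grants access even if the object's own ACL is private, so changing just the object ACL may not block public reads here. An alternative sketch, assuming a hypothetical key name, is to add an explicit Deny for that one object via boto3:

import boto3
import json

s3 = boto3.client("s3")

# Keeps the existing public-read Allow, but adds a Deny for one object.
# Note: a Deny with Principal "*" also blocks the owner's own GetObject on
# that key unless you scope it with a condition.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket-name/*",
        },
        {
            "Sid": "DenyDeletedRecordFile",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            # Hypothetical key for the file tied to the deleted record.
            "Resource": "arn:aws:s3:::my-bucket-name/records/12345",
        },
    ],
}

s3.put_bucket_policy(Bucket="my-bucket-name", Policy=json.dumps(policy))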