I need to move data from an S3 bucket to another bucket, on a different account.
I was able to sync buckets by running:
aws s3 sync s3://my_old_bucket s3://my_new_bucket --profile myprofile
myprofile contents:
[profile myprofile]
aws_access_key_id = old_account_key_id
aws_secret_access_key = old_account_secret_access_key
I have also set bucket policies on both the origin and the destination: the origin allows listing and getting objects, and the destination allows putting them.
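Roughly, the destination-side policy is along these lines (written here as a boto3 sketch; OLD_ACCOUNT_ID is a placeholder for the old account's ID, and the listing grant is only there because sync compares what already exists):

import boto3, json

# Sketch only: run with credentials from the new (destination) account.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::OLD_ACCOUNT_ID:root"},
        "Action": ["s3:ListBucket", "s3:PutObject"],
        "Resource": ["arn:aws:s3:::my_new_bucket", "arn:aws:s3:::my_new_bucket/*"],
    }],
}
boto3.client("s3").put_bucket_policy(Bucket="my_new_bucket", Policy=json.dumps(policy))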
The command works perfectly and I can log in to the other account and see the files. But I can't take ownership of them or make the new bucket public. I need to be able to make changes the way I could in the old account. The new account is totally unrelated to the old one. It looks like the files are retaining their permissions and are still owned by the old account.
How can I set permissions in order to gain full access to files with the new account?
Add --acl bucket-owner-full-control to your CLI call, so your command should look something like this:
aws s3 sync s3://my_old_bucket s3://my_new_bucket --acl bucket-owner-full-control --profile myprofile
bucket-owner-full-control is a canned ACL (in short, a canned ACL is a predefined set of grants); see the S3 Canned ACL documentation for the other options available and what they do.
This will result in the destination bucket's owner having full control of the objects.
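If you drive the copy from boto3 instead of the CLI, the same canned ACL can be applied per object; a minimal sketch using the same bucket and profile names as above (the key is just an example):

import boto3

# Minimal sketch: copy a single object and apply the canned ACL as it is written
# to the destination bucket, using the old account's profile as in the CLI command.
s3 = boto3.Session(profile_name="myprofile").client("s3")
s3.copy_object(
    Bucket="my_new_bucket",
    Key="example.txt",  # example key
    CopySource={"Bucket": "my_old_bucket", "Key": "example.txt"},
    ACL="bucket-owner-full-control",
)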
"It looks like files are retaining permissions and they are still owned by the old account."
By default, the uploader of the object owns it.
As the new bucket is owned by the new account, you can grant the bucket owner full-control permission on the objects by passing the grants option:
--grants full=id=canonicalUserId-ofTheBucketOwner
You can view the canonical ID of the bucket owner in the AWS S3 console.
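If you would rather not open the console, the same canonical ID can be read with the new account's credentials; a small boto3 sketch (the profile name is an assumption):

import boto3

# Run with the NEW account's credentials: the Owner in the list-buckets response
# is the canonical user of the calling account, i.e. the new bucket owner's canonical ID.
new_account = boto3.Session(profile_name="newaccountprofile").client("s3")
print(new_account.list_buckets()["Owner"]["ID"])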
Related
I'm practicing deployment using AWS S3 and CloudFront.
The issue I ran into was that account B accessed an S3 bucket created by account A and uploaded objects to it using GitHub Actions.
I solved this issue by changing two things.
First, I added the external account as a grantee in the S3 bucket ACL (roughly what the sketch after this question reproduces).
Second, I added --acl bucket-owner-full-control to the .yml file for GitHub Actions, like below.
...
- name: S3 Deploy
run: aws s3 sync ./dist s3://s3-bucket-name/ --acl bucket-owner-full-control
With both changes in place, it works.
But I don't understand why the --acl bucket-owner-full-control option is needed.
I thought the first setting was enough for account B to access the S3 bucket owned by account A.
I found something that may be related to this topic. Is it related to this issue?
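Here is roughly what that first change amounts to, as a boto3 sketch (the canonical ID for account B is a placeholder):

import boto3

# Sketch of the grantee change, done with account A's (bucket owner's) credentials.
# put_bucket_acl replaces the whole ACL, so the owner's own grant is re-added here.
s3 = boto3.client("s3")
owner_id = s3.get_bucket_acl(Bucket="s3-bucket-name")["Owner"]["ID"]
s3.put_bucket_acl(
    Bucket="s3-bucket-name",
    GrantFullControl=f"id={owner_id}",
    GrantWrite="id=ACCOUNT_B_CANONICAL_USER_ID",
)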
I have 2 AWS accounts, Account A and Account B.
Account A has an EKS cluster with a Flink cluster running on it. To manage the IAM roles, we use Kube2iam.
All the pods on the cluster have specific roles assigned to them. For simplicity, let's say the role for one of the pods is Pod-Role.
The K8s worker nodes have the role Worker-Node-role
Kube2iam is correctly configured to make proper EC2 metadata calls when required.
Account B has an S3 bucket, which the pod hosted on Account A's worker node needs to read.
Possible Solution:
Create a role in Account B, let's say, AccountB_Bucket_access_role with a policy that allows reading the bucket. Add Pod-Role as a trusted entity to it.
Add a policy to Pod-Role that allows switching to AccountB_Bucket_access_role, i.e. the sts:AssumeRole action.
Create an AWS profile in the pod, let's say custom_profile, with role_arn set to the ARN of AccountB_Bucket_access_role.
While deploying the flink pod, set AWS_PROFILE=AccountB_Bucket_access_role.
QUESTION: Given the above, whenever the Flink app needs to talk to the S3 bucket, it first assumes the AccountB_Bucket_access_role role and is able to read the bucket. But setting AWS_PROFILE actually switches the role for the whole Flink app, so all the Pod-Role permissions are lost, and they are required for the Flink app to function properly.
Is there a way that this custom_profile is only used when reading the S3 bucket, and the app switches back to Pod-Role after that?
import org.apache.flink.api.java.io.TextInputFormat
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.source.FileProcessingMode
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

// AppUtils and config come from the application's own code.
val flinkEnv: StreamExecutionEnvironment = AppUtils.setUpAndGetFlinkEnvRef(config.flink)
val textInputFormat = new TextInputFormat(new Path(config.path))

// Continuously watch the S3 path and re-read it at the configured interval.
flinkEnv
  .readFile(
    textInputFormat,
    config.path,
    FileProcessingMode.PROCESS_CONTINUOUSLY,
    config.refreshDurationMs
  )
This is what I use in the Flink job to read the file from S3.
Never mind; we can configure a role in one account to access a particular bucket in another account (see "Access Bucket from another account").
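A sketch of what that looks like on the Account B side (the bucket name and account ID are placeholders); Pod-Role also still needs an IAM policy in Account A allowing these S3 actions on the bucket:

import boto3, json

# Sketch only: run with Account B credentials; ACCOUNT_A_ID and the bucket name are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPodRoleRead",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::ACCOUNT_A_ID:role/Pod-Role"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::account-b-bucket",
            "arn:aws:s3:::account-b-bucket/*",
        ],
    }],
}
boto3.client("s3").put_bucket_policy(Bucket="account-b-bucket", Policy=json.dumps(policy))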
Assuming I'm on an EC2 instance that is configured with access to the destination bucket, is there a way to use keys for the source S3 bucket and do a copy, something like this?
aws s3 cp s3://<Access key>:<secret key>#<source bucket folder> <destination bucket folder>
The AWS CLI does not support specifying two different accounts to access buckets.
You do have options:
Use the credentials for the destination bucket. In the account that owns the source bucket, add a bucket policy granting your destination account read access to the bucket.
If you cannot grant the destination account read access to the source bucket, create your own client using your favorite language and the AWS SDK. Initialize two client handles, one for each account, then do a read/write copy operation. This is very easy to do in Python with boto3, as sketched below.
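A rough sketch of that two-client copy (the profile and bucket names are placeholders; large objects would be better streamed or copied in parts):

import boto3

# One session per account; the profile names stand in for each account's credentials.
src = boto3.Session(profile_name="source-account").client("s3")
dst = boto3.Session(profile_name="dest-account").client("s3")

SRC_BUCKET, DST_BUCKET = "source-bucket", "destination-bucket"

# List every object in the source bucket, download it with the source credentials,
# and upload it with the destination credentials.
paginator = src.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SRC_BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        body = src.get_object(Bucket=SRC_BUCKET, Key=key)["Body"].read()  # whole object in memory
        dst.put_object(Bucket=DST_BUCKET, Key=key, Body=body)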
Can you please help me see the list of buckets owned by me only? I have tried the following API, but it shows all the buckets available in S3. I want to see only the buckets owned by me, not others.
aws s3api list-buckets
An Amazon S3 bucket is always owned by an AWS account. Individual IAM Users do not own buckets.
When you issue a command such as aws s3 ls or aws s3api list-buckets, you will only see a list of buckets owned by the account. (It will not list buckets owned by a different account.)
Therefore, that is the correct command.
I'm trying to make a copy of an S3 bucket on AWS and it is a real pain.
My reference s3 bucket is: s3://original
Duplicated ver. of this bucket: s3://original-copy
My goal is:
generate kubernetes.tf file with kops create cluster ... => DONE
kops is kind enough to create --state=s3://original => DONE
now I want to create a new S3 bucket with exactly the same content as s3://original, just with a different name, s3://original-copy => PROBLEM
Command
aws s3 cp s3://original s3://original-copy --recursive --acl bucket-owner-full-control
Even though the bucket is duplicated, it seems like there is some problem with S3 bucket permissions.
Then I am adjusting values in the terraform/data folder with a new reference to s3://original-copy, as well as in the
s3://original-copy/cluster_name/config
s3://original-copy/cluster_name/cluster.spec
files.
But there is a problem with permissions all the time.
Error:
s3context.go:145] unable to get bucket location from region "us-east-1"; scanning all regions: AccessDenied: Access Denied
Idea
The main idea is that kops will generate:
the kubernetes.tf file and the data folder with the proper files (all within the terraform folder), just once
the --state=s3://original bucket, just once
Once we have some example (patterns) of the S3 state and kubernetes.tf, we would stop using kops.