I'm trying to create a cross-account CodePipeline and there is no appropriate documentation for this scenario.
Account A has an S3 bucket with a YAML file.
Account B will have the CodePipeline.
Account B's CodePipeline should use S3 from Account A as the source in the source stage and a CloudFormation deploy action in the deploy stage. Can someone please help with what roles and other requirements have to be fulfilled to achieve this?
There are two things that you need to make this work.
Your bucket needs to use a customer managed KMS key, not the default one. This is because you can't grant another account permission to use the default key, meaning the other account can't decrypt the data in the bucket. You need to grant permission in the key policy to allow the other account to decrypt using that key. Ideally not to the entire account, but to the role that is being used in your CodePipeline source step.
You have to grant access to the other account in your S3 bucket policy. Again, ideally not to the entire account, but to the role that is being used in your CodePipeline source step. A sketch of both grants follows below.
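As a minimal sketch only, assuming hypothetical names throughout (ACCOUNT_B_ID, PipelineSourceRole, and account-a-template-bucket are placeholders for your own values), the two grants could look roughly like this in boto3:

import json
import boto3

# Run with Account A credentials. The principal should be whichever role
# your CodePipeline source action actually runs as in Account B.
s3 = boto3.client("s3")

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPipelineSourceRead",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::ACCOUNT_B_ID:role/PipelineSourceRole"},
        "Action": ["s3:GetObject", "s3:GetObjectVersion", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::account-a-template-bucket",
            "arn:aws:s3:::account-a-template-bucket/*",
        ],
    }],
}
# Attach the policy to the source bucket in Account A.
s3.put_bucket_policy(Bucket="account-a-template-bucket",
                     Policy=json.dumps(bucket_policy))

# The customer managed key in Account A needs a statement like this merged
# into its key policy (key policies are replaced as a whole document):
kms_statement = {
    "Sid": "AllowPipelineSourceDecrypt",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::ACCOUNT_B_ID:role/PipelineSourceRole"},
    "Action": ["kms:Decrypt", "kms:DescribeKey"],
    "Resource": "*",
}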
I have a project that does some of this using Organizations. It isn't exactly what you want, in that the CodePipeline in my project lives in Account A and the pipeline runs CloudFormation (or other things) in Account B. So in my case only CloudFormation is reaching back to the bucket in Account A. I don't think it should be a big change to modify it to work the way you need it to work. My project is largely based on this AWS article.
I'm updating accountB's S3 bucket from accountA's CodeBuild project.
The problem is that all the objects uploaded from accountA's CodeBuild are denied access.
My purpose is to use this S3 bucket for static hosting.
I set up all the requirements for static hosting, and it works fine when I upload a simple index.html manually.
But the individual objects from accountA's CodeBuild project show the error attached below.
ex) index.html properties & permission
I checked the Disable artifact encryption option in the artifact settings of the CodeBuild project,
and also set, in the override params,
encryptionDisabled: true
This CodeBuild project works fine when I save the output to S3 in the same account
(the S3 static hosting site in accountA is working well),
but I get the access issue with accountB's S3 bucket.
Before trying to touch the KMS policy, I want to know if I missed some configuration in CodeBuild.
Please advise me on what I have to do or what I missed...
Thanks.
(+)
I just found a similar question and answer, with help from petrch (thanks!), and am trying to apply it:
CodeBuild upload build artifact to S3 with ACL
Upload the objects with the bucket-owner-full-control canned ACL, otherwise the objects will still be "owned" by the source account.
See:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html
It says:
Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource. It defines which AWS accounts or groups are granted access and the type of access. When a request is received against a resource, Amazon S3 checks the corresponding ACL to verify that the requester has the necessary access permissions.
When you create a bucket or an object, Amazon S3 creates a default ACL that grants the resource owner full control over the resource. This is shown in the following sample bucket ACL (the default object ACL has the same structure)
So the object gets its ACL from the source account. It's not very obvious, but you can provide an ACL during the PutObject action from the source account, so it can still be just one call.
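For illustration, a minimal boto3 sketch run from the source account; the bucket name and key below are placeholders:

import boto3

s3 = boto3.client("s3")  # credentials of the source account (accountA)

# Upload and hand full control to the destination bucket's owner in one
# call, via the canned ACL described above.
with open("index.html", "rb") as f:
    s3.put_object(
        Bucket="accountb-static-site-bucket",
        Key="index.html",
        Body=f,
        ContentType="text/html",
        ACL="bucket-owner-full-control",
    )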
Background
I am working on a task to generate an AWS QuickSight report in Account B from AWS Systems Manager Inventory data in an Account A S3 bucket (s3 sync).
I have successfully added all the resource sync data into the cross-account (Account A) S3 bucket using SSM resource data sync. The bucket is encrypted using an AWS KMS key (the key is located in Account A), and the same key has been used in the resource data sync in all accounts to add data to the cross-account bucket.
Moreover, I am using Athena in Account B to create a sample database and schemas from the S3 sync data.
Problem
Athena can successfully create the database and schemas in Account B, and it also adds metadata from Account B to the Account A S3 bucket, but it keeps showing access denied when I try "preview table".
Error
Your query has the following error(s):
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 3F5896D43C82733B; S3 Extended Request ID
(Path: s3://bucket/AWS:Application/accountid=../region=us-east-1/resourcetype=ManagedInstanceInventory/i-..json)
Athena and QuickSight work in the account where the bucket and key are located, but I want to keep the bucket in a different account.
I am trying to implement Best practices for patching your AWS and hybrid environment, but with a different account and with a KMS key.
I have followed all the documentation about Athena cross-account access with KMS, but no luck. I also added a decrypt IAM policy to the QuickSight service role.
My IAM role has full admin access. It uses an assumed role.
Can someone guide me on this issue? Thank you.
If you create a resource data sync for an AWS Region that came online with the Asia Pacific (Hong Kong) Region (ap-east-1) or later, then you must enter a region-specific service principal entry in the SSMBucketDelivery section, for example ssm.ap-east-1.amazonaws.com (see the sketch below).
https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-inventory-datasync.html
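Based on that page, the delivery statements of the sync bucket policy look roughly like the following. This is a sketch only, and the bucket name is a placeholder:

import json

sync_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SSMBucketPermissionsCheck",
            "Effect": "Allow",
            "Principal": {"Service": "ssm.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::account-a-sync-bucket",
        },
        {
            "Sid": "SSMBucketDelivery",
            "Effect": "Allow",
            "Principal": {"Service": [
                "ssm.amazonaws.com",
                "ssm.ap-east-1.amazonaws.com",  # region-specific entry
            ]},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::account-a-sync-bucket/*",
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}
print(json.dumps(sync_bucket_policy, indent=2))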
I'm trying to list and download files from a Requester Pays S3 bucket:
aws s3 ls --request-payer requester s3://requester-pays-bucket/
I'm running this command from an EC2 instance, but it fails:
Unable to locate credentials. You can configure credentials by running "aws configure".
The error is clear, however I'm still a little surprised. The goal of a Requester Pays bucket is to offload the cost of S3 data transfers to the requester. Since I'm initiating my request from EC2, my identity as requester should already be clear to S3, no?
Can S3 or the AWS CLI somehow automatically pick up my identity from the EC2 instance I'm running on? Or do I have to provide credentials in some explicit way?
You have to explicitly provide the credentials of an IAM user which has access to your S3 bucket. Just go to the IAM dashboard of your AWS account and create a new user with programmatic access to S3. After this you will be provided with a secret access key and an access key ID.
Then log in to your EC2 instance and run the command "aws configure" in your terminal. You will be asked for the access key ID, the secret access key, and a default region if you want to provide one. Just enter these details and you are good to go with your command.
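Once credentials are available (from aws configure as above, or from an instance profile attached to the EC2 instance), the same Requester Pays request can be made from code as well. A small sketch, with the bucket name as a placeholder:

import boto3

# boto3 resolves credentials automatically: environment variables, the
# aws configure files, or an attached instance profile.
s3 = boto3.client("s3")
resp = s3.list_objects_v2(
    Bucket="requester-pays-bucket",
    RequestPayer="requester",  # acknowledge that you pay for the request
)
for obj in resp.get("Contents", []):
    print(obj["Key"])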
I want to create a Python Lambda function that takes uploaded S3 images and creates a thumbnail version of them.
I have permission problems where I cannot get access to my bucket. I understand that I need to create a bucket policy, but I don't understand how I can make a policy which works for a Lambda request performing the thumbnail process.
It sounds like you want to do the following:
Fire a Lambda function whenever something is uploaded to your bucket
Read a file from the bucket
Write a (thumbnail) file back to the bucket
You'll need 3 different permissions to do that:
The S3 service will need permission to invoke your lambda function (this is done for you when you add an S3 event source via the AWS Lambda console).
The Lambda execution role (the one selected on the Configuration tab of the Lambda console) will need read/write access to call S3. You can generate a policy for this on the policy generator by selecting IAM Policy from the drop-down and then selecting the S3 permissions you need.
For added security, you can set a bucket policy on S3 to only allow the Lambda function to access it. You can generate this from the policy generator as well by selecting S3 Policy. You would then enter lambda.amazonaws.com as the Principal. A sketch of the function itself, showing where the read and write permissions come into play, follows below.
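A minimal sketch of such a function, assuming Pillow is bundled with the deployment package; the thumbnails/ output prefix is a made-up convention, not anything required:

import io
import urllib.parse

import boto3
from PIL import Image  # Pillow must be included in the deployment package

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Read the original image (needs s3:GetObject in the execution role).
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        img = Image.open(io.BytesIO(body))
        img.thumbnail((128, 128))  # resize in place, keeping aspect ratio
        buf = io.BytesIO()
        img.save(buf, format=img.format or "JPEG")
        # Write the thumbnail back (needs s3:PutObject). Writing to the same
        # prefix that triggers the function would cause an invocation loop,
        # hence the separate prefix.
        s3.put_object(Bucket=bucket, Key="thumbnails/" + key,
                      Body=buf.getvalue())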
For example, I have a website with User A and B.
Both of them can log in to my website using my own login system.
How do I make certain files from S3 accessible only to User A once he logs in to my website?
Note: I saw "Permissions" in the AWS Management Console with an "Authenticated Users" option, but it seems that it's meant for other S3 users only. Is it something I can use to achieve my goal?
You need to use Amazon IAM. You can define what part of any S3 bucket user A can see, as well as user B, and neither will have access to 'anything' else. In general you should never use the account ID and secret for anything; always make an IAM user that has just what's needed to run your stuff. The admin user likely does not need EC2, SQS, SimpleDB, etc.
Federated access is great for allowing arbitrary users to sign in to your website and only be granted access for, say, 12 hours. They get special AWS credentials for that access that will work only on the section of S3 you let them look at.
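As a sketch of that federation flow using STS GetFederationToken, called server-side when User A logs in; the bucket name and the userA/ prefix are placeholders:

import json
import boto3

# GetFederationToken must be called with long-term IAM user credentials
# (not a role), typically by your web backend.
sts = boto3.client("sts")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::my-site-files/userA/*",
    }],
}
token = sts.get_federation_token(
    Name="UserA",
    Policy=json.dumps(policy),
    DurationSeconds=43200,  # 12 hours
)
creds = token["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken

Your site then uses these temporary credentials on User A's behalf (or hands them to the client); they expire on their own and cannot reach anything outside the userA/ prefix.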