I noticed that AWS CodePipeline creates its S3 folder that has block all public access set to off

I noticed that AWS CodePipeline creates its S3 folder that has block all public access set to off.
I assume that means anyone can see the code in this folder that is deployed using this pipeline?
If true, that is bad!
How can I make it private by default so only my CodePipeline can see it?

The artifact bucket created by CodePipeline is not public. On the S3 console, you can confirm it says "Objects can be public", which translates to: [1]
The bucket is not public, but anyone with the appropriate permissions
can grant public access to objects
This can also be confirmed via the ACL tab under the bucket details in the S3 console. The default bucket policy also denies unencrypted uploads and insecure (non-TLS) connections.
This means that while the bucket itself is not public, individual objects can be made public with an appropriate ACL.
'Block all public access' is a higher-level, more restrictive setting that blocks public access to all of your objects at the bucket or account level.
With it enabled, even objects in the bucket that have public ACLs are blocked.
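If you want to be certain nothing in the artifact bucket can ever be made public, you can turn Block Public Access on yourself. A minimal sketch with the AWS CLI, using a placeholder bucket name (your real CodePipeline artifact bucket name will differ):

```shell
# Placeholder name; substitute the artifact bucket CodePipeline created for you.
BUCKET=codepipeline-us-east-1-123456789012
# Enable all four Block Public Access settings on the bucket.
aws s3api put-public-access-block \
    --bucket "$BUCKET" \
    --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```

CodePipeline only needs IAM-based access to the bucket, so enabling this setting does not break the pipeline.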
[1] https://docs.aws.amazon.com/AmazonS3/latest/user-guide/s3-user-guide.pdf

AWS S3 bucket created with force_destroy=true fails to delete with Access Denied via terraform

I create an s3 bucket via terraform for the purpose of storing VPC Flow Logs:
resource "aws_s3_bucket" "bucket" {
  bucket        = local.bucket_name
  force_destroy = true
  tags          = var.tags
}
After the bucket and the flow log service are created, there are a few entries under "/AWSLogs/...".
After I remove the flow log service, I attempt terraform destroy, but it fails with the following entry, one for each object:
deleting: S3 object (AWSLogs/.../...98d659c.log.gz) version (null): AccessDenied: Access Denied
There are no bucket policies, because they get deleted first.
The ACLs give the bucket owner and the S3 log delivery group full access; everything else is turned off, and the owner is set to data.aws_canonical_user_id.current.id.
The ACL permissions are not quite enough on their own. The IAM role you are running terraform destroy with also needs the s3:DeleteObject* permissions.
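As a sketch, a minimal statement to add to that role's policy might look like the following; the bucket name is a placeholder, and s3:DeleteObjectVersion matters if the bucket keeps object versions:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
    "Resource": "arn:aws:s3:::my-flow-logs-bucket/*"
  }]
}
```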

S3 objects deny access - These objects came from another account's AWS CodeBuild project

(+)
I just found a similar question and answer with help from petrch (thanks!) and am trying to apply it:
CodeBuild upload build artifact to S3 with ACL
I'm updating accountB's S3 bucket from accountA's CodeBuild project.
The problem is that access is denied to all of the objects uploaded from accountA's CodeBuild.
My purpose is to use this S3 bucket for static hosting.
I set up all the requirements for static hosting, and it works fine when I upload a simple index.html manually.
But the individual objects from accountA's CodeBuild project show the error attached below.
ex) index.html properties & permission
I checked the Disable artifact encryption option in the artifact settings of the CodeBuild project,
and also set the override param
encryptionDisabled: true
This CodeBuild project works fine when I save the output to an S3 bucket in the same account.
(The S3 static hosting site in accountA is working well.)
But I get the access issue in accountB's S3.
Before touching the KMS policy, I want to know if I missed some configuration in CodeBuild.
Please advise me on what I have to do or missed...
Thanks.
Upload the objects with bucket-owner-full-control canned ACL, otherwise the objects will be still "owned" by the source account.
See:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html
It says:
Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource. It defines which AWS accounts or groups are granted access and the type of access. When a request is received against a resource, Amazon S3 checks the corresponding ACL to verify that the requester has the necessary access permissions.
When you create a bucket or an object, Amazon S3 creates a default ACL that grants the resource owner full control over the resource. This is shown in the following sample bucket ACL (the default object ACL has the same structure).
So the object keeps the default ACL of the source account. It's not very obvious, but you can provide an ACL during the PutObject action from the source account, so it can still be just one call.
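A minimal sketch of that upload with the AWS CLI; the bucket name and file path here are placeholders, not from the original post:

```shell
# Placeholder: accountB's static-hosting bucket.
BUCKET=accountb-static-site
# Upload with the bucket-owner-full-control canned ACL so that accountB
# (the bucket owner) gets full control of the new object, not just accountA.
aws s3 cp ./build/index.html "s3://$BUCKET/index.html" \
    --acl bucket-owner-full-control
```

In a CodeBuild buildspec the same flag can be passed to whatever command performs the upload.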

How to turn off encryption for an object in AWS s3 bucket via CLI?

I'm trying to verify my S3 bucket in Google Console to investigate why its URL was marked as potentially harmful in Google Safe Browsing.
The only verification method possible with an S3 bucket is the HTML file. As far as I understand, for the verification I need the file to be publicly accessible without any additional headers, authorization, URL query and such, at a direct URL like https://{bucket}.s3.amazonaws.com/google{personal_id}.html
I allowed public access to the bucket objects with an ACL, but the file is still not downloadable because of the KMS default encryption, with the following error:
<Error>
<Code>InvalidArgument</Code>
<Message>
Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.
</Message>
<ArgumentName>Authorization</ArgumentName>
<ArgumentValue>null</ArgumentValue>
<RequestId>D99DE7A5CA2AAFC3</RequestId>
<HostId>
3Wot7SGS0sVmiaPz3Fj5riyWKzPxCRpYaQEk13FUjOdS0O4881txJ0Ze1mkmfIgZqon7te+8eWI=
</HostId>
</Error>
I found how to turn off KMS encryption for an object, without touching the bucket's default encryption, in the AWS Console:
I can go to the object's properties and select encryption "None". After that the object is finally downloadable.
But I can't find how to do the same in the AWS CLI. The put-object command only allows setting the encryption to AES256 or aws:kms; it doesn't allow "none".
So the question is: how do I change an existing S3 object's encryption to "None" using the AWS CLI?
The only way to change an existing S3 object's encryption to "None" is to re-PUT (copy) the object without encryption.
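A sketch of the in-place copy with the AWS CLI, using placeholder bucket and key names (substitute your own verification file):

```shell
# Placeholders; substitute your own bucket and object key.
BUCKET=my-site-bucket
KEY=google-site-verification.html
# Copy the object over itself with no --server-side-encryption option;
# the new copy is written unencrypted, assuming the bucket has no default
# encryption configured. --metadata-directive REPLACE is required because
# an in-place copy must change something about the object.
aws s3api copy-object \
    --copy-source "$BUCKET/$KEY" \
    --bucket "$BUCKET" \
    --key "$KEY" \
    --metadata-directive REPLACE
```

Alternatively, specifying `--server-side-encryption AES256` (SSE-S3) also avoids the SigV4 requirement, since only SSE-KMS objects need signed requests.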

S3 Access Denied with boto for private bucket as root user

I am trying to access a private S3 bucket that I've created in the console with boto3. However, when I try any action, e.g. listing the bucket contents, I get:
boto3.setup_default_session()
s3Client = boto3.client('s3')
blist = s3Client.list_objects(Bucket=f'{bucketName}')['Contents']
ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
I am using my default profile (no need for IAM roles). The Access Control List in the console states that the bucket owner has list/read/write permissions. The canonical ID listed as the bucket owner is the same as the canonical ID I get when I go to 'Your Security Credentials'.
In short, it feels like the account permissions are OK, but boto is not logging in with the right profile. In addition, running similar commands from the command line, e.g.
aws s3api list-buckets
also gives Access Denied. I have no problem running these commands at work, where I have a work log-in and IAM roles. It's just running them on my personal 'default' profile.
Any suggestions?
It appears that your credentials have not been stored in a configuration file.
You can run this AWS CLI command:
aws configure
It will then prompt you for the Access Key and Secret Key, and will store them in the ~/.aws/credentials file. That file is automatically used by the AWS CLI and boto3.
It is a good idea to confirm that it works via the AWS CLI first, then you will know that it should work for boto3 also.
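For reference, aws configure writes something like the following (the values shown are placeholders, not real keys):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```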
I would highly recommend that you create IAM credentials and use them instead of root credentials. It is quite dangerous if the root credentials are compromised. A good practice is to create an IAM User for each specific application and limit the permissions granted to it. This avoids situations where a programming error (or a security compromise) could lead to unwanted behaviour (e.g. resources being used or data being deleted).

AWS Lambda working with S3

I want to create a Python Lambda function to take uploaded s3 images and create a thumbnail version of them.
I have permission problems where I cannot get access to my bucket. I understand that I need to create a bucket policy. What I don't understand is how to write a policy that works for a Lambda request performing the thumbnail process.
It sounds like you want to do the following:
Fire the lambda whenever something is uploaded to your bucket
Read a file from the bucket
Write a (thumbnail) file back to the bucket
You'll need 3 different permissions to do that:
The S3 service will need permission to invoke your lambda function (this is done for you when you add an S3 event source via the AWS Lambda console).
The lambda execution role (the one selected on the Configuration tab of the Lambda Console) will need read/write access to S3. You can generate a policy for this with the policy generator by selecting IAM Policy from the drop-down and then selecting the S3 permissions you need.
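As a sketch, a policy for the execution role might look like the following; the bucket name is a placeholder for your image bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": "arn:aws:s3:::my-image-bucket/*"
  }]
}
```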
For added security, you can set a bucket policy on S3 to only allow the lambda function to access it. You can generate this from the policy generator as well by selecting S3 Bucket Policy. The Principal should be the Lambda function's execution role ARN; using the generic lambda.amazonaws.com service principal would not restrict access to just your function.
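For illustration, such a bucket policy might look like this; the account ID, role name, and bucket name are all placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::123456789012:role/my-thumbnail-lambda-role"},
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": "arn:aws:s3:::my-image-bucket/*"
  }]
}
```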